[email protected]@[email protected]^aDepartamento de Matemáticas, Facultad de Ciencias Universidad Autónoma de Baja California, A.P. 1880, C.P. 22860, Ensenada B. C. México ^bDivisión de Ciencias e Ingenierías Campus León, Universidad de Guanajuato, A.P. E-143, C.P. 37150, León, Guanajuato, México. A generalized entropy arising in the context of superstatistics is obtained for an ideal gas. The curvature scalar associated to the thermodynamic space generated by this modified entropy is calculated using two formalisms ofthe geometric approach to thermodynamics.Using the curvature/interaction hypothesis of the geometric approach to thermodynamic geometry it is found thatas a consequence of considering a generalized statistics,an effective interaction arises but the interaction is not enough to give a phase transition. This generalized entropy seems to be relevant in confinement or in systems with not so many degrees of freedom, so it could be interesting to use such entropies to characterize the thermodynamics of small systems.Thermodynamic geometry for a non-extensive ideal gas J. Torres-Arenas^b 2017-06-14 ====================================================§ INTRODUCTIONBecause of the existence of anomalous systems which do not seem to obey the rules of common statisticsgenerally associated to non equilibrium processes, a more general statistics has been proposedbased on superstatistics <cit.> which considers large fluctuations of intensive quantities <cit.>. We review here this formulation in which the intensive fluctuating quantity is the temperature.This fluctuation gives rise to a certain probability distribution characterized by a generalized Boltzmann factor.We can, in principle, associate an entropy forevery probability distribution and in this context the Boltzmann-Gibbs entropy corresponds to the usual Boltzmann factor. It is, however, possible to obtain other generalized expressions for entropies associated to different probability distributions <cit.> depending on one orseveral parameters <cit.>. In <cit.>, it was shown how to generate an entropy depending only on the probability. The particular entropy considered here arises from a generalized gamma distribution depending only on the probability p_l.This entropy has several interesting features and it seems to be relevant for particular thermodynamic systems like confined systems <cit.> and in this context we find an interesting and necessary application of such generalized entropies.The quantum version of this entropy which is a generalization of the Von Neumann entropy arises by means of a natural generalizationof the replica trick <cit.>.In the formalism of the geometric approach to thermodynamics, a geometric structure is given for usual thermodynamic systems by means ofRiemannian geometry <cit.>. Particularly in the context of the so called geometrothermodynamics <cit.>,the geometrical relevant quantities, like the thermodynamic metric, are invariant with respect to Legendre transformations resembling the fact thatthe thermodynamic information does not depend on what fundamental relation (thermodynamic potential) is used. In this formalism, the representation invariance of the metric and its corresponding curvature scalar has been proved for simple thermodynamic systems <cit.>. 
On the other hand, inspired by fluctuation theory, a distance between points in a thermodynamic space can also be defined <cit.>, and we can associate to this space a thermodynamic metric, a corresponding Riemann tensor, and consequently a curvature scalar R. Both approaches coincide in the physical interpretation of the curvature scalar as a manifestation of the existence of intermolecular interactions. When the curvature associated with the corresponding thermodynamic metric is non-zero, an interaction of some nature is present <cit.>; this is known as the curvature/interaction hypothesis. Another physical aspect that the scalar reveals, which has been proven for several thermodynamic systems, is the existence of first-order phase transitions: the curvature scalar diverges at some point if a phase transition exists. For some systems, the point where the scalar diverges happens to be the critical point where the phase transition occurs <cit.>. In the thermodynamic geometry of fluctuation theory, the sign of the curvature scalar also provides additional information. For some systems it is clear that the sign of R represents the kind of interaction, being attractive for a negative scalar and repulsive for a positive one <cit.>. The sign of R can also be associated with the bosonic or fermionic nature of the thermodynamic system; the Bose and Fermi ideal gases are a clear example of this, with R < 0 in the first case and R > 0 in the second <cit.>. For some other systems a change of sign in the curvature scalar appears, from negative to positive or the other way around. These cases generally arise where a statistics different from that of Boltzmann is considered <cit.>. Following the vast literature on the thermodynamic geometry of the two approaches considered here, we find that the sign interpretation is clearer in the formalism of G. Ruppeiner <cit.>, but even so, it is not at all clear that this interpretation is valid for all thermodynamic systems.

In this work we particularly consider the curvature of a non-extensive ideal gas characterized by a generalized non-extensive entropy. The particular entropy we use depends only on the probability distribution and arises in the realm of superstatistics <cit.>. In order to gain better insight into the physics behind the curvature scalar of our thermodynamic system, we calculate the curvature scalar using the two formalisms mentioned earlier. We will call the scalar calculated following the formalism of <cit.> the geometrothermodynamic scalar, and the scalar calculated following <cit.> the fluctuation theory scalar. We will find that the particular entropy (statistics) we propose <cit.> modifies the geometric structure of the generalized thermodynamic space considered, namely a generalized ideal gas, giving rise to the appearance of an effective interaction.

The paper is organized as follows: First, in Section II we explain how our modified entropy, and its associated Boltzmann factor, arises by assuming a particular probability distribution. In Section III we briefly introduce the formalism of geometrothermodynamics developed by H. Quevedo <cit.> and describe how the thermodynamic metric is calculated. In the same section we also introduce the thermodynamic metric in the formalism of G. Ruppeiner <cit.>.
We calculate the curvature scalar in both formalisms to further analyze and compare the thermodynamic information contained in the scalars using the interpretation of both formalisms. In Section IV we discuss the interpretation of both scalars, and in Section V we conclude and present the main results of our work.

§ GENERALIZED ENTROPIES DEPENDING ONLY ON THE PROBABILITY DISTRIBUTION

The Boltzmann factor depending on the energy E of a microstate associated with a local cell of average temperature 1/β is given by

B(E) = ∫ f(β) e^{-βE} dβ,

and different distributions f(β) lead to different Boltzmann factors. Following the procedure stated in <cit.> it is possible, in principle, to associate a modified entropy with every Boltzmann factor. As an example, for the distribution f(β) = δ(β - β_0) the usual Boltzmann factor is recovered, and from this the Boltzmann-Gibbs entropy follows directly <cit.>. In <cit.> a Gamma distribution of the form

f_{p_l}(β) = 1/(β_0 p_l Γ(1/p_l)) (β/(β_0 p_l))^{(1-p_l)/p_l} e^{-β/(β_0 p_l)}

was proposed, where, by maximizing the appropriate information measure, the parameter p_l can be identified with the probability, and β_0 is the average inverse temperature. This distribution yields the Boltzmann factor

B_{p_l}(E) = (1 + p_l β_0 E)^{-1/p_l},

which leads to the following generalized entropy <cit.>:

S = k ∑_{l=1}^{Ω} (1 - p_l^{p_l}).

Associated with this generalized entropy there is a generalized H function,

H = ∫ d^3p (e^{f ln f} - 1),

which can be shown to satisfy a generalized H-theorem <cit.>. Using a Maxwell distribution to calculate this new H function, keeping only the first-order correction and using the relation H = -S/(kV), it follows that

S_eff = -kN [ln(nλ^3) - 3/2] - (kV n^2 λ^3 / 2^{5/2}) [ln^2(nλ^3) - (3/2) ln(nλ^3) + 15/16],

where λ = h/√(2πmkT) can be identified with the mean thermal wavelength, k is the Boltzmann constant, V is the volume, and T is the absolute temperature. The authors in <cit.> study the thermodynamic properties of the corresponding generalized ideal gas. In this context, the analysis of response functions shows a first correction having a universal form, that is, the same functional correction to all thermodynamic quantities derived from the generalized equations of state. In order to obtain a thermodynamic potential, we assume that the conventional linear relation between internal energy and temperature holds. Within this approximation we obtain the following thermodynamic fundamental relation:

S = kN ln v + (3kN/2) ln(u/b) + 3kN/2 - (kN/2^{5/2}) (b^{3/2}/(u^{3/2}v)) [ln^2(b^{3/2}/(v u^{3/2})) - (3/2) ln(b^{3/2}/(v u^{3/2})) + 15/16],

where u = U/N, v = V/N, and b = 3h^2/(4πm). The first terms correspond to the usual entropy of the ideal gas, S = kN ln v + (3kN/2) ln(u/b) + 3kN/2. We notice that the Sackur-Tetrode expression for the entropy of the ideal gas, S = kN ln v + (3kN/2) ln(u/b) + 5kN/2, can be recovered by an ad-hoc fixing term, as originally proposed by Gibbs. It is not possible to recover the 5kN/2 term from the classical calculation we made, but this does not affect the further analysis, which involves derivatives of the entropy, and this constant term does not affect the final result. At this point we have to clarify that the linear relation between internal energy and temperature that we have assumed makes our calculations more accurate for low densities or high temperatures, and we will take this into account when interpreting the behavior of the curvature scalars.
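As a quick consistency check of the superstatistical construction above, the generalized Boltzmann factor B_{p_l}(E) = (1 + p_l β_0 E)^{-1/p_l} should be recovered by numerically integrating the Gamma distribution f_{p_l}(β) against e^{-βE}. The following Python sketch does exactly that; the parameter values are arbitrary illustrations, not values used in the paper:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma as Gamma

p_l, beta0 = 0.3, 1.0  # illustrative probability and mean inverse temperature

def f(beta):
    # Gamma distribution with shape 1/p_l and scale beta0*p_l, as in the text
    shape, scale = 1.0 / p_l, beta0 * p_l
    return beta**(shape - 1.0) * np.exp(-beta / scale) / (Gamma(shape) * scale**shape)

for E in (0.5, 1.0, 2.0):
    B_numeric, _ = quad(lambda b: f(b) * np.exp(-b * E), 0.0, np.inf)
    B_closed = (1.0 + p_l * beta0 * E)**(-1.0 / p_l)
    print(f"E={E}: numeric={B_numeric:.6f}, closed form={B_closed:.6f}")
```

The two columns agree to the precision of the quadrature, since the integral is the Laplace transform of the Gamma distribution.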
A previous thermodynamic analysis was made for a system characterized by a particular interaction: the authors in <cit.> considered a system of gas particles exposed to square-well and Lennard-Jones potentials. These potentials are well defined, and a respective Boltzmann factor B(E) = e^{-βU} can be associated with these systems. A further analysis with Monte Carlo simulations showed that, when considering the generalized statistics and the generalized entropy (<ref>), the resulting thermodynamics can be modeled with a classical Boltzmann factor but with an effective interaction, B(E) = e^{-βU_eff}, in the sense that the same form of the potential is recovered but with a redefinition of its parameters. For the particular interactions considered and for particular thermodynamic states, the contribution generated by the new statistics in the generalized potential turned out to be repulsive <cit.>, but the nature of the interaction depends in general on the thermodynamic states considered.

In the next section we move to the thermodynamic geometry formalism and calculate the scalar curvature R associated with the fundamental relation (<ref>). It is our purpose to investigate and interpret the physical information contained in R in two different approaches. We will calculate the geometrothermodynamic scalar <cit.> and the fluctuation theory scalar <cit.>. The main reason to make the calculation in these two different formalisms is that we are interested in seeing whether or not the two calculations show the same qualitative behavior, in order to make an accurate physical interpretation according to these two, in principle, different theories. The motivation and the origin of these two approaches are different, but they share some common points of view regarding the physical interpretation of R in terms of its behavior.

§ GENERALIZED THERMODYNAMIC METRIC AND CURVATURE

As mentioned earlier, in this section we calculate the thermodynamic metric and its corresponding curvature scalar associated with the entropy (<ref>) in two formalisms: i) using geometrothermodynamics, namely Quevedo's metric; in this formalism we will refer to the curvature scalar as R_G; ii) using fluctuation theory, namely Ruppeiner's metric; in this formalism we will refer to the curvature scalar as R_F.

§.§ Geometrothermodynamic approach

In this formalism, the way in which the metric of the thermodynamic space is calculated makes it an object invariant under Legendre transformations. This fact allows us to include in the geometrical description of thermodynamics the invariance under the choice of thermodynamic potential representation <cit.>. The thermodynamic potential is an essential part of the so-called contact manifold which represents the thermodynamic space, and its geometrical properties will be encoded in its related metric and the geometrical objects derived from it <cit.>. A contact manifold is characterized by three specific components (τ, Θ, G): a (2n+1)-dimensional differential manifold τ with tangent space T(τ); a differential 1-form Θ on the cotangent manifold T^*(τ) such that an existing field of hyperplanes ν satisfies ν = ker Θ; and a non-degenerate metric G on τ <cit.>. We can choose a coordinate patch on τ, Z^A = {Φ, E^a, I^a} with a = 1, ..., n. In the particular thermodynamic representations, Φ is identified with the thermodynamic fundamental relation, namely any thermodynamic potential containing all the thermodynamic information of the system. E^a and I^a correspond to the extensive and intensive variables, respectively. The general metric G is given by

G = (dΦ - I_a dE^a)^2 + Λ (E_a I_a)^{2k+1} dE^b dI^b,
where k is an integer and E_a = δ_ab E^b, I_a = δ_ab I^b. Λ is an arbitrary Legendre-invariant real function of the variables E^a and I^a. One special characteristic of thermodynamic systems is the fact that extensive and intensive variables are related among themselves through the equations of state. The functional relation among these variables is given in a geometric way by means of a harmonic map

φ : {E^a} → {Z^A(E^a)} = {Φ(E^a), E^a, I^a(E^a)}.

The space of coordinates ℰ is a submanifold of τ which is equipped with the non-degenerate metric G, so the pullback of the harmonic map φ^* induces a metric on ℰ, i.e., g = φ^*(G). Given the equilibrium conditions provided by the equations of state, the simplest thermodynamic metric for the entropy representation s(u,v), in terms of internal energy u and volume v, is given by

g = (u ∂s/∂u)^{-1} (∂²s/∂u²) du² + (v ∂s/∂v)^{-1} (∂²s/∂v²) dv² + [(u ∂s/∂u)^{-1} + (v ∂s/∂v)^{-1}] (∂²s/∂u∂v) du dv,

which can be computed straightforwardly if the fundamental relation s(u,v) is known. We should remark that particular values of Λ and k have been chosen in order to obtain the metric (<ref>). It is not clear that an arbitrary choice of Λ and k would display the same physical information in the curvature scalar, and it could also be possible that these kinds of general metrics (<ref>) are not in general invariant under infinitesimal Legendre transformations <cit.>. However, it has been shown that this metric is able to accurately predict the divergence of the curvature scalar at the critical point where the phase transition occurs for some ordinary thermodynamic systems <cit.>. In that sense, we will consider this metric a good one to infer whether our generalized system has some interaction or a phase transition, information that can be deduced from its corresponding curvature scalar. Nevertheless, we will compare the results with the curvature scalar obtained from the metric of the fluctuation theory approach (<ref>), which was constructed with no reference to Legendre invariance at all <cit.>.

We can now compute the components of the metric associated with the entropy (<ref>); these are given by

g_uu = -(3/2u²) [1 - (21√2/256)x + (17√2/32)x ln x + (5√2/16)x ln²x] / [3/2 - x(27√2/256 - (3√2/32)ln x - (3√2/16)ln²x)],

g_uv = g_vu = (x/uv) [3√2/256 - (15√2/32)ln x - (3√2/16)ln²x] / [1 - x(9√2/128 - (√2/16)ln x - (√2/8)ln²x)],

g_vv = -(1/v²) [1 - (5√2/64)x + (3√2/8)x ln x + (√2/4)x ln²x] / [1 - x(9√2/128 - (√2/16)ln x - (√2/8)ln²x)],

where we have introduced the dimensionless factor x = b^{3/2}/(u^{3/2}v). We notice that in the limit of low densities and high temperatures, that is, when x → 0, the entropy (<ref>) and the metric (<ref>) approach those corresponding to the ideal gas <cit.>. The curvature scalar for the metric (<ref>) in terms of x is then given by

R_G = -3·2^7 [I_0 - 2^3 I_1 ln x - 2^5 I_2 ln²x + 2^7 I_3 ln³x] / (G_0 D(x)³)
      - 3·2^7 [2^8 I_4 ln⁴x - 2^{12} I_5 ln⁵x - 2^{14} I_6 ln⁶x] / (G_0 D(x)³)
      + 3·2^7 [2^{16} I_7 ln⁷x - 2^{16} I_8 ln⁸x - 2^{19} I_9 ln⁹x] / (G_0 D(x)³)
      - 3·2^7 [2^{21} I_{10} ln^{10}x + 11·2^{23} I_{11} ln^{11}x - I_{12} ln^{12}x] / (G_0 D(x)³),

D(x) = (G_1 + G_2) ln x + G_3 ln²x + G_4 ln³x + G_5 ln⁴x,

where the I_i and G_i are dimensionless functions depending on x. We will analyze the behavior of this scalar in the Discussion section, where we show the plot of R_G. In order to gain better insight into the physical interpretation of the curvature scalar, in the next section we will also compute the curvature scalar in the fluctuation theory approach to thermodynamic geometry <cit.>.
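The components g_uu, g_uv, g_vv above follow from applying the metric (<ref>) to the fundamental relation (<ref>). As a minimal illustration of the procedure (a sketch, not the authors' own code), the following sympy snippet computes the metric components for the leading ideal-gas part of the entropy and reproduces the x → 0 limits quoted above:

```python
import sympy as sp

u, v, k, N, b = sp.symbols('u v k N b', positive=True)

# leading ideal-gas part of the fundamental relation s(u, v)
s = k*N*sp.log(v) + sp.Rational(3, 2)*k*N*sp.log(u/b) + sp.Rational(3, 2)*k*N

g_uu = (u*sp.diff(s, u))**-1 * sp.diff(s, u, 2)
g_vv = (v*sp.diff(s, v))**-1 * sp.diff(s, v, 2)
g_uv = ((u*sp.diff(s, u))**-1 + (v*sp.diff(s, v))**-1) * sp.diff(s, u, v)

print(sp.simplify(g_uu))  # -> -1/u**2, the x -> 0 limit of g_uu above
print(sp.simplify(g_vv))  # -> -1/v**2, the x -> 0 limit of g_vv above
print(sp.simplify(g_uv))  # -> 0, since g_uv carries an overall factor of x
```

Applying the same derivatives to the full relation (<ref>) yields, after simplification, the x-dependent components displayed above.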
§.§ Fluctuation thermodynamic geometry approach

We now turn to the calculation in the formalism developed by G. Ruppeiner <cit.>. The thermodynamic curvature in this formalism comes from fluctuation theory. The distance in the thermodynamic space arises from the calculation of the probability of a fluctuation of some variable x^α away from equilibrium in a system with fixed volume V, using Einstein's formula <cit.>:

Probability ∝ exp(-(V/2)(Δl)²),

where the differential of distance squared (Δl)² defines a thermodynamic information metric

(Δl)² = g_αβ Δx^α Δx^β,

with Δx^α = (x^α - x^α_0), where x^α_0 is the value of the fluctuating variable x^α at equilibrium. The thermodynamic space can in general be considered n-dimensional, and the x^α represent the independent fluctuating variables in this thermodynamic space. Considering the fluctuation of two variables, x^1 and x^2 are two independent extensive variables or mechanical parameters. The entropy can be considered a function of the internal energy and the number of particles. With fixed volume, the variables of the entropy s(u,n) = S/V are the densities u = U/V and n = N/V. The expansion of the conditional probability to second order allows one to define the following metric:

g_αβ = -(1/kV) ∂²s/∂x^α∂x^β,

where k is Boltzmann's constant and V is the fixed volume. We need to rewrite the entropy (<ref>) in terms of the densities u, n in order to calculate the curvature scalar associated with (<ref>); the entropy is given by

S = kn ln(1/n) + (3kn/2) ln(u/(bn)) + 3kn/2 - (kn/2^{5/2}) (b^{3/2}n^{5/2}/u^{3/2}) [ln²(b^{3/2}n^{5/2}/u^{3/2}) - (3/2) ln(b^{3/2}n^{5/2}/u^{3/2}) + 15/16],

and the components of the metric in this formalism are given by

g_uu = (3n/2u²V) [1 - (21√2/256)z + (17√2/32)z ln z + (5√2/16)z ln²z],

g_nn = (5/2nV) [1 - (23√2/256)z + (27√2/32)z ln z + (7√2/16)z ln²z],

g_un = -(3/2uV) [1 - (23√2/256)z + (27√2/32)z ln z + (7√2/16)z ln²z],

where g_un = g_nu. In the last expressions we have used the dimensionless variable z = b^{3/2}n^{5/2}/u^{3/2}, which is convenient to measure how the system departs from ideality. It is clear that in the limit z → 0 the entropy (<ref>) and the metric (<ref>) tend to the entropy and metric of the ideal gas <cit.>. With the metric (<ref>), the curvature scalar is in this case given by

R_F = 5·2^{10} z [Ī_0 - 2^7 Ī_1 ln z - 2^8 Ī_2 ln²z] / (n D(z)³)
      + 5·2^{10} z [-2^9 Ī_3 ln³z + 2^9 Ī_4 ln⁴z + 2^{14} Ī_5 ln⁵z] / (n D(z)³)
      + 5·2^{10} z [2^{15} Ī_6 ln⁶z + 7·2^{17} Ī_7 ln⁷z + 7²·2^{16} Ī_8 ln⁸z] / (n D(z)³),

D(z) = Ḡ_0 + 2^4 Ḡ_1 ln z + 2^5 Ḡ_2 ln²z + Ḡ_3 ln³z + Ḡ_4 ln⁴z,

where n = N/V, so R_F has dimensions of volume. Here also the Ī_i and Ḡ_i are dimensionless functions, depending in this case on z. In the next section we show the plots of these scalars and analyze them in terms of what their respective formalisms tell us about the interpretation of the behavior of R as a function of its independent variables.
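Because the curvature expressions above are lengthy, it is convenient to generate them directly from the metric with standard Riemannian-geometry machinery (Christoffel symbols, Riemann and Ricci tensors, scalar curvature). The sympy sketch below is an illustration of that machinery under the stated conventions, not the code used for the paper; applied to the ideal-gas part of s(u,n) it returns R = 0, consistent with the flat ideal-gas limit noted above:

```python
import sympy as sp

def scalar_curvature(g, xs):
    """Scalar curvature of a metric g (sympy Matrix) in coordinates xs."""
    n, ginv = len(xs), g.inv()
    # Christoffel symbols of the second kind: Gam[l][i][j] = Gamma^l_{ij}
    Gam = [[[sum(ginv[l, m]*(sp.diff(g[m, i], xs[j]) + sp.diff(g[m, j], xs[i])
             - sp.diff(g[i, j], xs[m])) for m in range(n))/2
             for j in range(n)] for i in range(n)] for l in range(n)]
    def riem(l, i, j, k):  # R^l_{ijk}
        r = sp.diff(Gam[l][i][k], xs[j]) - sp.diff(Gam[l][i][j], xs[k])
        return r + sum(Gam[l][j][m]*Gam[m][i][k] - Gam[l][k][m]*Gam[m][i][j]
                       for m in range(n))
    ric = sp.Matrix(n, n, lambda i, k: sum(riem(l, i, l, k) for l in range(n)))
    return sp.simplify(sum(ginv[i, k]*ric[i, k]
                           for i in range(n) for k in range(n)))

u, nd, b = sp.symbols('u n b', positive=True)
# ideal-gas part of s(u, n), in units k = V = 1
s = nd*sp.log(1/nd) + sp.Rational(3, 2)*nd*sp.log(u/(b*nd)) + sp.Rational(3, 2)*nd
g = -sp.hessian(s, (u, nd))   # Ruppeiner metric g_ab = -d^2 s / dx^a dx^b
print(scalar_curvature(g, (u, nd)))  # -> 0 for the ideal gas
```

Feeding the full entropy (<ref>) into the same routine produces, after expansion in z, the structure of R_F displayed above.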
§ RESULTS AND DISCUSSIONS

Before showing the plots of the scalars (<ref>) and (<ref>), we note the following: R_G is dimensionless, and R_F has dimensions of volume. We can see from the generalized entropy Eq. (<ref>) that the natural parameters m and h are involved in our expressions, so we introduce the Bohr radius a_0 as a parameter of length in order to construct dimensionless quantities and express the plots of the curvature scalars in definite units. With m, h, a_0 we can easily construct the quantities u_0 = h²/(m a_0²) of energy and v_0 = a_0³ of volume. Using these quantities we construct and plot the reduced curvature scalars R^*_G(u^*, v^*) and R^*_F(u^*, n^*), where u^* = u/u_0, v^* = v/v_0, n^* = n v_0, and R^*_F = R_F/v_0.

In order to define the entropy (<ref>) we have assumed a linear relation between internal energy and temperature, an approximation which is valid in the region of low densities and high temperatures, so we are confident that our results are more accurate in these regions, that is, when x → 0 and z → 0, which correspond to small deviations from the entropy of the ideal gas. Following this argument, we have plotted the scalars as functions of the inverse internal energy, so that the regime of high temperatures lies near the origin, as this is the region of validity of the approximation made. Figure 1 shows the reduced curvature scalar corresponding to Quevedo's metric, R^*_G, in terms of the inverse reduced internal energy. Figure 2 shows the reduced scalar R^*_F corresponding to Ruppeiner's metric. In both figures we fixed the reduced particle number density to n^* = 10^-4; a typical low-density value was chosen as n = 10^29, for which we have a reduced value of n^* = 10^-4. From these figures we see that both scalars consistently tend to the ideal gas scalar (zero scalar) in the limits x → 0 and z → 0. Figure 3 shows both scalars for a fixed lower density (n^* = 10^-5).

As mentioned earlier, the interpretation of R differs slightly between the two approaches, but they agree on at least two points. First, they coincide in the interpretation of a non-zero scalar as a manifestation of a non-vanishing intermolecular interaction; second, the divergence of R at some point signals the existence of a phase transition. We also found in the literature that the sign of R has a clear interpretation in the formalism of fluctuation theory and a less clear one in the geometrothermodynamic approach. The fact that the sign of R in Quevedo's formalism does not clarify the kind of interaction is because the metric itself depends on an arbitrary global multiplicative function <cit.>. According to the results of Ruppeiner, the sign of R_F shows the nature of the interaction, being attractive when R < 0 and repulsive when R > 0. In some cases it reflects the fact that the system is composed of bosonic (R < 0) or fermionic (R > 0) particles, so some quantum aspects of the thermodynamic system are also contained in R_F <cit.>. A generic discussion of the systems where the interpretation of the sign of R is clear can be found in <cit.>.

The scalars R_F and R_G of our thermodynamic system show a non-zero effective interaction, and in the appropriate limit both consistently approach the curvature scalar of the ideal gas, that is, R = 0. We have already clarified that a linear relation between internal energy and temperature has been assumed in the fundamental relations (<ref>), (<ref>), and in these expressions for the entropy it is clear that in the respective limits x → 0 and z → 0 the generalized entropy tends to that of the ideal gas. We also show, in Fig. 4 and Fig. 5, the dimensionless scalars R^*_G and R^*_F in terms of the parameters x and z, respectively. In these plots we can also see that the scalars have the right behavior in the ideal gas limits, and we notice that neither of these scalars has divergences in the valid range of the approximation; indeed, they do not diverge at any point in the whole range.
According to the interpretation of both formalisms, this implies that no phase transition exists. We thus have that an effective interaction appears as a consequence of assuming a different statistics, a statistics which, on the other hand, defines the generalized non-extensive entropy of our non-extensive ideal gas.

§ CONCLUSIONS

We calculated the curvature scalar of a particular thermodynamic system, one that corresponds to the generalization of an ideal gas generated by a modified entropy. The modified entropy arises from a generalized Boltzmann factor (generalized statistics), namely a modified probability distribution. We found that such a modification in the probability introduces an effective interaction. Using the curvature/interaction hypothesis of thermodynamic geometry, we found that this interaction shows up as a non-zero curvature scalar R. Two different formalisms were used, the geometrothermodynamic approach and the fluctuation theory one. Both formalisms recover the limit of a conventional ideal gas, characterized by a zero curvature scalar. Despite the non-zero value of the curvature R implying a non-zero interaction, no evidence of a phase transition is obtained, namely the curvature R does not diverge at any point. The similarity of the results is remarkable, even though the metrics (<ref>) and (<ref>) are in principle different. In the fluctuation theory approach, a formal interpretation can be given to the sign of R, describing the effective interaction as attractive when R < 0 and repulsive when R > 0. We therefore have, according to Fig. 2, the appearance of an effective interaction as a consequence of introducing a different statistics. Near the limit of the ideal gas the effective interaction is attractive, but there are also regions where the interaction can become repulsive. A more rigorous statistical analysis is needed to better understand non-equilibrium-inspired systems that obey a generalized statistics; nevertheless, the curvature scalar is a useful tool revealing well-defined characteristics of the thermodynamic system.

O. Obregón was supported by CONACyT Projects No. 257919 and 258982, Promep, and UG projects. J. Torres-Arenas acknowledges the University of Guanajuato for Grant 740/2016, Convocatoria Institucional de Investigación Científica, and J. L. López was supported by CONACyT Grant 329847 and a PRODEP postdoctoral grant.

References:

[Obregon] O. Obregón, Superstatistics and gravitation, Entropy 12, 2067 (2010).
[Beck] C. Beck and E. G. D. Cohen, Superstatistics, Physica A 322, 267 (2003).
[Tsallis2] C. Tsallis, Possible generalization of Boltzmann-Gibbs statistics, J. Stat. Phys. 52, 479 (1988).
[Tsallis1] C. Tsallis and A. M. C. Souza, Constructing a statistical mechanics for Beck-Cohen superstatistics, Phys. Rev. E 67, 026106 (2003).
[Obregon1] O. Obregón and A. Gil-Villegas, Generalized information entropies depending only on the probability distribution, Phys. Rev. E 88, 062146 (2013).
[Nana] N. Cabo-Bizet and O. Obregón, Generalized entanglement entropy and holography, arXiv:1507.00779 (2015).
[Calabrese] P. Calabrese and J. L. Cardy, Entanglement entropy and quantum field theory, J. Stat. Mech. 0406, P06002 (2004).
[Weinhold] F. Weinhold, Metric geometry of equilibrium thermodynamics, J. Chem. Phys. 63, 2484, 2488 (1975); 65, 559 (1976).
[Rup] G. Ruppeiner, Thermodynamics: A Riemannian geometric model, Phys. Rev. A 20, 1608 (1979).
[Rup1] G. Ruppeiner, Riemannian geometry in thermodynamic fluctuation theory, Rev. Mod. Phys. 67, 605 (1995).
[Quevedo1] H. Quevedo, Geometrothermodynamics, J. Math. Phys. 48, 013506 (2007).
[Quevedo3] H. Quevedo, F. Nettel, C. S. Lopez-Monsalvo and A. Bravetti, Representation invariant geometrothermodynamics: applications to ordinary thermodynamic systems, J. Geom. Phys. 81, 1 (2014).
[Quevedo2] H. Quevedo, A. Sánchez and A. Vázquez, Relativistic like structure of classical thermodynamics, Gen. Rel. Grav. 47, 36 (2015).
[Rup2] G. Ruppeiner, Thermodynamic curvature: pure fluids to black holes, J. Phys. Conf. Ser. 410, 012138 (2013).
[Rup3] G. Ruppeiner, Thermodynamic metric and black holes, Springer Proc. Phys. 153, 179 (2014).
[Janyszek] H. Janyszek and R. Mrugala, Riemannian geometry and stability of ideal quantum gases, J. Phys. A: Math. Gen. 23, 467 (1990).
[Torres] O. Obregón, J. Torres-Arenas and A. Gil-Villegas, H-theorem and thermodynamics for generalized entropies that depend only on the probability, arXiv:1610.06596.
[Monsalvo] D. García Pelaez and C. S. López Monsalvo, Infinitesimal Legendre symmetry in the geometrothermodynamics programme, J. Math. Phys. 55, 083515 (2014).
Felix Hülsmann^{1,2}, Stefan Kopp^2, Mario Botsch^1
^1 Computer Graphics Group
^2 Social Cognitive Systems Group
Bielefeld University, Germany

Automatic Error Analysis of Human Motor Performance for Interactive Coaching in Virtual Reality
===============================================================================================

In the context of fitness coaching or for rehabilitation purposes, the motor actions of a human participant must be observed and analyzed for errors in order to provide effective feedback. This task is normally carried out by human coaches, and it needs to be solved automatically in technical applications that are to provide automatic coaching (e.g., training environments in VR). However, most coaching systems only provide coarse information on movement quality, such as a scalar value per body part that describes the overall deviation from the correct movement. Further, they are often limited to static body postures or rather simple movements of single body parts. While there are many approaches to distinguish between different types of movements (e.g., between walking and jumping), the detection of more subtle errors in a motor performance is less investigated. We propose a novel approach to classify errors in sports or rehabilitation exercises such that feedback can be delivered in a rapid and detailed manner: Homogeneous sub-sequences of exercises are first temporally aligned via Dynamic Time Warping. Next, we extract a feature vector from the aligned sequences, which serves as a basis for feature selection using Random Forests. The selected features are used as input for Support Vector Machines, which finally classify the movement errors. We compare our algorithm to a well-established state-of-the-art approach in time series classification, 1-Nearest-Neighbor combined with Dynamic Time Warping, and show our algorithm's superiority regarding classification quality as well as computational cost.

§ INTRODUCTION

Coaching environments for motor learning have become a more and more popular research topic in the field of Virtual Reality (VR) <cit.>. They are promising in areas such as rehabilitation or fitness training. Obviously, high-quality feedback on the coachee's performance is crucial for the success of such systems. Therefore, an intelligent coaching system does not only have to detect which task — in the following called motor action — is executed. It also has to detect the specific errors the coachee makes during an exercise and has to address them using appropriate feedback. While many approaches exist for the classification of motor actions <cit.>, fewer consider the analysis of performance quality. If they do, authors often focus on reporting simple scores, which summarize the performance quality in terms of a deviation from a desired performance <cit.>. Others provide scoring functions which describe the overall improvement or decline in quality for a specific exercise <cit.>. However, many types of complex sports movements can be executed correctly yet with different individual styles <cit.>. Moreover, some parts of the body are often completely irrelevant for the successful execution of the movement. For instance, the orientation of the hands is negligible when analyzing the quality of a body weight squat.
Consequently, feedback that relies only on an overall deviation from a prerecorded desired performance, including task-irrelevant deviations, is non-optimal when aiming at improving the coachee's performance <cit.>.

For many types of motor actions, a set of typical errors can be found <cit.>. Often there is only a very subtle distinction between a correct movement and the occurrence of a certain error. For many known errors, coaches have established feedback strategies to support a coachee in improving her performance. These could be, for instance, verbal descriptions of the error together with best practices on how to eliminate it. Intelligent coaching environments in VR need to be able to detect such error patterns automatically and to provide elaborate feedback, e.g., taken from real-world coaching experience. Such feedback must be provided online or rapidly, i.e., either directly after a coachee has finished the movement or — even better — already during the motor action being performed. Some approaches try to achieve this using manually designed rules that can be evaluated online <cit.>. However, this requires enormous manual effort and bears the risk of gaps or under-fitting in the designed rules.

In this paper, we present an approach to automatic error analysis of human motor performance in an immersive VR coaching environment for sports and rehabilitation exercises (see Figure <ref>). We focus on the squat movement as a test case for our approach. The squat is a full-body motor action that is frequently used in the context of rehabilitation <cit.> as well as for sports training <cit.>. When executed by novice coachees, various different error patterns can be observed in a squat. We consider the detection of such error patterns as a time series classification problem. In the field of time series classification, 1-Nearest-Neighbor combined with Dynamic Time Warping (1NN-DTW) has proved to be state of the art and difficult to beat by other classifiers <cit.>. We aim to extend the current state of the art in the classification of typical error patterns in motor performance. Our contribution is as follows:

* We propose a novel approach towards the classification of error patterns in motor performances, which uses a reference-based Dynamic Time Warping of movement segments as a basis for a feature selection using Random Forests. The selected features are in a final step classified by a Support Vector Machine (SVM).
* We show that this classifier outperforms the 1NN-DTW approach, in both classification performance and time needed for classification.
* We show the effectiveness of the approach on an exemplary data set and demonstrate the impact of all components on classification performance as well as on the time needed for classification.

In the next section, we discuss related work on motor performance analysis and time series classification. Then, we describe how we obtain our data set, which consists of a list of typical error patterns together with annotated movement data. In Section <ref>, we first evaluate the performance of 1NN-DTW on our data set. Next, we provide a step-by-step evaluation of the components of our approach. In Section <ref>, we discuss the results and conclude the paper. The video in the online material demonstrates how we use the proposed analysis to generate verbal feedback inside our “Intelligent Coaching Space”[<http://graphics.uni-bielefeld.de/research/icspace/>], an immersive coaching environment for sports and rehabilitation exercises (see Figure <ref>) <cit.>.
§ RELATED WORK

Two main approaches have been applied to assess the quality of human motor performances. The first approach (Section <ref>) is to engineer a highly specialized method, e.g., for the evaluation of feedback strategies for a very specific type of motor action. In this approach, a common choice is to assess quality by determining the overall distance of the performed motion to the desired motion. Often, a model for these specific performance patterns is manually designed drawing from expert knowledge. The second direction (Section <ref>) consists in using more general, data-based approaches, such as well-established techniques from time series classification. In the following, we present and discuss work from both directions.

§.§ Specific, Manually Designed Approaches

The authors of <cit.> use a manually designed scoring function to represent patients' performance changes in a rehabilitation setting. Even though this approach provides compelling results in its field of application, no detailed information on the error patterns that occurred is gained, which would be necessary for the application of complex coaching strategies. Other approaches make use of rule-based systems to detect the occurrence of certain error patterns. In the context of yoga training, Rector et al. define optimal yoga poses <cit.>. De Kok et al. went one step further by manually defining error patterns <cit.>, focusing on the whole trajectory. Rules are implemented, first to split the motion into sequential movement segments, and then to describe the error patterns. A state machine performs the classification. One major advantage of the approaches by Rector et al. or de Kok et al. is their real-time capability: Specific feedback strategies linked to typical error patterns can be applied immediately. Further, the results are deterministic: If the rules are correct and exhaustive and the motion capture system works properly, an incorrect classification is unlikely to occur. This directly leads to the major disadvantage: As the rules have to be designed manually, they are prone to errors during the design phase, which might be difficult to track down later on. A single error during the design of only one pattern might have a devastating effect on the resulting system in terms of effectiveness and even safety of the training. Moreover, it is mostly not trivial — even when interviewing sports coaches — to obtain exact information about which features are significant or where to draw the border between a correct and an incorrect movement. Finally, the design of rules requires enormous manual effort: For each motor action and for each type of error, a detailed investigation on how to describe the motor action and the error has to be performed. For complex error patterns, this quickly becomes infeasible. Thus, it is desirable to focus on approaches that automatically learn most of their information from data.

§.§ Data-based Approaches

The authors of <cit.> focus on classifying error patterns in rehabilitation exercises using a combination of rule-based segmentation and AdaBoost on a set of manually defined features. In a within-subject cross validation, the authors obtain highly convincing results. However, classification performance decreases significantly when generalizing to new subjects. Furthermore, the design of feature sets requires additional manual work. An approach towards distinguishing between good, moderate, and bad performances of squat movements is presented in <cit.>.
The authors use a feature vector based on manually designed features, such as skewness and range, whose dimensionality is reduced using Sparse Principal Component Analysis (SPCA). Finally, Decision Trees are used for classification. The classification accuracy to distinguish between good, moderate, and bad squats in a leave-one-subject-out cross validation is 73%. For the distinction between only two classes (good and bad), a higher accuracy of 98.6% was achieved. The presented approach is only able to distinguish between three coarse classes of quality and cannot spot single error patterns. In addition, manual effort is needed for feature preparation. Furthermore, SPCA is an unsupervised algorithm, which searches for a set of sparse principal components that cover as much as possible of the variance inside the data <cit.>. This is problematic when most of the variance is due to individual differences rather than performance errors, which holds for sports movements that can differ considerably between subjects. The authors of <cit.> use a neural network classifier to differentiate between correct and incorrect performances of squats and to classify error patterns. A leave-one-out cross validation resulted in an accuracy of 80% for the distinction between correct and incorrect, but only in an accuracy of 57% for the classification of error patterns. Similar experiments were conducted in <cit.>. The authors of <cit.> proposed an extension of Dynamic Time Warping (DTW) that is able to detect multiple occurrences of multiple exercise types in trajectories as well as to classify error patterns. Classification is performed by comparing the just-performed motion to pre-recorded templates and then selecting the best matching one. This leads to a very high accuracy of 93% for exercise classification and 89% for the classification of errors in motor performances (inter-subject performance was not tested). However, combinations of multiple error patterns cannot be considered as long as they are not included as individually pre-recorded templates.

Overall, the data-based approaches employed in the context of sports and rehabilitation applications have three weaknesses: First, it is often not analyzed how well the trained classifiers generalize to new subjects. Many approaches require the system to be re-trained for each user. This leads to problems, as subjects are often physically not able to provide all the required training data. For instance, in the context of sports performances, some users are not able to perform the desired motor action correctly or, on purpose, with a certain type of error. Second, the motor actions and error patterns are often rather simple. Some of the presented systems only distinguish between, e.g., “good” or “bad” for a motor action that only involves a very small number of joints. Especially algorithms using variance-based dimensionality reduction or pure comparisons with prototypes will perform worse on more subtle errors or more complex movements: Most of the variance and also the similarity to prototypes would be covered by inter-subject variations instead of the movement patterns underlying the errors. Finally, for most algorithms, no information on the applicability in interactive or real-time systems is given.
In particular, algorithms which require expensive calculations for each classification do not meet the requirements of VR coaching systems as, e.g., described in <cit.>.

Another group of data-based approaches has been developed in the field of Computer Graphics to capture and synthesize human motion with particular styles. Analysis of observed movements is then often possible through “analysis by synthesis”. Giese et al. introduced Spatio-Temporal Morphable Models for the analysis and synthesis of morphs between gait styles <cit.>. First, recordings of prototypical performances are brought into spatio-temporal correspondence. Then, new trajectories can be described as spatio-temporal blends between prototypes. The underlying assumption is that a clearly defined prototype can be obtained for each desired style. In our case, these styles would be the possible error patterns in a motor performance. However, in the context of motor learning, movements often contain a combination of different error patterns, and prerecorded single prototypical errors do not work equally well for different subjects. A related approach has been proposed in <cit.>: The model, called Motion Graphs++, describes human movements by (a) discrete structural variations that define the motor action, together with (b) continuous variations that capture the movement style. Style variations are represented using Principal Component Analysis (PCA) together with a Mixture of Gaussians. Motion Graphs++ are powerful, as they do not need an isolated demonstration of each prototype. However, if a targeted variation in style is not covered by the PC dimensions, the model cannot detect this style pattern. In the case of typical error patterns in motor performances, the differences between users who perform the same error may be relatively big, whereas the difference between error patterns within a user can be very subtle. Thus, Motion Graphs++ would rather encode the inter-individual differences than the characteristics of the error patterns.

Finally, the classification of errors in motor performances is a special case of time series classification, for which several machine learning algorithms have been proposed. Ground-breaking work used hidden Markov models (HMM) for the recognition of gestures <cit.>. Other methods are based on Decision Trees <cit.>, SVMs <cit.>, or Multi-Layer Perceptrons (MLP) <cit.>. Dynamic Time Warping (DTW) is usually used to temporally align two recorded trajectories. As a pseudo-metric combined with a subsequent classification, DTW has a highly positive impact on motion classification <cit.>. Xi et al. provide an extensive review comparing a large set of available classification methods, such as HMMs, MLPs, and Decision Trees, on time series data <cit.>. They show that no tested classifier is able to beat a combination of DTW and 1-Nearest-Neighbor (1NN-DTW), which basically compares the query trajectory to each available training trajectory using DTW as distance measure; the most similar training trajectory is then used to predict the label of the query trajectory. The superiority of this approach in comparison with nine classifiers, including Random Forests, SVMs, Bayes Networks, et cetera, is supported by work from <cit.>.
Likewise, good classification results were achieved in <cit.> using a method similar to 1NN-DTW, which, however, was limited to simple movement patterns and was not evaluated with respect to generalization to movements of other persons.

To sum up, the approaches discussed in this section suffer from a number of limitations that prevent their use for real-time coaching of human motor performances. We aim to go beyond this by developing a classification approach that can classify subtle errors in a complex motor action with high accuracy, works on a small or unbalanced dataset, achieves good generalization over different users, and provides its results very quickly and already after relevant parts of the performance have been observed. We base our approach on knowledge from Sports Science about which errors are particularly relevant, and we present an approach that determines discriminatory features of these errors and then realizes classifiers with the desired properties. We take 1NN-DTW as the baseline in evaluating them.

§ DOMAIN AND DATASET

Sports coaches and sports scientists have developed coaching strategies to address specific error patterns during a coaching session. Before developing a VR coaching system, and to enable it to detect those errors automatically, it is important to identify relevant error patterns along with corresponding feedback strategies for each motor action of interest. To this end, we analyzed 21 video recordings of real-world squat coaching sessions. A part of these data comes from the corpus described in <cit.>; additional videos were recorded in our lab. We used the videos together with information from sports scientists as well as literature (e.g., <cit.>) to compile a list of 21 relevant error patterns. For instance, one error pattern is an incorrect weight distribution (depicted in Figure <ref>), which happens if the coachee shifts major parts of the body weight too much to the front.

Motion data was recorded using an OptiTrack motion capture system, which consists of ten Prime 13W cameras. Passive markers were mostly attached to a customized motion capture suit; markers at the arms and the hands were directly attached to the subjects' skin (see Figure <ref>). The motion capture system outputs kinematic features for 19 joints (see Figure <ref>) per frame at 120Hz. In our representation, each frame consists of k joint rotations as well as k joint positions (with k = 19). Joint rotations are represented as quaternions q⃗_1, …, q⃗_k. Each quaternion denotes the rotation of a joint with respect to its parent. The root rotation q⃗_1 describes the rotation of the root with respect to its rotation at the beginning of the movement. As root joint, we use the hips. The joint positions are represented by vectors t⃗_1, …, t⃗_k ∈ ℝ^3. Each denotes the y-component of the translation (height) of the joint as well as the translation relative to the x- and z-position of the root joint at the beginning of the movement, after removing the subject's orientation at the beginning of the movement. Furthermore, we additionally use joint angles in Euler angle representation, calculated from the quaternions, which correspond to flexion/extension, abduction/adduction, and twist of the corresponding joint.

We asked 49 subjects to perform squats inside the capture volume. Up to two squats per participant were annotated by an expert for the presence of any of the error patterns. The expert had to add confidence and intensity ratings for each decision.
These ratings were combined into a score in the interval [0,1] by averaging. Only ratings with a score above 0.5 were used for the experiment. Trajectories which contained severe errors caused by the motion capture system (e.g., due to missing markers) were excluded. The final training data set consisted of N = 95 squat movements coming from 49 subjects. We selected the error patterns that appeared with a sufficient frequency (at least 15 positive and negative examples) for training. The ten resulting patterns and their frequency in the training data are listed in Table <ref>.

§ CLASSIFICATION ALGORITHM

The combination of Dynamic Time Warping and 1-Nearest-Neighbor (1NN-DTW) is one of the most successful classifiers for time series classification <cit.>, so it will serve as our baseline. In the following, we first report how we evaluate classifier performance. Then we describe the 1NN-DTW baseline approach and carve out its drawbacks for motor performance analysis in interactive coaching sessions. Next, we develop classifiers that eliminate or mitigate its weaknesses step by step. Finally, we verify that our approach is suitable for the error analysis of human motor performances in the context of interactive VR coaching sessions.

§.§ Evaluation Procedure

Motor actions in sports or rehabilitation training often exhibit large inter-subject variation <cit.>. Consequently, it is important to ensure that classifiers are tested on data from persons whose performances are not included in the training data. This is experimentally supported by <cit.>, where a huge difference in classifier scores is measured when testing on samples from participants included in the training set, as compared to samples from participants who were not included in the training data. We made sure that, for the results described in the following, no data from subjects who provided a recording to the training set is contained in the test set. We applied 5-fold cross validation under this constraint for each error pattern. In each fold, we aimed at achieving a similar proportion of positive and negative labels as in the overall data set. For our experiments, the variables of interest are the quality of the classification and the time needed for the classification of a single query trajectory.

To investigate the quality of a classification, different types of scores can be used. We report the accuracy of the described classifiers, defined as the number of correctly classified samples weighted by the overall number of samples:

acc = (#TP + #TN) / (#P + #N),

where #TP is the number of true positives and #TN the number of true negatives. #P is the overall number of positive examples and #N the overall number of negative examples in the training data. Additionally, at the end of Section <ref>, we provide plots for the F1 score, which is the harmonic mean of precision and recall of the classifier:

F1 = 2#TP / (2#TP + #FP + #FN).

Here, #FP is the number of false positives, and #FN the number of false negatives. All measured scores and standard deviations for the cross validation folds can be found in the supplementary online material.
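The subject-wise splitting described above can be implemented, for instance, with scikit-learn's grouped cross validation (available in recent scikit-learn versions). The following sketch only illustrates the evaluation protocol; the arrays X, y, and subject_ids are hypothetical placeholders, not part of our published code, and balancing the label proportions per fold would require an additional step:

```python
import numpy as np
from sklearn.model_selection import GroupKFold

# X: one feature vector per squat, y: binary labels for one error pattern,
# subject_ids: the subject each recording came from (hypothetical arrays)
gkf = GroupKFold(n_splits=5)
for train_idx, test_idx in gkf.split(X, y, groups=subject_ids):
    # no subject contributes samples to both the training and the test fold
    X_train, y_train = X[train_idx], y[train_idx]
    X_test, y_test = X[test_idx], y[test_idx]
    # ... train a classifier on (X_train, y_train), evaluate on (X_test, y_test) ...
```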
In addition to the quality of the classification, we report the time each algorithm needs to classify a new query trajectory. As DTW is an essential part of each of the proposed algorithms, we report the time that is approximately needed for a DTW without any parallelization. Furthermore, to be able to compare the algorithms that only have to perform one DTW per query, we report the average time per query needed for the classification of a single error pattern. All experiments were conducted on a machine with an Intel Xeon CPU E5-1620 3.6GHz.

§.§ Baseline: 1NN-DTW

As described above, we take as baseline one of the most successful classification algorithms for time series: 1-Nearest-Neighbor as classification algorithm together with Dynamic Time Warping as distance measure (1NN-DTW). For an input query, 1NN searches for the data point that is most similar to the input. Then it returns the classification label of this nearest neighbor in the training set. The underlying assumption is that data points that lie nearby belong to the same class. To determine which points lie nearby, a frame-wise comparison is problematic for time series such as motion trajectories: If the trajectories were compared simply frame-to-frame, the results would be highly distorted. Even if the movement is performed completely in the same way in space, but with a slight temporal offset, such a measure would report a very high distance, whereas if a movement is performed with similar timing but different postures (e.g., a slightly weaker movement of some joints), the distance would be very low. Dynamic Time Warping (DTW) is typically used to solve this problem, as it establishes a frame-to-frame correspondence between two trajectories by warping in time and then allows determining the distance between them.

We implemented 1NN-DTW as follows. Given two trajectories T_1 and T_2, consisting of n and m frames, respectively, we use DTW to calculate the optimal match between them <cit.>. First, an n × m local cost matrix M is constructed. Each element M(i,j) of this matrix corresponds to the distance between the postures T_1(i) and T_2(j). This distance is defined as the sum of the quaternion distances of the corresponding joints. As quaternion distance, we use the inner product, as evaluated in <cit.>. Thus, each element of the matrix M is calculated as follows:

M(i,j) = ∑_{d=1}^{k} (1 - |q⃗_{i,d} · q⃗_{j,d}|).

To establish a frame-to-frame correspondence, an optimal path through M from M(1,1) to M(n,m) is determined based on dynamic programming. The distance between the two trajectories T_1, T_2 can then be defined as the mean value of the M(i,j) on the warping path. A comparison of classification results using different features, such as joint angles or joint positions, yielded no significant improvements in the 1NN step. Results of these comparisons can be found in the supplementary online material.

We applied the above procedure to the relevant error patterns: For each query trajectory T_q we compute the DTW distance to each training trajectory T_1, …, T_N. Next, the trajectory with the smallest DTW distance to T_q that is annotated with respect to the error pattern is selected, and its label is returned for T_q.
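A compact sketch of this baseline distance computation is given below. It follows the description above (local quaternion cost, dynamic programming, mean cost along the warping path), but is a simplified illustration rather than our evaluated implementation:

```python
import numpy as np

def dtw_distance(T1, T2):
    """DTW distance between two trajectories of unit quaternions.

    T1: (n, k, 4) array, T2: (m, k, 4) array; n, m frames, k joints.
    Local cost per frame pair: sum over joints of 1 - |<q_i, q_j>|.
    """
    n, m = len(T1), len(T2)
    # local cost matrix M(i, j)
    M = np.array([[np.sum(1.0 - np.abs(np.einsum('jd,jd->j', T1[i], T2[j])))
                   for j in range(m)] for i in range(n)])
    # accumulated cost via dynamic programming
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i, j] = M[i - 1, j - 1] + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    # backtrack to count the cells on the optimal warping path
    i, j, cells = n, m, 1
    while i > 1 or j > 1:
        step = int(np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]]))
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
        cells += 1
    return D[n, m] / cells  # mean local cost along the path
```

1NN-DTW then simply returns the label of the training trajectory with the smallest dtw_distance to the query.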
As shown in Figure <ref>, 1NN-DTW is able to detect some of the error patterns with accuracies of more than 60 percent. This is comparable to the results of <cit.> and <cit.> for simple rehabilitation exercises. The computational cost of DTW is quadratic in the lengths of the trajectories. In our setting, a single DTW takes about 55ms on average per trajectory. On average, the trajectories used for this experiment consist of 500 frames. For each trajectory to be classified, DTW has to be computed against each of our training trajectories (N = 95). This leads to an average time of over 5 seconds for the DTWs necessary for one single query trajectory. Thus, even if the classification led to optimal results, it would not be applicable in an interactive setting.

§.§ Reducing Alignment Cost: 1NN-RefDTW

To reduce the computational cost, we can exploit the general similarity between the trajectories, which all represent the same motor action (the squat). We can thus warp all training trajectories to a normalized timing in an offline preprocessing step. This is done by selecting one reference trajectory T_r and warping all trajectories to its timing. If T_r is a very short trajectory (i.e., a fast movement), information from the original trajectories gets lost due to the warping. Thus, as reference trajectory, we select the longest trajectory that contains all available movement segments. The warping exploits the correspondences found by DTW: for each frame t of T_r, the corresponding frame in the to-be-warped trajectory is selected according to the correspondence path from DTW. For classification, we perform 1NN using the mean of the frame-by-frame distance between the warped query trajectory T_q and the warped training trajectories as distance measure:

dist(T_q, T_i) = (1/|T_r|) ∑_{t=1}^{|T_r|} ∑_{d=1}^{k} (1 - |q⃗_{t,d}^q · q⃗_{t,d}^i|),

where q⃗_{t,d}^q is the quaternion describing the d-th joint in the t-th frame of the warped query trajectory, whereas q⃗_{t,d}^i refers to the corresponding joint of the warped training trajectory i, and |T_r| is the length of the reference trajectory. In our case, we have |T_r| = 902. For each classification, the calculation of one DTW for T_q is sufficient: All comparisons between the warped query and the training trajectories can now be done frame-by-frame, with computational cost linear in |T_r|. In our setting, this process needs on average 25ms per trajectory. We call the resulting algorithm 1NN-RefDTW and expect it to have a classification performance similar to 1NN-DTW while incurring reduced computational cost.

Figure <ref> summarizes the classification results of 1NN-RefDTW. The classification accuracy is comparable to 1NN-DTW, with some error patterns detected slightly better. Still, the classification accuracy is insufficient for application in a coaching scenario. Concerning the computational cost, the new classifier only needs one DTW per query trajectory. Warping a training trajectory into the timing of the reference trajectory needs on average 90ms. Additionally, the frame-to-frame distance between the warped query and the training trajectories has to be calculated. The computational effort for classification is thus |T|² + N|T| instead of N|T|² if all trajectories are of size |T|. In our setting, the classification process for N = 95 needs approximately 2.5s. However, the time needed for classification still depends on the number of trajectories in the data set, which is problematic for large training sets.
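The offline warping step can be sketched as follows, where path is assumed to be the DTW correspondence path between the reference T_r and the trajectory T (as produced, e.g., by a backtracking routine like the one above). This is an illustrative helper under these assumptions, not our exact production code:

```python
import numpy as np

def warp_to_reference(T, T_ref, path):
    """Resample trajectory T onto the timing of the reference T_ref.

    path: iterable of (i_ref, i_t) frame-index pairs from DTW between
    T_ref and T. For every reference frame we keep the first frame of T
    that DTW put into correspondence with it.
    """
    corr = {}
    for i_ref, i_t in path:
        corr.setdefault(i_ref, i_t)
    return np.stack([T[corr[i]] for i in range(len(T_ref))])
```

After this preprocessing, dist(T_q, T_i) reduces to a mean frame-by-frame quaternion distance over the |T_r| aligned frames.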
§.§ Separate Classification of Error Patterns: RefDTW-SVM
Errors during the performance of motor actions can occur in many different combinations. 1NN-DTW and its extension 1NN-RefDTW only return the whole set of labels of the nearest neighbor as classification for each query. Combinations of error patterns that do not exist in the training data cannot be detected by the algorithm, unless the training data contains all possible combinations of error patterns. As this is typically not the case, it is desirable to learn a separate classifier for each pattern. Furthermore, we would like to provide a classifier with further reduced computational cost, ideally independent of the size of the training set. Both goals can be achieved using Support Vector Machines (SVM), one of the most successful machine learning algorithms in general <cit.>. An SVM learns a decision hyperplane which maximizes the margin between two classes <cit.>. For classification, the SVM only has to determine on which side of the hyperplane an input query lies. In our case, we can learn a classifier for each error pattern, considering each training trajectory as one data point with the label pattern occurs or pattern does not occur. To use the SVM for training, we first warp all training trajectories to the timing of the reference trajectory. Then, for each warped training trajectory, a feature vector is constructed and standardized by removing the mean and scaling to unit variance. This vector consists of all joint angles in Euler angle representation as well as the joint positions for each frame in the warped trajectory. The feature vector thus has size 6| T_r |k, where | T_r | is the number of frames of the reference trajectory and k the number of joints. In our case, we have | T_r |=902 and k=19. Again, we tested different features and found that using joint angles in Euler angle representation together with joint positions leads to good classification results (cf. supplementary online material).

We trained one two-class SVM for each error pattern on the feature vectors obtained from the warped trajectories. In our experiments, a non-linear RBF kernel was unable to beat the linear kernel; thus, we use SVMs with a linear kernel (cf. supplementary online material). We use the standard SVM implementation from scikit-learn <cit.> in version 0.17.1. For classification, a query trajectory is first warped to the timing of the reference trajectory. Then the feature vector is constructed and classified by the trained SVMs. The resulting algorithm is called RefDTW-SVM.

Results can be seen in Figure <ref>: Now, three of the error patterns are classified with an accuracy greater than 80%. Also, most of the other patterns reach higher results than with the previous 1NN approaches. However, the overall classification performance is still not sufficient. One explanation is the immense number of features per trajectory; we address this problem in the next section. Concerning the time needed for classification, for each error pattern, the classifier now only needs a mean of 9.7ms. Before starting the classification of error patterns, one DTW has to be calculated, which takes about 90ms as described in Section <ref>.
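A sketch of the per-pattern training described above is given below. We use scikit-learn's LinearSVC with a standardization step for brevity; the paper's exact estimator settings are not specified, and flatten_features is a hypothetical helper that concatenates Euler angles and joint positions per frame into a 6|T_r|k-dimensional vector.

from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

def flatten_features(warped):
    # warped: per-frame array of 3 Euler angles + 3 position coordinates per joint
    return warped.reshape(-1)

def train_pattern_classifiers(warped_train, labels_per_pattern):
    X = np.stack([flatten_features(w) for w in warped_train])
    classifiers = {}
    for pattern, y in labels_per_pattern.items():  # y: binary "occurs"/"does not occur"
        clf = make_pipeline(StandardScaler(), LinearSVC())
        clf.fit(X, y)
        classifiers[pattern] = clf
    return classifiers

At query time, the warped query is flattened once and passed to each pattern's classifier, so the cost per pattern is essentially a single dot product with the learned hyperplane.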
§.§ Reducing Features: RefDTW-RF-SVM
Our feature vector of size 6| T_r |k comprises many irrelevant features: For instance, we intuitively do not consider the rotation of the wrist to be related to having a straight back. The SVM classifier might suffer from this high number of irrelevant features, as shown by <cit.> and <cit.>. Based on their results, we expect a robust feature selection method to help improve classifier performance. To this end, we use Random Forests (RF) for feature selection <cit.>. Random Forests perform feature selection as well as classification. They are based on Decision Trees, which learn a hierarchical set of rules to distinguish between classes. Thereby, they implicitly weight the importance of each feature. Random Forests extend Decision Trees and reduce their susceptibility to overfitting by training multiple randomized Decision Trees and averaging their predictions, which improves the accuracy of the estimator <cit.>. See <cit.> for an in-depth analysis of the statistical properties and the mathematical background of Random Forests.

Direct classification using Random Forests leads to high computational cost, as all trees in the forest must be considered. We are interested in a model that provides good classification performance with minimal time for classification. As the SVM-based classification presented in Section <ref> provides almost acceptable results in real-time, we combined it with Random-Forest-based feature selection: We trained one Random Forest for each error pattern. The Random Forests are trained on the same feature vectors extracted from the warped trajectories as described for RefDTW-SVM. To train the trees, we used the Gini impurity as criterion to optimize the decision rules. As stopping condition for growing, we require all leaves to contain only a single class or fewer than two samples. We found that 200 trees lead to good results. The idea of our new algorithm RefDTW-RF-SVM is to use the Random Forests only for feature selection during training: For each error pattern, the Random Forest assigns an importance value to each feature by averaging the relative importance of the feature in each decision tree. Following an idea of <cit.>, we add 20 random features to each frame before performing the feature weighting by Random Forests. The average of their importance values is used as threshold to discard irrelevant features. This leads to 580 features on average per error pattern (from originally around 100,000 features), which we use as input for the SVMs. We trained the SVMs with the same parameters as for RefDTW-SVM. For the Decision Trees as well as the Random Forests, we use the scikit-learn implementation <cit.>.

Figure <ref> shows the resulting classification accuracy, which outperforms RefDTW-SVM for nearly all patterns. Five patterns reach accuracies higher than 80 percent. Concerning the classification time, only 0.1ms is needed in addition to the DTW step. This leads to a total time to classify all patterns after DTW of around 1ms.
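The feature selection step could be sketched as follows. For simplicity, the sketch appends the random probe features once globally rather than per frame as in the text, and the tree-growing settings mirror scikit-learn's defaults, which match the stopping condition stated above; both simplifications are ours.

from sklearn.ensemble import RandomForestClassifier

def select_features(X, y, n_probes=20, n_trees=200, seed=0):
    rng = np.random.RandomState(seed)
    probes = rng.randn(X.shape[0], n_probes)       # random "noise" features
    X_aug = np.hstack([X, probes])
    rf = RandomForestClassifier(n_estimators=n_trees, criterion="gini",
                                random_state=seed)
    rf.fit(X_aug, y)
    importances = rf.feature_importances_
    threshold = importances[-n_probes:].mean()     # mean importance of the probes
    # keep only real features that beat the random probes
    return np.where(importances[:X.shape[1]] > threshold)[0]

The selected column indices are then used to slice the feature vectors before SVM training and classification, which is what shrinks roughly 100,000 features down to a few hundred per pattern.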
§.§ Getting Classification Results Earlier: Segment-based RefDTW-RF-SVM
RefDTW-RF-SVM and all other approaches presented before only allow classification after the whole motor action is completed, as the full query trajectory needs to be warped by DTW. However, some error patterns are limited to parts of the motor action. For instance, the desired depth of the squat is relevant only at the deepest point of the motion. This information can be exploited by using the concept of movement segments: Each performance of a motor action can be considered a combination of simpler sequential sub-actions. These movement segments are homogeneous and functionally meaningful parts of a more complex movement. For the squat, we define the movement segments preparation, going down, is down, going up, and wrap up. The underlying idea of a segment-based RefDTW-RF-SVM is to simply apply RefDTW-RF-SVM to a single movement segment once it has been completed. The segmentation is done based on a state machine which splits the trajectory at boundary points (state changes) where important joints like the knees start or stop moving. This is similar to the approach proposed in <cit.>. The segmentation takes less than 1ms per frame.

As shown in Figure <ref>, the classification results are comparable to the results obtained with RefDTW-RF-SVM, which however works on the complete trajectories. For each pattern, the maximum accuracy per movement segment is reported. Seven error patterns are classified with an accuracy of above 80 percent. We performed the classification with the automatically segmented trajectories as well as with manually segmented trajectories. Both led to similar results. Concerning the time needed for classification, as the trajectories for the movement segments are shorter than for the whole motor actions, DTW only needs about 10ms per movement segment instead of about 90ms for a whole trajectory. The classification step itself using Segment-based RefDTW-RF-SVM needs around 0.1ms. Overall, an error pattern is classified on average around 10.1ms after the movement segment of interest has been performed. As the DTW, which is responsible for around 10ms of this time, has to be performed only once, we need approximately 11.0ms to classify all of our ten error patterns.

§.§ Summary of the Results
All algorithms except for 1NN-RefDTW were able to beat the classification performance of our baseline 1NN-DTW. The best classification quality is achieved by RefDTW-RF-SVM and Segment-based RefDTW-RF-SVM. Most error patterns, including the most frequent ones “wrong dynamics”, “incorrect weight distribution”, and “too deep”, can be detected with an accuracy above 80%. The patterns “incorrect weight distribution” and “feet distance not sufficient” are even nearly perfectly classified. Only the error patterns with the fewest occurrences in our training data, namely “not symmetric” (17 occurrences) and “knees tremble sideways” (23 occurrences), are classified with an accuracy below 70%. Additionally, Figure <ref> reports the F1 score of all presented approaches. Concerning the F1 score, the picture looks similar: Only four patterns are classified with a score below 0.8. This enables our system to make use of various feedback strategies (cf. video in the supplementary online material). The exact scores and their standard deviation in the 5-fold cross validation can be found in the supplementary online material. All algorithms, except for Segment-based RefDTW-RF-SVM, require the calculation of DTW on the whole trajectory, which takes on average about 90ms. Segment-based RefDTW-RF-SVM only needs single movement segments to be warped, which can be performed in around 10ms. Table <ref> summarizes the time needed to classify a query trajectory with respect to the ten error patterns. Segment-based RefDTW-RF-SVM is clearly the fastest classifier, as the classification step itself only needs 1ms and the result is potentially available already during the execution of the movement, directly after a single movement segment has been completed.

§ DISCUSSION AND CONCLUSION
We have presented steps to yield a novel classifier for fast detection of a variety of error patterns in movement trajectories, as required for interactive coaching applications, e.g., in virtual reality environments. We evaluated all algorithms on a complex motor task involving a high number of relevant error patterns. All scores were measured using cross validation, in a setup where data from one single subject is not allowed to be distributed over multiple folds. Thus our results capture the algorithms' abilities to generalize across subjects.
The resulting algorithm, Segment-based RefDTW-RF-SVM, provides the best balance between quality of classification and computation time: Besides being the fastest classifier in our set, it is among the two classifiers with the highest accuracy scores. Nearly all error patterns, especially the most frequent ones, are classified with accuracies above 80%. In contrast to many related approaches, this classifier is able to work in interactive setups, as shown in our demonstration of how online verbal feedback can be triggered through our automatic error analysis (see the video in the supplementary material). Overall, from the evaluation of each of the different steps taken in the previous section, we can derive the following conclusions about automatic error analysis of human motor performances:

* If the data consists of structurally similar movements, such as the same type of motor actions, it is sufficient to temporally align all trajectories by performing DTW with one reference trajectory. Thereby we were able to reduce the computational effort while keeping the quality of the classification in a similar range for nearly all error patterns.
* For the classification of multiple error patterns, independent classifiers should be trained. A nearest-neighbor-based classification, which only copies all labels from the nearest neighbor of a query, is insufficient, especially for small training data sets. Learning independent classifiers for all error patterns increased the classifier performance for nearly all examined error patterns.
* Random Forests help to select relevant features from high-dimensional input trajectories, even if the number of training examples is small. Such a preprocessing step significantly improves the performance of SVM-based classification. This holds especially for error patterns which are characterized only by very few features, such as the “hollow back”.
* By classifying data from appropriate movement segments, instead of whole trajectories, the time needed for classification can be drastically reduced while keeping the classification performance high.

Note that even though the general classification performance of our algorithm is high, the performance is not convincing for two specific error patterns: The pattern “not symmetric” is detected only with F1 scores around 0.43. This error pattern is annotated in trajectories where some joints are not symmetric between the left and the right side of the body. As this can occur in almost all joints and all phases of the movement, the feature selection cannot easily spot those features of interest that are relevant. Further, the classifier has no possibility to infer information on the relationship between multiple joints with respect to symmetry. For the other problematic pattern, “knees tremble sideways”, our best classifier only achieves an F1 score of 0.51. This pattern describes a very subtle movement. Also, it can spread temporally: Exactly the frames that are problematic for subject A can be correct for subject B and vice versa. Finally, the number of trembles can be different for different subjects, which also makes classification harder. One way to deal with these two problematic patterns is the construction of more complex higher-level features. A higher-level feature could, for instance, describe the relationship between certain parts of the body or the movement of the athlete's center of mass. The automatic generation and inclusion of such higher-level features is a promising field of future work.
Another limitation is that temporal properties of the movements are not covered directly by our algorithm. For motor actions where the user's timing has an influence on whether certain errors occur, temporal information could be included by adding velocity features as well as information on the warping function extracted from DTW.

§ ACKNOWLEDGEMENTS
This research was supported by the Cluster of Excellence Cognitive Interaction Technology “CITEC” (EXC 277) at Bielefeld University, which is funded by the German Research Foundation (DFG).

§ SUPPLEMENTARY ONLINE MATERIALS
§.§ Detailed Scores
Here, we report the measured classification performance of all tested classifiers with respect to accuracy (see Figure <ref> and Table <ref>), F1 score (see Table <ref>), and Receiver Operating Characteristic Area Under the Curve (ROC AUC) (see Figure <ref> and Table <ref>). ROC curves describe the relationship between recall and fall-out: the true positive rate is plotted on the y axis, the false positive rate on the x axis. The higher the curve, the better the classification. The area under the curve (AUC) is thus often used as a score for classifier performance, as it gives the probability of ranking a randomly chosen positive instance higher than a randomly chosen negative one. Thus the higher the result, the better the classifier performs. This section also contains results for the pure Random-Forest-based classification (RefDTW-RF). This leads to a classification performance in a similar range to RefDTW-RF-SVM, but also to more computational effort: We need around 160ms in addition to the time needed for the DTW step to classify all of our 10 error patterns, even if the trees inside the Random Forests are evaluated in parallel. All further components of the system, such as dialogue planning, Text-to-Speech, coaching animation, et cetera, have to wait for this period of time until they can start planning the feedback corresponding to the motion the trainee just performed in the virtual environment. Thus, for RefDTW-RF-SVM, we only use the Random Forests for feature selection during training to significantly speed up the classification time.

§.§ Comparison of Different Feature Types
First, we compare the classification results of the baseline 1NN-DTW when representing rotations as quaternions or as Euler angles, or when using joint positions. In the nearest neighbor step, the Euclidean distance between the warped frames is used. All approaches lead to results in a similar range on average over all error patterns (see Figure <ref> and Figure <ref>).

Second, we compare the classification results of our own final classifier Segment-based RefDTW-RF-SVM with respect to different feature sets. The feature weighting using Random Forests on quaternions is implemented component-wise. All quaternions with at least one feature weight above the threshold are completely used for SVM classification. For some error patterns, we observe that the quality of the classification complements each other for joint angles and joint translations: Some patterns (such as “feet distance not sufficient”) can be classified best based on the translations, others (such as “hollow back”) are classified much better based on the angles. We thus combine joint angles and joint translations, which leads to a slight enhancement of the overall performance. Here, we finally decide to use Euler angles instead of quaternions for the sake of better interpretability of the selected features and a slightly shorter feature vector.
In general, all classifiers behave similarly (see Figure <ref> for the accuracies and Figure <ref> for the F1 scores).

§.§ RBF Kernel vs. Linear Kernel in SVM
In this part, we compare the classification performance of Segment-based RefDTW-RF-SVM using a linear kernel to that using a radial basis function (RBF) kernel. Results are in a similar range (see Figure <ref> for the accuracies and Figure <ref> for the F1 scores). We finally decide to use the linear kernel for the sake of simplicity.

§.§ 1NN-DTW Based on Movement Segments
Finally, we evaluated the performance of 1NN-DTW on movement segments, which leads to Segment-based 1NN-DTW. Here, the results are again worse than for our own classifier Segment-based RefDTW-RF-SVM (see Figure <ref> for the accuracies and Figure <ref> for the F1 scores).
{ "authors": [ "Felix Hülsmann", "Stefan Kopp", "Mario Botsch" ], "categories": [ "cs.AI" ], "primary_category": "cs.AI", "published": "20170926170132", "title": "Automatic Error Analysis of Human Motor Performance for Interactive Coaching in Virtual Reality" }
Multi-Label Classification of Patient Notes: Case Study on ICD Code Assignment

Tal Baumel (Ben-Gurion University, Beer-Sheva, Israel), Jumana Nassour-Kassis (Ben-Gurion University, Beer-Sheva, Israel), Raphael Cohen (Chorus.ai, San Francisco, CA), Michael Elhadad (Ben-Gurion University, Beer-Sheva, Israel), Noémie Elhadad (Columbia University, New York, NY)
=============================================================================================

The automatic coding of clinical documentation according to diagnosis codes is a useful task in the Electronic Health Record, but a challenging one due to the large number of codes and the length of patient notes. We investigate four models for assigning multiple ICD codes to discharge summaries, and experiment with data from the MIMIC II and III clinical datasets. We present the Hierarchical Attention-bidirectional Gated Recurrent Unit (HA-GRU), a hierarchical approach to tag a document by identifying the sentences relevant for each label. HA-GRU achieves state-of-the-art results. Furthermore, the learned sentence-level attention layer highlights the model's decision process, allows for easier error analysis, and suggests future directions for improvement.

§ INTRODUCTION
In Electronic Health Records (EHRs), there is often a need to assign multiple labels to a patient record, choosing from a large number of potential labels. Diagnosis code assignment is such a task, with a massive number of labels to choose from (14,000 ICD9 codes and 68,000 ICD10 codes). Large-scale multiple phenotyping assignment, problem list identification, or even intermediate patient representation can all be cast as multi-label classification over a large label set. More recently, in the context of predictive modeling, approaches to predict multiple future healthcare outcomes, such as future diagnosis codes or medication orders, have been proposed in the literature. There again, the same setup occurs where patient-record data is fed to a multi-label classification over a large label set.

In this paper, we investigate how to leverage the unstructured portion of the EHR, the patient notes, along with a novel application of neural architectures. We focus on four characteristics: (i) a very large label set (6,500 unique ICD9 codes and 1,047 unique 3-digit ICD9 codes); (ii) a multi-label setting (up to 20 labels per instance); (iii) instances that are long documents (discharge summaries of 1,900 words on average); and (iv) because we work on long documents, one critical aspect of the multi-label classification is transparency: the need to highlight the elements in the documents that explain and support the predicted labels.

While there has been much work on each of these characteristics, there has been limited work tackling all at once, particularly in the clinical domain. We experiment with four approaches to classification: an SVM-based one-vs-all model, a continuous bag-of-words (CBOW) model, a convolutional neural network (CNN) model, and a bidirectional Gated Recurrent Unit model with a Hierarchical Attention mechanism (HA-GRU). Among them, the attention mechanism of the HA-GRU model provides full transparency for classification decisions. We rely on the publicly available MIMIC datasets to validate our experiments.
A characteristic of the healthcare domain is long documents with a large number of technical words and typos/misspellings. We experiment with simple yet effective preprocessing of the input texts. Our results show that careful tokenization of the input texts and hierarchical segmentation of the original document allow our Hierarchical Attention GRU architecture to yield the most promising results, over the SVM, CBOW, and CNN models, while preserving the full input text and providing effective transparency.

§ PREVIOUS WORK
We review previous work in the healthcare domain as well as recent approaches to extreme multi-label classification, which take place in a range of domains and tasks.

§.§ Multi-label Patient Classifications
Approaches to classification of patient records against multiple labels fall into three types of tasks: diagnosis code assignment, patient record labeling, and predictive modeling.

*Diagnosis Code Assignment. Automated ICD coding is a well-established task, with several methods proposed in the literature, ranging from rule-based <cit.> to machine learning approaches such as support vector machines, Bayesian ridge regression, and K-nearest neighbor <cit.>. Some methods exploit the hierarchical structure of the ICD taxonomy <cit.>, while others incorporate explicit co-occurrence relations between codes <cit.>. In many cases, to handle the sheer amount of labels, the different approaches focus on rolled-up ICD codes (i.e., the 3-digit version of the codes and their descendants in the ICD taxonomy) or on a subset of the codes, as in the shared community task for radiology code assignment <cit.>. It is difficult to compare the different methods proposed, since each relies on different (and usually not publicly available) datasets. We experiment with the MIMIC dataset, since it is publicly available to the research community. Methods-wise, our approach departs from previous work in two important ways: we experiment with both massively large and very large label sets (all ICD9 codes and rolled-up ICD9 codes), and we experiment with transparent models that highlight portions of the input text that support the assigned codes.

*Patient Record Labeling. Other than automated diagnosis coding, most multi-label patient record classifiers fall into the task of phenotyping across multiple conditions at once. For instance, the UPhenome model takes a probabilistic generative approach to assign 750 latent variables <cit.>. More recently, in the context of multi-task learning, Harutyunyan and colleagues experimented with phenotyping over 25 critical care conditions <cit.>.

*Predictive Modeling. Previous work in EHR multi-label classification has mostly focused on predictive scenarios. The size of the label set varies from one approach to another, and most limit the label set size, however: DeepPatient <cit.> predicts over a set of 78 condition codes. <cit.> leverage an LSTM model to predict over a vocabulary of 128 diagnosis codes. DoctorAI <cit.> predicts over a set of 1,183 3-digit ICD codes and 595 medication groups. The Survival Filter <cit.> predicts a series of future ICD codes across approximately 8,000 ICD codes.

*Inputs to Multi-Label Classifications. Most work in multi-label classification takes structured input. For instance, the Survival Filter expects ICD codes as input to predict the future ICD codes. DoctorAI takes as input medication orders, ICD codes, problem list, and procedure orders at a given visit.
Deep Patient does take the content of notes as input, but the content is heavily preprocessed into a structured input to their neural network by tagging all texts with medical named entities. In contrast, our approach is to leverage the entire content of the input texts. Our work contributes to clinical natural language processing <cit.>, which only recently investigated neural representations and architectures for traditional tasks such as named entity recognition <cit.>.

§.§ Multi-label Extreme Classification
In extreme multi-label learning, the objective is to annotate each data point with the most relevant subset of labels from an extremely large label set. Much work has been carried out outside of the healthcare domain on tasks such as image classification <cit.>, question answering <cit.>, and advertising <cit.>. In <cit.>, the task of annotating a very large dataset of images (>10M) with a very large label set (>100K) was first addressed. The authors introduced the WSABIE method, which relies on two main features: (i) records (images) and labels are embedded in a shared low-dimensional vector space; and (ii) the multi-label classification task is modeled as a ranking problem, evaluated with a Hamming loss on a P@k metric. The proposed online approximate WARP loss allowed the algorithm to perform fast enough at the scale of the dataset. We found that in our case, the standard Micro-F measure is more appropriate, as we do not tolerate approximate annotations to the same extent as in the image annotation task.

The SLEEC method <cit.> also relies on learning an embedding transformation to map label vectors into a low-dimensional representation. SLEEC learns an ensemble of local distance-preserving embeddings to accurately predict infrequently occurring labels. This approach attempts to exploit the similarity among labels to improve classification, and learns different representations for clusters of similar labels. Other approaches attempt to reduce the cost of training over very large datasets by considering only part of the labels for each classification decision <cit.>. SLEEC was later improved in <cit.> with the PfastreXML method, which also adopted P@k loss functions aiming at predicting tail labels. In <cit.>, the FastText method was introduced as a simple and scalable neural bag-of-words approach for assigning multiple labels to text. We test a similar model (CBOW) in our experiments as one of our baselines.

§ DATASET AND PREPROCESSING
We use the publicly available de-identified MIMIC dataset of ICU stays from Beth Israel Deaconess Medical Center <cit.>.

§.§ MIMIC Datasets
To test the impact of training size, we relied on both the MIMIC II (v2.6) and MIMIC III (v1.4) datasets. MIMIC III comprises records collected between 2001 and 2012, and can be described as an expansion of MIMIC II (which comprises records collected between 2001 and 2008), along with some edits to the dataset (including de-identification procedures). To compare our experiments to previous work in ICD coding, we used the publicly available split of MIMIC II from <cit.>. It contains 22,815 discharge summaries divided into a training set (20,533 summaries) and a test set of unseen patients (2,282 summaries). We thus kept the same training set and test set from MIMIC II, and constructed an additional training set from MIMIC III. We made sure that the test-set patients remained unseen in this training set as well.
Overall, we have two training sets, which we refer to as MIMIC II and MIMIC III, and a common test set comprising summaries of unseen patients. While there is a large overlap between MIMIC II and MIMIC III, there are also marked differences. We found many cases where discharge summaries from 2001-2008 are found in one dataset but not in the other. In addition, MIMIC III contains addenda to the discharge summaries that were not part of MIMIC II. After examining the summaries and their addenda, we noticed that the addenda contain vital information for ICD coding that is missing from the main discharge summaries; therefore, we decided to concatenate the summaries with their addenda. Table <ref> reports some descriptive statistics regarding the datasets. Overall, MIMIC III is larger than MIMIC II from all standpoints, including amounts of training data, vocabulary size, and overall number of labels.

§.§ ICD9 Codes
Our label set comes from the ICD9 taxonomy. The International Classification of Diseases (ICD) is a repository maintained by the World Health Organization (WHO) to provide a standardized system of diagnostic codes for classifying diseases. It has a hierarchical structure, connecting specific diagnostic codes through is-a relations. The hierarchy has eight levels, from less specific to more specific. ICD codes contain both diagnosis and procedure codes. In this paper, we focus on diagnosis codes only. ICD9 codes are conveyed as 5 digits, with 3 primary digits and 2 secondary ones.

Table <ref> provides the ICD9 label cardinality and density as defined by <cit.>. Cardinality is the average number of codes assigned to records in the dataset. Density is the cardinality divided by the total number of codes. For both training sets, the number of labels is of the same order as the number of records, and the label density is extremely low. This confirms that the task of code assignment belongs to the family of extreme multi-label classification. We did not filter any ICD code based on frequency. We note, however, that there are approximately 1,000 frequent labels (defined as assigned to at least 50 records) (Table <ref>). We experimented with two versions of the label set: one with all the labels (i.e., 5-digit), and one with the labels rolled up to their 3-digit equivalent.

§.§ Input Texts
*Tokenization. Preprocessing of the input records comprised the following steps: (i) tokenize all input texts using the spaCy library [https://spacy.io/]; (ii) convert all non-alphabetical characters to pseudo-tokens (e.g., “11/2/1986” was mapped to “dd/d/dddd”); (iii) build the vocabulary from tokens that appear at least 5 times in the training set; and (iv) map any out-of-vocabulary word to its nearest word in the vocabulary (using the edit distance). This last step is simple, yet particularly useful in reducing the number of misspellings of medical terms. These preprocessing steps have a strong impact on the vocabulary. For instance, there were 1,005,489 unique tokens in MIMIC III and the test set before preprocessing, and only 121,595 remaining in the vocabulary after preprocessing (an 88% drop). This step improved F-measure performance by ~0.5% when tested on the CBOW and CNN methods (not reported).
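A compact sketch of this pipeline is shown below. It is an illustrative reconstruction: the spaCy model name is an assumption, the pseudo-token step is reduced to the digit-mapping case given in the example, and the brute-force nearest-neighbor search over the vocabulary is meant only to make the logic explicit.

import re
from collections import Counter
import spacy

nlp = spacy.load("en_core_web_sm")      # assumed English model

def normalize(token):
    # map every digit to the pseudo-character 'd', e.g. "11/2/1986" -> "dd/d/dddd"
    return re.sub(r"[0-9]", "d", token.lower())

def build_vocab(tokenized_docs, min_count=5):
    counts = Counter(t for doc in tokenized_docs for t in doc)
    return {t for t, c in counts.items() if c >= min_count}

def levenshtein(a, b):
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def map_to_vocab(word, vocab):
    # out-of-vocabulary words are replaced by their closest in-vocabulary word
    return word if word in vocab else min(vocab, key=lambda v: levenshtein(word, v))

tokens = [normalize(t.text) for t in nlp("Pt admitted on 11/2/1986 with cellulitis")]

In practice the edit-distance lookup would be cached per out-of-vocabulary type, since the same misspellings recur across notes.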
*Hierarchical Segmentation. Besides tokenization of the input texts, we carried out one more level of segmentation, at the sentence level (using the spaCy library as well). There are two reasons for preprocessing the input texts with sentence segmentation. First, because we deal with long documents, it is impossible and ineffective to train a sequence model like a GRU on such long sequences. In previous approaches to document classification, this problem was resolved by truncating the input documents. In the case of discharge summaries, however, this is not an acceptable solution: we want to preserve the entire document for transparency. Second, we are inspired by the moving windows of <cit.> and posit that sentences form linguistically inspired windows of word sequences. Beyond tokens and sentences, discharge summaries exhibit strong discourse-level structure (e.g., history of present illness and past medical history, followed by hospital course, and discharge plans) <cit.>. This presents an exciting opportunity for future work to exploit discourse segments as an additional representation layer of input texts.

§ METHODS
We describe the four models we experimented with. ICD coding has been evaluated in the literature according to different metrics: Micro-F, Macro-F, a variant of Macro-F that takes into account the hierarchy of the codes <cit.>, Hamming and ranking loss <cit.>, and a modified version of mean reciprocal rank (MRR) <cit.>. We evaluate performance using the Micro-F metric, since it is the most commonly used metric.

*SVM. We used Scikit-Learn <cit.> to implement a one-vs-all, multi-label binary SVM classifier. Features were bags of words, with tf*idf weights (determined from the corpus of discharge summaries) for each label. Stop words were removed using Scikit-Learn's default English stop-word list. The model fits a binary SVM classifier for each label (ICD code) against the rest of the labels. We also experimented with χ^2 feature filtering to select the top-N words according to their mutual information with each label, but this did not improve performance.

*CBOW. The continuous-bag-of-words (CBOW) model is inspired by the word2vec CBOW model <cit.> and FastText <cit.>. Both methods use a simple neural network to create a dense representation of words and use the average of this representation for prediction. The word2vec CBOW tries to predict a word from the words that appear around it, while our CBOW model for ICD classification predicts ICD9 codes from the words of its input discharge summary. The model architecture consists of an embedding layer applied to all the words in a given input text [w_1,w_2,...,w_n], where w_i is a one-hot encoding vector of the vocabulary. E is the embedding matrix with dimension n_emb × V, where V is the size of the vocabulary and n_emb is the embedding size (set to 100). The embedded words are averaged into a fixed-size vector and are fed to a fully connected layer with a matrix W and bias b, where the output dimension is the number of labels. We use a sigmoid activation on the output layer so all values are in the range [0-1], and use a fixed threshold (0.5) to determine whether to assign a particular label. To train the model, we used the binary cross-entropy loss: loss(target, output) = -(target · log(output) + (1 - target) · log(1 - output)). The model computes:

Embedding = E · [w_1, w_2, ..., w_n]
Averaged = (1/n) ∑_{e ∈ Embedding} e
Prob = sigmoid(W · Averaged + b)

While the model is extremely lightweight and fast, it suffers from known bag-of-words issues: (i) it ignores word order; e.g., if a negation appears before a diagnosis mention, the model cannot learn this; (ii) multi-word expressions cannot be identified by the model, so different diagnoses that share lexical words will not be distinguished by the model.
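The following sketch reimplements the CBOW tagger for illustration. The paper does not state which framework was used for the CBOW and CNN baselines, so PyTorch is our choice here; the dimensions follow the text (n_emb=100, sigmoid outputs, threshold 0.5).

import torch
import torch.nn as nn

class CBOWTagger(nn.Module):
    def __init__(self, vocab_size, n_labels, n_emb=100):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, n_emb)
        self.out = nn.Linear(n_emb, n_labels)

    def forward(self, word_ids):                   # word_ids: (n_words,)
        averaged = self.emb(word_ids).mean(dim=0)  # fixed-size document vector
        return torch.sigmoid(self.out(averaged))   # one probability per label

model = CBOWTagger(vocab_size=121595, n_labels=6500)
loss_fn = nn.BCELoss()                             # binary cross-entropy over all labels
probs = model(torch.tensor([4, 17, 42]))
predicted = (probs > 0.5).nonzero()                # labels above the fixed threshold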
*CNN. To address the problems of the CBOW model, the next model we investigate is a convolutional neural network (CNN). A one-dimensional convolution applied to a list of embedded words can be considered a type of n-gram model, where n is the convolution filter size. The architecture of this model is very similar to the CBOW model, but instead of averaging the embedded words we apply a one-dimensional convolution layer with filter f, followed by a max-pooling layer. On the output of the max-pooling layer, a fully connected layer is applied, as in the CBOW model. We also experimented with deeper convolution networks and inception modules <cit.>, but they did not yield improved results. The model computes:

Embedding = E · [w_1, w_2, ..., w_n]
Conved = maxpool(Embedding ∗ f)
Prob = sigmoid(W · Conved + b)

where the max-pooling is taken over the token positions, separately for each channel. In our experiments, we used the same embedding parameters as in the CBOW model. In addition, we set the number of channels to 300 and the filter size to 3.
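A matching sketch of the CNN tagger, again in PyTorch as an assumed framework; following the equations above, no extra non-linearity is inserted between the convolution and the max-pooling.

class CNNTagger(nn.Module):
    def __init__(self, vocab_size, n_labels, n_emb=100, channels=300, width=3):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, n_emb)
        self.conv = nn.Conv1d(n_emb, channels, kernel_size=width)
        self.out = nn.Linear(channels, n_labels)

    def forward(self, word_ids):                      # word_ids: (n_words,)
        x = self.emb(word_ids).t().unsqueeze(0)       # (1, n_emb, n_words)
        conved = self.conv(x)                         # (1, channels, n_words - width + 1)
        pooled = conved.max(dim=2).values.squeeze(0)  # max over token positions
        return torch.sigmoid(self.out(pooled))

Each of the 300 channels can be read as a learned 3-gram detector, which is also what makes the model inspectable (see the Model Explaining Power section below).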
*HA-GRU. We now introduce the Hierarchical Attention-bidirectional Gated Recurrent Unit model (HA-GRU), an adaptation of Hierarchical Attention Networks <cit.> designed to handle multi-label classification. A Gated Recurrent Unit (GRU) is a type of Recurrent Neural Network. Since the documents are long (see Table <ref>; up to 13,590 tokens in the MIMIC III training set), a regular GRU applied over the entire document is too slow, as its recurrence must be unrolled over the full document length. Instead we apply a hierarchical model with two levels of bidirectional GRU encoding. The first bidirectional GRU operates over tokens and encodes sentences. The second bidirectional GRU encodes the document, applied over all the encoded sentences. In this architecture, each GRU is applied to a much shorter sequence compared with a single GRU. To take advantage of the property that each label is invoked by different parts of the text, we use an attention mechanism over the second GRU with different weights for each label. This allows the model to focus on the relevant sentences for each label <cit.>. To allow clarity into what the model learns and to enable error analysis, attention is also applied over the first GRU, with the same weights for all the labels. Each sentence in the input text is encoded to a fixed-length vector (64) by applying an embedding layer over all the inputs, applying a bidirectional GRU layer on the embedded words, and using a neural attention mechanism to encode the bidirectional GRU outputs (size 128). After the sentences are encoded into fixed-length vectors, we apply a second bidirectional GRU layer over the sentences, using different attention layers to generate an encoding specific to each class (128 × #labels). Finally we apply a fully connected layer with softmax for each classifier to determine whether the label should be assigned to the document. Training is achieved by using categorical cross-entropy on every classifier separately: loss(target, output) = -∑_x target(x) · log(output(x)). The model computes:

AttWeight(in_i, v, w) = v · tanh(w · in_i)
α_i = exp(AttWeight(in_i, v, w)) / ∑_j exp(AttWeight(in_j, v, w))
Attend(in, v, w) = ∑_i α_i · in_i

Embedding = E · [w_1, w_2, ..., w_n]
EncSents_j = Attend(GRU_words(Embedding), v_words, w_words)
EncDoc_label = Attend(GRU_sents(EncSents), v_label, w_label)
Prob_label = softmax(pw_label · EncDoc_label + pb_label)

where w_i is a one-hot encoding vector of the vocabulary of size V, E is an embedding matrix of size n_emb × V, GRU_words is a GRU layer with state size h_state, w_words is a square matrix (h_state × h_state) and v_words is a vector (h_state) for the sentence-level attention. GRU_sents is a GRU layer with state size h_state. w_label is a square matrix (h_state × h_state) and v_label is a vector (h_state) for the document-level attention for each class; pw_label is a matrix (h_state × 2) and pb_label is a bias vector of size 2 for each label. We implemented the model using DyNet <cit.>[Code available at <https://github.com/talbaumel/MIMIC>.].
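The attention operator, which appears twice in the model, can be written compactly; this is an illustrative PyTorch version of the Attend equations above (the released implementation uses DyNet), returning the normalized weights alongside the summary vector because they are reused for the per-sentence and per-word inspection described later.

def attend(states, v, w):
    # states: (steps, h) bidirectional GRU outputs; v: (h,); w: (h, h)
    scores = torch.tanh(states @ w.T) @ v    # unnormalized AttWeight per step
    alpha = torch.softmax(scores, dim=0)     # normalized attention weights
    return alpha @ states, alpha             # weighted sum and the weights

# word level, shared across labels:
#   sent_vec, word_alpha = attend(word_gru_states, v_words, w_words)
# sentence level, one (v, w) pair per label:
#   doc_vec_l, sent_alpha_l = attend(sent_gru_states, v_label[l], w_label[l])

Keeping separate v_label[l], w_label[l] parameters per label is what lets the model attend to different sentences for different codes, at the cost of memory linear in the number of labels.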
§ RESULTS
§.§ Model Comparison
To evaluate the proposed methods on the MIMIC datasets, we conducted the following experiments. In the first setting, we considered all ICD9 codes as our label set. We trained the SVM, CBOW, and CNN on the MIMIC II and on the MIMIC III training sets separately. All models were evaluated on the same test set according to Micro-F. In the second setting, we only considered the ICD9 codes rolled up to their 3-digit versions. Results are shown in Table <ref>. HA-GRU gives the best results in the rolled-up ICD9 setting, with a 7.4% and 3.2% improvement over the CNN and SVM, the second-best methods, on MIMIC II and MIMIC III respectively. In the full ICD9 scenario, all methods yield better results when trained on MIMIC III rather than on MIMIC II. This is expected considering the larger size of MIMIC III over II. We note that our CNN yields the best Micro-F when trained on MIMIC III, surpassing the HA-GRU by a small margin. In comparison to the previous work of <cit.>, our one-vs-all SVM yielded better results than their flat and hierarchy classifiers. This trend was confirmed when training on the new MIMIC III set, as well as when using the same evaluation metrics as <cit.>. We attribute these improved results both to the one-vs-all approach and to our tokenization approach.

§.§ Label Frequency
We also tested the effect of label frequency on the performance of the HA-GRU classifier. We recalculated precision and recall scores on subsets of labels. The subsets were created by sorting the labels by the frequency with which they appear in the MIMIC III dataset and binning them into groups of 50 labels. As such, bin 1 comprises the 50 most frequent ICD9 codes in the training set (with an average 12% frequency over the records in the training set), codes in bin 2 had an average 1.9% frequency, codes in bin 3 appeared in 1.1% of the records, up to bin 8, whose codes appear in 0.2% of the records in the training set. The effect can be seen in Figure <ref>. We note that the recall score drops much more dramatically than the precision as the label frequency decreases.

§.§ Model Explaining Power
We discuss how the CNN and HA-GRU architectures can support model explaining power.

*CNN. To analyze a CNN prediction we can test which n-grams triggered the max-pooling layer. Given a sentence with n words, we can feed it forward through the embedding layer and the convolution layer. The output of the convolution is a list of vectors, one per n-gram, each of the size of the number of channels of the convolution layer. We can identify what triggered the max-pooling layer by finding the maximum value of each channel. Thus, for predicted labels, one of the activated n-grams does include information relevant for that label (whether correct for true positive labels or incorrect for false positive labels). For example, in our experiments, for the label “682.6-Cellulitis and abscess of leg, except foot”, one of the activated n-grams detected was “extremity cellulitis prior”. This transparency process can also be useful for error analysis while building a model, as it can highlight True Positive and False Positive labels. However, it is difficult in the CNN to trace back the decisions for False Negative predictions.

*HA-GRU. For the HA-GRU model we can use the attention weights to better understand which sentences, and which words in those sentences, contributed the most to each decision. We can find which sentence had the highest attention score for each label, and, given the most important sentence, we can find which word received the highest attention score. For example, in our experiments for the label “428-Heart failure” we found that the sentence with the highest attention score was “d . congestive heart failure ( with an ejection fraction of dd % to dd % ) .”, while the token “failure” was found most relevant across all labels. Figure <ref> provides additional examples. Note that the “d” and “dd” tokens are from the pre-processing step, which mapped all numbers to pseudo-tokens.

Like in the CNN, we can use this process for error analysis. In fact, the HA-GRU model explains predictions with greater precision, at the sentence level. For instance, we could explore the following False Positive prediction: the model assigned the label “331-Other cerebral degenerations” to the sentence “alzheimer 's dementia .”. We can see that the condition was relevant to the medical note, but was mentioned under the patient's past medical history (and not as a current problem). In fact, many of the False Positive labels under the HA-GRU model were due to mentions belonging to the past medical history section. This suggests that the coding task would benefit from a deeper architecture, with attention to discourse-level structure. In contrast to the CNN, the HA-GRU model can also help analyze False Negative label assignments. When we explored the False Negative labels, we found that in many cases the model found a relevant sentence, but failed to classify correctly. This suggests the document-level attention mechanism is successful. For instance, for the False Negative “682-Other cellulitis and abscess”, the most attended sentence was “... for right lower extremity cellulitis prior to admission ...”. The false positive codes for this sentence included “250-Diabetes mellitus” and “414-Other forms of chronic ischemic heart disease”. We note that in the case of cellulitis, it is reasonable that the classifier preferred other, more frequent codes, as it is a common comorbid condition in the ICU.[Full visualizations of sample discharge summaries are provided at <https://www.cs.bgu.ac.il/~talbau/mimicdemo>]
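The CNN tracing procedure described at the start of this subsection amounts to an argmax over the pooling positions; a sketch against the hypothetical CNNTagger from the Methods section is shown below (the attention-based inspection for HA-GRU simply reads off the alpha vectors returned by the attend sketch).

def activated_ngrams(model, word_ids, id2word, width=3):
    # for each channel, recover the input n-gram that triggered the max-pooling layer
    x = model.emb(word_ids).t().unsqueeze(0)
    conved = model.conv(x).squeeze(0)        # (channels, n_positions)
    best = conved.argmax(dim=1).tolist()     # winning position per channel
    return [" ".join(id2word[i.item()] for i in word_ids[p:p + width])
            for p in best]

Intersecting these n-grams with the channels that most influence a given label's output weights yields label-specific evidence such as the “extremity cellulitis prior” example above.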
§ CONCLUSION
We investigated four modern models for the task of extreme multi-label classification on the MIMIC datasets. Unlike previous work, we evaluate our models on all ICD9 codes, thus making sure our models could be used for real-world ICD9 tagging. The tokenization step, mapping rare variants using edit distance, improved results for the CBOW and CNN models by ~0.5%, highlighting the importance of handling data noise in real-world settings. The HA-GRU model not only achieves the best performance on the task of rolled-up codes (55.86% F1 on MIMIC III, a ~2.8% absolute improvement over the best SVM baseline) but is also able to provide insight for future work, such as using the discourse-level structure available in medical notes, which has not been exploited before. The ability to highlight the decision process of the model is important for the adoption of such models by medical experts. On the sub-task of MIMIC II, which includes a smaller training dataset, HA-GRU achieved a ~7% absolute F1 improvement, suggesting it requires less training data to achieve top performance, which is important for domain adaptation efforts when applying such models to patient records from other sources (such as different hospitals).

§ ACKNOWLEDGEMENTS
This work is supported by National Institute of General Medical Sciences Grant R01GM114355 (NE) and the Frankel Center for Computer Science.
{ "authors": [ "Tal Baumel", "Jumana Nassour-Kassis", "Raphael Cohen", "Michael Elhadad", "No`emie Elhadad" ], "categories": [ "cs.CL", "cs.AI" ], "primary_category": "cs.CL", "published": "20170927154607", "title": "Multi-Label Classification of Patient Notes a Case Study on ICD Code Assignment" }
The impressive ability of children to acquire language is a widely studied phenomenon, and the factors influencing the pace and patterns of word learning remain a subject of active research. Although many models predicting the age of acquisition of words have been proposed, little emphasis has been directed to the raw input children receive. In this work we present a comparatively large-scale multi-modal corpus of prosody-text aligned child-directed speech. Our corpus contains automatically extracted word-level prosodic features, and we investigate the utility of this information as predictors of age of acquisition. We show that prosody features boost predictive power in a regularized regression, and demonstrate their utility in the context of multi-modal factorized language models trained and tested on child-directed speech.

§ INTRODUCTION
Out of the many challenges infants face throughout their first years, the task of learning words of a language, i.e., mapping seemingly arbitrary sounds to concepts in the world, is particularly stunning. How do children accomplish this, and what factors influence the characteristics of this process? This work aims to shed light on these questions, approaching them from the perspective of predicting the age of acquisition of words. Diverse research efforts have addressed this question in recent years by collecting data sets revealing the time course in which language is acquired <cit.>, or by investigating the influence of various linguistic (e.g., word frequency or mean length of utterance) and semantic (e.g., babiness or concreteness) factors on word age of acquisition <cit.>.

Past research efforts have been limited by two major shortcomings. The first one relates to the reference data on age of acquisition of words, which traditionally has consisted of adults' estimates of the age at which they learnt a particular word (e.g., Kuperman:2012). Clearly these estimates are highly influenced by subjective inaccuracies. A more recent approach <cit.> has used data from the MacArthur-Bates Communicative Development Index (CDI; Fenson:2007), which comprises large sets of parent-filled questionnaires of the words their child understands and/or produces at a particular age <cit.>. Such questionnaires exist for a large number of children across a variety of mother tongues, and consequently provide a reliable data set of reasonable scale for predictive models. We will evaluate our models against this resource in this work.

The second shortcoming relates to the kind of predictors of age of acquisition which have been investigated in the past. Recent work <cit.> has investigated a variety of predictors including input-derived properties such as word frequency or mean length of utterance, as well as semantic properties such as concreteness and babiness.
None of these predictors, however, directly considers the full characteristics of the child-directed speech the child receives. Here we advance prior work by investigating the impact of raw language input on age of acquisition.[Supplementary material (code and data) is available at <https://github.com/ColiLea/prosodyAOA>] In particular we are interested in the prosodic properties of individual words in child-directed speech. We derive generic sets of prosodic features (eGemaps features; Eyben:2016) from large amounts of raw child-directed data, and investigate their utility in predicting the age of acquisition of words. We also incorporate such features into a language model, which we view as a simplistic model of word learning, and in turn investigate the impact of prosodic features in this framework. We summarize our contributions below.

* We construct a corpus of word-level aligned text-speech data for two portions of the English CHILDES corpus. From this corpus, we extract sets of word-type-level prosodic eGemaps features. We present a pipeline utilizing open-source toolkits and established prosodic feature sets.
* We examine the power of this multi-modal corpus of raw child-directed speech as a predictor of age of acquisition, and evaluate our predictors against a recent large-scale gold standard of CDI parent questionnaires.
* We incorporate the prosodic eGemaps features into language models and show that prosodic information can reduce the perplexity of words in the context of the CHILDES corpus.
* In contrast to previous work we (a) examine predictive power on different sub-corpora of CHILDES and (b) adopt a cross-validation paradigm as well as regularized (Ridge) regression in an attempt to avoid overfitting our regression models and to ensure generalizability of our results.

§ A CORPUS OF WORD-LEVEL PROSODY FEATURES IN CHILD-DIRECTED SPEECH
Even though the importance of multi-modal experience for language acquisition is fairly uncontroversial, relevant models have been largely trained and tested on text transcriptions of child-directed speech from the CHILDES corpus. Some exceptions include attempts to incorporate visual or pragmatic information in the input <cit.>. This information is typically derived from manual annotations of videos accompanying some of the CHILDES data, is inherently expensive to obtain, and results in small data sets.

A large portion of the English CHILDES corpus, however, comes with the raw audio recordings of parent-child interactions together with their orthographic transcripts, as well as utterance-level text-audio alignments. From this data we derive word- and phone-level alignments and use this corpus to extract word-level prosodic characteristics (cf. Section <ref> for more details). Given recent advances in automatic speech recognition and downstream tasks like forced alignment, the aligned orthographic-audio CHILDES data provide a fruitful resource for large-scale investigations of the influence and characteristics of prosody in child-directed speech. Elsner:2017 describe a similar approach to ours for aligning text-audio child-directed data; their approach was developed independently of and concurrently with our work.

We align and extract features for two CHILDES corpora. First, we use the Brent corpus <cit.>, which includes child-directed speech to 16 infants aged between 9 and 15 months (∼ 154,700 child-directed utterances).
In addition, we use the Providence corpus <cit.>, which comprises child-directed speech to six children roughly between ages 1;00 and 4;00 (∼ 259,800 child-directed utterances). For each child in our corpora we consider child-directed language by its main interactor (always the mother) and remove any utterance whose orthographic transcription includes noise markers.

§.§ Word-level text-audio alignment of English CHILDES data
We use the Montreal Forced Aligner (MFA) tool <cit.> to align orthographically transcribed child-directed speech to the corresponding audio snippets on the word level. The MFA tool also provides us with phoneme-level alignments, which we have not made use of in this work. MFA is built on Kaldi, a state-of-the-art speech recognition toolkit <cit.>. Crucially, MFA takes into account speaker information and optimizes its alignments for individual speakers. Our corpus, consisting of individual mother-child dyads, is thus well suited to this approach. MFA takes as input pairs of text and audio (wav) files containing the same information in the orthographic and auditory modality, respectively. We provide one such pair for each child-directed utterance (taking utterance-level alignments as provided in the CHILDES corpus). MFA returns a TextGrid file for each input pair, containing start and end time stamps for each word in the utterance, as well as for each phoneme in each word.

§.§ Prosodic Feature Extraction
We are now in a position to extract prosodic features of individual spoken words. Eyben:2016 recently presented sets of core prosody features. Features were selected based on their empirical and theoretical value in prior work, as well as robust automatic computability, and include standard features based on F0 and the F1–F3 formants, spectral shape and rhythm, intensity, and MFCC features, among others. We use their extended feature set, which comprises a total of 88 features. For feature extraction, we use the scripts provided by the authors, implemented within the OpenSMILE toolkit <cit.>. Their scripts take as input the TextGrid files produced by the MFA forced aligner, as well as the corresponding word-level audio snippets. They return an 88-dimensional feature vector for each spoken word in our corpus.

§.§ Evaluation
In order to estimate the quality of the automatically extracted eGemaps features, we test their discriminative power on established linguistic classes (parts of speech). Concretely, we map the 88-dimensional eGemaps feature vectors for 600 target words into 2-dimensional space using principal component analysis (PCA). We then inspect the extent to which words are clustered by their part of speech. We derive word-type-level eGemaps representations from the word-token-level eGemaps in our corpus by averaging all token-level features for each type. We combine the data from the Brent and Providence corpus for this analysis.

Figure <ref> displays the resulting space for various subsets of part-of-speech classes, which are color-coded. Each dot in the space corresponds to a target word from the CDI data set and is labeled with its word. The figure demonstrates that the transformed features are able to distinguish nouns from function words (right) and, to some degree, from verbs (centre). We also show clusters over all words in the CDI AoA data set (left), where nouns (yellow), verbs (violet) and function words (black) are well separated along the first principal component. Additionally we observe that phonologically similar words are clustered together. For example, the words stick, cheek, chalk, sick, kick all appear in a cluster in the top centre, and the words melon, meow, mailman, moon appear in the bottom centre of the space. We consider these results as initial support for the quality of our prosodic eGemaps features of child-directed speech.
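The evaluation step is a few lines with scikit-learn; this sketch assumes type_vectors is an (n_words, 88) array of token-averaged eGemaps features, and the standardization before PCA is our choice rather than a detail stated above.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import scale

def pos_projection(type_vectors):
    # project standardized 88-dimensional feature vectors onto the
    # first two principal components for visual inspection
    return PCA(n_components=2).fit_transform(scale(type_vectors))

# coords = pos_projection(type_vectors)   -> (n_words, 2); plot colored by POS tag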
For example, the words stick, cheek, chalk, sick, kick all appear in a cluster in the top centre, and the words melon, meow, mailman, moon appear in the bottom centre of the space. We consider these results as initial support for the quality of our prosodic eGemaps features of child-directed speech.

§ EXPERIMENT 1: PROSODIC FEATURES AS PREDICTORS OF AGE OF ACQUISITION

In this experiment we test the word-level eGemaps features as predictors of age of acquisition. We fit regularized ridge regression models including various subsets of predictors. Ridge regression is an extension of standard linear regression which augments the sum of squares objective with a regularization term on the magnitude of the coefficients. This (a) prevents the model from fitting the data too closely and generalizing poorly to new data; and (b) to some extent alleviates the problem of feature collinearity.[We investigated collinearity between eGemaps features and observed strong correlations only between a few selected feature pairs.]

We evaluate our models using 10-fold cross validation with random 90% train and 10% test splits. Note that this method departs from approaches in previous work <cit.> where models were fitted to and tested on the same data set. We emphasize the importance of exploring any model's generalization ability by testing models on unseen data.

§.§ Experiments and Results

We fit ridge regression models to predict the age of acquisition of 600 target words in months. We obtain word-type level eGemaps feature representations by averaging feature values across all (token) observations of a target word in our corpus. In addition we use word frequency as a predictor, which has been shown to be a strong predictor of age of acquisition in previous work <cit.>. We scale all predictors to zero mean and unit variance and map frequency into log space.[Many of the eGemaps features are not strictly positive, so we cannot map them to log space, and we refrain from any other transformation for the time being.] Scores are reported as mean squared error (MSE) on the test set, so lower is better.

§.§.§ Exploratory Egemaps Feature Analysis

In this set of experiments we explore the 88 individual eGemaps features as well as word frequency as predictors of AoA. Figure <ref> shows our results. We report results on our three corpora (Brent, Providence, both). The top row in Figure <ref> displays the mean squared error achieved by each individual eGemaps prosody feature as a predictor, plus the performance of word frequency as a predictor (leftmost data point, emphasized by the horizontal red lines). For the Brent corpus, frequency predicts AoA better than any prosodic feature, an expected result given the strong performance of frequency in prior work on AoA prediction <cit.>. For the Providence (and consequently the combined) corpus, however, frequency is a weaker predictor than for the Brent corpus and is outperformed by selected prosodic features.

The bottom row of Figure <ref> shows the MSE performance of the frequency predictor in combination with each individual eGemaps prosody feature. We can observe that a large number of prosodic features enhance prediction, i.e., explain variance above and beyond pure frequency-based models.

§.§.§ Egemaps Feature Selection

Based on our exploratory study, for both the Brent corpus and the Providence corpus we select the 10 eGemaps features which, combined with the frequency predictor, lead to the best AoA prediction performance in terms of MSE.
Out of these 20 features, seven occur in both corpus-specific feature lists, so that we obtain a final set of 13 individual features. The features are listed in Table <ref>.

Figure <ref> shows mean squared error results of ridge regression models incorporating frequency (f) (leftmost data point, emphasized through the horizontal red bar) and one of our selected eGemaps features (p1-p13; cf. Table <ref> for details) across the three corpora, as well as all features combined. We see that all eGemaps based predictors lower MSE compared to using frequency only. We furthermore observe that combining all 13 eGemaps features and frequency (f_all_13) further improves prediction. Finally we combine frequency with our full set of 88 eGemaps features (f_all_88) which, as expected, results in the best MSE. These results are stable across corpora.

§.§ Discussion

This experiment investigated the utility of automatically extracted word-level prosodic features from child-directed speech as predictors of the age of acquisition of words. We utilized a generic set of 88 prosodic features which have been previously proposed in the literature <cit.>. We conducted an exploratory analysis of all 88 features as predictors of age of acquisition, and selected the ten predictors which most notably boosted predictive performance over word frequency as the sole predictor, independently for each of our two training corpora. The fact that seven out of the 20 predictors (35%) overlapped across the two corpora suggests that the selected feature subset is robust and generalizable across other data sets and tasks.

Regression models with large sets of potentially noisy predictors, like our automatically obtained prosody features, are inherently susceptible to discovering spurious regularities in the data set and overfitting the training data without making meaningful predictions on unseen data. We address this issue by conducting all regression experiments within the framework of cross validation, i.e., random splits of the full training data set into train and test sets. We furthermore evaluate all our models on two data sets of child-directed speech, which can be viewed as two independent samples from the overall distribution of child-directed data. Overall we observe similar patterns across the corpora, further supporting the generalizability of our results.

While we could show that eGemaps features explain some of the variance of age of acquisition estimates beyond what is explained by frequency, the way eGemaps features were used implies a significant information loss. In particular, we averaged feature values across all mentions of any predicted word throughout the whole corpus. Ideally, we would like to make use of each individual word mention's prosodic features and their prosodic contexts. To this end, in the next set of experiments, we incorporate the eGemaps features into a (highly simplified) model of word learning. The model estimates word probabilities based on word mentions in local linguistic context in a large corpus of child-directed speech. We define the context such that it encodes both lexical and prosodic information.

§ EXPERIMENT 2: ENRICHING LANGUAGE MODELS WITH PROSODY FEATURES

In addition to employing prosody features directly as predictors of age of acquisition, we are interested in using such features in statistical models of word learning. As a very simple first attempt we investigate a classical n-gram language model as a highly constrained model of word learning.
N-gram language models can be viewed as purely syntactic models of word acquisition. The model is exposed to natural language sentences and learns conditional probability distributions over words given the immediately preceding context.

§.§ Factored Language Models

In a classical n-gram model this context is represented as the n preceding words in a sentence. There have been numerous attempts, however, to enrich this context with extra-textual information. One such attempt are factored language models <cit.>. Factored language models represent each word as a bundle of factors consisting of a set of user-defined values, including the word string itself, the word's stem, its part of speech or derived prosodic features (among many others). Prosodic features have been previously incorporated in factored language models: Huang:2007 show that prosodic context in addition to word context improves the perplexity of language models on conversational data from meetings. Here, we integrate prosodic features in language models over child-directed speech. We hypothesize that prosodic features will add complementary disambiguating information to a lexical context and consequently lead to decreased perplexity over unigram or purely lexical n-gram models.

Factored language models allow the relevant context to be flexibly specified, and generalized backoff methodologies have been developed which allow choosing which conditioning factors to drop and keep at each backoff level of the model. The SRILM language modeling toolkit <cit.> offers an interface for defining such factored language models and backoff strategies, and we implement all our models in this framework.

§.§.§ Representation of Prosody Features

N-gram language models are Markov models over a discrete state space. As such, all factors must be represented symbolically. Following prior work <cit.>, we transform our 88-dimensional eGemaps prosody feature vectors into symbolic representations using vector quantization (a minimal sketch of this step is given below). We take as input the original eGemaps features for each word in our corpus and perform k-means clustering on this set. We then map each word's feature representation to its closest centroid, turning the vector into a symbolic label ∈{1...k}. We experiment with different k below.

§.§ Experiments and Results

We implement seven different language models conditioning on different contexts, as illustrated with an example in Table <ref>. As a baseline we implement a unigram model (uni) with no contextual information. We compare the baseline to models conditioning on increasing lexical context, i.e., a bigram model conditioning on the immediately preceding lexeme (bi), and a trigram model considering the two preceding lexemes (tri). Finally, we add prosodic factors to the bigram and trigram models in the form of either only the currently predicted word's prosodic class (prosUni) or both the currently predicted and the immediately preceding words' prosodic classes (prosBi). Table <ref> illustrates our models with a text example.

We experiment with k ∈{50,100,500} target clusters for prosody vector quantization in order to investigate the dependence of our model on this parameter. We train language models on the Brent corpus (106,647 sentences), the Providence corpus (134,690 sentences), and a combination of both corpora (267,337 sentences).[Sentence counts do not add up because we remove 26,000 sentences from each corpus (13,000 each for dev and test), and do so after concatenating the full Brent and Providence data for the combined corpus.]
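As a concrete illustration of the vector quantization step described above, the following is a minimal sketch assuming a standard k-means implementation; the function name, parameter values and random stand-in data are illustrative only, not the exact pipeline used here.

```python
import numpy as np
from sklearn.cluster import KMeans

def quantize_prosody(features, k=100, seed=0):
    """Map each 88-dimensional eGemaps vector to a symbolic label in {0..k-1}.

    features : (n_tokens, 88) array of word-token prosody vectors.
    Returns the fitted clustering model and one integer label per token.
    """
    km = KMeans(n_clusters=k, random_state=seed).fit(features)
    return km, km.predict(features)

# Example: 10,000 word tokens with 88 prosodic features each.
X = np.random.rand(10000, 88)        # stand-in for real eGemaps vectors
model, labels = quantize_prosody(X, k=50)
# `labels` can now serve as the symbolic prosodic factor of each word token
# in the factored language model training files.
```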
After estimating our models on a training corpus (using SRILM) we report both training set perplexity scores and perplexity scores on a test corpus of 13,000 utterances sampled from both Brent and Providence. This test set remains the same across all training corpora and models. Table <ref> presents results when evaluating perplexity on the held-out test set of 13,000 utterances, while Table <ref> summarizes the training set perplexities of the different language models.

Unsurprisingly, the baseline unigram model receives the highest perplexity scores. The purely lexical bigram and trigram models significantly improve perplexity. Training set perplexity consistently improves with richer contexts, both in terms of lexical and prosodic information. Adding prosodic features reliably improves over their non-prosodic counterparts (e.g., tri_prosUni and tri_prosBi outperform the tri model). Except for the combined corpus, including prosodic bigrams improves performance over prosodic unigrams.

Results are less consistent for test set perplexity. Adding prosodic features further improves perplexity over the non-prosodic counterpart in the case of the trigram model for the Providence corpus and the combined corpora: the tri_prosUni models, combining a trigram language model with target word prosodic features, improve language model performance. The bigram models do not benefit from additional prosodic information. In addition, we do not observe an analogous improvement for the Brent corpus. This might be an artifact of our test data set, which was sampled from the combined corpus in which Providence dominates, such that the model trained on Brent is effectively tested on `out of domain' data.

§.§ Discussion

Adding prosodic context to textual context in n-gram language models trained on CHILDES data consistently improves training set perplexity scores, but does not yield consistent improvements in test set perplexity. Our experimental setup is influenced by several factors. First, we use vector quantization to convert real-valued prosodic feature vectors into prosodic symbols. This process likely leads to a substantial loss of information. Secondly, vector quantization involves k-means clustering and consequently requires selecting a parameter k. We test three values, k ∈{50,100,500}, and observe very similar patterns in the results, suggesting that the results do not strongly depend on this parameter. We performed vector quantization on the full 88-dimensional prosodic feature vector. It is possible that a subset of those features is most informative and could be quantized with less of an information loss.

Training set perplexity improved consistently and significantly with added prosodic features. We also observed improvements in test set perplexity; however, the trends were less consistent. This suggests that prosodic features may not generalize (as well as lexical features do) across data sets. In the cognitive context of the current investigation we believe that the training set results are still revealing. Our overall results suggest a clear trend of prosodic features being able to improve language model perplexity when trained on child-directed data.

In recent years neural network language models have been introduced and applied to various tasks with great success, largely replacing n-gram language models. In addition to their higher predictive power, neural language models are more amenable to real-valued input vectors than n-gram language models.
Few works exist which combine prosodic and lexical information in neural network language models (but see Reddy:2015). We plan to pursue this direction in future work.

§ EXPERIMENT 3: LANGUAGE-MODEL REPRESENTATIONS AS PREDICTORS OF AGE OF ACQUISITION

The overall motivation behind our work is to investigate the influence of prosodic features in child-directed speech on the age of acquisition of words. To this end, in experiment 2 we trained language models over child-directed language containing both lexical and prosodic information. In this experiment we view those language models as (simplistic) models of word learning and investigate the representations learnt by such models as predictors of age of acquisition.

For each word in the age of acquisition data set we approximate the extent to which a language model has learnt a specific word after training. We quantify this extent by the average probability assigned by the model to the word in either the training data or a test data set. We use this averaged probability as a predictor of age of acquisition. We test the same set of language models conditioning on varying lexical and prosodic context as in Experiment 2, as listed in Table <ref>.

§.§ Experiments and Results

As in experiment 1, we fit ridge regression models to predict age of acquisition in months for a target set of 600 words. We test word frequency as a predictor, as well as word probabilities as estimated by each of our language models (cf. Table <ref>), and various combinations thereof. We scale all predictors to zero mean and unit variance and map them into log space. Scores are again reported in terms of mean squared error (lower is better). As before, we train and test our models using 10-fold cross validation with random 90% train / 10% test splits of the data on the word level; a minimal sketch of this regression setup is given at the end of this section.

Figure <ref> displays results when using language model-induced word probabilities derived from a test set. Figure <ref> summarizes results using language model-induced word probabilities derived from the training set. Models are listed along the x-axis. We combine the word frequency predictor (f) with word representations derived from our six language models of varying context. Asterisks in the model names indicate that all versions of a context type are used (e.g., f_*_prosUni uses as predictors representations from both purely lexical language models, i.e., unigram and bigram).

First we observe that only the test set-derived word probabilities (Figure <ref>) reliably explain variance in the age of acquisition data beyond the frequency predictor. Training set derived probabilities (Figure <ref>) are not informative in this experiment. Below, we focus on the test set probability results in Figure <ref>.

Throughout all corpora, adding language model features improves mean squared error compared to using frequency as the sole predictor. This is encouraging, demonstrating that the induced language model word probabilities explain variance in the age of acquisition data beyond the variance explained by the already strong predictor of word frequency. The contribution of prosodic information is less clear, and patterns vary across corpora. For the Providence corpus (centre) and the combined corpora (right), language models including prosodic features can lead to reduced mean squared error compared to the purely lexical models f_bi and f_tri. Finally, combining all predictors (rightmost point; all) consistently leads to the best MSE, with the exception of the test set-derived probabilities for the Brent corpus.
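The ridge regression and cross-validation setup used in Experiments 1 and 3 can be summarised in the following minimal sketch; the predictor matrix, target values and parameter choices are illustrative assumptions rather than the exact implementation.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import ShuffleSplit
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def cv_mse(X, y, n_splits=10, alpha=1.0, seed=0):
    """Mean test-set MSE of a ridge model over random 90% train / 10% test splits."""
    model = make_pipeline(StandardScaler(), Ridge(alpha=alpha))
    cv = ShuffleSplit(n_splits=n_splits, train_size=0.9, random_state=seed)
    errors = []
    for train, test in cv.split(X):
        model.fit(X[train], y[train])
        pred = model.predict(X[test])
        errors.append(np.mean((pred - y[test]) ** 2))
    return np.mean(errors)

# Example: log frequency plus one averaged LM log-probability per word type
# as predictors of AoA (in months) for 600 target words.
X = np.random.rand(600, 2)       # stand-in for [log_freq, mean_lm_logprob]
y = 12 + 24 * np.random.rand(600)
print(cv_mse(X, y))
```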
§.§ Discussion

The language model derived predictors, i.e. word type probabilities obtained by averaging word token probabilities over the test data set, strongly depend on the characteristics of this test data set. In particular, we expect estimates for infrequent words to vary strongly based on their frequency and context of occurrence in the test set. As noted in the discussion of experiment 2, the test set from which word probabilities were derived is dominated by Providence data. This makes the estimation of parameters based on the Brent corpus particularly brittle.

Experiment 3 builds on representations induced by language models on fairly small training corpora. In addition, the underlying prosodic information in Experiment 2 was highly transformed and filtered through vector quantization. These inaccuracies accumulate in our experimental setup and are likely a reason for the inconclusive results.

§ CONCLUSIONS

Early word learning strongly depends on a number of linguistic and extra-linguistic factors. Apart from the content of child-directed language, pragmatic cues such as speech prosody, gaze and gestures play an important role in guiding children's attention towards correct word-meaning mappings. In this work we explored the utility of prosody in raw, large-scale child-directed speech as a predictor of age of acquisition.

The motivation behind our work was two-fold. First, a large portion of available child-directed corpora is accompanied by audio recordings of the interactions. This allowed us to construct a large-scale multi-modal corpus of transcribed speech and its prosodic features. Secondly, prior work on predicting age of acquisition used features which were derived from child language input (e.g., frequency or mean length of utterance), but not the characteristics of the raw input itself. Here we bridge this gap.

We conducted three sets of experiments. First, we used the automatically obtained prosodic features of each word type (as averaged over token occurrences) from raw child-directed speech. We explored these features as predictors of age of acquisition and were able to identify a subset of those features which seemed particularly useful for this task. Experiments 2 and 3 utilized token-level prosodic information rather than marginalizing it into a single type-level representation. In experiment 2, we trained language models with different conditioning contexts on child-directed data. Results showed that conditioning on prosodic information in addition to lexical context can improve language model perplexity. Finally, experiment 3 viewed the estimated language models as (simplistic) models of language learning and used the induced word representations, in the form of word probabilities assigned by the language models, as predictors in regression models of age of acquisition.

To the best of our knowledge we present the first large-scale data-driven investigation of the prosodic features of child-directed speech as predictors of the age of acquisition of words. While our initial models are rather simplistic and the results mixed, our work opens up various interesting directions for future investigation. First, we would like to incorporate our prosodic features into richer models of word learning. One could, for example, augment the cross-situational models of Fazly:2010 with prosodic features as a more realistic mechanism to construct extra-linguistic scenes. We also plan to further investigate the utility of prosodic information for language models on child-directed speech.
Using n-gram models forced us to discretize our real-valued prosodic feature vectors. More recent neural network language models can incorporate such feature vectors more naturally, and provide richer methods for using and selecting relevant aspects of such features. Investigating the utility of multi-modal information through neural language models is an exciting future direction to pursue.
We present a study of photometric redshift performance for galaxies and active galactic nuclei detected in deep radio continuum surveys. Using two multi-wavelength datasets, over the NOAO Deep Wide Field Survey Boötes and COSMOS fields, we assess photometric redshift (photo-z) performance for a sample of ∼4,500 radio continuum sources with spectroscopic redshifts relative to those of ∼63,000 non radio-detected sources in the same fields. We investigate the performance of three photometric redshift template sets as a function of redshift, radio luminosity and infrared/X-ray properties. We find that no single template library is able to provide the best performance across all subsets of the radio detected population, with variation in the optimum template set both between subsets and between fields. Through a hierarchical Bayesian combination of the photo-z estimates from all three template sets, we are able to produce a consensus photo-z estimate which equals or improves upon the performance of any individual template set.

§ INTRODUCTION

Photometric redshifts are a vital tool for estimating the distances to large samples of galaxies observed in extragalactic surveys. At almost all survey scales, from large area surveys such as the Sloan Digital Sky Survey <cit.> or the Dark Energy Survey <cit.> to deep pencil-beam Hubble Space Telescope (HST) surveys such as CANDELS <cit.>, it is impractical to obtain spectroscopic redshifts for more than a small fraction of photometrically detected sources. For the vast majority of sources that are currently detected or will be detected in future photometric surveys, we are therefore reliant on photometric redshift techniques to estimate their distance or extract information about their intrinsic physical properties <cit.>.

While this statement is applicable to photometric surveys across all of the electromagnetic spectrum, the latest generation of deep radio continuum surveys by Square Kilometre Array (SKA) precursors and pathfinders such as the Low Frequency Array <cit.>, the Australian SKA Pathfinder <cit.> and MeerKAT <cit.> pose a new challenge. Probing to unprecedented depths, these surveys will increase the detected population of radio sources by more than an order of magnitude and probe deep into the earliest epochs of galaxy formation and evolution <cit.>.

The population of radio detected sources is itself extremely diverse, with radio emission tracing both black hole accretion in active galactic nuclei (AGN) and star formation activity. With the majority of these sources lacking useful radio morphology information (being unresolved in radio continuum observations), classifying and separating the various sub-populations of radio sources will rely on photometric methods <cit.>. Accurate and unbiased photometric redshift estimates for the radio source population will therefore be essential for studying the faint radio population and achieving the scientific goals of these deep radio continuum surveys.

Since the publication of the first widely used photometric redshift (photo-z) estimation tools <cit.>, both the accuracy of photo-z estimates and our understanding of their biases and limitations have significantly improved.
The development and testing of photometric redshift techniques have been driven not just by studies of galaxy evolution at high redshifts <cit.>, but also by the next generation of tomographic weak lensing cosmology surveys <cit.>; specifically, by the need for computationally fast, accurate and unbiased photometric redshifts for unprecedented samples of galaxies.

Detailed studies have shown that while it is possible to produce accurate photo-zs for X-ray selected AGN <cit.>, care must be taken to correct for the effects of optical variability on photometric data which have been observed over long time periods. Similarly, various studies have been increasingly successful in estimating accurate photometric redshifts for large photometric quasar samples such as the SDSS <cit.>, e.g. <cit.>, <cit.>, <cit.>, <cit.>, <cit.> and <cit.>. However, fundamental to all of these efforts is the large representative spectroscopic sample upon which the empirical redshift estimation algorithms are trained. Several studies have illustrated that the AGN populations selected at different wavelengths (X-ray, optical, IR, radio) are often distinct, with only some overlap between the different selection methods <cit.>. The optimal photometric redshift techniques and systematics identified for one particular AGN population are therefore not necessarily applicable to an AGN sample selected by other means.

In this paper we aim to quantify some of these systematic effects and find the optimum strategy for estimating accurate photometric redshifts for radio selected populations. Specifically, we want to understand how the photometric redshift accuracy of radio sources varies as a function of radio luminosity and redshift. Do the current methods and optimization strategies developed for `normal' galaxies or other AGN populations in optical surveys extend to radio selected galaxies? Finally, based on the results of these tests, we wish to construct an optimised method which can then be applied successfully to other survey fields in preparation for the next generations of radio continuum surveys <cit.> and the millions of radio sources they will detect <cit.>.

The paper is structured as follows: Section <ref> outlines the multi-wavelength datasets used in this analysis, including details of the optical data used for the photometric redshift estimates and the corresponding radio continuum and spectroscopic redshift datasets. Section <ref> then describes how the individual photometric redshift estimates used in this comparison were determined, together with the choice of software, templates and settings used. Section <ref> presents the detailed comparison and analysis of these photo-z methods in the context of deep radio continuum surveys. In Section <ref> we outline the improved photometric redshift method devised for the LOFAR survey. Section <ref> presents a discussion of the results presented in Section <ref> and their implications for future galaxy evolution and cosmology studies with the forthcoming generation of radio continuum surveys. Finally, Section <ref> presents our summary and conclusions. Throughout this paper, all magnitudes are quoted in the AB system <cit.> unless otherwise stated. We also assume a Λ-CDM cosmology with H_0 = 70 km s^-1 Mpc^-1, Ω_m=0.3 and Ω_Λ=0.7.

§ DATA

To maximise the parameter space explored in this analysis, we make use of two complementary datasets. Firstly, we make use of the extensive multi-wavelength data over the large ∼9 deg^2 NOAO Deep Wide Field Survey in Boötes <cit.>.
Secondly, we also include data from the COSMOS field, which extends to significantly fainter depths across all wavelengths but over a smaller ∼2 deg^2 area.

§.§ `Wide' field - Boötes Field

§.§.§ Optical photometry

The Boötes photometry used in this study is taken from the PSF matched photometry catalogs of the available imaging data in the NDWFS <cit.>. The full catalog covers a wide range of wavelengths, spanning from 0.15 to 24 μm. The photometry included in the subsequent analysis is based primarily on the deep optical imaging in the B_W, R and I bands from <cit.>. At optical wavelengths there is also additional z-band coverage from the zBoötes survey <cit.>. Near-infrared observations of the field are provided by NEWFIRM observations at J, H and K_s <cit.>. Filling in two critical wavelength ranges not previously covered by the existing NOAO Boötes data is additional imaging in the U_spec (λ_0 = 3590 Å) and y (λ_0 = 9840 Å) bands from the Large Binocular Telescope <cit.>, covering the full NDWFS observational footprint. Finally, IRAC observations <cit.> at 3.6, 4.5, 5.8 and 8 μm are provided by the Spitzer Deep Wide Field Survey (SDWFS, ).

Although the available GALEX NUV data cover a significant fraction of the NDWFS field and reach depths comparable to the NOAO B_W data, the large point-spread function (PSF) with full-width half maximum (FWHM) equal to ∼4.9 arcsec could result in significantly increased source confusion relative to the other bands used in the catalog. As such, the NUV data were not included for the purposes of photometric redshift estimation.[Initial tests with eazy also found that including the NUV data made no appreciable improvement.] Finally, we also include the u, g, r, i and z imaging from SDSS <cit.>. Although the limiting magnitudes reached by the SDSS photometry are not as faint as the NDWFS optical dataset at comparable wavelengths, the different central wavelengths of the SDSS filters provide valuable additional colour information for bright sources and are therefore worth including.

The matched aperture photometry in the catalogs produced by <cit.> is based on detections in the NOAO I-band image as measured by SExtractor <cit.>. Forced aperture photometry was then performed on each of the available UV to infrared images for a range of aperture sizes. The optical/near-infrared images were all first gridded to a common pixel scale and smoothed to a matched PSF. The common PSF chosen was that of a Moffat profile with β = 2.5 and a FWHM equal to 1.35 arcsec for the B_W, R, I, y, H and K_s filters, and a larger 1.6 arcsec for u, z and J. For the matched catalog for photometric redshift estimation, we use fluxes in 3 arcsec apertures for all optical/near-IR bands and 4 arcsec apertures for the IRAC bands. These aperture fluxes were then corrected to total fluxes using the aperture corrections based on the 1.35 or 1.6 arcsec Moffat profiles or the corresponding IRAC PSF curves of growth. Tests performed using 2, 3 and 4 arcsec apertures for the optical bands indicate that for Boötes the 3 and 4 arcsec aperture-based photometry perform almost identically for photometric redshift estimation, while the 2 arcsec-based photometry performed significantly worse. The choice of 3 arcsec over 4 arcsec apertures is based solely on consistency with the `Deep' data presented in the following sub-section.

§.§.§ Spectroscopic redshifts

Spectroscopic redshifts for sources in Boötes are taken from a compilation of observations within the field (Brown, priv. communication).
The majority of the redshifts within the sample come from the AGN and Galaxy Evolution Survey <cit.>, with additional samples provided by numerous follow-up surveys in the field, including <cit.>, <cit.>, <cit.>, <cit.> and Hickox, R. C. et al (priv. communication). The spectroscopic redshift catalog was matched to the combined multi-wavelength catalog based on the quoted physical coordinates in the two respective catalogs, using a maximum separation of 1 arcsec. In total, the combined sample consists of 22,830 redshifts over the range 0 < z < 6.12, with 88% of these at z < 1.

It is important to raise a caveat to the analysis in the following sections, namely that while the spectroscopic sample used here represents one of the best available in the literature and includes a diverse range of galaxy types, it may still not be fully representative of the radio source population. As with any spectroscopically incomplete sample, the subset of sources with available spectroscopic redshifts represents a somewhat biased sample with respect to both the overall photometric sample and the radio selected galaxy population. In particular, low excitation radio galaxies (LERGs) may be under-represented within the spectroscopic sample due to the lack of strong emission lines available for redshift estimation.

§.§.§ Radio fluxes

Radio observations for the Boötes field are taken from new LOw Frequency ARray <cit.> observations presented in <cit.>. Full details of the radio data and reduction are presented in <cit.>, including details of the methods used during calibration and imaging to correct for direction-dependent effects (DDEs) caused by the ionosphere and the LOFAR phased array beam. In summary, the observations consist of 8 hr of data taken with the LOFAR High Band Antennae (HBA), covering the frequency range 130-169 MHz with a central frequency of ≈150 MHz. The resulting image covers 19 deg^2, with an rms noise of ≈120-150 μJy beam^-1 and a resolution of 5.6×7.4 arcsec <cit.>. Within the LOFAR field of view, the final source catalog contains a total of 6267 separate 5σ radio sources. Of these sources, 3902 fall within the boundaries of the I-band optical imaging and can therefore be matched to the optical catalogs.

Matches between the LOFAR radio observations and the optical catalog were estimated using a multi-step likelihood ratio technique. Full details of the visual classifications, radio positions and likelihood ratio technique are presented in Williams et al. (in prep.), but the key steps are as follows. Firstly, radio sources were visually classified into distinct morphological classes. Next, optical counterparts for each radio source are determined through a likelihood ratio technique based on the positions, positional uncertainties and brightness of the optical and radio sources (with the radio centroid position and uncertainty dependent on the radio morphology classification). For the small subset of large extended sources where the automated likelihood ratio matching technique cannot be applied, matches were determined individually based on source morphology and visual comparison with the optical imaging.

The cross-matching process yields a total of 2971 matches to sources within the <cit.> optical catalog. However, of these 2971 matches, 578 are matches to optical sources which are flagged as potentially being affected by bright stars/extended sources or as lying on chip edges.
Of the ∼1000 sources which lie within the I-band optical footprint but for which no reliable counterpart could be found, a large fraction represent faint unresolved radio sources for which the optical counterpart is too faint to be detected. These sources may be optically faint either due to being at high redshift or as a result of having intrinsically red SEDs. The Boötes sample used in this analysis is therefore potentially biased against radio sources with optically faint counterparts. However, thanks to the deep near-IR imaging which forms the basis of the `Deep' COSMOS field, we are still able to explore the photo-z properties of such sources.

§.§ `Deep' field - COSMOS

§.§.§ Optical photometry

The optical/near-IR data used in the COSMOS field are taken from the COSMOS2015 catalog presented in <cit.>. <cit.> fully outline the details of the optical dataset, including data homogenisation, source detection and extraction; we therefore refer the interested reader to that paper for further detail. For the analysis in this paper, we use the seven optical broad bands (B, V, g, r, i, z^+/z^++), 12 medium bands (IA427, IA464, IA484, IA505, IA527, IA574, IA624, IA679, IA709, IA738, IA767, and IA827) and two narrow bands (NB711, NB816) taken with the Subaru Suprime-Cam <cit.>. Also included at optical wavelengths are the u^*-band data from the Canada-France-Hawaii Telescope (CFHT/MegaCam), as well as Y-band data taken with the Subaru Hyper Suprime-Cam. As with the Boötes field, we do not include GALEX UV data in the fitting. At longer wavelengths we include the UltraVISTA YJHK_s-band data <cit.> and the 3.6, 4.5, 5.8 and 8 μm Spitzer-IRAC bands. We make use of the aperture corrected 3 arcsec flux estimates for all optical to near-IR bands in combination with the deconfused IRAC photometry as outlined in <cit.>.

§.§.§ Spectroscopic redshifts

Spectroscopic redshifts for the COSMOS field were taken from the sample of redshifts compiled by and for the Herschel Extragalactic Legacy Project <cit.>.[The goal of HELP is to produce a comprehensive panchromatic dataset for studying the galaxy population at high redshift - assembling multi-wavelength data and derived galaxy properties over the ∼1200 deg^2 surveyed by the Herschel Space Observatory.] The compilation includes the large number of publicly available redshifts in the field <cit.> and a small number of currently unpublished samples. In total, the sample comprises 44,875 sources, extending to z > 6 and with ∼12,000 sources at z > 1. Thanks to the optical depths probed by both the photometric and spectroscopic data available in the COSMOS field, the `Deep' spectroscopic sample covers a range of galaxy types, magnitudes and redshifts which may be missing from the `Wide' sample. Despite this, the subset of radio detected galaxies with available spectroscopic redshifts may still be biased towards brighter sources and populations with higher spectroscopic success rates.

§.§.§ Radio fluxes

Radio observations for the COSMOS field were taken from the recently released deep VLA observations presented in <cit.>; the VLA-COSMOS 3 GHz Large Project. Reaching a median rms of ≈2.3 μJy beam^-1 over the COSMOS field at a resolution of 0.75 arcsec, these observations represent the deepest extragalactic radio survey currently available that covers a representative volume. Radio sources from the <cit.> catalog were matched to their optical counterparts based on the optical matches to <cit.> provided in the companion paper <cit.>. Within the spectroscopic redshift subsample there are a total of 3400 radio detected sources.
While a comparison of the differences in redshift distribution and source types between a 150 MHz and a 3 GHz selected survey may be of scientific interest, it is a topic which we do not intend to address here. To facilitate direct comparison with the LOFAR 150 MHz fluxes, we convert the observed 3 GHz fluxes to estimated 150 MHz fluxes assuming a median 3 GHz to 150 MHz spectral slope of α = -0.7 <cit.>, i.e. S_150 = S_3 GHz (150 MHz / 3 GHz)^α ≈ 8.1 × S_3 GHz.

§.§ Flagging of known X-ray sources and known IR/Optical AGN

In deep radio continuum surveys, the radio detected population includes a very diverse range of sources, ranging from rapidly star-forming galaxies to radio quiet quasars and massive elliptical galaxies hosting luminous radio AGN. To fully characterise the diverse radio population and to facilitate comparison between the radio population and other AGN selection methods, we classify all sources in the spectroscopic comparison samples using the following additional criteria:

* Infrared AGN are identified using the updated IR colour criteria presented in <cit.>. In addition to the colour criteria outlined by <cit.>, we split the IR AGN sample into two subsets based on their signal to noise in the IRAC 5.8 and 8 μm bands. To be selected as a candidate IR AGN, we require that all sources have S/N > 5 at 3.6 and 4.5 μm and S/N > 2 at 5.8 and 8 μm. The subset of robust AGN sources is then based on a stricter criterion of S/N > 5 at 5.8 and 8 μm.

* X-ray selected sources in the Boötes field were identified by cross-matching the positions of sources in our catalog with the X-Boötes Chandra survey of the NDWFS <cit.>. We matched the X-ray sources through a simple nearest-neighbour match between the optical photometry catalog used in this analysis and the position of the most-likely optical counterpart for each X-ray source presented in <cit.>. In COSMOS, we make use of the compilation of matched X-ray data presented in <cit.> and the corresponding papers detailing the X-ray sources and optical cross-matching <cit.>. For both fields we calculate the X-ray-to-optical flux ratio, X/O = log_10(f_X/f_opt), based on the i^+ or I band magnitude following <cit.> and <cit.> respectively. To be selected as an X-ray AGN, we require that an X-ray source have X/O > -1 or an X-ray hardness ratio > 0.8 <cit.>.

* Bright, known optical AGN were also identified through two additional selection criteria. Firstly, where available, any sources which have been spectroscopically classified as AGN are flagged. Secondly, we also cross-match the optical catalogs with the Million Quasar Catalog compilation of optical AGN, primarily based on SDSS <cit.> and other literature catalogs <cit.>. Objects in the Million Quasar Catalog were cross-matched to the photometric catalogs using a simple nearest neighbour match in RA and declination, allowing a maximum separation of 1 arcsec. Simulations using randomised positions indicate that at 1 arcsec separation, the chance of a spurious match with an object in the optical catalog is less than 5%. While this value is relatively high due to the depth of the optical catalogs, the actual median separation between matches is ≈0.2 arcsec, and such matches are highly unlikely to be spurious. Visual inspection of the quasar catalog sources with no optical counterpart in our catalog indicates that the majority fall within masked regions of the optical catalog (e.g. around bright stars and artefacts) and thus are not expected to have a match.
In Fig. <ref> we show the relative numbers of radio detected sources and sources which satisfy any of the X-ray/Optical/IR AGN criteria within the full spectroscopic subsets. In both the Boötes and COSMOS spectroscopic samples, there are large numbers of radio or X-ray detected sources, as well as large numbers of sources classified as IR AGN. For the Boötes field, the large number of IR AGN is due to the specific selection criteria targeting these sources within the AGES spectroscopic survey <cit.>. Within the subset of radio detected sources itself there is a clear diversity in the nature of the sources. In Fig. <ref> we show the multi-wavelength classifications of the respective radio samples.

Inspecting the radio flux and luminosity distributions of the two samples (Fig. <ref>) reveals that the X-ray detected sources and IR AGN typically have a higher radio luminosity than the sample median - in line with the expected dominance of AGN at L_150 ≳ 10^24 W Hz^-1 <cit.>. However, as seen in Fig. <ref>, of the most radio luminous sources, e.g. L_150 > 10^25 W Hz^-1, only ∼40-50% also satisfy another AGN selection criterion. Of all the X-ray and IR AGN sources in our samples, we note that ≈10-20% are radio detected, broadly consistent with the measured radio-loud fraction of optical quasars <cit.>.

Finally, to illustrate the magnitude and redshift parameter space probed by our spectroscopic redshift comparison samples, in Fig. <ref> we plot the apparent I (i^+) band magnitudes and redshifts of the radio detected populations. By construction, the `Deep' COSMOS sample probes to significantly fainter magnitudes than the wide area Boötes sample. Between the two samples we are able to sample a wide range of magnitudes in the redshift range 0 < z < 3 (0 < log_10(1+z) ≲ 0.6). We also caution that due to the nature of the AGES spectroscopic survey selection criteria <cit.>, the majority of spectroscopic redshifts at z > 1 in the Boötes field are known AGN. Conclusions on the photo-zs for sources at z > 1 will therefore largely be driven by the less biased COSMOS sample.

§ PHOTOMETRIC REDSHIFT METHODOLOGY

Photometric redshift estimation techniques fall broadly into two distinct categories. Firstly, one can use redshifted empirical or model template sets fitted to the observed photometry through χ^2-minimisation or maximum likelihood techniques <cit.>. Alternatively, one can take a representative training set of objects that has known spectroscopic redshifts and use any of a wide variety of supervised or un-supervised machine learning algorithms to estimate the redshifts for the sample of galaxies for which the redshift is unknown <cit.>.

In recent years, empirical methods based on training sets have been shown to produce redshift estimates that can have lower scatter and outlier fractions than template-based methods <cit.>. Furthermore, because the computationally expensive training step only occurs once, these methods can also be significantly faster than template fitting when applied to very large datasets. However, the drawback of training sample methods is that they are very dependent on the parameter space covered by the training sample and its overall representativeness of the sample being fitted <cit.>.
While template-fitting methods do benefit from additional optimisation through spectroscopic training samples <cit.>, they can be applied effectively with no prior redshift knowledge and tested without spectroscopic samples for comparison <cit.>. Fully representative training samples for the rare sources of interest are not yet readily available for many different fields. Contributing to this problem is the inhomogeneous nature of the photometric data, both within and across the various deep survey fields. While deep spectroscopic samples are available for fields such as COSMOS, the variation in filter coverage between survey fields makes it impractical to fully apply such a training sample to other fields. Given these constraints, we believe that template based photometric redshifts still represent the best starting point when estimating photo-zs for the datasets and science goals of interest. Future work will explore the application of empirical photo-z estimates to the widest tiers of the LOFAR survey.

For this study we base the photometric redshift estimates on the eazy photometric redshift software presented in <cit.>. As mentioned above, several different template fitting photometric redshift codes have been published and widely used in the literature, e.g. BPZ <cit.>, LePhare <cit.> or HyperZ <cit.>. The key differences in approach (and potential outcomes) between these codes are primarily the choice of default template sets as well as their treatment of redshift priors based on magnitude or spectral type. Both of these assumptions can be changed either within eazy itself or in subsequent analysis of its outputs. We are therefore confident that our choice of specific photometric redshift code does not strongly bias the results of our analysis, and we note that alternative template fitting codes could be used without systematically affecting the results.

§.§ Template sets

The three template sets used in this analysis are as follows:

* Default eazy reduced galaxy set (`EAZY'): The first set used is the updated optimised galaxy template set provided with eazy, and we refer the reader to <cit.> for full details of how these templates were generated. In the latest version of the software, this template set has been updated to incorporate nebular emission lines and includes both an additional dusty galaxy template and an extremely blue SED with strong line emission. Because the eazy template set includes only stellar emission, it gives poor fits at wavelengths where the overall emission is typically dominated by non-stellar radiation (e.g. the rest-frame mid-infrared; dust emission/PAH features). To minimise the effect of this potential bias, observed filters with wavelengths greater than that of IRAC channel 2 (4.5 μm) are not included when fitting.

* <cit.> `XMM-COSMOS' templates: Our second set of templates is that presented by <cit.> in their analysis of photometric redshifts for X-ray AGN. Based on the templates presented in <cit.>, this template set includes 30 SEDs and covers a wide range of galaxy spectral types in addition to both AGN and QSO templates. In contrast to the eazy templates, the XMM-COSMOS templates include both dust continuum and PAH features as well as power-law continuum emission for the appropriate AGN templates. We therefore do not exclude the IRAC 5.8 and 8.0 μm photometry when fitting with these templates.

* <cit.> Atlas of Galaxy SEDs (`Atlas'): Finally, we make use of the large atlas of 129 galaxy SED templates presented in <cit.>.
These templates are based on nearby galaxies and cover a broad range of galaxy spectral types, including ellipticals, spirals and luminous infrared galaxies (both starburst and AGN). Built from panchromatic synthetic SED models <cit.> together with optical to infrared photometry and spectroscopy, the library has been constructed to minimise systematic errors and span the full gamut of nearby galaxy colours. As with template set 2, because the templates include rest-frame mid-infrared spectral and continuum features, the IRAC 5.8 and 8.0 μm photometry were also used when fitting with this library.

These three template libraries were selected either because of their common use within the literature (EAZY/XMM-COSMOS) or because of their explicit intention to fully represent the range of colours observed in local galaxies (Atlas). They are, however, not directly comparable in the intrinsic galaxy types they include, and there are some key differences which could affect their potential performance for the radio galaxy population. As mentioned above, the EAZY template set models only stellar emission and does not include any templates with contributions from AGN. We may therefore expect the EAZY template set to perform very poorly for galaxies with SEDs which are dominated by AGN components. In contrast, while the Atlas library does include templates with significant AGN contributions (primarily at longer wavelengths), it does not include any bright optical quasars due to its local galaxy selection. The XMM-COSMOS library is therefore the only set included in this analysis which covers the full range of optical AGN classes.

§.§ Photometric zeropoint offsets

The addition of small magnitude offsets to the observed photometry of some datasets has been shown to improve photometric redshift estimates <cit.>. While typically small (≲10%), these additional offsets can often substantially reduce the overall scatter or outlier fractions of photo-z estimates. To calculate the appropriate photometric offsets we use the commonly followed strategy of fitting the observed SEDs of a subset of galaxies while fixing their redshift to the known spectroscopic redshift. For a training sample of 80% of the available spectroscopic sample, the offset for each band is then calculated from the median offset between the observed and fitted flux values for sources with S/N > 3 in that band.

To ensure that spurious offsets are not being applied based on a small number of catastrophic failures in the photometry, we perform a bootstrap analysis to calculate the scatter in the estimated zeropoint offsets. The zeropoint offset is calculated for 100 iterations of a random subset of 10% of the spectroscopic training sample, with the standard deviation of this distribution then taken as the uncertainty in the zeropoint offset. An offset is then only applied to a given band if the offset is significant at the 2σ level (a minimal sketch of this procedure is given below). We apply this procedure to each template set individually, with the zeropoint offsets applied in all subsequent analysis steps. Using the remaining 20% of spectroscopic redshifts as a test sample, we are able to verify that for each template set the inclusion of the zeropoint offsets in the fitting produces an overall improvement in the various photometric redshift quality metrics. Finally, before including the estimated photometric offsets in the fitting process for the full photometric samples, we assess any potential adverse effects they could have.
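As an illustration of the offset estimation described above, the following is a minimal sketch under stated assumptions (per-band arrays of observed fluxes, flux errors and best-fit model fluxes from fits at fixed spectroscopic redshift); the function name and the multiplicative sign convention are illustrative rather than the exact implementation.

```python
import numpy as np

def band_zeropoint_offset(f_obs, f_err, f_model, n_boot=100, frac=0.1, seed=0):
    """Median multiplicative offset for one band, kept only if 2-sigma significant."""
    rng = np.random.default_rng(seed)
    good = (f_obs / f_err) > 3            # S/N > 3 sources in this band
    ratio = f_model[good] / f_obs[good]   # per-source model/observed flux ratio
    offset = np.median(ratio)
    # Bootstrap: re-estimate the median on random 10% subsamples of the sources
    boot = [np.median(rng.choice(ratio, size=max(1, int(frac * ratio.size))))
            for _ in range(n_boot)]
    sigma = np.std(boot)
    # Apply the correction only if it differs from unity at the 2-sigma level
    return offset if abs(offset - 1.0) > 2 * sigma else 1.0
```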
For the two example fields used in this study we find that there is no strong bias in the photometric offsets introduced by the redshift distribution of the spectroscopic sample. That is to say, applying photometric offsets based on a spectroscopic sample with z_s ≈ 0.3 to a sample of photometric galaxies at higher redshift will not strongly bias the resulting redshifts. Such biases could arise either from aperture effects (due to the larger angular size of nearby galaxies) or from differences in the age-dependent features (e.g. the 4000 Å break) in the SEDs; a problem which may be most acute for the local galaxy based Atlas template library. However, we find that for the extreme example of applying photometric offsets calculated for a spectroscopic sample at z∼0.2 to a test sample at z > 1, the photometric redshift quality of the test sample with the `biased' offsets applied is not significantly worse than when no offsets were applied.

§.§ Fitting methods

The EAZY template set is fitted following its intended use, using fits of N-linear combinations of templates and allowing all templates to be included in the fit. In contrast, the XMM-COSMOS templates are used in a way which best matches their implementation in LePhare <cit.> and their intended use <cit.>. A range of dust attenuation levels (0 ≤ A_V ≤ 2) is applied to each of the 32 unique templates, using both the <cit.> starburst attenuation law and the <cit.> Small Magellanic Cloud (SMC) extinction curve. The extended set of dust attenuated templates is then fitted using single template mode in eazy.

Due to the large number of unique templates already included (making fits of N-linear combinations impractical), the Atlas template set is fitted in a similar manner to the XMM-COSMOS set. To allow for finer sampling of the rest-frame UV/optical emission in the empirical Atlas SEDs, we also apply additional dust attenuation to the empirical templates, as was done for the XMM-COSMOS set. Due to the wider range of dust extinction already intrinsic to the empirical templates, we apply a smaller range of additional dust attenuation (0 ≤ A_V ≤ 1) and assume only the <cit.> starburst attenuation law. We note that the maximum dust extinction of A_V = 1 may be unrealistic for some of the galaxy archetypes included in the Atlas library (e.g. blue compact dwarfs), but dust ranges tuned to individual template types are beyond the scope of this work. As for the XMM-COSMOS template set, the extended Atlas of Galaxy SEDs template set is then fitted in single template mode.

For all three template sets, additional rest-frame wavelength dependent flux errors are also included through the eazy template error function <cit.>. These errors are added in quadrature to the input photometric errors and vary from <5% at rest-frame optical wavelengths to >15% at rest-frame UV and near-IR wavelengths, where the template libraries are more poorly constrained. Finally, although eazy allows for the inclusion of a magnitude dependent prior in the redshift estimation, we choose not to include it at this stage. A summary of these three different photo-z fitting estimates is presented in Table <ref> for reference.

§ RESULTS

To explore the performance of the three template sets on the two spectroscopic samples, we first look at the statistics of the best-fit photometric redshift estimates relative to the measured spectroscopic redshifts. Within the literature there is a wide range of statistical metrics used to quantify the quality of photometric redshifts <cit.>.
In this analysis we choose to adopt a subset of the metrics outlined in <cit.>, including three measures of the redshift scatter, one measure of the bias and one of the outlier fraction (see Table <ref> for details of these definitions and their notation). We also introduce an additional metric, the continuous ranked probability score (CRPS), and the corresponding mean values for a given sample <cit.>. Widely used in meteorology, the CRPS is designed for evaluating probabilistic forecasts. We refer to <cit.> for full details of the metric and its behaviour, but its definition is presented in Table <ref>: it is the integral of the squared difference between the cumulative distribution function of the predicted redshift, CDF(z), and that of the true value, CDF_z_s(z), i.e. a Heaviside step function at z_s. A key advantage over the more widely used metrics is that the CRPS takes into account the full PDF rather than just a simple point value when evaluating a model prediction (i.e. the photometric redshift).

§.§ Overall photometric redshift accuracy

Before analysing the photometric redshift properties of the radio source population specifically, it is useful to first verify the overall redshift accuracy of the estimates in the respective fields. For galaxy evolution studies (where the overall bias is less critical), the two most important metrics are typically the robust scatter, σ_NMAD, and the outlier fraction, O_f. Figures <ref> and <ref> illustrate how these metrics vary with redshift and magnitude for the full Boötes and COSMOS spectroscopic samples. In Table <ref> we also present the corresponding photo-z quality metrics for the full spectroscopic sample and all subsets of radio detected sources.

As expected given the availability of medium band observations, the COSMOS photo-zs (Fig. <ref>) typically have lower scatter than the Boötes dataset at any given redshift. However, at z ≲ 1 the photo-zs for all template sets perform well in both fields, with 0.03 ≲ σ_NMAD ≲ 0.05 (Boötes) and 0.01 ≲ σ_NMAD ≲ 0.03 (COSMOS). For both samples we find that the redshift estimates for the X-ray detected and Opt/IR AGN populations (dashed lines) typically perform worse on average than the remaining galaxy population at z < 2. However, at z ≳ 2.5 the two populations begin to converge to equivalent levels of scatter and outlier fraction. While the photo-zs of the `normal' galaxies at z > 2 deteriorate in quality (likely a result of decreasing S/N - Figure <ref>), the photo-z estimates for sources in the X-ray/Opt/IR AGN sample begin to improve. This convergence at higher redshift is potentially driven by the increasing importance of the common Lyman break feature in determining the fitted redshift.

While the primary goal of this paper is to draw conclusions on the relative photo-z accuracies for different source populations, it is also useful to compare the absolute accuracy of the photo-zs produced relative to those of high-quality sets available in the literature. Therefore, in addition to the comparison between template sets, in Fig. <ref> we also present the quality metrics of the published COSMOS2015 photometric redshift set <cit.> for the same spectroscopic sample (the catalog `photoz' column; grey lines). Because the COSMOS2015 estimates are not optimised for AGN (and exclude estimates for some X-ray sources), photo-zs for X-ray detected galaxies are taken from the results of <cit.>.
For the `normal' galaxy population, the scatter and outlier fractions of the EAZY and Atlas template sets perform comparably to the official COSMOS2015 estimates.In contrast, for the <cit.> estimates alone the X-ray/IR/opt AGN sample perform significantly worse than the best estimates from this analysis. Incorporating the photo-zs for X-ray sources from <cit.> the combined literature photo-zs performance improves, with scatter and outlier fractions at z < 2 comparable tothe best estimates from this analysis, but a poorer performance above this range.For both the Boötes and COSMOS samples we find that the EAZY and Atlas template sets perform comparably and typically produce the lowest scatter (σ_) in both the full spectroscopic sample and the full radio selected population. However, in the sub-samples of X-ray detected sources or IR AGN, we find no consistency between the two different datasets. In the wide area dataset, the XMM-COSMOS template set performs significantly better in almost all metrics than the other two sets, for AGN populations (see Table <ref> . Conversely, in the deep field the XMM-COSMOS set performs worst for the key σ_ and O_ metrics in the subset of X-ray/Opt/IR AGN sources.Given the consistent methodology used for both datasets, the underlying reason for this discrepancy is likely due to the differences in the source populations included in the relevant spectroscopic samples (see Section <ref>). As seen in Fig. <ref>, the Boötes X-ray/Opt/IR AGN source population is typically significantly optically brighter than that probed in COSMOS and may therefore have intrinsically different SEDs.One clear conclusion that can be drawn from Figures <ref>/<ref> and Table <ref> is that there is no single template set which performs consistently best across all subsets and datasets. Differences in the redshifts estimated by the three different template sets are found to systematically depend strongly both on optical magnitude (a proxy for overall S/N) and redshift. Specifically, as sources become optically fainter the range between the highest and lowest predicted redshifts systematically increases. As a function of redshift, this range of predicted photo-zs also increases significantly between 1 ≲ z ≲ 3; above z∼3 the estimates begin to converge again. We see these trends in both the wide and deep fields, leading us to conclude that the redshift effect is not due to the systematics of the available optical data itself (e.g. the relatively shallow near-IR data in Boötes). §.§ Relative photo-z accuracy for radio and non-radio sources It is clear that the absolute values for photometric redshift quality metrics are strong functions of the redshifts being probed, along with relative depth (S/N), resolution and wavelength coverage of the photometry available. The fundamental question for photometric redshift estimates in deep radio continuum surveys is how does the redshift accuracy differ between the radio detected and radio undetected source populations?To understand how the different intrinsic source populations affect the resulting photo-z accuracy, we therefore measured the relative photo-z scatter and outlier fraction between the radio detected and non detected populations as a function of redshift. 
To minimise the effects of known biases or photo-z quality dependencies, we first carefully match the two samples in redshift, magnitude and colour space.Within the 3 dimensional parameter space of the spectroscopic redshift, I-band (i^+) magnitude and I - 3.6μ m (i^+ - 3.6μ m) colour, we calculate the 10 nearest neighbours for each radio detected source. Due to the limited number of spectroscopic redshifts available, sources in the non-radio sample are allowed to be in the matched sample for more than one radio source. Next, for each redshift bin, we calculate σ_ and O_f for the two matched samples and use a simple bootstrap analysis to estimate the corresponding uncertainties in these metrics.In Fig. <ref> we show the relative scatter and outlier fractions of these two matched samples. We find that up to z∼1 (where both spectroscopic samples are fairly representative), photometric redshifts estimated for radio sources have typically lower scatter and outlier fraction than galaxies with no radio detection that have similar magnitudes. This trend is true for both datasets and for all three template sets.Above z∼1, photo-zs for radio sources are significantly worse than their matched non-radio detected counterparts. This trend of increasing scatter/outlier fraction with redshift is not unexpected given as redshift increases the radio detected sources are increasingly luminous AGN for which photo-z estimates are expected to struggle.In the Boötes field specifically (filled symbols), we see that at z∼1 there is a significant jump in the measured scatter for radio sources. Inspecting the magnitude-redshift distribution of the radio sample reveals that z∼1 (log_10(1+z) ≈ 0.3) marks the transition where the AGES spectroscopic sample become dominated by the AGN selection criteria and almost all sources are classified as either X-ray sources or IR AGN. We note however that the sample bias towards X-ray and IR AGN sources is true for both the radio and matched non radio samples, indicating that at higher redshift the radio-loud subset of X-ray/IR AGN sources is systematically more difficult to fit than the radio-faint population of similar magnitude. §.§ Photometric redshift accuracy as a function of radio powerIn Section <ref> we saw that the photo-zs for the radio detected population becomes systematically worse at high redshift. If this trend is driven by the evolution in sample radio luminosities from the flux limited samples, we expect to observe the same trend when looking at a fixed redshift but evolving radio luminosity.In Fig. <ref> we present the evolution in σ_ and O_f as a function of log_10(L_150) for sources with spectroscopic redshift in the range 0.2 < z < 0.9. We choose this redshift range because based on Fig. <ref> we know that the scatter and outlier fraction of the full sample do not evolve strongly across this range. In the COSMOS field, the scatter remains relatively constant with redshift for both the AGN and normal galaxy samples while the outlier fraction actually decreases slightly over this range.In contrast to the naive expectation of increasing scatter with radio luminosity, it is evident that there is no clear evolution in σ_ across the ∼4 orders of magnitude in radio luminosity probed by our samples. The measured scatter follows a similar trend when examined as a function of radio flux. Between < 1 mJy and 100 mJy, σ_ remains effectively constant for all three template libraries in both and both datasets. 
It is only at the very brightest radio fluxes (0.1 < S_ν, 150 < 1 Jy) where scatter increases for this redshift regime. Some evidence of increasing outlier fraction as a function of log_10(L_150) exists for both the deep and wide fields, with O_f rising by ∼ 2 between log_10(L_150) ≈ 23 and log_10(L_150) ⪆ 26. As a function of radio flux, the trend of increasing outlier fraction is even more pronounced. Although there is significant scatter in the outlier fraction values and the small samples available at high radio power result in significant uncertainties, the trend is consistent across all three template sets.In both fields, the overall AGN selected population has a higher outlier fraction across the redshift range (0.2 < z < 0.9) than the galaxy population. The rise in outlier fraction with radio flux illustrated in Fig. <ref> may therefore be a result of the increasing radio AGN fraction with radio flux (/luminosity), see e.g. <cit.>.As radio surveys push to lower radio luminosities at higher redshift (e.g. log_10(L_150) < 25 and z∼2), these results suggest that photometric redshifts for the large samples of intermediate power AGN will be very comparable to those of `normal' galaxies. However, what we cannot measure here is the effect of how the intrinsic SEDs of these sources might change over these redshifts and the resulting effects on photo-z estimates.§.§.§ Best-fitting templates vs radio power As radio luminosity increases and the source population becomes dominated by increasingly powerful AGN, a plausible expectation is that photo-z template sets incorporating AGN/QSO templates will perform better than stellar-only template sets. Equally, at low radio luminosities where the population becomes dominated by star-forming galaxies, one would expect the template sets optimised for stellar emission to provide the most accurate photo-zs. Based on the results of Fig. <ref> however, there is no clear dependence of the preferred template on radio luminosity.This result is in line with our expectation on the radio source population; namely its extremely diverse nature. Across a broad range of radio luminosities, the observed spectral energy distributions are consistent with sources ranging from radio galaxies with old stellar populations to star-forming galaxies and luminous QSOs. §.§ Accuracy of the redshift PDFsWhile the scatter between the estimated photometric redshift (whether that is the peak or median of the P(z)) and the spectroscopic redshifts is a useful metric for judging their accuracy, this does not take into account the uncertainties on individual measurements nor the potential complexities of the P(z) itself (e.g. multiple peaks or asymmetry). In addition to ensuring the minimum scatter and outlier fraction possible, it is therefore essential that the estimated P(z) accurately represent the true uncertainty of the photometric redshifts. Even with the inclusion of additional photometry errors, it is common for template fitting photo-z codes to be over-confident in the predicted redshift accuracy <cit.>.To quantify the over- or under-confidence of our photometric redshift estimates, we follow the example of <cit.> and x where the spectroscopic redshift is just included. For a set of redshift PDFs which perfectly represent the redshift uncertainty (e.g. 10% of galaxies have the true redshift within the 10% credible interval, 20% within their 20% credible interval, etc.), the expected distribution of c values should be constant between 0 and 1. 
The cumulative distribution, F̂(c), should therefore follow a straight 1:1 relation. Curves which fall below this expected 1:1 relation therefore indicate that there is overconfidence in the photometric redshift errors; the P(z)s are too sharp.In Figure <ref>, we show the F̂(c) distributions (Q-Q plots) for the uncorrected P(z) output of each template set. For both the full spectroscopic samples (dashed lines) and radio detected samples (solid lines), all three template sets show significant overconfidence in the photometric redshift errors. The P(z) estimates based on the EAZY template set are the most accurate while the XMM-COSMOS template set performs the worst. We also find that despite having significantly lower scatter relative to the spectroscopic sample, the COSMOS field redshift estimates are noticeably more overconfident than those in the Boötes field. Using a training subset of each population (AGN vs non-AGN), we smooth the redshift PDFs to minimise the euclidean distance between the observed F̂(c) and the desired 1:1 relation. To do this we define the rescaled redshift PDF for a galaxy, i, asP(z)_, i∝ P(z)_, i^1/α(m_i),where α(m) is a constant, c, below some characteristic apparent magnitude, m_c, and follows a simple linear relation above this magnitude, e.g.α(m) = α_cm ≤ m_c α_c + κ×(m-m_c) m > m_c.For both datasets, we use the equivalent I/i^+ band optical magnitude for calculating the magnitude dependence. We also assume a characteristic magnitude of i^+ = 20 for the COSMOS sample <cit.> and I = 18 for the shallower Boötes sample. The parameters c and k are then fit using the emcee Markov Chain Monte Carlo fitting tool <cit.>.In Fig. <ref> and <ref> we show the resulting F̂(c) distributions for the all sources as well as the radio detected sources (and subsets thereof) after the redshift PDFs have been calibrated using the full spectroscopic redshift sample. For the Boötes field, the consensus PDFs from all three template sets are significantly improved for the full spectroscopic sample. The 0 to 50% credible interval ranges are all very accurately measured, with only a small remaining overconfidence within the 80% credible interval. For the COSMOS field, all three template sets perform well within the 50% credible interval but the tails of the distributions are not as accurate as those for the Boötes field.Although the calibrated redshift PDFs for the radio detected subsets are somewhat improved by the calibration procedure, they do not match the same accuracy as the wider spectroscopic redshift sample. Of the three template sets, the calibrated PDFs from the XMM-COSMOS template set are the most accurate overall for the Boötes field. However, for the COSMOS field, the calibrated PDFs for the XMM-COSMOS set are under-confident for the overall radio sample. For both the AGN and non-AGN calibration subsets the smoothing applied to the XMM-COSMOS set is significantly higher than required for the EAZY and Atlas template sets (with the exception of the Boötes AGN sample for the Atlas library; Table <ref>). The resulting PDFs (while accurately representing the uncertainty for the overall sample) are much broader than the other template sets. The typical 1-2σ uncertainties on an individual galaxy redshift solution are ≈ 2-3 smaller for the EAZY/Atlas template sets compared to XMM-COSMOS. § OPTIMIZED REDSHIFTS THROUGH HIERARCHICAL BAYESIAN COMBINATIONAs illustrated in Sections <ref>-<ref>, no single template set can perform well for all types of radio-detected galaxy. 
To obtain the best photometric redshift estimates for sources in future deep radio continuum surveys one would therefore ideally like to pre-classify every galaxy and fit it with the optimum method for that source type <cit.>. However, some of the key properties necessary for such a priori classification are potentially not going to be known at the time photometric redshifts are fitted.A potential solution to this problem lies in the combination of multiple photo-z within a Bayesian framework such as hierarchical Bayesian(HB) combination <cit.> or Bayesian model combination/averaging <cit.>. Both of these ensemble methods for photometric redshifts have been illustrated to improve estimates for normal galaxy populations, with the combined redshift PDF more accurate and less biased than any individual photo-z determination incorporated in the analysis.To further improve the photometric redshifts for radio continuum sources we therefore combine the estimates from each template set through a hierarchical Bayesian combination. §.§ Hierachical Bayesian combination of redshift PDFsFollowing the method outlined in <cit.>, a consensus P(z) is determined while accounting for the possibility that individual measured redshift probability distributions P_m(z)_i are incorrect. The possibility that an individual P(z) is incorrect is introduced as a nuisance parameter, f_bad, that is subsequently marginalised over.Following <cit.>, we define for each redshift estimate, i,P(z, f_bad)_i = P(z|bad measurement)_if_bad + P(z|good measurement)_i (1-f_bad),where P(z|bad measurement) (U(z) hereafter for brevity) is the redshift probability distribution assumed in the case where the estimated P_m(z)_i is incorrect and P(z|good measurement) ≡ P_m(z)_i is the case where it is correct. The choice of U(z) is explored in detail in the following section. For now, given a sensible choice of U(z), the combined P(z, f_bad) for all n measurements is then given byP(z, f_bad) = ∏_i=1^nP(z, f_bad)_i^1/β,where the additional hyper-parameter, β, is a constant that defines the degree of covariance between the different measurements. For completely independent estimates β = 1, while for estimates that are fully covariant β = n (= 3 in this work). In this work we expect some reasonable degree of covariance between the three estimates as a result of the common photometric data and fitting algorithms used. Although the peak of the final redshift distribution is independent of β, changes in β do have an effect on the distribution widths. As part of the hierarchical Bayesian combination, β can also therefore be tuned such that posterior redshift distributions more accurately represent the redshift uncertainties.Finally, we marginalise over f_bad to produce the consensus redshift probability distribution for each objectP(z) = ∫^f^max_bad_f^min_bad P(z, f_bad) df_bad,where f^min_bad and f^max_bad are the lower and upper limits on the fraction of bad measurements. While fixed by definition to lie in the range 0 ≤ f_bad≤ 1, the exact limits used when marginalising over f_bad can also be tuned using the training sample <cit.>.§.§.§ Assumptions for the U(z) priorDuring the calculation of the consensus redshift PDF, it is necessary to make an assumption on what the redshift prior is in the case where a given measurement is bad. The simplest assumption for U(z) is that in cases where the measurement is bad, we have zero information on the redshift of a given object. 
Therefore, U(z) is a flat prior, whereby U(z) = 1 / N for redshifts in the range of fitting 0 < z < N.As is discussed by , we can also assume a more informative prior such as one which is proportional to the redshift dependent differential comoving volume, d V(z)/d z. Given the nature of the deep multi-wavelength surveys being used and the broad redshift range of interest, a volume prior increases the likelihood of sources being at higher redshifts and disfavours low-redshift solutions where d V(z)/d z is very small.Alternatively, as we adopt in our analysis, magnitude information for each source can also be incorporated through the use of an empirical or model-based magnitude prior <cit.>. The benefit of incorporating magnitude dependent redshift priors in template fitting has been well illustrated in the literature <cit.> and so the assumption of a magnitude dependent U(z) is therefore well motivated.Our empirical redshift prior is based on a modified version of the functional form outlined in <cit.>. Using subset of the spectroscopic training set, we fit the observed redshift - magnitude relation, p(z|m_I), with the function:p(z|m_I) ∝ ( c + d V(z)/d z ) ×exp{ - [ z/z_m(m_I)]^α}.As in <cit.>, the prior distribution at high redshifts is determined by an exponential cut-off above a magnitude-dependent redshift z_m. However, rather than a linear dependence on m_I, we assume z_m = z_0 + k_1z + k_2z^2. Additionally, in place of the power-law term z^α, we use the differential comoving volume element d V(z)/d z. Following <cit.>, we also include the additional parameter, c, to allow for non-vanishing likelihood as z→ 0. We make the assumption that for AGN selected sources, c = 0, while for all other sources c = 0.001.The parameters α, z_0, k_1 and k_2 are estimated by fitting the functional form to the desired subset of test galaxy samples using MCMC <cit.>. Fig. <ref> illustrates the resulting redshift priors as a function of I magnitude for `normal' galaxies (top) and X-ray/IR/opt selected AGN (bottom). At bright apparent optical magnitudes there is a clear difference in the redshift distribution between the two source populations, with AGN sources having both a higher median redshift and a more extended tail at higher redshifts (z > 3). §.§.§ Tuning of hyper-parameters using spectroscopic sampleIn addition to the assumption of U(z), it is also necessary to assume or fit the additional hyper-parameters, β, f^min_bad and f^max_bad. Our assumptions for f^min_bad and f^max_bad are based on the measured and expected outlier fractions for the relevant source populations. For non-AGN, we therefore marginalise f_bad over the range 0 < f_bad < 0.05 while for the X-ray/IR sample, we marginalise over the range 0 < f_bad < 0.5.As discussed in the previous section, β can be tuned to maximise the accuracy of the resulting consensus P(z) estimates. We therefore fit β using the spectroscopic training sample. Specifically, we find the β which minimises the distance between measured F̂(c) and the desired 1:1 relation within 80% HPD CI. The restriction of fitting only within the 80% HPD CI region is motivated by the observation by <cit.> that even for well calibrated photometric data, non-gaussian errors and uncalibrated template systematics can result in the tails of P(z) distribution being poorly described. By restricting the optimisation to only <80% HPD CI, we prevent over smoothing of the P(z) due to these low probability tails. §.§ Optimised photometric redshift propertiesIn Fig. 
<ref> and <ref> we illustrate the σ_, O_ and performance of the new consensus redshift estimates in each of the source population subsets (see also Table <ref>). For the full spectroscopic samples and almost all subsets of the radio detected populations, the HB photo-z estimates either match the scatter and outlier fraction performance of the best individual template set (c.f. Table <ref>) to within 10% or outperform all of the estimates. The high performance of the HB photo-z is consistent across both data sets and substantially improves upon the individual template sets in several areas.The improved performance of the consensus photo-zs also extends to the P(z) distributions. The performance of the HB redshifts in the mean continuous ranked probability score () is significantly better than any individual redshift estimate. In Figures <ref> and <ref> we show the F̂(c) distributions for the Boötes and COSMOS samples respectively. For both fields, not only is the overall P(z) accuracy for the full spectroscopic redshift sample improved, but the P(z) accuracy for the radio detected population (and all subsets) are also improved. Variation in P(z) accuracy between the different radio subsets is significantly reduced.Finally, we note that the average uncertainty on individual source for the HB photo-z estimates remain competitive with those of the best individual template set. In Fig. <ref>, we present the median 80% highest probability density confidence intervals, Δ_z_1, around the primary redshift solution, z_1, as a function of I(i^+) magnitude. It is important to note here that the observed improvement in redshifts for the HB consensus photo-zs results primarily from the combining of multiple estimates and is not driven by the magnitude prior. When folding in the magnitude priors to each individual estimate, there is only a very minor improvement in the photo-zs (namely a small reduction in outlier fraction). § DISCUSSION §.§ Radio surveys for studying galaxy and AGN evolutionIn the preceding two sections we have presented a large amount of detailed analysis on photometric redshift estimates for two deep radio continuum surveys, including a wealth of statistics and comparisons that can be somewhat abstract. To interpret the results presented in these sections, it is worth revisiting the questions we specifically posed in the introduction. Firstly, how does the photometric redshift accuracy of radio sources vary as a function of redshift and radio luminosity? And secondly, do the current methods and optimization strategies developed for `normal' galaxies or other AGN populations in optical surveys extend to radio selected galaxies?The answer to the first question is partially provided in Section <ref>. Across a wide range in radio luminosity, the measured photo-z scatter remains approximately constant, regardless of which template set is used. In contrast, there is a much stronger evolution in the measured outlier fraction, which increases significantly between 23 < log_10(L_150) < 27. At the very faintest fluxes probed by the LOFAR and VLA data used in this analysis (i.e. sub-mJy), the photo-zs for the radio source population perform very well; comparable to the overall properties of the radio undetected source population.Based on the results of our hierarchical Bayesian combination photo-z estimates presented in Section <ref>, our answer to the second question posed has to be a yes. 
Consensus redshifts from hierarchical Bayesian combination do improve the redshift accuracy for these populations whilst sacrificing no accuracy for the non-radio population. The redshifts produced perform better than any individual template set/method for both the full spectroscopic sample and the radio detected population. The success of the ensemble redshifts is an excellent illustration that the techniques developed to provide marginal gains for the `normal' galaxy population <cit.> can also provide very significant improvements for more diverse datasets.While the consensus estimates do improve on the photo-z predictions for radio sources which also satisfy optical and IR AGN criteria (and to a lesser degree have strong X-ray flux), the overall quality of these estimates still remains very poor compared to those of the general radio source population. <cit.> illustrated that although the use AGN-dominated SEDs in the photometric redshift fitting can improve results for AGN photo-zs, additional steps are required to maximise the accuracy: namely strict magnitude priors based on optical morphology and corrections for variability in optical magnitudes between observations at different epochs. Such steps can only be taken in a very select number of fields (in the case of variability) or in very small areas with high-resolution optical imaging (for morphology selection), making it impractical to incorporate these steps in our photo-z strategy[As optical surveys with long-term variability measurements (e.g. PanSTARRs Medium Deep Survey or the Large Synoptic Survey Telescope <cit.>) reach the depths required for deep extragalactic surveys, such corrections for variability will become significantly easier to implement and offer significant photo-z improvements for some source types.]. However, it is also these populations have been shown to benefit from empirical (or machine learning) photo-z estimates <cit.>. In future, the expansion of the hierarchical Bayesian analysis to include more redshift estimates tailored for the difficult quasar populations should therefore offer further significant improvements.Further improvements to photo-z estimates for the radio population will also be greatly aided by the forthcoming WEAVE-LOFAR spectroscopic survey <cit.>. WEAVE-LOFAR will obtain >10^6 spectra for radio sources from the LOFAR 150 MHz survey; providing spectroscopic redshifts and source classifications for an unprecedented number of radio sources. In particular, the deepest tier of the survey will target sources as faint as S_ν, 150≈ 100 μ^-1 over several deep fields. Such a sample will provide an extensive and unbiased training sample that can be used to improve photo-z estimates for the sub-mJy radio sources in the widest tier of the LOFAR survey and many others besides.In addition to providing samples for photo-z evaluation/training which are more representative, significantly larger samples of spectroscopic redshifts will also offer potential improvements in the HB combination procedure outlined in this paper. By allowing for the calibration of photo-zs and tuning of hyper-parameters in smaller subsets (e.g. split into bins in several parameters; optical magnitude/redshift/radio power etc.), further gains to the consensus redshift accuracy and precision will be gained <cit.>. §.§ Radio surveys for cosmologyOne of the key scientific capabilities provided by the next generation of radio interferometers like the Square Kilometre Array is as a tool for studying cosmology<cit.>. 
One such avenue for studying cosmology with radio surveys is through weak lensing (WL) experiments <cit.>. Thanks to the different systematics offered by radio continuum observations (both in the intrinsic populations explored and the instrumentation), weak lensing studies with the SKA will be highly complementary to the increasingly powerful optical weak lensing studies planned for the next decades <cit.>.As with their optical counterparts, weak lensing studies with the SKA will be heavily reliant on photometric redshift estimates based on the all sky photometric data available. While extensive effort is being invested in reaching the redshift accuracy requirements for optical weak lensing experiments <cit.>, a key question for the SKA experiments is what effect does the radio selection have on the expected photo-z accuracies and biases?The radio continuum depths explored in the study do not reach the faint fluxes expected for planned weak lensing studies outlined in <cit.>, so we cannot conclusively say what the full effects might be. Nevertheless, based on the available results for the brighter radio population it is relatively clear that the prognosis for SKA WL studies remains tied to that of the comparable optical studies.The radio source population for which our photometric redshift estimates are at least as accurate are those radio sources which are also identified as luminous X-ray sources or host dust obscured AGN. In this regard the SKA WL experiments will be limited by the same source types as will effect the optical WL experiments. The radio source population which are not classified as likely AGN were found to have more accurate photometric redshift estimates than the population not detected by radio imaging. Provided the techniques and selection criteria developed for removing the bias of AGN in optical WL experiments can be applied equally to the SKA WL samples, the radio continuum selection should not present any critical problems. More studies will be required to test this once the SKA pathfinder and precursor surveys reach their full depths and large samples of un-biased spectroscopic redshifts are available.§ SUMMARYWe have presented a study of template based photometric redshift accuracy for two samples of galaxies drawn from a wide area <cit.> and a deep <cit.> survey field. We calculate photometric redshifts using three different galaxy template sets from the literature. The three template sets represent libraries which are either commonly used in photometric redshift estimates within the literature <cit.> or are designed to cover the broad range of SEDs observed in local galaxies <cit.>.Exploring the photometric redshift quality as a function of galaxy radio properties, we find: * At low-redshift (z < 1), radio detected galaxies typically have better photo-z scatter and outlier fractions than galaxies with comparable magnitudes, redshifts and colours but are undetected in radio. However, as redshift increases, radio-detected galaxies perform worse than their radio undetected counterparts. * Within a redshift range where photo-z quality remains relatively constant, the outlier fraction of all photo-z estimates increases towards the highest radio powers (and radio flux) while scatter remains roughly the same. This trend is independent of survey field and template set. * Photo-zs for radio sources not identified as AGN through X-ray, optical or IR selection criteria perform comparably to radio un-detected sources at the same redshifts. 
* Without additional calibration, the redshift PDFs for all three template sets are overconfident; producing error estimates which are significantly underestimated. By combining all three photo-z estimates through hierarchical Bayesian combination <cit.> we are able to produce a new consensus estimate which outperforms any of the individual estimates which went into it. The consensus redshift estimates match or better the measured scatter or outlier fraction of the best individual estimate for most radio population subsets while also providing improved predictions on the redshift uncertainties. Nevertheless, while offering some improvement, the overall quality of photo-z estimates for radio sources which are X-ray sources or optical/IR AGN is still relatively poor; with high outlier fractions (>20%) and very large scatter (σ_>0.2×(1+z)).Future work tailored to improving our photo-z estimates for IR/optically selected AGN will be required to achieve the some of the key science goals for deep radio continuum surveys. In the second paper in this series we will explore the improvements offered by photometric redshift estimates from gaussian processes <cit.>, both in isolation and when combined with the template based estimates through our hierarchical bayesian combination procedure.§ ACKNOWLEDGEMENTSThe research leading to these results has received funding from the European Union Seventh Framework Programme FP7/2007-2013/ under grant agreement number 607254. This publication reflects only the author's view and the European Union is not responsible for any use that may be made of the information contained therein. KJD and HR acknowledges support from the ERC Advanced Investigator programme NewClusters 321271. PNB is grateful to STFC for support via grant ST/M001229/1. KM was supported by the Polish National Science Center (UMO-2012/07/D/ST9/02785 and UMO-2013/09/D/ST9/04030). The authors thank Mark Brodwin, Duncan Farrah, Mattia Vaccari and Lingyu Wang for valuable additional feedback and discussion. Finally, the authors thank the anonymous referee for their feedback and contributions to improving the manuscript.mnras
http://arxiv.org/abs/1709.09183v1
{ "authors": [ "Kenneth J Duncan", "Michael J. I. Brown", "Wendy L. Williams", "Philip N. Best", "Veronique Buat", "Denis Burgarella", "Matt J. Jarvis", "Katarzyna Malek", "S. J. Oliver", "Huub J. A. Rottgering", "Daniel J. B. Smith" ], "categories": [ "astro-ph.GA" ], "primary_category": "astro-ph.GA", "published": "20170926180016", "title": "Photometric redshifts for the next generation of deep radio continuum surveys - I: Template fitting" }
APS/[email protected] now at PSI, Villigen, Switzerland now at SLAC, Menlo Park, CA, USAnow at VARIAN PT,Troisdorf, Germany now at DLR-VE, Oldenburg, Germany now at DESY,Hamburg, Germany KIT, Karlsruhe, GermanyTo understand and control dynamics in the longitudinal phase space, time-resolved measurements of different bunch parameters are required. For a reconstruction of this phase space, the detector systems have to be synchronized. This reconstruction can be used for example for studies of the micro-bunching instability which occurs if the interaction of the bunch with its own radiation leads to the formation of sub-structures on the longitudinal bunch profile. These sub-structures can grow rapidly – leading to a sawtooth-like behaviour of the bunch. At KARA, we use a fast-gated intensified camera for energy spread studies, Schottky diodes for coherent synchrotron radiation studies as well as electro-optical spectral decoding for longitudinal bunch profile measurements. For a synchronization, a synchronization scheme is used which compensates for hardware delays. In this paper, the different experimental setups and their synchronization are discussed and first results of synchronous measurements presented.Valid PACS appear here Synchronous Detection of Longitudinal and Transverse Bunch Signals at a Storage Ring Anke-Susanne Müller December 30, 2023 ====================================================================================§ INTRODUCTIONThe investigation of the dynamics of short bunches in a storage ring sets stringent requirements for the diagnostics: Response time and repetition rate of the different detector systems must be sufficient to study a single bunch in a multi-bunch environment on a single-turn basis. The required time scales are given by the RF frequency and the revolution time. In addition, the different detector systems must be synchronized to study simultaneous changes for example in the bunch shape and in the emitted synchrotron radiation. § MICRO-BUNCHING INSTABILITY Under certain conditions, periodic bursts of coherent synchrotron radiation (CSR) were recorded at different facilities (e.g. NSLS VUV <cit.>, SURF II <cit.>, ALS <cit.>, BESSY-II <cit.>, Diamond <cit.>, SOLEIL <cit.>). Theoretical studies of this phenomena gave hints to a self-interaction of the bunch with its own wake-field <cit.>, which can lead to the occurrence of sub-structures in the longitudinal phase space. The rising amplitude of these sub-structures can be explained by the CSR impedance <cit.>. The rapid increase of the amplitude is coupled to an overall increase in projected bunch size. At one point, diffusion and radiation damping outweigh the growth of the sub-structures and they start to dissolve and damp down, leading to a reduction of the bunch size. The reduced bunch size leads again to a stronger wake-field and subsequently the next CSR instability cycle starts over. Thus, the bunch size is modulated with a repetitive sawtooth pattern in time <cit.>. Due to the changing bunch profile, the CSR is emitted in bursts and the behavior is referred to as bursting. As the longitudinal phase space is spanned by time and energy, it can be studied by time-resolved measurements of the longitudinal bunch profile and energy spread. The CSR intensity depends on the longitudinal bunch profile, so it can also be used to probe the dynamics. 
A monitoring of the longitudinal phase space leads to a deeper understanding of the beam dynamics, especially above the bursting threshold and could potentially lead to methods for control. Such a reconstruction requires synchronous measurements of the different bunch properties.At KARA, we use different diagnostics tools for theses measurements which are integrated in a hardware synchronization system to enable simultaneous data acquisition. § KARA KARA (Karlsruhe Research Accelerator), the [2.5]GeV KIT storage ring, can be operated in different modes including a short bunch mode. In this mode, the momentum-compaction factor α_c is lowered by changing the magnet optics <cit.> to study the micro-bunching instability <cit.>. At KARA, the revolution time is [368]ns and the bunch spacing of [2]ns is defined by the radio frequency (RF) of [500]MHz. One single measurement should take at most [2]ns, so that only the signal of one bunch is detected.As part of the bunch detection setup, several systems have been installed and commissioned: A fast-gated intensified camera (FGC) to measure the horizontal bunch position and size, fast THz detectors to detect the coherent synchrotron radiation (CSR) as well as an electro-optical bunch profile monitor to determine the longitudinal bunch profile <cit.>.§ HORIZONTAL BUNCH SIZE MEASUREMENTSMeasurements of the horizontal bunch size σ_x allow studies of the energy spread σ_δ as they are related by<cit.>σ_x = √(β_x ·ϵ_x + ( D_x ·σ_δ)^2) in a dispersive section.Therefore, a time-resolved measurement of the horizontal bunch size reveals changes in energy spread. We use a fast-gated intensified camera (FGC). It is located at the Visible Light Diagnostics (VLD) port at KARA which uses incoherent bending radiation from a 5^∘ port ofa dipole magnet <cit.>. The FGC setup consists of a camera and a fast-rotating mirror which sweeps the incoming light over the entrance aperture of the FGC during the image acquisition. In combination with a fast switching of the image intensifier, this allows single turn images of a bunch, even in a multi-bunch fill <cit.>. The resolution of the setup is confined by a rectangular crotch absorber with a horizontal width of [20]mm which is located [1600]mm downstream of the radiation source point <cit.>. Taking diffraction effects into account, the resolution is estimated to be [77]μm using a wavelength of [400]nm <cit.>.Fig. <ref> shows a raw image recorded with the FGC. Here, the machine was operated without an active feedback system, therefore the bunch is undergoing a synchrotron oscillation (f_s ≈ [11.24]kHz).The individual spots on this image are the convolution of the charge distribution with the so-called filament beam spread function (FBSF). The FBSF can be seen as an extension of the point spread function for a moving point-like source, in this case a moving single electron <cit.>. We use the software OpTalix [OpTalix Pro 8.82, www.optenso.com] to simulate the optical system and the imaging process. Instead of deconvolving the FBSF from the data and fitting a Gaussian curve, we fit a convolution of the FBSF with a Gaussian curve. Compared to the deconvolution this has the advantage that no estimation of the signal-to-noise ratio is required, in addition, this method is fast and robust. For one spot of an FGC raw image, this analysis is illustrated in Fig. <ref>. It can be seen that the fit reproduces the distorted shape of the profile very well, enabling a determination of the horizontal bunch size with single turn resolution. 
Experimental studies showed that the horizontal bunch size remains constant below the bursting threshold, while it starts to increase for currents above the threshold <cit.>. For the equilibrium case below the bursting threshold, the measured bunch size is in good agreement with simulated values usingthe accelerator toolbox for Matlab (AT) <cit.> and the lattice model of KARA, see Fig. <ref>. For the AT model, LOCO fits <cit.> are used to determine the quadrupole strengths from measurements of the orbit response matrix and the dispersion.A quantitative determination of the energy spread is not possible due to the unknown contribution of the horizontal emittance to the horizontal bunch size, therefore we are limited to qualitative studies with the horizontal bunch size taken as a measure for the energy spread. Besides the studies of the micro-bunching instability, the system can be used to study different kinds of instabilities, e.g. during the injection or on the energy ramp.§ COHERENT SYNCHROTRON RADIATION The investigation of the CSR dynamics includes THz detectors that are fast enough to resolve the emitted synchrotron radiation for each bunch in a multi-bunch environment.We use commercially available room temperature zero-bias Schottky diode detectors with different sensitivity ranges <cit.>.To read out these detectors we use the data acquisition (DAQ) system KAPTURE <cit.>. KAPTURE is a fast FPGA-based system with four input channels that can be used to sample the signal of one detector with four sample points per bunch and turn. Alternatively, it is possible to sample up to four detectors in parallel with a bunch-by-bunch sampling rate of [500]MHz. For the measurements presented here, we configured KAPTURE to digitize the peak intensity of multiple different detectors simultaneously. Figure <ref> illustrates this multi-detector operation mode. It shows the signals recorded by an avalanche photo diode (APD) sensitive to incoherent synchrotron radiation in the visible range and a Schottky diode sensitive in the THz range. The APD is insensitive to arrival time oscillations caused by synchrotron oscillation, as the duration of the APD pulse is longer than the amplitude of the synchrotron oscillation and KAPTURE samples at a fixed phase relative to the RF. Thus, the observed oscillation of the APD signal is due to an intensity fluctuation. The bunch oscillates horizontally around the focal point of the optics and the limited aperture of the beam line optics transfers this into an intensity oscillation. This signal will play a vital role for the calibration of the synchronization in Sec. <ref>.The combination of KAPTURE and Schottky diodes is used for detailed and systematic studies of the CSR emission during the micro-bunching instability. The analysis of the fluctuation of the CSR intensity allows for example a fast and precise mapping of the bursting threshold <cit.>. Using four Schottky diodes with different frequency bands we can configure the KAPTURE system into a bunch-by-bunch spectrometer for CSR studies <cit.>. § LONGITUDINAL BUNCH PROFILE The technique of electro-optical spectral decoding (EOSD) <cit.> is used to determine the longitudinal bunch profile and arrival time <cit.>. This is done by inserting an electro-optical crystal into the vacuum pipe to sample theelectric near-field of the bunch <cit.>. This field turns the crystal birefringent. 
A laser pulse, chirped on the ps-time scale, is directed through the crystal and the induced birefringence turns the initial linear polarization into an elliptical. Thus the bunch profile is imprinted into the polarization of the laser pulse. A crossed polarizer transfers this into an intensity modulation and, due to the unique, ideally linear, correlation between time and wavelength in the chirped laser pulse, the bunch profile can be determined by recording the modulated laser pulse spectrum. Measuring in the near field has the advantage that there is no frequency cut-off of the electric field by the beam-line.The spectrometer used for this measurement is based on the KALYPSO system (KArlsruhe Linear arraY detector for MHz-rePetition rate SpectrOscopy <cit.>). KALYPSO consists of a 256-pixel line array that can be read out with a turn-by-turn frame rate of [2.7]MHz. An example recorded by this system is shown in Fig. <ref>. It shows color-coded longitudinal bunch profiles (in units of ps) as a function of time. These measurements were taken with an early development version of the KALYPSO system, which manifests in pixel errors and other artifacts. They are visible for example as dark horizontal lines between [-10 and -20]ps. At the beginning, the bunch is undergoing a synchrotron oscillation as also here the feedback system was switched off. At [1]ms, this oscillation abruptly changes its amplitude due to a triggered step in the RF phase at this point in time. For these measurements, the EO system was adjusted such that the phase step has the maximum possible effect on the signal. So, the bunch shape is not reproduced optimally. Instead, the frequency of the forced synchrotron oscillation after the RF phase step can be seen clearly. This enables to take the system into account for the detector synchronization. Generally, the EOSD system can also be set up to allow measurements of the bunch profile with a sub-ps resolution, that allows to resolve sub-structures on the longitudinal bunch profile <cit.>. By using KALYPSO this setup allows turn-by-turn measurements <cit.>.§ DETECTOR SYNCHRONIZATIONA straight forward way to measure synchronously the different bunch parameters is to feed the signals from the different detectors into one common DAQ system, e.g. an oscilloscope. This is suitable if all devices are located close to each other at a storage ring with compact design (e.g. <cit.>). In our case, this is not directly possible for several reasons. The first point is the location of the detector systems at different positions around the storage ring. In addition, at these systems dedicated post-processing of the raw-data is required. The FGC and the EOSD setup delivers 2D images while the KAPTURE system provides – in the multi-detector mode – one data point per bunch, turn, and detector. The acquisition has to be aligned temporally to enable correlation studies between derived parameters like CSR intensity, energy spread and bunch length.To provide a common origin for the time axis with single-turn resolution, we need to synchronize the acquisition. The timing system <cit.> – based on one event generator (EVG) and several event receivers (EVR) – is used to generate a synchronized measurement trigger. Furthermore, the intrinsic hardware delays have to be taken into account. KAPTURE and KALYPSO sample continuously, independent of the coupled detectors. 
The acquisition trigger starts the data storage into memory and therefore an instantaneous synchronization of the recording is achieved.For the FGC this is different: Due to the measurement controls and the inertia of the rotating mirror, a certain advance time is needed before the first sample can be recorded. To protect the sensor from overexposure, the rotating mirror has a hold position in which the light is not on the sensor. Thus the light spot has to be driven first onto the sensor to be recorded. This also ensures the measurement is taken in the regime of linear mirror motion. The synchronization schemes takes these delays into account and compensates for them by triggering the FGC the required time before other devices (see <cit.>). The test of the synchronization requires a signal with a signature on the longitudinal and horizontal plane of the beam that all detectors are sensitive to. We used the low-level RF (LLRF) system to trigger a step in the RF phase. This sudden change leads to the onset of a synchrotron oscillation around the new longitudinal equilibrium, which all different measurement stations are sensitive to. For the calibration of KAPTURE, we used the visible light APD as a reference. This convenient reference is insensitive to fluctuations on the bunch profile that can occur following a sudden RF phase-step, which can affect the CSR signal recorded by Schottky detectors.Figure <ref> shows a successful synchronization event. The onset of the synchrotron oscillation is recorded by all detectors at the same time. While the synchrotron oscillation on the FGC and the APD are in phase and both in the horizontal plane, the one fromis phase-shifted by π/2. This can be explained by the dynamics in longitudinal phase space. Here, the synchrotron oscillation corresponds to a rotation of the longitudinal electron phase space density. The projection onto the position and energy axes are separated by π/2. The FGC and the APD are sensitive to the energy because their source points are located in dispersive sections. KALYPSO, in contrast, measures the arrival time.The APD signal recorded with KAPTURE shows voltage dips at the highest values and maximum bunch deflection. The origin of the dips are unclear, but are likely due to intensity cut-off from the finite aperture of the optical beam path or jumps in the timing trigger as KAPTURE samples at a fixed phase. Nevertheless, the main feature, a strong synchrotron oscillation, can be seen and used to determine the phase and the frequency of the electron beam motion. As discussed before, the EOSD setup was optimized for temporal resolution. Due to the finite size of the line array and the length of the chirped laser pulse, this leads to a cut-off of the upper part of the signal showing the longitudinal bunch position.§ SYNCHRONOUS MEASUREMENTS Synchronization of the detector systems allows time-resolved studies of the beam dynamics. In the following, we limit our studies to energy spread and the CSR emission during the micro-bunching instability. Figures <ref> and <ref> show two synchronous measurements of the horizontal bunch for the energy spread (top) and the corresponding CSR emission (bottom) recorded with a broadband Schottky diode detector ([50]GHz - [2]THz). In both cases, the two curves have the same modulation period. This behaviour was predicted by simulations <cit.> and also observed experimentally at SURF III <cit.>.In Fig. <ref>, the CSR intensity decreases until it reaches a constant level between two bursts. 
After the blow-up at the onset of a burst, the energy spread decreases until it reaches the same lower limit where it is blown-up again. At the same time, the CSR burst starts to arise. Fig. <ref> shows a different case using different machine settings and a lower bunch current.Here, the CSR intensity is not constant between two bursts, but increases slightly while the energy spread is still decreasing. This can be explained by the coupling of bunch length and energy spread in the longitudinal phase space. If the bunch gets shorter, it emits more CSR – regardless of sub-structures. The sensitivity to these bunch length fluctuations is dependent on the frequency band of the detector and possible shielding of the CSR due to apertures in the beam line. Thus, we do not observe it for longer bunches.For both cases, the bunch contracts until an instability threshold is reached. At this point, the charge density inside the bunch is high enough and the bunch becomes unstable. Numerical simulations of the longitudinal phase space using the Vlasov-Fokker-Planck solver Inovesa <cit.> showed, that at this point the amplitude of the sub-structures starts to grow rapidly <cit.>. To study these effects in more detail, the FGC can be configured to record shorter time ranges with a higher temporal resolution. For the third example discussed here, the gate separation was set to [24]turns and a time range of [500]μ s was recorded. In this case an additional narrow-band Schottky diode was used synchronously. The result is illustrated in Fig. <ref>. At the beginning, the energy spread given by the horizontal bunch size (top panel) is slightly decreasing due to damping effects while the CSR intensity on the broadband Schottky diode is constant, for the narrow-band diode it is almost zero.Around t = [0.1]ms, the first changes can be seen when the CSR intensity on the broadband Schottky diode starts to increase, while for the narrow-band this increase starts approximately [0.1]ms later. The earlier increase of the signal on the broadband Schottky diode is due to its lower limit in the frequency sensitivity <cit.>.While the CSR intensity starts to increase, the energy spread is still decreasing due to damping effects.At [0.27]ms, it reaches a lower limit triggering the onset of another burst. These examples clearly illustrate the diagnostic capability of synchronized detector systems. It allows to fully exploit the potential of the different, synchronous systems. While with KAPTURE in the multi-detector mode it is possible to study phase differences between the different frequency bands of the CSR <cit.>, these phase studies can now be extended to take multiple detector systems into account. § SUMMARYThe analysis of fast rising instabilities, like the micro-bunching instability, benefits from turn-by-turn detector systems. To resolve complimentary features in 6-D phase space, several detector systems which observe different bunch parameters need to be synchronized. At KARA, several independent detector systems with a single-turn resolution are combined in a distributed sensor network. The energy spread is investigated using an FGC, while the CSR intensity is measured using a KAPTURE system and Schottky diodes. For studies of the longitudinal bunch profile an EOSD setup is used in the near-field combined with KALYPSO. 
The integration of these systems into a hardware synchronization scheme opens up new diagnostics opportunities.While the individual setups allowed an insight into the dynamics of the micro-bunching instability, successful synchronization is a first step towards a multi-dimensional analysis of the longitudinal phase space. We experimentally showed that the energy spread and the CSR undergo a modulation with the same period length. In addition, detailed studies revealed a certain phase difference between the CSR in different frequency ranges and energy spread.In the future we will improve and upgrade the individual detector systems while keeping the capability for synchronized recording. A fast line array optimized for the visible frequency range will be combined with KALYPSO and replace the FGC setup. This will allows us to continuously stream horizontal bunch profiles at each revolution. It will also be used at the EOSD setup with a line array, optimized for the laser frequency and improving the signal-to-noise ratio. Further, the next version of KALYPSO will support a streaming rate of up to [10]Mfps and up to 2048 pixels <cit.>. A new version of KAPTURE, KAPTURE2 will provide 8 simultaneous readout paths, allowing more accurate peak reconstruction and the measurement of the arrival time <cit.>. Alternatively, the amplitudes of up to 8 different detectors can be recorded. With narrow-band detectors, sensitive in different frequency ranges, this allows recording single-shot spectra <cit.>.§ ACKNOWLEDGEMENTS We would like to thank Anton Plech, Yves-Laurent Mathis and Michael Süpfle for their support. This work has been supported by the Initiative and Networking Fund of the Helmholtz Association under contract number VH-NG-320 and by the BMBF under contract numbers 05K10VKC and 05K13VKA. Miriam Brosi, Patrik Schönfeldt and Johannes Steinmann acknowledge the support of the Helmholtz International Research School for Teratronics (HIRST) and Edmund Blomley the support of the Karlsruhe School of Elementary Particle and Astroparticle Physics (KSETA).
http://arxiv.org/abs/1709.08973v3
{ "authors": [ "Benjamin Kehrer", "Miriam Brosi", "Johannes L. Steinmann", "Edmund Blomley", "Erik Bründermann", "Michele Caselle", "Stefan Funkner", "Nicole Hiller", "Michael J. Nasse", "Gudrun Niehues", "Lorenzo Rota", "Manuel Schedler", "Patrik Schönfeldt", "Marcel Schuh", "Paul Schütze", "Marc Weber", "Anke-Susanne Müller" ], "categories": [ "physics.acc-ph" ], "primary_category": "physics.acc-ph", "published": "20170926122930", "title": "Simultaneous Detection of Longitudinal and Transverse Bunch Signals at a Storage Ring" }
UTF8gbsn [email protected] Institute for Quantum Information and Matter, California Institute of Technology, Pasadena, California 91125, USA [email protected] Department of Physics, Harvard University, Cambridge, Massachusetts 02138, USAWe study the entanglement entropy of eigenstates (including the ground state) of the Sachdev-Ye-Kitaev model. We argue for a volume law, whose coefficient can be calculated analytically from the density of states. The coefficient depends on not only the energy density of the eigenstate but also the subsystem size. Very recent numerical results of Liu, Chen, and Balents confirm our analytical results.Eigenstate entanglement in the Sachdev-Ye-Kitaev model Yingfei Gu (顾颖飞) December 30, 2023 ======================================================§ INTRODUCTION Entanglement, a concept of quantum information theory, has been widely used in condensed matter and high energy physics <cit.> to provide insights beyond those obtained via “conventional” quantities. For ground states of (geometrically) local Hamiltonians, it characterizes quantum criticality <cit.> and topological order <cit.>. The scaling of entanglement <cit.> is quantitatively related to the classical simulability of quantum many-body systems <cit.>.Besides ground states, it is also important to understand the entanglement of excited eigenstates. Significant progress has been made in this regard <cit.>. For chaotic local Hamiltonians, one expects a volume law: The entanglement entropy of an eigenstate between a subsystem (smaller than half the system size) and its complement scales as the subsystem size with a coefficient depending on the energy density of the eigenstate. Indeed, analytical arguments <cit.> and numerical simulations <cit.> strongly suggest that it is to leading order equal to the thermodynamic entropy of the subsystem at the same energy density.Let us put this result in context. The eigenstate thermalization hypothesis (ETH) states that for expectation values of local observables, a single eigenstate resembles a thermal state with the same energy density <cit.>. The equivalence of entanglement and thermodynamic entropies is a variant of ETH for the von Neumann entropy.Over the past few years, the Sachdev-Ye-Kitaev (SYK) model <cit.> has become a very active research topic in condensed matter and high energy physics. As a quantum mechanical model of N≫1 Majorana fermions with random all-to-all interactions, it is exactly solvable in the large-N limit, and has an extensive zero-temperature entropy. Entanglement and ETH in the SYK model have recently been studied <cit.>. In particular, the numerical results of Fu and Sachdev <cit.> suggest the breakdown of the equivalence of entanglement and thermodynamic entropies in the SYK model: The ground-state entanglement entropy appears to be slightly less than the maximum (1/2ln2 per Majorana fermion) and is significantly greater than the zero-temperature entropy.We argue that due to all-to-all interactions, the equivalence of entanglement and thermodynamic entropies should be modified as follows: The entanglement entropy of an eigenstate (including the ground state) for a subsystem smaller than half the system size is to leading order equal to the thermodynamic entropy of the subsystem at a different energy density, which depends on not only the energy density of the eigenstate but also the subsystem size. 
This allows us to derive an analytical expression for the scaling of the eigenstate entanglement entropy in the SYK model from the density of states.The SYK model is maximally chaotic <cit.> in the sense of the maximum “Lyapunov exponent” <cit.> for the exponential growth of out-of-time-ordered correlators <cit.>. Our argument is not related to this dynamical behavior, and should apply to a broad class of chaotic quantum many-body systems.§ PRELIMINARIESThe entanglement entropy of a bipartite pure state ρ_AB is defined as the von Neumann entropyS(ρ_A)=-(ρ_Alnρ_A)of the reduced density matrix ρ_A=_Bρ_AB. Consider a quantum mechanical system of N Majorana fermions χ_1,χ_2,…,χ_N, where N≫1 is an even number. The Hamiltonian of the SYK model isH=∑_1≤ i<j<k<l≤ NJ_ijklχ_iχ_jχ_kχ_l,{χ_i,χ_j}=δ_ij,where the coefficients are independent real Gaussian random variables with zero mean J_ijkl=0 and varianceJ_ijkl^2= 3!/N^3. We summarize the spectral properties of the SYK model in the thermodynamic limit N→+∞: * The mean energy of H is H/d=0, where d=2^N/2 is the dimension of the Hilbert space.* The distribution of eigenvalues near the mean energy is asymptotically normal with variance (H^2)/d=Θ(N). Asymptotic normality is a general feature of models with random few-body interactions <cit.>.* The ground-state energy is -NE_0+o(N) (extensive), where E_0>0 is a known constant.* The density of states at energy NE is <cit.>D_N(E)∼ e^[1/2ln 2-1/16arcsin^2(E/E_0)]N.This is an excellent approximation away from the edges of the spectrum, i.e., when E_0-|E| is not too small. § ARGUMENT We divide the system into two subsystems A and B. Subsystem A consists of M Majorana fermions, where M is even. Assume without loss of generality that M≤ N/2. We split the Hamiltonian (<ref>) into three parts:H=H_A+H_∂+H_B,where H_A(B) contains terms acting only on subsystem A(B), and H_∂ consists of cross terms. We observe thatH'_A:=(N/M)^3/2H_Ais the SYK model of M Majorana fermions.The thermal state maximizes the von Neumann entropy among all states with the same energy.For completeness, we give a simple proof of this well-known fact. Letσ_β:=1/Z_βe^-β H, Z_β:= e^-β Hbe a thermal state at inverse temperature β. For any density matrix ρ, the relative entropyS(ρσ_β):=(ρlnρ-ρlnσ_β)≥0is nonnegative. If ρ and σ_β have the same energy, thenS(ρ)=-(ρlnρ)≤-(ρlnσ_β)=β(ρ H)+ln Z_β=β(σ_β H)+ln Z_β=-(σ_βlnσ_β)=S(σ_β).This completes the proof of Lemma <ref>. To leading order,S(ρ_A)≤ln D_M((ρ_AH'_A)/M)for any density matrix ρ_A of subsystem A.Consider the SYK model H'_A. The inverse temperature β at which the thermal state σ_β:=e^-β H'_A/ e^-β H'_A has the same energy as ρ_A is obtained by solving∫_-E_0^E_0Mϵ e^-β MϵD_M(ϵ)dϵ/∫_-E_0^E_0e^-β MϵD_M(ϵ)dϵ=(ρ_AH'_A).The saddle-point approximation, which becomes exact in the limit M→+∞, givesβ=∂ln D_M(ϵ)/M∂ϵ|_ϵ=(ρ_AH'_A)/M.Using Lemma <ref>,S(ρ_A)≤ S(σ_β)=ln∫_-E_0^E_0e^β((ρ_AH'_A)-Mϵ)D_M(ϵ)dϵ=ln D_M((ρ_AH'_A)/M).In the last step, we used the saddle-point approximation and Eq. (<ref>). Let |ψ⟩ be an eigenstate of the SYK model (<ref>) with energy NE. To leading order,S(ρ_A)≤ln D_M((M/N)^3/2E),where ρ_A=_B|ψ⟩⟨ψ| is the reduced density matrix. To leading order, the quartic Hamiltonian (<ref>) has N^4/4! terms, M^4/4! of which are in H_A. In average,(ρ_AH_A)=(M/N)^4NE.Substituting Eq. (<ref>),(ρ_AH'_A)=(M/N)^5/2NE=(M/N)^3/2ME.Hence, Theorem <ref> follows from Lemma <ref>. 
A slight modification of the proof(s) of Theorem <ref> (and Lemma <ref>) yieldsFor (translationally invariant) local Hamiltonians, the entanglement entropy of an eigenstate between two subsystems is to leading order upper bounded by the thermodynamic entropy of the smaller subsystem at the same energy density.The upper bound in Theorem <ref> holds regardless of whether the system is chaotic. Notably, it is not tight in (integrable) free-fermion systems <cit.>. In chaotic systems, analytical arguments <cit.> and numerical simulations <cit.> strongly suggest that the bound is attained. Since the SYK model is chaotic, one might expect that the upper bound in Theorem <ref> is attained. Thus, we have a volume lawS(ρ_A)∼[1/2ln 2-1/16arcsin^2(( M/N)^3/2E/E_0)]Mwith a coefficient depending on E (the energy density of the eigenstate) and M (the size of the smaller subsystem). This is the leading-order scaling of the eigenstate entanglement entropy, and we are unable to calculate the subleading corrections. Suppose that N is a multiple of 4. For M=N/2, the ground-state entanglement entropy scales asS(ρ_A)∼(1/2ln 2-1/16arcsin^21/2√(2))M ≈(0.34657-0.00816)M=0.33841M.§ CONCLUSIONS AND OUTLOOK In the SYK model, we have argued that the entanglement entropy of an eigenstate with energy NE for a subsystem of size M≤ N/2 is to leading order equal to the thermodynamic entropy of the subsystem at energy (M/N)^3/2ME. Therefore, * The entanglement entropy of an eigenstate obeys a volume law with the maximum coefficient 1/2ln2 if the subsystem size is a vanishing fraction of the system size. This is because the subsystem is at the mean energy density of the Hamiltonian (<ref>).* The entanglement entropy of an eigenstate with finite energy density obeys a volume law with a non-maximal coefficient if the subsystem size is a constant fraction of the system size. In the future, it would be interesting to study the Renyi entanglement entropy of eigenstates of the SYK model. As a generalization of entanglement entropy, the Renyi entanglement entropy reflects the entanglement spectrum (the full spectrum of the reduced density matrix ρ_A). See Refs. <cit.> for recent results on the Renyi entanglement entropy of eigenstates of chaotic local Hamiltonians.We would like to thank Xiao-Liang Qi for pointing out Lemma <ref>. Y.H. acknowledges funding provided by the Institute for Quantum Information and Matter, an NSF Physics Frontiers Center (NSF Grant PHY-1733907). Y.H. is also supported by NSF DMR-1654340. Y.G. is supported by the Gordon and Betty Moore Foundation EPiQS Initiative through Grant GBMF-4306.Note added.|Very recently, we became aware of related work by Liu et al. <cit.>, which studied eigenstate entanglement in the SYK model using different methods. Among other results, the ground-state entanglement entropy was calculated up to N=44 Majorana fermions using exact diagonalization. For M=N/2 (the subsystem size is half the system size), the data are well fitted by the expression 0.3375M-0.666, which is consistent with our analytical result (<ref>).abbrv
http://arxiv.org/abs/1709.09160v2
{ "authors": [ "Yichen Huang", "Yingfei Gu" ], "categories": [ "hep-th", "cond-mat.stat-mech", "cond-mat.str-el", "quant-ph" ], "primary_category": "hep-th", "published": "20170926175518", "title": "Eigenstate entanglement in the Sachdev-Ye-Kitaev model" }
^1 Research Institute for Astronomy and Astrophysics of Maragha (RIAAM), P.O. Box 55134-441, Maragha, Iran.^2Department of General Studies (Mathematics Section), Jubail Industrial College,Royal Commission for Jubail and Yanbu, Jubail Industrial City 31961, Kingdom of Saudi Arabia^3Department of Mathematics, Institute of Applied Sciences and Humanities, G L AUniversity, Mathura-281 406, Uttar Pradesh, India.A BIonic diode is constructed of two polygonal manifolds connected by aChern-Simons manifold.The shape and the angle between atoms of molecules on the boundary of two polygonal manifoldsare completelydifferent. For this reason, electrons on the Chern-Simons manifold are repelled bymolecules at the boundaryofone manifold and absorbed bymolecules on the boundary of another manifold. The attractive and repulsiveforces between electrons are carried by masive photons.For example, when two non-similar trigonal manifoldsjoin to each other, one non-symmetrical hexagonal manifold is emerged and the exchanged photons form Chern-Simonsfields which live on a Chern-Simonsmanifold in a BIon. While, for a hexagonal manifold, with similar trigonalmanifolds, the photons exchanged between two trigonal manifolds cancel the effect of each other and BIonic energybecomes zero. Also, exchanging photons between heptagonal and pentagonal manifolds lead to the motion of electronson the Chern-Simons manifold and formation of BIonic diode.The mass ofphotons depend on the shape of moleculeson the boundary of manifolds and the length of BIon in a gap between two manifolds. PACS numbers: 98.80.-k, 04.50.Gh, 11.25.Yb, 98.80.Qc Keywords: Diode, Photons, Holography, Chern-Simons, BIonic system The BIonic diode in a system of trigonal manifolds Anirudh Pradhan^3[[email protected]] December 30, 2023 ==================================================§ INTRODUCTION A diod is constructed of one subsystem with extra electrons which are paired with extra holes inother subsystem. By applying one external force, these pairs are broken and an electrical current is produced. Until now, less discussions have been done on this subject. For example - the researches ofthe past few years have shown that graphene can build junctions with 3D or 2D semi-conductor materials which have rectifying characteristics and act as excellent Schottky diodes <cit.>. Themain novelty of these systems is the tunable Schottky barrier height-a property which makes thegraphene/semiconductor junction a great platform for the consideration of interface transport methods, aswell as using in photo-detection <cit.>, high-speed communications <cit.>, solar cells <cit.>,chemical and biological sensing <cit.>, etc. Also, discovering an optimal Schottky interface of graphene,on other matters like Si, is challenging, as the electrical transport is corresponded on the graphenequality and the temperature.Such interfaces are of increasing research hope for integration in diverse electronicsystems being thermally and chemically stable in all environments, unlike standard metal/semiconductorinterfaces <cit.>.Previously, we have considered the process of formation of a holographic diode by joining polygonal manifolds <cit.>.In our model, first a big manifold with polygonal molecules is broken, two child manifolds and one Chern-Simons manifoldappeared. Then, heptagonal molecules on one of child manifolds repel electronsand pentagonal molecules on anotherchild manifold absorb them. 
Since, the angle between atoms in heptagonal molecules with respect to center ofthat is less thanpentagonal molecules andparallel electrons come nearer to each other and in terms of Pauli exclusionprinciple, therefore are repelled. Also, parallel electrons in pentagonal molecules become more distant and some holes emerge.Consequently, electrons move from one of child manifolds with heptagonal molecules towards other child manifold withpentagonal molecules via the Chern-Simons manifold and a diode emerges. Also, we have discussed that this is areal diod that may be built by bringing heptagonal and pentagonal molecules among the hexagonal molecules of graphene.To construct this diode, two graphene sheets are needed which are connected through a tube. Molecules, at thejunction points of one side of the tube, should have the heptagonal shapes and other molecules at the junctionpoints of another side of the tube should have the pentagonal shapes. Heptagonal molecules repel and pentagonalmolecules absorb electrons and a current between two sheets is produced. This current is very similar to the currentwhich is produced between layers N and P in a system in solid state. This current was produced only from the sidewith heptagonal molecules towards the side with pentagonal molecules. Also, the current from the sheet with pentagonalmolecules towards the sheet with heptagonal molecules is zero. This characteristic can also be seen in normal diod.In this paper, we extend the consideration on holographic diodes to BIonic systems. A BIon is a system which consistof two polygonal manifolds connected by a Chern-Simons manifold. We will show that when two manifolds withtwo different types of polygonal molecules come close to each other, some massive photons appear. These photonsjoin to each other and build Cherns-Simons fields. These fields lead to the motion of electrons on the Chern-Simonsmanifold between two manifolds and one BIonic diode emerges. The mass of these photons depend on the shape of moleculeson the manifolds and the length of gap. From this point of view, our result is consistent with previous predictions forthe mass of photons in <cit.>. The outline of the paper is as follows: In section <ref>, we will show that by joining non-similar trigonal manifolds, ahexagonal diode emerges. In this diode, photons join to each other and form the Chern-Simonsfields. In section <ref>,we will consider the process of the formation of a BIonic diode from a manifold with heptagonal molecules, a manifold withpentagonal molecules and a Chern-Simons manifold. We will show that exchanged photons between manifold are massive and theirmass depends on the length of gap between manifolds. The last section is devoted to summary and conclusion.§ THE HEXAGONAL DIODEIn this section, we will show that a hexagonal diode can be built by joining two non-similar trigonal manifolds which exchangesphotons between them form Chern-Simons fields. These fields force electrons and lead to their motion between two trigonalmanifolds. Also, we will explain that if two similar trigonal manifolds join to each other, exchanged photons cancel theeffect and no diode emerges.Previously, in ref <cit.>, for explaining graphene systems, we have used of the concept of string theory. In our model,scalar strings (X) are produced by pairing two electrons with up and down spins. Also, A denotes the photon which isexchanged between electrons and F is the photonic field strength. 
Now, we will extend this model to polygonal manifolds andtrigonal manifolds and write the following action <cit.>:S_3=-T_tri∫ d^3σ√(η^ab g_MN∂_aX^M∂_bX^N+2π l_s^2G(F)))G=(∑_n=1^31/n!(-F_1..F_n/β^2)) F=F_μνF^μν F_μν=∂_μA_ν- ∂_νA_μ where g_MN is the background metric, X^M(σ^a)'s are scalar fields which are produced by pairing electrons, σ^a's are the manifold coordinates, a, b = 0, 1, ..., 3 are world-volume indices of the manifold and M,N=0, 1, ..., 10 are eleven dimensional spacetime indices. Also, G is the nonlinear field <cit.> and A is the photonwhich exchanges between electrons. Using the above action, theLagrangian for trigonalmanifold can be written as:Ł=-4π T_tri∫ d^3σ√(1+(2π l_s^2)^2G(F)+ η^abg_MN∂_aX^M∂_bX^N) , where prime denotes the derivative respect to σ. To derive the Hamiltonian, we have to obtain the canonical momentum density for photon. Since we are interesting to consider electrical solutions. Therefore we suppose that F_01≠ 0 and other components of F_αβ are zero. So, we haveΠ=δŁ/δ∂_tA^1=-∑_n=1^3n/n!(-F_1..F_n-1/β^2)F_01/β^2√(1+(2π l_s^2)^2G(F)+ η^abg_MN∂_aX^M∂_bX^N)so, by replacing ∫ d^3σ =∫ dσσ^2, the Hamiltonian may be built as <cit.>:H=4π T_tri∫ dσσ^2Π∂_tA^1-Ł=4π∫ dσ [ σ^2Π(F_01)-∂_σ(σ^2Π)A_0]-Ł In the second step, we use integration by parts and obtain the term proportional to ∂_σA. Using the constraint (∂_σ(σ^2Π)=0), we obtain <cit.>:Π=k/4πσ^2,where k is a constant. Substituting equation (<ref>) in equation (<ref>) and ∫ d^3σ =∫ dσ_3 dσ_2 dσ_1yields the following Hamiltonian:H_1=4π T_tri∫ dσ_3 dσ_2 dσ_1√(1+(2π l_s^2)^2∑_nn/n!(-F_1..F_n-1/β^2)+η^abg_MN∂_aX^M∂_bX^N)O_1 O_1=√(1+k^2_1/σ^4_1) To obtain the explicit form of wormhole- like tunnel which goes out of trigonal manifold, we needa Hamiltonian in terms of separation distance between sheets. For this reason, we redefine Lagrangian as:Ł=-4π T_tri∫ dσσ^2√(1+(2π l_s^2)^2∑_nn/n!(-F_1..F_n-1/β^2)+z'^2)O_1With this new form of Lagrangian, we repeat our previous calculations. We haveΠ=δŁ/δ∂_tA^1=-∑_nn(n-1)/n!(-F_1..F_n-2/β^2)F_01F_1/β^2√(1+(2π l_s^2)^2∑_nn/n! (-F_1..F_n-1/β^2)+η^abg_MN∂_aX^M∂_bX^N)So the new Hamiltonian can be constructed as:H_2=4π T_tri∫ dσσ^2Π∂_tA^1-Ł=4π∫ dσ [ σ^2Π(F_01)-∂_σ(σ^2Π)A_0]-Ł Again, we use integration by parts to obtain the term proportional to ∂_σA. Imposing the constraint (∂_σ(σ^2Π)=0), we obtain:Π=k/4πσ^2where k is a constant. Substituting equation (<ref>) in equation (<ref>) yields the following Hamiltonian:H_2=4π T_tri∫ dσ_3 dσ_2 dσ_1√(1+(2π l_s^2)^2∑_nn(n-1)/n!(-F_1..F_n-2/β^2)+η^abg_MN∂_aX^M∂_bX^N) O_2 O_2=O_1√(1+k^2_2/O_1σ^4_2) And if we repeat these calculations for 3 times, we obtainH_3=4π T_tri∫ dσ_3 dσ_2 dσ_1√(1+η^abg_MN∂_aX^M∂_bX^N)O_totO_tot=√(1+k^2_3/O_2σ^4_3)√(1+k^2_2/O_1σ^4_2)√(1+k^2_1/σ^4_1) O_2=O_1√(1+k^2_2/O_1σ^4_2) At this stage, we will make use of someapproximations and obtain the simplest form of theHamiltonianof trigonal manifold:A√(1+k^2/O_1σ^4)√(1+k^2/σ^4)≃ A√(1+k^2/σ^4)+Ak^2/2σ^4≃A+Ak^2/2σ^4+Ak^2/2σ^4=A/2(1+k^2/σ^4)+ A/2(1+k^2/σ^4)≃2A'√(1+k^2/σ^4)⇒ O_tot=3/2√(1+k^2/σ^4)=3/2 O_1⇒ H_3=4π T_tri∫ dσσ^2√(1+η^abg_MN∂_aX^M∂_bX^N)O_tot=4π 3 T_tri∫ dσσ^2√(1+η^abg_MN∂_aX^M∂_bX^N)O_1= 4π 3 T_tri∫ dσσ^2√(1+η^abg_MN∂_aX^M∂_bX^N)√(1+k^2/σ^4)=3/2H_linearH_linear=4π 3 T_tri∫ dσσ^2√(1+η^abg_MN∂_aX^M∂_bX^N)√(1+k^2/σ^4)where A'=A/2 is a constant that depends on the trigonal manifold action(T_tri)and other stringy constants. 
This equation shows that each pair of electrons on each side of trigonal manifold connected by a wormhole- like tunnel and form a linear BIon; then these BIons join to each other and construct anonlineartrigonal BIon. For constrcuting a hexagonal manifold, we should put two trigonal manifolds near each other so that direction of themotion of electrons and photons on two trigonal manifolds are reverse to each other ( Fig.1.). In a symmetricalhexagonal manifold, two photons cancel the effect of each other and total energy of system becomes zero. Using expressions given in Eq. (<ref>), we can write:σ_1→ -σ̅_1σ_2→ -σ̅_2σ_3→ -σ̅_3 ∫ dσ_3 dσ_2 dσ_1→-∫ dσ̅_3 dσ̅_2 dσ̅_1A_0→A̅_0 A_1→A̅_1 ⇒ H_3→ -H̅_3 For a symmetrical hexagonal manifold, the Hamiltonians of two trigonal manifolds cancel the effect of each other and totalHamiltonian of system becomes zero. This system is completely stable and can't interact with other systems. For a non-symmetricalhexagonal manifold, fields are completely different and two Hamiltonian cannot cancel the effect of each other. Using equations(<ref> and <ref> ), we have:H_3=4π T_tri∫ dσ_3 dσ_2 dσ_1√(1+η^abg_MN∂_aX^M∂_bX^N)O_tot ≠H̅_3=4π T_tri∫ dσ̅_3 dσ̅_2 dσ̅_1√(1+η^abg_MN∂_aX̅^M∂_bX̅^N)O̅_tot ⇒ S=-T_tri∫ d^3σ√(η^ab g_MN∂_aX^M∂_bX^N+2π l_s^2G(F))) ≠S̅=-T_tri∫ d^3σ̅√(η^ab g_MN∂_aX̅^M∂_bX̅^N+2π l_s^2G(F̅))) Thus, total Hamiltonian and the action of two trigonal manifolds can be obtained as: H_6^tot=H_3-H̅_3S_6^tot=S_3-S̅_3 This equation shows that if two trigonal manifolds join to each other and form the hexagonal manifold, the Hamiltonian andalso the action of hexagonal manifold is equal to the difference between the actions and Hamiltonians of two trigonal manifolds.A non-symmetrical hexagonal manifold has an active potential and can interact with other manifolds (See Fig.2.).At this stage, we can assert that the exchanged photons between two trigonal manifolds produce the Chern-Simons fields.To write our model in terms of concepts in supergravity, we should define G and C-fields. G- fields with four indicesare constructed from two strings and C-fields with three indices are produced when three ends of G-fields are placedon one manifold and one index is located on one another manifold (Figure 3). We can define G and Cs-fields as follows:G_IJKL≈ F_[IJF_KL]Cs_IJK= ϵ^IJK F_IJA_K To obtain G- andCs-fields, we will assume that two spinors with up and down spins couple to each other and exchangedphotons (X^M→ A^Mψ_↓ψ_↑, X^0→ t). We also assume that spinorsare only functionsof coordinates (σ, t). Using equation (<ref>, <ref>,<ref>), we obtain: Π=k/4πσ^2=∑_n=1^3n/n! (-F̅_1..F̅_n-1/β^2)F̅_01/β^2√(1 + ([A̅^MA̅_M(ψ_1,↓ψ_1,↑ψ_2,↓ψ_2,↑)]')^2+2π l_s^2(∑_n=1^31/n!(-F̅_1..F̅_n/β^2))) H_3=4π T_tri∫ dσ_3 dσ_2 dσ_1√(1 + ([A^MA_M(ψ_1,↓ψ_1,↑ψ_2,↓ψ_2,↑)]')^2)O_totH_3=4π T_tri∫ dσ_3 dσ_2 dσ_1√(1 + ([A^MA_M(ψ_1,↓ψ_1,↑ψ_2,↓ψ_2,↑)]')^2)× √(1+1/O_2(∑_n=1^3n/n! (-F̅_1..F̅_n-1/β^2)F̅^3_01/β^2√(1 + ([A̅^MA̅_M(ψ_1,↓ψ_1,↑ψ_2,↓ψ_2,↑)]')^2+2π l_s^2(∑_n=1^31/n!(-F̅_1..F̅_n/β^2))))^2)× √(1+1/O_1(∑_n=1^3n/n! (-F̅_1..F̅_n-1/β^2)F̅_01^2/β^2√(1 + ([A̅^MA̅_M(ψ_1,↓ψ_1,↑ψ_2,↓ψ_2,↑)]')^2+2π l_s^2(∑_n=1^31/n!(-F̅_1..F̅_n/β^2))))^2)× √(1+(∑_n=1^3n/n! (-F̅_1..F̅_n-1/β^2)F̅_01^1/β^2√(1 + ([A̅^MA̅_M(ψ_1,↓ψ_1,↑ψ_2,↓ψ_2,↑)]')^2+2π l_s^2(∑_n=1^31/n!(-F̅_1..F̅_n/β^2))))^2) Where we have shown the exchanged photons on trigonal manifold by (A,F) and the exchanged photons on a gap between twotrigonal manifolds by (A̅,F̅). 
By using the Taylor expansion method and substituting results of(<ref>)in equation (<ref>), we obtain: H_tot^6 = H_3 - H̅_3≈(4π T_tri)[1 + (2π l_s^2/β^2)F̅_[IJF̅_KL]F̅^[IJF^KL] +((2π l_s^2)^2/β^4)(ψ_1,↓ψ_1,↑ψ_2,↓ψ_2,↑)' ϵ^IJKF̅_IJA_KF̅_[IJF̅_KL]F̅^[IJF̅^KL]-(2π l_s^2/β^2)^3F̅_[IJF̅_KLF̅_MN]F̅^[IJF^KLF̅^MN] -((2π l_s^2)^2/β^4)^3(ψ_1,↓ψ_1,↑ψ_2,↓ψ_2,↑)' ϵ^IJKF̅_IJA_KF̅_[IJF̅_KLF̅_MN]F̅^[IJF^KLF̅^MN] +....] - (4π T_tri)[1 + (2π l_s^2/β^2)F̅_[IJF̅_KL]F̅^[IJF^KL] +((2π l_s^2)^2/β^4)(ψ_1,↓ψ_1,↑ψ_2,↓ψ_2,↑)' ϵ^IJKF̅_IJA_KF̅_[IJF̅_KL]F̅^[IJF̅^KL]-(2π l_s^2/β^2)^3 F_[IJF_KLF_MN]F^[IJF^KLF^MN] -((2π l_s^2)^2/β^4)^3(ψ_1,↓ψ_1,↑ψ_2,↓ψ_2,↑)' ϵ^IJK F_IJA̅_K F_[IJF_KLF_MN]F^[IJF^KLF^MN] +....]=(4π T_tri) [1+(2π l_s^2/β^2) [G̅_IJKLG̅^IJKL-G_IJKLG^IJKL]+ ((2π l_s^2)^2/β^4)(ψ_1,↓ψ_1,↑ψ_2,↓ψ_2,↑)'[ Cs G̅_IJKLG̅^IJKL-C̅s̅ G_IJKL G^IJKL ] -(2π l_s^2/β^2)^3 [G̅_IJKLMNG̅^IJKLMN-G_IJKLMNG^IJKLMN]- ((2π l_s^2)^2/β^4)^3(ψ_1,↓ψ_1,↑ψ_2,↓ψ_2,↑)'[ Cs G̅_IJKLMNG̅^IJKLMN-C̅s̅ G_IJKLMN G^IJKLMN ]+.........] This equation shows that exchanged photons join to each other and build Chern-Simons fields. These fields make a bridge between twotrigonal manifolds and produce the BIonic diode(Figure 4.). For two similar trigonal manifolds, total Hamiltonian of BIon iszero, while for two different trigonal manifolds, a BIon is emerged. This BIon is a bridge for transferring energy of one manifoldto the other. At this stage, we can obtain the mass of exchanged photons between two trigonal manifolds. The length of photon relates to the separation distance between electrons or the length of Chern-Simons manifold and the mass of photon depends on the coupling between electrons (m^2=(ψ_1,↓ψ_1,↑ψ_2,↓ψ_2,↑)). The equation of motion for [A^MA_M(ψ_1,↓ψ_1,↑ψ_2,↓ψ_2,↑)]' which is extracted from the Hamiltonian of (<ref>) is A^MA_M(ψ_1,↓ψ_1,↑ψ_2,↓ψ_2,↑)→ m_photon^2l_Chern-Simons [m_photon^2l_Chern-Simons]'= ([O_tot(σ)^2-O̅_tot(σ)^2]/[O_tot(σ_0)^2-O̅_tot(σ_0)^2]-1)^-1/2 Solving this equation, we obtain: [m_photon^2]=1/l_Chern-Simons∫_σ^∞dσ'([O_tot(σ)^2-O̅_tot(σ)^2]/[O_tot(σ_0)^2-O̅_tot(σ_0)^2]-1)^-1/2Eq. (<ref>) shows that photonic mass depends on the length of Chern-Simons manifold and also the length of trigonal manifolds.This result is in agreement with previous predictions in <cit.> that photonic mass depends on the parametters of a gap between two systems. § THE BIONIC DIODE In this section, we will construct the BIonic diode by connecting a pentagonal and a heptagonal manifold by a Chern-Simons manifold.Theenergy and Hamiltonian of pentagonal manifold has a reverse sign in respect tothe energy and the Hamiltonian of heptagonalmanifold. Consequently, pentagonal manifold absorbs electrons and heptagonal molecules repels them.A pentagonal manifold can be built of two trigonal manifolds with a common vertex (See Figure 5). Consequently, both of trigonalmanifolds have a common photonic field. To avoid of calculating this photon for two times, we remove it from one of trigonalmanifolds. 
We have:S_5^tot=S_3-S̅_2H_5^tot=H_3-H̅_2 Following the mechanismin previous section, we obtain following actions:S_3=-T_tri∫ d^3σ√(η^ab g_MN∂_aX^M∂_bX^N+2π l_s^2(∑_n=1^31/n!(-F_1..F_n/β^2)) )) S̅_2=-T_tri∫ d^3σ√(η^ab g_MN∂_aX̅^M∂_bX̅^N+2π l_s^2(∑_n=1^21/n!(-F̅_1..F̅_n/β^2)) ))and following Hamiltonians: H_3=4π T_tri∫ dσ_3 dσ_2 dσ_1√(1+η^abg_MN∂_aX^M∂_bX^N)O^3_tot ≠H̅_2=4π T_tri∫ dσ̅_3 dσ̅_2 dσ̅_1√(1+η^abg_MN∂_aX̅^M∂_bX̅^N)O̅^2_totO^3_tot=√(1+k^2_3/O_2σ^4_3)√(1+k^2_2/O_1σ^4_2)√(1+k^2_1/σ^4_1) O̅^2_tot=O̅_1√(1+k^2_2/O̅_1σ̅^4_2) After doing some algebra on the above Hamiltonians and using the mechanism in (<ref>), we obtain: H_tot^5 = H_3 - H̅_2≈-(4π T_tri) [1+(2π l_s^2/β^2) [G̅_IJKLG̅^IJKL-G_IJKLG^IJKL]+ ((2π l_s^2)^2/β^4)(ψ_1,↓ψ_1,↑ψ_2,↓ψ_2,↑)'[ Cs G̅_IJKLG̅^IJKL-C̅s̅ G_IJKL G^IJKL ] -(2π l_s^2/β^2)^3 [G̅_IJKLMNG̅^IJKLMN]- ((2π l_s^2)^2/β^4)^3(ψ_1,↓ψ_1,↑ψ_2,↓ψ_2,↑)'[ Cs G̅_IJKLMNG̅^IJKLMN ]+.........] This equation shows that similar to the hexagonal manifold, the exchanged photons between trigonal manifolds in pentagonal manifoldform Chern-Simons fields, however the pentagonal manifold has less Chern-Simons and GG-fields. This is because that the pentagonalmanifold has less sides in respect to hexagonal manifold and consequently, it's exchanged photons are less.Also, usingthe Hamiltonians in equation (<ref>) and assuming all coordinates are the same ( σ_1 = σ_2 =σ_3), we obtain: E_tot^5 = H_3 - H̅_2≈ 4kπ T_tri [1/σ^5- 1/σ^3] F=- ∂ E/∂σ=4kπ T_tri [1/σ^6- 1/σ^4] ≪ 0 This equation shows that the force which is applied by a pentagonal manifold to an electron is attractive. Thus this manifoldattracts the electrons.In fact, a pentagonal manifold should be connected by another manifold and obtain the needed electrons.In next step, we want to consider the behaviour of heptagonal manifolds.A pentagonal manifold is formed by joining three trigonal manifolds which have two comon vertexes. These twotrigonal manifolds build a system with four vertexes and four fields (See figure 6 and figure 7). Thus, we can write:S_7^tot=S_3-S̅_3-3H_7^tot=H_3-H̅_3-3 Using the method in previous section, we obtain following actionsS_7=-T_tri∫ d^3σ√(η^ab g_MN∂_aX^M∂_bX^N+2π l_s^2(∑_n=1^31/n!(-F_1..F_n/β^2)) )) S̅_3-3=-T_tri∫ d^3σ√(η^ab g_MN∂_aX̅^M∂_bX̅^N+2π l_s^2(∑_n=1^41/n!(-F̅_1..F̅_n/β^2)) ))and following Hamiltonians: H_3=4π T_tri∫ dσ_3 dσ_2 dσ_1√(1+η^abg_MN∂_aX^M∂_bX^N)O^3_tot ≠H̅_3-3=4π T_tri∫ dσ̅_3 dσ̅_2 dσ̅_1√(1+η^abg_MN∂_aX̅^M∂_bX̅^N)O̅^3-3_tot O^3_tot=√(1+k^2_3/O_2σ^4_3)√(1+k^2_2/O_1σ^4_2)√(1+k^2_1/σ^4_1) O̅^3-3_tot=√(1+k^2_4/O̅_3σ̅^4_4)√(1+k^2_3/O̅_2σ̅^4_3)√(1+k^2_2/O̅_1σ̅^4_2)√(1+k^2_1/σ̅^4_1) Using the Taylor series in the above Hamiltonians and applying themethod in (<ref>) yields: H_tot^7 = H_3 - H̅_3-3≈-(4π T_tri) [1+(2π l_s^2/β^2) [G̅_IJKLG̅^IJKL-G_IJKLG^IJKL]+ ((2π l_s^2)^2/β^4)(ψ_1,↓ψ_1,↑ψ_2,↓ψ_2,↑)'[ Cs G̅_IJKLG̅^IJKL-C̅s̅ G_IJKL G^IJKL ]-(2π l_s^2/β^2)^3 [G̅_IJKLMNG̅^IJKLMN-G_IJKLMNG^IJKLMN]- ((2π l_s^2)^2/β^4)^3(ψ_1,↓ψ_1,↑ψ_2,↓ψ_2,↑)'[ Cs G̅_IJKLMNG̅^IJKLMN-C̅s̅ G_IJKLMN G^IJKLMN ] -(2π l_s^2/β^2)^4 [G_IJKLMNYZG^IJKLMNYZ]- ((2π l_s^2)^2/β^4)^4(ψ_1,↓ψ_1,↑ψ_2,↓ψ_2,↑)'[ C̅s̅ G_IJKLMNYZ G^IJKLMNYZ ]+.........] This equation indicates that like the hexagonal and pentagonal manifold, the exchanged photons between trigonal manifolds in heptagonal manifold form Chern-Simons fields, however the heptagonal manifold has more Chern-Simons and GG-fields. 
Thisis because that the heptagonal manifold has more sides in respect to hexagonal manifold and consequently, it's exchangedphotons are more.Similar to pentagonal manifold, usingthe Hamiltonians in equation (<ref>) and assuming all coordinates are the same (σ_1 = σ_2 =σ_3), we obtain: E_tot^7 = H_3 - H̅_3-3≈ 4kπ T_tri [1/σ^5- 1/σ^9] F=- ∂ E/∂σ=4kπ T_tri [1/σ^6- 1/σ^10] ≫ 0 This equation indicates that the force which is applied by a heptagonal manifold to an electron is repulsive. Thus thismanifold repel the electrons.In fact, a heptagonal manifold should be connected by a pentagonal manifold and givesthe extra electrons to it. A BIonic diode can be constructed from a pentagonal manifold which is connected to heptagonal manifold via a Chern-Simons fields (See figure 8). Using the Hamiltonians in (<ref> and <ref>), we obtain: H_tot^Diode =H_tot^5 +H_tot^7≈-(4π T_tri) [ -(2π l_s^2/β^2)^3 [G̅_IJKLMNG̅^IJKLMN]- ((2π l_s^2)^2/β^4)^3(ψ_1,↓ψ_1,↑ψ_2,↓ψ_2,↑)'[ Cs G̅_IJKLMNG̅^IJKLMN ]-(2π l_s^2/β^2)^4 [G_IJKLMNYZG^IJKLMNYZ]- ((2π l_s^2)^2/β^4)^4(ψ_1,↓ψ_1,↑ψ_2,↓ψ_2,↑)'[ C̅s̅ G_IJKLMNYZ G^IJKLMNYZ ]+.........] This equation shows that the Hamiltonian of the BIonic diode includes terms with 6 and 8 indices. This means that the rank ofCs-GG terms in a pentagonal-heptagonal diode is more than the rank of Cs-GG terms in a hexagonal diode. In fact, inpentagonal-heptagonal diode more photonic fields are exchanged and stability of system is more than the hexagonal diode.Using equations (<ref>,<ref>, <ref>), we can obtain the photonic mass in a BIonic diode:[m_photon^2]=1/l_Chern-Simons∫_σ^∞ dσ'([O_Diode(σ)^2- O̅_Diode(σ)^2]/[O_Diode(σ_0)^2-O̅_Diode(σ_0)^2]-1)^-1/2 where O_Diode=2O^3_tot -O^3-3_tot-O^2_tot In a pentagonal-heptagonal BIonic diode, the photonic mass depends not only on the separation distance between manifolds but alsoon the shape and topology of trigonal manifolds which construct manifolds. It is clear that for a small gap between manifolds,coupling of photons to electrons on the Chern-Simons manifold increases and they become massive. § SUMMARY In this paper, we have considered the formation and the evolutions of BIonic diodes on the polygonal manifolds. For example, wehave shown that a hexagonal BIonic diode can be constructed by two non-similar trigonal manifolds. Photons whichare exchangedbetween trigonal manifolds, form Chern-Simons fields which live on a Chern-Simons manifold. The hexagonal BIons interactwith each other via connecting two Chern-Simons manifolds. For a hexagonal manifold with similar trigonal manifolds, exchangedphotons cancel the effect of each other and the energy and also the length of Chern-Simons manifold becomes zero. These manifoldsare stable and cannot interact with each other. Ifthe symmetry of hexagonal manifolds is broken, another polygonal manifolds likeheptagonal and pentagonal manifolds are formed. Phtonos that are exchanged between these manifolds form two Chern-Simons fields whichlive on two Chern-Simons manifolds. These manifolds connect to each other and construct a BIonic diode. Photons that move via thismanifold, lead to the motion of electrons from heptagonal side to pentagonal side. These photons are massive and their mass dependsof the angles between atoms and length of gap between two manifolds.§ ACKNOWLEDGEMENTSThe work of Alireza Sepehri has been supported financially by the Research Institute for Astronomy and Astrophysics of Maragha (RIAAM),Iran under the Research Project NO.1/5237-77. A. 
Pradhan also acknowledges IUCAA, Pune, India for providingfacility during a visit under a Visiting Associateship where a part of this paper has been done. 1 q2 A.D. Bartolomeo, Physics Reports 606 (2016) 1-58. q3 T.J. Echtermeyer, P.S. Nene, M. Trushin, R.V. Gorbachev, A.L. Eiden, S. Milana, Z. Sun, J. Schliemann, E. Lidorikis,K.S. Novoselov, A. C. Ferrari, Nano Lett. 14 (2014) 3733. q4 T. Mueller, F. Xia, P. Avouris,Nature Photon. 4 (2010) 297-301. q5 C.E.P. Villegas, P.B. Mendonça, A. Rocha, Scientific Reports 4 (2014) 6579. q6 R.R. Nair, P. Blake, J.R. Blake, R. Zan, S. Anissimova, U. Bangert, A.P. Golovanov, S.V. Morozov, T. Latychevskaia,A.K. Geim, K.S. Novoselov, Appl. Phys. Lett. 97 (2010) 153102.q7 S. Parui, R. Ruiter, P.J. Zomer, M. Wojtaszek, B.J. van Wees, T. Banerjee, Journal of Applied Physics 116 (2014) 244505. q8Alireza Sepehri, Umesh Kumar Sharma, Anirudh Pradhan, In press in Physics Letters A, (2017)Alireza Sepehri, Int. J. Geom. Methods Mod. Phys. https://doi.org/10.1142/S0219887817501523,arXiv:1707.04283. A. Sepehri, R. Pincak, A.F. Ali, Eur. Phys. J. B 89 (2016) 250. A. Sepehri, R. Pincak, K. Bamba, S. Capozziello, E. N. Saridakis,International Journal of Modern Physics D, 1750094 (2017),arXiv:1607.01499 [gr-qc]. qq8 Marcelo Vieira, Sergei Sergeenkov, Claudio Furtado, Phys. Rev. A 96, 013852 (2017).D3G. Grignani, T. Harmark, A. Marini, N.A. Obers, M. Orselli, JHEP 1106 (2011) 058. DfAlireza Sepehri, Farook Rahaman, Salvatore Capozziello, Ahmed Farag Ali, Anirudh Pradhan, Eur.Phys.J. C76 (2016) no.5, 231. b1 P. Horava, E. Witten, Nucl. Phys. B 460 (1996) 506–524, arXiv:hep-th/9510209; P. Horava, E. Witten,Nucl. Phys. B 475 (1996) 94-114, arXiv:hep-th/9603142.
http://arxiv.org/abs/1709.10347v2
{ "authors": [ "Alireza Sepehri", "Mohd. Zeyauddin", "Anirudh Pradhan" ], "categories": [ "physics.gen-ph", "hep-th" ], "primary_category": "physics.gen-ph", "published": "20170927155325", "title": "The BIonic diode in a system of trigonal manifolds" }
Bose-Einstein condensation in heavy ion collisions: importance of and uncertainties in the finite volume corrections Kacper Zalewski The H. Niewodniczański Institute of Nuclear Physics, Polish Academy of Sciences Radzikowskiego 152, 31-342 Kraków, Poland andThe M. Smoluchowski Institute of Physics, Jagiellonian University Łojasiewicza 11, 30-348 Kraków, Poland December 30, 2023 ================================================================================================================================================================================================================================================================= The density of the Bose-Einstein condensate for non interacting pions in a cubic box, at given temperature and (average) total pion density, is calculated for three sets ofboundary conditions. The densities obtained are much larger than predicted from thermodynamics and depend significantly on the choice of the boundary conditions, even for volumes as large as 6.4·10^4fm^3. § INTRODUCTION It is plausible that a fraction of the pions produced in high-energy heavy ion collisions forms a Bose-Einstein condensate. At LHC energies, the presence of the pion condensate could, perhaps, explain the observed surplus of pions, in particular of pions with low transverse momenta, as compared to e.g. protons. According to the analysis of Begun and Florkowski <cit.>, at √(s_NN)= 2.76TeV about five percent of the pions are in the condensate.Assuming that the pions are just one kind of point-like, noninteracting bosons and that the thermodynamic limit can be used, one gets the standard formula for the density of the pions not in the condensate: ρ_th^*(T,μ) = 1/2π^2∫ p^2dp/e^β(E(p)-μ)-1.When the chemical potential μ exceeds the lowest single particle energy level, E_0, the particle density (the integrand) becomes negative at E(p) = E_0. Therefore, the chemical potential must satisfy the condition: μ≤ E_0.The density (<ref>) is an increasing function of the chemical potential and reaches its maximum allowed value, at given temperature, for μ=E_0. If the total density of pions exceeds this maximum, the surplus pions are assumed to form the condensate. This can be justified by the formula for the density of pions in the ground state: ρ_c(T,μ,L) = 1/L^3g_0/e^β(E_0-μ)-1,where L^3 is the volume of the pion fluid and g_0 is the degeneracy of the ground state. Indeed, for μ→ E_0 the ground state can accommodate an arbitrary number of pions. Let us note, however, that this justification goes beyond standard thermodynamics, where neither the density of the condensate nor the chemical potential depend on the volume. In order to get a finite, non-zero density of the condensate from (<ref>), it is necessary to assume, for example, that in the vicinity of the critical temperature: μ = E_0 - c(T)/L^3;c(T) > 0.For large volumes, the volume dependent correction to the chemical potential is negligible and thedensity of particles not in the condensate can be safely calculated from (<ref>) with μ = E_0. This justifies, in the limit L →∞, the thermodynamic result. However, in heavy ion collisions the volume of the pion fluid is not very large. Therefore, the question arises: what are the finite volume corrections? The finite volume corrections to the condensate density have been estimated by Begun and Gorenstein <cit.> for the free pion fluid, and, using essentially the same method, for the pion fluid in a uniform magnetic filed by Ayala, Mercado and Villavicencio <cit.>. 
In the present paper we consider non-interacting pions, of a single kind, and solve the problem for three models. In each, the fluid of non interactingpions is contained in a cubic box of volume L^3. In the first model, the boundary conditions are periodic. This is the approachconsidered in <cit.>. In the second, the particle wave functions of the pions are assumed to vanish at the boundaries of the cube. This model seems just as plausible as the previous one. In the third, the boundary conditions are free, i.e. the energy spectrum is as for free particles and formula (<ref>) remains valid. This model may seem unorthodox. However, as is well known, the pion fluid rapidly expands. For expanding systems a small volume does not necessarily imply a discrete energy spectrum. A well-known example are the free Gaussian wave packets. This is the approach used in <cit.> and <cit.>. We find in all the cases that the thermodynamic approximation grossly underestimates the amount of the condensate at given temperature and overall pion density.The calculations show that significant differences between the exact results and the thermodynamic approximation persist up to the highest values of L considered here, i.e. up to L=40fm. The differences between the predictions of the models depend on L, but in general they are very large. Therefore, if the system of non-interacting pions in a box is a good guide to estimate the condensate density, the finite volume corrections are very important and the choice of the boundary conditions is crucial for their evaluation.§ THE MODELS A full calculation, which would have to include the entire pion isotriplet and the π-π interactions, is beyond the scope of the present paper. What we further call pions are spinless particles of an ideal Bose fluid with particle mass m= 140MeV. We also assume g_0=1, which is true for all the models considered here.Let us consider the fluid of pions in a cubic box of volume L^3. As independent parameters we choose the edge length L, the temperature T and the chemical potential μ. The results are given for the interval 0 ≤ L ≤ 40.For large L it would be more realistic to consider a parallepiped, since the transverse cross-section of the fluid is certainly much less than 1600fm^2, but for our qualitative discussion the cube is good enough. Moreover, reducing the transverse dimensions at given length L can only increase the deviations from the thermodynamic limit.What temperature is considered realistic depends on the theoretical approach. We have done the calculations for T = 100which, according to the Alice collaboration <cit.>, is reasonable for about 40% of the most central collisions at LHC energies. We have also checked that for T=150MeV, which is in the range considered in <cit.>, similar results are obtained, though the deviations from the thermodynamic limit are somewhat smaller.The chemical potential is related to the total particle density. It is convenient to present the results at an L-independent total density of the pion fluid. The exact value of this density is not very important, but it should not be too unrealistic. We choose this density as follows. Putting μ = E_0 we can calculate from (<ref>) the density of the non-condensate particles in the large volume limit, i.e. in the thermodynamic approximation. Let us denote it ρ_th^*(T). By analogy with<cit.> we assume that, in the thermodynamic approximation, this is 95% of the total density. Thus, the total pion density is ρ(T) = 20/19ρ^*_th(T). 
For this density we calculate the fraction of the pions in the condensate (further called condensate fraction). Of course, in the thermodynamic approximation this is 5% by assumption.Since the density of the condensate cannot exceed the total pion density, relation (<ref>) implies that for L→ 0 the difference E_0 - μtends to infinity and, consequently, the density of pions not in the condensate tends to zero[For models with discrete energy levels this follows from the fact that the energy difference between the first two energy levels E_1-E_0 →∞ when L→ 0.]. On the other hand, for L→∞ we must take μ→ E_0 in order to get any condensate at all. Substituting μ = E_0 into the formulae for the density of the non-condensate pions, and taking where necessary the limit L →∞, we get for each model the same result: the thermodynamic approximation. Thus, the remaining question is: how fast does thecondensate fraction decrease, with increasing L?The density of pions not in the condensate is given for the free boundary conditions by formula (<ref>) and for the other two models by the corresponding formula from statistical physics ρ ^*(T,μ,L) = 1/L^3∑1/e^β(E-μ)-1,where the summation is over all the states with energies higher than E_0. From this point on, the densities are average densities, since we are working at fixed chemical potential and not at fixed number of particles, which makes a difference when the thermodynamic approximation does not hold. The total density of pions is ρ(T) = ρ_c(T,μ,L) + ρ^*(T,μ,L).This formula, whith a suitable redefinition of ρ^* for the free boundary conditions, is valid for all our models, which implies that μ = μ(T,L). Formula (<ref>) yields the relation μ(T,L) = E_0 - Tlog(1 + 1/L^3ρ_c(T,μ,L)).Using equation (<ref>) to elimiate ρ_c and substituting the result into (<ref>) one obtains an equation for ρ^*(T,μ(T,L),L), which can be easily solved by iterations, or by trial and error.Let us define now the three models. In the model with periodic boundary conditions, E(n_1,n_2,n_3) = √(m^2 + 4π^2/L^2s(n_1,n_2,n_3)),where s(n_1,n_2,n_3) = n_1^2 + n_2^2 + n_3^2and n_1,n_2,n_3 can be any integers. Thus, the ground state energy is E_0 = m.In the model with wave functions vanishing at the boundaries of the cube, E(n_1,n_2,n_3) = √(m^2 + π^2/L^2s(n_1,n_2,n_3))and n_1,n_2,n_3 are any positive integers. Thus, the ground state energy is E_0 =√(m^2 + 3π^2/L^2). In the model with free boundary conditions E(p) = √(m^2+p^2),the integration as in (<ref>) replaces the summation over n_1,n_2,n_3 and the ground state energy is E_0 = m.The condensate fractions for the three models are compared with each other and with the thermodynamic approximation in Fig. 1. It is seen that, according to these models, the finite volume corrections are large and strongly model-dependent.§ DISCUSSION AND CONCLUSIONS Phase transitions, with the accompanying discontinuities in the thermodynamic parameters and functions, can occur only in the thermodynamic limit. When the volume is finite and deviations from the thermodynamic limit are of interest, the phases can be defined only by analogy. It is natural (see <cit.>) to define the Bose-Einstein condensate as the set of particles in the single particle ground state.Our calculations for the three models of pions condensing in a cubic box with edge length L yield the following conclusions concerning the condensate fraction. 
* With increasing L, the condensate fraction decreases from one at L→ 0 to the thermodynamic limit.* The decrease towards the thermodynamic limit is very slow. For a cubic box with a volume ofL^3 = 6.4· 10^4fm^3, the corrections to the thermodynamic limit are still by factors of abouttwo and more. Thus, in practical calculations they must be taken into account.* The deviations from the thermodynamic limit depend strongly on the boundary conditions chosen. For example, for the ratios of the exactly calculated condensate fractions to the corresponding fraction in the thermodynamic approximation, at L = 40fm, we find: 2.3 for the periodic boundary conditions, 4.2 for the wave functions vanishing at the boundary and 1.7 for the free boundary conditions. The corresponding numbers for T=150MeV are: 1.8, 3.4 and 1.4. Thus, the corrections to the thermodynamic limit cannot be calculated reliably without specifying the choice of the boundary conditions. Acknowledgement This work was partly supported by the Polish National Science Center (NCN) under grant DEC-2013/09/B/ST2/00497. The author thanks W. Florkowski for helpful comments.99 BEFV. Begun and W. Florkowski, Phys.Rev. C91(2015)054909. BEGV.V. Begun and K.I. Gorenstein, Phys.Rev. C77(2008)064903. AMVA. Ayala, P. Mercado and C. Villavicencio, Magnetic catalysis of a finite size pion condensate, arXiv:1609.02595v2. BEG2V. Begun, arXiv:1412.6532 [nucl-th]. ALI B. Abelev et al (ALICE), Phys.Rev. C88(2013)044910..
http://arxiv.org/abs/1709.09362v1
{ "authors": [ "Kacper Zalewski" ], "categories": [ "hep-ph", "nucl-th" ], "primary_category": "hep-ph", "published": "20170927071126", "title": "Bose-Einstein condensation in heavy ion collisions: importance of and uncertainties in the finite volume corrections" }
1]Dejan [email protected] 2]Jonathan [email protected] [1]Department of Mathematics, South Kensington campus, Imperial College London, London SW7 2AZ, UK .1pc [2]Department of Mathematics, Stanford University, Stanford CA 94305-2125, USA.1pcThe interior of dynamical extremal black holes inspherical symmetry [===================================================================== We study the nonlinear stability of the Cauchy horizon in the interior of extremal Reissner–Nordström black holes under spherical symmetry. We consider the Einstein–Maxwell–Klein–Gordon system such that the charge of the scalar field is appropriately small in terms of the mass of the background extremal Reissner–Nordström black hole. Given spherically symmetric characteristic initial data which approach the event horizon of extremal Reissner–Nordström sufficiently fast, we prove that the solution extends beyond the Cauchy horizon in C^0, 12∩ W^1,2_loc, in contrast to the subextremal case (where generically the solution is C^0∖ (C^0, 12∩ W^1,2_loc)). In particular, there exist non-unique spherically symmetric extensions which are moreover solutions to the Einstein–Maxwell–Klein–Gordon system. Finally, in the case that the scalar field is chargeless and massless, we additionally show that the extension can be chosen so that the scalar field remains Lipschitz. § INTRODUCTION In this paper, we initiate the study of the interior of dynamical extremal black holes. The Penrose diagram corresponding to maximal analytic extremal Reissner–Nordström and Kerr spacetimes is depicted in Figure <ref>. In particular,if one restricts to a globally hyperbolic subset with an (incomplete) asymptotically flat Cauchy hypersurface (cf. the region D^+(Σ) in Figure <ref>), then these spacetimes possess smooth Cauchy horizons , whose stability property is the main object of study of this paper. Since the pioneering work of Poisson–Israel <cit.> and the seminal work of Dafermos <cit.> in the spherically symmetric setting, we now have a rather complete understanding of the interior of dynamical black holes which approach subextremal limits along the event horizon, at least regarding the stability of the Cauchy horizons. The works <cit.> culminated in the recent work of Dafermos–Luk <cit.>, which proves the C^0 stability of the Kerr Cauchy horizon without any symmetry assumptions, i.e. they show that whenever the exterior region of a black hole approaches a subextremal, strictly rotating Kerr exterior, then maximal Cauchy evolution can be extended across a non-trivial piece of Cauchy horizon as a Lorentzian manifold with continuous metric. Moreover, it is expected that for a generic subclass of initial data, the Cauchy horizon is an essential weak null singularity, so that there is no extension beyond the Cauchy horizon as a weak solution to the Einstein equations (see <cit.> for recent progress and discussions). On the other hand, much less is known about dynamical black holes which become extremal along the event horizon. Mathematically, the only partial progress was made for a related linear problem, namely the study of the linear scalar wave equation on extremal black hole backgrounds. For the linear scalar wave equation, the first author established <cit.> that in the extremal case, the Cauchy horizon is more stable than its subextremal counterpart. 
In particular, the solutions to linear wave equations are not only bounded, as in the subextremal case, but they in fact obey higher regularity bounds which fail in the subextremal case (see Section <ref> for a more detailed discussion). Extrapolating from the linear result, it may be conjectured that in the interior of a black hole which approaches an extremal black hole along the event horizon, not only does the solution remain continuous up to the Cauchy horizon as in the subextremal case, but in fact there are non-unique extensions beyond the Cauchy horizon as weak solutions. This picture, if true, would also be consistent with the numerical study of this problem by Murata–Reall–Tanahashi <cit.>.In this paper, we prove that this picture holds in a simple nonlinear setting. More precisely, we study the Einstein–Maxwell–Klein–Gordon system of equations with spherically symmetric initial data (see Section <ref> for further discussions on the system). We solve for a quintuple (ℳ, g, ϕ,A,F), where (ℳ,g) is a Lorentzian metric, ϕ is a complex valued function on ℳ, A and F are real 1- and 2-forms on ℳ respectively. The system of equations is as follows:{ Ric_μν-12 g_μν R=8π(𝕋^(sf)_μν+𝕋^(em)_μν),𝕋^(sf)_μν= 12 D_μϕD_νϕ +12D_μϕ D_νϕ- 12 g_μν ((g^-1)^βD_ϕD_βϕ+^2|ϕ|^2),𝕋^(em)_μν=(g^-1)^F_μF_ν- 14 g_μν(g^-1)^(g^-1)^γσF_γF_σ,(g^-1)^D_ D_ϕ=^2ϕ,F=dA,(g^-1)^αμ_α F_μν=2π i(ϕD_νϕ - ϕ D_νϕ). .Here,denotes the Levi–Civita connection associated to the metric g, and Ric and R denote the Ricci tensor and the Ricci scalar, respectively. We also use the notation D_ = _ + i A_, and ≥ 0, ∈ℝ are fixed constants. The extremal Reissner–Nordström solution (cf. Section <ref>) is a special solution to (<ref>) with a vanishing scalar field ϕ.In the following we will restrict the parameters so that || is sufficiently small in terms of M. More precisely, we assume1-(10+5√(6)-3√(9+4√(6)))|𝔢|M> 0.Under the assumption (<ref>), our main result can be stated informally as follows (we refer the reader to Theorem <ref> for a precise statement):Consider the characteristic initial value problem to (<ref>) with spherically symmetric smooth characteristic initial data on two null hypersurfaces transversely intersecting at a 2-sphere. Assume that one of the null hypersurfaces is affine complete and that the data approach the event horizon of extremal Reissner–Nordström at a sufficiently fast rate. Then, the solution to (<ref>) arising from such data, when restricted to a sufficiently small neighborhood of timelike infinity (i.e. a neighborhood of i^+ in Figure <ref>), satisfies the following properties: * It possesses a non-trivial Cauchy horizon.* The scalar field, the metric, the electromagnetic potential (in an appropriate gauge) and the charge can be extended in (spacetime) C^0, 12∩ W^1,2_loc up to the Cauchy horizon. Moreover, the Hawking mass (<ref>) can be extended continuously up to the Cauchy horizon.* The metric converges to that of extremal Reissner–Nordström towards timelike infinity and the scalar field approaches 0 towards timelike infinity in an appropriate sense.Moreover, the maximal globally hyperbolic solution is future extendible (non-uniquely) as a spherically symmetric solution to (<ref>).The extensions of the solution we construct has regularity below (spacetime) C^2 and as such do not make sense as classical solutions. 
As is well-known, however, the Einstein equations admit a weak formulation which makes sense already if the metric is in (spacetime) C^0 ∩ W^1,2_loc and the stress-energy-momentum tensor is in spacetime L^1_loc <cit.>. The weak formulation can be recast geometrically as follows: given a smooth (3+1)-dimensional manifold ℳ, a C^0_loc∩ W^1,2_loc Lorentzian metric g and an L^1_loc symmetric 2-tensor T, we say that the Einstein equations Ric(g) -12 g R(g) = 8π T is satisfied weakly if for all smooth and compactly supported vector fields X,Y, ∫_ℳ ((_μ X)^μ (_ν Y)^ν - (_μ X)^ν (_ν Y)^μ) = 8π∫_ℳ (T(X,Y) - 12 g(X,Y) _g T).It is easy to check that any classical solution is indeed a weak solution in the sense above. Moreover, the extensions that we construct in Theorem <ref> have more than sufficient regularity to be interpreted in the sense above. However, in our setting we do not need to use the notion in <cit.>. Instead, we introduce a stronger notion of solutions, defined on a quotient manifold for which we quotiented out the spherical symmetry; see Definition <ref> in Section <ref>. This class of solutions — even though they are not classical solutions — should be interpreted as strong solutions (instead of just weak solutions) since a well-posedness theory can be developed for them[In fact, in order to develop a well-posedness theory for strong solutions, one can even drop the assumption of spherical symmetry, but instead require additional regularity along the “spherical directions” with respect to an appropriately defined double null foliation gauge; see <cit.> for details.]; see Section <ref>. Like in the subextremal case, the solution extends in C^0 to the Cauchy horizon. However, the C^0, 12∩ W^1,2_loc extendibility, the finiteness of the Hawking mass, as well as the extendibility as spherically symmetric solution stand in contrast to the subextremal case. In particular, according to the results of <cit.> (see also <cit.>), there are solutions which asymptote to subextremal Reissner–Nordström black holes in the exterior region such that the Hawking mass blows up at the Cauchy horizon, and the solution cannot be extended as a spherically symmetric solution to the Einstein–Maxwell–scalar field system[Though the estimates in <cit.> strongly suggest that the scalar field ceases to be in W^1,2_loc for any C^0 extension of the spacetime, this remains an open problem unless spherical symmetry is imposed. In particular, it is not known whether the solutions constructed in <cit.> can be extended as weak solutions to the Einstein–Maxwell–scalar field system if no spherical symmetry assumption is imposed.].The fact that we can extend the solutions beyond the Cauchy horizon is intimately connected to the regularity of the solutions up to the Cauchy horizon. In particular this relies on the fact the metric, the scalar field and the electromagnetic potential remain in (spacetime) C^0∩ W^1,2_loc. In fact, the solutions are at a level of regularity for which the Einstein equations are still locally well-posed[Note that in general the Einstein equations are not locally well-posed with initial data only in C^0∩ W^1,2. Nevertheless, when there is spherical symmetry (away from the axis of symmetry), or at least when there is additional regularity in the spherical directions (cf. Footnote <ref>), one can indeed develop a local well-posedness theory with such low regularity.]. 
One can therefore construct extensions beyond the Cauchy horizon by solving appropriate characteristic initial value problems; see Section <ref>.In this connection, note that we emphasised in the statement of the theorem that the solution can be extended beyond the Cauchy horizon as a spherically symmetric solution to (<ref>). The emphasis on the spherical symmetry of the extension is made mostly to contrast with the situation in the subextremal case (cf. Remark <ref>). This should not be understood as implying that the extensions necessarily are spherically symmetric: In fact, with the bounds that we establish in this paper, one can in principle construct using the techniques in <cit.> extensions (still as solutions to (<ref>)) without any symmetry assumptions (cf. Footnote <ref>).The assumptions we impose on the event horizon are consistent with the expected late-time behavior of the solutions in the exterior region of the black hole, at least in the ==0 case if one extrapolates from numerical results <cit.>. In particular, the transversal derivative of the scalar field is not required to decay along the event horizon, and is therefore consistent with the Aretakis instability <cit.>. Of course, in order to completely understand the structure of the interior, one needs to prove that the decay estimates along the event horizon indeed hold for general dynamical solutions approaching these extremal black holes. This remains an open problem.Our result only covers a limited range of parameters of the model; see (<ref>). This restriction comes from a Hardy-type estimate used to control the renormalised energy (cf. Sections <ref> and <ref>) and we have not made an attempt to obtain the sharp range of parameters.In the special case ==0, the analysis is simpler and we obtain a stronger result; namely, we show that the scalar field in fact is Lipschitz up to the Cauchy horizon (cf. Theorem <ref>). While the result we obtain in the ≠ 0, ≠ 0 case is weaker, the general model allows for the charge of the Maxwell field to be non-constant, and serves as a better model problem for the stability of the extremal Cauchy horizon without symmetry assumptions. Another reason that we do not restrict ourselves to the simpler ==0 case is that in the ==0 case, extremal black holes do not naturally arise dynamically: * There are no one-ended black holes with non-trivial Maxwell field with regular data on ℝ^3 since in that setting the Maxwell field necessarily blows up at the axis of symmetry.* In the two-ended case, given future-admissible[The future-admissibility condition can be thought of as an analogue of the physical “no anti-trapped surface” assumption in the one-ended case.] (in the sense of <cit.>) initial data, the solution always approaches subextremal black holes in each connected component of the exterior region <cit.>.On the other hand, if ≠ 0, then in principle there are no such obstructions[Nevertheless, it is an open problem to construct a dynamical black hole with regular data that settles down to an extremal black hole.].One feature of the black hole interior of extremal Reissner–Nordström is that it is free of radial trapped surfaces—a stark contrast to the subextremal case (where every sphere of symmetry is the black hole interior is trapped!). Let us note that this feature has sometimes been taken as the defining feature of spherically symmetric extremal black holes; see for instance <cit.>. 
We will not use this definition in this paper, and when talking about “extremal black holes”, we will only be referring to black holes which converge to a stationary extremal black hole along the event horizon as in Theorem <ref>. Indeed, while our estimates imply that for the solutions in Theorem <ref>, the geometry of the black hole interior is close to that of extremal Reissner–Nordström, it remains an open problem in the general case whether the black hole interior contains any radial trapped surface.[Note however that in the ==0 case, if we assume in addition that _U r<0 everywhere along the event horizon, we can in principle modify the monotonicity argument of Kommemi <cit.> in establishing the subextremality of two-ended black holes (see Remark <ref>) to show that the interior in the extremal case is free of radial trapped surfaces. Indeed, the argument of <cit.> exactly proceeds by (1) showing that there are no interior trapped surfaces in the interior of extremal black holes and (2) establishing a contradiction with the future-admissibility condition. See also the appendix of <cit.>.] The fact that the extremal Cauchy horizons are “more stable” than their subextremal counterparts can be thought of as related to the vanishing of the surface gravity in the extremal case. Recall that in both the extremal and subextremal charged Reissner–Nordström spacetime, there is a global infinite blue shift effect such that the frequencies of signals sent from the exterior region into the black hole are shifted infinitely to the blue <cit.>. As a result, this gives rise to an instability mechanism. Indeed, Sbierski <cit.> showed[This statement is technically not explicitly proven in <cit.>, but it follows from the result there together with routine functional analytic arguments.] that for the linear scalar wave equation on both extremal and subextremal Reissner–Nordström spacetime, there exist finite energy Cauchy data which give rise to solutions that are not W^1,2_loc at the Cauchy horizon. On the other hand, as emphasised in <cit.>, this type of considerations do not take into account the strength of the blue shift effect and do not give information on the behaviour of the solutions arising from more localised data. Heuristically, for localised data, one needs to quantify the amplification of the fields by a “local” blue shift effect at the Cauchy horizon, whose strength can be measured by the surface gravity. In this language, what we see in Theorem <ref> is a manifestation of the vanishing of the local blue shift effect at the extremal limit.This additional stability of the extremal Cauchy horizon due to the vanishing of the surface gravity may at the first sight seem to make the problem simpler than its subextremal counterpart. Ironically, from the point of view of the analysis, the local blue shift effect in the subextremal case in fact provides a way to prove stable estimates! This method fails at the extremal limit.To further illustrate this, first note that in the presence of the blue shift effect, one necessarily proves degenerate estimates. By exploiting the geometric features of the interior of subextremal black holes, the following can be shown: when proving degenerate energy-type estimates, by choosing the weights in the estimates appropriately, one can prove that the bulk spacetime integral terms (up to a small error) have a good sign, and can be used to control the error terms (cf. discussions in the introduction of <cit.>). 
As a consequence, one can in fact obtain a stability result for a large class of nonlinear wave equations with a null condition, irrespective of the precise structure of the linearized system. This observation is also at the heart of the work <cit.>.In the extremal case, however, it is not known how to obtain a sufficiently strong coercive bulk spacetime integral term when proving energy estimates. Moreover, if one naively attempts to control the spacetime integral error terms by using the boundary flux terms of the energy estimate and Grönwall's lemma, one encounters a logarithmic divergence. To handle the spacetime integral error terms, we need to use more precise structures of the equations, and we will show that there is a cancellation in the weights appearing in the bulk spacetime error terms. This improvement of the weights then allows the bulk spacetime error terms to be estimated using the boundary flux terms and a suitable adaptation[In fact, using the smallness parameters in the problem, this will be implemented without explicitly resorting to Grönwall's lemma.] of Grönwall's lemma. In particular, we need to use the fact that (1) a renormalised energy can be constructed to control the scalar field and the Maxwell field simultaneously, and that (2) the equations for the matter fields and the equations for the geometry are “sufficiently decoupled” (cf. Section <ref>). These structures seem to be specific to the spherically symmetric problem: to what extent this is relevant to the general problem of stability of extremal Cauchy horizons without symmetry assumptions remains to be seen. The study of the stability properties of subextremal Cauchy horizons is often motivated by the strong cosmic censorship conjecture. The conjecture states that solutions arising from generic asymptotically flat initial data are inextendible as suitably regular Lorentzian manifolds. In particular, the conjecture, if true, would imply that smooth Cauchy horizons, which are present in both extremal and subextremal Reissner–Nordström spacetimes, are not generic in black hole interiors. As we have briefly discussed above, there are various results establishing this in the subextremal case; see for example <cit.>. In fact, one expects that generically, if a solution approaches a subextremal black hole at the event horizon, then the spacetime metric does not admit W^1,2_loc extensions beyond the Cauchy horizon; see discussions in <cit.>. On the other hand, our result shows that at least in our setting, this does not occur for extremal Cauchy horizons. Nevertheless, since one expects that generic dynamical black hole solutions are non-extremal, our results, which only concern black holes that become extremal in the limit, are in fact irrelevant to the strong cosmic censorship conjecture. In particular, provided that extremal black holes are indeed non-generic as is expected, the rather strong stability that we prove in this paper does not pose a threat to cosmic censorship.Finally, even though our result establishes the C^0, 12∩ W^1,2_loc stability of extremal Cauchy horizons in spherical symmetry, it still leaves open the possibility of some higher derivatives of the scalar field or the metric blowing up (say, the C^k norm blows up for some k∈ℕ). 
Whether this occurs or not for generic data remains an open problem.§.§ Previous results on the linear wave equation In this section, we review the results established in <cit.> concerning the behaviour of solutions to the linear wave equation _gϕ=0 in the interior of extremal black holes. The results concern the following cases: * general solutions on extremal Reissner–Nordström,* general solutions on extremal Kerr–Newman with sufficiently small specific angular momentum,* axisymmetric solutions on extremal Kerr.In each of these cases, the following results are proven (in a region sufficiently close to timelike infinity): * ϕ is bounded and continuously extendible up to the Cauchy horizon.* ϕ is C^0, up to the Cauchy horizon for all ∈ (0,1).* ϕ has finite energy and is W^1,2_locup to the Cauchy horizon. As we mentioned earlier, these results are in contrast with the subextremal case. (A) holds also for subextremal Reissner–Nordström and Kerr <cit.>, (B) is false[This result is not explicitly stated in the literature, but can be easily inferred given the sharp asymptotics for generic solutions in <cit.> and the blowup result in <cit.> appropriately adapted to the linear setting.] on subextremal Reissner–Nordström (<cit.>) and (C) is false on both subextremal Reissner–Nordström and Kerr <cit.> (In fact, in subextremal Reissner–Nordström, generic solutions fail to be in W^1,p_loc for all p>1; see <cit.>).At this point, it is not clear whether the estimates in <cit.> are sharp. In the special case of spherically symmetric solutions on extremal Reissner–Nordström, <cit.> proves that the solution is in fact C^1 up to the Cauchy horizon. Moreover, if one assumes more precise asymptotics along the event horizon (motivated by numerics), then it is shown that spherically symmetric solutions are C^2.Our results in the present paper can be viewed as an extension of those in <cit.> to a nonlinear setting. In particular, we show that even in the nonlinear (although only spherically symmetric) setting, ϕ still obeys (A) and (C), and satisfies (B) in the subrange ∈ (0, 12]. Moreover, the metric components, the electromagnetic potential, and the charge, in appropriate coordinate systems and gauges, verify similar bounds. §.§ Ideas of the proofModel linear problems. The starting point of the analysis is to study linear systems of wave equations on fixed extremal Reissner–Nordström background. A simple model of such a system is the following (where a,b,c,d∈ℝ):_g_eRNϕ = a ϕ+ bψ,_g_eRNψ = cψ+dϕ.It turns out that in the extremal setting, we still lack an understanding of solutions to such a model system in general. (This is in contrast to the subextremal case, where the techniques of <cit.> show that solutions to the analogue of (<ref>) are globally bounded for any fixed a,b,c,d∈ℝ.) Instead, we can only handle some subcases of (<ref>). Namely, we need a≥ 0, c≥ 0 and b=d=0. Put differently, this means that we can only treat decoupled Klein–Gordon equations with non-negative mass. Remarkably, as we will discuss later, although the linearized equations of (<ref>) around extremal Reissner–Nordström are more complicated than decoupled Klein–Gordon equations with non-negative masses, one can find a structure in the equations so that the ideas used to handle special subcases of (<ref>) can also apply to the nonlinear problem at hand.Estimates for linear fields using ideas in <cit.>. The most simplified case of (<ref>) the linear wave equation with zero potential _g_eRNϕ=0. 
In the interior of extremal Reissner–Nordström spacetime, this has been treated by the first author in <cit.>. The work <cit.> is based on the vector field multiplier method, which obtains L^2-based energy estimates for the derivatives of ϕ. The vector field multiplier method can be summarized as follows: Consider the stress-energy-momentum tensor𝕋_μν = _μϕ_νϕ - 12 g_μν(g^-1)^_ϕ_ϕ.For a well-chosen vector field V, one can then integrate the following identity for the current 𝕋_μνV^ν^μ (𝕋_μνV^ν) =12 𝕋_μν(^μ V^ν+^ν V^μ)to obtain an identity relating a spacetime integral and a boundary integral.When V is casual, future-directed and Killing, the above identity yields a coercive conservation law. In the interior of extremal Reissner–Nordström _t =12(_v+_u) (cf. definition of (u,v) coordinates in Section <ref>) is one such vector field. This vector field, however, is too degenerate near the event horizon and the Cauchy horizon, and one expects the corresponding estimates to be of limited use in a nonlinear setting. A crucial observation in <cit.> is that the vector field V=|u|^2_u+v^2_v (in Eddington–Finkelstein double null coordinates, cf. Section <ref>) can give a useful, stronger, estimate. More precisely, V=|u|^2_u+v^2_v has the following properties: * V is a non-degenerate vector field at both the event horizon and the Cauchy horizon. * V is causal and future-directed. Hence, together with 1., this shows that the current associated to V, when integrated over null hypersurfaces, corresponds to non-degenerate energy.* Moreover, although V is not Killing, ^μ V^ν+^ν V^μ has a crucial cancellation[More precisely, ^v V^u+^u V^v=- 12^-2(_u u^2 + _v v^2 + ^-2(u^2_u+v^2_v)^2)^-2,whereas each single term, e.g., ^-2_v v^2 ∼ v^-2, behaves worse as |u|,v→∞. Without this cancellation, the estimate exhibits a logarithmic divergence.] so that the spacetime error terms can be controlled by the boundary integrals. These observations allow us to close the estimate and to obtain non-degenerate L^2 control for the derivatives of ϕ. Furthermore, the boundedness of such energy implies that|ϕ|(u,v) (v) + |u|^- 12,where, provided the data term decays in v, ϕ→ 0 as |u|,v→∞. Moreover, using the embedding W^1,2↪ C^0, 12 in 1-D, we also conclude that ϕ∈ C^0, 12.In spherical symmetry, it is in fact possible to control the solution to the linear wave equation up to the Cauchy horizon using only the method of characteristics[In fact, as in shown in <cit.>, the method of characteristics, when combined with the energy estimates, yields more precise estimates when the initial data are assumed to be spherically symmetric. While <cit.> does not give a proof of the estimates in the spherically symmetric case purely based on the method characteristics, such a proof can be inferred from the proof of Theorem <ref> in Section <ref>.]. Here, however, there is an additional twist to the problem. We will need to control solutions to the Klein–Gordon equation with non-zero mass[As we will discuss below, the need to consider the Klein–Gordon equation with non-zero mass stems not only from our desire to include mass in the matter field in (<ref>), but when attempting to control the metric components, one naturally encounters a Klein–Gordon equation with positive mass.]:_g_eRNϕ = ^2ϕ.For this scalar equation, however, whenever the mass is non-vanishing, using the method of characteristics and naïve estimates leads to potential logarithmic divergences. 
Nonetheless, if ^2>0, the argument above which makes use of the vector field multiplier method can still be applied. In this case, one defines instead the stress-energy-momentum tensor as follows:𝕋_μν = _μϕ_νϕ - 12 g_μν((g^-1)^_ϕ_ϕ+^2ϕ^2).As it turns out, the observations 1., 2., and 3. still hold in the 𝔪^2>0 case for the vector field V=|u|^2_u+v^2_v. In particular, there is a crucial cancellation in the bulk term as above, which removes the logarithmically non-integrable term and allows one to close the argument. Again, this consequently yields also decay estimates and C^0, 12 bounds for ϕ. Let us recap what we have achieved for the model problem (<ref>). The discussions above can be used to deal fully with the case a≥ 0, c≥ 0 and b=d=0. If a<0 or c<0, one still has a cancellation in the bulk spacetime term, but the boundary terms are not non-negative. If, on the other hand, b≠ 0 or d≠ 0, then in general one sees a bulk term which is exactly borderline and leads to logarithmic divergence.Renormalised energy estimates for the matter fields. In order to attack our problem at hand, the first step is to understand the propagation of the matter field even without coupling to gravity. In other words, we need to control the solution to the Maxwell–charged Klein–Gordon system in the interior of fixed extremal Reissner–Nordström. (A special case of this, when ==0 is exactly what has been studied in <cit.>.) One difficulty that arises in controlling the matter fields is that when ≠ 0, the energy estimates for the scalar field couple with estimates for the Maxwell field. If one naïvely estimates each field separately, while treating the coupling as error terms, one encounters logarithmically divergent terms similar to those appearing when controlling (<ref>) for b≠ 0 or d≠ 0. Instead, we prove coupled estimates for the scalar field and the Maxwell field simultaneously. In order to prove coupled energy estimates, a natural first attempt would be to use the full stress-energy-momentum tensor (i.e. the sum 𝕋=𝕋^(sf)+𝕋^(em)) and consider the current 𝕋_μνV^ν, where V=|u|^2_u+v^2_v as in <cit.>. However, since the charge is expected to asymptote to a non-zero value (as it does initially along the event horizon), this energy is infinite! Instead, we renormalise the energy to take out the infinite contribution from the background charge. We are then faced with two new issues: * the renormalised energy is not manifestly non-negative* additional error terms are introduced.Here, it turns out that one can use a Hardy-type inequality to show that the renormalised energy is coercive. (This is the step for which we need a restriction on the parameters of the problem.) Moreover, the additional spacetime error terms that are introduced in the energy estimates also exhibit the cancellation described in Footnote <ref> on page footnote.cancellation.Estimates for the metric components. Having understood the uncoupled Maxwell–Klein–Gordon system, we now discuss the problem where the Maxwell–Klein–Gordon system is coupled with the Einstein equations. First, we write the metric in double null coordinates:g = -^2(u,v) dudv + r^2(u,v) σ_𝕊^2,where σ_𝕊^2 is the standard round metric on 𝕊^2 (with radius 1). In such a gauge, the metric components r andsatisfy nonlinear wave equations with ϕ and ϕ as sources. In addition, r satisfies the Raychaudhuri equations, which can be interpreted as constraint equations.We control r directly using the method of characteristics. 
As noted before, using the method of characteristics for wave equations with non-trivial zeroth order terms[The wave equation for r indeed has such zeroth order terms, cf. (<ref>).] leads to potentially logarithmically divergent terms. To circumvent this, we use both the wave equation and the Raychaudhuri equations satisfied by r: using different equations in different regions of spacetime, one can show that using the method of characteristics that|r-M| v^-1+|u|^-1,|_v r| v^-2, |_u r| |u|^-2. For Ω, instead of controlling it directly, we bound the difference logΩ - logΩ_0, where Ω_0 corresponds to the metric component of the background extremal Reissner–Nordström spacetime. We will control it using the wave equation satisfied by Ω (cf. (<ref>)). Again, as is already apparent in the discussion of (<ref>), to obtain wave equation estimates, we need to use the structure of the equation. Using the estimates for ϕ and r, the equation for logΩ - logΩ_0 can be thought of as follows (modulo terms that are easier to deal with and are represented by …):_u _v log_0 = - 14 M^2 (e^log^2-e^log_0^2)+…Thus, whenis close to _0, (<ref>) can be viewed as a nonlinear perturbation of the Klein–Gordon equation with positive mass, which is moreover essentially decoupled from the other equations. Hence, as long as we can control the error terms and justify the approximation (<ref>), we can handle this equation using suitable modifications of the ideas discussed before. In particular, an appropriate modification of (<ref>) implies that →_0 in a suitable sense as |u|,v→∞.Finally, revisiting the argument for the energy estimates for Maxwell–Klein–Gordon, one notes that it can in fact be used to control solutions to the Maxwell–Klein–Gordon system on a dynamical background such that r and log approach their Reissner–Nordström values with a sufficiently fast polynomial rate as |u|,v→∞. In particular, the estimates we described above for the metric components is sufficient for us to set up a bootstrap argument to simultaneously control the scalar field and the geometric quantities.Note that in terms of regularity, we have closed the problem at the level of the (non-degenerate) L^2 norm of first derivatives of the metric components and scalar field. As long as r>0, this is the level of regularity for which well-posedness holds in spherical symmetry. It follows that we can also construct an extension which is a solution to (<ref>). §.§ Structure of the paper The remainder of the paper is structured as follows. In Section <ref>, we will introduce the geometric setup and discuss (<ref>) in spherical symmetry. In Section <ref>, we discuss the geometry of the interior of the extremal Reissner–Nordström black hole. In Section <ref>, we introduce the assumptions on the characteristic initial data. In Section <ref>, we give the statement of the main theorem (Theorem <ref>, see also Theorem <ref>). In Section <ref>, we begin the proof of Theorem <ref> and set up the bootstrap argument. In Section <ref>, we prove the pointwise estimates. In Section <ref>, we prove the energy estimates. In Section <ref>, we close the bootstrap argument and show that the solution extends up to the Cauchy horizon. In Section <ref>, we complete the proof of Theorem <ref> by constructing a spherically symmetric solution which extends beyond the Cauchy horizon. In Section <ref>, we prove additional estimates in the case ==0. §.§ AcknowledgementsPart of this work was carried out when both authors were at the University of Cambridge. J. 
Luk is supported by a Sloan fellowship, a Terman fellowship, and the NSF grant DMS-1709458.§ GEOMETRIC PRELIMINARIES§.§ Class of spacetimesIn this paper, we consider spherically symmetric spacetimes (ℳ,g) with ℳ = 𝒬×𝕊^2 such that the metric g takes the formg=g_𝒬+r^2(dθ^2+sin^2θ dφ^2), where (𝒬,g_𝒬) is a smooth (1+1)-dimensional Lorentzian spacetime and r:𝒬→ℝ_>0 is smooth and can geometrically be interpreted as the area radius of the orbits of spherical symmetry. We assume that (𝒬,g_𝒬) admits a global double null foliation[Note that for sufficiently regular g_𝒬, the metric can always be put into double null coordinates locally. Hence the assumption is only relevant for global considerations. We remark that the interior of extremal Reissner–Nordström spacetimes can be written (globally) in such a system of coordinates (see Section <ref>) and so can spacetimes that arise from spherically symmetric perturbations of the interior of extremal Reissner–Nordström, which we consider in this paper.], so that we write the metric g in double null coordinates as follows:g=-Ω^2(u,v)dudv+r^2(u,v)(dθ^2+sin^2θ dφ^2),for some smooth and strictly positive function ^2 on 𝒬.§.§ The Maxwell field and the scalar field We will assume that both the Maxwell field F and the scalar field ϕ in (<ref>) are spherically symmetric. For ϕ, this means that ϕ is constant on each spherical orbit, and can be thought of as a function on 𝒬.For the Maxwell field, spherical symmetry means that there exists a function Q on 𝒬, so that the Maxwell field F takes the following formF = Q2(π^* r)^2π^*(^2 du∧ dv),where π denotes the projection map π:ℳ→𝒬. We will call Q the charge of the Maxwell field.§.§ The system of equations In this subsection, we write down the symmetry-reduced equations in a double null coordinate system as in Section <ref> (see <cit.> for details).Before we write down the equations, we introduce the following notation for the covariant derivative operator with respect to the 1-form A:D_μϕ=∂_μϕ+i𝔢 A_μϕ. §.§.§ Propagation equations for the metric components r∂_u∂_v r= -1/4Ω^2-∂_ur ∂_vr+𝔪^2π r^2 Ω^2 |ϕ|^2+1/4Ω^2 r^-2 Q^2,r^2∂_u∂_vlogΩ= -2π r^2(D_uϕD_vϕ+D_uϕD_vϕ)-1/2Ω^2 r^-2 Q^2+1/4Ω^2+∂_ur∂_vr. §.§.§ Propagation equations for the scalar field and electromagnetic tensor D_uD_vϕ+D_v D_uϕ= -1/2𝔪^2Ω^2ϕ-2r^-1(∂_ur D_vϕ+∂_v r D_u ϕ),D_uD_vϕ-D_vD_u ϕ=1/2 r^-2Ω^2i 𝔢Q·ϕ, ∂_uQ=2π i r^2𝔢 (ϕD_uϕ-ϕD_uϕ), ∂_vQ=-2π i r^2𝔢 (ϕD_vϕ-ϕD_vϕ).Furthermore, we can expressQ=2r^2Ω^-2(∂_uA_v-∂_vA_u).§.§.§ Raychaudhuri's equations ∂_u(Ω^-2∂_u r)= -4π rΩ^-2|D_uϕ|^2, ∂_v(Ω^-2∂_v r)= -4π rΩ^-2|D_vϕ|^2.§.§ Hawking mass Define the Hawking mass m bym :=r2 (1-g_𝒬(∇ r,∇ r)) = r2 (1+4_u r_v r^2).By (<ref>), (<ref>) and (<ref>), _u m = -8πr^2(_v r)^2|D_uϕ|^2+2(_u r)^2 π r^2|ϕ|^2+ 12 (_u r)Q^2r^2, _v m = -8πr^2(_u r)^2|D_vϕ|^2+2(_v r)^2 π r^2|ϕ|^2+ 12 (_v r)Q^2r^2.§.§ Global gauge transformationsConsider the following global gauge transformation induced by the function χ: 𝒟→, 𝒟⊂^2:ϕ(u,v)=e^-i𝔢χ(u,v)ϕ(u,v), A_μ(u,v)=A_μ(u,v)+∂_μχ(u,v),with μ=u,v. Let us denote D=d+i𝔢A. Then,D_μϕ=e^-i𝔢χD_μϕ.As a result we conclude that the following norms are (globally) gauge-invariant: |ϕ|=|ϕ| and |D_μϕ|=|D_μϕ|.In most of this paper, the choice of gauge will not be important. We will only explicitly choose a gauge when discussing local existence or when we need to construct an extension of the solution. Instead, most of the time we will estimate the gauge invariant quantities |ϕ| and |D_μϕ|. 
For this purpose, let us note that we have the following estimates regarding these quantities: The following estimates hold:|ϕ|(u,v)≤ |ϕ|(u_1,v)+ ∫_u_1^u |D_u ϕ|(u',v) du'and|ϕ|(u,v)≤ |ϕ|(u,v_1)+ ∫_v_1^v |D_v ϕ|(u,v') dv'. We can always pick χ such thatA_u=0 and D_uϕ=∂_uϕ. This fact, together with the fundamental theorem of calculus and the gauge-invariance property above, imply |ϕ|(u,v)=|ϕ|(u,v)≤ |ϕ|(u_1,v) + ∫_u_1^u |D_u ϕ|(u',v) du'= |ϕ|(u_1,v)+ ∫_u_1^u |D_u ϕ|(u',v) du',which implies (<ref>).Similarly, by choosing χ such that A_v=0, we obtain (<ref>).§ INTERIOR OF EXTREMAL REISSNER–NORDSTRÖM BLACK HOLES The interior region of the extremal Reissner–Nordström solution with mass M>0 is the Lorentzian manifold (ℳ_eRN,g_eRN), where ℳ_eRN = (0,M)_r× (-∞,∞)_t×𝕊^2 and the metric g_eRN in the (t,r,θ,φ) coordinate system is given byg_eRN = -_0^2 dt^2 + _0^-2 dr^2 + r_0^2(dθ^2+sin^2θ dφ^2),where _0 = (1-Mr_0). We define the Eddington–Finkelstein r^* coordinate (as a function of r) byr^*= M^2M-r+2Mlog (M-r) + r,and define the Eddington–Finkelstein double-null coordinates byu= t-r^*, v =t+r^*.In Eddington–Finkelstein double-null coordinates (u,v,θ,φ), the metric takes the form as in Section <ref>:g_eRN=-Ω^2_0(u,v) dudv+r_0^2(u,v)(dθ^2+sin^2θ dφ^2),where r_0 is defined implicitly by (<ref>) and (<ref>) and Ω^2_0(u,v) = (1-Mr_0(u,v))^2.For the purpose of this paper, we do not need the explicit expressions for r_0 and _0 as functions of (u,v), but it suffices to have some simple estimates. Since we will only be concerned with the region of the spacetime close to timelike infinity i^+ (see Figure <ref>)[Formally, it is the “2-sphere at u=-∞, v=∞”.], we will assume v≥ 1 and u≤ -1. In this region, we have the following estimates (the proof is simple and will be omitted): For v≥ 1 and u≤ -1, there exists C>0 (depending on M) such that for v≥ 1 and u≤ -1,|r_0-M|(u,v)≤C(v+|u|), |_v r_0|(u,v)+|_u r_0|(u,v)≤C(v+|u|)^2.Given any β>0, we can find a constant C_β>0 (depending on M and β) such that for v≥ 1 and u≤ -1,|Ω_0-2M/v+|u||(u,v)≤ C_β(v+|u|)^-2+β,and|∂_v(v^2Ω_0^2)+∂_u(u^2Ω_0^2)|(u,v)≤ C_β (v+|u|)^-2+β. §.§ Regular coordinates We would like to think of ℳ_eRN as as having the “event horizon” and the “Cauchy horizon” as null boundaries, which are formally the boundaries {u=-∞} and {v=∞} respectively. To properly define them, we will introduce double null coordinate systems which are regular at the event horizon and at the Cauchy horizon respectively. We will also use these coordinate systems later in the paper * to pose the characteristic initial value problem near the event horizon; and* to extend the solution up to the Cauchy horizon.§.§.§ Regular coordinates at the event horizonDefine U by the relationdUdu = Ω_0^2(u,1)=(1-M/r_0(u,1))^2, U(-∞) = 0.By Lemma <ref>, there exists a constant C (depending on M), so that we can estimate0≤dUdu≤ C(1+|u|)^-2.Define the event horizon as the boundary {U=0}. We will abuse notation to denote the event horizon as both the boundary in the quotient manifold {(U,v):U=0}⊂𝒬 and the original manifold {(U,v):U=0}×^2 ⊂ℳ_eRN (cf. Section <ref>).After denoting u(U) as the inverse of u↦ U, we abuse notation to write r_0(U,v)=r_0(u(U),v) and Ω̂_0 is defined byΩ̂_0^2(U,v) =Ω_0^-2(u(U),1) Ω_0^2(u(U),v),the extremal Reissner–Nordström metric takes the following form in the (U,v,θ,φ) coordinate systemg_eRN=-Ω̂^2_0(U,v) dU dv+r_0^2(U,v)(dθ^2+sin^2θ dφ^2),In particular, by (<ref>), it holds thatΩ̂_0(0,v) = 1for all v. 
Additionally, we have, for all v,r_0(0,v) = M.Hence, in the (U,v,θ,φ) coordinate system, Ω̂_0(U,v) and r_0(U,v) extend continuously (in fact smoothly) to the event horizon. Moreover, for every v≥ 1 and u(U)≤ -1, Ω̂_0^2(U,v) is bounded above and below as follows:2v+1≤_0(U,v) ≤ 1. §.§.§ Regular coordinates at the Cauchy horizonDefine V by the relationdVdv = Ω_0^2(-1,v)=(1-M/r_0(-1,v))^2 , V(∞) = 0.By Lemma <ref>, there exists a constant C (depending on M), so that we can estimate0≤dVdv≤ C(1+v)^-2. Define the Cauchy horizon as the boundary {V=0}. (Again, this is to be understood either as {(u,V):V=0}⊂𝒬 or the original manifold {(u,V):V=0}×⊂ℳ_eRN.) After denoting v(V) as the inverse of v↦ V, we abuse notation to write r_0(u,v(V))=r_0(u,v(v)) and Ω_0 is defined byΩ_0^2(u,v(V)) = Ω_0^-2(-1,v(V))Ω_0^2(u,v(V)), the extremal Reissner–Nordström metric takes the following form in the (u,V,θ,φ) coordinate systemg_eRN=-Ω^2_0(u,V) dudV+r_0^2(u,V)(dθ^2+sin^2θ dφ^2),In analogy with Section <ref>, it is easy to see that Ω^2_0 and r_0 extend smoothly to the Cauchy horizon.§ INITIAL DATA ASSUMPTIONS We will consider the characteristic initial value problem for (<ref>) with initial data given on two transversally intersecting null hypersurfaces, which in the double null coordinates (U,v) are denoted byH_0:={(U,v):U=0},H_v_0:={(U,v):v=v_0}.Here, the (U,v) coordinates should be thought of as comparable to the Reissner–Nordström (U,v) coordinates in Section <ref>; see Section <ref> for further comments.The initial data consist of (ϕ,r,Ω,Q) on both H_0 and H_v_0, subject to the equations (<ref>) and (<ref>) on H_v_0, as well as the equations (<ref>) and (<ref>) on H_0.We impose the following gauge conditions on the initial hypersurfaces H_v_0 and H_0:Ω̂(U,v_0)=Ω̂_0(U,v_0) Ω̂(0, v)=Ω̂_0(0,v)=1which can be thought of as a normalisation condition for the null coordinates. The initial data for (ϕ,r,Ω,Q) will be prescribed in Sections <ref>–<ref>, but before that, we will give some remarks in Sections <ref> and <ref>: in Section <ref>, we discuss our conventions on null coordinates; in Section <ref>, we discuss which parts of the data are freely prescribable and which parts are determined by the constraints. We then proceed to discuss the initial data and the bounds that they satisfy. In Section <ref>, we discuss the data for ϕ; in Section <ref>, we discuss the data for r; in Section <ref>, we discuss the data for Q; in Section <ref>, we discuss the data for _U r on H_0. §.§ A comment about the use of the null coordinates In the beginning of Section <ref>, we normalised the null coordinates (U,v) on the initial hypersurfaces by the condition (<ref>) so that they play a similar role to the (U,v) coordinates on extremal Reissner–Nordström spacetimes introduced in Section <ref>. This set of null coordinates has the advantage of being regular near the event horizon and therefore it is easy to see that the Einstein–Maxwell–Klein–Gordon system is locally well-posed with the prescribed initial data. However, in the remainder of the paper, it will be useful to pass to other sets of null coordinates. For this we introduce the following convention. We use all of the coordinate systems (U,v), (u,v) and (u,V), where u and V are defined (as functions of U and v respectively) by (<ref>) and (<ref>).All the data will be prescribed in the (U,v) coordinate system and we will prove estimates for ϕ, r and Q in these coordinates. 
Nevertheless, using (<ref>), they imply immediately also estimates in the (u,v) coordinate system, and it is those estimates that will be used in the later parts of the paper. §.§ A comment about freely prescribable data Since the initial data need to satisfy (<ref>), (<ref>), (<ref>) and (<ref>), not all of the data are freely prescribed. Instead, we have freely prescribable and constrained data: * A normalisation condition for u and v can be specified. In our case, we specify the condition (<ref>).* ϕ on H_0 and H_0 can be prescribed freely.* r and Q can then be obtained by solving (<ref>), (<ref>), (<ref>) and (<ref>) with appropriate initial conditions, namely, * r and Q are to approach their corresponding values in extremal Reissner–Nordström with mass M>0, i.e.lim_v→∞ r(0,v) = lim_v→∞ Q(0,v) = M. * (_U r)(0,v_0) can be freely prescribed, cf. (<ref>).We remark that in order to fully specify the initial data, it only remains to pick a gauge condition for A. For this purpose, it will be most convenient to set A_U(U,v_0)=0 and A_v(0,v)=0. (This can always be achieved as each of these are only set to vanish on one hypersurface, cf. discussions following (<ref>).) Nevertheless, the choice of gauge will not play a role in the rest of this subsection, since all the estimates we will need for ϕ and its derivatives can be phrased in terms of the gauge invariant quantities |ϕ|, |D_vϕ| and |D_uϕ|. §.§ Initial data for ϕ We assume that there exists constants 𝒟_ i and 𝒟_ o such that∫_0^U_0 |D_Uϕ |^2(U,v_0) dU≤𝒟_ i , ∫_v_0^∞v'^2+α|D_vϕ|^2(0,v') dv'≤𝒟_ o,where we will take α>0. We additionally assume that lim_v→∞ϕ(0,v)=0.The following estimate holds:|ϕ|(0,v) ≤√(𝒟_ o)v^-1/2-α/2.By (<ref>), (<ref>) and Cauchy–Schwarz inequality, we have that|ϕ|(0,v)≤∫_v^∞ |D_vϕ|(0,v') dv'≤√(∫_v^∞ v'^-2-αdv')·√(∫_v^∞ v'^2+α|D_vϕ|(0,v')dv'),so we can conclude using (<ref>).§.§ Initial data for rWe assume thatlim_v→∞r(0,v)=Mand we prescribe freely ∂_Ur(0,v_0). Let us assume that_U r(0,v_0)<0, |∂_Ur|(0,v_0)≤ M𝒟_ i. We use the equations (<ref>) and (<ref>) as constraint equations for the variable r along H_0={U=0} and H_v_0={v=v_0}.§.§.§ Initial data for r on H_0We obtain along H_0:∂_v^2r(0,v)=-4πr(0,v)|D_vϕ|^2,The above ODE can be solved to obtain r(0,v). There exists a unique smooth solution to (<ref>) satisfying (<ref>). Moreover, if v_0 satisfies the inequality𝒟_ ov^-1_0≤ 18π,then the following estimates hold for v≥ v_0:M/2≤ r(0,v)≤ M, |r-M|(0,v)≤ 4 π M 𝒟_ o v^-1, |_v r|(0,v)≤ 4 π M 𝒟_ o v^-2.Existence and uniqueness can be obtained using a standard ODE argument. We will focus on proving the estimates.First, observe that by integrating (<ref>), and using the assumptions r(0,v)→ M and (<ref>), it follows that the limit lim_v→∞ (_v r)(0,v) exists. Now using again the assumption r(0,v)→ M, we deduce that lim_v→∞(_v r)(0,v) = 0.Together with (<ref>) this implies that∂_vr(0,v)≥ 0.Since r(0,v)→ M, we can then boundr(0,v)≤ M. By (<ref>) and (<ref>),|∂_vr(0,v)|≤ 4πsup_v_0≤ v<∞r(0,v)·∫_v^∞ |D_vϕ |^2(u,v') dv'.We deduce, using (<ref>) and (<ref>), that|∂_vr(0,v)|≤ 4π M 𝒟_ ov^-2-,and therefore|r(0,v)-M|≤4π M1+𝒟_ ov^-1-.for all v_0≤ v <∞. 
In particular, given (<ref>), it holds that for all v∈ [v_0,∞),r(0,v)≥M2.The estimates stated in the lemma hence follow from (<ref>), (<ref>), (<ref>) and (<ref>).§.§.§ Initial data for r on H_0 We similarly obtain along H_0:∂_U(Ω̂^-2(U,v_0) _U r(U,v_0))=-4πΩ̂^-2(U,v_0)r(U,v_0)|D_Uϕ|^2.The above ODE can be solved to obtain r(U,v_0).There exists a unique smooth solution to (<ref>) satisfying (<ref>). Moreover, if U_0, v_0 satisfy the inequality𝒟_ ov^-1_0≤ 18π, M 𝒟_ i v_0^2 U_0≤136π,then the following estimates hold for U∈ [0,U_0]:M4≤ r(U,v_0)≤5M4, |_U r|(U,v_0)≤ 9π M 𝒟_ iv_0^2.As in Lemma <ref>, since existence and uniqueness is standard, we focus on the estimates. To this end, we introduce a bootstrap argument. Introduce the following bootstrap assumptionr(U,v_0) ≤ 2M.Integrating (<ref>) and using (<ref>),|Ω̂^-2(U,v_0)∂_Ur(U,v_0)-∂_Ur(0,v_0)|≤ 4πsup_0≤ U'≤ U_0Ω̂^-2(U',v_0) r(U',v_0)·∫_0^U |D_Uϕ |^2(U”,v_0) dU”.By (<ref>), (<ref>) and (<ref>), this implies|∂_Ur(U,v_0)|≤ M𝒟_ i + 8π M ·(v_0+12)^2·𝒟_ i = M 𝒟_ i(1+2π(v_0+1)^2)≤ 9 π M 𝒟_ i v_0^2. Integrating in U, this yields (for U∈ [0,U_0]),|r(U,v_0)-r(0,v_0)|≤ 9 π M 𝒟_ i v_0^2 U_0,which implies, using Lemma <ref> (or more precisely (<ref>) and (<ref>)),M 2 - 9 π M 𝒟_ i v_0^2 U_0≤ r(U,v_0)≤ M + 9 π M 𝒟_ i v_0^2 U_0.Hence, by (<ref>), it holds thatM4≤ r(U,v_0)≤5M4,and we have improved the bootstrap assumption (<ref>). This closes the bootstrap argument, and the desired estimates follow from (<ref>) and (<ref>). §.§ Initial data for QIn view of the equations (<ref>) and (<ref>) which have to be satisfied on the initial hypersurfaces, it suffices to impose Q on one initial sphere. We assume thatlim_v→∞Q(0,v)=M.Assume (<ref>) holds. Then the following estimate holds on H_0:|Q(0,v)-M|(0,v)≤ 4π ||M 𝒟_ ov^-1-α.Using (<ref>), (<ref>) and Cauchy–Schwarz inequality we estimate|Q(0,v)-M|(0,v)≤ 4π ||sup_v”∈ [v_0,∞ r^2(0,v”) ∫_v^∞ |ϕ|·|D_vϕ|(0,v') dv' ≤4π ||M^2sup_v≤ v'<∞|ϕ|(0,v')·√(∫_v^∞ v'^-2-αdv')·√(∫_v^∞ v'^2+α|D_vϕ|^2(0,v')dv'),so we can use (<ref>) and (<ref>) to conclude (<ref>). §.§ ∂_Ur along H_0The function _u r along H_0 is not freely prescribable, but is dictated by (<ref>) and the freely prescribable data for (_U r)(0,v_0) (which obeys (<ref>)). We will need the following estimate for ∂_Ur along H_0. Suppose (<ref>) holds. Then there exists a constant C>0 depending only on M andsuch that for every v∈ [v_0,∞),|∂_Ur(0,v)|≤ C(𝒟_ o+𝒟_ i).By (<ref>) we have that∂_v(r∂_Ur)(0,v)=4M^2𝔪^2π r^2 |ϕ|^2+M^2r^-2( Q^2-r^2)=4M^2𝔪^2π r^2 |ϕ|^2+M^2r^-2( Q^2-M^2)+M^2r^-2( M^2-r^2).Hence,|(r∂_Ur)(0,v)-(r∂_Ur)(0,v_0)|≲|∫_v_0^∞𝔪^2r^2 |ϕ|^2+1/4πM^-2( Q^2-M^2)+1/4πM^-2(M^2-r^2) dv' |≲ 𝒟_ o,where we have used Lemmas <ref>, <ref> and <ref> (and we crucially used that α>0). Together with (<ref>), we can therefore conclude (<ref>).§ STATEMENT OF THE MAIN THEOREM We are now ready to give a precise statement of the main theorem. Let us recall (from Section <ref>) that we also consider the coordinate system (u,v), where u(U) is defined via the relation (<ref>). It will be convenient from this point onwards to use the u (instead of U) coordinate.Suppose* the parameters M and 𝔢 obey[Note that(10+5√(6)-3√(9+4√(6)))∼ 9.24… .] 1-(10+5√(6)-3√(9+4√(6)))|𝔢|M>0. * the initial data are smooth and satisfy (<ref>), (<ref>), (<ref>), (<ref>), (<ref>), (<ref>) and (<ref>) for some finite 𝒟_ o and 𝒟_ i. 
Then for |u_0| sufficiently large depending on M, , , , 𝒟_ o and 𝒟_ i and v_0 sufficiently large depending on M, , ,and 𝒟_ o (but not 𝒟_ i!), the following holds: * (Existence of solution) There exists a unique smooth, spherically symmetric solution to (<ref>) in the double null coordinate system in (u,v)∈ (-∞,u_0]× [v_0,∞).* (Extendibility to the Cauchy horizon) In an appropriate coordinate system, a Cauchy horizon can be attached to the solution so that the metric, the scalar field and the Maxwell field extend continuously to it.* (Quantitative estimates) The following estimates hold for all (u,v)∈ (-∞,u_0)× [v_0,∞) for some implicit constant depending on M,and(which shows that the solution is close to extremal Reissner–Nordström in an appropriate sense):|ϕ|(u,v)𝒟_ o v^- 12 - 2 + (𝒟_ o + 𝒟_ i) |u|^- 12, |r-M|(u,v)𝒟_ o v^-1 + (𝒟_ o + 𝒟_ i)|u|^-1, |^2-_0^2|(u,v) |u|^- 12(|u|+v)^-2, |_u r|(u,v) (𝒟_ o + 𝒟_ i)|u|^-2, |_v r|(u,v) (𝒟_ o + 𝒟_ i)v^-2, ∫_v_0^∞ v^2|_vϕ|^2(u,v)dv + ∫_-∞^u_0 u^2|_uϕ|^2(u,v)du𝒟_ o+𝒟_ i, ∫_-∞^u_0 u^2 (∂_u(logΩ/Ω_0))^2(u,v) du+∫_v_0^v_∞ v^2 (∂_v(logΩ/Ω_0))^2(u,v) dv≤ 12. * (Extendibility as a spherically symmetric solution) The solution can be extended non-uniquely in (spacetime) C^0, 12∩ W^1,2_loc beyond the Cauchy horizon as a spherically symmetric solution to the Einstein–Maxwell–Klein–Gordon system. Without loss of generality, we will from now on assume that v_0≥ 1 and u_0≤ -1.Recall that in Section <ref>, some of the estimates that were proven depend on the assumptions (<ref>) and (<ref>). From now on, we take v_0 sufficiently large and u_0 sufficiently negative (in a manner allowed by Theorem <ref>) so that (<ref>) and (<ref>) hold.Note that v_0 is assumed to be large depending on M, , ,and 𝒟_ o so that we restrict our attention to a region where the geometry is close to that of extremal Reissner–Nordström. However, in general, if we are given data with v_0,i=1 (say) and 𝒟_ o not necessarily small, we can do the following: * First, find a v_0 (sufficiently large) such that v_0 is sufficiently large depending on M, , ,and 𝒟_ o in a way that is required by Theorem <ref>. * Solve a finite characteristic initial value problem in (-∞,u_0]× [v_0,i,v_0] for some u_0 sufficiently negative. (Such a problem can always be solved for u_0 sufficiently negative. This can be viewed as a restatement of the fact that for local characteristic initial value problems, one only needs the smallness of one characteristic length, as long as the other characteristic length is finite, cf. <cit.>.)* Now, let 𝒟_ i be the size of the new data on {v=v_0} which is obtained from the previous step. 
By choosing u_0 smaller if necessary, it can be arranged so that |u_0| is large in terms of M, ,and 𝒟_ o + 𝒟_ i in a way consistent with Theorem <ref>.* Theorem <ref> can now be applied to obtain a solution in (-∞,u_0]× [v_0,∞) such that the conclusions of Theorem <ref> hold.In the case ==0, we obtain the following additional regularity of the scalar field:In the case ==0, suppose that in addition to the assumptions of Theorem <ref>, the following pointwise bounds hold for the initial data:sup_u∈ (-∞,u_0] |u|^2|_u ϕ|(u,v_0) + sup_v∈ [v_0,∞) v^2 |_vϕ|(-∞,v)<∞.Then, taking u_0 more negative if necessary, in the (u,V) coordinate system (see (<ref>)), the scalar field is Lipschitz up to the Cauchy horizon.§ THE MAIN BOOTSTRAP ARGUMENT§.§ Setup of the bootstrapWe will assume that* there exists a smooth solution (ϕ,Ω,r,A) to the system of equations (<ref>)–(<ref>) in the rectangle D_U_0,[v_0,v_∞)={(U,v) |0≤ U≤ U_0,v_0≤ v< v_∞}, such that * the initial gauge conditions are satisfied, i.e. Ω̂^2(U,v_0)=Ω̂_0^2(U,v_0) and Ω̂^2(0,v)=Ω̂_0^2(0,v), and* the initial conditions for ϕ, r, Q are attained.On this region, we will moreover assume that certain bootstrap assumptions hold (cf. Section <ref>). Our goal will then be to improve these bootstrap assumptions, which then by continuity, implies that the above three properties hold in for all v≥ v_0, i.e. in the region D_U_0,v_0=D_U_0,[v_0,∞).Recall again that we often use the (u,v) instead the (U,v) coordinates. Abusing notation, we will also writeD_u_0,[v_0,v_∞)=D_U_0,[v_0,v_∞)={(u,v) |-∞<u≤ u_0:=u(U_0),v_0≤ v≤ v_∞}. §.§ Bootstrap assumptions Fix η>0 sufficiently small (depending only onand M) so that 1-(10+5√(6)-3√(9+4√(6)))(1+η)|𝔢|M> 0.(Such an η exists in view of (<ref>).) Defineμ=(1-(10+5√(6)-3√(9+4√(6)))(1+η)|𝔢|M).Let us make the following bootstrap assumptions for the quantities (ϕ,Ω,r) in D_U_0,v_∞, for some 𝒜_ϕ≥ 1 to be chosen later: A1 sup_v∈ [v_0,v_∞]∫_-∞^u_0 u'^2 (∂_u(logΩ/Ω_0))^2(u',v) du'+sup_u∈(-∞,u_0]∫_v_0^v_∞ v'^2 (∂_v(logΩ/Ω_0))^2(u,v') dv'≤M, A2 sup_v∈ [v_0,v_∞]∫_-∞^u_0 u'^2 |D_u ϕ |^2(u',v) du'+sup_u∈(-∞,u_0]∫_v_0^v_∞v'^2|D_vϕ|^2(u,v') dv'≤ 𝒜_ϕ (𝒟_ o+𝒟_ i), A3 sup_u∈(-∞,u_0], v∈ [v_0,v_∞]|r-M|(u,v)≤ M/2.Our goal will be to show that under these assumptions, for |u_0| sufficiently large depending on M, , , , η, 𝒟_ o and 𝒟_ i and v_0 sufficiently large depending on M, , , , η and 𝒟_ o, * the estimate (<ref>) can be improved so that the RHS can be replaced by M2;* the estimate (<ref>) can be improved so that the RHS can be replaced by C(𝒟_ o+𝒟_ i), where C is a constant depending only on M, , ,and η;* the estimate (<ref>) can be improved to |r-M| ≤M/4. §.§ Conventions regarding constants In closing the bootstrap argument, the main source of smallness will come from choosing |u_0| and v_0 appropriately large. We remark on our conventions regarding the constants that will be used in the bootstrap argument. * All the implicit constants (either in the form of C or ) are allowed to depend on the parameters M, ,and . In particular, they are allowed to depend on η (defined in (<ref>)) and μ (defined in (<ref>)). There will be places where the exact values of these parameters matter (hence the corresponding restriction in Theorem <ref>): at those places the constants will be explicitly written.* |u_0| is taken to be large depending on M, , , , η, 𝒟_ o and 𝒟_ i, and v_0 is taken to be large depending on M, , , , η and 𝒟_ o. 
In particular, we will use𝒟_ o v_0^- 110≪ 1, (𝒟_ o + 𝒟_ i) |u_0|^- 110≪ 1without explicit comments, where by ≪ 1, we mean that it is small with respect to the constants appearing in the argument that depend on M, , ,and η.* 𝒜_ϕ≥ 1 will eventually be chosen to be large depending M, ,and , but not on 𝒟_ o and 𝒟_ i. In particular, we will also use𝒜_ϕ^3𝒟_ o v_0^- 110≪ 1,𝒜_ϕ^3(𝒟_ o + 𝒟_ i) |u_0|^- 110≪ 1without explicit comments. § POINTWISE ESTIMATES For all -∞<u≤ u_0, v_0≤ v≤ v_∞ we have that:|ϕ|(u,v)≲√(𝒟_ o)v^-1/2-2+ 𝒜_ϕ^ 12√(𝒟_ o+𝒟_ i)|u|^-1/2, |Q-M|(u,v)≲𝒟_ ov^-1-+ 𝒜_ϕ(𝒟_ o+𝒟_ i) |u|^-1, |Ω-Ω_0|(u,v)≲|u|^-1/2(v+|u|)^-1, |Ω^2-Ω_0^2|(u,v)≲|u|^-1/2(v+|u|)^-2, |Ω-2M/(v+|u|)|(u,v)≲|u|^-1/2(v+|u|)^-1, |Ω^2-4M^2/(v+|u|)^2|(u,v)≲|u|^-1/2(v+|u|)^-2.In particular, 1/2M≤ |Q|(u,v)≤3/2M, 2M^2(v+|u|)^-2≤Ω^2(u,v)≤ 6M^2(v+|u|)^-2.Proof of (<ref>). By (<ref>) and the Cauchy–Schwarz inequality, we obtain|ϕ|(u,v)≤ |ϕ|(-∞,v)+ |u|^-1/2√(∫_-∞^u u'^2 |D_u ϕ|^2(u',v) du').Using (<ref>) and the bootstrap assumption (<ref>) to control the first and second term respectively, we obtain (<ref>).Proof of (<ref>). Using the estimate (<ref>) for ϕ, the bootstrap assumption (<ref>) for r together with (<ref>), we obtain a pointwise estimate for |Q-M|:|Q-M|(u,v)≤ |Q-M|(-∞,v)+∫_-∞^u|∂_uQ|(u',v) du' ≤ |Q-M|(-∞,v)+4π |𝔢|∫_-∞^ur^2|ϕ||D_uϕ|(u',v) du' ≤ |Q-M|(-∞,v)+4π |𝔢||u|^-1/2sup_-∞<u'≤ ur^2|ϕ|(u',v)·√(∫_-∞^uu^2|D_uϕ|^2(u',v) du') ≤ |Q-M|(-∞,v)+C√(𝒜_ϕ(𝒟_ o+𝒟_ i))|u|^-1/2(|u|^-1/2√(𝒜_ϕ(𝒟_ o+𝒟_ i))+v^-1/2-2√(𝒟_ o)).Using (<ref>) and Young inequality, we therefore conclude that|Q-M|(u,v) 𝒟_ ov^-1-+ 𝒜_ϕ(𝒟_ o+𝒟_ i) |u|^-1. Proof of (<ref>), (<ref>), (<ref>) and (<ref>). By our choice of initial gauge (<ref>), we have thatlogΩ/Ω_0(-∞,v)=logΩ̂/Ω̂_0(U=0,v)=0,so we can estimate using (<ref>)|logΩ/Ω_0|(u,v)≤|u|^-1/2√(∫_-∞^u u'^2 (∂_u(logΩ/Ω_0))^2(u',v') du')≤ M^ 12|u|^- 12.Using (<ref>) and the simple inequality |e^ϑ - 1|≤ |ϑ| e^|ϑ|, we have|ΩΩ_0 -1|(u,v) ≤ M^ 12|u|^- 12max{ΩΩ_0, Ω_0Ω}(u,v),|Ω_0Ω -1|(u,v) ≤ M^ 12|u|^- 12max{ΩΩ_0, Ω_0Ω}(u,v).We now consider two cases. Suppose ΩΩ_0(u,v)>1 for some (u,v), we have|ΩΩ_0 -1|(u,v)≤ M^ 12|u|^- 12(ΩΩ_0-1)(u,v) + M^ 12|u|^- 12,which, after choosing u_0 to satisfy M^ 12|u_0|^- 12≤ 12, implies|ΩΩ_0 -1|(u,v)≤ 2M^ 12|u|^- 12.Multiplying (<ref>) by _0 in particular implies |Ω-Ω_0|(u,v)≤ 4M^ 12|u|^-1/2Ω_0(u,v).in this case. On the other hand, if ΩΩ_0(u,v)<1 for some (u,v), we have by a similar argument that for M^ 12|u_0|^- 12≤ 12,|Ω_0Ω -1|(u,v)≤ 2M^ 12|u|^- 12.This then implies|_0-|(u,v)≤ 2M^ 12|u|^- 12 |-_0|(u,v) + 2M^ 12|u|^- 12_0(u,v).Choosing M^ 12|u_0|^- 12≤ 14 implies that we also have (<ref>) in this case. Using (<ref>) and (<ref>), we conclude (<ref>). For (<ref>), we use (<ref>) twice to obtain|Ω^2-Ω_0^2|(u,v)≤ 4M^ 12|u|^-1/2Ω_0(+_0)≤ 16M|u|^-1Ω_0^2 + 8 M^ 12|u|^-1/2Ω_0^2,which, after choosing |u_0| to be sufficiently large and using (<ref>), implies (<ref>). Finally, (<ref>) and (<ref>) follow from (<ref>), (<ref>) and (<ref>) (with β =12).Proof of (<ref>) and (<ref>). (<ref>) is an immediate consequence of (<ref>) while (<ref>) is an immediate consequence of (<ref>). The following estimates hold:|∂_ur|(u,v)≲𝒜_ϕ(𝒟_ o+𝒟_ i)|u|^-2, |∂_vr|(u,v)≲𝒟_ ov^-2+𝒜_ϕ(𝒟_ o+𝒟_ i)min{v^-2,|u|^-2}, |r(u,v)-M|≲𝒟_ ov^-1+𝒜_ϕ(𝒟_ o+𝒟_ i)|u|^-1, |r(u,v)-r_0(u,v)|≲𝒟_ o v^-1+𝒜_ϕ(𝒟_ o+𝒟_ i)|u|^-1+ (v+|u|)^-1.In particular,|r(u,v)-M|≤ M 4,which improves the bootstrap assumption (<ref>). Proof of (<ref>). 
By (<ref>), (<ref>), (<ref>) and (<ref>), we can estimate|∂_ur|(u,v)≤Ω^2(u,v)|∂_Ur|(-∞,v)+CΩ^2(u,v)∫_-∞^uΩ^-2(u',v)|D_uϕ|^2(u',v) du' ≤ C(𝒟_ o+𝒟_ i)|u|^-2+C(v+|u|)^-2∫_-∞^u(v+|u'|)^2/|u'|^2|u'|^2|D_uϕ|^2(u',v) du'Note that for |u'|≥ |u|, we have that(v+|u'|/|u'|)^2=(1+v/|u'|)^2≤(1+v/|u|)^2=(v+|u|/|u|)^2,which we can use to further estimate|∂_ur|(u,v)≤ C(𝒟_ o+𝒟_ i)|u|^-2+C|u|^-2∫_-∞^u|u'|^2|D_uϕ|^2(u',v) du'and hence, by (<ref>),|∂_ur|(u,v)≤ C𝒜_ϕ(𝒟_ o+𝒟_ i)|u|^-2.Proof of (<ref>). Using the fundamental theorem of calculus and integrating (<ref>) in u, we obtain|r(u,v)-M|≤ |r(-∞,v)-M|+∫_-∞^u |∂_ur|(u',v)' du' ≲𝒟_ ov^-1+𝒜_ϕ(𝒟_ o+𝒟_ i)|u|^-1,where in the last inequality we have used Lemma <ref> and (<ref>). Combining this with the estimates for r_0 in Lemma <ref>, we thus obtain the estimate for r-r_0 in (<ref>). In particular, for 𝒟_ ov_0^-1, (𝒟_ o+𝒟_ i)|u_0|^-1 suitably small, we obtain:|r(u,v)-M|<1/4M,for all (u,v)∈ D_u_0,v_∞, which is the estimate (<ref>).Proof of (<ref>): the region {v≤ |u|}. We first rewrite (<ref>) as follows:∂_u(r∂_vr)= 1/4Ω^2r^-2(Q^2-M^2)+ 1/4Ω^2r^-2(M^2-r^2)+𝔪^2π r^2Ω^2 |ϕ|^2.By (<ref>) (for ϕ), (<ref>) (for r-M) and (<ref>) (for Q-M), the u-integral of the RHS of the above equation can be estimated (up to a constant) by∫_-∞^u Ω^2 (𝒟_ o v^-1 + 𝒜_ϕ(𝒟_ o+𝒟_ i) |u'|^-1)du'.Using (<ref>), we have, in the region {v≤ |u|},∫_-∞^u Ω^2 𝒟_ o v^-1du'𝒟_ o v^-1∫_-∞^u (v+|u'|)^-2du' 𝒟_ o v^-1(v+|u|)^-1and ∫_-∞^u Ω^2 𝒜_ϕ(𝒟_ o+𝒟_ i) |u'|^-1 du' 𝒜_ϕ(𝒟_ o+𝒟_ i)∫_-∞^u (v+|u'|)^-2 |u'|^-1du'𝒜_ϕ(𝒟_ o+𝒟_ i) |u|^-2.Together with the bound on _v r on the event horizon in Lemma <ref>, this implies that when v≤ |u|, we have the following estimate:|_v r|(u,v)𝒟_ ov^-2+𝒜_ϕ(𝒟_ o+ 𝒟_ i)min{v^-2,|u|^-2}. Proof of (<ref>): the region {v≥ |u|}. Notice that if we estimate in this region the same manner as before, we lose a factor of log v in the bound. So instead of (<ref>), we will use the Raychaudhuri's equation (<ref>)[On the other hand, in the region {v≤ |u|} that we considered above, it also does not seem that Raychaudhuri can give the desired bound.]. More precisely, given (u,v) with v≥ |u|, we integrate (<ref>) along a constant-u curve starting from its intersection with the curve {|u|=v} to obtain|_v r^2|(u,v)|_v r^2|(u,-u) + ∫_-u^v ^-2 |D_vϕ|^2(u,v') dv'.By the estimates for _v r in the previous step and (<ref>), we have|_v r^2|(u,-u)𝒜_ϕ(𝒟_ o+ 𝒟_ i).Since v'≥ |u| in the domain of integration, we have, after using (<ref>) and (<ref>), that∫_-u^v ^-2 |D_vϕ|^2(u,v') dv'∫_-u^v (v'+|u|)^2 |D_vϕ|^2(u,v') dv'∫_-u^v v^2 |D_vϕ|^2(u,v') dv'𝒜_ϕ(𝒟_ o+𝒟_ i).Combining the three estimates above with (<ref>) yields that when v≥ |u|, |_v r|(u,v) 𝒜_ϕ(𝒟_ o+ 𝒟_ i)min{v^-2,|u|^-2}.Together with the previous step, we have thus completed the proof of (<ref>). § ENERGY ESTIMATES In this section, we prove the energy estimates for (derivatives of) ϕ and . In particular, we will improve our bootstrap assumptions (<ref>) and (<ref>). As we discussed in the introduction, the argument leading to energy estimates for ϕ will go through the introduction of a renormalised energy, the analysis of which forms the most technical part of the paper.In Section <ref>, we will motivate the introduction of our renormalised energy for the matter field by considering the stress-energy-momentum tensor associated to the matter field. In Section <ref>, we show that the renormalised energy we introduce is coercive; and in Section <ref>, we show that one can bound this renormalised energy. 
Combining these facts yields the desired control for the matter field.Finally, in Section <ref>, we prove the energy estimates for _0. §.§ The stress energy tensor and the renormalised energy fluxesThe null components of the stress-energy-momentum tensor 𝕋_μν corresponding to the scalar field and electromagnetic tensor are given by:𝕋_uv= 1/4Ω^2(𝔪^2|ϕ|^2+1/4πr^-4Q^2),𝕋_uu= |D_uϕ|^2,𝕋_vv=|D_vϕ|^2.The above expressions suggest that the natural energy fluxes along the null hypersurfaces of constant u and v are obtained by integrating the contraction 𝕋(X,∂_v) and 𝕋(X,∂_u), respectively.With the choice X=u^2∂_u+v^2∂_v, this gives:∫_v_1^v_2 v^2r^2|D_vϕ|^2+ 1/4u^2r^2Ω^2(𝔪^2|ϕ|^2+1/4πr^-4Q^2) dv,∫_u_1^u_2 u^2r^2|D_uϕ|^2+ 1/4v^2r^2Ω^2(𝔪^2|ϕ|^2+1/4πr^-4Q^2) du,with -∞<u_1<u_2≤ u_0, v_0 ≤ v_1<v_2<∞.However, with the above energy fluxes, the initial energy flux is not finite even initially on the event horizon with the chosen initial data. We therefore introduce the following renormalised energy fluxes.[Notice that in the renormalisation, not only have we “added an infinity term” to each of the fluxes, we have also “replaced several factors of r by factors of M”. The replacement of r by M is strictly speaking not necessary to make the renormalised energy fluxes finite, but this simplifies the computations below.]E_v(u):=∫_v_0^v_∞ v^2M^2|D_vϕ|^2(u,v)+ 1/4u^2M^2Ω^2(𝔪^2|ϕ|^2+1/4πM^-4(Q^2-M^2))(u,v) dv,E_u(v):=∫_-∞^u_0 u^2M^2|D_uϕ|^2(u,v)+ 1/4v^2M^2Ω^2(𝔪^2|ϕ|^2+1/4πM^-4(Q^2-M^2))(u,v) du.§.§ Coercivity of the renormalised energy flux Our goal in this subsection is to prove that the renormalised energy flux we defined in (<ref>) and (<ref>) is coercive and controls the quantity on the LHS of (<ref>). The statement of the main result can be found in Proposition <ref> at the end of the subsection. This will be achieved in a number of steps. We will first need the following preliminary results: * We need an improved version of (<ref>), which keeps track of the constants appearing in the leading-order terms (Lemma <ref>)* We then show that ∫_v_0^v (|u|/v'+|u|)^2|ϕ|^2(u,v') dv' can be controlled by the LHS of (<ref>) (Lemma <ref>).* Similarly, we show that ∫^u_0_-∞(v/v+|u'|)^2|ϕ|^2(u',v) du' can be controlled by the LHS of (<ref>) (Lemma <ref>).Both 2. and 3. above are based on a Hardy-type inequality; see Lemma <ref>. After these preliminary steps, we turn to the terms in the renormalised energy (cf. (<ref>), (<ref>)) which are not manifestly non-negative. Precisely, we control * 1/16π∫_v_0^v_∞ u^2M^-2Ω^2(Q^2-M^2)(u,v) dv in Proposition <ref>, and* 1/16π∫_-∞^u_0 v^2M^-2Ω^2(Q^2-M^2)(u,v) du in Proposition <ref>.Putting all these together, we thus obtain the main result in Proposition <ref>.We now turn to the details, beginning with the following lemma: Let η>0 be as in (<ref>). 
Then there exists C>0 such that|Q^2-M^2|(u,v)≤ 8π M|𝔢|(1+η)|u|^-1∫_-∞^uu'^2M^2|D_uϕ|^2(u',v) du'+C𝒟_ ov^-1+C𝒜_ϕ(𝒟_ o+𝒟_ i)|u|^-2.We compute(Q^2-M^2)(u,v)=(Q(u,v)+M)(Q(u,v)-M)=(Q(u,v)+M)(Q(-∞,v)-M+∫_-∞^u∂_uQ(u',v) du')and use (<ref>) to estimate|Q^2-M^2|(u,v)≤ |Q(u,v)+M|(|Q(-∞,v)-M|+4π|𝔢|∫_-∞^ur^2|ϕ||D_uϕ| du').We further use (<ref>)to estimate:∫_-∞^ur^2|ϕ||D_uϕ|(u',v) du' ≤[M^2+sup_-∞<u'≤ u(r^2-M^2)]·sup_-∞<u'≤ u|ϕ|(u',v)·∫_-∞^u|D_uϕ|(u',v) du' ≤[M^2+sup_-∞<u'≤ u(r^2-M^2)]· |u|^-1/2|ϕ|(-∞,v)√(∫_-∞^uu'^2|D_uϕ|^2(u',v) du') +[M^2+sup_-∞<u'≤ u(r^2-M^2)]· |u|^-1/2∫_-∞^u |D_u ϕ|(u',v) du'·√(∫_-∞^uu'^2|D_uϕ|^2(u',v) du') ≤[M^2+sup_-∞<u'≤ u(r^2-M^2)]· |u|^-1/2|ϕ|(-∞,v)√(∫_-∞^uu'^2|D_uϕ|^2(u',v) du') +[M^2+sup_-∞<u'≤ u(r^2-M^2)]· |u|^-1∫_-∞^uu'^2|D_uϕ|^2(u',v) du'.Using (<ref>), it follows that|r^2-M^2|(u,v)=|r-M+2M|· |r-M|≲𝒟_ ov^-1+𝒜_ϕ(𝒟_ o+ 𝒟_ i)|u|^-1,where we have taken 𝒟_ ov_0^-1 and (𝒟_ o+𝒟_ i)|u_0|^-1 to be suitably small.Therefore, we can further estimate the right-hand side of (<ref>) to obtain∫_-∞^ur^2|ϕ||D_uϕ|(u',v) du'≤1/4η^-1 M^2|ϕ|^2(-∞,v)+(1+η)M^2|u|^-1∫_-∞^uu'^2|D_uϕ|^2(u',v) du'+C𝒟_ ov^-1|u|^-1+C𝒜_ϕ(𝒟_ o+𝒟_ i)|u|^-2,where we moreover applied Young's inequality with an η weight, where η>0 is as in the statement of the proposition.By applying (<ref>) and (<ref>) together with the initial data estimate (<ref>) and (<ref>), we therefore obtain:|Q^2-M^2|(u,v)≤ |Q(u,v)+M|(|Q(-∞,v)-M|+πη^-1M^2|𝔢| |ϕ|^2(-∞,v)+4π|𝔢|(1+η)|u|^-1∫_-∞^uu'^2M^2|D_uϕ|^2(u',v) du'+C𝒟_ ov^-1|u|^-1+C𝒜_ϕ(𝒟_ o+𝒟_ i)|u|^-2) ≤(2M+C𝒟_ ov^-1-+C𝒜_ϕ(𝒟_ o+𝒟_ i) |u|^-1) ·(4π|𝔢|(1+η)|u|^-1∫_-∞^uu'^2M^2|D_uϕ|^2(u',v) du'+C𝒟_ ov^-1+C𝒜_ϕ(𝒟_ o+𝒟_ i)|u|^-2) ≤ 8π M|𝔢|(1+η)|u|^-1∫_-∞^uu'^2M^2|D_uϕ|^2(u',v) du'+C𝒟_ ov^-1+C𝒜_ϕ(𝒟_ o+𝒟_ i)|u|^-2.In order to estimate the |ϕ|^2 integral, we will make use of a Hardy inequality: Let f:→ be in H^1_ loc() and let c≥ 0 be a constant. Then for any x_1,x_2∈ with 0<x_1<x_2<∞:∫_x_1^x_2 (x+c)^pf^2(x) dx≤2 (x_2+c)^p+1f^2(x_2)+4/(p+1)^2∫_x_1^x_2 (x+c)^p+2f'^2(x) dxifp>-1 ∫_x_1^x_2 (x+c)^pf^2(x) dx≤2 (x_1+c)^p+1f^2(x_1)+4/(p+1)^2∫_x_1^x_2 (x+c)^p+2f'^2(x) dxifp<-1.We will only prove the inequality in the case p>-1. The other inequality is similar. We compute0≤∫_x_1^x_2 (x+c)^p (p+12 f+ (x+c)f')^2(x) dx = ∫_x_1^x_2(p+1)^24 (x+c)^p f^2 +p+12 (x+c)^p+1ddx (f^2) +(x+c)^p+2 f'^2(x) dx= -(p+1)^24 ∫_x_1^x_2 (x+c)^p f^2(x)dx + p+12 (x_2+c)^p+1 f^2(x_2)- p+12 (x_1+c)^p+1 f^2(x_1)+ ∫_x_1^x_2 (x+c)^p+2 f'^2(x) dx.Rearranging and dropping a manifestly non-negative term yield the conclusion.Using Lemma <ref>, we prove a Hardy-type estimate on a constant-u hypersurface in our setting.Given η>0 as in (<ref>), there exists C>0 independent of 𝒜_ϕ such that the following holds:∫_v_0^v (|u|/v'+|u|)^2|ϕ|^2(u,v') dv'≤ C𝒟_ o+ 4 ∫_v_0^v v'^2 |D_v ϕ|^2(u,v') dv'+ 3(1+η)∫_-∞^u |u'|^2|D_uϕ|^2(u',|u|) du'.We will apply the Hardy inequality Lemma <ref> on a fixed constant-u hypersurface. For this purpose, it is useful to choose an auxiliary gauge for A (cf. Section <ref> and proof of Lemma <ref>) where |_vϕ|=|D_vϕ| (and we will perform all estimates in terms of the gauge invariant quantities |ϕ| and |D_vϕ|). Let γ>0 be a constant to be chosen. 
If v>|u|, we also partition the integration interval: [v_0,v]=[v_0,γ |u|]∪ [γ |u|,v], so we can estimate:∫_v_0^v (|u|/v'+|u|)^2|ϕ|^2(u,v') dv'=∫_v_0^γ|u|(|u|/v'+|u|)^2|ϕ|^2(u,v') dv'+∫_γ|u|^v (|u|/v'+|u|)^2|ϕ|^2(u,v') dv' ≤∫_v_0^γ|u| |ϕ|^2 dv'+|u|^2∫_γ|u|^v(v'+|u|)^-2|ϕ|^2(u,v') dv' ≤ 4∫_v_0^γ|u|v'^2 |D_v ϕ|^2(u,v') dv'+4|u|^2∫_γ|u|^v |D_v ϕ|^2(u,v') dv'+ 2γ|u||ϕ|^2(u,γ|u|)+2|u|/(γ+1)|ϕ|^2(u,γ|u|) ≤ 4 max{1,γ^-2}∫_v_0^v v'^2 |D_v ϕ|^2(u,v') dv'+(2γ+2γ+1)|u||ϕ|^2(u,γ|u|),where we applied Lemma <ref> with p=0 and p=-2 respectively in the two intervals. By (<ref>), Young's inequality and (<ref>) (to control the initial data), it moreover holds that for any η>0, there exists C_η>0 so that the following is verified:|u||ϕ|^2(u,v)≤ C_η𝒟_ o|u|/v+(1+η)∫_-∞^u |u'|^2|D_uϕ|^2(u',v) du'.Using this to control the last term in (<ref>), and setting γ=1, we obtain the desired conclusion. We have a similar Hardy-type estimate on a constant-v hypersurface, with appropriate modifications of the weights.Let η>0 be as in (<ref>). Then there exists C>0 independent of 𝒜_ϕ such that the following holds:∫^u_0_-∞(v/v+|u'|)^2|ϕ|^2(u',v) du' ≤ C 𝒟_ o + 6(1+η) ∫^u_0_-∞ u'^2 |D_u ϕ|^2(u',v) du'.Choose a gauge so that |D_uϕ|=|_uϕ|. (As in Lemma <ref>, we will only estimate the gauge invariant quantities.)Let σ>0 be a constant that will be chosen suitably later on. We partition the integration interval: [-∞,u_0]=[-∞,σ v]∪ [σ v,u_0]. For the sake of convenience, we will change our integration variable from u to -u and we will denote -u by |u|, so we can estimate:∫_|u_0|^∞(v/v+|u'|)^2|ϕ|^2(u',v) d|u'|=∫_|u_0|^σ v(v/v+|u'|)^2|ϕ|^2(u',v) d|u'|+∫_σ v^∞(v/v+|u'|)^2|ϕ|^2 d|u'| ≤∫_|u_0|^σ v |ϕ|^2(u',v) d|u'|+v^2∫_σ v^∞(v+|u'|)^-2|ϕ|^2(u',v) d|u'| ≤ 4∫_|u_0|^σ v|u'|^2 |D_uϕ|^2(u',v) d|u'|+4v^2∫_σ v^∞|D_u ϕ|^2(u',v) d|u'|+2 σ v|ϕ|^2(σ v,v)+2v^2/σ v+v|ϕ|^2(σ v,v) ≤ 4max{1,σ^-2}∫^u_0_-∞ u'^2 |D_u ϕ|^2(u',v) du'+(2+2/σ(1+σ))σ v|ϕ|^2(σ v,v),where we applied Lemma <ref> with p=0 and p=-2 in the two intervals. We apply (<ref>) to conclude that for any η', σ>0, there exists C_η',σ>0 such that∫_u^u_0(v/v+|u'|)^2|ϕ|^2 du'≤(4max{1,σ^-2}+(1+η')(2+2/σ(1+σ)))∫^u_0_-∞u'^2 |D_u ϕ|^2 du'+C_η',σ𝒟_ o.Finally, choosing η' sufficiently small and σ sufficiently large, we can choose4max{1,σ^-2}+4σ^-2+(1+η')(2+2/σ(1+σ)) ≤ 6(1+η),which then gives the desired conclusion. With Lemmas <ref> and <ref> in place, we are now ready to control the terms in E_v(u) and E_u(v) which are not manifestly non-negative. These are the terms 1/16π∫_v_0^v_∞ u^2M^-2Ω^2(Q^2-M^2)(u,v) dv,1/16π∫_-∞^u_0 v^2M^-2Ω^2(Q^2-M^2)(u,v) duin (<ref>) and (<ref>), which will be handled in Propositions <ref> and <ref> respectively. For η>0 as in (<ref>) and for κ>0 arbitrary, there exists a constant C>0 independent of 𝒜_ϕ (but dependent on κ in addition to M, , ,and η) such that | 1/16 π∫_v_0^v u^2M^-2Ω^2(Q^2-M^2)(u,v') dv'| ≤(κ+4κ^-1) |𝔢|M^3 ∫_v_0^vv'^2|D_vϕ|^2 dv'+(2+3κ^-1)(1+η)|𝔢|M^3 ∫_-∞^u |u'|^2|D_uϕ|^2 du' + C(𝒟_ o+𝒟_ i). The main integration by parts. We begin the argument by a simple integration by parts. | 1/16 π∫_v_0^v u^2M^-2Ω^2(Q^2-M^2)(u,v') dv'| ≤|-1/16 π∫_v_0^v(∫_v_0^v'u^2M^-2Ω^2(u,v') dv”)∂_v(Q^2-M^2)(u,v') dv'|+|1/16 π(∫_v_0^v'u^2M^-2Ω^2(u,v”) dv”)(Q^2-M^2)(u,v')|^v'=v_v'=v_0| ≤1/16 π|∫_v_0^v u^2M^-2Ω^2(u,v') dv'||Q^2-M^2|(u,v) +1/8 π∫_v_0^v |∫_v_0^v' u^2M^-2Ω^2(u,v”) dv”'||Q||∂_vQ|(u,v') dv' .Note that there are two terms that we need to estimate. An auxiliary computation. Before we proceed, we first estimate the integral∫_v_0^v u^2M^-2Ω^2(u,v') dv'which appears in (<ref>). 
From (<ref>) it follows thatM^-2Ω^2≤ 4(v+|u|)^-2+C|u|^-1/2(v+|u|)^-2,and hence,∫_v_0^v u^2M^-2Ω^2(u,v') dv'≤4∫_v_0^v (|u|/v'+|u|)^2 dv'+C|u|^-1/2∫_v_0^v(|u|/v'+|u|)^2 dv'.Note that∂_v(|u|v(v+|u|)^-1)=(|u|/v+|u|)^2.Hence,∫_v_0^v u^2M^-2Ω^2(u,v') dv'≤4|u|v(v+|u|)^-1+C|u|^1/2v(v+|u|)^-1. Estimating the boundary term. We first estimate the boundary term in (<ref>) above by using (<ref>) and (<ref>) 1/16 π|∫_v_0^v u^2M^-2Ω^2(u,v') dv'||Q^2-M^2|(u,v) ≤ 2(1+η)M|𝔢| ∫_-∞^uu^2|D_uϕ|^2(u',v)M^2 du'+C𝒟_ o+C𝒜_ϕ(𝒟_ o+𝒟_ i)|u|^-1+C(𝒟_ o+𝒟_ i)|u|^-1/2. Estimating the remaining integral I: the error term. Now, we estimate the remaining integral on the very right-hand side of (<ref>) by applying (<ref>) and (<ref>):1/8 π∫_v_0^v |∫_v_0^v' u^2M^-2Ω^2(u,v”) dv”||Q||∂_vQ|(u,v') dv'≤2M^3 |𝔢|∫_v_0^v |u|v'(v'+|u|)^-1|ϕ||D_vϕ| dv'_Mainterm+Err,whereErr= C∫_v_0^v |u|v'(v'+|u|)^-1|Q-M ||ϕ||D_vϕ|(u,v') dv'+C∫_v_0^v |M+(Q-M)| |u|^ 12v' (v'+|u|)^-1 |ϕ||D_vϕ|(u,v') dv'+C∫_v_0^v |u|v'(v'+|u|)^-1|r^2-M^2||ϕ||D_vϕ|(u,v') dv'.Let us start with the error term, beginning with the second term (<ref>), which is the hardest since the decay is the weakest. By (<ref>),∫_v_0^v |M+(Q-M)| |u|^ 12 v (v+|u|)^-1 |ϕ||D_vϕ|(u,v') dv' √(𝒟_ o)∫_v_0^v |u|^ 12 v^ 12-α2 (v+|u|)^-1 |D_vϕ|(u,v') dv' + √(𝒜_ϕ(𝒟_ o + 𝒟_ i))∫_v_0^v v (v+|u|)^-1 |D_vϕ|(u,v') dv'√(𝒜_ϕ(𝒟_ o +𝒟_ i)) (v_0+|u|)^- 14(∫_v_0^vv'^- 32 dv')^ 12(∫_v_0^v v'^2 |D_vϕ|^2 (u,v') dv')^ 12 𝒜_ϕ(𝒟_ o +𝒟_ i) (v_0+|u|)^- 14 v_0^- 14. The first term in (<ref>) can be treated similarly, with the only caveat that there is a contribution where |ϕ| and |Q-M| only give v-decay, and therefore one does not get any smallness in (v_0+|u|)^- 14. Nevertheless, this term has a coefficient that depends only on 𝒟_ o (and not on 𝒟_ i). More precisely, using (<ref>) and (<ref>), we have∫_v_0^v |u|v'(v'+|u|)^-1|Q-M ||ϕ||D_vϕ|(u,v') dv' 𝒟_ o^ 32∫_v_0^v v'^- 12|D_vϕ|(u,v') dv' + 𝒜_ϕ^2(𝒟_ o +𝒟_ i)^2 (v_0+|u|)^- 14 v_0^- 14 𝒟_ o^ 32 v_0^- 12(∫_v_0^v v'^2|D_vϕ|^2(u,v') dv')^ 12 + 𝒜_ϕ^2(𝒟_ o +𝒟_ i)^2 (v_0+|u|)^- 14 v_0^- 14 𝒜_ϕ^ 12𝒟_ o^2 v_0^- 12 + 𝒜_ϕ^2(𝒟_ o +𝒟_ i)^2 (v_0+|u|)^- 14 v_0^- 14.Finally, the third term in (<ref>) can be handled in an identical manner as the first term, except for using (<ref>) instead of (<ref>), so that we have∫_v_0^v |u|v'(v'+|u|)^-1|r^2-M^2||ϕ||D_vϕ|(u,v')(u,v') dv' 𝒜_ϕ^ 12𝒟_ o^2 v_0^- 12 + 𝒜_ϕ^2(𝒟_ o +𝒟_ i)^2 (v_0+|u|)^- 14 v_0^- 14.Putting all these together, choosing |u_0| and v_0 appropriately large (where the largeness of v_0 does not depend on 𝒟_ i), and returning to (<ref>), we thus obtainErr≲𝒟_ o+𝒟_ i.Estimating the remaining integral II: the main term. Now for the main term in (<ref>), we have, using Young's inequality,2M^3 |𝔢|∫_v_0^v |u|v'(v'+|u|)^-1|ϕ||D_vϕ| dv' ≤κ |𝔢| M^3 ∫_v_0^vv'^2|D_vϕ|^2 dv'+κ^-1|𝔢| M^3 ∫_v_0^v (|u|/v'+|u|)^2|ϕ|^2 dv' ≤(κ+4κ^-1) |𝔢|M^3∫_v_0^vv'^2|D_vϕ|^2 dv'+3(1+η)κ^-1|𝔢|M^3 ∫_-∞^u |u'|^2|D_uϕ|^2 du' + C_κ (𝒟_ o+𝒟_ i),where in the last line we have used Lemma <ref>.Putting everything together.Combining (<ref>), (<ref>), (<ref>), (<ref>)and (<ref>), we obtain the desired conclusion. We now turn to the analogue of Proposition <ref> on constant-v hypersurfaces.For η>0 as in (<ref>), there exists a constant C>0 independent of 𝒜_ϕ such that|1/16 π∫_-∞^u_0 v^2M^-2Ω^2(Q^2-M^2)(u',v) du' | ≤2(1+√(6))|| M^3(1+η) ∫_-∞^u_0 |u'|^2|D_uϕ|^2(u',v) du' + C(𝒟_ o + 𝒟_ i).We will consider the integral1/16 π∫_u^u_0 v^2M^-2Ω^2(Q^2-M^2)(u',v) du'and take the limit u↓ -∞.The main integration by parts. 
We integrate by parts as in (<ref>)|1/16 π∫_u^u_0 v^2M^-2Ω^2(Q^2-M^2)(u',v) du'| ≤|1/16 π∫_u^u_0(∫_u'^u_0v^2M^-2Ω^2 du”)∂_u(Q^2-M^2)(u',v) du'|+|1/16 π(∫_u'^u_0v^2M^-2Ω^2(u”,v) du”)(Q^2-M^2)(u',v)|^u'=u_0_u'=u| ≤1/16 π∫_u^u_0v^2M^-2Ω^2(u',v) du' ·|Q^2-M^2|(u,v)+1/8 π∫_u^u_0[∫_∞^u' v^2M^-2Ω^2(u”,v) du”] |Q||∂_uQ|(u',v) du' . An auxiliary computation. By (<ref>) and-_u(|u|v(v+|u|)^-1) = (v^2v+|u'|)^2,we obtain∫_u^u_0 v^2M^-2Ω^2(u',v) du' ≤ 4∫_u^u_0(v^2v+|u'|)^2du' + C v^2∫_u^u_01|u'|^ 12(v+|u'|)^2du' ≤ 4|u|v(v+|u|)^-1 + C|u|^ 12 v^2(v+|u|)^-2. Estimating the boundary term. We now control the boundary term in (<ref>). By (<ref>) combined with (<ref>), we obtain 1/16 π|∫_u^u_0 v^2M^-2Ω^2(u',v) du'||Q^2-M^2|(u,v) ≤ 2(1+η)M^3|𝔢| ∫_-∞^u u'^2|D_uϕ|^2(u',v) du'+C𝒟_ o+C𝒜_ϕ(𝒟_ o+𝒟_ i)|u|^-1+C(𝒟_ o+𝒟_ i)|u|^-1/2. Estimating the remaining integral. The remaining integral in (<ref>) can be controlled as follows:1/8 π∫_u^u_0|∫_u'^u_0 v^2M^-2Ω^2(u”,v) du”||Q||∂_uQ|(u',v) du'≤2M^3 |𝔢|∫_u^u_0 |u'|v(v+|u'|)^-1|ϕ||D_uϕ|(u',v) du'_Mainterm+Err,whereErr= C∫^u_0_u |u'|v(v+|u'|)^-1|Q-M ||ϕ||D_uϕ|(u',v) du'+C∫^u_0_u |M+(Q-M)| |u'|^ 12 v (v+|u'|)^-1 |ϕ||D_uϕ|(u',v) du'+C∫^u_0_u |u'|v(v+|u'|)^-1|r^2-M^2||ϕ||D_uϕ|(u',v) du'.The error term can be estimated in essentially the same manner as the error term in the proof of Proposition <ref>, except we use (<ref>) for ∫^u_0_u u'^2|D_uϕ|^2(u',v) du' instead of ∫_v_0^v v'^2|D_vϕ|^2(u,v') dv'. We omit the details and just record the following estimate:Err≲𝒟_ o+𝒟_ i.For the main term in (<ref>), we apply Hölder's inequality, Lemma <ref> and Young's inequality to obtain2M^3 |𝔢|∫_u^u_0 |u'|v(v+|u'|)^-1|ϕ||D_uϕ|(u',v) du' ≤ 2|𝔢|M^3 (∫_u^u_0|u'|^2|D_uϕ|^2(u',v) du')^ 12(∫_u^u_0(v/v+|u'|)^2|ϕ|^2(u',v) du')^ 12 ≤ 2|𝔢|M^3 √(6)(1+η)∫_u^u_0|u'|^2|D_uϕ|^2(u',v) du' + C𝒟_ o. Putting everything together. Putting together (<ref>), (<ref>), (<ref>) and (<ref>), and taking the limit u↓ -∞, we obtain the desired conclusion. We can now prove the main result of this subsection, namely, the coercivity of the renormalised energy flux (up to controllable error terms). We can estimate∫_-∞^u_0 |u|^2M^2 |D_uϕ|^2 du+ ∫_v_0^v_∞ v^2M^2 |D_vϕ|^2 dv≤μ^-1(E_u(v)+E_v(u))+C(𝒟_ o+𝒟_ i),with μ as in (<ref>).Plugging in the estimates in Propositions <ref> and <ref> into (<ref>) and (<ref>) respectively, and using ≥ 0, we deduce thatE_u(v)+E_v(u)≥(1-(4+2√(6)+3κ^-1)(1+η)|𝔢|M ) ∫_-∞^u_0 |u|^2r^2 |D_uϕ|^2 du'+ (1- (κ+4κ^-1)|𝔢|M) ∫_v_0^v_∞ v^2M^2 |D_vϕ|^2 dv'-C(𝒟_ o+𝒟_ i).Now we choose κ=2 + √(6) + √(9 + 4 √(6)) to obtain the conclusion. §.§ Energy estimates for ϕWe defineE_u(v;u):=∫_-∞^u u^2M^2|D_uϕ|^2(u',v)+ 1/4v^2M^2Ω^2(𝔪^2|ϕ|^2+1/4πM^-4(Q^2-M^2))(u',v) du',E_v(u;v):=∫_v_0^v v^2M^2|D_vϕ|^2(u,v')+ 1/4u^2M^2Ω^2(𝔪^2|ϕ|^2+1/4πM^-4(Q^2-M^2))(u,v') dv'.With the above definitions, we have that E_u(v)=E_u(v;u_0) and E_v(u)=E_v(u;v_∞).E_u(v;u) and E_v(u;v) obey the following estimate:sup_v_0≤ v≤ v_∞E_u(v;u)+sup_-∞<u<u_0E_v(u;v)≤ C(𝒟_ o+𝒟_ i).In order to simplify the notation, in this proof, we omit the arguments in the integrals, which will typically be taken as (u',v'). 
For any u∈ (-∞, u_0] and v∈ [v_0,v_∞], we decompose:[E_u(v;u)-E_u(v_0;u)]+[E_v(u;v)-E_v(-∞;v)]=∫_v_0^v ∂_v E_u(v';u) dv'+∫_-∞^u ∂_uE_v(u';v) du'=J_1+J_2+J_3+J_4+J_5+J_6,whereJ_1=M^2∫_v_0^v ∫_-∞^u u'^2∂_v(|D_uϕ|^2) du'dv',J_2=1/4 M^2 𝔪^2∫_v_0^v ∫_-∞^uv'^2 Ω^2 ∂_v(|ϕ|^2)+∂_v(v'^2Ω^2)· |ϕ|^2 du'dv',J_3=1/16π M^-2∫_v_0^v ∫_-∞^u 2v'^2Ω^2Q∂_vQ+∂_v(v'^2Ω^2)(Q^2-M^2) du'dv',J_4=M^2∫_v_0^v ∫_-∞^u v'^2 M^2 ∂_u(|D_vϕ|^2) du'dv',J_5=1/4 M^2 𝔪^2∫_v_0^v ∫_-∞^u u'^2 Ω^2 ∂_u(|ϕ|^2)+∂_u(u'^2Ω^2)· |ϕ|^2 du'dv',J_6=1/16π M^-2∫_v_0^v ∫_-∞^u 2u'^2Ω^2Q∂_uQ+∂_u(u'^2Ω^2)(Q^2-M^2) du'dv'. We first use equations (<ref>) and (<ref>) to rewrite the integral J_1 in terms of expressions that are zeroth- or first-order derivatives of the variables ϕ,Ω,r. For this, we use that we have the following identity for complex-valued functions f:∂_v(|f|^2)=(D_vf-i𝔢 A_vf)f̅+f(D_vf-i𝔢 A_vf)=f̅D_vf+fD_vfand similarly∂_u(|f|^2)=f̅D_uf+fD_uf. We therefore obtain:J_1=∫_v_0^v ∫_-∞^u u'^2 M^2(D_uϕD_vD_uϕ+D_uϕD_vD_u ϕ) du'dv'=-1/2∫_v_0^v ∫_-∞^u 1/2u'^2 M^2𝔪^2Ω^2(ϕD_uϕ+ϕ̅D_uϕ )+2M^2r^-1u'^2∂_ur(D_uϕD_vϕ+D_vϕD_uϕ)+4M^2r^-1u'^2∂_vr|D_uϕ|^2+1/2 i𝔢 M^2r^-2u'^2Ω^2Q(ϕD_uϕ-ϕ̅D_uϕ)du'dv'and similarly,J_4=∫_v_0^v ∫_-∞^u v'^2 M^2(D_vϕD_uD_vϕ+D_vϕD_uD_v ϕ) du'dv'=-1/2∫_v_0^v ∫_-∞^u 1/2v'^2 M^2𝔪^2Ω^2(ϕD_vϕ+ϕ̅D_vϕ )+2M^2r^-1v'^2∂_vr(D_uϕD_vϕ+D_vϕD_uϕ)+4M^2r^-1v'^2∂_ur|D_vϕ|^2-1/2 i𝔢M^2r^-2Ω^2v'^2 Q(ϕD_vϕ-ϕ̅D_vϕ)du'dv'.We also rewrite J_2 and J_5 to obtainJ_2=1/4 M^2 𝔪^2∫_v_0^v ∫_-∞^uv'^2 Ω^2 (ϕ̅D_vϕ+ϕD_vϕ)+∂_v(v'^2Ω^2)· |ϕ|^2 du'dv',J_5=1/4 M^2 𝔪^2∫_v_0^v ∫_-∞^uu'^2 Ω^2 (ϕ̅D_uϕ+ϕD_uϕ)+∂_u(u'^2Ω^2)· |ϕ|^2 du'dv'.Finally, we use equations (<ref>) and (<ref>) to rewrite J_3 and J_6:J_3=∫_v_0^v ∫_-∞^u -1/4i𝔢M^-2r^2v'^2Ω^2Q(ϕD_vϕ-ϕ̅D_vϕ)+1/16π M^-2∂_v(v'^2Ω^2)(Q^2-M^2) du'dv',J_6=∫_v_0^v ∫_-∞^u 1/4i𝔢M^-2r^2u'^2Ω^2Q(ϕD_uϕ-ϕ̅D_uϕ)+1/16π M^-2∂_u(u'^2Ω^2)(Q^2-M^2) du'dv'. 
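Before doing so, let us record the key cancellation in the charge terms, as a consistency check that also explains the factors r^4-M^4 appearing below: the Q-term in J_1 carries the coefficient -1/4 i𝔢 M^2r^-2u'^2Ω^2Q, while the corresponding term in J_6 carries +1/4 i𝔢 M^-2r^2u'^2Ω^2Q, so that their sum equals 1/4 i𝔢 M^-2r^-2(r^4-M^4)u'^2Ω^2Q(ϕD_uϕ-ϕ̅D_uϕ), which vanishes to leading order as r→ M. The analogous cancellation between the Q-terms of J_4 and J_3 produces the factor (M^4-r^4). These observations give rise to the terms F_6 and F_7 in the decomposition that follows.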
By incorporating the cancellations in the terms in J_i, we can write:[E_u(v;u)-E_u(v_0;u)]+[E_v(u;v)-E_v(-∞;v)]=∑_i=1^7F_i,withF_1=-M^2 ∫_v_0^v ∫_-∞^u r^-1(u'^2∂_ur+v'^2∂_vr)(D_uϕD_vϕ+D_vϕD_uϕ) du'dv',F_2=-2M^2∫_v_0^v ∫_-∞^u r^-1∂_vr· u'^2|D_uϕ|^2+r^-1∂_ur· v'^2|D_vϕ|^2 du'dv',F_3=1/4 M^2 𝔪^2∫_v_0^v ∫_-∞^u[∂_v(v'^2Ω_0^2)+∂_u(u'^2Ω_0^2)]|ϕ|^2 du'dv',F_4=1/16πM^-2∫_v_0^v ∫_-∞^u [∂_v(v'^2Ω_0^2)+∂_u(u'^2Ω_0^2)](Q^2-M^2) du'dv' ,F_5=1/4∫_v_0^v ∫_-∞^u [∂_v(v'^2· (Ω^2-Ω_0^2)+∂_u(u'^2· (Ω^2-Ω_0^2))](M^2 𝔪^2|ϕ|^2+1/4πM^-2(Q^2-M^2)) du'dv',F_6=1/4i𝔢 M^-2∫_v_0^v ∫_-∞^u r^-2(r^4-M^4)u'^2QΩ^2(ϕD_uϕ-ϕD_uϕ) du'dv',F_7=1/4i𝔢 M^-2∫_v_0^v ∫_-∞^u r^-2(M^4-r^4)v'^2QΩ^2(ϕD_vϕ-ϕD_vϕ) du'dv'.We estimate using Cauchy–Schwarz inequality, Young's inequality, Proposition <ref> and (<ref>)|F_1|≲∫_v_0^v ∫_-∞^u(u'^2|∂_ur|+v'^2|∂_vr|)|D_uϕ||D_vϕ| du'dv' ≲∫_v_0^v ∫_-∞^u 𝒜_ϕ(𝒟_ o+𝒟_ i)|D_uϕ||D_vϕ| du'dv' ≲∫_v_0^v ∫_-∞^u 𝒜_ϕ(𝒟_ o+𝒟_ i) v'^-2u'^ 32|D_uϕ|^2 du'dv'+∫_v_0^v ∫_-∞^u 𝒜_ϕ(𝒟_ o+𝒟_ i)u'^- 32 v'^2|D_vϕ|^2 du'dv' ≲𝒜_ϕ(𝒟_ o+𝒟_ i)v_0^-1|u_0|^- 12·sup_v_0≤ v≤ v_∞∫_-∞^u u'^2|D_uϕ|^2 du'+ 𝒜_ϕ(𝒟_ o+𝒟_ i)|u_0|^- 12·sup_-∞<u<u_0∫_v_0^v_∞ v'^2|D_vϕ|^2 dv' ≲𝒜_ϕ^2(𝒟_ o+𝒟_ i)^2|u_0|^- 12.We can similarly estimate|F_2|≲∫_v_0^v ∫_-∞^u|∂_vr|· u'^2|D_uϕ|^2 du'dv'+∫_v_0^v ∫_-∞^u|∂_ur|· v'^2|D_vϕ|^2 du'dv' ≲𝒜_ϕ(𝒟_ o+𝒟_ i)(∫_v_0^v ∫_-∞^uv'^-32· u'^32|D_uϕ|^2 du'dv'+∫_v_0^v ∫_-∞^u u'^-2· v'^2|D_vϕ|^2 du'dv')+𝒟_ o∫_v_0^v ∫_-∞^uv'^-2· u'^2|D_uϕ|^2 du'dv' ≲[𝒟_ o v_0^-1 +𝒜_ϕ (𝒟_ o +𝒟_ i)(v_0^-1/2|u_0|^-1/2 +|u_0|^-1)] 𝒜_ϕ(𝒟_ o+𝒟_ i).In order to estimate |F_3|, we use (<ref>) (with a constant depending on β>0) and (<ref>) to get|F_3|≲∫_v_0^v ∫_-∞^u |∂_v(v^2Ω_0^2)+∂_u(u^2Ω_0^2)|· |ϕ|^2 du'dv' ≲∫_v_0^v ∫_-∞^u (v'+|u'|)^-2+β (𝒟_ ov'^-1+𝒜_ϕ(𝒟_ o+ 𝒟_ i)|u'|^-1) du'dv' ≲𝒜_ϕ(𝒟_ o+ 𝒟_ i)(v_0+|u_0|)^-1+2β.Similarly, using (<ref>) and (<ref>),|F_4|≲∫_v_0^v ∫_-∞^u |∂_v(v^2Ω_0^2)+∂_u(u^2Ω_0^2)|·|Q-M||Q+M| du'dv' ≲∫_v_0^v ∫_-∞^u (v'+|u'|)^-2+β (𝒟_ ov'^-1+𝒜_ϕ(𝒟_ o+ 𝒟_ i)|u'|^-1) du'dv' ≲𝒜_ϕ(𝒟_ o+ 𝒟_ i)(v_0+|u_0|)^-1+2β.Before we estimate |F_5|, it is convenient to rewrite the following expression:∂_v(v^2(Ω^2-Ω^2_0))+∂_u(u^2(Ω^2-Ω^2_0))=∂_v(v^2Ω_0^2(Ω^2/Ω_0^2-1))+∂_u(u^2Ω_0^2(Ω^2/Ω_0^2-1))=(∂_v(v^2Ω_0^2)+∂_u(u^2Ω_0^2))(Ω^2/Ω_0^2-1)+v^2Ω_0^2∂_v(Ω^2/Ω_0^2-1)+u^2Ω_0^2∂_u(Ω^2/Ω_0^2-1)=(∂_v(v^2Ω_0^2)+∂_u(u^2Ω_0^2))(Ω^2/Ω_0^2-1)+2v^2Ω^2∂_v(logΩ/Ω_0)+2u^2Ω^2∂_u(logΩ/Ω_0).Therefore,|F_5|≲∫_v_0^v ∫_-∞^u |∂_v(v'^2Ω_0^2)+∂_u(u'^2Ω_0^2)||Ω^2/Ω_0^2-1|·(|ϕ|^2+|Q-M||Q+M|) du'dv'+ ∫_v_0^v ∫_-∞^u v'^2Ω^2|∂_v(logΩ/Ω_0)|·(|ϕ|^2+|Q-M||Q+M|) du'dv'+ ∫_v_0^v ∫_-∞^u u'^2Ω^2|∂_u(logΩ/Ω_0)|·(|ϕ|^2+|Q-M||Q+M|) du'dv'=:F_5,1+F_5,2+F_5,3.Using (<ref>), (<ref>), (<ref>), (<ref>), we can estimate |F_5,1| in the same way as |F_3| and |F_4| to obtain|F_5,1|≲𝒜_ϕ(𝒟_ o+ 𝒟_ i)|u_0|^-1/2 (v_0+|u_0|)^-1+2β.For |F_5,2|, we use (<ref>), (<ref>) and (<ref>) to estimate|F_5,2| ≲∫_v_0^v ∫_-∞^u (𝒟_ ov'^-1+𝒜_ϕ(𝒟_ o+𝒟_ i)|u'|^-1)v'^2Ω^2|∂_v(logΩ/Ω_0)| du'dv' ≲𝒟_ o(sup_u'∈ (-∞,u_0]∫_v_0^v v'^2(∂_v(logΩ/Ω_0))^2(u',v') dv')^ 12∫_-∞^u(∫_v_0^v(v'+|u'|)^-4 dv')^ 12 du'+ 𝒜_ϕ(𝒟_ o+ 𝒟_ i)(sup_u'∈ (-∞,u_0]∫_v_0^v v'^2(∂_v(logΩ/Ω_0))^2(u',v') dv')^ 12∫_-∞^u(∫_v_0^vv'^2|u'|^2(v'+|u'|)^4 dv')^ 12 du' ≲𝒟_ o(v_0+|u_0|)^- 12+𝒜_ϕ(𝒟_ o+ 𝒟_ i)|u_0|^- 12,where in the last line we have evaluated an integral as follows: (We only include this estimate for completeness. 
In what follows, we will bound similar integrals in analogous manner without spelling out the full details.)∫_-∞^u(∫_v_0^vv'^2|u'|^2(v'+|u'|)^4 dv')^ 12 du' ∫_-∞^u(∫_v_0^|u|v'^2|u'|^2(v'+|u'|)^4 dv' + ∫_|u|^vv'^2|u'|^2(v'+|u'|)^4 dv')^ 12 du'∫_-∞^u(∫_v_0^|u|v'^2|u'|^6 dv' + ∫_|u|^v1|u'|^2 v'^2 dv')^ 12 du' ∫_-∞^u |u'|^- 32 du'|u_0|^- 12.For |F_5,3|, we similarly use (<ref>), (<ref>) and (<ref>) to estimate as follows:|F_5,3|≲∫_v_0^v ∫_-∞^u (𝒟_ ov'^-1+𝒜_ϕ(𝒟_ o+ 𝒟_ i)|u'|^-1)u'^2Ω^2|∂_u(logΩ/Ω_0)| du'dv' ≲𝒟_ o(sup_u'∈(-∞,u_0]∫_-∞^u u'^2(∂_u(logΩ/Ω_0))^2(u',v') du')^ 12∫_v_0^v( ∫_-∞^u u'^2/v'^2(v'+|u'|)^4 du')^ 12 dv'+ 𝒜_ϕ(𝒟_ o+𝒟_ i)(sup_u'∈(-∞,u_0]∫_-∞^u u'^2(∂_u(logΩ/Ω_0))^2(u',v') du')^ 12∫_v_0^v( ∫_-∞^u (v'+|u'|)^-4 du')^ 12 dv' ≲𝒟_ o v_0^-1/2+𝒜_ϕ(𝒟_ o+𝒟_ i)(v_0+|u_0|)^-1/2.Thus, combining the estimates for F_5,1, F_5,2 and F_5,3, we obtain|F_5|𝒟_ o v_0^-1/2+𝒜_ϕ(𝒟_ o+𝒟_ i)|u_0|^-1/2.We are left with |F_6| and|F_7|, which are slightly easier because more decay is available. For F_6, we use Cauchy–Schwarz inequality, (<ref>), (<ref>), (<ref>), and (<ref>) to obtain|F_6|≲∫_v_0^v ∫_-∞^u |r-M|u'^2Ω^2|ϕ||D_uϕ| du'dv' ≲∫_v_0^v ∫_-∞^u (𝒟_ ov'^-1+𝒜_ϕ(𝒟_ o+𝒟_ i)|u'|^-1)(√(𝒟_ o)v'^-1/2+√(𝒜_ϕ(𝒟_ o+𝒟_ i))|u'|^-1/2)u'^2Ω^2|D_uϕ| du'dv' ≲𝒟_ o^ 32(sup_v'∈ [v_0,v)∫_-∞^u u'^2|D_uϕ|^2(u',v') du')^ 12∫_v_0^v ( ∫_-∞^u v'^-3 u'^2 (v'+|u'|)^-4 du' )^ 12 dv'+ 𝒜_ϕ^ 32(𝒟_ o+𝒟_ i)^ 32(sup_v'∈ [v_0,v)∫_-∞^u u'^2|D_uϕ|^2(u',v') du')^ 12∫_v_0^v( ∫_-∞^u |u'|^-1(v'+|u'|)^-4 du' )^ 12dv' 𝒜_ϕ^ 12𝒟_ o^ 32(𝒟_ o+𝒟_ i)^ 12v_0^- 12|u_0|^- 12 + 𝒜_ϕ^2(𝒟_ o+𝒟_ i)^2|u_0|^- 12(v_0+|u_0|)^- 12.Similarly, we use Cauchy–Schwarz inequality, (<ref>), (<ref>), (<ref>), and (<ref>) to obtain|F_7|≲∫_v_0^v ∫_-∞^u |r-M|v'^2Ω^2|ϕ||D_vϕ| du'dv' ≲∫_v_0^v ∫_-∞^u (𝒟_ ov'^-1+𝒜_ϕ(𝒟_ o+𝒟_ i)|u'|^-1)(√(𝒟_ o)v'^-1/2+√(𝒜_ϕ(𝒟_ o+𝒟_ i))|u'|^-1/2)v'^2Ω^2|D_vϕ| du'dv' ≲𝒟_ o^ 32(sup_u'∈(-∞,u_0]∫_v_0^v v'^2|D_vϕ|^2(u',v') dv')^ 12∫_-∞^u(∫_v_0^v v'^-1 (v'+|u'|)^-4 dv')^ 12 du'+𝒜_ϕ^ 32(𝒟_ o+𝒟_ i)^ 32(sup_u'∈(-∞,u_0]∫_v_0^v v'^2|D_vϕ|^2(u',v') dv')^ 12∫_-∞^u (∫_v_0^v v'^2 |u'|^-3 (v'+|u'|)^-4 dv')^ 12 du' ≲𝒜_ϕ^ 12𝒟_ o^ 32(𝒟_ o+𝒟_ i)^ 12 v_0^- 12(v_0+|u_0|)^- 12+ 𝒜_ϕ^2(𝒟_ o+𝒟_ i)^2 |u_0|^- 12(v_0+|u_0|)^- 12.Choosing v_0 and |u_0| large in a manner allowed by (<ref>), we obtain|F_1|+… + |F_7|𝒟_ o+𝒟_ i.Finally, noting that the initial data contributions E_u(v_0;u) and E_v(-∞;v) are by definition bounded by 𝒟_ o+𝒟_ i, and returning to (<ref>), we obtainsup_v_0≤ v< v_∞E_u(v;u)+sup_-∞<u<u_0E_v(u;v)𝒟_ o+𝒟_ i,which is to be proved. Combining Propositions <ref> and <ref>, we obtain the following estimate. In particular, this is an improvement over the bootstrap assumption (<ref>) for 𝒜_ϕ sufficiently large depending on M, ,and η. Choosing 𝒜_ϕ sufficiently large (depending on M,and ), the following estimate holds:sup_v∈ [v_0,v_∞)∫_-∞^u_0 u^2 |D_u ϕ |^2(u,v) du+sup_u∈ (-∞,u_0)∫_v_0^v_∞v^2|D_vϕ|^2(u,v) dv≤ C(𝒟_ o+𝒟_ i) ≤A_ϕ2(𝒟_ o+𝒟_ i).At this point we fix 𝒜_ϕ so that Corollary <ref> holds. §.§ Energy estimates for logΩ/Ω_0Finally, we carry out the energy estimates for logΩ/Ω_0. As we noted in the introduction, the essential point is to establish that logΩ/Ω_0 obeys an equation of the form (<ref>) up to lower order terms. More precisely, starting with (<ref>), the estimates that we have obtained so far show that the D_uϕD_vϕ and _u r _v r terms have better decay properties, and that r and Q both decay to M. Therefore, (<ref>) can indeed be thought of as (<ref>).We split the proof of the energy estimates into two parts. 
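As a heuristic guide, consider the model computation for a function ψ solving the linear equation ∂_u∂_vψ = -c M^-2Ω_0^2ψ with a constant c>0 (this is only a caricature of (<ref>); the precise identity is derived in Lemma <ref> below). Multiplying by 2(v^2∂_vψ+u^2∂_uψ) gives ∂_u(v^2(∂_vψ)^2)+∂_v(u^2(∂_uψ)^2) = -cM^-2[∂_v(v^2Ω_0^2ψ^2)+∂_u(u^2Ω_0^2ψ^2)]+cM^-2[∂_v(v^2Ω_0^2)+∂_u(u^2Ω_0^2)]ψ^2, where the last bracket enjoys improved decay (cf. the bound by (v+|u|)^-2+β used repeatedly below). With ψ = log(Ω/Ω_0), the zeroth-order flux density Ω_0^2ψ^2 is comparable, for Ω close to Ω_0, to the quantity Ω^-2(Ω^2-Ω_0^2)^2 featuring in the renormalised energy below.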
First, in Lemma <ref>, we consider an energy inspired by the form (<ref>) and write down the error terms that arise when controlling this energy. Then, in Proposition <ref>, we will then bound all the error terms arising in Lemma <ref> to obtain the desired estimate for logΩ/Ω_0. The following identity holds for any u∈ (-∞,u_0) and v∈ [v_0,v_∞):∫_-∞^uu'^2(∂_v log(Ω/Ω_0))^2+1/8M^-4u'^2Ω^-2(Ω^2-Ω_0^2)^2(u',v) du'+∫_v_0^vv'^2(∂_v log(Ω/Ω_0))^2+1/8M^-4v'^2Ω^-2(Ω^2-Ω_0^2)^2(u,v') dv'=∑_i=1^6O_i,whereO_1=-4π∫_v_0^v ∫_-∞^u (D_uϕD_vϕ+D_uϕD_vϕ)(v'^2∂_v log(Ω/Ω_0)+u'^2∂_u log(Ω/Ω_0)) du'dv',O_2=1/8M^-4∫_v_0^v ∫_-∞^u [∂_v(v'^2Ω_0^2)+∂_u(u'^2Ω_0^2)]Ω_0^2/Ω^2(Ω^2/Ω_0^2-1)^2 du'dv',O_3=-1/4M^-4∫_v_0^v ∫_-∞^u Ω^-2(Ω^2-Ω_0^2)^2(v'^2∂_v log(Ω/Ω_0)+u'^2∂_u log(Ω/Ω_0)) du'dv',O_4=∫_v_0^v ∫_-∞^u Ω_0^2[Q^2(r_0^-4-r^-4)+r_0^-4(M^2-Q^2)-1/2(r_0^-2-r^-2)]·(v'^2∂_v log(Ω/Ω_0)+u'^2∂_u log(Ω/Ω_0)) du'dv',O_5=∫_v_0^v ∫_-∞^u (Ω^2-Ω_0^2)[r^-4(M^2-Q^2)+1/2(r^-2-M^-2)-M^2(r^-4-M^-4)]·(v'^2∂_v log(Ω/Ω_0)+u'^2∂_u log(Ω/Ω_0)) du'dv',O_6=2∫_v_0^v ∫_-∞^u( r^-2∂_ur∂_vr-r_0^-2∂_ur_0∂_vr_0)(v'^2∂_v log(Ω/Ω_0)+u'^2∂_u log(Ω/Ω_0)) du'dv',where, as in Proposition <ref>, we have suppressed the argument (u',v') in the integrand in the O_i terms. By (<ref>) we have that ∂_u∂_v log(Ω/Ω_0)=-2π (D_uϕD_vϕ+D_uϕD_vϕ)+r^-2∂_ur∂_vr-r_0^-2∂_ur_0∂_vr_0-1/2Ω^2r^-4Q^2+1/2Ω_0^2r_0^-4M^2+1/4Ω^2r^-2-1/4Ω_0^2r_0^-2=-2π (D_uϕD_vϕ+D_uϕD_vϕ)+r^-2∂_u(r-r_0)∂_vr+r^-2∂_ur_0∂_v(r-r_0)+∂_ur_0∂_vr_0(r^-2-r_0^-2)-1/2(Ω^2-Ω^2_0)r^-4M^2+1/2(Ω^2-Ω_0^2)r^-4(M^2-Q^2)+1/4(Ω^2-Ω^2_0)M^-2+1/4(Ω^2-Ω^2_0)(r^-2-M^-2)+1/2Ω_0^2Q^2(r_0^-4-r^-4)+1/2Ω_0^2r_0^-4(M^2-Q^2)-1/4Ω_0^2(r_0^-2-r^-2)=-2π (D_uϕD_vϕ+D_uϕD_vϕ)+r^-2∂_u(r-r_0)∂_vr+r^-2∂_ur_0∂_v(r-r_0)+∂_ur_0∂_vr_0(r^-2-r_0^-2)-1/2(Ω^2-Ω_0^2)M^-2-1/2(Ω^2-Ω_0^2)M^2(r^-4-M^-4)+1/2(Ω^2-Ω_0^2)r^-4(M^2-Q^2)+1/4(Ω^2-Ω^2_0)M^-2+1/4(Ω^2-Ω^2_0)(r^-2-M^-2)+1/2Ω_0^2Q^2(r_0^-4-r^-4)+1/2Ω_0^2r_0^-4(M^2-Q^2)-1/4Ω_0^2(r_0^-2-r^-2).Using the above equation, we obtain∂_u(v^2(∂_v log(Ω/Ω_0))^2)=2v^2∂_u∂_v log(Ω/Ω_0)·∂_v log(Ω/Ω_0)=-4π v^2(D_uϕD_vϕ+D_uϕD_vϕ)∂_v log(Ω/Ω_0)+2r^-2∂_u(r-r_0)∂_vrv^2∂_v log(Ω/Ω_0)+2r^-2∂_ur_0∂_v(r-r_0)v^2∂_v log(Ω/Ω_0)+2∂_ur_0∂_vr_0(r^-2-r_0^-2)v^2∂_v log(Ω/Ω_0)-1/2M^-2v^2(Ω^2-Ω_0^2)∂_v log(Ω/Ω_0)-(Ω^2-Ω_0^2)M^2(r^-4-M^-4)v^2∂_v log(Ω/Ω_0)+(Ω^2-Ω_0^2)r^-4(M^2-Q^2)v^2∂_v log(Ω/Ω_0)+1/2(Ω^2-Ω^2_0)(r^-2-M^-2)v^2∂_v log(Ω/Ω_0)+Ω_0^2Q^2(r_0^-4-r^-4)v^2∂_v log(Ω/Ω_0)+Ω_0^2r_0^-4(M^2-Q^2)v^2∂_v log(Ω/Ω_0)-1/2Ω_0^2(r_0^-2-r^-2)v^2∂_v log(Ω/Ω_0).Note that we can write(Ω^2-Ω_0^2)∂_v log(Ω/Ω_0)=1/2(Ω^2-Ω_0^2)∂_v(Ω^2/Ω_0^2-1)Ω_0^2/Ω^2 = 1/4Ω_0^4/Ω^2∂_v((Ω^2/Ω_0^2-1)^2).Hence,-1/2M^-4v^2(Ω^2-Ω_0^2)∂_v log(Ω/Ω_0)=-1/8v^2M^-4Ω_0^4/Ω^2∂_v((Ω^2/Ω_0^2-1)^2)=-∂_v(1/8M^-4v^2Ω^-2(Ω^2-Ω_0^2)^2)+1/8M^-4∂_v(v^2Ω_0^2Ω_0^2/Ω^2)(Ω^2/Ω_0^2-1)^2=-∂_v(1/8M^-4v^2Ω^-2(Ω^2-Ω_0^2)^2)+1/8M^-4∂_v(v^2Ω_0^2)Ω_0^2/Ω^2(Ω^2/Ω_0^2-1)^2-1/4M^-4v^2Ω^-2∂_v log(Ω/Ω_0)(Ω^2-Ω_0^2)^2.We similarly consider ∂_v(u^2(∂_u log(Ω/Ω_0))^2) and use Leibniz rule (with u replacing the role of v). Noting also that by the gauge condition (<ref>), log_0=0 on the initial hypersurfaces, this yields the statement of the Lemma. The following estimate holdssup_v∈ [v_0,v_∞]∫_-∞^u_0 u'^2 (∂_u(logΩ/Ω_0))^2(u',v) du'+sup_u∈(-∞,u_0]∫_v_0^v_∞ v'^2 (∂_v(logΩ/Ω_0))^2(u,v') dv'≤M2.In order to obtain the stated estimates, we need to bound each of the terms in Lemma <ref>. 
The basic idea is to use the bootstrap assumption (<ref>) to control _v log(Ω/Ω_0) and _u log(Ω/Ω_0) and to use the estimates that we have previously obtained to deduce the decay and smallness of these terms.We begin with the estimates for O_1. This turns out to be the most difficult term since we do not have any kind of pointwise estimates for |D_vϕ|, |D_uϕ|, |_v log (Ω/Ω_0)| and |_u log (Ω/Ω_0)|. We bound it as follows using (<ref>):|O_1|≲∫_v_0^v ∫_-∞^u |D_uϕ|·|D_vϕ|·(v'^2|∂_v log(Ω/Ω_0)|+u'^2|∂_u log(Ω/Ω_0)|) du'dv' ≲(∫_v_0^v v'^2sup_u' ∈ (-∞,u]|D_vϕ|^2(u',v') dv')^ 12·(∫_-∞^u u'^2 sup_v'∈ [v_0, v]|D_uϕ|^2(u',v') du')^ 12×( |u_0|^- 12(sup_u' ∈ (-∞,u]∫_v_0^v v'^2|∂_v log(Ω/Ω_0)|^2(u',v') dv')^ 12. .+ v_0^- 12(sup_v' ∈ [v_0,v]∫_-∞^u u'^2|∂_u log(Ω/Ω_0)|^2(u',v') du')^ 12) ≲ (|u_0|^- 12+v_0^- 12)(∫_v_0^v sup_u'∈ (-∞, u][∫_-∞^u'v'^2∂_u(|D_vϕ|^2)(u”,v') du”] dv')^ 12 ×(∫_-∞^usup_v'∈ [v_0, v][∫_v_0^v'|u'|^2∂_v(|D_uϕ|^2)(u',v”) dv”] du')^ 12.We first consider the last factor in (<ref>). From the computation for J_1 in the proof of Proposition <ref> (which used (<ref>) and (<ref>)), it follows that∫_-∞^u sup_v'∈ [v_0, v][∫_v_0^v'|u'|^ 32∂_v(|D_uϕ|^2) dv”] du' ≲∫_v_0^v ∫_-∞^uu'^ 32Ω^2|ϕ||D_uϕ|+u'^ 32|∂_ur||D_uϕ|D_vϕ|+u'^ 32|∂_vr||D_uϕ|^2 du'dv'=:O_1,1 + O_1,2 + O_1,3. The terms O_1,2 and O_1,3 have already been controlled in the proof of Proposition <ref>. More precisely, estimating as the terms F_1 and F_2 in the proof of Proposition <ref>, and noting that O_1,2 and O_1,3 have an additional u'^- 12 weight compared to F_1 and F_2, we haveO_1,2+O_1,3𝒜_ϕ^2(𝒟_ o+𝒟_ i)^2|u_0|^-1+ 𝒜_ϕ𝒟_ o (𝒟_ o+𝒟_ i)v_0^-1|u_0|^- 12𝒜_ϕ^2(𝒟_ o+𝒟_ i)^2|u_0|^- 12.It thus remains to bound O_1,1, which has no analogue in Proposition <ref>. To control this term, we use (<ref>), (<ref>), the Cauchy–Schwarz inequality and the bootstrap assumption (<ref>) to obtainO_1,1√(𝒜_ϕ(𝒟_ o+𝒟_ i))∫_v_0^v (∫_-∞^uu'^2 |D_uϕ|^2 du')^ 12(∫_-∞^u (v'+|u'|)^-4du')^ 12 dv' + √(𝒟_ o)∫_v_0^v (∫_-∞^uu'^2 |D_uϕ|^2 du')^ 12(∫_-∞^u|u'|v'^1+ (v'+|u'|)^-4du')^ 12 dv' √(𝒜_ϕ(𝒟_ o+𝒟_ i))(sup_v'∈ [v_0, v]∫_-∞^uu'^2 |D_uϕ|^2(u',v') du')^ 12( ∫_v_0^v (v'+|u|)^- 32 dv'+ ∫_v_0^v v'^- 12-2(v'+|u|)^-1 dv') 𝒜_ϕ(𝒟_ o+𝒟_ i) (v_0+|u_0|)^- 12.Combining all these and plugging back into (<ref>), we obtain∫_-∞^usup_v'∈ [v_0, v][∫_v_0^v'|u'|^ 32∂_v(|D_uϕ|^2) dv”] du'𝒜_ϕ^2(𝒟_ o+𝒟_ i)^2|u_0|^- 12 + 𝒜_ϕ(𝒟_ o+𝒟_ i)(v_0+|u_0|)^- 12 𝒜_ϕ^2(𝒟_ o+𝒟_ i)^2|u_0|^- 12. For the other factor in (<ref>), we estimate similarly by∫_v_0^vsup_-∞≤ u'≤ u[∫_-∞^u'v'^ 32∂_u(|D_vϕ|^2) du”] dv' ≲∫_v_0^v ∫_-∞^uv'^2 Ω^2|ϕ||D_vϕ|+v'^2|∂_vr||D_uϕ|D_vϕ|+v'^2|∂_ur||D_vϕ|^2 du'dv'=:O_1,4+O_1,5+O_1,6.The terms O_1,5 and O_1,6, just as O_1,2 and O_1,3, can be bounded above by 𝒜_ϕ^2(𝒟_ o+𝒟_ i)^2|u_0|^- 12 as the terms F_1 and F_2 in the proof of Proposition <ref>. For the term O_1,4, we have, using (<ref>), (<ref>), the Cauchy–Schwarz inequality and the bootstrap assumption (<ref>), ∫_v_0^v ∫_-∞^uv'^ 32Ω^2|ϕ||D_vϕ| du'dv'√(𝒜_ϕ(𝒟_ o+𝒟_ i))∫_-∞^u (∫_v_0^v v'^2 |D_vϕ|^2 dv')^ 12(∫_v_0^v |u'|^-1 v' (v'+|u'|)^-4 dv')^ 12 du' + √(𝒟_ o)∫_-∞^u (∫_v_0^v v'^2 |D_vϕ|^2dv')^ 12(∫_v_0^v v'^-2 (v'+|u'|)^-4 dv')^ 12 du'√(𝒜_ϕ(𝒟_ o+𝒟_ i))(sup_u'∈ (-∞, u](∫_v_0^v v'^2 |D_vϕ|^2(u',v') dv')^ 12)(∫_-∞^u |u'|^- 12 (v_0+|u'|)^-1 du' + ∫_-∞^u (v_0+|u'|)^- 32du')𝒜_ϕ(𝒟_ o+𝒟_ i)|u_0|^- 12.Combining, we obtain∫_v_0^vsup_-∞≤ u'≤ u[∫_-∞^u'v'^ 32∂_u(|D_vϕ|^2) du”] dv'𝒜_ϕ^2(𝒟_ o+𝒟_ i)^2|u_0|^- 12. Combining (<ref>), (<ref>) and (<ref>), we can therefore conclude that|O_1|≲𝒜_ϕ^2(𝒟_ o+𝒟_ i)^2|u_0|^- 12(|u_0|^- 12 + v_0^- 12). 
We estimate |O_2| by applying (<ref>) and (<ref>). (Here, as before, the implicit constant may depend on β for β>0.)|O_2|≲∫_v_0^v ∫_-∞^u | ∂_v(v'^2Ω_0^2)+∂_u(u'^2Ω_0^2)|Ω_0^2/Ω^2(Ω^2/Ω_0^2-1)^2 du'dv' ≲∫_v_0^v ∫_-∞^u (v'+|u'|)^-2+β(Ω^2/Ω_0^2-1)^2 du'dv' ≲∫_v_0^v ∫_-∞^u(v'+|u'|)^-2+β |u'|^-1 du'dv' ≲ v_0^-1+β+|u_0|^-1+β. It turns out that the remaining terms have a similar structure and is convenient to bound them in the same way. The following are the three basic estimates. First, using Cauchy–Schwarz inequality and (<ref>), we have∫_v_0^v ∫_-∞^u (v'+|u'|)^-2|u'|^-1(v'^2|∂_v log(Ω/Ω_0)|+u'^2|∂_u log(Ω/Ω_0)|) du'dv'(∫_v_0^v ∫_-∞^u (v'+|u'|)^-4|u'|^- 12v'^2 du'dv' )^ 12(∫_v_0^v ∫_-∞^u |u'|^- 32 v'^2|∂_v log(Ω/Ω_0)|^2du'dv' )^ 12+(∫_v_0^v ∫_-∞^u (v'+|u'|)^-4 v'^ 32 du'dv' )^ 12(∫_v_0^v ∫_-∞^u v'^- 32 u'^2|∂_u log(Ω/Ω_0)|^2 du'dv' )^ 12 ((|u_0|^- 14 +v_0^- 14)|u_0|^- 14 + (v_0+|u_0|)^- 14 v_0^- 14)|u_0|^- 14.Again, using Cauchy–Schwarz inequality and (<ref>), we have∫_v_0^v ∫_-∞^u (v'+|u'|)^-2v'^-1(v'^2|∂_v log(Ω/Ω_0)|+u'^2|∂_u log(Ω/Ω_0)|) du'dv'(∫_v_0^v ∫_-∞^u (v'+|u'|)^-4|u'|^ 32 du'dv' )^ 12(∫_v_0^v ∫_-∞^u |u'|^- 32 v'^2|∂_v log(Ω/Ω_0)|^2du'dv' )^ 12+(∫_v_0^v ∫_-∞^u (v'+|u'|)^-4 v'^- 12|u'|^2 du'dv' )^ 12(∫_v_0^v ∫_-∞^u v'^- 32 u'^2|∂_u log(Ω/Ω_0)|^2 du'dv' )^ 12((v_0+|u_0|)^- 14|u_0|^- 14 + (v_0^- 14 + |u_0|^- 14) v_0^- 14)v_0^- 14.Thirdly, we have a another slight variant of the above estimates, for which we again use Cauchy–Schwarz inequality and (<ref>):∫_v_0^v ∫_-∞^u v'^-2|u'|^-2(v'^2|∂_v log(Ω/Ω_0)|+u'^2|∂_u log(Ω/Ω_0)|) du'dv'(∫_v_0^v ∫_-∞^u v'^-2|u'|^- 52 du'dv' )^ 12(∫_v_0^v ∫_-∞^u |u'|^- 32 v'^2|∂_v log(Ω/Ω_0)|^2du'dv' )^ 12+(∫_v_0^v ∫_-∞^u v'^- 52|u'|^-2 du'dv' )^ 12(∫_v_0^v ∫_-∞^u v'^- 32 u'^2|∂_u log(Ω/Ω_0)|^2 du'dv' )^ 12(v_0^- 12|u_0|^-1 + v_0^-1|u_0|^- 12)|u_0|^- 14. Using these basic estimates, we now estimate |O_3|,…,|O_6|. Using (<ref>) and (<ref>) to bound ^-2(^2-_0^2)^2, we bound O_3 via (<ref>)|O_3|≲∫_v_0^v ∫_-∞^u Ω^-2(Ω^2-Ω_0^2)^2(v'^2|∂_v log(Ω/Ω_0)|+u'^2|∂_u log(Ω/Ω_0)|) du'dv' ≲∫_v_0^v ∫_-∞^u (v'+|u'|)^-2|u'|^-1(v'^2|∂_v log(Ω/Ω_0)|+u'^2|∂_u log(Ω/Ω_0)|) du'dv'≲ |u_0|^- 14.Similarly, using (<ref>), (<ref>), (<ref>), (<ref>) and (<ref>) to obtain the following estimate for |O_4|:|O_4|≲∫_v_0^v ∫_-∞^u Ω_0^2(|r-M|+|r-r_0|+|Q-M|)(v'^2|∂_v log(Ω/Ω_0)|+u'^2|∂_u log(Ω/Ω_0)|) du'dv' ≲∫_v_0^v ∫_-∞^u (v'+|u'|)^-2(𝒜_ϕ(𝒟_ o+𝒟_ i)|u'|^-1+𝒟_ov'^-1+(v'+|u'|)^-1)·(v'^2|∂_v log(Ω/Ω_0)|+u'^2|∂_u log(Ω/Ω_0)|) du'dv' ≲ (𝒜_ϕ(𝒟_ o+𝒟_ i)+1)|u_0|^- 14 + 𝒟_o v_0^- 14.By Lemma <ref> and (<ref>), |Ω^2-Ω_0^2|(u',v') (v'+|u'|)^-2. Hence |O_5| can be controlled in a similar manner as O_4 as follows:|O_5|≲∫_v_0^v ∫_-∞^u |Ω^2-Ω_0^2|(|r-M|+|r-r_0|+|Q-M|)(v'^2|∂_v log(Ω/Ω_0)|+u'^2|∂_u log(Ω/Ω_0)|) du'dv' ≲∫_v_0^v ∫_-∞^u (v'+|u'|)^-2(𝒜_ϕ(𝒟_ o+𝒟_ i)|u'|^-1+𝒟_ov'^-1+(v'+|u'|)^-1)·(v'^2|∂_v log(Ω/Ω_0)|+u'^2|∂_u log(Ω/Ω_0)|) du'dv' ≲(𝒜_ϕ(𝒟_ o+𝒟_ i)+1)|u_0|^- 14 + 𝒟_o v_0^- 14.Finally, we estimate |O_6|. For this we use Lemma <ref>, (<ref>), (<ref>) and (<ref>) to obtain|O_6|≲∫_v_0^v ∫_-∞^u(|∂_vr||∂_ur|+|∂_vr_0||∂_ur_0|)(v'^2|∂_v log(Ω/Ω_0)|+u'^2|∂_u log(Ω/Ω_0)|) du'dv' ≲∫_v_0^v ∫_-∞^u [(v'+|u'|)^-4+𝒜_ϕ^2(𝒟_ o+𝒟_ i)^2|u'|^-2v'^-2]·(v'^2|∂_v log(Ω/Ω_0)|+u'^2|∂_u log(Ω/Ω_0)|) du'dv' ≲𝒜_ϕ^2(𝒟_ o+𝒟_ i)^2|u_0|^- 14. Hence, choosing v_0 and |u_0| large in a manner allowed by (<ref>), we obtainsup_v∈ [v_0,v_∞]∫_-∞^u_0 u^2 (∂_u(logΩ/Ω_0))^2(u,v) du+sup_u∈ (-∞,u_0]∫_v_0^v_∞ v^2 (∂_v(logΩ/Ω_0))^2(u,v) dv≤M2,which is to be proved. 
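Let us summarize schematically what has been achieved in this section (the precise statements are Corollary <ref> and Proposition <ref>): under the bootstrap assumptions, the energy estimates return sup_v∈ [v_0,v_∞)∫_-∞^u_0 u^2|D_uϕ|^2 du + sup_u∈ (-∞,u_0)∫_v_0^v_∞ v^2|D_vϕ|^2 dv ≤ 𝒜_ϕ/2(𝒟_o+𝒟_i), together with the analogous bound for logΩ/Ω_0; that is, the bootstrap bounds are recovered with a gain of a factor 1/2. It is precisely this quantitative improvement that allows the continuity argument of the next section to close.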
§ STABILITY OF THE CAUCHY HORIZON OF EXTREMAL REISSNER–NORDSTRÖM We now conclude the bootstrap argument and show that the solution exists and remains regular for all v≥ v_0 and that certain estimates hold. More precisely, we have There exists a smooth solution (ϕ,r,Ω,A) to (<ref>)–(<ref>) in the rectangle (cf. Figure <ref>) D_u_0,v_0={(u,v) : -∞≤ u≤ u_0, v_0≤ v< ∞} with the prescribed initial data. Moreover, all the estimates in Sections <ref> and <ref> hold in D_u_0,v_0. For every v∈ [v_0,∞), consider the following conditions: * A smooth solution (ϕ,r,Ω,A) to (<ref>)–(<ref>) exists in the rectangle D_u_0,[v_0,v)={(u,v') : -∞≤ u≤ u_0, v_0≤ v'< v} with the prescribed initial data.* The estimates (<ref>), (<ref>) and (<ref>) hold in D_u_0,[v_0,v). Consider the set ℑ⊂ [v_0,∞) defined by ℑ := { v∈ [v_0,∞) : the two conditions above hold}. We will show that ℑ is non-empty, closed and open, which implies that ℑ = [v_0,∞). Standard local existence implies that ℑ is non-empty. Closedness of ℑ follows immediately from the definition of ℑ. The most difficult property to verify is the openness of ℑ. For this, suppose v∈ℑ. We then argue as follows: * Under the bootstrap assumptions (<ref>), (<ref>) and (<ref>), all the estimates in Sections <ref> and <ref> hold in D_u_0,[v_0,v). A standard propagation of regularity result shows that the solution can be extended smoothly up to D_u_0,[v_0,v]={(u,v') : -∞≤ u≤ u_0, v_0≤ v'≤v}. Hence, one can apply a local existence result for the characteristic initial value problem to show that there exists ε>0 such that a smooth solution (ϕ,r,Ω,A) to (<ref>)–(<ref>) exists in D_u_0,[v_0,v+ε).* The estimates in (<ref>), Corollary <ref> and Proposition <ref> improve those in (<ref>), (<ref>) and (<ref>). Hence, by continuity, after choosing ε>0 smaller if necessary, (<ref>), (<ref>) and (<ref>) hold in D_u_0,[v_0,v+ε). Combining the two points above, we deduce that after choosing ε>0 smaller if necessary, (v-ε,v+ε)⊂ℑ. This proves the openness of ℑ. By connectedness of [v_0,∞), we deduce that ℑ = [v_0,∞). This implies the existence of a smooth solution in D_u_0,v_0. Moreover, this implies that the assumptions (<ref>), (<ref>) and (<ref>) that are used in Sections <ref> and <ref> in fact hold throughout D_u_0,v_0. Therefore, indeed all the estimates in Sections <ref> and <ref> hold in D_u_0,v_0. We have therefore shown the existence of a solution in the whole region D_u_0,v_0. Since we have now closed our bootstrap argument, in the remainder of the paper, we will suppress any dependence on 𝒜_ϕ (which in turn depends only on M, 𝔢 and 𝔪). In the remainder of this section, we show that one can attach a Cauchy horizon to the solution and prove regularity of the solution up to the Cauchy horizon. More precisely, define V to be a function of v in exactly the same manner as in Section <ref>, i.e. dV/dv = Ω_0^2(1,v), V(∞) = 0. We will also use the convention that V_0:=V(v_0). Define moreover (as in Section <ref>) the Cauchy horizon 𝒞ℋ^+ as the boundary {V=0} in the (u,V,θ,φ) coordinate system. Note that this induces a natural differential structure on D_u_0,v_0∪𝒞ℋ^+. In the new coordinate system, in order to distinguish the “new Ω”, we follow the convention in Section <ref> and denote g(∂_u,∂_V) = -1/2 Ω̃^2 instead. We show that the solution (ϕ,r,Ω,A) (after choosing an appropriate gauge for A) extends to the Cauchy horizon continuously, and that in fact the derivatives are in L^2_loc up to the Cauchy horizon.
(In fact, as we will show in Section <ref>, there are non-unique extensions as spherically symmetric solutions to (<ref>) beyond the Cauchy horizon.)We begin by restating some of the estimates we have obtained in this new coordinate system.In the (u,V) coordinate system, ϕ, r andsatisfy the following estimates:1|u|^2^2(u,V) 1, |_V r(u,V)|𝒟_ o+𝒟_ i, ∫_V_0^0 |_Vϕ|^2(u,V')dV' + ∫_-∞^u_0 u^2|_uϕ|^2(u',V)du' 𝒟_ o+𝒟_ i, ∫_V_0^0 |_Vlog|^2(u,V')dV' + ∫_-∞^u_0 u^2|_ulog|^2(u',V)du'1, ∫_V_0^0 |_V r|^2(u,V')dV' + ∫_-∞^u_0 u^2|_ur|^2(u',V)du' 𝒟_ o+𝒟_ i.Proof of estimates forin (<ref>). By (<ref>) and (<ref>),|^2(u,V)-4M^2/(v(V)+|u|)^2Ω^-2_0(-1,v(V))| = Ω^-2_0(-1,v(V)) | ^2(u,v(V))-4M^2/(v+|u|)^2||u|^- 12(v(V)+1v(V)+|u|)^2,which implies both the upper and lower bounds forin (<ref>).Proof of estimate for _V r in (<ref>). The estimate for _V r in (<ref>) follows from (<ref>) and (<ref>).Proof of (<ref>) and (<ref>). These follow from Corollary <ref>, Proposition <ref>, (<ref>) and (<ref>).Proof of (<ref>). Finally, (<ref>) can be obtained by directly integrating the pointwise estimates in (<ref>) (for _u r) and (<ref>) (for _V r). We also have the following W^1,2 estimate for the charge Q.In the (u,V) coordinate system, Q satisfies the following estimate:∫_V_0^0 |_V Q|^2(u,V')dV' + ∫_-∞^u_0 u^2|_u Q|^2(u',V)du' 𝒟_ o + 𝒟_ i.Estimate for _V Q. By (<ref>) (adapted to the (u,V) coordinate system), _V Q = -2π i r^2 (ϕD_Vϕ - ϕ D_Vϕ). Therefore, using (<ref>), (<ref>) and (<ref>),∫_V_0^0 |_V Q|^2(u,V')dV' ∫_V_0^0 |D_V ϕ|^2(u,V')dV' 𝒟_ o + 𝒟_ i. Estimate for _u Q. By (<ref>), we have ∂_uQ = 2π i r^2𝔢 (ϕD_uϕ-ϕD_uϕ). The desired estimate hence follows similarly as above using (<ref>), (<ref>) and (<ref>).In order to consider the extension, we will also need to choose a gauge for A_μ. We will fix A such that To see that this is an acceptable gauge choice, simply notice that given any Ã_u, Ã_V, we can defineχ(u,V) = ∫_u_0^u Ã_u(u,V) du'+ ∫_V_0^V Ã_V(u_0,V')dV',where V_0=V(v_0). This impliesA_u(u,V) = Ã_u(u,V) - (_uχ)(u,V) = 0,∀ u,∀ V, A_V(u_0,V)=Ã_V(u,V)-(_vχ)(u_0,V)=0, ∀ V.Now in the gauge (<ref>), we have the following estimates:Suppose A satisfies the gauge condition above. Then A_V, _u A_V and _V A_V obey the following estimatessup_u∈ (-∞,u_0], V∈ [V_0,0) |A_V(u,V)| (u_0-u),sup_V∈ [V_0,0)∫_u^u_0 |_uA_V|^2(u',V)du' (u_0-u), sup_u∈ (-∞,u_0]∫_V_0^0 |_V A_V|^2(u,V')dV'(𝒟_ o + 𝒟_ i)(u_0-u).Pointwise estimate for A_V. By (<ref>) (adapted to the (u,V) coordinate system),_u A_V =12 ^2 Qr^2.Using (<ref>), (<ref>) and (<ref>), and the fact that A_V(u_0,V)=0, we obtain that for any u≤ u_0,|A_V(u,V)|∫_u^u_0du' = (u_0-u). L^2 estimate for _u A_V. To obtain the desired L^2_u estimate for _uA_V, we simply use the fact that the RHS of (<ref>) is bounded (as shown above using (<ref>), (<ref>) and (<ref>)) and integrate it up in u.L^2 estimate for _V A_V. To estimate _V A_V, we differentiate (<ref>) in V to obtain_u _V A_V =12 _V (^2 Qr^2).Using the pointwise bounds in (<ref>), (<ref>) and (<ref>), and the L^2_V estimates in (<ref>), (<ref>) and (<ref>), we obtainsup_u∈ (-∞,u_0]∫_V_0^0|_V (^2 Qr^2)|^2(u,V')dV' 𝒟_ o + 𝒟_ i.Now since _V A_V(u_0,V)=0 for all V, we have, for any u≤ u_0,∫_V_0^0 |_V A_V|^2(u,V')dV' ≤∫_u^u_0 |_u_V A_V|^2(u',V')dV'du' (𝒟_ o + 𝒟_ i)(u_0-u). Let V be as in (<ref>) and A satisfy the gauge condition (<ref>). 
Then in the (u,V) coordinate system,* ϕ, r, , A_V and Q (as functions of (u,V)∈ (-∞,u_0]× [V_0,0)) can be continuously extended to the Cauchy horizon {V=0}.* The extensions of ϕ, r, , A_V and Q (as functions of (u,V)∈ (-∞,u_0]× [V_0,0]) are all in C^0, 12∩ W^1,2_loc.* The Hawking mass m (as a function of (u,V)∈ (-∞,u_0]× [V_0,0)) can be continuously extended to the Cauchy horizon {V=0}.Continuous extendibility and Hölder estimates. Let us first consider in detail the estimates for ϕ. As we will explain, the estimates for r, , A_V and Q are similar. Consider two points (u',V') and (u”,V”). Denote v'=v(V'), v”=v(V”), where v is the inverse function of v↦ V above. Then we have, using the fundamental theorem of calculus and the Cauchy–Schwarz inequality,|ϕ(u',V')-ϕ(u”,V”)| ≤|∫_u'^u” |_uϕ|(u”',V')du”'| + | ∫_V'^V” |_Vϕ|(u”,V”')dV”'| ≤|∫_u'^u” |D_uϕ|(u”',V')du”'| + | ∫_V'^V” |D_Vϕ|(u”,V”')+ |A_V||ϕ|(u”,V”')dV”'| |u'-u”|^ 12(∫_u'^u” |D_uϕ|^2(u”',v')du”')^ 12 + |V'-V”|^ 12( ∫_v'^v” (v”'+1)^2|D_vϕ|^2(u”,v”')dv”')^ 12+ (𝒟_ o^ 12+ 𝒟_ i^ 12)(u_0-u”)|V'-V”|(𝒟_ o + 𝒟_ i) (|u'-u”|^ 12+ |V'-V”|^ 12) + (𝒟_ o^ 12+ 𝒟_ i^ 12)|V'-V”|,where in the last two lines we have used (<ref>), (<ref>) and Lemma <ref>.In a similar manner, using Lemmas <ref>, <ref> and <ref> instead, r, , A_V and Q can be estimated as follows[In fact, the estimates for r,A_V and Q are simpler as we do not need to handle the difference between _V and D_V.]: (To simplify the exposition, we suppress the discussion on the explicit dependence of the constant on 𝒟_ o and 𝒟_ i.)|r(u',V')-r(u”,V”)|+|(u',V')-(u”,V”)| +|A_V(u',V')-A_V(u”,V”)|+|Q(u',V')-Q(u”,V”)| _𝒟_ o,𝒟_ i (|u'-u”|^ 12+ |V'-V”|^ 12).Define the extension of (ϕ,r,,A_V,Q) byϕ(u,V=0) := lim_V→ 0ϕ(u,V), r(u,V=0) := lim_V→ 0r(u,V), (u,V=0) := lim_V→ 0(u,V), A_V(u,V=0) := lim_V→ 0A_V(u,V), Q(u,V=0) := lim_V→ 0Q(u,V).The estimates in (<ref>) and (<ref>) above show that the extensions are well-defined and that the extensions of (ϕ,r,,A_V,Q) is indeed C^0, 12.W^1,2_loc estimates. Now that we have constructed an extension of (ϕ,r,,A_V,Q) to D_u_0,v_0∪, it follows immediately from Lemmas <ref>, <ref> and <ref> that the extension is in W^1,2_loc.C^0 extendibility of the Hawking mass. Finally, we prove the C^0 extendibility of the Hawking mass (whose definition we recall from (<ref>)). By (<ref>) and (<ref>) (appropriately adapted in the (u,V) coordinate system), we have_u m = -8πr^2(_V r)^2|D_uϕ|^2+2(_u r)^2 π r^2|ϕ|^2+ 12 (_u r)Q^2r^2, _V m = -8πr^2(_u r)^2|D_Vϕ|^2+2(_V r)^2 π r^2|ϕ|^2+ 12 (_V r)Q^2r^2.It now follows from (<ref>), (<ref>), (<ref>) and Lemma <ref> that the RHS of (<ref>) is bounded in L^1_u and the RHS of (<ref>) is bounded in L^1_V. This implies the following L^1 estimates∫_-∞^u_0 |_u m|(u',V)du' + ∫_V_0^0 |_V m|(u,V')dV'_𝒟_ o, 𝒟_ i 1.On the other hand, by the fundamental theorem of calculus,|m(u',V')-m(u”,V”)| ≤|∫_u'^u” |_um|(u”',V')du”'| + | ∫_V'^V” |_Vm|(u”,V”')dV”'|.Combining (<ref>) and (<ref>), we see that* m can be extended tobym(u,0)= lim_V→ 0 m(u,V),and that* the extension is continuous up to ,which concludes the proof of the proposition. (Let us finally note that since we only have L^1 (as opposed to L^2) estimates for _u m and _V m, we only show that m is continuous, but do not obtain any Hölder estimates.)In Proposition <ref>, we proved that the extensions of ϕ, r, , A_V and Q are C^0, 12∩ W^1,2_loc on the (1+1)-dimensional quotient manifold 𝒬 (cf. notations in Section <ref>). 
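To pass from the quotient 𝒬 to the spacetime ℳ, we recall the standard form of a spherically symmetric metric in double null coordinates (recorded here only as a bookkeeping device, with Ω̃ denoting the lapse in the (u,V) coordinates, normalised by g(∂_u,∂_V)=-1/2 Ω̃^2 as in the previous section): g = -1/2 Ω̃^2 (du⊗ dV + dV⊗ du) + r^2(dθ^2+sin^2θ dφ^2), so that the metric components on ℳ = 𝒬×𝕊^2 are explicit expressions in Ω̃ and r alone.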
It easily follows that these functions, when considered as functions on ℳ = 𝒬×𝕊^2, are also in C^0,1/2∩ W^1,2_loc. As a consequence, in the coordinate system (u,V,θ,φ), the spacetime metric, the scalar field and the electromagnetic potential all extend to the Cauchy horizon in a manner that is in the (3+1)-dimensional spacetime norm C^0,1/2∩ W^1,2_loc. § CONSTRUCTING EXTENSIONS BEYOND THE CAUCHY HORIZON In this section, we prove that the solution can be extended locally beyond the Cauchy horizon as a spherically symmetric W^1,2 solution to (<ref>) (in a non-unique manner). Together with Propositions <ref> and <ref>, this completes the proof of Theorem <ref>. The idea behind the construction of the extension is that the system (<ref>) is locally well-posed in spherical symmetry for data such that ∂_Vϕ, ∂_V r and ∂_VlogΩ are merely in L^2 (when r and Ω are bounded away from 0). This follows from the well-known fact that (1+1)-dimensional wave equations are locally well-posed with W^1,2 data. Related results in the context of general relativity can be found throughout the literature; see for instance <cit.>. For completeness, we give a proof in our specific setting. The section is organized as follows. We first discuss a general local well-posedness result for (1+1)-dimensional wave equations (cf. Definition <ref> and Proposition <ref>). We then apply the wave equation result in our setting to construct extensions to our spacetime solutions by solving appropriate characteristic initial value problems. In particular, since we will be able to prescribe data for the construction of the extensions, there are (infinitely many) non-unique extensions. We begin by considering a general class of (1+1)-dimensional wave equations and introduce the following notion of solution, which makes sense when the derivative of Ψ is only in L^2 in one of the null directions. Let k∈ℕ. Consider a wave equation[For k>1, this should be thought of as a system of wave equations.] for Ψ:[0,ε)× [0,ε) →𝒱 (where ε>0 and 𝒱⊂ℝ^k is an open subset) of the form ∂_u ∂_v Ψ_A = f_A(Ψ)+ N_A^BC(Ψ)∂_uΨ_B ∂_vΨ_C + K_A^BC(Ψ) ∂_u Ψ_B ∂_u Ψ_C + L_A^B(Ψ)∂_uΨ_B + R_A^B(Ψ)∂_vΨ_B, where Ψ_A denotes the components of Ψ, f_A, N_A^BC, K_A^BC, L_A^B, R_A^B:𝒱→ℝ are smooth, and we sum over all repeated capital Latin indices. We say that a continuous function Ψ:[0,ε)× [0,ε)→𝒱 satisfying ∂_vΨ∈ L^2_v(C^0_u) and ∂_uΨ∈ C^0_u C^0_v is a solution in the integrated sense if (∂_v Ψ_A)(u,v) = (∂_v Ψ_A)(0,v) + ∫_0^u (RHS of (<ref>))(u',v) du', and (∂_u Ψ_A)(u,v) = (∂_u Ψ_A)(u,0) + ∫_0^v (RHS of (<ref>))(u,v') dv'. Given a solution Ψ in the sense of Definition <ref>, it is also a weak solution in the following sense: for any χ∈ C_c^∞, ∬ (∂_u χ)(u,v)(∂_vΨ)(u,v) du dv = -∬χ(u,v)(RHS of (<ref>))(u,v) du dv and ∬ (∂_v χ)(u,v)(∂_uΨ)(u,v) du dv = -∬χ(u,v)(RHS of (<ref>))(u,v) du dv. The following is a general local existence result for (1+1)-dimensional wave equations where ∂_vΨ is initially only in L^2_v. We construct local solutions in the sense of Definition <ref>. (Let us note that the following wave equation result holds for rougher data where ∂_vΨ is only in L^1_v. This will however be irrelevant to our problem; see Remark <ref>.) Consider the setup in Definition <ref>. Let 𝒦⊂𝒱 be a compact subset.
Given initial data to the wave equation (<ref>) on two transversely intersecting characteristic curves {(u,v):u=0,v∈ [0,v_*]}∪{(u,v):v=0,u∈ [0,u_*]} such that* Ψ takes value in 𝒦; and* the following estimates hold for the derivatives of Ψ for some C_wave>0:∫_0^v_* |_vΨ|^2(0,v')dv'≤ C_wave,sup_u∈ [0,u_*]|_uΨ|^2(u',0)≤ C_wave.Then, there exist _wave>0 depending on 𝒦 and C_wave (and the equation) such that there exists a unique solution to (<ref>) in the sense of Definition <ref> in the region(u,v)∈{(u,v): u∈ [0,_wave),v∈ [0,_wave)}which achieves the prescribed initial data.We directly work with the formulation in Definition <ref> and prove the existence and uniqueness of integral solutions. This proposition can be proven via a standard iteration argument. In order to illustrate the main idea and the use of the structure of the nonlinearity, we will only discuss below the proof of a priori estimates. By a bootstrap argument, we assume that sup_u'∈ [0,_wave) v'∈ [0,_wave)|_uΨ|(u',v')≤ 4C_wave.Let 𝒦'⊂𝒱 be a fixed compact set such that 𝒦⊂𝒦'. We estimate Ψ using the fundamental theorem of calculus as follows:sup_u'∈ [0,_wave) v'∈ [0,_wave)|Ψ(u',v')-Ψ(0,v')| ≤ sup_v'∈ [0,_wave)∫_0^_wave |_uΨ|(u',v')du'≤ _wavesup_u'∈ [0,_wave) v'∈ [0,_wave)|_uΨ|(u',v') ≤ 4C_wave_wave .Using the compactness of 𝒦, we can choose _wave sufficiently small so that Ψ(u,v)∈𝒦' for all u∈ [0,) Now that we have estimated Ψ, since 𝒦' is compact, it follows that f_A(Ψ), N_A^BC(Ψ), K_A^BC(Ψ), L_A^B(Ψ), R_A^B(Ψ) are all bounded. From now on, we will use these bounds and write C for constants that are allowed to depend on sup_x∈𝒦'f_A(x), etc.We now turn to the estimates for the derivatives of Ψ. First, we bound _vΨ using (the integral form of) (<ref>) and Hölder's inequality and Young's inequality:∫_0^_wavesup_u'∈ [0,_wave)|_vΨ|^2(u',v')dv' ≤C_wave + C ∫_0^_wave∫_0^_wave |_vΨ| (1+|_vΨ|+|_uΨ|+|_vΨ||_uΨ|+|_uΨ|^2)(u',v') du'dv'≤C_wave + C (1+∫_0^_wavesup_u'∈ [0,_wave)|_vΨ|^2(u',v')dv')(_wave + _wavesup_u'∈ [0,_wave) v'∈ [0,_wave)|_uΨ|(u',v') )+ C_wave_wave^2sup_u'∈ [0,_wave) v'∈ [0,_wave)|_uΨ|(u',v').For _uΨ, we again use (the integral form of) (<ref>) and Hölder's inequality and Young's inequality to getsup_u'∈ [0,_wave) v'∈ [0,_wave)|_uΨ|(u',v') ≤ C_wave + C sup_u'∈ [0,_wave)∫_0^(1+|_vΨ|+|_uΨ|+|_vΨ||_uΨ|+|_uΨ|^2)(u',v'))(u',v') dv'≤ C_wave + C (1+∫_0^sup_u'∈ [0,_wave]|_vΨ|^2(u',v')dv')(_wave^ 12 + _wavesup_u'∈ [0,_wave) v'∈ [0,_wave)|_uΨ|^2(u',v')) ≤ C_wave + C (1+∫_0^sup_u'∈ [0,_wave]|_vΨ|^2(u',v')dv')(_wave^ 12 + _wave C_wavesup_u'∈ [0,_wave) v'∈ [0,_wave)|_uΨ|(u',v')).Summing the above two estimates and choosing _wave sufficient small (depending on C_wave and 𝒦'), it follows that ∫_0^sup_u'∈ [0,_wave)|_vΨ|^2(u',v')dv'+sup_u'∈ [0,_wave) v'∈ [0,_wave)|_uΨ|(u',v')≤ 2C_wave.This in particular improves the bootstrap assumption (<ref>) so that we conclude the argument.We now use Proposition <ref> to solve (<ref>). In particular, this allows us to extend the solution in D_u_0,v_0 (in infinitely many ways!) beyond the Cauchy horizon as a spherically symmetric strong solution to (<ref>). Before we proceed, let us define a notion of spherically symmetric strong solutions to (<ref>) (using Definition <ref>) appropriate for our setting. For simplicity, in our notion of spherically symmetric strong solutions, we will already fix a gauge so that A_u=0.Let (ϕ,,r,A_v,Q) be continuous functions on {(u,v): u∈ [u_0,u_0+),v∈ [v_0,v_0+)} for some >0 with ϕ complex-valued, (,r,A_v,Q) real-valued and , r>0. 
We say that (ϕ,,r,A_v,Q) is a spherically symmetric strong solution to (<ref>) if the following hold[We remark that (<ref>) is not explicitly featured below. Note however that (<ref>) follows as an immediate consequence of (<ref>).]: * (ϕ,,r,A_v,Q) are in the following regularity classes: _v ϕ, _vlog∈ L^2_v(C^0_u),_u ϕ, _ulog, _u r, _v r, _u A_v ∈ C^0_uC^0_v. * (<ref>), (<ref>) and (<ref>) are satisfied as wave equations in the integrated sense as in Definition <ref> after replacing D_v ↦_v + i 𝔢 A_v, D_u ↦_u.* (<ref>), (<ref>), (<ref>) and (<ref>) are all satisfied in the integrated sense as follows, again with the understanding that D_v ↦_v + i 𝔢 A_v, D_u ↦_u:Q(u,v)= Q(0,v)+∫_u_0^u [2π i r^2 (ϕD_uϕ-ϕD_uϕ)] (u',v) du',Q(u,v)= Q(u,0)-∫_v_0^v [2π i r^2𝔢 (ϕD_vϕ-ϕD_vϕ)] (u,v') dv',r∂_u r(u,v)= r∂_u r(0,v)+∫_u_0^u [2 r∂_u r∂_ulogΩ+(∂_ur)^2-4π r^2|D_uϕ|^2 ](u',v) du',r∂_v r(u,v)= r∂_v r(u,0)+∫_v_0^v [2 r∂_v r∂_vlogΩ+ (∂_v r)^2-4π r^2|D_vϕ|^2 ](u,v') dv',for all (u,v) ∈{(u,v): u∈ [u_0,u_0+),v∈ [v_0,v_0+)}.* (<ref>) is satisfied classically everywhere with A_u = 0, i.e._u A_v = Q^22r^2. We emphasize again that a spherically symmetric strong solution to (<ref>) in the sense of Definition <ref> is a fortiori a weak solution to (<ref>) in the sense of Remark <ref>. We now construct extensions to the solutions given by Proposition <ref> beyond the Cauchy horizon as spherically symmetric strong solutions to (<ref>): For every u_ext∈ (-∞,u_0), there exists _ext>0 such that there are infinitely many inequivalent extensions (ϕ,,r,A_V,Q) to the region D_u_0,v_0∪∪{(u,V):u∈ [u_ext,u_ext+_ext], V∈ [0,_ext)}, each of which is a spherically symmetric strong solution to (<ref>) (cf. Definition <ref>). Let us focus the discussion on constructing one such extension. It will be clear at the end that the argument indeed gives infinitely many inequivalent extensions.Setting up the initial data.Extend the constant u curve {u=u_ext} up to the Cauchy horizon. We will consider a sequence of characteristic initial problems with initial data given on {u=u_ext} and {V=V_n} where V_n approaches the Cauchy horizon, i.e. V_n→ 0. For a fixed n∈ℕ, the data on {V=V_n} are simply induced by the solution that we have constructed in Proposition <ref>. On {u=u_ext}, the data when V∈ [V_n,0) are induced by the solution, but we prescribe data for V≥ 0 (i.e. beyond the Cauchy horizon) by the following procedure: * (Data for .) As we showed in (<ref>), (<ref>) and Proposition <ref>, for a fixed u_ext, (u_ext,V) is continuous up to {V=0}, is bounded away from 0, and _V(u_ext,V)∈ L^2_V. We can therefore extendto {(u_ext,V):V≥ 0} so that it is continuous and bounded away from 0 and that _V(u_ext,V) ∈ L^2_V. * (Data for ϕ.) As we showed in (<ref>) and Proposition <ref>, ϕ(u_ext,V) is continuous up to {V=0} and D_Vϕ(u_ext,V) ∈ L^2_V. Since by Lemma <ref>, |A_V|(u_ext,V) (u_0-u_ext) for V≤ 0, this also implies that _Vϕ(u_ext,V) ∈ L^2_V. We can therefore extend ϕ to {(u_ext,V):V≥ 0} so that it is continuous and _Vϕ(u_ext,V) ∈ L^2_V. * (Data for A_V.) Next, by Lemma <ref>, A_V(u_ext,V) is continuous up to {V=0} and _Vϕ(u_ext,V) ∈ L^2_V. Thus, just likeand ϕ, we can extend A_V to {(u_ext,V):V≥ 0} so that it is continuous and _VA_V(u_ext,V) ∈ L^2_V. * (Data for r.) Finally, we prescribe r. Note that this is the only piece of the initial data which is not free, but instead is required to satisfy constraints. 
First we note that by (<ref>), (<ref>) and Proposition <ref>, for V≤ 0, r(u_ext,V) is continuous up to {V=0}, bounded away from 0 and (_V r)(u_ext,V) ∈ L^2_V. Moreover, using (<ref>) (and the also estimates (<ref>) and (<ref>)), it can be deduced that (_V r)(u_ext,V) can be extended continuously up to {V=0}. Now we extend r and _V r beyond the Cauchy horizon {V=0} by solving the equation (<ref>). Since _Vϕ∈ L^2_V and log is bounded (by the choices above), provided that we only solve slightly beyond the Cauchy horizon (i.e. for V sufficiently small), both r and |_V r| are continuous, bounded above, and r is also bounded away from 0.Formulating the problem as a system of wave equations. Now apply Proposition <ref> to solve the following system of wave equations for Ψ = (r,log, Re(ϕ), Im(ϕ), A_V):r∂_u∂_V r=-1/4Ω^2-∂_ur ∂_Vr+𝔪^2π r^2 Ω^2 |ϕ|^2+r^2Ω^2(_uA_V)^2,r^2∂_u∂_VlogΩ=-2π r^2(_uϕ(_V+ i A_V)ϕ+_uϕ(_V+ i A_V)ϕ) -2r^2Ω^2(_uA_V)^2 +1/4Ω^2+∂_ur∂_Vr, _u ((_V+ i A_V)ϕ)+(_V+ i A_V)_uϕ=-1/2𝔪^2Ω^2ϕ-2r^-1(∂_ur (_V+ i A_V)ϕ+∂_V r _u ϕ), ∂_V(r^2Ω^2_uA_V) = -π i r^2𝔢 (ϕD_Vϕ-ϕD_Vϕ).It is easy to check that this system of equations indeed has the structure as in (<ref>).Solving the system of wave equations. By Proposition <ref>, there exists _0>0 (independent of n) such that for every V_n, a unique solution to the above system of equation exists for (u,V)∈{(u,V): u∈ [u_ext,u_ext+_0), V∈ [V_n,V_n+_0)}. In particular, since V_n→ 0, we can choose n∈ℕ sufficiently large so that V_n+_0>0. Now fix such an n and choose _ext>0 sufficiently small such that _ext<V_n+_0. We have therefore constructed a solution (r,log, Re(ϕ), Im(ϕ), A_V) to (<ref>)–(<ref>) in D_u_0,v_0∪∪{(u,V):u∈ [u_ext,u_ext+_ext], V∈ [0,_ext)}.Definition of Q and equation (<ref>). Define Q = 2r^2^-2_uA_V. By definition Q is continuous and (<ref>) is satisfied classically. Moreover, since (<ref>) is satisfied in an integrated sense, it also follows that (<ref>) is satisfied.Plugging in the definition of Q into (<ref>)–(<ref>), we also obtain that r,and ϕ respectively satisfy (<ref>), (<ref>) and (<ref>) as wave equations in the integrated sense as in Definition <ref>.Propagation of constraints and equations (<ref>), (<ref>) and (<ref>). Next, we check that (<ref>), (<ref>) and (<ref>) are satisfied. This involves a propagation of constraints argument, which is standard except that we need to be slightly careful about regularity issues.First, we note that since the equations are satisfied classically at (u,V_n) for all u∈ [u_ext,u_ext+_0), (<ref>) and (<ref>) are satisfied on {V=V_n}. Moreover, by the construction of the data for r above, (<ref>) is also satisfied on {u=u_ext}.Therefore, it follows that (<ref>), (<ref>) and (<ref>) are equivalent respectively to the following equations:(Q(u,V) -Q(u,V_n))-(Q(u_ext,V)-Q(u_ext,V_n))= ∫_u_ext^u [2π i r^2(D_uϕ-ϕD_uϕ)] (u',V) du'- ∫_u_ext^u [2π i r^2(D_uϕ-ϕD_uϕ)] (u',V_n) du',(r∂_u r(u,V) -r∂_u r(u,V_n))-(r∂_u r(u_ext,V)-r∂_u r(u_ext,V_n))=∫_u_ext^u ( [2 r∂_u r∂_ulogΩ+(∂_ur)^2-4π r^2|D_uϕ|^2 ](u',V) - [⋯](u',V_n)) du',(r∂_V r(u,V) -r∂_V r(u_ext,V))-(r∂_V r(u,V_n)-r∂_V r(u_ext,V_n))=∫_V_n^V ([2 r∂_V r∂_VlogΩ+(∂_Vr)^2-4π r^2|D_Vϕ|^2 ](u,V')- [⋯](u_ext,V')) dV',where [⋯] means that we take exactly the same expression as inside the previous pair of square brackets. To proceed, observe now that we have the following integrated version of the Leibniz rule: let f, g:[0,T]→ℝ, f∈ C^0, g∈ C^1. Assume that there exists an F:[0,T] →ℝ in L^1 such that f(t)-f(0) = ∫_0^t F(s) ds for all t∈ [0,T]. 
Then by Fubini's theorem and the fundamental theorem of calculus,∫_0^t F(s) g(s) ds =g(0) ∫_0^t F(s)ds + ∫_0^t ∫_0^s F(s) g'(τ)dτ ds=f(t) g(0) - f(0) g(0) + ∫_0^t ∫_τ^t F(s) g'(τ)ds dτ=f(t) g(0) - f(0) g(0) + ∫_0^t [f(t) g'(τ) - f(τ) g'(τ)]dτ=f(t) g(t) - f(0) g(0) - ∫_0^t f(s) g'(s)ds.In other words, supposed Ψ_i satisfies _u_vΨ_i = F_i (for some F_i ∈ L^1_v C^0_u), the following integrated versions of the Leibniz rule hold:∂_uΨ_i(u,V) Ψ_j (u,V)=∂_uΨ_i(u,V_n)Ψ_j (u,V_n)+ ∫_V_n^V [Ψ_jF_i+∂_vΨ_j∂_uΨ_i] (u,V') dV', ∂_vΨ_i (u,V)Ψ_j (u,V)=∂_uΨ_i (u_ext,V)Ψ_j (u_ext,V)+ ∫_u_ext^u [Ψ_j F_i+∂_uΨ_j∂_vΨ_i] (u',V) du'. Let us now show that (<ref>), or equivalently (<ref>), holds. Since we have already established that (<ref>) holds, it follows that (<ref>) is equivalent to- ∫_V_n^V ([2π i r^2(ϕD_vϕ - ϕD_vϕ)](u,V') - [⋯](u_ext,V') )dV' =∫_u_ext^u ( [2π i r^2(ϕD_uϕ-ϕD_uϕ)] (u',V)-[⋯] (u', V_n)) du'.By (<ref>) and (<ref>) above, it follows that we need to check∫_u_ext^u ∫_V_n^V (_u(2π i r^2(ϕD_vϕ - ϕD_vϕ)) + ∂_V(2π i r^2(ϕD_uϕ-ϕD_uϕ))) (u',V') du'dV' = 0,where expressions such as _u D_vϕ and _V D_uϕ are to be understood after plugging in the appropriate inhomogeneous terms arising from (<ref>). On the other hand, after plugging in the appropriate expressions from (<ref>), it is easy to check that the integrand in (<ref>) vanishes almost everywhere. Therefore, (<ref>) indeed holds, which then implies that (<ref>) holds. Next, we consider (<ref>), or equivalently (<ref>). Since we have already established (<ref>) in an integrated sense, using the definition of Q above, it follows from (<ref>) that (<ref>) is equivalent to∫_V_n^V ([-1/4Ω^2+𝔪^2π r^2 Ω^2 |ϕ|^2+1/4^2r^2Q^2](u,V') - [⋯](u_ext,V')) dV' =∫_u_ext^u ( [2 r∂_u r∂_ulogΩ+(∂_ur)^2-4π r^2|D_uϕ|^2 ](u',V) - [⋯](u',V_n)) du'Using again the integrated Leibniz's rule (<ref>) and (<ref>), it then follows that (<ref>) is equivalent to∫_u_ext^u ∫_V_n^V _u ([-1/4Ω^2+𝔪^2π r^2 Ω^2 |ϕ|^2+1/4Ω^2r^2Q^2](u',V') ) dV' du' -∫_V_n^V∫_u_ext^u _V( [2 r∂_u r∂_ulogΩ+(∂_ur)^2-4π r^2|D_uϕ|^2 ](u',V')) du' dV' = 0,where (in a similar manner as (<ref>)) expressions _V_u r, _V_ulogΩ and _VD_uϕ are to be understood after plugging in the appropriate inhomogeneous terms arising from (<ref>), (<ref>) and (<ref>) respectively, and _uQ is to be understood as _uQ = 2π i r^2 (ϕD_uϕ-ϕD_uϕ) (cf. (<ref>)). Direct algebraic manipulations (using in particular Q=2r^2 Ω^-2∂_uA_V) then show that the integrand in (<ref>) vanishes almost everywhere. This verifies (<ref>).Finally, we need to check (<ref>), or equivalently (<ref>). This can be argued in a very similar manner as (<ref>); we omit the details. Checking the regularity of the functions. We have now checked that all the equations are appropriately satisfied. To conclude that we have a solution in the sense of (<ref>), it remains to check that _V r is continuous. (A priori, using Proposition <ref>, we only know that _V r ∈ L^2_V(C^0_u).) That _V r is continuous is an immediate consequence of (<ref>), the fact that the data for _V r are continuous on {u=u_ext}, and the regularity properties of all the other functions.We have thus shown how to construct one extension of the solution (as a spherically symmetric strong solution in the sense of Definition <ref>). Since the procedure involves prescribing arbitrary data, one concludes that in fact there are infinitely many inequivalent extensions.Notice that in spherical symmetry, one can solve the wave equations with data such that one only requires _Vϕ, _V r, _vlog∈ L^1_V. 
However, if ∂_Vϕ∉ L^2_V, we have ∂_V r→ -∞ along a constant-u hypersurface, and one cannot make sense of (<ref>) beyond the singularity. In other words, if ∂_Vϕ∉ L^2_V, we cannot find appropriate data for the system of wave equations so as to guarantee that the solution indeed corresponds to a solution to (<ref>). § IMPROVED ESTIMATES FOR MASSLESS AND CHARGELESS SCALAR FIELD We will prove that sup_u∈ (-∞,u_0],v∈ [v_0,∞)(|u|^2|∂_uϕ|(u,v)+v^2|∂_vϕ|(u,v))<∞. Recalling the relation between v and the regular coordinate V in (<ref>), this then implies the desired conclusion. We prove the above bounds with a bootstrap argument. Assume that sup_u∈ (-∞,u_0],v∈ [v_0,∞)|u|^2|∂_uϕ|(u,v)≤𝒜_imp. In the following argument, we will allow the implicit constant to depend on all the constants in the previous sections, as well as on the size of the LHS of (<ref>); 𝒜_imp will then be thought of as larger than all these constants. We will show that for appropriate |u_0|, the estimate in (<ref>) can be improved. To proceed, note that when 𝔪 = 𝔢 = 0, (<ref>) can be written as ∂_u(r∂_vϕ) = -(∂_vr)(∂_uϕ) and ∂_v(r∂_uϕ) = -(∂_ur)(∂_vϕ). Using (<ref>), we estimate v^2|∂_vϕ|(u,v) ≲ 1+ 𝒜_imp∫_-∞^u v^2|u'|^-2(v+|u'|)^-2 du' ≲ 1+𝒜_imp |u_0|^-1. Using (<ref>) and the estimate (<ref>) that we just established, we have |u|^2|∂_uϕ|(u,v) ≲ 1+ (1+𝒜_imp|u_0|^-1) ∫_v_0^∞ |u|^2|v'|^-2(v'+|u|)^-2 dv' ≲ 1+(1+𝒜_imp|u_0|^-1) v_0^-1. Choosing 𝒜_imp sufficiently large and u_0 sufficiently negative (in that order), we have improved the bootstrap assumption (<ref>). Then by (<ref>) and (<ref>), sup_u,V(|∂_Vϕ|(u,V) + |u|^2|∂_uϕ|(u,V)) ≲ sup_u,v(v^2|∂_vϕ|(u,v) + |u|^2|∂_uϕ|(u,v))<∞, from which the conclusion follows.
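Although the argument above is purely analytic, the mechanism of the bootstrap can be visualized numerically. Reading the two displayed estimates as a map on the bootstrap constant, with a single stand-in implicit constant C (an assumption made only for this illustration, as are the parameter values), one sees the map contract once |u_0| and v_0 are large:

# A_new <= C * (1 + (1 + A/|u_0|) / v_0), read off from the two estimates
def bootstrap_map(A, C=10.0, abs_u0=100.0, v0=100.0):
    return C * (1.0 + (1.0 + A / abs_u0) / v0)

A = 1.0e6                      # start from an enormous bootstrap constant
for _ in range(10):
    A = bootstrap_map(A)
print(A)                       # settles near C: the assumption is improved

Choosing 𝒜_imp larger than the fixed point and then |u_0| sufficiently large is precisely the order of choices made in the proof.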
http://arxiv.org/abs/1709.09137v2
{ "authors": [ "Dejan Gajic", "Jonathan Luk" ], "categories": [ "gr-qc", "math.AP" ], "primary_category": "gr-qc", "published": "20170926171601", "title": "The interior of dynamical extremal black holes in spherical symmetry" }
We prove a sparse bound for the m-sublinear form associated to vector-valued maximal functions of Fefferman-Stein type. As a consequence, we show that the sparse bounds of multisublinear operators are preserved via ℓ^r-valued extension. This observation is in turn used to deduce vector-valued, multilinear weighted norm inequalities for multisublinear operators obeying sparse bounds, which are out of reach for the extrapolation theory developed by Cruz-Uribe and Martell in <cit.>. As an example, vector-valued multilinear weighted inequalities for bilinear Hilbert transforms are deduced from the scalar sparse domination theorem of <cit.>. A sparse estimate for multisublinear forms involving vector-valued maximal functions Amalia Culiuc, Francesco Di Plinio, Yumeng Ou December 30, 2023 ============================================================================================================================ § MAIN RESULTS Let m=1,2,… and p⃗=(p_1,…,p_m)∈ (0,∞]^m be a generic m-tuple of exponents. This note is centered around the vector-valued m-sublinear maximal function 𝖬_p⃗,r (f^1,…, f^m) := ‖ sup_Q∏_j=1^m ⟨ f^j_k ⟩_p_j,Q 1_Q ‖_ℓ^r(ℂ^N), 1≤ r≤∞. Here each f^j=(f^j_1,…,f^j_N) is a ℂ^N-valued locally p_j-integrable function on ^d, the supremum is taken over all cubes Q⊂^d, and we have adopted for a=(a_1,…,a_N) ∈ℂ^N the usual notation ‖a‖_ℓ^r :=( ∑_k=1^N |a_k|^r)^1/r, 0<r <∞, ‖a‖_ℓ^∞:=sup_k=1,…,N|a_k|, as well as ⟨ f ⟩_p,Q:= ‖f 1_Q‖_p/|Q|^1/p. The parameter N is merely formal and all ℓ^r-valued estimates below are meant to be independent of N without explicit mention. Note that when m=1, (<ref>) reduces to the well-studied Fefferman-Stein maximal function <cit.>. In fact, it follows by Hölder's inequality that 𝖬_p⃗,r (f^1,…, f^m) ≤∏_j=1^m 𝖬_p_j,r_j (f^j). Therefore, the full range of strong Lebesgue space estimates 𝖬_p⃗,r :∏_j=1^m L^q_j(^d;ℓ^r_j)→ L^q(^d), q= 1/∑_j=1^m 1/q_j, r= 1/∑_j=1^m 1/r_j, 1≤ p_j < min{ r_j,q_j} and the weak-type endpoint 𝖬_p⃗,r :∏_j=1^m L^p_j(^d;ℓ^r_j) → L^p,∞(^d), p=1/∑_j=1^m 1/p_j, r= 1/∑_j=1^m 1/r_j, 1≤ p_j < r_j are subsumed by the m=1 case discussed in <cit.>, via Hölder's inequality in strong and weak-type spaces respectively. Moreover, (<ref>) can be strengthened to the following form: given any partition ℐ:={I_1,…,I_s} of {1,…,m}, there holds 𝖬_p⃗,r(f^1,…,f^m)≤∏_i=1^s 𝖬_p⃗_i,r_i(f^(i))≤∏_j=1^m 𝖬_p_j,r_j (f^j), where f^(i):={f^j}_j∈ I_i, p⃗_i:=(p_j)_j∈ I_i, and 1/r_i:=∑_j∈ I_i 1/r_j. The first main result of this note, Theorem <ref> below, is a nearly sharp sparse estimate involving vector-valued m-sublinear maximal functions of the form ∫_^d∏_i=1^s 𝖬_p⃗_i,r_i( f^(i))(x) dx, which strengthens the Lebesgue space estimates (<ref>), (<ref>). As an application of Theorem <ref>, we obtain a structural result on sparse bounds, Theorem <ref> below, which seems to have gone unnoticed in previous literature: sparse bounds in the scalar setting self-improve to the ℓ^r-valued setting. In other words, if a given sequence of operators is known to obey a uniform sparse bound, the vector-valued operator associated to the sequence satisfies the same ℓ^r-valued sparse bound, without the need for additional structure of the operators. We proceed to define the notion of sparse bound referred to hitherto. A countable collection 𝒬 of cubes of ^d is sparse if there exists a pairwise disjoint collection of sets {E_Q: Q∈𝒬} such that for each Q ∈𝒬 there holds E_Q⊂ Q, |E_Q|>1/2|Q|. Let n≥ 1 and T be an n-sublinear operator mapping (n copies of) L^∞_0(^d;ℂ) into locally integrable functions.
If p⃗∈ (0,∞)^n+1,the sparse p⃗ norm of T, denoted by T_p⃗, is the least constant C>0 such that for all (n+1)-tuplesg⃗=(g^1,…,g^n+1)∈ L^∞_0(^d;ℂ)^n+1 we may find a sparse collection 𝒬=𝒬(g⃗) such that|ł T(g^1,…,g^n),g^n+1| ≤ C∑_Q∈𝒬 |Q| ∏_j=1^n+1ł g^j _̊p_j,Q.Beginning with the breakthrough work of Lerner <cit.>, sparse bounds have recently come to prominence in the study of singular integral operators, both at the boundary of <cit.> and well beyond Calderón-Zygmund theory <cit.>; the list of references provided herein is necessarily very far from being exhaustive. As we will see in Section <ref>, their interest lies in that they imply rather easily quantitative weighted norm inequalities for the corresponding operators.The concept of sparse bound extends naturallyto vector-valued operators. IfT={T_1,…,T_N }is a sequence ofn-sublinear operators as above, we may let T act on L^∞_0(^d;ℂ^N)^n asłT(f^1,…,f^n), f^n+1:̊= ∑_k=1^N łT_k( f^1_k,…, f^n_k),f^n+1_k.̊Let (r_1,…,r_n+1) be a Banach Hölder (n+1)-tuple, that is 1≤ r_j≤∞, j=1,…,n+1, r:=r_n+1/r_n+1-1=1/∑_j=1^n1/r_jand define the sparse (p⃗,r⃗)-norm of T as the least constant C>0 such that |łT(f^1,…,f^n), f^n+1|̊≤ C ∑_Q∈𝒬 |Q| ∏_j=1^n+1⟨f^j_ℓ^r_j⟩ _p_j,Qfor all (n+1)-tuples f⃗∈ L^∞_0(^d; ℂ^N)^n+1 and for a suitable choice of 𝒬=𝒬(f⃗). We denote such norm by T_(p⃗,r⃗). Our punchline result is the following.Let p⃗∈ [1,∞)^n+1 and r⃗ be as in (<ref>) with the assumption r_j>p_j. Then {T_1,…,T_N}_( p⃗,r⃗) ≲sup_k=1,…, NT_k _p⃗.The implicit constant depends on the tuples p⃗ and r⃗ and on the dimension d.The recent preprint <cit.> containsa direct proof of ℓ^r-valued sparse form estimates for multilinear multipliers with singularity along one-dimensional subspaces, generalizing the paradigmatic bilinear Hilbert transform, as well as for the variation norm Carleson operator. Theorem <ref> thus allows to recover these results of <cit.> from the corresponding scalar valued results previously obtained in <cit.>, which is recalled in (<ref>) below, and <cit.> respectively.We refer the readers to Subsection <ref> for a proof of Theorem <ref> and proceed with introducing the main theorem concerning sparse bounds of multisublinear forms of type (<ref>), whose proof is postponed to Section <ref>.Let there be given m-tuples p⃗=(p_1,…,p_m)∈ [1,∞)^m, r⃗=(r_1,…, r_m)∈ [1,∞]^m with1/r:=∑_j=1^m 1/r_j,p_j<r_j, j=1,…, m.1. Let >0. There exists a sparse collection 𝒬 such that∫_^d∏_j=1^m _p_j,r_j(f^j)(x) x̣≲∑_Q∈𝒬 |Q| ∏_j=1^m⟨f^j_ℓ^r_j⟩ _p_j+,Q.The implicit constant is allowed to depend on >0, as well as the tuples p⃗, (r_1,…,r_m) and on the dimension d. 1mm2. There exists a sparse collection 𝒬, possibly different from above, such that∫_^d_p⃗,r(f^1,…,f^m)(x) x̣≲∑_Q∈𝒬 |Q| ∏_j=1^m⟨f^j_ℓ^r_j⟩ _p_j,Q.The implicit constant is allowed to depend on p⃗, (r_1,…,r_m) and d.An immediate consequence of Theorem <ref> is a sparse bound for multisublinear forms involving any _p⃗,r. More precisely, for any partition ℐ:={I_1,…,I_s} of {1,…,m}, there exists a sparse collection 𝒬 (depending on ℐ) such that∫_^d∏_i=1^s_p⃗_i,r_i( f^(i))(x)x̣≲∑_Q∈𝒬 |Q| ∏_j=1^m⟨f^j_ℓ^r_j⟩ _p_j+,Q.We do point out that even though the s=1 case of (<ref>), i.e. when the partition ℐ contains only {1,…,m} itself, already implies a sparse bound for the form on the left hand side of (<ref>), it fails to recover the full strength of (<ref>) due to the -loss. 
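To make the sparse forms in the definitions above concrete, here is a small one-dimensional numerical sketch. It evaluates Σ_Q |Q| ∏_j ⟨g_j⟩_p_j,Q over the nested family Q_k=[0,2^-k), k=0,…,depth, which is sparse because the right halves of the Q_k are pairwise disjoint (the borderline constant 1/2 versus >1/2 is immaterial for illustration); the test functions and exponents are arbitrary choices.

import numpy as np

def p_avg(g, lo, hi, p):
    # <g>_{p,Q} for Q given as an index range of a uniform grid on [0,1)
    return np.mean(np.abs(g[lo:hi]) ** p) ** (1.0 / p)

def sparse_form(gs, ps, depth=10):
    n, total = len(gs[0]), 0.0
    for k in range(depth + 1):                    # Q_k = [0, 2^{-k})
        hi = n // 2 ** k
        prod = np.prod([p_avg(g, 0, hi, p) for g, p in zip(gs, ps)])
        total += 2.0 ** (-k) * prod               # |Q_k| * prod_j <g_j>_{p_j,Q_k}
    return total

n = 2 ** 14
x = (np.arange(n) + 0.5) / n
g1, g2 = np.exp(-3.0 * x), 1.0 / np.sqrt(x + 0.05)
print(sparse_form([g1, g2], [1.0, 2.0]))

Any sparse bound asserts that some collection of this type dominates the pairing under study, with a constant independent of the functions and of the collection chosen for them.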
§.§ Vector valued sparse estimates from scalar onesIn this subsection we prove Theorem <ref>, with the key ingredients being (<ref>) and the following observation, which we record as a lemma; a similar statement may be found in the argument following <cit.>.Let f⃗∈ L^∞_0(^d; ℂ^N)^n+1. Then |łT(f^1,…,f^n), f^n+1|̊≤ 2 (sup_k=1,…, NT_k _p⃗)∫_^d_p⃗,1( f^1,…,f^n+1) x̣where p⃗=(p_1,…,p_n+1). NormalizeT_k _p⃗=1 for k=1,…,N. Using the definition, for k=1,…,N we may find sparse collections 𝒬_1,…,𝒬_N such that|łT_k( f^1_k,…, f^n_k),f^n+1_k|̊≤∑_Q_k ∈𝒬_k|Q_k| ∏_j=1^n+1⟨f^j_k ⟩ _p_j,Q_k≤ 2∫_^d F_k(x) x̣,having defined F_k = ∑_Q_k ∈𝒬_k(∏_j=1^n+1⟨f^j_k ⟩_p_j,Q_k)1_E_Q_k,where the last inequality follows from the pairwise disjointness of the distinguished major subsets E_Q_k⊂ Q_k, with2|E_Q_k|≥| Q_k|. Therefore, |łT(f^1,…,f^n), f^n+1|̊≤2 ∫_^d_p⃗,1(f^1,…,f^n+1)(x) x̣. Theorem <ref> then immediately follows from Lemma <ref> recalling (<ref>).Lemma <ref> obviously applies to any (n+1)-sublinear form Λ(f^1,…,f^n+1), not necessarily of the form łT(f^1,…,f^n), f^n+1$̊.We then record the following observation: in the scalar valued caseN=1, there holds the equivalencesup_𝒬 sparse∑_Q∈𝒬|Q|∏_j=1^m⟨ f^j⟩_p_j,Q∼∫_^d_p⃗(f^1,…,f^m)(x) x̣.Incidentally, this is an alternative proof of the useful “one form rules them all” principle of Lacey and Mena Arias <cit.>. Indeed, (<ref>) follows from applying Lemma <ref> to the caseN=1and to them-sublinear form on the left hand side of (<ref>). Such anequivalence does not seem to hold in the vector-valued case.§ PROOF OF THEOREM<REF> The proof of the main result is iterative in nature and borrows some of the ingredients from the related articles <cit.>. Throughout, we assume that the tuples p⃗=(p_1,…,p_m) and (r_1,…,r_m) as in the statement of Theorem <ref> are fixed. We first prove part 1, and the proof of part 2, which is very similar and is in fact simpler, will be given at the end of the section. §.§ Truncations and a simple lemma We start by defining suitable truncated versions of the Fefferman-Stein maximal functions (<ref>). For s,t>0, write𝖠_j^s,t f^j:=sup_s<ℓ(Q)≤ tł f^j_k _̊p_j,Q1_Q_ℓ^ r_j(ℂ^N),j=1,…,m.Note that ∀ j,sup_s<t𝖠_j^s,tf^j = _p_j,r_jf^j.We will be using the following key lemma, which is simply the lower semicontinuity property of truncated maximal operators. Let x,x_0∈^d and s≳dist(x_0,x). Then𝖠_j^s,tf^j (x) ≲𝖠_j^s,tf^j (x_0).§.§ Main argumentWe work with a fixed δ>0; we will let δ→ 0in the limiting argument appearingbelow. For a cube Q wedefinefurther localized versions as𝖠_j^Q (f^j):=1_Q 𝖠_j^δ,ℓ(Q) (f^j) =1_Q 𝖠_j^δ,ℓ(Q)(f^j1_3Q)where the the last inequality follows from support consideration. By standardlimiting andtranslation invariance arguments, (<ref>) is reduced to the following sparse estimate: if Q is a cube belonging to one of the 3^d standard dyadic grids, thenΛ_Q(f^1,…,f^m):= ∫_Q∏_j=1^m 𝖠_j^Q (f^j) (x) x̣≲∑_L∈𝒬 |L| ∏_j=1^m⟨f^j_ℓ^r_j⟩ _p_j+,Luniformly over δ>0, where 𝒬 is a stopping collection of pairwise disjoint cubes. Estimate (<ref>) follows by iteration of the followinglemma: the iteration procedure is identical to the one used, for instance, in the proof of<cit.> and is therefore omitted.There exists a constant Θ, uniform in the data below, such that the following holds.Let Q be a dyadic cube and ( f^1,…,f^m)∈ L^∞_0(^d;ℂ^N)^m. 
Then there exists a collection L∈𝒬of pairwise disjoint dyadic subcubes of Q such that∑_L∈𝒬 |L| ≤ 2^-16|Q|andΛ_Q(f^1,…,f^m) ≤Θ |Q| ∏_j=1^m⟨f^j_ℓ^r_j⟩ _3Q,p_j++ ∑_L ∈𝒬Λ_L(f^1,…,f^m).§.§ Proof of Lemma <ref>We can assume everything is supported in 3Q. By horizontal dilation invariance we may assume |Q|=1. By vertical scaling we may assume ⟨f^j_ℓ^r_j⟩_p_j+,3Q=1 for all j=1,…, m.Define thecollection L∈𝒬 as the maximal dyadic cubes of ^d such that 9L⊂ E_Q whereE_Q =⋃_j=1^m{ x∈ Q: ∘𝖠_j^Q(f^j) (x) ≥ C},hereis the usual Hardy-Littlewood maximal function. If C is large enough, usingthe Lebesgue space boundedness of ∘𝖠_j^Q with the choices q_j=p_j+ in (<ref>), the set E_Q has small measure compared to Q and same for the pairwise disjoint cubes L in the stopping collection 𝒬. As a consequence of the construction of 𝒬 and of Lemma <ref> we obtain the following properties for all j=1,…,m and L ∈𝒬 sup_x ∉E_Q𝖠_j^Q(f^j) (x)≲ 1,sup_L'≳ Lł𝖠_j^Q(f^j) _̊1,L'≲ 1,sup_x∈ L𝖠_j^ℓ(L),ℓ(Q)( f^j) (x) ≲ 1.The third property follows from the fact that if x∈ L there is a point x_0∈ L', with L' a moderate dilate of L, with small _j, so that one may apply Lemma <ref>.We now prove the main estimate.By virtue of (<ref>), ∫_Q∖ E_Q∏_j=1^m𝖠_j^Q (f^j) (x)x̣≲ 1.Given thatL ∈𝒬 cover E_Q andare pairwise disjoint it then suffices to prove that for each L ∫_L∏_j=1^m 𝖠_j^Q (f^j) (x)x̣≤Λ_L(f^1,…,f^m)+C|L|and sum this estimate up. Observe that the left hand side of (<ref>) is bounded by the sum∫_L∏_j=1^m𝖠_j^δ,ℓ(L)f^j(x) x̣+ ∑_τ_1,…,τ_m∫_L∏_j=1^m𝖠_j^τ_jf^j (x) x̣,where 𝖠_j^τ_j is either 𝖠_j^δ,ℓ(L) or 𝖠_j^ℓ(L),ℓ(Q), and the sum is over all the possible combinations of {τ_1,…,τ_m} except the one with 𝖠_j^δ,ℓ(L) appearing for all j. Note that the first term in the above display is equal to Λ_L(f^1,…,f^m), so it suffices to show that∑_τ_1,…,τ_m∫_L∏_j=1^m𝖠_j^ρ_jf^j (x) x̣≲ |L|where 𝖠_j^ρ_j is either 𝖠_j^Q or 𝖠_j^ℓ(L),ℓ(Q) and 𝖠_j^ℓ(L),ℓ(Q) appears at least at one j. This is because the left hand side is larger than the second term of (<ref>). But this is immediate by using the L^1 estimate of (<ref>) on the terms of the type 𝖠_j^Qf^j and the L^∞ estimate of (<ref>) on the terms 𝖠_j^ℓ(L),ℓ(Q)f^j respectively. The proof is complete. §.§ Proof of (<ref>)The proof of (<ref>) proceeds very similarly to the one given above. Write f⃗=(f^1,…,f^m) for simplicity and define the multilinear version of the truncated operator𝖠^s,tf⃗:=sup_s<ℓ(Q)≤ t∏_j=1^m ł f^j_k _̊p_j,Q1_Q_ℓ^ r(ℂ^N),s,t>0. With this definition of𝖠^s,t, the analogues of (<ref>) and Lemma <ref> still hold. Therefore, a similar liming argument as above reduces the matter to showing Λ_Q(f⃗):= ∫_Q𝖠^Qf⃗(x) x̣≲∑_L∈𝒬|L|∏_j=1^m⟨f^j_r_j⟩_p_j,L uniformly over δ>0 for some stopping collection 𝒬, where 𝖠^Q is the localized version of 𝖠^s,t defined as in (<ref>). The proof of the last display proceeds by iteration of the analogous result toLemma <ref>: for any dyadic cube Qand f⃗∈ L^∞_0(^d;ℂ^N)^m there exists a collection L∈𝒬of pairwise disjoint dyadic subcubes of Q such that∑_L∈𝒬 |L| ≤ 2^-16|Q|andΛ_Q(f⃗) ≤Θ |Q| ∏_j=1^m⟨f^j_ℓ^r_j⟩ _3Q,p_j+ ∑_L ∈𝒬Λ_L(f⃗). To prove the last claim, the following changes are needed in the proof of Lemma <ref>. We use instead the normalization ⟨f^j_ℓ^r_j⟩_p_j,3Q=1 without the ε, and define the exceptional set without the extra Hardy-Littlewood maximal function, i.e.E_Q:={x∈ Q: 𝖠^Q(f⃗)(x)≥ C}. Since, from (<ref>), 𝖠^Qhas the weak-type bound at ∏_j=1^m L^p_j, the measure of E_Q is small for sufficiently large C. 
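The stopping-time selection entering both constructions can be illustrated with a short sketch: given an exceptional set, collect the maximal dyadic intervals contained in it. We work in one dimension, replace the condition 9L⊂ E_Q by L⊂ E_Q purely for illustration, and use an arbitrary stand-in for the maximal function.

import numpy as np

def maximal_dyadic_in(E, depth=10):
    # maximal dyadic subintervals of [0,1), as index ranges, inside {E true}
    n, chosen = len(E), []
    def covered(a, b):
        return any(c <= a and b <= d for (c, d) in chosen)
    for k in range(depth + 1):                    # scan coarse to fine
        step = n // 2 ** k
        for i in range(2 ** k):
            a, b = i * step, (i + 1) * step
            if E[a:b].all() and not covered(a, b):
                chosen.append((a, b))
    return chosen

n = 2 ** 12
x = (np.arange(n) + 0.5) / n
Mf = 1.0 / np.abs(x - 0.3)            # stand-in for the maximal function
E = Mf >= 8.0                         # exceptional set
Q = [(a / n, b / n) for (a, b) in maximal_dyadic_in(E)]
print(Q, sum(b - a for (a, b) in Q), np.mean(E))

The selected intervals are pairwise disjoint by maximality and exhaust the interior of the exceptional set up to dyadic boundary effects, which is all the iteration needs.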
Note that one still has analogues of estimates (<ref>) and (<ref>) for 𝖠^Qf⃗ in place of 𝖠_j^Q(f^j), and (<ref>) becomes irrelevant in this case. The proof is completed by using these estimates as in (<ref>) and (<ref>) respectively. § VECTOR-VALUED WEIGHTED NORM INEQUALITIESUsing the almost equivalence between scalar and vector-valued sparse estimates of Theorem <ref>, we prove vector-valued weighted norm inequalities forn-sublinear operators with controlled sparsep⃗=(p_1,…,p_n+1)norm. The weighted bounds can be obtained via estimates for the form(g^1,…, g^n+1) ↦P_p⃗(g^1,…, g^n+1; F) :=∫_F_(p_1,… p_n) (g^1,…,g^n) (x) _p_n+1 g^n+1 (x)x̣.where_t⃗ (g^1,…, g^n) := |sup_Q∏_j=1^n ł g^j _̊t_j,Q1_Q|is the scalar valued version of (<ref>). We consider Hölder tuples1≤ q_1,…, q_n≤∞,q : =1/∑_j=1^n 1/q_j≤ 1and weight vectorsv⃗=(v_1,…,v_n)in^dwith v= ∏_j=1^n v_j^q/q_j.It is well known <cit.> that_(p_1,… p_n): ∏_j=1^n L^q_j(v_j) →L^q(v) q_1> p_1,…, q_n> p_n,[v⃗]_A_(q_1,…,q_n)^(p_1,…,p_n,1)< ∞where the vectorweight characteristic appearing above is defined more generally by[v⃗]_A_(q_1,…,q_n)^(t_1,…,t_n+1) := sup_Q( ł v _̊t_n+1/q-(q-1)t_n+1,Q^1/q∏_j=1^n ł (v_j)^-1_̊t_j/q_j-t_j,Q^1/q_j) <∞.Whenn=1, the above characteristics generalize the familiarA_t(Muckenhoupt) andRH_t(Reverse Hölder) classes, namelyA_q^(t_1,t_2) = A_q/t_1∩ RH_t_2/q-(q-1)t_2. Let(q_1,…,q_n), q be as in (<ref>) and let v⃗=(v_1,…,v_n), v be as in (<ref>). Assumethat1.sup_j=1,…,NT_j_p⃗≤ 1for some p⃗=(p_1,…,p_n+1) with 1≤ p_1≤ q_1,…,1≤ p_n≤ q_n;2. condition (<ref>) holds, namely v⃗∈ A_(q_1,…, q_n)^(p_1,…,p_n,1); 3. there exists t∈ [1, p_n+1'] such thatv∈ A_t ∩ RH_p_n+1/t(1-p_n+1)+p_n+1.Then the vector-valued strong type boundT: ∏_j=1^n L^q_j(v_j ; ℓ^r_j ) →L^q(v; ℓ^r)holds true whenever r_1≥ p_1, …, r_n≥ p_n, r_n+1=r' ≥ p_n+1. As T_j_p⃗≤ 1 for all j, Theorem <ref> implies that there exists a sparse collection 𝒬 such that|łT(f^1,…,f^n), f^n+1|̊≲∑_Q∈𝒬 |Q|∏_j=1^n+1⟨f^j_ℓ_r_j⟩_p_j,Qunder the the assumptions r_j>p_j, j=1,…, n+1. By interpolation, it suffices to prove the weak-type analogue of (<ref>).We use the well known principleT: ∏_j=1^n L^q_j(v_j ; ℓ^r_j ) →L^q,∞(v; ℓ^r)≲supinf_G⊂ Fv(F) ≤ 2v(G) |łT(f^1,…,f^n), f^n+1v1_G|̊/v(F)^1-1/q,where the supremum is taken over sets F⊂^d of finite measure, f^j ∈ L^q_j(v_j ; ℓ^r_j ), j=1,…, nof unit norm, and functions f^n+1 with f^n+1_L^∞(^d;ℓ^r_n+1)≤ 1. Fix F,f^j as such and introduce the scalar-valued functions g^j:=f^j_ℓ_r_j, j=1,…,n+1. Set,E={x∈^d: M_(p_1,…,p_n)(g^1,…,g^n) > β^1/q v(F)^-1/q}, where β>0 will be determined at the end.We let G=^d∖ E and finally we define the smaller set G=F∖ E' where E' is the union of themaximal dyadic cubes Q such that |Q|≤ 2^5|Q∩ E|. Notice that |E'| ≤ 2^5 |E|v(E') ≤ C([v]_A_∞) v(E) < C/β v(F) ≤1/2 v(F)by choosing β large enough and relying upon the bound (<ref>) to estimate v(E). Therefore G is a major subset of F. In this estimate we have used that v∈ A_∞, which is guaranteed by the third assumption of the theorem. 
Now, the argument used in <cit.> applied to (<ref>) with f^n+1 replaced by f^n+1v1_G returns|łT(f^1,…,f^n), f^n+1v1_G|̊ ≲∑_Q∈𝒬|Q∩G| ≥ 2^-5|Q| |Q|(∏_j=1^n⟨ g ^j ⟩_p_j,Q) ⟨ g^n+1v1_F ⟩_p_n+1,Q≲P_p⃗(g^1,…, g^n, g^n+1 v1_F;^d\ E) .Further, if t is as in the third assumption, an interpolation argument between (<ref>) and the L^∞ estimate off the set E yieldsM_(p_1,…,p_n)(g^1,…,g^n)1_^d\ E_L^t(v)≲ v(F)^1/t-1/q.Therefore |łT(f^1,…,f^n), f^n+1v1_G|̊≲P_p⃗(g^1,…, g^n, g^n+1 v1_F;^d\ E)= ∫_G( M_(p_1,…,p_n)(g^1,…,g^n) v^1/t)(M_p_n+1(g^n+1 v1_F) v^-1/t)x̣≤M_(p_1,…,p_n)(g^1,…,g^n)1_^d\ E_L^t(v)M_p_n+1( v1_F)_L^t'(v^1-t')≲ v(F)^1/t-1/q v(F)^1/t'=v(F)^1-1/qwhich, combined with (<ref>), gives the desired result. Note that the third assumption, which is equivalent <cit.> tov^1-t'∈ A_t'/p_n+1 was used toensure the boundedness of _p_n+1 on L^t'(v^1-t').The proof is thus completed.Theorem <ref> does not cover the range q>1.In that range, in fact, (<ref>) continues to hold with conditions 2. and 3. of Theorem <ref> replaced by a single condition of multilinear type. To wit, if{T_1,…,T_N}_p⃗<∞ with1≤ p_1≤min{ q_1,r_1},…,1≤ p_n≤min{ q_n,r_n},1≤ p_n+1≤min{q/q-1, r_n+1}and v⃗∈ A^(p_1,…,p_n+1)_(q_1,…,q_n), then the bound (<ref>) holds true. The proof uses the sparse bound (<ref>) in exactly the same fashion as <cit.>. When q≤ 1, we are not aware of a fully multilinear sufficient condition on the weights leading toestimate (<ref>); Theorem <ref> is a partial substitute in this context.As the multilinear weighted classes (<ref>) are not amenable to (restricted range) extrapolation, Theorem <ref>, as well as its corollaries described in the next section, cannot be obtained within the multilinear extrapolation theory developed in the recent article <cit.>.§.§ An example: the bilinear Hilbert transformWe show how, in view of the scalar sparse domination results of <cit.>, Theorem <ref> applies to a class of operators which includes the bilinear Hilbert transform. LetT_mbe bilinear operators whose action on Schwarz functions is given by ł T_m(g^1,g^2),g^3=̊∫_ξ_1+ξ_2+ξ_3=0m(ξ)∏_j=1^3 g^j(ξ_j)ξ̣.Herembelongs to the classℳofbilinearFourier multipliers with singularity along the one dimensional subspace{ξ∈^3: ξ_1=ξ_2}; thatissup_m∈ℳsup_|α| ≤ Nsup_ξ_1+ξ_2+ξ_3=0| ξ_1-ξ_2 |^α| ∂_α m (ξ)| ≲_N 1.The bilinear Hilbert transform <cit.> corresponds to the (formal) choicem(ξ)=sign(ξ_1-ξ_2). Sparse bounds for this type of operators were first established, and fully characterized in the open range, in <cit.>, where it was proved that sup_m ∈ℳT_m _p⃗ <∞1<p_1,p_2,p_3<∞, ∑_j=1^31/min{p_j,2}<2.Therefore, Theorem <ref> withn=2may be applied for anyp⃗in the range (<ref>).It is easy to see that there existssuch ap⃗with1≤q_1≤p_1,1≤q_2≤p_2for all(q_1,q_2)belonging to the sharp open range of unweighted strong-type estimates for the multipliers{T_m:m∈ℳ}, namely1<q_1,q_2 ≤∞, 2/3 <q<∞. Therefore,Theorem <ref>, together withits version forq>1described in Remark <ref>, yield weighted, vector-valued boundedness of the multipliers{T_m:m∈ℳ}for weightsv_1,v_2satisfying conditions 2. and 3.and the exponents recover the full unweighted range. Weighted bounds in such a full range, under more stringent assumption on the weights were obtained in <cit.> by extrapolation of the results of <cit.>. The vector-valued analogue of the results in <cit.> was instead provedin <cit.> by making use of vector-valued sparse bounds in a different way. 
To illustrate the subtle difference between the class of weights allowed in <cit.> and those falling within the scope of Theorem <ref>, we particularize our result to the diagonal case q_1=q_2=2q with 2/3<q<∞. This is done for simplicity of description of the multilinear classes A_(q_1,…,q_n)^(t_1,…, t_n+1) when t_n+1=1, t_1=⋯=t_n, but off-diagonal results can also be obtained in a similar fashion. Note that the tuple (parametrized by s) p_1=p_2=2/s, p_3=1/(2-s)+δ, 1≤ s≤ 3/2 satisfies the conditions in (<ref>) for all δ>0. As noted in <cit.>, if qs≥1, then (v_1,v_2) ∈ A_(2q,2q)^(2/s,2/s,1) ⟺ v_1,v_2 ∈ RC( 1/(1-qs), 1/(1+qs)) ⊋ A_qs, v=(v_1v_2)^1/2∈ A_2qs. Recall from <cit.> that for -∞≤α< β≤∞, the weight class RC(α, β) contains those weights w on ^d such that ⟨ w ⟩_β, Q≤ C ⟨ w ⟩_α, Q, with C uniform over all cubes Q of ^d. In particular, for 1≤ t<∞, A_t = RC( 1/(1-t), 1 ), RH_t= RC( 1, t ), and the strict inclusion in (<ref>) follows from the obvious relations α≤γ≤δ≤β ⟹ RC(α, β) ⊂ RC(γ, δ). This observation characterizes the weights that will verify the second assumption of Theorem <ref>. Finally, rewriting the third assumption for our choice of tuple p⃗ yields the following result, which strictly contains the diagonal case of the main results of <cit.> (see also <cit.> for the vector-valued analogue). Let 2/3<q≤ 1 and let v_1,v_2 be weights on the real line. Assume that there exist s∈[1/q, 3/2], t ∈[1, 1/(s-1)) such that v_1,v_2 ∈ RC( 1/(1-qs), 1/(1+qs)) ⊋ A_qs and v:=(v_1v_2)^1/2∈ A_min{t,2qs}∩ RH_1/(1-t(s-1)). Then the vector-valued strong type bound T={T_m_j:m_j ∈ℳ}: ∏_j=1^2 L^2q(v_j ; ℓ^r_j ) → L^q(v; ℓ^r) holds true whenever min{r_1,r_2}≥ 2/s, r_3= r'≥ 1/(2-s). For instance, the estimate, valid for all vector-valued tuples with min{r_1,r_2,r_3} ≥ 2, T: ∏_j=1^2 L^2q(v_j ; ℓ^r_j ) → L^q(v; ℓ^r), v_1,v_2 ∈ A_3q/2, v ∈ A_3q/2∩ RH_2, 2/3<q≤ 1, follows by taking s=3/2, t=1 in Theorem <ref>. This result includes <cit.>, in vector-valued form. §.§ Acknowledgment The authors would like to thank Kangwei Li for fruitful discussions on the weak-type weighted theory of multisublinear maximal functions.
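As a concrete companion to the RC classes above, the following sketch estimates the RC(α,β) constant sup_Q ⟨w⟩_β,Q/⟨w⟩_α,Q of a power weight by scanning dyadic subintervals of (0,1). A finite scan only lower-bounds the supremum, and the weight and exponents are arbitrary test choices; the two calls probe A_2 = RC(-1,1) and RH_2 = RC(1,2) in the notation just introduced.

import numpy as np

def p_mean(w, lo, hi, p):
    return np.mean(w[lo:hi] ** p) ** (1.0 / p)    # <w>_{p,Q}, p may be negative

def rc_constant(w, alpha, beta, depth=10):
    n, worst = len(w), 0.0
    for k in range(depth + 1):
        step = n // 2 ** k
        for i in range(2 ** k):
            lo, hi = i * step, (i + 1) * step
            worst = max(worst, p_mean(w, lo, hi, beta) / p_mean(w, lo, hi, alpha))
    return worst

n = 2 ** 14
x = (np.arange(n) + 0.5) / n
w = x ** 0.4                                      # power weight, exponent < 1
print(rc_constant(w, alpha=-1.0, beta=1.0))       # A_2 = RC(-1, 1): bounded
print(rc_constant(w, alpha=1.0, beta=2.0))        # RH_2 = RC(1, 2): bounded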
http://arxiv.org/abs/1709.09647v1
{ "authors": [ "Amalia Culiuc", "Francesco Di Plinio", "Yumeng Ou" ], "categories": [ "math.CA", "42B25, 42B20" ], "primary_category": "math.CA", "published": "20170927173835", "title": "A sparse estimate for multisublinear forms involving vector-valued maximal functions" }
Department of Modern Physics, University of Science and Technology of China, Hefei, Anhui 230026, China The one-loop correction to heavy quark pair back-to-back production in unpolarized semi-inclusive deep inelastic scattering is given in this work in the framework of transverse momentum dependent (TMD) factorization. Both unpolarized and linearly polarized TMD gluon distribution functions are taken into account. A subtraction method based on diagram expansion is used to obtain finite hard coefficients. It is found that the soft and collinear divergences of the one-loop amplitude are proportional to the tree-level amplitude and can be expressed through several basic scalar triangle and bubble integrals. The subtraction of these divergences is spin independent. Beyond tree level an additional soft factor related to the final heavy quark pair must be added to the factorization formula. This soft factor affects the azimuthal angle distribution of the virtual photon in a nonperturbative way. Integrating over the virtual photon azimuthal angle, we construct three weighted cross sections, which depend on only three additional integrated soft factors. These weighted cross sections can be used to extract the linearly polarized gluon distribution function. In addition, the lepton azimuthal angle is left unintegrated in this work, which provides more observables. All hard coefficients relevant to the lepton and virtual photon azimuthal angle distributions are given at one-loop level. Back-to-back heavy quark pair production in Semi-inclusive DIS Guang-Peng Zhang December 30, 2023 ==============================================================
For hadron reactions, the problem is the potential breaking of TMD factorization, as pointed out in <cit.>,<cit.>. All these proposed schemes require that the detected final states be colorless (for quarkonium the heavy quark pair from the hard interaction is in a color singlet). In contrast to hadron reactions, semi-inclusive deep inelastic scattering (SIDIS) with heavy quark pair or di-jet production is expected to have an exact TMD factorization, and can be used to extract some definite information about gluon TMD distribution functions. <cit.> have examined the effect of H^⊥ in heavy quark pair and di-jet production. For heavy quark pair production, the study in <cit.> is at tree level, or leading order of α_s, with a simplified TMD formula. At this order the TMD formula contains only gluon TMD distribution functions as nonperturbative quantities. At higher orders of α_s, the soft radiation from the final heavy quark pair will introduce an azimuthal angle dependent soft factor into the TMD formula and affect the azimuthal angle distribution of the virtual photon, i.e., ϕ_q, which is used to extract H^⊥ in <cit.>. Because of the azimuthal angle dependent soft factor, a complete description of the ϕ_q distribution in the cross section is impossible. Instead we construct three azimuthal angle weighted cross sections, which depend on only three integrated soft factors. These weighted cross sections may help to extract H^⊥. In <cit.>, the TMD factorization for this process is examined at one-loop level, with H^⊥ not taken into account. It is found there that all collinear and soft divergences can be absorbed into the gluon distribution function and a soft factor with the azimuthal angle integrated. But part of the finite correction to the hard coefficients is absent in <cit.>, and the study is confined to G(x,p_⊥^2). In this paper, we want to study the azimuthal angle effect introduced by the soft factor mentioned above. We will examine the factorization formula for both G(x,p_⊥) and H^⊥(x,p_⊥) based on the diagram expansion method presented in <cit.>. This method is different from the method in <cit.>, which uses a single gluon to replace the initial hadron. The method based on diagram expansion enables us to obtain the finite hard coefficients for various parton distributions in a systematic way. In addition, we keep the lepton azimuthal angle unintegrated in this work, which can provide more observables. The structure of this paper is as follows: in Sect.II we illustrate the kinematics of heavy quark pair production in SIDIS; in Sect.III, tree level factorization and the resulting angular distributions are discussed, including lepton azimuthal angle distributions; in Sect.IV, the factorization formula is examined at one-loop level and the hard coefficients are calculated. In this section, the soft factors are also constructed and the virtual one-loop corrections to the soft factors and distribution functions are calculated; in Sect.V, the effect of the soft factor is discussed and three weighted cross sections are constructed to extract H^⊥; Sect.VI is our summary. § KINEMATICS We consider the scattering of an electron and a hadron e(l)+h_A(P_A)→ e(l')+Q(k_1)+Q̅(k_2)+X, where X represents undetected hadrons<cit.>. At order α_em^2, this process is dominated by the exchange of a virtual photon between the electron and the hadron. The momentum of the virtual photon is q^μ=l^μ-l'^μ. In the perturbative region, Q^2=-q^2≫Λ_QCD^2. Q,Q̅ with momenta k_1,k_2 are the heavy quark and anti-quark produced in the hard collision. In the center-of-mass (c.m.)
frame of virtual photon ^* and initial hadron h_A, we demand Q and Q̅ are nearly back-to-back.For convenience we defineK^μ=k_1^μ+k_2^μ, R^μ=1/2(k_1^μ-k_2^μ).Our requirement for the final quarks then becomesR_T^μ∼ Q, K_T^μ≪ Q,where R_T and K_T are the transverse components of R and K, respectively. In the c.m. frame of ^* and h_A, the transverse vector is relative to Z-axis. Here P⃗_A is along +Z-axis, and q⃗ is along -Z-axis.By considering only the contribution of virtual photon, the cross section we want to study can be written asd=1/2se^4 Q_q^2/Q^4(2π)^4 d^3 l'/(2π)^3 2l'^0d^3 k_1/(2π)^3 2E_1d^3 k_2/(2π)^3 2E_2L^μνW_μν,where the leptonic and hadronic tensors areL^μν=2(l^μl'^ν+l^νl'^μ-g^μνl· l')≐ 4l^μ l^ν +q^2 g^μν; W^μν= ∑_X⟨ P_A s|j^ν|QQ̅X⟩⟨ QQ̅X |j^μ(0)|P_A s⟩^4(P_A+q-k_1-k_2-P_X).In leptonic tensor we have used QED gauge invariance q^μ W_μν=0 to eliminate all q^μ and q^ν in leptonic tensor. This will simplify our calculation.Definex_B=Q^2/2P_A· q,y=P_A· q/P_A· l, z_1=P_A· k_1/P_A· q, z_2=P_A· k_2/P_A· q,s=(P_A+l)^2.The phase space integration measure can be written asd^3 l'/2l'^0=ys/4dx_B dy dψ_l,d^3 k_1/2E_1d^3 k_2/2E_2=1/4 dy_1 dy_2 d^2 K_T d^2 R_T,where ψ_l is the azimuthal angle between l'_T and R_T; y_1,2 are the rapidities of final quark and anti-quark, respectively.Then, the differential cross section becomesd/dx_B dy dψ_l dy_1 dy_2 d^2K_T d^2 R_T= y _em^2 Q_q^2/64π^3 Q^4L^μνW_μν,where Q_q is the electric charge of heavy quark in unit of the charge of electron. All information about the cross section now is contained in the hadronic tensor W^μν. In next sections we will focus on the factorization of hadronic tensor.^* N frame(c.m. frame of ^* and h_A) is useful to describe the cross section, but it is not very convenient for the calculation, especially for the factorization of W^μν, as we discussed later. More convenient frame is the hadron frame defined as the c.m. frame of final heavy quark and antiquark. In this frame, P⃗_A is still along +Z-axis, but virtual photon now has a small transverse momentum q_⊥. We use light-cone coordinate representation in this paper. In this representation any vector a^μ is written as (a^+,a^-,a_⊥^μ), where the transverse components are relative to Z-axis and denoted by a_⊥; ±-components are defined by two light-like vectors n^μ and n̅^μ with n·n̅=1, i.e., a^+= n· a, a^-=n̅· a. The decomposition of relevant momenta is likeP_A^μ= P_A^+ n̅^μ, K^μ=K^+ n̅^μ+K^- n^μ, q^μ=q^+ n̅^μ+q^- n^μ +q_⊥^μ.The following two tensors are useful to project transverse components of a vector:g_⊥^μν=g^μν-n^μn̅^ν-n^νn̅^μ,_⊥^μν=^μν-+=^μνρτn̅_ρn_τ,and our convention for -tensor is ^0123=1 so that _⊥^12=1. For any four-vector a^μ its transverse component is a_⊥^μ=g_⊥^μνa_ν.Given these two frames, q_⊥ in hadron frame and K_T in ^*N frame can be connected to each other. In ^* N frame,K_T^μ=K^μ - P_A^μ - q^μ.Hence, the transverse component of K_T in the hadron frame is(K_T)_⊥^μ=- q_⊥^μ=-(z_1+z_2)q_⊥^μ.On the other hand,(K_T)_⊥^μ=g_⊥^μνK_Tν=K_T^μ-P_A^μK_T^2/P_A· K.Combining these two equations we haveK_T^μ=-(z_1+z_2)q_⊥^μ + P_A^μK_T^2/P_A· K.This is an exact result. In the region we are considering, K_T is a small quantity, then the second term with K_T^2 can be ignored at leading power level.In this work all calculations will be performed in hadron frame. 
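Since all subsequent manipulations use this light-cone bookkeeping, a small numerical helper may clarify the conventions. It is a sketch assuming only the signature (+,-,-,-) and the normalization n·n̄=1; the test vector is arbitrary.

import numpy as np

g = np.diag([1.0, -1.0, -1.0, -1.0])                     # metric (+,-,-,-)
nbar = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2.0)     # light-like, along +Z
n    = np.array([1.0, 0.0, 0.0, -1.0]) / np.sqrt(2.0)    # light-like, along -Z
assert abs(n @ g @ nbar - 1.0) < 1e-12                   # n.nbar = 1

def lc_decompose(a):
    # a = a^+ nbar + a^- n + a_perp with a^+ = n.a and a^- = nbar.a
    a_plus, a_minus = n @ g @ a, nbar @ g @ a
    return a_plus, a_minus, a - a_plus * nbar - a_minus * n

a = np.array([5.0, 1.2, -0.7, 4.0])                      # arbitrary test vector
ap, am, aperp = lc_decompose(a)
assert np.allclose(ap * nbar + am * n + aperp, a)
assert abs(n @ g @ aperp) < 1e-12 and abs(nbar @ g @ aperp) < 1e-12
print(ap, am, aperp)                                     # aperp = (0, 1.2, -0.7, 0)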
To define azimuthal angles on the transverse plane we assign k⃗_1⊥ along +X-axis, and Y-axis is defined by _⊥-tensor, that is,X^μ=k_1⊥^μ/|k⃗_1⊥|, Y^μ=_⊥^μνX_ν.Then the azimuthal angles of virtual photon and initial lepton, ϕ_q and ϕ_l, are defined as the angles of q_⊥ and l_⊥ relative to X-axis, respectively, as shown in Fig.<ref>. At leading power of q_⊥ expansion, ϕ_l is equal to ψ_l, which is defined in ^*N frame.With these two vectors X^μ and Y^μ the transverse metric and anti-symmetric tensor can be written into another formg_⊥^μν=-X^μ X^ν -Y^μ Y^ν, _⊥^μν=X^μ Y^ν-X^ν Y^μ.Since n,n̅,X,Y form a complete basis in four-dimension space, the leptonic tensor can be expressed through these four vectors. Equivalently, one can choose P_A,K,X,Y as basis. The advantage of this basis is the calculation can be done in a covariant way.The complete basis for symmetric rank-2 tensor isA_i^μν= {ñ^μñ^ν, ñ^μ X^ν+ñ^ν X^μ, ñ^μ Y^ν+ñ^ν Y^μ,X^μ X^ν+Y^μ Y^ν, X^μ X^ν-Y^μ Y^ν, X^μ Y^ν+X^ν Y^μ},i=1,⋯,6.With this basis leptonic tensor in hadron frame is expressed asL^μν=Q^24(1-y)/y^2ñ^μñ^ν + Q^21+(1-y)^2/y^2(X^μ X^ν+Y^μ Y^ν) + 2Q^2√(1-y)(y-2)/y^2cosϕ_l (ñ^μ X^ν + ñ^ν X^μ)+2Q^2√(1-y)(y-2)/y^2sinϕ_l (ñ^μ Y^ν + ñ^ν Y^μ) + 2Q^21-y/y^2cos(2ϕ_l)(X^μ X^ν-Y^μ Y^ν)+2Q^21-y/y^2sin(2ϕ_l)(X^μ Y^ν+X^ν Y^μ),whereñ^μ=1/√(2_k P_A· K+_k^2 K^2)(P_A+_k K)^μ, _k=-P_A· q/K· q.In the decomposition we have considered the constraint of QED gauge invariance, that is, P_A^μ and K^μ must be combined into ñ^μ to ensure q_μ A_i^μν=0. Since q· X∼ q· Y∼𝒪(q_⊥), QED gauge invariance is preserved at leading power of q_⊥. One can check that ñ^μ can have another representation likeñ^μ=1/Q(q+2 x_B P_A)^μ+𝒪(q_⊥).This representation simplifies our calculation dramatically. Since hadronic tensor does not depend on ϕ_l, this decomposition exhibits all possible lepton azimuthal angle distributions for unpolarized lepton beam. § TREE LEVEL AZIMUTHAL ANGLE DEPENDENCEIn hadron frame when q_⊥ is small, W^μν can have a TMD factorization formula at the leading power of q_⊥ expansion. At tree level the formula readsW^μν= x_B/x Q^2 M_p(N_c^2-1)(1-z_1-z_2)H^μν_∫ d^2 p_⊥^2(p_⊥+q_⊥)Φ^(x,p_⊥) +𝒪(q_⊥/Q,q_⊥/k_1⊥),where the gluon TMDPDF is<cit.>Φ^(x,p_⊥)= (p^+/2M_p)^-1∫dξ^- d^2ξ_⊥/(2π)^3e^ix ξ^-P_A^++iξ_⊥· p_⊥⟨ P_A|G_a⊥^+(0)G_a⊥^+(0^+,ξ^-,ξ_⊥)|P_A⟩ =-g_⊥^G(x,p_⊥^2)+2p_⊥^ p_⊥^-g_⊥^p_⊥^2/2M_p^2H^⊥(x,p_⊥^2),We work in Feynman gauge ∂^μ G_μ=0 in this work. The gauge links in Φ^ are suppressed for simplicity. There is a color summation in the definition of gluon TMDPDF. Therefore, there is a color average in the hard part. In formula eq.(<ref>), the average factor 1/(N_c^2-1) has been extracted, so, the hard part H^μν_ in eq.(<ref>) contains a summation over color.The derivation of this factorization formula for hadronic tensor has been given in <cit.> in detail. Under high energy limit or Q^2→∞, the general structure of the interaction factorizes into the form shown in Fig.<ref>, where the two central bubbles represent hard interaction, in which all propagators are far off-shell, and the lower bubble represents the jet part of initial hadron, in which all propagators are collinear to P_A. All possible soft interactions are ignored in Fig.<ref>. These soft interactions do not appear at the leading order of _s. We will discuss them in one-loop correction.For this process, the hard scales are Q and k_1⊥, which are taken to be the same order in this paper. Under the limit q_⊥≪ Q, k_1⊥, the above factorization formula is obtained from the expansion in ≃ q_⊥/Q, q_⊥/k_1⊥. 
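Before proceeding with the tree-level analysis, the gauge-invariance property built into the leptonic-tensor basis above admits a one-line check: with ñ=(q+2x_B P_A)/Q one has q·ñ=(q^2+2x_B P_A· q)/Q=(-Q^2+Q^2)/Q=0 identically. A numerical sketch with an invented DIS configuration (all momentum values are arbitrary):

import numpy as np

g = np.diag([1.0, -1.0, -1.0, -1.0])
dot = lambda a, b: a @ g @ b

P_A = np.array([50.0, 0.0, 0.0, 50.0])       # massless hadron along +Z
q   = np.array([10.0, 0.3, -0.2, -12.0])     # space-like virtual photon
Q2  = -dot(q, q)
x_B = Q2 / (2.0 * dot(P_A, q))
ntilde = (q + 2.0 * x_B * P_A) / np.sqrt(Q2)

print(dot(q, ntilde))        # 0 up to rounding: QED gauge invariance
print(dot(ntilde, ntilde))   # 1 exactly for P_A^2 = 0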
Leading power contribution of this process is given by collinear partons, that is, the momentum of initial gluon satisfies p^μ=(p^+,p^-,p_⊥)∼ Q(1,^2,). According to this scaling law p_⊥, q_⊥ are of the same order, and then the delta function ^2(p_⊥+q_⊥) itself is already of leading power. Therefore, both p_⊥ and q_⊥ can be ignored in H^μν_. This means H^μν_ are on-shell amplitudes. This fact ensures QED gauge invariance of hadronic tensor.For factorization in Feynman gauge gluon distributions may have a problem concerning super-leading power contribution, which appears when the gluon in Fig.<ref> is longitudinally polarized, i.e., G^+. Such a gluon will give a 1/Λ enhancement compared to the leading power contribution we stated above. Note that the delta function in the hard part, ^2(p_⊥+q_⊥), should not be expanded, then super-leading power contribution is given by the longitudinal gluon with p_⊥=0. For the one-gluon case considered here, such a contribution from longitudinally gluon vanishes due to Ward identity. But in Feynman gauge, there can be any number of longitudinal gluons connecting the central bubble and lower bubble, whose contribution is not power suppressed. For the case with two gluons connecting the central bubble and lower bubble as shown in Fig.<ref>(a), <cit.> has given an explicit calculation to show the super-leading power contribution is absent even when the transverse momentum of the parton is preserved. In addition, at leading power one of the two gluons becomes gluon field strength tensor, the other is absorbed into gauge link as shown in Fig.<ref>(b). With this conclusion we will simply take the gluon in Fig.<ref> as transversely polarized even at one-loop level, and will consider only the one-gluon case in our calculation. This causes no problem about the hard coefficients at least at one-loop level. Because p_⊥ is set to zero, there is only one independent transverse momentum k_1⊥ in H^μν_. Then previous vector basis for L^μν can also be applied to H^μν_. That is,H^μν_=∑_i,jH_ijA_i^μνB^j_,with A_i^μν given by eq.(<ref>) andB_j^= {X^ X^+Y^ Y^, X^ X^-Y^ Y^, X^ Y^+X^ Y^},j=1,2,3.Due to P-parity conservation the number of Y vector in A_i B_j must be even. So, there are only following 10 rather than 18 nontrivial projected hard coefficients:H_ij={H_11,H_12,H_21,H_22,H_33,H_41,H_42,H_51,H_52,H_63}.With these projected hard coefficients, the angular distributions on ϕ_q and ϕ_l can be obtained asd/dψ_l dy dx_B dy_1 dy_2 d^2 K_T d^2 R_T [Q_q^2 _em^2 x_B y(1-z_1-z_2)/16π^3 Q^4 x M_p(N_c^2-1)]^-1 = 2(1-y)/y^2(H_11⟨ w_1 G⟩ -cos(2ϕ_q) H_12⟨ w_2 H^⊥⟩) +1+(1-y)^2/y^2(H_41⟨ w_1 G⟩ -cos(2ϕ_q) H_42⟨ w_2 H^⊥⟩) -2√(1-y)(y-2)/y^2cosϕ_l(H_21⟨ w_1 G⟩ -cos(2ϕ_q) H_22⟨ w_2 H^⊥⟩) +2√(1-y)(y-2)/y^2sinϕ_l sin(2ϕ_q) H_33⟨ w_2 H^⊥⟩ +2(1-y)/y^2cos2ϕ_l(H_51⟨ w_1 G⟩ -cos(2ϕ_q) H_52⟨ w_2 H^⊥⟩) -2(1-y)/y^2sin2ϕ_lsin(2ϕ_q) H_63⟨ w_2 H^⊥⟩.where⟨ w(p_⊥, q_⊥) f(x,p_⊥^2)⟩≡∫ d^2 p_⊥^2(p_⊥+q_⊥) w(p_⊥, q_⊥) f(x,p_⊥^2),andw_1(p_⊥, q_⊥)=1, w_2(p_⊥, q_⊥)=2(p_⊥· q_⊥)^2-p_⊥^2 q_⊥^2/2M_p^2 q_⊥^2. The tree level hard coefficients can be obtained by replacing the central bubbles in Fig.<ref> with the diagrams in Fig.<ref>. For the subprocess ^*+g→ QQ̅, there are three independent variables:s_1=2k_1· k_2, t_1=-2p· k_1, u_1=-2p· k_2,where p^μ=x P_A^μ is the momentum of initial gluon. 
Another independent parameter we choose in our calculation is quark mass m.For convenience we defineH̃_ij= 1/g_s^2 N_cC_F H_ij.After some simplifications the results areH̃_11= H̃_12=16 (2 m^2+s_1+t_1+u_1)(m^2(t_1^2+u_1^2)-s_1 t_1u_1)/t_1 u_1(t_1+u_1)^2=16 Q^2 |R_⊥|^2/t_1 u_1 ,H̃_21= H̃_22=Q|R_⊥|4 (t_1-u_1) (t_1u_1 (2 s_1+t_1+u_1)-2 m^2(t_1^2+u_1^2))/t_1^2 u_1^2 (t_1+u_1),H̃_33= Q|R_⊥| 4(t_1-u_1)/t_1 u_1,H̃_41= 2/t_1^2 u_1^2(t_1+u_1)^2(-8 m^4 (t_1^3 u_1+2t_1^2 u_1^2+t_1u_1^3+t_1^4+u_1^4). -2 m^2(s_1 (-4 t_1^3 u_1-2 t_1^2u_1^2-4 t_1u_1^3+t_1^4+u_1^4)+(t_1+u_1)(t_1^2+u_1^2)^2) .+t_1 u_1 (t_1^2+u_1^2)(2 s_1 (t_1+u_1)+2s_1^2+(t_1+u_1)^2)),H̃_42= -4 (m^2(t_1^2+u_1^2)-s_1 t_1u_1) (4 m^2 (t_1u_1+t_1^2+u_1^2)+(t_1^2+u_1^2)(s_1+t_1+u_1))/t_1^2 u_1^2 (t_1+u_1)^2,H̃_51= 8 (t_1 u_1(s_1+t_1+u_1)-m^2(t_1^2+u_1^2))(m^2(t_1^2+u_1^2)-s_1 t_1u_1)/t_1^2 u_1^2(t_1+u_1)^2,H̃_52= -4 (2 m^4(t_1^2+u_1^2)^2-2 m^2t_1 u_1 (t_1^2+u_1^2)(2 s_1+t_1+u_1)+t_1^2u_1^2 (2 s_1(t_1+u_1)+2s_1^2+(t_1+u_1)^2))/t_1^2 u_1^2(t_1+u_1)^2,H̃_63= 4 (2 m^2(t_1^2+u_1^2)-t_1 u_1(2s_1+t_1+u_1))/t_1 u_1(t_1+u_1),whereQ^2=-q^2=-(2m^2+s_1+t_1+u_1), R_⊥^2=-|R_⊥|^2=k_1⊥^2=m^2(t_1^2+u_1^2)-s_1 t_1 u_1/(t_1+u_1)^2.From C-parity conservation H_21,22,33 are anti-symmetric, while other ones are symmetric in t_1 and u_1. Our result satisfies this symmetry. The momentum fraction x can be obtained from s_1+t_1+u_1=-2m^2-Q^2, t_1=-2p· k_1=-2x P_A· k_1 and u_1=-2p· k_2=-2x P_A· k_2. The explicit value isx= x_Bs_1+2m^2+Q^2/Q^2(z_1+z_2)=x_Bs_1+2m^2+Q^2/Q^2,in which all variables can be measured in experiment and in the last equality we have used z_1+z_2=1.§ ONE-LOOP CORRECTIONFor TMD factorization here, the relative transverse momentum of final heavy quark and antiquark is fixed. The soft divergence from virtual correction cannot be cancelled by real correction, since the phase space integration is incomplete for real correction. Usually, a soft factor is introduced to absorb such soft divergences. The operator form of the soft factor can be obtained by using eikonal approximation for soft gluons emitted by final heavy quark pair and by initial gluon, see <cit.> for example. The procedure is standard and the heavy quark soft factor isS^Q(b_⊥)= 1/Tr(T^c T^c)⟨ 0|Û^†_ṽ(b_⊥,-∞)_aeTr[U_v_2(∞,b_⊥)T^e U^†_v_1(∞,b_⊥) U_v_1(∞,0)T^dU^†_v_2(∞,0)]_edÛ_ṽ(0,-∞)|0⟩.The definition of gauge link isU_v(∞,b_⊥)= P exp[-ig_s∫_0^∞ d v· G(b_⊥+ v)],Û_v(b_⊥,-∞)= P exp[-ig_s∫_-∞^0 d v·Ĝ(b_⊥+ v)],with v an arbitrary vector and P the path-ordering product so that fields with smallerare always put on right hand side of the fields with larger . U_v and Û_v are defined in fundamental and adjoint representations of color group, respectively. Correspondingly, G^μ=G^μ_a T^a and Ĝ^μ =G^μ_a T̂^a are gluon fields in fundamental and adjoint representations. Note that T̂^c_ba=-if^cba and Tr(T^a T^b)=^ab/2 in this work. With such definitions of the gauge link our covariant derivative is D^μ=∂^μ +ig_s G_a^μ T^a. The Tr[⋯] in eq.(<ref>) acts on matrices in fundamental representation. Then, one can check the definition eq.(<ref>) is color gauge invariant. At order of _s^0, S^Q(b_⊥) is normalized to 1. The definition in eq.(<ref>) is given in coordinate space. To get the definition in momentum space one should do a Fourier transformation, i.e.,S^Q(l_⊥)=∫d^2 b_⊥/(2π)^2 e^ib_⊥· l_⊥S^Q(b_⊥).At order of _s^0, S^Q(l_⊥)=^2(l_⊥).The sources of the gauge links in S^Q are clear: U_v_1 and U_v_2 are obtained from the coupling of soft gluon to on-shell heavy quark or anti-quark by using eikonal approximation. 
Here v_1,2=k_1,2/m are the four-velocities of heavy quark and anti-quark, respectively; Û_ṽ is extracted from the coupling of soft gluon to initial hadron or gluon. Here ṽ is collinear to the momentum of initial hadron. If ṽ^2=0, S^Q has a light-cone divergence, and thus is not well-defined. As a regulator ṽ is modified to be a little away from the light-cone direction but still with ṽ_⊥ vanishing, i.e., ṽ^+≫ṽ^- and ṽ_⊥=0.Without the soft factor the tree level TMD formula eq.(<ref>) cannot be right. The correct one should beW^μν = x_B(1-z_1-z_2)/x Q^2 M_p(N_c^2-1)∫ d^2 p_⊥ d^2 l_1⊥ d^2 l_2⊥^2(p_⊥+q_⊥-l_1⊥-l_2⊥)H^μν_Φ^(x,p_⊥) S^Q(l_1⊥)S̅(l_2⊥),where H^μν_ is the hard part.S̅(l_⊥) appears in order to avoid the double counting of soft divergences, since the soft divergence in the correction to gluon TMDPDF Φ^ is also contained in the correction to S^Q. Except that now the gauge link is defined in adjoint representation, S̅(l_⊥) is the same as that defined in of SIDIS<cit.>, i.e.,S̅(l_⊥)= ∫d^2 b_⊥/(2π)^2e^ib_⊥· l_⊥N_c^2-1/⟨ 0|Û^†_ṽ(b_⊥,-∞)_aeÛ^†_v(∞,b_⊥)_edÛ_v(∞,0)_dcÛ_ṽ(0,-∞)_ca|0⟩.In this soft factor the vector ṽ has appeared in S^Q. Another vector appearing in the gauge links is v^μ=(v^+,v^-,0_⊥) with v^-≫ v^+. As stated before, the little offshellness of v and ṽ is used to regularize light-cone singularity. Other regulartors for light-cone singularity have been given by <cit.>,<cit.>. The calculation procedure with these regulators is the same, so, we will not calculate the hard coefficients once more using the regulators in <cit.>,<cit.>.The calculation of one-loop hard coefficients can be performed in the same way as <cit.>. One-loop hard coefficient H^(1)_finite is given by∫_⊥ H^(1)_finiteΦ_(x,p_⊥) S_Q(l_1⊥)S̅(l_2⊥) = ∫_⊥ H^(1)Φ_(x,p_⊥) S_Q(l_1⊥)S̅(l_2⊥) -∫_⊥ H^(0)[Φ^(1)_(x,p_⊥) S_Q(l_1⊥)S̅(l_2⊥). .+Φ_(x,p_⊥) S^(1)_Q(l_1⊥)S̅(l_2⊥) +Φ_(x,p_⊥) S_Q(l_1⊥)S̅^(1)(l_2⊥) ],∫_⊥= ∫ d^2 p_⊥ d^2 l_1⊥ d^2 l_2⊥^2(p_⊥+q_⊥-l_1⊥-l_2⊥),where H^(1),Φ^(1),S_Q^(1),S̅^(1) represents the one-loop corrections to hard scattering part, gluon TMDPDF, and the two soft factors, respectively. One-loop integral in H^(1) includes parton-like contribution<cit.>, for which the loop integral is collinear to p or P_A. This parton-like part has been included in tree level result and should be subtracted to avoid calculating tree level diagrams by twice. For details of subtraction one can consult <cit.>,<cit.>,<cit.>. At leading power one can show that all real corrections to hadronic tensor can be subtracted by the correction to gluon distribution and soft factors. If the final gluon is collinear to P_A, <cit.> has shown that its coupling to heavy quarks can be absorbed into gauge link in gluon distributions. If the final gluon is soft, one can use eikonal approximation to transform the coupling of soft gluon to collinear gluon or to heavy quarks into gauge links in heavy quark soft factor S^Q. The overlap between Φ^ and S^Q in soft region is subtracted by another soft factor S̅.Hence, only virtual corrections contribute to one-loop hard coefficients. For virtual correction, the delta function ^2(p_⊥+q_⊥-l_1⊥-l_2⊥) has been of leading power. So, p_⊥,q_⊥ can be ignored in one-loop hard part H^(1), and then H^(1) is the product of on-shell amplitudes for subprocess ^*+p→ QQ̅. It is in this way the QED gauge invariance is preserved. If the TMD factorization formula is correct the subtracted hard coefficients H^(1)_finite must be free of any infrared(IR) divergence. 
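Since the tree-level hard part H^μν(0) is exactly the quantity multiplying all one-loop subtractions, a quick numerical cross-check of the tree-level coefficients listed in the previous section is worthwhile. The sketch below transcribes H̃_11 and H̃_33 and verifies the quoted closed form H̃_11 = 16Q^2|R_⊥|^2/(t_1u_1) together with the (anti)symmetry in t_1↔ u_1; the sample point is an arbitrary DIS-region choice with Q^2>0 and |R_⊥|^2>0.

import numpy as np

def Q2(s1, t1, u1, m):                 # Q^2 = -(2m^2 + s1 + t1 + u1)
    return -(2*m**2 + s1 + t1 + u1)

def R2abs(s1, t1, u1, m):              # |R_perp|^2 = -R_perp^2
    return (s1*t1*u1 - m**2*(t1**2 + u1**2)) / (t1 + u1)**2

def H11(s1, t1, u1, m):
    return 16*(2*m**2+s1+t1+u1)*(m**2*(t1**2+u1**2)-s1*t1*u1)/(t1*u1*(t1+u1)**2)

def H33(s1, t1, u1, m):
    return 4*np.sqrt(Q2(s1,t1,u1,m)*R2abs(s1,t1,u1,m))*(t1-u1)/(t1*u1)

s1, t1, u1, m = 9.0, -7.0, -11.0, 1.5
assert np.isclose(H11(s1,t1,u1,m), 16*Q2(s1,t1,u1,m)*R2abs(s1,t1,u1,m)/(t1*u1))
assert np.isclose(H11(s1,t1,u1,m),  H11(s1,u1,t1,m))   # symmetric in t1, u1
assert np.isclose(H33(s1,t1,u1,m), -H33(s1,u1,t1,m))   # anti-symmetric
print(H11(s1,t1,u1,m), H33(s1,t1,u1,m))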
It should be noted that in formula eq.(<ref>) all gluon TMDPDF, S^Q and S̅ are renormalized quantities, i.e., the UV divergences in these functions are removed by MS scheme. Thus these nonperturbative quantities all contain a renormalization scale μ. In this work, both UV and IR divergences are regularized in dimensional scheme with D=4-. Specially, in our scheme only loop momentum is generalized to D-dimension space, all other momenta are defined in four-dimension space. This is the four-dimensional-helicity(FDH) scheme(see <cit.> and references therein). This scheme is convenient for the tensor decomposition of hard part as done in eq.(<ref>). Next we will first calculate the virtual correction to gluon TMDPDF and to the two soft factors and then present the structure of one-loop virtual correction to hadronic tensor. In the last subsection we present the explicit result of hard coefficients after subtraction. §.§ Virtual correction to nonperturbative quantitiesThe complete gluon TMDPDF with gauge links isΦ^(x,p_⊥)= (p^+/2M_p)^-1∫dξ^- d^2ξ_⊥/(2π)^3e^-ix ξ^-P_A^+-iξ_⊥· p_⊥⟨ P_A|G_b⊥^+(ξ)Û^†_v(∞,ξ)_bcÛ_v(∞,0)_caG_a⊥^+(0)|P_A⟩_ξ^+=0. The virtual correction to this function is still obtained by power expansion. The diagrams contributing to virtual correction are given in Fig.<ref>. Here we take Fig.<ref>(a) as an example to illustrate our calculation scheme. According to collinear approximation, the leading power contribution of Fig.<ref>(a) is from the region where the momentum of the parton going through the hooked line is collinear to P_A, i.e., k^μ=(k^+,k^-,k_⊥)∼ Q(1,^2,). Therefore,Φ^(1)_(x,p_⊥)= ∫ d^4 k ^2(k_⊥-p_⊥)(k^+-p^+)M_^ρτ,cd(k^+,k^-,k_⊥) ∫d^4ξ/(2π)^4e^-ik·ξ⟨ P_A|G_⊥ d^τ(ξ^+,ξ^-,ξ_⊥)G_⊥ c^ρ(0)|P_A⟩≃ ∫ d^2 k_⊥^2(k_⊥-p_⊥) M_^ρτ,cd(p^+,0,k_⊥) ∫dξ^- d^2ξ_⊥/(2π)^3e^-ip^+ξ^–ik_⊥·ξ_⊥⟨ P_A|G_⊥ b^τ(0,ξ^-,ξ_⊥)G_⊥ a^ρ(0)|P_A⟩,whereM_^ρτ,cd(k^+,k^-,k_⊥)= -ig_s^2 (p^+/2M_p)^-1C_A _cd∫d^n l/(2π)^n[k^+ g_⊥^μ-n^μ(k+l)_⊥^ ]v^νΓ_ρνμ(k,l,-k-l)/(v· l+i)(l^2+i)[(k+l)^2+i],Γ_ρνμ(k,l,-k-l)=g_ρν(k-l)_μ+g_νμ(2l+k)_ρ+g_μρ(-2k-l)_ν.Now in M(k) the loop integral is divergent when k_⊥ goes to zero if n=4. But since the divergence is logarithms-like, it can by regularized in dimensional scheme. Then, M(k) is well-defined at k_⊥=0, and it can be expanded asM(p^+,0, k_⊥)=M(p^+,0,0)+k_⊥^ρ∂/∂ k_⊥^ρM(p^+,0,0)+⋯.Note that the hard scale in M(p^+,0, k_⊥) is ζ^2=(2v· p)^2/v^2, so, high twist contribution is suppressed by k_⊥/ζ. Since the delta function ^2(k_⊥-p_⊥) is already of leading power or leading twist, only the first term in the expansion should be preserved. Thus,Φ^(1)_(x,p_⊥)≃M(p^+,0,0)_^ρτ,ab∫ d^2 k_⊥^2(k_⊥-p_⊥) ∫dξ^- d^2ξ_⊥/(2π)^3e^-ip^+ξ^–ip_⊥·ξ_⊥⟨ P_A|G_⊥ b^τ(0,ξ^-,ξ_⊥)G_⊥ a^ρ(0)|P_A⟩.Now by changing gluon field to field strength tensor the integral of above equation is just gluon TMDPDF itself. From the derivation one can see that dimensional scheme is crucial for our power expansion. Note that the nonlinear term in gluon field strength tensor also contributes. Its effect is reflected in the special vertex<cit.> for the coupling of gluon and gauge link, as shown in Fig.<ref>.The total correction of Fig.<ref>(a) isΦ^(1)_(x,p_⊥)= 2×_s C_A/4π(4πμ^2)^/2/Γ(2-/2)[4/B(2-/2,1+/2) B(-/2,1+/2)(ζ^2)^-/2+(2/_UV-2/_IR) ] Φ^(0)_(x,p_⊥),where the factor 2 represents contribution from conjugated diagrams. 
Note that in this expression and formulas followingis _IR implicitly, unless there is a special illustration.Besides, wave function renormalization for the gauge link is given by Fig.<ref>(b) and its conjugate, the result isΦ^(1)_(x,p_⊥)|_b= 2× (Z_v^1/2-1)Φ^(0)_(x,p_⊥), Z_v= 1+_s/πC_A(1/_UV-1/_IR).The sum of eq.(<ref>) and eq.(<ref>) is the total virtual correction to gluon TMDPDF, that is,Φ^(1)_(x,p_⊥)= 2W_ΦΦ^(0)_(x,p_⊥), W_Φ=(Z_v^1/2-1)+_s C_A/4π(4πμ^2)^/2/Γ(2-/2)[4/B(2-/2,1+/2) B(-/2,1+/2)(ζ^2)^-/2+(2/_UV-2/_IR) ].Since the derivation does not depend on the polarization of initial hadron and the parton or gluon, the result indicates the virtual correction is the same to the two gluon TMDPDFs we consider here. Note that we will not calculate the self-energy correction to initial gluon in our following calculation for one-loop hard part, because this self-energy correction can be subtracted totally. For this reason, we do not calculate this self-energy correction to Φ_ here.There is no power expansion in higher order correction to soft factors, so, their virtual corrections are easy to calculate. The virtual correction to S^Q(l_⊥) is from Fig.<ref>.The result isS_Q^(1)(l_⊥)= 2W_SQ S_Q^(0)(l_⊥) , W_SQ=(√(Z_v_1Z_v_2Z_ṽ)-1)-_s/4π(1/_UV-1/_IR) [ C_A (ln(2v_1·ṽ)^2/v_1^2 ṽ^2+ln(2v_2·ṽ)^2/v_2^2 ṽ^2). .+(C_A-2C_F)1+/^1/2ln1-^1/2/1+^1/2],=1-√(v_1^2 v_2^2/(v_1· v_2)^2)/1+√(v_1^2 v_2^2/(v_1· v_2)^2),v_1 and v_2 are the four-velocities of heavy quark and anti-quark, respectively. If one takes v_1^2=v_2^2=1,becomes the usual phase space factor for final heavy quark pair, i.e., =1-4m^2/(k_1+k_2)^2=ρ_12^2. The virtual correction to S(l_⊥) is from Fig.<ref>, and the result isS̅^(1)(l_⊥)= 2W_S S̅^(0)(l_⊥), W_S= -[(√(Z_v Z_ṽ)-1) -_s/2πC_A(1/_UV -1/_IR)ln(2v·ṽ)^2/v^2ṽ^2].Note that in the derivation of these results we have ignored the corrections vanishing under the limit v^2,ṽ^2→ 0. §.§ Structure of one-loop virtual correction to hard partNext we consider the virtual correction to hadronic tensor. The central bubble in Fig.<ref> on the left hand side of the cut at one-loop level is given by diagrams in Fig.<ref>, where self-energy corrections to external fermion lines are not shown, since they are trivial to be taken into account. The contribution of conjugated diagrams are not shown but taken into account in the calculation. Denoting the amplitude for the subprocess q(p)+^*(q)→ Q(k_1)+Q̅(k_2) as ℳ^μ_, the one-loop hard part isH^μν(1)_= ℳ^μ(1)_( ℳ^ν(0)_)^* +ℳ^μ(0)_( ℳ^ν(1)_)^*.As an example, the tree level amplitude in Fig.<ref> isℳ^μ(0)_= -gT^a[1/2p· k_2(u̅(k_1)^μp_ v(k_2)-2k_2u̅(k_1)^μ v(k_2))-(k_1↔ k_2)],whereis transverse Lorentz index for initial gluon and μ is that for photon. By using this amplitude the result in eq.(<ref>) can be obtained. From eq.(<ref>) it is clearH^μν_=(H^νμ_)^*.So, if the hard part is symmetric in (μ,) and (ν,), it must be real. Since the transverse momenta p_⊥ and q_⊥ are ignored in one-loop hard part, the decomposition for tree level hard part in eq.(<ref>) can also be applied to one-loop hard part. From eq.(<ref>), the one-loop hard part has such a symmetry automatically. So, all projected hard coefficients H^ij are real. Since tree level amplitude ℳ^(0) is real, we just need the real part of one-loop amplitude. This simplifies the calculation, since absorptive part needs some caution for the i prescription in propagators in the loop. Next we discuss the IR property of one-loop amplitude. 
The complete amplitude is very complicated, but the IR divergent part can be shown to be simple and proportional to tree level amplitude. Here IR divergences include soft and collinear ones. These divergences can appear only in Fig.<ref>(a,b,c,f,i). Our observation is the loop integrals with IR divergence can be expressed through three basic scalar loop integrals. Fig.<ref>(a) contains only soft divergence, which is caused by the soft gluon exchange between the two quarks. Such soft divergence is contained in following integral,I_Box-a^(0)(k_1,k_2)=∫_l 1/[l^2+i][(l-k_2)^2-m^2+i][(l+p-k_2)^2-m^2+i][(l+k_1)^2-m^2+i], ∫_l=∫d^nl/(2π)^n.The soft divergence of such box integral can be expressed through scalar triangle integrals<cit.>. In soft region with l^μ=(l^+,l^-,l_⊥^μ)∼ Q(,,), (l+p-k_2)^2-m^2 is offshell. Thus setting l=0 in this propagator does not affect the IR divergence. This off-shell propagator can be decomposed as1/(l+p-k_2)^2-m^2=1/(p-k_2)^2-m^2+l^2-2l· k_2+2l· p/2p· k_2[(l+p-k_2)^2-m^2].The second term gives IR finite contribution. Then,I_Box-a^(0)(k_1,k_2)= 1/-2p· k_2CTri1(s_1)+DBox1(t_1,u_1),CTri1(s_1)= ∫_l1/(l^2+i)[(l-k_2)^2-m^2+i][(l+k_1)^2-m^2+i].All soft divergence of the box integral now is contained in the triangle integral CTri1(s_1). Note that CTri1(s_1) is symmetric in k_1, k_2 or t_1,u_1. By exchanging k_1 and k_2, the soft divergence of Fig.<ref>(b) is obtained. After some simplifications the sum of the divergent parts of Fig.<ref>(a,b) is.ℳ^μ_|_a+b^IR≐-ig^2(C_A-2C_F)(2k_1· k_2)CTri1(s_1){ -gT^a1/2p· k_2(u̅(k_1)^μp_ v(k_2)-2k_2u̅(k_1)^μ v(k_2))-(k_1↔ k_2)}.The quantity in {⋯} is rightly the tree level amplitude. Hence,.ℳ^μ_|_a+b^IR≐-ig^2(C_A-2C_F)(2k_1· k_2)CTri1(s_1)ℳ^μ(0)_. For Fig.<ref>(c), both soft and collinear divergences are contained in the loop integral, and the overlap of these two divergences makes the extraction of IR part nontrivial. The divergent loop integrals for this diagram areI_Box-c^(0,1,2)= ∫_l{1,(l^+/p^+),(l^+/p^+)^2}/D_0 D_1 D_2 D_3,withD_0=l^2, D_1=(l+p)^2, D_2=(l+p-k_2)^2-m^2, D_3=(l+k_1)^2-m^2.In I_Boxc^(0), collinear divergence comes from the region where l^μ is collinear to p^μ, while soft divergence comes from two regions: one is l^μ is soft, the other is (p-l)^μ is soft. In I_Boxc^(1,2), collinear divergences come from the same region as I_Boxc^(0), but the soft divergence only comes from the region where (p-l)^μ is soft. Note that the integral with (l^+)^3 in the integrand is absent in the amplitude, although this integral is also IR divergent.As we will show, the divergent part of Box-c can still be expressed through triangle integrals. First, setting l=0 in D_2 givesI_Box-c^(0)≐ -1/2p· k_2∫_l1/D_0 D_1 D_3-1/p· k_2∫_ll· k_2/D_0 D_1 D_2 D_3,where ≐ means the equality holds up to IR finite corrections. Now l· k_2=l^+ k_2^-+l^- k_2^++l_⊥· k_2, but l^- and l_⊥ are suppressed byor ^2 in soft or collinear regions, so, l^- and l_⊥ can be dropped. Then1/p· k_2∫_ll· k_2/D_0 D_1 D_2 D_3≐1/p^+∫_ll^+/D_0 D_1 D_2 D_3≐1/p· q∫_ll· q/D_0 D_1 D_2 D_3≐1/p· q∫_ll· (k_1+k_2)/D_0 D_1 D_2 D_3 =1/p· q∫_lD_3-D_0+D_1-D_2-2k_2· p/D_0 D_1 D_2 D_3,where we have used q^μ=-p^μ +k_1^μ +k_2^μ. Now D_0,1 in the numerator above can be dropped, because they are suppressed byor ^2 in soft or collinear regions. Then, substituting this result back to eq.(<ref>) one getsI_Box-c^(0)=-1/2p· k_2∫_l1/D_0 D_1 D_3-1/2p· k_1∫_l1/D_0 D_1 D_2+DBox2[t_1,u_1].In this way we express the IR part of Box-c through two triangle integrals.As expected, this expression is symmetric in k_1,k_2. 
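The propagator decomposition used above is an exact algebraic identity and can be verified numerically: with p^2=0 and k_2^2=m^2 one has (p-k_2)^2-m^2=-2p· k_2, so that 1/[(l+p-k_2)^2-m^2] splits into the displayed IR-carrying and power-suppressed pieces. A sketch with arbitrary on-shell momenta:

import numpy as np

g = np.diag([1.0, -1.0, -1.0, -1.0])
dot = lambda a, b: a @ g @ b

m = 1.5
p  = np.array([3.0, 0.0, 0.0, 3.0])                         # light-like gluon
k2 = np.array([np.sqrt(m**2 + 5.0), 2.0, 1.0, 0.0])         # on-shell quark
l  = np.array([0.7, -0.3, 0.4, 0.2])                        # loop momentum

D = dot(l + p - k2, l + p - k2) - m**2
lhs = 1.0 / D
rhs = -1.0/(2*dot(p, k2)) + (dot(l, l) - 2*dot(l, k2) + 2*dot(l, p))/(2*dot(p, k2)*D)
print(lhs, rhs)   # agree identically, for any l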
In the decomposition above we have also defined the finite part of this box integral as DBox2[t_1,u_1]. Similarly, one can showI_Box-c^(1)≐ 1/2p· k_1∫_l1/D_0 D_1 D_2,I_Box-c^(2)≐ -1/2p· k_1∫_l1/D_0 D_1 D_2-1/4p· k_1 p· k_2∫_l1/D_0 D_1.With the same method, the extraction of the IR divergence from the triangle diagram Fig.<ref>(f) becomes easy, and the results areI_Tri-f^(0)= ∫_l 1/l^2(l+p)^2[(l+k_2)^2-m^2]=∫_l 1/l^2(l+p)^2[(l+p-k_2)^2-m^2]=∫_l1/D_0 D_1 D_2, I_Tri-f^(1)= ∫_l (l^+/p^+)/l^2(l+p)^2[(l+k_2)^2-m^2]≐1/2p· k_2∫_l1/D_0 D_1.The integrals with (l^+)^2 or (l^+)^3 do not appear in the amplitude, although they are divergent. The other integrals for this triangle diagram are IR finite. By exchanging k_1 and k_2, the divergent part of Fig.<ref>(i) can be obtained.With the IR divergences obtained above, the amplitude from Fig.<ref>(a,b,c,f,i) can be expressed through four basic scalar integrals, J_01, J_012, J_013 and CTri1(s_1). After a lengthy calculation the result takes a very compact form, proportional to the tree-level amplitude. Besides, the self-energy corrections to the external fermion lines also contain IR divergences. The complete IR divergent part of the amplitude is thenℳ^μ(1)|_IR= W_IRℳ^μ(0), W_IR=(Z_2-1)-ig^2(C_A-2C_F)(2k_1· k_2)CTri1(s_1) -ig^2 C_A(2p· k_1 J_013 +2p· k_2 J_012+J_01),withJ_012=∫_l1/D_0 D_1 D_2, J_013=∫_l1/D_0 D_1 D_3, J_01=∫_l1/D_0 D_1.This is one of our main results. The derivation of this result is independent of the polarizations of the external gluon; thus the IR divergence, including both its soft and collinear parts, is spin independent. According to eq.(<ref>), the IR divergence of the hard part isH^μν(1)|_IR= 2Re(W_IR) H^μν(0),where the factor 2 accounts for the contribution of the conjugated diagrams.Besides the IR divergence, the hard part also contains a UV divergence, which is subtracted by the counter terms of the Lagrangian. After the subtraction of the UV divergence, the hard part acquires a dependence on the renormalization scale μ. We separate this μ dependence in the following.Notice that the electromagnetic current j^μ in the hadronic tensor is conserved, so the UV divergence from the vertex correction is cancelled by the self-energy corrections to the fermions. This means the UV divergences of Fig.<ref>(e,h) are cancelled by those of Fig.<ref>(j,k), and the sum of these four diagrams does not generate any μ dependence. Apart from the μ dependence in the wave function renormalization constant of the fermions, i.e., Z_2, the remaining μ dependence can only come from Fig.<ref>(d,f,g,i). The UV divergences of these diagrams readℳ^μ|_d+g≐ -(C_A-2C_F)/2 · α_s/4π (4πμ^2/Q^2)^ε/2 Γ(ε_UV/2) ℳ^μ(0),ℳ^μ|_f+i≐ 3C_A/2 · α_s/4π (4πμ^2/Q^2)^ε/2 Γ(ε_UV/2) ℳ^μ(0).With the μ dependence from Z_2, the total μ dependence of the amplitude isℳ^μ(1)= [-(C_A-2C_F)/2+3C_A/2-C_F] α_s/4π ln(μ^2/Q^2) ℳ^μ(0)+⋯,where ⋯ represents the μ-independent part.With the renormalization scale separated explicitly, the hard part can be written asH^μν(1)=H^μν_1+H^μν_2+H^μν_3,withH^μν_1= 2Re(W_IR) H^μν(0)|_μ^2=Q^2, H^μν_2= (H^μν(1)- 2Re(W_IR) H^μν(0))_μ^2=Q^2, H^μν_3=2α_s/4π C_A ln(μ^2/Q^2) H^μν(0),where we have chosen the reference scale of μ^2 to be Q^2, so that H_3 is well defined. These three parts clarify the structure of the one-loop virtual correction to the hadronic tensor.Finally, it should be pointed out that only in the pole-mass scheme is the μ dependence of Fig.<ref>(j,k) proportional to the tree-level amplitude. In the pole-mass scheme the mass counter term in the Lagrangian is chosen such that the renormalized fermion self-energy Σ_R(p̃,m)=0 at p̃=m.
Then the renormalized self-energy is written asΣ_R(p̃,m)= -α_s/4π C_F ln(μ^2/Q^2)(p̃-m)+α_s/4π C_F(p̃ B_v(-p̃^2/m^2,Q^2/m^2) +m B_m(-p̃^2/m^2,Q^2/m^2)),withB_v(z,τ) =∫_0^1 dx x ln(x z+1)-lnτ=[(2-z)z-2(1-z^2)ln(1+z)]/4z^2-lnτ, B_m(z,τ) =-2-4∫_0^1 dx ln(xz+1)+lnτ=2-4(1+z)/z ln(1+z)+lnτ.From the structure of the self-energy correction, the lnμ part is proportional to the tree-level amplitude. As stated before, this μ dependence is cancelled by that of Fig.<ref>(e,h). This is an advantage of the pole-mass scheme. §.§ Subtraction and finite hard coefficientsAccording to our subtraction scheme in eq.(<ref>) and the results in eq.(<ref>,<ref>,<ref>,<ref>), the finite hard coefficient is given byH^μν(1)|_finite=H^μν(1) -2Re[W_Φ+W_SQ+W_S]H^μν(0) =2Re[(W_IR+C_F α_s/4π ln(μ^2/Q^2) -C_A α_s/4π ln(μ^2/Q^2)) -W_Φ-W_SQ-W_S]H^μν(0) +H^μν_3+H^μν_2,where we have used the factW_IR|_μ^2=Q^2=W_IR+C_F α_s/4π ln(μ^2/Q^2) -C_A α_s/4π ln(μ^2/Q^2),and the lnμ^2 terms come from Z_2 and J_01.The explicit expressions of the IR divergent scalar loop integrals are obtained by standard Feynman parametrization. One can also find general expressions in e.g. <cit.>,<cit.> and then continue the results into the DIS region. Generally, the results in the DIS region are very simple. For completeness we list these integrals here:CTri1(s_1)=-i/16π^2 R_ε 1/(s_12ρ_12){ -2/ε ln c_12 +(1-ln(Q^2/s_12))ln c_12 +1/2 ln^2 c_12 +2lnρ_12 ln c_12 +2π^2/3+2Li_2(c_12)}, R_ε=(4πμ^2/Q^2)^ε/2/Γ(2-ε/2),andJ_013= i/16π^2 R_ε 1/(Q^2 y_13) 2/ε^2[ 1-ε/2 +ε/2 ln(r_b/y_13^2)+ε^2/2(-1/2 ln(r_b/y_13^2) +1/4 ln^2(r_b/y_13^2)+π^2/12+Li_2((r_b+y_13)/r_b))], J_012= J_013|_y_13→ y_23, J_01= i/16π^2(2/ε_UV-2/ε_IR),wheres_12=(k_1+k_2)^2, ρ_12=√(1-4m^2/s_12),c_12=(1-ρ_12)/(1+ρ_12),andy_13=2p· k_1/q^2, y_23=2p· k_2/q^2,y_12=-2k_1· k_2/q^2, r_b=-m^2/q^2.In the DIS region, y_13,y_23<0 and y_12,r_b>0. In these integrals we have ignored all absorptive parts.With these integrals, the explicit W_IR can be obtained asW_IR=(Z_2-1)-(C_A-2C_F)α_s/4π R_ε (1+ρ_12^2)/(2ρ_12){ -2/ε ln c_12 +(1-ln(Q^2/s_12))ln c_12 +1/2 ln^2 c_12 +2lnρ_12 ln c_12 +2π^2/3+2Li_2(c_12)} -C_A α_s/4π R_ε{4/ε^2-2/ε+2/ε ln(r_b/(y_13y_23)) -ln(r_b/(y_13y_23)) +π^2/6 +1/4 ln^2(r_b/y_13^2)+1/4 ln^2(r_b/y_23^2) +Li_2((r_b+y_13)/r_b)+Li_2((r_b+y_23)/r_b)} +C_A α_s/4π(2/ε_UV-2/ε_IR).The expression of W_Φ has been given in eq.(<ref>); then we haveW_IR-W_Φ= (Z_2-Z_v^1/2)-(C_A-2C_F)α_s/4π R_ε (1+ρ_12^2)/(2ρ_12){ -2/ε ln c_12 +(1-ln(Q^2/s_12))ln c_12 +1/2 ln^2 c_12 +2lnρ_12 ln c_12 +2π^2/3+2Li_2(c_12)} -C_A α_s/4π R_ε{ (2/ε-1)ln((v·ṽ)^2/(v^2 v_1·ṽ v_2·ṽ))-π^2/6+1/4 ln^2(r_b/y_13^2) +1/4 ln^2(r_b/y_23^2)-1/2 ln^2(Q^2/ζ^2) +Li_2((r_b+y_13)/r_b)+Li_2((r_b+y_23)/r_b) }.It can be seen that the IR divergence of the hard part cannot be subtracted by the correction to the gluon TMDPDF alone; a TMD formula without soft factors would therefore be incorrect. At one-loop level, the two soft factors have been calculated in eq.(<ref>) and eq.(<ref>). We haveW_SQ+W_S=(√(Z_v_1Z_v_2Z_ṽ)-√(Z_v Z_ṽ)) -α_s/4π(1/ε_UV-1/ε_IR) [2C_A ln(v^2 v_1·ṽ v_2·ṽ/(v·ṽ)^2) +(C_A-2C_F)(1+ρ_12^2)/ρ_12 ln((1-ρ_12)/(1+ρ_12))].Now it is clear that the difference between W_IR-W_Φ and W_SQ+W_S is IR finite, i.e.,(W_IR-W_Φ)-(W_SQ+W_S) = -α_s/4π C_F(4+3ln(μ^2/m^2)) -(C_A-2C_F)α_s/4π (1+ρ_12^2)/(2ρ_12){ -ln(μ^2/Q^2)ln c_12 +(1-ln(Q^2/s_12))ln c_12 +1/2 ln^2 c_12 +2lnρ_12 ln c_12 +2π^2/3+2Li_2(c_12)} -C_A α_s/4π{ (ln(μ^2/Q^2)-1)ln((v·ṽ)^2/(v^2 v_1·ṽ v_2·ṽ))-π^2/6+1/4 ln^2(r_b/y_13^2) +1/4 ln^2(r_b/y_23^2)-1/2 ln^2(Q^2/ζ^2) +Li_2((r_b+y_13)/r_b)+Li_2((r_b+y_23)/r_b) }.In this result, the UV counter terms for the gluon TMDPDF and the soft factors have been added, and the scale μ is the UV renormalization scale.
In eq.(<ref>) the first line of the equality is given by wave function renormalization, that is,(W_IR-W_Φ)-(W_SQ+W_S)⊃ Z_2-√(Z_v_1Z_v_2).After the UV divergences are removed, the wave function renormalization constants areZ_2= 1-α_s/4π C_F ln(μ^2/Q^2)-α_s/2π C_F(2/ε_IR-γ_E+2+ln 4π+3/2 ln(Q^2/m^2)+ln(μ_IR^2/Q^2)), Z_v_1=Z_v_2= 1+α_s/2π C_F(2/ε_UV-2/ε_IR) +c.t. = 1-α_s/2π C_F(2/ε_IR-γ_E+ln(4πμ_IR^2/μ^2)).So,Z_2-√(Z_v_1Z_v_2)=-α_s/4π C_F(4+3ln(μ^2/m^2)). Now, substituting eq.(<ref>) into eq.(<ref>), the finite hard coefficient isH^μν(1)_ρτ|_finite=W_f H^μν(0)_ρτ+H^μν_2ρτ, W_f= 2{α_s/4π ln(μ^2/Q^2)[-C_A ln((v·ṽ)^2/(v^2 ṽ· v_1 ṽ· v_2))-2C_F +(C_A-2C_F)((1+ρ_12^2)/(2ρ_12) ln((1-ρ_12)/(1+ρ_12)))] +(W_IR-W_Φ-W_SQ-W_S)_μ^2=Q^2} = α_s/2π ln(μ^2/Q^2)[-C_A ln((v·ṽ)^2/(v^2 ṽ· v_1 ṽ· v_2))-2C_F +(C_A-2C_F)((1+ρ_12^2)/(2ρ_12) ln((1-ρ_12)/(1+ρ_12)))] -(C_A-2C_F)α_s/2π (1+ρ_12^2)/(2ρ_12)[ (1-ln(Q^2/s_12))ln c_12 +1/2 ln^2 c_12 +2lnρ_12 ln c_12 +2π^2/3+2Li_2(c_12)] -C_A α_s/2π[ -π^2/6+1/4 ln^2(r_b/y_13^2) +1/4 ln^2(r_b/y_23^2)-1/2 ln^2(Q^2/ζ^2) +Li_2((r_b+y_13)/r_b)+Li_2((r_b+y_23)/r_b) ] -α_s/2π C_F(4+3ln(Q^2/m^2)).Moreover, our decomposition of the hard coefficient in eq.(<ref>) is effective to all orders of α_s as long as only virtual corrections are involved. So, H^μν_2 can still be projected onto 10 scalar hard coefficients H_2^ij, as we did for H^μν(0). No new tensor structure appears in the higher-order corrections, since only virtual corrections contribute. That is,H^(1)ij_finite=W_f H^(0)ij+H_2^ij,with the projection in eq.(<ref>). Eqs.(<ref>,<ref>) are among our main results. From the derivation, W_IR and W_Φ are independent of the polarization of the initial gluon, so W_f is the same for all gluon TMDPDFs, including those defined with a polarized hadron, as given in <cit.>. H_2 depends on the gluon polarization, but it is IR finite and μ independent. The finiteness of H_2 is verified explicitly according to eq.(<ref>). In our result, it is expressed through some basic IR finite scalar loop integrals, including the box integrals DBox1,2, the triangle integrals CTri3,4,5 and various bubble integrals b_0, B_v and B_m. These integrals are given in the Appendix. The projected hard coefficients H_2^ij, defined according to eq.(<ref>), are provided in a Mathematica file, subted2.m; these results are very lengthy and cannot be shown here. In subted2.m the projected hard coefficients are stored as a list{ξ_1 H_2^11,ξ_2 H_2^12,ξ_3 H_2^21,ξ_4 H_2^22,ξ_5 H_2^33,ξ_6 H_2^41,ξ_7 H_2^42,ξ_8 H_2^51,ξ_9 H_2^52,ξ_10 H_2^63},with normalization factorsξ_1= ξ_2=1/(g_s^4 Q^2|k_1⊥|^2),ξ_3= ξ_4=ξ_5=-1/(2g_s^4 Q|k_1⊥|^3),ξ_6= ξ_7=ξ_8=ξ_9=ξ_10=1/(2g_s^4 |k_1⊥|^4).All of these results are expressed through the invariants s_1, t_1, u_1 and m appearing in the tree-level result eq.(<ref>). There are two color factors in the stored result, i.e.,f(N_1)=Tr(T^aT^bT^aT^b)=(1-N_c^2)/(4N_c), f(N_2)=Tr(T^aT^aT^bT^b)=(N_c^2-1)^2/(4N_c). Finally, it is interesting to point out that our factorization formula eq.(<ref>), with the finite hard part given by eq.(<ref>), is μ independent to order α_s^2. This can be seen as follows. Because the real correction is UV finite, the μ dependence is generated purely by the virtual corrections. Then, the μ dependence of S^Q(l_⊥) and S̄(l_⊥) can be obtained from eq.(<ref>) and eq.(<ref>), respectively.
They are∂ S^Q(l_⊥)/∂lnμ^2 = α_s/2π[2C_F+C_A -C_A ln(4v_1·ṽ v_2·ṽ/ṽ^2) -(C_A-2C_F)(1+ρ_12^2)/(2ρ_12) ln((1-ρ_12)/(1+ρ_12))]S^Q(l_⊥),∂S̄(l_⊥)/∂lnμ^2 = α_s/2π C_A[ -2+ln(4(v·ṽ)^2/(v^2ṽ^2))]S̄(l_⊥).For the gluon TMDPDF, the additional self-energy correction to the gluon with momentum p should be added to the virtual corrections. Of course, this correction does not affect the finite hard part, because it cancels against the same gluon self-energy insertion in the hard scattering part of the hadronic tensor. The wave function renormalization constant in Feynman gauge isZ_G=1-α_s/4π(2/3 N_F-5/3 C_A)(2/ε_UV-γ_E+ln 4π),in the MS scheme. Then, from eq.(<ref>) we have∂/∂lnμ^2 Φ^ρτ(x,p_⊥) = -α_s/4π[ 2/3 N_F-5/3 C_A-4C_A ]Φ^ρτ(x,p_⊥).Recall that the tree-level hard coefficients are proportional to α_s, which carries the μ dependence∂α_s/∂lnμ^2=β(α_s)= -α_s^2/4π(11/3 C_A-2/3 N_F ).Now the hadronic tensor to 𝒪(α_s^2) can be written asW^μν∝ α_s[1+α_s/2π ln(μ^2/Q^2)(-C_A ln((v·ṽ)^2/(v^2 ṽ· v_1 ṽ· v_2))-2C_F+(C_A-2C_F)(1+ρ_12^2)/(2ρ_12) ln((1-ρ_12)/(1+ρ_12)))+⋯] Φ^ρτ(x,p_⊥)⊗ S^Q(l_1⊥)⊗S̄(l_2⊥),where the ⋯ part is 𝒪(α_s) and μ independent, and ⊗ denotes the convolution over transverse momenta. From this equation and the evolution equations given above, one can quickly check that∂/∂lnμ^2 W^μν=0+𝒪(α_s^3).So, the factorization formula is μ independent to order α_s^2. Notice that here we have used the fact that the pole mass of the heavy quark is μ independent. § AZIMUTHAL ANGLE DEPENDENCEWe have now checked the corrected TMD formula for the hadronic tensor, i.e., eq.(<ref>), at one-loop level, and the finite hard coefficients have been given explicitly. One might think that the azimuthal angle dependence on ϕ_q can be obtained by following the same procedure as for the tree-level formula. Unfortunately, S^Q depends on v_1⊥, which changes the tree-level ϕ_q dependence in a nonperturbative way after the integration over the transverse momenta in eq.(<ref>). Physically, this change of the ϕ_q dependence is caused by the multiple emissions of soft gluons from the final heavy quark pair. To proceed we have to integrate over ϕ_q, and this is best done in coordinate space.DefineΦ̃^ρτ(x,b_⊥)= ∫ d^2 p_⊥ e^ib_⊥· p_⊥Φ^ρτ(x,p_⊥)=-g_⊥^ρτ G(x,b_⊥^2) -2(2b_⊥^ρ b_⊥^τ-b_⊥^2 g_⊥^ρτ)/M_p^2 ∂^2 H^⊥(x,b_⊥^2)/∂(b_⊥^2)^2,S̃^Q(b_⊥)= ∫ d^2 l_⊥ e^-ib_⊥· l_⊥ S^Q(l_⊥),S̃(b_⊥)= ∫ d^2 l_⊥ e^-ib_⊥· l_⊥S̄(l_⊥).Note that b_⊥^2=b_⊥· b_⊥<0. The formula eq.(<ref>) then becomesW^μν = x_B/(x Q^2 M_p(N_c^2-1))(1-z_1-z_2)H^μν_ρτ∫d^2 b_⊥/(2π)^2 e^iq_⊥· b_⊥Φ̃^ρτ(x,b_⊥)S̃^Q(b_⊥)S̃(b_⊥^2).Remember that in the hadron frame, i.e., the rest frame of the final heavy quark pair, v⃗_1⊥ is along the +X-axis, so all azimuthal angles are defined relative to v⃗_1⊥, as shown in Fig.<ref>.Since L^μν does not depend on ϕ_q, it is convenient to integrate over ϕ_q in W^μν rather than in the cross section. Notice that, due to the scaling invariance of S^Q(b_⊥) under the transformations v_1→λ_1 v_1, v_2→λ_2 v_2, S̃^Q depends on ϕ_b only through(b_⊥· v_1)^2/b_⊥^2 v_1^2=-|v_1⊥|^2 cos^2ϕ_b=-|v_1⊥|^2 (cos 2ϕ_b+1)/2.That is, S̃^Q is a function of cos 2ϕ_b, and is therefore unchanged under the transformation ϕ_b→ϕ_b+π. Due to this fact we have∫_0^2π dϕ_b cos nϕ_b S̃^Q(b_⊥)= (-1)^n ∫_0^2π dϕ_b cos nϕ_b S̃^Q(b_⊥),∫_0^2π dϕ_b sin nϕ_b S̃^Q(b_⊥)=0,so the cosine moments vanish for odd n. This property of S̃^Q is valuable in the following analysis. We choose to study three quantities with ϕ_q integrated:∫ dϕ_q W^μν = x_B(1-z_1-z_2)/(π x Q^2 M_p(N_c^2-1))∫ db̂_⊥b̂_⊥S̃(b_⊥^2)J_0(b̂_⊥q̂_⊥){ A_i^μνH_i1 G̃(x,b_⊥^2)S̃_Q^(0)-A_i^μνH_i2 H̃^⊥(2)(x,b_⊥^2)S̃_Q^(2)},∫ dϕ_q cos(2ϕ_q)W^μν = -x_B(1-z_1-z_2)/(π x Q^2 M_p(N_c^2-1))∫ db̂_⊥b̂_⊥S̃(b_⊥^2)J_2(b̂_⊥q̂_⊥){ A_i^μνH_i1 G̃(x,b_⊥^2)S̃_Q^(2)
-A_i^μνH_i2 H̃^⊥(2)(x,b_⊥^2)(S̃_Q^(0)+S̃_Q^(4))/2},∫ dϕ_q sin(2ϕ_q)W^μν = x_B(1-z_1-z_2)/(π x Q^2 M_p(N_c^2-1))∫ db̂_⊥b̂_⊥S̃(b_⊥^2)J_2(b̂_⊥q̂_⊥){ A_i^μνH_i3 H̃^⊥(2)(x,b_⊥^2)(S̃_Q^(0)-S̃_Q^(4))/2},where b̂_⊥=|b_⊥|=√(-b_⊥^2), q̂_⊥=|q_⊥|=√(-q_⊥^2), and J_0, J_2 are the zeroth- and second-order Bessel functions. The A_i^μν are the basic tensors introduced in the decomposition of the leptonic tensor, i.e., eq.(<ref>). For simplicity, the quantity involving H^⊥ has been reorganized intoH̃^⊥(2)(x,b_⊥^2)= -2b_⊥^2/M_p^2 ∂^2/∂ (b_⊥^2)^2 H̃^⊥(x,b_⊥^2).Since ϕ_q has been integrated over, ϕ_b can now also be integrated over. As a result, three integrated soft factors are involved, i.e.,S̃_Q^(0)= ∫_0^2π dϕ_b S̃^Q(b_⊥), S̃_Q^(2)= ∫_0^2π dϕ_b cos(2ϕ_b) S̃^Q(b_⊥),S̃_Q^(4)= ∫_0^2π dϕ_b cos(4ϕ_b)S̃^Q(b_⊥).In general, quantities weighted by cos(nϕ_q) or sin(nϕ_q) can be constructed, but more ϕ_b-integrated soft factors are then introduced, which may not help in extracting H^⊥.From these weighted hadronic tensors, the corresponding cross sections can be obtained. Define the weighted cross sections as⟨cos nϕ_q ⟩=∫_0^2π dϕ_q cos(nϕ_q)d/dψ_l dy dx_B dy_1 dy_2 d^2K_T d^2R_T.We have⟨ 1 ⟩= y Q_q^2 α_em^2 x_B(1-z_1-z_2)/(64π^4 x Q^4 M_p(N_c^2-1))∫ db̂_⊥b̂_⊥S̃ J_0(b̂_⊥q̂_⊥){ G̃S̃_Q^(0)[ H_11 4(1-y)/y^2+H_41 2(1+(1-y)^2)/y^2-H_21 4√(1-y)(y-2)/y^2 cosϕ_l+H_51 4(1-y)/y^2 cos(2ϕ_l)] -H̃^⊥(2)S̃_Q^(2)[ H_12 4(1-y)/y^2+H_42 2(1+(1-y)^2)/y^2-H_22 4√(1-y)(y-2)/y^2 cosϕ_l+H_52 4(1-y)/y^2 cos(2ϕ_l)]}, ⟨cos(2ϕ_q) ⟩= -y Q_q^2 α_em^2 x_B(1-z_1-z_2)/(64π^4 x Q^4 M_p(N_c^2-1))∫ db̂_⊥b̂_⊥S̃ J_2(b̂_⊥q̂_⊥){ G̃S̃_Q^(2)[ H_11 4(1-y)/y^2+H_41 2(1+(1-y)^2)/y^2-H_21 4√(1-y)(y-2)/y^2 cosϕ_l+H_51 4(1-y)/y^2 cos(2ϕ_l)] -H̃^⊥(2)(S̃_Q^(0)+S̃_Q^(4))/2[ H_12 4(1-y)/y^2+H_42 2(1+(1-y)^2)/y^2-H_22 4√(1-y)(y-2)/y^2 cosϕ_l +H_52 4(1-y)/y^2 cos(2ϕ_l)]}, ⟨sin(2ϕ_q) ⟩= y Q_q^2 α_em^2 x_B(1-z_1-z_2)/(64π^4 x Q^4 M_p(N_c^2-1)) ∫ db̂_⊥b̂_⊥S̃ J_2(b̂_⊥q̂_⊥){ H̃^⊥(2)(S̃_Q^(0)-S̃_Q^(4))/2[ -H_33 4√(1-y)(y-2)/y^2 sinϕ_l+H_63 4(1-y)/y^2 sin(2ϕ_l)]}.Here H^ij=H^(0)ij+H^(1)ij_finite, which are given in eq.(<ref>,<ref>). These three weighted cross sections are our main results. We can see clearly that ⟨ 1⟩ depends on H^⊥ even when the lepton azimuthal angle ϕ_l is integrated over, owing to the nonzero S̃_Q^(2). This feature is unexpected from the tree-level TMD formula eq.(<ref>). In addition, the soft factor S̃ is process independent and can be absorbed into the gluon distribution G̃ or H̃^⊥, which results in the so-called subtracted TMDPDF <cit.>. But S̃_Q is process dependent and cannot be eliminated in this way. In this respect S̃_Q is similar to the fragmentation functions in SIDIS. The difference is that fragmentation functions can be extracted from e^+e^- experiments, whereas S̃^Q cannot be extracted in this way, because it contains a gauge link related to the initial gluon. A detailed study of S̃_Q is beyond the scope of this paper; we will study its effect on resummation in future work.§ SUMMARYIn this paper we first derived the angular distributions for back-to-back heavy quark pair production in SIDIS, based on the tree-level TMD formula. We then examined the one-loop correction to this formula. At one-loop level a dedicated soft factor for the final-state heavy quarks must be included. From our previous studies we know that the real correction does not contribute to the higher-order hard coefficients. Therefore, we calculated only the virtual corrections to the hadronic tensor and to the various nonperturbative quantities in the factorization formula. Indeed, we find that the IR divergence in the hadronic tensor can be absorbed by these nonperturbative quantities.
As a result, we give the explicit form of the finite hard part, including its renormalization-scale dependence. Interestingly, we find that the IR divergent part of the one-loop amplitude for heavy quark pair production is proportional to the tree-level amplitude and can be expressed through standard triangle and bubble loop integrals. This feature ensures that the subtraction in polarized scattering can be done within the same formalism. In order to exhibit the azimuthal angle dependence on the virtual photon, i.e., on ϕ_q in the hadron frame, we project the hard part onto ten scalar hard coefficients. However, at one-loop level, the appearance of the soft factor S^Q affects the azimuthal angle dependence in a nonperturbative way, which obscures the explicit ϕ_q dependence at the cross-section level. In order to extract some information about the gluon TMDPDFs, especially about the linearly polarized gluon distribution H^⊥, we construct three ϕ_q-weighted cross sections, ⟨ 1⟩, ⟨cos 2ϕ_q⟩ and ⟨sin 2ϕ_q⟩, which depend on only three integrated heavy-quark soft factors, S̃_Q^(0),(2),(4). We expect that future experiments such as the EIC <cit.> can constrain these three soft factors and the gluon distribution H^⊥. For TMD factorization, an important remaining issue is the resummation of the relative transverse momentum of the heavy quark pair in this process. Due to the many hard scales in this process, e.g., Q^2, R_⊥^2, and the scales introduced by the soft factors, such as ζ^2 and 4v_1·ṽ v_2·ṽ/ṽ^2, this resummation will be nontrivial. We plan to study this issue in a separate paper. In the Appendix we present all the IR finite loop integrals involved, which are used to express the finite hard part H_2^ij.§ ACKNOWLEDGEMENTSThe author would like to thank Jian-Ping Ma for reading the manuscript. The hospitality of ITP, CAS during the completion of this paper is appreciated. This work is supported by the National Natural Science Foundation of China (NSFC) under contract No. 11605195.§ IR FINITE SCALAR INTEGRALSThe following scalar integrals are used to express the finite result. These integrals can be obtained from the general results in <cit.> or references therein by proper continuation to the DIS region; a direct calculation is also straightforward. Most of the results are very simple. All of these functions may depend on s_1, so s_1 is suppressed in the arguments. DBox1 and DBox2 have been defined in eq.(<ref>,<ref>). Their explicit expressions areDBox1[t_1,u_1]= -i/16π^2 1/(s_12 u_1 ρ_12)[ -2ln c_12 ln(r_b/(-y_23))+2ln(-c)ln((ρ-ρ_12)/(ρ+ρ_12)) -ln^2(-c)+2ln((ρ^2-1)/(ρ^2-ρ_12^2))ln c_12 +4Li_2(-c_12)-2Li_2(c_12/c)-2Li_2(c c_12) ],DBox2[t_1u_1,t_1+u_1]= -i/16π^2 (Q^2)^-2/(y_13y_23)[2/3π^2 +ln^2(-c)+ln^2(y_13/y_23) +Li_2((r_b+y_13)/r_b)+Li_2((r_b+y_23)/r_b)],where ρ=√(1-4m^2/q^2)>1, ρ_12=√(1-4m^2/s_12) and c=(1-ρ)/(1+ρ), c_12=(1-ρ_12)/(1+ρ_12).The IR finite triangle integrals areCTri3(u_1)= ∫_l1/[l^2+i][(l-k_2)^2-m^2+i][(l-k_2+p)^2-m^2+i],CTri4(t_1,u_1)= ∫_l1/[l^2+i][(l+p-k_2)^2-m^2+i][(l+k_1)^2-m^2+i],CTri5(t_1 u_1, t_1+u_1) = ∫_l1/[l^2-m^2+i][(l+p)^2-m^2+i][(l+p-k_1-k_2)^2-m^2+i].In the DIS region, the results for CTri3 and CTri5 are very simple:CTri3(u_1)=-i/16π^2 1/u_1{-π^2/6+Li_2((r_b+y_23)/r_b)},CTri5(t_1 u_1, t_1+u_1)= i/16π^2 1/(2(t_1+u_1)){π^2+ ln^2(-c)-ln^2 c_12}.That for CTri4 is a little more complicated:CTri4(t_1,u_1)= i/16π^2 1/(Q^2(κ_1-κ_2)){ K(κ_1,κ_3)+K(κ_1,κ_4)-K(κ_2,κ_3)-K(κ_2,κ_4) +π^2/2+1/2 ln^2κ_1 - 1/2 ln^2(-κ_2) -ln(-u_1/Q^2)ln(-κ_2(1-κ_1)/(κ_1(1-κ_2))) +Li_2(κ_1)-Li_2(κ_2)},whereκ_1=(1+y_23+√((1+y_23)^2+4r_b))/2, κ_2=(1+y_23-√((1+y_23)^2+4r_b))/2,andκ_3=(1+√(1+4r_b))/2, κ_4=(1-√(1+4r_b))/2.In the DIS region, one can show that κ_2<κ_4<0<κ_1<1<κ_3.
What is crucial is that κ_1<1, which ensures that the functions in the result are real. That is,K(κ_1,κ_3)≐ ln(κ_3-1)ln((1-κ_1)/(κ_3-κ_1))- ln(κ_3-κ_1)ln(κ_1/(κ_3-κ_1))-7π^2/6 +Li_2((1-κ_3)/(κ_1-κ_3))+Li_2(κ_1/(κ_1-κ_3)), K(κ_1,κ_4)≐ ln(κ_1-κ_4)ln((1-κ_1)/(κ_1-κ_4))- ln(-κ_4)ln(κ_1/(κ_1-κ_4))+π^2/6 -Li_2((κ_1-1)/(κ_1-κ_4))-Li_2(-κ_4/(κ_1-κ_4)), K(κ_2,κ_3)≐ ln(κ_3-1)ln((1-κ_1)/(κ_3-κ_2))- lnκ_3 ln(-κ_2/(κ_3-κ_2)) +Li_2((κ_3-1)/(κ_3-κ_2))-Li_2(κ_3/(κ_3-κ_2)), K(κ_2,κ_4)≐ ln(1-κ_4)ln((1-κ_2)/(κ_4-κ_2))- ln(-κ_4)ln(-κ_2/(κ_4-κ_2)) +Li_2((1-κ_4)/(κ_2-κ_4))-Li_2(-κ_2/(κ_2-κ_4)),where all imaginary parts are dropped. In addition, bubble integrals B_0(p̃^2,m_1^2,m_2^2) also appear in the final result. They are UV divergent and thus μ dependent. According to our previous classification, the UV divergences in the B_0 functions are removed in the MS scheme, and the μ dependence, i.e., the factor lnμ/Q, is absorbed into H_3. Thus, the following finite b_0 functions are used to express H_2:b_0(p̃^2,m_1^2,m_2^2)=B_0(p̃^2,m_1^2,m_2^2) -i/16π^2(2/ε-γ_E+ln(4πμ^2/Q^2)),whereB_0(p̃^2,m_1^2,m_2^2)= ∫_l1/[l^2-m_1^2+i][(l+p̃)^2-m_2^2+i].Besides, there are two special bubble integrals, B_v and B_m, arising from the mass renormalization of Fig.<ref>(j,k); they have been given in eq.(<ref>).For completeness we also present the explicit expressions of the b_0(p̃^2,m_1^2,m_2^2) functions here. In our case, the b_0 functions involved can be divided into two classes: m_1=m_2=m, and m_1=0 or m_2=0. For the first class we haveb_0(p̃^2,m^2,m^2)= i/16π^2( 2+ln(Q^2/m^2)+ωln((ω-1)/(ω+1))),ω=√(1-4m^2/(p̃^2+i)).Two special cases are involved in our result: p̃^2=s_1+2m^2 and p̃^2=q^2=-Q^2. After continuation, the real parts of these two b_0 functions areb_0(s_1+2m^2,m^2,m^2)= i/16π^2( 2+ln(Q^2/m^2)+ωln((1-ω)/(1+ω))),ω=√(1-4m^2/(s_1+2m^2)), b_0(-Q^2,m^2,m^2)= i/16π^2( 2+ln(Q^2/m^2)+ωln((ω-1)/(ω+1))),ω=√(1+4m^2/Q^2).For the second class of b_0 functions, one internal propagator is massless. The general result isb_0(p̃^2,0,m^2)= i/16π^2( 2+ln(Q^2/(-p̃^2+m^2))+m^2/p̃^2 ln((-p̃^2+m^2)/m^2)).There are three special cases in our calculation: 1) p̃^2=0, 2) p̃^2=m^2, 3) p̃^2<0. By taking the proper limits, the first two cases can be obtained:b_0(0,0,m^2)= i/16π^2(1+ln(Q^2/m^2)), b_0(m^2,0,m^2)= i/16π^2(2+ln(Q^2/m^2)).The last case is well defined from the general expression. Specifically, p̃^2=m^2+t_1 or m^2+u_1 in our calculation, and both are negative in the DIS region.
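As a quick numerical cross-check of the closed forms above, the short standalone Python sketch below (i) compares the general b_0(p̃^2,0,m^2) against its two quoted limits, and (ii) compares the closed form of B_v from the mass-renormalization section against its defining integral. The overall factor i/16π^2 is stripped, and the numerical values of m^2, Q^2, z and τ are arbitrary test inputs, not physics choices.

```python
import numpy as np
from scipy.integrate import quad

m2, Q2 = 1.7, 25.0  # arbitrary test values for m^2 and Q^2 (assumptions)

def b0_0m(pt2, m2, Q2):
    """General b_0(pt2, 0, m^2), with the overall i/(16 pi^2) stripped."""
    return 2.0 + np.log(Q2 / (-pt2 + m2)) + (m2 / pt2) * np.log((-pt2 + m2) / m2)

# Limit pt2 -> 0 should reproduce b_0(0,0,m^2) = 1 + ln(Q^2/m^2)
print(b0_0m(1e-8, m2, Q2), 1.0 + np.log(Q2 / m2))

# Limit pt2 -> m^2 should reproduce b_0(m^2,0,m^2) = 2 + ln(Q^2/m^2)
print(b0_0m(m2 * (1.0 - 1e-8), m2, Q2), 2.0 + np.log(Q2 / m2))

def Bv_closed(z, tau):
    """Closed form of B_v(z,tau) quoted with the renormalized self-energy."""
    return ((2 - z) * z - 2 * (1 - z**2) * np.log(1 + z)) / (4 * z**2) - np.log(tau)

def Bv_integral(z, tau):
    """Defining integral: int_0^1 dx x ln(x z + 1) - ln(tau)."""
    val, _ = quad(lambda x: x * np.log(x * z + 1.0), 0.0, 1.0)
    return val - np.log(tau)

z, tau = 0.73, Q2 / m2
print(Bv_closed(z, tau), Bv_integral(z, tau))  # the two numbers should agree
```

Both pairs of printed numbers agree to the quadrature accuracy, confirming the quoted limits and the closed form of B_v.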
http://arxiv.org/abs/1709.08970v1
{ "authors": [ "Guang-Peng Zhang" ], "categories": [ "hep-ph" ], "primary_category": "hep-ph", "published": "20170926122655", "title": "Back-to-back heavy quark pair production in Semi-inclusive DIS" }
1. IPNO, Univ. Paris-Sud, CNRS/IN2P3, Université Paris-Saclay, 91406 Orsay, France.2. National Centre for Nuclear Research (NCBJ), Hoza 69, 00-681 Warsaw, Poland. We report on the potential offered by a fixed-target experiment at the LHC using the proton and ion LHC beams (the AFTER@LHC project) for the study of exclusive J/ψ photoproduction in pA and AA collisions. The foreseen usage of polarised targets (hydrogen, deuteron, helium) gives access to measurements of the Single-Transverse-Spin Asymmetries of this exclusive process, therefore allowing one to access the helicity-flip Generalised Parton Distribution (GPD) E_g. We detail the expected yields of photoproduced J/ψ in proton-hydrogen and lead-hydrogen collisions and discuss the statistical uncertainties on the asymmetry measurement for one year of data taking at the LHC. Quarkonium-photoproduction prospects at a fixed-target experiment at the LHC (AFTER@LHC) L. Massacrier^1, J.P. Lansberg^1, L. Szymanowski^2 and J. Wagner^2========================================================================================§ THE AFTER@LHC PROJECT AFTER@LHC is a proposal for a multi-purpose fixed-target programme using the multi-TeV proton or heavy-ion beams of the LHC to perform the most energetic fixed-target collisions ever achieved. If used in fixed-target mode, the LHC beams offer the possibility to study with high precision pH or pA collisions at √(s_NN) = 115 GeV and PbA collisions at √(s_NN) = 72 GeV, where A is the atomic mass of the target. The fixed-target mode offers several unique assets compared to the collider mode: outstanding luminosities can be reached thanks to the high density of the target; standard detectors can access the far backward centre-of-mass (c.m.) region thanks to the boost between the colliding-nucleon c.m. system and the laboratory system (a region that remains completely uncharted with hard reactions); the target species can easily be changed; and polarised targets can be used for spin studies. The physics opportunities offered by a fixed-target programme at the LHC have been published in <cit.> and are summarised below in three main objectives. First, whereas the need for precise measurements of the partonic structure of nucleons and nuclei at small momentum fraction x is usually highlighted as a strong motivation for new large-scale experimental facilities, the structure of nucleons and nuclei at high x is probably just as poorly known, with long-standing puzzles such as the EMC effect <cit.> in nuclei or a possible non-perturbative source of charm or beauty quarks in the proton, which would carry a significant fraction of its momentum <cit.>. With an extensive coverage of the backward region, corresponding to high x in the target, AFTER@LHC is very well placed to perform this physics with a hadron beam. The second objective of AFTER@LHC is the search for and characterisation of the Quark-Gluon Plasma (QGP), a deconfined state of nuclear matter which prevailed in the universe a few microseconds after the Big Bang. The QGP is expected to be formed when hadronic matter is extremely compressed or heated. These conditions can be achieved in ultra-relativistic heavy-ion (HI) collisions. AFTER@LHC, with a c.m. energy of 72 GeV, provides complementary coverage to the RHIC- and SPS-based experiments, in the region of high temperatures and low baryon chemical potentials, where the QGP is expected to be produced.
AFTER@LHC will provide crucial information about the phase transition by: (i) scanning the longitudinal extension of the hot medium, (ii) colliding systems of different sizes, and (iii) analysing the centrality dependence of these collisions. Together, these should provide a measurement of the temperature dependence of the system's viscosity, both in the QGP and in the hadron-gas phase. Additionally, measurements of the production of various quarkonium states in HI collisions can provide insight into the thermodynamic properties of the QGP. Their sequential suppression was predicted to occur in the deconfined partonic medium due to the Debye screening of the quark-antiquark potential <cit.>. However, other effects (Cold Nuclear Matter (CNM) effects, feed-down, secondary production via coalescence, ...) can also alter the observed yields. AFTER@LHC will provide a complete set of quarkonium measurements (together with open heavy flavours), allowing one to access the temperature of the medium formed in AA collisions, and the cold nuclear matter effects in pA (AA) collisions. Thanks to the large expected statistics, a golden probe will be the measurement of Υ(nS) production in pp, pA and AA collisions, allowing one to calibrate the quarkonium thermometer and to search for the phase transition by looking at the Υ(nS) suppression (and at other observables, such as the charged-hadron v_2) as a function of rapidity and system size. Finally, despite decades of effort, the internal structure of the nucleon and the distribution and dynamics of its constituents are still largely unknown. One of the most significant issues is our limited understanding of the spin structure of the nucleon, especially how its elementary constituents (quarks and gluons) bind into a spin-half object. Among others, Single Transverse Spin Asymmetries (STSA) in different hard-scattering processes are powerful observables to further explore the dynamics of the partons inside hadrons <cit.>. Thanks to the large yields expected with AFTER@LHC, the STSA of heavy flavours and quarkonia, which are currently poorly known, could be measured with high accuracy if a polarised target is installed <cit.>. We show here that AFTER@LHC can also rely on quarkonium exclusive-photoproduction processes to explore the three-dimensional tomography of hadrons via Generalised Parton Distributions (GPDs) <cit.>. Photoproduction is indeed accessible in Ultra-Peripheral Collisions (UPCs), and the quarkonium mass presumably sets the hard scale needed to use collinear factorisation in terms of (gluon) GPDs, which are directly related to the total angular momentum carried by quarks and gluons. In fact, exclusive J/ψ production <cit.> has drawn much attention in recent years due to the fact that it is sensitive, even at leading order, to gluon GPDs. Besides, with the addition of a polarised hydrogen[Measurements with deuteron and helium targets are also considered.] target, AFTER@LHC opens a unique possibility to study the STSA of this process, which is sensitive to the yet unknown GPD E_g <cit.>. In this contribution, we report on such studies for collisions of the proton and lead beams on a polarised hydrogen target at AFTER@LHC energies. § POSSIBLE TECHNICAL IMPLEMENTATIONS AT THE LHC AND PROJECTED LUMINOSITIES Several promising technical solutions exist to perform fixed-target collisions at the LHC. One can either use an internal (solid or gaseous) target coupled to an already existing LHC detector, or build a new beam line together with a new detector.
The first solution can be achieved on a shorter time scale, with limited cost and civil engineering. Moreover, the fixed-target programme can be run simultaneously with the current LHC collider experiments, without affecting the LHC performance. The direct injection of noble gases into the LHC beam pipe is currently used by the LHCb collaboration with the SMOG device <cit.>. However, this system has some limitations, in particular: (i) the gas density achieved inside the Vertex Locator of LHCb is small (of the order of 10^-7 mbar); (ii) there is no possibility to inject polarised gas; (iii) there is no dedicated pumping system close to the target; (iv) the data-taking time has so far been limited to a few days in a row. The use of more complex systems with higher-density gaseous and polarised targets inside one of the existing LHC experiments is under study. For instance, a hydrogen gas jet is currently used at RHIC to measure the proton beam polarisation <cit.>. The H-jet system consists of a polarised free atomic beam source cooled down to 80 K, providing a hydrogen inlet flux of 1.3 × 10^7 H/s. With such a device, the gas density can be increased by about one order of magnitude with respect to the SMOG device, and the system can probably be run continuously. Another promising solution is the use of an openable storage cell placed inside the LHC beam pipe. Such a system was developed for the HERMES experiment <cit.>. Polarised hydrogen, deuterium and helium-3 can be injected at densities about 200 times larger than those of the H-jet system, as well as heavier unpolarised gases. Fixed-target collisions can also be obtained with a solid target (wire, foil) intercepting the LHC beams. There are two ways of doing so: either with a system that moves the target directly into the beam halo <cit.>, or with a bent crystal (see the work by the UA9 collaboration <cit.>) placed upstream of an existing experiment (∼ 100 m) to deflect the beam halo onto the fixed target. In both cases, the target (or an assembly of several targets) can be placed a few centimetres from the nominal interaction point, allowing one to fully exploit the performance of an existing detector. The use of a bent crystal offers the additional advantage of a better control of the flux[The deflected halo beam flux is considered to be about 5 × 10^8 p/s and 10^5 Pb/s.] of particles sent onto the target, and therefore of the luminosity determination. Most probably, such simple solid targets could not be polarised; the spin-physics part of the AFTER@LHC programme could therefore not be conducted with this option. Tab. <ref> summarises the target areal density, the beam flux intercepting the target, and the expected instantaneous and yearly integrated luminosities for pH and PbH collisions, for the possible technical solutions described above. Luminosities as large as 10 fb^-1 can be collected in pH collisions in one LHC year with a storage cell. In PbH collisions, similar luminosities (∼ 100 nb^-1) can be reached with a gas jet or a storage cell, since the gas density has to be levelled in order to avoid a too large beam consumption[Assuming a total cross section σ_PbH = 3 barn, 15% of the beam is used over a fill.]. We stress that these are annual numbers and that they can be accumulated over several runs.
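To make these luminosity figures concrete, the back-of-the-envelope Python sketch below multiplies the beam flux by the target areal density. The bunch parameters and revolution frequency are standard LHC proton-beam values; the areal density of the storage cell and the effective running time per year are assumptions chosen only to reproduce the ~10 fb^-1 order of magnitude quoted above, not numbers from the table.

```python
# Fixed-target luminosity estimate: L_inst = flux (p/s) x areal density (cm^-2).

N_PER_BUNCH = 1.15e11          # protons per bunch (nominal LHC)
N_BUNCHES = 2808               # number of bunches (nominal LHC)
F_REV = 11245.0                # revolution frequency in Hz

THETA_H = 2.8e14               # H areal density of a storage cell, cm^-2 (assumption)
T_YEAR = 1e7                   # effective seconds of proton running per year (assumption)
CM2_TO_INV_FB = 1e-39          # 1 cm^-2 = 1e-39 fb^-1

flux = N_PER_BUNCH * N_BUNCHES * F_REV          # ~3.6e18 protons/s through the cell
l_inst = flux * THETA_H                          # ~1e33 cm^-2 s^-1
l_year_fb = l_inst * T_YEAR * CM2_TO_INV_FB      # integrated over one year

print(f"instantaneous luminosity: {l_inst:.2e} cm^-2 s^-1")
print(f"yearly integrated luminosity: {l_year_fb:.1f} fb^-1")   # ~10 fb^-1
```

With these inputs the estimate lands at about 10 fb^-1 per year, matching the pH storage-cell figure quoted in the text.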
§ PROSPECTS FOR J/Ψ PHOTOPRODUCTION STUDIES WITH AFTER@LHC Let us now quickly summarise our feasibility study: 100 000 photoproduced J/ψ were generated in the dimuon decay channel, using the STARLIGHT Monte Carlo (MC) generator <cit.>, in pH^↑ (√(s) = 115 GeV) and Pb+H^↑ (√(s_NN) = 72 GeV) collisions. In pH^↑ collisions, both protons can act as the photon emitter, while in Pb+H^↑ only the Pb nucleus was considered as the photon emitter (the dominant contribution). The J/ψ photoproduction cross sections given by the STARLIGHT MC are summarised in Tab. <ref>. In order to mimic an LHCb-like forward detector, the following kinematical cuts have been applied at the single-muon level: 2 < η_lab^μ < 5 and p_ T^μ > 0.4 GeV/c. Fig. <ref> shows the rapidity-differential (left) and p_ T-differential (right) cross sections of the photoproduced J/ψ in the dimuon decay channel in pH^↑ collisions, generated with the STARLIGHT MC generator. The blue curves were produced without applying any kinematical cuts, while the red curves were produced by applying the cuts 2 < η_lab^μ < 5 and p_ T^μ > 0.4 GeV/c at the single-muon level. The left panel also indicates the photon-proton c.m. energy (W_γ p), calculated as W_γ p = √(M_J/ψ^2 + M_p^2 + 2 × M_p× M_J/ψ×cosh ( y_ lab)), with M_J/ψ and M_p being the J/ψ and proton masses, respectively, and y_lab the J/ψ rapidity in the laboratory frame. The vertical axes also show the yearly J/ψ photoproduction yield per 0.1 unit of y_ lab (left) and per 0.1 GeV/c (right). An integrated yearly luminosity of L_ int = 10 fb^-1, corresponding to the storage-cell solution, has been assumed in pH^↑ collisions in order to calculate the yearly J/ψ photoproduction yield. About 200 000 photoproduced J/ψ are expected to be detected in an LHCb-like acceptance per year at AFTER@LHC[The detector efficiency has been assumed to be 100%.]. Similarly to Fig. <ref>, Fig. <ref> shows the rapidity-differential (left) and p_ T-differential (right) cross sections of the photoproduced J/ψ in the dimuon decay channel in PbH^↑ collisions, generated with the STARLIGHT MC generator. The collection of an integrated luminosity of 100 nb^-1 per year is expected at AFTER@LHC with the storage-cell option. This would result in about 1 000 photoproduced J/ψ per year emitted in an LHCb-like acceptance. In a forthcoming publication <cit.>, we will report on the evaluation of the uncertainty on the STSA, A_N, from the photoproduced-J/ψ yields obtained with the STARLIGHT MC and on the expected modulation for E_g, following <cit.>. Indeed, A_N, the amplitude of the spin-correlated azimuthal modulation of the produced particles, is defined as A_N = 1/PN^↑ - N^↓/N^↑ + N^↓, where N^↑ (N^↓) are the photoproduced-J/ψ yields for an up (down) target-polarisation orientation, and P is the effective polarisation of the target. The statistical uncertainty u_A_N on A_N can then be derived as u_A_N = 2/P(N^↑ + N^↓)^2√(N^↓2u^↑2 + N^↑2u^↓2), with u^↑ (u^↓) the statistical uncertainties on the J/ψ yields with the up (down) polarisation orientation. Dividing the sample into two J/ψ p_ T ranges relevant for the GPD extraction, 0.4 < p_ T < 0.6 GeV/c and 0.6 < p_ T < 0.8 GeV/c, the expected statistical precision on A_N is already better than 10%, given the integrated yearly yield of 200 000 photoproduced J/ψ in pH^↑ collisions at AFTER@LHC. This will also allow for an extraction of A_N as a function of the Feynman-x, x_ F [x_ F = 2 × M_J/ψ×sinh (y_ cms)/√(s), with y_cms being the J/ψ rapidity in the c.m.
system frame and √(s) the c.m. energy].§ CONCLUSION We have presented projections for J/ψ photoproduction measurements in polarised pH^↑ and Pb+H^↑ collisions for one year of data taking at AFTER@LHC energies, assuming a storage-cell technology coupled to an LHCb-like forward detector. In pH^↑ collisions, a yearly yield of about 200 000 photoproduced J/ψ is expected, allowing one to reach a very competitive statistical accuracy on the A_N measurement differential in x_ F. A non-zero asymmetry would be the first signature of a non-zero gluon GPD E_g.§ ACKNOWLEDGEMENTS We thank D. Kikola, S. Klein, A. Metz, J. Nystrand and B. Trzeciak for useful discussions. This work is partly supported by the COPIN-IN2P3 Agreement, by the grant of the National Science Center, Poland, No. 2015/17/B/ST2/01838, by the French-Polish scientific agreement POLONIUM, by the French P2IO Excellence Laboratory and by the French CNRS-IN2P3 (project TMD@NLO).
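As a small companion to the statistical discussion of the previous section, the Python sketch below propagates the yield uncertainties through the A_N definition. The even yield split, the effective polarisation value and the Poisson assumption (u = √N) are illustrative assumptions, not numbers taken from the text.

```python
import math

def a_n_stat_uncertainty(n_up, n_down, pol):
    """Statistical uncertainty on A_N = (1/P)(N_up - N_down)/(N_up + N_down),
    assuming Poisson uncertainties u = sqrt(N) on each yield."""
    u_up, u_down = math.sqrt(n_up), math.sqrt(n_down)
    return (2.0 / (pol * (n_up + n_down) ** 2)) * math.sqrt(
        n_down**2 * u_up**2 + n_up**2 * u_down**2
    )

# 200 000 J/psi per year split evenly between the two polarisation
# orientations, with an assumed effective polarisation P = 0.8.
n_tot, pol = 200_000, 0.8
print(a_n_stat_uncertainty(n_tot / 2, n_tot / 2, pol))  # ~0.003

# Even a p_T or x_F bin holding only 1% of the sample stays well below an
# absolute uncertainty of 0.1 on A_N, consistent with the quoted precision.
n_bin = 0.01 * n_tot
print(a_n_stat_uncertainty(n_bin / 2, n_bin / 2, pol))  # ~0.028
```

For an even split the expression collapses to u = 1/(P√N), which makes the scaling with the yearly yield explicit.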
http://arxiv.org/abs/1709.09044v1
{ "authors": [ "L. Massacrier", "J. P. Lansberg", "L. Szymanowski", "J. Wagner" ], "categories": [ "nucl-ex", "hep-ex", "nucl-th" ], "primary_category": "nucl-ex", "published": "20170926143017", "title": "Quarkonium-photoproduction prospects at a fixed-target experiment at the LHC (AFTER@LHC)" }
DeepTransport: Learning Spatial-Temporal Dependency for Traffic Condition Forecasting Xingyi Cheng (This work was done before leaving Baidu. Email: [email protected].), Ruiqing Zhang, Jie Zhou, Wei Xu Baidu Research - Institute of Deep Learning [email protected] {zhangruiqing01,zhoujie01,wei.xu}@baidu.com ================================================================================================================================================================================================================================================ While much of the work in the design of convolutional networks over the last five years has revolved around the empirical investigation of the importance of depth, filter sizes, and number of feature channels, recent studies have shown that branching, i.e., splitting the computation along parallel but distinct threads and then aggregating their outputs, represents a promising new dimension for significant improvements in performance. To combat the complexity of design choices in multi-branch architectures, prior work has adopted simple strategies, such as a fixed branching factor, the same input being fed to all parallel branches, and an additive combination of the outputs produced by all branches at aggregation points. In this work we remove these predefined choices and propose an algorithm to learn the connections between branches in the network. Instead of being chosen a priori by the human designer, the multi-branch connectivity is learned simultaneously with the weights of the network by optimizing a single loss function defined with respect to the end task. We demonstrate our approach on the problem of multi-class image classification using four different datasets, where it yields consistently higher accuracy compared to the state-of-the-art "ResNeXt" multi-branch network given the same learning capacity. § INTRODUCTION Deep neural networks have emerged as one of the most prominent models for problems that require the learning of complex functions and that involve large amounts of training data. While deep learning has recently enabled dramatic performance improvements in many application domains, the design of deep architectures is still a challenging and time-consuming endeavor. The difficulty lies in the many architecture choices that impact, often significantly, the performance of the system. In the specific domain of image categorization, which is the focus of this paper, significant research effort has been invested in the empirical study of how depth, filter sizes, number of feature maps, and choice of nonlinearities affect performance <cit.>. Recently, several authors have proposed to simplify the architecture design by defining convolutional neural networks (CNNs) in terms of combinations of basic building blocks. This strategy was arguably first popularized by the VGG networks <cit.>, which were built by stacking a series of convolutional layers having identical filter size (3 × 3). The idea of modularized CNN design was made even more explicit in residual networks (ResNets) <cit.>, which are constructed by combining residual blocks of fixed topology. While in ResNets residual blocks are stacked one on top of each other to form very deep networks, the recently introduced ResNeXt models <cit.> have shown that it is also beneficial to arrange these building blocks in parallel to build multi-branch convolutional networks.
The modular component of ResNeXt then consists of C parallel branches, corresponding to residual blocks with identical topology but distinct parameters. Networks built by stacking these multi-branch components have been shown to yield better results than single-thread ResNets of the same capacity.While the principle of modularized design has greatly simplified the challenge of building effective architectures for image analysis, the choice of how to combine and aggregate the computations of these building blocks still rests on the shoulders of the human designer. In order to avoid a combinatorial explosion of options, prior work has relied on simple, uniform rules of aggregation and composition. For example, ResNeXt models <cit.> are based on the following set of simplifying assumptions: the branching factor C (also referred to as cardinality) is fixed to the same constant in all layers of the network, all branches of a module are fed the same input, and the outputs of parallel branches are aggregated by a simple additive operation that provides the input to the next module. In this paper we remove these predefined choices and propose an algorithm that learns to combine and aggregate the building blocks of a neural network. In this new regime, the network connectivity naturally arises as a result of the training optimization rather than being hand-defined by the human designer. We demonstrate our approach using residual blocks as our modular components, but we take inspiration from ResNeXt by arranging these modules in a multi-branch architecture. Rather than predefining the input connections and aggregation pathways of each branch, we let the algorithm discover the optimal way to combine and connect residual blocks with respect to the end learning objective. This is achieved by means of gates, i.e., learned binary parameters that act as "switches" determining the final connectivity in our network. The gates are learned together with the convolutional weights of the network, as part of a joint optimization via backpropagation with respect to a traditional multi-class classification objective. We demonstrate that, given the same budget of residual blocks (and parameters), our learned architecture consistently outperforms the predefined ResNeXt network in all our experiments. An interesting byproduct of our approach is that it can automatically identify residual blocks that are superfluous, i.e., unnecessary or detrimental for the end objective. At the end of the optimization, these unused residual blocks can be pruned away without any impact on the learned hypothesis, while yielding substantial savings in the number of parameters to store and in test-time computation. § TECHNICAL APPROACH §.§ Modular multi-branch architectureWe begin by providing a brief review of residual blocks <cit.>, which represent the modular components of our architecture. We then discuss ResNeXt <cit.>, which inspired the multi-branch structure of our networks. Finally, we present our approach to learning the connectivity of multi-branch architectures using binary gates. Residual Learning. The framework of residual learning was introduced by He et al. <cit.> as a strategy to cope with the challenging optimization of deep models. The approach was inspired by the observation that deeper neural networks, despite having larger learning capacity than shallower models, often yield higher training error, due to the difficulty of optimization posed by increasing depth.
Yet, given any arbitrary shallow network, it is trivially possible to reproduce its function using a deeper model, e.g., by copying the shallow network into the top portion of the deep model and by setting the remaining layers to implement identity functions. This simple yet revealing intuition inspired the authors to introduce residual blocks, which learn residual functions with reference to the layer input. Figure <ref>(a) illustrates an example of these modular components, where the 3 layers in the block implement a residual function ℱ( x). A shortcut connection aggregates the residual block output ℱ( x) with its input x, thus computing ℱ( x) +x, which becomes the input to the next block. The point of this module is that if at any depth in the network the representation x is already optimal, then ℱ( x) can be trivially set to be the zero function, which is easier to learn than an identity mapping. In fact, it was shown <cit.> that reformulating the layers as learning residuals eases optimization and enables the effective training of networks that are substantially deeper than previously possible. Since we are interested in applying our approach to image categorization, in this paper we use convolutional residual blocks with the bottleneck design <cit.> shown in Figure <ref>(a). The first 1×1 layer projects the input feature maps onto a lower-dimensional embedding, the second applies 3×3 filters, and the third restores the original feature map dimensionality. As in <cit.>, Batch Normalization <cit.> and ReLU <cit.> are applied after each layer, and a ReLU is used after each aggregation. The multi-branch architecture of ResNeXt. Recent work <cit.> has shown that it is beneficial to arrange residual blocks not only along the depth dimension but also in parallel, implementing multiple threads of computation feeding from the same input layer. The outputs of the parallel residual blocks are then summed up together with the original input and passed on to the next module. The resulting multi-branch module is illustrated in Figure <ref>(b). More formally, let ℱ( x; θ^(i)_j) be the transformation implemented by the j-th residual block in the i-th module of the network, where j=1,…,C and i=1,…,M, with M denoting the total number of modules stacked on top of each other to form the complete network. The hyperparameter C is called the cardinality of the module and defines the number of parallel branches within each module. The hyperparameter M controls the total depth of the network: under the assumption of 3 layers per residual block (as shown in the figure), the total depth of the network is given by D= 2 + 3M (an initial convolutional layer and an output fully-connected layer add 2 layers). Note that in ResNeXt all residual blocks in a module have the same topology (ℱ) but each block has its own parameters (θ^(i)_j denotes the parameters of residual block j in module i). Then, the output of the i-th module is computed as:y =x + ∑_j=1^C ℱ( x; θ^(i)_j)Tensor y represents the input to the (i+1)-th module. Note that the ResNeXt module effectively implements a split-transform-merge strategy that performs a projection of the input into separate lower-dimensional embeddings (via bottlenecks), a separate transformation within each embedding, a projection back to the high-dimensional space, and a final aggregation via addition.
It can be shown that the solutions that can be implemented by such a module are a strict subspace of the solutions of a single layer operating on the high-dimensional embedding, but at a considerably lower cost in terms of computational complexity and number of parameters. In <cit.> it was experimentally shown that increasing the cardinality C is a more effective way of improving accuracy than increasing depth or the number of filters. In other words, given a fixed budget of parameters, ResNeXt multi-branch networks were shown to consistently outperform single-branch ResNets of the same learning capacity.We note, however, that in an attempt to ease network design, several restrictive limitations were embedded in the architecture of ResNeXt modules: each ResNeXt module implements C parallel feature extractors that operate on the same input; furthermore, the number of active branches is constant across all depth levels of the network. In the next subsection we present an approach that removes these restrictions without adding any significant burden on the process of manual network design (with the exception of a single additional integer hyperparameter for the entire network). Our gated multi-branch architecture. As in ResNeXt, our proposed architecture consists of a stack of M multi-branch modules, each containing C parallel feature extractors. However, differently from ResNeXt, each branch in a module can take a different input. The input pathway of each branch is controlled by a binary gate vector that is learned jointly with the weights of the network. Let g^(i)_j = [g^(i)_j,1, g^(i)_j,2, …, g^(i)_j,C]^⊤∈{0, 1}^C be the binary gate vector defining the active input connections feeding the j-th residual block in module i. If g^(i)_j,k = 1, then the activation volume produced by the k-th branch in module (i-1) is fed as input to the j-th residual block of module i. If g^(i)_j,k = 0, then the output from the k-th branch in the previous module is ignored by the j-th residual block of the current module. Thus, if we denote with y^(i-1)_k the output activation tensor computed by the k-th branch in module (i-1), the input x^(i)_j to the j-th residual block in module i will be given by the following equation: x^(i)_j = ∑_k=1^Cg^(i)_j,k· y^(i-1)_kThen, the output of this block will be obtained through the usual residual computation, i.e., y^(i)_j =x^(i)_j+ ℱ( x^(i)_j; θ^(i)_j). We note that under this model we no longer have fixed aggregation nodes summing up all outputs computed from a module. Instead, the gate g^(i)_j now determines selectively, for each block, which branches from the previous module will be aggregated and provided as input to the block. Under this scheme, the parallel branches in a module receive different inputs and as such are likely to yield more diverse features. We point out that depending on the constraints posed over g^(i)_j, different interesting models can be realized. For example, by introducing the constraint that ∑_k g^(i)_j,k = 1 for all blocks j, each residual block will receive input from only one branch (since each g^(i)_j,k must be either 0 or 1). It can be noted that at the other end of the spectrum, if we set g^(i)_j,k=1 for all blocks j,k in each module i, then all connections would be active and we would obtain again the fixed ResNeXt architecture.
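To make the gated module concrete, below is a minimal PyTorch sketch of eq.(2) combined with the stochastic gate sampling described in the next subsection: each block keeps a real-valued gate vector, and at every forward pass K distinct input branches are drawn from the corresponding multinomial distribution. All layer sizes and names are illustrative assumptions, and the sketch deliberately omits the straight-through gradient handling and the gate-update/clipping step of the actual training procedure.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Bottleneck(nn.Module):
    """3-layer residual branch: 1x1 reduce -> 3x3 -> 1x1 restore (as in Fig. 1a)."""
    def __init__(self, channels, width):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, width, 1, bias=False), nn.BatchNorm2d(width), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, 3, padding=1, bias=False), nn.BatchNorm2d(width), nn.ReLU(inplace=True),
            nn.Conv2d(width, channels, 1, bias=False), nn.BatchNorm2d(channels),
        )

    def forward(self, x):
        return F.relu(x + self.body(x))  # residual computation y = x + F(x)

class GatedModule(nn.Module):
    """One multi-branch module: C bottleneck branches, each reading K of the C
    outputs of the previous module, selected by sampled binary gates."""
    def __init__(self, channels, width, cardinality=8, fan_in=4):
        super().__init__()
        self.C, self.K = cardinality, fan_in
        self.branches = nn.ModuleList(Bottleneck(channels, width) for _ in range(cardinality))
        # real-valued gates g~ in [0,1]: one C-vector per branch of this module
        self.gates = nn.Parameter(torch.full((cardinality, cardinality), 0.5))

    def forward(self, prev_outputs):            # list of C tensors from module i-1
        stacked = torch.stack(prev_outputs)     # shape (C, N, channels, H, W)
        new_outputs = []
        for j, branch in enumerate(self.branches):
            clipped = self.gates[j].clamp(0, 1)
            probs = clipped / clipped.sum()     # multinomial over the C inputs
            idx = torch.multinomial(probs, self.K, replacement=False)  # K distinct
            g = torch.zeros(self.C, device=stacked.device)
            g[idx] = 1.0                        # binary gate with exactly K active entries
            x_j = (g.view(-1, 1, 1, 1, 1) * stacked).sum(0)  # eq.(2)
            new_outputs.append(branch(x_j))
        return new_outputs
```

A full network would stack such modules and replicate the stem output C times to feed the first one; note that with fan_in equal to cardinality every input is always active, and the module reduces to the fixed ResNeXt aggregation of eq.(1).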
In our experiments we will demonstrate that the best results are achieved for a middle ground between these two extremes, i.e., by connecting each block to K branches, where K is an integer-valued hyperparameter such that 1< K < C. We refer to this hyperparameter as the fan-in of a block. As discussed in the next section, the gate vector g^(i)_j for each block is learned simultaneously with all the other weights in the network via backpropagation. Finally, we note that it may be possible for a residual block in the network to become unused. This happens when, as a result of the optimization, block k in module (i-1) is such that g^(i)_jk=0 for all j=1,…,C. In this case, at the end of the optimization, we prune the block in order to reduce the number of parameters to store and to speed up inference (note that this does not affect the function computed by the network). Thus, at any point in the network the total number of active parallel threads can be any number smaller than or equal to C. This implies that a variable branching factor is learned adaptively for the different depths in the network. §.§ Learning to connect branchesOur learning algorithm performs joint optimization of a given learning objective ℓ with respect to both the weights of the network (θ) and the gates (g). Since in this paper we apply our method to the problem of image categorization, we use the traditional multi-class cross-entropy objective for the loss ℓ. However, our approach can be applied without change to other loss functions as well as to other tasks benefitting from a multi-branch architecture. In our networks the weights have real values, as in traditional networks, while the branch gates have binary values. This renders the optimization more challenging. To learn these binary parameters, we adopt a modified version of backpropagation, inspired by the BinaryConnect algorithm proposed by Courbariaux et al. <cit.> to train neural networks with binary weights. During training we store and update a real-valued version g̃^(i)_j ∈[0,1]^C of the branch gates, with entries clipped to lie in the continuous interval from 0 to 1. In general, the training via backpropagation consists of three steps: 1) forward propagation, 2) backward propagation, and 3) parameter update. At each iteration, we stochastically binarize the real-valued branch gates into binary-valued vectors g^(i)_j ∈{0,1}^C, which are then used for the forward and backward propagation (steps 1 and 2). During the parameter update (step 3), instead, the method updates the real-valued branch gates g̃^(i)_j. The weights θ of the convolutional and fully connected layers are optimized using standard backpropagation. We discuss below the details of our gate training procedure, under the constraint that at any time there can be only K active entries in the binary branch gate g^(i)_j, where K is a predefined integer hyperparameter with 1≤ K ≤ C. In other words, we impose the following constraints:g^(i)_j,k∈{0,1},    ∑_k=1^C g^(i)_j,k = K    ∀ j∈{1,…,C} and ∀ i∈{1,…,M}.These constraints imply that each residual block receives input from exactly K branches of the previous module. Forward Propagation. During the forward propagation, our algorithm first normalizes the C real-valued branch gates for each block j to sum up to 1, i.e., such that ∑_k=1^Cg̃^(i)_j,k=1. This is done so that Mult(g̃^(i)_j,1, g̃^(i)_j,2, …, g̃^(i)_j,C) defines a proper multinomial distribution over the C branch connections feeding into block j.
Then, the binary branch gate g^(i)_j is stochastically generated by drawing K distinct samples a_1,a_2,…,a_K ∈{1,…,C} from the multinomial distribution over the branch connections. Finally, the entries corresponding to the K samples are activated in the binary branch gate vector, i.e., g^(i)_j,a_k← 1, for k=1,…,K. The input activation volume to the residual block j is then computed according to Eq. <ref> from the sampled binary branch gates. We note that the sampling from the multinomial distribution ensures that the connections with the largest g̃^(i)_j,k values are more likely to be chosen, while at the same time the stochasticity of this process allows different connectivities to be explored, particularly during the early stages of learning, when the real-valued gates still have fairly uniform values.Backward Propagation. In the backward propagation step, the gradient ∂ℓ/∂ y^(i-1)_k with respect to each branch output is obtained via backpropagation from ∂ℓ/∂ x^(i)_j and the binary gates g^(i)_j,k. Gate Update. In the parameter update step our algorithm computes the gradient with respect to the binary branch gates for each branch. Then, using these computed gradients and the given learning rate, it updates the real-valued branch gates via gradient descent. At this point we clip the updated real-valued branch gates so that they remain within the valid interval [0, 1]. The same clipping strategy was adopted for the binary weights in the work of <cit.>. As discussed in the supplementary material, after the joint training over θ and g, we have found it beneficial to fine-tune the weights θ of the network with fixed binary gates (connectivity), setting as active connections for each block j in module i those corresponding to the K largest values in g̃^(i)_j. Pseudocode for our training procedure is given in the supplementary material. § EXPERIMENTS We tested our approach on the task of image categorization using several benchmarks: CIFAR-10 <cit.>, CIFAR-100 <cit.>, Mini-ImageNet <cit.>, as well as the full ImageNet <cit.>. In this section we discuss the results achieved on CIFAR-100 and ImageNet <cit.>, while the results for CIFAR-10 <cit.> and Mini-ImageNet <cit.> can be found in the Appendix. §.§ CIFAR-100CIFAR-100 is a dataset of color images of size 32x32. It consists of 50,000 training images and 10,000 test images. Each image in CIFAR-100 is categorized into one of 100 possible classes. Effect of fan-in (K). We start by studying the effect of the fan-in hyperparameter (K) on the performance of models built and trained using our proposed approach. The fan-in defines the number of active branches feeding each residual block. For this experiment we use a model obtained by stacking M=6 multi-branch residual modules, each having cardinality C=8 (the number of branches in each module). We use residual blocks consisting of 3 convolutional layers with a bottleneck implementing dimensionality reduction on the number of feature channels, as shown in Figure <ref>. The bottleneck for this experiment was set to w=4. Since each residual block consists of 3 layers, the total depth of the network in terms of learnable layers is D=2+3M=20. We trained and tested this architecture using different fan-in values, K=1,…,8. Note that varying K does not affect the number of parameters; thus, all these models have the same learning capacity. The results are shown in Figure <ref>. We can see that the best accuracy is achieved by connecting each residual block to K=4 branches out of the total C=8 in each module.
Using a very low or very high fan-in yields lower accuracy. Note that when setting K=C, there is no need to learn the gates. In this case each gate is simply replaced by an element-wise addition of the outputs from all the branches. This renders the model equivalent to ResNeXt <cit.>, which has fixed connectivity. Based on the results of Figure <ref>, in all our experiments below we use K=4, since it gives the best accuracy, but also K=1, since it gives high sparsity which, as we will see shortly, implies savings in the number of parameters. Varying the architectures. In Table <ref> we show the classification accuracy achieved with different architectures (the details of each architecture are listed in the Appendix). For each architecture we report results obtained using our method with fan-in K=1 and K=4. We also include the accuracy achieved with full (as opposed to learned) connectivity, which corresponds to ResNeXt. These results show that learning the connectivity produces consistently higher accuracy than using fixed connectivity, with accuracy gains of up to 2.2% compared to the state-of-the-art ResNeXt model. We note that these improvements in accuracy come at little computational training cost: the average training-time overhead for learning gates and weights is about 39% using our unoptimized implementation, compared to learning only the weights given a fixed connectivity. Additionally, for each architecture we include models trained using sparse random connectivity (Fixed-Random). For these models, each gate is set to have K=4 randomly-chosen active connections, and the connectivity is kept fixed during learning of the parameters. We can notice that the accuracy of these nets is considerably lower compared to our models, despite having the same connectivity density (K=4). This shows that the improvements of our approach over ResNeXt are not due to sparser connectivity but rather to the learned connectivity. Parameter savings. Our proposed approach provides the benefit of automatically identifying during training residual blocks that are unnecessary. At the end of the training, the unused residual blocks can be pruned away. This yields savings in the number of parameters to store and in test-time computation. In Table <ref>, columns Train and Test under Params show the original number of parameters (used during training) and the number of parameters after pruning (used at test time). Note that for the biggest architecture, our approach using K=1 yields a parameter saving of 40% compared to ResNeXt with full connectivity (20.5 vs 34.4), while achieving the same accuracy. Thus, in summary, using fan-in K=4 gives models that have the same number of parameters as ResNeXt but yield higher accuracy; using fan-in K=1 gives a significant saving in the number of parameters and accuracy on par with ResNeXt. Model with real-valued gates. We have also attempted to learn our models using real-valued gates, by computing tensors in the forward and backward propagation with respect to gates g̃^(i)_j ∈ [0,1]^C rather than the binary vectors g^(i)_j ∈ {0,1}^C. However, we found this variant to yield consistently lower results compared to our models using binary gates. For example, for model {D=29,w=8,C=8} the best accuracy achieved with real-valued gates is 1.93% worse compared to that obtained with binary gates. In particular, we observed that for this variant the real-valued gates change little over training even when using large learning rates.
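The pruning criterion described under "Parameter savings" is simple to state in code. The sketch below (our own illustration with hypothetical names) marks a block of module (i-1) as prunable when no block of module i draws input from it:

def unused_blocks(binary_gates):
    # binary_gates: array of shape (C, C); entry [j, k] is g^(i)_{j,k}, i.e.
    # whether block j of module i receives input from block k of module (i-1).
    # Block k is unused when its entire column is zero.
    used = binary_gates.sum(axis=0) > 0
    return [k for k, is_used in enumerate(used) if not is_used]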
Conversely, performing the forward and backward propagation using stochastically-sampled binary gates yields a larger exploration of connectivities and results in bigger changes of the auxiliary real-valued gates, leading to better connectivity learning. Visualization of the learned connectivity. Figure 3 provides an illustration of the connectivity learned by our method for K=1 versus the fixed connectivity of ResNeXt for model {D=29,w=8,C=8}. While ResNeXt feeds the same input to all blocks of a module, our algorithm learns different input pathways for each block and yields a branching factor that varies along the depth. §.§ ImageNet Finally, we evaluate our approach on the large-scale ImageNet 2012 dataset <cit.>, which includes images of 1000 classes. We train our approach on the training set (1.28M images) and evaluate it on the validation set (50K images). In Table <ref>, we report the Top-1 and Top-5 accuracies for three different architectures. For these experiments we set K=C/2. We can observe that for all three architectures, our learned connectivity yields an improvement in accuracy over the fixed connectivity of ResNeXt <cit.>. §.§ CIFAR-10 & Mini-ImageNet We invite the reader to review the results achieved on CIFAR-10 & Mini-ImageNet in the Appendix. Also on these datasets our algorithm consistently outperforms the ResNeXt models based on fixed connectivity, with accuracy gains of up to 3.8%. § RELATED WORK Despite their wide adoption, deep networks often require laborious model search in order to yield good results. As a result, significant research effort has been devoted to the design of algorithms for automatic model selection. However, most of this prior work falls within the genre of hyper-parameter optimization <cit.> rather than architecture or connectivity learning. Evolutionary search has been proposed as an interesting framework to learn both the structure as well as the connections in a neural network <cit.>. Architecture search has also been recently formulated as a reinforcement learning problem, with impressive results <cit.>. Unlike these approaches, our method is limited to learning the connectivity within a predefined architecture, but it does so efficiently by gradient descent optimization of the learning objective, as opposed to more costly procedures such as evolutionary search or reinforcement learning. Several authors have proposed learning connectivity by pruning unimportant weights from the network <cit.>. However, these prior methods operate in stages, where initially the network with full connectivity is learned and then connections are greedily removed according to an importance criterion. In PathNet <cit.>, the connectivity within a given architecture was searched via evolution. Compared to these prior approaches, our work provides the advantage of learning the connectivity by direct global optimization of the loss function of the problem at hand, rather than by greedy optimization of a proxy criterion or by evolution. Our technical approach shares similarities with the "Shake-Shake" regularization recently introduced in unpublished work <cit.>. This procedure was demonstrated on two-branch ResNeXt models and consists of randomly scaling tensors produced by parallel branches during each training iteration, while at test time the network uses uniform weighting of tensors. Conversely, our algorithm learns an optimal binary scaling of the parallel tensors with respect to the training objective and uses the resulting network with sparse connectivity at test time.
Our work is also related to approaches that learn a hierarchical structure in the last one or two layers of a network in order to obtain distinct features for different categories <cit.>. Differently from these methods, our algorithm efficiently learns connections at all depths in the network, thus optimizing over a much larger family of connectivity models. While our algorithm is limited to optimizing the connectivity structure within a predefined architecture, Adams et al. <cit.> proposed a nonparametric Bayesian approach that searches over an infinite network using MCMC. Saxena and Verbeek <cit.> introduced convolutional neural fabrics, which are learnable 3D trellises that locally connect response maps at different layers of a CNN. Similarly to our work, they enable optimization over an exponentially large family of connectivities, albeit different from those considered here. § CONCLUSIONS In this paper we introduced an algorithm to learn the connectivity of deep multi-branch networks. The problem is formulated as a single joint optimization over the weights and the branch connections of the model. We tested our approach on challenging image categorization benchmarks, where it led to significant accuracy improvements over the state-of-the-art ResNeXt model. An added benefit of our approach is that it can automatically identify superfluous blocks, which can be pruned without impact on accuracy for more efficient testing and for reducing the number of parameters to store. While our experiments were focused on a particular multi-branch architecture (ResNeXt) and a specific form of building block (residual block), we expect the benefits of our approach to extend to other modules and network structures. For example, it could be applied to learn the connectivity of skip-connections in DenseNets <cit.>, which are currently based on predefined connectivity rules. In this paper, our gates perform non-parametric additive aggregation of the branch outputs. It would be interesting to experiment with learnable (parametric) aggregations of the outputs from the individual branches. Our approach is limited to learning connectivity within a given, fixed architecture. Future work will explore the use of learnable gates for architecture discovery. § PSEUDOCODE OF THE ALGORITHM § EXPERIMENTS ON CIFAR-10 The CIFAR-10 dataset consists of color images of size 32x32. The training set contains 50,000 images, the testing set 10,000 images. Each image in CIFAR-10 is categorized into one of 10 possible classes. In Table <ref>, we report the performance of different models trained on CIFAR-10. From these results we can observe that our models using learned connectivity achieve consistently better performance than the equivalent models trained with the fixed connectivity <cit.>. § EXPERIMENTS ON MINI-IMAGENET Mini-ImageNet is a subset of the full ImageNet <cit.> dataset. It was used in <cit.>. It is created by randomly selecting 100 classes from the full ImageNet <cit.>. For each class, 600 images are randomly selected. We use 500 examples per class for training, and the other 100 examples per class for testing. The selected images are resized to size 84x84 pixels, as in <cit.>. The advantage of this dataset is that it poses the recognition challenges typical of the ImageNet photos, but at the same time it does not require the powerful computational resources needed to train on the full ImageNet dataset.
This allows us to include the additional baselines involving random fixed connectivity (Fixed-Random). We report the performance of different models trained on Mini-ImageNet in Table <ref>. From these results, we see that our models using learned connectivity with fan-in K=4 yield a clear accuracy gain over the same models trained with the fixed full connectivity of ResNeXt <cit.>. The absolute improvement (in Top-1 accuracy) is 3.87% for the 20-layer network and 3.17% for the 29-layer network. We can notice that the accuracy of the models with fixed random connectivity (Fixed-Random) is considerably lower compared to our nets with learned connectivity, despite having the same connectivity density (K=4). This shows that the improvement of our approach over ResNeXt is not due to sparser connectivity but rather to the learned connectivity. § VISUALIZATIONS OF LEARNED CONNECTIVITY The plot in Figure <ref> shows how the number of active branches varies as a function of the module depth for model {D=29,w=4,C=8} trained on CIFAR-100. For K=1, we can observe that the number of active branches tends to be larger for deep modules (closer to the output layer) compared to early modules (closer to the input). We observed this phenomenon consistently for all architectures. This suggests that having many parallel threads of computation is particularly important in deep layers of the network. Conversely, the setting K=4 tends to produce a fairly uniform number of active branches across the modules, and the number is quite close to the maximum value C. For this reason, there is little saving in terms of the number of parameters when using K=4, as there are rarely unused blocks. The plot in Figure <ref> shows the number of active branches as a function of module depth for model {D=50,w=4,C=32} trained on ImageNet, using K=16. § IMPLEMENTATION DETAILS §.§ Architectures and settings for experiments on CIFAR-100 and CIFAR-10 The specifications of the architectures used in all our experiments on CIFAR-10 and CIFAR-100 are given in Table <ref>. Several of these architectures are those presented in the original ResNeXt paper <cit.> and are trained using the same setup, including the data augmentation strategy. Four pixels are padded on each side of the input image, and a 32x32 crop is randomly sampled from the padded image or its horizontal flip, with per-pixel mean subtracted <cit.>. For testing, we use the original 32x32 image. The stacks have output feature maps of sizes 32, 16, and 8, respectively. The models are trained on 8 GPUs with a mini-batch size of 128 (16 per GPU), with a weight decay of 0.0005 and momentum of 0.9. We adopt four incremental training phases with a total of 320 epochs. In phase 1 we train the model for 120 epochs with a learning rate of 0.1 for the convolutional and fully-connected layers, and a learning rate of 0.2 for the gates. In phase 2 we freeze the connectivity by setting as active connections for each block those corresponding to its top-K values in the gates. With this fixed learned connectivity, we fine-tune the model from phase 1 for 100 epochs with a learning rate of 0.1 for the weights. Then, in phase 3 we fine-tune the weights of the model from phase 2 for 50 epochs with a learning rate of 0.01, using again the fixed learned connectivity from phase 1.
Finally, in phase 4 we fine-tune the weights of the model from phase 3 for 50 epochs with a learning rate of 0.001. §.§ Architectures and settings for experiments on ImageNet The architectures for our ImageNet experiments are those specified in the original ResNeXt paper <cit.>. Also for these experiments, we follow the data augmentation strategy described in <cit.>. The input image has size 224x224 and it is randomly cropped from the resized original image. We use a mini-batch size of 256 on 8 GPUs (32 per GPU), with a weight decay of 0.0001 and a momentum of 0.9. We use four incremental training phases with a total of 120 epochs. In phase 1 we train the model for 30 epochs with a learning rate of 0.1 for the convolutional and fully-connected layers, and a learning rate of 0.2 for the gates. In phase 2 we fine-tune the model from phase 1 for another 30 epochs with a learning rate of 0.1 and a learning rate of 0.0 for the gates (i.e., we use the fixed connectivity learned in phase 1). In phase 3 we fine-tune the weights from phase 2 for 30 epochs with a learning rate of 0.01, while the learning rate of the gates is 0.0. Finally, in phase 4 we fine-tune the weights from phase 3 for 30 epochs with a learning rate of 0.001, while the learning rate of the gates is still set to 0.0. §.§ Architectures and settings for experiments on Mini-ImageNet For the experiments on the Mini-ImageNet dataset, a 64x64 crop is randomly sampled from the scaled 84x84 image or its horizontal flip, with per-pixel mean subtracted <cit.>. For testing, we use the center 64x64 crop. The specifications of the models are identical to the CIFAR-100 models used in the previous subsection, except that the first input convolutional layer in the network is followed by a max pooling layer. The models are trained on 8 GPUs with a mini-batch size of 256 (32 per GPU), with a weight decay of 0.0005 and momentum of 0.9. As for the CIFAR-100 experiments, we adopt four incremental training phases with a total of 320 epochs.
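For quick reference, the four-phase CIFAR schedule above can be captured in a small configuration table. The encoding below is our own (the field names are hypothetical); the epoch counts and learning rates are those stated in the text, with gate_lr = None marking phases that use the frozen top-K connectivity:

CIFAR_SCHEDULE = [
    {"phase": 1, "epochs": 120, "weight_lr": 0.1,   "gate_lr": 0.2},
    {"phase": 2, "epochs": 100, "weight_lr": 0.1,   "gate_lr": None},
    {"phase": 3, "epochs": 50,  "weight_lr": 0.01,  "gate_lr": None},
    {"phase": 4, "epochs": 50,  "weight_lr": 0.001, "gate_lr": None},
]
assert sum(p["epochs"] for p in CIFAR_SCHEDULE) == 320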
http://arxiv.org/abs/1709.09582v2
{ "authors": [ "Karim Ahmed", "Lorenzo Torresani" ], "categories": [ "cs.LG", "cs.CV" ], "primary_category": "cs.LG", "published": "20170927153421", "title": "Connectivity Learning in Multi-Branch Networks" }
Modified equations for variational integrators applied to Lagrangians linear in velocities
=========================================
Abstract. Variational integrators applied to degenerate Lagrangians that are linear in the velocities are two-step methods. The system of modified equations for a two-step method consists of the principal modified equation and one additional equation describing parasitic oscillations. We observe that a Lagrangian for the principal modified equation can be constructed using the same technique as in the case of non-degenerate Lagrangians. Furthermore, we construct the full system of modified equations by doubling the dimension of the discrete system in such a way that the principal modified equation of the extended system coincides with the full system of modified equations of the original system. We show that the extended discrete system is Lagrangian, which leads to a construction of a Lagrangian for the full system of modified equations. § INTRODUCTION An important technique to study the long-time behavior of numerical integrators is backward error analysis. This consists in finding a modified equation, a perturbation of the original differential equation whose solutions exactly interpolate the numerical solutions. When a modified equation has been found, one can study the behavior of the numerical solutions by comparing two differential equations, rather than comparing a differential equation with a difference equation. Several long-time (near) conservation laws for symplectic integrators can be proved this way. For a detailed introduction to modified equations we refer to <cit.>. In <cit.> we considered modified equations for variational integrators in the case of non-degenerate Lagrangians. We gave a construction for a modified Lagrangian, which produces the modified equation as its Euler-Lagrange equation up to a truncation error of arbitrarily high order. Although the construction was new, the claim that modified equations for variational integrators are Lagrangian was not. This follows by Legendre transformation from the well-known fact that modified equations for symplectic integrators are Hamiltonian. The construction of a modified Lagrangian was combined in <cit.> with the idea of modifying integrators <cit.> to construct variational integrators of improved convergence order. In this work we extend our previous construction to the case of degenerate Lagrangians that are linear in velocities. In this context the Legendre transformation is not invertible, so the fact that the modified equation is Lagrangian cannot be inferred in the same way from the theory of symplectic integrators. We consider Lagrangians ℒ: Tℝ^N ≅ ℝ^2N → ℝ of the form ℒ(q, q̇) = ⟨α(q), q̇⟩ − H(q), where α: ℝ^N → ℝ^N, H: ℝ^N → ℝ, and the brackets ⟨·,·⟩ denote the standard scalar product. An important role will be played by the matrices A(q) = α'(q) = ( ∂α_i(q)/∂q_j )_i,j = 1,…,N and A_skew(q) = A(q)^T − A(q). We assume that A_skew(q) is invertible; then the Euler-Lagrange equation for ℒ is given by q̇ = A_skew(q)^−1 H'(q)^T, where q is considered to be a column vector and H'(q) is the row vector of partial derivatives of H with respect to q_1, …, q_N. In contrast to the case of non-degenerate Lagrangians, this is a first order ODE. A well-known example where a Lagrangian of the form (<ref>) arises is the dynamics of point vortices in the plane. We will discuss this example in detail in Section <ref>.
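For the reader's convenience we add the short computation behind this Euler-Lagrange equation (it uses only the conventions just introduced): from ℒ(q, q̇) = ⟨α(q), q̇⟩ − H(q) one finds ∂ℒ/∂q = q̇^T A(q) − H'(q) and (d/dt) ∂ℒ/∂q̇ = (d/dt) α(q)^T = q̇^T A(q)^T, so the Euler-Lagrange equation ∂ℒ/∂q − (d/dt) ∂ℒ/∂q̇ = 0 reads q̇^T ( A(q) − A(q)^T ) = H'(q). Transposing both sides gives A_skew(q) q̇ = H'(q)^T, which is the first order ODE stated above.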
Another reason to study this class of Lagrangians is that its extension to PDEs covers several important equations. For example, the nonlinear Schrödinger equation is the Euler-Lagrange equation of a Lagrangian whose kinetic term is linear in the time-derivatives (see e.g. <cit.>). Perhaps the most general application of Lagrangians that are linear in velocities is the variational formulation in phase space of mechanics, where ℒ: T T^*Q ≅ ℝ^4N → ℝ is given by ℒ(p, q, ṗ, q̇) = ⟨p, q̇⟩ − H(p, q). Its Euler-Lagrange equations are Hamilton's canonical equations q̇ = (∂H/∂p)^T and ṗ = −(∂H/∂q)^T. Note that even though A = [ 0 0; I 0 ] is singular in this case, the assumption that A_skew is invertible still holds. Like many concepts in classical mechanics, the variational principle in phase space dates back to the 19th century <cit.>. A modern treatment can be found for example in <cit.>, and an application to geometric integration in <cit.>. The construction of modified Lagrangians for variational integrators, which we introduced in <cit.>, carries over to the case of degenerate Lagrangians that are linear in velocities. However, there is a catch. The original differential equation is of first order for the Lagrangians considered here, but the difference equation produced by a variational integrator is of second order. Hence, in this context, variational integrators are two-step methods and parasitic solutions can occur. In Section <ref> we present two variational integrators which will be the protagonists of all examples discussed in this work. In Section <ref> the essentials of the theory of modified equations for multi-step methods are reviewed. In Section <ref> we summarize the construction of modified Lagrangians from <cit.> and in Section <ref> we present a method to extend it to the full system of modified equations. In Section <ref> we look at some example systems. §.§.§ A note on notation As mentioned above, we use the convention that a derivative with respect to a column vector yields a row vector. In particular, this means that the derivative of the scalar product of two column vectors is calculated as ⟨x, y⟩' = ( x^T y )' = x^T y' + y^T x'. Later on we will be taking higher derivatives of vectors with respect to other vectors, resulting in a zoo of tensors. We want to avoid heavy notations using indices, like ∑_a A_a x^a, ∑_a,b B_a,b x^a y^b, ∑_a,b,c C_a,b,c x^a y^b z^c, ⋯. If the tensor involved is symmetric, we will use the notations A(x), B(x,y), C(x,y,z), … instead. If the tensor is of first or second order, we will often write these expressions as matrix multiplication, Ax and x^T B y. We will also use the inner product notation ⟨A^T, x⟩ as an alternative to Ax. This allows us to emphasize one particular pairing in a product of more than two tensors. Using these notations interchangeably allows us to write equations in an intuitive form and avoid the heavy notation of (<ref>). The downside is that such inconsistent notation could be a source of confusion for the reader. We hope this note is enough to avoid that. § VARIATIONAL INTEGRATORS A variational integrator is a numerical integrator for Lagrangian differential equations, obtained by discretizing the Lagrange function. The action integral ∫ ℒ(q, q̇) dt is replaced by a sum ∑_j L_d(q_j, q_j+1, h). The sequence (q_j)_{j∈ℤ} is a critical point of the action sum if and only if it satisfies the discrete Euler-Lagrange equation ∂_2 L_d(q_j−1, q_j, h) + ∂_1 L_d(q_j, q_j+1, h) = 0, where ∂_1 and ∂_2 denote partial derivatives with respect to the first and second variable.
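To make the discrete Euler-Lagrange equation concrete, here is a minimal sketch (our own code, with our own function names) that advances a generic variational integrator by solving this equation numerically for q_j+1; the gradients are taken by central differences for simplicity:

import numpy as np
from scipy.optimize import fsolve

def grad(f, x, eps=1e-6):
    # Central-difference gradient of a scalar function f at the point x.
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = eps
        g[i] = (f(x + e) - f(x - e)) / (2 * eps)
    return g

def del_step(L_d, q_prev, q_curr, h):
    # Solve D_2 L_d(q_{j-1}, q_j, h) + D_1 L_d(q_j, q_{j+1}, h) = 0 for q_{j+1}.
    d2 = grad(lambda q: L_d(q_prev, q, h), q_curr)
    def residual(q_next):
        d1 = grad(lambda q: L_d(q, q_next, h), q_curr)
        return d2 + d1
    # Linear extrapolation of the last two points serves as initial guess.
    return fsolve(residual, 2 * q_curr - q_prev)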
Assuming this difference equation can be solved for q_j+1, it provides a numerical approximation of the Euler-Lagrange equations of ℒ. An excellent overview of the subject of variational integrators is given by Marsden and West <cit.>. For Lagrangians that are linear in the velocities, the continuous Euler-Lagrange equation (<ref>) is of first order, but the discrete Euler-Lagrange equation (<ref>) involves three points, i.e. it is of second order. This means that we are dealing with two-step methods. We will discuss two examples of variational integrators in detail. Both are obtained by using a simple quadrature rule to approximate the exact discrete Lagrangian L_exact(q_j, q_j+1, h) = ∫_jh^(j+1)h ℒ(q(t), q̇(t)) dt, where q(jh) = q_j, q((j+1)h) = q_j+1, and q(t) solves the continuous Euler-Lagrange equation. §.§.§ Midpoint rule Using (q_j+1 − q_j)/h to approximate q̇ and the average (q_j + q_j+1)/2 to approximate q in the integrand, we find the discrete Lagrangian L_d(q_j, q_j+1, h) = ⟨α( (q_j + q_j+1)/2 ), (q_j+1 − q_j)/h⟩ − H( (q_j + q_j+1)/2 ) with discrete Euler-Lagrange equation (1/2) ( (q_j − q_j−1)/h )^T α'( (q_j−1 + q_j)/2 ) + (1/2) ( (q_j+1 − q_j)/h )^T α'( (q_j + q_j+1)/2 ) − (1/h) α( (q_j + q_j+1)/2 )^T + (1/h) α( (q_j−1 + q_j)/2 )^T − (1/2) H'( (q_j−1 + q_j)/2 ) − (1/2) H'( (q_j + q_j+1)/2 ) = 0. In case α is linear, i.e. α(q) = Aq, this simplifies to (q_j+1 − q_j−1)/(2h) = A_skew^−1 ( (1/2) H'( (q_j−1 + q_j)/2 )^T + (1/2) H'( (q_j + q_j+1)/2 )^T ), where A_skew is defined in Equation (<ref>). In the case of a non-degenerate Lagrangian this discretization would lead to a variational integrator that is equivalent to the implicit midpoint rule applied to the corresponding symplectic system. Also in the present context we will refer to it as the midpoint rule. §.§.§ Trapezoidal rule To obtain the second discretization we use the trapezoidal quadrature rule to approximate the exact discrete Lagrangian: we take the average of the integrand evaluated with q = q_j and with q = q_j+1, while still using (q_j+1 − q_j)/h to approximate the derivative q̇. We find the discrete Lagrangian L_d(q_j, q_j+1, h) = ⟨(1/2) α(q_j) + (1/2) α(q_j+1), (q_j+1 − q_j)/h⟩ − (1/2) H(q_j) − (1/2) H(q_j+1) with discrete Euler-Lagrange equation ( (q_j+1 − q_j−1)/(2h) )^T α'(q_j) − ( α(q_j+1)^T − α(q_j−1)^T )/(2h) − H'(q_j) = 0. In case α is linear this simplifies to (q_j+1 − q_j−1)/(2h) = A_skew^−1 H'(q_j)^T. This discretization is sometimes called the explicit midpoint rule, but we will not use this name to avoid confusion with the previous method. Instead we call this method the trapezoidal rule. In the case of a non-degenerate Lagrangian the trapezoidal rule would lead to the Störmer-Verlet method. § MODIFIED EQUATIONS FOR MULTISTEP METHODS The classical theory of modified equations does not capture parasitic solutions of multistep methods. An extension of this theory for linear multistep methods was developed by Hairer <cit.>. (See also <cit.>.) Here we mention some of the main results, restricted to the case of two-step methods. For a first order ODE q̇ = f(q), consider the linear two-step method (a_0 q_j + a_1 q_j+1 + a_2 q_j+2)/h = b_0 f(q_j) + b_1 f(q_j+1) + b_2 f(q_j+2). We call the method (<ref>) symmetric if a_0 = −a_2, a_1 = 0, and b_0 = b_2. We say that it is stable if all roots of the polynomial ρ(ζ) = a_0 + a_1 ζ + a_2 ζ^2 satisfy |ζ| ≤ 1, and the roots with |ζ| = 1 are simple. A method is stable if and only if the numerical solution for q̇ = 0 is bounded for any initial condition.
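For linear α the trapezoidal discrete Euler-Lagrange equation is an explicit two-step recursion, which makes it particularly easy to implement. A minimal sketch (our own code), assuming A_skew = A^T − A is invertible and grad_H returns H'(q)^T as a column vector:

import numpy as np

def trapezoidal_step(q_prev, q_curr, h, A, grad_H):
    # Explicit two-step update (q_{j+1} - q_{j-1}) / (2h) = A_skew^{-1} H'(q_j)^T.
    A_skew = A.T - A
    return q_prev + 2 * h * np.linalg.solve(A_skew, grad_H(q_curr))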
Note that the trapezoidal rule is a stable symmetric linear two-step method, but that the midpoint rule is not of the form (<ref>). The theory of modified equations for one-step methods is easily extended to yield the following. Consider a consistent method of the form (<ref>). Then there exist unique functions (f_n(q))_{n∈ℕ} such that for every truncation index k, every solution of q̇ = f(q) + h f_1(q) + h^2 f_2(q) + … + h^k f_k(q) satisfies (a_0 q(t) + a_1 q(t+h) + a_2 q(t+2h))/h = b_0 f(q(t)) + b_1 f(q(t+h)) + b_2 f(q(t+2h)) + Ø(h^k+1). In general the right hand side of Equation (<ref>) will not converge as k → ∞. Nevertheless, we will call the formal differential equation q̇ = f(q) + h f_1(q) + h^2 f_2(q) + … the principal modified equation. Up to truncation errors, every solution of the principal modified equation gives a solution of the difference equation when evaluated on a mesh t_0 + h ℤ. However, not every solution of the difference equation can be obtained this way. The solutions that are missed are exactly the parasitic solutions. Assume that the method (<ref>) is stable, consistent, and symmetric. Then there exist functions (f_n(x,y))_{n∈ℕ} and (g_n(x,y))_{n∈ℕ} such that for every truncation index k, for every solution of ẋ = f_0(x,y) + h f_1(x,y) + … + h^k f_k(x,y), ẏ = g_0(x,y) + h g_1(x,y) + … + h^k g_k(x,y), with y(0) = Ø(h), the discrete curve q_j = x(t+jh) + (-1)^j y(t+jh) satisfies (a_0 q_j + a_1 q_j+1 + a_2 q_j+2)/h = b_0 f(q_j) + b_1 f(q_j+1) + b_2 f(q_j+2) + Ø(h^k+1) for every choice of t. We will call the corresponding system of formal differential equations ẋ = f_0(x,y) + h f_1(x,y) + h^2 f_2(x,y) + …, ẏ = g_0(x,y) + h g_1(x,y) + h^2 g_2(x,y) + …, the full system of modified equations. We call Equation (<ref>) the parasitic modified equation. If y=0, then Equation (<ref>) reduces to the principal modified equation (<ref>) and Equation (<ref>) reads ẏ = 0. Hence to determine whether parasitic solutions become dominant over time we need to determine the stability of the invariant manifold {y=0} of the system (<ref>)–(<ref>). In general, even if the difference equation is not of the form (<ref>), we have the following definition. Let Φ(q_j−1, q_j, q_j+1, h) be a consistent discretization of some function F(q, q̇). * Equation (<ref>) is the principal modified equation for the difference equation Φ(q_j−1, q_j, q_j+1, h) = 0 if for every truncation index k, every solution of the truncated equation (<ref>) satisfies Φ(q(t−h), q(t), q(t+h), h) = Ø(h^k+1) at all times t. * The system of equations (<ref>)–(<ref>) is the full system of modified equations for the Equation (<ref>) if for every truncation index k, for every solution (x,y) of the truncated system (<ref>)–(<ref>), the discrete curve q_j = x(t+jh) + (-1)^j y(t+jh) satisfies Φ(q_j−1, q_j, q_j+1, h) = Ø(h^k+1) for all choices of t. § A LAGRANGIAN FOR THE PRINCIPAL MODIFIED EQUATION In <cit.> we constructed a modified Lagrangian for variational integrators in the case of non-degenerate Lagrangian systems. A straightforward adaptation of this construction will give us a Lagrangian for the principal modified equation. Here we present the construction and a rough sketch of the proof. The details of the proof are perfectly analogous to the non-degenerate case, so we refer to <cit.> for their discussion. We identify points q_j of a numerical solution with step size h with evaluations q(jh) of an interpolating curve.
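As a concrete instance of these notions (a worked check we add here for illustration), write the trapezoidal rule in the form (<ref>) as (q_j+2 − q_j)/(2h) = f(q_j+1), i.e. with a_0 = −1/2, a_1 = 0, a_2 = 1/2 and b_0 = b_2 = 0, b_1 = 1. It is symmetric, since a_0 = −a_2, a_1 = 0 and b_0 = b_2, and it is stable, since ρ(ζ) = (ζ^2 − 1)/2 has the simple roots ζ = ±1 on the unit circle. The root ζ = −1 is precisely what admits the sign-alternating component (-1)^j y(t+jh) appearing above.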
Using a Taylor expansion we can write the discrete Lagrangian L_d(q_j−1, q_j, h) as a function of the interpolating curve q and its derivatives, all evaluated at the point jh − h/2, L_d([q], h) := L_d( q − (h/2)q̇ + (1/2)(h/2)^2 q̈ − …, q + (h/2)q̇ + (1/2)(h/2)^2 q̈ + …, h ), where the square brackets denote dependence on q and any number of its derivatives. We want to write the discrete action S_d((q_j)_j, h) = ∑_j=1^n h L_d(q_j−1, q_j, h) = ∑_j=1^n h L_d([q(jh − h/2)], h) as an integral. This can be done using the Euler-Maclaurin formula. We obtain the meshed modified Lagrangian ℒ_mesh([q(t)], h) := ∑_i=0^∞ (2^(1−2i) − 1) (h^2i B_2i/(2i)!) (d^2i/dt^2i) L_d([q(t)], h) = L_d([q(t)], h) − (h^2/24) (d^2/dt^2) L_d([q(t)], h) + (7h^4/5760) (d^4/dt^4) L_d([q(t)], h) + …, where B_2i are the Bernoulli numbers. The power series defining ℒ_mesh generally does not converge. Formally, it satisfies S_d((q(jh))_j, h) = ∫ ℒ_mesh([q(t)], h) dt. Note that ℒ_mesh depends on higher derivatives of q. Below we will construct a modified Lagrangian that only depends on q and q̇. The word meshed refers to the fact that the discrete system provides additional structure for the continuous variational problem. In the meshed variational problem, non-differentiable curves are admissible as long as their singular points are consistent with the mesh, i.e. if they occur at times that are an integer multiple of h away from each other. This imposes additional conditions on critical curves, related to the natural boundary conditions and to the Weierstrass-Erdmann corner conditions (see e.g. <cit.> for these concepts). These conditions are ∀ ℓ ≥ 2: ∂ℒ_mesh/∂q^(ℓ)(t) = 0. We will call them the natural interior conditions. Because the action integral of ℒ_mesh equals the discrete action, variations supported on a single mesh interval (i.e. in between consecutive points of the discrete curve) do not change the action integral of ℒ_mesh. This implies that the natural interior conditions are automatically satisfied on solutions of the Euler-Lagrange equation (for the particular Lagrangian ℒ_mesh, but not in general). Consider the Euler-Lagrange equation of ℒ_mesh, ∑_j=0^∞ (−1)^j (d^j/dt^j) ∂ℒ_mesh/∂q^(j) = 0. Because the natural interior conditions (<ref>) are automatically satisfied on critical curves, it is equivalent to ∂ℒ_mesh/∂q − (d/dt) ∂ℒ_mesh/∂q̇ = 0. This equation is of the form E_0(q, q̇) + h E_1(q, q̇, q̈) + h^2 E_2(q, q̇, q̈, q^(3)) + … = 0. In the leading order we find a first order differential equation, which we can use to eliminate higher derivatives in the next order (assuming that the derivatives of q are bounded as h → 0). This can be applied recursively up to any order. Hence we can write the Euler-Lagrange equation formally as a first order differential equation, say q̇ = F(q, h). Then expressions for all higher derivatives follow by differentiation and substitution, q̈ = F_2(q, h), q^(3) = F_3(q, h), …. The assumption that the derivatives of q are bounded as h → 0 is not restrictive in practice. The same assumption is necessary to state many other results regarding modified equations rigorously. Families of curves (q_h)_h that satisfy this condition are called admissible families in <cit.>. In particular there holds for admissible families that if the functions are small, q_h = Ø(h^k), then so are their derivatives, q_h^(ℓ) = Ø(h^k).
We will use this implicitly later on. Using (<ref>) and (<ref>) we can replace second and higher derivatives in the meshed Lagrangian to find a first order modified Lagrangian, ℒ_mod(q, q̇, h) = ℒ_mesh([q], h)|_{q^(j) = F_j(q,h), ∀ j ≥ 2}. Or, avoiding formal power series, a truncated modified Lagrangian ℒ_mod,k(q, q̇, h) = T_k( ℒ_mesh([q], h)|_{q^(j) = F_j(q,h), ∀ j ≥ 2} ), where T_k denotes truncation of the power series after order k. In general the replacements q^(j) = F_j(q, h) would change the Euler-Lagrange equations, but because of the natural interior conditions (<ref>) this is not the case here. Indeed, one finds ∂ℒ_mod,k/∂q = T_k( ∂ℒ_mesh/∂q + ∑_ℓ=2^∞ ∂ℒ_mesh/∂q^(ℓ) ∂F_ℓ(q,h)/∂q ) = T_k( ∂ℒ_mesh/∂q ) and ∂ℒ_mod,k/∂q̇ = T_k( ∂ℒ_mesh/∂q̇ ). It follows that ∂ℒ_mod,k/∂q − (d/dt) ∂ℒ_mod,k/∂q̇ = T_k( ∑_j=0^∞ (−1)^j (d^j/dt^j) ∂ℒ_mesh/∂q^(j) ), so up to a truncation error, both Lagrangians yield the same Euler-Lagrange equations. Note that the natural interior conditions do not imply that ∂ℒ_mesh/∂q̇ = 0, so replacing first derivatives using q̇ = F(q, h) is not allowed! The details presented in <cit.> carry over to the degenerate case and yield the following result. Consider a discrete Lagrangian that is a consistent discretization of a Lagrangian of the form (<ref>). Let ℒ be either ℒ_mod or ℒ_mod,k, derived from this discrete Lagrangian. Solve the equation ∂ℒ/∂q − (d/dt) ∂ℒ/∂q̇ = 0 for q̇, and truncate the resulting power series after order k. The result, q̇ = f(q) + h f_1(q) + h^2 f_2(q) + … + h^k f_k(q), is a truncation of the principal modified equation. §.§.§ Midpoint rule From the discrete Lagrangian (<ref>) we find L_d([q], h) = L_d( q − (h/2)q̇ + (h^2/8)q̈ − …, q + (h/2)q̇ + (h^2/8)q̈ + …, h ) = ⟨α( q + (h^2/8)q̈ + … ), q̇ + (h^2/24) q^(3) + …⟩ − H( q + (h^2/8)q̈ + … ) = ⟨α(q), q̇⟩ − H(q) + (h^2/24)( ⟨α(q), q^(3)⟩ + 3 ⟨α'(q)q̈, q̇⟩ − 3 H'(q)q̈ ) + Ø(h^4). It follows that ℒ_mesh([q], h) = ⟨α, q̇⟩ − H + (h^2/24)( 2 ⟨A_skew q̇, q̈⟩ − ⟨α''(q̇, q̇), q̇⟩ − 2 H'q̈ + H''(q̇, q̇) ) + Ø(h^4), where the argument q of A_skew, α, H, and their derivatives is omitted. From this expression we obtain ℒ_mod,3 by replacing all second derivatives of q using the derivative of the leading order equation, q̈ = (d/dt)( A_skew(q)^−1 H'(q)^T ) + Ø(h^2). In case that α is linear we have q̈ = A_skew^−1 H''q̇ + Ø(h^2) and we find the following expression for the modified Lagrangian (truncated after h^3): ℒ_mod,3 = q̇^T A q − H + (h^2/24)( −q̇^T H''q̇ − 2 H' A_skew^−1 H''q̇ ). §.§.§ Trapezoidal rule From the discrete Lagrangian (<ref>) we find L_d([q], h) = ⟨(1/2) α( q − (h/2)q̇ + (h^2/8)q̈ ) + (1/2) α( q + (h/2)q̇ + (h^2/8)q̈ ), q̇ + (h^2/24) q^(3)⟩ − (1/2) H( q − (h/2)q̇ + (h^2/8)q̈ ) − (1/2) H( q + (h/2)q̇ + (h^2/8)q̈ ) + Ø(h^4) = ⟨α, q̇⟩ − H + (h^2/8)( (1/3) ⟨α, q^(3)⟩ + ⟨α'q̈, q̇⟩ + ⟨α''(q̇, q̇), q̇⟩ − H'q̈ − H''(q̇, q̇) ) + Ø(h^4) and ℒ_mesh([q], h) = ⟨α, q̇⟩ − H + (h^2/12)( ⟨A_skew q̇, q̈⟩ + ⟨α''(q̇, q̇), q̇⟩ − H'q̈ − H''(q̇, q̇) ) + Ø(h^4). Again we assume that α is linear. Using Equation (<ref>) we find the modified Lagrangian ℒ_mod,3 = q̇^T A q − H + (h^2/12)( −2 q̇^T H''q̇ − H' A_skew^−1 H''q̇ ). § THE FULL SYSTEM OF MODIFIED EQUATIONS For linear symmetric two-step methods, Proposition <ref> describes the full system of modified equations. Here we will show that for variational integrators, without assuming linearity, the full system of modified equations is of the same form. In order to construct the system of modified equations, we split the variable q_j of the discrete system into two parts, q_j = x_j + (-1)^j y_j. The motivation for this is that we want to use one variable, x_j, to encode the principal behavior and the other, y_j, for the parasitic behavior.
This is inspired by the formula q_j = x(t+jh) + (-1)^j y(t+jh) from Proposition <ref> and Definition <ref>. §.§ The Lagrangian approach A key property of the doubling of variables is that the extended system is still variational. The discrete curve (x_j, y_j)_{j∈ℤ} is critical for L̄_d(x_j, y_j, x_j+1, y_j+1, h) = (1/2) L_d(x_j + y_j, x_j+1 − y_j+1, h) + (1/2) L_d(x_j − y_j, x_j+1 + y_j+1, h), if and only if the discrete curves (q_j^+)_{j∈ℤ} and (q_j^−)_{j∈ℤ}, defined by q_j^± = x_j ± (-1)^j y_j, are critical for L_d(q_j, q_j+1, h). The discrete Euler-Lagrange equations for L̄_d(x_j, y_j, x_j+1, y_j+1, h) are (1/2) ∂_2 L_d(x_j−1 + y_j−1, x_j − y_j, h) + (1/2) ∂_2 L_d(x_j−1 − y_j−1, x_j + y_j, h) + (1/2) ∂_1 L_d(x_j + y_j, x_j+1 − y_j+1, h) + (1/2) ∂_1 L_d(x_j − y_j, x_j+1 + y_j+1, h) = 0 and −(1/2) ∂_2 L_d(x_j−1 + y_j−1, x_j − y_j, h) + (1/2) ∂_2 L_d(x_j−1 − y_j−1, x_j + y_j, h) + (1/2) ∂_1 L_d(x_j + y_j, x_j+1 − y_j+1, h) − (1/2) ∂_1 L_d(x_j − y_j, x_j+1 + y_j+1, h) = 0. Taking the sum resp. the difference of these equations we find ∂_2 L_d(x_j−1 − y_j−1, x_j + y_j, h) + ∂_1 L_d(x_j + y_j, x_j+1 − y_j+1, h) = 0, ∂_2 L_d(x_j−1 + y_j−1, x_j − y_j, h) + ∂_1 L_d(x_j − y_j, x_j+1 + y_j+1, h) = 0. Depending on the parity of j, either the first or the second of those equations is ∂_2 L_d(q_j−1^+, q_j^+, h) + ∂_1 L_d(q_j^+, q_j+1^+, h) = 0. The other one is ∂_2 L_d(q_j−1^−, q_j^−, h) + ∂_1 L_d(q_j^−, q_j+1^−, h) = 0. Hence (x_j, y_j)_{j∈ℤ} satisfies the Euler-Lagrange equations for L̄_d(x_j, y_j, x_j+1, y_j+1, h) if and only if (q_j^+)_{j∈ℤ} and (q_j^−)_{j∈ℤ} satisfy the Euler-Lagrange equation for L_d(q_j, q_j+1, h). Let ẋ = f_0(x,y) + h f_1(x,y) + … + h^k f_k(x,y), ẏ = g_0(x,y) + h g_1(x,y) + … + h^k g_k(x,y), be the k-th truncation of the principal modified equation for the difference equation described by the discrete Lagrangian L̄_d from Proposition <ref>. Then (<ref>) is the k-th truncation of the full system of modified equations for the variational integrator described by L_d. Let (x(t), y(t)) be a solution of the system (<ref>). By definition of the principal modified equation, the discrete curve ( x(t+jh), y(t+jh) )_{j∈ℤ} satisfies the discrete Euler-Lagrange equations for L̄_d up to a truncation error for any choice of t. Hence, by Proposition <ref>, the discrete curve ( x(t+jh) + (-1)^j y(t+jh) )_{j∈ℤ} satisfies the discrete Euler-Lagrange equations for L_d up to a truncation error. This is the defining property of the system of modified equations, see Definition <ref>(b). Up to a truncation error of arbitrarily high order, the full system of modified equations (<ref>) for a variational integrator is Lagrangian. Let us illustrate this construction by applying it to our two methods. §.§.§ Midpoint rule We have L̄_d(x_j, y_j, x_j+1, y_j+1, h) = (1/2) ⟨α( (x_j + y_j + x_j+1 − y_j+1)/2 ), (x_j+1 − y_j+1 − x_j − y_j)/h⟩ + (1/2) ⟨α( (x_j − y_j + x_j+1 + y_j+1)/2 ), (x_j+1 + y_j+1 − x_j + y_j)/h⟩ − (1/2) H( (x_j + y_j + x_j+1 − y_j+1)/2 ) − (1/2) H( (x_j − y_j + x_j+1 + y_j+1)/2 ). Hence ℒ̄_mesh([x,y], h) = (1/2) ⟨α( x − (h/2)ẏ ), ẋ − (2/h) y⟩ + (1/2) ⟨α( x + (h/2)ẏ ), ẋ + (2/h) y⟩ − H(x) + Ø(h) = ⟨α(x), ẋ⟩ + ⟨α'(x) ẏ, y⟩ − H(x) + Ø(h). This is also the leading order term of the modified Lagrangian, ℒ̄_mod,0(x, y, ẋ, ẏ, h). If α is linear, its Euler-Lagrange equations are ẋ = A_skew^−1 H'(x)^T + Ø(h), ẏ = 0 + Ø(h). Since y is constant in leading order, we need to look at higher order terms to determine whether parasitic solutions occur. No higher order terms of the modified Lagrangian contain y itself, and those terms that contain derivatives of y are at least quadratic in the derivatives of y. From these observations one can deduce that the parasitic modified equation is ẏ = 0 to any order of accuracy. It follows that the parasitic oscillations are of constant magnitude.
Hence if the initialization of the discrete system is close to a solution of the principal modified equation, then the discrete solution will remain close to it. §.§.§ Trapezoidal rule We have L̄_d(x_j, y_j, x_j+1, y_j+1, h) = (1/4) ⟨α(x_j + y_j) + α(x_j+1 − y_j+1), (x_j+1 − y_j+1 − x_j − y_j)/h⟩ + (1/4) ⟨α(x_j − y_j) + α(x_j+1 + y_j+1), (x_j+1 + y_j+1 − x_j + y_j)/h⟩ − (1/4) H(x_j + y_j) − (1/4) H(x_j+1 − y_j+1) − (1/4) H(x_j − y_j) − (1/4) H(x_j+1 + y_j+1). Hence ℒ̄_mesh([x,y], h) = (1/4) ⟨α(x + y − (h/2)ẋ − (h/2)ẏ) + α(x − y + (h/2)ẋ − (h/2)ẏ), ẋ − (2/h) y⟩ + (1/4) ⟨α(x − y − (h/2)ẋ + (h/2)ẏ) + α(x + y + (h/2)ẋ + (h/2)ẏ), ẋ + (2/h) y⟩ − (1/2) H(x + y) − (1/2) H(x − y) + Ø(h) = (1/4) ⟨α(x + y) − (h/2) α'(x + y)(ẋ + ẏ) + α(x − y) + (h/2) α'(x − y)(ẋ − ẏ), ẋ − (2/h) y⟩ + (1/4) ⟨α(x − y) − (h/2) α'(x − y)(ẋ − ẏ) + α(x + y) + (h/2) α'(x + y)(ẋ + ẏ), ẋ + (2/h) y⟩ − (1/2) H(x + y) − (1/2) H(x − y) + Ø(h) = (1/2) ⟨α(x + y), ẋ⟩ + (1/2) ⟨α(x − y), ẋ⟩ + (1/2) ⟨α'(x + y)(ẋ + ẏ), y⟩ − (1/2) ⟨α'(x − y)(ẋ − ẏ), y⟩ − (1/2) H(x + y) − (1/2) H(x − y) + Ø(h). This is also the leading order term of the modified Lagrangian, ℒ̄_mod,0(x, y, ẋ, ẏ, h). If α is linear, α(q) = Aq, then we find ℒ̄_mod,0(x, y, ẋ, ẏ, h) = ⟨Ax, ẋ⟩ + ⟨Aẏ, y⟩ − (1/2) H(x + y) − (1/2) H(x − y). Its Euler-Lagrange equations are ẋ = A_skew^−1 ( (1/2) H'(x + y)^T + (1/2) H'(x − y)^T ) + Ø(h), ẏ = A_skew^−1 ( −(1/2) H'(x + y)^T + (1/2) H'(x − y)^T ) + Ø(h). We linearize the second equation around y = 0 and find ẏ = −A_skew^−1 H''(x) y + Ø(|y|^2 + h). Heuristically we would expect exponentially growing parasitic solutions if the matrix −A_skew^−1 H''(x) has at least one eigenvalue with positive real part. However, since this matrix is not constant it is difficult to give a general condition for the occurrence of exponentially growing parasites. This has to be investigated on a case-by-case basis. §.§ The direct approach If one is not interested in the Lagrangian structure of the problem, it might be preferable to use a more direct approach to calculate the modified equation. We demonstrate this method in the case of linear α for our two integrators. For more details we refer to <cit.>. §.§.§ Midpoint rule In the difference equation (q_j+1 − q_j−1)/(2h) = A_skew^−1 ( (1/2) H'( (q_j−1 + q_j)/2 )^T + (1/2) H'( (q_j + q_j+1)/2 )^T ) we set q_j = x(t) + (-1)^j y(t) and q_j±1 = x(t ± h) + (-1)^(j±1) y(t ± h) = ( x(t) ± h ẋ(t) + (h^2/2) ẍ(t) ± … ) − (-1)^j ( y(t) ± h ẏ(t) + (h^2/2) ÿ(t) ± … ). It follows that (q_j+1 − q_j−1)/(2h) = ẋ(t) − (-1)^j ẏ(t) + Ø(h^2) and H'( (q_j + q_j±1)/2 ) = H'( x ± (h/2)ẋ ± (h/2)(-1)^(j+1) ẏ ) + Ø(h^2) = H'(x) ± (h/2) H''(x)( ẋ + (-1)^(j+1) ẏ ) + Ø(h^2). Hence ẋ − (-1)^j ẏ = A_skew^−1 ( H'(x) + (h/4) H''(x)( ẋ + (-1)^(j+1) ẏ ) − (h/4) H''(x)( ẋ + (-1)^(j+1) ẏ ) )^T + Ø(h^2) = A_skew^−1 H'(x)^T + Ø(h^2). Separating the alternating terms from the rest, we find ẋ = A_skew^−1 H'(x)^T + Ø(h^2), ẏ = 0 + Ø(h^2). Unsurprisingly, we find the same system of modified equations as with the Lagrangian method. §.§.§ Trapezoidal rule Now we consider the difference equation (q_j+1 − q_j−1)/(2h) = A_skew^−1 H'(q_j)^T and make the same identifications as before. We find ẋ − (-1)^j ẏ = A_skew^−1 H'(x + (-1)^j y)^T + Ø(h^2) = A_skew^−1 H'(x)^T + (-1)^j A_skew^−1 H''(x) y + Ø(y^2 + h^2). If we assume that y = Ø(h), then the system of modified equations is ẋ = A_skew^−1 H'(x)^T + Ø(h^2), ẏ = −A_skew^−1 H''(x) y + Ø(h^2). § EXAMPLES To illustrate the theory above, we apply our two integrators to two examples. Since the calculations tend to be quite long in real-world problems, we start with a minimal toy problem. After that, we discuss the dynamics of point vortices in the plane. §.§ Toy Problem Consider the Lagrangian ℒ(p, q, ṗ, q̇) = (1/2)(pq̇ − qṗ) − U(p) − V(q) on Tℝ^2. Its Euler-Lagrange equations are ṗ = −V'(q) and q̇ = U'(p). As a concrete example, the choice V(q) = −cos(q) and U(p) = (1/2)p^2 describes the pendulum.
§.§.§ Midpoint rule We have L_d(p_j, q_j, p_j+1, q_j+1, h) = (1/2)( ((p_j + p_j+1)/2)((q_j+1 − q_j)/h) − ((q_j + q_j+1)/2)((p_j+1 − p_j)/h) ) − U( (p_j + p_j+1)/2 ) − V( (q_j + q_j+1)/2 ). This corresponds to the following system of difference equations: (q_j+1 − q_j−1)/(2h) = (1/2) U'( (p_j−1 + p_j)/2 ) + (1/2) U'( (p_j + p_j+1)/2 ), (p_j+1 − p_j−1)/(2h) = −(1/2) V'( (q_j−1 + q_j)/2 ) − (1/2) V'( (q_j + q_j+1)/2 ). By Taylor expansion we obtain L_d([p,q], h) = ℒ(p, q, ṗ, q̇) + (h^2/24)( (1/2)( pq^(3) + 3p̈q̇ − 3ṗq̈ − p^(3)q ) − 3U'p̈ − 3V'q̈ ) + Ø(h^4). It follows that ℒ_mesh([p,q], h) = ℒ(p, q, ṗ, q̇) + (h^2/24)( 2p̈q̇ − 2ṗq̈ − 2U'p̈ + U''ṗ^2 − 2V'q̈ + V''q̇^2 ) + Ø(h^4). Its Euler-Lagrange equations are 0 = ∂ℒ_mesh/∂p − (d/dt) ∂ℒ_mesh/∂ṗ = q̇ − U' + (h^2/24)( 2q^(3) − U^(3)ṗ^2 − 4U''p̈ ) + Ø(h^4), 0 = ∂ℒ_mesh/∂q − (d/dt) ∂ℒ_mesh/∂q̇ = −ṗ − V' + (h^2/24)( −2p^(3) − V^(3)q̇^2 − 4V''q̈ ) + Ø(h^4). Solving for q̇ and ṗ we find the principal modified equations q̇ = U' − (h^2/24)( U^(3)V'^2 + 2U''V''U' ) + Ø(h^4), ṗ = −V' + (h^2/24)( V^(3)U'^2 + 2V''U''V' ) + Ø(h^4). Eliminating higher derivatives in ℒ_mesh we find ℒ_mod(p, q, ṗ, q̇, h) = ℒ(p, q, ṗ, q̇) + (h^2/24)( −V''q̇^2 − U''ṗ^2 + 2U'V''q̇ − 2V'U''ṗ ) + Ø(h^4). As discussed in the previous section we do not expect parasitic solutions with this method (see Figure <ref>). §.§.§ Trapezoidal rule We have L_d(p_j, q_j, p_j+1, q_j+1, h) = (1/2)( ((p_j + p_j+1)/2)((q_j+1 − q_j)/h) − ((q_j + q_j+1)/2)((p_j+1 − p_j)/h) ) − (1/2) U(p_j) − (1/2) U(p_j+1) − (1/2) V(q_j) − (1/2) V(q_j+1). The corresponding discrete Euler-Lagrange equations are (q_j+1 − q_j−1)/(2h) = U'(p_j), (p_j+1 − p_j−1)/(2h) = −V'(q_j). By Taylor expansion we obtain L_d([p,q], h) = ℒ(p, q, ṗ, q̇) + (h^2/24)( (1/2)( pq^(3) + 3p̈q̇ − 3ṗq̈ − p^(3)q ) − 3U'p̈ − 3U''ṗ^2 − 3V'q̈ − 3V''q̇^2 ) + Ø(h^4). It follows that ℒ_mesh([p,q], h) = ℒ(p, q, ṗ, q̇) + (h^2/12)( p̈q̇ − ṗq̈ − U'p̈ − U''ṗ^2 − V'q̈ − V''q̇^2 ) + Ø(h^4). Its Euler-Lagrange equations are 0 = ∂ℒ_mesh/∂p − (d/dt) ∂ℒ_mesh/∂ṗ = q̇ − U' + (h^2/12)( q^(3) + U^(3)ṗ^2 + U''p̈ ) + Ø(h^4), 0 = ∂ℒ_mesh/∂q − (d/dt) ∂ℒ_mesh/∂q̇ = −ṗ − V' + (h^2/12)( −p^(3) + V^(3)q̇^2 + V''q̈ ) + Ø(h^4). Solving for q̇ and ṗ we find the principal modified equations q̇ = U' − (h^2/6)( U^(3)V'^2 − U''V''U' ) + Ø(h^4), ṗ = −V' + (h^2/6)( V^(3)U'^2 − V''U''V' ) + Ø(h^4). Eliminating higher derivatives in ℒ_mesh we find ℒ_mod(p, q, ṗ, q̇, h) = ℒ(p, q, ṗ, q̇) + (h^2/12)( −2V''q̇^2 − 2U''ṗ^2 + U'V''q̇ − V'U''ṗ ) + Ø(h^4). For the pendulum, V(q) = −cos(q) and U(p) = (1/2)p^2, we have A = (1/2)[ 0 1; −1 0 ] and H'' = [ U''(p) 0; 0 V''(q) ] = [ 1 0; 0 cos(q) ], hence the matrix in Equation (<ref>) is −A_skew^−1 H'' = [ 0 −cos(q); 1 0 ]. This matrix has a pair of real eigenvalues if cos(q) < 0 and a pair of purely imaginary eigenvalues if cos(q) > 0. This suggests (but does not prove; q is not constant) that exponentially growing parasites occur in the regions where cos(q) < 0. In the top right image of Figure <ref> one clearly observes parasitic solutions for this method. Note that the parasites only seem to grow where |q| > π/2, i.e. where cos(q) < 0. In the region where |q| < π/2 there is no noticeable growth in the amplitude of the oscillations. Instead we observe a rotation in the direction of the oscillations, as expected when the eigenvalues are purely imaginary. This is visualized in Figure <ref> by line segments connecting the points of the discrete solution with the corresponding points on the solution of the principal modified equation. When the initial conditions are chosen such that q remains in the stable region |q| < π/2 no parasites are observed (bottom right image of Figure <ref>), even if the simulation is continued for many periods (not pictured).
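The parasite analysis above is easy to reproduce numerically. The following self-contained sketch (our own code, not taken from the paper) integrates the pendulum with the trapezoidal rule and extracts the alternating component; its amplitude grows only while the trajectory visits the region |q| > π/2:

import numpy as np

def trapezoidal_pendulum(p0, q0, h, n_steps):
    # Two-step trapezoidal integration of q' = U'(p) = p, p' = -V'(q) = -sin(q).
    # The second starting value is obtained from one explicit Euler step.
    p = np.empty(n_steps + 1); q = np.empty(n_steps + 1)
    p[0], q[0] = p0, q0
    p[1], q[1] = p0 - h * np.sin(q0), q0 + h * p0
    for j in range(1, n_steps):
        q[j + 1] = q[j - 1] + 2 * h * p[j]
        p[j + 1] = p[j - 1] - 2 * h * np.sin(q[j])
    return p, q

p, q = trapezoidal_pendulum(p0=0.0, q0=2.5, h=0.01, n_steps=5000)
# The second difference is a proxy for the alternating component: for
# q_j = x_j + (-1)^j y_j it is approximately 4 (-1)^(j+1) y_j, up to Ø(h^2).
parasite = np.abs(q[2:] - 2 * q[1:-1] + q[:-2]) / 4
print(parasite[:3], parasite[-3:])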
If all vorticity is contained in a finite number of points, then the movement of those points is described by first order ODEs <cit.>. To be precise, the dynamics of N point vortices in the (complex) plane is described by the Lagrangian(z,z,ż,ż) = ∑_j=1^N Γ_j ( z_j ż_j ) - 1/π∑_j=1^N ∑_k=1^j-1Γ_j Γ_k log| z_j-z_k |,where z_j and Γ_j are the position and circulation of the j-th vortex, and the bar denotes the complex conjugate. The equations of motion areż_j = i/2 π∑_k ≠ jΓ_k/z_j - z_kforj = 1, …, N.It follows thatz̈_j= i/2 π∑_k ≠ j-Γ_k/( z_j - z_k )^2( ż_j - ż_k ).§.§.§ Midpoint rule We have_([z,z],h) = (z,z,ż,ż) + h^2/24[ ∑_j=1^N Γ_j ( 3 ż_j z̈_j + z^(3)_j z_j )- ∑_j=1^N ∑_k=1^j-13 Γ_j Γ_k/π( z̈_j - z̈_k/z_j-z_k) ] + Ø(h^4)and_([z,z],h) = (z,z,ż,ż)+ h^2/24[4 ∑_j=1^N Γ_j ( ż_j z̈_j ) - ∑_j=1^N ∑_k=1^j-1Γ_j Γ_k/π( 2z̈_j - z̈_k/z_j-z_k + ( ż_j-ż_k/z_j-z_k)^2 )]+ Ø(h^4).To obtain the modified Lagrangian we evaluate the second derivatives in _ using the leading order equation (<ref>). We find∑_j=1^N Γ_j ( ż_j z̈_j ) = ∑_j=1^N ∑_k ≠ jΓ_j ( ż_j i/2 πΓ_k/(z_j-z_k)^2(ż_j-ż_k) ) + Ø(h^2) = ∑_j=1^N ∑_k ≠ jΓ_j Γ_k/2π( ż_jż_j-ż_k/(z_j-z_k)^2) + Ø(h^2)= ∑_j=1^N ∑_k ≠ jΓ_j Γ_k/4π( (ż_j-ż_k)^2/(z_j-z_k)^2) + Ø(h^2)and∑_j=1^N ∑_k ≠ jΓ_j Γ_k ( 2z̈_j - z̈_k/z_j-z_k)= 4 ∑_j=1^N ∑_k ≠ jΓ_j Γ_k ( z̈_j/z_j-z_k) + Ø(h^2)= 4 ∑_j=1^N ∑_k ≠ j∑_ℓ≠ jΓ_j Γ_k( -i/2πΓ_ℓ(ż_j-ż_ℓ)/(z_j-z_k)(z_j-z_ℓ)^2) + Ø(h^2) = 1/π∑_j=1^N ∑_k ≠ j∑_ℓ≠ jΓ_j Γ_k Γ_ℓ( (ż_j-ż_ℓ)/(z_j-z_k)(z_j-z_ℓ)^2) + Ø(h^2) .Therefore,_(z,z,ż,ż,h)= (z,z,ż,ż)+ h^2/24[1/2π∑_j=1^N ∑_k ≠ jΓ_j Γ_k (( ż_j-ż_k/z_j-z_k)^2 ) . . - 1/π^2∑_j=1^N ∑_k ≠ j∑_ℓ≠ jΓ_j Γ_k Γ_ℓ( (ż_j-ż_ℓ)/(z_j-z_k)(z_j-z_ℓ)^2) ]+ Ø(h^4).§.§.§ Trapezoidal rule For the Trapezoidal rule, we find in the same way that_([z,z],h) = (z,z,ż,ż)+ h^2/24[4 ∑_j=1^N Γ_j ( ż_j z̈_j )- 2 ∑_j=1^N ∑_k=1^j-1Γ_j Γ_k/π( z̈_j - z̈_k/z_j-z_k - ( ż_j-ż_k/z_j-z_k)^2 ) ]+ Ø(h^4)and_(z,z,ż,ż,h)= (z,z,ż,ż)+ h^2/24[2/π∑_j=1^N ∑_k ≠ jΓ_j Γ_k (( ż_j-ż_k/z_j-z_k)^2 ) . . - 1/π^2∑_j=1^N ∑_k ≠ j∑_ℓ≠ jΓ_j Γ_k Γ_ℓ( (ż_j-ż_ℓ)/(z_j-z_k)(z_j-z_ℓ)^2) ]+ Ø(h^4).In Figures <ref>­–<ref> we observe parasitic solutions for the trapezoidal rule, but not for the midpoint rule, where the solution of the principal modified equation shows excellent agreement with the discrete solution. § CONCLUSION We have described a Lagrangian algorithm to calculate the modified equation of a variational integrator applied to a degenerate continuous Lagrangian that is linear in the velocities. To obtain the principal modified equation this was a straightforward adaptation of the procedure developed for non-degenerate Lagrangians. To obtain the full system of modified equations we doubled the dimension of the discrete system in a suitable way. As a consequence, we proved that the system of modified equations is variational. We have illustrated the construction of modified Lagrangians and the possible issue of parasitic solutions with examples. Our construction is potentially useful to create more accurate variational integrators, for example in the spirit of <cit.>.Acknowledgment. The author is funded by the DFG Collaborative Research Center SFB/TRR 109 “Discretization in Geometry and Dynamics”.abbrv
http://arxiv.org/abs/1709.09567v2
{ "authors": [ "Mats Vermeeren" ], "categories": [ "math.NA" ], "primary_category": "math.NA", "published": "20170927150229", "title": "Modified equations for variational integrators applied to Lagrangians linear in velocities" }
Bálint Fülöp^1,2 [[email protected], Tel.: +36 1 463 1650], Zoltán Tajkov^3, János Pető^4, Péter Kun^4, János Koltai^3, László Oroszlány^5, Endre Tóvári^1,6, Hiroshi Murakawa^7, Yoshinori Tokura^8, Sándor Bordács^1, Levente Tapasztó^4, Szabolcs Csonka^1,6
^1 Department of Physics, Budapest University of Technology and Economics, Budafoki út 8, 1111 Budapest, Hungary. ^2 MTA-BME Condensed Matter Research Group, Budafoki út 8, 1111 Budapest, Hungary. ^3 Department of Biological Physics, Eötvös Loránd University, Pázmány Péter sétány 1/A, 1117 Budapest, Hungary. ^4 Centre for Energy Research, Institute of Technical Physics and Materials Science, 2D Nanoelectronics Lendület Research Group, Konkoly-Thege út 29-33, 1121 Budapest, Hungary. ^5 Department of Physics of Complex Systems, Eötvös Loránd University, Pázmány Péter sétány 1/A, 1117 Budapest, Hungary. ^6 MTA-BME Lendület Nanoelectronics Research Group, Budafoki út 8, 1111 Budapest, Hungary. ^7 Department of Physics, Osaka University, Toyonaka 560-0043, Japan. ^8 Department of Applied Physics, The University of Tokyo, Tokyo 113-8656, Japan; RIKEN Center for Emergent Matter Science (CEMS), Wako 351-0198, Japan.
Exfoliation of single layer BiTeI flakes
========================================
Spin orbit interaction can be strongly boosted when a heavy element is embedded into an inversion asymmetric crystal field. A simple structure to realize this concept in a 2D crystal contains three atomic layers: a middle one built up from heavy elements, generating strong atomic spin-orbit interaction, and two neighboring atomic layers with different electronegativity. BiTeI is a promising candidate for such a 2D crystal, since it contains a heavy Bi layer between Te and I layers. Recently the bulk form of BiTeI attracted considerable attention due to its giant Rashba interaction; however, the 2D form of this crystal has not yet been created. In this work we report the first exfoliation of single layer BiTeI using a recently developed exfoliation technique on stripped gold. Our combined scanning probe studies and first principles calculations show that SL BiTeI flakes with sizes up to 100 µm were achieved, which are stable at ambient conditions. The giant Rashba splitting and spin-momentum locking of this new member of the family of 2D crystals open the way towards novel spintronic applications and synthetic topological heterostructures. Keywords: Rashba spin splitting, BiTeI, stripped gold exfoliation, van der Waals heterostructures, topological insulator, TMDC. § INTRODUCTION Recently, the scientific interest and research activity in stacked two dimensional (2D) van der Waals heterostructures have opened a new horizon for engineering materials at the nanoscale. These structures consist of single or few atomic layer thick crystals stacked on top of each other. The first member of the family of 2D materials was graphene, a zero-gap semiconductor, but the family also includes metals, semiconductors, insulators, as well as semimetals, superconductors and strongly correlated materials <cit.>. For the application of these heterostructures in the field of spintronics and synthetic topological insulators, spin-momentum locking and band inversion are required, respectively, which can be provided by single or few-layer 2D crystals with high spin-orbit interaction (SOI) <cit.>. Crystals or nanostructures that lack inversion symmetry are good candidates to demonstrate strong SOI <cit.>.
Among these, the polar semiconductor BiTeI is a very promising candidate due to its giant Rashba splitting <cit.>, but so far no fabrication of single layer BiTeI (SL BiTeI, one triplet of Te-Bi-I atomic layers) flakes has been reported. BiTeI is a member of a new class of polar crystals with layered structure, the class of ternary bismuth tellurohalides BiTeX (X = I, Br, Cl), that recently attracted considerable attention <cit.>. The key component is Bi, which as a heavy element has a strong atomic SOI. Its triangular lattice layer is asymmetrically stacked between a Te and an I (or Br or Cl) layer (see Fig. <ref>a-b) <cit.>. The Bi layer along with tellurium forms a positively charged (BiTe)^+ layer with similar geometry to metallic bismuth, whereas the I^- layer is negatively charged <cit.>. This polar structure, the narrow band gap and the same orbital character of the bands at the top of the valence band and the bottom of the conduction band lead to the appearance of a giant Rashba spin splitting <cit.>. The Rashba effect leads to a Hamiltonian H_R = α σ·(𝐧×𝐤), where α is a coupling constant, σ is a vector of the Pauli matrices acting on the electron spin, 𝐧 is a unit vector pointing out of the 2D plane of the crystal and 𝐤 is the in-plane electron wave vector <cit.>. The corresponding band structure is shown in Fig. <ref>e. The Rashba effect dictates a spin-momentum locking, as the lower inset shows, and it induces an energy shift between opposite spin directions, which is described by the Rashba energy, E_R. E_R is exceptionally high for BiTeI: for bulk BiTeI, E_R ≈ 110 meV <cit.>. This is four times higher than the energy scale of room temperature thermal fluctuations (k_B T = 25 meV), and two orders of magnitude higher than the spin splitting measured on a conventional InGaAs/InAlAs semiconductor interface, or measured on the surface of Ag(111) or Au(111) <cit.>. Only extremely sensitive surface structures, like Bi atoms on a Ag surface, which exist only in ultra high vacuum, have a higher Rashba energy <cit.>. This built-in giant spin-orbit interaction makes BiTeI an attractive, novel component in van der Waals heterostructures. Recently several theoretical works proposed combinations of BiTeI with other 2D crystals. According to first principles calculations of BiTeI/graphene <cit.> (and BiTeCl/graphene <cit.>) heterostructures, the strong Rashba interaction of BiTeI is expected to exert a significant influence on the Dirac electrons of graphene, resulting in a nontrivial band structure, which paves the way for a new class of robust artificial topological insulators with vast possible applications in spintronics. While in graphene the 2D Dirac states originally do not have spin-momentum locking, it is also possible to combine SL BiTeI with topological insulators that host 2D helical Dirac states, thus providing a prototype to examine various more complex spin transport effects depending on the coupling strength between these systems <cit.>. Furthermore, a pair of inversely stacked SL BiTeI layers is itself also expected to show topological insulating behaviour <cit.>. This shows the versatility of possible applications of SL BiTeI and the increasing demand for its production. However, no experimental demonstration of SL BiTeI has been reported yet.
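To put the quoted numbers in context, the Rashba dispersion E_±(k) = ħ^2k^2/2m* ± α_R k has its lower-branch minimum shifted by the Rashba energy E_R = m* α_R^2/2ħ^2. The short sketch below is our own illustration; the effective mass and coupling constant are assumed, order-of-magnitude values for a strong Rashba system, not fitted BiTeI parameters:

import numpy as np

HBAR = 1.054571817e-34   # J s
M_E = 9.1093837015e-31   # kg
EV = 1.602176634e-19     # J

def rashba_bands(k, alpha_R, m_eff):
    # Two Rashba-split branches E±(k) = ħ²k²/(2m*) ± α_R k (energies in J).
    kinetic = (HBAR * k) ** 2 / (2 * m_eff)
    return kinetic - alpha_R * k, kinetic + alpha_R * k

# Illustrative (assumed) parameters: m* = 0.1 m_e, α_R = 3.8 eV·Å in J·m.
m_eff = 0.1 * M_E
alpha_R = 3.8 * EV * 1e-10
E_R = m_eff * alpha_R**2 / (2 * HBAR**2)
print(f"Rashba energy E_R = {E_R / EV * 1e3:.0f} meV")  # ~100 meV scale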
Previous studies focused only on bulk properties <cit.>. The standard mechanical exfoliation technique, using adhesive tape pressed and released on a SiO_2 substrate, is successful for many 2D materials such as graphene and hexagonal boron nitride (hBN) because of the strong adhesion between these crystals and the SiO_2 surface <cit.>. However, according to our early experience, in the case of BiTeI mechanical exfoliation results only in 50-100 nm thin flakes with lateral dimensions of a few micrometers, at very low yield. In the literature, BiTeI thin films of thickness from 70 nm to 10 µm have been fabricated from polycrystalline powder using flash evaporation and their mechanical <cit.> and electrical <cit.> properties have been investigated, but these polycrystalline samples are far thicker than a single layer of the crystal.

§ RESULTS AND DISCUSSION

In the current work SL BiTeI flakes were exfoliated with a novel method using a freshly cleaved Au(111) substrate <cit.> that yields SL BiTeI flakes of lateral dimensions up to 100 µm. The underlying principle of the method is that Te and I bond to Au more strongly than the cohesion between two BiTeI layers. Thus the last layer of BiTeI remains on the gold surface when a bulk BiTeI crystal comes into contact with the Au surface and then detaches due to sonication. To understand the exfoliation process we also performed density functional theory (DFT) based calculations to obtain the geometry and binding energies. In the following we present our optical microscopy, STM and AFM measurements made on three samples obtained by the fabrication method presented in the Methods section, then compare them to the results of our theoretical calculations.

A typical optical micrograph of a sample surface after exfoliation is shown in Fig. <ref>c. In this picture, due to the channel-selective contrast enhancement, the Au substrate is green and thick BiTeI crystals that stayed on the surface after the sonication are blue. Some of the thick BiTeI crystals that detached during sonication leave behind light green patches of several tens of microns lateral size, like the pair marked by red arrows in Fig. <ref>c. In the following we focus on these regions and show that SL BiTeI covers these patches almost continuously.

As a next step, atomic resolution STM measurements were made on these patches. Room temperature, ambient STM measurements revealed a trigonal atomic pattern at the surface (see Fig. <ref>a), similar to the bulk crystal structure of BiTeI. To precisely measure the periodicity of the observed trigonal pattern, we performed a two dimensional Fourier transform (see Fig. <ref>b) and measured the positions of the maxima, yielding a periodicity of 4.2± 0.2 Å. This is in agreement with the bulk lattice parameter of BiTeI in the layer plane according to previous reports (4.3 Å <cit.>), and is significantly different from the gold lattice parameter in the (111) plane (2.9 Å <cit.>). Thus, one can conclude that after sonication BiTeI is still present on the substrate. Analyzing Fig. <ref>a further, one can also find defects with bright appearance (see black circles). Comparing with bulk defect states of BiTeI <cit.>, the absence of pronounced threefold symmetry and the small height of <20 pm suggest that they are antisites. When zooming out for larger area scans, we found that the surface topology of the BiTeI flakes is also different from the pure Au substrate at the lengthscale of 1 µm (see Fig. <ref>).
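The periodicity measurement described above amounts to locating the first-order peaks of a two dimensional Fourier transform. A minimal sketch, run here on a synthetic trigonal pattern standing in for the STM scan (the field of view, lattice constant and noise level are assumed, illustrative values):

```python
import numpy as np

n_pix, fov = 512, 200.0                     # pixels and field of view (Angstrom)
a = 4.3                                     # assumed in-plane lattice constant
x = np.linspace(0.0, fov, n_pix, endpoint=False)
xx, yy = np.meshgrid(x, x)

# Three plane waves at 120 degrees give a trigonal pattern;
# |g| = 4*pi/(sqrt(3)*a) for a triangular lattice.
g = 4.0 * np.pi / (np.sqrt(3.0) * a)
angles = np.deg2rad([90.0, 210.0, 330.0])
image = sum(np.cos(g * (np.cos(t) * xx + np.sin(t) * yy)) for t in angles)
image += 0.5 * np.random.default_rng(0).normal(size=image.shape)  # noise

# 2D power spectrum; spatial frequencies in cycles/Angstrom.
power = np.abs(np.fft.fftshift(np.fft.fft2(image)))**2
freqs = np.fft.fftshift(np.fft.fftfreq(n_pix, d=fov / n_pix))
fx, fy = np.meshgrid(freqs, freqs)

power[np.hypot(fx, fy) < 0.02] = 0.0        # mask the low-frequency/DC region
iy, ix = np.unravel_index(np.argmax(power), power.shape)
k_peak = np.hypot(fx[iy, ix], fy[iy, ix])   # first-order peak (cycles/Angstrom)
print(f"row spacing = {1.0 / k_peak:.2f} A, "
      f"lattice constant = {2.0 / (np.sqrt(3.0) * k_peak):.2f} A")
```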
The Au surfaces are well known to consist of large (111) terraces separated by single or multiple steps along the (112) or (110) directions, with 2.5 Å step heights (see Fig. <ref>a) <cit.>. The areas which were covered by BiTeI crystal before sonication show a characteristically different landscape (see Fig. <ref>b), which we call the “cloudy” texture in the following. In this pattern one can find distinctive terraces of atomically flat regions similar to those of the Au (111) faces, but the boundaries of the terraces are more ragged, their size is smaller, and they contain multiple terraces on top of each other, which are not present in the case of Au. The two surface structures are also well distinguishable in larger scan ranges, as a later AFM image shows (see Fig. <ref>b). The trigonal atomic structure presented in Fig. <ref>a can generally be found anywhere the cloudy pattern is visible, and it is never present on the pure gold surface. In order to identify whether the terraces are related to atomic steps of BiTeI or to the substrate, we measured the height of dozens of the terraces using linecuts in the scan direction, some of which are marked on Fig. <ref>a-b using green and red segments for the clean Au and the BiTeI covered surfaces, respectively. Step heights corresponding to the marked segments are indicated by markers on the height colorbar in Fig. <ref>c. We found that the measured step height of the cloudy pattern corresponds to that of the Au (111) steps of 2.5 Å <cit.>, and is largely different from the bulk lattice parameter of BiTeI in the out-of-plane direction (6.5 Å <cit.>). This suggests that these steps do not mark the borders of BiTeI monolayers but the gold terraces of the surface, which the BiTeI layer follows closely. To identify the thickness of the covering BiTeI layer we have to search for an area where the flake ends on the optical image or the cloudy pattern ends in STM (called “border regions” below).

In Fig. <ref>a we show an AFM overview image of the investigated flake depicted in Fig. <ref>a. The bright curvy line connecting the two red arrows is the border of the flake, i.e. the top-left corner is the Au surface, whereas the larger part of the image bounded by the bright line is covered by BiTeI. At the border of the flake a thick and broad pile of accumulated material can be found. The width of this line is rather large, 2-4 µm, which is on the same length scale as the surface roughness of the Au substrate; thus the measurement of the layer thickness across the border cannot be performed reliably. Therefore we looked for holes in the flake, such as the one marked by the green square. In these holes the surface is deeper and the texture is different. After zooming in on the area marked by the green square (Fig. <ref>b) one can recognize the cloudy pattern of BiTeI (the same as in the STM measurement of Fig. <ref>b) in the outer region, whereas in the hole the original Au surface is present, which corroborates the visual impression that the BiTeI layer is missing in the hole. On the border of these holes the accumulated contamination is not present; therefore it is possible to zoom in further (see Fig. <ref>c) and measure the step height directly at the border (Fig. <ref>d). We investigated the step height at various positions around the border of the hole similar to Fig. <ref>c, measuring a couple of linecuts using various PID control parameters.
The measured step heights are in the range of 8.5 ± 1.2 Å, which is close to the bulk lattice parameter of BiTeI in the out-of-plane direction (6.5 Å <cit.> or 6.8 Å <cit.>). Thus we conclude that the measured step height corresponds to a single layer step, and the regions showing the cloudy pattern are covered by a monolayer of BiTeI crystal. The small mismatch between the measured height and the lattice parameter can be attributed to the fact that height measurements on different substrates can deviate slightly in AFM profiles <cit.>. Thus, our findings indicate that SL BiTeI can indeed be separated by the stripped gold exfoliation technique, making it a powerful method to produce large size SL 2D crystals from materials beyond graphene and TMDCs <cit.>. The BiTeI covered surface was analyzed several weeks after exfoliation as well, and the atomic structure of the BiTeI layer (as in Fig. <ref>) was still present, which shows the long term environmental stability of the BiTeI monolayer. This finding is also remarkable, since TMDCs containing Te are usually unstable in ambient conditions on the Au surface on the time scale of several hours, according to our previous experiments.

To support the experimental results, we performed first principles calculations. First, we investigated the electronic structure of the freestanding single layer and compared it to the calculated properties of the bulk reproduced from Ref. <cit.>. Our calculated band structure of the freestanding SL BiTeI is presented in Fig. <ref>e, which provided a band gap of E_g = 740 meV and a Rashba energy of E_R = 35 meV.

As a next step, SL BiTeI on the Au surface was investigated (see Fig. <ref>d). Geometry optimization left the Au surface largely unaltered, while it introduced a small buckling of 0.1 Å in the BiTeI layer, irrespective of whether Te or I was facing the Au substrate. The binding energies of the relevant bonds are listed in Table <ref>. The energy relations clearly indicate that both the Te-faced and the I-faced BiTeI bind more strongly to the Au surface than to another BiTeI layer (see the first 3 rows), which is in agreement with the experimental finding that a SL BiTeI remains at the Au surface after sonication. As a reference, we also included the bonding energies between the constituents of a single BiTeI layer, which are much higher than the previous ones. Thus, it is highly unlikely that the BiTeI layer can be cleaved between Bi-Te or Bi-I planes, leaving only a part of the single layer on the substrate. This result further supports that the multiple terraces of the cloudy pattern (see Fig. <ref>b) are not related to BiTeI but to the underlying gold surface. On the other hand, the strong adhesion between the BiTeI and the Au substrate may also be the reason why the surface structure of the Au is significantly different under BiTeI coverage (cloudy pattern): the strong bonds can pin the Au atoms, which otherwise have high surface mobility, and also force the BiTeI to follow the terraces. This surface reconstruction is likely to be induced by the relatively high temperature (T ≈ 90 ^∘C) that the sample was exposed to during fabrication.

To further characterize the obtained flakes, we calculated the partial DOS (PDOS) on the constituents with and without the presence of the Au substrate (see Fig. <ref>a-b, respectively). In the case of the freestanding SL BiTeI one can see a gap of 0.76 eV where the PDOS is zero for all the components; therefore we expect the SL BiTeI to show insulating behavior in this regime (see Fig. <ref>a and Fig. <ref>e).
In the whole investigated range of ±3 eV, the positions of the most prominent peaks in the PDOS are very similar for Bi, Te and I. One can observe that in the negative energy range the Te and I orbitals contribute more, while the contribution of the Bi orbitals increases towards higher energies. In the middle range, between 1 and 2 eV, Te has the highest PDOS. Meanwhile, when the SL BiTeI is placed on the Au substrate, the PDOS changes significantly (see Fig. <ref>b). The influence of the Au substrate can depend on the orientation of the crystal structure of BiTeI with respect to the Au substrate, i.e., whether it is placed on the substrate in the order Au^6-Te-Bi-I (Te-side) or Au^6-I-Bi-Te (I-side). After evaluating the PDOS structures of the two cases, we indeed see a difference between the two sides, but it is minor compared to the freestanding case. Therefore, we only illustrate the effect of the Au substrate using the plot of the PDOS of the Te-side (Fig. <ref>b). One can clearly see the additional states of the Au substrate in the negative range and the smeared peaks of every component in the positive range. Most importantly, the PDOS never reaches zero for any of the components, meaning that the gap disappears due to the strong hybridization with the Au substrate. Therefore, insulating behavior is not expected in the case of SL BiTeI placed on the stripped gold substrate.

To confirm this prediction, tunneling current–voltage characteristics were measured. A typical example of a dI/dV curve measured on the cloudy pattern is shown in Fig. <ref>c. The conductance never reaches zero and there is no clear signature of a gap in the measurement, which is in agreement with the calculated PDOS. However, even without hybridization, the presence of the Au substrate can add a strong, featureless background to the measured dI/dV characteristics by letting electrons tunnel through the BiTeI layer directly into the Au substrate.

§ CONCLUSION

We demonstrated for the first time that single layer flakes can be realized from the giant Rashba spin-orbit material BiTeI. The stripped gold exfoliation technique provides an efficient way to produce flakes with a size of 100 µm, which are stable at ambient conditions for at least several weeks. We showed that after the exfoliation the positions of the BiTeI flakes can be identified with a simple optical microscope. Atomic resolution STM measurements confirmed the presence of the BiTeI layer on the gold surface by resolving the in-plane crystal structure of BiTeI. AFM measurements showed that the flakes cover large areas continuously, with only few holes. Step height measurements across the edges of these holes confirmed that the flake thickness corresponds to SL BiTeI. Our first principles calculations also supported the formation of SL BiTeI due to the strong bonding of Te and I to the Au substrate. Moreover, BiTeI strongly hybridizes with the Au substrate, which results in a finite DOS in the gap, in accordance with the differential conduction measurements. The first exfoliation of SL BiTeI adds a new member to the possible building blocks of van der Waals heterostructures with giant Rashba spin splitting, and thereby opens the way to engineer novel 2D heterostructures with special spin based functionality or topological protection.

§ METHODS

§.§ Fabrication

Gold layers of 100 nm thickness were grown epitaxially on mica prior to exfoliation.
Before use, the mica-Au interface was freshly cleaved, thus obtaining large area Au (111) surfaces where the exfoliation took place. Bulk BiTeI crystals were grown by the Bridgman method as described in Ref. <cit.>. Thick BiTeI flakes were prepared on scotch tape by folding it consecutively several times, and transferred onto the Au (111) surface using a thermal release tape. After the removal of the thermal release tape by heating the sample on a hot plate up to 90 ^∘C, the sample was mildly sonicated in room temperature acetone. The sonication causes some of the thick BiTeI flakes to fall off the substrate, leaving only SL BiTeI pieces on the surface. The SL BiTeI flakes can be found by investigating these areas using channel-selective contrast-enhanced optical microscopy, as depicted in Fig. <ref>c.

§.§ Optical and scanning probe microscopy

A Zeiss Axio Imager optical microscope was used for the optical investigation of the samples. We took 2.5x magnification pictures before and after the sonication and compared them to find areas where the bulk crystals fell off during the process (not shown in this paper). We found that the positions of the single layer BiTeI flakes can be localized using channel-selective contrast-enhanced optical microscope images taken with a 100x microscope lens (see Fig. <ref>c).

Scanning Tunneling Microscopy (STM) measurements were performed on a Nanoscope E instrument using standard Pt-Ir 90%-10% tips created by mechanical shearing. High resolution 2D maps were scanned at a setpoint of 3 nA at 5 mV; large area 2D maps were scanned at 1 nA and 200 mV. The differential conduction measurements were obtained by measuring I-V characteristics at 1 nA and 200 mV, then applying a numerical differentiation to the average of dozens of individual measurements. An Atomic Force Microscope (AFM) was used in tapping mode. The Bruker MultiMode 8 (NanoScope V) AFM instrument used has a larger scanning range than the STM device, but sufficient resolution along the z axis to resolve atomic layer thicknesses. All STM and AFM measurements were performed under ambient conditions.

§.§ First principles calculations

The density functional theory calculations were performed for two configurations: the first model consists of a freestanding SL BiTeI cell, the second is the SL BiTeI placed on the Au substrate. In the case of the freestanding single layer, the slabs were separated by a considerably thick vacuum layer of at least 18.5 Å. For the SL BiTeI on Au, we considered the following geometry (depicted in Fig. <ref>d) for calculating the binding energies and the density of states (DOS) of SL BiTeI on the Au surface: the 2× 2 supercell of the SL BiTeI was placed on 6 layers of a 3× 3 supercell of (111) Au, both Te- and I-faced (the latter is shown in Fig. <ref>d). The mismatch of the two lattices is only 1.3%, which is expected to alter the calculated properties slightly, but not to affect our conclusions. The slabs were also separated by a vacuum of at least 18.5 Å. The geometry optimization, the binding energies and the DOS were calculated using both the projector augmented-wave method as implemented in the VASP package <cit.> and the linear combination of atomic orbitals method as implemented in the SIESTA package <cit.>. During the optimization and the calculation of the binding energies the SOI was neglected, as previous works showed that these quantities are barely sensitive to it <cit.>.
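For orientation, a schematic construction of this supercell with ASE is sketched below. The layer heights, the in-plane registry, and the placement of all three species on a common triangular grid are rough illustrative assumptions, not the relaxed DFT geometry used for the results above (in the real crystal the Te, Bi and I sublattices are laterally offset).

```python
from ase.build import fcc111, add_adsorbate

# 6-layer 3x3 Au(111) slab; `vacuum` pads both sides of the cell.
slab = fcc111('Au', size=(3, 3, 6), a=4.08, vacuum=18.5)

a_au_surf = 4.08 / 2**0.5    # Au(111) surface lattice constant (~2.885 A)
a_fit = 1.5 * a_au_surf      # BiTeI 2x2 strained onto the Au 3x3 cell (~4.33 A)

# Te-Bi-I trilayer with the I side facing the gold; heights above the top
# Au layer are order-of-magnitude guesses.
for element, height in [('I', 2.6), ('Bi', 4.6), ('Te', 6.4)]:
    for i in range(2):       # 2x2 triangular lattice of each species
        for j in range(2):
            x = (i + 0.5 * j) * a_fit
            y = j * a_fit * 3**0.5 / 2.0
            add_adsorbate(slab, element, height, position=(x, y))

slab.center(vacuum=18.5, axis=2)   # restore the >= 18.5 A vacuum separation
print(slab)                        # 54 Au + 4 each of I, Bi, Te
```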
The parameters used in the VASP calculation were the following: the plane-wave cutoff was 500 eV, and the Brillouin zone was sampled with a 12× 12 × 1 Gamma-centered Monkhorst-Pack set. In the geometry optimization we kept the lattice constant of the gold fixed and relaxed all the atoms until the atomic forces fell below 3 meV/Å. Next, the binding energies were calculated by subtracting the total energies of the fully relaxed, separated fractions from the total energy of the fully relaxed compound. To test the validity of the applied method, the binding energy of graphite layers (57 meV/atom) was calculated and found to be in good agreement with experimental data in the literature (62± 5 meV/atom) <cit.>. In order to take the van der Waals interactions into account we deployed the DFT-D3 Grimme corrections <cit.> for the Perdew–Burke–Ernzerhof (PBE) functional <cit.>.

The parameters of the SIESTA computation were the following: the mesh grid cutoff was 300 Ry, and the Brillouin zone was sampled with a 5× 5× 2 Gamma-centered Monkhorst-Pack set. The force tolerance in the coordinate optimization was 20 meV/Å. During relaxation we deployed the PBE functional. We used the pseudopotentials optimized by Rivero et al. <cit.>. The basis set was double-zeta polarized. After relaxation, a self-consistent single-point calculation was done with SOI included <cit.>. The sisl tool <cit.> was used to extract the partial density of states from the SIESTA calculations, sampling the Brillouin zone with a 70× 70× 1 k-point set.

§ ACKNOWLEDGEMENTS

This work was financially supported by the Flag-ERA iSpinText project (NN118996), the "Lendület" program of the Hungarian Academy of Sciences, the Hungarian National Research, Development and Innovation Office (NKFIH) grants no. K115608, K108676, and K115575; and the Hungarian Research Funds OTKA PD 111756 and FK 124723. We acknowledge the financial support of the National Research, Development and Innovation Office of Hungary via the National Quantum Technologies Program NKP-2017-00001. S.B., L.O. and J.K. acknowledge the Bolyai program of the Hungarian Academy of Sciences. J.P., P.K. and L.T. acknowledge financial support from ERC StG Nanofab2D and OTKA grant K108753.

§ CONTRIBUTIONS

B. F., E. T., S. B. and S. C. conceived the project. H. M. and Y. T. grew the BiTeI crystals, B. F. and J. P. worked out the exfoliation procedure, did the STM studies and analyzed the data with L. T., E. T. and S. C. P. K. supported the AFM analysis. Z. T., J. K. and L. O. did the calculations. B. F. and S. C. wrote the paper with all authors contributing to the discussion and preparation of the manuscript.

[10]Geim2013A. K. Geim andI. V. GrigorievaVan der waals heterostructures,Nature 499(7459), 419–425 (2013). Koski2013K. J. Koski andY. CuiThe new skinny in two-dimensional nanomaterials,ACS Nano 7(5), 3739–3743 (2013). Kou2014L. Kou,S. C. Wu,C. Felser, T. Frauenheim,C. Chen,andB. YanRobust 2d topological insulators in van der waals heterostructures,ACS Nano 8(10), 10448–10454 (2014), PMID: 25226453. Eremeev2015S. V. Eremeev,S. S. Tsirkin,I. A. Nechaev, P. M. Echenique,andE. V. ChulkovNew generation of two-dimensional spintronic systems realized by coupling of rashba and dirac fermions,Scientific Reports 5(August), 12819 (2015). LaShell1996S. LaShell,B. A. McDougall,and E. JensenSpin splitting of an au(111) surface state band observed with angle resolved photoelectron spectroscopy,Phys. Rev. Lett. 77(Oct), 3419–3422 (1996). ast2007C. R. Ast,J. Henk,A. Ernst, L. Moreschini,M. C. Falub,D. Pacilé, P.
Bruno,K. Kern,andM. GrioniGiant spin splitting through surface alloying,Phys. Rev. Lett. 98(May), 186807 (2007). Koroteev2004Y. M. Koroteev,G. Bihlmayer,J. E. Gayone, E. V. Chulkov,S. Blügel,P. M. Echenique, andP. HofmannStrong spin-orbit splitting on bi surfaces,Phys. Rev. Lett. 93(Jul), 046403 (2004). Nitta1997J. Nitta,T. Akazaki,H. Takayanagi,and T. EnokiGate control of spin-orbit interaction in an inverted in_0.53ga_0.47as/in_0.52al_0.48as heterostructure,Phys. Rev. Lett. 78(Feb), 1335–1338 (1997). Bahramy2011M. S. Bahramy,R. Arita,andN. NagaosaOrigin of giant bulk rashba splitting: Application to bitei,Phys. Rev. B 84(Jul), 041202 (2011). Kulbachinskii2012V. A. Kulbachinskii,V. G. Kytin,A. A. Kudryashov,A. N. Kuznetsov,andA. V. ShevelkovOn the electronic structure and thermoelectric properties of bitebr and bitei single crystals and of bitei with the addition of bii3 and cui,Solid State Chemistry and Materials Science of Thermoelectric Materials 193(September), 154–160 (2012). Martin2016C. Martin,A. V. Suslov,S. Buvaev, A. F. Hebard,P. Bugnon,H. Berger, A. Magrez,andD. B. TannerExperimental determination of the bulk rashba parameters in bitebr,EPL (Europhysics Letters) 116(5), 57003 (2016). Sakano2013M. Sakano,M. S. Bahramy,A. Katayama, T. Shimojima,H. Murakawa,Y. Kaneko, W. Malaeb,S. Shin,K. Ono, H. Kumigashira,R. Arita,N. Nagaosa, H. Y. Hwang,Y. Tokura,and K. IshizakaStrongly spin-orbit coupled two-dimensional electron gas emerging near the surface of polar semiconductors,Phys. Rev. Lett. 110(Mar), 107204 (2013). Landolt2012G. Landolt,S. V. Eremeev,Y. M. Koroteev, B. Slomski,S. Muff,T. Neupert, M. Kobayashi,V. N. Strocov,T. Schmitt, Z. S. Aliev,M. B. Babanly,I. R. Amiraslanov,E. V. Chulkov,J. Osterwalder,and J. H. DilDisentanglement of surface and bulk rashba spin splittings in noncentrosymmetric bitei,Phys. Rev. Lett. 109(Sep), 116403 (2012). Ishizaka2011K. Ishizaka,M. S. Bahramy,H. Murakawa, M. Sakano,T. Shimojima,T. Sonobe, K. Koizumi,S. Shin,H. Miyahara, A. Kimura,K. Miyamoto,T. Okuda, H. Namatame,M. Taniguchi,R. Arita, N. Nagaosa,K. Kobayashi,Y. Murakami, R. Kumai,Y. Kaneko,Y. Onose,and Y. TokuraGiant rashba-type spin splitting in bulk bitei,Nat Mater 10(7), 521–526 (2011). Nechaev2017I. A. Nechaev,S. V. Eremeev,E. E. Krasovskii,P. M. Echenique,andE. V. ChulkovQuantum spin hall insulators in centrosymmetric thin films composed from topologically trivial bitei trilayers,Scientific Reports 7(March), 43666 (2017). Eremeev2017S. V. Eremeev,I. A. Nechaev,andE. V. Chulkov, 2d and 3d topological phases in bitex compounds, 2017. Bahramy2012M. S. Bahramy,B. J. Yang,R. Arita,and N. NagaosaEmergence of non-centrosymmetric topological insulating phase in bitei under pressure,Nature Communications 3(February), 679 (2012). Ohmura2017A. Ohmura,Y. Higuchi,T. Ochiai, M. Kanou,F. Ishikawa,S. Nakano, A. Nakayama,Y. Yamada,andT. SasagawaPressure-induced topological phase transition in the polar semiconductor bitebr,Phys. Rev. B 95(Mar), 125203 (2017). Eremeev2012S. V. Eremeev,I. A. Nechaev,Y. M. Koroteev, P. M. Echenique,andE. V. ChulkovIdeal two-dimensional electron systems with a giant rashba-type spin splitting in real materials: Surfaces of bismuth tellurohalides,Phys. Rev. Lett. 108(Jun), 246802 (2012). Shevelkov1995A. V. Shevelkov,E. V. Dikarev,R. V. Shpanchenko,andB. A. PopovkinCrystal structures of bismuth tellurohalides bitex (x = cl, br, i) from x-ray powder diffraction data,Journal of Solid State Chemistry 114(2), 379–384 (1995). Kanou2013M. Kanou andT. 
SasagawaCrystal growth and electronic properties of a 3d rashba material, bitei, with adjusted carrier concentrations,Journal of Physics: Condensed Matter 25(13), 135801 (2013). Sakano2012M. Sakano,J. Miyawaki,A. Chainani, Y. Takata,T. Sonobe,T. Shimojima, M. Oura,S. Shin,M. S. Bahramy, R. Arita,N. Nagaosa,H. Murakawa, Y. Kaneko,Y. Tokura,andK. IshizakaThree-dimensional bulk band dispersion in polar bitei with giant rashba-type spin splitting,Phys. Rev. B 86(Aug), 085204 (2012). Monserrat2017B. Monserrat andD. Vanderbilt, Temperature dependence of the bulk rashba splitting in the bismuth tellurohalides, 2017. Crepaldi2012A. Crepaldi,L. Moreschini,G. Autès, C. Tournier-Colletta,S. Moser,N. Virk, H. Berger,P. Bugnon,Y. J. Chang, K. Kern,A. Bostwick,E. Rotenberg, O. V. Yazyev,andM. GrioniGiant ambipolar rashba effect in the semiconductor bitei,Phys. Rev. Lett. 109(Aug), 096803 (2012). Butler2014C. J. Butler,H. H. Yang,J. Y. Hong, S. H. Hsu,R. Sankar,C. I. Lu, H. Y. Lu,K. H. O. Yang,H. W. Shiu, C. H. Chen,C. C. Kaun,G. J. Shu, F. C. Chou,andM. T. LinMapping polarization induced surface band bending on the rashba semiconductor bitei,NatComm 5(June), 4066 (2014). Lee2011J. S. Lee,G. A. H. Schober,M. S. Bahramy, H. Murakawa,Y. Onose,R. Arita, N. Nagaosa,andY. TokuraOptical response of relativistic electrons in the polar bitei semiconductor,Phys. Rev. Lett. 107(Sep), 117401 (2011). Demko2012L. Demkó,G. A. H. Schober,V. Kocsis, M. S. Bahramy,H. Murakawa,J. S. Lee, I. Kézsmárki,R. Arita,N. Nagaosa,and Y. TokuraEnhanced infrared magneto-optical response of the nonmagnetic semiconductor bitei driven by bulk rashba splitting,Phys. Rev. Lett. 109(Oct), 167401 (2012). Bordacs2013S. Bordács,J. G. Checkelsky,H. Murakawa, H. Y. Hwang,andY. TokuraLandau level spectroscopy of dirac electrons in a polar semiconductor with giant rashba spin splitting,Phys. Rev. Lett. 111(Oct), 166403 (2013).Ogawa2013N. Ogawa,M. S. Bahramy,H. Murakawa, Y. Kaneko,andY. TokuraMagnetophotocurrent in bitei with rashba spin-split bands,Phys. Rev. B 88(Jul), 035130 (2013). Fiedler2014S. Fiedler,L. El-Kareh,S. V. Eremeev, O. E. Tereshchenko,C. Seibel,P. Lutz, K. A. Kokh,E. V. Chulkov,T. V. Kuznetsova, V. I. Grebennikov,H. Bentmann,M. Bode,and F. ReinertDefect and structural imperfection effects on the electronic properties of bitei surfaces,New Journal of Physics 16(7), 075013 (2014). Kohsaka2015Y. Kohsaka,M. Kanou,H. Takagi, T. Hanaguri,andT. SasagawaImaging ambipolar two-dimensional carriers induced by the spontaneous electric polarization of a polar semiconductor bitei,Phys. Rev. B 91(Jun), 245312 (2015). PRB.88.081104C. R. Wang,J. C. Tung,R. Sankar, C. T. Hsieh,Y. Y. Chien,G. Y. Guo, F. C. Chou,andW. L. LeeMagnetotransport in copper-doped noncentrosymmetric bitei,Phys. Rev. B 88(Aug), 081104 (2013). Bychkov1984Y. A. Bychkov andE. I. RashbaProperties of a 2d electron gas with lifted spectral degeneracy,Journal of Experimental and Theoretical Physics 39(2), 78–82 (1984). Tajkov2017Z. Tajkov,D. Visontai,P. Rakyta, L. Oroszlány,andJ. KoltaiTransport properties of graphene-bitei hybrid structures,Physica Status Solidi C pp. 1700215–n/a (2017), 1700215. Eremeev2014S. V. Eremeev,I. A. Nechaev,P. M. Echenique,andE. V. ChulkovSpin-helical dirac states in graphene induced by polar-substrate surfaces with giant spin-orbit interaction: a new platform for spintronics,Scientific Reports 4(November), 6900 (2014). Koenig2011S. P. Koenig,N. G. Boddeti,M. L. Dunn,and J. S. 
BunchUltrastrong adhesion of graphene membranes,Nat Nano 6(9), 543–546 (2011). Onopko1972aL. V. O. et al.The method of preparation structure and mechanical properties of bitel thin films,Izv VUZ Fiz 11, 117–120 (1972). Onopko1972bL. V. O. et al.Electro-physical properties of bitel thin films,Izv VUZ Fiz 11, 120–122 (1972). Magda2015G. Z. Magda,J. Pető,G. Dobrik, C. Hwang,L. P. Biró,and L. TapasztóExfoliation of large-area transition metal chalcogenide single layers,Scientific Reports 5(October), 14714 (2015).Sankar2014R. Sankar,I. Panneer Muthuselvam,C. J. Butler,S. C. Liou,B. H. Chen,M. W. Chu, W. L. Lee,M. T. Lin,R. Jayavel,and F. C. ChouRoom temperature agglomeration for the growth of bitei single crystals with a giant rashba effect,CrystEngComm 16(37), 8678–8683 (2014). Nie2012S. Nie,N. C. Bartelt,J. M. Wofford, O. D. Dubon,K. F. McCarty,and K. ThürmerScanning tunneling microscopy study of graphene on au(111): Growth mechanisms and substrate interactions,Phys. Rev. B 85(20), 205406 (2012). PhysRevLett.60.120R. C. Jaklevic andL. ElieScanning-tunneling-microscope observation of surface diffusion on an atomic scale: Au on au(111),Phys. Rev. Lett. 60(Jan), 120–123 (1988). Barth1990J. V. Barth,H. Brune,G. Ertl,and R. J. BehmScanning tunneling microscopy observations on the reconstructed au(111) surface: Atomic structure, long-range superstructure, rotational domains, and surface defects,Phys. Rev. B 42(Nov), 9307–9318 (1990). nemes2008P. Nemes-Incze,Z. Osváth,K. Kamarás, andL. BiróAnomalies in thickness measurements of graphene and few layer graphite crystals by tapping mode atomic force microscopy,Carbon 46(11), 1435–1442 (2008). kresse_1993G. Kresse andJ. HafnerAb initio molecular dynamics for liquid metals,Physical Review B 47(1), 558 (1993). kresse_1996G. Kresse andJ. FurthmüllerEfficient iterative schemes for ab initio total-energy calculations using a plane-wave basis set,Physical review B 54(16), 11169 (1996). siesta_2002J. M. Soler,E. Artacho,J. D. Gale, A. García,J. Junquera,P. Ordejón,and D. Sánchez-PortalThe siesta method for ab initio order- n materials simulation,Journal of Physics: Condensed Matter 14(11), 2745 (2002). zacharia_2004R. Zacharia,H. Ulbricht,andT. HertelInterlayer cohesive energy of graphite from thermal desorption of polyaromatic hydrocarbons,Phys. Rev. B 69(Apr), 155406 (2004). grimme_2010S. Grimme,J. Antony,S. Ehrlich,and H. KriegA consistent and accurate ab initio parametrization of density functional dispersion correction (DFT-D) for the 94 elements H-Pu,The Journal of Chemical Physics 132(15), 154104 (2010). pbe_1996J. P. Perdew,K. Burke,and M. ErnzerhofGeneralized gradient approximation made simple,Phys. Rev. Lett. 77(Oct), 3865–3868 (1996). rivero_2015P. Rivero,V. M. García-Suárez, D. Pereñiguez,K. Utt,Y. Yang, L. Bellaiche,K. Park,J. Ferrer,and S. Barraza-LopezSystematic pseudopotentials from reference eigenvalue sets for DFT calculations,Computational Materials Science 98(February), 372–389 (2015). VASP_SOCD. Hobbs,G. Kresse,andJ. HafnerFully unconstrained noncollinear magnetism within the projector augmented-wave method,Phys. Rev. B 62(Nov), 11556–11570 (2000). siesta_SOC_2004V. M. García-Suárez,C. M. Newman,C. J. Lambert,J. M. Pruneda,andJ. FerrerFirst principles simulations of the magnetic and structural properties of Iron,The European Physical Journal B 40(4), 371–377 (2004). zerothi_sislN. R. Papior, sisl: v0.8.5.
http://arxiv.org/abs/1709.09732v1
{ "authors": [ "Bálint Fülöp", "Zoltán Tajkov", "János Pető", "Péter Kun", "János Koltai", "László Oroszlány", "Endre Tóvári", "Hiroshi Murakawa", "Yoshinori Tokura", "Sándor Bordács", "Levente Tapasztó", "Szabolcs Csonka" ], "categories": [ "cond-mat.mes-hall" ], "primary_category": "cond-mat.mes-hall", "published": "20170927205354", "title": "Exfoliation of single layer BiTeI flakes" }
Crab Pulsar Scintillation
Main, R.
^1Max-Planck-Institut für Radioastronomie, Auf dem Hügel 69, 53121, Bonn, Germany ^2Department of Astronomy and Astrophysics, University of Toronto, 50 St. George Street, Toronto, ON M5S 3H4, Canada ^3Canadian Institute for Theoretical Astrophysics, University of Toronto, 60 St. George Street, Toronto, ON M5S 3H8, Canada ^4Dunlap Institute for Astronomy and Astrophysics, University of Toronto, 50 St George Street, Toronto, ON M5S 3H4, Canada ^5Canadian Institute for Advanced Research, 180 Dundas St West, Toronto, ON M5G 1Z8, Canada ^6Perimeter Institute for Theoretical Physics, 31 Caroline Street North, Waterloo, ON N2L 2Y5, Canada ^7Astro Space Center, Lebedev Physical Institute, Russian Academy of Sciences, Profsoyuznaya ul. 84/32, Moscow, 117997 Russia ^8Department of Physics, Purdue University, 525 Northwestern Avenue, West Lafayette, IN 47907-2036, USA

The Crab pulsar has striking radio emission properties, with the two dominant pulse components – the main pulse and the interpulse – consisting entirely of giant pulses. The emission is scattered in both the Crab nebula and the interstellar medium, causing multi-path propagation and thus scintillation. We study the scintillation of the Crab's giant pulses using phased Westerbork Synthesis Radio Telescope data at 1668 MHz. We find that giant pulse spectra correlate at only ∼ 2 %, much lower than the 1/3 correlation expected from a randomized signal imparted with the same impulse response function. In addition, we find that the main pulse and the interpulse appear to scintillate differently; the 2D cross-correlation of scintillation between the interpulse and main pulse has a lower amplitude, and is wider in time and frequency delay, than the 2D autocorrelation of main pulses. These lines of evidence suggest that the giant pulse emission regions are extended, and that the main pulse and interpulse arise in physically distinct regions which are resolved by the scattering screen. Assuming the scattering takes place in the nebular filaments, the emission regions are of order a light cylinder radius, as projected on the sky. With further VLBI and multi-frequency data, it may be possible to measure the distance to the scattering screens, the size of giant pulse emission regions, and the physical separation between the pulse components.

§ THE UNUSUAL PROPERTIES OF THE CRAB PULSAR

The Crab pulsar is one of the most unusual radio pulsars, and has been the subject of much observational and theoretical research (for a review, see <cit.>). The two dominant components of its radio pulse profile, the main pulse and the low-frequency interpulse (simply referred to as the interpulse for the remainder of this paper), appear to be comprised entirely of randomly occurring giant pulses – extremely short and bright pulses of radio emission showing structure down to ns timescales and reaching intensities over a MJy <cit.>. Only the fainter components of the pulse profile – such as the precursor (to the main pulse) – are similar to what is seen for regular radio pulsars. The main pulse and interpulse are aligned within 2 ms with emission components at higher energy, from optical to γ-ray <cit.>, and giant pulses are associated with enhanced optical <cit.> and X-ray <cit.> emission.
Since pair production strongly absorbs γ-ray photons inside the magnetosphere, this suggests all these components arise far from the neutron-star surface, with possible emission regions being the various magnetospheric “gaps” <cit.>, induced Compton scattering in the upper magnetosphere <cit.>, or regions outside the light cylinder <cit.>. In these regions, the giant pulses are thought to arise stochastically, likely triggered by plasma instabilities and/or reconnection <cit.>, from parts of the emission region that are much smaller, of order Γ c τ_pulse≃0.1…1 km (with τ_pulse∼10 ns the timescale of a pulse, and Γ∼100 an estimate of the relativistic motion), than its overall size, of order cPδϕ_pulse∼100 km (with δϕ_pulse∼0.01 the width of the pulse phase window in which giant pulses occur).

While similar in their overall properties, the main pulse and interpulse have differences in detail. In particular, the interpulse has a large scatter in its dispersion measure compared to the main pulse, possibly suggesting that it is observed through a larger fraction of the magnetosphere <cit.>. In addition, it appears shifted in phase and shows “banding” in its power spectra above 4 GHz (the so-called “high-frequency interpulse”), with the spacing proportional to frequency <cit.>.

The Crab pulsar, like many pulsars, exhibits scintillation from multi-path propagation of its radio emission. The scattering appears to include both a relatively steady component, arising in the interstellar medium, and a highly variable one, originating in the Crab nebula itself, with the former responsible for the angular and the latter for (most of) the temporal broadening (<cit.>). This scintillation offers the prospect of “interstellar interferometry”, where the high spatial resolution arising from multiple imaging is used to resolve the pulsar magnetosphere. This has been applied to some pulsars, with separations between emission regions inferred from time offsets (or phase gradients) between the scintillation patterns seen in different pulse components. For some of these pulsars, the inferred separations were substantially larger than the neutron star radius: ∼10^3 km for PSR B1237+25 <cit.>, ≳100 km for PSR B1133+16 <cit.>, and of order the light cylinder radius (several 10^4 km) in a further four pulsars <cit.>. In contrast, for PSR B0834+06, <cit.> find only a very small positional shift, constraining the separation between emission regions to ∼20 km, comparable to the neutron star radius.

In the above studies, the scintillation pattern offsets are small compared to the scintillation scale, i.e., the scintillation screen does not resolve the pulsar magnetosphere, but changes in position can be measured with high signal-to-noise data. For the Crab pulsar, however, the proximity of the nebular scattering screen to the pulsar implies that, as seen from the pulsar, the screen subtends a much larger angle than would be the case if it were far away (for a given scattering time). Therefore, the scintillation pattern is sensitive to small spatial scales, of order ∼2000 km at our observing frequency (see Sect. <ref>), comparable to the light-cylinder radius r_LC≡ cP/2π≃1600 km. The high spatial resolving power also implies that, for a given relative velocity between the pulsar and the screen, the scintillation timescale is short. Indeed, from the scintillation properties of giant pulses, <cit.> infer a de-correlation time of ∼25 s at 1.4 GHz.
Unfortunately, their sample, while very large, had insufficient interpulse-main pulse pairs to look for differences between the two components (Cordes, 2017, pers. comm.). From an even larger sample, <cit.> identified pairs of pulses that either occurred in the same main pulse phase window, or with one in the main pulse and one in the interpulse window. They did not find major differences between the sets. Like for the close pairs in <cit.>, they found correlation coefficients consistent with the 1/3 expected for pulses that differ in their intrinsic time and frequency structure, but which have additional frequency structure imposed by scintillation.

In this paper, we compare the scintillation structure of the main pulse and the interpulse in more detail. We find that in our sample the frequency spectra of close pulses are much more weakly correlated than was seen previously, suggesting that during our observations the scintillation pattern was sensitive to smaller spatial scales at the source than the separation between bursts (in effect, the scattering screen resolved the emission region). The scintillation patterns of the main pulse and interpulse also appear to differ, which, if taken at face value, suggests their emission locations are offset in projection by of order a light cylinder radius.

§ OBSERVATIONS AND DATA REDUCTION

We analyse 6 hours of data from the phased Westerbork Synthesis Radio Telescope (WSRT), and 2.5 hours of simultaneous data from the 305-m William E. Gordon Telescope at the Arecibo Observatory (AR), that were taken as part of a RadioAstron observing run on 2015 January 10–11 <cit.>. The data cover the frequency range of 1652–1684 MHz, and consist of both circular polarizations in two contiguous 16 MHz channels, recorded using standard 2-bit Mark 5B format (WSRT) and VDIF (AR). The use of a telescope with high spatial resolution is particularly beneficial in studies of the Crab pulsar, as it helps to resolve out the Crab nebula, effectively reducing the system temperature from 830 Jy (for the integrated flux at 1.7 GHz) to 165 Jy and 275 Jy, for WSRT and AR, respectively <cit.>.

To search for giant pulses, we coherently dedispersed[Using a dispersion measure of 56.7716 pc cm^-3 appropriate for our date (taken from <http://www.jb.man.ac.uk/ pulsar/crab.html>; <cit.>). We read in overlapping blocks of data and removed the edges corrupted by de-dispersion, such that the de-dispersed data were contiguous in time.] the data from the two channels to a common reference frequency. Each 1 s segment of data was bandpass calibrated by channelizing the timestream into 8192 frequency channels per subband and normalizing by the square root of the time-averaged power spectrum. This correction works sufficiently well everywhere but the band edges. RFI spikes above 5σ were removed using a 128-channel median filter, and the time variability was normalized by the square root of the frequency-averaged power spectrum. The signal was converted to complex by removing the negative frequency components of its analytic representation (via a Hilbert transform) and shifting the signal down in frequency by half the signal bandwidth.
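A minimal sketch of this real-to-complex conversion, where x stands in for one bandpass-corrected, real-sampled subband timestream (the sampling rate follows from the 16 MHz subband width):

```python
import numpy as np
from scipy.signal import hilbert

fs = 32e6                    # real sampling rate of one 16 MHz subband (Hz)
bw = 16e6                    # subband bandwidth (Hz)
t = np.arange(2**16) / fs
x = np.random.default_rng(1).normal(size=t.size)  # stand-in for real voltages

z = hilbert(x)                               # analytic signal: negative
                                             # frequency components removed
z *= np.exp(-2j * np.pi * (bw / 2.0) * t)    # shift down by half the bandwidth

# z now spans -bw/2..+bw/2; decimating by 2 gives complex samples at the
# 62.5 ns spacing used for the pulse search.
z_baseband = z[::2]
```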
After forming power spectra, we replace the outer 1 MHz (1652–1653 MHz, 1683–1684 MHz) and central 0.5 MHz (1667.75–1668.25 MHz) with the mean intensity in that time segment. We searched for giant pulses in a rolling boxcar window of 8 μs in steps of 62.5 ns (one sample in the complex timestream), summing the power from both channels and both polarizations. We flagged peaks above 8σ in the WSRT data, corresponding to ∼60 Jy, as giant pulses, finding 15232 events, i.e., a rate of ∼0.7 s^-1. This detection threshold was chosen to ensure there were no spurious detections. We find 4633 pulses above 8σ in the overlapping 2.5 hours of AR data, and all of these pulses have a detectable, higher S/N counterpart in WSRT. We show a sample main pulse detected at both telescopes, as well as a main pulse and an interpulse nearby in time, in Fig. <ref>.

One possible concern is the effect of saturation from 2-bit recording, as described in <cit.>. The dispersion sweep in our frequency range of 1652–1684 MHz is 3.26 ms, ∼100000 samples. Thus, even the strongest giant pulse, having a peak flux density of ∼100 kJy and a duration of 3 μs, will be reduced in intensity by roughly a thousand times, increasing the system temperature by only 35% and 60% for AR and WSRT, respectively, with the recording systems far from saturation. The majority of our pulses are much fainter, near our detection threshold of ∼60 Jy, where saturation effects will be negligible.

§ SCINTILLATION PROPERTIES

With the phased WSRT array, our pulse detection rate is sufficiently high that it becomes possible to compute a traditional dynamic spectrum by summing intensities as a function of time. We do this first below, as it gives an immediate qualitative view of the scintillation. A more natural choice for pulses which occur randomly in time, however, is to parametrize variations as a function of Δ t, the time separation between pulses <cit.>. Hence, we continue by constructing correlation functions of the spectra, as functions of both time and frequency offset.

§.§ The Dynamic Spectrum of the Main Pulse

We construct the dynamic spectrum I(t, ν) by simply summing giant pulse spectra in each time bin. Since we wish to observe the scintillation pattern rather than the vast intrinsic intensity variations between giant pulses, we normalize each time bin by the total flux within that bin. While there will still be structure in the dynamic spectrum owing to the intrinsic time structure of the giant pulses <cit.>, any features in frequency which correlate in time should only be associated with scintillation. We show a 20 minute segment of the dynamic spectrum in Figure <ref>. While noisy, the dynamic spectrum shows scintillation features. They are resolved by our time and frequency bin sizes of 4 s and 250 kHz, respectively, but only by a few bins, suggesting that the scintillation timescale and bandwidth are larger than our bin sizes by a factor of a few (consistent with ν_decorr=1.10±0.02 MHz, t_scint=9.2±0.13 s measured below).
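A minimal sketch of this dynamic-spectrum construction, assuming the pulse search has already produced arrival times (in seconds) and per-pulse power spectra in 125 kHz channels (the names and binning below are illustrative):

```python
import numpy as np

def dynamic_spectrum(pulse_times, pulse_spectra, t_bin=4.0, n_pool=2):
    """Sum giant-pulse spectra in time bins and normalize per bin."""
    pulse_times = np.asarray(pulse_times)
    spectra = np.asarray(pulse_spectra)
    # Pool pairs of 125 kHz channels to 250 kHz resolution.
    spectra = spectra.reshape(spectra.shape[0], -1, n_pool).mean(-1)

    n_t = int(np.ceil(pulse_times.max() / t_bin))
    dyn = np.zeros((n_t, spectra.shape[1]))
    for i, spec in zip((pulse_times // t_bin).astype(int), spectra):
        dyn[i] += spec          # sum all pulses landing in each 4 s bin

    # Normalize each time bin by its total flux so that intrinsic
    # pulse-to-pulse intensity variations do not swamp the scintillation.
    totals = dyn.sum(axis=1, keepdims=True)
    return np.divide(dyn, totals, out=np.full_like(dyn, np.nan),
                     where=totals > 0)
```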
§.§ Correlation Functions

To infer the scintillation bandwidth and timescale, one usually uses the auto-correlation of the dynamic spectrum, but for pulses randomly spaced in time, it is easier to correlate spectra of pulse pairs and then bin by time separation Δ t to create an estimate of the intrinsic correlation coefficient ρ(Δν, Δ t) <cit.>. For two spectra P_1(ν) and P_2(ν), the expected correlation coefficient is given by ρ_12(Δν) = ⟨(P_1(ν)-μ_1)(P_2(ν + Δν)-μ_2)⟩/σ_1 σ_2, where Δν is the offset in frequency, μ_1, μ_2, σ_1^2 and σ_2^2 are the expectation values for the means and intrinsic variances of P_1 and P_2, and we use ⟨…⟩ to indicate the expectation value of the product. If the giant pulses were effectively delta functions in our band, but affected by the same impulse response function associated with the scintillation, one would expect ρ=1 for Δν=0, and a fall-off in frequency and time difference with the appropriate scintillation bandwidth and timescale, approaching 0 at large Δν and Δ t. As noted by <cit.>, however, if each pulse consists of multiple shots, the spectra of two pulses will have different structure, and, if still affected by the same impulse response function, one expects a reduced peak, with maximum ρ≃1/3 (as was indeed observed in their data set).

From observed spectra, one can only estimate the correlation coefficient. As we show in Appendix <ref>, if one simply uses the standard equation for the sample correlation coefficient, using the sample mean m and sample variance s^2 as estimates of μ and σ^2, the result is biased in the presence of background noise, but an unbiased estimate can be made using r_12(Δν) = 1/(k-1) ∑_i=1^k (P_1(ν_i)-m_1)(P_2(ν_i+Δν)-m_2)/(s_1 s_2) × m_1 m_2/((m_1-m_b)(m_2-m_b)), where m_b is the mean power in the background.

We created spectra for pulses using the 8 μs bin centered at each peak, yielding 125 kHz channels. To ensure sufficiently reliable correlations between pulse pairs, we limited ourselves for the WSRT data to pulses with S/N > 16, corresponding to S/N > 1 per channel, leaving 6755 main pulses and 650 interpulses. We use the AR data as a cross-check for possible systematics in the WSRT data, lowering the limit to S/N > 10 to have roughly the same sample of pulses, accounting for the ratio of 165/275 between the two telescopes' system temperatures. For each pulse pair, the correlation between their spectra gives r(Δν) for a single value of Δ t, the time separation of the pulses. We average these correlated spectra in equally spaced bins of Δ t to construct our estimate of the correlation function. We show the result in Fig.
<ref>, both for correlations between main pulse pairs and for correlations between main pulse and interpulse pairs (there are insufficient giant pulses associated with the interpulse to calculate a meaningful correlation function from those alone). One sees that the correlations are fairly well defined, and that the correlation of the main pulse with itself is clearly different from that with the interpulse, the latter being broader in both frequency and time, and having a lower maximum correlation. More generally, one sees that the amplitudes of all correlations are surprisingly low. We investigate the latter further in Sect. <ref>, but note here that it is not a systematic artefact of a given telescope: the results for WSRT and AR are entirely consistent with each other (as expected, as the giant pulses at both telescopes should differ only by noise and by systematics; space-ground VLBI results from <cit.> measure a spatial scale of the scintillation pattern of 34000 ± 9000 km during this observation, larger than the Earth's radius).

For more quantitative measures, we fit the correlations with 2D Gaussians with variable amplitude, frequency and time width. For the correlation between main pulse and interpulse, we additionally allow for offsets in time and frequency, Δ t_0 and Δν_0, where the sign convention used is IP - MP (e.g., a positive Δ t_0 means the MP precedes the IP). For the main pulse correlations, we find an amplitude of 1.80±0.03% and decorrelation scales of ν_decorr=1.10±0.02 MHz in frequency and t_scint=9.24±0.13 s in time.[We adopt the usual convention, defining ν_decorr and t_scint as the values where the correlation function drops to 1/2 and 1/e, respectively.] The timescale is somewhat shorter than the value of 25±5 s found at 1.475 GHz by <cit.>, and the difference in observing frequency does not account for the difference (for t_scint∝ν, our measurement corresponds to 8.17±0.12 s at 1.475 GHz). Differences are expected for observations at different epochs, however, as the scattering in the nebula is highly variable (<cit.>), often showing “echoes” (e.g., <cit.>).

Our fits to the main pulse to interpulse correlations confirm the qualitative impression from Fig. <ref>, that compared to the main pulse to main pulse correlations they are weaker and broader in both frequency and time: the measured amplitude is 0.97±0.07%, t_scint = 10.7±0.8 s, and ν_decorr = 1.44±0.10 MHz. We also find marginally significant time and frequency offsets, of Δ t_0 = 1.02±0.54 s and Δν_0 = -0.34 ± 0.09 MHz. To try to quantify the significance of these differences, we use simulated cross-correlations. For these, since we have many more giant pulses during the main pulse than the interpulse, we simply take 650 random main pulses (the number of interpulses above 16 σ) and correlate these with the other 6755 main pulses, without correlating identical pulses. We repeat this 10000 times, and fit each subset with a 2D Gaussian, allowing for offsets in time and frequency, Δ t_0 and Δν_0. Comparing these with the values fit to the interpulse to main-pulse correlations (see Fig. <ref>), the differences appear significant: none of the simulated data sets have as small an amplitude or as large a frequency offset Δν_0, while only relatively small numbers have a larger time offset Δ t_0 or wider frequency or time widths.
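For reference, a minimal sketch of the noise-bias-corrected pair-correlation estimator written out above; here p1 and p2 stand for the 8 μs power spectra of two giant pulses, and m_b for the mean background power estimated off-pulse. Averaged over many pairs in bins of time separation, this yields the correlation functions just described:

```python
import numpy as np

def corr_unbiased(p1, p2, m_b, max_lag=16):
    """r(dnu) between two pulse spectra, corrected for background bias."""
    m1, m2 = p1.mean(), p2.mean()
    s1, s2 = p1.std(ddof=1), p2.std(ddof=1)
    bias = (m1 * m2) / ((m1 - m_b) * (m2 - m_b))   # background correction
    lags = np.arange(-max_lag, max_lag + 1)        # in 125 kHz channels
    r = np.empty(lags.size)
    for n, lag in enumerate(lags):
        a = p1[max(0, -lag): p1.size - max(0, lag)]
        b = p2[max(0, lag): p2.size - max(0, -lag)]
        k = a.size
        r[n] = ((a - m1) * (b - m2)).sum() / ((k - 1) * s1 * s2) * bias
    return lags, r
```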
§.§ Comparison to previous work on the same dataset

The same data analysed in this paper were studied in <cit.> and <cit.>, who derive de-correlation bandwidths of 279.2 ± 34.4 kHz and 320 kHz, respectively. These values differ significantly from our value of ν_decorr=1.10±0.02 MHz, so here we further investigate the origin of these differences.

Our methods differ from those of <cit.> and <cit.>; the crucial difference is that they auto-correlate individual giant pulses (between left and right circular polarization, to reduce the correlation of intrinsic structure), while we correlate pulse pairs. For a direct comparison, we try to follow the steps of <cit.>, adopting their cutoff of S/N > 22, correlating the left and right circular polarizations of each giant pulse and fitting a single exponential. From this, we measure ν_decorr=0.39 MHz, much closer to their value of ν_decorr=0.32 MHz. However, we find that a two-exponential fit describes the data much better, giving two distinct scales of ν_decorr,1=1.0 MHz and ν_decorr,2=0.19 MHz. This is consistent with our results if the narrow component is caused by intrinsic pulse structure (correlating only within a single pulse's spectrum), and the wide component is the scintillation bandwidth (correlating between pulse pairs within t_scint).

Additionally, <cit.> derive a scintillation timescale of 22.4 ± 6.1 s, larger than our value by a factor of 2. Their timescale is derived in a different way: they use their measured scintillation bandwidth (described above) and the angular size θ of the scattering screen, derived directly from their VLBI correlation. Using the known velocity of the Crab, and an assumed isotropic scattering screen, this gives a timescale estimate. Given the difference in our methods, and the fact that the angular broadening likely arises in the interstellar medium rather than in the nebula, we are not worried about the differences in these values.

§.§ Secondary Spectra

Pulsar scintillation is often best studied in terms of the conjugate variables τ and f_D, through the secondary spectrum A(τ, f_D) = Ĩ_1(τ, f_D) Ĩ_2^*(τ, f_D) (e.g., <cit.>). The secondary spectrum is simply the Fourier transform of the correlation function r(Δν, Δ t) = I_1(ν, t) ⊛ I_2(ν, t). For the MP-MP correlation, A(τ, f_D) is purely real, but in the MP-IP correlation, any time or frequency offsets in the correlation function will manifest as phase gradients in f_D or τ, respectively. We show the secondary spectra for both correlations, after padding with 60 zero bins in time, in Fig. <ref>. The MP-IP secondary spectrum is dominated by a phase gradient in τ, arising from the frequency offset in the correlation function. Removing a linear phase gradient in τ reveals a marginally significant phase gradient in f_D. If the screen were one-dimensional, and the main pulse and interpulse emission locations were offset, there would be a phase gradient in f_D, independent of τ.
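A minimal sketch of this secondary-spectrum step, assuming r is the binned correlation function with axes (time lag, frequency lag):

```python
import numpy as np

def secondary_spectrum(r, pad_t=60):
    """Fourier transform r(dnu, dt) to the conjugate (tau, f_D) plane."""
    r = np.nan_to_num(np.asarray(r))
    r = np.pad(r, ((pad_t, pad_t), (0, 0)))   # zero-pad the time-lag axis
    return np.fft.fftshift(np.fft.fft2(r))    # complex-valued A(tau, f_D)

# For the MP-MP correlation the result should be real to within noise;
# for MP-IP, offsets in time or frequency appear as phase gradients.
```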
§ RAMIFICATIONS

§.§ The Surprisingly Low Correlation Coefficient

Giant pulses are on average a few μs in duration, comprised of many smaller, unresolved “nanoshots” (e.g., <cit.>). However, if all nanoshots originate from the same projected physical location, they should all be imparted with the same impulse response function; an identical signal would correlate perfectly, and a signal with many random polarized shots with the same impulse response should correlate no worse than 1/3 (<cit.>; Appendix <ref>). The observed ∼ 2 % correlation between main pulses is well below the expectation of 1/3. This could be explained if individual pulses come from small parts of the full extended emission region, which is larger than the resolution of the scattering screen (discussed in the following section). Under this explanation, the correlation should decrease even further during times of higher nebular scattering; this is something we investigate further in Lin et al. (2020, in preparation).

§.§ Spatial Resolution of the Scattering Screen

The size and location of the scattering screen are not precisely known, but a model in which the majority of the temporal scattering occurs in the Crab nebula is favoured by VLBI measurements showing that the visibility amplitude is constant through the scattering tail and independent of the scattering time (<cit.>), as well as by the short scintillation timescale (<cit.>). Since scattering requires relatively large differences in (electron) density, it is very unlikely to happen inside the pulsar-wind filled interior of the Crab nebula, which must have very low density. For a reasonable bulk magnetic field of 10^-4 G, the emitting radio electrons are very relativistic, with γ∼ 10^3. The radio emitting electrons have a density of n_e ≈ 10^-5 cm^-3 <cit.>, implying that the refractive index deviates from unity by a tiny amount, Δ n ≈ (ν_p/ν)^2 ∼ 10^-18, where ν_p = (e^2 n_e / πγ m_e)^1/2 is the plasma frequency and ν the observed radio frequency. Instead, the only plausible location for the temporal scattering is in the optically emitting filaments in the Crab Nebula, which have n_e∼ 1000 cm^-3 <cit.>. These filaments develop because, as the pulsar wind pushes on the shell material, the contact discontinuity accelerates, leading to the Rayleigh-Taylor (RT) instability <cit.>. With 3-dimensional models fit to optical spectroscopic data of the Crab Nebula, <cit.> find that the filaments reside, conservatively, in the range ∼ 0.5–2.0 pc from the pulsar when using a nominal pulsar distance of 2 kpc.

The scattering causes a geometric time delay given by τ = θ^2 d_eff/2c, with d_eff = d_psr d_lens/(d_psr - d_lens), where θ is the angle the screen subtends as seen from Earth, and d_psr and d_lens are the distances to the pulsar and the screen, respectively. The scattering screen can be seen as a lens, with physical size D=θ d_lens and corresponding angular resolution λ/D, giving a physical resolution at the pulsar of Δ x = (d_psr - d_lens) λ / θ d_lens, or, in terms of the scattering time τ, Δ x = λ/(√(2)π) × ((d_psr - d_lens)/(2cτ) × d_psr/d_lens)^1/2. The prefactor 1/√(2)π is model dependent, coming from using a square-law (α = 2) phase-structure function <cit.>.

If we were to infer the scattering time from the scintillation bandwidth, we would find τ_scint = 1/2πΔν≃ 160 ns, using Δν≈ 1 MHz. However, this is lower than the apparent ∼1 μs scattering seen in Figure <ref>, and lower than measurements of the scattering at this epoch of τ(600 MHz) ≃ 0.1 ms <cit.>, or τ(350 MHz) ≃ 0.6 ms <cit.>, which would correspond to τ≃ 1.1-1.7 μs when scaled by τ∝ν^-4 to our observing frequency. In <cit.>, it is noted that the relation τ = 1/2πΔν may underestimate τ in the case of an extended, resolved emission region, as τ = √(1+4σ_1^2)/(2πΔν), where σ_1 is the size of the emission region in units of the lens resolution. In this picture, the scintillation timescale would depend on the lens resolution and the emission region size. Using τ≃1 μs and d_psr-d_lens≃1.0 pc, we find Δ x≃290 km (for the full range of allowed distances, 205≲Δ x≲410 km).
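As a numerical cross-check of these scales (a sketch; the scattering time and distances are the assumed values quoted above):

```python
import numpy as np

c = 2.998e8                      # m/s
lam = c / 1.668e9                # wavelength at 1668 MHz (m)
pc = 3.086e16                    # m
tau = 1e-6                       # adopted scattering time (s)
d_psr = 2000.0 * pc              # assumed pulsar distance

for gap_pc in (0.5, 1.0, 2.0):   # pulsar-screen separation in the filaments
    d_lens = d_psr - gap_pc * pc
    dx = lam / (np.sqrt(2.0) * np.pi) * np.sqrt(
        gap_pc * pc / (2.0 * c * tau) * d_psr / d_lens)
    print(f"d_psr - d_lens = {gap_pc} pc -> resolution {dx / 1e3:.0f} km")
# Prints ~205, 290 and 410 km, all below the ~1600 km light-cylinder radius.
```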
Thus, the resolution of the scattering screen is smaller than the light-cylinder radius of the Crab pulsar, R_ LC≡ cP/2π=1600 km. Additionally, a nominal time offset between the main pulse and interpulse of ∼ 1-2 s is ∼ 10-20% of the scintillation timescale, which would suggest that the emission locations are separated by hundreds of km. We could turn a measured time offset into a physical separation given a relative velocity between the pulsar and the screen. Unfortunately, this is not known, though we can set limits from the proper motion. The proper motion of the Crab pulsar relative to its local standard of rest is measured to be 12.5±2.0 mas/yr in direction 290±9 deg (east of north; <cit.>), where the uncertainties attempt to account for the uncertainty in the velocity of its progenitor, and, therewith, of the nebular material. At an assumed distance of 2 kpc, the implied relative velocity of the pulsar is ∼120 km/s, and non-radial motions in the filaments can be up to ∼70 km/s <cit.>. A 1-2 s time delay between pulse components would then suggest a projected separation between the interpulse and main pulse emission regions of ∼ 50 - 400 km.

As mentioned above, <cit.> argue that the short scintillation timescale suggests a nebular origin of the observed scintillation. Here we outline the argument using our measured values. The scintillation timescale is roughly the time it takes for the extended emission region to traverse a resolution element of the scattering screen; using the above resolution and proper motion gives an estimate of the timescale of √(σ_1^2+1)Δ x / v_ pm∼ 5.5-11 s, consistent with our observed time of 9.24±0.13 s. Scintillation in the interstellar screen for our given scintillation bandwidth would result in much larger resolution elements (for a screen halfway to the pulsar, at d_ psr-d_ lens≃1 kpc, greater by a factor ∼√( 1 kpc/1 pc)∼ 30), and scintillation on several-minute timescales, more typical of interstellar scintillation in this frequency range.

If we assume pulses occur at random positions within an extended region, we may also estimate the average expected correlation. To test this, we simulated 500 pulses with positions drawn at random from a 2D Gaussian with σ_xy = σ_1Δ x. The correlation coefficient between each pair of pulses is estimated as r_ij = C e^-|x⃗_ij|^2 / (Δ x)^2, where |x⃗_ij| is the projected position difference between each pulse pair, and C < 1 is an unknown constant which depends on both the intrinsic spectral structure in the pulses (e.g., <cit.> and the appendix) and whether the separate components forming giant pulses are resolved. Assuming a 1D screen or an isotropic 2D screen, and assuming C=1/3 (i.e., that individual giant pulses are unresolved, which may not be a good assumption), gives estimates of the expected correlation coefficient of ⟨ r_1D⟩≃ 0.071 and ⟨ r_2D⟩≃ 0.016, respectively, on the same order as our observed average correlation coefficient. We find the assumed picture of giant pulses occurring within an extended region of ∼ 1000 km to give a consistent result, being broadly in agreement with the observed scintillation bandwidth, scintillation timescale, and low correlation coefficient. This picture will be expanded in more detail in Lin et al. (in prep.).
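The Monte Carlo just described can be sketched in a few lines; here σ_1 ≈ 2.2 (i.e., an emission region of order a thousand km for Δx ≈ 290 km) is an illustrative assumption chosen to roughly reproduce the quoted expectation values.

```python
# 500 pulses at positions from a 2D Gaussian of width sigma_1 * dx;
# pairwise correlations r_ij = C exp(-|x_ij|^2 / dx^2) with C = 1/3.
import numpy as np

rng = np.random.default_rng(1)
n, dx, sigma1, C = 500, 290.0, 2.2, 1.0 / 3.0         # dx in km
pos = rng.normal(0.0, sigma1 * dx, size=(n, 2))
iu = np.triu_indices(n, k=1)                           # distinct pulse pairs

d2_2d = np.sum((pos[:, None, :] - pos[None, :, :]) ** 2, axis=-1)
print(f"<r_2D> = {(C * np.exp(-d2_2d / dx**2))[iu].mean():.3f}")   # ~0.016
d2_1d = (pos[:, 0, None] - pos[None, :, 0]) ** 2       # 1D screen: one axis
print(f"<r_1D> = {(C * np.exp(-d2_1d / dx**2))[iu].mean():.3f}")   # ~0.071
```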
The picture we find above differs from that of <cit.>, who find the spectra of nearby pulses at 1.48 GHz and 2.33 GHz to correlate at a value of ∼ 1/3. They find values of the scintillation bandwidth and timescale at 2.33 GHz of Δν_s=2.3±0.4 MHz and Δ t_s=35±5 s, respectively, which scaled to 1.68 GHz give Δν = 0.6±0.1 MHz and Δ t = 25±4 s (<cit.> also consistently find Δν_s < 0.8 MHz, Δ t_s=25±5 s at 1.48 GHz). They face a similar inconsistency between the measured scattering time (∼ 0.1 ms at 600 MHz, which implies ∼1.7 μs at 1.67 GHz, <cit.>) and the inferred scattering time from 1/2πΔν≈ 250 ns. However, the scintillation time they measure is much larger than ours, implying that the dominant screen must be at a further distance from the pulsar, or anisotropic and oriented such that there is poorer resolution along the direction of the relative velocity between the pulsar and the screen.

§.§ Fully Quantifying Emission Sizes and Separations
A major uncertainty in our above estimates is the geometry of the lens. From studies of the scintillation in other pulsars, the scattering screens in the interstellar medium are known to be highly anisotropic, as demonstrated most dramatically by the VLBI observations of <cit.>. If the same holds for the nebular scattering screens, this implies that our resolution elements are similarly anisotropic. Since the orientation relative to the proper motion is unknown, the physical distance between the main and interpulse regions could be either smaller or larger than our estimate above. Since the scattering varies with time, it may be possible to average out these effects. With a perfectly 1D scattering screen, it is difficult to produce both a time and a frequency offset, as there would necessarily be some location along the screen's axis sampled by both the main pulse and the interpulse. One possible way to induce a frequency offset would be a spatial gradient of the column density (or “prism”) on the scale of the separation between emission regions; our frequency offset of ∼0.3 MHz could be explained by a DM gradient of ΔDM / DM∼ 0.02% over ∼ 1000 km. The DM variations of the Crab have not been probed on such small spatial scales, although the DM varies by considerably more than this on longer timescales (i.e., larger spatial scales; e.g., <cit.>). For two spatially separated emission components, <cit.> find that a two-dimensional screen can produce both a time and a frequency shift over a short timescale. The two-dimensionality of the screen, or the effect of multiple scattering screens, may need to be considered.

Furthermore, all values relating to the scattering screen depend on the uncertain distance to the Crab pulsar, suggesting that a parallax distance would improve our constraints. In addition, the rough localization of the scattering in the filaments is based on physical arguments; the results would be greatly improved by a direct measurement. The distance to the screen(s) can be constrained through VLBI and through scintillation measurements across frequency. As the spatial broadening of the Crab is dominated by the interstellar screen, rather than the nebula, VLBI at space-ground baselines <cit.> or at low frequencies <cit.> can help constrain the angular size of the scattering in the interstellar medium. This in turn can constrain the size of the nebular screen. The visibility amplitudes will only decrease below 1 when the scattered image of the pulsar is not point-like to the interstellar screen.
Time-resolved visibilities throughout the rise time of scattered pulses may then elucidate the nebular scale; by increasing in time delay, one increases in angular size, and thus resolution, so one may observe the transition point beyond which the nebular screen becomes resolved. In addition, the interstellar screen will scintillate only when it does not resolve the nebular screen (the same argument has been made for scintillation in FRBs, <cit.>). The transition frequency at which the two scintillation bandwidths become apparent in the spectra could give a size measurement of the nebular screen. Applying this same analysis across different frequencies, or at times of different scattering in the nebula, will also help to quantify both the separation of the main pulse and interpulse and the size of the emitting regions of both components. The correlation function of spectra is a crude measurement: it is fourth order in the electric field, and the scintillation pattern is contaminated with intrinsic pulse substructure. A much cleaner measurement can hopefully be made in the regime where the duration of giant pulses is much less than the scattering time, akin to the coherent method of de-scattering pulses in <cit.>.

As discussed in section <ref>, we associate the scattering screen with the filaments in the pulsar wind nebula <cit.>. These filaments arise from the Rayleigh-Taylor instability, when the pulsar wind pushes on and accelerates the freely expanding envelope. This stage terminates after a few thousand years, when the reverse shock from the interaction between the supernova remnant and the ISM reaches the pulsar wind nebula (<cit.>; see the review by <cit.>). Thus we expect such special scattering environments to be specific to pulsar wind nebulae during a fairly short period: sufficiently young for the reverse shock not to have reached the pulsar wind nebula, but sufficiently advanced to have RT-induced filaments.

§.§ Acknowledgements
We thank the anonymous referee, whose comments greatly improved the draft. We thank Judy Xu, who performed the initial 1D correlation analysis of the giant pulses. These data were taken as part of a RadioAstron observing campaign. The RadioAstron project is led by the Astro Space Center of the Lebedev Physical Institute of the Russian Academy of Sciences and the Lavochkin Scientific and Production Association under a contract with the State Space Corporation ROSCOSMOS, in collaboration with partner organizations in Russia and other countries.

§ CORRECTING NOISE BIASES IN THE CORRELATION COEFFICIENT
The intrinsic correlation coefficient between two pulse spectra P_1,2(ν) can be generally defined as, ρ(P_1, P_2) = ⟨ (P_1(ν)- μ_1) (P_2(ν)-μ_2) ⟩/σ_1σ_2, where ⟨…⟩ indicates the expectation value of an average over frequency, and μ and σ^2 are the expectation values of the mean and variance, respectively. With this definition, one will have ρ=1 for two pulses with identical frequency structure. Typically, as an estimate of ρ, one uses the sample correlation coefficient, r(P_1, P_2) = 1/k-1∑_i=1^k (P_1(ν_i) - m_1) (P_2(ν_i) - m_2)/s_1 s_2, where k is the number of frequency bins and m_p and s_p^2 are the usual sample mean and variance. In the presence of noise, subtracting the sample means leaves the numerator unbiased, but s^2 will be systematically higher than σ^2, and thus r will be biased low. For normally distributed data, one could approximately correct with s_ int^2=s_p^2-s_n^2, but this does not hold for our case of power spectra.
Here, we derive an expression valid for our case, where we wish to ensure that ⟨ r⟩=1 for two pulses that are sufficiently short that we can approximate them as delta functions, and that are affected by the interstellar medium in the same way, i.e., have the same impulse response function g(t). In that case, the measured electric field of a giant pulse is, E_p(ν) = A_p g(ν) + n(ν), where A_p is the amplitude of the pulse's delta function in the Fourier domain, and g(ν) and n(ν) are the Fourier transforms of the impulse response function and the measurement noise, respectively. The measured intensity is then P_p(ν) = E_p^2(ν) = A_p^2 g^2(ν) + n^2(ν) + 2A_p|g(ν)||n(ν)|cos(Δϕ(ν)), where Δϕ(ν) is the phase difference between n(ν) and g(ν), and where squares are of the absolute values. The expectation value of the mean is, μ_p = ⟨ P_p ⟩ = A_p^2 ⟨ g^2 ⟩ + ⟨ n^2 ⟩, where we have dropped the dependencies on frequency for brevity, and used that the cross term averages to zero since ⟨cos(Δϕ)⟩=0. Similarly, the expectation value of the variance is, σ_p^2 = A_p^4[⟨ g^4⟩-⟨ g^2⟩^2] + ⟨ n^4⟩ - ⟨ n^2⟩^2 + 4A_p^2 ⟨ g^2n^2cos^2(Δϕ)⟩, where we have again omitted terms that average to zero. The last term does not average to zero because of the squaring: it reduces to 2A_p^2⟨ g^2⟩⟨ n^2⟩, since g and n are independent and ⟨cos^2(Δϕ)⟩=1/2. For two pulses differing only by noise, the expectation value of the numerator of r is ⟨ (P_1(ν)- μ_1) (P_2(ν)-μ_2) ⟩ = A_1^2 A_2^2 [⟨ g^4⟩-⟨ g^2⟩^2]. Thus, for an unbiased estimate of r, we need to estimate σ_p=A_p^2[⟨ g^4⟩-⟨ g^2⟩^2]^1/2. We can do this by also measuring the properties of the background, which, if it is dominated by measurement noise with the same properties as during the pulse, has μ_b=⟨ n^2⟩ and s^2_b=⟨ n^4⟩ - ⟨ n^2⟩^2 (this will underestimate the noise if the pulse is strong enough to raise the system temperature, although in that case the noise makes only a small contribution to the variance of the pulse and is negligible in computing r). With this, it follows that to make estimates of r free of noise bias, we should use, s_ int^2 = s_p^2 - s_b^2 - 2(μ_p - μ_b)μ_b.

This is a noisy quantity, however, and simply replacing the measured variance with this value will lead to some measured values of s_ int∼0, and thus diverging correlations. Instead of correcting the sample variances in the denominator of the sample correlation coefficient, one can use the properties of the pulse to estimate a correction to the sample correlation coefficient itself. Assuming the impulse response function g(ν) is approximately normally distributed, the power spectrum |g(ν)|^2 will be distributed roughly as a χ^2 distribution with two degrees of freedom, with s_p^2≃ m_p^2. Using this, our unbiased estimate of the variance simplifies to s_ int^2 = (m_p - m_b)^2, which uses the well-measured means of both the pulse and the background (in our case, after the bandpass calibration described in section <ref>, the mean and standard deviation of the background are unity). We should not use the above estimate directly in the denominator of the standard correlation coefficient in Eq. <ref>, as at high S/N this unnecessarily introduces extra variance. Instead, we can ensure an unbiased, noise-corrected estimate of the correlation that works at both low and high S/N by writing it as: r(P_1(ν), P_2(ν)) = ⟨ (P_1(ν)- m_1) (P_2(ν)-m_2) ⟩/s_1 s_2(m_1 m_2/(m_1-m_b)(m_2-m_b)).
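A minimal sketch of this corrected estimator, applied to a toy delta-function pulse pair (one shared impulse response, two noise realizations); the amplitude, the number of bins, and the unit background mean are illustrative assumptions.

```python
# Noise-corrected correlation coefficient from the equation above, tested on
# a toy delta-function pulse pair: one impulse response g, two independent
# noise realizations. The background mean m_b is unity by construction.
import numpy as np

def corrected_corr(P1, P2, mb=1.0):
    m1, m2 = P1.mean(), P2.mean()
    r = np.mean((P1 - m1) * (P2 - m2)) / (P1.std() * P2.std())
    return r * (m1 * m2) / ((m1 - mb) * (m2 - mb))

rng = np.random.default_rng(2)
k, A = 4096, 3.0
g = (rng.normal(size=k) + 1j * rng.normal(size=k)) / np.sqrt(2)   # <|g|^2> = 1
def spectrum():
    n = (rng.normal(size=k) + 1j * rng.normal(size=k)) / np.sqrt(2)
    return np.abs(A * g + n) ** 2
P1, P2 = spectrum(), spectrum()
print(f"uncorrected r = {np.corrcoef(P1, P2)[0, 1]:.3f}")   # biased low (~0.8)
print(f"corrected   r = {corrected_corr(P1, P2):.3f}")      # ~1
```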
To test our correlation correction, we first verify our underlying assumption, that the mean and standard deviation are equal, by taking the ratio of the measured values for all pulses. The result is shown in the top panel of Figure <ref>: one sees that the ratio is around unity, except at the highest S/N, where saturation biases the noise low. Next, we correlated the spectra of all overlapping pulses between WSRT and AR with S/N > 16 at both telescopes, shown in the bottom panel of Figure <ref>. Since these are the same pulses, but observed at two different telescopes and thus with different noise, we expect values to scatter around unity. Indeed, using the full noise correction, we find that the sample correlations scatter around a value close to unity, ∼90%, independently of S/N (the difference from 100% likely reflects remaining differences in bandpass, etc.; the baseline is too short for interstellar scintillation to differ). Without the correction, the sample correlation coefficient is always less than 100%, and decreases with decreasing S/N.

For a further test, we use simulated giant pulses. To begin, pulses are simulated as identical delta-function giant pulses with the same impulse response function but different noise, in the manner described in <cit.>. We find that, using the above estimates, the correlation coefficients between these pulses indeed average to unity. Trying a slightly more realistic simulation, forming giant pulses with N fully polarized shots with random amplitudes (drawn from a normal distribution) and random phases, the correlation decreases, saturating at r = 1/3 for large numbers (N≳ 10), in line with what is expected from the derivation in <cit.>.

Finally, all of the above is for correlations of the power spectra of a single polarization, and throughout the paper we correlate each polarization separately, then average the two values. For completeness, we note that if one were to use the total intensity I = P_L + P_R = P_X + P_Y, under the assumption that the noise is not correlated between the polarizations, the expectation value of the variance is σ_I^2 = A_p^4[⟨ g^4⟩-⟨ g^2⟩^2] + ⟨ n^4⟩ - ⟨ n^2⟩^2 + 2A_p^2 ⟨ g^2n^2cos^2(Δϕ)⟩, and a noise-corrected estimate can be made with, s_ int^2 = s_I^2 - s_b^2 - (μ_I - μ_b)μ_b, with the cross-term differing by a factor of 2 from the single-polarization case. Adding more samples with independent noise, the cross-term diminishes further, and can be treated as Gaussian in the limit of large N. We do not use these estimates, however, since in the case that the giant pulses are not single delta functions, the estimates start to depend on the degree of polarization <cit.>, which is a complication we would rather avoid.
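The many-shot simulation can be sketched as follows; the impulse-response model, shot-time window, and signal amplitude are illustrative assumptions, and the correlation should fall from near unity for a single shot toward 1/3 for N ≳ 10.

```python
# Sketch of the N-shot test described above: pulses built from N polarized
# shots (random normal amplitudes, random phases) sharing one impulse
# response g. The impulse-response model and all sizes are assumptions.
import numpy as np

rng = np.random.default_rng(3)
k, A = 4096, 20.0
f = np.fft.fftfreq(k)
g = np.fft.fft(rng.normal(size=k) * np.exp(-np.arange(k) / 64.0))
g /= np.sqrt(np.mean(np.abs(g) ** 2))               # unit mean power

def pulse_spectrum(N):
    t = rng.integers(0, 256, size=N)                # shot arrival times
    a = rng.normal(size=N) * np.exp(2j * np.pi * rng.random(N))
    s = (a[:, None] * np.exp(-2j * np.pi * t[:, None] * f)).sum(axis=0)
    noise = (rng.normal(size=k) + 1j * rng.normal(size=k)) / np.sqrt(2)
    return np.abs(A * s * g / np.sqrt(N) + noise) ** 2

for N in (1, 3, 10, 30):
    r = np.mean([np.corrcoef(pulse_spectrum(N), pulse_spectrum(N))[0, 1]
                 for _ in range(20)])
    print(f"N = {N:2d} shots: <r> = {r:.2f}")
```

The saturation at 1/3 follows because, for many shots, the intrinsic spectrum becomes approximately complex Gaussian, so each pulse spectrum is the product of two independent unit-mean exponential variables with the shared |g|^2, giving a covariance-to-variance ratio of 1/3.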
State Estimation in Smart Distribution System With Low-Precision Measurements
Jung-Chieh Chen, Member, IEEE, Hwei-Ming Chung, Chao-Kai Wen, Member, IEEE, Wen-Tai Li, Student Member, IEEE, and Jen-Hao Teng, Senior Member, IEEE
J.-C. Chen is with the Department of Optoelectronics and Communication Engineering, National Kaohsiung Normal University, Kaohsiung 80201, Taiwan (e-mail: [email protected]). H.-M. Chung is with the Research Center for Information Technology Innovation, Academia Sinica, Taipei 11529, Taiwan. C.-K. Wen is with the Institute of Communications Engineering, National Sun Yat-sen University, Kaohsiung 80424, Taiwan. W.-T. Li is with Engineering Product Development, Singapore University of Technology and Design, Singapore. J.-H. Teng is with the Department of Electrical Engineering, National Sun Yat-sen University, Kaohsiung 80424, Taiwan.
December 30, 2023
============================================================

Efficient and accurate state estimation is essential for the optimal management of the future smart grid. However, to meet the requirements of deploying the future grid at a large scale, the state estimation algorithm must be able to accomplish two major tasks: (1) combining measurement data of different qualities to attain an optimal state estimate and (2) dealing with the large volume of measurement data produced by meter devices. To address these two tasks, we first propose a practical solution in which a very short word length is used to represent a partial measurement of the system state in the meter device, thereby reducing the amount of data. We then develop a unified probabilistic framework based on Bayesian belief inference to incorporate measurements of different qualities and obtain an optimal state estimate. Simulation results demonstrate that the proposed scheme significantly outperforms other linear estimators in different test scenarios. These findings indicate that the proposed scheme not only has the ability to integrate data of different qualities but can also decrease the amount of data that needs to be transmitted and processed. Bayesian belief inference, data reduction, incorporation, quantization, smart grid, state estimation.

§ INTRODUCTION
Integrating renewable energy generation, distributed microgenerators, and storage systems into power grids is one of the key features of enabling the future smart grid <cit.>. However, this integration gives rise to new challenges, such as the appearance of overvoltages at the distribution level. Accurate and reliable state estimation must be developed to achieve the real-time monitoring and control of this hybrid distributed generation system and therefore assure the proper and reliable operation of the future grid <cit.>.
An increase in the penetration of distributed generators necessarily leads to an unusual increase in measurements <cit.>. Conventional state estimation techniques, such as the weighted least squares (WLS) algorithm, rely on measurements from supervisory control and data acquisition (SCADA) systems <cit.>. A well-known fact is that the measurements provided by SCADA are intrinsically less accurate <cit.>. Moreover, adapting the conventional WLS technique to SCADA-based state estimation is not robust because of its vulnerability to poor measurements <cit.>. More recently, the deployment of high-precision phasor measurement units (PMUs) in electric power grids has been proven to improve the accuracy of state estimation algorithms <cit.>. However, PMUs remain expensive at present, and limited PMU measurements, along with conventional SCADA measurements, must be incorporated into the state estimator for the active control of the smart grid.

Several state estimation methods using a mix of conventional SCADA and PMU measurements have already been proposed for electric power grids, as shown in Refs. <cit.>. However, the joint processing of measurements of different qualities may result in an ill-conditioned system. Moreover, another critical but essential challenge in deploying the future grid at a large scale is the massive amount of measurement data that needs to be transmitted to the data processing control center (DPCC). This poses a risk to the grid's operator: the DPCC is drowning in data overload, a phenomenon called “data tsunami.” A massive amount of measurement data also lengthens the data collection time, so that the state estimation result is not prompt. To alleviate the impact of the data tsunami, Alam et al. <cit.> took advantage of the compressibility of spatial power measurements to decrease the number of measurements based on the compressive sensing technique. Nevertheless, the performance of <cit.> is relatively sensitive to the choice of the so-called compressive sensing matrix.

We first propose a practical solution to address the abovementioned challenges. Inspired by <cit.>, we observe that the measurements can be compressed not only through a compressive sensing matrix but also in themselves. Therefore, we use a very short word length to represent partial measurements of the system.[The work in <cit.> designed a compression matrix to shorten the measurement vector, where the compressed measurements are still represented with 12 or 16 bits for transmission. However, in the present study, partial measurements are represented with an extremely short word length for transmission. Therefore, the focus of our study is different from that of <cit.>.] The use of a very short word length (e.g., 1-6 bits)[In practical applications, all of the measurements obtained by the meter devices must be quantized before being transmitted to the DPCC for processing. Modern SCADA systems use a typical word length of 12 (or 16) bits to represent the measurements, which yields high-resolution quantized measurements.] to represent a partial measurement of the system state in the meter device reduces the amount of data that the DPCC needs to process. This data-reduction technique considerably enhances the efficiency of the grid's communication infrastructure and bandwidth because only a limited number of bits representing the measurements are sent to the DPCC.
In addition, instead of substituting all sensors in the current power system with PMUs, we only have to add several wireless meters with low-bit analog-to-digital converters, which are cheaper than conventional meters. Hence, the cost of placing the meters can be reduced. Nevertheless, traditional state estimation methods cannot be applied to a system with partial measurements represented by a very short word length. Thus, we develop a new scheme to obtain an optimal state estimate and minimize the performance loss due to quantization while incorporating measurements of different qualities. Before designing the state estimation algorithm, we first formalize the linear state estimation problem using data with different qualities as a probabilistic inference problem. This problem can then be tackled efficiently by describing an appropriate factor graph related to the power grid topology. Particularly, the factorization properties of the factor graph improve the accuracy of mixing measurements of different qualities. The concept of the estimation algorithm is then motivated by using the maximum a posteriori (MAP) estimate to construct the system states.

The proposed state estimation algorithm is derived from the generalized approximate message passing (GAMP)-based algorithms <cit.>, which exhibit excellent performance in terms of both precision and speed in dealing with high-dimensional inference problems, while preserving low complexity. In contrast to traditional linear solvers for state estimation, which do not use prior information on the system state, the proposed scheme can learn and therefore exploit prior information on the system state by using the expectation-maximization (EM) algorithm <cit.> based on the estimation result of each iteration. The proposed framework is tested in different test systems. The simulation results show that the proposed algorithm performs significantly better than other linear estimators. In addition, by using the proposed algorithm, the obtained state estimates remain accurate even when more than half of the measurements are quantized to a very short word length. Thus, the proposed algorithm can integrate data with different qualities while reducing the amount of data.

Notations—Throughout the paper, we use ℝ and ℂ to represent the set of real numbers and complex numbers, respectively. The superscripts (·)^𝖧 and (·)^* denote the Hermitian transposition and the complex conjugate, respectively. The identity matrix of size N is denoted by 𝐈_N or simply 𝐈. A complex Gaussian random variable x with mean x̄ and variance σ^2_x is denoted by 𝒩_ℂ(x;x̄,σ^2_x) ≜ (πσ^2_x)^-1exp(-|x-x̄|^2/σ^2_x) or simply 𝒩_ℂ(x̄,σ^2_x). 𝔼[·] and 𝕍𝔸ℝ[·] represent the expectation and variance operators, respectively. ℜ{·} and ℑ{·} return the real and imaginary parts of the input argument, respectively. arg(·) returns the principal argument of its input complex number. Finally, j≜√(-1).

§ SYSTEM MODEL AND DATA REDUCTION
§.§ System Model
Our interest is oriented toward applications in distribution systems. Following the canonical work on the formulation of the linear state estimation problem <cit.> and power flow analysis <cit.>, we use a π-model transmission line to indicate how voltage and current measurements are related to the considered linear state estimation problem.
For ease of understanding, we start with a π-equivalent of a transmission line connecting two PMU-equipped buses i and j, as shown in Fig. <ref>, where Y_ij is the series admittance of the transmission line, Y_i0 and Y_j0 are the shunt admittances at the sides of the transmission line where the current measurements I_i0 and I_j0 are taken, respectively, and the parallel conductance is neglected. In this case, the system state variables are the voltage magnitude and angle at each end of the transmission line, that is, V_i∈ℂ and V_j∈ℂ. In Fig. <ref>, the line current I_ij, measured at bus i, is positive in the direction flowing from bus i to bus j, which is given by I_ij = I_l+I_i0 =Y_ij(V_i-V_j) +Y_i0V_i. Likewise, the line current I_ji, measured at bus j, is positive in the direction flowing from bus j to bus i, which can be expressed as I_ji = -I_l+I_j0 =Y_ij(V_j-V_i) +Y_j0V_j. Then, (<ref>) and (<ref>) can be written in matrix form as [ I_ij; I_ji ]=[ Y_ij+Y_i0 -Y_ij; -Y_ij Y_ij+Y_j0 ][ V_i; V_j ]. Given that PMU devices are installed at both buses, the bus voltages and the currents flowing through the buses are available through PMU measurements. Based on these measured data, the complete state equation can be expressed as [V_i;V_j; I_ij; I_ji ]= [ 1 0; 0 1; Y_ij+Y_i0 -Y_ij; -Y_ij Y_ij+Y_j0 ]_≜H[ V_i; V_j ]. Here, H can be decomposed into four matrices related to the power system topology <cit.>. These matrices are termed the current measurement-bus incidence matrix, the voltage measurement-bus incidence matrix, the series admittance matrix, and the shunt admittance matrix, as explained later in this section. Thereafter, (<ref>) can be further extended to a general model for power systems.

Before explaining the rules for constructing these matrices used in the state equation, a fictitious six-bus system is first presented, as shown in Fig. <ref>(a). This simple system is used to demonstrate, for the sake of clarity, how each of these matrices is constructed. As Fig. <ref>(a) indicates, the line current flowing through each line is directly measurable with a current meter. However, bus voltages are measurable only at buses 1, 5, and 6 because PMUs are installed only at these three buses. Thus, in this example, the number of buses is N=6, the number of PMUs (or the number of buses that have a voltage measurement) is L=3, and the number of current measurements is M = 5. The explicit rules for constructing each of the four matrices are provided as follows. First, 𝐀∈ℝ^M× N is the current measurement-bus incidence matrix that indicates the locations of the current flow measurements in the network, where the rows and columns of 𝐀 represent, respectively, the serial number of the current measurement and the bus number. More specifically, the entries of 𝐀 are defined as follows. If the m-th current measurement I_m (corresponding to the m-th row) leaves from the n-th bus (corresponding to the n-th column) and heads toward the n'-th bus (corresponding to the n'-th column), the (m,n)-th element of 𝐀 is 1, the (m,n')-th element of 𝐀 is -1, and all the remaining entries of 𝐀 are identically zero. Second, Π∈ℝ^L× N is the voltage measurement-bus incidence matrix that points out the relationship between a voltage measurement and its corresponding location in the network, where the rows and columns of Π represent the serial number of the voltage measurement and the bus number, respectively. Hence, the entries of Π are defined as follows.
If the l-th voltage measurement (corresponding to the l-th row) is located at the n-th bus (corresponding to the n-th column), then the (l,n)-th element of Π is 1, and all the other elements of Π are zero. Third, 𝐘_l∈ℂ^M× M, which denotes the series admittance matrix, is a diagonal matrix whose diagonal terms are the line admittances of the transmission lines being measured. Thus, 𝐘_l is populated using the following single rule. For the m-th current measurement, the (m,m)-th element of 𝐘_l is the series admittance of the line being measured. Fourth, 𝐘_s∈ℂ^M× N is the shunt admittance matrix whose elements are determined by the shunt admittances of the lines that have a current measurement. The following rule is used to populate the matrix. If the m-th current measurement leaves the n-th bus, then the (m,n)-th element of 𝐘_s is the shunt admittance of the line, and all the other elements of 𝐘_s are zero. By following these rules, the constructions of 𝐀, Π, 𝐘_l, and 𝐘_s for the six-bus system are illustrated in Fig. <ref>(b).

Given the above definitions, the linear state equation in (<ref>) can be further extended to the general linear state equation with N buses, L voltage measurements, denoted by 𝐯∈ℂ^L, and M current measurements, denoted by 𝐢∈ℂ^M, as follows <cit.>: [ 𝐯; 𝐢 ]_≜ 𝐳= [Π; 𝐘_l𝐀 + 𝐘_s ]_≜ 𝐇𝐱, where 𝐳∈ℂ^L+M denotes a vertical concatenation of the set of voltage and current phasor measurements, 𝐱∈ℂ^N is the complex system state, and H∈ℂ^(L+M)× N is a topology matrix (also referred to as the measurement matrix in a general linear system).[Using slight modifications, the system model in (<ref>) can easily be extended to three-phase power systems <cit.>. Each element of H is modified as follows. Elements “1” and “0” in Π and 𝐀 are replaced with a 3 × 3 identity matrix and a 3 × 3 null matrix, respectively. Each diagonal element of 𝐘_l is replaced with a 3 × 3 admittance structure, whereas the off-diagonal elements become 3 × 3 zero matrices. Finally, each nonzero element of 𝐘_s is replaced with a 3 × 3 admittance structure, and the remaining elements become 3 × 3 zero matrices.] Considering again the fictitious six-bus system presented earlier, the full system state for this system is also provided in Fig. <ref>(c) for ease of understanding. Defining P≜ L+M and accounting for the measurement error in the linear state equation, (<ref>) then becomes[As described in (<ref>), the considered system model is expressed as 𝐲 =𝐇𝐱 + 𝐞 because we aim to estimate 𝐱. Thus, 𝐇 should be at least a square matrix or describe an overdetermined system. In this case, P = (L+M) ≥ N.] 𝐲 =𝐇𝐱 + 𝐞 = 𝐳 + 𝐞, where 𝐲∈ℂ^P is the raw measurement vector of the voltage and current phasors, 𝐳 = 𝐇𝐱∈ℂ^P is also referred to as the noiseless measurement vector, and 𝐞∈ℂ^P is the measurement error, in which each entry is modeled as an independent and identically distributed (i.i.d.) complex Gaussian random variable with zero mean and variance σ^2.
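As an illustration of these construction rules, the sketch below assembles 𝐀, Π, 𝐘_l, 𝐘_s, and 𝐇 for a six-bus example; the line list, admittance values, and metering layout are illustrative assumptions, not the actual values of Fig. <ref>.

```python
# Sketch of assembling H = [Pi; Y_l A + Y_s] from the incidence rules above.
# The line list, admittances, and metering layout are assumed for illustration.
import numpy as np

N = 6
# (from_bus, to_bus, series admittance, shunt admittance at metered end)
lines = [(1, 2, 2 - 4j, 0.05j), (2, 3, 1 - 3j, 0.04j), (2, 4, 1 - 2j, 0.03j),
         (4, 5, 2 - 5j, 0.05j), (4, 6, 1 - 4j, 0.02j)]
pmu_buses = [1, 5, 6]
M, L = len(lines), len(pmu_buses)

A = np.zeros((M, N)); Yl = np.zeros((M, M), complex); Ys = np.zeros((M, N), complex)
for m, (i, j, y_ij, y_sh) in enumerate(lines):
    A[m, i - 1], A[m, j - 1] = 1.0, -1.0      # current leaves bus i toward bus j
    Yl[m, m] = y_ij                            # series admittance of metered line
    Ys[m, i - 1] = y_sh                        # shunt admittance at metered end
Pi = np.zeros((L, N))
for l, b in enumerate(pmu_buses):
    Pi[l, b - 1] = 1.0                         # voltage measurement at bus b

H = np.vstack([Pi, Yl @ A + Ys])               # (L+M) x N topology matrix
print(H.shape)                                 # (8, 6)
```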
§.§ Data Reduction and Motivation
In reality, all of the measurements must be quantized before being transmitted to the DPCC for processing. For example, modern SCADA systems are equipped with analog-to-digital devices that convert the measurements into binary values (the usual word lengths are 12 to 16 bits). To achieve this, the measurements 𝐲= {y_μ}_μ =1^P are processed by a complex-valued quantizer in the following componentwise manner: ȳ = {ȳ_μ}_μ =1^P= 𝒬_ℂ(𝐲) = {𝒬_ℂ(y_μ) }_μ =1^P, where ȳ={ȳ_μ}_μ =1^P is the quantized version of 𝐲={y_μ}_μ =1^P, and each complex-valued quantizer 𝒬_ℂ(·) is defined as ȳ_μ = 𝒬_ℂ(y_μ) ≜𝒬(ℜ{y_μ}) + j𝒬(ℑ{y_μ}). This means that, for each complex-valued quantizer, two real-valued quantizers separately quantize the real and imaginary parts of the measurement data. Here, the real-valued quantizer 𝒬(·) is a B-bit midrise quantizer <cit.> that maps a real-valued input to one of 2^B disjoint quantization regions, which are defined as ℛ_1=(-∞,r_1], ℛ_2=(r_1,r_2], …, ℛ_b=(r_b-1,r_b], …, ℛ_2^B=(r_2^B-1,∞), where -∞<r_1 < r_2<⋯<r_2^B-1<∞. All the regions, except for ℛ_1 and ℛ_2^B, exhibit equal spacing with increments of Δ. In this case, the boundary point of ℛ_b is given by r_b = ( -2^B-1 + b )Δ, for b=1,…, 2^B-1. Thus, if a real-valued input falls in the region ℛ_b, then the quantization output is r_b-Δ/2, that is, the midpoint of the quantization region in which the input lies. When the DPCC receives the quantized measurement vector ȳ, it can perform state estimation using the linear minimum mean square error (LMMSE) method: 𝐱^𝖫𝖬𝖬𝖲𝖤 = ( H^𝖧 H + σ^2𝐈)^-1 H^𝖧ȳ.

As can be observed, the accuracy of the LMMSE state estimator highly depends on the quantized measurements ȳ. A relatively high-resolution quantizer must be employed in the meter device to maintain high-precision measurement data and therefore prevent the LMMSE performance from being degraded by lower-resolution measurements. However, this is unfortunately accompanied by a significant increase in the amount of data for transmission and processing. This unusual trend of increasing data motivates the need for a data-reduction solution. To reduce the amount of high-precision measurement data, we propose quantizing and representing partial measurements using a very short word length (e.g., 1-6 bits), instead of adopting a higher number of quantization bits to represent all the measurements. In this way, a more efficient use of the available bandwidth can be achieved. However, lower-resolution measurements tend to degrade the state estimation performance. Moreover, quantized measurements with different resolutions require a proper design of the data fusion process to improve the state estimation performance. Given the above problems, we develop in the next section a new framework based on Bayesian belief inference to incorporate the quantized measurements from meter devices employing different-resolution quantizers to obtain an optimal state estimate.

§ STATE ESTIMATION ALGORITHM
§.§ Theoretical Foundation and Factor Graph Model
The objective of this work is to estimate the system state 𝐱={x_i}_i=1^N from the quantized measurement vector ȳ and the knowledge of matrix H using the minimum mean square error (MMSE) estimator. A well-known fact is that the Bayesian MMSE inference of x_i is equal to the posterior mean,[In what follows, we will derive the posterior mean and variance based on the MMSE estimation.] that is, x_i^𝖬𝖬𝖲𝖤 = ∫x_i𝒫(x_i| H ,ȳ) d x_i , ∀ i, where 𝒫(x_i|𝐇,ȳ) is the marginal posterior distribution of the joint posterior distribution 𝒫(𝐱|𝐇,ȳ).
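Throughout the development that follows, the data model can be kept concrete with a small numerical testbed; the sketch below implements the B-bit midrise quantizer componentwise and feeds the quantized measurements to the LMMSE baseline of (<ref>), where the random 𝐇, the state statistics, and the step size Δ are illustrative stand-ins for an actual grid topology.

```python
# B-bit midrise quantization per real dimension, then the LMMSE baseline.
# H, the state statistics, and Delta are illustrative assumptions.
import numpy as np

def midrise(u, B, delta):
    # Map to bin midpoints r_b - delta/2; outer bins clipped to the
    # outermost midpoints (the two unbounded regions R_1 and R_{2^B}).
    idx = np.clip(np.floor(u / delta) + 2 ** (B - 1), 0, 2 ** B - 1)
    return (idx - 2 ** (B - 1)) * delta + delta / 2.0

def quantize_complex(y, B, delta):
    return midrise(y.real, B, delta) + 1j * midrise(y.imag, B, delta)

rng = np.random.default_rng(0)
P, N, sigma2 = 76, 69, 1e-4
H = rng.normal(size=(P, N)) + 1j * rng.normal(size=(P, N))
x = 1.0 + 0.05 * (rng.normal(size=N) + 1j * rng.normal(size=N))
y = H @ x + np.sqrt(sigma2 / 2) * (rng.normal(size=P) + 1j * rng.normal(size=P))

yq = quantize_complex(y, B=6, delta=np.max(np.abs(y)) / 2 ** 5)
x_lmmse = np.linalg.solve(H.conj().T @ H + sigma2 * np.eye(N), H.conj().T @ yq)
print(f"MSE = {np.mean(np.abs(x - x_lmmse) ** 2):.3e}")
```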
According to Bayes' rule, the joint posterior distribution obeys 𝒫(𝐱|𝐇,ȳ) ∝𝒫(ȳ|𝐇,𝐱) 𝒫_𝗈(𝐱), where 𝒫(ȳ| H,𝐱) is the likelihood function, 𝒫_𝗈(𝐱) is the prior distribution of the system state 𝐱, and ∝ denotes that the distribution is to be normalized so that it has a unit integral.[On the basis of Bayes' theorem, (<ref>) is originally expressed as 𝒫(𝐱|𝐇,ȳ) = 𝒫(ȳ|𝐇,𝐱) 𝒫_𝗈(𝐱) /𝒫(ȳ|𝐇), where the denominator 𝒫(ȳ|𝐇) = ∫𝒫(ȳ|𝐇,𝐱) 𝒫_𝗈(𝐱) d𝐱 defines the “prior predictive distribution” of ȳ for a given topology matrix 𝐇 and may be set to an unknown constant. In calculating the density of 𝐱, any function that does not depend on this parameter, such as 𝒫(ȳ|𝐇), can be discarded. Therefore, by removing 𝒫(ȳ|𝐇) from (<ref>), the relationship changes from “equals” to “proportional to.” That is, 𝒫(𝐱|𝐇,ȳ) is proportional to the numerator of (<ref>). However, in discarding 𝒫(ȳ|𝐇), the density 𝒫(𝐱|𝐇,ȳ) loses some properties, such as integration to one over the domain of 𝐱. To ensure that 𝒫(𝐱|𝐇,ȳ) is properly normalized, the symbol ∝ simply means that the distribution should be normalized to have a unit integral.] Given that the entries of the measurement noise vector e are i.i.d. random variables, and under the assumption that the prior distribution of 𝐱 has a separable form, that is, 𝒫_𝗈(𝐱) = ∏_i= 1^N𝒫_𝗈(x_i), (<ref>) can be further factored as 𝒫(𝐱|𝐇,ȳ) ∝∏_μ = 1^P𝒫(ȳ_μ|𝐇,𝐱)∏_i= 1^N𝒫_𝗈(x_i), where 𝒫_𝗈(x_i) is the prior distribution of the i-th element of 𝐱 and 𝒫(ȳ_μ|𝐇,𝐱) describes the μ-th quantized measurement with i.i.d. complex Gaussian noise <cit.>, which can be explicitly represented as follows: 𝒫(ȳ_μ|𝐇,𝐱) = ∫_ȳ_μ-Δ/2^ȳ_μ+Δ/2𝒩_ℂ( y_μ; ∑_i=1^N H_μ i x_i, σ^2)d y_μ, where H_μ i denotes the component of 𝐇 in the μ-th row and i-th column. For the considered problem, the entries of the system state 𝐱 can be treated as i.i.d. complex Gaussian random variables with mean ν_x and variance σ^2_x, so that each prior distribution is 𝒫_𝗈(x_i)=𝒩_ℂ(ν_x,σ^2_x) <cit.>. For brevity, the prior distribution of x_i is characterized by the prior parameters θ_𝗈={ν_x,σ^2_x}.

The decomposition of the joint distribution in (<ref>) can be well represented by a factor graph 𝒢=(𝒱,ℱ,ℰ), where 𝒱={x_i}_i=1^N is the set of unobserved variable nodes, ℱ={𝒫(ȳ_μ|𝐇,𝐱)}_μ=1^P is the set of factor nodes, where each factor node ensures (in probability) the condition ȳ_μ =𝒬_ℂ( ∑_i H_μ i x_i + e_μ), and ℰ denotes the set of edges. Specifically, the edges indicate the dependencies between factor nodes and variable nodes; that is, an edge between variable node x_i and factor node 𝒫(ȳ_μ|𝐇,𝐱) indicates that the factor function 𝒫(ȳ_μ|𝐇,𝐱) depends on x_i. Fig. <ref> provides a factor graph representation for the fictitious six-bus system shown in Fig. <ref>. Given the factor graph representation, message-passing-based algorithms, such as belief propagation (BP) <cit.>, can be applied to compute 𝒫(x_i|𝐇,ȳ) approximately.
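Note that each factor 𝒫(ȳ_μ|𝐇,𝐱) is cheap to evaluate: because the real and imaginary parts are quantized independently, the likelihood (<ref>) factors into Gaussian CDF differences per real dimension (each of variance σ²/2). A small sketch with illustrative numbers:

```python
# The complex-Gaussian integral over a quantization bin factors into a
# product of Gaussian CDF differences for the real and imaginary parts.
import numpy as np
from scipy.stats import norm

def bin_likelihood(y_bar, z, sigma2, delta):
    s = np.sqrt(sigma2 / 2.0)                       # per-dimension std
    def side(c_bar, c_z):
        return (norm.cdf((c_bar + delta / 2 - c_z) / s)
                - norm.cdf((c_bar - delta / 2 - c_z) / s))
    return side(y_bar.real, z.real) * side(y_bar.imag, z.imag)

z = 0.93 + 0.41j                 # example noiseless value sum_i H_ui x_i
y_bar = 0.875 + 0.375j           # midrise bin midpoints for Delta = 0.25
print(f"P(y_bar | z) = {bin_likelihood(y_bar, z, sigma2=1e-2, delta=0.25):.3f}")
```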
Specifically, BP passes the following “messages,” which denote probability distribution functions, along the edges of the graph <cit.>: ℳ_i →μ^(t+1)(x_i) ∝𝒫_𝗈(x_i) ∏_γ∈ℒ(i)∖μℳ_γ→ i^(t)(x_i), ℳ_μ→ i^(t+1)(x_i) ∝∫∏_k∈ℒ(μ)∖ i dx_k[𝒫(ȳ_μ|𝐇,𝐱)∏_k∈ℒ(μ)∖ iℳ_k →μ^(t)(x_k)], where superscript t indicates the iteration number, ℳ_i →μ(x_i) is the message from the i-th variable node to the μ-th factor node, ℳ_μ→ i(x_i) is the message from the μ-th factor node to the i-th variable node, ℒ(i) is the set of factor nodes that are neighbors of the i-th variable node, and ℒ(μ) is the set of variable nodes that are neighbors of the μ-th factor node. Then, the approximate marginal distribution is computed according to 𝒫^(t)(x_i|𝐇,ȳ) ∝𝒫_𝗈(x_i) ∏_μ∈ℒ(i)ℳ_μ→ i^(t)(x_i).

§.§ EM-assisted GAMP for the State Estimation Problems
However, according to <cit.>, BP remains computationally intractable for large-scale problems because of the high-dimensional integrals involved and the large number of messages required. Moreover, the prior parameters θ_𝗈 and σ^2 are usually unknown in advance. Fortunately, BP can be simplified into the GAMP algorithm <cit.>, based on the central limit theorem and Taylor expansions, to enhance computational tractability.[More precisely, applying the central limit theorem to the messages yields a Gaussian approximation that can be used to simplify BP from a recursion on functions to a recursion on numbers. On the basis of a series of Taylor expansions, the number of messages can be reduced significantly.] By contrast, the EM algorithm can be applied to learn the prior parameters <cit.>. With the aid of these two algorithms, we develop an iterative method involving the following two phases per iteration for the state estimation problems: First, “swept” GAMP (SwGAMP) <cit.>, a modified version of GAMP, is exploited to estimate 𝐱. Second, the EM algorithm is applied to learn the prior parameters from the data at hand. The stepwise implementation procedure of the proposed state estimation scheme, referred to as EMSwGAMP, is presented in Algorithm <ref>. Considering space limitations, a detailed derivation of (complex) GAMP and EM is omitted from this paper; for the details, refer to <cit.>. In the following, we provide several explanations of the lines of Algorithm <ref> to ensure a better understanding of the proposed scheme.

In Algorithm <ref>, x_i^(t) denotes the estimate of the i-th element of 𝐱 in the t-th iteration and τ_i^(t) can be interpreted as an approximation of the posterior variance of x_i^(t); these two quantities are initialized as x_i^(0) = 1 and τ_i^(0)=1, respectively. For each factor node, we introduce two auxiliary variables ω_μ^(t) and ϱ_μ^(t), given in Lines 9 and 8 of Algorithm <ref>, describing the current mean and variance estimates of the μ-th element of 𝐳, respectively. The initial conditions ω_μ^(0) and ϱ_μ^(0) are specified in Lines 4 and 3 of Algorithm <ref>, respectively. As stated previously, z_μ is the μ-th element of the noise-free measurement vector 𝐳, that is, z_μ = ∑_i H_μ i x_i.
Therefore, according to the derivations in <cit.>, z_μ can be approximated as a Gaussian random variable whose mean and variance are given in Lines 9 and 8 of Algorithm <ref>, respectively, and its posterior distribution given the quantized measurement ȳ_μ is evaluated with respect to the following expression: 𝒫(z_μ|ȳ_μ) ∝∫_ȳ_μ-Δ/2^ȳ_μ+Δ/2𝒩_ℂ(y_μ; z_μ, σ^2)𝒩_ℂ(z_μ; ω_μ^(t), ϱ_μ^(t))d y_μ. In (<ref>), the unquantized measurement y_μ is integrated over the quantization bin associated with ȳ_μ, which accounts for the quantization noise, while z_μ is treated as a Gaussian random variable with mean ω_μ^(t) and variance ϱ_μ^(t). Finally, the messages from the factor nodes to the variable nodes are reduced to a simple message parameterized by s_μ and ζ_μ, given in Lines 12 and 13 of Algorithm <ref>. Therefore, we refer to the messages {s_μ,ζ_μ} as measurement updates.

Similarly, for the variable nodes, we also introduce two auxiliary variables R_i^(t) and (Σ_i^(t))^2, given in Lines 18 and 17 of Algorithm <ref>, describing the current mean and variance estimates of the i-th element of 𝐱 without considering the prior information of x_i. Then, adding the prior information of x_i, that is, 𝒫_𝗈(x_i)=𝒩_ℂ(ν_x,σ^2_x), to the message updates, the posterior mean and variance of x_i are given in Lines 19 and 20 of Algorithm <ref>, respectively, which are evaluated with respect to the following expression: 𝒫(x_i;x_i^(t),τ_i^(t))∝𝒩_ℂ(x_i ; ν_x, σ^2_x) 𝒩_ℂ(x_i; R_i^(t), (Σ_i^(t))^2). Here, x_i^(t) (i.e., the 𝔼[·] computed in Line 19 of Algorithm <ref>) and τ_i^(t) (i.e., the 𝕍𝔸ℝ[·] computed in Line 20 of Algorithm <ref>) can easily be obtained using the standard formulas for Gaussian distributions as <cit.> x_i^(t)= R_i^(t) + (Σ_i^(t))^2 /((Σ_i^(t))^2 + σ^2_x)(ν_x - R_i^(t)), τ_i^(t)= (Σ_i^(t))^2 σ^2_x/((Σ_i^(t))^2 + σ^2_x). Manoel et al. <cit.> slightly modified the update scheme of GAMP, where some quantities are updated sequentially rather than in parallel, to improve the stability of GAMP.[Empirical studies demonstrate that GAMP with these slight modifications not only exhibits good convergence performance but is also more robust to difficult measurement matrices H than the original GAMP.] Specifically, ∑_iH_μ ix_i^(t-1) and ϱ_μ^(t) are recomputed as sweep updates over i within a single iteration step. Lines 22-24 of Algorithm <ref> are the core steps that perform the so-called sweep (or reordering) updates. In brief, we refer to the messages {x_i,τ_i} as variable updates and to the messages {s_μ,ζ_μ} as measurement updates of the SwGAMP algorithm. One iteration of the SwGAMP algorithm involves the implementation of these updates together with the estimation of the system state 𝐱.

In the first phase of Algorithm <ref>, the prior parameters θ_𝗈= {ν_x,σ^2_x} are treated as known, although they may be unknown in practice. Thus, the second phase of the proposed algorithm adopts the EM algorithm to learn the prior parameters θ_𝗈 on the basis of the quantities acquired in the first phase. The EM algorithm is a general iterative method for likelihood optimization in probabilistic models with hidden variables. In our case, the EM updates are expressed in the following form <cit.>: θ_𝗈^ new = arg max_θ_𝗈𝔼{ln𝒫(𝐱,ȳ;θ_𝗈) }, where the expectation is taken over the posterior probability of 𝐱 conditioned on θ_𝗈 and ȳ. Following steps similar to those in <cit.>, we can derive a set of EM-based update equations for the hyperparameters, that is, the prior information of the system states (i.e., ν_x and σ^2_x) that should be inferred. The detailed EM updates for the hyperparameters are provided in Lines 25 and 26 of Algorithm <ref>, respectively.
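To illustrate the variable-node step, the sketch below combines the Gaussian prior with the message 𝒩_ℂ(R_i, (Σ_i)²) as in (<ref>)-(<ref>) and then refreshes (ν_x, σ²_x) from the posterior moments; the refresh shown is a standard EM update for a Gaussian prior and stands in for Lines 25 and 26, whose exact form is given in Algorithm <ref>.

```python
# Variable-node update (Gaussian prior x Gaussian message), followed by an
# EM-style refresh of the prior parameters. This isolates two steps of
# Algorithm 1; it is a sketch, not the full EMSwGAMP loop.
import numpy as np

def variable_update(R, Sig2, nu_x, sx2):
    x_hat = R + Sig2 / (Sig2 + sx2) * (nu_x - R)   # posterior mean
    tau = Sig2 * sx2 / (Sig2 + sx2)                # posterior variance
    return x_hat, tau

rng = np.random.default_rng(4)
N, nu_x, sx2 = 69, 1.0 + 0.0j, 0.01
x_true = nu_x + np.sqrt(sx2 / 2) * (rng.normal(size=N) + 1j * rng.normal(size=N))
Sig2 = np.full(N, 0.02)
R = x_true + np.sqrt(Sig2 / 2) * (rng.normal(size=N) + 1j * rng.normal(size=N))

x_hat, tau = variable_update(R, Sig2, nu_x, sx2)
# EM-style M-step for a Gaussian prior, using the posterior moments
nu_new = x_hat.mean()
sx2_new = np.mean(np.abs(x_hat - nu_new) ** 2 + tau)
print(f"nu_x -> {nu_new:.3f}, sigma_x^2 -> {sx2_new:.4f}")
```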
Notably, the quantities {R_i^(t)}_i=1^N, {(Σ_i^(t))^2}_i=1^N, {z_μ^(t)}_μ=1^P, and {ς_μ^(t)}_μ = 1^P are readily available after running the SwGAMP algorithm in the first phase.

Remark 3.1 (Calculating Lines 10 and 11 of Algorithm <ref> with a high-resolution representation of the measured data): In modern SCADA systems, each measurement is quantized and represented using a word length of 12 (or 16) bits. With such a high-precision representation of the measurement data, the error between the quantized value ȳ_μ and the actual value y_μ is negligible, that is, ȳ_μ≃ y_μ. In this case, (<ref>) can be rewritten as follows: 𝒫(z_μ|ȳ_μ)∝𝒩_ℂ(ȳ_μ; z_μ, σ^2) 𝒩_ℂ(z_μ; ω_μ^(t), ϱ_μ^(t)). Then, the moments z_μ^(t) and ς_μ^(t) can be easily obtained using standard formulas for Gaussian distributions, as follows <cit.>: z_μ^(t)= ω_μ^(t) + ϱ_μ^(t)/(ϱ_μ^(t) + σ^2)(ȳ_μ -ω_μ^(t)), ς_μ^(t)= ϱ_μ^(t)σ^2/(ϱ_μ^(t) + σ^2).

Remark 3.2 (Calculating Lines 10 and 11 of Algorithm <ref> under the “quantized” scenario): When the quantization error is nonnegligible, particularly at coarse quantization levels, (<ref>) is no longer valid because using ȳ_μ to approximate y_μ results in severe performance degradation. In this case, we have to adopt (<ref>) to determine the conditional mean z_μ^(t) and conditional variance ς_μ^(t), which can be obtained as follows: z_μ^(t)= ∫_ȳ_μ-Δ/2^ȳ_μ+Δ/2 y_μ𝒩_ℂ(y_μ;ω_μ^(t), σ^2+ϱ_μ^(t))d y_μ/∫_ȳ_μ-Δ/2^ȳ_μ+Δ/2𝒩_ℂ(y_μ;ω_μ^(t), σ^2+ϱ_μ^(t))d y_μ, ς_μ^(t)= ∫_ȳ_μ-Δ/2^ȳ_μ+Δ/2 | y_μ - z_μ^(t) |^2 𝒩_ℂ(y_μ;ω_μ^(t), σ^2+ϱ_μ^(t))d y_μ/∫_ȳ_μ-Δ/2^ȳ_μ+Δ/2𝒩_ℂ(y_μ;ω_μ^(t), σ^2+ϱ_μ^(t))d y_μ. Explicit expressions of (<ref>) and (<ref>) are provided in <cit.>.

Remark 3.3 (Stopping criteria): The algorithm can be terminated either when a predefined number of iterations is reached or when it converges in the relative difference of the norm of the estimate of 𝐱, or both. The relative difference is given by the quantity ϵ≜∑_i=1^N |x_i^(t) - x_i^(t-1)|^2.

§ SIMULATION RESULTS AND DISCUSSION
In this section, we evaluate the performance of the proposed EMSwGAMP algorithm for single-phase and three-phase state estimation. The optimal PMU placement issue is not included in this study, and we assume that PMUs are placed at terminal buses. In the single-phase state estimation, the IEEE 69-bus radial distribution network <cit.> is used as the test system, where the subset of buses with PMU measurements is denoted by 𝒫_69={1, 27, 35, 46, 50, 52, 67, 69}. A modified version of the IEEE 69-bus radial distribution network, referred to as 69m in this study, is examined to verify the robustness of the proposed algorithm. The system settings of this modified test system are identical to those of the IEEE 69-bus radial distribution network, except that the bus voltages of this test system can vary within a larger range, thereby increasing the load levels of the system. For these two test systems, we have 68 current measurements and 8 voltage measurements. The software toolbox MATPOWER <cit.> is utilized to run the proposed state estimation algorithm for the various cases in the single-phase state estimation. The IEEE 37-bus three-phase system is used as the test system for the three-phase state estimation, where the subset of buses with PMU measurements is denoted by 𝒫_37={1, 10, 15, 20, 29, 35}. In contrast to the single-phase state estimation, the system state of the three-phase estimation is generated from the test system documentation instead of MATPOWER.
We have 105 current measurements and 18 voltage measurements in the 37-bus three-phase system. The prior distributions of the voltage at each bus for the different test systems can be found in Table <ref>. In each estimation, the mean squared error (MSE) of the bus voltage, the MSE of the bus voltage magnitude, and the MSE of the bus voltage phase angle are used as comparison measures, which are expressed as MSE = 1/N∑_i=1^N|x̂_i-x_i|^2, MSE_ magn = 1/N∑_i=1^N(|x̂_i|-|x_i|)^2, and MSE_ phase = 1/N∑_i=1^N[arg(x̂_i)-arg(x_i)]^2, respectively, where x̂_i denotes the estimate of x_i. The LMMSE estimator is tested for comparison. In our implementation, termination of Algorithm <ref> is declared when the corresponding constraint violation is less than ϵ =10^-8. A total of 1,000 Monte Carlo simulations were conducted and evaluated to obtain average results and to analyze the achieved measures. The simulations for computation time were conducted on an Intel i7-4790 computer with a 3.6 GHz CPU and 16 GB RAM. For clarity, the number of measurements quantized to ℬ bits is denoted by 𝒦, where ℬ denotes the number of bits used for quantization.

Table <ref> summarizes the average MSE, MSE_ magn, and MSE_ phase achieved by EMSwGAMP and LMMSE for single-phase state estimation in the various systems. The results show that even under the traditional unquantized setting,[As mentioned in Remark 3.1, when the measured data are represented using a word length of 16 bits, the quantization error is negligible. Such high-precision measurement data are henceforth referred to as unquantized measured data. Therefore, all measurements are quantized with 16 bits in Table <ref>.] EMSwGAMP still outperforms LMMSE because EMSwGAMP exploits the statistical knowledge of the estimated parameters θ_𝗈, which is learned from the data via the second phase of EMSwGAMP, that is, the EM learning algorithm. Table <ref> reveals that the estimates of the system states obtained by EMSwGAMP are close to the true values, which validates the effectiveness of the proposed learning algorithm. From a detailed inspection of Table <ref>, we found that the mean value of the voltage magnitude can be estimated exactly through the EM learning algorithm. Therefore, the average MSE_ magn of EMSwGAMP is better than that of LMMSE. However, the mean value of the voltage phase cannot be estimated accurately by the EM learning algorithm. As a result, the average MSE_ phase of EMSwGAMP is inferior to that of LMMSE.

We consider an extreme scenario where several measurements are quantized to “1” bit, but the others are not, to reduce the amount of transmitted data. The measurements selected for quantization are described in Appendix A. Table <ref> shows the average MSE, MSE_ magn, and MSE_ phase against 𝒦 obtained by EMSwGAMP, where the performance of the LMMSE algorithm with 𝒦= 17 is also included for comparison. The following observations are noted on the basis of Table <ref>: First, when a measurement is quantized with 1 bit, we only know whether the measurement is positive or not, so most of the information about the system is lost. Hence, as expected, increasing 𝒦 naturally degrades the average MSE performance because more information is lost. However, the achieved performance of the 69-bus test system is less sensitive to 𝒦 because the bus voltage variations in this system are small. Thus, the proposed algorithm can easily deal with incomplete data.
Second, for the system with large bus voltage fluctuations, the obtained MSE_ phase performance of the modified 69-bus test system can still reach 10^-3 when 𝒦≤ 17. However, with 𝒦= 17, the LMMSE algorithm exhibits poor performance for both considered test systems and therefore cannot be used in practice. Table <ref> also shows that the proposed algorithm can only achieve reasonable performance with 𝒦= 17. This finding naturally raises the question: How many quantization bits for these 17 measurements are needed to achieve a performance close to that of the unquantized measurements? Therefore, Table <ref> shows the performance of the proposed algorithm using different numbers of quantization bits for the 17 measurements, where the number of bits used for quantization is denoted by ℬ. For ease of reference, the performance of the proposed algorithm with unquantized measurements is also provided. Furthermore, the average running time of the proposed algorithm for Table <ref> is provided in Table <ref>. Table <ref> shows that increasing ℬ improves the state estimation performance. However, as shown in Table <ref>, the required running time also increases with the value of ℬ. Fortunately, the required running time is within 2 s even at ℬ=6. Moreover, if we further increase the quantization bits from ℬ = 6 to ℬ = 7, the performance remains the same, whereas the corresponding running time rapidly increases from 1.90 s to 2.45 s. These findings indicate that (ℬ,𝒦) = (6,17) are appropriate parameters for the proposed framework. Therefore, in the following simulations, we consider the scenario where more than half of the measurements are quantized to 6 bits and the others are unquantized, to further reduce the data transmitted from the measuring devices to the data gathering center.

Table <ref> shows the average performance of the two algorithms with 𝒦=34 and 𝒦=42 for the two test systems. Here, 𝒦 denotes the number of 6-bit quantized measurements, and the performance of EMSwGAMP using only unquantized measurements is also listed for convenient reference. Notably, the proposed EMSwGAMP algorithm significantly outperforms LMMSE, whose performance again deteriorates to an unacceptable level. We also observe that increasing the number of 6-bit quantized measurements from 𝒦=34 to 𝒦=42 results in only a slight performance degradation for EMSwGAMP. Consequently, by using EMSwGAMP, we can drastically reduce the amount of transmitted data without compromising performance. The total number of measurements in the 69-bus test system is 76, where 68 current measurements and 8 voltage measurements originate from the meters and PMUs, respectively. Therefore, if all measurements are quantized with 16 bits for the conventional meters and PMUs, 16 × 76 = 1,216 bits must be transmitted. However, for the proposed algorithm with 𝒦 = 34 (i.e., 34 measurements quantized with 6 bits and 42 measurements quantized with 16 bits), only 34 × 6 + 42 × 16 = 876 bits must be transmitted. In this case, the transmitted data can be reduced by 27.96%. Similarly, for the proposed algorithm with 𝒦 = 42 (i.e., 42 measurements quantized with 6 bits and 34 measurements quantized with 16 bits), the transmitted data can be reduced by 34.53%. In addition, we further discuss the required transmission bandwidth of the proposed framework and the conventional system under the assumption that the meters can update measurements every 1 s.
As defined by IEEE 802.11n, when the data are modulated with quadrature phase-shift keying for a 20 MHz channel bandwidth, the data rate is 21.7 Mbps. Therefore, we can approximate the transmission rate as 1.085/Hz/s. For the proposed algorithm with 𝒦 = 34 and ℬ = 6 (i.e.,876 bits should be transmitted), the required transmission bandwidth is 808 Hz. However, the required transmission bandwidth for the conventional system (i.e., 1,216 bits should be transmitted) is 1, 121 Hz. In this case, the proposed architecture can also reduce the transmission bandwidth by 27.83%. Notably, this study only focuses on the reduction of the transmission data. However, references that specifically discuss the smart meter data transmission system are few (e.g., <cit.>), which are not considered in this study. Further studies can expand the scope ofthe present workto include these transmission mechanism to provide more efficient transmission framework. Finally, we evaluate the performance of the proposed EMSwGAMP algorithm for three-phase state estimation. The above simulation results show that almost half of the measurements can be represented with low-precision. Hence, in this test system, 𝒦=51 measurements are quantized with 6-bit. Therefore, Table <ref> shows the average performance of two algorithms with 𝒦=51 for IEEE 37-bus three-phase system. The performance of EMSwGAMP using only unquantized measurements is also included for ease of reference. Table <ref> shows that EMSwGAMP still outperformed LMMSE but only with a slight degradation as compared to the unquantized result. The proposed EMSwGAMP algorithm can reduce transmission data by 25.91% compared to the high-precision measurement data. Hence, the proposed algorithm can be applied not only to a single-phase but also to a three-phase system. Most importantly, the proposed algorithm can also decrease the amount of data required to be transmitted and processed.§ CONCLUSIONWe first proposed a data reduction technique via coarse quantization of partial uncensored measurements and then developed a new framework based on a Bayesian belief inference to incorporate quantization-caused measurements of different qualities to obtain an optimal state estimation and reduce the amount of data while still incorporating different quality of data. The simulation results indicated that the proposed algorithm performs significantly better than other linear estimates, even for a case scenario in which more than half of measurements are quantized to 6 bits. This finding verifies the effectiveness of the proposed scheme.§ HOW THE MEASUREMENTS ARE BEING PICKED FOR QUANTIZATIONFor ease of explanation, a 69-bus test system is provided in Fig. 4, where the subset of buses ℳ={1,2,…,27}, called the main chain of the system, plays an important role for estimating the system states. Therefore, the measurements from the side chain of the system are selected for quantization. In addition, the numbers of the quantized measurements considered in this study are 𝒦 = 2, 𝒦=4, 𝒦 = 17, 𝒦 = 19, 𝒦 = 23, 𝒦 = 27, 𝒦=34, and 𝒦=42. 
More specifically, if 𝒦 = 2, the current measurements from the subset of buses {12,68,69} are selected for quantization; if 𝒦 = 4, the current measurements from the subset of buses {12,68,69} and {11,66,67} are selected for quantization; if 𝒦 = 17, the current measurements from the subset of buses {12,68,69}, {11,66,67} and {9,53,54,…,65} are being picked for quantization; if 𝒦 = 19, the current measurements from the subset of buses {12,68,69}, {11,66,67}, {8,51,52} and {9,53,54,…,65} are selected for quantization; if 𝒦 = 23, the current measurements from the subset of buses {12,68,69}, {11,66,67}, {8,51,52}, {4,47,48,49,50}, and {9,53,54,…,65} are selected for quantization; if 𝒦 = 27, the current measurements from the subset of buses {12,68,69}, {11,66,67}, {8,51,52}, {3,28,29,…,35}, and {9,53,54,…,65} are selected for quantization; if 𝒦 = 34, the current measurements from the subset of buses {12,68,69}, {11,66,67}, {8,51,52}, {4,47,48,49,50}, {36,38,…,46}, and {9,53,54,…,65} are selected for quantization; if 𝒦 = 42, the current measurements from the subset of buses {12,68,69}, {11,66,67}, {8,51,52}, {4,47,48,49,50}, {3,28,29,…,35}, {36,38,…,46}, and {9,53,54,…,65} are selected for quantization.IEEEtran10url@samestyle SG-techreport “The smart grid: An introduction,” prepared by Litos Strategic Communication for U.S. Department of Energy, Tech. Rep., Oct. 2008.Huang-12 Y.-F. Huang, S. Werner, J. Huang, N. Kashyap, and V. Gupta, “State estimation in electric power grids: meeting new challenges presented by the requirements of the future grid,” IEEE Signal Process. Mag., vol. 29, no. 5, pp. 33–43, Sep. 2012.Nagasawa-12 K. Nagasawa, C. R. Upshaw, J. D. Rhodes, C. L. Holcomb, D. A. Walling, and M. E. Webber, “Data management for a large-scale smart grid demonstration project in Austin, Texas,” in Proc. ASME Int. Conf. Energy Sustainability, San Diego, CA, Jul. 2012, pp. 1027–1031.abur2004powe A. Abur and A. G. Exposito, Power System State Estimation: Theory and Implementation.1em plus 0.5em minus 0.4emBoca Raton, FL: CRC Press, 2004.Li-14-TPS X. Li, A. Scaglione, and T. H. Chang, “A framework for phasor measurement placement in hybrid state estimation via gauss」newton,” IEEE Trans. Power Syst., vol. 29, no. 2, pp. 824–832, Mar. 2014.Gol-14 M. Göl and A. Abur, “LAV based robust state estimation for systems measured by PMUs,” IEEE Trans. Smart Grid, vol. 5, no. 4, pp. 1808–1814, Jul. 2014.Hurtgen-08 M. Hurtgen and J.-C. Maun, “Advantages of power system state estimation using phasor measurement units,” in Proc. Power Syst. Comput. Conf., Glasgow, Scotland, Jul. 2008, pp. 1–7.Phadke-09 A. G. Phadke, J. S. Thorp, R. F. Nuqui, and M. Zhou, “Recent developments in state estimation with phasor measurements,” in Proc. IEEE Power Syst. Conf. Expo., Seattle, WA, Mar. 2009, pp. 1–7.PMU-09-book A. G. Phadke and J. S. Thorp, Synchronized Phasor Measurements and Their Aplication.1em plus 0.5em minus 0.4emNew York: Springer, 2008.Zhou-06 M. Zhou, V. A. Centeno, J. S. Thorp, and A. G. Phadke, “An alternative for including phasor measurements in state estimators,” IEEE Trans. Power Syst., vol. 21, no. 4, pp. 1930–1937, Nov. 2006.Gol-15 M. Göl and A. Abur, “A hybrid state estimator for systems with limited number of PMUs,” IEEE Trans. Power Syst., vol. 30, no. 3, pp. 1511–1517, May 2015.Alam-14 S. M. S. Alam, B. Natarajan, and A. Pahwa, “Distribution grid state estimation from compressed measurements,” IEEE Trans. Smart Grid, vol. 5, no. 4, pp. 1631–1642, Jul. 2014.Rangan-11-ISIT S. 
Rangan, “Generalized approximate message passing for estimation with random linear mixing,” in Proc. IEEE Int. Symp. Inf. Theory, Saint Petersburg, Russia, Aug. 2011, pp. 2168–2172.Krzakala-12 F. Krzakala, M. Mézard, F. Sausset, Y. Sun, and L. Zdeborová, “Probabilistic reconstruction in compressed sensing: Algorithms, phase diagrams, and threshold achieving matrices,” J. Stat. Mech. Theory Exp., vol. 2012, no. 8, Aug. 2012, Art. ID P08009.SwAMP-2015 A. Manoel, F. Krzakala, E. W. Tramel, and L. Zdeborová, “Swept approximate message passing for sparse estimation,” in Proc. 32nd Int. Conf. Mach. Learn., Lille, France, Jul. 2015, pp. 1123–1132.Vila-13TSP J. P. Vila and P. Schniter, “Expectation-maximization Gaussian-mixture Approximate Message Passing,” IEEE Trans. Sig. Proc., vol. 61, no. 19, pp. 4658–4672, Oct. 2013.Chen-91-TPD T.-H. Chen, M.-S. Chen, K.-J. Hwang, P. Kotas, and E. Chebli, “Distribution system power flow analysis—A rigid approach,” IEEE Trans. Power Del., vol. 6, no. 3, pp. 1146–1152, Jul. 1991.Jones-11 K. D. Jones, “Three-phase linear state estimation with phasor measurements,” Master's thesis, Elect. Comput. Eng. Dept., Virginia Polytech. Inst. State Univ., Blacksburg, VA, USA, May 2011.Proakis-book J. G. Proakis and M. Salehi, Digital Communications, 5th ed.1em plus 0.5em minus 0.4emNew York, USA: McGraw-Hill, 2008.Wen-15 C.-K. Wen et al., “Bayes-optimal joint channel-and-data estimation for massive MIMO with low-precision ADCs,” IEEE Trans. Signal Process., vol. 64, no. 10, pp. 2541–2556, May 2016.Hu-11-CIM Y.  Hu, A.  Kuh, A.  Kavcic, and T.  Yang, “A Belief Propagation Based Power Distribution System State Estimator,” IEEE Comput. Intell. Mag., vol. 6, no. 3, pp. 36–46, Aug. 2011.Pea-book-88 J. Pearl, Probabilistic Reasoning in Intelligent Systems, 2nd ed.1em plus 0.5em minus 0.4emSan Francisco, CA: Kaufmann, 1988.FG-01-IT F. R. Kschischang, B. J. Frey, and H.-A. Loeliger, “Factor graphs and the sum-product algorithm,” IEEE Trans. Inf. Theory, vol. 47, no. 2, pp. 498–519, Feb. 2001.MP-book C. M. Bishop, Pattern Recognition and Machine Learning.1em plus 0.5em minus 0.4emNew York, NY, USA: Springer, 2006.Guo-06-ITW D. Guo and C. C. Wang, “A symptotic mean-square optimality of belief propagation for sparse linear systems,” in Proc. IEEE Inf. Theory Workshop, Chengdu, China, Oct. 2006, pp. 194–198.Wang-14-WCNC S. Wang, Y. Li, and J. Wang, “Low-complexity multiuser detection for uplink large-scale MIMO,” in Proc. IEEE Wireless Commun. Netw. Conf., Istanbul, Turkey, Apr. 2014, pp. 236–241.Jay-15-TVT J.-C. Chen, C.-J. Wang, K.-K. Wong, and C.-K. Wen, “Low-complexity precoding design for massive multiuser MIMO systems using approximate message passing,” IEEE Trans. Veh. Technol., vol. 65, no. 7, pp. 5707–5714, Jul. 2016.Wang-15-ICC S. Wang, Y. Li, and J. Wang, “Large-scale antenna system with massive one-bit iintegrated energy and information receivers,” in Proc. IEEE Int. Conf. Commun., London, UK, Jun. 2015, pp. 2024–2029.Barbier-15 J. Barbier, C. Schülke, and F. Krzakala, “Approximate message-passing with spatially coupled structured operators, with applications to compressed sensing and sparse superposition codes,” J. Stat. Mech. Theory Exp., vol. 2015, no. 5, May 2015, Art. ID P05013.69-bus-data R. Paras, “Load Flow Analysis of Radial Distribution Network using Linear Data Structure,” arXiv preprint arXiv:1403.4702, 2014.MATPOWER R. D. Zimmerman, C. E. Murillo-Sánchez, and R. J. 
Thomas, “MATPOWER steady-state operations, planning and analysis tools for power systems research and education,” IEEE Trans. Power Syst., vol. 26, no. 1, pp. 12–19, Feb. 2011.DN-12-CM D. Niyato and P. Wang, “Cooperative transmission for meter data collection in smart grid,” IEEE Commun. Mag., vol. 50, no. 4, pp. 90–97, Apr. 2012.2016-smartgrid-comm-meter C. Karupongsiri, K. S. Munasinghe, and A. Jamalipour, “A novel communication mechanism for smart meter packet transmission on LTE networks,” in Proc. IEEE Int. Conf. Smart Grid Comm., Sydney, Australia, Nov. 2016, pp. 1–6.
http://arxiv.org/abs/1709.09765v1
{ "authors": [ "Jung-Chieh Chen", "Hwei-Ming Chung", "Chao-Kai Wen", "Wen-Tai Li", "Jen-Hao Teng" ], "categories": [ "cs.IT", "cs.SY", "math.IT" ], "primary_category": "cs.IT", "published": "20170927235833", "title": "State Estimation in Smart Distribution System With Low-Precision Measurements" }
Department of Physics, University of Illinois Urbana-Champaign, Urbana, Illinois 61801, USA[][email protected] Department of Physics, University of Illinois Urbana-Champaign, Urbana, Illinois 61801, USADepartment of Physics, University of Illinois Urbana-Champaign, Urbana, Illinois 61801, USADepartment of Physics, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA Department of Physics, University of Illinois Urbana-Champaign, Urbana, Illinois 61801, [email protected] Department of Physics, University of Illinois Urbana-Champaign, Urbana, Illinois 61801, USA Alloys ofand((Bi_1-xSb_x)_2Te_3) have played an essential role in the exploration of topological surface states, allowing us to study phenomena that would otherwise be obscured by bulk contributions to conductivity. Thin films of these alloys have been particularly important for tuning the energy of the Fermi level, a key step in observing spin-polarized surface currents and the quantum anomalous Hall effect. Previous studies reported the chemical tuning of the Fermi level to the Dirac point by controlling the Sb:Bi composition ratio, but the optimum ratio varies widely across various studies with no consensus. In this work, we use scanning tunneling microscopy and Landau level spectroscopy, in combination with X-ray photoemission spectroscopy to isolate the effects of growth factors such as temperature and composition, and to provide a microscopic picture of the role that disorder and composition play in determining the carrier density of epitaxially grown (Bi,Sb)_2Te_3thin films. Using Landau level spectroscopy, we determine that the ideal Sb concentration to place the Fermi energy to within a few meV of the Dirac point is x∼ 0.7. However, we find that the post- growth annealing temperature can have a drastic impact on microscopic structure as well as carrier density. In particular, we find that when films are post-growth annealed at high temperature, better crystallinity and surface roughness are achieved; but this also produces a larger Te defect density, adding n-type carriers. This work provides key information necessary for optimizing thin film quality in this fundamentally and technologically important class of materials. Defect Role in the Carrier Tunable Topological Insulator (Bi_1-xSb_x)_2Te_3 Thin Films Vidya Madhavan December 30, 2023 ======================================================================================§ INTRODUCTIONThe V-VI semiconductor class of compounds contain several prototypical 3D topological insulators (TI), namely ,and , which possess gapless spin-momentum locked surface states and an insulating bulk <cit.>. While the existence of topological surface states has been verified <cit.>, difficulties remain in isolating the effects of the topological states from the bulk contribution to the total conductance, which is required for further applications in electronic devices. This is attributed to inherent bulk conductivity caused by intrinsic defect doping in the binary compounds <cit.>. Currently, the best way to reduce bulk carriers inis to alloy it with<cit.>. The rationale for this is that mixing , which is plagued mostly by n-type Te vacancies <cit.>, with , which mostly contains p-type anti-site impurities <cit.>, using appropriate compositional ratios, will result in a net zero bulk carrier density. 
This method has led to the successful observations of the quantum anomalous Hall effect (QAHE) <cit.> and chiral Majorana modes <cit.> in the thin films.Despite these successes, a fine tuning of the chemical potential in such (Bi_1-xSb_x)_2Te_3 (BST) thin films continues to remain an issue<cit.>, hindering further progress. The QAHE observation, for instance, is limited to very low temperatures (<1K), a problem attributed to doping inhomogeneities and local chemical potential variations <cit.>. In improving film quality and performance, it is notoriously challenging to isolate and distinguish the effects of the growth parameters and composition on the properties of the thin films. For example, low temperature growth can result in lowered surface adatom mobility and create rougher films. On the other hand, growth of films at higher temperatures can result in improper nucleation<cit.>. One technique to overcome this difficulty is to grow films at lower substrate temperatures and then anneal after growth at a higher temperature <cit.>. During post-annealing, remaining adatoms as well as atoms from smaller islands are expected to become more mobile, creating larger islands and resulting in flatter films. However, post-annealing at high temperatures may also result in re-evaporation of Te and thus change the electronic properties<cit.>.In this study, we use bulk and nanoscale characterization techniques to obtain the nanoscale morphology as well as the electronic structure of BST thin films. The composition and nanoscale structure of the thin films were determined with X-ray photoemission spectroscopy and scanning tunneling microscopy/spectroscopy (STM/S), and the powerful technique of Landau level spectroscopy was used to accurately determine the electronic properties, including the energy of the Dirac node relative to the Fermi energy. Our goal in this study is to identify the relationship between composition, growth conditions and film quality, and to particularly determine the effect of post-growth annealing on the level of disorder as well as the position of the Fermi energy with respect to the bulk bands.§ EXPERIMENTAL DETAILSBST thin films were grown using a homebuilt molecular beam epitaxy (MBE) system. The films were grown on c-plane oriented Al_2O_3 substrates, which were baked at 1000 prior to insertion to the MBE system with a base pressure of 4× 10^-10 Torr. The growth was done by co-evaporation of Bi (99.9999%), Sb (99.9999%) and Te (99.9999%) from standard single filament (Bi,Sb) and dual filament (Te) effusion cells. The substrates were held at a temperature of 180-200 during the growth. Typical growth rates used were 0.3-0.4 nm/min. In this letter, we compare and contrast the properties of two samples that we label sample-L and sample-H. The flux ratios were Sb:Bi = 1.36:1 and Te:(Sb,Bi)=2.1:1 for sample-L, and Sb:Bi=1.57:1 and Te:(Sb,Bi)=2.2:1 for sample-H.Directly after growth, the films were transferred to a low temperature scanning tunneling microscope using a custom vacuum shuttle system to prevent environmental exposure. STM/S measurements were performed at 4K. In the measurements, electrochemically etched tungsten tips were used. The tunneling spectra were acquired using the lock-in technique with ac modulation about 3mV at 987.5Hz.§ X-RAY PHOTOEMISSION SPECTROSCOPY RESULTSFirst, we show the RHEED patterns of two nominally similar thins films of BST, post-growth annealed at two different temperatures, 220 (sample-L) or 300 (sample-H) in Fig. <ref>(a, b). 
Both RHEED images indicate crystalline two-dimensional film growth. The slightly sharper streaks of the RHEED patterns of sample-H potentially signify better crystallinity. This will be confirmed later when we discuss STM data on these samples.We then determine values for Sb:Bi composition ratio, x, of our films. The composition was controlled by setting the effusion flux ratio of Bi and Sb during growth. However, setting a particular flux ratio is not sufficient to determine the actual composition. So, we determine the chemical composition using ex-situ X-ray photoemission spectroscopy (XPS). A Physical Electronics PHI 5400 instrument with a Mg source was used to obtain X-ray photoemission spectra as shown in Fig. <ref> (c, d). Quantitative information was obtained by analyzing Te 4d, Bi 4f and Sb 4d emission lines. The spectra were calibrated using 1s emission line of carbon, and the peaks were fit with a Gaussian-Lorentzian product with the 30% contribution of the Lorentzian factor (GL(30)). Quantitative information was obtained by integrating the fitted signal for all the chemical states of the chosen emission lines for each of the three elements and normalizing it with the instrument independent relative sensitivity factor (RSF), which were taken from the standard elemental library of the XPS analysis software CasaXPS. The resulting compositional fractions were x=0.68 for sample-L and x=0.71 for sample-H. For Te compositions, the Te:(Bi,Sb) ratios were 1.51 for the sample-L and 1.16 for the sample-H, indicating a Te deficiency in the latter.§ X-RAY DIFFRACTION RESULTS To verify the absence of other crystal phases, ex-situ X-ray diffraction (XRD) measurements were performed. 2θ-ω coupled scans and rocking curves were obtained for both samples using a Phillips Xpert XRD system. A Cu K-alpha source was used and alignment was performed using the Al_2O_3 (006) diffraction peak. Both 2θ-ω scans as shown in Fig. <ref>(a,b), reveal only c-plane reflections. Rocking curves, Fig. <ref>(c,d), were performed about the alignment with the BST (0015) peak yielding a full-width half maximum of 0.11 and 0.13 for the sample-L and -H respectively. Taken together, the c-plane orientation, rocking curves and RHEED patterns indicate the correct crystal phase is obtained, and that the Te deficiency measured by XPS should be the result of point defects.§ STM/S RESULTS Large scale (400x400nm) topographic images of sample-L and sample-H are shown in Fig. <ref>(a, c). The step height between terraces is about 1 nm, corresponding to one quintuple layer. The atomic-resolution images are shown as insets and the lattice constant is about 0.435 nm. The topography shows variations at nanometer length scales, which can be attributed to the random Bi/Sb alloying.Unlike the binary compounds<cit.>, it is very difficult to identify individual defects and we can not directly count the number of Te vacancies in the topographies of BST films. One possible reason is that the dominant Te vacancies lie in the middle of the quintuple layer<cit.>, and the random Bi/Sb alloy in the upper layer makes them invisible in the topography.Though the RHEED patterns indicated reasonable crystallinity, grain boundaries and screw dislocations were observed in topographic images of both samples. The microscopic roughness can be characterized by the height of the islands within a certain area. From Fig. 
<ref>(a, c), we find that the root-mean-squared roughness of sample-L is 1.2nm and for sample-H is 0.7nm, indicating that the post-anneal temperature affects roughness at the microscale.Next, we characterize the electronic structure and measure the energy of the Dirac point (DP) with respect to the Fermi level. The dI/dV (r, eV) spectra, which measure the local density of states (LDOS), are shown in Fig. <ref>(b, d). For sample-L, the LDOS is strongly suppressed from -20 meV to 260 meV, resulting in a bulk gap about 280 meV. The positions of DP, determined by the minima of LDOS, are very close to Fermi level for both samples. However, this method is fraught with problems because the tip's density of states can influence the shape of the dI/dV spectra, shifting the minimum away from the Dirac point. Additionally, the contribution of the surface states to the total density of states may be obscured by bulk bands. For example, this happens inbecause the Dirac point is at a lower energy than the top of the bulk valence bands. A more accurate and reliable method to identify the position of the Dirac point is to use Landau level spectroscopy. In the presence of a magnetic field, electrons fall into quantized cyclotron orbits called Landau levels (LL), which appear as peaks in the dI/dV spectra. The energies of these states disperse with respect to magnetic field strength. For massless Dirac fermions, the dispersion is E_n=E_D+sgn(n)v_F√(2eBh|n|) where E_D is the Dirac point energy, v_Fis the Fermi velocity, n is the Landau level index, and B is the field strength. Assuming the usual g-factor of 2, the term resulting from the electron g-factor is negligible and has been ignored in Eq. <ref>. In fact, as we will see later, our data are consistent with this assumption. An important aspect of Eq. <ref> is that we expect to see a non-dispersing peak exactly at the Dirac point, and the energy can be identified with accuracy.dI/dV spectra were acquired along a 15nm line cut for both samples at various perpendicular magnetic fields ranging from 0T to 7.5T. The linecut averaged spectra are shown in Fig. <ref> (a, c). To remove the background, the 0T spectrum was subtracted from spectra at other fields. LL peak positions were obtained by fitting the peaks to a Gaussian distribution then.The resulting LL peak energies are plotted in Fig. <ref> (b, d) with respect to √(nB) which can be converted into momentum. Fitting the peak positions to Eq. <ref> yields a Dirac point energy of 6 mV for sample-L and -48 mV for sample-H.The Fermi velocity obtained from both fits is 4.4× 10^5 m/s. This value is consistent with previous measurements of the Fermi velocity measured forand . To check the unique scattering properties of the topological surface states, and obtain more information on the dispersion, we perform quasiparticle interference (QPI) measurements on sample-L. In QPI measurements, the spatial modulations in differential conductance map are recorded and then Fourier-transformed to extract scattering vectors connecting electronic states in momentum space. In Fig. <ref>(c-d), we summarize the dI/dV(r, eV) maps from 500mV to 800mV, which exhibit pronounced standing wave patterns. The wavelength becomes shorter with increasing energy. These Fourier transforms after symmetrization are shown in Fig. <ref>(f-h), and we observe patterns centered along the ΓM direction of the first Brillouin Zone (red dashed lines). 
This ΓM scattering vector originates from the hexagonally warped Dirac cones with chiral spin texture (shown as dark solid arrows in Fig. <ref>(e)), and backscattering is prohibited under the protection of time-reversal symmetry<cit.>. This measurement offers direct proof of the topological nature of the surface states. In Fig. <ref>(i), we plot the dispersion along the ΓM direction together with the results obtained from LL measurements. To make a direct comparison with the dispersion of surface states in , we show the results obtained in our previous STM study<cit.> (orange spots) and ARPES<cit.> (black dashed lines) in the same plot. As can be seen, the dispersion of our BST films remains mainly unchanged, except that the Fermi level moves ∼130meV lower compared to the pristine .§ DISCUSSION From our measurements, we find that sample-L is very close to the optimum composition as indicated by the position of the Dirac node, which is 6 meV from the Fermi energy. On the other hand, the Fermi level of sample-H is higher than sample-L implying it is more n-doped. Since the Sb:Bi ratio of the two samples is very similar, the difference in doping cannot be attributed to the ratio.In fact, the slightly larger Sb content should in principle make sample-H more p-type. Given our XPS results indicate a Te deficiency in sample-H, we attribute the electron doping in sample-H to Te vacancies which are expected to act as n-type dopants in Bi/Sb rich samples. This implies that the higher annealing temperature used for sample-H caused a re-evaporation of the Te after it had been incorporated during growth.Our results show that defects arising from the film growth conditions and the BST composition affect the Fermi level separately. Therefore, the Fermi level tuning of BST by chemical composition is not simply a matter of setting the Sb:Bi compositional ratio and depends sensitively on the post-anneal temperature. Our findings explain the large variations seen in the optimum Sb:Bi ratio of thin BST films reported by different groups <cit.>. In the absence of substantial Te vacancies, we find that the optimum composition for placing the Fermi energy close to the Dirac point is x=∼ 0.7. Moreover, our findings indicate a path to obtaining ideal samples for the QAHE.hosts a Dirac point clearly in the gap, far from the bulk bands, but is intrinsically p-type. Compensating the p-typeby alloying with n-typehas been used to place the Fermi energy close to the Dirac point, but also effects the Dirac point by moving it closer to the conduction band. However, we demonstrated that post-annealing BST can introduce n-type carriers, which can be used as a parameter to obtain a finer degree of control of the electronic properties of BST films.§ CONCLUSION In summary, we have performed Landau level spectroscopy and quasiparticle interference spectroscopy, in combination with X-ray spectroscopy on BST thin films grown under different conditions. We find that Fermi energy can be placed to within a few meV of the Dirac point with Sb concentration  0.7. However, for the same Sb/Bi ratio, the Fermi level can be tuned to an energy approximately 50meV lower simply by a higher post-annealing temperature. This explains the wide variations seen in the optimum Sb/Bi ratio of the BST films reported before, and provides key information to obtain a finer control of the electronic properties of BST films. We would like to sincerely thank Jim Eckstein for useful discussions, and Richard Haasch for assistance with XPS data acquisition. 
The experiments were carried out in part in the Frederick Seitz Materials Research Laboratory Central Research Facilities, University of Illinois. V.M gratefully acknowledges support from the U. S. Department of Energy (DOE), Scanned Probe Division under award DE-SC0014335 for this work. 27 fxundefined [1]ifx#1fnum [1]#1firstoftwosecondoftwo fx [1]#1firstoftwosecondoftwonoop [0]secondoftworef[1]@startlink#1@href href[1]#1@endlink anitize@url [0]` 12`$12`&12`#12`1̂2`_12`%12 startlink[1] endlink[0]rl [1]href #1 @bib@innerbibempty[Hasan and Kane(2010)]hasan10 author author M. Z. Hasan and author C. L. Kane, 10.1103/RevModPhys.82.3045 journal journal Rev. Mod. Phys. volume 82,pages 3045 (year 2010)NoStop [Qi and Zhang(2011)]qi11 author author X.-L. Qi and author S.-C. Zhang,10.1103/RevModPhys.83.1057 journal journal Rev. Mod. Phys. volume 83, pages 1057 (year 2011)NoStop [Zhang et al.(2009a)Zhang, Liu, Qi, Dai, Fang, andZhang]zhang09 author author H. Zhang, author C.-X. Liu, author X.-L. Qi, author X. Dai, author Z. Fang,and author S.-C. Zhang, http://dx.doi.org/10.1038/nphys1270 journal journal Nat Phys volume 5, pages 438 (year 2009a)NoStop [Chen et al.(2009)Chen, Analytis, Chu, Liu, Mo, Qi, Zhang, Lu, Dai, Fang, Zhang, Fisher, Hussain, and Shen]chen09 author author Y. L. Chen, author J. G. Analytis, author J.-H. Chu, author Z. K. Liu, author S.-K. Mo, author X. L. Qi, author H. J.Zhang, author D. H. Lu, author X. Dai, author Z. Fang, author S. C. Zhang, author I. R. Fisher, author Z. Hussain,and author Z.-X. Shen, 10.1126/science.1173034 journal journal Science volume 325, pages 178 (year 2009)NoStop [Xia et al.(2009)Xia, Qian, Hsieh, Wray, Pal, Lin, Bansil, Grauer, Hor, Cava, andHasan]xia09 author author Y. Xia, author D. Qian, author D. Hsieh, author L. Wray, author A. Pal, author H. Lin, author A. Bansil, author D. Grauer, author Y. S. Hor, author R. J. Cava,and author M. Z. Hasan, http://dx.doi.org/10.1038/nphys1274 journal journal Nat Phys volume 5, pages 398 (year 2009)NoStop [Hsieh et al.(2009)Hsieh, Xia, Qian, Wray, Dil, Meier, Osterwalder, Patthey, Checkelsky, Ong, Fedorov, Lin, Bansil, Grauer, Hor, Cava, andHasan]hsieh09 author author D. Hsieh, author Y. Xia, author D. Qian, author L. Wray, author J. H. Dil, author F. Meier, author J. Osterwalder, author L. Patthey, author J. G.Checkelsky, author N. P.Ong, author A. V. Fedorov, author H. Lin, author A. Bansil, author D. Grauer, author Y. S. Hor, author R. J. Cava,and author M. Z. Hasan, http://dx.doi.org/10.1038/nature08234 journal journal Nature volume 460, pages 1101 (year 2009)NoStop [Hanaguri et al.(2010)Hanaguri, Igarashi, Kawamura, Takagi, and Sasagawa]hanaguri10 author author T. Hanaguri, author K. Igarashi, author M. Kawamura, author H. Takagi,and author T. Sasagawa, 10.1103/PhysRevB.82.081305 journal journal Phys. Rev. B volume 82, pages 081305 (year 2010)NoStop [Cheng et al.(2010)Cheng, Song, Zhang, Zhang, Wang, Jia, Wang, Wang, Zhu, Chen, Ma, He, Wang, Dai, Fang, Xie, Qi, Liu, Zhang, and Xue]cheng10 author author P. Cheng, author C. Song, author T. Zhang, author Y. Zhang, author Y. Wang, author J.-F. Jia, author J. Wang, author Y. Wang, author B.-F. Zhu, author X. Chen, author X. Ma, author K. He, author L. Wang, author X. Dai, author Z. Fang, author X. Xie, author X.-L. Qi, author C.-X.Liu, author S.-C. Zhang,and author Q.-K. Xue, 10.1103/PhysRevLett.105.076801 journal journal Phys. Rev. Lett. 
volume 105, pages 076801 (year 2010)NoStop [Beidenkopf et al.(2011)Beidenkopf, Roushan, Seo, Gorman, Drozdov, Hor, Cava, and Yazdani]beidenkopf11 author author H. Beidenkopf, author P. Roushan, author J. Seo, author L. Gorman, author I. Drozdov, author Y. S.Hor, author R. J. Cava,and author A. Yazdani, http://dx.doi.org/10.1038/nphys2108 journal journal Nat Phys volume 7,pages 939 (year 2011)NoStop [Okada et al.(2011)Okada, Dhital, Zhou, Huemiller, Lin, Basak, Bansil, Huang, Ding, Wang, Wilson, and Madhavan]okada11 author author Y. Okada, author C. Dhital, author W. Zhou, author E. D. Huemiller, author H. Lin, author S. Basak, author A. Bansil, author Y.-B.Huang, author H. Ding, author Z. Wang, author S. D. Wilson,and author V. Madhavan, 10.1103/PhysRevLett.106.206805 journal journal Phys. Rev. Lett. volume 106, pages 206805 (year 2011)NoStop [Scanlon et al.(2012)Scanlon, King, Singh, de la Torre, Walker, Balakrishnan, Baumberger, and Catlow]scanlon12 author author D. O. Scanlon, author P. D. C. King, author R. P. Singh, author A. de la Torre, author S. M. Walker, author G. Balakrishnan, author F. Baumberger,and author C. R. A. Catlow, 10.1002/adma.201200187 journal journal Advanced Materials volume 24, pages 2154 (year 2012)NoStop [Jiang et al.(2012)Jiang, Sun, Chen, Wang, Li, Song, He, Wang, Chen, Xue, Ma, andZhang]JiangDopingPRL author author Y. Jiang, author Y. Y. Sun, author M. Chen, author Y. Wang, author Z. Li, author C. Song, author K. He, author L. Wang, author X. Chen, author Q.-K. Xue, author X. Ma,and author S. B.Zhang, 10.1103/PhysRevLett.108.066809 journal journal Phys. Rev. Lett. volume 108, pages 066809 (year 2012)NoStop [Zhang et al.(2011)Zhang, Chang, Zhang, Wen, Feng, Li, Liu, He, Wang, Chen, Xue, Ma, and Wang]zhang11 author author J. Zhang, author C.-Z. Chang, author Z. Zhang, author J. Wen, author X. Feng, author K. Li, author M. Liu, author K. He, author L. Wang, author X. Chen, author Q.-K. Xue, author X. Ma,and author Y. Wang, http://dx.doi.org/10.1038/ncomms1588 journal journal Nature Communications volume 2, pages 574 (year 2011)NoStop [He et al.(2012)He, Guan, Wang, Feng, Cheng, Chen, Li, andWu]he12 author author X. He, author T. Guan, author X. Wang, author B. Feng, author P. Cheng, author L. Chen, author Y. Li,and author K. Wu,http://dx.doi.org/10.1063/1.4754108 journal journal Applied Physics Letters volume 101, eid 123111 (year 2012)NoStop [He et al.(2015)He, Li, Chen, and Wu]he15 author author X. He, author H. Li, author L. Chen,and author K. Wu, http://dx.doi.org/10.1038/srep08830 journal journal Scientific Reports volume 5, pages 8830 (year 2015)NoStop [He et al.(2013)He, Kou, Lang, Choi, Jiang, Nie, Jiang, Fan, Wang, Xiu, andWang]he13 author author L. He, author X. Kou, author M. Lang, author E. S. Choi, author Y. Jiang, author T. Nie, author W. Jiang, author Y. Fan, author Y. Wang, author F. Xiu,and author K. L. Wang, http://dx.doi.org/10.1038/srep03406 journal journal Scientific Reports volume 3, pages 3406 (year 2013)NoStop [Kong et al.(2011)Kong, Chen, Cha, Zhang, Analytis, Lai, Liu, Hong, Koski, Mo, Hussain, Fisher, Shen, andCui]kong11 author author D. Kong, author Y. Chen, author J. J. Cha, author Q. Zhang, author J. G. Analytis, author K. Lai, author Z. Liu, author S. S. Hong, author K. J. Koski, author S.-K. Mo, author Z. Hussain, author I. R. Fisher, author Z.-X.Shen,and author Y. 
Cui, http://dx.doi.org/10.1038/nnano.2011.172 journal journal Nat Nano volume 6, pages 705 (year 2011)NoStop [Chang et al.(2013)Chang, Zhang, Feng, Shen, Zhang, Guo, Li, Ou, Wei, Wang, Ji, Feng, Ji, Chen, Jia, Dai, Fang, Zhang, He, Wang, Lu, Ma, and Xue]chang13 author author C.-Z. Chang, author J. Zhang, author X. Feng, author J. Shen, author Z. Zhang, author M. Guo, author K. Li, author Y. Ou, author P. Wei, author L.-L. Wang, author Z.-Q. Ji, author Y. Feng, author S. Ji, author X. Chen, author J. Jia, author X. Dai, author Z. Fang, author S.-C.Zhang, author K. He, author Y. Wang, author L. Lu, author X.-C. Ma,and author Q.-K. Xue, 10.1126/science.1234414 journal journal Science volume 340, pages 167 (year 2013)NoStop [He et al.(2017)He, Pan, Stern, Burks, Che, Yin, Wang, Lian, Zhou, Choi, Murata, Kou, Chen, Nie, Shao, Fan, Zhang, Liu, Xia, and Wang]KWangScience17 author author Q. L. He, author L. Pan, author A. L. Stern, author E. C. Burks, author X. Che, author G. Yin, author J. Wang, author B. Lian, author Q. Zhou, author E. S. Choi, author K. Murata, author X. Kou, author Z. Chen, author T. Nie, author Q. Shao, author Y. Fan, author S.-C. Zhang, author K. Liu, author J. Xia,and author K. L. Wang, DOI: 10.1126/science.aag2792 journal journal Science volume 357, pages 294 (year 2017)NoStop [Kellner et al.(2015)Kellner, Eschbach, Kampmeier, Lanius, Mlynczak, Mussler, Hollander, Plucinski, Liebmann, Grutzmacher, Schneider, and Markus]KellnerAPL author author J. Kellner, author M. Eschbach, author J. Kampmeier, author M. Lanius, author E. Mlynczak, author G. Mussler, author B. Hollander, author L. Plucinski, author M. Liebmann, author D. Grutzmacher, author C. M.Schneider,and author M. Markus, http://dx.doi.org/10.1063/1.4938394 journal journal Applied Physics Letters volume 107,eid 251603 (year 2015)NoStop [Mogi et al.(2015)Mogi, Yoshimi, Tsukazaki, Yasuda, Kozuka, Takahashi, Kawasaki,and Tokura]mogi15 author author M. Mogi, author R. Yoshimi, author A. Tsukazaki, author K. Yasuda, author Y. Kozuka, author K. S. Takahashi, author M. Kawasaki,and author Y. Tokura, 10.1063/1.4935075 journal journal Applied Physics Letters volume 107, pages 182401 (year 2015)NoStop [Harrison et al.(2013)Harrison, Li, Huo, Zhou, Chen, and Harris]harrison13 author author S. E. Harrison, author S. Li, author Y. Huo, author B. Zhou, author Y. L. Chen,and author J. S. Harris, 10.1063/1.4803717 journal journal Applied Physics Lettersvolume 102, pages 171906 (year 2013)NoStop [Liu et al.(2015)Liu, Endicott, Stoica, Chi, Clarke, and Uher]liu15 author author W. Liu, author L. Endicott, author V. A. Stoica, author H. Chi, author R. Clarke,and author C. Uher, http://dx.doi.org/10.1016/j.jcrysgro.2014.10.011 journal journal Journal of Crystal Growth volume 410, pages 23(year 2015)NoStop [Dai et al.(2016)Dai, West, Wang, Wang, Kwok, Cheong, Zhang, andWu]weida2016 author author J. Dai, author D. West, author X. Wang, author Y. Wang, author D. Kwok, author S.-W.Cheong, author S. B.Zhang,and author W. Wu, doi.org/10.1103/PhysRevLett.117.106401 journal journal Phys. Rev. Lett. volume 117, pages 106401 (year 2016)NoStop [Okada et al.(2012)Okada, Zhou, Walkup, Dhital, andV.]VidyaNcomm author author Y. Okada, author W. Zhou, author D. Walkup, author C. Dhital,and author W. S. D. M. V., doi: 10.1038/ncomms2150 journal journal Nature Communications volume 3, pages 1158 (year 2012)NoStop [Zhang et al.(2009b)Zhang, Cheng, Chen, Jia, Ma, He, Wang, Zhang, Dai, Fang, Xie, and Xue]ZhangQPI author author T. Zhang, author P. 
Cheng, author X. Chen, author J.-F. Jia, author X. Ma, author K. He, author L. Wang, author H. Zhang, author X. Dai, author Z. Fang, author X. Xie,and author Q.-K. Xue, 10.1103/PhysRevLett.103.266803 journal journal Phys. Rev. Lett. volume 103, pages 266803 (year 2009b)NoStop [Fu(2009)]FuWrap author author L. Fu, https://doi.org/10.1103/PhysRevLett.103.266801 journal journal Phys. Rev. Lett. volume 103, pages 266801 (year 2009)NoStop
http://arxiv.org/abs/1709.09133v1
{ "authors": [ "Kane L Scipioni", "Zhenyu Wang", "Yulia Maximenko", "Ferhat Katmis", "Charlie Steiner", "Vidya Madhavan" ], "categories": [ "cond-mat.mtrl-sci", "cond-mat.mes-hall" ], "primary_category": "cond-mat.mtrl-sci", "published": "20170926170733", "title": "Defect Role in the Carrier Tunable Topological Insulator (Bi$_{1-x}$Sb$_x$)$_2$Te$_3$ Thin Films" }
[ D. Huppenkothen1,2,3 and M. Bachetti4 December 30, 2023 ========================================= The analysis of area-level aggregated summary data is common in many disciplines including epidemiology and the social sciences. Typically, Markov random field spatial models have been employed to acknowledge spatial dependence and allow data-driven smoothing. In this paper, we exploit recent theoretical and computational advances in continuous spatial modeling to carry out the reconstruction of an underlying continuous spatial surface. In particular,we focus on models based on stochastic partial differential equations (SPDEs). We also consider the interesting case in which the aggregate data are supplemented with point data. We carry out Bayesian inference, and in the language of generalized linear mixed models, if the link is linear, an efficient implementation of the model is available via integrated nested Laplace approximations. For nonlinear links, we present two approaches: a fully Bayesian implementation using a Hamiltonian Monte Carlo algorithm, and an empirical Bayes implementation, that is much faster, and is based on Laplace approximations. We examine the properties of the approach using simulation, and then estimatean underlying continuous risk surface for the classic Scottish lip cancer data.Keywords: Change of support problem; Ecological bias; Hamiltonian Monte Carlo; Markovian Gaussian random fields. § INTRODUCTIONWhen modeling residual spatial dependence, it is appealing to reconstruct a continuous spatial surface, and this is the usual approach for point-referenced data.Continuous reconstruction becomes more difficult when the data contain regional aggregates at varying spatial resolutions. In epidemiological studies, data is often aggregated for reporting or anonymization. While there exists a wealth of techniques to model regional data at a fixed resolution <cit.>, these models do not extend in a straightforward fashion to situations where more than one resolution is used. In this paper, we develop methods for dealing with such situations.We describe a number of motivating settings. The first scenario we consider is one in which data are collected from surveys at known locations and/or from censuses over large regions. Our interest in this problem arises from spatial modeling of demographic indicators in a developing world context. In many countries in this setting, demographic information is not available on all of the population, so data is collected via surveys, such as Demographic and Health Surveys <cit.>. These surveys are typically stratified cluster designs with countries being stratified into coarse areas and into urban/rural, with enumeration areas (EAs) sampled within strata, and then households sampled within villages. In these surveys, the locations of the EAs, i.e., the GPS coordinates, are often available. We also consider census data, which is available at an aggregate level, e.g.,  the average or sum of a variable over an administrative areal unit. In the second scenario, we suppose we have areal data only. In epidemiology and the social sciences this situation is the most common, since such data usually satisfy confidentiality constraints, and typically arises from aggregation over a disjoint, irregular partition of the study map, based on administrative boundaries. As an example, we consider incident lip cancer counts observed in 56 counties in Scotland over the years 1975–1980. 
These data provide a good test case, since they have been extensively analyzed in the literature; see <cit.> and the references there-in.In this setting, we may view the continuous underlying surface as a device to induce a spatial prior for the areas that avoids the usual arbitrary element of defining neighbors over an irregular geography. In each of these examples, we assume that there is a latent, continuous Gaussian random field (GRF) that varies in space, {S(): ∈ R ⊂ℝ^2} where R is our study region of interest.The situation with which we are concerned with in this paper is closely related to the change of support problem <cit.>.This problem occurs when one would like to make inference at a particular spatial resolution, but the data are available at another resolution. Much of this work focuses on normal data and kriging type approaches, with block kriging being used. For example, <cit.> combine point and aggregate pollution data, with the latter consisting of outputs from numerical models, produced over a gridded surface; MCMC is used for inference with block kriging integrals evaluated over a grid. <cit.> considered the same class of problem, but added a time dimension and used a regression model with coefficients that varied spatially to relate the observed data to the modeled output.<cit.> develop a similar framework to ours and use a stochastic partial different equation (SPDE) approach in order to relate two levels of pollution data. Specifically, the model they propose relates the continuous surface to the area (grid) level by taking an unweighted average of the surface at various points within each grid. We extend this work in several regards: most importantly, our model can accommodate non-normal outcomes and we also allow for a more complex relationship between the point-level process and the aggregated data. Therefore, we are able to address a wider range of situations.<cit.> take a different approach for discrete data and model various applications using log-Gaussian Cox processes, including the reconstruction of a continuous spatial surface from aggregate data. Their approach is based on MCMC and follows <cit.> in simulating random locations of cases within areas, which is a computationally expensive step. A related problem to the COSP, is the modeling of data over time, based on areal data, but with boundary changes. <cit.> analyze space-time data on male bladder cancer in Nova Scotia; the spatial aggregation changes over time, with the older data tending to be of aggregate form and the point data being the norm in more recent years. Building on previous work <cit.>, they use a local EM algorithm in conjunction with a local polynomial to model the risk surface.We propose a three-stage Bayesian hierarchical model that can combine point and areal data by assuming acommon underlying smooth, continuous surface. We use the SPDE approach of <cit.> to model the latent field, which allows for computationally efficient inference. The paper is structured as follows. In Section <ref> we describe the model and in Section <ref> the computational details. A simulation study in Section <ref> considers a number of scenarios including: points data, areal data, and a combination of these data types. In Section <ref> we illustrate the non-linear areal data only situation for the famous Scotland lip cancer example. Section <ref> contains concluding remarks. 
§ MODEL DESCRIPTION We propose a general model framework for inference that can be used for data collected at points, over areas, or a combination of the two. We describe the model first for normal and then for Poisson data (as an illustration of a non-normal outcome), before concluding with a discussion of the model for the latent spatial surface. §.§ Normal Responses In general, models are specified at the point level. We describe the normal model in the context of modeling household wealth over a spatial region. Since we willbe concerned with observations at the area-level we will introduce general notation. The region of interest, R, is divided into n disjoint areas denoted R_i, with N_i households in area R_i, i=1, …, n.Let Y_ih=Y(_ih) denote the h-thresponse associated with location _ih (e.g., longitude and latitude), with covariate information z_ih=z(_ih), h=1,…,N_i; we assume a single covariate only for notational simplicity, with the extension to multiple covariates being straightforward.The household-level model is Y_ih | μ_ih, σ^2 ∼_ind(μ_ih, σ^2), with μ_ih = μ_ih(_ih) = β_0 + β_1 z(_ih)+ S(_ih) and S_ih = S(_ih) being the spatial random effect, where the spatial model is a GRF.The measurement error variance σ^2 is assumed constant (though this can easily be relaxed). When data are available from a census we observe the average response in each of the areas Y̅_i = 1/N_i∑_h=1^N_i Y_ih. The induced area-level model is Y̅_i | μ_i, σ^2 ∼(μ_i, σ^2/N_i)where,μ_i = 1/N_i∑_h=1^N_i{β_0 + β_1 z_ih + S_ih} . §.§ Poisson Responses In the second case we consider, we assume that only the sum of all binary events, Y_i+ = ∑_i=1^N_iY_ij, is observed and recorded in area R_i. The individual-level model is Y_ij | p_ij∼_ind(p_ij). We assume a rare event scenario, along with a log-linear model, so that, p_ij=p_ij(_ij)=exp{β_0 + β_1z(_ij)+S(_ij)}. We sum over all cases to give, Y_i+ | μ_i∼( μ_i ), where, μ_i = ∑_j=1^N_i [Y_ij | _ij]= ∑_j=1^N_iexp{β_0 + β_1 z(_ij ) + S(_ij) }.If we have non-rareoutcomes and only observe the sum then the situation is far more difficult to deal with since the sum of binomialswith varyingprobabilities is a convolution of binomials. If we observe the individual outcomes Y_ij (and not just the sum), then we can model each as binomial (i.e., we do not have to resort to the convolution). The common situation in which disease counts and expected numbers are available across a set of areas is considered in Section <ref>. §.§ Model for the Latent Process We assume a zero-mean latent, continuous GRF. There are many choices for describing how the form of the covariance changes with distance, but we follow <cit.> and others who make a strong argument for the Matérn covariance function defined as,[S(_k),S(_k')] = λ^22^1-ν/Γ(ν)(κ||_k - _k'||)^ν K_ν(κ||_k - _k'||),where ||·|| denotes Euclidean distance, K_v is the modified Bessel function of the second kind and order ν, κ is a scaling parameter, and λ^2 is a variance parameter. In general, it is difficult to learn about the smoothness parameter ν, and so it is conventional to fix this parameter; we follow this convention and set ν=1. We define the practical range ϕ = √(8ν)/κ as the distance at which the correlation drops to approximately 0.1.§ COMPUTATION There are two steps to the computation, first the continuous latent surface is discretized in a convenient fashion (Section <ref>), and second the posterior is approximated. 
We begin with the normal case (Section <ref>) before turning to the more difficult Poisson case (Section <ref>). §.§ Approximating the Latent Process The major hurdle to the more widespread modeling of spatial data with a continuous surface has been the computation. In particular, inverting and finding the determinant of the Matérn covariance matrix, which is in general not sparse, has been a roadblock when the number of points is not small. However, recent work by <cit.> and <cit.> detail the connection between GRFs and Gaussian Markov random fields (GMRFs). They first note that GRFs with a Matérn covariance function are solutions to a particular stochastic partial differential equation (SPDE), and under certain relatively non-restrictive choices this produces a Markovian GRF (MGRF).They then show it is possible to obtain a representation of the solution to the SPDE using a GMRF.We follow the SPDE approach and approximate the GRF over a triangulation of the domain (called the mesh) by a weighted sum of basis functions,S() ≈S̃() = ∑_k=1^mw_kψ_k(),where m is the number of mesh points in the triangulation, ψ_k() is a basis function, and =[w_1,…,w_m]^T is a collection of weights. The distribution of the weightsis jointly Gaussian with mean 0 and sparse m × m precision matrix, , depending on spatial hyperparameters = [logτ, logκ]^T where τ^2 = 1/(4πκ^2λ^2); hence,is a GMRF. The form ofis chosen so that the resulting distribution for S̃() approximates the distribution of the solution to the SPDE, and thus the form will depend on the basis functions. The basis functions are chosen to bepiecewise linear functions; that is, ψ_k() = 1 at the k-th vertex of the mesh and ψ_k() = 0 at all other vertices, k=1,…,m. This results in a set of pyramids, each with typically a six- or seven-sided base. Therefore, the spatial prior consists offunctions that are weighted linear combinations of these pyramids, with the weights having a multivariate normal distribution. The sparsity ofeases computation.For inference, the discretized version of the spatial prior is combined with the likelihood. In the setting where we have known locations,it follows from (<ref>) that the value of the spatial random effect at an observation point, _ij, can be approximated by a weighted average of the value of the GMRF on the three nearest mesh vertices. We can write, S(_ij) ≈S̃(_ij) = _ij^T, where _ij is an m × 1-vector of weights that corresponds to the ij-th row of a sparse projection matrix . The nonzero entries of _ij, which correspond to the mesh points comprising the triangle containing _ij, are proportional to the inverse distance from _ij to those mesh points, such that these values sum to one. In the case where the observation location, _ij, is at a mesh vertex, _ij contains one non-zero entry that is equal to one.For the normal response model when we have areal data, we use a fully Bayesian approach, since a fast computational strategy is available. Specifically, the integrated nested Laplace approximation (INLA), anapproach for analyzing latent Gaussian models <cit.>, can be used. INLA works by using a combination of Laplace approximations along with numerical integration to obtain approximations to the posterior marginals. The SPDE approach has also been implemented in thepackage<cit.>, which allows for computationally efficient inference. 
For the Poisson response model,cannot be used for data aggregated over areas; instead, we consider approaches that involve empirical Bayes (EB), MCMC, or a combination.§.§ Normal Responses The likelihood is normal with mean (<ref>), and for simplicity we assume no covariates. The key to implementation is to approximate the integrated residual spatial area risk using the mesh. Defining d(_ik) to be the “relative” population density at location _ik satisfying d(_ik) ≥ 0 and ∑_k=1^m_i d(_ik) = 1, we obtain,μ_i≈β_0+∑_k=1^m_id(_ik)w_ik = β_0+_i^T,where m_i is the number of mesh points in area R_i and _i is an m× 1 vector with up to m_i nonzero entries d(_ik).This type of model can be fit using INLA, since _i^T is Gaussian. See Appendix <ref> for details on the implementation in the context of the simulation that we describe in Section <ref>.§.§ Poisson Responses For areal Poissondata, we have the model Y_i+ | μ_i ∼_ind(μ_i).We use a weighted average of the exponentiated spatial random effect at the mesh points contained in the area to form μ_i. That is, and again ignoring covariates, we approximate the integral (<ref>), to give μ_i≈ N_i exp(β_0) ∑_k=1^m_id(_ik)exp(w_ik) = N_i exp(β_0) _i^Twhere _i is an m × 1 vector as defined in Section <ref> and =[exp(w_1),…,exp(w_m)]^T. Due to the structure of this model, it is not possible to useINLA for fitting, but we describe three alternatives. First, a quick approximation is offered by EB with a Laplace approximation being used to integrate out the spatial random effects. To implement this, we use thepackage(which stands for Template Model Builder; ). This is very efficient and estimates of the spatial hyperparameters and fixed effects can be computed within minutes. Second, we resort to Markov chain Monte Carlo (MCMC) methods. It is well known that in the Gaussian Process context, MCMC methods can be inefficient <cit.>. We opt to use a Hamiltonian Monte Carlo (HMC; Neal, 2011neal:2011) transition operator for updating . Specifically, we first update the spatial hyperparametersusing a random walk proposal and then jointly updateand β_0 using HMC. Finally, we consider a hybrid approach where estimates for the spatial hyperparametersare found using the empirical Bayes approach and then, conditional on these estimates, posteriors forand any fixed effects are explored using MCMC methods. Details of these algorithms in the context of the Scotland example can be found in Appendix <ref>.In both the simulation and the real data example, we use relatively vague priors; see Appendices <ref> and <ref>. § SIMULATION STUDY IN THE NORMAL RESPONSE CASE §.§ Set Up We illustrate the method for normal responses via a simulation considering observations associated with points and observations associated with areas. As a motivating example, we assume the aim is to construct a poverty surface; understanding the spatial structure of poverty and poverty-related factors is of considerable interest<cit.>. Poverty has many different facets, and we take the wealth index as our measure <cit.>, which serves as a surrogate for long-term standard of living. We simulate a surface of the average wealth index within households. The wealth index is comprised of several variables such as household ownership of consumables, access to drinking water, and toilet facilities. The score is then standardized to have mean 0 and standard deviation 1. 
We will consider situations in which the wealth index is measured at point locations and we also consider incorporating census data, which provides the average wealth index at the area-level. Observations associated with points are taken from a design that is informed by the Kenya DHS <cit.>. It is simplified in that we do not consider stratification or explicit cluster sampling for the 400 locations, which correspond to the centroids of enumeration areas (EAs) from the Kenya 2008 DHS. The dots on the plots in Figure <ref> indicate the locations of these sampling points. We emphasize that these are point locations. Let i=1,…,n index the administrative areas in Kenya and j=1,…,n_i represent the EAs in area R_i. Hence, ∑_i=1^n n_i=400. Furthermore, let h=1,…,N_i index the households included in the census in area R_i and let N_ij be the number of households surveyed at the jth location in R_i. For our simulation, the number of households participating in a survey, N_ij, ranges from 41 to 81, with mean 55 to give 21,496 households in total. We let y_ih be the wealth index of household h in area R_i.We consider the data generating mechanism, Y_ih | μ_ih, σ^2 ∼( μ_ih,σ^2) for h=1,…,N_i, i=1,…,n. Thus, for census data we assume the following model for the average wealth index in area R_i,Y̅_i | μ_i, σ^2∼ ( μ_i,σ^2/N_i),μ_i= β_0 + 1/N_i∑_h=1^N_iS(_ih)where S(_ih) is the value of the spatial random effect at geographic location _ih. For survey data we assume the following model for the average household wealth index taken at EA j in area R_i,Y̅_ij | μ_ij, σ^2∼ ( μ_ij,σ^2/N_ij),μ_ij = β_0 + S(_ij)where S(_ij) is the spatial random effect evaluated at the centroid, _ij, of the EA. In this setting, σ^2 represents measurement error.We assume that the spatial model is a MGRF with Matérn covariance controlled by variance parameter λ^2 and scale parameter κ. Since the wealth index is standardized to have mean 0 and standard deviation 1, we have,[Y_ih]= [β_0 + S(_ih)] = 0, Var[Y_ih]= [σ^2] + Var[β_0 + S(_ih)] = 1.Therefore, we set β_0 = 0, and partition the variance as σ^2 = 0.25, and λ^2 = 0.75. To simulate the spatial surface, we use the SPDE approach, which requires a triangulated mesh. This mesh is shown in the left panel of Figure <ref> with m=2,785 mesh points; these mesh points are approximately 15 km apart in the interior of Kenya. We set κ = exp(1/2), which corresponds to a practical range of ϕ = √(8)/κ=1.72 degrees. The simulated average household wealth index surface, i.e.,  β_0 + S̃(), is shown in the right panel of Figure <ref>, where the spatial effect S̃() approximates S().To simulate data at the 400 EAs, we approximate (<ref>) by μ_ij = β_0 + S̃(_ij) where S̃(_ij) is the simulated spatial effect at EA j in area R_i. To simulate the census data we use gridded population estimates from SEDAC <cit.>, which are available on a (approximately) 1 km square grid at the equator. The gridded population estimates are then transformed to household estimates by dividing the population estimates by 3.9, the mean size of households in 2014 <cit.>. 
We then approximate (<ref>) by μ_i = β_0 + (1/N_i) ∑_g=1^G_i N_ig S̃(s_ig), where N_ig is the household estimate for grid cell g in area R_i, N_i = ∑_g=1^G_i N_ig is the household estimate for area R_i, and S̃(s_ig) is the simulated spatial effect at the centroid of grid cell g. We consider five different scenarios with varying levels of information available on location: (1) survey data only, (2) census data at the county level (n=47) only, (3) both survey data and census data at the county level, (4) census data at the provincial level (n=8) only, and (5) both survey data and census data at the provincial level. When we analyze survey and census data together, we assume the two data sources are independent, which in practice means that the surveyed population is only a small fraction of the total population. To assess the accuracy of the reconstruction under each scenario, we compute the mean squared error (MSE) and mean absolute error (MAE) of the spatial effect surface as MSE = (∑_i=1^n m_i)^-1 ∑_i=1^n ∑_k=1^m_i (ŵ_ik − w_ik)^2 and MAE = (∑_i=1^n m_i)^-1 ∑_i=1^n ∑_k=1^m_i |ŵ_ik − w_ik|, respectively, where ŵ_ik is the posterior mean and w_ik is the “true” value of the spatial effect at mesh point s_ik. Both the MSE and MAE are given in Table <ref> for all 5 scenarios. The top row of Figure <ref> gives the sampling locations/areas, with each column corresponding to a different sampling scheme, and the middle and bottom rows give the posterior means and standard deviations of the surface S(s).

§.§ Survey Data
For our first simulation, we consider a situation in which we have survey data available from 400 EAs. To fit the model using INLA, we construct the projection matrix A as described in Section <ref>. We fit model (<ref>) using the SPDE approach. Computational details can be found in Appendix <ref>. Posterior medians and 95% credible intervals (CIs) for the parameters are presented in Table <ref>, and the predicted spatial random effect surface is depicted in Figure <ref> (left column). In general, the posterior medians are relatively close to their true values, and all credible intervals cover the true value, though they are fairly wide. The predicted spatial surface (posterior mean) over Kenya is visually similar to the true spatial surface, though there is some attenuation. Regions of Kenya that have a higher spatial effect are predicted to be lower and vice versa; this shrinkage-to-the-mean phenomenon is well known in the spatial literature. We also see that the posterior standard deviation of the spatial effect is lower in the vicinity of the 400 enumeration areas and higher elsewhere. The posterior median and 2.5th and 97.5th percentiles of the predicted average household wealth index are depicted in Figure <ref> of Appendix <ref>.

Figure <ref> caption: Comparison of results under the five scenarios: 400 surveys with exact location (Surveys), census data at the county level (47 Areas), both survey and census data at the county level (Surveys + 47 Areas), census data at the provincial level (8 Areas), and both survey and census data at the provincial level (Surveys + 8 Areas). Top row is the data available under each scenario. Black dots are the locations of the enumeration areas. Middle row is the predicted (posterior mean) spatial surface.
Bottom row is the posterior standard deviation of the predicted surface.

§.§ Census Data (47 Counties)
We next consider a situation in which we have census data for each of the n=47 counties in Kenya. To implement (<ref>), we approximate μ_i using (<ref>), which requires the population density at the mesh points. To determine the population estimate corresponding to the grid cell containing each mesh point, we used gridded population estimates from SEDAC. Figure <ref> depicts the n=47 counties and, in gray, the mesh points with population density d(s_ik) > 1/m_i. The results are presented in Table <ref> and depicted in Figure <ref> (second column). Again, we see that the posterior medians are relatively close to the true values. The predicted spatial surface is similar to the truth and is very similar to the predicted surface estimated from the point data. In general, the posterior standard deviation of the spatial effect is higher under this scenario than when we had information from 400 surveys. This is also evident when comparing the 2.5th and 97.5th percentiles of the predicted average household wealth index, displayed in Figure <ref> of Appendix <ref>.

§.§ Survey and Census Data (47 Counties)
Another scenario that might arise is one in which we have both survey data at 400 EAs and census data available for 47 counties. Thus, we simply combine the methods from Sections <ref> and <ref>. The results are presented in Table <ref> and displayed in Figure <ref> (third column) and Figure <ref> of Appendix <ref>. Overall, there is a slight improvement over the survey-information-only case. We note that there are some identifiability problems when estimating the two variance parameters, which manifest themselves here with σ^2 being overestimated and λ^2 underestimated.

§.§ Census Data (8 Provinces)
In order to evaluate the effect when area information is known at a more aggregate level than previously considered, we examined a situation where we only have census data available for each of the n=8 provinces in Kenya. Implementation-wise, this scenario is analogous to the one previously described in Section <ref>. Results are presented in Table <ref>, and a depiction of the posterior mean and standard deviation, along with a map displaying the 8 provinces, is in Figure <ref> (fourth column). In this scenario, inference for the parameters is severely deteriorated compared to the previous cases. In particular, the credible intervals are much wider than in the previous scenarios, and the MSE and MAE are substantially larger.

§.§ Survey and Census Data (8 Provinces)
The last scenario we consider is similar to that in Section <ref>, where we have both survey and census data (at the provincial level) to use. Parameter estimates are presented in Table <ref>, and the posterior mean and standard deviation of the random effect are depicted in Figure <ref> (last column). Again, identifiability issues are evident in the inference for the variances. The spatial effect surface is similar to the surveys-only scenario. In terms of the mean squared errors, the values are 0.263, 0.307, and 0.202 when we have survey data with geographic coordinates, census data at the county level (n=47), and a combination, respectively. In this simulation, there is a loss of accuracy when we only have census data, but it is not dramatic. However, when we aggregate at the provincial level (n=8) the MSE is 0.557, and when we additionally incorporate survey data the MSE is 0.254.
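For concreteness, here is a small Python sketch of the MSE and MAE definitions above, applied to made-up values rather than the paper's results.

```python
import numpy as np

def mse_mae(w_hat, w_true):
    """MSE and MAE over all mesh points, as defined in the text.
    `w_hat` holds posterior means, `w_true` the simulated truth."""
    w_hat, w_true = np.asarray(w_hat), np.asarray(w_true)
    diff = w_hat - w_true
    return np.mean(diff ** 2), np.mean(np.abs(diff))

# Toy check with illustrative numbers:
mse, mae = mse_mae([0.1, -0.2, 0.4], [0.0, -0.1, 0.5])
print(round(mse, 4), round(mae, 4))  # 0.01 0.1
```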
In general, we see a modest improvement when incorporating the census data over just using survey data. The improvement is significantly better when we use county-level rather than provincial-level census data. The same trends hold for the mean absolute errors.

§ APPLICATION TO SCOTTISH LIP CANCER DATA
We use the Scotland lip cancer data as an illustrative example of how the method can be applied to areal data. The most common model for spatial smoothing of such data is that of <cit.>. They propose a discrete spatial model by assigning the spatial random effects an intrinsic conditional autoregressive (ICAR) prior, S_i | S_i', i' ∈ ne(i) ∼ N(S̄_i, 1/(τ_s m_i)), where S_i', i' ∈ ne(i) are the spatial random effects of the neighbors of R_i, S̄_i is the mean spatial random effect of the neighbors, τ_s determines the spatial variability, and m_i is the number of neighbors. Unfortunately, this specification for the random effects depends on defining a somewhat arbitrary neighborhood structure. Instead, it may be of interest to predict a continuously-varying rather than a discretely-varying latent surface. One may view the underlying continuous spatial field simply as a mechanism to induce spatial dependence between the areas (and then in some instances one may report the aggregate estimates only). One simplistic approach is to use a GRF model, with the data assumed to arise (for example) at the centroids of the areas, but obviously this is arbitrary and does not reflect reality. Let R_i denote county i, i=1,…,n=56, and let Y_iaj be the binary male lip cancer indicator in stratum (age-band) a of county i at location s_ij, j=1,…,N_ia, where N_ia is the male population of county R_i in age group a. In the usual case, the available data correspond to summed disease counts Y_i++ = ∑_a=1^A ∑_j=1^N_ia Y_iaj and expected numbers E_i; these expected numbers are often pre-calculated as E_i = ∑_a=1^A N_ia q_a, where q_a is a reference risk for stratum a. The q_a may be taken from a previous time period or calculated (via internal standardization) in advance. The rarity of many diseases, and the lack of stratum-specific information, means that simplifying modeling assumptions are needed, as we now describe. We proceed as in the no-strata case and assume for a rare disease Y_iaj | p_iaj ∼_ind Bernoulli(p_iaj), for j=1,…,N_ia individuals in stratum a, county R_i, where p_iaj = exp{β_0 + β_a + S_a(s_ij)} = q_a exp{β_0 + S_a(s_ij)}, with S_a(s_ij) representing the spatial random effect for stratum a at location s_ij. This leads to Y_i++ | μ_i ∼ Poisson(μ_i), and, proceeding as before, μ_i = ∑_a=1^A ∑_j=1^N_ia E[Y_iaj | s_ij] = ∑_a=1^A q_a ∑_j=1^N_ia exp{β_0 + S_a(s_ij)} ≈ ∑_a=1^A N_ia q_a exp(β_0) ∑_k=1^m_i d_a(s_ik) exp{S_a(s_ik)} = E_i exp(β_0) ∑_k=1^m_i d(s_ik) exp{S(s_ik)} = E_i θ_i, where the first equality on the last line follows from assuming a common residual spatial risk surface across strata (S_a(s) = S(s)) and a common population density across strata (d_a(s) = d(s)). This allows us to separate the age standardization from the risk surface estimation to give the data model. Standardization in this fashion leads to the spatial modeling of the relative risk, θ_i, an aggregate summary. The standardized incidence ratio (SIR) is SIR_i = Y_i/E_i, which is the MLE of θ_i from the Poisson model with mean E_i θ_i. The SIRs are depicted in the top left panel of Figure <ref>. It is evident from the map that there is large variability in the area relative risks, with apparent strong spatial dependence. Inference for this model proceeds as discussed in Section <ref>.
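A minimal sketch of the expected-count and SIR calculations above; the stratum populations and reference risks are invented for illustration and are not the Scotland data.

```python
import numpy as np

# Hypothetical inputs for one county: age-stratum populations N_ia and
# reference risks q_a (illustrative numbers only).
N_ia = np.array([400, 350, 300, 250])          # male population per age band
q_a = np.array([0.001, 0.002, 0.004, 0.008])   # reference risk per age band

E_i = float(N_ia @ q_a)   # expected count: E_i = sum_a N_ia * q_a
Y_i = 7                   # observed summed count Y_{i++}
SIR_i = Y_i / E_i         # MLE of the relative risk theta_i
print(round(E_i, 2), round(SIR_i, 2))
```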
The mesh used is shown in Figure <ref>; it has m = 2,417 mesh points, which are ≈ 6.3 km apart. We determined the relative population density for each R_i in the same manner as for the Kenya simulation, and mesh points associated with higher relative population densities are shown in Figure <ref>. We considered several different computational strategies, which are described more fully in Appendix <ref>. Briefly, we implemented an EB approach in which the spatial random effects were integrated out using Laplace approximations, a fully Bayesian approach (using HMC), and a hybrid of these two in which HMC was used with θ fixed at the EB estimates. For the fully Bayesian approach, we initialized 4 chains, used a burn-in of 10,000 iterations, ran up to an additional 1,000,000 iterations, and thinned to ultimately save 1,000 iterations from each chain. For the hybrid approach, we also initialized 4 chains, used a burn-in of 500 iterations, and ran up to an additional 1,000 iterations for each chain. Convergence summaries for both the fully Bayesian and hybrid approaches are given in Appendix <ref>. Estimates and 95% CIs for the parameters exp(β_0), ϕ, and λ^2 are presented in Table <ref> for the three different computational strategies. There is good agreement, though we notice that the posterior credible intervals tend to be wider when using the fully Bayesian computational strategy, which is not surprising given the use of the delta method to calculate the CIs in the EB approach. We also obtain predictions and posterior standard deviations of S̃(s), displayed in Figure <ref> of Appendix <ref>. We note that the posterior standard deviation of the surface is smallest in the regions of Scotland where the population is greatest, and larger elsewhere. Furthermore, the posterior standard deviation tends to be a little lower for the hybrid approach than for the fully Bayesian approach, which is not surprising given that the spatial parameters were fixed in the hybrid approach. The predicted continuous relative risk surface using the fully Bayesian approach (posterior median) is presented in Figure <ref>. We see that the continuous relative risk surface is highest in the counties with higher SIRs, and lowest in the counties with the smallest SIRs. We obtain relative risk estimates (posterior medians), as well as 95% CIs, for each of the 56 counties from this model by aggregating the continuous relative risk surface within each county; posterior medians are presented in Figure <ref>, and the 95% CIs are displayed in Figure <ref> of Appendix <ref>. To obtain estimates of the desired quantiles in both the fully Bayesian and hybrid approaches, for each county R_i we obtain b=1,…,B=4,000 draws from the “aggregated” relative risk surface, θ_i^(b) = exp(β_0^(b)) ã_i^T u^(b), where u^(b) = [exp(w_1^(b)),…,exp(w_m^(b))]^T (see (<ref>)). As before, ã_i has at most m_i nonzero entries, which correspond to the population density estimates d(s_ik). From here, we can obtain the desired summary measures. We see that the relative risk estimates are nearly identical for both computational strategies and are similar to the SIRs, but the estimates are shrunk towards the overall mean, which is as expected. We also compare our results to those obtained using an ICAR prior on the spatial random effects. Parameter estimates are in Appendix <ref>. The predicted relative risks (posterior medians) for each county are presented in Figure <ref>, and the 95% CIs are presented in Figure <ref> of Appendix <ref>.
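The aggregation of posterior draws into county-level relative risk summaries can be sketched as below. The draws and the sparse weight vector are random stand-ins, not posterior output from the actual fit.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins: B posterior draws of beta_0 and of the m mesh values w.
B, m = 4000, 50
beta0_draws = rng.normal(0.0, 0.1, size=B)
w_draws = rng.normal(0.0, 0.3, size=(B, m))

a_i = np.zeros(m)                    # sparse areal weight vector for county R_i
a_i[[3, 7, 11]] = [0.5, 0.3, 0.2]    # hypothetical densities d(s_ik), summing to 1

# theta_i^(b) = exp(beta_0^(b)) * a~_i^T u^(b), with u^(b) = exp(w^(b))
theta_draws = np.exp(beta0_draws) * (np.exp(w_draws) @ a_i)
median = np.median(theta_draws)
lo, hi = np.percentile(theta_draws, [2.5, 97.5])
print(round(median, 3), round(lo, 3), round(hi, 3))
```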
The results are very similar to those from the continuous model. For the fully Bayesian approach using HMC (using our own code), it took approximately a week to fit the model on a computing cluster. This can be improved tremendously by using the hybrid approach: it takes on the order of minutes to obtain the empirical Bayes estimates and about ten minutes to run the HMC.

§ DISCUSSION
In this article, we propose a Bayesian hierarchical model that can accommodate observations taken at different spatial resolutions. To this end, we assume a continuous spatial surface, which we model using the SPDE approach. In the simulation example, we considered surveys taken at point locations and census data associated with areas. When the only data available were county-level census data, there was not a substantial loss in accuracy compared to a situation in which we had survey (point) data. In general, we would not expect this to hold when comparing point data to areal data, as the loss of information depends on the strength of the spatial dependence, the number and geographical configuration of the areas, and the amount and quality of the survey data. When the size of the areas increased (comparing 47 counties to 8 provinces), estimates of the spatial parameters and the overall household wealth index were highly variable, and the predicted spatial surface was much less nuanced. We also applied our method to the Scotland lip cancer dataset. Overall, there were very minor differences in the relative risk estimates for each county when comparing our continuous spatial model to a discrete spatial model (i.e., the ICAR model), but again there is strong spatial dependence in these data (which explains in part the popularity of these data). However, we note that modeling a continuous surface is particularly attractive in that it is not subject to definitions of administrative boundaries, which can often be arbitrary, and a continuous risk surface, in general, more accurately reflects disease etiology. Furthermore, it can easily be adapted to situations where we might also have point-level covariate data. In the latter case, the use of the models we have described avoids the ecological fallacy, which occurs when area-level associations differ from their individual-level counterparts. To avoid ecological bias one requires point-level covariates, and the availability of such data is increasing <cit.>. For normal outcomes, all computation can be performed quickly using INLA. In the simulation example, it took about 2 minutes to fit each of the models on a standard laptop. For Poisson outcomes, computation is much more difficult. There has been increased interest in implementing sparse matrix operations in Stan, which would improve the usability of this method. In general, there is still a need for computationally efficient MCMC schemes for Gaussian process data. With point data we cannot learn anything about the surface at a spatial resolution finer than the distance between the two closest points. If we only have areal data, the situation is far worse. Hence, one should not over-interpret fine spatial scale effects. In both the point and areal data cases, model checking is difficult. As a minimum, however, for areal data one may simply view the model as a method to induce an area-level spatial prior.
If there are doubts about the fine-scale surface, the results can be presented at the area level, as we did with the Scottish lip cancer example. In the simulation study we considered, we looked at combining survey (point) data and census (areal) data. However, with older DHS surveys exact geographic coordinates are not available. Instead, it is only known in which area of the country the survey was taken. This is different from the problem we considered in that, instead of observing outcomes that are associated with an entire area (census data), outcomes are for a specific point in the area, but that exact location is unknown. Therefore, the methods proposed here would need to be altered to accommodate this type of situation.

§ SOFTWARE
All code and input data used in the simulation and application are available on github (<https://github.com/wilsonka/pointless-spatial-modeling>).

§ ACKNOWLEDGMENTS
The authors would like to thank Dan Simpson for numerous helpful conversations, and Jim Thorson for advice on TMB. Both authors were supported by R01CA095994 from the National Institutes of Health.

§ APPENDIX

§ COMPUTATIONAL DETAILS FOR THE KENYA SIMULATION
As described in Section 4.1 of the paper, the household-level model is Y_ih | μ_ih ∼ N(μ_ih, σ^2), with μ_ih = β_0 + S(s_ih), with unknown variance σ^2. Across all simulation scenarios, we use the priors β_0 ∼ N(μ_β_0, σ_β_0^2), σ^2 ∼ Beta(2, 5), θ ∼ N(μ_θ, Σ_θ), with μ_β_0 = 0, σ_β_0^2 = 100, μ_θ = (−1.17, −0.0933)^T, and Σ_θ = diag(100, 10). The hyperprior for θ is chosen to be fairly vague. Here, the prior mean for θ_1 corresponds to a marginal variance λ^2 of 1. The prior mean for θ_2 corresponds to a practical range ϕ of roughly 20% of the domain size. We use the INLA package for computation. Fitting models involving observations with exact locations is straightforward, as there exist functions to define the matrix A used to project the spatial random effect from the mesh vertices to point locations (see Section 3.1). Details of how to specify these models in INLA using the SPDE approach can be found in <cit.>. In order to fit the models that involve census data, we adapt A, since this matrix can be viewed as a way to average the random effect at mesh points. In these scenarios, we define Ã to be a matrix with n rows and m columns, made up of row vectors ã_i^T of length m, where m is the number of mesh points. These row vectors ã_i^T contain up to m_i nonzero entries d(s_ik). In the case of area-level observations, we use Ã in place of A when fitting the model using INLA. In scenarios involving a combination of point and areal data, the resulting projection matrix contains rows from both A and Ã.

§ FURTHER MATERIAL ON THE KENYA SIMULATION
We compare the predicted average household wealth index (posterior median) and 2.5th and 97.5th percentiles across three scenarios: 400 surveys with exact locations, census data at the county level, and both survey and census data at the county level. Results are presented in Figure <ref>. When comparing to the true household wealth index surface, which tends to have lower values in the middle and northeastern sections of the country and higher values elsewhere, we see that the predicted surface when using the survey coordinates (“Surveys” and “Surveys + 47 Areas”) tends to be most similar.

§ COMPUTATIONAL DETAILS FOR THE SCOTLAND EXAMPLE
In this section, we describe computational strategies for analyzing the Scotland dataset. We first describe two empirical Bayes based approaches, followed by a description of the fully Bayesian approach.
§.§ Empirical Bayes
As fast alternatives to a completely Bayesian approach, we consider two strategies, both based on empirical Bayes (EB) estimation. In the first, we use EB estimation to obtain estimates for the spatial hyperparameters θ and the fixed effect β_0. In the second, we use a hybrid approach where we first use EB estimation to estimate θ, and then proceed conditional on these values. For the first, strictly EB, approach, the EB estimates are defined as (θ̂^EB, β̂_0^EB) = argmax_θ,β_0 p(θ, β_0 | y) = argmax_θ,β_0 ∫_w p(θ, w, β_0 | y) dw = argmax_θ,β_0 ∫_w f(y | β_0, w) p(w | θ) p̃(θ) p̃(β_0) dw, where we use p̃(·) to denote a flat prior, that is, p̃(·) ∝ 1. For the EB approach we use uninformative, flat priors for both the fixed effect β_0 and the spatial parameters θ. Therefore the EB estimates, the posterior modes, are also maximum likelihood estimates (MLEs). We then use the invariance of MLEs and the delta method to obtain estimates for exp(β_0) and functions of the spatial hyperparameters. For the second, “hybrid”, approach, the EB estimates are defined as θ̂^Hybrid = argmax_θ p(θ | y) = argmax_θ ∫_w ∫_β_0 p(θ, w, β_0 | y) dβ_0 dw = argmax_θ ∫_w ∫_β_0 f(y | β_0, w) p(w | θ) p̃(θ) p(β_0) dβ_0 dw. We then use these estimates in the second, MCMC-based, step, which is described in the following section. In the hybrid approach, we use a normal prior for the intercept, β_0 ∼ N(μ_β_0, σ_β_0^2) with μ_β_0 = 0, σ_β_0^2 = 100, and place uninformative, flat priors on the spatial parameters, p̃(θ) ∝ 1. In both cases, to implement EB estimation we use the TMB package <cit.>, which is computationally efficient since it uses sparse matrix operators. See <cit.> for a discussion of using TMB with the SPDE approach. We briefly summarize how to implement this approach. We first construct a so-called template file that contains the joint distribution, which is the product of the likelihood f(y | β_0, w) and the priors p(w | θ), p̃(β_0) or p(β_0), and p̃(θ). To obtain the densities that we would like to optimize, p(θ, β_0 | y) and p(θ | y), we use TMB to integrate out w and, optionally, β_0. This integration is carried out using Laplace approximations. We then numerically optimize the density using gradients to obtain the EB estimates, denoted θ̂^EB and β̂_0^EB (or θ̂^Hybrid for the hybrid approach), and the associated variance-covariance matrix (based on the Hessian), denoted Σ^EB (for the strictly EB approach).

§.§ MCMC
For the fully Bayesian approach we use as priors β_0 ∼ N(μ_β_0, σ_β_0^2) and θ ∼ N(μ_θ, Σ_θ), with μ_β_0 = 0, σ_β_0^2 = 100, μ_θ = (3.24, −4.51)^T, and Σ_θ = diag(100, 10). As in the Kenya simulation example, the priors for the hyperparameters θ are vague, and μ_θ corresponds to a marginal variance λ^2 of 1 and a practical range ϕ of roughly 20% of the domain size. In the fully Bayesian approach, we begin by updating θ conditional on w, β_0, and y using a random-walk proposal. The proposal distribution we use is θ^(t+1) ∼ N(θ^(t), c × Σ^EB_θ), where Σ^EB_θ is the inverse Hessian corresponding to the estimates of θ obtained from the EB estimation described in the preceding section. The second step is then similar for both the hybrid and fully Bayesian approaches. We update w and β_0 conditional on θ and y using Hamiltonian Monte Carlo (HMC; Neal, 2011). In the hybrid approach, θ is taken to be θ̂^Hybrid. The negative log posterior U (modulo a constant term) is found to be U = −β_0 y^T 1_n − y^T log(Ãu) + exp(β_0) E^T Ãu + (1/2) w^T Q w + β_0^2/(2σ_β_0^2), where 1_n is an n × 1 vector of ones, Ã is an n × m matrix in which each row ã_i^T contains up to m_i nonzero weights d(s_ik), Q is the precision matrix of w, u = [exp(w_1),…,exp(w_m)]^T, and E = [E_1,…,E_n]^T.
We also compute the derivatives, ∂U/∂β_0 = −y^T 1_n + exp(β_0) E^T Ãu + β_0/σ_β_0^2 and ∂U/∂w = (Ã diag(u))^T [exp(β_0) E − (diag(Ãu))^{-1} y] + Qw, where diag(u) is a diagonal matrix with entries u_1,…,u_m along the diagonal. The parameters that are tuned for the desired acceptance rate are c, the step size, and the number of leapfrog steps in each HMC iteration. In exploratory runs, it appeared that computation could be improved by defining w^* = w + β_0 1_m, in which case w^* | β_0, θ ∼ N(β_0 1_m, Q^{-1}). Under this parameterization, U = −y^T log(Ãu^*) + E^T Ãu^* + (1/2)(w^* − β_0 1_m)^T Q (w^* − β_0 1_m) + β_0^2/(2σ_β_0^2), ∂U/∂β_0 = β_0 1_m^T Q 1_m − 1_m^T Q w^* + β_0/σ_β_0^2, ∂U/∂w^* = (Ã diag(u^*))^T [E − (diag(Ãu^*))^{-1} y] + Q(w^* − β_0 1_m), where u^* = [exp(w^*_1),…,exp(w^*_m)]^T. This alternative parameterization is implemented for the hybrid approach. Further gains in speed can be found by specifying a better scaling of the “mass matrix” (the covariance of the momentum variables used in the HMC algorithm). In the simplest case, the mass matrix is chosen to be the identity matrix, which corresponds to i.i.d. momentum variables. For the hybrid approach, we adapt this matrix to again be diagonal, but with entries along the diagonal corresponding to the inverse posterior variances. We estimate these empirically after running several hundred iterations of the algorithm with an identity mass matrix.

§ FURTHER MATERIAL ON THE SCOTTISH LIP CANCER EXAMPLE
Trace plots and histograms for the fully Bayesian approach are shown in Figures <ref> and <ref>, respectively. The trace plot and histogram for the hybrid approach are displayed in Figure <ref>. Trace plots and calculated R̂ (which were all less than 1.05) suggested convergence <cit.>. The predicted spatial surface (posterior mean) and the corresponding posterior standard deviation are depicted in Figure <ref> for both the hybrid (left column) and fully Bayesian approach (right column). In both cases the predicted continuous surface is similar and the posterior standard deviations follow similar trends. However, the posterior standard deviation tends to be lower for the hybrid approach than for the fully Bayesian approach, which is as expected, since variability in θ is not taken into consideration in the hybrid approach. Using the ICAR model, we obtained a posterior median for exp(β_0) of 1.03 (95% CI: 0.926, 1.14) and for τ_s of 1.33 (95% CI: 0.770, 2.28). A comparison of the 95% CIs for the relative risks using the ICAR model and the continuous surface model is presented in Figure <ref>; plotted are the corresponding 2.5th and 97.5th percentiles. Results were nearly indistinguishable across the models and computational strategies.
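To make the reparameterized HMC updates above concrete, here is a self-contained numpy sketch of one sampler. Everything is a toy stand-in: β_0 is held fixed at 0, Q is a dense scaled identity rather than the sparse SPDE precision, the counts and weights are invented, and an identity mass matrix is used. The leapfrog and acceptance steps implement the standard HMC recipe, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy problem dimensions and invented inputs (not the Scotland data).
n, m = 4, 10
A = np.abs(rng.random((n, m)))
A /= A.sum(axis=1, keepdims=True)    # areal weight rows a~_i^T, summing to 1
Q = np.eye(m) * 2.0                  # stand-in for the sparse SPDE precision of w
E = np.array([5.0, 8.0, 3.0, 6.0])   # expected counts
y = np.array([6.0, 7.0, 2.0, 9.0])   # observed counts
beta0 = 0.0                          # fixed, as in the hybrid conditional update
ones = np.ones(m)

def U(ws):
    """Negative log posterior in the w* = w + beta0*1 parameterization."""
    u = np.exp(ws)
    r = ws - beta0 * ones
    return -y @ np.log(A @ u) + E @ (A @ u) + 0.5 * r @ Q @ r

def gradU(ws):
    u = np.exp(ws)
    return (A * u).T @ (E - y / (A @ u)) + Q @ (ws - beta0 * ones)

def hmc_step(ws, eps=0.01, L=20):
    p = rng.normal(size=m)                  # momentum, identity mass matrix
    ws_new = ws.copy()
    p_new = p - 0.5 * eps * gradU(ws_new)   # initial half step
    for _ in range(L):                      # leapfrog integration
        ws_new = ws_new + eps * p_new
        p_new = p_new - eps * gradU(ws_new)
    p_new = p_new + 0.5 * eps * gradU(ws_new)  # undo the extra half step
    log_acc = U(ws) + 0.5 * p @ p - U(ws_new) - 0.5 * p_new @ p_new
    return ws_new if np.log(rng.random()) < log_acc else ws

ws = np.zeros(m)
for _ in range(200):
    ws = hmc_step(ws)
print(ws[:3])
```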
http://arxiv.org/abs/1709.09659v1
{ "authors": [ "Katherine Wilson", "Jon Wakefield" ], "categories": [ "stat.ME" ], "primary_category": "stat.ME", "published": "20170927175615", "title": "Pointless Continuous Spatial Surface Reconstruction" }
ACS Applied Materials & Interfaces 8 (2016), 28159 – 28165 DOI: 10.1021/acsami.6b08532

The influence of superparamagnetism on exchange anisotropy at CoO/[Co/Pd] interfaces

^1 Institute of Nuclear Physics Polish Academy of Sciences, Department of Materials Science, Radzikowskiego 152, 31-342 Krakow, Poland ^2 University of Rzeszow, Center of Innovation & Knowledge Transfer, Pigonia 1, 35-310 Rzeszow, Poland email: [email protected]

Abstract: Magnetic systems exhibiting the exchange bias effect are being considered as materials for applications in data storage devices, sensors, and biomedicine. As the size of new magnetic devices is continuously being reduced, the influence of thermally induced instabilities in magnetic order has to be taken into account during their fabrication. In this study we show the influence of superparamagnetism on the magnetic properties of an exchange-biased [CoO/Co/Pd]_10 multilayer. We find that the process of progressive thermal blocking of the superparamagnetic clusters causes an unusually fast rise of the exchange anisotropy field and coercivity, and promotes easy-axis switching to the out-of-plane direction. DOI: 10.1021/acsami.6b08532 Keywords: exchange bias, multilayer, superferromagnetism, interface, magnetism, exchange anisotropy, superparamagnetism

§ INTRODUCTION
Exchange bias is a magnetic effect which usually appears at an interface between ferromagnetic (FM) and antiferromagnetic (AFM) materials. <cit.> The magnetic hysteresis loop of an exchange-biased system is centered around a non-zero magnetic field, and the loop shift along the external magnetic field axis is called the exchange anisotropy field H_ex. This effect is driven by the exchange anisotropy occurring when the FM/AFM system is field-cooled through the AFM Néel temperature T_N, which is lower than the Curie temperature T_C of the FM. Exchange bias vanishes above the blocking temperature T_b, which is typically lower than the bulk AFM Néel temperature due to finite-size effects at the interface. <cit.> Due to the asymmetry in the magnetic reversal process, magnetic systems exhibiting exchange bias are being considered as materials for applications in magnetic sensors, storage devices, and memories, <cit.> as well as in biomedicine as drug carriers. <cit.> The exchange anisotropy field H_ex is inversely proportional to the thickness of the FM layer, <cit.> indicating that the exchange bias effect has an interfacial origin. This property opens the way for manipulating the exchange anisotropy and the magnetization reorientation mechanism. A lot of experimental work <cit.> has been carried out on AFM/FM multilayers with FM layer thicknesses in the nm range; however, multilayers with lower FM thickness are still less well studied. Decreasing the size of a FM leads to superparamagnetism when the magnetic anisotropy energy becomes comparable to the thermal fluctuation energy. In such a case the orientation of the magnetic moments is thermally unstable, which makes the system inappropriate for industrial and biomedical applications. In this study we address the question of the relationship between progressive blocking of superparamagnetic particles and the exchange anisotropy, using as an example an exchange-biased multilayer with FM volume below the limit for continuous layer formation.
In this case superparamagnetic clusters are expected to occur, allowing the investigation of their magnetic coupling to the AFM material, which results in the exchange anisotropy energy. Our research was carried out on a [CoO/Co/Pd]_10 polycrystalline multilayer. In this structure the AFM/FM CoO/Co interface is a model system for investigations of the exchange bias phenomenon <cit.> and is responsible for introducing the exchange anisotropy. The perpendicular magnetic anisotropy, important for applications of exchange-biased systems in data storage and memories, is implemented through the Co-Pd interface in multilayers or alloys. <cit.> Our study shows that lowering the FM volume below the superparamagnetic limit results in a large exchange anisotropy field and in its unusual dependence on temperature. Moreover, the progressive blocking of the particles also affects the coercivity of the system, causing its fast rise with decreasing temperature, and causes the easy axis to switch into the out-of-plane direction.

§ EXPERIMENTAL SECTION
The investigated system was fabricated in ultra-high vacuum by thermal evaporation at room temperature under a pressure of 10^-7 Pa. Before the multilayer deposition the Si(100) single-crystal substrate was covered by a 5 nm thick Pd buffer layer. The CoO layer was made from a 0.5 nm thick Co layer by exposing it for 10 minutes to a pure oxygen atmosphere at a pressure of 3 × 10^2 Pa. Next, the oxide layer was covered by 0.3 nm of Co and 0.9 nm of Pd. After 10 repetitions of the CoO/Co/Pd trilayer, 2 nm of Pd was deposited as a capping layer. The X-ray reflectivity (XRR) studies were carried out using an X'Pert Pro PANalytical diffractometer equipped with a Cu X-ray tube operated at 40 kV and 30 mA. The transmission electron microscopy (TEM) studies were performed with an FEI Tecnai Osiris device. The primary electron beam was accelerated using a voltage of 200 kV. Magnetic measurements were performed using a Quantum Design MPMS XL SQUID magnetometer. The zero-field-cooling (ZFC) and field-cooling (FC) magnetization curves were obtained using a standard protocol. After demagnetization at 300 K the system was cooled down to 5 K without magnetic field. Then, an external magnetic field of 500 Oe was applied and the ZFC curve was recorded during heating up to 300 K. The FC curve was measured during cooling from 300 K down to 5 K in the same external magnetic field. The time-dependent magnetic relaxation curves were obtained at 10 K after switching off the magnetic field from +50 kOe. The hysteresis loops were measured at different temperatures after cooling the system in a magnetic field of +50 kOe. The data were corrected for the diamagnetic background from the sample holder.

§ RESULTS AND DISCUSSION
The constitution of the multilayer was investigated with transmission electron microscopy (TEM) and X-ray reflectivity (XRR), and the results are shown in <ref>. The TEM cross-section of the system is presented in <ref>a. The interfaces are highly jagged and the layers are not continuous; however, the alternating stacking of the interfaces along the growth direction is preserved.
The total multilayer thickness is approximately 28 nm. Due to the small thickness of the deposited Co it is impossible to distinguish this material from the Pd and CoO layers, and therefore the multilayer structure will be considered as [CoO/Co-Pd]_10, with Co atoms intermixed with the Pd layers. The size of the CoO regions ranges from 1 nm to 1.7 nm, while the thickness of the Co-Pd regions is between 0.7 nm and 1.3 nm. Representative multilayer regions, in which the CoO grains are in the vicinity of the Co-Pd clusters, are presented in <ref>b and 1c. The size of the CoO/Co-Pd regions with periodic stacking of the crystallographic planes is in the range of a few nm and differs from cluster to cluster. The XRR measurement is shown in <ref>d. The Kiessig fringes are observed up to 2.5^∘, and the angular distances between two neighboring maxima correspond to a total system thickness of approximately 35 nm. Taking into account the thicknesses of the buffer and capping Pd layers, the thickness of the [CoO/Co/Pd]_10 multilayer can be estimated as 28 nm, which agrees with the TEM data. The presence of two Bragg reflections indicates a periodic change of the electron density along the growth direction with a period thickness of 2.8 nm. It was found by Rafaja et al. <cit.> that even in the case of non-continuous layers a Bragg reflection should be observed in the XRR curve, which is consistent with our results. Due to the jaggedness of the interfaces the XRR data cannot be well fitted with a multilayer structure model, which precludes a precise determination of the densities of the constituent materials. The field-cooling (FC) and zero-field-cooling (ZFC) measurements, presented in <ref>a, were carried out for in-plane (IP) and out-of-plane (OOP) geometries. Going from low temperature, the ZFC and FC curves start to overlap at approximately 180 K, defining the irreversibility temperature T_irr. Moreover, at the same temperature both the out-of-plane and in-plane FC curves change their character from typical for a ferromagnet to paramagnetic-like. Since the Curie temperatures for pure Co and for the Co-Pd system are far above room temperature, the presence of the irreversibility temperature is evidence for superparamagnetic behavior and indicates that above this point the Co-Pd FM clusters are fully unblocked. Below the Curie temperature a superparamagnetic particle has its own superspin, which is the sum of the individual magnetic moments of the atoms within the particle. According to the relation V'E_a' ∝ k_B T_crit', every superparamagnetic cluster with volume V' and anisotropy energy density E_a' has its own blocking temperature T_crit' for the superspin. Therefore, the gradual decrease of temperature below T_irr causes blocking of smaller and smaller particles. In both the IP and OOP ZFC curves the maximum visible at approximately 140 K determines the average blocking temperature for the superspins of the blocked ferromagnetic clusters. This maximum is relatively broad, with a full width at half maximum of approximately 95 K, suggesting a wide FM cluster size distribution.
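The XRR thickness estimates above follow from the standard small-angle relations for Kiessig fringes and multilayer Bragg peaks; the sketch below illustrates them. The fringe spacing used is a hypothetical value chosen to reproduce the ~35 nm total thickness quoted in the text, since the measured spacing itself is not given.

```python
import numpy as np

lam = 0.15406  # Cu K-alpha wavelength in nm

def thickness_from_fringe_spacing(delta_theta_deg):
    """Small-angle approximation: Kiessig fringe maxima for a film of
    thickness t are separated by delta_theta ~ lam / (2 t)."""
    return lam / (2.0 * np.radians(delta_theta_deg))

# Hypothetical spacing giving the ~35 nm total thickness quoted in the text.
print(thickness_from_fringe_spacing(0.126))   # ~35 nm

# Bragg's law applied to the 2.8 nm multilayer period: first-order peak position.
theta_B = np.degrees(np.arcsin(lam / (2 * 2.8)))
print(2 * theta_B)                            # 2*theta ~ 3.2 degrees
```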
This distribution is caused by the interface jaggedness in the multilayer, which promotes the formation of Co-Pd FM clusters of various sizes and orientations. The FC curve for the in-plane geometry has a plateau below the average FM blocking temperature. Such an FC curve shape is observed for (super)spin glasses and superferromagnetic materials and indicates collective spin behavior with inter-particle interactions. <cit.> To determine the type of interaction, magnetic relaxation measurements were carried out, and the results are presented in <ref>b. The relaxation curves for both IP and OOP geometries (<ref>b) show a magnetization decay with time following a power law M(t) = M_∞ + M_1 t^-α, <cit.> where M_∞ and M_1 are fit constants. The fitted values of the exponent α are 1.1 for the OOP and 1.05 for the IP geometry. The character of the curves, as well as the values of the exponents, indicates that the interactions between the blocked particles have a superferromagnetic nature. The observed magnetization decay is due to domain wall motion through the assembly of blocked particles. Additionally, the magnetization decay is faster for the IP geometry, demonstrating an easier motion of the domain walls along this direction. Moreover, the relaxation curve for the OOP geometry has a minimum at t ≈ 1.5 · 10^5 s, characteristic of superferromagnetic behavior. <cit.> In this case the increase of the magnetization can be associated with superspin alignment within the magnetic domains. It was shown <cit.> that this part can be described by adding a saturating stretched-exponential contribution M_2(1 − exp[−(t/τ)^β]) (τ and β are the relaxation time and stretching exponent, respectively) to the power law. The fit of this equation to the data fails, which can be explained by interactions occurring not only between blocked FM particles but also between the FM and AFM materials. <ref>c shows the out-of-plane hysteresis loops at 300 K for the [Co/Pd]_10 and [CoO/Co/Pd]_10 multilayers. In both cases the thicknesses of Co and Pd are the same, equal to 0.3 nm and 0.9 nm, respectively. The [Co/Pd]_10 system reveals clear ferromagnetism with an out-of-plane easy axis due to the strong surface anisotropy of the Co/Pd interfaces. <cit.> The introduction of CoO, placed between the consecutive Co-Pd layers, leads to superparamagnetic behavior at room temperature with no remanence and coercivity. The information about the exchange anisotropy field and coercivity, together with their dependence on temperature, was obtained from the hysteresis loops. The measurements in both IP and OOP geometries were carried out after cooling from 300 K down to the desired temperature in an external magnetic field of +50 kOe. The loops were measured in fields ranging from +50 kOe to −70 kOe, and both of these fields were large enough to saturate the system. Representative measurements at 10 K and 50 K for the IP and OOP geometries are shown in <ref>a. A negative loop shift from the zero position is evidence that exchange anisotropy was induced in the system. At a temperature of 10 K the observed exchange anisotropy field H_ex for both OOP and IP geometries is 6 kOe. This loop shift is large in comparison to other studies on the CoO/Co system, where exchange anisotropy fields of a few hundred Oe were reported <cit.>. However, those studies were performed on systems with FM thicknesses of a few nm, while in our case the FM layer is replaced by the Co-Pd FM clusters.
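A power-law relaxation fit of the kind used above can be sketched as follows. The data here are synthetic, standing in for the measured magnetization, which is only available in the paper's figures; the functional form is the decay law M(t) = M_∞ + M_1 t^-α.

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(t, M_inf, M_1, alpha):
    """M(t) = M_inf + M_1 * t^(-alpha), the decay law fitted to the IP/OOP data."""
    return M_inf + M_1 * t ** (-alpha)

# Synthetic relaxation curve in arbitrary units (illustrative only).
rng = np.random.default_rng(2)
t = np.logspace(1, 5, 60)     # 10 s .. 1e5 s
m = power_law(t, 0.20, 1.5, 1.05) + rng.normal(0, 0.002, t.size)

popt, _ = curve_fit(power_law, t, m, p0=[0.1, 1.0, 1.0])
print(popt)   # should recover roughly (0.20, 1.5, 1.05)
```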
We observe that the loops have an asymmetric shape, which is especially pronounced at 10 K and suggests a stepwise magnetic reversal process. The switching field distributions, calculated as the first derivatives dM/dH of the lower and upper branches of the OOP magnetization curves measured at 10 K and 50 K, are shown in <ref>b. The two maxima observed in the dM/dH curves can be associated with two major magnetic phases reversing at different external magnetic fields. The positions of the soft-component maxima (green areas in <ref>b), appearing at lower external fields, are centered around zero. The maxima of the hard component (orange areas in <ref>b) are observed at larger external fields, with a clear bias along the field axis. Therefore, this component can be identified as arising from the reversal process of the blocked FM particles coupled to the AFM grains. The unbiased soft component can be attributed to the blocked FM particles which do not interact with the AFM material. The superposition of these two contributions results in the asymmetric loop shape seen in our results. Qualitatively similar behavior is observed for the hysteresis loops obtained in the IP geometry. The dependence of the exchange anisotropy field H_ex on temperature T can be described by the relation H_ex(T) ∝ (1 − T/T_b)^n, where T_b is the blocking temperature for exchange bias and n is a characteristic exponent <cit.>. Using this relation to fit both the OOP and IP H_ex(T) dependencies (<ref>), we determined the average blocking temperature for exchange bias to be T_b = 146 K, which is much lower than the Néel temperature of bulk CoO (291 K). Larger blocking temperatures (175 K – 293 K) were reported by others <cit.> for thicker CoO AFM layers. However, Van der Zaag et al. <cit.> found that reducing the CoO thickness below 10 nm lowers the exchange bias blocking temperature below the bulk Néel temperature. According to their work, for a CoO thickness of 1.8 nm the blocking temperature for exchange bias is approximately 150 K, which agrees with our experimental observations from the XRR and TEM measurements. The H_ex(T) dependencies for both the OOP and IP directions show that the H_ex values are nearly the same. This means that during field cooling below the blocking temperature T_b the same amount of AFM material is coupled to the blocked Co-Pd FM clusters. By fitting the temperature dependence of the exchange anisotropy field H_ex(T) we determined the exponent n. According to Malozemoff <cit.> this exponent is n=1 for cubic anisotropy of the AFM and n=1/2 for uniaxial anisotropy. Typically, in the case of exchange-biased systems based on the CoO antiferromagnet, either a linear dependence is observed or the exponent is slightly lower than unity. <cit.> An exponent of n=1.2 has also been reported and explained by thermal instabilities in the AFM order and oxide grain separation <cit.>. In our case, for both OOP and IP geometries the fitted exponent is n=6, which is a result that has never been shown before for a CoO/Co-based system. This result suggests that a mechanism different from the AFM anisotropy symmetry is responsible for the fast rise of the exchange anisotropy field with decreasing temperature. We can see from the FC curves (<ref>a) that at temperatures below 180 K the superparamagnetic Co-Pd clusters become blocked and the system starts to reveal ferromagnetic behavior. Below the blocking temperature for exchange bias, T_b = 146 K, the blocked FM regions couple to the antiferromagnetic grains, resulting in the exchange anisotropy. Sahoo et al.
<cit.> suggested that coupling of the FM layer to independent AFM grains results in a small increase of the exponent n. However, AFM domain separation alone cannot explain as large an increase of the exponent n as observed in our results. The broad maximum observed in the ZFC curves (<ref>a) indicates that FM clusters with a wide size distribution are present in the system. Therefore, the occurrence of exchange coupling between single FM and AFM grains at a certain temperature depends on whether the Co-Pd cluster is in a blocked state. If the volume and anisotropy energy of the cluster are large enough to stabilize the superspin direction, then the superspin can be exchange coupled to the AFM grain (<ref>c, right side), contributing to the overall exchange anisotropy energy. When the FM cluster is thermally unstable, it does not couple to the nearby AFM grain. At lower temperatures smaller Co-Pd FM clusters are blocked and coupled to the AFM domains, and a larger number of FM/AFM grain pairs contribute to the exchange anisotropy energy (<ref>c, center and left side). Simultaneously, for each exchange-coupled FM/AFM region, lowering the temperature makes the resulting exchange anisotropy field larger. Therefore, the overall dependence of the exchange anisotropy field on temperature, H_ex(T), arises from the contributions of the FM clusters, which become blocked at different temperatures. This mechanism is responsible for the rapid rise of the exchange anisotropy field with decreasing temperature, and its influence dominates over the contribution dependent on the anisotropy symmetry of the AFM material. Polycrystalline exchange-biased systems usually exhibit a training effect, which manifests itself as a monotonic decrease of the exchange anisotropy field through consecutive hysteresis loop cycling. The presence of this effect is related to spin structure rearrangement in the AFM material leading to the equilibrium configuration. <cit.> The training loops for the [CoO/Co/Pd]_10 system measured at 10 K and 50 K in the OOP geometry are presented in <ref>a, and the dependencies of the exchange anisotropy field on the loop number are shown in <ref>b. At both temperatures the bias field decreases only after the first loop cycle and then stabilizes at a constant level. At a temperature of 10 K the exchange bias field is reduced to 0.8 of the value measured for the first loop, while at 50 K this reduction is slightly smaller, 0.83. The dependence of the exchange field on the loop number, H_ex(n), does not follow the often observed power law H_ex(n) − H_ex(∞) ∝ 1/√(n), where H_ex(∞) is the bias field in the limit of infinitely many loops. Moreover, the analytical model for the training effect presented by Sahoo et al. <cit.> and applied by Wu et al.
<cit.> also does not fit the data obtained for the investigated system. Our results differ from those reported by others for CoO/Co systems, <cit.> where a larger decrease of H_ex was found, together with a slower reduction with the cycle number. A comparable training reduction of the exchange bias field in a CoO/Co system was shown by Binek et al. <cit.> However, in that case the decrease of H_ex was also more gradual than the sharp one-step drop observed in our results. We want to emphasize that these previous studies examined systems with continuous AFM and FM layers, and their results can be explained using the domain state model. <cit.> In our case the exchange coupling does not take place over a continuous interface between the FM and AFM materials and can instead be described as local interactions between blocked FM clusters and AFM grains (see <ref>c). Therefore, the AFM spin rearrangement induced by the FM reversal process is limited to a volume smaller than the whole AFM layer, which may restrict the possible AFM domain wall motion and AFM domain reorientation processes. Because of this, after the second field cycle the spin structures of the AFM grains freeze in locally favorable spin configurations, preventing further spin relaxation through wall motion. The temperature dependencies of the coercivity, H_c(T) (<ref>b), observed for both OOP and IP directions, show a similar exponential growth with decreasing temperature as the H_ex(T) data (<ref>a). One could expect that the coercivity of an assembly of small particles would follow Kneller's temperature dependence H_c(T) ∝ [1 − (T/T_crit)^(1/2)] <cit.>, where T_crit is the temperature above which the FM clusters are fully unblocked. In our case this relation is not fulfilled, which reveals that the magnetization reversal process present in our system cannot be described by the Stoner-Wohlfarth model. The reasons for this are twofold. First, the blocked FM clusters show superferromagnetic collective behavior and cannot be treated as non-interacting objects. Second, there is a continuous growth of the number of blocked FM clusters with decreasing temperature. As new clusters become blocked, they start to contribute to the overall coercivity of the system, similarly to the case of the exchange anisotropy field discussed earlier. The area under a hysteresis loop is a measure of the magnetic energy stored in the system after cooling under certain conditions. Half of the difference between the areas under the OOP and IP loops is called the effective anisotropy energy K_eff, and its sign depends on the preferred easy axis of the magnetization: it is positive for the out-of-plane configuration and negative for the in-plane case. The determined temperature dependence K_eff(T) is shown in <ref>. Below the blocking temperature for exchange bias, T_b = 146 K, the effective anisotropy energy is negative and decreases down to a minimum at approximately
80 K. In this temperature range the main contribution to the energy K_eff is the demagnetization energy, which is proportional to the second power of the saturation magnetization and favors the in-plane spin configuration. Since the saturation magnetization of the system slowly increases with decreasing temperature (inset in <ref>), the in-plane spin orientation becomes more favorable. The effective anisotropy energy K_eff starts to grow below 80 K. Since the Co-Pd clusters have a large out-of-plane anisotropy energy, <cit.> the progressive blocking of such objects introduces an increasing out-of-plane magnetization component as more clusters become blocked. The continuing increase of the number of blocked FM clusters at lower temperatures results in a further enhancement of the perpendicular anisotropy energy and causes an easy-axis switching at approximately 50 K, when the out-of-plane magnetization component becomes dominant over the demagnetization energy. The effect of easy-axis switching with temperature was reported for an exchange-biased permalloy/CoO multilayer by Zhou et al. <cit.> However, in the case described in that study the FM layers were continuous and the reorientation process was driven only by the surface anisotropy of the interfaces, without any influence of superparamagnetism. In our case the observed increase of the perpendicular surface anisotropy from the Co/Pd interface below 50 K is connected with the temperature distribution of the number of blocked ferromagnetic clusters.

§ CONCLUSIONS
In this paper we have shown the influence of superparamagnetism on the exchange anisotropy and on the orientation of the magnetization easy axis in the case of the [CoO/Co/Pd]_10 system. We have found that decreasing the Co thickness below the limit for continuous layer formation leads to the creation of ferromagnetic Co-Pd clusters placed between consecutive AFM CoO grains. The FM particles interact with each other and show superferromagnetic collective behavior. In this case the exchange coupling between the FM particles and the AFM material results in an exchange anisotropy field of up to 6 kOe. The Co-Pd FM clusters start to block their superspins below 180 K, and below the blocking temperature for exchange bias, which is 146 K, they couple to the CoO antiferromagnetic grains. The observed unusually rapid rise of the exchange anisotropy field with decreasing temperature is connected with the gradual process of thermal blocking of the FM cluster superspins. The increased number of blocked FM particles gives rise to the coercivity, resulting in its fast enhancement at lower temperatures. Since the Co-Pd clusters have a large out-of-plane surface anisotropy, the process of FM cluster thermal blocking also affects the orientation of the easy magnetization axis and causes the axis to switch to the out-of-plane direction at approximately 50 K.

§ REFERENCES
Kiw01 Kiwi, M. Exchange Bias Theory. J. Magn. Magn. Mater. 2001, 234, 584 – 595.
Nog99 Nogués, J.; Schuller, I. K. Exchange Bias. J. Magn. Magn. Mater. 1999, 192, 203 – 232.
Nog05 Nogués, J.; Sort, J.; Langlais, V.; Skumryev, V.; Suriñach, S.; Muñoz, J. S.; Baró, M. D. Exchange Bias in Nanostructures. Phys. Rep. 2005, 422, 65 – 117.
Zho04 Zhou, S. M.; Sun, L.; Searson, P. C.; Chien, C. L. Perpendicular Exchange Bias and Magnetic Anisotropy in CoO/Permalloy Multilayers. Phys. Rev. B 2004, 69, 024408.
Sta00 Stamps, R. L. Mechanisms for Exchange Bias. J. Phys. D: Appl. Phys. 2000, 33, R247 – R268.
Pol14 Polenciuc, I.; Vick, A. J.; Allwood, D. A.; Hayward, T.
J.; Vallejo-Fernandez, G.; O'Grady, K.; Hirohata, A. Domain Wall Pinning for Racetrack Memory Using Exchange Bias. Appl. Phys. Lett. 2014, 105, 162406. Kle07 Klem, M. T.; Resnick, D. A.; Gilmore, K.; Young, M.; Idzerda, Y. U.; Douglas, T. Synthetic Control over Magnetic Moment and Exchange Bias in All-Oxide Materials Encapsulated within a Spherical Protein Cage. J. Am. Chem. Soc. 2007, 129, 197 – 201.Iss13 Issa, B.; Obaidat, I. M..; Albiss, B. A.; Haik, Y. Magnetic Nanoparticles: Surface Effects and Properties Related to Biomedicine Applications. Int. J. Mol. Sci. 2013, 14, 21266 – 21305.Ber99 Berkowitz, A. E.; Takano, K. Exchange Anisotropy — A Review. J. Magn. Magn. Mater. 1999, 200, 552 – 570. Men14 Menéndez, E.; Modarresi, H.; Dias, T.; Geshev, J.; Pereira, L. M. C.; Temst, K.; Vantomme, A. Tuning the Ferromagnetic-Antiferromagnetic Interfaces of Granular Co-CoO Exchange Bias Systems by Annealing. J. Appl. Phys. 2014, 115, 133915. Dob12 Dobrynin, A. N.; Givord, D. Exchange Bias in a Co/CoO/Co Trilayer with Two Different Ferromagnetic-Antiferromagnetic Interfaces. Phys. Rev. B 2012, 85, 014413.Gru00 Gruyters, M.; Riegel, D.; Strong Exchange Bias by a Single Layer of Independent Antiferromagnetic Grains: The CoO/Co Model System. Phys. Rev. B 2000, 63, 052401.Car85 Carcia, P. F.; Meinhaldt, A. D.; Suna, A. Perpendicular Magnetic Anisotropy in Pd/Co Thin Film Layered Structures. Appl. Phys. Lett. 1985, 47, 178 – 180.Car03 Carrey, J.; Berkowitz, A. E.; Egelhoff, W. F.; Smith, D. J. Influence of Interface Alloying on the Magnetic Properties of Co/Pd Multilayers. Appl. Phys. Lett. 2003, 83, 5259 – 5261.Raf02 Rafaja, D.; Fuess, H.; Šimek, D.; Kub, J.; Zweck, J.; Vacínová, J.; Valvoda, V. X-Ray Reflectivity of Multilayers with Non-continuous Interfaces. J. Phys.: Condens. Matter 2002, 14, 5303 – 5314. Bed09 Bedanta, S.; Kleemann, W. Supermagnetism. J. Phys. D: Appl. Phys. 2009, 42, 013001.Che03 Chen, Xi; Kleemann, W.; Petracic, O.; Sichelschmidt, O.; Cardoso, S.; Freitas, P. Relaxation and Aging of a Superferromagnetic Domain States. Phys. Rev. B 2003, 68, 054433.Dia14 Dias, T.; Menéndez, E.; Liu, H.; Van Haesendonck, C.; Vantomme, A.; Temst, K.; Schmidt, J. E.; Giulian, R.; Geshev, J. Rotatable Anisotropy Driven Training Effects in Exchange Biased Co/CoO Films J. Appl. Phys. 2014, 115, 243903. Gie02 Gierlings, M.; Prandolini, M. J.; Fritzsche, H.; Gruyters, M.; Riegel, D. Change and Asymmetry of Magnetization Reversal for a Co/CoO Exchange-Bias System Phys. Rev. B 2002, 65, 092407.Gir03 Girgis, E.; Portugal, R. D.; Loosvelt, H.; Van Bael, M. J.; Gordon, I.; Malfait, M.; Temst, K.; Van Haesendonck, C.; Leunissen, L. H. A.; Jonckheere, R. Enhanced Asymmetric Magnetization Reversal in Nanoscale Co/CoO Arrays: Competition Between Exchange Bias and Magnetostatic Coupling Phys. Rev. Lett. 2003, 91, 187202. Mal88 Malozemoff, A. P. Mechanisms of Exchange Anisotropy. J. Appl. Phys. 1988, 63, 3874 – 3879. Lam13 Lamirand, A. D.; Ramos, A. Y.; De Santis, M.; Cezar, J. C.; De Siervo, A.; Jamet, M. Robust Perpendicular Exchange Coupling in an Ultrathin CoO/PtFe Double Layer: Strain and Spin Orientation. Phys. Rev. B 2013, 88, 140401(R).Kap03 Kappenberger, P.; Martin, S.; Pellmont, Y.; Hug, H. J.; Kortright, J. B.; Hellwig, O.; Fullerton, E. E. Direct Imaging and Determination of the Uncompensated Spin Density in Exchange-Biased CoO/(CoPt) Multilayers. Phys. Rev. Lett. 2003, 91, 267202. Zaa00 Van Der Zaag, P. J.; Ijiri, Y.; Borchers, J. A.;. Feiner, L. F.; Wolf, R. M.; Gaines, J. M.; Erwin, R. 
W.; Verheijen, M. A. Difference Between Blocking and Néel Temperatures in the Exchange Biased Fe_3O_4/CoO System. Phys. Rev. Lett. 2000, 84, 6102 – 6105. Men13 Menéndez, E.; Demeter, J.; Van Eyken, J.; Nawrocki, P.; Jedryka, E.; Wójcik, M.; Lopez-Barbera, J. F.; Nogués, J.; Vantomme, A.; Temst, K. Improving the Magnetic Properties of Co-CoO Systems by Designed Oxygen Implantation Profiles. ACS Appl. Mater. Interfaces 2013, 5, 4320 – 4327. Sah12 Sahoo, S.; Polisetty, S.; Wang, Y.; Mukherjee, T.; He, X.; Jaswal, S. S.; Binek, C. Asymmetric Magnetoresistance in an Exchange Bias Co/CoO Bilayer. J. Phys.: Condens. Matter 2012, 24, 096002. Bin04 Binek, C. Training of the Exchange-Bias Effect: A Simple Analytic Approach. Phys. Rev. B 2004, 70, 014421.Sah07 Sahoo, S.; Polisetty, S.; Binek, C.; Berger, A. Dynamic Enhancement of the Exchange Bias Training Effect. J. Appl. Phys. 2007, 101, 053902.Wu15 Wu, R.; Fu, J. B.; Zhou, D.; Ding, S. L.; Wei, J. Z.; Zhang, Y.; Du, H. L.; Wang, C. S.; Yang, Y. C.; Yang, J. B. Temperature Dependence of Exchange Bias and Training Effect in Co/CoO Film with Induced Uniaxial Anisotropy. J. Phys. D: Appl. Phys. 2015, 48, 275002.Ali12 Ali, S. R.; Ghadimi, M. R.; Fecioru-Morariu, M.; Beschoten, B.; Güntherodt, G. Training Effect of the Exchange Bias in Co/CoO Bilayers Originates from the Irreversible Thermoremanent Magnetization of the Magnetically Diluted Antiferromagnet. Phys. Rev. B 2012, 85, 012404. Bin05 Binek, C.; He, X.; Polisetty, S. Temperature Dependence of the Training Effect in a Co/CoO Exchange-Bias Layer. Phys. Rev. B 2005, 72, 054408.Now02 Nowak, U.; Usadel, K. D.; Keller, J.; Miltényi, P.; Beschoten, B.; Güntherodt, G. Domain State Model for Exchange Bias. I. Theory. Phys. Rev. B 2002, 66, 014430. Ali14 Ali, K.; Sarfraz, A. K.; Ali, A.; Mumtaz, A.; Hasanain, S. K. Temperature Dependence Magnetic Properties and Exchange Bias Effect in CuFe_2O_4 Nanoparticles Embedded in NiO Matrix. J. Magn. Magn. Mater. 2014, 369, 81 – 85.
http://arxiv.org/abs/1709.09399v1
{ "authors": [ "Marcin Perzanowski", "Marta Marszalek", "Arkadiusz Zarzycki", "Michal Krupinski", "Andrzej Dziedzic", "Yevhen Zabila" ], "categories": [ "cond-mat.mtrl-sci", "cond-mat.mes-hall" ], "primary_category": "cond-mat.mtrl-sci", "published": "20170927090957", "title": "The influence of superparamagnetism on exchange anisotropy at CoO/[Co/Pd] interfaces" }
Invoking Chiral Vector Leptoquark to explain LFU violation in B Decays Bhavesh Chauhan ^a,b†, Bharti Kindra ^a,b⋆ ^a Physical Research Laboratory, Ahmedabad, India. ^b Indian Institute of Technology, Gandhinagar, India. ^†[email protected] ^⋆[email protected] Abstract: LHCb has recently reported more than 2σ deviation from the Standard Model prediction in the observable R_J/ψ. We study this anomaly in the framework of a vector leptoquarkalong with other lepton flavor universality violating measurements which include R_K^(*), and R_D^(*). We show that a chiral vector leptoquark can explain all the aforementioned anomalies consistently while also respecting other experimental constraints. § INTRODUCTION Measurements of the rare decays of B mesons have shown a number of interesting deviations from the Standard Model (SM) predictions, the most recent being, R_J/ψ=ℬℛ(B_c^+→ J/ψτ^+ν_τ)/ℬℛ(B_c^+→ J/ψμ^+ν_μ) LHCb recently reported <cit.> the measured value of R_J/ψ to be 0.71± 0.17±0.18, which is 2σ away from the SM expectation <cit.>. At quark level, these processes involve b → c ℓν transition. Other anomalies based on the charged current transitions are R_D and R_D^* and are defined as R_D^(*)=ℬℛ(B̅→ D^(*)τ^-ν̅_τ)/ℬℛ(B̅→ D^(*)ℓ^-ν̅_ℓ) where the denominator is the average value for ℓ = e and μ.These observables have been studied by BABAR <cit.>, Belle<cit.>, and LHCb<cit.>, and the world average shows a deviation of 2.2σ and 3.4σ in R_D and R_D^* respectively. Other observables which show deviations involve neutral current transitions b→ sℓ^+ℓ^- and are defined as, R_K^(*)=ℬℛ(B̅→K̅^(*)μ^+μ^-)/ℬℛ(B̅→K̅^(*)e^+e^-). Recent measurements of R_K^* by LHCb show 2.1-2.3σ and 2.3-2.5σ deviations in the low-q^2(0.045-1.1GeV^2) and central-q^2(1.1-6GeV^2) regions respectively <cit.>. A deviation of 2.6σ from SM has also been reported in R_K. All of these deviations hint towards lepton flavor universality violation and are independent of hadronic uncertainties in the leading order <cit.>. This has been summarised in Table <ref>. These anomalies have been explained in variety of frameworks including LQs <cit.>. In <cit.>,LQ models which can explain R_K and R_K^* anomalies at tree level exchange are discussed, while in <cit.> LQ models have been tested to explain the R_D and R_D^* anomalies. A comparison of the two works suggest that the LQ solutions that simultaneously accommodate R_K^(*) and R_D^(*) are scalar LQ S_1 ∼( 3̅, 3, 1/3) and vector LQ U_1 ∼( 3, 1, 2/3). In this work, we also take into account the recently measured deviation in the ratio R_J/ψ along with other constraints from B decays and explain them using U_1 LQ model. § LEPTOQUARK MODEL The interactions of U_1 with the SM fields are given by <cit.>, ℒ∋ (g_L)_ijQ̅_L^i,aγ^μ U_1,μL_L^j,a+ (g_R)_ijd̅_R^iγ^μ U_1,μe_R^j + (g_R̅)_iju̅_R^iγ^μ U_1,μν_R^j . Albeit U_1 is a non-chiral LQ, we will work in a limit where the right-handed couplings are negligibly small. With this approximation, the above Lagrangian is expanded in the mass basis as, ℒ∋ (g_L)_ijd̅_L^iγ^μ U_1,μe_L^j+ (V· g_L · U)_iju̅_L^iγ^μ U_1,μν_L^j where V and U are the Cabibbo-Kobayashi-Maskawa (CKM) and Pontecorvo-Maki-Nakagawa-Sakata (PMNS) matrices respectively. We use the following normalization g_L = g_L^0 ( M_LQ/1 TeV) for the sake of brevity. 
The texture of the coupling matrix is assumed to be g_L^0 = [ 0 0 0; 0 λ_1 0; 0 λ_2 λ_3 ] so that it can accommodate b→ sμ^+μ^- and b→ cτν_τ transitions.§ RESULTS AND DISCUSSION We have used to following constraints as well: (a) ℬℛ( B_s →μ^+ μ^-) = 2.8^+0.7_-0.6× 10^-9 <cit.>, and (b) ℬℛ(B →τν) = (1.14 ± 0.27) × 10^-4 <cit.>. We also note that for interesting region of the parameter space, the LQ contribution to (g-2)_μ, ℬℛ(D_q^+→τν ), and ℬℛ(τ→μγ) is negligibly small. We have used the form factors presented in <cit.> to estimate R_J/ψ. Since R_J/ψ and R_D^* are mediated by same transition of heavy to heavy quark, heavy quark spin symmetry implies that the ratios should be same at leading order <cit.>. However, the experimental values of these ratio do not seem to agree with each other in 1σ range. Because of larger uncertainties in the measured value of R_J/ψ, we take an error of 2σ in the ratio, while R_D^(*) and R_K^(*) are explained at 1σ. In Fig. <ref> we show the allowed region of parameter space that explains all the mentioned flavor anomalies. § ACKNOWLEDGEMENTThe authors would like to thank Dr. Namit Mahajan for several useful discussions.99 lhcjpsi LHCb-PAPER-2017-035. lhcpres Presentation by M. Fontana, on behalf of LHCb Collaboration DuttaR. Dutta and A. Bhol,arXiv:1701.08598 [hep-ph]. Babarrdstar1J. P. Lees et al. [BaBar Collaboration],Phys. Rev. Lett.109, 101802 (2012) doi:10.1103/PhysRevLett.109.101802 [arXiv:1205.5442 [hep-ex]].Bellerd1M. Huschle et al. [Belle Collaboration],Phys. Rev. D 92, no. 7, 072014 (2015) doi:10.1103/PhysRevD.92.072014 [arXiv:1507.03233 [hep-ex]].Bellerdstar1Y. Sato et al. [Belle Collaboration],Phys. Rev. D 94, no. 7, 072007 (2016) doi:10.1103/PhysRevD.94.072007 [arXiv:1607.07923 [hep-ex]].Bellerdstar2A. Abdesselam et al.,arXiv:1608.06391 [hep-ex].LHCbrdstar1R. Aaij et al. [LHCb Collaboration],Phys. Rev. Lett.115, no. 11, 111803 (2015) Erratum: [Phys. Rev. Lett.115, no. 15, 159901 (2015)] doi:10.1103/PhysRevLett.115.159901, 10.1103/PhysRevLett.115.111803 [arXiv:1506.08614 [hep-ex]]. LHCbrkstarR. Aaij et al. [LHCb Collaboration],JHEP 1708, 055 (2017) doi:10.1007/JHEP08(2017)055 [arXiv:1705.05802 [hep-ex]]. HillerrkstarG. Hiller and F. Kruger,Phys. Rev. D 69, 074020 (2004) doi:10.1103/PhysRevD.69.074020 [hep-ph/0310219].MatiasrkstarB. Capdevila, S. Descotes-Genon, J. Matias and J. Virto,JHEP 1610, 075 (2016) doi:10.1007/JHEP10(2016)075 [arXiv:1605.03156 [hep-ph]]. HillerOneG. Hiller and I. Nisandzic,Phys. Rev. D 96, no. 3, 035003 (2017) doi:10.1103/PhysRevD.96.035003 [arXiv:1704.05444 [hep-ph]]. SakakiOneY. Sakaki, M. Tanaka, A. Tayduganov and R. Watanabe,Phys. Rev. D 88, no. 9, 094012 (2013) doi:10.1103/PhysRevD.88.094012 [arXiv:1309.0301 [hep-ph]]. var1D. Bečirević, S. Fajfer, N. Košnik and O. Sumensari,Phys. Rev. D 94, no. 11, 115021 (2016) doi:10.1103/PhysRevD.94.115021 [arXiv:1608.08501 [hep-ph]]. var2O. Sumensari,arXiv:1705.07591 [hep-ph]. var3C. H. Chen, T. Nomura and H. Okada,arXiv:1703.03251 [hep-ph]. var4R. Watanabe,arXiv:1709.08644 [hep-ph]. var5Y. Sakaki, M. Tanaka, A. Tayduganov and R. Watanabe,Phys. Rev. D 91, no. 11, 114028 (2015) doi:10.1103/PhysRevD.91.114028 [arXiv:1412.3761 [hep-ph]].varsixL. Calibbi, A. Crivellin and T. Ota,Phys. Rev. Lett.115, 181801 (2015) doi:10.1103/PhysRevLett.115.181801 [arXiv:1506.02661 [hep-ph]]. var7S. Fajfer and N. Košnik,Phys. Lett. B 755, 270 (2016) doi:10.1016/j.physletb.2016.02.018 [arXiv:1511.06024 [hep-ph]]. var8M. Bauer and M. Neubert,Phys. Rev. Lett.116, no. 
14, 141802 (2016) doi:10.1103/PhysRevLett.116.141802 [arXiv:1511.01900 [hep-ph]]. var9C. Hati, G. Kumar and N. Mahajan,JHEP 1601, 117 (2016) doi:10.1007/JHEP01(2016)117 [arXiv:1511.03290 [hep-ph]]. var10S. Sahoo and R. Mohanta,Phys. Rev. D 93, no. 11, 114001 (2016) doi:10.1103/PhysRevD.93.114001 [arXiv:1512.04657 [hep-ph]]. var11J. Zhu, H. M. Gan, R. M. Wang, Y. Y. Fan, Q. Chang and Y. G. Xu,Phys. Rev. D 93, no. 9, 094023 (2016) doi:10.1103/PhysRevD.93.094023 [arXiv:1602.06491 [hep-ph]]. var12B. Dumont, K. Nishiwaki and R. Watanabe,Phys. Rev. D 94, no. 3, 034001 (2016) doi:10.1103/PhysRevD.94.034001 [arXiv:1603.05248 [hep-ph]]. var13D. Das, C. Hati, G. Kumar and N. Mahajan,Phys. Rev. D 94, 055034 (2016) doi:10.1103/PhysRevD.94.055034 [arXiv:1605.06313 [hep-ph]]. var14X. Q. Li, Y. D. Yang and X. Zhang,JHEP 1608, 054 (2016) doi:10.1007/JHEP08(2016)054 [arXiv:1605.09308 [hep-ph]]. var15B. Bhattacharya, A. Datta, J. P. Guvin, D. London and R. Watanabe,JHEP 1701, 015 (2017) doi:10.1007/JHEP01(2017)015 [arXiv:1609.09078 [hep-ph]].rukmohS. Sahoo, R. Mohanta and A. K. Giri,Phys. Rev. D 95, no. 3, 035027 (2017) doi:10.1103/PhysRevD.95.035027 [arXiv:1609.04367 [hep-ph]]. var17B. Bhattacharya, A. Datta, D. London and S. Shivashankara,Phys. Lett. B 742, 370 (2015) doi:10.1016/j.physletb.2015.02.011 [arXiv:1412.7164 [hep-ph]]. var18A. Crivellin, C. Greub and A. Kokulu,Phys. Rev. D 86, 054014 (2012) doi:10.1103/PhysRevD.86.054014 [arXiv:1206.2634 [hep-ph]].lqreviewI. Dor?ner, S. Fajfer, A. Greljo, J. F. Kamenik and N. Ko?nik,Phys. Rept.641, 1 (2016) doi:10.1016/j.physrep.2016.06.001 [arXiv:1603.04993 [hep-ph]]. bsmumuR. Aaij et al. [LHCb Collaboration],Phys. Rev. Lett.118, no. 19, 191801 (2017) doi:10.1103/PhysRevLett.118.191801 [arXiv:1703.05747 [hep-ex]].CorwinL. A. Corwin [BaBar Collaboration],Nucl. Phys. Proc. Suppl.169, 70 (2007) doi:10.1016/j.nuclphysbps.2007.02.105 [hep-ex/0611019]. FF2W. F. Wang, Y. Y. Fan and Z. J. Xiao,Chin. Phys. C 37, 093102 (2013) doi:10.1088/1674-1137/37/9/093102 [arXiv:1212.5903 [hep-ph]].LuLuG. G. Lu, Y. D. Yang and H. B. Li,Phys. Lett. B 341, 391 (1995). doi:10.1016/0370-2693(94)01333-8, 10.1016/0370-2693(95)80020-X
http://arxiv.org/abs/1709.09989v1
{ "authors": [ "Bhavesh Chauhan", "Bharti Kindra" ], "categories": [ "hep-ph" ], "primary_category": "hep-ph", "published": "20170927173523", "title": "Invoking Chiral Vector Leptoquark to explain LFU violation in B Decays" }
Light Field Super Resolution ThroughControlled Micro-Shifts of Light Field Sensor M. Umair Mukati and Bahadir K. Gunturk M. Umair Mukati and Bahadir K. Gunturk are with Istanbul Medipol University, Istanbul, 34810 Turkey. E-mail: [email protected], [email protected]. This work is supported by TUBITAK Grant 114E095. Received …; accepted … ===================================================================================================================================================================================================================================================================This article presents a new proof of the rate of convergence to the normal distribution of sums of independent, identically distributed random variables in chi-square distance, which was also recently studied in <cit.>. Our method consists of taking advantage of the underlying time non-homogeneous Markovian structure and studying the spectral properties of the non-reversible transition operator, which allows to find the optimal rate in the convergence above under matching moments assumptions. Our main assumption is that the random variables involved in the sum are independent and have polynomial density; interestingly, our approach allows to relax the identical distribution hypothesis. Keywords: Berry-Esseen bounds, Central Limit Theorem, non-homogeneous Markov chain, Hermite-Fourier decomposition, χ_2-distance. Mathematics Subject Classification (MSC2010): 60F05, 60J05, 60J35, 47A75, 39B62.§ INTRODUCTION AND MAIN RESULT The present article is devoted to the convergence in Central Limit Theorem with respect to χ_2-distance, defined for two probability distributions θ, μ as:_2(θ,μ) :=(∫(dθ/dμ-1)^2dμ)^1/2 if θ is absolutely continuous with respect to μ and +∞ otherwise. In the following μ stands for the normal distribution. For a density function f∈ L^2(μ) we use the shortened notation_2(f):=_2(f·μ,μ)to refer to the χ_2-distance between the distribution with density f with respect to μ, denoted by f·μ, and μ itself. The χ_2-distance bounds by above usual quantities like total variation distance and relative entropy:d_TV(f,μ) :=∫|f-1|dμ ≤ _2(f),Ent(f||μ):=∫flog f dμ ≤ _2^2(f),the first relation being a consequence of Cauchy-Schwarz inequality and the second of the inequality log x ≤ x-1 for x>0.Let (X_i)_i≥ 1 be real i.i.d. random variables with density ϕ with respect to the normal distribution μ, and consider the renormalized sum Y_n :=1/√(n)∑_i=1^nX_i, n≥ 1.We call f_n the density of Y_n with respect to μ.The main result of this article is the following.If the moments of X_1 and the moments of μ match up to order r for a given integer r≥ 2, and if the density ϕ of X_1 with respect to μ is polynomial and satisfies to Hypothesis (H) stated in Section <ref>, thenn→ + ∞limsupn^r-1/2 _2(f_n)< +∞. The rate of convergence, which improves by a factor √(n) for each supplementary moment of X_1 that agrees with the corresponding moment of μ, is optimal: indeed by inequality (<ref>), it implies a rate of at least n^(r-1)/2 in total variation distance, which is proved to be optimal in <cit.>.While this article was being written, the authors took notice of the recent article <cit.>, which provides the same rate of convergence, as well as the constant in front of it, in the i.i.d. 
case, and under optimal assumptions.Our method of proof starts from the simple observation that the sequence of renormalized sums (Y_n)_n ≥ 1 is a non-homogeneous Markov chain, as it satisfies to the recursion equationY_n+1 =√(1-1/n+1) Y_n+ 1/√(n+1)X_n+1, n≥ 1. Although the result which we obtain in Theorem <ref> is weaker than that of <cit.> in the i.i.d. case, this method presents two advantages:* The result can be extended to independent, non necessarily identically distributed random variables (X_i)_i≥ 1, as this does not affect the Markovian character of (Y_n)_n ≥ 1, provided that the(X_i)_i≥ 1 are distributed according to a finite set of distributions, each polynomial and satisfying to (H) (cf Remark <ref>). The idea to use a Markovian framework to deal with non identically distributed random variables has also recently been used in <cit.>.* It is original and rather straightforward. The action of the Markov transition semigroup amounts to a barycentric convolution, since(√(1-1/n+1))^2 + ( 1/√(n+1))^2=1. From the point of view of functional analysis, our approach amounts to look at the bilinear barycentric convolution operator (f,ϕ) ↦ af*b ϕ,where f, ϕ are two density functions in L^2(μ), a, b∈ such that a^2+b^2=1, and af*b ϕ denotes the density of the random variable aU+bV whith U∼ f ·μ,V ∼ϕ·μ, as the linear operator (denoted 𝒬_a,ϕ^* in the following):L^2(μ)→ L^2(μ) f↦ af*b ϕ.The function ϕ is fixed and represents the density of the innovations (X_i)_i≥ 1. Thanks to spectral analysis in the Hermite-Fourier domain, we derive an estimate of the operator norm, which is directly related to the rate of convergence in Theorem <ref> and to Theorem <ref> below.More on the proof.– At the heart of the study is a formula describing the evolution of the χ_2-distance under the action of the barycentric convolution. Set (_̋n)_n∈^+ the set of renormalized Hermite polynomials, forming an orthonormal basis of L^2(μ):_̋n(x) :=(-1)^n/√(n!) e^x^2/2 D^n(e^-x^2/2), x∈, n∈^+,where D denotes the derivation operator acting on smooth functions fromto . We show the following result:Let r be a natural integer and f,ϕ be two densities in L^2(μ) whose moments match the moments of μ up to order r, and assume moreover that the density ϕ is polynomial. In particular, ϕ admits a decomposition on the Hermite basis of the form:ϕ=1+∑_k=r+1^N ϕ_k _̋k.Set:a_ϕ:=(1+N/r+1)^-1/4∈ (0,1],and for all a∈ (0,1),d_ϕ(a):=∑_k=r+1^N |ϕ_k|/√(k!)(-2(r+1+N)(1+N/r+1)log a)^k/2. If ϕ satisfies to Hypothesis (H) stated in Section <ref>, then for all a∈(a_ϕ,1), the following inequality stands:_2 (af*√(1-a^2)ϕ)≤ a^r+1(1+d_ϕ(a)) _2(f) + (1-a^2)^r+1/2 _2(ϕ). E_rMoreover,lim_a→ 1|log a|^-r+1/2d_ϕ(a)< +∞.Equation (<ref>), which involves Hermite coefficients (ϕ_k)_r+1 ≤ k ≤ N, is explained in Section <ref>. Hypothesis (H) gives conditions on these Hermite coefficients.Theorem <ref> does not preserve the symmetry of the equation af*(1-a^2)^1/2ϕ=(1-a^2)^1/2ϕ*af, in the first place because the assumption on f is weaker than the assumption on ϕ, and in the second place because the term d_ϕ(a) in the upper-bound is non-vanishing in general. However, in the regimen of interest a → 1 (which corresponds to (n/n+1)^1/2→ 1), one hasa^r+1=1-r+1/2(1-a)+o(a^2), 1+d_ϕ(a)=1+𝒪((1-a)^r+1/2),hence in the Taylor expansion of the prefactor a^r+1(1+d_ϕ(a)) the contribution from d_ϕ(a) is negligible with respect to the contribution from a^r+1 as soon as r ≥ 2, i.e. if the centering and normalizing condition [X_1]=0, [X_1^2]=1 is satisfied. 
Let us also mention that there exists an alternative inequality, holding without the polynomial assumption on ϕ, stated in Section <ref>. Bound (<ref>) bears a similarity to Shannon-Stam inequality for (absolute) entropy (<cit.>):Ent (a U+√(1-a^2)V)≤ aEnt(U)+√(1-a^2) Ent(V), a∈ [0,1],although, by the observation above, the coefficients in front of Ent are of different order with respect to the coefficients in front of χ for all natural integer r. For r=1, bound (<ref>) gives back Poincaré inequality for the Ornstein-Uhlenbeck semigroup (P_t)_t≥ 0, defined for f∈ L^2(μ) asP_t[f](x)=[f(e^-tx+√(1-e^-2t)Z)], Z ∼μ; x∈, t≥ 0.Indeed, in the case where ϕ is the density of the normal distribution μ, that is to say ϕ=1, we adopt the convention r=+∞ and N=0 in equality (<ref>), and set a_ϕ=0 and d_ϕ(a)=0 for all a ∈ [0,1]. As we will see in Section <ref>, (E_1) then corresponds (up to a positivity assumption which can actually be discarded in the proof) to Poincaré inequality for (P_t)_t≥ 0: for f∈ L^2(μ),_μ(P_t f)≤ e^-2t _μ(f), t≥ 0.Results of the literature.– The first quantification result for the convergence of renormalized sums of i.i.d. variables was obtained independently by Berry and Esseen through Kolmogorov distance (<cit.>). Rate of convergence in total variation distance is first addressed in <cit.> and the optimal rate under matching moment assumptions and regularity condition is proved in <cit.> using Malliavin's calculus. The rate of convergence in Theorem <ref> relies crucially on the fact that the inequality (<ref>) incorporates the matching moments assumption through exponent r on the barycentric coefficients. By comparison, Shannon-Stam inequality (<ref>) implies (jointly with a result of monotonicity of relative entropy and Fisher information under convolution) the convergence Ent(f_n)→ 0 without rate (<cit.>). The optimal rate is derived in <cit.> when the random variable X_1 satisfies to a Poincaré inequality. <cit.> provides an asymptotic expansion of entropy involving the moments of X_1. Similar developments occured for Fisher information (<cit.>, <cit.>), and for Rényi distances (<cit.>), which include the χ_2-distance. Other distances include Sobolev (<cit.>) and Wasserstein (<cit.>) distances. A typical assumption in Berry-Esseen theorems is the existence of moments up to a certain order for the random variable X_1. In the present framework, the fact that ϕ is in L^2(μ) implies that moments of all order exist. This is consistent with the fact that the χ_2-distance bounds by above the usual quantities (as shown in (<ref>); a similar inequality holds for Wasserstein distance of order 1). In another direction of research, Stein's method and Malliavin calculus revealed to be powerful tools to study, including in a quantitative way, the asymptotic normality of multidimensional random variables living in Gaussian chaoses. Let us cite, among an increasingly rich literature, the reference book <cit.>. It is interesting to remark the formal similarity between their objects and ours, although the results do not compare. Indeed, in the Stein-Malliavin framework, the typical random variable X_1 writes X_1=ϕ(Z), with Z a Gaussian random variable living in ^n with law μ_n and ϕ∈ L^2(μ_n). To compare to our framework, let us take n=1 and ϕ a density in L^2(μ). The random variable X_1=ϕ(Z), where Z∼μ, bears no relation with the random variable X_1 which has density ϕ with respect to μ; hence, the two approaches are not reducible one to another. 
The authors would like to thank Anthony Réveillac for interesting discussions on this subject.Structure of the article.– The remaining of this paper is organized as follows. In Section <ref>, Hermite-Fourier decomposition (<ref>) is detailed, and Hypothesis (H) is stated and commented. Section <ref> is devoted to the explicit expression of the convolution operator and of its Hermite-Fourier decomposition. Theorem <ref> is proved in Section <ref> by spectral analysis in the Hermite-Fourier domain. Finally Section <ref> is devoted to the proof of Theorem <ref>. § HERMITE-FOURIER DECOMPOSITION OF THE DENSITYFirst, let us introduce some notation. The symbolstands for the function fromtoidentically equal to 1, ^+ for the set of natural integers and nk for the binomial coefficient associated to natural integers k≤ n.For a fonction f∈ L^1(μ), we denote indifferentlyμ(f)=∫fdμ.The space L^2(μ) is a Hilbert space, with scalar product and associated norm defined as⟨ f,g ⟩_L^2(μ) :=∫fgdμ,f_L^2(μ):=√(μ(f^2)) , f,g ∈ L^2(μ).Set_μ(f):=∫(f-μ(f))^2dμ.Hermite polynomials (H_n)_n∈^+ are defined as: H_n(x) :=(-1)^n e^x^2/2 D^n(e^-x^2/2), x∈, n∈^+,where we recall that D stands for the derivation operator acting on smooth functions fromto . Hermite polynomials are also characterized by the following equation: for all smooth functions f:→,∫f H_n dμ=∫D^n f dμ.They form an orthogonal basis of L^2(μ): for all n,m ∈^+,∫H_m H_n dμ= n!δ_n,m.In the paper it is more convenient to work with renormalized Hermite polynomials (_̋n)_n∈^+:_̋n=1/√(n!)H_n, n ∈^+,which form an orthonormal basis of L^2(μ). By convention set _̋-1=0. One has:D _̋n = √(n)_̋n-1, n ∈^+.As H_0=1, for all natural integer _̋n is of degree n. The basis (_̋n)_n ∈^+ is diagonal for the Ornstein-Uhlenbeck semigroup defined in (<ref>):P_t[_̋n] =e^-nt_̋n, n ∈^+, t≥ 0. For all functions g ∈ L^2(γ), call (g_k)_k∈^+ its coefficients on the orthonormal Hermite basis,g =∑_k∈^+g_k _̋k,where the equality stands in L^2(μ), and denote indifferently (g):=g=(g_k)_k ∈^+ the sequence of its coefficients, which belongs to the Hilbert space l^2, defined as the set of real sequences (u_k)_k ∈^+ such that ∑_k∈^+u_k^2<+∞. The applicationL^2(μ) → l^2, g ↦(g),is an isometry of Hilbert spaces. If ϕ∈ L^2(μ) is a density, then ϕ_0=1. The matching moments assumption has a nice interpretation in terms of the coefficients: for all positive integer r,( ∀ k ∈{1,… r},∫x^kϕ(x)dμ(x)=∫x^kdμ(x))⇔( ∀ k ∈{1,… r},ϕ_k=0).Indeed, Hermite polynomial _̋n being of degree n for all natural integers, one has the equivalence( ∀ k ∈{1,… r},∫x^kϕ(x)dμ(x)=∫x^kdμ(x)) ⇔( ∀ k ∈{1,… r},∫_̋k(x)ϕ(x)dμ(x)=∫_̋k(x) dμ(x)),and by orthogonality of (_̋n)_n∈^+ it stands that for all k ∈^+,∫_̋k dμ =∫_̋k _̋0dμ=δ_0,k.The assumption that ϕ is a polynomial density whose moments agree with moments of μ up to r hence amounts to:∃ N ∈^+,N>r,ϕ=1+∑_k=r+1^Nϕ_k _̋k,which corresponds to equality (<ref>) above. From now on we set K=r+1. When ϕ is the density of μ itself, that is to say ϕ=_̋0=1, we set K=+∞ and N=0 by convention. Let us introduce the quantitiesC_k :=(1+N/K)^k/2,γ_k=1/√(k!)C_k |ϕ_k|, k ∈^+.We are now ready to state Hypothesis (H), which is composed of two parts, (H1) and (H2) as follows. (H1) If K≤ N-2, for all K≤ k≤ N-2, (k+2)γ_k+2≤γ_k.For non-vanishing ϕ_k this relation is equivalent to |ϕ_k+2/ϕ_k|≤1/1+N/K(1-1/k+2)^1/2if ϕ_k≠ 0 and is implied by the simplest assumption(H1') If K≤ N-2, for all K≤ k≤ N,|ϕ_k+1| ≤ r |ϕ_k|;r:=((1-1/K+2)^1/21/1+N/K)^1/2.Remark that assumption (H1) implies that γ_k=0 ⇒γ_k+2=0. 
The second condition (H2) has two parts:(H2a) If K ≤ N-1, (2(K+1)γ_K+1/N)^N(K-1/2γ_N)^K-1∨ (2Kγ_K/N-1)^N-1(K-2/2γ_N-1)^K-2 ≤ 1/(N-K+1)^N-K+1.(H2b) If K=N,γ_N≤1/2(N-2)^(N-2)/2. Loosely speaking, assumption (H1), which is present only if K≤ N-2, amounts to ask a geometric decrease for the coefficients (ϕ_k). If K=N or K=N-1, assumption (H2) requires the leading coefficients ϕ_N and ϕ_N-1 to be not too big, which is fair enough; it is more painful to write when K<N-1, but it can be interpreted as the requirement that the coefficients (ϕ_k)_K ≤ k ≤ N do not decay too fast.We conclude this section by the following comment on the range of validity of Theorems <ref> and <ref>.We conjecture that inequality (<ref>) holds without Hypothesis (H): indeed, the fact that ϕ, as a density, is nonnegative already implies restrictions on the coefficients, which are in fact sufficient to prove (<ref>) in the case of densities of the form ϕ=1+c_̋2 and ϕ=1+c'_̋4. Whether the polynomial assumption is necessary is less clear. For comparison, the hypothesis of Gaussian chaos of finite orders is needed in <cit.>. § EXPLICIT EXPRESSION OF THE CONVOLUTION OPERATOR§.§ Convolution as a Markovian transitionSet (X_i)_i≥ 1 random variable of density ϕ∈ L^2(μ). Throughout the paper, the notation f_n stands for the density of the renormalized sum Y_n =1/√(n)∑_i=1^nX_i, n≥ 1,which satisfy to the recursion equation: Y_n+1 =√(1-1/n+1) Y_n+ 1/√(n+1)X_n+1, n≥ 1. <ref>For a parameter a∈ [0,1], introduce the bilinear barycentric convolution operator K_a defined as∀ f,ϕ∈ L^2(μ), K_a(f ,ϕ):=af*√(1-a^2) ϕ. Equation (<ref>) in turn yields the corresponding recursion relation for the successive densities:f_n+1= K_a_n+1(f_n,ϕ), a_n+1:=√(1-1/n+1), n ≥ 1.On the other hand, relation (<ref>) translates into the the fact that (Y_n)_n≥ 1 is an inhomogeneous Markov chain. The associated semigroup (𝒬_p,q)_q≥ p ≥ 1 is defined for continuous bounded functions as:𝒬_p,q[f](x) :=[f(Y_q)|Y_p=x], x∈, q≥ p ≥ 1.By relation (<ref>), the explicit expression of 𝒬_n,n+1 is straightforward: for all f∈ L^2(μ),𝒬_n,n+1[f]=Q_a_n+1,ϕ[f],where the operator Q_a,ϕ is defined for all a ∈ [0,1] and density ϕ∈ L^2(μ) as:Q_a,ϕ[f](x) :=∫f(ax+√(1-a^2)y)ϕ(y)dy, x ∈.Denote 𝒬_n,n+1^* (resp.Q_a,ϕ^*) the adjoint of 𝒬_n,n+1 (resp. Q_a,ϕ) in L^2(μ). The discrete version of Kolmogorov backward relation reads:f_n+1:=𝒬_n,n+1^*[f_n], n≥ 1.Hence 𝒬_n,n+1^*[f_n]=K_a_n+1(f_n,ϕ). More generally, for f,ϕ densities in L^2(μ), the following identity holds:K_a(f,ϕ) =Q_a,ϕ^* [f], a∈[0,1].The strategy used in the paper relies on this interpretion of barycentric convolution as the action of a Markovian transition, as explained in Section <ref>.For all a∈ [0,1], one hasQ_a,ϕ[f](x) =∫f(ax+√(1-a^2)y)ϕ(y)dμ(y), Q_a,ϕ^*[f](x) =∫f(ax-√(1-a^2)y)ϕ(√(1-a^2)x+ay)dμ(y)=K_a(f,ϕ)(x). The formulas are to be understood in the following way: if f is bounded and continuous, they stand for all x∈; if f∈ L^2(μ), they stand in the almost everywhere sense. One sees that the operators Q_a,ϕ and Q_a,ϕ^* are actually defined for all f,ϕ∈ L^2(γ) independently from them being densities; in what follows, Q_a,ϕ and Q_a,ϕ^* refers to this extended definition when required. Furthermore, Q_a,ϕ and Q_a,ϕ^* are bounded in L^2(μ) for all ϕ∈ L^2(μ): indeed by Cauchy-Schwarz inequality, for all f∈ L^2(μ),∫(Q_a,ϕ[f])^2dμ≤f^2_L^2(μ)ϕ^2_L^2(μ),∫(Q_a,ϕ^*[f])^2dμ≤f^2_L^2(μ)ϕ^2_L^2(μ).Formula (<ref>) has already been given; let us prove formula (<ref>). As a ∈ [0,1], there exists θ∈ such that cosθ =a and sinθ =√(1-a^2). 
Denote R_θ the rotation of ^2 with parameter θ and Γ the Gaussian distribution on ^2 with identity as covariance matrix. Then, invariance of Γ under the action of R_θ implies that for all f,g ∈ L^2(γ),∫fQ_a,ϕ[g]dμ =∬f(x)ϕ(y)g((cosθ) x+ (sinθ) y)dμ(x)dμ(y)=∬f( (X))ϕ((X))g((R_θ X))dΓ(X)=∬f( (R_-θ X))ϕ((R_-θ X))g((X))dΓ(X)=∫gQ_a,ϕ^*[f]dμ. Recall the definition of the Ornstein-Uhlenbeck semigroup (P_t)_t≥ 0 given in (<ref>). Hence, for all a∈ [0,1] and f,ϕ∈ L^2(μ), one has the two useful equalities:Q_a,^*[f]=K_a(f,)=P_-log a[f], Q_a,ϕ^*[]=K_a(,ϕ)=P_-1/2log(1- a)^2[ϕ]. Let us introduce the multiplicative reversibilization of the Markov transition operator Q_a,ϕ, defined asM_a,ϕ:=Q_a,ϕQ_a,ϕ^*.The concept of reversibilization of a non-reversible Markov operator traces back to <cit.> which deals with homogeneous, invariant Markov chains. The new operator M_a,ϕ is now symmetric in L^2(μ), though in the general case, it is not Markovian: as Q_a,ϕ is not invariant, then Q_a,ϕ^*[]≠ and the mass conservation property M_a,ϕ[]= does not hold. Nonetheless, we will see in Section <ref> that spectral analysis of M_a,ϕ gives quantitative information on the action of Markovian transition Q_a,ϕ and convolution operator Q_a,ϕ^*.§.§ Hermite-Fourier decomposition of the convolution operator Let us now determine how the operators Q_a,ϕ, Q_a,ϕ^* and M_a,ϕ act with respect to the Hermite-Fourier decomposition defined in (<ref>). For a bounded operator Q of L^2(μ), we call Q the infinite matrix defined as:Q:=(Q(m,n))_n,m∈^+; (Q)(m,n):=∫Q(_̋n)_̋m dμ, n,m∈^+.The matrix Q is the unique bounded operator of l^2 such that(Q[f])= Qf, f∈ L^2(μ).Set R_op the operator norm of a bounded operator R on a Hilbert space ℋ, defined with evident notation asR_op:= sup_h ∈ℋ∖{0}Rh_ℋ/h_ℋ.The following property reveals useful: for all bounded operator Q on L^2(μ), it stands that:Q_op=Q_op. In the following proposition, we give the matrices Q_a,ϕ, Q^*_a,ϕ and M_a,ϕ associated to the operators Q_a,ϕ, Q_a,ϕ^* and M_a,ϕ. For all ϕ∈ L^2(μ) and a∈ [0,1], one has:∀ m,n ∈^+,Q_a,ϕ(m,n) =nm^1/2 a^m(1-a^2)^n-m/2ϕ_n-m ,m ≤ n0,m>n,Denoting ^TN the transpose matrix of N, it stands that:Q^*_a,ϕ = ^TQ_a,ϕ.Finally, the matrix M_a,ϕ is symmetric and∀ l,i∈^+,i ≤ l,M_a,ϕ(l,l-i) =a^2l-i∑_k≥ 0k+lk^1/2k+lk+i^1/2(1-a^2)^2k+i/2ϕ_k+iϕ_k. To begin with, one needs to compute the Hermite-Fourier decomposition of Q_a,_̋m[_̋n], for a∈ [0,1] and n,m ∈^+. Applying the properties of Hermite polynomials recalled in Section <ref> yields:Q_a,_̋m[_̋n] =∫_̋n(ax+√(1-a^2)y)_̋m(y)dμ(y)=1/√(n!)√(m!)∫D^m(H_n(ax+√(1-a^2)y))dμ(y).By the degree property, the integral vanishes for n<m. For n≥ m,Q_a,_̋m[_̋n] =(1-a^2)^m/2n ⋯ (n-m+1)/√(n!)√(m!)∫H_n-m(ax+√(1-a^2)y))dμ(y)=(1-a^2)^m/2n ⋯ (n-m+1)/√(n!)√(m!)P_-log a[H_n-m]=a^n-m(1-a^2)^m/2n ⋯ (n-m+1)/√(n!)√(m!)H_n-m=a^n-m(1-a^2)^m/2nm^1/2_̋n-m.By bilinearity, write Q_a,ϕ(_̋n) =∑_m∈^+ϕ_m Q_a,_̋m[_̋n]=∑_m=0^nϕ_m a^n-m(1-a^2)^m/2nm^1/2_̋n-m=∑_m=0^nϕ_n-m a^m(1-a^2)^n-m/2nm^1/2_̋m =∑_m=0^nQ_a,ϕ(m,n)_̋m,which proves (<ref>). Furthermore, definition (<ref>) implies that M_a,ϕ=Q_a,ϕ^TQ_a,ϕ, and formula (<ref>) follows by simple computation. * It is to be noted that convolution with barycentric coefficients only admits a nice decomposition; contrarily to what happens with Fourier transform associated to Lebesgue measure, the usual convolution has no explicit Hermite-Fourier representation. 
* Thanks to formula (<ref>) above, one finds that for all ϕ∈ L^2(μ), m∈^+ and a∈ [0,1],Q_a,ϕ^*(_̋m) =a^m ∑_n≥ 0m+nn^1/2(1-a^2)^n/2ϕ_n _̋m+n,which allows to better understand the behaviour of the barycentric convolution: each nonvanishing coefficient on _̋m and _̋n in the respective decompositions of f and ϕ contribute to a coefficient on _̋m+n in the decomposition of K_a(f,ϕ).* If ϕ is polynomial, then M_a,ϕ is a band matrix. We already noticed that Q_a,ϕ and Q_a,ϕ^*, and by composition M_a,ϕ=Q_a,ϕQ_a,ϕ^*, are bounded operators. In fact, they are Hilbert-Schmidt operators. By definition, a bounded operator R on the Hilbert space ℋ is Hilbert-Schmidt, if, (e_n)_n∈^+ standing for an orthonormal basis of ℋ, one has:∑_n∈^+R(e_n)_ℋ^2 < +∞. For all ϕ∈ L^2(μ) and a∈ [0,1), the operators Q_a,ϕ,Q_a,ϕ^*,M_a,ϕ are Hilbert-Schmidt, hence compact. Consider first Q_a,ϕ^*.∑_n ∈^+Q_a,ϕ^*(_̋n)_L^2(μ)^2 =∑_n ∈^+⟨_̋n,M_a,ϕ_̋n ⟩_L^2(μ)=∑_n∈^+M_a,ϕ(n,n)=n,k ∈^+∑k+nk(1-a^2)^k a^2nϕ_k^2.Now, by the equality ∑_n ∈^+k+nku^n=1/(1-u)^k+1, k∈^+, u ∈ [0,1),we find that∑_n ∈^+Q_a,ϕ^*(_̋n)_L^2(μ)^2 =∑_k ∈^+(1-a^2)^k/(1-a^2)^k+1ϕ_k^2=1/1-a^2ϕ_L^2(μ)^2 < +∞.This implies that Q_a,ϕ is Hilbert-Schmidt and in turn so is M_a,ϕ by composition.§ PROOF OF THEOREM <REF> §.§ Strategy for non-homogeneous Markov chainsLet us now explain the strategy to exploit the Markovian framework. In Remark <ref>, we noticed that if ϕ=, i.e. the X_i's are normal, then Q_a,^*=P_-log a. In this case, the renormalized sums (Y_n)_≥ 1 are also normal, which corresponds to the fact that the Ornstein-Uhlenbeck semigroup (P_t)_t≥ 0 is invariant, and in fact reversible, with respect to μ. The semigroup (P_t)_t≥ 0 also enjoys a Poincaré inequality recalled in equation (<ref>). If f is a density then _μ(f)=χ^2(f), hence Poincaré inequality for the Ornstein-Uhlenbeck semigroup reads:_2(P_t [f])≤ e^-t _2(f), t≥ 0.Furthermore, f_n+1=P_-log a_n+1 [f_n] by (<ref>), hence:_2(f_n+1)= _2( P_-log a_n+1[f_n] )≤a_n_2(f_n), n ≥ 1,and by straighforward calculation one gets the decrease of _2(f_n).The idea underlying our method consists in mimicking the reasoning above for the true operator Q_a,ϕ^* acting on densities, which is neither reversible nor satisfies to the mass conservation property in the general case, as was explained above. For a∈[0,1] and f a density in L^2(μ), let us write by triangular inequality:_2(Q_a,ϕ^*[f])= Q_a,ϕ^*[f] - 1 _L^2(μ)≤Q_a,ϕ^*[f-1]_L^2(μ) +Q_a,ϕ^*[] - 1 _L^2(μ).The termQ_a,ϕ^*[] - 1 _L^2(μ),can be thought of as a measure of the divergence from invariance of the transition operator, an idea tracing back to <cit.>. Second, the centered term Q_a,ϕ^*[f-1]_L^2(μ) rewrites:Q_a,ϕ^*[f-1]_L^2(μ)^2 =∫(Q_a,ϕ^*[f-1] )^2dμ=∫(f-1) Q_a,ϕ Q_a,ϕ^*[f-1] dμ=∫(f-1) M_a,ϕ[f-1] dμ,making appear the multiplicative reversibilization M_a,ϕ of Q_a,ϕ^* introduced above.In terms of convolution, this amounts to consider separately K_a(f-1,ϕ) and K_a(,ϕ). Following this roadmap, Proposition <ref> below deals with the default of invariance Q_a,ϕ^*[]-1_L^2(μ) and Proposition <ref> with the centered quantity Q_a,ϕ^*[f-1]_L^2(μ). Theorem <ref> is then proved in Section <ref>. For the sake of completeness, we conclude the part by stating an alternative bound to (<ref>) in Section <ref>. §.§ Improved Poincaré inequality for Ornstein-UhlenbeckThe following result is an improvement of the usual Poincaré inequality for the Ornstein-Uhlenbeck semigroup (<ref>) when more information is avalaible on the function f∈ L^2(μ) at play. 
Let r∈^+ and f be a function in L^2(μ) with Hermite decomposition of the formf=f_0+∑_n ≥ r+1f_k _̋k, Then for all t ≥ 0,_μ(P_t f)≤ e^-2(r+1)t_μ(f).In particular, if ϕ is the density of a variable agreeing with the Gaussian moments up to r, then for all a∈ [0,1],Q_a,ϕ^*[]-1_L^2(μ) ≤ (1-a^2)^r+1/2_2(ϕ).Thanks to the properties of Hermite polynomials,P_t f = ∑_k=0^∞f_k P_t[_̋k]=h_0 +∑_k=r+1^∞f_k e^-kt_̋k; _μ(P_t f) =∑_k=r+1^∞f_k^2 e^-2kt≤ e^-2(r+1)t∑_k=r+1^∞f_k^2 = e^-2(r+1)t_μ(f).The second inequality of Proposition <ref> follows from the first one by Remark <ref>.§.§ Poincaré-like inequality for the convolution operator For a∈[0,1] and a centered g∈ L^2(μ) (that is μ(g)=0), let us consider the quantity Q_a,ϕ^*[g]_L^2(μ). By analogy with the Poincaré inquality for the Ornstein-Uhlenbeck semigroup, which is reversible with respect to μ, we call the following result a Poincaré-like inequality holding for the operator Q_a,ϕ^*, which in general is non-reversible.Assume that ϕ is apolynomial density in L^2(μ) whose moments match the moments of μ up to order r ∈^+, and which satisfies to Hypothesis (H) stated in Section <ref>. Set a_ϕ∈ [0,1) and d_ϕ:(0,1) → as in Theorem <ref>.Then, for all function g∈ L^2(μ) which writes as: g=∑_k=r+1^∞ g_k _̋k,and for all a∈ (a_ϕ,1), it stands that:∫(Q_a,ϕ^*[g])^2 dμ ≤ a^r+1(1+d_ϕ(a))^2 ∫g^2 dμ . The proof is cut out in a number of steps. We begin by the following lemma: Let ϕ and g be as in Proposition <ref>, and set K=r+1. Then, for all a∈ (0,1),∫(Q_a,ϕ^*[g])^2 dμ ≤sup_l ≥ K Σ_a,ϕ(l) ∫g^2 dμ,whereΣ_a,ϕ(l):=∑_j=K^+∞|M_a,ϕ(l,j)|,l ≥ K.Let r∈^+, a∈ (0,1) and ϕ as in the statement of the proposition, and let K=r+1. First, notice that V_K, the set of functions g∈ L^2(μ) with Hermite decompositiong=∑_k=K^∞ g_k _̋k,is stable under action of Q_a,ϕ^* by Remark <ref>. The space V_K equipped with the L^2(μ) structure is again a Hilbert space. Let us call Q_a,ϕ^*_|_V_K the restriction of Q_a,ϕ^* to V_K. It is again bounded, with operator normQ_a,ϕ^*_|_V_K_op:=sup_g ∈ V_K∖{0}Q_a,ϕ^*[g]_L^2(μ)/g_L^2(μ). Hence, the desired majoration (<ref>) is equivalent to the following bound on the operator norm:Q_a,ϕ^*_|_V_K_op^2 ≤ sup_l ≥ K Σ_a,ϕ(l).The isometry between L^2(μ) and l^2 restricts to an isometry between V_K and l^2_K, defined as the space of real sequences (u_n)_n≥ K with ∑_n≥ Ku_n^2<+∞. By this isometry, if N_K=(N_K(i,j))_i,j ≥ K stands for the infinite matrix associated to Q_a,ϕ^*_|_V_K, then:Q_a,ϕ^*_|_V_K_op=N_K _op.Furthermore, by the properties of block matrix multiplication, one sees that the matrix ^TN_K N_K is nothing else but the matrix M_a,ϕ defined in (<ref>) (Section <ref>) restricted to l^2_K, that is:^TN_K N_K=(M_a,ϕ(i,j))_i,j ≥ K. For a complex Banach space E, set 𝒢(E) the set of inversible operators on E. The spectral radius of a bounded operator M in E is then defined as ρ(M):=max{ |λ|, λId-M ∈𝒢(E)}.Moreover, if E is Hilbert and if T is a bounded operator of E with adjoint T^*, thenT_op=√(ρ(T^*T)).The operator N_K being a bounded operator of l_K^2 (by restriction of a bounded operator), the preceding equation applies:N_K _op=√(ρ(^TN_K N_K)). Let us recall a theorem of Gershgorin (<cit.>) related to finite complex auto-adjoint matrices A, where A=(A_i,j)_1≤ i,j ≤ n for a positive integer n. 
Denoting 𝒢_n(ℂ) the set of invertible matrices of size n and B(x,r) the complex ball of center x∈ℂ and r>0, one has:{ł∈ℂ, łId-A ∈𝒢_n(ℂ)}⊂⋃_l=1^nB(A(l,l),|∑_1≤ j ≤ n, j≠ lA(l,j)|).As a consequence,ρ(A)≤sup_1≤ l≤ n|A(l,l)| + |∑_1≤ j ≤ n, j≠ lA(l,j)| ≤sup_1≤ l≤ n∑_j=1^n|A(l,j)|.Gershgorin's theorem is stated for finite matrices, but the proof extends without difficulty to eigenvalues of operators on l_K^2. The operator ^TN_K N_K being autoadjoint and compact by Proposition <ref>, its spectrum is included in the set of eigenvalues united with the singleton {0}, hence the formula above applies and yields majoration (<ref>), which proves the lemma. In order to derive an upper-bound of sup_l ≥ K Σ_a,ϕ(l) from the explicit expression of M_a,ϕ stated in (<ref>), we need two technical lemmas.Let N ≥ K be positive integers, and call as in the preceding sectionsa_ϕ:=(1+N/K)^-1/4, C_k:=(1+N/K)^k/2, k ∈^+.Let i,k be natural integers such that0≤ k ≤ N-1, K≤ i+k ≤ N, 1≤ i ≤ N.Then, for all a ∈ (a_ϕ,1) and for all l ≥ K,a^-ik+lk^1/2k+lk+i^1/21_l ≥ i+K+a^ik+l+ik^1/2k+l+ik+i^1/2 ≤ 2 C_k C_k+ik+lk^1/2k+l+ik+i^1/2. Let N ≥ K be positive integers and i,k,l be natural integers such that0≤ k ≤ N-1, K≤ i+k ≤ N, 1≤ i ≤ N,l ≥ K.For two positive integers m ≥ n, the notation [m]_n stands for [m]_n:=m ⋯ (m-n+1). One has:k+i+lkk+lk^-1=[k+i+l]_k/[k+l]_k.If i≥ k, thenk+i+lkk+lk^-1≤(l+N/l+1)^k≤(l+N/l+1)^i.If i<k, thenk+i+lkk+lk^-1=[k+i+l]_i [k+l]_k-i/[k+l]_k-i [l+i]_i=[k+i+l]_i/[i+l]_i≤(l+N/l+1)^i.In both cases,k+i+lkk+lk^-1≤(l+N/l+1)^i.Furthermore, in the case where l ≥ i,k+lk+ik+i+lk+i^-1=[k+l]_k [l]_i/[k+i+l]_i [k+l]_k=[l]_i/[k+i+l]_i≤(l/l+1)^i.Hence, for all a∈ (0,1),a^-ik+lk^1/2k+lk+i^1/21_l ≥ i+K+a^ik+l+ik^1/2k+l+ik+i^1/2≤(l/l+1)^i/2k+lk^1/2k+l+ik+i^1/2(a^-i1_l ≥ i+K + a^i(1+N/l)^i/2)≤k+lk^1/2k+l+ik+i^1/2 f(a),where we defined f(a):=a^-i+(1+N/K)^i/2a^i, a∈ (0,1).As one checks easily, the inequality (<ref>) still holds true if l <i, and f'(a) ≥ 0 if and only if a ≥ a_ϕ=(1+N/K)^-1/4. This yields for all a ∈ (a_ϕ,1),f(a)≤ f(1)= 1+(1+N/K)^i/2≤2(1+N/K)^i/2≤2 C_kC_k+i,where C_k has been defined as C_k=(1+N/K)^k/2 for all k∈^+. Let m>q be positive integers, and consider the polynomial P=-α X^m+β X^q-1 with α,β >0. Then P≤ 0 on ℝ^+ if and only if(β/m)^m(q/α)^q ≤1/(m-q)^m-q. Let P(x)=-α x^m+ βx^q-1 be as in the wording of the lemma. Then, for all x∈, P'(x) = -mα x^m-1+qβ x^q-1=x^q-1(-mα x^m-q+qβ),thus P attains its maximum on [0,+∞) at the point x_0=((β q)/(α m))^1/(m-q). Moreover,P(x_0) = x_0^q(-α x_0^m-q+β)-1= (β q/α m)^q/m-q(-β q/m+β)-1= β/m(β q/α m)^q/m-q (m-q)-1=(β/m)^m/m-q(q/α)^q/m-q(m-q)-1,so that P(x_0) ≤ 0 if and only if (β/m)^m(q/α)^q ≤1/(m-q)^m-q.We are now ready to show Proposition <ref>.Set K=r+1, let a ∈ (0,1) and l a positive integer such that l ≥ K. Then,Σ_a,ϕ(l) =∑_j=K^+∞|M_a,ϕ(l,j)| =|M_a,ϕ(l,l)|+ ∑_i=1^N (|M_a,ϕ(l,l-i)|_l-i≥ K+ |M_a,ϕ(l,l+i)|)=a^2l∑_k≥ 0k+lk(1-a^2)^kϕ_k^2+ ∑_i=1^N a^2l-i∑_k≥ 0k+lk^1/2k+lk+i^1/2(1-a^2)^2k+i/2ϕ_k+iϕ_k_l-i≥ K +∑_i=1^N a^2l+i∑_k≥ 0k+l+ik^1/2k+l+ik+i^1/2(1-a^2)^2k+i/2ϕ_k+iϕ_k=a^2l(1+∑_k=K^Nk+lk(1-a^2)^kϕ_k^2 + 0≤ k< k+i ≤ N∑{ (1-a^2)^2k+i/2|ϕ_k| |ϕ_k+i|𝒞_a,ϕ(l,i,k)}),where𝒞_a,ϕ(l,i,k) :=a^-ik+lk^1/2k+lk+i^1/21_l ≥ i+K+a^ik+l+ik^1/2k+l+ik+i^1/2is precisely the quantity addressed in Lemma <ref>. 
If ϕ_k ϕ_k+i≠ 0 and i≥ 1 then i+k≥ K, hence the assumptions of Lemma <ref> hold and we get for all a∈ (a_ϕ,1): Σ_a,ϕ(l) ≤ a^2l(1+∑_k=K^Nk+lk(1-a^2)^kϕ_k^2 )+2 a^2l( 0≤ k< k+i ≤ N∑∑{ (1-a^2)^2k+i/2|ϕ_k| |ϕ_k+i| C_k C_k+ik+lk^1/2k+l+ik+i^1/2}).Noticing that C_k ≥ 1 for every positive integer K and that C_0=1 allows to recognize the development of a square:Σ_a,ϕ(l) ≤ a^2l(1+∑_k=K^Nk+lk^1/2(1-a^2)^k/2C_k |ϕ_k|)^2≤ a^2l(1+∑_k=K^NC_k |ϕ_k|/√(k!)((N+l)(1-a^2))^k/2)^2,where we used that for all natural integer k ≤ N,k+lk ≤(N+l)^k/k!.We recognize the coefficient γ_k introduced in Section <ref> to state Hypothesis (H):γ_k = C_k |ϕ_k|/√(k!), k ≥ K,so thatΣ_a,ϕ(l) ≤ a^-2Na^2(N+l)(1+∑_k=K^Nγ_k((N+l)(1-a^2))^k/2)^2.For all a∈ (0,1) and l ∈^+, we perform the change of variablesu_a,ϕ(l):=-(l+N) log a >0,so that(N+l)(1-a^2)=(N+l)(1- exp( -2u_a,ϕ(l)/l+N) ) ≤ 2 u_a,ϕ(l),and introduce the functionh(u)=exp(-u)(1+∑_k=K^Nγ_k (2u)^k/2), u ≥ 0.Then, Σ_a,ϕ(l) ≤ a^-2N h^2( u_a,ϕ(l)).The last part of the proof is devoted to showing that the function h is non-increasing on [0,+∞); indeed in that case, we have for all a ∈ (a_ϕ,1) and l ≥ K:Σ_a,ϕ(l) ≤a^-2N h^2( u_a,ϕ(K)) =a^2K(1+∑_k=K^Nγ_k (-2(K+N)log a)^k/2)^2,which, jointly with Lemma <ref>, proves Proposition <ref>. So let us study the variation of h. For all u≥ 0,h'(u) =exp(-u)(-1-∑_k=K^Nγ_k (2u)^k/2+∑_k=K^Nγ_k k(2u)^(k-2)/2).Let us consider separately the powers of u^1/2 ranging from K to N-2 (when existing) and the remaining powers:-1-∑_k=K^Nγ_k (2u)^k/2+∑_k=K^Nγ_k k(2u)^(k-2)/2 =∑_k=K^N-2(-γ_k+(k+2)γ_k+2) (2u)^k/2+ Kγ_K (2u)^(K-2)/2+γ_K+1(K+1)(2u)^(K-1)/2-(1+γ_N(2u)^N/2+γ_N-1(2u)^(N-1)/2).As Hypothesis (H1) holds, the sum ∑_k=K^N-2 is nonpositive. If K≤ N-1, the remaining term writes1/2(-1-2γ_N(2u)^N/2+2γ_K+1(K+1)(2u)^(K-1)/2)+ 1/2(-1-2γ_N-1(2u)^(N-1)/2+2Kγ_K (2u)^(K-2)/2),which is nonpositive thanks to Hypothesis (H2a) and Lemma <ref>. If K=N, the same arguments provide the nonpositivity of the remaining term, which reduces to-1 -γ_N(2u)^N/2+Nγ_N(2u)^(N-2)/2.Finally, under Hypothesis (H), we find that h has a nonpositive derivative on [0,+∞[ hence is non-increasing, which completes the proof.§.§ Proof of Theorem <ref>Let us turn to the proof Theorem <ref>.Let ϕ, f ∈ L^2(μ) be as in the wording of the theorem. For all a∈ [0,1],_2 (af*√(1-a^2)ϕ)= K_a(f,ϕ) -1_L^2(μ) ≤K_a(f-1,ϕ) _L^2(μ)+K_a(,ϕ) -1_L^2(μ).According to Proposition <ref>, for all a∈ [0,1],K_a(,ϕ) -1_L^2(μ) =Q_a,ϕ^*[]-1_L^2(μ)≤ (1-a^2)^r+1/2_2(ϕ),while by Proposition <ref>, for all a∈ (a_ϕ,1),K_a(f-1,ϕ) _L^2(μ) =Q_a,ϕ^*[f-1]_L^2(μ)≤ a^r+1(1+d_ϕ(a)) _2(f),which proves the theorem. §.§ Alternative bound For the sake of completeness, let us conclude this section with a bound alternative to inequality (<ref>). Let f,ϕ∈ L^2(μ) be density with moments matching the Gaussian moments up to order r∈^+, and moreover assume that f is (r+1)-times derivable, with D^r+1f ∈ L^2(μ). Then, there exists a universal constant c_r>0 such that ∀ a ∈ (0,1),_2 (af*√(1-a^2)ϕ)≤ a^r+1_2(f)+(1-a^2)^r+1/2_2(ϕ) +c_r (1-a^2)^r+1/2( _2(f)_2(ϕ) + D^r+1f_L^2(μ)D^-(r+1)ϕ_L^2(μ)),where D^-1 stands for the operator which maps a function ϕ∈ L^2(μ) onto its primitive with vanishing mean.Contrarily to what happens for bound (<ref>), (<ref>) stands for all a∈ (0,1) and the polynomial assumption on ϕ is not required, making the relative roles of f and ϕ more symmetric. 
The drawback of bound (<ref>) is that it involves the norm of the (r+1)-th derivative of f, which we fail to control in the framework of the Central Limit Theorem.Set r∈^+, K=r+1 and f,ϕ two densities as in the wording of the remark, so that f=1+∑_k=K^+∞f_k _̋k,ϕ=1+∑_k=K^+∞ϕ_k _̋k.For all a∈ (0,1), one has:_2(af*√(1-a^2) ϕ) =K_a(f,ϕ)-1_L^2(μ)≤K_a(f-1,)_L^2(μ) +K_a(,ϕ-1)_L^2(μ)+K_a(f-1,ϕ -1)_L^2(μ).Now, K_a(f-1,) =P_-log a[f-1], K_a(,ϕ-1)=P_-1/2log (1-a^2)[ϕ-1],hence by the improved Poincaré inequality from Proposition <ref> we get the two first terms of the bound. It remains to consider K_a(f-1,ϕ -1)_L^2(μ). For all a∈ (0,1), one has by Remark <ref>:K_a(f-1,ϕ-1) =∑_m=K^+∞ ∑_n=K^+∞m+nm^1/2a^m(1-a^2)^n/2f_mϕ_n_̋n+m,henceK_a(f-1,ϕ -1)_L^2(μ)^2 =∑_l=2K^+∞(∑_m+n=l m,n≥ Klm^1/2a^m(1-a^2)^n/2f_mϕ_n)^2=(1-a^2)^K/2∑_l=2K^+∞(∑_m+n=l-K m≥ K,n ≥ 0lm^1/2a^m(1-a^2)^n/2f_mϕ_n+K)^2.By Cauchy-Schwarz inequality and Pascal formula, this rewrites again(1-a^2)^K/2∑_l=2K^+∞(∑_m+n=l-K m≥ K,n ≥ 0l-Km^1/2(l⋯(l-K+1)/(n+K)⋯(n+1))^1/2a^m(1-a^2)^n/2f_mϕ_n+K)^2 ≤ (1-a^2)^K/2∑_l=2K^+∞∑_m+n=l-K m≥ K,n ≥ 0l⋯(l-K+1)/(n+K)⋯(n+1)f_m^2ϕ_n+K^2.Now, there exists c_K>0 such that ∀ x ≥ K,∀ y ≥ 0,(x+y+K)⋯ (x+y+1)≤ c_K (x ⋯ (x-K+1)+(y+K)⋯ (y+1) ).Applying this to x=m and y=n, we find thatK_a(f-1,ϕ -1)_L^2(μ)^2 ≤ c_K(1-a^2)^K/2∑_l=2K^+∞∑_m+n=l-K m≥ K,n ≥ 0(1+ m⋯ (m-K+1)/(n+K)⋯(n+1))f_m^2ϕ_n+K^2=c_K(1-a^2)^K/2(∑_m≥ Kf_m^2)(∑_n≥ Kϕ_n^2) +c_K (1-a^2)^K/2(∑_m≥ Km⋯ (m-K+1) f_m^2)(∑_n≥ Kϕ_n^2/(n+K)⋯(n+1)),which is the Hermite representation of the expected quantity by formula (<ref>). § PROOF OF THEOREM <REF> Finally, we conclude the article with the proof of our main theorem, Theorem <ref>, which follows on from the recursion formula (<ref>) and barycentric convolution inequality for χ_2-distance (<ref>). In the framework of the theorem, denote n_0:=⌈1/1-a_ϕ^2⌉ ∨2.By the two aforementioned relations, we have for all integer n ≥ n_0:_2(f_n)≤(1-1/n)^r+1/2(1+d_ϕ(√(1-1/n))) _2(f_n-1)+1/n^r+1/2_2(ϕ).Remembering that r≥ 2, let us call for all n≥ n_0,c_n:=(1-1/n)^r+1/2(1+d_ϕ(√(1-1/n)))=1-r+1/2n+𝒪(1/n^3/2), d_n:=1/n^r+1/2_2(ϕ),where we denote v_n=𝒪(u_n) if lim sup_n → + ∞ |v_n/u_n|<+∞. The preceding recursive inequality yields_2(f_n)≤(∏_k=n_0^n c_k) _2(f_n_0-1) + ∑_k=n_0^n (∏_j=k+1^n c_k)d_k.Now,log( ∏_k=n_0^n c_k )=∑_k=n_0^n log c_k=- ∑_k=n_0^n(r+1/2n+𝒪(1/n^3/2))=-r+1/2log n + 𝒪(1).This leads to∏_k=n_0^n c_k = 𝒪( 1/n^r+1/2);∑_k=n_0^n (∏_j=k+1^n c_k)d_k= (∏_j=n_0^n c_j) ∑_k=n_0^nd_k/∏_j=n_0^k c_j= 𝒪( 1/n^r+1/2) ∑_k=n_0^n 𝒪(1)=𝒪( 1/n^r-1/2).Finally, _2(f_n):=𝒪( 1/n^r+1/2)+𝒪( 1/n^r-1/2)=𝒪( 1/n^r-1/2),which proves the theorem.If we suppose that the random variables (X_i)_i≥ 1 are independent and can be distributed according to densities (ϕ_j)_j ∈ J, where J is a finite set and each ϕ_j is polynomial and complies with (H), then the integer n_0 above is replaced byn_0:=max_j ∈ J⌈1/1-a_ϕ_j^2⌉ ∨2,and the quantities (c_n,d_n)_n ≥ n_0 byc_n :=(1-1/n)^r+1/2max_j ∈ J(1+d_ϕ_j(√(1-1/n)))=1-r+1/2n+𝒪(1/n^3/2), d_n :=1/n^r+1/2max_j ∈ J_2(ϕ_j)= 𝒪(1/n^r+1/2).The rest of the proof is unchanged. plain
http://arxiv.org/abs/1709.09410v2
{ "authors": [ "Claire Delplancke", "Laurent Miclo" ], "categories": [ "math.PR" ], "primary_category": "math.PR", "published": "20170927093335", "title": "Berry-Esseen bounds for the chi-square distance in the Central Limit Theorem: a Markovian approach" }
University of Oxford, Denys Wilkinson Building, Keble Road, Oxford, OX1 3RH,UKMax-Planck-Institut für Astrophysik, Karl-Schwarzschild-Str. 1, 85741 Garching, GermanyInstituto de Astrofisica e Ciencias do Espaco, Faculdade de Ciencias da Universidade de Lisboa, Edificio C8, Campo Grande, P-1749016, Lisboa, PortugalDepartment of Astronomy, Beijing Normal University, Beijing, 100875, ChinaInstitute Lorentz, Leiden University, PO Box 9506, Leiden 2300 RA, The Netherlands Kavli Institute for Cosmological Physics, Enrico Fermi Institute,The University of Chicago, Chicago, Illinois 60637, USANordita, KTH Royal Institute of Technology and Stockholm University, Roslagstullsbacken 23, SE-106 91 Stockholm, Sweden Berkeley Center for Cosmological Physics, LBL and University of California at Berkeley, CA94720, USADepartamento de Física, Centro de Investigación y de Estudios Avanzados del IPN, AP 14-740, Ciudad de México 07000, México Department of Physics & Astronomy, University of the Western Cape, Cape Town 7535, South AfricaDIFA, Dipartimento di Fisica e Astronomia, Alma Mater Studiorum Università di Bologna, Viale Berti Pichat, 6/2, I-40127 Bologna, ItalyINAF/IASF Bologna, via Gobetti 101, I-40129 Bologna, ItalyINFN, Sezione di Bologna, Via Berti Pichat 6/2, I-40127 Bologna, ItalyJodrell Bank Centre for Astrophysics, School of Physics and Astronomy, The University of Manchester,Manchester, M13 9PL, U.K.Laboratoire de Physique Subatomique et de Cosmologie, Université Grenoble-Alpes, CNRS/IN2P353, avenue des Martyrs, 38026 Grenoble cedex, France.University of Oxford, Denys Wilkinson Building, Keble Road, Oxford, OX1 3RH,UKSchool of Physics and Astronomy, Cardiff University, The Parade, Cardiff, CF24 3AA, UKDépartement de Physique Théorique and Center for Astroparticle Physics, Université de Genève, 24 quai Ansermet, CH-1211 Genève 4, SwitzerlandUniversity of Oxford, Denys Wilkinson Building, Keble Road, Oxford, OX1 3RH,UKINAF/IASF Bologna, via Gobetti 101, I-40129 Bologna, ItalyINFN, Sezione di Bologna, Via Berti Pichat 6/2, I-40127 Bologna, ItalySchool of Physics and Astronomy, Sun Yat-sen University, 2 Daxue Road, Zhuhai, 519082, ChinaInstitute of Physics, LPPC, École Polytechnique Fédérale de Lausanne, CH-1015, Lausanne, Switzerland Institute for Nuclear Research of the Russian Academy of Sciences, 60th October Anniversary Prospect, 7a, 117312 Moscow, RussiaInstitute for Theoretical Particle Physics and Cosmology (TTK), RWTH Aachen University, D-52056 Aachen, Germany. 
Institute for Computational Cosmology, Department of Physics, Durham University, Durham DH1 3LE, UKInstitut für Theoretische Physik, Universität Heidelberg, Philosophenweg 16, D-69120 Heidelberg, GermanyJodrell Bank Centre for Astrophysics, School of Physics and Astronomy, The University of Manchester,Manchester, M13 9PL, U.K.INAF/IASF Bologna, via Gobetti 101, I-40129 Bologna, ItalyINFN, Sezione di Bologna, Via Berti Pichat 6/2, I-40127 Bologna, ItalyCEICO, Fyzikální ustáv Akademie věd ČR, Na Slovance 1999/2, 182 21, Prague, CzechiaInstitute Lorentz, Leiden University, PO Box 9506, Leiden 2300 RA, The NetherlandsCEICO, Fyzikální ustáv Akademie věd ČR, Na Slovance 1999/2, 182 21, Prague, Czechia Department of Physics, University of Cyprus, 1, Panepistimiou Street, 2109, Aglantzia, CyprusInstitut d'Astrophysique de Paris, CNRS (UMR7095), 98 bis Boulevard Arago, F-75014, Paris, France UPMC Univ Paris 06, UMR7095, 98 bis Boulevard Arago, F-75014, Paris, France Sorbonne Universités, Institut Lagrange de Paris (ILP), 98 bis Boulevard Arago, 75014 Paris, FranceInstitut de Physique Théorique, UniversitéParis Saclay,CEA, CNRS, 91191 Gif-sur-Yvette, France We compare Einstein-Boltzmann solvers that include modifications to General Relativity and find that, for a wide range of models and parameters, they agree to a high level of precision. We look at three general purpose codes that primarily model general scalar-tensor theories, three codes that model Jordan-Brans-Dicke (JBD) gravity,a code that models f(R) gravity,a code that models covariant Galileons,a code that models Hořava-Lifschitz gravity and two codes that model non-local models of gravity. Comparing predictions of the angular power spectrum of the cosmic microwave background and the power spectrum of dark matter for a suite of different models, we find agreement at the sub-percent level. This means that this suite of Einstein-Boltzmann solvers is now sufficiently accurate for precision constraints on cosmological and gravitational parameters.A comparison of Einstein-Boltzmann solvers for testing General Relativity F. Vernizzi December 30, 2023 =========================================================================§ INTRODUCTIONParameter estimation has become an essential part of modern cosmology, e.g. <cit.>. By this we mean the ability to constrain various properties of cosmological models using observational data such as the anisotropies of the cosmic microwave background (CMB), the large scale structure of the galaxy distribution (LSS), the expansion and acceleration rate of the universe and other such quantities. A crucial aspect of this endeavour is to be able to accurately calculate a range of observables from the cosmological models. This is done with Einstein-Boltzmann (EB) solvers, i.e. codes that solve the linearized Einstein and Boltzmann equations on an expanding background <cit.>.The history of EB solvers is tied to the success of modern theoretical cosmology. Beginning with the seminal work of Peebles and Yu <cit.>, Wilson and Silk <cit.>, Bond and Efstathiou <cit.> and Bertschinger and Ma <cit.> these first attempts involved solving coupled set of many thousands of ordinary differential equations in a time consuming, computer intensive manner. A step change occurred with the introduction of the line of sight method and the CMBFAST code <cit.> by Seljak and Zaldarriaga, which sped calculations up by orders of magnitude. 
Crucial in establishing the reliability of CMBFAST was a cross comparison <cit.> between a handful of EB solvers (including CMBFAST) that showed that it was possible to get agreement to within 0.1%. Fast EB solvers have become the norm:CAMB <cit.>, DASh <cit.>, CMBEASY <cit.>and CLASS <cit.> all use the line of sight approach and have been extensively used for cosmological parameters estimation. Of these, CAMB and CLASS are kept up to date and are, by far, the most widely used as part of the modern armoury of cosmological analysis tools.While CAMB and CLASS were developed to accurately model the standard cosmology – general relativity with a cosmological constant – there has been surge in interest in testing extensions that involve modifications to gravity <cit.>. Indeed, it has been argued that it should be possible to test general relativity (GR) and constrain the associated gravitational parameters to the same level of precision as with other cosmological parameters. More ambitiously, one hopes that it should be possible to test GR on cosmological scales with the same level of precision as is done on astrophysical scales <cit.>. Two types of codes have been developed for the purpose of achieving this goal: general purpose codes which are either not tied to any specific theory (such as MGCAMB <cit.> and ISITGR <cit.> ) or model a broad class of (scalar-tensor) theories (such as<cit.> and <cit.>) and specific codes which model targeted theories such as Jordan-Bran-Dicke gravity <cit.>, Einstein-Aether gravity <cit.>, f(R) <cit.>, covariant galileons <cit.> and others.The stakes have changed in terms of theoretical precision. Up and coming surveys such as Euclid[https://www.euclid-ec.org/], LSST[https://www.lsst.org/], WFIRST[https://wfirst.gsfc.nasa.gov/], SKA[http://skatelescope.org/] and Stage 4 CMB[https://cmb-s4.org/] experiments all require sub-percent agreement in theoretical accuracy (cosmic variance is inversely proportional to the angular wavenumber probed, ℓ, and we expect to at most, reach ℓ∼ few× 10^3). While there have been attempts at checking and calibrating existing non-GR N-body codes <cit.>, until now the same effort has not been done for non-GR EB solvers with this accuracy in mind. In this paper we attempt to repeat what was done in <cit.> with a handful of codes. We will focus on scalar modes, neglecting for simplicity primordial tensor modes and B-modes of the CMB. In particular, we will show that two general purpose codes –and – agree with each other to a high level of accuracy. The same level of accuracy is reached with the third general purpose code – COOP; however, the latter code needs further calibration to maintain agreement at sub-Mpc scales. We also show that they agree with a number of other EB solvers for a suite of models such Jordan-Brans-Dicke (JBD), covariant Galileons, f(R) and Hořava-Lifshitz (khronometric) gravity. And we will show that for some models not encompassed by these general purpose codes, i.e. non-local theories of gravity, there is good agreement between existing EB solvers targeting them. This gives us confidence that these codes can be used for precision constraints on general relativity using observables of a linearly perturbed universe. We structure our paper as follows. In Section <ref> we layout the formalism used in constructing the different codes and we summarize the theories used in our comparison. In Section <ref> we describe the codes themselves, highlighting their key features and the techniques they involve. 
In Section <ref> we compare the codes in different settings. We begin by comparing the codes for specific models and then choose different families of parametrizations for the free functions in the general purpose codes. In Section <ref> we discuss what we have learnt and what steps to take next in attempts at improving analysis tools for future cosmological surveys. § FORMALISM AND THEORIES To study cosmological perturbations on large scales, one must expand all relevant cosmological fields to linear order around a homogeneous and isotropic background. By cosmological fields we mean the space time metric, g_μν, the various components of the energy density, ρ_i (where i can stand for baryons, dark matter and any other fluid one might consider), the pressure, P_i, and momentum, θ_i,as well as the phase space densities of the relativistic components, f_j (where j now stands for photons and neutrinos) as well as any other exotic degree of freedom, (such as, for example, ascalar field, ϕ, in the case of quintessence theories). One then replaces these linearized fields in the cosmological evolution equations; specifically in the Einstein field equations, the conservation of energy momentum tensor and the Boltzmann equations. One can then evolve the background equations and the linearized evolution equations to figure out how a set of initial perturbations will evolve over time. The end goal is to be able to calculate a set of spectra. First, the power spectrum of matter fluctuations at conformal time τ defined by⟨δ^*_M(τ, k')δ_M(τ, k)⟩≡ (2π)^3P(k,τ)δ^3( k- k'),where we have expanded the energy density of matter, ρ_M around its mean value, ρ̅_M, δ_M=(ρ_M-ρ̅_M)/ρ̅_M, and taken its Fourier transform. Second, the angular power spectrum of CMB anisotropies⟨ a^*_ℓ'm'a_ℓ m⟩=C^TT_ℓδ_ℓℓ'δ_mm' ,where we have expanded the anisotropies, δ T/T(n̂) in spherical harmonics such thatδ T/T(n̂)=∑_ℓ ma_ℓ mY_ℓ m(n̂) .More generally one should also be able to calculate the angular power spectrum of polarization in the CMB, specifically of the "E" mode, C^EE_ℓ, the "B" mode, C^BB_ℓ and the cross-spectra between the "E" mode and the temperature anisotropies, C^TE_ℓ, as well as the angular power spectrum of the CMB lensing potential, C^ϕϕ_ℓ. As a by product, onecan also calculate "background" quantities such as the history of the Hubble rate, H(τ), the angular-distance as a function of redshift, D_A(z) and other associated quantities such as the luminosity distance, D_L(z).To study deviations from general relativity, one needs to consider two main extensions. First one needs to include extra, gravitational degrees of freedom. In this paper we will restrict ourselves to scalar-tensor theories, as these have been the most thoroughly studied, and furthermore we will consider only one extra degree of freedom. This scalar field, and its perturbation, will have an additional evolution equation which is coupled to gravity. Second, there will be modifications to the Einstein field equations and their linearized form will be modified accordingly. How the field equations are modified and how the scalar field evolves depends on the class of theories one is considering. 
In what follows, we will describe what these modifications mean for different classes of scalar-tensor theories and also theories that evolve restricted scalar degrees of freedom (such as Hořava-Lifshitz and non-local theories of gravity).§.§ The Effective Field Theory of Dark Energy A general approach to study scalar-tensor theories is the so-called Effective Field Theory of dark energy (EFT) <cit.>. Using this approach, it is possible to construct the most general action describing perturbations of single field dark energy (DE) and modified gravity (MG) models. This can be done by considering all possible operators that satisfy spatial-diffeomorphism invariance, constructed from the metric in the unitary gauge where the time is chosen to coincide with uniform field hypersurfaces. The operators can be ordered in number of perturbations and derivatives. Up to quadratic order in the perturbations, the action is given byS= ∫ d^4x √(-g){M_ Pl^2/2 [1+Ω(τ)]R+ Λ(τ) - a^2c(τ) δ g^00 + M_2^4 (τ)/2 (a^2δ g^00)^2 - M̅_1^3 (τ)/2 a^2 δ g^00 δ K^μ_μ - M̅_2^2 (τ)/2 (δ K^μ_μ)^2 - M̅_3^2 (τ)/2 δ K^μ_ν δ K^ν_μ + a^2M̂^2(τ)/2 δ g^00 δ R^(3) + m_2^2 (τ) (g^μν + n^μ n^ν) ∂_μ (a^2g^00) ∂_ν(a^2 g^00) + …} + S_m [χ_i ,g_μν],where R is the 4D Ricci scalar and n^μ denotes the normal to the spatial hypersurfaces; K_μν = (δ^ρ_μ + n^ρ n_μ) ∇_ρ n_ν is the extrinsic curvature, K its trace, and R^(3) is the 3D Ricci scalar, all defined with respect to the spatial hypersurfaces. Moreover, we have tagged with a δ all perturbations around the cosmological background. S_m is the matter action describing the usual components of the Universe, which we assume to be minimally and universally coupled to gravity. The ellipsis stands for higher order terms that will not be considered here. The explicit evolution of the perturbation of the scalar field can be obtained by applying the Stückelberg technique to Eq. (<ref>), which means restoring the time diffeomorphism invariance by an infinitesimal time coordinate transformation, i.e. t → t +π(x^μ), where π is the explicit scalar degree of freedom. In Eq. (<ref>), the functions of time Λ(τ) and c(τ) can be expressed in terms of Ω(τ), the Hubble rate and the matter background energy density and pressure, using the background evolution equations obtained from this action <cit.>. Then, the general family of scalar-tensor theories is spanned by eight functions of time, i.e. Ω(τ), M_2^4(τ), M̅_1^3(τ), M̅_2^2(τ), M̅_3^2(τ), M̂^2(τ), m^2_2(τ), plus one function describing the background expansion rate H≡ da/(adt).[Note that H does not completely fix the evolution of all the background quantities; it must be augmented by the evolution of the matter species encoded in S_m.] Their time dependence is completely free unless they are constrained to represent some particular theory. Indeed, besides their model independent characterization, a general recipe exists to map specific models in the EFT language <cit.>. In other words, by making specific choices for these EFT functions it is possible to single out a particular class of scalar-tensor theory and its cosmological evolution for a specific set of initial conditions. The number of EFT functions that are involved in the mapping increases proportionally to the complexity of the theory. In particular, linear perturbations in non-minimally coupled theories such as Jordan-Brans-Dicke are described in terms of two independent functions of time, Ω(τ) and H(τ), i.e. by setting M_2^4=M̅_1^3=M̅_2^2=M̅_3^2=m^2_2=0. 
Increasing the complexity of the theory, perturbations in Horndeski theories <cit.> are described by setting {M̅^2_2=-M̅^2_3=2M̂^2, m_2^2=0}, in which case one is left with four independent functions of time in addition to the usual dependence on H(τ) <cit.>. Moreover, by detuning 2M̂^2 from M̅^2_2=-M̅^2_3 one is considering beyond Horndeski theories <cit.>. Lorentz violating theories, such as Hořava gravity <cit.>, also fall in this description by assuming m_2^2≠ 0. For practical purposes, it is useful to define a set of dimensionless functions in terms of the original EFT functions asγ_1 = M^4_2/M_ Pl^2H_0^2 ,γ_2 =M̅^3_1/M_ Pl^2H_0 ,γ_3 = M̅^2_2/M_ Pl^2 , γ_4= M̅^2_3/M_ Pl^2 ,γ_5 =M̂^2/M_ Pl^2 ,γ_6= m^2_2/M_ Pl^2 ,where H_0 and M_ Pl are the Hubble parameter today and the Planck mass respectively. In this basis, Horndeski gravity corresponds to γ_4=-γ_3, γ_5=γ_3/2 and γ_6=0. As explained above, this reduces the number of free functions to five, i.e. {Ω,γ_1,γ_2,γ_3} plus a function that fixes the background expansion history. In this limit the EFT approach is equivalent to the α formalism described in the next section. Indeed, a one-to-one map to convert between the two bases is provided in Appendix <ref>.§.§ The Horndeski Action A standard approach to study general scalar-tensor theories is to write down a covariant action by considering explicitly combinations of a metric, g_μν, a scalar field, ϕ, and their derivatives. The result for the most general action leading to second-order equations of motion on any background is the Horndeski action <cit.>, which readsS=∫ d^4x √(-g)∑_i=2^5 L_i[ϕ,g_μν]+ S_m [χ_i ,g_μν],where, as always throughout this paper, we have assumed minimal and universal coupling to matter in S_m. The building blocks of the scalar field Lagrangian areL_2 =K ,L_3 = -G_3 □ϕ , L_4 =G_4R+G_4X{(□ϕ)^2-∇_μ∇_νϕ∇^μ∇^νϕ}, L_5 =G_5G_μν∇^μ∇^νϕ -1/6G_5X{ (□ϕ)^3 -3∇^μ∇^νϕ∇_μ∇_νϕ □ϕ +2∇^ν∇_μϕ∇^α∇_νϕ∇^μ∇_αϕ} ,where K and G_A are functions of ϕ and X≡-∇^νϕ∇_νϕ/2, and the subscripts X and ϕ denote derivatives. The four functions K and G_A completely characterize this class of theories. Horndeski theories are not the most general viable class of theories. Indeed, it is possible to construct scalar-tensor theories with higher-order equations of motion that still contain a single scalar degree of freedom, such as the so-called “beyond Horndeski” extension <cit.>. It was recently realized that higher-order scalar-tensor theories propagating a single scalar mode can be understood as degenerate theories <cit.>.It is possible to prove that the exact linear dynamics predicted by the full Horndeski action, Eq. (<ref>), is completely described by specifying five functions of time, the Hubble parameter and <cit.>M^2_* ≡ 2(G_4-2XG_4X+XG_5ϕ-ϕ̇HXG_5X) , HM^2_*α_M ≡ d/dtM^2_* , H^2M^2_*α_K ≡ 2X(K_X+2XK_XX-2G_3ϕ-2XG_3ϕ X) +12ϕ̇XH(G_3X+XG_3XX-3G_4ϕ X-2XG_4ϕ XX) +12XH^2(G_4X+8XG_4XX+4X^2G_4XXX)-12XH^2(G_5ϕ+5XG_5ϕ X+2X^2G_5ϕ XX)+4ϕ̇XH^3(3G_5X+7XG_5XX+2X^2G_5XXX) , HM^2_*α_B ≡ 2ϕ̇(XG_3X-G_4ϕ-2XG_4ϕ X) +8XH(G_4X+2XG_4XX-G_5ϕ-XG_5ϕ X)+2ϕ̇XH^2(3G_5X+2XG_5XX),M^2_*α_T ≡ 2X[2G_4X-2G_5ϕ-(ϕ̈-ϕ̇H)G_5X],where dots are derivatives w.r.t. cosmic time t and H≡ da/(adt).While the Hubble parameter fixes the expansion history of the universe, the α_i functions appear only at the perturbation level. M_*^2 defines an effective Planck mass, which canonically normalizes the tensor modes. 
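As a quick consistency check of these definitions, consider minimally coupled quintessence, i.e. K=X-V(ϕ), G_3=G_5=0 and G_4=M_ Pl^2/2: all the derivative couplings drop out of the expressions above and one is left withM^2_*=M_ Pl^2 ,α_M=α_B=α_T=0 ,α_K = 2X/(M_ Pl^2H^2)=ϕ̇^2/(M_ Pl^2H^2) ,so that, of the α_i, only the kinetic term survives, as expected for a scalar field that does not mix with the metric.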
α_K and α_B (dubbed kineticity and braiding) are, respectively, the standard kinetic term present in simple DE models such as quintessence and the kinetic term arising from a mixing between the scalar field and the metric, which is typical of MG theories such as f(R). Finally, α_T has been named the tensor speed excess; it is responsible for deviations in the speed of gravitational waves while, in the scalar sector, it generates anisotropic stress between the gravitational potentials.It is straightforward to relate the free functions {M_*, α_K, α_B, α_T} defined above to the free functions {Ω, γ_1, γ_2, γ_3} used to describe Horndeski theories in the EFT formalism. The mapping between these sets of functions is reported in Appendix <ref>. For an explicit expression of the functions {Ω, γ_1, γ_2, γ_3} in terms of the original {K, G_A} in Eq. (7), we refer the reader to <cit.> (see also <cit.>). Regardless of the basis (αs or EFT), it is clear now that there are two possibilities: the first is to calculate the time dependence of the α_i or γ_i and the background consistently, so as to reproduce a specific sub-model of Horndeski; the second is to specify their time dependence directly. Finally, the evolution equation for the extra scalar field and the modifications to the gravitational field equations depend solely on this set of free functions; any cosmology arising from Horndeski gravity can be modelled with an appropriate time dependence for these free functions. §.§ Jordan-Brans-DickeThe Jordan-Brans-Dicke (JBD) theory of gravity <cit.>, a particular case of the Horndeski theory, is given by the actionS=∫ d^4x √(-g)M_ Pl^2/2[ϕ R-ω_ BD/ϕ∇_μϕ∇^μϕ-2V]+ S_m [χ_i ,g_μν] ,where V(ϕ) is a potential term and ω_ BD is a free parameter. GR is recovered when ω_ BD→∞. For our test, we will not consider a generic potential but a cosmological constant, Λ, as the source of dark energy. In the EFT language, linear perturbations in JBD theories are described by two functions, i.e. the Hubble rate H(t) (or equivalently c(τ) or Λ(τ)) andΩ(τ)=ϕ-1, γ_i(τ)=0 .We can see that in this case there are no terms consisting of purely modified perturbations (i.e. any of the γ_i). Alternatively, the α_i(τ) functions readα_M(τ) = dlnϕ/dln a,α_B(τ) = -α_M,α_K(τ) = ω_ BDα^2_M,α_T(τ) = 0.As with the EFT basis, one has to consider the Hubble parameter H(τ) as an additional building function. However, H(τ) can be written entirely as a function of the αs, meaning that the five functions of time needed to describe the full Horndeski theory reduce to two in the JBD case, consistently with the EFT description of the previous paragraph. In order to fix the above functions one has to solve the background equations to determine the time evolution of {H, ϕ}.§.§ Covariant Galileon The covariant Galileon model corresponds to the subclass of scalar-tensor theories of Eq. (<ref>) that (in the limit of flat spacetime) is invariant under a Galilean shift of the scalar field <cit.>, i.e. ∂_μϕ→∂_μϕ+b_μ (where b_μ is a constant four-vector). The covariant construction of the model presented in <cit.> consists of the addition of counter-terms that cancel higher-derivative terms that would otherwise be present in the naive covariantization (i.e. simply replacing partial with covariant derivatives; see however <cit.> for why the addition of these counter-terms is not strictly necessary). Galilean invariance no longer holds in spacetimes like FRW, but the resulting model is one with a very rich and testable cosmological behaviour. 
The Horndeski functions in Eqs. (<ref>) have this formL_2= c_2X -c_1M^3ϕ/2 ,L_3= 2c_3/M^3 X□ϕ ,L_4=(M_p^2/2+c_4/M^6X^2)R +2c_4/M^6X[(□ϕ)^2-ϕ_;μνϕ^;μν] ,L_5=c_5/M^9X^2G_μνϕ^;μν -1/3c_5/M^9X[(□ϕ)^3+2ϕ_;μ^νϕ_;ν^αϕ_;α^μ -3ϕ_;μνϕ^;μν□ϕ] .Here, as usual, we have set M^3=H_0^2M_p. Note that these definitions are related to Ref. <cit.> by c_3^ ours→ -c_3^ theirs and c_5^ ours = 3c_5^ theirs.There is some freedom to rescale the field and normalize some of the coefficients. Following Ref. <cit.> we can choose c_2<0 and rescale the field so that c_2 = -1 (models with c_2>0 have a stable Minkowski limit with ϕ_,μ=0 and thus no acceleration without a cosmological constant, see e.g. <cit.>). The term proportional to ϕ in ℒ_2 is uninteresting, so we will set c_1 = 0 from now on. This leaves us with three free parameters, c_3,4,5.An analysis of Galileon cosmology was undertaken in <cit.>, identifying some of the key features which we briefly touch upon. The Galileon contribution to the energy density at a=1 is <cit.>Ω_gal=-1/6ξ^2-2c_3ξ^3+15/2c_4ξ^4+7/3c_5ξ^5 ,(defined such that the coefficients are dimensionless) and whereξ≡ϕ̇H/M_ Pl H_0^2 . Given that the theory is shift symmetric, there is an associated Noether current satisfying ∇_μ J^μ = 0 <cit.>. For a cosmological background J^i = 0, J^0≡ n, and the shift-current decays with the expansion, n ∝ a^-3→0, at late times. The field evolution is thus driven to an attractor whereJ^0 ∝ -ξ-6c_3ξ^2+18c_4ξ^3+5c_5ξ^4 =0 ,i.e. ξ is a constant and the evolution of the background is independent of the initial conditions of the scalar field. Although it has been claimed that background observations favour a non-scaling behaviour of the scalar field <cit.>, CMB observations (not considered in Ref. <cit.>) require that the tracker has been reached before Dark Energy dominates (Fig. 11 of Ref. <cit.>).[Note that if inflation occurred it would set the field very near the attractor by the early radiation era <cit.>.] So, considering only the evolution on the attractor, one can use Eqs. (<ref>,<ref>) to trade two of the independent c_i for ξ and Ω_gal. It has thus become standard to refer to three models:* Cubic: c_4=c_5=0, with c_3 the only free parameter; choosing Ω_gal determines ξ. No additional parameters compared to ΛCDM. * Quartic: c_5 = 0; Ω_gal and ξ are free parameters. One more parameter than ΛCDM. * Quintic: c_3, ξ, Ω_gal are free parameters. Two extra parameters relative to ΛCDM.All of these models are self-accelerating models without a cosmological constant, and hence do not admit a continuous limit to ΛCDM.The covariant Galileon model is implemented in EFTCAMB and GALCAMB assuming the attractor solution Eq. (<ref>); hi_class, on the other hand, solves the full background equations both on- and off-attractor. The two approaches are equivalent if one chooses the initial conditions for the scalar field on the attractor, which will be the strategy for the rest of the Galileon comparison. When the attractor solution is considered with the above conventions, the alpha functions readM_*^2α_ K E^4=- ξ^2-12 c_3 ξ^3+54 c_4 ξ^4+20 c_5 ξ^5, M_*^2α_ B E^4=-2 c_3 ξ^3+12 c_4 ξ^4+5 c_5 ξ^5, M_*^2α_ M E^4= 6 c_4 Ḣ/ H^2ξ^4+4 c_5 Ḣ/ H^2ξ^5, M_*^2α_ T E^4=2 c_4 ξ^4+c_5 ξ^5(1+Ḣ/ H^2),where E =H(τ)/ H_0 is the dimensionless expansion rate with H=aH and a dot now denotes a derivative w.r.t. 
conformal time, τ.With the same conventions, the EFT functions readΩ = a^4 H_0^4 ξ ^4 (ℋ^2 (c_4-2 c_5 ξ)+2 c_5 ξℋ̇)/2 ℋ^6 , γ_3= -a^4 H_0^4 ξ ^4 (2 c_4 ℋ^2+ c_5 ξℋ̇)/ℋ^6 , γ_2= -a^3 H_0^3 ξ ^3/ℋ^7[ c_5 ξ ^2 ℋℋ̈+2 ξℋ^2 (4 c_5 ξ -c_4) ℋ̇ +ℋ^4 (ξ( c_5 ξ+14 c_4)-2 c_3)-6 c_5 ξ ^2 ℋ̇^2] ,γ_1= a^2 H_0^2 ξ ^3/4 ℋ^8[2 ξℋ^3 (5 c_5 ξ -c_4) ℋ̈ +42 c_5 ξ ^2 ℋ̇^3+ℋ^4 (9 ξ(7/3 c_5 ξ -2 c_4)+2 c_3) ℋ̇ +ξℋ^2 ( c_5 ξℋ⃛+10 (c_4-5 c_5 ξ) ℋ̇^2) -18 c_5 ξ ^2 ℋℋ̇ℋ̈+4 ℋ^6 (3 ξ( c_5 ξ +4 c_4)-2 c_3)] . §.§ f(R) gravity f(R) models of gravity are described by the following Lagrangian in the Jordan frameS=∫ d^4x √(-g)[R+f(R)]+S_m[χ_i,g_μν],where f(R) is a generic function of the Ricci scalar and the matter fields χ_i are minimally coupled to gravity.They represent a popular class of scalar-tensor theories which has been extensively studied in the literature <cit.> and for which N-body simulation codes exist <cit.>. Depending on the choice of the functional form of f(R), it is possible to design models that obey stability conditions and give a viable cosmology <cit.>. A well-known example of a viable model that also obeys solar system constraints is the one introduced by Hu & Sawicki in <cit.>.The higher-order nature of the theory offers an alternative way of treating f(R) models, i.e. via the so-called designer approach. In the latter, one fixes the expansion history and uses the Friedmann equation as a second-order differential equation for f[R(a)] to reconstruct the f(R) model corresponding to the chosen history <cit.>. Generically, for each expansion history, one finds a family of viable models that reproduce it; they are commonly labelled by the boundary condition at present time, f_R^0. Equivalently, they can be parametrized by the present day value of the functionB= f_RR/(1+f_R) R^' H/H^' ,where a prime denotes derivation w.r.t. ln a. The smaller the value of B_0, the smaller the scale at which the fifth force introduced by f(R) kicks in. As in the JBD case, f(R) models are described in the EFT formalism by two functions <cit.>, the Hubble parameter andΩ=f_R , γ_i(τ)=0 .This has been used to implement f(R) gravity into EFTCAMB, both for the designer models as well as for the Hu-Sawicki one <cit.>. Alternatively, they can be described by the Equation of State approach (EoS) implemented in CLASS_EOS_fR <cit.>.In this comparison we will focus on designer f(R) models, since our aim is that of comparing the Einstein-Boltzmann solvers at the level of their predictions for linear perturbations. §.§ Hořava-Lifshitz gravityThis model was introduced in Ref. <cit.>. It was extended in Ref. <cit.>, where it was shown that the action for the low-energy healthy version of Hořava-Lifshitz gravity is given by 𝒮_H = 1/16π G_H∫d^4x√(-g)[ K_ijK^ij-λ K^2 -2 ξΛ̅ + ξ R^(3)+η a_i a^i] +S_m[χ_i,g_μν] ,where λ, η, and ξ are dimensionless coupling constants, Λ̅ is the “bare” cosmological constant and G_H is the “bare" gravitational constant, related to Newton's constant via 1/16π G_H=M_ Pl^2/(2ξ-η) <cit.>. Note that the choice λ=ξ=1, η=0 restores GR.In general, departures from these values lead to the violation of the local Lorentz symmetry of GR and the appearance of a new scalar degree of freedom, known as the khronon. It should be pointed out that the model (<ref>) is equivalent to khronometric gravity <cit.>, an effective field theory formulated directly in terms of the khronon.[In turn, khronometric gravity is a variant of Einstein–Aether gravity <cit.>, an effective field theory describing the effects of Lorentz invariance violation. 
It should be pointed out that these models have identical scalar and tensor sectors.] The correspondence between {λ, η, ξ} and the coupling constants of the khronometric model {α, β, λ} isη= -α_kh/(β_kh -1) ,ξ = -1/(β_kh -1) ,λ = -(λ_kh +1)/(β_kh -1) ,where the subscript kh is added for clarity.The parameters λ, η, and ξ are subject to various constraints from the absence of vacuum Cherenkov radiation, Solar system tests, astrophysics, and cosmology <cit.>.The cosmological consequences of this model have been investigated in Refs. <cit.>, including interesting phenomenological implications for dark matter and dark energy.The map of the action Eq. (<ref>) to the EFT functions <cit.> is Ω = η/(2ξ-η),γ_4 =-2(1-ξ)/(2ξ-η), γ_3= -2(ξ-λ)/(2ξ-η),γ_6 = η/[4(2ξ-η)], γ_1 = 1/[2a^2H_0^2(2ξ-η)] (1+2ξ-3λ)(Ḣ- H^2), γ_2 = γ_5=0,which has been implemented in EFTCAMB <cit.>.§.§ Non-local gravity The non-local theory we consider here is that put forward in <cit.> (known as the RR model for short), which is described by the actionS_ RR = 1/16π G∫ d^4 x√(-g)[R - m^2/6 R□^-2R - ℒ_m],where ℒ_m is the Lagrange density of minimally coupled matter fields and □^-1 is a formal inverse of the d'Alembert operator □ = ∇^μ∇_μ. The latter can be expressed as(□^-1A)(x) = A_ hom(x) - ∫ d^4y√(-g(y))G(x, y)A(y),where A is some scalar function of the spacetime coordinate x, and the homogeneous solution A_ hom(x) and the Green's function G(x, y) specify the definition of the □^-1 operator. Eq. (<ref>) is meant to be understood as a toy model to explore the phenomenology of the R□^-2R term, while a deeper physical motivation for its origin is still not available (see <cit.> and references therein for works along these lines). In the absence of such a fundamental understanding, different choices for the structure of the □^-1 operator (i.e. different homogeneous solutions and G(x, y)) should be regarded as different non-local models altogether, and the mass scale m treated as a free parameter.In cosmological studies of the RR model, it has become common to cast the action of Eq. (<ref>) into the following "localized" formS_ RR, loc = 1/16π G∫ d^4 x√(-g)[R - m^2/6 R S - ξ_1(□ U + R) - ξ_2(□ S + U) - ℒ_m],where U and S are two auxiliary scalar fields and ξ_1 and ξ_2 are two Lagrange multipliers that enforce the constraints□U=- R , □S=- U.Invoking a given (left) inverse, one can solve the last two equations formally asU=- □^-1 R , S=- □^-1 U = □^-2 R.This allows one to integrate out U and S from the action (as well as ξ_1 and ξ_2), thereby recovering the original non-local action. The equations of motion associated with the action of Eq. (<ref>) are G_μν - m^2/6K_μν = 8 π G T_μν, □U = -R, □S = -U,withK_μν≡ 2SG_μν - 2∇_μ∇_ν S - 2∇_(μS∇_ν)U+ (2□ S + ∇_α S∇^α U - U^2/2)g_μν.An advantage of using Eq. (<ref>) is that the resulting equations of motion become a set of coupled differential equations, which are comparatively easier to solve than the integro-differential equations of the non-local version of the model. To ensure causality one must impose by hand that the Green's function used within □^-1 in Eqs. (<ref>) and (<ref>) is of the retarded kind; this condition is naturally satisfied by integrating the localized version forward in time. Further, the quantities U and S should not be regarded as physical propagating scalar degrees of freedom, but instead as mere auxiliary scalar functions that facilitate the calculations. 
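For later reference, on a spatially flat FRW background, where U and S are homogeneous and R=6(Ḣ+2H^2), the two constraints reduce to ordinary differential equations in cosmic time,Ü + 3HU̇ = R , S̈ + 3HṠ = U ,and it is this localized system that gets integrated forward in time at the background level.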
In practice, this means that once the homogeneous solution associated with □^-1 is specified, the differential equations of the localized problem must be solved with the one compatible choice of initial conditions for the scalar functions. Here, we fix U, S and their first derivatives to zero deep in the radiation dominated regime (as was done, for instance, in <cit.>; see <cit.> for a study of the impact of different initial conditions), which corresponds to choosing vanishing homogeneous solutions for them. Once the initial conditions of the U and S scalars are fixed, the only remaining free parameter in the model is the mass scale m, which effectively replaces the role of Λ in ΛCDM and can be derived from the condition to render a spatially flat Universe.Finally, note that the Horndeski Lagrangian is a local theory featuring one propagating scalar degree of freedom and hence does not encompass the RR model.§ THE CODESThere are a number of EB solvers, some of which are described below, developed to explore deviations from GR. While, schematically, we have summarized how to study linear cosmological perturbations, there are a number of subtleties which we will mention now briefly. For a start, there is redundancy (or gauge freedom) in how to parametrize the scalar modes of the linearized metric; typically EB solvers make a particular choice of gauge – the synchronous gauge – although another common gauge – the Newtonian gauge – is particularly useful in extracting physical understanding of the various effects at play. Also it should be noted that the universe undergoes an elaborate thermal history: it will recombine and subsequently reionize. It is essential to model this evolution accurately as it has a significant effect on the evolution of perturbations. Another key aspect is the use of line of sight methods (mentioned in the introduction) that substantially speed up the numerical computation of the evolution of perturbations by many orders of magnitude; as shown in <cit.> it is possible to obtain an accurate solution of the Boltzmann hierarchy by first solving a truncated form of the lower order moments of the perturbation variables and then judiciously integrating over the appropriate kernel convolved with these lower order moments. All current EB solvers use this approach. Most (but not all) EB solvers currently being used are modifications of either CAMB or CLASS. This means that they have evolved from very different code bases, are in different languages and use (mostly) different algorithms. This is of tremendous benefit when we compare results in the next section. We should highlight, however, that there are a couple of cases – DASh and COOP – that do not belong to this genealogy. The codes used in this comparison, along with the models tested, are summarized in Fig. <ref> and Tab. <ref>, and the details of each code can be found in the following sections.§.§ EFTCAMB EFTCAMB is an implementation <cit.> of the EFT of dark energy into the CAMB <cit.> EB solver (coded in fortran90) which evolves the full set of perturbations (in the synchronous gauge) arising from the action in Eq. (<ref>), after a built-in module checks for the stability of the model under consideration. The latter includes conditions for the avoidance of ghost and gradient instabilities (in both the scalar and tensor sectors), well-posedness of the scalar field equation of motion and prevention of exponential growth of DE perturbations. 
It can treat specific models (such as Jordan-Brans-Dicke, designer f(R), Hu-Sawicki f(R), Hořava-Lifshitz gravity, Covariant Galileon and quintessence) through an appropriate choice of the EFT functions. It also accepts phenomenological choices for the time dependence of the EFT functions and of the dark energy equation of state which may not be associated to specific theories. EFTCAMB has been used to place constraints on f(R) gravity <cit.>, Hořava-Lifshitz <cit.> and specific dark energy models <cit.>. It has also been used to explore the interplay between massive neutrinos and dark energy <cit.>, the tension between the primary and weak lensing signal in CMB data <cit.> as well as the form and impact of theoretical priors <cit.>. An up-to-date implementation can be downloaded from http://eftcamb.org/. The JBD EFTCAMB solver is based on the EFTCAMB_Oct15 version, while the others are based on the most recent EFTCAMB_Sep17 version.§.§ hi_class hi_class (Horndeski in the Cosmic Linear Anisotropy Solving System) is an implementation of the evolution equations in terms of the α_i(τ) <cit.> as a series of patches to the CLASS EB solver <cit.> (coded in C). hi_class solves the modified gravity equations for Horndeski's theory in the synchronous gauge (CLASS also incorporates the Newtonian gauge) starting in the radiation era, after checking conditions for the stability of the perturbations (in both the scalar and tensor sectors). The code has been used to place constraints on the α_i(τ) with current CMB data <cit.>, study relativistic effects on ultra-large scales <cit.>, forecast constraints with stage 4 clustering, lensing and CMB data <cit.> and constrain Galileon gravity models <cit.>. The current public version of hi_class is v1.1 <cit.>. The only difference between this version and the first one (v1.0) is that v1.1 incorporates all the parametrizations used in this paper. This guarantees that the results provided in this paper are valid also for v1.0. Lagrangian-based models, such as JBD and Galileons, are still in a private branch of the code and they will be released in the future. The code is available from <www.hiclass-code.net>.§.§ COOP Cosmology Object Oriented Package (COOP) <cit.> is an Einstein-Boltzmann code that solves cosmological perturbations including very general deviations from the ΛCDM model in terms of the EFT of dark energy parametrization <cit.>. COOP assumes minimal coupling of all matter species and solves the linear cosmological perturbation equations in the Newtonian gauge, obtained from the unitary gauge ones by a time transformation t → t +π. For the ΛCDM model, it solves the evolution equation of the spatial metric perturbation and the matter perturbation equations; details are given in Ref. <cit.>. Beyond the ΛCDM model, COOP additionally evolves the scalar field perturbation π, using Eqs. (109)–(112) of Ref. <cit.> and verifying the absence of ghost and gradient instabilities along the evolution. Once the linear perturbations are solved, COOP computes CMB power spectra using a line-of-sight integral approach <cit.>. Matter power spectra are computed via a gauge transformation from the Newtonian to the CDM rest-frame synchronous gauge. COOP also includes the dynamics of the beyond Horndeski operator and has been used to study the signature of a non-zero α_ H on the matter power spectrum as well as on the primary and lensing CMB signals <cit.>. COOP v1.1 has been used for this comparison. 
The code and its documentation are available at <www.cita.utoronto.ca/~zqhuang>.§.§ Jordan-Brans-Dicke solvers – modified CAMB and DASh A systematic study placing state-of-the-art constraints on Jordan-Brans-Dicke gravity was presented in <cit.>, using a modified version of CAMB and an altogether different EB solver – the Davis Anisotropy Shortcut Code (DASh) <cit.>. DASh was initially written as a modification of CMBFAST <cit.> by separating out the computation of the radiation and matter transfer functions from the computation of the line-of-sight integral.The code, in its initial version, precomputed and stored the radiation and matter transfer functions on a grid so that any model was subsequently calculated fast via interpolation between the grid points, supplemented with a number of analytic estimates and fitting functions that speed up the calculation without significant loss of accuracy. Such a speedup allowed the efficient traversal of large multi-dimensional parameter spaces with MCMC methods and made the study of models containing such a large parameter space possible <cit.>.The use of a grid and semi-analytic techniques was abandoned in later, not publicly available, versions of DASh, which returned to the traditional line-of-sight approach of other Boltzmann solvers.It is possible to solve the evolution equations in both the synchronous and Newtonian gauges, and the code is therefore amenable to a robust internal validation of the evolution algorithm. Over the last few years a number of gravitational theories, such as the Tensor-Vector-Scalar theory <cit.> and the Eddington-Born-Infeld theory <cit.>, have been incorporated into the code, and it has recently been used for cross-checks with CLASS in an extensive study of generalized dark matter <cit.>. In <cit.>, the authors used the internal consistency checks within DASh and the cross-checks between DASh and a modified version of CAMB to calibrate and validate their results. We will use their modified CAMB code as the baseline against which to compare EFTCAMB, hi_class and CLASSig.§.§ Jordan-Brans-Dicke solvers – CLASSig The dedicated Einstein-Boltzmann solver CLASSig <cit.> for Jordan-Brans-Dicke (JBD) gravity was used in <cit.> to constrain the simplest scalar-tensor dark energy models with a monomial potential with the two Planck product releases and complementary astrophysical and cosmological data. CLASSig is a modified version of CLASS which implements the Einstein equations for JBD gravity at both the background and the linear perturbation levels without any use of approximations. CLASSig adopts a redefinition of the scalar field (γσ^2 = ϕ) which recasts the original JBD theory in the form of induced gravity, in which σ has a standard kinetic term. CLASSig implements linear fluctuations both in the synchronous and in the longitudinal gauge (although only the synchronous version is kept updated with CLASS). The implementation and results of the evolution of linear fluctuations have been checked against the quasi-static approximation valid for sub-Hubble scales during the matter dominated stage <cit.>. In its original version, the code implements as a boundary condition the consistency between the effective gravitational strength in the Einstein equations at present and the one measured in a Cavendish-like experiment (γσ_0^2 = (1+8 γ)/[(1+6 γ) 8 π G], where G=6.67 × 10^-8 cm^3 g^-1 s^-2 is the Newton constant) by tuning the potential. 
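For reference, this field redefinition recasts the JBD action of Eq. (<ref>) in the induced gravity formS = ∫ d^4x √(-g)[ γσ^2 R/2 - ∇_μσ∇^μσ/2 - V(σ)] + S_m(up to normalization conventions for σ), with the couplings related by the standard induced-gravity correspondenceω_ BD = 1/(4γ) ,so that the GR limit ω_ BD→∞ corresponds to the weak coupling limit γ→0.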
For the current comparison, we instead fix as initial conditions γσ^2 (a=10^-15)=1 and σ̇(a=10^-15)= 0, consistently with the choice used in this paper. §.§ Covariant Galileon – modified CAMB A modified version of CAMB to follow the cosmology of the Galileon models was developed in <cit.>, and subsequently used in cosmological constraints in <cit.>. The code structure is exactly as in default CAMB (gauge conventions, line-of-sight integration methods, etc.), but with the relevant physical quantities modified to include the effect of the scalar field. At the background level, this includes modifying the expansion rate to be that of the Galileon model: this may involve numerically solving for the background evolution, or using the analytic formulae of the so-called tracker evolution (see Sec. <ref>). At the linear perturbation level, the modifications entail the addition of the Galileon contribution to the perturbed total energy-momentum tensor. More precisely, one works out the density perturbation, heat flux and anisotropic stress of the scalar field, and appropriately adds these contributions to the corresponding variables in default CAMB (due to the gauge choices in CAMB, one does not need to include the pressure perturbation; see <cit.> for the derivation of the perturbed energy-momentum tensor of the Galileon field). In addition to these modifications to the default CAMB variables, the code also defines two extra variables to store the evolution of the first and second derivatives of the Galileon field perturbation, which are solved for with the aid of the equation of motion of the scalar field and enter the determination of the perturbed energy-momentum tensor. Before solving for the perturbations, the code first performs internal stability checks for the absence of ghost and Laplace instabilities, both in the scalar and tensor sectors.We refer the reader to <cit.> for more details about the model equations as they are used in this modified version of CAMB. While the latter is not publicly available[It will nonetheless be made available by the authors upon request.], we will use this EB solver to compare codes for this class of models. §.§ f(R) gravity code – CLASS_EOS_fR CLASS_EOS_fR implements the Equation of State approach (EoS) <cit.> into the CLASS EB solver <cit.> for a designer f(R) model. In the EoS approach, the f(R) modifications to gravity are recast as an effective dark energy fluid at both the homogeneous and inhomogeneous (linear perturbation) level.The degrees of freedom of the perturbed dark sector are the gauge-invariant overdensity and velocity fields, as described in detail in <cit.>. These obey a system of two coupled first-order differential equations, which involve the expressions of the gauge-invariant dark-sector anisotropic stress, Π_ de, and entropy perturbation, Γ_ de. The expansion of Π_ de and Γ_ de in terms of the other fluid degrees of freedom (including matter) constitutes the equations of state at the perturbed level. They are the key quantities of the EoS approach.The f(R) modifications to gravity manifest themselves in the coefficients that appear in the expressions of Π_ de and Γ_ de in front of the perturbed fluid degrees of freedom; see <cit.> for the exact expressions.At the numerical level, the advantage of this procedure is that the implementation of f(R) modifications to gravity reduces to the addition of two first-order differential equations to the chosen EB code (e.g. 
CLASS), while none of the other pre-existing equations of motion, for the matter degrees of freedom and gravitational potential, needs to be directly modified, since each automatically receives the contribution of the total stress-energy tensor.In the code CLASS_EOS_fR, the effective dark-energy fluid perturbations are solved from a fixed initial time up to the present – the initial time being chosen so that dark energy is negligible compared to matter and radiation.At this stage, the code CLASS_EOS_fR is operational for f(R) models in both the synchronous and conformal Newtonian gauges.It shall soon be extended to other main classes of models such as Horndeski and Einstein-Aether theories.A dedicated paper with details of the implementation and theoretical results and discussion is in preparation <cit.>.§.§ Hořava-Lifshitz gravity code CLASS-LVDM This code was developed in order to test the model of dark matter with Lorentz violation (LV) proposed in Ref. <cit.>. The code is based on the CLASS code v1.7, and solves Eqs. (16)–(23) of Ref. <cit.>. The absence of instabilities is achieved by a proper choice of the parameters of LV in gravity and dark matter. All the calculations are performed in the synchronous gauge, and if needed, the results can be easily transformed into the Newtonian gauge. Further details on the numerical procedure can be found in Ref. <cit.>, where a similar model was studied.The code is available at https://github.com/Michalychforever/CLASS_LVDM.Compared to the standard CLASS code, one has to additionally specify four new parameters: α, β, λ (the parameters of LV in gravity in the khronometric model, described in Sec. <ref>) and Y (the parameter controlling the strength of LV in dark matter).For the purposes of this paper we switch off the latter by putting Y≡ 0 and focus only on the gravitational part of khronometric/Hořava-Lifshitz gravity.The details of the differences in the implementation w.r.t. EFTCAMB can be found in Appendix <ref>.§.§ Non-local gravity – modified CAMB and CLASS We compare two EB codes, a modified version of CAMB and a modified version of CLASS, that compute the cosmology of a specific model of non-local gravity modifying the Einstein-Hilbert action by a term ∼ m^2 R □^-2 R (see Sec. <ref> for details).The modified version of CAMB[This version of the CAMB code for the RR model is not publicly available, but it will be shared by the authors upon request.] was developed by the authors of the GALCAMB code, and as a result the strategy behind the code implementation is in all respects similar to that already described in Sec. <ref> for the Galileon model. The strategy and specific equations used for modifying CLASS[The code is publicly available, see <cit.> for the link.] are outlined in detail in Appendix A of <cit.>, to which we refer the reader for an exhaustive account. In both cases, the equations that end up being coded are those obtained from the localized version of the theory that features two dynamical auxiliary scalar fields (see Sec. <ref>). Within both versions, the background evolution is obtained numerically by solving the system comprising the modified Friedmann equations together with the differential equations that govern the evolution of the additional scalar fields. Both implementations include a trial-and-error search for the free parameter m of the model to yield a spatially flat Universe. 
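To illustrate the background part of this procedure, the toy sketch below (our own illustration, not extracted from either code) integrates the homogeneous constraints Ü+3HU̇=R and S̈+3HṠ=U of Sec. <ref> in e-folds N=ln a on a fixed matter+Λ background; the real solvers instead couple these fields to the modified Friedmann equations and shoot for m.

# Toy integration of the RR-model auxiliary fields U, S on a fixed
# matter+Lambda background (no backreaction, no search for m).
# In e-folds N = ln(a):  U'' + (3 + H'/H) U' = R/H^2 = 6(H'/H + 2),
#                        S'' + (3 + H'/H) S' = U/H^2   (S in units of 1/H0^2).
import numpy as np
from scipy.integrate import solve_ivp

Om, Ol = 0.3, 0.7                                  # illustrative densities
E2   = lambda N: Om*np.exp(-3.0*N) + Ol            # H^2/H0^2
dlnH = lambda N: -1.5*Om*np.exp(-3.0*N)/E2(N)      # H'/H

def rhs(N, y):
    U, dU, S, dS = y
    eps = dlnH(N)
    ddU = 6.0*(eps + 2.0) - (3.0 + eps)*dU         # from box U = -R
    ddS = U/E2(N)        - (3.0 + eps)*dS          # from box S = -U
    return [dU, ddU, dS, ddS]

# vanishing initial conditions well before dark energy domination
sol = solve_ivp(rhs, (-7.0, 0.0), [0., 0., 0., 0.], rtol=1e-8, atol=1e-10)
U0, S0 = sol.y[0, -1], sol.y[2, -1]                # present-day values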
At the perturbation level, one works out the perturbed energy-momentum tensor of the non-local sector, and then appropriately adds the corresponding contribution to the relevant variables in the default CAMB code, whereas in the CLASS version these contributions have been put directly into the linearized Einstein equations. The resulting equations depend on the perturbed auxiliary fields, as well as their time derivatives, which are solved for with the aid of the equations of motion of the scalar fields. The modified CAMB code was used in <cit.> to display typical signatures in the CMB temperature power spectrum (although <cit.> focuses more on aspects of nonlinear structure formation), whereas the modified CLASS one was used in various observational constraint studies <cit.>.§ TESTS In this section we present the tests that we have performed to compare the codes described in the previous section. Ideally one should compare codes for a wide range of both gravitational and cosmological parameters. If one is to be thorough, this approach can be computationally prohibitive. Furthermore, that is not the way code comparisons have been undertaken in other situations. In practice one chooses a small selection of models and compares the various observables in these cases. This was the approach taken in the original EB code comparisons <cit.> but is also used in, for example, comparisons between N-body codes for ΛCDM simulations as well as modified gravity theories <cit.>. Therefore, we will follow this approach here: for each theory we will compare different codes for a handful of different parameters. A crucial feature of the comparisons undertaken in this section is that they always involve at least a comparison between a modified CAMB and a modified CLASS EB solver. This means that we are comparing codes which, at their core, are very different in architecture, language and genesis. For the majority of cases, we will use EFTCAMB and hi_class as the main representatives of CAMB and CLASS respectively, but in one case (non-local gravity) we will compare two independent codes. Another aspect of our comparison is that at least one of the codes for each model is (or will shortly be made) publicly available.In our comparisons, we will be aiming for agreement between codes – up to ℓ=3000 for the CMB spectra and k=10hMpc^-1 in the matter power spectrum – such that the relative distance between observables is of order 0.1%, with the exception of low multipoles (ℓ<100), where we accept differences up to 0.5% since these scales are cosmic variance limited. We consider this good agreement, since it is smaller than the cosmic variance limit out to the smallest scales considered, i.e. 0.1% at ℓ=3000 in the most stringent scenario (see e.g. <cit.>). We shall see that for ℓ≲300 in the EE spectra the relative difference between codes exceeds the 1% bound. This clearly misses our target agreement, but it is not worrisome: on those scales the data are noise dominated and the cosmic variance is larger than 1%. It is important to stress here that all the relative differences shown in the following figures are expressed in [%] units, with the exception of δ C_ℓ^TE. Since C_ℓ^TE crosses zero, a relative difference would be ill-defined there, so we show the simple difference in [μ K^2] units instead.Another crucial aspect has been the calibration of the codes. To do so, we fixed the precision parameters so that all the tests of the following sections (i) had at least the target agreement, and (ii) the speed of each run was still fast enough for MCMC parameter estimation. 
While the first condition was explained in the previous paragraph, for the latter we established a factor of 3-4 as the maximum speed loss w.r.t. the same model run with standard precision parameters. This factor is a rough estimate that assumes that CPU speeds will increase over the next years, but even with present computing power MCMC analyses with these calibrated codes are already possible. It is important to stress that most of the increased precision parameters are necessary only to improve the agreement in the lensed CMB spectra on small scales, which is by default 1-2 orders of magnitude worse than for the other spectra. We will be parsimonious in the presentation of results. As will become clear, we have undertaken a large number of cross-comparisons and it would be cumbersome to present countless plots (or tables). Therefore, we will limit ourselves to showing a few significant plots that help us illustrate the level of agreement we are obtaining and spell out, in the text, the battery of tests that were undertaken for each class of models. We have found our results (i.e. the precision with which codes agree) to be relatively insensitive to variations of the cosmological parameters.Before showing the results of our tests, it is useful to stress that all the precision parameters used by the codes to generate these figures are specified in Appendix <ref>, while the cosmological parameters for each model are reported in Appendix <ref>.§.§ Jordan-Brans-Dicke gravity We have validated the EFTCAMB, hi_class and CLASSig EB solvers in two steps. We have first used DASh and the modified CAMB of <cit.> to validate EFTCAMB and hi_class, with particular caveats. The current implementation of DASh uses an older version of the recombination module RECFAST – specifically RECFAST 1.2. We have run EFTCAMB and hi_class with this older recombination module and found that the agreement with DASh is at the sub-percent level. We have confirmed that this is also true in a comparison between these codes and the modified CAMB of <cit.>. We note the codes of <cit.> have only been cross-checked and calibrated out to ℓ=2000 and for a maximum wavenumber k_ max=0.5hMpc^-1.With the more restricted cross-check of the first step in hand, we have then compared EFTCAMB, hi_class and CLASSig with the more up-to-date recombination module – specifically RECFAST 1.5 – and out to large ℓ and k. There are two main effects on the perturbation spectrum in JBD gravity: the effect of the scalar field on the background expansion and the interaction of scalar field fluctuations with the other perturbed fields. In Fig. <ref> we show C_ℓ and P(k) for a few different values of ω_ BD (see Appendix <ref> for the cosmological parameters used in these figures) as well as the relative difference for these quantities between EFTCAMB and hi_class. We can clearly see a remarkable agreement between the codes, well within what is required for current and future precision analysis. It is possible to notice that for ℓ≲10^2 the disagreement in the temperature C_ℓ increases for all the models, up to ≃0.5%. As we shall see, this is a common feature when comparing a CAMB-based code with a CLASS-based code, and it is present even for ΛCDM, i.e. using CAMB and CLASS instead of our modified versions (see e.g. Fig. <ref>). Moreover, it has been checked that for ΛCDM a systematic bias 1-2 orders of magnitude smaller than the cosmic variance at ℓ<100 does not affect parameter extraction with present data; see Section 2 of <cit.>. 
Therefore, even if this issue deserves further investigation for DE/MG models, we believe that a better agreement at those scales is beyond the scope of this paper. The other issue in Fig. <ref>, common to all the models we show in this paper, is that the disagreement in C^EE_ℓ on very large scales exceeds the 1% bound. As we already mentioned, this is due to the fact that their amplitude approaches zero, so the relative difference is artificially boosted. This is not worrisome, since (i) the amplitude of the polarization angular power spectrum is very small on large scales w.r.t. small scales and (ii) we are protected by cosmic variance. Finally, note that the agreement holds even for extremely small values of ω_ BD; this is essential if these codes are to be accurately incorporated into any Monte Carlo parameter estimation algorithm.Similar results can be found in Fig. <ref>, where we compare the outputs of BD-CAMB, CLASSig and (for reference) hi_class with the outputs generated by EFTCAMB. For simplicity, we show the result only for C^TT_ℓ and P(k) at z=0, but the other spectra have similar behaviour as in Fig. <ref>. It is possible to note that the level of agreement is well within the 1% requirement for all the codes, validating their outputs even in “extreme” regions of the parameter space.This is an important first cross-check between EB solvers. JBD is a canonical theory, widely studied in many regimes, and at the core of many scalar-tensor theories. It is a simple model to look at in that the background is monotonic and only a very small subset of gravitational parameters is non-trivial.§.§ Covariant Galileons The Covariant Galileon theory has been implemented in the current versions of EFTCAMB and hi_class. Both these codes were compared against the modified CAMB described in Section <ref>, i.e. GALCAMB. The differences in the implementation are that EFTCAMB and GALCAMB assume the attractor solution for the evolution of the background scalar field, while hi_class evolves the full background equations with the possibility of having arbitrary initial conditions. For comparison with other codes, in hi_class we will set the initial conditions for the background scalar field as if it were on the attractor, to make the two approaches consistent and comparable. As explained in Section <ref>, and unlike in the JBD case (which is not self-accelerating), there is no extra parameter to vary in the case of the cubic Galileon: once one is on the attractor and one chooses the matter densities, the evolution is completely pinned down. On the contrary, for the quartic and quintic Galileon models there are one (for the quartic) or two (for the quintic) additional parameters. This implies that care must be taken in enforcing the stability conditions (i.e. enforcing ghost-free backgrounds or preventing the existence of gradient instabilities).In Fig. <ref> it is possible to see the CMB angular power spectra and the matter power spectrum at different redshifts for two cubic Galileon models, one quartic and one quintic. While the exact values of the parameters used for this comparison are shown in Appendix <ref>, here it is important to stress that all these models have been chosen to be bad fits to current CMB and expansion history data. From these figures it can be seen that EFTCAMB and hi_class agree to within the required precision. We have checked that they are also completely consistent with GALCAMB, as can be seen in Fig. <ref>, where we show the comparison with GALCAMB. 
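As an aside, the bookkeeping involved in setting up tracker initial conditions is simple: both the tracker condition and the Ω_gal relation of Sec. <ref> are linear in the c_i, so the cubic and quartic coefficients follow from (Ω_gal, ξ) by elementary algebra. The snippet below is our own illustration of this (with illustrative numbers), not code from any of the solvers.

# Fixing covariant Galileon coefficients on the tracker (c1 = 0, c2 = -1),
# using J0 = -xi - 6 c3 xi^2 + 18 c4 xi^3 + 5 c5 xi^4 = 0 and
# Omega_gal = -xi^2/6 - 2 c3 xi^3 + (15/2) c4 xi^4 + (7/3) c5 xi^5.
import numpy as np

def cubic(Omega_gal):
    # c4 = c5 = 0: the two relations give xi and c3 in closed form
    xi = np.sqrt(6.0*Omega_gal)
    c3 = -1.0/(6.0*xi)
    return xi, c3

def quartic(Omega_gal, xi):
    # c5 = 0: solve the two relations, which are linear in (c3, c4)
    A = np.array([[-6.0*xi**2, 18.0*xi**3],
                  [-2.0*xi**3,  7.5*xi**4]])
    b = np.array([xi, Omega_gal + xi**2/6.0])
    return np.linalg.solve(A, b)   # (c3, c4)

xi, c3 = cubic(0.7)   # e.g. Omega_gal = 0.7 gives xi ~ 2.05, c3 ~ -0.081

In the quintic case one extra relation would be needed, which is why c_3, ξ and Ω_gal all remain free parameters there.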
As in the case of JBD, we have varied the cosmological and gravitational parameters and found that this agreement is robust.§.§ f(R) Gravity f(R) gravity has been implemented in both EFTCAMB and CLASS_EOS_fR following two independent approaches.[Note that, even though f(R) gravity is a sub-class of Horndeski theories, it has not been implemented in the current version of hi_class.] We focus on designer f(R) models that result in a ΛCDM expansion history and differ from GR at the perturbation level, displaying an enhancement of small-scale structure clustering. Once the expansion history has been chosen, one has to fix a residual parameter B_0, corresponding to the present value of B, as in Eq. (<ref>). We focus on two different values of the B_0 parameter: at first we compare cosmological predictions for B_0=1, a value that has already been excluded by experiments, to make sure no difference between the two codes is hidden by the choice of a small parameter; we then focus on B_0=0.01, which is at the boundary of CMB-only experimental constraints <cit.> and in the range of interest for N-body simulations. In Fig. <ref> it is possible to see that all compared spectra agree within the required precision. Discrepancies in all CMB spectra are consistent with the comparison to other codes and within 0.5%. As in the previous cases, we have varied cosmological and gravitational parameters and found that the agreement is robust. The matter power spectrum comparison shows some residual difference that reaches approximately 1% on very small scales, k=10h/Mpc, for large values of the free parameter, B_0=1. The latter value is already largely excluded by CMB-only data, and the scales involved are affected by non-linear clustering, hence this discrepancy is not worrisome. §.§ Non-local Gravity For the comparison of the two EB solvers of the non-local RR model, we have considered three sets of cosmological parameter values, shown in Appendix <ref>. Two of them are markedly poor fits to the data (RR-2 and RR-3 in Fig. <ref>), while the other is closer to what is allowed observationally (called RR-1 here). In Fig. <ref>, the ΛCDM predictions shown correspond to the same parameter values as RR-1. Recall that the ΛCDM and RR models have the same number of free parameters. The corresponding figures show that the level of agreement between these two EB solvers meets the required standards for all spectra, scales and redshifts shown. In fact, the shapes of the relative difference curves are similar between ΛCDM and the RR models, which suggests that the observed differences (small as they are) are mostly due to intrinsic differences in the default codes (CAMB and CLASS), and less so due to the modifications themselves. §.§ Hořava-Lifshitz Gravity We now proceed to validate EFTCAMB and CLASS-LVDM for Hořava-Lifshitz gravity. Because of the different implementations of the background solver (see Appendix <ref> for details), we have limited the comparison to the subset of parameters satisfying the condition G_cosmo=G_N, eliminating all the differences arising from it. In the top panels of Fig. <ref> we compare the TT, EE, lensing and TE power spectra for two different models – HL-A and HL-B – and a reference ΛCDM model. These are defined by the sets of parameters specified in Appendix <ref>. As we can see from the plots, the codes always agree to within 1% precision for the TT, EE and TE power spectra. 
As for the lensing power spectrum, we can notice a deviation of order 3% at both small and large scales. Looking more carefully, one can notice that this difference is not a peculiarity of the MG model but is already present at the ΛCDM level (blue line). The differences at large ℓ are common to all the models under investigation. As for the discrepancy at low ℓ, the fact that it is present even for ΛCDM suggests that it is caused by an inaccuracy in CLASS v1.7, on which CLASS-LVDM is based, and not by the modification itself. Indeed, one may observe that this issue is absent in hi_class, which is based on an updated version of the CLASS code.In the same figure (bottom panels) we show the matter power spectra for the same models. We can see that the two codes agree well up to k≃ 0.1 hMpc^-1, always below the 1% precision. On scales k≳ 0.1 hMpc^-1 it is possible to notice that the relative differences in P(k) increase drastically, both for ΛCDM and for the MG models. As in the C_ℓ case, this discrepancy is due to the outdated version of the CLASS code (v1.7). For illustrative purposes we decided to cut the matter power spectrum at the value k=1 hMpc^-1. It should be pointed out that the scales k≳ 0.1 hMpc^-1 are significantly affected by non-linear clustering, therefore the output of linear Boltzmann codes in this region is of little practical value.Note that we used the standard CLASS accuracy flags except for lensing, where a more accurate mode has been employed by raising the corresponding lensing precision parameter. §.§ Parametrized Horndeski functions Up to this point we have considered a specific set of theories which, albeit representative, only involve a very restricted set of possible time evolutions for either the Horndeski or EFT functions. This means that either some of the free functions are set to zero or a lower dimensional subspace of the full function space is explored (see Eq. (<ref>) for a good example). We now need to explore a wider choice of theories and time evolutions.Ideally, we should somehow explore and compare the full parameter space described by the time dependent functions {α_i(τ), w_ DE(τ)}. This is obviously impossible, but also unnecessary for our purposes. Indeed, the only modifications introduced by COOP, EFTCAMB and hi_class are at the level of the Einstein and scalar field equations. Therefore, it is sufficient to use a parametrization that is capable of capturing all the terms present there. Checking that the three codes agree for particular parametrizations, such as rapidly varying time dependent functions, would in practice correspond to a check on the differential equation solvers of each code, and this is beyond the scope of this work.The guiding principle in choosing a particular parametrization has been to recover standard gravity at early times, to preserve the physics of the CMB and to ensure a quasi-standard evolution until recent times, i.e. approximately until the onset of dark energy. For example, a parametrization closely related to this principle, which has been used in both data analysis <cit.> and forecasts <cit.>, takes the formw_ DE = w_0 + (1-a) w_a , α_i= c_iΩ_ DE .Even if this parametrization is capable of turning on all the possible freedom of Horndeski theories up to linear level, it may not be sufficient. Indeed, the system of equations for the evolution of the perturbations contains both {α_i(τ), w_ DE(τ)} and their time derivatives. Thus, we have extended this parametrization to be able to modulate the magnitude of the derivatives of these functions. 
The simplest choice is thenw_ DE = w_0 + (1-a) w_a , M^2_* =1+δ M^2_0 a^η_0 , α_i =α^0_i a^η_i ,where i stands for K, B, T. The translation from the α_i functions to the EFT functions is provided in Appendix <ref>.In Fig. <ref>, we show the lensed temperature C_ℓ and the matter power spectrum P(k) calculated at different redshifts for a few different values of {w_0, w_a}, δ M^2_0, α^0_i and η_i (see Appendix <ref> for the list of values used in this comparison). The cosmological parameters are the same for each curve in the plots. The models shown in the figures were built so as to isolate the effect of each α_i. Considering the fact that α_K and α_T alone are known to have a small effect on the observables, e.g. <cit.>, we have always combined them with other functions (either α_i or w_ DE). The α_K, B, M, T + w model (green dotted line) contains all the possible modifications that a Horndeski-like theory can produce. We should stress that the values used here were chosen specifically to have large deviations w.r.t. the reference ΛCDM model and w.r.t. each other. During the comparison process many more models were explored, both close to ΛCDM and unrealistically far from it.An additional requirement to accept models for this comparison was that they were not sensitive to the specific initial conditions (ICs) set for the perturbations: the codes are set up to start with and evolve superhorizon adiabatic ICs, as predicted by standard inflation. Typically, in models which go back to GR quickly enough at early times, the other, isocurvature, modes decay with respect to the adiabatic mode, so it is irrelevant what the initial condition for the scalar field is, since it will reach the required adiabatic mode quickly. However, there are situations, typically when the modification of gravity does not decrease rapidly enough in the past, in which the isocurvature modes do not decay quickly enough (or even grow), and then it is very important that the correct, or at least equivalent, ICs be chosen. The codes currently have different methods of setting ICs, which is irrelevant when the isocurvature modes decay rapidly enough, but can be important when they do not. We thus have to ensure that we are in a situation where the adiabatic ICs are an attractor for perturbations during radiation domination. The issue of setting the correct ICs for dark-energy perturbations is still an open problem and it will be addressed in future versions of the codes under consideration.In all the cases we explored, except the ones sensitive to initial conditions as explained above, the results shown in Fig. <ref> hold. The comparison between EFTCAMB and hi_class shows a remarkable agreement, well below the 1% level. It is possible to notice that the α_K, T + w and α_K, B, M, T + w models have relative differences slightly larger than the other models for the EE and TE CMB spectra. While it is difficult to identify one of the α_i or w as responsible for these deviations, we found that improving the precision parameters of each code solves this issue. This indicates that these two models are particularly complicated and need increased precision parameters to reach the agreement of the other models. For this particular parametrization, a third code has been tested, i.e. COOP. The agreement between COOP and EFTCAMB is shown in Fig. <ref>. It can be noted that, even if the relative differences in the CMB spectra remain below the 1% level, they blow up in the matter power spectrum, up to 2-3% on small scales. 
This seems to be an effect of the accuracy of COOP. Indeed, while COOP is calibrated to get a good agreement on large scales, it lacks precision for k≳1hMpc^-1.

§.§ Parametrized EFT functions

The results presented in the previous section are alone able to establish the agreement between the three codes under consideration. However, while COOP and hi_class were built using the α_i basis, EFTCAMB was built using the EFT approach described in Sec. <ref>. As such, the structure of this code is based on the {Ω, γ_i} functions. In case EFTCAMB is to be used with the α basis, as in the previous section, there is a built-in module which translates the α_i into the EFT basis before solving for the perturbations. Correspondingly, hi_class needs to translate the {Ω, γ_i} functions into its preferred α_i basis, in order to be used for the comparison. Let us note that when simple parametrizations are chosen, the two different bases explore different regions of the parameter space. As an example, consider a parametrization where α_B ∝ a. Using the conversion relations in Appendix <ref>, it is possible to show that (if Ω=0) γ_2 ∝ ℋ, which scales as a during dark-energy domination, as a^-1/2 during matter domination and as a^-1 during radiation domination. Thus, we have also compared hi_class and EFTCAMB with a particular parametrization of the {w_DE, Ω, γ_i} functions. In the same spirit as in Eqs. (<ref>), we choose
w_DE = w_0 + (1-a) w_a ,
Ω = Ω_0 a^β_0 ,
γ_i = γ^0_i a^β_i ,
where i stands for 1, 2, 3. In Fig. <ref>, we show the TT, EE, TE and lensing C_ℓ's and the matter power spectrum P(k) calculated at different redshifts for a selection of different values of {w_0, w_a}, Ω_0, γ_i^0 and β_i. The exact parameters used in these figures are shown in Appendix <ref>, and the cosmological parameters used to obtain all the curves are the same. On top of a ΛCDM reference model, the model Ω (dark blue line) represents the model used in the analysis of current data <cit.>. The other models were built to have an increasing number of γ_i functions and different imprints on the observables. Finally, the Ω + γ_1,2,3 + w model (green dotted line) turns on all possible modifications at the same time. As in the previous section, this last model shows how model dependent the precision parameters are, having deviations in the EE and TE CMB spectra slightly larger than the other models. Within this parametrization, after neglecting all the models sensitive to the initial conditions as described in the previous section, the disagreement between hi_class and EFTCAMB is within our target accuracy even for the "extreme" models shown in the figures.

§ DISCUSSION

In this paper we have shown that two general-purpose publicly available EB solvers – hi_class and EFTCAMB – are sufficiently accurate and reliable to be used to study a range of scalar-tensor theories. The third general-purpose code – COOP – has the required precision on large scales, i.e. k≲1hMpc^-1, but it needs to be calibrated to give accurate predictions on smaller scales. We have done this analysis by comparing these three codes to each other and to six other EB solvers that target specific theories – DASh, BD-CAMB and CLASSig for JBD, GalCAMB for Galileons, CLASS_EOS_fR for f(R) and HL-CLASS for Hořava-Lifshitz. On top of that, we have shown that two EB solvers – RR-CAMB and RR-CLASS – agree very well when compared to each other for non-local gravity models. While the general principles behind these codes are similar, the implementations are sufficiently different that we believe this is a compelling validation of their accuracy.
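The agreement criterion quoted throughout can be summarized by a simple figure of merit: the maximum relative deviation between two codes' outputs over the observable range. A minimal sketch (ours, with placeholder inputs; it is not taken from any of the codes under comparison) is:

    import numpy as np

    def max_rel_dev(cl_a, cl_b, ell_min=2):
        """Maximum relative deviation |cl_a/cl_b - 1| over ell >= ell_min.

        cl_a, cl_b: arrays of C_ell from two Einstein-Boltzmann codes,
        indexed by multipole (placeholder inputs, e.g. lensed TT spectra).
        """
        cl_a, cl_b = np.asarray(cl_a[ell_min:]), np.asarray(cl_b[ell_min:])
        return np.max(np.abs(cl_a / cl_b - 1.0))

    # Example of the sub-percent criterion used in the comparison:
    # passes = max_rel_dev(cl_code1, cl_code2) < 0.01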
These codes are thus fit for purpose if we wish to analyse up-and-coming cosmological surveys. We have chosen the precision, or accuracy, settings on the codes being compared such that they could be used efficiently in a Monte Carlo Markov Chain (MCMC) analysis. It is possible to get even better agreement between the codes by boosting the precision settings. This would be done, of course, at a great loss of speed, which might make the codes unusable for statistical analysis. We believe that the speed and accuracy we have achieved in this paper is a good, practical compromise. We want to emphasize here that the choice of the precision parameters is very model dependent. Indeed, for some particular configurations we had to increase somewhat the default precision to obtain agreement at the sub-percent level. If one uses the default precision parameters provided with each EB solver, one might not get exactly the same agreement we have obtained in this paper. For the models we have considered, we have verified that the disagreement between the different codes was never worse than 1%, but it remains the responsibility of the user to verify that the precision parameters chosen are sufficient to obtain the accuracy desired. Of course, there is always more to be done. We have compared these codes at specific points in model and parameter space, and our hope is that they should be sufficiently stable that this comparison can be extrapolated to other models and parameters. A possibility for taking what we have done a step further is to undertake parallel MCMC analyses with the codes being compared. [MCMC parameter extraction has been performed on the same covariant Galileon models. The results found using modified CAMB <cit.> and Planck 2013 data are fully consistent with those obtained with hi_class using Planck 2015 <cit.>.] This would fully explore the relevant parameter space and would strengthen the validation process we have undertaken in this paper. Furthermore, both hi_class and EFTCAMB will inevitably be extended to theories beyond scalar-tensor <cit.>. The same level of rigour will need to be enforced once the range of model space is enlarged. EB solvers can only tackle linear cosmological perturbations. There are attempts at venturing into the mildly non-linear regime using approximation schemes such as the halo model, perturbation theory and the effective field theory of large-scale structure. All attempts at doing so with the level of accuracy required by future data have focused on the standard model. There have been preliminary attempts at doing so for theories beyond GR but, it is fair to say, accurate calculations are still in their infancy. Additional complications that need to be considered when exploring this regime will be the effects of baryons, neutrinos and, more specifically, the effects of gravitational screening, which can greatly modify the naive predictions arising from linear theory (a crude attempt at incorporating screening was proposed in <cit.>). Finally, we want to emphasize that this paper is not meant to be a passepartout to justify every kind of analysis with the codes presented here. They should not be used blindly, and we do not guarantee that all the models implemented in each version of the codes investigated here are free from bugs and reliable. When we introduced the publicly available codes in Sec. <ref> we referred to a specific version, and our analysis only validates the accuracy of that version.
On top of that, one has to bear in mind that, even if we are quite confident that the system of equations (linearized Einstein plus scalar field equations) implemented in each code is bug free, these codes have been tested using a limited number of models. This implies that other built-in models may not be correctly implemented. So, anyone who wants to use one of the codes analyzed here has to follow these steps:
* If the version of the code is not the same as the one studied here, check that it gives the same results as this version for the same models (unless this is guaranteed by the developers of the code);
* If the model that one wants to analyze has not been studied here, check that the map to convert the parameters of the model into the basis used by the code (e.g. α_i or γ_i) has been correctly implemented. Since the equations of motion are the same as used in this analysis, this is the most probable place to find bugs, if any;
* Check that, for the model, adiabatic initial conditions are an attractor at superhorizon scales during radiation domination. If not, implement the correct initial conditions, to ensure that the addition of dark-energy isocurvature modes does not spoil predictions at late times;
* Check that the precision parameters used are sufficient to get the desired accuracy. This is very model dependent and can be done with an internal test: it is sufficient to improve them and check that the changes in the output are negligible;
* Check for a few models that the output is realistic. It can be useful to have some known limit in the parameter space to compare with.
We believe that, with this comparison, we have placed the cosmological analysis of gravitational degrees of freedom on a robust footing. With the tools discussed in hand, we are confident that it will be possible to obtain reliable, precision constraints on general relativity with up-and-coming surveys.

§ ACKNOWLEDGEMENTS

EB and PGF are supported by the ERC H2020 693024 GravityLS project, the Beecroft Trust and STFC. EC is supported by an STFC Rutherford Fellowship. BH is partially supported by the Chinese National Youth Thousand Talents Program, the Fundamental Research Funds for the Central Universities under the reference No. 310421107 and a Beijing Normal University Grant under the reference No. 312232102. The work of MI was partly supported by the Swiss National Science Foundation and by the RFBR grant 17-02-01008. The research of NF is supported by Fundação para a Ciência e a Tecnologia (FCT) through national funds (UID/FIS/04434/2013) and by FEDER through COMPETE2020 (POCI-01-0145-FEDER-007672). NF, SP and AS acknowledge the COST Action (CANTATA/CA15117), supported by COST (European Cooperation in Science and Technology). The work of IS and CS is supported by European Structural and Investment Funds and the Czech Ministry of Education, Youth and Sports (Project CoGraDS – CZ.02.1.01/0.0/0.0/15_003/0000437). The support by the "ASI/INAF Agreement 2014-024-R.0 for the Planck LFI Activity of Phase E2" is acknowledged by MB, FF and DP. MB, FF and DP also acknowledge financial contribution from the agreement ASI n.I/023/12/0 "Attività relative alla fase B2/C per la missione Euclid". MR is supported by U.S. Dept. of Energy contract DE-FG02-13ER41958. The work of NAL is supported by the DFG through the Transregional Research Center TRR33 "The Dark Universe". Numerical work presented in this publication used the Baobab cluster of the University of Geneva.
YD is supported by the Fonds National Suisse. UC has been supported within the Labex ILP (reference ANR-10-LABX-63), part of the Idex SUPER, and received financial state aid managed by the Agence Nationale de la Recherche, as part of the programme Investissements d'avenir under the reference ANR-11-IDEX-0004-02. SP and AS acknowledge support from The Netherlands Organization for Scientific Research (NWO/OCW), and from the D-ITP consortium, a program of the Netherlands Organisation for Scientific Research (NWO) that is funded by the Dutch Ministry of Education, Culture and Science (OCW). MB acknowledges the support from the South African SKA Project. MZ is supported by the Marie Sklodowska-Curie Global Fellowship Project NLO-CO. FV acknowledges financial support from the "Programme National de Cosmologie and Galaxies" (PNCG) of CNRS/INSU, France and the French Agence Nationale de la Recherche under Grant ANR-12-BS05-0002. FP acknowledges support from the post-doctoral STFC grant R120562 'Astrophysics and Cosmology Research within the JBCA 2017-2020'.

§ RELATION BETWEEN EFT FUNCTIONS AND α'S

In this Appendix we report the mapping between the EFT functions and the α basis for Horndeski theories:
Ω(a) = -1 + (1 + α_T) M_*^2/M_Pl^2 ,
γ_1(a) = (1/(4 a^2 H_0^2 M_Pl^2)) [ α_K M_*^2 H^2 - 2 a^2 c ] ,
γ_2(a) = -(H/(a H_0)) [ α_B M_*^2/M_Pl^2 + Ω' ] ,
γ_3(a) = -α_T M_*^2/M_Pl^2 ,
γ_4(a) = -γ_3(a) ,
γ_5(a) = γ_3(a)/2 ,
γ_6(a) = 0 ,
where
a^2 c(a)/M_Pl^2 = ℋ(ℋ - ℋ')(1 + Ω + Ω'/2) - (ℋ^2/2)(Ω'' - Ω') - a^2 (ρ_m + p_m)/(2 M_Pl^2) ,
α_M = (ln M_*^2)' and primes are derivatives w.r.t. ln a. Note that the above α_B differs by a factor of -2 from the convention adopted in hi_class.

§ MODEL PARAMETERS IN PLOTS

Here we list all the cosmological parameters used in this paper. For each theory we use the parameter names and the notation that can be found in Section <ref>.

§.§ JBD

In Section <ref>, we kept fixed the following cosmological parameters:
* Ω_b h^2 = 0.02222
* Ω_c h^2 = 0.11942
* A_s = 2.3× 10^-9
* n_s = 0.9624
* τ_reio = 0.09
and we varied

  ω_BD    10       50       100      1000
  H_0     44.31    61.43    64.22    66.90

§.§ Covariant Galileons

In Section <ref>, for the Galileon models we varied all the cosmological parameters (a "D" in parenthesis indicates that we used that parameter as derived):

            Cubic Galileon A   Cubic Galileon B   Quartic Galileon    Quintic Galileon
  H_0       75.55              45                 55                  55
  Ω_b h^2   0.02173            0.01575            0.02175             0.02202
  Ω_c h^2   0.124              0.100              0.100               0.100
  A_s       2.05× 10^-9        2.16× 10^-9        2.16× 10^-9         2.09× 10^-9
  n_s       0.955              0.980              0.980               0.954
  τ_reio    0.052              0.088              0.088               0.062
  ξ         -2.11 (D)          -1.60 (D)          2.65                1.4
  c_3       0.079 (D)          0.104 (D)          -0.124 (D)          0.2
  c_4       -                  -                  -7.74× 10^-3 (D)    0.125 (D)
  c_5       -                  -                  -                   -0.125 (D)

§.§ f(R)

In Section <ref> we kept the standard cosmological parameters fixed to these values
* H_0 = 69
* Ω_b h^2 = 0.022032
* Ω_c h^2 = 0.12038
* A_s = 2.3× 10^-9
* n_s = 0.96
* τ_reio = 0.09
while we varied the additional parameters

        ΛCDM   fR-1   fR-2
  B_0   0      1      0.01

§.§ Non-Local Gravity

In Section <ref> we varied all the cosmological parameters (a "D" in parenthesis indicates that we used that parameter as derived):

            RR-1               RR-2               RR-3
  H_0       67                 55                 55
  Ω_b h^2   0.0222             0.0222             0.0222
  Ω_c h^2   0.118              0.100              0.120
  A_s       2.21× 10^-9        2.51× 10^-9        1.81× 10^-9
  n_s       0.96               0.93               0.98
  τ_reio    0.09               0.06               0.12
  m^2       4.06× 10^-9 (D)    2.51× 10^-9 (D)    2.18× 10^-9 (D)

The ΛCDM model has the same parameters as RR-1, but with a cosmological constant instead of the Non-Local parameter m.

§.§ Hořava-Lifshitz gravity

In Section <ref> we used the same standard cosmological parameters shown in Section <ref> and we varied the additional parameters

      ΛCDM   HL-A     HL-B
  λ   1      1.2      1.02807
  ξ   1      1.3333   1.05263
  η   0      0.0666   0.0210526

§.§ Parametrized Horndeski functions
In Section <ref> we used the same standard cosmological parameters shown in Section <ref> and we varied the MG parameters (note that here w_0 = -1 + δw_0)

           α_K,B   α_K,M   α_K,T+w   α_K,B,M,T+w
  δw_0     -       -       0.9       -0.5
  w_a      -       -       -1.2      1
  δM_0^2   -       2       -         3
  η_0      -       1.6     -         1
  α_K^0    1       1       1         1
  η_K      1       1       1         1
  α_B^0    1.8     -       -         1.8
  η_B      1.5     -       -         1.5
  α_T^0    -       -       -0.9      -0.6
  η_T      -       -       1         1

§.§ Parametrized EFT functions

In Section <ref> we used the same standard cosmological parameters shown in Section <ref> and we varied the MG parameters (note that here w_0 = -1 + δw_0)

           Ω     Ω+γ_1   Ω+γ_1,2   Ω+γ_3+w
  δw_0     -     -       -         0.9
  w_a      -     -       -         -1.2
  Ω_0      2     1       2         2
  β_0      1     0.4     1.5       1
  γ_1^0    -     1       1         -
  β_1      -     1       1         -
  γ_2^0    -     -       -4.8      -
  β_2      -     -       0         -
  γ_3^0    -     -       -         2
  β_3      -     -       -         1

§ PRECISION PARAMETERS IN PLOTS

In order to improve the accuracy of the results, keeping in mind that the CPU time should remain acceptable for MCMC runs, we changed the default values of some precision parameters for
* CAMB-based codes
* CLASS-based codes (except for CLASS-LVDM)

§ HOŘAVA-LIFSHITZ GRAVITY COMPARISON

In this Appendix we illustrate the differences in the approaches used to implement Hořava-Lifshitz gravity in CLASS-LVDM and hi_class.

(1) The first key difference between CLASS-LVDM and hi_class is the treatment of the background. It is well known that the only effect of Hořava-Lifshitz (khronometric) gravity on the homogeneous and isotropic universe is the rescaling of the gravitational constant in the Friedmann equation. CLASS-LVDM uses the rescaled background densities defined via the Friedmann equation
H^2 = ∑_i ρ̃_i = H_0^2 ∑_i Ω̃_i(z) ,
in which way the densities ρ̃_i (correspondingly, Ω̃_i(z=0), subject to input in CLASS-LVDM) are rescaled by G_cos/G_N, and the flatness condition ∑_all species Ω̃_i(z=0) = 1 is satisfied automatically. On the other hand, hi_class uses the background densities defined via the gravitational constant in the Newtonian limit, which is generically different from that appearing in the Friedmann equation. To be more precise, hi_class solves the following Friedmann equation
H^2 = H_0^2 (G_cos/G_N) ( ∑_dm,b,γ,ν Ω_i(z) + [ Ω_DE^0 + G_N/G_cos - 1 ] ) ,
where Ω_DE^0 is the present-time DE density parameter. The fractional densities Ω_i(z=0) (subject to input in hi_class) are therefore the "bare" parameters. The modification of the effective Ω_DE^0 in the square brackets of (<ref>) is dictated by the requirement that the flatness condition (∑_all species Ω_i = 1) be satisfied at redshift zero <cit.>. [Note that one could redefine Ω_DE(z) to absorb all the modifications due to the rescaling of the gravitational constant into it. This would lead to a Ω_DE(z) dependent on the HL parameters plus a standard gravitational constant. This different convention would lead to the same cosmology as with our definitions (if all "bare" fractional densities are suitably chosen), since the two descriptions are equivalent.] To sum up, the background evolution in the two codes is intrinsically different in the case G_N ≠ G_cos, which is why for the purposes of this paper we focused only on parameters for which G_N = G_cos.

(2) The second difference is the definition of the matter power spectrum. As explained in Refs. <cit.>, in order to match the observations, the power spectrum in CLASS-LVDM is rescaled by the factor (G_cos/G_N)^2. This is to be contrasted with hi_class, which uses the standard definition. Within our convention to study only the case G_N = G_cos, this difference becomes irrelevant.

(3) The third difference is in the normalization of the primordial power spectrum.
In order to isolate the LV effects from the standard cosmological parameters, in the CLASS-LVDM code by default the initial power spectrum of metric perturbations is normalized so as to match the ΛCDM one for the same choice of A_s, regardless of the values of the LV parameters. This is not the same in hi_class, where, in addition to the background densities, the initial power spectrum also bears the dependence on the extra parameters of Hořava/khronometric gravity. Qualitatively, there is no difference between these two approaches. For the purposes of this paper, for each set of parameters we normalized the initial power spectra to the same value in both codes.

(4) The fourth difference is in the initial conditions. CLASS-LVDM assumes the initial conditions for the khronon field corresponding to the adiabatic mode <cit.>. On the other hand, hi_class assumes for the initial conditions that DE perturbations are sourced by matter perturbations at a sufficiently early time, so that the theory is close to General Relativity <cit.>. In order to take into account the difference in the initial conditions, only for this comparison, in both codes we set the initial conditions as
π(τ_0) = 0 and π̇(τ_0) = 0 ,
where π is the extra scalar degree of freedom (i.e. the khronon). It is important to note that this choice corresponds to an isocurvature mode that totally compensates the adiabatic one at the initial time.
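As a numerical companion to the mapping of Appendix <ref>, the following minimal Python sketch (an illustration only, not code from any of the solvers; all input arrays are assumed to be sampled on a common ln a grid) converts a set of α-basis functions into the {Ω, γ_2, γ_3} subset of the EFT basis, with primes implemented as finite differences w.r.t. ln a:

    import numpy as np

    def alpha_to_eft(lna, alpha_T, alpha_B, M2, H, H0, Mpl2=1.0):
        """Map (alpha_T, alpha_B, M_*^2) to (Omega, gamma_2, gamma_3); cf. Appendix A.

        All inputs are arrays on the same ln(a) grid; primes (d/dln a) are
        approximated with finite differences. gamma_1 is omitted here since
        it also requires the background function c(a).
        """
        a = np.exp(lna)
        Omega = -1.0 + (1.0 + alpha_T) * M2 / Mpl2
        Omega_prime = np.gradient(Omega, lna)          # ' := d/dln a
        gamma_2 = -(H / (a * H0)) * (alpha_B * M2 / Mpl2 + Omega_prime)
        gamma_3 = -alpha_T * M2 / Mpl2
        # gamma_4 = -gamma_3 and gamma_5 = gamma_3/2 follow algebraically
        return Omega, gamma_2, gamma_3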
http://arxiv.org/abs/1709.09135v2
{ "authors": [ "E. Bellini", "A. Barreira", "N. Frusciante", "B. Hu", "S. Peirone", "M. Raveri", "M. Zumalacárregui", "A. Avilez-Lopez", "M. Ballardini", "R. A. Battye", "B. Bolliet", "E. Calabrese", "Y. Dirian", "P. G. Ferreira", "F. Finelli", "Z. Huang", "M. M. Ivanov", "J. Lesgourgues", "B. Li", "N. A. Lima", "F. Pace", "D. Paoletti", "I. Sawicki", "A. Silvestri", "C. Skordis", "C. Umiltà", "F. Vernizzi" ], "categories": [ "astro-ph.CO" ], "primary_category": "astro-ph.CO", "published": "20170926171301", "title": "A comparison of Einstein-Boltzmann solvers for testing General Relativity" }
Annals of Operations Research

Stefanos Leonardos^a,* ([email protected]), Costis Melolidakis^b ([email protected]), Constandina Koki^c ([email protected])

^a Singapore University of Technology and Design, 8 Somapah Rd, 487372 Singapore
^b National and Kapodistrian University of Athens, Panepistimioupolis, 15784 Athens, Greece
^c Athens University of Economics and Business, 76 Patission Street, 10434 Athens, Greece
^* Corresponding author

Motivation: Pricing decisions are often made when market information is still poor. While modern pricing analytics aid firms to infer the distribution of the stochastic demand that they are facing, data-driven price optimization methods are often impractical or incomplete if not coupled with testable theoretical predictions. In turn, existing theoretical models often reason about the response of optimal prices to changing market characteristics without exploiting all available information about the demand distribution. Academic/practical relevance: Our aim is to develop a theory for the optimization and systematic comparison of prices between different instances of the same market under various forms of knowledge about the corresponding demand distributions. Methodology: We revisit the classic problem of monopoly pricing under demand uncertainty in a vertical market with an upstream supplier and multiple forms of downstream competition between arbitrary symmetric retailers. In all cases, demand uncertainty falls to the supplier who acts first and sets a uniform price before the retailers observe the realized demand and place their orders. Results: Our main methodological contribution is that we express the price elasticity of expected demand in terms of the mean residual demand (MRD) function of the demand distribution. This leads to a closed-form characterization of the points of unitary elasticity that maximize the supplier's profits and the derivation of a mild unimodality condition for the supplier's objective function that generalizes the widely used increasing generalized failure rate (IGFR) condition. A direct implication is that optimal prices between different markets can be ordered if the markets can be stochastically ordered according to their MRD functions or, equivalently, their elasticities. Using the above, we develop a systematic framework to compare optimal prices between different market instances via the rich theory of stochastic orders. This leads to comparative statics that challenge previously established economic insights about the effects of market size, demand transformations and demand variability on monopolistic prices. Managerial implications: Our findings complement data-driven decisions regarding price optimization and provide a systematic framework useful for making theoretical predictions in advance of market movements.

Keywords: Monopoly Pricing; Revenue Maximization; Demand Uncertainty; Pricing Analytics; Comparative Statics; Stochastic Orders; Unimodality
MSC [2010]: 91A10; 91A65; 91B54

§ INTRODUCTION

Making optimal pricing decisions is a crucial driver for firms' profitability and a well-studied problem in the existing theoretical literature. However, modern markets pose novel opportunities and challenges for firms' pricing decisions.
On the one hand, the sheer amount of historical price/demand data and the wide range of pricing analytics methods allow firms to make more informed decisions. On the other hand, the inherent volatility of contemporary economies frequently renders such data-driven methods impractical. Sellers often launch new or differentiated products for which demand is unknown or introduce existing products to uncharted emerging markets <cit.>. In other cases, firms act as wholesalers in foreign markets for which they have asymmetrically less information than local retailers, or sell their products over competitive digital platforms to highly diversified clienteles <cit.>. More generally, firms often need to test the outcome of price changes in advance of anticipated market movements or to constantly adjust their prices in periods of turbulent market conditions. Common to all these cases is that firms have to make important pricing decisions when market information is still poor <cit.>. While uncertainties can be mitigated via marketing strategies or contracting schemes between the members of the supply chain, after all efforts some uncertainty persists, and the final point of interaction between wholesalers and retailers or, more generally, between sellers and buyers is the selling price <cit.>. Whenever possible, pricing analytics on historical price/demand data aid firms to build up knowledge about the (probability) distribution of the uncertain demand. The main challenge lies in leveraging this information to set and adjust prices optimally. However, data-driven approaches that extrapolate past trends to make forward-looking pricing decisions may lead to suboptimal decisions if not coupled with or benchmarked against testable theoretical predictions. In turn, existing theoretical tools to optimize and compare prices across different market instances (instances of the same market that correspond to different demand distributions) are still under development <cit.> or make partial use of the available information, e.g., they rely on summary statistics <cit.>. Moreover, from a managerial perspective, such methods often provide optimality conditions that are not easy to assess in practice or which do not provide intuitive and economically interpretable results <cit.>.

Model. Motivated by the above, we revisit the classic problem of monopoly pricing with demand uncertainty under the informational assumption that the firm knows the probability distribution of the uncertain demand. This distribution may reflect the seller's informed belief or estimations aggregated from historical data. Our purpose is to link the properties of the demand distribution to economically interpretable conditions and to develop a systematic theoretical framework in which the firm can optimally set and adjust its prices according to changing market characteristics. To account for the large variety of market structures that modern sellers are facing, we model the monopolistic firm as an upstream supplier who sells its product via a downstream market. The downstream market comprises an arbitrary number of retailers and various forms of market competition between the retailers, such as differentiated Cournot and Bertrand competition, no or full returns, and collusion (cf. <Ref>).[The two-tier market model is an abstraction to capture the complexity of current markets. If we eliminate the second stage and assume that the supplier sells directly to the consumers, then our results still apply.] In all cases, retail demand is linear.
In the first stage, the monopolistic firm sets a uniform price, and in the second stage, the retailers observe the price and place their orders after the market demand has been realized. We assume that the supplier's capacity exceeds potential demand and that the downstream retailers are symmetric. These assumptions serve the purpose of isolating the study of pricing decisions under uncertainty from various other strategic considerations such as stocking decisions, negotiation power, marketing and production <cit.>.[Under these assumptions, i.e., if the supplier's capacity exceeds potential demand and if the retailers are symmetric, the single-price scheme is optimal among a wide range of possible pricing mechanisms <cit.>. In addition, the symmetry of the retailers allows the study of purely competitive aspects, which is not possible if retailers are heterogeneous <cit.>.]

Results. Our main theoretical contribution is that we characterize the seller's optimal prices as fixed points of the mean residual demand (MRD) function of the stochastic demand level. Informally, if α and r denote the random demand level and the supplier's price respectively, then any optimal price, r^*, satisfies the equation r^* = m(r^*), where m(r) := E[α - br | α > br] is the MRD function and b is a proper scaling constant (that can be normalized to b=1). The MRD function measures the expected additional demand given that demand has reached or exceeded a threshold br.[In reliability applications, this function is known as the mean residual life (MRL) function, see <cit.>.] This characterization stems from the observation that the price elasticity of expected demand can be expressed in terms of the MRD function as ε(r) = r/m(r), cf. equation (<ref>). Thus, optimal prices that correspond to points of unitary elasticity are fixed points of the MRD function. If m(r)/r is decreasing (or equivalently, if the price elasticity is increasing), then there exists a unique such optimal price. Both statements are presented in <Ref>. The above unimodality condition for the otherwise not necessarily concave nor quasi-concave seller's revenue function strictly generalizes the well-known increasing generalized failure rate (IGFR) condition <cit.>. Given the inclusiveness of the IGFR condition, this suggests that <Ref> applies to essentially most distributions that are commonly used in economic modeling <cit.>. The expressions of the price elasticity of expected demand and of the seller's optimal price in terms of well-understood characteristics of the demand distribution (MRD functions) offer a novel perspective on the otherwise standard linear stochastic model.[While our results directly apply to the more general demand function of <cit.>, we stick to the linear model for expositional purposes. Apart from the fact that linear markets have been consistently in the spotlight of economic research, both due to their tractability and their accurate modeling of real situations, the study of the linear model is also technically motivated by <cit.>, who demonstrate that when information about demand is limited, firms may act efficiently as if demand is linear. For practical purposes, this yields a simple and low-regret pricing rule and provides a motivation to study linear markets in a systematic way.] They provide conditions that are easy to assess in practice and which can be useful for gaining intuitive and economically interpretable results <cit.>.
In particular, these expressions provide a novel way to derive comparative statics on the response of the optimal price to various market characteristics (expressed as properties of the demand distribution) and to measure market performance. The key intuition along this line, which is formally established in <Ref>, is that the seller's optimal price is higher in less elastic markets, which are precisely markets that can be ordered in the MRD stochastic order. In other words, if two demand distributions can be ordered in terms of their MRD functions, then the supplier's optimal price is higher in the market with the dominating MRD function. Thus, the fact that elasticity is a critical factor in setting profitable prices and an important determinant of price changes in response to demand changes is formalized in terms of well-known demand characteristics. As a result, <Ref> is the key to leveraging the theory of stochastic orders as a tool to compare prices in markets with different characteristics, such as market size and demand variability. Stochastic orders take into account various characteristics of the underlying demand distribution, going thus beyond summary statistics such as the expectation or standard deviation, which provide a limited amount of information <cit.>. In the comparative statics analysis, we start with the effect of market size on optimal prices and ask whether larger markets give rise to higher prices. Our first finding is that stochastically larger markets do not necessarily lead to higher wholesale prices (<Ref>). Technically, this follows from the fact that the usual stochastic order does not imply (nor is implied by) the MRD order <cit.>. Hence, the intuition of <cit.> that size is not everything and that prices are driven by different forces is justified by an appropriate theoretical framework. As <Ref> demonstrates, the correct criterion to order prices is the elasticity of the different market instances rather than their size. In Theorem 4.3, we use this to derive a collection of demand transformations that preserve the MRD order (i.e., the elasticity) and hence the order of optimal prices. Returning to the effects of market size, one may still ask whether there exist conditions under which a probabilistic increase in market demand will lead to an increase in optimal prices. This question is addressed in <Ref>, where we show that this is indeed the case whenever the initial demand is increased by a scale factor greater than one or (under some mild additional conditions) whenever an additional demand source is aggregated. Next, we turn to the effect of demand variability. Does the seller charge higher/lower prices in more variable markets? The answer to this question again depends on the exact notion of variability that will be used. Under mild additional assumptions, <Ref> gives two variability orders that preserve monotonicity: in both cases the seller charges a lower price in the less variable market. This conclusion remains true under the mean-preserving transformation that is used by <cit.>. However, as was the case with market size, the general statement that more variable markets give rise to higher prices does not hold. In <Ref>, we provide an example to show that this generalization fails in the standard case of parametric families of distributions that are compared in terms of their coefficient of variation, cf. <cit.>. All results from the comparative statics analysis are summarized in <Ref>. We then turn to measuring market performance and efficiency.
Our main result in this direction is a distribution-free (over the class of distributions with decreasing MRD function) upper bound on the probability of a stockout, i.e., of no trade between the supplier and the retailers (<Ref>). As shown in Examples <ref>, <ref> and <ref>, this bound is tight and cannot be further generalized to distributions with increasing price elasticity of expected demand. In case trade takes place, we measure market efficiency in terms of the realized profits and their distribution between the supplier and the retailers. Our results are summarized in <Ref>. As intuitively expected (and in line with earlier results, cf. <cit.>), the supplier's profits are always higher if he is informed about the exact demand level and when retail competition is higher. However, there exists a range of intermediate demand realizations for which the supplier captures a larger share of the aggregate profits in the stochastic market. Finally, we compare the aggregate realized profits between the deterministic and stochastic markets. The outcomes depend on the interplay between demand uncertainty and the level of retail competition. More specifically, there exists an interval of demand realizations for which the aggregate profits of the stochastic market are higher than the profits of the deterministic market. The interval reduces to a single point as the number of downstream retailers increases, but is unbounded in the case of 2 retailers. In particular, for n=1,2, the aggregate profits of the stochastic market remain strictly higher than the profits of the deterministic market for all large enough realized demand levels. However, the performance of the stochastic market in comparison to the deterministic market degrades linearly in the number of competing retailers for demand realizations beyond this interval. This shows that uncertainty on the side of the supplier is more detrimental to the aggregate market profits when the level of retail competition is high, cf. <Ref>.

§.§ Related Literature

The study of price-only contracts under demand uncertainty has long been motivated by <cit.>, who show that committing to a single price is the optimal pricing strategy if the supplier's capacity exceeds potential demand and retailers are symmetric, or if the monopolist is facing a known demand distribution. If the monopolist does not know the distribution of demand a priori, as we assume in the present paper, then dispersed pricing improves upon the performance of a uniform price <cit.>. By contrast, <cit.> show that wholesale price contracts are optimal even in the case of two competing chains. Compelling arguments for the linear pricing scheme are also provided in <cit.> and references therein. <cit.> and <cit.> argue that apart from their practical prevalence, price-only contracts are relevant in modeling worst-case scenarios or the interaction between sellers and buyers in cases with remaining uncertainty, i.e., after any efforts have been made to reduce the initial uncertainty through more elaborate schemes. This is further supported by <cit.>, who find that in a vertical market with a single manufacturer and a single retailer that is governed by a wholesale contract between them, less uncertainty may harm either or both members of the supply chain. This is partially the case also in the current model, as we show in <Ref>. In particular, we find demand realizations for which the stochastic market outperforms the deterministic market in terms of aggregate profits.
However, in our model, the supplier is always better off with reduced uncertainty, whereas the retailers may not be. The vertical market of a single supplier and multiple downstream competing retailers has been widely studied from different perspectives <cit.>. Our results in <Ref> are comparable with or mirror earlier findings in this line of literature. However, in the current setting, these findings complement our main results on the characterization of the demand elasticity in terms of the MRD function and the resulting comparative statics, rather than being the main focus of our study. Concerning its other assumptions (market structure and timing of demand realization), our model enjoys similarities with the model of <cit.>. For extensive surveys of similar demand models, we refer to <cit.>, and for earlier studies to <cit.>. Indicatively, the additive demand model with a linear deterministic component is used by <cit.> and <cit.>. Regarding the derived unimodality conditions, our findings are most closely related to <cit.>, who derive the IGFR unimodality condition in the setting of one seller and one buyer. These are special cases of the present setting and, accordingly, the IGFR condition is a restriction of the unimodality condition in terms of the MRD function that we formulate in the current setting. IGFR distributions were first used in economic applications by <cit.> and were popularized in the context of revenue management by <cit.>. Technical aspects of IGFR distributions are studied in <cit.>. These results are more closely related to a companion paper <cit.>, in which the authors focus on the technical properties of distributions that satisfy the current unimodality condition in terms of the MRD function.[Other closely related papers in the same direction include <cit.>, preliminary versions of which appear in <cit.>.] The MRD function also arises naturally in many revenue management problems with demand uncertainty, see e.g., <cit.> and <cit.> for a non-exhaustive list of related papers. However, to the best of our knowledge, there is no formal link between the elasticity of uncertain demand and the MRD function or the theory of stochastic orders in these papers. Similarities regarding the technical analysis can also be found between the current model and the literature on the price-setting newsvendor under stochastic demand, see e.g., <cit.>. However, as with the rest of the newsvendor literature, these results involve inventory considerations and hence are quite distinct from ours. More closely related to the current methodology is the study of <cit.>. This paper is quite distinct from ours since it focuses on a restricted set of stochastic orders and on a different newsvendor model that involves both pricing and stocking decisions. Concerning the results, <cit.> show that a stochastically larger demand leads to higher selling prices in the additive demand case. We extend these findings by showing that different notions of market size still lead to the same conclusion under certain conditions (<Ref>) and by providing a case in which prices may actually be lower in a stochastically larger market, cf. <Ref>. Within similar contexts, <cit.> show that, in general, optimal prices decrease as variability increases. Our current set of results refines these findings by using a wide range of stochastic orders that capture different notions of demand variability.
Our findings suggest that increased demand variability may lead to either increased or decreased prices, depending on the notion of variability that is employed (cf. <Ref>). This demonstrates how various forms of knowledge about the demand distribution can be useful in the study of price movements and provides a theoretical explanation for empirically observed price changes in periods of turbulent market conditions.

§.§ Outline

The rest of the paper is structured as follows. In <Ref>, we define and analyze our model. <Ref> contains the comparative statics and <Ref> the study of market performance. <Ref> concludes the paper.

§ THE MODEL

We consider a vertical market with a monopolistic upstream supplier or seller, selling a homogeneous product (or resource) to n=2 downstream symmetric retailers who compete in a market with retail demand level α.[To ease the exposition, we restrict to n=2 retailers. As we show in <Ref>, our results admit a straightforward generalization to an arbitrary number n of symmetric retailers.] The supplier produces at a constant marginal cost, which we normalize to zero. This corresponds to the situation in which the supplier's capacity exceeds potential demand by the retailers, and the supplier's lone decision variable is his wholesale price, or equivalently his profit margin, r≥0. The supplier acts first (Stackelberg leader) and applies a linear pricing scheme without price differentiation, i.e., he chooses a unique wholesale price, r≥0, for all retailers. We consider a market setting in which the supplier is less informed than the retailers about the retail demand level α.[We will refer to α throughout as the demand level. However, based on equation (<ref>), α is also known as the choke or reservation price. Since these constants are equivalent up to some transformation in our model, this should cause no confusion.] To model this, we assume that after the supplier's pricing decision but prior to the retailers' order decisions, a value for α is realized from a continuous (non-atomic) cumulative distribution function (cdf) F, with finite mean E[α] < ∞ and nonnegative values, i.e., F(0)=0. Equivalently, F can be thought of as the supplier's belief about the demand level and, hence, about the retailers' willingness to pay his price. We write F̄ := 1-F for the tail distribution of F and f for its probability density function (pdf) whenever it exists. The support of F is denoted by S, with lower bound L = sup{r≥0 : F(r)=0} and upper bound H = inf{r≥0 : F(r)=1}, such that 0 ≤ L ≤ H ≤ ∞. We do not make any additional assumption about S: in particular, it may or may not be an interval. The case L=H is not excluded[Formally, this case contradicts the assumption that F is continuous or non-atomic. It is only allowed to avoid unnecessary notation and should cause no confusion.] and corresponds to the situation in which the supplier is completely informed about the retail demand level. Given the demand realization α, the aggregate quantity q(r|α) := ∑_i=1^2 q_i(r|α) that the retailers will order from the supplier is a function of the posted wholesale price r. Assuming risk neutrality, the supplier aims to maximize his expected profit function Π_s, which is equal to
Π_s(r) = r · E_α[q(r|α)].
The quantity q(r|α) depends on the form of second-stage competition between the retailers. In this paper, we focus on markets with linear demand, as in <cit.> and <cit.> among others, and allow for a wide range of competition structures between the retailers (<Ref>).
All these structures give rise – in equilibrium – to the same (up to a scaling constant) functional form for q(r|α) and hence to the same mathematical expression for the supplier's objective function. More importantly, in all these structures, the second-stage equilibrium between the retailers is unique and hence, q(r|α) is uniquely determined under the assumption that the retailers follow their equilibrium strategies in the game induced by each wholesale price r≥0 (subgame perfect equilibrium). Specifically, we assume that each retailer i faces the inverse demand function
p_i = α - β q_i - γ q_j,
for j=3-i and i=1,2. Here, α/(β+γ) denotes the potential market size (primary demand), β/(β^2-γ^2) > 0 the store-level factor and γ/β the degree of product differentiation or substitutability between the retailers <cit.>. As usual, we assume that γ ≤ β. Each retailer's only cost is the wholesale price r≥0 that she pays to the supplier. Hence, each retailer aims to maximize her profit function Π_i, which is equal to
Π_i(q_i, q_j) = q_i(p_i - r).
Given the demand realization α, the equilibrium quantities q_i^* := q_i^*(r|α) that maximize Π_i for i=1,2 are given for various retail market structures in <Ref> as functions of the wholesale price r. Here, (α-r)_+ denotes the positive part, i.e., (α-r)_+ := max{0, α-r}. The assumption of no uncertainty on the side of the retailers about the demand level α implies that q_i^* corresponds both to the quantity that each retailer orders from the supplier and to the quantity that she sells to the market. The standard Cournot and Bertrand outcomes arise as special cases of the above. In particular, for γ=0, the goods are independent and the monopoly solution q_i^* = (1/(2β))(α-r)_+ for i=1,2 prevails. For γ=β>0, the goods are perfect substitutes, with q_i^* = (1/(2β))(α-r)_+ in Bertrand competition (at zero price) and q_i^* = (1/(3β))(α-r)_+ in Cournot competition for i=1,2. All of the above are assumed to be common knowledge among the participants in the market (the supplier and the retailers).

§ EQUILIBRIUM ANALYSIS: SUPPLIER'S OPTIMAL WHOLESALE PRICE

We restrict attention to subgame perfect equilibria of the extensive form, two-stage game.[Technically, these are perfect Bayes-Nash equilibria, since the supplier has a belief about the retailers' types, i.e., their willingness to pay his price, that depends on the value of the stochastic demand parameter α.] Assuming that at the second stage the retailers play their unique equilibrium strategies (q_1^*, q_2^*), then, according to (<ref>), the supplier will maximize Π_s^*(r) = r · E_α[q^*(r|α)]. For the competition structures of <Ref>, q^*(r|α) has the general form q^*(r|α) = λ_M (α-r)_+, where λ_M > 0 is a suitable model-specific constant. Thus, at equilibrium, the supplier's expected profit maximization problem becomes
max_{r≥0} Π_s^*(r) = λ_M · max_{r≥0} r E[(α-r)_+].
From the supplier's perspective, we are interested in finding conditions such that the maximization problem in (<ref>) admits a unique and finite optimal wholesale price, r^* ≥ 0. The vertical market structure is not necessary for our analysis to hold. In fact, if we eliminate the downstream market and instead assume that the firm sells directly to a market with linear demand function q(r) = α - β r, then our analysis still applies.
This follows from the observation that, in this case, the seller's expected profit maximization problem is max_{r≥0} Π_s(r) = max_{r≥0} r E[(α - βr)_+], which is the same as the maximization problem in equation (<ref>) after normalizing β to 1.

§.§ Deterministic Market

First, we treat the case in which the supplier knows the primary demand α (deterministic market). According to the notation introduced in <Ref>, this corresponds to the case α = L = H. In this case, Π_s^*(r) = λ_M r(α-r)_+ and it is straightforward that r^*(α) = α/2. Hence, the complete information two-stage game has a unique subgame perfect Nash equilibrium, under which the supplier sells at the optimal price r^*(α) = α/2 and each retailer orders the quantity q_i^* determined by <Ref>.

§.§ Stochastic Market

The equilibrium behavior of the market in which the supplier does not know the demand level (stochastic market) is less straightforward. Now, L < H and the supplier is interested in finding an r^* that maximizes his expected profit in (<ref>). For an arbitrary demand distribution F, Π_s^*(r) may not be concave (nor quasi-concave) and, hence, not unimodal, in which case the solution to the supplier's optimization problem is not immediate. To obtain a general unimodality condition, we proceed by differentiating the supplier's revenue function Π_s^*(r). First, since (α-r)_+ is nonnegative, we write E[(α-r)_+] = ∫_0^∞ P((α-r)_+ > u) du = ∫_r^∞ F̄(u) du, for 0 ≤ r < H. Since E[α] < ∞ and F is non-atomic by assumption, we have that
d/dr E[(α-r)_+] = d/dr ( E[α] - ∫_0^r F̄(u) du ) = -F̄(r)
for any 0 < r < H. With this formulation, both the supplier's revenue function and its first derivative can be expressed in terms of the mean residual demand (MRD) function of α. In general, the MRD function, m(·), of a nonnegative random variable α with cumulative distribution function (cdf) F and finite expectation, E[α] < ∞, is defined as
m(r) := E[α - r | α > r] = (1/F̄(r)) ∫_r^∞ F̄(u) du, for r < H, and m(r) := 0, otherwise,
see, e.g., <cit.> or <cit.>.[In this literature, the MRD function is known as the mean residual life function due to its origins in reliability applications.] Using this notation, we obtain that Π_s^*(r) = λ_M r m(r) F̄(r) and
dΠ_s^*/dr (r) = λ_M (m(r) - r) F̄(r) = λ_M r (m(r)/r - 1) F̄(r)
for 0 < r < H. Based on (<ref>), the first order condition (FOC) for the supplier's revenue function is that m(r) = r or, equivalently, that m(r)/r = 1. We call the expression ℓ(r) := m(r)/r, 0 < r < H, the generalized mean residual demand (GMRD) function, see <cit.>, due to its connection to the generalized failure rate (GFR) function g(r) := r f(r)/F̄(r), defined and studied by <cit.> and <cit.>. Its meaning is straightforward: while the MRD function m(r) at a point r > 0 measures the expected additional demand given the current demand r, the GMRD function measures the expected additional demand as a percentage of the given current demand. Similarly to the GFR function, the GMRD function has an appealing interpretation from an economic perspective, since it is related to the price elasticity of expected or mean demand (PEED), ε(r) = -r · (d/dr E[q(r|α)]) / E[q(r|α)] <cit.>. Specifically,
ℓ(r) = m(r)/r = ( r F̄(r) / (m(r) F̄(r)) )^{-1} = ( -r · (d/dr E[(α-r)_+]) / E[(α-r)_+] )^{-1} = ε^{-1}(r),
which implies that ℓ(r) corresponds to the inverse of the price elasticity of expected demand (PEED). Hence, in the current setting, demand distributions with decreasing GMRD (DGMRD property) are precisely distributions that describe markets with increasing PEED (IPEED property). This observation ties the economic property of IPEED to the distributional property of DGMRD. Accordingly, we will use the terms DGMRD and IPEED interchangeably. Using (<ref>), the FOC in (<ref>) asserts that the supplier's payoff is maximized at the point(s) of unitary elasticity.
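To illustrate the fixed-point characterization m(r^*) = r^* numerically, the following minimal sketch (ours, for illustration only; it assumes the demand distribution is available as a frozen scipy.stats object with finite second moment) computes the MRD function and solves the FOC:

    import numpy as np
    from scipy.integrate import quad
    from scipy.optimize import brentq
    from scipy import stats

    def mrd(dist, r):
        """Mean residual demand m(r) = (1/Fbar(r)) * int_r^inf Fbar(u) du."""
        integral, _ = quad(dist.sf, r, np.inf)   # sf is the tail Fbar = 1 - F
        return integral / dist.sf(r)

    def optimal_price(dist, r_hi):
        """Solve the FOC m(r) = r on (0, r_hi) for a DGMRD distribution."""
        return brentq(lambda r: mrd(dist, r) - r, 1e-9, r_hi)

    # Sanity check: exponential demand has constant MRD m(r) = 1/lambda,
    # so the optimal wholesale price is r* = 1/lambda = scale.
    demand = stats.expon(scale=2.0)
    print(optimal_price(demand, r_hi=10.0))   # approx. 2.0

Note that the exponential distribution is strictly DGMRD (ℓ(r) = 1/(λr) is strictly decreasing), so the conditions of the theorem below apply.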
For an economically meaningful analysis, since realistic problems must have a PEED that eventually becomes greater than 1 <cit.>, we give particular attention to distributions for which ℓ(r) eventually becomes less than 1, i.e., distributions for which r̄ := sup{r ≥ 0 : m(r) ≥ r} is finite. Observe that for a nonnegative random demand α with continuous distribution F and finite expectation E[α], we have m(0) = E[α] > 0 and hence r̄ > 0. Based on these considerations, it remains to derive conditions that guarantee the existence and uniqueness of an r^* that satisfies the FOC and to show that this r^* indeed corresponds to a maximum of the supplier's revenue function as given in (<ref>). This is established in <Ref>, which is the main result of the present Section.

Consider the supplier's maximization problem max_{r≥0} Π_s^*(r) = λ_M · max_{r≥0} r E[(α-r)_+] and assume that the nonnegative demand parameter, α, follows a continuous (non-atomic) distribution F with support S between L and H. Then
(a) Necessary condition: If an optimal price r^* for the supplier exists, then r^* satisfies the fixed point equation
r^* = m(r^*).
(b) Sufficient conditions: If the generalized mean residual demand (GMRD) function, ℓ(r) := m(r)/r, of F is strictly decreasing and E[α^2] is finite, then at equilibrium, the supplier's optimal price r^* exists and is the unique solution of (<ref>). In this case, r^* = E[α]/2, if E[α]/2 < L, and r^* ∈ [L, H), otherwise.

(a) Since F̄(r) > 0 for 0 < r < H, the sign of the derivative dΠ_s^*/dr (r) is determined by the term m(r) - r, and any critical point r^* satisfies r^* = m(r^*). Hence, the necessary part of the theorem is obvious from (<ref>) and the continuity of dΠ_s^*/dr (r). (b) For the sufficiency part, it remains to check that such a critical point exists and corresponds to a maximum under the assumptions that ℓ(r) is strictly decreasing and E[α^2] < ∞. Clearly, m(r) - r is continuous and lim_{r→0+} (m(r) - r) = E[α] > 0. Hence, Π_s^* starts increasing on (0, H). However, the limiting behavior of m(r) - r, and hence of dΠ_s^*/dr (r), as r approaches H from the left may vary depending on whether H is finite or not. If H is finite, i.e., if the support of α is bounded, then lim_{r→H-} (m(r) - r) = -H < 0. Hence, ℓ(r) eventually becomes less than 1 and a critical point r^* that corresponds to a maximum exists without any further assumptions. Strict monotonicity of ℓ(r) implies that this r^* is unique. If H = ∞, then an optimal solution r^* may not exist, because the limiting behavior of ℓ(r) as r → ∞ may vary, see <Ref> or <cit.>. In this case, the condition of a finite second moment ensures that r̄ < ∞. In particular, as shown in <cit.>, if the GMRD function ℓ(r) of a random variable α with unbounded support is decreasing, then lim_{r→∞} ℓ(r) < 1 if and only if E[α^2] is finite. This establishes existence. Uniqueness follows again from the strict monotonicity of ℓ(r), which precludes intervals of the form m(r) = r that give rise to multiple optimal solutions. To prove the second claim of the sufficiency part, note that E[α] < 2L is equivalent to m(L) < L. Then, the DGMRD property implies that m(r) < r for all r > L; hence r^* < L. In this case, m(r^*) = E[α] - r^* and hence r^* is given explicitly by r^* = E[α]/2, which may be compared with the optimal r^* of the complete information case. On the other hand, if E[α] ≥ 2L, then for all r < L, m(r) = E[α] - r ≥ 2L - r > r, which implies that r^* must be in [L, H).
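As a quick numerical check of the theorem (our illustration, reusing the mrd/optimal_price sketch above), for a uniform demand on [0,1] the fixed point of m(r) = (1-r)/2 is r^* = 1/3, which indeed maximizes r E[(α-r)_+] = r(1-r)^2/2:

    # Continuing the sketch above (uniform demand on [0, 1]):
    demand = stats.uniform(loc=0.0, scale=1.0)
    print(optimal_price(demand, r_hi=0.999))   # approx. 1/3, the fixed point of m

    # Direct check against grid maximization of Pi(r) proportional to r*E[(alpha - r)_+]
    r_grid = np.linspace(1e-3, 0.999, 2000)
    revenue = r_grid * (1.0 - r_grid) ** 2 / 2.0   # closed form for uniform[0,1]
    print(r_grid[np.argmax(revenue)])              # approx. 1/3 as well

Here L = 0, so the theorem places r^* in [L, H), consistent with E[α]/2 = 1/4 not being below L.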
In turn, finiteness of the second moment is required to ensure that the expected demand will eventually become elastic, even in the case of unbounded support, see <cit.>, Theorem 3.2. Thus, part (b) characterizes in terms of their mathematical properties demand distributions that model linear markets with monotone and eventually elastic expected demand. These conditions apply to distributions that may neither be absolutely continuous (do not possess a density) nor have a connected support.In the statement of <Ref>, strict monotonocity can be relaxed to weak monotonicity without significant loss of generality. This relies on the explicit characterization of distributions with MRD functions that contain linear segments which is given in Proposition 10 of <cit.>. Namely, r=r on some interval J=[a,b]⊆ S if and only if rr^2=aa^2 for all r∈ J. If J is unbounded, this implies that α has the Pareto distribution on J with scale parameter 2. In this case, α^2=∞, see <Ref>, which is precluded by the requirement that α^2<∞. Hence, to replace strict by weak monotonicity – but still retain equilibrium uniqueness – it suffices to exclude distributions that contain intervals J=[a,b]⊆ S with b<∞ in their support, for which rr^2=aa^2 for all r∈ J. [Pareto distribution] The Pareto distribution is the unique distribution with constant GMRD and GFR functions over its support. Let α be Pareto distributed with pdf fr=kL^kr^-k+11_{L≤ r}, and parameters 0<L and k > 1 (for 0<k≤ 1 we get α=∞, which contradicts the basic assumptions of our model). To simplify, let L=1, so that fr=k r^-k-11_{1 ≤ r < ∞}, Fr = 1 - r^-k1_{1 ≤ r < ∞}, and α=k/k-1. The mean residual demand of α is given by r=r/k-1+k/k-11-r1_{0≤ r<1} and, hence, is decreasing on [0, 1 and increasing on [1, ∞. However, the GMRD function r=r/r is decreasing for 0<r<1 and is constant thereafter, hence, α is DGMRD. Similarly, for 1≤ r the failure (hazard) rate r=kr^-1 is decreasing, but the generalized failure rate r=k is constant and, hence, α is IGFR. The payoff function of the supplier is =λ_Mrrr=λ_M/k-1r r-rk+k r^2-k ,which diverges as r→∞, for k < 2 and remains constant for k=2. In particular, for k≤ 2, the second moment of α is infinite, i.e., α^2=∞, which shows that for DGMRD distributions, the assumption that the second moment of F is finite may not be dropped for part (b) of <Ref> to hold. On the other hand, for k > 2, we get r^* = k/2k-1 as the unique optimal wholesale price, which is indeed the unique fixed point of r.§.§ General case with j identical retailersTo ease the exposition, we restricted our presentation to n=2 identical retailers. However, the present analysis applies to arbitrary number n≥ 1 of symmetric retailers for all competition-structures that give rise to a unique second-stage equilibrium in which the aggregate ordered quantity depend on α via the term α-r_+ as in <Ref>. This relies on the fact, that in such markets, the total quantity that is ordered from the supplier depends on n only up to a scaling constant. Thus, the approach to the supplier's expected profit maximization in the first-stage remains the same independently of the number of second-stage retailers. To avoid unnecessary notation, we present the general case for the classic Cournot competition.Formally, let N={1,2, …, n}, with n≥ 1 denote the set of symmetric retailers. A strategy profile (retailers' orders from the supplier) is denoted by 𝐪=q_1, q_2, …, q_n with q=∑_j=1^nq_j and q_-i:=q-q_i. 
Assuming a linear inverse demand function p = α − βq, the payoff function of retailer i, for i ∈ N, is given by Π_i(q_i, q_{−i}) = q_i(p − r). Under these assumptions, the second stage corresponds to a linear Cournot oligopoly with constant marginal cost, r. Hence, each retailer's equilibrium strategy, q^*_i(r), is given by q^*_i(r) = (1/(β(n+1)))·(α−r)_+, for r ≥ 0. Accordingly, in the first stage, the supplier's expected revenue function on the equilibrium path is given by Π^*_s(r) = r·E[q^*(r)] = (n/(β(n+1)))·r·E(α−r)_+. Hence, it is maximized again at r^*(α) = α/2 if the supplier knows α, or at r^* = m(r^*) if the supplier only knows the distribution F of α. Based on the above, the number of second-stage retailers affects the supplier's revenue function only up to a scaling constant, and <Ref> is stated unaltered for any n ≥ 1.

§ COMPARATIVE STATICS

The main implication of the closed form expression of the supplier's optimal price in terms of the MRD function (equation (<ref>)) is that it facilitates a comparative statics analysis via the rich theory of stochastic orders <cit.>. Since the equilibrium quantity and price (q^*, p^*) are both monotone in the wholesale price r^*, our focus will be on r^* as the demand distribution characteristics vary. To obtain a meaningful comparison between different market instances (i.e., instances of the same market that correspond to different demand distributions), we assume throughout equilibrium uniqueness and hence, unless stated otherwise, we consider only distributions for which <Ref> applies [Since the DGMRD property is satisfied by a very broad class of distributions, see <cit.>, <cit.> and <cit.>, we do not consider this as a significant restriction. Still, since it is sufficient (together with finiteness of the second moment) but not necessary for the existence of a unique optimal price, the analysis naturally applies to any other distribution that guarantees equilibrium existence and uniqueness.]. First, we introduce some additional notation.

Let X_1 ∼ F_1, X_2 ∼ F_2 be two nonnegative random variables – or equivalently demand distributions – with supports between L_1 and H_1, and L_2 and H_2, respectively (cf. the definition of L and H in <Ref>) and MRD functions m_1(r) and m_2(r). We say that X_1 is less than X_2 in the mean residual demand order, denoted by X_1 ≼_mrd X_2 [In reliability applications, the MRD-order is commonly known as the mean residual life (MRL)-order, <cit.>.], if m_1(r) ≤ m_2(r) for all r ≥ 0. This order plays a key role in the present model. Specifically, by (<ref>), we have that m_1(r) ≤ m_2(r) for any r ≥ 0 if and only if ε_2(r) ≤ ε_1(r) for any r ≥ 0, i.e., if and only if the price elasticity of expected demand in market X_2 is less than the price elasticity of expected demand in market X_1 for any wholesale price r ≥ 0. This motivates the following definition. We will say that market X_2 is less elastic than market X_1, denoted by X_2 ≼_e X_1, if ε_2(r) ≤ ε_1(r) for every r ≥ 0. Based on the above, X_2 ≼_e X_1 if and only if X_1 ≼_mrd X_2. Using this notation, the following Lemma captures the importance of the characterization of the optimal price via the fixed point equation (<ref>).

Let X_1 ∼ F_1, X_2 ∼ F_2 be two nonnegative, continuous and strictly DGMRD demand distributions with finite second moments. If X_2 is less elastic than X_1, then the supplier's optimal wholesale price is lower in market X_1 than in market X_2. In short, if X_2 ≼_e X_1, then r^*_1 ≤ r^*_2. By definition, X_2 ≼_e X_1 implies that ε_2(r) ≤ ε_1(r) for every r ≥ 0, which by (<ref>) is equivalent to m_1(r) ≤ m_2(r) for all r ≥ 0.
Hence, by (<ref>), 1 = ℓ_1(r^*_1) ≤ ℓ_2(r^*_1) < ℓ_2(r) for all r < r^*_1, where the second inequality follows from the assumption that X_2 is strictly DGMRD. Since ℓ_2(r^*_2) = 1, it follows that r^*_1 ≤ r^*_2.

<Ref> states that the supplier charges a higher price in a less elastic market. Although trivial to prove once <Ref> has been established, it is the key to the comparative statics analysis in the present model. Indeed, combining the above, the task of comparing the optimal wholesale price r^* for varying demand distribution parameters – such as market size or demand variability – essentially reduces to comparing demand distributions (market instances) in terms of their elasticities, or equivalently in terms of their MRD functions. Such conditions can be found in <cit.> and <cit.> and provide the framework for the subsequent analysis.

§.§ Transformations that preserve the MRD-order

<Ref> provides a natural starting point to study the response of the equilibrium wholesale price, r^*, to changes in the demand distribution. In particular, if a change in the demand distribution preserves the ≼_mrd-order, then <Ref> readily implies that this change will also preserve the order of wholesale prices. Specifically, let X_1 ∼ F_1, X_2 ∼ F_2 denote two different demand distributions, such that X_1 ≼_mrd X_2. In this case, we know by <Ref> that r^*_1 ≤ r^*_2. We are interested in determining transformations of X_1, X_2 that preserve the ≼_mrd-order and hence the ordering r^*_1 ≤ r^*_2. Unless otherwise stated, we assume that the random demand is such that it satisfies the sufficiency conditions of <Ref> and hence that the supplier's optimal wholesale price exists and is unique.

Let X_1 ∼ F_1, X_2 ∼ F_2 denote two nonnegative, continuous and strictly DGMRD demand distributions, with finite second moments, such that X_1 ≼_mrd X_2.
* If Z is a nonnegative, IFR distribution, independent of X_1 and X_2, then 𝐫^*_{X_1+Z} ≤ 𝐫^*_{X_2+Z}.
* If ϕ is an increasing, convex function, then r^*_{ϕ(X_1)} ≤ r^*_{ϕ(X_2)}.
* If X_p ∼ pF_1 + (1−p)F_2 for some p ∈ (0,1), then r^*_{X_1} ≤ 𝐫^*_{X_p} ≤ r^*_{X_2}.

Part (i) follows from Lemma 2.A.8 of <cit.>. Since the resulting distributions X_i + Z, i = 1,2, may not be DGMRD nor DMRD, the setwise notation is necessary. Part (ii) follows from Theorem 2.A.19 (ibid). Equilibrium uniqueness is retained in the transformed markets, ϕ(X_i), i = 1,2, since the DGMRD class of distributions is closed under increasing, convex transformations, see <cit.>. Finally, part (iii) follows from Theorem 2.A.19. However, the DGMRD class is not closed under mixtures and hence, in this case, the X_p market may have multiple equilibria, which necessitates, as in part (i), the setwise statement for the wholesale equilibrium prices of the X_p market.

<cit.> show that the strict ≼_mrd-order – i.e., if the inequality m_1(r) < m_2(r) is strict for all r – is closed under monotonically non-decreasing transformations and closed in a reversed sense under monotonically non-increasing transformations. The ≼_mrd-order is also closed under convolutions, provided that the convoluting distribution has a log-concave density (as is the case with many commonly used distributions, <cit.>), <cit.>. Finally, if instead of X_1 ≼_mrd X_2, X_1 and X_2 are ordered in the stronger hazard rate order, i.e., if h_2(r) ≤ h_1(r) for all r ≥ 0, denoted by X_1 ≼_hr X_2, then part (i) of <Ref> remains true by Lemma 2.A.10 of <cit.>, even if Z is merely DMRD (instead of IFR).

§.§ Market size

Next, we turn to demand transformations that intuitively correspond to larger market instances.
Again, to avoid unnecessary technical complications, we will restrict attention to demand distributions which satisfy the sufficiency conditions of <Ref> (e.g., DMRD or DGMRD distributions).

§.§.§ Stochastically larger markets

Our first finding in this direction is that stochastically larger markets do not necessarily lead to higher wholesale prices. Technically, this follows from the fact that the usual stochastic order does not imply (nor is implied by) the ≼_mrd-order <cit.>. In particular, <Ref> demonstrates that the correct criterion to order prices is the elasticity rather than the size of different market instances. This provides a theoretical explanation for the intuition of <cit.> that size is not everything and that prices are driven by different forces.

Formally, let X ∼ F, Y ∼ G denote two market instances. If Ḡ(r) ≤ F̄(r) for all r ≥ 0, then Y is said to be less than X in the usual stochastic order, denoted by Y ≼_st X. It is immediate that Y ≼_st X implies E[Y] ≤ E[X]. The following example, adapted from <cit.>, shows that wholesale prices can be lower in a stochastically larger market instance. Specifically, let X ∼ F be uniformly distributed on [0,1] and let Y ∼ G have a piecewise linear distribution with G(0) = 0, G(1/3) = 7/9, G(2/3) = 7/9 and G(1) = 1. Then, as shown in <Ref>, Y ≼_st X (right panel) but r^*_X ≤ r^*_Y (left panel).

§.§.§ Reestimating demand

The above example suggests that the statement that larger markets lead to higher prices cannot be obtained in full generality. This brings us to the main part of this section, which is to investigate conditions under which an increase in market demand leads to an increase in optimal prices. Formally, let X denote the random demand in an instance of the market under consideration. Let c ≥ 1 denote a positive constant and Z an additional random source of demand that is independent of X. Moreover, let r^*_X denote the equilibrium wholesale price in the initial market and r^*_{cX}, r^*_{X+Z} the equilibrium wholesale prices in the markets with random demand cX and X+Z, respectively. How does r^*_X compare to r^*_{cX} and r^*_{X+Z}? While the answer for r^*_{cX} is rather straightforward, see <Ref> below, the case of X+Z is more complicated. Specifically, since DGMRD random variables are not closed under convolution, see <cit.>, the random variable X+Z may not be DGMRD. This may lead to multiple equilibrium wholesale prices in the X+Z market, irrespective of whether Z is DGMRD or not. To deal with the possible multiplicity of equilibria, we will write 𝐫^*_W := {r : r = m_W(r)} to denote the set of all possible equilibrium wholesale prices. Here, m_W denotes the MRD function of a W ∼ F_W demand distribution, e.g., W := X+Z. To ease the notation, we will also write 𝐫^*_W ≤ 𝐫^*_V when all elements of the set 𝐫^*_W are less than or equal to all elements of the set 𝐫^*_V.

<Ref> confirms that prices are always higher in the larger cX market and, under some additional conditions, also in the X+Z market.

Let X ∼ F be a nonnegative and continuous demand distribution with finite second moment.
* If X is DGMRD and c ≥ 1 is a positive constant, then r^*_X ≤ r^*_{cX}.
* If X is DMRD and Z is a nonnegative, continuous demand distribution with finite second moment and independent of X, then r^*_X ≤ 𝐫^*_{X+Z}, i.e., r^*_X ≤ r^*_{X+Z} for any equilibrium wholesale price r^*_{X+Z} of the X+Z market.

The proof of part (i) follows directly from the preservation property of the ≼_mrd-order that is stated in Theorem 2.A.11 of <cit.>.
Specifically, since m_{cX}(r) = c·m_X(r/c) is the MRD function of cX, we have that m_{cX}(r) = r·(m_X(r/c)/(r/c)) = r·ℓ_X(r/c) ≥ r·ℓ_X(r) = m_X(r), for all r > 0, with the inequality following from the assumption that X is DGMRD. Hence, X ≼_mrd cX, or equivalently, cX is less elastic than X, cf. <Ref>, which by <Ref> implies that r^*_X ≤ r^*_{cX}.

Part (ii) follows from Theorem 2.A.11 of <cit.>. The proof necessitates that X is DMRD, and hence requiring that X is merely DGMRD is not enough. Since X is DMRD, we know that r < m_X(r) for all r < r^*_X = m_X(r^*_X). Together with X ≼_mrd X+Z, this implies that r < m_X(r) ≤ m_{X+Z}(r) for all r < r^*_X. Hence, 𝐫^*_{X+Z} ⊆ [r^*_X, ∞), which implies that in this case, r^*_X is a lower bound to the set of all possible wholesale equilibrium prices in the X+Z market.

§.§ Market demand variability

The response of the equilibrium wholesale price to increasing (decreasing) demand variability is even less straightforward. There exist several stochastic orders that compare random variables in terms of their variability, and the effects on prices largely depend on the exact order that will be employed. To proceed, we first introduce some additional notation.

§.§.§ Variability or dispersive orders

Let X_1 ∼ F_1 and X_2 ∼ F_2 be two nonnegative distributions with equal means, E[X_1] = E[X_2], and finite second moments. If ∫_r^∞ F̄_1(u) du ≤ ∫_r^∞ F̄_2(u) du for all r ≥ 0, then X_1 is said to be smaller than X_2 in the convex order, denoted by X_1 ≼_cx X_2. If F_1^{−1} and F_2^{−1} denote the right continuous inverses of F_1, F_2, and F_1^{−1}(s) − F_1^{−1}(r) ≤ F_2^{−1}(s) − F_2^{−1}(r) for all 0 < r ≤ s < 1, then X_1 is said to be smaller than X_2 in the dispersive order, denoted by X_1 ≼_disp X_2. Finally, if ∫_{F_1^{−1}(p)}^∞ F̄_1(u) du ≤ ∫_{F_2^{−1}(p)}^∞ F̄_2(u) du for all p ∈ (0,1), then X_1 is said to be smaller than X_2 in the excess wealth order, denoted by X_1 ≼_ew X_2. <cit.> show that X ≼_disp Y ⟹ X ≼_ew Y ⟹ X ≼_cx Y, which in turn implies that Var(X) ≤ Var(Y).

Does less variability imply a lower (higher) wholesale price? The answer to this question largely depends on the notion of variability that we will employ. <cit.> use the more general ≼_cx-order to conclude that, under mild additional assumptions, less variability implies higher prices. Concerning the present setting, ordering two demand distributions X ∼ F and Y ∼ G in the ≼_cx-order does not in general suffice to conclude that wholesale prices in the X and Y markets are ordered respectively. This is due to the fact that the ≼_cx-order does not imply the ≼_mrd-order. An illustration is provided in <Ref>. In <Ref>, we consider two demand distributions, X ∼ F, a Lognormal(μ = 0.5, σ = 1), and Y ∼ G, a Gamma(α = 2, β = 0.25). For this choice of parameters, E[X] = E[Y] = 0.5, and hence X, Y are ordered in the ≼_cx-order if and only if the tail integrals of F and G are ordered, see <cit.>, Theorem 3.A.1. The right panel depicts the log of the ratio of these integrals, i.e., log(∫_r^∞ F̄(u) du / ∫_r^∞ Ḡ(u) du), which remains throughout positive (and increasing). Hence, Y ≼_cx X. The left panel depicts the price elasticities of expected demand in the X and Y markets. As can be seen, the supplier charges a higher price in the X ∼ F market than in the less variable (according to the ≼_cx-order) Y ∼ G market.

The above conclusion is reversed in the case of <Ref>. In this example, we consider two demand distributions, X ∼ F with F, as above, a Lognormal(μ = 0.5, σ = 1), and Y ∼ G, a Gamma(α = 8, β = 0.25/4). This choice of parameters retains the equality E[X] = E[Y] = 0.5, and hence X, Y can be ordered in the ≼_cx-order if and only if the tail integrals of F and G can be ordered. Again, the right panel depicts the log of the ratio of these integrals, which remains throughout positive (and increasing). Hence, Y ≼_cx X.
However, the picture in the left panel is now reversed. As can be seen, the supplier now charges a lower price in the X ∼ F market than in the less variable (according to the ≼_cx-order) Y ∼ G market. More can be said if we restrict attention to the ≼_disp- and ≼_ew-orders. We will write L_i to denote the lower end of the support of variable X_i for i = 1,2.

Let X_1 ∼ F_1, X_2 ∼ F_2 be two nonnegative, continuous, strictly DGMRD demand distributions with finite second moments. In addition,
* if either X_1 or X_2 is DMRD and X_1 ≼_ew X_2, and if L_1 ≤ L_2, then r^*_1 ≤ r^*_2.
* if either X_1 or X_2 is IFR and X_1 ≼_disp X_2, then r^*_1 ≤ r^*_2.

The first part of <Ref> follows directly from Theorem 3.C.5 of <cit.>. Based on its proof, the assumption that at least one of the two random variables is DMRD (and not merely DGMRD) cannot be relaxed. Part (ii) follows directly from Theorem 3.B.20 (b) of <cit.> and the fact that the ≼_disp-order implies the ≼_ew-order. As in part (i), the condition that both X_1 and X_2 are DGMRD does not suffice, and we need to assume that at least one is IFR. Recall that IFR ⊂ DMRD ⊂ DGMRD, with all inclusions being strict, see e.g., <cit.>.

The first implication of <Ref> is that there exist classes of distributions for which less variability implies lower wholesale prices. This is in contrast with the results of <cit.> and <cit.> (for the additive demand case) and sheds light on the effects of upstream demand uncertainty. In these models, uncertainty falls to the retailer, and the supplier charges a higher price to capture an increasing share of all supply chain profits as variability reduces. Contrarily, if uncertainty falls to the supplier, as in the present model, then the supplier may charge a lower price as variability increases.

The second implication is that these results, albeit general, do not apply to all distributions that are comparable according to some variability order. As illustrated with the examples in <Ref> and the convex order, less variability may lead to both higher or lower wholesale prices. From a managerial perspective, this implies that the effect of demand variability on prices crucially depends on the exact notion of variability that will be employed and may be ambiguous even under the standard setting of linear demand that is studied here.

§.§.§ Mean preserving transformation

To further study the effects of demand variability, we use the mean preserving transformation X_κ := κX + (1−κ)μ, where μ = E[X] and κ ∈ [0,1], see <cit.> and <cit.>. Indeed, E[X_κ] = E[X] and Var(X_κ) = κ^2 Var(X) ≤ Var(X), i.e., X_κ has the same mean and support as, but is less variable than, X. <Ref> shows that X_κ ≼_mrd X and hence, by <Ref>, the supplier always sets a higher price in market X than in the less variable market X_κ. This recovers in a straightforward way the finding of <cit.>.

Let X ∼ F be a nonnegative, continuous, DGMRD demand distribution with finite mean, μ, and variance, σ^2, and let X_κ := κX + (1−κ)μ, for κ ∈ [0,1]. Then, X_κ ≼_mrd X and r^*_κ ≤ r^*.

It suffices to show that Y ≡ μ is smaller than X in the mrd-order, i.e., that Y ≼_mrd X. The conclusion then follows from Theorem 2.A.18 of <cit.> and <Ref>. In turn, to show that Y ≼_mrd X, it suffices to show that ∫_x^∞ F̄_X(u) du / ∫_x^∞ F̄_Y(u) du increases in x over {x : ∫_x^∞ F̄_Y(u) du > 0}, cf. <cit.>, (2.A.3). Since F̄_Y(u) = 1_{u < μ}, this is equivalent to showing that ∫_x^∞ F̄_X(u) du / (μ − x) increases in x for x < μ. Differentiating with respect to x and reordering the terms, we obtain that the previous expression increases in x for x < μ if and only if m_X(x) ≥ μ − x for x ∈ [0, μ). However, this is immediate, since m_X(x) ≥ ∫_x^∞ F̄_X(u) du = μ − ∫_0^x F̄_X(u) du ≥ μ − x.
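The effect of this proposition is easy to reproduce numerically. The following Monte Carlo sketch (ours, for illustration; the lognormal parameters are arbitrary choices) estimates the empirical fixed point r^* = m(r^*) for X_κ at several values of κ and shows that the optimal price increases with the variability parameter κ.

```python
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(0)
x = rng.lognormal(mean=0.0, sigma=0.75, size=500_000)   # baseline demand sample
mu = x.mean()

def r_star(sample):
    """Empirical fixed point r = m(r), with m(r) = E[s - r | s > r]."""
    def g(r):
        tail = sample[sample > r]
        return tail.mean() - 2.0 * r                    # equals m(r) - r
    return brentq(g, 1e-9, np.quantile(sample, 0.9999))

for kappa in (0.25, 0.5, 0.75, 1.0):
    x_kappa = kappa * x + (1.0 - kappa) * mu            # mean preserving transform
    print(f"kappa = {kappa:.2f}  ->  r* = {r_star(x_kappa):.4f}")
# r* increases monotonically in kappa: the supplier charges a higher
# price in the more variable market, as the proposition predicts.
```

For κ → 0 the market becomes deterministic at μ and r^* tends to the complete-information price μ/2.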
§.§.§ Parametric families of distributions

To elaborate on the fact that different variability notions may lead to different responses of wholesale prices, we consider the parametric approach of <cit.>. Given a random variable X with distribution F, let X_i := δ_i + λ_i X, with δ_i ≥ 0 and λ_i > 0 for i = 1,2. <cit.> show that in this case, the wholesale price is dictated by the coefficient of variation, CV_i = √(Var(X_i))/E[X_i]. Specifically, if CV_2 < CV_1, then r^*_1 < r^*_2, i.e., in their model, a lower CV, or equivalently a lower relative variability, implies a higher price. This is not true for our model. To see this, we consider two normal demand distributions X_1 ∼ N(μ_1, σ_1^2) and X_2 ∼ N(μ_2, σ_2^2). By Table 2.2 of <cit.>, if σ_1 < σ_2 and μ_1 ≤ μ_2, then X_1 ≼_mrd X_2 and hence, by <Ref>, r^*_1 ≤ r^*_2. However, by choosing σ_i and μ_i appropriately, we can trivially achieve an arbitrary ordering of their relative variability in terms of their CVs. The reason for this ambiguity is that changing μ_i for i = 1,2 not only affects CV_i, i.e., the relative variability, but also the central location of the respective demand distribution. In contrast, under the assumption that E[X_1] = E[X_2], the stochastic orders approach of the previous paragraph provides a clearer insight. The results of the comparative statics analysis are summarized in <Ref>.

§ MARKET PERFORMANCE

We now turn to the effects of upstream demand uncertainty on the efficiency of the vertical market. As in <Ref>, we restrict attention to the classic Cournot competition with linear demand and an arbitrary number n of competing retailers in the second stage. After scaling β to 1, this implies that the equilibrium order quantities are q_i^*(r) = (1/(n+1))·(α−r)_+ for each i = 1,…,n and any wholesale price r ≥ 0. The supplier's optimal wholesale price, r^*, is given by <Ref>.

§.§ Probability of no-trade

Markets with incomplete information are usually inefficient in the sense that trades which are profitable for all market participants may actually not take place. In the current model, such inefficiencies appear for values of α for which a transaction does not occur in equilibrium under incomplete information although such a transaction would have been beneficial for all parties involved, i.e., supplier, retailers and consumers. If α < r^*, then the retailers buy 0 units and there is an immediate stockout. Hence, for a continuous distribution F of α, the probability of no-trade in equilibrium under incomplete information is equal to P(α ≤ r^*) = F(r^*). To study this probability as a measure of market inefficiency, we restrict attention to the family of DMRD distributions, i.e., distributions for which m(r) is non-increasing.

For any demand distribution F with the DMRD property, the probability F(r^*) of no-trade at the equilibrium of the stochastic market cannot exceed the bound 1 − e^{−1}. This bound is tight over all DMRD distributions.

By expressing the distribution function F in terms of the MRD function, see <cit.>, we get F(r^*) = 1 − (m(0)/m(r^*))·exp{−∫_0^{r^*} du/m(u)}. Hence, by the DMRD property and the monotonicity of the exponential function, it follows that F(r^*) ≤ 1 − (m(0)/m(r^*))·exp{−r^*/m(r^*)}. Since m(r^*) = r^* ≤ m(0), we conclude that F(r^*) ≤ 1 − e^{−1}. If the MRD function is constant, as is the case for the exponential distribution, see <Ref>, then all inequalities above hold as equalities, which establishes the second claim of the Theorem.

<Ref> highlights the tightness of the no-trade probability bound that is derived in <Ref>. <Ref> shows that this bound cannot be extended to the class of DGMRD distributions.
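As a quick numerical check (our sketch; the closed-form MRD and cdf expressions are standard textbook results rather than anything taken from the analysis above), the following computes F(r^*) for two DMRD demand distributions and compares it with the 1 − e^{−1} bound.

```python
import numpy as np
from scipy.optimize import brentq

# name: (MRD function m(r), cdf F(r), bracket for the fixed point search)
cases = {
    "exponential(1)": (lambda r: 1.0,                    # m(r) = 1/lambda, lambda = 1
                       lambda r: 1.0 - np.exp(-r),
                       (1e-9, 50.0)),
    "uniform[0,1]":   (lambda r: 0.5 * (1.0 - r),        # m(r) = (1 - r)/2
                       lambda r: r,
                       (1e-9, 1.0 - 1e-9)),
}

bound = 1.0 - np.exp(-1.0)
for name, (m, F, bracket) in cases.items():
    r_star = brentq(lambda r: m(r) - r, *bracket)        # solve r = m(r)
    print(f"{name}: r* = {r_star:.4f}, F(r*) = {F(r_star):.4f} (bound {bound:.4f})")
# The exponential attains the bound exactly; the uniform stays well below it.
```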
The conclusions are summarized in <Ref>.

[Exponential distribution] Let α ∼ exp(λ), with λ > 0 and pdf f(r) = λe^{−λr} 1_{0 ≤ r < ∞}. Since m(r) = 1/λ for r > 0, the MRD function is constant over its support and, hence, F is both DMRD and IMRD, but strictly DGMRD, as ℓ(r) = 1/(λr) for r > 0. By <Ref>, the optimal strategy r^* of the supplier is r^* = 1/λ. The probability of no transaction, F(r^*), is equal to F(1/λ) = 1 − e^{−1}, confirming that the bound derived in <Ref> is tight. Thus, the exponential distribution is the least favorable, over the class of DMRD distributions, in terms of efficiency at equilibrium.

[Beta distribution] This example refers to a special case of the Beta distribution, also known as the Kumaraswamy distribution, see <cit.>. Let α ∼ Beta(1, λ), with λ > 0 and pdf f(r) = λ(1−r)^{λ−1} 1_{0 < r < 1}. Then, F(r) = 1 − (1−r)^λ and m(r) = (1−r)/(1+λ) for 0 < r < 1. Since the MRD function is decreasing, <Ref> applies and the optimal price of the supplier is r^* = 1/(λ+2). Hence, F(r^*) = 1 − (1 − 1/(λ+2))^λ → 1 − e^{−1} as λ → ∞. This shows that the upper bound of F(r^*) in <Ref> is still tight over distributions with strictly decreasing MRD, i.e., it is not the flatness of the exponential MRD that generated the large inefficiency.

[Generalized Pareto or Pareto II distribution] This example shows that the bound of <Ref> does not extend to the class of DGMRD distributions. Let α ∼ GPareto(μ, σ, k), with pdf f(r) = (1/σ)(1 + kz)^{−(1+1/k)} and cdf F(r) = 1 − (1 + kz)^{−1/k}, with z = (r−μ)/σ. For the parametrization μ < σ_ε and σ_ε = k_ε = (2+ε)^{−1}, with ε > 0, the cdf becomes F_ε(r) = 1 − (1 + (r−μ))^{−(2+ε)}. Moreover, E[α_ε^2] < ∞, since k_ε < 1/2 for any ε > 0. Hence, by a standard calculation, m_ε(r) = (1 + r − μ)/(1+ε), which shows that F_ε is DGMRD but not DMRD. In this case, r_ε^* = (1−μ)/ε and F_ε(r_ε^*) = 1 − ((1+ε)(1−μ)/ε)^{−(2+ε)}, which shows that the probability of a stockout may become arbitrarily large for values of ε close to 0. The pathology of this example relies on the fact that E[α_ε^2] → ∞ as ε ↘ 0.

§.§ Division of realized market profits

If the realized value of α is larger than r^*, then a transaction between the supplier and the retailers takes place. In this case, we measure market efficiency in terms of the realized market profits. Specifically, we fix a demand distribution F (which satisfies the sufficiency conditions of <Ref>) with support S (with lower and upper bounds L and H respectively, as defined in <Ref>) and a realized demand level α ∈ S, and compare the individual realized profits of the supplier and each retailer between the deterministic and the stochastic markets. For clarity, we summarize all related quantities in <Ref>. We are interested in addressing the following questions: First, how do the supplier's (retailers') realized profits compare between the stochastic and the deterministic market? Second, how do retail competition and demand uncertainty affect the supplier's (retailers') share of realized market profits? Third, how does the level of retail competition – the number n of retailers – affect the supplier's profits in both markets? The answers are summarized in <Ref>, which follows rather immediately from <Ref>. To avoid technicalities, we assume throughout that the upper bound H of the support S is large enough, so that H > 2r^* (e.g., H = ∞).

Let F denote a demand distribution with support S within L and H, r^* the respective optimal wholesale price in the stochastic market such that H > 2r^*, and α ∈ S, with α > r^*, a realized demand level for which trading between supplier and retailers takes place in both the stochastic and the deterministic market.
Let, also, Π_s^U/Π^U_agg and Π_s^D/Π^D_agg denote the supplier's share of realized profits in the stochastic and deterministic markets, respectively. Then,
* Π_s^U ≤ Π_s^D, with equality only for α = 2r^*. In particular, Π_s^U/Π_s^D = (4r^*/α)(1 − r^*/α) for any α > r^*.
* Π_s^U/Π^U_agg decreases in the realized demand level α.
* Π_s^D/Π^D_agg is independent of the demand level α.
* Π_s^U/Π^U_agg is higher than Π_s^D/Π^D_agg for values of α ∈ (r^*, 2r^*), equal for α = 2r^*, and lower otherwise.
* Π_s^D/Π^D_agg and Π_s^U/Π^U_agg both increase in the level n of retail competition.
Finally, each retailer's profit in the stochastic market, Π_i^U, is strictly higher than her profit in the deterministic market, Π_i^D, for all demand levels α > 2r^* and less otherwise, with equality for α = 2r^* only.

By <Ref>, we have that: (i) Π_s^U ≤ Π_s^D if and only if (n/(n+1))·r^*(α−r^*)_+ ≤ (n/(n+1))·(α/2)^2, which holds with strict inequality for all values of α, except for α = 2r^*, for which the two quantities are equal. The second part of statement (i) is immediate. For (ii), Π^U_s/Π^U_agg = (nr^* + r^*)/(nr^* + α), and for (iii), Π^D_s/Π^D_agg = (n+1)/(n+2). Now, (iv) and (v) directly follow from the previous calculations. Finally, Π_i^U ≥ Π_i^D if and only if (1/(n+1)^2)·(α−r^*)_+^2 ≥ (1/(n+1)^2)·(α/2)^2, which holds with strict inequality for all values of α > 2r^* and with equality for α = 2r^*.

The statements of <Ref> are rather intuitive and in their largest part, with the exception of part (iv), they conform to earlier findings <cit.>. (i) The supplier is always better off if he is informed about the retail demand level. (ii) In the stochastic market, he captures a larger share of the realized market profits for lower values of realized demand (but not lower than the no-trade threshold of r^*), whereas in the deterministic market (iii) his share of profits is constant with respect to the demand level. (iv) Yet, in the stochastic market, there exists an interval of demand realizations, namely (r^*, 2r^*), for which the supplier's profits (although less than in the deterministic market) represent a larger share of the aggregate market profits. In any case, (v) retail competition benefits the supplier. Finally, in the case that the supplier prices under uncertainty, each retailer makes a larger profit for higher realized demand values, which abides by intuition. These observations conform with the existence of conflicting incentives regarding demand-information disclosure between the retailers and the supplier, cf. <cit.>.

§.§.§ Deterministic and stochastic markets: aggregate profits

We next turn to the comparison of the aggregate market profits between the deterministic and the stochastic market. As before, we fix a demand distribution F (which is again assumed to satisfy the sufficiency conditions of <Ref>) with support S within L and H, and evaluate the ratio Π_agg^U/Π_agg^D of the aggregate realized market profits in the stochastic market to the aggregate market profits in the deterministic market. To study market performance under the two scenarios, we need to evaluate the combined effect of demand uncertainty and retail competition. For a realized demand α ≤ r^*, there is a stockout and the realized aggregate profits Π_agg^U are equal to 0. In this case, the stochastic market performs arbitrarily worse than the deterministic market and the ratio is equal to 0 for any number n ≥ 1 of competing retailers.
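Before continuing with this aggregate comparison, the share formulas established in the proof of the theorem above are easy to tabulate; the following sketch (ours, with arbitrary parameter choices r^* = 1 and n = 2, β scaled to 1) reproduces parts (ii)–(iv).

```python
# Tabulate realized profit shares for n retailers at wholesale price r*
# (stochastic, "U") versus alpha/2 (deterministic, "D"), with beta = 1.
def shares(alpha, r_star, n):
    sup_U = r_star * n * (alpha - r_star) / (n + 1)      # supplier, stochastic
    ret_U = n * ((alpha - r_star) / (n + 1)) ** 2        # all retailers, stochastic
    sup_D = n * alpha**2 / (4 * (n + 1))                 # supplier, deterministic
    ret_D = n * (alpha / (2 * (n + 1))) ** 2             # all retailers, deterministic
    return sup_U / (sup_U + ret_U), sup_D / (sup_D + ret_D)

r_star, n = 1.0, 2
for alpha in (1.5, 2.0, 3.0):                            # realized demand above r*
    share_U, share_D = shares(alpha, r_star, n)
    print(f"alpha = {alpha:.1f}:  U-share = {share_U:.3f},  D-share = {share_D:.3f}")
# The stochastic share exceeds the constant deterministic share (n+1)/(n+2)
# for alpha < 2 r*, matches it at alpha = 2 r*, and falls below it afterwards.
```

We now return to the aggregate-profit comparison, where, as just noted, the ratio vanishes whenever α ≤ r^*.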
Hence, for a non-trivial analysis, we restrict attention to α > r^*, for which trading takes place in both the stochastic and the deterministic markets.

Let F denote a demand distribution with support S within L and H, with H large enough, and let r^* denote the respective optimal wholesale price in the stochastic market. Additionally, suppose that α ∈ S, with α > r^*, is a realized demand level for which trading between supplier and retailers takes place in both the stochastic and the deterministic market. Let, also, Π_agg^U/Π_agg^D denote the ratio of the aggregate realized profits in the stochastic market to the aggregate profits in the deterministic market. Then,
* Π_agg^U/Π_agg^D > 1 for α > 2r^* if n = 1 or n = 2, and for α ∈ (2r^*, (2n/(n−2))·r^*) if n ≥ 3.
* Π_agg^U/Π_agg^D is maximized for α^* = 2nr^*/(n−1) for n ≥ 2, for which it is equal to 1 + (n(n+2))^{−1}. Moreover, Π_agg^U/Π_agg^D converges to 4/(n+2) as α → ∞ for any n ≥ 1.
* Π_agg^U/Π_agg^D increases in the level of competition for demand levels α < 2r^* and decreases thereafter.

Again, to avoid unnecessary technicalities in the proof of <Ref>, we assume that H is large enough, e.g., H = ∞. By <Ref>, a direct substitution yields that Π_agg^U − Π_agg^D > 0 iff ((2−n)/4)·α^2 + r^*(n−1)α − n(r^*)^2 > 0, with α > r^*. For n = 1, 2, the result is straightforward, whereas for n ≥ 3 the result follows from the observation that the roots of the expression in the left part are given by α_{1,2} = 2r^*·(n−1±1)/(n−2). This establishes (i) and, after some trivial algebra, also (ii). To obtain (iii), we compare Π_agg^U/Π_agg^D for arbitrary n to Π_agg^U/Π_agg^D for n+1: 4(α−r^*)(α+(n+1)r^*)/(α^2(n+3)) > 4(α−r^*)(α+nr^*)/(α^2(n+2)) ⟺ 2r^* > α, which yields the statement.

Statement (i) of <Ref> asserts that there exists an interval of realized demand values, whose upper bound depends on the number n of competing retailers, for which the stochastic market outperforms the deterministic market in terms of aggregate profits. The effect of increasing retail competition on the aggregate profits of the stochastic market is twofold. First, the range (interval) of demand values for which the ratio of aggregate profits exceeds 1 reduces to a single point as competition increases (n → ∞). Second, for larger values of realized demand, the ratio converges to 4/(n+2) as α → ∞. This shows that uncertainty on the side of the supplier is less detrimental for the aggregate market profits when the level of retail competition is low. In particular, for n = 1, 2, the aggregate profits of the stochastic market remain strictly higher than the profits of the deterministic market for all large enough realized demand levels. As competition increases, this remains true only for lower (but still above the no-trade threshold) demand levels. However, for higher demand realizations, the ratio degrades linearly in the number of competing retailers.

The statements of <Ref> are illustrated in <Ref>. Here, α ∼ Gamma(2,2), but the picture is essentially the same for any choice of demand distribution that satisfies the sufficiency conditions of <Ref> and for which H is large enough, i.e., H > 2r^*.

§ CONCLUSIONS

In this paper, we revisited the classic problem of optimal pricing by a monopolist who is facing linear stochastic demand. The monopolist may sell directly to the consumers or via a retail market with an arbitrary competition structure. Our main theoretical finding is that the price elasticity of expected demand, and hence also the monopolist's optimal prices, can be expressed in terms of the mean residual demand (MRD) function of the demand distribution.
In economic terms, the MRD function describes the expected additional demand, given that current demand has reached or exceeded a certain threshold. This leads to a closed form characterization of the points of unitary elasticity that maximize the monopolist's profits and to the derivation of a mild unimodality condition for the monopolist's objective function that generalizes the widely used increasing generalized failure rate (IGFR) condition. A direct byproduct is a distribution free and tight bound on the probability of no trade between the supplier and the retailers.

When we compare optimal prices between markets with different demand characteristics, the main implication of the above characterization is that it allows us to exploit various forms of knowledge about the demand distribution via the theory of stochastic orderings. Specifically, if two markets can be ordered in terms of their mean residual demand functions, then the seller's optimal prices can be ordered accordingly. This establishes a link between the price elasticity of expected demand, which is naturally the critical determinant of price movements in response to changes in demand, and some well-understood characteristics of the demand distribution. The stochastic orders approach works under various informational assumptions on the demand distribution and provides a way to systematically exploit the abundance of data that firms possess about historical prices and demand. From a managerial perspective, our study provides a tractable theoretical framework which can be used to reason about price changes in advance of anticipated market movements or to benchmark data-driven predictions that are derived from pricing analytics. Our results suggest that the effects of market size and demand variability on prices critically depend on the notions of size and variability that are employed. This implies that exact predictions about price movements can only be made on a case-by-case basis and should only be used with caution. Such tools are particularly useful in industries in which fixing a price often precedes the demand realization, such as subscription based businesses or businesses that sell durable goods, tickets or leisure time services. More generally, our findings can be used to explain the diversity of price responses to market characteristics that are observed in practice and provide a diverse toolbox for managers to optimally set and adjust prices under different market conditions.

§ ACKNOWLEDGEMENTS

Stefanos Leonardos gratefully acknowledges support by the Alexander S. Onassis Public Benefit Foundation and partial support by NRF 2018 Fellowship NRF-NRFF2018-07.
These authors contributed equally to this work These authors contributed equally to this work These authors contributed equally to this work QuTech and Kavli Institute of Nanoscience, Delft University of Technology, 2600 GA Delft, The Netherlands Department of Physics and Astronomy, and Station Q Purdue, Purdue University, West Lafayette, Indiana 47907, USA Solid State Physics Laboratory, ETH Zürich, 8093 Zürich, Switzerland Department of Physics and Astronomy, and Station Q Purdue, Purdue University, West Lafayette, Indiana 47907, USA Correspondence should be sent to [email protected] QuTech and Kavli Institute of Nanoscience, Delft University of Technology, 2600 GA Delft, The Netherlands

Electrostatic confinement in semiconductors provides a flexible platform for the emulation of interacting electrons in a two-dimensional lattice, including in the presence of gauge fields. This combination offers the potential to realize a wide host of quantum phases. Capacitance spectroscopy provides a technique that allows one to probe directly the density of states of such two-dimensional electron systems. Here we present a measurement and fabrication scheme that builds on capacitance spectroscopy and allows for the independent control of the density and the periodic potential strength imposed on a two-dimensional electron gas. We characterize disorder levels and (in)homogeneity and develop and optimize different gating strategies at length scales where interactions are expected to be strong. A continuation of these ideas might bring to fruition the emulation of interaction-driven Mott transitions or Hofstadter butterfly physics.

A capacitance spectroscopy-based platform for realizing gate-defined electronic lattices
L. M. K. Vandersypen
December 30, 2023
========================================================================================

§ INTRODUCTION

Artificial lattice structures have the potential for realizing a host of distinct quantum phases <cit.>. Of these, the inherent length scale of optical platforms allows for a clean emulation of quantum mechanical band physics, but it also means that interactions are weak and going beyond a single-particle picture is difficult <cit.>. For electronic implementations in solid state, interactions can be made non-perturbatively strong, potentially leading to a host of emergent phenomena. An example is found in graphene superlattices, where not only Hofstadter's butterfly physics <cit.> but also interaction-driven and emergent fractional quantum Hall states in the butterfly appear <cit.>. The ideal platform would host a designer lattice with tunable electron density and lattice strength, allowing one to emulate band physics for a wide variety of lattice types and giving access to the strong-interaction limit of correlated Mott phases <cit.>. Semiconductor heterostructures with nano-fabricated gate structures provide this flexibility in lattice design and operation, yet inherent disorder in the host materials as well as the short length scales required make the realization of clean lattices difficult <cit.>.

In this Letter, we introduce a novel experimental platform for realizing artificial gate-induced lattices in semiconductors, based on a capacitance spectroscopy technique <cit.>, with the potential to observe both single-particle band structure physics, such as Hofstadter's butterfly, and many-body physics, such as the interaction-driven Mott insulator transition.
We discuss different gating strategies for imprinting a two-dimensional periodic potential at length scales where interactions are expected to be strong, characterize intrinsic disorder levels and show first measurements of double gate devices.

§ HETEROSTRUCTURE AND CAPACITANCE SPECTROSCOPY

To host the 2D electron gas (2DEG), we use a GaAs quantum well with AlGaAs barriers, grown by molecular beam epitaxy. The substrate contains a highly Si-doped GaAs layer that acts as a back gate. It is tunnel coupled to the 2DEG through an Al_xGa_1-xAs tunnel barrier (see Fig. <ref>a and Table <ref>). There is no doping layer above the quantum well in order to avoid an important source of disorder. A metallic top gate is fabricated on the surface. A variable capacitor forms between the back and top gates: when an alternating potential difference is applied between them, electrons tunnel back and forth between the back gate and the 2DEG, modifying the capacitance by an amount proportional to the density of states (DOS) of the 2DEG. The tunnel frequency depends mainly on the thickness and the Al content (x) of the tunnel barrier. At the limits of zero or infinite DOS, the system behaves like a simple parallel plate capacitor, described by the distance between top gate and back gate or top gate and 2DEG, respectively. The capacitance is read out using a bridge design with a reference capacitor <cit.>, where the voltage at the bridge point is kept constant (Fig. <ref>b) by changing the amplitude ratio and phase difference of AC signals applied to each capacitor (see supplementary material section A for experimental details).

To impose a periodic potential in the 2DEG, we pattern a metallic gate into a grid shape before making the top gate. From a capacitance spectroscopy perspective, this double-gate structure can be made with two different designs. In the first design, the top gate is separated from the grid gate by a thick dielectric layer, rendering its capacitance to the grid gate negligible (a few pF compared to tens of pF). In that case, we can ignore the grid gate from an AC perspective altogether (Fig. <ref>c). Alternatively, we can minimize the separation between the two gate layers, such that the capacitance between the two top gates (hundreds of pF) exceeds the sample capacitance. Here the two gates effectively form a single gate (Fig. <ref>d), as seen in AC. We investigate both designs below, starting by describing the fabrication (and its limits) and following with measurements of disorder levels and imposed potentials.

§ GATE DESIGN AND FABRICATION

We distinguish devices with a single global gate (Fig. <ref>a) and devices with two layers of gates: a grid gate and a uniform global gate on top (Fig. <ref>c-d). The former will be used to characterize disorder levels in the next section, whereas the latter allows for the imposition of a periodic potential. The strength of the imparted periodic potential depends on the dielectric choice (thick or thin, compare Fig. <ref>c,d), the gate design, the grid gate pitch and the maximum voltages that can be applied. Grid gates are made with a pitch of 100 - 200 nm (Fig. <ref>a-b), which is mainly limited by the fabrication constraints. The maximum voltage is determined by the onset of leakage through the heterostructure or the accumulation of charges in the capping layer, and thus depends on heterostructure details such as the Al concentration and layer thicknesses. The expected imparted potentials at the 2DEG with typical maximum voltages for both designs are shown in Fig.
<ref>(c-f) (calculated using COMSOL electrostatic simulation software). In order to observe a Mott transition and the corresponding localization of electrons on individual sites, the periodic potential amplitude must exceed the local Coulomb repulsion (typically several meV) <cit.>. For 200 nm grids, both designs show similar maximum effective periodic potentials, and they should suffice for the formation of quantum dots. For the 100 nm grids, however, the achievable potentials exceed the charging energy only when using the overlapping gate design. For the smaller pitch grid, effective shielding of the top gate voltage by the grid gate is larger when the top gate is farther away from the heterostructure. Therefore, an overlapping gate design is required to reach sufficiently strong periodic potentials for localization at 100 nm site-to-site pitch.

Furthermore, we note that screening induced by mobile charges in the back gate region has both desired and undesirable consequences. An intended benefit is that disorder from charged impurities or defects in the heterostructure is partly screened, and the more so the closer to the back gate the impurities or defects are located <cit.>. However, electron-electron interactions and the gate-voltage imposed potential modulation itself are partly screened as well, and more so as the lattice dimension is reduced.

Double gate devices with either a thick (Fig. <ref>c) or a thin dielectric (Fig. <ref>d) between the two gates require different fabrication processes. Here we discuss the fabrication of the active regions in both designs, which have a size of 200 µm by 200 µm. The detailed information for all steps in the fabrication is provided in the supplementary material, section B. In both designs, the square grid metallic gates are fabricated at pitches of 100-200 nm using electron beam lithography and evaporation of metals in a standard lift-off process (Fig. <ref>a-b). In the first design, both gates are made of Ti/Au(Pd) and separated by a > 200 nm layer of oxide, such as plasma-enhanced chemical vapor deposition grown SiO_x or plasma-enhanced atomic layer deposition grown AlO_x. In the second design, both gates are made of Al, and an oxygen (remote) plasma oxidation step is used after depositing the first Al layer to ensure sufficient electrical isolation between the two layers by transforming part of the Al gate to aluminum oxide <cit.>. In this design, we measure resistances exceeding 1 GΩ over several V.

Because of the fabrication process, there are limits to the periodicity and homogeneity of the grid gate layer. We typically find (1) that plaquettes of smaller size than 40 nm x 40 nm will not lift off and (2) that the grain size of a particular metal determines the narrowest lines that can be made reliably with lift-off. For the materials used here, AuPd and Al, these effects limit the minimum lattice pitch (Fig. <ref>a). Furthermore, we have analyzed the homogeneity of the lattices by using image processing techniques to extract the statistics of the non-metal plaquette areas (Fig. <ref>b).
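As an illustration of this kind of analysis, the following sketch (ours; in practice the input would be a thresholded electron micrograph, here replaced by a synthetic binary grid with randomized line widths) labels the connected non-metal regions and collects their area statistics.

```python
import numpy as np
from scipy import ndimage

# Synthetic binary "micrograph" of a square grid (True = metal), with slightly
# randomized line widths mimicking fabrication inhomogeneity. Pixel values are
# arbitrary illustration choices.
rng = np.random.default_rng(1)
pitch, line, n_cells = 20, 4, 10
size = pitch * n_cells
metal = np.zeros((size, size), dtype=bool)
for k in range(0, size, pitch):
    metal[k:k + line + rng.integers(0, 3), :] = True    # horizontal grid lines
    metal[:, k:k + line + rng.integers(0, 3)] = True    # vertical grid lines

# Label connected non-metal plaquettes and compute their pixel areas.
labels, n = ndimage.label(~metal)
areas = ndimage.sum(np.ones_like(labels, dtype=float), labels,
                    index=np.arange(1, n + 1))
print(f"{n} plaquettes, mean area {areas.mean():.1f} px^2, "
      f"relative spread {areas.std() / areas.mean():.2%}")
```

On real micrographs the same pipeline, preceded by a grayscale threshold, yields the area histograms used to quantify lattice homogeneity.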
A more relaxed lattice constant means higher relative homogeneity, but this is not necessarily helpful: it also increases the flux through a single plaquette when a perpendicular magnetic field is applied (relevant for Hofstadter butterfly physics, as will be described below) and it decreases the charging energy, relevant for Mott interaction physics.

§ MEASUREMENTS

§.§ Global gates: disorder levels

In order to assess disorder levels, we first measure the devices with a single uniform top gate. We measure the capacitance at frequencies below and above the rate at which electrons tunnel between the 2DEG and the doped back gate region as a function of bias voltage (Fig. <ref>a-b) and magnetic field. Having measured the capacitance at low and high frequencies, we calculate the equilibrium DOS. There are essentially two unknown parameters in this conversion, namely the distance from top to bottom gate and the relative location of the 2DEG itself. The former can be directly inferred from the capacitance at high frequency, the latter by using either the known effective mass or the Landau level splitting with magnetic field as benchmarks (see supplementary material section C for details on this conversion).

As a magnetic field is turned on, we see the onset of Landau level formation. For magnetic fields above 2 T, we observe a splitting between the spin subbands of the Landau levels which increases with the applied magnetic field (Fig. <ref>c). For a given magnetic field, the separation between the two subbands of any Landau level is significantly larger than the Zeeman energy with g = -0.44 for bulk electrons in GaAs <cit.>. This enhanced Zeeman splitting is an effect of the Coulomb repulsion between electrons in the same subband <cit.>. We focus on the low-field data (Fig. <ref>d) and infer disorder levels from the density of states data (Fig. <ref>e). Gaussian fits to the Landau levels yield typical widths between 0.4 and 1 meV at densities above 10^11 cm^-2, which, although hard to compare directly to the mobilities reported for transport-based wafers <cit.>, is comparable to previously reported values for similar heterostructures <cit.>. The Landau levels themselves (aliased at low fields in Fig. <ref>d) become visible above fields of roughly 0.25 T, corresponding to densities per Landau level of 1.2×10^10 cm^-2 and cyclotron gaps of 0.43 meV. The Landau level width did not change when we increased the mixing chamber temperature from 10 mK to 100 mK or when we varied the excitation voltage. Furthermore, the Landau level width was consistent across fabrication schemes, but did vary with the wafer used. Therefore, we consider it a heuristic metric for the achievable disorder levels on a particular wafer.

We have tried to optimize the wafer design to minimize this disorder, whilst allowing for the imposition of a periodic potential. All in all, over twenty different GaAs/Al_xGa_1-xAs wafers grown by molecular beam epitaxy have been used. Growth details of the wafers can be found in Table <ref>. The initial wafer (W1) design was based on Dial et al. <cit.>, and was grown on a conducting substrate. This simplifies the fabrication of single-gate devices, as an unpatterned ohmic back gate contact can be directly evaporated on the back side of the wafer, while simple metallic pads fabricated on the front side can be directly bonded to and used as a top gate.
A double-gate design requires dedicated bond pads, which would give a sizable contribution to the total capacitance when fabricated directly on the wafer. The device used for Fig. <ref>a-b in the main text, fabricated on one of the first rounds of wafers (W2), therefore had bond pads on top of the thick dielectric separating the two gates. This strategy is not compatible with the second design, where there is no thick dielectric layer, and it also gives a very low wire bonding yield due to poor adhesion of the dielectric layers on the GaAs surface. Furthermore, handling both sides of a substrate during fabrication risks contaminating the front surface, and is particularly suboptimal when detailed features (grid gates) are present as well. Subsequent wafers were therefore grown with a 400-800 nm thick, degenerately Si-doped back gate region that is contacted from the front side of the wafer, and is etched to form electrically isolated device and bond pad mesas.

We have further tried to optimize the wafer stacks, aiming to increase the amplitude of the periodic potential at the 2DEG and to decrease disorder levels. A stronger periodic potential can be obtained by either increasing the maximum possible gate voltage, reducing the separation between the grid gate and the 2DEG, or increasing the distance between the 2DEG and the back gate. The latter may also reduce disorder caused by dopant diffusion from the back gate. Increasing the quantum well thickness is also expected to reduce the effect of disorder by accommodating more of the electron wave-function away from the interfaces. Concretely, we have first varied the spacer layer thickness (25 and 35 nm) and the quantum well width (15 and 30 nm). In further attempts to optimize the trade-off between the periodic potential that can be set at a fixed voltage and the maximum voltage we can apply to the gates before leakage sets in, we varied the blocking barrier thickness (40, 50, 60 and 70 nm) and fabricated devices with a thin dielectric layer (see wafers M1 and W3) added underneath the grid gate. None of these, however, managed to noticeably increase the maximum potential we could impose on the 2DEG, or to decrease disorder levels. The strongest effect on disorder was obtained by changing the aluminum concentration in the Al_xGa_1-xAs blocking and tunnel barriers (from x=0.31 everywhere to x=0.36 in the blocking barrier and x=0.20 in the tunnel barrier), while slightly increasing the tunnel barrier thickness in order to keep the tunnel rates roughly the same (see Table <ref>). The measurements shown in Fig. <ref> and Figs. <ref>c-d are taken on this optimized wafer, called M2.

§.§ Grid gates: periodic potential strength

For measurements of two-layer gate devices of both designs (Fig. <ref>), we keep the grid gate potential fixed, given that it serves as the gate voltage of the first transistor in the amplification chain, and map out the remaining two gate voltages over as large a range as possible. Initial devices of both designs indeed show accumulation as a function of the two gate voltages (transition from light grey to blue in Figs. <ref>a,c). At voltages where we expect a flat periodic potential (close to the center of each panel in Fig.
<ref>), and for our final set of devices, we can still distinguish well-defined Landau levels, indicating that the added fabrication steps themselves do not severely increase the disorder levels (data not shown). This disorder in the potential landscape also leads to a broadening of the onset of accumulation, seen in the center of Figs. <ref>a,c. For devices of the first design, this broadening increases as we move away from the center, along the grey-blue boundary (Fig. <ref>a). This suggests that we see a gate-voltage induced spatial variation in the 2DEG potential that exceeds disorder levels (0.4-1 meV) at low densities. Based on electrostatic simulations of the strength of the imposed potential, the gate-voltage induced variation is indeed expected to exceed the disorder levels (Fig. <ref>). The asymmetry between positive and negative top gate values seen in the data could possibly be explained by effective disorder levels being smaller when charges accumulate mainly underneath the grid gate, as compared to when charges accumulate mainly underneath the dielectric. Finally, in Fig. <ref>b we resolve separate lines at the onset of accumulation for negative top gate voltages. Even though we expect to see evidence of miniband formation, we do not attribute these splittings to miniband formation, as they show a much larger periodicity in back gate voltage than the 6 mV expected from the density of states calculation (see below).

For devices of the second design, the widening of the onset of accumulation is less pronounced, but the effect of gating is seen at finite magnetic fields, where a voltage difference between the grid and top gate effectively blurs out the gaps between Landau levels (Fig. <ref>c-d). This indicates that the imposed local potential variation must be comparable to or stronger than the Landau level spacing at 1 T (1.7 meV). We conclude that also for the second design, the 200 nm periodic potential exceeds disorder levels. Increasing further the amplitude of the potential variation induced by the gates was limited by saturation of the gating effect. For the first gate design, we find a saturation of the effect of the top gate in gating the 2DEG at gate voltages exceeding 35 V in absolute value. This could be a sign of charges building up at the interface of the capping layer and the dielectric, or in the dielectric itself, which screen the effect of the top gate. This saturation limits the potential we can impose on the 2DEG. For the second gate design, a maximum voltage difference of roughly 2 V can be set between the back gate and the surface gates before leakage starts to occur. As an attempt to allow for larger gate voltages before leakage through the heterostructure occurs, we have tried the same fabrication but with an additional 5 nm ALD-grown AlO_x dielectric placed underneath the grid gate. This indeed prevents leakage, but the gating effect saturated at the same voltages as where leakage occurred for devices without this additional dielectric. Therefore, 2 V was still the maximum voltage we could apply between the back and surface gates in the second design.

§ DISCUSSION: WHAT TO SEARCH FOR IN FUTURE DATA

As we have just seen, (i) the periodic potential exceeds disorder levels. In order to see Hofstadter's butterfly and Mott physics, however, we also need (ii) to be able to resolve the induced density of states modulations and (iii) the lattice potential from the grid itself should be sufficiently homogeneous.
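For reference in the discussion that follows, the characteristic field and density scales of a square lattice follow from simple arithmetic; the sketch below (ours) reproduces the numbers quoted in the text for 200 nm and 100 nm pitches.

```python
from scipy.constants import h, e

# Field giving one flux quantum h/e per plaquette, and the electron density
# corresponding to one filled miniband (two electrons per lattice site).
for a in (100e-9, 200e-9):                   # lattice pitch in meters
    B_phi0 = (h / e) / a**2                  # in tesla
    n_band = 2.0 / a**2 * 1e-4               # in cm^-2
    print(f"a = {a * 1e9:.0f} nm: B(Phi_0) = {B_phi0 * 1e3:.0f} mT, "
          f"n(2 e/site) = {n_band:.1e} cm^-2")
# 200 nm pitch: roughly 103 mT and 5.0e9 cm^-2, matching the values below.
```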
The latter two considerations will be discussed below, based on the data presented. Using either gate design, we find both gates to influence the accumulation of charges in the quantum well as expected, but neither shows clear evidence of a lattice potential imposed on the 2DEG (Fig. <ref>). At zero magnetic field, a lattice potential would lead to minibands that manifest as periodic modulations in the density of states (and capacitance) with a period corresponding to two electrons per lattice site, or 5×10^9 cm^-2 for a 200 nm square grid. Expressed in mV on the back gate, this corresponds to a period of 6 mV. Furthermore, at finite magnetic field, Landau levels are expected to show structure due to Hofstadter butterfly physics <cit.>, with the largest gaps expected around k ± 1/4 of a flux quantum Φ_0 threading each lattice plaquette (with k an integer; Φ_0 corresponds to 104 mT for a 200 nm grid). Finally, a strong enough periodic potential would allow interaction effects to dominate. Miniband gaps are expected to split as filling starts to occur with a period of one electron per lattice site, akin to the interaction-driven Mott transition <cit.>. None of these effects are visible in Fig. <ref>, nor in many detailed targeted scans of magnetic field and gate voltages on devices with 200 and 100 nm grid gate periodicity.

If we compare the 5×10^9 cm^-2 density modulations expected from miniband formation with the 1.2×10^10 cm^-2 broadening of low-field Landau levels (global gate devices at high densities, i.e., we do not have evidence that we can resolve density variations below 1.2×10^10 cm^-2), it is reasonable that gaps are not yet seen opening up at densities corresponding to the filling of (pairs of) electrons on each lattice site. This suggests that either the lattice size or the wafer disorder has to be further reduced. As it proves hard to lift off plaquettes of metal that are smaller than roughly 40 nm by 40 nm, there is not much room to reduce the lattice dimension further in this particular fabrication scheme (Fig. <ref>a). For 100 nm pitch grids, the period of the density modulations is expected to be four times larger, but it is still comparable to the current best-case-scenario Landau level broadening. As such, reducing intrinsic disorder seems necessary. An appropriate goal would be to make double layer gate devices with Landau levels that are distinguishable at fields below 100 mT.

The visibility of Hofstadter butterfly gaps depends not only on the intrinsic disorder in the device, but also on the inhomogeneity in the plaquette sizes, as this would entail a different number of flux quanta threading through different plaquettes. If the size variations from electron micrographs of our devices translated to identical size variations in the periodic potential (Fig. <ref>b), we should just be able to distinguish the largest gaps <cit.>. It is hard to assess, however, whether this indicator from the electron micrographs directly correlates to the relevant physics in the 2DEG.

§ OUTLOOK

There is room for further optimization of these devices. On the heterostructure side, the distance between the back gate and the 2DEG can be further increased, compensating with a decreased Al content in the tunnel barrier to keep the tunnel rate fixed. Furthermore, part of the spacer layer can be grown at reduced temperatures, which has been shown to strongly reduce disorder by limiting the diffusion of Si dopants from the back gate region <cit.>.
On the fabrication side, there is still room left for a modest reduction of the lattice periodicity with the current lift-off process. Even smaller length scales can be obtained by switching to dry etching of the grid pattern, albeit with an unknown impact on wafer disorder levels.

In summary, we have demonstrated a novel platform intended for the realization of artificial lattices of interacting particles. Although fine-tuning the design to the point where a sufficiently homogeneous and strong periodic potential can be applied remains to be done, the quantum Hall data already shows how the strong-interaction, low-temperature limit can be reached. Such a platform has potential for studying the interaction-driven Mott insulator transition <cit.> and Hofstadter butterfly physics <cit.> with finite interactions, and can be extended from the steady-state measurements presented here to include time-domain measurements of excited states <cit.>.

The authors acknowledge useful discussions with O.E. Dial, R.C. Ashoori, G.A. Steele, R. Schmits, A.J. Storm and the members of the Delft spin qubit team, as well as experimental assistance from M. Ammerlaan, J. Haanstra, S. Visser and R. Roeleveld. This work is supported by the Netherlands Organization of Scientific Research (NWO) VICI program, the European Commission via the integrated project SIQS and the Swiss National Science Foundation. The work at Purdue was supported by the US Department of Energy, Office of Basic Energy Sciences, under Award number DE-SC0006671. Additional support from the W. M. Keck Foundation and Microsoft Station Q is gratefully acknowledged.

A capacitance spectroscopy-based platform for realizing gate-defined electronic lattices L. M. K. Vandersypen December 30, 2023 ========================================================================================

Supplementary Information for Capacitance spectroscopy of gate-defined electronic lattices T. Hensgens^1, U. Mukhopadhyay^1, P. Barthelemy^1, S. Fallahi^2, G. C. Gardner^2, C. Reichl^3, W. Wegscheider^3, M. J. Manfra^2 and L. M. K. Vandersypen^1 ^1QuTech and Kavli Institute of Nanoscience, TU Delft, 2600 GA Delft, The Netherlands ^2Department of Physics and Astronomy, and Station Q Purdue, Purdue University, West Lafayette, Indiana 47907, USA ^3Solid State Physics Laboratory, ETH Zürich, 8093 Zürich, Switzerland

§.§ Capacitance bridge The capacitance bridge is built on a printed circuit board (PCB) that is mounted on the 10 mK mixing chamber stage of a dilution fridge and whose main components are the device, the reference capacitor and a high electron mobility transistor (HEMT, which serves as the first amplifier). By mounting the HEMT orthogonal to the PCB surface, we can apply magnetic fields to the sample without influencing the amplification chain. All DC lines on the sample PCB have RC filters on top of the filtering in the fridge. Resistances of 10 and 40 MΩ are used to bias the bridge point and the top gate in DC, respectively, and a bias-tee is added to bias the back gate on top of the measurement signal. The high frequency lines are not attenuated in the fridge, as we found this to lead to ground loop issues, but are instead attenuated on the PCB itself. Measurement excitations are simple sinusoidal signals that get attenuated to the µV level and are generated using a signal generator at room temperature.
The bridge point voltage is amplified further at 0.7 K and at room temperature and measured using a lock-in. An iterative scheme is implemented to minimize the bridge point voltage by updating the amplitude ratio and phase difference of the two excitations as gate voltages and applied magnetic field are changed. The excitation on the sample side is kept constant and the excitation on the reference capacitor side is updated based on the secant method. For this, we model the bridge as a linear system of complex variables: Y = AX + B, where X is the reference signal, Y is the output from the lock-in, and A and B are complex numbers. Given two iterations with reference signals X_i and X_i+1 and respective output values Y_i and Y_i+1, A and B are calculated, as well as X_i+2 = -B/A, which is subsequently set and Y_i+2 measured. As the first two iterations, we take the last set reference signal as well as a point with a typically 1 % higher amplitude and a tenth of a degree increased phase. Convergence is reached when the amplitude difference between the last two reference signals drops below some pre-defined value, typically chosen to be several parts per thousand of the amplitude itself. The sample capacitance C_sample follows from the reference capacitor value C_ref and the applied amplitude ratio R = V_ref/V_sample and phase difference δϕ = π + ϕ_ref - ϕ_sample at equilibrium: C_sample = cos(δϕ) R C_ref.

§.§ Design and fabrication details As discussed earlier, several different designs and fabrication recipes were used throughout this work to fabricate devices. In the first part, we give some general information on steps that have been employed for many of these fabrication runs. Next we describe fabrication processes of different dielectrics used in the first design to separate the top and grid gate layers. Finally, we provide detailed information for a fabrication run of the second design with overlapping aluminum gates, which serves as a clear example from which the steps required for fabricating the other devices measured can be deduced.

All lithography steps were performed using electron beam lithography (Vistec EBPG 5000+ or 5200) at 100 kV acceleration voltage. Etching Al_xGa_1-xAs was done using diluted Piranha (1:8:240 H_2SO_4:H_2O_2:H_2O), yielding etch rates of roughly 4 nm/s. The actual etch rate decreases on a timescale of minutes as the H_2O_2 concentration slowly decreases. Spinning is done at 500 rpm for 5 seconds and then for 55 seconds at speeds listed below. Etching SiO_2 and AlO_x was done using buffered HF (BOE 1:10) solutions. After either type of wet etch, devices are rinsed repeatedly in H_2O. Adhesion issues for resists with HF etch times longer than 20 s mean that iterative etching and re-baking are necessary. For the 366 nm AlO_x layers, this meant we had to use a dry Cl etch to etch the bulk of the depth of the vias before finishing with a wet etch. Metallic layers were deposited using electron-beam evaporation at room temperature and subsequent lift-off in a solvent.

The first design, with a thick dielectric separating the two gate layers, has been fabricated with two different dielectrics. For the results of Fig. 5a-b in the main text, plasma-enhanced chemical vapor deposition (PECVD) of 360 nm of SiO_2 as dielectric was used, which was found to introduce phase noise during capacitance-bridge measurements. We have also fabricated devices with 366 nm of plasma-enhanced atomic layer deposition (ALD) grown AlO_x dielectric (optical image in Fig. S 1a).
Although these devices had less phase noise, they showed large top gate hysteresis, rendering them practically impossible to measure (Fig. S 1b). Furthermore, etching small vias through such a thick layer of alumina is very cumbersome. All devices of the first design had low yield in wire-bonding because of poor adhesion of the dielectric layers to the GaAs surface.

An overview of the fabrication steps for realizing double-layer gate devices with aluminum gates (second design) is given below. See Fig. S 2a for schematic side views of the process and Fig. S 2e for the top view of a finished device.

* Ohmic contacts - spin PMMA 495K A8 resist at 6000 rpm - bake 15 min at 175 °C (400 nm) - lithography - development 60 s in 1:3 MIBK/IPA - wet etch of 180 nm in diluted Piranha - evaporation of 5/150/25 nm Ni/AuGe/Ni - lift-off in acetone and IPA rinse - anneal 60 s at 440 °C in forming gas.

* Mesa etch - spin PMMA 495K A8 resist at 6000 rpm - bake 15 min at 175 °C (400 nm) - lithography - development 60 s in 1:3 MIBK/IPA - wet etch of 700 nm in diluted Piranha - sputtering 700 nm of SiO_2 - lift-off in acetone and IPA rinse.

* Bridges - spin PMMA 495K A8 resist at 6000 rpm - bake 15 min at 175 °C (400 nm) - lithography - cross-link PMMA strips through electron beam overdose at 25 mC/cm^2. These sections act as bridges over which the leads will connect sample mesa and bond pad regions.

* Connection pads and markers - spin PMMA 495K A8 resist at 6000 rpm - bake 15 min at 175 °C (400 nm) - lithography - development 60 s in 1:3 MIBK/IPA - evaporation of 10/50 nm Ti/Au - lift-off in acetone and IPA rinse. These sections act either as markers or as pads that will be contacted on the top both by the Al gates and the leads contacting the bond pads. We found these thin layers of metal to be the most robust way to make an electrical connection (typically several Ohm) between the Al gates and the Au bond pads.

* Grid gate - spin CSAR 62.04 resist at 5000 rpm - bake 3 min at 150 °C (72 nm) - lithography - development 70 s pentyl acetate and 60 s 1:1 MIBK:IPA - evaporation of 20 nm Al - lift-off in NMP at 70 °C using soft ultrasound excitation for 4 hrs and subsequent acetone and IPA rinse - oxidation for 20 min at 200 °C at 100 mTorr and 300 W RF power using the remote plasma of an ALD machine. We have optimized the lithographic sequential writing such that a 200 µm x 200 µm grid is written in one go and in under a minute, avoiding stitching errors and reducing the effect of drift (typically several tens of nm/min). We have done this by direct programming of an iterative sequence that the e-beam follows in writing the grid, instead of the standard procedure of converting a design file (in this case a large square grid) to an e-beam lithography file using BEAMER software. Furthermore, we add a thin 200 nm frame around the grids whose overdose is chosen to counter proximity edge effects (Fig. S 2d). Note also that we found the conflicting requirements of high resolution and undercut required for lift-off to be best served using a single-layer CSAR62 resist. Finally, we find feature size, yield and reproducibility to be limited by the grain size of the evaporated Al, instead of by the resist mask or lithography process. To achieve a smaller grain size, we used a fast Al evaporation rate of 0.2 nm/s. As such, Ti/Au but especially Ti/AuPd gates are easier to fabricate than Al gates, but they cannot be oxidized and would require actual deposition of a dielectric.
Also note that the lift-off based fabrication of grids allows for different lattice types to be made, see Fig. S 2b-d.

* Top gate - spin PMMA 495K A8 resist at 6000 rpm - bake 15 min at 175 °C (400 nm) - lithography - development 60 s in 1:3 MIBK/IPA - evaporation of 50 nm Al - lift-off in acetone and IPA rinse.

* Bonding pads - spin OEBR-1000 (200cp) lift-off resist at 3500 rpm - bake 30 min at 175 °C (500 nm) - spin PMMA 950K A2 resist at 2000 rpm - bake 10 min at 175 °C (90 nm) - lithography - evaporation of 50/200 nm Ti/Au - lift-off in acetone and IPA rinse.

§.§ Conversion from capacitance to density of states In calculating the density of states from capacitance data, we follow a procedure described previously <cit.>. We model the system as a parallel plate capacitor made up of the top and bottom gates, with the potential for added charges at the location of the quantum well, as sketched in Fig. S 3. As a start, capacitances are measured as a function of gate voltages and magnetic field values (Fig. S 4a). Note that the heterostructure stack is designed to keep the tunnel frequency in the middle of the experimental measurement window (Fig. 4a-b in the main text): below 1 kHz the signal to noise ratio declines (mainly because of the 1/f noise of the first transistor in the amplification chain) and above 2 MHz systematic errors occur (we find asymmetric cross-talk between the two excitation signals and the second transistor in the amplification chain).

The total voltage difference over the device is a combination of the electric fields, V = V_back - V_top = E_1(w+d) + E_2 d, which in turn depends on the charges on the plates as V = σ_top(w+d)/ϵ + σ_QW d/ϵ. The total capacitance, which is the one measured at low enough frequencies, is defined as

C_low = ∂Q/∂V = A ∂σ_top/∂V = ϵA/(w+d) - [dA/(w+d)] ∂σ_QW/∂V,

plus small terms that depend on changing distances and which we ignore. The first term describes the bare capacitor, and is therefore equal to the total capacitance measured at high frequencies: C_high = ϵA/(w+d). The second term is the one of interest. It describes changes between the capacitance measured at low and high frequency because of the addition of charges in the quantum well, which allows us to infer changes in electron density using

∂n/∂V = -(1/e) ∂σ_QW/∂V = [1/(eA)] [(w+d)/d] (C_low - C_high)

(Fig. S 4b). The voltage required to change the Fermi level E_F of the quantum well can be found using a similar deduction to the one described above, and is described by a voltage-dependent lever arm α ≡ -e ∂V/∂E_F. We find the lever arm by following the dependence of the Fermi level in the quantum well through changes in the electric field as

∂E_F/∂V = -e w ∂E_1/∂V = -e ( w/(w+d) + (e/ϵ) [wd/(w+d)] ∂n/∂V )

(Fig. S 4c). The first term describes how the Fermi level of a gapped system in the quantum well (δn = 0) changes with bias, as expected given its relative location w/(w+d) between the plates of a simple parallel plate capacitor (Fig. S 4c). It is the second term that encompasses the electron filling, showing the lever arm to increase when charges can be added to the quantum well (after accumulation this becomes the dominant term, see Fig. S 4b). Given the above expressions for density and Fermi level changes as a function of gate voltage, we can define the density of states in the 2DEG through

DOS = (∂n/∂V)(∂V/∂E_F) = [1/(e^2 A)] [(w+d)/d] α (C_low - C_high)

(Fig. S 4d). As indicated by changes in C_high in Fig. S 4a, the distances describing the system are non-static with gate voltage.
In the case of (w+d), this is most likely due to back gate charges populating part of the spacer layer as the electric fields bend the conduction band edge, indeed increasing C_high for more negative back gate voltages. The exact location of the charges in the quantum well, and the related distance d, however, cannot be directly inferred from an independent measurement. As a first guess, the growth distances combined with the (w+d) extracted from C_high suffice. A better estimate can be made using the known linear degeneracy of Landau levels with magnetic field, n_LL = 2eB/h (Fig. S 4b). To obtain the best possible calibration, however, we compensate for any further dependence of the relative quantum well position on back gate voltage by pegging the 0 T DOS after accumulation to the expected value of m/(πħ^2) ≈ 2.8 × 10^13 eV^-1 cm^-2 (Fig. S 4d), and use this calibration for nonzero magnetic field values.
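To make the conversion chain concrete, the following minimal sketch turns a pair of measured capacitances into a density of states using the expressions above. All numerical values (geometry, area, lever arm, capacitances) are placeholders chosen for illustration, not device values, and the sign conventions follow the text.

```python
# Schematic conversion from measured capacitances to a 2DEG density of states,
# following the parallel-plate expressions above. All numbers are placeholders.
import scipy.constants as c

A = (200e-6)**2       # gated area (m^2) -- hypothetical
w = 60e-9             # back gate to quantum well distance (m) -- hypothetical
d = 50e-9             # quantum well to top gate distance (m) -- hypothetical
alpha = 120.0         # lever arm, alpha = -e dV/dE_F -- hypothetical
C_low, C_high = 1.30e-12, 1.05e-12   # low/high frequency capacitances (F)

# Change in 2DEG density per volt: dn/dV = (w+d)/d * (C_low - C_high) / (e A)
dn_dV = (w + d) / d * (C_low - C_high) / (c.e * A)

# Density of states: DOS = (1/e^2 A) * (w+d)/d * alpha * (C_low - C_high)
dos = (w + d) / d * alpha * (C_low - C_high) / (c.e**2 * A)

print(f"dn/dV = {dn_dV * 1e-4:.2e} cm^-2 V^-1")
print(f"DOS   = {dos * c.e * 1e-4:.2e} eV^-1 cm^-2")  # compare to m/(pi hbar^2)
```

In practice the geometric factors and the lever arm are themselves voltage dependent, which is exactly the calibration issue addressed in the preceding paragraph.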
http://arxiv.org/abs/1709.09058v2
{ "authors": [ "T. Hensgens", "U. Mukhopadhyay", "P. Barthelemy", "S. Fallahi", "G. C. Gardner", "C. Reichl", "W. Wegscheider", "M. J. Manfra", "L. M. K. Vandersypen" ], "categories": [ "cond-mat.mes-hall" ], "primary_category": "cond-mat.mes-hall", "published": "20170926143737", "title": "A capacitance spectroscopy-based platform for realizing gate-defined electronic lattices" }
http://arxiv.org/abs/1709.08989v2
{ "authors": [ "Christoph Charles" ], "categories": [ "gr-qc" ], "primary_category": "gr-qc", "published": "20170926130139", "title": "Simplicity constraints: a 3d toy-model for Loop Quantum Gravity" }
Gaia's exceptional resolution (FWHM ∼ 0.1 arcsec) allows identification and cataloguing of the multiple images of gravitationally lensed quasars. We investigate a sample of 49 known lensed quasars in the SDSS footprint, with image separations less than 2 arcsec, and find that 8 are detected with multiple components in the first Gaia data release. In the case of the 41 single Gaia detections, we generally are able to distinguish these lensed quasars from single quasars when comparing Gaia flux and position measurements to those of Pan-STARRS and SDSS. This is because the multiple images of these lensed quasars are typically blended in ground-based imaging and therefore the total flux and a flux-weighted centroid are measured, which can differ significantly from the fluxes and centroids of the individual components detected by Gaia. We compare the fluxes through an empirical fit of Pan-STARRS griz photometry to the wide optical Gaia bandpass values using a sample of isolated quasars. The positional offsets are calculated from a recalibrated astrometric SDSS catalogue. Applying flux and centroid difference criteria to spectroscopically confirmed quasars, we discover 4 new sub-arcsecond-separation lensed quasar candidates which have two distinct components of similar colour in archival CFHT or HSC data. Our method based on single Gaia detections can be used to identify the ∼ 1400 lensed quasars with image separation above 0.5 arcsec that are expected to have only one image bright enough to be detected by Gaia.

gravitational lensing: strong – techniques: miscellaneous – quasars: general

§ INTRODUCTION Gravitationally lensed quasars are powerful tools to study extragalactic and cosmological phenomena. As with all lenses, the unique geometry can be used for detailed study of both the sources <cit.> and also the lensing galaxies <cit.>. Since quasars are variable sources, the time delays between the lightcurves of the individual lensed images can be measured <cit.>. These time delays depend on the different path lengths from source to observer and the gravitational potential along these paths, and thus can be used to determine the Hubble constant <cit.>, recently to better than 4 percent precision <cit.>. Each lightcurve is imprinted with further variations due to microlensing by stars in the lensing galaxy, making it possible to probe the structure of the source quasar accretion disk <cit.> and dark matter fractions in the lensing galaxy <cit.>. Though more than 150 lensed quasars are known to date, only a subset can be used to enable the above science. For example, microlensing events may be most noticeable in systems with low-redshift lenses, leading to smaller microlensing timescales. The methods developed in this paper are aimed at finding small-separation lensed quasars, with image separations typically less than 1 arcsec.
These lenses, though more difficult to find, are more common than wider-separation systems and can be used both to probe higher-redshift/lower-mass lensing galaxies and to understand mass distributions in the cores of galaxies <cit.>. Furthermore, forming a complete sample of lensed quasars at lower separations will yield better constraints on the cosmological mass density parameter and the evolution of the lensing galaxy population <cit.>.

Dedicated follow-up programs to determine statistical samples of lensed quasars have been based on radio surveys <cit.> or wide-field optical surveys <cit.>. Each survey has its own advantages and disadvantages; radio surveys like CLASS were only able to target radio-loud lensed sources but discovered smaller image separation objects, and SQLS had multi-band optical imaging of a large fraction of the sky, but was incomplete towards the smallest-separation lenses due to the median seeing of 1.4 arcsec in SDSS. Accordingly, the median image separations of the CLASS and SQLS lenses are 0.8 and 1.65 arcsec respectively.

Many techniques have been suggested in the literature for finding such rare systems in these vast datasets. Each method is tailored to different types of data, e.g. multi-band colour classification to find quasar-like systems which are extended or deblended into multiple components <cit.>. Other techniques rely on the time domain to reveal neighbouring variable objects <cit.> or search for time delays through cross-correlations <cit.>. However, many current wide-field surveys lack the necessary number of epochs to apply these methods. In many cases the lenses must be found through single-epoch imaging and, in the case of some searches, starting from spectroscopically confirmed quasars.

With the advent of many wide-field optical surveys covering the whole sky, <cit.> (hereafter OM10) predicted lensed quasar numbers expected in such surveys. Crucially, these numbers rely on all lensed quasars being detected at separations above two thirds of the point spread function (PSF) full width half-maximum (FWHM) down to the limiting magnitudes of the survey. This is a difficult task without many epochs of imaging, and so further techniques must be developed to meet this goal. It is important to note that, in agreement with these predictions, datasets like SDSS are not depleted of bright lensed quasars, since searches like SQLS are biased towards certain source redshifts and quasar selection criteria <cit.>. SQLS also mainly targeted candidate systems with image separation ≳ 1 arcsec. Indeed, lensed quasars are being found in SDSS photometric data alone <cit.> without spectroscopic selection.

In this paper we show that the recent Gaia data release <cit.> is a very useful tool to overcome the difficulties of finding lensed quasars in ground-based surveys alone, based mainly on its excellent angular resolution and full-sky coverage. Figure <ref> demonstrates how Gaia is able to detect three of the four quasar images of PG1115+080 <cit.>. The methods we develop are useful for selecting any optical binary. This includes stellar binaries, which are key to understanding star formation and evolutionary models <cit.>, and also quasar pairs, including binary quasars and projected pairs <cit.>. The brightest close-separation binary companions are often found in ground-based imaging data, using similar methods to quasar lens searches, such as looking for extended systems inconsistent with a single PSF <cit.>.
However, this is an incomplete method at small separations, with pipeline PSF models unknowingly using these binaries to determine the local PSF, thus affecting the PSF magnitudes.

In Section 2 of this paper we outline the most important features of the Gaia satellite and how its source detection relates to quasars and lensed quasars. In Section 3 we consider this in the context of a sample of known lensed quasars with small image separations. In Sections 4 and 5 we present a technique to use Gaia to determine lens candidates and apply this technique to SDSS spectroscopically confirmed quasars, presenting potential small-separation lens candidates. We summarise our findings in Section 6.

§ GAIA Gaia is a space-based mission mapping the stars of the Milky Way with unprecedented astrometric precision. It is able to detect bright quasars, making it a useful tool for lensed quasar searches. In this section we describe the basic Gaia mission and satellite details to explain the cataloguing of lensed quasars in Gaia's first data release (hereafter GDR1).

§.§ Overview of the Gaia mission and satellite The Gaia satellite <cit.> was launched on 19 December 2013, with GDR1 consisting of observations from the first 14 months (25 July 2014 to 16 September 2015) of the nominal 60 month mission. Gaia consists of two identical telescopes, each with a rectangular aperture of 1.45 m × 0.45 m, that simultaneously point in directions separated by 106.5 degrees, with beams folded to a common focal plane. As a result of the asymmetric focal plane with ratio 3:1, the PSF is also asymmetric with the same ratio and has a measured median FWHM of 103 mas <cit.> in the scanning axis direction. Gaia is located at L2 with a rotational sky-scanning orbit of period 6 hours and an orbital precession period of 63 days. Over the course of Gaia's 5 year mission it will measure each source ∼ 70 times.

§.§ Gaia focal plane The Gaia focal plane consists of 106 CCDs with 4500 pixels in the along-scan (AL) direction and 1966 pixels in the across-scan (AC) direction. Each pixel is rectangular in the ratio 1:3, similar to the PSF major and minor axes, with size 10×30 microns corresponding to 59×177 mas on the sky, i.e. ∼ 2 pixels Nyquist sampling of the PSF FWHM. There are 14 sky-mapper (SM) CCDs and 62 astrometric field (AF) CCDs aligned in 7 rows in the AL direction and 9 columns in the AC direction, with the middle CCD of the 9th AF column (AF9) assigned to one of the two focus wave front sensors (WFS). The SM CCDs are in two columns (SM1 and SM2) with baffling such that each SM can only view a single Gaia telescope, whereas the AF CCDs view the two fields simultaneously. The integration time per CCD is 4.42 seconds, but it is impossible to download all the pixels to Earth and hence astrometric measurements are made via windowed regions for sources detected in the SM CCDs. For sources brighter than G=13, 2D (AF) windows of 12 AC pixels (2.12 arcsec) and 18 AL pixels (1.06 arcsec) are transmitted to ground <cit.>. For all fainter sources (G>13) the windows are binned in the AC direction during readout to produce a 1D sample. For the faint sources brighter than G=16 (i.e. 13<G<16) there are 18 AL samples (Long 1D). For sources fainter than G=16 there are 12 AL samples (Short 1D) with a total length of 0.71 arcsec.
Figure <ref> shows the 12×12 pixel 2D window for a faint source, for which a 1D data stream of 12 samples is downloaded with a sampling resolution of 0.059 arcsec in the AL direction and 2.12 arcsec in the AC direction. In principle, each component of a faint binary with separation greater than 2.12 arcsec will always be isolated in its own window. Since each source will be observed multiple times with different scan directions during the 5 year Gaia mission, the 1D data can be used to create a 2D reconstruction <cit.>. At the current time the individual 1D scans have not been released by the Gaia project.

§.§ Catalogue creation Source measurements are based on modelling single point sources in these 2D windowed regions of pixels or 1D samples. When close pairs are encountered in the same windowed region, depending on the scanning direction and relative orientation of the pair, the fainter object is given a truncated window of pixels, which has not been processed for GDR1 <cit.>. Therefore there is a significant lack of detections of the fainter companion in close binaries in GDR1.

The main astrometric source catalogue for GDR1 contains over one billion sources (see <cit.> for details). The fluxes are measured in a wide optical band (hereafter G-band) as shown in Figure <ref>. 37 million sources were removed before GDR1 was published, when they were separated from another source by less than 5 times the combined astrometric positional uncertainty. This was due to the cataloguing of known objects against an initial Gaia source list, which catalogued many objects twice. Further objects were filtered from GDR1 if sources were observed fewer than 5 times (5 focal plane transits), or if their astrometric excess noise and positional standard uncertainty were greater than 20 mas and 100 mas respectively. Finally, sources were removed if they had fewer than 11 G-band measurements (CCD transits in the astrometric part of the focal plane). See <cit.> for a full explanation of the data-processing and catalogue creation. The limiting magnitude of Gaia is ∼ 20.7 in the Vega magnitude system.

§.§ Close pairs in Gaia Where Gaia can provide the most use for lensed quasar searches is in determining whether a source is composed of multiple stellar objects. In ground-based optical imaging surveys, the typically much larger FWHM causes many contaminant systems to resemble lensed quasars. These contaminants include single quasars with bright host galaxies, quasars or stars blended with galaxies, projected systems, and starburst galaxies with quasar-like colours. Because of the truncated windows given to fainter companions around brighter neighbours by Gaia, there is a limitation in Gaia's handling of close pairs <cit.>. We demonstrate how this issue affects lensed quasars in Figure <ref>. The blue histogram shows the distribution of separations of detected Gaia pairs in a sparse field out of the galactic plane (l ∼ 173°, b ∼ 67°), i.e. a field in which Gaia is able to read out all objects within the magnitude threshold. The green line is the expected distribution from randomly positioned sources on the sky, matched to a large-separation asymptote. We expect an overdensity of close-separation objects relative to this line, associated with stellar binaries, bright star-forming mergers and even lensed quasars; however, the Gaia catalogue significantly lacks the detection of the second object below 2.5 arcseconds. The pairs with very small Gaia separations are possibly duplicate sources that could not be filtered from GDR1 <cit.>.
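For intuition, the "randomly positioned sources" expectation (the green line) follows from the fact that, for a uniform field, the number of pairs grows linearly with separation, in proportion to the area of an annulus. A minimal sketch with an arbitrary toy source density, not the density of the field used above:

```python
# Toy illustration of the pair-separation distribution expected for randomly
# positioned sources: counts rise linearly with separation (annulus area).
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
n_src, side = 45_000, 10800.0                  # ~5000 sources/deg^2 over 9 deg^2
xy = rng.uniform(0.0, side, size=(n_src, 2))   # positions in arcsec

tree = cKDTree(xy)
pairs = np.array(list(tree.query_pairs(10.0)))  # all pairs closer than 10 arcsec
sep = np.hypot(*(xy[pairs[:, 0]] - xy[pairs[:, 1]]).T)

counts, edges = np.histogram(sep, bins=np.arange(0.0, 10.5, 0.5))
dens = n_src / side**2
# Expected unordered pairs per annulus bin (ignoring field-edge effects)
expected = 0.5 * n_src * dens * np.pi * (edges[1:]**2 - edges[:-1]**2)
print(np.c_[counts, expected.round(1)])  # simulated vs analytic, per bin
```

Any deficit of observed pairs below this linear rise, as in the blue histogram, then points to incompleteness rather than a true absence of close companions.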
We also show the numbers of lensed quasars OM10 predicted across the whole sky with either the brightest object detectable by Gaia (approximately equal to an i-band magnitude < 20.7) or with at least two images detectable by Gaia, in bins of 0.1 arcsec separation. The OM10 catalogue is truncated at an image separation of 0.5 arcsec. This cut was made due to the difficulty in detection and characterisation of lenses at lower image separations; however, Gaia will likely be able to push past this limit in detection. <cit.> discuss the detection of smaller image separation lensed quasars in Gaia; however, in this paper we use the full OM10 catalogue for predicted numbers of lenses, since we are interested in those that can be characterised in ground-based imaging data. The area under the magenta curve of Figure <ref> shows that later Gaia catalogues, which will include secondary source detections in close binaries, should detect the multiple images of ∼ 900 lensed quasars, including ∼ 240 quadruply-imaged systems. This value is in agreement with other estimates based on Gaia's expected pre-launch capabilities <cit.>. The number of lensed quasars expected with only one image detected by Gaia is ∼ 1400, only ∼ 50 of which are quads. This very low quad fraction is due to the brightest two images of quads often having similarly magnified fluxes.

§ A SAMPLE OF SMALL-SEPARATION LENSED QUASARS At separations larger than 2 arcsec, the images of lensed quasars are deblended in ground-based surveys and Gaia cannot provide much further information, though it is still useful in some cases, for example in detecting quasar images around bright lensing galaxies, where the system might remain blended. To investigate how Gaia catalogues known close-separation lensed quasars, we compile a set of 49 lensed quasars in the SDSS footprint with image separations less than 2 arcsec, which are typically blended in SDSS. All lenses in the sample have at least one Gaia detection, but only 8 have multiple detections in GDR1, whereas 43 are expected to have further images brighter than the detection limit. Only one lens, PG1115+080, is detected as three separate images (though we note this is one of the larger-separation systems) and is deblended into 2 components in SDSS. These 8 systems are shown in Figure <ref> with Gaia detections overlaid on SDSS gri colour images.

Since many objects were removed as possible duplicates in GDR1, we check for duplicate removal of the non-catalogued images through the duplicated_source flag. Approximately 5 percent of objects in GDR1 have this flag. For the lensed quasars with single Gaia detections where multiple detections are expected, only 4 of 35 are flagged as duplicated sources, indicating that the further missing detection(s) in Gaia may be due to truncated windows being given to the missing images in certain scan directions. Since these windows have not yet been processed, not all quasar images have enough G-band measurements to be included in GDR1 <cit.>.

§ FINDING QUASAR LENSES IN GAIA Since small-separation lenses are much more common than the easily deblended larger-separation lenses (Figure <ref>), we now explore several ways that GDR1 can help identify these small-separation lensed quasars by considering our sample of known lenses. Firstly, we consider multiple Gaia detections corresponding to the multiple images of a lensed quasar.
In the case of single Gaia detections, we investigate the Gaia catalogue parameter of astrometric excess noise, which might hint at the presence of a lensing galaxy or further quasar images. Finally, we compare Gaia fluxes and positions to other datasets that blend lensed quasars into single objects, and which will naturally measure larger fluxes and different centroids from those given in the Gaia catalogue.

Throughout the rest of this paper we use spectroscopically confirmed objects from the SpecObjAll table from the twelfth data release of SDSS <cit.>, in which most spectroscopically confirmed quasars are from SDSS-III BOSS, but which also includes spectra from all previous SDSS data releases.

§.§ Multiple Gaia detections Our sample of small-separation lensed quasars shows that several objects are still separated into multiple components by Gaia. <cit.> have shown that the detection of both components of close pairs is more likely when they are of similar magnitude, since the primary object can be either of the pair in different focal plane transits, leading to both objects being catalogued in GDR1. We see this in the 8 lensed quasars with multiple detections in our sample (see Table <ref>). Therefore, even in GDR1 it is possible to use Gaia's resolution to deblend potential quasar lens candidates into multiple objects immediately. A common limitation to finding lensed quasars in ground-based imaging surveys is the lensing galaxy causing a possible contamination of the colour of the object, meaning conventional quasar colour classification will miss these objects (see <cit.> for a full discussion). However, given the prior that two point sources must be present, Gaia can help detect these bright lens galaxy systems, relying less on constraints from quasar colour-selection techniques.

§.§ Single Gaia detections As discussed in Section <ref>, most bright lensed quasars will have one detection in GDR1. However, even when Gaia detects a single image of a lensed quasar, it provides useful information in its catalogue values of astrometric excess noise, position and flux for each detection. The latter two measurements are useful because we are able to search for systems with missing flux and/or a different centroid relative to that of a dataset capturing all the flux, e.g. from proper imaging, where the difference would be caused by Gaia not cataloguing all sources. That is, GDR1 effectively resolves out the flux from single images of lensed quasars. For a given object, a potential close-separation quasar lens or perhaps just a single quasar, we outline how to derive a synthesised Gaia magnitude from ground-based broad-band survey photometry, with which one can compare the Gaia magnitude. In the case of a large discrepancy, this could be accounted for by the presence of extra quasar images and a lensing galaxy, which the ground-based imaging has blended with the original Gaia detection. To determine the centroid difference we use SDSS positions, which have been recalibrated as in <cit.>. The detection of only one image of a lensed quasar in Gaia will not be limited to the first Gaia data release. Indeed, ∼ 1400 lenses (Section <ref>) will have just one component bright enough to be detected by Gaia but have other images fainter than the magnitude threshold, as can be seen by the difference between the two lensed quasar population histograms in Figure <ref>.

§.§.§ Astrometric Excess Noise We initially consider the catalogue value of astrometric excess noise (hereafter AEN).
This parameter represents the scatter in the astrometric model for a single object <cit.>. A large AEN might indicate the presence of uncatalogued, nearby images in lenses or perhaps a bright lensing galaxy. We match Gaia observations to the SDSS spectroscopic catalogues for stars, galaxies and quasars within 0.5 arcsec. We further require no flags from the spectral classification and recover 534172 stars, 286488 galaxies and 218980 quasars. We plot their magnitude and AEN distributions in Figure <ref>, splitting the quasar sample into those above and below redshift 0.3. This plot reflects the method to separate galaxies from stars using AEN, as used in <cit.>. From the overlaid positions corresponding to lensed quasars, AEN alone cannot sufficiently identify most lensed quasars; however, it is a useful parameter to consider for the brightest candidates. Therefore we turn our attention to using other datasets and Gaia catalogue parameters to indicate missing components from the catalogue.

§.§.§ Flux deficit Gaia's 0.1 arcsec PSF FWHM means that it can completely resolve the images of any useful lensed quasar (those that have large enough image separation to be spectroscopically confirmed). Therefore the Gaia flux measurement is truly an indication of the magnitude of a single image of the system. Comparing this value to a ground-based flux measurement of the entire blended system may allow the detection of the presence of further components in the system. Firstly we must determine a relationship between the Gaia G-band and ground-based photometric magnitudes. To derive an empirical G-band fit for quasars, we need an isolated sample of quasars with Gaia detections. We use the extensive spectroscopically confirmed quasar catalogues from SDSS, which we match to the Gaia secondary source catalogue. On attempting to fit an empirical relation between the SDSS ugriz magnitudes and Gaia, the variability over the ∼ 10 year mean epoch difference is apparent (2015 for Gaia and 2003 for SDSS). Therefore we use the much better matched mean epoch of Pan-STARRS <cit.>, 2013. For the photometric fit we use the Pan-STARRS PSF magnitudes to ensure that we are comparing to the flux from just the quasar, since this is what Gaia is measuring. We apply the following cuts to the combined spectroscopic and photometric catalogues:

* spectroscopic class = QSO
* z < 5
* zerr < 0.05
* zwarning = 0
* no Gaia, SDSS or Pan-STARRS neighbours within 5 arcsec
* type = 6 (PSF morphology in SDSS)
* Pan-STARRS r_PSF - r_Kron < 0.05
* Gaia-SDSS centroid distance < 0.1 arcsec
* Gaia-SDSS proper motion < 5 mas yr^-1

Though the SDSS PSF morphology removes the majority of objects with bright hosts or nearby neighbours, we also apply an r_PSF - r_Kron magnitude cut to the deeper Pan-STARRS data to remove further extended objects. Finally, once an empirical fit to the G-band photometry is made, we remove those with the most inconsistent flux when compared to Gaia (5σ outliers in a given magnitude bin) and repeat the fit. We are left with a catalogue of 117599 Gaia-matched isolated quasars. We are now able to fit an empirical G-band magnitude from the Pan-STARRS photometry, via a simple linear combination of g, r, i and z:

G_synth = α + r + β(g-r) + γ(r-i) + δ(i-z)
σ^2 = σ_int^2 + σ_G^2 + σ_Gsynth^2
log P = -(G - G_synth)^2/(2σ^2) - log(2πσ^2)/2

This is well-motivated since the nominal Gaia bandpass roughly overlaps these optical filters, as shown in Figure <ref>.
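The fit itself is a short likelihood maximisation. The sketch below runs it on synthetic stand-in data, since the actual sample, cuts and photometric uncertainties are those described above; all numbers here are purely illustrative. The split at a g-r cut, introduced in the next paragraph, simply repeats this fit on each side of the cut.

```python
# Minimal maximum-likelihood fit of the G_synth relation above, demonstrated
# on synthetic stand-in data (real inputs would be the Pan-STARRS/Gaia sample).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n = 10_000
r = rng.uniform(17.0, 20.5, n)                 # fake quasar magnitudes/colours
g = r + rng.normal(0.2, 0.15, n)
i = r - rng.normal(0.1, 0.10, n)
z = i - rng.normal(0.05, 0.10, n)
sig_G, sig_synth = 0.01*np.ones(n), 0.02*np.ones(n)   # photometric errors
G = -0.05 + r + 0.2*(g - r) - 0.5*(r - i) + rng.normal(0.0, 0.15, n)

def neg_log_like(theta):
    alpha, beta, gamma, delta, log_sig_int = theta
    G_synth = alpha + r + beta*(g - r) + gamma*(r - i) + delta*(i - z)
    sig2 = np.exp(2.0*log_sig_int) + sig_G**2 + sig_synth**2
    return 0.5*np.sum((G - G_synth)**2/sig2 + np.log(2.0*np.pi*sig2))

res = minimize(neg_log_like, x0=[0.0, 0.0, 0.0, 0.0, np.log(0.2)],
               method="Nelder-Mead")
print(res.x[:4].round(3), "sigma_int =", np.exp(res.x[4]).round(3))
```

Parameterising the intrinsic dispersion as log σ_int keeps it positive during the optimisation; this is a convenience of the sketch, not a statement about the authors' implementation.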
To include the high-redshift quasars in the sample we allow for their g-band dropout by fitting two separate formulae on either side of some g-r cut. We optimise using the log likelihood of equation <ref>, determining the parameters α, β, γ, δ, σ_int and the g-r cutoff. On applying this method to SDSS DR9 photometry we find an intrinsic dispersion of 0.263 mag. The fit to Pan-STARRS gives an improved intrinsic dispersion of 0.151 mag, reflecting the increased brightness variation of quasars over larger timescales. The best fit formulae using Pan-STARRS photometry[The best fit formulae using SDSS photometry are: for g-r < 0.24, G = 0.029 + r + 0.139(g-r) - 0.641(r-i) - 0.496(i-z); for g-r > 0.24, G = -0.038 + r + 0.138(g-r) - 0.400(r-i) - 0.163(i-z).] are:

g-r < 0.11: G = -0.033 + r + 0.131(g-r) - 0.660(r-i) - 0.162(i-z)
g-r > 0.11: G = -0.061 + r + 0.217(g-r) - 0.548(r-i) - 0.013(i-z)

Figure <ref> shows the residual magnitudes against redshift. The non-uniform redshift distribution of the quasar sample does not significantly bias the fit, given the very similar dispersion at all redshifts. Furthermore, the residuals are essentially Gaussian-distributed about zero (Figure <ref>). Estimates of the median variability for this sample between Gaia and Pan-STARRS or SDSS are ∼0.1 or ∼0.15 mag respectively <cit.>. Our dispersions are larger than these estimates because of the error from the simple model fit to all quasar spectral shapes and redshifts. We apply these empirical G-band fits to the sample of lensed quasars and compare them to Gaia measurements in Section <ref>.

§.§.§ Positional Offsets Quasar lenses are often discovered by looking for extended quasar candidates. This is done by performing a cut in a PSF magnitude minus a model magnitude (fit to a galaxy profile), where it is expected that a PSF magnitude will not capture all the flux of an extended object. However, this technique may fail to find all lensed quasars, especially those with small separations. To determine the limitations of this method, we match all Gaia pairs (two catalogue detections) between 0.5 and 1.0 arcsec separation to SDSS and compute r_PSF - r_model <cit.>. We keep all pairs with brightness differences within 1 magnitude (based on their Gaia G-band magnitudes), as is typical for lensed quasar image fluxes, and remove objects obviously made up from non-point sources. We find a spread in r_PSF - r_model of 0.076 ± 0.049 for objects with image separation 0.5-0.6 arcsec and 0.215 ± 0.117 for those with image separation 0.9-1.0 arcsec. To understand these values we compare to single PSF objects (isolated Gaia detections) and find a value of 0.003 ± 0.042. Therefore, given the overlap in the r_PSF - r_model values for single stars and pairs at separations of 0.5 arcsec, the r_PSF - r_model magnitude comparison for classification of extended/point source objects will often not be able to distinguish between relatively close binaries and single stellar objects. Furthermore, the sample of pairs we have used here is highly biased to objects of similar magnitude (as explained in <cit.>, and due to our magnitude difference cut), whereas objects with large flux ratios will tend to appear even more PSF-like, implying a conservative upper bound on our r_PSF - r_model values for pairs. Given this discussion, it is clear that the PSF classification of SDSS (r_PSF - r_model < 0.145) will class many binaries and small-separation lensed quasars as PSFs.
This is perhaps less likely for quad lensed quasars, in which there are four images; however, the image separations of the brightest components can often be much smaller than the Einstein radius, leading to a more PSF-like appearance. Therefore we again turn to Gaia's excellent resolution to define a parameter indicative of multiple system components. When a system is composed of two close stellar components, Gaia will catalogue the centroid of one of the two components with high precision and accuracy. However, the same system in ground-based imaging will be blended and the catalogued centroid will lie between the two objects, offset from the Gaia centroid. This offset can easily be calculated and should be large for crowded systems like lensed quasars. <cit.> suggest searching for an offset between optical and radio positions for identifying lensed quasars in radio surveys. Our method is similar; however, we rely on both the galaxy and the uncatalogued quasar images to cause the offset, and we only require optical data. While this offset might not be significant for doubles with large flux ratios, we always expect it to be apparent for quadruply-lensed quasars, since flux ratios between the two brightest images are approximately unity.

To robustly compare the difference in Gaia and SDSS positions, we must understand the minimum uncertainty expected from fitting a single PSF. Using a Gaussian PSF defined only by the FWHM and assuming at least critical sampling, any unbiased estimator is limited to a centroid positional error of ∼ (FWHM)/(signal to noise) <cit.>. The FWHM, signal and noise are inferred from the SDSS catalogues in the r-band, since the standard SDSS positions are derived from the r-band. In order to look for sub-pixel offsets we must have very accurate SDSS astrometry. We therefore use the Gaia-based calibration of SDSS positions explained in <cit.>. Using this catalogue, the median Gaia-SDSS offset for quasars brighter than 19th magnitude is 0.02 arcsec. We use these recalibrated SDSS positions to define an offset parameter (OP):

OP = (distance between Gaia and SDSS centroids) / (SDSS PSF centroid uncertainty)

The spread of this offset parameter, in combination with the flux deficit from Section <ref>, can be seen for lenses and SDSS spectroscopically confirmed quasars in Figure <ref> (see Section <ref> for details). This plot confirms the intuition that lensed quasars should have centroid and flux differences between Gaia and ground-based survey measurements. We use this plot in the next section to search for new lens candidates.

§ LENS CANDIDATE SELECTION GDR1 gives several clues to a source's nature, but cannot be used to distinguish between quasars and stars alone. Therefore we must start from a quasar candidate catalogue to find lensed quasars. The results of a search for lensed quasars using Gaia detections in a variety of quasar catalogues are presented in a companion paper (Lemon et al. in prep.). However, to better understand the limitations of our methods, we will only consider the spectroscopically confirmed quasars of SDSS as a starting catalogue in this paper. The extra information from the spectra allows us to identify contaminant systems. Many of the SDSS spectroscopically confirmed quasars have already been targeted by extensive lens follow-up programmes, namely SQLS and the BOSS quasar lens search, both of which have confirmed the identity of many lenses and other contaminant close-separation systems <cit.>.
However, these campaigns focussed on lenses with image separations larger than 1 arcsec, so we expect lenses still to exist in this catalogue at smaller separations. We restrict the redshift range of the SDSS quasars to 0.6 < z < 5 to avoid low-redshift host galaxies creating outliers in positional and flux offsets, and to avoid objects incorrectly identified as high-redshift quasars. Upon matching this set of quasars to Gaia detections within 3 arcsec, we find 199376 objects, which we take as our catalogue for the following searches.

§.§ Multiple detections We search the quasar catalogue for multiple Gaia detections within 1.5 arcsec to target the lenses that might have been missed by previous searches. We find that only 74 systems from our quasar catalogue have multiple detections in Gaia up to 1.5 arcsec, and visually inspect their Pan-STARRS images. 7 of the 74 systems are the lensed quasars in our original sample (Section <ref>). Another 7 objects were falsely classified as high-redshift quasars. Approximately half of these 74 systems have very small separations (< 0.25 arcsec) and appear as single PSFs in Pan-STARRS. The remaining higher-separation candidates are either obvious quasar and star pairs (indicated by large colour differences or stellar spectral features in the quasar spectra) or have already been followed up by SQLS and identified to be binary quasars or potential small-separation lenses.

§.§ Single detections We calculate the flux deficit and offset parameter for our quasar catalogue and lens sample. The Pan-STARRS magnitudes are used to calculate a synthetic G-band magnitude as described in Section <ref>. However, since we want the total flux from the ground-based survey instead of the PSF magnitudes we used to fit the relation, we use the Kron magnitudes. The parameter values are plotted in Figure <ref> for both the quasar and lensed quasar samples. As expected, the lenses populate the area in which Gaia and SDSS have disagreeing centroids, and where Gaia is missing flux relative to Pan-STARRS. We note that the outlier with the smallest offset is SDSSJ0737+4825 <cit.>. This system is a faint double with a large flux ratio (i-band magnitudes of 18.28 and 20.58, a flux ratio of ∼ 8.5). Therefore, as we expected, it does not have a large statistical offset to the Gaia detection. Furthermore, the other lenses that lie towards the bottom left of the plot are doubles. All quadruply-imaged lenses are well-separated from the single quasars, since the extra images cause larger flux deficits and more robust offsets, as already predicted in Section <ref>.

Based on the offset parameters (OP) and flux deficits (FD), G - G_synth, of our lens sample (Figure <ref>), we search for possible new lenses in our quasar catalogue by inspecting quasars with similar parameter values to the known lenses. We define a region in Figure <ref> which clearly retains the majority of lenses while including very few of the main spectroscopic sample. The region is defined as log(OP) > 0.25, FD > -0.05 and FD + 0.86 log(OP) > 0.914, and contains 362 objects in the quasar catalogue (from the original sample of 199376). We find that about 10 percent of these objects are associated with blue stars with featureless spectra that the SDSS pipeline has classified as quasars without any spectrum-based warning. These objects have large OP and FD values, perhaps due to proper motions and the empirical G-band values being based on a fit to quasars.
After removing these from our sample by requiring a WISE detection <cit.>, we find 319 possibly-extended candidates, of which 63 have Canada France Hawaii Telescope (CFHT) archival data or Hyper Suprime-Cam (HSC, <cit.>) Survey data, which we inspect. Many do not show distinct components, either due to being outliers from the single quasar population (e.g. highly-variable quasars between Pan-STARRS and Gaia measurements) or having component separations small enough to become blended in the ground-based imaging.

We find 4 objects with clearly resolved components that have similar colours and do not show obvious contaminant spectral features (e.g. stellar absorption lines). These are shown in Figure <ref> and several properties for the systems are listed in Table <ref>. Examples of the contaminant systems that can be distinguished through their spectra or colour images are shown in Figure <ref> and their properties are also included in Table <ref>. Each of these systems was classed as a contaminant because of a strong colour gradient and, in the case of J0112+1512, the presence of an extended source. While we might be seeing lensing galaxies, no other quasar image is apparent and such large colour differences are unlikely between quasar images. These detections further demonstrate the effectiveness of our method in selecting close-separation pairs.

§ CONCLUSIONS Given the sky coverage, depth and excellent resolution of Gaia's first data release, it is a prime dataset to use for lensed quasar searches. Starting from a quasar candidate catalogue, the Gaia source catalogue can be searched for multiple detections for each candidate, quickly identifying likely lensed quasars. This method will identify ∼900 lensed quasars (∼240 quads) with image separations above 0.5 arcsec across the whole sky <cit.>. We perform this search in a variety of photometric quasar catalogues and present the results in a companion paper (Lemon et al. in prep).

However, GDR1 has often catalogued only one component of lensed quasars, in particular for systems with small image separations <cit.>. It is at these separations that we would benefit most from Gaia, since ground-based imaging is unable to resolve the separate images of a lensed quasar. Therefore we have developed a method to exploit a single Gaia detection to find the population of small-separation lensed quasars. This method relies on Gaia effectively resolving out the flux from just one image of a lensed quasar, whereas ground-based observations (from SDSS and Pan-STARRS) blend the components together. For a sample of 49 known small-separation lensed quasars, we demonstrate that the Pan-STARRS flux is larger than the Gaia flux, verifying the idea that Gaia is only measuring a single image. We also show that the offset between Gaia and SDSS positions for our sample of lensed quasars is significant, because the Gaia centroid lies on top of the detected image, whereas in SDSS it is at the luminosity-weighted centroid of the system.

By performing the same flux and centroid difference measurements on spectroscopically confirmed SDSS quasars, we are able to search for new lensed quasars. Inspecting better-seeing data of the systems with the largest flux and centroid offsets, we find 4 new sub-arcsecond lensed quasar candidates with resolved components in HSC or CFHT data. At such small separations, projected systems (e.g.
quasar+star) are less common, and so lensed quasars selected in this way are less contaminated.

As future Gaia data releases improve the completeness of secondary source detection in close pairs, multiple Gaia detections will become an easily-implemented method to find lensed quasar candidates. However, our method of using single Gaia detections will still be a useful tool to discover lensed quasars that have only one image bright enough to be detected by Gaia. This will include ∼1400 lensed quasars. Furthermore, as wide-field optical surveys become deeper, the centroid offset will become a more robust statistic for differentiating between single quasars and lensed quasars. The task will then be to remove contaminant systems such as quasar and star alignments. Gaia's long temporal baseline and repeated measurements will help select systems with similar component variability, and its blue and red photometer instruments can ensure components have a similar colour.

Finally, we note that these methods are not restricted to lensed quasar searches, but should be useful for searches for stellar binary companions, or to remove contaminants from samples of isolated quasars. These techniques demonstrate how Gaia's excellent resolution provides an important complement to future deep ground-based optical surveys.

§ ACKNOWLEDGEMENTS We thank Diana Harrison and Lindsay Oldham for useful discussions and comments about the paper. CAL would like to thank the Science and Technology Facilities Council (STFC) for their studentship. MWA also acknowledges support from the STFC in the form of an Ernest Rutherford Fellowship. This paper includes data based on observations obtained at the Canada-France-Hawaii Telescope (CFHT) which is operated by the National Research Council of Canada, the Institut National des Sciences de l'Univers of the Centre National de la Recherche Scientifique of France, and the University of Hawaii.
http://arxiv.org/abs/1709.08976v1
{ "authors": [ "Cameron A. Lemon", "Matthew W. Auger", "Richard G. McMahon", "Sergey E. Koposov" ], "categories": [ "astro-ph.GA" ], "primary_category": "astro-ph.GA", "published": "20170926123802", "title": "Gravitationally Lensed Quasars in Gaia: I. Resolving Small-Separation Lenses" }
http://arxiv.org/abs/1709.08993v1
{ "authors": [ "Miklós Antal Werner", "Eugene Demler", "Alain Aspect", "Gergely Zaránd" ], "categories": [ "cond-mat.dis-nn", "cond-mat.quant-gas" ], "primary_category": "cond-mat.dis-nn", "published": "20170926131034", "title": "Selective final state spectroscopy and multifractality in two-component ultracold Bose-Einstein condensates: a numerical study" }
Identification of hedonic equilibrium and nonseparable simultaneous equations

MIT, NYU and Sciences-Po, Penn State, University of Alberta

This paper derives conditions under which preferences and technology are nonparametrically identified in hedonic equilibrium models, where products are differentiated along more than one dimension and agents are characterized by several dimensions of unobserved heterogeneity. With products differentiated along a quality index and agents characterized by scalar unobserved heterogeneity, single crossing conditions on preferences and technology provide identifying restrictions in <cit.> and <cit.>. We develop similar shape restrictions in the multi-attribute case. These shape restrictions, which are based on optimal transport theory and generalized convexity, allow us to identify preferences for goods differentiated along multiple dimensions, from the observation of a single market. We thereby derive identification results for nonseparable simultaneous equations and multi-attribute hedonic equilibrium models with (possibly) multiple dimensions of unobserved heterogeneity. One of our results is a proof of absolute continuity of the distribution of endogenously traded qualities, which is of independent interest.

Victor Chernozhukov, Alfred Galichon, Marc Henry and Brendan Pass

First version: 9 February 2014. The present version is of December 30, 2023. The authors thank Guillaume Carlier, Andrew Chesher, Pierre-André Chiappori, Hidehiko Ichimura, Arthur Lewbel, Rosa Matzkin and multiple seminar audiences for stimulating discussions. They also thank the editor James Heckman and four referees for insightful comments. Chernozhukov's research has received funding from the NSF. Galichon's research has received funding from ERC grants CoG-866274 and 313699, from NSF grant DMS-1716489, and from FiME (Laboratoire de Finance des Marchés de l'Energie). Henry's research has received funding from SSHRC Grants 435-2013-0292 and NSERC Grant 356491-2013. Pass' research has received support from a University of Alberta start-up grant and NSERC grants 412779-2012 and 04658-2018.
Keywords: Hedonic equilibrium, multivariate quantile identification, multidimensional unobserved heterogeneity, cyclical monotonicity, optimal transport.

JEL subject classification: C14, C39, C61, C78

§ INTRODUCTION Hedonic models were initially introduced by <cit.> to price highly differentiated goods in terms of their attributes. The vast subsequent literature on hedonic regressions of prices on attributes aimed at measuring the marginal willingness of consumers to pay for the attributes of the good they acquired, or the marginal willingness of workers to accept compensation for the attributes of their occupations. When unobservable taste for attributes drives the consumers' choices, however, a simple regression of price on attributes cannot inform us about the willingness to pay for quality levels different from the ones characterizing the good actually acquired. Nor can it inform us about the willingness to pay for characteristics of the good consumers would acquire under counterfactual market conditions, with different endowments, preferences and technology.

The willingness to pay for counterfactual transactions, together with structural parameters of preferences and technology, can be recovered with a general equilibrium theory of hedonic models, dating back to <cit.> and <cit.> (see <cit.> for an account of their respective contributions). The common underlying framework, which we also adopt here, is that of a perfectly competitive market with heterogeneous buyers and sellers and traded product quality bundles and prices that arise endogenously in equilibrium[When preferences are quasi-linear in price and under mild semicontinuity assumptions, <cit.> and <cit.> show that equilibria exist, in the form of a joint distribution of product and consumer types (who consumes what), a joint distribution of product and producer types (who produces what) and a price schedule such that markets clear for each traded product.]. <cit.> proposes a two-step procedure to estimate general hedonic models and thereby analyze general equilibrium effects of changes in buyer-seller compositions, preferences and technology on qualities traded at equilibrium and their price (see <cit.>). The first step is a regression of prices on attributes, and the second is a simultaneous equations estimation of the demand and supply system with marginal prices estimated in the first step as endogenous variables. <cit.> and <cit.> point out that changes in consumers' unobserved taste for attributes would lead them to source goods from different suppliers, so that exclusion restrictions from the supply side cannot be justified[See also <cit.>, <cit.> and <cit.>.].
The literature on recovering marginal willingness to pay for counterfactual transactions has since followed three strategies: relying on multiple markets across space or time (see <cit.>, <cit.>, <cit.>, <cit.> and many references in <cit.>); relying on specific functional forms for utility (<cit.>, and references therein); or assuming consumers care about a single dimension of good heterogeneity, via a quality index (see, e.g., <cit.> and <cit.>). Each of these strategies has drawbacks. Multimarket strategies rely on the assumption of no leakage between markets and no consumption substitution across time and space. They also rely on the assumption that preferences and the distribution of preference types are stable across markets, so that variation comes from the supply side and preferences are identified, but not technology (or vice versa with symmetric assumptions, see <cit.>). Identification strategies based on specific parameterizations of preferences and technology cannot distinguish features of the specification that are crucial to identification from features that are convenient approximations. Identification proofs must also be repeated for each new parameterization, the suitability of which depends on the application (see the discussion in <cit.>). Finally, it is important in many applications to account for heterogeneity in consumers' (or workers') relative valuations of different attributes, and hence move beyond the case of a scalar index of attributes.

This paper proposes an identification strategy based on a single market, where agents have heterogeneous relative valuations for different attributes, without relying on a specific parametric specification of preferences and technology. A leading case in the class of specifications we entertain is U(x,ε,z)=U̅(x,z)+z'ε, where U is the valuation of the bundle of attributes z as a function of the vectors of observable and unobservable consumer characteristics x and ε respectively. We show that for each choice of distribution of types ε, the function U̅ is recovered nonparametrically (and the same result holds for the supply side). Our contribution is a direct generalization of the main identification strategy in <cit.>. In the latter, under a single crossing condition[Also known as a Spence-Mirrlees or supermodularity condition.] on the utility function, the first order condition of the consumer problem yields an increasing demand function, i.e., quality demanded by the consumer as an increasing function of her unobserved type, interpreted as unobserved taste for quality. Assortative matching guarantees uniqueness of demand, as the unique increasing function that maps the distribution of unobserved taste for quality, which is specified a priori, to the distribution of qualities, which is observed. Hence demand is identified as a quantile function, as in <cit.>. Identification, therefore, is driven by a shape restriction on the utility function. The main achievement of this paper is to show that a suitable multivariate extension (called the twist condition) of the single crossing shape restriction delivers the same identification result in hedonic equilibrium with multiple good quality dimensions. Heuristically, the proof mirrors <cit.> in that it first involves showing identification of inverse demand, which then allows the identification of marginal utility from the first order condition of the consumer's program.
The identification of inverse demand, i.e., a single valued mapping from a vector of good qualities to a vector of unobserved consumer types, involves the twist shape restriction and cyclical monotonicity of the hedonic equilibrium solution. The recovery of marginal utility from the first order condition of the consumer's program is involved, because differentiability of the hedonic equilibrium price function is not guaranteed. Known conditions for differentiability of transport potentials in general optimal transport problems, due to <cit.> and which would yield differentiability of the hedonic equilibrium price in our context, are very strong and rule out many simple forms of the matching surplus (see Chapter 12 of <cit.>). We are able to bypass the <cit.> conditions using the special structure of the hedonic equilibrium, and to show approximate differentiability (Definition 10.2 page 218 of <cit.>) of the price function, for which we need absolute continuity of the distribution of good qualities traded at equilibrium. To that end, we provide a set of mild conditions on the primitives under which the endogenous distribution of qualities traded at equilibrium is absolutely continuous. The proof of absolute continuity of the distribution of qualities traded at equilibrium is based on an argument from <cit.>, also applied in <cit.>[<cit.> and <cit.> focus on quadratic distance cost, i.e., ζ(x,ε,z)=-d^2(z,ε) in the notation of Assumption <ref>, and work in more exotic geometric spaces.]. An important special case of our main identification theorem is the case where the consumer's utility depends on consumer unobserved heterogeneity ε only through the index z'ε, where z is the vector of good qualities. This case has the appealing interpretation that each dimension of unobservable taste is associated with a good quality dimension and the appealing feature that marginal utility is characterized as the solution of a convex program. However, choosing the dimension of unobserved consumer heterogeneity to be equal to the dimension of the vector of good qualities is a somewhat arbitrary modelling choice, and we provide an extension of our main identification theorem to cases where the dimension of unobserved consumer heterogeneity is lower, including a model with scalar unobserved heterogeneity. We derive a local identification result under mild conditions, but for global identification, we need a shape restriction on the endogenous price function, for which we know of no sufficient conditions on primitives. Another restrictive aspect of our main result is the necessary normalization of the distribution of unobserved heterogeneity when identifying primitives from a single market. We provide some relaxation of this constraint when data from multiple markets is available, but the results are still fragmentary.

The analysis of identification of inverse demand in hedonic equilibrium reveals that inverse demand satisfies a multivariate notion of monotonicity, whose definition depends on the form of the utility function that is maximized. In the univariate case, this notion reduces to monotonicity of inverse demand. In the case where the consumer's utility depends on consumer unobserved heterogeneity ε only through the index z'ε, where z is the vector of good qualities, inverse demand is the gradient of a convex function.
We show that this notion of multivariate monotonicity is a suitable shape restriction to identify nonseparable simultaneous equations models, generalizing the quantile identification method of <cit.> and complementing results in <cit.>, where monotonicity is imposed equation by equation[Not all results in <cit.> and <cit.> require normalization of the distribution of unobserved heterogeneity, as we do here.].

§.§.§ Closely related work On the identification of multi-attribute hedonic equilibrium models, <cit.> require marginal utility (resp. marginal product) to be additively separable in unobserved consumer (resp. producer) characteristic. <cit.> show that demand is nonparametrically identified under a single crossing condition and that various additional shape restrictions allow identification of preferences without additive separability (see also <cit.>). <cit.> emphasize the one-dimensional case but argue that their results can be extended to multivariate attributes under a separability assumption in the utility (without restricting the distribution of unobserved heterogeneity). Our paper directly follows <cit.> and generalizes the insight therein to allow for heterogeneity in response to different dimensions of amenities or qualities. However, the analyses in <cit.> and ours are non-nested. <cit.> considers several specifications with Barten scales that are outside the scope of our generalization. <cit.> (developed independently and concurrently) is the most closely related paper and complements our work. He achieves identification under an additive separability restriction, but without restricting the distribution of unobserved heterogeneity. He also imposes conditions from <cit.> to obtain differentiability of the price. <cit.> derive a matching formulation of hedonic models and thereby highlight the close relation between empirical strategies in matching markets and in hedonic markets. <cit.> have a section on identification (subsequent to this paper), where they use a similar strategy. Two main differences arise from the difference between matching and hedonic models. On the one hand, <cit.> need an extra step to account for the fact that in their matching model the price function is not observed. On the other hand, they do not have to worry about regularity of endogenous objects such as the price and distribution of goods in the hedonic equilibrium setting. <cit.> extend the work of <cit.> and identify preferences in marriage markets, where agents match on discrete characteristics, as the unique solution of an optimal transport problem, but unlike the present paper, they are restricted to the case with a discrete quality space. The strategy is extended to the set-valued case by <cit.>, who use subdifferential calculus to identify dynamic discrete choice problems. <cit.> use network flow techniques to identify discrete hedonic models.

On the identification of nonlinear simultaneous equations models, <cit.> uses equation by equation monotonicity in the one dimensional unobservables and exclusion restrictions. <cit.> consider transformation models, as in <cit.> (see also <cit.>). These strategies do not require normalization of the distribution of unobserved heterogeneity. <cit.> also (independently and in a very different context) use cyclical monotonicity for identification in panel discrete choice models. <cit.> and <cit.> propose a notion of multivariate quantile based on Brenier's Theorem.
<cit.> (contemporaneous with the present paper) propose a conditional version of the optimal transport quantiles of <cit.> and <cit.> and apply it to quantile regression, whereas <cit.> (also contemporaneous with this paper) apply optimal transport quantiles to the definition of statistical depth, ranks and signs. However, these papers do not consider identification. The present work is, to the best of our knowledge, the first to apply the notion of multivariate quantiles based on optimal transport results to the identification of simultaneous equations, thus providing a multivariate extension of <cit.>'s quantile identification idea.

§.§.§ Organization of the paper The remainder of the paper is organized as follows. Section <ref> sets out the hedonic equilibrium framework. Section <ref> gives a brief account of the methodology and main results on nonparametric identification of preferences in single attribute hedonic models, mostly drawn from <cit.> and <cit.>. Section <ref> shows how these results and the shape restrictions that drive them can be extended to the case of multiple attribute hedonic equilibrium markets. Section <ref> derives multivariate shape restrictions to identify nonseparable simultaneous equations models. The last section concludes. Proofs of the main results are relegated to the appendix, as are necessary background definitions and results on optimal transport theory and a list of our notational conventions.

§ HEDONIC EQUILIBRIUM AND THE IDENTIFICATION PROBLEM We consider a competitive environment, where consumers and producers trade a good or contract, fully characterized by its type or quality z. The set of feasible qualities Z⊆ℝ^d_z is assumed compact and given a priori, but the distribution of the qualities actually traded arises endogenously in the hedonic market equilibrium, as does their price schedule p(z). Producers are characterized by their type ỹ∈Ỹ⊆ℝ^d_ỹ and consumers by their type x̃∈X̃⊆ℝ^d_x̃. Type distributions P_x̃ on X̃ and P_ỹ on Ỹ are given exogenously, so that entry and exit are not modelled. Consumers and producers are price takers and maximize quasi-linear utility U(x̃,z)-p(z) and profit p(z)-C(ỹ,z) respectively. Utility U(x̃,z) (respectively cost C(ỹ,z)) is upper (respectively lower) semicontinuous and bounded. In addition, the set of qualities Z(x̃,ỹ) that maximize the joint surplus U(x̃,z)-C(ỹ,z) for each pair of types (x̃,ỹ) is assumed to have a measurable selection. Then, <cit.> and <cit.> show that an equilibrium exists in this market, in the form of a price function p on Z, a joint distribution P_x̃z on X̃× Z and P_ỹz on Ỹ× Z such that their marginals on Z coincide, so that the market clears for each traded quality z∈ Z. Uniqueness is not guaranteed; in particular, prices are not uniquely defined for non traded qualities in equilibrium. Purity is not guaranteed either: an equilibrium specifies a conditional distribution P_z|x̃ (respectively P_z|ỹ) of qualities consumed by type x̃ consumers (respectively produced by type ỹ producers). The quality traded by a given producer-consumer pair (x̃,ỹ) is not uniquely determined at equilibrium without additional assumptions.

<cit.> and <cit.> further show that a pure equilibrium exists and is unique, under the following additional assumptions. First, type distributions P_x̃ and P_ỹ are absolutely continuous. Second, gradients of utility and cost, ∇_x̃U(x̃,z) and ∇_ỹC(ỹ,z), exist and are injective as functions of quality z.
The latter condition, also known as the Twist Condition in the optimal transport literature, ensures that all consumers of a given type x̃ (respectively all producers of a given type ỹ) consume (respectively produce) the same quality z at equilibrium.

The identification problem consists in the recovery of structural features of preferences and technology from observation of traded qualities and their prices in a single market. The solution concept we impose in our identification analysis is the following feature of hedonic equilibrium, i.e., maximization of surplus generated by a trade.

EC[Equilibrium concept]The joint distribution γ of (X̃,Z,Ỹ) and the price function p form a hedonic equilibrium, i.e., they satisfy the following. The joint distribution γ has marginals P_x̃ and P_ỹ and for γ almost all (x̃,z,ỹ),U(x̃,z)-p(z) = max_z'∈ Z( U(x̃,z')-p(z')),p(z)-C(ỹ,z) = max_z'∈ Z( p(z')-C(ỹ,z')).In addition, observed qualities z∈ Z(x̃,ỹ), maximizing joint surplus U(x̃,z)-C(ỹ,z) for each x̃∈X̃ and ỹ∈Ỹ, lie in the interior of the set of feasible qualities Z and Z(x̃,ỹ) is assumed to have a measurable selection. The joint surplus U(x̃,z)-C(ỹ,z) is finite everywhere. We assume full participation in the market[ The possibility of non participation can be modelled by adding isolated points to the sets of types and renormalizing distributions accordingly (see Section 1.1 of <cit.> for details). ].

Given observability of prices and the fact that producer type ỹ (respectively consumer type x̃) does not enter into the utility function U(x̃,z) (respectively cost function C(ỹ,z)) directly, we may consider the consumer and producer problems separately and symmetrically (see <cit.>). We focus on the consumer problem and on identification of the utility function U(x̃,z). Under assumptions ensuring purity and uniqueness of equilibrium, the model predicts a deterministic choice of quality z for a given consumer type x̃. We do not impose such assumptions, but we need to account for heterogeneity in consumption patterns even in case of unique and pure equilibrium. Hence, we assume, as is customary, that consumer types x̃ are only partially observable to the analyst. We write x̃=(x,ε), where x∈ X⊆ℝ ^d_x is the observable part of the type vector, and ε∈ ℝ^d_ε is the unobservable part. We shall make a separability assumption that will allow us to specify constraints on the interaction between consumer unobservable type ε and good quality z in order to identify interactions between observable type x and good quality z.

H[Unobservable heterogeneity]Consumer type x̃ is composed of observable type x with distribution P_x on X⊆ℝ^d_x and unobservable type ε with a priori specified conditional distribution P_ε| x on ℝ^d_ε, with d_ε≤ d_z. The utility of consumers can be decomposed as U(x̃,z)=U̅(x,z)+ζ(x,ε,z), where the functional form of ζ is known, but that of U̅ is not[Despite the notation used, U̅ should not necessarily be interpreted as mean utility, since we allow for a general choice of ζ and P_ε| x. If this interpretation is desirable in a particular application, ζ and P_ε| x can be chosen in such a way that 𝔼[ζ(x,ε,z)| x]=0.].

The main primitive object of interest is the deterministic component of utility U̅(x,z). For convenience, we shall use the transformation V(x,z):=p(z)-U̅(x,z). The latter will be called the consumer's potential, in line with the optimal transport terminology. Since the price is assumed to be identified, identification of V is equivalent to identification of U̅.
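Before turning to identification, the equilibrium concept can be made concrete numerically. The sketch below is a minimal Python illustration of Assumption EC in a toy one-dimensional market: the functional forms of U and C, the type distributions and the quality grid are all hypothetical choices made for this example, not part of the model. With equally many sampled consumers and producers, an equilibrium matching is a surplus-maximizing assignment; equilibrium prices would correspond to dual variables of the underlying linear program, which are not computed here.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Toy one-dimensional hedonic market (all primitives are assumptions).
rng = np.random.default_rng(1)
n = 200
x = rng.uniform(1.0, 2.0, n)          # consumer types
y = rng.uniform(1.0, 2.0, n)          # producer types
z_grid = np.linspace(0.01, 3.0, 300)  # feasible qualities Z

def pair_surplus(xi, yj):
    """Maximized joint surplus U(x,z) - C(y,z) over the quality grid,
    with illustrative primitives U(x,z) = x*sqrt(z), C(y,z) = z^2/(2y)."""
    s = xi * np.sqrt(z_grid) - z_grid**2 / (2.0 * yj)
    k = s.argmax()
    return s[k], z_grid[k]

S = np.empty((n, n))   # surplus matrix S(x_i, y_j)
Z = np.empty((n, n))   # surplus-maximizing quality for each pair
for i in range(n):
    for j in range(n):
        S[i, j], Z[i, j] = pair_surplus(x[i], y[j])

# Equilibrium matching = assignment maximizing total surplus
# (and therefore cyclically monotone, as used in Section 3).
rows, cols = linear_sum_assignment(S, maximize=True)
traded = Z[rows, cols]  # empirical analogue of the distribution of traded z
print(traded.mean(), traded.std())
```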
To achieve identification, we require fixing a choice of function ζ: a leading example, discussed in Section <ref>, is the case ζ(x,ε,z)=z'ε. Identification also requires fixing the conditional distribution P_ε| x of unobserved heterogeneity. This corresponds to the normalization of the distribution of scalar unobservable utility in existing quantile identification strategies. Discrete choice models also typically rely on a fixed distribution for unobservable heterogeneity (generally extreme valued). The requirements to fix both ζ and P_ε| x will be relaxed to some extent in Section <ref>, which entertains the possibility of further identification power using information from multiple markets.

§ SINGLE MARKET IDENTIFICATION WITH SCALAR ATTRIBUTE In this section, we recall and reformulate results of <cit.> on identification of single attribute hedonic models. Suppose, for the purpose of this section, that d_z=d_ε=1, so that unobserved heterogeneity is scalar, as is the quality dimension. Suppose also that ζ is twice continuously differentiable in z and ε.

R[Regularity of preferences and technology]The functions U̅(x,z), C(ỹ,z) and ζ(x,ε,z) are twice continuously differentiable with respect to ε and z.

Suppose further (for ease of exposition) that V is twice continuously differentiable in z. The main identifying assumption is a shape restriction on utility called single crossing, Spence-Mirrlees or supermodularity, depending on the context.

S1[Spence-Mirrlees]We have d_z=1 and ζ_ε z(x,ε,z)>0 for all x,ε,z.

The first order condition of the consumer problem yieldsζ_z(x,ε,z)=V_z(x,z),which, under Assumption <ref>, implicitly defines an inverse demand function z↦ε(x,z), which specifies which unobserved type consumes quality z. Combining the second order condition ζ_zz(x,ε,z)<V_zz(x,z) and further differentiation of (<ref>), i.e., ζ_zz(x,ε,z)+ζ_ε z(x,ε,z)ε_z(x,z)=V_zz(x,z), yieldsε_z(x,z)=(V_zz(x,z)-ζ_zz(x,ε,z))/ζ_ε z(x,ε,z)>0.Hence the inverse demand is increasing and is therefore identified as the unique increasing function that maps the distribution P_z|x to the distribution P_ε| x, namely the quantile transform. Denoting F the cumulative distribution function corresponding to the distribution P, we therefore have identification of inverse demand according to the strategy put forward in <cit.> as:ε(x,z)=F^-1_ε|x( F_z|x(z|x) ).The single crossing condition of Assumption <ref> on the consumer surplus function ζ(x,ε,z) yields positive assortative matching, as in the <cit.> classical model. Consumers with higher taste for quality ε will choose higher qualities in equilibrium and positive assortative matching drives identification of demand for quality. The important feature of Assumption <ref> is injectivity of ζ_z(x,ε,z) relative to ε, and a similar argument would have carried through under ζ_zε(x,ε,z)<0, yielding negative assortative matching instead. Once inverse demand is identified, the consumer potential V(x,z), hence the utility function U̅(x,z), can be recovered up to a constant by integration of the first order condition (<ref>):U̅(x,z)=p(z)-V(x,z)=p(z)-∫_0^zζ_z(x,ε(x,z^'),z^')dz^'.
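The quantile-transform formula lends itself to a direct plug-in estimator. The following minimal sketch assumes simulated gamma-distributed qualities and the a priori normalization P_ε| x = N(0,1); both are hypothetical choices made for illustration, not implied by the model. It computes the empirical inverse demand at a given quality, for a fixed observable type x:

```python
import numpy as np
from scipy.stats import norm

# Hypothetical data for a fixed observable type x: qualities traded at
# equilibrium, simulated from a gamma distribution for illustration.
rng = np.random.default_rng(0)
z_sample = rng.gamma(shape=2.0, scale=1.5, size=5000)

def inverse_demand(z, z_sample):
    """Plug-in quantile transform: eps(x,z) = F_eps^{-1}(F_{z|x}(z)),
    under the a priori normalization P_{eps|x} = N(0,1)."""
    F_z = (z_sample <= z).mean()        # empirical CDF of traded qualities
    F_z = np.clip(F_z, 1e-6, 1 - 1e-6)  # keep the normal quantile finite
    return norm.ppf(F_z)

# Unobserved taste of the consumer observed choosing quality z = 3.0.
print(inverse_demand(3.0, z_sample))
```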
We summarize the previous discussion in the following identification statement, originally due to <cit.>.

Under Assumptions <ref>, <ref>, <ref>, <ref>, U̅(x,z) is nonparametrically identified, in the sense that z↦U̅_z(x,z) is the only marginal utility function compatible with the pair (P_xz,p), i.e., any other marginal utility function coincides with it, P_z| x almost surely.

Unlike the demand function, which is identified without knowledge of the surplus function ζ, as long as the latter satisfies single crossing (Assumption <ref>), identification of the preference function U̅(x,z) does require a priori knowledge of the function ζ.

§ SINGLE MARKET IDENTIFICATION WITH MULTIPLE ATTRIBUTES This section develops the multivariate analogue of identification results in Section <ref>. The strategy follows the same lines. First, a shape restriction on the utility function, analogous to Assumption <ref>, and a consequence of maximization behavior, called cyclical monotonicity, will identify the inverse demand: we will show that a single type ε(x,z) chooses good quality z at equilibrium. Second, the utility is then formally recovered from the first order condition of the consumer's program. The latter step, however, involves significant difficulties, due to the possible lack of differentiability of the endogenous price function.

§.§ Identification of inverse demand

§.§.§ Shape restriction In the one dimensional case, identification of inverse demand was shown under the single crossing Assumption <ref>. We noted that the sign of the single crossing condition was not important for the identification result. Instead, what is crucial is the following, weaker, condition, which is commonly known as the Twist Condition in the optimal transport literature. This condition, maintained throughout, is Assumption <ref>. As shown in <cit.>, it holds when for each distinct pair of consumer types (ε_1,ε_2), the function z↦ζ(x,ε_1,z)-ζ(x,ε_2,z) has no critical point. It is satisfied for example in the case ζ(x,ε,z)=z'ε, in the case ζ(x,ε,z)=F(x,z'ε), when z'ε>0, and F is increasing and convex in its second argument, or in the case ζ(x,ε,z)=∑_k=1^dF_k(x,ε_k,z_k), where, for each k and x, F_k is supermodular in (ε_k,z_k). All these examples are discussed in Sections <ref> and <ref> below.

S2[Twist Condition]For all x and z, the following hold. (A) The gradient ∇_zζ(x,ε,z) of ζ(x,ε,z) in z is injective as a function of ε∈ supp(P_ε| x). (B) The gradient ∇_εζ(x, ε,z) of ζ(x,ε,z) in ε is injective as a function of z∈ Z.

From <cit.>, it is sufficient that D^2_zεζ(x,ε,z) be positive definite everywhere for Assumption <ref> to be satisfied. Alternative sets of sufficient conditions are given in Theorem 2 of <cit.>[Relatedly, <cit.> propose injectivity results under a gross substitutes condition.]. Assumption <ref>, unlike single crossing, is well defined in the multivariate case, and we shall show, using recent developments in optimal transport theory, that it continues to deliver the desired identification in the multivariate case.

§.§.§ Cyclical monotonicity An important implication of Assumption <ref> is that traded quality z maximizes the joint surplus[This observation is the basis for the characterization of hedonic models as transferable utility matching models in <cit.>.] U(x̃,z)-C(ỹ,z). Let, therefore, S(x,ε,ỹ):=sup_z∈ Z[U((x,ε),z)-C(ỹ,z)] be the surplus of a consumer-producer pair ((x,ε),ỹ) at equilibrium. Suppose consumer (x,ε_0) (resp. (x,ε_1)) is paired at equilibrium with producer ỹ_0 (resp. ỹ_1) to exchange good quality z.
Then, as shown in the proof of Lemma <ref> in the Appendix, their total surplus is at least as large in the current consumer-producer matching as it would be, were they to switch partners. This is a property of the optimal allocation called cyclical monotonicity: the total surplus cannot be improved by a cycle of reallocations of consumers and producers. Applied here to cycles of length two, cyclical monotonicity yields:S(x,ε_0,ỹ_0) + S(x,ε_1,ỹ_1)≥S(x,ε_0,ỹ_1) + S(x,ε_1,ỹ_0)≥U((x, ε_0), z) -C(ỹ_1,z)+U((x, ε_1), z) -C(ỹ_0, z) = S(x,ε_0,ỹ_0) + S(x,ε_1,ỹ_1),where the first inequality holds because of cyclical monotonicity and the second inequality holds by definition of the surplus function S as a supremum. Hence, equality holds throughout in the previous display. Therefore, choice z maximizes both z↦ U((x, ε_0), z) -C(ỹ_0,z) and z↦ U((x, ε_1), z) -C(ỹ_0, z). It follows that ∇_zζ(x,ε_1,z)=∇_zζ(x,ε_0,z). The Twist Assumption <ref> then yields equality of ε_0 and ε_1 and the following lemma (proved formally in the Appendix).

Under Assumptions <ref>, <ref>, <ref>(A) and <ref>, if consumers with characteristics (x,ε_0) and (x,ε_1) consume the same good quality z at equilibrium, then ε_0=ε_1.

We see from Lemma <ref> that identification of inverse demand holds under conditions that are analogous to the scalar case, where the Twist condition replaces Spence-Mirrlees as a shape restriction. We also see that the identification proof relies on a notion of monotonicity. We will push this analogy further in Section <ref> and show that the inverse demand function z↦ε(x,z) itself satisfies a generalized form of monotonicity, which we call ζ-monotonicity, since its definition involves the function ζ.

§.§ Identification of marginal utility Heuristically, once identification of inverse demand is established, and ε(x,z) is uniquely defined in equilibrium, the first order condition of the consumer's problem sup_z∈ Z{ζ(x,ε,z)-V(x,z)} delivers identification of marginal utility ∇_zU̅(x,z). However, using the first order condition presupposes smoothness of the potential V, hence of the endogenous price function z↦ p(z). Conditions for differentiability of the potential V in optimal transportation problems are given by <cit.>. They are applied to identification in hedonic models in <cit.>. However, the conditions in <cit.> are not transparent and they are known to be very strong, excluding simple forms of the surplus, such as S(x̃,ỹ):=|x̃-ỹ|^p for all p≠2, for instance. The remainder of this section, therefore, will be devoted to proving a weaker form of differentiability of the price function, which can be used to identify marginal utility from the first order condition of the consumer's problem.

The consumer's problem yields the expression for indirect utility:sup_z∈ Z{ζ(x,ε,z)-V(x,z)}:=V^ζ(x,ε).Equation (<ref>) defines a generalized notion of convex conjugation, in which the consumer's indirect utility is the conjugate of V(x,z)=p(z)-U̅(x,z). This notion of conjugation can be inverted, similarly to convex conjugation, into:V^ζζ(x,z)=sup_ε∈ℝ^d_ε{ζ(x,ε,z)-V^ ζ(x,ε)},where V^ζζ is called the double conjugate of V. In the special case ζ(x,ε,z)=z'ε, the ζ-conjugate simplifies to the ordinary convex conjugate of convex analysis and convexity of a lower semi-continuous function is equivalent to equality with its double conjugate. Hence, by extension, equality with its double ζ-conjugate defines a generalized notion of convexity (see Definition 2.3.3 page 86 of <cit.>).
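These two conjugation operations are straightforward to evaluate numerically. The following minimal sketch — in which the scalar grids, the choice ζ(ε,z)=zε and the candidate potential are all hypothetical, chosen only for illustration — computes V^ζ and V^ζζ on grids and checks the inequality V^ζζ ≤ V used in the proof of Lemma <ref>:

```python
import numpy as np

# Scalar grids and a candidate potential (all hypothetical), with
# zeta(eps, z) = z * eps, so zeta-conjugation is Legendre conjugation.
z_grid = np.linspace(0.1, 2.0, 200)
e_grid = np.linspace(-1.0, 3.0, 200)
zeta = np.outer(e_grid, z_grid)        # zeta[i, j] = e_grid[i] * z_grid[j]
V = 0.8 * z_grid**2                    # candidate potential V(z)

# V^zeta(eps) = sup_z { zeta(eps, z) - V(z) }
V_conj = (zeta - V[None, :]).max(axis=1)

# V^{zeta zeta}(z) = sup_eps { zeta(eps, z) - V^zeta(eps) }
V_biconj = (zeta - V_conj[:, None]).max(axis=0)

# The double conjugate never exceeds V; here V is convex, so the two
# coincide up to grid discretization and truncation at the boundary.
assert np.all(V_biconj <= V + 1e-9)
print(np.max(np.abs(V_biconj - V)))    # small discretization error
```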
A function V is called ζ-convex if V=V^ζζ.

We establish ζ-convexity of the potential as a step towards a notion of differentiability, in analogy with convex functions, which are locally Lipschitz, hence almost surely differentiable, by Rademacher's Theorem (see for instance <cit.>, Theorem 10.8(ii)). It also delivers a notion of ζ-monotonicity for inverse demand, discussed in Section <ref>.

Under <ref> and <ref>, the function z↦ V(x,z) is P_z| x a.s. ζ-convex, for all x.

This result only provides information for pairs (x,z), where type x consumers choose good quality z in equilibrium. In order to obtain a global smoothness result on the potential V, we need conditions under which the endogenous distribution of good qualities traded in equilibrium is absolutely continuous with respect to Lebesgue measure on ℝ^d_z. They include absolute continuity of the distribution of unobserved heterogeneity, additional smoothness conditions on preferences and technology and the Twist Assumption <ref>(B), which requires the dimension of unobserved heterogeneity to be the same as the dimension of the good quality space, i.e., d_ε=d_z (this will be relaxed in Section <ref>).

H'Assumption <ref> holds and the distribution of unobserved tastes P_ε| x is absolutely continuous on ℝ^d_z with respect to Lebesgue measure for all x.

R'[Conditions for absolute continuity of P_z| x]The following hold. * The Hessian of total surplus D^2_zz( U(x,ε,z) - C(ỹ,z) ) is bounded above; that is, ‖ D^2_zz( U(x,ε,z) - C(ỹ,z) ) ‖≤ M_1 for all x,ε,z,ỹ, for some fixed M_1. * If the support of P_ε| x is unbounded, for each x∈ X, ‖∇_zζ(x,ε,z)‖→∞ as ε→∞, uniformly in z∈ Z. * The matrix D^2_ε z ζ(x,ε,z) has full rank for all x, ε, z. Its inverse [D^2_ε z ζ(x,ε,z)]^-1 has uniform upper bound M_0: ‖[D^2_ε z ζ(x,ε,z)]^-1‖≤ M_0 for all x,ε,z, for some fixed M_0.

We then obtain our principal intermediate lemma, which is of independent interest for the theory of hedonic equilibrium. The proof can be found in the technical appendix <cit.>.

Under Assumptions <ref>, <ref>, <ref> and <ref>, the endogenous distribution P_z| x of qualities traded at equilibrium is absolutely continuous with respect to Lebesgue measure.

Lemma 4 in the technical appendix <cit.> shows everywhere differentiability of the double conjugate potential V^ζζ. This does not imply differentiability of V everywhere, since V is ζ-convex (i.e., V=V^ζζ) only P_z| x almost everywhere. However, combined with Lemma <ref>, this yields approximate differentiability of z↦ V(x,z) as defined in Definition 6 of the technical appendix <cit.>. Using uniqueness of the approximate gradient of an approximately differentiable function then yields identification of marginal utility ∇_zU̅(x,z) from the first order condition∇_zζ(x,ε(x,z),z)=∇_ap,z V(x,z)=∇_ap p(z)-∇_zU̅(x,z),where ∇_ap,z denotes the approximate gradient (with respect to z) of Definition 6 of the technical appendix <cit.>, and where the inverse demand function ε(x,z) is uniquely determined, by Lemma <ref>. This yields our main identification theorem.

Under Assumptions <ref>, <ref>, <ref> and <ref>, the following hold: * U̅(x,z) is nonparametrically identified, in the sense that z↦∇_zU̅(x,z) is the only marginal utility function compatible with the pair (P_xz,p), i.e., any other marginal utility function coincides with it, P_z| x almost surely. * For all x∈ X, U̅(x,z)=p(z)-V(x,z) and z↦ V(x,z) is P_z| x almost everywhere equal to the ζ-convex solution to the problem min_V(𝔼_z[V(x,z)| x]+𝔼_ε [V^ζ(x,ε)| x]), with V^ζ defined in (<ref>).
Theorem <ref>(1) provides identification of marginal utility without any restriction on the distributions of observable characteristics of producers and consumers. The latter may include discrete characteristics. Regularity conditions in Assumption <ref> are satisfied in the cases ζ(x,ε,z)=ε'z and ζ(x,ε,z)=exp(ε'z), as we discuss in Sections <ref> and <ref> respectively. They preclude bunching of consumers at equilibrium, as shown in Lemma <ref>. The result is driven by the shape restriction <ref> and the strong normalization assumption on the distribution of unobserved heterogeneity. This assumption is inevitable in single market identification based on a generalized quantile identification strategy. Section <ref> discusses (partial) identification without a priori knowledge of the distribution of unobserved heterogeneity. Theorem <ref>(2) provides a framework for estimation and inference on the identified marginal utility based on new developments in computational optimal transport (see for instance the survey in <cit.>).

§.§ Special cases

§.§.§ Marginal utility linear in unobserved taste A leading special case of the identification result in Theorem <ref> is the choice ζ(x,ε,z)=z'ε, where marginal utility is linear in unobservable taste. A natural interpretation of this specification is that each quality dimension z_j is associated with a specific unobserved taste intensity ε_j for that particular quality dimension. Assumptions <ref> and <ref>(2,3) are automatically satisfied when ζ(x,ε,z)=ε'z. In addition, ζ-convexity reduces to traditional convexity, so that we have the following corollary of Theorem <ref>:

Under <ref>, <ref> with ζ(x,ε,z)=ε'z, and <ref>(1), the following hold: * U̅(x,z) is nonparametrically identified, in the sense that z↦∇_zU̅(x,z) is the only marginal utility function compatible with the pair (P_xz,p), i.e., any other marginal utility function coincides with it, P_z| x almost surely. * For all x∈ X, U̅(x,z)=p(z)-V(x,z) and z↦ V(x,z) is P_z| x almost everywhere equal to the convex solution to the problem min_V(𝔼_z[V(x,z)| x]+𝔼_ε [V^∗(x,ε)| x]), where V^∗ is the convex conjugate of V.

A significant computational advantage of Corollary <ref>(2) over the general case is that the potential solves a convex program. The first order condition (<ref>) also simplifies to ε(x,z)=∇_ap,z V(x,z), where ∇_ap,z denotes the approximate gradient with respect to z (Definition 6 of the technical appendix <cit.>). Hence, inverse demand in this case is the approximate gradient of a P_z| x almost surely convex function. This can be interpreted as a multivariate version of monotonicity of inverse demand and assortative matching, since in the univariate case, we recover monotonicity of inverse demand as in Section <ref>.
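In this linear case, a discrete analogue of the corollary is easy to compute: with equally many observed qualities and draws from the fixed P_ε| x, the surplus-maximizing assignment approximates the gradient-of-convex-function map between the two samples, and marginal utility then follows from the first order condition. The sketch below is purely illustrative — the simulated qualities, the normal taste normalization and the price gradient are all hypothetical.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical two-dimensional illustration for zeta(x, eps, z) = z' eps,
# at a fixed observable type x; qualities and tastes are simulated.
rng = np.random.default_rng(2)
n, d = 500, 2
z_obs = rng.gamma(2.0, 1.0, size=(n, d))  # qualities traded at equilibrium
eps = rng.standard_normal((n, d))         # draws from the normalized P_{eps|x}

# Optimal assignment maximizing sum_i z_i' eps_{sigma(i)}: the discrete
# analogue of the gradient-of-convex-function map from P_{z|x} to P_{eps|x}.
gain = z_obs @ eps.T
rows, cols = linear_sum_assignment(gain, maximize=True)
eps_of_z = eps[cols]                      # estimated inverse demand eps(x, z_i)

# First order condition in the linear case:
# grad_z Ubar(x,z) = grad p(z) - eps(x,z).
grad_p = 2.0 * z_obs                      # hypothetical identified price gradient
grad_Ubar = grad_p - eps_of_z             # marginal utility at traded qualities
```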
§.§.§ Nonlinear transforms and random Barten scales In the special case of Section <ref>, the curvature of marginal willingness to pay for qualities varies only with observable characteristics, but is independent of unobservable type ε. Two ways in which the specification ζ(x,ε,z)=z'ε can be generalized to allow the curvature of marginal utility to vary with unobserved type are the following. * Specification ζ(x,ε,z)=F(x,z'ε) satisfies Assumptions <ref> and <ref> when the conditional distribution P_ε| x of types has bounded support, z'ε>0, and F is increasing and convex in its second argument (with non zero derivative). A special case is ζ(x,ε,z):=exp(z'ε). Under this specification, a type ε consumer's marginal willingness to pay for quality z is ∇_zU̅(x,z)+exp(z'ε)ε. The curvature of marginal willingness to pay increases with type, so that there are increasing returns to quality. * Specification ζ(x,ε,z)=∑_k=1^d_zF_k(x,ε_k,z_k) satisfies Assumption <ref> when F_k is supermodular in (ε_k,z_k) for each k and x, and satisfies Assumption <ref> when ∂^2F_k/∂ z_k^2 is bounded above and ∂^2F_k/∂ε_k∂ z_k is bounded below for each k, uniformly over x. A special case is ζ(x,ε,z)=∑_k=1^dF_k(x,z_kε_k), in the spirit of random Barten scales, as in <cit.>.

§.§ Extensions

§.§.§ Lower dimensional unobserved heterogeneity Our main identification result, Theorem <ref>, is obtained under conditions that force the dimension of unobserved heterogeneity to be the same as the dimension d_z of the good quality space. The conditions that impose d_ε=d_z are Assumption <ref>, which requires the distribution P_ε| x to be absolutely continuous with respect to Lebesgue measure on ℝ^d_z, and Assumption <ref>(B), which requires injectivity of z↦∇_εζ(x,ε,z). In the special case ζ(x,ε,z)=ε'z, the interpretation of each dimension of ε as a quality-dimension-specific taste is appealing. However, the choice of dimension of unobserved heterogeneity remains an arbitrary modelling choice.

In this section, we relax these assumptions and analyze identification with unobserved heterogeneity of lower dimension, including d_ε=0 and d_ε=1. First recall that inverse demand is identified in Lemma <ref> under Assumptions <ref>, <ref>, <ref>(A) and <ref>, which only require d_ε≤ d_z. We also know from Lemma <ref> that the potential z↦ V(x,z) is P_z| x almost everywhere ζ-convex. It is therefore ζ-convex on any open subset of the support of P_z| x, which we show implies local identification. To obtain a global identification result, we need this ζ-convexity everywhere.

S3[ζ-convexity]The potential V is ζ-convex as a function of z for all x.

Unfortunately, this global constraint implies a constraint on the endogenous price function, for which we do not have sufficient conditions in the general case. Under Assumption <ref>, we show differentiability of the potential function z↦ V(x,z), hence identification of marginal utility.

* Local identification: Under Assumptions <ref>, <ref>, <ref>, <ref>(2) and <ref>(A), * ∇_zU̅(x,z) is identified on any open subset of the support of P_z| x, * v'∇_zU̅(x,z) is identified for any vector v tangent to the support of P_z| x. * Global identification: If, in addition, Assumption <ref> holds, ∇_zU̅(x,z) is identified.

Local identification therefore holds under very weak assumptions, as seen in Theorem <ref>(1a). However, in cases with lower dimensional unobserved heterogeneity, consumer choices may be concentrated on a lower dimensional manifold, so that there are no open subsets in the support of P_z| x. In such cases, Theorem <ref>(1b) tells us that we can only identify marginal willingness to pay along the support of good attributes actually traded at equilibrium. To illustrate the idea, suppose consumers are acquiring housing. The latter is differentiated along two dimensions, size and air quality, say. Suppose consumers with identical observable characteristics x are heterogeneous along a single scalar dimension of unobserved heterogeneity ε. We would then expect equilibrium housing choices to be concentrated on a curve in the (size × air quality) space, implicitly defining a scalar quality index that is monotonic in unobserved type ε.
Our result says that we can identify counterfactual marginal willingness to pay for size and air quality along that curve of observed equilibrium choices only. To obtain global identification with lower dimensional unobserved heterogeneity, we need a global ζ-convexity assumption on the potential (which implies a shape restriction on the endogenous price function). We investigate special cases:

* The case ζ(x,ε,z)=0. From the first order condition of the consumer's program, we then have ∇ p(z)=∇_z U(x,z), so that U(x,z) is additively separable in x and z and the data on matching between consumers and producers cannot inform utility.

* Scalar unobserved heterogeneity: suppose d_ε=1 and in addition ζ(x,ε,z)=ζ(ε,z). Denote the (scalar) inverse demand z↦ε(x,z) and assume regularity of all the terms involved. Differentiating the first order condition of the consumer problem with respect to consumer observable characteristic x yieldsD^2_xzU̅(x,z)=-D^2_zεζ(ε(x,z),z)∇_xε(x,z),which is at most of rank 1 since D^2_zεζ(ε(x,z),z) is a d_z× d_ε matrix.

§.§.§ Partial identification with multiple markets All identification results so far require fixing the distribution of unobserved consumer heterogeneity a priori. In this section, we derive identifying information from multiple markets, and explore the possibility of jointly (partially) identifying the utility function U̅(x,z) and the distribution P_ε| x. Suppose m_1 and m_2 index two separate markets, in the sense that producers, consumers or goods cannot move between markets. Markets differ in the distributions of producer and consumer characteristics (P_x^m_1,P_ỹ^m_1) and (P_x^m_2,P_ỹ^m_2). Suppose, however, that the distribution of unobserved tastes P_ε| x and the utility function U(x,ε,z)=U̅(x,z)+z'ε are identical in both markets. Both markets are at equilibrium. The equilibrium price schedule in market m is p^m(z). The equilibrium distribution of traded qualities in market m is P^m_z| x.

Under the assumptions of Corollary <ref>, in each market, we recover a nonparametrically identified utility function U̅^m(x,z;P_ε| x), where the dependence on the unknown distribution of tastes P_ε| x is emphasized. For each fixed P_ε| x, Corollary <ref> tells us that ∇_zU̅^m(x,z;P_ε| x) is uniquely determined. In each market, the first order condition of the consumer's problem is ε(x,z;m)=∇_ap,zp^m(z)-∇_zU̅(x,z;P_ε| x). Differencing across markets therefore yields:ε(x,z;m_1)-ε(x,z;m_2)=∇_ap,z( p^m_1(z)-p^m_2(z)),which is an identifying equation for P_ε| x. The right-hand side of (<ref>) is identified, since the price functions are observed. Moreover, for each market m, the inverse demand ε(x,z;m) uniquely determines P_ε| x, since it pushes the identified P^m_z| x forward to P_ε| x. Point identification would require conditions under which the difference ε(x,z;m_1)-ε(x,z;m_2) uniquely determines P_ε| x, which is beyond the scope of the present work.

§ IDENTIFICATION OF NONSEPARABLE SIMULTANEOUS EQUATIONS The analysis of hedonic equilibrium models in Section <ref> motivates a new approach to the identification of nonseparable simultaneous equations models of the type H(x,z)=ε, where x∈ℝ^d_x is an observed vector of covariates, z∈ℝ^d_z is the vector of dependent variables, P_z| x is identified from the data, H is an unknown function and ε∈ℝ^d_ε is a vector of unobservable shocks with distribution P_ε| x. In the case d_ε=d_z=1, H is identified by <cit.> subject to the normalization of P_ε| x and monotonicity of z↦ H(x,z) for all x.
This section develops a class of shape restrictions that allows identification of H in the multivariate case 1≤ d_ε≤ d_z. As in the scalar case, we fix the conditional distribution P_ε|x of errors a priori. This is justified by the fact that for any vector of dependent variables Z∼ P_z| x and any pair of absolutely continuous error distributions (P_ε| x,P̃_ε̃| x), there is an invertible mapping T such that H(x,Z)∼ P_ε| x and H̃(x,Z):=H(x,T(Z))∼P̃_ε̃| x (by <cit.>), so that (H,P_ε| x) and (H̃,P̃_ε̃| x) are observationally equivalent. For identification, we rely on a shape restriction that emulates monotonicity in z of H(x,z) in the scalar case. This generalized monotonicity notion is inherited from utility maximizing choices of good quality z by consumers with characteristics (x,ε). As such, it is indexed by the utility function.

Let ζ be a function on ℝ^d_x×ℝ^d_ε×ℝ^d_z that is continuously differentiable in its second and third variables and satisfies Assumption <ref>(A). A function H on ℝ^d_x×ℝ^d_z is ζ-monotone if for all x, there exists a ζ-convex function z↦ V(x,z) (see Definition <ref>) such that for all z, H(x,z)∈∂^ζ_zV(x,z), where ∂^ζ_z denotes the ζ-subdifferential with respect to z from Definition 4 of the technical appendix <cit.>.

Two special cases help clarify the concept of ζ-monotonicity: * When d_z=d_ε=1, injectivity of ε↦ζ_z(x,ε,z) implies that V^ζ is convex or concave, and that z↦ H(x,z) is monotone. * When d_ε=d_z and ζ(x,ε,z)=z'ε, V^ζ=V^∗, which is convex, and therefore locally Lipschitz, hence almost surely differentiable by Rademacher's Theorem (<cit.>, Theorem 10.8(ii)), so that H(x,z)=∇_zV(x,z) is the gradient of a convex function.

The class of ζ-monotone functions has structural underpinnings as demand functions resulting from the maximization of a utility function over good qualities z∈ℝ^d_z. Suppose a consumer with characteristics (x,ε) chooses z based on the maximization of U̅(x,z)+ζ(x,ε,z)-p(z). Suppose ζ satisfies Assumption <ref> and V(x,z):=p(z)-U̅(x,z) is ζ-convex. Then, the demand function H that satisfies ∇ V(x,z)=∇_zζ(x,H(x,z),z), P_z| x a.s., is ζ-monotonic by Theorem 10.28(b) page 243 of <cit.>. Theorem <ref> below then shows identification of demand when utility is of the form U̅(x,z)+ζ(x,ε,z) and ζ is fixed.

In the simultaneous equations model H(x,z)=ε, with z∈ℝ^d_z and x∈ℝ^d_x, where ε follows the known distribution P_ε| x, the function H:ℝ^d_x×ℝ^d_z→ℝ^d_ε is identified within the class of measurable ζ-monotone functions, for any function ζ on ℝ^d_x×ℝ^d_ε×ℝ^d_z that is bounded above, continuously differentiable in its second and third variables and satisfies Assumptions <ref>(A), <ref>(2) and ∫ζ(x,ε,z)dP_ε| x(ε) ≥ C∈ℝ for all x,z.

Theorem <ref> is a relatively straightforward application of classical results in optimal transport theory, in particular Theorem 10.28 page 243 of <cit.>. Brenier's polar factorization theorem, in <cit.>, was, to the best of our knowledge, first used to define multivariate quantile functions by <cit.> and <cit.> with decision theoretic applications. <cit.> and <cit.> (both contemporaneous with the present paper) apply <cit.> to multivariate quantile regression and multivariate depth, quantiles, ranks and signs respectively.
This section relies on an extension of these optimal transport results to more general transport costs and interprets it as an identification result, thus extending scalar quantile identification strategies. If we revisit the two special cases of ζ-monotonicity above, we obtain the classical quantile identification of <cit.>, and a result on the identification of nonseparable simultaneous equations systems within the class of gradients of convex functions. * When d_z=d_ε=1, Theorem <ref> yields the identification of H in the system ε=H(x,z) when z↦ H(x,z) is monotone. * When d_ε=d_z and ζ(x,ε,z)=z'ε, Theorem <ref> yields the identification of H in the system ε=H(x,z) when z↦ H(x,z) is the gradient of a convex function.

In the simultaneous equations model z=G(x,ε), with z∈ℝ^d_z, x∈ℝ^d_x, ε∼ P_ε| x, and P_ε| x a given absolutely continuous distribution on ℝ^d_z, the function G:ℝ^d_x×ℝ^d_z→ℝ^d_z is identified within the class of gradients of convex functions of ε, for each x.

Although the previous result is presented as a corollary of Theorem <ref>, it holds under weaker conditions than would be implied by Theorem <ref> in case ζ(x,ε,z)=z'ε and is a direct application of the Main Theorem in <cit.>. The only constraint is the absolute continuity of the distribution of ε, so that the outcome vector z and the covariate vector x are unrestricted.

Beyond the special cases of Corollary <ref>, we revisit the examples of Section <ref>. First consider the case of the exponential transform ζ(x,ε,z):=exp(z'ε). Given utility U̅(x,z)+exp(z'ε) and price p(z), Theorem <ref> tells us that the solution ε=H(x,z) to the consumer's problem ∇ p(z)-∇_zU̅(x,z)=exp(z'H(x,z))H(x,z) is unique. In the case where consumers maximize utility of the form U̅(x,z)+∑_k=1^dF_k(x,z_kε_k), the corresponding system of differential equations with a unique solution is p_z_k(z)-U̅_z_k(x,z)=f_k(x,z_kH_k(x,z))H_k(x,z) for each k, where p_z_k and U̅_z_k are the partial derivatives with respect to the k-th variable, f_k is the derivative of F_k with respect to the second argument, and H_k is the k-th component of H.
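In the exponential case, for instance, the unobserved type at each observed quality can be recovered by solving the first order condition numerically. The sketch below is a minimal illustration; the values of z and of the identified gradient gap w=∇p(z)−∇_zU̅(x,z) are hypothetical placeholders.

```python
import numpy as np
from scipy.optimize import fsolve

def solve_H(z, w):
    """Solve exp(z' eps) * eps = w for eps, where w = grad p(z) - grad_z Ubar(x, z);
    uniqueness of the root is the content of the identification theorem above."""
    eqn = lambda eps: np.exp(z @ eps) * eps - w
    return fsolve(eqn, np.zeros_like(z))  # start from eps = 0

# Hypothetical quality vector and identified gradient gap.
z = np.array([1.0, 0.5])
w = np.array([0.8, -0.3])
eps_hat = solve_H(z, w)
print(eps_hat, np.exp(z @ eps_hat) * eps_hat - w)  # residual ~ 0
```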
§ DISCUSSION This paper proposed a set of conditions under which utilities and costs in a hedonic equilibrium model are identified from the observation of a single market outcome. The proof strategy extends <cit.> and <cit.> (hereafter EHMN) to the case of goods characterized by more than one attribute. The proposed shape restriction on the utility function, called the twist condition, extends the single crossing condition in EHMN. The proof of identification mirrors that of (one of the strategies in) EHMN. First, inverse demand is identified from the twist condition and cyclical monotonicity (a feature of equilibrium). Then the first order condition of the consumer's problem allows the recovery of the utility function, once a suitable form of weak differentiability of the endogenous price function is ensured. The identification proof highlights another parallel with EHMN, which is (generalized) monotonicity of inverse demand. In the scalar case, this generalized monotonicity reduces to monotonicity, whereas in the special case where utility takes the form U((x,ε),z)=U̅(x,z)+z'ε, inverse demand is the gradient of a convex function. We then show that this generalized form of monotonicity is a suitable shape restriction to identify nonseparable simultaneous equations models with a strategy that extends the quantile identification of <cit.>. Most of our results involve fixing the distribution of unobserved consumer heterogeneity a priori, as in the original quantile identification method. Although we provide some discussion of the case where data from multiple distinct markets can provide additional identifying equations to (partially) identify (U̅(x,z),P_ε| x) jointly, more research is needed to develop point identification conditions for the latter.

§ NOTATION Throughout the paper, we use the following notational conventions. Let f(x,y) be a real-valued function on ℝ^d×ℝ^d. When f is sufficiently smooth, the gradient of f with respect to x is denoted ∇_xf, and the matrix of second order derivatives with respect to x and y is denoted D^2_xyf. When f is not smooth, ∂_xf refers to the subdifferential with respect to x, from Definition 2 of the technical appendix <cit.>, and ∇_ap,xf refers to the approximate gradient with respect to x, from Definition 6 of the technical appendix <cit.>. The set of all Borel probability distributions on a set Z is denoted Δ(Z). A random vector ε with probability distribution P is denoted ε∼ P, and X∼ Y means that the random vectors X and Y have the same distribution. The product of two probability distributions μ and ν is denoted μ⊗ν and for a map f:X↦ Y and μ∈Δ(X), ν:=f#μ is the probability distribution on Y defined for each Borel subset A of Y by ν(A)=μ( f^-1(A)). For instance, if T is a map from X to Y and ν a probability distribution on X, then μ:=(id,T)#ν defines the probability distribution on X× Y by μ(A)=∫_X1_A(x,T(x))dν(x) for any measurable subset A of X× Y. Given two probability distributions μ and ν on X and Y respectively, ℳ(μ,ν) will denote the subset of Δ(X× Y) containing all probability distributions with marginals μ and ν. We denote the inner product of two vectors x and y by x^' y. The Euclidean norm is denoted ‖·‖. The notation |a| refers to the absolute value of the real number a, whereas |A| refers to the Lebesgue measure of set A. The set of all continuous real valued functions on Z is denoted C^0(Z) and B_r(x) is the open ball of radius r centered at x. For each fixed x∈ X, the function ε↦ V^∗(x,ε):=sup_z∈ℝ^d{z'ε-V(x,z)} is called the convex conjugate (also known as Legendre-Fenchel transform) of z↦ V(x,z). Still for fixed x, the convex conjugate of V^∗, z↦ V^∗∗(x,z):=sup_ε∈ℝ^d{z'ε-V^∗(x,ε)}, is called the double conjugate or convex envelope of the potential function z↦ V(x,z). According to convex duality theory (see for instance <cit.>, Theorem 12.2 page 104), the double conjugate of V is V itself if and only if z↦ V(x,z) is convex and lower semi-continuous. The notation ψ^ζ refers to the ζ-convex conjugate of the function ψ (Definition 3 of the technical appendix <cit.>) and ∂^ζ_x refers to the ζ-subdifferential with respect to x (Definition 4 of the technical appendix <cit.>).

§ PROOF OF RESULTS IN THE MAIN TEXT

§.§.§ Proof of Lemma <ref> By definition of V^ζ, we haveV(x,z) ≥ζ(x,ε,z) -V^ζ(x,ε).As, by definition of ζ-conjugation, V^ζζ(x,z) =sup_ε[ ζ(x,ε,z) -V^ζ(x,ε) ], we haveV(x,z) ≥ V^ζζ(x,z),by taking the supremum over ε in (<ref>).

Let γ be a hedonic equilibrium probability distribution on X̃× Z×Ỹ. By Assumption <ref>,ζ(x,ε,z)-V(x,z)=U(x,ε,z)-p(z)=max_z∈ Z(U(x,ε,z)-p(z))=max_z∈ Z(ζ(x,ε,z)-V(x,z))=V^ζ(x,ε)is true γ almost everywhere. Hence, there is equality in (<ref>) γ almost everywhere. Hence, for P_z|x almost every z, and ε such that (z, ε) is in the support of γ, we have V(x,z) = ζ(x,ε,z) -V^ζ(x,ε).
But the right hand side is bounded above by V^ζζ(x,z) by definition, so we get V(x,z) ≤ V^ζζ(x,z). Combined with (<ref>), this tells us V(x,z) =V^ζζ(x,z), P_z|x almost everywhere.□

§.§.§ Proof of Lemma <ref> For a fixed observable type x, assume that the types x̃_0:=(x,ε_0) and x̃_1:=(x,ε_1) both choose the same good, z̅∈ Z, from producers ỹ_0 and ỹ_1, respectively. We want to prove that this implies the unobservable types are also the same; that is, that ε_0=ε_1. This property is equivalent to having a map from the good qualities Z to the unobservable types, for each fixed observable type. Note that z̅ must maximize the joint surplus for both ε_0 and ε_1. That is, settingS(x,ε,ỹ) = sup_z∈ Z[U̅(x, z) +ζ(x, ε, z) -C(ỹ, z)]we have,S(x,ε_0,ỹ_0)= U̅(x,z̅) +ζ(x, ε_0,z̅) -C(ỹ_0,z̅) andS(x,ε_1,ỹ_1)= U̅(x,z̅) +ζ(x, ε_1,z̅) -C(ỹ_1,z̅).

By Assumption <ref>, we can apply Lemma 1 of <cit.>, so that the pair of indirect utilities (V,W), where V(x̃)=sup_z∈ Z( U(x̃,z)-p(z)) and W(ỹ)=sup_z∈ Z( p(z)-C(ỹ,z)), achieves the dual (DK) of the optimal transportation problemsup_π∈ℳ(P_x̃,P_ỹ)∫ S(x̃,ỹ)dπ(x̃,ỹ),with solution π. This implies, from Theorem 1.3 page 19 of <cit.>, that for π almost all pairs (x̃_0,ỹ_0) and (x̃_1,ỹ_1),V(x̃_0)+W(ỹ_0) = S(x̃_0,ỹ_0),V(x̃_1)+W(ỹ_1) = S(x̃_1,ỹ_1),V(x̃_0)+W(ỹ_1) ≥ S(x̃_0,ỹ_1),V(x̃_1)+W(ỹ_0) ≥ S(x̃_1,ỹ_0).We therefore deduce the condition (called the 2-monotonicity condition):S(x,ε_0,ỹ_0)+S(x,ε_1,ỹ_1) ≥ S(x,ε_1,ỹ_0)+S(x,ε_0,ỹ_1), recalling that x̃_0=(x,ε_0) and x̃_1=(x,ε_1). Now, by definition of S as the maximized surplus, we haveS(x,ε_1,ỹ_0) ≥U̅(x, z̅) +ζ(x, ε_1, z̅) -C(ỹ_0, z̅) andS(x,ε_0,ỹ_1) ≥U̅(x,z̅) +ζ(x, ε_0, z̅) -C(ỹ_1,z̅).

Inserting this, as well as (<ref>) and (<ref>), into the 2-monotonicity inequality yieldsU̅(x,z̅) +ζ(x, ε_0,z̅) -C(ỹ_0,z̅) + U̅(x,z̅) +ζ(x, ε_1,z̅) -C(ỹ_1,z̅)≥ S(x,ε_1,ỹ_0)+S(x,ε_0,ỹ_1) ≥ U̅(x, z̅) +ζ(x, ε_1, z̅) -C(ỹ_0,z̅)+U̅(x,z̅) +ζ(x, ε_0, z̅) -C(ỹ_1, z̅).But the left and right hand sides of the preceding string of inequalities are identical, so we must have equality throughout. In particular, we must have equality in (<ref>) and (<ref>). Equality in (<ref>), for example, means that z̅ maximizes z ↦U̅(x, z) +ζ(x, ε_1, z) -C(ỹ_0, z), and so, as z̅ is in the interior of Z by Assumption <ref>, we have∇ _zζ(x, ε_1, z̅) =∇ _zC(ỹ_0, z̅)-∇ _zU̅(x, z̅).Since z̅ also maximizes z ↦U̅(x, z) +ζ(x, ε_0, z) -C(ỹ_0, z), we also have∇ _zζ(x, ε_0, z̅) =∇ _zC(ỹ_0, z̅)-∇ _zU̅(x, z̅).Equations (<ref>) and (<ref>) then imply ∇ _zζ(x, ε_1, z̅) =∇ _zζ(x, ε_0, z̅) and Assumption <ref>(A) implies ε_1 = ε_0. □

§.§.§ Proof of Theorem <ref> (1) Since, by Lemma 4 in the technical appendix <cit.>, V(x,z) is approximately differentiable P_z| x almost surely, and since U̅(x,z) is differentiable by assumption, p(z)=V(x,z)+U̅(x,z) is also approximately differentiable P_z| x almost surely. Since, by Lemma <ref>, the inverse demand function ε(x,z) is uniquely determined, the first order condition ∇_zζ(x,ε(x,z),z)=∇_ap p(z)-∇_zU̅(x,z) identifies ∇_zU̅(x,z), P_z| x almost everywhere, as required. (2) In Part (1), we have shown uniqueness (up to location) of the pair (V,V^ζ) such that V(x,z)+V^ζ(x,ε)=ζ(x,ε,z), π almost surely. By Theorem 1.3 page 19 of <cit.>, this implies that (V,V^ζ) is the unique (up to location) pair of ζ-conjugates that solves the dual Kantorovich problem, as required.□

§.§.§ Proof of Theorem <ref>(1a) and Theorem <ref>(2) Step 1: Differentiability of V in z.
The objective is to prove that the subdifferential ∂_z V(x,z_0) (see Definition 2 of the technical appendix <cit.>) at each z_0 is a singleton, which is equivalent to differentiability at z_0. We show Theorem <ref>(2); the same method of proof applies on any open subset of the support of P_z| x to yield the proof of Theorem <ref>(1a). From Assumption <ref>, V(x,z) is ζ-convex, and hence locally semiconvex (see Definition 5 of the technical appendix <cit.>), by Proposition C.2 in <cit.>; the definition of local semiconvexity from the latter paper is recalled in the online supplement below. Now, Lemma <ref> shows that for each fixed z, the set{ε∈ℝ^d_z: V(x,z) +V^ζ(x,ε) = ζ(x,ε,z)}=:{f(z)}is a singleton. We claim that this means V is differentiable with respect to z everywhere. Fix a point z_0. We will prove that the subdifferential ∂_z V(x,z_0) contains only one extremal point (for a definition, see <cit.>, Section 18). This will yield the desired result. Indeed, all subdifferentials are closed and convex, so the subdifferential of V is closed and convex. By Assumption <ref>, V is ζ-convex, hence continuous, by the combination of Propositions C.2 and C.6(i) in <cit.>; as the subdifferential of a continuous function, the subdifferential of V is also bounded, and it is therefore equal to the convex hull of its extreme points (see <cit.>, Theorem 18.5). The subdifferential of V at z_0 must therefore be a singleton, and V must be differentiable at z_0 (Theorem 25.1 in <cit.> can be easily extended to locally semiconvex functions). Let q be any extremal point in ∂_z V(x,z_0). Let z_n be a sequence satisfying the conclusion in Lemma 3 in the technical appendix <cit.>. Now, as V is differentiable at each point z_n, we have the envelope condition∇_z V(x,z_n) = ∇_z ζ (x,ε_n,z_n)where ε_n=f(z_n) is the unique point giving equality in (<ref>). As the sequence ∇_zζ(x,ε_n,z_n) converges, Assumption <ref>(2) implies that the ε_n remain in a bounded set. We can therefore pass to a convergent subsequence ε_n →ε_0. By continuity of ∇_zζ, we can pass to the limit in (<ref>) and, recalling that the left hand side tends to q, we obtain q=∇_zζ (x,ε_0,z_0). Now, by definition of ε_n, we have the equality V(x,z_n) +V^ζ(x,ε_n) = ζ(x,ε_n,z_n). By Assumption <ref>, V and V^ζ are ζ-convex, hence continuous, by the combination of Propositions C.2 and C.6(i) in <cit.>. Hence, we can pass to the limit to obtain V(x,z_0) +V^ζ(x,ε_0) = ζ(x,ε_0,z_0). But this means ε_0 =f(z_0), and so q=∇_zζ (x,ε_0,z_0) = ∇_zζ (x,f(z_0),z_0) is uniquely determined by z_0. This means that the subdifferential can only have one extremal point, completing the proof of differentiability of V. Step 2: Since, by Step 1, V(x,z) is differentiable P_z| x almost surely, and since U̅(x,z) is differentiable by assumption, p(z)=V(x,z)+U̅(x,z) is also differentiable P_z| x almost surely. Since, by Lemma <ref>, the inverse demand function ε(x,z) is uniquely determined, the first order condition ∇_zζ(x,ε(x,z),z)=∇ p(z)-∇_zU̅(x,z) identifies ∇_zU̅(x,z), P_z| x almost everywhere, as required. □§.§.§ Proof of Theorem <ref>(1b)The argument in the proof of Theorem <ref>(2) applies to V^ζζ(x,z); this function is therefore differentiable throughout the support of P_z|x.
Now, if v is tangent to the support of P_z|x at z_0, there is a curve z(t) in the support of P_z|x which is differentiable at t_0, where t_0 is such that z_0=z(t_0), and z'(t_0)=v. Since V^ζζ(x,z) =V(x,z) on the support, we have V^ζζ(x,z(t)) =V(x,z(t)). Since the left hand side is differentiable as a function of t at t_0, the right hand side must be as well, and:∇_zV^ζζ(x,z(t_0))· v=∇_zV^ζζ(x,z(t_0))· z'(t_0) =∂/∂ t[V(x,z(t))]|_t=t_0.Therefore, p(z) =V(x,z)+U̅(x,z) is also differentiable along this curve, and ∂/∂ t[p(z(t))]|_t=t_0 =∂/∂ t[V(x,z(t))]|_t=t_0+∇_zU̅(x,z(t_0))· v. It then follows from the first order condition that∇_zζ(x,ε(x,z_0),z_0)· v =∇_zV^ζζ(x,z(t_0))· v= ∂/∂ t[V(x,z(t))]|_t=t_0=∂/∂ t[p(z(t))]|_t=t_0-∇_zU̅(x,z(t_0))· v,which identifies the directional derivative ∇_zU̅(x,z(t_0))· v.□ §.§.§ Proof of Theorem <ref> Fix x∈ℝ^d_x and omit it from the notation throughout the proof. Fix a twice continuously differentiable function ζ on ℝ^d_ε×ℝ^d_z that satisfies Assumption <ref>(A). Assume there exist two ζ-monotonic functions H and H̃ such that H# P_z=H̃# P_z=P_ε. By the definition of ζ-monotonicity (Definition <ref>), there exist two ζ-convex functions ψ and ψ̃ such that H∈∂^ζψ and H̃∈∂^ζψ̃. By definition of the ζ-subdifferential (Definition 4 of the technical appendix <cit.>), this implies that, P_z almost surely, ψ(z)+ψ^ζ(H(z))=ζ(H(z),z) and ψ̃(z)+ψ̃^ζ(H̃(z))=ζ(H̃(z),z). Hence, both H and H̃ are solutions to the Monge optimal transport problem with cost ζ, whose solution is unique by Theorem 10.28 page 243 of <cit.>. It remains to show that the assumptions of Theorem 10.28 page 243 of <cit.> are satisfied. Indeed, by Assumption <ref>, ζ is differentiable everywhere, hence superdifferentiable, verifying (i); (ii) is Assumption <ref> and (iii) is satisfied under Assumption <ref>(2) by Step 1 of the proof of Theorem <ref>(2). Finally, integrating ∫ζ(x,ε,z)dP_ε|x(ε)> C over P_z| x with ζ bounded above yields ∫∫ζ(x,ε,z)dP_z|x(z)dP_ε|x(ε) finite, which is the remaining condition for Theorem 10.28 page 243 of <cit.> to hold. The result follows. □§ ONLINE SUPPLEMENT §.§ Hedonic equilibriumWe consider a competitive environment, where consumers and producers trade a good or contract, fully characterized by its type or quality z. The set of feasible qualities Z⊆ℝ^d_z is assumed compact and given a priori, but the distribution of the qualities actually traded arises endogenously in the hedonic market equilibrium, as does their price schedule p(z). Producers are characterized by their type ỹ∈Ỹ⊆ℝ^d_ỹ and consumers by their type x̃∈X̃⊆ℝ^d_x̃. Type distributions P_x̃ on X̃ and P_ỹ on Ỹ are given exogenously, so that entry and exit are not modelled. Consumers and producers are price takers and maximize quasi-linear utility U(x̃,z)-p(z) and profit p(z)-C(ỹ,z), respectively. Utility U(x̃,z) (respectively cost C(ỹ,z)) is upper (respectively lower) semicontinuous and bounded. In addition, the set of qualities Z(x̃,ỹ) that maximize the joint surplus U(x̃,z)-C(ỹ,z) for each pair of types (x̃,ỹ) is assumed to have a measurable selection. Then <cit.> and <cit.> show that an equilibrium exists in this market, in the form of a price function p on Z, a joint distribution P_x̃z on X̃× Z and P_ỹz on Ỹ× Z such that their marginals on Z coincide, so that the market clears for each traded quality z∈ Z. Uniqueness is not guaranteed; in particular, prices are not uniquely defined for non-traded qualities in equilibrium.
Purity is not guaranteed either: an equilibrium specifies a conditional distribution P_z|x̃ (respectively P_z|ỹ) of qualities consumed by type x̃ consumers (respectively produced by type ỹ producers). The quality traded by a given producer-consumer pair (x̃,ỹ) is not uniquely determined at equilibrium without additional assumptions. The solution concept we impose is the following feature of hedonic equilibrium, i.e., maximization of the surplus generated by a trade. EC[Equilibrium concept]The joint distribution γ of (X̃,Z,Ỹ) and the price function p form a hedonic equilibrium, i.e., they satisfy the following. The joint distribution γ has marginals P_x̃ and P_ỹ and for γ almost all (x̃,z,ỹ),U(x̃,z)-p(z) = max_z'∈ Z( U(x̃,z')-p(z')),p(z)-C(ỹ,z) = max_z'∈ Z( p(z')-C(ỹ,z')).In addition, observed qualities z∈ Z(x̃,ỹ), maximizing the joint surplus U(x̃,z)-C(ỹ,z) for each x̃∈X̃ and ỹ∈Ỹ, lie in the interior of the set of feasible qualities Z, and Z(x̃,ỹ) is assumed to have a measurable selection. The joint surplus U(x̃,z)-C(ỹ,z) is finite everywhere. We assume full participation in the market.We assume that consumer types x̃ are only partially observable to the analyst. We write x̃=(x,ε), where x∈ X⊆ℝ ^d_x is the observable part of the type vector, and ε∈ ℝ^d_ε is the unobservable part. We shall make a separability assumption that will allow us to specify constraints on the interaction between consumer unobservable type ε and good quality z in order to identify interactions between observable type x and good quality z.H[Unobservable heterogeneity]Consumer type x̃ is composed of observable type x with distribution P_x on X⊆ℝ^d_x and unobservable type ε with a priori specified conditional distribution P_ε| x on ℝ^d_ε, with d_ε≤ d_z. The utility of consumers can be decomposed as U(x̃,z)=U̅(x,z)+ζ(x,ε,z), where the functional form of ζ is known, but that of U̅ is not.For convenience, we shall use the transformation V(x,z):=p(z)-U̅(x,z). The latter will be called the consumer's potential, in line with the optimal transport terminology. The following condition is commonly known as the Twist Condition in the optimal transport literature. S2[Twist Condition]For all x and z, the following hold. (A) The gradient ∇_zζ(x,ε,z) of ζ(x,ε,z) in z is injective as a function of ε on the support of P_ε| x.(B) The gradient ∇_εζ(x, ε,z) of ζ(x,ε,z) in ε is injective as a function of z∈ Z. From <cit.>, it is sufficient that D^2_zεζ(x,ε,z) be positive definite everywhere for Assumption <ref> to be satisfied. §.§ Absolute continuity of the distribution of traded qualities The consumer's problem can be writtensup_z∈ Z{ζ(x,ε,z)-V(x,z)}:=V^ζ(x,ε).Equation (<ref>) defines a generalized notion of convex conjugation, which can be inverted, similarly to convex conjugation, into:V^ζζ(x,z)=sup_ε∈ℝ^d_ε{ζ(x,ε,z)-V^ ζ(x,ε)},where V^ζζ is called the double conjugate of V. In the special case ζ(x,ε,z)=z'ε, the ζ-conjugate simplifies to the ordinary convex conjugate of convex analysis, and convexity of a lower semi-continuous function is equivalent to equality with its double conjugate. Hence, by extension, equality with its double ζ-conjugate defines a generalized notion of convexity (see Definition 2.3.3 page 86 of <cit.>). A function V is called ζ-convex if V=V^ζζ.We establish ζ-convexity of the potential as a step towards a notion of differentiability, in analogy with convex functions, which are locally Lipschitz, hence almost surely differentiable, by Rademacher's Theorem (see for instance <cit.>, Theorem 10.8(ii)).
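As a purely illustrative aside (our own sketch, not part of the original argument), the conjugations above specialize, when ζ(x,ε,z)=z'ε, to the Legendre-Fenchel transform recalled in the Notation section, and the transform is easy to approximate on a grid. The double-well potential and grid below are hypothetical choices; the sketch checks numerically that the double conjugate V^∗∗ is the convex envelope of V.

import numpy as np

# Grid-based Legendre-Fenchel transform: V*(e) = max_z { z*e - V(z) } and
# V**(z) = max_e { z*e - V*(e) }. V** is the convex envelope of V and
# coincides with V exactly when V is convex and lower semi-continuous.
z = np.linspace(-2.0, 2.0, 401)
e = np.linspace(-4.0, 4.0, 401)
V = (z**2 - 1.0)**2                       # a non-convex double-well potential

V_star = np.max(z[None, :] * e[:, None] - V[None, :], axis=1)            # V*(e)
V_star_star = np.max(e[None, :] * z[:, None] - V_star[None, :], axis=1)  # V**(z)

assert np.all(V_star_star <= V + 1e-8)    # the envelope never exceeds V
print("max gap V - V**:", np.max(V - V_star_star))  # positive inside the well

The gap between V and V^∗∗ is largest at the center of the well, where the convex envelope is flat; for a convex V the printed gap would be (numerically) zero, mirroring the equivalence between ζ-convexity and V=V^ζζ stated above.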
Under <ref> and <ref>, the function z↦ V(x,z) is P_z| x a.s. ζ-convex, for all x.This result only provides information for pairs (x,z) where type x consumers choose good quality z in equilibrium. In order to obtain a global smoothness result on the potential V, we need conditions under which the endogenous distribution of good qualities traded in equilibrium is absolutely continuous with respect to Lebesgue measure on ℝ^d_z. They include absolute continuity of the distribution of unobserved heterogeneity, additional smoothness conditions on preferences and technology, and the Twist Assumption <ref>(B), which requires the dimension of unobserved heterogeneity to be the same as the dimension of the good quality space, i.e., d_ε=d_z.H'Assumption <ref> holds and the distribution of unobserved tastes P_ε| x is absolutely continuous on ℝ^d_z with respect to Lebesgue measure for all x. R'[Conditions for absolute continuity of P_z| x]The following hold. * The Hessian of the total surplus D^2_zz( U(x,ε,z) - C(ỹ,z) ) is bounded above; that is, ‖ D^2_zz( U(x,ε,z) - C(ỹ,z) ) ‖≤ M_1 for all x,ε,z,ỹ, for some fixed M_1.* For each x∈ X, ‖∇_zζ(x,ε,z)‖→∞ as ‖ε‖→∞, uniformly in z∈ Z.* The matrix D^2_ε z ζ(x,ε,z) has full rank for all x, ε, z. Its inverse [D^2_ε z ζ(x,ε,z)]^-1 has uniform upper bound M_0: ‖[D^2_ε z ζ(x,ε,z)]^-1‖≤ M_0 for all x,ε,z, for some fixed M_0.We then obtain absolute continuity of the endogenous distribution of traded qualities. Under Assumptions <ref>,  <ref>, <ref> and <ref>, the endogenous distribution P_z| x of qualities traded at equilibrium is absolutely continuous with respect to Lebesgue measure.Lemma <ref> below shows everywhere differentiability of the double conjugate potential V^ζζ. This does not imply differentiability of V everywhere, since V is ζ-convex (i.e., V=V^ζζ) only P_z| x almost everywhere. However, combined with Lemma <ref>, this yields approximate differentiability of z↦ V(x,z) as defined in Definition <ref>. §.§.§ Proof of Lemma <ref> For an upper semi-continuous map S:ℝ^d_1×ℝ^d_2→ℝ, possibly indexed by x, and probability distributions μ on ℝ^d_1 and ν on ℝ^d_2, possibly conditional on x, let T_S(μ,ν) be the value of the Kantorovich problem, i.e.,T_S(μ,ν) =sup_γ∈ℳ(μ,ν) ∫_ℝ^d_1×ℝ^d_2 S(ε, z)dγ(ε, z). We state and prove three intermediate results in Steps 1, 2 and 3 before completing the proof of Lemma <ref>. Step 1: We first show that P_z achievesmax_ν∈Δ(Z) [T_U(P_x̃,ν) +T_-C(P_ỹ,ν)].This step follows the method of proof of Proposition 3 in <cit.>. Take any probability distribution ν∈Δ(Z), μ_1∈ℳ(P_x̃,ν) and μ_2∈ℳ(P_ỹ,ν). By the Disintegration Theorem (see for instance <cit.>, Theorem 9, page 117), there are families of conditional probabilities μ_1^z and μ_2^z, z∈ Z, such that μ_1=μ_1^z⊗ν and μ_2=μ_2^z⊗ν. Define the probability γ∈Δ(X̃×Ỹ× Z) by∫_X̃×Ỹ× Z F(x̃,ỹ,z)dγ(x̃,ỹ,z)=∫_X̃×Ỹ× Z F(x̃,ỹ,z)dμ^z_1(x̃)dμ^z_2(ỹ)dν(z),for each F∈ C^0(X̃×Ỹ× Z). By construction, the projection μ of γ on X̃×Ỹ belongs to ℳ(P_x̃,P_ỹ). We therefore have:∫_X̃× ZU(x̃,z)dμ_1(x̃,z)-∫_Ỹ× ZC(ỹ,z)dμ_2(ỹ,z)= ∫_X̃×Ỹ× Z[ U(x̃,z)-C(ỹ,z) ] dγ(x̃,ỹ,z) ≤ ∫_X̃×Ỹ× Zsup_z∈ Z[ U(x̃,z)-C(ỹ,z) ] dγ(x̃,ỹ,z) = ∫_X̃×Ỹsup_z∈ Z[ U(x̃,z)-C(ỹ,z) ] dμ(x̃,ỹ).Since ν, μ_1 and μ_2 are arbitrary, the latter sequence of displays shows that the value of program (<ref>) is smaller than or equal to the value of program (<ref>) below.sup_μ∈ℳ(P_x̃,P_ỹ)∫_X̃×Ỹsup_z∈ Z[ U(x̃,z)-C(ỹ,z) ] dμ(x̃,ỹ).By Assumption <ref>, Program (<ref>) is attained by the projection μ̅ on X̃×Ỹ of the equilibrium probability distribution γ.
Let μ_1 and μ_2 be the projections of the equilibrium probability distribution γ onto X̃× Z and Ỹ× Z. Now, since the value of (<ref>) is smaller than or equal to the value of (<ref>), and the latter is attained by μ̅, we have(<ref>)≤ ∫_X̃×Ỹsup_z∈ Z[ U(x̃,z)-C(ỹ,z) ] dμ̅(x̃,ỹ)= ∫_X̃× Z U(x̃,z)dμ_1(x̃,z)+∫_Ỹ× Z[ -C(ỹ,z)] dμ_2(ỹ,z)≤ sup_μ_1∈ℳ(P_x̃,P_z)∫_X̃× Z U(x̃,z)dμ_1(x̃,z)+sup_μ_2∈ℳ(P_ỹ,P_z)∫_Ỹ× Z[ -C(ỹ,z)] dμ_2(ỹ,z) ≤ (<ref>),so that equality holds throughout, and P_z solves (<ref>).Step 2: We then show by contradiction that for P_x almost every x, P_z|x achievesmax_ν [T_U(P_ε| x,ν) +T_-C(P_ỹ|x,ν)].Assume the conclusion is false; that is, that there is some set E ⊂ X, with P_x(E) >0, such that for each x in E there is some probability distribution P_z|x' on Z such thatT_U(P_ε| x, P_z|x') + T_-C(P_ỹ|x,P_z|x') > T_U(P_ε| x, P_z|x) + T_-C(P_ỹ|x,P_z|x).For x ∉ E, we set P_z|x' = P_z|x. This means that for every x, P_z|x' is defined and we haveT_U(P_ε| x, P_z|x') + T_-C(P_ỹ|x,P_z|x') ≥T_U(P_ε| x, P_z|x) + T_-C(P_ỹ|x,P_z|x);moreover, the inequality is strict on a set E of positive P_x measure, so that∫_X[ T_U(P_ε| x, P_z|x') + T_-C(P_ỹ|x,P_z|x') ]dP_x>∫_X[T_U(P_ε| x, P_z|x) + T_-C(P_ỹ|x,P_z|x)]dP_x.We define a new probability distribution P_z' on Z by P_z'(A) =∫_X P_z|x'(A) dP_x(x), for each Borel set A ⊆ Z. We claim thatT_U(P_x̃,P_z') + T_-C(P_ỹ,P_z') >T_U(P_x̃,P_z) + T_-C(P_ỹ,P_z) ,which contradicts the maximality of P_z in (<ref>) established in Step 1. Let P'_ε z|x achieve the optimal transportation between P_ε| x and P'_z|x; that is,T_U(P_ε| x, P_z|x')=∫_ℝ^d_ε× Z U(x,ε, z)dP'_ε z|x.Existence of such a P'_ε z|x is given by Theorem 1.3 page 19 of <cit.>. By construction, the marginals of P'_ε z|x are P_ε| x and P'_z|x. We define P'_x̃z on X̃× Z for any Borel subset B⊆X̃× Z by P'_x̃z (B):=∫_X P'_ε z|x(B_x)dP_x, where we define B_x:={(ε,z)∈ℝ^d_z× Z: (x,ε,z)∈ B}. Then, for any Borel subset A ⊆ Z,P'_x̃z(ℝ^d_z× X × A) =∫_X P'_ε z|x(ℝ^d_z× A)dP_x=∫_X P'_z|x(A)dP_x=P'_z(A).Similarly, for any Borel subset B ⊆ℝ^d_z× X,P'_x̃z(B × Z) =∫_X P'_ε z|x(B_x × Z)dP_x = ∫_X P_ε|x(B_x)dP_x =P_x̃ (B).The last two calculations show that P_x̃ and P'_z are the x̃ and z marginals of P'_x̃z, respectively, and so it follows by definition thatT_U(P_x̃, P'_z) ≥∫_ X×ℝ^d_z× Z U(x̃, z)dP'_x̃z.Similarly, we let P'_ỹz|x achieve the optimal transportation between P_ỹ|x and P'_z|x and define P'_ỹz on Ỹ× Z by P'_ỹz =∫_X P'_ỹz|xdP_x; a similar argument to the above shows that the ỹ and z marginals of P'_ỹz are P_ỹ and P'_z, respectively. Therefore,T_-C(P_ỹ, P'_z) ≥∫_Ỹ× Z[ -C(ỹ,z) ] dP'_ỹz. Now, equations (<ref>) and (<ref>) yieldT_U(P_x̃, P'_z) +T_-C(P_ỹ, P'_z)≥ ∫_X̃× Z U(x̃, z)dP'_x̃z+∫_Ỹ× Z[ -C(ỹ, z) ] dP'_ỹz= ∫_X[∫_ℝ^d_z× Z U(x,ε, z)dP'_ε z|x]dP_x + ∫_X[∫_Ỹ× Z[ -C(ỹ, z) ] dP'_ỹz|x]dP_x= ∫_XT_U(P_ε |x, P'_z|x)dP_x + ∫_X T_-C(P_ỹ|x, P_z|x')dP_x> ∫_XT_U(P_ε |x, P_z|x)dP_x + ∫_X T_-C(P_ỹ|x, P_z|x)dP_x≥ ∫_X[∫_ℝ^d_z× Z U(x,ε, z)dP_ε z|x]dP_x + ∫_X[∫_Ỹ× Z[ -C(ỹ, z) ] dP_ỹz|x]dP_x= ∫_X̃× Z U(x̃, z)dP_x̃z + ∫_Ỹ× Z[ -C(ỹ, z) ] dP_ỹz= T_U(P_x̃, P_z) +T_-C(P_ỹ, P_z),where we have used (<ref>) in the fourth line above. This establishes the contradiction and completes the proof.Step 3: We now show that for P_x almost all x, P_z| x is the unique solution of Program (<ref>). This step follows the method of proof of Proposition 4 in <cit.>. We know from Step 2 that P_z| x is a solution to (<ref>). Suppose ν∈Δ(Z) also solves (<ref>). Let μ_ε z| x and μ_ỹz| x achieve T_U(P_ε| x,ν) and T_-C(P_ỹ| x,ν).
The latter exist by Theorem 1.3 page 19 of <cit.>. We therefore have∫_ℝ^d_z× ZU(x,ε,z)dμ_ε z| x(ε,z| x)+∫_Ỹ× Z[-C(ỹ,z)]dμ_ỹz| x(ỹ,z| x)=T_U(P_ε| x,ν)+T_-C(P_ỹ| x,ν)=(<ref>).Let φ_x^U denote the U-conjugate of a function φ on Z, as defined by φ_x^U(ε):=sup_z∈ Z{U(x,ε,z)-φ(z)}. Similarly, let (-φ)^-C be the (-C)-conjugate of (-φ), defined as (-φ)^-C(ỹ):=sup_z∈ Z{-C(ỹ,z)+φ(z)}. By definition, for any function φ on Z and any (ε,ỹ,z) in ℝ^d_z×Ỹ× Z, we haveφ_x^U(ε)+φ(z)≥ U(x,ε,z),[-φ]^-C(ỹ)-φ(z)≥ -C(ỹ,z).By Assumption <ref>, the price function p of the hedonic equilibrium satisfiesp_x^U(ε)+p(z)= U(x,ε,z), P_ε z| x almost surely, and[-p]^-C(ỹ)-p(z)= -C(ỹ,z), P_ỹz| x almost surely.Therefore, p achieves the minimum in the programinf_φ{∫_ℝ^d_z× Z[φ_x^U(ε)+φ(z)]dP_ε z| x(ε, z| x)+∫_Ỹ× Z [(-φ)^-C(ỹ)-φ(z)]dP_ỹz| x(ỹ,z| x) }= inf_φ{∫_ℝ^d_zφ_x^U(ε)dP_ε| x(ε| x)+∫_Zφ(z)dP_z| x(z| x)+∫_Ỹ (-φ)^-C(ỹ)dP_ỹ| x(ỹ| x)-∫_Zφ(z)dP_z| x(z| x) }= inf_φ{∫_ℝ^d_zφ_x^U(ε)dP_ε| x(ε| x)+∫_Ỹ (-φ)^-C(ỹ)dP_ỹ| x(ỹ| x) }.Since p achieves (<ref>), we have(<ref>) = ∫_ℝ^d_zp_x^U(ε)dP_ε| x(ε| x)+∫_Ỹ (-p)^-C(ỹ)dP_ỹ| x(ỹ| x) = ∫_ℝ^d_z× Z[p_x^U(ε)+p(z)]dμ_ε z| x(ε, z| x)+∫_Ỹ× Z [(-p)^-C(ỹ)-p(z)]dμ_ỹz| x(ỹ,z| x),since μ_ε z| x∈ℳ(P_ε| x,ν) and μ_ỹz| x∈ℳ(P_ỹ| x,ν) by construction. By the strong duality result in Theorem 3 of <cit.>, we have (<ref>)=(<ref>). Hence∫_ℝ^d_z× ZU(x,ε,z)dμ_ε z| x(ε,z| x)+∫_Ỹ× Z[-C(ỹ,z)]dμ_ỹz| x(ỹ,z| x)= ∫_ℝ^d_z× Z[p_x^U(ε)+p(z)]dμ_ε z| x(ε, z| x)+∫_Ỹ× Z [(-p)^-C(ỹ)-p(z)]dμ_ỹz| x(ỹ,z| x),which, given the inequalities (<ref>), impliesp_x^U(ε)+p(z)= U(x,ε,z), μ_ε z| x almost surely, and[-p]^-C(ỹ)-p(z)= -C(ỹ,z), μ_ỹz| x almost surely.We therefore have p_x^U(ε)+p(z)= U(x,ε,z) both P_ε z| x and μ_ε z| x almost surely. The U-conjugate p_x^U of p is locally Lipschitz by Lemma C.1 of <cit.>, hence differentiable P_ε|x almost everywhere by Rademacher's Theorem (see for instance <cit.>, Theorem 10.8(ii)). We can therefore apply the Envelope Theorem to obtain the equation below.∇ p_x^U(ε)=∇_ε U(x,ε,z)=∇_εζ(x,ε,z), P_ε z|x and μ_ε z|x almost surely.By Assumption <ref>(B), ∇_εζ(x,ε,z) is injective as a function of z. Therefore, for each ε, there is a unique z that satisfies (<ref>). This defines a map T:ℝ^d_z→ Z such that z=T(ε), both P_ε z|x and μ_ε z|x almost everywhere. Since the projections of P_ε z|x and μ_ε z|x with respect to ε are the same, namely P_ε|x, we therefore haveP_ε z|x=(id,T)#P_ε|x=μ_ε z|x.The two probability distributions P_ε z|x and μ_ε z|x being equal, they must also share the same projection with respect to z, and ν=P_z|x as a result.Armed with the results in Steps 1 to 3, we are ready to prove Lemma <ref>. Fix x∈ X such that P_z|x is the unique solution to Program (<ref>). Let P_ỹ^N be a sequence of discrete probability distributions with N points of support on Ỹ⊆ℝ^d_ỹ converging weakly to P_ỹ|x. The set of probability distributions on the compact set Z is compact relative to the topology of weak convergence. By Assumption <ref>, U and C are continuous; hence, by Theorem 5.20 of <cit.>, the functional ν↦ T_U(P_ε| x,ν) +T_-C(P_ỹ^N,ν) is continuous with respect to the topology of weak convergence. Program max_ν∈Δ(Z)[ T_U(P_ε| x,ν) +T_-C(P_ỹ^N,ν)], therefore, has a solution, which we denote P_z^N.We first show that P_z^N converges weakly to P_z|x. For any probability measure ν on Z, we haveT_U(P_ε| x,ν) +T_-C(P_ỹ^N,ν) ≤ T_U(P_ε| x,P_z^N) +T_-C(P_ỹ^N,P_z^N).Since Z is compact, Δ(Z) is compact with respect to weak convergence. Hence, we can extract from (P_z^N) a convergent subsequence, which we also denote (P_z^N), as is customary, and we call the limit P̅.
By the stability of optimal transport (<cit.>, Theorem 5.20), we have T_-C(P_ỹ^N,ν)→ T_-C(P_ỹ|x,ν), T_U(P_ε| x,P_z^N) → T_U(P_ε| x,P̅) and T_-C(P_ỹ^N, P̅) → T_-C(P_ỹ|x, P̅), and so passing to the limit in inequality (<ref>) yields T_U(P_ε| x,ν) +T_-C(P_ỹ|x,ν) ≤ T_U(P_ε| x,P̅) +T_-C(P_ỹ|x , P̅).As this holds for any probability distribution ν on ℝ^d_z, it implies that P̅ is optimal in Program (<ref>). By the uniqueness proved in Step 3, we then have P̅ = P_z|x, as desired.We are now ready to complete the proof of Lemma <ref>. Combining Steps 1 to 3 above, we know that P_z^N is the marginal with respect to z of a hedonic equilibrium distribution γ^N on ℝ^d_z× Z×{ỹ_1,…,ỹ_N}, with consumer and producer distributions P_ε| x and P_ỹ^N, respectively. By Step 1 in the proof of Theorem 1(1) in <cit.> applied to this hedonic equilibrium with producer type distribution P_ỹ^N, for each N, there is an optimal map F_N pushing P_z^N forward to P_ε| x, i.e., P_ε| x=F_N#P_z^N. For i=1,2,…,N, we define the subsets S_i^N ⊆ Z byS_i ^N={ z∈ Z: z∈arg max_w(U(x,F_N(z), w) - C(ỹ_i, w)) };note that S_i^N is the set of quality vectors that are produced by producer type ỹ_i. Since P_ỹ^N has finite support {ỹ_1,…,ỹ_N}, P_z^N almost all z belong to some S_i^N, i=1,…,N. We then setE^N_i ={ z∈ Z: z ∈ S_i^N and z ∉ S_j^N for all j <i},for each i=1,…,N, with the convention S_0^N=∅. The E^N_i are disjoint, and E^N=∪_i=1^N E^N_i has full P_z^N measure, P_z^N(E^N)=1.On each E^N_i, F_N coincides with a map G_i, which satisfies∇_zU(x,G_i(z),z) -∇_zC(ỹ_i, z)=0, or equivalently∇_zU̅(x,z) + ∇_zζ(x,G_i(z),z) -∇_zC(ỹ_i, z)=0.By Assumption <ref>(3) and the Implicit Function Theorem, G_i is differentiable and we have[D^2_zzU̅(x,z) + D^2_zzζ(x,G_i(z),z) -D^2_zzC(ỹ_i, z)] +D^2_zεζ(x,G_i(z),z)DG_i(z)=0,so thatDG_i(z) =-[D^2_zεζ(x,G_i(z),z)]^-1[D^2_zzU̅(x,z) + D^2_zzζ(x,G_i(z),z) -D^2_zzC(ỹ_i, z)].Therefore, ‖ DG_i(z)‖≤M_0M_1:=C by Assumption <ref>. Now, this implies that G_i is Lipschitz with constant C, and therefore F_N restricted to E_i^N is also Lipschitz with constant C.Now, for any Borel A ⊂ℝ^d_z, we can write A ∩ E^N =∪_i=1^N(A∩ E^N_i). Therefore,P_z^N(A) =P_z^N(A∩ E^N ) ≤P_z^N(F_N^-1(F_N(A∩ E^N )))= P_ε| x( F_N(A∩ E^N)) =P_ε| x(F_N(∪_i=1^N(A∩ E^N_i))) = P_ε| x(∪_i=1^NF_N(A∩ E^N_i) ). Denote by |A| the Lebesgue measure of a set A in the rest of this proof. We now show the absolute continuity of P_z|x by contradiction. Assume not; then there is a set A with |A|=0 but δ=P_z|x(A) > 0. We can choose open neighbourhoods A ⊆ A_k with |A_k| ≤1/k; by weak convergence of P_z^N to P_z|x, we havelim inf_N →∞ P_z^N(A_k) ≥ P_z|x(A_k) ≥δ,and so for sufficiently large N, we have P_z^N(A_k) ≥δ/2. On the other hand, by the Lipschitz property of F_N on E_i^N, we have|F_N(A_k∩ E_i^N)|≤ C^d_z|A_k∩ E_i^N|,so that|∪_i=1^NF_N(A_k∩ E_i^N)|≤ C^d_z∑_i=1^N|A_k∩ E_i^N| =C^d_z|A_k ∩ E^N| ≤ C^d_z|A_k| ≤C^d_z/k.Now, P_ε| x is absolutely continuous, so that P_ε| x(∪_i=1^NF_N(A_k∩ E_i^N)) → 0 as k →∞ (because |∪_i=1^NF_N(A_k∩ E_i^N)| → 0 as k →∞). On the other hand, by (<ref>), P_ε| x(∪_i=1^NF_N(A_k∩ E_i^N)) ≥ P_z^N(A_k) ≥δ/2 for all k, which is a contradiction, completing the proof. □§.§ Smoothness of the endogenous price functionThe subdifferential ∂ψ(x_0) of a function ψ:ℝ^d→ℝ∪{+∞} at x_0∈ℝ^d is the set of vectors p∈ℝ^d, called subgradients, such that ψ(x)-ψ(x_0)≥ p'(x-x_0)+o(‖ x-x_0‖). Let ζ be a function on ℝ^d_ε×ℝ^d_z that is continuously differentiable and satisfies Assumption <ref>. The ζ-conjugate ψ^ζ of a function ψ:ℝ^d→ℝ∪{+∞} is ψ^ζ(ε)=sup_z{ζ(ε,z)-ψ(z)}.
Let ζ be a function on ℝ^d_ε×ℝ^d_z that is continuously differentiable and satisfies Assumption <ref>. The ζ-subdifferential ∂^ζψ(z) of a function ψ:ℝ^d→ℝ∪{+∞} at z∈ℝ^d is the set of vectors ε∈ℝ^d_ε such that ψ^ζ(ε)+ψ(z)=ζ(ε,z).A function ψ:ℝ^d→ℝ∪{+∞} is called locally semiconvex at x_0∈ℝ^d if there is a scalar λ>0 such that ψ(x)+λ‖ x‖^2 is convex on some open ball centered at x_0. Since the term λ‖ x‖^2 in the definition of local semiconvexity simply shifts the subdifferential by 2λ x, we can extend Theorem 25.6 in <cit.> to locally semiconvex functions and obtain the following lemma. Let ψ:ℝ^d→ℝ∪{+∞} be a locally semiconvex function, and suppose that q∈ℝ^d is an extremal point in the subdifferential ∂ψ(x_0) of ψ at x_0. Then there exists a sequence x_n converging to x_0 such that ψ is differentiable at each x_n and the gradient ∇ψ(x_n) converges to q. If we havelim_r↓0|B_r(x)∩ E|/|B_r(x)|=1,where |B| is the Lebesgue measure of B, B_r(x) is the open ball of radius r centered at x, and E⊆ℝ^d is a measurable set, then x is called a density point of E. Let x_0 be a density point of a measurable set E⊆ℝ^d and let f:E→ℝ be a measurable map. If there is a linear map A: ℝ^d→ℝ such that, for each η>0, x_0 is a density point of{ x∈ E:-η≤{f(x)-f(x_0)-A(x-x_0)}/‖ x-x_0‖≤η},then f is said to be approximately differentiable at x_0, and A is called the approximate gradient of f at x_0. The approximate gradient is uniquely defined, as shown below Definition 10.2 page 218 of <cit.>. Under Assumptions <ref>,  <ref>, <ref> and <ref>, the potential z↦ V(x,z) is approximately differentiable P_z| x almost everywhere.§.§.§ Proof of Lemma <ref> By Lemma <ref>, we know that for each x, V(x,z)=V^ζζ(x,z), P_z|x almost everywhere, and P_z|x is absolutely continuous with respect to Lebesgue measure, by Lemma <ref>. We will prove that V is therefore approximately differentiable P_z|x almost everywhere, with ∇_ap,zV(x,z)=∇_z V^ζζ(x,z), where ∇_ap,zV(x,z) denotes the approximate gradient of V(x,z) with respect to z. By Lemma <ref>, we have V=V^ζζ, P_z|x almost everywhere. Moreover, as a ζ-conjugate, V^ζζ is locally Lipschitz by Lemma C.1 of <cit.>, hence differentiable P_z|x almost everywhere by Rademacher's Theorem (see for instance <cit.>, Theorem 10.8(ii)). Hence, there exists a set S of full P_z|x measure such that, for each z_0 ∈ S, * V^ζζ(x,z) is differentiable with respect to z at z_0.* V(x,z_0)=V^ζζ(x,z_0).By Lebesgue's density theorem (still denoting the Lebesgue measure of B by |B|),lim_r↓ 0|S ∩ B_r(z)|/|B_r(z)| =1for Lebesgue almost every z∈ Z, hence also for P_z| x almost every z, by the absolute continuity of P_z|x. Since S has P_z| x measure 1, the set S̅ of density points of S,S̅ ={z ∈ S: lim_r↓ 0|S ∩ B_r(z)|/|B_r(z)| =1},therefore has P_z|x measure 1. Take any density point z_0 of S, i.e., z_0 ∈S̅. Fix η>0. Since z_0∈S̅⊆ S, V^ζζ is differentiable at z_0. Hence there is r>0 such that, for all z∈ B_r(z_0) (suppressing the fixed x in the notation),-η≤{V^ζζ(z)-V^ζζ(z_0)-∇_zV^ζζ(z_0)·(z-z_0)}/‖ z-z_0‖≤η.Since V^ζζ=V on S, for all z∈ B_r(z_0)∩ S, we have-η≤{V(z)-V(z_0)-∇_zV^ζζ(z_0)·(z-z_0)}/‖ z-z_0‖≤η.Therefore B_r(z_0)∩ S=B_r(z_0)∩S̃, whereS̃:={z∈ S: -η≤{V(z)-V(z_0)-∇_zV^ζζ(z_0)·(z-z_0)}/‖ z-z_0‖≤η}.As z_0 is a density point of S,lim_r↓ 0|S̃∩ B_r(z_0)|/|B_r(z_0)| =lim_r↓ 0|S ∩ B_r(z_0)|/|B_r(z_0)| =1,so that z_0 is also a density point of S̃, which means, by definition, that V is approximately differentiable at z_0, and that its approximate gradient is ∇_ap,z V(x,z_0)=∇_z V^ζζ(x,z_0). The latter is true for any z_0∈S̅, and S̅ has P_z| x measure 1.
Hence, V is approximately differentiable P_z| x almost everywhere. □
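As a closing illustration (our own sketch, not part of the original argument), the proofs above repeatedly manipulate values of Kantorovich problems such as T_S(μ,ν). In the fully discrete case this value is a linear program over couplings with fixed marginals, and can be computed directly; the marginals, the surplus function, and all variable names below are hypothetical.

import numpy as np
from scipy.optimize import linprog

# Discrete marginals mu (m points) and nu (n points), and a surplus matrix
# Surp[i, j] = S(x_i, y_j). The Kantorovich problem maximizes
# sum_{ij} Surp[i, j] * gamma[i, j] over couplings gamma whose row sums are
# mu and whose column sums are nu; linprog minimizes, so we negate Surp.
rng = np.random.default_rng(0)
m, n = 4, 5
mu = np.full(m, 1.0 / m)
nu = np.full(n, 1.0 / n)
x = rng.normal(size=m)
y = rng.normal(size=n)
Surp = np.outer(x, y)                      # supermodular surplus S(x, y) = x * y

# Marginal constraints: A_eq @ vec(gamma) = [mu, nu], with gamma[i, j]
# stored at position i * n + j.
A_eq = np.zeros((m + n, m * n))
for i in range(m):
    A_eq[i, i * n:(i + 1) * n] = 1.0       # row sums equal mu
for j in range(n):
    A_eq[m + j, j::n] = 1.0                # column sums equal nu
b_eq = np.concatenate([mu, nu])

res = linprog(c=-Surp.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
gamma = res.x.reshape(m, n)
print("optimal value T_S(mu, nu):", -res.fun)

With the supermodular surplus S(x,y)=xy, the optimal coupling concentrates on a monotone matching of the sorted atoms, a discrete analogue of the monotone (Twist-condition) structure exploited in the proofs.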
Interpretable High-Dimensional Inference Via Score Projection with an Application in Neuroimaging

Simon N. Vandekar, Philip T. Reiss, and Russell T. Shinohara*

December 30, 2023

Authors' Footnote: Simon N. Vandekar is Doctoral Candidate (<[email protected]>), Department of Biostatistics, Epidemiology, and Informatics, University of Pennsylvania, Philadelphia, PA 19104. Philip T. Reiss is Associate Professor (<[email protected]>), Department of Statistics, University of Haifa, Haifa, Israel. Russell T. Shinohara is Assistant Professor of Biostatistics (<[email protected]>), Department of Biostatistics, Epidemiology, and Informatics, University of Pennsylvania, Philadelphia, PA 19104. SNV was supported by NIMH grant T32MH065218-11; PTR was supported by NIMH grant R01MH095836 and Israel Science Foundation grant 1777/16; RTS was supported by NINDS grants R01NS085211 and R21NS093349 as well as NIMH grant R01MH112847. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. The authors thank Wei Pan and Haochang Shou for helpful discussions related to this work. Code to perform the analyses in this manuscript is provided at <https://bitbucket.org/simonvandekar/pst>. The data used in this manuscript are publicly available at <http://adni.loni.usc.edu/>. *Corresponding author

Abstract: In the fields of neuroimaging and genetics, a key goal is testing the association of a single outcome with a very high-dimensional imaging or genetic variable. Often, summary measures of the high-dimensional variable are created to sequentially test and localize the association with the outcome. In some cases, the results for summary measures are significant, but subsequent tests used to localize differences are underpowered and do not identify regions associated with the outcome. Here, we propose a generalization of Rao's score test based on projecting the score statistic onto a linear subspace of a high-dimensional parameter space. In addition, we provide methods to localize signal in the high-dimensional space by projecting the scores to the subspace where the score test was performed. This allows for inference in the high-dimensional space to be performed on the same degrees of freedom as the score test, effectively reducing the number of comparisons. Simulation results demonstrate that the test has competitive power relative to other commonly used tests. We illustrate the method by analyzing a subset of the Alzheimer's Disease Neuroimaging Initiative dataset. Results suggest cortical thinning of the frontal and temporal lobes may be a useful biological marker of Alzheimer's risk.

Keywords: posthoc inference, association test, neuroimaging

§ INTRODUCTION

In scientific fields where high-dimensional data are prominent, significant interest lies in testing the association of a single continuous or categorical outcome with a large number of predictors. A common approach used in neuroimaging is to perform sequential tests to reduce the number of hypothesis tests.
For example, it is common to first perform a test for the association of a phenotype with an imaging variable averaged across the entire brain. If the test rejects the null hypothesis of no association between brain and phenotype, then subsequent tests are conducted on regional averages of the data or on every voxel in the image, using multiplicity correction to address the number of tests performed. Often, location-specific results yield few or no significant findings due to reduced signal and the necessary adjustment for the large number of tests, even though the whole-brain average data show a significant association.In this paper, we propose a unified approach to test the association of an imaging or other high-dimensional predictor with an outcome and perform post hoc inference to localize signal. The framework is a modification of Rao's score test for models with a high-dimensional or infinite-dimensional parameter. The theory developed assumes the parameter being tested is defined on a compact space such as the brain. Though the approach is designed for hypothesis testing in neuroimaging, it is applicable to a wide range of scientific domains. The standard framework assumes a model where Y_i are iid observations from a density f(y; θ) and that the parameter θ = (α, β) ∈Θ⊂ℝ^m+p, where α∈ℝ^m is a nuisance parameter and β∈ℝ^p is the parameter of interest. We seek to test the hypothesis H_0: β = β_0. Define the score function U = U(θ) = n^-1∑_i=1^n∂log f(Y_i |θ) /∂β. Let θ_0 = (α, β_0) be the null value of the parameter, where α is the true value of the parameter. Let S = U( α̂, β_0 ) be the score function evaluated at the maximum likelihood estimate of α under the null hypothesis H_0. Under the null and the conditions described in Appendix <ref>, the covariance of S can be obtained from the Fisher information evaluated at the null parameter value, Ω(θ_0) = 𝔼{[(∂/∂θ) log f(Y_1 |θ)]^T [(∂/∂θ) log f(Y_1 |θ) ] |_θ_0}. The sum of scores (Sum) test originally discussed by <cit.> has been used in genetics and neuroimaging <cit.>. The Sum test is based on the statisticn(S^T ζ)^2/ζ^T Ω̂ζ,where ζ∈ℝ^p is a given vector of weights. The denominator is an estimate of the variance of S^T ζ, so that the statistic is asymptotically χ^2_1 <cit.>. The Sum test is locally most powerful <cit.>; however, the test has low power when there is a large number of variables that are not associated with the outcome <cit.>. This is due to the fact that adding variables unassociated with the outcome increases the variance of the statistic in the numerator of (<ref>) without increasing the expected value of the numerator.In the case of unknown weights, when p<n, <cit.> proposed maximizing the Sum test statistic with respect to the weights,max_ζ≠ 0 n(S^T ζ)^2/ζ^T Ω̂ζ = nS^TΩ̂^-1S.This statistic is asymptotically distributed as χ^2_p under the null. When n >p, (<ref>) is the usual score statistic; however, this statistic cannot be used for high-dimensional data because the estimate Ω̂ is noninvertible when p>n. For finite-dimensional parameters, our proposed test can be thought of as a generalization of Rao's test to the case where the estimate of the information matrix is noninvertible. When p>n, the test maximizes the statistic (<ref>) with respect to the vector ζ over a subspace, 𝕃, of ℝ^p.
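To make the contrast between the two statistics above concrete, the following minimal sketch (ours; the simulated design and all variable names are illustrative) computes the Sum statistic with prespecified weights and the maximized (score) statistic for a linear model with p < n under the global null.

import numpy as np
from scipy import stats

# Sum test (fixed weights zeta) versus Rao's score test (weights maximized
# over all of R^p) for a linear model with an intercept-only null fit.
rng = np.random.default_rng(1)
n, p = 200, 5
G = rng.normal(size=(n, p))
G = G - G.mean(axis=0)                    # center G so the intercept is projected out
y = rng.normal(size=n)                    # global null: y unrelated to G

S = G.T @ (y - y.mean()) / n              # scores under the intercept-only null
sigma2 = np.var(y)
Omega = sigma2 * (G.T @ G) / n            # estimated covariance of sqrt(n) * S

zeta = np.ones(p)                         # prespecified Sum-test weights
R_sum = n * (S @ zeta)**2 / (zeta @ Omega @ zeta)
R_score = n * S @ np.linalg.solve(Omega, S)

print("Sum test p-value:  ", stats.chi2.sf(R_sum, df=1))
print("score test p-value:", stats.chi2.sf(R_score, df=p))

The Sum test spends one degree of freedom along the fixed direction zeta, while the maximized statistic spends p degrees of freedom; the projected score test described next interpolates between the two by maximizing over an r-dimensional subspace.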
For this reason, we call the test a projected score test (PST). The procedure does not assume sparsity, but attempts to conserve power by reducing the dimension of the data and performing inference in the lower-dimensional space.In many cases, if a score test rejects H_0, then it is of primary interest to perform post hoc inference to identify nonzero parameters. In neuroimaging, this amounts to a high-dimensional testing problem where the association is tested at each location in the image. The standard approach is to perform a hypothesis test at each parameter location and use a multiplicity correction procedure. Such methods in neuroimaging that control the family-wise error rate (FWER) have relied on Gaussian random field theory <cit.>, but have recently been shown to have type 1 error rates far from the nominal level in real data due to assumption violations <cit.>.Recently, considerable research activity has focused on leveraging the dependence of the tests to control the false discovery rate (FDR) in high-dimensional settings <cit.>. <cit.> develop a procedure to control the FDR for spatial data as well as an approach for controlling the expected proportion of false clusters. <cit.> discuss estimation of the false discovery proportion (FDP) under dependence for normally distributed test statistics based on a factor approximation. In contrast, the PST post hoc inference procedure is performed by projecting the scores onto 𝕃 and controlling the FWER of the projected scores. Several recent studies have considered hypothesis tests for functional data, which is conceptually similar to our approach for an infinite-dimensional parameter. <cit.> propose inverting simultaneous confidence bands for the parameter of a functional predictor to test which locations of the image are associated with the outcome. <cit.> use a binary Markov random field model to compute the joint probability that the marginal parameter estimates are equal to zero. Our post hoc inference is most similar to <cit.>, as the contribution of the scores retains a marginal interpretation. Here, we derive the asymptotic null distribution of the PST statistic under some standard regularity conditions. For data that are measured on a compact space, such as brain images, we discuss theoretical assumptions sufficient for characterizing the behavior of the test as both n and p approach infinity. For a normal linear model, we show how the finite-sample distribution of the statistic can be calculated exactly for fixed n and p. To demonstrate how the test can be used in neuroimaging, we investigate the association of cortical thickness with mild cognitive impairment (MCI) in the Alzheimer's Disease Neuroimaging Initiative (ADNI) study, a data set where p=18,715 and n=628. The outer surface of the brain (cortex) represents a highly folded sheet in 3-dimensional space. The thickness of the cortex is known to be affected in individuals with psychopathology and neurological illness. MCI is a subtle pre-Alzheimer's disease decline in cognitive functioning.
There is significant clinical interest in finding biological markers of MCI in order to identify those at risk for developing Alzheimer's disease, as prevention strategies and therapeutics for early disease are increasingly common. In this data set, we seek to localize regions of the brain where cortical thinning provides additional information with regard to the diagnosis of MCI beyond what can be ascertained by neurocognitive scales alone.For the remainder of the manuscript, we denote matrices by uppercase italic letters (X), vectors by lowercase letters (x), and random vectors by uppercase roman letters (X). Hilbert spaces are denoted with blackboard letters (𝕏), and Greek letters denote model parameters. For the singular value decomposition (SVD) of any matrix, we will assume that the smallest dimensions of the matrices obtained are equal to the rank of the matrix X unless otherwise noted. →_L denotes convergence in law and →_P denotes convergence in probability.§ THE PROJECTED SCORE TESTIn Section <ref> and Appendix <ref> we define the PST statistic, give its asymptotic distribution, and lay out the theoretical framework. In Section <ref>, we detail conditions sufficient for studying asymptotics in p. We discuss maximization of the Sum test for normal linear models in Section <ref>.§.§ PST for finite-dimensional parametersWe assume the observed data are finite-dimensional representations that are generated from an underlying stochastic process. In Appendix <ref> we describe how to define the finite-dimensional likelihood from the infinite-dimensional likelihood. Here, we informally define the finite-dimensional likelihood; the interested reader can refer to Appendix <ref> for further details.Let 𝕍 be a nonempty compact subset of ℝ^3 and ℬ(𝕍) be the space of square integrable functions from 𝕍 to ℝ. 𝕍 represents the space on which data can be observed; in neuroimaging this space is the volume of the brain. The underlying stochastic processes are assumed to take values in ℬ(𝕍), but the observed finite-dimensional data are p-dimensional discretizations of the stochastic processes defined on 𝕍. Thus, the observed data can be described as iid observations Y_i = (Y_i^(1), Y_ip^(2)), for i=1,…, n, with Y_i^(1) taking values in ℝ^k, and Y_ip^(2)∈ℝ^p. Observations of Y_i consist of a vector of k variables that are the nonimaging covariates and the outcome variable, together with the observed finite-dimensional neuroimaging data. We denote the collection of data by Y = (Y_1, …, Y_n). We define a parameter space Θ = ℝ^m ×ℝ^p that includes a finite-dimensional nuisance parameter α∈ℝ^m and the p-dimensional discretized parameter of interest β_p ∈ℝ^p. Together these parameters describe the joint distribution of the imaging and nonimaging data. Denote the finite-dimensional likelihood by ℓ(θ_p; Y), where θ_p = (α, β_p) and Y are the discretized parameters and data, respectively.Define the score function U_np = ∂ℓ/∂β_p(θ_p; Y) and let S_np = U_np(α̂, β_p0)∈ℝ^p denote the score function evaluated at the maximum likelihood estimate (MLE) under the null hypothesis H_0: β_p = β_p0.Let the Fisher information for the full model beΩ_F(θ_p0) =𝔼_θ_p0({∂/∂θ_plog f(Y_1 |θ_p)}^T {∂/∂θ_plog f(Y_1 |θ_p)}|_θ_p0) =[Ω_α Ω_αβ; Ω_βα Ω_β ],where θ_p0 = (α, β_p0). Then the asymptotic variance of √(n)S_np under H_0 is Ω(θ_p0) = Ω_β - Ω_βαΩ^-1_αΩ_αβ. With the finite-parameter scores defined, we can define the PST.
Let P_𝕃 be the orthogonal projection matrix onto a linear space 𝕃⊂ℝ^p with r = dim(𝕃) < n-m. Let S_np be as defined in (<ref>) and let Ω̂ be the plug-in estimator of the covariance (<ref>) obtained from Ω̂_F = n^-1∑_i=1^n (∂log f(Y_i |θ_p)/∂θ_p^T)(∂log f(Y_i |θ_p)/∂θ_p) |_θ̂_p0,where θ̂_p0 = (α̂, β_p0) denotes the maximum likelihood estimate of the parameter vector under the null hypothesis H_0 : β = β_p0. Then the PST statistic with respect to 𝕃 is defined asR^𝕃 = max_ζ∈𝕃∖{0} n(ζ^T S_np)^2/ζ^T Ω̂(θ_p0) ζ= max_ζ∈ℝ^p∖{0} n(ζ^T P_𝕃S_np)^2/ζ^T P_𝕃Ω̂(θ_p0) P_𝕃ζ. The following theorem states that the asymptotic distribution (with respect to n) of the PST statistic can be found for any finite-dimensional likelihood based on independent observations, provided the same regularity conditions required for the convergence of the scores to a multivariate normal random variable hold. Assume all objects are as described in Definition <ref>. Let P_𝕃 = Q Q^T, where the columns of the p× r matrix Q are any orthonormal basis for 𝕃. Define V = V(θ_p0 ) = Q^T Ω(θ_p0) Q, and assume the estimate V̂ = Q^T Ω̂(θ_p0) Q is invertible and that the conditions given in Appendix <ref> are satisfied. Then, under the null, the rotated scores, denoted S^Q_np = Q^T S_np, satisfyn^1/2S^Q_np = n^1/2 Q^T S_np→_L S^Q_p ∼ N_r( 0 , V),the PST statistic isR^𝕃 = n (S_np^Q)^T V̂^-1S^Q_np,and R^𝕃→_Lχ^2_r as n→∞. Theorem <ref> requires that V̂ is nonsingular; however, in practice it is possible to ensure that Q is in the column space of Ω̂(θ_p0), so that V̂^-1 exists. The proof of Theorem <ref> is given in Appendix <ref>. We also demonstrate there that the result of Theorem <ref> does not depend on the choice of Q. We show how 𝕃 can be chosen for GLMs in Sections <ref> and <ref> and for imaging data in the analysis of the ADNI dataset in Section <ref>. §.§ The PST as p→∞ We will show that as p→∞ the PST statistic converges to an integral over a stochastic process. The rate at which p approaches infinity does not depend on the sample size. Here, we assume the data can take values on the Hilbert space 𝕐 =ℝ^k×ℬ(𝕍), where 𝕍 is a nonempty compact subset of ℝ^3 and ℬ(𝕍) is the space of square integrable functions from 𝕍 to ℝ. Let Y_i = (Y_i^(1), Y_i^(2)), for i=1,…, n, be iid with Y_i^(1) taking values in ℝ^k, and Y_i^(2) a stochastic process taking values in ℬ(𝕍). Realizations of Y_i are a vector of k variables that are the nonimaging covariates and the outcome variable, together with a function on 𝕍. We assume the parameter β∈ℬ(𝕍). The infinite-dimensional score function is defined in Appendix <ref> as the Fréchet derivative of the likelihood with respect to the parameter β, U_n = U_n(v) = ∂ℓ/∂β{(α, β(v)); Y(v)}, and the score is defined as the functionS_n = U_n{· ; (α̂, β_0) }∈ℬ(𝕍). Throughout we assume that the infinite-dimensional scores converge in law, i.e.,n^1/2S_n →_L S,where S is a mean zero Gaussian process. Theorem <ref> in Appendix <ref> gives conditions under which this convergence holds <cit.>. The following definition of the PST statistic for infinite-dimensional parameters is motivated by formula (<ref>). Let (q_1(v), …, q_r(v)) be an orthonormal basis for the linear subspace 𝕃⊂ℬ(𝕍) with respect to the L^2(ν) inner product, where ν is the Lebesgue measure, and r = dim(𝕃) < n-m. Assume the q_j are such thatν( { v : q_j is discontinuous at v} )=0for all j=1,…,r.
Define the column vector S^Q_n ∈ℝ^r, with jth element (S^Q_n)_j = ∫_𝕍 q_j(v) S_n(v)dv,and let V̂_n be the r × r covariance matrix with (j,k)th elementV̂_n^j,k = n^-1∑_i=1^n (∫_𝕍 q_j(v)[∂/∂βlog f{Y_i(v) |α̂, β_0(v)}]dv ) ×(∫_𝕍 q_k(v)[∂/∂βlog f{Y_i(v) |α̂, β_0(v)}]dv ),where ∂/∂βlog f{Y_i(v) |α̂, β_0(v)} denotes the Fréchet derivative evaluated at β_0. Let V_n = 𝔼V̂_n. Assume that V̂_n is invertible. Then the PST statistic with respect to 𝕃 is defined asR^𝕃 = n(S^Q_n)^T V̂_n^-1S^Q_n. While we have given a definition of the PST statistic in infinite dimensions, in practice this statistic is not estimable because it depends on functions that are only observed on a finite grid. The following theorem states that, as the resolution of the grid increases, the finite-parameter PST statistic converges to the PST statistic for the infinite-dimensional parameter. Moreover, as the sample size increases, the statistic converges to a statistic based on the Gaussian process (<ref>). The rate at which p increases does not depend on n. Let S_np be as defined in (<ref>). For objects as defined in Definition <ref>, let q_jp = (q_j(v_1p)ν(𝕍_1p), …, q_j(v_pp)ν(𝕍_pp) )^T, where v_jp and 𝕍_jp are as defined in Appendix <ref>. Let Q_p be the p × r matrix with jth column q_jp. Denote S^Q_np = Q^T_p S_np. Define the jth element of the vector S^Q ∈ℝ^r as(S^Q)_j = ∫_𝕍 q_j(v) S(v)dv.Assume the conditions for Theorems <ref> and <ref>, and that U_n and ∂/∂βlog f{Y_i(v) |α̂, β_0(v)} have continuous sample paths with respect to v (i.e., for almost every y∈𝕐^n and θ∈Θ, U_n(·; y, θ) is continuous). Let V = lim_n→∞ V_n. For p_1 > p_2, let 𝒱_p_1 be a refinement of 𝒱_p_2 such that lim_p→∞sup_kν(𝕍_kp) = 0. Then, as n,p →∞,n (S^Q_np)^T V̂_np^-1S^Q_np→_L (S^Q)^T V^-1S^Q. The proof is given in Appendix <ref>. §.§ The PST in Normal Linear Models The finite-sample distribution of the PST statistic can be found exactly for a normal linear model. Define X = [x_1, …, x_n]^T to be an n× m full rank matrix of covariates for each observation, G̃ = [g_1, …, g_n]^T to be an n × p full rank matrix of predictor variables of interest with p>n, and Ỹ = [ Ỹ_1, …, Ỹ_n ]^T to be an n × 1 normal random vector with independent elements conditional on X and G̃.The Sum test with normal error is based on the model Ỹ_i = α^T x_i + β^T g_i + E_i, where all variables are as previously defined and the E_i ∼ N(0, σ^2) are independent.If we let A A^T =(I-H) be the SVD of the projection (I-H), where H = X(X^TX)^-1X^T, and define G = A^T G̃ and Y = A^TỸ, then under the null Y∼ N_(n-m)(0, σ^2 I).The Sum test statistic for H_0 : β = 0 is <cit.> R_Sum = n(S_np^T ζ)^2/ζ^T Ω̂ζ = (Y^T G ζ)^2/σ̂^2 ζ^T G^T G ζ, where ζ∈ℝ^p is a known vector of weights, S_np = n^-1 G^TY is the score vector evaluated at the MLEs under the null (𝔼(Ỹ_i) =α^Tx_i), and Ω̂= n^-1σ̂^2 G^T G is an estimate of the variance of S_np, which corresponds to using the estimator (<ref>).The PST statistic for this model with respect to a linear subspace 𝕃 of ℝ^p is, by Definition <ref>, R^𝕃 = max_ζ∈𝕃(Y^T G ζ)^2/σ̂^2 ζ^T G^TG ζ.The following theorem gives a closed-form expression for R^𝕃 and its null distribution.Define W to be the projection matrix onto the column space of G P_𝕃. ThenR^𝕃 = (n-m)Y^T W Y/Y^T Y,and under the null, R^𝕃 =_L r(n-m)/{r + (n-m -r) F_(n-m-r),r},where F_(n-m-r),r is F-distributed with (n-m-r) and r degrees of freedom.The proof can be found in Appendix <ref>.
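The closed form above is simple to evaluate numerically. The following sketch (ours; all inputs are simulated under the global null, and the subspace 𝕃 is a random r-dimensional subspace chosen purely for illustration) computes R^𝕃 and the exact p-value implied by its F-representation.

import numpy as np
from scipy import stats
from scipy.linalg import null_space

# Closed-form PST for the normal linear model: residualize the outcome and
# predictors against X, project onto col(G P_L), and use the F-based null law.
rng = np.random.default_rng(2)
n, m, p, r = 100, 3, 500, 10
X = np.column_stack([np.ones(n), rng.normal(size=(n, m - 1))])
G_tilde = rng.normal(size=(n, p))
y_tilde = X @ rng.normal(size=m) + rng.normal(size=n)   # global null in beta

A = null_space(X.T)                        # n x (n - m) orthonormal basis of col(X)^perp
Y = A.T @ y_tilde
G = A.T @ G_tilde

Q_basis = np.linalg.qr(rng.normal(size=(p, r)))[0]      # orthonormal basis for L
GP = G @ Q_basis                           # col(G P_L) = col(G Q)
W = GP @ np.linalg.pinv(GP)                # projection onto col(G P_L)
R = (n - m) * (Y @ W @ Y) / (Y @ Y)

# Null law: R = r(n-m) / {r + (n-m-r) F} with F ~ F_{(n-m-r), r}, so large R
# corresponds to small F and the p-value is the lower tail of F.
F_obs = ((n - m) / R - 1) * r / (n - m - r)
print("PST statistic:", R, "p-value:", stats.f.cdf(F_obs, n - m - r, r))

Because Y is rotation invariant under the null, repeating this simulation with any other r-dimensional 𝕃 yields the same null distribution, which is the content of the remark that follows.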
The form of equation (<ref>) shows that, for a normal linear model, the test statistic is a ratio of quadratic forms. Due to the rotation invariance of Y under the null, the finite-sample distribution of R^𝕃 depends only on the sample size and the dimension of the basis, but not on the particular choice of 𝕃.§ SPECIFYING THE LINEAR SUBSPACE 𝕃§.§ Specifying 𝕃 in generalized linear modelsHere, we discuss choices for the selection of 𝕃 in the context of GLMs with the canonical link function. We restrict attention to finite-dimensional parameters and forgo the subscripts on the finite-sample score vector S. Define X = [x_1, …, x_n]^T and G = [g_1, …, g_n]^T, where objects are as defined in Section <ref>. Assume the outcome Y = [ Y_1, …, Y_n ]^T is from an exponential family where the expectation can be writtenh{𝔼(Y_i)} = α^T x_i + β^T g_i,where h is the canonical link function. For the GLM with canonical link, the scores are <cit.> S = n^-1 G^T ( Y - μ̂),where μ̂ = [ μ̂_1, …, μ̂_n ]^T and μ̂_i = h^-1( x^T_i α̂) is the ith fitted value under the null.Let Γ be the n× n diagonal matrix with ith diagonal element Γ_ii = (Y_i - μ̂_i)^2. Then the estimate of the covariance (<ref>) obtained using (<ref>) isΩ̂= n^-1{ G^T Γ G - G^T Γ X ( X^T Γ X)^-1 X^TΓ G }.The score statistic is obtained from the scores and the estimated information as in expression (<ref>).In this setup, the basis for 𝕃 can be constructed from the principal component analysis (PCA) of G. We write the PCA of G in terms of the SVD G = T_* D Q^T, where the principal scores are T = T_*D = GQ.With this basis, the PST is equivalent to performing Rao's score test in a principal components regression model. To see this, first note that principal component regression is defined byh{𝔼(Y)} = X α + T β_T.The scores for β_T areS_T = n^-1 Q^T G^T ( Y - μ̂) = Q^T S,which are the same as the rotated scores in (<ref>). The information estimate is also equivalent. Thus the score test statistic, n S_T^TΩ̂_T^-1S_T, in principal component regression is equivalent to the PST statistic (<ref>). Another useful basis for 𝕃 may be constructed from vectors that are indicators of variables that are expected to have a similar relationship with the outcome. The anatomical basis used in Section <ref> is an example. To define the basis vectors q_j, j=1,…, r, we let 𝒬_j ⊂{1, …, p} be such that 𝒬_j ∩𝒬_j' = ∅ for j ≠ j', and then set the kth element of the jth basis vector to be q_kj = 1( k ∈𝒬_j). These define orthogonal basis vectors since the sets 𝒬_j are disjoint. This basis is equivalent to averaging r subsets of the p predictor variables and performing a hypothesis test of the regression onto the r averaged variables.The choice of the basis is a critical decision, as it affects the power and interpretation of the post hoc inference. To clarify, under the alternative the scores have nonzero mean 𝔼(S) = μ∈ℝ^p. If the projection is orthogonal to μ, then the test will have power equal to the type 1 error rate. The PCA basis assumes that μ has a spatial pattern similar to the covariance structure of the predictor variables. The anatomical basis assumes that all locations within a region have the same parameter value. We discuss the effect of the basis on the interpretation of the post hoc inference in Section <ref>.§.§ Choosing a dimension for the PCA basis In order to choose a dimension for the PCA basis, we propose an adaptive procedure that sequentially tests bases of increasing dimension while controlling the type 1 error rate.
To do this, we first condition on the parameter estimate α̂ for the reduced model and perform the SVD (Γ - Γ X (X^T Γ X)^-1 X^T Γ)^1/2G = T D Q^T. We use subsets of the columns of Q as the basis for 𝕃. For two columns q_j and q_k of Q with j ≠ k,Cov(q^T_j S, q_k^T S) = q^T_j G^T (Γ - Γ X (X^T Γ X)^-1 X^T Γ) G q_k = q^T_j Q D^2 Q^T q_k= 0.Thus the projected scores n^1/2 q^T_jS are asymptotically independent, because n^1/2Q^T S is asymptotically normal, and each can be tested by a separate chi-squared test at level α^*. If this is done sequentially for r=1,…,n-m, then, due to their asymptotic independence, the probability of a type 1 error under the global null is∑_r=1^n-mℙ{n(q^T_j S)^2/(q_j^TΩ̂ q_j) >χ^2_1(α^*) for all j≤ r}≈∑_r=1^n-m (α^*)^r≤∑_r=1^∞ (α^*)^r = 1/(1-α^*) - 1,where χ^2_1(α^*) denotes the 1-α^* quantile of a chi-squared distribution. The approximate equality is due to the asymptotic approximation, and the final inequality is the geometric series solution. In order to control the type 1 error at level α, we choose α^*= 1-(1+α)^-1. Then we sequentially test r=1,…,(n-m) until we fail to reject a test at level α^*. Note that the power depends critically on the first test in the sequence; subsequent tests serve only to increase the dimension of the basis. If the first component is orthogonal to μ in (<ref>), the probability of reaching later components that are not orthogonal to μ is less than α^*.A potentially more robust procedure is to test chunks of PCs by varying r = {r_1 = 0, r_2, r_3, …, r_k = n-m} and, for the jth test, performing a chi-squared test of all PCs r_j+1, …, r_j+1 on r_j+1 - r_j degrees of freedom. So long as the tests are independent, which they are under the global null, the rejection threshold α^* will control the type 1 error rate at level α. This adaptive PCA (aPCA) procedure is implemented below by testing the first 5 components together and sequentially testing components 6,…,n-m one at a time. We demonstrate the procedure in the ADNI data analysis below, and type 1 error rates are assessed in Section <ref>.§ POST HOC INFERENCE FOR LOCALIZING SIGNALAfter performing the test of association using the PST statistic, it is of primary interest to investigate the contribution of the scores to the statistic in order to identify which locations in the image are associated with the outcome and the direction of the effect. This can be done by projecting the scores onto 𝕃 and performing inference that controls the FWER for the projected scores. Because the projected scores are distributed in a subspace of ℝ^p, inference is much less conservative compared to performing inference on the original score vector.Our aim is to construct a rejection region for each element of the projected score vector (P_𝕃S)_j, for j=1, …, p. Under the null,P_𝕃S∼ N(0, P_𝕃Ω P_𝕃).The diagonal elements of P_𝕃Ω P_𝕃 are not equal, so defining a single rejection threshold for all elements favors rejection for elements with larger variances. To resolve this issue, we scale by the inverse of the standard deviation of the projected scores. Let Δ be the diagonal matrix with jth diagonal element Δ_jj = 1/√((P_𝕃Ω P_𝕃)_jj). Then the rejection threshold that controls the FWER for the standardized projected scores is defined by the c that satisfies1-ℙ{| (Δ P_𝕃S)_j | > c for any j}= ℙ{max_j| (Δ P_𝕃S)_j | < c} = 1 - α. Thus, the distribution of the infinity norm of Δ P_𝕃S can be used to compute a rejection threshold for the standardized projected scores that controls the FWER.
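This max-norm calibration, which is made precise in the definition that follows, is easy to simulate. The sketch below is ours; Q and V are random stand-ins for the basis and covariance that would be estimated from a fitted null model, and B, p, and r are arbitrary illustrative choices.

import numpy as np

# Monte Carlo sketch of the FWER threshold for the standardized projected
# scores: simulate |Delta P_L S|_inf via Delta Q V^{1/2} Z with Z ~ N(0, I).
rng = np.random.default_rng(3)
p, r, B, alpha = 500, 10, 10000, 0.05
Q = np.linalg.qr(rng.normal(size=(p, r)))[0]          # orthonormal basis for L
V = np.cov(rng.normal(size=(r, 5 * r)))               # placeholder estimate of V
V_half = np.linalg.cholesky(V)

# sd of (P_L S)_j is sqrt{(Q V Q')_jj}; Delta is its inverse.
proj_sd = np.sqrt(np.einsum('ij,jk,ik->i', Q, V, Q))
sims = Q @ (V_half @ rng.normal(size=(r, B)))         # B draws of P_L S (p x B)
max_norms = np.abs(sims / proj_sd[:, None]).max(axis=0)
c = np.quantile(max_norms, 1 - alpha)                 # FWER rejection threshold
print("estimated threshold c:", c)

An adjusted p-value for an observed standardized projected score s_j would then be the proportion of simulated max-norms exceeding |s_j|, matching the estimator F̂_B described below.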
The rejection threshold that controls the FWER is the c∈ℝ^+ such thatℙ( ‖Δ P_𝕃S‖_∞ < c) = 1 - α.This defines the region where the probability that any element of the standardized projected score vector exceeds c is equal to α under the global null H_0: β = β_0. We reject the null hypothesis at location j if the observed projected score satisfies | (Δ P_𝕃 s)_j| > c. This threshold corresponds to a single-step “maxT” joint multiple testing procedure <cit.> and satisfies the assumption of subset pivotality, so it controls the FWER at the nominal level in the case that some projected scores have nonzero mean <cit.> (see Appendix <ref>).By (<ref>) we haveΔ P_𝕃S→_LΔ Q V^1/2Z,where Z∼ N_r(0, I). Thus we can approximate the region in (<ref>) by finding c so that∫_‖Δ Q V^1/2 z ‖_∞≤ cϕ_r(z) dz = 1 - α,where ϕ_r denotes the PDF of Z. In practice, we approximate this threshold by plugging in estimates for Δ and V^1/2. This integral is difficult to calculate due to the large dimensions of Q, but can be approximated quickly and easily using Monte Carlo simulations. B simulations are used to estimate the CDF of the infinity norm, F̂_B(·), which we use to obtain p-values for each observed standardized projected score, (Δ P_𝕃 s)_j, by evaluating p_j = 1- F̂_B{|(Δ P_𝕃 s)_j|};alternatively, a rejection threshold can be obtained by usingc = F̂_B^-1(1-α).The p-value for a given element of the standardized projected score vector is the probability of observing a projected score as large as |(Δ P_𝕃 s)_j| under the global null H_0: β = β_0. The standard deviation of the Monte Carlo estimate (<ref>) decreases at a √(B) rate and depends only on the volume of the space being integrated, so the procedure will perform well for computing adjusted p-values with a small error <cit.>. For example, because the volume of the space being integrated is 1, with 10,000 simulations the standard deviation is on the order of B^-1/2 = 0.01. Rejection of the null hypothesis H_0: β = β_0 is not strictly necessary to proceed with the post hoc inference procedure; the post hoc procedure can be used separately from the PST. In addition, it is important to note that the post hoc inference can have improved power by interpreting the projected scores. When the alternative hypothesis is true, the rejection regions for the projected scores do not necessarily control the type 1 error for the unprojected scores. This is demonstrated in the imaging simulations in Section <ref>.As mentioned above, the basis affects the interpretation of the inference on the projected scores. For the PCA basis, the interpretation is as follows: over repeated experiments, if the data are projected onto 𝕃, then the probability of falsely rejecting one or more scores j with (Δ P_𝕃μ)_j = 0 is at most α, where μ is as defined in (<ref>).§ ADNI NEUROIMAGING DATA ANALYSISWe obtained data from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database (adni.loni.usc.edu). The ADNI was launched in 2003 as a public-private partnership, led by Principal Investigator Michael W. Weiner, MD. The ADNI is a longitudinal observational study designed to investigate the early biomarkers of Alzheimer's disease; detailed MRI methods are given by <cit.>. Mild cognitive impairment (MCI) represents a subtle pre-Alzheimer's disease decline in cognitive performance. The goal of our analysis is to identify whether a subset of the neuroimaging data from the ADNI can provide more information regarding the diagnosis of MCI than the standardized memory tests obtained as part of the study.
Moreover, we are interested in localizing areas of the cortex that differ between healthy controls (HC) and individuals with MCI. Three-dimensional T1-weighted structural images for 229 healthy controls and 399 subjects with MCI were obtained as part of the ADNI. This sample consists of subjects who had images and a composite memory score available at baseline.

We perform the analysis in two ways: first, we proceed with standard analysis methods currently available for neuroimaging data in open-access software <cit.>; second, we use the PST statistic and the high-dimensional inference procedure described above. Cortical thickness was estimated using Freesurfer <cit.>. Subjects' thickness data were registered to a standard template for analysis and smoothed at 10 mm FWHM to reduce noise related to preprocessing and registration. The template contains 18,715 vertex locations where cortical thickness is measured for each subject. Our goal is to identify whether the 18,715 cortical thickness measurements provide any additional information regarding the diagnosis of the individuals. For all analyses we include age, sex, and the composite memory score as covariates <cit.>.

§.§ Standard Neuroimaging Analysis Procedure: Average and Vertex-wise Testing

Because neuroimaging studies typically collect many types of images with many covariates and possible outcomes, it is common to obtain a summary measure of a high-dimensional variable, and then proceed with further analysis if the summary measure appears to be associated with an endpoint of interest. In this analysis we first take the average of all the cortical thickness measurements across the cortical surface for each subject and perform a regression with diagnosis as the outcome using logistic regression. Specifically, let C_i denote the average cortical thickness measurement for subject i, and X_i denote a vector with an intercept term, age, an indicator for sex, and the composite memory score for subject i. Then we fit the model

logit{ℙ(Y_i=1 | C_i, X_i)} = X_i^T α + C_i β_C.

If there is a significant relationship with the average cortical thickness measurements, i.e., if we reject H_0: β_C = 0, then we proceed by performing mass-univariate vertex-wise analyses by running a separate model at each point on the cortical surface.

The analysis using the average cortical thickness variable suggests a highly significant association of cortical thickness with diagnosis, indicating that subjects with thinner cortices are more likely to have MCI (Table <ref>). Based on these results we choose to investigate the relationship at each vertex to localize where in the cortex the association occurs. For the vertex-wise analyses, we use the software package Freesurfer to perform Benjamini-Hochberg (BH) correction separately across each hemisphere (Figure <ref> A). The spatial extent of the FDR-corrected results is more limited than what we might expect given the very strong association between diagnosis and average cortical thickness. Uncorrected exploratory analyses were conducted to further identify regions related to the whole-brain results (Figure <ref> B). The most significant results occur in the left and right frontal lobes. These analyses suggest that thinning in larger portions of the frontal and temporal lobes is associated with increased risk of MCI; however, these results are not found using a method that guarantees control of the FWER or FDR.
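For transparency, the average-thickness step above amounts to a single logistic regression. A sketch using statsmodels is below; the data-frame column names are hypothetical stand-ins for the ADNI variables.

```python
import statsmodels.api as sm

def fit_average_thickness_model(df):
    """Fit logit P(Y=1 | C, X) = X' alpha + C beta_C, where C is the mean
    cortical thickness and X contains intercept, age, sex, and memory score.
    Column names ('mci', 'age', 'sex', 'memory', 'mean_thickness') are assumed."""
    design = sm.add_constant(df[["age", "sex", "memory", "mean_thickness"]])
    model = sm.Logit(df["mci"], design)
    return model.fit(disp=False)
```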
§.§ PST and High-Dimensional Inference Procedures

To use the PST procedure we perform the following steps:
* Select a subspace 𝕃.
* Perform the PST for the association between the image and diagnosis.
* If the test in step 2 rejects, then perform post hoc inference as in Section <ref>.

We select a basis for 𝕃 in the two ways described in Section <ref>. For this analysis we use the aPCA procedure described in Section <ref> to choose the best PCA basis by testing the first 5 components together and sequentially testing components 6,…,n-m. We also present results for the PCA basis fixed at several other dimensions (r=10,20,50) to demonstrate how the basis affects the results of the analysis. In addition we consider a basis constructed from the r=148 regions (74 per hemisphere) of the anatomical atlas of <cit.>. If we were unwilling to condition on the covariance structure of the scores or the anatomical atlas, a basis could be constructed that approximates a predetermined covariance structure (e.g., a spatial AR(1)), or a covariance structure estimated from an independent sample could be used to construct the PCA basis. In addition to the PST we perform the sequence kernel association test (SKAT) <cit.>, the sum of powered scores (SPU) test using the infinity norm, which corresponds to testing the max across the scores <cit.>, and the adaptive sum of powered scores test (aSPU), which has competitive power to many other score tests <cit.>. The SKAT is known to be more powerful if there is a distributed signal, and the SPU infinity norm will be most powerful for a sparse signal. The aSPU test combines multiple tests based on the norms ‖·‖_γ^γ for γ varying over a finite subset of ℕ by choosing the one with the smallest p-value. Permutation testing is used to assess the significance of these statistics.

The aPCA procedure selected r = 7 by testing the first 5 components jointly and then sequentially testing the next two PCs. With this basis we reject the null hypothesis using the PST (Table <ref>), indicating that there is an association between the image and diagnosis conditioning on the effects of age, sex, and composite memory score. The test rejects at the α=0.05 threshold irrespective of which basis is used. The SKAT, SPU, and aSPU tests also reject the null. Given the results of the PST we are then interested in investigating how the scores contribute to the significant test statistic. To investigate the contributions of the scores to the PST statistic we perform post hoc inference on the projected scores. We use 10,000 simulations to obtain rejection regions for each of the basis dimensions. The simulations ran for all bases in less than 2 minutes. Results suggest that thinner cortex in bilateral temporal and frontal lobes and right precuneus is associated with an increased risk of MCI (Figure <ref> C & D). Results are given as -log_10(p), where p is obtained using the simulated distribution (<ref>). These locations are known to be thinner in AD versus HC as well as in AD versus MCI <cit.>, and the results here demonstrate that there are significant differences between MCI and HC in the same regions. The results indicate that the degree of frontal and temporal lobe thinning is correlated with diagnostic severity, and suggest that measurements of cortical thickness may provide useful information over neuropsychological scales in identifying people at risk for AD.
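For reference, the PST in step 2 reduces to the quadratic form n S^T Q V̂^-1 Q^T S derived in Appendix <ref>, referred to a chi-squared distribution on r degrees of freedom. A minimal sketch, with the score vector and covariance estimate assumed precomputed and the function name ours:

```python
import numpy as np
from scipy.stats import chi2

def pst_test(S, Q, Omega_hat, n):
    """PST statistic n S' Q V^{-1} Q' S with V = Q' Omega_hat Q,
    compared to a chi-squared distribution on r = Q.shape[1] df."""
    V = Q.T @ Omega_hat @ Q
    u = Q.T @ S
    stat = float(n * u @ np.linalg.solve(V, u))
    return stat, chi2.sf(stat, df=Q.shape[1])
```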
Group differences in these frontal and temporal regions between MCI and HC were previously shown by <cit.>; however, the authors did not control for multiple comparisons or adjust for covariates. To reiterate, the blue areas in Figure <ref> C & D are based on low-rank inference and control the FWER of the projected scores. The procedure has improved power over the standard correction methods seen in Figure <ref> A & B by performing inference in a lower-dimensional space. The p-values obtained in Figures <ref> and <ref> use (<ref>) and indicate the probability of observing a projected score statistic as extreme under the global null H_0: β = 0. Though interpretation is restricted to the projected scores, the results align strongly with previous reports <cit.>. To demonstrate the impact of the choice of r, we performed post hoc inference on the scores for 4 different PCA bases (Figure <ref>). It is clear from Figure <ref> that increasing the dimension of the basis increases the spatial specificity of the results. However, the larger bases also come with the cost of reduced power due to the larger degrees of freedom of the basis. This is also illustrated in Table <ref>, where the larger bases have a higher rejection threshold.

§ NEUROIMAGING-BASED SIMULATION STUDY

As a simulation study, we perform analyses using data generated for the right hemisphere of the cortical thickness data from the ADNI dataset, measured at p=9,361 locations called vertices. We simulate an artificial outcome of interest that is categorical, as in the ADNI analyses presented above. We select two anatomical regions (superior temporal sulcus and superior frontal sulcus) of 669 vertices total to have a negative association with the outcome and one region (anterior part of the cingulate gyrus and sulcus) of 191 vertices to have a positive association. The first two of these regions were selected because of their association in the ADNI data set. The third region was selected to compare the performance of the tests when there are different locations with positive and negative associations with the outcome. To create a mean and covariance structure similar to real data within the regions of association, we create the mean vectors and covariance matrices for the simulations from the full sample of subjects used in the ADNI Freesurfer analysis above, yielding two full-rank covariance matrices, Σ_- and Σ_+, and mean vectors μ_+ and μ_-.

For each simulation, we select a random subset of size n without replacement from the subset of control subjects used in the ADNI neuroimaging analysis. Data within the negatively and positively associated regions are generated as independent multivariate normal draws for each subject, G_i,- ∼ N(μ_-, Σ_-) and G_i,+ ∼ N(μ_+, Σ_+), respectively. We centered the imaging data prior to analysis. In each simulation the outcome is generated under a logistic model

logit(𝔼 Y_i) = α_0 - β 1^T G_i,- + 2β 1^T G_i,+,

where α_0 is set to the log ratio of MCI to controls in the neuroimaging analyses section, 1 is a vector of ones, and β is an unknown parameter that we vary from 0 to 0.005. We multiply the values in the positive region by 2 to increase signal because it is a spatially smaller cluster than the two negative regions. In addition to simulations where the coefficients are constant across each region, in the Supplement we perform simulations generating the parameters from a uniform distribution. We construct the subspace 𝕃 in three ways, described next; a code sketch of the generating model and one basis construction follows their descriptions.
The first is to use the adaptive procedure (Section <ref>) in each sample, conditioning on the estimate α̂_0. The second basis type is constructed in each sample from the first r = 10, 20, 50 principal components of a PCA of G(I-H), where H is the projection onto the intercept. The third basis is constructed from regions of the anatomical atlas of <cit.>, by randomly grouping the 74 regions into r groups and using normed indicator vectors for each group as the basis.

We assess power for indices with nonzero mean and type 1 error for indices with zero mean. If we denote the set of indices with a nonzero association with the outcome by J, then the expectation of the score, μ_j, is nonzero only for j ∈ J, where μ is as defined in (<ref>). So, for indices with j ∉ J we report type 1 error and for indices with j ∈ J we report power. Similarly, the mean of the standardized projected scores, Δ P μ, determines type 1 error and power for the projected scores Δ P S. The FWER and FDR of the projected scores are reported for the basis constructed from the anatomical atlas and the PCA bases. In general, no element of the standardized projected mean is exactly zero, so type 1 error is assessed by thresholding the standardized projected parameter vector at the 0.2 quantile and reporting the rejection rate for vertices with projected parameter values below that threshold.

We perform 1000 simulations for sample sizes of n=100, 200 and compare the PST for the adaptive procedure and fixed bases with dimensions of r=10, 20, 50. In addition, we compare the PST to the sequence kernel association test (SKAT) <cit.>, the sum of powered scores (SPU) test using the infinity norm, and the adaptive sum of powered scores test (aSPU) <cit.>. We assess pointwise power and type 1 error of the PST inference with uncorrected, Bonferroni-corrected, and BH-corrected results. We also compare FWER and FDR between methods. For these comparisons we assess the type 1 error for the unprojected scores using inference designed for the projected scores.

The PST with fixed bases demonstrates superior power to the other tests (Figure <ref>), due to its ability to remove the influence of unassociated scores from the test by maximizing over the basis and by leveraging the spatial information in the data. If these features of the data were not informative then the PST would not perform well. The aPCA basis has better power than the other PCA bases because a low-rank basis is all that is required to capture the signal in the data. aSPU is adaptive to the sparsity of the signal, so it performs better than the SKAT, but it does not use the information in the covariance of the scores to leverage power. As expected, the post hoc inference procedure controls the FWER of the projected scores for all basis dimensions (Table <ref>). In general, the post hoc inference procedure does not control the FWER or FDR of the unprojected scores (Table <ref>), as the inference is intended for the projected scores. However, for larger PCA bases our procedure does control the FDR (bold rows in Table <ref>). This is likely because the projection captures most of the variation in μ, so that the projection Δ P μ is close to μ. Future investigation of whether inference for the projected scores will control any error rate for the unprojected score vector is warranted. The vertexwise error rate describes how effective a procedure is at controlling the error rate for the unprojected scores at each location.
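As referenced above, the outcome-generating model and the atlas-based basis construction are both short to implement. The sketch below assumes the simulated region data, region labels, and groupings are available in the stated shapes; the names are ours.

```python
import numpy as np

def simulate_outcome(G_neg, G_pos, alpha0, beta, rng):
    """Draw Y_i from logit(E Y_i) = alpha0 - beta 1'G_i,- + 2 beta 1'G_i,+.
    G_neg: (n, 669) and G_pos: (n, 191) multivariate normal draws."""
    eta = alpha0 - beta * G_neg.sum(axis=1) + 2.0 * beta * G_pos.sum(axis=1)
    return rng.binomial(1, 1.0 / (1.0 + np.exp(-eta)))

def atlas_basis(labels, groups):
    """Orthonormal basis of normed indicator vectors, one per group of atlas
    regions. labels: (p,) region label per vertex; groups: disjoint sets of
    region labels obtained by randomly grouping the 74 regions."""
    Q = np.zeros((len(labels), len(groups)))
    for j, g in enumerate(groups):
        idx = np.isin(labels, list(g))
        Q[idx, j] = 1.0 / np.sqrt(idx.sum())  # unit-norm indicator column
    return Q  # columns are orthonormal because the groups are disjoint
```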
The vertexwise error rate of the PST inference procedure for the unprojected scores using the PCA 10 basis is low while maintaining better vertexwise power than BH (Figure <ref>; PCA 10). This is because in any given sample there may be a high false positive rate, but the errors across samples do not appear in the same locations. The BH and Bonferroni corrections both work well at controlling the vertexwise type 1 error rate but have lower power compared to the PCA-based PST procedure (Figure <ref>). The bases constructed from the anatomical atlas tend to have large regions of vertexwise type 1 error for the unprojected scores. At the largest basis dimension the atlas allows for enough specificity to reduce the vertexwise error. All methods have lower power to detect the positive cluster than the two negative clusters. This is possibly due to the characteristics of the covariance structure in the positive cluster, which overlaps gyral and sulcal regions.

§ DISCUSSION

We have proposed the PST, which maximizes the weights for the Sum statistic in a subspace of the parameter space. The procedure offers novel post hoc inference on the projected scores by performing inference in the subspace where the test statistic was estimated. Because the post hoc inference is based on the same model and degrees of freedom as the PST statistic, the interpretation of the high-dimensional results agrees closely with the results from the PST.

Instead of choosing a specific value for the weight vector ζ, as in the sum test, our methodology allows the investigator to select a space to consider for ζ. The ability to choose a space makes the procedure very flexible. For example, in imaging the basis for the space can be chosen based on anatomical or functional labels, or from data acquired in another imaging modality. Particular hypotheses can be targeted by selecting a basis that includes indicators of certain regions or weights particular locations to target specific spatial patterns. If orthogonal indicator vectors are used as the basis, then the approach can be seen as testing averages of subregions of the data, as in Section <ref>. In this case, the PST procedure can be seen as a maxT multiple testing procedure that accounts for the correlation structure of the tests.

There are several limitations of the proposed procedure. First, the success of the procedure depends critically on the projection chosen. If a projection is chosen that is orthogonal to the mean vector then the PST will fail to capture any signal in the data. This is a limitation of any dimension-reducing procedure. Further research could investigate whether maximization of the score test with regularization can yield a test statistic whose distribution is tractable. Regularization may remove the subjectivity of selecting a basis and make the procedure more robust. Second, while the dimension reduction procedure preserves power and the results align closely with those from previous research, the inference does not guarantee control of the FWER or FDR of the original score vector. Future research will investigate how inference on the original score vector can be made by thresholding the projected score vector. This is similar in concept to the dependence-adjusted procedure discussed by <cit.> for controlling the FDP and may offer increased power by leveraging the covariance of the test statistics.
These limitations notwithstanding, our procedure generalizes Rao's score test to the high- and infinite-dimensional settings and introduces a new inference approach based on projecting the test statistics to a lower-dimensional space where inference can be made on fewer degrees of freedom.

§ APPENDIX A

§.§ Theoretical framework

We assume the observed data are finite-dimensional representations that are generated from an underlying stochastic process. To be more specific, define the Hilbert space 𝕐 = ℝ^k × ℬ(𝕍), where 𝕍 is a nonempty compact subset of ℝ^3 and ℬ(𝕍) is the space of square-integrable functions from 𝕍 to ℝ. 𝕍 represents the space on which data can be observed; in neuroimaging this space is the volume of the brain. Let (𝕐, 𝒴, ℙ) be a probability space where 𝒴 is the Borel σ-algebra on 𝕐. Let Y_i = (Y_i^(1), Y_i^(2)), for i=1,…, n, be iid with Y_i^(1) taking values in ℝ^k, Y_i^(2) a stochastic process taking values in ℬ(𝕍), and ℙ(Y_i ∈ A) = ℙ(A) for all A ∈ 𝒴. Observations of Y_i are bounded functions together with a vector of k variables that are nonimaging covariates and the outcome variable. We denote the collection of data by Y = (Y_1, …, Y_n). Although Y represents the underlying data, in practice the Y_i^(2) are unobservable and we only observe discretized data at a finite number of locations that are voxels in the image.

We define a parameter space Θ = Α × Β that includes a finite-dimensional parameter α ∈ Α ⊂ ℝ^m and an infinite-dimensional parameter β ∈ Β. Together these parameters describe the joint distribution of the imaging and nonimaging data. Throughout, we assume Β = ℬ(𝕍), so that the infinite-dimensional parameter and infinite-dimensional data are defined on the same space, but this assumption is not required. The distribution of the observed data will be defined by a p-dimensional discretization of the infinite-dimensional parameter. We will prove that under a few assumptions, as n, p → ∞ the test statistic for the discretized data approaches the statistic for the infinite-dimensional parameter.

To relate the unobserved data Y to the parameters, we further assume that the measure ℙ is in a set of probability models, { F_θ : θ ∈ Θ }, indexed by the parameter θ = (α, β). That is, there exists a regular point θ_0 ∈ Θ such that for all sets A ∈ 𝒴,

ℙ(Y_i ∈ A) = F_θ_0(A).

We define the density function f_θ as the Radon-Nikodym derivative of F_θ with respect to the Lebesgue measure μ,

ℙ(Y_i ∈ A) = ∫_A dF_θ/dμ dμ = ∫_A f_θ dμ.

Let ℓ(θ; Y) = n^-1 ∑_i=1^n log f_θ(Y_i) be the log-likelihood function for θ.

In order to define the finite-dimensional data and parameter space we must partition 𝕍 into finitely many sets and define the observable random variables as a realization from the partitioned space. For any integer p, the space 𝕍 can be partitioned into p nonempty sets. Denote the partition 𝒱_p = {𝕍_1p, …, 𝕍_pp}. Note that, by the definition of a partition, ⋃_j=1^p 𝕍_jp = 𝕍 and 𝕍_jp ∩ 𝕍_kp = ∅ for j ≠ k. Let v_j be an arbitrary interior point of 𝕍_jp ∈ 𝒱_p. Let the discretized data be Y_ip = (Y_i^(1), Y_i^(2)(v_1), …, Y_i^(2)(v_p)) ∈ ℝ^k+p, whose distribution is determined by the finite parameter θ_p = (α, β(v_1), …, β(v_p)) ∈ ℝ^m+p. In order to define a finite-dimensional likelihood from the likelihood for the infinite-dimensional parameters, we define the function β_p ∈ ℬ(𝕍) by

β_p(v) = β(v_j) for v ∈ 𝕍_jp

and the stochastic process

Y^(2)_ip(v) = Y^(2)_i(v_j) for v ∈ 𝕍_jp.

As 𝒱_p is a partition, each v is in only one 𝕍_jp.
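To preview the score definitions that follow in one familiar special case: if the likelihood is that of a logistic regression with covariates X and discretized imaging data G, the score with respect to β at the null β_0 = 0 takes the standard GLM form. The sketch below is ours and assumes this particular model; the framework itself is not restricted to it.

```python
import numpy as np

def logistic_null_score(y, X, G, alpha_hat):
    """Score for beta at beta_0 = 0 in logit P(y=1) = X @ alpha + G @ beta,
    with alpha_hat the MLE under the null. Returns a p-vector (S_np)."""
    mu = 1.0 / (1.0 + np.exp(-(X @ alpha_hat)))  # fitted null probabilities
    return G.T @ (y - mu) / len(y)               # n^{-1} sum of per-subject scores
```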
The partition property allows us to define the log-likelihood from the function

ℓ(θ_p; Y_p) = n^-1 ∑_i=1^n log f_θ_p(Y_ip),

where f_θ_p(y_p) = f((y^(1), y_p^(2)); (α, β_p)). Finally, assuming ℓ is Fréchet differentiable with respect to β, we can define the scores

U_n = U_n(v) = ∂ℓ/∂β{(α, β(v)); Y(v)}
U_np = U_np(v) = ∂ℓ/∂β_p{(α, β_p(v)); Y_p(v)}.

Let

S_n = U_n(·; (α̂, β_0)) ∈ ℬ(𝕍)
S_np = U_np(α̂, β_p0) ∈ ℝ^p,

where β_0 denotes the value of the parameter under the null H_0: β = β_0 and α̂ is the maximum likelihood estimator for α under the null.

§.§ Conditions for Theorem <ref>

The conclusion of Theorem <ref> requires the asymptotic normality of the scores, which holds under the following conditions:
* The ability to interchange differentiation and expectation of the likelihood, so that 𝔼_θ_p0 U_np = ∂/∂β_p 𝔼_θ_p0 ℓ{(α, β_p(v)); Y_p(v)} = 0.
* The variance of the score (<ref>) is finite.
Asymptotic normality of the scores then follows by the multivariate central limit theorem <cit.>.

§.§ Proof of Theorem <ref>

Let ϕ = Q^T ζ. Then the PST statistic is

R^𝕃 = max_ζ∈ℝ^p∖{0} n (ζ^T P S_np)^2/ζ^T P Ω̂ P ζ = max_ϕ∈ℝ^r∖{0} n (ϕ^T Q^T S_np)^2/ϕ^T V̂ ϕ = n S_np^T Q V̂^-1 Q^T S_np,

where the last line follows from a standard maximization lemma <cit.>. Equation (<ref>) holds by the multivariate central limit theorem, and the variance estimate of n^1/2 Q^T S_np is

V̂(θ_0) = Q^T Ω̂(θ_0) Q = (Q^T Ω̂_β Q - Q^T Ω̂_βα Ω̂^-1_αα Ω̂_αβ Q),

which converges to V(θ_0) by the continuous mapping theorem because Ω̂_F →_P Ω_F. Thus n S_np^T Q V̂^-1 Q^T S_np →_L χ^2_r by the continuous mapping theorem.

The conclusion of Theorem <ref> implies that expression (<ref>) does not depend on the choice of Q. This fact can also be shown directly, as follows. Consider another matrix Q_* with orthonormal columns such that P = Q_* Q_*^T, and accordingly define V̂_* = Q_*^T Ω̂(θ_0) Q_*. Then Q_* = Q M where M = Q^T Q_*. Since P = Q M Q_*^T is of rank r, M is of rank r and hence invertible, so

Q_* V̂_*^-1 Q_*^T = Q M (M^T V̂ M)^-1 M^T Q^T = Q V̂^-1 Q^T,

and thus formula (<ref>) for the PST statistic is unchanged by substituting Q_*, V̂_* for Q, V̂.

§.§ Details for Section <ref>

For y = (y^(1), …, y^(k)) ∈ 𝔻_1 × … × 𝔻_k, where the 𝔻_j are Hilbert spaces, we define the norm ‖y‖ = sup_j ‖y^(j)‖. Then, following <cit.>, we define a derivative on the space Θ from Section <ref>. For Θ as defined in Section <ref>, a function f: Θ → ℝ is called Fréchet differentiable at θ if there exists a bounded linear operator A: Θ → ℝ such that

lim_‖h‖→0 ‖f(θ+h) - f(θ) - Ah‖/‖h‖ = 0, for h ∈ Θ.

The following theorem <cit.> gives conditions under which n^1/2 S_n →_L S, where S is a mean zero Gaussian process. The sequence of elements √(n) S_n converges in law to a mean zero Gaussian process S if and only if
* The sequence n^1/2(S_n(v_1), …, S_n(v_p)) converges in distribution in ℝ^p for every finite set of points v_1, …, v_p ∈ 𝕍.
* For every ε, η > 0 there exists a partition of 𝕍 into finitely many sets 𝕍_1, …, 𝕍_p such that

lim sup_n→∞ ℙ( sup_j sup_{v_1, v_2 ∈ 𝕍_j} |S_n(v_1) - S_n(v_2)| ≥ ε ) ≤ η.

Condition (a) of Theorem <ref> is satisfied under the assumptions in Appendix <ref>. Condition (b) implies that the process S is continuous in probability. We require this as an assumption for asymptotics in p.

§.§ Proof of Theorem <ref>

If we show

lim_n lim_p √(n) Q_p^T S_np =_L lim_p lim_n √(n) Q_p^T S_np =_L Q^T S,

where =_L denotes equality in distribution, and

V̂_np →_P V,

then the continuous mapping theorem implies (<ref>).
Expanding an arbitrary element of the vector on the left-hand side of (<ref>) gives

lim_n lim_p √(n) (Q_p^T S_np)_j = lim_n lim_p √(n) ∑_k=1^p q_j(v_kp) S_n(v_kp) ν(𝕍_kp) = lim_n √(n) ∫_𝕍 q_j(v) S_n(v) dν(v) =_D ∫_𝕍 q_j(v) S(v) dν(v) = (Q^T S)_j.

The second equality follows because the sum on the right-hand side converges to a Riemann integral, and our assumptions (<ref>) and that S_n(·; Y) is continuous for almost all Y in Theorem <ref> guarantee that q_j(v) S_n(v) is integrable. The final equality follows from the continuous mapping theorem since S_n →_L S. For the other order of limits,

lim_p lim_n √(n) ∑_k=1^p q_j(v_kp) S_n(v_kp) ν(𝕍_kp) =_L lim_p ∑_k=1^p q_j(v_kp) S(v_kp) ν(𝕍_kp) = ∫_𝕍 q_j(v) S(v) dν(v) = (Q^T S)_j,

where the first line applies the continuous mapping theorem to the finite-dimensional vector S_np and the continuity of S implies the integral exists. For both directions the limit on p requires (<ref>), so that the volume of all voxels goes to zero. Theorems <ref> and <ref> are needed to ensure that S_np →_L S_p and S_n →_L S. The proof of the convergence V̂_np →_P V is a similar argument and relies on the assumption that the sample paths of ∂/∂θ log f(Y_i; θ(v)) are continuous almost everywhere, so that the Riemann integral converges.

§.§ Proof of Theorem <ref>

We will ignore the term 1/σ̂^2 in the maximization as it is constant with respect to ζ. Write e for the residual vector under the null model and define 𝕄 = G𝕃 to be the column space of W. From the definition of R^𝕃,

R^𝕃 ∝ max_ζ∈ℝ^p (e^T G P ζ)^2/ζ^T P G^T G P ζ = max_ζ∈ℝ^p (e^T W G P ζ)^2/ζ^T P G^T W G P ζ = max_γ∈𝕄 (e^T W γ)^2/γ^T W γ = max_γ∈ℝ^p (e^T W γ)^2/γ^T W γ,

where γ = G P ζ. The restriction to γ ∈ 𝕄 is without loss because 𝕄 is the column space of both W and GP, and the last equality holds because W annihilates components outside 𝕄. The solution to the Rayleigh quotient (<ref>) is the solution to the largest generalized eigenvalue problem,

W e e^T W γ = λ_max W γ,

where λ_max ∈ ℝ and ‖γ‖ = 1. By letting ϕ = Wγ, the solution is equivalent to the largest eigenvalue problem

W e e^T ϕ = λ_max ϕ.

Then

eigmax(W e e^T) = eigmax(e^T W e) = e^T W e.

Thus, we have

R^𝕃 = (n-m) e^T W e / e^T e.

To derive (<ref>), note

e^T e / e^T W e = 1 + e^T (I-W) e / e^T W e.

The numerator and denominator of the random term on the right-hand side are independent since W(I-W) = 0. So (<ref>) is distributed as 1 + ((n-m-r)/r) F_(n-m-r),r. Thus

R^𝕃 =_L (n-m) / { 1 + ((n-m-r)/r) F_(n-m-r),r } = r(n-m) / { r + (n-m-r) F_(n-m-r),r }.

§.§ The post hoc inference procedure controls the FWER

The assumption of subset pivotality states that for any set I ⊂ {1,…, p} = H, the distribution of the maximum of the standardized projected scores (<ref>) over the set I, given that the indices in I are true nulls, is equal to the distribution of the maximum given that all hypotheses are true nulls <cit.>:

max_j∈I { |(Δ P S)_j| | I are null } =_L max_j∈I { |(Δ P S)_j| | H are null }.

By true null we mean that the expectation of the projected score is zero. This assumption allows us to construct rejection regions assuming all scores are true nulls, but still have strong control of the FWER, i.e., in the case that Δ P μ ≠ 0, where μ is defined in (<ref>). Subset pivotality is satisfied for normally distributed statistics because the covariance of the statistics is not affected by changing the mean structure. Thus, the maximum over a subset of true nulls is not affected by the value of the mean for the other statistics. So long as the covariance estimates for the true nulls are consistent we will maintain asymptotic control of the FWER.
In our post hoc inference procedure the variance estimates are consistent because

(Δ P Ω̂ P Δ)_jj →_P Var{(Δ P S)_j} + (Δ P μ)_j^2
(Δ P Ω̂ P Δ)_jk →_P 𝔼{ (Δ P S)_j (Δ P S)_k }.

Because (Pμ)_j = 0 for all true nulls, the variance and covariance estimates are consistent for all true nulls.

§ SUPPLEMENTARY MATERIAL

§.§ Simulation analyses

In addition to the simulations performed in Section <ref> we performed similar simulations in which the data were generated from the model

logit(𝔼 Y_i) = α_0 - β ω_1^T G_i,- + 2β ω_2^T G_i,+,

where ω_1j ∼ Unif(0.5, 1.5) and ω_2j ∼ Unif(1, 3). Table <ref> gives FWER and FDR for the simulation analyses with uniform coefficients. Figure <ref> gives power results for the additional simulations. Table <ref> gives FWER and FDR for the simulations where there is no association between image and outcome.
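Finally, as a numerical aside: the exact null law R^𝕃 =_L r(n-m)/{r + (n-m-r) F_(n-m-r),r} derived in the Appendix is easy to sample directly, which is convenient for checking implementations; a short sketch (ours):

```python
import numpy as np
from scipy.stats import f as f_dist

def r_null_draws(n, m, r, size=10_000, seed=0):
    """Draws from r(n-m) / (r + (n-m-r) F) with F ~ F_{(n-m-r), r},
    the exact null distribution of R^L in the linear-model case."""
    rng = np.random.default_rng(seed)
    F = f_dist.rvs(dfn=n - m - r, dfd=r, size=size, random_state=rng)
    return r * (n - m) / (r + (n - m - r) * F)
```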
Information processing features can detect behavioral regimes of dynamical systems

Rick Quax^1,*, Gregor Chliamovitch^2, Alexandre Dupuis^2, Jean-Luc Falcone^2, Bastien Chopard^2, Alfons G. Hoekstra^1,3, Peter M.A. Sloot^1,3,4

^1 Computational Science Lab, University of Amsterdam, The Netherlands
^2 Department of Computer Science, University of Geneva, Switzerland
^3 ITMO University, Saint Petersburg, Russia
^4 Complexity Institute, Nanyang Technological University, Singapore

* [email protected]

§ ABSTRACT

In dynamical systems, local interactions between dynamical units generate correlations which are stored and transmitted throughout the system, generating the macroscopic behavior. However, a framework to quantify and study this at the microscopic scale is missing. Here we propose an ‘information processing’ framework based on Shannon mutual information quantities between the initial and future states. We apply it to the 256 elementary cellular automata (ECA), which are the simplest possible dynamical systems exhibiting behaviors ranging from simple to complex. Our main finding for ECA is that only a few features are needed for full predictability and that the `information integration' (synergy) feature is always most predictive. Finally we apply the formalism to foreign exchange (FX) and interest-rate swap (IRS) time series data and find that the 2008 financial crisis marks a sudden and sustained regime shift (FX and EUR IRS) resembling tipping-point behavior. The USD IRS market exhibits instead a slow and steady progression which appears consistent with the hypothesis that this market is (part of) the driving force behind the crisis. Our work suggests that the proposed framework is a promising way of predicting emergent complex systemic behaviors in terms of the local information processing of units.

§ INTRODUCTION

Emergent, complex behavior can arise from the interactions among (simple) dynamical units. An example is the brain, whose complex behavior as a whole cannot be explained by the dynamics of a single neuron. In such a system, each dynamical unit receives input from other (upstream) units and then decides its next state, reflecting these correlated interactions. This new state is then used by (downstream) neighboring units to decide their new states, and so on, eventually generating a macroscopic behavior with systemic correlations. A quantitative framework is missing to fully understand how local interactions lead to complex emergent behaviors, or to predict whether a given system of local interactions will eventually generate complex systemic behavior or not.

Our hypothesis is that Shannon's information theory <cit.> can be used to construct such a framework. In this viewpoint, a unit's new state reflects its past interactions in the sense that it stores mutual information about the past states of upstream neighboring units. In the next time instant a downstream neighboring unit interacts with this state, implicitly transferring this information and integrating it together with other information into its new state, and so on. In effect, each interaction among dynamical units is interpreted as a Shannon communication channel, and we aim to trace the onward transmission and integration of information through this network of `communication channels'.

In this paper we characterize the information in a single unit's state at time t by enumerating its mutual information quantities with all possible sets of initial unit states (t=0).
Then we quantify `information processing' as the progression of a unit's vector of information quantities over time (see Methods). We first test whether this notion of information processing could be used to predict if local interactions will generate complex emergent behavior in the theoretical framework of elementary cellular automata (ECA). Next we also test if information processing could be used to detect a difference of systemic behavior in real financial time series data, namely the regimes before and after the 2008 crisis.

The study of `information processing' of dynamical systems is a young and growing research topic. As illustrative examples, Lizier et al. propose a framework to formulate dynamical systems in terms of distributed ‘local’ computation: information storage, transfer, and modification <cit.>, defined by individual terms of the Shannon mutual information sum (see Eq. <ref>). For cellular automata they provide evidence for the long-held conjecture that so-called particle collisions are the primary mechanism for locally modifying information, and for a networked variant they show that a phase transition is characterized by the shifting balance of local information storage over transfer <cit.>. A crucial difference with our work is that we operate in the ensemble setting, whereas Lizier et al. study a single realization of a dynamical system. Williams et al. trace how task-relevant information flows through a minimally cognitive agent’s neurons and environment to ultimately be combined into a categorization decision <cit.> or sensorimotor behavior <cit.>. Studying how local interactions lead to multi-scale systemic behavior is also a domain which benefits from information-theoretic approaches, such as by Bar-Yam et al. <cit.>, Quax et al. <cit.>, and Lindgren <cit.>. Finally, extending information theory itself to deal with complexity, multiple authors are concerned with decomposing a single information quantity into multiple constituents, such as synergistic information, including James et al. <cit.>, Williams and Beer <cit.>, Olbrich et al. <cit.>, Quax et al. <cit.>, Chliamovitch et al. <cit.>, and Griffith et al. <cit.>.

§ METHODS

§.§ Model of dynamical systems

In general we consider discrete-time, discrete-state Markov dynamics. Let X^t ≡ (X^t_1, X^t_2, …, X^t_N) denote the stochastic variable of the system state, defined as the sequence of N unit states at time t. Each unit chooses its new state locally according to the conditional probability distribution ℙ_i(X^t+1_i | X^t), encoding the microscopic system mechanics, where i identifies the unit. The state space of each unit is equal and denoted by the set Σ. We assume that the number of units, the system mechanics, and the state space remain unchanged over time. Finally we assume that all unit states are initialized identically and independently (i.i.d.), i.e., ℙ(X^0) = ∏_i=1^N ℙ(X^0_i). The latter ensures that all correlations in future system states are generated by the interacting units and are not an artifact of the initial conditions.

Elementary Cellular Automata. Specifically we focus on the set of 256 elementary cellular automata (ECA), which are the simplest discrete spatio-temporal dynamical systems possible <cit.>. Each unit has two possible states and chooses its next state deterministically using the same transition rule as all other cells. The next state of a cell deterministically depends only on its own previous state and that of its two nearest neighbors, forming a line network of interactions.
That is,

ℙ_r(X^t+1_i | X^t) = ℙ_r(X^t+1_i | X^t_i-1, X^t_i, X^t_i+1).

There are 256 possible transition rules and they are numbered 0 through 255, denoted r ∈ 0..255. As initial state we take the fully random state so that no correlations exist already at t=0, i.e., ℙ_r(X^t=0_i) = 1/2 for all r and all i. The evolution of each cellular automaton is fully deterministic for a given rule, implying that the conditional probabilities in Eq. <ref> can only be either 0 or 1. This is not a necessary condition for our framework.

§.§ Quantifying the information processing in a dynamical model

Basics of information theory. We interpret each interaction ℙ_i(X^t+1_i | X^t) as a set of Shannon communication channels, where each channel communicates information from a subset of X^t to X^t+1_i. In general, a communication channel A → B between two stochastic variables is defined by the one-way interaction ℙ(B | A) and is characterized by the amount of information about the state A which transfers to the state B due to this interaction. The average amount of information stored in the sender's state A is determined by its marginal probability distribution ℙ(A), which is known as its Shannon entropy:

H(A) = - ∑_a ℙ(A = a) log_2 ℙ(A = a).

After a perfect, noiseless transmission, the information at the receiver B will share exactly H(A) bits with the information stored at the sender A. After a failed transmission the receiver shares zero information with the sender, and for noisy transmission their mutual information is somewhere in between. This is quantified by the so-called mutual information:

I(A:B) = H(A) - H(A | B) = ∑_a,b ℙ(A = a, B = b) log_2 [ ℙ(A = a, B = b) / (ℙ(A = a) ℙ(B = b)) ].

The conditional variant H(A | B) obeys the chain rule H(A,B) = H(B) + H(A | B) and is written explicitly as

H(A | B) = - ∑_b ℙ(B = b) ∑_a ℙ(A = a | B = b) log_2 ℙ(A = a | B = b).

This denotes the remaining entropy (uncertainty) of A given that the value for B is observed. For intuition it is easily verified that the case of statistical independence, i.e., ℙ(B | A) = ℙ(B), leads to H(A | B) = H(A), which makes I(A:B) = 0, meaning that B contains zero information about A. At the other extreme, B = A would make H(A | B) = 0 so that I(A:B) = H(A), meaning that B contains the maximal amount of information needed to determine a unique value of A.

Characterizing the information stored in a unit's state. First we characterize the information stored in a unit's state at time step t, denoted i⃗(X^t_i), as the ordered sequence of mutual information quantities with all possible sets of unit states at time t=0, i.e., i⃗(X^t_i) ≡ (I(X^t_i : s) : s ∈ 2^X^0). Here 2^X^0 is the power-set notation for all subsets of initial cell state variables. We will refer to i⃗(X^t_i) as the sequence of information features of unit i at time t. In particular we highlight the following three types of information features. The `memory' of unit i at time t is defined as the feature I(X^t_i : X^0_i) ∈ i⃗(X^t_i), i.e., the amount of information that the unit retains about its own initial state. The `transfer' of information is defined as non-local mutual information such as I(X^t_i : X^0_j) ∈ i⃗(X^t_i) (i ≠ j). Non-local mutual information must be due to interactions because the initial states are independent (all pairs of units have zero mutual information). Finally we define the `integration' of information as the difference I(X^t_i : X^0) - ∑_j=1^N I(X^t_i : X^0_j). Information integration is not itself a member of i⃗(X^t_i) but it is fully induced by i⃗(X^t_i) since each of its terms is in i⃗(X^t_i).
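For concreteness, both the rule lookup and the exact t=1 information quantities can be computed by enumerating the eight neighborhoods, since the initial states are i.i.d. uniform. The sketch below is ours (illustrative names; plain Python with numpy); the final assertions anticipate the rule-105 example discussed further below.

```python
import numpy as np
from itertools import product

def rule_table(r):
    """Wolfram rule r in 0..255 as a map (left, center, right) -> next state."""
    return {(a, b, c): (r >> ((a << 2) | (b << 1) | c)) & 1
            for a, b, c in product((0, 1), repeat=3)}

def mutual_information(joint):
    """I(A:B) in bits from a joint probability table {(a, b): p}."""
    pa, pb = {}, {}
    for (a, b), p in joint.items():
        pa[a] = pa.get(a, 0.0) + p
        pb[b] = pb.get(b, 0.0) + p
    return sum(p * np.log2(p / (pa[a] * pb[b]))
               for (a, b), p in joint.items() if p > 0)

def first_step_features(r):
    """I(X^1_i : X^0_{i-1}, X^0_i, X^0_{i+1}) and I(X^1_i : X^0_i) for rule r,
    with i.i.d. uniform initial states, by exact enumeration of the 8 inputs."""
    joint_all, joint_center = {}, {}
    for (a, b, c), x1 in rule_table(r).items():
        joint_all[((a, b, c), x1)] = joint_all.get(((a, b, c), x1), 0.0) + 1 / 8
        joint_center[(b, x1)] = joint_center.get((b, x1), 0.0) + 1 / 8
    return mutual_information(joint_all), mutual_information(joint_center)

# Rule 105 stores the negated parity of its three inputs: the full transfer
# is 1 bit while the information about any single predecessor is 0 bits.
assert np.isclose(first_step_features(105)[0], 1.0)
assert np.isclose(first_step_features(105)[1], 0.0)
```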
Because integration is fully induced by i⃗(X^t_i), we will treat integration features as separate features in our results analysis but we do not add them to i⃗(X^t_i).

§.§ Predicting the class of dynamical behavior using information processing features

Behavioral class of a rule. Wolfram observed empirically that each rule r ∈ 0,...,255 tends to evolve from a random initial state to one of only four different classes of dynamical behavior <cit.>. These de facto established behavioral classes are:
* Homogeneous (all cells end up in the same state);
* Periodic (a small cycle of repeating patterns);
* Chaotic (pseudo-random patterns); and
* Complex (locally stable behavior and long-range interactions among patterns).
These classes are conventionally numbered 1 through 4, respectively. We obtained the class number for all 256 rules from Wolfram Alpha <cit.> and denote it C_r ∈ 1,...,4.

Predictive power of the information processing features. At time step t we numerically compute the sequence of information features

I⃗^t_r ≡ (i⃗(X^t=0_i), …, i⃗(X^t_i))

for rule r ∈ {0, 1, …, 255} to characterize its information processing up to time t. Then we formalize the prediction problem by the conditional probabilities ℙ(C_r | I⃗^t_r), treating r as an unobservable, uniformly random stochastic variable. That is, given only the sequence of information features of a rule, what is the probability that the ECA will eventually exhibit behavior of a given class? We can interpret this problem as a communication channel I⃗^t_r → C_r and quantify the predictive power of I⃗^t_r using the mutual information I(I⃗^t_r : C_r). The predictive power is thus zero in case the information features do not reduce the uncertainty about C_r, whereas it achieves its maximum value H(C_r) in case a sequence of information features I⃗^t_r always uniquely identifies the behavioral class C_r. We will normalize the predictive power as I(I⃗^t_r : C_r) / H(C_r).

Note that a normalized predictive power of, say, 0.75 does not mean that 75% of the rules can be correctly classified. Our definition yields merely a relative measure where 0 means zero predictive power, 1 means perfect prediction, and intermediate values are ordered such that a higher value implies that a more accurate classification algorithm could in principle be constructed. The benefit of our definition based on mutual information is that it does not depend on a specific classifier algorithm, i.e., it is model-free. Indeed, the use of mutual information as a predictor of classification accuracy has become the de facto standard in machine learning applications <cit.>.

Selecting the principal features. Some information features are more predictive than others for determining the behavioral class of a rule. Therefore we perform a feature selection process at each time t to find these 'principal features' as follows. First we extend the set of information features I⃗^t_r by the following set of `information integration' (also called `whole-minus-sum' (WMS) <cit.>) features:

S⃗^t_r ≡ { I(X^t'_i : s) - ∑_s_i ∈ s I(X^t'_i : s_i) : s ∈ 2^X^t=0, t' ∈ 0,...,t }.

Their concatenation makes the extended feature set:

F⃗^t_r ≡ (I⃗^t_r, S⃗^t_r).

The extended feature set F⃗ has no additional predictive power compared to I⃗, so for the prediction task F⃗ and I⃗ are equivalent. Namely, the integration features S⃗^t_r are completely redundant given I⃗^t_r since each of their constituent terms is a member of I⃗^t_r.
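Computing this normalized predictive power from the 256 (feature value, class) pairs reduces to empirical entropies, since the exact features take finitely many values. A sketch (ours; `feature` and `classes` are length-256 sequences):

```python
import numpy as np

def entropy_bits(labels):
    """Shannon entropy (bits) of the empirical distribution of `labels`."""
    _, counts = np.unique(np.asarray(labels, dtype=str), return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def predictive_power(feature, classes):
    """Normalized predictive power I(f : C) / H(C), treating the rule index
    as uniform over the 256 rules. `feature` must be discrete-valued."""
    H_f = entropy_bits(feature)
    H_C = entropy_bits(classes)
    H_fC = entropy_bits([f"{a}|{b}" for a, b in zip(feature, classes)])
    return (H_f + H_C - H_fC) / H_C
```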
The reason for nonetheless adding the integration features separately is that they have a clear meaning as `information integration' <cit.>, and we are interested in seeing whether this phenomenon plays a significant role in generating dynamical behaviors. To illustrate this meaning, consider the first time step of rule 105. At t=1 each cell computes its next state deterministically from its three predecessor states, transferring information from these states, and indeed I(X^1_i : X^0_i-1, X^0_i, X^0_i+1) = 1. However, if we try to determine from which predecessor cells this information originated we find ∀ j: I(X^1_i : X^0_j) = 0, i.e., none of the individual predecessor cells correlates with the cell's new state. This apparent paradox is resolved if we look more closely at the state transition table for this rule. Namely, the new state stores whether the sum of the predecessor states is even (0) or uneven (1), not whether one particular predecessor state is 0 or 1. This is a so-called `synergistic', higher-order relation which cannot be reduced to a sum of individual relations. Therefore we can say that the new unit state `integrates' three bits of information into one new bit of information. Other rules may partially integrate information or not at all. For our case of statistical independence among the input variables, the WMS functions precisely quantify this notion of information integration; in the general case it remains an open question how to quantify it.

We define the first principal feature f^t_1 as the feature maximizing its individual predictive power, quantified by a mutual information term as explained before, i.e.,

f^t_1 ≡ argmax_f_r ∈ F⃗^t_r I(f_r : C_r).

Here, r is treated as a uniformly random stochastic variable with ℙ(r) = 1/256, which in turn makes f_r and C_r stochastic variables. In words, f^t_1 is the single most predictive information feature about the behavioral class that will eventually be generated. More generally, the principal set of n features is identified in similar spirit, namely

f^t_n ≡ argmax_f⃗_r ∈ (F⃗^t_r)^n I(f⃗_r : C_r).

*Informational non-locality. C_r is a function of the long-term (large t) behavior of the system, after the transient phase X^0 ↦ X^1 ↦ … X^T from the i.i.d. initial state, so we can write loosely C_r = f(X^T, X^T+1, …, X^T+M). This means that the total predictive power I(I⃗^t_r : C_r) increases with time t (as t approaches T) but must converge at some t=t_c to a maximum value, which is at most H(C_r) in case C_r can be perfectly predicted from the information features. We refer to t_c as the `informational non-locality'. This number reflects how many steps of information processing are needed to optimally predict which class of behavior will be generated, starting from a random initial configuration. It is a function of both the set of dynamical systems under investigation (in our case the 256 ECA rules) as well as the criterion for distinguishing dynamical behaviors (in our case the long-term behavior classification by Wolfram). To illustrate the concept in the general case, t_c=1 would imply that all interaction network topologies in the set of dynamical systems lead to the same behavioral class. Namely, in the first time step each unit operates on i.i.d. input states regardless of how their interactions are placed. Therefore t_c=1 implies that the placement of interactions is completely irrelevant. In contrast, t_c=2 would imply that the local network topology plays a role, since different interaction networks generate different correlated states at t=1 on which the units operate at t=2.
Namely, in the second time step each unit operates on correlated states at distance 1 in the interaction network. In this case, changing a non-local network characteristic (such as the betweenness-centrality <cit.>) would not change the behavioral class that is generated, as long as the local network topological features (such as the transitivity coefficient <cit.>) are left invariant. The higher t_c, the larger the scale of the network features that play a role in generating the behavioral class. For the case of the 256 ECA the interaction network changes only locally, namely, implicitly by the choice of rule number: some rules ignore their left and/or right neighbor and thus effectively reduce or drop local pairwise (directed) interactions. In addition, the type of interactions can range from strictly one-to-one (e.g., copying the left neighbor's state) to strictly high-order `hyperedges' <cit.> (e.g., rule 105). However, globally the interactions in the 256 ECA invariably form a homogeneous linear network, i.e., no global network properties ever change while the local network properties are kept equal. This means that the network structure can already be learned by any cell by looking at its neighbors and how its new state is generated from the neighbor states, as this will tell which interactions are dropped and which interactions are one-to-one or higher-order. It is possible that the interaction structure changes in the presence of correlations among the neighbors, in which case one or a few extra time steps will be needed before the eventual network is learned. Therefore we intuitively expect the t_c for ECA to be rather small, on the order of 2 or 3, since after this many time steps the information feature values will (implicitly) capture all the network's variability. This hypothesis will be tested in the Results (Section <ref>).

*Information-based classification of rules. The fact that Wolfram's classification relies on the behavior exhibited by a particular initial configuration makes the complexity class of an automaton dependent on the initial condition. Moreover, there is no universal agreement regarding how “complexity” should be defined, and various alternatives to Wolfram's classification have been proposed, although Wolfram's remains by far the most popular. Our hypothesis is that the complexity of a system has very much to do with the way it processes information. Therefore we attempt to classify ECA rules using only their informational features.

We use a classification algorithm which takes as input the 256 vectors of information features and computes the Euclidean distance between these vectors. The two vectors nearest to each other are clustered together. Then the remaining nearest elements or clusters are clustered together. The distance between two clusters is defined as the distance between the most distant elements in each cluster. The result is a hierarchy of clusters with different distances which we visualize as a dendrogram.

§.§ Computing information processing features in foreign exchange time series

In the previous section we defined information processing features for the simplest (one-dimensional) model of discrete dynamical systems. In the second part of this paper we aim to investigate whether information features can distinguish “critical” regimes in the real complex dynamical system of the foreign exchange market.
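Before turning to the financial data, note that the rule-clustering scheme just described is standard complete-linkage agglomerative clustering; a minimal sketch using SciPy, with the feature matrix assumed precomputed:

```python
from scipy.cluster.hierarchy import linkage, dendrogram

def cluster_rules(features):
    """Complete-linkage hierarchy over the 256 rules: repeatedly merge the
    nearest elements/clusters, with cluster distance defined as the distance
    between the most distant members. features: (256, d) array."""
    return linkage(features, method="complete", metric="euclidean")

# Usage: Z = cluster_rules(F); dendrogram(Z)  # visualize the hierarchy
```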
Most importantly, we are interested in the behavior of the information features before, at, and after the start of the 2008 financial crisis, which is commonly taken to coincide with the bankruptcy of Lehman Brothers on September 15, 2008. We consider two types of time series datasets in which the dynamical variables can be interpreted to form a one-dimensional system, in order to stay as close as possible to the ECA modeling approach.

The information features can then be computed as discussed above, except that each mutual information term is now estimated directly from the data. This estimation is performed within a sliding window of length w up to time point T, which enables us to see how the information measures evolve over time T. For instance, the memory of variable X at time T will be measured as I(X^t : X^t+1), where the joint probability distribution ℙ(X^t, X^t+1) is estimated using only the data points X^T-w, …, X^T. Details regarding the estimation procedure are given in the following paragraph.

*Estimating information processing features from the data
The mutual information between two financial variables (time series) at time t is estimated using the k-nearest-neighbor algorithm with the typical setting k=3 <cit.>. This estimation is calculated using a sliding window of size w leading up to and including time point t, after first detrending each time series using a Gaussian smoothing kernel of width σ. Both parameters w and σ are evaluated for robustness. σ is selected to be about half the value for which pairs of time series become co-integrated at the 0.05 significance level. For calculating the integration measure we apply a correction which makes it distributed around zero. The reason is that the WMS measure of Eq. <ref> assumes independence among the stochastic state variables s_i, which for the real data is taken to be the previous day's data instead of the initial state. When this assumption is violated it can become strongly negative and, more importantly, co-integrated with the memory and transfer features, whose sum will then dominate the integration feature. We remedy this by rescaling the sum of the memory and transfer features which are subtracted in Eq. <ref> to equal the average value of the total information (the positive term in Eq. <ref>). This rejects the co-integration null-hypothesis between the total information and the subtracted term at the 0.05 significance level (p ≈ 0.002). This results in the integration feature being distributed around zero and being independent of the sum of the other two features, so that it may functionally be used as part of the feature space; however, the value itself should not be trusted as quantifying precisely the notion of `integration' or synergy.

A similar procedure to remove the co-integration between memory and transfer is not possible because it would amount to having to assign the common dynamics to one of the features and making the other features only residuals. This means that this arbitrary choice would have a determining and potentially misleading impact on the distribution of the time points in the feature space.

*Description of the foreign exchange data
The first data we consider are time series of five foreign exchange (FX) daily closing rates (EUR/USD, USD/JPY, JPY/GBP, GBP/CHF, CHF/EUR) for the period 1999-01-01 through 2017-04-21.[<http://www.global-view.com/forex-trading-tools/forex-history/>] Each currency pair has a causal dependence on its direct neighbors in the order listed because they share a common currency.
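To make the estimation concrete, here is a sliding-window sketch of the memory feature. It uses scikit-learn's k-nearest-neighbor mutual information estimator as a stand-in for the Kraskov-style estimator cited above, and the parameter defaults are illustrative rather than the values used in the analysis.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from sklearn.feature_selection import mutual_info_regression

def sliding_memory(x, w=250, sigma=10.0, k=3):
    """Sliding-window estimate of the memory feature I(X^t : X^{t+1}).
    x: 1-d series of daily rates; w: window length; sigma: width of the
    Gaussian detrending kernel; k: nearest-neighbor parameter."""
    x = np.asarray(x, dtype=float)
    resid = x - gaussian_filter1d(x, sigma)   # detrend before estimation
    out = np.full(len(x), np.nan)
    for T in range(w, len(x) - 1):
        a = resid[T - w:T]                    # X^t within the window
        b = resid[T - w + 1:T + 1]            # X^{t+1}
        out[T] = mutual_info_regression(a.reshape(-1, 1), b,
                                        n_neighbors=k)[0]
    return out
```

The transfer features are estimated analogously, with `b` replaced by the neighboring variable's next-day values.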
To illustrate this dependence: if the EUR/USD rate changes then USD/JPY will quickly adjust accordingly (among others), because the rate imbalance can be structurally exploited for profit. In turn, among others through the rate JPY/EUR (not observed in this dataset), the rate EUR/USD will also be adjusted due to profit-making imbalances, eventually leading to all neighboring rates returning to a balanced situation.

*Description of the interest-rate swap data
The second data are interest-rate swap (IRS) daily rates for the EUR and USD markets <cit.>. The data span over twelve years: the EUR data from 1998-01-12 to 2011-08-12 and the USD data from 1999-04-29 to 2011-06-06. The datasets consist of 14 and 15 times to maturity (durations), respectively, ranging from 1 year to 30 years. Rates for nearby maturities have a dependency because the higher maturity can be constructed from the lower maturity plus a delayed (`forward') short-term swap. This basic mechanism between maturities leads to generally monotonically upward `swap curves'.

§ RESULTS

§.§ Predicting the Wolfram class of ECA rules using information processing features

Information processing in the first time step. The information processing occurring in the first time step of each ECA rule r ∈ 0..255 is characterized by the corresponding feature set F⃗^t=1_r, consisting of 7 time-delayed mutual information quantities (I⃗^t=1_r) and 4 information integration quantities (S⃗^t=1_r). We show three selected features for all 256 rules as points in a vector space in Fig. <ref>, along with each rule's Wolfram class as a color code. It is apparent that the three features already partially separate the behavioral classes. Namely, it turns out that chaotic and complex rules tend to have high information integration, low information memory, and low information transfer. Fig. <ref> also relates intuitively to the classic categorization problem in machine learning; namely, perfect prediction would be equivalent to the existence of hyperplanes that separate all four behavior classes.

Predictive power of information processing features over time. The single most predictive information feature is information integration, as shown in Fig. <ref>. Its predictive power is about 0.37, where 1.0 would mean perfect prediction. The most predictive pair of features is formed by adding information transfer at 0.43, so adding the information transfer feature increases the predictive power by 0.06. The information transfer feature by itself actually has over three times this predictive power at 0.19, showing that two or more features can significantly overlap in how they characterize the behavioral class. The total predictive power of all information processing features at t=1 is 0.49, formed by 4 of the 11 possible information features.

For the second time step (Fig. <ref>) we again find that the most predictive information feature is information integration. An intriguing difference, however, is that it is now significantly more predictive at 0.90. This means that already at t=2 there is a single information characteristic of dynamical behavior (i.e., information integration) which explains the vast majority of the entropy H(C_r) of the behavioral class that will eventually be exhibited.
A second intriguing difference is that the maximum predictive power of 0.98 is now achieved using only 3 out of 57 possible information features, where 4 features were needed at t=1.

Finally, for t=3 we find that only 2 information features achieve the maximum possible predictive power of 1.0, i.e., the values of these two features uniquely identify the behavioral class. Firstly this confirms the apparent trend that fewer information features capture more of the relevant dynamical behavior as time t increases. Secondly we find again that information integration is the single most predictive feature. In addition, we find again that the best secondary feature is a particular combination of memory and longest-range transfers, as at t=2. Including the intermediate transfers actually only slightly convolutes the prediction: adding them at t=2 reduces predictive power by 0.028, whereas at t=3 only by 4·10^-15. At t=1 there are no intermediate transfers possible since there are only three predecessors of a cell's state, and apparently then it pays off to leave out memory (which would reduce power by 0.025 if added).

To validate that the predictive power values of the information features are indeed meaningful we also plot the expected `baseline' predictive power in each panel in Fig. <ref>, along with the 95% confidence interval. The baseline is formed by randomizing the pairing of information features with class identifiers, so it shows the expected predictive power of having the same number and frequencies of feature values but sampled with zero correlation with the classification. In effect this results in a statistical test with a null-hypothesis of the information features and Wolfram classification being uncorrelated. Since the predictive power of the information features is always above the baseline, we consider the meaningfulness of the information features validated, i.e., we reject the null-hypothesis at the 95% confidence level.

Informational non-locality. In the first time step each unit operates on random and independent states by our restriction of the initial conditions, so the system state X^t=1 is invariant under the network topology of the interaction. That is, regardless of whether the interaction network is a line graph or a binary tree graph, the stochastic variable X^t=1 has the same distribution and is thus insensitive to the interaction topology. Only the number of interactions (degree) per cell may have an effect at this stage, but for ECA it is invariably 2. We can thus say that the information features I⃗^t=1_r only characterize the local unit dynamics (i.e., their rule table), but do not yet take into account which unit interacts with which other units. The total predictability at t=1 of 0.49 thus suggests that the local unit dynamics alone already play a significant role in determining the behavioral class that will be generated, almost on equal footing with the network topology of the interactions among the units, which should then account for the remaining 0.51.

In the second time step each unit operates on possibly correlated states, induced by the interaction topology, where two correlated states are at most distance 2 away from each other. The system state X^t thus depends on highly local network characteristics, such as whether the two neighbors of a unit are also connected with each other or not. That is, a line graph and a sequence of triangles will still yield the same X^t, but a binary tree graph would yield a different X^t.
We find that the total predictability at t=2 equals 0.98, so adding this dependence on highly local network characteristics increases the predictability by 0.49. In the third time step and beyond we find that the total predictive power reaches 1.0, so we find that the `informational non-locality' equals t_c=2. This finite value means that system-size network characteristics, such as average betweenness centrality <cit.>, play no role in determining the behavioral class in the set of 256 ECA rules. Indeed this is to be expected since large-scale network characteristics do not change across the 256 ECA rules: globally they are all line graphs of infinite length. Only at the smallest scale do the interaction networks vary effectively, since some rules (partly) ignore their `left' neighbor's state, whereas others ignore their `right' neighbor's state, or both. This explains why only the network characteristics at the smallest scale play a role in determining the behavioral class of ECA rules, which is successfully recovered by our t_c measure. We expect that for more heterogeneous classes of dynamical systems, such as random boolean networks, the informational non-locality is (much) higher, but we leave this as a future study. Relation to Langton's parameter. Langton's λ parameter <cit.> is the most well-known single feature of an ECA rule which partially predicts the Wolfram class. It is a single scalar computed for rule r as λ_r = P(X^t=1_i = 1 | r). It is known that the λ parameter is more effective for a larger state space and a larger number of interactions; however, we briefly highlight it here because of its widespread familiarity and because the information processing measures can be written in terms of `generalized' λ parameters (see Supporting Material). This means that λ's relation with the Wolfram classification is captured within the information features, implying that the information features are at least as predictive as features based on the λ parameter(s). Indeed, the predictive power I(λ : C_r)/H(C_r) ≈ 0.175, which is significantly lower than that of the information integration feature alone, which achieves 0.361 at t=1. Moreover, as indicated by the black dots in the left panel of Fig. <ref>, the vast majority of information features have higher predictive power than 0.175 (only three single features have slightly lower power). Information processing-based clustering. In the previous sections we showed how information features predict Wolfram's behavioral classification. In this section we investigate the hierarchical clustering of rules induced by the information features in their own right. One important reason for studying this is the fundamental problem of undecidability of CA classifications based on long-term behavior characteristics <cit.>, such as the Wolfram classification. In the best case this makes the classification difficult to compute; in the worst case the classification is impossible to compute, leading to questions about its meaning. In contrast, if local information features correlate strongly with long-term emergent behaviors then a CA classification scheme based on information features is practically more feasible. In this section we visualize how the clustering overlaps with Wolfram's classification. Fig. <ref> (a) shows a clustering made using information features evaluated where t=0 is the randomized state. Interestingly, while these features have a low predictive power on the Wolfram class, the resulting clustering also does not overlap at all with the Wolfram classification.
We have to make an exception for rules 60, 90, 105 and 150, which are all chaotic rules and share the same information processing features. Fig. <ref> (b), on the other hand, displays the clustering for the case where features are not evaluated with respect to the randomized state but to the stationary distribution (i.e., X^t=0 is a randomly selected system state from the cyclic attractor). One reason for this is to ignore the initial transient phase; another reason is to make a step towards the financial application in the next subsection, which obviously does not start from random initial conditions. By `stationary' we mean that we simulate the CA dynamics long enough that the time-shifted information features no longer have a changing trend. Feature values can no longer be calculated exactly and are thus estimated numerically by sampling initial conditions, for system size N=15. In that case we find that the clustering has increased overlap with Wolfram's classification. In particular, we can note that uniform rules cluster together and that chaotic and complex rules are all on the same large spray. However, the agreement is far from perfect. For instance, the spray bearing chaotic and complex rules also bears periodic rules. Note also that rules 60, 90, 105 and 150 are indistinguishable when considered from the information processing viewpoint, even though they exhibit chaotic patterns that can be visually distinguished from each other. On the contrary, rules 106 and 154 are very close to each other and the patterns they exhibit indeed show some similarities, but the former is complex while the latter is periodic. Note that in this clustering scheme all but one of the rules converging to a uniform pattern are close to each other in the information-features space. The remaining one, rule 168, has a transient regime which is essentially dominated by a translation of the initial state. This translational regime can be found as well in rules 2 and 130, which are classified in the same sub-spray as rule 168. The similarity of any single information feature (information transfer in this case) can thus lead rules whose behavior differs in other respects to be classified similarly. §.§ Detecting a regime shift in foreign exchange time series Inspired by the results for the ECA models we study here whether a small set of information features is also capable of identifying different `regimes' of dynamical behavior of real systems using real data. We focus on financial data because it is of high quality and available in large quantities compared to, for instance, biological datasets. Also at least one large regime shift is known: the 2008 financial crisis, separating the pre-crisis and post-crisis regimes. We focus on two time series datasets: daily IRS rates of 14 and 15 maturities in the EUR and USD markets, and five consecutive daily FX closing exchange rates. We selected these datasets because the variables in each dataset can be considered to form a line graph similar to ECA rules, staying as close as possible to the previous analysis. In Figure <ref> we show the 3-dimensional `information feature space' with the same axes as Figure <ref>. We observe that the pre-crisis and post-crisis periods form two remarkably well-separated regions, connected by a fast transition trajectory.
Before this transition we also observe that, in the pre-crisis regime, the information features are not stationary but slowly progress along a path in the general direction of the post-crisis regime (high memory, high transfer, low integration). Interestingly, this closely resembles the dynamics observed for so-called tipping points <cit.>, where a system is slowly pushed `over the hill' after which it progresses quickly `downhill' to the next attractor state. This is relevant because slow progressions to tipping points offer a potential for developing early-warning signals <cit.>; however, we do not further explore this possibility here. §.§ Detecting a regime shift in interest-rate swap time series In Figure <ref> we show the same feature space for the IRS markets in EUR and USD. In EUR we similarly see a good separation of the pre-crisis and post-crisis periods into two distinct regimes, although the two regimes are closer together compared to the FX case. A slow progression towards a tipping point is less clear in this case. In contrast, in USD we observe the completely different scenario of a steady progression of the information features during most of the duration of the dataset. One hypothetical explanation for this is that the IRS market in USD could have been (part of) a slow but steady driving factor in the global progression to the crisis, whereas the EUR IRS and the FX markets may have been more exogenously forced towards their regime shift. Indeed, the progression to the 2008 crisis is often explained by referring at least to substantial losses in fixed income and equity portfolios followed by the U.S. subprime home loan turmoil <cit.>, suggesting at least a central role for the trade of risks concerning interest rates in USD. The exact sequence of events leading to the 2008 crisis is still debated among financial experts, and our numerical analyses may help to shed light on interpreting the relative roles of different markets. Another intriguing observation is that financial markets appear to settle to a new steady state after the crisis as opposed to reverting back to their original dynamics. This would suggest that there is no `in and out of' the financial crisis; instead, the `crisis' appears to actually be a substantial and sustained regime shift. This is corroborated by a series of new, post-2008 market practices. A crucial development is that before 2008 counterparty credit risk was largely ignored, whereas since 2008 it is explicitly priced in various forms <cit.>. For instance, it has become market standard to charge credit valuation adjustments (CVA) for unsecured over-the-counter derivative trades. CVA is the difference between the risk-free portfolio value and the true portfolio market value that takes into account the possibility of a counterparty’s default. Although research into how exactly to calculate CVA is still ongoing <cit.>, it is clear that the post-crisis systemic market dynamics have markedly and sustainably changed, especially regarding the detection and mitigation of various forms of counterparty and systemic risks. Nevertheless, at this stage it remains unclear why the information processing features move towards higher memory, higher transfer, and lower integration (FX) or higher integration (IRS). We note that the memory and transfer features can be reliably estimated from the data and that their relative change is consistent with a move from complex and chaotic ECA rules toward class 1 and 2 rules.
We also note that memory and transfer are correlated in the real data, leading to the diagonal tendency of the time points in the feature spaces. The integration feature must be considered with more caution, since the existence of (strong) correlations makes it problematic to quantify it exactly. Still, it does become clear that this feature plays a significant role. A possible explanation for the higher integration in the IRS markets may be that the rates are increasingly determined by counterparty and systemic risk indicators. After all, these indicators are surely aggregate functions of IRS rates (among others), as opposed to direct transformations of an individual IRS rate, leading to reduced individual correlations and thus to higher integration. Risk indicators play a much less significant role in the dependencies among foreign exchange rates, offering a possible explanation for the observation that the FX integration at least does not increase. Although the absolute exchange rates may depend on external systemic risk indicators, leading to a larger exogenous factor in determining the rates, our measures would not detect this because mutual information is insensitive to monotonic transformations of the variables. The way in which different rates depend on each other remains largely the same: the FX market is a rather technical market with a large fraction of algorithmic traders which exploit (small) imbalances between different rates. Also, transactions are instantaneous as opposed to the long-term IRS contracts, meaning no counterparty dependencies and thus no risk factors incorporated in the way that different rates depend on each other.§ DISCUSSION Our working assumption is that dynamical systems inherently process information. Our leading hypothesis is that the way that information is locally processed determines the global emergent behavior. In this article we formalize the notion of `information processing' and then present two lines of supporting evidence: from a model perspective (ECA) and a data-analysis perspective (FX data). Our formalization builds upon Shannon's information theory, which means that we consider an ensemble of state trajectories rather than a single trajectory. That is, we do not quantify the information processing that occurs during a particular, single sequence of system states (attempts to this end are due to Lizier et al. <cit.>). Rather, we consider the ensemble of all possible state sequences along with their probabilities. One way to interpret this is that we quantify the `expected' information processing averaged over all trajectories <cit.>. Another way to interpret it is that we characterize a dynamical model in its totality, rather than a particular symbolic sequence of states of a model. Our reasoning is that if (almost) every state trajectory of a model (such as a CA rule) leads to a particular emergent behavior (such as chaotic or complex patterning) then we would argue that the emergent behavior is a function of the ensemble dynamics of the model. This seems at odds with computing information features from real time series, which are measurements of a single trajectory of a system. We resolve this issue by assuming `local stationarity'. This assumption is common in time series analysis and used (implicitly or explicitly) in most `sliding window' approaches and moving statistic estimations, among others.
In other words, we assume that the rate of sampling data points is significantly faster than the rate at which the underlying statistical properties change, which in our case are the information features. The consequence is that a finite number of consecutive data points can be used to estimate the probability distribution of the system at the corresponding time, which in turn enables estimating mutual information quantities. Our first intriguing result from the ECA analysis is that fewer information features capture more of the relevant dynamical behavior, as time t progresses away from a randomized system state. One potential explanation is the processing of correlated states, or equivalently, of overlapping information. Namely, at t=1 each cell operates exclusively on uncorrelated inputs, so the resulting state distribution is a direct product of the state transition table. Neighboring cell states at t=1 are now correlated due to overlapping input states, induced by the interaction topology. Consequently, at time t=2 and beyond, the inputs to each cell have become correlated in a manner dictated by the interaction topology. The way in which an ECA rule subsequently deals with these correlations is evidently an important characteristic. In other words, two ECA rules may have exactly the same information features for t=1 but different features for t=2, which must be due to a different way of handling (combining) correlated states. This result leads us to propose the informational non-locality concept quantified by t_c. Namely, the information measures at t=1 of each cell do not yet characterize the interaction topology (by the correlations it induces) other than perhaps the degree distribution. This suggests that the `missing' predictive power at t=1 is a measure of the relevance of the (non-local) interaction topology. In the case of ECA this quantity is roughly half: 1-0.488=0.512. In other words, if no correlations were ever induced at t=1 and the CA still always operated on a randomized state at t=2, then the information measures at t=2 would yield zero additional predictive power about the eventual emergent behavior, which would lead to t_c=2. To the extent that correlations are induced, this added predictive power increases. Indeed we find t_c=3 for ECA. This concept extends to further time steps, at which correlations over larger distances could be induced. The distance or time step t_c at which the predictive power no longer increases is a measure of the `locality' of topological characteristics that are relevant for the emergent behavior. To illustrate the potential importance of t_c, consider the set of all possible gene-gene regulatory interaction networks as the universe of dynamical systems, and consider the stability of the generated patterns as behavioral classes. t_c then reflects the relative importance of the network topology versus the local dynamics. In turn, this would also inform the experimentation and modeling: if a large t_c is found then any modeling attempt must take great care to measure and reproduce the large-scale network characteristics beyond only the degree distribution, and conversely, a low t_c may imply for instance that only the degree distribution and the clustering coefficient need to be taken into account. Our second intriguing result is that the most predictive information feature is invariably information integration. In each time step it accounts for the vast majority of the total predictive power (75%, 92%, and 96%, respectively).
This is the feature that we would consider to actually capture the `processing' or modification of information, rather than the memory and transfer features, which capture the simple `copying' of information. Indeed, the cube of Fig. <ref> suggests that the interesting behaviors (chaotic and complex) are associated with high integration, low memory, and low transfer. In this extreme we find rule 60 (the XOR rule) and similar rules, which are all chaotic rules. For complex behavior, non-zero but low memory and transfer appear to still be necessary ingredients. The good separation of the dynamic behavioral classes in the ECA models using only a few information features ultimately leads to the question of whether the same can be done for real systems based on real data. This is arguably a large step, and certainly more rigorous research should be done using intermediate models of increasing complexity and for different classifications of dynamical behavior. On the other hand, if promising results could be obtained from real data using a small set of information features then this would add more urgency to such future research, even without yet fully understanding the role of information processing features in systemic behavior. This is the purpose of our application to financial data. Financial data is of high quality and available in large quantities, and at least one large regime shift is known, namely the 2008 financial crisis. We stay as close as possible to ECA by selecting two datasets in which the dynamical variables can be interpreted to form a line graph. Indeed, we consider our results in the financial application promising enough to warrant further study into information processing features in complex system models and other real datasets. Our results suggest tipping point behavior for the FX and EUR IRS markets and a possible driving role for the USD IRS market. All in all we conclude that the presented information processing concept appears indeed to be a very promising framework to study how dynamical systems generate emergent behaviors. In this paper we present initial results which support this claim. Further research may identify concrete links between information features and various types of emergent behaviors, as well as the relative impact of the interaction topology. Our lack of understanding of emergent behaviors is exhibited by the ECA model: it is arguably the simplest dynamical model possible, and the choice of local dynamics (rule) and initial conditions fully determines the emergent behavior that is eventually generated. Nevertheless, even in this case no theory exists that predicts the latter from the former. The information processing concept may provide a new perspective on dynamical systems with potentially pervasive impact across the sciences.§ SUPPORTING INFORMATION §.§ Lambda parameter and information features Relation to Lambda parameter. The Lambda parameter is a well-known `local' characterization of a CA rule which correlates with Wolfram's complexity classes. It is local in the sense that it is computed from the state transition table, so effectively it can be seen as a prediction of the Wolfram class at t=0.
Here we demonstrate that our information features include part of the predictive power of the Lambda parameter by relating the two. Let us first consider a dynamical system with N variables whose state a_i(t) evolves in time according to a given rule, depending on the state of the neighborhood of agent i, denoted n_i (we shall always assume the neighborhood of a node includes the node itself). We assume a probabilistic description so as to write this rule as a transition matrix W_{a_j}→ a_i = P(a_i, t |{a_j}, t-1) ∀ j ∈ n_i. This gives the probability that agent i takes value a_i at time t knowing that the neighbors of i take value a_j_k for k ∈{1, ..., | n_i |}. With the W matrix we can express two-step joint probabilities as P(a_i, t;{a_j}, t-1) = P({a_j}, t-1) W_{a_j}→ a_i . We can now define and compute the total information I_tot as I_tot := I(a_i, t; {a_j}, t-1) = ∑_{a_j}, a_i P(a_i, t;{a_j}, t-1) ln[ P(a_i, t;{a_j}, t-1) / (P(a_i, t) P({a_j}, t-1)) ] = ∑_{a_j}, a_i P({a_j}, t-1) W_{a_j}→ a_i ln[ W_{a_j}→ a_i / P(a_i, t) ] . Now comes a crucial assumption: we shall assume that the agents are randomly initialized, namely that P({a_k}, t-1) = 2^-| n_k | for any configuration of the neighborhood. In the case of cellular automata one is also allowed to write n_i = n ∀ i, so that I_tot = 2^-n∑_{a_j}, a_i W_{a_j}→ a_i ln[ W_{a_j}→ a_i / (2^-n∑_{a_j} W_{a_j}→ a_i) ] = 2^-n∑_{a_j}, a_i W_{a_j}→ a_i ln W_{a_j}→ a_i - 2^-n∑_{a_j}, a_i W_{a_j}→ a_i ln( 2^-n∑_{a_j} W_{a_j}→ a_i ) = - 2^-n∑_{a_j}, a_i W_{a_j}→ a_i ln( 2^-n∑_{a_j} W_{a_j}→ a_i ) = - λ ln λ - (1-λ) ln (1-λ) , where λ is the Langton parameter, namely the fraction of ones in the lookup table characterizing the dynamics. The last equality arises from the fact that ∑_{a_j} W_{a_j}→ a_i = 2^n λ for a_i = 1 and 2^n (1-λ) for a_i = 0, while the penultimate one holds because for a deterministic dynamics the transition elements take either the value 0 or 1. The next piece of information we consider is memory, which we define as I_mem := I(a_i, t; a_i, t-1) . It can be computed as I_mem = ∑_a_i, a'_i P(a_i, t; a'_i, t-1) ln[ P(a_i, t; a'_i, t-1) / (P(a_i, t) P(a'_i, t-1)) ] = ∑_a_i, {a'_j} P(a_i, t;{a'_j}, t-1) ln[ ∑_{a'_j ≠ i} P(a_i, t;{a'_j}, t-1) / (P(a_i, t) P(a'_i, t-1)) ] = 2^-n∑_a_i, {a'_j} W_{a'_j}→ a_i ln[ ∑_{a'_j ≠ i} W_{a'_j}→ a_i / (2^-1∑_{a'_j} W_{a'_j}→ a_i) ] . We now refine the previous analysis by defining two quantities λ_0 and λ_1 as Langton parameters restricted to the case when the central site is 0 or 1 respectively. In other words we have ∑_{a'_m, m ≠ i} W_{a'_m,a'_i}→ a_i = 2^n (1/2-λ_0) for (a'_i, a_i) = (0,0), 2^n λ_0 for (0,1), 2^n (1/2-λ_1) for (1,0), and 2^n λ_1 for (1,1), so that λ_0 + λ_1 = λ. This allows us to rewrite I_mem as I_mem = 2^-n∑_a_i, a_i', {a'_j≠ i} W_a_i', {a'_j}→ a_i ln[ ∑_{a'_j ≠ i} W_a_i', {a'_j}→ a_i / (2^-1∑_a_i', {a'_j≠ i} W_a_i', {a'_j}→ a_i) ] = 2^-n∑_{a'_j≠ i} W_0, {a'_j}→ 0 ln[ (1/2 - λ_0) / (2^-1∑_a_i', {a'_j≠ i} W_a_i', {a'_j}→ 0) ] + 2^-n∑_{a'_j≠ i} W_0, {a'_j}→ 1 ln[ λ_0 / (2^-1∑_a_i', {a'_j≠ i} W_a_i', {a'_j}→ 1) ] + 2^-n∑_{a'_j≠ i} W_1, {a'_j}→ 0 ln[ (1/2 - λ_1) / (2^-1∑_a_i', {a'_j≠ i} W_a_i', {a'_j}→ 0) ] + 2^-n∑_{a'_j≠ i} W_1, {a'_j}→ 1 ln[ λ_1 / (2^-1∑_a_i', {a'_j≠ i} W_a_i', {a'_j}→ 1) ] = (1/2 - λ_0) ln[(1 - 2λ_0)/(1-λ)] + λ_0 ln[2λ_0/λ] + (1/2 - λ_1) ln[(1 - 2λ_1)/(1-λ)] + λ_1 ln[2λ_1/λ] . The third and last piece of information we consider is transfer, defined as I_trans := I(a_i, t; a_k, t-1) , where a_k is any neighbor of a_i different from a_i itself.
Its computation is quite similar to that of memory and we get I_trans = ∑_a_i, a'_k P(a_i, t; a'_k, t-1) ln[ P(a_i, t; a'_k, t-1) / (P(a_i, t) P(a'_k, t-1)) ] = ∑_a_i, {a'_j} P(a_i, t;{a'_j}, t-1) ln[ ∑_{a'_j ≠ k} P(a_i, t;{a'_j}, t-1) / (P(a_i, t) P(a'_k, t-1)) ] = 2^-n∑_a_i, {a'_j} W_{a_j'}→ a_i ln[ ∑_{a'_j ≠ k} W_{a_j'}→ a_i / (2^-1∑_{a_j'} W_{a_j'}→ a_i) ] . We can proceed in the same way as previously by defining a set of parameters (λ_0^(L), λ_1^(L), λ_0^(R), λ_1^(R)), where the superscript index L or R refers to the left or right neighbor respectively. In other words, λ_0^(L) is the fraction of ones in the lookup table when the left neighbor is zero, etc. As for memory we obviously have λ_0^(L) + λ_1^(L) = λ and λ_0^(R) + λ_1^(R) = λ. The reasoning presented for the memory information may be carried over without modification so as to finally get I_trans^(L) = (1/2 - λ_0^(L)) ln[(1 - 2λ_0^(L))/(1-λ)] + λ_0^(L) ln[2λ_0^(L)/λ] + (1/2 - λ_1^(L)) ln[(1 - 2λ_1^(L))/(1-λ)] + λ_1^(L) ln[2λ_1^(L)/λ] , I_trans^(R) = (1/2 - λ_0^(R)) ln[(1 - 2λ_0^(R))/(1-λ)] + λ_0^(R) ln[2λ_0^(R)/λ] + (1/2 - λ_1^(R)) ln[(1 - 2λ_1^(R))/(1-λ)] + λ_1^(R) ln[2λ_1^(R)/λ] . The case of Rule 110. As an example consider Rule 110, an elementary cellular automaton which is well known for its complex behavior. The lookup table of this dynamics is given by W_{a_i-1' a_i' a_i+1'}→ a_i = [ (a_i = 0) (a_i = 1); 000 1 0; 001 0 1; 010 0 1; 011 0 1; 100 1 0; 101 0 1; 110 0 1; 111 1 0 ] . One immediately gets λ=5/8, λ_0=λ_1^(L)=λ_0^(R)=1/4 and λ_1=λ_0^(L)=λ_1^(R)=3/8. It follows that I_tot^110 = - (5/8) ln(5/8) - (3/8) ln(3/8) ≈ 0.66156, I_mem = (1/2 - λ_0) ln[(1 - 2λ_0)/(1-λ)] + λ_0 ln[2λ_0/λ] + (1/2 - λ_1) ln[(1 - 2λ_1)/(1-λ)] + λ_1 ln[2λ_1/λ] = (1/4) ln(4/3) + (1/4) ln(4/5) + (1/8) ln(2/3) + (3/8) ln(6/5) ≈ 0.03382, I_trans^(L) = (1/8) ln(2/3) + (3/8) ln(6/5) + (1/4) ln(4/3) + (1/4) ln(4/5) ≈ 0.03382, and I_trans^(R) = (1/4) ln(4/3) + (1/4) ln(4/5) + (1/8) ln(2/3) + (3/8) ln(6/5) ≈ 0.03382 (these values are cross-checked numerically in the sketch at the end of this section). Discussion about the Langton parameter relation. So far we have shown that the basic quantities of information processing can be expressed in terms of generalized Langton parameters. This is nevertheless true only when in the initial state the nodes are completely uncorrelated. When the nodes are initially correlated, the values are significantly shifted. Figs. <ref>, <ref>, <ref> and <ref> show the behavior of the total, memory and left- and right-transfer information respectively after the nodes have become correlated. The dotted line shows the value of each information quantity in the uncorrelated state, highlighting the shift between this value and the corresponding value in the stationary regime.§.§ Robustness of financial cube plots Sliding window size. Here we show result plots for the financial time series (Fig. <ref> and Fig. <ref> in the main text) for smaller sliding window sizes (w=1400 data points in the main text), in order to demonstrate that the results in the main text are not obtained by carefully fine-tuning this parameter. The smaller the sliding window, the fewer data points are available and thus the more difficult it is to accurately estimate mutual information. We show here w=800 and w=1000 in Figs. <ref> through <ref> to illustrate how the main text's results already start to appear at these significantly lower values.
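Finally, as a quick numerical cross-check of the Rule 110 example above, the generalized Langton parameters and the resulting information quantities can be evaluated directly from the rule's lookup table. A minimal Mathematica sketch (using Wolfram's rule-number convention, which orders the neighborhoods 111, 110, ..., 000; variable names are ours):

rule = 110;
outputs = IntegerDigits[rule, 2, 8]; (* outputs for neighborhoods 111, 110, ..., 000 *)
centers = IntegerDigits[#, 2, 3][[2]] & /@ Range[7, 0, -1]; (* central site of each neighborhood *)
lam = Total[outputs]/8; (* Langton parameter: 5/8 *)
lam0 = Total[Pick[outputs, centers, 0]]/8; (* central site 0: 1/4 *)
lam1 = Total[Pick[outputs, centers, 1]]/8; (* central site 1: 3/8 *)
iTot = -lam Log[lam] - (1 - lam) Log[1 - lam];
iMem = (1/2 - lam0) Log[(1 - 2 lam0)/(1 - lam)] + lam0 Log[2 lam0/lam] + (1/2 - lam1) Log[(1 - 2 lam1)/(1 - lam)] + lam1 Log[2 lam1/lam];
N[{iTot, iMem}] (* {0.66156, 0.03382}, matching the values quoted above *)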
§ ACKNOWLEDGMENTS PMAS and RQ acknowledge the financial support of the Future and Emerging Technologies (FET) program within the Seventh Framework Programme (FP7) for Research of the European Commission, under the FET-Proactive grant agreement TOPDRIM, number FP7-ICT-318121. PMAS, RQ, and OHS also acknowledge the financial support of the Future and Emerging Technologies (FET) program within the Seventh Framework Programme (FP7) for Research of the European Commission, under the FET-Proactive grant agreement Sophocles, number FP7-ICT-317534. PMAS acknowledges the support of the Russian Scientific Foundation, Project number 14-21-00137.§ CONFLICTS OF INTEREST The authors declare that there is no conflict of interest regarding the publication of this paper.
§ INTRODUCTION Small perturbations of a black hole generically take the form of damped oscillations called quasinormal modes (QNMs). These perturbations occur only at a discrete set of frequencies, depending on the black hole itself. The spectrum of QNM frequencies contains a wealth of information about the black hole. They describe the late time evolution of any dynamical process that results in a black hole at equilibrium. In particular, in the black hole mergers recently observed by LIGO <cit.>, the last part of the gravitational wave signal is described by quasinormal modes. With more data on black hole mergers expected in the future, matching the signal to the quasinormal modes will be a good test of General Relativity <cit.>. In particular, as noted in <cit.>, a more accurate determination of the mass and angular momentum of the black hole is necessary to rule out or constrain many alternative theories of gravity. More precisely, quasinormal modes are linear perturbations δϕ which behave in time as δϕ(t) ∼ e^-i ω t, for some complex frequency ω, its (negative) imaginary part giving exponential damping and its real part giving an oscillation. By causality we require the perturbation to be ingoing at the black hole horizon, and either outgoing (asymptotically flat or de Sitter spacetimes) or finite (anti-de Sitter) at spatial infinity. This results in a discrete spectrum of frequencies, ω_n. When there is no black hole there is nothing for the perturbation to dissipate energy into, so there are only normal modes, undamped oscillations, as we will see in <ref>. A perturbation can also become unstable, when Im(ω) > 0, showing exponential growth instead of decay. This can be used to look for new solutions as the end point of such instabilities, see e.g. <cit.>. There are also cases known where the instability has no end point and the black hole continues to grow until it hits the anti-de Sitter boundary <cit.>. Through the fluid-gravity correspondence <cit.> the quasinormal modes of black branes are related to hydrodynamics. In particular, the quasinormal modes whose frequencies vanish as momentum is taken to zero encode in principle all hydrodynamic transport coefficients in their dispersion relation <cit.>. Higher modes decay more quickly and contain more information than hydrodynamics. Recently <cit.> found strong evidence indicating that these modes set a limit on the range of applicability of hydrodynamics. Holography relates the physics of black holes to the physics of strongly coupled quantum field theories <cit.>. To each field on the gravity side corresponds a gauge-invariant operator on the QFT side. The quasinormal frequencies of this field are equal to the poles in the retarded Green's function of the operator in the dual QFT <cit.>. By summing quasinormal modes one can compute one loop determinants <cit.>. Out of equilibrium gauge theory plasmas are dual to out of equilibrium asymptotically anti-de Sitter black holes.
At late times these are again described by quasinormal modes, see <cit.> for an explicit comparison. For two extensive reviews on quasinormal modes, see <cit.> and <cit.>. Being so full of information, it would be desirable to be able to compute these for any black hole. While there are some cases where it can be done analytically (a recent quite involved example is <cit.>, and see <cit.> for a review of analytic computations and approximations), of course the generic case can only be done numerically. In this paper we present a Mathematica package which numerically computes quasinormal modes <cit.>. It is applicable to a broad class of cases: it does not matter what the asymptotics are, if the equations are coupled, if the frequency occurs to a higher power in the equation, or if the background is numerical (assuming this numerical background has been computed). The method was first applied to general relativity in <cit.>. Essentially it discretizes the quasinormal mode equation(s) using spectral methods, and then directly solves the resulting generalized eigenvalue equation. Some great reviews on numerics in gravity are <cit.>. We have attempted to make the package easy to use, efficient in its computation, fully documented and with code that is itself easy to read and, if necessary, debug. Earlier versions were used successfully for the quasinormal mode computations in <cit.>. We encourage anyone to use it, and possibly to contribute by making improvements or adding new features. The paper is structured as follows. In section <ref> we describe the method used in the package. Then in section <ref> we describe how to use it at a more practical level, illustrated with one of the simplest cases: the Schwarzschild-anti-de Sitter black brane, in 5 dimensions. The next two sections contain more examples. In section <ref> we study the 4-dimensional Schwarzschild black hole for various asymptotics: again anti-de Sitter, flat space and de Sitter. In this last case we find an infinite set of purely imaginary modes that has been overlooked in the literature. In section <ref> we derive the quasinormal mode equations for a generic Einstein-Maxwell scalar action (with a homogeneous and isotropic background). We use this to compute a more involved example: the 5-dimensional asymptotically anti-de Sitter Reissner-Nordström black brane. This serves to illustrate the method in the more complicated case of coupled equations while also giving a check on the numerics, since we also have some analytic control on these quasinormal modes. In the appendices we give a short report on the performance of the package and tables with numerical values for some quasinormal modes.§ METHOD To find the quasinormal mode spectrum, we have to solve the linear ordinary differential equation (ODE) (or coupled system of ODE's) describing a linearized perturbation on top of a black hole/black brane. At the horizon, causality requires us to choose only ingoing waves. Similarly at spatial infinity, or the cosmological horizon in the case of de Sitter, we must require only outgoing waves.
In asymptotically anti-de Sitter spacetime we require the perturbation to be normalizable at the boundary. These two boundary conditions can only be satisfied for a discrete set of frequencies ω∈{ω_n | n = 1, 2, 3, ...}[We stick to starting at n=1, though starting at n=0 is also common in the literature], so at the same time as solving the ODE we have to solve for these ω_n. In this section we will discuss the method that is used in the package to do this.§.§ Analytics We will illustrate the method by the example of a massless scalar field in a 5-dimensional asymptotically anti-de Sitter Schwarzschild black brane. It turns out that Eddington-Finkelstein coordinates are perfectly suited for this problem. For the time-independent backgrounds we will consider, these coordinates take the form ds^2 = - f(u) dt^2 + 2 g(u) dt du + (spatial part). Although not strictly necessary, they usually simplify the problem, we will see why below, and we will use these coordinates throughout this paper. In these coordinates the asymptotically AdS Schwarzschild black brane is f(u)= u^-2(1-u^4) , g(u)= -u^-2, where u=0 corresponds to the boundary and u=1 corresponds to the horizon. As long as the background is homogeneous, which in our case it is, we can write the fluctuation as a plane wave, δϕ(u,t,x) = e^-i (ω t - k x )δϕ(u) , where ω is the frequency and k is the momentum. The same relation would hold for any other fluctuation in this background. Inserting this into the Klein-Gordon equation we obtain ( -4 q^2 u -6 i λ) δϕ(u)+(-u^4+4 i λ u-3) δϕ'(u) + (u-u^5) δϕ”(u)= 0 . We have rescaled the quasinormal mode frequency ω and the momentum k to the dimensionless ratios λ≡ω / 2 π T and q ≡ k / 2 π T, where T = 1/π is the temperature of the black brane. Here we can already see one advantage of using Eddington-Finkelstein coordinates: the ansatz (<ref>) implies that g^tt = 0, and as a consequence the quasinormal mode equation <ref> is linear in the frequency, whereas it would be quadratic in Fefferman-Graham coordinates. The advantage of this will become clear in subsection <ref>. Now we must analyze the behavior near the horizon and near the boundary to see how to deal with the boundary conditions. Starting with the horizon, by plugging in an ansatz ϕ(u) = (1-u)^p, we find that there are two solutions: δϕ_in(u) = const + 𝒪(1-u) and δϕ_out(u) = (1-u)^i λ (1 + 𝒪(1-u)). Including time dependence, this last solution behaves as δϕ_out(t,u) = e^-i λ (2 π T t - log(1-u) ), so as t increases, 1-u has to increase as well to keep a constant phase, meaning u has to decrease, which means that this solution is outgoing. Hence we must make sure that we only get the other, ingoing solution. Notice that the ingoing solution is perfectly smooth near the horizon, while the outgoing one oscillates more and more rapidly as we approach the horizon. These properties will help us pick the correct solution, as we will see later. Near the boundary there are two solutions, a non-normalizable mode δϕ(u) ∝ 1 which we must discard, and a normalizable one δϕ(u) ∝ u^4. If we redefine δϕ(u) ≡ u^3 δϕ̃(u), then the normalizable solution has ϕ̃ approaching zero linearly, while the non-normalizable solution diverges.
[One can also rescale by u^4, but u^3 seems to be numerically more stable; the important point in any case is that the non-normalizable solution diverges while the normalizable one does not.] Doing this rescaling, and also rescaling the equation itself so that it is finite but nontrivial at the boundary, the equation becomes: (-3-9 u^4 - 4 q^2 u^2 +6 i u λ) δϕ̃(u) +u (3-7 u^4+4 i u λ) δϕ̃'(u)+ (u^2-u^6) δϕ̃”(u)=0 . Now the normalizable solution behaves perfectly smoothly both at the boundary and at the horizon, while the non-normalizable solution behaves pathologically, diverging and rapidly oscillating respectively. We will see in the next section how this can be used to automatically deal with the boundary conditions.§.§ Discretization: Pseudospectral Methods Having derived the equation (<ref>) in a form without divergences, we will now discretize it in order to solve it numerically. For this we use pseudospectral methods; the standard reference on these is <cit.>. The pseudospectral method solves a (differential) equation by replacing a continuous variable, the radial one in our application, by a discrete set of points, also called collocation points. The collection of these points is usually called the grid. A function can then be represented as the values the function takes when evaluated on the gridpoints. An equivalent and useful way of looking at this set of numbers representing a particular function is as coefficients of the so-called cardinal functions. The cardinal functions corresponding to the grid {x_i | i = 0, ... , N } are polynomials C_j(x) of degree N, with j = 0, ..., N, satisfying C_j(x_i) = δ_ij. The choice of a grid uniquely specifies the cardinal functions as C_j(x) = ∏_i=0,i≠ j^N (x - x_i)/(x_j - x_i). A function f is then approximated as f(x) ≈∑_j=0^N f(x_j) C_j(x) . The expansion in terms of cardinal functions allows one to construct the matrix D_ij^(1) that represents the first derivative, D_ij^(1) = C_j^'(x_i), and similarly for higher derivatives. Solving the resulting linear equation will give a function which solves the original equation exactly at the collocation points. The hope is that as the number of gridpoints is increased, it will also solve the equation at other points. For this to work, the choice of collocation points is crucial. The choice which tends to work best and is therefore most often used is the Chebyshev grid: x_i = cos(i/Nπ) , i = 0, ..., N . For this grid it can be proven that any analytic function can be approximated with exponential convergence in N. This can be understood by realizing that when we increase N, not only does the largest distance between gridpoints decrease proportionally to 1/N, but at the same time the order of the cardinal functions increases, leading to a numerical error scaling as (1/N)^N. Note that while this means that we can approximate the equation with exponential convergence, this does not mean that the number of quasinormal modes we find will grow exponentially with N; for more details see Appendix <ref>. As given, these points lie in the interval [-1,1], but this can be rescaled and shifted to any other interval. In particular, we will want to rescale it to [0,1] if the horizon is at 1. For this grid, the cardinal functions are linear combinations of Chebyshev polynomials T_n(x): C_j(x) = 2/(N p_j) ∑_m=0^N (1/p_m) T_m(x_j) T_m(x) , p_0 = p_N = 2 , p_j = 1 otherwise. In Fig. (<ref>) we show what these functions look like for N = 6.
Of course these functions are all perfectly smooth, and at the endpoints they are either 0 or 1 (since the endpoints themselves are collocation points). With a linear combination of these smooth and finite functions we can never approximate either a function that is diverging or one that is rapidly oscillating. As a consequence, we have already implicitly solved the boundary conditions by choosing these functions as a basis.§.§ Generalized Eigenvalue Equation Now we have turned the problem of solving a linear ODE, subject to specific boundary conditions, into solving a matrix equation, where the boundary conditions are already implicitly solved. It is still not purely numerical though, as the equation depends on the frequency, which we have to solve for as well. This can be done by recognizing that this is a generalized eigenvalue problem. The simplest type of quasinormal mode equation is of the form c_0(u,ω) ϕ(u) + c_1(u,ω) ϕ^'(u) + c_2 (u,ω) ϕ^''(u) = 0 , where each of the c_i is linear in ω: c_i(u,ω) = c_i,0(u) + ω c_i,1(u). To be completely explicit, in our example Eq. (<ref>), setting the momentum q to zero, this gives c_0,0 = -3 - 9 u^4, c_0,1 = 6 i u, c_1,0 = u(3 - 7 u^4), c_1,1 = 4 i u^2, c_2,0 = u(u - u^5) and c_2,1 = 0. Each of these coefficients c_i,j(u) is turned into a vector by evaluating it on the gridpoints. These vectors are multiplied with the corresponding derivative matrices D_ij^(n) and the resulting matrices added, to bring the equation into the form (M_0 + ω M_1 ) ϕ = 0 , where the M_i are now purely numerical matrices. Explicitly, (M_0)_ij = c_0,0(x_i) δ_i j + c_1,0(x_i) D_ij^(1) + c_2,0(x_i) D_ij^(2), and similarly for M_1. Equation (<ref>) is precisely a generalized eigenvalue equation. This can be solved directly using Mathematica's built-in function Eigenvalues (or Eigensystem to get the eigenfunctions as well).
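To make this pipeline concrete, the following stand-alone sketch (independent of the package; our own minimal implementation) builds the Chebyshev grid and differentiation matrices, discretizes Eq. (<ref>) at q = 0, and solves the generalized eigenvalue problem. It should reproduce the well-known lowest mode λ ≈ ±1.5597 - 1.3734 i:

n = 40;
x = N[Cos[Pi Range[0, n]/n]]; (* Chebyshev grid on [-1, 1] *)
u = (1 + x)/2; (* rescaled to [0, 1]: u = 0 boundary, u = 1 horizon *)
c = ReplacePart[ConstantArray[1, n + 1], {1 -> 2, n + 1 -> 2}] (-1)^Range[0, n];
d = Table[If[i == j, 0, c[[i]]/(c[[j]] (x[[i]] - x[[j]]))], {i, n + 1}, {j, n + 1}];
d1 = 2 (d - DiagonalMatrix[Total[d, {2}]]); (* d/du: diagonal from the negative-sum trick, factor 2 from rescaling *)
d2 = d1.d1;
(* the coefficients c_{i,j} listed above, evaluated on the grid *)
m0 = DiagonalMatrix[-3 - 9 u^4] + DiagonalMatrix[u (3 - 7 u^4)].d1 + DiagonalMatrix[u^2 - u^6].d2;
m1 = DiagonalMatrix[6 I u] + DiagonalMatrix[4 I u^2].d1;
SortBy[Eigenvalues[{m0, -m1}], Abs][[1 ;; 4]] (* spurious eigenvalues have large |λ| and sort to the end *)

Note that the row of m0 at the gridpoint u = 0 reduces to -3 δϕ̃(0) = 0, so the boundary condition is imposed automatically, exactly as discussed above.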
§.§ Extensions The method presented works not just for the simple case used to illustrate it; in fact, it works quite generally with a few simple modifications. Firstly, suppose that instead of one ODE, we have a coupled system of ODE's. Say there are two of them (but the following applies generally), of the form c_0(u,ω) ϕ(u) + c_1(u,ω) ϕ^'(u) + c_2(u,ω) ϕ^''(u) + d_0(u,ω) ψ(u) + d_1(u,ω) ψ^'(u) + d_2(u,ω) ψ^''(u)= 0 , e_0(u,ω) ψ(u) + e_1(u,ω) ψ^'(u) + e_2(u,ω) ψ^''(u) + f_0(u,ω) ϕ(u) + f_1(u,ω) ϕ^'(u) + f_2(u,ω) ϕ^''(u)= 0 . We can discretize this in a similar way, by joining the two functions ϕ and ψ into a single vector (ϕ(u),ψ(u)). The matrix M_0 of Eq. (<ref>) becomes M_0 = [ M̃_0,1(ϕ) M̃_0,1(ψ); M̃_0,2(ϕ) M̃_0,2(ψ) ], where the 0 everywhere indicates that this is the piece of order 0 in the frequency, the second index indicates the equation number and the argument indicates the function, so that M̃_0,1(ϕ) is the matrix coefficient of ϕ in the first equation. Of course there is a similar equation for the term linear in the frequency. Further, we can generalize to an equation depending on the frequency as an arbitrary polynomial. Whether coupled or not, the procedure above will bring such a (system of) equation(s) to the form ℳ(ω) ϕ = ( M̃_0 + ωM̃_1 + ω^2 M̃_2 + ... + ω^p M̃_p ) ϕ = 0 . Note that in the case of a coupled equation, the ϕ here would be a vector composed of several functions, as above. We can again write this as a generalized eigenvalue equation of the form of Eq. (<ref>) by defining M_0 = [ M̃_0 M̃_1 M̃_2⋯ M̃_p-1;010⋯0;001⋯0;⋮⋮⋮⋱⋮;00001;], M_1 = [000⋯0 M̃_p; -100⋯00;0 -10⋯00;⋮⋮⋮⋱⋮⋮;000⋯ -10;]. These matrices act on the vector (ϕ^(0),ϕ^(1),ϕ^(2), ..., ϕ^(p-1)), where the first row represents the original equation, and the other rows enforce that ϕ^(i) = ω^i ϕ. Notice that for a coupled system of N_eq equations with the maximal power of the frequency being p, the resulting matrices will be of size N_eq p (N+1) (where N is the N in Eq. (<ref>)). So both extensions come at a price in computational time. An alternative here is to solve det(ℳ(ω)) = 0. This is prohibitively slow to do symbolically, but it can be done numerically by sweeping the complex ω-plane. This has the advantage that higher powers of ω do not require larger matrices, but it comes with the extra complexity of selecting and possibly refining a grid of points in the complex plane. [This is also implemented in the package (by setting Method→“Sweep”), but it is not as worked out as the main method.]§ USING THE PACKAGE In this section, we will show at a very practical level how to work with the package. Our starting point will be the properly rescaled equation for a massless scalar in an asymptotically anti-de Sitter Schwarzschild black brane, at zero momentum, Eq. (<ref>). First some practical remarks. The package can be found here <cit.>, where installation instructions can also be found. Once installed there are several ways to get started and familiarize oneself with it. Apart from the present paper, the package also has its own documentation. This can be accessed through Mathematica by going to Help, Documentation and typing in “QNMspectral”, and describes all the functions and their options. It includes some tutorials, which also go into more detail on how to obtain Eq. (<ref>). We also provide a separate notebook with several examples, also found at <cit.>. Here we continue with Eq. (<ref>). The function which implements everything mentioned in the previous section is called GetModes. Assuming Eq. (<ref>) is stored in Mathematica under eq, one can compute the quasinormal modes simply as modes = GetModes[eq,{40,0}]; This does the computation with N=40 (meaning with 41 gridpoints), at machine precision (machine precision has about 16 digits; anything less than that in the last argument will use machine precision). Note that it is not necessary to specify what the frequency, the radial variable and the fluctuation are; these can be determined unambiguously from the equation, and this is done automatically. Now the result, a list of quasinormal mode frequencies, is stored in modes and can be displayed by evaluating PlotFrequencies[modes,Name→“ω / 2 π T”], producing Fig. (<ref>). The computation used a grid of 41 points, so there are 41 eigenvalues. Typically the quasinormal modes lie on an approximately straight line; they certainly should in this case. This means that most of the eigenvalues found are numerical artifacts; only a few are accurate. This is unavoidable and will continue to be the case if we increase N or the precision. We will get more accurate results, but also more results in general, so still only a small fraction of the total will be accurate. To test whether a computed eigenvalue really is a quasinormal mode and not just a numerical artifact, one has to repeat the computation at different grid sizes and precisions, and look for convergence. A simple and efficient implementation of this is given in the function GetAccurateModes.
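Conceptually, the filtering that such a convergence test performs can be sketched as follows; this is a hypothetical re-implementation for illustration (the helper name converged is ours), not the package's actual code:

(* keep only eigenvalues that reappear, within a tolerance, at a different grid size and precision; spurious eigenvalues move around, converged modes stay put *)
converged[modes1_List, modes2_List, tol_ : 10^-6] := Select[modes1, Function[w, AnyTrue[modes2, Abs[w - #] < tol &]]];
(* hypothetical usage with two independent runs: converged[GetModes[eq, {40, 0}], GetModes[eq, {80, 40}]] *)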
The function itself is used for example as modesaccurate = GetAccurateModes[eq,{40,0},{80,40}]; This does the computation twice, once with N = 40 at machine precision and once with N = 80 at precision 40. It returns only those modes which occur in both computations, and of the modes it returns, it takes only those digits which agree. Note that for this to actually give only converged modes, one has to be sure that the equation itself is either fully analytic, or if it is numeric that the error in the equation is not the dominant one. One also has to be sure that there is no common error in both computations, as this will not get filtered out. This can happen when taking the grid sizes too close. We can display these results in a plot of the complex plane and a table by evaluating ShowModes[modesaccurate,Name→“ω/2π T”,Precision→Infinity], giving Fig. (<ref>). The option Precision→Infinity is used here to show all computed digits; by default only the first 6 are shown.[For more details on the various options of all the functions in the package see the documentation, which can also be accessed by evaluating for example ?GetAccurateModes, as with built-in Mathematica functions.] These values can be compared with <cit.>, where we see that indeed all displayed digits are accurate. We see that the lowest mode has been computed quite accurately, and as the mode number increases the precision of our result decreases. Apart from the caveats mentioned above, in practice this often gives only correct results. We advise however to do more elaborate checks to ensure accuracy. One additional check that one can do is to look at the eigenfunctions as well, and check that they are smooth and finite. By default, these are not computed. To compute them, simply add the option Eigenfunctions→True to either GetModes or GetAccurateModes. Repeating the computation (<ref>) with this option, we can now plot the eigenfunctions by PlotEigenfunctions[modesaccurate], producing Fig. (<ref>). In this case, one can see that all eigenfunctions are perfectly smooth, and go to 0 at the boundary, as was expected from the rescaling we did. Notice that they are normalized to be 1 at the horizon.§ SCHWARZSCHILD BLACK HOLE As another simple example and to illustrate the general applicability, we now consider the fluctuations of a massless scalar on top of a Schwarzschild black hole in various 4-dimensional spacetimes: anti-de Sitter, flat space and de Sitter. These black holes are all spherically symmetric solutions of the equations of motion coming from the action S = 1/8 π G∫ d^4 x √(-g)( R - 2 Λ) . The background metric we take is ds^2= - f(u) dt^2 + 2 ζ(u) dt du + S(u)^2 ( dθ^2 + sin^2 (θ) dϕ^2 ) , ζ(u)= - u^-2, S(u)= u^-1, f(u) = 1 - 2 G M u + ϵ/(L^2 u^2), where we set Newton's constant G = 1, M is the mass of the black hole, and depending on the asymptotics we have ϵ = 1, 0, -1 and Λ = -3/L^2, 0, 3/L^2, for anti-de Sitter, flat space and de Sitter respectively. Apart from the asymptotics, another difference from the previous example is the horizon topology: a sphere instead of a plane. This has the consequence that plane waves are no longer solutions, and we must instead use spherical harmonics, labelled by a multipole number l. The perturbation we consider is then: δϕ(t,u,θ,ϕ) = e^- i ω t Y_l^m(θ,ϕ) δϕ(u) .
These perturbations have to satisfy the Klein-Gordon equation, which takes the form: (u l (l+1) +2 i ω) δϕ (u) -(2 i u ω + u^3 f'(u) )δϕ '(u) - u^3 f(u) δϕ”(u) = 0 . In each of the cases considered, the behavior of the fluctuation near the horizon is the same as in the black brane discussed before. There is an outgoing mode δϕ_out(u) = (1-u)^i λ, and an ingoing mode δϕ_in(u) = const., due to our use of Eddington-Finkelstein coordinates. This is perfect, as the numerical approximation is only able to resolve the ingoing mode. We will explore each of the cases in turn, starting with the one most similar to the previous example: anti-de Sitter. §.§ Anti-de Sitter Just as in the previous example, anti-de Sitter has a conformal boundary at u = 0. The scalar field contains a normalizable mode as well as a non-normalizable mode, behaving near the boundary respectively as ϕ∝ u^3 and ϕ∝const. We do not want our fluctuation to change the boundary, therefore we demand that the non-normalizable mode vanish. Similar to before, we do this by redefining δϕ(u) = u^2 δ̃ϕ̃(u) , so that the normalizable mode approaches 0 as u → 0 and the non-normalizable mode diverges. The blackening factor, in Eq. (<ref>) with ϵ = 1, has a horizon at some u = u_h, in terms of which the mass is M = (1 + L^2 u_h^2)/(2 L^2 u_h^3) and the black hole temperature is T = (3 + L^2 u_h^2)/(4 π L^2 u_h). We will set u_h = 1 without loss of generality. Replacing M by L using the equation above, replacing ω by λ≡ω / (2 π T) and doing the rescaling of δϕ, we obtain the quasinormal mode equation: 0 = ( -2 - 4 u^3 + (2-4 u - l (l+1)) u^2 L^2 + (L^2+3) u i λ) δ̃ϕ̃(u) +u ( 2 - 5 u^3 +(4 - 5 u) u^2 L^2 + (L^2+3) u i λ) δ̃ϕ̃^'(u) + (1-u) u^2 (1 + u + (L^2+1) u^2) δ̃ϕ̃^''(u) . Note that we now have the dimensionless ratio M/L as a parameter describing the relative size of the black hole and the whole anti-de Sitter universe. In Fig. (<ref>) we show the quasinormal modes for l = 0, 1, 2 as M/L is varied between 0 and infinity. Some of these values were computed before in <cit.>, with which we of course agree. [The case where the scalar also has a conformal coupling was studied in <cit.>.] In the limit M/L →∞ the Schwarzschild black brane is approached. The l-dependence drops out and all modes converge to those of the brane, indicated by the blue dots. The opposite limit M/L → 0 is that of empty anti-de Sitter. In this limit we obtain the normal modes of empty anti-de Sitter, ω_n L = ±( l + 1 +2 n), n = 1, 2, ... <cit.>. These are pure oscillations without decay, since there is no horizon to decay into. §.§ Flat Space Now we discuss the case of flat asymptotics. The background is obtained by setting ϵ = 0 in Eq. (<ref>), which when inserted into Eq. (<ref>) gives the quasinormal mode equation for the asymptotically flat Schwarzschild black hole: 0=( ul (l+1)+2 iω) ϕ (u) +(u^3-2 u iω) ϕ '(u)-(1-u) u^3 ϕ”(u) . The behavior of ϕ is singular at spatial infinity, u = 0, but we can scale this out. There are again two solutions, ϕ_out(u) = exp(2 i ω/u) u^1-2 i ω( 1 + ...) representing waves going out to infinity, and ϕ_in(u) = u (1 + ... ) representing waves coming in from infinity. We must demand the latter to vanish, so we redefine δϕ(u) = exp(2 i ω/u) u^-2 i ωδ̃ϕ̃(u) , so that δ̃ϕ̃(u) ∝ u for the outgoing modes, while the modes coming in from spatial infinity diverge. We set the horizon u_h = 1/(2M) = 1 and define λ = ω M, to get the final equation, 0 =(l (l + 1) u - 4 i λ - 16 u (1 + u) λ^2) ϕ̃(u) + (u^3 + 4u (1 - 2 u^2) i λ) ϕ̃^'(u) + (-1 + u) u^3 ϕ̃^''(u) . In Fig.
<ref> we show the scalar quasinormal modes for l = 0, 1, 2, 3, along with a table of the first four modes for each l. These points reproduce table 1 of <cit.> exactly. §.§ De Sitter The Schwarzschild-de Sitter black hole is the most physically rich case of all. There are two horizons: the cosmological de Sitter horizon and the black hole horizon, whose coordinates we will denote as u_c and u_b respectively. The blackening factor, Eq. (<ref>) with ϵ = -1, can be conveniently rewritten in terms of these quantities as f(u) = 1 - (u_c^2 u_b^2)/(u_c^2+u_c u_b + u_b^2) · 1/u^2 - (u_c+u_b)/(u_c^2+u_c u_b + u_b^2) · u . Note that this is symmetric in u_c, u_b, but for our naming to make sense we require that u_c < u < u_b, so in particular we have to restrict 0 < u_c < u_b. These horizons have surface gravities κ_c = | (u_c^2/2) f^'(u_c) | = (u_c/2) (2 u_b^2 - u_c u_b - u_c^2)/(u_c^2+u_c u_b + u_b^2) and κ_b = | (u_b^2/2) f^'(u_b) | = (u_b/2) (u_b^2 + u_c u_b -2 u_c^2)/(u_c^2+u_c u_b + u_b^2) respectively. The quasinormal mode equation in these parameters becomes 0=(u_c^2+u_b u_c+u_b^2) (u l (l+1)+2 i ω)ϕ (u)+( u^3 (u_b+u_c)-2 i u ω(u_c^2+u_b u_c+u_b^2)-2 u_b^2 u_c^2 ) ϕ '(u) +u (u-u_b) (u-u_c) (u_b u_c+u (u_b+u_c)) ϕ”(u) . The background solution has two independent dimensionful parameters, from which we can extract a dimensionless ratio which we will take as M/L, the mass of the black hole divided by the radius of de Sitter. In terms of u_c and u_b this is M/L = u_b u_c (u_b+u_c)/(2 (u_c^2+u_b u_c+u_b^2)^3/2) (these relations are evaluated numerically in a short sketch further below). Solving Eq. (<ref>) near the cosmological horizon we again find two solutions, δϕ(u) = (u-u_c)^ - i λ_c, with λ_c ≡ω / κ_c, or δϕ(u) →const. Restoring time dependence on the first solution we get δϕ(t,u) = exp(- i ω (t + log(u - u_c)/κ_c ) ). To keep a constant phase as t increases, we need to decrease u, which means that this solution is going into the cosmological horizon. In order to keep only this solution we need it to be smooth while the outgoing one oscillates rapidly, so we have to redefine δϕ(u) = 1/(u-u_c) (u-u_c )^-i λ_c δϕ̃(u) , so that ϕ̃(u) ∝ (u-u_c) for the ingoing solution, while the outgoing solution oscillates rapidly. Making this replacement, rescaling the radial coordinate u_c < u < u_b to 0 < x < 1, and setting u_b = 1, we obtain the final quasinormal equation, shown in appendix <ref>. Before going into the numerical results it is instructive to look at the limiting cases. One limit is the extremal limit M/L → 1/(3 √(3)). In this limit u_c → u_b = 1, but the proper distance remains finite and one obtains the Nariai spacetime. Here the quasinormal modes can be calculated analytically <cit.> to be: extremal: ω_n / κ_c = - (n-1/2) i + √(l(l+1) - 1/4), n = 1, 2, ... . The other limit, M/L → 0, has two qualitatively different interpretations: it can be reached by taking M → 0 at fixed L, corresponding to the limit of empty de Sitter, or L →∞ at fixed M, corresponding to the limit of the Schwarzschild black hole in asymptotically flat spacetime. The former has analytic quasinormal modes <cit.>: de Sitter: ω_1 / κ_c = - l i ; ω_n / κ_c = -(l + n) i, n = 2, 3, ... . For l = 0 this gives a zero mode, which is present for any M/L, reflecting the fact that ϕ can be shifted by a constant. There is no analytic result for the quasinormal modes of the Schwarzschild black hole in asymptotically flat spacetime, but of course they have been computed numerically in e.g.
<cit.> and in the previous section.For small M/L, we expect the equilibration process of the full space time not to be influenced by the presence of a small black hole,and conversely we do not expect the equilibration of the black hole to be significantly different from the same black hole in asymptotically flat spacetime. So we expect to see two decoupled sets of quasinormal modes which are small deformations of those of empty de Sitter and of Schwarzschild in asymptotically flat spacetime.In Fig. (<ref>) we show the imaginary part of the quasinormal modes, for l=0 and for the full range of M/L. In both cases, blue lines correspond to complex frequencies, while red lines correspond to purely imaginary frequencies. On the left plot, the modes are normalized with respect to κ_c, and at M/L ≈ 0 we indeed see the pure de sitter modes of Eq. (<ref>).As M/L grows they move down the complex plane, but they stay on the imaginary axis for the whole range of M/L. The extremal values of Eq. (<ref>) are approached by the other set of modes, which are complex in general but become purely imaginary in the extremal limit M/L → 1/(3√(3)).In Fig. (<ref>b) the same data is presented, but now normalized as ω M. In these units, the complex modes approach those of the Schwarzschild black hole in flat space, as can be seen by comparing with Table (<ref>b), and as indicated by the blue dashed lines. All of the imaginary modes now disappear in the limit, simply because M goes to zero. In this limit it becomes numerically difficult to compute the complex modes, as there are so many smaller purely imaginary modes.Note that in this asymptotically flat limit, the boundary condition at the cosmological horizon approaches the boundary condition of outgoing waves in asymptotically flat spacetime continuously. This is most easily seen in different coordinates, ds^2 = - f(r) dt^2 + f(r)^-1 dr^2 + r^2 ( dθ^2 + sin^2 (θ) dϕ^2 ), where f is the same as in Eq. (<ref>), with r = 1/u. Defining the “tortoise” coordinate r_⋆ by dr_⋆/dr = 1/f(r) and ϕ(t,r,θ,ϕ) = e^-i ω t r^-1 Y_l m(θ,ϕ) ψ(r) we obtain the Schrödinger form of the quasinormal mode equation,0= ∂_r_⋆^2 ψ + ( ω^2 - V(r) ) ψ, V(r)=f(r) [ l(l+1)/r^2 + f^'(r)/r].The Schrödinger potential V(r) approaches zero both at the cosmological horizon of Schwarzschild de Sitter and as r →∞ in the asymptotically flat case, so that expanding ψ near these points we find in both cases ψ(r) = exp(± i ω r_⋆), where we have to choose the plus sign to obtain the outgoing wave.In the anti-de Sitter case, the Schrödinger potential no longer approaches zero at the boundary,and so in the flat limit the boundary condition does not approach that of the asymptotically flat Schwarzschild black hole. This is why in this case, discussed in section <ref>, there isn't a set of modes converging to the asymptotically flat Schwarzschild values.So the limits match, but what happens in between is also interesting. Firstly, for small M/L the purely imaginary mode coming from de Sitter is dominant, so the late time behavior of a small perturbation (that is sufficiently spread out to excite both sets of modes) is purely damped at late times. At M/L ≈ 0.051 there is a crossover, where the complex mode becomes dominant and late time behavior is a damped oscillation. 
This crossover occurs at M/L ≈ 0.090, 0.047, 0.032 for l = 1, 2, 3, so for l=1 the imaginary mode dominates in the largest range, which matches the fact that the dominant mode in pure de Sitter is that of l=1 (excluding the nondynamical zero mode at l=0).For larger M/L we no longer expect two decoupled sets of modes, and in fact we observe an interaction between the two sets, which produces the oscillations seen in Fig. (<ref>). This can be seen more clearly in Fig. (<ref>a), where we show the spectrum in the complex plane, plotted for all M/L at the same time, with arrows indicating increasing M/L. What happens is that as a purely imaginary mode moves down the axis and a complex mode reaches a similar imaginary part, the complex mode moves towards the axis, as if attracted by the imaginary mode,and moves away again once the difference in imaginary parts increases again.The “attraction” becomes stronger for higher complex modes, and for each it is strongest when it approaches the lowest imaginary mode. This makes intuitive sense, as the larger the mode number, the higher the value of M/L when it crosses the lowest imaginary mode.The consequence of this interaction is that black holes of certain masses, where the imaginary parts of a complex mode and a purely imaginary mode are equal, will oscillate slightly less. It would be hard to detect this in a real evolution though, as the effect is quite small for the lower modes, and it occurs at a different mass for each mode.Starting from the 9th lowest complex mode the complex modes actually reach the imaginary axis and hit the imaginary mode before moving away from it again, in the region indicated by the box in Fig. (<ref>a). What happens is more clearly seen in Fig. (<ref>b), which shows only the 9th mode in the region of this box, plotting the real and imaginary part separately as a function of M/L. Since ϕ is real, complex modes must occur in conjugate pairs, but once they become purely imaginary they can split. The conjugate pair moves towards the real axis and splits into two purely imaginary modes when they hit the axis. One of these imaginary modes moves up the axis towards the original purely imaginary mode, which it then pairs up with to become complex again. The other continues down the axis and stays imaginary.For higher l these interactions are weaker, with l=1 still showing some oscillations but from l=2 on they appear to be absent. Apart from that the picture at larger l is qualitatively similar to what has been discussed. In Table (<ref>) we present some numerical values.These match the values of <cit.>, which used a sixth order WKB approximation, except that we find the additional infinite set of purely imaginary modes, which have not been reported before in the literature. The WKB approximation is known to miss overdamped modes<cit.>, which in some cases misses the mode that is physically the most relevant. These modes do fit nicely with <cit.>, which did an explicit time evolution of a scalar perturbation of a black hole in de Sitter with Λ≪ 1 (i.e. very small M/L). They observe an initial exponential decay with approximately the asymptotically flat Schwarzschild black hole QNM, and a late time exponential decay with approximately the empty de Sitter QNM. 
Here we showed how these QNMs are modified for larger M/L, and that this picture only holds up to some crossover value where the complex mode is dominant.Although we did not compute electromagnetic or metric perturbations it seems clear that there also one expects purely imaginary modes as continuations of pure de Sitter modes, just as for the scalar. One might then also expect similar oscillations in the complex modes at the lowest l = s.It is therefore interesting to look at <cit.> in this light, which studies the high overtones of these electromagnetic and metric perturbations. It is shown that at fixed Λ M^2 = 0.02 and for large overtone n, Re(ω_n) oscillates as a function of Im(ω_n).This might be similar to the oscillation seen in Fig. (<ref>a), since taking a snapshot at constant Λ M^2 would still produce oscilaltory behavior as a function of n, although it is difficult to give a more precise relation, since we consider only scalar modes, at much lower n.§ EINSTEIN-MAXWELL-SCALAR MODELTo illustrate the method in a more complex example, we will look at the quasinormal modes of the 4+1-dimensional Reissner-Nordström anti-de Sitter black brane. Before we specialize to this case we will first derive the equations in a much more general setting, namely any Einstein-Maxwell-scalar black brane that is homogeneous and isotropic (and ofcourse time-independent), in any number of dimensions.We consider the following action,S = 1/2κ∫ d^d+1x√(-g)( R -Λ -ξ (∂ϕ)^2 -1/4 Z(ϕ) F^2 - V(ϕ)) ,where F = d A is the field strength of a U(1) gauge field A, which has its kinetic term modified by a function Z(ϕ) of a scalar field ϕ, we have a cosmological constant Λ and a potential V(ϕ) for the scalar field,and finally ξ is an arbitrary normalization factor for the scalar field kinetic term. We work in units where κ = 1/(8π G) = 1/2.The equations of motion derived from this action are as follows,0= R_μν - 1/2 g_μν R + 1/2 g_μνΛ + 1/2ξ g_μν(∂ϕ)^2 - ξ∂_μϕ∂_νϕ + 1/2 g_μν V(ϕ)+ 1/8 g_μν Z(ϕ) F^2 - 1/2 Z(ϕ) F_μρ F_ν^ρ, 0= ∇_μ( Z(ϕ) F^μν) , 0= 2ξ/√(-g)∂_μ(√(-g)∂^μϕ) - 1/4 Z^'(ϕ) F^2 - V^'(ϕ) . We specialize to a homogeneous, isotropic and time-independent planar black hole background, allowing us to use the following ansatz,ds^2= - f(u) dt^2 + 2 ζ(u) dt du + S(u)^2 dx^2_(d-1), A= A_t(u) dt ,ϕ = ϕ(u) . In these coordinates the temperature of any black hole with a horizon at u_h is given by T = 1/(2π) √(-1/2 (∇_μχ_ν) ( ∇^μχ^ν) )|_u=u_h = f^'(u_h) / (4 πζ(u_h) ).In order to bring the quasinormal mode equations into a useful form, we follow the approach of <cit.>, which we generalize from pure gravity to this rather general Einstein-Maxwell-scalar case.We start by fluctuating all of the fields as plane waves, with momentum q in the x direction,δ g_μν =e^-i( ω t - q x) h_μν(u) ,δ A_μ =e^-i( ω t - q x) a_μ(u) ,δϕ =e^-i( ω t - q x)δϕ(u) . In what follows we will denote spatial indices other than x, so transverse to the momentum, with α≠β.We can classify all of these fluctuations according to their helicity, or their transformation properties under the remaining SO(2) symmetry of rotations around the x-asix. [For d>4 there is a larger SO(d-2) symmetry, but it suffices to look at the same SO(2) and take the same representatives in all the channels.] 
This immediately groups the fluctuation equations into three mutually decoupled groups, also called channels, with helicities 0, ± 1 and ± 2 and containing the fields,(helicity ± 2): h_α,β, h_α,α - h_β,β,(helicity ± 1): a_α, h_t,α, h_r,α, h_x,α,(helicity 0): δϕ, a_t, a_r, a_x, h_t,t, h_t,r, h_t,x, h_r,r, h_r,x, h_x,x, h ,where for helicity 0 we defined h ≡Σ_α=4^d+1 h_αα/(d-2).Note that for d=3 there is only one transverse spatial direction, so there are no helicity 2 fields. For any higher d the decomposition is as above.Since the quasinormal modes that we want to calculate are gauge invariant quantities, we expect to be able to find gauge invariant equations for them. In order to do this, we have to express the fluctuations in gauge invariant combinations, under infinitesimal diffeomorphisms ξ_μ(u,t,x) = e^-i (ω t - q x)ξ_μ(u), and infinitesimal U(1) gauge transformations λ(u,t,x) = e^-i(ω t - q x)λ(u), which act on the fluctuations asδ h_μν = -∇_μξ_ν - ∇_νξ_μ,δ a_μ = - ∂_μλ - ξ^ρ∇_ρ A_μ - A_ρ∇_μξ^ρ,δϕ = -ξ^ρ∇_ρϕ,where all covariant derivatives are taken only with respect to the background metric (other contributions would be second order).By simply taking an arbitrary linear combination of all the fluctuations in a given channel, performing the above gauge transformation on this combination and demanding the variation to vanish we find the following gauge invariant variables,(helicity ± 2):Z_3 = h_α,β,(helicity ± 1):Z_1 = q h_t,α + ω h_x,α, E_α = a_α,(helicity 0):Z_2 = q^2 h_t,t + 2 ω q h_t,x + ω^2 h_x,x +(- ω^2 +q^2 f^'/2 S S^') h ,E_x = q a_t + ω a_x - q A_t^'/2 S S^' h, Z_ϕ = δϕ - ϕ^'/2 S S^' h.In the helicity 2 case h_α,α - h_β,β is also gauge invariant and yields the same equation.The combinations of Eq. (<ref>) are gauge invariant for any dimension d ≥ 4. For d=3 the only difference is that there is no helicity 2 component as noted above, but the other combinations remain identical. These are the unique (up to taking linear combinations) gauge invariant combinations of the fluctuations, though if one allows for their radial derivatives as well there are more possibilities <cit.>.Substituting the gauge invariant fluctuations into the fluctuation equations, they can be completely decoupled from the gauge-dependent ones,again simply by taking an arbitrary linear combination of the equations in a given channel and demanding the coefficients of the gauge-dependent functions and their derivatives to vanish. 
We stress that it is not necessary to make any gauge choice in order to do this, although it simplifies the process.In the general case the gauge invariant variables in the same channel do remain coupled however, resulting in a single decoupled equation for the helicity 2 channel, two coupled equations for helicity 1, and three coupled equations for helicity 0.Very schematically, not writing any of the coefficients, the equations take the following form, for the helicity 2 channel,0 = Z_3(u) + Z_3^'(u) + Z_3^''(u),for the helicity 1 channel,0=E_y(u) + E_y^'(u) + Z_1(u) + Z_1^'(u) + Z_1^''(u) , 0= Z_1(u) + Z_1^'(u) + E_y(u) + E_y^'(u) + E_y^''(u),and for the helicity 0 channel,0= Z_ϕ(u) + E_x(u) + E_x^'(u) + Z_2(u) + Z_2^'(u) + Z_2^''(u), 0= Z_2(u)+ Z_2^'(u) + E_x(u)+ E_x^'(u) +Z_ϕ(u) + Z_ϕ^'(u) + Z_ϕ^''(u), 0= Z_2(u) + Z_2^'(u)+ Z_ϕ(u) + Z_ϕ^'(u) + E_x(u) + E_x^'(u) + E_x^''(u).Each term has a coefficient that in general depends on u, the frequency ω and the momentum q, and any parameters of the background solution.Note that even though the original coupled equations are only quadratic in ω, through the decoupling higher powers of ω arise. The equations go up to ω, ω^3 and ω^5 for helicity 2, 1 and 0 respectively, though the imposing of boundary conditions may raise this further, as we saw in sections <ref> and <ref>.At zero momentum, most equations decouple. The equations for E_γ (γ=x, y, z) become identical to eachother, as do those for Z_γ (γ=1, 2, 3), however the fluctuation Z_2 in general still occurs in the equation for Z_ϕ. Furthermore, if the background gauge field vanishes the equations for E_γ (γ=x, y, z) decouple, since the action is quadratic in the gauge field, and if the background value of the scalar vanishes, the equation for Z_ϕ decouples only if the potentials V(ϕ), Z(ϕ) are at least quadratic in ϕ.§.§ Reissner-Nordström anti-de Sitter black braneWe now apply the previous to the asymptotically AdS_5 Reissner-Nordström black brane. We will encounter most of the numerical difficulties of the general Einstein-Maxwell scalar black brane,while still having an analytic background and even some analytic control on the quasinormal modes. This allows us to demonstrate the method in a more complicated case while also showing that it gives correct results.The background, in terms of the ansatz of Eq. (<ref>), is as follows,f(u)= u^-2( 1 - M u^4 + Q^2 u^6 ), S(u)=u^-1,ζ(u)= - u^-2, A_t(u)=√(3) Q ( u^2 - 1 ),and ofcourse ϕ = 0. We fix the horizon to be at u=1, which fixes M = 1 + Q^2. The temperature of this brane is T = (1 - Q^2/2)/π, so the extremal solution is Q_extr = √(2), which has a vanishing temperature. It has an entropy density s = 1/(4π) and a chemical potential μ = √(3) Q. This background has a single dimensionless ratio, which we choose to take as Q̃≡ Q/Q_extr.We simply plug this background into the generally derived quasinormal mode equations. As before, the boundary condition of ingoing modes at the horizon is enforced automatically by using Eddington-Finkelstein coordinates. The only thing we still need to do is fix the boundary conditions at the boundary u = 0, where we must set the non-normalizable mode to zero. For each gauge-independent fluctuation, the solutions near the boundary, and the rescaling we do to enforce this boundary condition, areZ_γ(u)∝ s u^-2 (1 + ...)+ v u^2 (1 + ...) ;Z_γ(u) = u Z̃_γ(u)(γ = 1, 2, 3, ϕ) ,E_γ(u)∝ s u(1 + ...) + v u^2 (1 + ...) 
;E_γ(u) = u Ẽ_γ(u)(γ = x, y) ,where each fluctuation is now rescaled such that the non-normalizable mode diverges, while the normalizable mode approaches 0 linearly.Having made these redefinitions we are ready to compute the quasinormal modes. In Fig. (<ref>) we show the full quasinormal mode spectum of all the channels, for the background charge Q/Q_extr = 1/2 and momentum of the perturbations q/π T = 1.5. Modes of the different channels nearly overlap. This can be understood from the fact that, as we mentioned in the previous section, at zero momentum the equations for all the E_γ become identical to eachother,giving the nearly overlapping spin 0 and spin 1 modes coming from E_x and E_y. The equations for all the Z_γ also become identical, giving the nearly overlapping spin 0, 1 and 2 modes. It is somewhat remarkable though that the modes are still so close at the reasonably high q/π T = 1.5.In Table (<ref>) we give the numerical values of the lowest 10 quasinormal modes in each channel, for Q/Q_extr = 1/2 and q/π T = 1.5 as in Fig. (<ref>). What depends much more strongly on the momentum are the three modes near ω = 0, shown enlarged in the inset. These are the hydrodynamic modes, which have the property that ω(q→ 0)→ 0. The spin 1 channel contains the shear mode, while the spin 0 channel contains the sound mode, and a mode governing charge diffusion.In Fig. (<ref>) we show the momentum-dependence of these hydrodynamic modes, again for Q/Q_extr = 1/2. At low momentum these modes are given by the dispersion relations(see e.g. <cit.>), as: shear: ω(q) = - i η/ϵ + P q^2 ,sound: ω(q) =± v_s q - i/21/ϵ + P(ζ + 4/3η) q^2,charge: ω(q) = - i/2D/ϵ + P q^2,where ϵ and P are the energy and pressure, η and ζ the shear and bulk viscosity, v_s^2 = ∂ P / ∂ϵ is the speed of sound and D is the charge diffusion, normalized to be dimensionless and 1 at Q=0.The shear viscosity is universally fixed, for any two derivative gravitational action, to be η/s = 1/(4π)<cit.>. Furthermore the theory we are looking at is conformal, so that the bulk viscosity vanishes and v_s^2 = 1/(d-1) = 1/3. This means that the sound mode is also completely fixed.While the charge diffusion constant D is not fixed by η/s and conformality, it can be calculated analytically <cit.>, so that the three hydrodynamic modes become,shear: ω(q)/π T = - i/41-Q̃^2/1+2Q̃^2(q/π T)^2 ,sound: ω(q)/π T = ±1/√(3)q/π T - i/61-Q̃^2/1+2Q̃^2(q/π T)^2 ,charge: ω(q)/π T = - i/21-Q̃^2/1+2Q̃^2(1 + Q̃^2 ) (q/π T)^2where we used ϵ + P = 4M = 4 (1 + 2 Q̃^2), following from a standard renormalization procedure<cit.>.In Fig. (<ref>) Eqs. (<ref>) are plotted as dashed lines going through the numerically computed points. In each case the dispersion relations describe the modes well at low momenta, although we can already see the higher order corrections around q/π T ≈ 1. We also check that the dispersion relations of Eq. (<ref>) are followed for any other Q. § DISCUSSION The example of the Schwarzschild-de Sitter black hole illustrates one of the main advantages of the method we employed. As one of the simplest black holes, its quasinormal modes have been computed already in 1990<cit.>,yet all this time the infinite set of purely imaginary modes, which depending on the black hole mass may even be the dominant mode, have been missed. This is either because the approximation used misses this type of modes (e.g. the WKB approximation<cit.>), or in other cases. e.g. 
with Leaver's method<cit.>, because the method requires an initial guess, effectively giving only what's expected. We checked that even the thirteenth order WKB approximation with Padé resummation of <cit.>, while giving very accurate results for the complex modes, still completely misses the purely imaginary modes.The method used here is completely a-priori, it has no guesses, no assumptions other than that the eigenfunctions be analytical, and no approximations other than the grid-size, which can easily be varied.Two additional examples, rederiving the QNM's of <cit.> and <cit.>, are included in <cit.>. Although we did not discuss a case where the background is known only numerically, it should be clear that as long as the numerical background is known to a high enough precision this will not give any problems,as an analytical background has to be converted to a numerical one in any case. In <cit.> we did use the package successfully for a numerical background, and the same method as we use here has been used with numerical backgrounds for instance in <cit.>, and in analytic but more involved situations for instance in <cit.>.In an upcoming work<cit.> we will study the QNM's of the Reissner-Nordström black hole in asymptotically de Sitter spacetime using <cit.>, in particular in relation to strong cosmic censorship.There are several possible extensions to the current method. A limitation of the current approach is that the equation, after the rescaling to implicitly solve the boundary conditions, has to have a polynomial dependence on the frequency.This is not always the case, see for instance <cit.>. There are two main ways around this issue. The most closely related one is to take the determinant of the matrix representing the full QNM equation, as a function of the frequency, and look for zeroes by evaluating it on numerical points in the complex plane <cit.>. Another way, developed and used in <cit.>, is to use the Newton-Raphson method.This can be advantageous even when not strictly necessary, as it allows one to efficiently track a single mode as a parameter is varied <cit.>.Another extension that is possible and would be interesting is to nonhomogeneous backgrounds. The main complication there is that the matrices will get a lot larger, by a factor of the size of the grid in the extra direction. All these extensions would fit nicely in the current framework and are worth adding in the future.We hope that the present package will be of use to the community.We have certainly found it very convenient in our own work, and believe others could benefit from it. In addition, we encourage anyone to contribute with optimizations or bug fixes or extra features, it is hosted on GitHub, which facilitates such collaboration.§ ACKNOWLEDGEMENTSWe want to thank especially L. Yaffe and the organizers of the 2014 Mathematica summer school on theoretical physics in Porto, as this package was inspired by an exercise by L. Yaffe at this school <cit.>.We also want to thank W. van der Schee and W. Sybesma for using earlier versions of this package and helping fix the occasional bugs and other improvements, and U. Gursoy, J. M. Magan and S. Vandoren for collaborations on different projects during which this package was used and developed.We thank N. Kaplis, M. Ammon and O. Dias for helpful discussions regarding the numerics, J. Matyjasek for help with his method, V. 
Cardosofor interesting discussions on the quasinormal modes of the Schwarzschild and Schwarzschild-de Sitter black holes and Jason Harris for comments and suggestions on the code.This work was supported by the Netherlands Organisation for Scientific Research (NWO) under VIDI grant 680-47-518, and the Delta-Institute for Theoretical Physics (D-ITP) that is funded by the Dutch Ministry of Education, Culture and Science (OCW). § BENCHMARKSHere we provide some benchmarks on the performance, in terms of speed and accuracy.We will consider again the massless scalar in Schwarzschild-AdS, Eq. (<ref>).When it comes to timing we expect this equation to be fully representative, the speed should at least to a good approximation be independent of the details of the equation.One just has to keep in mind that for N_eq coupled equations with the maximal power of the frequency being p the matrix size scales as N_eqp.In Fig. (<ref>) we show a log-log plot of the time t taken to compute the quasinormal modes (in seconds) as a function of the grid size N, using machine precision (blue points), a precision of 50 digits (yellow points) and a precision depending on the grid size as p = N/2 (green points). In all cases we find a power law t(N) = t_0 N^q. For the computation with machine precision, the power seems to be different for smaller grid sizes (q ≈ 2.38) and for larger grid sizes (q ≈ 3.2). For the case with 50 digits of precision, we find q ≈ 3.1, while for the one with precision increasing as N/2 we find q ≈ 3.3. Summarizing, the scaling with N is roughly as N^3, except for grids with roughly less than 400 points where it scales as N^2.38.The dependence on precision is very mild, as we can see by comparing the yellow and green points, except in the transition from machine precision to higher precision, which slows the computation down by roughly 2 orders of magnitude.To the right of Fig. (<ref>) we give a few explicit values of the time taken on a laptop.Note that for larger grid sizes and/or using higher precision, the computation of the eigenvalues takes up virtually all of the time, while only for low grid sizes the other steps in the computation, namely the construction of the matrices, become important. We check that for a grid of 40 points with machine precision, which we consider roughly the smallest grid that can still give valuable results, about 65% of the time is taken up by highly optimized built-in functions of Mathematica (which are programmed in C). These are the construction of the derivative matrices and the actual computation of the eigenvalues. This means that in the worst case, where the rest of the computation can be optimized so much as to take a negligible amount of time, the speed gain would be about 35%.However, since for example the construction of the numerical matrix, which involves computations with vectors (in particular the grid), does take some time, we believe the actual room for improvement to be much smaller.In Fig. (<ref>) we show the number of accurately computed modes as a function of the grid size N, for different values of precision, as found by comparison with the computation with N = 300. The left figure is for the massless scalar at zero momentum, Eq. (<ref>). 
Note that before the different colors branch off the straight line, they follow it exactly, lying under the blue dots.This means that when it comes to the digits of precision used, for a given grid size N it needs to be above some threshold to compute the most accurate modes, but increasing it further has no effect.As long as the precision is high enough, the number of accurate modes increases linearly with the grid size, up until the point where the precision is no longer high enough, when the number of modes actually goes down a little with increasing grid size.In order to see what features we can expect to be generic, independent of the equation we are solving, and which are not,we repeat this procedure on the right-hand side of Fig. (<ref>) for the same scalar but now with a large momentum of q/π T = 160 (these modes were also computed in <cit.>, we ofcourse agree on the result).This may seem like a trivial change, and on the level of the equation it is, but the solutions become highly oscillatory and thus harder to capture with a few Chebyshev polynomials.From these two cases, we expect that the two features mentioned above are generic, at least after N is large enough to resolve the first mode.The details are of course not generic, such as the actual value of n(N) for a specific N, or the actual value of the precision p(N) needed, or the coefficient of the linear growth of n(N) (curiously, at high momentum we need less precision than at zero momentum).§ QUASINORMAL MODE EQUATIONSIn this appendix we write the quasinormal mode equations in their rescaled form which can directly be used in the package. Quasinormal mode equations Here we present the scalar quasinormal mode equations of the Schwarzschild black hole for de Sitter and flat asymptotics, in a form that can directly be used for numerical computation.For the Schwarzschild black hole, the anti-de Sitter case was given in Eq. (<ref>) and the asymptotically flat case in Eq. (<ref>). In de Sitter, using the radial variable v = (u - u_c)/(u_b - u_c) with the cosmological horizon at v = 0 and the black hole horizon at v = 1 , and having rescaled ϕ according to Eq. (<ref>) to enforce the correct boundary conditions, the equation is0 = (1-u_c) C_0 ϕ̃(v) + C_1 ϕ̃^'(v) + C_2 ϕ̃^''(v) ,with,C_0= ( u_c^2 ((l^2+l+3) v-v^3-2)+(v-1) u_c^3 (-(l^2+l+2) v+v^2+1)+ v u_c (l^2+l-v^2+5 v-6)+v^2 (l^2+l+v-2) ) +4 v ω ^2(u_c^2+u_c+1)^2 ((v^2-3 v+2) u_c^3-(v^2-2) u_c^2-(v^2-4 v+1) u_c+(v-1) v)/(u_c-1) u_c^2 (u_c+2)^2+ 2 i ω(u_c^2+u_c+1)(-2 (v^3-2 v+1) u_c^2+v (-2 v^2+9 v-5) u_c+(2 v^3-6 v^2+4 v-1) u_c^3+v^2 (2 v-3))/u_c (u_c+2), C_1= v (u_c-1) (-(v^3-3 v+2) u_c^2-v (v^2-5 v+6) u_c+(v-1)^3 u_c^3+(v-2) v^2)- 2 i v ω(u_c^2+u_c+1) ((-2 v^3+5 v-2) u_c^2-2 v (v^2-4 v+2)u_c+(2 v^3-6 v^2+5 v-1) u_c^3+2 (v-1) v^2)/u_c (u_c+2), C_2= (v-1) v^2 (1-u_c) (-(v^2+v-2) u_c^2+(v-1)^2 u_c^3-(v-3) v u_c+v^2) . The equations for the general Einstein-Maxwell-scalar backgrounds, and those for the specific case of the Reissner-Nordström black brane can be found in <cit.>.§ NUMERICAL VALUESIn this appendix we provide some quantitative results. All complex modes come with their negative complex conjugate which we do not show.In Table (<ref>) we show the lowest lying quasinormal mode of the Schwarzschild-de Sitter black hole discussed in section <ref>, for l = 1, 2, and the second lowest for l=2. We take as parameter Λ M^2 = 3 M^2/L^2, for ease of comparison with <cit.> which computed the same QNMs with a sixth order WKB approximation. 
Results agree to a high precision, except that we find an extra set of purely imaginary modes (in the top left entry this is the dominant mode). In Table (<ref>) we show the higher quasinormal modes of Reissner-Nordström, at Q/Q_extr = 1/2 and q/π T = 1.5, as in Fig. (<ref>). The hydrodynamic modes, not shown in the table, are ω_shear/π T = -0.29171230 i, ω_sound/π T = 0.90170632 - 0.19067682 i and ω_charge / π T = -0.76581370 i. 00package A. P. Jansen, QNMspectral, https://github.com/APJansen/QNMspectralLIGO1 A. P. Abbott et al., Observation of Gravitational Waves from a Binary Black Hole Merger, Phys. Rev. Lett. 116 (2016) 061102.LIGO2 A. P. Abbott et al., GW151226: Observation of Gravitational Waves from a 22-Solar-Mass Binary Black Hole Coalescence, Phys. Rev. Lett. 116 (2016) 241103.LIGO3 A. P. Abbott et al., GW170104: Observation of a 50-Solar-Mass Binary Black Hole Coalescence at Redshift 0.2, Phys. Rev. Lett. 118 (2017) 221101.LIGOtests A. P. Abbot et al., Tests of general relativity with GW150914, Phys. Rev. Lett. 116 (2016) 221101.Konoplya:2016pmh R. Konoplya and A. Zhidenko, Detection of gravitational waves from black holes: Is there a window for alternative theories?, Phys. Lett. B 756 (2016) 350.reviewBertiCardosoStarinets E. Berti, V. Cardoso and A. O. Starinets, Quasinormal modes of black holes and black branes, Class. Quant. Grav. 26 (2009) 163001.reviewKonoplyaZhidenko R. A. Konoplya and A. Zhidenko, Quasinormal modes of black holes: From astrophysics to string theory, Rev. Mod. Phys.83 (2011) 793.KovtunStarinets P. K. Kovtun and A. O. Starinets, Quasinormal modes and holography, Phys. Rev. D 72 (2005) 086009.HubenyHorowitz G. T. Horowitz and V. E. Hubeny, Quasinormal modes of AdS black holes and the approach to thermal equilibrium, Phys. Rev. D 62 (2000) 024027.planarAdS A. Nunez and A. O. Starinets, AdS / CFT correspondence, quasinormal modes, and thermal correlators in N=4 SYM, Phys. Rev. D 67 (2003) 124013.SonStarinetsD. T. Son and A. O. Starinets,Minkowski space correlators in AdS / CFT correspondence: Recipe and applications,JHEP 09 (2002) 042.fluidgravity S. Bhattacharyya, V. E. Hubeny, S. Minwalla and M. Rangamani, Nonlinear Fluid Dynamics from Gravity, JHEP 02 (2008) 045.holography J. M. Maldacena, The Large N limit of superconformal field theories and supergravity, Int. J. Theor. Phys. 38 (1999) 1113-1133.resurgence M. P. Heller, R. A. Janik and P. Witaszczyk, Hydrodynamic Gradient Expansion in Gauge Theory Plasmas, Phys. Rev. Lett. 21 (2013) 211602.linearevolution M. P. Heller, D. Mateos, W. van der Schee and M. Triana, Holographic isotropization linearized, JHEP 09 (2013) 026.linearevolution2 M. P. Heller, D. Mateos, W. van der Schee and D. Trancanelli, Strong Coupling Isotropization of Non-Abelian Plasmas Simplified, Phys. Rev. Lett.108 (2012) 191601.hydro1 G. Policastro, D. T. Son and A. O. Starinets, From AdS / CFT correspondence to hydrodynamicsJHEP 09 (2002) 043.etaoversG. Policastro, D. T. Son and A. O. Starinets,The Shear viscosity of strongly coupled N=4 supersymmetric Yang-Mills plasma,Phys. Rev. Lett 87 (2001) 081601.oneloop F. Denef, S. A. Hartnoll and S. Sachdev, Quantum oscillations and black hole ringing,Phys. Rev. D 80 (2009) 126016.analyticExample P. Betzios, U. Gürsoy, J. Matti and G. Policastro, Quasi-normal modes of a strongly coupled non-conformal plasma and approach to criticality, arXiv:hep-th/1708.02252.NatarioSchiappa J. Natario and R. 
Schiappa, On the classification of asymptotic quasinormal frequencies for d-dimensional black holes and quantum gravity, Adv. Theor. Math. Phys. 8 (2004) 1001-1131.reviewCheslerYaffe P. M. Chesler and L. G. Yaffe, Numerical solution of gravitational dynamics in asymptotically anti-de Sitter spacetimes, JHEP 07 (2014) 086.reviewDiasSantosWay O. J. C. Dias, J. E. Santos and B. Way, Numerical Methods for Finding Stationary Gravitational Solutions, JHEP 07 (2014) 086.reviewGN P. Grandclement and J. Novak, Spectral Methods for Numerical Relativity, JHEP 07 (2014) 086.boyd J. P. Boyd, Chebyshev & Fourier Spectral Methods, Courier Dover Publications (2001).Dias:2009iu O. J. C. Dias, P. Figueras, R. Monteiro, J. E. Santos and R. Emparan, Instability and new phases of higher-dimensional rotating black holes, Phys. Rev. D 80 (2009) 111701.Dias:2010eu,O. J. C. Dias, P. Figueras, R. Monteiro, H. S. Reall and J. E. Santos,An instability of higher-dimensional rotating black holes,JHEP 05 (2010) 076. Monteiro:2009keR. Monteiro, M. J. Perry and J. E. Santos,Semiclassical instabilities of Kerr-AdS black holes,Phys. Rev. D 81 (2010) 024001.qnmYaffe L. Yaffe,Mathematica Summer School for Theoretical Physics - Numerical holography using Mathematica,http://msstp.org/?q=node/289.YaffeFuiniUhlemann J. F. Fuini, C. F. Uhlemann and L. G. Yaffe, Damping of hard excitations in strongly coupled 𝒩 = 4 plasma,JHEP bf 12 (2016) 042.scalarinstabilityU. Gürsoy, A. Jansen and W. van der Schee, New dynamical instability in asymptotically anti-de Sitter spacetime, Phys. Rev. D 94 (2016) 061901.explosionInstability1 P. Bosch, A. Buchel and L. Lehner, Unstable horizons and singularity development in holography, JHEP 07 (2017) 135.explosionInstability2 A. Buchel, Singularity development and supersymmetry in holography, JHEP 08 (2017) 134.lifshitzevolutionU. Gürsoy, A. Jansen and W. Sybesma and S. Vandoren,Holographic Equilibration of Nonrelativistic Plasmas,Phys. Rev. Lett. 5 (2016) 051601. entropyA. Jansen and J. M. Magan,Black hole collapse and democratic models,Phys. Rev. D 94 (2016) 104007. extremaldsV. Cardoso and J. P. S. Lemos,Quasinormal modes of the near extremal Schwarzschild-de Sitter black hole,Phys. Rev. D 67 (2003) 084020.dsnumericA. Zhidenko,Quasinormal modes of Schwarzschild de Sitter black holes,Class. Quant. Grav. 21 (2004) 273-280. firstSdSF. Mellor and I. Moss, Stability of black holes in de Sitter space,Phys. Rev. D 41 (1990)403.flatSR. A. Konoplya,Quasinormal modes of the Schwarzschild black hole and higher order WKB approach,J. Phys. Stud. 8 (2004) 93-100.puredS A. Lopez-Ortega, Quasinormal modes of D-dimensional de Sitter spacetime, Gen. Rel. Grav. 38 (2006) 1565-1591.Konoplya:2004uk R. A. Konoplya and A. Zhidenko, High overtones of Schwarzschild-de Sitter quasinormal spectrum, JHEP 0406 (2004) 037.chargedBraneS. Janiszwekski and M. Kaminski,Quasinormal modes of magnetic and electric black branes versus far from equilibrium anisotropic fluids,Phys. Rev. D 93 (2016) 025006.pureAdS C. P. Burgess and C. A. Lutken, Propagators and Effective Potentials in Anti-de Sitter Space, Phys. Lett. B153 (1985) 137-141.pureAdSlimit R. A. Konoplya, Quasinormal modes of a small Schwarzschild anti-de Sitter black hole, Phys. Rev. D 66 (2002) 044009.evolutionSdS P. R. Brady, C. M. Chambers, W. G. Laarakkers and E. Poisson, Radiative falloff in Schwarzschild-de Sitter space-time, Phys. Rev. D 60 (1999) 064003.masterequations H. Kodama and A. 
Ishibashi, Master equations for perturbations of generalized static black holes with charge in higher dimensions, Prog. Theor. Phys. 111 (2004) 29-73.chargedbranesM. Ammon, M. Kaminski, R. Koirala, J. Lieber and J. Wu,Quasinormal modes of charged magnetic black branes & chiral magnetic transport,JHEP 04 (2017) 067.chargediffusionJ. Mas, J. P. Shock and J. Tarrio,A Note on conductivity and charge diffusion in holographic flavour systems, JHEP 01 (2009) 025.renorm S. De Haro, S. N. Solodukhin and K. Skenderis, Holographic reconstruction of space-time and renormalization in the AdS / CFT correspondence, Commun. Math. Phys. 217 (2001) 595-622.WKB S. Iyer and C. M. Will, Black-hole normal modes: A WKB approach. I. Foundations and application of a higher-order WKB analysis of potential-barrier scattering, Phys. Rev. D 35 (1987) 3621-3631.WKBhighorder J. Matyjasek and M. Opala, Quasinormal modes of black holes. The improved semianalytic approach, Phys. Rev. D 96 (2017) 024011.leaver E. W. Leaver, An Analytic Representation for the Quasi-Normal Modes of Kerr Black Holes, Proc. Roy. Soc. Lond. A403 (1985) 285.numericalbackground1 R. A. Janik, J. Jankowski and H. Soltanpanahi, Nonequilibrium Dynamics and Phase Transitions in Holographic Models, Phys. Rev. Lett. 117 (2016) 091603.numericalbackground2 R. A. Janik, J. Jankowski and H. Soltanpanahi, Quasinormal modes and the phase structure of strongly coupled matter, JHEP 06 (2016) 047.RNdSSCCV. Cardoso, J. Costa, K. Destounis, P. Hintz and A. Jansen,Quasinormal modes and Strong Cosmic Censorship,to appear.QNMnonpolyO. J. C. Dias and J. E. Santos,Boundary Conditions for Kerr-AdS Perturbations,JHEP 10 (2013) 156. QNMbydetS. Waeber, A. Schfer, A. Vuorinen and L. G. Yaffe,Finite coupling corrections to holographic predictions for hot QCD,JHEP 11 (2015) 087. QNMbyNR V. Cardoso, O. J. C. Dias, G. S. Hartnett, L. Lehner and J. E. Santos, Holographic thermalization, quasinormal modes and superradiance in Kerr-AdS, JHEP 04 (2014) 183. confcoupling1 J. S. F. Chan and R. B. Mann, Scalar wave falloff in asymptotically anti-de Sitter backgrounds, Phys. Rev. D 55 (1997) 7546-7562.confcoupling2 J. S. F. Chan and R. B. Mann, Scalar wave falloff in topological black hole backgrounds, Phys. Rev. D 59 (1999) 064025.Santos:2015iua J. E. Santos and B. Way, Neutral Black Rings in Five Dimensions are Unstable, Phys. Rev. Lett. 114 (2015) 221101.Dias:2015wqa O. J. C. Dias, G. Mahdi and J. E. Santos, Linear Mode Stability of the Kerr-Newman Black Hole and Its Quasinormal Modes,Phys. Rev. Lett. 114 (2015) 151101. Dias:2014eua O. J. C. Dias, G. S. Hartnett and J. E. Santos Quasinormal modes of asymptotically flat rotating black holes, Class. Quant. Grav. 31 (2014) 245011. Hartnett:2013fba G. S. Hartnett and J. E. Santos, Non-Axisymmetric Instability of Rotating Black Holes in Higher Dimensions, Phys. Rev. D 88 (2013) 041505.Dias:2011jg O. J. C. Dias, R. Monteiro and J. E. Santos, Ultraspinning instability: the missing link, JHEP 08 (2011) 139.Dias:2010gk O. J. C. Dias, P. Figueras, R. Monteiro and J. E. Santos, Ultraspinning instability of anti-de Sitter black holes, JHEP 12 (2010) 067.Dias:2010ma O. J. C. Dias, R. Monteiro, H. S. Reall and J. E. Santos, A Scalar field condensation instability of rotating anti-de Sitter black holes, JHEP 11 (2010) 036.Dias:2010maa O. J. C. Dias, P. Figueras, R. Monteiro and J. E. Santos, Ultraspinning instability of rotating black holes, Phys. Rev. D 82 (2010) 104025.
http://arxiv.org/abs/1709.09178v2
{ "authors": [ "Aron Jansen" ], "categories": [ "gr-qc", "hep-th" ], "primary_category": "gr-qc", "published": "20170926180004", "title": "Overdamped modes in Schwarzschild-de Sitter and a Mathematica package for the numerical computation of quasinormal modes" }
OT1pzcmit theoremTheorem[section] propositionProposition[section] lemmaLemma[section] corollaryCorollary[section] assumptionAssumption[section] definitionDefinition[section] remarkRemark[section]
http://arxiv.org/abs/1709.10345v1
{ "authors": [ "Yingdong Lu", "Mark S. Squillante", "Chai Wah Wu" ], "categories": [ "math.OC" ], "primary_category": "math.OC", "published": "20170927215311", "title": "Control of Time-Varying Epidemic-Like Stochastic Processes and Their Mean-Field Limits" }
http://arxiv.org/abs/1709.09645v2
{ "authors": [ "Moo K. Chung" ], "categories": [ "q-bio.NC", "stat.ME" ], "primary_category": "q-bio.NC", "published": "20170927173010", "title": "Statistical Challenges of Big Brain Network Data" }
label1,label2]Ernesto Kemp [email protected][label1]“Gleb Wataghin" Institute of Physics, University of Campinas – UNICAMP, 13083-859, Campinas, SP, Brazil [label2]Fermi National Accelerator Laboratory, Batavia, IL 60510-0500, USA [cor1]To appear in the proceedings of the STARS 2017 conference:<https://indico.cern.ch/event/542644/> The last decade was remarkable for neutrino physics. In particular, the phenomenon of neutrino flavor oscillations has been firmly established by a series of independent measurements. All parameters of the neutrino mixing are now known and we have elements to plan a judicious exploration of new scenarios that are opened by these recent advances. With precise measurements we can test the 3-neutrino paradigm, neutrino mass hierarchy and CP asymmetry in the lepton sector. The future long-baseline experiments are considered to be a fundamental tool to deepen our knowledge of electroweak interactions. The Deep Underground Neutrino Experiment – DUNE will detect a broad-band neutrino beam from Fermilab in an underground massive Liquid Argon Time-Projection Chamber at an L/E of about 10^3 km / GeV to reach good sensitivity for CP-phase measurements and the determination of the mass hierarchy. The dimensions and the depth of the Far Detector also create an excellent opportunity to look for rare signals like proton decay to study violation of baryonic number, as well as supernova neutrino bursts, broadening the scope of the experiment to astrophysics and associated impacts in cosmology. In this presentation, we will discuss the physics motivations and the main experimental features of the DUNE project required to reach its scientific goals.DUNE Long-Baseline Neutrino Oscillations Supernovae Neutrinos Baryon Number Violation§ THE DUNE AND LBNF PROJECTSAlthough the Standard Model of particle physics presents a remarkably accurate description of the elementary particles and their interactions, it is known that the current model is incomplete and that a more fundamental underlying theory must exist. Results from the last decades, that the three known types of neutrinos have nonzero mass, mix with one another and oscillate between generations, implies physics beyond the Standard Model. The neutrino mass generation shows to be more complex than the Higgs mechanism embedded in the Glashow-Salam-Weinberg electroweak theory. The neutrino interactions have very small cross-sections, for this reason, neutrinos can only be observed and studied via intense neutrino sources and large detectors. Neutrino experiments can be a straight way (at lower cost) to test the fundamentals of electroweak interactions. Additionally, a large detector placed underground, in a low-background environment can be used to study rare process, as the proton decay or supernovae neutrinos bursts, in which neutrinos seem to play a major role in the core-collapse mechanism and the subsequent ejection of the star's matter. If the collapsing star is massive enough, it can also end as a black hole. The following list summarizes the above remarks and their related open questions in neutrino physics:– Matter-antimatter asymmetry: Immediately after the Big Bang, matter and antimatter were created equally, but now matter dominates.– Nature's fundamental underlying symmetries: the patterns of mixings and masses between the particles of the Standard Model is not understood. 
– Grand Unified Theories (GUTs): Experimental resultssuggest that the physical forces observed today were unified into one force at the birth of the universe. GUTs predict that protons should decay, a process that has never been observed. – Supernovae: How do supernovae explode and what new physics will we learn from a neutrino burst? To address all of these questions the worldwide neutrino physics community is developing a long-term program to measure unknown parameters of the Standard Model of particle physics and search for new phenomena. The physics program will be carried out as an international, leading-edge, dual-site experiment for neutrino science and proton decay studies, which is known as the Deep Underground Neutrino Experiment (DUNE).The infrastructure for the beam and the experimental sites constitutes the Long-Baseline Neutrino Facility (LBNF). Together, DUNE and LBNF represent a very ambitious program in neutrino and elementary particle science, not to mention the impacts of the expected outcomes in astrophysics and cosmology.DUNE will comprise two experimental sites. The first at the Fermilab site, hosting the world’s highest-intensity neutrino beam and a high-precision near detector. The second at the Sanford Underground Research Facility (SURF) 1300 km away in Lead (SD), where a 40 kton liquid argon time-projection chamber (LArTPC) far detector will be installed underground (∼ 1500 m deep). Fermilab will also provide all of the conventional and technical facilities necessary to support the beamline and detector systems.§.§ The Beam The LBNF beamline includes key features inherited from the successful NuMI beam line design for the MINOS and NOvA experiment (Ayres et al. 2004).It exploits the same configuration of a target and horn system, with the spacing of the target and two horns tuned to obtain an intense neutrino flux at the first oscillation maximum and to extend as much as possible to the second as well. Profiting from the effective experience of the NuMI design the decay pipe is helium-filled, while the target chase is air-filled. The proton beam energy can be adjusted within the 60 and 120 GeV range, with the corresponding range of beam power from 1.0 to 1.2 MW. The ability to vary the proton beam energy is essential for optimizing the neutrino spectrum and to understand systematic effects in the beam production. An energy tunable neutrino beam also provides flexibility to allow the addressing of future questions in neutrino physics that may require a different neutrino energy spectrum.The reference design has values of 204 m length and 4 m diameter for the decay pipe, both matching well to the physics of DUNE but studies to determine the optimal dimensions continue. The main elements of the beam line are shown in Figure <ref>. §.§ The Far Detector The far detector (FD) will be composed of four similar modules, each one a liquid argon time-projection chamber (LArTPC). The LArTPC technology (Rubbia 1977; Acciarri et al. 2015a) provides excellent tracking and calorimetry performance, making it as an excellent choice for massive neutrino detectors such as the DUNE FD. Moreover, the LArTPC ability for precise reconstruction of the kinematical properties of particles increases the correct identification and measurements of neutrino events over a wide range of energies. The full imaging of events will allow study of neutrino interactions and other rare events with an unprecedented resolution. 
The huge mass will grant the collection of a vast number of events, with sufficient statistics for precision studies. The reference design adopts a single-phase (SP) readout, where the readout anode is composed of wire planes in the LAr volume. An alternative design is also considered, based on a dual-phase (DP) approach, in which the ionization charges are extracted, amplified and detected in gaseous argon (GAr) above the liquid surface.The photon-detection schemes in the two designs are also different, in the SP the photon detectors are distributed within the LAr volume, in the DP they are concentrated at the bottom of the tank. A sketch of both proposals (SP and DP) can be found in Figure <ref>.The 10 kton TPC reference design has an active volume with 12 m high, 14.5 m wide and 58 m long. The TPC is instrumented with anode plane assemblies (APAs), which are 6.3 m high and 2.3 m wide, and cathode plane assemblies (CPAs), 3 m high by 2.3 wide. They are arranged in stacks forming walls (three CPAs interleaved by two APAs) and providing drift modules separated by 3.6 m each along the beam direction (see Figure <ref>). The CPAs are held at −180 kV, such that ionization electrons drift a maximum distance of 3.6 m in the electric field of 500 V/cm. The ultimate validation of the engineered solutions for both designs of the FD is foreseen in the context of the neutrino activities at the CERN around 2018, where full-scale engineering prototypes will be assembled and commissioned. Following this milestone, a test-beam data campaign will be executed to collect a large sample of charged-particle interactions in order to study the response of the detector with high precision. There is recognition that the LArTPC technology will continue to evolve with (1) the large-scale prototypes at the CERN Neutrino Platform and the experience from the Fermilab SBN program (Acciarri et al. 2015b), and (2) the experience gained during the construction and commissioning of the first 10 kton module. The chosen strategy for implementing the far detector is a staged approach. The deployment of consecutive modules will enable an early science program while allowing implementation of improvements and developments during the experiment’s lifetime.§.§ The Near Detector The primary role of the DUNE near detector system is to perform a precise characterization of the energy spectrum and composition of the neutrino beam at the source, in terms of both muon and electron-flavored neutrinos and antineutrinos. It can also be profited to provide measurements of neutrino interaction cross sections. These features aim to control systematic uncertainties with the precision needed to fulfill the DUNE primary science objectives. The discrimination between fluxes of neutrinos and antineutrinos requires a magnetized neutrino detector to charge-discriminate electrons and muons produced in the neutrino charged-current interactions. As the near detector will be exposed to an intense flux of neutrinos, it will collect an unprecedentedly large sample of neutrino interactions, allowing for an extended science program. The near detector will, therefore, provide a broad program of fundamental neutrino interaction measurements, which are an important part of the ancillary scientific goals of the DUNE collaboration. The reference design for the near detector design is a fine-grained tracker (FGT), illustrated in Figure <ref>. 
Its subsystems include a central straw-tube tracker and an electromagnetic calorimeter surrounded by a 0.4 T dipole field. The steel of the magnet yoke will be instrumented with muon identifiers.§ THE DUNE PHYSICS PROGRAM§.§ Neutrino Oscillations The small size of neutrino masses and their relatively large mixing bears little resemblance to quark masses and mixing, suggesting that different physics – and possibly different mass scales – in the two sectors may be present, thus motivating precision study of mixing and CP violation in the lepton sector of the Standard Model. DUNE plans to pursue a detailed study of neutrino mixing, resolve the neutrino mass ordering, and search for CP violation in the lepton sector by studying the oscillation patterns of high-intensity ν_μ and ν_μ beams measured over a long baseline. The oscillation probability of flavor conversion P(ν_μ→ν_e), to first order (Nunokawa et al. 2008), considering propagation of a neutrino beam through matter in a constant density approximation, have two major contributions for observations of CP asymmetry: δ_CP and a=G_fN_e / √(2). In a, G_f is the Fermi constant, N_e is the is the e^- number density of the matter crossed by neutrinos.Both δ_CP and a switch signs in going from neutrinos to antineutrinos. The matter effect is modulated by the value of constant a, according to the presence of e^- and the absence of e^+. In the few-GeV energy range, the asymmetry from the matter effect increases with baseline as the neutrinos pass through more matter, therefore an experiment with a longer baseline will be more sensitive to the neutrino mass hierarchy (MH). For baselines longer than ∼1200 km, the degeneracy between the asymmetries from matter and CP-violation effects can be resolved; hence DUNE, with a baseline of ∼1300 km, will be able to unambiguously determine the neutrino MH and measure the value of δ_CP (Diwan 2004). The experimental sensitivities presented here are estimated using GLoBES (Huber et al. 2005). GLoBES takes neutrino beam fluxes, cross sections, and detector-response parameterization as inputs. It was also included dependences on the design of the neutrino beam. The cross section inputs to GLoBES have been generated using GENIE 2.8.4 (Andreopoulos 2010). The neutrino oscillation parameters and the uncertainty on those parameters are taken from the Nu-Fit (Gonzalez-Garcia 2014) global fit to neutrino data. Sensitivities to the neutrino MH and the degree of CP violation are obtained by performing a simultaneous fit over the (-)ν_μ→(-)ν_μ and(-)ν_μ→(-)ν_e oscillated spectra. Figure <ref> shows the significance with which the MH can be determined as a function of the value of δ_CP, for an exposure which corresponds to seven years of data (3.5 years in neutrino mode plus 3.5 years in antineutrino mode) with a 40 kton detector and a 1.07 MW (80 GeV) beam. For this exposure, the MH is determined with a minimum significance of √(Δχ^2)= 5 for nearly 100% of δ_CP values for the reference beam design.In the approximation for the electron neutrino appearance probability (Nunokawa et al. 2006) there are CP-odd terms (dependent on sin δ_CP) that have opposite signs in ν_μ→ν_e and ν_μ→ν_e oscillations. For δ_CP≠ 0 or π, these terms introduce an asymmetry in neutrino versus antineutrino oscillations.The variation in the ν_μ→ν_e oscillation probability (Nunokawa et al. 
2006) with the value of δ_CP indicates that it is experimentally possible to measure the value of δ_CP at a fixed baseline using only the observed shape of the ν_μ→ν_e or the ν_μ→ν_e appearance signal measured over an energy range that encompasses at least one full oscillation interval. A measurement of the value of δ_CP≠ 0 or π, assuming that neutrino mixing follows the three-flavor model, would imply CP violation.Figure <ref> shows the significance with which the CP violation (δ_CP≠ 0 or π) can be determined as a function of the value of δ_CP for an exposure of 300 kt × MW × year, which corresponds to seven years of data (3.5 years in neutrino mode plus 3.5 years in antineutrino mode) with a 40 kton detector and a 1.07 MW 80 GeV beam.§.§ Supernovae Neutrinos The DUNE experiment will be sensitive to neutrinos in the few tens of MeV range. This regime is of particular interest for detection of the burst of neutrinos from a galactic core-collapse supernova. The sensitivity of DUNE is primarily to electron flavor supernova neutrinos, and this capability is unique among existing and proposed supernova neutrino detectors for the next decades. Neutrinos from other astrophysical sources are also potentially detectable.Liquid argon has a particular sensitivity to the ν_e component of a supernova neutrino burst, via the charged-current (CC) absorption ν_e+^40Ar→e^-+^40Ar^* for which the observables are the e^- plus de-excitation products from the K^* final state, as well as a ν_e interaction and elastic scattering on electrons. DUNE’s capability to characterize the ν_e component of the signal is unique and critical. Other interesting astrophysics studies can be carried out by DUNE, such as solar neutrinos and supernova neutrinos diffuse background, neutrinos from accretion disks and black-hole/neutron star mergers. There may also be signatures of dark-matter WIMP annihilations in the low-energy signal range. Neutral-current (NC) scattering on Ar nuclei by any type of neutrino, ν_x+^40Ar→ν_x+^40Ar^*, is another process of interest for supernova detection in LAr detectors although is not yet fully studied. The signature is given by the cascade of de-excitation γs from the final-state Ar nucleus that can be potentially be used in tagging NC events. The predicted event rate (NC or CC) from a supernova burst is calculated by folding expected neutrino differential energy spectra in with cross sections for the relevant channels and with detector response. The number of signal events scales with mass and inverse square of distance as shown in Figure <ref>. The rates in the Figure <ref> show the ability of DUNE in have a large statistics in case of a galactic supernova, and resolve astrophysical phenomena to be observable in the neutrino burst signatures. In particular, the supernova explosion mechanism, which in the current paradigm involves energy deposition via neutrinos, is still not well understood, and the neutrinos themselves will bring the insight needed to confirm or refute the paradigm.§.§ Baryon Number Violation Grand Unified Theories (GUTs) unite the three gauge interactions of particle physics – strong, weak, and electromagnetic – into one single force. One of the consequences is the predictions about baryon number violation and proton lifetime that may be accessible to DUNE, since the observation requires a kton detector working in low background environment (Senjanovic 2010). 
Although no evidence for proton decay has been detected, lifetime limits from the current generation of experiments constrain the construction of GUT models. In some cases, these limits are approaching the upper bounds of what these models will allow. This situation points naturally toward continuing the search with new, highly capable underground detectors, especially those with improved sensitivity to specific proton decay modes favored by GUT models. In particular, the exquisite imaging, particle identification and calorimetric response of the DUNE LArTPC Far Detector opens the possibility of obtaining evidence for nucleon decay on the basis of a single well reconstructed event. The strength of the DUNE experiment for proton decay studies relies on the capability to detect two significant decay modes: i) p→e^+π^0 which is often predicted to have the higher branching fraction. The event kinematics and final states make this channel particularly suitable for a water Cherenkov detectors. ii) p→K^+ν. This mode is dominant in most supersymmetric GUTs, and is uniquely interesting for DUNE since stopping kaons have a higher ionization density than lower-mass particles. A LArTPC could identify the K^+ track with high efficiency. Also, many final states of K^+decay would be fully reconstructible in an LArTPC. The advantage of LArTPC over Cherenkov detectors is clear from the comparison of efficiencies and backgrounds (per kton) for the decays with K in final states. While LArTPC efficiencies are > 95% in most channels, Cherenkov detectors are in the range of 10% to 20%. In LArTPC background are < 2 while in Cherenkov detectors can reach up to 8 counts (Kearns 2013). Another promising way of probing baryon number violation in DUNE is through the search for the spontaneous conversion of neutrons into antineutrons in the nuclear environment. While these are less well motivated theoretically, opportunistic experimental searches cost little and could have a large payoff. Based on the expected signal efficiency and upper limits on the background rates, the expected limit on the proton lifetime as a function of running time in DUNE for p→K^+ν is shown in Figure <ref>.§ TIMELINE In the following we summarize some significant milestones for DUNE and LBNF:– Full-scale prototypes for SP and DP designs working at CERN in 2018. – Installation of the first 10 kton TPC module underground by 2021. – Choosen the technology for the remaining modules (2nd, 3rd and 4th). – Far Detector will start to taking data by 2024 (cosmics). – Far Detector data taking with beam starting at 2026. – Near detector fine grained tracker installed by 2026. – Finish all construction by 2028. – Exposure of 120 kton × MW × year by 2035.§ CONCLUSIONS The remarkable advances in last decade on the knowledge of neutrino mixing angles and mass splitting paved the way to test the 3-neutrino paradigm, neutrino mass hierarchy and CP asymmetry in the lepton sector.DUNE will have the key features to successful reach its physics goals: a powerful MW neutrino beam, a highly-capable fine-grained near detector, a massive 40 kton LArTPC working deep underground. In the last years, a strong collaboration has been formed. The strategy for construction has been extensively discussed and provided solid grounds for a clear construction plan.DUNE and LBNF together consist one of the most ambitious neutrino experiment for the next era of precise measurements. 
DUNE can shed light on some intriguing opened questions in physics, as the ordering of neutrino mass eigenstates and CP violation in the lepton sector. The expected results are highly significant in statistical terms and can be achieved in reasonable time. Moreover, there is a rich non-oscillation physics program, covering topics as supernovae, nucleon decay, and neutrinos interactions. With DUNE and LBNF we foresee impressive technical and scientific achievements for neutrino physics in the next decades.§.§ AcknowledgementsThis work was supported by the Brazilians agencies Fundação de Amparo aPesquisa do Estado de São Paulo (FAPESP) and Conselho Nacional de Ciência e Tecnologia (CNPq).§ REFERENCES 1em 99 biblabel#1 @bibitem#1@bibitem#1-Acciarri, R., Adamowski,M., Artrip, D., et al. 2015a, JINST, 10, 7, T07006. Acciarri, R., Adams, C., An, R., et al. 2015b, e-print: <arXiv:1503.01520v1 [physics.ins-det]> Andreopoulos, C., Bell,A., Bhattacharya,D. et al. 2010, Nucl. Instrum. Meth., A614, 87. Ayres, D. S., Dawson,J. W., Drake, G. et al. 2004, FERMILAB-PROPOSAL-0929,e-Print: hep-ex/0503053Diwan, M. V. 2004, Frascati Phys. Ser., 35, 89.Gonzalez-Garcia, M., Maltoni, M. & Schwetz,T. 2014, JHEP, 1411, 052.Huber, P., Lindner,M., & Winter, W. 2005, Comput.Phys.Commun., 167, 195. Kearns, E. 2013,e-print: <https://cdcvs.fnal.gov/redmine/attachments/download/24389/kearns_pdk.pdf>. Rubbia, C., 1977, CERN-EP-INT-77-08, e-print: <http://inspirehep.net/record/857394/files/CERN-EP-INT-77-8.pdf> Nunokawa, H., Parke,S. J. ,& Valle, J. W.2008, Prog.Part.Nucl.Phys., 60, 338. Senjanovic, G., 2010, AIP Conf.Proc., 1200, 131. Tamborra, I., Muller, B., Hudepohl, L. et al. 2012, Phys.Rev., D86, 125031.
http://arxiv.org/abs/1709.09385v2
{ "authors": [ "Ernesto Kemp" ], "categories": [ "hep-ex", "physics.ins-det" ], "primary_category": "hep-ex", "published": "20170927083859", "title": "The Deep Underground Neutrino Experiment -- DUNE: the precision era of neutrino physics" }
Multi-Label Classification of Patient Notes: Case Study on ICD Code Assignment Tal BaumelBen-Gurion University Beer-Sheva, IsraelJumana Nassour-KassisBen-Gurion University Beer-Sheva, IsraelRaphael CohenChorus.ai San Francisco, CAMichael ElhadadBen-Gurion University Beer-Sheva, IsraelNoémie ElhadadColumbia UniversityNew York, NY December 30, 2023 ============================================================================================================================================================================================================================================================================================= In many macroeconomic applications, confidence intervals for impulse responses are constructed by estimating VAR models in levels - ignoring cointegration rank uncertainty. We investigate the consequences of ignoring this uncertainty. We adapt several methods for handling model uncertainty and highlight their shortcomings. We propose a new method – Weighted-Inference-by-Model-Plausibility (WIMP) - that takes rank uncertainty into account in a data-driven way. In simulations the WIMP outperforms all other methods considered, delivering intervals that are robust to rank uncertainty, yet not overly conservative. We also study potential ramifications of rank uncertainty on applied macroeconomic analysis by re-assessing the effects of fiscal policy-shocks.JEL Classification: C15; C32; C52; E62.Keywords: Impulse response analysis; cointegration; model uncertainty; bootstrap inference; fiscal policy shocks.§ INTRODUCTIONVector autoregressions (VAR) and, more importantly, their implied impulse responses (IR) are essential tools for applied macroeconomists to investigate the dynamic propagation of (structural) shocks. While VARs fitted to macroeconomic data can incorporate information about unit roots and possible cointegration relations, this evidence is regularly ignored in applied work and inference for IR coefficients is usually based on the VAR specification in levels or first-differences. A common argument for the specification in levels is that estimation by ordinary least-squares (OLS) and the associated traditional approach to inference – for example via an asymptotically normal <cit.> or a bootstrap <cit.> approximation – `allows' for the presence of cointegration. Indeed the level specification results in consistent estimates of the VAR parameters regardless of the true underlying cointegration relations, and, for a fixed horizon, associated inferential procedures remain valid for inference on IR coefficients. However, albeit asymptotically valid, confidence intervals may have poor coverage in small samples when the data are highly persistent and when considering responses at “longer” horizons <cit.>. <cit.> shows theoretically that if one (or more) unit roots are present, confidence bands based on the normal approximation become invalid at “(very) long horizons”, while <cit.> and <cit.> show that the bootstrap also becomes invalid at such increasing horizons.These seemingly contradicting theoretical results depend on the asymptotic framework considered; or more precisely on the notion of “(very) long horizons”. If the considered horizon is kept fixed while the sample size is growing, one arrives at standard asymptotic results. However, if the horizon is modelled as a constant proportion of the sample size, the asymptotic distribution becomes non-standard if (near) unit root(s) are present. 
Similarly, inference via an asymtotically normal approximation based on a wrongly specified vector error correction (VECM) formulation of the VAR becomes invalid at long horizons as well <cit.>. Also, it is well known in the bootstrap literature that misspecification of the cointegration rank leads to an invalid bootstrap procedure <cit.>.Within this growing horizon framework, <cit.> construct confidence intervals for “long-horizon” IRs using local-to-unity asymptotics. The resulting confidence bands differ substantially from those obtained through traditional approaches, and suffer in turn from size distortions in short to medium horizons. Moreover, their proposed approach to inference does not account for the possibility of near cointegration, limiting its usefulness for applied work.<cit.> proposes a procedure that works uniformly well over the entire parameter space and the entire trajectory of the IRs, but her approach only allows for the construction of uniformly valid inference if at most one “uncertain” (unit) root is present in the VAR. Furthermore, her suggested inferential procedure is computationally very expensive even for bivariate VARs, let alone VARs of dimensions usually considered in applied research. Similar settings and problems are considered by <cit.>, <cit.>, <cit.>, <cit.> and <cit.> among others, but all consider at most one unknown root near unity. This setting does not allow for uncertainty about the number of cointegrating relations (if any), which we face in practice. <cit.> consider the more general setting in an extensive simulation study and conclude that the applied researcher is best advised to estimate the system in levels and construct inference in a traditional way. <cit.> propose an averaging approach for impulse responses of potentially cointegrated VAR models, but their approach still requires a pre-selection of rank, and does not deal with inference explicitly.In this paper we re-assess the construction of bootstrap confidence intervals for IRs in persistent, possibly non-stationary VARs. Our main intention is to provide the applied researcher with a reliable and robust alternative to the traditional “levels” approach, independent of the IR horizon of interest. We approach the issue of choosing the cointegration rank from a model selection perspective, and consider (bootstrap) methods initially designed to overcome model selection uncertainty in different contexts. In particular, we adapt the endogenous lag selection procedure of <cit.>, the model averaging estimators of <cit.> and the bagging approach proposed by <cit.> to the rank selection problem in VECMs. As elaborated by <cit.>, inference after model selection is difficult, and there is no guarantee that the above-mentioned methods can solve the problems in our setting.Therefore, we draw inspiration from the Post-Selection Inference (PoSI) approach of <cit.>, which explicitly deals with inference after model selection, to propose a novel way of constructing confidence bands by combining intervals of models for any rank. In our approach, labeled as Weighted Inference by Model Plausibility (WIMP), upper and lower bounds of all associated fixed-rank intervals are combined depending on the relative evidence for, or plausibility of, each model. Unlike many approaches considered in the VAR literature, our method does not require any pre-selection of ranks; that is, no pre-testing or selection using economic theory is needed. 
Instead, the method is fully agnostic about the cointegration rank and is fully data-driven. We provide some simple theoretical results establishing pointwise asymptotic validity of our method under general conditions. Our WIMP intervals tend to deliver coverage probabilities close to nominal levels across the entire trajectory of the IRs, even for “difficult” situations where cointegrating relations are very weak. Simulation-based evidence also suggests that the WIMP intervals generally outperform all other considered methods, including the traditional “level” approach to inference.[An alternative way to account for rank uncertainty is to consider lag-augmentation, where the VAR in levels is estimated with an additional lag. <cit.> and <cit.> show that Wald tests on the VAR parameters remain valid regardless the order or (co)integration if one lag too many (i.e. p+1) is added to the VAR model, and only the first p lags are used for subsequent analyses. <cit.> and <cit.> suggest this approach for inference on impulse responses as well. However, neither its theoretical nor its small sample properties have been properly investigated in the literature for impulse response analysis. Moreover, combining the lag-augmentation with a bootstrap procedure is no trivial task and would require further study. Notwithstanding these shortcomings, we considered the lag-augmentation approach in our simulation study, where it is shown to perform considerably worse than the WIMP method.]While we focus on frequentist inference in this paper, it is worth mentioning that rank uncertainty could also be tackled in a Bayesian VAR framework. However, in many Bayesian applications, uncertainty regarding the cointegration rank is often not taken into account explicitly. Although conceptually different, the Bayesian approach to cointegration is often similar in nature to the construction of classical (likelihood-based) inference. That is, the posterior distribution of (impulse response) parameters is often derived conditional on a pre-determined rank, selected using the marginal likelihood or other model comparison approaches <cit.>. However, several approaches incorporating uncertainty about the cointegration rank when analyzing VARs have been suggested in the Bayesian literature. For instance, <cit.>, <cit.>, <cit.> and <cit.> propose a Bayesian model averaging scheme, similar in spirit to the approach discussed in Section <ref> below. Alternatively, some authors have suggested various priors on the cointegration relations obtained using economic theory , which is a different conceptual approach than our fully data-driven, agnostic approach. Moreover, an explicit (theoretical) investigation of the (joint) posterior distribution of impulse responses of VARs under uncertainty on the (co-)integration relations is, however, limited also in the Bayesian literature.Since uncertainty about the true cointegration rank is mostly ignored in applied macroeconomic research, we investigate to what extend our more robust approach(es) may change the interpretation of results in practice. More specifically, we re-evaluate the effects of fiscal policy based on four influential structural VAR frameworks. Considering'srecursive identification strategy, 'ssign-restriction approach based on penalty functions, 'snarrative VAR framework, and 'sproxy-VAR, we find that neglecting rank uncertainty might lead to misleading results. 
As a companion to this paper, a ready-to-use MATLAB toolbox for the WIMP approach combined with various SVAR identification schemes is available online.[http://www.stephansmeekes.nlwww.stephansmeekes.nl]The remainder of this paper is organized as follows. In Section <ref> we discuss standard (bootstrap) approaches to inference in cointegrated VARs and illustrate empirically potential ramifications of rank misspecification. Section <ref> first discusses several approaches considered in the literature about model uncertainty and their adaptations to account for rank uncertainty, and next introduces the WIMP method. The performance of the suggested methods is investigated by simulation in Section <ref>. Fiscal policy under rank uncertainty is analyzed in Section <ref>. Section <ref> concludes. Appendices <ref> and <ref> contain additional simulation results and data descriptions, respectively.§ BOOTSTRAP INFERENCE FOR IMPULSE RESPONSES §.§ The Cointegrated VAR Model and Impulse ResponsesConsider the K-dimensional structural vector autoregressive (SVAR) time series process y_t = (y_1,t, …, y_K,t)^' observed at t=1, …,T:B_0 y_t = ∑_j=1^p B_j y_t-j + ε_t,where ε_t is a K-dimensional vector of contemporaneously and serially uncorrelated, weakly stationary structural shocks and B_0 is the invertible contemporaneous impact matrix. Pre-multiplying both sides of (<ref>) with B_0^-1, we obtain the reduced-form VARy_t = ∑_j=1^p A_j y_t-j + u_t,where A_j = B_0^-1 B_j and u_t = B_0^-1ε_t.Define the lag polynomial A(z) as A(z) = I_k - ∑_j=1^p A_j z^j, such that we can write A(L) y_t = u_t, where L is the lag operator L^j y_t = y_t-j. We now formulate assumptions that allow y_t to be (co)integrated with r cointegrating relations, which we label the `I(1,r) conditions' as in <cit.>.[Note that we do not necessarily require that all elements in y_t are integrated of order one. That is, some series may be I(0). In this case cointegration is of a trivial form, as any linear combination of I(0) series remains I(0).] [I(1,r) conditions] * A(z) has exactly K- r roots equal to 1 and all other roots are outside the unit circle.* Defining Π = A(1), we have that Π = αβ^' for K × r matrices α and β with full column rank, with the implicit definition that αβ^' = 0 when r=0.If y_t satisfies the I(1,r) conditions, we can write y_t as a VECMΔ y_t = Π y_t-1 + ∑_j=1^p-1Γ_j Δ y_t-j + u_t,t=1, …,T,where Γ_j = -∑_i=j+1^p A_j for j=1,…, p-1.We can invert the VAR model (<ref>) to obtain the moving average representationy_t = ∑_j=0^t-1Ψ_j u_t-j = ∑_j=0^t-1Ψ_j B^-1_0 ε_t-j,where the Ψ_j matrices contain the reduced-form (i.e. forecast error) impulse responses and Φ_j = Ψ_j B_0^-1 the structural impulse responses. However, as B_0 is not identified, we cannot obtain Φ_j in a unique way, and estimating the structural shocks and their impulse responses requires imposing a particular identification scheme. For that purpose, let P be a K × K matrix such that PP^' = Σ_u, where the specific form of P depends on the identification method. Then we define the identified structural impulse responses as Φ_j = Ψ_j P. 
In Section <ref> we discuss several ways to identify the structural shocks.[As the impulse responses only depend on the cointegration parameters β through their product with the loadings α, that is through the error correction term Π = αβ^', we are not concerned with identification of β, unlike the setting where inference on the long run relations themselves is the objective.]For ease of notation later on, we directly link the impulse responses to the VECM parameters. Let θ = vec(Π, Γ_1, …, Γ_p-1) denote the vector of VECM parameters. Then we can define Ψ_j = f_j (θ) and Φ_j = f_j (θ) P for j = 0, …, t-1, where the nonlinear functions f_j(·) are defined implicitly through inverting the VAR model. §.§ Inference Conditional on a Selected RankWe can estimate the VECM (<ref>) for a given rank r using the Gaussian quasi maximum likelihood estimator of <cit.> to obtain estimates θ̂^(r) = (Π̂^(r), Γ̂_1^(r), …, Γ̂_p^(r), Σ̂_u^(r) )^', where the superscript (r) emphasizes that estimation is conditional on r. Note that Π̂^(r) = α̂^(r)β̂^(r)' and P̂^(r) is an estimate of P such that P̂^(r)P̂^(r)' = Σ̂_u^(r), with Σ̂_u^(r) the residual variance estimator from the VECM. From inverting the VAR representation of the model, we can then straightforwardly obtain the estimates of the moving average terms, Ψ̂_0^(r), …, Ψ̂_h^(r), where h is the (maximum) horizon we are interested in. Specifically, we define the estimated impulse responses as Ψ̂_j^(r) = f_j (θ̂^(r)) and Φ̂_j^(r) = f_j (θ̂^(r)) P̂^(r), for j=0,…,h. To account for deterministic components, we can first regress y_t on a constant and possibly a linear time trend to obtain the detrended series ỹ_t = y_t - μ̂_0 - μ̂_1 t for t=1,…,T and estimate the VECM without deterministic components on ỹ_t (see also Remark <ref>).Now consider a general impulse response ζ, which is the object of interest of the analysis. Typically, this would be an element of either Ψ_j or Φ_j for a certain j; that is, ζ = ψ_j,a,b or ζ = ϕ_j,a,b, where the subscript `a,b' indicates the (a,b)-th element of the matrix. It might also be a combination of elements; for example, if one wants to perform simultaneous inference across horizons, using the ideas proposed in <cit.> and <cit.>, we could take ζ = max_0 ≤ j ≤ hψ_j,a,b, ζ = max_0 ≤ j ≤ hϕ_j,a,b, or its studentized versions. Similarly, one could take the Wald statistics of <cit.> as ζ. The bootstrap algorithm works the same regardless of the specific object of interest; writing ζ for a general object of interest simply avoids too cumbersome notation and the need to be specific about its particular form. Regardless of the specific form of ζ, it will be a function of the VAR model parameters θ, and its estimator ζ̂^(r) will be the same function of the VAR parameter estimators θ̂^(r), that is, ζ = f̅ (θ) and ζ̂^(r) = f̅ (θ̂^(r)), where the form of the function f̅ (·) depends on the desired object of interest.Various algorithms can be used to construct bootstrap confidence intervals for ζ. In the simulation and empirical sections we use straightforward algorithm based on 's () bootstrap percentile interval, which has regularly been considered in the literature, see e.g. <cit.>. Details are provided in Appendix <ref>. Other common bootstrap methods that are used include 's () percentile interval and 's () bias-corrected bootstrap. Irrespective of the specific choices that can be made, all these algorithms have in common that they generate a bootstrap sample, say {y_t^*}_t=1^T, that has a fixed cointegrating rank r. 
Bootstrap impulse responses are then estimated from this bootstrap sample and used to set up a confidence interval of the form [L^(r)(γ), U^(r)(γ)], where the superscript `(r)' again highlights the dependence on the chosen rank r, and γ is the desired confidence level. Hence, the bootstrap adds a second layer of potential rank misspecification next to the estimators themselves, which turns out to lead to further complications if one wants to account for rank uncertainty, as we discuss in Section <ref> below. Before discussing methods that potentially can account for rank uncertainty, we illustrate the perils of rank misspecification next. §.§ Effects of Rank MisspecificationStandard bootstrap inference assumes knowledge of the true cointegrating rank, labeled as r_0; if r ≠ r_0, inference on ζ will be inappropriate, in particular for longer horizons. If the chosen rank r is smaller than the true rank, the estimated IRs converge to `pseudo-true' values θ_j^(r) which are different from the true ones. This arises because the VAR parameters converge to their pseudo-true values which satisfy the (incorrect) rank restriction, c.f. <cit.>. While in this case bootstrap inference remains valid for the pseudo-true parameters, these parameters can be substantially different from the true IRs, making their interpretation and therefore inference somewhat meaningless, in particular as one typically tries to uncover structural effects which requires knowledge of true parameters.On the other hand, if r > r_0, as for instance in the VAR in levels specification, the short (fixed j) and medium (j/n → 0) horizon IRs are estimated consistently, but at long horizons (j ∼ n) IRs are inconsistent and even random and inference becomes invalid <cit.>.[Consistency of the estimated IRs also depends on the type of identification considered. For example, under long-run identification the short-run IRs are also not estimated consistently, see e.g. <cit.>.] The inconsistency is caused by the domination of the error correction terms for the long-horizon IRs, and their insufficient estimation accuracy under rank misspecification. The same occurs for bootstrap inference; while valid for short and medium horizon IRs, it becomes invalid at long horizons, as demonstrated in different contexts by <cit.>, <cit.> and <cit.>.Figure <ref> illustrates potential consequences of rank uncertainty for the construction of inference in practice. Displayed in the left panel are confidence intervals for output responses to a government spending shock identified as in <cit.> for all possible numbers of cointegration relations.[The VAR specification and the data are described in Section <ref>.] Clearly, the assessment of the effectiveness of the spending policy varies drastically with the chosen cointegration rank, indicating that choosing the wrong rank hampers the interpretation of results – for long but equally so for short horizons. One could argue that with proper rank estimation, the most appropriate of these intervals can be selected. However, as demonstrated in the right panel, if evidence for a particular rank is weak, different but equally well established “respectable” rank selection procedures may suggest different models, providing little guidance for the applied researcher.Finally, note that the unrestricted VAR in levels gives substantially different (and narrower) intervals than the VAR models with reduced rank, even the model with the next highest rank (r=9). 
Of course, if the true model is indeed a VAR of full rank, all variables are stationary and no (co)integration would be present. However, many macroeconomic series exhibit persistent behavior, which may be caused by stochastic trends. Indeed, ADF tests cannot reject a unit root for most series in our dataset, casting doubt on whether the levels specification is indeed the most appropriate one. If the series are really cointegrated, a reduced-rank VAR model would be more appropriate and constructing inference based on the VAR in levels would be invalid for long horizons. In practice, distinguishing long from short (or medium) horizons is difficult, and as we show in the simulations, for sample sizes compared to this particular example, inference based on the VAR in levels becomes inaccurate at fairly short horizons already.As Figure 1 shows, the imposed rank matters for the interpretation of the results, and a “robust” decision to use the VAR in levels could, in this example, lead to a misguided interpretation of the IRs. The strategy to use the VAR in levels based on a robustness argument therefore appears questionable, while rank selection techniques also do not appear to give conclusive answers. It is therefore crucial to take rank uncertainty into account when conducting inference for impulse responses.§ INFERENCE ACCOUNTING FOR RANK UNCERTAINTYIn this section we discuss several ways of accounting for rank uncertainty, first utilizing existing methods from the model uncertainty literature, before discussing a new principle. §.§ Adaptations of Existing Model Uncertainty Methods The perils of ignoring model uncertainty when performing model selection are well known in the statistical literature about model selection. For instance, in a sequence of papers, Leeb and Pötscher <cit.> highlight the risk of treating a selected model as a known and correct when performing inference, pointing out that even consistent model selection is no justification for treating the selected model as known. While this post-model selection inference problem is hard to solve, various methods have been proposed to at least mitigate the problem. Here we highlight some of these methods and show how they can be adapted to the problem at hand. We stress though that, although they are regularly used in practice to account for model uncertainty, none of these methods are formally shown to deliver valid post-model selection inference.The most straightforward way, and our baseline benchmark, to deal with rank uncertainty is to pre-estimate the rank, and then perform inference for the impulse responses conditional on the estimated rank. While this seems, given the discussion in the previous section, not always an advisable strategy, rank estimation underlies many of the methods considered afterwards. We therefore first discuss how to perform rank estimation and how it can be seen as a model selection problem.Let the function M_r(Y_T): Y_T ↦{0,1…,K} be a rank selection procedure that determines the cointegration rank based on the sample Y_T = (y_1, …, y_T)^'. Then the estimated rank r̂ can be imposed in the VECM estimation to obtain the estimated impulse responses of interest as ζ̂^(r̂) = f̅ (θ̂^(r̂)),where r̂ = M_r (Y_T).Several methods can be considered in practice for estimation of the rank. The most common is to perform a sequence of sequential tests in the likelihood framework of <cit.>, in particular using the trace or eigenvalue test statistics. 
Instead of the standard critical values, one can also use one of its many bootstrap extensions <cit.>. Either way, due to the nature of hypothesis testing, this estimation strategy will not lead to consistent estimation of the rank (unless the significance level is chosen to decrease with sample size); the probability of selecting a rank that is too high converges to the chosen significance level instead of to zero.Alternatively, one can use an information criterion as proposed by <cit.>, <cit.>, <cit.> and <cit.>. This has two advantages compared to the sequential testing approach. First, rank selection and lag length selection can be done in a single step. Second, depending on the penalty function chosen in the information criterion, it is possible to estimate the rank consistently. A recent alternative is provided by <cit.> who propose to select the rank and lag length simultaneously by penalized reduced rank regression. An advantage of this approach is that model selection and estimation are performed simultaneously, thus needing only a single step for the full estimation from start to end.Irrespective of the chosen selection method, standard inference is based on the selected rank, treating it as known. This is often justified by the consistency of the rank selection method, but even in those cases where it is indeed consistent, ignoring the selection step leads to invalid inference as referred to earlier <cit.>. In particular if the data do not provide clear and strong evidence for one particular cointegrating rank, this approach will fail to deliver reliable confidence intervals. We therefore next consider methods that explicitly take rank uncertainty into account in the inference procedure.§.§.§ Endogenous Rank Selection <cit.> proposes the endogenous lag selection bootstrap method for autoregressive models where the autoregressive lag length is re-estimated within the bootstrap to account for the model selection uncertainty. We adapt his approach to rank selection, labeling this approach Bootstrap Endogenous Rank Selection (BERS). That is, after generating a bootstrap sample {y_t^*}_t=1^T with rank r, we re-estimate the rank from this bootstrap sample to estimate the bootstrap impulse responses.[Details for our implementation are given in Algorithm <ref>.]We can choose to generate the bootstrap sample {y_t^*}_t=1^T with the “neutral” maximum rank K or the estimated rank r̂. While <cit.> reports that this choice has little consequence for lag selection, this is very different for rank selection. After all, if the rank used to generate {y_t^*}_t=1^T is not correct, we still face all the problems with the bootstrap as we described before. Hence, while some rank uncertainty is taken into account, the validity of this approach still hinges on the correct rank being used for the generation of the bootstrap data, which as we argued before, is impossible to guarantee.§.§.§ Model Averaging One of the most popular approaches to account for model uncertainty is to use model averaging <cit.>. By combining estimators from different models (and potentially weighting by evidence for these models), model uncertainty is taken into account. Given that the decision of which model to use is discrete, and therefore the selected model may change abruptly for a slight variation in the sample, the resulting estimators after model selection may be quite unstable and exhibit a large variability. 
By constructing weighted averages of the estimators arising from the individual models, one smoothes out the changes in the estimator, resulting in more stable estimators that typically display lower variability.Given rank-specific impulse response estimators ζ̂^(0), …, ζ̂^(K), we define the Model Averaging (MA) impulse response estimator ζ̂^MA = ∑_r=0^K W_K (r) ζ̂^(r), where W_K (r) =W(Y_T,r)/∑_s=0^K W(Y_T,s)and W(Y_T,r) is a function that determines a weight for rank r based on the sample Y_T. Unlike the typical application of model averaging, which often focuses on improving accuracy of point estimators in a mean squared error sense, we are not interested in the averaged point estimators. Instead, we only take the MA estimator as an input into our bootstrap scheme in order to construct confidence intervals: By using the more stable MA estimator, we may hope that the confidence intervals are more robust to rank misspecification. The bootstrap scheme can straightforwardly be adapted to incorporate this estimator after generating the bootstrap sample {y_t^*}_t=1^T.Typical weights in the model averaging literature are exponential weights based on information criteria such as BIC. However, in our simulations we find that such standard weighting schemes give weights that are too close to each other and do not differ much from simple unweighted averages. Given the widely varying behavior of impulse responses under different ranks, such weights are therefore not the most useful ones in our setting. Instead, we advocate using weights that are derived directly from cointegration tests, following the spirit of <cit.>, but rather than their KPSS type weights, we opt for weights based on the trace test statistic proposed by <cit.>. Details about the weights and their properties can be found in Lemma <ref> in Section <ref>.In a similar framework, <cit.> propose an averaging approach for impulse responses of potentially cointegrated VAR models based on a very specific set of weights. While they allow for uncertainty regarding the order of integration, their approach only averages two estimators: the one obtained from the VAR in levels, and one obtained from a cointegrated VAR where the number of cointegrating relations is pre-determined by pre-testing or economic theory. It can therefore not account for the general case where we are agnostic about the number of cointegration relations.While such model averaging explicitly takes model uncertainty into account, it still relies on an explicit choice of the cointegration rank in the bootstrap algorithm to do inference. Hence, even while the weight construction can be endogenized in the bootstrap in the same way as for rank selection, the bootstrap DGP relies on the choice of a single cointegration rank. As such it still does not fully account for rank uncertainty in our context.§.§.§ Bagging We now take a first step in endogenizing the rank uncertainty in the bootstrap DGP itself, by bootstrapping a bagging estimator. The bagging estimator is constructed by averaging the bootstrap estimates over an initial bootstrap procedure in which the cointegration rank is re-estimated for every bootstrap sample. Bagging was originally proposed by <cit.> to improve estimation accuracy of unstable estimators. <cit.> analyzed bagging formally and found that it can lead to a variance reduction of estimation after hard decisions, such as an initial model selection. 
As the model averaging described above, bagging smoothes those hard decisions yielding more accurate estimators. <cit.> considers bagging in the context of post-selection inference, rather than point estimation, and we build on his approach here.As bagging is essentially the simulation equivalent of model averaging, with the weights implicitly determined by how often each rank is selected within the bootstrap, it is subject to the same critique. However, one can modify the bagging algorithm to endogenize rank uncertainty in the bootstrap DGP by performing a second-level bootstrap in which we draw new bootstrap samples from the first-level bootstrap samples. By determining the rank of the second-level bootstrap DGPs from the first-level bootstrap samples, the ranks are randomized according to their evidence in the (simulated) sample. This allows to take the uncertainty into account when constructing the bootstrap confidence intervals based on the second-level bootstrap samples. While this does not fully solve the bootstrap invalidity problem (bootstrap samples are still generated under incorrect ranks, especially in the first step), the method has the potential to alleviate the problem.There is a computational problem with this method though, as one has B_1 iterations in the first bootstrap and B_2 in each second-level bootstrap, such that a full double bootstrap requires B_1 (1+B_2) iterations which quickly becomes computationally infeasible. To circumvent this problem, we implement the Fast Double Bootstrap (FDB) developed by <cit.>, which requires drawing only a single second-level bootstrap sample for every first-level bootstrap sample. That is, the computation cost of the FDB is only double (2 B_1) that of a regular bootstrap. Algorithm <ref> describes the method, labeled as FDB bagging (FDBb), in detail. §.§ Weighted Inference by Model Plausibility None of the methods described above fully address the post-model selection inference problem. To work towards a more satisfactory solution, we now combine the ideas discussed above with new concepts arising from the recent statistical literature that directly addresses the post-model selection inference problem.We would like to build on the idea of averaging or weighting models to account for rank uncertainty. However, as elaborated on in the previous section, such weighting is typically designed for point estimation and translating it to confidence intervals, as needed here, is not straightforward. In order to make the transition, we take inspiration from the perspective taken by <cit.>, who view the issue of constructing valid post-model selection inference (PoSI) as a simultaneous inference problem: by controlling for performing inference in all models simultaneously, the specific model selected by a model selection procedure is covered by construction. This would involve finding lower and upper bounds L^ (γ) and U^(γ) to construct intervals [L^ (γ), U^ (γ)] such that ( L^ (γ) ≤ζ^(r)≤ U^ (γ), ∀ r ∈{0,1,…,K}) → 1 - γ as T →∞. Note that ζ^(r) = f̅ (θ^(r)) is a pseudo-true parameter defined in terms of θ^(r), the pseudo-true parameters of the model (<ref>) under the restriction that rank r is imposed – see Lemma 1 and its proof in <cit.> for a formal definition. These parameters represent the probability limits of the estimators of (<ref>) under the restriction of imposing rank r, and can informally be seen as those parameters which minimize a distance to the true parameters under the restriction that the cointegration rank is r. 
If r < r_0, the true parameter cannot be recovered, and therefore the pseudo-true parameter will be different.For our purposes, there is a fundamental problem with the sub-model view of <cit.> where the pseudo-true parameters are the objects of interests, as also highlighted by <cit.>. In the context of structural impulse responses, the sub-model view has little relevance, as it cannot uncover any structural effects. We therefore need the full model view, in which it is assumed that one of the models is the true (structural) one. Denoting this extended PoSI approach as _0, we seek to control ( L^_0 (γ) ≤ζ≤ U^_0 (γ), ∀ r ∈{0,1,…,K}) → 1 - γ as T →∞. As the interval bounds are typically constructed by considering the distribution of the fixed-rank estimator ζ̂^(r) minus the (pseudo-)true value, this approach requires that the distance between every fixed-rank estimate ζ̂^(r) and the true impulse response ζ is accounted for, rather than the much shorter distance between ζ̂^(r) and its probability limit or pseudo-true impulse response ζ^(r). This will therefore result in rather wide intervals. The seemingly only way to control this quantity is to construct confidence intervals for every rank separately, and then take the union of these, which typically results in very wide intervals that are useless in practice.However, we have not yet considered any evidence on the plausibility of each rank, that can be extracted from the data. If this information can incorporated into our inferential procedure, we may be able to achieve intervals that are still useful in applications, as the impact of ranks that the data deem very implausible can be eliminated, or at least reduced. We therefore augment the PoSI view of simultaneous inference by a weighting scheme akin to model averaging, except that we apply the weighting not to the estimators but directly to the bounds of the intervals. The direct weighting of the inference output, in this case the interval bounds, by evidence of the plausibility of each model, leads us to label our approach as Weighted Inference by Model Plausibility (WIMP).§.§.§ The WIMP PrincipleDefine the most plausible model - according to a certain plausibility measure based on the data - as the reference model, and denote the corresponding confidence interval arising from this model (ignoring model uncertainty) as the reference interval. As input to the WIMP procedure we consider all model intervals, which are defined as the confidence intervals obtained by assuming any particular model as the true one. In our case these would be the intervals obtained by imposing all the K+1 different cointegrating ranks. Before going into the details of our application, we now propose a set of general conditions that a “prudent” WIMP scheme should adhere to: WIMP Prudence Conditions * The WIMP confidence interval must always cover at least the reference interval. That is, any non-reference model can only lead to widening the WIMP interval compared to the reference interval.* If two models are equally plausible, the model interval bounds which are furthest away from the reference model must contribute the most to widening the WIMP interval.* If the bounds of two model intervals are equally far away from the reference interval, the most plausible model must contribute the most to widening the WIMP interval for a given distance of the bounds from the reference interval.* The WIMP confidence interval may not be wider than the interval obtained by joining all individual model intervals. 
The first condition is needed to avoid invalid intervals, in whatever way validity is measured. If obtaining a confidence interval which is more narrow than the “standard” interval assuming no model uncertainty is possible, the WIMP interval is unlikely to contain an adequate coverage probability. The second condition ensures that the locations of intervals in relation to the reference interval are properly taken into account for equally plausible models. Compare two equally plausible models with almost identical intervals, to two equally plausible models with very different intervals. Any prudent method of accounting for model uncertainty must result in wider intervals for the second case than for the first case. The third condition implies that plausible models are more strongly taken into account than implausible models. In particular, this condition allows to reduce the impact of implausible models that may have very different intervals than the reference model but are so implausible, that there is little to no uncertainty about them. Finally, the fourth condition ensures that the WIMP intervals do not become too conservative. While the first and fourth condition impose hard (but sensible) restrictions on the WIMP intervals, the second and third conditions allow for variation in the procedure. Finding a right balance between conservatism and interval length is therefore of great practical importance, and varies per setting.For our specific implementation of the WIMP Prudence Conditions, let W_K(r) be model plausibility weights assigned to all ranks r=0,…,K and define X(r,s) = W_K(r)/W_K(s) as the relative plausibility of rank r compared to rank s. Letting R = _0 ≤ r ≤ K W_K(r) be the (most plausible) reference rank, we define the WIMP interval [L^(γ), U^ (γ) ] asL^(γ)= min_r=0,…,K{L^(R) (γ) - X(r,R) [ L^(r)(γ) - L^(R)(γ) ]^- },U^(γ)= max_r=0,…,K{U^(R) (γ)+ X(r,R) [U^(r)(γ) - U^(R)(γ) ]^+ },where x^+ = max(x,0), x^- = -min(x,0) and L^(r)(γ) and U^(r)(γ) are the lower and upper bounds respectively of the confidence intervals with fixed rank r.The term [ L^(r)(γ) - L^(R)(γ) ]^- (respectively [U^(r)(γ) - U^(R)(γ) ]^+) ensures that only lower bounds smaller (upper bounds larger) than those of the reference interval are taken into account; for lower bounds larger (upper bounds smaller) than those of the reference interval, this term is simply zero. Together with X(r,s) ≥ 0, this implies that the WIMP interval always contains the reference interval, hence Condition 1 is satisfied. Condition 2 is also trivially satisfied as this term increases when the lower (upper) bound of the rank r interval is further away from the reference interval.The shape of X(r,s) determines how strongly less plausible models are taken into account and can be different from the linear function of W_K(r) imposed above. As long as X(r,s) is an increasing function of W_K(r), more plausible ranks are given more importance and Condition 3 is satisfied; varying X(r,s) and W_K(r) allows one to change the balance between conservatism and interval length. Finally, with respect to Condition 4, note that as long as X(r,s) ≤ 1, the WIMP interval can never be wider than the interval obtained by combining the smallest lower bound with the largest upper bound.[If some of the individual model intervals are disjoint, the “maximal” WIMP interval as constructed in (<ref>) is larger than the union of these intervals, apparently violating Condition 4. 
It is a matter of personal preference whether to consider disjoint intervals or to “fill the gaps” and extend it from the lowest lower bound to the highest upper bound, which is exactly what the WIMP construction described above does automatically. As we believe that such a disjointed confidence set, which is not a confidence interval anymore, can be rather difficult to interpret, we consider this modification, though it is by no means crucial to the WIMP approach.] Although we focus here exclusively on the case of rank uncertainty, other types as uncertainty, such as about the lag order or the deterministic components can be incorporated into the WIMP procedure as well. For instance, if one wants to allow for P different lag orders in addition to the K+1 ranks, one needs weights that measure the plausibility of each of the (K+1)P different models resulting from combining the different ranks and lag orders. In this paper we focus on rank uncertainty only as it has a far bigger and more fundamental impact than (slight) lag misspecification. Uncertainty about the deterministic specification is typically a bigger issue, but due to our initial detrending all consequent analysis (including the statistics used to construct W_K(r)) are invariant to the deterministic specification (also see Remark <ref>), and we can separate the two sources of uncertainty.The WIMP intervals are not built directly around a single point estimator for ζ. While all K+1 fixed-rank estimators are incorporated through their respective confidence intervals, we do not directly obtain a corresponding point estimate for ζ. Of course, if there is a desire to pair the confidence interval with a point estimator, one can do so, in which case the model averaging estimator with the same weights W_K(·) as used for the WIMP intervals is the most natural candidate.[As expected from the model averaging literature, unreported simulations in the same setup as considered in Section <ref> show that this estimator performs very well in terms of mean squared error when compared to fixed-rank estimators. Of course, its performance purely as a point estimator is different from its performance as basis for inference, as we shall see in Section <ref>.] §.§.§ Asymptotic Properties To complete our theoretical discussion of the WIMP method, we establish some basic asymptotic properties of the WIMP intervals. We mainly do so under general high-level assumptions on the tests and bootstrap method available, but we will also provide some details about how these assumptions can be verified in our application. We first characterize the general asymptotic properties of our method.Let Y_T be generated according to (<ref>), and let Θ^(r) denote the parameter space of θ such that the I(1,r) conditions are satisfied. Then assume that * As T →∞, (W_K(r_0) ≥ W_K(r)) → 1 for all r ≠ r_0;* As T→∞, it holds that(L^(r_0) (γ) ≤ζ≤ U^(r_0)(γ) ) → 1 - γ, for all θ∈Θ^(r_0) andr_0 ∈{0,1,…,K}. Then, as T →∞,( L^ (γ) ≤ζ≤ U^ (γ) ) ≥ 1 - γ +o(1), for all θ∈Θ^(r_0) andr_0 ∈{0,1,…,K}.Theorem <ref> establishes the asymptotic conservativeness of the WIMP intervals under two assumptions. First, (i) requires that the weight attached to the true rank is asymptotically at least as large as the weight of the other ranks. This requires that a “decent”, yet not necessarily consistent, procedure is used to obtain the weights. Equal weights satisfy this condition, but will lead to too conservative intervals as they would result in taking the union of all rank r intervals. 
Note that if condition (i) is strengthened to require the true weight to receive the full weight asymptotically, corresponding to using a consistent rank selection approach, the WIMP interval is not conservative anymore but has the appropriate (pointwise) coverage rate.Assumption (ii) implies pointwise asymptotic validity of the intervals under a known rank, which has been verified for many bootstrap methods under different assumptions on {u_t} (or equivalently {ε_t}). For instance, if we assume that {u_t} is i.i.d. with sufficiently many moments existing, one can show that the i.i.d. bootstrap version of Algorithm <ref> satisfies assumption (ii), c.f. <cit.> and <cit.>. <cit.> also formulate general assumptions to assure bootstrap validity, while alternative methods that allow for heteroskedasticity are considered by <cit.>. The WIMP principle can be applied to any of these - or other - methods.We now propose a simple weighting scheme and consider its asymptotic properties. Following the spirit of <cit.>, we base our weights on cointegration tests. Rather than their KPSS type weights, we opt for weights based on the trace test statistic proposed by <cit.>, which, as a “standard” cointegration test, has intuitive appeal and is available in all standard econometric and statistical software.[We also explored 'smaximum eigenvalue test statistic, which similarly satisfies assumption (i) in Theorem <ref>. Numerical experiments showed virtually no difference with the trace test.]Let J_T(r) = -T ∑_i=r+1^K ln (1 - λ̂_i) denote the trace test of <cit.> for testing H_0: r_0 ≤ r. For constants c_1>0 and 0<c_2<1, define[W(Y_T,r) = e^-c_1 T^-c_2 J_T(r)forr = 0.; W(Y_T,r) = e^-c_1 T^-c_2 J_T(r) - e^-c_1 T^-c_2 J_T(r-1) forr=1,…, K-1,;W(Y_T,r) = 1-e^-c_1 T^-c_2 J_T(r-1)forr = K, ]and W_K (r) = W(Y_T,r)/ ∑_r=0^K W(Y_T,r). Then W_K(r) 1 (r=r_0) as T →∞. Note that our weights ensure that the true rank asymptotically receives a weight of one, which is stronger than required in Assumption (i). This implies that using these weights, the WIMP intervals are not conservative asymptotically. In practice one faces the trade-off between the desired robustness to model uncertainty and the width of the resulting intervals. However, we stress that changing the constants c_1 and c_2, may lead to very different small sample properties – even though the asymptotic properties remain unaffected. Therefore it remains crucial to investigate the small sample properties of the chosen approach. This we do in the next sections. While the results above establish pointwise asymptotic validity, this does not imply validity uniformly over the parameter space.[Note that our notion of uniform and pointwise validity is conceptually different from the notion occasionally encountered in the impulse response literature, such as <cit.> and <cit.>. In those papers, “pointwise” relates to inference on a single impulse response, whereas uniform or joint confidence bands are valid for a set of impulse responses. Our notion of uniform and pointwise relates to to the parameter space Θ, and applies to both inference on single responses and joint inference on a set of responses. Methods establishing joint coverage are as sensitive to rank uncertainty as methods for single impulse responses, and our arguments apply equally well to these methods.]Uniform validity is a more informative property about finite sample behavior of the intervals, as it explicitly accounts for “small” parameters, such as roots that are local to one. 
While one may expect that Assumption (i) helps to establish uniform validity by not relying on the oracle property that the true rank is always selected asymptotically, one would also need a uniform version of Assumption (ii). While this would allow us to formulate Theorem <ref> in a uniform sense, we do not do so as it would hide the fact that the current state of the (bootstrap) literature does not provide general bootstrap approaches that are uniformly valid in a general sense. To the best of our knowledge, uniform results have only been established in the presence of a single local-to-unit root <cit.>, while our setting would require validity under an arbitrary number of roots near unity. While clearly of great interest, developing appropriate bootstrap methods would require a separate study that is outside the scope of the paper and is therefore left for future research. § MONTE CARLO SIMULATIONS In this section we investigate the performance of the various methods discussed above by simulation. We assess coverage probabilities (CP) of confidence bands for forecast error impulse responses, and hence evaluate intervals for the moving average parameters. We intentionally abstract from the identification problem in structural VARs, since the structural moving average parameters are linear combinations of their reduced-form counterparts, and one can expect that the performance of one inferential procedure for reduced-form parameters is inherited by the structural parameters.[Except for SVARs identified through long-run restrictions, the exact persistence properties of the underlying reduced-form process are of no direct relevance for identification.] The data generating process (DGP) for the Monte Carlo experiment is a three-dimensional VAR of order one inspired by <cit.>, given by y_t = (I_3 +Π)y_t-1+ϵ_t, with ϵ_t ∼ i.i.d. 𝒩(0,I_3) for all t. The cointegration matrix is specified as Π=d_1 α_1β_1' + d_2α_2β_2', where α_1=(0,1,0)', α_2=(0,0,1)', β_1=(2,-1,0)',and β_2=(1,-1,-1)'. We consider two versions of the above process when simulating data. DGP1 features two “weak” cointegration relations by setting d_1=0.05 and d_2=0.02, which implies that the model has one root at unity and two roots close to one at 0.98 and 0.95. DGP2 features two “strong” cointegration relations by setting d_1=d_2=1, which implies a VAR with one unit root and two roots at zero. This is the original setting considered by <cit.>.We evaluate CPs of 95% confidence intervals for each response and horizon (h=1,2,...,60) for T=100, 200. The results are based on 1000 MC simulations and 399 bootstrap replications. To compute the WIMP intervals we set c_1 = 1 and c_2=0.5 for the weights in (<ref>).[This choice of parameters seems to be natural for the weights in (<ref>). We did not experiment with changing these values, as the performance in the simulations was already quite satisfactory. It is likely that by careful tuning these parameters, even better performance can be obtained. However, the optimal choice will typically be highly case-dependent, and optimal values should therefore be treated with caution. Instead we prefer to report results for a natural albeit naive choice of parameters without claiming any optimality.] We abstract from lag length selection (we fix p=1), deterministic components, and small sample bias correction <cit.>. All simulations were done in MATLAB.Figure <ref> and <ref> display CPs of the various inferential procedures discussed above for DGP1 for T=100 and T=200. 
Based on the two model selection criteria employed, we can partly confirm the findings of <cit.>. That is, if evidence for a particular rank is weak, pre-testing does not seem to deliver more accurate inference than (bootstrap) CIs based on unrestricted OLS. This holds for both sample sizes considered. However, neither of these two frequently used approaches can be considered a reliable strategy for the construction of inference: minimum CPs are well below 60%. Surprisingly, even when the true model specification is imposed (which could be considered the oracle method), CPs are generally not closer to the nominal level either, both at short and long horizons. Endogenous rank selection does not seem to improve the performance compared to the pre-testing procedure. FDB bagging does give CPs closer to the nominal level, in particular when based on the AIC. However, the WIMP intervals outperform all other methods, and deliver CPs that are on average quite close to the 95% nominal level.

Figure <ref> presents the corresponding average width of the bootstrap intervals over all horizons for the five most relevant methods. There are several interesting observations to make from this figure. First, note that even though FDB bagging and the WIMP produce much more accurate intervals than OLS or imposing the true rank, they do not produce intervals that are much wider or overly conservative. Second, even though the WIMP method produces more accurate intervals than FDB bagging, its intervals are not wider, indicating that the mechanism built into the WIMP to reduce the impact of implausible models works well in practice.

It stands to reason that if evidence for a specific cointegration relation is strong, rank pre-estimation could result in more reliable inference than unrestricted OLS and may outperform the WIMP intervals, which, despite weighting down implausible ranks, are inherently more conservative. We investigate this further by turning to DGP2. Figure <ref> displays CPs for the case of strong cointegration relations. Indeed, CPs implied by model selection based on the AIC and BIC are much closer to the nominal level than those entailed by OLS. Bootstrap intervals based on unrestricted estimation can again not be considered reliable, with minimum CPs around 60% for both sample sizes. Imposing the true rank delivers CPs close to, but still below, the nominal level. As in the weak cointegration setting, the WIMP intervals again outperform all other approaches and even deliver CPs closer to the nominal level than those implied by the correct rank specification. It is noticeable that the WIMP intervals do not produce overly conservative inference when evidence for a particular rank is strong, but result in CPs very close to the 95% level. This is also reflected in the average width (over 1000 MC simulations) of the CIs displayed in Figure <ref>. WIMP intervals are (if at all) only marginally wider than those implied by the correct rank specification, and are even much narrower than some of the intervals based on the unrestricted model.
Finally, note that the WIMP intervals are now also much narrower than some of the FDB bagging intervals, while having superior coverage.

We also considered the lag-augmentation approach proposed by <cit.> and <cit.> in the simulations, where we implemented a variety of bootstrap versions in combination with lag-augmenting (p = 2) the VAR; see Appendix <ref> for details. Although the performance varied considerably with the specific bootstrap algorithm implemented, even the best performing lag-augmentation method did not seem able to account for rank uncertainty in a satisfactory way, and performed no better than the standard VAR in levels.[We also investigated the bias correction proposed by <cit.>. We find that this method provides intervals with coverage close to nominal and comparable to the WIMP. However, some of the intervals are much wider than the WIMP and even the OLS intervals. The results are summarized in Appendix <ref>.]

§ FISCAL POLICY SHOCKS AND RANK UNCERTAINTY

We now study the potential ramifications of rank uncertainty for applied macroeconomic analysis. With our proposed approaches to constructing inference that accounts for rank uncertainty, we aim to assess the robustness of results obtained from unrestricted VARs. While there are countless VAR-based studies that use impulse response analysis to investigate the propagation of structural economic shocks, we focus in the following on fiscal policy shocks. As our focus is methodological, we do not aim to contribute to the literature on the identification of structural VARs. We therefore dispense with a detailed literature review on VAR-based policy analysis and focus only on evaluating seminal papers reflecting various identification strategies. We also skip a detailed discussion of different identification approaches and their respective merits.[For a detailed exposition we refer to <cit.> for a recent survey on various identification approaches and results in the literature.] Moreover, we omit any discussion of point estimates and focus solely on inference.[Our aim is not to challenge (widely accepted) empirical findings on the effects of economic policies, but to provide the applied researcher with tools that might help to construct more reliable inference. For that reason, we refrain from a simple replication exercise comparing different inferential approaches, and we want to stress that our goal is certainly not to contrast our findings with the original papers. Instead, we use the same reduced-form VAR and the same dataset across all applications, in order to move away from the original papers and only contrast results based on different identification procedures.]

Fiscal policy can relate to both the expenditure and the revenue side of the government's budget. Measuring the effect of active spending policies as well as the consequences of tax changes has been an active field of economic research for decades. One of the first influential contributions using VAR-based impulse responses to assess the effect of government purchases is <cit.>. The authors identify spending shocks by a recursive identification scheme. With government spending ordered first, this translates into the assumption that government purchases are predetermined within the quarter. Due to their assumed independence from general macroeconomic conditions, <cit.> construct narrative records based on military buildups to identify truly exogenous spending changes.
Those narrative time series have been embedded in several VAR studies and used to identify spending shocks by ordering this series first in a Cholesky-identified VAR. Among the most prominent studies following this approach is <cit.>. In her paper she revisits the construction of the government spending news variable, filtering out possible distortions due to anticipation effects. Narrative series have also been used to identify tax changes. In a series of papers, <cit.> construct various “dis-aggregates” of the <cit.> measures of legislated changes in federal tax liabilities. Specifically, Mertens and Ravn distinguish between announced and unannounced tax changes, or between personal and corporate taxes. Moreover, they do not view those narrative series as a direct measure of “tax-shocks” but rather as an external proxy which is correlated with the unknown structural shocks.[See also <cit.> and <cit.>.] Thus, instead of including the narrative variable in the VAR, one can obtain the structural shock of interest by regressing the narrative proxy on the reduced-form residuals.

Yet another structural VAR identification approach imposes signs on the impulse responses to a particular shock for a certain horizon. <cit.> identify a contractionary tax-shock as a shock which leads to non-negative responses in government revenue during the first year after impact. Additionally, this tax-shock is identified by requiring it to be orthogonal to a business cycle shock and a monetary policy shock, both identified through signs.[All shocks are identified sequentially by maximizing a penalty function which rewards responses in the desired direction and penalizes the others. Business cycle shocks are identified by assuming co-movements in the same direction as output, consumption, investment and government revenue. Contractionary monetary policy shocks affect responses in reserves and prices negatively and the interest rate positively.] In particular, the orthogonality to business cycle fluctuations aims at controlling for movements in the government's budget caused by automatic stabilizers.

We compare the uncertainty associated with the estimated impulse responses resulting from the above mentioned four identification approaches using the same data and, as far as possible, the same specification of the underlying (reduced-form) VAR. That is, we use <cit.>'s structural VAR approach as well as <cit.>'s strategy of incorporating her narrative series in a VAR to identify the effect of government spending. Further, we use <cit.>'s sign-restriction scheme and <cit.>'s proxy-VAR to assess the effect of tax-shocks. The choice of variables and the sample period is largely determined by the “highest minimal requirement” across the above identification approaches. The benchmark VAR is estimated in GDP, private consumption, non-residential investment, government spending, (federal) tax receipts, total non-borrowed reserves, the federal funds rate, real wages, a price index, and the GDP deflator, where all variables except the federal funds rate are transformed to logs. The data are sampled quarterly from 1950/Q1 to 2006/Q4. A detailed description of the data is given in Appendix <ref>. Additionally we use <cit.>'s news variable and <cit.>'s unanticipated tax-change proxy. The VAR representation in levels includes an intercept and a deterministic linear time trend, and four lags are included.
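Before turning to the results, we sketch the generic reduced-form computation that underlies all four identification schemes: OLS estimation of the levels VAR with intercept and linear trend, followed by the forecast error impulse responses Φ_h from the moving average recursion. This is an illustration of ours in Python/NumPy, not the authors' MATLAB code, and it makes one simplifying assumption: the deterministic terms are included directly as regressors instead of being removed by prior detrending as in Algorithm <ref>.

```python
import numpy as np

def var_ols(y, p=4):
    """OLS of a levels VAR(p) with intercept and linear trend."""
    T, n = y.shape
    X = np.asarray([np.concatenate([[1.0, t], y[t - p:t][::-1].ravel()])
                    for t in range(p, T)])         # regressors: 1, t, y_{t-1},...,y_{t-p}
    Y = y[p:]
    B = np.linalg.lstsq(X, Y, rcond=None)[0]
    U = Y - X @ B                                  # reduced-form residuals
    A = B[2:].reshape(p, n, n).transpose(0, 2, 1)  # A[j-1] is the coefficient on y_{t-j}
    return B, A, U

def irf(A, H):
    """Forecast error impulse responses Phi_0,...,Phi_H from the MA recursion."""
    p, n = len(A), A[0].shape[0]
    Phi = [np.eye(n)]
    for h in range(1, H + 1):
        Phi.append(sum(A[j] @ Phi[h - 1 - j] for j in range(min(h, p))))
    return np.array(Phi)
```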
We construct inference using the residual-based bootstrap algorithm presented in Algorithm <ref>.[We did not find strong evidence of heteroskedasticity in the reduced-form residuals and refrain from using a robust bootstrap procedure such as the moving block bootstrap <cit.>. All approaches outlined in this paper could easily be extended in this way.]^,[While <cit.>'s news series is included in the VAR, and thus bootstrapped “endogenously”, we jointly draw (with replacement) from the reduced-form residuals and <cit.>'s external variable to account for uncertainty in estimating the effects of tax-shocks using this proxy.] In order to make the results somewhat comparable, impulse responses are normalized such that the point estimate of the response of the policy instrument has a peak at unity across the different identification approaches <cit.>. As a measure of uncertainty we plot 68% confidence intervals, which is standard in the fiscal policy literature.[The data set as well as a MATLAB toolbox for the WIMP method with the identification schemes used in this section are available at http://www.stephansmeekes.nl.]

[Figure: 68% confidence intervals of impulse responses to a government spending shock identified as in <cit.>. Dashed lines are OLS intervals, dotted lines FDBb/AIC intervals, solid lines WIMP intervals.]

[Figure: 68% confidence intervals of impulse responses to a government spending shock identified as in <cit.>. For details see Figure <ref>.]

Figures <ref> and <ref> display the unrestricted VAR in levels (estimated by OLS), FDB bagging (with AIC selection), and WIMP confidence bands (using the same specifications as in Section <ref>) of impulse responses to a government spending shock. For the recursive VAR as in <cit.>, all three measures of uncertainty suggest that government spending shocks generate an initial boost in GDP. While the FDBb intervals indicate a rather moderate increase relative to the OLS intervals, the WIMP intervals imply maximum multiplier effects greater in range (roughly between 0.7 and 1.5). Considering impulse responses following Ramey's news shocks, it seems to be less clear whether government spending stimulates output or not. While the OLS confidence bands (and to a lesser extent the FDBb bands) support findings in the literature suggesting a short-lived boost in GDP, the WIMP intervals indicate greater uncertainty associated with the output response. Indeed, “robust” spending peak multipliers range between 0 and 3.3, such that a reliable conclusion on the effectiveness of spending policies cannot be made in this case.

[Figure: 68% confidence intervals of impulse responses to a tax-shock identified as in <cit.>. For details see Figure <ref>.]

[Figure: 68% confidence intervals of impulse responses to a tax-shock identified as in <cit.>. For details see Figure <ref>.]

Confidence intervals of impulse responses following a contractionary tax-shock are displayed in Figures <ref> and <ref>. Qualitatively, the responses of GDP and its main aggregates are rather similar across both identification approaches and across all three inferential procedures: output, consumption, and investment decrease significantly. The long-lived contraction in economic activity is accompanied by an equally lengthy decline in government spending, which hinders the interpretation of the shocks as “pure” tax-shocks.
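The generic structure of the interval construction can be sketched as follows, reusing the `var_ols` and `irf` helpers from the previous sketch. This is a deliberately simplified illustration of ours of a fixed-specification residual i.i.d. bootstrap in the spirit of Algorithm <ref>; among other things, it omits the detrending step, the rank restriction, and the joint resampling of the external proxy described above.

```python
import numpy as np

def bootstrap_irf_bands(y, p=4, H=20, nboot=399, gamma=0.68, seed=0):
    """Pointwise percentile intervals for Phi_h, h = 0,...,H (simplified sketch)."""
    rng = np.random.default_rng(seed)
    T, n = y.shape
    B, A, U = var_ols(y, p)
    U = U - U.mean(axis=0)                         # recentre residuals before resampling
    draws = np.empty((nboot, H + 1, n, n))
    for b in range(nboot):
        idx = rng.integers(0, T - p, size=T - p)   # i.i.d. draws with replacement
        ystar = y.copy()                           # y*_t = y_t for the first p observations
        for t in range(p, T):
            x = np.concatenate([[1.0, t], ystar[t - p:t][::-1].ravel()])
            ystar[t] = x @ B + U[idx[t - p]]
        draws[b] = irf(var_ols(ystar, p)[1], H)    # re-estimate and recompute responses
    lo = np.quantile(draws, (1 - gamma) / 2, axis=0)
    hi = np.quantile(draws, (1 + gamma) / 2, axis=0)
    return lo, hi
```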
Quantitatively, the implied response of output is much greater in the proxy VAR framework than in the SVAR one. Intervals for peak multipliers include -6 for the former, and -3 for the latter. Similar to the responses to a government spending shock, the FDBb intervals are not necessarily wider than the OLS intervals. However, when considering the impact on output, and in contrast to the scenario investigated above, the two intervals at times do not intersect, and the FDBb intervals imply a significantly smaller impact on economic activity. This holds for both the shocks of <cit.> and <cit.>. Reflecting potentially more conservative inference, the WIMP intervals are wider, often encompassing the OLS intervals. Yet the WIMP intervals indicate that OLS-based inference rather understates the effect of the identified tax-shocks on almost all variables. Generally, tax-shocks estimated by the proxy VAR imply greater effects on economic activity than those identified through sign restrictions. Moreover, the comparison with the spending shocks supports some results in the literature suggesting that tax-cuts may be more effective in stimulating the economy. Figure <ref> compares confidence intervals for peak multipliers. Indeed, the evidence that multipliers exceed unity is much stronger for tax-cut policies than for spending policies. Based on the results for Ramey's news shock, multipliers due to expansionary spending policies might not even be significant at all.

The above results illustrate that ignoring uncertainty about the cointegration relations may lead to an ambiguous quantification of statistical significance. Incorporating this uncertainty via the WIMP approach allows for a more confident interpretation of the results.

§ DISCUSSION

In this paper we have shown, empirically and through a simulation study, that ignoring uncertainty about cointegration relations may lead to unreliable inference for (structural) impulse responses. Since the commonly used specification of the VAR in levels ignores any evidence for cointegration in the data, the associated inference captures uncertainty only poorly. Also, model selection techniques, such as rank pre-estimation by sequential testing or information criteria, seem to deliver reliable inference only if evidence for the true cointegration rank is strong. In this paper we propose a novel data-driven approach to robust inference for impulse responses in the presence of uncertainty regarding the cointegration rank. Our WIMP approach is shown, both by simulation and empirically, to deliver meaningful (i.e. not too wide) confidence intervals while being robust to rank uncertainty. As such it provides a reliable and simple alternative to the unreliable standard approaches. Practical implementation of the WIMP approach only requires fixed-rank (bootstrap) intervals plus the sequence of trace test statistics for all ranks, both of which are readily available in any standard statistical software. While a toolbox for the WIMP methods used in our application is directly available, our approach can also easily be implemented for any desired SVAR analysis, as the fixed-rank intervals used as input for the WIMP can be based on any appropriate method, both in terms of the inference method, such as the bootstrap, and the identification scheme.
Finally, the computational cost of the method is fairly low; on any modern computer, bootstrap intervals for a fixed rank are fast to compute, and given that in this kind of VAR model the number of variables (and hence the number of ranks) has to be relatively low to avoid the curse of dimensionality, doing so for all ranks should pose no problem.

While prudent construction of inference is particularly important for impulse responses, our proposed WIMP procedure is equally beneficial in other VAR contexts, such as forecasting. While forecast combinations across different models are well accepted as point forecasts, our WIMP method allows one to construct corresponding interval forecasts that account for model uncertainty. More generally, the approach can be adapted to a variety of model selection problems, as long as the relative evidence for a particular model can be assessed against a modest number of alternatives. While in theory it can be applied to high-dimensional problems as well, computationally the method is best suited for low-dimensional problems where the number of models is relatively small. While this is a limitation of the method, it is inherent to the underlying simultaneous inference philosophy, which also underlies the PoSI method of <cit.>. Exploring the usefulness and limitations of the WIMP in more general settings is therefore an interesting avenue for future research.

[Benkwitz, Lütkepohl, and WoltersBenkwitz et al.2001]BLW01 Benkwitz, A., H. Lütkepohl, and J. Wolters (2001). Comparison of bootstrap confidence intervals for impulse responses of German monetary systems. Macroeconomic Dynamics 5, 81–100.
[Berk, Brown, Buja, Zhang, and ZhaoBerk et al.2013]PoSI13 Berk, R., L. Brown, A. Buja, K. Zhang, and L. Zhao (2013). Valid post-selection inference. Annals of Statistics 41, 802–837.
[Bernstein and NielsenBernstein and Nielsen2014]BernsteinNielsen14 Bernstein, D. and B. Nielsen (2014). Asymptotic theory for cointegration analysis when the cointegration rank is deficient. Economic Working Papers 2014-W06, Nuffield College, University of Oxford.
[Blanchard and PerottiBlanchard and Perotti2002]BlanchardPerotti02 Blanchard, O. and R. Perotti (2002). An empirical characterization of the dynamic effects of changes in government spending and taxes on output. The Quarterly Journal of Economics 117(4), 1329–1368.
[BreimanBreiman1996]Breiman96 Breiman, L. (1996). Bagging predictors. Machine Learning 24, 123–140.
[Bruder and WolfBruder and Wolf2017]BruderWolf17 Bruder, S. and M. Wolf (2017). Balanced bootstrap joint confidence bands for structural impulse response functions. Technical Report No. 246, University of Zurich.
[Brüggemann, Jentsch, and TrenklerBrüggemann et al.2016]BJT16 Brüggemann, R., C. Jentsch, and C. Trenkler (2016). Inference in VARs with conditional heteroskedasticity of unknown form. Journal of Econometrics 191, 69–85.
[Bühlmann and YuBühlmann and Yu2002]BuhlmannYu02 Bühlmann, P. and B. Yu (2002). Analyzing bagging. Annals of Statistics 30, 927–961.
[Cavaliere, Rahbek, and TaylorCavaliere et al.2010a]CRT10ET Cavaliere, G., A. Rahbek, and A. M. R. Taylor (2010a). Cointegration rank testing under conditional heteroskedasticity. Econometric Theory 26, 1719–1760.
[Cavaliere, Rahbek, and TaylorCavaliere et al.2010b]CRT10JoE Cavaliere, G., A. Rahbek, and A. M. R. Taylor (2010b). Testing for co-integration in vector autoregressions with non-stationary volatility. Journal of Econometrics 158, 7–24.
[Cavaliere, Rahbek, and TaylorCavaliere et al.2012]CRT12 Cavaliere, G., A.
Rahbek, and A. M. R. Taylor (2012). Bootstrap determination of the co-integration rank in vector autoregressive models. Econometrica 80, 1721–1740.
[Chao and PhillipsChao and Phillips1999]ChaoPhillips99 Chao, J. C. and P. C. B. Phillips (1999). Model selection in partially nonstationary vector autoregressive processes with reduced rank structure. Journal of Econometrics 91, 227–271.
[Cheng and PhillipsCheng and Phillips2009]ChengPhillips09 Cheng, X. and P. C. B. Phillips (2009). Semiparametric cointegrating rank selection. Econometrics Journal 12, S83–S104.
[Cheng and PhillipsCheng and Phillips2012]ChengPhillips12 Cheng, X. and P. C. B. Phillips (2012). Cointegrating rank selection in models with time-varying variance. Journal of Econometrics 142, 201–211.
[ChoiChoi2005]Choi05 Choi, I. (2005). Inconsistency of bootstrap for nonstationary, vector autoregressive processes. Statistics & Probability Letters 75, 39–48.
[Davidson and MacKinnonDavidson and MacKinnon2002]DavidsonMacKinnon02 Davidson, R. and J. G. MacKinnon (2002). Fast double bootstrap tests of nonnested linear regression models. Econometric Reviews 21, 419–429.
[Del Negro and SchorfheideDel Negro and Schorfheide2011]DelNegro11 Del Negro, M. and F. Schorfheide (2011). Bayesian macroeconometrics. In J. Geweke, G. Koop, and H. van Dijk (Eds.), The Oxford Handbook of Bayesian Econometrics, pp. 293–389. Oxford University Press.
[Del Negro, Schorfheide, Smets, and WoutersDel Negro et al.2007]DSSW07 Del Negro, M., F. Schorfheide, F. Smets, and R. Wouters (2007). On the fit of New Keynesian models. Journal of Business & Economic Statistics 25, 123–143.
[Dolado and LütkepohlDolado and Lütkepohl1996]DoladoLuetkepohl96 Dolado, J. J. and H. Lütkepohl (1996). Making Wald tests work for cointegrated VAR systems. Econometric Reviews 15, 369–386.
[EfronEfron1979]Efron79 Efron, B. (1979). Bootstrap methods: another look at the jackknife. Annals of Statistics 7, 1–26.
[EfronEfron2014]Efron14 Efron, B. (2014). Estimation and accuracy after model selection. Journal of the American Statistical Association 109, 991–1007.
[ElliottElliott1998]Elliott98 Elliott, G. (1998). On the robustness of cointegration methods when regressors almost have unit roots. Econometrica 66, 149–158.
[Giannone, Lenza, and PrimiceriGiannone et al.2016]GLP16 Giannone, D., M. Lenza, and G. E. Primiceri (2016). Priors for the long run. CEPR Discussion Paper 11261, Centre for Economic Policy Research.
[GospodinovGospodinov2004]Gospodinov04 Gospodinov, N. (2004). Asymptotic confidence intervals for impulse responses of near-integrated processes. Econometrics Journal 7, 505–527.
[GospodinovGospodinov2010]Gospodinov10 Gospodinov, N. (2010). Inference in nearly nonstationary SVAR models with long-run identifying restrictions. Journal of Business & Economic Statistics 28, 1–12.
[Gospodinov, Herrera, and PesaventoGospodinov et al.2013]GHP13 Gospodinov, N., A. M. Herrera, and E. Pesavento (2013). Unit roots, cointegration, and pretesting in VAR models. In T. B. Fomby, L. Kilian, and A. Murphy (Eds.), VAR Models in Macroeconomics - New Developments and Applications: Essays in Honor of Christopher A. Sims, Volume 32 of Advances in Econometrics, pp. 81–115. Emerald Group Publishing Limited.
[Gospodinov, Maynard, and PesaventoGospodinov et al.2011]GMP11 Gospodinov, N., A. Maynard, and E. Pesavento (2011). Sensitivity of impulse responses to small low-frequency comovements: reconciling the evidence on the effects of technology shocks.
Journal of Business & Economic Statistics 29, 455–467.
[HallHall1992]Hall92 Hall, P. (1992). The bootstrap and Edgeworth expansions. New York: Springer-Verlag.
[Hjort and ClaeskensHjort and Claeskens2003]HjortClaeskens03 Hjort, N. L. and G. Claeskens (2003). Frequentist model average estimators. Journal of the American Statistical Association 98, 879–899.
[Inoue and KilianInoue and Kilian2002]InoueKilian02 Inoue, A. and L. Kilian (2002). Bootstrapping autoregressive processes with possible unit roots. Econometrica 70, 377–391.
[Inoue and KilianInoue and Kilian2016]InoueKilian16 Inoue, A. and L. Kilian (2016). Joint confidence sets for structural impulse responses. Journal of Econometrics 192, 421–432.
[Inoue and KilianInoue and Kilian2019]InoueKilian19 Inoue, A. and L. Kilian (2019). The uniform validity of impulse response inference in autoregressions. Department of Economics Working Paper Series 19-00001, Vanderbilt University.
[Jardet, Monfort, and PegoraroJardet et al.2013]JMP13 Jardet, C., A. Monfort, and F. Pegoraro (2013). No-arbitrage near-cointegrated VAR(p) term structure models, term premia and GDP growth. Journal of Banking and Finance 37, 389–402.
[JohansenJohansen1995]Johansen95 Johansen, S. (1995). Likelihood-Based Inference in Cointegrated Vector Autoregressive Models. Oxford University Press.
[KilianKilian1998a]Kilian98b Kilian, L. (1998a). Accounting for lag order uncertainty in autoregressions: the endogenous lag order bootstrap algorithm. Journal of Time Series Analysis 19, 531–548.
[KilianKilian1998b]Kilian98 Kilian, L. (1998b). Small-sample confidence intervals for impulse response functions. Review of Economics and Statistics 80, 218–230.
[Kilian and ChangKilian and Chang2000]KillianChang00 Kilian, L. and P.-L. Chang (2000). How accurate are confidence intervals for impulse responses in large VAR models? Economics Letters 69, 299–307.
[Kilian and LütkepohlKilian and Lütkepohl2017]KilianLuetkepohl17 Kilian, L. and H. Lütkepohl (2017). Structural Vector Autoregressive Analysis. Cambridge University Press.
[Koop, Potter, and StrachanKoop et al.2008]KPS08 Koop, G., S. M. Potter, and R. W. Strachan (2008). Re-examining the consumption–wealth relationship: the role of model uncertainty. Journal of Money, Credit and Banking 40, 341–367.
[Leeb and PötscherLeeb and Pötscher2005]LeebPoetscher05 Leeb, H. and B. M. Pötscher (2005). Model selection and inference: Facts and fiction. Econometric Theory 21, 21–59.
[Leeb, Pötscher, and EwaldLeeb et al.2015]LPE15 Leeb, H., B. M. Pötscher, and K. Ewald (2015). On various confidence intervals post-model-selection. Statistical Science 30, 216–227.
[Liao and PhillipsLiao and Phillips2015]LiaoPhillips15 Liao, Z. and P. C. B. Phillips (2015). Automated estimation of vector error correction models. Econometric Theory 31, 581–646.
[LütkepohlLütkepohl1990]Luetkepohl90 Lütkepohl, H. (1990). Asymptotic distributions of impulse response functions and forecast error variance decompositions of vector autoregressive models. Review of Economics and Statistics 72, 116–125.
[Lütkepohl, Staszewska-Bystrova, and WinkerLütkepohl et al.2015]LSW15 Lütkepohl, H., A. Staszewska-Bystrova, and P. Winker (2015). Comparison of methods for constructing joint confidence bands for impulse response functions. International Journal of Forecasting 31, 782–798.
[Mertens and RavnMertens and Ravn2011]MertensRavn11 Mertens, K. and M. O. Ravn (2011). Understanding the aggregate effects of anticipated and unanticipated tax policy shocks.
Review of Economic Dynamics 14, 27–54.
[Mertens and RavnMertens and Ravn2012]MertensRavn12 Mertens, K. and M. O. Ravn (2012). Empirical evidence on the aggregate effects of anticipated and unanticipated US tax policy shocks. American Economic Journal: Economic Policy 4, 145–181.
[Mertens and RavnMertens and Ravn2013]MertensRavn13 Mertens, K. and M. O. Ravn (2013). The dynamic effects of personal and corporate income tax changes in the United States. American Economic Review 103, 1212–1247.
[Mertens and RavnMertens and Ravn2014]MertensRavn14 Mertens, K. and M. O. Ravn (2014). A reconciliation of SVAR and narrative estimates of tax multipliers. Journal of Monetary Economics 68, 1–19.
[MikushevaMikusheva2007]Mikusheva07 Mikusheva, A. (2007). Uniform inference in autoregressive models. Econometrica 75, 1411–1452.
[MikushevaMikusheva2012]Mikusheva12 Mikusheva, A. (2012). One-dimensional inference in autoregressive models with the potential presence of a unit root. Econometrica 80, 173–212.
[Montiel-Olea, Stock, and WatsonMontiel-Olea et al.2016]MSW16 Montiel-Olea, J. L., J. H. Stock, and M. W. Watson (2016). Uniform inference in SVARs identified with external instruments. Mimeo.
[Mountford and UhligMountford and Uhlig2009]MountfordUhlig09 Mountford, A. and H. Uhlig (2009). What are the effects of fiscal policy shocks? Journal of Applied Econometrics 24, 960–992.
[Pesavento and RossiPesavento and Rossi2006]PesaventoRossi06 Pesavento, E. and B. Rossi (2006). Small-sample confidence intervals for multivariate impulse response functions at long horizons. Journal of Applied Econometrics 21, 1135–1155.
[Pesavento and RossiPesavento and Rossi2007]PesaventoRossi07 Pesavento, E. and B. Rossi (2007). Impulse response confidence intervals for persistent data: What have we learned? Journal of Economic Dynamics & Control 31, 2398–2412.
[PhillipsPhillips1996]Phillips96 Phillips, P. C. B. (1996). Econometric model determination. Econometrica 64, 763–812.
[PhillipsPhillips1998]Phillips98 Phillips, P. C. B. (1998). Impulse response and forecast error variance asymptotics in nonstationary VARs. Journal of Econometrics 83, 21–56.
[RameyRamey2011]Ramey11 Ramey, V. A. (2011). Identifying government spending shocks: It's all in the timing. Quarterly Journal of Economics 126(1), 1–50.
[RameyRamey2016]Ramey16 Ramey, V. A. (2016). Macroeconomic shocks and their propagation. NBER Working Papers 21978, National Bureau of Economic Research.
[Ramey and ShapiroRamey and Shapiro1998]RameyShapiro98 Ramey, V. A. and M. D. Shapiro (1998). Costly capital reallocation and the effects of government spending. Carnegie-Rochester Conference Series on Public Policy 48(1), 145–194.
[Romer and RomerRomer and Romer2009]RomerRomer09 Romer, C. D. and D. H. Romer (2009). A narrative analysis of postwar tax changes. Mimeo, University of California, Berkeley.
[SmeekesSmeekes2013]Smeekes13 Smeekes, S. (2013). Detrending bootstrap unit root tests. Econometric Reviews 32, 869–891.
[Sobreira and NunesSobreira and Nunes2012]SobreiraNunes12 Sobreira, N. and L. C. Nunes (2012). Testing for broken trends in multivariate time series. Mimeo, Nova School of Business and Economics.
[Stock and WatsonStock and Watson2012]StockWatson12a Stock, J. H. and M. W. Watson (2012). Disentangling the channels of the 2007-2009 recession. NBER Working Papers 18094, National Bureau of Economic Research.
[Strachan and van DijkStrachan and van Dijk2007]StrachanVanDijk07 Strachan, R. W. and H. K. van Dijk (2007).
Bayesian model averaging in vector autoregressive processes with an investigation of stability of the US great ratios and risk of a liquidity trap in the USA, UK and Japan. Econometric Institute Research Papers EI 2007-11, Erasmus University Rotterdam.
[Strachan and Van DijkStrachan and Van Dijk2013]StrachanVanDijk13 Strachan, R. W. and H. K. Van Dijk (2013). Evidence on features of a DSGE business cycle model from Bayesian model averaging. International Economic Review 54, 385–402.
[SwensenSwensen2006]Swensen06 Swensen, A. R. (2006). Bootstrap algorithms for testing and determining the cointegration rank in VAR models. Econometrica 74, 1699–1714.
[Toda and YamamotoToda and Yamamoto1995]TodaYamamoto95 Toda, H. Y. and T. Yamamoto (1995). Statistical inference in vector autoregressions with possibly integrated processes. Journal of Econometrics 66, 225–250.
[VillaniVillani2001]Villani01 Villani, M. (2001). Bayesian prediction with cointegrated vector autoregressions. International Journal of Forecasting 17, 585–605.
[WrightWright2000]Wright00 Wright, J. H. (2000). Confidence intervals for univariate impulse responses with a near unit root. Journal of Business & Economic Statistics 18, 368–373.

§ ALGORITHMS

Here we describe the bootstrap algorithms used in the paper. Algorithm <ref> is the specific fixed-rank bootstrap algorithm used in the simulation and empirical sections. Depending on the specific assumptions made on {u_t}, a variety of different bootstrap methods, such as the i.i.d., wild or block bootstrap, can be used in Step 2 of Algorithm <ref>; we provide further details in Section <ref>. Similarly, different initializations in Step 3 can be used. For the simulation study and application in this paper, we use the i.i.d. bootstrap in Step 2 and initialize the bootstrap sample in Step 3 by setting y_t^* = y_t for t = 1, …, p+1.

Instead of detrending or demeaning (with μ̂_1 = 0) prior to estimation, one could also directly incorporate the deterministic components in the VECM <cit.>. However, one then has to decide how the deterministic components affect the long run and short run components separately, resulting in a multitude of different specifications. Our simpler, robust strategy corresponds to the typical approach taken in most empirical studies, and makes the estimators of the detrended VECM invariant to the true deterministics present in the DGP. In Step 4 of the algorithm we detrend the bootstrap data again, re-estimating the deterministic components, which might appear unnecessary as the bootstrap data do not contain any trends. However, this is done to mimic the effect of detrending on the calculated impulse responses, which, under cointegration and at very long horizons, will affect the asymptotic distributions just as it would in unit root or cointegration analyses. It might be tempting to also first “retrend” the bootstrap data, that is, to put the estimated trend back into the bootstrap sample. This is however unnecessary, as the consequent detrending makes the estimators invariant to the exact value of the trend coefficient, see for example Remark 2 in <cit.>. Algorithm <ref> shows how endogenous rank selection can be implemented in the bootstrap. Algorithm <ref> details how to implement bagging with the Fast Double Bootstrap.

§ PROOFS

By Assumption (i), we have that P(R = r_0) → 1.
As by construction L^W(γ) ≤ L^(R)(γ), it follows that

P(L^W(γ) ≤ L^(r_0)(γ)) = P(L^W(γ) ≤ L^(R)(γ) | R = r_0) P(R = r_0) + o(1) = P(R = r_0) + o(1) → 1,

and similarly P(U^W(γ) ≥ U^(r_0)(γ)) → 1. The result then follows from Assumption (ii), as

P(L^W(γ) ≤ ζ ≤ U^W(γ)) ≥ P(L^(r_0)(γ) ≤ ζ ≤ U^(r_0)(γ)) + o(1) → 1 - γ.

It follows from <cit.> and <cit.> that for all r ≥ r_0, J_T(r) = O_p(1), such that T^{-c_2} J_T(r) → 0 in probability, while for r < r_0, J_T(r)/T is bounded away from zero in probability, such that T^{-c_2} J_T(r) = T^{1-c_2} J_T(r)/T → ∞ in probability. Therefore e^{-c_1 T^{-c_2} J_T(r)} → 1(r ≥ r_0) in probability, and consequently W_K(r) → 1(r = r_0) in probability.

§ ADDITIONAL SIMULATIONS

In this section we investigate by simulation the properties of two alternative bootstrap approaches, the lag-augmentation proposed by <cit.> and <cit.> as well as the bias correction of <cit.>. In order to combine the idea of lag-augmentation with a bootstrap algorithm, several choices have to be made. In particular, the VAR process from which the bootstrap samples are built (see e.g. Step 1/Step 3 of Algorithm <ref>) has to be specified. Potential candidates are the “correctly specified” VAR and the lag-augmented one. Similarly, one has to decide how to re-estimate the VAR parameters from each bootstrap sample, i.e. using lag-augmentation or not. <cit.> and <cit.> provide little practical guidance on these decisions. We investigated various possible ways to generate bootstrap inference with lag-augmented VARs and found that the performance varied heavily with these choices. We here report the performance of the best performing method, where, in Step 1 of Algorithm <ref>, we estimate a (correctly lag-specified) VAR(1) in levels to construct the bootstrap DGP in Step 3, whereas ζ̂ and ζ̂^* in Steps 4 and 5 are based on (the first lag of) a lag-augmented VAR in levels (p = 2) estimated on the data Y_T and the simulated data Y_T^*, respectively.

Figures <ref> and <ref> summarize the simulation results for the lag-augmented approach, including OLS and the WIMP for ease of comparison. The empirical coverage probabilities of the lag-augmented VAR in Figure <ref> are reasonably good, especially at longer horizons. At short horizons, the intervals suffer from more serious undercoverage than OLS and the WIMP. However, the widths of the intervals in Figure <ref> show that the lag-augmented VAR intervals are much wider than the OLS and WIMP intervals, and clearly far too wide to be of any practical use. This is because the lag-augmented VAR tends to imply overly persistent, often explosive dynamics, at least in our simulation design.

<cit.> suggest using lag-augmentation in combination with a small sample bias-correction as proposed in <cit.>. However, it is (again) not entirely clear how to best combine both procedures in the bootstrap algorithm. Thus, we investigate the implication of Kilian's bias-correction separately. We follow the algorithm in <cit.> in combination with an i.i.d. bootstrap. The empirical coverage probabilities of the bias-corrected VAR in Figure <ref> are close to their nominal levels and comparable to those of the WIMP. As displayed in Figure <ref>, some of the bias-corrected VAR intervals are, however, much wider than the WIMP and even the OLS intervals.

§ DATA

All data are quarterly, sampled from 1950/Q1 to 2006/Q4. We composed the data from three sources: the Bureau of Economic Analysis' U.S.
National Income and Product Accounts (NIPA) (https://www.bea.gov/national), the Bureau of Labor Statistics (BLS) (https://www.bls.gov), and the FRED Economic Database hosted by the Federal Reserve Bank of St. Louis (https://fred.stlouisfed.org).

* GDP is taken from NIPA table 1.1.5.
* Consumption is private consumption, NIPA table 1.1.5.
* Investment is gross private non-residential investment, NIPA table 1.1.5.
* Government spending is government expenditure and gross investment, NIPA table 3.9.5.
* Taxes are Federal government current tax receipts plus contributions for social insurance minus income taxes from federal reserve banks, all in NIPA table 3.2.
* Real wages are nonfarm business sector: real compensation per hour, from the BLS.
* GDP deflator is taken from NIPA table 1.1.9.
* Federal funds rate is taken from FRED, series code: fedfunds.
* Adjusted reserves is taken from FRED, series code: ADJRESSL.

GDP and its components, government revenue, and adjusted reserves are transformed into real per capita values using the GDP deflator and a population measure (NIPA table 7.1).
http://arxiv.org/abs/1709.09583v3
{ "authors": [ "Lenard Lieb", "Stephan Smeekes" ], "categories": [ "econ.EM", "stat.AP", "stat.ME" ], "primary_category": "econ.EM", "published": "20170927153517", "title": "Inference for Impulse Responses under Model Uncertainty" }
Gaia FGK benchmark stars: a bridge between spectroscopic surveys

Paula Jofré^1,2, Ulrike Heiter^3 and Sven Buder^4

^1 Institute of Astronomy, University of Cambridge, Madingley Road, Cambridge CB3 0HA, UK
^2 Núcleo de Astronomía, Universidad Diego Portales, Ejército 441, Santiago, Chile
^3 Department of Physics and Astronomy, Uppsala University, Box 516, 75120 Uppsala, Sweden
^4 Max-Planck Institute for Astronomy, 69117, Heidelberg, Germany

The Gaia benchmark stars (GBS) are very bright stars of different late spectral types, luminosities and metallicities. They are well-known in the Galactic archaeology community because they are widely used to calibrate and validate the automatic pipelines delivering parameters of on-going and future spectroscopic surveys. The sample provides us with consistent fundamental parameters as well as a library of high resolution and high signal-to-noise spectra. This allows the community to study details of high resolution spectroscopy and to compare results between different survey pipelines, putting the GBS at the heart of this community. Here we discuss some results arising from using the GBS as the main data source for spectral analyses.

Keywords: stellar atmospheres – Gaia – spectroscopic surveys

§ INTRODUCTION

One goal of the Galactic archaeology community is to unravel the structure and evolution of the Galaxy. This is approached by studies of the spatial, dynamical and chemical distributions of stars of different ages in different Galactic directions, based on the photometry, astrometry, spectroscopy and seismology of large samples of stars, complemented by distances, proper motions, radial velocities, fundamental stellar parameters and individual chemical abundances. In the era of ultra-high precision stellar astrophysics opened up by the CoRoT, Kepler, and Gaia missions, many spectroscopic surveys are available to the community today; notable examples are RAVE, SDSS, LAMOST, Gaia-ESO, APOGEE and GALAH. In the future, high-resolution spectroscopic surveys like WEAVE and 4MOST, dedicated to following up Gaia sources, will provide even larger datasets. These different surveys are designed to observe different kinds of stars, and therefore their spectra are of a different nature. Some cover the optical wavelength range, others the infrared. Some target the nearby solar neighbourhood, others the faint distant halo. The resolution, signal-to-noise ratio, and wavelength coverage also vary from one survey to another. This naturally leads to different and independent automatic pipelines developed for each of these surveys. Some of these pipelines estimate parameters with spectrum syntheses, others use equivalent width measurements. The ideal way to compare the results of these pipelines, which gives a handle on the systematic uncertainties of these results, is to test the pipelines against sets of reference stars in common. The GBS are one example of a very suitable and successful set of common stars. Other examples are stars in well-known clusters like M67, and in fields with asteroseismic observations <cit.>.

In the IWSSL 2017 workshop, several issues of stellar spectroscopy that have arisen from the analysis of the GBS were discussed in the context of large spectroscopic surveys, Gaia and the future of Galactic archaeology. We start with a short description of the GBS, then we summarise two of our recent activities and we finalise with future prospects.
§ THE FGK GAIA BENCHMARK STARS

The GBS are among the main calibrators of the Gaia-ESO Survey and are a central source for validation of many recent independent spectroscopic analyses. Documentation of the work can be found in the series of six A&A articles published between 2014 and 2017. In these articles one finds their selection criteria <cit.>, a public spectral library <cit.>, and their spectral analyses for metallicity <cit.> and α and iron-peak abundances <cit.>. Five new metal-poor candidates were added to the GBS list in <cit.>, and recently a study of the systematic uncertainties in abundance determinations was presented in <cit.>. The final sample with its recommended parameters is available on the website of our library (<https://www.blancocuaresma.com/s/benchmarkstars>).

§ VALIDATION/CALIBRATION FIELDS FOR SPECTROSCOPIC SURVEYS

Recently, having large numbers of common targets between different spectroscopic surveys has become one of the first priorities, as this allows direct comparisons of the different pipelines. This is necessary to improve the performance of the pipelines and to develop a strategy to put the parameters of the different surveys onto the same scale. Furthermore, the larger the sample of stars, the better for developing data-driven methods which transfer the information (e.g. stellar parameters and chemical abundances) from one dataset onto another. So far, only a limited number of common stars between surveys is publicly available to the community. In July 2016 a group of specialists involved in survey spectral parametrisation pipeline development agreed to meet in Sexten at a workshop entitled “Industrial Revolution of Galactic Astronomy”, organised by A. Miglio, P. Jofré, L. Casagrande and D. Kawata. During a week the group compared and discussed the parameters and abundances obtained by different survey pipelines.

There are about 200 stars in common between Gaia-ESO Data Release 4 and APOGEE Data Release 13 whose parameters can be obtained from public databases[<http://www.eso.org/qi/> and <http://www.sdss.org/dr13/irspec/> for GES and APOGEE, respectively.]. These common stars correspond to a subset of GBS, cluster members and CoRoT targets. The comparison of these parameters is shown in Fig. <ref>, with each symbol representing a different set of stars. At the top left of each panel the mean difference is indicated, with its standard deviation in parentheses. Temperature, surface gravity and metallicity show a very good agreement, with negligible systematic differences and a scatter that is comparable to the typical errors of such parameters. This is encouraging, since both pipelines have been developed in completely independent ways, employing very different strategies and focusing on spectral windows with no overlap. The APOGEE pipeline estimates the best parameters by χ^2-optimisation with a pre-computed grid of synthetic spectra in the infrared <cit.>. In contrast, the GES parameters and abundances are the product of the homogenisation of multiple independent pipelines, which employ a variety of approaches, including excitation and ionisation equilibrium estimation based on equivalent widths as well as χ^2-optimisation of synthetic spectra in the optical <cit.>.
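As a side note, the per-panel statistics quoted above (the mean difference with its standard deviation) are straightforward to reproduce for any pair of matched catalogues; the short Python sketch below uses hypothetical temperature values in place of the actual GES and APOGEE measurements.

```python
import numpy as np

# Hypothetical effective temperatures (K) of the same stars from two pipelines;
# real values would be taken from the public GES and APOGEE releases.
teff_ges    = np.array([4810., 5650., 6120., 4430., 5230.])
teff_apogee = np.array([4775., 5702., 6088., 4461., 5199.])

diff = teff_ges - teff_apogee
print(f"mean difference: {diff.mean():+.0f} K ({diff.std(ddof=1):.0f} K)")
```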
The α abundances, however, do not show a clear correlation like the other parameters, and the reason for this was investigated further. It is important to clarify that the [α/Fe] values reported by GES and APOGEE correspond to the value considered in the atmosphere model that reproduces the synthetic spectrum that best fits the data. The value is, however, based on the α-element features dominating the respective spectral regions of the two surveys. The APOGEE spectra have, for example, numerous very strong oxygen features, while the GES spectra are dominated by Mg and Ca features. Since the α elements are produced by slightly different nucleosynthesis processes, it is not surprising that a star might have slightly different abundances of O and Mg, for example. The fact that the APOGEE and GES [α/Fe] parameters are not tightly correlated does not mean that one of the datasets has incorrect values, but reflects that the α signatures differ in the different spectral domains.

Figure <ref> shows the comparison of individual α-elements that have been commonly measured by GES and APOGEE for the same stars as shown in Fig. <ref>. This removes some of the extreme outliers seen in Fig. <ref>. The correlation improves even more when the abundances are weighted according to the precision with which they are measured (right hand panel). It remains to be seen what the best way is for spectroscopic surveys to provide an α abundance parameter, such that different stellar populations can be properly identified and different surveys properly scaled. This test shows the importance of each survey making clear what its α parameter means.

§ THE ART OF STELLAR SPECTROSCOPY

It has been argued that modern astrophysics started with the advent of stellar spectroscopy in the 19th century <cit.>[<http://faculty.humanities.uci.edu/bjbecker/huggins/>]. A remarkable example of stellar spectroscopy is the creation of the Henry Draper (HD) Catalogue, in which the Harvard Computers (women hired to perform computations at Harvard) classified by eye more than 200,000 stellar spectra according to the depth of some lines, leading to the sequence of spectral types we employ today. While this classification is still used today, the amount and quality of data has grown massively in recent years. The level of detail with which we can resolve spectral lines today with current high-resolution spectrographs is impressive. At the same time, our computers (now machines performing computations) are capable of solving quite complicated equations of stellar structure, including 3D modelling of atmospheres, rotation and departures from local thermodynamic equilibrium. Thus, we can achieve a much higher accuracy and a much more objective classification of stars than by eye. However, it has become evident that even in modern times there is much room for improvement in the art of interpreting stellar spectra. It has been shown that different input material, namely atmosphere models, atomic and molecular data, and observed spectra (which have different resolutions, wavelength coverages and signal-to-noise ratios, SNR), might cause different results in the derived stellar parameters, in particular in metallicity and individual abundances. By fixing these variables one might be able to quantify the systematic uncertainties in stellar parameters when different methodologies are based on the same input material.
The Gaia-ESO Survey is the first and only project that has attempted, massively and systematically, to employ multiple available techniques on a very large dataset. Its approach has been to combine the results of about 20 different groups analysing simultaneously several thousands of spectra in about 5 different wavelength regions and resolutions. A final value and a statistical uncertainty for the stellar parameters of each star are provided (see the talk of R. Smiljanic in this conference). To the surprise of many, this uncertainty has turned out to be much larger than expected in some cases.

Is it only the different choices of spectral lines and the possibly insufficient SNR of the observations which are to blame for the discrepancies in the obtained stellar parameters? That is one of the questions we try to answer using the GBS, for which we have spectra at exquisite SNR, resolution and wavelength coverage. In February 2016 a workshop in Cambridge was organised to tackle this question, and the results were published in <cit.>. We could quantify the differences obtained in abundance determination with different methods based on the same spectral lines, atmosphere models, stellar parameters and atomic data. The methods considered used state-of-the-art tools based on synthesis and equivalent widths. Essentially, we investigated to what extent the “default” parameters of each method (continuum placement, model interpolation, continuum opacities, etc.) affect the final abundances. We found that differences in continuum normalisation, even in very high SNR spectra, caused an impact of up to 0.6 dex in the retrieved abundances, while weak blends or interpolation of model atmospheres had insignificant effects on the final abundances.

§ FUTURE PROSPECTS

The GBS work continues to progress in several directions simultaneously. Below we list some of them.

* Hunting for new and better candidates. Several interferometric programmes are ongoing to enlarge the sample of stars and to improve the accuracy of measured angular diameters. With Gaia DR2 parallaxes and the new interferometric data, we will revise our GBS sample and provide a new set of parameters.

* Improving the spectral library and the line list for analysis. We have included spectra of the entire optical range using all setups of UVES archival data. This has been distributed to a small group of people to do abundance analysis (see below) but will soon be provided on our library webpage.

* Determination of abundances. We are analysing the new GBS library with the goal of providing reference abundances of light elements (Li, C, N, O, Al, Na) and heavy elements (tbd). For this purpose, extensive work is ongoing to improve the atomic line list outside the GES range, starting from the data available in the VALD database[<http://vald.astro.uu.se>]. Furthermore, significant effort had to be invested to select the best ranges to determine abundances of C, N and O from molecular bands.

The future of Galactic archaeology is moving towards analyses of very large combined datasets from future surveys providing parallaxes, proper motions and radial velocities (Gaia), colours (LSST), stellar parameters and chemical abundances (4MOST and WEAVE), and masses and ages (K2 and PLATO).
At the same time, the extremely high level of detail that we can detect in high-quality spectra of stars in our solar neighbourhood tells us how complex a star like the Sun or Arcturus can be, challenging the finest physical assumptions involved in the theory of stellar structure and evolution. As insisted upon during this conference by E. Griffin, a proper connection between the “wide sweeper” and the “ultimate refiner” is now more important than ever.

[Heiter et al.(2015)]paperI Heiter, U., Jofré, P., Gustafsson, B., et al. 2015, A&A, 582, A49
[Blanco-Cuaresma et al.(2014)]paperII Blanco-Cuaresma, S., Soubiran, C., Jofré, P., & Heiter, U. 2014, A&A, 566, A98
[Jofré et al.(2014)]paperIII Jofré, P., Heiter, U., Soubiran, C., et al. 2014, A&A, 564, A133
[Jofré et al.(2015)]paperIV Jofré, P., Heiter, U., Soubiran, C., et al. 2015, A&A, 582, A81
[Hawkins et al.(2016)]paperV Hawkins, K., Jofré, P., Heiter, U., et al. 2016, A&A, 592, A70
[Jofré et al.(2017)]paperVI Jofré, P., Heiter, U., Worley, C. C., et al. 2017, A&A, 601, A38
[Pancino et al.(2017)]pancino Pancino, E., Lardo, C., Altavilla, G., et al. 2017, A&A, 598, A5
[García Pérez et al.(2016)]garciaperez García Pérez, A. E., Allende Prieto, C., Holtzman, J. A., et al. 2016, AJ, 151, 144
[Smiljanic et al.(2014)]smiljanic Smiljanic, R., Korn, A. J., Bergemann, M., et al. 2014, A&A, 570, A122
[Becker (1993)]Becker:93 Becker, B. 1993, PhD Thesis, The Johns Hopkins University, Baltimore, MD
http://arxiv.org/abs/1709.09366v1
{ "authors": [ "Paula Jofre", "Ulrike Heiter", "Sven Buder" ], "categories": [ "astro-ph.SR", "astro-ph.GA" ], "primary_category": "astro-ph.SR", "published": "20170927073116", "title": "Gaia FGK benchmark stars: a bridge between spectroscopic surveys" }
Convergence analysis of upwind type schemes for the aggregation equation with pointy potential

F. Delarue, Laboratoire J.-A. Dieudonné, UMR CNRS 7351, Univ. Nice, Parc Valrose, 06108 Nice Cedex 02, France.
F. Lagoutière, Univ Lyon, Université Claude Bernard Lyon 1, CNRS UMR 5208, Institut Camille Jordan, 43 blvd. du 11 novembre 1918, F-69622 Villeurbanne cedex, France.
N. Vauchelet, Université Paris 13, Sorbonne Paris Cité, CNRS UMR 7539, Laboratoire Analyse Géométrie et Applications, 93430 Villetaneuse, France.

December 30, 2023

A numerical analysis of upwind type schemes for the nonlinear nonlocal aggregation equation is provided. In this approach, the aggregation equation is interpreted as a conservative transport equation driven by a nonlocal nonlinear velocity field with low regularity. In particular, we allow the interacting potential to be pointy, in which case the velocity field may have discontinuities. Based on recent results of existence and uniqueness of a Filippov flow for this type of equation, we study an upwind finite volume numerical scheme and we prove that it is convergent at order 1/2 in Wasserstein distance. The paper is illustrated by numerical simulations that indicate that this convergence order should be optimal.

Keywords: Aggregation equation, upwind finite volume scheme, convergence order, measure-valued solution.
2010 AMS subject classifications: 35B40, 35D30, 35L60, 35Q92, 49K20.

§ INTRODUCTION

This paper is devoted to the numerical approximation of measure valued solutions to the so-called aggregation equation in space dimension d. This equation reads

∂_t ρ = div((∇_x W ∗ ρ) ρ),   t > 0, x ∈ ℝ^d,

with the initial condition ρ(0,·) = ρ^ini. Here, W plays the role of an interaction potential whose gradient ∇_x W(x−y) measures the relative force exerted by a unit mass localized at a point y onto a unit mass located at a point x.

This system appears in many applications in physics and population dynamics. In the framework of granular media, equation (<ref>) is used to describe the large time dynamics of inhomogeneous kinetic models, see <cit.>. Models of crowd motion with a nonlinear term of the form ∇_x W∗ρ are also addressed in <cit.>. In population dynamics, (<ref>) provides a biologically meaningful description of aggregative phenomena. For instance, the description of the collective migration of cells by swarming leads to this kind of PDE with non-local interaction, see e.g. <cit.>. Another example is the modelling of bacterial chemotaxis. In this framework, the quantity S = W∗ρ is the chemoattractant concentration, which is a substance emitted by bacteria allowing them to interact with one another.
The dynamics can be macroscopically modelled by the Patlak-Keller-Segel system <cit.>. In the kinetic framework, the most frequently used model is the Othmer-Dunbar-Alt system, the hydrodynamic limit of which leads to the aggregation equation (<ref>), see <cit.>. In many of these examples, the potential W is usually mildly singular, i.e. W has a weak singularity at the origin. Because of this low regularity, smooth solutions of such systems may blow up in finite time, see e.g. <cit.>. In the latter case, finite time concentration may be regarded as a very simple mathematical way to account for aggregation of individuals, as opposed to diffusion.

Since finite time blow-up of smooth solutions may occur and since equation (<ref>) conserves mass, a natural framework to study the existence of global in time solutions is to work in the space of probability measures. In this regard, two strategies have been proposed in the literature. In <cit.>, the aggregation equation is seen as a gradient flow taking values in the Wasserstein space and minimizing the interaction energy. In <cit.>, this system is considered as a conservative transport equation with velocity field ∇_x W*ρ. Then a unique flow, say Z=(Z(t,·))_t ≥ 0, can be constructed, hence allowing to define the solution as a pushforward measure by the flow, namely ρ=(ρ(t)=Z(t,·)_#ρ^ini)_t ≥ 0. When the singularity of the potential is stronger than the mild form described above, such a construction has been achieved in the radially symmetric case in <cit.>, but uniqueness is then lacking. Actually, the assumptions on the potential W that are needed to ensure the well-posedness of the equation in the space of measure valued solutions require a certain convexity property of the potential that allows only for a mild singularity at the origin. More precisely, we assume that the interaction potential W : ℝ^d→ℝ satisfies the following properties:

(A0) W(x)=W(-x) and W(0)=0;
(A1) W is λ-convex for some λ∈ℝ, i.e. W(x)-(λ/2)|x|^2 is convex;
(A2) W∈ C^1(ℝ^d∖{0});
(A3) W is Lipschitz-continuous.

Such a potential will be referred to as a pointy potential. Typical examples are the fully attractive potentials W(x)=1-e^-|x|, which is -1-convex, and W(x) = |x|, which is 0-convex. Notice that the Lipschitz-continuity of the potential allows to bound the velocity field: there exists a nonnegative constant w_∞ such that, for all x≠ 0, |∇W(x)| ≤ w_∞. Observe also that (A3) forces λ in (A1) to be non-positive: otherwise W would be at least of quadratic growth, whilst (A3) forces it to be at most of linear growth. However, we shall sometimes discard (A3), when the initial datum is compactly supported. In this case, as W - λ|x|^2/2 is convex, it is locally Lipschitz-continuous, so that W is locally Lipschitz-continuous, which will be sufficient for compactly supported initial data. In that case it makes perfect sense to assume λ>0 in (A1). For the numerical analysis, we will assume in this case that the potential is radial, that is to say that W is a function of the sole scalar |x|: W(x) = 𝒲(|x|).

Although very accurate numerical schemes have been developed to study the blow-up profile for smooth solutions, see <cit.>, very few numerical schemes have been proposed to simulate the behavior of solutions to the aggregation equation after blow-up. The so-called sticky particles method was shown to be convergent in <cit.> and used to obtain qualitative properties of the solutions, such as the time of total collapse.
However, this method is not so practical to catch the behavior of the solutions after blow-up in dimension d larger than one. In dimension d=1, this question has been addressed in <cit.>. In higher dimension, particle methods have been recently proposed and studied in <cit.>, but only the convergence of smooth solutions, before the blow-up time, has been proved. Finite volume schemes have also been developed. In <cit.>, the authors propose a finite volume scheme to approximate the behavior of the solution to the aggregation equation (<ref>) after blow-up and prove that it is convergent. A finite volume method for a large class of PDEs, including in particular (<ref>), has also been proposed in <cit.>, but no convergence result has been given. Finally, a finite volume scheme of Lax-Friedrichs type for general measures as initial data has been introduced and investigated in <cit.>. Numerical simulations of solutions in dimension greater than one have been obtained, allowing to observe the behavior after blow-up. Moreover, convergence towards measure valued solutions has been proved. However, no estimate on the order of convergence has been established so far. In the current work, we provide a precise estimate of the order of convergence in Wasserstein distance for an upwind type scheme. This scheme is based on an idea introduced in <cit.> and used later on in <cit.>. It consists in discretizing properly the macroscopic velocity so that its product with the measure solution ρ is well-defined. In this paper, we introduce an upwind scheme for which this product is treated accurately, and we prove its convergence at order 1/2 in Wasserstein distance (the definition of which is recalled below).

For a given velocity field, the study of the order of convergence of the finite volume upwind scheme for the transport equation has received a lot of attention. This scheme is known to be first order convergent in L^∞ norm for any smooth initial data in C^2(ℝ^d) and for well-suited meshes, provided a standard stability condition (Courant-Friedrichs-Lewy condition) holds, see <cit.>. However, this order of convergence falls down to 1/2 in L^p norm when considering non-smooth initial data or more general meshes. This result has first been proved in the Cartesian framework by Kuznetsov in <cit.>. In <cit.>, a 1/2 order estimate in the L^∞([0,T],L^2(ℝ^d)) norm for H^2(ℝ^d) initial data has been established. Finally, in <cit.>, a 1/2 order estimate in L^1 has been proved for initial data in L^1(ℝ^d)∩ BV(ℝ^d), whilst, for Lipschitz-continuous initial data, an estimate of order 1/2-ε in L^∞, for any ε>0, has been obtained in <cit.>. We emphasize that the techniques used in <cit.> and <cit.> are totally different: in the former, the strategy of proof is based on entropy estimates, whereas in the latter, the proof relies on the construction and the analysis of stochastic characteristics for the numerical scheme. Finally, when the velocity field is only L^∞ and one-sided Lipschitz-continuous, solutions of the conservative transport equation are defined only in the sense of measures. In this regard, Poupaud and Rascle <cit.> have proved that solutions of the conservative transport equation could be defined as the pushforward of the initial condition by a flow of characteristics. A stability estimate for such solutions has been stated later in <cit.>. In dimension d=1, these solutions, as introduced in <cit.>, are equivalent to duality solutions, as defined in <cit.>. Numerical investigations may be found in <cit.>.
In such a framework with low regularity, the numerical analysis requires working with a sufficiently weak topology, which is precisely what has been done in <cit.>. Therein, the convergence at order 1/2 of a finite volume upwind scheme has been shown in Wasserstein distance by means of a stochastic characteristic method, as done in <cit.>. Observe also that, recently, such an approach has been successfully used in <cit.> for the numerical analysis of the upwind scheme for the transport equation with rough coefficients.

In the current work, we adapt the strategy initiated in <cit.> to prove the convergence at order 1/2 of an upwind scheme for the aggregation equation, for which the velocity field depends on the solution in a nonlinear way. We will strongly use the fact that, as mentioned above, measure valued solutions of (<ref>) are constructed by pushing forward the initial condition by an ℝ^d-valued flow. Noticeably, we entirely reformulate the stochastic approach used in <cit.> by means of analytical tools. In the end, our proof is completely deterministic. Although using analytical instead of probabilistic arguments does not change the final result (nor the general philosophy of the proof), it certainly makes the whole more accessible for the reader. As we pointed out, the key fact in <cit.> is to represent the scheme through a Markov chain; here, the main idea is to use the sole transition kernel of the latter Markov chain to couple the measure-valued numerical solution at two consecutive times (and hence to bypass any use of the Markov chain itself). We refer to Remark <ref> below for more details.

The outline of the paper is the following. In the next section, we introduce the notations and recall the theory for the existence of a measure solution to (<ref>). Then we present the upwind scheme and state the main result: the scheme is convergent at order 1/2. In the case when the potential W is strictly convex and radially symmetric and the initial condition has a bounded support, the rate is claimed to be uniform in time. Section <ref> is devoted to the properties of the scheme. The proof of the main result for a Cartesian grid mesh is presented in Section <ref>. In Section <ref>, we explain briefly how to extend our result to simplicial meshes. Finally, numerical illustrations are given in Section <ref>. In particular, we show that the order of convergence is optimal and we provide several numerical simulations in which we recover the behavior of the solutions after blow-up time.

§ NOTATIONS AND MAIN RESULTS

§.§ Notations

Throughout the paper, we will make use of the following notations. We denote by C_0(ℝ^d) the space of continuous functions from ℝ^d to ℝ that tend to 0 at ∞. We denote by ℳ_b(ℝ^d) the space of Borel signed measures whose total variation is finite. For ρ∈ℳ_b(ℝ^d), we call |ρ|(ℝ^d) its total variation. The space ℳ_b(ℝ^d) is equipped with the weak topology σ(ℳ_b(ℝ^d),C_0(ℝ^d)). For T>0, we let 𝒮_ℳ := C([0,T]; ℳ_b(ℝ^d)-σ(ℳ_b(ℝ^d),C_0(ℝ^d))). For ρ a measure in ℳ_b(ℝ^d) and Z a measurable map, we denote by Z_#ρ the pushforward measure of ρ by Z; it satisfies, for any continuous function ϕ,

∫_ℝ^d ϕ(x) Z_#ρ(dx) = ∫_ℝ^d ϕ(Z(x)) ρ(dx).

We call 𝒫(ℝ^d) the subset of ℳ_b(ℝ^d) of probability measures. We define the space of probability measures with finite second order moment by

𝒫_2(ℝ^d) := {μ∈𝒫(ℝ^d), ∫_ℝ^d |x|^2 μ(dx) < ∞}.

Here and in the following, |·| stands for the Euclidean norm, and ⟨·,·⟩ for the Euclidean inner product. The space 𝒫_2(ℝ^d) is equipped with the Wasserstein distance d_W defined by (see e.g.
<cit.>)

d_W(μ,ν) := inf_γ∈Γ(μ,ν) {∬_ℝ^d×ℝ^d |y-x|^2 γ(dx,dy)}^1/2,

where Γ(μ,ν) is the set of measures on ℝ^d×ℝ^d with marginals μ and ν, i.e.

Γ(μ,ν) = {γ∈𝒫_2(ℝ^d×ℝ^d); ∀ξ∈ C_0(ℝ^d), ∫ξ(y_1)γ(dy_1,dy_2) = ∫ξ(y_1)μ(dy_1), ∫ξ(y_2)γ(dy_1,dy_2) = ∫ξ(y_2)ν(dy_2)}.

By a minimization argument, we know that the infimum in the definition of d_W is actually a minimum. A measure that realizes the minimum in the definition (<ref>) of d_W is called an optimal plan, the set of which is denoted by Γ_0(μ,ν). Then, for all γ_0∈Γ_0(μ,ν), we have

d_W(μ,ν)^2 = ∬_ℝ^d×ℝ^d |y-x|^2 γ_0(dx,dy).

We will make use of the following property of the Wasserstein distance. Given μ∈𝒫_2(ℝ^d) and two μ-square integrable Borel measurable maps X,Y : ℝ^d→ℝ^d, we have the inequality

d_W(X_#μ,Y_#μ) ≤ ‖X-Y‖_L^2(μ).

It holds because π=(X,Y)_#μ ∈ Γ(X_#μ,Y_#μ) and ∬_ℝ^d×ℝ^d |x-y|^2 π(dx,dy) = ‖X-Y‖_L^2(μ)^2.

§.§ Existence of a unique flow

In this section, we recall the existence and uniqueness result for the aggregation equation (<ref>) obtained in <cit.> (and extend it a bit for non-globally Lipschitz-continuous potentials). For ρ∈ C([0,T];𝒫_2(ℝ^d)), we define the velocity field â_ρ by

â_ρ(t,x) := -∫_ℝ^d â(x-y) ρ(t,dy),

where we have used the notation â(x) := ∇W(x) for x≠0 and â(0) := 0. Due to the λ-convexity of W, see (A1), we deduce that, for all x, y in ℝ^d∖{0},

⟨∇W(x)-∇W(y), x-y⟩ ≥ λ|x-y|^2.

Moreover, since W is even, ∇W is odd and, by taking y=-x in (<ref>), we deduce that inequality (<ref>) still holds for â, even when x or y vanishes:

∀x,y∈ℝ^d, ⟨â(x)-â(y), x-y⟩ ≥ λ|x-y|^2.

This latter inequality provides a one-sided Lipschitz-continuity (OSL) estimate for the velocity field â_ρ defined in (<ref>), i.e. we have

∀x,y∈ℝ^d, t ≥ 0, ⟨â_ρ(t,x)-â_ρ(t,y), x-y⟩ ≤ -λ|x-y|^2.

We recall that, for a velocity field b∈ L^∞([0,+∞);L^∞(ℝ^d))^d satisfying an OSL estimate, i.e.

∀x,y∈ℝ^d, t ≥ 0, ⟨b(t,x)-b(t,y), x-y⟩ ≤ α(t)|x-y|^2, for some α∈ L^1_loc([0,+∞)),

it has been established in <cit.> that a Filippov characteristic flow can be defined. For s≥0 and x∈ℝ^d, a Filippov characteristic starting from x at time s is defined as a continuous function Z(·;s,x)∈ C([s,+∞);ℝ^d) such that (d/dt)Z(t;s,x) exists for a.e. t∈[s,+∞) and satisfies Z(s;s,x)=x together with the differential inclusion

(d/dt)Z(t;s,x) ∈ {Convess(â_ρ)(t,·)}(Z(t;s,x)), for a.e. t ≥ s.

In this definition, {Convess(â_ρ)(t,·)}(x) denotes the essential convex hull of the vector field â_ρ(t,·) at x. We briefly recall the definition for the sake of completeness (see <cit.> for more details). We denote by Conv(E) the classical convex hull of a set E ⊂ℝ^d, i.e. the smallest closed convex set containing E. Given the vector field â_ρ(t,·) : ℝ^d→ℝ^d, its essential convex hull at point x is defined as

{Convess(â_ρ)(t,·)}(x) := ⋂_r>0 ⋂_N∈𝒩_0 Conv[â_ρ(t, B(x,r)∖N)],

where 𝒩_0 is the collection of sets with zero Lebesgue measure. Moreover, we have the semi-group property: for any t,τ,s ∈[0,+∞) such that t ≥ τ ≥ s and x∈ℝ^d,

Z(t;s,x) = Z(τ;s,x) + ∫_τ^t â_ρ(σ,Z(σ;s,x)) dσ.

From now on, we will make use of the notation Z(t,x)=Z(t;0,x). Using this characteristic, it has been established in <cit.> that solutions to the conservative transport equation with a given bounded and one-sided Lipschitz-continuous velocity field can be defined as the pushforward of the initial condition by the Filippov characteristic flow. Based on this approach, existence and uniqueness of solutions to (<ref>) defined by a Filippov flow has been established in <cit.>. More precisely, the statement reads:

<cit.> (i) Let W satisfy assumptions (A0)–(A3) and let ρ^ini be given in 𝒫_2(ℝ^d).
Then, there exists a unique solution ρ∈ C([0,+∞);𝒫_2(ℝ^d)) satisfying, in the sense of distributions, the aggregation equation

∂_t ρ + div(â_ρ ρ) = 0, ρ(0,·)=ρ^ini,

where â_ρ is defined by (<ref>). This solution may be represented as the family of pushforward measures (ρ(t):=Z_ρ(t,·)_#ρ^ini)_t ≥ 0, where (Z_ρ(t,·))_t ≥ 0 is the unique Filippov characteristic flow associated to the velocity field â_ρ. Moreover, the flow Z_ρ is Lipschitz-continuous and we have

sup_x,y∈ℝ^d, x ≠ y |Z_ρ(t,x) - Z_ρ(t,y)| / |x-y| ≤ e^|λ|t, t ≥ 0.

At last, if ρ and ρ' are the respective solutions of (<ref>) with ρ^ini and ρ^ini,' as initial conditions in 𝒫_2(ℝ^d), then

d_W(ρ(t),ρ'(t)) ≤ e^|λ|t d_W(ρ^ini,ρ^ini,'), t ≥ 0.

(ii) Let W satisfy (A0)–(A2) and be radial, let λ be (strictly) positive and let ρ^ini be given in 𝒫_2(ℝ^d) with compact support included in B_∞(M_1,R), where M_1 is the first moment of ρ^ini (i.e. its center of mass) and B_∞(M_1,R) is the closed ball for the infinity norm on ℝ^d centered at M_1 with radius R. Then, there exists a unique solution ρ∈ C([0,+∞);𝒫_2(ℝ^d)) with support included in B_∞(M_1,R) satisfying, in the sense of distributions, the aggregation equation (<ref>), where â_ρ is defined by (<ref>). Moreover, the flow Z_ρ is Lipschitz-continuous and we have

sup_x,y∈ℝ^d, x ≠ y |Z_ρ(t,x) - Z_ρ(t,y)| / |x-y| ≤ e^-λt, t ≥ 0.

At last, if ρ^ini and ρ^ini,' have a bounded support, then

d_W(ρ(t),ρ'(t)) ≤ d_W(ρ^ini,ρ^ini,'), t ≥ 0.

The stability estimates in this result are Dobrushin type estimates in the quadratic Wasserstein distance, in the case where the kernel is not Lipschitz-continuous but only one-sided Lipschitz-continuous; see <cit.> and <cit.>. We mention that the solution, which is here represented by the Filippov characteristic flow, may also be constructed as a gradient flow solution in the Wasserstein space 𝒫_2(ℝ^d), see <cit.>. It is also important to remark that (<ref>) is true under the sole assumptions (A0)–(A2) whenever λ>0 (which is a mere consequence of (<ref>) and (<ref>) below). In that case, it ensures that B_2(M_1,R) (the closed Euclidean ball) is preserved by the flow without the assumption that W is radial. As a result, it may be tempting to address the analysis below without requiring the potential to be radial. Nevertheless, the problem is that the numerical scheme does not satisfy a similar property. Indeed, the Euclidean ball B_2(M_1,R) is not convex from a numerical point of view, that is to say, if we regard the mesh underpinning the scheme, then the union of the square cells whose center is included in B_2(M_1,R) is not convex. Due to this drawback, the flow associated to the scheme does not preserve the ball B_2(M_1,R). This is in contrast with Lemma <ref> below, which shows that, in the radial setting, the ball B_∞(M_1,R+Δx) is kept stable by the scheme, where Δx is the step of the spatial mesh. This latter fact is the reason why we here assume that the potential is radial.

For the first two statements of the Theorem, existence of a unique solution and Lipschitz-continuity of the flow, we refer to <cit.>. These statements remain true whenever the sole (A0)–(A2) hold true, W is radial, λ is (strictly) positive and the support of ρ^ini is bounded, provided that the notion of solution is limited to collections (ρ(t,·))_t ≥ 0 that have a compact support, uniformly in t in compact subsets.
Indeed, if we denote by M_1(t) the center of mass of the solution at time t, namely M_1(t) := ∫_ℝ^d x ρ(t,dx), then this center of mass is known to be preserved: M_1(t) = M_1(0) =: M_1 (see <cit.> or Lemma <ref> below for the discrete counterpart). Now, if λ≥0 and if W is radial, ∇W(x-y) is positively proportional to x-y, so that -∇W(x-y) is parallel to x-y and directed from x to y. Thus, if ρ(t) is zero outside the ball B_∞(M_1,R), then, for any x∈∂B_∞(M_1,R), the velocity â_ρ(t,x) is directed toward the interior of B_∞(M_1,R). This shows that B_∞(M_1,R) is preserved by the flow and guarantees that ρ(t) has its support included in B_∞(M_1,R) for any time t ≥ 0, if it is the case for t = 0. Given the fact that the support of ρ(t) remains bounded in B_∞(M_1,R), everything works as if W were globally Lipschitz-continuous. Existence and uniqueness of a solution to the aggregation equation can thus be proved by a straightforward localization argument. Indeed, observe from the very definition of the velocity â that the Lipschitz-continuity constant of W that is involved in the existence and uniqueness theory is the local one of W on the compact subset B_∞(M_1,R), provided that the support of ρ^ini is included in B_∞(M_1,R).

Now it only remains to prove the two inequalities regarding the Wasserstein distance between solutions starting from different data. Under assumptions (A0)–(A3) on the potential, this was proven in <cit.>, but with a constant 2|λ| instead of |λ| in the exponential (as in <cit.> and <cit.>, where the convolution operator is however replaced with a slightly more general integral operator); we thus provide here a proof of the present better estimate. We consider the two Filippov flows (Z_ρ(t,·))_t ≥ 0 and (Z_ρ'(t,·))_t ≥ 0 as defined in the statement of Theorem <ref>. We recall that

Z_ρ(t,·)_#ρ^ini = ρ(t,·), Z_ρ'(t,·)_#ρ^ini,' = ρ'(t,·), t ≥ 0.

To simplify, we just write Z(t,·) = Z_ρ(t,·) and Z'(t,·) = Z_ρ'(t,·). Then, for any x,y∈ℝ^d and t ≥ 0,

d/dt |Z(t,x) - Z'(t,y)|^2 = -2⟨Z(t,x) - Z'(t,y), ∫_ℝ^d ∇W(Z(t,x)-Z(t,x')) ρ^ini(dx') - ∫_ℝ^d ∇W(Z'(t,y)-Z'(t,y')) ρ^ini,'(dy')⟩.

Call π∈Γ_0(ρ^ini,ρ^ini,') an optimal plan between ρ^ini and ρ^ini,'. Then,

d/dt |Z(t,x) - Z'(t,y)|^2 = -2⟨Z(t,x) - Z'(t,y), ∬_ℝ^2d [∇W(Z(t,x)-Z(t,x')) - ∇W(Z'(t,y)-Z'(t,y'))] π(dx',dy')⟩.

Integrating in (x,y) with respect to π, we get

d/dt ∬_ℝ^2d |Z(t,x) - Z'(t,y)|^2 π(dx,dy) = -2∬_ℝ^2d ∬_ℝ^2d ⟨Z(t,x) - Z'(t,y), ∇W(Z(t,x)-Z(t,x')) - ∇W(Z'(t,y)-Z'(t,y'))⟩ π(dx,dy) π(dx',dy').

Thanks to the fact that ∇W is odd, see (A0), we can write, by a symmetry argument,

d/dt ∬_ℝ^2d |Z(t,x) - Z'(t,y)|^2 π(dx,dy) = -∬_ℝ^2d ∬_ℝ^2d ⟨Z(t,x) - Z'(t,y) - (Z(t,x') - Z'(t,y')), ∇W(Z(t,x)-Z(t,x')) - ∇W(Z'(t,y)-Z'(t,y'))⟩ π(dx,dy) π(dx',dy').

Using (<ref>), we obtain

d/dt ∬_ℝ^2d |Z(t,x) - Z'(t,y)|^2 π(dx,dy) ≤ -λ ∬_ℝ^2d ∬_ℝ^2d |Z(t,x) - Z'(t,y) - (Z(t,x') - Z'(t,y'))|^2 π(dx,dy) π(dx',dy').

Observe that the above right-hand side is equal to

∬_ℝ^2d ∬_ℝ^2d |Z(t,x) - Z'(t,y) - (Z(t,x') - Z'(t,y'))|^2 π(dx,dy) π(dx',dy') = 2∬_ℝ^2d |Z(t,x) - Z'(t,y)|^2 π(dx,dy) - 2|∬_ℝ^2d (Z(t,x) - Z'(t,y)) π(dx,dy)|^2.

1st case. If λ≤0, we deduce from (<ref>) and (<ref>) that

d/dt ∬_ℝ^2d |Z(t,x) - Z'(t,y)|^2 π(dx,dy) ≤ 2|λ| ∬_ℝ^2d |Z(t,x) - Z'(t,y)|^2 π(dx,dy),

which suffices to complete the proof of the first claim, by noting that

∬_ℝ^2d |Z(0,x) - Z'(0,y)|^2 π(dx,dy) = ∬_ℝ^2d |x-y|^2 π(dx,dy) = d_W(ρ^ini,ρ^ini,')^2,

and

∬_ℝ^2d |Z(t,x) - Z'(t,y)|^2 π(dx,dy) ≥ d_W(ρ(t),ρ'(t))^2,

see (<ref>).

2nd case.
If λ≥0, we just use the fact that the right-hand side in (<ref>) is non-positive. Proceeding as above, this completes the proof of the second claim.

§.§ Main result

The aim of this paper is to prove the convergence at order 1/2, in the distance d_W, of an upwind type scheme for the aggregation equation. The numerical scheme is defined as follows. We denote by Δt the time step and consider a Cartesian grid with step Δx_i in the ith direction, i=1,…,d; we then let Δx := max_i Δx_i. We also introduce the following notations. For a multi-index J=(J_1,…,J_d)∈ℤ^d, we call C_J := [(J_1-1/2)Δx_1,(J_1+1/2)Δx_1)×⋯×[(J_d-1/2)Δx_d,(J_d+1/2)Δx_d) the corresponding elementary cell. The center of the cell is denoted by x_J := (J_1Δx_1,…,J_dΔx_d). Also, we let e_i := (0,…,1,…,0) be the ith vector of the canonical basis, for i∈{1,…,d}, and we expand the velocity field in the canonical basis under the form a=(a_1,…,a_d). For a given nonnegative measure ρ^ini∈𝒫_2(ℝ^d), we put, for any J∈ℤ^d,

ρ_J^0 := ∫_C_J ρ^ini(dx) ≥ 0.

Since ρ^ini is a probability measure, the total mass of the system is ∑_J∈ℤ^d ρ_J^0 = 1. We then construct iteratively the collection ((ρ_J^n)_J∈ℤ^d)_n∈ℕ, each ρ^n_J being intended to provide an approximation of the mass of ρ(t^n) in the cell C_J, for J∈ℤ^d, where t^n := nΔt. Assuming that the approximating sequence (ρ_J^n)_J∈ℤ^d is already given at time t^n, we compute the approximation at time t^n+1 by:

ρ_J^n+1 := ρ_J^n - ∑_i=1^d (Δt/Δx_i) ((a_i,J^n)^+ ρ_J^n - (a_i,J+e_i^n)^- ρ_J+e_i^n - (a_i,J-e_i^n)^+ ρ_J-e_i^n + (a_i,J^n)^- ρ_J^n).

The notation (a)^+ = max{0,a} stands for the positive part of the real a, and (a)^- = max{0,-a} for its negative part. The macroscopic velocity is defined by

a_i,J^n := -∑_K∈ℤ^d ρ_K^n D_iW_J^K, D_iW_J^K := ∂_x_i W(x_J-x_K).

Since W is even, we also have D_iW_J^K = -D_iW_K^J.

The main result of this paper is the proof of the convergence at order 1/2 of the above upwind scheme. More precisely, the statement reads:

(i) Assume that W satisfies hypotheses (A0)–(A3) and that the so-called strict 1/2-CFL condition holds:

w_∞ ∑_i=1^d Δt/Δx_i < 1/2,

with w_∞ as in (<ref>). For ρ^ini∈𝒫_2(ℝ^d), let ρ=(ρ(t))_t ≥ 0 be the unique measure solution to the aggregation equation with initial data ρ^ini, as given by Theorem <ref>. Define ((ρ_J^n)_J∈ℤ^d)_n∈ℕ as in (<ref>)–(<ref>)–(<ref>) and let

ρ_Δx^n := ∑_J∈ℤ^d ρ_J^n δ_x_J, n∈ℕ.

Then, there exists a nonnegative constant C, only depending on λ, w_∞ and d, such that, for all n∈ℕ^*,

d_W(ρ(t^n),ρ_Δx^n) ≤ C e^|λ|(1+Δt)t^n (√(t^n Δx) + Δx).

(ii) Assume that W is radial and satisfies hypotheses (A0)–(A2) with λ (strictly) positive, that ρ^ini is compactly supported in B_∞(M_1,R), where M_1 is the center of mass of ρ^ini, and that the CFL condition (<ref>) holds, with w_∞ defined as

w_∞ = sup_x∈B_∞(0,2R+2Δx)∖{0} |∇W(x)|.

Assume also that Δt ≤ 1/2 and 2λΔt < 1. Then, there exists a nonnegative constant C, only depending on λ, w_∞, d and R, such that, for all n∈ℕ^*, (<ref>) is valid, as well as

d_W(ρ(t^n),ρ_Δx^n) ≤ C(√(Δx) + Δx),

which proves that the error can be uniformly controlled in time.

We stress the fact that, under the setting defined in (ii), (<ref>) is valid.
In small time, it provides a better estimate than (<ref>). As indicated in the statement, the constant C in (<ref>) may depend on the value of R in the assumption Supp(ρ^ini) ⊂ B_∞(M_1,R). We also point out that, although the computations below are performed for the sole upwind scheme, the first part of the statement, which holds true under the full set of hypotheses (A0)–(A3), can be straightforwardly adapted to other diffusive schemes, see for instance our previous article <cit.>. As for (ii), the statement remains true provided that the supports of the approximating measures (ρ^n)_n ≥ 0 remain bounded as n grows up. It must be stressed that there are some schemes for which the latter property fails (e.g. Lax-Friedrichs' scheme). Moreover, as already mentioned in the Introduction, the convergence rate is optimal; this latter fact will be illustrated by numerical examples in Section <ref>.

In one dimension, the scheme (<ref>) reads

ρ_i^n+1 = ρ_i^n - (Δt/Δx)((a_i^n)^+ ρ_i^n - (a_i+1^n)^- ρ_i+1^n - (a_i-1^n)^+ ρ_i-1^n + (a_i^n)^- ρ_i^n),

where the index i ranges over the integers. The scheme then has the following interpretation. Given ρ^n_Δx = ∑_j ρ_j^n δ_x_j, we construct the approximation at time t^n+1 by implementing the following two steps:

* The Dirac mass ρ_i^n located at position x_i moves with velocity a_i^n to the position x_i + a_i^n Δt. Under the CFL condition w_∞Δt ≤ Δx (which is obviously weaker than what we require in (<ref>)), the point x_i + a_i^nΔt belongs to the interval [x_i,x_i+1] if a_i^n ≥ 0, and to the interval [x_i-1,x_i] if a_i^n ≤ 0.

* Then the mass ρ_i^n is split into two parts: if a_i^n ≥ 0, a fraction a_i^n Δt/Δx of it is transported to the cell i+1, while the remaining fraction is left in cell i; if a_i^n ≤ 0, the same fraction |a_i^n|Δt/Δx of the mass is transported not to the cell i+1 but to the cell i-1.

This procedure may be regarded as a linear interpolation of the mass ρ_i^n between the points x_i and x_i+1 if a_i^n ≥ 0, and between the points x_i and x_i-1 if a_i^n ≤ 0 (a compact implementation of these two steps is sketched below). This interpretation holds only in the one-dimensional case. However, thanks to this interpretation, we can define a forward semi-Lagrangian scheme in any dimension on (unstructured) simplicial meshes, which is then different from (<ref>). Such a scheme is introduced in Section <ref>. Finally, we emphasize that this scheme differs from the standard finite volume upwind scheme, in which the velocity is computed at the interface, a_i+1/2^n. This subtlety is due to the particular structure of the equation, as the latter requires the product â_ρ ρ to be defined properly. A convenient way to make it proper is to compute, in the discretization, the velocity and the density at the same grid points. This fact has already been noticed in <cit.> and is also illustrated numerically in Section <ref>.
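To make the two steps above concrete, here is a minimal Python sketch of the one-dimensional scheme (<ref>), assuming a uniform grid and taking the 0-convex potential W(x)=|x| for the demonstration (so that W'(z)=sign(z), with the convention â(0)=0); the function names and the grid parameters are our own illustrative choices, not part of the paper.

```python
import numpy as np

def upwind_step_1d(rho, x, dW, dt, dx):
    """One step of the 1D upwind scheme: the velocity is computed at the nodes,
    a_i^n = -sum_k rho_k^n W'(x_i - x_k), and each mass is split upwind."""
    a = -(dW(x[:, None] - x[None, :]) @ rho)   # velocities at the nodes
    ap, am = np.maximum(a, 0.0), np.maximum(-a, 0.0)
    nu = dt / dx
    rho_new = rho * (1.0 - nu * (ap + am))     # fraction left in each cell
    rho_new[1:]  += nu * (ap * rho)[:-1]       # fraction sent to the right cell
    rho_new[:-1] += nu * (am * rho)[1:]        # fraction sent to the left cell
    return rho_new

# demo: two half masses attracted by W(x) = |x|, i.e. W'(z) = sign(z), W'(0) = 0
x = np.linspace(-1.0, 1.0, 201)
dx = x[1] - x[0]
rho = np.zeros_like(x)
rho[50] = rho[150] = 0.5        # Dirac masses at x = -0.5 and x = 0.5
dt = 0.4 * dx                   # strict 1/2-CFL: w_inf * dt/dx < 1/2 with w_inf = 1
for _ in range(100):
    rho = upwind_step_1d(rho, x, np.sign, dt, dx)
print(rho.sum(), rho @ x)       # total mass and center of mass are preserved
```

In agreement with the lemmas of the next section, this sketch preserves nonnegativity, the total mass and the center of mass, as long as no mass reaches the boundary of the computational window.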
§ NUMERICAL APPROXIMATION

§.§ Properties of the scheme

The following lemma explains why we called CFL the condition on the ratios (Δt/Δx_i)_i=1,…,d that we formulated in the statement of Theorem <ref>.

Assume that W satisfies hypotheses (A0)–(A3) and that the condition (<ref>) is in force. For ρ^ini∈𝒫_2(ℝ^d), define (ρ_J^0)_J∈ℤ^d by (<ref>). Then the sequences (ρ_J^n)_n∈ℕ,J∈ℤ^d and (a_i,J^n)_n∈ℕ,J∈ℤ^d, i=1,…,d, given by the scheme defined in (<ref>)–(<ref>), satisfy, for all J∈ℤ^d and n∈ℕ,

ρ_J^n ≥ 0, |a_i,J^n| ≤ w_∞, i=1,…,d,

and, for all n∈ℕ, ∑_J∈ℤ^d ρ_J^n = 1.

The total initial mass of the system is ∑_J ρ_J^0 = 1. By summing equation (<ref>) over J, we can show that the mass is conserved, namely, for all n∈ℕ^*, ∑_J ρ_J^n = ∑_J ρ_J^0 = 1. Also, we can rewrite equation (<ref>) as

ρ_J^n+1 = ρ_J^n [1 - ∑_i=1^d (Δt/Δx_i)|a_i,J^n|] + ∑_i=1^d ρ_J+e_i^n (Δt/Δx_i)(a_i,J+e_i^n)^- + ∑_i=1^d ρ_J-e_i^n (Δt/Δx_i)(a_i,J-e_i^n)^+.

We prove by induction on n that ρ_J^n ≥ 0 for all J∈ℤ^d and for all n∈ℕ. Indeed, if, for some n∈ℕ, it holds ρ_J^n ≥ 0 for all J∈ℤ^d, then, by definition (<ref>) and assumption (<ref>), we clearly have

|a_i,J^n| ≤ w_∞ ∑_K∈ℤ^d ρ_K^n = w_∞, i=1,…,d.

Then, assuming that the condition (<ref>) holds, we deduce that, in the relationship (<ref>), all the coefficients in front of ρ_J^n, ρ_J-e_i^n and ρ_J+e_i^n, i=1,…,d, are nonnegative. Thus, using the induction assumption, we deduce that ρ_J^n+1 ≥ 0 for all J∈ℤ^d.

In the following lemma, we collect two additional properties of the scheme: the conservation of the center of mass and the finiteness of the second order moment.

Let W satisfy (A0)–(A3) and let condition (<ref>) be in force. For ρ^ini∈𝒫_2(ℝ^d), define (ρ_J^0)_J∈ℤ^d by (<ref>). Then, the sequence (ρ_J^n)_J∈ℤ^d given by the numerical scheme (<ref>)–(<ref>) satisfies:

(i) Conservation of the center of mass. For all n∈ℕ^*,

∑_J∈ℤ^d x_J ρ_J^n = ∑_J∈ℤ^d x_J ρ_J^0.

We will denote the right-hand side (and thus the left-hand side as well) by M_1,Δx.

(ii) Bound on the second moment. There exists a constant C>0, independent of the parameters of the mesh, such that, for all n∈ℕ^*,

M_2,Δx^n := ∑_J∈ℤ^d |x_J|^2 ρ_J^n ≤ e^Ct^n (M_2,Δx^0 + C),

where we recall that t^n = nΔt.

We recall from Lemma <ref> that, for all n∈ℕ, the sequence (ρ_J^n)_J∈ℤ^d is nonnegative and that its sum is equal to 1.

(i) Using (<ref>) together with a discrete integration by parts, we have:

∑_J∈ℤ^d x_J ρ_J^n+1 = ∑_J∈ℤ^d x_J ρ_J^n - ∑_i=1^d (Δt/Δx_i) ∑_J∈ℤ^d ((a_i,J^n)^+ ρ_J^n (x_J-x_J+e_i) - (a_i,J^n)^- ρ_J^n (x_J-e_i-x_J)).

By definition of x_J, we deduce

∑_J∈ℤ^d x_J ρ_J^n+1 = ∑_J∈ℤ^d x_J ρ_J^n + Δt ∑_i=1^d ∑_J∈ℤ^d a_i,J^n ρ_J^n e_i.

By definition of the macroscopic velocity (<ref>) and by (<ref>), we also have

∑_J∈ℤ^d a_i,J^n ρ_J^n = -∑_J∈ℤ^d ∑_K∈ℤ^d D_iW_J^K ρ_K^n ρ_J^n = ∑_J∈ℤ^d ∑_K∈ℤ^d D_iW_K^J ρ_K^n ρ_J^n = ∑_J∈ℤ^d ∑_K∈ℤ^d D_iW_J^K ρ_J^n ρ_K^n,

where we used the antisymmetry relation (<ref>) and then exchanged the roles of J and K in the latter sum. The quantity thus equals its opposite, and we deduce that it vanishes. Thus,

∑_J∈ℤ^d x_J ρ_J^n+1 = ∑_J∈ℤ^d x_J ρ_J^n.

(ii) For the second moment, still using (<ref>) and a similar discrete integration by parts, we get

∑_J∈ℤ^d |x_J|^2 ρ_J^n+1 = ∑_J∈ℤ^d |x_J|^2 ρ_J^n - ∑_i=1^d (Δt/Δx_i) ∑_J∈ℤ^d [(a_i,J^n)^+ ρ_J^n (|x_J|^2-|x_J+e_i|^2) - (a_i,J^n)^- ρ_J^n (|x_J-e_i|^2-|x_J|^2)].

By definition of x_J, |x_J|^2-|x_J+e_i|^2 = -2J_i Δx_i^2 - Δx_i^2 and |x_J-e_i|^2-|x_J|^2 = -2J_i Δx_i^2 + Δx_i^2. Therefore, we get

∑_J∈ℤ^d |x_J|^2 ρ_J^n+1 = ∑_J∈ℤ^d |x_J|^2 ρ_J^n + 2Δt ∑_i=1^d ∑_J∈ℤ^d J_i Δx_i a_i,J^n ρ_J^n + Δt ∑_i=1^d Δx_i ∑_J∈ℤ^d ρ_J^n |a_i,J^n|.

As a consequence of Lemma <ref>, we have |a_i,J^n| ≤ w_∞. Using moreover the mass conservation, we deduce that the last term is bounded by w_∞ Δt ∑_i=1^d Δx_i.
Moreover, applying Young's inequality and using the mass conservation again, we get

|∑_J∈ℤ^d a_i,J^n ρ_J^n J_i Δx_i| ≤ (1/2)(w_∞^2 + ∑_J∈ℤ^d |J_iΔx_i|^2 ρ_J^n) ≤ (1/2)(w_∞^2 + ∑_J∈ℤ^d ρ_J^n |x_J|^2).

We then deduce that there exists a nonnegative constant C, only depending on d and w_∞, such that

∑_J∈ℤ^d |x_J|^2 ρ_J^n+1 ≤ (1+CΔt) ∑_J∈ℤ^d |x_J|^2 ρ_J^n + CΔt(∑_i=1^d Δx_i + 1).

We conclude the proof using a discrete version of Gronwall's lemma.

In the case when W is radial and satisfies (A0)–(A2), λ is (strictly) positive and ρ^ini has a bounded support, Lemmas <ref> and <ref> become:

Assume that W is radial and satisfies (A0)–(A2), that λ is (strictly) positive and that ρ^ini has a bounded support. Then the conclusions of Lemmas <ref> and <ref> remain true, provided that w_∞ is defined as in (<ref>). Moreover, for any R ≥ 0 such that Supp(ρ^ini) ⊂ B_∞(M_1,R), it holds, for any n∈ℕ,

Supp(ρ^n_Δx) ⊂ B_∞(M_1,Δx, R+Δx),

where M_1,Δx is the discrete center of mass from Lemma <ref>; that is, for all J∈ℤ^d, x_J ∉ B_∞(M_1,Δx, R+Δx) ⇒ ρ^n_J = 0.

The meaning of Lemma <ref> is pretty clear: for R as in the statement, the mass, as defined by the numerical scheme, cannot leave the ball B_∞(M_1,Δx, R+Δx). We here recover the same idea as in Theorem <ref>.

As long as we can prove that the mass, as defined by the numerical scheme, cannot leave the ball B_∞(M_1,Δx, R+Δx), the proof is similar to that of Lemmas <ref> and <ref>. So, we focus on the second part of the statement. We first recall that ρ^0_J = ∫_C_J ρ^ini(dx), for J∈ℤ^d. Hence, since each coordinate of M_1,Δx - M_1 is at most Δx/2 in absolute value, if x_J ∉ B_∞(M_1,Δx, R+Δx), we have x_J ∉ B_∞(M_1, R+Δx/2), and then C_J ∩ B_∞(M_1,R) = ∅ and thus ρ^0_J = 0. Below, we prove by induction that the same holds true for any n∈ℕ. To do so, we assume that there exists an integer n∈ℕ such that, for all J∈ℤ^d, ρ^n_J = 0 if x_J ∉ B_∞(M_1,Δx, R+Δx). The goal is then to prove that, for any J satisfying (<ref>), ρ^n+1_J = 0. By (<ref>), it suffices to prove that, for any coordinate i∈{1,…,d} and any J as in (<ref>),

ρ^n_J+e_i (a_i,J+e_i^n)^- = 0, and ρ^n_J-e_i (a_i,J-e_i^n)^+ = 0.

Without any loss of generality, we can assume that there exists a coordinate i_0∈{1,…,d} such that (x_J)_i_0 > R+Δx+(M_1,Δx)_i_0 (otherwise (x_J)_i_0 < -R-Δx+(M_1,Δx)_i_0 and the argument below is the same). Hence, (x_J+e_i_0)_i_0 > R+Δx+(M_1,Δx)_i_0 and, by the induction hypothesis, ρ^n_J+e_i_0 = 0, which proves the first equality in (<ref>) when i=i_0. In order to prove the second equality when i=i_0, we notice from (<ref>) that

a_i_0,J-e_i_0^n = -∑_K∈ℤ^d ρ_K^n ∂_x_i_0 W(x_J-e_i_0 - x_K) = -∑_K: (x_K)_i_0 ≤ R+Δx+(M_1,Δx)_i_0 ρ_K^n ∂_x_i_0 W(x_J-e_i_0 - x_K) = -∑_K: (x_K)_i_0 < (x_J)_i_0 ρ_K^n ∂_x_i_0 W(x_J-e_i_0 - x_K) = -∑_K: (x_K)_i_0 ≤ (x_J-e_i_0)_i_0 ρ_K^n ∂_x_i_0 W(x_J-e_i_0 - x_K).

As W is radial and λ>0, ∇W(x-y) is positively proportional to x-y. Hence, ∂_x_i_0 W(x_J-e_i_0 - x_K) ≥ 0 when (x_K)_i_0 ≤ (x_J-e_i_0)_i_0. Therefore, (a_i_0,J-e_i_0^n)^+ = 0, which proves the second equality in (<ref>).

It remains to prove (<ref>) for i ≠ i_0. Obviously, (x_J-e_i)_i_0 = (x_J+e_i)_i_0 = (x_J)_i_0 > R+Δx+(M_1,Δx)_i_0. By the induction hypothesis, ρ^n_J-e_i = ρ^n_J+e_i = 0, which completes the proof.

Lemma <ref> is the main rationale for requiring W to be radial.
Indeed, the counter-example below shows that the growth of the support of ρ^ini can hardly be controlled whenever λ>0 and W is just assumed to satisfy (A0)–(A2). Consider for instance the following potential in dimension d=2:

W(x_1,x_2) = (1/2)(x_1 - q x_2)^2 + (q^2/2) x_2^2, (x_1,x_2)∈ℝ^2,

where q is a free integer whose value will be fixed later on. It is readily checked that

∂_x_1 W(x_1,x_2) = x_1 - q x_2, ∂_x_2 W(x_1,x_2) = q(q x_2 - x_1) + q^2 x_2.

Standard computations show that the smallest eigenvalue of the Hessian matrix (which is independent of (x_1,x_2)) is

((1+2q^2) - 2q^2 √(1+1/(4q^4)))/2, which converges to 1/2 as q→∞,

so that W is λ-convex with λ converging to 1/2 as q tends to ∞. Take now a centered probability measure ρ and compute the first coordinate of the velocity field â_ρ. By centering,

(â_ρ)_1(x_1,x_2) = q x_2 - x_1.

In particular, if x_2=1, then (â_ρ)_1(x_1,1) = q - x_1, which is non-negative as long as x_1 < q. Therefore, if the numerical scheme is initialized with some centered ρ^0_Δx supported by the unit square [-1,1]^2, it holds

(â_ρ^0_Δx)_1(1,1) > 0, if q > 1.

Hence, provided that condition (<ref>) holds true, ρ^1_Δx charges the point (1+Δx,1). Since the numerical scheme preserves the centering, we also have

(â_ρ^1_Δx)_1(1+Δx,1) > 0, if q > 1+Δx,

and then ρ^2_Δx also charges the point (1+2Δx,1), and so on up until (Δx ⌊q/Δx⌋, 1). This says that there is no way to control the growth of the support of the numerical solution in terms of the sole lower bound of the Hessian matrix. Somehow, the growth of ∇W plays a key role. This is in stark contrast with the support of the real solution, which may be bounded independently of q, as emphasized in the proof of Theorem <ref>. A possible way to overcome the fact that the numerical scheme does not preserve any ball containing the initial support, in the general case when W is not radial, would be to truncate the scheme. We feel it more reasonable not to address this question in this paper, as it would require to revisit in depth the arguments used to tackle the case λ≤0.

§.§ Comparison with a potential non-increasing scheme

It must be stressed that the scheme could be defined differently in order to force the potential (or total energy, ∬_ℝ^d×ℝ^d W(x-y) ρ(dx)ρ(dy)) to be non-increasing. Basically, this requires the velocity a to be defined as a discrete derivative. For simplicity, we provide the construction of the scheme in dimension 1 only. For a probability measure ϱ∈𝒫(ℝ) and a cell I∈ℤ, we consider the following two discrete convolutions of finite differences:

(1/Δx) ∑_J∈ℤ [(W(Δx(I+1-J)) - W(Δx(I-J))) ϱ_J] = [∫_ℝ (W(x+Δx-y) - W(x-y))/Δx ϱ_Δx(dy)]_|x=IΔx

and

(1/Δx) ∑_J∈ℤ [(W(Δx(I-1-J)) - W(Δx(I-J))) ϱ_J] = [∫_ℝ (W(x-Δx-y) - W(x-y))/Δx ϱ_Δx(dy)]_|x=IΔx,

where, as before, ϱ_Δx is obtained by pushing forward ϱ by the mapping y ↦ Δx y. The two terms above define velocities at the interfaces of the cell I. Namely, we call the first term -a_I+1/2 and the second one a_I-1/2. Of course, the sign - in the former term guarantees the consistency of the notation, that is, a_(I+1)-1/2 is equal to a_I+1/2. Following (<ref>), the scheme is defined by:

ρ^n+1_J := ρ^n_J - (Δt/Δx)((a^n_J+1/2)^+ ρ_J^n - (a^n_J+1/2)^- ρ_J+1^n + (a^n_J-1/2)^- ρ_J^n - (a^n_J-1/2)^+ ρ_J-1^n),

for n∈ℕ and J∈ℤ (a one-dimensional sketch of this variant is given below). It is shown in <cit.> that the potential is non-increasing for the semi-discretized version of this scheme, which is to say that, up to a remainder of order 2 in Δt (the value of Δx being fixed), the potential of the fully discretized scheme does not increase from one step to another.
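As an illustration, the following sketch (our own, in dimension 1) implements one step of this variant, with the interface velocities computed as the discrete derivatives above; we write the update in conservative flux form, which is algebraically equivalent to (<ref>) away from the boundary of the computational window, where we assume zero flux.

```python
import numpy as np

def interface_step_1d(rho, W, dt, dx):
    """One step of the variant scheme: the velocity at the interface j+1/2 is the
    discrete derivative a_{j+1/2} = -(1/dx) sum_k [W(dx(j+1-k)) - W(dx(j-k))] rho_k,
    and the update uses upwind fluxes through the interfaces."""
    j = np.arange(rho.size)
    diff = (W(dx * (j[:, None] + 1 - j[None, :]))
            - W(dx * (j[:, None] - j[None, :]))) / dx
    a_half = -(diff @ rho)                  # a_half[j] stands for a_{j+1/2}
    ap, am = np.maximum(a_half, 0.0), np.maximum(-a_half, 0.0)
    rho_right = np.append(rho[1:], 0.0)     # rho_{j+1}, zero past the boundary
    flux = ap * rho - am * rho_right        # upwind flux through interface j+1/2
    flux = np.concatenate(([0.0], flux))    # zero flux at the left boundary
    return rho - (dt / dx) * (flux[1:] - flux[:-1])
```

Written in this flux form, conservation of the total mass is immediate; the possible failure of this variant to capture the dynamics for some pointy potentials is illustrated in Section <ref>.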
The proof of this non-increase property follows from a direct expansion of the quantity

(1/2) ∬_ℝ^d×ℝ^d W(x-y) ρ_Δx^n+1(dx) ρ_Δx^n+1(dy)

by using the updating rule for ρ^n+1_J in terms of ρ^n_J, ρ^n_J-1 and ρ^n_J+1. The numerical scheme investigated in this paper does not satisfy the same property. Indeed, we provide a counter-example, which shows that the potential may increase when W is convex, as a consequence of the numerical diffusion. However, the same example, but in dimension 1, shows that the scheme (<ref>) may not be convergent for certain forms of potential for which Theorem <ref> applies, see Subsection <ref>.

Choose d=2, W(x)=|x|, and take Δx_1 = Δx_2 = 1. Let the initial condition of the scheme, which we just denote by ρ^0, charge the points 0=(0,0), e_1=(1,0) and e_2=(0,1) with 1-p, p/2 and p/2 as respective weights, where p∈(0,1). Then, denoting by ρ^1 the distribution at time 1 obtained by implementing the upwind scheme, it holds that:

∬_ℝ^2×ℝ^2 |x-y| ρ^1(dx)ρ^1(dy) = ∬_ℝ^2×ℝ^2 |x-y| ρ^0(dx)ρ^0(dy) + (√2-1) p^2 (2p-1) Δt + O(Δt^2),

where the Landau symbol O(·) may depend upon p. Choosing p>1/2 in (<ref>), we see that the potential may increase at the same rate as the time step.

We first compute the potential at time 0. To do so, we compute ∫_ℝ^2 |x-y| ρ^0(dy), for x∈{0,e_1,e_2}:

∫_ℝ^2 |y| ρ^0(dy) = p, ∫_ℝ^2 |e_1-y| ρ^0(dy) = ∫_ℝ^2 |e_2-y| ρ^0(dy) = (1-p) + p/√2,

so that

∬_ℝ^2×ℝ^2 |x-y| ρ^0(dx)ρ^0(dy) = 2(1-p)p + p^2/√2.

In order to compute the potential at time 1, we compute the velocity at each of the above points. Observing that the velocity at point x is given by the formula

a_i^0(x) = ∫_ℝ^2 (y_i-x_i)/|y-x| ρ^0(dy), i=1,2,

with the convention 0/0 = 0, we get:

a_1^0(0,0) = p/2, a_2^0(0,0) = p/2,
a_1^0(1,0) = -(1-p) - p/(2√2), a_2^0(1,0) = p/(2√2),
a_1^0(0,1) = p/(2√2), a_2^0(0,1) = -(1-p) - p/(2√2).

We then compute the new masses at time 1. There is one additional point which is charged: e_1+e_2=(1,1). We have:

ρ^1(0) = (1-p) + (p^2/(2√2))Δt, ρ^1(e_1) = ρ^1(e_2) = p/2 - (p^2/(2√2))Δt, ρ^1(e_1+e_2) = (p^2/(2√2))Δt.

We now have all the required data to compute the potential at time 1:

∫_ℝ^2 |y| ρ^1(dy) = p - (p^2/√2)Δt + (p^2/2)Δt,
∫_ℝ^2 |e_1-y| ρ^1(dy) = ∫_ℝ^2 |e_2-y| ρ^1(dy) = (1-p) + p/√2 + (p^2/√2)Δt - (p^2/2)Δt,
∫_ℝ^2 |e_1+e_2-y| ρ^1(dy) = (1-p)√2 + p + (p^2/2)Δt - (p^2/√2)Δt.

Finally, the potential at time 1 is given by:

∬_ℝ^2×ℝ^2 |x-y| ρ^1(dx)ρ^1(dy) = ((1-p) + (p^2/(2√2))Δt)(p - (p^2/√2)Δt + (p^2/2)Δt) + (p - (p^2/√2)Δt)((1-p) + p/√2 + (p^2/√2)Δt - (p^2/2)Δt) + (p^2/(2√2))Δt((1-p)√2 + p + (p^2/2)Δt - (p^2/√2)Δt).

We expand the above right-hand side in powers of Δt. The zero-order term is exactly equal to ∬_ℝ^2×ℝ^2 |x-y| ρ^0(dx)ρ^0(dy). So, we just compute the term in Δt. It is equal to

(1-√2)(1-p)p^2 + (√2-1)p^3 = (√2-1)p^2(2p-1),

which completes the proof.

§ ORDER OF CONVERGENCE

This section is devoted to the proof of Theorem <ref>.

§.§ Preliminaries

Before presenting the proof, we introduce some notations and establish some useful properties. We first define the following interpolation weights: for J∈ℤ^d and y∈ℝ^d, we let

α_J(y) = 1 - ∑_i=1^d |⟨y-x_J,e_i⟩|/Δx_i, when y∈C_J,
α_J(y) = (1/Δx_i)(⟨y-x_J-e_i,e_i⟩)^+, when y∈C_J-e_i, for i=1,…,d,
α_J(y) = (1/Δx_i)(⟨y-x_J+e_i,e_i⟩)^-, when y∈C_J+e_i, for i=1,…,d,
α_J(y) = 0, otherwise.
The terminology interpolation weights is justified by the following straightforward observation. Given a collection of reals (h_J)_J∈ℤ^d indexed by the cells of the mesh, which we may regard as a real-valued function h : x_J ↦ h_J defined at the nodes of the mesh, we may define an interpolation of h=(h_J)_J∈ℤ^d by letting

ℐ(h)(y) = ∑_J∈ℤ^d h_J α_J(y), y∈ℝ^d.

Obviously, the sum in the right-hand side makes sense since only a finite number of weights are non-zero for a given value of y. Clearly, the functional ℐ is an interpolation operator. As explained below, ℐ makes the connection between the analysis we perform in this paper and the one we performed in our previous work <cit.>.

Several crucial facts must be noticed. The first one is that, contrary to what one could guess at first sight, the weights are not necessarily non-negative. For a given J∈ℤ^d, take for instance y=(y_i=(J_i-1/2)Δx_i)_i=1,…,d ∈ C_J. Then α_J(y) = 1-d/2, which is obviously negative if d≥3. However, the second point is that, for useful values of y, the weights are indeed non-negative provided that the CFL condition (<ref>) is in force. For a given J∈ℤ^d, call indeed U_J the subset of C_J of so-called useful values, as given by

U_J = {y∈ℝ^d : |⟨y-x_J,e_i⟩| ≤ w_∞Δt, i=1,…,d}.

Then, for any J,L∈ℤ^d and any y∈U_L, α_J(y) is non-negative, which is a direct consequence of the CFL condition (<ref>). In fact, the CFL condition (<ref>) says more, and this is the rationale for the additional factor 1/2 in (<ref>): U_J is included in C_J. Of course, the consequence is that, under the CFL condition (<ref>), we have, for any J∈ℤ^d, x_J + a_J^nΔt ∈ C_J, where a_J^n is the d-dimensional vector with entries (a_i,J^n)_i=1,…,d (indeed |a_i,J^n|Δt ≤ w_∞Δt < Δx_i/2).

Another key fact is that the definition of α_J(y) in (<ref>) is closely related to the definition of the numerical scheme (<ref>). Indeed, we have the following formula, for any J,L∈ℤ^d:

α_J(x_L+Δt a^n_L) = 1 - ∑_i=1^d |a_i,J^n|Δt/Δx_i, when L=J,
α_J(x_L+Δt a^n_L) = (Δt/Δx_i)(a_i,J-e_i^n)^+, when L=J-e_i, for i=1,…,d,
α_J(x_L+Δt a^n_L) = (Δt/Δx_i)(a_i,J+e_i^n)^-, when L=J+e_i, for i=1,…,d,
α_J(x_L+Δt a^n_L) = 0, otherwise.

In particular, we may rewrite (<ref>) as

∀J∈ℤ^d, ρ^n+1_J = ∑_L∈ℤ^d ρ^n_L α_J(x_L+Δt a^n_L),

which is the core of our analysis below. In this regard, the following lemma gathers some useful properties.

Let (α_L(y))_L∈ℤ^d, y∈ℝ^d be defined as in (<ref>). Then, for any y∈ℝ^d, we have

∑_L∈ℤ^d α_L(y) = 1 and ∑_L∈ℤ^d x_L α_L(y) = y.

There exists a unique J∈ℤ^d such that y∈C_J. Then, we compute

∑_L∈ℤ^d α_L(y) = α_J(y) + ∑_i=1^d (α_J+e_i(y)+α_J-e_i(y)) = 1 - ∑_i=1^d |⟨y-x_J,e_i⟩|/Δx_i + ∑_i=1^d (1/Δx_i)[(⟨y-x_J,e_i⟩)^+ + (⟨y-x_J,e_i⟩)^-] = 1.

Then, using the fact that x_J+e_i - x_J = Δx_i e_i, for i=1,…,d, we have

∑_L∈ℤ^d x_L α_L(y) = x_J α_J(y) + ∑_i=1^d (x_J+e_i α_J+e_i(y) + x_J-e_i α_J-e_i(y)) = x_J + ∑_i=1^d ((1/Δx_i)(⟨y-x_J,e_i⟩)^+ Δx_i e_i - (1/Δx_i)(⟨y-x_J,e_i⟩)^- Δx_i e_i) = x_J + ∑_i=1^d ⟨y-x_J,e_i⟩ e_i = y,

which completes the proof.
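As a quick sanity check of Lemma <ref>, the following sketch (our own) computes the nonzero weights (α_J(y))_J of (<ref>) on a Cartesian grid and verifies the two identities numerically; note that the identities hold for every y, even when some weights are negative.

```python
import numpy as np

def interpolation_weights(y, dx):
    """Nonzero weights alpha_J(y) on a Cartesian grid with steps dx = (dx_1,...,dx_d).
    Returns pairs (multi-index J, alpha_J(y)); the node of cell J is x_J = J*dx."""
    y, dx = np.asarray(y, float), np.asarray(dx, float)
    K = np.rint(y / dx).astype(int)        # multi-index of the cell C_K containing y
    r = (y - K * dx) / dx                  # relative offsets, in [-1/2, 1/2] per axis
    weights = [(tuple(K), 1.0 - np.sum(np.abs(r)))]
    for i, ri in enumerate(r):
        e = np.zeros(K.size, dtype=int); e[i] = 1
        weights.append((tuple(K + e), max(ri, 0.0)))
        weights.append((tuple(K - e), max(-ri, 0.0)))
    return weights

w = interpolation_weights(y=[0.031, -0.047], dx=[0.1, 0.2])
print(sum(a for _, a in w))                              # = 1
print(sum(a * np.array(J) * [0.1, 0.2] for J, a in w))   # = y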
Lemma <ref> prompts us to draw a comparison with our previous paper <cit.>. For a given y∈ℝ^d in the set of useful values U := ∪_J∈ℤ^d U_J, namely y∈U_J for some J∈ℤ^d, the collection of weights (α_L(y))_L∈ℤ^d forms a probability measure, as the weights are non-negative and their sum is 1! In particular, ℐ(h)(y) in (<ref>), for y∈U, may be interpreted as an expectation. Using the same terminology as in <cit.> (which is in fact the terminology of the theory of Markov chains), those weights should be regarded as transition probabilities: for a given y in the set of useful values, α_L(y) reads as the probability of jumping from a certain state depending on the sole value of y to the node x_L. Of course, the interpretation of the so-called certain state depending on the sole value of y is better understood from (<ref>). In (<ref>), if we fix a cell L∈ℤ^d (or equivalently a node x_L), then α_J(x_L+Δt a^n_L) should read as the probability of passing from the node x_L to the node x_J (or from the cell L to the cell J) at the nth step of a (time inhomogeneous) Markov chain having the collection of nodes (or of cells) as state space. In this regard, (<ref>) is nothing but the Kolmogorov equation for the corresponding Markov chain, as (ρ^n_J)_J∈ℤ^d can be interpreted as the law at time n of the Markov chain driven by the latter transition probabilities. The reader can easily check that the so-called stochastic characteristic used in <cit.> is in fact this Markov chain. Below, we do not make use of the Markov chain explicitly. Still, we use the weights (α_J(y))_J∈ℤ^d, y∈ℝ^d, to construct a coupling between the two measures ρ^n_Δx and ρ^n+1_Δx, that is, to construct a specific element of Γ(ρ^n_Δx,ρ^n+1_Δx). In <cit.>, this coupling does not explicitly show up, but it is in fact implicitly used, as it coincides with the joint law of two consecutive states of the aforementioned Markov chain. In a nutshell, the reader can reformulate the whole analysis below in a probabilistic fashion. The only (conceptual) difficulty to do so is that, in contrast with <cit.>, the Markov chain is here nonlinear: as a^n in (<ref>) depends on ρ^n, the transition probabilities of the Markov chain do depend upon the marginal law of the Markov chain itself, which fact gives rise to a so-called nonlinear Markov chain!

§.§ Proof of Theorem <ref>

1st step. We first consider the case where the initial datum is given by ρ^ini := ρ_Δx^0 = ∑_J∈ℤ^d ρ_J^0 δ_x_J, where we recall that ρ_J^0 is defined in (<ref>). For n∈ℕ^*, let us define

D_n := d_W(ρ(t^n),ρ_Δx^n).

Clearly, with our choice of initial datum, we have D_0 = 0. Let γ be an optimal plan in Γ_0(ρ(t^n),ρ_Δx^n); we have

D_n = (∬_ℝ^d×ℝ^d |x-y|^2 γ(dx,dy))^1/2.

Let us introduce a^n_Δx, the reconstruction of the velocity that is piecewise affine in each direction and satisfies a^n_Δx(x_J) = a_J^n for all J∈ℤ^d. Denote also by Z := Z_ρ the flow given by Theorem <ref>, when ρ^ini is prescribed as above. Recalling the definition of α_J(y) from (<ref>), we then consider a new measure γ', defined as the image of γ by the kernel 𝒦 that associates with a point (x,y)∈ℝ^d×ℝ^d the point (Z(t^n+1;t^n,x),x_L) with measure α_L(y+Δt a^n_Δx(y)), namely, for any two Borel subsets A and B of ℝ^d,

𝒦((x,y),A×B) = 1_A(Z(t^n+1;t^n,x)) ∑_L∈ℤ^d α_L(y+Δt a^n_Δx(y)) 1_B(x_L) = ∬_ℝ^d×ℝ^d 1_A×B(x',y') [δ_Z(t^n+1;t^n,x) ⊗ (∑_L∈ℤ^d α_L(y+Δt a^n_Δx(y)) δ_x_L)](dx',dy'),

where δ_z denotes the Dirac mass at point z, and then

γ'(A×B) = ∬_ℝ^d×ℝ^d 𝒦((x,y),A×B) γ(dx,dy).

Equivalently, for any bounded Borel-measurable function θ : ℝ^d×ℝ^d → ℝ,

∬_ℝ^d×ℝ^d θ(x,y) γ'(dx,dy) = ∬_ℝ^d×ℝ^d [∑_L∈ℤ^d θ(Z(t^n+1;t^n,x),x_L) α_L(y+Δt a^n_Δx(y))] γ(dx,dy).

Then we have γ'∈Γ(ρ(t^n+1),ρ_Δx^n+1).
Indeed, for any bounded Borel-measurable function θ_1 : ℝ^d→ℝ, we have, from (<ref>) and Lemma <ref>,

∬_ℝ^d×ℝ^d θ_1(x) γ'(dx,dy) = ∬_ℝ^d×ℝ^d [∑_L∈ℤ^d θ_1(Z(t^n+1;t^n,x)) α_L(y+Δt a^n_Δx(y))] γ(dx,dy) = ∬_ℝ^d×ℝ^d θ_1(Z(t^n+1;t^n,x)) γ(dx,dy) = ∫_ℝ^d θ_1(Z(t^n+1;t^n,x)) ρ(t^n,dx) = ∫_ℝ^d θ_1(x) ρ(t^n+1,dx),

where we used Theorem <ref> and where ρ(t^n,dx) is a shorter notation for ρ(t^n)(dx), and similarly for ρ(t^n+1,dx). Similarly, for any bounded Borel-measurable function θ_2 : ℝ^d→ℝ,

∬_ℝ^d×ℝ^d θ_2(y) γ'(dx,dy) = ∬_ℝ^d×ℝ^d [∑_L∈ℤ^d θ_2(x_L) α_L(y+Δt a^n_Δx(y))] γ(dx,dy) = ∑_J∈ℤ^d ∑_L∈ℤ^d θ_2(x_L) α_L(x_J+Δt a_J^n) ρ_J^n = ∑_L∈ℤ^d θ_2(x_L) ρ_L^n+1 = ∫_ℝ^d θ_2(y) ρ^n+1_Δx(dy),

where we used (<ref>). In particular, we deduce

D_n+1^2 ≤ ∬_ℝ^d×ℝ^d |x-y|^2 γ'(dx,dy).

Using the definition of γ' given in (<ref>), we get

D_n+1^2 ≤ ∬_ℝ^d×ℝ^d ∑_L∈ℤ^d |Z(t^n+1;t^n,x)-x_L|^2 α_L(y+Δt a^n_Δx(y)) γ(dx,dy).

Using both equalities of Lemma <ref>, we compute (the probabilistic reader will easily recognize the standard computation of the L^2 norm of a random variable in terms of its variance and its expectation, which indeed plays, but under a conditional form, a key role in <cit.>):

∑_L∈ℤ^d |Z(t^n+1;t^n,x)-x_L|^2 α_L(y+Δt a^n_Δx(y)) = ∑_L∈ℤ^d |(Z(t^n+1;t^n,x) - (y+Δt a^n_Δx(y))) - (x_L - (y+Δt a^n_Δx(y)))|^2 α_L(y+Δt a^n_Δx(y)) = |Z(t^n+1;t^n,x) - y - Δt a^n_Δx(y)|^2 + ∑_L∈ℤ^d |x_L - y - Δt a^n_Δx(y)|^2 α_L(y+Δt a^n_Δx(y)) - 2⟨Z(t^n+1;t^n,x) - y - Δt a^n_Δx(y), ∑_L∈ℤ^d (x_L - y - Δt a^n_Δx(y)) α_L(y+Δt a^n_Δx(y))⟩.

Now, as a consequence of Lemma <ref>, we observe that

∑_L∈ℤ^d (x_L - y - Δt a^n_Δx(y)) α_L(y+Δt a^n_Δx(y)) = 0.

Thus, equation (<ref>) rewrites

∑_L∈ℤ^d |Z(t^n+1;t^n,x)-x_L|^2 α_L(y+Δt a^n_Δx(y)) = |Z(t^n+1;t^n,x)-y-Δt a^n_Δx(y)|^2 + ∑_L∈ℤ^d |x_L-y-Δt a^n_Δx(y)|^2 α_L(y+Δt a^n_Δx(y)).

Injecting into (<ref>), we deduce

D_n+1^2 ≤ ∬_ℝ^d×ℝ^d |Z(t^n+1;t^n,x)-y-Δt a^n_Δx(y)|^2 γ(dx,dy) + ∫_ℝ^d ∑_L∈ℤ^d |x_L-y-Δt a^n_Δx(y)|^2 α_L(y+Δt a^n_Δx(y)) ρ_Δx^n(dy),

where we used the fact that ρ_Δx^n is the second marginal of γ. By definition, ρ_Δx^n = ∑_J∈ℤ^d ρ_J^n δ_x_J, so that

∑_L∈ℤ^d ∫_ℝ^d |x_L-y-Δt a^n_Δx(y)|^2 α_L(y+Δt a^n_Δx(y)) ρ_Δx^n(dy) = ∑_J∈ℤ^d ∑_L∈ℤ^d |x_L-x_J-Δt a^n_J|^2 α_L(x_J+Δt a^n_J) ρ_J^n.

Moreover, using the definition of α_L in (<ref>), we compute

∑_L∈ℤ^d |x_L-x_J-Δt a^n_J|^2 α_L(x_J+Δt a^n_J) = Δt^2 |a_J^n|^2 (1 - ∑_i=1^d (Δt/Δx_i)|a_i,J^n|) + ∑_i=1^d (|Δx_i e_i - Δt a^n_J|^2 (Δt/Δx_i)(a_i,J^n)^+ + |Δx_i e_i + Δt a^n_J|^2 (Δt/Δx_i)(a_i,J^n)^-) ≤ CΔt(Δt+Δx),

where we used, for the last inequality, the CFL condition (<ref>) and the fact that the velocity (a_J^n)_J is uniformly bounded (see Lemma <ref> or Lemma <ref>). Then, (<ref>) gives

D_n+1^2 ≤ ∬_ℝ^d×ℝ^d |Z(t^n+1;t^n,x)-y-Δt a^n_Δx(y)|^2 γ(dx,dy) + CΔt(Δt+Δx).

2nd step. We have to estimate the error between the exact characteristic Z(t^n+1;t^n,x) and the forward Euler discretization y+Δt a^n_Δx(y).
By definition of the characteristics (<ref>), we have

Z(t^n+1;t^n,x) = x + ∫_t^n^t^n+1 â_ρ(s,Z(s;t^n,x)) ds = x - ∫_t^n^t^n+1 ∫_ℝ^d â(Z(s;t^n,x)-Z(s;t^n,ξ)) ρ(t^n,dξ) ds.

We recall also that, by definition (<ref>), the approximating velocity is given by a_L^n = -∑_J∈ℤ^d ρ_J^n â(x_L-x_J), so that, for y a node of the mesh,

y + Δt a^n_Δx(y) = y - Δt ∫_ℝ^d â(y-ζ) ρ^n_Δx(dζ).

Thus, by a straightforward expansion and still for y a node of the mesh,

|Z(t^n+1;t^n,x)-y-Δt a^n_Δx(y)|^2 ≤ |x-y|^2 - 2∫_t^n^t^n+1 ∬_ℝ^d×ℝ^d ⟨x-y, â(Z(s;t^n,x)-Z(s;t^n,ξ)) - â(y-ζ)⟩ ρ(t^n,dξ) ρ_Δx^n(dζ) ds + CΔt^2.

By definition of the optimal plan γ∈Γ_0(ρ(t^n),ρ^n_Δx), we also have

∬_ℝ^d×ℝ^d ⟨x-y, â(Z(s;t^n,x)-Z(s;t^n,ξ)) - â(y-ζ)⟩ ρ(t^n,dξ) ρ_Δx^n(dζ) = ∬_ℝ^d×ℝ^d ⟨x-y, â(Z(s;t^n,x)-Z(s;t^n,ξ)) - â(y-ζ)⟩ γ(dξ,dζ).

Injecting into (<ref>), we get

D_n+1^2 ≤ D_n^2 + CΔt(Δt+Δx) - 2∫_t^n^t^n+1 ∬_ℝ^d×ℝ^d ∬_ℝ^d×ℝ^d ⟨x-y, â(Z(s;t^n,x)-Z(s;t^n,ξ)) - â(y-ζ)⟩ γ(dξ,dζ) γ(dx,dy) ds.

Decomposing x-y = (x-Z(s;t^n,x)) + (Z(s;t^n,x)-y) and using the fact that |Z(s;t^n,x)-x| ≤ w_∞|s-t^n|, we get

D_n+1^2 ≤ D_n^2 + CΔt(Δt+Δx) - 2∫_t^n^t^n+1 ∬_ℝ^d×ℝ^d ∬_ℝ^d×ℝ^d ⟨Z(s;t^n,x)-y, â(Z(s;t^n,x)-Z(s;t^n,ξ)) - â(y-ζ)⟩ γ(dξ,dζ) γ(dx,dy) ds.

Then, we may use the symmetry of the potential W, see assumption (A0), for the last term to deduce

D_n+1^2 ≤ D_n^2 + CΔt(Δt+Δx) - ∫_t^n^t^n+1 ∬_ℝ^d×ℝ^d ∬_ℝ^d×ℝ^d ⟨Z(s;t^n,x)-Z(s;t^n,ξ)-y+ζ, â(Z(s;t^n,x)-Z(s;t^n,ξ)) - â(y-ζ)⟩ γ(dξ,dζ) γ(dx,dy) ds.

Moreover, from the λ-convexity of W, see (<ref>), we obtain

D_n+1^2 ≤ D_n^2 + CΔt(Δt+Δx) - λ ∫_t^n^t^n+1 ∬_ℝ^d×ℝ^d ∬_ℝ^d×ℝ^d |Z(s;t^n,x)-y-Z(s;t^n,ξ)+ζ|^2 γ(dξ,dζ) γ(dx,dy) ds.

Expanding the last term, we deduce

D_n+1^2 ≤ D_n^2 + CΔt(Δt+Δx) - 2λ ∫_t^n^t^n+1 ∬_ℝ^d×ℝ^d |Z(s;t^n,x)-y|^2 γ(dx,dy) ds + 2λ ∫_t^n^t^n+1 |∬_ℝ^d×ℝ^d (Z(s;t^n,x)-y) γ(dx,dy)|^2 ds.

3rd step. Now we distinguish between the two cases λ≤0 and λ>0.

(i) Starting with the case λ≤0, we note that the last term in (<ref>) is nonpositive. Using Young's inequality and the estimate |x-Z(s;t^n,x)| ≤ w_∞(s-t^n), we get, for any ε>0,

|Z(s;t^n,x)-y|^2 ≤ (1+ε)|x-y|^2 + (1+1/ε) w_∞^2 |s-t^n|^2.

Hence, injecting into (<ref>), we deduce

D_n+1^2 ≤ (1 + 2(1+ε)|λ|Δt) D_n^2 + CΔt(Δx + Δt(1+Δt/ε)).

Applying a discrete Gronwall inequality, we obtain

D_n^2 ≤ e^2(1+ε)|λ|t^n (D_0^2 + Ct^n(Δx + Δt(1+Δt/ε))).

We recall that our choice of initial data implies D_0 = 0. Finally, taking ε=Δt, we conclude

d_W(ρ(t^n),ρ_Δx^n) ≤ C e^(1+Δt)|λ|t^n √(t^n(Δx+Δt)).

This concludes the proof of Theorem <ref> (i) in the case ρ^ini=ρ_Δx^0.

(ii) Considering now the case λ>0, we have

∬_ℝ^d×ℝ^d (Z(s;t^n,x)-y) γ(dx,dy) = ∫_ℝ^d (Z(s;t^n,x)-x) ρ(t^n,dx) + ∫_ℝ^d x ρ(t^n,dx) - ∑_J∈ℤ^d x_J ρ_J^n.

By conservation of the center of mass, see Lemma <ref> (i), we deduce that

∫_ℝ^d x ρ(t^n,dx) - ∑_J∈ℤ^d x_J ρ_J^n = ∫_ℝ^d x ρ^ini(dx) - ∑_J∈ℤ^d x_J ρ_J^0 = 0,

since we have chosen the initial data such that ρ^ini=ρ_Δx^0. Using also the bound |Z(s;t^n,x)-x| ≤ w_∞(s-t^n), we may bound the last term of (<ref>) by w_∞^2 Δt^2.
Moreover, using again Young's inequality and the estimate |Z(s;t^n,x)-x| ≤ w_∞(s-t^n), we have, for any ε>0,

|x-y|^2 ≤ (1+ε)|Z(s;t^n,x)-y|^2 + (1+1/ε) w_∞^2 |s-t^n|^2.

It implies, for any ε∈(0,1),

-|Z(s;t^n,x)-y|^2 ≤ -(1/(1+ε))|x-y|^2 + (1/ε) w_∞^2 |s-t^n|^2 ≤ -(1-ε)|x-y|^2 + (1/ε) w_∞^2 |s-t^n|^2.

Thus we deduce that

-2λ ∫_t^n^t^n+1 ∬_ℝ^d×ℝ^d |Z(s;t^n,x)-y|^2 γ(dx,dy) ds ≤ -2λ(1-ε)Δt D_n^2 + (2/3)(λ/ε) w_∞^2 Δt^3.

Injecting this latter inequality into (<ref>) and taking ε=Δt, we deduce

D_n+1^2 ≤ (1 - 2λ(1-Δt)Δt) D_n^2 + CΔt(Δt+Δx).

Hence, since 2λ(1-Δt)Δt < 1, we have by induction, recalling that D_0=0,

D_n^2 ≤ CΔt(Δt+Δx) ∑_k=0^n-1 (1-2λ(1-Δt)Δt)^k ≤ (C/(2(1-Δt)λ))(Δt+Δx).

Using the assumption Δt ≤ 1/2, we conclude the proof of Theorem <ref> (ii) in the case ρ^ini=ρ_Δx^0.

4th step. We are left with the case ρ^ini ≠ ρ_Δx^0. Let us define ρ'(t) = Z'(t)_#ρ_Δx^0, the exact solution with initial data ρ_Δx^0. From the triangle inequality, we have

d_W(ρ(t^n),ρ_Δx^n) ≤ d_W(ρ(t^n),ρ'(t^n)) + d_W(ρ'(t^n),ρ_Δx^n).

The last term in the right-hand side may be estimated thanks to the above computations. For the first term in the right-hand side, we use the estimates in Theorem <ref> (we apply (i) if λ≤0 and (ii) if λ>0):

d_W(ρ(t^n),ρ'(t^n)) ≤ e^(λ)^- t^n d_W(ρ^ini,ρ_Δx^0),

where (λ)^- = max(-λ,0) is the negative part of λ. Let us define τ : [0,1]×ℝ^d → ℝ^d by τ(σ,x) = σx_J + (1-σ)x, for x∈C_J. We have that τ(0,·) = id and τ(1,·)_#ρ^ini = ρ_Δx^0. Then

d_W(ρ^ini,ρ_Δx^0)^2 ≤ ∬_ℝ^d×ℝ^d |x-y|^2 [(id×τ(1,·))_#ρ^ini](dx,dy) = ∑_J∈ℤ^d ∫_C_J |x-x_J|^2 ρ^ini(dx).

We deduce d_W(ρ^ini,ρ_Δx^0) ≤ Δx. Then, we get d_W(ρ(t^n),ρ'(t^n)) ≤ e^(λ)^- t^n Δx.

§ UNSTRUCTURED MESH

We can extend our convergence result to more general meshes. For the sake of simplicity of the notation, we present the case of a triangular mesh in two dimensions. This approach can easily be extended to meshes made of simplices, in any dimension.

§.§ Forward semi-Lagrangian scheme

Let us consider a triangular mesh 𝒯 = (T_k)_k∈ℕ with nodes (x_i)_i∈ℕ. We assume this mesh to be conformal: a summit cannot belong to an open edge of the grid. The triangles (T_k)_k∈ℕ are assumed to satisfy ⋃_k∈ℕ T_k = ℝ^2 and T_k ∩ T_l = ∅ if k≠l (in particular, the cells are here neither assumed to be closed nor open). For any triangle T with summits x, y, z, we will also use the notation (x,y,z) = T. We denote by 𝒜(T) = 𝒜(x,y,z) the area of this triangle, and by h(T) its height (defined as the minimum of the three heights of the triangle T). We make the assumption that the mesh satisfies ħ := inf_k∈ℕ h(T_k) > 0. For any node x_i, i∈ℕ, we denote by K(i) the set of indices indexing the triangles that have x_i as a summit, and we denote by 𝒯_i the set of all triangles of 𝒯 that have x_i as a summit: thus 𝒯_i = {T_k ; k∈K(i)}. For any triangle T_k, k∈ℕ, we denote by I(k) = {I_1(k), I_2(k), I_3(k)} the set of indices indexing the summits of T_k (for some arbitrary order, whose choice has no importance for the sequel). We consider the following scheme, which may be seen as a forward semi-Lagrangian scheme on the triangular mesh.

* For an initial distribution ρ^ini of the PDE (<ref>), define the probability weights (ρ^0_i)_i∈ℕ through the following procedure: consider a one-to-one mapping ι : ℕ∋k ↦ ι(k)∈ℕ such that, for each k∈ℕ, x_ι(k) is a node of the triangle T_k; ι is thus a way to associate a node with a cell; then, for all i∈ℕ, let ρ^0_i = ∑_k:ι(k)=i ρ^ini(T_k). Observe from (<ref>) that ρ^0_Δx = ∑_j∈ℕ ρ^0_j δ_x_j is an approximation of ρ^ini.
* Assume that, for a given n∈ℕ, we already have probability weights (ρ_i^n)_i∈ℕ such that ρ^n_Δx = ∑_j∈ℕ ρ^n_j δ_x_j is an approximation of ρ(t^n,·), where ρ is the solution to (<ref>) with ρ^ini as initial condition. For i∈ℕ, we let

a_i^n := -∫_ℝ^d â(x_i-y) ρ_Δx^n(dy), y_i^n := x_i + a_i^n Δt.

Under the CFL-like condition w_∞Δt ≤ ħ, y_i^n belongs to one (and only one) of the elements of 𝒯_i. We denote by k_i^n the index of this triangle, namely y_i^n ∈ T_k_i^n.

* We use a linear splitting rule between the summits of the triangle T_k_i^n: the mass ρ_i^n is sent to the three points x_I_1(k_i^n), x_I_2(k_i^n), x_I_3(k_i^n) according to the barycentric coordinates of y_i^n in the triangle.

[Figure: the triangle T_k_i^n with summits x_i = x_I_1(k_i^n), x_I_2(k_i^n) and x_I_3(k_i^n), and the point y_i^n = x_i + a_i^n Δt inside it.]

Let us make the latter point more precise. Let T = (x,y,z)∈𝒯, and ξ∈T. We define the barycentric coordinates of ξ with respect to x, y and z, λ_x^T, λ_y^T and λ_z^T:

λ_x^T(ξ) = 𝒜(ξ,y,z)/𝒜(T), λ_y^T(ξ) = 𝒜(ξ,x,z)/𝒜(T), λ_z^T(ξ) = 𝒜(ξ,x,y)/𝒜(T),

and then have ξ = λ_x^T(ξ)x + λ_y^T(ξ)y + λ_z^T(ξ)z. Note also that λ_x^T(ξ) + λ_y^T(ξ) + λ_z^T(ξ) = 1. Therefore, we have the following fundamental property, which will be used in the sequel:

λ_x^T(ξ)(x-ζ) + λ_y^T(ξ)(y-ζ) + λ_z^T(ξ)(z-ζ) = ξ - ζ, for any ζ∈ℝ^2.

In the same spirit as in Section <ref>, we here define the interpolation weights by: for j∈ℕ and y∈ℝ^2,

α_j(y) := λ_x_j^T(y) when y∈T and x_j is a summit of T, and α_j(y) := 0 otherwise.

Then, the numerical scheme reads

ρ_j^n+1 = ∑_i∈ℕ ρ_i^n α_j(x_i + a_i^nΔt), j∈ℕ, n∈ℕ.

We easily verify from (<ref>) and (<ref>) that the interpolation weights satisfy:

Let (α_j(y))_j∈ℕ, y∈ℝ^2 be defined as in (<ref>). Then, for any j∈ℕ and y∈ℝ^2, α_j(y) ≥ 0. Moreover, for any y∈ℝ^2,

∑_j∈ℕ α_j(y) = 1, ∑_j∈ℕ x_j α_j(y) = y.

§.§ Convergence result

By the same token as in Section <ref>, we can use Lemma <ref> and Theorem <ref> to prove that the numerical scheme (<ref>) is of order 1/2:

Assume that W satisfies hypotheses (A0)–(A3). For ρ^ini∈𝒫_2(ℝ^d), let (ρ(t))_t ≥ 0 be the unique measure solution to the aggregation equation with initial data ρ^ini, as given by Theorem <ref>. Let us also consider a triangular conformal mesh (T_k)_k∈ℕ with nodes (x_j)_j∈ℕ such that ħ = inf_k∈ℕ h(T_k) > 0 and the CFL condition (<ref>) holds true. We denote by Δx the length of the longest edge in the mesh. Define ((ρ_j^n)_j∈ℕ)_n∈ℕ as in (<ref>) and let

ρ_Δx^n := ∑_j∈ℕ ρ_j^n δ_x_j, n∈ℕ.

Then, there exists a nonnegative constant C, independent of the discretization parameters, such that, for all n∈ℕ^*,

d_W(ρ(t^n),ρ_Δx^n) ≤ C e^|λ|(1+Δt)t^n (√(t^nΔx) + Δx).

Importantly, we do not claim that (ii) in the statement of Theorem <ref> remains true in the framework of Theorem <ref>. Indeed, it would require to prove that the support of the numerical solution remains included in a ball when the support of the initial condition is bounded. As made clear by the proof of Lemma <ref>, this latter fact depends on the geometry of the mesh.
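Below is a brief sketch (our own) of the elementary brick of this semi-Lagrangian step: the barycentric coordinates (<ref>), computed as ratios of signed areas, are exactly the fractions of the mass ρ_i^n sent to the three summits; the helper names and the sample triangle are illustrative assumptions.

```python
import numpy as np

def cross2(u, v):
    # z-component of the cross product of two plane vectors
    return u[0] * v[1] - u[1] * v[0]

def barycentric_weights(xi, T):
    """Barycentric coordinates of the point xi in the triangle T = (x, y, z),
    computed as ratios of signed areas; they sum to 1 and reproduce xi."""
    x, y, z = T
    area = cross2(y - x, z - x)               # twice the signed area of T
    lam = np.array([cross2(y - xi, z - xi),   # proportional to A(xi, y, z)
                    cross2(z - xi, x - xi),   # proportional to A(xi, x, z)
                    cross2(x - xi, y - xi)])  # proportional to A(xi, x, y)
    return lam / area

# splitting step: the mass rho_i^n carried to y_i^n = x_i + a_i^n*dt is
# distributed to the three summits of the triangle containing y_i^n
T = np.array([[0.0, 0.0], [1.0, 0.0], [0.3, 0.8]])
y_i = np.array([0.4, 0.3])
lam = barycentric_weights(y_i, T)
print(lam, lam.sum(), lam @ T)   # nonnegative weights, 1.0, and y_i recovered
```

The printed checks reproduce the two properties of Lemma <ref>: the weights sum to 1 and reproduce y_i^n as a barycenter of the summits.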
§.§ Wasserstein distance in one dimension The numerical computation of the Wasserstein distance between two probability measures in any dimension is generally quite difficult. However, in dimension d=1, there is an explicit expression of the Wasserstein distance and this allows for direct computations, including for numerical purposes, as shown in the pioneering work <cit.>. Indeed, any probability measure μ on the real line can be described thanks to its cumulative distribution function F(x)=μ((-∞,x ]), which is a right-continuous and non-decreasing function with F(-∞)=0 and F(+∞)=1. Then we can define the generalized inverse Q_μ of F (or monotone rearrangement of μ) by Q_μ(z)=F^-1(z):=inf{x∈ : F(x) > z}; it is a right-continuous and non-decreasing function, defined on [0,1). For every non-negative Borel-measurable map ξ : →, we have ∫_ξ(x) μ(dx) = ∫_0^1 ξ(Q_μ(z)) dz. In particular, μ∈_2() if and only if Q_μ∈ L^2((0,1)). Moreover, in the one-dimensional setting, there exists a unique optimal transport plan realizing the minimum in (<ref>). More precisely, if μ and ν belong to _p(), with monotone rearrangements Q_μ and Q_ν, then Γ_0(μ,ν)={(Q_μ,Q_ν)_#𝕃_(0,1)} where 𝕃_(0,1) is the restriction to (0,1) of the Lebesgue measure. Then we have the explicit expression of the Wasserstein distance (see <cit.>) d_W(μ,ν) = (∫_0^1 |Q_μ(z)-Q_ν(z)|^2 dz)^1/2, and the map μ↦ Q_μ is an isometry between _2() and the convex subset of (essentially) non-decreasing functions of L^2([0,1)). We will take advantage of this expression (<ref>) of the Wasserstein distance in dimension 1 in our numerical simulations to estimate the numerical error of the upwind scheme (<ref>). This scheme in dimension 1 on a Cartesian mesh reads, with time step Δ t and cell size Δ x: ρ_j^n+1 = ρ_j^n - Δ t/Δ x((a_j^n)^+ ρ_j^n-(a_j+1^n)^- ρ_j+1^n - (a_j-1^n)^+ ρ_j-1^n +(a_j^n)^- ρ_j^n ). With this scheme, we define the probability measure ρ_Δ x^n =∑_j∈ρ_j^nδ_x_j. Then the generalized inverse of ρ_Δ x^n, denoted by Q_Δ x^n, is given by Q_Δ x^n(z) = x_j+1, z∈[∑_k≤ jρ_k^n,∑_k≤ j+1ρ_k^n).

§.§ Optimality of the order of convergence Thanks to formula (<ref>) in dimension d=1, we can verify numerically the optimality of our result. Let us consider the potential W(x)=2x^2 for |x|≤ 1 and W(x)=4|x|-2 for |x|> 1; such a potential verifies our assumptions (A0)–(A3) with λ=0. We choose the initial datum ρ^ini=1/2δ_-x_0+1/2δ_x_0 with x_0=0.25. Then the solution to the aggregation equation (<ref>) is given by ρ(t) = 1/2δ_-x_0(t) + 1/2δ_x_0(t), x_0(t)=1/4 e^-4t, t ≥ 0. The generalized inverse Q_ρ(t,·)= Q_ρ(t) of ρ(t) is given, for z∈ [0,1), by Q_ρ(t,z) = -x_0(t) if z∈ [0,1/2), and Q_ρ(t,z) = x_0(t) if z∈ [1/2,1). Therefore, letting u_j^n:=∑_k≤ jρ_k^n for j ∈, we can easily compute the error at time t^n=nΔ t by means of the two formulas (<ref>)–(<ref>): e_n:=d_W(ρ(t^n),ρ_Δ x^n) = ∑_k∈∫_u_k-1^n^u_k^n |x_k - Q_ρ(t^n,z)| dz. We then define the numerical error as e:=max_n≤ T/Δ t e_n. We display in Figure <ref> the numerical error with respect to the number of nodes in logarithmic scale, as computed with the above procedure (the time steps being chosen in such a way that the ratio (<ref>) in the CFL condition is kept constant).
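The error computation just described is straightforward to implement. The following is a minimal NumPy sketch (ours, not the authors' code) of one step of the 1D upwind scheme together with the error evaluated through the generalized inverses; the grid x is uniform, dW stands for W', and for pointy potentials one takes W'(0)=0.

import numpy as np

def upwind_step(rho, x, dt, dx, dW):
    # One step of the 1D upwind scheme; rho_j are the masses carried by the nodes x_j.
    # Mass leaving through the boundary cells is lost: the support is assumed
    # to stay inside the grid.
    a = -(dW(x[:, None] - x[None, :]) * rho[None, :]).sum(axis=1)  # a_j = -(W' * rho)(x_j)
    ap, am = np.maximum(a, 0.0), np.maximum(-a, 0.0)               # positive/negative parts
    new = rho - dt / dx * (ap + am) * rho                          # outgoing mass
    new[1:] += dt / dx * ap[:-1] * rho[:-1]                        # inflow from the left cell
    new[:-1] += dt / dx * am[1:] * rho[1:]                         # inflow from the right cell
    return new

def wasserstein_error(rho, x, Q_exact, n_quad=100000):
    # L^2 distance of the generalized inverses on (0,1).
    z = (np.arange(n_quad) + 0.5) / n_quad
    j = np.searchsorted(np.cumsum(rho), z, side='right')  # Q_{Delta x}(z) = x_{j+1}
    Q_num = x[np.minimum(j, len(x) - 1)]
    return np.sqrt(np.mean((Q_num - Q_exact(z)) ** 2))

For the optimality test above one would take dW = lambda s: np.clip(4.0 * s, -4.0, 4.0), and Q_exact(z) equal to -x_0(t^n) for z < 1/2 and x_0(t^n) otherwise.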
We observe that the computed numerical error is of order 1/2.

§.§ Newtonian potential in one dimension An interesting and illustrative example is the Newtonian potential in dimension d=1. Let us indeed consider the case W(x)=|x| and an initial datum given by the sum of two masses located at points x_i_1 and x_i_2 of the grid mesh, namely ρ^ini=1/2δ_x_i_1+1/2δ_x_i_2, with say x_i_1<x_i_2. The solution of the aggregation equation in Theorem <ref> is given by ρ(t) =1/2δ_x_1(t)+ 1/2δ_x_2(t), where x_1(t) = x_i_1 + t/2, x_2(t) = x_i_2 - t/2, for t<x_i_2-x_i_1. Indeed, recalling definition (<ref>), we have, for t<x_i_2-x_i_1: a_ρ(t,x) = 1 for x<x_1(t), 1/2 at x=x_1(t), 0 for x_1(t)<x<x_2(t), -1/2 at x=x_2(t), and -1 for x>x_2(t). At t= x_i_2-x_i_1, the two particles collapse; then, for t≥ x_i_2-x_i_1, we have ρ(t)=δ_1/2 (x_i_1+x_i_2). Standard finite volume upwind scheme. This simple example explains why we have chosen the scheme (<ref>) instead of the standard finite volume upwind scheme introduced in Subsection <ref>. In dimension d=1 and on a Cartesian grid, this latter one reads ρ_i^n+1 = ρ_i^n - Δ t/Δ x((a_i+1/2^n)^+ ρ_i^n-(a_i+1/2^n)^- ρ_i+1^n - (a_i-1/2^n)^+ ρ_i-1^n + (a_i-1/2^n)^- ρ_i^n ), where a_i+1/2^n=-∑_k∈ρ_k^n W'(x_i+1/2-x_k). Assume indeed that, at time t^n, for some n ∈, we have obtained the approximation ρ_i^n = 0 for i∈∖{i_1,i_2}, and ρ_i_1^n=ρ_i_2^n=1/2. We then compute a_i+1/2^n = 1 for i<i_1, 0 for i_1≤ i < i_2, and -1 for i≥ i_2. So, when applying the upwind scheme for i∈{i_1-1,i_1,i_1+1}, we get ρ_i_1-1^n+1 = ρ_i_1-1^n - Δ t/Δ x(ρ_i_1-1^n-ρ_i_1-2^n) = 0, ρ_i_1^n+1 = ρ_i_1^n + Δ t/Δ x ρ_i_1-1^n = ρ_i_1^n, and ρ_i_1+1^n+1 = ρ_i_1+1^n = 0. Doing the same computation for i∈{i_2-1,i_2,i_2+1}, we deduce that ρ^n+1=ρ^n. Thus the above upwind scheme may not be able to capture the correct dynamics of Dirac deltas. The above computation is illustrated by the numerical results in Figure <ref>, where a comparison between the numerical results obtained with (<ref>) (left) and with (<ref>) (right) is displayed. We observe that the Dirac deltas are stationary when using the scheme (<ref>), whereas the scheme (<ref>) captures the right dynamics. Another interesting numerical illustration of this phenomenon is provided by Figure <ref>. In this example, we choose the potential W(x)=1-e^-2|x|, which is -4-convex, and a smooth initial datum given by the sum of two Gaussian functions: ρ^ini(x) = 1/M(e^-20(x-0.5)^2 + e^-20(x+0.5)^2), where M=‖ρ^ini‖_L^1 is a normalization coefficient. With this choice, we observe that the solution blows up quickly. Dirac deltas appear in finite time and, as observed above, the scheme (<ref>) (Fig. <ref>-left) fails to capture the dynamics after the blow-up time, whilst the scheme (<ref>) (Fig. <ref>-right) succeeds in doing so. For these numerical simulations, the numerical spatial domain is [-1.25,1.25]; it is discretized with a uniform Cartesian grid of 800 nodes, and the ratio in the CFL condition (<ref>) is 1/2. Comparison with Burgers-Hopf equation. Considering the potential W(x)=1/2 |x|, it has been proved in <cit.> (see also <cit.>) that the following equivalence holds true: ρ is the solution in Theorem <ref> if and only if u=-W'*ρ is the entropy solution of the Burgers-Hopf equation ∂_t u+1/2 ∂_x u^2 = 0. Let (ρ^n_i)_i ∈, n ∈ be given by the scheme (<ref>)–(<ref>). By conservation of the total mass, see Lemma <ref>, we have ∑_k∈ρ_k^n=1.
Introducing u_i^n := 1/2 - ∑_k≤ iρ_k^n, i ∈, n ∈, we deduce, by summing (<ref>) and by using the fact that ρ_i^n=-(u_i^n-u_i-1^n), that the family (u_i^n)_i ∈, n ∈ satisfies: u_i^n+1 = u_i^n - Δ t/Δ x((a_i^n)^+ (u_i^n - u_i-1^n) - (a_i+1^n)^- (u_i+1^n-u_i^n) ), where, with (<ref>), we have a_i^n = -1/2∑_k≠ iρ_k^n sign(x_i-x_k). Then a_i^n = -1/2(∑_k<iρ_k^n - ∑_k>iρ_k^n ) = -1/2(∑_k<iρ_k^n - 1 + ∑_k≤ iρ_k^n ) = 1/2 (u_i-1^n + u_i^n). Moreover, as ρ_i^n remains nonnegative under the CFL condition (see Lemma <ref>), u_i^n - u_i-1^n = -ρ_i^n ≤ 0, so that (a_i^n)^+ (u_i^n-u_i-1^n) = -(a_i^n (u_i^n-u_i-1^n))^- = -1/2( (u_i^n)^2-(u_i-1^n)^2)^-. Similarly, we get (a_i+1^n)^- (u_i+1^n-u_i^n) = -(a_i+1^n (u_i+1^n-u_i^n))^+ = -1/2( (u_i+1^n)^2-(u_i^n)^2)^+, so that the scheme (<ref>) for u finally rewrites u_i^n+1 = u_i^n - Δ t/2Δ x(((u_i+1^n)^2-(u_i^n)^2)^- - ((u_i^n)^2 - (u_i-1^n)^2)^+ ). Then we may apply the main result of this paper and deduce the convergence at order 1/2 of the above scheme: Let u^ini be given in BV() such that ∂_x u^ini≤ 0 and TV(u^ini)=1. Define the family (u^n_i)_i ∈, n∈ by means of (<ref>), with the initial data u_i^0 := 1/2 + ∂_x u^ini((-∞,x_i+1/2)), and let u_Δ x^n:=∑_i∈ u_i^n 1_[x_i,x_i+1). Let u be the entropy solution to the Burgers equation ∂_t u + 1/2 ∂_x u^2=0 with u^ini as initial condition. Then, there exists C≥ 0, independent of the discretization parameters, such that if the CFL condition Δ t <Δ x is satisfied, one has ‖ u(t^n)-u^n_Δ x‖_L^1≤ C (√(t^n Δ x) + Δ x ). We do not claim that the scheme converges for any initial datum of the Cauchy problem for the Burgers equation (and actually it does not). The convergence result above only applies to a non-increasing initial condition belonging to [-1/2, 1/2]. Note that this scheme is not conservative, but, surprisingly (see <cit.>), this does not prevent it from converging toward the right solution. First remark that the CFL condition that is here required is w_∞Δ t < 1/2Δ x, with w_∞ = 1/2 as W(x) = 1/2 |x|. The entropy solution u of the Burgers equation with a nonincreasing BV initial datum is a nonincreasing BV function. By the Cauchy-Schwarz inequality, we have ∫_0^1 |Q_ρ(t^n)(z) - Q_ρ^n_Δ x(z)| dz ≤ ‖ Q_ρ(t^n) - Q_ρ^n_Δ x‖_L^2(0,1) = d_W(ρ(t^n),ρ_Δ x^n), where (ρ(t))_t ≥ 0 is the solution of (<ref>), with W(x) = 1/2|x| as before and ρ^ini=-∂_x u^ini as initial condition, and (ρ^n_Δ x)_n ≥ 0 is the numerical solution obtained by Scheme (<ref>) with d = 1 together with initial condition (<ref>) (numerical solution whose convergence at order 1/2 is stated in Theorem <ref>). Observing that W is convex, we apply Theorem <ref> with λ =0. We obtain ∫_0^1 |Q_ρ(t^n)(z) - Q_ρ^n_Δ x(z)| dz ≤ d_W(ρ(t^n),ρ_Δ x^n) ≤ C ( √(t^n Δ x) + Δ x ). The claim follows provided we prove that ∫_ |u(t^n,x)-u^n_Δ x(x)| dx = ∫_0^1 |Q_ρ(t^n)(z) - Q_ρ^n_Δ x(z)| dz. In order to prove (<ref>), we notice that, from a geometrical point of view, the left hand side of equality (<ref>) corresponds to the area between the curves x↦ u(t^n,x) and x↦ u^n_Δ x(x). Also, the right hand side is a measure of the area between their generalized inverses. However, the graph of the pseudo-inverse of a function may be obtained by flipping the graph of the function with respect to the diagonal. Since this operation preserves the area, we deduce that both areas are equal, that is, (<ref>) holds. Another way to prove the identity (<ref>) is to observe that the solution u of the Burgers-Hopf equation reads: u(t,x) = 1/2[ ρ(t,(x,+∞)) - ρ(t,(-∞,x)) ], t ≥ 0, x ∈ℝ, where ρ is the solution in Theorem <ref>.
In fact, as the number of points x for which ρ(t,{x})>0 is at most countable for any given t >0, we have the almost everywhere equality: u(t,x) = ρ(t,(x,+∞)) - 1/2. Similarly, u^n_Δ x(t,x) = ∑_i ∈ℤ u^n_i 1_[x_i,x_i+1)(x) = 1/2 - ∑_i ∈ℤ1_[x_i,x_i+1)(x) ∑_k ≤ iρ^n_k = 1/2 - ∑_i ∈ℤ1_[x_i,x_i+1)(x) ρ^n_Δ x(t,(-∞,x_i]) = 1/2 - ρ^n_Δ x(t,(-∞,x]) = ρ^n_Δ x(t,(x,+∞)) - 1/2. So, to complete the proof, it suffices to use the fact that, for any two probability measures μ and μ' on , ∫_|μ((x,+∞))- μ'((x,+∞)) | dx = ∫_0^1| Q_μ(z) - Q_μ'(z) | dz, see <cit.>, noticing that the function Q_μ we use here is the right continuous version of the quantile function used in <cit.>.

§.§ Numerical simulation in two dimensions As an illustration, we now propose a numerical example in two dimensions. The spatial domain is the square [0,1]×[0,1]; it is discretized with N_x=70 nodes in the x-direction and N_y=70 nodes in the y-direction; we take a time step Δ t=10^-3. We consider two different initial data: the sum of three bumps (as in <cit.>) ρ^ini(x)= 1/M(e^-100((x_1-0.25)^2+(x_2-0.3)^2)+e^-100((x_1-0.77)^2+(x_2-0.7)^2) + 0.9 e^-100((x_1-0.37)^2+(x_2-0.62)^2)), where M is a normalization constant such that ‖ρ^ini‖_L^1=1; and an initial density with a square shape ρ^ini(x)=5×1_[0.2,0.8]×[0.2,0.8]∖ [0.3,0.7]×[0.3,0.7]. With these numerical data, we compare the numerical results between the two potentials W_1(x)=1-e^-5|x| and W_2(x)=5|x|. For |x| close to 0, we have that ∇ W_1 ∼∇ W_2. Thus the short-range interaction is similar for both potentials, but the long-range interaction is different. The numerical results are displayed in Figures <ref> and <ref> for the potential W_1(x)=1-e^-5|x| and in Figures <ref> and <ref> for the potential W_2(x)=5|x|. In each case, we observe, as expected, the aggregation in finite time of ρ towards a Dirac delta. Indeed it has been proved in <cit.> that when the initial data is compactly supported, solutions converge towards a Dirac delta in finite time. We also observe that the time dynamics during this concentration step differs between the potentials W_1 and W_2. The case with an initial datum with three bumps has been implemented in <cit.> with a Lax-Friedrichs scheme. We obtain similar results here, but we observe a smaller numerical diffusion. We can then make similar comments for the comparison between the two potentials W_1 and W_2. For the potential W_1, we observe that each bump coalesces into a Dirac delta, then the three remaining Dirac deltas merge into a single Dirac delta (see Fig <ref>). For the potential W_2, the solution seems to be more regular and Dirac deltas seem to appear at later times (see Fig <ref>). For the initial datum with a square shape, the density ρ keeps, for both potentials, a shape similar to the initial square shape, which tightens as time increases. However, with the potential W_1 (Fig <ref>), we notice a strong concentration at the corners of the square, whereas in the case of the potential W_2 (Fig <ref>) the density is homogeneous along the edges of the square with a slight concentration in the middle of the edges. Acknowledgements. The authors acknowledge partial support from the French “ANR blanche" project Kibord: ANR-13-BS01-0004, as well as from the “BQR Accueil EC 2017” grant from Université Lyon 1. AubinCellina J.-P. Aubin, A. Cellina, Differential inclusions. Set-valued maps and viability theory. Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences], 264. Springer-Verlag, Berlin, 1984. Ambrosio L. Ambrosio, N.
Gigli, G. Savaré,Gradient flows in metric space of probability measures, Lectures in Mathematics, Birkäuser, 2005benedetto D. Benedetto, E. Caglioti, M. Pulvirenti,A kinetic equation for granular media, RAIRO Model. Math. Anal. Numer.,31 (1997), 615-641.Andrea_c_toaa A.L. Bertozzi, J.B. Garnett, T. Laurent,Characterization of radially symmetric finite time blowup in multidimensional aggregation equations, SIAM J. Math. Anal.44(2) (2012) 651–681.Bertozzi2 A.L. Bertozzi, T. Laurent, J. Rosado, L^p theory for the multidimensional aggregation equation, Comm. Pure Appl. Math.,64 (2011), no 1, 45–83.Bianchini S. Bianchini, M. Gloyer,An estimate on the flow generated by monotone operators, Comm. Partial Diff. Eq.,36 (2011), no 5, 777–796.bobkov:ledoux S. Bobkov, M. Ledoux,One-dimensional empirical measures, order statistics, and Kantorovich transport distances, to appear in Memoirs of AMS.BV M. Bodnar, J.J.L. Velázquez,An integro-differential equation arising as a limit of individual cell-based models, J. Differential Equations222 (2006), no 2, 341–380.bonaschi G.A. Bonaschi, J.A. Carrillo, M. Di Francesco, M.A. Peletier,Equivalence of gradient flows and entropy solutions for singular nonlocal interaction equations in 1D, ESAIM Control Optim. Calc. Var.21 (2015), no 2, 414–441.bouche D. Bouche, J.-M. Ghidaglia, F. Pascal,Error estimate and the geometric corrector for the upwind finite volume method applied to the linear advection equation, SIAM J. Numer. Anal.43 (2005), no 2, 578–603.bj1 F. Bouchut, F. James,One-dimensional transport equations with discontinuous coefficients, Nonlinear Analysis TMA,32(1998), no 7, 891–933.Freda M. Campos Pinto, J.A. Carrillo, F. Charles, Y.-P. Choi,Convergence of a linearly transformed particle method for aggregation equations, preprint, .CCH J.A. Carrillo, A. Chertock, Y. Huang,A Finite-Volume Method for Nonlinear Nonlocal Equations with a Gradient Flow Structure, Comm. in Comp. Phys.17 (2015), no 1, 233–258.Carrillo J.A. Carrillo, M. DiFrancesco, A. Figalli, T. Laurent, D. Slepčev,Global-in-time weak measure solutions and finite-time aggregation for nonlocal interaction equations, Duke Math. J.156 (2011), 229–271.CJLV J.A. Carrillo, F. James, F. Lagoutière, N. Vauchelet, The Filippov characteristic flow for the aggregation equation with mildly singular potentials, J. Differential Equations.260 (2016), no 1, 304–338. CCV J.A. Carrillo, R.J. McCann, C. Villani,Contractions in the 2-Wasserstein length space and thermalization of granular media, Arch. Rational Mech. Anal.179 (2006), 217–263.pieton R.M. Colombo, M. Garavello, M. Lécureux-Mercier,A class of nonlocal models for pedestrian traffic, Math. Models Methods Appl. Sci.,22 (2012), no 4:1150023, 34.CB K. Craig, A.L. Bertozzi,A blob method for the aggregation equation, Math of Comp85 (2016), no 300, 1681–1717.pieton2 G. Crippa, M. Lécureux-Mercier,Existence and uniqueness of measure solutions for a system of continuity equations with non-local flow, NoDEA Nonlinear Differential Equations Appl., (2013)20 (2013), no 3, 523–537.caniveau F. Delarue, F. Lagoutière,Probabilistic analysis of the upwind scheme for transport equations, Arch. Rational Mech. Anal.199 (2011), 229–268.DLV F. Delarue, F. Lagoutière, N. Vauchelet,Analysis of finite volume upwind scheme for transport equation with discontinuous coefficients. Accepted for publication in J. Math. Pures Appliquées. Despres B. Després,An explicit a priori estimate for a finite volume approximation of linear advection on non-Cartesian grid, SIAM J. Numer. 
Anal.42 (2004), no 2, 484–504.dob R. Dobrushin,Vlasov equations, Funct. Anal. Appl.13 (1979), 115–123.dolschmeis Y. Dolak, C. Schmeiser,Kinetic models for chemotaxis: Hydrodynamic limits and spatio-temporal mechanisms, J. Math. Biol.,51 (2005), 595–615.filblaurpert F. Filbet, P. Laurençot, B. Perthame,Derivation of hyperbolic models for chemosensitive movement, J. Math. Biol.,50 (2005), 189–207.Filippov A.F. Filippov,Differential Equations with Discontinuous Right-Hand Side, A.M.S. Transl. (2)42 (1964), 199–231.Golse F. Golse,On the Dynamics of Large Particle Systems in the Mean Field Limit. In: A. Muntean, J. Rademacher, A. Zagaris (eds), Macroscopic and Large Scale Phenomena: Coarse Graining, Mean Field Limits and Ergodicity. Lecture Notes in Applied Mathematics and Mechanics, vol 3. Springer, Cham, 2016. GJ L. Gosse, F. James,Numerical approximations of one-dimensional linear conservation equations with discontinuous coefficients, Math. Comput.69 (2000) 987–1015.GT L. Gosse, G. Toscani,Identification of Asymptotic Decay to Self-Similarity for One-Dimensional Filtration Equations, SIAM J. Numer. Anal.43 (2006) 2590–2606.sisc L. Gosse, N. Vauchelet,Numerical high-field limits in two-stream kinetic models and 1D aggregation equations, SIAM J. Sci. Comput.38 (2016), no 1, A412–A434.Legloc T.Y. Hou, P.G. LeFloch,Why nonconservative schemes converge to wrong solutions: error analysis, Math. Comp.62 (1994), no 206, 497–530. Huang1 Y. Huang, A.L. Bertozzi,Asymptotics of blowup solutions for the aggregation equation, Discrete and Continuous Dynamical Systems - Series B,17 (2012), 1309–1331.Huang2 Y. Huang, A.L. Bertozzi,Self-similar blowup solutions to an aggregation equation in ^n, SIAM Journal on Applied Mathematics,70 (2010), 2582–2603.NoDEA F. James, N. Vauchelet,Chemotaxis: from kinetic equations to aggregation dynamics, Nonlinear Diff. Eq. and Appl. (NoDEA),20 (2013), no 1, 101–127.GF_dual F. James, N. Vauchelet,Equivalence between duality and gradient flow solutions for one-dimensional aggregation equations, Disc. Cont. Dyn. Syst.,36 (2016), no 3, 1355–1382.sinum F. James, N. Vauchelet,Numerical method for one-dimensional aggregation equations, SIAM J. Numer. Anal.53 (2015), no 2, 895–916.keller E.F. Keller, L.A. Segel,Initiation of slime mold aggregation viewed as an instability, J. Theor. Biol.,26 (1970), 399–415.Kuznetsov N.N. Kuznetsov,The accuracy of certain approximate methods for the computation of weak solutions of a first order quasilinear equation, Ž. Vyčisl. Mat. i Mat. Fiz.16 (1976), no 6, 1489–1502.lava F. Lagoutière, N. Vauchelet,Analysis and simulation of nonlinear and nonlocal transport equation, to appear in Innovative algorithms and analysis, Springer INdAM Series 16, L. Gosse and R. Natalini Ed, 2016. Li H. Li, G. Toscani,Long time asymptotics of kinetic models of granular flows, Arch. Rat. Mech. Anal.,172 (2004), 407–428.M B. Merlet, L^∞- and L^2-error estimates for a finite volume approximation of linear advection, SIAM J. Numer. Anal.46 (2007), no 1, 124–150.MV B. Merlet, J. Vovelle,Error estimate for finite volume scheme, Numer. Math.106 (2007), 129–155.morale D. Morale, V. Capasso, K. Oelschläger,An interacting particle system modelling aggregation behavior: from individuals to populations, J. Math. Biol.,50 (2005), 49–66.okubo A. Okubo, S. Levin,Diffusion and Ecological Problems: Modern Perspectives, Springer, Berlin, 2002.patlack C.S. Patlak,Random walk with persistence and external bias, Bull. Math. Biophys.,15 (1953), 311-338.PoupaudRascle F. Poupaud, M. 
Rascle,Measure solutions to the linear multidimensional transport equation with discontinuous coefficients, Comm. Partial Diff. Equ.,22 (1997), 337–358.rachev S.T. Rachev and L. Rüschendorf,Mass Transportation Problems. Vol. I. Theory, Probab. Appl. (N. Y.), Springer-Verlag, New York, 1998.Filippo c touo F. Santambrogio,Optimal transport for applied mathematicians. Calculus of variations, PDEs, and modeling. Progress in Nonlinear Differential Equations and their Applications, 87. Birkhäuser/Springer, Cham, 2015. schlichting A. Schlichting, C. Seis,Convergence rates for upwind schemes with rough coefficients, SIAM J. Numer. Anal.55 (2017), no 2, 812–840.topaz C.M. Topaz, A.L. Bertozzi,Swarming patterns in a two-dimensional kinematic model for biological groups, SIAM J. Appl. Math.65 (2004), 152–174.Toscani G. Toscani,Kinetic and hydrodynamic models of nearly elastic granular flows, Monatsh. Math.142 (2004), 179–192.Villani1 C. Villani,Optimal transport, old and new, Grundlehren der Mathematischen Wissenschaften 338, Springer, 2009.Villani2 C. Villani,Topics in optimal transportation, Graduate Studies in Mathematics58, Amer. Math. Soc, Providence, 2003. ] ]
http://arxiv.org/abs/1709.09416v2
{ "authors": [ "François Delarue", "Frédéric Lagoutìère", "Nicolas Vauchelet" ], "categories": [ "math.AP" ], "primary_category": "math.AP", "published": "20170927094437", "title": "Convergence analysis of upwind type schemes for the aggregation equation with pointy potential" }
Deep Haptic Model Predictive Control for Robot-Assisted Dressing Zackory Erickson, Henry M. Clever, Greg Turk, C. Karen Liu, and Charles C. Kemp

Robot-assisted dressing offers an opportunity to benefit the lives of many people with disabilities, such as some older adults. However, robots currently lack common sense about the physical implications of their actions on people. The physical implications of dressing are complicated by non-rigid garments, which can result in a robot indirectly applying high forces to a person's body. We present a deep recurrent model that, when given a proposed action by the robot, predicts the forces a garment will apply to a person's body. We also show that a robot can provide better dressing assistance by using this model with model predictive control. The predictions made by our model only use haptic and kinematic observations from the robot's end effector, which are readily attainable. Collecting training data from real world physical human-robot interaction can be time consuming, costly, and put people at risk. Instead, we train our predictive model using data collected in an entirely self-supervised fashion from a physics-based simulation. We evaluated our approach with a PR2 robot that attempted to pull a hospital gown onto the arms of 10 human participants. With a 0.2s prediction horizon, our controller succeeded at high rates and lowered applied force while navigating the garment around a person's fist and elbow without getting caught. Shorter prediction horizons resulted in significantly reduced performance with the sleeve catching on the participants' fists and elbows, demonstrating the value of our model's predictions. These behaviors of mitigating catches emerged from our deep predictive model and the controller objective function, which primarily penalizes high forces.

§ INTRODUCTION Robotic assistance presents an opportunity to benefit the lives of many people with disabilities, such as some older adults. However, robots currently lack common sense about the physical implications of their actions on people when providing assistance. Assistance with dressing can improve a person's quality of life by increasing his or her independence and privacy. Yet, dressing presents further difficulties for robots due to the complexities that arise when manipulating fabric garments around people. Model predictive control (MPC) enables robots to account for errors and replan actions in real time when interacting in dynamic environments. For example, MPC has found success in several contexts such as obstacle avoidance and object manipulation <cit.>. However, these existing robotic controllers do not take into consideration the physical implications of a robot's actions on a person during physical human-robot interaction. This is especially true during robot-assisted dressing in which a robot may never make direct physical contact with a person, but instead apply force onto the person through an intermediary non-rigid garment.
Yet, robots could greatly benefit from predicting the physical implications of their actions when interacting with people.In this paper, we propose a Deep Haptic MPC approach that allows a robot to minimize the predicted force it applies to a person during robotic assistance that requires physical contact. We train a recurrent model that consists of both an estimator and predictor network in order to predict the forces applied onto a person, and we detail the benefits of this approach in Section <ref>. The estimator outputs the location and magnitude of forces applied to a person's body given haptic sensory observations from a robot's end effector. The predictor outputs future haptic observations given a proposed action. Together, these two networks allow a robot to determine the physical implications of its actions by predicting how future actions will exert forces onto a person's body. We demonstrate our approach on a real robotic system that assisted 10 human participants in pulling a hospital gown onto a person's right arm, as seen in Fig. <ref>. We train our model on data generated entirely in a physics-based simulation, allowing us to quickly collect thousands of diverse training sequences that would otherwise be dangerous or infeasible to collect on real robotic systems that physically interact with people. Our simulated robot can make mistakes, explore new approaches for interaction, and investigate error conditions without putting real people at risk.These training data are generated in a self-supervised fashion, without a reward function or specified goal. Once training is complete, we define an objective function that enables our controller to prioritize future actions that minimize the predicted force applied to a person during dressing. Since our model is trained without a predefined reward function, we can redefine the objective function without retraining the model. We further compare dressing results for various time horizons with MPC and observe emergent behaviors as the prediction horizon increases.The key contribution of this paper is to demonstrate that a deep recurrent model over haptic and kinematic measurements can be used by real robotic systems to predict the physical implications of future actions and lower the forces applied to a person during robot-assisted dressing. We show that this model can be trained in simulation and applied to a real robotic task of pulling a garment onto a person's arm. By combining our learning-based model with MPC, we observe emergent behaviors that result in the robot navigating a garment up a person's entire arm.§ RELATED WORK §.§ Robot-Assisted Dressing and Force Estimation Several robotic dressing approaches have relied on visual systems to estimate a person's pose and the state of a garment. For example, Koganti et al. <cit.> used RGB-D and motion capture data to estimate the topological relationship between a person's body and a garment. Klee et al. <cit.> visually detected a person's pose which was used by a Baxter robot to assist in putting on a hat. Pignat et al. <cit.> tracked a person's hand movement in real time using an AR tag. The researchers then used a Baxter robot to pull one sleeve of a jacket onto a person's arm. 
Unlike this body of work, our approach does not rely on visual observations, but is instead able to fully dress a person's arm using only haptic and kinematic measurements obtained at the robot's end effector.Several researchers have similarly explored haptic sensing within the context of robot-assisted dressing. Gao et al. <cit.> proposed a force feedback control approach that allowed a Baxter robot to assist in dressing a sleeveless jacket. Kapusta et al. <cit.> explored how haptic observations at a robot's end effector can be coupled with an HMM to predict the future outcome of a dressing task. Yamazaki et al. <cit.> described a failure detection approach for robot-assisted dressing that leveraged force data while assisting participants in pulling up pants. Instead, our work demonstrates that haptic sensing and learning can be used to predict the physical implications of a robot's future actions when assisting people. When coupled with MPC, we show that these predictions also enable a robot to replan its actions in real time during robotic assistance.In prior work <cit.> we presented an LSTM model trained in simulation to estimate the forces applied onto a simulated arm and leg during robot-assisted dressing tasks. The estimator we present in this paper uses a similar network architecture except we also provide end effector position and yaw rotation measurements to the model so that our PR2 can navigate around a person's elbow. We pair this estimator with a predictor and evaluate a PR2's ability to predict the physical implications of its actions during assistance. §.§ Model Predictive Control Model predictive control has found success in several robotics domains. Some examples include aerial control vehicles <cit.> and robot locomotion <cit.>.This work has similarity to <cit.>, using haptic information as a model input for control in the manipulation domain.Prior robotics research has used analytical models for MPC  <cit.>, whereas we employ a learning-based model as in <cit.>. Many past works have relied on vision-based approaches for robotic control with MPC.Finn and Levine <cit.> combined a predictive model of image observations with MPC for nonprehensile pushing tasks. Watter et al. <cit.> presented a learning-based control method for non-linear dynamical systems using raw pixel images. Boots et al. <cit.> learned a predictive model that generates RGB-D images of a robot arm moving in free space. In comparison to these vision-based methods, our learned model uses only haptic and kinematic information.Chow et al. <cit.> leveraged haptic observations with MPC to assist in reposition a person's limbs in simulation. Lenz et al. <cit.> learned material properties for cutting various foods with a PR2, but rely on joint torques for haptic feedback, which have a lower dimensionality and accuracy than the 6-DoF discrete force/torque sensor in our system. In addition, Jain et al. <cit.> showed how a robot arm can reach into cluttered spaces using haptic sensing skin.Learning models with neural networks for robot control is common throughout many robotic control approaches <cit.>. Lenz et al. <cit.> used a recurrent model with MPC and demonstrated their approach on a PR2 that learned deep latent material properties by performing 1,488 cuts across 20 foods. Finn and Levine <cit.> combined a deep predictive model of image sequences with MPC and trained their model on 50,000 pushing attempts of objects using 7-DoF manipulators. 
Unlike these approaches, our model is trained entirely in simulation, which presents several benefits for physical human-robot interaction, as discussed in Section <ref>.Fu et al. <cit.> used model-based reinforcement learning in simulation, where a PR2 learned to manipulate rigid objects with MPC. Unlike reinforcement learning, our method does not require a reward function during training, which allows us to decouple the objective function from the learned model.Furthermore, we show that our learning-based model can enable a real PR2 to predict the physical implications of its actions when assisting human participants with dressing.§ SIMULATION AND MODEL TRAININGTo perform deep haptic MPC that considers forces applied to a person, our model consists of two recurrent neural networks trained on a dataset of simulated robot-assisted dressing trials. Here we introduce notation and we provide a brief description of the model, simulation, and data collection process. Our dataset consists of 10,800 dressing trials generated in a simulated robot-assisted dressing environment presented in prior work <cit.>. As shown in Fig. <ref>, this physics-based simulation consists of a robotic end effector that pulls a hospital gown onto a simulated human arm. The colored fields along the arm represent a force map that encompasses a set of force magnitudes applied at specific locations on the body. Several advantages arise from collecting data with a physics-based simulation. First, we can easily parallelize data collection to collect thousands of dressing experiences in a few hours. We can also test anomalous scenarios that may be infeasible or dangerous to test with real people, such as cloth getting caught on a body part. These anomalous conditions could be especially valuable for a robot, so that it can learn to mitigate potentially harmful consequences.Finally, we can calculate the location and magnitude of all forces applied to a person by a clothing garment within simulation, something that is highly challenging in the real world.During data collection, the simulated robotic end effector attempts to pull the sleeve of a hospital gown onto the person's arm. The simulator randomly selects a starting position near the arm and movement velocity for the end effector prior to each trial. During a dressing trial, the simulation iteratively selects a new random action for the robot's end effector at each time step. In doing so, our model learns about diverse situations for a garment to make contact with a person's arm.We represent actions in the fixed coordinate frame of the robot's torso, and actions consist of a 3D velocity for the end effector and a change in yaw rotation around an axis parallel to gravity, i.e. a=(v_x, v_y, v_z, Δψ). The simulation selects new actions at 5 Hz and records sensor measurements at 100 Hz. Measurements x_t = (ρ, v, ψ, f^r, τ^r) ∈ℝ^13, at time t, include the 3D position ρ, 3D velocity v, and yaw rotation ψ of the end effector, and the 3D forces f^r and torques τ^r applied at the robot's end effector by the garment. We record all forces applied to the human's arm, which occur when a vertex on the fabric mesh makes contact with the simulated arm. We construct a force map, as shown in Fig. <ref>, by mapping these applied forces to a discrete set of fixed points (taxels) spaced across the surface of the arm. <cit.> provides further details of this mapping procedure and force map definition. 
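For intuition, here is a minimal sketch of one plausible way to build such a force map, assuming the simplest nearest-taxel assignment. This is our guess at a reasonable implementation; the actual mapping procedure is the one detailed in the cited prior work and may differ (for instance, it might distribute each contact force over several nearby taxels).

import numpy as np

def force_map(contact_pts, contact_forces, taxels):
    # contact_pts: (m, 3) contact locations on the arm
    # contact_forces: (m, 3) force vectors applied at those contacts
    # taxels: (37, 3) fixed taxel positions spaced across the limb surface
    fmap = np.zeros(len(taxels))
    for p, f in zip(contact_pts, contact_forces):
        nearest = np.argmin(np.linalg.norm(taxels - p, axis=1))  # closest taxel
        fmap[nearest] += np.linalg.norm(f)  # assumption: magnitudes accumulated per taxel
    return fmap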
In this work, we use 37 taxels distributed across the fist, forearm, and upper arm. As Yu et al. <cit.> proposed, we used the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) <cit.> to optimize the parameters of our simulator with respect to data collected from a real robotic system that assisted human participants in pulling on a hospital gown. Some of these parameters include garment stretch, stiffness, shear forces, and friction. Because of this optimization, the force and torque measurements in simulation align closely to those observed in the real world. However, the simulated end effector performs exact movements, whereas the motion trajectory of a PR2's end effector often includes noise due to the compliant nature of the arms. To account for this, we added a small amount of uniformly sampled noise, ξ∈[-0.8, 0.8] mm/s, to each component of the end effector's velocity at every time step in the simulation. During model training, this also serves as a form of regularization to help mitigate overfitting to the position and velocity measurements from simulation. We leverage a pair of recurrent networks to predict the forces applied to a person given a sequence of proposed robot actions. We define a predictor G(x_1:t, a_t+1:t+H_p), which predicts a sequence of future end effector haptic measurements, x̂_t+1:t+H_p, that result from the robot executing actions a_t+1:t+H_p over a prediction horizon H_p. We then use an estimator, F(x_1:t, x̂_t+1:t+H_p), that estimates the forces, f_t+H_p, applied to a person at time t+H_p given all prior measurements x_1:t and the predicted measurements x̂_t+1:t+H_p. We can predict future force maps by composing the estimator and predictor, F ∘ G = F(x_1:t, G(x_1:t, a_t+1:t+H_p)), wherein we estimate force maps given predicted haptic measurements. Furthermore, we can make predictions beyond time t+H_p by feeding the predicted measurements x̂_t+1:t+H_p back into G along with an action sequence a_t+H_p+1:t+2H_p. Thus, x̂_t+H_p+1:t+2H_p can be predicted via G({x_1:t,x̂_t+1:t+H_p},a_t+H_p+1:t+2H_p). Although these two networks could be merged, there are several advantages to a split architecture. First, this setup allows for additional flexibility in that a new predictor can be learned without impacting the accuracy of force map estimation, or vice versa. Furthermore, we are able to run these two networks at different frequencies, which is beneficial during real-time use. We run the estimator at 100 Hz as this results in greater accuracy and resolution for force map estimates. However, since we update the robot's action at 5 Hz, we need only make predictions at a 5 Hz rate. In total, the estimation model receives 20 measurements for each step of the prediction model. This difference in frequencies was crucial for the real-time implementation since prediction is a computationally demanding task for each candidate action. This approach also presents several advantages over formulating a reinforcement learning problem and solving for a policy. Our objective function is decoupled from the learned model, thus the objective can be redefined for different dressing tasks without retraining the model. Also, the data we collect for training the estimator can be reused to train the predictor, whereas model-free reinforcement learning methods require further data collection and new sets of rollouts from the evolving policy after training an estimator. As shown in Fig. <ref>, our model uses LSTMs to estimate force maps and predict future measurements.
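As a concrete reference point, the following is a minimal sketch of such an estimator network. PyTorch is our choice here, since the paper does not state which framework was used, and training details (loss, optimizer, sequence batching at 100 Hz) are omitted. The sizes follow the description in the next paragraph: three LSTM layers of 50 recurrent cells and a fully connected linear output over the 37 taxels, with the 13-dimensional end effector measurements as input.

import torch
import torch.nn as nn

class ForceMapEstimator(nn.Module):
    # Three stacked LSTM layers (tanh activations are internal to nn.LSTM)
    # followed by a fully connected linear output layer.
    def __init__(self, in_dim=13, hidden=50, out_dim=37):
        super().__init__()
        self.lstm = nn.LSTM(in_dim, hidden, num_layers=3, batch_first=True)
        self.head = nn.Linear(hidden, out_dim)

    def forward(self, x):              # x: (batch, time, 13) measurement sequences
        h, _ = self.lstm(x)
        return self.head(h[:, -1])     # estimated force map at the last time step

The predictor would use the same stack with action-augmented inputs and a sequence of predicted measurements as output.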
Each recurrent model consists of three LSTM layers with 50 recurrent cells and a tanh activation. The final output layer is fully connected with a linear activation. Fig. <ref> shows this network architecture for our estimator model. The predictor uses the same architecture, but with different input/output. Note that if the robot maintains a constant action throughout the entire prediction horizon, as is the case in our work, a sequence of identical actions, a_t+1:t+H_p, can be collapsed down to a single action, a_t+1. Because of this, our predictor outputs a sequence of measurements x̂_t+1:t+H_p given a single action, a_t+1, and measurement, x_t. We use H_p=20, which aligns with the 5 Hz rate used for predictions when the difference between time steps is 0.01s. Ideally, the predictor would evaluate sequences of actions that vary over time. However, our experiments showed that using the same action over the entire prediction horizon was computationally tractable and worked well in practice for physical human-robot interaction. Related literature has also found a 5 Hz action replanning rate to be computationally feasible for MPC on real robots <cit.>.

§ MODEL PREDICTIVE CONTROL Our system uses model predictive control (MPC) with our recurrent estimator and predictor to choose actions that minimize the predicted force applied to a person during physical assistance. Here we present the cost function that we used to encourage certain robot actions and we describe ways in which this function could be adapted to allow for personalized robotic assistance. In addition, we present our MPC method for replanning actions, which involves predicting applied forces for a set of candidate robot actions. We define a cost function leading to lower forces applied to a person during dressing assistance. The cost function input includes the current and prior measurements, x_1:t, and a sequence of candidate actions, a_t+1:t+H_p. In addition to penalizing large forces applied on the person's body, the cost function encourages forward-moving end effector actions and penalizes yaw rotations, represented by three weighted terms: J(x_1:t,a_t+1:t+H_p) = w_1‖ F(x_1:t, G(x_1:t, a_t+1:t+H_p))‖^2_1 - w_2 ∑_j=t+1^t+H_p d̅·a_j,v + w_3 ∑_j=t+1^t+H_p |a_j,ψ|, where a_j=(v_x, v_y, v_z, Δψ) represents a candidate action, a_j,v represents the 3-axis velocity components of the action, a_j,ψ represents the yaw rotation component of the action, d̅=(1, 0, 0) depicts a forward-moving action, and w_1, w_2, w_3 are constant weights set based on the importance of making task progress versus keeping forces low. The first term, ‖ F(x_1:t, G(x_1:t, a_t+1:t+H_p))‖_1, represents the L_1 norm of all predicted forces, f_t+H_p, at the 37 taxels along a person's arm at time t+H_p. We square this term to reduce the influence of small forces that occur at the beginning of dressing. This is supported by the notion that small forces are unlikely to cause issues during assistance <cit.>. However, as more force is applied to a person's arm, this term becomes the dominating factor for selecting which action the robot will execute. For various applications, this L_1 norm term may also be modified to focus on certain body joints, e.g. only minimizing force around the hand and wrist, rather than the entire arm. The second term, d̅·a_j,v, rewards actions that move in a forward direction along the +X global coordinate axis, or approximately the central axis of a person's forearm, as shown in Fig. <ref>.
The last term, |a_j,ψ|, penalizes actions that perform a yaw rotation. Without these last two terms, the optimal action to minimize cost is sometimes an action that performs no movement. Depending on the task, the terms in Equation (<ref>) may also be combined via a nonlinear function to support a variety of complex behaviors. From our experiments described in Section <ref>, we observe that this simple cost function can lead to emergent behaviors in which the robot can navigate a garment up a person's entire arm. We update the robot's action by selecting the sequence of actions that minimizes Equation (<ref>). This can be denoted as a^*_t+1:t+H_p = argmin_a_t+1:t+H_p J(x_1:t, a_t+1:t+H_p). Algorithm <ref> presents our procedure for updating the robot's actions during robot-assisted dressing. At each time step t, we observe sensor measurements x_t. Every τ_p time steps, our controller chooses the actions a_t+1:t+H_p^* that minimize the cost function, based on a set of N candidate action sequences, {a_t+1:t+H_p^(n)}. In this work, we use H_p=τ_p=20 and we initialize a fixed set of N=28 actions whose velocities lie within a hemisphere facing the +X global coordinate axis. Computing the cost for each action sequence involves predicting a sequence of future end effector measurements x̂_t+1:t+H_p and feeding these measurements into the estimator, F, to estimate the force map at time t+H_p. We terminated a trial when the magnitude of forces measured at the robot's end effector exceeded 10 N, or the robot's arm reached its joint limits, which can occur when the arm fully extends to pull a garment onto a participant's shoulder. This predictive control approach runs in real time on a PR2, using only the robot's on-board CPUs, and both our estimator and predictor can make predictions at ∼2 kHz. One limitation is that our model is constrained to relatively short horizon tasks. Notably, our system performed well even with short horizon planning. Computation time limits both the action replanning rate and the prediction horizon, yet our work leaves significant room for future improvements with GPUs, greater parallelization, and off-board computation. Additionally, in this work, we evaluate our model's predictive capabilities, so we restrict our controller from selecting actions that move "backwards". Future implementations could relax this for more freedom when replanning a trajectory.

§ EVALUATION We conducted experiments with 10 participants (2 female, 8 male) with approval from the Georgia Institute of Technology Institutional Review Board (IRB), and obtained informed consent from all participants. We recruited able-bodied participants to meet the following inclusion/exclusion criteria: ≥18 years of age; have not been diagnosed with ALS or other forms of motor impairments; fluent in written and spoken English. Their ages ranged from 18 to 30 years. A video of our experiments can be found online[Video: <http://healthcare-robotics.com/haptic-mpc>]. We evaluated our model predictive control approach on two robot-assisted dressing scenarios that involve pulling a hospital gown onto a participant's arm: (1) Full arm dressing: the robot must rotate its end effector to navigate around a participant's elbow and pull the garment onto the person's shoulder, as shown in Fig. <ref>. (2) Circumvent a catch: the robot must predict that the garment will soon get caught on a person's fist, as seen in Fig. <ref>, and lower its end effector to avoid the catch.
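Before turning to the results, here is a minimal sketch of the action-selection step described above; it is our illustrative reconstruction, not the authors' code. The callables predictor and estimator stand in for the trained recurrent networks G and F, each candidate a=(v_x, v_y, v_z, Δψ) is held constant over the horizon (so the per-step sums in the last two cost terms reduce to a common multiple of H_p, folded into the weights), and the default weights are the values used in the experiments, as reported below.

import numpy as np

def select_action(history, candidates, predictor, estimator,
                  w1=0.5, w2=20.0, w3=0.5):
    d_bar = np.array([1.0, 0.0, 0.0])           # forward direction along +X
    best_a, best_cost = None, np.inf
    for a in candidates:                        # the fixed set of N=28 actions
        x_hat = predictor(history, a)           # predicted end effector measurements
        f = estimator(history, x_hat)           # predicted force map at t + H_p
        cost = (w1 * np.sum(np.abs(f)) ** 2     # squared L1 norm of predicted forces
                - w2 * np.dot(d_bar, a[:3])     # reward forward-moving actions
                + w3 * abs(a[3]))               # penalize yaw rotation
        if cost < best_cost:
            best_a, best_cost = a, cost
    return best_a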
The robot performed 24 dressing trials per scenario, for a total of 48 trials per participant. We randomized the dressing scenarios and prediction horizons across all 48 trials. We updated the robot's actions at 5 Hz via Equation (<ref>). We selected w_1=0.5, w_2=20, and w_3=0.5 for our cost function presented in Equation (<ref>) as this empirically provided a balance between making task progress and keeping applied forces low.For each scenario, we tested our method using three different prediction horizons, with 8 trials per horizon: 0.01s, 0.05s, and 0.2s. By testing multiple horizons, we show that a robot can better perform assistive tasks when it can predict the physical implications of its own actions. Note that changing the prediction horizon does not require model retraining since our model is capable of recursively predicting further into the future, which we discussed in Section <ref>.We used a Willow Garage PR2 robot to dress participants. The robot performed actions using the Orocos Kinematics and Dynamics Library[Orocos KDL: <http://www.orocos.org/kdl>], which provided joint-level input to the PR2's low-level PID controllers. For participant safety, the PR2's arms were compliant and we set low PID gains for all arm joints. We zero out all forces and torques on the ATI force/torque sensor prior to a trial to account for the garment's weight. Additionally, we ran a force threshold monitor that halted all robot movement if forces measured at the robot's end effector exceeded 10 N. All computations to predict force maps for MPC were performed in real time on the robot's on-board CPUs.Participants sat on a conventional folding chair and we instructed them to hold a specified static posture during each trial, shown in Fig. <ref>, and described below:* Right arm bent 90 degrees at the elbow* Upper arm and forearm parallel with the ground* Fingers curled into a fist, knuckles vertically aligned* Thumb folded inwards over the fingersWe set the initial robot configuration to hold the gown 15 cm in front of the participant's fist with the forearm direction normal to the opening in the gown. All participants started each trial seated comfortably while holding his or her arm in the specified posture. To promote consistency of arm position for appropriately comparing results with different prediction horizons, we used a commercial grade FDA approved laser pointer that pointed at the desired location for the participant's metacarpal-phalangeal joint—the base of the participant's thumb. We placed the laser on an adjustable height table to the left of the participant, facing the robot and orthogonal to the person's forearm, and we aligned the laser according to the participant’s height and posture.We evaluated this work with participants who held a fixed arm pose, yet it may be preferable for a participant to hold their arm in different poses. We note that predicting the future forces applied to a person at varying poses remains an open problem and a limitation of our current work. Allowing the robot to estimate a person's pose prior to dressing, as seen in other works <cit.>, may help alleviate this issue.§.§ Full Arm Dressing For half of the dressing trials, we evaluated the robot's ability to navigate around the elbow and pull the garment entirely up the participant's arm. 
We were interested in what actions emerged when the controller's primary focus was to minimize the predicted forces applied to a participant's arm.During a dressing trial, the robot selected actions that minimized the cost from Equation (<ref>). Each trial began with the PR2 holding the top of the gown opening 10 cm above the top of a participant's fist. We marked the end of a dressing trial whenever the magnitude of forces measured at the end effector exceeded 10 N, or the robot's arm reached its joint limits. For the full arm dressing trials, we classified a trial as successful if the trial completed without reaching the force threshold and the inner seam on the sleeve, defined by where the sleeve is sewn onto the main body of the gown, had passed the participant's elbow. Fig. <ref> shows a successful sequence of this dressing scenario when the robot used our MPC method with a prediction horizon of 0.2s (20 time steps). Note that once the robot's end effector reaches a person's elbow, the robot can continue to minimize applied forces by performing a yaw rotation to navigate around the elbow and begin moving along the upper arm. This results in the robot pulling the garment entirely up a person's arm.In Fig. <ref>, we display outcomes of dressing trials for the three prediction horizons. For a horizon of 0.01s, the predicted force maps across candidate actions are nearly identical. Because of this, the robot was unable to find an action that significantly lowered applied forces and instead continued to pull the garment into a person's elbow until the 10 N threshold was reached. In contrast, both the 0.05s and 0.2s horizons led to the robot rotating its end effector and pulling the garment up to a participant's shoulder, successfully navigating around the person's elbow. Both Fig. <ref> and the supplementary video show this procedure in detail. Fig. <ref> shows a top-down view of the end effector path for each prediction horizon, averaged across all 10 participants. A horizon of 0.2s led to the robot rotating and moving along the upper arm sooner than for a horizon of 0.05s, yet both led to actions that fully dressed a person's arm. The task success rates for each prediction horizon can be found in Table <ref>. These success rates are averaged over 80 trials for each scenario. Fig. <ref> displays the magnitude of the force measured at the robot's end effector across trials for all 10 participants. For a 0.01s horizon, we again notice that the robot continues to apply more force on a person's elbow until it reaches the 10 N threshold. When contact occurs between the garment and a person's body, our control approach can use haptic and kinematic observations to dress a person by primarily minimizing predicted forces. Yet, a limitation of this purely haptic and kinematic-based approach is that the controller is provided with no information about a person's initial pose. As a result, the robot would be unable to recognize or replan actions if the garment were to entirely miss a person's body. Future work could address this by incorporating other modalities, such as vision-based techniques, to estimate a person's pose before or during dressing <cit.>.§.§ Circumvent a Catch In this section, we evaluate our model's ability to predict that a garment will get caught and apply large force onto a participant's fist. During these trials, we also evaluated how well our MPC approach selected actions that properly averted the catch in order to reduce predicted forces. 
We adjusted the starting height of the robot's end effector according to each participant's arm height. Specifically, we aligned the end effector so that the bottom seam of the sleeve would get caught in the middle of a participant's fist when the robot followed a forward linear trajectory. A dressing trial ended whenever the end effector forces exceeded 10 N, or the end effector reached the participant's elbow, along the X-axis. A trial was successful if the end effector reached the elbow along the X-axis without exceeding the force threshold. Fig. <ref> presents a sequence of images for a successful trial with a 0.2s horizon in which the robot's end effector would drop down closer to a participant's forearm to bypass the catch. Notice that the robot could also choose to lift its end effector to avoid the catch. The robot may not have chosen to lift up over the hand due to the forces that occur when the entire garment drags across a person's fist. Fig. <ref> shows example outcomes of dressing trials for each of the three prediction horizons. A horizon of 0.01s consistently led to the garment getting caught on a person's fist for 93.75% of the trials, as shown in Table <ref>. A prediction horizon of 0.05s also failed to avoid the catch for most trials. Finally, Fig. <ref> shows a side view of the end effector path for each prediction horizon, averaged across all 10 participants. As shown, the horizon length impacts how soon our approach detects the catch and replans. The controller attempted to move the end effector downwards to avert the catch for all three prediction horizons. However, timing is crucial and only the 0.2s horizon allowed our method to detect the catch soon enough to consistently avoid it. Overall, these results suggest that our approach can enable a robot to predict and react to the forces a garment will exert onto a person during robot-assisted dressing. With a prediction horizon of 0.2s, our model predictive controller is able to fully dress a person's arm in clothing and mitigate the chance of a garment getting caught on a person's body.

§ CONCLUSION In this work, we presented a learning-based MPC approach that allows a robot to predict the physical implications of its actions and reduce applied force to a person during robot-assisted dressing. We trained a recurrent model on data collected in a self-supervised setting from a physics-based dressing simulation. Unlike prior robot control approaches that use vision-based techniques, our model is able to predict the forces applied to a person's body using only haptic and kinematic measurements from a robot's end effector. Our model is trained via purely supervised learning, which allows us to define a cost function for MPC after training. This cost function enables a robot to prioritize actions that minimize the predicted force applied to a person's body during physical assistance. Note that this cost function could be changed for different tasks or to allow for personalization, without needing to retrain the model. For a person with a weak or injured wrist, a new function might be defined that primarily focuses on reducing forces applied to the person's hand or wrist. When coupled with state estimation <cit.>, it may be possible to define dynamic cost functions that change depending on the current state of a task. We evaluated our method with a PR2 that pulled the sleeve of a hospital gown onto the arms of 10 human participants.
Our approach enables a robot to predict and react to the forces a garment will exert onto a person during robot-assisted dressing. Our approach also runs in real time on a PR2, using only the robot's on-board CPUs, yet computation time may be a limiting factor for tasks that require faster action replanning rates or longer prediction horizons. From our experiments, we observed emergent behaviors during dressing as we increased the prediction horizon for MPC. With a horizon of 0.2s, our predictive controller was able to fully dress a person's arm in clothing and mitigate the chance of the garment getting caught on the person's body.

§ ACKNOWLEDGMENT This work was supported by NSF award IIS-1514258 and AWS Cloud Credits for Research. Dr. Kemp is a cofounder, a board member, an equity holder, and the CTO of Hello Robot, Inc., which is developing products related to this research. This research could affect his personal financial status. The terms of this arrangement have been reviewed and approved by Georgia Tech in accordance with its conflict of interest policies.

finn2017deep C. Finn and S. Levine, “Deep visual foresight for planning robot motion,” in ICRA, 2017, pp. 2786–2793. jain2013reaching A. Jain, M. D. Killpack, A. Edsinger, and C. C. Kemp, “Reaching in Clutter with Whole-Arm Tactile Sensing,” The International Journal of Robotics Research, vol. 32, no. 4, pp. 458–482, 2013. lenz2015learning I. Lenz, R. A. Knepper, and A. Saxena, “DeepMPC: Learning deep latent features for model predictive control,” RSS, 2015. koganti2017bayesian N. Koganti, T. Tamei, K. Ikeda, and T. Shibata, “Bayesian nonparametric learning of cloth models for real-time state estimation,” IEEE Transactions on Robotics, 2017. klee2015personalized S. D. Klee, B. Q. Ferreira, R. Silva, J. P. Costeira, F. S. Melo, and M. Veloso, “Personalized assistance for dressing users,” in International Conference on Social Robotics. Springer, 2015, pp. 359–369. pignat2017learning E. Pignat and S. Calinon, “Learning adaptive dressing assistance from human demonstration,” RAS, vol. 93, pp. 61–75, 2017. gao2016iterative Y. Gao, H. J. Chang, and Y. Demiris, “Iterative path optimisation for personalised dressing assistance using vision and force information,” in IROS, 2016, pp. 4398–4403. kapustadata A. Kapusta, W. Yu, T. Bhattacharjee, C. K. Liu, G. Turk, and C. C. Kemp, “Data-driven haptic perception for robot-assisted dressing,” in RO-MAN, 2016. yamazaki2014bottom K. Yamazaki, R. Oya, K. Nagahama, K. Okada, and M. Inaba, “Bottom dressing by a life-sized humanoid robot provided failure detection and recovery functions,” in SII, 2014, pp. 564–570. erickson2017does Z. Erickson, A. Clegg, W. Yu, C. Liu, G. Turk, and C. C. Kemp, “What does the person feel? Learning to infer applied forces during robot-assisted dressing,” in ICRA, 2017. abbeel2010autonomous P. Abbeel, A. Coates, and A. Y. Ng, “Autonomous helicopter aerobatics through apprenticeship learning,” The International Journal of Robotics Research, vol. 29, no. 13, pp. 1608–1639, 2010. bellingham2002receding J. Bellingham, A. Richards, and J. How, “Receding horizon control of autonomous aerial vehicles,” in American Control Conference, 2002. erez2012infinite T. Erez, Y. Tassa, and E. Todorov, “Infinite-horizon model predictive control for periodic tasks with contacts,” RSS, vol. 73, 2012. wieber2006trajectory P.
Wieber, “Trajectory free linear model predictive control for stable walking in the presence of strong perturbations,” in IEEE-RAS International Conference on Humanoid Robots, 2006, pp. 137–142.chow2016robotic K. Chow and C. C. Kemp, “Robotic repositioning of human limbs via model predictive control.” in RO-MAN, 2016, pp. 473–480.dominici2014model M. Dominici and R. Cortesao, “Model predictive control architectures with force feedback for robotic-assisted beating heart surgery.” in ICRA, 2014, pp. 2276–2282.duchaine2007computationally V. Duchaine, S. Bouchard, and C. Gosselin, “Computationally efficient predictive robot control,” IEEE/ASME Transactions on Mechatronics, vol. 12, no. 5, pp. 570–578, 2007.fu2016one J. Fu, S. Levine, and P. Abbeel, “One-shot learning of manipulation skills with online dynamics adaptation and neural network priors,” in IROS, 2016, pp. 4019–4026.nguyen2011model D. Nguyen-Tuong and J. Peters, “Model learning for robot control: a survey.” Cognitive Processing, vol. 12, no. 4, pp. 319–340, 2011.watter2015embed M. Watter, J. Springenberg, J. Boedecker, and M. Riedmiller, “Embed to control: A locally linear latent dynamics model for control from raw images,” in NIPS, 2015, pp. 2746–2754.boots2014learning B. Boots, A. Byravan, and D. Fox, “Learning predictive models of a depth camera & manipulator from raw execution traces,” in ICRA, 2014, pp. 4021–4028.yu2017haptic W. Yu, A. Kapusta, J. Tan, C. C. Kemp, G. Turk, and C. K. Liu, “Haptic data simulation for robot-assisted dressing,” in IRCA, 2017.hansen2016cma N. Hansen, “The CMA evolution strategy: A tutorial,” Technische Universitat Berlin, TU Berlin, 2016.chance2016assistive G. Chance, A. Camilleri, B. Winstone, P. Caleb-Solly, and S. Dogramadzi, “An assistive robot to support dressing-strategies for planning and error handling,” in BioRob.1em plus 0.5em minus 0.4emIEEE, 2016, pp. 774–780.yamazaki2013method K. Yamazaki, R. Oya, K. Nagahama, and M. Inaba, “A method of state recognition of dressing clothes based on dynamic state matching,” in SII, 2013, pp. 406–411.jimenez2017visual P. Jiménez, “Visual grasp point localization, classification and state recognition in robotic manipulation of cloth: An overview,” RAS, vol. 92, pp. 107–125, 2017.
Non-cocompact Group Actions and π_1-Semistability at Infinity

Ross Geoghegan, Craig Guilbault[This research was supported in part by Simons Foundation Grants 207264 & 427244, CRG] and Michael Mihalik

December 30, 2023
================================================================================================================================================

A finitely presented 1-ended group G has semistable fundamental group at infinity if G acts geometrically on a simply connected and locally compact ANR Y having the property that any two proper rays in Y are properly homotopic. This property of Y captures a notion of connectivity at infinity stronger than “1-ended”, and is in fact a feature of G, being independent of choices. It is a fundamental property in the homotopical study of finitely presented groups. While many important classes of groups have been shown to have semistable fundamental group at infinity, the question of whether every G has this property has been a recognized open question for nearly forty years. In this paper we attack the problem by considering a proper but non-cocompact action of a group J on such a Y. This J would typically be a subgroup of infinite index in the geometrically acting over-group G; for example J might be infinite cyclic or some other subgroup whose semistability properties are known. We divide the semistability property of G into a J-part and a “perpendicular to J” part, and we analyze how these two parts fit together. Among other things, this analysis leads to a proof (in a companion paper <cit.>) that a class of groups previously considered to be likely counterexamples do in fact have the semistability property.

§ INTRODUCTION

In this paper we consider a new approach to the semistability problem for finitely presented groups. This is a problem at the intersection of group theory and topology. It has been solved for many classes of finitely presented groups, for example <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.> - but not in general. We begin by stating:

The Problem. Consider a finitely presented infinite group G acting cocompactly by cell-permuting covering transformations on a 1-ended, simply connected, locally finite CW complex Y. Pick an expanding sequence {C_n} of compact subsets with int C_n⊆ C_n+1 and ∪ C_n=Y, then choose a proper “base ray” ω:[0,∞)→ Y with the property that ω([n,n+1]) lies in Y-C_n. Consider the inverse sequence

π_1(Y-C_0,ω(0)) λ_1⟵ π_1(Y-C_1,ω(1)) λ_2⟵ π_1(Y-C_2,ω(2)) λ_3⟵ ⋯

where the λ_i are defined using subsegments of ω. The Problem is: EITHER to prove that this inverse sequence is always semistable, i.e. is pro-isomorphic to a sequence with epimorphic bonding maps, OR to find a group G for which that statement is false. This problem is known to be independent of the choice of Y, {C_n}, and ω, and it is equivalent to some more geometrical versions of semistability which we now recall.

A 1-ended, locally finite CW complex Y, with proper base ray ω, has semistable fundamental group at ∞ if any of the following equivalent conditions holds:
* Sequence (<ref>) is pro-isomorphic to an inverse sequence of surjections.
* Given n there exists m such that, for any q, any loop in Y-C_m based at a point ω(t) can be homotoped in Y-C_n, with base point traveling along ω, to a loop in Y-C_q.
* Any two proper rays in Y are properly homotopic.

Just as a basepoint is needed to define the fundamental group of a space, a base ray is needed to define the fundamental pro-group at ∞.
And just as a path between two basepoints defines an isomorphism between the two fundamental groups, a proper homotopy between two base rays defines a pro-isomorphism between the two fundamental pro-groups at ∞. In the absence of such a proper homotopy it can happen that the two pro-groups are not pro-isomorphic (see <cit.>, Example 16.2.4). Thus, in the case of G acting cocompactly by covering transformations as above, semistability is necessary and sufficient for the “fundamental pro-group at infinity of G” to be well-defined up to pro-isomorphism.

The approach presented here. In its simplest form our approach is to restrict attention to the sub-action on Y of an infinite finitely generated subgroup J having infinite index in G. We separate the topology of Y at infinity into “the J-directions” and “the directions in Y orthogonal to J”, with the main result being that having appropriate analogs of semistability in the two directions implies that Y has semistable fundamental group at ∞.

For the purposes of an introduction, we first describe a special case of the Main Theorem and give a few examples. A more far-reaching, but more technical, version of the Main Theorem is given in Section <ref>. Suppose J is a finitely generated group acting by cell-permuting covering transformations on a 1-ended locally finite and simply connected CW complex Y. Let Γ(J,J^0) be the Cayley graph of J with respect to a finite generating set J^0 and let m:Γ→ Y be a J-equivariant map. Then:

a) J is semistable at infinity in Y if for any compact set C⊆ Y there is a compact set D⊆ Y such that if r and s are two proper rays based at the same point in Γ(J,J^0)-m^-1(D) then mr and ms are properly homotopic in Y-C relative to mr(0)=ms(0). Standard methods show that the above property does not depend on the choice of finite generating set J^0.

b) J is co-semistable at infinity in Y if for any compact set C⊆ Y there is a compact set D⊆ Y such that for any proper ray r in Y-J· D and any loop α based at r(0) whose image lies in Y-D, α can be pushed to infinity in Y-C by a proper homotopy with the base point tracking r.

If J is both semistable at infinity in Y and co-semistable at infinity in Y, then Y has semistable fundamental group at infinity.

* To our knowledge, the theorems proved here are the first non-obvious results that imply semistable fundamental group at ∞ for a space Y which might not admit a cocompact action by covering transformations.
* In the special case where J is an infinite cyclic group, condition (a) above is always satisfied since Γ(J,J^0) can be chosen to be homeomorphic to ℝ; any two proper rays in ℝ which begin at the same point and lie outside a nonempty compact subset of ℝ are properly homotopic in their own images. Moreover, since condition (b) is implied by the main hypothesis of <cit.> (via <cit.> or <cit.>), Theorem <ref> implies the main theorem of <cit.>.
* The converse of Theorem <ref> is trivial. If Y is semistable at infinity and J is any finitely generated group acting as covering transformations on Y, it follows directly from the definitions that J is both semistable at infinity in Y and co-semistable at infinity in Y. So, our theorem effectively reduces checking the semistability of the fundamental group at infinity of a space to separately checking two strictly weaker conditions.
* In our more general version of Theorem <ref> (not yet stated), the group J will be permitted to vary for different choices of compact set C.
No over-group containing these various groups is needed unless we want to extend our results to locally compact ANRs. That issue is discussed in Corollary <ref>.

Some examples. We now give four illuminating examples. Admittedly, the conclusion of Theorem <ref> is known by previous methods in the first three of these, but they are included because they nicely illustrate how the semistability and co-semistability hypotheses lead to the semistability conclusion of the Theorem. Moreover, an understanding of these examples helps to motivate later proofs. In the case of the fourth example the conclusion was not previously known.

Let G be the Baumslag-Solitar group B(1,2)=⟨ a,t | t^-1at=a^2⟩ acting by covering transformations on Y=T×ℝ, where T is the Bass-Serre tree corresponding to the standard graph of groups representation of G, and let J=⟨ a⟩≅ℤ. Then J is semistable at infinity in Y for the reasons described in Remark <ref>(<ref>) above. To see that J is co-semistable at infinity in Y, choose D⊆ Y to be of the form T_0×[-n,n], where n≥1 and T_0 is a finite subtree containing the “origin” 0 of T. Then each component of Y-J· D is simply connected (it is a subtree crossed with ℝ). So pushing α to infinity along r can be accomplished by first contracting α to its basepoint, then sliding that basepoint along r to infinity.

[Figure 1]

Let J=⟨ a,b |⟩ be the fundamental group of a punctured torus of constant curvature -1 and consider the corresponding action of J on Y=ℍ^2. Figure 1 shows ℍ^2 with an embedded tree representing the image of a well-chosen m:Γ(J,{a,b})→ℍ^2. The shaded region represents a typical J· D for a carefully chosen compact D⊆ℍ^2, which is represented by the darker shading. The components of ℍ^2-J· D are open horoballs. Notice that two proper rays in Γ(J,{a,b})-m^-1(D), which begin at the same point, are not necessarily properly homotopic in Γ(J,{a,b})-m^-1(D), but their images are properly homotopic in ℍ^2-D; so J is semistable at infinity in ℍ^2. Moreover, since each component of ℍ^2-J· D is simply connected, J is co-semistable at infinity in ℍ^2 for the same reason as in Example <ref>.

Let K⊆ S^3 be a figure-eight knot; endow S^3-K with a hyperbolic metric; and consider the corresponding proper action of the knot group J on the universal cover of S^3-K, which is ℍ^3. Much like the previous example, there exists a nice geometric embedding of a Cayley graph of J into ℍ^3 and choices of compact D⊆ℍ^3 so that ℍ^3-J· D is an infinite collection of (3-dimensional) open horoballs. Since J itself is known to be 1-ended with semistable fundamental group at infinity (a useful case to keep in mind), the first condition of Theorem <ref> is immediate. And again, co-semistability at infinity follows from the simple connectivity of the horoballs.

For many years an outstanding class of finitely presented groups not known to be semistable at ∞ has been the class of finitely presented ascending HNN extensions whose base groups are finitely generated but not finitely presented[The case of finitely presented base group was settled long ago in <cit.>.]. While Theorem <ref> does not establish semistability for this whole class, it does so for a significant subclass: those of “finite depth”. This new result is established in <cit.>, a paper which makes use of the more technical Main Theorem <ref> proved here. In particular, allowing the group J to vary (see Remark <ref>(<ref>)) is important in this example.

Outline of the paper. The paper is organized as follows.
We consider 1-ended simply connected locally finite CW complexes Y, and groups J that act on Y as covering transformations. In <ref> we review a number of equivalent definitions for a space and group to have semistable fundamental group at ∞. In <ref> we state our Main Theorem <ref> in full generality and formally introduce the two somewhat orthogonal notions in the hypotheses of Theorem <ref>. The first is that of a finitely generated group J being semistable at ∞ in Y with respect to a compact set C, and the second defines what it means for J to be co-semistable at ∞ in Y with respect to C. In <ref> we give a geometrical outline and overview of the proof of the main theorem. In <ref> we prove a number of foundational results. Suppose C is a compact subset of Y and J is a finitely generated group acting as covering transformations on Y. Define J· C to be ∪_j∈ Jj(C). We consider components U of Y-J· C such that the image of U in J\ Y is not contained in a compact set. We call such U, J-unbounded. We show there are only finitely many J-unbounded components of Y-J· C, up to translation in J, and that the J-stabilizer of a J-unbounded component is an infinite group. In <ref> we use van Kampen's Theorem to show that for a finite subcomplex C of Y, the J-stabilizer of a J-unbounded component of Y-J· C is a finitely generated group. A bijection between the ends of the stabilizer of a J-unbounded component of Y-J· C and the “J-bounded ends” of that component is produced in <ref>. The constants that arise in our bijection are shown to be J-equivariant. In <ref> we prove our main theorem. A generalization of our main theorem from CW complexes to absolute neighborhood retracts is proved in <ref>.

§ EQUIVALENT DEFINITIONS OF SEMISTABILITY

Some equivalent forms of semistability have been stated in the Introduction. It will be convenient to have the following:

(see Theorem 3.2 of <cit.>) With Y as before, the following are equivalent:
* Y has semistable fundamental group at ∞.
* Let r:[0,∞)→ Y be a proper base ray. Then for any compact set C there is a compact set D such that for any third compact set E and loop α based at r(0) whose image lies in Y-D, α is homotopic to a loop in Y-E, by a homotopy with image in Y-C, where α tracks r.
* For any compact set C there is a compact set D such that if r and s are proper rays based at v and with image in Y-D, then r and s are properly homotopic rel{v} by a proper homotopy supported in Y-C.
* If C is compact in Y there is a compact set D in Y such that for any third compact set E and proper rays r and s based at a vertex v and with image in Y-D, there is a path α in Y-E connecting points of r and s such that the loop determined by α and the initial segments of r and s is homotopically trivial in Y-C.

That the first three conditions are equivalent is shown in Theorem 3.2 of <cit.>. Condition 4 is clearly equivalent to the more standard Condition 3.

§ THE MAIN THEOREM AND ITS DEFINITIONS

We are now ready to state our main theorem in its general form. After doing so, we will provide a detailed discussion of the definitions that go into that theorem. Both the theorem and the definitions generalize those found in the introduction.

Let Y be a 1-ended simply connected locally finite CW complex. Assume that for each compact subset C_0 of Y there is a finitely generated group J acting as cell preserving covering transformations on Y, so that (a) J is semistable at ∞ in Y with respect to C_0, and (b) J is co-semistable at ∞ in Y with respect to C_0.
Then Y has semistable fundamental group at ∞.

If there is a group G (not necessarily finitely generated) acting as covering transformations on Y such that each of the groups J of Theorem <ref> is isomorphic to a subgroup of G, then the condition that Y is a locally finite CW complex can be relaxed to: Y is a locally compact absolute neighborhood retract (ANR) (see Corollary <ref>).

The distance between vertices of a CW complex will always be the number of edges in a shortest edge path connecting them. The space Y is a 1-ended simply connected locally finite CW complex, and for each compact subset C_0 of Y, J(C_0) is an infinite finitely generated group acting as covering transformations on Y and preserving some locally finite cell structure on Y. Fix ∗ a base vertex in Y. Let J^0 be a finite generating set for J and Λ(J,J^0) be the Cayley graph of J with respect to J^0. Let z_(J,J^0):(Λ(J,J^0),1)→(Y,∗) be a J-equivariant map so that each edge of Λ is mapped to an edge path of length ≤ K(J^0). If r is an edge path in Λ, then z(r) is called a Λ-path in Y. The vertices J∗ are called J-vertices.

If C_0 is a compact subset of Y then the group J is semistable at ∞ in Y with respect to C_0 if there exists a compact set C in Y and some (equivalently any) finite generating set J^0 for J such that for any third compact set D and proper edge path rays r and s in Λ(J,J^0) which are based at the same vertex v and are such that z(r) and z(s) have image in Y-C, then there is a path δ in Y-D connecting z(r) and z(s) such that the loop determined by δ and the initial segments of z(r) and z(s) is homotopically trivial in Y-C_0 (compare to Theorem <ref>(4)).

Note that this definition requires less than a definition requiring that z(r) and z(s) be properly homotopic rel{z(v)} in Y-C_0 (compare to Theorem <ref>(3)). It may be that the path δ is not homotopic to a path in the image of z by a homotopy in Y-C_0. This definition is independent of generating set J^0 and base point ∗ by a standard argument, although C may change as J^0, ∗ and z do. When J is semistable at infinity in Y with respect to C_0, we may say J is semistable at ∞ in Y with respect to J^0, C_0, C and z. Observe that if Ĉ is a compact set containing C then J is also semistable at ∞ in Y with respect to J^0, C_0, Ĉ and z.

If J is 1-ended and semistable at ∞, or 2-ended, then J is always semistable at ∞ in Y with respect to any compact subset C_0 of Y. The semistability of the fundamental group at ∞ of a locally finite CW complex only depends on the 2-skeleton of the complex (see, for example, Lemma 3 of <cit.>). Similarly, the semistability at ∞ of a group in a CW complex only depends on the 2-skeleton of the complex.

The notion of J being co-semistable at infinity in a space Y is a bit technical, but has its roots in a simple idea that is fundamental to the main theorems of <cit.> and <cit.>. In both of these papers J is an infinite cyclic group acting as covering transformations on a 1-ended simply connected space Y with pro-monomorphic fundamental group at ∞. Wright <cit.> showed that under these conditions the following could be proved:

(∗) Given any compact set C_0⊂ Y there is a compact set C⊂ Y such that any loop in Y-J· C is homotopically trivial in Y-C_0.

Condition (∗) is all that is needed in <cit.> and <cit.> in order to prove the main theorems. In <cit.> condition (∗) is used to show Y is proper 2-equivalent to T×ℝ (where T is a tree).
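To make the pushing hidden in (∗)-type conditions concrete, here is a sketch of the standard contract-then-slide argument already used in Example <ref> (an illustration only, not part of the formal development). Suppose α is a loop based at r(0) for a proper ray r, and suppose α is homotopically trivial in Y-C_0, say via H_0:[0,1]×[0,1]→ Y-C_0 with H_0(t,0)=α(t) and H_0(t,1)=H_0(0,s)=H_0(1,s)=r(0). Then for any n,

H(t,s) = H_0(t,2s) for s∈[0,1/2] and H(t,s) = r((2s-1)n) for s∈[1/2,1]

defines a homotopy in Y-C_0 from α to the constant loop at r(n), with base point traveling along r (after reparametrization). Choosing n large pushes α outside any given compact set D.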
Interestingly, there are many examples of finitely presented groups G (and spaces) with infinite cyclic subgroups satisfying (∗) but the fundamental group at ∞ of G is not pro-monomorphic (see <cit.>). In fact, if G has pro-monomorphic fundamental group at ∞, then either G is simply connected at ∞ or (by a result of B. Bowditch <cit.>) G is virtually a closed surface group and π_1^∞(G)=ℤ.

Our co-semistability definition generalizes the conditions of (∗) in two fundamental ways, and our main theorem still concludes that Y has semistable fundamental group at ∞ (just as in the main theorem of <cit.>).

1) First we expand J from an infinite cyclic group to an arbitrary finitely generated group, and we allow J to change as compact subsets of Y become larger.

2) We weaken the requirement that loops in Y-J· C be trivial in Y-C_0 to only requiring that loops in Y-J· C can be “pushed” arbitrarily far out in Y-C_0.

We are now ready to set up our co-semistability definition. A subset S of Y is bounded in Y if S is contained in a compact subset of Y. Otherwise S is unbounded in Y. Fix an infinite finitely generated group J acting as covering transformations on Y and a finite generating set J^0 of J. Assume J respects a cell structure on Y. Let p:Y→ J\ Y be the quotient map. If K is a subset of Y, and there is a compact subset D of Y such that K⊂ J· D (equivalently, p(K) has image in a compact set), then K is a J-bounded subset of Y. Otherwise K is a J-unbounded subset of Y. If r:[0,∞)→ Y is proper and pr has image in a compact subset of J\ Y then r is said to be J-bounded. Equivalently, r is a J-bounded proper edge path in Y if and only if r has image in J· D for some compact set D⊂ Y. In this case, there is an integer M (depending only on D) such that each vertex of r is within (edge path distance) M of a vertex of J∗. Hence r “determines” a unique end of the Cayley graph Λ(J,J^0).

For a non-empty compact set C_0⊂ Y and finite subcomplex C containing C_0 in Y, let U be a J-unbounded component of Y-J· C and let r be a J-bounded proper ray with image in U. We say J is co-semistable at ∞ in U with respect to r and C_0 if for any compact set D and loop α:[0,1]→ U with α(0)=α(1)=r(0) there is a homotopy H:[0,1]×[0,n]→ Y-C_0 such that H(t,0)=α(t) for all t∈[0,1], H(0,s)=H(1,s)=r(s) for all s∈[0,n], and H(t,n)∈ Y-D for all t∈[0,1]. This means that α can be pushed along r by a homotopy in Y-C_0 to a loop in Y-D. We say J is co-semistable at ∞ in Y with respect to C_0 (and C) if J is co-semistable at ∞ in U with respect to r and C_0 for each J-unbounded component U of Y-J· C and any proper J-bounded ray r in U. Note that if Ĉ is a finite complex containing C, then J is also co-semistable at ∞ in Y with respect to C_0 and Ĉ.

It is important to notice that our definition only requires that loops in U can be pushed arbitrarily far out in Y-C_0 along proper J-bounded rays in U (as opposed to all proper rays in U).

§ AN OUTLINE OF THE PROOF OF THE MAIN THEOREM

A number of technical results are necessary to prove the main theorem. The outline in this section is intended to give the geometric intuition behind these results and describe how they connect to prove the main theorem. Figure 6 will be referenced throughout this section. Here C_0 is an arbitrary compact subset of Y, and J^0 is a finite generating set for the group J, which respects a locally finite cell structure on Y and acts as covering transformations on Y.
The finite subcomplex C of Y is such that J is co-semistable at ∞ in Y with respect to C_0 and C, and J is semistable at ∞ in Y with respect to J^0, C_0 and C. The proper base ray is r_0, E is a finite union of specially selected compact sets and α is a loop based on r_0 with image in Y-E. The path α is broken into subpaths α=(α_1,e_1,β_1,ẽ_1,α_2,…,α_n) where the α_i lie in J· C, the β_i lie in Y-J· C and the edges e_i and ẽ_i serve as “transition edges”. We let F be an arbitrarily large compact set and we must show that α can be pushed along r_0 to a loop outside of F by a homotopy avoiding C_0 (see Theorem <ref>(2)).

In <ref> and <ref> we show Y-J· C has only finitely many J-unbounded components (up to translation in J) and that the stabilizer of any one of these components is infinite and finitely generated. We pick a finite collection of J-unbounded components of Y-J· C such that no two are J-translates of one another, and any J-unbounded component of Y-J· C is a translate of one of these finitely many. Each g_iU_f(i) in Figure 6 is such that g_i∈ J and U_f(i) is one of these finitely many components. The edges e_i have initial vertex in J· C and terminal vertex in g_iU_f(i). Similarly for ẽ_i. The fact that the stabilizer of a J-unbounded component of Y-J· C is finitely generated and infinite allows us to construct the proper edge path rays r_i, r̃_i, s_i and s̃_i in Figure 6. Let S_i be the (finitely generated infinite) J-stabilizer of g_iU_f(i). Lemma <ref> allows us to construct proper edge path rays r_i in J· C (far from C_0) that are “S_i-edge paths”, and proper rays s_i in g_iU_f(i) so that s_i and r_i are (uniformly over all i) “close” to one another. Hence r_i is properly homotopic rel{r_i(0)} to (γ_i, e_i, s_i) by a homotopy in Y-C_0. This means e_i can be “pushed” between s_i and (γ_i^-1, r_i) into Y-F by a homotopy avoiding C_0, and we have the first step in moving α into Y-F by a homotopy avoiding C_0. Similarly for r̃_i, s̃_i and ẽ_i.

Since all of the paths/rays α_i, γ_i, r_i, γ̃_i, and r̃_i have image in J· C, they are uniformly (only depending on the size of the compact set C) close to J-paths/rays. But the semistability at ∞ of J in Y with respect to C_0 then implies there is a path δ_i connecting (γ̃_i-1^-1, r̃_i-1) and (α_i, γ_i^-1, r_i) in Y-F such that the loop determined by δ_i and the initial segments of (γ̃_i-1^-1, r̃_i-1) and (α_i, γ_i^-1, r_i) is homotopically trivial by a homotopy avoiding C_0. Geometrically that means α_i can be pushed outside of F by a homotopy between (γ̃_i-1^-1, r̃_i-1) and (γ_i^-1,r_i), and with image in Y-C_0.
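In symbols: if a and b denote the lengths of the initial segments of (γ̃_i-1^-1, r̃_i-1) and (α_i, γ_i^-1, r_i) cut off by the endpoints of δ_i (notation introduced here only for illustration), the conclusion of this step is that

((γ̃_i-1^-1, r̃_i-1)|_[0,a], δ_i, ((α_i, γ_i^-1, r_i)|_[0,b])^-1)

is a loop which is homotopically trivial in Y-C_0, while δ_i itself lies in Y-F.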
If g_iU_f(i) is one of these components then Lemma <ref> explains how to select the proper ray r̃_i and a path ψ in Y-F connecting r_i and r̃_i so that the loop determined by ψ, initial segments of r_i and r̃_i and the path (γ_i,e_i ,β_i,ẽ_i,γ̃_i^-1) is homotopically trivial in Y-C_0 (so that the section of α defined by (e_i,β_i ,ẽ_i) can be pushed into Y-F by a homotopy between (γ _i^-1,r_i) and (γ̃_i^-1,r̃_i)). Lemma <ref> tells us how to select the compact set E so that if g_iU_f(i) is one of the finitely many remaining components of Y-J· U, then the proper rays s_i and s̃_i can be selected, so that s_i and s̃_i converge to the same end of g_iU_f(i). In either case, α is homotopic rel{r_0} to a loop in Y-F by a homotopy in Y-C_0.§ STABILIZERS OF J-UNBOUNDED COMPONENTSThroughout this section, J is a finitely generated group acting as cell preserving covering transformations on a simply connected locally finite 1-ended CW complex Y and p:Y→ J\ Y is the quotient map. Suppose C, is a large (see Theorem <ref>) finite subcomplex of Y and U is a J-unbounded component of Y-J· C. Lemma <ref> and Theorem <ref> show the J-stabilizer of U is finitely generated and infinite. Lemma <ref> shows that there is a finite subcomplex D(C)⊂ Y such that for any compact E containing D and any J-unbounded component U of Y-J· C there is a special bijection ℳ between the set of ends of the J-stabilizer of U and the ends of U∩(J· E). For C compact in Y, Lemma <ref> shows there are only finitely many J-unbounded components of Y-J· C up to translation in J.Suppose that J is semistable at ∞ in Y with respect to C_0 and C, U is a J-unbounded component of Y-J· C and J is co-semistable at ∞ in U with respect to the proper J-bounded ray r and C_0. Once again co-semistability at ∞ only depends on the 2-skeleton of Y and from this point on we may assume that Y is 2-dimensional. The next two lemmas reduce complexity again by showing that in certain instances we need only consider locally finite 2-complexes with edge path loop attaching maps on 2-cells. Such complexes are in fact simplicial and this is important for our arguments in <ref>.Suppose Y is a locally finite 2-complex and the finitely generated group J acts as cell preserving covering transformations on Y, then there is a J-equivariant subdivision of the 1-skeleton of Y and a locally finite 2-complex X also admitting a cell preserving J-action such that:* The image of a 2-cell attaching map for Y is a finite subcomplex of Y. * The space X has the same 1-skeleton as Y and there is a J-equivariant bijection between the cells of Y and X that is the identity on vertices and edges and if a is a 2-cell attaching map for Y and a^' is the corresponding 2-cell attaching map for X then a and a^' are homotopic in the image of a, and a^' is an edge path loop with the same image as a. * The action of J on X is the obvious action induced by the action of J on Y. * If K_1 is a finite subcomplex of Y and K_2 is the corresponding finite subcomplex of X, then there is a bijective correspondence between the J-unbounded components of Y-J· K_1 and X-J· K_2, so that if U_1 is a J-unbounded component of Y-J· K_1 and U_2 is the corresponding component of X-J· K_2 then U_1 and U_2 are both a union of open cells, and the bijection of cells between Y and X induces a bijection between the open cells of U_1 and U_2. In particular, the J-stabilizer of U_1 is equal to that of U_2. Suppose D is a 2-cell of Y and the attaching map on S^1 for D is a_D. 
Then the image of a_D is a compact connected subset of the 1-skeleton of Y. If e is an edge of Y then im(a_D)∩ e is either ∅, a single closed interval or a pair of closed intervals (we consider a single point to be an interval). In any case, add vertices when necessary to make the end points of these intervals vertices. This process is automatically J-equivariant and locally finite. The map a_D is homotopic (in the image of a_D) to an edge path loop b_D with image the same as that of a_D. Let Z be the 1-skeleton of Y. Attach a 2-cell D^' to Z with attaching map b_D. For j∈ J the attaching map for jD is ja_D and we automatically have an attaching map for X (corresponding to the cell jD) defined by jb_D. This construction is J-equivariant. Call the resulting locally finite 2-complex X and define the action of J on X in the obvious way.

It remains to prove part 4. Suppose K_1 and K_2 are corresponding finite subcomplexes of Y and X respectively. Recall that vertices are open (and closed) cells of a CW complex and every point of a CW complex belongs to a unique open cell. If A is an open cell of Y then either A is a cell of J· K_1 or A is a subset of Y-J· K_1.

Claim <ref>.1 Suppose U is a component of Y-J· K_1. If p and q are distinct points of U then there is a sequence of open cells A_0,…, A_n of U such that p∈ A_0, q∈ A_n and either A_i∩A̅_i+1≠∅ or A̅_i∩ A_i+1≠∅. (Here A̅ is the closure of A in Y, equivalently the closed cell corresponding to A.)

Let α be a path in U from p to q. By local finiteness, there are only finitely many closed cells B_0, …, B_n that intersect the compact set im(α). Note that B_i⊄ J· K_1, so that the open cell A_i for B_i is a subset of U. In particular, im(α)⊂ A_0∪⋯∪ A_n. Let 0=x_0 and assume that α(x_0)=p∈ A_0. Let x_1 be the last point of α^-1(B_0) in [0,1] (it may be that x_1=x_0). If α(x_1)∉A_0 then α(x_1)∈ A_1∪⋯∪ A_n and assume that α(x_1)∈ A_1. In this case α(x_1)∈A̅_0∩ A_1(=B_0∩ A_1). If α(x_1)∈ A_0, then take a sequence of points {t_i} in (x_1,1] converging to x_1. Infinitely many α(t_i) belong to some A_j for j≥1 (say j=1). Then α(x_1)∈ A_0∩A̅_1. Let x_2 be the last point of α^-1(B_1) in [0,1]. Continue inductively.

Claim <ref>.2 If A_1≠ A_2 are open cells of Y such that A_1∩A̅_2≠∅, and A_i corresponds to the open cell Q_i of X for i∈{1,2}, then Q_1∩Q̅_2≠∅.

We only need check this when A_1 or A_2 is a 2-cell (otherwise Q_i=A_i). Note that A_1 is not a 2-cell, since otherwise A_1∩A̅_2=∅. If A_2 is a 2-cell and A_1∩A̅_2≠∅, then by construction A_1⊂A̅_2, and Q_1⊂Q̅_2.

Write U as a union ∪_i∈ IA_i of the open cells in U. Let Q_i be the open cell of X corresponding to A_i. By Claims <ref>.1 and <ref>.2, ∪_i∈ IQ_i is a connected subset of X-J· K_2. The roles of X and Y can be reversed in Claims <ref>.1 and <ref>.2. Then writing a component of X-J· K_2 as a union of its open cells ∪_l∈ LQ_l (and letting A_l be the open cell of Y corresponding to Q_l) we have ∪_l∈ LA_l is a connected subset of Y-J· K_1.

There are maps g:X→ Y and f:Y→ X that are the identity on 1-skeletons and such that fg and gf are properly homotopic to the identity maps relative to the 1-skeleton. In particular, X and Y are proper homotopy equivalent. This basically follows from the proof of Theorem 4.1.8 of <cit.>. These facts are not used in this paper.

The remainder of this section is a collection of elementary (but useful) lemmas. The boundary of a subset S of Y (denoted ∂ S) is the closure of S (denoted S̅) minus the interior of S; that is, ∂ S=S̅-int(S).
If K is a subcomplex of a 2-complex Y then ∂ K is a union of vertices and edges.

If A⊂ Y, then p(A)=p(J· A) and p^-1(p(A))=J· A. If C is compact in Y and B is compact in J\ Y such that p(C)⊂ B, then there is a compact set A⊂ Y such that C⊂ A and p(A)=B.

The first part of the lemma follows directly from the definition of J· A. Cover B⊂ J\ Y by finitely many evenly covered open sets U_i for i∈{1,…, n} such that U̅_i is compact and evenly covered. Pick a finite number of sheets over the U̅_i that cover C and so that there is at least one sheet over each U̅_i. Call these sheets K_1,…, K_m. Let A=(∪_i=1^m K_i)∩ p^-1(B). Then C⊂ A, and A is compact since (∪_i=1^m K_i) is compact and p^-1(B) is closed. We claim that p(A)=B. Clearly p(A)⊂ B. If b∈ B, then there is j∈{1,…, n} such that b∈U̅_j. Then there is a sheet K_j^' over U̅_j (among K_1,…,K_m) and a point k_b∈ K_j^' such that p(k_b)=b, and so k_b∈ p^-1(B)∩(∪_i=1^m K_i) and p maps A onto B.

If C is a compact subset of Y, j is an element of J and U is a component of Y-J· C, then j(U) is a component of Y-J· C, and p(U) is a component of J\ Y-p(C).

Suppose C is a non-empty compact subset of Y and U is an unbounded component of Y-J· C. Then ∂ U is an unbounded subset of J· C.

Otherwise ∂ U is closed and bounded in Y and therefore compact. But ∂ U separates U from J· C, contradicting the fact that Y is 1-ended.

The next remark establishes a minimal set of topological conditions on a topological space X in order to define the number of ends of X.

If X is a connected, locally compact, locally connected Hausdorff space and C is compact in X, then C union all bounded components of X-C is compact, any neighborhood of C contains all but finitely many components of X-C, and X-C has only finitely many unbounded components.

Suppose C is a compact subset of Y and U is a component of Y-J· C. Then U is J-unbounded if and only if p(U) is an unbounded component of J\ Y-p(C). Hence up to translation by J there are only finitely many J-unbounded components of Y-J· C.

First observe that p(C)∩ p(U)=∅. Suppose p(U) is unbounded. Choose a ray r:[0,∞)→ p(U) such that r is proper in J\ Y. Select u∈ U such that p(u)=r(0). Lift r to r̃ at u. Then r̃ has image in U, and there is no compact set D⊂ Y such that im(r̃)⊂ J· D. Hence U is J-unbounded. If U is J-unbounded then, by definition, p(U) is not a subset of a compact subset of J\ Y.

Suppose C is a compact subset of Y. Then there is a compact subset D⊂ Y such that C⊂ D, every J-bounded component of Y-J· C is a subset of J· D and each component of Y-J· D is J-unbounded.

Let U be a J-bounded component of Y-J· C. Then p(U) is a bounded component of J\ Y-p(C). Let B be the union of p(C) and all bounded components of J\ Y-p(C). Then B is compact (Remark <ref>). By Lemma <ref>, there is a compact set D containing C such that p(D)=B.

Suppose C and D are finite subcomplexes of Y. Then only finitely many J-unbounded components of Y-J· C intersect D.

Note that J· C is a subcomplex of Y. If the lemma is false, then for each i∈ℤ^+ there are distinct unbounded components U_i of Y-J· C such that U_i∩ D≠∅. Choose u_i∈ U_i∩ D. Let E_i be an (open) cell containing u_i. Then E_i⊂ U_i and the E_i are distinct. Then infinitely many cells of Y intersect D, contrary to the local finiteness of Y.

Suppose C is a finite subcomplex of Y and U is a J-unbounded component of Y-J· C. Then there are infinitely many j∈ J such that j(U)=U. In particular, the J-stabilizer of U is an infinite subgroup of J.

If x∈∂ U⊂∂(J· C) then any neighborhood of x intersects U. Let x_1,x_2,… be a sequence in U converging to x.
By local finiteness, infinitely many x_i belong to some open cell D of U and so x∈D̅. By Lemma <ref>, there are infinitely many open cells D of U and distinct j_D∈ J such that j_D(C)∩D̅≠∅. For all such D, j_D^-1(D̅)∩ C≠∅ and, by the local finiteness of Y, there are infinitely many such D with j_D^-1(D) all the same. If j_D_1^-1(D_1)=j_D_2^-1(D_2) then j_D_2j_D_1^-1(D_1)=D_2, so j_D_2j_D_1^-1 stabilizes U.

Suppose C is a finite subcomplex of Y, U is a J-unbounded component of Y-J· C and S<J is the subgroup of J that stabilizes U. Then for any g∈ J, the stabilizer of gU is gSg^-1.

Simply observe that hgU=gU if and only if g^-1hgU=U if and only if g^-1hg∈ S if and only if h∈ gSg^-1.

Suppose C⊂ Y is compact and R_1 is a J-unbounded component of Y-J· C. If D⊂ Y is compact, and C⊂ D, then there is a J-unbounded component R_2 of Y-J· D such that R_2⊂ R_1.

Choose an unbounded component V_2 of J\ Y-p(D) such that V_2⊂ p(R_1). By Lemma <ref>, there is a component R_2^' of Y-J· D such that p(R_2^')=V_2, and so R_2^' is J-unbounded. Choose points x∈ R_1 and y∈ R_2^' such that p(x)=p(y)∈ V_2. Then the covering transformation taking y to x takes R_2^' to a J-unbounded component R_2 of Y-J· D. As x∈ R_2∩ R_1, we have R_2⊂ R_1.

§ FINITE GENERATION OF STABILIZERS

The following principal result of this section allows us to construct proper rays in J-unbounded components of Y-J· D that track corresponding proper rays in a copy of a Cayley graph of the corresponding stabilizer of that component. These geometric constructions are critical to the proof of our main theorem.

Suppose J is a finitely generated group acting as cell preserving covering transformations on the simply connected, 1-ended, 2-dimensional, locally finite CW complex Y. Let p:Y→ J\ Y be the quotient map. Suppose D is a connected finite subcomplex of Y such that the image of π_1(p(D)) in π_1(J\ Y) (under the map induced by inclusion of p(D) into J\ Y) generates π_1(J\ Y). Then for any J-unbounded component V of Y-J· D, the stabilizer of V under the action of J is finitely generated.

By Lemma <ref> and Remark <ref> we may assume that Y is simplicial. Theorem 6.2.11 of <cit.> is a cellular version of van Kampen's theorem. The following is an application of that theorem.

Suppose X_1 and X_2 are path connected subcomplexes of a path connected CW complex X, such that X_1∪ X_2=X, and X_1∩ X_2=X_0 is non-empty and path connected. Let x_0∈ X_0. For i=0,1,2 let A_i be the image of π_1(X_i, x_0) in π_1(X,x_0) under the map induced by inclusion of X_i into X. Then π_1(X,x_0) is isomorphic to the amalgamated product A_1∗_A_0 A_2.

Suppose that X is a connected locally finite 2-dimensional simplicial complex. If K is a finite subcomplex of X such that the inclusion map i:K↪ X induces an epimorphism on fundamental group and U is an unbounded component of X-K, then the image of π_1(U) in π_1(X), under the map induced by the inclusion of U into X, is a finitely generated group.

If V is a bounded component of X-K then V∪ K is a finite subcomplex of X. So without loss, assume that each component of X-K is unbounded. If e is an edge in X-K and both vertices of e belong to K, then by barycentric subdivision we may assume that each open edge in X-K has at least one vertex in X-K. Equivalently, if both vertices of an edge belong to K, then the edge belongs to K.
If T is a triangle of X and each vertex of T belongs to K, then each edge belongs to K, and T belongs to K (otherwise the open triangle of T would be a bounded component of X-K). The largest subcomplex Z of X contained in a component U of X-K contains all vertices of X that are in U, all edges each of whose vertices are in U, and all triangles each of whose vertices are in U.

Suppose that U is a component of X-K and Z is the largest subcomplex of X contained in U. Then Z is a strong deformation retract of U. In particular, Z is connected.

If e (resp. T) is an open edge (resp. triangle) of X that is a subset of U, but not of Z, then some vertex of e (resp. T) belongs to K and some vertex of e (resp. T) belongs to Z. Say e has vertices v∈ Z and w∈ K; then [v,w) strong deformation retracts linearly to v. If T is a triangle of X with vertices v,w∈ Z and u∈ K, then for each point p∈[v,w] the linear strong deformation retraction of (u,p] to p agrees with those defined for (u,v] and (u,w], and these define a strong deformation retraction of the triangle [v,w,u]-{u} to the edge [v,w]. Similarly if v∈ Z and u,w∈ K. Combining these deformation retractions gives a strong deformation retraction of U to Z.

Suppose that U is a component of X-K and Z is the largest subcomplex of X contained in U. Let Q_1 be the (finite) subcomplex of X consisting of all edges and triangles that intersect both U and K (and hence intersect both Z and K). By Lemma <ref> we may add finitely many edges in Z to Q_1 so that the resulting complex Q_2, and Q_2∩ Z, are connected. The complex Q_3=Q_2∪(X-U) is a connected subcomplex of X.

The subcomplexes Q_3 and Z are connected and cover X, and Q_3∩ Z=Q_2∩ Z is a non-empty connected finite subcomplex of X. Let A_0, A_1 and A_2 be the image of π_1(Q_3∩ Z), π_1(Q_3) and π_1(Z) respectively in π_1(X) under the homomorphism induced by inclusion. By Theorem <ref>, π_1(X) is isomorphic to the amalgamated product A_1∗_A_0A_2. Now as K⊂ Q_3, A_1=π_1(X). But then normal forms in amalgamated products imply that A_2=A_0. As Q_3∩ Z is a finite complex, A_0, and hence A_2, is finitely generated. This completes the proof of Theorem <ref>.

Suppose J is a finitely generated group acting on a simply connected 2-dimensional simplicial complex Y and let K be a finite subcomplex of J\ Y such that the image of π_1(K), under the homomorphism induced by the inclusion map of K into J\ Y, generates π_1(J\ Y). Let D be a finite subcomplex of Y that projects onto K so that p^-1(K)=J· D. Let X_1 be an unbounded component of J\ Y-K. The number of J-unbounded components of Y-J· D that project to X_1 is the index of the image of π_1(X_1) in π_1(J\ Y)=J under the homomorphism induced by inclusion; and the stabilizer of such a J-unbounded component is isomorphic to the image of π_1(X_1) in π_1(J\ Y)=J under the homomorphism induced by inclusion. Hence Theorem <ref> is a direct corollary of Theorem <ref>.

§ A BIJECTION BETWEEN J-BOUNDED ENDS AND STABILIZERS

As usual, J^0 is a finite generating set for an infinite group J which acts as covering transformations on a 1-ended simply connected locally finite 2-dimensional CW complex Y. Assume that C is a finite subcomplex of Y and U is a J-unbounded component of Y-J· C. The main result of this section connects the ends of the J-stabilizer of U to the J-bounded ends of U (and allows us to construct the r and s rays in Figure 6).
Recall z:(Λ(J,J^0),1)→(Y,∗) and that K is an integer such that for each edge e of Λ, z(e) is an edge path of length ≤ K.

Suppose C and D are finite subcomplexes of Y, U is a J-unbounded component of Y-J· C and some vertex of J· D belongs to U. Let S be the J-stabilizer of U. Then there is an integer N_<ref>(U,C,D) such that for each vertex v∈ U∩(J· D) there is an edge path of length ≤ N from v to S∗, and for each element s∈ S there is an edge path of length ≤ N from s∗ to a vertex of U∩(J· D).

Without loss, assume that ∗∈ D and D is connected. Let A be an integer such that any two vertices in D can be connected by an edge path of length ≤ A. For each vertex v of U∩(J· D) let α_v be a path of length ≤ A from v to a vertex w_v∗ of J∗. The covering transformation w_v^-1 takes α_v to an edge path ending at ∗ and of length ≤ A. The vertices of U∩(J· D) are partitioned into a finite collection of equivalence classes, where v and u are related if w_v^-1(α_v) and w_u^-1(α_u) have the same initial point. Equivalently, w_vw_u^-1u=v. In particular, u∼ v implies w_vw_u^-1∈ S. Let d_Λ denote edge path distance in the Cayley graph Λ(J,J^0) and |g|_Λ=d_Λ(1,g). Note that, as vertices of Λ:

d_Λ(w_vw_u^-1, w_v) = |w_u|_Λ

For each of the finitely many equivalence classes of vertices in U∩(J· D), distinguish a vertex u in that class. Let N_1 be the largest of the numbers |w_u|_Λ (over the distinguished u). If u is distinguished and v∼ u then let β be an edge path in Λ of length ≤ N_1 from w_v to w_vw_u^-1. Then zβ (from w_v∗ to w_vw_u^-1∗∈ S∗) has length ≤ KN_1. The path (α_v, zβ) (from v to w_vw_u^-1∗∈ S∗) has length ≤ N_1K+A.

Let α be an edge path from ∗ to a vertex of U∩(J· D). Then for each s∈ S, s(α) is an edge path from s∗ to a vertex of U∩(J· D). Let N_2=|α|, and let N be the largest of the integers N_1K+A and N_2.

Assume we are in the setup of Lemma <ref>. Suppose g∈ J. Then each vertex of (gU)∩(J· D) is within N of a vertex of gS∗ and within N+|g|K of gSg^-1∗ (as d_Λ(gs, gsg^-1)=|g^-1|), where by Lemma <ref>, gSg^-1 stabilizes gU. Also, each vertex of gS∗ is within N of a vertex of (gU)∩(J· D) and each vertex of gSg^-1∗ is within N+|g|K of a vertex of (gU)∩(J· D). By Lemma <ref> there are only finitely many J-unbounded components of Y-J· C up to translation in J. Hence finitely many integers N cover all cases.

If C⊂ E are compact subsets of Y and U a J-unbounded component of Y-J· C, let ℰ(U,E) be the set of equivalence classes of J-bounded proper edge path rays of U∩(J· E), where two such rays r and s are equivalent if for any compact set F in Y there is an edge path from a vertex of r to a vertex of s with image in (U∩(J· E))-F. If X is a connected locally finite CW complex, let ℰ(X) be the set of ends of X.

The next lemma does not require that the map m factor through z:Λ(J,J^0)→ Y in order to be true, but for our purposes it is more applicable this way. For a 2-dimensional CW complex X and subcomplex A of X, let A_1 be the subcomplex consisting of A, together with all vertices connected by an edge to a vertex of A and all edges with at least one vertex in A. Let St(A) be A_1 union all 2-cells whose attaching maps have image in A_1. Inductively define St^n(A)=St(St^n-1(A)) for all n>1.
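In symbols, writing a_D for the attaching map of a 2-cell D of X, the definition just given reads:

St(A) = A_1 ∪ {2-cells D of X : im(a_D)⊆ A_1},  St^1(A)=St(A),  St^n(A)=St(St^n-1(A)) for n>1,

so St^n(A) plays the role of a combinatorial n-neighborhood of A in the arguments below.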
The next lemma is a standard result that we will employ a number of times.

Suppose L is a positive integer. Then there is an integer M(L) such that if α is an edge path loop in Y of length ≤ L and α contains a vertex of J∗, then α is homotopically trivial in St^M(L)(v) for any vertex v of α.

Since Y is simply connected, each of the (finitely many) edge path loops at ∗ which have length ≤ L is homotopically trivial in St^M_1(∗) for some integer M_1. If α is a loop at ∗ of length ≤ L and v is a vertex of α, then St^M_1(∗)⊂ St^M_1+L(v) and so α is homotopically trivial in St^M(v) where M=M_1+L. The lemma follows by translation in J.

Suppose C is a finite subcomplex of Y and U is a J-unbounded component of Y-J· C. Let S^0 be a finite generating set for S (the J-stabilizer of U), and let Λ(S,S^0) be the Cayley graph of S with respect to S^0. Let m_1:Λ(S,S^0)→Λ(J,J^0) be an S-equivariant map where m_1(v)=v for each vertex v of Λ(S,S^0), and each edge of Λ(S,S^0) is mapped to an edge path in Λ(J,J^0). Let m=zm_1:Λ(S,S^0)→ Y. Then there is a compact set D_<ref>(C,U,S^0)⊂ Y such that for any compact subset E of Y containing D, there is a bijection

ℳ_U:ℰ(Λ(S,S^0))↔ℰ(U,E)=ℰ(U,D)

and an integer I_<ref>(U,C,D) such that if q is a proper edge path ray in Λ(S,S^0) and ℳ([q])=[t], then there is a t^'∈[t] such that for each vertex v of m(q) there is an edge path of length ≤ I from v to a vertex of t^', and if w is a vertex of t^' then there is an edge path of length ≤ I from w to a vertex of m(q).

Throughout this proof Λ=Λ(S,S^0). We call the points m(S)(=S∗)⊂ Y the S-vertices of Y. There is an integer 𝐁(𝐒^0) such that if e is an edge of Λ then the edge path m(e) has length ≤ B. Fix α_0 an edge path in Y from ∗ to a vertex u∈ U. If e=[v,w] is an edge of Λ then (vα_0^-1, m(e), wα_0) is an edge path of length ≤ B+2|α_0| in Y connecting vu and wu (the terminal points of v(α_0) and w(α_0)). Hence there is an integer 𝐀 (depending only on the integer B+2|α_0|) and an edge path of length ≤ A in U from the terminal point of v(α_0) to the terminal point of w(α_0). Let 𝐈=|α_0|+max{A,B}. Let 𝐃_1 be a finite subcomplex of Y containing St^A+B(∗)∪ St(C). By Lemma <ref> there is an integer 𝐍 such that each vertex of (J· D_1)∩ U is connected by an edge path of length ≤ N to a vertex of S∗. There is an integer 𝐙 such that if a and b are vertices of U which belong to an edge path in Y of length ≤ N+|α_0|, and this path contains a point of J∗, then there is an edge path of length ≤ Z in U connecting a and b. Let 𝐃 contain D_1∪ St^Z+N(∗).

Let q be a proper edge path ray in Λ with q(0)=1. Let the consecutive S-vertices of m(q) be v_0=∗, v_1,v_2,…. (So the edge path distance in Y between v_i and v_i+1 is ≤ B.) For simplicity assume that v_i is the element of S that maps ∗ to v_i. Then v_i(α_0) is an edge path that ends in U. By the definition of D_1, there is an edge path β_i in U∩(J· D) from the end point of v_i(α_0) to the end point of v_i+1(α_0) of length ≤ A (see the left hand side of Figure 2). For each vertex v of the proper edge path ray β_q=(β_0,β_1,…) (in U∩(J· D)) there is an edge path of length ≤ A+|α_0|≤ I from v to a vertex of m(q). For each vertex w of m(q) there is an edge path of length ≤ B+|α_0|≤ I from w to a vertex of β_q. In particular, β_q is a proper J-bounded ray in U. If p∈[q]∈ℰ(Λ(S,S^0)) (with p(0)=1) then m(p) is of bounded distance from β_p.
If δ_i is a sequence of edge paths in Λ, each beginning at a vertex of q and ending at a vertex of p, such that any compact subset intersects only finitely many δ_i, then the paths m(δ_i) connect m(q) to m(p) and (since m is a proper map) any compact subset of Y intersects only finitely many m(δ_i). The m(δ_i) determine (using translates of α_0 as above) edge paths in U∩(J· D) connecting β_q and β_p, so that [β_p]=[β_q] in ℰ(U,E) for any finite subcomplex E of Y which contains D. This defines a map ℳ:ℰ(Λ)→ℰ(U,E) which satisfies the last condition of our lemma, and it remains to show that ℳ is bijective.

Let r be a proper J-bounded edge path ray in U. Then r has image in J· E for some finite subcomplex E containing D. Let v_1,v_2,… be the consecutive vertices of r. By Lemma <ref> there is an integer N_E such that each v_i is within N_E of S∗. Let τ_i be a shortest edge path from v_i to S∗, so that |τ_i|≤ N_E. We may assume without loss that the image of τ_i is in J· E. Let w_i∈ S∗ be the terminal point of τ_i. Let z_i be the first vertex of τ_i in J· D_1. Then the segment of τ_i from z_i to w_i has length ≤ N. For each i there is an edge path in Y of length ≤2N_E+1 connecting w_i to w_i+1. Hence there is a proper edge path ray q(r) in Λ such that m(q(r)) contains each w_i. The proper edge path ray β_q(r) has image in U∩(J· D_1) and there is an edge path of length ≤ Z in U∩(J· D) from z_i to a vertex of β_q(r). Hence there is an edge path in U∩(J· E) of length ≤ Z+N_E from v_i to a vertex of β_q(r), so that [r]=[β_q(r)] in ℰ(U,E). In particular, ℳ is onto.

Finally we show ℳ is injective. Suppose a and b are distinct proper edge path rays in Λ with initial point 1, such that [β_a]=[β_b] in ℰ(U,E) for some E containing D. Let τ_i be a sequence of edge paths in U∩(J· E) where each begins at a vertex of β_a, ends at a vertex of β_b, and so that only finitely many intersect any given compact set (a cofinal sequence). By the construction of β_a and β_b we may assume the initial point of τ_i is the end point of v_iα_0 for v_i a vertex of a in Λ, and the terminal point of τ_i is the end point of w_iα_0 for w_i a vertex of b. By Lemma <ref> there is an integer N_E(≥|α_0|) such that each vertex of τ_i is within N_E of S∗. For each i, this defines a finite sequence A_i of points in S∗ beginning with v_i∗ on m(a), ending with w_i∗ on m(b), each within N_E of a point of τ_i, and such that adjacent points of A_i are within 2N_E+1 of one another. Since the τ_i are cofinal, so are the A_i. Since the distance between adjacent points of A_i is bounded, if u and v are vertices of Λ(S,S^0) such that m(u) and m(v) are adjacent in A_i then there is a bound on the distance between u and v in Λ(S,S^0). This implies a and b determine the same end of Λ(S,S^0).

Consider Lemma <ref> for components gU of Y-J· C for g∈ J. The stabilizer of gU is gSg^-1, and there may be no bound on the integers I(gU,C,D) or the size of D(C,gU). For gU, one can consider instead m_g:Λ(S,S^0)→ Y defined by m_g(x)=gm(x) (so m_g(1)=g∗). Lemma <ref> is a generalization of Lemma <ref> that applies to all J-translates of U. Since there are only finitely many J-unbounded components of Y-J· C up to J-translation, the dependency of I and D on U can be eliminated, and in the next lemma I_<ref> and D_<ref> are taken to only depend on C.

For C compact in Y, let 𝒰={U_1,…, U_l} be a set of J-unbounded components of Y-J· C such that if U is any J-unbounded component of Y-J· C then U=gU_i for some g∈ J and some i∈{1,…, l}. Also assume that U_i≠ gU_j for any i≠ j and any g∈ J.
Call 𝒰 a component transversal for Y-J· C. Let S_i^0 be a finite generating set for S_i, the J-stabilizer of U_i, and Λ_i=Λ(S_i,S_i^0) the Cayley graph of S_i with respect to S_i^0. For g∈ J, let m_(g,i):Λ_i→ Y be defined by m_(g,i)(x)=gm_i(x) (where m_i:Λ_i→ Y is defined by Lemma <ref>). In particular, m_(g,i)(S_i)=gS_i∗.

For i∈{1,…, l}, let D_i=D_<ref>(C,U_i,S_i^0), D_<ref>(C)=∪_i=1^lD_i⊂ Y, I_<ref>(C)=max{I_<ref>(U_i,C,D_i)}_i=1^l and ℳ_i:ℰ(Λ_i)↔ℰ(U_i,E) (Lemma <ref>). For E compact containing D_<ref>(C) and g∈ J, there is a bijection

ℳ_(g,i):ℰ(Λ_i)↔ℰ(gU_i,E), where ℳ_(g,i)([q])=gℳ_i([q]),

such that if q is a proper edge path ray in Λ_i and ℳ_(g,i)([q])=[t], then there is t^'∈[t] such that for each vertex v of m_(g,i)(q) there is an edge path of length ≤ I_<ref>(C) from v to a vertex of t^', and if w is a vertex of t^' then there is an edge path of length ≤ I_<ref>(C) from w to a vertex of m_(g,i)(q)=gm_i(q).

[Figure 2]

§ PROOF OF THE MAIN THEOREM

We set notation for the proof of our main theorem. Let C_0 be compact in Y, and J^0 be a finite generating set for the infinite group J which acts as cell preserving covering transformations on Y. Let C be a finite subcomplex of Y such that J is co-semistable at ∞ in Y with respect to C_0 and C, and J is semistable at ∞ in Y with respect to J^0, C_0 and C. As in the setup for Lemma <ref> we let 𝒰={U_1,…,U_l} be a component transversal for Y-J· C, S_i^0 be a finite generating set for S_i, the J-stabilizer of U_i, and Λ_i=Λ(S_i,S_i^0) be the Cayley graph of S_i with respect to S_i^0. For g∈ J, let m_(g,i):Λ_i→ Y be defined by m_(g,i)(x)=gm_i(x) (where m_i:Λ_i→ Y is defined by Lemma <ref>). In particular, m_(g,i)(S_i)=gS_i∗.

The next lemma is a direct consequence of Lemma <ref>.

Let N_i be N_<ref>(U_i,C,St(C)) and N_<ref>=max{N_1,…, N_l}. If g∈ J and [v,w] is an edge of Y with v∈ gU_i and w∈ J· C, then there are edge paths of length ≤ N_<ref> from v and w to gS_i∗ and, for each q∈ S_i∗, an edge path of length ≤ N_<ref> from gq to a vertex of St(J· C)∩ gU_i.

There is an integer M_<ref>(C) and a compact set D_<ref>(C) in Y containing St^M_<ref>(C) such that for any U_i∈{U_1,…, U_l}, g∈ J and edge [v,w] of Y with v∈ gU_i-D_<ref> and w∈ J· C (see Figure 3), we have the following:
* There is an edge path γ of length ≤ N_<ref> from a vertex x=gx^'∗∈ gS_i∗ to w, where x^' is a vertex in an unbounded component Q of Λ(S_i,S_i^0)-m_(g,i)^-1(St^M_<ref>(C)).
* If γ is as in part 1, and r_0^' is any proper edge path ray in Q beginning at x^' (so r_0=m_(g,i)(r_0^') is a proper edge path ray beginning at x), then there is a proper J-bounded ray s_v beginning at v such that s_v has image in gU_i and is properly homotopic rel{v} to ([v,w],γ^-1,r_0) by a proper homotopy with image in St^M_<ref>(im(r_0))⊂ Y-C. So (by hypothesis) J is co-semistable at ∞ in gU_i with respect to s_v and C_0.

Let A^' be an integer such that if s∈∪_i=1^lS_i^0 then there is an edge path of length ≤ A^' in Λ(J,J^0) from 1 to s. The image of this path under z:(Λ,1)→(Y,∗) is a path in Y of length ≤ KA^'=A. Let N=N_<ref>. Select an integer B such that if a and b are vertices of St(J· C)∩ gU_i (for any g∈ J and i∈{1,…, l}) of distance ≤2N+A+1 in Y, then they can be joined by an edge path of length ≤ B in gU_i.
By Lemma <ref> there is an integer M_<ref> such that if β is a loop in Y of length ≤ A+B+2N+1 and containing a vertex of J∗, then β is homotopically trivial in St^M(b) for any vertex b of β.

There are only finitely many pairs (g,i) with g∈ J and i∈{1,…, l} such that gS_i∗∩ St^M(C)≠∅. If gS_i∗∩ St^M(C)=∅, then m_(g,i)^-1(St^M(C))=∅. Lemma <ref> implies there is an edge path γ of length ≤ N_<ref> from a vertex x=gx^'∗∈ gS_i∗ to w. Now let r_0^'=(e_1,e_2,…) be any proper edge path ray at x^'∈Λ(S_i,S_i^0). Let τ_i be the edge path m_(g,i)(e_i) so that τ_i is an edge path in Y of length ≤ A and r_0=m_(g,i)(r_0^')=(τ_1,τ_2,…) is a proper edge path at x (see Figure 3). Let x_0^'=x^' and x_j^' be the end point of e_j so that x_j=gx_j^'∗ is the end point of τ_j. Let γ_0=(γ,[w,v]) (of length ≤ N+1). For j≥1, let γ_j be an edge path of length ≤ N_<ref> from x_j to v_j∈ gU_i∩ St(J· C) (by Lemma <ref>). By the definition of B there is an edge path β_j in gU_i from v_j-1 to v_j of length ≤ B (where v_0=v). Let s_v be the proper edge path (β_1,β_2,…), with initial vertex v. The loop (γ_j-1, β_j, γ_j^-1, τ_j^-1) has length ≤ A+B+2N+1 (indeed |γ_j-1|+|β_j|+|γ_j^-1|+|τ_j^-1| ≤ (N+1)+B+N+A = A+B+2N+1) and contains the J-vertex x_j, and so is homotopically trivial in St^M(x_j)⊂ Y-C. Combining these homotopies shows that s_v is properly homotopic rel{v} to ([v,w], γ^-1, r_0) by a proper homotopy with image in St^M(im(r_0))⊂ Y-C. As long as D_<ref> contains St^M(C), the conclusion of our lemma is satisfied for all such pairs (g,i).

If (g,i) is one of the finitely many pairs such that gS_i∗∩ St^M(C)≠∅ then we need only find a compact D_(g,i) so that the lemma is valid for the pair (g,i) and D_(g,i), since we can let D be compact containing St^M(C) and the union of these finitely many D_(g,i).

Figure 3

Fix (g,i) and let E be compact in Λ(S_i,S_i^0)=Λ_i containing the compact set m_(g,i)^-1(St^M(C)) and all bounded components of Λ_i-m_(g,i)^-1(St^M(C)). Let D_(g,i) be compact in Y containing m_(g,i)(E). Select γ exactly as in the first case. Since x^' is a vertex of Λ_i in an unbounded component Q of Λ_i-m_(g,i)^-1(St^M(C)), there is a proper edge path ray r_0^' at x^' with image in Q. Then r_0= m_(g,i)(r_0^') is a proper edge path ray at x and the vertices of r_0^' are mapped to vertices x_0=x, x_1,… of (gS_i∗)-St^M(C). Select paths τ_i and β_i as in the first case and the same argument shows that s_v=(β_1,β_2,…) is properly homotopic rel{v} to ([v,w], γ^-1, r_0) by a proper homotopy with image in St^M(im(r_0))⊂ Y-C.

The homotopy of Lemma <ref> (pictured in Figure 3) of s_v to ([v,w],γ^-1, r_0) is sometimes called a ladder homotopy. The rungs of the ladder are the γ_i and the sides of the ladder are s_v and r_0. The loops determined by two consecutive rungs and the segments of the two sides connecting these rungs have bounded length and contain a vertex of J∗. Lemma <ref> implies there is an integer M such that each such loop is homotopically trivial by a homotopy in St^M(v) for any vertex v of that loop. Combining these homotopies gives a ladder homotopy.

We briefly recall the outline of <ref>. We determine a compact set E(C_0,C) such that for any compact set F, loops outside of E and based on a proper base ray r_0 can be pushed outside F relative to r_0 and by a homotopy avoiding C_0.
A loop outside E is written in the form α= (α_1, e_1, β_1, ẽ_1, α_2, e_2, β_2, ẽ_2, …, α_n-1, e_n-1, β_n-1, ẽ_n-1, α_n) where α_i is an edge path in J· C, e_i (respectively ẽ_i) is an edge with terminal (respectively initial) vertex in Y-J· C and β_i is an edge path in Y-J· C (see Figure 6).

We can push the α_j subpaths of α arbitrarily far out between (γ̃_j-1^-1, r̃_j-1) and (γ_j^-1,r_j) using the semistability of J in Y with respect to C. Lemmas <ref> and <ref> consider subpaths of the form (e,β, ẽ) in α. The edges e and ẽ are properly pushed off to infinity using ladder homotopies given by Lemma <ref>. The β paths present difficulties and two cases are considered. If β lies in gU_i and gS_i∗ does not intersect St^M_<ref>(C) then Lemma <ref> provides a proper homotopy to compatibly push (e,β, ẽ) arbitrarily far out. In Lemma <ref> we consider paths (e,β, ẽ) not considered in Lemma <ref>. For g∈ J and i∈{1,…,l} there are only finitely many cosets gS_i such that (gS_i∗)∩ St^M_<ref>(C)≠∅ and we are reduced to considering paths (e,β, ẽ) with β in gU_i for these gS_i.

Suppose that g∈ J, i∈{1,…, l} and ([w,v],β, [ṽ,w̃]) is an edge path in Y-D_<ref>. Suppose further that

1) w,w̃∈ J· C and v,ṽ∈ gU_i,

2) β is an edge path in gU_i,

3) γ (respectively γ̃) is an edge path of length ≤ N_<ref> from x=gx^'∗∈ gS_i∗ (resp. x̃=gx̃^'∗∈ gS_i∗) to w (resp. w̃) (such paths exist by Lemma <ref>), and

4) x^' and x̃^' belong to the same unbounded component Q of Λ(S_i,S_i^0)-m_(g,i)^-1(St^M_<ref>(C)) (in particular, when m_(g,i)^-1(St^M_<ref>(C))=∅), then:

There are proper Λ_i-edge path rays r^' at x^' and r̃^' at x̃^' such that r^' and r̃^' have image in Q and, if r= m_(g,i)(r^') and r̃= m_(g,i)(r̃^'), then for any compact set F⊂ Y there is an integer d≥0 and edge path ψ in Y-F from r(d) to r̃(d) such that the loop (r|_[0,d]^-1, γ, [w,v], β, [ṽ,w̃], γ̃^-1, r̃|_[0,d], ψ^-1) is homotopically trivial by a homotopy in Y-C_0. (So ([w,v], β, [ṽ,w̃]) can be pushed between (γ^-1, r) and (γ̃^-1, r̃) to a path in Y-F, by a homotopy in Y-C_0.)

Let r^' be any proper edge path ray in Q with initial point x^'. Let τ^'=(e_1^',…, e_k^') be an edge path in Q from x̃^' to x^' with consecutive vertices (x̃^'=t_0^', t_1^',…, t_k^'=x^'). Let r̃^'=(τ^',r^'). Let t_j=m_(g,i)(t_j^') for all j∈{0,1,…, k}, r=m_(g,i)(r^'), r̃=m_(g,i)(r̃^') and τ=m_(g,i)(τ^') (an edge path from x̃ to x with image in Y-St^M_<ref>(C)).

By Lemma <ref> and the definition of M_<ref>, there is an edge path δ in gU_i from ṽ to v such that the loop ([ṽ,w̃],γ̃^-1, τ, γ,[w,v],δ^-1) is homotopically trivial by a ladder homotopy H_1 (with rungs connecting the two sides τ and δ) with image in St^M_<ref>({t_0,t_1,…, t_k})⊂ Y-C.

By Lemma <ref>, there is a proper edge path ray s at v with image in gU_i such that r is properly homotopic rel{x} to (γ,[w,v],s) by a ladder homotopy H_2 in Y-C. Since J is co-semistable at ∞ in Y with respect to C_0 and C (and s is J-bounded), the loop (β,δ) can be pushed along s by a homotopy H_3 (with image in Y-C_0) to a loop ϕ in Y-F, where if ϕ is based at s(k), then s([k,∞)) avoids F.

Figure 4

Combine these homotopies as in Figure 4 to obtain ψ. If U is a J-unbounded component of Y-J· C, and s and s̃ are proper edge path rays in Y with image in U, then we say s and s̃ converge to the same end of U (in Y) if for any compact set F in Y, there are edge paths in U-F connecting s and s̃.
Figure 6 can serve as a visual aid for Lemma <ref>. There is a compact set D_<ref>(C, U_1,…, U_l) such that: If g∈ J, i∈{1,…, l}, and ([w,v],β, [ṽ,w̃]) is an edge path in Y-D_<ref> with w,w̃∈ J· C and β a path in gU_i, then there are edge paths γ and γ̃ of length ≤ N_<ref> from x=gx^'∗∈ gS_i∗ to w and x̃=gx̃^'∗∈ gS_i∗ to w̃ respectively, and proper edge path rays r^' at x^' and r̃^' at x̃^' with image in Λ(S_i,S_i^0)-m_(g,i)^-1(D_<ref>) such that for r= m_(g,i)(r^') and r̃= m_(g,i)(r̃^'), one of the following two statements is true:

* For any compact set F in Y, there is an integer d∈[0,∞) and edge path ψ in Y-F from r(d) to r̃(d) such that the loop (r|_[0,d]^-1, γ, [w,v], β, [ṽ,w̃], γ̃^-1, r̃|_[0,d], ψ^-1) is homotopically trivial by a homotopy in Y-C_0.

* There are proper J-bounded edge path rays s at v and s̃ at ṽ with image in gU_i such that the ray s (respectively s̃) is properly homotopic rel{v} to ([v,w],γ^-1, r) (respectively rel{ṽ} to ([ṽ,w̃],γ̃^-1, r̃)) by a (ladder) homotopy in Y-C (just as in Lemma <ref>), and s and s̃ converge to the same end of gU_i.

We define D_<ref> to be the union of a finite collection of compact sets. The first is D=D_<ref>(C) (which contains St^M_<ref>(C)). If Λ(S_i,S_i^0)-m_(g,i)^-1(St^M_<ref>(C)) has only one unbounded component (in particular when m_(g,i)^-1(St^M_<ref>(C))=∅) then conclusion 1) is satisfied (by Lemma <ref>). There are only finitely many pairs (g,i) with g∈ J and i∈{1,…,l} such that Λ(S_i,S_i^0)-m_(g,i)^-1(St^M_<ref>(C)) has more than one unbounded component. List these pairs as (g(1),ι(1)),…, (g(t),ι(t)). Now assume that gU_i=g(q)U_ι(q) for some q∈{1,…,t}. There are finitely many unbounded components of Λ(S_i,S_i^0)-m_(g,i)^-1(St^M_<ref>(C)). List them as K_1,…, K_a. Consider pairs (K_j,K_k) with j≠k.

If for every compact set F in Y, there are vertices y_j^'∈ K_j and y_k^'∈ K_k, edge paths τ_j and τ_k of length ≤ N_<ref> from m_(g,i)(y_j^') to gU_i and m_(g,i)(y_k^') to gU_i respectively, and an edge path in gU_i-F connecting the terminal point of τ_j and the terminal point of τ_k, then we call the pair (K_j, K_k) inseparable and let F_(j,k)=∅. Otherwise, we call the pair separable and let F_(j,k) be the compact subset of Y for which this condition fails. Let E_(g,i)=∪_j≠k F_(j,k). As gU_i=g(q)U_ι(q), define E^q=E_(g,i).

We now define D_<ref>=D_<ref>(C)∪ E^1∪⋯∪ E^t. As noted above we need only consider the case where β has image in g(q)U_ι(q) for some q∈{1,…, t}. Simplifying notation again let g=g(q) and U_i=U_ι(q). Lemma <ref> implies there are edge paths γ and γ̃ of length ≤ N_<ref> from x= gx^'∗∈ gS_i∗ to w and x̃= gx̃^'∗∈ gS_i∗ to w̃ respectively. Again let K_1,…, K_a be the unbounded components of Λ(S_i,S_i^0)-m_(g,i)^-1(St^M_<ref>(C)). Assume that x^' belongs to K_1. If x̃^' also belongs to K_1, then conclusion 1) of our lemma follows directly from Lemma <ref>. So, we may assume x̃^' belongs to K_2≠K_1. Notice that the existence of β (in Y-D_<ref>) implies that the pair (K_1,K_2) is inseparable. This implies that there is a sequence of pairs of vertices (y_1(j)^', y_2(j)^') for j∈{1,2,…} with y_1(j)^'∈ K_1, y_2(j)^'∈ K_2 and edge paths τ_1(j) and τ_2(j) of length ≤ N_<ref> from m_(g,i)(y_1(j)^') to gU_i and m_(g,i)(y_2(j)^') to gU_i respectively, and an edge path β_j in gU_i from the terminal point of τ_1(j) to the terminal point of τ_2(j), such that only finitely many β_j intersect any compact set.
Pick proper edge path rays r^' in K_1 at x^' and r̃^' in K_2 at x̃^' so that for infinitely many pairs (y_1(j)^', y_2(j)^'), r^' passes through y_1(j)^' and r̃^' passes through y_2(j)^'. Let r=m_(g,i)(r^') and r̃=m_(g,i)(r̃^'). Choose s and s̃ for r and r̃ respectively as in Lemma <ref>, where γ and γ̃ for r and r̃ are chosen to be τ_1(j) and τ_2(j) whenever possible. Lemma <ref> implies the ray s is properly homotopic rel{v} to ([v,w],γ^-1, r) and s̃ is properly homotopic rel{ṽ} to ([ṽ,w̃],γ̃^-1, r̃) by ladder homotopies in Y-C. The paths β_j show that s and s̃ converge to the same end of gU_i, so that conclusion 2) of our lemma is satisfied.

Suppose U is a J-unbounded component of Y-J· C, F is any compact subset of Y and s_1 and s_2 are J-bounded proper edge path rays in U determining the same end of U, with s_1(0)=s_2(0). Then there is an integer n and a path β from the vertex s_1(n) to the vertex s_2(n) such that the image of β is in Y-F and (s_1|_[0,n], β, s_2|_[0,n]^-1) is homotopically trivial in Y-C_0.

Choose an integer n such that s_1([n,∞)) and s_2([n,∞)) avoid F. Since s_1 and s_2 determine the same end of U, there is an edge path α in U-F from s_1(n) to s_2(n). Consider the loop (s_1|_[0,n]^-1,s_2|_[0,n],α^-1) based on the ray s_1|_[n,∞). By co-semistability, there is a homotopy H:[0,1]×[0,l]→ Y-C_0 (see Figure 5) such that H(0,t)=H(1,t)=s_1(n+t) for t∈[0,l], H(t,l)∈ Y-F for t∈[0,1] and H|_[0,1]×{0}=(s_1|_[0,n]^-1,s_2|_[0,n],α^-1).

Figure 5

Define τ(t)=H(t,l) for t∈[0,1] (so that τ(0)=τ(1)=s_1(l+n)). Now define β=(s_1|_[n,n+l], τ, s_1|_[n,n+l]^-1, α) to finish the proof.

Suppose r_1^' and r_2^' are proper edge path rays in Λ(J,J^0) such that r_1=z(r_1^') and r_2=z(r_2^') have image in Y-C. There is a compact set D_<ref>(C) in Y such that: if α is an edge path in (J· C)∩(Y-D_<ref>) from r_1(0) to r_2(0) and F is any compact set in Y, then there is an edge path ψ in Y-F from r_1 to r_2 such that the loop determined by ψ, α and the initial segments of r_1 and r_2 is homotopically trivial in Y-C_0.

There is an integer N_<ref>(C) such that for each vertex v of C there is an edge path in Y from v to ∗ of length ≤ N_<ref>. Then for each vertex v of J· C there is an edge path of length ≤ N_<ref> from v to J∗. Choose an integer P such that if v^' and w^' are vertices of Λ(J,J^0) and z(v^')= v and z(w^')= w are connected by an edge path of length ≤ 2N_<ref>+1 in Y then v^' and w^' are connected by an edge path of length ≤ P in Λ(J,J^0). Recall that if e is an edge of Λ(J,J^0) then z(e) is an edge path of length ≤ K. By Lemma <ref> there is an integer M_<ref> such that any loop containing a vertex of J∗ and of length ≤ KP+2N_<ref>+1 is homotopically trivial in St^M_<ref>(v) for any vertex v of this loop.

Let D_<ref>=St^M_<ref>(C). Write α as the edge path (e_1,…, e_p) with consecutive vertices v_0,v_1,…, v_p. Let β_0 and β_p be trivial and for i∈{1,…, p-1} let β_i be an edge path of length ≤ N_<ref> from v_i to some vertex g_i∗ for g_i∈ J. Let g_0=r_1^'(0) and g_p=r_2^'(0) (so g_0∗=v_0 and g_p∗=v_p). For i∈{1,…, p}, there is an edge path τ_i^' in Λ(J,J^0) from g_i-1 to g_i of length ≤ P. Let τ_i= z(τ_i^') (an edge path of length ≤ PK). Then the loop (β_i-1,τ_i,β_i^-1,e_i^-1) has length ≤ KP+2N_<ref>+1 and so is homotopically trivial in St^M_<ref>(v) for any vertex v of the loop. Let τ^'=(τ_1^',…, τ_p^'); then α is homotopic rel{v_0,v_p} to z(τ^')=τ by a (ladder) homotopy in Y-C.
Since J is semistable at ∞ in Y with respect to J^0, C_0 and C, there is an edge path ψ in Y-F from r_1 to (τ,r_2) such that the loop determined by ψ, τ and the initial segments of r_1 and r_2 is homotopically trivial in Y-C_0. Now combine this homotopy with the homotopy of α and τ.

(of Theorem <ref>) Let C_0 be a finite subcomplex of Y and J^0 be a finite generating set for an infinite finitely generated group J, where J acts as cell preserving covering transformations on Y, J is semistable at ∞ in Y with respect to J^0, C_0 and C (a finite subcomplex of Y) and J is co-semistable at ∞ in Y with respect to C_0 and C. Also assume that Y-J· C is a union of J-unbounded components. Let U_1,…,U_l be J-unbounded components of Y-J· C forming a component transversal for Y-J· C and let S_i be the J-stabilizer of U_i for i∈{1,…,l}. Let N_<ref> be defined for C and U_1,…,U_l as in Lemma <ref>. Let r_0^' be a proper edge path ray in Λ(J,J^0) at 1 and r_0= zr_0^'.

Figure 6

Let E be compact containing St^N_<ref>(D_<ref>)∪ D_<ref>(C,U_1,…, U_l) and such that once r_0 leaves E it never returns to D_<ref>(C). Suppose α is an edge path loop based on r_0 with image in Y-E (see Figure 6). Let F be any compact subset of Y. Our goal is to find a proper homotopy H:[0,1]×[0,1]→ Y-C_0 such that H(0,t)=H(1,t) is a subpath of r_0, H(t,0)=α and H(t,1) has image in Y-F (so that Y has semistable fundamental group at ∞ by Theorem <ref> part 2). Write α as: α= (α_1, e_1, β_1, ẽ_1, α_2, e_2, β_2, ẽ_2, …, α_n-1, e_n-1, β_n-1, ẽ_n-1, α_n) where α_i is an edge path in J· C, e_i (respectively ẽ_i) is an edge with terminal (respectively initial) vertex in Y-J· C and β_i is an edge path in the J-unbounded component g_iU_f(i) of Y-J· C where f(i)∈{1,…, l}.

By Lemmas <ref> and <ref> and the definition of D_<ref>(C), there is an edge path γ_i of length ≤ N_<ref> from a vertex x_i= g_ix_i^'∗ of g_iS_f(i)∗ to the initial vertex of e_i, and there are proper edge path rays r_i^' at x_i^' in Λ(S_f(i), S_f(i)^0) and s_i at the end point of e_i such that s_i has image in g_iU_f(i) and r_i is properly homotopic to (γ_i, e_i, s_i) (where r_i= m_(g_i,f(i))(r_i^')) by a proper (ladder) homotopy H_i with image in Y-C. Similarly there is an edge path γ̃_i of length ≤ N_<ref> from x̃_i, a vertex of g_iS_f(i)∗, to the terminal vertex of ẽ_i, and there are J-bounded proper edge path rays r̃_i at γ̃_i(0) and s̃_i at the initial point of ẽ_i, such that r̃_i= m_(g_i,f(i))(r̃_i^') for some proper ray r̃_i^' in Λ(S_f(i), S_f(i)^0), s̃_i has image in g_iU_f(i) and s̃_i is properly homotopic to (ẽ_i, γ̃_i^-1, r̃_i) by a proper (ladder) homotopy H̃_i with image in Y-C. In particular, the r_i- and r̃_i-rays have image in Y-C.

By Lemma <ref>, either r_i is properly homotopic rel{r_i(0)} to the ray (γ_i,e_i,β_i, ẽ_i, γ̃_i^-1, r̃_i) by a homotopy in Y-C_0 or the rays s_i and s̃_i converge to the same end of g_iU_f(i). In the former case: the path (γ_i,e_i,β_i, ẽ_i, γ̃_i^-1) can be moved by a homotopy along r_i and r̃_i to a path outside F, where the homotopy has image in Y-C_0. In the latter case, Lemma <ref> implies there is an integer n_i and edge path β̃_i from s_i(n_i) to s̃_i(n_i) with image in Y-F such that β_i can be moved by a homotopy along s_i and s̃_i to β̃_i, such that this homotopy has image in Y-C_0. In any case, the (ladder) homotopy H_i (of r_i to (γ_i,e_i,s_i)) tells us that (γ_i,e_i) can be moved (by a homotopy in Y-C_0) along r_i and s_i to a path in Y-F, and similarly for (γ̃_i,ẽ_i) using H̃_i.
Combining these three homotopies, we have in the latter case (as in the former):

∗) The path (γ_i,e_i,β_i, ẽ_i, γ̃_i^-1) can be moved by a homotopy along r_i and r̃_i to a path outside F by a homotopy with image in Y-C_0.

For consistent notation, let r̃_0=r_n be the tail of r_0 beginning at α_1(0), and let γ̃_0 and γ_n be the trivial paths at the initial point of α_1. It remains to show that for 0≤ i≤ n-1, there is a path δ_i in Y-F from r̃_i to r_i+1 such that the loop determined by δ_i, the path (γ̃_i,α_i+1, γ_i+1^-1), and the initial segments of r̃_i and r_i+1 is homotopically trivial in Y-C_0. These homotopies are given by Lemma <ref> since the paths γ_i and γ̃_i all have length ≤ N_<ref> and so by the definition of E they have image in Y-D_<ref> (as do the α_i), and since the rays r_i and r̃_i have image in Y-C.

§ GENERALIZATIONS TO ABSOLUTE NEIGHBORHOOD RETRACTS

There is no need for a space X to be a CW complex in order to define what it means for a finitely generated group J to be semistable at ∞ in X with respect to a compact subset C_0 of X, or for J to be co-semistable at ∞ in X with respect to C_0. Suppose X is a 1-ended simply connected locally compact absolute neighborhood retract (ANR) and G is a group (not necessarily finitely generated) acting as covering transformations on X. Assume that for each compact subset C_0 of X there is a finitely generated subgroup J of G so that (a) J is semistable at ∞ in X with respect to C_0, and (b) J is co-semistable at ∞ in X with respect to C_0. Then X has semistable fundamental group at ∞.

By a theorem of J. West <cit.> the locally compact ANR G\ X is proper homotopy equivalent to a locally finite polyhedron Y_1. A simplicial structure on Y_1 lifts to a simplicial structure on Y, its universal cover, and G acts as cell preserving covering transformations on Y. A proper homotopy equivalence from G\ X to Y_1 lifts to a G-equivariant proper homotopy equivalence h: X→ Y. Let f:Y→ X be a (G-equivariant) proper homotopy inverse of h. Since the semistability of the fundamental group at ∞ of a space is invariant under proper homotopy equivalence, it suffices to show that Y satisfies the hypothesis of Theorem <ref>.

First we show that if C_0 is compact in Y then there is a finitely generated subgroup J of G such that J is semistable at ∞ in Y with respect to C_0. There is a finitely generated subgroup J of G, with finite generating set J^0 and compact set C⊂ X such that J is semistable at ∞ with respect to J^0, h^-1(C_0), C and z_1, where z_1:Λ(J,J^0)→ X is J-equivariant. Note that z=hz_1 is J-equivariant. Let r^' and s^' be proper edge path rays in Λ such that r^'(0)=s^'(0) and both r=z_1(r^') and s=z_1(s^') have image in X-C. Then given any compact set D in X there is a path δ_D in X-D from r to s such that the loop determined by δ_D and the initial segments of r and s is homotopically trivial in X-h^-1(C_0).

Now, let D be compact in Y. Suppose that r^' and s^' are proper edge path rays in Λ such that r^'(0)=s^'(0) and both r=hz_1(r^') and s=hz_1(s^') have image in Y-h(C) (in particular, z_1(r^') and z_1(s^') have image in X-C). Let δ be a path from z_1(r^') to z_1(s^') in X-h^-1(D) (so that h(δ) is a path from r to s in Y-D) such that the loop determined by δ and the initial segments of z_1(r^') and z_1(s^') is homotopically trivial by a homotopy H_0 with image in X-h^-1(C_0).
Then the loop determined by h(δ) and the initial segments of r and s is homotopically trivial in Y-C_0 by the homotopy hH_0.

Finally we show that if C_0 is compact in Y there is a finitely generated subgroup J of G such that J is co-semistable at ∞ in Y with respect to C_0. Consider the compact set h^-1(C_0)⊂ X. Choose C compact in X such that J is co-semistable at ∞ in X with respect to h^-1(C_0) and C. Let H:Y×[0,1]→ Y be a proper homotopy such that H(y,0)=y and H(y,1)=hf(y) for all y∈ Y. Let D_1 be compact in Y so that if s is a proper ray in Y-D_1 then the proper homotopy of s to hf(s) (induced by H) has image in Y-C_0. Let D_2=D_1∪ f^-1(C). It suffices to show that if r is a J-bounded proper ray in Y-J· D_2 and α is a loop in Y-J· D_2 with initial point r(0), then for any compact set F in Y, α can be pushed along r to a loop in Y-F, by a homotopy in Y-C_0. Define τ(t)= H(r(0),t) for t∈[0,1].

Let H_1:[0,∞)×[0,1]→ Y-C_0 be the proper homotopy (induced by H) of the proper ray (α, r) to (hf(α), hf(r)) so that H_1(t,0)=(α,r)(t), H_1(t,1)=(hf(α), hf(r))(t) for t∈[0,∞) and H_1(0,t)=τ(t) (see Figure 7). Let H_2:[0,∞)×[0,1]→ Y-C_0 be the proper homotopy (induced by H) of r to hf(r) so that H_2(t,0)=r(t), H_2(t,1)=hf(r)(t) for t∈[0,∞) and H_2(0,t)=τ(t) for t∈[0,1].

Figure 7

Recall that f is J-equivariant. Since r and α have image in Y-J· D_2 (and f^-1(C)⊂ D_2), f(r) and f(α) have image in X-J· C. Also f(r) is J-bounded in X. There is a homotopy H_3 with image in X-h^-1(C_0) that moves f(α) along f(r) to a loop ϕ in X-h^-1(F), where if fr(q) is the initial point of ϕ then fr([q,∞))⊂ X-h^-1(F). The homotopy hH_3 has image in Y-C_0 and moves hf(α) along hf(r) to the loop h(ϕ) in Y-F. Combine the homotopies H_1, H_2 and H_3 as in Figure 7 to see that α can be moved along r into Y-F by a homotopy in Y-C_0.
http://arxiv.org/abs/1709.09129v1
{ "authors": [ "Ross Geoghegan", "Craig Guilbault", "Michael Mihalik" ], "categories": [ "math.GR" ], "primary_category": "math.GR", "published": "20170926165520", "title": "Non-cocompact Group Actions and $π_1$-Semistability at Infinity" }
1Yunnan Observatories, Chinese Academy of Sciences, 396 Yangfangwang, Guandu District, Kunming, 650216, P.R. China; [email protected], [email protected] 2Center for Astronomical Mega-Science, Chinese Academy of Sciences, 20A Datun Road, Chaoyang District, Beijing, 100012, P.R. China 3Key Laboratory for the Structure and Evolution of Celestial Objects, Chinese Academy of Sciences, Kunming, 650011, China

Two blue-straggler sequences discovered in the globular cluster M30 provide a strong constraint on the formation mechanisms of blue stragglers. We study the formation of blue-straggler binaries through binary evolution, and find that binary evolution can contribute to the blue stragglers in both sequences. Whether a blue straggler is located in the blue sequence or the red sequence depends on the contribution of the mass donor to the total luminosity of the binary, which is generally observed as a single star in globular clusters. The blue stragglers in the blue sequence have a cool white-dwarf companion, while the majority (∼ 60%) of the objects in the red sequence are binaries that are still experiencing mass transfer; there are also some objects in which the donor has just finished mass transfer (the stripped-core stars, ∼ 10%) or in which the blue stragglers (the accretors) have evolved away from the blue sequence (∼ 30%). Meanwhile, the W UMa contact binaries found in both sequences may be explained by different mass ratios: W UMa contact binaries in the red sequence have two components with comparable masses (e.g. mass ratio q∼ 0.3-1.0), while those in the blue sequence have low mass ratios (e.g. q< 0.3). However, the fraction of the blue sequence in M30 cannot be reproduced by binary population synthesis if we assume the initial parameters of the binary sample to be the same as those of the field. This possibly indicates that dynamical effects on binary systems are very important in globular clusters.

§ INTRODUCTION

Blue stragglers are a class of anomalous stars that are brighter and bluer than the main-sequence (MS) turnoff stars in the color-magnitude diagram of globular clusters. They are very common objects in almost all Galactic globular clusters <cit.>, and can be used to probe the dynamical evolution of clusters <cit.>. Their locations in the color-magnitude diagram suggest that they may be MS stars more massive than typical MS turnoff stars <cit.>, and they should have evolved away from the main sequence. At present, there are two popular mechanisms to explain the formation of blue stragglers: binary evolution <cit.> and direct stellar collision <cit.>, and a series of works on the two mechanisms has been carried out in the recent ten years <cit.>. It is generally believed that binary evolution plays an important role in open clusters and in the field, while direct stellar collisions are likely important in dense environments such as globular clusters or the cores of open clusters <cit.>. However, observations show that the two mechanisms may be important in the same clusters <cit.>.

An important and perhaps critical clue to the origin of blue stragglers is the two blue-straggler sequences observed in the color-magnitude diagram of the globular cluster M30 <cit.>. Similar features are also found in NGC 362 <cit.> and NGC 1261 <cit.>. The occurrence of two sequences can be explained by the coexistence of blue stragglers formed through two different formation mechanisms enhanced by core collapse 1-2 Gyr ago <cit.>.
Each of the two sequences may correspond to a distinct formation mechanism <cit.>, because the blue sequence is outside the “low-luminosity boundary" defined by the binaries with ongoing mass transfer <cit.> and the red one is too red to be reproduced by collisional models <cit.>. However, NGC 1261, one of the three globular clusters with two blue-straggler sequences, does not show the classical signatures of core collapse <cit.>. It should be noted that three W UMa contact binaries have been detected in both sequences of blue stragglers in M30 <cit.>. W UMa contact binaries are very common among blue stragglers in globular clusters <cit.>, and they are thought to come mainly from binary evolution <cit.>. Hence, the formation of both sequences may be related to binary evolution. Meanwhile, <cit.> found that some blue stragglers produced by Case B binary evolution are below the low-luminosity boundary given by <cit.>. <cit.> showed that binary mergers can produce single blue stragglers very close to or even below the zero-age main sequence (ZAMS), i.e. in the blue sequence. In addition, <cit.> found that binary mergers can form a blue sequence of blue stragglers while blue-straggler binaries can lead to a red sequence. Therefore, more studies of the formation of blue stragglers by binary evolution are needed to check whether binary evolution can contribute to the formation of the blue-straggler blue sequence in globular cluster M30.

§ THE POSSIBILITY OF BINARY EVOLUTION CONTRIBUTING TO THE BLUE-SEQUENCE BLUE STRAGGLERS

Before performing detailed binary evolution calculations, we briefly discuss the possibility of binary evolution contributing to the blue-sequence blue stragglers. First, leaving contact binaries aside, binary evolution can produce two kinds of blue-straggler binaries (as shown in Figure 1): those that are still experiencing mass transfer <cit.> and those that have finished mass transfer <cit.>. The blue-straggler binaries in the mass-transfer phase have a “low-luminosity boundary" (about 0.75 mag brighter than the ZAMS) given by <cit.>, and can match the observed red-sequence blue stragglers in globular cluster M30 <cit.>. However, for the blue-straggler binaries that have finished mass transfer, consisting of a blue straggler and a white dwarf (the BS-WD binaries), their locations in the color-magnitude diagram of M30 depend on the contribution of the white dwarfs to the combined magnitudes of these binaries.

We can simply estimate the location of a BS-WD binary in the color-magnitude diagram as follows. We take binaries with a 0.8 M_⊙ primary (metallicity Z = 0.0003) as examples, and the secondary masses are taken to be 0.75, 0.7, 0.65, ..., 0.3 M_⊙. These binaries are assumed to experience Case B mass transfer at about 12 Gyr. The primaries transfer their envelopes (about 0.55 M_⊙ in the conservative case of mass transfer) to the secondaries, leaving a helium WD (about 0.25 M_⊙). The secondaries that gain mass will rejuvenate and evolve up along the main sequence to higher luminosity and effective temperature <cit.>. When mass transfer finishes (assumed to be at about 12.5 Gyr), the secondaries become blue stragglers with masses of 1.3, 1.25, 1.2, ..., 0.85 M_⊙ as rejuvenated stars.
We approximate this rejuvenation[According to the description of <cit.>, the rejuvenation of main-sequence stars with no convective core (0.3 ∼ 1.3 M_⊙) can be approximated by taking the remaining fraction of main-sequence life to be directly proportional to the remaining fraction of unburnt hydrogen at the centre, and adjusting the effective age t of the stars, t' = t × (τ'_ MS/τ_ MS).] as described by <cit.> and <cit.>. After the rejuvenation, these blue stragglers continue to evolve as single stars.

At the time of the end of mass transfer, the primaries are stripped giant stars, which are brighter and redder than the turnoff. So we simply assume that they have the same magnitude and color as a single 0.8 M_⊙ star on the red giant branch (e.g. V ∼ 2.5 and V-I ∼ 0.6). By combining them with their companions (the rejuvenated stars), the combined magnitude of these BS-WD binaries can be calculated using a formula given by <cit.>; for example, the V-band magnitude of the binary system is V = V_ 1-2.5log (1+10^(V_ 1-V_ 2)/2.5), where V_ 1 and V_ 2 are the V-band magnitudes of the two components, respectively. After mass transfer terminates, these primaries evolve quickly to a helium white dwarf and cool down. According to the equation of luminosity evolution of white dwarfs given by <cit.>, L_ WD = 635MZ^0.4/[A(t+0.1)]^1.4 (where M, Z, A and t are the mass, metallicity, effective baryon number and age of the white dwarfs, respectively), these white dwarfs would have much lower luminosities than the blue-straggler companion when they cool to the age of M30 (13 Gyr) from the end of mass transfer (12.5 Gyr). Here, we roughly assume that the V-band magnitude of the white dwarfs increases with their age (e.g. V_ WD=V_ BS+3 at 13.0 Gyr, V_ WD=V_ BS+4 at 14.0 Gyr), while these white dwarfs have a color V-I=-0.2.

In Figure 2a, we show the locations of these BS-WD binaries in the color-magnitude diagram at the time of the end of mass transfer (12.5 Gyr), at the age of M30 (13.0 Gyr), and at their subsequent evolution (e.g. 14.0 Gyr). When the mass transfer terminates, these binaries are above the “low-luminosity boundary", which is 0.75 mag brighter than the ZAMS as suggested by <cit.>. They then move to the region below the “low-luminosity boundary" when the primaries become helium WDs and cool to the age of M30. They mainly appear in the region between the ZAMS and the boundary at 13.0 Gyr and 14.0 Gyr, where the blue sequence defined by <cit.> lies. Therefore, it is possible that the BS-WD binaries contribute to the blue sequence in M30. It should be noted that at the time of the end of mass transfer, these binaries have very different locations from the blue-straggler components, as shown in Figure 2a, because the primaries (stripped giant stars) are brighter than the blue-straggler components and dominate the positions of these binaries. As the white dwarfs cool, the blue-straggler components lie closer to the BS-WD binaries, and almost overlap them at 14.0 Gyr. The faint blue-straggler components are close to the ZAMS because their progenitors are less evolved, low-mass stars. Considering the effect of non-conservative mass transfer (e.g. 50% of the transferred mass is assumed to be lost during mass transfer), the BS-WD binaries are still below the “low-luminosity boundary". However, there are then no bright and blue BS-WD binaries because of the decrease of the accreted mass of the secondaries.
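To make the arithmetic of this estimate easy to check, the short script below evaluates the two formulas quoted above for the fiducial case of a 0.25 M_⊙ helium white dwarf with Z = 0.0003 that has cooled for 0.5 Gyr. This is a minimal illustrative sketch rather than the code used for the figures: the function names are ours, and we assume the age t in the cooling formula is measured in Myr with an effective baryon number A ≈ 4 for helium white dwarfs.

```python
import numpy as np

def combined_mag(m1, m2):
    """Combined magnitude of two stars observed as one point (equation 1)."""
    return m1 - 2.5 * np.log10(1.0 + 10.0**((m1 - m2) / 2.5))

def wd_luminosity(m_wd, z, a_baryon, t_myr):
    """White-dwarf cooling luminosity in L_sun (equation 2); age assumed in Myr."""
    return 635.0 * m_wd * z**0.4 / (a_baryon * (t_myr + 0.1))**1.4

# A 0.25 Msun helium WD (A ~ 4) with Z = 0.0003 after 0.5 Gyr of cooling:
print(wd_luminosity(0.25, 3.0e-4, 4.0, 500.0))   # ~1.5e-4 L_sun, i.e. very faint

# With the assumption V_WD = V_BS + 3 at 13.0 Gyr, the WD barely shifts the binary:
v_bs = 3.5                                       # illustrative blue-straggler magnitude
print(combined_mag(v_bs, v_bs + 3.0) - v_bs)     # about -0.07 mag
```

Because the cooled white dwarf changes the combined magnitude by only a few hundredths of a magnitude, the BS-WD binary is observed essentially at the position of the blue straggler itself, i.e. between the ZAMS and the “low-luminosity boundary".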
Moreover, in Figure 2b, we compare these BS-WD binaries with the collision isochrones corresponding to ages of 1 and 2 Gyr, which agree with the observed blue sequence as shown in Figure 4 of <cit.> and given by <cit.>. These collisional isochrones have been transferred into the absolute plane using a distance modulus of (m-M)_ v = 15.04 mag and a reddening of E(V-I) = 0.112 mag, where the values of the distance modulus and reddening are obtained by comparing the location of our 13 Gyr isochrone of single stars and ZAMS (the blue dotted and blue dashed lines) with those lines in <cit.> (the red dotted and red dashed lines). As shown in Figure 2b, the BS-WD binaries are in a similar region to the collision isochrones, which lie between the ZAMS and the “low-luminosity boundary".

§ BINARY EVOLUTION CALCULATIONS

We have carried out a detailed study of the formation of blue stragglers from binary evolution using Eggleton's stellar evolution code. This code is a variant of the code ev described, in its initial version, by <cit.> and <cit.>, updated during the last four decades <cit.>. The current version of ev (private communication 2003) is obtainable on request from [email protected], along with data files and a user manual. We calculate the evolution of binaries with metallicity Z = 0.0003, which is close to the metallicity ([Fe/H]=-1.9) of M30 <cit.>. We construct a grid of conservative binary evolutionary models from the ZAMS to the age of M30 (13 Gyr), with the following ranges of initial primary mass M_10, initial mass ratio (q_0=M_20/M_10) and initial orbital period P_0: log M_10 = -0.110, -0.105, -0.100, ..., -0.020, log (1/q_0) = 0.025, 0.050, 0.075, ..., 0.600, log (P_0/P_ ZAMS) = 0.025, 0.050, 0.075, ..., 1.000, where P_ ZAMS is the period at which the primary would just fill its Roche lobe on the ZAMS <cit.>. We assume that the binary orbit is circular because it will circularize quickly during mass transfer[The originally eccentric orbits would be circularized quickly during mass transfer <cit.>, and only a few evolved binaries have eccentric orbits <cit.>. However, the parameter space for the formation of blue stragglers at 13 Gyr may be smaller if mass transfer does not always lead to circular orbits, as suggested by <cit.>. This is because mass transfer in eccentric binaries may be episodic (occurring only at periastron), which results in less massive, more evolved accretor stars than in circular binaries.]. In Eggleton's stellar evolution code, the magnitude of each component is derived from its luminosity and effective temperature based on the table given by <cit.>. We then calculate the combined magnitude of blue-straggler binaries using equation (1).

Four representative examples of our binary evolution calculations are shown in Figure 3. The first example is a blue-straggler binary that is experiencing mass transfer and appears in the red sequence at the age of M30. The other three examples are BS-WD binaries that can evolve into the blue sequence. However, at the age of M30, the second example appears in the blue sequence, while the third and fourth examples appear in the red sequence.

Figures 3a and 3b show the evolutionary track of the first example, including the combined evolutionary tracks of the binary system and both components. Mass transfer begins at 10.44 Gyr, and this system evolves into the blue-straggler region along a line parallel to the “low-luminosity boundary".
This binary is in the observed red-sequence region at 13 Gyr, while it is still in the mass-transfer stage. At this time, the donor star is on the red giant branch, and its luminosity is high enough to significantly change the position of this binary relative to the accretor star (△V = -0.2; △(V-I) = -0.14) in the color-magnitude diagram. Finally, this binary leaves the blue-straggler region at about 13.63 Gyr.

For the second example, in Figures 3c and 3d, mass transfer between the two stars begins and terminates at 9.91 and 10.86 Gyr, respectively, and then this BS-WD binary evolves across the “low-luminosity boundary". At the age of globular cluster M30 (13 Gyr), this binary is in the region between the ZAMS and the “low-luminosity boundary", where the observed blue sequence in M30 lies. It should be noted that after the end of mass transfer, the location of this BS-WD binary depends on two timescales: (1) the timescale for the white dwarf to cool until it contributes almost nothing to the V-band magnitude (the BS-WD binary reaches its largest V-band magnitude at 11.08 Gyr), which is about 0.22 Gyr; (2) the remaining main-sequence lifetime of the blue straggler (about 2 Gyr). The cooling timescale is much shorter than the remaining MS lifetime of the blue straggler. Therefore, this BS-WD binary can appear below the “low-luminosity boundary", like a “single" blue straggler. Although the third and fourth examples (Figures 3e and 3g) are BS-WD binaries that have evolutionary tracks similar to the second example, they appear above the “low-luminosity boundary" in the color-magnitude diagram at the age of M30, either before they evolve into the blue sequence or after they evolve away from the blue sequence.

The products of binary evolution, the blue-straggler binaries in the two sequences, may have different initial binary parameters. We show their initial binary parameters in the initial orbital period-secondary mass planes for different initial primary masses (Figure 4). They are roughly classified according to whether they lie above or below the “low-luminosity boundary" (0.75 mag brighter than the ZAMS). The progenitors of the blue sequence are constrained to host 0.80-0.93 M_⊙ primaries and 0.23-0.76 M_⊙ secondaries in orbits with initial orbital periods of 0.4-2.3 d. The range of initial primary mass for the red sequence is slightly larger than that for the blue sequence. However, the blue stragglers in the red sequence have shorter initial orbital periods and more massive initial secondaries than those in the blue sequence with the same initial primary masses. In general, the blue sequence mainly comes from case B binary evolution, while the red sequence mainly comes from case A binary evolution.

§ BINARY POPULATION SYNTHESIS

In order to estimate the distribution of blue-straggler binaries, we performed a series of Monte Carlo simulations (see Table 1) based on our grid of conservative binary evolutionary models described above. The following input is adopted for the simulations <cit.>. (1) The initial mass function (IMF) of <cit.> is adopted. An alternative IMF of <cit.> is also considered. (2) We adopt three initial mass-ratio distributions: a constant mass-ratio distribution, a rising mass-ratio distribution, and one where the component masses are uncorrelated. (3) The distribution of separations is taken to be constant in log a for wide binaries, where a is the orbital separation. We also consider the different period distribution of <cit.>. (4) A circular orbit is assumed for all binaries.
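To illustrate how the initial binary parameters enter these simulations, the sketch below draws a binary sample from the three distributions listed above. It is a schematic of the sampling step only: the Miller & Scalo IMF is realized here through the generating function of Eggleton, Fitchett & Tout (1989), and the functional form n(q) ∝ q for the rising mass-ratio distribution as well as the separation limits are our illustrative assumptions, not necessarily the exact choices behind Table 1.

```python
import numpy as np

rng = np.random.default_rng(2017)

def primary_mass(n):
    # Monte Carlo generator approximating the Miller & Scalo (1979) IMF
    # (generating function of Eggleton, Fitchett & Tout 1989); mass in M_sun.
    x = rng.random(n)
    return 0.19 * x / ((1.0 - x)**0.75 + 0.032 * (1.0 - x)**0.25)

def mass_ratio(n, kind="constant"):
    x = rng.random(n)
    if kind == "constant":       # n(q) = 1 for 0 < q <= 1
        return x
    if kind == "rising":         # assumed form n(q) proportional to q
        return np.sqrt(x)
    if kind == "uncorrelated":   # both masses drawn independently from the IMF
        m_a, m_b = primary_mass(n), primary_mass(n)
        return np.minimum(m_a, m_b) / np.maximum(m_a, m_b)
    raise ValueError(kind)

def separation(n, a_min=3.0, a_max=1.0e4):
    # Constant in log a for wide binaries; the limits (in R_sun) are illustrative.
    return 10.0**rng.uniform(np.log10(a_min), np.log10(a_max), n)

m1 = primary_mass(100000)
q = mass_ratio(100000, kind="constant")
a = separation(100000)
```

Each sampled system is then matched to the nearest model in the evolutionary grid described above and followed to 13 Gyr; the systems that fall in the blue-straggler region at that age are classified by whether they lie above or below the “low-luminosity boundary".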
Figure 5 shows the result of simulation set 1 at 13 Gyr in the color-magnitude diagram. It is striking that this simulation agrees quite well with the distribution of the observed blue stragglers in M30. There is a blue-straggler sequence, similar to the observed blue sequence, which is between the ZAMS and the “low-luminosity boundary" and about half a magnitude brighter than the ZAMS. Meanwhile, the blue stragglers above the “low-luminosity boundary" cover a wider, sparser area, which agrees with the distribution of the observed red sequence. The blue-sequence binaries have a blue straggler orbiting a white dwarf, while the red-sequence binaries include binaries that are experiencing mass transfer (∼60%), binaries that have just terminated mass transfer (∼10%), and binaries in which the blue stragglers have evolved away from the blue sequence (∼30%). The results of the other Monte Carlo simulations (sets 2 to 5) are plotted in Figure 6, and these simulations give results similar to simulation set 1. These four simulations also show the presence of two blue-straggler sequences that are in agreement with the observed distribution of blue stragglers in M30.

To estimate the total number of blue stragglers and the fraction of the blue sequence in M30 from binary population synthesis, we assume an initial binary fraction (f_ b) of M30 of, e.g., 25%, which is half of the binary fraction in the solar neighbourhood <cit.>. As an alternative, we also consider a binary fraction of 15%. These fractions are assumed to be higher than the binary fraction of M30 at the present day <cit.> because the binary fraction decreases with time due to dynamical interactions and binary evolution <cit.>. In addition, the initial mass of M30 is simply assumed to be twice the current mass of M30 <cit.>, as globular clusters may have lost a significant fraction of their total mass driven by relaxation, stellar evolution and the tidal field of the Galaxy <cit.>. Because binaries are more difficult to lose from globular clusters than single stars, the binary fraction in the lost stars is assumed to be half of the initial binary fraction.

The results are summarized in Table 2 for the total numbers of blue stragglers (N_ total) and the fraction of the blue sequence (N_ blue/N_ total). The total numbers of blue stragglers range from 10 to 99, and the fraction of the blue sequence ranges from 60% to 70%. It is clear that the initial distributions of binaries are very important in determining the formation of blue stragglers from binary evolution. The uncorrelated mass-ratio distribution (set 3) or the nonconstant distribution of orbital separations (set 5) gives a smaller N_ total and a larger N_ blue/N_ total, as compared to simulation set 1. On the other hand, the rising mass-ratio distribution (set 2) or the IMF of <cit.> gives a larger N_ total and a smaller N_ blue/N_ total. The total number of blue stragglers depends strongly on f_ b, although the fraction of the blue sequence does not depend on f_ b. Based on the observed results given by <cit.> and <cit.> (as shown in Figure 5), the observed values of N_ total and N_ blue/N_ total are 49 and 49%, respectively (25 red-sequence stars and 24 blue-sequence stars). The results of binary population synthesis can explain the observed total number of blue stragglers in M30, but fail to explain the fraction of the blue sequence in M30.

§ DISCUSSION

§.§ The fraction of blue sequence

In our study, the fraction of the blue sequence cannot be reproduced by binary population synthesis.
Our simulations predict that 60%-70% of the total blue stragglers should be observed in the blue sequence, while the observed blue sequence only contains 49% of the total blue stragglers in M30[If the four red-sequence stars below the “low-luminosity boundary" (red points with blue circles in Figure 5) are classified as blue sequence, the observed fraction of the blue sequence is 57%.]. If the contribution of direct stellar collisions and binary mergers to the blue sequence is considered, the problem is more challenging.

One possible explanation is mass loss during mass transfer, which is not considered in our present study; as a result, we may overestimate the fraction of blue stragglers in the blue sequence. As examples, in Figure 7 we compare the relative regions of the progenitors of the two sequences in the conservative and non-conservative cases (e.g. 50% of the transferred mass is assumed to be lost from the system). We find that for the non-conservative cases, the progenitor region of the blue sequence is reduced more remarkably than the region of the red sequence. We should also note that the brightest blue stragglers are significantly fainter under the non-conservative assumption than in the conservative case, since the accretors cannot gain as much mass. This may decrease the percentage of the blue sequence in the binary scenario.

Another possible explanation is the uncertainties in the distribution of binary parameters in globular clusters, which are important to binary population synthesis. The uncertainties do not change the appearance of two blue-straggler sequences, as shown in Figure 6, but significantly alter the quantitative estimates of the total number of blue stragglers and the fraction of the blue sequence. At present, our results of binary population synthesis are based on the assumptions adopted for the field. It is very likely that dynamical interactions in globular clusters alter the parameter distribution of the primordial binary population. For example, the Heggie-Hills law <cit.> tends to make the binaries closer, and exchange encounters (which often eject the least massive of the three stars) are more likely to increase the mass ratio of binaries. Based on the initial distribution of binaries shown in Figure 4, these dynamical effects can decrease the fraction of the blue sequence by bringing binary systems from Case B evolution to Case A evolution. Therefore, it is very important to understand these uncertainties in the distribution of binaries in globular clusters.

Our results do not rule out the contribution of dynamical interactions, especially the core collapse, to the formation of the two blue-straggler sequences in M30. Our results show that binary evolution can produce a blue sequence below the “low-luminosity boundary", but this sequence is not as tight as the blue sequence observed in M30. This tight blue sequence may come from core collapse <cit.>, which limits the time range for the formation of blue-sequence blue stragglers from binary evolution and direct stellar collisions.

From the present results of binary evolution, we predict that the majority of binary-origin blue stragglers in the blue sequence should have a low-luminosity white-dwarf companion if they have not already been disrupted by dynamical interactions. Meanwhile, not all blue stragglers in the red sequence are experiencing mass transfer, and some of them may also have a white-dwarf companion.
Moreover, the blue sequence may show chemical anomalies (such as a significant depletion of carbon and oxygen), similar to the red sequence with O-depletion <cit.>, because chemical anomalies are expected for the binary-origin blue stragglers <cit.> but not for the collision-origin blue stragglers <cit.>. Future observations of the two sequences of blue stragglers would determine whether binary evolution can contribute to both sequences of blue stragglers in M30.

§.§ Comparisons with previous studies

Our results are consistent with previous theoretical studies. It has been indicated that binary evolution can produce blue stragglers below the “low-luminosity boundary" from binary mergers <cit.> and case B mass transfer <cit.>. <cit.> have shown that case B binary evolution can reproduce a bluer sequence of blue stragglers than case A binary evolution. Moreover, observations have shown that in the color-magnitude diagram of the open cluster NGC 188, some binary blue stragglers are close to the ZAMS, as shown in Figure 1 of <cit.>. So far these observed blue stragglers have been interpreted to have a binary origin with white-dwarf companions <cit.>, similar to the BS-WD binaries in our models.

<cit.> investigated the binary origin of blue stragglers in M30 by using a different version of Eggleton's stellar evolution code. They showed that the binary models nicely match the observed red sequence in M30, but cannot attain the observed location of the blue sequence. Their calculations missed the binary models that can produce blue stragglers in the blue sequence, maybe because their grid covers a larger range of primary mass, from 0.7 M_⊙ to 1.1 M_⊙, but with larger steps of 0.1 M_⊙. Moreover, their code stopped in some cases because of numerical instabilities that prevented following the complete evolutionary tracks of these systems <cit.>.

§.§ Special or common phenomenon in globular clusters?

We suggest that the age of M30 (13 Gyr) is not special for the formation of two blue-straggler sequences in globular clusters from binary evolution, because accretor stars with a white-dwarf companion become inevitable when mass transfer is finished. For example, a binary system with M_10=1.0 M_⊙ (M_20=0.45 M_⊙, P_0=0.677 d) can be located in the blue sequence at 8 Gyr, as shown in Figure 8, which suggests that binaries with different primary masses may contribute to the blue stragglers in the blue sequence at various ages. Therefore, the appearance of two sequences produced by binary evolution may not be a short-lived phenomenon in globular clusters.

However, clearly separated sequences are not easy to observe. It should be noted that in the other two globular clusters (NGC 362 and NGC 1261), the gaps between the two sequences are much smaller than the widths of the blue sequences. This may be because of reddening variations, distance variations, observational errors, etc. For example, the core of a globular cluster may be the most probable place to show two clearly separated sequences, in which all blue stragglers can be thought to have exactly the same distance and reddening, without the pollution from the blue stragglers in the outer region of the cluster, which may have a slightly different reddening and distance modulus. Moreover, the photometric error needs to be significantly smaller than the shift caused by the companions in the color-magnitude diagram. Considering the subsequent evolution of blue-sequence blue stragglers, some of them may also appear between the two sequences, e.g.
the brightest blue-sequence blue straggler observed in M30, which is very close to the “low-luminosity boundary" as shown in Figure 5.

§.§ W UMa contact binaries in the blue-straggler region

Our simulations cannot take into account W UMa contact binaries because of the numerical difficulty of constructing their physical models, which is still one of the most important unsolved problems of stellar evolution <cit.>. However, <cit.> has shown that the total luminosity of a W UMa contact binary becomes more and more similar to that of the bright component as the mass ratio decreases, since the contribution from the faint one becomes smaller and smaller. Here we roughly estimate their locations in the color-magnitude diagram in a similar way to <cit.> and <cit.>, as follows. Observations show that the two components of these binaries have nearly equal surface effective temperatures (T_1=T_2=T), while their radius ratio is constrained by Roche geometry, R_2/R_1 ≈ (M_2/M_1)^0.46 <cit.>. Their total surface luminosities are nearly equal to their total nuclear luminosities <cit.>. Considering the luminosity transfer between the two components and taking the primaries to be still on the main sequence <cit.>, we can obtain the luminosities and effective temperatures of the two components of contact binaries, and then their combined magnitudes when these binaries are observed as one point.

Figure 9 shows the distribution of contact binaries with mass ratios q=0.1, 0.3, 0.5, 0.7, 0.9 in the color-magnitude diagram, where the primary masses range from 0.91 M_⊙ to 1.44 M_⊙. These contact binaries become brighter and redder as the mass ratio increases, and this shift agrees with the results given by <cit.>. Contact binaries with different mass ratios can cover the observed distribution of W UMa contact binaries in M30, and we find that the observed W UMa contact binary in the blue sequence corresponds to a smaller mass ratio (e.g. q < 0.3) than the two W UMa contact binaries on the red sequence (e.g. q > 0.3). This suggests that the presence of W UMa contact binaries in both sequences of blue stragglers may be due to their different mass ratios.

Moreover, we suggest that W UMa contact binaries could evolve from the red sequence to the blue sequence, when they evolve into systems with smaller mass ratios due to dynamical evolution <cit.>. At present, there are about a dozen W UMa systems with known mass ratios in globular clusters <cit.>, and only half of them are located in the blue-straggler region. According to the color-magnitude diagram of NGC 6397 given by <cit.>, the W UMa system V8 seems to be a blue-sequence blue straggler, and it has a low mass ratio <cit.>. This may be consistent with our prediction.

Despite a similar binary-evolution origin, W UMa contact binaries in M30 have different Roche-lobe-filling situations from other blue-straggler binaries. For the other blue-straggler binaries, those in the red sequence are semi-detached or detached BS-WD binaries, while those in the blue sequence are mainly detached BS-WD binaries. However, W UMa contact binaries can appear in different sequences in the color-magnitude diagram for a reason similar to that for the other blue-straggler binaries: the contributions of the less massive companions are different when they are observed as a single star. Therefore, the binary scenario is not incompatible with the observed W UMa contact binaries in M30.
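As a rough consistency check of this estimate, the sketch below computes how much brighter a contact binary is than its primary alone, using only the two ingredients quoted above: equal effective temperatures and the Roche relation R_2/R_1 ≈ (M_2/M_1)^0.46, which together give L_2/L_1 = (R_2/R_1)^2 = q^0.92. It neglects the luminosity transfer between the components, so it is an illustration rather than the full calculation behind Figure 9.

```python
import numpy as np

def wuma_delta_v(q):
    # Magnitude offset of a contact binary relative to its primary alone,
    # assuming T_1 = T_2 and L2/L1 = q**0.92 from Roche geometry.
    return -2.5 * np.log10(1.0 + q**0.92)

for q in (0.1, 0.3, 0.5, 0.7, 0.9):
    print(f"q = {q:.1f}: Delta V = {wuma_delta_v(q):+.2f} mag")
```

Under these assumptions the offset grows from about -0.1 mag at q = 0.1 to about -0.7 mag at q = 0.9, consistent with the trend in Figure 9: systems with low mass ratios stay close to the position of a single star (the blue sequence), while systems with comparable components are shifted well above it (the red sequence). The additional reddening with increasing q comes from the luminosity-transfer treatment and is not captured by this sketch.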
§ CONCLUSIONS

In this paper, we explore the possibility that binary evolution contributes to the formation of blue stragglers in the two sequences in globular cluster M30. Our results show that primordial binaries may contribute to the blue sequence of blue stragglers in M30. Considering the W UMa contact binaries observed in both sequences, the possibility of binary evolution contributing to both sequences should not be ruled out. We suggest that this feature, a blue sequence with a much wider red sequence, may not be uncommon among globular clusters. However, the observed fraction of the blue sequence cannot be reproduced by binary population synthesis with the initial distribution of field binaries, which suggests that the initial distribution of binaries in globular clusters may be modified by dynamical interactions or be very different from that in the field.

It is a pleasure to thank an anonymous referee for many valuable suggestions and comments, which improved the paper greatly. We thank Professor L. Deng, Professor F. Ferraro, Professor A. Sills and Dr Y. Xin for the helpful discussions. This work is supported by the Natural Science Foundation of China (Nos 11573061, 11521303, 11733008, 11422324, 11773065 and 11661161016), and by the Yunnan province (Nos 2017HC018, 2013HA005, 2015FB190).

[Chen & Han2004]Chen2004 Chen, X., & Han, Z. 2004, MNRAS, 355, 1182 [Chen & Han2008a]Chen2008 Chen, X., & Han, Z. 2008, MNRAS, 384, 1263 [Chen & Han2008b]Chen2008b Chen, X., & Han, Z. 2008, MNRAS, 387, 1416 [Chen & Han2009]Chen2009 Chen, X., & Han, Z. 2009, MNRAS, 395, 1822 [Dalessandro et al.2013]Dalessandro2013 Dalessandro, E., Ferraro, F. R., Massari, D., et al. 2013, ApJ, 778, 135 [de Mink et al.2007]deMink2007 de Mink, S. E., Pols, O. R., & Hilditch, R. W. 2007, A&A, 467, 1181 [Duquennoy & Mayor1991]Duquennoy1991 Duquennoy, A., & Mayor, M. 1991, A&A, 248, 485 (DM91) [Eggleton1971]Eggleton1971 Eggleton, P. P. 1971, MNRAS, 151, 351 [Eggleton1972]Eggleton1972 Eggleton, P. P. 1972, MNRAS, 156, 361 [Eggleton et al.1973a]Eggleton1973a Eggleton, P. P., Faulkner, J., & Flannery, B. P. 1973, A&A, 23, 325 [Eggleton et al.1973b]Eggleton1973b Eggleton, P. P. 1973, MNRAS, 163, 279 [Eggleton & Kiseleva-Eggleton2002]Eggleton2002 Eggleton, P. P., & Kisseleva-Eggleton, L. 2002, ApJ, 575, 461 [Eggleton & Kiseleva-Eggleton2006]Eggleton2006 Eggleton, P. P., & Kisseleva-Eggleton, L. 2006, Ap&SS, 304, 75 [Eggleton 2006]Eggleton2006a Eggleton, P. P. 2006, Evolutionary Processes in Binary and Multiple Systems. Cambridge Univ. Press, Cambridge [Eggleton 2010]Eggleton2010 Eggleton, P. P. 2010, NewAR, 54, 45 [Ferraro et al.1993]Ferraro1993 Ferraro, F. R., Pecci, F. F., Cacciari, C., et al. 1993, AJ, 106, 2324 [Ferraro et al.1995]Ferraro1995 Ferraro, F. R., Fusi Pecci, F., & Bellazzini, M. 1995, A&A, 294, 80 [Ferraro et al.1997]Ferraro1997 Ferraro, F. R., Paltrinieri, B., Fusi Pecci, F., et al. 1997, A&A, 324, 915 [Ferraro et al.2004]Ferraro2004 Ferraro, F. R., Beccari, G., Rood, R. T., et al. 2004, ApJ, 603, 127 [Ferraro et al.2006]Ferraro2006 Ferraro, F. R., Sabbi, E., Gratton, R., et al. 2006, ApJL, 647, L53 [Ferraro et al.2009]Ferraro2009 Ferraro, F. R., Beccari, G., Dalessandro, E., et al. 2009, Nature, 462, 1028 (F09) [Ferraro et al.2012]Ferraro2012 Ferraro, F. R., Lanzoni, B., Dalessandro, E., et al. 2012, Nature, 492, 393 [Flower1996]Flower1996 Flower, P. J. 1996, ApJ, 469, 335 [Geller & Mathieu2011]Geller2011 Geller, A. M., & Mathieu, R. D. 2011, Nature, 478, 356 [Glebbeek et al.2008]Glebbeek2008 Glebbeek, E., Pols, O.
R., & Hurley, J. R. 2008, A&A, 488, 1007 [Gosnell et al.2008]Gosnell2015 Gosnell, N. M., Mathieu, R. D., & Geller, A. M. 2015, A&A, 814, 163 [Gosnell et al.2014]Gosnell2014 Gosnell, N. M., Mathieu, R. D., & Geller, A. M., et al. 2014, ApJL, 783, L8[Halbwachs et al.2003]Halbwachs2003 Halbwachs, J. L., Mayor, M., Udry, S., & Arenou, F. 2003, A&A, 397, 159[Han et al.1994]Han1994 Han, Z., Podsiadlowski, Ph., & Eggleton, P. P. 1994, MNRAS, 270, 121[Han et al.1995]Han1995 Han, Z., Podsiadlowski, Ph., & Eggleton, P. P. 1995, MNRAS, 272, 800 [Heggie1975]Heggie1975 Heggie D. C., 1975, MNRAS, 173, 729[Hills1975]Hills1975 Hills, J. G. AJ, 80, 809[Hills & Day1976]Hills1976 Hills, J. G., & Day, C. A. 1976, ApL, 17, 87 [Hurley et al.2000]Hurley2000 Hurley, J. R., Pols, O. R., & Tout, C. A. 2000, MNRAS, 315, 543 [Hurley et al.2002]Hurley2002 Hurley, J. R., Tout, C. A., & Pols, O. R. 2002, MNRAS, 329, 897[Ivanova et al.2005]Ivanova2005 Ivanova, N., Belczynski, K., Fregeau, J. M., & Rasio, F. A., 2005, MNRAS, 358, 572[Jiang et al.2009]Jiang2009 Jiang D., Han Z., Jiang T. & Li L., 2009, MNRAS, 396, 2176 [Jiang et al.2014a]Jiang2014a Jiang, D., Han, Z., & Li, L. 2014a, MNRAS, 438, 859 [Jiang et al.2014b]Jiang2014b Jiang, D., Han, Z., & Li, L. 2014b, ApJ, 789, 88 [Kallrath et al.1992]Kallrath1992 Kallrath, J., Milone, E.F., Stagg, C.R. 1992, ApJ, 389, 590 [Kaluzny et al.2006]Kaluzny2006 Kaluzny, J., Thompson, I. B., Krzeminski, W., Schwarzenberg-Czerny, A. 2006, MNRAS, 365, 548 [Kaluzny et al.2007]Kaluzny2007 Kaluzny, J., Thompson, I. B., Rucinski, S. M., et al. 2007, AJ, 134, 541 [Kuiper1941]Kuiper1941 Kuiper, G. P. 1941, ApJ, 93, 133[Leigh et al.2013]Leigh2013 Leigh, N., Knigge, C., Sills, A., et al. 2013, MNRAS, 428, 897[Li et al.2013]Li2013 Li, K., & Qian, S.-B. 2013, NewA, 25, 12[Li et al.2008]Li2008 Li, L., Zhang, F., Han, Z., Jiang, D., Jiang, T. 2008, MNRAS, 387, 97[Lombardi et al.1995]Lombardi1995 Lombardi, J. C. J., Rasio, F. A. & Shapiro, S. L. 1995, ApJL, 445, L117[Lovisi et al.2013]Lovisi2013 Lovisi, L., Mucciarelli, A., Lanzoni, B., et al. 2013, ApJ, 772, 148[Lu et al.2010]Lu2010 Lu, P., Deng, L. C., & Zhang, X. B. 2010, MNRAS, 409, 1013[Mathieu & Geller2009]Mathieu2009 Mathieu, R. D., & Geller, A. M. 2009, Natur, 462, 1032 [McCrea1964]McCrea1964 McCrea, W. H., 1964, MNRAS, 128, 147 [McVean et al.1997]McVean1997 McVean, J. R., Milone, E. F., Mateo, Mario, Yan, L. 1997, ApJ 481, 782 [Miller & Scalo1979]Miller1979 Miller, G. E., & Scalo, J. M. 1979, ApJS, 41, 513[Milone et al.2012]Milone2012 Milone A. P., Piotto, G., Bedin, L. R., et al. 2012, A&A, 540, 16[Nelson & Eggleton2001]Nelson2001 Nelson, C. A., & Eggleton, P. P. 2001, ApJ, 552, 664 [Pietrukowicz & Kaluzny2001]Pietrukowicz2004 Pietrukowicz, P., & Kaluzny, J. 2004, AcA, 54, 19[Piotto et al.2004]Piotto2004 Piotto, G., De Angeli, F., & King, I. R. et al. 2004, ApJL, 604, L109 [Pols et al.1995]Pols1995 Pols, O. R., Tout, C. A., Eggleton, P. P., & Han, Z. 1995, MNRAS, 274, 964[Rubenstein2001]Rubenstein2001 Rubenstein E. P., 2001, AJ, 121, 3219[Rucinski1998]Rucinski1998 Rucinski, S. M. 1998, AJ, 116, 2998[Rucinski2000]Rucinski2000 Rucinski, S. M. 2000, AJ, 120, 319 [Rucinski2004]Rucinski2004 Rucinski, S. M. 2004, NewAR, 48, 703 [Sandquist et al.1999]Sandquist1999 Sandquist, E. L., Bolte, M., Langer, G. E., Hesser, J. E. & de Oliveira, C. M. 1999, ApJ, 518, 262[Scalo1986]Scalo1986 Scalo, J. M. 1986, Fund. Cosm. Phys., 11, 1 (S86)[Sepinsky et al.2007]Sepinsky2007 Sepinsky, J. F., Willems, B., Kalogera, V. & Rasio, F. A. 
2007, ApJ, 667, 1170[Sepinsky et al.2009]Sepinsky2009 Sepinsky, J. F., Willems, B., Kalogera, V. & Rasio, F. A. 2009, ApJ, 702, 1387[Sepinsky et al.2010]Sepinsky2010 Sepinsky, J. F., Willems, B., Kalogera, V. & Rasio, F. A. 2010, ApJ, 724, 546 [Sills & Bailyn1999]Sills1999 Sills, A., & Bailyn, C. D. 1999, ApJ, 513, 428[Sills et al.2000]Sills2000 Sills, A., Bailyn, C. D., Edmonds, P. D., & Gilliland, R. L. 2000, ApJ, 535, 298[Sills et al.2002]Sills2002 Sills, A., Adams, T., Davies, M. B., & Bate, M. R. 2002, MNRAS, 332, 49 [Sills & Lattanzio2009]Sills2009 Sills, A., & Lattanzio, J. 2009, ApJ, 692, 1411[Simunovic et al.2014]Simunovic2014 Simunovic, M., Puzia, T. H., & Sills, A. 2014, ApJL, 795, L10 [Stepien & Kiraga2015]Stepien2015 Stepien, K., & Kiraga, M. 2015, A&A, 577, 117[Tian et al.2006]Tian2006 Tian, B., Deng, L., Han, Z., & Zhang, X. B. 2006, A&A, 455, 247[Tout et al.1997]Tout1997 Tout, C. A.; Aarseth, S. J.; Pols, O. R.; Eggleton, P. P. 1997, MNRAS, 291, 732 [Vesperini & Heggie1997]Vesperini1997 Vesperini E., & Heggie D. C. 1997 MNRAS 289,898 [Vilhu1982]Vilhu1982 Vilhu, O. 1982, A&A, 109, 17 [Webbink2003]Webbink2003 Webbink R. F., 2003, in Turcotte S., Keller S. C., Cavallo R. M., eds, ASP Conf. Ser. Vol. 293, 3D Stellar Evolution. Astron. Soc. Pac., San Francisco, p. 76[Xin et al.2007]Xin2007 Xin, Y., Deng, L., & Han, Z. 2007, ApJ, 660, 319 [Xin et al.2015]Xin2015 Xin, Y., Ferraro, F. R., Lu, P., et al. 2015, ApJ, 801, 67[Yakut & Eggleton2005]Yakut2005 Yakut, K., Eggelton, P. P. 2005, ApJ, 629, 1055
http://arxiv.org/abs/1709.09643v1
{ "authors": [ "Dengkai Jiang", "Xuefei Chen", "Lifang Li", "Zhanwen Han" ], "categories": [ "astro-ph.SR" ], "primary_category": "astro-ph.SR", "published": "20170927172848", "title": "Contribution of primordial binary evolution to the two blue-straggler sequences in globular cluster M30" }
http://arxiv.org/abs/1709.09164v2
{ "authors": [ "Martin Schmaltz", "Neal Weiner" ], "categories": [ "hep-ph" ], "primary_category": "hep-ph", "published": "20170926175905", "title": "A Portalino to the Dark Sector" }
Université Paris-Dauphine, PSL Research University, CNRS, CEREMADE, 75016 Paris, France [email protected] investigate the long time behavior of a passive particle evolving in a one-dimensional diffusive random environment, with diffusion constant D.We consider two cases: (a) The particle is pulled forward by a small external constant force, and (b) there is no systematic bias.Theoretical arguments and numerical simulations provide evidence that the particle is eventually trapped by the environment.This is diagnosed in two ways:The asymptotic speed of the particle scales quadratically with the external force as it goes to zero,and the fluctuations scale diffusively in the unbiased environment, up to possible logarithmic corrections in both cases.Moreover, in the large D limit (homogenized regime), we find an important transient region giving rise to other, finite-size scalings, and we describe the cross-over to the true asymptotic behavior. Response to a small external force and fluctuations of a passive particle in a one-dimensional diffusive environment François Huveneers December 30, 2023 ==================================================================================================================== § INTRODUCTIONExtending the paradigms of statistical mechanics to the study of active matter is part of the main issues in contemporary theoretical physics <cit.>.Random walks in static or dynamical random environments constitute a good case study to analyze numerous out of equilibrium phenomena <cit.>. More specifically, a variety of interesting behaviors can be observed for particles advected by a viscous fluid;as it turns out, an initially uniform density of passive particles may display aging, clustering, phase separation and intermittency as time evolves <cit.>. In this paper we consider a passive particle driven by a one-dimensional (d=1) time dependent potential fluctuating diffusively (Edwards Wilkinson dynamics),as shown on Fig. <ref>.This system is a good candidate to host the phenomena mentioned above. Moreover, it is a rather natural set-up to consider since diffusive fluctuations occur in all typical extended systems that satisfy local equilibrium and have extensive conserved quantities. However, predicting the long time behavior of the passive particle turns out to be very puzzling in d=1 <cit.> (in contrast to a lot of progress made for divergent free fields in d≥ 2 <cit.>). Indeed,since time correlations decay only as t^-d/2, one expects memory effects to play a dominant role in d=1, but it is hard to decide what their influence actually is.Our study reveals that their role is to trap the particle: Potential barriers confine it to a certain region of space for a finite time, and the behavior of the particle is eventually dominated by the dynamics of the barriers. For short times, the mechanism is already visible on Fig. <ref>, while on longer timescales, it is due to the low modes of the potential;see <cit.> for the analogous phenomenology in a static environment. In order to satisfactorily check our understanding, we consider two different set-ups and analyze them consistently. First we analyze the differential mobility of the particle, i.e. 
its response to a small external force,and second we consider its fluctuations in a unbiased environment.In equilibrium, these two quantities are related through the celebrated Sutherland-Einstein relation <cit.>,while generalizations of this relation to systems violating the detailed balance condition are actively studied at the present time, see <cit.>as well as <cit.>. Our findings indicate that the system is genuinely out of equilibrium:The differential mobility is zero, because the asymptotic velocity of the particle scales quadratically with the applied force, while the fluctuations are normal (up to possible logarithms). An important aspect of the model is the presence of big finite size effects in the limit where the diffusion constant D of the diffusive field grows large.In this regime, the trapping only becomes effective for very small external forces or very long times (depending on the considered set-up).This fact led to the proposal of the existence of two distinct phases as a function of D in <cit.>.We will show instead that there is a single phase and we will describe quantitatively the cross-over between a finite-size scaling region and the true asymptotic region.The paper is organized as follows.After a proper description of the model in Section <ref>,we introduce the main results of this paper in Section <ref>, together with some brief account of previous studies.These results are summarized in Table <ref>, and are shown by means of scaling arguments and numerics inSections <ref>-<ref>.A heuristic theory connecting the behavior of fluctuations to the differential mobility of the particle is developed in Section <ref>.Finally Section <ref> contains several technical results on a self-consistent approximation introduced below,while the details of our numerical scheme are gathered in Section <ref>.§ MODELLet X_t be the position of the particle at time t. In the overdamped regime, its evolution is governed by Ẋ_t = λ(-∂_x V (X_t ,t) + F)where λ is the mobility of the advected particle, V is the fluctuating potential, and F is an external constant force. See Fig. <ref>.In our context, the potential V may conveniently be though of as a height function and measured in units of length.It evolves in time according to an Edwards-Wilkinson type dynamics <cit.>∂_t V(x,t) = D ∂_x^2 V(x,t) + ξ(x,t)where D is the diffusion coefficient and where ξ is white noise in time and smooth in space with finite correlation length ℓ: ⟨ξ (x,s) ξ (y,t) ⟩ = 2 D δ (t-s) ^- (x-y)^2/2 ℓ^2.Our aim in introducing the finite correlation length ℓ is to avoid any problem in understanding the dynamics on short timescales, i.e. 
t ≲ D^-1ℓ^2.Moreover, we assume that X_0 = 0 and thatthe field -∂_x V(x,t) is in equilibrium with ⟨∂_x V(x,t) ⟩ = 0, so that ⟨ X_t ⟩ = 0 at F=0 by symmetry.Numerical results are expressed in units ℓ = 1, D = 1.To lighten the notations, we will also use these units throughout the text and mostly drop ℓ and D from our notations (except in a few expressions where it may just help to keep them).The control parameters are thus the mobility λ and the constant force F.The evolution equation (<ref>) neglects the effect of thermal fluctuations.Indeed, if the passive particle is at positive temperature,one should add some white noise term κdB_t/dt in eq.(<ref>), where κ is the molecular diffusivity.However, it is cumbersome to have an extra parameter in the modeland one may reasonably conjecture that a finite molecular diffusivity does not affect the long time asymptotic behavior of the passive particle,that is eventually dictated by the low modes of the the potential. Fourier representation. Both for the theoretical analysis and numerical implementations, it is convenient to express the force field - ∂_x V(x,t) by its Fourier transform:Let p(·) be the standard normal distribution (for concreteness) and -∂_x V(x,t) = ∫_ k√(p(k)) ( A_k (t) ^i kx + c.c.)where A_k(t) are stationary, zero mean, Gaussian processes such that⟨ A_k (s) A_k' (t) ⟩ =0 and ⟨ A_k (s) A_k'^* (t) ⟩ = 1/2δ(k-k') ^-k^2|t-s|. One can readily recover eq.(<ref>-<ref>) from this representation, observing that the processes A_k(t) defined by the correlation (<ref>) are independent complex Ornstein-Uhlenbeck processes obeying the evolution equationA_k (t)/ t = -k^2 A_k (t) + |k|B_k (t)/ twhere B_k(t) are independent complex Brownian motions: B_k(t) = 1/√(2)(B_k^1(t) + i B_k^2 (t)),with B_k^1(t) and B_k^2 (t) independent real Brownian motions. Our numerical scheme is described in details in Section <ref>,but it may be worth to mention here that it uses a simple discretization of the evolution equation (<ref>)with - ∂_x V(x,t) given by eq.(<ref>).In eq.(<ref>), the integral is replaced by a sum over a finite number of modes.Since one expects the low modes to play the crucial role in determining the behavior of the particle, not all modes are sampled equally:the resolution becomes ever finer as k→ 0.With this way of doing, we are able to reach significantly larger times than in <cit.>.Lattice models. Lattice versions of the evolution equation (<ref>) have been considered both in the probabilist community <cit.> and in numerical studies, e.g. <cit.>. Since our results do likely not depend on the specific modeling, we believe that it may be of some interest to make here some explicit “dictionary” between our set-up and a lattice model as in <cit.>. On the integer lattice , the correlation length ℓ = 1 featuring in eq.(<ref>) corresponds simply to the lattice spacing.As a very simple choice, one may require that the force field - ∂_x V(x,t) = - (V(x+1,t) - V (x,t)) takes only the values ± 1,and that the time evolution of V is governed by the so called corner flip dynamics: V(x,t+ t) - V(x,t) =(V(x-1,t) - 2 V(x,t) + V(x+1,t))N_D(t)where N_D(t) is a Poisson point process with rate D. In this case, the evolution of -∂_x V (x,t) can be mapped to the simple exclusion process, see e.g. 
<cit.>, by identifyingη(x,t) := (-∂_x V (x,t) + 1)/2 with an empty site if η(x,t) = 0 and with an occupied site ifη(x,t)=1.The simple exclusion process is at equilibrium at a density ρ = ⟨η(x,t) ⟩ = 0.5 + F for 0 ≤ F ≤ 0.5.The passive particle is usually referred to as a random walker in this context, and evolves according to the following rule:It jumps to the right if it sits on top of an occupied site, and to the left if it sits on a vacant site, with constant jump rate λ: X_t+ t - X_t = ( δ(η(X_t,t) = 1) - δ(η(X_t,t) = 0) )M_λ (t)where M_λ (t) is a Poisson point process with rate λ.§ RESULTSThe fluctuations of the passive particle for analogous or even identical set-ups have been studied in<cit.>. In <cit.>, the authors predict that ⟨ X^2_t ⟩ crosses over between a super-diffusive behavior λ^2 t^3/2 for t ≲λ^-4 to an almost diffusive behavior t log t for t ≳λ^-4.This prediction is however not directly backed up by numerical simulations.In <cit.>, a transition is proposed as a function of λ: For small λ, a self-consistent approximation (SCA) predicts ⟨ X_t^2 ⟩∼ (λ t)^4/3 at long times,while for larger λ, the particle gets trapped by the diffusive field and becomes itself diffusive.We review the SCA in Section <ref>, and a close look reveals that it also predicts the behavior ⟨ X_t^2 ⟩∼λ^2 t^3/2 for t ≲λ^-4, see eq.(<ref>-<ref>). The approximations in <cit.> and <cit.> for λ small are of a similar nature,see eq.(<ref>-<ref>) below, and it is not obvious to decide whether any of them is correct.The theoretical predictions of <cit.> are validated numerically in the same paper,but the possibility is raised that the (λ t)^4/3 regime at small λ is actually a finite size effectand that the particle becomes eventually always diffusive.In <cit.>, it is claimed that this is indeed what happens.This conclusion is however based on numerics at a single value of λ, and no hint is given that the transition does not actually occur at some lower value. Finally, in <cit.>, a bunch of different exponent for ⟨ X^2_t ⟩ are observed in numerical experiments,but possible finite size effects are not analyzed.Our study confirms the original prediction by <cit.>,though we make no clear stand on the logarithmic correction in the regime t ≳λ^-4.Indeed, our data are compatible with a behavior of the type t (log t)^δ for some 0 ≤δ < 1,but we are not able to extract a value for δ and it is not even obvious whether δ > 0 is a transient effect or not.See Fig. <ref>.In addition, we study the behavior of the walker in the presence of the external force F>0 in the limit F→ 0.When F > 0, we expect the particle to drift and eventually escape to the strong memory effects of the environment.For this reason, we also expect that the full system, consisting of the particle and its environment, will reach a non-equilibrium stationary states (NESS).We first analyze the time T(F) needed for the system to reach a NESS and find that this time diverges as F→ 0.As a consequence, the system should never reach stationarity at F=0, which is as such a good indication that aging or trapping effects are at play.Second we investigate the behavior of the asymptotic velocity of the walker, v(F) = lim_t→∞ X_t/t for given F>0in the limit F→ 0. Let us notice that mean field like approximations as in <cit.> or <cit.> in the small λ regime, would all predict the behavior v(F) ∼λ F.Instead we find that this scaling is only valid for F ≳λ, and then crosses over to the behavior v(F) ∼ F^2. 
All these conclusions are based on scaling arguments and numerical simulations, and are summarized in Table <ref>.Two remarks are in order.First, the transient behavior (left column) can only be neatly observed for λ significantly smaller than 1,as a consequence of the obvious ballistic behavior of the particle for t≤ 1.Second, we notice that in the true asymptotic regime (right column), the behavior of the particle does not depend anymore on the value of λ.This is one of the signs of the trapping by the environment. Let us finally comment on a recent result in <cit.>:The variance of a tracer a particle driven by a constant force and evolving in a quasi-1d diffusive environment is found to cross-overfrom a super-diffusive t^3/2 to a diffusive behavior.The cross-over time is there of the order of the time needed for the tracer particle to reach a new carrier (hole).After this time, its increments become practically independent,since the slow on-site decay of the correlations of the environment becomes irrelevant thanks to the drift,see e.g. <cit.> for a mathematical study of a similar phenomenology. This results in a diffusive behavior.It seems thus that rater different mechanisms are at play in <cit.> and that the analogy with our results is mostly a coincidence. § TIME TO STATIONARITYLet us assume that F>0 and let us estimate the time T(F) needed for the particle to reach a stationary state,i.e. the time after which the average of any local observable, in a frame moving with the particle, converges to some stationary value.As is by now well documented in the mathematical literature, in the case where the environment is itself able to relax to equilibrium in a finite time,T(F) can be estimated, or at least upper-bounded, by this time itself, see e.g. <cit.> and references therein.This does not apply as such in our casesince the dynamics defined by eq.(<ref>) is diffusive and does not converge to equilibrium in a finite time,due to the presence of the low modes (k ∼ 0) in eq.(<ref>) that relax in a time of order 1/k^2, see eq.(<ref>).The point is however that the force F provides an effective infra-red cut off for all modes with |k| ≪ F^2.Indeed, the contribution of these modes corresponds roughly to the smearing of the field - ∂_x V(x,t) over boxes with length of order L ≫ 1/F^2.Since the amplitude of this averaged field is significantly smaller than F, its only effect is a slight modulation of the average velocity.With this cut-off, the time for stationarity of the field is of order F^-4, and we conclude that T(F) ∼ F^-4as F→ 0.for generic observables.It still can be that some observables converge faster.Of particular interest for us is to know the time needed for the particle to reach its asymptotic speed v(F) defined in eq.(<ref>). To probe this numerically, let us measure how fast v(F,t) := ⟨ X_t⟩/t converges to v(F) as a function of F, for various values of the parameter λ.For given F, let us define a rescaled time 0 ≤τ≤ 1 via t = K F^-4τ for some large constant K, and let us compare the curves𝒲 (τ) = ⟨ X_K F^-4τ⟩ / τ/⟨ X_K F^-4 1⟩/1for various F.They collapse if the scaling (<ref>) is valid. Numerical results are shown on Fig. <ref>, with K = 2^10 in the definition of the rescaled time τ(such a large pre-factor is needed to reach values that are stationary in good approximation at τ =1).The scaling (<ref>) is manifestly accurate for λ = 1, λ = 0.5 (top panels). 
For λ = 2^-3, λ = 2^-4 (lower panels), the scaling is only accurate for the smallest values of F.The fact that the convergence is faster than expected for the large values ofF may be interpreted as the fact that the system is initially in a state close to the stationary state. It is indeed reasonable that these two states are similar to each other in the homogenized regime λ < F discussed in more details in the next section.The important point is thus that we observe a reasonably good collapse of the data for F < λ in Fig.<ref>.§ DRIFTLet us now investigate the behavior of the asymptotic velocity v(F) in the limit F → 0.The most naive expectation from eq.(<ref>) is that v(F) ∼λ F.This scaling is plotted on the upper panel of Fig. <ref>, where one sees that it is only approximately correct for F/λ > 1. Since our main interest is in the behavior of v(F) as F → 0 at a fixed value of λ, we seek thus for another scaling. For this, we notice that the modes with |k| ∼ F^-2 in eq.(<ref>) have a strong tapping effect:the amplitude of this set of modes is comparable to F, its relaxation time is of order F^-4 and it varies in space on a scale of order F^-2. Thus, if the field -∂_x V (x,t) would consist only of them, we would conclude right away that the velocity v(F)of the particle must be of order F^-2/F^-4 = F^2. Since we identify no stronger source of slowing down, we come to the proposal v(F) ∼ F^-2.This guess is rough and ignores the possible effects of fluctuations, stemming from all modes with momentum higher than F^-2.Nevertheless,the data in the lower panel onFig. <ref> show that this is a reasonably good scaling, up to possible logarithmic like corrections. In addition, this simple way of thinking yields the cross-over value λ_c ∼ F between the two regimes represented on Fig. <ref>. Indeed, all trapping effects turn out to be prohibited for λ < F.The reason for this is that the modes that could trap the particle relax too fast as compared to the time needed for the particle to get trapped.Let us illustrate this with the set of modes with |k| ∼ F^-2. As we showed above, trapping due to these modes takes place on a length scale of order F^-2.But the time for the particle to travel such a distance(if the field -∂_x V(x,t) would consist only of these modes)is at least (λ F)^-1 F^-2 and, in the regime λ < F,this is clearly larger than the relaxation time F^-4. This reasoning can be repeated for higher modes as well (that could potentially also trap the particle though less strongly) and yields the same conclusion (the set of modes with |k|<F^-2 has an amplitude smaller than F and cannot trap the particle). § FLUCTUATIONSLet us now set F=0 and study the behavior of ⟨ X_t^2 ⟩ as t→∞.The behavior ⟨ X_t^2 ⟩∼λ^2 t^3/2 is best understood if one approximates - ∂_x V(X_t,t) by -∂_x V(0,t) in eq.(<ref>),since this scaling follows then from an explicit computation.While it is clear that this approximation must be reasonable at small enough λ, it is less obvious that it is still valid up to t ∼λ^-4. 
The kind of reasonings in <cit.> furnish eventually the shortest way to get there,see also eq.(<ref>-<ref>) below for an explicit computation.Moreover, in a similar way to what we did in the previous section for the drift,we may estimate that t ∼λ^-4 corresponds to the minimal time for trapping effects to appear.Indeed, the particle may be trapped by the modes of order k if the relaxation time of these modes (1/k^2)is longer than the time needed for them to bring the particle over a distance of the order of one wavelength (k^-1 (k^1/2λ)^-1). Thus only modes with k ≲λ^2 do provide trapping, and this effect needs a time at least λ^-4 to be effective. The scaling ⟨ X_t^2 ⟩∼λ^2 t^3/2 is shown on the upper panel of Fig. <ref>.As we see, the curves do not properly collapse for t < λ^-4.The reason for this is to be found in the obvious transient ballistic behavior of the passive particle.To check this, we have plotted ⟨ X_t^2 ⟩ / λ^2 t^3/2 for - ∂_x V(X_t,t) replaced by -∂_x V(0,t) in eq.(<ref>) at λ = 2^-5(the result is clearly independent of λ and this parameter only enters since the data are plotted as a function of the rescaled time λ^4 t),see the dotted line on the upper panel on Fig. <ref>. If we could consider much smaller values of λ and wait long enough, we would expect all curves to eventually reach a plateau at a value close to 2.5. The scaling ⟨ X_t^2 ⟩ /t is shown on the lower panel of Fig. <ref>.Up to logarithmic like corrections, this scaling seems accurate in the regime t> λ^-4.Finally, in the inset of Fig. <ref>, we consider the scaling ⟨ X_t^2 ⟩ / (λ t)^4/3 predicted in <cit.>.As we see, it is only accurate near the cross-over point t∼λ^-4, where it coincides with the scalings λ^2 t^3/2 and t.We conclude thus that this scaling is never genuinely realized in this system.§ RELATING DRIFT TO FLUCTUATIONSWe finally provide a heuristic scheme relating the behaviors of drift and fluctuations.This leads to the conclusion that trapping dominates the true asymptotic regime, as observed in the numerics.As a first step, we establish a phenomenological relation between the exponent α of the drift, and the exponent β of the fluctuations,i.e. v(F) ∼ F^α asF → 0, ⟨ X_t^2 ⟩∼ t^β ast →∞. Assuming a given value for α (we take 1 ≤α≤ 2 as suggested by the data on Fig. 
<ref>),we replace the evolution equation (<ref>) at F=0 by Ẋ_t = λφ (X_t,t) where φ is an effective force field defined by φ (x,t) = ∫_ k | k|^α - 1/2√( p(k)) ( A_k (t) ^i kx + c.c.)with A_k(t) as in (<ref>).The introduction of the weight factor | k|^α - 1/2 wrt (<ref>) is such thatthe amplitude of the integral over | k| ≤ F^2 for any F > 0 is of order F^α as F→ 0 instead of being of order F for -∂_x V(x,t) defined by (<ref>).This is consistent: If the response to an external force scales in a certain way as this force goes to zero, then the response to the lowest modes of the fluctuating field should scale the same way.Once the field - ∂_x V(x,t) has been replaced by the effective field φ (x,t), one may assume that all trapping effects have been taken into account and one may apply the SCA, reviewed and generalized in the next section, to determine the fluctuations of X_t.Straightforward computations yieldsβ = 4 / (2 + α),see eq.(<ref>-<ref>) below.In a second step we determine the values of α and β.Let T be some arbitrary large time, and let us decompose φ into an almost static part and a fluctuating part, φ (x,t) = φ_sta (x) + φ_flu(x,t), according toφ (x,t) = ∫_|k|^2 ≤ 1/T k (…) +∫_|k|^2 > 1/T k (…).In the absence of φ_flu(x,t) the particle would move to the nearest stable fixed point of φ_sta(x),i.e. a point x^* such that φ_sta(x^*) = 0 and (φ_sta/ x) (x^*) < 0.To evaluate the effect of φ_flu(x,t) we proceed again through the SCA; due to the infrared cut-off, fluctuations are always diffusive with a diffusion constant scaling as T^(2- α)/4, see eq.(<ref>) below.Hence, in the vicinity of a stable fixed point x^*, the dynamics can be effectively described by the overdamped Ornstein-Uhlenbeck equationẎ_t = - λT^-2 + α/4 Y_t + λ^1/2 T^2 - α/8 B_t/twith Y_t = X_t - x^*, as long as Y_t remains smaller than T^1/2. The process Y_t reaches a stationary state after some time τ≪ T for 1 ≤α < 2(since τ∼λ^-1 T^2 + α/4),and remarkably this state is characterized by a mean square displacement equal to T for any value of α. Thus X_t should not be trapped on a length scale T^1/2 (at least if α < 2) but on a slightly longer length scale,by the same mechanism as a random walker is trapped in a static environment <cit.>. Indeed X_t will be trapped for a time of order ^c L^2/T if φ_sta(x) keeps a fixed sign for length L. Hence, considering an approximate mapping on the model in <cit.> for a lattice with spacing T^1/2 and hopping time of the walker τ,we conclude that X_T ∼ T^1/2 (log T)^2 if α < 2(the logarithmic correction is not present if α = 2). In all cases, this leads to the exponent β = 1, and hence α = 2. 
§ SELF CONSISTENT APPROXIMATIONWe review and generalize the self-consistent approximation (SCA) introduced in <cit.> yielding predictions for the average velocity and the fluctuations of the passive particle.Let us rewrite the evolution equation (<ref>) in integral form asX_t = λ∫_0^ts (- ∂_x V (X_s,s) + F ).The basic idea is to replace the process X_t on the right hand side of (<ref>) by an independent process Y_t that simply “visits" the environment,in such a way that X_t and Y_t have the same probability distribution.More precisely, we look for twoprocesses (X_t)_t≥ 0 and (Y_t)_t≥ 0 with the three following requirements:(a) the processes (X_t)_t≥ 0 and (Y_t)_t≥ 0 have the same probability distribution,(b) the process (Y_t)_t≥ 0 is stochastically independent of the environment, hence of (X_t)_t≥ 0 (since X_t depends deterministically on the environment),(c) X_t and Y_t solve the equationX_t = λ∫_0^ts (- ∂ V (Y_s,s) + F )in distribution.The hope is that processes (X_t,Y_t)_t≥ 0 satisfying (a-c) can be found rather explicitlyand that the probability distribution of X_t solving (<ref>) is qualitatively similar to the distribution of X_t solving (<ref>).Here, we will not deal at all with the second issue and we will solve (<ref>) at the level of the first and second moments through some Gaussian approximations (that are arguably harmless). However, we will replace the force field - ∂_x V (x,t) by some more general field φ (x,t), as needed for the theory in Section <ref>.Asymptotic speed. For any zero average field φ we get⟨ X_t ⟩ = λ F t by taking expectations in eq.(<ref>).Hence the SCA predicts alwaysv(F) = λ F. Fluctuations. Let us now assume F=0 and let us study the second moments of X_t.The generalized field φ (x,t) is defined by eq.(<ref>) with now⟨ A_k (s) A_k'^* (t) ⟩ = f(k)δ(k-k') ^- k^2|t-s|instead ofeq.(<ref>), for some function f( k).This expression boils down obviously to eq.(<ref>) forf(k) = 1,henceφ (x,t) = - ∂_x V (x,t) in this case. We will consider the more general functionf(k) = χ (|k| ≥ k_0)| k|^γfor some 0 ≤ k_0 ≪ 1 and 0 ≤γ≤ 0.5, where χ(A) is the indicator function of the set A. We will look for X_t and Y_t having stationary increments and we will compute both ⟨ X_t^2 ⟩ in the leading order of the t →∞ asymptotic,and the correlations ⟨ζ(0) ζ (t) ⟩, with ζ (t) = φ (Y_t ,t).Let us start by computing ⟨ X_t^2 ⟩ in the large t limit, without keeping track of constant prefactors:⟨ X_t^2 ⟩ =λ^2 ∫_0^t ∫_0^tss' ⟨φ (Y_s,s) φ (Y_s',s') ⟩∼λ^2 t ∫_ k p( k) f^2( k)∫_0^t θ ^-k^2 θ⟨cos k Y_θ⟩∼λ^2 t ∫_0^t θ∫_ k^- k^2 (1/2 +θ + ⟨ X_θ^2 ⟩) f^2( k) .To get the second line, we used the assumption that (Y_t)_t≥ 0 have stationary increments;to get the last line, we used that p(z) is a standard normal distribution, that⟨ X_t^2 ⟩ = ⟨ Y_t^2 ⟩ by assumption, and that (Y_t)_t ≥ 0 is Gaussian. This last assumption is presumably not exact, but we expect that the results do not depend qualitatively on this Gaussian approximation.Equation (<ref>) is the self-consistent equation solved by the variance ⟨ X_t^2⟩.For various values of k_0 and γ in eq.(<ref>), eq.(<ref>) yields⟨ X_t^2 ⟩∼ (λ t)^4/3+2γ, 0 ≤γ < 0.5, k_0 = 0, ⟨ X_t^2 ⟩∼λ t (log t)^1/2,γ = 0.5, k_0 = 0, ⟨ X_t^2 ⟩∼ k_0^γ - 1/2λ t,0 ≤γ < 0.5, k_0 >0. 
Next, to compute the correlations ⟨ζ(0) ζ(t)⟩, we notice that ⟨ζ(0) ζ(t) ⟩ = ⟨φ(Y(0),0) φ(Y(t),t) ⟩∼∫ kf^2 (k) ^-k^2 (1/2 + t + ⟨ X_t^2 ⟩).Hence, from (<ref>-<ref>), ⟨ζ(0) ζ (t)⟩∼1/ (λ t)^2 1+2γ/3+2γ ,0 ≤γ < 0.5,k_0 = 0,⟨ζ(0) ζ (t)⟩∼1/(λ t) (log t)^1/2 , γ = 0.5,k_0 = 0, ⟨ζ(0) ζ (t)⟩∼^- k_0^3/2 + γ(λ t)/(ℓ k_0)^γ^2 - 1/4 (λ t)^1/2 + γ,0 ≤γ < 0.5,k_0 >0. Remark on transient behavior. The expressions (<ref>-<ref>) and (<ref>-<ref>)only hold in the limit t →∞ at fixed values of all other parameters, and may have to be modified on some transient time scales.E.g. to determine eq.(<ref>), we assumed that θ≲⟨ X_θ^2 ⟩ in eq.(<ref>).If this condition is violated, we obtain instead ⟨ X_t^2 ⟩∼λ^2 t^2 - 1+γ/2and in particular ⟨ X_t^2 ⟩∼λ^2 t^3/2 for γ = 0. This expression should replace eq.(<ref>) for all times short enough so that t ≳⟨ X_t^2 ⟩for ⟨ X_t^2 ⟩ given by eq.(<ref>).E.g. for γ = 0 we find that eq.(<ref>) holds as long ast ≲λ^-4.We recover thus the behavior announced in Section <ref>.Remark on static environments. The same SCA can be applied to a random walker in a static environment (D = 0) in d=1:Ẋ_t = -λ∂_x V(X_t) + κ B_t/ t(one needs here to take the molecular diffusivity κ to be finite in order to avoid a trivial dynamics).If we reintroduce the parameters D,ℓ in eq.(<ref>), we find actually ⟨ X_t^2 ⟩∼ℓ^2 (λ t /ℓ)^4/3. This expression is independent of D, and performing similar computations for the evolution equation (<ref>) yields again the same expression.In this case, it is known that the SCA does not predict the correct behavior for ⟨ X_t^2 ⟩ at any value of λ.Indeed the particle is always strongly sub-diffusive: ⟨ X_t ^2⟩ scales as (log t)^4 as t→∞, see <cit.>.Remark on <cit.> and <cit.>. It may be interesting to point out explicitly the difference between the approximations made in <cit.> and <cit.> to determine the asymptotic behavior of ⟨ X_t^2 ⟩ as t→∞ for F=0. In both cases, it is determined through a consistency condition, but the correlatorC_s,s' = ⟨∂_x V (X_s,s) ∂_x V (X_s',s')⟩is estimated in a slightly different way. Let us assume s' ≥ s.In <cit.>, the approximationC_s,s'∼⟨∂_x V (⟨ X^2_s'-s⟩^1/2,s'-s) ∂_x V (0,0)⟩is used, while the slightly more refined approximation C_s,s'∼∫ y ^-y^2/ 2⟨ X_s'-s^2 ⟩/√(2 π⟨ X_s'-s^2⟩)⟨∂_x V(y,s'-s) ∂_x V(0,0) ⟩.is made in <cit.>, for s' - s > 0. § NUMERICAL SCHEME We describe the discretization of eq.(<ref>) and (<ref>-<ref>) used in our numerics.Let us denote by Δ t the elementary time step of the particle.Eq.(<ref>) becomesX_t+Δ t = X_t + λ (Δ t) (- ∂_x V(X_t,t) + F).The integral (<ref>) defining - ∂_x V (x,t) becomes a sum - ∂_x V (x,t) = ∑_k ∈ Kρ(k) (A_k(t) ^i kx + c.c.)where K = {k_1, … , k_N} is the set of accessible inverse wavelength k_i > 0 for 1 ≤ i ≤ N, and where ρ(k) is the weight of mode k.Given some 0 < δ < 1, we setk_i = δ^i - 1,1 ≤ i ≤ Nand the corresponding weightsρ (k_i) = (δ^i-1 - δ^i)^1/2,1 ≤ i ≤ N- 1and ρ(k_N) = δ^(N-1)/2. 
Let us next see how the force field -∂_x V(x,t) is updated.Since every mode A_k(t) is an independent Ornstein-Uhlenbeck process evolving according to eq.(<ref>), if one knows the value of A_k(t) at some time t,one can write explicitly the value of the real and imaginary part of A_k(t') at any time t' > t: A_k (t') = ^-k^2 (t' - t) A_k (t) + 𝒩(0, 1 - ^-2 k^2 (t'-t)/4)and similarly for the imaginary part (real and imaginary part are independent),where 𝒩 (m, σ^2) denotes a normal distribution with mean m and variance σ^2.In our scheme, there is no reason to update the modes at times shorter than the elementary time step Δ t of the particle.Moreover, it is reasonable to update the low modes less frequently than the high modes, in such a way that all modes are updated in essentially the same way each time they are.The mode k_i is updated once every Δ t_i = ⌈ K k_i^-2⌉ Δ tfor some K > 0, asA_k (t + Δ t_i) = ^-D k_i^2 Δ t_i A_k (t) + 𝒩(0, 1 - ^-2D k_i^2 Δ t_i/4),and similarly for the imaginary part.As we see, if not for rounding off, the update is identical for all modes since D k_i^2 Δ t_i ≃ D K Δ t.Since Δ t comes as (Δ t) λ in eq.(<ref>), one may fix Δ t = 1.The parameters N,δ and K are discretization parameters, fixed to N = 130, δ = 0.9 and K = - log (0.5) ≃ 0.7 in all our experiments. With these values of N and δ, our results should be safe of any periodicity or quasi-periodicity effects.Moreover, for intermediate time scales, we checked that reasonable variations of these parameters did not fundamentally affect the results. § CONCLUSIONSWe have investigated the behavior of a passive particle advected by a fluctuating surface in the Edwards-Wilkinson universality class.Both the differential mobility and the fluctuations have been analyzed with the same rational. Our study exhibits the existence of a finite size scaling limit, that differs from the true asymptotic limit. The latter regime is dominated by trapping effects of the environment. I thank M. Barma, M. Salvi, F. Simenhaus, T. Singha, G. Stoltz and F. Völlering for helpful and stimulating discussions. I also thank an anonymous referee for very useful suggestions. I benefited from the support of the projects EDNHS ANR-14-CE25-0011 and LSD ANR-15-CE40-0020-01 of the French National Research Agency (ANR).
http://arxiv.org/abs/1709.09008v3
{ "authors": [ "François Huveneers" ], "categories": [ "cond-mat.stat-mech", "math.PR" ], "primary_category": "cond-mat.stat-mech", "published": "20170926133251", "title": "Response to a small external force and fluctuations of a passive particle in a one-dimensional diffusive environment" }
[email protected] Max Planck Institute for Gravitational Physics (Albert Einstein Institute), Am Mühlenberg 1, D-14476 Potsdam-Golm, [email protected] Center for Relativistic Astrophysics and School of Physics,Georgia Institute of Technology, Atlanta, GA [email protected] Max-Planck-Institut f¨ur Gravitationsphysik, Albert-Einstein-Institut, D-30167Hannover, Germany 04.80.Nn, 04.25.dg, 04.25.D-, 04.30.-w Current searches for the gravitational-wave signature of compact binary mergers rely on matched-filtering data from interferometric observatories with sets of modelled gravitational waveforms. These searches currently use model waveforms that do not include the higher-order mode content of the gravitational-wave signal. Higher-order modes are important for many compact binary mergers and their omission reduces the sensitivity to such sources. In this work we explore the sensitivity loss incurred from omitting higher-order modes. We present a new method for searching for compact binary mergers using waveforms that include higher-order mode effects, and evaluate the sensitivity increase that using our new method would allow. We find that, whenevaluating sensitivity at a constant rate-of-false alarm, and when including the fact that signal-consistency tests can reject some signals that include higher-order mode content, we observe a sensitivity increase of up to a factor of 2 in volume for high mass ratio, high total-mass systems. For systems with equal mass, or with total mass ∼ 50 M_⊙, we see more modest sensitivity increases, < 10%, which indicates that the existing search is already performing well. Our new search method is also directly applicable in searches for generic compact binaries. Searching for the full symphony of black hole binary mergers Alexander Nitz December 30, 2023 ============================================================§ INTRODUCTION The Advanced LIGO gravitational-wave observatories <cit.> have observed multiple black hole binary mergers in Advanced LIGO's first two observing runs <cit.>. Many additional black hole binary mergers are expected to be observed in the coming years <cit.>, with additional detectors in Italy <cit.>, Japan <cit.> and India <cit.> helping to improve coverage of the gravitational-wave sky <cit.>. The continued observation of black hole binary mergers will allow for a better understanding of the rate of such mergers <cit.>, and give a sense of the mass and component spin distribution of black hole binary systems <cit.>. This will in turn allow a better understanding of how such systems form <cit.>. Searches for compact binary mergers rely on matched filtering the data taken from gravitational-wave observatories with theoretical filter waveforms <cit.>. The set of filter waveforms is chosen such that a signal occurring anywhere within the parameter space of interest can be recovered well by at least one of the waveforms in the set of filters <cit.>. It is critical that the waveform models being used as filters are accurate representations of the signals that will be produced from compact binary mergers in the Universe. Much work in the years leading up to Advanced LIGO's first discovery focused on modelling waveforms using numerical <cit.> and analytical <cit.> techniques, and on combining these methods together to create waveform models accurate at all stages of the merger <cit.> .However, a number of assumptions are made about the emitted gravitational-wave signal to simplify the search and reduce the search parameter space. 
Specifically, current searches for compact binary mergers neglect any affect due to precession of the orbital plane <cit.>, orbital eccentricity <cit.> or neutron-star equation-of-state <cit.>. Current searches also neglect the affect of the so-called higher-order modes of gravitational-wave emission <cit.>, and it is on this topic that we will focus in this manuscript. Making these simplifying assumptions does not affect the ability to observe the majority of compact binary mergers, as evident from the current observations, but can mean that the detection efficiency is not optimal. It would also induce an observational bias against compact binary mergers that are not well described by these assumptions, and these kind of systems can be the ones of most value, astrophysically, as the additional information from features such as higher-order modes allows more precise measurement of the various source parameters <cit.>.Several studies have shown that the omission of higher-order modes in searches can lead to a reduction in detection rate <cit.>. Specifically, the omission of higher modes is likely to reduce the detection of systemswith large mass ratio q=m_1/m_2 ≥ 4 and large total mass M=m_1+m_2 > 100M_⊙ <cit.>. To date AdvancedLIGO has only detected systems with mass ratios and total masses lower than these values. However, we still know very little about the mass distribution of compact binary mergers and searches should be capable of observing any possible system. For such high-mass systems, more loosely modelled search techniques <cit.> also offer the ability to observe short duration compact binary mergers <cit.>. For very high-mass systems the sensitivity of such searches can become comparable to that of modelled search methods, and could potentially exceed the sensitivity of modelled searches in the case that the system is not well described by the model. Nevertheless, in cases where the waveform model is well understood, searches that use that knowledge should be more sensitive than those that don't.In this study, we present the first end-to-end modelled search method for black hole binary mergers using filter waveforms that include the higher-order modes of the gravitational-wave signals and demonstrate the improvement in sensitivity that can be obtained when using this method to search for compact binary mergers in Advanced LIGO data. Our method involves including the source orientation angles in the list of parameters that are sampled over when creating a set of filter waveforms to use in the search. At the time of writing, the only waveform model that is available including both the full inspiral, merger and ringdown components of the waveform and including higher-order modes, is the non-spinning effective-one-body model presented in <cit.>. We must therefore restrict ourselves in this work to only considering waveforms that do not include the effects of the components' spins. However, the methods we describe here are directly applicable to the case of aligned-spin waveform models, which are now in an advanced stage of development <cit.>. Indeed the search method presented here is fully generic, and could be run on eccentric, precessing waveforms including higher-order modes.The layout of this paper is as follows. In section <ref> we motivate the parameter range that we choose to consider in this work. 
In section <ref> we give a brief reminder of how the presence of higher-order modes will affect the emitted gravitational-wave signal, and discuss the waveform models used in this work. In section <ref> we introduce the fully generic search method that we will use. In section <ref> we assess the benefit of deploying a search including the higher-order mode components of the gravitational-wave signal. In section <ref> we also explore how signal-based consistency tests, necessary in searches of real data to distinguish real signals from instrumental transients, can sometimes falsely reject real gravitational-wave signals containing higher-order modes, and how this problem is alleviated when using higher-order mode waveforms as filters. Finally we conclude in section <ref>. § PARAMETER SPACE CONSIDERATIONSIn this section we motivate and describe the black hole binary parameter space that we will use in the rest of this work. While we will use a specific parameter space here, we stress again that the search methods we will describe can be applied for any parameter space of interest.It was recently reported that no gravitational-wave signals were observed in a search of Advanced LIGO data targeting “intermediate mass black hole binaries”, which are defined to be black hole binaries with total mass M ≥ 100 M_⊙ <cit.>.These sources are important from an astrophysical point of view. They are proposed to be precursors of supermassive black holes in some hierarchical formation scenarios <cit.>. However, there is not yet evidence for their existence. Detection of higher-order modes would allowfor more detailed tests of General Relativity in the strong field regime. Examples of this include studies of the quasi-normal ringdown modes <cit.> and studies evaluating the mass of the graviton via searching for a dispersion relation in the speed of propagation of gravitational-waves <cit.>. For the kind of sources considered in that work, higher-modes are believed to have a significant impact on gravitational-wave signals <cit.>.The effect of higher-order modes was not studied in the search for “intermediate mass black hole binaries” reported in <cit.>; both the waveforms used in the search, and the simulations to assess its sensitivity, did not include higher-order modes. It is therefore interesting to explore the same parameter space here and assess whether neglecting higher-order modes is a fair assumption in such studies, and to demonstrate the sensitivity increase that is possible if higher-order modes are included. The matched-filter search used in <cit.> targeted black hole binary mergers with total mass between 50 M_⊙ and 600M_⊙. At 600 M_⊙ compact binary mergers emit gravitational-wave signals that are for the most part too low frequency to be observed by Advanced LIGO and the sensitive distance rapidly decreases. It is also challenging to distinguish such signals from non-Gaussianities in the detector noise. Here we choose to use a maximum total mass of 400 M_⊙. Our constraints on the mass ratio, q, are limited by constraints on the waveform model we use, which we will discuss in the next section. We use a limit of q ≤ 10, which also matches the limits chosen in <cit.>. When we discuss masses in this work we will always refer to the masses of the signal observed by the observatory, often referred to as “detector frame masses”. 
Sources at cosmological distances will be redshifted with respect to the observer, causing the signal to appear to have higher masses than the actual ones measured in the “source frame”.The sensitivity of Advanced LIGO has been improving since the beginning of Advanced LIGO's first observing run and will continue to improve over the next years, before reaching its design sensitivity. In order to obtain reasonable estimates of the improvements derived from our search method, we will use two noise curves in this study. We will use a representative measurement of the sensitivity curve from Advanced LIGO's first observing run (O1) <cit.> and we will use Advanced LIGO's “zero-detuned high-power” design sensitivity curve <cit.>. For the former, we set the lower frequency of our matched-filter to f_low=20Hz while for the latter we use f_low=10Hz.§ GRAVITATIONAL-WAVE SIGNALS INCLUDING HIGHER-ORDER MODESThe gravitational-wave emission from a non eccentric black hole binary merger depends on 15 parameters. The individual masses, m_i, and dimensionless angular momenta (spins), χ⃗_i=s⃗_i/m_i, of its two components are parameters intrinsic to the source—collectively denoted Ξ_i. The time of the coalescence, measured in the frame of the observer, is denoted t_c. The remaining parameters describe the location and orientation of the observer, with respect to the source. Consider a frame of reference in standard spherical coordinates (D,ι,φ) with origin in the center of mass of the black hole binary. The polar angle ι is defined such that ι=0 coincides with the total angular momentum of the binary[The origin of the φ parameter is chosen to lie in the line connecting the two components at some fiducial time.]. The remaining 3 parameters are the sky-location of the source in the frame of the observer (θ,ϕ), and the polarisation ψ of the signal. A black hole binary is said to be “face-on” if ι=0 or ι=π and “edge-on” if ι=π/2.For a generic gravitational-wave source observed by an interferometric detector the observed strain h(t) can be expressed as the sum of the two gravitational-wave polarizations weighted by the sensitivity of the observer to each polarization h(t) = F_+(θ,ϕ,ψ) h_+(t) + F_×(θ,ϕ,ψ) h_×(t). Here F_+ and F_× denote the response function of the detector to each polarization <cit.>. The gravitational-wave polarizations h_+(t) and h_×(t) can be expressed as h_+(t) + i h_×(t) = ∑_ℓ≥ 2∑_m=-ℓ^m=ℓY^-2_ℓ,m(ι,φ)h_ℓ,m(t), where Y^-2_lm denote the spherical harmonics of weight -2 <cit.> and the h_ℓ,m(t) denote the various “modes” of gravitational-wave emission. For the case of compact binary mergers h_ℓ,m(t) will be a function of Ξ_i, t_c and D according to h_ℓ,m(Ξ;t)=A_ℓ,m(Ξ, D;t-t_c)e^-iυ_ℓ,m(Ξ;t-t_c). Here A_ℓ,m is a real amplitude scaling for the various modes, and υ_ℓ,m is a real time-series giving the evolution of the phase of the various modes.The black hole binaries detected by LIGO so far are characterized by a low mass ratio q≤ 4 and a total mass M < 100M_⊙ <cit.>. For such sources the (ℓ,m)=(2,± 2) modes dominate the above sum for the vast majority of the possible orientations of the source <cit.>[We note that for systems with misaligned spins the orbital plane precesses. Here the normal approach is to define ι and φ in a stationary source frame, and then if the orbital plane has precessed ι=0 no longer corresponds to the direction of orbital angular momentum and the other l=2 modes can become dominant. 
One can alternatively consider a source frame that tracks the precessing orbital angular momentum, and then ι and φ will vary with time. In this frame, the (ℓ,m)=(2,± 2) modes will again dominate the emitted gravitational-wave signal. We do not consider precessing systems in this work.]. The rest of the modes, known as higher-order modes, have only a small contribution during most of the inspiral and are only significant to the resulting gravitational-wave signal in the last few cycles and eventual merger of the black hole binary <cit.>. The amplitude of the higher-order modes grows as the mass ratio q of the system deviates from 1, making their impact much stronger for large mass ratio black hole binaries. In addition, the Y^-2_2,± 2 spherical harmonics have maxima at ι=0 and ι=π and a minimum at ι=π/2. For many of the other harmonics ι=π/2 is a maximum. Therefore, the (ℓ,m)= (2,± 2) modes will completely dominate the gravitational-wave signal for face-on sources. For edge-on systems, especially ones with a high-mass ratio, the higher-order modes are an important contribution to the full gravitational-wave emission <cit.>. In addition, higher-order modes have a stronger affect in signals emitted by large total mass sources. The phase of the (ℓ,m) mode scales, to good accuracy, as υ_ℓ,m∝ m ×υ_orb/M, where υ_orb denotes the orbital phase of the binary. At high values of total mass, M, the dominant (2, ±2) modes can fall below the sensitive band of the observatory, while the higher m modes, at higher frequency, are still observable.Waveform models that describe the full black-hole binary coalescence—through inspiral, merger and ringdown—can be broadly divided into two approaches. The first is the “effective-one-body” approach, calibrated against numerical relativity simulations <cit.>, the second is the various phenomenological frameworks, also calibrated against numerical relativity simulations <cit.>. There are a number of different waveform models, from both of these approaches, which have been used in the recent results papers from the LIGO and Virgo collaborations. However, with the exception of numerically generated waveforms, which are currently impractical to use for searches with a wide parameter space, these waveform models do not include the higher modes of the gravitational-wave emission, and consider solely the dominant modes. The only waveform model available at the time of this study, which includes both higher-order modes and includes the merger and ringdown components of the gravitational-wave signal is an effective-one-body waveform described in <cit.>. This model includes the (ℓ,|m|)=(2,± 1),(2,± 2),(3,± 3),(4,± 4) and (5,± 5) modes[This waveform model is known as EOBNRv2HM and is available in the LIGO Algorithm Library (LAL). We also use a frequency-domain reduced-order model <cit.> of this waveform known as EOBNRv2HM_ROM in LAL.]. The most significant mode not included in this model is the (3,± 2) mode, which can have comparable amplitude to the (5,5) mode <cit.>. While the paper <cit.> does demonstrate the accuracy of this model to generic waveforms, it would of course be beneficial to have this, and other modes, included in future waveform models. Unfortunately, this effective-one-body waveform model does not include the effect of the components' spins. 
Nevertheless this non-spinning waveform model is sufficient to demonstrate the methodology described later in this work, and with it we can investigate the sensitivity increase to non-spinning compact binary mergers if one searches with waveform filters that include higher-order modes. We might expect that the relative sensitivity with non-spinning waveforms would be similar to that with aligned-spin waveforms as the effects of spin inclusion and higher-order modes are largely orthogonal. However, it is true that systems with anti-aligned spins have slightly stronger higher modes than systems with aligned spins <cit.> and therefore it is possible that higher-modes would help more fornegative spin sources than for positive ones. We will investigate this in the future when waveform models are available to do so.Finally, we note that during the writing of this manuscript a number of waveform models, in both the effective-one-body and phenomenological frameworks are being developed that include higher-order modes, and allow for nonzero component spins aligned with the orbital angular momentum <cit.>. The methods described here can be applied directly to these waveform models when they become available, and this would be a necessary step before utilising this methodology to search for higher-order mode waveforms in real data. There is also work demonstrating that sets of numerical relativity waveform might be used directly in a search <cit.>, or that surrogate models could be created by interpolating between a set of numerical relativity waveforms <cit.>. Such approaches might also present a way to use accurate aligned-spin, or even precessing, higher-order mode waveforms in searches, but we do not explore that here.§ A MODELLED SEARCH FOR COMPACT BINARY MERGERS WITH HIGHER-ORDER MODE WAVEFORMSThere are currently a number of different search methods being used to observe compact binary coalescences using modelled waveforms in the data being collected by Advanced LIGO and Advanced Virgo <cit.>. The core of all of these different methods is the two-phase matched-filter that was described in <cit.>. This two-phase matched-filter has proved to be very powerful in observing compact binary mergers, but it does make a number of assumptions about the signal model, which are not true generically, in particular when one is considering higher-order modes. Specifically the method assumes that the normalized frequency domain representation of the + component of the gravitational wave signal h̃_+ is related to the frequency domain representation of the × component of the gravitational wave signal h̃_× according to h̃_+ ∝ i h̃_×. In addition, it is assumed that the “extrinsic” parameters of a gravitational-wave signal—the sky-location, source orientation, polarization phase and distance—can all be absorbed by applying a constant phase-shift, constant time-shift and a constant amplitude scaling to the observed waveform. With these assumptions in place, one can analytically maximize over an overall amplitude and phase of the signal, and use an inverse Fourier transform to quickly evaluate the statistic as a function of time <cit.>. Then only the “intrinsic” parameters—the component masses and spins—are searched over by repeating the search process with a well chosen discrete set of waveform models with varying values of the component masses and spins, known as the “template bank”. 
Physically, these assumptions hold if one assumes that the sources being observed have no orbital eccentricity, no precession and no contribution from higher-order modes to the gravitational-wave signal. However, these assumptions do not hold in the case here where we wish to use waveforms including higher-order modes as filters in the search.In <cit.> the authors explored relaxing the assumption that the system was not precessing and developed search statistics that can be used in that case. In the method described in <cit.> a complex maximization scheme was used to maximize over all non-intrinsic parameters, which was found to be computationally prohibitive if forced to restrict to only physically possible values. Whereas, in <cit.> the authors included the inclination of the source with respect to the observer as a parameter when constructing the template bank, effectively considering this as an intrinsic parameter. However, this method cannot be applied to a generic search because the assumption that the φ parameter (the azimuthal angle to the observer in the source frame) can be modelled as an overall phase shift in the Fourier domain breaks down when considering gravitational-wave signals with higher-order modes.Nevertheless one can extend the method in <cit.> in a reasonably trivial manner by relaxing the assumption on φ and also considering this as a parameter to search over in the template bank. This is the approach we use in this work. The resulting statistic is not new to this work, it also appears in <cit.>. This work, however, is the first case in which this has been applied in an end-to-end search. §.§ A search statistic applicable for generic searches for compact binary mergers When searching for a signal h, with known form, but unknown amplitude, in Gaussian, stationary noise n, with noise-power spectral density S_n(f) it can be demonstrated <cit.> that the optimal statistic for deciding whether a signal h is present, or not, in the data is given by ρ^2 ≡( [ ⟨ s | h ⟩] )^2/⟨ h | h ⟩ = ( [ ⟨ s | ĥ⟩ ])^2, where ρ defines a signal-to-noise ratio, â denotes a normalization of any filter waveform a such that â = a/⟨ a | a ⟩^1/2, and we define the complex matched-filter ⟨ a|b⟩ = 4 ∫^∞_0 ã(f)b̃^*(f)/S_n(f) df. For simplicity in what follows, we will distinguish between the complex matched-filter, and the real component of the complex matched-filter by defining ( a | b ) = [ ⟨ a|b⟩], such that ρ^2 = ( [ ⟨ s | ĥ⟩ ])^2= ( s | ĥ )^2. As already mentioned in section <ref>, gravitational-wave signals observed in an interferometric observatory such as LIGO or Virgo can be expressed as a linear combination of the two gravitational-wave polarizationsh(t) = F_+(θ, ϕ, ψ)h_+(t) + F_×(θ, ϕ, ψ) h_×(t). As the amplitude of h(t) is removed by normalization in equation <ref> we can freely scale the amplitude when defining h(t). It is convenient to combine F_+ and F_× into an overall amplitude rescaling and a single further parameter by defining h(t) = A( u ĥ_+(t) + ĥ_×(t)), where u = F_+/F_×√(⟨ĥ_+ | ĥ_+ ⟩/⟨ĥ_× | ĥ_×⟩) andA = F_×√(⟨ĥ_× | ĥ_×⟩). One can then insert equation <ref> into equation <ref>, which removes the amplitude term A, and from there analytically maximize ρ over u. This results in the following expression, max_u (ρ^2) = ( s | ĥ_+ )^2 + ( s | ĥ_× )^2 - 2 ( s | ĥ_+ ) ( s | ĥ_× )( ĥ_+ | ĥ_× )/(1 - ( ĥ_+ | ĥ_× )^2). Furthermore it is trivial to see that in the limit that h̃_+ ∝ i h̃_× this will collapse to the more familiar statistic used in current searches max_u (ρ^2) ≃⟨ s | ĥ_+ ⟩^2. 
The statistic defined in equation <ref> is generic and can be applied to any single detector search for compact binary coalescences. Physically, this statistic maximizes over the D, θ, ϕ and ψ parameters—or the distance, sky location and polarization phase—leaving all other parameters to be included in the template bank. For the case of eccentric, precessing, higher-order mode waveforms, this will result in a very large dimension parameter space, which may prove unfeasible in some situations. In such cases approaches such as the ones explored in <cit.> might be useable to further shrink the dimensionality of the parameter space by maximizing over the Y^-2_ℓ,m components, but this has yet to be successfully applied to generic systems. However, as we explore below, our simple approach can successfully be applied to the case of searching for higher-order mode signals in Advanced LIGO data. §.§ Exploring the necessity of the generic statistic for higher-order mode searches In equation <ref> we described a generic matched-filter statistic that maximizes only over the amplitude, polarization phase and sky location of the signal. While this statistic can be used generically, it is more computationally efficient to use the more commonly used statistic in equation <ref> as it requires only one matched-filter computation. Equation <ref> collapses to the form shown in <ref> in the case when h̃_+ ∝ i h̃_×. It is therefore worth investigating how well this relationship holds in the parameter space being considered to decide whether it is possible to approximate equation <ref> with the more efficient equation <ref>. It is also possible to use the more efficient statistic in some part of the parameter space and swap over to the generic statistic only in the regions of parameter space where it is needed.To investigate the possibility of using this approximation, we can simply calculate the magnitude of the imaginary component of the overlap between ĥ_+ and ĥ_× for systems within the parameter space defined in Section <ref>. To do this we generate 5 million waveforms, with component masses uniformly chosen within our chosen parameter space, and with isotropic distribution of inclination and reference orbital phase[As we are only using h_+ and h_× we do not need to choose a sky location, polarization phase or coalescence time for this set of signals.]. For each of these waveforms we compute the imaginary component of the overlap between ĥ_+ and ĥ_×, ( ⟨ĥ_+ | ĥ_×⟩). The results are then binned in terms of the total mass and mass ratio, and we show the minimum value of this overlap, as a function of the total mass and mass ratio in figure <ref>. We see that for both the early and design Advanced LIGO sensitivity curves the minimum value of this overlap is ∼ 0.985. If we were instead to plot the average value of this overlap the value would be larger than 0.997 everywhere. These values indicate the largest loss of signal-to-noise ratio that is possible when making the assumption that the polarization phase can be simplified. For example in the case where the value of the imaginary component of the overlap is 0.985, then 1.5% of the maximum signal-to-noise ratio would be lost if the signal observed in the detector is described by the × component of h(t) and the + component is used as a template. For other values of polarization phase—such that the detector would observe a combination of the + and × components—> 98.5% of the optimal signal-to-noise ratio would be recovered. 
Given that we will allow a 3% loss of signal-to-noise ratio due to the discreteness of the set of filter waveforms used, this indicates that for the parameter spaces and noise curves that we consider in this work it is sufficient to use the simple statistic in all regions of parameter space. We also verify this claim later in the results sections. We emphasize though that if this method is used in future searches using an extended region of parameter space, or including effects of spin precession, this should be evaluated again. § ASSESSING THE SENSITIVITY INCREASE OF A HIGHER-ORDER MODE SEARCHIn the previous section we described a method that will allow the use of template waveforms that include higher-order modes in searches for compact binary mergers. In this section we will assess the increase in sensitivity that can be obtained by using this method to search for compact binary mergers in Advanced LIGO data. We begin by creating “template banks” of waveforms to cover the full parameter space described earlier in section <ref>. From there we will explore the sensitivity increase that can be obtained when using higher-order mode waveforms. We will first assess this by comparing sensitivities above a constant signal-to-noise ratio threshold, for both the standard search, and our new method using higher-order mode waveforms. We will also identify the points in parameter space for which the sensitivity increases the most when including higher-order mode waveforms. Finally, we will use our new method in theanalysis framework <cit.> to analyse 5 days of Gaussian noise, colored to Advanced LIGO sensitivities. This will allow us to assess the increase in the background rate when including the larger number of templates that are needed to cover the higher-order mode signal parameter space. This will then enable us tocompute the sensitivity increase at a constant false-alarm rate threshold between a search that includes the effects of higher-order modes, and one that does not.§.§ A template bank of filter waveforms including higher order modesThe first step in assessing the sensitivity improvement that can be achieved by including the effects of higher-order modes in the filter waveforms is to create the set of filter waveforms, or “template bank”. In this sub-section we describe the construction of the template banks that we will use, highlighting any problems specific to construction of template banks of higher-order mode signals. We begin by defining the overlap, o(a,b), between a potential signal waveform a and a filter waveform b as the fraction of the optimal signal-to-noise-ratio (a | a) of a that is recovered when using b as a filter waveform, o(a,b) ≡max_Φ( ( â | b̂(Φ) ) ), where Φ denotes the extrinsic parameters of b that are not included as parameters in the template bank and are maximized over. In this work we calculate overlaps by maximizing over the coalescence time and the parameter u defined in equation <ref> using either equation <ref> or equation <ref> as appropriate.The “fitting factor” (often called “effectualness”) <cit.> is then defined as the maximum overlap between a and all of the filter waveforms in the template bank b_i FF(a, b_i) = max_i o(a, b_i). When constructing template banks to use in analysis of gravitational-wave data the normal choice is to demand that for any point in the parameter space that the template bank covers, the fitting factor with the template bank must be greater than 0.97 <cit.>. 
That is to say that the maximum loss in signal-to-noise ratio due to discreteness of the bank must not be greater than 3% anywhere in the parameter space. However, as this parameter space explicitly does not include the effect of higher-order modes—or precession, or eccentricity—fitting factors for real gravitational-wave signals can be lower than this. For the case of template banks used in the most recent analyses of Advanced LIGO data, the template bank is placed to cover a broad range of masses and spins <cit.>. When higher-order modes and precession are not considered, the orientation of the source with respect to the observer is degenerate with u, which is analytically maximized over, and so the template bank is placed in a 4-dimensional parameter space—the two masses, and two component spins—using methods described in <cit.>.In <cit.> the authors discussed how to place a template bank for precessing waveforms, and we follow a very similar approach here. Specifically, we use the “stochastic” placement algorithm described in <cit.>, to create our template banks. The basic idea of the stochastic placement algorithm is that potential template points are chosen randomly in the specified parameter space, and the fitting factor of these points computed with the points currently accepted to the template bank. Points are added to the template bank if the fitting factor is smaller than 0.97, and the process iterates until some pre-specified stopping condition is reached. In the case of the higher-order mode template banks we use here, we place waveforms in a 4-dimensional parameter space, the two component masses and the source orientation parameters (ι, φ). This method can be directly extended to cover the additional two-dimensions, describing the components' spins aligned with the orbit, when waveforms including aligned-component spins and higher-order modes become available. For this work we compute template banks both for the representative early Advanced LIGO sensitivity curve and the Advanced LIGO design noise curve, discussed earlier in section <ref>. The template banks are chosen to cover systems with total mass greater than 50 and less than 400 solar masses, with mass ratio limited to be less than 10, which we also discussed in section <ref>.We compute template banks for waveforms that include higher-order mode effects, and template banks including waveforms without any higher-order modes. For each of the two sensitivity curves, we begin by constructing a template bank of waveforms, covering our range of masses, that do not include higher-order modes. We then take that template bank and add to it templates containing higher-order modes, using the stochastic process, until we also have a template bank that is suitable for higher-order mode waveforms. When placing the template bank of higher-order mode waveforms we use equation <ref> to maximize over u. The sizes of these template banks are given in Table <ref>. We note that the higher-order mode template banks are an order of magnitude bigger than the standard template banks.In Figure <ref> we visualize the distribution of the waveforms in the template banks that we have created. The left panel of Figure <ref> shows the distribution of the templates as a function of the total mass and mass ratio when including, and when not including, higher-order mode effects. 
As well as being able to see that many more templates are needed when including higher-order modes, we observe that we especially need many more templates at both high masses and high-mass ratios compared to the no higher-order modes bank where these regions are sparsely populated. In the right panel of Figure <ref> we show the distribution of the inclination angle of higher-order mode waveforms in the template bank. We can see that many more templates are needed for edge-on systems than for face-on or face-away systems. We also observe two local maxima at ∼ 35 and ∼ 135 degrees. These peaks are an artifact of the two-stage template bank creation process, and the fact that templates are added to a set of non-higher-order mode waveforms, which will match well face-on and face-away systems. §.§ Sensitivity comparison at fixed signal-to-noise ratioWe wish to evaluate and compare the sensitivity to a set of given signals using both our template banks containing higher-order mode waveforms, and not containing them. This directly gives a measure of how much sensitivity would be gained by using our new higher-order mode search method. We first must define how this sensitivity will be computed. To assess the sensitivity to a given set of waveforms, drawn from some stated distribution, we must, for each waveform g_i in the parameter space we consider, compute the fitting factor that will be recovered using the given template bank, composed of waveforms b_i. The distribution of fitting factors for the set of signals allows us to understand what fraction of signal-to-noise ratio we will recover for each waveform, and identify regions of parameter space where sensitivity is poor. However, it can often be misleading to only show the distribution of fitting factors, as often the systems for which fitting factors are smallest are also those ones whose observable gravitational-wave signal is weaker. To take into account the fact that different signals can be observed at different distances, one can define the corresponding “signal recovery fraction” of a given template bank b_i to a distribution of signals g_i as SRF =∑_i (FF(g_i, b_j))^3 (g_i | g_i)^3/∑_i (g_i | g_i)^3. This was first introduced in terms of an “effective fitting factor” in <cit.>. One can understand the signal recovery fraction as the fraction of signals from a distribution g_i that would be recovered above a fiducial signal-to-noise ratio threshold with the template bank b_i compared to a template bank with a fitting factor of 1 for all g_i. For the plots shown in this section we compute the fitting factor, and then signal recovery fractions, usingequation <ref> to maximize over u. We have also created the plots using equation <ref> and the numeric values agree to within 0.05% with those in the plots shown. This demonstrates again that it is sufficient to use the computationally simpler equation <ref> when performing a search using higher-order mode waveforms.In Fig <ref> we plot the signal recovery fraction as a function of the two component masses for both the early and design Advanced LIGO sensitivity curves, and for template banks with and without higher-order modes. For each point shown on this plot the signal recovery fraction is calculated by choosing a set of g_i consisting of 500 waveforms. Each waveform has the same values of component masses, and the source orientations and sky locations are chosen isotropically. We show 1000 unique points in these plots, so a total of 500000 waveforms are used in these simulations. 
We can clearly see in these plots that for equal mass systems the signal recovery fractions are large for template banks with and without higher-order modes. However, as the mass ratios become larger the signal recovery fraction can become as small as 0.65 when omitting higher-order modes, implying that ignoring higher-order modes in a search would result in a reduction in detection rate of up to 35% for systems with those masses. When we include higher-order mode waveforms, the signal recovery fractions are much more uniform, as expected. Values of 0.95 are consistent with the loss expected due to discreteness of the template bank. For the Advanced LIGO design sensitivity curve the effect of higher-order modes is smaller than that of the representative early Advanced LIGO noise curve. This is expected as the early Advanced LIGO noise curve is comparatively less sensitive at lower frequencies, where higher-order modes are less important. These results are consistent with earlier works exploring the effects of higher-order modes <cit.>, reinforcing that higher-order modes are important for systems where the mass ratio and total mass is large.In Fig <ref> we show the cumulative distribution of fitting factors for all of the 500000 waveforms described above. This is shown for both the early and design Advanced LIGO sensitivity curves and for template banks both including and not including higher-order mode waveforms. We can clearly see here that there is a significant proportion of systems recovered with low fitting factors if higher-order modes are neglected. Using our higher-order mode template banks completely removes the tail of low fitting factors. We also show fitting factor as a function of the source orientation for all signals simulated at a total mass of 95 M_⊙ and a mass ratio of 8. We can clearly see that the lowest fitting factors are obtained when the inclination angle is edge-on, as expected. §.§ Sensitivity comparison at fixed false-alarm rate The results in section <ref> demonstrate that when including higher-order mode effects in the waveform filters used in a search the search efficiency will increase when evaluating efficiency above a constant signal-to-noise ratio threshold. However, in a real search the signal-to-noise ratio threshold is a function of the number of waveform templates, and the size of the parameter space covered. Our higher-order mode template banks are roughly an order of magnitude larger than the corresponding non-higher-order mode template banks. This increase in the number of templates will increase the rate of background events in the search, and therefore a signal would require a larger signal-to-noise ratio to achieve the same significance, when evaluated in terms of a false-alarm rate. In this section we will assess the sensitivity increase that can be obtained when using our higher-order mode template banks, at a constant false-alarm-rate threshold, which takes into account the increase in background triggers from using a larger number of template waveforms.The first step is to create a mapping between signal-to-noise ratio and false-alarm rate. We do this for each of our template banks by simulating ∼5 days of Gaussian noise with either the representative early Advanced LIGO sensitivity curve or the Advanced LIGO design sensitivity curve. We then analyse this data with the various template banks using theanalysis framework <cit.>, which allows us to directly map the signal-to-noise ratio to a false-alarm rate. 
A false-alarm weighted relative signal-recovery fraction can then be computed according to SRF =∑_i (FF(g_i, b_j))^3 (g_i | g_i)^3/ρ_thresh^3 ∑_i (g_i | g_i)^3, where ρ_thresh is the signal-to-noise threshold corresponding to the desired false-alarm rate. This doesn't have meaning as a statistic on it's own, but the ratio of this quantity computed for two different searches, which will have different values of fitting factor and ρ_thresh, directly gives the relative sensitivity. One could compute this directly for our higher-order mode, and non-higher-order mode template banks. However, this would result in a non-negligible decrease in sensitivity in the equal mass region of parameter space. This is because this region of parameter space is already well recovered by non-higher-order mode waveforms and the increased signal-to-noise ratio threshold, due to the higher-order mode waveforms, only causes a reduction in sensitivity for equal-mass systems. Instead, we can choose to consider two separate searches, one with higher-order modes and one without, and combine the results together, including the necessary trials factor of 2. This would limit the decrease in sensitivity to ∼1% in regions where higher-order modes contribute nothing while still allowing a sensitivity increase where higher-order modes are important. Formally the false-alarm weighted relative signal-recovery fraction for this combined search would be computed according to SRF_combined = ( ∑_i (FF_weighted)_i(g_i | g_i)^3 /∑_i (g_i | g_i)^3), where we define (FF_weighted)_i = max_j{(ρ_thresh)_j^-3FF_i,j^3}, where j denotes the values for the two searches being performed.In Table <ref> we show the network signal-to-noise ratios corresponding to a false-alarm rate of 10^-3 yr^-1. This is the threshold at which we choose to evaluate the relative sensitivity of our higher-order mode search. When computing the combined search sensitivity we incorporate the trials factor by using a false-alarm rate of 0.5 × 10^-3 yr^-1 in each search, and therefore 10^-3 yr^-1 in the combined search. To the accuracies quoted in this table, the threshold obtained is the same if we use equation <ref> or equation <ref> to maximize over u. In Figure <ref> we show the sensitivity increase between the higher-order mode and non-higher-order mode searches as a function of the total mass and mass ratio.This is computed from the same set of waveforms as used in Figure <ref>. We see that in both cases there is no increase in sensitivity for equal mass systems but an increase in sensitivity of up to 25% for the systems with the highest total mass and mass ratio that we consider here. It is important to again emphasize that while these averaged sensitivity increases are modest, even a single observation of a compact binary merger with measurable higher-order mode emission would allow for much more precise measurement of source parameters than systems where only the dominant gravitational-wave emission modes are observable <cit.>. In this sense, we stress that we are presenting a gain in sensitivity which is averaged over the possible orientations of the binary. These results are often dominated by face-on binaries, which have a stronger emission, and for which the quadrupolar bank shows an excellent signal recovery. The sensitivity gain is much larger for edge-on binaries, whose emission has a strong higher mode contribution, leading to a poor signal recovery when a quadrupolar bank is used as demonstrated in Figure <ref>. 
To emphasize this, in the lower panel of Figure <ref> we also show the sensitivity increase if considering only those waveforms with 60^∘ < ι < 120^∘, which are those waveforms oriented edge-on to the observer and for which higher-modes are most important. Here we observe much higher sensitivity increases—up to 80%—than with the full set of simulated waveforms. We note that equation <ref> defines a simple measure for combining the higher-order mode and non-higher-order mode searches, which does not reduce significantly the sensitivity to non-higher-order mode waveforms, while simultaneously allowing a sensitivity increase for systems where higher-order modes are important. However, a more optimal method to combine these two searches, would be to utilize a method similar to that defined in <cit.>, which uses Bayesian methodology to weight each template waveform according to its probability of observing a system. However, such a method requires a good knowledge of the astrophysical distribution of systems, which is not known for intermediate-mass black hole binary systems, and requires knowing relatively how often each template is to observe a signal, which is difficult to compute with curved and degenerate parameter spaces where it can be difficult to determine what region of parameter space is best covered by each template. § REAL DATA CONCERNS FOR SEARCHES FOR HIGH-MASS WAVEFORMSIn the previous sections we have evaluated the sensitivity increase when using filter waveforms containing higher-order modes assuming that the detector noise is Gaussian and stationary and using only the signal-to-noise ratio to evaluate the significance of events. In reality, data taken from gravitational-wave observatories is neither Gaussian nor stationary and instrumental non-Gaussian noise transients will produce large values of signal-to-noise ratio in a matched-filter search <cit.>. Therefore search strategies for compact binary coalescences must take into account such non-Gaussian transients and be able to distinguish them from genuine astrophysical signals. There are numerous works that have focused on this problem <cit.>. However, many of these tests were created considering lower mass compact binary mergers than those considered here and these tests are known to be less efficient when searching for intermediate-mass black hole binary mergers <cit.>. Some tests are beginning to focus on the efficiency to higher mass black hole binary mergers, but are not yet able to separate all forms of transient noise, and are not fully tuned for higher-order mode waveforms <cit.>.Indeed for certain regions of the parameter space unmodelled search techniques have been found to be more sensitive to compact binary mergers in data from LIGO's first observing run than modelled searches, because they are better at removing instrumental artifacts <cit.>. 
Optimizing searches to better distinguish real astrophysical signals from instrumental noise at high masses is an interesting topic that should be addressed, but this should be done in a separate work, and we will not attempt to address this specific question here.In this section we will explore how existing tests to separate real signals from noise artifacts can be applied when using higher-order mode waveforms, we will demonstrate that these tests can misclassify genuine astrophysical signals with significant higher-order mode contribution as instrumental artifacts, and that this problem is significantly mitigated when using higher-order mode waveforms as filters in the search. §.§ Reweighted signal-to-noise ratioOne of the most common methods for discriminating between gravitational wave triggers and noise artifacts is to check whether the morphology of a potential signal in the data, s, is consistent with that of the filter waveform being used, h. Several methods for doing this, testing different features of the potential signal's morphology, have been proposed <cit.>. Of these, arguably the most effective test is the one described in <cit.>. In that test a number of filters are constructed from the template waveform h in the following way. A set of N filters h_i is chosen such that each h_i is constructed, in the frequency domain as h̃_i (f) = h̃ (f)for f_L < f <= f_U 0otherwise. Each filter h_i uses non-overlapping frequency windows, f_L and f_U, such that ∑_i h_i = h. Also (h_i | h_i)= (h_j|h_j) for any value of i and j and (h_i | h_j) = 0 for any i≠ j. By this definition if the data s is a good match to the filter waveform then each of the h_i should recover the same signal-to-noise ratio, within deviations expected in Gaussian noise. In contrast, noise artifacts are often well localized in time and would often produce a very large signal-to-noise ratio in a small number of h_i and a small signal-to-noise ratio in the rest. Therefore one can construct a chi-squared test as χ^2 = N/⟨ h|h ⟩∑_i=1^N⟨ s|h_i ⟩ - ⟨ s|h ⟩/N^2. If s is described by Gaussian noise with an added signal well modelled by h, this will follow a χ^2 distribution with 2N - 2 degrees of freedom <cit.>. For non-Gaussian artifacts it has been empirically demonstrated that this will take larger values <cit.>, allowing for separation between real signals and Gaussian artifacts. There are a number of different techniques for combining the χ^2 test with the signal-to-noise ratio to produce a ranking statistic <cit.>, we choose to use here the combination described in <cit.>, which has been used to analyse Advanced LIGO data with theanalysis method <cit.>. This “reweighted signal-to-noise ratio” is given by <cit.> ρ_reweighted = ρ for χ^2 <= n_d ρ[ 1/2(1 + ( χ^2/n_d)^3)]^1/6 for χ^2 > n_d, where n_d = 2N - 2. We want to explore how well this reweighted signal-to-noise ratio performs when searching for higher-order mode signals with and without higher-order mode filter waveforms. §.§ Sensitivity comparison at fixed false-alarm rate with reweighted signal-to-noise ratioWe generate a large set of simulated intermediate-mass black hole binary waveforms to assess the sensitivity of our higher-order mode search method when evaluating sensitivity using the reweighted signal-to-noise ratio defined above. 
We use the same distribution of parameters as described in the section <ref>, but as the values of the χ^2 test will depend on the amplitude of the signal, we include the distance and sky location as parameters when generating the simulation set. We also add the simulated signals to simulated Gaussian noise when measuring the signal-to-noise ratio and χ^2. The signals are added to noise simulating both Advanced LIGO observatories and the quadrature sum of the recovered ρ_reweighted is used to rank events. In total we choose to simulate signals with 110 unique masses and using ∼ 10000 unique simulated signals for each mass.In Figure <ref> we show the distribution of χ^2 values as a function of signal-to-noise ratio, both with and without higher-order mode filter waveforms. The dashed lines show contours of constant ρ_reweighted; the ρ_reweighted increases as the signal-to-noise ratio increases and as the value of the χ^2 test decreases. When searching for systems that are oriented face-on to the observer using non-higher-order mode filter waveforms the χ^2 values tend to be low as the higher-order mode content is negligible. However, as the inclination ι increases, higher-modes contribute more to the resulting signal, increasing the mismatch between signal and template and causing the χ^2 to grow. When using higher-order mode filter waveforms the χ^2 values are lower, and lie away from the contour lines where non-Gaussian artifacts would appear in real data. The χ^2 test would therefore cause an additional loss in sensitivity over that considered in section <ref> if searching for higher-order mode waveforms using filters that neglect higher-order modes.We therefore reproduce the figures shown in Figure <ref>, but using ρ_reweighted to rank potential events instead of using signal-to-noise ratio. This is shown in Figure <ref>. We see in this plot a larger sensitivity increase when including higher-order modes compared to that seen in Figure <ref> due to the effect of the χ^2 test. We also now see a larger sensitivity increase for the design Advanced LIGO sensitivity curve, whereas previously the larger sensitivity increase was seen with the early Advanced LIGO sensitivity curve. This indicates that the χ^2 test is misclassifying more systems for the design sensitivity curve than for the early curve. These results qualitatively match the results in <cit.> where the authors made predictions of how the sensitivity should increase if one were able to include higher-order mode waveforms as filters in a search. As with Figure <ref>, we also show results considering only injections aligned close to edge-on to the observer in the lower panels. As before, we observe much larger sensitivity increases, and again this increase is larger when including the effects of the χ^2 test. In Figure <ref> we chose to set the maximum value on the colorbar to 2 to allow the reader to observe improvements in sensitivity that are less than 1.5. However there are some values that are much larger than this, up to a value of 4—indicating a 300% increase in the number of observed signals—for the points with the largest mass and largest mass ratio in the lower right panel. 
In such regions our new higher-order mode search is especially needed.While it would be beneficial to work on exploring and tuning various signal-based consistency tests and classifiers to improve the performance of searching for higher-order-mode waveforms with template waveforms that do not include higher-order modes, such tests and classifiers will be more powerful if the template waveforms being used match well to the signals in the data. Some more work is needed to improve the separation of noise transients from real signals, in the intermediate-mass black hole binary parameter space, and in the case where the template waveforms match well. However, we recommend that such work is performed while using waveforms that include higher-order mode waveforms, as described in this work. § CONCLUSIONSIn this work we have presented a new method for searching for compact binary coalescences using filter waveforms which include higher-order mode gravitational-wave emission. This method will allow for the first time searches using higher-order mode filter waveforms to be performed in ongoing analysis of data from second-generation gravitational-wave observatories. We have demonstrated the sensitivity improvement this method would allow. This improvement is as much as a 100% improvement for systems with mass ratios of 10 and total mass of 400 M_⊙, but is much more modest, < 10%, for systems with equal mass or with total mass of 50 M_⊙. In the cases where the improvement is modest, it implies that the efficiency of the current, non-higher order mode search, is already good in these areas. The improvement in sensitivity is largest—as much as 300%—for systems oriented edge-on to the observer, which are intrinsically fainter in gravitational-waves than face-on systems, but for which higher-order modes are especially important. The detection of such signals is key for testing fundamental aspects of General Relativity. For instance, a clear observation of at least two ringdown modes is needed for testing the no-hair theorem <cit.>. The method we present is also fully generic and could be applied also in searches for eccentric, or precessing, compact binary mergers.Using this method to search for higher-order mode signals in the latest Advanced LIGO and Advanced Virgo data would require waveform models that include both higher-order mode emission and model the effect of the components' spins. At the time of writing such waveform models are not available, but are currently in rapid development <cit.>. When these waveform models are available it is trivial to extend the results shown here to include the component spins, although the size of both higher-order mode and non-higher-order mode template banks will increase when including this additional freedom. Nevertheless, we see no reason why one wouldn't expect the same relative sensitivity improvement as seen here with non-spinning waveform models, when using spinning higher-order mode waveforms.A current problem with searches for intermediate-mass black hole binary mergers is that the gravitational emission from such systems is only observed for a very short time and can be confused with non-Gaussian noise transients. Developing better techniques to distinguish between noise transients and genuine gravitational-wave signals would be very beneficial in this search space, although this task is orthogonal to the problem addressed in this paper. 
We have also demonstrated that the performance of current signal-based consistency tests is improved significantly by including higher-order mode effects in the search parameter space. § ACKNOWLEDGEMENTS The authors would like to thank Stas Babak, Alejandro Bohé, Alessandra Buonanno, Collin Capano, Sylvain Marsat, Stephen Privitera and Vivien Raymond for helpful discussions. The authors also thank Collin Capano for reading through the manuscript and providing useful feedback and comments. The authors would also like to thank the anonymous referees for carefully reading this manuscript and providing useful comments, suggestions and feedback. IH and AN would like to thank the Max Planck Gesellschaft for support. JCB gratefullyacknowledges support from the NSF grants 1505824, 1505524 and 1333360. Computations used in this work were performed on the “Vulcan” and “Atlas” high-throughput computing clusters operated by the Max Planck Institute for Gravitational Physics.
http://arxiv.org/abs/1709.09181v2
{ "authors": [ "Ian Harry", "Juan Calderón Bustillo", "Alex Nitz" ], "categories": [ "gr-qc", "astro-ph.CO", "astro-ph.HE" ], "primary_category": "gr-qc", "published": "20170926180010", "title": "Searching for the full symphony of black hole binary mergers" }
mnras
http://arxiv.org/abs/1709.09730v2
{ "authors": [ "Emanuele Castorina", "Martin White" ], "categories": [ "astro-ph.CO" ], "primary_category": "astro-ph.CO", "published": "20170927205053", "title": "Beyond the plane-parallel approximation for redshift surveys" }
[ Alexander H. Nitz^1, December 30, 2023 ================================== § INTRODUCTION Silicon photomultipliers (SiPMs) are widely employed in high energy physics experiments. High photon detection efficiency and typical peak spectral sensitivity ranging between 400and 500 nm make them very convenient for detecting photons produced in de-excitation processes evoked by ionizing particles in a plastic scintillator. Compactness and robustness of SiPMs make mechanical implementation easy. Furthermore, when assembled in a 2D array they can cover a sizable area, therefore can be considered as a relevant replacement for traditional photomultiplier tubes (PMTs). In particular, the sensors can be coupled directly to a scintillator bulk to provide a time resolution on a sub 100 ps level <cit.>. The principal requirement for a precise time measurement is a short rise time of the signal. A large SiPM capacitance increases the rise time and width of the signal and worsens the time resolution. In this regard, a large monolithic sensor or many smaller sensors with common cathode and anode <cit.> are naturally limited in area. A reduction of the capacitance can be achieved by connecting SiPMs in series which decreases the rise time of the leading edge but also decreases the amplitude of a signal <cit.>. A parallel connection of sensors with an independent readout and amplification is another option to isolate the sensor capacitances from each other. In this case signals are summed up at the end. The scheme can be implemented either as a discrete circuit <cit.> or as an ASIC <cit.>. The latter has a clear advantage of compactness and was adopted as an input stage of the acquisition system used in the study. A typical time-of-flight (ToF) detector comprises an array of long bars covering a large surface which provides a fast trigger signal or can be used for particle identification. A clear advantage of a SiPM array is that it can take the form of the bar cross section, thus avoiding complex shape light-guides. Omitting light-guides in general reduces the dispersion of photons and decreases the price of a bar. The test-bench used in this work can be considered as a prototype for the design of the timing detector of the SHiP experiment <cit.> andthe ToF system proposed for the ND280/T2K upgrade <cit.>. Furthermore, the test-bench is used as a test ground for a recently developed ASIC MUSIC R1 <cit.> which was employed in a test-beam study for the first time. This paper is organized as follows. Section <ref> describes the experimental set-up and devices under test. Section <ref> presents the analysis procedure and results. A summary and outlook are then given in section <ref>. § EXPERIMENTAL SET-UPMeasurements presented here were carried out in June of 2017 at the T9 beamline of the East Hall of the CERN PS. Three counters made of different materials and having different dimensions were exposed to a 2.5 GeV/c muon beam to study their timing characteristics as a function of position. They are* a 150 cm × 6 cm × 1 cm bar made of a EJ-200 cast plastic scintillator (attenuation length 380 cm, rise time 0.9 ns) <cit.>. Two 4 mm thick PMMA light-guides tapered from 60×10 mm^2 to 56×6 mm^2 area are glued to both ends of the bar.* two 120 cm × 11 cm × 2.5 cm bars. One is made of a EJ-200 and anothermade of a EJ-230. The latter has an attenuation length 120 cm and rise time 0.5 ns <cit.>. Light-guides have not been used. 
The bars were wrapped in aluminum foil and black tape to ensure light tightness from the surrounding experimental hall. The scintillating light was read out by arrays of 8 SiPMs whose pulse shapes were recorded by a 16-channel waveform digitizer WAVECATCHER <cit.>. The digitizer was used at a 3.2 GS/s sampling rate. The circular buffer of WAVECATCHER contains 1024 cells allowing to cover a 320 ns time window, which records the full signal coming from SiPMs. Eight surface-mount devices S13360-6050PE (area 6 × 6 mm^2, pixel pitch 50 μm) from Hamamatsu <cit.> have been soldered in an array to a custom-made PCB, as shown in Fig. <ref> (right). Hereinafter the arrays are referred to as array-1 and array-2 for the 150 cm bar and array-3 for the 120 cm bars. Anode outputs of SiPMs have been read out and summed by an 8-channel SiPM anode readout ASIC (MUSIC R1) based on a novel low input impedance current conveyor<cit.>. Cathodes of all SiPMs are connected to a common power supply. The bias voltage of every SiPM was controlled via its anode using an internal DAC with 1 V dynamic range. The ASIC circuit contains a tunable pole zero cancellation shaper which was used to reduce the SiPM recovery time constant.For the data taken with the 150 cm bar, a low gain configuration has been applied.Due to a larger cross section of the 120 cm bar, a smaller number of photons was expected to be detected. Therefore the ASIC was used in a high gain configuration.In both cases the signals were summed in the chip after amplification. The experimental setup is shown in Fig. <ref> (left). The trigger was formed by the coincidence of signals from two beam counters installed 20 cm up- and downstream of the beam with respect to the bar under test. Both counters used traditional PMTs for readout. The mean value of the times registered by both beam counters was considered as a reference and was subtracted from measurements of the main bar. A veto counter with a beam hole of 1.5 cm diameter was installed right after the first beam counter and was used in an anti-coincidence mode. The time resolution of the trigger system was found to be 20 ps.The counters under test have been moved transversely with respect to the beam to study their time resolution as a function of the position of a charged-particle interaction along the the main axis of the bar. Hereinafter this axis referred to as x.§ ANALYSIS AND RESULTS Time responses of the SiPM arrays were calculated from registered waveforms in an offline analysis. A digital constant fraction discrimination (dCFD) technique was applied <cit.>. In this approach the recorded waveforms were analyzed to find the signal height. An interval of the waveform before the signal was approximated by a constant to find the baseline. The time is defined by the crossing point of the interpolated digitized signal at the threshold which is a constant fraction of the pulse amplitude. The results of the threshold scan defines the 8% fraction as an optimal value for the time resolution. It was found that for the values of amplitude larger than 0.6 V a correlation of the time versus amplitude appeared. An example of the correlation plot is shown in Fig. <ref>. The dependence of time on amplitude was parametrized using a polynomial function and corrections were applied in the analysis on the event-by-event basis. Corrections for this time-walk effect improve the time resolution of the 150 cm bar by about 7% for the interaction points located near the arrays where the number of detected photons is large. 
The effect is reduced to 4% for the center of the bar. §.§ Results for the 150 cm × 6 cm × 1 cm bar Examples of time spectra as measured for different positions along x are shown in Fig. <ref> (left). The time spectra can be reasonably approximated by Gaussian functions. For each position, the variance and the mean of the function were used to obtain the time resolution and the peak position of the distribution. The dependence of the measured time versus position of the crossing point along the bar as viewed by both arrays is shown in Fig. <ref> (right). The graphs are approximated by linear functions whose slopes represent the effective average speed of light along the x axis, which is found to be v_eff = 15.5 cm/ns. One can convert this value into the effective average reflection angle using the refraction index of the plastic, which gives θ_eff = 35.4^∘. The time resolution of the counter as registered by the arrays is shown in Fig. <ref> (left). It evolves from 83 ps for the crossing point near the sensor to 150 ps for the light propagation along the 130 cm distance. An improvement of the resolution is observed in case of the crossing point being at the proximity of x=150 cm. This could possibly be an effect of light reflected backwards. A similar effect was observed in Ref. <cit.>. The distribution is approximated by an analytic function consisting of a sum of two exponential functions and a constant. The resolutionof the mean time and the weighted mean measurements is also shown. In both cases the time resolution is approximately 80 ps for the full length of the bar. However the weighted mean approach provides a visible advantage for interactions taking place in vicinity of the sensors. The time resolution at the center of the bar as a function of a voltage applied to the common cathode of SiPMs is shown in Fig. <ref> (right). One can clearly observe an improvement of the timing resolution with an increasing overvoltage. The improvement is prominent at lower voltage (close to the breakdown) and saturates for larger values. The 58 V value, corresponding to about 5 V overvoltage, was used for the measurements presented above. §.§ Results for the 120 cm × 11 cm × 2.5 cm barA similar analysis was performed for the bar with dimensions . Two scintillator materials, EJ-200 and EJ-230, were considered. Results for the time resolution as a function of distance are shown in Fig. <ref> (left). The distribution for the EJ-200 bar can be compared to the results from the previous section presented in Fig. <ref> (left). Since the SiPM sensitive area in both cases is the same the number of detected photons scales with the bar cross section area. This results in a simple ratio of bar widths; one takes a square root to convert this value to the ratio of the time resolutions giving √(11  cm/6  cm) = 1.35. Indeed, the data follow reasonably this prediction.In general, the use of a large cross section bar together with a small sensor area is unpractical because of the light loss. However these results can be interesting in view of the replacement of a PMT readout in old experiments where scintillator counters already exist. The EJ-230 material is used for very-fast timing applications. Its attenuation length is shorter as compared to EJ-200 (120 cm vs 380 cm), while it has a faster time response and lower self-absorption losses in the UV region. These properties basically define the behavior of the time resolution which is better for EJ-230 at small x and worse for the far end of the bar. 
Time resolution as a function of the number of SiPMs used in the readout (1–8) is shown in Fig. <ref> (right). The measurement was done for an interaction point in the center of the EJ-200 bar. The distribution of points can be reasonably described by the 1/√(n) behavior, where n is the number of SiPMs in the readout chain.§ SUMMARY A feasibility study of using an array of SiPMs for photon detection in a plastic scintillator has been presented. Compactness, mechanical robustness, high photon detection efficiency, low voltage operation and insensitivity to magnetic fields make SiPMs particularly useful for light collection in physics experiments. In this study two arrays of eight 6 × 6 mm^2 SiPMs have been coupled to both ends of a 150 cm × 6 cm × 1 cm plastic scintillator counter. Anode outputs of SiPMs have been read out and summed by an ASIC MUSIC R1. The time resolution as measured by a single array varies from 83 ps to 150 ps. The resolution in the case of the two sides readout is on average 80 ps. The resolution for the bar with a larger cross section is scaled according to the statistics of photons reaching the sensors. The technology has been proposed for the timing detector of the SHiP experiment at CERN SPS and the time-of-flight system of the T2K detector upgrade at JPARC.This work was supported by the Swiss National Science Foundation. We also would like to acknowledge the contribution of FAST (COST action TD1401) for inspiring a collaboration between the engineering group and the researchers. We thank the European Organization for Nuclear Research for support and hospitality and, in particular, the operating crews of the CERN PS accelerator and beamlines who made the measurements possible. JHEP
http://arxiv.org/abs/1709.08972v2
{ "authors": [ "C. Betancourt", "A. Blondel", "R. Brundler", "A. Datwyler", "Y. Favre", "D. Gascon", "S. Gomez", "A. Korzenev", "P. Mermod", "E. Noah", "N. Serra", "D. Sgalaberna", "B. Storaci" ], "categories": [ "physics.ins-det" ], "primary_category": "physics.ins-det", "published": "20170926122856", "title": "Application of large area SiPMs for the readout of a plastic scintillator based timing detector" }
1Physics and Engineering Physics Department, Tulane University, New Orleans, LA 70118, USA*[email protected] demonstrate the simultaneous propagation of slow- and fast-light optical pulses in a four-wave mixing scheme using warm potassium vapor.We show that when the system is tuned such that the input probe pulses exhibit slow-light group velocities and the generated pulses propagate with negative group velocities, the information velocity in the medium is nonetheless constrained to propagate at, or less than, c.These results demonstrate that the transfer and copying of information on optical pulses to those with negative group velocities obeys information causality, in a manner that is reminiscent of a classical version of the no-cloning theorem.Additionally, these results support the fundamental concept that points of non-analyticity on optical pulses correspond to carriers of new information. (020.0020) Atomic and molecular physics; (190.0190) Nonlinear optics. osajnl10 Einstein1905SR A. Einstein, Zur Elektrodynamik bewegter Körper, Annalen der Physik 322, 891–921 (1905).Poincare1902 H. Poincaré, Science and hypothesis (Science, 1905).Lorentz1898 H. A. Lorentz, Simplified Theory of Electrical and Optical Phenomena in Moving Systems, Koninklijke Nederlandsche Akademie van Wetenschappen Proceedings 1, 427–442 (1898).Michelson1887 A. A. Michelson and E. W. Morley, On the Relative Motion of the Earth and of the Luminiferous Ether, Sidereal Messenger 6, 306–310 (1887).Maxwell1865 J. C. Maxwell, A Dynamical Theory of the Electromagnetic Field, Philosophical Transactions of the Royal Society of London 155, 459–512 (1865).Galison2003 P. Galison, Einstein's clocks and Poincaré's maps: empires of time (W.W. Norton, 2003).Stenner2003 M. D. Stenner, D. J. Gauthier, and M. A. Neifeld, The speed of information in a 'fast-light' optical medium, Nature 425, 695–698 (2003).Stenner2005 M. D. Stenner, D. J. Gauthier, and M. A. Neifeld, Fast Causal Information Transmission in a Medium With a Slow Group Velocity, Physical Review Letters 94, 053902 (2005).Tomita2011 M. Tomita, H. Uesugi, P. Sultana, and T. Oishi, Causal information velocity in fast and slow pulse propagation in an optical ring resonator, Physical Review A 84, 043843 (2011).Ives1938 H. E. Ives and G. R. Stilwell, An Experimental Study of the Rate of a Moving Atomic Clock, Journal of the Optical Society of America 28, 215 (1938).Hafele1972 J. C. Hafele and R. E. Keating, Around-the-World Atomic Clocks: Predicted Relativistic Time Gains, Science 177, 166–168 (1972).Abbott2016 B. P. Abbott et al., Observation of Gravitational Waves from a Binary Black Hole Merger, Physical Review Letters 116, 061102 (2016).Masanes2006 L. Masanes, A. Acin, and N. Gisin, General properties of nonsignaling theories, Physical Review A 73, 012112 (2006).Chu1982 S. Chu and S. Wong, Linear Pulse Propagation in an Absorbing Medium, Physical Review Letters 48, 738–741 (1982).Steinberg1994 A. M. Steinberg and R. Y. Chiao, Dispersionless, highly superluminal propagation in a medium with a gain doublet, Physical Review A 49, 2071–2075 (1994).Garrison1998 J. Garrison, M. Mitchell, R. Chiao, and E. Bolda, Superluminal signals: causal loop paradoxes revisited, Physics Letters A 245, 19–25 (1998).Wang2000 L. J. Wang, A. Kuzmich, and A. Dogariu, Gain-assisted superluminal light propagation, Nature 406, 277–279 (2000).Kuzmich2001 A. Kuzmich, A. Dogariu, L. J. Wang, P. W. Milonni, and R. Y. 
Chiao, Signal Velocity, Causality, and Quantum Noise in Superluminal Light Pulse Propagation, Physical Review Letters 86, 3925–3929 (2001).DeCarvalho2002 C. de Carvalho and H. Nussenzveig, Time delay, Physics Reports 364, 83–174 (2002).Nimtz2003 G. Nimtz, On superluminal tunneling, Progress in Quantum Electronics 27, 417–450 (2003).Solli2004 D. R. Solli, C. F. McCormick, R. Y. Chiao, S. Popescu, and J. M. Hickmann, Fast Light, Slow Light, and Phase Singularities: A Connection to Generalized Weak Values, Physical Review Letters 92, 043601 (2004).Winful2006 H. G. Winful, Tunneling time, the Hartman effect, and superluminality: A proposed resolution of an old paradox, Physics Reports 436, 1–69 (2006).Bigelow2006 M. S. Bigelow, N. N. Lepeshkin, H. Shin, and R. W. Boyd, Propagation of smooth and discontinuous pulses through materials with very large or very small group velocities, Journal of Physics: Condensed Matter 18, 3117–3126 (2006).Bianucci2008 P. Bianucci, C. R. Fietz, J. W. Robertson, G. Shvets, and C.-K. Shih, Observation of simultaneous fast and slow light, Physical Review A 77, 053816 (2008).Boyd2009 R. W. Boyd, Slow and fast light: fundamentals and applications, Journal of Modern Optics 56, 1908–1915 (2009).Suzuki2013 R. Suzuki and M. Tomita, Causal propagation of nonanalytical points in fast- and slow-light media, Physical Review A 88, 053822 (2013).Clark2014 J. B. Clark, R. T. Glasser, Q. Glorieux, U. Vogl, T. Li, K. M. Jones, and P. D. Lett, Quantum mutual information of an entangled state propagating through a fast-light medium, Nature Photonics 8, 515–519 (2014).Tomita2014 M. Tomita, H. Amano, S. Masegi, and A. I. Talukder, Direct Observation of a Pulse Peak Using a Peak-Removed Gaussian Optical Pulse in a Superluminal Medium, Physical Review Letters 112, 093903 (2014).Macke2016 B. Macke and B. Ségard, Simultaneous slow and fast light involving the Faraday effect, Physical Review A 94, 043801 (2016).Dorrah2016 A. H. Dorrah and M. Mojahedi, Nonanalytic pulse discontinuities as carriers of information, Physical Review A 93, 013823 (2016).Amano2016 H. Amano and M. Tomita, Influence of finite bandwidth on the propagation of information in fast- and slow-light media, Physical Review A 93, 063854 (2016).Asano2016 M. Asano, K. Y. Bliokh, Y. P. Bliokh, A. G. Kofman, R. Ikuta, T. Yamamoto, Y. S. Kivshar, L. Yang, N. Imoto, K. Özdemir, and F. Nori, Anomalous time delays and quantum weak measurements in optical micro-resonators, Nature Communications 7, 13488 (2016).Brillouin1960 L. Brillouin, Wave propagation and group velocity. (Academic, 1960).Parker2004 M. C. Parker and S. D. Walker, Information transfer and Landauer's principle, Optics Communications 229, 23–27 (2004).McCormick2007 C. F. McCormick, V. Boyer, E. Arimondo, and P. D. Lett, Strong relative intensity squeezing by four-wave mixing in rubidium vapor, Optics Letters 32, 178 (2007).Zlatkovic2016 B. Zlatković, A. J. Krmpot, N. Šibalić, M. Radonjić, and B. M. Jelenković, Efficient parametric non-degenerate four-wave mixing in hot potassium vapor, Laser Physics Letters 13, 015205 (2016).Macke2003 B. Macke and B. Ségard, Propagation of light-pulses at a negative group-velocity, The European Physical Journal D - Atomic, Molecular and Optical Physics 23, 125–141 (2003).Glasser2012 R. T. Glasser, U. Vogl, and P. D. Lett, Stimulated Generation of Superluminal Light Pulses via Four-Wave Mixing, Physical Review Letters 108, 173902 (2012).Glasser12 R. T. Glasser, U. Vogl, and P. D. 
Lett, Demonstration of images with negative group velocities, Opt. Express 20, 13702–13710 (2012).
Xiao2008 Y. Xiao, M. Klein, M. Hohensee, L. Jiang, D. F. Phillips, M. D. Lukin, and R. L. Walsworth, Slow Light Beam Splitter, Physical Review Letters 101, 043601 (2008).

§ INTRODUCTION

In 1905, Einstein unified several prevalent concepts in the theory of electromagnetism, elevating them into a single, far-reaching principle of relativity <cit.>. The special theory of relativity asserts that Lorentz invariance – i.e., that the speed of light, c, is the same in every reference frame – is a fundamental aspect of nature. A consequence of this is that superluminal signaling is prevented in flat spacetime, which would otherwise violate the principle of causality and lead to enigmatic outcomes, such as the grandfather paradox. The effects of relativity in general have been confirmed in a range of experiments, including the measurement of information velocity <cit.>, the transverse Doppler effect <cit.>, time dilation of moving atomic clocks <cit.> and, most recently, the direct detection of gravitational waves <cit.>. Additionally, information causality, or no-signaling, has since become a broad and important physical concept: one which coexists peacefully with the classical notion of locality, and at the same time has been shown to be an essential ingredient of nonlocal physical theories possessing intrinsic randomness and prohibiting the perfect cloning of arbitrary states (no-cloning), e.g., quantum mechanics <cit.>.

The concept of superluminal pulse propagation, wherein optical pulses can propagate through dispersive media with group velocities greater than c, or even negative, has therefore stimulated much interest in causality over the years <cit.>. It is accepted that, while the envelope of a pulse can be modified such that the peak appears to travel faster than c, the information is a fundamental quantity bound to propagate at c or slower <cit.>. In this view, the carriers of information are points of non-analyticity, and the dispersion-free nature of such off-resonant frequency components ensures a causal relationship upon transmission through the medium. There is, therefore, a differentiation between the group velocity of a pulse v_g and the information velocity v_i. While the velocity of information has been investigated independently in the fast- and slow-light regimes, studies of information causality in a system exhibiting simultaneous slow- and fast-light propagation, as well as the transfer and copying of information to fast-light pulses, have not been realized. Systems which are able to generate slow and fast light simultaneously offer great practical advantages, both in the pure and applied sciences, with the latter leading to implementations of all-optical information processing <cit.> and ultra-sensitive interferometry <cit.>.
In this paper, we examine information causality within the previously unexplored framework of copying and transferring information. We report on a system which can be simultaneously slow and fast over a single mode, and demonstrate a four-wave mixing (4WM) copier which generates a copy of information (in the form of a step discontinuity) in a distinct optical mode, and deletes the original via absorption, resulting in information transfer between the two modes. In agreement with information causality, we find that the information velocity is always less than c when copying and transferring information, independent of the group velocity time shifts described above. Moreover, we find that superluminal propagation can destroy information content, in a manner which is reminiscent of a classical version of the no-cloning theorem. Our results are in agreement with those recently demonstrated involving the propagation of quantum mutual information through fast-light media <cit.>.

In Fig. <ref>, we illustrate the general concept of causality and its relationship to transmitting information via light signals using spacetime diagrams. In the free-space scenario, information flows between two events at a speed less than or equal to c, where the two events are connected through a worldline [dashed line in Fig. <ref>(a)]. The passage of information within spacetime regions limited by x^2 ≤ c^2 t^2 ensures that the two events are causally connected. Let us now consider information passing through the medium in Fig. <ref>(b), which takes as its input an optical pulse traveling at c, and then generates two optical pulses, one of which is slow (v_g < c, red curve), and the other of which is fast (with a negative group velocity, v_g < 0, blue curve). If we place a detector after the medium (orange line), it would seem, given this simplified thought experiment, that the fast pulse can arrive at the detector at a time before it was created. However, since different frequency components of the pulse experience different dispersion, the medium acts as a filter and reshapes the pulse. As a consequence, the incident and transmitted peaks, for example, are no longer causally connected <cit.>. This realization motivated theoretical and experimental investigations of the idea that, in general, the information velocity of a signal is distinct from its group velocity.

To investigate this phenomenon in the context of information copying and transfer, we utilize a 4WM-based information copier with built-in loss and dispersion. The copier is based on the double-Λ scheme described previously in Refs. <cit.>, and is laid out schematically in Fig. <ref>. In this scheme, a gas of alkali atoms (in the current experiment, ^39K) is strongly pumped near the D1 line, and a detuned probe pulse (shown in red) is injected into the medium. The process amplifies the injected probe, and generates a second, conjugate pulse (blue) within the medium, with the probe and conjugate symmetrically detuned from the pump by approximately Δ_HFS, as shown in the inset of Fig.
<ref>. As a result, any information imprinted on the original input pulse will be copied into the conjugate. Moreover, due to the relatively small ground state splitting and significant Doppler broadening in potassium, the configuration can be arranged such that the input probe pulse is strongly absorbed. In this way, the setup can (i) generate the copy and (ii) erase the original via absorption, resulting in information transfer to an optical mode which is distinct in both frequency and space.

As discussed above, the transmitted pulses can be delayed or advanced as a result of dispersion, and under certain circumstances the time shifts can be achieved with minimal distortion <cit.>. Whether the pulses are superluminal or subluminal depends on the sign of the medium's group index, which, experimentally, is controlled by varying the frequency of the probe with respect to the four-wave mixing resonance (i.e., the two-photon detuning δ) <cit.>. Since the probe and conjugate resonances are shifted in frequency with respect to each other, there exists an intermediate frequency (and therefore a pulse width τ) where one of the outputs is superluminal while the other is subluminal. However, in this regime, the time shifts are relatively small <cit.>, as both outputs are tuned close to the transition between superluminality and subluminality. We have found that greater control can be achieved by utilizing spatial dispersion, which is intrinsic in our system due to the alignment of the optical beams required by phase matching <cit.>. Spatial dispersion results in a spatially-varying group index, allowing different group velocities of interest to be selected by filtering the mode with an aperture. Furthermore, when the frequency and pulse width are chosen so that the time shift associated with temporal dispersion is minimized, the optical pulse can exhibit both fast and slow light, simultaneously, over a single mode.

§ METHODS

In the experiments, approximately 400 mW of coherent, continuous-wave light from a Ti:Sapph laser is collimated to an elliptical spot (∼ 0.8 mm × 0.6 mm) and passed through an 80 mm, anti-reflection-coated potassium vapor cell. This strong pump beam is detuned to the blue side of the Doppler-broadened D1 absorption profile by Δ. The input probe pulse is generated by double-passing a portion of the light through an acousto-optic modulator, which is driven at a variable frequency and modulated using an arbitrary wave generator to achieve a desired pulse shape. The pump and probe (1/e^2 diameter of ∼ 670 μm) beams are combined on a polarizing beam splitter so as to overlap in the center of the cell at a small angle (∼ 0.1^∘). The vapor cell is heated to a temperature of approximately 110 ^∘C, and the average pulse power of the input probe is set to be approximately 10 μW. After filtering out the pump light using a Glan-Taylor polarizer, the probe and conjugate pulses are spatially filtered using irises, directed to amplified photodetectors, and then analyzed on an oscilloscope. For each experiment, the reference pulse is obtained by redirecting the light around the cell using a flip-mirror, which results in a negligible change in path length (i.e., the time shift is within the statistical error).

§ RESULTS

§.§ Simultaneous generation of slow and fast light

In Fig. <ref> we show the results for simultaneous slow and fast light in our system. The black dashed pulse in Fig.
<ref>(a) is a reference propagating at the speed of light in air, which we obtain by redirecting the input pulse around the medium via a flip-mirror. We optimize the system for simultaneously generating slow and fast light by tuning the frequencies and filtering the spatial modes as described above, such that the overall effective gain of the four-wave mixer (sum of the outputs divided by the input) is 1.5 ± 0.2. The experimental parameters (cell temperature, one- and two-photon detunings, etc.) for this measurement and all subsequent measurements are given in Table <ref> at the end of the manuscript. In this arrangement, the peaks of the probe and conjugate pulses, shown in red and blue, are shifted relative to the reference by 90 ns (35% of the original pulse width τ = 260 ns) and -60 ns (23%), respectively. Considering the time shifts in terms of the center-of-mass (COM) of the pulse, rather than its peak, we find that the outputs are shifted by 22% and 9%. For the remainder of the manuscript, we will quantify the time shifts based on COM. With the dispersive medium length of L = 80 mm, we calculate the group velocities v_g = L/(Δ t + L/c) for the probe and conjugate in this case to be v_g = 5 × 10^-3c and v_g = -11 × 10^-3c, respectively. We reiterate that this result is for the case of two separate modes which have different frequencies and propagation directions, but the same polarizations.

Now we show that a similar result can be achieved over a single beam. In the measurement presented in Fig. <ref>(b), we scan a small iris ∼ 1 mm in diameter horizontally over the conjugate mode (an image of the mode is shown in the inset), and compare the spatially filtered pulses' arrival times with that of the reference pulse. We observe that a strong gradient emerges as a result of spatial dispersion, asymmetrically shifting the pulses by over 60% of the input pulse width, corresponding to nearly 100% based on the peak shifts. Of particular interest is the fact that the profile is centered about a nearly zero time shift: i.e., superluminal for x < 0, subluminal for x > 0 and luminal near x=0. We also include a calculation of the pulse distortion (see Ref. <cit.>), showing that the distortion is minimized for small time shifts and that a trade-off exists between dispersion and distortion at the far edges of the mode. Due to this excess distortion (and nonlinear spatial dependence), in our experiments we optimize the generation of simultaneous slow and fast light over the linear regime. While previous demonstrations of simultaneous slow and fast light relied on either different polarizations <cit.> or different frequencies <cit.>, in our case spatial dispersion generates both in a single mode. In principle, this strategy could allow for the physics of nonlinear and quantum optics to be probed in the fast- and slow-light regimes at the same time, within a single “shot." Another advantage of our setup is that both the probe and conjugate modes have profiles like that shown in Fig. <ref>(b), which allows for the fast- and slow-light outputs to be easily interchanged. This is akin to alternating between absorption and gain, or flipping the sign of the dispersion that the input pulse experiences. Lastly, we note that in the case of simultaneous slow and fast light, our advancements are comparable to those achieved previously <cit.>.
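As a quick arithmetic check on these values, the following minimal sketch (in Python) recomputes the group velocities from the COM time shifts (22% and 9% of the τ = 260 ns pulse width) and the L = 80 mm cell length, using v_g = L/(Δ t + L/c); all inputs are numbers quoted above, and the script is illustrative only.

```python
C = 299_792_458.0   # speed of light in vacuum (m/s)
L = 0.080           # dispersive medium length: the 80 mm vapor cell
TAU = 260e-9        # input pulse width (s)

# COM time shifts quoted above: +22% (probe, delayed), -9% (conjugate, advanced)
for label, frac in [("probe", +0.22), ("conjugate", -0.09)]:
    dt = frac * TAU                 # shift relative to the luminal reference
    v_g = L / (dt + L / C)          # group velocity inside the medium
    print(f"{label}: v_g = {v_g / C:+.2e} c")
# probe:     +4.64e-03 c (quoted above as  5 x 10^-3 c)
# conjugate: -1.15e-02 c (quoted above as -11 x 10^-3 c)
```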
§.§ Information velocity and the copying of optical information

Now we turn to examining how the information content in the optical pulses is affected under conditions of fast and slow light, both from relativistic considerations and from effects due to reshaping of the pulses. In the earliest investigations, pulse fronts were considered the true carriers of the information in optical pulses <cit.>. This suffers from the technical drawback that a pulse front is, by its very nature, much weaker than the rest of the pulse, and therefore non-ideal from an operational and measurement point of view. Another approach is to impart a sharp discontinuity somewhere near the peak of the pulse, and take such a point of non-analyticity as the carrier of new information. In practice, these features are analytic with a finite time constant τ_sig, though they can approximate an ideal step function in the limit that τ_sig ≪ τ. Following the second approach, we measure the velocity of information in our simultaneously slow- and fast-light system, for a range of pulse widths, detunings and temporal positions of the discontinuity relative to the pulse peak. Exemplary results are shown in Fig. <ref>. In the measurement of Fig. <ref>(a), we tune our system to be fast and slow by 5% and 14%, respectively, and introduce new information via a discontinuity on the leading edge of the input pulse. We then compare the arrival times of the discontinuity on the probe and conjugate pulses with that of the reference. In accordance with previous experiments on information velocity, we find that, despite the fact that the smooth peaks of the probe and conjugate pulses can propagate superluminally or subluminally, the new information is limited to propagate at c or less. This suggests that the group velocity is not always a meaningful concept with regard to the propagation of information. These results also show that information causality holds when copying information between slow- and fast-light pulses, independent of the temporal position of the discontinuity. Similar results can be achieved when the new information is placed on the trailing edge of the pulse. In the result shown in Fig. <ref>(b), we tune the 4WM copier to again produce simultaneous slow and fast light, using a different set of parameters, such that the pulses are advanced and delayed by 8% and 11%. In contrast to the leading edge case, here we see qualitatively different behavior in the pulse reshaping, where the introduction of new information results in an additional peak following the discontinuity in the newly generated, copied pulse. Despite these differences, the new information is limited to propagate at c or less in both cases.

§.§ Information transfer

The extension of these results to the transfer of information entails one final step: deletion of the information in the original (probe) pulse. This step is achieved by tuning the frequencies of the beams such that the probe is detuned from the center of the Doppler-broadened resonances by ≈ 200 MHz, and thus experiences strong absorption. We adjust the parameters so that the system operates with unity gain (an effective, total gain of 0.94 ± 0.13), and encounter a configuration where the dispersion experienced by the conjugate is inconsequential, and the probe is delayed by 26%, with little distortion [top of Fig. <ref>]. Using these effects, we demonstrate irreversible, classical information transfer between the two optical modes. From Fig.
<ref>, we see that the intensity of the transferred information signal is approximately equal (96%) to that in the reference pulse, and that the signal is almost completely destroyed (only 9% remains) in the absorbed probe pulse. With a full optimization of parameters (most notably, the frequencies and bandwidths), it is possible that the probe signal strength could be reduced further. Also, the tail end of the transmitted conjugate pulse is reshaped in this case, as the introduction of the discontinuity modifies the bandwidth, and hence the dispersion, which the pulse experiences. In the unity gain regime, our system behaves somewhat like a beamsplitter, with the addition that the frequency shift between the probe and conjugate resonances results in a delay between the two beamsplitter outputs. A slow-light beamsplitter was realized previously using electromagnetically induced transparency in rubidium vapor <cit.>. In this case, the competition between gain and absorption inherent in the potassium-based 4WM scheme generates the dispersion, even with unity gain.

§.§ Fast light and information

To investigate the role of superluminality in our information copier, we adjust the setup so that both outputs are advanced, and then tune the advancement by varying the pump detuning (Δ), keeping δ fixed. In Fig. <ref>(a), we show the resultant pulses for these measurements, which are taken in the amplifying regime, except for the last trace where the probe alone is slightly deamplified due to absorption, as discussed above. Strong fast-light effects are evident for larger pump detunings, when the probe is positioned near the center of the Doppler-broadened absorption profile. We show in Fig. <ref>(b) the calculated COM time shifts for these pulses, as well as the relative arrival times of the information signal, Δτ_sig, which are extracted based on the intensity time constants of these signals [see the inset of Fig. <ref>(b)]. Despite the fact that the superluminal group velocities result in large advancements of the probe and conjugate pulses (up to 60 ns), the information signal always travels slower than c, with a delay which is on the order of 10 ns. Furthermore, it is clear from the pulses in Fig. <ref>(a) that significant pulse reshaping accompanies the advancement, which has the effect of reducing the transmitted intensity of the information signal and destroying the information. In other words, we observe a trade-off between superluminality and signal strength for both outputs. In particular, when copying information to and from fast-light pulses, the trade-off occurs in an analogous way to a recent experiment involving quantum mutual information and fast light <cit.>, which is reminiscent of a classical version of the no-cloning theorem.
§ CONCLUSIONS

In summary, we have shown that the transport of classical information between two optical modes is bounded by c, and that neither the pulse peak nor the COM reasonably describes the rate of information transfer resulting from the process. Rather, the experiments described here reinforce the view that information is contained in points of non-analyticity, demonstrating that an all-optical information copier preserves information causality, regardless of sub- or superluminal group velocities. In part, these results have been enabled by the fact that the system is highly tunable, in that both outputs can be simultaneously slow and fast over a single mode, with a maximum range of tuning corresponding to an entire pulse width. Ultimately, the experiment is based on a straightforward optical setup, without cavities or cold atom ensembles, and the combined effects of the coupling between atomic transitions and absorption give rise to strong dispersion, even in the regime of unity gain, enabling the investigations elaborated upon here.

§ FUNDING

We would like to thank the Louisiana State Board of Regents (Grant 073A-15) and Northrop Grumman NG - NEXT for generous funding which supported this work.
Indiana University, Department of Psychological and Brain Sciences, Bloomington, IN, United States

Corresponding author: Indiana University, Department of Psychological and Brain Sciences, 1101 E. 10th Street, 47405-7007, Bloomington, IN, United States.

We designed a grid world task to study human planning and re-planning behavior in an unknown stochastic environment. In our grid world, participants were asked to travel from a random starting point to a random goal position while maximizing their reward. Because they were not familiar with the environment, they needed to learn its characteristics from experience to plan optimally. Later in the task, we randomly blocked the optimal path to investigate whether and how people adjust their original plans to find a detour. To this end, we developed and compared 12 different models. These models differed in how they learned and represented the environment and how they planned to reach the goal. The majority of our participants were able to plan optimally. We also showed that people were capable of revising their plans when an unexpected event occurred. The result from the model comparison showed that the model-based reinforcement learning approach provided the best account for the data and outperformed heuristics in explaining the behavioral data in the re-planning trials.

§ INTRODUCTION

Humans deal with planning problems in everyday situations. A familiar example is navigating from one place to another in a neighborhood or city. In this scenario, there is usually more than one path to choose, and, depending on the goal, one might select the shortest path, the city roads, a bypass/highway outside the traffic area or a path with the fewest traffic lights. Once he chooses the highway, he still needs to decide whether to take the toll lane and/or where to exit. On the other hand, if he chooses the city roads, he would need to decide which intersection to take, whether to use the main street or shortcuts, etc. In other words, after selecting a general path (plan), there are still smaller path decisions to make. This is an example of a more general problem in which one needs to optimally plan a sequence of interdependent choices to accomplish a goal. In some cases, the shortest path is the optimal path; in others, the goal might be to avoid the traffic at all costs.

In multistage decision making, unlike isolated choices, the focus is on how people analyze interrelated choices to make an optimal sequence of decisions, <cit.>, <cit.>. Usually, sequential decisions are represented in a decision tree in which the result of an action at one stage (e.g. a decision node) is fed into the next stage, which might be another decision node or possibly an output node. Consider a decision tree with two decision nodes (yellow circles) and five possible (green) paths as represented in the upper panel of Fig. <ref>. Given the starting position, S, and the goal, G, the paths are path B, path A1A2, path A1C2, path C1C2 and path C1A2. The black dashed line in the grid separates path B and path A1A2. In order to make optimal choices, one should know the actual output of each decision node and the uncertainty of each transition. For instance, although the number of steps (or actions) between the starting position and the next decision node (or the goal) is not depicted in Fig.
<ref>, based on the expected losses (EL), we know that path B is the best path to the goal position. In an experience-based decision tree, this knowledge is established from (an individual's) experience, <cit.>, <cit.>, <cit.>, <cit.> and <cit.>, while in a description-based version, it is provided by the experimenter (and available during the task), <cit.>, <cit.>, <cit.>, <cit.>, <cit.>. However, having this knowledge cannot guarantee optimal behavior, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>.

§.§ Examining the optimality of re-planning in sequential decision making tasks

In many of the previous studies on planning in decision trees, the environment is not dynamically changing. Once the participant learns about the risky sequential decision environment, she can plan her actions optimally and does not need to update her knowledge later. But in real life, our environment is always changing and unexpected events happen. Usually, we have two strategies to deal with these situations: reevaluate our plans with the new information (which is also known as re-planning) or ignore the new information and stick to our original plan. In this article, we extend previous planning experimental designs to situations in which the participants experience random changes in the environment and need to modify their original plans to get to the goal position. We test learning to plan and re-planning in a single framework: a 4 by 7 grid world with stochastic losses. In the learning phase of our experiments, we look into planning behavior and how participants learn to find the optimal sequence of choices. Then, in the test phase, we block the optimal path randomly and ask our participants to find a detour path (re-planning behavior) based on what they have learned during the learning phase, as illustrated in the bottom panel of Fig. <ref>. On 30% of the trials, path B (the optimal path) is randomly blocked and becomes unavailable [Participants do not experience the blockage unless they check the optimal path.]. The vertical black dashed line that is added to the grid shows the wall that makes path B unavailable. In this situation, path C1A2 is the best path to choose (optimal re-planning behavior). It is important to emphasize that the starting and goal positions are not fixed in our design. We randomize these pairs for three reasons: 1) to make sure that our participants have a fair exposure to different aspects of the grid world environment; 2) to provide a more challenging test for discriminating different models and their predictions; 3) to examine human planning behavior from different arbitrary decision nodes located in different layers of a decision tree.

In section <ref>, we discuss how we fit different models and compare the results in detail. However, it is important to highlight that our design includes a generalization test: model parameters are fit to the planning phase (with no blocks in the optimal path) and then used, unchanged, to predict re-planning when blocks are introduced, <cit.>. This provides a very strong test of the competing models, which vary in number of parameters and model complexity.
In other words, our model comparison is not restricted to how well different models learn a model of the environment and whether they can predict planning (which they have been trained for), but involves a more rigorous test of how they perform in an environment they have not experienced.

§.§ Computational models for learning sequential decision making from experience

In most everyday situations, it is almost impossible to have access to all of the information required to make sequential decisions from the start, and much of this information must be learned from experience. While it is necessary to take advantage of previous experiences to make better choices in the future, it is also essential to evaluate actions and assign (proper) credit to the earlier choices as well as to the later actions in a sequence (temporal credit assignment). These two criteria are at the heart of many reinforcement learning (RL) and unsupervised learning algorithms in computer science <cit.>, <cit.>, <cit.>. In RL, the goal is to choose appropriate action(s) which maximize the expected sum of future rewards. In this set of problems, the agent initially does not know the correct correspondence between states and actions and receives feedback following an action or a sequence of actions. This framework can address learning mechanisms ranging from very basic stimulus response (habitual behavior) to more complicated goal-directed behavior (including, but not limited to, planning). Whether people conduct themselves in accordance with RL predictions in sequential choices is still under investigation and debate (for a detailed review, see <cit.>). For instance, <cit.> asked their participants to plan a sequence of 2-8 choices by traveling through 6 states (represented as boxes on the screen) to maximize their score. Different RL models were fitted to explain the underlying mechanism. They found that people extensively pruned sub-trees with large losses in order to reduce the decision problem to a computationally manageable size <cit.>[They used two different discounting parameters to differentiate the greatest loss from the other losses.].

In general, there are two classes of RL models that can provide a solution to our sequential decision problem. The first class finds the optimal policy by learning the model of the environment and is called the model-based approach. The second class, the model-free approach, is able to maximize the expected sum of future rewards without knowing the characteristics of the environment, <cit.>, <cit.>. The second approach is therefore computationally efficient. But, as we explain later, it incorporates only one-step rewards (plus the expected future reward from the next state) into planning, which limits its ability to plan when the start and goal positions change. There is a third RL algorithm, called successor representation, that learns a rough representation of the environment by storing the expected future visits of each state, <cit.>, <cit.>. It is computationally less expensive than the model-based RL and can easily explain the planning behavior in our experiments, but it has limited ability to re-plan in a changing environment. Last but not least, we investigate how heuristics and simple strategies perform in our risky sequential choice environment. In section <ref>, we show the predictions of 3 different heuristic-based models.
Our design was able to show that these heuristic models fail, both quantitatively and qualitatively.

§ MATERIAL AND METHODS

§.§ Overview

Learning the characteristics of the environment in order to plan optimally (and possibly re-plan in case of unexpected changes) was first examined by Tolman and Honzik with rats, <cit.>, <cit.>. In this task, which is known as the detour experiment, rats were first exposed to three different paths of different lengths to a goal/food location, similar to Fig. <ref>, a. Then, in the test phase, the shortest path was blocked and the rats' ability to find the second shortest path was checked <cit.>. <cit.> and <cit.> summarized that acquiring skills like turning right at specific positions to do a particular task (e.g. reaching a goal/food) could be explained by a stimulus-response learning system. However, only goal-directed behavior could explain the rapid learning curve and the small number of errors in the rats' behavioral data when finding the detour paths, <cit.>. Inspired by these findings, one of our very basic questions in this study is whether people can plan in a stochastic environment and, if so, how far they can modify their plans in order to maximize their earnings. There are two phases in our experiment: a training phase, which allows participants to explore and learn about the grid world (without monetary reward), and a test phase, which goes beyond optimal planning and requires finding the second best path to maximize their money. One of the main differences between our design and Tolman's detour problem is that in <cit.> the environment is deterministic, while in our grid world experiment there are multiple cells with stochastic rewards (2 cells in experiment 1 and 5 cells in experiments 2 and 3). Therefore, the best path is not simply the shortest path but the optimal path, which requires planning. Recognizing the difference between the shortest and optimal paths plays a key role in the amount of money that participants can earn.

We should also note that in our design we used random pairs in each trial to make sure that participants explored the entire grid world. In the original detour problem, however, Tolman <cit.> exposed rats to different paths using fixed starting and goal positions. Table <ref> highlights the differences and similarities among the three experiments. It is important to emphasize that participants can only see a plain grid on the screen along with their current position (yellow circle) and their destination (red circle, Fig. <ref>), with no sign/cue of the obstacles or the stochastic losses. They do not have access to paper, calculators or cellphones to do any computations or take notes. In the instructions, they are told to imagine a scenario in which they move to a new city (a 4 by 7 grid) and need to get from one place to another (determined by the yellow and red circles). In the test phase, participants are told that a random accident might happen on one of the possible routes and block that path. If they see the accident, they need to find a detour path. In the following sections, we first summarize the behavioral results for these three experiments and then try to characterize participants' strategies in planning and re-planning in the pretest and test blocks, respectively. Presumably, any candidate model must incorporate environmental changes into its planning approach in order to make sure that new information is propagated to the action selection module promptly.
§.§ Experiment 1

§.§.§ Participants

Twenty healthy participants (11 females) performed experiment 1 for payment. The minimum payment was $9, and participants could earn up to $16 based on their performance during the task. Informed consent was collected from all participants and the study was approved by the Indiana University Institutional Review Board.

§.§.§ Task

Participants learned to correctly travel through a grid world in as few moves as possible while maximizing their reward. In experiment 1, there were 9 blocks, each with 20 trials. In each trial, participants viewed a 4 by 7 grid world and were asked to start from a random starting point, represented by a yellow circle, and reach a random goal position (red circle) using the arrow keys (up, right, down and left), as shown in Fig. <ref>, a. For instance, in Fig. <ref>, the start and goal points are arbitrarily located at cell 5 (start) and cell 27 (goal). Except for the goal point, participants received either a fixed or a stochastic “punishment” for any movement in the grid world. If they reached the goal, they earned +100 points (later exchanged for money at an exchange rate of 0.01). The fixed regular punishment was delivered on transition to most of the cells and was equal to -1. At cell 21, there was another deterministic (but not regular) punishment: if participants entered this cell, they received -45. Finally, cells 15 and 16 had stochastic punishments. On transition to these cells, participants received a different punishment (-75 and -3, respectively) with probability 0.8 and the regular punishment (-1) with probability 0.2. It has been suggested by <cit.> that the existence of rare events, especially outcomes that occur with probability less than 0.15, can lead to non-optimal behavior. With these settings for the loss structure, we have one optimal, one sub-optimal and one non-optimal path (three distinct paths). For instance, starting at cell 3, path A through 3, 7, 11, 15, 19, 23, 27 is the non-optimal path. Path B through 3, 7, 8, 12, 16, 20, 19, 23, 27 and path C through 3, 7, 6, 5, 9, 13, 17, 21, 22, 23, 27 are the optimal and sub-optimal paths. Note that the probabilities, the amounts and the positions of the punishments were not known to participants.

In each attempt to reach the goal, a participant had 15 moves. There were ten hidden fixed obstacles which prevented participants from moving in the selected direction while traveling in the grid world (e.g. selecting up at cell 11 was blocked). Hidden obstacles are shown with upward diagonal texture in Fig. <ref>, a. Each time a participant hit an obstacle, she received a punishment of -1 and used up one of her 15 moves. The first six blocks were designed to help participants learn about the obstacles, punishments and the details of the grid world (learning phase). During this phase, participants were not paid. The 7th block was designed to check optimal planning. Similar to previous blocks, it consisted of 20 random pairs of starting and goal points, but the difference was that participants got paid based on their score (we called it the pretest block). Finally, in the last two blocks (8th and 9th), the optimal path, path B, was blocked in one-third of the trials. The random blockage was to further examine optimal planning and also re-planning (finding the second optimal path) in our dynamic environment. The starting and goal positions were fixed at cell 3 and cell 27 in the 8th and 9th blocks (the test blocks). In Fig.
<ref>, bottom panel, we present two decision trees. In the one on the left, all three paths are available (planning phase) and their expected values were calculated based on the current positions of the start (cell 5) and the goal (cell 27). For instance, if one chooses path A (through cells 6, 7, 11, 15, 19, 23, 27), the expected value is 34.2; if he chooses path B (through 6, 7, 8, 12, 16, 20, 19, 23, 27), the expected value is 90.4; and finally, for path C (through 9, 13, 17, 21, 22, 23, 27), the exact value is 50, since all of its punishments are deterministic. The decision tree on the right is related to the re-planning phase, where path B is not available (dashed line). The expected values are calculated from cell 7, because this cell is the starting point for finding the detour path after the participant experiences the random blockage of the optimal path (path B) at cell 8: he then goes up to cell 7 to choose one of the available paths (path A or path C). If he chooses path A, the expected value is 38.8, while choosing path C leads to the value of 48. Note that in experiment 1 the corresponding decision trees in planning are directly related to the grid world in the top panel of Fig. <ref> and to where the starting point and goal are located. For another pair, the expected values of the paths are different. In the test blocks, though, the starting and goal positions are fixed and the corresponding decision tree (in re-planning) is the same. The hallmark of our design was that the grid environment was not deterministic. Thus, optimal planning was not necessarily reduced to finding the shortest path, but required selecting, through experience, the path with the maximum expected value. For instance, in Fig. <ref>, top panel, if the starting point was at cell 3 and the goal was at cell 27, the shortest path, path A, had the lowest expected value compared to path B and path C. In the re-planning trials, when path B was blocked, the second optimal path was path C, which was longer than path A.

§.§.§ Results

All three experiments had 6 blocks of learning without payment (training phase). This allowed participants to explore the grid world without any concerns about their score: if their scores became negative - as long as they were in the first 6 blocks - they were not punished. At the beginning of these blocks, participants did more exploring and less planning, but as they moved forward, they were expected to become acquainted with the environment and able to find the optimal path. In order to examine this, we fixed the starting position at cell 3 and the goal at cell 27 for the last five trials of the 6th block. With this design, we were able to recognize preliminary evidence of learning in the final blocks of the learning phase, specifically in block 6. A participant who learned the grid world correctly should be able to select path B (the optimal path) in these final trials; otherwise we excluded his data. With this criterion, one participant's data was not included in our analysis. Analyzing the pretest block's data (seventh block), all 19 remaining participants learned to choose the optimal path to reach the goal, Fig. <ref>, top left. We used the binomial distribution to compute the minimum number of trials in which a participant must plan optimally to be able to reject the null hypothesis that the optimal path was selected randomly (α=0.05). Participants who chose the optimal path in only 14 trials or fewer (out of 20 trials) did not pass this test and failed to perform optimally.
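To make this criterion concrete, the following minimal sketch (in Python) recomputes the cutoff; the null proportion of 0.5 is an assumption on our part, adopted because it reproduces both this pretest cutoff and the re-planning cutoff used below.

```python
from math import comb

def upper_tail(n, k, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k, n + 1))

def optimality_threshold(n, p=0.5, alpha=0.05):
    """Smallest number of optimal choices (out of n trials) that rejects random responding."""
    return next(k for k in range(n + 1) if upper_tail(n, k, p) < alpha)

print(optimality_threshold(20))  # 15: choosing optimally on 14 trials or fewer fails
print(optimality_threshold(13))  # 10: the cutoff for the 13 re-planning trials below
```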
The red dashed line in the top left panel of Fig. <ref> shows this minimum. In the test phase, path B was blocked in 13 trials (out of 40 trials). The minimum number of trials required for an optimal performance which could reject the null hypothesis was 10; the red line shows this threshold. Participants who selected the (optimal) detour path in 9 or fewer of the 13 trials were not optimal in the re-planning trials. Only two of the participants failed to re-plan correctly to the second optimal path (path C), Fig. <ref>, bottom left. Mixed-effect regression analysis with block number as the regressor and participants as the random effect showed that the average reward learning curve (as a function of block) was significant (t(113) = 12.138, p<0.0001).

Fig. <ref>, right, shows that participants (on average) took path B, the optimal path, in planning from cell 3 to cell 27 more frequently than path A or path C. Note that path C was the second optimal path if path B was blocked. In the heatmap, the red color shows the more frequent directions or state occupancies. For each cell, the number of times participants visited the cell and the direction they took are counted. In the test blocks of experiment 1, there are 27 trials in which re-planning did not occur, and the starting and goal positions are fixed at 3 and 27. Ideally, if all participants chose the optimal path (path B) in all trials, cells 7, 8, 12, 16, 20, 19 and 23 should be visited 19 × 27 = 513 times, while path A and path C should not be selected. As the heatmap shows, there are few trials (less than 50 trials in total) in which path B is not selected (indicated by dark blue cells in path A and path C).

§.§ Experiment 2

§.§.§ Participants

Thirty-six healthy participants (18 males) performed experiment 2 for a payment of $9 to $16 based on their performance in the task. All participants signed the informed consent forms. The Indiana University Institutional Review Board approved the study.

§.§.§ Task

In the first experiment, the best and worst paths (path B and path A) each included a cell with a stochastic reward. In order to select the second optimal path in the re-planning trials, participants needed to compare a deterministic path (path C) with a stochastic path (path A). This raises the question of whether our findings in the previous section only reflected participants' preference for a sure thing over a gamble. Therefore, we designed a more complicated environment in experiment 2, as depicted in Fig. <ref>. Similar to the previous experiment, most cells had a regular punishment of -1, but there were 5 cells with stochastic rewards. These cells were 9, 11, 16, 17 and 19, and with probability 0.8 subjects received -20, -75, -3, -30 and -5, respectively. The number of available paths was increased from 3 to 5, and they were (e.g. starting at cell 7): path A1A2 through 7, 11, 15, 19, 23, 27; path A1C2 through 7, 11, 15, 14, 13, 17, 21, 22, 23, 27; path B through 7, 8, 12, 16, 20, 19, 23, 27; path C1C2 through 7, 6, 5, 9, 13, 17, 21, 22, 23, 27; and path C1A2 through 7, 6, 5, 9, 13, 14, 15, 19, 23, 27. Path B, which includes cell 16 with expected loss of -2.6, is the optimal path. Choosing any path that contains cell 11, with expected loss of -60.2, has the worst effect on participants' monetary reward (paths A1A2 and A1C2). The losses at cells 9, 17 and 19 were adjusted to create two sub-optimal paths: paths C1C2 and C1A2.
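To make these comparisons concrete, the short sketch below computes the expected one-step loss of each stochastic cell and the total expected loss of each path (the cell sequences are transcribed from the description above, and the +100 goal reward, common to all paths, is omitted). It reproduces the -2.6 and -60.2 values just quoted and yields the ordering given next.

```python
# Stochastic cells of experiment 2: the listed loss with probability 0.8,
# the regular -1 otherwise; every other non-goal transition costs -1.
STOCHASTIC_LOSS = {9: -20, 11: -75, 16: -3, 17: -30, 19: -5}

def expected_step_loss(cell):
    big = STOCHASTIC_LOSS.get(cell)
    return -1.0 if big is None else 0.8 * big + 0.2 * (-1.0)

# Cells entered along each path from cell 7 (goal cell 27 omitted).
PATHS = {
    "B":    [8, 12, 16, 20, 19, 23],
    "C1A2": [6, 5, 9, 13, 14, 15, 19, 23],
    "C1C2": [6, 5, 9, 13, 17, 21, 22, 23],
    "A1A2": [11, 15, 19, 23],
    "A1C2": [11, 15, 14, 13, 17, 21, 22, 23],
}
for name, cells in PATHS.items():
    print(name, round(sum(expected_step_loss(c) for c in cells), 1))
# B -10.8, C1A2 -26.4, C1C2 -46.4, A1A2 -66.4, A1C2 -90.4
```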
Based on the expected losses of these five critical cells, the paths can be prioritized as follows: path B, path C1A2, path C1C2, path A1A2 and path A1C2. In this design, the corresponding decision tree has two decision nodes and five chance nodes (bottom panel of Fig. <ref>). Similar to experiment 1, in the test phase the starting point and the goal position were fixed at 3 and 27, respectively. On the 33% of trials in the test phase in which the optimal path was randomly blocked, path C1A2 was the optimal path among the remaining paths. Fig. <ref>, b shows the corresponding decision tree.

§.§.§ Results

To analyze participants' learning curves, we fitted a mixed-effect regression to the average reward that each participant earned during the first 7 blocks (the learning and the pretest blocks). We found that the average reward learning curve as a function of block was significant (t(215)=9.441, p<0.0001). Note that the number of available paths increased, but similar to experiment 1, we first checked whether the selected path was an optimal path or not. This is a simple, straightforward test of optimal behavior in our experiment. As in experiment 1, we computed the minimum number of trials in which the participant needed to choose the optimal path to reject the null hypothesis. Since the number of trials in the pretest block and the test blocks had not changed, these thresholds were the same as in experiment 1: 15 in the pretest block and 10 in the test blocks. Using this analysis, 30 out of 36 subjects found the optimal path in the pretest block (α = 0.05). When it came to finding the detour option, 27 participants found the second optimal path correctly, 7 chose the third optimal path and two did not re-plan at all, as shown in Fig. <ref>, left panel. In our analysis, the optimal strategy was to first check the original optimal path (path B) and, if it was blocked, then switch to the second optimal path (path C1A2). Fig. <ref>, right panel, captures this result from participants' choices. The heatmap shows the number of times participants visited each cell during trials in the test blocks when the optimal path was not blocked (27 planning trials). Since we have 36 participants, the maximum number of times that a cell could be occupied is 27 × 36 = 972 (represented in dark red). In a few trials, non-optimal paths (paths A1, C1 and C2) were selected (shaded in dark blue).

§.§ Experiment 3

In the instructions of experiments 1 and 2, participants were told that only one new obstacle would be added to the environment and that it would block one path. Also, in the test phase of the first and second experiments, the starting point was at 3 and the goal point was at 27 in all trials. But having a fixed pair in the test phase might indirectly lead participants to perform optimally in the re-planning trials. For instance, learning to go to cell 27 from cell 3 was easier than learning to go to cell 13 from cell 5 in pair (5, 13). In the latter pair, participants not only needed to know the stochastic payoffs but also the paths' lengths to choose the optimal path. One should note that path 5, 9, 13 had only 2 steps with an expected loss of -17.2, while path B had 10 steps with two smaller losses (a total expected loss of -13.8). In pair (3, 27), however, comparing the reward structures was enough to select the optimal path, because path A was the shortest with the minimum expected value, while paths C1A2, C1C2, A1C2 and A1A2 had smaller expected values than path B and were longer.
In short, learning to choose the optimal path was harder in some pairs; consequently, re-planning in these pairs when the environment changed (e.g. a blockage of the optimal path) was more challenging. Thus, in experiment 3, we changed the test phase to investigate a new question: how do people modify their decisions when the change in the environment is less informative? Specifically, instead of having a fixed pair of starting and goal positions in the test blocks, we randomized these pairs (similar to what participants were used to doing in the first 7 blocks).

§.§.§ Participants

Among the thirty-two healthy subjects who participated in experiment 3, there were 19 males. Experiment 3 had three test blocks, and thus the upper limit of payment was increased to $20, still depending on performance in the task. All participants signed the informed consent forms. The Indiana University Institutional Review Board approved the study.

§.§.§ Task

In sum, we kept the same design as the previous experiments, but participants saw random pairs in the test phase. In order to keep the rate of changes in the environment (in the test phase) at 33% (as it was in experiments 1 and 2), we increased the number of re-planning blocks to three (8th, 9th and 10th); therefore, in 20 out of 60 trials the optimal path was blocked.

§.§.§ Results

Recall that in this experiment the goal and starting point were not fixed in the test phase, and thus for each pair we determined the optimal and non-optimal paths separately. We found that out of 32 participants, there were five who were able to find the optimal path in the pretest and the test trials but not able to do so in the re-planning trials. There were also two other participants who were able to plan optimally in the pretest and re-plan in the test phase but failed to choose the optimal paths in the test-phase trials with no blockage. The rest behaved optimally in both pretest and test blocks for planning and re-planning trials; see Fig. <ref>, left panel. We had 20 different pairs in the pretest block (pair (7, 13) was different from pair (13, 7)). Only two participants did not plan optimally in this block. For some pairs, e.g. pair (13, 21), finding the optimal path was more challenging. In this particular pair, the non-optimal path was 13, 17, 21, with a stochastic punishment at cell 17 with expected loss of -24.2, and the optimal path was the longer path 13, 14, 15, 19, 23, 22, 21, with expected loss of -4.2 at cell 19. Participants needed to compare the longer path with the greater expected value against the shorter path with the smaller expected value. In pair (5, 13), the optimal path, path B, was even longer and less appealing; on average, participants showed mediocre performance (below chance) in this pair. In pair (14, 22), though, they had no problem finding the optimal path, probably due to the fact that path 14, 13, 17, 21, 22 and path 14, 15, 19, 23, 22 were symmetric with equal length, so participants just needed to compare each path's expected reward. Fig. <ref>, right panel, shows how often participants chose the optimal path in the test phase when it was not blocked (inspecting the planning behavior), averaged across all participants, from cell 5 to cell 13. The heatmap shows that participants chose to go down, heading to the optimal path, in spite of the fact that the shorter path required only two steps. The panel at the bottom demonstrates re-planning behavior from cell 3 to cell 27 in the test phase, with fewer data points.
Note that in the test phase of experiment 3, each pair was presented 6 times, and in only 2 of these presentations was the optimal path blocked. Thus, the maximum number of times the optimal path could be chosen across all participants in experiment 3 for each pair is 32 × 6 × 2/3 = 128. The bottom panel of Fig. <ref> shows participants' choices in the re-planning trials for pair (3, 27), summarized as follows: a) participants chose path C1A2 more often than path C1C2 when path B was blocked; b) cell 8 should have been visited 2 × 32 = 64 times (light green) if all participants behaved optimally; c) similarly, the total number of times that cell 7 should have been visited is 128 (going from 3 to 7 and then going from 8 back to 7) - in our data, it is 124, because there were trials in which participants selected path C1 without checking path B and thus visited this cell only once; d) cell 6 and cell 14 were visited more than 64 times (75 and 87 times, respectively). We found that some participants failed to remember the correct transition matrix when they experienced a blockage in the optimal path, and hit the surrounding obstacles two or three times before selecting the correct direction.

With the salient loss of -75 (with probability 0.8) occurring at cell 11, we compared the rate at which participants received this loss before and after the pretest block. The rate of experiencing -75 before the pretest block was 12.5, while this rate went down to 1.25 during the pretest and test blocks in experiment 3 (t(31) = 9.241, p<0.0001). We should emphasize that in most of the trials before the pretest block, the path containing the -75 punishment was one of the accessible paths from the starting point to the goal position and, more importantly, in 60% of these trials (73 out of 120 trials) this path was the shortest path or one of the two equally shortest paths. The fact that the rate of selecting this path was reduced significantly from the learning blocks to the test phase showed that participants learned to avoid the great losses as they entered the pretest and test blocks. We did the same analysis on the other cells that deliver losses larger in magnitude than -5. The rate of hitting cell 17, with its -30 loss, in the learning blocks was significantly different from that in the test blocks (t(31) = 6.253, p<0.0001), but for cell 9, with its loss of -20, there was no significant change. Note that all of these losses are probabilistic. From the mixed-effect regression analysis on the average reward learning curve (in the learning blocks), we found that block number as a predictor was significant (t(191) = 6.478, p<0.0001). In the test blocks, participants were faced with random blockage of the optimal path, which might weaken their performance in recalling the obstacles [Besides the burden of recalling the grid world's configuration, starting from the 7th block participants actually earned monetary rewards based on their choices, and this might impair their performance and memory recollection.]. We used a mixed-effect regression to fit the number of times a participant hit an obstacle as a function of block number in the pretest and test blocks (with participants as the random effect), and we did not find any significant difference between the rate of hitting obstacles in the pretest block vs. the test blocks (t(2527)=1.426, p=0.154) [Note that the obstacles were deterministic (in contrast to the probabilistic punishments).].
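For completeness, a minimal sketch of this style of mixed-effect fit is given below (Python with statsmodels); the file and column names are hypothetical placeholders, and statsmodels reports z rather than t statistics for the fixed effects.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per participant x block, with
# columns 'participant', 'block' and 'reward' (average reward in the block).
df = pd.read_csv("learning_blocks.csv")

# Learning curve: block number as the fixed effect,
# participant as the random (grouping) effect.
fit = smf.mixedlm("reward ~ block", data=df, groups=df["participant"]).fit()
print(fit.summary())
```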
§ THEORETICAL ANALYSIS

Reinforcement learning (RL) algorithms have been previously used to explain human behavior during the learning of dynamic decision making tasks <cit.>, <cit.>, <cit.>, <cit.> and can be formulated within a Markov decision process (MDP) framework. In an MDP, the communication between an agent and the stochastic environment at time step k is through three sets defined by states {s^i}, actions {a^u} and (probabilistic) rewards r_k. After taking action a_k, the environment transitions to a new state s_k+1 with probability T_ij^u(k) = Pr(s_k+1=s^j | s_k=s^i, a_k=a^u), and the agent receives a probabilistic reward r_k=r with probability R_ij^u(r,k) = Pr(r_k=r | s_k=s^i, s_k+1=s^j, a_k=a^u), <cit.>. The transition function and the reward function have the Markov property, meaning that the transition probability and the reward probability are independent of the history of states and actions {s_1,a_1,⋯,s_k,a_k} and merely depend on the state and the action at time step k. In other words:

Pr(s_k+1 = s^j | s_k,a_k,⋯,s_1,a_1) = Pr(s_k+1=s^j | s_k,a_k)

Pr(r_k=r | s_k+1,s_k,a_k,⋯,s_1,a_1) = Pr(r_k=r | s_k+1,s_k,a_k)

The main goal is to find the optimal policy, π^*, which determines the probability of selecting action a in state s while maximizing a desired function of accumulated rewards called the return. A widely used form of the return function is the expected sum of future rewards discounted by γ, <cit.>:

E[ ∑_k=0^∞γ^k r_k ]

where the discounting factor γ weighs future rewards relative to the immediate reward. Using the return function in equation <ref>, the state-action function, Q(s, a), can be defined as the expected discounted sum of rewards received after taking action a^j in state s^i at time step k, given the policy π:

Q_π(s^i, a^j, k) = E[ ∑_u=k^∞γ^u-k r_u | s_k=s^i, a_k=a^j, π]

Q_π(s_k, a_k) = E[ r_k + γ Q_π(s_k+1, a_k+1) ]

The latter equation is known as the Bellman equation. The expectation on its right hand side depends on the functions T and R. Notice that the Bellman equation provides one equation for each state, and so can be considered a set of recursive equations or, more formally, a system of equations. If the transition and reward functions are known, the Bellman equation can be solved by dynamic programming, <cit.>. However, in most cases, including our experiments, the agent does not know the model of the system. The model-based method estimates the T and R functions and solves the Bellman equation with value iteration, <cit.>, <cit.>. In this approach, the model-based agent learns the model of the system and uses this model to find the optimal policy. The model-free approach, such as the temporal difference learning algorithm, instead uses an estimate of the difference between the two sides of Bellman equation <ref>, which is called the temporal difference (TD) error. Therefore, without knowing the T and R functions, TD is able to learn the optimal policy which maximizes the return function. It is also computationally simple and efficient. However, its advantage is limited to a stationary environment. Since it locally updates the state-action function (equation <ref>), if there is a change in the environment (e.g. re-planning in our experiments, or reward devaluation), it takes a long time to propagate that new information and update all the state-action estimates. In addition, in the training blocks of our experiments, the TD algorithm cannot learn the correct state-action function or the optimal path because the goal and starting point are not fixed.
At best, it can learn a different set of state-action values for each pair and retrieve the relevant set for each pair (which reduces to a lookup table). For these reasons, the model-free RL (model 2) is not a good candidate to analyze our results, but to complete our model comparison benchmark, we documented its quantitative fit. The simplest form of the temporal difference (TD) error is defined in equation <ref> and is used to update the state-action function, which is called the Q-learning model (equation <ref>):

δ_k = r_k + γ · max_a Q̂_π,k(s_k+1, a) - Q̂_π,k(s^i, a^j)

Q̂_π,k+1(s^i, a^j) = Q̂_π,k(s^i, a^j) + α_c · δ_k

Note that α_c is the learning rate, with s_k = s^i and a_k = a^j. Because the goal and starting points could be on any side of the grid, the associated Q(s,a) is initially set to Q_0 for all cells in the grid, <cit.>. We could have specified a different Q_0 for each cell and each action according to its position in the grid and the starting position (especially for cells next to the grid's borders), but this would add more free parameters to the model with negligible benefit. Using the Q-values, an action selection module, e.g. the softmax decision rule, can be used to choose the appropriate action with probability

p_k(a) = exp(β · Q_k(s,a)) / ∑_a' exp(β · Q_k(s,a'))

where β is the inverse temperature. If β → 0, the algorithm selects a random action (in our case, we have 4 actions: up, down, right and left, so each action is selected with probability 0.25) and if β → ∞, the action with the maximum Q-value is selected (as in a greedy policy). We also used a baseline model (model 1) with no learning, which randomly chooses an action at each state and does not consider previous experiences. Modeling participants' behavior in our task is not trivial: a candidate model should be able to plan ahead and estimate the reward function simultaneously and appropriately. With these goals in mind, the baseline model has no planning at all and fails to learn the payoffs in the grid world, and the model-free RL has problems learning the correct reward function and planning because of the dynamically changing environment. Again, the model-free RL plans only for the one-step reward, and in a changing environment this strategy is not useful. In the following, we explain how different strategies of estimating the reward function and various manipulations of the planning module change the models' predictions in the learning and test phases.

§.§ Model Based Reinforcement Learning

As discussed earlier, the model-based RL solves the Bellman equation by learning the model or representation of the environment (the cognitive map in <cit.>). Unlike the TD learning algorithm, the model-based RL's estimates and updates are global, and because of this distinction, when the environment changes or the reward is devalued, the model-based RL can respond to these changes quickly. We compute the state-action function with q-value iteration (<cit.>, <cit.>), sketched in code below, as follows:

Q̂_k+1(s^i, a^u) ← ∑_s^j T(s^i, a^u, s^j) [ R(s^i, a^u, s^j) + γ max_a' Q̂_k(s^j, a') ]

In general, estimating the T and R functions depends on the environment with which the agent is dealing. Since the obstacles in our grid world are deterministic, participants can learn the transition function with only one experience. We can also assume that each time an action is selected in a specific state, the corresponding entry of the transition function is updated with a learning rate.
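As referenced above, the following is a minimal sketch of the q-value iteration update in Python; the array layout (T and R indexed by state, action, next state) and the convergence tolerance are placeholder conventions of our own, not part of the original implementation.

import numpy as np

def q_value_iteration(T, R, gamma=0.95, tol=1e-6):
    """Iterate Q(s,a) <- sum_s' T[s,a,s'] * (R[s,a,s'] + gamma * max_a' Q(s',a'))."""
    n_states, n_actions = T.shape[0], T.shape[1]
    Q = np.zeros((n_states, n_actions))
    while True:
        V = Q.max(axis=1)  # V(s') = max_a' Q(s', a')
        # Backup: expected immediate reward plus discounted next-state value.
        Q_new = np.einsum("ijk,ijk->ij", T, R + gamma * V[None, None, :])
        if np.abs(Q_new - Q).max() < tol:
            return Q_new
        Q = Q_new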
In order to learn the probabilistic reward function, one possible method is to take the average of what the participant has earned for each state-action pair so far (to be discussed later in equations <ref> and <ref>). Using these two pieces of information, we can compute the state-action value function Q(s,a) at each state with q-value iteration (<cit.>, <cit.>). Once we have the state-action function, we choose the appropriate action using the softmax decision rule as in equation <ref>.

The average reward predicted by the model-based RL for experiment 2 is shown in Fig. <ref>. The model-based RL predicted behavior similar to that of our subject (no. 14). To show the qualitative fit, we chose a very basic version of the model-based RL that updates its estimates at each step using the q-value iteration method. We assumed that the model knows the correct transition matrix and does not need to learn the T function. Taking the average of previous samples requires a perfect memory (specifically at cells with stochastic rewards) and is not practical for human participants. Instead, we approximate the reward function dynamically with a linear filter, using equation <ref>:

R(k+1) = α_1 · R(k) + (1-α_1) · r

where r is the reward that is delivered upon transition and R(k) is the averaged reward up to time step k. α_1 determines the relative importance of the averaged (observed) reward versus the current reward. We also include a forgetting factor, α_2, which plays a decaying role in the reward estimate (equation <ref>) when the greater loss is not experienced (e.g., 20% of the time these cells deliver a regular loss of -1):

R(k+1) = α_2 · R(k)

Assuming that people are only sensitive to great losses, learning and estimating the reward function only occur when a great loss is experienced. Whenever a participant experiences a great loss (anything but -1) at a cell, the model updates its estimate using equation <ref>. We tested models with different assumptions regarding α_1 and α_2 by categorizing the losses into three levels of saliency (low, medium and high): cell 16 and cell 19 with possible losses of -3 and -5 (low saliency, LS), cell 9 and cell 17 with probable losses of -20 and -30 (medium saliency, MS) and cell 11 with a loss of -75 in 80% of the time (high saliency, HS). A similar pattern holds for experiment 1, with a medium-saliency loss of -45 at cell 21, a very salient loss of -75 at cell 15 and a small loss of -3 at cell 16. Based on the Bayesian Information Criterion (BIC), we found that the model with α_1_HS, α_1_MS, α_1_LS and no decay rate was the best to use; a sketch of this salience-dependent update follows below. In addition to the linear filter, there are other methods to estimate the reward function as well (e.g. simple strategies or rules). In the next section, we introduce 3 different heuristics that we use to learn the reward structure. Note that we are still using the model-based RL framework, but each strategy estimates the R function differently with small or no computational cost (models 4, 5 and 6).

§.§ Heuristic-Based Models

We use 3 different heuristic-based models to fit our data (training dataset). These heuristics are: avoiding the great loss, remembering the last reward and finding the shortest path. These heuristics, along with the previously described models, are summarized in Table <ref>.
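As referenced above, here is a minimal sketch of the salience-dependent linear-filter update; the mapping from cells to saliency levels follows the text, while the numerical rates are hypothetical placeholders rather than the fitted values.

# Saliency levels for experiments 2 and 3 (cell -> level), from the text.
SALIENCY = {16: "LS", 19: "LS", 9: "MS", 17: "MS", 11: "HS"}
ALPHA_1 = {"LS": 0.9, "MS": 0.8, "HS": 0.5}  # placeholder rates, not fitted values

def update_reward(R_hat, cell, r):
    """Linear filter R(k+1) = a1 * R(k) + (1 - a1) * r, applied only when a
    great loss (anything but -1) is experienced; no decay term (alpha_2
    unused), matching the best model selected by BIC."""
    level = SALIENCY.get(cell)
    if level is None or r == -1:
        return R_hat  # regular cell or regular -1 loss: no update
    a1 = ALPHA_1[level]
    R_hat[cell] = a1 * R_hat[cell] + (1.0 - a1) * r
    return R_hat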
Avoiding the salient loss: As we discussed above, participants were sensitive to the punishment at cell 11, with an expected value of -60.2. This cell had the greatest and most salient loss. A candidate model which avoids the salient loss can explain the behavior of those participants who chose not to enter cell 11 (model 4). This model does not estimate the reward function but only avoids cell 11, with its -75 loss, in experiments 2 and 3 (and cell 15 in experiment 1). Note that this behavior does not indicate that these participants necessarily chose the optimal path. For instance, in pair (5, 13), we had two groups of participants: those who failed to find the optimal path by choosing path C1 (and facing the loss of -20 at cell 9) rather than path B (with the minimum loss), and those who learned about the other punishments, which are not as salient as cell 11, and in the end chose path B (which is optimal).

Remembering the last reward: The fifth model (in Table <ref>) is categorized as a model-based RL, but only remembers the last reward and does not estimate the R function. This model saves the last value of the reward at each cell. Thus, with one bad experience (e.g. receiving -75 at cell 11), it is less probable for the model to enter that cell. The last value of any experience is saved for the rest of the computations and action selection. This model does not need the full memory of reward samples because it does not need to estimate or learn the stochastic reward structure. This simplicity can cause sub-optimal behavior in the re-planning trials during the test phase. Consider the following scenario: in the 8th block at pair (7, 13), the optimal path (path B) is blocked and the model chooses path C1, assuming that its last experience on path A1 was the great loss at cell 11. Then at cell 9, it would receive -1 instead of -20. This would be saved in the model's memory. For the next pair, e.g. pair (5, 13), which includes cell 9, the model selects path C1 over path B simply because its last experience at cell 9 was a regular punishment. Moreover, it only takes two steps to catch the goal, with an expected value of 99. Path B, however, requires participants to take 10 steps and its expected value is 88.4. This strategy would only be useful if the environment were deterministic.

Finding the shortest path: There are a few participants who chose the shortest path (A1A2) in the trials with a blocked optimal path, but were able to find the optimal path in regular trials. This suggests that these participants did not care about maximizing their reward when they encountered an obstacle in the optimal path (against the goal of the experiment), and that it was only important for them to catch the goal using the remaining available moves. This approach (model 6), with no re-planning, finds the shortest path instead of choosing a path with minimum loss (a breadth-first search sketch of this heuristic is given below). Also note that choosing the shortest path is not always distinguishable from other strategies, since many pairs in these experiments have two paths of equal length. For instance, in experiment 1, in the pretest block, 12 out of 20 pairs have this characteristic: a path with a regular punishment and one with a great loss, both of equal length. Therefore, model 6 can choose any of these paths with equal probability. In experiment 1, there were two people who chose the shortest path in those trials. In experiment 2, there were 6 participants who failed to choose the second optimal path and were not sensitive to the -30 loss, and one person who chose the shortest path ignoring all the losses. Finally, in the third experiment, there were 7 participants with a similar pattern in their behavior. What surprised us in these results was that these participants had learned to find the optimal path when asked in the pretest block, but after experiencing the blockage in the optimal path, they ignored or forgot what they had found about the reward structure and switched their strategy from choosing the optimal path to the shortest path.
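As referenced above, a minimal breadth-first search sketch of the shortest-path heuristic (model 6); the neighbors function, which encodes the grid adjacency with obstacles and blockages removed, is assumed to be supplied by the experiment code.

from collections import deque

def shortest_path(start, goal, neighbors):
    """Breadth-first search from start to goal; `neighbors(cell)` yields the
    reachable adjacent cells (obstacles/blocked transitions excluded).
    All punishments are ignored, as in model 6."""
    parent = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = [cell]
            while parent[path[-1]] is not None:
                path.append(parent[path[-1]])
            return path[::-1]
        for nxt in neighbors(cell):
            if nxt not in parent:
                parent[nxt] = cell
                queue.append(nxt)
    return None  # goal unreachable (e.g., fully blocked)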
§.§ Cubed Model-based RL: Planning On A Smaller Grid

Earlier, we explained how the shortest path and the optimal path are different in our design. Based on this distinction, we define the length of each pair as the number of steps in the shortest optimal path. Thus the length of the path between cell 5 and cell 13 is 10, because the optimal path contains path B, even though it only takes two steps to reach cell 13 from cell 9 (disregarding the stochastic punishments). Now assume that at the beginning of the learning phase a participant is at cell 5 and she wants to reach cell 13. At first glance, path C1 is the shortest and path B is the longest. But if she only takes one step, she would probably (80% of the time) be faced with the great loss at cell 9. Next time she would divert her decision to either path A1 or path B. Assuming she chose path A1, after three steps she would experience the maximum loss at cell 11 with probability 0.8. These two results would lead her to take path B in the next trial. In this scenario, one could use q-value iteration for three steps ahead (instead of the whole path as in model 3), prune the rest of the paths and lower the computational cost (model 7). This model, which we call the "cubed model-based RL", is fundamentally different from the previous heuristic-based models because the planning module has been changed. In fact, model 7 plans for a small window ahead of the current position (using the shortest length for each pair); a depth-limited planning sketch is given below. The number of steps at which to prune the decision tree is an extra free parameter in the cubed model-based RL. While pruning at three steps is good for pair (5, 13), it is not a good stopping point for pair (3, 17). In order to compare paths C1 and B correctly, one should plan at least four steps ahead from cell 3 to be able to see the losses at cell 12 and cell 9. So for different pairs, the model needs to use different pruning depths. The diversity of pairs in each experiment (e.g. experiment 3 has 65 pairs) forces this model to pick the maximum number of available steps for planning (which is 7 in our grid world), and thus, with one extra parameter, the cubed model-based RL cannot be distinguished from model 3 for many participants at many pairs, as we report in the model fit results.
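As referenced above, a sketch of the depth-limited planning used by the cubed model-based RL: the same backup as in the earlier q-value iteration sketch, but truncated at a pruning depth instead of iterated to convergence. T, R and the array layout follow the same placeholder conventions as before.

import numpy as np

def q_depth_limited(T, R, depth, gamma=0.95):
    """Finite-horizon backup: plan only `depth` steps ahead (model 7).
    With depth set to the maximum number of available steps (7 in this
    grid world), its choices become hard to distinguish from the full
    q-value iteration of model 3."""
    n_states, n_actions = T.shape[0], T.shape[1]
    Q = np.zeros((n_states, n_actions))
    for _ in range(depth):
        V = Q.max(axis=1)
        Q = np.einsum("ijk,ijk->ij", T, R + gamma * V[None, None, :])
    return Q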
So far we have discussed the first 7 models in Table <ref>. These models differ in their planning and learning approaches. In traditional model-free RL, the Q-values are computed using TD errors. But since the goal is not fixed and the starting points are randomized, this approach cannot learn the correct Q-values at each state. Moreover, the TD mechanism is unable to plan ahead when the environment is not deterministic. The baseline model lacks learning and planning regardless of the environment. We proposed 3 different heuristics (avoiding salient losses, finding the shortest path and remembering the last punishment) based on our data. Although these strategies are simpler to simulate and require less computation (because they do not estimate the R function), they can only explain a small portion of our data and mainly make wrong predictions. The (full) model-based RL, model 3, learns the model of the environment and estimates the reward structure. Having the model of the environment, which is independent of the goal or the starting position, helps the agent to predict the consequences of each action before taking it.

§.§ Successor Representation

In addition to the main RL families that have been explained so far, there is another algorithm, called the successor representation (SR), that was first introduced by <cit.> and has the flexibility of the model-based RL and the simplicity of TD learning. In this algorithm, the Q-values are decomposed into a reward matrix and a successor map M (called the SR matrix), which predicts the (discounted) future occupancy of all states. Given any initial state s^i, the SR matrix counts the discounted number of times that a subsequent state (in a trajectory) is visited later:

M(s^i, s^u, a^j) = 𝔼[ ∑_ν=0^∞ γ^ν 𝕀[s_k+ν = s^u] | s_k = s^i, a_k = a^j ]

where 𝕀[.] = 1 when its argument is true and zero otherwise, <cit.>. While it is possible to compute the SR matrix from the transition matrix <cit.>, it is more common (and less expensive) to estimate the SR matrix via a Bellman-style TD update, as we did in Q-learning; the updating rule is as follows (α_l is a learning rate):

M(s^i, s^u, a^j) ← M(s^i, s^u, a^j) + α_l [ 𝕀[s_k+1 = s^u] + γ M(s_k+1, s^u, a_k+1) - M(s^i, s^u, a^j) ]

In order to calculate the Q-values given a policy π, we need to compute the inner product of the reward function and the SR matrix using equation <ref>, <cit.>:

Q_π(s^i, a^j) = ∑_s^u M(s^i, s^u, a^j) R(s^u)

A code sketch of this update and read-out is given below. As we discussed before, there are multiple methods to estimate the reward function (three different heuristics and one linear filter, equation <ref>). Consistent with our model-based RL fit, we use the linear filter and the heuristics to learn the reward function (avoiding the great loss, remembering the last reward and finding the shortest path, corresponding to models 9, 10 and 11, respectively).
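As referenced above, a minimal sketch of the SR update and Q-value read-out; M is assumed to be stored as an array indexed by (state, successor state, action), and R_hat is the estimated reward per state, following our earlier placeholder conventions.

import numpy as np

def sr_td_update(M, s, a, s_next, a_next, alpha_l, gamma):
    """One TD update of the SR matrix after the transition (s, a) -> s_next,
    with a_next the action taken at s_next."""
    indicator = np.zeros(M.shape[1])
    indicator[s_next] = 1.0
    td_error = indicator + gamma * M[s_next, :, a_next] - M[s, :, a]
    M[s, :, a] += alpha_l * td_error
    return M

def sr_q_values(M, R_hat):
    """Q(s,a) = sum over successors s' of M[s, s', a] * R_hat[s']."""
    return np.einsum("isa,s->ia", M, R_hat)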
§.§ Hybrid SR-MB

It is important to emphasize that the SR matrix, M, is different from the transition matrix. Thus, if there are abrupt changes in the environment, i.e. transition revaluation, the SR model fails to adapt to these changes without learning the new trajectories. In our experiments, the random blockage of the optimal path in the test blocks is one example of transition revaluation. In order to choose the correct action, the SR algorithm needs to relearn the (new) trajectories. One possible solution to this problem is to combine the SR model with the model-based RL model (hybrid SR-MB), <cit.>, <cit.>. For instance, the probability of each action can be a weighted average of the model-based RL's predicted probability and the SR's predicted probability. By assigning a greater weight to the model-based RL's prediction, the hybrid SR-MB can be more flexible toward sudden changes of the transition matrix.

§ MODEL FIT RESULTS

Although we have 12 different models, some of them share common characteristics in planning or reward estimation. For instance, model 5 differs from model 4 in estimating the reward function (model 5 only remembers the last reward while model 4 avoids the greatest loss), but they both use model-based decision making in planning. Because of these differences and similarities, we divide these models into 6 unique categories:
a) the random choice model with no planning and no learning (model 1);
b) the model-free RL, which does not learn the structure of the environment but does plan based on one-step rewards and the next step's expected future reward (model 2);
c) the model-based RL and/with heuristics (models 3, 4, 5 and 6); these models are similar in their planning but differ in how they estimate the reward structure;
d) the cubed model-based RL (model 7) with a constrained view of the environment;
e) the SR model and/with heuristics (models 8, 9, 10 and 11); these models learn a similar multi-step representation of the environment, but their estimates of the reward function differ; note that the SR models are not as good as the model-based RL in learning the environment, but better than the model-free RL;
f) the hybrid SR-MB (model 12), which blends the SR algorithm with the model-based RL to improve its performance under transition revaluation.

In some of these categories, e.g. category c, there are four different models and in some, e.g. category f, there is only one model. Our strategy to compare these models is to first nominate one candidate in each category (within-group comparison) and then measure how good they are in contrast to the other categories. Note that we first compare the performances of these models in the learning blocks and then evaluate their predictions in the test blocks.

§.§ Model Fit Results In The Learning (Training) Blocks

There are multiple ways to evaluate models. The first that we chose to report is based on BIC comparison <cit.>. To this end, we fitted these models to the data from the learning blocks (blocks 1 to 6, with n=120 trials) using the one-step-ahead prediction method, equation <ref>. In this method, the model predicts each participant's choice on the next trial using the sequence of choices she has made and the payoffs she has experienced. Specifically, the candidate model calculates the probability of the action that the participant actually selected at each cell, given the history of choices and rewards/punishments ({s_1, a_1, r_1, ⋯, s_k, a_k, r_k}):

LL_i = ∑_k=1^n-1 ln( Prob(a_k+1 | {s_1, a_1, r_1, ⋯, s_k, a_k, r_k}) )

In order to find the best-fitted parameters, maximum likelihood estimation is applied (simplex search algorithm) and the BIC is computed as in equation <ref>:

BIC = -2 · LL_i + k · ln(N)

where k is the number of free parameters, N is the number of observations and LL_i is the log-likelihood of the i-th participant (a code sketch of this evaluation loop follows).
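As referenced above, a sketch of the one-step-ahead evaluation; `model.action_probs` and `model.update` form a hypothetical interface that any of the candidate models could implement, not functions from the original code.

import numpy as np

def one_step_ahead_ll(model, trials):
    """LL_i = sum_k ln Prob(a_{k+1} | history up to step k), computed by
    querying the model's prediction before feeding it each observation."""
    ll = 0.0
    for s, a, r in trials:              # participant's (state, action, reward)
        probs = model.action_probs(s)   # prediction from the history so far
        ll += np.log(probs[a])
        model.update(s, a, r)           # append this observation to the history
    return ll

def bic(ll, n_params, n_obs):
    """BIC = -2 * LL + k * ln(N)."""
    return -2.0 * ll + n_params * np.log(n_obs)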
First, we compared the models in the model-based RL category (model-based RL and heuristics, category c). Model 5, which remembers the last reward, wins in experiment 1, but model 3, the model-based RL with a linear estimate of the reward, has the lowest BIC in experiments 2 and 3. The simplicity of the environment in experiment 1 plays a critical role in this result. In experiment 1 there were only three paths (rather than 5) and the detour path (path C) had no stochastic punishment. The loss on path B, the optimal path, was very small, and even if the participant was unlucky, the worst-case loss was -3. On path A, however, the difference between the regular punishment and the worst-case loss was considerable. In 80% of the trials, the environment delivered a loss of -75, and thus, after one such experience, the model with the strategy of remembering the last punishment was no longer likely to choose this path. Therefore, model 5 was able to find the optimal path with only two free parameters (temperature and discount) without learning the reward structure. In the other two experiments, because there was a 20% chance that participants received a regular punishment of -1 at cells 9 and 17, the strategy of recording only the last punishment might mislead the model into choosing the wrong path. Similarly, in the within-category model comparison for the SR models, the full SR model had the better BIC in experiments 2 and 3, while the heuristic-based SR model which ignores the rewards (and chooses the shortest path) won in experiment 1 [It is important to mention that in experiments 2 and 3 we had similar AIC results, but not in experiment 1; in fact, the full SR model had a better AIC.]. Next, we compared the remaining models (the winner in each category) using the BIC for each experiment, Table <ref>.

In experiment 1, the SR model which finds the shortest path (with 3 free parameters) had the lowest BIC. Note that the hybrid SR-MB model also had a very low BIC (1520), although it had 7 free parameters. In experiments 2 and 3, the best model was the hybrid SR-MB. The baseline model, Q-learning and the cubed model-based RL had the worst fits, as we expected. The full model-based RL was among the top three models in all experiments. In experiments 1 and 3, it was the third best model, but in experiment 2 it held second place. Note that all of these results were computed on the data from the learning blocks. One way to compare these models is to treat the BIC/AIC as the log model evidence for each participant and investigate whether there is a significant difference between the BIC (AIC) for a pair of models, <cit.>. However, this needs many pairwise comparisons. Moreover, the results based on AIC and BIC are not entirely consistent [Probably because the comparisons are affected more by possible outlier values in the BIC or AIC.]: based on the AIC, there is much stronger evidence that the hybrid SR-MB model is the best model in experiment 1. A more sophisticated approach for comparing this number of models is to compare their exceedance probabilities (EP), as proposed by Stephan et al., <cit.>. In this algorithm, the model identity is treated as a random variable and the probability that a model generated a participant's data is defined by a multinomial distribution whose parameters are described by a Dirichlet distribution. Stephan et al. used a variational Bayes method to estimate the Dirichlet distribution's parameters based on the marginal likelihood of each model, <cit.> [We used the MATLAB code developed by Samuel J. Gershman <cit.>.]. Given the parameters of the Dirichlet distribution, it is possible to compute the probability that a model is more likely than any other model [To compute these probabilities, it is necessary to have an estimate of the marginal likelihood of each model m for the data set D, i.e. p(D|m).]. The best model has the highest EP (probability closer to 1). Table <ref> shows the EP of the six models that we discussed above.

In experiment 1, the hybrid SR-MB model and the SR-ShPath model were competing with each other and, as we expected, the EPs calculated from the AIC and the BIC were not consistent. In experiment 2, the full model-based RL performed more or less similarly to the hybrid SR-MB model, while in experiment 3 the superior fit was provided by the hybrid SR-MB model. These results confirmed our previous finding that the heuristic-based models were not able to explain the data in a more complicated environment. The hybrid SR-MB model was the best model in all three experiments if we only considered the AIC results (and the AIC-based EP).
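For completeness, a compact Python sketch of the random-effects scheme of Stephan et al. as we understand it: a variational estimate of the Dirichlet parameters from per-participant log evidences (e.g. -BIC/2), followed by Monte Carlo estimation of the exceedance probabilities. This is our own simplified re-implementation under those assumptions, not the MATLAB code cited above.

import numpy as np
from scipy.special import digamma

def exceedance_prob(log_evidence, n_samples=100000, tol=1e-8):
    """log_evidence: array of shape (n_subjects, n_models), e.g. -BIC/2.
    Returns the exceedance probability of each model."""
    n_subj, n_models = log_evidence.shape
    alpha = np.ones(n_models)
    while True:
        # Posterior responsibility of each model for each subject's data.
        log_u = log_evidence + digamma(alpha) - digamma(alpha.sum())
        u = np.exp(log_u - log_u.max(axis=1, keepdims=True))
        u /= u.sum(axis=1, keepdims=True)
        alpha_new = 1.0 + u.sum(axis=0)
        if np.abs(alpha_new - alpha).max() < tol:
            alpha = alpha_new
            break
        alpha = alpha_new
    # EP: probability that each model's frequency exceeds all the others,
    # estimated by sampling from the fitted Dirichlet posterior.
    samples = np.random.dirichlet(alpha, size=n_samples)
    return np.bincount(samples.argmax(axis=1), minlength=n_models) / n_samples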
For the next section, we chose the best three models in each experiment and compared their predictions in the test blocks.

§.§ Model Prediction In The Test Blocks

So far, we have used the learning phase data to train our models and their quantitative fit to evaluate them. As presented above, the hybrid SR-MB model was dominantly the best model (among our proposed heuristics and models) in explaining the data in all three experiments using the AIC. These results were not so clear when we compared the BIC and the EP of these models: the SR-ShPath and the MBRL had a better fit in experiments 1 and 2, respectively. Therefore, we took our model comparison to another level and investigated the models' predictions in the test phase. Using the knowledge from the training phase, we examined whether these models could find the optimal path in an environment that they had not yet experienced. We evaluated these predictions using two approaches: a) calculating the log-likelihood of each model during the test phase and comparing these results (similar to what we did in the training phase); b) using the best-fitted parameters, comparing the models' predictions at critical cells in the test phase.

Table <ref> summarizes the EP of the three most successful models discussed in the model fitting section. The heuristic-based SR model (which finds the shortest path) had the greatest EP in experiment 1. In experiments 2 and 3, the model-based RL was the best model in explaining the test trials' data. To our surprise, the hybrid SR-MB model failed in all of these experiments. One possibility that can explain the failure of the hybrid SR-MB model is that in our model fitting, we only used the learning blocks to train these models. When the environment was not changing, even a pure SR model (with or without heuristics) provided a better fit than a model-based RL. But the SR model needs to experience new trajectories when the transition matrix changes. When we linearly combined the SR algorithm with the model-based RL, greater weights were assigned to the SR model's prediction [The average weights for the SR model in our experiments were close to 1 and almost 0 for the model-based RL.]. Therefore, the model-based RL had a small or zero contribution in predicting the re-planning trials' data in the test phase.

One solution to this problem is to use a mixture model which switches from the SR model in the planning trials to the model-based RL mechanism in the re-planning trials. We did not fit this model, but one possibility for implementing this idea is to activate a switch that detects surprising changes in the environment. It can then signal the hybrid SR-MB model to let the model-based RL take over the decision making process. A model that switches between training and test would post hoc fit all the data best, because it uses the best-fitting model for training and then switches to the best-predicting model for the test. However, future research is needed to identify the switching mechanism and design experiments to test this mechanism a priori. We also compared the models' predictions at cell 8 after participants experienced a random blockage of the optimal path in experiment 1. In these trials, the optimal behavior is to go up to cell 7 and then reroute to the second optimal path (path C) to reach the goal. Note that in experiment 1, 13 out of 40 trials in the test blocks were re-planning trials, and that the starting position and the goal were fixed at cells 3 and 27.
As we expected, the SR model which finds the shortest path (or basically any SR model) was not sensitive to the changes of the transition structure in the environment. The probability of selecting the right action at cell 8 was still greater than that of the other actions (0.63), although path B was not available anymore. The predicted probability of selecting the up action, by the SR model, was 0.08. In the model-based RL, however, the probability of selecting the right action was the lowest, 0.03. Similar patterns were found in experiments 2 and 3.

§ DISCUSSION

Planning in a stochastic environment is challenging. It becomes even more challenging when the environment is unknown to us. No matter how complicated these problems are, we mainly use our previous experiences to deal with them. Sometimes the environment changes and forces us to change or modify our plan. As a result, we update our plan every now and then to make sure it succeeds. In this article, we studied real-life planning problems in a simplified situation using our grid world experiment. Using this framework, we developed three experiments to investigate planning and re-planning in humans while they learn an unknown environment. After 6 blocks of training in a 4 by 7 grid, participants' planning skill was tested. At the beginning of the 7th block (the pretest block), we informed them that their score would be converted to a monetary reward (the exchange rate was 0.01). The majority of our participants (19 out of 19 in experiment 1, 30 out of 36 in experiment 2 and 30 out of 32 in experiment 3) were able to find the optimal path. This is in contrast to previous studies which showed that people are more likely to be sub-optimal in description-based decision trees involving probabilistic rewards, <cit.>, <cit.>. It has been proposed by <cit.> that people show planning biases in description-based decision-tree problems, but not in experience-based learning. In what follows, we discuss our findings and contributions separately.

§.§ Optimal planning in a grid world

Simon and Daw <cit.> investigated the neural bases of planning in a dynamic maze, comparing two well-established RL models (TD learning and model-based theory). Participants navigated through a virtual maze (with 16 rooms) to earn the maximum possible reward. They found that the BOLD signal related to the value and the choice in the striatum is correlated with the prediction of model-based theory <cit.>. In their design, participants did not need to estimate the expected values of the rewards because the rewards were deterministic and the maze, although it was constantly (and randomly) changing, was known to the participants. Therefore, planning was equivalent to finding the new shortest path to the goal at each state by relearning the (new) environment while knowing the general structure of the maze. In our design, however, with the stochastic reward structure and deterministic obstacles, the shortest path was not optimal and participants did not need to relearn the configuration. The basic configuration of our grid world was similar to the detour problem in <cit.>, but we modified the grid world into a stochastic environment where finding the shortest path was no longer optimal. It is important to note that our grid world had two distinctive features that encourage goal-directed behavior: first, the (hidden) punishments were probabilistic, and second, the starting and goal positions were randomly located in different cells.
In order to find the optimal path, participants needed to learn and compare the expected values of different paths. Employing probabilistic rewards enabled us to represent this problem in a decision tree framework and to study (optimal) planning in a way similar to experience-based decision-making tasks, Fig. <ref>.

§.§ Re-planning in humans

Through simple error-driven learning rules (e.g. TD learning), the model-free RL selects the action that leads to a greater reward (outcome) more frequently. The model-free RL stores the state-action values without learning the model of the system and is therefore computationally simple. However, when the environment changes, as with a blockage of the optimal path in our grid world, these state-action values become useless and need to be learned again. Note that changes in the environment are not necessarily limited to the transition function, but could also be related to the reward (e.g. reward devaluation), <cit.>. This inability to adapt to varying circumstances makes the model-free RL error-prone when the structure of the task is changing, <cit.>.

On the other hand, the model-based RL can update the state-action values globally due to its knowledge (or representation) of the system. For instance, in our grid world experiment, when the optimal path is blocked, the model-based RL only changes the transition function for that path. Consequently, this update changes the state-action values (Q-values) through a dynamic programming algorithm. This flexibility enables the model-based RL to pursue the goal without the need to experience and learn the environment again (the goal-directed approach). A large amount of research has focused on how these two approaches cooperate or compete with each other in different areas (from clinical studies to neuroscience) using different tasks, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>.

When a change occurs in our environment (e.g. the grid world), we cannot rely on our habits anymore. We need to update our knowledge and modify our original plans to accomplish our goal (re-planning). To the best of our knowledge, re-planning in humans has rarely been studied, <cit.>, <cit.>, and only with two-step decisions. We were able to analyze re-planning behavior by randomly blocking the optimal path in our test blocks. In experiments 1 and 2, the starting and goal positions were fixed during the test phase, but in experiment 3 we randomized these pairs. Because of the stochastic nature of the payoffs, our results are not directly comparable with the rat experiments of <cit.>, but we showed that humans are capable of modifying their plans when there is a change in the environmental circumstances.

§.§ Computational modeling

We used 12 different models to fit the choice data. The baseline model selects a random action at each cell regardless of what participants have experienced. This was the simplest model in our benchmark. The traditional Q-learning algorithm was not able to explain the data because the starting and goal positions in our experiments were not fixed. In addition, when a change happens in the environment, the whole set of state-action values needs to be updated (again by an extensive amount of learning and exposure to the new environment). The (full) model-based RL tries to learn the model of the environment by estimating the transition and reward functions.
This knowledge (of the environment) is later used to generate a Q-value for each action. The model-based RL had the best predictions in the test blocks. In model 7, we restricted the model-based RL in its spatial search. Instead of the complete tree search that is commonly used in value iteration, we confined the model's planning depth to its k-th nearest neighbors (k is a free parameter). While this modification can decrease the computational costs, for many pairs the best-fitted k leads to a full tree search.

In addition to the model-based and model-free RL, there is another alternative, the SR, which is more flexible than the model-free RL and computationally simpler than the model-based RL, <cit.>. The SR calculates the state values using both the reward and a successor map which stores the expected, discounted future state occupancies. In the case of reward devaluation, the SR's behavior is similar to the model-based RL, but when there is an alteration in the transition structure, it fails to adapt to the change (similar to the model-free RL), <cit.>, <cit.>, <cit.>. Although the hybrid SR-MB model provided a better account of participants' choices in the learning blocks, it failed to predict the re-planning behavior in the test blocks. Since the candidate models were not trained on the test blocks' data, they needed to generalize their knowledge (from the learning blocks) to perform optimally in the re-planning trials when the optimal path was randomly blocked. One solution to this problem is to use a mixture model which switches from the SR model in the planning trials to the model-based RL mechanism in the re-planning trials. It can post hoc fit all the data best, because it uses the best-fitting model for the training blocks and then switches to the best-predicting model for the test blocks.

Another possible candidate to explain the data is a multi-step planning model, as proposed by Sutton et al., <cit.>, which introduces the concept of "temporal abstraction" over actions that are interrelated. Instead of selecting an action for each state, a sequence of actions can be chunked and used. Such a sequence is called an option, and this framework is called hierarchical reinforcement learning (HRL), <cit.>, <cit.>. Multi-step planning (MSP) models can be implemented in different ways. For instance, an MSP model can update the predetermined plan whenever a surprising or unexpected event happens (any change in the transition function or an unexpected punishment). The model then modifies the original plan more often at the beginning of learning and less often once it has built a representation of the task (with fewer surprises). Alternatively, the model can plan a few steps ahead until it reaches a predefined subgoal (like a hub in the grid world) and then update its estimates before proceeding to the final goal (or next subgoal), <cit.>. The current design did not allow us to specify a unique subgoal for different pairs, but this can be a direction for future research.

§.§ Applying heuristic-based models

Our heuristic-based models are inspired by some of our participants' data. Avoiding the greatest loss is useful in curtailing the decision tree search, but it is not optimal. It only guarantees not taking the worst action in the cells surrounding the greatest loss; it does not provide any policy in the other cells. In other words, it reduces to the baseline model at cells that are not adjacent to cell 11 and cell 15, and it fails to explain our data.
In <cit.>, the authors examined whether a big loss at early stages of a decision tree could interfere with finding the optimal path later. They found that participants are reluctant to take the paths with the greater loss, although this might be counterproductive, as they could earn a bigger reward later (on that path). In our experiment, we did not ask this question directly, but we found a strong avoidance of the great loss of -75 [The hot stove effect <cit.> suggests that avoiding bad outcomes can create an information asymmetry between the available options. We randomized the starting points and the goals to make sure that participants had unbiased exposure to the environment.]. Also, in the re-planning trials, only a few participants (three out of eighty-seven across all 3 experiments) chose the path with the greatest punishment. We should clarify that in our experiments, participants received 100 points at the goal position regardless of their chosen path. Hence, avoiding large losses in our experiments cannot be compared directly with the pruning behavior in <cit.>. One way to differentiate between the cells with regular punishments and the cells with great losses is to use different discount parameters to weight future and immediate rewards differently, as suggested in <cit.>. Instead, in model 3, we used different rates (e.g. α_1_HS for the great loss of -75) to estimate the reward at these cells. One of the models in this category assigns one special rate, α_s1 (and possibly one special forgetting factor, α_s2), for estimating the reward at cell 11 with its loss of -75, while treating the other cells, with regular or non-regular losses, equally. This model, which captures participants' tendency to avoid large losses with only two rates, is inferior to our proposed model with three distinct rates for estimating the reward function.

There are a few participants who chose the shortest path. Because the rewards are probabilistic, this approach can be the worst strategy to apply: if path A1A2 is involved in catching the goal in a pair, it will be the shortest path but incurs the greatest loss. This is not consistent with the pattern observed in the majority of our participants' data. Our final simple heuristic-based model (which was more successful than the others) remembers the last punishment at each cell. Before experiencing the non-regular losses at the critical cells (11, 9, 17, 16 and 19 in experiments 2 and 3; 15, 16 and 21 in experiment 1), the model cannot prioritize the available paths optimally. But with one (loss) experience, it starts to select actions that lead to greater expected rewards. Remembering the last experience simplifies the reward estimation procedure, but it also promotes wrong predictions after experiencing a regular punishment at a critical cell. Thus, this model fails except in the simple case of experiment 1.

§.§ Related research

Unpacking different aspects of planning problems has been of interest to researchers in many fields. While economic theory provides a normative approach to these problems, results from psychological experiments showed that people use simple heuristics, which are not necessarily optimal for maximizing their payoffs in a multi-stage decision tree problem, <cit.>, <cit.>, <cit.>. In these experiments, the probability of transition at each choice node was also known to the subjects. They found that, except for a few people who planned ahead (e.g.
rational planners), the majority of participants were no-planners or planners with mixed heuristics, <cit.> and <cit.>. The latter group tried to simplify the problem by using less information or by mixing different strategies (i.e. a combination of local optimization at one level and random guessing at a different level), whereas the rational planners used computations similar to backward induction <cit.>. Yet some of the strategies mentioned above make predictions that are not consistent with the experimental data. For instance, one strong prediction of backward induction is dynamic consistency [Dynamic consistency assumes that decision makers follow the original plan that they made for their future choices. In <cit.>, it was found that people usually change their choices as they move forward in a decision tree (rather than following the original plan) and that this inconsistency increases when they deal with longer decision trees.], which has been questioned before, <cit.>, <cit.>. <cit.> tried to address these concerns by proposing a new model, the Decision Field Theory-Dynamic model (DFT-D), which was able to distinguish the aforementioned patterns in participants' data and also provided a dynamic account of the decision making process at decision nodes. DFT (along with other sequential sampling models) assumes that the subject accumulates noisy information favoring each choice alternative until the evidence favoring one of the alternatives meets a decision threshold, <cit.>, <cit.>. The DFT-D model is a cognitive-dynamical model that extends DFT to multistage decisions (in planning).

Using decision trees to study sequential choices has several drawbacks. First, when the number of decision nodes increases (in bigger decision trees), participants find it too difficult to consider all the potential future consequences (of each decision) and thus might start choosing at random or adopt simple heuristics <cit.>, <cit.>. It therefore remains unclear whether humans' sub-optimal performance in these tasks reflects their planning behavior or their poor understanding of the environment with which they are dealing. Second, the traditional design of decision trees forces the starting position to be fixed at the top of the tree; doing otherwise would reduce the planning problem to a single one-time decision. For instance, in a 2-step decision tree, the starting point cannot be placed in the second layer. There are a few studies that used 3-step decision trees; however, they fixed the starting position at the top of the decision tree and allowed participants to use a notepad to write comments and remarks <cit.>. To address these issues, many researchers have started to use a spatial framework, because planning and multi-step decision making are more natural for participants in a spatial setting than in a decision tree, with which they have little or no experience in real life. In our experiment, we minimized spatial reasoning by displaying the environment on the screen along with the participant's position, the goal and all the feasible paths between them. More importantly, unlike <cit.>, <cit.>, <cit.>, we did not manipulate viewpoints, locational uncertainty or external cues.
Therefore, only decision making and learning theories were directly relevant to our problem.

§.§ Conclusion

There are many theoretical studies in reinforcement learning on how to train an agent in an unknown, complicated environment, focusing on reducing the computational costs and optimizing the search algorithms <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>. Few of them, however, have been tested against behavioral data in sequential-choice tasks, <cit.>. In this study, we simplified a real-life situation (navigation between two places) in our grid world experiment and evaluated the predictions of reinforcement learning theories against choice data. Our design integrated experience-based decision-making into a classical decision tree problem. We showed that people are capable of revising their plans when an unexpected event occurs and that optimal re-planning requires learning the model of the environment (as in <cit.>).

§ REFERENCES
http://arxiv.org/abs/1709.09761v1
{ "authors": [ "Pegah Fakhari", "Arash Khodadadi", "Jerome Busemeyer" ], "categories": [ "stat.ML", "q-bio.NC" ], "primary_category": "stat.ML", "published": "20170927232206", "title": "The detour problem in a stochastic environment: Tolman revisited" }
Author to whom correspondence should be addressed: [email protected]

College of Mesoscopic and Low Dimensional Physics, Department of Physics, Sichuan University, Chengdu, Sichuan, 610064, China

We propose a scheme to manipulate the electron-hole excitation in the voltage pulse electron source, which can be realized by a voltage-driven Ohmic contact connected to a quantum Hall edge channel. It is known that the electron-hole excitation can be suppressed via Lorentzian pulses, leading to a noiseless electron current. We show that, instead of the Lorentzian pulses, when driven via the voltage pulse V(t) = 2 ħ/e √( √(3)/π k_B T_h ) arctanh( (2t - t_0)/t_0 ) with duration t_0, the electron-hole excitation can be tuned so that the corresponding energy distribution of the emitted electrons follows the Fermi distribution with temperature T_D = √( T^2_S + T^2_h ), with T_S being the electron temperature in the Ohmic contact. Such a Fermi distribution can be established without introducing an additional energy relaxation mechanism and can be detected via the shot noise thermometry technique, making it helpful in the study of thermal transport and decoherence in mesoscopic systems.

73.23.-b, 72.10.-d, 73.21.La, 85.35.Gv

On-demand electron source with tunable energy distribution
Y. Yin
December 30, 2023
==========================================================

§ INTRODUCTION

The on-demand coherent injection of single or few electrons in solid-state circuits is an important task in electron quantum optics, which focuses on the manipulation of electrons in optics-like setups.<cit.> The injection can be implemented simply by a voltage-pulse-driven Ohmic contact connected to a quantum Hall edge channel, which is usually referred to as the voltage pulse electron source.<cit.> The Ohmic contact serves as an electron reservoir, while the quantum Hall edge channel serves as an electron waveguide. Driven by the voltage pulse applied to the Ohmic contact, electrons incoming from the reservoir can be injected onto the Fermi sea of the edge channel, leading to single-electron quasi-particle excitations propagating along the waveguide. However, additional electron-hole excitations can usually be created during the injection,<cit.> inducing charge current noise.

As far as the charge transport is concerned, it is desirable to suppress the electron-hole excitation. In a series of seminal works, Levitov et al. have proposed that, driven by Lorentzian pulses with integer Faraday flux, an integer number of electrons can be injected, while the accompanying electron-hole excitation is suppressed, leading to a noiseless current flow.<cit.> This has been realized and extensively studied in the experiments reported by the group of D. C. Glattli.<cit.> Later, Gabelli et al. further showed that a similar suppression can be realized by using bi-harmonic voltage pulses.<cit.> Besides, the existence of the electron-hole excitation can also be helpful in certain situations. Moskalets has demonstrated that, driven by a Lorentzian pulse with a half-integer flux, a zero-energy excitation with half-integer charge can be created in the driven Fermi sea at zero temperature, which cannot exist without the presence of electron-hole excitation.<cit.> These works demonstrate the possibility of manipulating the electron-hole excitation by engineering the temporal profile of the voltage pulse.

In contrast, if the thermal transport is concerned,<cit.> the electron-hole excitations are favorable, since they carry a finite amount of energy while not affecting the average charge transport.
In fact, a fully thermalized state at finite temperature can be regarded as a mixed state of certain electron-hole excitations, where the energy distribution follows the Fermi distribution while the quantum coherence totally vanishes. If the electron-hole excitation can be manipulated by engineering the voltage pulse, it is then natural to ask: is it possible to tune the electron-hole excitation so that the corresponding state has exactly the same energy distribution as the fully thermalized state, while the quantum coherence is still preserved? Such a state would be helpful in the study of thermal transport and quantum decoherence processes in mesoscopic systems.

In this paper, we propose a scheme to create such a state by using the voltage pulse electron source. We find that, instead of Lorentzian pulses, driving with the voltage pulse of duration t_0, i.e., t ∈ [0, t_0], which has the temporal profile

V(t) = 2 ħ/e √( √(3)/π k_B T_h ) arctanh( (2t - t_0)/t_0 ),

with k_B being the Boltzmann constant, electrons incoming from the reservoir at temperature T_S and chemical potential μ can be excited so that the corresponding outgoing electrons have the time-averaged energy distribution

f_D(ω) = 1/( 1 + exp[ (ħω - μ)/(k_B T_D) ] ),

which is exactly a Fermi distribution at temperature

T_D = √( T^2_S + T^2_h ).

Note that such a Fermi distribution is obtained solely by coherent excitation via voltage pulses; no additional energy relaxation mechanism is needed. Hence the quantum coherence of the state is still preserved. Experimentally, it is possible to detect the energy distribution of the state via a quantum-dot-based energy filter.<cit.> We further show that such a state can also be detected in situ at the voltage pulse electron source via the shot noise thermometry technique,<cit.> making it convenient for further experimental studies.

The paper is organized as follows. In Sec. <ref>, we present the model of the voltage pulse electron source and introduce the time-averaged energy distribution for the electrons. In Sec. <ref>, we demonstrate how to manipulate the temporal profile of the pulse so that the time-averaged energy distribution of the emitted electrons follows the desired Fermi distribution. In Sec. <ref>, we discuss the detection of the state via the shot noise thermometry technique. We summarize in Sec. <ref>.

§ MODEL AND FORMALISM

The voltage pulse electron source can be modeled as a one-dimensional quantum wire connecting two reservoirs A and B,<cit.> as illustrated in Fig. <ref>. The system is biased with a time-dependent voltage pulse V(t) of duration t_0. The voltage drop is assumed to occur across a short interval at the center of the wire. If the voltage drop is spatially slowly varying on the scale of the Fermi wavelength 1/k_F, the electrons in such a system can be well approximated as a dispersionless Fermi system with the corresponding single-particle Hamiltonian

H = - i ħ ( ± v_F ) ∂_x + e V(t) θ(-x),

with e being the electron charge and v_F the Fermi velocity. The sign ± corresponds to right- and left-going electrons, respectively. Without loss of generality, we focus on the right-going electrons. The field operator of the electron ψ̂(x,t) can be expressed as

ψ̂(x,t) = { â(t - x/v_F) e^-i ϕ(t),  x<0;   b̂(t - x/v_F),  x>0 },

with ϕ(t) = e/ħ ∫^t dτ V(τ) describing the effect of the voltage pulse V(t). The Fermi operators â(t) and b̂(t) correspond to the incoming and outgoing electron modes, respectively, as also indicated in Fig. <ref>.
They are related via the forward scattering phase<cit.>

b̂(t) = â(t) e^-i ϕ(t),

which can be obtained by requiring that the field operator ψ̂(x,t) be continuous at the boundary x=0.

Now we turn to the energy distribution functions of the incoming and outgoing electrons. Following previous works, the non-interacting electrons can be characterized by their one-body density matrix, which has the form of the first-order Glauber correlation function in the time domain.<cit.> The correlation functions for the incoming and outgoing electrons can be constructed as

G_a(t, t') = < â^†(t) â(t') >,
G_b(t, t') = < b̂^†(t) b̂(t') >,

respectively, where < ... > represents the thermal expectation over the reservoir degrees of freedom.

For the incoming electrons, which can be modeled as a stationary wave packet emitted from reservoir A at thermal equilibrium, the correlation function satisfies translation invariance in the time domain, and hence only depends on the time difference τ = t - t'. The energy distribution function can be obtained through a Fourier transformation, which has the form

f_a(ω) = ∫ dτ e^i ωτ G_a(t, t-τ).

For reservoir A at temperature T_S and chemical potential μ, the distribution f_a(ω) follows the Fermi distribution

f_S(ω) = 1/( 1 + exp[ (ħω - μ)/(k_B T_S) ] ).

In contrast, driven by the time-dependent potential V(t), the outgoing electrons are described by a non-stationary wave packet and the translation invariance of the correlation function is broken. In this situation, one can introduce the time-averaged energy distribution function over the time interval t_0, which can be written as<cit.>

f_V(ω) = 1/t_0 ∫^t_0_0 ∫^t_0_0 dt dt' e^i ω (t-t') G_b(t,t'),

and which can be measured experimentally by using a quantum dot as an adjustable energy filter.<cit.> By using the scattering phase given in Eq. (<ref>), the time-averaged distribution function of the outgoing electrons can be related to that of the incoming ones via

f_V(ω) = ∫ dω'/2π f_S(ω') Π_V(ω - ω'),
Π_V(ω) = | 1/√(t_0) ∫^t_0_0 dt e^iω t e^-i ϕ(t) |^2.

§ TUNING DISTRIBUTION VIA VOLTAGE PULSE

If the distribution f_V(ω) given in Eq. (<ref>) follows the Fermi distribution f_D(ω) given in Eq. (<ref>), then the outgoing electron state will have the same energy distribution as a fully thermalized state at the temperature T_D. To achieve this, one requires that the integral kernel Π_V(ω) in Eqs. (<ref>) equal (see Appendix <ref> for the derivation)

Π_h(ω) = ∫ dt e^i ω t Π̄_h(t),

with

Π̄_h(t) = (T_D/T_S) sinh(π k_B T_S t/ħ) / sinh(π k_B T_D t/ħ).

The typical profiles of Π̄_h(t) at sub-kelvin temperatures are illustrated in Fig. <ref>.

To fulfill this requirement, one needs to find a proper V(t) such that the power spectral density of e^-i ϕ(t) follows Π_h(ω), i.e.,

Π_h(ω) = Π_V(ω) = | 1/√(t_0) ∫^t_0_0 dt e^i ω t e^-i ϕ(t) |^2,

with ϕ(t) given below Eq. (<ref>). It should be noted that, since Π_V(ω) is non-negative according to Eq. (<ref>), equation (<ref>) cannot be satisfied for T_D < T_S, since Π_h(ω) can be negative in this case.

Finding a V(t) satisfying the requirement Eq. (<ref>) is equivalent to the problem of phase control in pulse shaping, which has been extensively studied in the field of ultrafast optics.<cit.> The basic idea is to attack the problem by working out the Fourier transformation in Eq. (<ref>) within the stationary phase approximation. Here we only outline the procedure, leaving the technical details to Appendix <ref>.
Within the stationary phase approximation, Eq. (<ref>) can be satisfied by requiring the voltage pulse V(t) to follow the relation

t/t_0 = ∫^eV(t)/ħ_-∞ dω/2π Π_h(ω),

with t ∈ [0, t_0]. An analytical solution of Eq. (<ref>) can be obtained by approximating Π_h(ω) via the Gaussian ansatz [see Appendix <ref> for details]

Π_h(ω) ≈ ( √(2π)/σ_h ) e^-1/2 ( ω/σ_h )^2,

where σ_h = π k_B T_h/√(3) with T_h = √( T^2_D - T^2_S ). Substituting Eq. (<ref>) into Eq. (<ref>), one has

V(t) = √(2) σ_h erf^-1( (2t - t_0)/t_0 ).

By further applying the approximation erf^-1(x) ≈ ( √(6)/π ) arctanh(x), the above equation reduces to

V(t) = 2 ħ/e √( √(3)/π k_B T_h ) arctanh( (2t - t_0)/t_0 ),

which is just Eq. (<ref>) given in Sec. <ref>.

Note that there is some freedom in the choice of the pulse duration t_0. In principle, t_0 should be large enough that the stationary phase approximation holds. In realistic calculations, we find that it is adequate to choose t_0 to be twice the full width at half maximum (FWHM) of the profile Π̄_h(t). According to Fig. <ref>, the typical value of t_0 is of the order of 1 ns at sub-kelvin temperatures, corresponding to frequencies of the order of 1 GHz. One may also worry about the divergence of the function arctanh(τ) at the boundaries τ = ±1 in Eq. (<ref>). This can be fixed by setting a limit on the amplitude of V(t), which has a minor influence as long as the overall profile of V(t) does not change too much.

Despite the approximations used in the above derivation, the time-averaged energy distribution of the outgoing electrons produced by the voltage pulse V(t) can follow the desired Fermi distribution quite well, as we demonstrate in Fig. <ref>. In the main panel of the figure, the blue dotted curve represents the energy distribution f_S(ω) of the incoming electrons from reservoir A, which is the Fermi distribution at the temperature T_S = 10 mK and chemical potential μ = 0. The red solid curve represents the time-averaged distribution f_V(ω) of the outgoing electrons, which is calculated by numerically integrating Eqs. (<ref>) with the voltage pulse V(t) [Eq. (<ref>)] for the parameter T_h = 99.5 mK. One can see that it follows the Fermi distribution at the temperature T_D = 100 mK (green dashed curve), which agrees with the analytical expression given in Eq. (<ref>).

The temporal profile of the applied voltage pulse V(t) is shown in the inset (a) of Fig. <ref>. The pulse duration t_0 is chosen to be 2.3 ns, which is just equal to twice the FWHM of the corresponding Π̄_h(t) [red solid curve in Fig. <ref>]. The amplitude limit of V(t) is set to 60 μV, which has little impact on the overall profile of the pulse. In the inset (b), we also compare the integral kernel Π_V(ω) [calculated from V(t) via Eqs. (<ref>)] to the integral kernel Π_h(ω) given in Eq. (<ref>). One can see that, despite some small ripples, the overall profiles of the two kernels agree with each other, which justifies the approximations used in the derivation.

By further increasing the pulse duration t_0, the ripples in the integral kernel can be suppressed, as can be seen from Fig. <ref>. Compared to Fig. <ref>, we have increased t_0 to 11.5 ns, while keeping the other parameters fixed. The suppression of the ripples in the integral kernel can be clearly seen in the inset (b) of Fig. <ref>. As a consequence, the energy distribution of the outgoing electrons (red solid curve) agrees quite well with the Fermi distribution at T_D = 100 mK (green dashed curve), so that they are almost indistinguishable from each other in the main panel of the figure.
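For readers who wish to reproduce such curves, the following is a minimal numerical sketch of Eqs. (<ref>) in dimensionless units (ħ = k_B = e = 1, energies in units of k_B T_h); the grids, the clipping level and the use of the pre-approximation erf^-1 pulse profile are placeholder choices of our own, not the parameters of the figures above.

import numpy as np
from scipy.special import erfinv

# Dimensionless units: hbar = k_B = e = 1; energies in units of k_B*T_h.
T_S, T_h = 0.1, 1.0
t0 = 30.0                                   # pulse duration (placeholder)
sigma_h = np.pi * T_h / np.sqrt(3.0)        # width of the Gaussian ansatz

t = np.linspace(0.0, t0, 8001)[1:-1]        # avoid the divergent end points
V = np.sqrt(2.0) * sigma_h * erfinv(2.0 * t / t0 - 1.0)
V = np.clip(V, -6.0, 6.0)                   # amplitude limit (placeholder)
phi = np.concatenate(([0.0], np.cumsum(0.5 * (V[1:] + V[:-1]) * np.diff(t))))

w = np.linspace(-12.0, 12.0, 481)           # energy grid (hbar*omega)
# Pi_V(w) = |(1/sqrt(t0)) * int_0^t0 dt exp(i w t) exp(-i phi(t))|^2
Pi_V = np.array([abs(np.trapz(np.exp(1j * wi * t - 1j * phi), t)) ** 2 / t0
                 for wi in w])

# f_V(w) = int dw'/(2 pi) f_S(w') Pi_V(w - w'), as a discrete convolution
f_S = 1.0 / (1.0 + np.exp(w / T_S))
dw = w[1] - w[0]
f_V = np.convolve(f_S, Pi_V, mode="same") * dw / (2.0 * np.pi)

# Target: Fermi distribution at T_D = sqrt(T_S^2 + T_h^2); the edge bins of
# the convolution are inaccurate and should be discarded when comparing.
T_D = np.hypot(T_S, T_h)
f_D = 1.0 / (1.0 + np.exp(w / T_D))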
Hence, one can see that, by applying the voltage pulse V(t) with the temporal profile given in Eq. (<ref>), the energy distribution of the outgoing electrons can be tuned to follow the Fermi distribution of Eq. (<ref>) quite well. According to Eq. (<ref>), achieving such tuning for a higher temperature T_D requires a voltage pulse with a larger amplitude. In this case, the amplitude limit of the pulse can have a more pronounced impact on the result. Such impact is demonstrated in Fig. <ref>. All the parameters are chosen the same as in Fig. <ref>, except the temperature T_D, which is increased to 300 mK in this case. From inset (a), one can see that the pulse profile is modified by the amplitude limit, leading to small plateaus close to the edges. These plateaus induce ripples in the corresponding integral kernel, which is shown in inset (b). Such ripples can also be identified in the energy distribution of the outgoing electrons, which is shown as the red solid curve in the main panel of the figure.Note that when the profile of the pulse has been changed by the amplitude limitation, the induced ripples cannot be suppressed by increasing the pulse duration t_0, which we demonstrate in Fig. <ref>. In this case, all the parameters are chosen the same as in Fig. <ref>, except the pulse duration t_0, which is increased to 11.5 ns. One can still identify large ripples in the integral kernel [inset (b)]. These ripples can lead to pronounced distortion of the energy distribution of the outgoing electrons far from the Fermi level, which can be seen in the main panel of Fig. <ref>.It should be noted that in realistic situations, the voltage pulse can also lead to Joule heating.<cit.> To make the manipulation observable, the amplitude of the voltage pulse should be kept low enough that the Joule heating is not significant. It has been reported that in typical quantum point contacts, voltage pulses with a root-mean-square amplitude of 60 μV can heat the electrons to about 50 mK,<cit.> which is smaller than the effect we show here. We therefore expect that such manipulation can be detected experimentally in a realistic system. § SHOT NOISE THERMOMETERExperimentally, the time-averaged energy distribution of the outgoing electrons can be detected via an adjustable energy filter, which can be realized by using a quantum dot fabricated at a certain distance from the voltage source.<cit.> However, as the profile of the energy distribution can change during propagation due to various energy relaxation mechanisms, such as the coupling to electrons in counter-propagating edge channels,<cit.> it is more favorable to detect the distribution in situ at the voltage pulse source. This can be done by using the shot noise thermometry technique,<cit.> which we discuss in this section. In the standard shot noise thermometry technique, the electron temperature is extracted from the DC-bias-voltage dependence of the current noise across a tunneling barrier.<cit.> This can be implemented in the voltage pulse electron source by introducing a static potential in a short interval at the center of the wire, as illustrated in Fig. <ref>.
Note that the static potential is assumed to be rapidly varying compared to the Fermi wavelength 1/k_F, so that it can lead to backscattering between left- and right-going electrons in the system.Following the scattering formalism, the scattering between the left- and right-going electrons can be described by the time-dependent scattering matrix<cit.>(b̂_1(t), â_2(t))^T = S(t) (â_1(t), b̂_2(t))^T,with the 2×2 matrixS(t) = [ √(D_0) e^-i ϕ(t), -i √(1-D_0); -i √(1-D_0), √(D_0) e^i ϕ(t) ],where D_0 is the time-independent transmission coefficient due to the static potential. The Fermi operators â_1(2) and b̂_2(1) represent the incoming (outgoing) electron modes from (towards) reservoirs A and B, respectively. The phase factor ϕ(t) is given below Eq. (<ref>). For simplicity, we assume both reservoir A and reservoir B are kept at the same temperature T_S and chemical potential μ.The time-averaged current noise detected at reservoir B can be written asS_B = (1/t_0) ∫^t_0_0 dt ∫^t_0_0 dt' [ < ĵ_B(t) ĵ_B(t') > - < ĵ_B(t) > < ĵ_B(t') > ],where the current operator ĵ_B(t) has the formĵ_B(t) = b̂^†_1(t) b̂_1(t) - b̂^†_2(t) b̂_2(t).By using the time-dependent scattering matrix Eq. (<ref>), the current noise can be written asS_B = S_e + S_n,with S_e and S_n being the equilibrium and non-equilibrium contributions, respectively. The equilibrium termS_e = (e^2/2πħ) D^2_0 ∫ (dω/2π) { [ 1 - f_S(ω) ] f_S(ω) + f_S(ω) [ 1 - f_S(ω) ] }is just the Nyquist-Johnson noise, which is independent of the applied DC bias voltage. The non-equilibrium term, i.e., the shot noise term, can be written asS_n = (e^2/2πħ) D_0(1-D_0) ∫ (dω/2π) { [ 1 - f_S(ω) ] f_V(ω) + f_S(ω) [ 1 - f_V(ω) ] },with f_V(ω) given in Eqs. (<ref>).One can see immediately that, if f_V(ω) is tuned to the Fermi distribution f_D(ω) given in Eq. (<ref>), then Eq. (<ref>) should be equal toS̅_n = (e^2/2πħ) D_0(1-D_0) ∫ (dω/2π) { [ 1 - f_S(ω) ] f_D(ω) + f_S(ω) [ 1 - f_D(ω) ] },which is just the shot noise between two reservoirs at different temperatures T_S and T_D. Hence, to check whether the manipulation of the energy distribution has been properly achieved, one simply compares the DC-bias dependence of S_n from Eq. (<ref>) to the predicted one, S̅_n, from Eq. (<ref>).As an example, we perform such a comparison for the case with T_S=10 mK and T_D=100 mK in Fig. <ref>. The shot noise is normalized to the zero-bias noise S_0 = e^2 D_0 (1-D_0) k_B T_S/(2 π^2 ħ) and plotted as a function of the normalized voltage x = eV_0/(k_B T_S). The predicted S̅_n from Eq. (<ref>) is plotted as the blue dotted curve. The green squares represent the shot noise obtained by applying the voltage pulse V(t) with t_0=2.3 ns, while the red dots correspond to the case with t_0=11.5 ns. The corresponding energy distributions of the outgoing electrons can be found in Fig. <ref> [t_0=2.3 ns] and Fig. <ref> [t_0=11.5 ns], respectively. One can see that, as t_0 increases, the profile of the distribution of outgoing electrons approaches the Fermi distribution f_D(ω) [Fig. <ref> and Fig. <ref>], while the shot noise approaches the predicted shot noise S̅_n, indicating that the shot noise can serve as a hallmark of the profile of the energy distribution of the outgoing electrons. Note that without the voltage pulse V(t), the normalized shot noise follows the well-known universal function x/tanh(x/2), which is shown as the orange dashed curves in the figure.
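To illustrate the expected DC-bias dependence, here is a minimal sketch in dimensionless units k_B T_S = 1 (our own illustration; it assumes the DC bias simply shifts the source distribution, and the temperatures and grids are illustrative). It evaluates the integral appearing in S̅_n and checks the equal-temperature limit against the universal curve x/tanh(x/2):

```python
import numpy as np

def fermi(E, T):
    # Fermi function; energies and temperatures in units of k_B*T_S, mu = 0
    return 1.0 / (1.0 + np.exp(np.clip(E / T, -700.0, 700.0)))

def noise_integral(x, T_D, T_S=1.0, span=400.0, n=40001):
    """Dimensionless integrand of S_n-bar: the DC bias x = eV_0/(k_B T_S) is
    assumed to shift the source distribution relative to the hot outgoing one."""
    E = np.linspace(-span, span, n)
    fS, fD = fermi(E - x, T_S), fermi(E, T_D)
    return np.trapz(fS * (1 - fD) + fD * (1 - fS), E)

S0 = noise_integral(0.0, T_D=1.0) / 2.0       # zero-bias normalization (T_D = T_S)
xs = np.linspace(-30.0, 30.0, 61)

S_eq = np.array([noise_integral(x, T_D=1.0) for x in xs]) / S0
ref = np.full_like(xs, 2.0)                   # limit of x/tanh(x/2) at x = 0
nz = xs != 0
ref[nz] = xs[nz] / np.tanh(xs[nz] / 2.0)
print("max deviation from x/tanh(x/2):", np.abs(S_eq - ref).max())

# Heated outgoing distribution, e.g. T_S = 10 mK -> T_D = 100 mK: curve flattens near x = 0
S_hot = np.array([noise_integral(x, T_D=10.0) for x in xs]) / S0
print("zero-bias enhancement S(T_D = 10 T_S)/S(T_D = T_S):", S_hot[30] / S_eq[30])
```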
§ SUMMARYIn summary, we have proposed a scheme to manipulate the electron-hole excitations in the voltage pulse electron source. By using the stationary phase approximation, we derived a simple analytical expression for the voltage pulse V(t) [Eq. (<ref>)], which can tune the electron-hole excitations so that the energy distribution of the electrons emitted from the source follows a desired Fermi distribution at a higher temperature. Such a distribution can be established without introducing additional energy relaxation mechanisms. We also showed that such a distribution can be detected in situ at the voltage pulse electron source via the shot noise thermometry technique, making the scheme helpful in the study of thermal transport and decoherence in mesoscopic systems.The author would like to thank Professor J. Gao for bringing the problem to the author's attention. This work was supported by the Key Program of the National Natural Science Foundation of China under Grant No. 11234009, the National Key Basic Research Program of China under Grant No. 2016YFF0200403, and the Young Scientists Fund of the National Natural Science Foundation of China under Grant No. 11504248. § DERIVATION OF EQS. (<REF>), (<REF>) AND (<REF>)In this appendix, we derive the integral kernel Π_h(ω) given in Eqs. (<ref>) and (<ref>). It satisfies the conditionf_D(ω) = ∫ (dω'/2π) f_S(ω') Π_h(ω - ω'),with f_D(ω) and f_S(ω) being the Fermi distributions given in Eq. (<ref>) and Eq. (<ref>), respectively.This equation can be solved via Fourier transformation. Taking the derivative with respect to ω on both sides of Eq. (<ref>), one has(β_D/4) · 1/cosh^2[β_D(μ - ħω)/2] = ∫ (dω'/2π) Π_h(ω - ω') × (β_S/4) · 1/cosh^2[β_S(μ - ħω')/2],with β_S(D) = 1/[k_B T_S(D)]. Performing the Fourier transformation on both sides of Eq. (<ref>) and using the identity ∫ dx e^{i x y}/cosh^2(x) = π y/sinh(π y/2),one obtainst/[2β_D sinh(π t/β_D)] = Π̅_h(t) · t/[2β_S sinh(π t/β_S)],withΠ_h(ω) = ∫ dt e^{i ω t} Π̅_h(t),which gives the Π_h(ω) of Eqs. (<ref>) and (<ref>).One can also apply a Gaussian approximation for the derivative of the distribution function. In this case, one assumesf_S(D)(E) = 1 - ∫^E_0 dE' g_S(D)(E'),withg_S(D)(E) = [1/(√(2π) σ_S(D))] e^{-(1/2)[(E-μ)/σ_S(D)]^2},where the width σ_S(D) is chosen so that the average energy of the system satisfies∫^{+∞}_0 f_S(D)(E') (E' - μ) dE' = (π k_B T_S(D))^2/6,which gives σ_S(D) = π k_B T_S(D)/√3. By combining the Gaussian approximation with Eq. (<ref>), the integral kernel can be approximated asΠ_h(ω) ≈ (√(2π)/σ_h) e^{-(1/2)(ω/σ_h)^2},with σ_h = π k_B T_h/√3 and T_h = √(T_D^2 - T_S^2), which is just Eq. (<ref>).
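As a quick numerical sanity check of this appendix (a sketch, with ħ = k_B = 1 and illustrative temperatures), one can Fourier transform Π̅_h(t) directly and compare the result to the Gaussian ansatz with σ_h = π k_B T_h/√3:

```python
import numpy as np

T_S, T_D = 1.0, 10.0                       # units: hbar = kB = 1
T_h = np.sqrt(T_D**2 - T_S**2)
sigma_h = np.pi * T_h / np.sqrt(3.0)

t = np.linspace(-2.0, 2.0, 20001)         # Pi_h-bar decays on the scale 1/(pi T_D)
with np.errstate(invalid="ignore"):
    Pbar = (T_D / T_S) * np.sinh(np.pi * T_S * t) / np.sinh(np.pi * T_D * t)
Pbar[t == 0] = 1.0                         # limit t -> 0

w = np.linspace(-5 * sigma_h, 5 * sigma_h, 401)
Pi_h = np.array([np.trapz(np.exp(1j * wi * t) * Pbar, t).real for wi in w])
gauss = np.sqrt(2 * np.pi) / sigma_h * np.exp(-0.5 * (w / sigma_h) ** 2)

# Both kernels integrate to 2*pi (so they act as the identity on constants)
print("norms:", np.trapz(Pi_h, w) / (2 * np.pi), np.trapz(gauss, w) / (2 * np.pi))
print("relative peak mismatch:",
      abs(Pi_h[w.size // 2] - gauss[w.size // 2]) / gauss[w.size // 2])
```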
§ PHASE CONTROL OF PULSE SHAPINGIn this appendix, we introduce the phase control of pulse shaping following Cook,<cit.> which we used in the derivation of Eq. (<ref>) from Eq. (<ref>).The problem of phase control of pulse shaping is to find a signals(t) = a(t) exp[ i θ(t) ],whose power spectral density follows a given function U(ω), i.e.,U(ω) exp[ i Φ(ω) ] = ∫^{+∞}_{-∞} dt a(t) exp[ i θ(t) - i ω t],with Φ(ω) being an arbitrary function. Here a(t), θ(t), Φ(ω) and U(ω) are all real.To attack this problem, one first evaluates the Fourier transformation Eq. (<ref>) by using the stationary phase approximation. For a given ω, this can be done by performing a Taylor expansion of the phase factor around t=T_ω asω t - θ(t) = [ ω T_ω - θ(T_ω) ] + [ω - θ'(T_ω) ] (t - T_ω) - (1/2) θ''(T_ω) (t-T_ω)^2 + ...,where T_ω is obtained from the stationary phase conditionω = θ'(T_ω).Here θ'(t) and θ''(t) represent the first- and second-order derivatives of the function θ(t) with respect to time t, respectively.Substituting Eqs. (<ref>) and (<ref>) into Eq. (<ref>), one obtainsU(ω) exp[ i Φ(ω) ] ≈ √(2π/θ''(T_ω)) a(T_ω) × exp[ -i ω T_ω + i θ(T_ω) + i π/4 ],from which one hasa^2(T_ω) = U^2(ω) θ''(T_ω) /2 π.By combining Eq. (<ref>) with the stationary phase condition Eq. (<ref>), one obtains a differential equation from which θ(t) can be solved if the functions a(t) and U(ω) are known. To make this clear, let x=ω and y=T_ω; Eqs. (<ref>) and (<ref>) then take the formx = dθ(y)/dy, a^2(y) = [U^2(x)/2π] d^2θ(y)/dy^2.Substituting Eq. (<ref>) into Eq. (<ref>), one can eliminate θ(y), which gives the differential equationa^2(y) dy = [U^2(x)/2π] dx,from which the function y(x) [or equivalently, the function x(y)] can be solved. The phase θ(y) can then be obtained from Eq. (<ref>).To apply the above procedure to solve Eq. (<ref>), one choosesa(y) = { 1/√(t_0), y ∈ [0, t_0]; 0, otherwise },and U^2(x) = Π_h(x). Integrating both sides of Eq. (<ref>), one has∫^y_{-∞} a^2(y') dy' = ∫^x_{-∞} (dx'/2π) Π_h(x').This equation reduces to Eq. (<ref>) for y ∈ [0, t_0] by performing the substitution y → t and x → (e/ħ)V(t) = dϕ(t)/dt.
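The procedure above can also be carried out fully numerically. The following sketch (our own illustration, not code from the cited reference; the Gaussian target spectrum and the grids are assumptions) inverts the cumulative form of Eq. (<ref>) by interpolation, reconstructs θ(y), and checks the achieved power spectrum against the target:

```python
import numpy as np

def shape_phase(U2, xgrid, t0, ny=4001):
    """Stationary-phase pulse shaping: given a target power spectrum U^2(x) and a
    flat window a^2(y) = 1/t0 on [0, t0], return the phase theta(y) such that
    |int dy a e^{i theta - i x y}|^2 approximates U^2(x)."""
    # Cumulative form of a^2(y) dy = U^2(x) dx / (2 pi):  y(x) = t0 * CDF(x)
    cdf = np.cumsum(U2) * (xgrid[1] - xgrid[0]) / (2 * np.pi)
    cdf = cdf / cdf[-1]
    y = np.linspace(0, t0, ny)
    x_of_y = np.interp(y / t0, cdf, xgrid)        # invert y(x) by interpolation
    theta = np.concatenate(([0.0],
            np.cumsum(0.5 * (x_of_y[1:] + x_of_y[:-1]) * np.diff(y))))
    return y, theta                                # x_of_y = d theta / dy

# Example: Gaussian target spectrum (the Pi_h ansatz of the previous appendix)
sigma, t0 = 2 * np.pi * 5e9, 2.3e-9               # rad/s, s; sigma*t0 >> 1 needed
x = np.linspace(-6 * sigma, 6 * sigma, 8001)
U2 = np.sqrt(2 * np.pi) / sigma * np.exp(-0.5 * (x / sigma) ** 2)
y, theta = shape_phase(U2, x, t0)

# Check the achieved spectrum against the target at the peak
w = np.linspace(-4 * sigma, 4 * sigma, 301)
amp = np.array([np.trapz(np.exp(1j * (theta - wi * y)), y) for wi in w]) / np.sqrt(t0)
print("peak ratio achieved/target:", np.abs(amp[150]) ** 2 / U2[4000])
```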
§ REFERENCES
[1] W. D. Oliver, J. Kim, R. C. Liu, and Y. Yamamoto, Science 284, 299 (1999).
[2] M. Henny, S. Oberholzer, C. Strunk, T. Heinzel, K. Ensslin, M. Holland, and C. Schönenberger, Science 284, 296 (1999).
[3] A. Bertoni, P. Bordone, R. Brunetti, C. Jacoboni, and S. Reggiani, Phys. Rev. Lett. 84, 5912 (2000).
[4] R. Ionicioiu, G. Amaratunga, and F. Udrea, Int. J. Mod. Phys. B 15, 125 (2001).
[5] Y. Ji, Y. Chung, D. Sprinzak, M. Heiblum, D. Mahalu, and H. Shtrikman, Nature 422, 415 (2003).
[6] E. Bocquillon, V. Freulon, F. D. Parmentier, J.-M. Berroir, B. Plaçais, C. Wahl, J. Rech, T. Jonckheere, T. Martin, C. Grenier, D. Ferraro, P. Degiovanni, and G. Fève, Ann. Phys. 526, 1 (2014).
[7] D. C. Glattli and P. S. Roulleau, Phys. Status Solidi B 254 (2017).
[8] M. Vanević, J. Gabelli, W. Belzig, and B. Reulet, Phys. Rev. B 93, 041416 (2016).
[9] H. Lee and L. Levitov, cond-mat/9312013 (1993).
[10] H. Lee and L. Levitov, cond-mat/9507011 (1995).
[11] J. Keeling, I. Klich, and L. Levitov, Phys. Rev. Lett. 97, 116403 (2006).
[12] J. Dubois, T. Jullien, F. Portier, P. Roche, A. Cavanna, Y. Jin, W. Wegscheider, P. Roulleau, and D. Glattli, Nature 502, 659 (2013).
[13] T. Jullien, P. Roulleau, B. Roche, A. Cavanna, Y. Jin, and D. Glattli, Nature 514, 603 (2014).
[14] J. Gabelli, K. Thibault, G. Gasse, C. Lupien, and B. Reulet, Phys. Status Solidi B 254 (2017).
[15] M. Moskalets, Phys. Rev. Lett. 117, 046801 (2016).
[16] K. Schwab, E. A. Henriksen, J. M. Worlock, and M. L. Roukes, Nature 404, 974 (2000).
[17] O. Chiatti, J. T. Nicholls, Y. Y. Proskuryakov, N. Lumpkin, I. Farrer, and D. A. Ritchie, Phys. Rev. Lett. 97, 056601 (2006).
[18] Y.-F. Chen, T. Dirks, G. Al-Zoubi, N. O. Birge, and N. Mason, Phys. Rev. Lett. 102, 036804 (2009).
[19] G. Granger, J. Eisenstein, and J. Reno, Phys. Rev. Lett. 102, 086803 (2009).
[20] C. Altimiras, H. Le Sueur, U. Gennser, A. Cavanna, D. Mailly, and F. Pierre, Nat. Phys. 6, 34 (2010).
[21] H. Le Sueur, C. Altimiras, U. Gennser, A. Cavanna, D. Mailly, and F. Pierre, Phys. Rev. Lett. 105, 056803 (2010).
[22] V. Venkatachalam, S. Hart, L. Pfeiffer, K. West, and A. Yacoby, Nat. Phys. 8 (2012).
[23] S. Jezouin, F. D. Parmentier, A. Anthore, U. Gennser, A. Cavanna, Y. Jin, and F. Pierre, Science 342, 601 (2013).
[24] L. Spietz, K. Lehnert, I. Siddiqi, and R. Schoelkopf, Science 300, 1929 (2003).
[25] Y. M. Blanter and M. Büttiker, Phys. Rep. 336, 1 (2000).
[26] D. Ivanov, H. Lee, and L. Levitov, Phys. Rev. B 56, 6839 (1997).
[27] J. Keeling, A. Shytov, and L. Levitov, Phys. Rev. Lett. 101, 196404 (2008).
[28] C. Grenier, R. Hervé, E. Bocquillon, F. D. Parmentier, B. Plaçais, J.-M. Berroir, G. Fève, and P. Degiovanni, New J. Phys. 13, 093007 (2011).
[29] G. Haack, M. Moskalets, J. Splettstoesser, and M. Büttiker, Phys. Rev. B 84, 081303 (2011).
[30] G. Haack, M. Moskalets, and M. Büttiker, Phys. Rev. B 87, 201302 (2013).
[31] D. Kovrizhin and J. Chalker, Phys. Rev. B 84, 085105 (2011).
[32] D. Kovrizhin and J. Chalker, Phys. Rev. Lett. 109, 106403 (2012).
[33] M. Moskalets, Phys. Rev. B 89, 045402 (2014).
[34] A. M. Weiner, Opt. Commun. 284, 3669 (2011).
[35] A. Kumar, L. Saminadayar, D. Glattli, Y. Jin, and B. Etienne, Phys. Rev. Lett. 76, 2778 (1996).
[36] M. Prokudina, S. Ludwig, V. Pellegrini, L. Sorba, G. Biasiol, and V. Khrapai, Phys. Rev. Lett. 112, 216402 (2014).
[37] C. Cook, Radar Signals: An Introduction to Theory and Application (Elsevier, 2012).
arXiv:1709.08968v1 [cond-mat.mes-hall]: Yue Yin, "On-demand electron source with tunable energy distribution" (submitted 26 Sep 2017).
WP and ZL contributed equally to this work. Department of Physics and the James Franck Institute, University of Chicago, Chicago, IL.WP and ZL contributed equally to this work. Department of Chemistry and the James Franck Institute, University of Chicago, Chicago, IL. Medical Scientist Training Program, Pritzker School of Medicine, University of Chicago, Chicago, IL. Department of Molecular Genetics and Cell Biology, University of Chicago, Chicago, IL.Department of Molecular Genetics and Cell Biology, University of Chicago, Chicago, IL.Department of Physics, University of Chicago, Chicago, IL.To whom correspondence should be addressed. E-mail: [email protected] Department of Physics and the James Franck Institute, University of Chicago, Chicago, IL.Statistical estimation theory determines the optimal way of estimating parameters of a fluctuating noisy signal. However, if the estimation is performed on unreliable hardware, a sub-optimal estimation procedure can outperform the previously optimal procedure. Here, we compare classes of circadian clocks by viewing them as phase estimators for the periodic day-night light signal. We find that continuous attractor-based free-running clocks, such as those found in the cyanobacterium Synechococcus elongatus and humans, are nearly optimal phase estimators, since their flat attractor directions efficiently project out light intensity fluctuations due to weather patterns ('external noise'). However, such flat directions also make these continuous limit cycle attractors highly vulnerable to diffusive 'internal noise'. Given such unreliable biochemical hardware, we find that point attractor-based damped clocks, such as those found in a smaller cyanobacterium with low protein copy number, Prochlorococcus marinus, outperform continuous attractor-based clocks. By interpolating between the two types of clocks found in these organisms, we demonstrate a family of biochemical phase estimation strategies that are best suited to different relative strengths of external and internal noise.Continuous attractor-based clocks are unreliable phase estimators
Arvind Murugan
December 30, 2023
=================================================================Extracting information from a noisy external signal is fundamental to the survival of organisms in dynamic environments <cit.>. From yeast anticipating the length of starvation <cit.> and bacteria estimating the availability of sugars <cit.>, to Dictyostelium counting the number of cAMP pulses <cit.>, organisms must often filter out noisy, irregular aspects of the environment while inferring parameters of a regular aspect in order to be well adapted <cit.>.A striking example of regularity in environmental stimuli is the daily day-night cycle of light on Earth; organisms from all kingdoms of life use circadian clocks to estimate the phase of these periodic signals of fixed frequency in order to anticipate and prepare for future changes in light <cit.>. Phase inference on such an environmental signal is a challenge because unrelated aspects of the signal, such as large amplitude fluctuations due to weather patterns, are uninformative of phase, but entrainment mechanisms looking for dawn-dusk transitions might conflate such fluctuations with true variation in phase.
Poor phase entrainment is associated with a host of fitness costs in plants, rodents and humans<cit.>.Algorithms to infer the phase of a periodic but noisy signal have been studied extensively in statistics <cit.>; for example, the Bayesian theory of estimators develops optimal estimation procedures, such as Maximum Likelihood Estimators (MLE), that account for prior expectations about the external signal. However, in practice, the MLE may be computationally too slow or consume too much memory or other computational resources <cit.>. Hence the engineering literature has considered 'sub-optimal' alternatives for phase estimation, such as the Kay<cit.> and Tretter<cit.> estimators, that reduce the computational complexity of the operation. Such sub-optimal estimators can outperform the theoretically optimal estimator when subject to time, energy or other resource constraints. Molecular biology presents a novel kind of constraint on estimators, since any estimation procedure must be carried out on intrinsically unreliable biochemical hardware. This raises the question of which estimation procedures are compatible with biophysical constraints such as finite copy number fluctuations and limited energy and time. Here, we evaluate the performance of a general family of circadian clocks as phase estimators of the external day-night light cycle with weather-related amplitude fluctuations; however, these estimators are intrinsically unreliable, e.g., due to finite copy number fluctuations. Our family interpolates between free-running limit cycle clocks, like those found in humans and S. elongatus, a 3 μm cyanobacterium, and the damped point attractors that describe the clock in P. marinus, a 0.5 μm cyanobacterium with an estimated 50× smaller protein copy number than S. elongatus <cit.>. We find that continuous attractors, such as limit cycles, are a double-edged sword when viewed as statistical estimators. In the absence of internal fluctuations, the off-attractor dynamics of continuous attractors can selectively project out external fluctuations and thus approach Cramer-Rao bounds on estimation. However, continuous attractors are susceptible to diffusion along the attractor itself caused by internal noise (e.g., low protein copy number <cit.>), in which case point attractors can outperform them. Thus, we find an extension of the Laughlin principle <cit.>: clock dynamics must be tuned to match the expected statistics of both external and internal fluctuations.§ UNRELIABLE ESTIMATORS We first illustrate our results in a general context. Consider the canonical problem of phase estimation for a sine wave f(t) of known frequency with additive Gaussian white noise of strength ϵ_ext, extensively studied in statistics <cit.> and in engineering <cit.>. The Maximum Likelihood Estimator (MLE) for the phase at time t is <cit.> cosϕ̂_MLE(t) ∝ ∫^t_-∞ f(t̅) sin(ω (t-t̅)) dt̅. To physically implement such an estimator, a device must internally generate a reference sine wave of fixed frequency ω and integrate it against the entire available history of the external signal. We contrast ϕ̂_MLE with the family of finite-history estimators given by cosϕ̂_γ(t) ∝ ∫^t_-∞ f(t̅) K(t-t̅) dt̅, where K is a damped oscillatory kernel, K(t) = sin(ω t) e^-γ t. ϕ̂_γ only accounts for a length ∼1/γ of the signal f(t)'s history. As shown in Fig.<ref>, on a perfectly reliable device, ϕ̂_MLE has lower variance σ_ϕ^2 than any member of ϕ̂_γ.
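A minimal Monte-Carlo sketch of this comparison is given below (Python/NumPy; all parameters are illustrative, and γ = 0 recovers the MLE kernel). It also includes the internal phase drift ψ_t discussed next, accumulated backward from the measurement time:

```python
import numpy as np

rng = np.random.default_rng(0)
w = 2 * np.pi                 # signal frequency; T below is an integer number of periods
dt, T = 0.01, 200.0
t = np.arange(0.0, T, dt)
true_phase = 0.7

def estimate(gamma, eps_ext, eps_int):
    """One-shot phase estimate using the kernel K(tau) = e^{-gamma tau} sin(w tau + psi),
    where tau = T - t is the age of the data and psi is the kernel's internal drift."""
    f = np.cos(w * t + true_phase) + eps_ext * rng.standard_normal(t.size) / np.sqrt(dt)
    walk = np.cumsum(eps_int * rng.standard_normal(t.size) * np.sqrt(dt))
    psi = walk[-1] - walk                      # drift accumulated backward from t = T
    age = t[-1] - t
    c = np.trapz(f * np.exp(-gamma * age) * np.cos(w * age + psi), t)
    s = np.trapz(f * np.exp(-gamma * age) * np.sin(w * age + psi), t)
    return np.arctan2(s, c)

for eps_int in (0.0, 0.05):
    for gamma in (0.0, 0.17 * w, w):           # gamma = 0 is the MLE limit
        err = np.array([estimate(gamma, 0.5, eps_int) for _ in range(200)]) - true_phase
        R = np.abs(np.mean(np.exp(1j * err)))  # circular spread of the estimates
        print(f"eps_int={eps_int:.2f}, gamma/w={gamma / w:.2f}:"
              f" circular std = {np.sqrt(-2 * np.log(R)):.3f}")
```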
We then turn on internal unreliability in the form of phase diffusion (with diffusion constant ϵ_int^2/2) in generating the oscillatory kernel K(t). Fig.<ref>b shows the precision (i.e., -log_2 σ_ϕ) of ϕ̂_MLE and two estimators in the ϕ̂_γ family as a function of ϵ_int; ϕ̂_MLE's precision is especially fragile to internal noise. On the other hand, estimators ϕ̂_γ=0.17ω, ϕ̂_γ=ω, based on shorter-lived kernels, are much more robust to phase diffusion and thus outperform ϕ̂_MLE on sufficiently unreliable hardware. Intuitively, integrating a longer history of f(t), as in ϕ̂_MLE, averages out external noise but also increases exposure to internal phase drift in K(t). In fact, we show in the SI that the estimator with γ_opt ∼ ϵ_int/ϵ_ext strikes the right balance in integration time and has the highest precision in this family. § CIRCADIAN CLOCKS AS ESTIMATORS We now discuss two qualitatively distinct phase estimation strategies implemented by organisms with circadian clocks that face both external and internal fluctuations. Many organisms, like humans and rodents, have free-running clocks that show self-sustained 24 hr rhythms even in constant dark or constant light conditions. Such clocks are phenomenologically well-described by a limit cycle attractor, a non-linear oscillator with a fixed amplitude<cit.>. The molecular details of such limit cycle attractors are best understood for the post-translational KaiABC protein clock in S. elongatus; for example, the axes of the phase portrait in Fig.<ref> could be the phosphorylation extent of the S and T sites on KaiC (<cit.> and SI). The clock follows distinct limit cycle dynamics during the day and night<cit.>, with the day cycle positioned at higher phosphorylation levels due to higher ATP levels. We model such free-running clocks using circular day and night limit cycles of radius R in a plane. Each limit cycle is defined by the dynamics τ_relax ṙ = r - r^3/R^2, θ̇ = ω about its own center; the center of the limit cycle itself moves along the y=x diagonal in Fig.<ref>a as (-ρ(t)L, -ρ(t)L), where ρ(t) ∈ [0,1] is the normalized light level at time t and L is a measure of the physiological changes between day and night (e.g., the ATP/ADP ratio change in S. elongatus). Thus, e.g., in Fig.<ref>a, the system follows the blue dynamics at night, and after dawn it relaxes to the orange day attractor on a time scale τ_relax. In reality, the day and night limit cycles are not circles of the same size in a plane, and physiological changes might lag light levels; we later use a molecular model of the KaiABC oscillator that violates all these assumptions about shape, size and relaxation to show that our qualitative results do not rely on them. We do not include transcriptional coupling <cit.> of the clock here. Other biological oscillators described by our limit cycle picture include NF-κB <cit.> driven by TNF changes <cit.>, and synthetic oscillators <cit.>. Not all organisms have a free-running clock; for example, many insects <cit.> have damped 'hourglass' clocks that decay to a fixed point under constant light or constant dark conditions but show oscillatory dynamics under day-night cycling. In fact, a sister cyanobacterial species, Prochlorococcus marinus, has a KaiBC-protein-based clock without the negative KaiA feedback <cit.>. Consequently, in constant light or constant dark conditions, the clock's state decays to a distinct day or night state, respectively <cit.>.
Such clocks are phenomenologically well-described by a day-time and a night-time point attractor with slow relaxation dynamics between them, as shown in Fig. <ref>b, modeled as ṙ = -r/τ_relax, θ̇ = ω about an attractor point whose location varies with current light levels as (-ρ(t)L, ρ(t)L). Here we assume 2τ_relax ∼ 24 hrs, as in P. marinus <cit.>; if relaxation were faster and completed before the day is over, the clock could not resolve all times of the day. With cloudless day-night cycling, both kinds of clocks entrain into a stable trajectory, as shown in the lower panels of Fig.<ref>a and b, switching dynamics between the two limit cycles or point attractors at dawn and dusk.In what follows, we will also consider a family of limit cycle clocks of varying R/L to interpolate between large-R/L limit cycles and point attractors. The Hopf bifurcation is the simplest way to parametrize such an interpolation <cit.>. However, the relaxation time τ_relax changes dramatically near a Hopf bifurcation, distracting from the effects of noise that we wish to study. Hence we hold τ_relax fixed in the interpolation but stop at a non-zero R/L to avoid singularities (see SI). § EXTERNAL NOISE - WEATHER PATTERNS We begin with the performance of different clocks in the presence of external intensity fluctuations due to weather patterns. Weather patterns cause large fluctuations in the intensity of light over a wide range of time scales, as shown in Fig.<ref>a. We model such fluctuations during the day as random dark pulses that cause a temporary shift back to the night cycle dynamics. In what follows, we quantify the time-telling precision of clocks by first subjecting an in silico population of bacteria to different realizations of such noisy weather patterns. We compute the resulting distribution p(c⃗|t) of clock states c⃗ at a given time of day t. The variance of p(c⃗|t) fundamentally limits the precision with which the cell can infer the current time t from the clock state. Finally, we average the variance over the day-night cycle to find the mutual information between clock state and time <cit.>. See SI. Alternative related measures include the ability to anticipate sunset or sunrise.When subject to weather fluctuations, we see in Fig.<ref>b that the population variance of clock states for limit cycles is fundamentally limited by the spacing between the day and night limit cycles. Point attractors develop much larger and overlapping population distributions at different time points.We can geometrically understand the daytime variance increase σ^2_clouds, seen in Fig.<ref>c, in terms of the phase lag ΔΦ due to a single (say, 2.4 hr) dark pulse<cit.> administered during the day. Fig.<ref>d shows that the deviation in trajectory for limit cycle clocks (purple) is fundamentally limited by the presence of the two continuous attractors. In contrast, for the point attractor, a dark pulse sets the system in free fall towards the night point attractor, with no limit cycle to arrest such a fall. Consequently, the geometrically computed phase shift ΔΦ due to the particular dark pulse shown in Fig.<ref>d is much smaller for limit cycles (ΔΦ ∼ 0.5 hr for the R, L geometry shown) than for point attractors (ΔΦ ∼ 4 hr) (see SI). In fact, this contrast in ΔΦ between limit cycles and point attractors holds for dark pulses of any duration and time of occurrence (see SI). Finally, the contrast is even greater at large R/L, as shown in Fig.<ref>e; the dark pulse phase shift (ΔΦ)^2 ∼ (L/R)^2 falls rapidly with limit cycle size.
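The following sketch (a simplified planar version of the model above; the attractor geometry, the τ_relax values, and the dawn states are illustrative assumptions) integrates both clock types through a day containing a single 2.4 hr dark pulse and reports the resulting phase shift ΔΦ:

```python
import numpy as np

w = 2 * np.pi / 24.0                        # angular speed (rad/hr)
R, L = 1.0, 0.2                             # cycle radius and day-night offset
day_c, night_c = np.array([0.0, 0.0]), np.array([0.0, -L])

def step(z, center, dt, tau, limit_cycle):
    """One Euler step about the currently active attractor."""
    u = z - center
    r = np.hypot(u[0], u[1])
    rdot = (r - r**3 / R**2) / tau if limit_cycle else -r / tau
    th = np.arctan2(u[1], u[0]) + w * dt
    return center + (r + rdot * dt) * np.array([np.cos(th), np.sin(th)])

def run_day(dark_start, dark_len, tau, limit_cycle, dt=0.005):
    # dawn state: on the day cycle (limit cycle) / at the night fixed point (hourglass)
    z = day_c + np.array([R, 0.0]) if limit_cycle else night_c.copy()
    for t in np.arange(0.0, 12.0, dt):
        dark = dark_start <= t < dark_start + dark_len
        z = step(z, night_c if dark else day_c, dt, tau, limit_cycle)
    u = z - day_c
    return np.arctan2(u[1], u[0])

for lc, tau in ((True, 2.0), (False, 12.0)):
    dphi = run_day(4.0, 2.4, tau, lc) - run_day(0.0, 0.0, tau, lc)
    dphi = (dphi + np.pi) % (2 * np.pi) - np.pi
    print("limit cycle" if lc else "point attractor",
          f": 2.4 hr dark pulse shifts the phase by {dphi / w:+.2f} hr")
```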
This trend agrees with the variance gain σ^2_clouds seen in simulations that average over random weather conditions. Hence, large-R/L limit cycles are much less affected by external fluctuations than point attractors.To complete the analysis, note that in Fig.<ref>c, the population variance increases additively during the day and falls multiplicatively at dusk (and dawn), i.e., σ^2 → σ^2 + σ^2_clouds → (σ^2 + σ^2_clouds)/s^2 → …. Solving for the steady state, we findσ^{2,ext}_limit cycle ∼ ΔΦ^2/(s^2 - 1),where we have equated σ^2_clouds to ΔΦ^2 for a typical dark pulse. We must now compute the variance drop σ^2 → σ^2/s^2 seen at dusk (and dawn). As shown in the SI for external noise (and in Fig.<ref>b for internal noise), this dawn/dusk variance drop can be geometrically explained by the slope of the circle map relating the two cycles <cit.>; we find that s^2 - 1 ∼ L/R for large-R/L limit cycles. Plugging this and ΔΦ^2 ∼ (L/R)^2 into Eq.<ref>, we see that σ^2 ∼ L/R → 0 for large cycles. Fig.<ref>f shows that the precision (i.e., mutual information between clock state and time) computed from random weather simulations agrees with this theory; clock precision drops as we interpolate from limit cycles to point attractors. § INTERNAL NOISE - FINITE COPY NUMBER In addition to external fluctuations, circadian clocks must also deal with the intrinsically noisy nature of biochemical reactions<cit.>. In particular, based on their relative sizes<cit.>, P. marinus is thought to have far fewer copies of the Kai clock proteins (e.g., ∼500 copies of KaiC) than S. elongatus (∼O(10000) copies of KaiC <cit.>). Such finite numbers of molecules are known to create significant stochasticity in oscillators, even in the absence of an external signal <cit.>. Finite copy number effects on cellular function have been extensively studied and modeled <cit.>, e.g., using Gillespie simulations. Here we follow <cit.> and add Langevin noise of strength ϵ_int ∼ 1/√N to all dynamical variables of the system, where N is the overall copy number, with the ratios of different species assumed fixed (see SI). In the Langevin approach, the clock state still follows the dynamics implied by the phase portrait in Fig.<ref> but also diffuses with a diffusion constant ϵ_int^2 ∼ 1/N. We later check our results against full Gillespie simulations of an explicit KaiABC model. We simulated a population of clocks in externally noiseless day-night light cycles but with internal Langevin noise. We see in Fig.<ref>b that limit cycle populations have a significantly higher variance of clock state due to internal noise than point attractors, in contrast to Fig.<ref>b with external noise alone. We can understand the weakness of the limit cycle attractor relative to the point attractor in terms of diffusion along flat and curved directions in the phase plane. The flat direction along the limit cycle attractor cannot contain diffusion caused by the Langevin noise, and hence the population variance along the limit cycle increases linearly with time during the day, changing by σ^2 → σ^2 + ϵ_int^2 T_day during a day of length T_day (and similarly at night), as shown in Fig.<ref>c. Dawn and dusk do reduce the variance, σ^2 → σ^2/s^2, as the trajectories originating on, say, the day cycle converge on the night cycle (see Fig. <ref>d and <cit.>).
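A compact Euler-Maruyama sketch of this behavior is given below (illustrative parameters; the noise is switched off during the dusk relaxation to isolate the geometric focusing from continued diffusion):

```python
import numpy as np

rng = np.random.default_rng(1)
w, R, L, tau = 2 * np.pi / 24.0, 1.0, 0.2, 2.0
eps = 0.02                                    # internal noise strength ~ 1/sqrt(N)
day_c, night_c = np.array([0.0, 0.0]), np.array([0.0, -L])

def evolve(Z, center, hours, noise, dt=0.01):
    """Euler-Maruyama for an ensemble Z (shape (n, 2)) of planar limit-cycle clocks."""
    for _ in range(int(hours / dt)):
        U = Z - center
        r = np.linalg.norm(U, axis=1, keepdims=True)
        drift = U * (r - r**3 / R**2) / (tau * r)           # radial relaxation
        rot = w * np.column_stack((-U[:, 1], U[:, 0]))      # rigid rotation at rate w
        Z = Z + (drift + rot) * dt + noise * np.sqrt(dt) * rng.standard_normal(Z.shape)
    return Z

def circ_var(Z, center):
    ang = np.arctan2(Z[:, 1] - center[1], Z[:, 0] - center[0])
    return -2.0 * np.log(np.abs(np.mean(np.exp(1j * ang))))  # ~ variance, small spread

Z = np.tile(day_c + R * np.array([0.0, -1.0]), (4000, 1))    # dawn: in phase
Z = evolve(Z, day_c, 12.0, eps)               # day: variance grows ~ eps^2 t / R^2
v_day = circ_var(Z, day_c)
Z = evolve(Z, night_c, 6.0, 0.0)              # dusk: relax onto the night cycle
v_night = circ_var(Z, night_c)
print(f"variance after day: {v_day:.4f}  (flat-direction theory {eps**2 * 12 / R**2:.4f})")
print(f"dusk compression: {v_night / v_day:.3f}"
      f"  (leading order 1/(1 + 2L/R) = {1 / (1 + 2 * L / R):.3f})")
```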
In fact, we can compute this variance drop s^2 entirely through geometric considerations. We define the circle map ϕ = P(θ) as relating originating points θ near dusk on the day cycle to final points ϕ on the night cycle after relaxation (experimentally characterized in <cit.>). Then s^-1 = dP(θ)/dθ. Fig.<ref>d shows that this slope s^-1 = dP(θ)/dθ, geometrically computed in the SI, agrees with the dawn/dusk variance drop in Langevin simulations and scales as s^2 - 1 ∼ L/R for large R/L.Thus, the population variance changes as σ^2 → σ^2 + ϵ_int^2 T_day → (σ^2 + ϵ_int^2 T_day)/s^2 → …, where the night adds another +ϵ_int^2 T_night, and so on. Assuming T = T_day = T_night and solving for the steady-state average variance,σ^{2,int}_cycle ∼ ϵ_int^2 T (s^2+1)/(s^2-1).Consequently, as the cycles become large (large R/L), the dawn/dusk variance drop vanishes as s^2 - 1 ∼ L/R → 0, while diffusion along the flat direction still adds +ϵ_int^2 T to the variance during each day and each night; hence large-R/L limit cycles have large σ^{2,int}_cycle and thus low precision. In contrast, for the point attractor, the population variance stays constant during the day-night cycle. The size of this variance is analytically shown in the SI to beσ^{2,int}_point ∼ ϵ_int^2 τ_relax,which matches Langevin simulations, as shown in Fig.<ref>e. Since τ_relax ∼ T_day is needed to have distinct clock states throughout the day (see Fig.<ref>c), we find σ^{2,int}_cycle ≥ σ^{2,int}_point.To summarize, in both cases, population variance is reduced by the geometric 'curvature' of the dynamics, which is set by how much nearby trajectories converge. Point attractor trajectories experience a constant curvature of 1/τ_relax, giving Eqn.<ref>. In contrast, limit cycle clocks have long periods of zero curvature along the limit cycle (day and night); such dephasing in constant conditions has been studied in circadian clocks <cit.>, in NF-κB <cit.>, and has been computed in a similar fashion to Eqn.<ref> for phase oscillators <cit.>. Here, such variance increases are balanced only by short periods of 'curved' off-attractor dynamics at dawn and dusk, when the clock must relax to the new day or night attractor (Fig.<ref>a). Hence limit cycles underperform point attractors if only internal noise is present.§ COMBINATION OF EXTERNAL AND INTERNAL NOISEWe now subject the clock systems to both internal and external noise at the same time. We find results (see Fig.<ref>a) that parallel those for mathematical estimators in Fig.<ref>b. Large-R/L limit cycles outperform other clocks in filtering out external noise when internal noise is low, but their precision degrades more rapidly than that of other clocks as internal noise ϵ_int^2 ∼ 1/N is increased. Point attractors have poor precision with only external noise but do not significantly degrade with internal noise, and they outperform all other clocks at high internal noise. At comparable strengths of internal and external noise, limit cycles with an intermediate value of R/L are most precise.The calculations and simulations so far assume idealized limit cycles; e.g., we assume the simplest form of circular limit cycles that exist near a Hopf bifurcation and assume the same diffusion constant ϵ_int^2 around the limit cycle. Real biochemical oscillators such as circadian clocks <cit.>, NF-κB <cit.>, or synthetic circuits <cit.> can violate such assumptions. To test whether our results survive the specifics of biological clocks, we performed Gillespie simulations of an explicit model of KaiABC that interpolates between the known biochemistry<cit.> of S. elongatus's clock and the putative KaiBC clock<cit.> in P. marinus (Fig.<ref>c).
The limit cycles in this model are not perfect circles of the same size, do not lie entirely in two dimensions, and are affected by finite copy number in a heterogeneous way (see SI). Despite such complications, we find that the general behavior of Fig.<ref>b is reproduced by this model in Fig.<ref>c. We dial the strength of the KaiA feedback γ, responsible for spontaneous oscillations, to interpolate between limit cycles and point attractors. As earlier, we find that different ratios of internal to external noise require different strengths of the KaiA feedback for the highest clock precision. §.§ Speed-precision trade-offThus far, we have only considered the population variance at steady state as a proxy for clock quality. An independent measure of clock quality is the entrainment speed, i.e., the time taken to reach the steady-state population variance, starting from a population uniformly distributed in clock phase. In Fig.<ref>b, we show the resulting trade-off between precision and speed for our family of estimators in the presence of only external noise, and then of only internal noise. With external noise, the most precise estimators (i.e., large-R/L limit cycles) take much longer to reach such a steady state. Intuitively, limit cycles retain a longer history of the external signal, allowing them to average out external noise better, much like the (slow) Maximum Likelihood Estimator (Fig.<ref>). In contrast, point attractors have little memory of the external signal seen on earlier days, since the population converges to a point every night.Strikingly, such a trade-off between speed and accuracy is absent if only internal noise is present; the estimators most robust to internal noise (i.e., point attractors) are also the fastest estimators, much as we found for statistical estimators. Intuitively, spending less time estimating on unreliable hardware gives less opportunity for error. As with statistical estimators, with both kinds of noise present, clocks with intermediate entraining speed have the highest precision.§ DISCUSSIONParameter estimation is known to be aided by having an internal model of the expected signal, since external fluctuations inconsistent with that model can then be projected out easily<cit.>. Here, we reconceptualize circadian clocks as phase estimators for noisy input signals and note that limit cycle-based free-running circadian clocks encode an internal model of the expected external day-night cycle of light. We find that the continuous attractor underlying such a clock is able to effectively project out weather-related amplitude changes that are perpendicular to the flat direction. Similar roles for the flat direction of continuous attractors have been extensively explored in neuroscience <cit.>, e.g., for head and eye motor control<cit.> and spatial navigation <cit.>. However, we see here that the same flat direction becomes a vulnerability in the presence of internal fluctuations, since such fluctuations cannot be restricted to be perpendicular to the attractor. Thus, when the internal model is unreliable, a simpler phase estimation procedure with no internal model provides better timekeeping. Our work thus suggests that the damped circadian oscillator, like that in P. marinus <cit.>, is not merely a poor cousin of the remarkable free-running oscillator found in S. elongatus. At the low protein copy numbers in P. marinus, such damped point attractors keep time more reliably than limit cycle clocks.In addition to P.
marinus, damped oscillators are found elsewhere in biology <cit.>. In fact, many limit cycle oscillators shrink down to point attractors as physiological conditions are varied, such as S. elongatus's clock at low temperatures<cit.>, NF-κB at very low or high levels of TNFα stimulation<cit.>, or insect clocks in response to diet and temperature changes <cit.>. Our work suggests that such families of oscillators that interpolate between limit cycles and point attractors continuously trade off protection against external fluctuations for protection against internal fluctuations.We thank Aaron Dinner, John Hopfield, Eugene Leypunskiy, Charles Matthews, Brian Moths, Thomas Witten, and members of the Rust and Murugan labs for fruitful discussions.§ STATISTICAL PHASE ESTIMATORSThe MLE for the phase at time t = 0 of a periodic signal f(t) of known frequency ω with additive white Gaussian noise (AWGN) has been well studied and is known to be <cit.>cosϕ̂ = lim_{T→∞} (1/T) ∫_{-T}^0 f(t') sin(ω t') dt'.A quick way to see this is to note that with Gaussian noise the log-likelihood of a residual is proportional to minus its square, so Maximum Likelihood Estimation is equivalent to the least-squares minimization argmin_ϕ || sin(ω t + ϕ) - f(t)||^2 between the signal and a reference sine wave. Expanding the square, only the cross term ∫ f(t) sin(ω t) survives, since the sin^2(ω t) and f(t)^2 terms integrate to constants. In this way, Eq.<ref> can be shown <cit.> to be the Maximum Likelihood Estimator for Gaussian white noise. If we perform this estimation 'online' (i.e., provide a running estimate as a function of time), we can write this estimator in the more familiar kernel form,cosϕ̂(t) = lim_{T→∞} (1/T) ∫_{-T}^t f(t') K_MLE(t-t') dt',where K_MLE(t) = sin(ω t). Inspired by the constraints of carrying out such an estimator on a physical system with finite memory, we generalize the above MLE to a family of estimators:cos(ϕ̂_γ + ϕ_0) = ∫_{-∞}^t f(t') K_γ(t-t') dt', K_γ(t) = [γ√(γ^2+4ω^2)/ω] e^{-γ t} sin(ω t),where ϕ_0 = arcsin[γ/√(γ^2+4ω^2)] is an offset.§.§ External noiseWe model external noise as an additive Gaussian process,f(t) = cos(ω t) + η(0,ϵ_ext)(t),where ⟨η(t') η(t)⟩_ext = ϵ_ext^2 δ(t-t'). To estimate the variance of the estimator, we denote r(t) = cosϕ̂(t) and compute its autocorrelation function⟨ r(t) r(0) ⟩_ext = ∫_{-∞}^t ∫_{-∞}^0 ⟨ f(t_1) f(t_2) ⟩_ext K_γ(t-t_1) K_γ(0-t_2) dt_1 dt_2 = ⟨ r(t) ⟩_ext ⟨ r(0) ⟩_ext + ∫∫ ⟨η(t_1) η(t_2) ⟩_ext K_γ(t-t_1) K_γ(0-t_2).Thus C(t) ≡ ⟨ r(t) r(0) ⟩_ext - ⟨ r(t) ⟩_ext ⟨ r(0) ⟩_ext can be evaluated asC(t) = ∫∫ ⟨η(t_1) η(t_2) ⟩_ext K_γ(t-t_1) K_γ(0-t_2) dt_1 dt_2 = ∫∫ ϵ_ext^2 δ(t_1 - t_2) K_γ(t-t_1) K_γ(0-t_2) dt_1 dt_2 = ϵ_ext^2 ∫_{-∞}^0 K_γ(t-t_2) · K_γ(-t_2) dt_2 = ϵ_ext^2 γ e^{t γ} (cos(ω t) - γω^{-1} sin(ω t)) (γ^2+4ω^2)/[4(γ^2+ω^2)].Thus, σ^2 = C(0) = ϵ_ext^2 γ(γ^2+4ω^2)/[4(γ^2+ω^2)] ≈ ϵ_ext^2 γ. Hence we conclude that for small γ,σ^2 ≈ ϵ_ext^2 γ,as confirmed by numerical simulations in Fig.<ref>a, d. As γ → 0, the estimator integrates over longer and longer histories and provides an accurate estimate of the phase.
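This scaling can be checked directly by Monte Carlo (a sketch with illustrative parameters; the kernel normalization is the one defined above):

```python
import numpy as np

rng = np.random.default_rng(2)
w, dt, T = 1.0, 0.01, 400.0
t = np.arange(0.0, T, dt)
eps = 0.3                                 # external noise strength eps_ext

for g in (0.05, 0.1, 0.2):
    # normalized kernel from the appendix, evaluated at age T - t
    K = (g * np.sqrt(g**2 + 4 * w**2) / w) * np.exp(-g * (T - t)) * np.sin(w * (T - t))
    r = [np.trapz((np.cos(w * t) + eps * rng.standard_normal(t.size) / np.sqrt(dt)) * K, t)
         for _ in range(400)]
    theory = eps**2 * g * (g**2 + 4 * w**2) / (4 * (g**2 + w**2))
    print(f"gamma={g}: simulated var = {np.var(r):.4f}, C(0) = {theory:.4f}")
```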
§.§ Internal noiseK_γ(t) must be generated internally by the estimator during integration. We model the intrinsic unreliability of time-keeping as phase diffusion for K_γ:K_γ(t) = Γ e^{-γ t} sin(ψ̂_t), dψ̂_t = ω dt + η(0,ϵ_int) √(dt),where we denote the normalization factor byΓ = (ϵ_int^2+2γ) √((ϵ_int^2+2γ)^2+16ω^2)/(4ω).With this, we can write the autocorrelation for a noiseless signal f(t) = cos(ω t) and a noisy kernel as⟨ r(t) r(0) ⟩_int = ∫∫ ⟨ f(t_1) f(t_2) K_γ(t-t_1) K_γ(0-t_2) ⟩_int = ∫∫ cos(ω t_1) cos(ω t_2) ⟨ K_γ(t-t_1) K_γ(0-t_2) ⟩_int.Using the definition of the kernels K_γ, we find⟨ r(t) r(0) ⟩_int = Γ^2 ∫∫ cos(ω t_1) cos(ω t_2) e^{-γ(t-t_1+t_2)} ⟨ sin(ω(t-t_1)+ψ_{t-t_1}) sin(ω(-t_2)+ψ_{-t_2}) ⟩_int, ⟨ r(t) ⟩_int = Γ ∫ cos(ω t_1) e^{γ(t_1-t)} ⟨ sin(ω(t-t_1)+ψ_{t-t_1}) ⟩_int.Note that ψ_t is an unbiased Gaussian random walk started at ψ_0=0, and it follows a normal distribution with variance ϵ_int^2 t. Note also that if θ is a random number drawn from a Gaussian distribution N(μ,σ), then one has⟨cosθ⟩ = e^{-σ^2/2} cosμ and ⟨sinθ⟩ = e^{-σ^2/2} sinμ.Using these identities on Eqns.<ref>,<ref>, we can compute the variance of the estimator σ^2 = ⟨ r(0)^2 ⟩_int - ⟨ r(0) ⟩_int^2 to leading order in ϵ_int^2 asσ^2 = (2γ^4+γ^2ω^2+2ω^4) ϵ_int^2/[8γω^2(γ^2+ω^2)] + O(ϵ_int^4).In the regime where γ ≪ ω and ϵ_int^2 ≪ 1, we can further simplify the variance toσ^2 ≈ ϵ_int^2/(4γ).Optimal estimator: To derive the optimal estimator, note, as shown in Fig.<ref>c, that with both noises present the two contributions to the variance scale as ϵ_int^2/γ and ϵ_ext^2 γ. The variance is minimized when the two terms are equal, givingγ_opt ∼ ϵ_int/ϵ_ext.Time-precision trade-off: With only external noise, we see that slower estimators (i.e., small γ, leading to longer integration of history) have a higher precision, leading to a trade-off between precision and speed. However, with only internal noise, slower estimators are less precise, since longer integration times expose the estimator to more internal noise-related dephasing.With both kinds of noise present, the optimal estimator γ_opt ∼ ϵ_int/ϵ_ext strikes a balance in integration time; integrating any longer would be more negatively affected by internal noise than would be gained by averaging out external noise. Similarly, integrating for less time would insufficiently average out the external noise and not gain as much from lower exposure to internal noise. The same structure of trade-offs is seen for limit cycle and point attractor-based clocks.§ CIRCLE MAP - STEP RESPONSE CURVE In the main paper, we claim that the variance of the clock state across a population drops as σ^2 → σ^2/s^2 at dusk, where s^2 - 1 ∼ L/R as L/R → 0. Data from Langevin simulations were presented there. Here we derive this result using a simple geometric argument about circle maps. We define ϕ = P_T(θ) to be the phase on the night cycle that a clock evolves to, after a time T, if the lights were suddenly turned off when the clock was at state θ on the day cycle. See Fig.<ref>a, b. In principle, with complex relaxation dynamics between the limit cycles, P_T(θ) could show a complex dependence on T. However, we work in a simplified model where the angular frequency of the clock is independent of the amplitude of oscillations. In this limit, T only causes an overall shift in ϕ = P_T(θ); i.e., we can write P_T(θ) = P(θ) + ω T, where ω is the angular frequency of the clock. In what follows, we will be interested in the derivative ∂_θ P_T(θ); hence we will work with P(θ) instead of P_T(θ).
This circle map, ϕ = P(θ), is important since it determines whether two differing daytime clock states are brought closer together or pushed farther apart at dusk, and thus determines the rate of entrainment of a population to the external signal. Consider two organisms that have nearby but distinct clock states θ_0, θ_0+Δθ at dusk. After dusk, these two clocks will be mapped to P(θ_0) and P(θ_0 + Δθ) ≈ P(θ_0) + Δθ dP(θ)/dθ|_{θ=θ_0}, respectively. Thus, dusk changes the difference between the clock states from Δθ to Δϕ, whereΔϕ ≈ Δθ dP(θ)/dθ|_{θ=θ_0}.By a similar argument, if the variance of clock states across a population is σ^2 before dusk, it will be reduced at dusk asσ^2 → σ^2 (dP(θ)/dθ|_{θ=θ_0})^2.This expression is valid in the regime where the population variance σ^2 is small enough to linearize the circle map P(θ). Similar considerations apply to the dawn transition between the night and day cycles as well. Both circle maps were recently experimentally characterized for S. elongatus in <cit.>.In our simple theoretical model, where the clock frequency does not change with amplitude (i.e., the radial coordinate), we can easily compute P(θ) from geometry. In Fig.<ref>, we draw a diagram of the transition from a particle on the day cycle at phase θ to the night cycle at phase ϕ. By trigonometry, we writeϕ = P(θ) = arctan[(L + R sinθ)/(R cosθ)],and derives^2 - 1 = (dP(θ)/dθ)^{-2} - 1 = L(2L^3 + 7LR^2 - 3LR^2 cos(2θ) + 4R(2L^2 + R^2) sinθ)/[2R^2(R + L sinθ)^2] = 2 sin(θ) L/R + O(L/R)^2,where θ corresponds to the angle on the day cycle at dusk, which is at π/2 in Fig.<ref>a. This equation implies that as the day and night limit cycles get closer, the geometric focusing factor s converges to one. This asymptotic behavior is intuitive because if L = 0, meaning no transition, the variance should remain the same (s = 1, so σ^2 → σ^2/1^2 at the transition). Remarkably, our geometric derivation of s^2 - 1 matches the variance drop σ^2 → σ^2/s^2 seen in stochastic simulations of weather conditions; see Fig.<ref>d. The variance gain during the day is the result of fluctuations in sunlight, simulated as random dark pulses of random interval, amplitude and time of delivery. Such variance is accumulated during the day, and the drop at dusk is measured (green Xs). Fig.<ref>e shows the variance drop seen in Langevin simulations with internal noise. While the cause of the variance increase during the day is different (finite copy number effects), the variance drop at dusk agrees well with the geometric computation of s^2, and thus with the external noise simulations as well. In both cases, the simulations and the geometric theory show that s^2 - 1 ∼ L/R as L/R → 0.
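The leading-order result can be verified numerically in a few lines (a sketch; the dusk phase θ and the L/R values are illustrative):

```python
import numpy as np

def P(theta, L, R):
    """Day-to-night circle map from the trigonometric construction above."""
    return np.arctan2(L + R * np.sin(theta), R * np.cos(theta))

for LR in (0.05, 0.1, 0.2):
    L, R, theta = LR, 1.0, np.pi / 2 - 0.3          # phase near dusk
    dth = 1e-6
    slope = (P(theta + dth, L, R) - P(theta - dth, L, R)) / (2 * dth)  # s^-1
    s2m1 = slope**-2 - 1.0
    print(f"L/R={LR}: s^2 - 1 = {s2m1:.4f},"
          f" leading order 2 sin(theta) L/R = {2 * np.sin(theta) * LR:.4f}")
```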
§ DARK PULSE PHASE SHIFT - PHASE RESPONSE CURVEDuring the daytime, sunlight intensity fluctuates because of cloud cover, and we have referred to these fluctuations as external noise. In our simulations, we subject each individual in a population to a different realization of these weather conditions and compute the resulting population variation of clock state. Such variation limits the ability of the cell to read out the objective time from the clock state. Here, we relate the population variance caused by random cloud cover to the geometrically computed Phase Response Curve (PRC) due to a single dark pulse administered during the day. Using this geometric method, we will find that the ability of limit cycles to withstand external intensity fluctuations increases with R/L, the size R of the limit cycles relative to their separation L. In particular, we will show geometrically that the variance gain during the day, σ^2 → σ^2 + σ^2_clouds, scales as (L/R)^2, in perfect agreement with stochastic weather simulations.To compute the scaling of σ^2_clouds, we compute the phase shift ΔΦ caused by a single dark pulse of width τ on the limit cycles with angular speed ω (i.e., the Phase Response Curve (PRC) corresponding to such a dark pulse). Fig.<ref>a shows an example of a dark pulse in the signal and how it affects the trajectory. Consider a clock at state θ on the day cycle. A dark pulse of length τ administered just then will change the dynamics to that of the night cycle. This clock has state ϕ = P(θ) with respect to the night cycle and will evolve for a time τ according to the night cycle dynamics, reaching a new state ϕ + ωτ, at a radial position determined by R, L. At the end of the dark pulse, we use the night-day circle map, θ = Q(ϕ), to find the clock state back on the day cycle. Note that all these shifts depend on the limit cycle geometry, i.e., on R and L, as shown in Fig.<ref>. Similar to how we computed the mapping in the previous section, we can write each mapping using simple trigonometry:ϕ = P(θ) = arctan[(L + R sinθ)/(R cosθ)]andθ^* = Q(ϕ) = arctan[(-L + R sin(ϕ + ωτ))/(R cos(ϕ + ωτ))].Notice that the mapping Q only differs from P by changing L to -L. We also include the diagram showing the transition due to a dark pulse in Fig.<ref>. The process "1" corresponds to ϕ = P(θ), "2" corresponds to the rotation on the night cycle ϕ → ϕ + ωτ, and "3" corresponds to the transition back to the day cycle, θ^* = Q(ϕ + ωτ). Combining these three processes, we write θ^* as θ^*(θ, τ, L/R) and expand it in the limit L/R → 0 to obtainΔΦ = -(L/R)(cos(θ + ωτ) - cos(θ)) + O(L/R)^2,where ΔΦ = θ^* - (θ + ωτ), because θ + ωτ is the phase the clock would have had if it had not experienced the dark pulse. This expression for ΔΦ gives the phase shift that the cloud causes. With different clocks experiencing different weather conditions, the variance gained among the population due to the fluctuation of sunlight grows like |ΔΦ|^2 ∼ (L/R)^2. We see good agreement between stochastic weather simulations and this geometric computation, as shown in Fig.<ref>d. In this calculation, we focused on dark pulses administered at a fixed generic time (8 AM in Fig.<ref>d). However, the PRC ΔΦ(θ) for dark pulses has a zero at a specific time of the day (see Fig.<ref>c). That is, for each dark pulse of width τ, there exists a time of administration such that ΔΦ = 0! In fact, such a dark pulse has an entraining effect, reducing the population variance. Such an effect is seen in Fig. 3c, where the population variance drops in the middle of the day. We leave experimental and theoretical investigation of the counter-intuitive effects of such specially timed dark pulses to future work. Here, we show that even if we include such dark pulses with an entraining effect, the variance gained at the end of the day is still proportional to (L/R)^2 in the limit that L/R goes to zero. To simplify our derivation while retaining the essence of what dark pulses do during the daytime, let us consider dark pulses coming at three times: in the morning (θ = -π/2), around noon (θ = -ωτ/2, with small ωτ), and in the evening (θ = π/2).
Starting the day with variance σ_0^2, by the end of the day the variance becomes

σ^2 = [ σ_0^2 + (ΔΦ)^2_θ=-π/2 ] / [ 1 + (dΔΦ/dθ)_θ=-ωτ/2 ]^2 + (ΔΦ)^2_θ=π/2 ≈ [ σ_0^2 + (L/R sinωτ)^2 ] / [ 1 + 2(L/R) sin(ωτ/2) ]^2 + (L/R sinωτ)^2,

σ^2 ≈ σ_0^2 + 2 (L/R sinωτ)^2 + 𝒪(L/R)^2.

Thus, the variance gained due to fluctuations, σ^2 - σ_0^2 = σ^2_clouds, is proportional to (L/R)^2. This simple derivation may not rigorously reflect the correct constant in front of the (L/R)^2 term, but a fully rigorous derivation, with dark pulses arriving at random times during the day, should yield the same power-law dependence on L/R. Fig.<ref>d shows that averaging ΔΦ^2 over pulses administered at different times numerically (dashed line) results in the same power law as for single pulses and as seen in stochastic weather simulations.

§ LANGEVIN MODEL OF FINITE COPY NUMBER FLUCTUATIONS

Chemical reactions that occur in the bulk of a homogeneous solution can be described by a set of ordinary differential equations. However, within a single cell the copy number of each molecular species is limited and thus the reactions carry internal noise from stochastic fluctuations. Gillespie showed that chemical reactions at finite copy number can be approximated by Langevin dynamics using the following argument <cit.>. Consider an elementary reaction

A + B -> C + D

with forward rate constant k_+. During each infinitesimal time δt, the probability of the occurrence of this reaction follows a Poisson distribution whose mean and variance both equal R_+ δt = k_+ · N_A · N_B · δt. Integrated over a larger time step, the Poisson distribution can be approximated by a Gaussian, resulting in the Langevin dynamics

d N_A = - k_+ · N_A · N_B · dt + √(R_+) dW,

where W is a standard Wiener process with mean 0, whose increments are white noise with autocorrelation function ⟨Ẇ(t_1)Ẇ(t_2)⟩ = δ(t_1 - t_2). To describe a chemical reaction network, the Langevin equation for each species collects contributions to the noise from each reaction in which the species is involved. Now consider adding the reverse reaction

C + D -> A + B

with rate constant k_-; then the Langevin equation for species A becomes

d N_A = - k_+ · N_A · N_B · dt + k_- · N_C · N_D · dt + √(R_+) dW_1 + √(R_-) dW_2,

where R_+ = k_+ · N_A · N_B and R_- = k_- · N_C · N_D respectively denote the number rates of the forward and the backward reaction; dW_1 and dW_2 are increments of independent, identically distributed standard Wiener processes. To fully determine the effect of the noise using the Langevin dynamics for a chemical reaction network, one needs to consider all of the reactions involving the species of interest; the noise term usually becomes time-dependent and multiplicative. To simplify the description of internal noise in our phenomenological model of limit cycle / point attractor, we take a first-order approximation in which the diffusion coefficient in the reaction coordinate space is homogeneous in both space and time. (See similar treatments of another biological system in <cit.>. In contrast, our explicit KaiABC simulations, presented later, do not make this simplifying assumption of homogeneous diffusion.) This allows us to write a two-dimensional phenomenological stochastic differential equation

dz⃗ = f(z⃗, t) · dt + √(2D) · dW⃗,

where f(z⃗, t) denotes the deterministic dynamics driven by day-night cycles and the diffusion constant is assumed to be inversely proportional to the total number of KaiC molecules within the cell.

§.§ Population variance

For the cell to carry out a reliable computation, the population variance from the internal noise needs to be reduced.
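A minimal Euler-Maruyama sketch of this balance, with purely illustrative parameter values, is given below; its long-time population variance can be compared with the steady-state value σ^2_st = D/2k derived in the remainder of this subsection:

```python
import numpy as np

rng = np.random.default_rng(0)
D, k = 0.5, 2.0                  # illustrative diffusion constant and well stiffness
dt, n_steps = 1e-3, 20_000
x = np.zeros(2_000)              # a population of clocks in the 1-d well V = k x^2

for _ in range(n_steps):
    # overdamped drift -dV/dx = -2 k x, plus Langevin noise of strength sqrt(2 D)
    x += -2.0 * k * x * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal(x.size)

print(x.var())      # empirical steady-state variance
print(D / (2 * k))  # analytic prediction sigma^2_st = D / (2 k)
```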
Such noise reduction comes from the dynamics of the attractor. In the limit cycle attractor mechanism, the internal noise reduction is performed only along the radial axis but not along the flat attractor direction. In contrast, the point attractor mechanism is able to limit the population variance due to internal noise in all directions, thanks to the effective `curvature' of the dynamics. Here we analytically estimate the steady-state population variance for the point attractor mechanism. The population variance is jointly determined by the diffusive term √(2D) · dW⃗ and the noise reduction effect from the restoring force of the point attractor's harmonic well. During each infinitesimal time δt, the internal noise increases the variance by

σ^2(t + δt) = σ^2(t) + 2D δt.

In contrast, the overdamped deterministic motion within a harmonic well provides a focusing effect that reduces the variance exponentially with time. To quantify this focusing effect, consider the 1-d overdamped dynamics of a particle within a harmonic energy well V(r) = k · r^2. The solution to the equation of motion is r(t) = r_0 · e^-2kt, with initial position r(0) = r_0. For an ensemble of points with mean initial position μ_0 and initial variance σ^2_0, the mean evolves as

μ(t) = μ_0 · e^-2kt

and the variance as

σ^2(t) = σ^2_0 · e^-4kt.

Thus, per infinitesimal time δt, the geometric focusing effect of the energy well of the point attractor reduces the population variance by

σ^2(t + δt) = σ^2(t)/g, where g = e^4kδt.

Under the competition between the spreading effect from the internal noise and the geometric focusing effect from the deterministic dynamics, the population variance reaches a steady value obtained by solving

σ^2_st = (σ^2_st + 2D δt)/g = (σ^2_st + 2D δt)/e^4kδt,

and by taking the limit δt → 0, we have

σ^2_st = D/2k.

§ EXPLICIT KAIABC BIOMOLECULAR MODEL

We derived our results in two distinct ways: (a) using an abstract theory of estimators, (b) using a simplified dynamical systems picture of circadian clocks. Here we illustrate our results in a third independent way, using Gillespie simulations of an explicit biomolecular KaiABC model. This model, based on recent experiments, violates the simplifying assumptions and idealizations made earlier, such as the assumption of circular limit cycles of the same size during the day and night, the Langevin approximation of internal noise with a homogeneous and time-independent diffusion coefficient, and the two-dimensional nature of the dynamical systems. Nevertheless, we find qualitatively similar results, showing that our results rely only on the essential properties of these systems, such as the existence of a continuous attractor.

§.§ S. elongatus clock - hexamers with collective KaiA feedback

The S. elongatus clock has been well-characterized experimentally <cit.> - see Fig.<ref>a. The clock is fundamentally based on the ordered phosphorylation and dephosphorylation of KaiC <cit.>. Phosphorylation of KaiC is KaiA-dependent, which allows for feedback that enables collective coherent oscillations in a cell. After complete phosphorylation of KaiA-C complexes (usually by the end of the day), KaiC forms a KaiB-C complex which then dephosphorylates in an ordered manner. Crucially, the KaiB-C complex also sequesters KaiA in a KaiABC complex, reducing the pool of available KaiA for phosphorylation of other KaiC hexamers. This negative feedback enables coherent oscillations of the population of KaiC molecules in a single cell<cit.>.
§.§ P. marinus model - independent hexamers

P. marinus lacks the kaiA gene but possesses and expresses kaiB and kaiC. While the details of the protein clock are not fully known, gene expression shows cycling in cycling conditions but decays in constant conditions <cit.>. A conservative model, consistent with all these known facts about P. marinus, is shown in Fig.<ref>b; without KaiA feedback, different hexamer units phosphorylate independently and settle into a hyperphosphorylated state at the end of the day. At night, they dephosphorylate along a distinct pathway (homologous to that used by S. elongatus but without KaiA) and reach a hypophosphorylated state by dawn.

Hybrid model
We created the following hybrid model that includes the S. elongatus and P. marinus models as different limits. In our model, shown in Fig.<ref>c, KaiC has a KaiA-dependent phosphorylation pathway, much like in S. elongatus, that is used during the day and driven forward by ATP. But to also include P. marinus-like behavior in the model, we allow for a second, parallel phosphorylation pathway for KaiC that is independent of KaiA. The relative access of these two pathways is controlled by a parameter γ. When γ = 1, only the S. elongatus-like KaiA-dependent pathway is accessible. When γ = 0, only the P. marinus-like KaiA-independent pathway is accessible. Collectively, we call the states along these phosphorylation pathways the UP states of KaiC: phosphorylation goes UP along these pathways, which are usually used during the day. After maximum phosphorylation (usually at dusk), KaiA unbinds (if present) and a KaiB-based dephosphorylation pathway takes over (common to both systems). We call these states the DOWN states of KaiC. Critically, KaiA is assumed to be sequestered through the formation of KaiABC complexes during this dephosphorylation stage. In S. elongatus, reduced KaiA availability prevents other KaiC hexamers from proceeding independently through the UP stage while most of the population is in the DOWN state. Such negative feedback is critical in maintaining free-running limit cycle oscillations in S. elongatus. However, as γ → 0, the KaiA-independent pathway is more active and thus the system effectively has no feedback. In fact, we find that at about γ ≈ 0.82, sustained oscillations disappear (for the kinetic parameters used here and reported below). Hence we chose γ = 1, 0.95, 0 as representative of two limit-cycle-based clocks and one point-attractor-based clock, respectively.

§.§ Gillespie simulations

We ran explicit Gillespie simulations corresponding to the deterministic equations above at different overall copy numbers N, with fixed stoichiometric ratios of the molecules KaiA, KaiB, and KaiC. We simulated external input noise by varying the ATP levels during the day. External noise in these simulations was implemented by changing ATP levels in the following way: we fluctuated the ATP fraction f_ATP = ATP/(ATP+ADP) during the day between f_ATP^day and f_ATP^night + (f_ATP^day - f_ATP^night)/3, where f_ATP^day, f_ATP^night are the ATP values during a cloudless day and night respectively. For each γ, we used day and night ATP levels that ensure that the limit cycles have periods comparable to 24 hours. For γ = 1, we used f_ATP^day = 0.55, f_ATP^night = 0.45. For γ = 0.95, we used f_ATP^day = 0.57, f_ATP^night = 0.17, and for γ = 0, f_ATP^day = 0.8, f_ATP^night = 0.2.
The corresponding limit cycles and point attractors are shown in Fig.<ref>d. We used the following kinetic parameters in all simulations: dt = 0.01 hr, k_+ = k_- = 2m · 0.04932 hr^-1, k_Aon = 0.2466 μM^-1 hr^-1, k_Aoff = 0.02466 hr^-1, k_C→C* = 0.2466 hr^-1, k_C*→C = 0.1 k_C→C*, k_ABC = 123.30 hr^-1, m = 18. We set up KaiC and KaiA in a 1:1 stoichiometric ratio, each present at a copy number N, where N was varied as shown in Fig.5c. These rates are consistent with those measured in <cit.>. Much like with Langevin simulations of dynamical systems, we run the Gillespie simulation until equilibration of the population. However, the system appears to reach the equilibrium state much faster (over only 5 light-dark cycles of 12h:12h). We extracted one day of such a trajectory on day 6 and repeated the simulation 100-400 times: 400 times when the copy number is low (< 1200), since the spread is large and the resulting probability distribution is otherwise not smooth, and only 100 times for high copy numbers (> 1200). Pooling together these trajectories, we computed the mutual information between the clock state (i.e., (u,d), where u is the net phosphorylation state of KaiC in the up-pathways and d is the net phosphorylation state of KaiC in the KaiB-bound `down' pathways in Fig.<ref>c) and the time of day. The (u,d) space was binned using bins of fixed size (0.05, 0.05), while the 24 hr time-of-day was binned with bins of size 0.5 hrs.

§.§ Violation of simplifying assumptions

With these choices of γ, we see in Fig.<ref>d that this model has limit cycles of different sizes during the day and night; these cycles are not circular in any projection. Further, the relaxation time between attractors varies with γ and, in general, differs from the τ_relax used in the simulation of limit cycle attractors in the paper. While we assumed a time- and state-independent diffusion constant to model internal noise in the dynamical system, the strength of fluctuations in the explicit KaiABC model can vary with time, as KaiA is sequestered and released by KaiBC over the course of the day-night cycle. Thus this model violates the simplifying assumptions made in the dynamical systems model. Despite such violations, this explicit KaiABC biomolecular model qualitatively reproduces our dynamical-systems-based results, since the latter rely only on an elementary, coarse feature of the system: the existence of a flat attractor direction that can project out external noise but is then susceptible to internal noise.

§ SUPPLEMENTARY METHODS

§.§ Dynamical system - Simulation details

We simulate two kinds of dynamical systems in this paper: limit cycles and point attractors. In each case, we simulate a population of clocks, each represented by a particle in the given dynamical system, subject to external and/or internal noise. The equations that we use for the simulation are

dr/dt = α r - |α| r^3/R^2, dθ/dt = ω,

where |α| = 1/τ_relax. We use α = 5 for the limit cycle system and α = -5 for the point attractor system. For limit cycles, R controls the size of the attractor. For point attractors, we set R = 1000 L, where L is the separation of the day and night attractors. In such a limit, the point attractors are quadratic potentials with linear restoring forces, since r^3/R^2 is small. The centers of the cycle and point attractors are assumed to be at (-L, 0) during the day and at (0, 0) at night.
We evolve our dynamical system using the fourth-order Runge-Kutta method with time step dt = 0.001 days until the value of the mutual information from one day to the next does not change by more than 2-3%, i.e., the system has reached steady state. Reaching steady state usually takes around 200 days, but if the ratio L/R is smaller than 0.1, then we may need to run the simulation until day 500 to reach an equilibrium (see the speed-error tradeoff in Fig.5). For limit cycles, we initialize the population of 10^4 particles by uniformly distributing them along the perimeter of the night cycle. In the point attractor system, we initialize a population of 10^5 at the night-time point attractor. We use a larger population with point attractors since the particles tend to be distributed over a larger area of the dynamical system. Note that we bin the population by position to compute the mutual information between position in the 2d state space and time. Doing so reliably requires a smooth distribution after binning. For limit cycles, the particles usually stay close to the attractor and thus provide sufficient counts in each bin. However, for the point attractor, the population is usually spread over the entire 2d area between the two point attractors. Therefore, we need 10^5 particles to get an accurate value of the mutual information for the point attractor system.

External signal and weather fluctuations
We generate a square wave of period 24 hours to model the day-night cycle of light on Earth, with a day length of 12 hours. However, such a square wave is modulated by weather fluctuations, e.g., periods of reduced intensity due to passing clouds during the daytime. We model such fluctuating intensity as follows. We assume each weather condition lasts a random interval of time drawn from an exponential distribution of mean 2.4 hrs (1/10 of a day). During a given weather condition, we set the intensity of light to a random value, drawn uniformly from [0,1], where 1 represents the maximum intensity during the day. (At night, the intensity is held at zero with no fluctuations.) When the light intensity is reduced during the day to a value ρ ∈ [0,1], we switch the dynamics to an alternative limit cycle (or point attractor) at a fractional distance ρ between the ideal day and night cycles. For example, assume the night cycle is centered at (0, 0) and the day cycle is centered at (-L, 0). During a weather condition with intensity ρ ∈ [0,1], we follow the dynamics due to a limit cycle located at (-ρL, 0). We follow the same rules for the point attractor. In both cases, the switch of dynamics in response to the changing weather is instantaneous, though the clock state itself is continuous and responds at a finite rate to an instantaneous switch in dynamics. Each individual particle is subject to a different realization of the weather conditions described above.

Internal noise
The internal noise represents any source of stochasticity intrinsic to a single cell that would exist even in constant conditions. Such noise could be due to finite copy numbers of molecules, bursty transcription, etc. We model such internal noise by adding Langevin noise to the dynamical equations, as described in the section on Langevin noise. Each individual particle in our simulation is subject to an independent random realization of such Langevin noise. We then bin the population and compute mutual information by the same procedure as for external signals above.
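For concreteness, here is a condensed sketch of the simulation loop described in this subsection: an RK4 step for the deterministic vector field, a weather-driven shift of the attractor center, and additive Langevin noise. All parameter values and function names are illustrative, and, for brevity, this sketch applies one weather realization to the whole population, whereas the actual simulations draw an independent realization per clock.

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, omega = 5.0, 2 * np.pi  # 1/tau_relax and angular speed (radians per day)
R, L, D = 1.0, 0.1, 1e-4       # cycle radius, day-night separation, internal noise
dt = 0.001                     # time step in days

def rhs(state, center):
    # dr/dt = alpha*r - |alpha|*r^3/R^2, dtheta/dt = omega, written in
    # Cartesian coordinates about the current attractor center.
    x = state[:, 0] - center[0]
    y = state[:, 1] - center[1]
    g = alpha - abs(alpha) * (x**2 + y**2) / R**2
    return np.column_stack([g * x - omega * y, g * y + omega * x])

def rk4_step(state, center):
    k1 = rhs(state, center)
    k2 = rhs(state + 0.5 * dt * k1, center)
    k3 = rhs(state + 0.5 * dt * k2, center)
    k4 = rhs(state + dt * k3, center)
    return state + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6

n = 1000
th = rng.uniform(0, 2 * np.pi, n)
state = R * np.column_stack([np.cos(th), np.sin(th)])  # start on the night cycle
rho, t_next = 0.0, 0.0
for step in range(int(20 / dt)):                # ~20 simulated days (illustrative)
    t = step * dt
    if (t % 1.0) < 0.5:                         # daytime (12 h of light)
        if t >= t_next:                         # draw a new weather condition
            rho = rng.uniform(0.0, 1.0)         # intensity in [0, 1]
            t_next = t + rng.exponential(0.1)   # mean duration 0.1 day (2.4 hrs)
    else:
        rho = 0.0                               # night: intensity held at zero
    state = rk4_step(state, (-rho * L, 0.0))    # attractor center at (-rho L, 0)
    state += np.sqrt(2 * D * dt) * rng.standard_normal(state.shape)
```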
§.§ Measures of clock time-telling quality

We develop and use two distinct measures of performance of noisy clocks driven by noisy inputs.

Mutual information: The performance of the clock is quantified by the mutual information between the clock state c⃗ and the time t,

MI(C; T) = ∑_c⃗∈ C, t ∈ T p(c⃗, t) log_2( p(c⃗, t)/(p(c⃗)p(t)) ),

for all c⃗ in the set of available positions C and all t in the available time bins T. (In the dynamical systems model, c⃗ represents the position in the 2d (r, θ) plane. For the explicit KaiABC biomolecular model, c⃗ represents the phosphorylation state of KaiC.) We simulate a population of clocks, where each clock is subject to a different realization of input signals, representing different weather conditions, and also subject to different realizations of internal Langevin noise (or Gillespie fluctuations). We then collect the trajectories of each clock on the last day of the simulations and calculate the probability distribution p(c⃗|t) of clock states at a given (objective) time t ∈ [0,24] hrs of the last day in the simulation. The probability function p(c⃗) is calculated by accumulating the distribution p(c⃗|t) over time t ∈ [0, 24] hrs of the last day. The position c⃗ and time t are binned into different bins depending on their values. We set the minimum and maximum values of the bins to the minimum and maximum values of the variables. The bin size in the time dimension is 0.48 hrs, or 28.8 minutes, while the bin sizes in the x and y dimensions are both 0.01. We refer to this mutual information measure as `Precision' in Figs.1b, 3f, 5a, 5c.

Population variance along direction of motion: Mutual information is a good indicator of how well the clock encodes information about time. However, it is calculated for the entire day. Often, we want to see how the time-telling ability of a clock changes during the day (e.g., day vs night, or before and after dusk). Hence we develop a new measure, closely related to mutual information, that can be computed at specific times of day. Intuitively, the mutual information quantifies how much the population distributions of clock states at different times t overlap. If these distributions are not overlapping, the clock state is a good readout of the time t. Such distributions are shown in Fig.3b and 4b (purple). We argue that only the spread of the clock distribution along the direction of motion of the clock in state space affects mutual information. The spread of the distribution in orthogonal directions does not affect mutual information as much. To see this, we write the mutual information between clock state c⃗ and time t as

MI(C; T) = H(T) - H(T|C).

Here H(T) is a constant, independent of the clock mechanism. Thus, MI depends entirely on the entropy of the distribution p(t|c) of real times given clock state c, averaged over different clock states,

H(T|C) = ∫ p(c) dc H(T|c) = - ∫ p(c) dc [ ∫ dt p(t|c) log p(t|c) ].

Consider a clock whose state space is two dimensional with a periodic x-axis, as shown in Fig.<ref>. Further, assume that the distribution p(c⃗|t) of clock states at a given time is supported on a rectangle of size a_x × a_y, as shown in Fig.<ref>, and that the clock states move along the x-axis at a uniform velocity u.
This situation implies that

p(t|c) = 0 for |c_x - ut| > a_x, and p(t|c) = u/(2a_x) for |c_x - ut| ≤ a_x.

So,

H(T|C) = - ∫ p(c) dc ∫_t=(c_x - a_x)/u^(c_x + a_x)/u dt (u/2a_x) log(u/2a_x) = log(2a_x/u).

Since MI(C; T) = H(T) - H(T|C), MI depends on -log a_x and is independent of a_y, meaning that only the spread in the direction of motion, a_x, affects the mutual information. Consequently, to understand the quality of time-telling at different times of the day, we project the population variance of p(c⃗|t) onto the direction of the instantaneous velocity of the center of mass of p(c⃗|t). We use this population variance measure in Figs.3c, e, and 4c, d, e.

§.§ Cramer-Rao bounds

Cramer-Rao (CR) bounds quantify the total available information about phase in a given length of history of the signal. Any estimator working with that length of history must necessarily have higher variance (i.e., lower precision) than the Cramer-Rao lower bound corresponding to that length of history. In the limit of infinitely long histories, the CR bound is simply set by the number of bins in time. In our case, this bound is given by log_2 50 = 5.64 bits. As shown in Fig. 4, as L/R → 0, limit cycles process longer and longer histories of the external signal. Consequently, the mutual information for such cycles approaches the CR bound in the limit L/R → 0, as seen in Fig.3f (assuming no internal noise).

§.§ Hopf bifurcation

The normal form of the Hopf bifurcation is given by

ṙ = μ(r - r^3/μ), θ̇ = ω.

We find limit cycles for μ > 0 which undergo a bifurcation at μ = 0, resulting in point attractors at μ < 0. The dynamics through this bifurcation are characterized by just one parameter, μ, which sets both the radius of the limit cycle (R ∼ √(μ)) and the relaxation time τ_relax ∼ 1/μ (i.e., the tightness of the quadratic potential around the continuous attractor). The bifurcation itself occurs at μ = 0. Consequently, there is no way to interpolate between limit cycles (μ > 0) and point attractors (μ < 0) without passing through a region of long relaxation times. Long relaxation times invalidate the models of limit cycles used in this paper; under day-night cycling, limit cycles with long relaxation times lead to orbits that do not visit the attractor at all. That is, the system does not have enough time to relax from the day attractor to the night attractor before the night is over. Consequently, we find that the stable trajectory under cycling conditions is a large orbit that encloses both limit cycles. In such a limit, the continuous attractor of the limit cycle plays no role at all and the limit cycles resemble point attractors. Since we seek to contrast the effect of noise on continuous and point attractors (and not the effect of relaxation times), we keep the relaxation time constant in our interpolation. Thus, we use the parametrization

ṙ = α(r - r^3/μ), θ̇ = ω,

where we have two distinct parameters controlling the radius R ∼ √(μ) and the relaxation time τ_relax ∼ 1/α, the latter of which is held constant. This parameterization does have the downside of being singular when R ∼ √(μ) → 0. Hence we use this parameterization and stay in the regime R/L > 0.5 to avoid the singularity at R = 0. As seen in Figs.3f, 4f, interpolating down to R/L ∼ 0.5 already reveals point-attractor-like behavior.
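The difference between the two parameterizations can be verified with a short radial-only integration (θ decouples from r). In the sketch below, written purely for illustration, the relaxation time grows as μ shrinks in the standard normal form, but stays fixed in the constant-relaxation form used in this paper:

```python
import numpy as np

def relax_time(rdot, r_star, r0=0.5, dt=1e-3, tol=1e-3, t_max=1e4):
    """Forward-Euler time for r(t) to come within tol of the fixed point r_star."""
    r, t = r0, 0.0
    while abs(r - r_star) > tol and t < t_max:
        r += rdot(r) * dt
        t += dt
    return t

alpha = 5.0  # fixed relaxation rate, 1/tau_relax
for mu in [1.0, 0.1, 0.01]:
    r_star = np.sqrt(mu)                                   # attractor radius R ~ sqrt(mu)
    t_normal = relax_time(lambda r: mu * r - r**3, r_star)  # tau ~ 1/mu: grows as mu -> 0
    t_fixed = relax_time(lambda r: alpha * (r - r**3 / mu), r_star)  # tau ~ 1/alpha: constant
    print(f"mu = {mu}: normal form {t_normal:.1f}, fixed-relaxation {t_fixed:.2f}")
```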
http://arxiv.org/abs/1709.09579v1
{ "authors": [ "Weerapat Pittayakanchit", "Zhiyue Lu", "Justin Chew", "Michael J. Rust", "Arvind Murugan" ], "categories": [ "cond-mat.stat-mech", "nlin.AO", "q-bio.CB", "q-bio.MN", "q-bio.SC" ], "primary_category": "cond-mat.stat-mech", "published": "20170927152923", "title": "Continuous attractor-based clocks are unreliable phase estimators" }
[email protected] Institute for Quantum Electronics, ETH Zürich, Auguste-Piccard-Hof 1, 8093 Zürich, SwitzerlandInstitute for Quantum Electronics, ETH Zürich, Auguste-Piccard-Hof 1, 8093 Zürich, SwitzerlandInstitute for Quantum Electronics, ETH Zürich, Auguste-Piccard-Hof 1, 8093 Zürich, SwitzerlandInstitute for Quantum Electronics, ETH Zürich, Auguste-Piccard-Hof 1, 8093 Zürich, SwitzerlandInstitute for Quantum Electronics, ETH Zürich, Auguste-Piccard-Hof 1, 8093 Zürich, SwitzerlandInstitute for Quantum Electronics, ETH Zürich, Auguste-Piccard-Hof 1, 8093 Zürich, SwitzerlandMathematical Physics and NanoLund, Lund University, Box 118, 22100 Lund, SwedenInstitute for Quantum Electronics, ETH Zürich, Auguste-Piccard-Hof 1, 8093 Zürich, Switzerland18 December 2017 19 December 2017 – Appl. Phys. Lett. 112, 021104 (2018)We present a two-quantum well THz intersubband laser operating up to 192 K. The structure has been optimized with a non-equilibrium Green's function model. The result of this optimization was confirmed experimentally by growing, processing and measuring a number of proposed designs. At high temperature (T>200 K), the simulations indicate that lasing fails due to a combination of electron-electron scattering, thermal backfilling, and, most importantly, re-absorption coming from broadened states.Two-well quantum cascade laser optimization by non-equilibrium Green's function modelling J. Faist 15 September 2017 ========================================================================================== Terahertz quantum cascade lasers (QCLs)<cit.> are interesting candidates for a wide variety of potential applications<cit.>. However, to date, their operation is limited to∼200 K<cit.> and the necessity of cryogenic cooling hinders a widespread use of these devices. In the last decade, significant scientific effort has been directed towards identifying the main temperature-degrading mechanisms<cit.>, as well as finding optimized QCL designs<cit.>. The degrading mechanisms include thermal backfilling<cit.>, thermally activated LO phonon emission<cit.>, increased broadening<cit.>, and carrier leakage into continuum states<cit.>. When numerically optimizing a design, it is important to take all of these effects into consideration, in order to ensure a close correspondence between the model and the real device.Combined with the fact that the optimization parameters are typically trade-offs for one another, the task is very complex. Here, typically simpler rate equation or density matrix models are used in order to more quickly sweep the parameter space<cit.>, while more advanced models, such as non-equilibrium Green's functions (NEGF) or Monte-Carlo, are used to validate and analyze the final designs<cit.>. In contrast, in this work we will employ an advanced model directly at the optimization stage. Specifically, we shall use a NEGF model<cit.>, capable of accurately simulating experimental devices<cit.> and including the most general treatment of scattering, from all relevant processes. The goal of the optimization is to achieve the highest possible operating temperature. Thus, the gain of the active medium should be maximized at high lattice temperature, and simultaneously the external losses minimized. The key figures for gain are inversion, oscillator strength, and line width<cit.>. 
These are mainly controlled by the doping density, the energy difference E_ex between the lower laser level ll and the extractor state e, and the widths of the two barriers: the laser and injection barriers. Population inversion increases with doping, although too high a level promotes detrimental effects, such as electron-electron scattering. E_ex, which is chosen to be close to the LO phonon resonance E_LO in order to have a short ll lifetime, and the laser frequency ħω are mainly determined by the well widths. The laser barrier width determines the oscillator strength, which at the same time affects inversion; a more vertical transition, with a larger oscillator strength, yields a lower inversion due to the increased rate of non-radiative transitions from the upper laser level ul. These transitions broaden ul, and consequently also the line width. The injection barrier limits the detrimental injection directly into ll, but also the injection into ul, and thus plays a crucial role for the population inversion. As a starting point, we choose the shortest possible structure, based on two quantum wells per period<cit.>, in order to maximize the gain per unit length; with fewer active states per period, more carriers are expected to concentrate on the upper laser level (ul). In addition, we limit the escape of carriers into continuum states by employing barriers with a high (25%) AlAs concentration<cit.>. An example of a design is shown in Fig. <ref>. The well widths are fixed to have E_ex ≈ E_LO and ħω ≈ 16 meV. The latter is chosen in order to be high enough to limit thermal backfilling, but still below the tail of the TO phonon optical absorption line. In order to limit the negative effects of impurity scattering, the 3 nm wide doping layer is placed in the central region of the widest well, where the lower laser level has its node<cit.>. Then, the barrier widths and the doping concentration were varied to find their optimal values for high gain at elevated temperatures. A variety of structures were evaluated, both by NEGF simulations at 300 K lattice temperature and by manufacturing and characterizing experimental devices. It should be noted that, for high carrier concentrations, electron-electron (e-e) scattering will have a non-negligible impact<cit.>, providing additional thermalization and reduction of the subband lifetimes through second-order processes. This is expected to increase the current density and decrease the gain. Since we cannot fully model the e-e interactions<cit.>, we restrict the doping concentration of the grown devices to an areal doping density of 4.5·10^10 cm^-2 (corresponding to a volume doping density of 1.5·10^17 cm^-3 in the doped 3 nm region, and an average period volume density of ∼ 1.4·10^16 cm^-3), where we expect the effect to be moderate. In Fig. <ref> (a), the simulated gain is shown for a selected set of layer sequences and doping densities. When doubling the doping density, we see an increase of gain from 50 to 70 cm^-1 at 200 K. Even though the effect is much smaller at 300 K, going from 16 to 20 cm^-1, it still provides a significant benefit, since gain drops rapidly with temperature (see Fig. <ref>). In addition, we see that the absorption at higher frequencies gets larger as the population difference between ll and i increases with doping. For even higher doping densities, as shown in Fig. <ref> (b) for the best design with 31 Å injection barrier, the simulated gain is lower at 300 K, and we find an optimal doping density of ∼4.5·10^10 cm^-2.
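The doping optimum described above reflects a competition between inversion, which grows with carrier density, and lifetime/linewidth degradation from e-e scattering. The toy sketch below is not the NEGF calculation used in this work; every functional form and constant in it is invented purely to illustrate how such a trade-off produces a peak in gain versus doping:

```python
import numpy as np

# Toy trade-off: peak gain ~ inversion / linewidth. The saturating inversion and
# the quadratic e-e broadening are assumptions chosen for illustration only.
n = np.linspace(0.5, 12.0, 500)    # sheet doping density (units of 1e10 cm^-2)
inversion = n / (1.0 + n / 8.0)    # inversion saturating at high doping (a.u.)
linewidth = 1.0 + 0.02 * n**2      # broadening growing with e-e scattering (a.u.)
gain = inversion / linewidth

print(n[np.argmax(gain)])          # doping at which the toy gain peaks
```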
The effect of electron-electron scattering indicates a strong reduction of gain, as well as a shift of the peak gain towards lower doping density. Changing the injection barrier width from the nominal value 34 Å to 31 Å, the ul is more efficiently filled from i and gain increases. For even narrower barrier widths, we see again a decrease in the peak gain (not shown), as thermally activated phonon emission dominates at high temperatures. The laser barrier width is less relevant, as the change in oscillator strength has a small influence<cit.> in this parameter range. The highest simulated and measured operating temperature was achieved for the structure called EV2416, shown in Fig. <ref>. Here, we see that the phonon extraction has been complemented by a secondary extraction mechanism; tunnelling into state number 4' with subsequent phonon emission landing on the ul or i state of the next period, as indicated by the calculated phonon scattering rates (see Supplementary material, Table 1). This resonance is also present in previous 2-well structures<cit.>, where it is detrimental since it has significant overlap with continuum states. This is similar to the situation in Ref. albo_temperature-driven_2017, where the lower laser level was partly depopulated into continuum states rather than a bound state. In contrast, with the higher barriers we employ here, this transition can be safely exploited for increasing inversion, as we have verified by comparing the energy resolved current densities of our samples with those of Refs. kumar_two-well_2009 and scalari_broadband_2010 (not shown). We found an optimal oscillator strength of f_osc. = 0.43. This value is significantly higher than the previous two-well design of Ref. scalari_broadband_2010, and compares well to the structures with the best THz temperature performance in the literature<cit.>.For this sample, we also investigate the gain degradation with temperature in detail. The two main inversion degrading mechanisms discussed in literature are thermally activated LO phonon emission and thermal backfilling. The former effect can clearly be seen in the left part of Fig. <ref>, where the states with in-plane energy E_k≈ 16 meV above ul, which is precisely one LO phonon energy below ul', are highly occupied. However, the rate of phonon emission increases by ∼ 20% from 100 K to 300 K, and can only account for a small fraction of the gain degradation. The latter effect can be estimated by comparing the ll population (n_ll) as a function of temperature, with the one expected from thermal transitions from the highly populated levels i and ul. To this end, we show in Fig. <ref> the occupations of the relevant levels indicated in Fig. <ref>, as well as the expected population (n_bf) of ll', from thermal backfilling. This shows, that thermal backfilling is mainly responsible for the reduction of inversion of our structure. Fig. <ref> also shows, that the occupation of level 4 roughly follows n_ll. This indicates that level 4 is depopulating ll. This is also evident in the calculated energy resolved current density (see supplementary material), where the effect is much more clear at 300 K than 100 K, indicating a thermally activated process, in agreement with Ref. albo_temperature-driven_2017.Simultaneously, the simulations display a drop in inversion with temperature by 40% from T_L = 100 K to T_L=300 K, which can only partially explain the gain drop by 80% in the same temperature interval. 
Since the levels i and ul are in resonance, we have defined the inversion as the average population of these levels, minus n_ll. Over the same temperature range, the FWHM of the main gain peak decreases, and thus does not explain the reduction of gain. Possible further sources are the re-absorption by the low-energy tail of the i→ll transition, as well as the transition 4→5. Indeed, the 4→5 transition energy is only 6 meV below the main one, and the width of level 4 increases from ∼6 meV at 100 K to ∼10 meV at 300 K. In addition, the oscillator strength is very high between levels 4 and 5 due to their spatial overlap, thus lowering the maximum operating temperature (T_max). Similarly, the i→ll transition energy is 34 meV and i has a similar width to level 4. While this transition is further separated from the main one, it is much stronger due to the high occupation of level i. Using a simple Fermi's golden rule calculation of the gain, we can parameterize the transition broadening independently. By increasing the transition width of all states from 6 meV to 10 meV, using the populations at 300 K, we find a reduction of 75% of the peak gain. Our findings thus show that the main part of the gain degradation originates from broadened re-absorption. This could partially be mitigated by moving level 5 in energy, e.g., by the use of higher barriers. The NEGF simulations predict gain as high as 20 cm^-1 at room temperature, which does not agree with the experimental findings discussed below. However, the simulations presented above do not include e-e scattering. In order to check the relevance of this scattering mechanism, we include it within a simplified GW approximation<cit.>. This results in a better thermalization of the electron distribution within the subbands, as seen in the right part of Fig. <ref>. The shorter ul lifetime leads to a reduction of the gain, as seen in Fig. <ref>. This indicates that neglecting e-e scattering in our simulations leads to an overestimation of the operating temperature. The model includes interface roughness (IFR) scattering with a Gaussian correlation function with correlation length Λ = 9 nm and height η = 0.1 nm. In order to investigate the sensitivity of the results to these unknown experimental parameters, simulations with η = 0.2 nm were also carried out. As can be seen in Fig. <ref>, this further decreases the gain by 5 (2.5) cm^-1 at 200 K (300 K). In addition, the simulation temperature refers to the phonon occupation number. It is known that the optical phonons are not in equilibrium<cit.> and thus the effective phonon temperature can be tens of K higher than the experimental heat-sink temperature, even for pulsed operation. A selection of structures was characterized experimentally in order to verify the numerical optimization. The designs presented in Fig. <ref> were grown by molecular beam epitaxy and processed into wet-etched Au-Au ridge lasers with varying widths (120-160 μm) and a fixed length of 1 mm. The bottom contact was etched away before the evaporation of the top metal cladding, in order to reduce the losses due to parasitic absorption. The number of periods was chosen to keep the total thickness of the samples the same (8 μm). Fig. <ref> shows the maximum operating temperature achieved vs. the simulated gain at 300 K, which is an indicator of the design optimality. The overall device performance trends from varying barrier thickness and doping density agree with those of the NEGF simulations.
In addition, the current densities of the measured samples with varying doping density show the expected trend of increasing current density with doping. The maximum operating temperatures differ widely, from 117 K to 164 K. Here, we also show the data for the previous 2-well structures, where Ref. kumar_two-well_2009 agrees well with the trend from our samples. However, the structure from Ref. scalari_broadband_2010 seems to be more temperature sensitive than the other samples. We attribute this to the extraction energy E_ex ≈ 30 meV deviating from the optical phonon energy E_LO = 36.7 meV for this structure, while both Ref. kumar_two-well_2009 and our designs have E_ex ≈ E_LO. For 1 mm long Au-Au waveguides, we have simulated the waveguide and mirror losses from a time-domain spectroscopy (TDS) measurement of the transmission of a sample including the top contact of our laser and a 50 nm Au layer. This calculation gives waveguide losses of 30 cm^-1. The mirror losses are calculated to be 4 cm^-1, and thus we estimate a threshold gain of approximately 35 cm^-1 at 200 K. The NEGF simulations predict a lowest T_max of 218 K, with e-e scattering and increased interface roughness included. This is 54 K above the measured T_max. However, it is worth noting that the simulations do not include effects such as Joule heating and non-equilibrium phonons. In addition, we did not consider the absorption of the tail of the TO phonon resonance. Together with gain optimisation, waveguide losses also need to be minimized. To this end, the best sample (EV2416) was also processed with a dry-etched Cu-Cu waveguide, which is expected to have lower losses<cit.>. The LIV characterisation is shown in Fig. <ref> (a). The best device (1 mm long, 140 μm wide) operated up to a temperature of 192 K and showed a high T_0 = 208 K, as shown in Fig. <ref> (b). We also show in Fig. <ref> (a) the simulated current density in the NEGF model, which has been shifted by an assumed potential drop of 3.8 V due to a Schottky contact. The laser spectrum measured at 192 K is shown in Fig. <ref> (c) and the lasing frequency agrees with the simulated gain spectrum. However, the maximum current density is underestimated in the NEGF model, even at high temperature where photo-driven current is negligible. Including e-e scattering in the simulations, we find a maximum current density of 3.3 kA/cm^2 at 200 K. While this agrees better with the experiment, it cannot account for the high experimental current density at 190 K. This indicates that continuum leakage is still present at temperatures where highly excited states become thermally occupied. Together with the high simulated gain and the high T_0, this suggests that the excellent laser performance of the presented design can be further improved. In conclusion, we have optimized 2-well QCLs using a combination of complex numerical simulations and experimental measurements. We find an optimal structure featuring both phonon and resonant tunnelling extraction and injection. The agreement between the experimental and simulated trends highlights the efficacy of our model for the optimization of QCL structures. Together with a Cu-Cu waveguide to reduce optical losses, we have significantly improved the operating temperature of 2-well THz QCLs, close to the overall record temperature. We see potential to further improve the temperature performance of THz QCLs; the doping density, material parameters (such as barrier height), as well as optical losses can be further optimized.
The main gain degradation mechanism at high temperature was found to be temperature broadening of re-absorption transitions, while thermal backfilling is responsible for the reduction of inversion. The effect of electron-electron scattering was found to be significant, reducing the maximum operating temperature by ∼ 40 K. Including this scattering mechanism in more detail may therefore be helpful for further optimization.

§ SUPPLEMENTARY MATERIAL

In order to clearly show the presence of the tunnelling extraction channel, we present the energetically and spatially resolved current densities for varying temperature, as well as the relevant calculated LO phonon scattering rates.

This project has received funding from the European Research Council (ERC) under the project MUSiC. A. W. acknowledges the Swedish Research Council (VR) for financial support. The simulations were partially performed on resources provided by the Swedish National Infrastructure for Computing (SNIC) at LUNARC.

[kohler_terahertz_2002] R. Köhler, A. Tredicucci, F. Beltram, H. E. Beere, E. H. Linfield, A. G. Davies, D. A. Ritchie, R. C. Iotti, and F. Rossi, Nature 417, 156 (2002).
[liang_recent_2017] G. Liang, T. Liu, and Q. J. Wang, IEEE J. Sel. Top. Quantum Electron. 23, 1 (2017).
[williams_terahertz_2007] B. S. Williams, Nat. Photonics 1, 517 (2007).
[fathololoumi_terahertz_2012] S. Fathololoumi, E. Dupont, C. W. I. Chan, Z. R. Wasilewski, S. R. Laframboise, D. Ban, A. Mátyás, C. Jirauschek, Q. Hu, and H. C. Liu, Opt. Express 20, 3866 (2012).
[jirauschek_limiting_2008] C. Jirauschek and P. Lugli, Phys. Status Solidi (c) 5, 221 (2008).
[chassagneux_limiting_2012] Y. Chassagneux, Q. J. Wang, S. P. Khanna, E. Strupiechonski, J.-R. Coudevylle, E. H. Linfield, A. G. Davies, F. Capasso, M. A. Belkin, and R. Colombelli, IEEE Trans. Terahertz Sci. Technol. 2, 83 (2012).
[li_temperature_2009] H. Li, J. C. Cao, Z. Y. Tan, Y. J. Han, X. G. Guo, S. L. Feng, H. Luo, S. R. Laframboise, and H. C. Liu, J. Phys. D: Appl. Phys. 42, 025101 (2009).
[albo_investigating_2015] A. Albo and Q. Hu, Appl. Phys. Lett. 106, 131108 (2015).
[hu_resonant-phonon-assisted_2005] Q. Hu, B. S. Williams, S. Kumar, H. Callebaut, S. Kohen, and J. L. Reno, Semicond. Sci. Technol. 20, S228 (2005).
[kumar_two-well_2009] S. Kumar, C. W. I. Chan, Q. Hu, and J. L. Reno, Appl. Phys. Lett. 95, 141110 (2009).
[wacker_extraction-controlled_2010] A. Wacker, Appl. Phys. Lett. 97, 081105 (2010).
[lin_significant_2011] T. T. Lin, L. Ying, and H. Hirayama, in 2011 International Conference on Infrared, Millimeter, and Terahertz Waves (2011), pp. 1-2.
[dupont_phonon_2012] E. Dupont, S. Fathololoumi, Z. R. Wasilewski, G. Aers, S. R. Laframboise, M. Lindskog, S. G. Razavipour, A. Wacker, D. Ban, and H. C. Liu, J. Appl. Phys. 111, 073111 (2012).
[chan_tall-barrier_2013] C. W. I. Chan, Q. Hu, and J. L. Reno, Appl. Phys. Lett. 103, 151117 (2013).
[lindskog_injection_2013] M. Lindskog, D. O. Winge, and A. Wacker, Proc. SPIE 8846, 884603 (2013).
[vitiello_quantum_2015] M. S. Vitiello, G. Scalari, B. Williams, and P. De Natale, Opt. Express 23, 5167 (2015).
[nelander_temperature_2008] R. Nelander and A. Wacker, Appl. Phys. Lett. 92, 081102 (2008).
[khurgin_inhomogeneous_2008] J. B. Khurgin, Appl. Phys. Lett. 93, 091104 (2008).
[matyas_role_2013] A. Matyas, P. Lugli, and C. Jirauschek, Appl. Phys. Lett. 102, 011101 (2013).
[albo_carrier_2015] A. Albo and Q. Hu, Appl. Phys. Lett. 107, 241101 (2015).
[danicic_optimization_2010] A. Daničić, J. Radovanović, V. Milanović, D. Indjin, and Z. Ikonić, J. Phys. D: Appl. Phys. 43, 045101 (2010).
[dupont_simplified_2010] E. Dupont, S. Fathololoumi, and H. C. Liu, Phys. Rev. B 81, 205311 (2010).
[bismuto_fully_2012] A. Bismuto, R. Terazzi, B. Hinkov, M. Beck, and J. Faist, Appl. Phys. Lett. 101, 021103 (2012).
[yasuda_nonequilibrium_2009] H. Yasuda, T. Kubis, P. Vogl, N. Sekine, I. Hosako, and K. Hirakawa, Appl. Phys. Lett. 94, 151109 (2009).
[matyas_temperature_2010] A. Mátyás, M. A. Belkin, P. Lugli, and C. Jirauschek, Appl. Phys. Lett. 96, 201110 (2010).
[lindskog_comparative_2014] M. Lindskog, J. M. Wolf, V. Trinite, V. Liverini, J. Faist, G. Maisons, M. Carras, R. Aidam, R. Ostendorf, and A. Wacker, Appl. Phys. Lett. 105, 103106 (2014).
[wacker_nonequilibrium_2013] A. Wacker, M. Lindskog, and D. Winge, IEEE J. Sel. Top. Quantum Electron. 19, 1200611 (2013).
[winge_simulating_2016] D. O. Winge, M. Franckié, and A. Wacker, J. Appl. Phys. 120, 114302 (2016).
[faist_quantum_2013] J. Faist, Quantum Cascade Lasers, 1st ed. (Oxford University Press, Oxford, 2013).
[scalari_broadband_2010] G. Scalari, M. I. Amanti, C. Walther, R. Terazzi, M. Beck, and J. Faist, Opt. Express 18, 8043 (2010).
[hyldgaard_electron-electron_1996] P. Hyldgaard and J. W. Wilkins, Phys. Rev. B 53, 6889 (1996).
[harrison_relative_1998] P. Harrison and R. W. Kelsall, Solid-State Electron. 42, 1449 (1998).
[manenti_monte_2003] M. Manenti, F. Compagnone, A. Di Carlo, and P. Lugli, J. Comput. Electron. 2, 433 (2003).
[bonno_modeling_2005] O. Bonno, J.-L. Thobel, and F. Dessenne, J. Appl. Phys. 97, 043702 (2005).
[wang_transient_2017] F. Wang, X. G. Guo, and J. C. Cao, Appl. Phys. Lett. 110, 103505 (2017).
[winge_simple_2016] D. O. Winge, M. Franckié, C. Verdozzi, A. Wacker, and M. F. Pereira, J. Phys.: Conf. Ser. 696, 012013 (2016).
[albo_temperature-driven_2017] A. Albo and Y. V. Flores, IEEE J. Quantum Electron. 53, 1 (2017).
[fathololoumi_effect_2013] S. Fathololoumi, E. Dupont, Z. R. Wasilewski, C. W. I. Chan, S. G. Razavipour, S. R. Laframboise, S. Huang, Q. Hu, D. Ban, and H. C. Liu, J. Appl. Phys. 113, 113109 (2013).
[vitiello_non-equilibrium_2012] M. S. Vitiello, R. C. Iotti, F. Rossi, L. Mahler, A. Tredicucci, H. E. Beere, D. A. Ritchie, Q. Hu, and G. Scamarcio, Appl. Phys. Lett. 100, 091101 (2012).
[belkin_terahertz_2008] M. A. Belkin, J. A. Fan, S. Hormoz, F. Capasso, S. P. Khanna, M. Lachab, A. G. Davies, and E. H. Linfield, Opt. Express 16, 3242 (2008).
http://arxiv.org/abs/1709.09563v2
{ "authors": [ "M. Franckié", "L. Bosco", "M. Beck", "C. Bonzon", "E. Mavrona", "G. Scalari", "A. Wacker", "J. Faist" ], "categories": [ "physics.app-ph" ], "primary_category": "physics.app-ph", "published": "20170927145207", "title": "Two-well quantum cascade laser optimization by non-equilibrium Green's function modelling" }
Satadru Bag, Swagat S. Mishra, and Varun Sahni
Inter-University Centre for Astronomy and Astrophysics, Post Bag 4, Ganeshkhind, Pune 411 007, India
[email protected], [email protected], [email protected]

We describe a new class of dark energy (DE) models which behave like cosmological trackers at early times. These models are based on the α-attractor set of potentials, originally discussed in the context of inflation. The new models allow the current acceleration of the universe to be reached from a wide class of initial conditions. Prominent examples of this class of models are the potentials coth and cosh. A remarkable feature of this new class of models is that they lead to large enough negative values of the equation of state at the present epoch, consistent with the observations of the accelerated expansion of the universe, from a very large initial basin of attraction. They therefore avoid the fine tuning problem which afflicts many models of DE.

New tracker models of dark energy

§ INTRODUCTION

A remarkable property of our universe is that it appears to be accelerating. Within the context of Einstein's theory of general relativity, cosmic acceleration can arise if at least one of the constituents of the universe violates the strong energy condition ρ + 3p ≥ 0. Physical models with this property are frequently referred to as `dark energy' (DE). Although several models of DE have been advanced in the literature, perhaps the simplest remains Einstein's original idea of the cosmological constant, Λ. As its name suggests, the energy density associated with the cosmological constant, Λ/8πG, and its equation of state, w = -1, remain the same at all cosmological epochs. Although w = -1 satisfies current observations very well, the non-evolving nature of Λ implies an enormous difference between its density and that in matter or radiation at early times. For instance ρ_Λ/ρ_r ∼ 10^-58 at the time of the electroweak phase transition; at earlier times this ratio is still smaller. This `imbalance' between the non-evolving and small value of Λ on the one hand, and the evolving density in matter/radiation on the other, has fueled interest in models in which, like matter/radiation, DE also evolves with time <cit.>. In this context, considerable attention has been focused on models with `tracker' properties which enable the present value of the DE density to be reached from a wide range of initial conditions. This class of models appears to alleviate the so-called `fine-tuning' (or `initial value') problem which characterizes Λ <cit.>. A scalar field with the inverse power-law (IPL) potential V ∝ φ^-α (α > 0) presents one of the oldest and best studied examples of this class of models <cit.>. Unfortunately the IPL model cannot account for the large negative values of w_DE at the present epoch consistent with the observations <cit.> while at the same time preserving a large initial basin of attraction <cit.>. In this paper we describe a new class of DE models based on the α-attractors. A compelling feature of these new models is that they have a very wide basin of attraction which allows the late time asymptote w = -1 to be reached from a large class of initial conditions.
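The tracker behaviour invoked above can be made concrete with a minimal integration of the scalar field equation in a fixed radiation background. The sketch below, with m_p = 1 units and deliberately generic (illustrative) initial data, uses the IPL potential and recovers the tracker equation of state w_φ = (p w_B - 2)/(p + 2) quoted later in the text:

```python
import numpy as np

# Scalar field in a radiation-dominated background: H = 1/(2t), w_B = 1/3.
# IPL potential V = M**(4+p) * phi**(-p); units m_p = 1, all values illustrative.
p, M = 2.0, 1.0
V = lambda phi: M**(4 + p) * phi**(-p)
dV = lambda phi: -p * M**(4 + p) * phi**(-p - 1)

t, dt = 1.0, 1e-3
phi, dphi = 5.0, 0.0          # a deliberately 'generic' initial condition
for _ in range(2_000_000):    # evolve deep into the radiation era
    H = 1.0 / (2.0 * t)
    dphi += (-3.0 * H * dphi - dV(phi)) * dt   # Klein-Gordon equation
    phi += dphi * dt
    t += dt

w_phi = (0.5 * dphi**2 - V(phi)) / (0.5 * dphi**2 + V(phi))
print(w_phi)                          # settles onto the tracker value
print((p / 3.0 - 2.0) / (p + 2.0))    # (p*w_B - 2)/(p + 2) = -1/3 for p = 2
```

The same integrator applies to the α-attractor potentials introduced below simply by swapping V(φ).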
A total of four different DE models are described in this paper. Each of these models has very distinctive features which are reflected in the evolution of w_φ(z) and its first derivative, w' = dw_φ/d ln a. An interesting property of these models is that their current equation of state (EOS) can drop below -0.9, providing good agreement with present observational bounds. Our results lead us to conclude that tracker models of DE could be very relevant for the understanding of cosmic acceleration.

The plan of our paper is as follows: The α-attractor family of potentials is briefly discussed in section <ref>. Section <ref> contains our main results and provides an analysis of the four new models of tracker dark energy. A summary of our results is presented in section <ref>.

§ CONFORMAL INFLATION AND α-ATTRACTORS

Kallosh & Linde recently discovered an interesting new family of potentials which could give rise to successful inflation <cit.>. They noted that the Lagrangian

L = √(-g) [ (1/2) ∂_μ χ ∂^μ χ + (χ²/12) R(g) - (1/2) ∂_μ ϕ ∂^μ ϕ - (ϕ²/12) R(g) - (λ̃/4)(ϕ² - χ²)² ] ,

where λ̃ is a dimensionless parameter and χ, ϕ are scalar fields, is invariant under the O(1,1) group of transformations in the (χ, ϕ) space and also under the group of local conformal transformations. Fixing the local conformal gauge to

χ² - ϕ² = 6 m_p² ,

the Lagrangian in (<ref>) can be parameterized by

χ = √6 m_p cosh[φ/(√6 m_p)] ,  ϕ = √6 m_p sinh[φ/(√6 m_p)] ,

so that

ϕ/χ = tanh[φ/(√6 m_p)] .

Consequently (<ref>) reduces to

L = √(-g) [ (m_p²/2) R - (1/2) ∂_μ φ ∂^μ φ - Λ m_p² ] ,

which describes general relativity with the cosmological constant Λ = 9λ̃ m_p² (here m_p = 1/√(8π G) ≈ 2.4 × 10^18 GeV). A conformally invariant generalization of (<ref>) and (<ref>), Λ → F(ϕ/χ) m_p², with an arbitrary function F that deforms the O(1,1) symmetry, results in the Lagrangian

L = √(-g) [ (m_p²/2) R - (1/2) ∂_μ φ ∂^μ φ - V(φ) ]

with the scalar-field potential

V(φ) = m_p⁴ F( tanh[φ/(√6 m_p)] ) .

Different canonical potentials V(φ) were discussed in <cit.> in the context of inflation, while <cit.> introduced the α-attractor family of potentials following the prescription[The parameter α can be related to the curvature of the superconformal Kähler metric <cit.>.]

V(φ) → V(φ/√α) .

An attractive feature of the α-attractors is that they are able to parameterize a wide variety of inflationary scenarios within a common setting. In <cit.> it was shown that, in addition to defining inflationary models, the α-attractors were also able to source oscillatory models of dark matter and dark energy[DE from α-attractors has also been discussed in <cit.>, although not in the tracker context.]. In this paper we extend the study of <cit.> by showing that the α-attractors can give rise to new models of tracker-DE in which the equation of state (EOS) approaches the late-time value w ≃ -1 from a wide class of initial conditions. In this paper our focus will be on the α-attractor family of potentials characterized by

V(φ) = m_p⁴ F( tanh[φ/(√(6α) m_p)] ) .

Accordingly, our dark energy models are based on the following potentials, all of which have interesting tracker properties (a short numerical sketch of the four forms follows the list):

* The L-model
V(φ) = V_0 coth(λφ/m_p) , equivalently V(φ) = V_0 [tanh(λφ/m_p)]^-1 .

* The Oscillatory tracker model
V(φ) = V_0 cosh(λφ/m_p) , equivalently V(φ) = V_0 [1 - tanh²(λφ/m_p)]^-1/2 .

* The Recliner model
V(φ) = V_0 [1 + exp(-λφ/m_p)] , equivalently V(φ) = V_0 [1 + tanh(λφ/2m_p)]^-1 .

* The Margarita potential
V(φ) = V_0 tanh²(λ_1φ/m_p) cosh(λ_2φ/m_p) , where λ_1 ≫ λ_2 .
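To make the four functional forms above concrete, the following minimal Python sketch evaluates them directly. The normalization V_0 and the λ values are illustrative placeholders (in practice V_0 is tuned so that Ω_DE ≃ 0.7 today), and we work in units m_p = 1; this is not a fit to data, only a transcription of the potentials.

```python
import numpy as np

V0 = 1.0   # overall scale; illustrative, tuned in practice to give Omega_DE ~ 0.7

def V_L(phi, lam=1.0):
    # L-model: V0*coth(lam*phi) -> V0/(lam*phi) for small phi, -> V0 for large phi
    return V0 / np.tanh(lam * phi)

def V_osc(phi, lam=5.0):
    # Oscillatory tracker: V0*cosh(lam*phi) -> (V0/2)*exp(lam*|phi|) for large |phi|
    return V0 * np.cosh(lam * phi)

def V_rec(phi, lam=5.0):
    # Recliner: V0*(1 + exp(-lam*phi)) -> V0 as phi -> +infinity
    return V0 * (1.0 + np.exp(-lam * phi))

def V_marg(phi, lam1=50.0, lam2=5.0):
    # Margarita: tracker wings plus a sharp quadratic well near phi = 0
    return V0 * np.tanh(lam1 * phi)**2 * np.cosh(lam2 * phi)
```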
The tracker parameter λ in our models is related to the parameter α in (<ref>) by

λ = √(1/(6α)) .

The reader might like to note that in terms of the variable x = tanh[φ/(√(6α) m_p)], the original α-attractor based T-model of inflation <cit.> is simply F(x) = x in (<ref>), while our L-model is F(x) = 1/x. The functional form of F(x) for our remaining three DE models is somewhat more complicated: F(x) = (1-x²)^-1/2 for the oscillatory tracker (<ref>); F(x) = 1/(1+x) for the recliner potential (<ref>); and finally F(x) = x²/√(1-x²) for the Margarita potential (<ref>). Therefore, from the α-attractor perspective, the L-model (<ref>) appears to be the most appealing of the four DE models introduced by us. One should also point out that, in addition to the above four models, hyperbolic potentials have also been used in connection with the following models of dark energy.

* DE with a constant equation of state -1 < w < 0 is described by <cit.>

V(φ) = [3H_0²(1-w)(1-Ω_0m)^{1/|w|} / (16π G)] Ω_0m^α sinh^{-2α}[ |w| √(6π G/(1+w)) (φ - φ_0 + φ_1) ] ,

where

α = (1+w)/|w| ,  φ_0 = φ(t_0) ,  φ_1 = √((1+w)/(6π G)) (1/|w|) ln[ (1+√(1-Ω_0m))/√(Ω_0m) ] .

* The Chaplygin gas with p = -A/ρ can be described by the scalar field potential <cit.>

V(φ) = (√A/2) [ cosh(2√(6π G) φ) + 1/cosh(2√(6π G) φ) ] .

Note that the Chaplygin gas can also be modelled using a scalar field with the Born-Infeld kinetic term <cit.>.

It is interesting that all four of the dark energy models introduced in our paper possess distinct features which allow them to be distinguished from each other at late times. We shall elaborate on these models in the next section.

§ TRACKER MODELS OF DARK ENERGY

§.§ L-model

Consider first the L-model (<ref>) and its natural extension[We refer to this model as the 'L-model' since V(φ) has a characteristic L shape for large values of λ and p, as shown in figure <ref>.]

V(φ) = V_0 coth^p(λφ/m_p) .

For small values of the argument, 0 < λφ/m_p ≪ 1, one finds

V ≃ V_0/(λφ/m_p)^p ,

which suggests that the early time behaviour of this model is very similar to that of the IPL model for which, at early times <cit.>,

w_φ = (p w_B - 2)/(p + 2) ,

where w_B is the background equation of state of matter/radiation. The IPL model (<ref>) therefore has the appealing property that, for large values of p ≫ 1, its EOS can track the EOS of the dominant matter component in the universe. Unfortunately it is also well known that, for Ω_0m ≥ 0.2, the IPL model (<ref>) with p > 1 cannot give rise to w_0 < -0.8 at the present epoch <cit.>. This may be viewed as a significant shortfall of this model since observations appear to suggest that the current EOS of dark energy should satisfy w_0 ≤ -0.9 <cit.>. Of course this problem can be bypassed if one assumes a smaller value p < 1 for the exponent in (<ref>). However in this case the initial basin of attraction shrinks considerably, which diminishes the appeal of the IPL model.

In contrast to the IPL model, the L-potential has the following asymptote for λφ/m_p ≫ 1:

V(φ) ≃ V_0 ,

indicating that the L-potential flattens and begins to behave like a cosmological constant at late times. Because of this the present value of the EOS in the L-model can be significantly lower[Another means of lowering w_0 is by coupling φ to the Ricci scalar, as shown in <cit.>.] than that in the IPL model (<ref>). The behavior of the L-potential (<ref>) is illustrated in Fig.
<ref>. The evolution of the scalar field energy density has been determined by solving the following system of equations relating to a spatially flat Friedmann–Robertson–Walker (FRW) universe:

H² = (8π G/3)(ρ_m + ρ_r + ρ_φ) ,
φ̈ + 3Hφ̇ + V'(φ) = 0 ,

where ρ_m (ρ_r) is the density of matter (radiation), and the density and pressure of the scalar field are

ρ_φ = (1/2)φ̇² + V(φ) ,  p_φ = (1/2)φ̇² - V(φ) .

As expected, the early time tracking phase in our model (<ref>) – illustrated by figure <ref> – is identical to the tracking phase of the corresponding IPL potential V ∝ 1/φ^p. Figure <ref> shows the evolution of w_φ(z) = p_φ/ρ_φ in the L-potential V = V_0 coth^p(φ/m_p) and in the IPL potential V = V_0 (m_p/φ)^p. Note that both potentials have precisely two free parameters: V_0 and p. Fig. <ref> draws attention to the interesting fact that, for the L-model with V ∼ coth⁶(φ/m_p), the current value of w_φ can be as low as w_φ ∼ -0.8, which is considerably lower than the corresponding value w_φ ∼ -0.4 for V ∼ φ^-6. In other words, for identical values of p, the late-time value of w_φ in the L-model (<ref>) is significantly lower than that in the IPL model (<ref>). The value of w_φ can be further lowered by increasing the value of λ in (<ref>), as shown in figure <ref>. The black star on the right y-axis of figure <ref> indicates the observational 2σ upper bound, w_0 ≤ -0.9, for DE models with a slowly varying EOS[The current 2σ upper bound on the EOS varies between -0.8 and -0.9, depending upon the data sets employed and the method of reconstruction <cit.>. We assume the conservative bound w_0 ≤ -0.9 to highlight the fact that the EOS in our models can drop to sufficiently low values at late times.] <cit.>.

Figure <ref> shows the phase-space trajectories of the equation of state {w_φ, w_φ'} starting from the matter dominated epoch. Here <cit.>

w' ≡ dw_φ/d ln a = ẇ_φ/H .

Note that all trajectories approach the ΛCDM limit (w_φ = -1, w_φ' = 0) at late times. The present epoch is marked by a circle on each trajectory. Comparing the L-model (<ref>) with the IPL potential (<ref>) we find that the current EOS in the former is always more negative than that in the latter, w_0^L < w_0^IPL, which supports our earlier results in fig. <ref>. Setting λ = 1 in (<ref>) and increasing the value of p leads to w_φ' decreasing while w_φ increases. On the other hand, increasing λ (with p held fixed) leads to the opposite behaviour, namely w_φ decreases whereas w_φ' increases. Note that for moderately large values of λ and p (λ, p ∼ few) the L-model (<ref>) will have a large initial basin of attraction before converging to w_φ ∼ -1, w_φ' ∼ 0 by the present epoch.

§.§ Oscillatory tracker model

Next we turn our attention to the oscillatory tracker model (<ref>), namely

V(φ) = V_0 cosh(λφ/m_p) .

For large values λ|φ|/m_p ≫ 1, this potential has the asymptotic form

V ≃ (V_0/2) exp(λ|φ|/m_p) .

The exponential potential has been extensively studied in <cit.>. In the context of a spatially flat FRW universe it is well known that for λ² > 3(1+w_B) the late time attractor in this model has the same equation of state as the background fluid, namely w_φ = w_B.
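This scaling behaviour can be checked directly by integrating the evolution equations given in the previous subsection. The minimal sketch below works in units 8πG = m_p = 1, keeps only a matter background for brevity, evolves the exponential asymptote of the potential in e-folds N = ln a, and uses illustrative initial data; on the scaling attractor w_φ approaches the background value w_B = 0.

```python
import numpy as np
from scipy.integrate import solve_ivp

lam, V0 = 5.0, 1.0                        # exponential asymptote V = V0*exp(-lam*phi)
V  = lambda phi: V0 * np.exp(-lam * phi)
dV = lambda phi: -lam * V0 * np.exp(-lam * phi)

def rhs(N, y):
    # y = (phi, dphi/dN); matter background rho_m = rho_m0 * exp(-3N)
    phi, phiN = y
    rho_m = np.exp(-3.0 * N)
    H2 = (rho_m + V(phi)) / (3.0 - 0.5 * phiN**2)     # Friedmann constraint
    dlnH = -(rho_m + H2 * phiN**2) / (2.0 * H2)       # H'/H from the Raychaudhuri eq.
    return [phiN, -(3.0 + dlnH) * phiN - dV(phi) / H2]

sol = solve_ivp(rhs, [0.0, 30.0], [1.0, 0.0], rtol=1e-9)
phi, phiN = sol.y[:, -1]
H2 = (np.exp(-3.0 * sol.t[-1]) + V(phi)) / (3.0 - 0.5 * phiN**2)
kin, pot = 0.5 * H2 * phiN**2, V(phi)
print("w_phi   ->", (kin - pot) / (kin + pot))        # ~ 0: tracks the matter EOS
print("Omega   ->", (kin + pot) / (3.0 * H2), "  vs 3/lam^2 =", 3.0 / lam**2)
```

The printed fractional density reproduces the analytic attractor value quoted in the text immediately below.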
The associated fractional density of the scalar field is

Ω_φ = 3(1+w_B)/λ² ,

with nucleosynthesis constraints imposing the lower bound λ ≳ 5 <cit.>, while the CMB constraints impose an even stronger lower bound λ ≥ 13 <cit.>. For small values, λ|φ|/m_p ≪ 1, the potential (<ref>) has the limiting form

V(φ) ≃ V_0 [1 + (1/2)(λφ/m_p)²] .

We see that, as in the case of (<ref>), the late time asymptote for V(φ) is once again the cosmological constant V_0. However the presence of φ² in (<ref>) suggests that the late time approach of w_φ to -1 will be oscillatory. This has been illustrated in figures <ref>, <ref> and <ref> which show w_φ(z) and {w_φ, w_φ'} for different values of λ. From figure <ref>, it is clear that w_φ = 1/3, 0 during the radiation and matter domination epochs respectively. However, at late times the scalar field begins to oscillate around the minimum of its potential (<ref>). Since these oscillations are of decreasing amplitude, w_φ asymptotically approaches w_φ = -1 at late times. Interestingly, for moderate values 5 ≤ λ ≤ 10, the present value of w_φ can lie anywhere between -1 and -0.9, its precise value being determined by the phase of the oscillation. However for larger values λ > 10, the scalar field completes several oscillations prior to the present epoch. Since the mean value of φ(t) in (<ref>) falls off as ⟨φ²(t)⟩^{1/2} ∝ a^{-3/2}(t), it follows that in such models w_φ ≃ -1 today. This has been illustrated in fig. <ref> and especially in fig. <ref>. The following expression describes the EOS of dark energy during the oscillatory epoch:

w_φ(t) ≃ -1 + λ² (φ_m(t)/m_p)² [1 - (φ(t)/φ_m(t))²] ,

where φ_m(t) is the peak oscillation amplitude whose value steadily decreases with time. Eq. (<ref>) can be rewritten as

w_φ(t) ≃ -1 + φ̇²(t)/V_0 ,

where the steady decline of φ̇²(t) with time ensures that w_φ → -1 at late times. The previous analysis is substantiated by figure <ref> which shows the evolution of the phase-space {w_φ, w_φ'} for λ = 20, 30 with filled black circles marking the present epoch (Ω_0m = 0.3). The substantial difference in w_φ(z) for 5 ≤ λ ≤ 30 and 0 ≤ z ≤ 3 (see fig. <ref>) may allow such models to be differentiated from one another on the basis of the high quality data expected from dark energy surveys such as DES, Euclid and SKA. Due to the presence of the exponential tracker asymptote (<ref>), the oscillatory tracker model (<ref>) has a very large initial basin of attraction, trajectories from which get funneled into the late time attractor w_φ ≃ -1. Our results, summarized in figure <ref>, demonstrate that initial density values covering a range of more than 40 orders of magnitude at z = 10^12 converge onto the attractor scaling solution represented by the solid red curve. This range substantially increases if we set our initial conditions at earlier times. For instance, upon setting {φ_i, φ̇_i} at the GUT scale of 10^14 GeV (z ∼ 10^26), the range of initial density values that converges to Ω_0,DE ≃ 0.7 spans an impressive 100 orders of magnitude! The oscillatory tracker potential therefore exhibits a very large degree of freedom in the choice of initial conditions. In particular it permits the possibility of equipartition, according to which the density of dark energy and radiation may have been comparable at very early times just after reheating.
In our view this is a very compelling property of this DE model.[See <cit.> for a dynamical analysis of models based on similar potentials.]

In passing it may be appropriate to point out that the CPL ansatz <cit.>

w(a) = w_0 + w_1(1-a) = w_0 + w_1 z/(1+z) ,

which is frequently used to reconstruct the properties of DE from observations, may be unable to accommodate the oscillatory behaviour of w_φ which characterizes this model. Non-parametric reconstruction is likely to work better for this class of potentials <cit.>. Note that the companion potential to (<ref>),

V(φ) = V_0 sinh²(λ̃φ/m_p) ,

describes a tracker model of dark matter for V_0λ̃²/m_p² ≫ H_0² <cit.>. It is therefore interesting that, when taken together, the pair of α-attractor potentials (<ref>) and (<ref>) with λ̃ ≫ λ can describe tracker models of both dark matter and dark energy!

§.§ The Recliner model

The recliner potential[The Recliner potential (<ref>) presents a limiting case of the family of potentials studied in <cit.>; also see <cit.>.]

V(φ) = V_0 [1 + exp(-λφ/m_p)]

possesses the asymptotic form V ≃ V_0 exp(-λφ/m_p) for λ|φ| ≫ m_p (φ < 0). This endows it with a large initial basin of attraction, due to which scalar field trajectories rolling down (<ref>) approach a common evolutionary path from a wide range of initial conditions. In the large φ limit, λφ ≫ m_p, V(φ) monotonically declines to V(φ) ≃ V_0. Therefore one expects the value of w_φ to approach w_φ → -1 at late times, without any intermediate oscillations. This behaviour is substantiated by figure <ref>. The fact that the current value of w_φ can fall below -0.9 makes this model quite appealing, since it can describe cosmic acceleration without the fine tuning of initial conditions. The evolution of w_φ(z) near the present epoch is shown in figure <ref>. One finds that different values of λ in (<ref>) can clearly be distinguished on the basis of low redshift measurements of w_φ(z). Consequently upcoming dark energy surveys (DES, Euclid, SKA, etc.) may provide a unique opportunity to set bounds on (or even determine) the value of λ in (<ref>).

Phase-space trajectories {w_φ, w_φ'} for the Recliner potential (<ref>) are illustrated in figure <ref>. Note that larger values of λ in (<ref>) result in smaller values of w_φ and larger values of w_φ' at late times. It is interesting that the Oscillatory tracker potential (<ref>) and the Recliner potential (<ref>) have two free parameters each, V_0 and λ, precisely the same number as in the IPL model V = V_0 φ^-p and the tracker V = V_0 exp(-λφ/m_p).

§.§ Transient dark energy from the Margarita potential

The Margarita potential describes a model of transient dark energy:

V(φ) = V_0 tanh²(λ_1φ/m_p) cosh(λ_2φ/m_p) ,  λ_1 ≫ λ_2 .

This potential has tracker-like wings and a flat intermediate region (see figure <ref>). It exhibits three asymptotic branches:

V(φ) ≃ (V_0/2) exp(λ_2|φ|/m_p) ,  |φ|/m_p ≫ 1/λ_2 ,
V(φ) ≃ V_0 + (1/2) m_2² φ² ,  1/λ_1 ≪ |φ|/m_p ≪ 1/λ_2 ,
V(φ) ≃ (1/2) m_1² φ² ,  |φ|/m_p ≪ 1/λ_1 ,

where m_1² = 2V_0λ_1²/m_p² and m_2² = V_0λ_2²/m_p², with m_1 ≫ m_2 (a short numerical check of these branches is given below). As illustrated by the red curve in figure <ref>, the acceleration of the universe in this model is a transient phenomenon. It ends once the scalar field rolls to the minimum of V(φ). From that point on the scalar field begins to oscillate and behave like dark matter. Consequently the universe reverts to matter dominated expansion after the current accelerating epoch is over, with an extra contribution to dark matter coming from the coherently oscillating scalar field.
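As a quick numerical cross-check of the three branches above, the minimal sketch below compares the exact Margarita potential with each limiting form; the values λ_1 = 50 and λ_2 = 5 are illustrative choices satisfying λ_1 ≫ λ_2, in units m_p = 1.

```python
import numpy as np

V0, lam1, lam2 = 1.0, 50.0, 5.0
m1sq, m2sq = 2.0 * V0 * lam1**2, V0 * lam2**2      # branch masses, units m_p = 1

V = lambda phi: V0 * np.tanh(lam1 * phi)**2 * np.cosh(lam2 * phi)

tests = [
    (1e-3, 0.5 * m1sq * 1e-3**2,          "|phi| << 1/lam1          "),
    (0.1,  V0 + 0.5 * m2sq * 0.1**2,      "1/lam1 << |phi| << 1/lam2"),
    (2.0,  0.5 * V0 * np.exp(lam2 * 2.0), "|phi| >> 1/lam2          "),
]
for phi, approx, regime in tests:
    print(f"{regime}  exact = {V(phi):.6g}   approx = {approx:.6g}")
```

In each regime the exact potential and the corresponding asymptotic branch agree to the expected accuracy, confirming the hierarchy of scales used in the discussion that follows.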
As suggested by (<ref>)–(<ref>), the motion of the scalar field proceeds along three distinct stages, each of which is reflected in the cosmic expansion history. (i) Initially φ(t) rolls down the exponential potential (<ref>). During this phase the scalar field density scales like the background fluid (radiation/matter) driving the expansion of the universe. (ii) After the tracking phase is over, the scalar field oscillates around the flat wing of the potential shown in figure <ref>. During this phase the universe begins to accelerate, as demonstrated in figure <ref>. (iii) Finally, at late times, the scalar field gets trapped within the sharp oscillatory region of the potential (<ref>); see figure <ref>. Oscillations of the scalar field during this stage make it behave like pressureless matter with ⟨w_φ⟩ = 0; see the red line in figures <ref> and <ref>. We therefore find that cosmic acceleration is sandwiched between two matter dominated epochs. The duration of the accelerating phase depends upon the gap between λ_1 and λ_2. Nucleosynthesis constraints limit λ_2 ≥ 5, whereas the only constraint on λ_1 comes from the inequality λ_1 ≫ λ_2.

At this point we would like to draw attention to a key feature of the Margarita potential which distinguishes this model of transient acceleration from others of its kind. Note that the asymptotic form of the potential within the flat wing, described by (<ref>), bears close resemblance to the potential near φ ≃ 0 for the oscillatory tracker model, namely (<ref>). Therefore, as in that model, one might expect w_φ to approach -1 at late times via small oscillations. This would indeed be the case were it not for the presence of the sharp oscillatory region near φ ≃ 0 in fig. <ref>. This region modifies the behaviour of φ significantly. As φ traverses φ ≃ 0, its EOS abruptly changes from negative to positive values. This leads to a spike in the value of w_φ and in the deceleration parameter q. An accelerating universe punctuated by periods of sudden deceleration therefore becomes a key feature of the Margarita model of DE, as shown in figures <ref> & <ref>.

It is well known that, unlike ΛCDM, a transiently accelerating universe does not possess a future event horizon. Moreover the presence of even a tiny curvature term, k/a², can cause such a universe to stop expanding and begin to contract, giving rise to a very different cosmological future from ΛCDM. Note that while models of transient DE have been discussed earlier, see for instance <cit.>, to the best of our knowledge none of these early models had a tracker-like commencement.

Finally, the reader may be interested in a companion potential to (<ref>) which provides another example of transient dark energy with tracker-like behaviour at early times:

V(φ) = V_0 (1 - e^{-(λ_1φ/m_p)²}) cosh(λ_2φ/m_p) ,  λ_1 ≫ λ_2 .

§ DISCUSSION

In this paper we discuss four new models of dark energy based on the α-attractor family. In all of these models the present value of the equation of state can fall below -0.9, in agreement with recent observations. This does not come at the expense of finely tuned initial conditions, since all of the four models display tracker-like behaviour at early times. The initial attractor basin is largest for the Oscillatory tracker model (<ref>), the Recliner potential (<ref>) and the Margarita potential (<ref>), in all of which V ∝ e^{λ|φ|/m_p} at early times.
The fourth model, which is described by the L-potential (<ref>), has exactly the same basin of attraction as the inverse power law potential V ∝ φ^-p. It is interesting that all of these models display distinct late time features which allow them to be easily distinguished from one another. For instance, in the Oscillatory tracker model (<ref>) the late-time attractor w_φ ≃ -1 is reached through a series of oscillations of decreasing amplitude. By contrast, oscillations in w_φ are absent in the Recliner potential (<ref>), in which the late-time approach to w_φ ≃ -1 occurs via a steady decline in the value of w_φ. Our fourth model, represented by the Margarita potential (<ref>), describes a transient model of dark energy. In this model the accelerating phase is sandwiched between two matter dominated epochs. However unlike other transiently accelerating models discussed in the literature, the Margarita potential provides us with an example of an α-attractor based model with a tracker-like asymptote at early times. This ensures that transient acceleration can arise from a fairly large family of initial conditions.

Finally we would like to mention that the potentials suggested in this paper do not claim to address the 'why now' question which is sometimes raised in the context of dark energy. The potentials in our paper contain two free parameters, V_0 and λ. The value of λ is chosen in keeping with the requirement that the EOS can drop to the low values demanded by observations <cit.>. The value of the other free parameter V_0 is adjusted to ensure Ω_m ≃ 1/3, Ω_DE ≃ 2/3 at the present epoch. It's important to note that for a given value of V_0 there is an entire range of initial conditions {φ_i, φ̇_i} which funnel dark energy to its present value. This ensures that there is little fine tuning of initial conditions in the models discussed in this paper.

§ ACKNOWLEDGMENTS

The authors acknowledge useful discussions with Yu. Shtanov and A. Viznyuk. S.B. and S.S.M. thank the Council of Scientific and Industrial Research (CSIR), India, for financial support as senior research fellows.

§ CAN ΛCDM COSMOLOGY EMERGE FROM A SINGLE OSCILLATORY POTENTIAL?

Consider any potential such as V = V_0 cosh(λφ/m_p) in (<ref>) which has an early time tracker phase and the late time asymptotic form

V(φ) ≃ V_0 [1 + (1/2)(λφ/m_p)²] ,  |λφ| ≪ m_p .

Recasting (<ref>) as

V(φ) ≃ V_0 + (1/2) m² φ² , where m² = V_0λ²/m_p² ,

one might be led into thinking that: (i) since V_0 behaves like the cosmological constant, and (ii) the m²φ² term leads to oscillations in φ during which ⟨w_φ⟩ ≃ 0, therefore a potential having the general asymptotic form (<ref>) might be able to play the dual role of describing both dark matter and dark energy. However this is not the case, for the simple reason that although oscillations commence when m ≳ H, which might lead one to believe that (1/2)m²φ² ≫ V_0, the asymptotic forms (<ref>) and (<ref>) are only valid in the limit |λφ| ≪ m_p, which implies (1/2)m²φ² ≪ V_0. In other words, the cosmological constant V_0 is always larger than the oscillatory m²φ² term soon after the onset of oscillations, leaving little room for a prolonged dark matter dominated epoch as demanded by observations. Note, however, that a viable model of ΛCDM based on a single scalar field can be constructed within the framework of a non-canonical Lagrangian, as shown in <cit.>.

[ss00] V. Sahni and A. A. Starobinsky, Int. J. Mod. Phys. D 9, 373 (2000).
[DE1] V. Sahni and A. A. Starobinsky, Int. J. Mod. Phys. D 15, 2105 (2006).
[DE] P. J. E. Peebles and B. Ratra, Rev. Mod. Phys. 75, 559 (2003); T.
Padmanabhan, Phys. Rep. 380, 235 (2003); V. Sahni, [astro-ph/0202076], [astro-ph/0502032]; V. Sahni, Dark matter and dark energy, Lect. Notes Phys. 653, 141-180 (2004) [astro-ph/0403324]; E. J. Copeland, M. Sami and S. Tsujikawa, Int. J. Mod. Phys. D 15, 1753 (2006); R. Bousso, Gen. Relativ. Gravit. 40, 607 (2008); L. Amendola and S. Tsujikawa, Dark Energy, Cambridge University Press, 2010.
[ratra] B. Ratra and P. J. E. Peebles, Phys. Rev. D 37, 3406 (1988).
[wetterich88] C. Wetterich, Nucl. Phys. B 302, 668 (1988).
[ferreira] P. G. Ferreira and M. Joyce, Phys. Rev. Lett. 79, 4740 (1997); P. G. Ferreira and M. Joyce, Phys. Rev. D 58, 023503 (1998).
[zlatev] I. Zlatev, L. Wang and P. J. Steinhardt, Phys. Rev. Lett. 82, 896 (1999).
[zlatev1] P. J. Steinhardt, L. Wang and I. Zlatev, Phys. Rev. D 59, 123504 (1999).
[sw00] V. Sahni and L. Wang, Phys. Rev. D 62, 103517 (2000).
[brax] P. Brax and J. Martin, Phys. Rev. D 61, 103502 (2000); Phys. Lett. B 468, 40 (1999).
[barreiro] T. Barreiro, E. J. Copeland and N. J. Nunes, Phys. Rev. D 61, 127301 (2000).
[albrecht00] A. Albrecht and C. Skordis, Phys. Rev. Lett. 84, 2076 (2000).
[BAO] Y. Wang et al., SDSS Collaboration, MNRAS 469, 3762 (2017) [arXiv:1607.03154].
[SDSS] S. Alam et al., BOSS Collaboration, MNRAS 470, 2617 (2017) [arXiv:1607.03155].
[planck] P. A. R. Ade et al., Planck Collaboration, Astron. Astrophys. 594, A14 (2016), Dark energy and modified gravity, [arXiv:1502.01590].
[asen17] A. I. Lonappan, Ruchika and A. A. Sen, arXiv:1705.07336.
[huterer17] D. Huterer and D. L. Shafer, Rep. Prog. Phys. 81, 016901 (2018) [arXiv:1709.01091].
[linde1] R. Kallosh and A. Linde, JCAP 07 (2013) 002 [arXiv:1306.5220].
[linde2] R. Kallosh, A. Linde and D. Roest, JHEP 11, 198 (2013) [arXiv:1311.0472].
[sss17] S. Mishra, V. Sahni and Yu. Shtanov, JCAP 06 (2017) 045 [arXiv:1703.03295].
[linder_alpha] E. V. Linder, Phys. Rev. D 91, no. 12, 123012 (2015) [arXiv:1505.00815].
[chap] A. Kamenshchik, U. Moschella and V. Pasquier, Phys. Lett. B 511, 265 (2001) [gr-qc/0103004].
[chap1] V. Gorini, A. Kamenshchik, U. Moschella, V. Pasquier and A. Starobinsky, Phys. Rev. D 72, 103518 (2005) [astro-ph/0504576].
[bilic] N. Bilic, G. B. Tupper and R. Viollier, Phys. Lett. B 535, 17 (2002) [astro-ph/0111325].
[frolov] A. Frolov, L. Kofman and A. Starobinsky, Phys. Lett. B 545, 8 (2002) [hep-th/0204187].
[matarrese] S. Matarrese, C. Baccigalupi and F. Perrotta, Phys. Rev. D 70, 061301 (2004).
[caldwell_linder] R. R. Caldwell and E. V. Linder, Phys. Rev. Lett. 95, 141301 (2005).
[linder17] E. V. Linder, Astropart. Phys. 91, 11 (2017) [arXiv:1701.01445].
[sfs] V. Sahni, H. Feldman and A. Stebbins, Astrophys. J. 385, 1 (1992).
[copeland98] E. J. Copeland, A. R. Liddle and D. Wands, Phys. Rev. D 57, 4686 (1998).
[CMB_Neff1] E. Calabrese, D. Huterer, E. V. Linder, A. Melchiorri and L. Pagano, Phys. Rev. D 83, 123504 (2011) [arXiv:1103.4132].
[CMB_Neff2] A. Hojjati, E. V. Linder and J. Samsing, Phys. Rev. Lett. 111, 041301 (2013) [arXiv:1304.3724].
[scherrer_exp] H. Chang and R. J. Scherrer, [arXiv:1608.03291].
[Roy:2013wqa] N. Roy and N. Banerjee, Gen. Rel. Grav. 46, 1651 (2014) [arXiv:1312.2670].
[Paliathanasis:2015gga] A. Paliathanasis, M. Tsamparlis, S. Basilakos and J. D. Barrow, Phys. Rev. D 91, no. 12, 123535 (2015) [arXiv:1503.05750].
[polar] M. Chevallier and D. Polarski, Int. J. Mod. Phys. D 10, 213 (2001) [gr-qc/0009008].
[linder] E. V. Linder, Phys. Rev. Lett. 90, 091301 (2003) [astro-ph/0208512].
[non-param] A. Shafieloo, U. Alam, V. Sahni and A. A. Starobinsky, Mon. Not. R. Astron. Soc. 366, 1081 (2006); J. Dick, L. Knox and M. Chu, J. Cosmol. Astropart. Phys. 07 (2006) 001; A. Shafieloo, Mon. Not. R. Astron. Soc. 380, 1573 (2007); D. Huterer and G. Starkman, Phys. Rev. Lett. 90, 031301 (2003); R. G. Crittenden, L. Pogosian and G. B. Zhao, J. Cosmol. Astropart. Phys. 12 (2009) 025; C. Clarkson and C. Zunckel, Phys. Rev. Lett. 104, 211301 (2010); R. G. Crittenden, G. B. Zhao, L. Pogosian, L. Samushia and X. Zhang, J. Cosmol. Astropart. Phys. 02 (2012) 048; T. Holsclaw, U. Alam, B. Sansó, H. Lee, K. Heitmann, S. Habib and D. Higdon, Phys. Rev. Lett. 105, 241302 (2010); A. Shafieloo, A. G. Kim and E. V. Linder, Phys. Rev. D 85, 123530 (2012); M. Seikel, C. Clarkson and M. Smith, J. Cosmol. Astropart. Phys. 06 (2012) 036.
[woscillation1] G.-B. Zhao, R. G. Crittenden, L. Pogosian and X. Zhang, Phys. Rev. Lett. 109, 171301 (2012).
[woscillation2] G.-B. Zhao et al., Nature Astronomy 1, 627 (2017).
[transient] J. Frieman, C. T. Hill, A. Stebbins and I. Waga, Phys. Rev. Lett. 75, 2077 (1995); K. Choi, Phys. Rev. D 62, 043509 (2000) [hep-ph/9902292]; J. D. Barrow, R. Bean and J. Magueijo, MNRAS 316, L41 (2000); S. C. Ng and D. L. Wiltshire, Phys. Rev. D 64, 123519 (2001) [astro-ph/0107142]; R. Kallosh, A. Linde, S. Prokushkin and M. Shmakova, Phys. Rev. D 66, 123503 (2002) [hep-th/0208156]; R. Kallosh and A. Linde, JCAP 02, 002 (2003) [astro-ph/0301087]; U. Alam, V. Sahni and A. A. Starobinsky, JCAP 0304, 002 (2003) [astro-ph/0302302].
[ss02] V. Sahni and Yu. V. Shtanov, JCAP 0311, 014 (2003) [astro-ph/0202346].
[star17] A. Shafieloo, D. K. Hazra, V. Sahni and A. A. Starobinsky, Metastable Dark Energy with Radioactive-like Decay, arXiv:1610.05192.
[sahni-sen17] V. Sahni and A. A. Sen, Eur. Phys. J. C 77, 225 (2017) [arXiv:1510.09010].
http://arxiv.org/abs/1709.09193v4
{ "authors": [ "Satadru Bag", "Swagat S. Mishra", "Varun Sahni" ], "categories": [ "gr-qc", "astro-ph.CO", "hep-ph", "hep-th" ], "primary_category": "gr-qc", "published": "20170926180144", "title": "New tracker models of dark energy" }
Nuclear Disarmament Verification via Resonant Phenomena

Jake J. Hecla, Areg Danagoulian^∗

Department of Nuclear Science and Engineering, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, MA 02139, USA

^∗To whom correspondence should be addressed; E-mail: [email protected].
=============================================================================================================================================================================================================================================================

Nuclear disarmament treaties are not sufficient in and of themselves to neutralize the existential threat of nuclear weapons. Technologies are necessary for verifying the authenticity of the nuclear warheads undergoing dismantlement before counting them towards a treaty partner's obligation. This work presents a novel concept that leverages isotope-specific nuclear resonance phenomena to authenticate a warhead's fissile components by comparing them to a previously authenticated template. All information is encrypted in the physical domain in a manner that amounts to a physical zero-knowledge proof system. Using Monte Carlo simulations, the system is shown to reveal no isotopic or geometric information about the weapon, while readily detecting hoaxing attempts. This nuclear technique can dramatically increase the reach and trustworthiness of future nuclear disarmament treaties.

*Summary: The resonant responses triggered by epithermal neutrons in nuclei can be leveraged for a zero-knowledge verification of nuclear warheads.

§ INTRODUCTION

The abundance of nuclear weapons could be the biggest existential threat to human civilization. Currently Russia and the US own more than 90% of all nuclear weapons. It is estimated that as many as 14 000 units are part of their arsenals of retired and stockpiled weapons, and an additional 3 700 units make up the deployed arsenals <cit.>. Such large numbers expose the world to the danger of catastrophic devastation in case of an intentional or accidental nuclear war, and also raise the danger of nuclear terrorism and nuclear proliferation due to possible theft or loss of nuclear weapons. So far most treaties, such as the Strategic Arms Reduction Treaty (New START) and the Intermediate-Range Nuclear Forces (INF) treaty, stipulate cross-verification of the dismantlement of delivery systems – such as bomber aircraft and cruise missiles. The delivery methods are a reliable proxy of strike capability in a nuclear war scenario. However, a reduction effort that is limited only to the verification of delivery methods leaves behind the problem of large stockpiles of surplus nuclear warheads. Thus disarmament treaties that target the stockpiles themselves are indispensable for reducing this combined danger. To enable such treaties, however, new technologies are necessary to achieve treaty verification while protecting the secrets of the treaty participants.
This means verifying the authenticity of nuclear weapons – before their destruction is counted towards a treaty participant's obligations – without revealing any classified information. Such a technique will be a highly powerful tool in enacting far-reaching disarmament treaties. But how does one verify that an object is a weapon without inspecting its interior? This apparent paradox has puzzled policy makers and researchers alike for the last few decades, with no clear solutions adopted. Past US-Russia lab-to-lab collaboration included the research and development of information barriers (IB) <cit.>. These are devices that rely on software and electronics to analyze data from radiation detectors and compare the resulting signal against a set of attributes in a so-called attribute verification scheme <cit.>. The attributes can be the plutonium mass and enrichment, the presence of explosives, etc. Most importantly, these attributes have to be quite broad, to prevent the release of classified information. This in its turn makes them insensitive to a variety of hoaxing scenarios. The other major difficulty of this paradigm is that it shifts the problem of verification to software and electronics, whose components themselves will need to undergo verification and validation for the possible presence of spyware, back-door exploits, and other hidden functionality. If present, these can either leak secret information to the inspectors, or clear fake warheads. To overcome this, an alternative approach called template verification has been proposed, where all candidate warheads and/or their components are compared to those from a previously authenticated template. The authenticated template itself can be selected based on situational context: a random warhead acquired from a deployed ICBM during a surprise visit by the inspection crew can be expected to be real, since a country's nuclear deterrence hinges on having real warheads on its ICBMs. To strengthen the confidence in its authenticity, multiple warheads can be removed from multiple ICBMs and later compared to each other using the verification protocol described in this work. While significant first steps were taken in template verification research, much needs to be done in ensuring that the verification protocol is both hoax-resistant and information-secure. A template verification method based on the non-resonant transmission of fast neutrons was developed by researchers at Princeton <cit.>. Independently, an isotope-sensitive physical cryptography system was proposed by researchers, including one of the authors of this paper, at MIT's Laboratory of Nuclear Security and Policy <cit.>. Both approaches have advantages and disadvantages. The Princeton concept has strong information security in the form of a zero-knowledge (ZK) proof. It relies primarily on the non-resonant scattering of fast neutrons, a process which is almost identical for most actinides, making it prone to isotopic hoaxes, e.g. via replacement of the weapons grade plutonium (WGPu) pit[In this work the pit refers to the hollow plutonium sphere at the center of a fission nuclear weapon.]
with easily available depleted uranium (DU) or reactor grade plutonium (RGPu). The previous MIT system, relying on isotope-specific Nuclear Resonance Fluorescence (NRF) signatures, offers strong hoax resistance. However it is not fully zero-knowledge, and thus needs to undergo thorough checks for information security. The new methodology proposed in this work combines the strengths of the Princeton concept (ZK proof) with the strengths of the MIT concept (isotopic sensitivity), while avoiding their weaknesses. Such an outcome can have a strong impact on future arms reduction treaties. The new technique uses resonance phenomena to achieve isotope-specific data signatures, which can be used to obtain the necessary "fingerprint" of the object. This is achieved by exploiting nuclear resonances in actinides when interacting with epithermal neutrons in the 1-10 eV range. Unlike fast neutrons, which do not have this isotope-specificity for high Z nuclei and thus cannot enable a crucial resistance to isotopic hoaxes, the epithermal neutron transmission signal can be made highly specific and sensitive to the presence and abundance of individual isotopes. These include ^235U and ^239Pu in highly enriched uranium (HEU) and WGPu <cit.>. Also unlike NRF, the resonant absorption of epithermal neutrons in the beam can be observed directly with very high resolution (less than an eV). This can be done by using time-of-flight (TOF) techniques. These characteristics allow for direct measurements of resonant absorption, thus enabling zero-knowledge implementations otherwise impossible with NRF.

§ PROTOCOL

The key to any verification procedure is a protocol that can guarantee that no treaty accountable item (TAI) undergoing verification is secretly modified or replaced with another object. The general steps of the protocol are as follows:

* The inspection party (inspectors) makes an unannounced visit to an ICBM site and randomly chooses a warhead from one of the missiles. The warhead enters the joint custody of the inspectors and the host country (hosts), and can be treated as the authentic template to which all future candidate warheads undergoing dismantlement and disposition will be compared.

* The template is transported under the joint custody of the hosts and inspectors to the site where the candidate warheads (henceforth referred to as candidates) will undergo dismantlement, verification, and disposition.

* Both the template and the candidate undergo dismantlement by the hosts in an environment that cannot be observed directly by the inspectors, but where no new objects can be introduced or removed. This could be done by curtaining off an area in the middle of a hall. The fissile component of the weapon, also known as the pit, is extracted. The pits are placed in marked, opaque boxes. The remaining components are placed in a different, unmarked box.

* The curtains are removed and the dismantlement area is made accessible to the inspectors.
The task of the inspection is to verify that the template and candidate pits in the marked boxes are geometrically and isotopically identical. The inspectors use a Geiger counter to verify that the non-fissile, unmarked boxes do not contain any radioactive materials, thus confirming that the marked boxes indeed contain the pits.

* The two marked boxes undergo epithermal transmission analysis. The 2D radiographs and the spectral signatures are compared in a statistical test. An agreement confirms that the candidate is identical to the template and thus can be treated as authentic. A disagreement indicates a hoaxing attempt.

The last step is key to the whole verification process, and is the focus of this work. The transmission measurement produces an effective image in a 2D pixel array made of scintillators that are sensitive to epithermal neutrons. Furthermore, by choosing a detector with a fast response time and knowing the time when the neutrons are produced, a TOF technique can allow an explicit determination of neutron energies.

§ THE METHOD

The epithermal range refers to the neutron energy domain encompassed between the thermal energies of ∼40 meV and fast neutron energies of ∼100 keV. The energies of interest for this study are those of 1 ≤ E ≤ 10 eV. While the neutron interactions in the thermal regime are described by monotonic changes in cross sections, in the epithermal range the neutrons can trigger various resonant responses in uranium and plutonium. These are typically (n,fission) or (n,γ) reactions, resulting in the loss of the original neutron. A plot of total interaction cross sections in the epithermal range for five isotopes of interest can be seen in Fig. <ref>. For a radiographic configuration these interactions selectively remove the original neutrons of resonant energies from the transmitted beam and give rise to an absorption spectrum, resulting in unique sets of ∼0.3 eV wide notches specific to each isotope. While the resonances are the most prominent features of the cross section, the continuum between the resonances also encodes information about the isotopic composition of the target. These combined absorption features yield a unique "fingerprint" of a particular configuration of isotopics, geometry, and density distribution. This feature has been used in the past for non-destructive assay of nuclear fuel <cit.>.
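The way such resonances imprint an isotope-specific fingerprint on a transmitted beam can be illustrated with a toy Beer–Lambert model. In the sketch below the resonance energies loosely follow well-known lines (^239Pu near 0.3 eV, ^240Pu near 1.06 eV, ^242Pu near 2.67 eV), but the widths, strengths and isotopic mixes are placeholder values chosen for illustration – they are not evaluated nuclear data.

```python
import numpy as np

def sigma_bw(E, E0, gamma, sigma0):
    """Toy single-level Breit-Wigner resonance shape (arbitrary units)."""
    return sigma0 * (gamma / 2.0)**2 / ((E - E0)**2 + (gamma / 2.0)**2)

# (E0 [eV], width [eV], strength) -- energies roughly realistic, strengths placeholders
RESONANCES = {"Pu239": [(0.30, 0.10, 2.0), (7.8, 0.3, 1.5)],
              "Pu240": [(1.06, 0.04, 20.0)],
              "Pu242": [(2.67, 0.03, 10.0)]}

def transmission(E, fractions):
    """T(E) = exp(-sum_i f_i * tau_i(E)) for an isotopic mix at fixed areal density."""
    tau = np.zeros_like(E)
    for iso, frac in fractions.items():
        for E0, g, s0 in RESONANCES[iso]:
            tau += frac * sigma_bw(E, E0, g, s0)
    return np.exp(-tau)

E = np.linspace(0.1, 10.0, 2000)
wgpu = transmission(E, {"Pu239": 0.93, "Pu240": 0.07, "Pu242": 0.00})
rgpu = transmission(E, {"Pu239": 0.41, "Pu240": 0.25, "Pu242": 0.34})
print("max |T_wg - T_rg| =", np.abs(wgpu - rgpu).max())   # distinct notch patterns
```

Even this crude model shows deep notches appearing at the resonance energies of whichever isotopes are present, which is the basis of the hoax-detection test described below.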
The key component of a nuclear weapon is its pit – the hollow plutonium sphere at the center of the assembly. This work focuses on the verification of the authenticity of the pit. A direct transmission imaging of the pit would reveal many of its secret parameters. Thus, an additional physical cryptographic barrier is necessary to achieve a ZK test. The barrier needs to be constructed such as to also allow for pit-to-pit comparisons that can detect any significant differences. These two simultaneous goals can be accomplished by the mass reciprocal mask of the pit. The reciprocal mask has a shape such that all the rays of epithermal neutrons in the beam transit the same combined areal density. A simple example of a reciprocal mask of a hollow shell could be a cube with the geometry of the said hollow shell subtracted. Thus, the aligned combination of the pit and the reciprocal will result in uniform areal density, producing an image in the detection plane which is consistent with that of a flat object. A more optimal reciprocal for a hollow shell of internal and external radii r_1 and r_0 can be defined via its thickness along the beam axis:

d = D - 2(√(r_0² - y²) - √(r_1² - y²)) for y < r_1 , and d = D - 2√(r_0² - y²) for r_1 ≤ y ≤ r_0 ,

where D is the combined thickness observed by all particles in the beam, and y is the vertical coordinate. A combination of the pit and the reciprocal is illustrated in Fig. <ref>. In order to keep the shape of the reciprocal secret, the hosts can place it inside an opaque box. It should be noted that an assembly with a uniform areal density by no means implies that the resulting image will be flat as well. Secondary processes, e.g. neutron scattering, can distort the image and introduce some dependencies that contain information about geometric structures. For this reason detailed simulations are necessary to validate the concept and demonstrate information security. In the next sections we use Monte Carlo simulations to show that the transmission image produced for this configuration is identical to that of a uniform plate. This outcome guarantees that no geometric information is revealed by the transmission analysis. Even if the inspectors are capable of determining the incident flux in the neutron beam they will at most gain knowledge of an upper limit on the amount of total mass. The hosts can modify the mask to make that knowledge of no value: for example, the inspectors might determine that the mass of WGPu in the pit is less than 10 kg – this knowledge is useless, as the critical mass for a WGPu pit is approximately 6 kg.

§.§.§ Isotopic Hoax Resistance

As the inspectors perform the radiographic interrogation of the pit, they need to ascertain that the isotopics and effective densities of the template and the candidate pits are identical. At a given pit orientation, this can be performed by comparing the epithermal spectra of the transmitted beam from the template and candidate measurements. The energy information can be acquired from the timing of the arrival of the neutrons, via the previously described TOF method, to very high precision with either boron doped microchannel plate detectors or ^6Li based LiCaAlF_6 scintillators <cit.>. To study the system's sensitivity to hoaxing, we consider a scenario where the WGPu pit has been replaced with a RGPu pit. MC simulations are performed for the configuration described in Fig. <ref>. The simulations were performed using the MCNP5 package, which has a fully validated neutron physics module <cit.>.
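As a cross-check on the mask geometry that enters such simulations, the short sketch below implements the thickness profile d(y) defined above and verifies that the pit chord plus the reciprocal thickness is the same for every ray. The radii correspond to the 6.27 cm / 6.7 cm diameters and the 5 cm combined thickness used in the simulations described next; the code is only an illustration of the formula, not the modeled MCNP geometry.

```python
import numpy as np

def pit_chord(y, r0, r1):
    """Beam path length through a hollow shell (inner r1, outer r0) at height y."""
    y = np.abs(np.asarray(y, dtype=float))
    outer = np.where(y <= r0, 2.0 * np.sqrt(np.clip(r0**2 - y**2, 0.0, None)), 0.0)
    inner = np.where(y < r1, 2.0 * np.sqrt(np.clip(r1**2 - y**2, 0.0, None)), 0.0)
    return outer - inner

def reciprocal_d(y, r0, r1, D):
    """Mask thickness d(y) from the text: chord(pit) + d(y) = D along every ray."""
    return D - pit_chord(y, r0, r1)

r1, r0, D = 3.135, 3.35, 5.0          # cm: 6.27/6.7 cm diameters, 5 cm total
y = np.linspace(0.0, r0, 8)
print(pit_chord(y, r0, r1) + reciprocal_d(y, r0, r1, D))   # constant D everywhere
```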
The neutron energies were uniformly sampled in the 0 ≤ E ≤ 10 eV range. For this study the isotopic mass concentration of ^239Pu in RGPu was 40.8%. For WGPu the fractions are 93% ^239Pu, with the remainder as ^240Pu and trace amounts of the other isotopes. See supplementary materials for a detailed listing of the isotopic concentrations. For both the template and the hoax RGPu pit the inner diameter was 6.27 cm and the outer diameter was 6.7 cm – based on public domain estimates of Soviet tactical thermonuclear warhead pit geometries <cit.>. The reciprocal was modeled in three dimensions according to the previously described formula. The combined pit-reciprocal thickness was 5 cm. Fig. <ref> shows the results of the Monte Carlo simulation of an idealized detector exposed to the transmitted flux through a combination of a reciprocal mask and a WGPu pit, and a similar configuration where WGPu has been replaced by RGPu. The large spectral discrepancies are clear. The difference is primarily caused by the abundance of ^242Pu in the RGPu, produced as a result of neutron capture in the reactor core. This manifests itself in deeper absorption lines at 4.2 and 8.5 eV, as well as generally increased absorption in the 5-8 eV range. For this simulation 20.7 × 10^6 neutrons were uniformly sampled in the 0-10 eV range, and the energy of the output was histogrammed over 202 bins. For two given spectra, the chi-square test can be applied to reject or accept the null hypothesis, i.e. the hypothesis that the fluctuations are merely statistical and normally distributed. For this result the value is χ² = 67177, translating to a confidence for rejecting the null hypothesis of essentially p = 1 and showing conclusively that a hoaxing attempt is underway. Furthermore, it is possible to determine the minimum number of incident epithermal neutrons necessary for achieving a confidence of p = 1 - 2.9×10^-7, which corresponds to the standard 5σ test. That number is n = 1 × 10^5 neutrons. For comparison, the MIT research reactor has been used to produce 10^10 n/s/cm² epithermal neutrons in the [1 eV, 10 keV] range <cit.>. For this beam this corresponds to ∼10^9 n/s in the [1,10] eV range. Thus a measurement requiring 10^5 neutrons would take fractions of a second. Alternative sources of epithermal neutrons are also possible, e.g. via nuclear reactions triggered by compact proton accelerators <cit.>. Such sources would produce enough epithermal neutrons to achieve the necessary measurements in about 12 seconds. See supplementary materials for additional discussion and detailed calculations.

§.§.§ Geometric Hoax Resistance

Geometric hoax resistance implies the use of 2-dimensional transmission imaging to identify any significant geometric and/or isotopic differences between the template and the candidate, as a way of detecting cheating attempts. The detector described in Fig. <ref> and modeled in the MC simulation, whose spectral output is shown in Fig. <ref>, can be pixelated to produce imaging data. Fig. <ref> shows the results of simulated epithermal transmission images for the WGPu template and RGPu hoax candidate scenarios. The 2D images show a clear difference, indicating that the candidate is a hoax and thus confirming the isotopic analysis described earlier. The 1D projection of the two images' radial distributions further shows the extent of the discrepancy.
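The statistical machinery behind both the spectral and the imaging comparison is the chi-square test quoted above. A minimal sketch of how the significance scales with the number of incident neutrons is given below; the two per-bin mean transmissions are toy placeholders rather than the simulated WGPu/RGPu spectra, so only the scaling behaviour – not the specific significance values – carries over.

```python
import numpy as np
from scipy.stats import chi2, norm

def hoax_significance(n1, n2):
    """Chi-square comparison of two Poisson-binned spectra; returns a z-score."""
    mask = (n1 + n2) > 0
    stat = np.sum((n1[mask] - n2[mask])**2 / (n1[mask] + n2[mask]))
    return norm.isf(chi2.sf(stat, np.count_nonzero(mask)))   # p-value -> sigma

rng = np.random.default_rng(7)
bins, t1, t2 = 202, 0.60, 0.52        # toy mean transmissions per energy bin
for N in (1e3, 1e4, 1e5):             # incident epithermal neutrons
    mu1, mu2 = t1 * N / bins, t2 * N / bins
    z = hoax_significance(rng.poisson(mu1, bins), rng.poisson(mu2, bins))
    print(f"N = {N:.0e}:  ~{z:.1f} sigma")
```

The significance grows roughly as the square root of the neutron count, which is why a fixed 5σ criterion translates into a definite minimum number of incident neutrons.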
It should be noted that the non-flat image of the WGPu configuration is caused by the in-scatter of the epithermal neutrons, and doesn't reveal any information about the object, as will be shown in the section on geometric information security. The imaging analysis described here focuses on a hoax scenario of modified isotopics. However any changes in pit diameter(s) and/or density will result in non-uniformity in the areal density as observed by the incident epithermal beam, and will thus cause imaging inconsistencies similar to the ones described in Fig. <ref>. While the above discussion is indicative of the strength of the technique in detecting hoaxes, a proof of uniqueness is necessary. This means showing that for a given reciprocal a given image can be produced only by a unique object. This is not the case for a single projection, since transmission is only sensitive to the line integral along the beam axis. To exclude the possibility of geometric hoaxes with identical line integrals but consisting of a different three dimensional manifold, multiple projections along the x, y, and z axes may be necessary. For a spherically symmetric pit similar to the one treated in this work this could be achieved by multiple measurements at random angles. For spherically non-symmetric objects three projections along the x, y, and z axes may be sufficient – each requiring a reciprocal built for that projection. A more rigorous proof of uniqueness is part of future work, and could be based on the methodology of K-transforms as described in Ref. <cit.>.

§.§ Geometric Information Security

Geometric information security refers to the notion that the inspectors will learn nothing about the pit geometry beyond what they already know. We assume that for the proof system to be zero-knowledge we need only show that for an honest pit the combined pit-reciprocal transmission is identical to that of a flat, uniform plate of no geometric structure. This implies that the information content in the data cannot allow the inspectors to distinguish between a complex geometry (pit-reciprocal) and a flat object. To show this, MCNP5 simulations have been performed for a simple flat plate of WGPu with an areal density equal to that of the pit-reciprocal configuration shown in Fig. <ref>. Similar to the analysis performed in the section on geometric hoax resistance, the radial distributions of the counts for the two configurations are plotted in Fig. <ref>. The radial count distribution has error bars which reflect the fluctuations that an inspector would observe for the scenario of 1 × 10^5 incident neutrons necessary for achieving a hoax detection at the 5σ confidence level. The large overlap of the errors shows the statistical identity of the two outcomes, and indicates that no useful geometric information can be extracted about the object.

§.§ Isotopic Information Security

The isotopic composition of the plutonium pit can affect the reliability of the nuclear weapon <cit.>. Additionally, knowledge about the abundance of particular isotopes can allow an observer to determine the methodology by which the fissile material was produced. Because of this and other considerations the information on isotopic composition can be of a sensitive nature. Thus a zero-knowledge proof system should not produce any data from which the isotopics of the pit-reciprocal configuration can be inferred beyond what is commonly known.
Just as the shape of the reciprocal mask protects the geometry of the pit, its isotopic composition can be used to mask the real isotopic concentrations in the pit itself. While the inspectors may use the spectral information to infer the isotopic ratios for the pit-reciprocal combination, it can be made impossible for them to infer the isotopic contributions of the pit itself. For a simple case, consider a slightly modified version of the reciprocal presented in Fig. <ref>, where the host has added an additional flat, cylindrical extension of unknown thickness and isotopic composition. It can be shown that multiple combinations of isotopics in the pit-reciprocal and the extension will produce the same spectrum. To show this computationally, MCNP5 simulations have been performed for the following three configurations: pit-reciprocal made out of WGPu, and a 2 cm extension of 41% enriched RGPu; pit-reciprocal and extension at intermediate 78% enrichment; pit-reciprocal at low-intermediate 70% enrichment and extension made of super-grade plutonium. See supplementary material for the full listing of concentrations corresponding to the different enrichment levels used in these simulations. In the simulation incident epithermal neutron events were uniformly sampled in the [0,10 eV] range. The results of the simulations were used to determine the expected detector counts for achieving a 5σ hoax detection. The mean expected counts and the corresponding statistical errors are plotted in Fig. <ref>. The plots show no statistically significant differences. This proves that for this particular extension thickness the inspectors cannot determine the enrichment level to better than the 70-93% range. It can be shown that the range of uncertainty on the isotopic vector can be determined from Δ𝐫 = (y/x)(𝐫_𝐦𝐚𝐱 - 𝐫_𝐦𝐢𝐧), where x and y are the pit-reciprocal and extension thicknesses, respectively, and 𝐫_𝐦𝐢𝐧 and 𝐫_𝐦𝐚𝐱 are vectors of the lowest and highest possible enrichment levels (a small numerical illustration appears below). This range can be widened arbitrarily by using an extension of lower ^239Pu concentration or of higher thickness. For a more general mathematical treatment of isotopic information security, as well as a discussion on the possible use of double-chopper and velocity selector techniques for further information protection, see supplementary material.

§ CONCLUSIONS AND FUTURE WORK

Nuclear arms reduction treaties have long suffered from the lack of a reliable, hoax-proof and information-secure methodology for the verification of dismantlement and disposition of nuclear weapons and their components. The work presented here covers the basic concept behind a novel epithermal zero-knowledge verification system, which targets the fissile component of the weapon. The Monte Carlo simulations show that epithermal neutrons can be used as the basis of a zero-knowledge proof system, making it possible to authenticate a fissionable object, such as a hollow sphere, by comparing it against another, previously authenticated template. As required by the zero-knowledge proof, the data is physically encrypted, meaning that all the encryption happens in the physical domain prior to measurement, making it impossible for the inspection side to infer significant information about object enrichment and/or geometries. This work has shown that the technique can be made simultaneously hoax resistant and information secure in isotopic and geometric domains.
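Picking up the masking relation Δ𝐫 = (y/x)(𝐫_𝐦𝐚𝐱 - 𝐫_𝐦𝐢𝐧) flagged above, the sketch below evaluates it for the simulated geometry. Here only the ^239Pu fraction is tracked, spanning reactor grade (~0.41) to super-grade (assumed here to be ~0.98, an illustrative value); the resulting width is consistent with the 70-93% ambiguity window quoted in the isotopic information security section.

```python
import numpy as np

def masked_range(x, y, r_min, r_max):
    """Width of the isotopic-vector ambiguity left to the inspectors."""
    return (y / x) * (np.asarray(r_max) - np.asarray(r_min))

x, y = 5.0, 2.0                        # pit-reciprocal and extension thicknesses, cm
print(masked_range(x, y, 0.41, 0.98))  # ~0.23, i.e. a ~23%-wide window in Pu-239 fraction
```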
The measurement times, using established techniques for producing pulsed epithermal neutrons, could be less than one second. Significant additional research is needed for further understanding the strengths and limitations of this technique. A more rigorous treatment of the geometric aspects of the proof system is necessary, to determine and quantify the resistance against possible geometric hoaxes. This includes the task of showing that a specific measurement is unique to an object of specific isotopics and geometric shape. In this context only hoax objects viable from a manufacturing standpoint are of importance. The proof of uniqueness in the formalism of the K-transform <cit.> applied to the problem of an epithermal conical beam holds promise. Future research should also focus on a physical proof of concept implementation of this methodology. A source of epithermal neutrons, either using a research reactor or an accelerator-based nuclear reaction, can be used as a platform for such an implementation. Particle detection techniques based on existing epithermal neutron detectors should also be researched and optimized <cit.>. The impact of object-to-object variability on the information security and specificity of this verification system should also be analyzed. Finally, the zero-knowledge verification technique can also be extended to weapon components made out of low-Z elements. Most hydrogenous materials, e.g. explosive lenses, are essentially opaque to epithermal neutrons, thus necessitating the use of other, more penetrating particles. Fast neutrons at the MeV scale, where interaction cross sections for hydrogen are significantly lower, are a viable alternative for a source. An established technique of fast neutron resonance radiography, which exploits the resonances in nitrogen, oxygen and carbon at the ∼MeV scale, could prove promising <cit.>.

§ ACKNOWLEDGMENTS

The authors would like to thank their colleagues at the Laboratory of Nuclear Security and Policy for their inspiration and support. The authors are grateful for the support and encouragement from their peers within the Consortium of Verification Technologies, funded by the National Nuclear Security Administration. The authors thank Rob Goldston from Princeton Plasma Physics Lab for valuable physics discussions and encouragement. This work is supported in part by Massachusetts Institute of Technology's Undergraduate Research Opportunities (UROP) program.

§ SUPPLEMENTARY MATERIALS

§.§ TOF methods

The technique described in this work requires one to determine the energy of every neutron count in the detector. For neutrons in the cold, thermal, and epithermal range this can be achieved via a pulsed source and time-of-flight (TOF) techniques. If the neutron pulse occurs at time t_0, and the non-relativistic neutron is detected at time t = t_0 + Δt at a distance l, then its energy is

E = m(l/Δt)²/2 ,

where m is the neutron mass and Δt = t - t_0 is the TOF. By propagating the errors we can determine the uncertainty in E: δE = δΔt · ml²/Δt³. The uncertainty in Δt primarily comes from that of t_0, since most scintillation and microchannel based detectors have extremely fast rise times. Here the uncertainty on t_0 is either the opening time of the chopper, or the pulse length of the accelerator that produces the epithermal neutrons via nuclear reactions. Thus, we can write

δE/E = 2δt_0/Δt .
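The two relations above are easy to evaluate numerically. The sketch below reproduces, for a 5 m flight path, the pulse-width requirement worked out analytically in the next paragraph; the physical constants are standard, and the 0.3 eV target resolution is the resonance width quoted in the main text.

```python
import numpy as np

M_N = 1.674927e-27        # neutron mass [kg]
EV  = 1.602177e-19        # [J per eV]

def tof(E_eV, l=5.0):
    """Flight time (s) of a non-relativistic neutron of energy E over distance l."""
    return l / np.sqrt(2.0 * E_eV * EV / M_N)

def max_pulse_width(E_eV, dE_eV, l=5.0):
    """delta_t0 = (Delta_t / 2) * (dE / E), inverted from dE/E = 2*delta_t0/Delta_t."""
    return 0.5 * tof(E_eV, l) * (dE_eV / E_eV)

print("Delta_t(5 eV, 5 m)   = %.1f us" % (tof(5.0) * 1e6))                   # ~161.7 us
print("delta_t0 for 0.3 eV  = %.1f us" % (max_pulse_width(5.0, 0.3) * 1e6))  # ~4.9 us
```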
Taking 5 eV as a point midway in our energy range, it is possible to determine the maximum time-width of the pulse to achieve the energy resolution of δ E = 0.3 eV at the distance of l=5 m: δ t_0 = (Δ t/2)(δ E/E). For this distance Δ t = 161 μs, and thus δ t_0 = 5 μs. The precision of energy reconstruction can be increased by either making the pulse shorter, or by moving the detector further away and thus increasing Δ t. Since the geometric acceptance falls quadratically with l, it is statistically more optimal to shorten the pulse length than to lengthen the distance by the same fraction.

§.§.§ Epithermal neutron production by nuclear reactions and nuclear reactors

Nuclear research reactors are used as very intense sources of neutrons. Depending on the configuration, the output neutron beam's energy distribution can be thermal, epithermal, or fast. The MIT reactor has been used to produce epithermal neutron beams for oncological applications <cit.>. Using a fission plate converter, beams of 10^10 n/s/cm^2 in the [1 eV, 10 keV] range have been achieved. The object in our study has a radius of 7 cm, thus the total flux in the [1, 10] eV range will be ∼10^9 n/s. This flux, however, will have to be modified using a chopper, in order to enable energy reconstruction via the above-described TOF techniques. With a chopper opening of 5 μs, and a distance which corresponds to Δ t=161 μs, the chopper will have a maximum duty cycle of 3%. However, the presence of "wraparound" events, i.e. thermal neutrons with arrival times of n · 161 μs, where n is an integer >1, can introduce uncertainties in the energy reconstruction from TOF, as well as significant backgrounds. To avoid this, the chopper can be kept closed for 4830 μs - this will eliminate all thermal neutrons down to the energy of 6 meV, while reducing the chopper duty cycle to 0.1%. Combining these numbers, the total epithermal flux of neutrons in the energy range of [1,10] eV will be 10^6 n/s. With only 10^5 neutrons needed for the configuration described in the main body, this translates to a measurement time of 0.1 second. Epithermal neutrons can also be produced using nuclear reactions between accelerated light ions and various targets. There are two classes of light ion based nuclear reactions that can produce neutrons in the epithermal range. Significant work using epithermal neutrons was performed using the Los Alamos National Laboratory's 800 MeV proton spallation source, which produces neutrons of a broad range of energies <cit.>. This source was the basis of a number of beam lines used for a range of applied and fundamental studies. These included epithermal beams used for the non-destructive assay of nuclear fuels <cit.>. However, a particularly attractive and compact alternative to a large spallation facility is provided by smaller proton accelerators, which can trigger the ^7Li(p,n)^7Be and ^9Be(p,n)^9B reactions. A careful choice of incident proton energies in the initial state and neutron angles in the final state makes it possible to create adequate intensities of epithermal neutrons. Ref. <cit.> provides calculations and data of the dependence of the double differential neutron yield on incident proton energy and emitted neutron angle. Fig.
<ref> shows the double differential yield, plotted against emitted neutron energy and angle, as well as plotted against neutron energy in the [0,10] eV range for the emission angle of 90^∘. Using 1 count/eV/sr/μC as the order-of-magnitude value and assuming a beam opening of 30^∘, the expected total epithermal yield in the [0,10] eV energy range incident upon the target will be r=8500 s^-1 mA^-1. Some off-the-shelf commercial accelerators can produce 2 MeV proton beam currents of ∼50 mA <cit.>. It was determined that about 10^5 incident neutrons are necessary for achieving rejection of hoaxes at the 5σ confidence level. Taking a proton beam current of 1 mA, this translates to a measurement time of just 12 seconds. While ^7Li(p,n)^7Be is an attractive reaction, a number of other, similar reactions exist, such as ^9Be(p,n)^9B. Some of these may have higher epithermal neutron yields. The search for an optimal reaction is outside of the scope of this work and may be a subject of future studies. A significant challenge when using TOF techniques is the presence of thermal neutrons, which arrive at times much longer than the waiting time for the epithermal pulse, thus making it difficult or impossible to identify the originating pulse. This could introduce uncertainties in the TOF reconstruction. However, the calculations presented in Fig. <ref> show that the (p,n) reactions have almost no thermal flux, thus significantly limiting their impact on the precision of the TOF method.

§.§ Alignment and variations in design

Two issues come to the fore when discussing the use of reciprocals for a zero-knowledge proof system. As in any detection system, this verification system's sensitivity will have its limits, affecting the inspectors' ability to distinguish between objects of various sizes or different isotopic concentrations. The detection probability of the system is determined by measurement times, detector sensitivities, as well as the specificity of neutron interaction physics. These factors are mostly of stochastic nature, and measurements of arbitrary sensitivity can theoretically be achieved by varying the measurement times. However, systematic effects are also present. These include the unit-to-unit variability, due to manufacturing precision, as well as the hosts' ability to align the template and the candidate with the reciprocal. Modern surveying methods allow alignment precision down to the 𝒪(10 μm) scale, thus the main difficulty is related to the actual unit-to-unit variability. Information on manufacturing precision is not available in the open domain. For a given variability, the two sides can agree to broadened criteria of verification in order to accommodate such variability and thus avoid false alarms, which could otherwise reveal information about the geometry of the pit. This circumstance in turn limits one's ability to achieve arbitrary sensitivity. It is reasonable to assume that manufacturing variations are small, and thus the hoaxing scenarios attainable due to this limit on sensitivity are probably not of a significant advantage to either side in the inspection regime. A more rigorous treatment of this problem should be part of future research, possibly in the classified domain.
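For orientation, the yield and measurement-time estimate above can be reproduced with a few lines of Python. Interpreting the 30^∘ beam opening as a cone half-angle is our assumption here, as is the flat 1 count/eV/sr/μC yield value.

```python
import numpy as np

yield_per_ev_sr_uC = 1.0   # order-of-magnitude double differential yield
energy_window_ev = 10.0    # the [0, 10] eV epithermal window
half_angle_deg = 30.0      # beam opening, read here as cone half-angle
current_mA = 1.0

# solid angle of a cone with the given half-angle
omega = 2.0 * np.pi * (1.0 - np.cos(np.radians(half_angle_deg)))  # ~0.84 sr

# 1 mA = 1000 microcoulomb/s
rate = yield_per_ev_sr_uC * energy_window_ev * omega * current_mA * 1.0e3
print(rate)           # ~8.4e3 neutrons/s at 1 mA, close to the quoted 8500
print(1.0e5 / rate)   # ~12 s to accumulate the 1e5 neutrons needed
```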
§.§ Probabilistic Tests and minimum necessary counts for 5σ detection

For a particular energy bin i, the statistical significance in units of sigma can be determined via n_i=(c_0,i-c_1,i)/√(c_0,i+c_1,i), assuming Poisson statistics, where c_0 and c_1 are the counts from the two distributions undergoing comparison. In frequentist statistical analysis, and assuming normally distributed errors, we can determine the probability that the disagreement between c_0,i and c_1,i is consistent with the null hypothesis, i.e. is caused purely by statistical fluctuations. Conversely, the confidence for rejecting the null hypothesis and accepting the anomaly hypothesis (e.g. a hoaxing scenario is underway) can be determined via p_i=C(0,n_i), where C(0,x) is the cumulative distribution function of the standard normal distribution. For example, an n=5(σ) outcome (used in high energy physics for identifying new particles) corresponds to a confidence level of p=C(0,5)=1-2.9×10^-7, a very high confidence that can be used as the standard of testing. For multi-bin data the more common test is the chi-square test. If the data has N bins, the number of degrees of freedom (NDF) is N. The probability that two distributions are deviating only due to normal fluctuations can be determined from p=Prob(χ^2,NDF), where Prob(x,y) is the upper-tail cumulative probability of the chi-square distribution and χ^2 is the (non-reduced) chi-square that can be computed from χ^2=∑_i^NDF (c_0,i-c_1,i)^2/(c_0,i+c_1,i). For the data presented in Fig. <ref> we have NDF=202 and χ^2=67177. For this value of NDF the value of χ^2 corresponding to the confidence of 1-2.9×10^-7 (the 5σ standard) is just χ^2|_5σ=319. Clearly the discrepancy observed here implies an almost complete agreement with the anomaly hypothesis. Furthermore, it is possible to determine the minimum statistics necessary to bring the χ^2=67177 result, achieved by using N=2.0×10^7 neutrons, to a 5σ result. Since χ^2 depends linearly on the statistical count, the fraction of statistics necessary is just n=N(319/67177), i.e. just n=1.0×10^5 incident neutrons. Most epithermal neutron sources can produce this neutron count in a matter of minutes.

§.§ Concentrations of Plutonium isotopes used in Isotopic Information Security Analysis

Table <ref> lists the concentrations of individual isotopes of plutonium for the various levels of enrichment used in the isotopic information security analysis. For all the objects the density was 19.8 g/cc.
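A minimal sketch of these two numbers using SciPy (the inputs are the NDF, χ^2 and neutron count quoted above):

```python
from scipy.stats import chi2, norm

ndf = 202                 # number of energy bins
chi2_measured = 67177.0   # chi-square between the two measured spectra
n_used = 2.0e7            # incident neutrons used to obtain it

p_5sigma = norm.sf(5.0)                 # one-sided 5-sigma p-value, ~2.9e-7
chi2_5sigma = chi2.isf(p_5sigma, ndf)   # ~319 for ndf = 202

# chi-square scales linearly with counts, so rescale the statistics
n_min = n_used * chi2_5sigma / chi2_measured   # ~1.0e5 incident neutrons
```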
§.§ Isotopic Information Security

The loss of a beam neutron due to some form of interaction can be described by the attenuation factor A= I/I_0 = exp(-μρ d), where ρ and d are the density and thickness of a medium, μ = σ N_A/A is the mass attenuation coefficient, N_A is Avogadro's number, A is the atomic mass and σ is the total energy-dependent interaction cross section. This is only an approximation, because it treats all elastically scattered neutrons as undetected. For a transmission detector with a small acceptance this can nevertheless be a good approximation for an analytical treatment of the dynamics of isotope-dependent transmission. Consider a particular material with isotopic vector r_i={r_238,r_239,...,r_242}, where individual elements are the fractional concentration of a particular isotope, such that ∑_i r_i =1. Then the attenuation in a particular energy bin can be determined from A(E)= exp{ -ρ d ∑_i r_i μ_i(E) }. If the object consists of a pit-reciprocal combination of thickness x and a vector r_i, and an extension plate (see the main body for an explanation) of thickness y and a vector r'_i, then the logarithm of the total attenuation is simply ln A(E)=-ρ∑_i μ_i (E)z_i(x+y), where z_i = (x r_i + y r'_i)/(x+y) is the effective isotopic concentration vector of the combined pit-reciprocal-extension. It can be shown that infinitely many combinations of x, y, r_i, and r'_i will produce the same value of z_i. To illustrate this consider three scenarios for r_i and r'_i, with x=5 cm and y=2 cm:
* Scenario 1: r_i and r'_i correspond to WGPu and RGPu. See Table <ref> for detailed values. In this case z_i corresponds to intermediate-grade plutonium.
* Scenario 2: Both r_i and r'_i are simply equal to z_i from Scenario 1, i.e. correspond to intermediate-grade plutonium.
* Scenario 3: r'_i corresponds to super-grade plutonium. Using z_i from the above scenarios, we find r_i to correspond to low-intermediate grade plutonium.
In these three scenarios the enrichment levels for the pit varied between 70% and 93%, while the effective isotopic concentration vector remained constant at z_i={0.0088,0.784,0.1304,0.0453,0.0315}. Thus all these scenarios will produce the same transmission spectrum. Fig. <ref> shows the results of calculations of transmitted spectra for these three scenarios, showing identical transmitted outputs. In addition to demonstrating this concept via calculations or MC simulations, it is also possible to determine the maximum range of uncertainty for the reconstructed r_i, given actual values of 𝐫, 𝐫', and corresponding 𝐳. The possible values of 𝐫 are limited by 𝐫_𝐦𝐢𝐧=[𝐳(x+y)-𝐫'_𝐦𝐚𝐱 y]/x and 𝐫_𝐦𝐚𝐱=[𝐳(x+y)-𝐫'_𝐦𝐢𝐧 y]/x. Here 𝐫'_𝐦𝐚𝐱 and 𝐫'_𝐦𝐢𝐧 correspond to the extreme enrichment levels from which the extension can be made. The maximum is then just the super-grade plutonium, while the minimum can be the reactor grade plutonium. The full range of values of 𝐫 is then simply Δ𝐫 = 𝐫_𝐦𝐚𝐱 - 𝐫_𝐦𝐢𝐧 = y/x (𝐫'_𝐦𝐚𝐱-𝐫'_𝐦𝐢𝐧). By using the values of x=5 cm, y=2 cm, and solving for the ^239Pu enrichment r_239, we find that the range corresponds to about Δ r_239=23%, which is consistent with the previous result of 70-93%. This range can be further widened by either increasing y, or using r'_min of even lower enrichment. As already stated, the simple calculation doesn't take into account such effects as in-scatter by neutrons. This necessitates a more thorough MC simulation to fully validate this idea. Such a simulation was performed using the MCNP5 package, and the results can be seen in Fig.
<ref> in the main body. The simulations confirm the conclusion of the analytic calculations above. The importance of the above treatment is that, while the inspectors can use the data to reconstruct z_i, they will not be able to reconstruct r_i beyond simply stating that the pit enrichment level is somewhere between 70% and 93%. The knowledge of this broad range is essentially useless information, as it is already known that the plutonium in most weapons is at the WGPu enrichment levels. This range can be further broadened, if necessary. As discussed above, that can be achieved either by using an extension of a lower enrichment level or one of a greater thickness y - albeit at the cost of longer measurement times. Finally, the reciprocal mask itself can be made modular: the recessed area shadowing the pit can be made of r'_i, while the peripheral part can be made from z_i - thus removing the need for an extension plate. While the analysis above shows that it is possible to protect the absolute isotopic information, some information about pit-to-pit variability may be inferred by the inspectors from comparative analysis of transmission spectra, for example by observing the variability in the absorption lines due to variable concentrations of ^241Pu, which has strong resonances at 4.2 and 8.5 eV. To mitigate this, the hosts and the inspectors could agree to a reduced resolution, as a way of "smearing" the absorption lines from that particular isotope. There are a few ways of achieving this. One approach would be to broaden the t_0 in the TOF technique by using a broader proton pulse for the ^7Li(p,n)^7Be reaction <cit.>. For the case of a chopper technique a wider slit can be used. If necessary, the information security of the system can be further strengthened by extending the epithermal neutron source in this proof system with velocity selection. Velocity selection is a well established technique for filtering out neutrons based on their energy/velocity. A velocity selector is a system of multiple blades whose length, pitch angle and angular velocity allow only neutrons of a particular velocity range to pass through <cit.>. A yet simpler configuration would consist of two choppers: the first one setting the t_0, and the second one, with a phase shift, selecting the neutrons based on their arrival time and thus their energy. Such a device could serve as a physical information barrier, allowing the hosts to limit the measurement to particular pre-negotiated spectral region(s). Meanwhile the inspectors can measure the velocity explicitly via the TOF information, as a way of confirming that the prover is not manipulating the output window of the velocity selector.

§.§ Reciprocal Geometries

The main function of the so-called reciprocal mask is to make it impossible for an observer to extract any sensitive isotopic or geometric information about the pit from a direct transmission measurement of the combined pit-reciprocal geometry. The simplest way of achieving this is by taking a space encompassed by a rectangular prism, filling it with a shape identical to the pit but with the negative of its density, then adding a uniform density until all the negative density voxels have zero density. This amounts to creating the negative of the pit. The 2-d cutaway of such a simple approach can be seen in Fig.
<ref>. While intuitively simple, this particular type of reciprocal mask has a number of problems. For example, it would be very hard to keep subcritical. Even if the criticality of the mask can be significantly reduced (e.g. by slicing it perpendicular to the beam axis and introducing space between the slices), the combined thickness of the pit-reciprocal configuration is unnecessarily high, thus necessitating long measurement times for a statistically significant detection. A much more optimal reciprocal mask can be built simply by realizing that the thickness of the mask along a transmission axis needs to be equal to D-z, where D is some constant combined thickness and z is the thickness of the pit along that axis. So, for a hollow shell of internal and external radii r_1 and r_0 the reciprocal can be defined via its thickness along the beam axis d=D-2(√(r_0^2-y^2)-√(r_1^2-y^2)) for y<r_1 and d=D-2√(r_0^2-y^2) for r_1 ≤ y ≤ r_0, where y is the vertical coordinate. A combination of the pit and the reciprocal is illustrated in Figure <ref>. For this particular case the combined thickness amounts to D=5 cm. The geometry and the enrichment of the combined pit and reciprocal are important when it comes to safety considerations. As suggested earlier, the wrong geometry may either be too close to criticality, or simply impossible to construct. Thus a criticality analysis of the geometries needs to be performed. As a neutron is incident on the pit or the reciprocal, it can trigger neutron induced fission, leading to a fission chain. The time dependence of the chain and the number of fissions can be determined from N(t)= exp[(k_eff-1)t/τ], where t is time, τ is the mean lifetime of a neutron, on the order of 10 ns, and k_eff is the effective multiplication factor. Positive values of k_eff-1 cause the reaction to quickly diverge in what is called a criticality event. For example, for k_eff=1.1 it would take less than a microsecond for all nuclei in the pit to undergo fission, resulting in a nuclear explosion. On the other hand, for values of k_eff=0.9 the chain will exponentially decay with a lifetime of ∼100 ns. To determine the feasibility and the safety of the proposed configuration a set of MCNP5 simulations was performed to determine the k_eff. For the geometry described in Fig. <ref> and made of WGPu the k-effective was determined to be k_eff=0.866 ± 0.001. A criticality analysis was also performed on the 78% enrichment configuration described in Fig. <ref>, where a 78% enriched pit and reciprocal are followed by a 2 cm extension of the same enrichment level. For this configuration k_eff=0.8318 ± 0.0002. For comparison, the new graphite pile at MIT's Nuclear Reactor Lab has k_eff≈ 0.82 <cit.>. It is not shielded, is open for general access and for educational purposes, and doesn't require any certification or regulatory oversight. To explore ways of further reducing this number, the reciprocal geometry in Fig.
<ref> was modified by breaking it down into individual concentric hollow cylinders, which have been extended along the z-axis in a "telescope" configuration. Such a modification significantly drops the k-effective, bringing it to k_eff=0.621 ± 0.001. In conclusion, the assemblies used in the concept described in this work are safe from the point of view of criticality considerations.

§ FIGURES

Dear Sir or Madam,

This paper proposes a treaty verification concept which, if adopted by nuclear weapon states, could in the future enable nuclear disarmament treaties which are far more ambitious and have a further reach than the arms reduction treaties of the past. The lack of trusted verification technologies has been a significant impediment to these treaties. The challenge of these technologies involves detecting cheating attempts while protecting the nuclear secrets of the weapon states. Significant new work has been done in this area, as discussed in the introduction, however there are no clear winners: most concepts have had either very strong information security, or very strong capability of detecting hoaxes, but never both. The concept proposed in this paper, we believe, provides just that combination. We believe that this topic is of broad relevance - both to the scientific community, as well as the general public. Nuclear disarmament and thus treaty verification should be of importance to anyone who is worried about intentional or accidental nuclear war. This research applies physics concepts to a general societal problem - and as such it's a perfect fit for Nature Communications. This paper will excite other scientists to think about the policy problems where rigorous scientific work can have a significant impact.

Sincerely,

Areg Danagoulian, Jake J. Hecla
http://arxiv.org/abs/1709.09736v1
{ "authors": [ "Jake J. Hecla", "Areg Danagoulian" ], "categories": [ "physics.soc-ph", "physics.ins-det" ], "primary_category": "physics.soc-ph", "published": "20170927212050", "title": "Nuclear Disarmament Verification via Resonant Phenomena" }
Sanjeev Kumar^du ([email protected]), Usha Kulshreshtha^duk,iowa,ifp ([email protected], [email protected]), Daya Shankar Kulshreshtha^du,iowa,ifp ([email protected], [email protected]), Sarah Kahlen^ifp ([email protected]), Jutta Kunz^ifp ([email protected])

^du Department of Physics and Astrophysics, University of Delhi, Delhi-110007, India
^duk Department of Physics, Kirori Mal College, University of Delhi, Delhi-110007, India
^iowa Department of Physics and Astronomy, Iowa State University, Ames, 50010 IA, USA
^ifp Institut für Physik, Universität Oldenburg, Postfach 2503, D-26111 Oldenburg, Germany

In this work we present some new results obtained in a study of the phase diagram of charged compact boson stars in a theory involving a complex scalar field with a conical potential coupled to a U(1) gauge field and gravity. We here obtain new bifurcation points in this model. We present a detailed discussion of the various regions of the phase diagram with respect to the bifurcation points. The theory is seen to contain rich physics in a particular domain of the phase diagram.

Some New Results on Charged Compact Boson Stars
===============================================

In this work we study the phase diagram of charged compact boson stars in a theory involving a complex scalar field with a conical potential coupled to a U(1) gauge field and gravity <cit.>. A study of the phase diagram of the theory yields new bifurcation points (in addition to the first one obtained earlier, cf. Refs. <cit.>), which implies rich physics in the phase diagram of the theory. In particular, we present a detailed discussion of the various regions in the phase diagram with respect to the bifurcation points. Let us recall that boson stars (introduced long ago <cit.>) represent localized self-gravitating solutions studied widely in the literature <cit.>. In Refs. <cit.>, three of us have undertaken studies of boson stars and boson shells in a theory involving a massive complex scalar field coupled to a U(1) gauge field A_μ and gravity in the presence of a cosmological constant Λ. Our present studies extend the work of Refs. <cit.>, performed in a theory without a cosmological constant Λ, for a complex scalar field with only a conical potential, i.e., the scalar field is considered to be massless. Such a choice is possible for boson stars in a theory with a conical potential, since this potential yields compact boson star solutions with sharp boundaries, where the scalar field vanishes. This is in contrast to the case of non-compact boson stars, where the mass of the scalar field is a basic ingredient for the asymptotic exponential fall-off of the solutions. We construct the boson star solutions of this theory numerically. Our numerical method is based on the Newton-Raphson scheme with an adaptive stepsize Runge-Kutta method of order 4. We have calibrated our numerical techniques by reproducing the work of Refs.
<cit.>and <cit.>.We consider the theory defined by the following action(with V(|Φ|):=λ |Φ|, where λ is a constant parameter): S = ∫[ R/16π G +ℒ_M ] √(-g) d^4 x, ℒ_M= - 1/4 F^μν F_μν-( D_μΦ)^* ( D^μΦ)- V(|Φ|) , D_μΦ =(∂_μΦ + i e A_μΦ), F_μν =(∂_μ A_ν - ∂_ν A_μ) .Here R is the Ricci curvature scalar,G is Newton's gravitational constant.Also, g = det(g_μν),where g_μν is the metric tensor,and the asterisk in the above equation denotes complex conjugation.Using the variational principle, the equations of motion are obtained as:G_μν ≡R_μν-1/2g_μνR = 8π G T_μν ,∂_μ ( √(-g) F^μν) = -i e √(-g)[Φ^* (D^νΦ)-Φ (D^νΦ)^* ] , D_μ(√(-g)D^μΦ)= λ/2√(-g) Φ/|Φ| , [D_μ(√(-g)D^μΦ)]^*= λ/2√(-g) Φ^*/|Φ| .The energy-momentum tensor T_μν is given byT_μν = [ ( F_μα F_νβ g^αβ -1/4 g_μν F_αβ F^αβ) + (D_μΦ)^* (D_νΦ)+ (D_μΦ) (D_νΦ)^* -g_μν((D_αΦ)^* (D_βΦ)) g^αβ-g_μνλ( |Φ|) ] . To construct spherically symmetric solutionswe adopt the static spherically symmetric metricwith Schwarzschild-like coordinatesds^2= [ -A^2 N dt^2 + N^-1 dr^2 +r^2(dθ^2 + sin^2 θ dϕ^2) ].This leads to the components of Einstein tensor (G_μν) G_t^t= [ -[r(1-N)]'/r^2] ,G_r^r= [ 2 r A' N -A[r(1-N)]'/A r^2] ,G_θ^θ = [ 2r[rA' N]' + [A r^2 N']'/2 A r^2] = G_φ^φ . Here the arguments of the functions A(r) and N(r) have been suppressed.For solutions with a vanishing magnetic field,the Ansätze for the matter fields have the form: Φ(x^μ)=ϕ(r) e^iω t,A_μ(x^μ) dx^μ = A_t(r) dt . We introduce new constant parameters:β=λ e/√(2) , α^2 (:=a)= 4π G β^2/3/e^2 .Here a:=α^2 is dimensionless. We then redefine ϕ(r) and A_t(r):h(r)=(√(2) e ϕ(r))/β^1/3 ,b(r)=(ω+e A_t(r))/β^1/3 . Introducing a dimensionless coordinate r̂defined by r̂:=β^1/3 r (implying d/dr=β^1/3d/dr̂), Eq. (<ref>) reads:h(r̂)=(√(2) e ϕ(r̂))/β^1/3 ,b(r̂)=(ω+e A_t(r̂))/β^1/3 . The equations of motion in terms of h(r̂) and b(r̂)(where the primes denote differentiation with respect to r̂, and sign (h) denotes the usual signature function) read:[A N r̂^2 h']'=r̂^2/AN(A^2Nsign(h) -b^2 h) , [r̂^2 b'/A]'=b h^2 r̂^2/AN.We thus obtain the set of equations:N' =[1-N/r̂ -α^2 r̂/A^2 N(A^2 N^2 h'^2 + N b'^2 . .+2 A^2 N h+ b^2 h^2)], A' =[α^2 r̂/A N^2(A^2 N^2 h'^2 + b^2 h^2)] ,h”=[ α^2/A^2Nr̂ h' (2A^2h +b'^2)-h'(N+1)/r̂ N +A^2Nsign(h)-b^2 h/A^2 N^2], b”=[α^2/A^2 N^2r̂b'(A^2 N^2 h'^2 + b^2 h^2) -2 b'/r̂ + b h^2/N] . For the metric function A(r̂) we choose theboundary condition A(r̂_o)=1, where r̂_o is the outer radius of the star.For constructing globally regular ball-like boson star solutions, we choose:N(0)=1,b'(0)=0,h'(0)=0 ,h(r̂_o)=0 ,h'(r̂_o)=0 . In the exterior region r̂>r̂_o we match the Reissner-Nordström solution. The theory has a conserved Noether current:j^μ=-ie [ Φ(D^μΦ)^*-Φ^* (D^μΦ) ] , j^μ_ ;μ = 0 .The charge Q of the boson star is given byQ=-1/4π∫_0 ^r̂_o j^t √(-g) dr dθ dϕ , j^t=-h^2(r̂) b(r̂)/A^2(r̂) N(r̂) . For all boson star solutions we obtain the mass M(in the units employed): M= (1-N(r̂_o)+α^2 Q^2/r̂_o^2)r̂_o/2 .We now study the numerical solutions of Eqs. (<ref>)-(<ref>)with the boundary conditions defined byA(r̂_o)=1 and Eq. (<ref>), and determine their domain of existence for a sequence of specific values of the parameter a.Let us recall here that the theory defined by the action(Eq. (<ref>)) originally has two parameters e and λwhich are the two coupling constants of the theory.At a later stage we have introduced the new parameters β and a(:=α^2), and we have rescaled the radial coordinate and the matter functions. 
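As an illustration only - not the production code used for the results reported here - the rescaled system can be integrated with an off-the-shelf adaptive Runge-Kutta routine. In the sketch below the value of a, the tolerances and the trial central values are assumptions for demonstration purposes; a Newton-Raphson loop on (h(0), b(0)) would then enforce h'(r̂_o)=0 at the boundary, and the rescaling noted in the comments imposes A(r̂_o)=1 afterwards.

```python
import numpy as np
from scipy.integrate import solve_ivp

a = 0.2  # illustrative value of the single parameter a = alpha^2 (an assumption)

def rhs(r, y):
    # y = [N, A, h, h', b, b'], the rescaled system of equations above
    N, A, h, dh, b, db = y
    q = A**2 * N**2 * dh**2 + b**2 * h**2
    dN = (1.0 - N) / r - a * r / (A**2 * N) * (
        A**2 * N**2 * dh**2 + N * db**2 + 2.0 * A**2 * N * h + b**2 * h**2)
    dA = a * r / (A * N**2) * q
    d2h = (a * r * dh * (2.0 * A**2 * h + db**2) / (A**2 * N)
           - dh * (N + 1.0) / (r * N)
           + (A**2 * N * np.sign(h) - b**2 * h) / (A**2 * N**2))
    d2b = a * r * db * q / (A**2 * N**2) - 2.0 * db / r + b * h**2 / N
    return [dN, dA, dh, d2h, db, d2b]

def shoot(h0, b0, r_max=50.0):
    """Integrate outwards from the center for trial values h(0)=h0, b(0)=b0.

    The star boundary is detected as the point where h reaches zero.
    """
    boundary = lambda r, y: y[2]       # h = 0 terminates the integration
    boundary.terminal, boundary.direction = True, -1
    y0 = [1.0, 1.0, h0, 0.0, b0, 0.0]  # N(0)=1, h'(0)=b'(0)=0; A(0)=1 for now
    sol = solve_ivp(rhs, (1e-6, r_max), y0, events=boundary,
                    rtol=1e-10, atol=1e-12)
    # rescaling A -> A/A(r_o), b -> b/A(r_o) imposes A(r_o)=1 afterwards,
    # since the equations are invariant under (A, b) -> (lam*A, lam*b)
    return sol
```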
Then the parameter β does not appear in the resulting set of equations (<ref>)-(<ref>). Thus the numerical solutions of these coupled differential equations can be studied by varying only one parameter, namely a. We first consider the phase diagram of the theory based on the values of the fields at the origin of the boson star, the vector field, b(0), and the scalar field, h(0), obtained by studying a sequence of values of the parameter a. We observe very interesting phenomena near specific values of a, where the system is seen to have bifurcation points B_1, B_2 and B_3. These correspond to the following values of a: a_c_1≃0.198926,  a_c_2≃0.169311 and a_c_3≃0.168308, respectively, and the possibility of further bifurcation points is not ruled out. Thus the theory is seen to possess rich physics in the domain a = 0.22 to a ≃ 0.16. For a clear discussion, we divide the phase diagram in the vicinity of B_1 into four regions denoted by IA, IB, IIA and IIB (as seen in Fig. <ref>). The asterisks seen in Fig. <ref>, coinciding with the axis b(0) (which corresponds to h(0)=0), represent the transition points from the boson stars to boson shells <cit.>. The regions IA, IIA and IIB do not have any further bifurcation points. However, the region IB is seen to contain rich physics, as evidenced by the occurrence of more bifurcation points in this region. For better detail, the region IB is magnified in Fig. <ref>. The region IB is then further divided into the regions IB1, IB2 and IB3 in the vicinity of B_2, as seen in Fig. <ref>. The region IB3 finally is seen to have the further bifurcation point B_3. In the vicinity of B_3 we therefore further subdivide the phase diagram into the regions IB3a, IB3b and IB3c, as seen in Fig. <ref>. The region IB3b is seen to have closed loops, and the behaviour of the phase diagram in this region is akin to the one of the region IB2. Also, the insets shown in Figs. <ref> and <ref> represent parts of the phase diagram with higher resolution. The figures demonstrate that, as we change the value of a from a=0.225 to a=0, a wealth of new physics emerges. While going from a=0.225 to the critical value a=a_c_1, we observe that the solutions exist in two separate domains, IIA and IIB (as seen in Fig. <ref>). However, as we decrease a below a=a_c_1, the solutions of the theory are seen to exist in the regions IA and IB (instead of the regions IIA and IIB). For the sake of completeness it is important to emphasize here that the physics in the domain corresponding to values of a larger than a=0.225 conceptually remains the same as described by the value a=0.225. As we decrease the value of a from the first critical value a=a_c_1 to the next critical value a=a_c_2, we notice that the region IA in the phase diagram shows a continuous deformation of the curves, and the region IB is seen to have its own rich physics, as explained in the foregoing. As we decrease a below a_c_2, we observe that in the region IA there is again a continuous deformation of the curves all the way down to a=0. However, in the region IB we encounter another bifurcation point, which divides the region IB into IB1, IB2 and IB3. We observe that in the region IB1 there is a continuous deformation of the curves, and the region IB2 contains closed loops of the curves. The region IB3 is subdivided into the regions IB3a, IB3b and IB3c. The region IB3a shows a continuous deformation of the curves, and the region IB3b is seen to contain closed loops.
It is tempting to conjecture that there is a whole sequence of further bifurcation points, leading to a self-similar pattern of the new subregions involved. The numerical calculations, however, become more and more challenging as one proceeds from the first to the higher bifurcations, since an increasing numerical accuracy is necessary to map out the domain of existence. Note that the value of a had to be specified to 6 decimal digits already for B_2 and B_3. Thus it is the global accuracy of the scheme which presents a limiting factor. Within this accuracy, though, the Newton-Raphson method will provide a new solution when an adequate starting solution has been specified. A plot of the radius r̂_o of the solutions versus the vector field at the center of the star b(0) is depicted in Fig. <ref>. As before, the point B_1 corresponds to the first bifurcation point, and the four regions IA, IB and IIA, IIB in the vicinity of the bifurcation point are indicated. Again, the region IB shown in Fig. <ref> is enlarged and shown in Fig. <ref>, with the region IB3 being enlarged further and depicted in Fig. <ref>. The asterisks shown in Fig. <ref> again represent the transition points from the boson stars to boson shells. The oscillating behavior seen in Figs. <ref> to <ref> in the regions IIA and IB translates in Figs. <ref> to <ref> into a spiral behavior. The inset in Fig. <ref> represents a part of the region IB with higher resolution. Let us now turn to the global properties of the solutions, their mass M and their charge Q. The mass M versus the radius r̂_o is shown in Fig. <ref>, while Fig. <ref> again magnifies the region of the bifurcations. The charge Q shows a very similar dependence to the mass. This is illustrated in Fig. <ref> for the bifurcation region. To understand the stability of the boson stars, one can consider the mass M versus the charge Q, as shown in Fig. <ref>, or the mass per unit charge M/Q versus the charge, as shown in Figs. <ref> and <ref>. Let us first consider Fig. <ref>. Here the curves M versus Q, corresponding to the region IA and the smaller values of a, all increase monotonically from M=Q=0 to the respective transition points with boson shells, marked by the crosses. The solutions on these curves can be considered as the fundamental solutions for their respective value of a. Thus they should be stable. In fact, all curves in region IA should be stable, representing the solutions with the lowest mass for a given charge (and parameter a). However, above a certain value of a, these curves no longer reach a boson shell; instead their upper endpoint represents a solution where a throat is formed. The exterior space-time r>r_0 then corresponds to the exterior of an extremal RN space-time. This happens whenever the value b(0)=0 is encountered, as discussed in detail previously <cit.>. For the curves shown in region IIB both endpoints correspond to solutions with throats, since at both endpoints b(0)=0 is encountered. Since these solutions also represent the lowest mass solutions for a given charge, they should be stable as well. In the region IIA, however, the solutions exhibit the typical oscillating/spiral behavior known for non-compact boson stars. In a mass versus charge diagram, this translates into the presence of a sequence of spikes, as seen in the insets of Figs. <ref> and <ref>. Here the solutions should be stable only on their fundamental branch, reaching up to a maximal value of the mass and the charge, where a first spike is encountered.
With every following spike a new unstable mode is expected to arise, as we conclude by analogy with the properties of non-compact boson stars. In this work our focus has been on the bifurcations. Let us therefore now inspect the region of the bifurcations IB, starting with the limiting curves. For the value a_c_1 the two branches of solutions limiting the region IA possess lower masses than the two branches of solutions limiting the region IB, and should therefore be more stable. The two branches of solutions limiting the region IB might be classically stable as well, until the first extrema of mass and charge are encountered. Quantum mechanically, however, they would be unstable, since tunnelling might occur. Beyond these extrema, unstable modes should be present, and thus the solutions should also be classically unstable. These arguments can be extended to all the solutions in region IB. From a quantum point of view they should be unstable, since for all of them there exist solutions in region IA, which have lower masses but possess the same values of the charge. Classically, however, the lowest mass solutions for a given a within the region IB might be stable, while the higher mass solutions should clearly possess unstable modes and be classically unstable. Fig. <ref> zooms into the bifurcation region of the M/Q versus Q diagram, to illustrate that the solutions in the bifurcation region indeed correspond to higher mass solutions. In conclusion, we have studied in this work a theory of a complex scalar field with a conical potential, coupled to a U(1) gauge field and gravity <cit.>. We have constructed the boson star solutions of this theory numerically and investigated their domain of existence, their phase diagram, and their physical properties. We have shown that the theory has rich physics in the domain a=0.22 to a≃0.16, where we have identified three bifurcation points B_1, B_2 and B_3 of possibly a whole sequence of further bifurcations. We have investigated the physical properties of the solutions, including their mass, charge and radius. By considering the mass versus the charge (or the mass per unit charge versus the charge) we have given arguments concerning the stability of the solutions. For all values of a studied, there is a fundamental branch of compact boson star solutions, which should be stable, since they represent the solutions with the lowest mass for a given value of the charge, and thus represent the ground state. In the region of the bifurcations additional branches of solutions are present, which possess higher masses for a given charge. Thus these solutions correspond to excited states of the system. The lowest of these might be classically stable as well, and only quantum mechanically unstable. To definitely answer this question, a mode stability analysis should be performed, which is, however, beyond the scope of this paper, representing a topic of separate full-fledged investigations. Finally, we would like to mention that detailed investigations of this theory in the presence of the cosmological constant Λ, with 3D plots of the phase diagrams involving the various physical quantities of the theory, are currently under way and will be reported separately. We would like to thank James Vary for very useful discussions. This work was supported in part by the US Department of Energy under Grant No.
PHY-0904782, by the DFG Research Training Group 1620 Models of Gravity as well as by FP7, Marie Curie Actions, People IRSES-606096. SK would like to thank the CSIR, New Delhi,for the award of a Research Associateship. 50Kleihaus:2009krB. Kleihaus, J. Kunz, C. Lämmerzahl and M. List, “Charged Boson Stars and Black Holes”, Phys. Lett. B 675, (2009) 102, [arXiv:0902.4799 [gr-qc]].Kleihaus:2010epB. Kleihaus, J. Kunz, C. Lämmerzahl and M. List, “Boson Shells Harbouring Charged Black Holes”, Phys. Rev. D82, (2010) 104050, [arXiv:1007.1630 [gr-qc]].Feinblum:1968D. A. Feinblum, W. A. McKinley,“Stable states of a scalar particle in its own gravitational field”,Phys. Rev.168, 1445 (1968).Kaup:1968zz D. J. Kaup,“Klein-Gordon Geon,” Phys. Rev.172, 1331 (1968).Ruffini:1969qy R. Ruffini, S. Bonazzola, “Systems of selfgravitating particles in general relativity and the concept of an equation of state,” Phys. Rev.187, 1767 (1969).Jetzer:1991jrP. Jetzer, “Boson Stars”,Phys. Rept.220 (1992) 163.Lee:1991axT. D. Lee and Y. Pang,“Nontopological solitons”, Phys. Rept.221, (1992) 251.Mielke:2000mh E. W. Mielke and F. E. Schunck, “Boson stars: Alternatives to primordial black holes?,” Nucl. Phys.B 564, 185 (2000). [arXiv:gr-qc/0001061].Liebling:2012fv S. L. Liebling and C. Palenzuela, “Dynamical Boson Stars,” Living Rev. Rel.15, 6 (2012) [arXiv:1202.5809 [gr-qc]].Friedberg:1976me R. Friedberg, T. D. Lee and A. Sirlin, “A Class Of Scalar-Field Soliton Solutions In Three Space Dimensions,” Phys. Rev.D 13, 2739 (1976).Hartmann:2012da B. Hartmann, B. Kleihaus, J. Kunz, I. Schaffer, Compact boson stars, Phys. Lett. B 714, (2012) 120, [arXiv:1205.0899 [gr-qc]]. Hartmann:2012wa B. Hartmann and J. Riedel, “Glueball condensates as holographic duals of supersymmetric Q-balls and boson stars,” Phys. Rev. D 86, 104008 (2012)Hartmann:2013kna B. Hartmann, B. Kleihaus, J. Kunz and I. Schaffer, “Compact (A)dS Boson Stars and Shells,” Phys. Rev. D 88, 124033 (2013), [arXiv:1310.3632 [gr-qc]].Kumar:2014knaS. Kumar, U. Kulshreshtha and D. Shankar Kulshreshtha, “Boson stars in a theory of complex scalar fields coupled to the U(1) gauge field and gravity,” Class. Quant. Grav.31, 167001 (2014). Kumar:2015siaS. Kumar, U. Kulshreshtha and D. S. Kulshreshtha, “Boson stars in a theory of complex scalar field coupled to gravity,” Gen. Rel. Grav.47, 76 (2015). Kumar:2016oopS. Kumar, U. Kulshreshtha and D. S. Kulshreshtha, “New Results on Charged Compact Boson Stars,” Phys. Rev. D 93, 101501 (2016)[arXiv:1605.02925 [hep-th]]. Kumar:2016sxxS. Kumar, U. Kulshreshtha and D. S. Kulshreshtha, “Charged compact boson stars and shells in the presence of a cosmological constant,” Phys. Rev. D 94, 125023 (2016).
http://arxiv.org/abs/1709.09445v1
{ "authors": [ "Sanjeev Kumar", "Usha Kulshreshtha", "Daya Shankar Kulshreshtha", "Sarah Kahlen", "Jutta Kunz" ], "categories": [ "hep-th", "gr-qc" ], "primary_category": "hep-th", "published": "20170927105607", "title": "Some New Results on Charged Compact Boson Stars" }
Planets form in disks around young stars. The planet formation process may start when the protostar and disk are still deeply embedded within their infalling envelope. However, unlike more evolved protoplanetary disks, the physical and chemical structure of these young embedded disks is still poorly constrained. We have analyzed ALMA data for ^13CO, C^18O and N_2D^+ to constrain the temperature structure, one of the critical unknowns, in the disk around L1527. The spatial distribution of ^13CO and C^18O, together with the kinetic temperature derived from the optically thick ^13CO emission and the non-detection of N_2D^+, suggests that this disk is warm enough (≳ 20 K) to prevent CO freeze-out.

§ INTRODUCTION

Disks around young stars are the birthplace of planets. The chemical structure of these disks, and thus of the material that will build up the planets, determines planet compositions. In addition, the disk physical structure influences the planet formation process. Therefore, evolved protoplanetary disks (Class II sources) have been intensively studied and are becoming well characterized both physically (e.g. <cit.>; <cit.>) and chemically (e.g. <cit.>; <cit.>; <cit.>; <cit.>). However, grain growth already starts when the protostellar system is still deeply embedded in its natal molecular cloud (<cit.>; <cit.>) and the HL Tau images support the idea that planet formation begins during this Class 0/I phase (<cit.>). Thus, young embedded disks may reflect the true initial conditions for planet formation. Although several embedded disks are now known (e.g. <cit.>; <cit.>; <cit.>; <cit.>), and ALMA enables us to spatially resolve molecular emission from these young disks, their physical and chemical conditions remain unconstrained. One of the critical unknowns is the temperature structure, since this directly influences the volatile composition of the planet-forming material. In regions where the temperature exceeds ∼20 K, CO will be mainly present in the gas phase, while at lower temperatures CO is frozen out onto dust grains. In addition, this CO ice is the starting point for the formation of more complex molecules.

§ PROBING THE DISK TEMPERATURE STRUCTURE WITH ^13CO AND N_2D^+

We have analyzed our own and archival ALMA data (PIs: Tobin, Koyamatsu, Ohashi, Sakai) and used ^13CO, C^18O and N_2D^+ observations to constrain the temperature in the embedded disk of L1527. This disk is particularly interesting because its almost edge-on configuration allows for a direct probe of the vertical structure. Emission originating in the disk can be isolated from the envelope contribution based on velocity. Comparing the results from a thin disk model (<cit.>) with an infall velocity profile to the results from a model including Keplerian rotation in the inner region shows that the emission in the highest velocity channels is solely due to the disk (Fig. <ref>). The spatial extent of the ^13CO and C^18O J = 2-1 emission in these channels suggests that CO is vertically present throughout the disk, including the disk midplane (Fig. <ref>). In addition, the ratio of the ^13CO and C^18O intensities shows that the ^13CO emission is optically thick (τ > 3), and thus traces the kinetic temperature of the gas. The derived temperatures are ∼30-40 K, above the CO freeze-out temperature of ∼20 K (Fig.
<ref>).In contrast, N_2H^+ observations toward several protoplanetary disks have shown that the outer disk midplane becomes cold enough for CO to freeze out (<cit.>,<cit.>). The N_2H^+ ion traces CO freeze-out because its main destructor is gas-phase CO (<cit.>; <cit.>). The deuterated form of N_2H^+, N_2D^+, is not detected in the L1527 disk, corroborating the observation that CO is present in the gas phase throughout the disk.Altogether, these preliminary results are in agreement with physical models of embedded disks (<cit.>, <cit.>) and suggest that the young disk in L1527 is warm enough to prevent CO freeze-out.§ ACKNOWLEDGMENTS Astrochemistry in Leiden is supported by the European Union A-ERC grant 291141 CHEMPLAN, by the Netherlands Research School for Astronomy (NOVA) and by a Royal Netherlands Academy of Arts and Sciences (KNAW) professor prize. M.L.R.H acknowledges support from a Huygens fellowship from Leiden University. [Aikawa et al. 2015]Aikawa2015 Aikawa, Y., Furya, K., Nomura, H., & Qi, C. 2015, ApJ, 807, 120[ALMA Partnership 2015]ALMA2015 ALMA Partnership 2015,ApJ, 808, 3[Andrews et al. 2010]Andrews2010 Andrews, S.M., Wilner, D.J., Hughes, A.M., Qi, C., & Dullemond, C.P. 2010, ApJ, 723, 1241[Aso et al. 2015]Aso2015 Aso, Y., Ohashi, N., Saigo, K.,2015, ApJ, 812, 27[Dutrey et al. 2007]Dutrey2007 Dutrey, A., Henning, T., Guilloteau, S.,2007, A&A, 464, 615[Harsono et al. 2014]Harsono2014 Harsono, D., Jørgensen, J.K., van Dishoeck, E.F., Hogerheijde, M.R., Bruderer, S, Persson, M.V., & Mottram, J.C. 2014, A&A, 562, A77[Harsono et al. 2015]Harsono2015 Harsono, D., Bruderer, S., & van Dishoeck, E.F. 2015, A&A, 582, A41[Huang et al. 2017]Huang2017 Huang, J., Öberg, K.I., Qi, C.,2017, ApJ, 835, 231[Kwon et al. 2009]Kwon2009 Kwon, W., Looney, L.W., Mundy, L.G., Chiang, H.-F., & Kemball, A.J. 2009, ApJ, 696, 841[Miotello et al. 2014]Miotello2014 Miotello, A., Testi, L., Lodato, G.,2014, A&A, 567, A32[Murillo et al. 2013]Murillo2013 Murillo, N.M., Lai, S.-P., Bruderer, S., Harsono, D., & van Dishoeck, E.F. 2013, A&A, 560, A103[Öberg et al. 2010]Oberg2010 Öberg, K.I., Qi, C., Fogel, J.K.J.,2010, ApJ, 720, 480[Öberg & Bergin 2016]Oberg2016 Öberg, K.I., & Bergin, E.A. 2016, ApJL, 831, L19[Qi et al. 2013]Qi2013 Qi, C., Öberg, K.I., Wilner, D.J.,2013, Science, 341, 630[Qi et al. 2015]Qi2015 Qi, C., Öberg, K.I., Andrews, S.M.,2015, ApJ, 813, 128[Schwarz et al. 2016]Schwarz2016 Schwarz, K., Bergin, E.A., Cleeves, L.I.,2016, ApJ, 823, 91[Thi et al. 2004]Thi2004 Thi, W.F., van Zadelhoff, G.J., & van Dishoeck, E.F. 2004, A&A, 425, 955[Tobin et al. 2012]Tobin2012 Tobin, J.J., Hartmann, L., Chiang, H.-F.,2012, Nature, 492, 83 [van 't Hoff et al. 2017]vantHoff2017 van 't Hoff, M.L.R., Walsh, C., Kama, M., Facchini, S., & van Dishoeck, E.F. 2017, A&A, 599, A101
http://arxiv.org/abs/1709.09185v1
{ "authors": [ "Merel L. R. van 't Hoff", "John J. Tobin", "Daniel Harsono", "Ewine F. van Dishoeck" ], "categories": [ "astro-ph.SR" ], "primary_category": "astro-ph.SR", "published": "20170926180023", "title": "Unveiling the physical and chemical conditions in the young disk around L1527" }
Addressing the exciton fine structure in colloidal nanocrystals: the case of CdSe nanoplatelets

Elena V. Shornikova,^∗^a,b Louis Biadala,^∗^a,c Dmitri R. Yakovlev,^∗^a,d Victor F. Sapega,^d Yuri G. Kusrayev,^d Anatolie A. Mitioglu,^e Mariana V. Ballottin,^e Peter C. M. Christianen,^e Vasilii V. Belykh,^a,f Mikhail V. Kochiev,^f Nikolai N. Sibeldin,^f Aleksandr A. Golovatenko,^d Anna V. Rodina,^d Nikolay A. Gippius,^g Alexis Kuntzmann,^h Ye Jiang,^h Michel Nasilowski,^h Benoit Dubertret,^h and Manfred Bayer^a,d

We study the band-edge exciton fine structure and in particular its bright-dark splitting in colloidal semiconductor nanocrystals by four different optical methods based on fluorescence line narrowing and time-resolved measurements at various temperatures down to 2 K. We demonstrate that all these methods provide consistent splitting values and discuss their advantages and limitations. Colloidal CdSe nanoplatelets with thicknesses of 3, 4 and 5 monolayers are chosen for experimental demonstrations. The bright-dark splitting of excitons varies from 3.2 to 6.0 meV and is inversely proportional to the nanoplatelet thickness. Good agreement between the experimental and theoretically calculated size dependence of the bright-dark exciton splitting is achieved. The recombination rates of the bright and dark excitons and the bright to dark relaxation rate are measured by time-resolved techniques.

^a Experimentelle Physik 2, Technische Universität Dortmund, 44221 Dortmund, Germany. Tel: +49 231 755 3531; E-mail: [email protected], [email protected]
^b Rzhanov Institute of Semiconductor Physics, Siberian Branch of Russian Academy of Sciences, 630090 Novosibirsk, Russia.
^c Institut d'Electronique, de Microélectronique et de Nanotechnologie, CNRS, 59652 Villeneuve-d'Ascq, France. Tel: +33 3 20 19 79 32; E-mail: [email protected]
^d Ioffe Institute, Russian Academy of Sciences, 194021 St. Petersburg, Russia.
^e High Field Magnet Laboratory (HFML-EMFL), Radboud University, 6525 ED Nijmegen, The Netherlands.
^f P. N. Lebedev Physical Institute, Russian Academy of Sciences, 119991 Moscow, Russia.
^g Skolkovo Institute of Science and Technology, 143026 Moscow, Russia.
^h Laboratoire de Physique et d'Etude des Matériaux, ESPCI, CNRS, 75231 Paris, France.

§ INTRODUCTION

Colloidal nanostructures are intensively investigated because of their bright luminescence and simplicity of fabrication. Starting from 1993 <cit.>, the research has been concentrated on nanometer-sized spherical nanocrystals (NCs), also known as quantum dots (QDs). Recently, two-dimensional nanoplatelets (NPLs) have been synthesized and have attracted great attention due to their remarkable properties. Most importantly, CdSe NPLs with zinc-blende crystal structure have high spontaneous recombination rates,<cit.> narrow ensemble emission spectra due to their atomically controlled thickness,<cit.> and dipole emission oriented within the plane.<cit.> Among other important properties the very efficient fluorescence resonance energy transfer,<cit.> the ultralow stimulated emission threshold,<cit.> the enhanced conductivity due to in-plane transport,<cit.> and the highly efficient charge carrier multiplication<cit.> can be highlighted.
Widely varying structures have been synthesized: CdSe wurtzite nanoribbons or quantum belts<cit.>, NPLs of PbS,<cit.> PbSe,<cit.> Cu_2-xS<cit.>, GeS and GeSe,<cit.> CdS,<cit.> ZnS<cit.>, CdTe<cit.>, and HgTe<cit.>, as well as various core-shell structures (for a review see Ref. <cit.>). Among them CdSe-based NPLs play the role of a model system, whose optical properties, including quantum coherence and exciton dephasing, have been intensively studied.<cit.> Compared to bulk semiconductors, in NPLs the exciton binding energy is drastically increased, e.g. in CdSe from 10 meV to hundreds of meV. There are three reasons for this: (i) the large electron effective mass due to nonparabolicity of the conduction band, (ii) the dimensionality reduction, and (iii) the dielectric confinement.<cit.> This raises questions about the band-edge exciton fine structure and exciton recombination dynamics in these two-dimensional nanostructures. Similar to CdSe QDs, the exciton ground state in CdSe NPLs is a two-fold degenerate dark state |F⟩ with angular momentum projections ± 2 on the quantization axis.<cit.> The first excited state with angular momentum projection ± 1 is an optically active (bright) |A⟩ state, which is separated from the ground state by a bright-dark energy splitting (Δ E_ AF) of several meV. The direct observation of the fine structure states in an ensemble of NCs is often hindered by the line broadening resulting from size dispersion. Typically, the linewidth of ensemble photoluminescence (PL) spectra is in the 100 meV range, much larger than Δ E_ AF of 1-20 meV. Two optical methods are commonly used to measure Δ E_ AF. The first technique is based on fluorescence line narrowing (FLN), which gives direct access to Δ E_ AF.<cit.> The second method relies on the evaluation of Δ E_ AF from the temperature dependence of the PL decay<cit.> (more details are given in Supplementary Section S1). While these two methods gave a similar result when applied to the same CdSe/CdS core/shell QDs with a 3 nm core diameter,<cit.> no comparison has been made on the same bare core NCs. It is important to do so, as a large discrepancy can be found in the literature for QDs with diameters less than 3 nm. Nirmal et al. measured 19 meV in 2.4 nm diameter bare core CdSe QDs by the FLN technique,<cit.> while de Mello Donega et al. reported Δ E_ AF = 1.7 meV in bare core CdSe QDs with a diameter of 1.7 nm from temperature-dependent time-resolved PL,<cit.> claiming that FLN measurements systematically overestimate Δ E_ AF by neglecting any internal relaxation between the exciton states. Recently, it was shown that the Stokes shift in bare core CdSe QDs can also have a contribution from the formation of a dangling bond magnetic polaron.<cit.> Moreover, Δ E_ AF in QDs is strongly affected by the dot shape and symmetry,<cit.> which complicates the comparison of results obtained by different groups. Obviously, more experimental methods are very welcome to address the measurement of the bright-dark exciton splitting in colloidal nanostructures.
Here we suggest and test a few new experimental approaches and examine them together with the commonly used ones on the same samples of CdSe NPLs. In this paper, we exploit four optical methods to study the bright-dark exciton energy splitting in ensemble measurements of CdSe nanoplatelets with thicknesses ranging from 3 to 5 monolayers: (i) fluorescence line narrowing, (ii) temperature-dependent time-resolved PL, (iii) spectrally-resolved PL decay at cryogenic temperatures, and (iv) temperature dependence of PL spectra. Most importantly, we compare fluorescence line narrowing and temperature-dependent time-resolved PL techniques applied to all samples. The results gained by different methods are in good agreement with each other and confirm the bright-dark exciton splitting of several meV in CdSe NPLs measured earlier by one of the methods.<cit.> Comparison of the thickness dependence of the splitting with the results of model calculations allows us to estimate the exchange strength constant and the dielectric constants inside and outside the nanoplatelets. Theoretical calculations within the effective mass approximation, taking the dielectric effect into account, successfully reproduce the experimental size dependence of the bright-dark exciton splitting.

§ EXPERIMENTAL RESULTS

The investigated samples are three batches of CdSe NPLs with thicknesses of L=3, 4, and 5 monolayers (MLs) of CdSe and an additional layer of Cd atoms, so that both sides of the NPLs are Cd-terminated. In the following, these samples will accordingly be referred to as 3ML, 4ML, and 5ML. TEM images of the samples are shown in Fig. <ref>. Parameters of the studied samples are summarized in Table <ref>. Room temperature PL and absorption spectra of the 4ML NPLs are shown in Fig. <ref>a. In the absorption spectra two peaks at 2.426 eV and 2.583 eV, separated from each other by 157 meV, are related to excitons involving the heavy hole (hh) and light hole (lh), respectively. The heavy-hole exciton has a narrow emission line at room temperature with a full width at half maximum (FWHM) of 44 meV and a rather small Stokes shift of 8 meV from the absorption line, which is typical for CdSe NPLs<cit.>. Representative spectra for the 3ML and 5ML samples are given in Fig. <ref> and the corresponding parameters are listed in Table <ref>. Figure <ref>b shows PL spectra of all studied samples at T=4.2 K. The spectra consist of two lines, with the high-energy one (X) attributed to the exciton emission. The shift between these lines varies from 18 to 30 meV (Table <ref>). The PL dynamics of the exciton line measured at T=4.2 K with an avalanche photodiode (APD) (Experimental section) is shown in Fig. <ref>c. The decays exhibit the bi-exponential behavior typical for excitons in colloidal NCs, where the short decay is associated with bright exciton recombination and exciton relaxation from the bright to the dark state, while the long decay is associated with the dark exciton emission. Monoexponential fits of the long-term tails are shown by the black lines; the corresponding decay times τ_ L range from 46 to 82 ns (Table <ref>). In order to resolve the fast initial dynamics in the time range 20-30 ps, streak-camera detection was used (Experimental section). These results for the 4ML and 5ML samples are shown in Fig. <ref>d together with exponential fits (the resulting times τ_ short are given in Table <ref>), while the streak-camera images are shown in Fig.
<ref>. Thicker NPLs have a longer τ_ short; the same trend was reported for spherical QDs.<cit.> The origin of the low-energy line in the NPL emission spectra at low temperatures (Fig. <ref>b) is still under debate.<cit.> The considered options include LO-phonon assisted exciton recombination,<cit.> emission of charged excitons (trions),<cit.> and recombination of a ground exciton state.<cit.> Several experimental features of the studied CdSe NPLs are in favor of the charged exciton origin of this low-energy line: (i) The low-temperature absorption peak is close to the high-energy emission line, confirming its assignment to the exciton ground state (Fig. <ref>). (ii) The energy separation between the PL lines changes with NPL thickness and becomes larger than the 25 meV reported for the LO phonon energies in CdSe NPLs<cit.> (Table <ref>). (iii) The recombination dynamics and its modification in an external magnetic field are very different for the two lines. As one can see in the left panel of Fig. <ref>a, the exciton decay of the high-energy line strongly changes in high magnetic fields of 24 T. Namely, its fast decay component becomes considerably longer and the long decay component shortens, which is a result of magnetic field mixing of the bright and dark exciton states.<cit.> No effect of the magnetic field is found for the dynamics of the low-energy line (right panel of Fig. <ref>a), which is typical for charged excitons with a bright ground state.<cit.> (iv) The magnetic-field-induced degree of circular polarization (DCP) of the PL is also very different for the emission of the high-energy and low-energy lines (Fig. <ref>b), evidencing their different origins. The DCP is defined as P_c = (I^+ - I^-)/(I^+ + I^-), where I^+ and I^- are the intensities of the σ^+ and σ^- circularly polarized emission, respectively. It is controlled by the Zeeman splitting of the exciton complexes and by their spin relaxation dynamics.<cit.> The detailed analysis of the DCP goes beyond the scope of this paper and will be published elsewhere. In this paper we focus on the properties of the high-energy exciton emission line to investigate the fine structure of the neutral exciton in CdSe NPLs. §.§ Fluorescence line narrowing FLN is a commonly used technique to study the band-edge exciton fine structure in colloidal NCs. It is technically demanding, as it requires lasers whose photon energy can be tuned to the exciton resonances, and double or triple spectrometers with high suppression of the scattered laser light for measurements in the vicinity of the laser photon energy. FLN is used to resolve spectral lines in an inhomogeneously broadened ensemble by selective laser excitation.<cit.> Under resonant laser excitation within the inhomogeneously broadened exciton line, a subensemble of NPLs is selectively excited. This results in a strong narrowing of the emission lines in the PL spectrum, as the laser line is in resonance with the bright |± 1⟩ exciton state of only a small fraction of NPLs. The injected excitons relax into the dark state |± 2⟩, where the radiative recombination occurs.
The Stokes shift between the laser photon energy and the dark exciton emission directly gives Δ E_ AF if possible contributions by dangling bond magnetic polarons or acoustic phonon polarons are absent<cit.> and internal relaxation between the exciton states can be neglected. Figure <ref>a shows a PL spectrum of the 5ML sample under nonresonant excitation (black) and an FLN spectrum under resonant excitation at 2.3305 eV (red). In the FLN experiment the broad exciton emission line marked as X vanishes. Instead, an FLN spectrum consisting of several lines appears. We attribute the line with the highest PL intensity to the zero-phonon line (ZPL). Its Stokes shift from the laser photon energy gives Δ E_ AF=3.2 meV. We assign two side peaks in the vicinity of the ZPL to acoustic phonon replicas of the dark and bright excitons. The results for all samples are shown in Fig. <ref>b. The bright-dark exciton splitting, Δ E_ AF, varies from 3.2 meV in the 5ML NPLs to 4.8 meV in the 3ML ones, being about inversely proportional to the NPL thickness L (Table <ref>). §.§ Temperature-dependent time-resolved PL The PL decays of the exciton lines, which are bi-exponential at liquid helium temperature, change with increasing temperature (Fig. <ref>a): in 4ML τ_L drastically shortens from 82 ns at 2.2 K to 0.36 ns at 70 K, and the short decay component decreases in amplitude and vanishes for T>30 K. The recombination rates Γ_ L=τ_ L^-1 deduced from monoexponential fits to the long-term tails for all samples are shown in Fig. <ref>b as functions of temperature. Within the three-level model illustrated in Fig. <ref>c (see Supplementary Section S1 for more details), the recombination rate of the dark exciton, Γ_ F, is assumed to be temperature independent, and the acceleration of Γ_ L with temperature is determined solely by the thermal population of the bright exciton state with the recombination rate Γ_ A≫Γ_ F. The short component of the PL decay is determined by bright exciton recombination and exciton relaxation from the upper lying bright to the dark state with a rate γ_0 (1+N_ B), where γ_0 is the zero-temperature relaxation rate and N_ B = 1/ [ exp(Δ E_ AF / kT) -1 ] is the Bose-Einstein phonon occupation (Fig. <ref>c). This process requires a spin-flip of either the electron or the hole spin in the exciton, and γ_0 is often referred to as a spin-flip rate. γ_th=γ_0 N_ B is the thermal-activation rate for the reversed process. Within the model, the rate equations for the populations of the bright and dark exciton states, p_ A and p_ F, are: d p_ A/dt= -[Γ_ A+γ_0 (N_ B+1) ]p_ A + γ_0 N_ B p_ F , d p_ F/dt= -[Γ_ F+γ_0 N_ B]p_ F + γ_0 (N_ B+1) p_ A . Assuming p_ A(t=0)=p_ F(t=0)=0.5, the dependence of the decay rates on temperature is deduced from the solutions of rate equations (<ref>):<cit.> Γ_ short, L(T) = 1/2[ Γ_ A +Γ_ F+γ_0 coth( Δ E_ AF/2kT) ±√(( Γ_ A -Γ_ F+γ_0 )^2+γ_0^2 sinh^-2(Δ E_ AF/2kT))] . Here the sign “+” before the square root corresponds to Γ_ short=τ_ short^-1 and the sign “-” to Γ_ L. At low temperatures, such that Δ E_ AF≫ kT, Γ_ L=Γ_ F and Γ_ short=Γ_ A+γ_0. Substituting γ_0=Γ_ short(T=2 K)-Γ_ A, we fit the Γ_ L(T) dependences in Fig. <ref>b with equation (<ref>) and obtain the values of Δ E_ AF and Γ_ A. All evaluated parameters are given in Table <ref>.
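As a cross-check of this fitting procedure, the closed-form expression above is easy to evaluate numerically. The following Python sketch is ours rather than part of the original analysis, and the parameter values are only illustrative choices close to the 4ML numbers quoted in the text (Γ_ A=10 ns^-1, Γ_ F=1/82 ns^-1, γ_0=35.6 ns^-1, Δ E_ AF=5.2 meV); with these inputs it reproduces τ_ L≈ 82 ns at 2.2 K and ≈ 0.4 ns at 70 K, as reported above.

```python
import numpy as np

kB = 0.086173  # Boltzmann constant in meV/K

def decay_rates(T, G_A, G_F, g0, dE):
    """Closed-form rates Gamma_short and Gamma_L of the three-level model.
    All rates in ns^-1, dE in meV, T in K."""
    x = dE / (2.0 * kB * T)
    s = G_A + G_F + g0 / np.tanh(x)  # gamma_0 * coth(x)
    r = np.sqrt((G_A - G_F + g0) ** 2 + g0 ** 2 / np.sinh(x) ** 2)
    return 0.5 * (s + r), 0.5 * (s - r)

# Illustrative 4ML-like parameters (not the exact Table values)
G_A, G_F, g0, dE = 10.0, 1.0 / 82.0, 35.6, 5.2

for T in (2.2, 10.0, 30.0, 70.0):
    G_short, G_L = decay_rates(T, G_A, G_F, g0, dE)
    print(f"T = {T:4.1f} K: tau_short = {1e3 / G_short:6.1f} ps, "
          f"tau_L = {1.0 / G_L:6.2f} ns")
```

Note that the same inputs also give τ_ short≈ 22 ps at 2.2 K, consistent with the 20-30 ps initial dynamics resolved by the streak camera.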
The bright exciton recombination rates Γ_ A∼ 10 ns^-1 are in good agreement with the reported Γ_ A =3.6 and 5.5 ns^-1 for CdSe NPLs.<cit.> Note that Γ_ A in CdSe NPLs is about two orders of magnitude faster than in CdSe spherical NCs.<cit.> The zero-temperature relaxation rates are γ_0=35.6 and 24 ns^-1 for the 4ML and 5ML NPLs, respectively. For the 3ML sample an estimate γ_0=40 ns^-1 was made assuming that both τ_ short^-1 and γ_0 increase in thinner NPLs [since no data for τ_ short in 3ML are available, γ_0 is estimated by extrapolating the thickness dependence]. The Δ E_ AF values obtained from the fit for all studied NPLs are plotted in Fig. <ref>d together with the FLN results vs the inverse NPL thickness L^-1. For reference we also show the value for bulk wurtzite CdSe (w-CdSe) by a closed circle: Δ E_ AF^ w=0.13 meV.<cit.> For all samples, FLN gives slightly smaller values, but the trend is the same. These measurements confirm our previous result for CdSe NPLs with Δ E_ AF of a few meV.<cit.> Remarkably, the values from Table <ref> are sufficient not only for characterizing the PL dynamics, but also for modeling the temperature evolution of the PL spectra without any additional parameters (see Subsection <ref>). §.§ Spectrally-resolved PL decay To obtain more insight into the exciton emission of the NPLs, we performed a thorough analysis of the spectrally-resolved PL decay. Figure <ref>a shows the time-resolved PL at different spectral energies (streak-camera-like data presentation) for the 4ML sample measured at T=2.2 K (Experimental section). After absorption of a non-resonant laser pulse and exciton energy relaxation, the bright and dark excitons are populated about equally.<cit.> However, due to its much larger oscillator strength, only the bright exciton contributes to the PL immediately after the laser excitation. An example of a time-resolved spectrum at t=0 is shown in Fig. <ref>b (upper panel, orange). The emission line maximum is shifted to higher energy (∼ 2.5025 eV) compared to the time-integrated spectrum with the maximum at 2.498 eV (black line). As the excitons relax towards thermal equilibrium, the bright state becomes depopulated, and the emission maximum shifts to lower energy. At a delay of t=200 ns the emission comes only from the dark exciton |F⟩ state, with the emission line maximum at ∼ 2.497 eV (blue line). Therefore, the bright-dark splitting can be directly obtained from comparing the spectra at t=0 and t →∞. We obtain 5.5 ± 0.5 meV for the 4ML and 4.0 ± 0.5 meV for the 5ML NPLs (Fig. <ref>). For comparison, the time-integrated spectrum measured with a CCD is shown in the lower panel of Fig. <ref>b. The magenta and cyan Gaussian lines show the time-integrated contributions of the bright and dark excitons to the emission, respectively (see Subsection <ref>). §.§ Temperature dependence of PL spectra To explore in more detail the scattering rate between the dark and bright excitons and their splitting, γ_0 and Δ E_AF, respectively, we analyzed the evolution of the PL spectra with temperature for the 4ML (Fig. <ref>a) and the 5ML (Fig. <ref>) samples. At T=2.2 K (upper panel of Fig. <ref>a) the non-equilibrium exciton population relaxes into the lowest dark state and the maximum of the time-integrated exciton emission is at ∼ 2.4975 eV. With increasing temperature (middle and lower panels) the population of the bright exciton state grows and the emission maximum shifts to higher energy.
This behavior is in agreement with experiments on single NCs.<cit.> To simulate the interplay between the exciton states, we fit the spectra with three Gaussian peaks centered at the energy positions corresponding to the bright exciton E_ A (magenta filling), the dark exciton E_ F (cyan filling), and the low-energy peak E_ LE (green line). The full width at half maximum (FWHM) was kept fixed for all temperatures, and was 9.5, 10 and 13.3 meV for the bright exciton, the dark exciton, and the low-energy peak, respectively (best fit). The fitting curves for the PL spectra in Fig. <ref>a are shown by the red lines. The best fit for the 4ML sample is achieved with E_ F=2.4973 eV and E_ A=2.5025 eV, which gives Δ E_ AF=E_ A-E_ F=5.2 meV, in very good agreement with the results of the temperature-dependent time-resolved measurements (Table <ref>). In addition to the energy splitting, the temperature dependence of the PL spectra brings insight into the thermal population of the bright and dark exciton states. Interestingly, within the three-level model described above, the integral PL intensity ratio of the dark to bright states, I_ F/I_ A, is directly linked to the bright-to-dark spin-flip rate, γ_0. Integrating the set of equations (<ref>) and assuming p_ A(0)=p_ F(0)=0.5, we obtain I_ F/I_ A=(Γ_ F/Γ_ A)·[Γ_ A+2 γ_0 (N_ B+1)]/[Γ_ F+2 γ_0 N_ B]. Figure <ref>b shows the experimental temperature dependence of the I_ F/I_ A ratio and its calculation according to equation (<ref>) using the parameters from Table <ref>. We stress that good agreement is achieved without using any fitting parameters. § DISCUSSION §.§ Bright-dark splitting In two-dimensional CdSe NPLs the exciton binding energy was estimated to amount to 200–300 meV,<cit.> i.e. in between the bulk CdSe value of 10 meV and the 500–1000 meV measured in 1–2 nm diameter CdSe QDs (Ref. Elward2013 and references therein). Therefore, Δ E_ AF values of the order of several meV, i.e. between the bulk (0.13 meV)<cit.> and QD (∼ 20 meV)<cit.> values, are reasonable. They are about an order of magnitude larger than typical values in epitaxial II-VI QWs.<cit.> All four optical methods used in this paper provide consistent values of Δ E_ AF for the CdSe NPLs, which are collected in Table <ref>. We would like to note here that the variety of optical methods presented in this paper can be further extended by the application of magnetic fields. One example of such an experiment is presented in Supplementary Section S4. This method exploits the difference in the degree of circular polarization (DCP) of the bright and dark exciton emission in an external magnetic field due to the different Zeeman splittings, which are controlled by their g-factors. The DCP maximum indicates the position of the dark exciton (Fig. <ref>a), while the PL maximum shifts with increasing temperature from the dark to the bright exciton position (Fig. <ref>b). The energy difference between the DCP and PL maxima of about 5 meV at T>10 K corresponds well with the Δ E_ AF values for the 4ML NPLs (Table <ref>). §.§ Bright-dark splitting calculation within the effective mass approximation, accounting for dielectric confinement effects The origin of the bright-dark splitting Δ E_ AF in NPLs is the electron-hole exchange interaction. Below we present calculations for Δ E_ AF obtained from consideration of the short-range exchange interaction.
In the spherical approximation the exchange Hamiltonian can be written as:<cit.> H_ exch= -(2/3)ε_ exch ν δ( r_ e- r_ h)(σ· J), where ε_ exch is the exchange constant, σ=(σ_x,σ_y,σ_z) is the vector of Pauli matrices, and J=(J_x,J_y,J_z) is the matrix of the hole total angular momentum J=3/2. Here we use the unit cell volume ν=ν_ c=a_ c^3 (with a_ c being the lattice constant) for cubic material and ν=ν_ w=a_ w^2 c_ w√(3)/2 (with a_ w and c_ w being the lattice constants) for wurtzite semiconductors. In NPLs strong confinement of the carriers occurs only in one direction, so that the exciton wavefunction can be written as: Φ(r_ e,r_ h)=Ψ(ρ_ e-ρ_ h)ψ(z_ e)ψ(z_ h), where Ψ(ρ_ e-ρ_ h) is the normalized wavefunction describing the exciton relative motion in the plane of a nanoplatelet, and ρ_ e and ρ_ h are the in-plane coordinates of electron and hole, respectively. ψ(z_ e,h)=(2/L)^1/2 sin(π z_ e,h/L) is the wavefunction describing the quantization of the electron (hole) along the z direction in an infinitely deep quantum well of thickness L. The splitting between bright and dark excitons calculated using the wavefunction Φ(r_ e,r_ h) and the Hamiltonian H_ exch gives: Δ E_ AF=Δ_ exch|Ψ̃(0)|^2/L̃, where L̃=L/a_0 is the dimensionless NPL thickness, Ψ̃(0)=Ψ(0)a_0 is the dimensionless in-plane wavefunction evaluated at ρ_ e=ρ_ h, and Δ_ exch=ε_ exchν/a_0^3 is the renormalized exchange constant. Here we use a_0=1 nm as the length unit. The value of the renormalized exchange constant Δ_ exch is related to the bright-dark exciton splitting in bulk semiconductors as:<cit.> Δ_ exch^ c=(3π/8)Δ E_ AF^ c( a_ ex^ c/a_0)^3, Δ_ exch^ w=(π/2)Δ E_ AF^ w( a_ ex^ w/a_0)^3, where a_ ex is the bulk exciton Bohr radius, and the "c" and "w" superscripts denote cubic and wurtzite material, respectively. This allows us to determine Δ_ exch^ w=35.9 meV using the exciton splitting in w-CdSe, Δ E_ AF^ w=0.13 meV, from Refs. Kiselev1975, Kochereshko1983 and the bulk exciton Bohr radius in w-CdSe, a_ ex^ w=5.6 nm<cit.>. As there is no experimental data for Δ E_ AF^ c, we assume below that Δ_ exch^ c=Δ_ exch^ w=35.9 meV. The results of calculations for Δ E_ AF^ c with other possible choices of contributing parameters are given in Supplementary Section S5. To find |Ψ(0)|^2 we performed effective mass calculations for the exciton states following the approach from Refs. Gippius1998, Pawlis2011. This approach includes the electron-hole Coulomb interaction and single-particle potentials (Eqs. (5) and (3) from Ref. Gippius1998, respectively) modified by the difference in dielectric constants between the NPLs, ϵ_ in, and the surrounding medium, ϵ_ out. As electron and hole are localized inside a relatively small volume of the nanoplatelet, there arises a question: which dielectric constant ϵ_ in should be used for the calculation of the Coulomb interaction between carriers inside the nanoplatelet? This issue has been raised previously <cit.> and concerns the number of resonances which contribute to the dielectric response of the medium.
We performed the modeling for two values of ϵ_ in: (i) the high-frequency dielectric constant of c-CdSe, ϵ_∞=6,<cit.> which is relevant for the case when the quantum confinement energies of electron and hole are much larger than the energy of the optical phonon, and (ii) the background dielectric constant of CdSe, ϵ_ b=8.4, which takes into account the contribution from all crystal excitations except the exciton.<cit.> The value of the dielectric constant of the surrounding medium can vary in a wide range, depending on the ligands at the NPL surface, the solvent, and the substrate material on which the NPLs are deposited. Thus, we considered values of ϵ_ out ranging from 2, which is the case for randomly oriented ligands in solution<cit.> (strong dielectric contrast), to ϵ_ in (dielectric contrast is absent). Here we present the results of calculations with ϵ_ in=8.4, ϵ_ in=6 and ϵ_ out=2. For the results of calculations with other values of the dielectric constants see Supplementary Section S5. One can see from Fig. <ref> that calculations with Δ_ exch^ c=35.9 meV and with dielectric constants ϵ_ in=6, ϵ_ out=2 or ϵ_ in=8.4, ϵ_ out=2 are in good agreement with the experimental data. We note that ϵ_ in=6 and ϵ_ out=2 also give a good agreement between the calculated and experimental absorption spectra of the CdSe NPLs <cit.>. It is difficult to determine the exact values of the dielectric constants ϵ_ in, ϵ_ out and the renormalized exchange constant Δ_ exch^ c in c-CdSe, as the experimental data can be fitted using a wide range of these parameters (Supplementary Section S5). However, all the parameterizations use a reasonable set of fitting parameters ϵ_ in, ϵ_ out, Δ_ exch^ c, and for all of them the calculation of Δ E_ AF based on the effective mass approximation, accounting for dielectric confinement effects, agrees well with the experiment. §.§ Zero-temperature bright to dark relaxation rate We have shown by spectrally-resolved and time-resolved PL (Fig. <ref>) that the bright excitons mostly contribute to the emission for t< 500 ps. Interestingly, even at a temperature as low as 2.2 K the PL signal from the bright exciton recombination still represents up to 10% of the overall signal, as can be seen from the temperature dependence of the PL spectra (Fig. <ref>). This is due to the fact that, in contrast to spherical QDs where γ_0 ≫Γ_ A, in NPLs γ_0≃25-40 ns^-1 is only about three times larger than Γ_ A (Table <ref>). In small-size QDs a γ_0 of the same order of magnitude (∼ 10 ns^-1) was reported.<cit.> This points to a considerably enhanced oscillator strength of the bright exciton in NPLs compared to QDs. Indeed, according to the present paper, in NPLs the bright exciton recombination rate Γ_ A=10 ns^-1, which is consistent with our previous measurement (Γ_ A =3.6 and 5.5 ns^-1, Ref. Biadala2014nl), and is comparable with the one in epitaxial II-VI and III-V quantum wells under nonresonant excitation.<cit.> In colloidal QDs Γ_ A is about two orders of magnitude smaller: 0.082<cit.>, 0.125<cit.>, 0.025<cit.>, and 0.16<cit.> ns^-1. On the other hand, in epitaxially grown CdS quantum discs γ_0=10 ns^-1 and Γ_ A=6 ns^-1 have been reported.<cit.> In this case, even the large Δ E_ AF=4 meV reported in these structures would not lead to prominent dark exciton emission, since the bright exciton decay would be dominated by radiative recombination rather than relaxation to the dark exciton. This raises the question of the impact of γ_0 on the emission properties of different nanostructures.
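The ≈10% bright-exciton contribution at 2.2 K quoted above follows directly from equation (<ref>) in the low-temperature limit N_ B→ 0, where I_ F/I_ A→(Γ_ A+2γ_0)/Γ_ A. A minimal Python sketch (ours; the parameters are illustrative 4ML-like and QD-like values, not the exact Table entries) makes the contrast with the QD regime γ_0≫Γ_ A explicit:

```python
import numpy as np

kB = 0.086173  # Boltzmann constant in meV/K

def dark_to_bright_ratio(T, G_A, G_F, g0, dE):
    """Integrated PL intensity ratio I_F/I_A of the three-level model."""
    N_B = 1.0 / (np.exp(dE / (kB * T)) - 1.0)
    return (G_F / G_A) * (G_A + 2.0 * g0 * (N_B + 1.0)) / (G_F + 2.0 * g0 * N_B)

# NPL-like case: gamma_0 is only ~3.6x larger than Gamma_A
r_npl = dark_to_bright_ratio(2.2, G_A=10.0, G_F=1.0 / 82.0, g0=35.6, dE=5.2)
# QD-like case: gamma_0 >> Gamma_A
r_qd = dark_to_bright_ratio(2.2, G_A=0.1, G_F=1.0 / 500.0, g0=10.0, dE=2.0)

print(f"NPL-like bright fraction I_A/(I_A+I_F) = {1.0 / (1.0 + r_npl):.1%}")  # ~11%
print(f"QD-like  bright fraction I_A/(I_A+I_F) = {1.0 / (1.0 + r_qd):.1%}")   # ~0.5%
```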
§ CONCLUSIONS In summary, we have measured the parameters characterizing the band-edge excitons in CdSe nanoplatelets. We have used four optical techniques to study the exciton fine structure in ensembles of the nanoplatelets, in particular the bright-dark exciton splitting. All techniques give consistent values for the bright-dark splitting Δ E_ AF, ranging between 3.2 and 6.0 meV as the platelet thickness decreases from 5 to 3 monolayers. The splitting scales about inversely with the platelet thickness. Theoretical calculations of Δ E_ AF based on the effective mass approximation, accounting for dielectric confinement effects, were performed. Despite the uncertainty of the parameters and the limited applicability of the effective mass approximation for small-sized nanostructures, we find good agreement between the experimental and calculated size dependences of the bright-dark exciton splitting. The recombination rates of the bright and dark excitons and the bright to dark relaxation rate have been measured by time-resolved techniques. The recombination time of the bright excitons in nanoplatelets of about 100 ps is considerably faster than in colloidal QDs. As a result, in contrast to QDs with γ_0 ≫Γ_ A, in CdSe nanoplatelets γ_0 ≥Γ_ A, providing a different regime for the population of the bright and dark exciton states. The variety of optical methods for measuring the bright-dark exciton splitting examined in this paper for CdSe colloidal nanoplatelets can be readily used for the whole family of colloidal nanostructures, whose composition and design are progressing tremendously nowadays. § EXPERIMENTAL SECTION §.§ Sample preparation The CdSe NPLs were synthesized according to the protocol reported in Ref. Ithurria2008. They have a zinc-blende crystal structure, i.e. c-CdSe. Samples for optical experiments were prepared by drop-casting a concentrated NPL solution onto a quartz plate. §.§ Optical measurements The optical experiments at low temperatures were performed on a set of different NPL ensemble samples. The NPL samples were mounted in a titanium sample holder on top of a three-axis piezo-positioner and placed in the variable temperature insert (2.2-70 K) of a liquid helium bath cryostat. For the measurements in external magnetic fields up to 17 T we used a cryostat equipped with a superconducting solenoid. For higher fields up to 24 T a cryostat was inserted in a 50 mm bore Florida-Bitter electromagnet at the High Field Magnet Laboratory in Nijmegen. All optical experiments in magnetic fields were performed in the Faraday geometry (light excitation and detection parallel to the magnetic field direction). For nonresonant excitation measurements, the NPLs were excited using a pulsed diode laser (photon energy 3.06 eV, wavelength 405 nm, pulse duration 50 ps, repetition rate between 0.8 and 5 MHz) with a weak average excitation power density <0.02 W/cm^2. The PL detected in backscattering geometry was filtered from the scattered laser light with a 0.55-m spectrometer and detected either by a liquid-nitrogen-cooled charge-coupled-device (CCD) camera or by an avalanche Si-photodiode. For polarization-resolved measurements, the PL was analyzed by a combination of a quarter-wave plate and a linear polarizer. For the absorption spectra measurements at T=5 K the sample was illuminated by an incandescent lamp with a broad spectrum.
§.§ Time-resolved measurements with avalanche photodiode (APD) To measure long-lasting PL decays, we used an avalanche Si-photodiode connected to a conventional time-correlated single-photon counting setup (the instrumental response function is ∼ 100 ps). §.§ Spectral dependence of PL decay The PL was filtered by a 0.55-m spectrometer equipped with a 2400 grooves/mm grating, slicing the spectra into bands that were ≲ 1 nm wide, and sent to an avalanche photodiode (APD). To prove that the APD quantum yield was the same for each wavelength range, we compared the time-integrated PL spectrum with the PL spectrum measured by the CCD camera. To obtain the streak-camera-like image (Fig. <ref>a) the time-resolved PL measured at different wavelengths was plotted across the energy in a two-dimensional plot. To obtain time-resolved PL spectra the two-dimensional data were integrated: for the spectrum at t=0 from -32 to 32 ps and for the spectrum at t=200 ns from 195 to 205 ns. §.§ Time-resolved measurements with a streak-camera In order to measure the initial fast PL dynamics, the NPLs were excited by a frequency-doubled mode-locked Ti-Sapphire laser (photon energy 3.06 eV, wavelength 405 nm, pulse duration 2 ps, repetition rate 76 MHz). Time-resolved PL spectra were recorded by a streak-camera attached to a spectrometer, providing temporal and spectral resolution of ≲ 5 ps and ≲ 1 nm. In these experiments the samples were in contact with superfluid helium, providing a temperature of about 2 K. §.§ Fluorescence line narrowing For resonant excitation of the 5ML sample (Figure <ref>a) a continuous-wave laser with photon energy 2.3305 eV (wavelength 532 nm) was used. The signal was passed through a notch filter to suppress the scattered laser light. The PL was dispersed by a triple-grating Raman spectrometer (subtractive mode). The resonant PL emission was dispersed by a 500 mm stage (1800 grooves/mm holographic grating) and detected by a liquid-nitrogen-cooled CCD. For excitation of the 3ML, 4ML and 5ML samples (Figure <ref>b), we used the lines of Ar-ion (514.5 nm, 486.5 nm, 488 nm), He-Cd (441.6 nm), and Nd:YAG (532 nm) lasers. The laser power density focused on the sample was not higher than 2 W/cm^2. The scattered light was analyzed by a Jobin-Yvon U1000 double monochromator equipped with a cooled GaAs photomultiplier and conventional photon counting electronics. § ACKNOWLEDGEMENTS The authors are thankful to Al. L. Efros for fruitful discussions. The authors are thankful to M. Meuris from the Department of Biomaterials and Polymer Science at TU Dortmund University for the TEM images of the 4ML and 5ML samples. E.V.S., V.V.B., D.R.Y., A.V.R., and M.B. acknowledge support of the Deutsche Forschungsgemeinschaft in the frame of ICRC TRR 160. E.V.S. and D.R.Y. acknowledge the Russian Science Foundation (Grant No. 14-42-00015). N.A.G. acknowledges support from the Russian Foundation for Basic Research (Grant No. RFBR16-29-03283). We acknowledge the support from HFML-RU/FOM, a member of the European Magnetic Field Laboratory (EMFL). B.D. and Y.J. acknowledge funding from the EU Marie Curie project 642656 “Phonsi”. 99 Murray1993 C. B. Murray, D. J. Norris and M. G. Bawendi, J. Am. Chem. Soc., 1993, 115, 8706–8715. Ithurria2011nm S. Ithurria, M. D. Tessier, B. Mahler, R. P. S. M. Lobo, B. Dubertret and Al. L. Efros, Nat. Mater., 2011, 10, 936–941. Ithurria2008 S. Ithurria and B. Dubertret, J. Am. Chem. Soc., 2008, 130, 16504–16505. Gao2017 Y. Gao, M. C. Weidman and W. A. Tisdale, Nano Lett., 2017, 17, 3837–3843. Rowland2015 C. E.
Rowland, I. Fedin, H. Zhang, S. K. Gray, A. O. Govorov, D. V. Talapin and R. D. Schaller, Nat. Mater., 2015, 14, 484–489. Grim2014 J. Q. Grim, S. Christodoulou, F. Di Stasio, R. Krahne, R. Cingolani, L. Manna and I. Moreels, Nat. Nanotechnol., 2014, 9, 891–895. Diroll2017 B. T. Diroll, D. V. Talapin and R. D. Schaller, ACS Photonics, 2017, 4, 576–583. Zhang2005 H.-T. Zhang,G. Wu and X.-H. Chen, Langmuir, 2005, 21, 4281–4282. Schliehe2010 C. Schliehe, B. H. Juarez, M. Pelletier, S. Jander, D. Greshnykh, M. Nagel, A. Meyer, S. Foerster, A. Kornowski, C. Klinke and H. Weller, Science, 2010, 329, 550–553. Dogan2015 S. Dogan, T. Bielewicz, V. Lebedeva and C. Klinke, Nanoscale, 2015, 7, 4875–4883. Aerts2014 M. Aerts, T. Bielewicz, C. Klinke, F. C. Grozema, A. J. Houtepen, J. M. Schins, and L. D. A.Siebbeles, Nat. Commun., 2014, 5, 3789. Joo2006 J. Joo, J. S. Son, S. G. Kwon, J. H. Yu and T. Hyeon, J. Am. Chem. Soc., 2006, 128, 5632–5633. Son2009 J. S. Son, X.-D. Wen, J. Joo, J. Chae, S.-i. Baek, K. Park, J. H. Kim, K. An, J. H. Yu, S. G. Kwon, S.-H. Choi, Z. Wang, Y.-W. Kim, Y. Kuk, R. Hoffmann and T. Hyeon, Angew. Chem., Int. Ed., 2009, 48, 6861–6864. Liu2010 Y.-H. Liu, V. L. Wayman, P. C. Gibbons, R. A. Loomis and W. E. Buhro, Nano Lett., 2010, 10, 352–357. Koh2017 W.-k. Koh, N. K. Dandu, A. F. Fidler, V. I. Klimov, J. M. Pietryga and S. V. Kilina, J. Am. Chem. Soc., 2017, 139, 2152–2155. Sigman2003 M. B. Sigman, A. Ghezelbash, T. Hanrath, A. E. Saunders, F. Lee and B. A. Korgel, J. Am. Chem. Soc., 2003, 125, 16050–16057. VaughnII2010 D. D. Vaughn II, R. J. Patel, M. A. Hickner and R. E. Schaak, J. Am. Chem. Soc., 2010, 132, 15170–15172. Li2012 Z. Li, H. Qin, D. Guzun, M. Benamara, G. Salamo and X. Peng, Nano Research, 2012, 5, 337–351. Bouet2014 C. Bouet, D. Laufer, B. Mahler, B. Nadal, H. Heuclin, S. Pedetti, G. Patriarche and B. Dubertret, Chem. Mater., 2014, 26, 3002–3008. Izquierdo2016 E. Izquierdo, A. Robin, S. Keuleyan, N. Lequeux, E. Lhuillier and S. Ithurria, J. Am. Chem. Soc., 2016, 138, 10496–10501. Nasilowski2016 M. Nasilowski, B. Mahler, E. Lhuillier, S. Ithurria and B. Dubertret, Chem. Rev., 2016, 116, 10934–10982. Yeltik2015 A. Yeltik, S. Delikanli, M. Olutas, Y. Kelestemur, B. Guzelturk and H. V. Demir, J. Phys. Chem. C, 2015, 119, 26768–26775. Achtstein2015 A. W. Achtstein, A. Antanovich, A. Prudnikau, R. Scott, U. Woggon and M. Artemyev, J. Phys. Chem. C, 2015, 119, 20156–20161. Cassette2015 E. Cassette, R. D. Pensack, B. Mahler and G. D. Scholes, Nature Commun., 2015, 6, 6086. Pal2017 S. Pal, P. Nijjar, T. Frauenheim and O. V. Prezhdo, Nano Lett., 2017, 17, 2389–2396. Benchamekh2014 R. Benchamekh, N. A. Gippius, J. Even, M. O. Nestoklon, J.-M. Jancu, S. Ithurria, B. Dubertret, Al. L. Efros and P. Voisin, Phys. Rev. B, 2014, 89, 035307. Biadala2014nl L. Biadala, F. Liu, M. D. Tessier, D. R. Yakovlev, B. Dubertret and M. Bayer, Nano Lett., 2014, 14, 1134–1139. GranadosDelAguila2014 A. Granados Del Águila, B. Jha, F. Pietra, E. Groeneveld, C. De Mello Donegá, J. C. Maan, D. Vanmaekelbergh and P. C. M. Christianen, ACS Nano, 2014, 8, 5921–5931. Labeau2003 O. Labeau, P. Tamarat and B. Lounis, Phys. Rev. Lett., 2003, 90, 257404. Biadala2009 L. Biadala, Y. Louyer, P. Tamarat and B. Lounis, Phys. Rev. Lett., 2009, 103, 037404. Brovelli2011 S. Brovelli, R. D. Schaller, S. A. Crooker, F. García-Santamaría, Y. Chen, R. Viswanatha, J. A. Hollingsworth, H. Htoon and V. I. Klimov, Nat. Commun., 2011, 2, 280. Nirmal1995 M. Nirmal, D. Norris, M. Kuno, M. Bawendi, Al. L. Efros and M. 
Rosen, Phys. Rev. Lett., 1995, 75, 3728–3731. Efros1996 Al. L. Efros, M. Rosen, M. Kuno, M. Nirmal, D. Norris and M. Bawendi, Phys. Rev. B, 1996, 54, 4843–4856. DeMelloDonega2006 C. De Mello Donegá, M. Bode and A. Meijerink, Phys. Rev. B, 2006, 74, 085320. Biadala2017nn L. Biadala, E. V. Shornikova, A. V. Rodina, D. R. Yakovlev, B. Siebers, T. Aubert, M. Nasilowski, Z. Hens, B. Dubertet, Al. L. Efros and M. Bayer, Nat. Nanotechnol., 2017, 12, 569–574. Rodina2015 A. Rodina and Al. L. Efros, Nano Lett., 2015, 15, 4214–4222. Leung1998 K. Leung, S. Pokrant and K. B. Whaley, Phys. Rev. B, 1998, 57, 12291–12301. Tessier2012 M. D. Tessier, C. Javaux, I. Maksimovic, V. Loriette andB. Dubertret, ACS Nano, 2012, 6, 6751–6758. Hannah2011 D. C. Hannah, N. J. Dunn, S. Ithurria, D. V. Talapin, L. X. Chen, M. Pelton, G. C. Schatz and R. D. Schaller, Phys. Rev. Lett., 2011, 107, 177403. Tessier2013 M. D. Tessier, L. Biadala, C. Bouet, S. Ithurria, B. Abecassis and B. Dubertret, ACS Nano, 2013, 7, 3332–3340. Achtstein2016 A. W. Achtstein, R. Scott, S. Kickhöfel, S. T. Jagsch, S. Christodoulou, G. H. V. Bertrand, A. V. Prudnikau, A. Antanovich, M. Artemyev, I. Moreels, A. Schliwa and U. Woggon, Phys. Rev. Lett., 2016, 116, 116802. Erdem2016phchl O. Erdem, M. Olutas, B. Guzelturk, Y. Kelestemur and H. V. Demir, J. Phys. Chem. Lett., 2016, 7, 548–554. Cherevkov2013 S. A. Cherevkov, A. V. Fedorov, M. V. Artemyev, A. V. Prudnikau and A. V. Baranov, Phys. Rev. B, 2013, 88, 041303R. Dzhagan2016 V. Dzhagan, A. G. Milekhin, M. Ya. Valakh, S. Pedetti, M. Tessier, B. Dubertret and D. R. T. Zahn, Nanoscale, 2016, 8, 17204. Liu2013 F. Liu, L. Biadala, A. V. Rodina, D. R. Yakovlev, D. Dunker, C. Javaux, J. P. Hermier, Al. L. Efros, B. Dubertret and M. Bayer, Phys. Rev. B, 2013, 88, 035302. Furis2006 M. Furis, H. Htoon, M. A. Petruska, V. I. Klimov, T. Barrick and S. A. Crooker, Phys. Rev. B, 2006, 73, 241313R. Wijnen2008 F. J. P. Wijnen, J. H. Blokland, P. T. K. Chin, P. C. M. Christianen andJ. C. Maan, Phys. Rev. B, 2008, 78, 235318. Crooker2003 S. A. Crooker, T. Barrick, J. A. Hollingsworth and V. I. Klimov, Appl. Phys. Lett., 2003, 82, 2793. Kiselev1975 V. A. Kiselev, B. S. Razbirin and I. N. Uraltsev, Phys. Status Solidi B, 1975, 72, 161–172. Kochereshko1983 V. P. Kochereshko, G. V. Mikhailov and I. N. Uraltsev, Sov. Phys. Solid State, 1983 25, 439. [transl. Fiz. Tverd. Tela, 1983, 25, 769–776.] Louyer2011 Y. Louyer, L. Biadala, J. B. Trebbia, M. J. Fernée, P. Tamarat and B. Lounis, Nano Lett., 2011, 11, 4370–4375. Elward2013 J. M. Elward and A. Chakraborty, J. Chem. Theory Comput., 2013, 9, 4351–4359. Nirmal1994 M. Nirmal, C. Murray and M. Bawendi Phys. Rev. B, 1994, 50, 2293–2300. Jeukens2002 C. R. L. P. N. Jeukens, P. C. M. Christianen, J. C. Maan, D. R. Yakovlev, W. Ossau, V. P. Kochereshko, T. Wojtowicz, G. Karczewski and J. Kossut, Phys. Rev. B, 2002, 66, 235318. Gippius1998 N. A. Gippius, A. L. Yablonskii, A. B. Dzyubenko, S. G. Tikhodeev, L. V. Kulik, V. D. Kulakovskii and A. Forchel, J. Appl. Phys., 1998, 83, 5410–5417. Pawlis2011 A. Pawlis, T. Berstermann, C. Brüggemann, M. Bombeck, D. Dunker, D. R. Yakovlev, N. A. Gippius, K. Lischka and M. Bayer, Phys. Rev. B, 2011, 83, 115302. Cardona M. Cardona and R. G. Ulbrich, Light Scattering in Solids III, Springer, Berlin, 2005, Topics in Applied Physics, vol. 51, p. 233. RodinaJETP2016 A. V. Rodina and Al. L. Efros, JETP, 2016, 122, 554–556. Lechner1996 M. D. Lechner, Refractive Indices of Organic Liquids, Springer, Berlin, 1996, Group III, vol. 38B. 
Feldmann1987 J. Feldmann, G. Peter, E. O. Göbel, P. Dawson, K. Moore, C. Foxon and R. J. Elliott, Phys. Rev. Lett., 1987, 59, 2337–2340. Polhmann1992 A. Polhmann, R. Hellmann, E. O. Göbel, D. R. Yakovlev, W. Ossau, A. Waag, R. N. Bicknell-Tassius and G. Landwehr, Appl. Phys. Lett., 1992, 61, 2929–2931. Gindele1998 F. Gindele, U. Woggon, W. Langbein, J. M. Hvam, M. Hetterich and C. Klingshirn, Solid State Commun., 1998, 106, 653–657. § SUPPLEMENTARY INFORMATION: ADDRESSING THE EXCITON FINE STRUCTURE IN COLLOIDAL NANOCRYSTALS: THE CASE OF CDSE NANOPLATELETS Elena V. Shornikova, Louis Biadala, Dmitri R. Yakovlev, Victor F. Sapega, Yuri G. Kusrayev, Anatolie A. Mitioglu, Mariana V. Ballottin, Peter C. M. Christianen, Vasilii V. Belykh, Mikhail V. Kochiev, Nikolai N. Sibeldin, Aleksandr A. Golovatenko, Anna V. Rodina, Nikolay A. Gippius, Michel Nasilowski, Alexis Kuntzmann, Ye Jiang, Benoit Dubertret, and Manfred Bayer §.§ S1. Band-edge exciton fine structure It is well established theoretically and experimentally that in nanometer-sized colloidal semiconductor crystals the lowest, eightfold degenerate exciton energy level is split into five fine structure levels by the intrinsic crystal field (in hexagonal lattice structures), the crystal shape asymmetry, and the electron-hole exchange interaction.<cit.> These levels are separated from each other by such large splitting energies that at temperatures of a few Kelvin the photoluminescence (PL) arises from the two lowest exciton levels. In nearly spherical CdSe wurtzite QDs<cit.>, as well as in zinc blende NPLs<cit.>, the ground exciton state has total spin projection on the quantization axis J=± 2 and is forbidden in the electric-dipole (ED) approximation [in colloidal NCs, the exciton ground state is usually dark, with projection either ± 2 or 0^ L, depending on the shape and/or crystal structure]. Therefore, it is usually referred to as a “dark” state, |F⟩. The upper lying “bright” state, |A⟩, has J=± 1^L, and is ED allowed. The energy separation between these two levels, Δ E_ AF=E_ A-E_ F, is usually of the order of several meV and is relatively large compared to epitaxially grown quantum wells and quantum dots. These levels are schematically shown together with the relevant recombination and relaxation processes in Figure <ref>c. Typically, the linewidth of ensemble PL spectra of colloidal nanocrystals is one to two orders of magnitude larger than the characteristic Δ E_ AF=1-20 meV. There are two optical methods that are commonly used to measure Δ E_ AF in different NCs. 1. Temperature-dependent time-resolved PL The exciton fine structure leads to an interplay between the upper lying bright |A⟩ and the lower dark |F⟩ states that is typical for colloidal nanostructures. The recombination rates of these exciton states are Γ_ A and Γ_ F. The PL intensity in this case can be written as I(t)=η_ AΓ_ A p_ A + η_ FΓ_ F p_ F, where η_ A,F are the corresponding quantum efficiencies, and p_ A,F are the occupation numbers of the corresponding levels. The relaxation rates between these levels are given by γ_0 and γ_th, where γ_0 is the zero-temperature relaxation rate and γ_th=γ_0 N_ B corresponds to the thermally-activated relaxation rate from the dark to the bright exciton state, with N_ B = 1/ [ exp (Δ E_ AF / kT) -1 ] being the Bose–Einstein phonon occupation. Assuming that γ_0, Γ_ A and Γ_ F are temperature independent parameters, the system dynamics can be described by the set of rate equations (<ref>).
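Before quoting the analytic solutions, it may be helpful to see the model in executable form. The short Python sketch below is ours, not part of the original text, and the parameter values are illustrative 4ML-like choices; it integrates the rate equations numerically, and at low temperature the long-time slope of p_ F reproduces Γ_ L=Γ_ F, in line with the closed-form expressions that follow.

```python
import numpy as np
from scipy.integrate import solve_ivp

kB = 0.086173  # Boltzmann constant in meV/K

def integrate_populations(T, G_A, G_F, g0, dE, t_max=400.0):
    """Integrate the bright/dark rate equations with p_A(0) = p_F(0) = 0.5.
    Times in ns, rates in ns^-1, dE in meV."""
    N_B = 1.0 / (np.exp(dE / (kB * T)) - 1.0)

    def rhs(t, p):
        pA, pF = p
        return [-(G_A + g0 * (N_B + 1.0)) * pA + g0 * N_B * pF,
                -(G_F + g0 * N_B) * pF + g0 * (N_B + 1.0) * pA]

    # Radau handles the stiffness from the fast initial bright-exciton decay
    return solve_ivp(rhs, (0.0, t_max), [0.5, 0.5], method="Radau",
                     dense_output=True)

sol = integrate_populations(T=2.2, G_A=10.0, G_F=1.0 / 82.0, g0=35.6, dE=5.2)
t1, t2 = 100.0, 300.0  # ns, well into the long-lived tail
G_L = np.log(sol.sol(t1)[1] / sol.sol(t2)[1]) / (t2 - t1)
print(f"Gamma_L from numerics = {G_L:.4f} ns^-1 (Gamma_F = {1.0 / 82.0:.4f} ns^-1)")
```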
The solutions of this system are: p_ A = C_1 e^-tΓ_ short + C_2 e^-tΓ_ L, p_ F = C_3 e^-tΓ_ short + C_4 e^-tΓ_ L, with Γ_ short=τ_ short^-1 and Γ_ L=τ_ L^-1 being the rates for the short-lasting and the long-lasting decays, respectively: Γ_ short, L(T)=1/2[ Γ_ A +Γ_ F+γ_0 coth( Δ E_ AF/2kT) ±√(( Γ_ A -Γ_ F+γ_0 )^2+γ_0^2 sinh^-2(Δ E_ AF/2kT))]. Here the sign “+” in front of the square root corresponds to Γ_ short and the sign “-” to Γ_ L. For nonresonant excitation, after the laser pulse absorption, both |A⟩ and |F⟩ levels are assumed to be populated equally with p_ A(t=0)=p_ F(t=0)=0.5, which gives: p_ A = C_1 e^-tΓ_ short+(0.5-C_1) e^-tΓ_ L, p_ F = C_3 e^-tΓ_ short+(0.5-C_3) e^-tΓ_ L. Here C_1 and C_3 are temperature dependent parameters: C_1 = (γ_0+Γ_ A-Γ_ L)/[2(Γ_ short-Γ_ L)], C_3 = (-γ_0+Γ_ F-Γ_ L)/[2(Γ_ short-Γ_ L)]. The PL intensity is then described by: I(t)=[η_ AΓ_ A C_1 +η_ FΓ_ F C_3] e^-tΓ_ short+ [η_ AΓ_ A (0.5-C_1) +η_ FΓ_ F (0.5-C_3)] e^-tΓ_ L. This dependence represents a bi-exponential PL decay, as typically observed in colloidal NCs at cryogenic temperatures. Indeed, after nonresonant photoexcitation and energy relaxation of the excitons, the bright and dark states at t=0 are populated about equally, but only the emission from the bright exciton is observed due to Γ_ A≫Γ_ F. In the limit kT=0, the excitons relax to the |F⟩ state with a rate γ_0. These two processes, namely, recombination of the bright exciton and relaxation to the dark state, result in a fast initial drop of the time-resolved PL with a rate Γ_ short=Γ_ A+γ_0(1+2N_ B) ≈γ_0(1+2N_ B). At longer delays, the |A⟩ level is emptied, and the emission arises from the |F⟩ state with a rate Γ_ L=Γ_ F. At a temperature of a few Kelvin, when Δ E_ AF≫ kT, the time-resolved PL is also bi-exponential with the decay rates Γ_ short and Γ_ L defined by equation (<ref>). When the temperature is increased, the long-lived component accelerates, while the short-lived component loses amplitude. If γ_0 ≫Γ_ A, at elevated temperatures corresponding to Δ E_ AF≤ kT the decay becomes mono-exponential with Γ_ L=(Γ_ A+Γ_ F)/2 (see Figure <ref>a). The temperature dependence of the Γ_ L rate is therefore a powerful tool to measure the Δ E_ AF value. At the single-dot level, it has been shown that the energy splitting obtained by this method is in excellent agreement with the energy splitting directly measured from the PL spectra and also with theoretical calculations.<cit.> The analysis of the temperature dependence of the time-resolved PL decay is routinely used to evaluate Δ E_ AF in NCs.<cit.> However, this method is indirect and might be affected by thermal activation of trap states,<cit.> surface dangling bonds,<cit.> as well as contributions from higher energy states.<cit.> It is important to note that typically in colloidal quantum dots γ_0 ≫Γ_ A, so that the equations (<ref>) can be simplified:<cit.> Γ_ short = Γ_ A + γ_0(1+2N_ B) ≈γ_0(1+2N_ B), Γ_ L(T) = (Γ_ A +Γ_ F)/2 - [(Γ_ A -Γ_ F)/2] tanh(Δ E_ AF/2kT). However, this simplification cannot be used in the case of NPLs where, as we have shown in this paper, Γ_ A can be comparable with γ_0. 2. Fluorescence line narrowing By exciting resonantly a small fraction of the NCs, the broadening due to the size distribution is drastically reduced and linewidths down to 300 μeV can be measured<cit.>.
However, this method neglects any internal relaxation between the exciton states.<cit.> Moreover, it was shown recently that the Stokes shift in bare core CdSe QDs can also have a contribution from the formation of dangling bond magnetic polarons.<cit.> The FLN technique, therefore, may overestimate Δ E_ AF. §.§ S2. Sample characterization §.§ S3. Supplementary data for 5ML sample §.§ S4. “Method No. 5”. Polarization-resolved PL spectra in magnetic fields The circularly polarized emission in an external magnetic field can also be used for the identification of the bright and dark excitons in colloidal NPLs. This method exploits the difference in the Zeeman splittings of the bright and dark excitons, which are controlled by their g-factors, g^A_X and g^F_X: Δ E_Z^(A,F)(B) = g^(A,F)_X μ_B B cosθ, where μ_B is the Bohr magneton and θ is the angle between the normal to the NPL plane and the magnetic field. Then the degree of circular polarization of the emission, gained by the different thermal occupation of the exciton Zeeman sublevels, is described by P_c(B) = [τ /(τ + τ_s)] tanh [Δ E_Z(B) /(2kT)]. Here τ is the exciton lifetime and τ_s is the exciton spin relaxation time. The dark exciton state with angular momentum projection ± 2 has g-factor g^F_X = g_e-3g_h.<cit.> The bright exciton state with ± 1 has g-factor g^A_X = -(g_e+3g_h) in the case when the exchange interaction is smaller than the splitting between the light-hole and heavy-hole states, which is valid for NPLs. One can see that g^F_X and g^A_X can differ considerably. The difference depends on g_e and g_h, whose measurement for the studied NPLs goes beyond the scope of this paper. A difference in g-factors has an immediate effect on the DCP by providing different values of P_c(B) for the dark and bright excitons and different temperature dependences for them. This is confirmed by the experimental data in Fig. <ref>a, where the spectral dependence of the DCP is shown at B=1 T and various temperatures from 4.2 to 15 K. With increasing temperature the absolute value of the DCP decreases, but its maximum remains located at the spectral position of the dark exciton, while the PL maximum shifts with increasing temperature from the dark to the bright exciton position (Fig. <ref>b). The energy difference between the DCP and PL maxima of about 5 meV at T>10 K corresponds well with the Δ E_ AF values for the 4ML NPLs (Table <ref>). §.§ S5. Calculation of exciton parameters in c-CdSe NPL In our calculations we consider only the contribution from the short-range exchange interaction to the bright-dark exciton splitting Δ E_ AF in c-CdSe NPLs. In the spherical approximation it is described by: H_ exch= -(2/3)ε_ exch^ c a_ c^3 δ( r_ e- r_ h)(σ· J), where ε_ exch^ c is the exchange constant, a_ c=0.608 nm is the lattice constant of c-CdSe <cit.>, σ=(σ_x,σ_y,σ_z) is the vector of Pauli matrices, and J=(J_x,J_y,J_z) is the matrix of the hole total angular momentum J=3/2. We have found (see the main text) the resulting splitting as: Δ E_ AF=Δ_ exch|Ψ̃(0)|^2/L̃, where L̃=L/a_0 is the dimensionless NPL thickness, Ψ̃(0)=Ψ(0)a_0 is the dimensionless in-plane wavefunction evaluated at ρ_ e=ρ_ h, and Δ_ exch=ε_ exchν/a_0^3 is the renormalized exchange constant. Here we use a_0=1 nm as the length unit. The influence of the dielectric contrast on the in-plane wavefunction of the exciton, Ψ(0), is taken into account according to the approach described in Ref. [Gippius1998].
The full Hamiltonian of the system includes the potential U_e,h(ρ,z_e,z_h), which describes the Coulomb attraction between electron and hole, the attraction of the electron to the hole image, and that of the hole to the electron image. The potential U_e,h(ρ,z_e,z_h) depends on ϵ_ out and ϵ_ in as follows: U_e,h(ρ, z_e,z_h)=-(e^2/ϵ_ in)[1/√(ρ^2+(z_e-z_h)^2)+ (ϵ_ in-ϵ_ out)/(ϵ_ in+ϵ_ out) · 1/√(ρ^2+(z_e+z_h)^2)], where ρ=ρ_ e-ρ_ h is the exciton in-plane motion coordinate, and z_e and z_h are the coordinates of electron and hole along the quantization axis. Let us consider the results of the Δ E_ AF calculations, performed for different sets of dielectric constants of the nanoplatelet, ϵ_ in, and the surrounding medium, ϵ_ out. We consider four different values of the renormalized exchange constant Δ_ exch^ c. a. The straightforward way to determine Δ_ exch^ c is based on the knowledge of the bright-dark splitting Δ E_ AF^ c in bulk c-CdSe (see Eq. 7). However, there are no available experimental data for Δ E_ AF^ c. The empirical expression for the bulk exchange splitting in zincblende semiconductors was obtained in Ref. [Fu1999] from a linear fit of the splitting values in InP, GaAs and InAs. According to Eq. 12 from Ref. [Fu1999] we find: Δ E_ AF^ c(a^ c_ ex/a_0)^3=15.4 meV. This corresponds to the renormalized exchange constant Δ_ exch^ c=18.1 meV in c-CdSe, as well as in all other semiconductors with zincblende structure. One can see that this choice of Δ_ exch^ c gives a calculated Δ E_ AF smaller than the experimental data at any ϵ_ in and ϵ_ out (Figure <ref>a). b. The next approach is based on the assumed equality of the renormalized exchange constants of c-CdSe and w-CdSe: Δ_ exch^ c=Δ_ exch^ w=35.9 meV. This approach gives good agreement with the experimental results if we use ϵ_ in varying from the high-frequency dielectric constant of c-CdSe, ϵ_∞=6, to the background dielectric constant of CdSe, ϵ_ b=8.4, and the outside dielectric constant ϵ_ out=2. The results of calculations with the same Δ_ exch^ c and other sets of dielectric constants are presented in Fig. <ref>b. c. Another approach is based on the assumed equality not of the renormalized exchange constants but of the exchange constants ε_ exch in c-CdSe and w-CdSe: ε_ exch^ c=ε_ exch^ w=Δ_ exch^ w a_0^3/ν_ w=320 meV. Here ν_ w=a_ w^2 c_ w√(3)/2=0.112 nm^3 is the volume of the w-CdSe unit cell,<cit.> where a_ w=0.43 nm and c_ w=0.70 nm. Using the definition of the c-CdSe unit cell from Refs. [Zarhri, Szemjonov], we find ν_ c=a_ c^3=0.224 nm^3≈ 2 ν_ w, where a_ c=0.608 nm according to Ref. [Samarth], and Δ_ exch^ c=2Δ_ exch^ w=71.9 meV. The choices ϵ_ in=8.4 and ϵ_ out=4 fit the experimental data (Fig. <ref>c). Note that the value of Δ_ exch^ c=35.9 meV in case b corresponds to ε_ exch^ c = Δ_ exch^ c a_0^3/ν_ c = 160 meV. d. The last approach is also based on the assumed equality of the exchange constants, ε_ exch^ c=ε_ exch^ w, with the use of ε_ exch^ w=450 meV from Ref. [Efros1996]. It gives Δ_ exch^ c=ε_ exch^ c ν_ c/a_0^3=101.1 meV. The calculated Δ E_ AF is larger than the experimental data for any choice of ϵ_ in and ϵ_ out, except for the not very realistic case without a dielectric contrast: ϵ_ in=ϵ_ out=8.4 (Figure <ref>d). While we can exclude the cases without dielectric confinement, when ϵ_ in=ϵ_ out, and the cases with Δ_ exch^ c<35.9 meV, there is still a wide range of suitable parameterizations between those used in Figs. <ref> b,c.
Independent determination of the renormalized exchange constant Δ_ exch^ c, or dielectric constants ϵ_ in, ϵ_ out would allow one to narrow down the number of parameterizations. However, all these parameterizations use reasonable values of ϵ_ in, ϵ_ out, Δ_ exch^ c and allow us to describe dependence of bright-dark exciton splitting in c-CdSe NPLs as a result of short-range exchange interaction between electron and hole within the effective mass approximation approach. 99 Murray1993 C. B. Murray, D. J. Norris and M. G. Bawendi, J. Am. Chem. Soc., 1993, 115, 8706–8715. Ithurria2011nm S. Ithurria, M. D. Tessier, B. Mahler, R. P. S. M. Lobo, B. Dubertret and Al. L. Efros, Nat. Mater., 2011, 10, 936–941. Ithurria2008 S. Ithurria and B. Dubertret, J. Am. Chem. Soc., 2008, 130, 16504–16505. Gao2017 Y. Gao, M. C. Weidman and W. A. Tisdale, Nano Lett., 2017, 17, 3837–3843. Rowland2015 C. E. Rowland, I. Fedin, H. Zhang, S. K. Gray, A. O. Govorov, D. V. Talapin and R. D. Schaller, Nat. Mater., 2015, 14, 484–489. Grim2014 J. Q. Grim, S. Christodoulou, F. Di Stasio, R. Krahne, R. Cingolani, L. Manna and I. Moreels, Nat. Nanotechnol., 2014, 9, 891–895. Diroll2017 B. T. Diroll, D. V. Talapin and R. D. Schaller, ACS Photonics, 2017, 4, 576–583. Zhang2005 H.-T. Zhang,G. Wu and X.-H. Chen, Langmuir, 2005, 21, 4281–4282. Schliehe2010 C. Schliehe, B. H. Juarez, M. Pelletier, S. Jander, D. Greshnykh, M. Nagel, A. Meyer, S. Foerster, A. Kornowski, C. Klinke and H. Weller, Science, 2010, 329, 550–553. Dogan2015 S. Dogan, T. Bielewicz, V. Lebedeva and C. Klinke, Nanoscale, 2015, 7, 4875–4883. Aerts2014 M. Aerts, T. Bielewicz, C. Klinke, F. C. Grozema, A. J. Houtepen, J. M. Schins, and L. D. A.Siebbeles, Nat. Commun., 2014, 5, 3789. Joo2006 J. Joo, J. S. Son, S. G. Kwon, J. H. Yu and T. Hyeon, J. Am. Chem. Soc., 2006, 128, 5632–5633. Son2009 J. S. Son, X.-D. Wen, J. Joo, J. Chae, S.-i. Baek, K. Park, J. H. Kim, K. An, J. H. Yu, S. G. Kwon, S.-H. Choi, Z. Wang, Y.-W. Kim, Y. Kuk, R. Hoffmann and T. Hyeon, Angew. Chem., Int. Ed., 2009, 48, 6861–6864. Liu2010 Y.-H. Liu, V. L. Wayman, P. C. Gibbons, R. A. Loomis and W. E. Buhro, Nano Lett., 2010, 10, 352–357. Koh2017 W.-k. Koh, N. K. Dandu, A. F. Fidler, V. I. Klimov, J. M. Pietryga and S. V. Kilina, J. Am. Chem. Soc., 2017, 139, 2152–2155. Sigman2003 M. B. Sigman, A. Ghezelbash, T. Hanrath, A. E. Saunders, F. Lee and B. A. Korgel, J. Am. Chem. Soc., 2003, 125, 16050–16057. VaughnII2010 D. D. Vaughn II, R. J. Patel, M. A. Hickner and R. E. Schaak, J. Am. Chem. Soc., 2010, 132, 15170–15172. Li2012 Z. Li, H. Qin, D. Guzun, M. Benamara, G. Salamo and X. Peng, Nano Research, 2012, 5, 337–351. Bouet2014 C. Bouet, D. Laufer, B. Mahler, B. Nadal, H. Heuclin, S. Pedetti, G. Patriarche and B. Dubertret, Chem. Mater., 2014, 26, 3002–3008. Izquierdo2016 E. Izquierdo, A. Robin, S. Keuleyan, N. Lequeux, E. Lhuillier and S. Ithurria, J. Am. Chem. Soc., 2016, 138, 10496–10501. Nasilowski2016 M. Nasilowski, B. Mahler, E. Lhuillier, S. Ithurria and B. Dubertret, Chem. Rev., 2016, 116, 10934–10982. Yeltik2015 A. Yeltik, S. Delikanli, M. Olutas, Y. Kelestemur, B. Guzelturk and H. V. Demir, J. Phys. Chem. C, 2015, 119, 26768–26775. Achtstein2015 A. W. Achtstein, A. Antanovich, A. Prudnikau, R. Scott, U. Woggon and M. Artemyev, J. Phys. Chem. C, 2015, 119, 20156–20161. Cassette2015 E. Cassette, R. D. Pensack, B. Mahler and G. D. Scholes, Nature Commun., 2015, 6, 6086. Pal2017 S. Pal, P. Nijjar, T. Frauenheim and O. V. Prezhdo, Nano Lett., 2017, 17, 2389–2396. Benchamekh2014 R. 
Benchamekh, N. A. Gippius, J. Even, M. O. Nestoklon, J.-M. Jancu, S. Ithurria, B. Dubertret, Al. L. Efros and P. Voisin, Phys. Rev. B, 2014, 89, 035307. Biadala2014nl L. Biadala, F. Liu, M. D. Tessier, D. R. Yakovlev, B. Dubertret and M. Bayer, Nano Lett., 2014, 14, 1134–1139. GranadosDelAguila2014 A. Granados Del Águila, B. Jha, F. Pietra, E. Groeneveld, C. De Mello Donegá, J. C. Maan, D. Vanmaekelbergh and P. C. M. Christianen, ACS Nano, 2014, 8, 5921–5931. Labeau2003 O. Labeau, P. Tamarat and B. Lounis, Phys. Rev. Lett., 2003, 90, 257404. Biadala2009 L. Biadala, Y. Louyer, P. Tamarat and B. Lounis, Phys. Rev. Lett., 2009, 103, 037404. Brovelli2011 S. Brovelli, R. D. Schaller, S. A. Crooker, F. García-Santamaría, Y. Chen, R. Viswanatha, J. A. Hollingsworth, H. Htoon and V. I. Klimov, Nat. Commun., 2011, 2, 280. Nirmal1995 M. Nirmal, D. Norris, M. Kuno, M. Bawendi, Al. L. Efros and M. Rosen, Phys. Rev. Lett., 1995, 75, 3728–3731. Efros1996 Al. L. Efros, M. Rosen, M. Kuno, M. Nirmal, D. Norris and M. Bawendi, Phys. Rev. B, 1996, 54, 4843–4856. DeMelloDonega2006 C. De Mello Donegá, M. Bode and A. Meijerink, Phys. Rev. B, 2006, 74, 085320. Biadala2017nn L. Biadala, E. V. Shornikova, A. V. Rodina, D. R. Yakovlev, B. Siebers, T. Aubert, M. Nasilowski, Z. Hens, B. Dubertet, Al. L. Efros and M. Bayer, Nat. Nanotechnol., 2017, 12, 569–574. Rodina2015 A. Rodina and Al. L. Efros, Nano Lett., 2015, 15, 4214–4222. Leung1998 K. Leung, S. Pokrant and K. B. Whaley, Phys. Rev. B, 1998, 57, 12291–12301. Tessier2012 M. D. Tessier, C. Javaux, I. Maksimovic, V. Loriette andB. Dubertret, ACS Nano, 2012, 6, 6751–6758. Hannah2011 D. C. Hannah, N. J. Dunn, S. Ithurria, D. V. Talapin, L. X. Chen, M. Pelton, G. C. Schatz and R. D. Schaller, Phys. Rev. Lett., 2011, 107, 177403. Tessier2013 M. D. Tessier, L. Biadala, C. Bouet, S. Ithurria, B. Abecassis and B. Dubertret, ACS Nano, 2013, 7, 3332–3340. Achtstein2016 A. W. Achtstein, R. Scott, S. Kickhöfel, S. T. Jagsch, S. Christodoulou, G. H. V. Bertrand, A. V. Prudnikau, A. Antanovich, M. Artemyev, I. Moreels, A. Schliwa and U. Woggon, Phys. Rev. Lett., 2016, 116, 116802. Erdem2016phchl O. Erdem, M. Olutas, B. Guzelturk, Y. Kelestemur and H. V. Demir, J. Phys. Chem. Lett., 2016, 7, 548–554. Cherevkov2013 S. A. Cherevkov, A. V. Fedorov, M. V. Artemyev, A. V. Prudnikau and A. V. Baranov, Phys. Rev. B, 2013, 88, 041303R. Dzhagan2016 V. Dzhagan, A. G. Milekhin, M. Ya. Valakh, S. Pedetti, M. Tessier, B. Dubertret and D. R. T. Zahn, Nanoscale, 2016, 8, 17204. Liu2013 F. Liu, L. Biadala, A. V. Rodina, D. R. Yakovlev, D. Dunker, C. Javaux, J. P. Hermier, Al. L. Efros, B. Dubertret and M. Bayer, Phys. Rev. B, 2013, 88, 035302. Furis2006 M. Furis, H. Htoon, M. A. Petruska, V. I. Klimov, T. Barrick and S. A. Crooker, Phys. Rev. B, 2006, 73, 241313R. Wijnen2008 F. J. P. Wijnen, J. H. Blokland, P. T. K. Chin, P. C. M. Christianen andJ. C. Maan, Phys. Rev. B, 2008, 78, 235318. Crooker2003 S. A. Crooker, T. Barrick, J. A. Hollingsworth and V. I. Klimov, Appl. Phys. Lett., 2003, 82, 2793. Kiselev1975 V. A. Kiselev, B. S. Razbirin and I. N. Uraltsev, Phys. Status Solidi B, 1975, 72, 161–172. Kochereshko1983 V. P. Kochereshko, G. V. Mikhailov and I. N. Uraltsev, Sov. Phys. Solid State, 1983 25, 439. [transl. Fiz. Tverd. Tela, 1983, 25, 769–776.] Louyer2011 Y. Louyer, L. Biadala, J. B. Trebbia, M. J. Fernée, P. Tamarat and B. Lounis, Nano Lett., 2011, 11, 4370–4375. Elward2013 J. M. Elward and A. Chakraborty, J. Chem. Theory Comput., 2013, 9, 4351–4359. Nirmal1994 M. Nirmal, C. 
Murray and M. Bawendi Phys. Rev. B, 1994, 50, 2293–2300. Jeukens2002 C. R. L. P. N. Jeukens, P. C. M. Christianen, J. C. Maan, D. R. Yakovlev, W. Ossau, V. P. Kochereshko, T. Wojtowicz, G. Karczewski and J. Kossut, Phys. Rev. B, 2002, 66, 235318. Gippius1998 N. A. Gippius, A. L. Yablonskii, A. B. Dzyubenko, S. G. Tikhodeev, L. V. Kulik, V. D. Kulakovskii and A. Forchel, J. Appl. Phys., 1998, 83, 5410–5417. Pawlis2011 A. Pawlis, T. Berstermann, C. Brüggemann, M. Bombeck, D. Dunker, D. R. Yakovlev, N. A. Gippius, K. Lischka and M. Bayer, Phys. Rev. B, 2011, 83, 115302. Cardona M. Cardona and R. G. Ulbrich, Light Scattering in Solids III, Springer, Berlin, 2005, Topics in Applied Physics, vol. 51, p. 233. RodinaJETP2016 A. V. Rodina and Al. L. Efros, JETP, 2016, 122, 554–556. Lechner1996 M. D. Lechner, Refractive Indices of Organic Liquids, Springer, Berlin, 1996, Group III, vol. 38B. Feldmann1987 J. Feldmann, G. Peter, E. O. Göbel, P. Dawson, K. Moore, C. Foxon and R. J. Elliott, Phys. Rev. Lett., 1987, 59, 2337–2340. Polhmann1992 A. Polhmann, R. Hellmann, E. O. Göbel, D. R. Yakovlev, W. Ossau, A. Waag, R. N. Bicknell-Tassius and G. Landwehr, Appl. Phys. Lett., 1992, 61, 2929–2931. Gindele1998 F. Gindele, U. Woggon, W. Langbein, J. M. Hvam, M. Hetterich and C. Klingshirn, Solid State Commun., 1998, 106, 653–657. Rodina2016 A. V. Rodina and Al. L. Efros, Phys. Rev. B, 2016, 93, 155427. Biadala2010 L. Biadala, Y. Louyer, P. Tamarat and B. Lounis, Phys. Rev. Lett., 2010, 105, 157402. Biadala2016 L. Biadala, B. Siebers, Y. Beyazit, M. D. Tessier, D. Dupont, Z. Hens, D. R. Yakovlev and M. Bayer, ACS Nano, 2016, 10, 3356–3364. Efros2003  Al. L. Efros in Semiconductor and Metal Nanocrystals: Synthesis and Electronic and Optical Properties, ed. V. I. Klimov, Dekker, New York, 2003, ch. 3, pp. 103–141. Samarth N. Samarth, H. Luo, J. K. Furdyna, S. B. Qadri, Y. R. Lee, A. K. Ramdas and N. Otsuka, Appl. Phys. Lett., 1989, 54 2680–2682. Fu1999 H. Fu, L.W. Wang and A. Zunger, Phys. Rev. B, 1999 59, 5568. Xu Y.-N. Xu and W. Y. Ching, Phys. Rev. B, 1993, 48, 4335–4351. Zarhri Z. Zarhri, A. Abassi, H. Ez. Zahraouy, Y. El. Amraoui, A. Benyoussef and A. El. Kenz, J. Supercond. Nov. Magn., 2015, 28, 2155–2160. Szemjonov A. Szemjonov, T. Pauporté, I. Ciofini and F. Labat, Phys. Chem. Chem. Phys., 2014 16, 23251.
http://arxiv.org/abs/1709.09610v2
{ "authors": [ "Elena V. Shornikova", "Louis Biadala", "Dmitri R. Yakovlev", "Victor F. Sapega", "Yuri G. Kusrayev", "Anatolie A. Mitioglu", "Mariana V. Ballottin", "Peter C. M. Christianen", "Vasilii V. Belykh", "Mikhail V. Kochiev", "Nikolai N. Sibeldin", "Aleksandr A. Golovatenko", "Anna V. Rodina", "Nikolay A. Gippius", "Alexis Kuntzmann", "Ye Jiang", "Michel Nasilowski", "Benoit Dubertret", "Manfred Bayer" ], "categories": [ "cond-mat.mes-hall" ], "primary_category": "cond-mat.mes-hall", "published": "20170927163536", "title": "Addressing the exciton fine structure in colloidal nanocrystals: the case of CdSe nanoplatelets" }
[email protected]
National Institute of Informatics, 2-1-2 Hitotsubashi, Chiyoda-ku, Tokyo 101-8430, Japan. Department of Informatics, School of Multidisciplinary Sciences, Sokendai (The Graduate University for Advanced Studies), 2-1-2 Hitotsubashi, Chiyoda-ku, Tokyo 101-8430, Japan
National Institute of Informatics, 2-1-2 Hitotsubashi, Chiyoda-ku, Tokyo 101-8430, Japan.
National Institute of Informatics, 2-1-2 Hitotsubashi, Chiyoda-ku, Tokyo 101-8430, Japan. NTT Research Center for Theoretical Quantum Physics, NTT Corporation, 3-1 Morinosato-Wakamiya, Atsugi 243-0198, Japan. NTT Basic Research Laboratories, 3-1 Morinosato-Wakamiya, Atsugi, Kanagawa 243-0198, Japan.
National Institute of Informatics, 2-1-2 Hitotsubashi, Chiyoda-ku, Tokyo 101-8430, Japan.
National Institute of Informatics, 2-1-2 Hitotsubashi, Chiyoda-ku, Tokyo 101-8430, Japan. Department of Informatics, School of Multidisciplinary Sciences, Sokendai (The Graduate University for Advanced Studies), 2-1-2 Hitotsubashi, Chiyoda-ku, Tokyo 101-8430, Japan

In the field of quantum metrology and sensing, a collection of quantum systems (e.g. spins) is used as a probe to estimate some physical parameter (e.g. magnetic field). It is usually assumed that there are no interactions between the probe systems. We show that strong interactions between them can increase robustness against thermal noise, leading to enhanced sensitivity. In principle, the sensitivity can scale exponentially in the number of probes – even at non-zero temperatures – if there are long-range interactions. This scheme can also be combined with other techniques, such as dynamical decoupling, to give enhanced sensitivity in realistic experiments.

Robust quantum sensing with strongly interacting probe systems
Kae Nemoto
December 30, 2023
==============================================================

§ INTRODUCTION

The estimation of physical quantities or parameters is a crucial task in science. The field of quantum metrology and sensing aims to exploit quantum coherence or entanglement to give highly sensitive estimates of such quantities <cit.>. Known applications include time and frequency estimation <cit.>, gravitational wave detection <cit.>, magnetometry <cit.> and electrometry <cit.>. In a typical quantum sensing scheme, N probe systems evolve for a sensing time t, picking up a dependence on the physical parameter of interest, before readout. This procedure is repeated ν = T/t times during a total available measurement time T, and an estimate of the parameter is inferred from the accumulated measurement data. However, the quantum coherence of the probe decays on a timescale denoted T_2. This limits the useful sensing time t ≲ T_2, which in turn limits the sensitivity of the final estimate. In principle, dynamical decoupling <cit.> or other techniques <cit.> can be used to extend the coherence time to its fundamental limit T_2 ≤ 2 T_1, where T_1 is the probe relaxation time. It thus appears that the sensitivity is limited by the probe relaxation time T_1. However, it is usually assumed that the N probe systems are not interacting. In this paper we show that the T_1 sensitivity limit with non-interacting probes can be overcome with interacting probes.
Our scheme is based on the idea that strong interactions can modify the energy level structure of a quantum system so that dissipation tends to drive the system into a multidimensional ground space where quantum information can be stored robustly despite energy relaxation <cit.>. We focus on the problem of estimating the resonant frequency ω between two spin-1/2 states |↑⟩ and |↓⟩, given a probe consisting of N spin-1/2 particles. If there is no decoherence the sensitivity usually scales as S ∝ t, where S = 1/T(δω)^2 and δω is the error of the frequency estimate <cit.>. For example, if we are restricted to the preparation of separable spin states, the optimal sensitivity (known as the standard quantum limit) is S_SQL = N t. If entangled states are allowed the sensitivity can, in principle, be increased to the Heisenberg limit S_HL = N^2 t, a factor of N enhancement compared to the standard quantum limit. In practice, however, even if dynamical decoupling is employed, energy relaxation will prevent the sensitivity from increasing indefinitely with increasing sensing time t. This means that the sensitivity S(t) can – at best – approach the Heisenberg limit only for relatively short times t and will eventually reach a maximum value max_t S (t) at some optimal time t_opt, before decreasing as the spins thermalize [for example, see Fig. <ref>]. However, a strong ferromagnetic interaction between the spins can lead to an increased t_opt and thus an enhanced estimate of ω. The simplest example of this idea is illustrated in Fig. <ref>(c, d, e) for N=2 interacting spins.We structure the paper as follows. We begin the Results section by describing our model and our frequency estimation scheme. We then derive the sensitivity corresponding to the estimation scheme and show how it varies depending on the strength of interactions among the probe spins. We will see that for strong ferromagnetic interactions the sensitivity increases exponentially with decreasing temperature. With long-range ferromagnetic interactions between the spins it is also possible, in principle, to achieve a sensitivity that scales exponentially in the number of probe spins N, even at non-zero environment temperatures. We conclude with a discussion of our results.§ RESULTS§.§ Model Our measurement probe consists of N spin-1/2 particles. We divide the N particles into M identical clusters of size 𝒩 = N / M and we perform identical, independent experiments in parallel on each 𝒩-spin cluster. Each cluster evolves by the Hamiltonian Ĥ = Ĥ_spins + Ĥ_env + Ĥ_int, where:Ĥ_spins = ħω/2∑_i=1^𝒩σ̂^z_i - ħ/4∑_i,j J_i,jσ̂^z_i⊗σ̂^z_j ,Ĥ_env = ħ∑_i=1^𝒩∑_k Ω_k â_i,k^†â_i,k ,Ĥ_int = ħ∑_i=1^𝒩σ̂^x_i⊗Ê_i .Here ω = ω_0 + Δω and we would like to estimate Δω, a small unknown deviation from the known frequency ω_0. The strength of the Ising interaction between the i'th and j'th spins in each cluster is J_i,j. To model energy relaxation, each spin has a dipole-dipole coupling to an environment of harmonic oscillators (indexed by k) via the environment operator Ê_i ≡∑_k λ_k (â_i,k^† + â_i,k). We assume that the environment is in a thermal state ρ̂_env∝ e^-βĤ_env with inverse temperature β = 1/k_B T_env, where k_B is the Boltzmann constant and T_env is the environment temperature. §.§ Frequency estimation schemeWe divide our frequency estimation scheme into the following four stages [see Fig. <ref>(b)]:(i) State Preparation. 
The 𝒩-spin cluster is prepared in the entangled Greenberger-Horne-Zeilinger (GHZ) state:

|ψ^GHZ_𝒩⟩ = 1/√(2)( |↑⟩^⊗𝒩 + |↓⟩^⊗𝒩 ) .

(ii) Sensing. The cluster evolves by the Hamiltonian Ĥ, picking up a dependence on the unknown parameter ω. The reduced state of the cluster after a sensing time t is ρ̂ (t).

(iii) Readout. The 𝒩-spin cluster is measured with the POVM Π = {Π̂_0 , Π̂_1 }, where:

Π̂_0 = 𝕀̂/2 + 1/2( Λ̂ e^-iϕ + Λ̂^† e^iϕ ) ,    Π̂_1 = 𝕀̂ - Π̂_0 .

Here Λ̂ = (σ̂^-)^⊗𝒩 and ϕ is a controllable parameter that determines the measurement bias point <cit.>. This POVM corresponds to a binary measurement in the subspace spanned by the states |↑⟩^⊗𝒩 and |↓⟩^⊗𝒩 that make up the initial GHZ state |ψ_𝒩^GHZ⟩. The measurement leads to the outcome “0” with probability p = Tr[ ρ̂(t) Π̂_0 ] or the outcome “1” with probability 1-p.

(iv) Repetition. Steps (i)–(iii) are repeated on each cluster for a total time T, giving ν = T / t repetitions.

We define the sensitivity as S = 1 / T(δω)^2, where δω is the root-mean-squared error of the frequency estimate. The Cramér-Rao inequality (δω)^2 ≥ 1/ (Mν F) gives a lower bound for the error of the frequency estimate <cit.>, where

F = | ∂ p / ∂ω|^2 / [ p (1 - p) ] ,

is the (classical) Fisher information corresponding to the binary measurement of the 𝒩-spin cluster. In the limit of many repetitions ν≫ 1 it is possible to saturate the Cramér-Rao bound with maximum likelihood estimation <cit.>. Substituting ν = T/t we thus obtain the formula S = MF/t for the sensitivity. In the next section we calculate the Fisher information F, and hence the sensitivity S for the frequency estimation scheme described above.

§.§ Calculating the sensitivity

From the Hamiltonian given in Eqs. <ref>–<ref>, a standard derivation <cit.> leads to the Born-Markov master equation for the reduced state of the 𝒩-spin cluster (see the Supplementary Information for details):

d/dt ρ̂(t) = -i/ħ [ Ĥ_spins , ρ̂(t) ] + ∫_0^∞ dτ ∑_i=1^𝒩 { 𝒞(τ)[ σ̂_i^x(-τ)ρ̂(t) , σ̂_i^x(0) ] + 𝒞(-τ)[ σ̂_i^x(0) , ρ̂(t) σ̂_i^x(-τ) ] } ,

where 𝒞(τ) ≡ Tr{ Ê_i(τ)Ê_i(0) ρ̂_env } is the environment self-correlation function and σ̂_i^x(τ) ≡ e^iτĤ_spins/ħ σ̂_i^x e^-iτĤ_spins/ħ, Ê_i(τ) ≡ e^iτĤ_env/ħ Ê_i e^-iτĤ_env/ħ. Taking the expectation value of the master Eq. <ref> with the operator Λ̂ gives (after a rotating wave approximation – see the Supplementary Information for details) the equation of motion:

d/dt ⟨Λ̂⟩ = 𝒩( - i ω - Γ / 2 ) ⟨Λ̂⟩ .

Here the average decay rate is Γ = (1/𝒩) ∑_i=1^𝒩 ξ_i, where

ξ_i = 2Re ∫_0^∞ dτ [ 𝒞(τ) e^-iτ (𝒥_i - ω) + 𝒞(-τ) e^iτ (𝒥_i + ω) ] ,

is the decay rate associated with the i'th spin. We ignore the imaginary part of the integral in Eq. <ref>, since it leads to a negligible frequency shift. In the equation for ξ_i above we have introduced 𝒥_i ≡ ∑_j=1, j ≠ i^𝒩 J_i,j, which is the collective coupling strength of the i'th spin to all other spins in the 𝒩-spin cluster. We will see below that the size of this collective coupling strength 𝒥_i relative to the spin frequency ω is a key parameter in determining the relaxation dynamics of the spin system. The equation of motion Eq. <ref> is easily solved for ⟨Λ̂(t)⟩ and the solution is substituted into p = Tr[ ρ̂(t) Π̂_0 ] to calculate the probability p. For the initial state given in Eq. <ref> we find that:

p = 1/2 + 1/2 cos( ω𝒩 t + ϕ ) e^- 𝒩Γ t / 2 .

Now, we can find an expression for the classical Fisher information F by substituting our solution for p into Eq. <ref>.
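As a side illustration (not part of the original derivation), this substitution can be checked numerically by differentiating p with respect to ω by finite differences; the cluster parameters below are arbitrary illustrative values. A minimal Python sketch:

```python
import numpy as np

# Illustrative parameters (assumptions, not values from the experiments cited in the text)
Ncl   = 5      # cluster size (script N)
Gamma = 0.1    # average decay rate, in units of the spin frequency scale
t     = 1.0    # sensing time
w0    = 1.0    # known reference frequency omega_0

def p_outcome(w, t, phi):
    """Outcome probability p = 1/2 + (1/2) cos(w*Ncl*t + phi) exp(-Ncl*Gamma*t/2)."""
    return 0.5 + 0.5 * np.cos(w * Ncl * t + phi) * np.exp(-Ncl * Gamma * t / 2)

phi = np.pi / 2 - Ncl * w0 * t   # the bias point chosen in the text

# Fisher information from a finite-difference derivative of p with respect to w
dw = 1e-7
p  = p_outcome(w0, t, phi)
dp = (p_outcome(w0 + dw, t, phi) - p_outcome(w0 - dw, t, phi)) / (2 * dw)
F_numeric  = dp**2 / (p * (1 - p))
F_analytic = Ncl**2 * t**2 * np.exp(-Ncl * Gamma * t)
print(F_numeric, F_analytic)     # the two values agree
```

At this bias point the numerically obtained F matches the closed-form expression quoted in the text below.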
Choosing the measurement bias point ϕ = π/2 - 𝒩ω_0 t gives F = 𝒩^2 t^2 e^- 𝒩Γ t, so that the total sensitivity of the frequency estimate is:

S = MF/t = N 𝒩 t e^- 𝒩Γ t .

If Γ ≠ 0, we can optimise over t to obtain:

max_t S = N/(eΓ) ,    t_opt = 1/(𝒩Γ) ,

where e ≈ 2.7 is the Euler number and the optimum occurs at the time t_opt.

§.§ Calculating the average decay rate

It is clear that the sensitivity depends crucially on the average decay rate Γ = (1/𝒩) ∑_i=1^𝒩 ξ_i, which in turn depends on the individual decay rates ξ_i. We can calculate ξ_i by computing the integrals in Eq. <ref>. The result depends on the strength of the collective coupling 𝒥_i relative to the spin frequency ω. Assuming ω > 0, we find the following three possibilities (see the Supplementary Information for details):

(i) If -ω < 𝒥_i < ω (weak coupling) we have: ξ_i = γ_i^- (n̅_i^- + 1) + γ_i^+ n̅_i^+ .

(ii) If 𝒥_i < -ω (strong anti-ferromagnetic coupling): ξ_i = γ_i^- (n̅_i^- + 1) + γ_i^+ (n̅_i^+ + 1) .

(iii) If 𝒥_i > ω (strong ferromagnetic coupling): ξ_i = γ_i^- n̅_i^- + γ_i^+ n̅_i^+ .

Here n̅_i^± = 1 / (e^ħβ |𝒥_i ±ω| - 1) is the thermal occupation of the environment oscillator with frequency |𝒥_i ±ω|, and we have defined γ_i^± = 2π f(|𝒥_i ±ω|), where f(Ω) is the environment spectral density. We can immediately see that the strong ferromagnetic coupling regime is of particular interest, since at zero temperature (β→∞, whence n̅_i^± → 0) the decay rate ξ_i vanishes for strong ferromagnetic coupling (but is non-zero for weak coupling or for strong anti-ferromagnetic coupling). This zero-temperature behaviour is an indication that at low, but non-zero, temperatures there is a qualitative difference between the strong ferromagnetic case and the weak coupling or strong anti-ferromagnetic coupling. We now consider the implications of this for the sensitivity of our frequency estimation scheme, focussing on the example of a one-dimensional spin chain.

§.§ Example: a 1-d spin chain

The analysis so far has been very general (we have not specified the coupling strengths J_i,j). However, to gain further insight we focus on a concrete example: a one-dimensional spin chain with the interaction J_i,j = J |i - j|^-α, where |i-j| is the distance between the i'th and j'th spin. Here |i-j| takes values from the set {1,2,...,𝒩} and α controls the range of the interaction; small α corresponds to long-range interaction and large α to short-range interaction. We choose this form for J_i,j because it covers a broad range of interesting examples including the infinite-range interaction (α = 0; also known as one-axis twisting), Coulomb-like interaction (α=1), dipole-dipole interaction (α = 3), nearest-neighbour interaction (α→∞), and also the case of no interaction (J = 0). Moreover, it can be implemented experimentally for 0 ≤ α ≤ 3 with trapped ions <cit.>. A necessary criterion for enhanced sensitivity in our scheme is that, for each spin, the collective coupling should be larger than the spin frequency, 𝒥_i > ω for all i (see Eq. <ref>). To simplify the analysis, we assume that the spin chain has periodic boundary conditions. This is convenient because it results in a collective coupling 𝒥 ≡ 𝒥_i = ∑_j=1,j≠⌊𝒩/2⌋^𝒩 J |⌊𝒩/2⌋ - j|^-α that is independent of the spin label i, so that the condition 𝒥 > ω for strong ferromagnetic coupling is the same for each spin. (We note, however, that for open boundary conditions the results will be qualitatively similar provided that 𝒥_i > ω for all i.)
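The case analysis above translates directly into a small routine. The following sketch works in natural units with ħ = 1 and uses an illustrative Ohmic spectral density (both our assumptions); it evaluates ξ_i in the three coupling regimes and makes the suppression of relaxation for strong ferromagnetic coupling at low temperature explicit:

```python
import numpy as np

hbar = 1.0  # natural units (our convention)

def xi(Jc, w, beta, f):
    """Single-spin decay rate xi_i for collective coupling Jc, spin frequency
    w > 0, inverse temperature beta and spectral density f(Omega)."""
    gm  = 2 * np.pi * f(abs(Jc - w))                  # gamma^-
    gp  = 2 * np.pi * f(abs(Jc + w))                  # gamma^+
    nm  = 1.0 / np.expm1(hbar * beta * abs(Jc - w))   # nbar^-
    npl = 1.0 / np.expm1(hbar * beta * abs(Jc + w))   # nbar^+
    if -w < Jc < w:        # (i) weak coupling
        return gm * (nm + 1) + gp * npl
    elif Jc < -w:          # (ii) strong anti-ferromagnetic coupling
        return gm * (nm + 1) + gp * (npl + 1)
    else:                  # (iii) strong ferromagnetic coupling, Jc > w
        return gm * nm + gp * npl

ohmic = lambda Om: 0.01 * Om     # illustrative Ohmic spectral density
w, beta = 1.0, 10.0
for Jc in (0.0, -3.0, 3.0):
    print(Jc, xi(Jc, w, beta, ohmic))   # the ferromagnetic case gives the smallest rate
```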
Since the collective coupling is the same for each spin, we have that γ_i^± = γ^± and n̅_i^± = n̅^± are also independent of i. This means that the average decay rate is written simply as:

(i) For weak coupling: Γ = γ^- (n̅^- + 1) + γ^+ n̅^+ .

(ii) For strong anti-ferromagnetic coupling: Γ = γ^- (n̅^- + 1) + γ^+ (n̅^+ + 1) .

(iii) For strong ferromagnetic coupling: Γ = γ^- n̅^- + γ^+ n̅^+ .

Substituting into Eq. <ref> gives simple expressions for the sensitivity in each case. The only variables that affect the average decay rates are the inverse temperature β (via the thermal occupation n̅^±), the strength of the collective coupling 𝒥 relative to ω (which enters through both n̅^± and γ^±), and the form of the spectral density f(Ω) (via the dissipation rate γ^±). We now examine the dependence of the sensitivity on these variables.

§.§.§ Sensitivity vs. inverse temperature

For weak coupling and strong anti-ferromagnetic coupling, the sensitivity max_t S saturates at a finite value as the temperature decreases (β increases), as shown in the green and orange lines of Figs. <ref>(a), <ref>(b) and <ref>(c). In contrast, for strong ferromagnetic coupling the sensitivity does not saturate, but keeps increasing as temperature decreases. From Eq. <ref> we can calculate the value at which the sensitivity saturates in the low-temperature limit: max_t S → N / (eγ^-) as β→∞ in the weak coupling regime, and max_t S → N / (eγ^- + eγ^+) as β→∞ in the strong anti-ferromagnetic coupling regime. For strong ferromagnetic coupling, however, the low-temperature approximation of Eq. <ref> gives, for β ≫ 1:

max_t S ≈ N / { e [ γ^- e^-ħβ|𝒥-ω| + γ^+ e^-ħβ|𝒥+ω| ] } ,

which shows that (for large β) the sensitivity increases exponentially with increasing β. In the zero-temperature limit of the strong ferromagnetic coupling regime, the average decay rate vanishes, Γ → 0 as β→∞ (since n̅^± → 0), so that the sensitivity S → N 𝒩 t increases linearly with the sensing time t. For example, if we have a single cluster with 𝒩 = N spins initially prepared in the N-spin maximally entangled state we achieve the Heisenberg limit S_HL = N^2 t, despite the interaction with the environment.

§.§.§ Sensitivity vs. collective coupling strength

The approximation in Eq. <ref> is valid in the low-temperature limit of the strong ferromagnetic coupling regime, but more generally it is valid when 𝒥±ω ≫ 1/ħβ. This indicates that for sufficiently large 𝒥, the sensitivity is well approximated by Eq. <ref> and increases exponentially with 𝒥. This is shown in the 𝒥≫ω strong ferromagnetic coupling region of Figs. <ref>(d), <ref>(e) and <ref>(f), for three different choices of spectral density function f(Ω). In some practical settings, the 𝒥≫ω regime may be inaccessible. An interesting question then is: how strong does the collective coupling 𝒥 have to be to give an advantage in sensitivity over, say, a non-interacting (𝒥 = 0) probe spin system. Comparing Figs. <ref>(d), <ref>(e) and <ref>(f) shows that the answer to this question is strongly dependent on the inverse temperature β and on the form of the spectral density function f(Ω). For an Ohmic spectral density function, Fig. <ref>(d) shows that increasing the collective coupling 𝒥 between spins always leads to an improved sensitivity. For white noise or for 1/f-noise, on the other hand, Figs.
<ref>(e) and <ref>(f) show that interactions between the spins give improved sensitivity (compared to the non-interacting case, for example) only if the collective coupling 𝒥 is larger than some critical value that depends on the inverse temperature β. This dependence of the sensitivity on the form of the spectral density function can be partially understood by calculating the sensitivity in the region 𝒥 ≈ ±ω. For example:

lim_𝒥↘ω Γ = 2π lim_𝒥↘ω f(|𝒥-ω|)/(ħβ |𝒥-ω|) + 2π f(2𝒥)/(e^ħβ 2𝒥 - 1) .

As 𝒥 approaches ω from above, the first term in Eq. <ref> diverges if the spectral density function is sub-Ohmic [i.e. if f(Ω) ∝ Ω^k for k < 1] but is finite if the spectral density function is Ohmic or super-Ohmic [i.e. if f(Ω) ∝ Ω^k for k ≥ 1]. Since the sensitivity is inversely proportional to Γ, this explains the sharp decrease to zero sensitivity around 𝒥 ≈ ω for the sub-Ohmic spectral density functions in Figs. <ref>(e) and <ref>(f).

§.§.§ Sensitivity vs. cluster size

The collective coupling 𝒥 depends on the cluster size 𝒩. This implies that the sensitivity also depends implicitly on 𝒩. In practice, a challenging aspect of the sensing protocol is the preparation and readout of the 𝒩-spin entangled states, especially if the cluster size 𝒩 is large. It is thus interesting to ask how changes in 𝒩 affect the sensitivity. In Fig. <ref>(a) we plot the collective coupling strength 𝒥 as a function of the cluster size 𝒩 for several examples. We can see that for short-range interactions [the green (α→∞) and orange (α = 3) lines], the collective coupling strength does not increase significantly as 𝒩 increases beyond 𝒩 = 3. This is because for short-range interactions the dominant contribution to a spin's collective coupling is its coupling to its two nearest neighbours. In contrast, if the interactions are long-range, distant spins will also have a significant contribution to a spin's collective coupling, so that the collective coupling strength increases with increasing cluster size, as shown for infinite-range coupling [the red line (α = 0)] in Fig. <ref>(a). Since for short-range coupling the collective coupling changes relatively little for 𝒩 > 3, a large cluster size 𝒩 (corresponding to preparation of a large maximally entangled state) does not give a substantial advantage in sensitivity compared to more clusters of smaller size 𝒩 = 3 [as illustrated in the green and orange lines, Fig. <ref>(b)]. This has important experimental implications since smaller entangled states are typically easier to prepare than large entangled states. The optimal sensing time t_opt (Eq. <ref>), however, does depend on 𝒩 and is longer for a smaller cluster size. If the coupling between spins is long-range, however, the collective coupling strength can increase as the cluster size increases [see the red line, Fig. <ref>(a)], resulting in an improved sensitivity for a larger value of 𝒩 [see the red line, Fig. <ref>(b)]. In the example of infinite-range coupling (α = 0), if 𝒩 is large enough we can approximate 𝒥±ω = (𝒩 - 1)J ±ω ≈ 𝒩J so that n̅^± ≈ exp( -ħβ𝒩 J ). This means that if we are in the strong ferromagnetic coupling regime, the sensitivity max_t S ∼ N exp( ħβ𝒩 J ) and the optimal sensing time t_opt ∼ exp( ħβ𝒩 J ) / 𝒩 increase exponentially in the cluster size 𝒩.
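The dependence of 𝒥 on 𝒩 and α described above is easy to tabulate. In the sketch below the spin separation on the periodic chain is taken as the ring distance min(|i-j|, 𝒩-|i-j|), which is one natural convention for periodic boundary conditions (an assumption on our part):

```python
import numpy as np

def collective_coupling(Ncl, J, alpha):
    """Collective coupling of one spin in a periodic chain of Ncl spins with
    J_{i,j} = J * d(i,j)**(-alpha); the separation d(i,j) is taken here as the
    ring distance min(|i-j|, Ncl - |i-j|) (our convention for periodic chains)."""
    i = Ncl // 2
    d = np.array([min(abs(i - j), Ncl - abs(i - j)) for j in range(Ncl) if j != i],
                 dtype=float)
    return J * np.sum(d ** (-alpha))

J = 1.0
for alpha in (0.0, 1.0, 3.0):
    print(alpha, [round(collective_coupling(N, J, alpha), 2) for N in (3, 5, 9, 17)])
# alpha = 0 grows as (Ncl - 1)*J; alpha = 3 saturates quickly near 2*J*zeta(3) ~ 2.4*J
```

The printout reproduces the qualitative behaviour discussed above: linear growth of 𝒥 with 𝒩 for infinite-range coupling, and rapid saturation for dipole-dipole-like coupling.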
When 𝒩 = 𝒪(N), this raises an interesting point about the use of the phrase “Heisenberg scaling” in quantum metrology: since the Heisenberg limit is S_HL = N^2 t, the scaling S ∝ N^2 is often referred to as “Heisenberg scaling”; however, in principle, max_t S can grow faster than N^2 if the optimal sensing time t_opt increases with the number of particles, as this example shows.

§.§ Example: two superconducting flux qubits

From the foregoing discussion it is clear that an experimental demonstration of enhanced sensitivity by our scheme would require (i) a qubit with a coherence time that is T_1-limited (i.e., close to the T_2 ≤ 2T_1 limit) and (ii) the ability to implement a strong ferromagnetic Ising coupling 𝒥_i > ω with other qubits. A minimal experimental demonstration could be achieved with a two-qubit system that satisfies these two conditions. As a candidate system, we consider two superconducting flux qubits. It has been demonstrated in several recent experiments <cit.> that the first requirement can be met with such qubits, through the use of dynamical decoupling. The second condition can also be satisfied, since a strong ferromagnetic interaction between flux qubits has also been demonstrated experimentally <cit.>. Although both requirements have not, as yet, been implemented in a single experiment, it may be possible with future advances in the engineering of superconducting systems. In this section we choose parameters from the experiments cited above to estimate the potential gain in sensitivity with the scheme outlined in this paper.

The experiments in Refs. <cit.> employ a Carr-Purcell-Meiboom-Gill (CPMG) pulse sequence in order to extend the qubit coherence time to its T_2 ≤ 2T_1 limit. This consists of π-pulses around the x-axis of each qubit at the times t_j = j t_pulse, where t_pulse is the interpulse duration and j = 0,1,...,m. A sensing experiment under these conditions cannot be used to precisely estimate a static parameter ω = ω_0 + Δω, since the π-pulse at t = t_j causes the phase accumulated in the preceding interval [t_j-1, t_j] to be cancelled by the phase accumulated in the following interval [t_j, t_j+1]. However, if the parameter of interest is oscillating at the same frequency as the pulses are applied, the accumulated phase in each interval [t_j, t_j+1] has the same sign and the parameter can be estimated with high sensitivity <cit.>. Therefore, when dynamical decoupling is employed we should replace ω in our Hamiltonian Eq. <ref> with the time-dependent parameter ω (t) = α(t)[ ω_0 + Δω sin(2π t/t_pulse) ]. Here, α(t) is a result of the π-pulses and takes the values +1 (-1) if the time t is in the interval [t_j, t_j+1] with j even (odd). Crucially, the π-pulses do not alter the qubit-qubit interaction term, since (σ̂^x ⊗σ̂^x )(σ̂^z ⊗σ̂^z )(σ̂^x ⊗σ̂^x ) = σ̂^z ⊗σ̂^z, so that the robustness in the presence of strong ferromagnetic coupling is maintained. With the time-dependent ω (t), the derivation of the sensitivity is similar to the time-independent case, with the final sensitivity decreased by a factor of (2/π)^2 due to the fact that the signal oscillates rather than being maintained at its maximum value Δω <cit.>. Recent experimental results indicate that the spectral density is dominated by 1/f-noise at low qubit frequencies, but that Ohmic, and other types of noise become significant at larger qubit frequencies <cit.>. This results in a T_1 time that depends on the qubit frequency.
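Before turning to numbers, the invariance of the Ising term under the simultaneous CPMG π-pulses noted above is a short check with explicit Pauli matrices:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]])
sz = np.array([[1, 0], [0, -1]])

XX = np.kron(sx, sx)   # simultaneous pi-pulse on both qubits
ZZ = np.kron(sz, sz)   # Ising coupling term
print(np.array_equal(XX @ ZZ @ XX, ZZ))   # True: the coupling survives the pulses
```

The minus signs from σ^x σ^z σ^x = -σ^z on each qubit cancel in the tensor product, which is why the pulses refocus the single-qubit phase while leaving the coupling intact.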
From the experimental values, we estimate γ^+ = γ^- = 1/T_1 ≈ 1 / (30μs) when ω_0 = 5GHz and J = 0. Since these parameters are in the weak coupling regime we can estimate the optimised sensitivity in this case as:

max_t S = (2/π)^2 N/[ eγ (2n̅ + 1) ] ≈ 7 × 10^-6 Hz^-1 ,

where we have assumed a temperature of T_env = 20mK. If, however, the qubits are both at the frequency ω_0 = 2GHz and are coupled at J = 5GHz, the experimental data suggests that we can use the values γ^- = 1/T_1 ≈ 1/(20μs) when the qubit frequency is |J - ω| = 3GHz, and γ^+ = 1/T_1 ≈ 1/(20μs) when the qubit frequency is |J+ω| = 7GHz. Since, in this case, we are in the strong ferromagnetic coupling regime, the optimised sensitivity is:

max_t S = (2/π)^2 N/[ e (γ^- n̅^- + γ^+ n̅^+) ] ≈ 11 × 10^-6 Hz^-1 ,

approximately a 50% improvement in sensitivity due to the strong ferromagnetic coupling between the qubits. We note that this is a minimal example of the gain that can be achieved in practice. As discussed in Sec. <ref>, the gain can be increased significantly by decreasing the temperature or, more feasibly, by increasing the number of qubits that are ferromagnetically coupled. We now illustrate this by doubling the number of qubits in the example above from N=2 to N=4.

For the non-interacting case (J_i,j=0 for all i,j), doubling the number of qubits to N=4 simply doubles the optimised sensitivity to max_t S ≈ 14 × 10^-6 Hz^-1. This is easily seen from the expression in Eq. <ref>, noting that when J_i,j=0 the parameters γ and n̅ are independent of N. On the other hand, if each qubit is coupled to every other qubit with J_i,j = J = 5GHz then the collective coupling associated with each qubit is 𝒥 = (N-1)J = 15GHz. This change in the collective coupling will result in changes in the parameters γ^± and n̅^±. We allow for the possibility that operating a flux qubit at the high frequencies |𝒥±ω| = 15 ± 3GHz might result in a decreased T_1 by choosing γ^± = 1/T_1 = 1 / (2μs), an order of magnitude reduction of T_1 compared to our N=2 parameters. Even so, we find that the reduction in n̅^± for the strongly interacting qubits leads to an optimised sensitivity max_t S ≈ 140 × 10^-6 Hz^-1, a factor of 10 improvement in sensitivity compared to the non-interacting probe.

§ DISCUSSION

It has been shown recently that quantum error correction can increase the robustness of frequency estimation schemes against bit-flip noise <cit.>. However, it appears that error correction does not significantly improve sensitivity in the presence of energy relaxation <cit.>. We have shown above that robustness can be achieved by introducing strong interactions between the probes. For example, if dynamical decoupling is used to extend the probe coherence time to its fundamental limit T_2 ≤ 2T_1, strong correlations between the probes can give a further enhancement. Other T_1-limited schemes, such as correlation spectroscopy <cit.>, can also be improved by introducing interactions between the probes.

§ ADDITIONAL INFORMATION

Acknowledgements: We thank Yuichiro Matsuzaki for helpful comments.
Funding: This work was supported in part by the MEXT KAKENHI Grant number 15H05870.
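As a numerical cross-check of the two-qubit estimates quoted in the flux-qubit example above, the following sketch reproduces the ≈7×10^-6 Hz^-1 and ≈11×10^-6 Hz^-1 figures. It assumes (our reading, not stated explicitly in the text) that the quoted GHz values are angular frequencies:

```python
import numpy as np

hbar, kB = 1.0546e-34, 1.3807e-23
Tenv = 0.020                                        # 20 mK
beta = 1.0 / (kB * Tenv)
nbar = lambda Om: 1.0 / np.expm1(hbar * beta * Om)  # Om in rad/s (our assumption)

N = 2
# Non-interacting case: omega_0 = 5 GHz, gamma = 1/T1 = 1/(30 us)
w0, gamma = 5e9, 1.0 / 30e-6
S_weak = (2 / np.pi)**2 * N / (np.e * gamma * (2 * nbar(w0) + 1))
print(S_weak)                                       # ~7e-6 Hz^-1

# Strongly coupled case: omega_0 = 2 GHz, J = 5 GHz, gamma^+- = 1/(20 us)
w0, J, g = 2e9, 5e9, 1.0 / 20e-6
Gamma = g * nbar(abs(J - w0)) + g * nbar(J + w0)    # strong ferromagnetic regime
S_ferro = (2 / np.pi)**2 * N / (np.e * Gamma)
print(S_ferro)                                      # ~11e-6 Hz^-1
```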
http://arxiv.org/abs/1709.09387v2
{ "authors": [ "Shane Dooley", "Michael Hanks", "Shojun Nakayama", "William J. Munro", "Kae Nemoto" ], "categories": [ "quant-ph" ], "primary_category": "quant-ph", "published": "20170927084334", "title": "Robust quantum sensing with strongly interacting probe systems" }
^1Department of Physics, University of Crete, P. O. Box 2208, 71003 Heraklion, Greece; ^2Institute of Electronic Structure and Laser, Foundation for Research and Technology–Hellas, P.O. Box 1527, 71110 Heraklion, Greece; ^3National University of Science and Technology "MISiS", Leninsky prosp. 4, Moscow, 119049, Russia; ^4School of Engineering and Applied Sciences, Harvard University, Cambridge, Massachusetts 02138, USA

A SQUID (Superconducting QUantum Interference Device) metamaterial on a Lieb lattice with nearest-neighbor coupling supports simultaneously stable dissipative breather families which are generated through a delicate balance of input power and intrinsic losses. Breather multistability is possible due to the peculiar snaking flux amplitude - frequency curve of single dissipative-driven SQUIDs, which for relatively high sinusoidal flux field amplitudes exhibits several stable and unstable solutions in a narrow frequency band around resonance. These breathers are very weakly interacting with each other, while multistability regimes with different numbers of simultaneously stable breathers persist for substantial intervals of frequency, flux field amplitude, and coupling coefficients. Moreover, the emergence of chimera states as well as novel temporally chaotic states exhibiting spatial homogeneity within each sublattice of the Lieb lattice is demonstrated.

63.20.Pw, 11.30.Er, 41.20.-q, 78.67.Pt

Multistable Dissipative Breathers and Novel Collective States in SQUID Lieb Metamaterials
N. Lazarides^1,2,3, G. P. Tsironis^1,2,3,4
December 30, 2023
===========================================================================================

§ INTRODUCTION

Superconducting metamaterials, a particular class of artificial media which rely on the sensitivity of the superconducting state reached by their constituting elements at low temperatures, have recently been the focus of considerable research efforts <cit.>. The superconducting analogues of conventional (metallic) metamaterials, which can become nonlinear with the insertion of appropriate electronic components <cit.>, are the SQUID (Superconducting QUantum Interference Device) metamaterials. The latter are inherently nonlinear due to the Josephson effect <cit.>, since each SQUID, in its simplest version, consists of a superconducting ring interrupted by a Josephson junction. The concept of SQUID metamaterials was theoretically introduced more than a decade ago both in the quantum <cit.> and the classical <cit.> regimes. Recent experiments on SQUID metamaterials have revealed several extraordinary properties such as negative diamagnetic permeability <cit.>, broad-band tunability <cit.>, self-induced broad-band transparency <cit.>, dynamic multistability and switching <cit.>, as well as coherent oscillations <cit.>. Moreover, nonlinear localization <cit.> and nonlinear band-opening (nonlinear transmission) <cit.>, as well as the emergence of dynamic states referred to as chimera states in current literature <cit.>, have been demonstrated numerically in SQUID metamaterial models. Those counter-intuitive dynamic states have been discovered numerically in rings of identical phase oscillators <cit.> (see Ref. <cit.> for a review). Experimental and theoretical investigations on SQUID metamaterials have been limited to quasi-one-dimensional (1D) lattices and two-dimensional (2D) tetragonal lattices.
However, different arrangements of SQUIDs on the plane can be realized which may also give rise to novel band structures; for example, the arrangement of SQUIDs on a line-centered tetragonal (Lieb) lattice, which is described by three sites in a square unit cell (Fig. <ref>a), gives rise to a frequency spectrum featuring a Dirac cone intersected by a topological flat band. Such a SQUID Lieb metamaterial (SLiMM) supports compact flat-band localized states <cit.>, much like those observed in photonic Lieb lattices <cit.>. Here, the existence of simultaneously stable excitations of the form of dissipative Discrete Breathers (DBs) is demonstrated numerically for a SLiMM which is driven by a sinusoidal flux field and is subject to dissipation. DBs are spatially localized and time-periodic excitations <cit.> whose existence has been proved rigorously for nonlinear Hamiltonian networks of weakly coupled oscillators <cit.>. They have actually been observed in several physical systems such as Josephson ladders <cit.> and Josephson arrays <cit.>, micromechanical oscillator arrays <cit.>, proteins <cit.>, and antiferromagnets <cit.>. From the large volume of research work on DBs, only a very small fraction is devoted to dissipative breathers, e.g., in Josephson ladders <cit.>, Frenkel-Kontorova lattices <cit.>, 2D Josephson arrays <cit.>, nonlinear metallic metamaterials <cit.>, and 2D tetragonal SQUID metamaterials <cit.>. These excitations emerge through a delicate balance of input power and intrinsic losses. Dissipative breathers in Josephson arrays and ladders are reviewed in Ref. <cit.>; for a more general review, see <cit.>. Note that dissipative breathers may exhibit richer dynamics than their Hamiltonian counterparts, including quasiperiodic <cit.> and chaotic <cit.> behavior. Moreover, simple 1D and 2D tetragonal lattices are considered in most works, except, e.g., those on moving DBs in a 2D hexagonal lattice <cit.>, on DBs in cuprate-like lattices <cit.>, and on long-lived DBs in free-standing graphene (honeycomb lattice) <cit.>.

In the following, the dynamic equations for the fluxes through the loops of the SQUIDs of a SLiMM are quoted. Then, a typical snaking bifurcation curve of the flux amplitude as a function of the driving frequency for a single SQUID is presented, and its use for the construction of trivial dissipative DB configurations is explained. The existence of simultaneously stable dissipative DBs (hereafter multistable DBs) at a frequency close to that of the single-SQUID resonance is demonstrated. Bifurcation curves for the multistable DB amplitudes with varying external flux field amplitude, coupling coefficients, and driving frequency are traced. For a better understanding of those bifurcation diagrams, standard measures for energy localization and synchronization of coupled oscillators are calculated. Moreover, the existence of chimera states for appropriately chosen initial conditions is also demonstrated. Finally, the wealth of dynamic behaviors that can be encountered in a SLiMM due to its lattice structure is indicated by the emergence of temporally chaotic states exhibiting a particular form of spatial coherence.

§ FLUX DYNAMICS EQUATIONS

Consider the Lieb lattice of Fig. <ref>a, in which each site is occupied by a SQUID (Fig. <ref>b) modelled by the equivalent circuit shown in Fig. <ref>c; all the SQUIDs are identical, with each of them featuring a self-inductance L, a capacitance C, a resistance R, and a critical current of the Josephson junction I_c.
The SQUIDs are magnetically coupled to their nearest-neighbors along the horizontal (vertical) direction through their mutual inductance M_x (M_y). Assuming that the current in each SQUID is given by the resistively and capacitively shunted junction (RCSJ) model <cit.>, the dynamic equations for the fluxes through the loops of the SQUIDs are <cit.>

L C d^2 Φ_n,m^A/dt^2 + L/R d Φ_n,m^A/dt + L I_c sin( 2πΦ_n,m^A/Φ_0 ) + Φ_n,m^A = λ_x ( Φ_n,m^B + Φ_n-1,m^B ) + λ_y ( Φ_n,m^C + Φ_n,m-1^C ) + [1-2(λ_x +λ_y)] Φ_e ,

L C d^2 Φ_n,m^B/dt^2 + L/R d Φ_n,m^B/dt + L I_c sin( 2πΦ_n,m^B/Φ_0 ) + Φ_n,m^B = λ_x ( Φ_n,m^A + Φ_n+1,m^A ) + ( 1-2 λ_x ) Φ_e ,

L C d^2 Φ_n,m^C/dt^2 + L/R d Φ_n,m^C/dt + L I_c sin( 2πΦ_n,m^C/Φ_0 ) + Φ_n,m^C = λ_y ( Φ_n,m^A + Φ_n,m+1^A ) + ( 1-2 λ_y ) Φ_e ,

where Φ_n,m^k is the flux through the loop of the SQUID of kind k in the (n,m)th unit cell (k=A, B, C, the notation is as in Fig. <ref>a), I_n,m^k is the current in the SQUID of kind k in the (n,m)th unit cell, Φ_0 is the flux quantum, λ_x = M_x /L (λ_y = M_y /L) is the coupling coefficient along the horizontal (vertical) direction, t is the temporal variable, and Φ_e = Φ_ac cos( ω t ) is the external flux due to a sinusoidal magnetic field applied perpendicularly to the plane of the SLiMM. The subscript n (m) runs from 1 to N_x (1 to N_y), so that N = N_x N_y is the number of unit cells of the SLiMM (the number of SQUIDs is 3 N). Using the relations τ = ω_LC t, ϕ_n,m^k = Φ_n,m^k / Φ_0, and ϕ_ac = Φ_ac / Φ_0, where ω_LC = 1/√(LC) is the inductive-capacitive (L C) SQUID frequency, Eqs. (<ref>)-(<ref>) can be normalized as

Lϕ_n,m^A = λ_x ( ϕ_n,m^B + ϕ_n-1,m^B ) + λ_y ( ϕ_n,m^C + ϕ_n,m-1^C ) + [1-2(λ_x +λ_y)] ϕ_e (τ) ,
Lϕ_n,m^B = λ_x ( ϕ_n,m^A + ϕ_n+1,m^A ) + ( 1-2 λ_x ) ϕ_e (τ) ,
Lϕ_n,m^C = λ_y ( ϕ_n,m^A + ϕ_n,m+1^A ) + ( 1-2 λ_y ) ϕ_e (τ) ,

where

β = LI_c/Φ_0 = β_L/2π    and    γ = ω_LC L/R

are the SQUID parameter and the dimensionless loss coefficient, respectively, ϕ_e (τ) = ϕ_ac cos(Ωτ) is the external flux of frequency Ω = ω / ω_LC and amplitude ϕ_ac, and L is an operator such that

Lϕ_n,m^k = ϕ̈_n,m^k + γϕ̇_n,m^k + ϕ_n,m^k + βsin( 2πϕ_n,m^k ) .

The overdots on ϕ_n,m^k denote differentiation with respect to τ. The SQUID parameter and the loss coefficient used in the simulations have been chosen to be the same as those provided in the Supplemental Material of Ref. <cit.> for an 11 × 11 SQUID metamaterial, i.e., β_L = 0.86 and γ = 0.01. These values result from Eq. (<ref>) with L = 60 pH, C = 0.42 pF, I_c = 4.7 μA, and subgap resistance R = 500 Ohms. The value of the coupling between neighboring SQUIDs has been chosen to be λ_x = λ_y = -0.02, as it has been estimated for a 27 × 27 SQUID metamaterial in the experiments of Ref. <cit.>. These experiments were performed with a specially designed setup which allows for the application of uniform ac driving and/or dc bias fluxes <cit.> as well as dc flux gradients <cit.> to the SQUID metamaterials which are placed into a waveguide. In the simulations in the next sections, the described effects can be identified within the experimentally accessible range of ϕ_ac, which spans the interval 0.001 - 0.1 <cit.>. Furthermore, the SLiMM is chosen to have 16 × 16 unit cells, so that its size is comparable with that of the 27 × 27 SQUID metamaterial investigated in Refs. <cit.>.

§ SINGLE SQUID RESONANCE AND MULTISTABLE DISSIPATIVE BREATHERS

In a single SQUID driven with a relatively high amplitude field ϕ_ac, strong nonlinearities shift the resonance frequency from Ω = Ω_SQ to Ω ∼ 1, i.e., to the LC frequency ω_LC.
Moreover, the curve for the oscillation amplitude of the flux through the loop of the SQUID ϕ_max as a function of the driving frequency Ω (SQUID resonance curve) acquires a snaking form as that shown in Fig. <ref> (blue) <cit.>. That curve is calculated from the normalized single SQUID equation

ϕ̈ + γϕ̇ + βsin( 2πϕ ) + ϕ = ϕ_ac cos(Ωτ) ,

for the flux ϕ through the loop of the SQUID. The curve "snakes" back and forth within a narrow frequency region via successive saddle-node bifurcations (occurring at those points for which dΩ / dϕ_max = 0). The many branches of the resonance curve have been traced numerically using Newton's method; the stable branches are those which are partially covered by the red circles. An approximation to the resonance curve for ϕ_max ≪ 1 is given by <cit.>

Ω^2 = Ω_SQ^2 ± ϕ_ac/ϕ_max - β_L ϕ_max^2 { a_1 - ϕ_max^2 [ a_2 - ϕ_max^2 ( a_3 - a_4 ϕ_max^2 ) ] } ,

where a_1 = π^2 /2, a_2 = π^4 /12, a_3 = π^6 /144, and a_4 = π^8 /2880, which implicitly provides ϕ_max (Ω). The approximate curves Eq. (<ref>) are shown in Fig. <ref> in green color; they show excellent agreement with the numerical snaking resonance curve for ϕ_max ≲ 0.6. The vertical orange segment at Ω = 1.01 intersects the resonance curve at several ϕ_max points; five of those, numbered on Fig. <ref> with consecutive integers from 0 to 4, correspond to stable solutions of the single SQUID equation. These five (5) solutions, which can be denoted as (ϕ_i, ϕ̇_i) with i=0, 1, 2, 3, 4, are used for the construction of four (4) trivial dissipative DB configurations. Note that the flux amplitude ϕ_max of these five solutions increases with increasing i. For constructing a (single-site) trivial dissipative DB, two simultaneously stable solutions are first identified, say (ϕ_0, ϕ̇_0) (0) and (ϕ_1, ϕ̇_1) (1), with low and high flux amplitude ϕ_max, respectively. Then, one of the SQUIDs at (n, m)=(n_e = N_x/2, m_e = N_y/2) (hereafter referred to as the central DB site, which also determines the location of the DB) is set to the high amplitude solution 1, while all the other SQUIDs of the SLiMM (the background) are set to the low amplitude solution 0. In order to numerically obtain a dissipative DB, that trivial DB configuration is used as initial condition for the time-integration of Eqs. (<ref>)-(<ref>); then, a stable dissipative DB (denoted as DB_1) is formed after integration for a few thousand time units. Three (3) more trivial dissipative DBs can be constructed similarly, e.g. by setting the central DB site to the solution 2, 3, or 4, and the background to the solution 0. Then, by integrating Eqs. (<ref>)-(<ref>) using as initial conditions these trivial DB configurations, three more stable dissipative DBs are obtained numerically (denoted as DB_2, DB_3, and DB_4, respectively). These four dissipative DBs are simultaneously stable and oscillate with the driving frequency Ω = 1.01.

The Hamiltonian (total energy) for the SLiMM described by Eqs. (<ref>)-(<ref>) for γ=0 is given by

H = ∑_n,m H_n,m ,

where the Hamiltonian (energy) density, H_n,m, is

H_n,m = π/β ∑_k [ ( q_n,m^k )^2 + ( ϕ_n,m^k -ϕ_e)^2 ] - ∑_k cos( 2πϕ_n,m^k ) - π/β { λ_x [ (ϕ_n,m^A -ϕ_e)(ϕ_n-1,m^B -ϕ_e) + 2(ϕ_n,m^A -ϕ_e)(ϕ_n,m^B -ϕ_e) + (ϕ_n,m^B -ϕ_e)(ϕ_n+1,m^A -ϕ_e) ] + λ_y [ (ϕ_n,m^A -ϕ_e)(ϕ_n,m-1^C -ϕ_e) + 2(ϕ_n,m^A -ϕ_e)(ϕ_n,m^C -ϕ_e) + (ϕ_n,m^C -ϕ_e)(ϕ_n,m+1^A -ϕ_e) ] } ,

where q_n,m^k = dϕ_n,m^k/dτ is the normalized instantaneous voltage across the Josephson junction of the SQUID in the (n,m)th unit cell of kind k.
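In practice, the coexisting single-SQUID solutions entering these trivial configurations can be located by direct numerical integration of the single-SQUID equation above, scanning over initial fluxes. A minimal sketch follows; the drive amplitude ϕ_ac = 0.06 is an assumed value inside the experimentally accessible window, and the number of distinct steady-state amplitudes recovered depends on this choice:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Normalized driven-damped single-SQUID equation:
#   phi'' + gamma*phi' + phi + beta*sin(2*pi*phi) = phi_ac*cos(Omega*tau)
beta_L, gamma = 0.86, 0.01
beta = beta_L / (2 * np.pi)
Omega = 1.01
phi_ac = 0.06    # assumed drive amplitude inside the multistable window

def rhs(tau, y):
    phi, q = y
    return [q, -gamma*q - phi - beta*np.sin(2*np.pi*phi) + phi_ac*np.cos(Omega*tau)]

T = 2 * np.pi / Omega
amps = set()
for phi0 in np.linspace(0.0, 2.0, 9):   # scan initial fluxes to reach different attractors
    sol = solve_ivp(rhs, (0.0, 600*T), [phi0, 0.0],
                    rtol=1e-9, atol=1e-9, dense_output=True)
    tail = sol.sol(np.linspace(580*T, 600*T, 4001))[0]   # steady-state window
    amps.add(round(float(np.max(np.abs(tail))), 2))      # flux amplitude phi_max
print(sorted(amps))   # distinct values signal coexisting stable solutions
```

Each distinct amplitude recovered this way corresponds to one of the stable branches crossed by the vertical segment in Fig. <ref>.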
Both H and H_n,m are normalized to the Josephson energy, E_J. Two more quantities are also defined; the energetic participation ratio <cit.>

epr = [ ∑_n,m ( H_n,m/H )^2 ]^-1 ,

which is a measure of localization (it roughly measures the number of the most strongly excited unit cells), and the complex synchronization parameter

Ψ = 1/(3 N) ∑_n,m,k e^2π i ϕ_n,m^k ,

which is a spatially global measure of synchronization for coupled oscillators; its magnitude r (τ) = |Ψ (τ)| ranges from zero (completely desynchronized solution) to unity (completely synchronized solution).

Eqs. (<ref>)-(<ref>) implemented with periodic boundary conditions are initialized with the four trivial breather configurations and then integrated in time with a standard Runge-Kutta fourth order scheme. The temporal evolution of the total energy H of the SLiMM, the energetic participation ratio epr, and the dissipative DB amplitude ϕ_max are shown for all cases in Fig. <ref>. After some oscillations during the initial stages of evolution, all curves flatten, indicating that a steady state has been reached (after ∼ 1500 time units of integration). As it can be observed, the SLiMM has higher energy for higher amplitude DBs ϕ_max (Fig. <ref>a). The steady-state values of ϕ_max for the four DBs can be seen in Fig. <ref>c; these values have been also used in the inset of Fig. <ref>a. In that inset, the ratio of the energy of the unit cell to which the central DB site belongs over the total energy of the SLiMM, i.e., e_DB = H_n_e,m_e /H, is shown for the four DBs. This ratio increases considerably with increasing DB amplitude. This is certainly compatible with Fig. <ref>b (see also the inset), in which epr is plotted as a function of τ, where apparently higher amplitude DBs provide more localized structures than lower amplitude ones.

It is convenient to present the energy density H_n,m profiles of the four dissipative DBs in one plot, as shown in Fig. <ref>. These profiles are obtained after 2000 T ≃ 12500 time units of integration (T=2π/Ω) using an appropriate initial condition which is a combination of the four trivial DB configurations. The difference between the three subfigures is in the distances between the central DB sites. Remarkably, the steady-state total energy of the SLiMM, H=E_tot, is the same in all the three cases and equal to H=580.6, indicating that the interaction between these DBs is almost negligible, even if they are located very closely (as in Fig. <ref>c).

§ BIFURCATIONS OF MULTISTABLE DISSIPATIVE BREATHERS

In this Section, the parameter intervals in which these four DBs are stable are determined; for this purpose, the steady-state DB amplitudes ϕ_max are calculated as a function of either the driving field amplitude ϕ_ac, or the magnitude of the coupling coefficients for isotropic coupling λ_x = λ_y, or the driving frequency Ω. First, ϕ_max, the energetic participation ratio epr, and the magnitude of the synchronization parameter averaged over the steady-state integration time τ_int = 2000 T time units (transients have been discarded), are calculated as a function of ϕ_ac (Fig. <ref>). In Fig. <ref>a, it can be seen that higher amplitude DBs remain stable for narrower intervals of ϕ_ac. Interestingly, higher amplitude DBs may turn into lower amplitude ones even several times until they completely disappear. As an example, we note that DB_4 (blue curve), which is stable approximately for ϕ_ac between 0.04 and 0.085, transforms into a DB_2 for ϕ_ac < 0.04, and then into an even lower amplitude DB at ϕ_ac < 0.015.
The presence of the latter DB is rather unexpected, since it cannot be identified with one of the four DB families under consideration. All the DBs disappear for ϕ_ac ≲ 0.005, since the nonlinearity is not strong enough to localize energy in the SLiMM. For ϕ_ac exceeding a critical value, which is higher for lower amplitude DBs (e.g., 0.085 for DB_4 and 0.118 for DB_1), all the four DBs turn into irregular multibreather states.

In Figs. <ref>b and <ref>c the corresponding epr and <r>_int are presented as a function of ϕ_ac. In Fig. <ref>b, it can be seen that when all the DBs disappear for low ϕ_ac, the SLiMM reaches a homogeneous state, as indicated by the large value of epr, close to its maximum possible value epr ≃ N = 256. In that case, <r>_int is exactly unity (Fig. <ref>c), since the homogeneous state is synchronized. For high values of ϕ_ac (> 0.118), where ϕ_max for all the four DBs varies irregularly with varying ϕ_ac, the value of epr can be used to distinguish between two different regimes: the first one from ϕ_ac ≃ 0.118 to 0.154, in which the low, fluctuating value of epr suggests the existence of (possibly chaotic) multibreathers (see also the inset of Fig. <ref>b), and the second from ϕ_ac ≃ 0.154 to 0.16, in which the high value of epr (≃ 256) suggests the existence of a desynchronized state in which all the unit cells are excited. For intermediate values of ϕ_ac, epr generally increases with increasing ϕ_ac; in particular, for DB_1 it increases to rather high values because of the relative enhancement of the oscillation amplitude of the background unit cells with respect to the central DB unit cell. However, this is not observed for the high amplitude DBs, for which the increase is either moderate (DB_2) or very small (DB_3, DB_4). It is also apparent that whenever a DB is transformed into another, a small jump in epr occurs (inset). Fig. <ref>c provides useful information on the synchronization of the various SLiMM states. For example, for ϕ_ac ≃ 0.154 to 0.16, <r>_int falls off to very low values indicating desynchronization, as mentioned above. For the values of ϕ_ac which provide stable single-site DBs that belong to one of the four (4) families (as well as the fifth one which has appeared), the measure <r>_int is always very close to unity (inset); that occurs because all the "background" SQUIDs are oscillating in phase with the same amplitude, and only one SQUID (the central DB site) is oscillating with higher amplitude and opposite phase with respect to the others. For low ϕ_ac, the SLiMM reaches a homogeneous state (the DBs have disappeared) and then <r>_int is exactly unity.

The corresponding diagram of the DB flux amplitudes ϕ_max as a function of the coupling coefficients for isotropic coupling λ = λ_x = λ_y is shown in Fig. <ref>. Remarkably, the four DBs maintain their amplitudes almost constant for a substantial interval of λ (Fig. <ref>a), i.e., from λ = 0 to -0.026, which includes the estimated physically acceptable values for that system <cit.>. The corresponding values of the epr remain low, except for the lowest amplitude breather (DB_1), for which epr ≃ 30. Note that DB_1 disappears for λ > -0.003 but it exists all the way down to λ = -0.05. For large magnitudes of λ, the amplitudes of the three high amplitude breathers (DB_2, DB_3, DB_4) vary irregularly with varying λ; however, as it can be observed in Fig.
<ref>b, their epr remains relatively low, indicating the spontaneous formation of multibreathers.

The bifurcation diagram of the DB amplitudes ϕ_max as a function of the driving frequency Ω is particularly interesting. This diagram has been superposed on the single SQUID resonance curve shown in Fig. <ref> as red circles. Notice that DB flux amplitudes (red circles) are very close to the corresponding flux amplitudes of single-SQUID stable solutions (which are covered by the red circles). All red-circled branches (except the lowest ones pointed to by the arrows) correspond to stable DB families. The red-circled branches indicated by the arrows correspond to almost homogeneous solutions which are not DBs. Note that different numbers of multistable dissipative DBs exist for different driving frequencies, depending on the broadness of the red-circled branches; for example, for Ω = 1.01 there are four simultaneously stable DBs, while for Ω = 1.03 there are two, and for Ω = 1.07 there is only one stable DB.

§ NOVEL DYNAMIC SLIMM STATES

So far, we focused on the formation of single-site, dissipative DBs in a SLiMM, which can be generated through trivial DB configurations and are simultaneously stable. Beyond dissipative DB solutions, other interesting numerical solutions have been obtained; these solutions correspond to counter-intuitive dynamic states such as the so-called chimera states and a type of states that exhibit spatial homogeneity as well as chaotic evolution. Typical examples of such states, whose analysis requires further work, are demonstrated here. First, a chimera state solution is illustrated which is generated from the following initial condition:

ϕ_n,m^k (τ=0) = 0.5 for N_x /4 +1 < n ≤ 3 N_x/4 and N_y /4 +1 < m ≤ 3 N_y/4, and ϕ_n,m^k (τ=0) = 0 otherwise ,

ϕ̇_n,m^k (τ=0) = 0 .

With Eqs. (<ref>) and (<ref>) as initial conditions, Eqs. (<ref>)-(<ref>) for the SLiMM are integrated in time. The magnitude of the synchronization parameter averaged over each driving period T=2π/Ω, <r>_T (τ), is monitored in time and the results are shown in Fig. <ref>a for five different driving frequencies Ω close to unity. It can be seen that <r>_T (τ) is in all cases considerably less than unity, indicating significant desynchronization. The fluctuations, however, of <r>_T (τ) do not all have the same size. Specifically, for Ω=1.01, 1.015, and 1.02 (black, red, and green curves, respectively), the fluctuations have roughly the same size. For Ω=1.025 (blue curve), the size of fluctuations is significantly higher, while for Ω=1.03 (orange curve) the fluctuations are practically zero. This can be seen more clearly in Fig. <ref>b, in which the distributions pdf( <r>_T ) of the values of <r>_T (τ) are shown; the full-width half-maximum (FWHM) of the pdf( <r>_T )s quantifies the level of metastability of chimera states <cit.>. A partially desynchronized dynamic state (i.e., with <r>_T < 1 but practically zero fluctuations) is not a chimera state but a clustered state, i.e., a non-homogeneous state in which different groups of SQUIDs oscillate with different amplitudes and phases with respect to the driving field; however, the SQUID oscillators that belong to the same group are synchronized. Thus, as can be inferred from Fig. <ref> as well as by the inspection of the flux profiles at the end of integration time (not shown), the curves for Ω=1.01, 1.015, 1.02, and 1.025 (black, red, green, and blue curves, respectively) are indeed due to chimera states. The energy density profiles at the end of the integration time for Ω=1.01, 1.03, and 1.05, are shown in Fig. <ref>.
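A compact way to reproduce this diagnostic is to step Eqs. (<ref>)-(<ref>) directly with the chimera initial condition and accumulate <r>_T per driving period. The sketch below uses a simple semi-implicit Euler stepper and an assumed drive amplitude; whether a chimera or a clustered state emerges depends on these choices (and on the index-offset convention of the central square):

```python
import numpy as np

Nx = Ny = 16
beta, gamma = 0.86 / (2*np.pi), 0.01
lx = ly = -0.02
Omega = 1.01
phi_ac = 0.06                      # assumed drive amplitude
steps_per_T, n_periods = 200, 200
dt = (2*np.pi/Omega) / steps_per_T

# Chimera initial condition in the spirit of Eqs. (19)-(20):
# fluxes 0.5 inside the central square, zero elsewhere, zero velocities
A, B, C = (np.zeros((Nx, Ny)) for _ in range(3))
sq = (slice(Nx//4, 3*Nx//4), slice(Ny//4, 3*Ny//4))
A[sq] = B[sq] = C[sq] = 0.5
vA, vB, vC = (np.zeros((Nx, Ny)) for _ in range(3))

def forces(A, B, C, vA, vB, vC, tau):
    """Right-hand sides of Eqs. (4)-(6); np.roll implements periodic boundaries."""
    fe = phi_ac * np.cos(Omega * tau)
    on = lambda p, v: -gamma*v - p - beta*np.sin(2*np.pi*p)
    aA = on(A, vA) + lx*(B + np.roll(B, 1, 0)) + ly*(C + np.roll(C, 1, 1)) + (1 - 2*(lx+ly))*fe
    aB = on(B, vB) + lx*(A + np.roll(A, -1, 0)) + (1 - 2*lx)*fe
    aC = on(C, vC) + ly*(A + np.roll(A, -1, 1)) + (1 - 2*ly)*fe
    return aA, aB, aC

r = []
for step in range(steps_per_T * n_periods):
    aA, aB, aC = forces(A, B, C, vA, vB, vC, step*dt)
    vA += dt*aA; vB += dt*aB; vC += dt*aC       # semi-implicit Euler update
    A += dt*vA;  B += dt*vB;  C += dt*vC
    Psi = np.mean(np.exp(2j*np.pi*np.concatenate([A.ravel(), B.ravel(), C.ravel()])))
    r.append(abs(Psi))
r_T = np.array(r).reshape(n_periods, steps_per_T).mean(axis=1)   # <r>_T per period
print(r_T[-5:])   # values well below unity indicate partial desynchronization
```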
The first two are typical for chimera states, while the last one is typical for a clustered state. Note that the SQUIDs within the square in which the fluxes were initialized to a non-zero value oscillate with high amplitude and are not synchronized. The rest of the SQUIDs, i.e., outside that square, oscillate in phase and with the same (low) amplitude. Thus, from the initial condition Eqs. (<ref>) and (<ref>), different chimera states are obtained for different driving frequencies. These states differ in their asymptotic value of <r>_T as well as the FWHM of their pdf( <r>_T )s, which determines their metastability level. In Fig. <ref>c, on the other hand, one may easily distinguish groups of SQUIDs with the same amplitude. The SQUIDs that belong to such a group are synchronized together while the groups are not synchronized to each other. In this clustered state, all the SQUIDs are oscillating with high amplitude (note the energy scales).

A family of novel solutions emerges through an order-to-chaos phase transition that is demonstrated for varying ϕ_ac. Eqs. (<ref>)-(<ref>) with periodic boundary conditions are integrated in time for ϕ_ac increasing from zero to higher values; the initial condition is homogeneous, i.e., ϕ_n,m^k (τ=0) = ϕ̇_n,m^k (τ=0) = 0 for any n, m, and k. The flux field amplitude ϕ_ac increases in small steps, and for each step the solution for the previous step is taken as the initial condition. For relatively low ϕ_ac, the amplitudes of the oscillating fluxes through the loops of the SQUIDs have low values and they are very close to each other, i.e., ϕ_max^A ≃ ϕ_max^B = ϕ_max^C, as shown in Fig. <ref>a (all the SQUIDs of kind k are oscillating with amplitude ϕ_max^k, k=A,B,C). Actually, the difference between ϕ_max^A and ϕ_max^B,C is less than 1% in this regime. Moreover, the fluxes in all kinds of SQUIDs are oscillating periodically in phase, and thus the degree of synchronization <r>_int of these states is almost unity (see the upper branch of the red curve in the inset of Fig. <ref>a). That state is referred to as an almost homogeneous state in space. In the inset of Fig. <ref>a, the total energy of the SLiMM E_tot divided by E_0 = 10^6 is plotted as a function of ϕ_ac; that energy increases smoothly with increasing ϕ_ac (lower branch of the blue curve in the inset of Fig. <ref>a). At a critical value of ϕ_ac, ϕ_ac^c ≃ 0.155, the situation changes drastically, as an abrupt increase of all the amplitudes ϕ_max^k occurs while their values become considerably different (ϕ_max^A attains considerably larger values than ϕ_max^B = ϕ_max^C). Moreover, for ϕ_ac > ϕ_ac^c, the values of the ϕ_max^k vary irregularly with increasing ϕ_ac, although their average values as well as the difference between ϕ_max^A and ϕ_max^B,C increase (Fig. <ref>a). Also, at the phase transition point ϕ_ac^c, the parameter <r>_int abruptly jumps to a value which indicates significant desynchronization, <r>_int ∼ 0.7; that value remains almost unchanged with further increasing ϕ_ac (inset). The variation of the total energy of the SLiMM E_tot is similar to that of the variation of the ϕ_max^k, i.e., it jumps abruptly to a higher value at ϕ_ac = ϕ_ac^c (inset). Recall that the above remarks hold for ϕ_ac increasing from zero to higher values. The corresponding curves for <r>_int and E_tot for ϕ_ac decreasing from 0.3 to zero are also shown in the inset of Fig. <ref>a (lower branch of the red curve and upper branch of the blue curve, respectively).
The "explosive" (first-order) character of the transition is clearly manifested by the presence of a large hysteresis region.

Consider again the case in which ϕ_ac increases from zero to higher values. In that case, the steady-states of the SLiMM are almost synchronized (almost spatially homogeneous) and temporally periodic for ϕ_ac < ϕ_ac^c. Note however that those states are exactly homogeneous at the unit cell level, i.e., that ϕ̅_n,m = ∑_k ϕ_n,m^k = c for any n and m, with c being a constant. For ϕ_ac > ϕ_ac^c the SLiMM states acquire chaotic time-dependence, while they retain partial homogeneity and thus synchronization; that is, all the SQUIDs of kind k are synchronized although they execute chaotic oscillations. Remarkably, at the unit cell level, even this state is spatially homogeneous. In Fig. <ref>b the time-dependence of the fluxes ϕ^A, ϕ^B, and ϕ^C (identical for all the SQUIDs of kind A, B, and C, respectively, of the SLiMM) is plotted for ϕ_ac = 0.2 during a few thousand time units. Apparently, the flux oscillations are irregular, indicating chaotic behavior which has been checked to persist for very long times (note that ϕ^B = ϕ^C due to the isotropic coupling). A flux profile for that state is shown in Fig. <ref>c, in which the spatial homogeneity within each sublattice of the SLiMM is apparent. Thus, large-scale synchronization between oscillators in a chaotic state occurs in this case. In Fig. <ref>d, two stroboscopic plots in the reduced ϕ^C - ϕ̇^C (ϕ̇^C = q^C) phase space are shown together for the C SQUID at the (n_e, m_e)th unit cell. In the first one (blue down-triangles, inset), the SLiMM is in an almost synchronized temporally periodic state (ϕ_ac = 0.1); in the second one (red circles), the SLiMM is in a partially synchronized (synchronization of the SQUIDs within each sublattice) temporally chaotic state (ϕ_ac = 0.2). Apparently, the trajectory in the reduced phase-space tends to a point in the former case, while it tends to a large-area attractor in the latter. In Fig. <ref>d, the transients leading the trajectories to the one or the other attractor are also shown.

§ CONCLUSIONS

The existence of several regions in parameter space with simultaneously stable dissipative DBs has been demonstrated numerically for a dissipative SLiMM which is driven by a sinusoidal flux field. For that purpose, the dynamic equations Eqs. (<ref>)-(<ref>) for the fluxes threading the loops of the SQUIDs are integrated in time with periodic boundary conditions. The initial conditions have been properly designed to provide trivial DB configurations using combinations of simultaneously stable solutions of the single-SQUID oscillator. For substantial nonlinearity excited from a flux field with relatively high amplitude, the single-SQUID resonance curve exhibits several simultaneously stable solutions at frequencies around resonance. That allows for the construction of several trivial DB configurations at some particular frequency; the subsequent temporal evolution through Eqs. (<ref>)-(<ref>) results in multistable (co-existing) dissipative DBs. The bifurcation diagrams for the calculated DB amplitudes as a function of ϕ_ac, λ, and Ω have been presented, which reveal that multistability persists within substantial parameter intervals. For a better interpretation of those bifurcation diagrams, well-established measures for energy localization and synchronization of oscillators in discrete lattices were defined, and they were calculated for each dissipative DB.
Remarkably, the interactions between co-existing DBs are very weak; no appreciable change in the total energy of the SLiMM has been observed even when the co-existing DBs are very close together. The bifurcation diagram of the dissipative DB amplitudes as a function of Ω, shown as the branches formed by the red circles in Fig. <ref>, resembles the snaking bifurcation curves for spatially localized states in the Swift-Hohenberg equation <cit.>; however, snaking bifurcation curves also occur in discrete problems <cit.>. Interestingly, snaking bifurcation diagrams for chimera states have been obtained in the 1D extended Bogdanov-Takens lattice <cit.>. Besides single-site multistable dissipative DBs, two other types of dynamic states were demonstrated: chimera states, which can be generated in a SLiMM by an appropriate choice of initial conditions, and spatially homogeneous (at the unit-cell level), temporally chaotic states. The existence of the former has been demonstrated in 1D SQUID metamaterials, and the mechanism for their generation through the attractor crowding effect in coupled nonlinear oscillator arrays <cit.> has been described <cit.>. Similar chimera states are also expected to appear in SQUID metamaterials on 2D tetragonal lattices. The spatially homogeneous, temporally chaotic states, however, are peculiar to the lattice geometry of the SLiMM, and indicate the wealth of dynamic behaviors that may be encountered in that system. As ϕ_ac increases from zero, the SLiMM passes through states which are spatially (almost) homogeneous and temporally periodic. At a critical value ϕ_ac = ϕ_ac^c a transition occurs, and for ϕ_ac > ϕ_ac^c the SLiMM passes through states in which all the SQUIDs of kind k (i.e., the SQUIDs within each sublattice) have the same amplitude and are synchronized together, while their time-dependence is chaotic! These states exhibit large-scale chaotic synchronization <cit.>; notably, states with spatial coherence and temporal chaos have been obtained in coupled map lattices with asymmetric short-range couplings <cit.>. The order-to-chaos transition with hysteresis obtained here is similar to that demonstrated numerically and observed in laser-cooled trapped ions <cit.>. When seen as a synchronization-desynchronization transition with hysteresis, it resembles the explosive first-order transition to synchrony observed in electronic circuits <cit.>.

§ ACKNOWLEDGMENT

This work is partially supported by the Ministry of Education and Science of the Russian Federation in the framework of the Increase Competitiveness Program of NUST "MISiS" (No. K2-2017-006), and by the European Union under project NHQWAVE (MSCA-RISE 691209). NL gratefully acknowledges the Laboratory for Superconducting Metamaterials, NUST "MISiS", for its warm hospitality during visits.

§ REFERENCES

[Anlage2011] S. M. Anlage, The physics and applications of superconducting metamaterials, J. Opt. 13, 024001 (2011).
[Jung2014] P. Jung, A. V. Ustinov, and S. M. Anlage, Progress in superconducting metamaterials, Supercond. Sci. Technol. 27, 073001 (2014).
[Lazarides2018] N. Lazarides and G. P. Tsironis, Superconducting metamaterials, arXiv preprint, arXiv:1710.00680.
[Lapine2003] M. Lapine, M. Gorkunov, and K. H. Ringhofer, Nonlinearity of a metamaterial arising from diode insertions into resonant conductive element, Phys. Rev. E 67, 065601 (2003).
[Lapine2014] M. Lapine, I. V. Shadrivov, and Y. S. Kivshar, Colloquium: Nonlinear metamaterials, Rev. Mod. Phys. 86, 1093 (2014).
[Josephson1962] B. Josephson, Possible new effects in superconductive tunnelling, Phys. Lett. A 1, 251 (1962).
[Du2006] C. Du, H. Chen, and S. Li, Quantum left-handed metamaterial from superconducting quantum-interference devices, Phys. Rev. B 74, 113105 (2006).
[Lazarides2007] N. Lazarides and G. P. Tsironis, rf superconducting quantum interference device metamaterials, Appl. Phys. Lett. 90, 163501 (2007).
[Jung2013] P. Jung, S. Butz, S. V. Shitov, and A. V. Ustinov, Low-loss tunable metamaterials using superconducting circuits with Josephson junctions, Appl. Phys. Lett. 102, 062601 (2013).
[Butz2013a] S. Butz, P. Jung, L. V. Filippenko, V. P. Koshelets, and A. V. Ustinov, A one-dimensional tunable magnetic metamaterial, Opt. Express 21, 22540 (2013).
[Trepanier2013] M. Trepanier, Daimeng Zhang, O. Mukhanov, and S. M. Anlage, Realization and modeling of rf superconducting quantum interference device metamaterials, Phys. Rev. X 3, 041029 (2013).
[Zhang2015] Daimeng Zhang, M. Trepanier, O. Mukhanov, and S. M. Anlage, Broadband transparency of macroscopic quantum superconducting metamaterials, Phys. Rev. X 5, 041045 (2015).
[Jung2014b] P. Jung, S. Butz, M. Marthaler, M. V. Fistul, J. Leppäkangas, V. P. Koshelets, and A. V. Ustinov, Multistability and switching in a superconducting metamaterial, Nat. Comms. 5, 3730 (2014).
[Trepanier2017] M. Trepanier, Daimeng Zhang, O. Mukhanov, V. P. Koshelets, P. Jung, S. Butz, E. Ott, T. M. Antonsen, A. V. Ustinov, and S. M. Anlage, Coherent oscillations of driven rf SQUID metamaterials, Phys. Rev. E 95, 050201(R) (2017).
[Lazarides2008a] N. Lazarides, G. P. Tsironis, and M. Eleftheriou, Dissipative discrete breathers in rf SQUID metamaterials, Nonlinear Phenom. Complex Syst. 11, 250 (2008).
[Tsironis2014b] G. P. Tsironis, N. Lazarides, and I. Margaris, Wide-band tuneability, nonlinear transmission, and dynamic multistability in SQUID metamaterials, Appl. Phys. A 117, 579 (2014).
[Lazarides2015b] N. Lazarides, G. Neofotistos, and G. P. Tsironis, Chimeras in SQUID metamaterials, Phys. Rev. B 91, 054303 (2015).
[Hizanidis2016a] J. Hizanidis, N. Lazarides, and G. P. Tsironis, Robust chimera states in SQUID metamaterials with local interactions, Phys. Rev. E 94, 032219 (2016).
[Kuramoto2002] Y. Kuramoto and D. Battogtokh, Coexistence of coherence and incoherence in nonlocally coupled phase oscillators, Nonlinear Phenom. Complex Syst. 5, 380 (2002).
[Panaggio2015] M. J. Panaggio and D. M. Abrams, Chimera states: Coexistence of coherence and incoherence in networks of coupled oscillators, Nonlinearity 28, R67 (2015).
[Lazarides2017] N. Lazarides and G. P. Tsironis, SQUID metamaterials on a Lieb lattice: From flat-band to nonlinear localization, Phys. Rev. B 96, 054305 (2017).
[Vicencio2015] R. A. Vicencio, C. Cantillano, L. Morales-Inostroza, B. Real, C. Mejía-Cortés, S. Weimann, A. Szameit, and M. I. Molina, Observation of localized states in Lieb photonic lattices, Phys. Rev. Lett. 114, 245503 (2015).
[Mukherjee2015a] S. Mukherjee, A. Spracklen, D. Choudhury, N. Goldman, P. Öhberg, E. Andersson, and R. R. Thomson, Observation of a localized flat-band state in a photonic Lieb lattice, Phys. Rev. Lett. 114, 245504 (2015).
[Flach2008a] S. Flach and A. V. Gorbach, Discrete breathers - advances in theory and applications, Phys. Rep. 467, 1 (2008).
[Flach2012] S. Flach, Discrete breathers in a nutshell, Nonlinear Theory and Its Applications, IEICE 3, 1 (2012).
[Mackay1994] R. S. MacKay and S. Aubry, Proof of existence of breathers for time-reversible or Hamiltonian networks of weakly coupled oscillators, Nonlinearity 7, 1623 (1994).
[Aubry1997] S. Aubry, Breathers in nonlinear lattices: Existence, linear stability and quantization, Physica D 103, 201 (1997).
[Binder2000] P. Binder, D. Abraimov, A. V. Ustinov, S. Flach, and Y. Zolotaryuk, Observation of breathers in Josephson ladders, Phys. Rev. Lett. 84, 745 (2000).
[Trias2000] E. Trías, J. J. Mazo, and T. P. Orlando, Discrete breathers in nonlinear lattices: Experimental detection in a Josephson array, Phys. Rev. Lett. 84, 741 (2000).
[Sato2003] M. Sato, B. E. Hubbard, A. J. Sievers, B. Ilic, D. A. Czaplewski, and H. G. Craighead, Observation of locked intrinsic localized vibrational modes in a micromechanical oscillator array, Phys. Rev. Lett. 90, 044102 (2003).
[Edler2004] J. Edler, R. Pfister, V. Pouthier, C. Falvo, and P. Hamm, Direct observation of self-trapped vibrational states in α-helices, Phys. Rev. Lett. 93, 106405 (2004).
[Schwarz1999] U. T. Schwarz, L. Q. English, and A. J. Sievers, Experimental generation and observation of intrinsic localized spin wave modes in an antiferromagnet, Phys. Rev. Lett. 83, 223 (1999).
[Martinez1999] P. J. Martínez, L. M. Floría, F. Falo, and J. J. Mazo, Intrinsically localized chaos in discrete nonlinear extended systems, Europhys. Lett. 45, 444 (1999).
[Martinez2003] P. J. Martínez, M. Meister, L. M. Floría, and F. Falo, Dissipative discrete breathers: periodic, quasiperiodic, chaotic, and mobile, Chaos 13, 610 (2003).
[Marin2001] J. L. Marín, F. Falo, P. J. Martínez, and L. M. Floría, Discrete breathers in dissipative lattices, Phys. Rev. E 63, 066603 (2001).
[Mazo2002] J. J. Mazo, Discrete breathers in two-dimensional Josephson-junction arrays, Phys. Rev. Lett. 89, 234101 (2002).
[Mazo2003] J. J. Mazo and T. P. Orlando, Discrete breathers in Josephson arrays, Chaos 13, 733 (2003).
[Flach2008b] S. Flach and A. V. Gorbach, Discrete breathers with dissipation, Lect. Notes Phys. 751, 289-320 (2008).
[Lazarides2006] N. Lazarides, M. Eleftheriou, and G. P. Tsironis, Discrete breathers in nonlinear magnetic metamaterials, Phys. Rev. Lett. 97, 157406 (2006).
[Marin1998] J. L. Marín, J. C. Eilbeck, and F. M. Russell, Localized moving breathers in a 2D hexagonal lattice, Phys. Lett. A 248, 225 (1998).
[Marin2001b] J. L. Marín, F. M. Russell, and J. C. Eilbeck, Breathers in cuprate-like lattices, Phys. Lett. A 281, 21 (2001).
[Fraile2016] A. Fraile, E. N. Koukaras, K. Papagelis, N. Lazarides, and G. P. Tsironis, Long-lived discrete breathers in free-standing graphene, Chaos, Solitons & Fractals 87, 262 (2016).
[Likharev1986] K. K. Likharev, Dynamics of Josephson Junctions and Circuits, Gordon and Breach, Philadelphia, 1986.
[deMoura2003] F. A. B. F. de Moura, M. D. Coutinho-Filho, E. P. Raposo, and M. L. Lyra, Delocalization in harmonic chains with long-range correlated random masses, Phys. Rev. B 68, 012202 (2003).
[Laptyeva2012] T. V. Laptyeva, J. D. Bodyfelt, and S. Flach, Subdiffusion of nonlinear waves in two-dimensional disordered lattices, Europhys. Lett. 98, 60002 (2012).
[Lazarides2015a] N. Lazarides and G. P. Tsironis, Nonlinear localization in metamaterials, in I. Shadrivov, M. Lapine, and Yu. S. Kivshar, editors, Nonlinear, Tunable and Active Metamaterials, pages 281-301, Springer International Publishing, Switzerland, 2015.
[Shanahan2010] M. Shanahan, Metastable chimera states in community-structured oscillator networks, Chaos 20, 013108 (2010).
[Knobloch2008] E. Knobloch, Spatially localized structures in dissipative systems: open problems, Nonlinearity 21, T45 (2008).
[Bergeon2008] A. Bergeon, J. Burke, E. Knobloch, and I. Mercader, Eckhaus instability and homoclinic snaking, Phys. Rev. E 78, 046201 (2008).
[Dean2015] A. D. Dean, P. C. Matthews, S. M. Cox, and J. R. King, Orientation-dependent pinning and homoclinic snaking on a planar lattice, SIAM J. Appl. Dyn. Syst. 14, 481 (2015).
[Taylor2010] C. Taylor and J. H. P. Dawes, Snaking and isolas of localised states in bistable discrete lattices, Phys. Lett. A 375, 14 (2010).
[Prilepsky2012] J. E. Prilepsky, A. V. Yulin, M. Johansson, and S. A. Derevyanko, Discrete solitons in coupled active lasing cavities, Opt. Lett. 37, 4600 (2012).
[Clerc2016] M. G. Clerc, S. Coulibaly, M. A. Ferré, M. A. García-Nustes, and R. G. Rojas, Chimera-type states induced by local coupling, Phys. Rev. E 93, 052204 (2016).
[Wiesenfeld1989] K. Wiesenfeld and P. Hadley, Attractor crowding in oscillator arrays, Phys. Rev. Lett. 62, 1335 (1989).
[Tsang1990] Kwok Yeung Tsang and K. Wiesenfeld, Attractor crowding in Josephson junction arrays, Appl. Phys. Lett. 56, 495 (1990).
[Pecora1990] L. M. Pecora and T. L. Carroll, Synchronization in chaotic systems, Phys. Rev. Lett. 64, 821 (1990).
[Heagy1994] J. F. Heagy, T. L. Carroll, and L. M. Pecora, Synchronous chaos in coupled oscillator systems, Phys. Rev. E 50, 1874 (1994).
[Aranson1992] I. Aranson, D. Golomb, and H. Sompolinsky, Spatial coherence and temporal chaos in macroscopic systems with asymmetrical couplings, Phys. Rev. Lett. 68, 3495 (1992).
[Hoffnagle1988] J. Hoffnagle, R. G. DeVoe, L. Reyna, and R. G. Brewer, Order-chaos transition of two trapped ions, Phys. Rev. Lett. 61, 255 (1988).
[Levya2012] I. Leyva, R. Sevilla-Escoboza, J. M. Buldu, I. Sendina-Nadal, J. Gomez-Gardenes, A. Arenas, Y. Moreno, S. Gomez, R. Jaimes-Reategui, and S. Boccaletti, Explosive first-order transition to synchrony in networked chaotic oscillators, Phys. Rev. Lett. 108, 168702 (2012).
[Maimistov2010] A. I. Maimistov and I. R. Gabitov, Nonlinear response of a thin metamaterial film containing Josephson junctions, Opt. Commun. 283, 1633-1639 (2010).
[Zueco2013] D. Zueco, C. Fernández-Juez, J. Yago, U. Naether, B. Peropadre, J. J. García-Ripoll, and J. J. Mazo, Supercond. Sci. Technol. 26, 074006 (2013).
[Pierro2015] V. Pierro and G. Filatrella, Fabry-Perot filters with tunable Josephson junction defects, Physica C 517, 37-40 (2015).
[Mohebbi2009] H. R. Mohebbi and A. H. Majedi, Shock wave generation and cut off condition in nonlinear series connected discrete Josephson transmission line, IEEE Trans. Appl. Supercond. 19 (3), 891-894 (2009).
{ "authors": [ "N. Lazarides", "G. P. Tsironis" ], "categories": [ "cond-mat.mes-hall", "nlin.CD", "nlin.PS", "physics.app-ph" ], "primary_category": "cond-mat.mes-hall", "published": "20170927102958", "title": "Multistable Dissipative Breathers and Novel Collective States in SQUID Lieb Metamaterials" }
The Deep Underground Neutrino Experiment – DUNE: the precision era of neutrino physics
for the DUNE collaboration
December 30, 2023
======================================================================================

We describe methods designed to determine the astrophysical parameters of quasars based on spectra coming from the red and blue spectrophotometers of the Gaia satellite. These methods principally rely on two already published algorithms, the weighted principal component analysis and the weighted phase correlation. The presented approach benefits from a fast implementation, an intuitive interpretation and strong diagnostic tools on the potential errors that may arise during predictions. The production of a semi-empirical library of spectra as they will be observed by Gaia is also covered and subsequently used for validation purposes. We detail the pre-processing that is necessary in order for these spectra to be fully exploitable by our algorithms, along with the procedures that are used in order to predict the redshifts of the quasars, their continuum slopes, the total equivalent width of their emission lines and whether these are broad absorption line (BAL) quasars or not. The performance of these procedures was assessed in comparison with the Extremely Randomized Trees learning method and was shown to provide better results on the redshift predictions and on the ratio of correctly classified observations, though the probability of detection of BAL quasars remains restricted by the low resolution of these spectra as well as by their limited signal-to-noise ratio. Finally, the triggering of some warning flags allows us to obtain an extremely pure subset of redshift predictions where approximately 99% of the observations come along with absolute errors that are below 0.1.

Keywords: methods: data analysis – quasars: general.

§ INTRODUCTION

Gaia is one of the cornerstone space missions of the Horizon 2000+ science program of the ESA, which aims to bring a consensus on the history and evolution of our Galaxy through the survey of a billion celestial objects <cit.>. This objective is achieved by capturing a `snapshot' of the present structure, dynamics and composition of the Milky Way by means of precise astrometric and photometric measurements of all the observed objects, as well as by the determination of the distances, proper motions, radial velocities and chemical compositions of a subset of these objects <cit.>. The on-board instrumentation is principally composed of two 1.45 × 0.50 m telescopes pointing in directions separated by a basic angle of 106.5°, the light acquisition being carried out by slowly rotating the satellite on its spin axis while reading each CCD column at the same rate as the objects cross the focal plane (i.e. in the so-called Time Delay Integration mode, hereafter TDI mode). The high astrometric precision of Gaia then comes from: (i) its lack of atmospheric perturbations; (ii) its large focal length of 35 m; (iii) the combination of the beams of light coming from both telescopes onto a single focal plane composed of a patchwork of 106 CCDs, which allows the positions of the objects coming from the two fields of view to be related with an extremely precise angular resolution; (iv) its scanning law, which maximizes the number of observed objects as well as the number of positional relations arising from the previous point <cit.>.
In addition, a high-resolution (R = λ/Δλ = 11,700) spectrometer, called the Radial Velocity Spectrometer (RVS), centred around the CaII triplet (845-872 nm), will allow the radial velocities of some of the most luminous stars (G_RVS < 16 mag) to be determined, while two low-resolution spectrophotometers, namely the Blue Photometer (BP) observing in the range 330-680 nm (13 < R < 85) and the Red Photometer (RP) observing in the range 640-1050 nm (17 < R < 26), will allow the objects having G < 20 mag to be classified and characterized. The interested reader is invited to read <cit.> and <cit.> for a more complete description of the Gaia spacecraft and of its payload. The previously described instrumentation, combined with the fact that Gaia is a full-sky survey where each object will be observed 70 times on average, is a unique opportunity to achieve some additional objectives. A non-exhaustive list of such applications is: a finer calibration of the whole cosmological distance ladder (i.e. through parallaxes, Cepheids & RR Lyrae stars, type-Ia supernovae, ...) <cit.>; a better understanding of stellar physics and evolution through the refinement of the Hertzsprung-Russell diagram <cit.>; the discovery of thousands of high-mass exoplanets <cit.>, as well as new probes regarding fundamental physics <cit.>. Amongst the most peculiar objects that Gaia will observe stand quasars, also termed quasi-stellar objects (QSOs) for historical reasons. Quasars are active galactic nuclei originating from the matter accretion occurring in the vicinity of supermassive black holes at cosmological distances. Due to their high luminosity (L > 10^12 L_⊙) and their large redshift (0 < z < 7), these objects play a key role in fixing the celestial reference frame used by Gaia, but they also have their own intrinsic interest in various cosmological applications: in the evolution scenarios of galaxies <cit.>; as discriminants over the various universe models and their parametrizations <cit.>; as tracers of the large-scale distribution of baryonic matter at high redshift <cit.>; or as a means to independently constrain the Hubble constant if these are gravitationally lensed <cit.>. The identification and characterization of the 500,000 quasars that Gaia is expected to observe take place in the framework of the Data Processing and Analysis Consortium (DPAC), which is responsible for the treatment of the Gaia data in a broad sense, that is: from data calibration and simulation to catalogue publication, through intermediate photometric/astrometric/spectroscopic processing, variability analysis and astrophysical parameters (APs) determination. The DPAC is an academic consortium composed of nine Coordination Units (CUs), each being in charge of a specific part of the data processing <cit.>. One of these, the CU8 `Astrophysical Parameters', is dedicated to the classification of the objects observed by Gaia and to the subsequent determination of their APs <cit.>. The present paper describes the algorithms that are to be implemented within the CU8 Quasar Classifier (QSOC) software module in order to determine the APs of the objects classified as QSOs by the CU8 Discrete Source Classifier (DSC) module while relying on their low-resolution BP/RP spectra, the collected APs aiming to be published within the upcoming Gaia data release 3 catalogue. The covered APs encompass: the redshift; the QSO type (i.e.
type I/II QSO or Broad Absorption Line QSO, hereafter BAL QSO); the slope of the QSO continuum and the total equivalent width of the emission lines. In the following, section <ref> explains the conventions we use along this paper. Section <ref> makes a brief review of the methods that were specifically developed in the field of this study. We present the production of a semi-empirical library of BP/RP spectra of QSOs used in order to train/test our models in section <ref>. The AP determination procedures are covered within section <ref>, while their performances are assessed in section <ref>. Some discussion on the latter takes place in section <ref>. Finally, we conclude in section <ref>.

§ CONVENTIONS

This paper uses the following notations: vectors are in bold italic, x⃗, x_i being the element i of the vector x⃗. Matrices are in uppercase boldface or are explicitly stated, i.e. X, from which the element at row (variable) i, column (observation) j will be denoted by X_ij. Flux will here denote the spectral power received per unit area (derived unit of W m^-2), while flux density will represent the received flux per wavelength unit (derived unit of W m^-3). If not stated otherwise, input spectral energy distributions (SEDs) will be considered to be expressed in terms of flux density, while BP/RP spectra will be expressed in terms of flux, by convention.

§ METHODS

One of the main characteristics of the Gaia data processing is the large amount of information that has to be handled (e.g. about 40 gigabytes of compressed scientific data are received from the satellite each day) and the consequent need for fast and reliable algorithms in order to reduce those data. These last requirements led the CU8 scientists to use techniques coming nearly exclusively from the field of supervised learning methods, whose underlying principle is to guess the APs of each observed object, which are unknowns, based on the interpolation of the APs of some similar template objects <cit.>. These methods have been proven to be fairly fast and reliable but often consist of black-box algorithms having no physical significance and only basic diagnostic tools in order to identify the potential problems that may occur during the AP retrieval. This last point is particularly crucial in the case of medium-to-low quality observations, like BP/RP spectra, or in the case where the problem is itself prone to error, like the existing degeneracy in the redshift determination of low signal-to-noise ratio (SNR) QSOs <cit.>. With these constraints and shortcomings in mind, we have developed two complementary algorithms that are specifically designed to gather the quasar APs within the Gaia mission based on the object BP/RP spectra, while providing a clear diagnostic tool and ensuring an execution time that is limited to O(N log N) floating point operations (i.e. conventionally considered as `fast' algorithms).

§.§ Weighted principal component analysis

Principal component analysis (PCA) aims to extract a set of templates, the principal components, from a set of observations while retaining most of its variance <cit.>. Mathematically, it is equivalent to finding a decomposition of the covariance matrix associated with the input data set,

σ^2 = P D P^T,

that is such that P is orthogonal and D diagonal, and for which

D_ii ≥ D_jj, ∀ i < j.

The first columns of P are then the searched principal components.
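For reference, the classical (unweighted) decomposition above can be sketched in a few lines; the function name and the column-wise observation layout are our own conventions.

import numpy as np

def pca_templates(X, n_components):
    """Classical principal components of the observations in the columns of X,
    through the SVD of the covariance matrix (singular values come out sorted
    in decreasing order, matching the ordering condition above)."""
    Xc = X - X.mean(axis=1, keepdims=True)        # centre each variable
    cov = Xc @ Xc.T / (X.shape[1] - 1)            # sigma^2 = P D P^T
    P, D, _ = np.linalg.svd(cov)
    return P[:, :n_components], D[:n_components]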
A decomposition such as the one of equation <ref> is straightforwardly given by the singular value decomposition (SVD) of the covariance matrix <cit.>. Consider now the building of a set of rest-frame quasar templates based on a spectral library having a finite precision on the fluxes and a limited wavelength coverage. From the definition of the redshift, we will have that the observed wavelength, λ_obs, can be related to the rest-frame wavelength, λ_rest, through

λ_obs = (z + 1) λ_rest,

and as a consequence, every quasar within the input library will cover a specific rest-frame wavelength range that depends on its redshift. Furthermore, the measurement of the quasar fluxes often comes along with an estimation of their associated uncertainties originating, for example, from the Poisson nature of the photon counting; from the CCD readout noise; from the sky background subtraction or from spectra edge effects. These uncertainties are not taken into account within the classical PCA implementation. In <cit.>, we solved the previously mentioned issues by considering the use of a weighted covariance matrix inside equation <ref>. For this purpose, we defined the weighted covariance of two discrete variables, x⃗ and y⃗, having weights respectively given by w⃗^x and w⃗^y and weighted mean values given by x̅ and y̅, as

cov(x⃗, y⃗) = ∑_i (x_i - x̅) w_i^x w_i^y (y_i - y̅) / ∑_i w_i^x w_i^y.

The suggested implementation relies on two spectral decomposition methods, namely the power iteration method and the Rayleigh quotient iteration, that allow us to gain flexibility and numerical stability as well as lower execution times[Under the condition that the number of observations within the input data set is much larger than the number of variables.] when compared to alternative weighted PCA methods <cit.>.

§.§ Weighted phase correlation

The redshift has a particular importance among the whole set of quasar APs because any error committed on the latter would make the other APs diverge. It is then critical to have the most precise estimation of it, along with a strong diagnostic tool in order to flag the insecure predictions. A technique fulfilling these requirements stands in the phase correlation algorithm <cit.>, whose goal is to find the phase at which an input signal and a set of templates match at best in a chi-squared sense. For reasons already enumerated within section <ref>, we will consider here a weighted version of the previously mentioned algorithm. We are then searching for the shift at which an input spectrum, s⃗, associated with a weight vector, w⃗, and a set of templates, T, match at best in a weighted chi-squared sense. Mathematically, it is equivalent to finding the shift, Z, for which

χ^2(Z) = ∑_i w_i^2 ( s_i - ∑_j a_j(Z) T_(i+Z) j )^2

is minimal, given that a⃗(Z) are the linear coefficients minimizing equation <ref> for a specific shift. In <cit.>, we showed that the latter equation can be rewritten as

χ^2(Z) = ∑_i w_i^2 s_i^2 - ccf(Z),

where ccf(Z) is the so-called cross-correlation function (CCF) at shift Z, which can be evaluated for all Z in O(N log N) floating point operations, N being the number of samples we used. Given that the first term of equation <ref> is independent of the shift, we will simply have that the minimum of equation <ref> is associated with the maximum of the CCF.
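The χ²/CCF relation above can be sketched for the single-template case, in which the optimal amplitude at each shift has a closed form; the multi-template case additionally solves a small linear system for a⃗(Z) at every shift. All names below are our own, and the inputs are assumed to be zero-padded on a common grid.

import numpy as np

def xcorr(a, b):
    """Circular cross-correlation c[Z] = sum_i a[i] * b[i + Z] through the FFT."""
    n = len(a)
    return np.fft.irfft(np.conj(np.fft.rfft(a, n)) * np.fft.rfft(b, n), n)

def weighted_ccf(s, w, t):
    """ccf(Z) for a single template t: the optimal amplitude at shift Z is
    a(Z) = num(Z) / den(Z) and ccf(Z) = num(Z)^2 / den(Z), with
    num(Z) = sum_i w_i^2 s_i t_(i+Z) and den(Z) = sum_i w_i^2 t_(i+Z)^2."""
    num = xcorr(w ** 2 * s, t)
    den = xcorr(w ** 2, t ** 2)
    return num ** 2 / np.maximum(den, 1e-30)

# Best shift (and hence redshift, once on a logarithmic wavelength grid):
# Z_best = np.argmax(weighted_ccf(s, w, t))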
Practically, both s⃗ and T must be sampled on a uniform logarithmic wavelength scale in order for the redshift to turn into a simple linear shift (i.e. log λ_obs = log(z+1) + log λ_rest) and must be extended and zero-padded such as to deal with the periodic nature of the phase correlation algorithm. A sub-sampling precision on the shift can then be gained by fitting a quadratic curve in the vicinity of the optimum of the CCF, while the curvature of this quadratic curve will be used as an approximation of the uncertainty associated with the found shift. The described weighted phase correlation algorithm relies on the assumption that the most probable redshift is associated with the maximal peak of the CCF, which is not always verified in the case of QSOs. The reason for this is twofold:

* The highest peak of the CCF may not always lead to a physical solution, like the omission of some characteristic emission lines (e.g. Lyα λ121; Mgii λ279; or Hα λ656 nm) or the fit of a `negative' emission line coming from the presence of matter in the line of sight towards the observed QSO. The origin of this issue mainly stands in the imperfections of the templates we used, as well as in the assumption we made that quasar spectra can be modelled as a linear combination of these templates.

* In the case of low-SNR spectra, it may also happen that the signal of some emission lines starts to be flooded within the noise, such that these will not be recognized as a genuine signal but rather as variance coming from noise.

As a result, ambiguities can emerge within the CCF (i.e. multiple equivalent maxima) and hence within the redshift determination. In order to identify these sources of errors, we defined two complementary score measures associated with each redshift candidate: (i) χ_r^2(z), defined as the ratio of the value of the CCF evaluated at z to the maximum of the CCF, and (ii) Zscore(z), defined as

Zscore(z) = ∏_λ [ 1/2 ( 1 + erf( e_λ / (σ(e_λ) √2) ) ) ]^I_λ,

where e_λ is the mean value of the continuum-subtracted emission line standing at rest-frame wavelength λ if we consider the observed spectrum to be at redshift z; σ(e_λ) is the associated uncertainty and I_λ is the theoretical intensity of the emission line standing at λ, normalized such that all the covered emission line intensities sum up to one. Equation <ref> can then be seen as the weighted geometric mean of a set of normal cumulative distribution functions of mean zero and standard deviations σ(e_λ), evaluated in e_λ. Table <ref> summarizes the various emission lines and theoretical intensities we used in the context of the present study. We can already notice that these two score measures can easily highlight the sources of errors that may occur within the CCF peak selection, namely the choice of an unphysical solution and the choice of an ambiguous candidate, respectively through a low Zscore and through a low absolute difference between the chosen candidate's χ_r^2 and the one from another candidate. These will constitute in fine strong diagnostic tools regarding our implementation.
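A sketch of the Zscore computation follows. The line list, intensities and window width below are illustrative placeholders (the adopted values are those of the corresponding table), and the per-line flux averaging is a simplified reading of the procedure.

import numpy as np
from scipy.special import erf

LINES_NM = np.array([121.6, 154.9, 190.9, 279.9, 486.1, 656.3])   # placeholder
INTENSITY = np.array([0.30, 0.20, 0.15, 0.15, 0.10, 0.10])        # placeholder

def zscore(z, lam_nm, cont_sub_flux, sigma, half_width_nm=5.0):
    """Weighted geometric mean of normal CDFs evaluated at the mean
    continuum-subtracted flux of each covered emission line."""
    log_score, wsum = 0.0, 0.0
    for lam0, inten in zip(LINES_NM * (1.0 + z), INTENSITY):
        sel = np.abs(lam_nm - lam0) < half_width_nm      # crude line window
        if not sel.any():
            continue                                     # line not covered
        e = cont_sub_flux[sel].mean()                    # e_lambda
        s = np.sqrt(np.sum(sigma[sel] ** 2)) / sel.sum() # sigma(e_lambda)
        cdf = 0.5 * (1.0 + erf(e / (s * np.sqrt(2.0))))
        log_score += inten * np.log(max(cdf, 1e-300))
        wsum += inten                 # renormalize over the covered lines only
    return np.exp(log_score / wsum) if wsum > 0.0 else 0.0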
§ SEMI-EMPIRICAL BP/RP SPECTRAL LIBRARY BUILDING

Similarly to supervised learning methods, the undertaken approach is based on the availability of a learning library of BP/RP spectra for which the various APs are known. Such a library being non-existent at the present time, we had to convert an already released spectral library of QSOs into BP/RP spectra according to the most up-to-date instrument model. We focused, for this purpose, on the twelfth data release of the Sloan Digital Sky Survey quasar catalogue (hereafter DR12Q) <cit.>. The choice of this specific catalogue comes from: (i) the fact that each spectrum in it was visually inspected, yielding extremely secure APs; (ii) the large number of 297,301 QSOs it contains (amongst which 29,580 BAL QSOs); (iii) its medium resolution of 1300 < R < 2500; (iv) its spectral coverage, which is comparable to the one of the Gaia BP/RP spectrophotometers (360 < λ < 1000 nm). This spectral library will then have to be extended such as to match the wavelength range covered by the Gaia BP/RP spectra and be subsequently convolved with the instrumental response of the BP/RP spectrophotometers such as to provide the final library.

§.§ Spectra extrapolation

Besides the fact that the DR12Q spectra have a narrower wavelength coverage (360 < λ < 1000 nm) when compared to the BP/RP spectra (330 < λ < 1050 nm), we also have to mention that the regions where λ < 380 nm and where λ > 925 nm often tend to be unreliable because of spectrograph edge effects, while some other inner regions might be discarded because of bad CCD columns, cosmic rays, significant scattered light or sky background subtraction problems, for example <cit.>. In order to solve these shortcomings, we have extracted a set of rest-frame PCA templates out of the DR12Q library that were later fitted to each individual spectrum as a means to extrapolate them. Note that since the DR12Q spectra are already sampled on a logarithmic wavelength scale (at a sampling rate of Δlog_10 λ = 10^-4), no re-sampling will be needed before extrapolation. Raw spectra are not readily exploitable; rather, they first have to be pre-processed such as to get rid of contaminating signals and to have some insight about their usability. For this purpose, we used a procedure that is identical to the one described in <cit.>. We will hence concentrate here on the results of this procedure rather than on the underlying implementation details. We will then get, for each spectrum: (i) the set of deviant points coming from a k-σ clipping algorithm applied to the high-frequency components of the spectrum as well as from the removal of the night-sky emission lines and spectrograph edge effects; (ii) an empirical estimation of the QSO continuum coming from the low-frequency components of a multi-resolution analysis of the spectrum; (iii) a smoothed version of the provided spectrum, which we will consider here as being noiseless; (iv) an evaluation of its SNR coming from the ratio of the variance that is present within the noiseless and continuum-subtracted spectrum over the variance that can be attributed to noise (i.e. raw fluxes from which we subtracted the deviant points, the QSO continuum and the noiseless spectrum). Figure <ref> illustrates the results of the previously described procedure and provides a self-explanatory example of the necessity of pre-processing our input spectra. Spectra having an SNR larger than one are then normalized such as to have a weighted norm of one and are subsequently set on a common logarithmic rest-frame wavelength scale. These will constitute the input data set upon which we will extract our PCA templates.
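The SNR criterion above can be approximated as follows; note that a Savitzky-Golay filter stands in here for the multi-resolution analysis used in the actual procedure, and that the deviant-point removal is omitted for brevity.

import numpy as np
from scipy.signal import savgol_filter

def estimate_snr(flux, continuum, window=45, order=3):
    """SNR proxy: variance of the smoothed, continuum-subtracted spectrum over
    the variance of the high-frequency residuals attributed to noise."""
    smooth = savgol_filter(flux, window_length=window, polyorder=order)
    return np.var(smooth - continuum) / np.var(flux - smooth)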
We chose to consider the retrieval of the BAL QSO templates separately from the type I/II QSOs as a way to ensure that the characteristic features of the BAL QSOs are correctly reproduced within our extrapolated spectra. Doing otherwise would have required a much larger number of PCA components to be fitted in order to accurately model those features (the latter being not seen in the vast majority of QSOs, they would have been omitted from the dominant PCA components). Also, we required the continuum to rely on an empirical basis such as to restrain, as much as possible, any unphysical behaviour within our extrapolation. We will consequently subtract each empirical continuum from each input spectrum as a way to extract the PCA components from both these subtracted spectra as well as from the continua themselves. By way of comparison, continuum templates are frequently taken as a combination of power-law and exponential functions <cit.> that often succeed in reproducing the observed spectrum but that tend to diverge over the unobserved wavelengths. Consequently, four sets of PCA templates were built based upon the algorithm described in section <ref>: one set of templates for the type I/II QSO emission lines; one similar set associated with the BAL QSOs; and two corresponding sets of continuum templates. Figure <ref> provides the mean observation and the first two principal components for these four sets of templates. For the sake of completeness, let us also mention that, during PCA retrieval, weights were taken as the inverse standard deviation on the fluxes regarding the emission-line PCAs, while these were simply set to one if the computed continuum was associated with some observed fluxes and zero otherwise. Finally, we used 15 emission-line templates (the mean observation and the first 14 PCA components) for both the fit of the type I/II and BAL observations, along with 5 continuum templates for the type I/II QSOs and 6 continuum templates for the BAL QSOs. These fits ultimately provide the extrapolated spectra, as illustrated in figure <ref>. The number of templates we used allows us to explain 68.27% of the weighted variance[Weighted variance naturally results from equation <ref> in the case where x⃗ = y⃗, while having expected values x̅ = 0 in the case of the weighted variance of the input data set, or equal to the optimal linear combination of the templates in the case of the explained weighted variance.] that is present within the type I/II emission lines and 99.82% of their continuum variance. These become respectively 66.1% and 99.85% in the case of BAL QSOs. Even if apparently low, these ratios practically reflect the fact that the spectra we used come along with noise that will not be grabbed by the dominant PCA components. Consequently, the produced extrapolation will be considered here as being noiseless.

§.§ BP/RP instrumental convolution

The optical system of the Gaia BP/RP spectrophotometers consists, for each of them, of six mirrors, a dispersing prism and a set of dedicated CCDs (either blue or red-enhanced) that can be modelled as

S(x) = ∫ N_λ R_λ L_λ(x - x_λ) dλ,

where S(x) is the dispersed flux at the one-dimensional co-moving[Sliding at the same rate as the TDI-mode drift along each CCD column.] focal plane position x; N_λ is the input SED at the observed wavelength λ; R_λ is the global instrumental response at λ and L_λ(x - x_λ) is the monochromatic line spread function (LSF) standing at λ and being evaluated at x - x_λ, x_λ being the co-moving focal plane position associated with λ.
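A discretized sketch of the dispersion model above is given below; the Gaussian LSF, the bell-shaped response and the linear dispersion relation are toy assumptions standing in for the actual instrument model.

import numpy as np

def dispersed_flux(x, wl, sed, response, lsf, x_of_wl):
    """Discretized S(x) = int N_lam R_lam L_lam(x - x_lam) dlam on a grid."""
    return np.sum(sed * response * lsf(wl, x - x_of_wl(wl)) * np.gradient(wl))

# Toy ingredients, for illustration only:
wl = np.linspace(330.0, 680.0, 1024)                     # BP-like grid [nm]
sed = np.ones_like(wl)                                   # flat input SED
response = np.exp(-0.5 * ((wl - 500.0) / 120.0) ** 2)    # bell-shaped R_lam
lsf = lambda lam, u: np.exp(-0.5 * (u / 1.5) ** 2) / (1.5 * np.sqrt(2.0 * np.pi))
x_of_wl = lambda lam: 0.17 * (lam - wl[0])               # linear dispersion [pixel]
S = np.array([dispersed_flux(x, wl, sed, response, lsf, x_of_wl)
              for x in np.arange(0.0, 60.0)])            # one sample per pixel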
In more details, the global instrumental response R_λ encompasses: the mirrors reflectivity, the attenuation that is due to particulate and molecular contamination, the attenuation coming from the mirror roughness, the prism transmissivity and the CCD quantum efficiency at the observed wavelength λ. Note that in the following, we will consider a mean instrumental model averaged over each field-of-view and over each row of CCD within the focal plane given that our goal is to simulate end-of-mission (combined) spectra that will consist in an aggregation of individual (epoch) observations. The BP/RP sampled fluxes, s⃗, will then be given bys_i = ∫_-0.5^+0.5 S(x + i/Nover) dx,where Nover is the oversampling we choose to use. This oversampling arise from the higher SNR that is gained by the combination of the epoch spectra which, in turn, provide the opportunity to reach a higher sampling rate when compared to the initial 60 pixels provided by the BP/RP acquisition window (e.g. through flux interpolation). A common consensus within the CU8 is to consider an oversampling of Nover = 8, which results in 480 samples for each of the BP and RP spectra. This convention will be adopted here. The instrument model described so far is not able to deal with the various sources of noise that will contaminate our actual observations. Instead, the spectra produced through equations <ref> and <ref> will consist in the approximated noise-free counterparts of what Gaia will observe. Extending the aperture photometry approach developed in <cit.>, we can still have an estimation of the noise variance that is associated with each sampled flux, s_i, asσ_i^2 = m^2 σepoch^2/Nepoch / Nover + σcal^2,where m is an overall mission safety margin designed to take into account the potentially unknown sources of errors (m = 1.2, by convention); σepoch^2 is the variance of the noise associated with s_i if the latter was coming from a single epoch observation; Nepoch is the number of epoch observations used to compute the combined spectra and σcal^2 is the uncertainty arising from the flux internal calibration. The scaling of the single epoch variance, σepoch^2, reflects the assumption we made that each flux within the combined spectra comes from the mean value of a set of Nepoch / Nover epoch fluxes. In extreme cases, for example, we will have that each combined flux is averaged over the whole epoch observations (i.e. in the case of Nover = 1) while in the case of Nover = Nepoch, each flux within the combined spectra can be seen as gathered directly from the epoch spectra. Next, the variance coming from the uncertainties in the flux internal calibration, σcal^2, is taken to be equal to the inner product of the fluxes that are present within the pixels surrounding each sample with a linear function that is inversely proportional to the global instrumental response, R_λ, evaluated in those pixels. Its objective being to take into account the fact that the precision on the flux calibration will principally depend on the instrumental response in the vicinity of the pixel of interest. Because of the intricacy that is inherent to the modelling of these calibration errors, the latter were voluntarily tuned such as to stand within a moderate range of values. 
We can then decompose the epoch variance, σ_epoch^2, into variance coming from photon and CCD noise, σ_flux^2, and variance coming from the uncertainties in the background estimation, σ_bg^2, as

σ_epoch^2 = ( σ_flux^2 + σ_bg^2 ) / τ^2,

where τ is the effective CCD exposure time, the latter being dependent on the G magnitude of the objects based on the activation of bypasses within some specific CCD columns, which aim to prevent luminous objects from saturating <cit.>. Both terms within equation <ref> can then be simply extended as

σ_flux^2 = (s_i + b) τ + r^2,
σ_bg^2 = ( b τ + r^2 ) / N_bg,

where r is the total CCD detection noise including, amongst others, the CCD readout noise and the CCD dark noise; b is the background flux we subtracted from our observation, which we will consider here as a constant based on a typical sky-background surface brightness; and N_bg is the number of pixels we used in order to estimate b, which is taken here as being equal to the width of the BP/RP acquisition window (i.e. N_bg = 12 pixels). Being now able to model the entire instrumental response along with the associated uncertainties, we first chose to normalize our input spectra to G magnitudes where we expect QSOs to be observed, that is at G = {18, 18.5, 19, 19.5, 20}. This normalization allows us to study the behaviour of the implemented methods under an increasing level of noise. BP/RP spectra were then produced based on the most up-to-date instrument model coming from tests carried out by EADS Astrium (later renamed Airbus Defence and Space) during the commissioning phase of the satellite. We have to note that this instrument model still has to be updated in order to match the actual operational conditions of the satellite, even if the latter are not expected to vary too much from the model we used. Noisy spectra can then be obtained by adding the appropriate random Gaussian noise (i.e. having a variance of σ_i^2) to each of the (noise-free) spectral fluxes, s_i. An example of produced BP/RP spectra is illustrated in figure <ref>.
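The complete noise model can be sketched as follows; b, r, τ and σ_cal are instrument-dependent quantities that we leave as inputs, while the quoted constants (m = 1.2, N_bg = 12, N_over = 8) come from the text.

import numpy as np

def flux_uncertainty(s_i, tau, b, r, sigma_cal,
                     n_bg=12, n_epoch=70, n_over=8, m=1.2):
    """Standard deviation of a combined-spectrum flux sample: photon + CCD
    detection noise, background-subtraction noise, epoch averaging and the
    overall mission safety margin."""
    var_flux = (s_i + b) * tau + r ** 2          # photon + detection noise
    var_bg = (b * tau + r ** 2) / n_bg           # background estimation noise
    var_epoch = (var_flux + var_bg) / tau ** 2   # single-epoch variance
    return np.sqrt(m ** 2 * var_epoch / (n_epoch / n_over) + sigma_cal ** 2)

# Noisy realization of a noise-free combined flux vector s:
# s_noisy = s + np.random.normal(0.0, flux_uncertainty(s, tau, b, r, sig_cal))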
§ ASTROPHYSICAL PARAMETER DETERMINATION

The bell shape of the BP/RP spectra prevents us from using both algorithms described in section <ref>. This is even more damageable given the fact that these spectra are not sampled on a logarithmic wavelength scale and that the wavelength coverage of each pixel is not uniform. In order to tackle these problems, a resampling of the spectral fluxes and uncertainties was first performed using cubic spline interpolation <cit.>. The uniform logarithmic sampling we used, Δlog_10 λ = 1.75 × 10^-4, ensures a sampling on the redshift that is better than 0.003, which is comparable to human expertise, while producing a reasonable amount of 7.7 × 10^3 sampled points in the final templates (assuming that z ≤ 6). Note that such a logarithmic interpolation will obviously introduce covariances between the resulting samples. Nevertheless, given that the specific resampling we used stands in a wavelength range where the corresponding logarithmic function is approximately linear, and that both the number of samples within the BP/RP spectra and within the synthesized spectra are of the same order of magnitude, these covariances will be restricted to close neighbouring samples while having moderate magnitudes. They will consequently have a limited impact on the resulting predictions and will hence be ignored in the following. The division of these interpolated spectra by a flat BP/RP spectrum (i.e. coming from a flat input SED) concurrently fixes the bell-shape issue, which is mostly due to the global instrumental response, as well as the problem of the non-uniform wavelength coverage of the pixels, which is due to the inconstant wavelength dispersion function of the BP/RP spectrophotometers. Accordingly, these flat-fielded spectra can then be considered as being approximately proportional to the convolution of the input SED by the LSF over a linear wavelength scale, plus noise. Now, the resulting spectra will be disjoint although they are overlapping, which would yield a tremendous loss of efficiency if these were to be considered individually. A more interesting solution stands in the linear combination of the flat-fielded BP/RP spectra according to a given weighting scheme, such as to produce a single synthesized spectrum. In more detail, if we consider s_λ^bp and s_λ^rp as the interpolated fluxes of the BP and RP spectra; σ_λ^bp, σ_λ^rp as their associated uncertainties; F_λ^bp, F_λ^rp as their corresponding flat BP/RP fluxes; and w_λ^bp, w_λ^rp as the weighting coefficients used to join these spectra, then the synthesized fluxes, f_λ, can be represented as

f_λ = w_λ^bp s_λ^bp / F_λ^bp + w_λ^rp s_λ^rp / F_λ^rp,

and their associated uncertainties, σ_λ, as

σ_λ = [ (w_λ^bp σ_λ^bp / F_λ^bp)^2 + (w_λ^rp σ_λ^rp / F_λ^rp)^2 ]^1/2.

In the context of the present study, the weighting coefficients we selected are given by

w_λ^rp = 1 - w_λ^bp = T_620^650(λ),

where

T_λ0^λ1(λ) = 1/2 tanh[ 2π ( (λ - λ0)/(λ1 - λ0) - 1/2 ) ] + 1/2

is the hyperbolic tangent transition function from λ0 to λ1. The specific weighting used in equation <ref> ensures a smooth transition between the flat-fielded BP and RP spectra while keeping most of their significant regions. A continuum spectrum was then gathered and subsequently subtracted from each synthesized spectrum using a procedure similar to the one described in section <ref>. An illustrative synthesized spectrum produced through equations <ref> and <ref> is shown in figure <ref>.
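The combination step can be sketched as follows, assuming all inputs were already interpolated onto the common logarithmic grid; the function names are our own.

import numpy as np

def transition(lam, lam0, lam1):
    """Hyperbolic tangent transition function from lam0 to lam1."""
    return 0.5 * np.tanh(2.0 * np.pi * ((lam - lam0) / (lam1 - lam0) - 0.5)) + 0.5

def synthesize(lam, s_bp, sig_bp, flat_bp, s_rp, sig_rp, flat_rp):
    """Join the flat-fielded BP and RP spectra into a single synthesized
    spectrum with the weighting scheme defined above (lam in nm)."""
    w_rp = transition(lam, 620.0, 650.0)
    w_bp = 1.0 - w_rp
    f = w_bp * s_bp / flat_bp + w_rp * s_rp / flat_rp
    sigma = np.hypot(w_bp * sig_bp / flat_bp, w_rp * sig_rp / flat_rp)
    return f, sigma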
Having now exploitable input spectra, we decided to split our input spectral library into two parts, the first part being used as a `learning set' (LS1) in order to produce the PCA templates upon which we will base the analysis of the second part, the `test set' (TS1). Conversely, the second part will subsequently be used as a learning set (LS2) for the analysis of the first part (TS2). This two-fold cross-validation procedure will finally provide the APs for the whole set of observations as if these were gathered based upon a totally independent data set. The splitting criterion we used relies on the uniform selection of half the input library sorted according to the QSO redshifts, such as to ensure an even repartition of the latter amongst these two data sets. Two sets of rest-frame PCA templates were then produced for each of the learning sets according to the QSO type (type I/II or BAL). These were based on the noise-free, continuum-subtracted and synthesized BP/RP spectra having both un-normalized G magnitudes and SNR > 1 within the extrapolation procedure. From the noiseless nature of these input spectra, we had to select custom weights associated with the synthesized fluxes, f_λ, as

w_λ = T_330^380(λ) T_1050^925(λ) [ 0.7 (2 T_620^650(λ) - 1)^2 + 0.3 ],

where the first two terms practically reflect the limited confidence we set on the spectra edges, due, for example, to the potential inaccuracies in the spectra extrapolation or to low fluxes within the flat BP/RP spectra leading to numerical instabilities, and where the last term stands for the uncertainties introduced in the BP/RP spectra combination. Figure <ref> provides the mean observation and first two principal components of the synthesized spectra of the type I/II and BAL QSOs for both learning sets. It is worth mentioning that, at first glance, it might seem misleading from the point of view of the validation process to retrieve PCA components from synthesized spectra which are themselves based on the linear combination of templates. Nevertheless, let us first remind the reader that it is one of our assumptions that any (noiseless) DR12Q quasar spectrum can be fairly represented as the linear combination of a sufficient number of such templates. Secondly, we have to note that BP/RP spectra come from the instrumental convolution of the extrapolated spectra in the observed wavelengths. The latter being then set on rest frame, we will have that the resulting PCA components will have to reflect the averaged convolution applied over the whole observed wavelengths. Finally, this convolution will have the effect of smoothing out the high-frequency components from the extrapolated spectra. These being concurrently the main source of unexplained variance within the DR12Q templates, we expect the produced library to be consistent regarding a hypothetical real noise-free BP/RP spectral library. The extracted PCA components were then used in order to produce their CCF against the noisy synthesized BP/RP spectra of magnitude G = {18, 18.5, 19, 19.5, 20} through the algorithm described in section <ref>. The redshift identification is based on the CCF peak having a corresponding redshift in the range 0 < z < 6; a χ_r^2(z) > 0.85; and a minimal scaled distance from the ideal point (1,1) ∈ (χ_r^2(z), Zscore(z)), as given by

d(z) = √( (0.8 [1 - χ_r^2(z)])^2 + (0.2 [1 - Zscore(z)])^2 ).

The selected redshift was then flagged for potential inaccuracies in the peak selection according to the values provided within Table <ref>. The constants used in the peak selection procedure as well as within Table <ref> are purely empirical and based on a visual inspection procedure. The optimal number of PCA components to use was chosen as a trade-off between the ratio of explained variance; the ability of the templates to model BAL QSOs; and the potential overfit of the observations coming from the use of a too large number of templates. This overfitting is characterized by frequent ambiguities in the corresponding CCFs that eventually result in a large number of erroneous redshift predictions (though these will have a non-zero warning flag). Tests performed on each learning set show that the use of 3 PCA components is a satisfactory compromise between these constraints, which ultimately leads to a ratio of explained variance of 94.6% (LS1) and 93.42% (LS2) regarding the type I/II QSOs and of 86.22% (LS1) and 88.14% (LS2) regarding the BAL QSOs.
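The candidate filtering and d(z) selection rule described above translate directly into code; the function name is our own.

import numpy as np

def select_redshift(z, chi2r, zsc):
    """Keep candidates with 0 < z < 6 and chi2r > 0.85, then return the one
    minimizing the scaled distance d(z) to the ideal point (1, 1)."""
    z, chi2r, zsc = map(np.asarray, (z, chi2r, zsc))
    ok = (z > 0.0) & (z < 6.0) & (chi2r > 0.85)
    if not ok.any():
        return None                                   # no secure candidate
    d = np.sqrt((0.8 * (1.0 - chi2r[ok])) ** 2 + (0.2 * (1.0 - zsc[ok])) ** 2)
    return float(z[ok][np.argmin(d)])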
The BAL QSO identification is based on the comparison of the value of the CCF peak we selected using type I/II templates, y(z), against the value of the CCF peak selected from BAL templates, yb(zb), through

pb = yb(zb) / [ yb(zb) + y(z) ],

where z is the redshift selected from type I/II templates and zb is the redshift selected from BAL templates. Though a straight classification between these two types of QSOs is sometimes practical, it is commonly motivated by the specific needs of the end-users. As an example, studying the physics of BAL QSOs will require an extremely pure subset of observations (e.g. with pb > 0.7), while a re-observation survey can easily deal with a `hint' on the BAL nature of the observed QSOs (e.g. with pb > 0.5). Still, the frequent discrepancies observed between the redshifts predicted based upon these two kinds of templates compel us to use such a classification. Accordingly, we will consider, in the following, zb as the default redshift whenever pb > 0.55 and G ≤ 19, while keeping pb as a discriminant value for further application-specific classification. The effect of the thresholding of this discriminant value on the resulting ratio of correctly/incorrectly classified observations will be deferred to section <ref>. The slope of the QSO continuum corresponds to the spectral index, α_ν, as defined by

f_λ ∝ λ^(-α_ν - 2),

or, more compactly expressed in terms of frequency, ν, as f_ν ∝ ν^α_ν. This index is obtained from the fit of a power-law function to the observations over wavelength regions that are commonly devoid of emission/absorption features, that is: 145-148; 170-180; 200-260; 325-470 and 525-625 nm. The exact procedure employs a k-sigma clipping algorithm (with k = 3, σ = 1) such as to underweight iron emission blends as well as other fortuitous absorption/emission structures by a factor 100. This procedure was applied to both the input DR12Q spectra and the synthesized BP/RP spectra as a way to fairly compare the resulting predictions while discarding any bias that could be due to the differences in the used algorithms. Because of their high numerical complexities, non-linear optimization algorithms were not used for the least-squares solution of equation <ref>. Rather, each power-law function was fitted through a linear regression of the wavelengths against the fluxes by taking the logarithm of both sides of the latter equation. Although this choice seems to be harmless from the point of view of the DR12Q spectra, synthesized spectra will have to cope with the large number of discarded samples coming from the frequent negative fluxes encountered within the spectra edges. These discarded samples leading to a large bias towards positive fluxes (see figure <ref>, for example), we consequently decided to reject samples standing outside the observed region 350-950 nm for these specific spectra. Finally, the total equivalent width of the emission lines can be represented as

W = ∫ e_λ / c_λ dλ,

where c_λ is the continuum slope we fitted based upon equation <ref> and e_λ are the emission-line fluxes, the latter being set to f_λ - c_λ if λ belongs to an emission-line region and to zero otherwise. The identification of these emission-line regions is based on the parts of the spectrum where smoothed fluxes coming from a 45-point wide Savitzky-Golay filtering <cit.> stand higher than the continuum.
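A sketch of the continuum-slope and equivalent-width computations follows. Our reading of the clipping parameters (a fixed σ = 1 threshold on the log-flux residuals, with a weight of 0.1 so that outliers are underweighted by a factor 100 in the least squares) is a simplifying assumption, as is the emission-line mask `in_line` left as an input.

import numpy as np

CONT_WINDOWS_NM = [(145, 148), (170, 180), (200, 260), (325, 470), (525, 625)]

def continuum_slope(lam_rest, flux, n_iter=5, k=3.0, sigma=1.0):
    """Linearized fit of f_lam ~ lam^(-alpha_nu - 2) over the feature-free
    rest-frame windows, with iterative k-sigma underweighting of outliers."""
    sel = np.zeros(lam_rest.shape, dtype=bool)
    for lo, hi in CONT_WINDOWS_NM:
        sel |= (lam_rest >= lo) & (lam_rest <= hi)
    sel &= flux > 0.0                                 # log() requires f > 0
    x, y = np.log(lam_rest[sel]), np.log(flux[sel])
    w = np.ones_like(x)
    for _ in range(n_iter):
        slope, icpt = np.polyfit(x, y, 1, w=w)
        res = y - (slope * x + icpt)
        w = np.where(np.abs(res) > k * sigma, 0.1, 1.0)   # 0.1^2 = 1/100
    return -slope - 2.0                               # alpha_nu

def total_equivalent_width(lam, flux, cont, in_line):
    """W = integral of (f_lam - c_lam) / c_lam over the emission-line regions."""
    e = np.where(in_line, flux - cont, 0.0)
    return np.trapz(e / cont, lam)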
In agreement with what was previously done, equation <ref> was integrated over the interval 380-925 nm regarding DR12Q spectra and over the interval 350-950 nm regarding the synthesized spectra. The rest-frame total equivalent width is hence straightforwardly given by W_rest = W / (z+1). The results of these continuum slope and total emission-line equivalent width determination procedures are summarized within Table <ref> and figure <ref> regarding DR12Q spectra and synthesized spectra of various normalizing magnitudes. We can easily see that the continuum slopes predicted from synthesized spectra tend to be bluer than those of the DR12Q spectra. This bias comes from the spread of the Siiv and Civ emission lines over the continuum region 145-148 nm and, to a lesser extent, from a similar spread of the Civ and Ciii] emission lines over the continuum region 170-180 nm, as depicted within figure <ref>. The noticed relation between the flattening of the continuum slope and the normalizing magnitude comes from the increasing number of negative samples that are rejected within the continuum regions 325-470 nm and 525-625 nm, where faint fluxes are usually found, and which ultimately tend to artificially redden those regions. Also, we can observe that the total equivalent widths of the emission lines predicted based upon synthesized spectra are underestimated compared to the ones predicted using DR12Q spectra. The reason for this similarly stands within the globally overestimated continuum flux as well as within the reddening of the spectrum at wavelengths longer than 300 nm according to the magnitude. Note that the omission of some narrow emission lines because of the LSF convolution also tends to lessen the predicted W. The reader should hence pay careful attention to these systematic effects when using these measurements. Given these shortcomings, one might rightfully wonder whether the use of non-linear optimization algorithms is worth considering in order to predict the continuum slope of the QSOs, at the expense of a ten times longer execution time. Doing so provides us with a mean value of the continuum slopes of -0.691 ± 0.657 for the DR12Q spectra and a correlation factor of 0.966 if these are compared to the results of our approach. In the case of synthesized spectra, these numbers become respectively -0.563 ± 0.721, -0.562 ± 0.737, -0.564 ± 0.737, -0.575 ± 0.739 and -0.601 ± 0.738 for the mean continuum slopes of magnitudes G = {18, 18.5, 19, 19.5, 20}, with associated correlation factors of 0.988 for G ≤ 19 and of 0.985, 0.975 for G = 19.5 and 20, respectively. The observed flattening of the predicted continuum slopes at G > 19 ironically comes from the non-rejection of the negative fluxes from the red part of the spectra, which tends to give larger weights to these regions (i.e. the fraction of red fluxes being then more significant). While this effect will have a negligible (but still noticeable) impact on the G ≤ 19 predictions because of the sufficient SNR of the red part of the spectra at these magnitudes (e.g. the fit of the red part of the spectra providing a good approximation of the continuum slopes at these magnitudes), it will have a deleterious impact at fainter magnitudes where the red part of the spectra is often better approximated by a flat curve. This effect gets further amplified through the subsequent rejection of the blue fluxes by the k-sigma clipping algorithm.
Let us still mention that the non-linear fitting approach discussed above remains the most rigorous from a statistical point of view, though the strong similarities noticed in both approaches and their common difficulties in predicting the continuum slopes of faint sources do not justify its use given its larger time consumption.§ PERFORMANCE COMPARISON The performances of our approach were assessed in comparison with the Extremely Randomized Trees learning method <cit.>. While classical tree-based learning methods usually try to find, at each node, a splitting criterion (i.e. an attribute and a threshold within this attribute) such that the learning set of observations associated with this node is split at best with respect to a given score measure (e.g. variance reduction in regression problems or information gain in classification problems), the ERT instead picks up K random attributes as well as a random threshold associated with each of these attributes and selects the one maximizing the provided score measure. This procedure is then recursively repeated until the number of learning set observations in all leaf nodes falls under a given limit, nmin. The averaged prediction of a set of N trees then allows the variance of the model (i.e. the sensitivity of each individual tree to the used learning set) to be subsequently lessened. The choice of this specific method mainly comes from both its fast learning phase and its high performance compared with other competing methods like Artificial Neural Networks or Support Vector Machines, while having only a few parameters to tune. Let us also note that this method is the one that is presently in use within the QSOC software module in order to predict most of the QSO APs <cit.>. First of all, let us mention that the QSO continuum slope and the total equivalent width of the emission lines will not be considered within this performance comparison because these can be predicted directly from observable quantities. Regarding the adjustment of the parameters of the ERT models, tests have shown that the predictions of the QSO redshift and type are rather insensitive to the K and nmin parameters if these stand within reasonable ranges of values. Consequently and according to <cit.>, the default values of K = Nattr, nmin = 5 and K = √(Nattr), nmin = 2 were adopted respectively for the redshift regression problem and the BAL classification problem, Nattr being here the number of points contained within our BP/RP spectra (i.e. Nattr = 960 if Nover = 8). The number of trees to build, N, should ideally be as large as possible. Nevertheless, given the time and memory constraints we have, the latter was set to N = 1000. In the following, the ERT models will be built based upon the noisy learning sets LS1 and LS2, where the observations having a SNR > 1 are selected and normalized so as to have a unit norm for the whole set of magnitudes. Their predictions are then gathered from the associated test sets of corresponding magnitude within TS1 and TS2. We have to note that, because of selection effects and observational bias within the DR12Q catalogue, neither LS1 nor LS2 will follow a realistic distribution of the redshift <cit.>. Similarly, neither will contain a genuine fraction of BAL QSOs <cit.>. Consequently, the ERT models that will be built based on these learning sets will be particularly suited to the prediction of the QSO redshifts and types that are the most frequently encountered within LS1 and LS2.
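As an aside, the ERT configuration just described maps naturally onto scikit-learn's implementation of the same algorithm. The sketch below is an illustration, not the code used in this study; the identification of nmin with min_samples_split and of K with max_features follows the usual reading of <cit.> but is stated here as an assumption.

from sklearn.ensemble import ExtraTreesRegressor, ExtraTreesClassifier

n_attr = 960  # number of points in a BP/RP spectrum (Nover = 8)

# Redshift regression: K = Nattr (all attributes), nmin = 5, N = 1000 trees.
ert_z = ExtraTreesRegressor(n_estimators=1000, max_features=None,
                            min_samples_split=5, n_jobs=-1)

# BAL classification: K = sqrt(Nattr), nmin = 2, N = 1000 trees.
ert_bal = ExtraTreesClassifier(n_estimators=1000, max_features="sqrt",
                               min_samples_split=2, n_jobs=-1)

# X: (n_obs, n_attr) unit-norm spectra; z and bal flags as targets, e.g.
# ert_z.fit(X_train, z_train); z_pred = ert_z.predict(X_test)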
In that sense, the ERT models will constitute data-oriented models whose predictions on TS1 and TS2 will be optimistic when compared to those based on real observations. Finally, we can further note that the weighted phase correlation algorithm (hereafter WPC) is not sensitive to this data unbalancing and that the associated results will remain valid irrespective of the actual AP distribution we will encounter. §.§ Redshift determination The distribution of the predicted redshifts against DR12Q redshifts is given within the upper part of figure <ref> for the case of the ERT predictions as well as for the case of the WPC predictions regarding the various normalizing magnitudes. We can already notice a trend of the ERT predictions standing at z ≤ 2 to be driven towards zert ≈ 2.3, where most of our learning set observations stand. This effect is particularly noticeable at z ≈ 0.8, where our second most numerous source of QSOs stands, and it further tends to strengthen with increasing magnitude (though these misclassified observations will typically have strong associated uncertainties). To a lesser extent, we may also note an opposite trend where the observations standing between z = 2 and z = 3 tend to be underestimated. These effects potentially reflect the inability of our models to fully grab the information that is present within our learning sets and/or the incompleteness of the latter. The fact that very high redshift objects (z > 3) get correctly predicted presumably comes from the entrance of the Lyα forest within the observed wavelength range, where the extremely faint fluxes found therein allow these observations to be unequivocally characterized. Although roughly performed here, the analysis of the results coming from machine learning methods often suffers from a lack of physical significance and interpretation, mostly arising because of their underlying complexity. Furthermore, these methods strongly depend on the completeness of the learning set we used in order to build them. For illustrative purposes, let us consider that a given QSO spectrum gets a correct redshift prediction from such a model; suppose now that a similar spectrum has a slightly higher redshift, which results in a mean shift by a few pixels in the observed spectrum. Then nothing ensures that this shifted spectrum will get a correct prediction from the previous model, since this ultimately depends on whether or not a somewhat similar spectrum was encountered within the used learning set. According to this, a learning method dedicated to the redshift determination of QSOs should ideally be based on a learning set of observations covering the vast majority of QSO shapes and characteristics over the entire range of redshift we are looking for. With these arguments in mind, we may still suppose that the ERT models we used (and in a broader sense, any model based on a machine learning method) are not the best suited to predicting the redshift of QSOs. Regarding the redshift distribution from WPC (see figure <ref>, up), we can readily notice a tighter dispersion of the errors when compared to the ERT predictions, with median absolute errors of 0.0057, 0.0061, 0.0069, 0.0088 and 0.0130 for the WPC predictions of magnitudes G = { 18,18.5,19,19.5,20 }, respectively, and corresponding ERT median absolute errors of 0.0419, 0.0586, 0.0755, 0.1164 and 0.1723 (see figure <ref>, middle). Similarly, 4.83%, 6.66%, 10.56%, 17.6% and 28.93% of the observations have a catastrophic prediction on their redshift (i.e.
|Δ z| > 0.1) within the WPC for the same set of magnitudes, while the corresponding ratios of ERT observations become respectively 35.04%, 40.37%, 45.09%, 52.97% and 63.19% (see figure <ref>, down). Most of the WPC errors come from mismatches between emission lines. These mainly consist of confusions of Hβ with Ciii]; Mgii with Lyα; Mgii with Civ; Mgii with Ciii] and Civ with Lyα. We have to note that these mismatches do not constitute by themselves real cases of degeneracy but rather arise because of the effect of noise on the identification of the emission lines, as we will soon see. Still, this effect is already unambiguously depicted in figure <ref> (top), where the number of observations suffering from such an emission line mismatch problem tends to increase with increasing magnitude. In the same figure, we may also note that low SNR spectra (G ≥ 19) tend to produce constant predictions at zwpc ≈ 0.5, 1.35 and 1.8. These correspond to the fit of deviant fluxes from the spectra edges by the Hα, Civ and Lyα emission lines, respectively. Though unavoidable, most of these errors will come along with a non-empty warning flag (see Table <ref>), which offers the possibility to discard these insecure predictions. By doing so, we rejected 46.2%, 50.99%, 57.72%, 67.09% and 77.58% of the total number of observations regarding magnitudes G = { 18,18.5,19,19.5,20 }, respectively. This leads to corresponding median absolute errors of 0.0053, 0.0055, 0.0058, 0.0063 and 0.0072 and associated ratios of catastrophic redshift predictions of 0.23%, 0.28%, 0.43%, 0.64% and 1.42% for the same set of magnitudes. From our previous discussion, we can notice that the performances we gained were achieved at the expense of a very high rejection rate of the observations having a non-empty warning flag. The distribution of these warning flags amongst the observations is given in Table <ref> along with their associated ratios of catastrophic redshift prediction once triggered. We first have to note that is the most frequently triggered warning flag amongst these observations and is hence the one that contributes most to their removal. The reason for this stands in the fact that the Zscore measure is primarily designed so as to be sensitive to the presence of all the emission lines that are theoretically covered at a given redshift estimate. The absence of one such line can be attributed either to a wrong redshift estimate or to its misidentification owing to noise or to its strong damping by the instrumental convolution, even though the right redshift was selected. This misleading distinction is clearly depicted in Table <ref>, where solely 9.64% of the observations having G = 18 and the flag set come along with Δ z > 0.1, thus arguing for the frequent misidentification of some emission lines, while at G = 20 this ratio becomes 38.84%, hence corresponding to a larger fraction of effective redshift confusions. Secondly, the flag is frequently set because of the intrinsic degeneracy existing in the prediction of the redshift of quasars, albeit we can assess from Table <ref> that in 70.66% of the cases the right redshift is selected amongst these ambiguous solutions at G = 18, while this ratio drops to 49.29% at G = 20. For completeness, we have to mention that 49.29% of successful identifications is still better than the ratio that would be obtained from a random selection of the solution, given that the observations having the flag set often come with more than two ambiguous solutions (see figure <ref>, for example).
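The two statistics used throughout this comparison are straightforward to compute. The following helper is a minimal sketch under the assumption that warning flags are encoded as integers with 0 meaning "empty"; the function name is hypothetical.

import numpy as np

def redshift_metrics(z_pred, z_true, warning_flags=None):
    """Median absolute redshift error and catastrophic-prediction ratio
    (|dz| > 0.1). If flags are given, insecure predictions (non-empty
    warning flags) are discarded first, as done in the text."""
    dz = np.abs(z_pred - z_true)
    if warning_flags is not None:
        dz = dz[warning_flags == 0]
    return np.median(dz), np.mean(dz > 0.1)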
We can further note that, once an ambiguity is detected (i.e. the warning flag triggered), the optimal peak of the CCF is commonly selected as the most probable redshift estimate, as these do not additionally trigger a warning flag. Now, if a sub-optimal peak of the CCF is selected, then 55.66% of the observations come along with Δ z ≤ 0.1 at G = 18, while this ratio becomes 39.23% at G = 20. Given that the latter observations must have a Zscore that is greater than or equal to the one associated with the optimal peak of the CCF in order to be selected, these eventually reveal the effective degeneracy that exists in the redshift prediction of quasars when based on low SNR spectra. Finally, the warning flag is rarely triggered given the strong constraints we set on it (see equation <ref>). This decision is further supported by the fact that the associated ratio of catastrophic redshift prediction stands to be the highest amongst the whole set of warning flags. As pointed out within Section <ref>, the thresholds that were used in order to trigger the χ_r^2 and Zscore warning flags are somewhat arbitrary, and other values might be better suited regarding the specific needs of the end user. This is particularly true given that these were shown to have a strong impact on the trade-off between the completeness and the impurity of our predictions, as we have just seen. These ratios of completeness and impurity are given in figure <ref> for varying thresholds on χ_r^2 and Zscore. Note that we did not consider the and warning flags in this analysis, given that χ_r^2 < 1 automatically implies that both these flags are set. Also recall that we required χ_r^2 > 0.85 so as to limit the number of ambiguous solutions that are potentially associated with each observation. §.§ BAL binary classification The data unbalancing has a particularly insidious impact on the analysis of the results coming from the binary classification of BAL QSOs. Indeed, based on the fact that solely 9.95% of the DR12Q observations are BAL QSOs, a model that systematically classifies the observations as type I/II QSOs would provide a satisfactory ratio of correctly classified observations (i.e. an accuracy) of 90.05% while no BAL QSO would be identified. Consequently, this ratio will not constitute an objective analysis tool if considered alone. We will hence use two additional and complementary statistical measures that were specifically designed for the analysis of the performance of binary classifiers. First, the true positive rate (hereafter TPR) will here denote the fraction of BAL QSOs that are correctly identified by a given model. It constitutes an estimator of the probability of detection of the BAL QSOs by this model. Secondly, the false positive rate (hereafter FPR) will denote the fraction of type I/II QSOs that are wrongly classified as BAL QSOs. A perfect binary classifier should hence have TPR = 1 along with FPR = 0. Note that both these statistical measures can be adjusted by varying the user-defined threshold that was set either on pb, for the case of the WPC, or on the number of trees that voted for the BAL class regarding the ERT. By doing so and reporting the corresponding TPR against FPR, we obtain the so-called Receiver Operating Characteristic (ROC) curve, as depicted within figure <ref> for the case of the WPC and ERT models for quasars with magnitudes G = { 18,18.5,19,19.5,20 }.
These curves allow the performances of these two competing models to be compared directly, while depending neither on the data unbalancing nor on the specific thresholds we used. The area under the ROC curve is then often taken as a fair indicator of their global performance. Now, as in many data reduction pipelines, our primary objective will be to optimize the accuracy of our model with respect to the fraction of BAL QSOs that will be encountered amongst the real observations. We will then have to take into account the potential unbalancing that will be present within the Gaia observations. However, because of the uncertainties surrounding the selection effect from DSC as well as the observational bias, this unbalancing is not known a priori. Consequently, we decided to consider a fraction of BAL QSOs, rb, equal to the one that is present within the DR12Q catalogue (i.e. rb = 0.0995). The presented accuracies should hence be updated once a realistic ratio becomes available, though the general conclusions drawn out of these are not supposed to change (assuming that rb remains small). We can then easily figure out that the regions of the ROC curves where the accuracy is constant correspond to lines whose equations are given by TPR = (1 - rb)/rb × FPR + C. Our goal will then be to find the point(s) of the ROC curve that intersect such a line while maximizing C. Note that the trivial case where C = 0 corresponds to the accuracy that would be obtained by a constant type I/II classifier, which is thereby always achievable. Stated otherwise, the point(s) of the ROC curve having an optimal associated accuracy correspond(s) to the one (or those) whose distance to the line of constant type I/II accuracy is the greatest while being on its left side. Figure <ref> focuses on the regions of the ROC curves where the accuracy stands higher than that of a constant type I/II classifier. From our previous discussion, we can readily see that the WPC models have overall better accuracies when compared to the ERT models for the whole set of G magnitudes. The best achievable accuracies are summarized in Table <ref> along with their associated thresholds, FPR and TPR for G magnitudes of 18, 18.5, 19, 19.5 and 20. The extremely low TPR found therein can be explained both by the relatively low SNR of the synthesized spectra and by the removal of most of the narrow absorption features by the LSF convolution and/or by the under-sampling of these spectra. The effect of noise can be readily recognized from figure <ref>, where the ROC curves tend to match the ones that would be obtained from a random classifier (i.e. a diagonal line) as we increase the G magnitude. This translates into a drop of the point of optimal accuracy along the ROC curves, which consists of both a lower TPR and a compensating lower FPR (see Table <ref>). In extreme cases, BAL QSOs become barely identifiable, with a probability of detection within the WPC of 3.348% for a G magnitude of 19.5 and of 0.413% for the case of G = 20. Figure <ref> compares the TPR of the WPC with the Balnicity index of the Civ trough <cit.> for the various normalizing magnitudes. This BI can be seen as a modified equivalent width of the BAL absorption occurring in the blue part of the Civ emission line. We can notice a strong dependence of the TPR on BI, which reflects the difficulty in identifying BAL QSOs having narrow absorption features. We can finally notice that if one can afford to have a high FPR, then the ERT provides a better TPR than the WPC.
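The optimal-accuracy point on a ROC curve follows from the identity accuracy = rb·TPR + (1-rb)·(1-FPR), which also yields the constant-accuracy lines TPR = (1-rb)/rb × FPR + C quoted above. A minimal sketch, with a hypothetical function name:

import numpy as np

def best_accuracy_on_roc(fpr, tpr, rb=0.0995):
    """Accuracy along the ROC curve for a BAL fraction rb, and the point
    maximizing it. A constant type I/II classifier (TPR = FPR = 0) gives
    the baseline accuracy 1 - rb."""
    acc = rb * tpr + (1.0 - rb) * (1.0 - fpr)
    i = np.argmax(acc)
    return acc[i], fpr[i], tpr[i]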
Accepting such a high FPR would make sense, for example, if we wanted to filter the Gaia catalogue by keeping most of the BAL QSOs while rejecting a still significant number of type I/II QSOs.§ DISCUSSION Although already fully operational, the presented software module may still experience some minor improvements, which are summarized in the remainder of this section. First, we did not consider any extinction by the interstellar medium. The associated correction relies on the availability of a wavelength-dependent extinction law such as the one of <cit.> as well as on a map of galactic extinction like the one that will be produced by the Total Galactic Extinction software module from CU8 <cit.>. The total equivalent width of the emission lines as well as the continuum slope from equation <ref> might benefit from this correction. Nevertheless, due to the fact that the continuum slopes we subtracted from our synthesized spectra are purely empirical (see section <ref>), these will also contain most of the encountered extinction. Accordingly, this correction is not expected to bring any major improvement to the prediction of the redshift of QSOs nor to the subsequent calculation of the BAL discriminant value, pb. Furthermore, given that most of the DR12Q spectra stand at relatively high galactic latitudes (i.e. |b| > 30) where the extinction is weak, the spectra we used in this study were not much affected by this extinction. A more challenging objective would be to enable the prediction of this extinction based on the BP/RP spectra of quasars. This problem is currently being investigated but seems to be hardly attainable because of the degeneracy existing between the extinction curve and the intrinsic continuum slope of the QSOs. Secondly, the computation of a χ^2 value from the optimal point of the CCF (see equation <ref>) can directly allow feedback to be sent about the potential misclassification of the quasars we received from the DSC module.§ CONCLUSION We have described in the present work the processing of the BP/RP spectra coming from the Gaia satellite in order to determine the astrophysical parameters of quasars within the QSOC module of the CU8 coordination unit of the DPAC. These astrophysical parameters encompass: the redshift of the QSOs, their continuum slopes, the total equivalent width of their emission lines and whether or not these are broad absorption line (BAL) QSOs. We have highlighted the necessity of having fast and reliable algorithms so as to deal with the huge amount of spectra that Gaia will provide as well as with their limited signal-to-noise ratio and resolution. We have introduced two already developed algorithms, namely the weighted principal component analysis and the weighted phase correlation, that were specifically designed in order to fulfil both of the mentioned objectives and whose combination allows us to securely predict the redshift of the QSOs and to set a discriminant on their type. We have presented the construction of a semi-empirical library of BP/RP spectra based on the Gaia instrumental convolution of the observations coming from the Sloan Digital Sky Survey, which were extrapolated in order to cover the wavelength range of the BP and RP spectra. We saw the pre-processing that is required in order for these BP/RP spectra to be fully exploitable by our algorithms as well as the methods we used for predicting the various astrophysical parameters.
Some systematic biases were noticed in the prediction of the continuum slopes and of the total equivalent width of the emission lines. These biases can be mostly explained both by the spread of the Siiv, Civ and Ciii] emission lines over the continuum regions situated between 145–148 nm and 170–180 nm and by the rejection of the negative fluxes that are usually found within the red part of the pre-processed spectra. A comparison with the currently used machine learning method showed that our approach is the method of choice for the determination of the redshift of the quasars, while benefiting from a direct physical significance as well as from strong diagnostic tools on the potential errors that may arise during predictions. Cross-validation tests showed that 95.17%, 93.34%, 89.44%, 82.4% and 71.07% of the observations come along with an absolute error on the predicted redshift that is lower than 0.1 for the case of quasars with G magnitudes equal to G = { 18, 18.5, 19, 19.5, 20}. These ratios become respectively 99.77%, 99.72%, 99.57%, 99.36% and 98.580% once the insecure predictions are discarded based on the triggering of some warning flags. We explored the repartition of these warning flags amongst the observations and studied the effect of setting customized warning thresholds on the trade-off between the completeness and the impurity of our predictions. Our methods were shown to yield the best ratio of correctly classified observations regarding the identification of BAL QSOs, assuming that these will be observed much less frequently than the type I/II QSOs. Machine learning methods may still provide a better probability of detection of these BAL QSOs at the expense of much higher contamination rates. Finally, we found that 91.725%, 91.1069%, 90.562%, 90.198% and 90.069% of the observations were correctly classified by our methods regarding quasars with G magnitudes of 18, 18.5, 19, 19.5 and 20, respectively.§ ACKNOWLEDGEMENTS The author acknowledges support from the ESA PRODEX Programme `Gaia-DPAC QSOs' and from the Belgian Federal Science Policy Office. Funding for SDSS-III has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation and the U.S. Department of Energy Office of Science. The SDSS-III web site is http://www.sdss3.org/. SDSS-III is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS-III Collaboration including the University of Arizona, the Brazilian Participation Group, Brookhaven National Laboratory, Carnegie Mellon University, University of Florida, the French Participation Group, the German Participation Group, Harvard University, the Instituto de Astrofisica de Canarias, the Michigan State/Notre Dame/JINA Participation Group, Johns Hopkins University, Lawrence Berkeley National Laboratory, Max Planck Institute for Astrophysics, Max Planck Institute for Extraterrestrial Physics, New Mexico State University, New York University, Ohio State University, Pennsylvania State University, University of Portsmouth, Princeton University, the Spanish Participation Group, University of Tokyo, University of Utah, Vanderbilt University, University of Virginia, University of Washington and Yale University.
http://arxiv.org/abs/1709.09378v1
{ "authors": [ "L. Delchambre" ], "categories": [ "astro-ph.GA", "astro-ph.CO" ], "primary_category": "astro-ph.GA", "published": "20170927080820", "title": "Determination of astrophysical parameters of quasars within the Gaia mission" }
This paper is an electronic preprint. A discrete choice model for solving conflict situations between pedestrians and vehicles in shared space Pascucci, F.1,*, Rinke, N.2, Schiermeyer, C.2, Berkhahn, V.2, Friedrich, B.1, 1 Technische Universität Braunschweig, Institute for Transportation and Urban Engineering, Hermann-Blenk-Straße 42, 38108 Braunschweig, Germany 2 Leibniz Universität Hannover, Institute for Risk and Reliability, Callinstraße 34, 30167 Hannover, Germany * [email protected] § ABSTRACT When streets are designed according to the shared space principle, road users are encouraged to interact spontaneously with each other to negotiate the space. These interaction mechanisms do not follow clearly defined traffic rules but rather psychological and social principles related to aspects of safety, comfort and time pressure. However, these principles are hard to capture and to quantify, thus making it difficult to simulate the behavior of road users. This work investigates traffic conflict situations between pedestrians and motorized vehicles, with the main objective of formulating a discrete choice model for the identification of the proper conflict solving strategy. A shared space street in Hamburg, Germany, with high pedestrian volumes is used as a case study for model formulation and calibration. Conflict situations are detected by an automatic procedure of trajectory prediction and comparison. Standard evasive actions are identified, both for pedestrians and vehicles, by observing behavioral patterns. A set of potential parameters, which may affect the choice of the evasive action, is formulated and tested for significance. These include geometrical aspects, like distance and speed of the conflicting users, as well as conflict-specific ones, like time to collision. A multinomial logit model is finally calibrated and validated on real situations. The developed approach is realistic and ready for implementation in motion models for shared space or any other less organized traffic environment.§ INTRODUCTION The street design in the urban environment has to take into account the needs of a wide variety of road users, including many types of motor vehicles as well as vulnerable users like pedestrians and cyclists. When the street space has to accommodate them all together, traffic engineers may basically choose between two alternative design approaches. The first one is based on the separation of different transport modes. When traveling towards the same direction, separated traffic areas - e.g. separated with vertical kerbs - are established. When flows cross each other, control devices like markings, signs or signal devices are used to establish priority rules. The second one is based on the shared space principle and consists of designing a continuously paved surface with minimized road signs and markings. In these areas, road users are encouraged to interact spontaneously with each other to negotiate the space, hence taking priority or giving way to others and consequently adapting their trajectory and speed according to the traffic situation. §.§ Problem statement and previous research Besides specific guidelines which can assist in the design process of shared space on the basis of technical recommendations and real examples <cit.>, traffic engineers at present cannot rely on realistic microsimulation tools, which would be useful for Level of Service estimation and safety assessment.
Despite that, in recent years research has focused on shared space modeling by investigating the interaction mechanisms among road users and proposing new methods to reproduce the behavior of people in these environments <cit.>. The common approach has been to utilize the Social Force Model (SFM) of Helbing and Molnar <cit.> - originally developed for pedestrian dynamics - and to extend it to other road users like vehicles and cyclists. However, in contrast to pedestrian dynamics, the presence of motorized vehicles makes all road users more vigilant due to the higher risk of injuries involved. Complex psychological processes are hidden behind each decision, e.g. whether to brake or not when a pedestrian is approaching on the side of the road. These very conflict situations among different transport modes are hard to reproduce and require further adaptation of the SFM - an approach that many researchers in the recent past have chosen. Anvari et al. <cit.> extended the SFM with a conflict avoidance strategy, where the road user moving at the highest speed reacts first by decelerating or deviating and the other one reacts accordingly. That means the model assigns priority to the weaker user regardless of the circumstances. Pascucci et al. <cit.> used an algorithm to compare the future positions of the conflicting users, in order to assign priority to the one that would leave the conflict zone first. Schönauer et al. <cit.> developed a tactical model which handles conflict situations by using a Stackelberg game, a rational game play which determines the winner by comparing the single utilities of the players. The model considers parameters like the probability of collision, the distance between users and road regulation, with the purpose of determining the most probable reaction. However, as the authors declared, the estimated parameters were not presented due to the small number of decisions collected in the field. Past research has also investigated pedestrian-vehicle conflicts in lane-based environments, i.e. where the behavior of road users is expected to follow a predefined trajectory <cit.>. Many authors have proposed discrete choice models to determine the reaction performed by road users in pedestrian-vehicle conflicts <cit.>. In the field of shared space, Kaparias et al. <cit.> investigated the pedestrian gap acceptance behavior by analyzing the effect of variables such as waiting time, crossing time, crossing speed and critical gap. §.§ Objective and contribution of this research In the research project MODIS (Multi mODal Intersection Simulation) the authors have dealt with the issue of shared space microsimulation and have proposed a Social-Force-based approach to simulate the interaction among road users in these areas <cit.>. Nevertheless, dealing particularly with pedestrian-vehicle conflicts revealed that road users base the decision of whether to perform an evasive action on several criteria. To establish the impact of the respective criteria and to assess under which circumstances a specific evasive action is chosen, an extension of the proposed deterministic model would be necessary. For this reason, this paper investigates the mechanism of reaction in pedestrian-vehicle conflicts in shared space, and aims to propose a discrete choice model to determine the most probable evasive action of a road user in given circumstances. The developed modeling approach has many innovative aspects in the field of shared space microsimulation: * It provides decisions for single conflicting users.
While in previous models a comprehensive conflict solving strategy is used, in the current approach every user involved in a conflict situation decides for themselves based on personal perception and intention. This can lead to situations where, for example, both users temporarily decide to give way to each other for safety reasons (which is quite common in shared space). * It allows two possible reactions besides the possibility of not performing any reaction at all. Previous models were formulated on the binomial decision to accept/not to accept the temporal gap between two consecutive cars, while here it is also specified what the evasive action is supposed to look like. In this case, a multinomial logit is used to allow the user a total of three possible choices. * It considers multiple conflict situations. The number of simultaneous conflicts is indeed considered as an input variable in the proposed model. * The model is calibrated and validated on a large dataset of real world observations and is ready-to-use within a motion model for shared spaces or any other less-organized environment. The coefficients presented at the end of this paper are representative of the scenario where the model was calibrated and reflect the specific street layout, the road regulation and the local behavioral attitudes. Despite that, the developed procedure for coefficient estimation can also be used in contexts different from the one considered, including pedestrian crossings or other situations where priority is somehow negotiated. §.§ Outline of the paper This work is organized in four main steps, which reflect the methodology used to reach the objectives stated above: * a visual analysis is performed on real pedestrian-vehicle conflicts in order to identify, on a general level, which factors may affect road users' reaction choice; * data are collected, including the determination of predictors and outcome for a set of selected conflict situations; * the significant variables are identified and the multinomial logit model is calibrated and validated, both for pedestrians and vehicles; * the model is tested on real situations to determine its performance. This paper ends with some considerations about the developed model and some ideas for future research. § SITE LOCATION AND OBSERVATIONS A shared space street in Hamburg (D) was chosen for video observation and data extraction. The site is located in proximity to the railway station of the district of Bergedorf (Weidebaumsweg) and has public space features due to the presence of retail stores, a shopping mall and a pedestrian zone (Fig.<ref>). This leads to high pedestrian crossing volumes over the 63-meter long area which vehicles are supposed to drive through (in this paper referred to as the circulation zone). To promote pedestrian movement all over the area, the shared space design principle has been adopted: a grey-tone paved surface has been designed, which covers the circulation zone and the surrounding pedestrian ones, with no level difference between them but simply different patterns to identify the borders. Within the circulation zone vehicles are allowed to drive at maximum 20 km/h, while having priority over crossing pedestrians coming from both sides, who have to wait for a sufficient gap between vehicles to cross the road. Despite that, the data survey has shown that vehicles and pedestrians negotiate the space spontaneously, often giving priority to each other as a courtesy rather than strictly following road regulations.
Video surveys were conducted with two video cameras, placed at opposite borders of the circulation zone (points C1 and C2 of Fig.<ref>a), across from each other, at an elevation of about 7 meters. The video cameras have a resolution of 640 x 480 at 30 frames per second. The period of video recording was chosen as a Saturday afternoon in springtime (April 2nd 2016 from 2 to 5 pm), in order to maximize the volume of crossing pedestrians. Besides vehicles and pedestrians, a few motorcycles and cyclists were present, but their numbers were negligible. To get an impression of the mechanisms of conflict reaction and the possible factors influencing them, typical pedestrian-vehicle conflict situations were extracted and visually analysed. §.§ Evasive action analysis When entering an area shared with vulnerable road users, vehicles usually assume a different driving behavior, consisting of reducing speed and paying more attention while driving. If no pedestrian is on either side, a constant speed is kept until the end of the circulation zone, but if pedestrians appear, a decision is needed whether to give them way or not. In the first case, the typical reaction consists of simply decelerating, because weaving is usually considered to be dangerous. For vehicles, three different possible choices are assumed in this work: apart from the no-reaction behavior (No Reaction Vehicle, NRV), vehicles may decelerate - slightly or markedly - (Deceleration, DEC) or recover the desired speed by accelerating after a conflict situation (Acceleration, ACC). When no vehicle is present in the circulation zone, pedestrians tend to cross it more or less perpendicularly to the road axis in order to reduce the duration of the crossing within the circulation zone. If a vehicle - or even several vehicles - appears, the basic question is whether to take priority or to give way. In both cases, pedestrians may choose not to react (No Reaction Pedestrian, NRP), because the conflict is assumed not to be dangerous or because a deceleration of the vehicle is expected. The second option is to react prudently (Prudent, PRU), by deviating parallel to the oncoming car and/or decelerating to keep a safe distance to the projected vehicle line. Moreover, they can react aggressively (Aggressive, AGG) by deviating perpendicular to the trajectory of the vehicle, sometimes even by increasing their speed (in this way the pedestrian can leave the circulation zone earlier and allow the car to decelerate less intensely). §.§ Possible relevant parameters Three main classes of parameters are assumed to be potentially influential on the choice of the evasive actions previously identified: * movement-specific, like relative position, speed and acceleration of both road users; * projected collision-specific, which describe the expected situation if no evasive action is taken by any of the users; * external conflict-specific, related to the presence of other simultaneous situations of conflict. Many other parameters like age, gender and time pressure, which may possibly affect the behavior, are not considered in this work due to the difficulty of capturing them in real-world traffic situations. In addition, no parameter describing the road layout or regulation is included, since only one scenario is considered. § DATA ACQUISITION AND PREPARATION In order to detect conflict situations automatically, the positions of all pedestrians and vehicles were tracked for a 30-min time interval at fixed time steps.
To maximize the number of conflict situations, the most congested time interval was selected from the entire video material. The video tracking was manually performed at a time step of 0.5 seconds (15 video frames). The tracked points were transformed into the bird's-eye view system and successively processed in three main phases. The first phase was aimed at detecting the conflict situations which are used for the model calibration. It consisted of a three-step methodology which was performed at every time step ts^* of the time interval. Firstly, the expected behavior of road users was predicted by collecting the last 4 observed points of every road user and fitting a cubic smoothing spline, which can estimate the expected position in the next 8 seconds. Secondly, the predicted positions were compared with each other to calculate the future relative minimum distance (MinDist) among road users. Thirdly, the current ts^* - with the information on the road users' IDs - was saved as a Conflict Instant (CI) when this distance was found to be below 5 meters. This resulted in a set of 2814 CIs, belonging to 409 conflict situations between one vehicle and one pedestrian. The second phase consisted of the computation of the predictors listed in Tab.<ref> at every time step. In the third phase it was determined how road users reacted to conflicts, i.e. which nominal outcome must be associated to every CI. For this purpose, the critical element is represented by the delay between the moment ts^*, when the conflict was observed, and the moment when the road user reacted accordingly. This temporal delay between stimulus and reaction is assumed here to be 1.5 seconds, which includes the perception time (needed to perceive the stimulus), the decision time (needed to elaborate a conflict solving decision) and the reaction time (needed to react physically). Consequently, while the values of the predictors are calculated at time ts^*, the reaction choice is detected at time ts^*+3 ts - i.e. 1.5 seconds later. The type of evasive action is identified through a 5-step method which is briefly described here and shown in Fig.<ref>b for the pedestrian case (it applies similarly to vehicles): * the expected trajectory of the pedestrian is computed by a cubic smoothing spline through the last 3 observed positions and the current one [P_ts^*-3 ; P_ts^*]; * the observed trajectory of the pedestrian is computed by a cubic smoothing spline through the 3 future observed positions and the current one [P_ts^* ; P_ts^*+3]; * the intersections of the expected and the observed trajectory with the vehicle trajectory are saved as XP_Exp and XP_Obs, respectively; * the time needed by the pedestrian to reach XP_Exp and XP_Obs from P_ts^* is computed; * the temporal difference k(ts^*) between XP_Exp and XP_Obs is used as the reference value to classify the reaction. Negative values of k(ts^*) indicate that the pedestrian has adopted a prudent behavior with the intention of giving way to the vehicle. The benefit of the statistic k(ts^*) lies in the possibility of quantifying the intensity of the evasive action by a single value, without computing any speed or directional change. The statistic k(ts^*) was computed for all CIs, both for pedestrians and vehicles. The distribution of the variable is shown in the histograms in Fig.<ref>a for pedestrians and Fig.<ref>b for vehicles. Given the distribution of the variable k(ts^*), arbitrary threshold values of ± 0.25 sec are assumed to determine which evasive action was chosen.
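The conflict-detection and reaction-classification steps just described can be sketched as follows. This is an illustration under stated assumptions, not the authors' implementation: scipy's interpolating CubicSpline stands in for the cubic smoothing spline of the text, and all function names are hypothetical.

import numpy as np
from scipy.interpolate import CubicSpline

DT = 0.5             # tracking time step (s)
CONFLICT_DIST = 5.0  # MinDist threshold defining a Conflict Instant (m)

def predict_positions(times, xy, horizon=8.0):
    """Extrapolate the expected positions over the next `horizon` seconds
    from the last 4 tracked points (one spline per coordinate).
    `times` is strictly increasing and `xy` has shape (4, 2)."""
    t_future = np.arange(times[-1] + DT, times[-1] + horizon + DT, DT)
    return np.column_stack([CubicSpline(times, xy[:, i])(t_future)
                            for i in range(2)])

def min_distance(traj_a, traj_b):
    """Future relative minimum distance (MinDist) between two predicted
    trajectories sampled on the same time grid; a time step ts* is a
    Conflict Instant when this falls below CONFLICT_DIST."""
    return np.linalg.norm(traj_a - traj_b, axis=1).min()

def classify_reaction(k, thr=0.25):
    """Map the temporal statistic k(ts*), in seconds, to an evasive-action
    class with the +/-0.25 s thresholds (pedestrian labels shown; vehicles
    use DEC / NRV / ACC instead)."""
    if k < -thr:
        return "PRU"   # prudent: gives way to the vehicle
    if k > thr:
        return "AGG"   # aggressive: takes priority
    return "NRP"       # no reaction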
With these thresholds the reaction is classified and the dataset is ready for the model calibration.§ ANALYSIS AND MODEL FITTING A multinomial logit model is chosen to investigate how each factor, among those previously identified, affects the choice of the evasive action of a pedestrian (or a motorist) when dealing with a conflict situation. This modeling approach was chosen because of the categorical structure of the response, which may take three discrete values, and the type of predictors, which are both continuous and discrete. Taking the no-reaction option as the baseline J - both for pedestrians and drivers - the multinomial logit assumes the log-odds of each alternative response j to be linearly distributed with intercept α_j and a vector of regression coefficients β_j (see Equation <ref>). The model allows the estimation of the probability μ_CIj that, in a certain CI, the evasive action j is performed, given a set of explanatory variables X_CI. ln(μ_CIj/μ_CIJ) = α_j + X_CI β_j Although it cannot be excluded that some variables have a quadratic or cubic relationship, a linear one was chosen in order to keep the model as simple as possible. §.§ Selection of the predictors In a first stage, the dependence between the predictors and the outcome is tested, with the aim of identifying a set of explanatory variables which are determinant for the choice of the evasive action. The analysis is carried out by the maximum likelihood method and the statistical significance of each predictor is checked by the Z-value, which is defined as the regression coefficient divided by its standard error. The significance is checked with a two-sided test under the null hypothesis that the given variable does not affect the outcome. The analysis is performed both for pedestrians and drivers by estimating the coefficients β_j for the model with all predictors (full model). This will highlight which variables are statistically significant and which could be omitted from the model. The results of the regression are shown in Tab. <ref> for drivers and in Tab. <ref> for pedestrians (in both cases one regression is performed for each alternative). The result of the goodness-of-fit test expressed through the chi-squared statistic shows that the improvement given by the explanatory variables with respect to the null model is significant. The associated p-value (calculated under the null hypothesis that the model fits the data well) is approximately 1, and this suggests that further model specifications - e.g. quadratic relations - are not necessary at the moment. Looking at the p-values of single predictors, it can be noticed that only part of them are statistically significant independently of the road user and the chosen reaction, i.e. MinDist, TimeMinDist, TimeDelayXP and AccPed (p-values are close to 0). The signs also indicate clear tendencies: both road users tend to take an evasive action when MinDist decreases, TimeMinDist increases and TimeDelayXP increases. That means that road users are more inclined to change their behavior when the moment of minimum closeness is temporally distant, when it implies a collision, and when they are ahead of the conflicting user. Moreover, they tend to behave more prudently when they are in a deceleration phase, and more aggressively when they are accelerating. Other tendencies are user- and reaction-specific, like for example CPConfNr, which is relevant only for drivers when deciding whether to decelerate or not. For the sake of the model's simplicity, part of the predictors are excluded.
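A multinomial logit of this form can be fitted with standard libraries. The sketch below uses statsmodels; the data frame `df`, its column names and the restriction to the four significant predictors named above are assumptions for illustration, not the authors' estimation code.

import pandas as pd
import statsmodels.api as sm

# df: one row per Conflict Instant with the predictors and the observed
# reaction label ("NRP" / "PRU" / "AGG" for pedestrians).
predictors = ["MinDist", "TimeMinDist", "TimeDelayXP", "AccPed"]
X = sm.add_constant(df[predictors])
y = pd.Categorical(df["reaction"], categories=["NRP", "PRU", "AGG"])

model = sm.MNLogit(y.codes, X)   # code 0 (NRP, no reaction) is the baseline J
result = model.fit()
print(result.summary())          # per-alternative coefficients, Z-values, p-values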
The selection of the predictors to exclude was done by removing variables one at a time and checking for consistent decreases in residual deviance. The resulting chi-squared statistics are 1943.8 and 2052.7, respectively, which are very close to those of the full model.§.§ Model calibration and validation In order to calibrate and validate the model on different data, the entire sample was split into 70% for training and 30% for testing. The coefficients were estimated on the training sample (Tab.<ref>) and successively tested on the validation one, where the likelihood of every reaction choice was computed for all the CIs and the option with the highest probability was taken as the response. The results are shown in the confusion matrix (Tab.<ref>), where each column represents the instances in the predicted class, while each row represents the instances in the observed class. The off-diagonal elements of the confusion matrix reveal in which situations the predicted choice differs from the observed one. The misclassification rate, i.e. the percentage of off-diagonal elements with respect to the total, amounts to 23.1% for drivers and 31.3% for pedestrians. This has to be considered a satisfying result given the high stochasticity of road users' behavior, which may be strongly affected by parameters like age, sex or time pressure. Two illustrative situations are chosen to show the good performance of the developed model. For each situation three figures are shown alongside each other: a frame of the video sequence which displays the conflict dynamics (a), the observed behavior in terms of speed (or direction) with the associated value of the statistic k for every CI (b) and the probabilities predicted by the model for the different reaction choices (c). For the sake of clarity, vehicles and pedestrians are indicated by the letters v and p. Moreover, these abbreviations are capitalized when the behavior of the road user is estimated and tested against observations. In situation 1 (Fig.<ref>) the vehicle V1 is in conflict with pedestrian p1 (as well as the pedestrian next to p1). The latter decides to cross the circulation zone and forces V1 to give way by decelerating. The model predicts the choice correctly for the whole duration of the conflict, since the DEC probability is always higher than the alternative ones. In situation 2 (Fig.<ref>) pedestrian P2 (as well as the one next to P2) steps onto the roadway with a speed of approximately 0.75 m/s (lower than the desired speed). As the vehicle v2 decelerates to give way - and also because there is a vehicle ahead - P2 accelerates until reaching the desired speed (around 1.45 m/s). This behavior is identified as AGG by the statistic k for the first part of the conflict, and is consistent with the outcome of the developed model. Moreover, the transition from AGG to NRP is reproduced very well. However, the misclassification rate shows that in approximately 1 CI out of 4 the model diverges from reality. For this reason - and in view of possible model improvements - many situations were tested and the reason for model misclassification was annotated. Two main causes were found. The first one is related to courtesy behavior - e.g. when a driver decelerates to let a pedestrian cross. In this case the model would classify the reaction as ACC or NRV, while the driver actually decelerates (second column of Tab.<ref> for vehicles). The second one is closely related to the heterogeneity of pedestrian behavior.
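The 70/30 validation scheme and the confusion matrix can be reproduced along the following lines, reusing X and y from the previous sketch; again a hedged illustration rather than the original code.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

X_tr, X_te, y_tr, y_te = train_test_split(X, y.codes, test_size=0.30)

result = sm.MNLogit(y_tr, X_tr).fit()
probs = result.predict(X_te)              # one probability column per class
y_hat = np.asarray(probs).argmax(axis=1)  # option with the highest probability

cm = confusion_matrix(y_te, y_hat)        # rows: observed, columns: predicted
misclassification_rate = 1.0 - np.trace(cm) / cm.sum()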
This heterogeneity is evident in Tab.<ref> for pedestrians, where the AGG row and column both contain a high number of elements. One can think of elderly people, who prefer to give priority to vehicles even if they could cross safely (elderly people have lower levels of risk acceptance). On the contrary, young people tend to be less prudent and to accept higher risks. This very case is shown in Fig.<ref>, where the oncoming car v3 (which is quite close and fast) is not captured by the video frame. While the observed behavior is classified as AGG for the first part of the reaction, the model expects pedestrian P3 to be prudent, i.e. to let the vehicle pass. § CONCLUSIONS AND FUTURE RESEARCH This paper investigated the behavior of road users in vehicle-pedestrian conflicts in shared space. Reaction mechanisms have been analyzed through real world observations and classified, and a statistical method has been developed to automatically detect the chosen reaction in video data at every time step. Subsequently, geometrical variables which affect the choice of the evasive action have been investigated and included in the multinomial logit model. The model has been calibrated on a large dataset of observed data and has shown good performance given its simple mathematical structure and the restricted number of variables. However, two main aspects must be considered for model improvement. First, the relevance of age and time pressure in pedestrian behavior. Second, the high frequency of courtesy behavior in shared space, i.e. a vehicle letting a pedestrian cross while standing at the side of the road. Moreover, since only a single scenario was considered, the statistical analysis does not ensure spatial transferability of the model. The calibration of the model on other shared space streets can in fact reveal the influence of road design and regulation on pedestrian and vehicle conflict behavior. Finally, the model can be adapted to other traffic scenarios like mid-block crossings or refuge islands, where priority rules are clearly defined but interactions still occur. For the purpose of microsimulation, the developed model is ready-to-use and can be directly implemented in any motion model. This is actually part of future research within the project MODIS, where the aim will be to extend the existing model to capture conflict situations involving many road users.§ ACKNOWLEDGEMENT The scientific research published in this article is funded by the DFG under the references BE 2159/13-1 and FR 1670/13-1. The authors cordially thank the funding agency.
http://arxiv.org/abs/1709.09412v1
{ "authors": [ "F. Pascucci", "N. Rinke", "C. Schiermeyer", "V. Berkhahn", "B. Friedrich" ], "categories": [ "stat.AP", "physics.soc-ph" ], "primary_category": "stat.AP", "published": "20170927094041", "title": "A discrete choice model for solving conflict situations between pedestrians and vehicles in shared space" }
Centro Atómico Bariloche & Instituto Balseiro (C.N.E.A.) and CONICET, 8400 S. C. de Bariloche, R. N., Argentina. Centro Atómico Bariloche & Instituto Balseiro (C.N.E.A.) and CONICET, 8400 S. C. de Bariloche, R. N., Argentina. Centro Atómico Bariloche & Instituto Balseiro (C.N.E.A.) and CONICET, 8400 S. C. de Bariloche, R. N., Argentina. Centro Atómico Bariloche & Instituto Balseiro (C.N.E.A.) and CONICET, 8400 S. C. de Bariloche, R. N., Argentina. Centro Atómico Bariloche & Instituto Balseiro (C.N.E.A.) and CONICET, 8400 S. C. de Bariloche, R. N., Argentina. Centro Atómico Bariloche & Instituto Balseiro (C.N.E.A.) and CONICET, 8400 S. C. de Bariloche, R. N., Argentina. Centre de Nanosciences et de Nanotechnologies, C.N.R.S., Univ. Paris-Sud, Université Paris-Saclay, C2N Marcoussis, 91460 Marcoussis, France. Centre de Nanosciences et de Nanotechnologies, C.N.R.S., Univ. Paris-Sud, Université Paris-Saclay, C2N Marcoussis, 91460 Marcoussis, France. [Corresponding author. E-mail: ][email protected] Centro Atómico Bariloche & Instituto Balseiro (C.N.E.A.) and CONICET, 8400 S. C. de Bariloche, R. N., Argentina. Radiation pressure, electrostriction, and photothermal forces have been investigated to evidence backaction, non-linearities and quantum phenomena in cavity optomechanics. We show here, through a detailed study of the relative intensity of the cavity mechanical modes observed when exciting with pulsed lasers close to the GaAs optical gap, that optoelectronic forces involving real carrier excitation and deformation potential interaction are the strongest mechanism of light-to-sound transduction in semiconductor GaAs/AlAs distributed Bragg reflector optomechanical resonators. We demonstrate that the ultrafast spatial redistribution of the photoexcited carriers in microcavities with massive GaAs spacers leads to an enhanced coupling to the fundamental 20 GHz vertically polarized mechanical breathing mode. The carrier diffusion along the growth axis of the device can be enhanced by increasing the laser power, or limited by embedding GaAs quantum wells in the cavity spacer, a strategy used here to prove and engineer the optoelectronic forces in phonon generation with real carriers. The wavelength dependence of the observed phenomena provides further proof of the role of optoelectronic forces. The optical forces associated with the different intervening mechanisms and their relevance for dynamical backaction in optomechanics are evaluated using finite-element methods. The results presented open the path to the study of hitherto seldom investigated dynamical backaction in optomechanical solid-state resonators in the presence of optoelectronic forces. Optoelectronic forces with quantum wells for cavity optomechanics in GaAs/AlAs semiconductor microcavities A. Fainstein December 30, 2023 ========================================================================================================== § MOTIVATION Backaction in cavity optomechanics has been shown to lead to novel physical phenomena including laser cooling, self-oscillation, and non-linear dynamics in systems that go from kilometer-size interferometers to single or few trapped ions. <cit.> Briefly, a resonant photon field in a cavity exerts a force and induces a mechanical motion of the mirrors, which in turn leads to a delayed modification of the resonant condition of the trapped field.
Such coupled dynamics can be exploited for a large variety of applications that span, for example, from gravitational wave detection <cit.> to the study of quantum motion states in mesoscopic mechanical systems. <cit.> How light exerts force on matter is at the center of these investigations. Photons can apply stress through radiation pressure, transferring momentum when reflected from the mirrors. <cit.> Related mechanisms also derived from the same fundamental interaction (Lorentz forces) are gradient forces (also exploited in optical tweezers) <cit.>, and electrostriction. The latter, linked to the material's photoelasticity, has been shown to play a role that can be of the same magnitude as radiation pressure, <cit.> or even larger if optical resonances are exploited in direct bandgap materials such as GaAs. <cit.> In the presence of radiation pressure forces, the energy E of the photon is shifted by the Doppler effect by an amount of the order (v/c)E, where v and c are the mirror and light velocities, respectively. The mechanical energy transferred from the photon to the mirror is thus very small. Electrostriction leads to Raman-like processes, for which the transferred energy Δ E_R corresponds to the involved vibrational frequency. This leads to inelastic scattering sidebands. Again, typically Δ E_R << E. By contrast, if the photon is absorbed in the process of interaction, all its energy is transferred to the mirror. This fundamental difference has been used in cavity optomechanics to demonstrate strongly enhanced light-matter interactions based on photothermal forces. <cit.> In materials displaying optical resonances, the photons can be resonantly absorbed with the consequent transfer of electrons to excited states. Photoexcited carrier-mediated optomechanical interactions have been reported in semiconductor modulation-doped heterostructure-cantilever hybrid systems. Efficient cavity-less optomechanical transduction involving opto-piezoelectric backaction from the bound photoexcited electron-hole pairs has been demonstrated in these systems, including self-feedback cooling, amplification of the thermomechanical motion, and control of the mechanical quality factor through carrier excitation. <cit.> The change in the electronic landscape produced by photoexcited carriers also induces a stress in the structure through electron-phonon coupling mediated by deformation potential interaction. This stress can be identified as an optoelectronic force, and should have the same kind of temporal behavior (with different time-scales and details depending on the carrier dynamics) and amplified strength as observed for photothermal forces. <cit.> Recently, optical cooling of mechanical modes of a GaAs nanomembrane forming part of an optical cavity was reported, <cit.> and its relation to optoelectronic stress via the deformation potential was analysed. Because of the very fast relaxation rate of excited carriers due to surface recombination in such nanometer-size structures, it was concluded that thermal (and not optoelectronic) stress was the primary cause of cooling in that case. We will demonstrate here that the carrier dynamics can be fundamentally modified when the nanometer-size GaAs layer is embedded in a monolithic microcavity, making optoelectronic forces the main mechanism of interaction of light with vibrations in such semiconductor devices. The diffusion of the photoexcited carriers thus assumes a central role, a dynamics that can be engineered using embedded quantum wells (QWs).
Because of their optoelectronic properties, semiconductor GaAs/AlAs microcavities are interesting candidates for novel functionalities in cavity optomechanics. Perfect photon-phonon overlap, and access to electronically resonant coupling in addition to radiation pressure, could lead to strong optomechanical interactions of photoelastic origin. <cit.> The vibrational frequencies in these microresonators are determined by the vertical layering of the device (fabricated with the ultra-high quality of molecular beam epitaxy), and not by the lateral patterning (defined by the more limited performance of microfabrication techniques). This has allowed access to much higher frequencies for the optomechanical vibrational modes, in the 20-100 GHz range, without significant reduction of the mechanical Q-factors. <cit.> In addition, these optomechanical resonators enable the conception of hybrid architectures involving artificial atoms (semiconductor excitons) coupled to the optical cavity mode, and thus combining the physics of cavity optomechanics with cavity quantum electrodynamics. <cit.> Our motivation here is to search for optoelectronic forces in these devices, and for that purpose we study the light-sound coupling involving the resonator mechanical modes and the optical cavity at resonance with the material exciton transition energy. Clear evidence of the role of optoelectronic forces emerges from new studies based on the spectral dependence and the relative intensity of the observed mechanical cavity modes. We demonstrate based on these experiments and model calculations that the main phonon generation mechanism using pulsed lasers close to resonance in these devices involves indeed the real excitation of carriers and the deformation potential mechanism. <cit.> We show that in microcavities with “bulk” GaAs spacers (i.e. cavity spacers constituted by a thick λ/2 layer of GaAs) ultrafast carrier redistribution leads to an enhanced coupling to the more uniformly distributed fundamental 20 GHz cavity vibrational mode. The relative intensity of the modes in these structures varies with laser power, consistently with a more uniform distribution of carriers being attained at higher concentrations. Taking this effect into account, the structure is engineered with embedded quantum wells to limit the carrier diffusion, leading to mechanical modes with a relative intensity consistent with the spatial distribution of the cavity optical field. The demonstration of optoelectronic forces and the possibility to tune the coupling to specific vibrations using quantum wells opens the way to new opportunities in the field of optomechanics. § RESULTS We consider two planar microcavity structures, specifically a “bulk” GaAs and a multiple quantum well (MQW) resonator. The “bulk” GaAs microcavity is made of a λ/2 GaAs-spacer enclosed by (λ/4,λ/4) Al_0.18Ga_0.82As /AlAs DBRs, 20 pairs of layers on the bottom, 18 on top, grown on a GaAs substrate (a scheme of the structure is presented in Fig. <ref>(b)). <cit.> As we have demonstrated previously, this structure works as an optomechanical resonator that simultaneously confines photons and acoustic phonons of the same wavelength. <cit.> In the MQW microcavity the λ/2 spacer is constituted by six 14.5 nm GaAs QWs separated by 6.1 nm AlAs barriers. To further enhance the light-sound coupling the second and fourth λ/2 DBR alloy layers on each side are also replaced by three GaAs/AlAs QWs. The reason for this design will become clear below.
The DBRs in this case are (λ/4,λ/4) Al_0.10Ga_0.90As /AlAs multilayers, 27 pairs on the bottom, 23 on top, grown again on a GaAs substrate. A scheme of this structure is displayed in Fig. <ref>(c). The number of DBR periods in both structures is designed to assure an optical Q-factor Q ≥ 10^4 (cavity photon lifetime τ∼ 5 ps). The samples have a thickness gradient so that the energy of the optical cavity mode, and its detuning with respect to the bulk GaAs and MQW gaps, can be varied by displacing the laser spot on the surface. Reflection-type degenerate pump-probe experiments were performed with the laser wavelength tuned to the optical cavity mode (see the scheme in Fig. <ref>(a)). <cit.> Picosecond pulses (∼ 1 ps) from a mode-locked Ti:Sapphire laser, with repetition rate 80 MHz, were split into cross polarized pump (power 20 mW) and probe (1 mW) pulses. Both pulses were focused onto superimposed ∼ 50 μm-diameter spots. To couple the light to the microcavity the probe beam propagates close to the sample normal and is tuned to the high derivative flank of the cavity mode reflectivity dip, while the pump incidence angle is set for resonant condition precisely at the cavity mode. <cit.> The laser wavelength was also set so that the phonon generation and detection would be close to resonance with the direct bandgap of the GaAs “bulk” cavity spacer (E_ GaAs∼ 1.425 eV) or of the QWs (E_ QW∼ 1.526 eV). To accomplish this resonant excitation the temperature was also used as a tuning parameter; the “bulk” cavity experiments were done at room temperature, while the MQW structure was studied at 80 K. Light is thus coupled resonantly with the optical cavity mode and the semiconductor excitonic resonance. Acoustic phonons confined in the same space as the optical cavity mode are selectively generated within the resonator, and are detected through their modulation of the optical cavity mode frequency. <cit.> These pump-probe ultrafast laser experiments are conceptually similar to the ring-down techniques recently exploited in the cavity optomechanics domain, <cit.> but more appropriate to the study of ultra-high frequency vibrations (GHz-THz range). The pulsed laser phonon generation efficiency can be described as: <cit.>g(ω)∝∫κ(z)η_0(ω, z)|F_p(z)|^2dz.Here ω is the phonon frequency, η_0 describes the elastic strain eigenstates, F_p(z) is the spatially dependent perturbation induced by the pump laser, and κ is an effective material-dependent generation parameter that considers different light-matter couplings. All parameters are implicitly assumed to depend on the laser wavelength. We will be interested here in the relative intensity of the vibrational modes, not in their absolute values, so the main physical ingredients are expressed in the functional form of Eq. (<ref>). This equation reflects the spatial overlap of the strain eigenstates with the light-induced stress. The latter can be written (independently of the mechanism involved) as σ_p(z,t)=κ(z)|F_p(z)|^2T(t). <cit.> Here T(t) is the function describing the temporal evolution of the light-induced perturbation. Typically it is a delta-like function for radiation pressure and electrostriction forces, and a step-like function for the photothermal and optoelectronic mechanisms, broadened by the time-delay of the mechanism involved.
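To make the role of the spatial overlap in Eq. (<ref>) concrete, the following minimal Python sketch (our illustration, not code from the original work) evaluates the overlap integral on a discretized λ/2 spacer for two pump-stress profiles: an "unrelaxed" profile proportional to the cavity field intensity, and a "relaxed" flat profile. The strain profiles eta used below are placeholders; in practice they must be replaced by the acoustic eigenmodes calculated for the real multilayer.

# Hedged sketch of Eq. (1): g(omega) ~ integral kappa(z) eta(omega,z) |F_p(z)|^2 dz.
# All profiles below are illustrative placeholders, not the device's actual fields.
import numpy as np

d = 1.0                                # GaAs spacer thickness (normalized units)
z = np.linspace(-d / 2, d / 2, 2001)   # kappa(z) is nonzero only in the GaAs spacer

Fp_unrelaxed = np.cos(np.pi * z / d) ** 2   # ~ |E_c(z)|^2 in a lambda/2 cavity
Fp_relaxed = np.ones_like(z)                # carriers spread uniformly

def g(eta, Fp):
    """Overlap integral of Eq. (1) for one strain eigenmode eta(z)."""
    return np.trapz(eta * Fp, z)

# Placeholder strain profiles standing in for the 20/60/100 GHz breathing modes.
for n, nu in [(1, 20), (3, 60), (5, 100)]:
    eta = np.cos(n * np.pi * z / d)
    print(nu, "GHz:",
          "unrelaxed", round(abs(g(eta, Fp_unrelaxed)), 3),
          "relaxed", round(abs(g(eta, Fp_relaxed)), 3))

The point of the sketch is structural: the relative mode intensities follow entirely from the overlap of eta(omega, z) with the chosen F_p(z), which is why the measured spectra discriminate between the relaxed and unrelaxed scenarios discussed next.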
The spatial distribution of the optical excitation F_p(z) along the growth axis (z) corresponds to the cavity confined electric field E_c(z) for radiation pressure and electrostriction forces, but can be different from it for the other two mechanisms depending on the spatial distribution and dynamics of excited charges and laser-induced temperature variations. As we argue next, this will be a way to identify the main optical force at play in the studied devices. Figures <ref>(a-b) present the case of the “bulk” GaAs cavity. For the experiments reported here the laser was set around 10 meV below the energy of the bulk GaAs gap. Panel (a) in the figure displays the experimental spectrum, compared with calculations assuming that F_p(z) reproduces the instantaneous spatial distribution of the light intensity |E_c(z)|^2 (“unrelaxed”), or that it corresponds to a photoexcited carrier distribution within the GaAs-spacer that is flat along the growth direction (“relaxed”). That is, it assumes that within the time in which the pump laser-induced perturbation is effective (of the order of the half period of the vibrational frequencies involved), the spatial distribution of the photoexcited carriers relaxes, extending their presence throughout the full width of the GaAs cavity spacer material. Three peaks are clearly visible at ∼ 20, ∼ 60 and ∼ 100 GHz, corresponding to the fundamental, second and fourth overtones of the z-polarized cavity confined breathing mechanical modes. <cit.> The associated spatial distribution of the strain fields η_0(ω,z), together with that of the light-induced optoelectronic stress, is shown in panel (b) of the figure. The solid yellow curves in the top panel correspond to the unrelaxed spatial pattern of the optical stress. The red step-like solid line indicates the stress for the relaxed situation. The grey dashed curve shows how the cavity confined field is distributed, but there is no optical force in the dashed regions because photoelastic (electrostrictive) coupling is resonantly enhanced in GaAs, and photons are only absorbed in GaAs, all other materials being fully transparent at the involved wavelengths. That is, κ(z) is assumed to be non-zero only in GaAs. The solid curves in σ_stress thus represent the region where the light-induced stress is finite, either reflecting the excitation field (yellow), or the relaxed situation (red). It is clear in the experiment that the vibrational mode amplitude decreases systematically with increasing frequency of the mode, something that according to the calculations is only compatible with the carriers having rapidly spread, filling the full width of the cavity spacer along the growth direction. The explanation is straightforward considering the overlap integral given by Eq. (<ref>), and the involved spatial distributions depicted in panel (b) of Fig. <ref>. It clearly excludes radiation pressure and electrostriction as the possible driving mechanisms. Based on the above discussion it is also clear from the top curve in Fig. <ref>(a) that the relative weight of the higher frequency 60 GHz mode could be enhanced if the spatial distribution of the optical perturbation F_p(z) could be forced to map out the cavity field intensity |E_c(z)|^2. If, as argued, the main generation mechanism is indeed governed by optoelectronic forces, one could accomplish this task by artificially limiting the diffusion of the photoexcited carriers along the growth axis.
A natural way to do this is through an adequate engineering of the cavity spacer, e.g. by introducing GaAs/AlAs MQWs. This case is shown in Figs. <ref>(c-d). Again panel (c) shows the experiment and calculation of the coherent phonon spectral intensity, with the laser energy set this time around 10 meV below the MQW exciton transition energy. Panel (d) displays the spatial distribution of the photoexcited stress, and that of the strain related to the involved vibrational modes. In this case relaxing the carrier distribution to fill the full width of the QWs (red step-like curves), or maintaining the exact laser excitation pattern (yellow solid curve), makes no observable difference in the calculated spectra. Interestingly, by tailoring the spatial distribution of the photoexcited carriers using quantum wells we are able to confirm the role of optoelectronic stress as the main optomechanical coupling, and furthermore we have pushed the main vibrational frequency of these optomechanical resonators from the fundamental mode at 20 GHz to the second overtone at 60 GHz. These frequencies are one order of magnitude larger than the record frequencies demonstrated in other cavity optomechanics approaches. <cit.> From Fig. <ref>(d) it also becomes clear why additional QWs were introduced in the design at the second and fourth DBR periods, and not at the first and third: because of a change of sign of the strain fields, the latter contribute to the overlap integral in Eq. (<ref>) with the sign reversed with respect to the cavity spacer. The role of photoexcited carriers in the optical forces involved in the described coherent phonon generation with pulsed lasers can furthermore be verified by investigating the laser wavelength dependence of the mechanical mode intensity. This is shown for the 60 GHz mode of the bulk-GaAs cavity in Fig. <ref>. Note that each point in this figure corresponds to an experiment in which the laser energy is varied and the position of the spot in the tapered structure is accordingly changed so that the tuning with the optical cavity mode remains the same. The wavelength limits of the experiment are thus defined by the thickness gradient present in the microcavity structure. A threshold-like behaviour with spectral intensity tending to zero in the transparency region of the structure, and finite intensity only close to and above the gap of GaAs, is observed. This is indicative of real electron-hole pairs being the mediators of the involved optical force. Note that although the spectrum calculated for the relaxed situation of the bulk-GaAs microcavity in Fig. <ref>a shows the same tendency as the experimental result, a quantitative difference between experiment and theory still remains. Namely, the relative intensity of the 60 GHz mode is slightly larger than predicted. The most natural explanation for this remaining difference between the bulk-GaAs cavity experiments and the relaxed calculation is that relaxation throughout the whole thick GaAs spacer of the cavity is not complete within the relevant times involved in the coherent generation process (times typically of the order of half a period, i.e., around 7 ps for the 60 GHz mode). Immediately after excitation, the optical force has to reproduce the spatial pattern of the cavity confined optical field (such force leads to larger intensity for the 60 GHz mode).
This pattern rapidly relaxes towards a uniform distribution along the growth axis within the GaAs material (a distribution in the optical force that provides larger intensity for the 20 GHz mode, as shown in Fig. <ref>). If this relaxation is fast in comparison with the mechanical period, the latter will apply. If the relaxation is not complete, one can expect a behaviour in between the two limits, as observed experimentally. One way to test this hypothesis is to perform experiments for a varying concentration of photoexcited carriers, as shown in Fig. <ref>, where the relative intensity of the 60 and 20 GHz modes as a function of pump laser power is displayed. One can expect that due to electron-electron Coulomb interactions higher carrier densities tend to accelerate the homogenisation of carriers within their available space. This is indeed what is evidenced in the experiments, with a progressive trend towards the relaxed situation, i.e. a flat distribution of the optical stress consistent with the 20 GHz mode being stronger than the 60 GHz one, as the pump laser power is increased. Having demonstrated the central role of optoelectronic forces in the generation of GHz phonons in semiconductor microcavities upon resonant pulsed excitation, we address next the potential of this mechanism for the observation of laser cooling and optically induced self-oscillation. The fundamental concepts describing backaction dynamics in cavity optomechanics are grasped by the delayed force model which, for the optically modified vibrational damping rate Γ_eff, gives <cit.>Γ_eff = Γ_M [ 1 + Q_M (Ω_Mτ/(1+Ω_M^2τ^2)) (∇F/K) ],with Q_M and Γ_M=Ω_M/Q_M the unperturbed mechanical quality factor and damping, respectively. Ω_M and K are respectively the unperturbed mechanical mode frequency and stiffness, while ∇F = ∂ F/∂ u|_u_0 represents the change in the steady-state optical force for a small displacement δ u of the mechanical resonator around the equilibrium position. To lead to backaction, optical forces need to respond with a delay to fluctuations that change the frequency of the optical mode. The simplest delay function to consider is an exponential function h(t) = 1 - exp(-t/τ), with τ the corresponding time-delay. Typically τ would correspond to the cavity photon lifetime τ_cav, but in general, and particularly for photothermal and optoelectronic forces, it can be significantly longer than τ_cav. Γ_eff in the presence of dynamical backaction can thus increase if ∇F > 0, leading to laser cooling, or decrease and eventually attain zero (self-oscillation) if ∇F < 0. The magnitude of this effect is proportional to the gradient of the optical force, which is a function of the deposited optical energy and the involved optomechanical coupling mechanism (that is, how this deposited energy is translated into a mechanical deformation). It is also proportional to a delay tuning factor f_D = Ω_Mτ/(1+Ω_M^2τ^2). f_D has a maximum for Ω_Mτ = 1, which reflects the intuitive fact that the force fluctuations are more effective in inducing vibrations if their time-constant is neither too short, nor too long, but tuned to the vibrational frequency so that τ≈ 1/Ω_M.
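As a quick numerical illustration of the delay tuning factor (our own sketch, using the representative time constants quoted in this paper: a cavity photon lifetime of ∼5 ps, a carrier recombination time of ∼200 ps, and a thermal relaxation of the order of 1 μs), one can tabulate f_D for the 20 GHz fundamental mode:

# f_D = Omega*tau / (1 + (Omega*tau)^2), maximal (= 0.5) at Omega*tau = 1.
import math

def f_D(nu_mechanical_hz, tau_s):
    x = 2 * math.pi * nu_mechanical_hz * tau_s
    return x / (1 + x ** 2)

nu = 20e9  # fundamental breathing mode, 20 GHz
for label, tau in [("radiation pressure / electrostriction (tau_cav ~ 5 ps)", 5e-12),
                   ("optoelectronic (carrier recombination ~ 200 ps)", 200e-12),
                   ("photothermal (thermal relaxation ~ 1 us)", 1e-6)]:
    print(f"{label}: f_D = {f_D(nu, tau):.2g}")
# -> ~0.45, ~0.04 and ~8e-6, i.e. the ordering of mechanisms discussed below.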
Table <ref> presents the different factors intervening in Eq. (<ref>) for the studied DBR microcavities and the considered mechanisms, namely, the corresponding magnitude of the involved optical forces and the delay tuning factor f_D. The optical forces are given per photon (trapped in the cavity or absorbed depending on the mechanism). They have been evaluated using finite-element methods (see Appendix <ref>), <cit.> and considering a micro-pillar with 20 period DBRs of 2 micrometer diameter. As argued above, the main difference between photothermal and optoelectronic forces with respect to radiation pressure and electrostriction is the larger amount of deposited energy per trapped cavity photon. Similarly to what is observed in GaAs microdisks, <cit.> well below the gap (far from the excitonic resonances) both geometric (radiation-pressure) and photoelastic contributions to the optomechanical coupling factor have similar values. Ultra-strong resonant enhancement of the photoelastic coupling has been experimentally demonstrated in bare GaAs/AlAs MQWs. <cit.> In Table <ref> we provide the magnitude of the electrostrictive force per incident photon considering the two situations, far from resonance and at resonance as used in our experiments, assuming that the room temperature values of the photoelastic constant given in Ref. JusserandPRL2015 are valid for a similar MQW embedded in a pillar microcavity. Concerning the photothermal coupling, a quantitative evaluation of its relevance in semiconductor GaAs membranes and microdisks indicates that it can be significant. <cit.> It is also significant in our microcavities, as evidenced in Table <ref>. What excludes it as a potentially relevant optomechanical force is its slow dynamics, which for the high frequency vibrations considered here leads to delay tuning factors f_D ∼ 10^-5 even for the fundamental breathing mode at 20 GHz and considering a relatively fast thermal relaxation τ of the order of a μs. Note that due to the deformation potential interaction in GaAs, optoelectronic forces have the same sign as the thermal stress and contribute to expand the crystal (they have the reverse sign in Si). <cit.> The delay tuning factor is close to its maximum value 0.5 for the impulsive radiation pressure and electrostrictive mechanisms, considering a cavity photon lifetime of a few picoseconds as in our vertical microcavities. It is also reasonably well tuned for the optoelectronic forces and 200 ps recombination times as demonstrated for pillars of a few microns lateral size in Ref. AnguianoPRL. In fact, f_D ∼ 0.04 is precisely the same value as attained for optimized cantilever resonators that have evidenced strong optical cooling and self-oscillation dynamics based on photothermal forces. <cit.> The factor 1/10 decrease of f_D with respect to radiation pressure and electrostriction is more than compensated by the larger efficiency of optoelectronic forces. Note that the physical mechanism at the base of optoelectronic forces is the same as in photoelastic coupling, namely, deformation potential interaction. The qualitative difference is that a photon is scattered in the latter, while it is absorbed with the subsequent creation of real electron-hole pairs in the former. § CONCLUSIONS AND OUTLOOK In conclusion, we have shown that optoelectronic deformation potential interactions are at the origin of the optical forces acting for excitation with pulsed lasers close to the semiconductor gaps in GaAs/AlAs microcavities.
The carrier dynamics following photoexcitation is a determining factor in the emitted coherent acoustic phonon spectrum, and can be tailored using quantum wells to push the vibrational optomechanical frequencies from 20 up to 60 GHz, an order of magnitude larger than the highest standards in cavity optomechanics. The strong potential of optoelectronic forces for the demonstration of dynamical backaction effects in semiconductor microcavities was addressed. This could open the way to ultra-high frequency cavity optomechanics, and through it to quantum measurements and applications at higher temperatures than currently accessible. § FUNDING INFORMATION This work was partially supported by the ANPCyT Grants PICT 2012-1661 and 2013-2047, the Labex NanoSaclay, and the international franco-argentinean laboratory LIFAN (CNRS-CONICET). § DETAILS REGARDING THE CALCULATION OF THE OPTICAL FORCES. We describe the optomechanical coupling of a fundamental optical cavity mode with the fundamental acoustic cavity mode. <cit.> For a normalized displacement mode u⃗(r⃗), we can parametrize the profile as U⃗(r⃗) = u_0 u⃗(r⃗). The effective mass is obtained by the requirement that the potential energy of this parametrized oscillator is equal to the actual potential energy: (1/2) Ω_M^2 ∫ dr⃗ ρ(r⃗) |U⃗(r⃗)|^2 = (1/2) m_eff Ω_M^2 u_0^2. The effective mass m_eff is m_eff = ∫ dr⃗ ρ(r⃗) |U⃗(r⃗)|^2 / u_0^2 ≡ ∫ dr⃗ ρ(r⃗) |u⃗(r⃗)|^2, where ρ(r⃗) is the scalar density distribution field for the structure. As discussed in Ref. Baker, we consider the normalization for the mechanical modes such that the position r⃗_0 (known as the reduction point and chosen so that the displacement is maximum) satisfies |u⃗(r⃗_0)| = 1. In our system, the reduction point lies at the interfaces of the cavity spacer of GaAs. The equation of motion for the cavity breathing mode is modeled as an oscillator described by the displacement u, with an effective mass m_eff, mechanical damping Γ_M, and stiffness constant K = m_eff Ω_M^2, given by <cit.> m_eff d^2u/dt^2 + m_eff Γ_M du/dt + m_eff Ω_M^2 u = F_geo + F_ph + F_th + F_oe. The right-hand side corresponds to a sum over all the optical forces that drive the mechanical system: geometrical, related to radiation pressure (F_geo), <cit.> photoelastic (F_ph), thermoelastic (F_th) and optoelectronic (F_oe). We proceed to the evaluation of the forces. For the electrostrictive and geometrical forces we can obtain the corresponding values computing F = ħ g_om, where g_om is the geometrical or photoelastic optomechanical coupling constant. <cit.> For the calculation of the geometric optomechanical coupling factor g_om^geo, we follow the analysis proposed by Johnson et al., <cit.> implementing a finite-element method to obtain the electric and acoustic fields. We generalize the approach presented in Refs. Ding2010, Baker to compute the effects induced by the multiple interfaces at the DBR's boundaries: g_om^geo = (ω_c/2) ∑_i ∮_A_i (u⃗·n̂_i) ( Δϵ_i |E⃗_∥|^2 − Δ(ϵ_i^-1) |D⃗_⊥|^2 ) dA_i / ∫ ϵ |E⃗|^2 dr⃗, where ω_c is the optical angular frequency at resonance, u⃗ the normalized displacement field, n̂_i the unit vector normal to the interface, Δϵ_i = ϵ_i,left − ϵ_i,right the difference between the dielectric constants of the materials involved, Δ(ϵ_i^-1) = ϵ_i,left^-1 − ϵ_i,right^-1, E⃗_∥ is the component of the electric field parallel to the interface surface, and D⃗_⊥ is the normal component of the displacement field D⃗ = ϵ_0 ϵ_r E⃗. The index i runs over every distinct interface A_i.
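For a planar stack probed at normal incidence the surface integrals collapse to a sum over interface planes, and the field is parallel to every interface so the |D⃗_⊥|^2 term drops out. The following minimal 1D sketch (our simplified illustration; all profiles and permittivities are placeholders, not the solved fields of the device) shows the structure of that evaluation:

# 1D sketch of g_om^geo for a planar stack at normal incidence (D_perp = 0):
# g ~ (omega_c/2) * sum_i u(z_i) * (eps_left - eps_right) * |E(z_i)|^2,
# normalized by integral eps |E|^2 dz.  Profiles below are placeholders.
import numpy as np

z_interfaces = [0.0, 0.5, 1.0]                         # interface positions (toy units)
eps_pairs = [(13.0, 8.9), (8.9, 13.0), (13.0, 8.9)]    # (left, right), ~ GaAs/AlAs

def E(z):   # placeholder cavity field; use the solved optical mode in practice
    return np.cos(np.pi * z)

def u(z):   # placeholder normalized displacement of the breathing mode
    return np.sin(np.pi * z)

zgrid = np.linspace(-2.0, 3.0, 5001)
eps_bg = 11.0                                          # crude average permittivity
norm = np.trapz(eps_bg * E(zgrid) ** 2, zgrid)

omega_c = 1.0                                          # normalized optical frequency
g_geo = (omega_c / 2) * sum(u(zi) * (el - er) * E(zi) ** 2
                            for zi, (el, er) in zip(z_interfaces, eps_pairs)) / norm
print(g_geo)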
The photoelastic contribution to the optomechanical coupling occurs due to the strain-field modulation of the dielectric properties, i.e. Δ(1/ϵ_r)_ij = p_ijkl S_kl, and is given by <cit.> g_om^ph = (ω_c ϵ_0/2) ∫ n^4 E_i p_ijkl S_kl E_j dr⃗ / ∫ ϵ |E⃗|^2 dr⃗, where ϵ = ϵ_0 ϵ_r(r⃗) is the dielectric function. Due to the resonant character <cit.> we consider p_ijkl to be non-vanishing only in the GaAs spacer. Only three different components of this tensor are non-zero due to the cubic symmetry. Since the non-diagonal component of the strain S_rz is non-zero, we also take p_44 = (p_11-p_12)/2 into account. <cit.> Through Raman experiments, B. Jusserand et al. have determined the wavelength and temperature dependence of the GaAs photoelastic constant p_12. <cit.> In contrast, there is no similar reported information for p_11. However, because of the prevalent z-polarized character of the acoustic modes, it turns out that the contribution to g_om^ph is dominated by p_12, which reaches p_12 = 1.526 at 1.42 eV and room temperature. <cit.> For the thermoelastic and optoelectronic effects we used the expression F(t) = ∫_V σ_ij(r,t) S_ij(r,t)/u(t) dr⃗, where S_ij is the strain field tensor related to the mechanical breathing mode u, and σ_ij is the stress field tensor related to the driving mechanism. The stress tensor for thermoelasticity can be determined by <cit.> σ_th = −γ_L C_L ΔT_L(r,t), where C_L is the heat capacity, γ_L is the Grüneisen coefficient <cit.> and ΔT_L(r,t) the lattice temperature variation. Assuming a complete transfer of energy from the electronic to the phononic system, and considering intraband and non-radiative interband relaxation processes for the excited electron-hole pairs, we can determine ΔT_L. <cit.> For the intraband decay channel, the temperature variation can be computed as ΔT_L = N_e (ħω − E_G)/C_L, where ħω is the driving energy and N_e represents the photoexcited carrier population density. The non-radiative interband relaxation processes give a temperature variation that can be accounted for as ΔT_L = N_e E_G/C_L when the excitation is resonant with the bandgap energy E_G. For the optoelectronic contribution we consider the limiting case in which electron-hole pairs are excited resonantly with the bandgap energy, and the dominant term in the electronic self-energy due to optical excitation corresponds to the variation in E_G. This stress can be summarized as follows: <cit.> σ_oe = −d_eh N_e, where d_eh is the deformation potential coefficient (∼9 eV for GaAs <cit.>). For the geometric and photoelastic effects the forces are given per number of photons. <cit.> In order to compare, for the thermoelastic and optoelectronic cases the forces are given per number of excited electron-hole pairs. For this purpose N_e is considered equal to 1/V_eff (the inverse of the GaAs spacer volume where carriers are optically excited). In Table <ref> we present relevant optomechanical parameters and the magnitude of the calculated forces involved. We conclude that the optoelectronic forces have similar magnitude as the thermal forces and are several orders of magnitude (∼10^2-10^3) greater than the photoelastic and radiation-pressure mechanisms. ReviewCOM M. Aspelmeyer, T. J. Kippenberg, and F. Marquardt, "Cavity optomechanics," Rev. Mod. Phys. 86, 1391 (2014). Ligo B. P. Abbott et al., "GW150914: The Advanced LIGO Detectors in the Era of First Discoveries - LIGO Scientific and Virgo Collaborations," Phys. Rev. Lett. 116, 131103 (2016). O'Connell A. D.
O'Connell, M. Hofheinz, M. Ansmann, Radoslaw C. Bialczak, M. Lenander, Erik Lucero, M. Neeley, D. Sank, H. Wang, M. Weides, J. Wenner, John M. Martinis, and A. N. Cleland, "Quantum ground state and single-phonon control of a mechanical resonator," Nature 464, 697 (2010). Teufel J. D. Teufel, T. Donner, Dale Li, J. H. Harlow, M. S. Allman, K. Cicak, A. J. Sirois, J. D. Whittaker, K. W. Lehnert, and R. W. Simmonds, "Sideband cooling of micromechanical motion to the quantum ground state," Nature 475, 359 (2011). Chan J. Chan, T. P. Mayer Alegre, Amir H. Safavi-Naeini, Jeff T. Hill, Alex Krause, Simon Groeblacher, Markus Aspelmeyer, and Oskar Painter, "Laser cooling of a nanomechanical oscillator into its quantum ground state," Nature 478, 89 (2011). Verhagen E. Verhagen, S. Deleglise, S. Weis, A. Schliesser, and T. J. Kippenberg, "Quantum-coherent coupling of a mechanical oscillator to an optical cavity mode," Nature 482, 63 (2012). Cohadon P. F. Cohadon, A. Heidmann, and M. Pinard, "Cooling of a Mirror by Radiation Pressure," Phys. Rev. Lett. 83, 3174 (1999). Lin Qiang Lin, Jessie Rosenberg, Xiaoshun Jiang, Kerry J. Vahala, and Oskar Painter, "Mechanical Oscillation and Cooling Actuated by the Optical Gradient Force," Phys. Rev. Lett. 103, 103601 (2009). Rakich1 P. T. Rakich, P. Davids, and Z. Wang, "Tailoring optical forces in waveguides through radiation pressure and electrostrictive forces," Opt. Express 18, 14439-14453 (2010). Rakich2 P. T. Rakich, C. Reinke, R. Camacho, P. Davids, and Z. Wang, "Giant Enhancement of Stimulated Brillouin Scattering in the Subwavelength Limit," Phys. Rev. X 2, 011008 (2012). FainsteinPRL2013 A. Fainstein, N. D. Lanzillotti-Kimura, B. Jusserand, and B. Perrin, "Strong Optical-Mechanical Coupling in a Vertical GaAs/AlAs Microcavity for Subterahertz Phonons and Near-Infrared Light," Phys. Rev. Lett. 110, 037403 (2013). Rozas_Polariton G. Rozas, A. E. Bruchhausen, A. Fainstein, B. Jusserand, and A. Lemaître, "Polariton path to fully resonant dispersive coupling in optomechanical resonators," Phys. Rev. B 90, 201302(R) (2014). Baker C. Baker, W. Hease, Dac-Trung Nguyen, A. Andronico, S. Ducci, G. Leo, and I. Favero, "Photoelastic coupling in gallium arsenide optomechanical disk resonators," Optics Express 22, 14072 (2014). MetzgerPRL2008 C. Metzger, M. Ludwig, C. Neuenhahn, A. Ortlieb, I. Favero, K. Karrai, and F. Marquardt, "Self-Induced Oscillations in an Optomechanical System Driven by Bolometric Backaction," Phys. Rev. Lett. 101, 133903 (2008). MetzgerPRB2008 C. Metzger, I. Favero, A. Ortlieb, and K. Karrai, "Optical self cooling of a deformable Fabry-Perot cavity in the classical limit," Phys. Rev. B 78, 035309 (2008). Restrepo J. Restrepo, J. Gabelli, C. Ciuti, and I. Favero, "Classical and quantum theory of photothermal cavity cooling of a mechanical oscillator," Comptes Rendus Physique 12, 860 (2011). 17 Hajime Okamoto, Daisuke Ito, Koji Onomitsu, Haruki Sanada, Hideki Gotoh, Tetsuomi Sogawa, and Hiroshi Yamaguchi, "Vibration Amplification, Damping, and Self-Oscillations in Micromechanical Resonators Induced by Optomechanical Coupling through Carrier Excitation," Phys. Rev. Lett. 106, 036801 (2011). 18 Hajime Okamoto, Takayuki Watanabe, Ryuichi Ohta, Koji Onomitsu, Hideki Gotoh, Tetsuomi Sogawa, and Hiroshi Yamaguchi, "Cavity-less on-chip optomechanics using excitonic transitions in semiconductor heterostructures," Nature Comm.
6, 8478 (2015). 19 Hajime Okamoto, Daisuke Ito, Koji Onomitsu, Tetsuomi Sogawa, and Hiroshi Yamaguchi, "Controlling Quality Factor in Micromechanical Resonators by Carrier Excitation," Applied Physics Express 2, 035001 (2009). Usami K. Usami, A. Naesby, T. Bagci, B. Melholt Nielsen, J. Liu, S. Stobbe, P. Lodahl, and E. S. Polzik, "Optical cavity cooling of mechanical modes of a semiconductor nanomembrane," Nature Physics 8, 168 (2012). JusserandPRL2015 B. Jusserand, A. N. Poddubny, A. V. Poshakinskiy, A. Fainstein, and A. Lemaître, "Polariton Resonances for Ultrastrong Coupling Cavity Optomechanics in GaAs/AlAs Multiple Quantum Wells," Phys. Rev. Lett. 115, 267402 (2015). AnguianoPRL S. Anguiano, A. E. Bruchhausen, B. Jusserand, I. Favero, F. R. Lamberti, L. Lanco, I. Sagnes, A. Lemaître, N. D. Lanzillotti-Kimura, P. Senellart, and A. Fainstein, "Micropillar Resonators for Optomechanics in the Extremely High 19–95 GHz Frequency Range," Phys. Rev. Lett. 118, 263901 (2017). Restrepo2 J. Restrepo, C. Ciuti, and I. Favero, "Single-Polariton Optomechanics," Phys. Rev. Lett. 112, 013601 (2014). Kyriienko O. Kyriienko, T. C. H. Liew, and I. A. Shelykh, "Optomechanics with Cavity Polaritons: Dissipative Coupling and Unconventional Bistability," Phys. Rev. Lett. 112, 076402 (2014). RuelloUltrasonics P. Ruello and V. E. Gusev, "Physical mechanisms of coherent acoustic phonons generation by ultrafast laser action," Ultrasonics 56, 21 (2015). Tredicucci A. Tredicucci, Y. Chen, V. Pellegrini, M. Börger, L. Sorba, F. Beltram, and F. Bassani, “Controlled Exciton-Photon Interaction in Semiconductor Bulk Microcavities”, Phys. Rev. Lett. 75, 3906 (1995). AFainsteinBulkGaAs A. Fainstein, B. Jusserand, P. Senellart, J. Bloch, V. Thierry-Mieg, and R. Planel, “Center-of-mass quantized exciton polariton states in bulk-GaAs microcavities”, Phys. Rev. B 62, 8199 (2000). Trigo M. Trigo, A. Bruchhausen, A. Fainstein, B. Jusserand, and V. Thierry-Mieg, “Confinement of Acoustical Vibrations in a Semiconductor Planar Phonon Cavity”, Phys. Rev. Lett. 89, 227402 (2002). Anguiano S. Anguiano, G. Rozas, A. E. Bruchhausen, A. Fainstein, B. Jusserand, P. Senellart, and A. Lemaître, “Spectra of mechanical cavity modes in distributed Bragg reflector based vertical GaAs resonators”, Phys. Rev. B 90, 045314 (2014). PSesin P. Sesin, P. Soubelet, V. Villafañe, A. E. Bruchhausen, B. Jusserand, A. Lemaître, N. D. Lanzillotti-Kimura, and A. Fainstein, “Dynamical optical tuning of the coherent phonon detection sensitivity in DBR-based GaAs optomechanical resonators”, Phys. Rev. B 92, 075307 (2015). Thomsen C. Thomsen, H. T. Grahn, H. J. Maris, and J. Tauc, “Surface generation and detection of phonons by picosecond light pulses”, Phys. Rev. B 34, 4129 (1986). Bartels A. Bartels, T. Dekorsy, H. Kurz, and K. Koehler, “Coherent Zone-Folded Longitudinal Acoustic Phonons in Semiconductor Superlattices: Excitation and Detection,” Phys. Rev. Lett. 82, 1044 (1999). Kimura_coherentcavity N. D. Lanzillotti-Kimura, A. Fainstein, A. Huynh, B. Perrin, B. Jusserand, A. Miard, and A. Lemaitre, “Coherent Generation of Acoustic Phonons in an Optical Microcavity”, Phys. Rev. Lett. 99, 217405 (2007). KimuraTheory N. D. Lanzillotti-Kimura, A. Fainstein, B. Perrin, and B. Jusserand, “Theory of coherent generation and detection of THz acoustic phonons using optical microcavities”, Phys. Rev. B 84, 064307 (2011). Kimura_doubleresonance N. D. Lanzillotti-Kimura, A. Fainstein, B. Perrin, B. Jusserand, L. Largeau, O. Mauguin, and A.
Lemaître, “Enhanced optical generation and detection of acoustic nanowaves in microcavities”, Phys. Rev. B 83, 201103(R) (2011). Flor2012 M. F. Pascual-Winter, A. Fainstein, B. Jusserand, B. Perrin, and A. Lemaître, “Spectral responses of phonon optical generation and detection in superlattices”, Phys. Rev. B 85, 235443 (2012), and references therein. Johnson2002 S. G. Johnson, M. Ibanescu, M. A. Skorobogatiy, O. Weisberg, J. D. Joannopoulos, and Y. Fink, “Perturbation theory for Maxwell's equations with shifting material boundaries”, Phys. Rev. E 65, 066611 (2002). Ding2010 L. Ding, C. Baker, P. Senellart, A. Lemaître, S. Ducci, G. Leo, and I. Favero, “High frequency GaAs nano-optomechanical disk resonator”, Phys. Rev. Lett. 105, 263903 (2010). Florez2016 O. Florez, P. F. Jarschel, Y. A. V. Espinel, C. M. B. Cordeiro, T. P. M. Alegre, G. S. Wiederhecker, and P. Dainese, “Brillouin scattering self-cancellation”, Nature Communications 7, 11759 (2016). Eryigit1996 R. Eryiğit and I. P. Herman, “Lattice properties of strained GaAs, Si, and Ge using a modified bond-charge model”, Physical Review B 53, 7775–7784 (1996).
http://arxiv.org/abs/1709.08987v4
{ "authors": [ "V. Villafañe", "P. Sesin", "P. Soubelet", "S. Anguiano", "A. E. Bruchhausen", "G. Rozas", "C. Gomez Carbonell", "A. Lemaître", "A. Fainstein" ], "categories": [ "physics.optics" ], "primary_category": "physics.optics", "published": "20170926125623", "title": "Optoelectronic forces with quantum wells for cavity optomechanics in GaAs/AlAs semiconductor microcavities" }
http://arxiv.org/abs/1709.09381v1
{ "authors": [ "D. Eeltink", "A. Lemoine", "H. Branger", "O. Kimmoun", "C. Kharif", "J. Carter", "A. Chabchoub", "M. Brunetti", "J. Kasparian" ], "categories": [ "physics.flu-dyn" ], "primary_category": "physics.flu-dyn", "published": "20170927081958", "title": "Spectral up- and downshifting of Akhmediev breathers under wind forcing" }
http://arxiv.org/abs/1709.09012v2
{ "authors": [ "Bin Zhu", "Giacomo Baggio" ], "categories": [ "math.ST", "stat.TH" ], "primary_category": "math.ST", "published": "20170926134144", "title": "On the existence of a solution to a spectral estimation problem \\emph{à la} Byrnes-Georgiou-Lindquist" }
Michael Joseph, Department of Technology and Mathematics, Dalton State College, 650 College Dr., Dalton, GA 30720, USA. E-mail: [email protected]. Mathematics Subject Classification (2010): 05E18. In this paper, we analyze the toggle group on the set of antichains of a poset. Toggle groups, generated by simple involutions, were first introduced by Cameron and Fon-Der-Flaass for order ideals of posets. Recently Striker has motivated the study of toggle groups on general families of subsets, including antichains. This paper expands on this work by examining the relationship between the toggle groups of antichains and order ideals, constructing an explicit isomorphism between the two groups (for a finite poset). We also focus on the rowmotion action on antichains of a poset that has been well-studied in dynamical algebraic combinatorics, describing it as the composition of antichain toggles. We also describe a piecewise-linear analogue of toggling to Stanley's chain polytope. We examine the connections with the piecewise-linear toggling Einstein and Propp introduced for order polytopes and prove that almost all of our results for antichain toggles extend to the piecewise-linear setting. Antichain toggling and rowmotion ================================ § INTRODUCTION In <cit.>, Cameron and Fon-Der-Flaass defined a group (now called the toggle group) consisting of permutations on the set J(P) of order ideals of a poset P. This group is generated by #P simple maps called toggles, each of which corresponds to an element of the poset. The toggle corresponding to e∈ P adds or removes e from the order ideal if the resulting set is still an order ideal, and otherwise does nothing. While each individual toggle has order 2, the composition of toggles can mix up J(P) in a way that is difficult to describe in general. In fact, Cameron and Fon-Der-Flaass proved that on any finite connected poset, the toggle group is either the symmetric or alternating group on J(P). More recently, Striker has noted that there is nothing significant about order ideals of a poset in the definition of the toggle group. For any set E and collection of subsets L ⊆ 2^E, we can define a toggle group corresponding to L. Striker has studied the behavior of the toggle group on various sets of combinatorial interest, including many subsets of posets: chains, antichains, and interval-closed sets <cit.>. In Section <ref>, we analyze the toggle group for the set A(P) of antichains of a finite poset P; this set is in bijection with the set of order ideals of P. Striker proved that, like the classical toggle group on order ideals, the antichain toggle group of a finite connected poset is always either the symmetric or alternating group on A(P). We take this work further and describe the relation between antichain toggles and order ideal toggles (Theorems <ref> and <ref>). In particular, we obtain an explicit isomorphism between the toggle groups of antichains and order ideals of P. Throughout the paper, we also focus on a map first studied by Brouwer and Schrijver <cit.> as a map on antichains. It is named rowmotion in <cit.>, though it has various names in the literature. Rowmotion can be defined as a map on order ideals, order filters, or antichains, as it is the composition of three maps between these sets. For specific posets, rowmotion has been shown to exhibit nice behavior, which is why it has been of significant interest. In general, the order of rowmotion is unpredictable, but for many posets it is known to be small.
Also, rowmotion has been shown to exhibit various phenomena recently introduced under the heading dynamical algebraic combinatorics. One of these is the homomesy phenomenon, introduced by Propp and Roby in <cit.>, in which a statistic on a set (e.g. cardinality) has the same average across every orbit. In fact, one of the earliest examples of homomesy is the conjecture of Panyushev <cit.>, proven by Armstrong, Stump, and Thomas <cit.>, that cardinality is homomesic under antichain rowmotion on positive root posets of Weyl groups. Striker proved a “toggleability” statistic to be homomesic under rowmotion on any finite poset <cit.>. Other homomesic statistics have been discovered on many posets, including on products of chains, minuscule posets, and zigzag posets <cit.>. Other phenomena discovered for rowmotion on various posets include Reiner, Stanton, and White's cyclic sieving phenomenon <cit.> and Dilks, Pechenik, and Striker's resonance phenomenon <cit.>. Cameron and Fon-Der-Flaass showed that rowmotion on J(P) can also be expressed as the composition of every toggle, each used exactly once, in an order specified by a linear extension <cit.>. Having multiple ways to express rowmotion has proven to be fruitful in studying the action on various posets; for this reason rowmotion has received far more attention as a map on order ideals as opposed to antichains. In Subsection <ref>, we show that antichain rowmotion can also be expressed as the composition of every toggle, each used exactly once, in a specified order (Proposition <ref>). This gives another tool for studying rowmotion. In <cit.>, Roby and the author proved results for rowmotion on zigzag posets by first analyzing toggles for independent sets of path graphs (which are the antichains of zigzag posets in disguise) and then translating them back to the language of order ideals. In Subsection <ref>, we discuss antichain toggles on graded posets. As has already been studied for order ideals <cit.>, we can apply antichain toggles for an entire rank at once in a graded poset. We detail the relation between rank toggles for order ideals and antichains. Furthermore, we delve into a natural analogue of gyration for the toggle group of antichains. Gyration is an action defined by Striker <cit.> within the toggle group of a graded poset, named for its connection to Wieland's gyration on alternating sign matrices <cit.>. In Section <ref> we explore a generalization to the piecewise-linear setting. There we define toggles as continuous maps on the chain polytope of a poset, defined by Stanley <cit.>. These correspond to antichain toggles when restricted to the vertices. This follows work of Einstein and Propp <cit.>, who generalized the notion of toggles from order ideals to the order polytope of a poset, also defined by Stanley <cit.>. Surprisingly, many properties of rowmotion on order ideals also extend to the order polytope, and we show here that the same is true between antichain toggles and chain polytope toggles. The main results of this section are Theorems <ref> and <ref>. As one would likely expect, some properties of antichain toggles extend to the chain polytope while others do not. In Subsection <ref>, we give concrete examples as we consider chain polytope toggles on zigzag posets. We demonstrate that while the main homomesy result of the author and Roby on toggling antichains of zigzag posets <cit.> does not extend to the chain polytope, a different homomesy result does extend.
Despite numerous homomesy results in the literature for finite orbits, Theorem <ref> is one of the few known asymptotic generalizations to orbits that are probably not always finite. Our new results are in Subsections <ref>, <ref>, <ref>, and <ref>. Some directions for future research are discussed in Section <ref>. The other sections detail the necessary background material and framework as well as the notation we use, much of which varies between sources. § TOGGLE GROUPS FOR ORDER IDEALS AND ANTICHAINS §.§ Poset terminology and notation We assume the reader is familiar with elementary poset theory. Though we very minimally introduce and define the terms and notation used in the paper, any reader unfamiliar with posets should consult Stanley's text for a thorough introduction <cit.>. A partially ordered set (or poset for short) is a set P together with a binary relation `≤' on P that is reflexive, antisymmetric, and transitive. We use the notation x≥ y to mean y≤ x, x<y to mean “x≤ y and x≠y,” and x>y to mean “x≥ y and x≠y.” Throughout this paper, let P denote a finite poset. For x,y∈ P, we say that x is covered by y (or equivalently y covers x), denoted x⋖ y, if x<y and there does not exist z in P with x<z<y. The notation x⋗ y means that y is covered by x. If either x≤ y or y≤ x, we say x and y are comparable. Otherwise, x and y are incomparable, denoted x∥ y. For a finite poset, all relations can be formed from the cover relations and transitivity. We depict such posets by their Hasse diagrams, where each cover relation x⋖ y is represented by placing y above x and connecting x and y with an edge. §.§ Order ideals, antichains, and rowmotion In this subsection, we discuss an action that was first studied by Brouwer and Schrijver <cit.> and more recently by many others, particularly in <cit.>. This action has several names in the literature; we use the name “rowmotion” due to Striker and Williams <cit.>.
* An order ideal (resp. order filter) of P is a subset I⊆ P such that if x∈ I and y<x (resp. y>x) in P, then y∈ I. We denote the sets of order ideals and order filters of P by J(P) and F(P) respectively.
* An antichain (resp. chain) of P is a subset S⊆ P in which any two elements are incomparable (resp. comparable). The set of antichains of P is denoted A(P).
* For a subset S⊆ P, an element x∈ S is a maximal (resp. minimal) element of S if S does not contain any y>x (resp. y<x).
Complementation is a natural bijection between J(P) and F(P). Let comp(S) denote the complement of a subset S⊆ P. Also, any order ideal (resp. filter) is uniquely determined by its set of maximal (resp. minimal) elements, which is an antichain. Any antichain S of P generates an order ideal Δ(S) := {x∈ P | x≤ y for some y∈ S} whose set of maximal elements is S, and an order filter ∇(S) := {x∈ P | x≥ y for some y∈ S} whose set of minimal elements is S. This gives natural bijections Δ: A(P) → J(P) and ∇: A(P) → F(P). For an antichain S∈ A(P), we call Δ(S) the order ideal generated by S, and ∇(S) the order filter generated by S. We compose the bijections from above to obtain maps from one of J(P), F(P), or A(P) into itself. For an antichain A∈ A(P), define Row_A(A) to be the set of minimal elements of the complement of the order ideal generated by A. For an order ideal I∈ J(P), define Row_J(I) to be the order ideal generated by the minimal elements of the complement of I. For an order filter F∈ F(P), define Row_F(F) to be the order filter generated by the maximal elements of the complement of F.
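Since all three maps are elementary set operations, the definition of Row_A above translates directly into code. The following minimal Python sketch (our illustration; the function names are our own) uses the poset Φ^+(A_3) of the example below, with minimal elements a, b, c, middle elements d, e, and top element f:

# Minimal sketch: a finite poset as cover relations, and Row_A from its definition.
covers = {  # covers[x] = elements covering x; this poset is Phi+(A_3)
    "a": {"d"}, "b": {"d", "e"}, "c": {"e"},
    "d": {"f"}, "e": {"f"}, "f": set(),
}
P = set(covers)

def leq(x, y):
    """x <= y: reflexive-transitive closure of the cover relations."""
    return x == y or any(leq(z, y) for z in covers[x])

def delta(A):
    """Delta(A): the order ideal generated by an antichain A."""
    return {x for x in P if any(leq(x, y) for y in A)}

def minimals(S):
    return {x for x in S if not any(y != x and leq(y, x) for y in S)}

def row_A(A):
    """Row_A(A): minimal elements of the complement of Delta(A)."""
    return minimals(P - delta(A))

print(sorted(row_A({"a", "e"})))   # ['d'], as in the example below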
These maps can each be expressed as the composition of three maps as follows.
Row_A : A(P) --Δ--> J(P) --comp--> F(P) --∇^-1--> A(P)
Row_J : J(P) --comp--> F(P) --∇^-1--> A(P) --Δ--> J(P)
Row_F : F(P) --comp--> J(P) --Δ^-1--> A(P) --∇--> F(P)
These bijections are all called rowmotion. We will focus primarily on Row_A and Row_J (since Row_F: F(P) → F(P) is equivalent to Row_J for the dual poset that swaps the `≤' and `≥' relations). There is a correspondence between the orbits under these two maps; each Row_A-orbit O has a corresponding Row_J-orbit consisting of the order ideals generated by the antichains in O, and vice versa. This relation is depicted by a commutative diagram in which Δ intertwines the two maps: Δ ∘ Row_A = Row_J ∘ Δ. Consider the following poset P (which is the positive root poset Φ^+(A_3)): its Hasse diagram has three minimal elements a, b, c, two middle elements d (covering a and b) and e (covering b and c), and a top element f covering d and e. [Figure: Hasse diagram of Φ^+(A_3).] Below we show an example of each of Row_A acting on an antichain and Row_J acting on an order ideal as their respective three-step processes. In each, hollow circles represent elements of P not in the antichain, order ideal, or order filter. Notice that the order ideal we start with is generated by the antichain we begin with. After applying rowmotion to both, we get the order ideal generated by the antichain we obtain. [Figure: Row_A sends the antichain {a,e} through Δ to the order ideal {a,b,c,e}, through comp to the order filter {d,f}, and through ∇^-1 to the antichain {d}; Row_J sends the order ideal {a,b,c,e} through comp to {d,f}, through ∇^-1 to {d}, and through Δ to the order ideal {a,b,d}.]
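The orbit correspondence can be checked mechanically. The short sketch below (again our own illustration, built on the same helpers as the previous sketch) verifies Δ∘Row_A = Row_J∘Δ over all 14 antichains of Φ^+(A_3):

# Check that the diagram commutes: Delta(Row_A(A)) == Row_J(Delta(A)).
from itertools import combinations

covers = {"a": {"d"}, "b": {"d", "e"}, "c": {"e"},
          "d": {"f"}, "e": {"f"}, "f": set()}
P = set(covers)

def leq(x, y):
    return x == y or any(leq(z, y) for z in covers[x])

def delta(A):
    return frozenset(x for x in P if any(leq(x, y) for y in A))

def minimals(S):
    return frozenset(x for x in S if not any(y != x and leq(y, x) for y in S))

row_A = lambda A: minimals(P - delta(A))          # nabla^{-1} o comp o Delta
row_J = lambda I: delta(minimals(P - I))          # Delta o nabla^{-1} o comp

antichains = [frozenset(A) for r in range(len(P) + 1)
              for A in combinations(sorted(P), r)
              if all(x == y or not (leq(x, y) or leq(y, x))
                     for x in A for y in A)]
assert len(antichains) == 14                      # the Catalan number C_4
assert all(delta(row_A(A)) == row_J(delta(A)) for A in antichains)
print("orbit correspondence verified on Phi+(A_3)")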
§.§ Toggle group of J(P) Cameron and Fon-Der-Flaass showed that rowmotion on J(P) can be expressed in terms of basic involutions called toggles. Before discussing our new results regarding antichain toggles in the later subsections, we cover some important well-known results about toggling order ideals. Let e∈ P. Then the order ideal toggle corresponding to e is the map t_e: J(P) → J(P) defined by
t_e(I) =
  I∪{e}  if e∉I and I∪{e}∈ J(P),
  I∖{e}  if e∈ I and I∖{e}∈ J(P),
  I      otherwise.
We use the convention that a composition f_1 f_2 ⋯ f_k of maps (such as toggles) is performed right to left. Let Tog_J(P) denote the toggle group of J(P), which is the group generated by the toggles {t_e | e∈ P}. Informally, t_e adds or removes e from the given order ideal I provided the result is also an order ideal, and otherwise does nothing. The following is clearly an equivalent description of the toggle t_e, so we include it without proof. Let I∈ J(P) and e∈ P. Then
t_e(I) =
  I∪{e}  if e is a minimal element of P∖I,
  I∖{e}  if e is a maximal element of I,
  I      otherwise.
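The equivalent description gives an immediate implementation. A minimal sketch (ours, with the same poset encoding as before):

# t_e per the proposition: add e if e is minimal in P - I; remove e if maximal in I.
covers = {"a": {"d"}, "b": {"d", "e"}, "c": {"e"},
          "d": {"f"}, "e": {"f"}, "f": set()}
P = set(covers)

def leq(x, y):
    return x == y or any(leq(z, y) for z in covers[x])

def toggle_ideal(e, I):
    if e not in I and all(x in I for x in P if x != e and leq(x, e)):
        return I | {e}                  # e was a minimal element of P - I
    if e in I and not any(y != e and leq(e, y) for y in I):
        return I - {e}                  # e was a maximal element of I
    return I

I = {"a", "b", "c", "e"}
print(sorted(toggle_ideal("e", I)))     # ['a', 'b', 'c']: e was maximal in I
print(sorted(toggle_ideal("f", I)))     # unchanged: d < f is missing from I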
Each toggle t_x is an involution (i.e., t_x^2 is the identity). Two order ideal toggles t_x, t_y commute if and only if neither x nor y covers the other. Let x,y∈ P. It is clear from the definition of t_x that for any I∈ J(P), applying t_x twice gives I. Thus, t_x^2 is the identity. To show when t_x, t_y commute, we consider four cases. Case 1: x=y. Then t_xt_y=t_xt_x=t_yt_x. Case 2: x∥ y. Then whether or not one of x or y can be in an order ideal has no effect on whether the other can, so t_xt_y=t_yt_x. Case 3: x<y or y<x, but neither one covers the other. Without loss of generality, assume x<y. Since y does not cover x, there exists z∈ P such that x<z<y. Then z must be in any order ideal containing y, but cannot be in any order ideal that does not contain x. Thus, we cannot change whether or not x is in an order ideal and then do the same for y, or vice versa, without changing the status of z. So t_xt_y=t_yt_x. Case 4: either x⋖ y or y⋖ x. Without loss of generality, assume x⋖ y. Let I={z∈ P | z<y}, which is an order ideal that has x as a maximal element. Then t_x t_y (I)=I∪{y} and t_y t_x (I)=I∖{x}, so t_xt_y≠t_yt_x. A sequence (x_1,x_2,…,x_n) containing all of the elements of a finite poset P exactly once is called a linear extension of P if it is order-preserving, that is, whenever x_i<x_j in P then i<j. Let (x_1,x_2,…,x_n) be any linear extension of P. Then Row_J = t_x_1 t_x_2⋯ t_x_n. This proposition describes that (for finite posets) Row_J is the product of every toggle exactly once, in an order determined by a linear extension. This has been particularly useful in examining rowmotion on certain posets due to the simple nature in which individual toggles act. Additionally, for the large class of “rowed-and-columned” posets, Striker and Williams prove that Row_J is conjugate in Tog_J(P) to an action called “promotion,” named for its connection with Schützenberger's promotion on linear extensions of posets <cit.>. In fact, they show Row_J is conjugate to a large family of generalized rowmotion and promotion maps defined in terms of rows and columns. As a result, the orbit structure and the homomesic property of certain types of statistics are preserved between promotion and rowmotion, so one can often use either rowmotion or promotion to study the other. This tactic has been utilized by, e.g., Propp and Roby <cit.> and Vorland <cit.> in studying products of chain posets.
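This proposition is easy to confirm computationally on the running example. The sketch below (ours) applies the toggles right to left along the linear extension (a,b,c,d,e,f) — so t_f acts first — and compares with the three-step Row_J on every order ideal:

# Verify Row_J = t_a t_b t_c t_d t_e t_f (rightmost toggle first) on Phi+(A_3).
from itertools import combinations

covers = {"a": {"d"}, "b": {"d", "e"}, "c": {"e"},
          "d": {"f"}, "e": {"f"}, "f": set()}
P = set(covers)

def leq(x, y):
    return x == y or any(leq(z, y) for z in covers[x])

def toggle_ideal(e, I):
    if e not in I and all(x in I for x in P if x != e and leq(x, e)):
        return I | {e}
    if e in I and not any(y != e and leq(e, y) for y in I):
        return I - {e}
    return I

def row_J(I):  # order ideal generated by the minimal elements of P - I
    mins = {x for x in P - I if not any(y != x and leq(y, x) for y in P - I)}
    return {x for x in P if any(leq(x, m) for m in mins)}

ideals = [set(S) for r in range(len(P) + 1) for S in combinations(sorted(P), r)
          if all(x in S for y in S for x in P if leq(x, y))]
for I in ideals:
    J = I
    for e in "fedcba":           # apply t_f, t_e, t_d, t_c, t_b, t_a in turn
        J = toggle_ideal(e, J)
    assert J == row_J(I)
print(f"checked {len(ideals)} order ideals")   # 14, matching the antichains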
For the poset of Example <ref>, with the elements labeled a through f as there, (a,b,c,d,e,f) gives a linear extension. We show the effect of applying t_at_bt_ct_dt_et_f to the order ideal considered in Example <ref>; in the omitted figure, the element whose toggle is applied next is indicated in red at each step. Notice that the outcome is the same order ideal we obtained by the three-step process, demonstrating Proposition <ref>. [Figure: starting from the order ideal {a,b,c,e}, applying t_f leaves it unchanged, t_e removes e, t_d adds d, t_c removes c, and t_b and t_a leave it unchanged, ending at the order ideal {a,b,d}.]
0.1); [thick] (-0.9, 0.9) – (-0.1, 0.1); [thick] (0.9, 0.9) – (0.1, 0.1); [thick] (1.1, 0.9) – (1.9, 0.1); (0,2) circle [radius=0.2]; [fill] (-1,1) circle [radius=0.2]; (1,1) circle [radius=0.2]; [fill] (-2,0) circle [radius=0.2]; [fill] (0,0) circle [radius=0.2]; (2,0) circle [radius=0.2]; [above] at (0,2) f; [left] at (-1,1) d; [right] at (1,1) e; [below] at (-2,0) a; [below] at (0,0) b; [below] at (2,0) c;§.§ Toggle group of (P) While toggling order ideals has received by far the most attention over the years since Cameron and Fon-Der-Flaass introduced the concept in 1995, toggles can be defined for any family of subsets of a given set.In <cit.>, Striker defines toggle groups for general families of subsets.For a set E and set of “allowed subsets” ⊆ 2^E, each e∈ E has a corresponding toggle map which adds or removes e from any set inprovided the result is still inand otherwise does nothing.In _(P), the set E is the poset P, while the setof allowed subsets is (P).Homomesy and other nice behavior have been discovered for actions in generalized toggle groups for noncrossing partitions <cit.> as well as for subsets of an n-element set whose cardinality ranges between r and n-r <cit.>. Also, Roby and the author prove results about rowmotion on zigzag posets by analyzing toggles on independent sets of path graphs <cit.>, which are the same as antichains of zigzag posets; see Remark <ref>.In this section, we examine the antichain toggle group _(P) where the set of allowed subsets is (P).Cameron and Fon-Der-Flaass proved that for a finite connected poset P (i.e., P has a connected Hasse diagram), _(P) is either the symmetric group ß_(P) or alternating group å_(P) on (P) <cit.>.Striker has analyzed antichain toggle groups in <cit.>, where it is likewise proven that for a finite connected poset P, _(P) is either the symmetric group ß_(P) or alternating group å_(P) on (P).We expand on this work and will construct an explicit isomorphism between _(P) and _(P), ruling out the possibility that for a given poset, one of these groups is a symmetric group with the other being an alternating group.The other key result of this section is Proposition <ref> that, for a finite poset P, _ is the product of every antichain toggle, each used exactly once in an order given by a linear extension (but the opposite order from that of _).This provides another tool for analyzing rowmotion. Although Brouwer and Schrijver originally considered rowmotion as a map on antichains, rowmotion on order ideals has received far more attention due to its known description as a product of toggles. Let e∈ P.Then the antichain toggle corresponding to e is the map τ_e: (P)(P) defined byτ_e(A)={[ A∪{e} if e∉A and A∪{e}∈(P),;A{e}if e∈ A,; Aotherwise. ].Let _(P) denote the toggle group of (P) generated by the toggles {τ_e |e∈ P}. We use τ_e for antichain toggles to distinguish them from the order ideal toggles t_e. Unlike for order ideals, removing an element from an antichain always results in an antichain.This is why we have simplified the definition above so the second case is slightly different from that of t_e.For any e, the toggle τ_e is clearly an involution (as is any toggle defined using Striker's definition), using the same reasoning as for order ideal toggles. Two antichain toggles τ_x,τ_y commute if and only if x=y or x∥ y. Note from Propositions <ref> and <ref> that antichain toggles commute less often than order ideal toggles. Let x,y∈ P.Case 1: x=y. 
Then τ_xτ_y=τ_xτ_x=τ_yτ_x.Case 2: x∥ y.Then whether x is in an antichain has no effect on whether y can be in that antichain and vice versa. So τ_x τ_y = τ_y τ_x.Case 3: x<y or y<x.Then Ø, {x}, {y} are all antichains of P, but not {x,y}.In this scenario t_xt_y(Ø)={y} but t_yt_x(Ø)={x}.For e∈ P, let e_1,…,e_k be the elements covered by e.Define t_e^*∈_(P) as t_e^* := τ_e_1τ_e_2⋯τ_e_kτ_e τ_e_1τ_e_2⋯τ_e_k.(If e is a minimal element of P, then k=0 and so t_e^*=τ_e.) Due to incomparability, all of the toggles τ_e_1,τ_e_2,…, τ_e_k commute with each other (but not with τ_e).Therefore, the definition of t_e^* is well-defined and does not depend on the order of e_1,e_2,…,e_k.For this reason, the toggles τ_e_1,τ_e_2,…, τ_e_k can be applied “simultaneously,” so t_e^* is the conjugate of τ_e by the product of all antichain toggles for the elements covered by e.As stated formally in the following theorem, applying t_e^* to an antichain A describes the effect that t_e has on the order ideal (A) generated by A. Let I∈(P), e∈ P, and A=^-1(I) be the antichain of maximal elements of I.Then the antichain ^-1(t_e(I)) of maximal elements of t_e(I) is t_e^*(A).That is, the following diagram commutes. at (0,1.8) (P); at (0,0) (P); at (3.25,1.8) (P); at (3.25,0) (P); [semithick, ->] (0,1.3) – (0,0.5); [left] at (0,0.9) ; [semithick, ->] (0.7,0) – (2.5,0); [below] at (1.5,0) t_e; [semithick, ->] (0.7,1.8) – (2.5,1.8); [above] at (1.5,1.8) t_e^*; [semithick, ->] (3.25,1.3) – (3.25,0.5); [right] at (3.25,0.9) ;We include a proof of Theorem <ref> now, but we will reproveit later as a restriction of Theorem <ref>.We have four cases to consider.The four examples in Figure <ref> correspond in order to the cases in this proof.Case 1: e∈ I and I{e}∉(P). Then t_e(I)=I so we wish to show that t_e^*(A)=A.In this case e is not a maximal element of I so there exists a maximal element y∈ I for which e<y.Then each e_i<y for 1≤ i≤ k.Also y∈ A so each of e_1,…,e_k,e is not in A and cannot be toggled in.So t_e^*(A)=A.Case 2: e∈ I and I{e}∈(P). Then e is a maximal element of I so e∈ A but no e_i covered by e is.Clearly e is not a maximal element of t_e(I)=I{e}.Any e_i⋖ e is a maximal element of t_e(I) if and only if the only x > e_i in I is x=e.Other than these, the maximal elements of t_e(I) and I are the same.Applying τ_e_1⋯τ_e_k to A does nothing because e∈ A.Then applying τ_e to A removes e from A.Then applying τ_e_1⋯τ_e_k to A{e} adds in any e_i for which no y > e_i is in A{e}.These are precisely the elements e_i for which the only x > e_i in I is x=e.Thus, t_e^*(A) is the set of maximal elements of t_e(I).Case 3: e∉I and I∪{e}∉(P).Then t_e(I)=I so we wish to show that t_e^*(A)=A.In this case there exists some e_i⋖ e not in I, so in particular this case cannot happen when e is a minimal element of P. Fix such an e_i.Then e_i∉A.If there were y>e_i in A, then y would be in I and thus e_i would be in I, a contradiction.So no element greater than e_i is in A.Then when applying τ_e_1⋯τ_e_k to A, either e_i gets toggled into the antichain or there is some x< e_i<e that is in A.In either scenario, there exists an element less than e in τ_e_1⋯τ_e_k(A).So applying τ_e leaves τ_e_1⋯τ_e_k(A) unchanged.Then applying τ_e_1⋯τ_e_k again undoes the effect of applying τ_e_1⋯τ_e_k in the first place.Thus, t_e^*(A)=A.Case 4: e∉I and I∪{e}∈(P). Then every e_i is in I.Each e_i is either a maximal element of I or less than some y≠e in I.Also any element of A comparable with e must be one of e_1,…,e_k. 
So e is a maximal element of t_e(I), while none of e_1,…,e_k are.Other than these, the maximal elements of I and t_e(I) are identical. Applying τ_e_1⋯τ_e_k to A removes any e_i that is in A.However, it does not insert any e_i that is not in A because such an element is less than some y∈ A.Thus, τ_e_1⋯τ_e_k(A) contains no element that is comparable with e, so applying τ_e adds e to the antichain.Since e is in τ_eτ_e_1⋯τ_e_k(A), none of e_1,…,e_k can be added to it.So t_e^*(A)=A∪{e}{e_1,…,e_k}, exactly the set of maximal elements of t_e(I).Let S⊆ P.Let η_S:=t_x_1t_x_2⋯ t_x_kwhere (x_1,x_2,…,x_k) is a linear extension of the subposet {x∈ P | x<y,y∈ S}.(In the special case where every element of S is minimal in P, η_S is the identity.)For e∈ P, we write η_e:=η_{e}.Any two linear extensions of a poset differ by a sequence of swaps between adjacent incomparable elements <cit.>.So η_S is well-defined (since Proposition <ref> shows that such swaps do not change the product t_x_1t_x_2⋯ t_x_k).For e∈ P, define τ_e^*∈_(P) as τ_e^* := η_e t_e η_e^-1.Let A∈(P), e∈ P, and I=(A) be the order ideal generated by A.Then the order ideal (τ_e(A)) generated by τ_e(A) is τ_e^*(I).That is, the following diagram commutes. at (0,1.8) (P); at (0,0) (P); at (3.25,1.8) (P); at (3.25,0) (P); [semithick, ->] (0,1.3) – (0,0.5); [left] at (0,0.9) ; [semithick, ->] (0.7,0) – (2.5,0); [below] at (1.5,0) τ_e^*; [semithick, ->] (0.7,1.8) – (2.5,1.8); [above] at (1.5,1.8) τ_e; [semithick, ->] (3.25,1.3) – (3.25,0.5); [right] at (3.25,0.9) ; In the product of two chains poset P=[3]×[2] given by [scale=0.567] [thick] (-0.1, 1.9) – (-0.9, 1.1); [thick] (0.1, 1.9) – (0.9, 1.1); [thick] (-1.1, 0.9) – (-1.9, 0.1); [thick] (-0.9, 0.9) – (-0.1, 0.1); [thick] (0.9, 0.9) – (0.1, 0.1); [thick] (-1.1, -0.9) – (-1.9, -0.1); [thick] (-0.9, -0.9) – (-0.1, -0.1); [fill] (0,2) circle [radius=0.2]; [fill] (-1,1) circle [radius=0.2]; [fill] (1,1) circle [radius=0.2]; [fill] (-2,0) circle [radius=0.2]; [fill] (0,0) circle [radius=0.2]; [fill] (-1,-1) circle [radius=0.2]; [right] at (-0.7,-1) (1,1); [right] at (0.3,0) (2,1); [right] at (1.3,1) (3,1); [left] at (-2.3,0) (1,2); [left] at (-1.3,1) (2,2); [left] at (-0.3,2) (3,2); at (-6,0.75) P=;we have η_(2,2)=t_(1,1)t_(1,2)t_(2,1), so τ_(2,2)^*=t_(1,1)t_(1,2)t_(2,1)t_(2,2)t_(2,1)t_(1,2)t_(1,1).An illustration of Theorem <ref> for an antichain of this poset is below.[scale=0.317][thick] (-0.1, 1.9) – (-0.9, 1.1); [thick] (0.1, 1.9) – (0.9, 1.1); [thick] (-1.1, 0.9) – (-1.9, 0.1); [thick] (-0.9, 0.9) – (-0.1, 0.1); [thick] (0.9, 0.9) – (0.1, 0.1); [thick] (-1.1, -0.9) – (-1.9, -0.1); [thick] (-0.9, -0.9) – (-0.1, -0.1); (0,2) circle [radius=0.2]; (-1,1) circle [radius=0.2]; [fill] (1,1) circle [radius=0.2]; (-2,0) circle [radius=0.2]; [fill] (0,0) circle [radius=0.2]; [red,fill] (-1,-1) circle [radius=0.2]; [thick, ->] (2,0.4) – (3.5,0.4); [below] at (2.75,0.4) t_(1,1);[shift=(7,0)] [thick] (-0.1, 1.9) – (-0.9, 1.1); [thick] (0.1, 1.9) – (0.9, 1.1); [thick] (-1.1, 0.9) – (-1.9, 0.1); [thick] (-0.9, 0.9) – (-0.1, 0.1); [thick] (0.9, 0.9) – (0.1, 0.1); [thick] (-1.1, -0.9) – (-1.9, -0.1); [thick] (-0.9, -0.9) – (-0.1, -0.1); (0,2) circle [radius=0.2]; (-1,1) circle [radius=0.2]; [fill] (1,1) circle [radius=0.2]; [red] (-2,0) circle [radius=0.2]; [fill] (0,0) circle [radius=0.2]; [fill] (-1,-1) circle [radius=0.2]; [thick, ->] (2,0.4) – (3.5,0.4); [below] at (2.75,0.4) t_(1,2);[shift=(14,0)] [thick] (-0.1, 1.9) – (-0.9, 1.1); [thick] (0.1, 1.9) – (0.9, 1.1); [thick] (-1.1, 0.9) – (-1.9, 0.1); [thick] 
(-0.9, 0.9) – (-0.1, 0.1); [thick] (0.9, 0.9) – (0.1, 0.1); [thick] (-1.1, -0.9) – (-1.9, -0.1); [thick] (-0.9, -0.9) – (-0.1, -0.1); (0,2) circle [radius=0.2]; (-1,1) circle [radius=0.2]; [fill] (1,1) circle [radius=0.2]; [fill] (-2,0) circle [radius=0.2]; [red,fill] (0,0) circle [radius=0.2]; [fill] (-1,-1) circle [radius=0.2]; [thick, ->] (2,0.4) – (3.5,0.4); [below] at (2.75,0.4) t_(2,1);[shift=(21,0)] [thick] (-0.1, 1.9) – (-0.9, 1.1); [thick] (0.1, 1.9) – (0.9, 1.1); [thick] (-1.1, 0.9) – (-1.9, 0.1); [thick] (-0.9, 0.9) – (-0.1, 0.1); [thick] (0.9, 0.9) – (0.1, 0.1); [thick] (-1.1, -0.9) – (-1.9, -0.1); [thick] (-0.9, -0.9) – (-0.1, -0.1); (0,2) circle [radius=0.2]; [red] (-1,1) circle [radius=0.2]; [fill] (1,1) circle [radius=0.2]; [fill] (-2,0) circle [radius=0.2]; [fill] (0,0) circle [radius=0.2]; [fill] (-1,-1) circle [radius=0.2]; [thick, ->] (2,0.4) – (3.5,0.4); [below] at (2.75,0.4) t_(2,2);[shift=(28,0)] [thick] (-0.1, 1.9) – (-0.9, 1.1); [thick] (0.1, 1.9) – (0.9, 1.1); [thick] (-1.1, 0.9) – (-1.9, 0.1); [thick] (-0.9, 0.9) – (-0.1, 0.1); [thick] (0.9, 0.9) – (0.1, 0.1); [thick] (-1.1, -0.9) – (-1.9, -0.1); [thick] (-0.9, -0.9) – (-0.1, -0.1); (0,2) circle [radius=0.2]; [fill] (-1,1) circle [radius=0.2]; [fill] (1,1) circle [radius=0.2]; [fill] (-2,0) circle [radius=0.2]; [red,fill] (0,0) circle [radius=0.2]; [fill] (-1,-1) circle [radius=0.2]; [thick, ->] (2,0.4) – (3.5,0.4); [below] at (2.75,0.4) t_(2,1);[shift=(35,0)] [thick] (-0.1, 1.9) – (-0.9, 1.1); [thick] (0.1, 1.9) – (0.9, 1.1); [thick] (-1.1, 0.9) – (-1.9, 0.1); [thick] (-0.9, 0.9) – (-0.1, 0.1); [thick] (0.9, 0.9) – (0.1, 0.1); [thick] (-1.1, -0.9) – (-1.9, -0.1); [thick] (-0.9, -0.9) – (-0.1, -0.1); (0,2) circle [radius=0.2]; [fill] (-1,1) circle [radius=0.2]; [fill] (1,1) circle [radius=0.2]; [red,fill] (-2,0) circle [radius=0.2]; [fill] (0,0) circle [radius=0.2]; [fill] (-1,-1) circle [radius=0.2]; [thick, ->] (2,0.4) – (3.5,0.4); [below] at (2.75,0.4) t_(1,2);[shift=(42,0)] [thick] (-0.1, 1.9) – (-0.9, 1.1); [thick] (0.1, 1.9) – (0.9, 1.1); [thick] (-1.1, 0.9) – (-1.9, 0.1); [thick] (-0.9, 0.9) – (-0.1, 0.1); [thick] (0.9, 0.9) – (0.1, 0.1); [thick] (-1.1, -0.9) – (-1.9, -0.1); [thick] (-0.9, -0.9) – (-0.1, -0.1); (0,2) circle [radius=0.2]; [fill] (-1,1) circle [radius=0.2]; [fill] (1,1) circle [radius=0.2]; [fill] (-2,0) circle [radius=0.2]; [fill] (0,0) circle [radius=0.2]; [red,fill] (-1,-1) circle [radius=0.2]; [thick, ->] (2,0.4) – (3.5,0.4); [below] at (2.75,0.4) t_(1,1);[shift=(49,0)] [thick] (-0.1, 1.9) – (-0.9, 1.1); [thick] (0.1, 1.9) – (0.9, 1.1); [thick] (-1.1, 0.9) – (-1.9, 0.1); [thick] (-0.9, 0.9) – (-0.1, 0.1); [thick] (0.9, 0.9) – (0.1, 0.1); [thick] (-1.1, -0.9) – (-1.9, -0.1); [thick] (-0.9, -0.9) – (-0.1, -0.1); (0,2) circle [radius=0.2]; [fill] (-1,1) circle [radius=0.2]; [fill] (1,1) circle [radius=0.2]; [fill] (-2,0) circle [radius=0.2]; [fill] (0,0) circle [radius=0.2]; [fill] (-1,-1) circle [radius=0.2];[thick, ->] (-0.5,4.5) – (-0.5,3); [thick, ->] (48.5,4.5) – (48.5,3); [left] at (-0.5,3.75) ; [right] at (49,3.75) ; [thick, ->] (2,6.9) – (45.5,6.9); [below] at (23.75,6.9) τ_(2,2); [shift=(0,6.5)] [thick] (-0.1, 1.9) – (-0.9, 1.1); [thick] (0.1, 1.9) – (0.9, 1.1); [thick] (-1.1, 0.9) – (-1.9, 0.1); [thick] (-0.9, 0.9) – (-0.1, 0.1); [thick] (0.9, 0.9) – (0.1, 0.1); [thick] (-1.1, -0.9) – (-1.9, -0.1); [thick] (-0.9, -0.9) – (-0.1, -0.1); (0,2) circle [radius=0.2]; [red] (-1,1) circle [radius=0.2]; [fill] (1,1) circle [radius=0.2]; (-2,0) circle [radius=0.2]; (0,0) circle 
[radius=0.2]; (-1,-1) circle [radius=0.2];[shift=(49,6.5)] [thick] (-0.1, 1.9) – (-0.9, 1.1); [thick] (0.1, 1.9) – (0.9, 1.1); [thick] (-1.1, 0.9) – (-1.9, 0.1); [thick] (-0.9, 0.9) – (-0.1, 0.1); [thick] (0.9, 0.9) – (0.1, 0.1); [thick] (-1.1, -0.9) – (-1.9, -0.1); [thick] (-0.9, -0.9) – (-0.1, -0.1); (0,2) circle [radius=0.2]; [red,fill] (-1,1) circle [radius=0.2]; [fill] (1,1) circle [radius=0.2]; (-2,0) circle [radius=0.2]; (0,0) circle [radius=0.2]; (-1,-1) circle [radius=0.2]; To prove Theorem <ref>, we first need a lemma. The proof of Lemma <ref> and Theorem <ref> will both be purely at the group-theoretic level, using properties of _(P) and _(P) proved earlier in the paper, and not the definitions of toggles themselves. This will allow us to use the same proof in the generalization to the piecewise-linear setting after proving the analogue of Theorem <ref> and commutativity of toggles.This will be Theorem <ref>. Let e_1,…,e_k be pairwise incomparable elements of P.Then for 1≤ i≤ k,τ_e_1^*τ_e_2^*…τ_e_i^* = η_{e_1,…,e_i} t_e_1t_e_2… t_e_iη_{e_1,…,e_i}^-1. This claim is true by definition for i=1 and we proceed inductively.Suppose it is true for some given i≤ k-1.Let * x_1,…,x_a be the elements that are both less than e_i+1 and less than at least one of e_1,…,e_i,* y_1,…,y_b be the elements that are less than at least one of e_1,…,e_i but not less than e_i+1,* z_1,…,z_c be the elements that are less than e_i+1 but not less than any of e_1,…,e_i.Clearly, it is possible for one or more of the sets {x_1,…,x_a}, {y_1,…,y_b}, and {z_1,…,z_c} to be empty.For example, if b=0, then the product T_y_1⋯ t_y_b is just the identity.Note than none of y_1,…,y_b are less than any of x_1,…,x_a because any element less than some x_j is automatically less than e_i+1.By similar reasoning, none of z_1,…,z_c are less than any of x_1,…,x_a either.Also any pair y_m,z_n are incomparable, because z_n ≤ y_m would imply y_m is less than some e_j, while y_m ≤ z_n would imply z_n<e_i+1.By transitivity and the pairwise incomparability of e_1,…,e_i+1, each y_m is incomparable with e_i+1, and each z_m is incomparable with any of e_1,…,e_i.We will pick the indices so that (x_1,…, x_a), (y_1,…,y_b), and (z_1,…,z_c) are linear extensions of the subposets {x_1,…,x_a}, {y_1,…,y_b}, and {z_1,…,z_c}, respectively.Then we have the following * (x_1,…,x_a,y_1,…,y_b) is a linear extension of {p∈ P |p<q,q∈{e_1,…, e_i}}. * This yields η_{e_1,…,e_i}=t_x_1⋯ t_x_at_y_1⋯ t_y_b. * (x_1,…,x_a,z_1,…,z_c) is a linear extension of {p∈ P|p<e_i+1}. * This yields η_e_i+1=t_x_1⋯ t_x_at_z_1⋯ t_z_c. * (x_1,…,x_a,y_1,…,y_b,z_1,…,z_c) and(x_1,…,x_a,z_1,…,z_c,y_1,…,y_b) are both linear extensions of {p∈ P|p<q,q∈{e_1,…, e_i+1}}. * This yields η_{e_1,…,e_i+1}=t_x_1⋯ t_x_at_y_1⋯ t_y_bt_z_1⋯ t_z_c.Using the induction hypothesis,τ_e_1^*…τ_e_i^* τ_e_i+1^* = η_{e_1,…,e_i} t_e_1… t_e_iη_{e_1,…,e_i}^-1η_e_i+1 t_e_i+1η_e_i+1^-1= t_x_1⋯ t_x_a t_y_1⋯ t_y_b t_e_1⋯ t_e_i t_y_b⋯ t_y_1 t_x_a⋯ t_x_1 t_x_1⋯ t_x_a t_z_1⋯ t_z_c t_e_i+1 t_z_c⋯ t_z_1 t_x_a⋯ t_x_1= t_x_1⋯ t_x_a t_y_1⋯ t_y_b t_e_1⋯ t_e_i t_y_b⋯ t_y_1 t_z_1⋯ t_z_c t_e_i+1 t_z_c⋯ t_z_1 t_x_a⋯ t_x_1= t_x_1⋯ t_x_a t_y_1⋯ t_y_b t_e_1⋯ t_e_i t_z_1⋯ t_z_c t_e_i+1 t_z_c⋯ t_z_1 t_y_b⋯ t_y_1 t_x_a⋯ t_x_1= t_x_1⋯ t_x_a t_y_1⋯ t_y_b t_z_1⋯ t_z_c t_e_1⋯ t_e_i t_e_i+1 t_z_c⋯ t_z_1 t_y_b⋯ t_y_1 t_x_a⋯ t_x_1= η_{e_1,…,e_i+1} t_e_1… t_e_i t_e_i+1η_{e_1,…,e_i+1}^-1where each commutation above is between toggles for pairwise incomparable elements. We are now ready to prove Theorem <ref>. 
We use induction on e.If e is a minimal element of P, then τ_e^*=t_e, so the diagram commutes by Theorem <ref>.Now suppose e is not minimal.Let e_1,…,e_k be the elements of P covered by e, and suppose that the theorem is true for every e_i.That is, for every antichain A with I=(A), the order ideal generated by τ_e_i(A) is τ_e_i^*(I). Then the order ideal generated by τ_e_1τ_e_2…τ_e_k(A) is τ_e_1^*τ_e_2^*…τ_e_k^*(I) = η_{e_1,…,e_k} t_e_1t_e_2… t_e_kη_{e_1,…,e_k}^-1(I) by Lemma <ref>.From the definition of t_e^*, it follows that τ_e_1τ_e_2⋯τ_e_k t_e^* τ_e_1τ_e_2⋯τ_e_k=τ_e. Then the order ideal generated by τ_e(A)=τ_e_1τ_e_2⋯τ_e_k t_e^* τ_e_1τ_e_2⋯τ_e_k (A) is η_{e_1,…,e_k} t_e_1t_e_2⋯ t_e_kη_{e_1,…,e_k}^-1 t_e η_{e_1,…,e_k} t_e_1t_e_2⋯ t_e_kη_{e_1,…,e_k}^-1(I)by Theorem <ref> (for t_e^*) and the induction hypothesis (for τ_e_1τ_e_2⋯τ_e_k).Thus, it suffices to show thatη_{e_1,…,e_k} t_e_1t_e_2⋯ t_e_kη_{e_1,…,e_k}^-1 t_e η_{e_1,…,e_k} t_e_1t_e_2⋯ t_e_kη_{e_1,…,e_k}^-1=τ_e^*.The toggles in the product η_{e_1,…,e_k} correspond to elements strictly less than e_1,…,e_k; none of these cover nor are covered by e.Thus by Proposition <ref>, we can commute t_e with η_{e_1,…,e_k} on the left side of (<ref>) and then cancel η_{e_1,…,e_k}^-1η_{e_1,…,e_k}.Also, since e_1,…,e_k are pairwise incomparable, we can commute t_e_1,…, t_e_k.Thus the left side of (<ref>) is η_{e_1,…,e_k} t_e_1t_e_2⋯ t_e_k t_e t_e_k⋯ t_e_2t_e_1η_{e_1,…,e_k}^-1. Note that{x∈ P|x<e}={x∈ P|x<y ,y∈{e_1,…,e_k}}∪{e_1,…,e_k}where the union is disjoint and that e_1,…,e_k are maximal elements of this set.Thus for any linear extension (x_1,…,x_n) of {x∈ P|x<y ,y∈{e_1,…,e_k}}, a linear extension of {x∈ P|x<e} is (x_1,…,x_n,e_1,…,e_k).So η_{e_1,…,e_k} t_e_1t_e_2⋯ t_e_k =η_e which means the left side of (<ref>) is η_e t_e η_e^-1= τ_e^*, same as the right side. The following is a corollary of Theorems <ref> and <ref>. There is an isomorphism from _(P) to _(P) given by τ_e↦τ_e^*, with inverse given by t_e↦ t_e^*.Striker has proven that toggle groups on many families of subsets are either symmetric or alternating groups, including independent sets of connected graphs <cit.>.An independent set of a graph is a subset of the vertices, for which no two are connected by an edge.Antichains of P are the same as independent sets of the comparability graph of P, in which two elements are connected by an edge if they are comparable (different from the Hasse diagram that only includes cover relations).So any result that holds in general for toggling independent sets of graphs also does for toggling antichains[And similarly chains of posets are the independent sets of the incomparability graph in which two elements are connected by an edge if they are incomparable.So any result that holds in general for toggling independent sets also holds for toggling chains.], but not necessarily vice versa, since it is straightforward to show that e.g. a cycle graph with five vertices is not the comparability graph for any poset. The following proposition explains that we can state _ by performing antichain toggles at every element, but in the opposite order as that of _ in Proposition <ref>. Let (x_1,x_2,…,x_n) be any linear extension of a finite poset P.Then _=τ_x_n⋯τ_x_2τ_x_1. Like the proofs of Theorem <ref> and Lemma <ref>, we could prove this proposition algebraically using Theorems <ref> and <ref>, which is what we will do in Section <ref> for the piecewise-linear generalization (Theorem <ref>).However, the following is a much more elegant proof. 
Let A={a_1,…,a_k} be an antichain. Recall that _(A) is the set of minimal elements of the complement of the order ideal generated by A.Let us consider what happens when we apply τ_x_i in the product τ_x_n⋯τ_x_2τ_x_1. * If x_i<a_j∈ A, then τ_x_i is performed before τ_a_j so τ_x_i cannot add x_i to the antichain.* If x_i∈ A, then τ_x_i removes x_i from A.* Otherwise, x_i∈ P(A).In this case τ_x_i is performed after any element of A less than x_i (if any) has been toggled out.If x_i is a minimal element of P(A) (i.e., x_i∈_(A)), then τ_x_i adds x_i to the antichain.If x_i is not a minimal element of P(A), then when it is time to toggle τ_x_i, some z∈_(A) with z<x_i is in the antichain, so we cannot add x_i.Thus, τ_x_n⋯τ_x_2τ_x_1(A)=_(A).For the poset of Example <ref>, with elements as named below, (a,b,c,d,e,f) is a linear extension. We show the effect of applying τ_f τ_e τ_d τ_c τ_b τ_a to the antichain considered in Example <ref>.Notice that the outcome is the same antichain we obtained by the three step process, demonstrating Proposition <ref>.[scale=0.35] [shift=(0,-4)] [thick] (-0.1, 1.9) – (-0.9, 1.1); [thick] (0.1, 1.9) – (0.9, 1.1); [thick] (-1.1, 0.9) – (-1.9, 0.1); [thick] (-0.9, 0.9) – (-0.1, 0.1); [thick] (0.9, 0.9) – (0.1, 0.1); [thick] (1.1, 0.9) – (1.9, 0.1); (0,2) circle [radius=0.2]; (-1,1) circle [radius=0.2]; [fill] (1,1) circle [radius=0.2]; [red,fill] (-2,0) circle [radius=0.2]; (0,0) circle [radius=0.2]; (2,0) circle [radius=0.2]; [above] at (0,2) f; [left] at (-1,1) d; [right] at (1,1) e; [below] at (-2,0) a; [below] at (0,0) b; [below] at (2,0) c;at (3.5,-3) τ_a⟼; [shift=(7,-4)] [thick] (-0.1, 1.9) – (-0.9, 1.1); [thick] (0.1, 1.9) – (0.9, 1.1); [thick] (-1.1, 0.9) – (-1.9, 0.1); [thick] (-0.9, 0.9) – (-0.1, 0.1); [thick] (0.9, 0.9) – (0.1, 0.1); [thick] (1.1, 0.9) – (1.9, 0.1); (0,2) circle [radius=0.2]; (-1,1) circle [radius=0.2]; [fill] (1,1) circle [radius=0.2]; (-2,0) circle [radius=0.2]; [red] (0,0) circle [radius=0.2]; (2,0) circle [radius=0.2]; [above] at (0,2) f; [left] at (-1,1) d; [right] at (1,1) e; [below] at (-2,0) a; [below] at (0,0) b; [below] at (2,0) c;at (10.5,-3) τ_b⟼; [shift=(14,-4)] [thick] (-0.1, 1.9) – (-0.9, 1.1); [thick] (0.1, 1.9) – (0.9, 1.1); [thick] (-1.1, 0.9) – (-1.9, 0.1); [thick] (-0.9, 0.9) – (-0.1, 0.1); [thick] (0.9, 0.9) – (0.1, 0.1); [thick] (1.1, 0.9) – (1.9, 0.1); (0,2) circle [radius=0.2]; (-1,1) circle [radius=0.2]; [fill] (1,1) circle [radius=0.2]; (-2,0) circle [radius=0.2]; (0,0) circle [radius=0.2]; [red] (2,0) circle [radius=0.2]; [above] at (0,2) f; [left] at (-1,1) d; [right] at (1,1) e; [below] at (-2,0) a; [below] at (0,0) b; [below] at (2,0) c;at (17.5,-3) τ_c⟼; [shift=(21,-4)] [thick] (-0.1, 1.9) – (-0.9, 1.1); [thick] (0.1, 1.9) – (0.9, 1.1); [thick] (-1.1, 0.9) – (-1.9, 0.1); [thick] (-0.9, 0.9) – (-0.1, 0.1); [thick] (0.9, 0.9) – (0.1, 0.1); [thick] (1.1, 0.9) – (1.9, 0.1); (0,2) circle [radius=0.2]; [red] (-1,1) circle [radius=0.2]; [fill] (1,1) circle [radius=0.2]; (-2,0) circle [radius=0.2]; (0,0) circle [radius=0.2]; (2,0) circle [radius=0.2]; [above] at (0,2) f; [left] at (-1,1) d; [right] at (1,1) e; [below] at (-2,0) a; [below] at (0,0) b; [below] at (2,0) c;at (24.5,-3) τ_d⟼; [shift=(28,-4)] [thick] (-0.1, 1.9) – (-0.9, 1.1); [thick] (0.1, 1.9) – (0.9, 1.1); [thick] (-1.1, 0.9) – (-1.9, 0.1); [thick] (-0.9, 0.9) – (-0.1, 0.1); [thick] (0.9, 0.9) – (0.1, 0.1); [thick] (1.1, 0.9) – (1.9, 0.1); (0,2) circle [radius=0.2]; [fill] (-1,1) circle [radius=0.2]; [red,fill] (1,1) circle [radius=0.2]; (-2,0) circle 
[radius=0.2]; (0,0) circle [radius=0.2]; (2,0) circle [radius=0.2]; [above] at (0,2) f; [left] at (-1,1) d; [right] at (1,1) e; [below] at (-2,0) a; [below] at (0,0) b; [below] at (2,0) c;at (31.5,-3) τ_e⟼; [shift=(35,-4)] [thick] (-0.1, 1.9) – (-0.9, 1.1); [thick] (0.1, 1.9) – (0.9, 1.1); [thick] (-1.1, 0.9) – (-1.9, 0.1); [thick] (-0.9, 0.9) – (-0.1, 0.1); [thick] (0.9, 0.9) – (0.1, 0.1); [thick] (1.1, 0.9) – (1.9, 0.1); [red] (0,2) circle [radius=0.2]; [fill] (-1,1) circle [radius=0.2]; (1,1) circle [radius=0.2]; (-2,0) circle [radius=0.2]; (0,0) circle [radius=0.2]; (2,0) circle [radius=0.2]; [above] at (0,2) f; [left] at (-1,1) d; [right] at (1,1) e; [below] at (-2,0) a; [below] at (0,0) b; [below] at (2,0) 3;at (38.5,-3) τ_f⟼; [shift=(42,-4)] [thick] (-0.1, 1.9) – (-0.9, 1.1); [thick] (0.1, 1.9) – (0.9, 1.1); [thick] (-1.1, 0.9) – (-1.9, 0.1); [thick] (-0.9, 0.9) – (-0.1, 0.1); [thick] (0.9, 0.9) – (0.1, 0.1); [thick] (1.1, 0.9) – (1.9, 0.1); (0,2) circle [radius=0.2]; [fill] (-1,1) circle [radius=0.2]; (1,1) circle [radius=0.2]; (-2,0) circle [radius=0.2]; (0,0) circle [radius=0.2]; (2,0) circle [radius=0.2]; [above] at (0,2) f; [left] at (-1,1) d; [right] at (1,1) e; [below] at (-2,0) a; [below] at (0,0) b; [below] at (2,0) c;§.§ Graded posets and gyrationThus far, the posets for which rowmotion has been shown to exhibit nice behavior are graded, i.e., posets P with a well defined rank function : P_≥ 0 satisfying * (x)=0 for any minimal element x,* (y)=(x)+1 if y⋗ x,* every maximal element x has (x)=r, where r is called the rank of P. For x∈ P, we call (x) the rank of x.Note that the rank function is uniquely determined y the poset.In a graded poset P, elements of the same rank are pairwise incomparable.Thus we can define toggling by an entire rank at once (either order ideal or antichain toggling).This has already been well-studied for order ideal toggles <cit.>[Actually, Striker and Williams defined this for a related family of “rowed-and-columned” posets <cit.>.Since we can draw the Hasse diagram for a graded poset in a way where each row corresponds to a rank, the name “rowmotion” came from the fact that it is toggling by rows for special posets.]. For a graded poset P, definet_=i:=∏_(x)=i t_x,τ_=i:=∏_(x)=iτ_x,t_=i^*:=∏_(x)=i t_x^*,τ_=i^*:=∏_(x)=iτ_x^*.All of the rank toggles t_=i,τ_=i,t_=i^*,τ_=i^* defined above are involutions because they are products of commuting involutions. The following is clear from Propositions <ref> and <ref>.The _ part is <cit.>, also found in <cit.>. For a graded poset P of rank r,_=t_=0t_=1t_=2⋯ t_=r and _=τ_=r⋯τ_=2τ_=1τ_=0.In Figure <ref>, we demonstrate both _ (top) and _ (bottom) in terms of the rank toggles.For applying t_=i, we can insert or remove each element of rank i subject to Proposition <ref>.To apply τ_=i, we remove each element of rank i that is in the antichain; otherwise we add the element if and only if it is incomparable with every element in the antichain.The poset elements toggled in the following step are shown inred. The following is a basic corollary to Theorems <ref> and <ref>. For a graded poset P, the following diagrams commute. 
1 at (0,1.8) (P); at (0,0) (P); at (3.25,1.8) (P); at (3.25,0) (P); [semithick, ->] (0,1.3) – (0,0.5); [left] at (0,0.9) ; [semithick, ->] (0.7,0) – (2.5,0); [below] at (1.5,0) t_=i; [semithick, ->] (0.7,1.8) – (2.5,1.8); [above] at (1.5,1.8) t_=i^*; [semithick, ->] (3.25,1.3) – (3.25,0.5); [right] at (3.25,0.9) ; at (0,1.8) (P); at (0,0) (P); at (3.25,1.8) (P); at (3.25,0) (P); [semithick, ->] (0,1.3) – (0,0.5); [left] at (0,0.9) ; [semithick, ->] (0.7,0) – (2.5,0); [below] at (1.5,0) τ_=i^*; [semithick, ->] (0.7,1.8) – (2.5,1.8); [above] at (1.5,1.8) τ_=i; [semithick, ->] (3.25,1.3) – (3.25,0.5); [right] at (3.25,0.9) ; 1 In a graded poset, we can state any t_e^* and τ_e^* in terms of τ_e, t_e, and rank toggles. If (e)=i, then t_e^*=τ_=i-1τ_eτ_=i-1 and t_=i^*= τ_=i-1τ_=iτ_=i-1 (where the empty product τ_=-1 is the identity).Let e_1,…,e_k,x_1,…,x_m be the elements of rank i-1, where e_1,…,e_k are covered by e and x_1,…,x_m are not.Then x_1,…,x_m are each incomparable with each other, with e, and with e_1,…,e_k.Thus in the expressionτ_=i-1τ_eτ_=i-1= τ_e_1⋯τ_e_kτ_x_1⋯τ_x_mτ_e τ_e_1⋯τ_e_kτ_x_1⋯τ_x_meach τ_x_j can be moved and canceled with the other one.Therefore,τ_=i-1τ_eτ_=i-1= τ_e_1⋯τ_e_kτ_e τ_e_1⋯τ_e_k=t_e^*. Now let y_1,…,y_h be the elements of rank i.Thent_=i^* = t_y_1^* t_y_2^* ⋯ t_y_h^* = τ_=i-1τ_y_1τ_=i-1τ_=i-1τ_y_2τ_=i-1⋯τ_=i-1τ_y_hτ_=i-1= τ_=i-1τ_y_1τ_y_2⋯τ_y_hτ_=i-1= τ_=i-1τ_=iτ_=i-1. If (e)=i, then τ_e^*=t_=0t_=1⋯ t_=i-1 t_e t_=i-1⋯ t_=1t_=0 and τ_=i^*=t_=0t_=1⋯ t_=i-1t_=it_=i-1⋯ t_=1t_=0.Let (x_1,…,x_a) be a linear extension of {x∈ P |x<e}. If y∥ e, then y is not less than any of x_1,…,x_a.Thus, we have a linear extension of the form (x_1,…,x_a,y_1,…,y_b) for {p∈ P | (p)≤ i-1}, where y_1,…,y_b are all incomparable with e.Since we can rearrange the toggles in t_=0t_=1⋯ t_=i-1 according to any linear extension,t_=0t_=1⋯ t_=i-1= t_x_1t_x_2⋯ t_x_a t_y_1t_y_2⋯ t_y_b. Therefore,t_=0t_=1⋯ t_=i-1t_e t_=i-1⋯ t_=1t_=0 = t_x_1t_x_2⋯ t_x_a t_y_1t_y_2⋯ t_y_b t_e t_y_b⋯ t_y_2t_y_1 t_x_a⋯ t_x_2t_x_1= t_x_1t_x_2⋯ t_x_a t_e t_y_1t_y_2⋯ t_y_b t_y_b⋯ t_y_2t_y_1 t_x_a⋯ t_x_2t_x_1= t_x_1t_x_2⋯ t_x_a t_et_x_a⋯ t_x_2t_x_1= η_e t_e η_e^-1= τ_e^*.Then the τ_=i^* expression follows easily from the above or from Lemma <ref>. Given any graded poset P, Striker defines in <cit.> an element of _(P) called gyration, which is conjugate to _.The name “gyration” is due to its connection with Wieland's map of the same name on alternating sign matrices <cit.>. Let P be a graded poset.Then order ideal gyration _:(P) (P) is the map that applies the order ideal toggles for elements in even ranks first, then the odd ranks. The order ideal rank toggles t_=i, t_=j commute when i and j have the same parity (or more generally when |i-j|≠1).This is because there are no cover relations between an element of rank i and one of rank j in this scenario.Therefore, the definition of _ is well-defined. It does not matter the order in which elements of even rank are toggled, and similarly for odd rank.We credit David Einstein and James Propp for the suggestion to define an analogue of gyration with antichain toggles instead, and for great assistance in its definition. Antichain rank toggles never commute with each other, so toggling “the even ranks” and “the odd ranks” are ambiguous unless we define an order for applying the toggles. We choose the following for the definition of antichain gyration. 
Let P be a graded poset.Then antichain gyration _:(P) (P) is the map that first applies the antichain toggles for odd ranks starting from the bottom of the poset up to the top, and then toggles the even ranks from the top of the poset down to the bottom. For example, if P has rank 6, then _=τ_=0τ_=2τ_=4τ_=6τ_=5τ_=3τ_=1.We define _ in this way so that the relation between _ and _ matches that of_ and _, as in the following theorem. Let P be a graded poset. The following diagram commutes. at (0,1.8) (P); at (0,0) (P); at (3.25,1.8) (P); at (3.25,0) (P); [semithick, ->] (0,1.3) – (0,0.5); [left] at (0,0.9) ; [semithick, ->] (0.7,0) – (2.5,0); [below] at (1.5,0) _; [semithick, ->] (0.7,1.8) – (2.5,1.8); [above] at (1.5,1.8) _; [semithick, ->] (3.25,1.3) – (3.25,0.5); [right] at (3.25,0.9) ;See Figure <ref> for an example illustrating Theorem <ref>.In order to prove the theorem, we begin with a lemma. Let a_0,a_1,…,a_k be elements of a group G, such that a_i^2 is the identity for every i∈{0,1,…,k}. For every j∈{0,1,…,k}, setb_j=a_0 a_1⋯ a_j-1_ subscripts increase by 1 a_j a_j-1⋯ a_1 a_0_ subscripts decrease by 1.Then for each i∈ satisfying 2i≤ k, we haveb_0b_2⋯ b_2i_ subscripts increase by 2= a_1a_3⋯ a_2i-1_ subscripts increase by 2 a_2ia_2i-1⋯ a_1a_0_ subscripts decrease by 1. We proceed inductively. For the base case i=0, b_0=a_0. This is consistent with the lemma as a_1 a_3 ⋯ a_2i-1 and a_2i-1⋯ a_3 a_1 are empty products. The i=1,2 casesb_0b_2=a_0 a_0_identitya_1a_2a_1a_0= a_1a_2a_1a_0andb_0b_2b_4 =a_0a_0_identitya_1a_2a_1a_0 a_0a_1a_2_identitya_3a_4a_3a_2a_1a_0= a_1a_3a_4a_3a_2a_1a_0help illustrate the lemma more clearly.Now for the induction hypothesis, we assume the lemma for i-1.That is, we assumeb_0b_2⋯ b_2i-2_ subscripts increase by 2= a_1a_3⋯ a_2i-3_ subscripts increase by 2 a_2i-2a_2i-3⋯ a_1a_0_ subscripts decrease by 1.Now we multiply both sides on the right by b_2i, which isa_0a_1a_2⋯ a_2i-1a_2ia_2i-1⋯ a_2a_1a_0.This gives us== b_0 b_2 ⋯ b_2i-2b_=2i_subscripts increase by 2=a_1 a_3 ⋯ a_2i-3_subscripts increase by 2 a_2i-2 a_2i-3⋯ a_2a_1a_0 _subscripts decrease by 1 a_0a_1a_2 ⋯ a_2i-3a_2i-2_subscripts increase by 1 a_2i-1 a_2i a_2i-1 a_2i-2⋯ a_2a_1a_0 _subscripts decrease by 1=a_1 a_3 ⋯ a_2i-3_subscripts increase by 2 a_2i-1 a_2i a_2i-1 a_2i-2⋯ a_2a_1a_0 _subscripts decrease by 1=a_1a_3 ⋯ a_2i-3a_2i-1_subscripts increase by 2 a_2i a_2i-1 a_2i-2⋯ a_2a_1a_0 _subscripts decrease by 1which proves the lemma.If P has rank 2k, then_ =t_=1 t_=3⋯ t_=2k-1_ranks increase by 2 t_=2k t_=2k-2⋯ t_=2 t_=0_ranks decrease by 2and_ = τ_=0τ_=2⋯τ_=2k-2τ_=2k_ranks increase by 2τ_=2k-1⋯τ_=3τ_=1_ranks decrease by 2.So for posets of even rank 2k, it suffices to prove thatτ^*_=0τ^*_=2⋯τ^*_=2k_ranks increase by 2τ^*_=2k-1⋯τ^*_=3τ^*_=1_ranks decrease by 2 =t_=1 t_=3⋯ t_=2k-1_ranks increase by 2 t_=2k⋯ t_=2 t_=0_ranks decrease by 2.On the other hand, if P has rank 2k+1, then_ =t_=1 t_=3⋯ t_=2k-1t_=2k+1_ranks increase by 2 t_=2k t_=2k-2⋯ t_=2 t_=0_ranks decrease by 2and_ = τ_=0τ_=2⋯τ_=2k-2τ_=2k_ranks increase by 2τ_=2k+1τ_=2k-1⋯τ_=3τ_=1_ranks decrease by 2.Thus, for posets of odd rank 2k+1, it suffices to prove thatτ^*_=0τ^*_=2⋯τ^*_=2k_ranks increase by 2τ^*_=2k+1⋯τ^*_=3τ^*_=1_ranks decrease by 2 =t_=1 t_=3⋯ t_=2k+1_ranks increase by 2 t_=2k⋯ t_=2 t_=0_ranks decrease by 2. To prove Eq. (<ref>) and (<ref>), we list a few equations. 
By setting a_j=t_=j and i=k in Lemma <ref> (so b_j=τ^*_=j), we obtainτ^*_=0τ^*_=2⋯τ^*_=2k_ranks increase by 2 =t_=1 t_=3⋯ t_=2k-1_ranks increase by 2 t_=2k t_=2k-1 t_=2k-2⋯ t_=2t_=1t_=0_ranks decrease by 1. We can prove the following by setting a_j=t_=i+1 and i=k-1 in Lemma <ref> (so b_j=t_=0τ^*_=j+1t_=0)), then conjugating both sides by t_=0 and inverting both sides.τ^*_=2k-1τ^*_=2k-3⋯τ^*_=3τ^*_=1_ranks decrease by 2 =t_=0 t_=1 t_=2⋯ t_=2k-1_ranks increase by 1 t_=2k-2 t_=2k-4⋯ t_=2t_=0_ranks decrease by 2.By replacing k with k+1 in Eq. (<ref>), we obtainτ^*_=2k+1τ^*_=2k-1⋯τ^*_=3τ^*_=1_ranks decrease by 2 =t_=0 t_=1 t_=2⋯ t_=2k+1_ranks increase by 1 t_=2k t_=2k-2⋯ t_=2t_=0_ranks decrease by 2. To prove Eq. (<ref>), we multiply the left and right sides of Eq. (<ref>) by those of Eq. (<ref>) to obtain==τ^*_=0τ^*_=2⋯τ^*_=2k_ranks increase by 2τ^*_=2k-1⋯τ^*_=3τ^*_=1_ranks decrease by 2=t_=1 t_=3⋯ t_=2k-1_ranks increase by 2 t_=2k t_=2k-1 t_=2k-2⋯ t_=2t_=1t_=0_ranks decrease by 1== t_=0 t_=1 t_=2⋯ t_=2k-1_ranks increase by 1 t_=2k-2⋯ t_=2t_=0_ranks decrease by 2=t_=1 t_=3⋯ t_=2k-1_ranks increase by 2 t_=2k t_=2k-2⋯ t_=2t_=0_ranks decrease by 2=t_=1 t_=3⋯ t_=2k-1_ranks increase by 2 t_=2k t_=2k-2⋯ t_=2t_=0_ranks decrease by 2. Similarly, for the proof of Eq. (<ref>), we multiply the left and right sides of Eq. (<ref>) by those of Eq. (<ref>). This gives us==τ^*_=0τ^*_=2⋯τ^*_=2k_ranks increase by 2τ^*_=2k+1⋯τ^*_=3τ^*_=1_ranks decrease by 2=t_=1 t_=3⋯ t_=2k-1_ranks increase by 2t_=2kt_=2k-1 t_=2k-2⋯ t_=2t_=1t_=0_ranks decrease by 1== t_=0 t_=1 t_=2⋯ t_=2k-1t_=2k_ranks increase by 1 t_=2k+1 t_=2k⋯ t_=2t_=0_ranks decrease by 2=t_=1 t_=3⋯ t_=2k-1_ranks increase by 2 t_=2k+1 t_=2k⋯ t_=2t_=0_ranks decrease by 2=t_=1 t_=3⋯ t_=2k-1t_=2k+1_ranks increase by 2 t_=2k⋯ t_=2t_=0_ranks decrease by 2concluding the proof of Eq. (<ref>) and the proof of the theorem.§ PIECEWISE-LINEAR GENERALIZATION We call the toggles and rowmotion maps on (P) and (P) combinatorial toggling and rowmotion as they are acting on combinatorial sets. Einstein and Propp <cit.> have generalized these maps on (P) to piecewise-linear toggling and rowmotion, by constructing continuous maps that act on Stanley's “order polytope,” an extension of (P) and (P) <cit.>. In this section, we expand on this work and generalize the toggles τ_e on antichains to another polytope of Stanley, called the “chain polytope” which extends antichains. For certain posets P in which cardinality is a homomesic statistic under _,this appears to extend to the piecewise-linear setting.Many of the algebraic properties that hold in the combinatorial setting have also been proven for the piecewise-linear setting, and furthermore generalized to the birational setting <cit.>.We will show that almost all that we proved for the relationship between toggles in _(P) and _(P) also extends to the piecewise-linear setting. We will not discuss birational toggling here except in the final two paragraphs of Section <ref>, where we mention it as a possible direction for future research. §.§ Poset polytopes For a set X and finite poset P, let X^P denote the set of X-labelings of P, i.e., the set of functions f:P X. Given f∈ X^P and e∈ P, we call f(e) the label of e. 
A subset S⊆ P corresponds naturally to a {0,1}-labeling f of P by letting f(x)=1 if x∈ S and f(x)=0 if x∉S, as in the example below.[scale=0.567][thick] (-0.1, 1.9) – (-0.9, 1.1); [thick] (0.1, 1.9) – (0.9, 1.1); [thick] (-1.1, 0.9) – (-1.9, 0.1); [thick] (-0.9, 0.9) – (-0.1, 0.1); [thick] (0.9, 0.9) – (0.1, 0.1); [thick] (1.1, 0.9) – (1.9, 0.1); (0,2) circle [radius=0.2]; (-1,1) circle [radius=0.2]; [fill] (1,1) circle [radius=0.2]; [fill] (-2,0) circle [radius=0.2]; (0,0) circle [radius=0.2]; (2,0) circle [radius=0.2];at (3.5,1) ⟷; [shift=(7,0)] [thick] (-0.25, 1.75) – (-0.75, 1.25); [thick] (0.25, 1.75) – (0.75, 1.25); [thick] (-1.25, 0.75) – (-1.75, 0.25); [thick] (-0.75, 0.75) – (-0.25, 0.25); [thick] (0.75, 0.75) – (0.25, 0.25); [thick] (1.25, 0.75) – (1.75, 0.25); at (0,2) 0; at (-1,1) 0; at (1,1) 1; at (-2,0) 1; at (0,0) 0; at (2,0) 0;This labeling is called the indicator function of the subset.We consider a subset and its indicator function as two separate ways of writing the same object, so we will not distinguish the two.. * Antichains of P are precisely the {0,1}-labelings f of P such that for every chain x_1<x_2<⋯< x_n in P, we have ∑_i=1^n f(x_i)≤ 1.* Order ideals of P are precisely the {0,1}-labelings f of P that are order-reversing, meaning that f(x)≥ f(y) whenever x≤ y.* Order filters of P are precisely the {0,1}-labelings f of P that are order-preserving, meaning that f(x)≤ f(y) whenever x≤ y..* A subset S⊆ P is an antichain if and only if S contains at most one element in any chain x_1<x_2<⋯<x_n; for binary functions this is exactly the same condition as ∑_i=1^n f(x_i)≤ 1.* The condition that makes I⊆ P an order ideal is that if x<y and y∈ I, then x∈ I.Consider a pair x,y∈ P satisfying x≤ y.If f(y)=0, then automatically f(x)≥ f(y).If f(y)=1, then f(x)≥ f(y) if and only if f(x)=1, which is exactly the requirement to be an order ideal.* Analogous to (2).We now generalize these from labelings in {0,1}^P to [0,1]^P.In <cit.>, Stanley introduced two polytopes associated with a poset: the chain polytope and the order polytope.Stanley's “order polytope” is what we call the “order-preserving polytope.”. * The chain polytope of P, denoted (P), is the set of all labelings f∈ [0,1]^P such that ∑_i=1^n f(x_i)≤ 1 for any chain x_1<x_2<⋯<x_n.* The order-reversing polytope of P, denoted OR(P), is the set of all order-reversing labelings f∈ [0,1]^P.* The order-preserving polytope of P, denoted OP(P), is the set of all order-preserving labelings f∈ [0,1]^P.By Proposition <ref>, (P)=(P)∩{0,1}^P, (P)=OR(P)∩{0,1}^P, and (P)=OP(P)∩{0,1}^P (the vertices of the respective polytopes <cit.>).Thus, anything we prove to be true on these polytopes is also true for the combinatorial sets (P), (P), and (P).What is more surprising, however, is that almost all of what we proved in Section <ref> when working over (P), (P), and (P) can be extended to (P), OR(P), and OP(P) in a natural way.As we will not use polytope theory in this paper, knowledge of polytopes is not necessary to understand the rest of this paper.The reader may choose to think of (P), OR(P), and OP(P) simply as subsets of [0,1]^P. §.§ The poset P̂ In order to work with OR(P) and OP(P), we create a new poset P̂=P∪{m̂, M̂} from any given poset P by adjoining a minimal element m̂ and maximal element M̂. 
For any x,y∈ P, x≤ y in P̂ if and only if x≤ y in P.For any x∈P̂, we let m̂≤ x ≤M̂.When we make statements like “x≤ y” or “x⋗ y” or “x∥ y,” we need not clarify if we mean in P or P̂, since there is no ambiguity:If at least one of x and y is m̂ or M̂, then we must mean P̂.On the other hand, if both x,y∈ P, then those types of statements hold in P if and only if they hold in P̂.Note that a maximal or minimal element of P does not remain as such in P̂..[scale=0.5] [thick] (-0.1, 1.9) – (-0.9, 1.1); [thick] (0.1, 1.9) – (0.9, 1.1); [thick] (-1.1, 0.9) – (-1.9, 0.1); [thick] (-0.9, 0.9) – (-0.1, 0.1); [thick] (0.9, 0.9) – (0.1, 0.1); [thick] (1.1, 0.9) – (1.9, 0.1); [fill] (0,2) circle [radius=0.2]; [fill] (-1,1) circle [radius=0.2]; [fill] (1,1) circle [radius=0.2]; [fill] (-2,0) circle [radius=0.2]; [fill] (0,0) circle [radius=0.2]; [fill] (2,0) circle [radius=0.2]; at (-3.8,1) If P=; [shift=(9.3,0)]; [thick] (-0.1, 1.9) – (-0.9, 1.1); [thick] (0.1, 1.9) – (0.9, 1.1); [thick] (-1.1, 0.9) – (-1.9, 0.1); [thick] (-0.9, 0.9) – (-0.1, 0.1); [thick] (0.9, 0.9) – (0.1, 0.1); [thick] (1.1, 0.9) – (1.9, 0.1); [dashed] (0, 3.26) – (0,2.1); [dashed] (0, -0.1) – (0,-1.2); [dashed] (-2, -0.1) – (0,-1.2); [dashed] (2, -0.1) – (0,-1.2); [fill] (0,2) circle [radius=0.2]; [fill] (-1,1) circle [radius=0.2]; [fill] (1,1) circle [radius=0.2]; [fill] (-2,0) circle [radius=0.2]; [fill] (0,0) circle [radius=0.2]; [fill] (2,0) circle [radius=0.2]; [fill] (0,3.3) circle [radius=0.2]; [fill] (0,-1.3) circle [radius=0.2]; at (-3.8,1) then P̂=; at (2.9,1) .; [above] at (0,3.3) M̂; [below] at (0,-1.3) m̂; We will use dashed lines throughout the paper to denote the edges going to m̂ and M̂, so that it will be clear if we are drawing P or P̂.We extend every f∈ OR(P) to a labeling of P̂ by setting f(m̂) =1 and f(M̂) =0.[Elsewhere in the literature, m̂ and M̂ are denoted 0̂ and 1̂ respectively.With this norm, order-reversing maps would have f(0̂)=1 and f(1̂)=0.This is potentially confusing so we deviate from this norm.] We likewise extend every f∈ OP(P) to a labeling of P̂ by f(m̂) =0 and f(M̂) =1. Even though a constant labeling is both order-reversing and order-preserving, we only consider it to be in one of OR(P) and OP(P) at any time, and assign the appropriate labels to m̂ and M̂ accordingly.We do not extend elements of (P) to P̂.Working over P̂ will allow us to state definitions and theorems without splitting them into several cases.For example, x⋗m̂ (resp. x⋖M̂) means that x is a minimal (resp. maximal) element of P. Also for x∈ P, the sets {.y∈P̂ |y⋗ x} and {.y∈P̂ |y⋖ x} are always nonempty so a labeling f achieves maximum and minimum values on these sets. §.§ Rowmotion on poset polytopes In this subsection, we define rowmotion on (P), OR(P), and OP(P) as the composition of three maps in a way analogous to the rowmotion definitions in Section <ref>. The complement of a labeling is given by :[0,1]^P[0,1]^P where ((f))(x)=1-f(x) for all x∈ P. Note thatis an involution that takes elements in OR(P) to ones in OP(P) and vice versa.When restricted to {0,1}^P (which again we think of as subsets of P),corresponds to the usual complementation operation, hence the name. There is a bijection : (P)OR(P) given by((g))(x) = max{g(y_1)+g(y_2)+⋯+ g(y_k) |x=y_1 ⋖ y_2 ⋖⋯⋖ y_k ⋖M̂.}with inverse given by(^-1(f))(x) = min{f(x)-f(y) |y∈P̂, y ⋗ x.} =f(x) - max{f(y) |y∈P̂, y⋗ x.}. 
Also there is a bijection : (P)OP(P) given by((g))(x) = max{g(y_1)+g(y_2)+⋯+ g(y_k) | m̂⋖ y_1 ⋖ y_2 ⋖⋯⋖ y_k =x }with inverse given by(^-1(f))(x) = min{f(x)-f(y) |y∈P̂, y ⋖ x.} =f(x) - max{f(y) |y∈P̂, y ⋖ x.}.We omit the proof as it is straightforward to show that(resp. ) sends elements of (P) to elements of OR(P) (resp. OP(P)), that ^-1 (resp. ^-1) sends elements of OR(P) (resp. OP(P)) to elements of (P), and that ^-1 and ^-1 are inverses ofand . The map ^-1 is what Stanley calls the “transfer map” because it can be used to transfer properties from one of OP(P) or (P) to the other <cit.>.Alsois justbut applied to the dual poset that reverses the `≥' and `≤' relations.Clearly if Q is the dual poset of P, then they have the same chains and antichains, so (P) = (Q).We can replace y⋗ x with y> x in the definition of ^-1, since it would produce the same result by the order-reversing property.Similarly, we can replace y⋖ x with y< x in the definition of ^-1. Also any g∈(P) has only nonnegative outputs.So in theanddefinitions, we can replace “x=y_1 ⋖ y_2 ⋖⋯⋖ y_k ⋖M̂” and “m̂⋖ y_1 ⋖ y_2 ⋖⋯⋖ y_k=x” with “x=y_1<y_2 < ⋯ < y_k” and “y_1<y_2 < ⋯ < y_k=x” respectively since the maximum sum must occur on a chain that cannot be extended.It is easy to see thatandcan be described recursively as well. ((g))(x) = {[ 0 if x=M̂; g(x)+max_y⋗ x((g))(y) if x∈ P; 1 if x=m̂ ].((g))(x) = {[ 0 if x=m̂; g(x)+max_y⋖ x((g))(y) if x∈ P; 1 if x=M̂ ]. We call (g) and (g) the order-reversing and order-preserving labelings generated by the chain polytope element g. If g∈(P), then (g)=(g) and (g)=(g).Let g∈(P) and f=(g).Then for x∈ P, f(x)=1 if and only if there exists y≥ x such that g(y)=1.Otherwise f(x)=0.Since g is an antichain, it is a {0,1}-labeling.Therefore, any chain x=y_1⋖ y_2⋖⋯⋖ y_k ⋖M̂ satisfies∑_i=1^k g(y_i)=1precisely when some g(y_i)=1; otherwise the sum is 0.Such a chain exists precisely when some y≥ x satisfies g(y)=1.Thus, f=(g).Proving that (g)=(g) is analogous. Sinceandare extensions ofandto (P), OR(P), and OP(P), we can extend the definition of rowmotion to these polytopes by composing these similarly to the definitions of _, _, and _.In fact, _, _, and _ are the restrictions of the following maps to (P), (P), and (P), respectively. 
Let _, _OR, _OP be defined by composing maps as follows._:(P) ⟶ OR(P) ⟶ OP(P) ^-1⟶ (P)_OR:OR(P) ⟶ OP(P) ^-1⟶ (P) ⟶ OR(P)_OP:OP(P) ⟶ OR(P) ^-1⟶ (P) ⟶ OP(P) We demonstrate _ and _OR.[scale=0.567] at (-3.5,1) _:; at (-3.5,-3) _OR:;[thick] (-0.3, 1.7) – (-0.7, 1.3); [thick] (0.3, 1.7) – (0.7, 1.3); [thick] (-1.3, 0.7) – (-1.7, 0.3); [thick] (-0.7, 0.7) – (-0.3, 0.3); [thick] (0.7, 0.7) – (0.3, 0.3); [thick] (1.3, 0.7) – (1.7, 0.3); at (0,2) 0.2; at (-1,1) 0.7; at (1,1) 0; at (-2,0) 0.1; at (0,0) 0; at (2,0) 0.3;at (3.5,1) ⟼; [shift=(7,0)] [thick] (-0.3, 1.7) – (-0.7, 1.3); [thick] (0.3, 1.7) – (0.7, 1.3); [thick] (-1.3, 0.7) – (-1.7, 0.3); [thick] (-0.7, 0.7) – (-0.3, 0.3); [thick] (0.7, 0.7) – (0.3, 0.3); [thick] (1.3, 0.7) – (1.7, 0.3); at (0,2) 0.2; at (-1,1) 0.9; at (1,1) 0.2; at (-2,0) 1; at (0,0) 0.9; at (2,0) 0.5;at (10.5,1) ⟼; [shift=(14,0)] [thick] (-0.3, 1.7) – (-0.7, 1.3); [thick] (0.3, 1.7) – (0.7, 1.3); [thick] (-1.3, 0.7) – (-1.7, 0.3); [thick] (-0.7, 0.7) – (-0.3, 0.3); [thick] (0.7, 0.7) – (0.3, 0.3); [thick] (1.3, 0.7) – (1.7, 0.3); at (0,2) 0.8; at (-1,1) 0.1; at (1,1) 0.8; at (-2,0) 0; at (0,0) 0.1; at (2,0) 0.5;at (17.5,1) ^-1⟼; [shift=(21,0)] [thick] (-0.3, 1.7) – (-0.7, 1.3); [thick] (0.3, 1.7) – (0.7, 1.3); [thick] (-1.3, 0.7) – (-1.7, 0.3); [thick] (-0.7, 0.7) – (-0.3, 0.3); [thick] (0.7, 0.7) – (0.3, 0.3); [thick] (1.3, 0.7) – (1.7, 0.3); at (0,2) 0; at (-1,1) 0; at (1,1) 0.3; at (-2,0) 0; at (0,0) 0.1; at (2,0) 0.5; [shift=(0,-4)] [thick] (-0.3, 1.7) – (-0.7, 1.3); [thick] (0.3, 1.7) – (0.7, 1.3); [thick] (-1.3, 0.7) – (-1.7, 0.3); [thick] (-0.7, 0.7) – (-0.3, 0.3); [thick] (0.7, 0.7) – (0.3, 0.3); [thick] (1.3, 0.7) – (1.7, 0.3); at (0,2) 0.2; at (-1,1) 0.9; at (1,1) 0.2; at (-2,0) 1; at (0,0) 0.9; at (2,0) 0.5;at (3.5,-3) ⟼; [shift=(7,-4)] [thick] (-0.3, 1.7) – (-0.7, 1.3); [thick] (0.3, 1.7) – (0.7, 1.3); [thick] (-1.3, 0.7) – (-1.7, 0.3); [thick] (-0.7, 0.7) – (-0.3, 0.3); [thick] (0.7, 0.7) – (0.3, 0.3); [thick] (1.3, 0.7) – (1.7, 0.3); at (0,2) 0.8; at (-1,1) 0.1; at (1,1) 0.8; at (-2,0) 0; at (0,0) 0.1; at (2,0) 0.5;at (10.5,-3) ^-1⟼; [shift=(14,-4)] [thick] (-0.3, 1.7) – (-0.7, 1.3); [thick] (0.3, 1.7) – (0.7, 1.3); [thick] (-1.3, 0.7) – (-1.7, 0.3); [thick] (-0.7, 0.7) – (-0.3, 0.3); [thick] (0.7, 0.7) – (0.3, 0.3); [thick] (1.3, 0.7) – (1.7, 0.3); at (0,2) 0; at (-1,1) 0; at (1,1) 0.3; at (-2,0) 0; at (0,0) 0.1; at (2,0) 0.5;at (17.5,-3) ⟼; [shift=(21,-4)] [thick] (-0.3, 1.7) – (-0.7, 1.3); [thick] (0.3, 1.7) – (0.7, 1.3); [thick] (-1.3, 0.7) – (-1.7, 0.3); [thick] (-0.7, 0.7) – (-0.3, 0.3); [thick] (0.7, 0.7) – (0.3, 0.3); [thick] (1.3, 0.7) – (1.7, 0.3); at (0,2) 0; at (-1,1) 0; at (1,1) 0.3; at (-2,0) 0; at (0,0) 0.4; at (2,0) 0.8; Rowmotion on OR(P) and OP(P) has received much attention, particularly in <cit.> and <cit.>.For certain “nice” posets, _OR has been shown to exhibit many of the same properties as _.For example, on a product of two chains [a]×[b], the order of rowmotion in both the combinatorial (_,_) and piecewise-linear (_OR,_) realms is a+b, and the homomesy of cardinality for _ extends to _OR.On the other hand, we will see in Subsection <ref> that a homomesy for _ on zigzag posets <cit.> does not extend in general to _OR.We will focus primarily on _ and _OR since _OP:OP(P) OP(P) is equivalent to _OR for the dual poset. 
Some literature focuses more on OP(P) as order-preserving maps may seem more natural to work with, as in Stanley's order polytope definition.However, as OR(P) generalizes order ideals, we will be consistent and focus on OR(P).As is clear by definition, there is a relation between _ and _OR depicted by the following commutative diagram.This relation can also be seen in Example <ref>, in whichthe order-reversing labeling we started with is the one generated by the element of (P) we started with. at (0,1.8) (P); at (0,0) OR(P); at (3.25,1.8) (P); at (3.25,0) OR(P); [semithick, ->] (0,1.3) – (0,0.5); [left] at (0,0.9) ; [semithick, ->] (0.7,0) – (2.5,0); [below] at (1.5,0) _OR; [semithick, ->] (0.7,1.8) – (2.5,1.8); [above] at (1.5,1.8) _; [semithick, ->] (3.25,1.3) – (3.25,0.5); [right] at (3.25,0.9) ;§.§ Toggles on poset polytopes Toggles on OR(P) and OP(P), referred to as piecewise-linear toggles, have been explored by Einstein and Propp <cit.> and by Grinberg and Roby <cit.>, who have taken the concept further and analyzed birational toggling also.In this section, we define toggles on the chain polytope (P).We prove that almost all of the algebraic properties from Section <ref> relating toggles on (P) to those on (P) also hold for the piecewise-linear setting (P) and OR(P). Let f∈ OR(P), e∈ P, L=max_y⋗ e f(y), and R=min_y⋖ e f(y). Let h: P̂ [0,1] be defined byh(x) = {[ f(x) if x≠e; L+R - f(e) if x=e ]..* If f∈ OR(P), then h∈ OR(P).* If f∈(P), then h=t_e(f).Recall that we can extend to the poset P̂ when necessary.So if e is a maximal element of P, then L=f(M̂)=0 and if e is a minimal element of P, then R=f(m̂)=1..* Note that h(x) and f(x) can only differ if x=e, so h∈ OR(P) if and only if h(e)∈ [L,R]. Since f(e)∈[L,R], h(e)=L+R-f(e)∈ [L,R]. Thus, h∈ OR(P). * Since f∈(P), it follows that L,R,f(e)∈{0,1} and L≤ f(e)≤ R.Case 1:L=R. Then f(e)=L=R so h(e)=f(e)+f(e)-f(e)=f(e).If L=1, then some y⋗ e is in the order ideal f, so applying t_e does not change f.If R=0, then some y⋖ e is not in the order ideal f, so likewise applying t_e does not change f.So t_e(f)=h.Case 2:L≠R. Then L=0 and R=1 so all elements covered by e are in f and no element that covers e is in f.So t_e changes the label of e between 0 and 1.Since h(e)=1-f(e)=(t_e(f))(e), it follows that t_e(f)=h.A maximal chain of P is a chain that cannot be extended into a longer chain, i.e., a chain that starts at a minimal element, uses only cover relations, and ends at a maximal element. For each e∈ P, let _e(P) denote the set of all maximal chains (y_1,…,y_k) in P that contain e as some y_i.That is,_e(P) ={ (y_1,…,y_k) | m̂⋖ y_1 ⋖ y_2 ⋖⋯⋖ y_k ⋖M̂, e=y_ifor some i. }. Let g∈(P), e∈ P, and let h:P [0,1] be defined byh(x) = {[ g(x) if x≠e; 1 - max{. ∑_i=1^k g(y_i) | (y_1,…,y_k)∈_e(P) } if x=e ]..* If g∈(P), then h∈(P).* If g∈(P), then h=τ_e(g).. * Let g∈(P).Since h(x)=g(x) unless x=e, we only need to confirm that h(e)≥ 0 and that ∑_i=1^k h(y_i)≤ 1for all (y_1,…,y_k)∈_e(P).Since ∑_i=1^k g(y_i)≤ 1 for all chains containing e, h(e)≥ 0.Also for any (y_1,…,y_k)∈_e(P),∑_i=1^k h(y_i)= h(e)-g(e) + ∑_i=1^k g(y_i) = 1-max{. ∑_i=1^ℓ g(z_i) | (z_1,…,z_ℓ)∈_e(P) } - g(e)+∑_i=1^k g(y_i) ≤ 1-g(e) ≤ 1.So h∈(P). * Let g∈(P).If no y that is comparable with e (including e itself) is in g, then max{. ∑_i=1^k g(y_i) | (y_1,…,y_k)∈_e(P) }=0.In this case, h(e)=1=(τ_e(g))(e). Otherwise, some y comparable with e (possibly e itself) is in the antichain g.Then e is not in τ_e(g), either by removing e from g or by the inability to insert e into g.In this case, max{. 
max{ ∑_i=1^k g(y_i) | (y_1,…,y_k) ∈ _e(P) } = 1, so h(e) = 0 = (τ_e(g))(e).

As we have just shown, we can extend our earlier definitions of t_e: (P) → (P) and τ_e: (P) → (P) to t_e: OR(P) → OR(P) and τ_e: (P) → (P) below in Definitions <ref> and <ref>. While t_e: OR(P) → OR(P) and τ_e: (P) → (P) are now continuous and piecewise-linear functions, they correspond exactly to the earlier definitions when restricted to (P) and (P). So it is not ambiguous to use the same notation for the combinatorial and piecewise-linear toggles.

Given f ∈ OR(P) and e ∈ P, let t_e(f): P̂ → [0,1] be defined by

(t_e(f))(x) = f(x) if x ≠ e,
(t_e(f))(x) = max_y⋗e f(y) + min_y⋖e f(y) - f(e) if x = e.

This defines a map t_e: OR(P) → OR(P) because of Proposition <ref>. The group _OR(P) generated by {t_e | e ∈ P} is called the toggle group of OR(P). (Each t_e is an involution and thus invertible, as we will prove in Proposition <ref>(1), so we do obtain a group.)

For the poset P with elements named as on the left, we consider f ∈ OR(P). The dashed lines indicate f(m̂) and f(M̂) and their position within P̂. Then (t_E(f))(E) = max(0.1,0.1) + min(0.5,0.7) - 0.4 = 0.2 and (t_A(f))(A) = max(0.5,0.7) + 1 - 0.9 = 0.8.

[Figure: the poset P with elements A–I (left) and an f ∈ OR(P) with f(I)=0, f(G)=f(H)=0.1, f(D)=0.5, f(E)=0.4, f(F)=0.7, f(B)=0.5, f(C)=0.7, f(A)=0.9; dashed lines indicate f(M̂)=0 above and f(m̂)=1 below. Applying t_E changes f(E) to 0.2, while applying t_A to the original labeling changes f(A) to 0.8.]

We do not define toggles for m̂ and M̂; the values f(m̂) and f(M̂) are fixed across all of OR(P). We could generalize OR(P) and OP(P) and consider the order polytopes of [a,b]-labelings, where the values f(m̂) and f(M̂) are set to any a < b. However, these polytopes are just linear rescalings of OR(P) and OP(P). Furthermore, Einstein and Propp have extended these toggles from acting on order polytopes to acting on ℝ^P̂ <cit.>.

Given g ∈ (P) and e ∈ P, let τ_e(g): P → [0,1] be defined by

(τ_e(g))(x) = g(x) if x ≠ e,
(τ_e(g))(x) = 1 - max{ ∑_i=1^k g(y_i) | (y_1,…,y_k) ∈ _e(P) } if x = e.

This defines a map τ_e: (P) → (P) because of Proposition <ref>. The group _(P) generated by {τ_e | e ∈ P} is called the toggle group of (P). (Each τ_e is an involution and thus invertible, as we will prove in Proposition <ref>(1), so we do obtain a group.)

Every chain in _e(P) can be split into segments below e, e itself, and above e, and we can take the maximum sum of g on each part. So an equivalent formula for (τ_e(g))(e) is

(τ_e(g))(e) = 1 - max_y⋖e ((g))(y) - g(e) - max_y⋗e ((g))(y).

In Eq. (<ref>), note that we regard (g) as an order-preserving labeling, so ((g))(m̂) = 0. Similarly, we regard (g) as an order-reversing labeling, so ((g))(M̂) = 0. Also, note that since any g ∈ (P) has nonnegative labels, it would be equivalent in the definition of τ_e to use the set of all chains of P through e instead of the set _e(P) of maximal chains through e. We will use the definition with maximal chains for various reasons. For one, it gives us far fewer chains to worry about in computations, as in the following example.

For the poset P with elements named as on the left, we consider g ∈ (P). Then, summing the outputs of g along the maximal chains through F, we get the following.

g(A) + g(C) + g(F) + g(G) + g(I) = 0.2 + 0 + 0.1 + 0.1 + 0.1 = 0.5
g(A) + g(C) + g(F) + g(H) + g(J) = 0.2 + 0 + 0.1 + 0.2 + 0.1 = 0.6
g(A) + g(C) + g(F) + g(H) + g(K) = 0.2 + 0 + 0.1 + 0.2 + 0 = 0.5
g(B) + g(F) + g(G) + g(I) = 0.3 + 0.1 + 0.1 + 0.1 = 0.6
g(B) + g(F) + g(H) + g(J) = 0.3 + 0.1 + 0.2 + 0.1 = 0.7
g(B) + g(F) + g(H) + g(K) = 0.3 + 0.1 + 0.2 + 0 = 0.6

So τ_F changes the output value of F to 1 - max(0.5,0.6,0.5,0.6,0.7,0.6) = 1 - 0.7 = 0.3.
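Before moving on, here is a minimal computational sketch (not from the paper) of the toggle τ_e just defined. Everything in it is a hypothetical illustration: the poset is a small one of our own choosing, encoded by its cover relations, and τ_e replaces g(e) by 1 minus the largest sum of g along a maximal chain through e, exactly as above. Toggling twice returns the original labeling, which previews the involution property proven next.

```python
# Hypothetical 5-element poset, encoded by cover relations: covers[x] = elements covering x.
covers = {"a": ["c"], "b": ["c"], "c": ["d", "e"], "d": [], "e": []}

def maximal_chains(covers):
    """Enumerate all maximal chains: start at each minimal element, follow cover relations."""
    covered = {y for ys in covers.values() for y in ys}
    chains = []
    def grow(chain):
        if not covers[chain[-1]]:        # reached a maximal element: chain is maximal
            chains.append(chain)
        for y in covers[chain[-1]]:
            grow(chain + [y])
    for x in covers:
        if x not in covered:             # minimal elements of P
            grow([x])
    return chains

def toggle(g, e, chains):
    """Chain polytope toggle tau_e: only the label of e changes."""
    best = max(sum(g[y] for y in ch) for ch in chains if e in ch)
    h = dict(g)
    h[e] = 1.0 - best                    # 'best' already includes g(e) itself
    return h

g = {"a": 0.2, "b": 0.3, "c": 0.05, "d": 0.4, "e": 0.5}   # a point of the chain polytope
chains = maximal_chains(covers)
h = toggle(g, "c", chains)               # largest chain sum through c is 0.85, so h["c"] = 0.15
assert abs(toggle(h, "c", chains)["c"] - g["c"]) < 1e-12   # tau_e is an involution
```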
[Figure: the poset P with elements A–K (left) and a g ∈ (P) with g(A)=0.2, g(B)=0.3, g(C)=0, g(D)=0.6, g(E)=0.4, g(F)=0.1, g(G)=0.1, g(H)=0.2, g(I)=0.1, g(J)=0.1, g(K)=0; applying τ_F changes g(F) from 0.1 to 0.3.]

We now show that most of the algebraic properties of t_e and τ_e we proved for the combinatorial setting extend to the piecewise-linear setting.

* For x ∈ P, t_x and τ_x are involutions.
* Two toggles t_x, t_y commute if and only if neither x nor y covers the other.
* Two toggles τ_x, τ_y commute if and only if x = y or x ∥ y.

Proof.
* We start with t_x. Let f ∈ OR(P). Then t_x does not change the label of any vertex other than x, and

(t_x^2(f))(x) = max_y⋗x (t_x(f))(y) + min_y⋖x (t_x(f))(y) - (t_x(f))(x)
 = max_y⋗x f(y) + min_y⋖x f(y) - ( max_y⋗x f(y) + min_y⋖x f(y) - f(x) )
 = f(x),

so t_x^2(f) = f. Now we consider τ_x. Let g ∈ (P) and h = τ_x(g). Again, τ_x does not change the label of any vertex other than x, so it suffices to show that (τ_x(h))(x) = g(x). By Eq. (<ref>),

(τ_x(h))(x) = 1 - max_y⋖x ((h))(y) - h(x) - max_y⋗x ((h))(y)
 = 1 - max_y⋖x ((g))(y) - ( 1 - max_y⋖x ((g))(y) - g(x) - max_y⋗x ((g))(y) ) - max_y⋗x ((g))(y)
 = g(x).

Note that we are able to replace h with g in the second equality above for expressions that do not involve inputting x into g or h (the only input where g and h can differ).
* If x ⋖ y or y ⋖ x, then t_x t_y ≠ t_y t_x when restricted to (P) by Proposition <ref>, so they are also unequal over the larger set OR(P). If x = y, then t_x t_y = t_x t_x = t_y t_x. Now suppose neither x nor y covers the other and x ≠ y. Only the label of x can be changed by t_x and only the label of y can be changed by t_y. For f ∈ OR(P), the definition of (t_x(f))(x) only involves elements that cover x, elements covered by x, and x itself; this is similar for y in (t_y(f))(y). Thus, the label of x has no effect on what t_y does and the label of y has no effect on what t_x does. So t_x t_y = t_y t_x.
* If x = y, then τ_x τ_y = τ_x τ_x = τ_y τ_x. If x and y are comparable and unequal, then τ_x τ_y ≠ τ_y τ_x when restricted to (P) by Proposition <ref>, so they are also unequal over the larger set (P). Now suppose x ∥ y.
Only the label of x can be changed by τ_x and only the label of y can be changed by τ_y. No chain contains both x and y, so the label of x has no effect on what τ_y does and the label of y has no effect on what τ_x does. Thus, τ_x τ_y = τ_y τ_x.

For e ∈ P and S ⊆ P, we define t_e^* ∈ _(P), η_S ∈ _OR(P), η_e ∈ _OR(P), and τ_e^* ∈ _OR(P) exactly as we defined them in _(P) and _(P):

* Let t_e^* := τ_e_1 τ_e_2 ⋯ τ_e_k τ_e τ_e_1 τ_e_2 ⋯ τ_e_k, where e_1,…,e_k are the elements of P covered by e.
* Let η_S := t_x_1 t_x_2 ⋯ t_x_k, where (x_1,x_2,…,x_k) is a linear extension of the subposet {x ∈ P | x < y, y ∈ S} of P.
* Let η_e := η_{e}.
* Let τ_e^* := η_e t_e η_e^-1.

The following is an analogue of Theorems <ref> and <ref>.

For any e ∈ P, the following diagrams commute. So there is an isomorphism from _(P) to _OR(P) given by τ_e ↦ τ_e^*, with inverse t_e ↦ t_e^*.

[Commutative diagrams: in the first, the top row is (P) → (P) via t_e^* and the bottom row is OR(P) → OR(P) via t_e; in the second, the top row is (P) → (P) via τ_e and the bottom row is OR(P) → OR(P) via τ_e^*; in both, the vertical maps send (P) into OR(P).]

See Figure <ref> for an example demonstrating the first commutative diagram in this theorem.

Proof. We begin with the left commutative diagram. Let g ∈ (P). We must show that (t_e^* g) = t_e((g)). Throughout the proof, we make use several times of the fact that  only "looks up" while  only "looks down." By this we mean that, for any x ∈ P, the value of ((g))(x) depends only on g(y) for y ≥ x, whereas ((g))(x) depends only on g(y) for y ≤ x.

Suppose e is a minimal element of P. Then t_e^* = τ_e. By the definition of  and minimality of e, ((τ_e g))(x) = ((g))(x) for all x ≠ e. Thus, ((τ_e g))(x) = (t_e((g)))(x) for x ≠ e, so we only have to check at x = e. Since e is minimal, (τ_e g)(e) = 1 - ((g))(e), as can be seen from the definitions. By Eq. (<ref>),

((τ_e g))(e) = (τ_e g)(e) + max_y⋗e ((τ_e g))(y)
 = 1 - ((g))(e) + max_y⋗e ((g))(y)
 = ((g))(m̂) - ((g))(e) + max_y⋗e ((g))(y)
 = max_y⋗e ((g))(y) + min_y⋖e ((g))(y) - ((g))(e)
 = (t_e((g)))(e).

Now assume e is not minimal in P. Let e_1,…,e_k be the elements that e covers. Let

g' = τ_e τ_e_1 ⋯ τ_e_k g,  g'' = τ_e_1 ⋯ τ_e_k g' = t_e^* g,
f = (g),  f' = (g'),  f'' = (g'').

The goal is to show that f'' = t_e f. Note that g, g', g'' can only possibly differ in the labels of e, e_1, e_2, …, e_k. From the definition of  and the fact that t_e can only change the label of e, it follows that t_e f and f'' can only possibly differ in the labels of elements ≤ e.

We begin by proving f''(e) = (t_e f)(e). From Eq. (<ref>) and the fact that e_1, e_2, …, e_k are pairwise incomparable, so each chain can contain at most one of them,

(τ_e_1⋯τ_e_k g)(e_j) = 1 - max_y⋖e_j ((g))(y) - g(e_j) - max_y⋗e_j ((g))(y)
 = 1 - max_y⋖e_j ((g))(y) - f(e_j)   (the last two terms equal -f(e_j) by Eq. (<ref>))

for 1 ≤ j ≤ k. Then, to get g'(e), we apply Eq. (<ref>) to τ_e_1⋯τ_e_k g instead of g, yielding

g''(e) = g'(e)
 = 1 - max_e_i⋖e ((τ_e_1⋯τ_e_k g))(e_i) - (τ_e_1⋯τ_e_k g)(e) - max_y⋗e ((τ_e_1⋯τ_e_k g))(y)
 = 1 - max_e_i⋖e ((τ_e_1⋯τ_e_k g))(e_i) - g(e) - max_y⋗e ((g))(y)   (as  only looks up)
 = 1 - max_e_i⋖e ((τ_e_1⋯τ_e_k g))(e_i) - f(e)   (by Eq. (<ref>))
 = 1 - max_e_i⋖e ( max_y⋖e_i ((τ_e_1⋯τ_e_k g))(y) + (τ_e_1⋯τ_e_k g)(e_i) ) - f(e)   (from Eq. (<ref>))
 = 1 - max_e_i⋖e ( max_y⋖e_i ((g))(y) + (τ_e_1⋯τ_e_k g)(e_i) ) - f(e)   (since  only looks down)
 = 1 - max_e_i⋖e ( max_y⋖e_i ((g))(y) + 1 - max_y⋖e_i ((g))(y) - f(e_i) ) - f(e)   (from Eq. (<ref>))
 = min_e_i⋖e f(e_i) - f(e).

Then, using Eq. (<ref>),

f''(e) = g''(e) + max_y⋗e f''(y) = g''(e) + max_y⋗e f(y) = min_e_i⋖e f(e_i) - f(e) + max_y⋗e f(y) = (t_e f)(e).

Now we will prove that f''(x) = f(x) = (t_e f)(x) for every x < e, using downward induction on x. We begin with the base case x ⋖ e. From Eq. (<ref>),

g''(e_j) = 1 - max_y⋖e_j ((g'))(y) - g'(e_j) - max_y⋗e_j ((g'))(y)
 = 1 - max_y⋖e_j ((g))(y) - ( 1 - max_y⋖e_j ((g))(y) - f(e_j) ) - max_y⋗e_j f'(y)   (from Eq. (<ref>))
 = f(e_j) - max_y⋗e_j f'(y)
 = f(e_j) - max_y⋗e_j f''(y)

for 1 ≤ j ≤ k. Note that the last equality holds because f'(y) and f''(y) only depend on g'(x) and g''(x) for x ≥ y; since g'(x) = g''(x) for x ≥ y > e_j, we have f'(y) = f''(y) for such y. Continuing, Eq. (<ref>) yields

f''(e_j) = g''(e_j) + max_y⋗e_j f''(y) = f(e_j) - max_y⋗e_j f''(y) + max_y⋗e_j f''(y) = f(e_j) = (t_e f)(e_j).

Now let x < e and x ∉ {e_1,e_2,…,e_k}. Assume (as induction hypothesis) that f''(y) = f(y) = (t_e f)(y) for every y covering x (which cannot include y = e since x ∉ {e_1,…,e_k}). Again using Eq. (<ref>),

f''(x) = g''(x) + max_y⋗x f''(y) = g(x) + max_y⋗x f(y) = f(x) = (t_e f)(x).

For the second equality above, recall that g(x) = g''(x) because x ≠ e, e_1, …, e_k. This concludes the proof of the left commutative diagram.

The right commutative diagram is an analogue of Theorem <ref>. The proof of that theorem (as well as Lemma <ref>) only depended on algebraic properties of _(P) and _(P), namely that toggles are involutions, when toggles commute, and Theorem <ref>. We have proven analogues of these for _(P) and _OR(P) in Proposition <ref> and this theorem's first commutative diagram. Thus the proof of the second commutative diagram is the same as that of Theorem <ref>.

We will not prove the following piecewise-linear analogue of Proposition <ref> here. The result is essentially <cit.>. In that paper, piecewise-linear rowmotion is defined in terms of toggles and proven to be equivalent to the composition of three maps (our definition of _OR). There they define rowmotion on OP(P), not OR(P), so there is a change of notation between this paper and <cit.>, given by Θ=, ρ_P=_OP^-1=∘_OR∘, ∇=^-1, Δ=^-1.

Let (x_1,x_2,…,x_n) be any linear extension of a finite poset P. Then _OR = t_x_1 t_x_2 ⋯ t_x_n.

We use this to prove a similar expression for _ that is analogous to Proposition <ref>.

Let (x_1,x_2,…,x_n) be any linear extension of a finite poset P. Then _ = τ_x_n ⋯ τ_x_2 τ_x_1.

Proof. The isomorphism from _(P) to _OR(P) given by τ_e ↦ τ_e^* sends _ to _OR. This follows from Theorem <ref> and the commutative diagram at the end of Subsection <ref>. Therefore, it suffices to show that τ_x_n^* ⋯ τ_x_2^* τ_x_1^* = _OR = t_x_1 t_x_2 ⋯ t_x_n.

We will use induction to prove that τ_x_k^* ⋯ τ_x_2^* τ_x_1^* = t_x_1 t_x_2 ⋯ t_x_k for 1 ≤ k ≤ n. For the base case, τ_x_1^* = t_x_1 since x_1 is a minimal element of P. For the induction hypothesis, let 1 ≤ k ≤ n-1 and assume that τ_x_k^* ⋯ τ_x_2^* τ_x_1^* = t_x_1 t_x_2 ⋯ t_x_k. Then

τ_x_k+1^* τ_x_k^* ⋯ τ_x_2^* τ_x_1^* = η_x_k+1 t_x_k+1 η_x_k+1^-1 t_x_1 t_x_2 ⋯ t_x_k.

Let (y_1,…,y_k') be a linear extension of the subposet {y ∈ P | y < x_k+1} of P. Then, since (x_1,…,x_n) is a linear extension of P, all of y_1,…,y_k' must be in {x_1,…,x_k}. Furthermore, any element less than one of y_1,…,y_k' must be less than x_k+1, so none of the elements of {x_1,…,x_k} outside of {y_1,…,y_k'} are less than any of y_1,…,y_k'.
Therefore, we can name these elements in such a way that (y_1,…,y_k', y_k'+1, …, y_k) is a linear extension of {x_1,…,x_k}. We remind the reader of Remark <ref>: any two linear extensions of a poset differ by a sequence of swaps between adjacent incomparable elements <cit.>. Toggles of incomparable elements commute, so t_x_1 t_x_2 ⋯ t_x_k = t_y_1 ⋯ t_y_k' t_y_k'+1 ⋯ t_y_k. From Eq. (<ref>) and η_x_k+1 = t_y_1 ⋯ t_y_k', we obtain

τ_x_k+1^* τ_x_k^* ⋯ τ_x_2^* τ_x_1^* = η_x_k+1 t_x_k+1 η_x_k+1^-1 t_x_1 t_x_2 ⋯ t_x_k
 = t_y_1 ⋯ t_y_k' t_x_k+1 t_y_k' ⋯ t_y_1 t_y_1 ⋯ t_y_k' t_y_k'+1 ⋯ t_y_k
 = t_y_1 ⋯ t_y_k' t_x_k+1 t_y_k'+1 ⋯ t_y_k
 = t_y_1 ⋯ t_y_k' t_y_k'+1 ⋯ t_y_k t_x_k+1
 = t_x_1 t_x_2 ⋯ t_x_k t_x_k+1.

In the fourth equality above, we could move t_x_k+1 to the right of t_y_k'+1 ⋯ t_y_k because x_k+1 is incomparable with each of y_k'+1,…,y_k: none of these are less than x_k+1 by design, nor greater than x_k+1 by their position within the linear extension (x_1,…,x_n) of P. By induction, we have τ_x_n^* ⋯ τ_x_2^* τ_x_1^* = t_x_1 t_x_2 ⋯ t_x_n = _OR, so τ_x_n ⋯ τ_x_2 τ_x_1 = _.

All of Subsection <ref> about graded posets also extends to the piecewise-linear toggling (with , , and  replaced with , OR, and  respectively), because those results all used algebraic properties that we have proven also hold for the piecewise-linear toggles. In Figure <ref>, we demonstrate _ and _OR in terms of toggles, but we toggle by ranks since this is a graded poset (as in Corollary <ref>).

§.§ Toggling the chain polytope of a zigzag poset

In <cit.>, Roby and the author analyze toggling within the set of independent sets of a path graph. The set of independent sets of a path graph with n vertices can easily be seen to be the same as the set of antichains of a zigzag poset with n elements.

The zigzag poset (or fence poset) with n elements, denoted Z_n, is the poset consisting of elements a_1,…,a_n and relations a_2i-1 < a_2i and a_2i+1 < a_2i. Zigzag posets have Hasse diagrams that can be drawn in a zigzag formation.

[Figure: the Hasse diagrams of Z_6 (elements a_1,…,a_6) and Z_7 (elements a_1,…,a_7), drawn in zigzag formation.]

The main results in <cit.> pertain to the homomesy phenomenon. First isolated by Propp and Roby in <cit.>, this phenomenon has proven to be quite widespread in combinatorial dynamical systems consisting of a set with an invertible action.

Suppose we have a set S, an invertible map w: S → S such that every w-orbit is finite, and a function ("statistic") f: S → K, where K is a field of characteristic 0.
If there exists a constant c ∈ K such that for every w-orbit 𝒪 ⊆ S,

(1/#𝒪) ∑_x∈𝒪 f(x) = c,

then we say the statistic f is homomesic with average c (or c-mesic for short) under the action of w on S.

Below we restate <cit.> in terms of antichains of Z_n.

Consider the zigzag poset Z_n. Let w be a product of each of the antichain toggles τ_a_1,…,τ_a_n, each used exactly once in some order (called a Coxeter element), and consider the action of w on (Z_n). For 1 ≤ j ≤ n, let I_j: (Z_n) → {0,1} be the function defined as

I_j(A) = 0 if a_j ∉ A,
I_j(A) = 1 if a_j ∈ A.

* The statistic I_j - I_n+1-j is 0-mesic for every 1 ≤ j ≤ n.
* The statistics 2I_1 + I_2 and I_n-1 + 2I_n are both 1-mesic.

On Z_6, let φ = τ_a_6 τ_a_5 τ_a_4 τ_a_3 τ_a_2 τ_a_1 ∈ _(Z_6) be the composition that toggles each element from left to right. In Figure <ref>, the three φ-orbits are shown. According to Theorem <ref>(2), the statistic 2I_1 + I_2 is 1-mesic under the action of φ; that is, 2I_1 + I_2 has average 1 across every orbit. We can verify this by computing the averages

(2(1)+1)/3 = 1,  (2(3)+1)/7 = 1,  (2(4)+3)/11 = 1.

Also, Theorem <ref>(1) says that I_2 - I_5 has average 0 across every orbit, which we can also verify:

(1-1)/3 = 0,  (1-1)/7 = 0,  (3-3)/11 = 0.

As one may expect, some results for antichain toggling continue to hold for chain polytope toggling, and some do not. The example in Figure <ref> serves as a counterexample to Theorem <ref>(1): e.g., the average of h ↦ h(3) - h(6) across this orbit is (1/20)(13/2 - 6) = 1/40 ≠ 0. On the other hand, Theorem <ref>(2) still holds for this orbit: e.g., the average of h ↦ 2h(1) + h(2) is (2(8)+4)/20 = 1. Indeed, this result can be extended to chain polytope toggles.

There is an issue with this notion, however. By toggling within the finite set (P) of a poset, all orbits are guaranteed to be finite. The chain polytope, on the other hand, is infinite, so there is no guarantee that orbits have finite order. It is believed that for n ≥ 6, there are infinite orbits under a Coxeter element. [Using results in <cit.>, one can prove that birational rowmotion on Z_n has finite order for n ≤ 5. This implies _OR has finite order, and therefore _ = ^-1 ∘ _OR ∘  does too. However, for n = 7, birational rowmotion has infinite order <cit.>, so _OR may have infinite order. All Coxeter elements in _(P) are conjugate by <cit.>, so the order of toggling does not affect the order. David Einstein and James Propp have made progress in proving infinite order of _OR for Z_n with n ≥ 6, though details are still being worked out.]

The original definition of homomesy requires orbits to be finite. Nonetheless, we can generalize to orbits that need not be finite, where the average value of the statistic f, as w is iterated N times, approaches a constant c. This asymptotic generalization, first considered by Propp and Roby, has been used by Vorland for actions on order ideals of infinite posets.

Suppose we have a set S, a map w: S → S, and a function ("statistic") f: S → ℝ. If there exists a real number c such that for every x ∈ S,

lim_N→∞ (1/N) ∑_i=0^N-1 f(w^i(x)) = c,

then we say that f is homomesic with average c (or c-mesic) under the action of w on S.

Below is a generalization of Theorem <ref>(2) to (Z_n). Despite the numerous homomesy results for finite sets, this result is notable as one of the few known instances of asymptotic homomesy for orbits that are probably not always finite.

Let n ≥ 2, and let τ_a_1, …, τ_a_n be chain polytope toggles on (Z_n).
* Let w ∈ _(Z_n) be a composition of toggles in which τ_a_1 appears exactly once, τ_a_2 at most once, and other toggles can appear any number of times (possibly none). Then the statistic g ↦ 2g(a_1) + g(a_2) is 1-mesic under the action of w on (Z_n).
* Let w ∈ _(Z_n) be a composition of toggles in which τ_a_n appears exactly once, τ_a_n-1 at most once, and other toggles can appear any number of times (possibly none). Then the statistic g ↦ g(a_n-1) + 2g(a_n) is 1-mesic under the action of w on (Z_n).

Proof. We only prove (1), as the proof of (2) is analogous. Let w be as in the theorem. Let g_0 ∈ (Z_n) and define g_i := w^i(g_0). Notice that a_1 ⋖ a_2 is the only maximal chain containing a_1. So, for any g ∈ (Z_n),

(τ_a_1 g)(a_1) = 1 - g(a_1) - g(a_2).

There are two cases.

Case 1: The toggle τ_a_2 is performed either after τ_a_1 or not at all while applying w. Then, when applying w to g_i, the labels of a_1 and a_2 are unchanged when the toggle τ_a_1 is applied. Thus, by Eq. (<ref>), (τ_a_1 g_i)(a_1) = 1 - g_i(a_1) - g_i(a_2). Since a_1 is only toggled once in w, we have

g_i+1(a_1) = 1 - g_i(a_1) - g_i(a_2).

Using this formula in the third equality below,

lim_N→∞ (1/N) ∑_i=0^N-1 (2g_i(a_1) + g_i(a_2))
 = lim_N→∞ (1/N) ∑_i=0^N-1 (g_i(a_1) + g_i(a_1) + g_i(a_2))
 = lim_N→∞ (1/N) ( g_0(a_1) - g_N(a_1) + ∑_i=0^N-1 (g_i+1(a_1) + g_i(a_1) + g_i(a_2)) )
 = lim_N→∞ (1/N) ( g_0(a_1) - g_N(a_1) + ∑_i=0^N-1 1 )
 = lim_N→∞ (1/N) ( g_0(a_1) - g_N(a_1) + N )
 = 1 + lim_N→∞ (1/N) ( g_0(a_1) - g_N(a_1) )
 = 1 + 0 = 1.

The limit calculation follows from the Squeeze Theorem, since g_N(a_1) is between 0 and 1 for all N by the chain polytope's definition.

Case 2: The toggle τ_a_2 is performed before τ_a_1 while applying w. Recall that τ_a_2 appears only once in w, so the label of a_2 after applying τ_a_2 to g_i is g_i+1(a_2). Then, using Eq. (<ref>), (τ_a_1 g_i)(a_1) = 1 - g_i(a_1) - g_i+1(a_2). Since a_1 is only toggled once in w, we have

g_i+1(a_1) = 1 - g_i(a_1) - g_i+1(a_2).

Using this (with i-1 in place of i) in the third equality below,

lim_N→∞ (1/N) ∑_i=0^N-1 (2g_i(a_1) + g_i(a_2))
 = lim_N→∞ (1/N) ∑_i=0^N-1 (g_i(a_1) + g_i(a_1) + g_i(a_2))
 = lim_N→∞ (1/N) ( g_N-1(a_1) - g_-1(a_1) + ∑_i=0^N-1 (g_i(a_1) + g_i-1(a_1) + g_i(a_2)) )
 = lim_N→∞ (1/N) ( g_N-1(a_1) - g_-1(a_1) + ∑_i=0^N-1 1 )
 = lim_N→∞ (1/N) ( g_N-1(a_1) - g_-1(a_1) + N )
 = 1 + lim_N→∞ (1/N) ( g_N-1(a_1) - g_-1(a_1) )
 = 1.

§ FUTURE DIRECTIONS

As mentioned before, Einstein and Propp (and others) have generalized the piecewise-linear toggles t_e further to the birational setting, an idea they credit to Kirillov and Berenstein <cit.>. In that generalization, the poset labels are elements of a semifield (e.g., ℝ^+), and in the definition of toggling, 0, addition, subtraction, max, and min are respectively replaced with 1, multiplication, division, addition, and a "parallel summation." Any result that holds true using the semifield axioms (so no use of subtraction nor additive inverses) holds for the piecewise-linear toggling by working over the tropical semiring discussed in <cit.>. This gives a fruitful technique for proving results about piecewise-linear toggling or even just combinatorial toggling. The author believes that the toggles τ_e we have defined for (P) can similarly be generalized to the birational setting. This will likely prove useful in studying τ_e on (P) or (P) and is the next direction in which the author has begun to collaborate with others for further research.

Also worth noting is that many of the results proved here hold at the purely group-theoretic level. For example, the proofs of Theorem <ref>, Lemma <ref>, and every result in Subsection <ref> only rely on the algebraic properties proven previously.
They use the properties that toggles are involutions, the conditions for commutativity of toggles, and the homomorphism from _(P) to _(P) (proven later to be an isomorphism) given by t_e ↦ t_e^* of Theorem <ref>. Due to this, the piecewise-linear analogues extend automatically after proving analogues of these algebraic conditions. So one could abstract from _(P) to a generic group G generated by involutions {γ_e | e ∈ P}, with the relation that γ_x and γ_y commute if x and y are incomparable (so γ_e mimics τ_e). If one defines h_e := γ_e_1 ⋯ γ_e_k γ_e γ_e_1 ⋯ γ_e_k, for e_1,…,e_k the elements e covers (so h_e mimics t_e^*), and adds relations in G that h_x and h_y commute when neither x nor y covers the other, then in G one automatically obtains analogues of several results discussed here. Exploring this generalization may prove useful, and it may even be possible that this idea could be naturally extended from posets to a larger class of objects (such as directed graphs, which generalize Hasse diagrams).

§ ACKNOWLEDGEMENTS

The author thanks Jessica Striker for motivating the study of generalized toggle groups. The author is also grateful to David Einstein, James Propp, and Tom Roby for many helpful conversations about dynamical algebraic combinatorics over the years. Additionally, the author thanks the group he worked with on toggling noncrossing partitions, which includes the aforementioned Einstein and Propp, as well as Miriam Farber, Emily Gunawan, Matthew Macauley, and Simon Rubinstein-Salzedo. The author began to discover the results of this paper after exploring whether the homomesy for toggling noncrossing partitions also holds for nonnesting partitions. The author is also quite grateful to an anonymous referee whose multiple careful readings and numerous helpful comments have been of great assistance in improving the paper, and for suggesting one of the mentioned directions for future research.
http://arxiv.org/abs/1709.09331v3
{ "authors": [ "Michael Joseph" ], "categories": [ "math.CO", "05E18" ], "primary_category": "math.CO", "published": "20170927044530", "title": "Antichain toggling and rowmotion" }
http://arxiv.org/abs/1709.08994v2
{ "authors": [ "Roberto Garra", "Andrea Giusti", "Francesco Mainardi" ], "categories": [ "math-ph", "cond-mat.mtrl-sci", "math.MP", "26A33, 34K37, 76R50" ], "primary_category": "math-ph", "published": "20170926131044", "title": "The fractional Dodson diffusion equation: a new approach" }
Characterization of Omega-WINGS galaxy clusters. I. Stellar light and mass profiles

S. Cariddi^1, M. D'Onofrio^1,2, G. Fasano^2, B.M. Poggianti^2, A. Moretti^2, M. Gullieuszik^2, D. Bettoni^2, M. Sciarratta^1

^1 Dipartimento di Fisica e Astronomia "Galileo Galilei", Università degli Studi di Padova, Vicolo dell'Osservatorio 3, 35122 Padova, Italy
^2 INAF-OAPD, Vicolo dell'Osservatorio 5, 35122 Padova, Italy

Received July 19, 2017; Accepted September 21, 2017

Galaxy clusters are the largest virialized structures in the observable Universe. Knowledge of their properties provides much useful astrophysical and cosmological information. Our aim is to derive the luminosity and stellar mass profiles of the nearby galaxy clusters of the Omega-WINGS survey and to study the main scaling relations valid for such systems. We merged data from the WINGS and Omega-WINGS databases, sorted the sources according to the distance from the brightest cluster galaxy (BCG), and calculated the integrated luminosity profiles in the B and V bands, taking into account extinction, photometric and spatial completeness, K correction, and background contribution. Then, by exploiting the spectroscopic sample, we derived the stellar mass profiles of the clusters. We obtained the luminosity profiles of 46 galaxy clusters, reaching r_200 in 30 cases, and the stellar mass profiles of 42 of our objects. We successfully fitted all the integrated luminosity growth profiles with one or two embedded Sérsic components, deriving the main cluster parameters. Finally, we checked the main scaling relations among the cluster parameters in comparison with those obtained for a selected sample of early-type galaxies (ETGs) of the same clusters. We found that the nearby galaxy clusters are non-homologous structures, like ETGs, and exhibit a color-magnitude (CM) red-sequence relation very similar to that observed for galaxies in clusters. These properties are not expected in the current cluster formation scenarios. In particular, the existence of a CM relation for clusters, shown here for the first time, suggests that the baryonic structures grow and evolve in a similar way at all scales.

§ INTRODUCTION

Galaxy clusters are the largest virialized structures that we observe in the Universe. Their study offers the possibility of significantly improving our understanding of many astrophysical and cosmological problems (e.g., <cit.>). For example, the determination of their masses and density profiles is of fundamental importance for determining the dark matter (DM) content and distribution in the galaxy halos, the mechanisms underlying the formation and evolution of the structure, and the fraction of baryons inside clusters. Furthermore, each of these astrophysical questions is linked to many others. For example, knowledge of the baryon fraction is crucial for understanding baryonic physics and correctly calibrating cosmological simulations. It follows that much of our present understanding of the Universe is based on accurate measurements of galaxy cluster properties.

Everyone working in this field knows how difficult it is to determine the luminosity profiles of galaxy clusters because of galactic extinction, background galaxy contamination, completeness of the data, and membership uncertainty. These difficulties are the reason for the relatively small number of cluster luminosity determinations (see, e.g., <cit.>).
Even more difficult is the estimate of cluster masses and mass profiles, which are biased by various effects depending on the applied methods. The masses inferred from either X-ray (e.g., <cit.>) or optical data are, for example, based on the assumption of dynamical equilibrium, while those obtained by gravitational lensing (e.g., <cit.>) require a good knowledge of the geometry of the potential well. Discrepancies by a factor of 2-3 between the masses obtained by various methods have been reported (e.g., <cit.>).

Today, thanks to the large field of view of many optical cluster surveys, such as the Sloan Digital Sky Survey (e.g., <cit.>) and the Canada-France-Hawaii Telescope Legacy Survey (e.g., <cit.>), the idea of reconstructing the stellar mass profiles of galaxy clusters starting from their integrated luminosity profiles has become feasible. The optical data of modern surveys have drastically reduced the problems mentioned above affecting the precision of light profile measurements. In particular, some of the techniques already used to derive the surface brightness distribution of ETGs have now been adapted to the case of galaxy clusters. These systems are already known to share with ETGs many scaling relations (see, e.g., <cit.>) that might provide useful insights into formation mechanisms and evolutionary processes. Their existence is expected on the basis of simple models of structure formation, such as the gravitational collapse of density fluctuations of collisionless DM halos. The <cit.> model, for example, predicts that all the existing collapsed DM halos are virialized and characterized by a constant mean density, which depends on the critical density of the Universe at that redshift and on the adopted cosmology <cit.>. If DM halos are structurally homologous systems with similar velocity dispersion profiles, as cosmological simulations predict <cit.>, and if the light profiles of the clusters trace the DM potential, then looking at the projected properties of galaxy clusters we expect to find many of the scaling relations observed in ETGs. This is the case, for example, for the fundamental plane relation, i.e., the relation involving the effective surface brightness I_eff, the effective radius r_eff, and the velocity dispersion σ <cit.>, which appears to originate from a common physical mechanism valid both for ETGs and clusters <cit.>.

The combined analysis of the scaling relations of ETGs and galaxy clusters could provide important information concerning the mass assembly at different scales. With this paper we start a series of works aimed at addressing this issue. The first work is dedicated to the problem of the accurate determination of the stellar light and stellar mass profiles of galaxy clusters. We exploit for this goal the large optical and spectroscopic database provided by the WIde-field Nearby Galaxy-cluster Survey (WINGS; <cit.>) and its Omega-WINGS extension (<cit.>), which is available for nearby galaxy clusters (0.04 ≲ z ≲ 0.07). The first step carried out here provides the integrated luminosity (and stellar mass) profiles for 46 (42) nearby clusters.

The paper is organized as follows: In Section <ref> we present the characteristics of our spectro-photometric data sample; in Section <ref> we merge the data of the two photometric surveys to maximize the spatial coverage and we derive the integrated luminosity profiles, color profiles, and surface brightness profiles; in this analysis we take into account the completeness effects, K correction, and background subtraction.
Finally, we calculate also the total flux coming from the faint end of the cluster luminosity function. In Section <ref> we derive the stellar mass profiles starting from the previously derived luminosity profiles and the spectroscopic data of the surveys, which were already used to get the cluster membership and the spectrophotometric masses <cit.>. Finally, we successfully reconstruct the integrated stellar mass profiles up to r = r_200 for 30 of our 46 clusters, and up to r = 2 r_200 for 3 of these clusters. In Section <ref> we discuss the fitting procedure used to reproduce the integrated luminosity profiles of our clusters through a Sérsic <cit.> law. From these fits we obtain the main photometric parameters (n, r_eff, I_eff) and we use these parameters to construct the main scaling relations of clusters, which are compared with those already found for ETGs (Section <ref>). In Section <ref> we present a general discussion and our conclusions.

§ DATA SAMPLE

The WINGS survey is a spectrophotometric wide-field survey of 76 galaxy clusters selected from the ROSAT X-ray-brightest Abell-type Cluster Sample Survey <cit.> and its extensions <cit.>. It consists of B- and V-band observations (34' × 34' field of view; FoV) obtained with the Wide Field Camera on the INT and the 2.2 m MPG/ESO telescopes <cit.>, J- and K-band images obtained with the Wide Field CAMera at UKIRT <cit.>, and U-band observations performed with the INT, LBT, and BOK telescopes <cit.>. Spectroscopic observations were performed by <cit.> for a subsample of 6137 galaxies using 2dF-AAT and WYFFOS-WHT.

The unicity of WINGS lies in the combination of the size of the sample with the depth of the observations. However, the original project only covered the cluster cores, up to ∼0.6 r_200 in almost all cases. Thus, WINGS alone does not allow a proper study of the transition regions between the cluster cores and the field. This is a severe limitation. In fact, several studies have proved that many galaxy properties are a function of the clustercentric distance (e.g., <cit.>). In particular, <cit.> found that the morphology-density relation <cit.> holds only in the cores of WINGS clusters.

In order to overcome the limited FoV problem, an extension of the original survey was performed with OmegaCAM at VST. B- and V-band data for 46 WINGS clusters were obtained by <cit.> (see, e.g., Figure <ref>); these data cover an area of 1° × 1°. A spectroscopic follow-up of 17985 galaxies in 34 clusters with the AAOmega spectrograph at AAT was also performed by <cit.>. Table <ref> in Appendix shows a recap of the observations carried out in the WINGS and Omega-WINGS surveys.

The photometric and spectroscopic WINGS/Omega-WINGS catalogs are now available on the Virtual Observatory <cit.>. The WINGS database includes not only the magnitudes of the galaxies in the field, but also important quantities derived from the photometric and spectroscopic analyses, such as effective radii and surface brightness, flattening, masses, light indexes, and velocity dispersions.
In this work we used the following data sets: WINGS photometric B- and V-band data, WINGS spectroscopic data, Omega-WINGS B- and V-band data, and Omega-WINGS spectroscopic data.

The V- and B-band magnitudes used in this work are the SExtractor AUTO magnitudes (see <cit.> for further details), whose V-band completeness was calculated by <cit.> for the WINGS sample (90% completeness threshold at m_V ∼ 21.7 mag) and by <cit.> for the Omega-WINGS sample (90% completeness threshold at m_V ∼ 21.2 mag).

The objects in the catalogs are divided into three different categories: stars, galaxies, and unknown, according to their SExtractor stellarity index and a number of diagnostic diagrams used to improve the classifications (details in <cit.>). We rejected the stars and focused on galaxies and unknown objects.

The spectroscopic redshifts of our galaxies were measured using a semi-automatic method, and from them the mean redshift and the rest-frame velocity dispersion of each cluster were derived <cit.>. The latter were obtained using a clipping procedure: the galaxy members are those lying within 3 root-mean-squares (RMS) of the cluster redshift. The r_200 radius was computed as in <cit.> and used to scale the distances from the BCG. A correction for both geometrical and magnitude incompleteness was applied to the spectroscopic catalog, using the ratio of the number of spectra yielding a redshift to the total number of galaxies in the parent photometric catalog, calculated as a function of both the V-band magnitude and the projected radial distance from the BCG <cit.>. Owing to the limits of the spectroscopic survey, our stellar mass profile analysis is restricted here to only 42 out of 46 clusters.

§ PHOTOMETRIC PROFILES OF CLUSTERS

To build the photometric profiles of each cluster we performed the following steps: the WINGS and Omega-WINGS catalogs were merged; a cross-match between the photometric and spectroscopic catalogs was performed, and all the galaxies classified as "non members" in the latter were removed from the main source catalog and saved into a rejected-objects catalog; for each object in the main catalog, the integrated intensity within a circular area centered on the BCG was calculated, taking into account the magnitude m_i, position r_i, and completeness cc_i (see below); the field intensity per square degree I_field was calculated starting from the <cit.> number counts, taking into account the already rejected objects; finally, after sorting the galaxies for increasing cluster-centric distance, the integrated intensity growth curve of each cluster was derived according to the following formula:

I(r_n) = ∑_i=1^n ( cc_i · 10^-0.4 m_i ) - π r_n^2 · I_field ,

where i is the index associated with each catalog object and r_n the distance of the n-th object from the BCG. The intensity profiles were transformed into integrated magnitude, (B-V) color, and surface brightness profiles. The K correction was applied to each radial bin, according to the color index of the galaxy population in the bin and the mean redshift of the cluster. (A minimal code sketch of this growth-curve construction is given at the end of this section.)

We now analyze the previous points in detail. In particular, Section <ref> focuses on the preliminary work carried out on the two catalogs, Section <ref> on the completeness correction calculation, and Section <ref> on the determination of the field galaxy contribution, while Section <ref> describes the photometric profile construction. Finally, Section <ref> deals with the calculation of the faint-object correction that is later used for correctly deriving the stellar mass profiles.
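As anticipated above, the growth-curve formula is simple enough to sketch in a few lines of Python. The sketch below is not the paper's pipeline: array names are hypothetical, distances are assumed to be in degrees (so that I_field is per square degree), and the completeness corrections cc_i are assumed to have been computed beforehand as described in the following subsections.

```python
import numpy as np

def growth_curve(m, r, cc, I_field):
    """Integrated intensity profile I(r_n): completeness-weighted fluxes of the
    member candidates, cumulated in order of BCG distance, minus the statistical
    field contribution scaled by the enclosed area.

    m       : apparent magnitudes of the catalog sources
    r       : projected distances from the BCG (degrees)
    cc      : completeness corrections cc_i of each source
    I_field : field intensity per square degree
    """
    order = np.argsort(r)                       # sort by cluster-centric distance
    r_sorted = r[order]
    flux = cc[order] * 10.0 ** (-0.4 * m[order])
    I_members = np.cumsum(flux)                 # sum_{i<=n} cc_i * 10^(-0.4 m_i)
    return r_sorted, I_members - np.pi * r_sorted**2 * I_field

# Usage (hypothetical arrays): the result can then be rebinned every 0.05 r_200
# and converted to integrated magnitudes via M = -2.5 * log10(I) + const.
```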
§.§ Preliminary work

The Omega-WINGS images are typically four times larger than the WINGS images. These images also have broader gaps between the CCDs, with the central cluster regions usually lying out of the FoV (see, e.g., Figure <ref>). In order to combine the larger Omega-WINGS FoV with the WINGS information about the central regions, we merged the two catalogs. For the objects in common between the two catalogs, we decided to use the WINGS original magnitude, since its photometry is the most precise. Then we rejected all the sources classified as "stars", keeping only galaxies and unknown objects. Finally, we removed all galaxies classified by <cit.> as "non members" via a cross-check with the spectroscopic catalog. These objects were moved to a rejected-sources catalog that was later used for improving the statistical field subtraction, as described in Section <ref>.

§.§ Completeness correction

The detection rate is the probability to observe a source as a function of a series of parameters, the most important of which, in our case, are its magnitude and position. Starting from this definition, we obtained the completeness correction cc_i of the i-th object as the inverse of the detection rate of an object of magnitude m_i and distance from the BCG r_i, times the probability that this object is a galaxy. We can compute our completeness correction as the product of three terms: first, the photometric completeness correction c_ph(m_i), i.e., the inverse of the probability to observe an object of magnitude m_i; second, the areal correction ac(r_i), i.e., the inverse of the probability that an object with distance r_i from the BCG lies inside the FoV; and third, the probability that the considered object is a galaxy, c_un(m_i), which is 1 for objects classified as "galaxies" and a function of m_i for unknown objects. Summarizing,

cc_i = c_ph(m_i) · ac(r_i) · c_un(m_i).

Each term is now analyzed in detail. We start from c_ph. The function c_ph(m_i) for the V-band WINGS data was calculated by <cit.> through the detection rate of artificial stars randomly added to the WINGS images. The detection rate of galaxies likely follows a slightly different trend, as they are not point sources and their detection probability is not only a function of their magnitude, but also depends on a series of parameters (e.g., morphology, compactness, and inclination) whose simulated distribution should correctly match the observed distribution. This introduces an uncertainty in any measurement of the photometric completeness correction that we wanted to avoid. For this reason we decided to introduce a photometric cut at m_V = 20 mag, which is the limit within which the spectroscopic sample is representative of the photometric sample. At this magnitude the detection rate found by <cit.> is equal to 97% and galaxies are easily distinguished from stars. This allows us to assume that second-order dependences, connected to the above-mentioned galaxy parameters, cannot significantly modify the artificially calculated completeness.

The Omega-WINGS V-band completeness, instead, was calculated by <cit.> as a function of the WINGS V-band completeness by comparing the number of objects in each magnitude bin, after matching the total number of sources in the magnitude range 16 mag < m_V < 21 mag to account for the different sky coverage. These authors found that if we limit our analysis to objects brighter than m_V = 20 mag, the detection rate of the two surveys is the same.
All these considerations allowed us to safely assume a V-band photometric correction c_ph = 1 for every object and discard galaxies fainter than m_V = 20 mag. We see the consequences of this choice below. The B-band completeness correction was not evaluated for the two surveys. As a consequence, we chose to characterize our clusters using only the V-band-limited sample of objects with m_V ≤ 20 mag.

In order to understand whether, by assuming c_ph = 1 also for the B band, we were introducing a systematic bias, we considered the following facts: first of all, the equal number of sources in the two bands rules out the possibility of a drastically different c_ph value between the two bands in the considered photometric range; moreover, the integrated B-band intensity of all the sources of our catalog with m_B ≤ 20 mag is almost 10 times larger than the combined intensity of the sources with B-band magnitude in the range 20 mag < m_B ≤ 22 mag; finally, the integrated color indexes of the clusters are always close to (B-V) ∼ 1, no matter what cut in B or V magnitude we introduce. These considerations made us confident that, by assuming a B-band completeness of c_ph = 1 as well, we were not introducing any systematic error.

The second term on the right side of Equation <ref> concerns the probability that an object at distance r_i from the BCG is observed. This probability can be defined as the fraction of the circle centered on the BCG (with radius r_i and thickness of 1 pixel) that resides inside the FoV. The areal correction ac(r_i) is defined as the inverse of this probability.

The third term of Equation <ref> is c_un(m_i). Under the reasonable assumption that no object identified as a galaxy was misclassified, this term is different from 1 only for the unknown-type objects and corresponds to the probability that the considered unknown object is a galaxy. This probability has also been calculated by <cit.>. To summarize, we can approximate the completeness correction with the following formula:

cc_i ≃ ac(r_i) if the object is a galaxy,
cc_i ≃ ac(r_i) · c_un(m_i) if the object is an unknown.

§.§ Field subtraction

To calculate the intensity per square degree emitted by the field galaxies we used the galaxy number counts measured by <cit.>. These authors give the galaxy number counts normalized to an area of 1 square degree for the B and V bands, from magnitude 16 to 28, with bins of width 0.5 mag. In our case the flux emitted by the statistically measured field galaxies is given by

I_Berta = ∑_j=1^p N_j · f_j(m_V ≤ 20) · 10^-0.4 m_j ,

where j is the index of each magnitude bin, p the total number of bins, N_j the number of counts in the j-th bin, m_j the average magnitude of the galaxies in the j-th bin, and f_j(m_V ≤ 20) the fraction of objects in the considered magnitude bin with a V-band magnitude lower than 20. For the V-band data, f_j(m_V ≤ 20) is a step function equal to 1 for magnitudes brighter than 20 and equal to 0 for fainter magnitudes. For the B band, since we are working with a V-band-limited sample of galaxies, we need to subtract the field contribution given only by objects with m_V ≤ 20 mag. To achieve this, we downloaded the original photometric data by <cit.> and rebuilt the histogram of galaxy counts. The trend observed is visible in Figure <ref>; the figure shows in red the number of galaxies with magnitude m_V ≤ 20 mag. The values of f_j considered in this case are those plotted in the bottom panel of the figure.
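As a sketch of how this statistical field term could be assembled together with the rejected-object correction of the next subsection, consider the following minimal Python fragment. It is an illustration under stated assumptions, not the paper's code: the binned counts, the f_j fractions, and the magnitudes of the rejected objects are placeholder arrays.

```python
import numpy as np

def field_intensity(m_bins, counts, f_v20, m_rej, area_deg2):
    """Field intensity per square degree, I_field = I_Berta - I_rej.

    m_bins    : mean magnitude of each number-count bin
    counts    : number counts per square degree in each bin (Berta et al. style)
    f_v20     : fraction of each bin with m_V <= 20 (a 0/1 step function in V)
    m_rej     : magnitudes of the spectroscopically rejected objects in the FoV
    area_deg2 : FoV area in square degrees
    """
    I_berta = np.sum(counts * f_v20 * 10.0 ** (-0.4 * m_bins))
    I_rej = np.sum(10.0 ** (-0.4 * m_rej)) / area_deg2   # avoids overcorrection
    return I_berta - I_rej
```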
Since we had already removed a certain amount of field objects on the basis of the spectroscopic information (which excluded their membership), we risked overcorrecting the field subtraction. In order to avoid this, we calculated the intensity per square degree associated with all the sources in the rejected-objects catalog (I_rej), i.e.,

I_rej = ( ∑_k=1^q 10^-0.4 m_k ) / A ,

where k is the index of the considered rejected object, q the total number of rejected galaxies, m_k the magnitude of the k-th object, and A the FoV area in units of square degrees. The lower magnitude limit corresponds to the lower limit of the tabulated Berta number counts, and the upper magnitude limit to our photometric cut. We therefore calculated the field galaxy intensity per square degree as

I_field = I_Berta - I_rej .

§.§ Photometric profiles

Equation <ref> allowed us to calculate the integrated intensity at the distance of every source from the BCG, but to have equally spaced points we rebinned the data through a weighted least squares (WLS) interpolation with a sampling of 0.05 r_200. Then we converted the intensity profiles into integrated absolute magnitude profiles (M_B(≤ r) and M_V(≤ r)) and we derived the integrated color index profiles, (B-V)(≤ r) = M_B(≤ r) - M_V(≤ r), and the local color index profiles, (B-V)(r) = M_B(r) - M_V(r), where M_B(r) and M_V(r) are the local values at each radius r obtained by differentiating the integrated values.

Finally, we applied the K correction following <cit.>, using the local color index in each radial bin and the mean redshift of the cluster. If (B-V) ≥ 0.8 we applied the mean correction valid for early-type systems, if 0.5 ≤ (B-V) < 0.8 we used the typical correction of Sa galaxies, and if (B-V) < 0.5 we adopted the correction valid for Sc and irregular galaxies. This is clearly an approximation, as the K correction should be applied to the magnitude of each galaxy from a precise knowledge of its morphological type and redshift, which are available only for the spectroscopic sample. However, as long as our procedure is correct, the mean K correction of the galaxy population within each radial bin can be reliably calculated, introducing errors not larger than 0.05 mag.

Finally, we got the surface brightness profiles

μ_B(r) = -2.5 log( I_B(r) / A_ring ),
μ_V(r) = -2.5 log( I_V(r) / A_ring ),

where I_B(r) and I_V(r) are the local K-corrected values of the intensity measured in a ring of area A_ring = π( (r + 0.025 r_200)^2 - (r - 0.025 r_200)^2 ) at each position r.

§.§ Correction for faint objects

What is lacking in our luminosity profiles is a quantification of the total light coming from the sources fainter than our magnitude cut. In order to get such a contribution, we used the parametrization of the luminosity function (LF) provided by <cit.> for the V-band data of the stacked WINGS sample.
It is based on a double Schechter function, i.e.,

ϕ(L) = ( ϕ_V^b / L_V^b ) · ( L / L_V^b )^α_b · e^-L/L_V^b + ( ϕ_V^f / L_V^f ) · ( L / L_V^f )^α_f · e^-L/L_V^f ,

where ϕ_V^b and ϕ_V^f are normalization constants, L_V^b is the luminosity associated with M_V^b = -21.25 mag and α_b = -1.10, and L_V^f is the luminosity associated with M_V^f = -16.25 mag and α_f = -1.5. This allowed us to calculate an approximated LF correction c_LF, which is valid under the implicit assumptions that all the clusters have a similar LF and that ϕ(L) does not depend on r, as the ratio between the total expected V-band cluster intensity I_tot and the observed intensity I_obs:

c_LF = I_tot / I_obs = ∫_0^L_BCG L·ϕ(L) dL / ∫_L_V,cut^L_BCG L·ϕ(L) dL ,

where L_BCG is the V-band luminosity of the BCG and L_V,cut is the V-band luminosity at magnitude 20. The two integrals represent the luminosity density associated with a distribution of objects with LF ϕ(L) and luminosity within the integration interval. Both integrals can be solved through the incomplete gamma function and lead to a correction on the order of, at most, 5%. Since it is a very small value and the B-band LF was not derived, we decided not to apply such a correction to our photometric profiles.

§ STELLAR MASS PROFILES OF CLUSTERS

The spectrophotometric masses of all the galaxies in the spectroscopic sample are already public <cit.> or have been measured by Moretti et al. (private communication) with the same spectral energy distribution (SED)-fitting procedure described in <cit.>. Because the memberships of these objects are known on the basis of redshift measurements, we could proceed to calculate the stellar light profiles by repeating the same procedure described in Section <ref>, with two fundamental differences. First, the photometric completeness correction c_ph in Equation <ref> is now significantly larger than 1. In fact, the spectroscopic sample at m_V = 20 is more than 80% complete <cit.>. However, in this case we can get a more precise measurement of c_ph, because the photometric sample is approximately 100% complete. Second, the statistical field subtraction is not needed, as the membership of each object is known.

The integrated spectroscopic stellar mass profiles can be calculated according to the following formula:

ℳ_sp(≤ r_n) = ∑_i=1^n c_ph(m_i) · ac(r_i) · ℳ_i ,

where ℳ_i is the mass of the i-th galaxy, c_ph(m_i) is the ratio between the total number of objects in a given bin of magnitude in the photometric and spectroscopic samples, and the last term has already been defined in Section <ref>. As for the photometric profiles, the stellar mass profiles were also rebinned to have equally spaced points every 0.05 r_200.

The total photometric stellar mass profiles of the clusters can finally be obtained through the relation

ℳ_ph(≤ r) = c_LF · ℳ_sp(≤ r) · L_ph(≤ r) / L_sp(≤ r) ,

where L_ph(≤ r) and L_sp(≤ r) are the integrated luminosities within the radius r of the photometric and spectroscopic samples, respectively, and c_LF is the above-mentioned correction for faint objects. Behind this relation there is the implicit assumption that the measured mass-to-light ratio at each radius is representative of the true mass-to-light ratio in the cluster. This is a valid assumption, since the spectroscopic sample well represents the photometric sample within m_V = 20 mag.
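For completeness, the two integrals entering c_LF can be evaluated in closed form: for a single Schechter component, ∫ L·ϕ(L) dL reduces to upper incomplete gamma functions Γ(α+2, L/L^*), which are well defined here since α_b + 2 = 0.9 and α_f + 2 = 0.5 are both positive. The following is a minimal sketch (not the paper's code; the normalizations and the luminosity units are placeholders, with luminosities convertible from magnitudes via L/L^* = 10^-0.4(M - M^*)):

```python
import numpy as np
from scipy.special import gamma, gammaincc

def lum_integral(phi_star, L_star, alpha, L_lo, L_hi):
    """Integral of L*phi(L) dL for one Schechter component between L_lo and L_hi.
    With x = L/L_star it reduces to Gamma(alpha+2, x); scipy's gammaincc is the
    regularized upper incomplete gamma, so we multiply by gamma(alpha+2)."""
    a = alpha + 2.0
    Gamma_upper = lambda x: gammaincc(a, x) * gamma(a)
    return phi_star * L_star * (Gamma_upper(L_lo / L_star) - Gamma_upper(L_hi / L_star))

def c_LF(components, L_cut, L_bcg):
    """Faint-object correction: total over observed luminosity density of the
    double Schechter LF; components = [(phi_b, L_b, -1.10), (phi_f, L_f, -1.5)]."""
    tot = sum(lum_integral(p, L, a, 0.0, L_bcg) for (p, L, a) in components)
    obs = sum(lum_integral(p, L, a, L_cut, L_bcg) for (p, L, a) in components)
    return tot / obs
```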
§ FINAL LIGHT AND MASS PROFILES OF CLUSTERS

In this section we discuss the properties of the light and stellar mass profiles of our clusters created through the aforementioned procedure. In total we got the stellar mass profiles of 42 of our 46 clusters, reaching r_200 in 30 cases and exceeding 2 r_200 in 3 of them. For the mass values at various radii, see Table <ref> in Appendix.

The left panel of Figure <ref> shows the photometric profiles of the cluster A85; all the others can be found in Appendix. In the upper plot we see the integrated magnitude profiles, in the central plot the integrated and local (B-V) color profiles, and in the lower plot the surface brightness profiles. Blue and green lines are B- and V-band data, respectively, while yellow and orange lines are the local and integrated colors. In the right part of Figure <ref> we present the light and mass profiles of A85 for the photometric and spectroscopic samples. The upper panel shows the integrated V-band luminosity of the photometric (green line) and spectroscopic (red line) samples, while the lower panel shows the associated stellar mass profiles.

A number of considerations emerge from these plots. Most of the growth curves seem to be still increasing at the maximum photometric radius r_max,ph. The (B-V) integrated color is that typical of an evolved stellar population ((B-V) ∼ 1) and usually shows a gradient between the central and outer regions, redder in the center and bluer in the outskirts; in the most extreme case (i.e., IIZW108) it is equal to Δ(B-V)/Δr = 0.36 mag r_200^-1. The local colors are generally noisier than the integrated colors because of the lack of sources in some radial bins. The surface brightness profiles, although dominated by random fluctuations at the adopted spatial binning, show a clear cusp in the central region and very different gradients when the profiles are plotted in units of r_200. The spectroscopic and photometric light and stellar mass profiles are very different from cluster to cluster.

Concerning the last point, we believe that the origin of the systematic difference between the photometric and spectroscopic profiles lies in the observational difficulty of positioning the multi-object spectrograph fibers to get a simultaneous coverage of the whole cluster region, particularly in the dense cores of the clusters. In most cases the spectra of the most luminous galaxies in the cluster center have not been obtained, and sometimes even the BCG spectrum is missing. The consequence is that the completeness correction c_ph(m_i) could not be calculated in certain magnitude bins, i.e., the ratio between the total number of objects in a given magnitude bin from the photometric sample and the corresponding number from the spectroscopic sample. Hence, the lost flux could not be redistributed among the observed sources (e.g., in Equation <ref>), resulting in a net displacement of the two curves. The effect appears to be larger in the center and smaller in the outer regions (e.g., Figure <ref>), thus supporting our explanation. Clearly, the correct mass and light profiles are those based on the most complete photometric sample.

§.§ Profile fitting and effective parameters calculation

In order to obtain the main structural parameters and the asymptotic luminosity of our clusters, we decided to fit the growth curves with some empirical models. In choosing a model we made the following considerations. The surface brightness profiles display a central cusp followed by a steady decrease, as in many ETGs following the Sérsic profile, or when the bulge and disk components are both visible in late-type galaxies.
<cit.> preliminarily attempted the fit of the WINGS cluster profiles using the King () and the de Vaucouleurs () laws, and the King model was also used by, for example, <cit.> to fit the number density of clusters. In the case of hydrostatic equilibrium and isothermality of the intracluster medium (ICM), the ICM intensity profile has traditionally been reproduced by the standard β-model (; ). In principle this model should also be able to correctly reproduce the stellar light profiles of our clusters, because clusters are thought to be scale invariant (see, e.g., , or ), and because both the ICM and the stellar light distribution are tracers of the same DM potential well. Consequently, we decided to fit the integrated luminosity profiles of our clusters by using the same empirical laws used for ETGs, i.e., the King and Sérsic (, ) laws, and the standard β-model.

The integrated light for a King profile is given by the following expression: L(≤ r) = ∫ 2 π r k ( 1/√(r^2/r_c^2+1) - 1/√(r_t^2/r_c^2+1))^2 dr + L_ZP , where k is a scale factor, r is the radius, r_c is the core radius, r_t the tidal radius, and L_ZP the zero-point luminosity (i.e., the luminosity of the BCG). The integral can be solved as L(≤ r) = π k { r_c^2 log( r^2/r_c^2 + 1 ) + [ 1/√(r_t^2/r_c^2+1) ] · [ r^2/√(r_t^2/r_c^2+1) + r_c^2 · ( 1/√(r_t^2/r_c^2+1) - 4 √(r^2/r_c^2 + 1) ) ] } + L_ZP .

The Sérsic law is now widely adopted to fit the profiles of ETGs (see, e.g., ; ). The integrated profile is given by L(≤ r) = ∫ 2 π r I_eff e^-b_n[ (r/r_eff)^1/n -1 ] dr + L_ZP , where r_eff is the effective radius (i.e., the radius containing half of the total luminosity), I_eff the effective intensity (i.e., the intensity at r = r_eff), n the Sérsic index, and b_n a function of n that can only be derived numerically; for n ≤ 0.5 we used the approximation of (), and for n > 0.5 we used the approximation of (). This leads to L(≤ r) = 2 π n r_eff^2 I_eff (e^b_n/b_n^2n) γ( 2n, b_n(r/r_eff)^1/n) + L_ZP , where γ( 2n, b_n(r/r_eff)^1/n) is the lower incomplete gamma function.

For more reliable fits, especially in the central regions where the integrated luminosity steeply rises, we rebinned the profiles with a radial spacing of 0.02 r_200. When we applied the models to the data, we realized that the integrated King profiles systematically fail to reproduce the data when compared to the Sérsic profiles. In Figure <ref> we show the four best King fits that we produced, together with the values of the adopted goodness-of-fit criteria (explained in Section <ref>) used to discriminate between the two models. In all cases, the goodness-of-fit criteria strongly point toward the Sérsic model, and this evidence is even stronger for the remaining 42 clusters. In addition, to correctly reproduce the bulk of the points, the King profiles cannot match the central luminosity values, which are known for their high precision. As a consequence, we decided to reject the King model.

The same conclusion was reached using the standard β-model, whose luminosity profile can be calculated as L(≤ r) = ∫ 2 π r I_0 [ 1 + (r/r_c)^2 ]^(0.5 - 3β) dr + L_ZP , where I_0 is the central intensity, r_c the core radius, and β the ratio of the specific energy of the galaxies to the specific energy of the gas. The integral can be easily solved as L(≤ r) = (2 π r_c^2 I_0)/(3 - 6 β) · [ 1 + ( r/r_c)^2 ]^(1.5 - 3 β) + L_ZP . Overall, the standard β-model provides a worse representation of the integrated light profiles than the Sérsic model.
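For reference, the Sérsic growth curve above can be evaluated directly with the regularized incomplete gamma function; a minimal sketch follows. The single-formula b_n approximation used here (b_n ≈ 1.9992n − 0.3271) is a common choice that stands in for the two regime-dependent approximations adopted in the paper, and the function names are ours.

```python
import numpy as np
from scipy.special import gamma, gammainc

def b_n(n):
    # Widely used linear approximation to b_n; the paper instead switches
    # between two published approximations at n = 0.5.
    return 1.9992 * n - 0.3271

def sersic_growth(r, r_eff, i_eff, n, l_zp=0.0):
    """Integrated Sersic luminosity L(<= r).

    gammainc(a, x) is the regularized lower incomplete gamma P(a, x), so the
    unregularized gamma(2n, x) in the text equals Gamma(2n) * gammainc(2n, x).
    """
    b = b_n(n)
    x = b * (np.asarray(r) / r_eff) ** (1.0 / n)
    return (2.0 * np.pi * n * r_eff**2 * i_eff * np.exp(b) / b ** (2.0 * n)
            * gamma(2.0 * n) * gammainc(2.0 * n, x) + l_zp)
```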
In Figure <ref> there is a visual representation of four clusters best fitted by the β-model, in comparison with the Sérsic fits. In this case too the goodness-of-fit criteria point toward the choice of the Sérsic model, with only a few borderline cases and more than 40 out of 46 cases strongly in favor of the Sérsic model. However, most of the derived β parameter values are compatible with those typical of the ICM in galaxy groups (see, e.g., ) and clusters (e.g., ), which span the range from β∼ 0.5 to β∼ 0.65 (Figure <ref>).

The integrated Sérsic profile was able to correctly reproduce most of the observed profiles, but the presence of luminosity bumps in the profiles of some clusters resulted in some poor fits. In these cases (see plots in the Appendix) an integrated double Sérsic profile (i.e., the superposition of two Sérsic profiles) was used to better reproduce the luminosity profiles.

§.§ Best model selection

Since the χ^2 obviously decreases as the number of free parameters increases, we tested the goodness of our fits with the single and double Sérsic laws through the following criteria:

* the Akaike () information criterion (AIC), AIC = χ^2 + 2k + 2k(k+1)/(N-k-1) , where k is the number of free parameters and N the number of data points to be fitted, and
* the Bayesian information criterion (BIC, ), BIC = χ^2 + k ln( N ).

The rule for both criteria is to choose the model that minimizes the AIC or BIC. A comparison between these criteria can be found in <cit.>, according to which both criteria can be obtained, by changing the prior, in the same Bayesian context. These authors identified two main theoretical advantages of the AIC over the BIC: first, the AIC is derived from the principles of information theory, while the BIC is not; second, the BIC prior is not sensible in the information theory context. Moreover, as a result of simulations, the authors also concluded that the AIC is less biased than the BIC. Despite this, we chose to favor the BIC over the AIC for two main reasons. First, the BIC is built starting from a vague or uniform prior <cit.>, which is a good assumption in our context, in which we have no theoretical justification to privilege one category of models with respect to the other. Second, the BIC penalizes more strongly models with a higher number of free parameters <cit.>, thus reducing the risk of adopting overcomplicated models.

In order to understand how strongly one model is favored in comparison with another, we used the criterion defined by <cit.>, according to which, if we call Δ BIC the difference between the BICs of the two models:

* 0 ≤Δ BIC < 2 is not worth more than a bare mention;
* 2 ≤Δ BIC < 6 indicates positive evidence toward the lowest BIC model;
* 6 ≤Δ BIC < 10 indicates strong evidence toward the lowest BIC model;
* Δ BIC≥ 10 indicates very strong evidence toward the lowest BIC model.

To further reduce the risk of adopting overcomplicated models, we chose to favor the single Sérsic models unless strong or very strong evidence in support of the double Sérsic models was present. This resulted in 39 single and 7 double Sérsic fits (e.g., Figure <ref> for the clusters A85 and A151), whose effective parameters are tabulated in Table <ref>.
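The selection rule just described is straightforward to implement. The following sketch uses our own naming, with the single Sérsic model as the default choice and the ΔBIC ≥ 6 (strong evidence) threshold for adopting the double Sérsic model:

```python
import numpy as np

def aic(chi2, k, n_pts):
    # AIC as written in the text, including the small-sample correction term.
    return chi2 + 2 * k + 2 * k * (k + 1) / (n_pts - k - 1)

def bic(chi2, k, n_pts):
    return chi2 + k * np.log(n_pts)

def adopt_double_sersic(chi2_single, k_single, chi2_double, k_double, n_pts):
    """Return True only if the double Sersic model has the lower BIC *and*
    the evidence is at least 'strong' (Delta BIC >= 6)."""
    delta = bic(chi2_single, k_single, n_pts) - bic(chi2_double, k_double, n_pts)
    return delta >= 6.0
```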
For all the double Sérsic profiles, the inner component is always smaller and fainter, but with a higher surface brightness, than the outer component. For a summary of the fitting parameters see Tables <ref> and <ref>, while the fits are shown in the figures in the Appendix.

The confidence intervals around the best-fit parameters were calculated with the following procedure. The χ^2 was recomputed giving all the points a constant weight, calculated in such a way as to obtain a final χ^2 = N-k, where N-k is the number of degrees of freedom. Then, each parameter was individually modified until the χ^2 reached the value N-k+9, i.e., the interval containing a probability of 99.73% of finding the true parameter value. In the double Sérsic fits the threshold N-k+9 could sometimes not be reached, and the limit was marked as “undefined”. This happens in two possible cases: first, when the Sérsic index n_in cannot be safely constrained owing to a very limited number of data points in which the inner component dominates; second, when the inner component is not significantly brighter than the outer component inside the central region and the outer region displays a very disturbed profile, so that no significant increase in the χ^2 is possible by increasing the inner-component effective radius r_eff,in. The confidence limits derived in this way are likely an overestimation of the true errors in the structural parameters, because they were calculated by ignoring the mutual correlations that may exist between the parameters. All of these limits are on the order of a few percent (see Table <ref>).

Almost all the fitted profiles seem to represent well both the luminosity and surface brightness of our clusters; however, in at least one case (i.e., A1631a) our best model selection criterion preferred a single Sérsic fit where a human analysis of the surface brightness profile would suggest a two-component model. In a few cases the total asymptotic luminosity may have been overestimated. In fact, in the case of A151, A754, and perhaps also A3560, the fitted profile intersects the very upper edge of the growth curve with an increasing trend, while the curve itself appears to flatten. A possible way to improve the quality of these fits could be the implementation of a simultaneous minimization of the residuals of both the integrated luminosity and surface brightness profiles. The profiles of A1991, A2415, and A2657, instead, display some significant sudden increases of the luminosity profile that could be due either to some ongoing major merger or to the presence of important background structures.

No fit displays a central surface brightness higher than the observed one (within the error); in fact, to avoid unphysical divergences at small radii, all the models with higher values were immediately rejected, even if their reduced χ^2 and their AIC or BIC values were preferable. The same was not done in the case of much lower values, because the central surface brightness has almost certainly been overestimated: the central point contains the luminosity of the BCG and of all the nearby galaxies collapsed to a single point, with no information concerning its real distribution. As a consequence, we chose not to constrain our best model selection to match a likely unrealistic value.
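The error-estimation procedure described above can be summarized in a few lines of code. This is a schematic illustration with our own function names, assuming a user-supplied chi2(params) function; it scans a single parameter in one direction only and, like the procedure in the text, ignores parameter correlations:

```python
import numpy as np

def confidence_limit(chi2, best, i, n_pts, k, step, max_steps=100000):
    """Upper confidence limit on parameter i (99.73% level).

    The chi^2 is rescaled so that the best fit gives chi^2 = N - k; the
    parameter is then increased until the rescaled chi^2 reaches N - k + 9.
    """
    scale = (n_pts - k) / chi2(best)   # constant reweighting of all points
    p = np.array(best, dtype=float)
    for _ in range(max_steps):
        p[i] += step
        if scale * chi2(p) >= (n_pts - k) + 9.0:
            return p[i]
    return np.nan  # threshold never reached: limit marked as "undefined"
```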
In a few cases (i.e., A147, A1631a, A1991, A2657, A2717, A3128, A3532, A3556, and IIZW108) the fitted profile is unable to correctly reproduce the fluctuations of the luminosity profile at small radii; however, this is generally not a problem. In fact, in the case of infalling structures larger than the adopted spatial scale, we expect to see such fluctuations. Both A1991 and A2657 display important fluctuations at larger radii too. As a consequence, we can assume that these two systems either are experiencing some relevant merging event or are made of very strongly bound substructures.

Finally, the presence or absence of a cool core (see, e.g., ) does not seem to influence the number of components used to fit the luminosity profiles. In fact, six of the seven clusters in our sample analyzed by <cit.> can be parametrized with a single Sérsic profile, even though two of them (i.e., A119 and A4059) have a cool core and the remaining four (i.e., A85, A3266, A3558, and A3667) do not. The only exception is A3158, which has no cool core and a double Sérsic parametrization. No connection seems to exist either between the cool core presence or absence and the best-fit parameter values.

§ STACKED PROFILES AND MAIN RELATIONS AMONG STRUCTURAL PARAMETERS

§.§ Profiles analysis

The various panels of Figure <ref> show the whole set of luminosity, surface brightness, mass, and color profiles of the clusters, stacked in four different plots and normalized to the effective structural parameters. Both in the central and outer regions the cluster profiles show very different behaviors. The central surface brightness spans a range of ∼6 mag arcsec^-2, while the amount of light and mass within and beyond r_eff appears to differ by up to a factor of ∼2 in units of L_eff and ℳ_eff. This is clear evidence of a marked difference in the global structure of clusters: galaxy clusters do not seem to share a common light and mass distribution. The only similar behavior is visible in the stacked color profiles, showing that all the clusters have similar (B-V)(≤ r) color profiles, dominated by an old stellar population in the center and by a bluer population in the outer parts. Despite the large spread observed (around 0.3 mag), all the measured profiles are compatible with an old average stellar population.

As in the case of galaxies, the best way to establish the main cluster properties is to study the relations among the structural parameters. This is the aim of this section. When available, we compared the measured parameters with those from a sample of 261 ETGs studied in our previous works (e.g., ; ), whose structural parameters (e.g., effective radius and luminosity, mass, velocity dispersion, and Sérsic index) are available from the WINGS database. The idea is to quantify the correspondences and differences between the structural parameters of clusters and ETGs. Figure <ref> provides the histograms of the observed distributions of the structural parameters. Almost all the parameters follow some Gaussian-like distribution, and the range of values spanned by each parameter, although at different scales, is comparable.
For example, the effective radii span in both samples a factor of ∼ 50, while luminosities and average effective intensities span a larger interval, i.e., up to a factor of ∼ 100 for galaxies and around 30 for clusters. Once we observed that clusters and ETGs have a similar distribution of the photometric structural parameters at different scales, we decided to investigate the main relations known to be valid for ETGs.

§.§ Color-magnitude diagram of galaxy clusters

We started by comparing the color-magnitude (CM) relation (B-V)(≤ r)-M_V of clusters with the average red sequence slope found by <cit.> for the WINGS galaxies in the CM diagrams of the single clusters. In Figure <ref> each cluster is represented by a dot with a (B-V)(≤ r) color that is the average integrated color index measured within various fractions of r_200, and a total magnitude M_V that corresponds to the total magnitude of the cluster within r_200. The weighted least squares (WLS) fit of the clusters is steeper than the average red sequence slope of the galaxies in clusters (i.e., -0.04) only when the mapped region of the clusters is larger than 0.6 r_200. The plots clearly indicate that the most massive clusters are, on average, the reddest, while the less luminous clusters are the bluest; this is also observed in the red sequence of ETGs. In the central region, instead, all the clusters seem to have approximately the same color.

The WLS fits of our data (bold dashed lines) provide the following relations, which are valid for various fractions of the cluster areas (in r_200 units):

(B-V)(≤ 0.2 r_200) = +1.15±0.39 + (0.01±0.02) M_V(r_200),
(B-V)(≤ 0.6 r_200) = -0.47±0.35 - (0.04±0.01) M_V(r_200),
(B-V)(≤ 1.0 r_200) = -1.59±0.36 - (0.08±0.01) M_V(r_200),
(B-V)(≤ 1.4 r_200) = -2.23±0.35 - (0.11±0.01) M_V(r_200),

while the corresponding average red sequence slopes for the galaxies of the WINGS clusters in the same areas (light gray dashed lines) are

(B-V)(≤ 0.2 r_200) = -0.38±0.16 - (0.04±0.01) M_V(r_200),
(B-V)(≤ 0.6 r_200) = -0.48±0.17 - (0.04±0.01) M_V(r_200),
(B-V)(≤ 1.0 r_200) = -0.56±0.17 - (0.04±0.01) M_V(r_200),
(B-V)(≤ 1.4 r_200) = -0.62±0.18 - (0.04±0.01) M_V(r_200).

The CM relation of galaxy clusters is found here for the first time. An explanation of its existence should be sought in the current models of cluster formation and evolution; we will dedicate a future work to a possible theoretical interpretation of what is observed.

In addition to the CM relation, a correlation between the mean effective luminosity of clusters L_eff and the color gradient Δ (B-V)/Δ r is significant in our data (see Figure <ref> and Table <ref>), i.e., Δ(B-V)/Δ r = -1.47±0.32 + (0.10±0.03) log(L_eff). The color gradient Δ (B-V)/Δ r is negative because clusters, like ETGs, are redder in the center and bluer in the outskirts, and this gradient appears to be larger in fainter clusters. This is at variance with the case of ETGs, where the optical gradient does not seem to correlate with the galaxy luminosity (see, e.g., ).

§.§ Main scaling relations of galaxy clusters

Table <ref> presents the data of the mutual correlations among the structural parameters of galaxies and clusters. The following figures show the best-known ETG parameter correlations extended to the domain of galaxy clusters. Figure <ref> compares the Kormendy relation (; ; ; ) of ETGs with that of our clusters.
The green diamonds correspond to the WINGS ETGs, blue dots to the single Sérsic fits, red squares to the general parameters of the double Sérsic profiles, pink triangles to the inner components of the double Sérsic fits, and orange reversed triangles to the outer components of the double Sérsic profiles. The effective parameters of galaxy clusters follow the same relation previously found for ETGs. Clusters reside along the large-radius tail of the galaxy distribution and share the same zone of exclusion (ZoE; details in ) as ETGs. The ordinary least squares (OLS) linear interpolation of both samples provides the same slope within the errors (see Table <ref>), compatible with that expected from the scalar virial theorem (light gray dashed lines) when a constant mass-to-light ratio is assumed. For a better explanation of the observed distribution in the I_eff - r_eff plane, see <cit.>.

Figure <ref> shows the L_eff-r_eff and the ℳ_eff-r_eff relations between the effective luminosity/mass and the effective radius. Again we see that the distribution of ETGs and clusters follows the behavior expected on the basis of the virial theorem (see Table <ref>). The position of each object in these planes depends on the zero point of the virial relation, which is different for each system. To see such an effect, note how the position in the ℳ_eff-r_eff plane depends on the central velocity dispersion σ (color scale on the right plot). The velocity dispersion values on the plot are those tabulated by () for all the galaxies and were provided to us by <cit.> for the clusters. To complete the series of plots dedicated to the virial equilibrium of our clusters, we also show the L_eff-σ relation (). A plot similar to Figure <ref> has been shown by <cit.>, with a possible explanation of the observed distribution. According to the authors, the position of each point in the diagram is given by its virial equilibrium and by a variable zero point that turns out to depend on the effective radius r_eff and on the mean ℳ/L ratio. The slope different from 2 often claimed for this relation arises when structures with different zero points are mixed together. This set of figures clearly shows that ETGs and clusters share the same virial relations; the occasional deviations come from the different zero points of the different systems in each diagram.

Now we discuss the non-homology of clusters. First, we recall that a virialized structure is not necessarily a homologous structure (i.e., a structure with scale-free properties). In the case of ETGs this has been proved by several works (; ; ; ; ; ; ; ; ) showing that the light profiles of these galaxies are best fitted by the Sérsic law and that the Sérsic index n correlates with the luminosity, mass, and radius of the galaxies themselves. Figure <ref> shows the distribution of galaxy clusters with respect to the ETGs in the r_eff - n and L_eff - n diagrams in logarithmic units. The dashed lines give the bi-weighted least squares (BLS) fits of the two distributions. This kind of fit was applied because we do not know a priori which variable drives the correlation.
We removed the cluster A1631a from the plot because a visual inspection of the surface brightness profile suggests the presence of a second cluster component in the same area. Two considerations emerge from these plots: first, both classes of objects span the same range of n; second, the slope found for the ETGs also seems to be a plausible slope for galaxy clusters. Considering that almost all the clusters are in the luminosity range ∼ 10^12 - 10^13 L_⊙, while the galaxies span the range ∼ 10^9 - 10^11 L_⊙, it is not surprising to see that the L_eff-n correlation, well visible in galaxies, is almost absent in clusters. The r_eff-n correlation is, on the other hand, well visible for both types of structures. This means that clusters with the same luminosity can have very different structures, with different values of r_eff and, of course, n. In other words, clusters are likely non-homologous systems, like ETGs.

The non-homology of clusters is not predicted by cluster simulations: the DM halos emerging from numerical simulations are structurally homologous systems with similar velocity dispersion profiles (; ). We do not have enough data at the moment to check the consistency of the DM profiles with the observed stellar light profiles, so we will address this problem in a future work.

Finally, we analyze the stellar mass-to-light ratios of ETGs and clusters. Figure <ref> shows the distribution of the ℳ/L ratios as a function of the total luminosities. We see that ETGs span a factor of 10 in ℳ/L and that the stellar mass-to-light ratio does not correlate with the luminosity. This seems to be in contradiction with the claimed relation between the dynamical ℳ/L ratio and the luminosity (see, e.g., , or ). The mean stellar ℳ/L ratio of galaxy clusters spans a much smaller interval of values, and again no correlation is seen with the luminosity. The combination of the two samples seems to suggest a trend of the mass-to-light ratio with L, but this is a misleading conclusion originating from the absence of clusters with low ℳ/L values. Figure <ref> shows the stellar ℳ/L ratio as a function of radius in units of r_200 for all our clusters. Note the nearly constant value of ℳ/L, with a small spread at medium and large radii and an increase of such spread in the inner region. These figures provide further evidence that nearby clusters are dominated by an old stellar population over almost the whole extension of their profiles, an observational fact that must be reproduced by models of cluster formation and evolution.

We conclude by observing that the ratio between the luminosity of cluster substructures and that of the main cluster component measured by <cit.> mildly correlates with the cluster velocity dispersion (see Figure <ref> and Table <ref>). This is a somewhat expected result, as the velocity dispersion should increase when a merging event occurs.

§ SUMMARY AND CONCLUSIONS

We have produced the stellar light (and mass) profiles of 46 (42) nearby galaxy clusters observed by the WINGS and Omega-WINGS surveys. The best fit of the growth curves was obtained with the Sérsic law, which was compared with the King and β-models. From the analysis of the light profiles we derived the main cluster parameters, i.e., effective radius, total luminosity and mass, effective surface brightness, (B-V) colors, and Sérsic index.
We then used such parameters in combination with the measured velocity dispersions of the clusters to test the main scaling relations already analyzed in the past for ETGs (see, e.g., ).

When fitting the light profiles we found that 7 out of 46 clusters are best fitted by a double Sérsic profile (an inner bright structure plus an outer faint structure). The presence of multiple components seems disconnected from other cluster properties, such as the number of substructures visible in the optical images or a difference in the stellar population content. It also does not seem to be linked to the presence of the BCG in the center of the clusters; the BCG effects will be investigated in a forthcoming work.

All the analyzed relations confirm that the clusters of our sample are well-virialized structures. Notably, the same relations that are valid for the ETGs are visible for clusters at different scales, providing a clear indication that a scale-free phenomenon of mass accretion regulated by gravitation is at work. Like ETGs, clusters also exhibit a degree of non-homology (varying values of the Sérsic index even for clusters with the same luminosity) in their visible light and stellar mass profiles, and a very robust correlation of the Sérsic index with the effective radius. This is a somewhat unexpected property on the basis of numerical simulations (see, e.g., ; ), which predict self-similar DM halos with Navarro-Frenk-White profiles.

The most interesting and new relation found here for the first time is the existence of a color-magnitude relation for clusters. When this relation is calculated considering only the galaxies within an area of 0.6 r_200, the CM slope perfectly matches the average red sequence slope found for the galaxies in the Omega-WINGS clusters. The CM cluster relation appears even more clearly when the analysis of the cluster properties is pushed beyond 0.6 r_200: the blue (red) clusters are the faint (bright) clusters. The existence of such relations must find an explanation in the current paradigm of galaxy and cluster formation. Indeed, it is not easy to understand why the most massive structures preferentially host the older and redder galaxies, while the less massive clusters host the younger and bluer galaxies: the hierarchical accretion scenario predicts that the smallest structures are the first to form, while the biggest structures are the last. The questions of to what extent the CM cluster relation is a cluster environment effect, and of what model of cluster formation and evolution is consistent with such data, are left to future investigations.

Finally, we observed that the cluster luminosity correlates with the intrinsic (B-V) color gradient measured within r_200: the faintest clusters show the largest color gradients. This behavior is not observed in ETGs, where the optical color gradient appears uncorrelated with the galaxy luminosity <cit.>.

In forthcoming papers we will investigate how the BCG and cluster properties are connected, which models of clusters and galaxies can explain the observed CM relation, what the similarities and differences between the stellar mass/light profiles and the hydrostatic/dynamic mass profiles are, and what the baryon fraction in the local universe is.

The authors thank the anonymous referee for the valuable suggestions that helped to improve the overall quality of this research, Mr. Davide Bonanno Consiglio for his contribution to the statistical analysis, and Dr.
Andrea Biviano for his precious comments and suggestions.[Abazajian et al.(2003)]aba3 Abazajian, K., Adelman-McCarthy, J. K., Agüeros, M. A., et al. 2003, , 126, 2081 [Adami et al.(1998)]ada98 Adami, C., Mazure, A., Biviano, A., Katgert, P., & Rhee, G. 1998, , 331, 493 [Akaike(1973)]aka73 Akaike, H. 1973, in 2nd International Symposium on Information Theory, Information theory and an extension of the maximum likelihood principle, Akadémiai Kiadó (Budapest), 267 [Allen et al.(2011)]all11 Allen, S. W., Evrard, A. E., & Mantz, A. B. 2011, , 49, 409 [Annis(1994)]ann94 Annis, J. 1994, Bulletin of the American Astronomical Society, 26, 74.05 [Bender et al.(1992)]ben92 Bender, R., Burstein, D., & Faber, S. M. 1992, , 399, 462 [Berta et al.(2006)]ber6 Berta, S., Rubele, S., Franceschini, A., et al. 2006, , 451, 881 [Bertin & Arnouts(1996)]Bertin Bertin, E. & Arnouts S. 1996,317, 393 [Biviano et al.(2017)]Biv17 Biviano, A., Moretti, A., Paccagnella, A., et al. 2017, in press (arXiv:1708.07349) [Burkert(1993)]bur93 Burkert, A. 1993, A&A, 278, 23 [Burnham & Anderson(2002)]bur2 Burnham, K. P., & Anderson, D. R. 2002, Model Selection and Multimodel Inference: A Practical Information-Theoretic Approach (2nd ed.), Springer-Verlag. (New York) [Caon(1993)]cao93 Caon, N., Capaccioli, M., & D'Onofrio, M. 1993, MNRAS, 265, 1013 [Capaccioli(1987)]cap87 Capaccioli, M. 1987, Structure and Dynamics of Elliptical Galaxies, ed. P. T. de Zeeuw (Dordrecht: Reidel), 127, 47 [Capaccioli(1989)]cap89 Capaccioli, M. 1989, in The World of Galaxies, ed. H. G. Corwin & L. Bottinelli (Berlin: Springer), 208 [Capaccioli et al.(1992)]cap92 Capaccioli, M., Caon, N., & D'Onofrio, M. 1992, , 259, 323 [Cappellari(2008)]cap8 Cappellari, M. 2008, , 390, 71 [Cappellari et al.(2006)]cap6 Cappellari, M., Bacon, R., Bureau, M., et al. 2006, , 366, 1126 [Cappi(1994)]cap94 Cappi, A. 1994, Clusters of Galaxies, 265 [Carlberg et al.(1996)]car96 Carlberg, R. G., Yee, H. K. C., Ellingson, E., et al. 1996, , 462, 32 [Cava et al.(2009)]cav9 Cava, A., Bettoni, D., Poggianti, B. M., et al. 2009, , 495, 707[Cavaliere & Fusco-Femiano(1976)]cav76 Cavaliere, A., & Fusco-Femiano, R. 1976, , 49, 137 [Cole & Lacey(1996)]col96 Cole, S., & Lacey, C. 1996, , 281, 716 [D'Onofrio et al.(1994)]don94 D'Onofrio, M., Capaccioli, M., & Caon, N. 1994, , 271, 523 [D'Onofrio et al.(2008)]don8 D'Onofrio, M., Fasano, G., Varela, J., et al. 2008, , 685, 875-896 [D'Onofrio et al.(2014)]don14 D'Onofrio, M., Bindoni, D., Fasano, G., et al. 2014, , 572, A87 [D'Onofrio et al.(2017)]don17 D'Onofrio, M., Cariddi, S., Chiosi, C., et al. 2017, , 838, 163 [de Carvalho & da Costa(1988)]dec88 de Carvalho, R. R., & da Costa, L. N. 1988, , 68, 173 [de Vaucouleurs(1948)]dev48 de Vaucouleurs, G. 1948, Annales d'Astrophysique, 11, 247 [Djorgovski & Davis(1987)]djo87 Djorgovski, S., & Davis, M. 1987, , 313, 59 [Dressler(1978)]dre78 Dressler, A. 1978, , 223, 765 [Dressler(1980)]dre80 Dressler, A. 1980, , 236, 351 [Dressler et al.(1987)]dre87 Dressler, A., Lynden-Bell, D., Burstein, D., et al. 1987, , 313, 42 [Ebeling et al.(1996)]ebe96 Ebeling, H., Voges, W., Bohringer, H., et al. 1996, , 281, 799[Ebeling et al.(1998)]ebe98 Ebeling, H., Edge, A. C., Bohringer, H., et al. 1998, , 301, 881[Ebeling et al.(2000)]ebe00 Ebeling, H., Edge, A. C., Allen, S. W., et al. 2000, , 318, 333[Eke et al.(1996)]eke96 Eke, V. R., Cole, S., & Frenk, C. S. 1996, , 282, 263 [Ettori et al.(2013)]ett13 Ettori, S., Donnarumma, A., Pointecouteau, E., et al. 2013, , 177, 119 [Faber & Jackson(1976)]fj76 Faber, S. 
M., & Jackson, R. E. 1976, , 204, 668 [Fasano et al.(2006)]fas6 Fasano, G., Marmo, C., Varela, J., et al. 2006, , 445, 805[Fasano et al.(2015)]fas15 Fasano, G., Poggianti, B. M., Bettoni, D., et al. 2015, , 449, 3927 [Jones & Forman(1984)]for84 Jones, C., & Forman, W. 1984, , 276, 38 [Fritsch & Buchert(1999)]fri99 Fritsch, C., & Buchert, T. 1999, , 344, 749 [Fritz et al.(2007)]fri7 Fritz, J., Poggianti, B. M., Bettoni, D., et al. 2007, , 470, 137[Fritz et al.(2011)]fri11 Fritz, J., Poggianti, B. M., Cava, A., et al. 2011, , 526, A45[Fujita & Takahara(1999)]fuj99 Fujita, Y., & Takahara, F. 1999, , 519, L51 [Girardi et al.(2000)]gir0 Girardi, M., Borgani, S., Giuricin, G., Mardirossian, F., & Mezzetti, M. 2000, , 530, 62 [Gómez et al.(2003)]gom3 Gómez, P. L., Nichol, R. C., Miller, C. J., et al. 2003, , 584, 210[Gullieuszik et al.(2015)]gul15 Gullieuszik, M., Poggianti, B., Fasano, G., et al. 2015, , 581, 41 [Gunn & Gott(1972)]gun72 Gunn, J. E., & Gott, J. R., III 1972, , 176, 1[Hamabe & Kormendy(1987)]ham87 Hamabe, M., & Kormendy, J. 1987, in Structure and Dynamics of Elliptical Galaxies, ed. T. de Zeeuw (Dordrecht: Reidel), IAU Symp. 127, 379 [Henning et al.(2009)]hen9 Henning, J. W., Gantner, B., Burns, J. O., & Hallman, E. J. 2009, , 697, 1597 [Hudelot et al.(2012)]hud12 Hudelot, P., Cuillandre, J.-C., Withington, K., et al. 2012, VizieR Online Data Catalog, 2317, [Kaiser(1986)]kai86 Kaiser, N. 1986, , 222, 323 [Kass & Raftery(1995)]kr95 Kass, R. E., & Raftery, A. E., JASA, 430, 773 [King(1962)]king62 King, I. 1962, , 67, 471 [Kormendy(1977)]kor77 Kormendy, J. 1977, , 218, 333 [La Barbera et al.(2010)]lab10 La Barbera, F., De Carvalho, R. R., De La Rosa, I. G., et al. 2010, , 140, 1528[Lewis et al.(2002)]lew2 Lewis, I., Balogh, M., De Propris, R., et al. 2002, , 334, 673 [MacArthur et al.(2003)]mac3 MacArthur, L. A., Courteau, S., & Holtzman, J. A. 2003, , 582, 689 [Marmo et al.(2004)]mar4 Marmo, C., Fasano, G., Pignatelli, E., et al. 2004, in Outskirts of Galaxy Clusters: Intense Life in the Suburbs, Proc. IAU Coll., 195, 242 [Maughan et al.(2016)]mau16 Maughan, B. J., Giles, P. A., Rines, K. J., et al. 2016, , 461, 4182 [Merten et al.(2015)]mer15 Merten, J., Meneghetti, M., Postman, M., et al. 2015, , 806, 4 [Michard(1985)]mic85 Michard, R. 1985, , 59, 205 [Miller et al.(1999)]mil99 Miller, C. J., Melott, A. L., & Gorman, P. 1999, , 526, L61 [Mohr & Evrard(1997)]moh97 Mohr, J. J., & Evrard, A. E. 1997, , 491, 38 [Moretti et al.(2014)]mor14 Moretti, A., Poggianti, B. M., Fasano, G., et al. 2014, , 564, A138 [Moretti et al.(2015)]mor15 Moretti, A., Bettoni, D., Poggianti, B. M., et al. 2015, , 581, 11[Moretti et al.(2017)]mor17 Moretti, A., Gullieuszik, M. Poggianti, B. M et al. 2017, , 599, 81 [Navarro et al.(1997)]nav97 Navarro, J. F., Frenk, C. S., & White, S. D. M. 1997, , 490, 493 [Oemler(1974)]oem74 Oemler, A., Jr. 1974, , 194, 1 [Omizzolo et al.(2014)]omi14 Omizzolo, A., Fasano, G., Reverte Paya, D., et al. 2014, , 561, A111 [Paccagnella et al.(2016)]pac16 Paccagnella, A., Vulcani, B., Poggianti, B. M., et al. 2016, , 816, 25 [Peebles(1980)]pee80 Peebles, P. J. E. 1980, The large-scale structure of the universe (Princeton, N.J.: Princeton University Press), 435 [Poggianti(1997)]Poggianti1997 Poggianti, B. M. 1997, , 122, 399 [Poggianti et al.(2006)]Poggiantietal2006 Poggianti, B. M., von der Linden, A., De Lucia, G., et al. 2006, , 642, 188 [Ponman & Bertram(1993)]pon93 Ponman, T. J., & Bertram, D. 1993, , 363, 51 [Prugniel & Simien(1997)]pru97 Prugniel, P., & Simien, F. 
1997, , 321, 111 [Ramella et al.(2007)]ram7 Ramella, M., Biviano, A., Pisani, A., et al. 2007, , 470, 39 [Schaeffer et al.(1993)]sch93 Schaeffer, R., Maurogordato, S., Cappi, A., & Bernardeau, F. 1993, , 263, L21 [Schombert(1986)]sch86 Schombert, J. M. 1986, , 60, 603 [Schwartz(1978)]sch78 Schwarz, G. E. 1978, Annals of Statistics, 6, 461 [Sérsic(1963)]ser63 Sérsic, J. L. 1963, Boletin de la Asociacion Argentina de Astronomia La Plata Argentina, 6, 41 [Sersic(1968)]ser68 Sérsic, J.-L. 1968, Atlas de Galaxias Australes (Cordoba: Observatorio Astronomico) [Umetsu et al.(2014)]ume14 Umetsu, K., Medezinski, E., Nonino, M., et al. 2014, , 795, 163 [Valentinuzzi et al.(2009)]val9 Valentinuzzi, T., Woods, D., Fasano, G., et al. 2009, , 501, 851 [Valentinuzzi et al.(2011)]val11 Valentinuzzi, T., Poggianti, B. M., Fasano, G., et al. 2011, , 536, A34 [Varela et al.(2009)]var9 Varela, J., D'Onofrio, M., Marmo, C., et al. 2009, , 497, 667[Wu & Fang(1997)]wu97 Wu, X.-P., & Fang, L.-Z. 1997, , 483, 62 [Young & Currie(1994)]you94 Young, C. K., & Currie, M. J. 1994, MNRAS, 268, 11 § TABLES AND FIGURESIn the following Appendix we gathered all the tables and figures omitted from the main article.In Table <ref> we summarize the data sets available for each WINGS galaxy cluster.In Table <ref> we compared the main parameters of our clusters.Table <ref> presents a summary of the main effective parameters of each cluster. In case of the double Sérsic fits, the average effective intensity is given at the cluster effective radius, which does not correspond to either of the effective radii of the two different components.In Tables <ref> and <ref> we tabulated the best single and double Sérsic fit parameters, plus the reduced χ^2, AIC, and BIC values of all the best-fit models.In Figures <ref>-<ref> we plotted the photometric profiles of our clusters, in Figures <ref>-<ref> their stellar mass profiles, and in Figures <ref>-<ref> the best fits to their luminosity profiles.
http://arxiv.org/abs/1710.01666v2
{ "authors": [ "Stefano Cariddi", "Mauro D'Onofrio", "Giovanni Fasano", "Bianca Maria Poggianti", "Alessia Moretti", "Marco Gullieuszik", "Daniela Bettoni", "Mauro Sciarratta" ], "categories": [ "astro-ph.GA", "astro-ph.CO", "J.2" ], "primary_category": "astro-ph.GA", "published": "20170926154110", "title": "Characterization of Omega-WINGS galaxy clusters. I. Stellar light and mass profiles" }
L. [email protected]/CaltechPasadenaCA91125USA V. [email protected]/CaltechPasadenaCA91125USA H. [email protected]/CaltechPasadenaCA91125USA S. [email protected]/CaltechPasadenaCA91125USA R. [email protected]/CaltechPasadenaCA91125USA G. B. [email protected]/CaltechPasadenaCA91125USA G. [email protected] D. [email protected]/CaltechPasadenaCA91125USA J. [email protected]/CaltechPasadenaCA91125USA A. [email protected]/CfACambridgeMA02138USA T. McGlynntom.mcglynn@nasa.gov0000-0003-3973-432XNASAGSFCGreenbeltMD20771USA A. [email protected] R. Whiterlw@stsci.edu0000-0002-9194-2807MASTSTScIBaltimoreMD21218USA NASA regards data handling and archiving as an integral part of space missions, and has a strong track record of serving astrophysics data to the public, beginning with the the IRAS satellite in 1983. Archives enable a major science return on the significant investment required to develop a space mission. In fact, the presence and accessibility of an archive can more than double the number of papers resulting from the data. In order for the community to be able to use the data, they have to be able to find the data (ease of access) and interpret the data (ease of use). Funding of archival research (e.g., the ADAP program) is also important not only for making scientific progress, but also for encouraging authors to deliver data products back to the archives to be used in future studies. NASA has also enabled a robust system that can be maintained over the long term, through technical innovation and careful attention to resource allocation. This article provides a brief overview of some of NASA's major astrophysics archive systems, including IRSA, MAST, HEASARC, KOA, NED, the Exoplanet Archive, and ADS. § INTRODUCTION Since at least 1983, NASA has regarded data handling and archiving as an integral part of astrophysics space missions. This commitment now provides the major return on the considerable investment the agency has made over the past 20 years <cit.>.All astronomy archives provide sustainability. The <cit.> concluded that a sustainable archive provides data discovery and analysis tools; facilitates new science; contains high-quality, reliable data; provides simple and useful tools to a broad community; provides user support to the novice as well as to the power user; andadapts and evolves in response to community input.NASA believes that an astronomy archive's job includes the following major tasks. * Ingest new data, including reprocessing of old data.* Maintain and continue to serve a vital repository of irreplaceable data, in which considerable investment has already been made. This includes support for observation planning as well as, particularly in NASA's case, mission planning. The archive must also be a resource for original science, and a place to find high level science products.* Enable cutting-edge research. NASA does this by supporting application programming interfaces (API) and supporting the Virtual Observatory (VO) protocols; by providing expert user support; by developing new and enhanced services; and by enabling multi-wavelength projects.The last of these is evolving quickly as the amount of data steadily increases. The archive's mission is changing from “search-and-retrieve” for a user to analyze locally (on their own machine), to doing at least some analysis in situ, in the archive, prior to downloading. 
§ ARCHIVES ENABLE SCIENCE

If one has never thought about the utility of an archive, one might ask whether anyone other than the proposing astronomer would be interested in a particular data set. The reality, however, is that science archives extend the useful life of NASA's mission data indefinitely, as new results can continue to be gleaned from the data in context with new observations and fresh analyses. For example, we are still learning things from IRAS data more than 33 years after the mission ended (see, e.g., <cit.>).

[Figure (doubledoutput.eps): Three plots showing how archives double an observatory's output. Upper left: percent of refereed journal articles as a function of time between 2008 and 2014. As of 2014, 10% of all refereed journal articles use data that ultimately come from IRSA. Upper right: percent of Spitzer papers as a function of time between 2003 and 2014. By 2008, more papers came from archival research than PI programs. Bottom: number of Hubble papers as a function of time between 1991 and 2015. After the first several years of the mission, archival research dominates, with more than half the papers.]

Figure <ref> demonstrates some specific examples of how archives double an observatory's output. The first plot shows the fraction of refereed astrophysics journal articles, worldwide, as a function of time. As of 2014, 10% of all refereed journal articles use data that ultimately come from IRSA (this includes 2MASS, WISE, and Spitzer). The upper right plot in Fig. <ref> shows the fraction of Spitzer papers as a function of time. Early in the Spitzer mission, all papers were written by program PIs; this makes sense, since the people initially most equipped to process data and write papers are those associated with the instrument teams and/or the mission itself. However, by 2008, more papers came from archival research than from programs tied to specific principal investigators (PIs). This effect is not limited to infrared missions; the bottom panel of Fig. <ref> shows Hubble papers as a function of time. Once again, after the first several years of the mission, archival research dominates, contributing more than half the papers.

Here are just a few archival science highlights: * WISE and Spitzer discover the coldest brown dwarf <cit.> * WISE morphological study of Wolf-Rayet nebulae <cit.> * Buckyballs in a young planetary nebula using Spitzer data <cit.> * WISE, 2MASS, and PanSTARRS data may reveal a super-void in the cosmic microwave background (CMB) cold spot seen by Planck <cit.> * The planets HR8799 b, c, d were imaged by HST in 1998; post-processing speckle subtraction now available provides more than an order of magnitude contrast improvement over the state of the art when the data were taken in 1998 <cit.> * Six years of Fermi data were combined to discover the first extragalactic gamma-ray pulsar <cit.>

§ A LIST OF SOME NASA ARCHIVES BY CENTER

§.§ IPAC: IRSA

IRSA is the NASA/IPAC Infrared Science Archive, located at IPAC at Caltech[<http://irsa.ipac.caltech.edu>]. Its charter is to provide an interface to all NASA infrared and sub-mm data sets, from ∼1 μm to ∼1 cm. It was founded in 1993, and was the original home of IRAS data. IRSA ensures the legacy of NASA's “golden age” of infrared astronomy. IRSA datasets are cited in about 10% of astronomical refereed journal articles. Through September 2016, IRSA's holdings exceed a petabyte (>1000 TB); there are more than 120 billion rows of catalogs.
Between January and September 2016, there were over 33.7 million queries, and 255 TB were downloaded.

§.§ IPAC: NED

NED is the NASA/IPAC Extragalactic Database, located at IPAC at Caltech[<http://ned.ipac.caltech.edu/ui/>]. It is the primary hub for multi-wavelength research on extragalactic science because it merges data from catalogs and the literature. There are thousands of extragalactic papers, with unique measurements for millions of objects. As of September 2016, it contains 215 million objects with 256 million cross-identifications created from more than 102,000 articles and catalogs. There are 2 billion photometric data points joined into spectral energy distributions. NED provides a thematic archive with myriad cross-links, notes, etc., and many services tailored to extragalactic research. Updates are released every few months.

§.§ IPAC/NExScI: NASA Exoplanet Archive

The NASA Exoplanet Archive is also located at IPAC at Caltech[<http://exoplanetarchive.ipac.caltech.edu/>]. It is focused on exoplanets and the stars they orbit, and on stars thought to harbor exoplanets. It includes Kepler data, and is the U.S. portal to CoRoT data. It also has online tools to work with these data, like the periodogram service. It also has a place (Exo-FOP) for observers to upload and share data on planets and planet candidates.

§.§ IPAC/NExScI: KOA

The Keck Observatory Archive (KOA) is a collaboration between NExScI (at IPAC) and the W. M. Keck Observatory[<https://koa.ipac.caltech.edu/>]. It provides access to public data for all ten Keck instruments since the Observatory saw first light in 1994. It provides browse-quality images of raw data, as well as browse-quality and reduced data for HIRES, NIRC2, OSIRIS, and LWS, created by automated pipelines. An example of contributed data is the Keck Observatory Database of Ionized Absorption toward Quasars (KODIAQ; N. Lehner, PI). Coming soon: NIRSPEC extracted spectra and moving target services. See the poster <cit.> for more details.

§.§ STScI: MAST

MAST is the Mikulski Archive for Space Telescopes, located at the Space Telescope Science Institute[<https://archive.stsci.edu/>]. The archive was originally established with the HST launch in 1990 and has been multi-mission since the addition of IUE in 1998. Its mandate includes NASA optical and UV data. It now includes Hubble, Kepler, GALEX, IUE, FUSE, TESS, JWST, Pan-STARRS, DSS, GSC2, and more.

§.§ GSFC: HEASARC

HEASARC is the High Energy Astrophysics Science Archive Research Center[<http://heasarc.gsfc.nasa.gov/>]. It has been located at NASA's Goddard Space Flight Center (GSFC) since its founding in 1990. Its mandate includes data associated with extremely energetic cosmic phenomena, ranging from black holes to the Big Bang. It includes data from Chandra[Note that Chandra's operations archive is at the Chandra Science Center, <http://cxc.harvard.edu/cda/>.], XMM-Newton, Fermi, Suzaku, NuSTAR, INTEGRAL, ROSAT, Swift, and more than 20 others. It merged with the Legacy Archive for Microwave Background Data Analysis (LAMBDA) in 2008; LAMBDA is the home of a variety of cosmic microwave background radiation (CMBR) missions, including WMAP, COBE, ACT, etc.

§.§ SAO/CfA: ADS

ADS is the Astrophysics Data System, located at the Harvard-Smithsonian Center for Astrophysics[<http://adsabs.harvard.edu/>]. It indexes 12 million publications in astronomy, physics, and the arXiv preprint server. It has complete coverage of more than 100 years of astronomy and refereed physics literature.
It tracks citations, as well as institutional and telescope bibliographies. It links to data products at all of the other archives mentioned here, plus other non-NASA astronomy archives around the world. It has a new interface and a new API integrating ORCID author identifications, full-text article searches, and analytics.

§.§ More archives

Other archives based at these centers, not all necessarily NASA-funded, also follow this model. Two examples at STScI are Pan-STARRS (optical ground-based synoptic data) and VLA-FIRST (radio data). The Palomar Oschin wide-field survey is one example at IRSA; it has three incarnations: the Zwicky Transient Facility (2017+), the intermediate Palomar Transient Factory (iPTF; 2013-2016), and the Palomar Transient Factory (2009-2012). There are, of course, many other non-NASA archives in the U.S. (SDSS, NRAO, etc.) and around the world (CDS, ESO, ESA, etc.). These are beyond the scope of this article, and information about them can be found in other articles in this and prior ADASS conference proceedings. Also, in many cases, observers can deliver data back to these centers for distribution, which may include data beyond the original program; more on this below.

§ LESSONS LEARNED

§.§ Easy Access and Support

All of these archives provide easy access. Researchers at all levels (including team members, emeritus professors, and summer students) need to be able to get and use data easily. Thus, these archives need an intuitive, web-based interface, with no extra software installation required. Users want to be able to visualize and assess the data using tools at the archive, but they also want to simply download the data to their own disk as fast as possible. Expert help needs to be there when users need it, meaning that documentation has to be easily found and/or helpdesk tickets promptly answered. The archive must provide access to knowledgeable staff, who have done science with the data products, and who can (a) find problems, and (b) pass on valuable experience to new users. Speed and accuracy matter for the helpdesk; people do not want to be `left hanging.' Some questions can be very complex; acknowledging that a person has read the ticket and is working on it is important, even if it takes days or weeks to actually get an answer. Documentation can come with tools and/or data releases, or in response to specific tickets (special/unusual or frequently asked questions). Documentation should be updated frequently in response to tickets. Demonstrations of holdings and new tools can be provided live (at meetings like the AAS, ADASS, DPS, etc.) or in video tutorials (IRSA has >60 videos, with >4500 views total). The complexity of science user needs increases with time, because the `easy stuff' has already been done: more advanced data reduction techniques become available, or users need to query the database in a way not envisioned when the archive was designed.

§.§ Data Visualization and Discovery

Visualization at the archive is important. Science users coming to look for original data, catalogs, or plots need to be able to find what they are looking for. Users mostly come specifically to find a particular data set, but they may also come looking for anything generally available on their target. They come knowing that they need a particular item, but if data discovery is easy, they will find more data products of use to them.
Visualization helps them assess whether or not the data are of interest, before downloading (and reading the documentation!). High-level science products make data discovery easy and greatly enhance the science return of the archives by making complex data sets accessible to a wider audience of researchers. For people who are, for example, not experts at reducing Spitzer data, being able to find already-reduced, reliable, multi-wavelength photometry of their target means that the barriers to being a Spitzer data user are lower, and Spitzer data get used in more projects. Hubble Legacy high-level science products (HLSPs) are used 10 times as much as more typical HST pipeline products. Especially when there are large, coherent projects (such as Hubble Treasury, Spitzer Legacy, or Spitzer Exploration Science), data reduction can be optimized for the science, and data products become even more usable. For example, source extraction over the entire sky has to take into account bright and faint backgrounds, with a variety of source densities, and often depth is sacrificed for accuracy. With a focused project, source extraction can be optimized for that project, say, the Galactic Plane, and both depth and accuracy are higher than with a broader project. These high-level science products can be generated by the support center for the telescope, or contributed by the community back to the archive. More recently, some delivered high-level science products have included source lists from entire missions (Spitzer, Hubble, Chandra, Herschel, WISE, etc.). In those situations, easily combining data across wavelengths widens the user base for each data set and deepens the science return.

One example of such cross-wavelength advances comes from NED. In the context of assessing the completeness of the database, while studying the fusion of extragalactic data from GALEX, SDSS, 2MASS, WISE, and more, <cit.> found super-luminous spiral galaxies. This result was found by looking at what was already in the archive. The <cit.> result is just one example of two important concepts. (1) As the data sets get bigger and bigger, scientists won't be able to pull all of the data out of the archive to work with them. IRAS catalogs, considered enormous at the time, can fit on a modern iPhone without anyone particularly noticing, but WISE catalogs top 50 billion rows. Requests to `download the entire catalog' aren't trivial. The mission of the archive is evolving from a `search-and-retrieve' approach to one of `do at least some analysis in situ.' (2) In the era of larger and larger data sets (`big data'), there are science discoveries waiting in the archives that were never imagined or expected by the mission or even the program PIs.

§.§ Priorities and Long-Term Commitment

In order to set priorities, the archives rely on community feedback. Each archive has ideas of what it would like to do next, but these plans should be shaped by what the community needs, wants, wishes for, or (in some cases) doesn't know it wants yet. This input is collected from mission staff members, user committees, user surveys, helpdesk tickets, talks and demonstrations at conferences, and the explicit funding review cycles, and all of that feeds into setting priorities. NASA as a whole (and sometimes individual missions) explicitly funds archives, as well as archival research.
The NASA ADAP (Astrophysics Data Analysis Program) is specifically set up to fund researchers primarily using archival data. Having a well-designed archive and products can greatly enhance the research value of the dataset. In order to achieve that goal, archives need to reduce the barriers to usage. They need to make it easy to find the data, and make the data accessible and easy to use (which speaks to reliability, units, file formats, instrumental artifacts, and documentation). In this fashion, NASA explicitly enables new ideas of things to do with older data. Moreover, NASA has a strong tradition of active collaboration between missions and archives. In practice, this means that even missions not yet launched are thinking about optimizing the science resulting from their future archives.

The International Virtual Observatory Alliance (IVOA) is responsible for standardized VO protocols for interoperability between archives (i.e., NOT the applications that use those protocols). Tools that use VO protocols, however, make data discovery easier. For example, users can, with the interface they know, get access to new data elsewhere. The VO protocols enable interoperability of tools, within archives and across archives. There is also infrastructure in place to ensure that information (not just data) flows between the people running these archives. Three organizations that do this are the Astronomy Data Centers Executive Committee (ADEC), the US Virtual Observatory Alliance (USVOA), and the NASA Astronomical Virtual Observatories (NAVO). NAVO enables comprehensive and consistent access to all NASA data through VO protocols. It coordinates NASA interactions with the international and national VO communities. Figure <ref> shows that the rate at which IRSA receives VO-protocol queries is increasing with time.

[Figure (VOqueries.eps): The smoothed weekly queries at IRSA as a function of time for VO queries (red) and non-VO queries (blue). There are increasing numbers of VO queries.]

§.§ Keeping it Running

As archives, and the science they enable, grow more sophisticated, more and more API queries are made of the archives. APIs, or application programming interfaces, to the archives enable programs, scripts, or even users at the command line to query the archives. Scripted access to archive data enables complex projects. However, it also enables rapid queries. New users of the APIs, in particular, can launch inadvertent denial-of-service attacks on the servers. A real-life example (from IRSA) is more than 70,000 requests over 10 hours, or an average of 2 per second. The servers must be watched to ensure that no one user is robbing the rest of the community of bandwidth and computing resources. Empirically, IRSA has many users making small requests via the APIs, but a few users make enormous requests.

The archive must be kept running 24/7 while improving it, or `assembling the plane while it is in flight.' The archives aim to increase the audience and their usage of the archive while staying within existing resources. As a result, the archives must be efficient in how they use resources. One example of how IRSA does this is to use the same software across multiple data sets (see talk by X. Wu). IRSA has an interactive user interface, re-using pieces developed by other projects at IPAC. NED is experimenting with machine learning; NED needs to absorb data that are embedded in free-form text, where the tables are not standard. For example, the position of an object may be given in a column whose header is one of RA, Ra, ra, R.A., or something else entirely. NED has a pilot project to apply machine learning to classify data and facilitate their extraction (see the scripted-access sketch below for the query side of this picture).
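As an illustration of the scripted, VO-protocol access described above, the sketch below issues an IVOA Simple Cone Search. The RA/DEC/SR query parameters (in decimal degrees) are those mandated by the SCS standard, but the endpoint URL here is a placeholder rather than a real archive service, and the pause between requests is a courtesy to avoid the inadvertent denial-of-service behavior mentioned earlier.

```python
import time
import requests

# Placeholder base URL; each archive publishes its own Simple Cone Search
# (SCS) endpoints -- substitute a real one from the archive's documentation.
SCS_URL = "https://archive.example.org/scs"

def cone_search(ra_deg, dec_deg, radius_deg):
    """Return the VOTable text for sources within a radius of (ra, dec).

    RA, DEC, and SR are the query parameters defined by the IVOA Simple
    Cone Search standard.
    """
    resp = requests.get(
        SCS_URL,
        params={"RA": ra_deg, "DEC": dec_deg, "SR": radius_deg},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.text

# Scripted access over a target list, throttled to be polite to the servers.
targets = [(266.417, -29.008), (83.822, -5.391)]
for ra, dec in targets:
    votable = cone_search(ra, dec, 0.1)
    time.sleep(1.0)  # simple rate limit: avoid hammering the archive
```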
The archives must improve scalability, extensibility, and data prospecting, and they must accomplish greater integration of functionality and content across systems. For example, ADS, as an archive focused on the literature, bridges the “data” world of astronomy archives and the “publishing” world of scholarly literature. As such, it integrates content and functionality relevant to both. ORCID offers an example of a standard that has been promoted by publishers to help with author disambiguation; integration of the claiming and indexing in ADS means that the astronomy community has an “easy” way to create the claims using a trusted platform. Searching by object name using SIMBAD TAP offers an example of integration of cross-archive functionality in the new ADS using VO standards. ADS also provides embedding of publisher images via APIs.

The archives have to be ready to ingest new data from the community. At IRSA, the Spitzer Legacy programs changed the astronomy culture by mandating that products be delivered back to the community. Such data deliveries are now a common feature of Spitzer proposals. As discussed above, these deliveries bring the data to a larger audience via the central Spitzer archive. But IRSA has to have the resources to ingest these products. The expense is not necessarily in hardware resources but in the time it takes to educate the people delivering the products. The delivery has to be well-organized and documented, not just for the people at IRSA operationally ingesting the data, but for all future users of the data. For people who frequently make deliveries, this is (now) easy. It is not necessarily easy for people new to making deliveries. IRSA has developed tools to help people learn, but it still takes time and often hand-holding. Complexity is not just about size. And an unanticipated side effect is that you can get optical and UV data out of the Spitzer archive (SINGS, LVL).

§.§ What's Next: Big Data

The era of “big data” is here for some missions, and it has certainly arrived when one considers collectively the data across all of the NASA astrophysics archives. IRSA has already invested in data visualization services to help people identify and experiment with data quickly, and decide whether or not they want to download the data. Planning for big data includes identifying the most critical needs of users, including increased analysis at the archive facilitated by user workspaces, and richer services for in-situ analysis. All of the archives are thinking about this in some way.

§ SUMMARY

Long-term, sustainable archives greatly increase the return on observatory investment, doubling the science return in published papers. Having robust, reliable support for both expert and novice users pays off. User support by instrument experts is crucial for the successful use of the data and archives by the wider astronomy community. Standardization of tools within an archive increases efficiency. Interoperability between archives increases access to data sets and facilitates multi-mission analysis. High-level data products can expand the reach of large data sets. The archives are seeing a shift in approach from `search and retrieve' to `analyze in situ.'
http://arxiv.org/abs/1709.09566v1
{ "authors": [ "L. M. Rebull", "V. Desai", "H. Teplitz", "S. Groom", "R. Akeson", "G. B. Berriman", "G. Helou", "D. Imel", "J. M. Mazzarella", "A. Accomazzi", "T. McGlynn", "A. Smale", "R. White" ], "categories": [ "astro-ph.IM" ], "primary_category": "astro-ph.IM", "published": "20170927145939", "title": "NASA's Long-Term Astrophysics Data Archives" }
1 [email protected] ^1) Department of Chemistry, University of California, Berkeley, CA 94609 ^2) Kavli Energy NanoScience Institute, Berkeley, CA 94609 ^3) Materials Science Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94609 We describe a method for computing transport coefficients from the direct evaluation of large deviation functions. This method is general, relying on only equilibrium fluctuations, and is statistically efficient, employing trajectory based importance sampling. Equilibrium fluctuations of molecular currents are characterized by their large deviation functions, which are scaled cumulant generating functions analogous to the free energy. A diffusion Monte Carlo algorithm is used to evaluate the large deviation functions, from which arbitrary transport coefficients are derivable. We find significant statistical improvement over traditional Green-Kubo based calculations. The systematic and statistical errors of this method are analyzed in the context of specific transport coefficient calculations, including the shear viscosity, interfacial friction coefficient, and thermal conductivity. Transport Coefficients from Large Deviation Functions David T. Limmer^1,2,3 December 30, 2023 ===================================================== The evaluation of transport coefficients from molecular dynamics simulations is a standard practice throughout physics and chemistry. Despite significant interest and much study, such calculations remain computationally demanding. Traditional methods exploit Green-Kubo relationships <cit.> and rely on integrating equilibrium time correlation functions <cit.>. While general, depending only on identifying a relevant molecular current, these methods often suffer from large statistical errors due to finite time averaging, making them cumbersome to converge <cit.>. Alternative methods exist that directly drive a current through the system by the application of specific boundary conditions <cit.> or by altering the equations of motion <cit.>. These direct methods typically mitigate sampling difficulties by requiring that only the current is averaged rather than its time correlation function. However, such methods are generally not transferable to different transport processes. Moreover, as a nonequilibrium simulation, the details of how the current is generated can affect their convergence <cit.> and fidelity <cit.>. Here we propose a new way to compute transport coefficients that utilizes only equilibrium fluctuations, as in Green-Kubo calculations, but is evaluated by averaging a current, as in direct methods. Rather than applying a physical field to drive a current, we apply a statistical bias to the system's dynamics and measure the resultant response. The response is codified in the relative probability of a given current fluctuation, so this calculation is identical to the evaluation of a free energy, albeit in a path ensemble <cit.>. Such path ensemble free energies are known as large deviation functions <cit.>, and with trajectory based importance sampling methods to aid in their calculation, we arrive at a method to evaluate transport coefficients that is both general and quickly convergent.
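For orientation, the traditional Green-Kubo route that serves as the baseline throughout this paper can be sketched in a few lines of Python. This is an illustrative sketch only, not code from this work: the function name, the synthetic Ornstein-Uhlenbeck current, and all parameter values are assumptions, chosen so that the exact answer (the correlation time, here equal to 1) is known in advance.

import numpy as np

def green_kubo(j, dt, t_max):
    # Estimate L = int_0^{t_max} <j(0) j(t)> dt from a stationary time
    # series j sampled every dt, using an FFT-based autocorrelation.
    n = len(j)
    jz = j - j.mean()
    f = np.fft.rfft(jz, 2 * n)                  # zero-padded transform
    acf = np.fft.irfft(f * np.conj(f))[:n]      # sum_i jz[i] jz[i+t]
    acf /= np.arange(n, 0, -1)                  # unbiased normalization
    return acf[: int(t_max / dt)].sum() * dt    # truncated time integral

# Toy current with <j(0) j(t)> = exp(-t/tau), so the exact integral is tau.
rng = np.random.default_rng(0)
dt, tau = 0.01, 1.0
j = np.zeros(200_000)
for i in range(1, len(j)):                      # Ornstein-Uhlenbeck process
    j[i] = j[i - 1] * (1 - dt / tau) + np.sqrt(2 * dt / tau) * rng.normal()
print(green_kubo(j, dt, t_max=10.0))            # ~ 1, up to sampling noise

The statistical error of such an estimate is controlled by the truncation time and by the total length of the time series, which is exactly the trade-off the importance-sampled approach developed below is designed to improve.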
Large deviation theory has emerged as a useful formalism for considering the fluctuations of time integrated observables <cit.>. In fact, large deviation theory underpins many recent developments in nonequilibrium statistical mechanics, including generalized fluctuation theorems <cit.> and thermodynamic uncertainty principles <cit.>. The large deviation function is a scaled cumulant generating function and, like its equilibrium counterpart, the free energy, it codifies the stability and response of dynamical systems. While these advances in nonequilibrium statistical mechanics have yielded important relationships for systems far from equilibrium, they have also brought new insight into near-equilibrium phenomena. Andrieux and Gaspard have illustrated this especially clearly, resolving how Onsager's reciprocal relations and their generalizations beyond linear response follow from the large deviation function for the total entropy production and its symmetry provided by the fluctuation theorem <cit.>. They have shown the connection between the moments of a large deviation function for a time integrated current and phenomenological transport coefficients within both linear and nonlinear response regimes <cit.>. We use this insight–that large deviation functions can encode the dynamical response of a system driven away from equilibrium–to construct an efficient method for the evaluation of transport coefficients from molecular dynamics simulations. To compute a large deviation function for a time integrated molecular current, we employ a trajectory based importance sampling procedure. Beginning with transition path sampling <cit.>, Monte Carlo algorithms have been derived to uniformly sample path space for systems evolving with detailed-balanced dynamics. These methods have found application in computing rate constants and finding reaction pathways for complex condensed phase processes spanning autoionization to viral capsid assembly <cit.>. Indeed, it was identified in early work that a reaction rate constant could be computed from a thermodynamic-like integration along path space, resulting in a relative free energy in trajectory space <cit.>. This observation was never generalized to other dynamical responses, like phenomenological transport coefficients. With the development of diffusion Monte Carlo algorithms like the cloning algorithm that directly target large deviation functions <cit.>, such generalization is possible. Recent extensions of diffusion Monte Carlo algorithms that incorporate importance sampling using an iterative feedback approach <cit.>, cumulant expansions <cit.>, or approximate auxiliary processes <cit.>, have improved the efficiency of these algorithms enough to make calculations for complex, high dimensional systems possible. In this way, we proceed numerically by computing directly an effective thermodynamic potential like the one Onsager identified when he first formulated his thermodynamic theory of linear response <cit.>, with the large deviation function serving to characterize this potential. This approach, conceptually distinct from traditional methodologies, provides new ways of thinking about transport processes that are amenable to the kinds of analysis typically reserved for static equilibrium observables, such as their dependence on ensemble and generalization to nonlinear regimes <cit.>. While we are restricted to linear response coefficients in this article, generalization to nonlinear response regimes is straightforward <cit.>.
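As a toy numerical illustration of this connection (anticipating the definition of ψ(λ) given in the next section), one can check that a Gaussian time-averaged current J with zero mean and variance 2L/t_N has the parabolic generating function ψ(λ) = Lλ^2. Everything here is a synthetic, assumed example: L_true, t_N and the sample count are arbitrary choices.

import numpy as np

rng = np.random.default_rng(1)
L_true, t_N, n_samples = 0.5, 100.0, 2_000_000
J = rng.normal(0.0, np.sqrt(2 * L_true / t_N), n_samples)

def psi(lam):
    # psi(lambda) = (1/t_N) ln <exp(-lambda t_N J)>, via a stable log-sum-exp
    x = -lam * t_N * J
    x0 = x.max()
    return (x0 + np.log(np.mean(np.exp(x - x0)))) / t_N

for lam in (-0.04, -0.02, 0.0, 0.02, 0.04):
    print(f"lam={lam:+.2f}  psi={psi(lam):.6f}  L*lam^2={L_true * lam**2:.6f}")

Fitting the curvature of the printed ψ(λ) values at λ = 0 recovers 2L, which is precisely the fluctuation-dissipation statement exploited in what follows.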
The rest of the paper is organized in the following manner. In Section 2, we summarize important results of large deviation theory, and illustrate its connection to phenomenological transport coefficients. We also outline the simulation methodology used to compute large deviation functions. In Section 3, we test our method by comparing it with calculations using the Green-Kubo formalism. We study three specific cases: the shear viscosity of TIP4P/2005 water <cit.>, the interfacial friction coefficient between a Lennard-Jones fluid and a Lennard-Jones wall, and the thermal conductivity of a Weeks-Chandler-Andersen <cit.> solid. We use these models to frame a discussion of the relative systematic and statistical errors associated with our new methodology in comparison to traditional Green-Kubo calculations. We provide some final remarks on our method in Section 4. § THEORY AND METHODOLOGY We consider systems evolving according to a Markovian stochastic dynamics, though generalization to deterministic dynamics is straightforward. In the absence of an external stimulus, these dynamics obey microscopic reversibility and thus will sample a Boltzmann distribution <cit.>. Under a bias, applied either at the boundaries of the system or through an external field, a current is expected to arise. If the bias is small, the response of the system can be linearized and a transport coefficient, L, is defined through J = LX, where J is a current and X is its conjugate generalized force, which could be proportional to a temperature or concentration gradient. The entropy production for this process is equal to the product of the force and the current, or S = JX. The transport coefficient, L, is thus a response function relating the applied force to the generated current, L = dJ/dX, in the limit that X → 0. This is the object we aim to compute. §.§ Transport coefficients from large deviation functions To compute the response coefficient, L, we must identify a corresponding dynamic variable whose fluctuations will report on the system's response to the bias. Specifically, we define a time averaged current as J = 1/t_N ∫_0^t_N j(c_t) dt, where t_N is some observation time, and j(c_t) is a fluctuating variable computable from the molecular configuration, c_t, at time t. If j is correlated over a finite amount of time, then the fluctuations of J can be studied by computing its cumulant generating function, ψ(λ) = lim_t_N→∞ 1/t_N ln⟨ e^-λ t_N J⟩, where ψ(λ) is known as the large deviation function and λ is a statistical field conjugate to J <cit.>. Here, the average ⟨⋯⟩ is taken within an ensemble of paths of length t_N, denoted as a vector of all the configurations visited over that time, or C(t_N)={ c_0,c_1,⋯,c_t_N}. The probability of observing such a path is given by P[C(t_N)] = ρ[c_0] ∏_i=1^t_N ω[c_i-1→ c_i], where ρ[c_0] represents the distribution of initial conditions, and ω[⋯] are the transition probabilities between time-adjacent configurations. As a cumulant generating function, the derivatives of ψ(λ) report on the fluctuations of the current J. For example, the first two derivatives yield ψ'(0) = -⟨ J ⟩ and ψ”(0) = t_N⟨δ J^2 ⟩, where ⟨ J ⟩ is the average current and δ J = J - ⟨ J ⟩ is its deviation from the mean, whose squared average yields the variance of J. Because the dynamics we consider obey microscopic reversibility, ψ(λ) obeys a generalized fluctuation theorem, ψ(λ) = ψ(X-λ), where X is the generalized force as in Eq. <ref>.
This symmetry relates the likelihood of a current to its time-reversed conjugate <cit.>, and implies a fluctuation-dissipation relationship, or a relation between the second derivative of the large deviation function, ψ^”(λ) = -∂⟨ J ⟩_λ/∂λ = 2 ∂⟨ J ⟩_λ/∂ X, and the transport coefficient, L. Here ⟨…⟩_λ denotes the average in the biased path ensemble. In the limit of X → 0, ψ^”(0) = 2L, where as previously observed <cit.>, we find that the curvature of the large deviation function around λ=0 is equal to the response function L up to a factor of 2. Analogously, higher order derivatives can be related to nonlinear transport coefficients. For small values of λ, the large deviation function can be expanded as ψ(λ) = Lλ^2 + O(λ^4), which is parabolic and completely determined by L. This implies the distribution of J is Gaussian, with a variance of 2L/t_N. This inversion is a direct reflection of Onsager's notion of an effective thermodynamic potential, where the probability of a current is given by the exponential of the entropy production. The connection between the large deviation result and the Green-Kubo formalism can be understood by expanding the definition of J in Eq. <ref>. Without loss of generality, within Green-Kubo theory a transport coefficient can be computed from L = lim_t_M→∞ L(t_M), L(t_M) = ∫_0^t_M ⟨ j(c_0)j(c_t)⟩ dt, where L(t_M) is an integral over the time correlation function of j(c_t), and in the long time limit is equal to L <cit.>. As ⟨ J ⟩ = 0 for an equilibrium system, where X=0, it is straightforward to relate the second derivative of the large deviation function with respect to λ, evaluated at λ=0, to L as ψ^”(0) = 2∫_0^∞⟨ j(c_0)j(c_t) ⟩ dt = 2L, where we have made use of the time-translational invariance of the equilibrium averaged time correlation function, and assumed that ⟨ j(c_0)j(c_t) ⟩ decays faster than 1/t. This equation is known as the Einstein-Helfand relation and is well known to yield an equivalent expression for transport coefficients <cit.>. Provided an estimate of ψ(λ) accurate enough to compute ψ^”(0), we thus have a means of evaluating L. §.§ Calculation of large deviation functions To evaluate the large deviation function for J, we use a variant of path sampling known as the cloning algorithm <cit.>. The cloning algorithm is based on a diffusion Monte Carlo procedure <cit.> where an ensemble of trajectories is integrated in parallel. Each individual trajectory is known as a walker, and collectively the walkers undergo a population dynamics whereby short trajectory segments are augmented with a branching process that results in walkers being pruned or duplicated in proportion to a weight. This algorithm has been used extensively in the study of driven lattice gases <cit.> and models of glasses <cit.>. Alternative methods for importance sampling trajectories, such as transition path sampling <cit.> or forward flux sampling <cit.>, could be used similarly. Generally, to importance sample large deviation functions, the original trajectory ensemble, P[C(t_N)], can be biased to the form <cit.> P_λ[C(t_N)] = P[C(t_N)]e^-λ t_N J[C(t_N)]-ψ(λ) t_N, where the large deviation function ψ(λ) is the normalization constant computable as in Eq. <ref>. Ensemble averages for an arbitrary observable, 𝒪, within the unbiased distribution and the biased one are related by ⟨𝒪[C(t_N)] ⟩_λ = ⟨𝒪[C(t_N)]e^-λ t_N J[C(t_N)]⟩/⟨ e^-λ t_N J[C(t_N)]⟩, where the denominator is exp[ψ(λ) t_N] in the limit of large t_N. If we choose 𝒪[C(t_N)] = δ(J-J[C(t_N)]) in Eq.
<ref>, then we find a familiar relationship between biased ensembles, ln p_λ(J) = ln p(J) - λ t_N J - t_N ψ(λ), where p_λ(J) = ⟨δ(J-J[C(t_N)]) ⟩_λ is the probability of observing a given value of the current J in the biased ensemble, and p(J) is that in the unbiased ensemble. This demonstrates that ψ(λ) is computable as a change in normalization through histogram reweighting <cit.>. In order to arrive at a robust estimate for ψ(λ), the two distributions, p_λ(J) and p(J), must have significant overlap. However, for large systems or long observation times, each distribution narrows, and sampling p_λ(J) by brute force is exponentially difficult. To evaluate the large deviation function, the cloning algorithm samples P_λ[C(t_N)] by noting that it can be expanded to P_λ[C(t_N)] ∝ρ[c_0] ∏_i=1^t_N ω[c_i-1→ c_i] e^-λδ t j[c_i], where we have discretized the integral for J over a time δ t. The argument of the product is the transition probability times a bias factor that is local in time. This combination of terms cannot be lumped together into a physical dynamics, as it is unnormalized. However, it can be interpreted as a population dynamics where the nonconservative part proportional to the bias is represented by adding and removing walkers. In particular, in the cloning algorithm, trajectories are propagated in two steps. First, walkers are integrated according to the normalized dynamics specified by ω[c_i-1→ c_i] for a trajectory of length n δ t. Over this time, a bias is accumulated according to W_i(t,n δ t) = exp [ - λδ t ∑_j=1^n j[c_t+j δ t]], where, due to the multiplicative structure of the Markov chain, the bias is simply summed in the exponential. After the trajectory integration, n_i(t) identical copies of the ith trajectory are generated in proportion to W_i(t,n δ t), n_i(t) = ⌊ N_w W_i(t,n δ t)/∑_j=1^N_w W_j(t,n δ t) + ξ⌋, where ξ is a uniform random number between 0 and 1 and ⌊…⌋ is the floor function. This process will result in a different number of walkers, and thus each walker in the new population is copied or deleted uniformly until N_w are left. With this algorithm, the large deviation function can be evaluated after each branching step as the deviation of the normalization, ψ^t(λ) = ln [ (1/N_w) ∑_i=1^N_w W_i(t,nδ t) ], which is an exponential average over the bias factors of each walker. In the limit of a large number of walkers, this estimate is unbiased <cit.>. The local estimate can be improved by averaging over the observation time, ψ(λ) = 1/t_N ∑_t=1^t_N/(nδ t) ψ^t(λ), which upon repeated cycles of integration and population dynamics yields a statistically converged estimate of ψ(λ). Alternatively, ψ(λ) can be computed from histogram reweighting using Eq. <ref> from the distribution of Js generated from each walker. In what follows, all calculations are integrated with LAMMPS <cit.> and, where specified, combined with a diffusion Monte Carlo code called the Cloning Algorithm for Nonequilibrium Stationary States (CANSS) <cit.>. A detailed description of convergence criteria for this algorithm can be found in Reference <cit.>. § RESULTS AND DISCUSSION To illustrate the utility of our method, we have tested it in three model transport processes. In Table 1, we list all the transport coefficients considered in this section, along with their corresponding Green-Kubo relations and the dynamical variables whose large deviation function we compute. For all of the models studied, we generate trajectories by integrating a Langevin equation of motion.
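To make the population dynamics described above concrete, the following schematic sketches one possible organization of the cloning loop. It is not the production code (which uses LAMMPS together with CANSS); propagate is a placeholder for the single-trajectory integrator, which in all of our applications is the Langevin dynamics specified next, and the function and variable names are our own.

import numpy as np

def cloning_psi(walkers, propagate, lam, dt, n, n_cycles, rng):
    # walkers: list of configurations. propagate(c, n, dt) must return the
    # final configuration and the values of the current j along a segment
    # of n steps. Returns the running estimate of psi(lambda).
    N_w = len(walkers)
    psi_acc = 0.0
    for _ in range(n_cycles):
        W = np.empty(N_w)
        for i, c in enumerate(walkers):
            walkers[i], j_vals = propagate(c, n, dt)
            W[i] = np.exp(-lam * dt * np.sum(j_vals))    # bias weight
        psi_acc += np.log(W.mean())                      # local estimate
        # branching: n_i = floor(N_w W_i / sum_j W_j + xi), as in the text
        counts = np.floor(N_w * W / W.sum() + rng.random(N_w)).astype(int)
        pool = [walkers[i] for i in range(N_w) for _ in range(counts[i])]
        # uniformly prune or duplicate so that exactly N_w walkers remain
        idx = rng.choice(len(pool), size=N_w, replace=len(pool) < N_w)
        walkers = [pool[i] for i in idx]
    return psi_acc / (n_cycles * n * dt)

Each term added to psi_acc is the local estimate ψ^t(λ) of the equation above, so dividing the accumulated sum by the total observation time n_cycles·n·dt reproduces the time-averaged estimator.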
A Markovian, stochastic equation is needed for the calculation of the large deviation function using the method presented in the previous section. For the position of particle i, denoted 𝐫_i = {x_i,y_i,z_i}, this equation has the form m_i 𝐫̈_i = -∇_𝐫_i U(𝐫^N) - m_iγ𝐫̇_i + 𝐑_i, where the dots denote time derivatives, U(𝐫^N) is the total intermolecular potential from all N particles at position 𝐫^N, m_i is the particle's mass, γ is the frictional coefficient, and 𝐑_i is a random force. The statistics of the random force is determined by the fluctuation-dissipation theorem, which for each component is ⟨ R_i(t) ⟩ = 0 and ⟨ R_i(t)R_j(t') ⟩ = 2 m_i γ k_B T δ(t-t')δ_ij, where k_BT is Boltzmann's constant times temperature, δ(t) is Dirac's delta function and δ_ij is the Kronecker delta. For all our calculations, we have chosen γ carefully so that the thermostat has little effect on the transport properties of the system and we are able to recover response coefficients consistent with calculations done using Newtonian trajectories, when possible. §.§ Validation of methodology: shear viscosity To illustrate our methodology, we first consider the evaluation of the shear viscosity, η, which is typically easy to compute with traditional methods. The phenomenological law that defines the shear viscosity is Newton's law of viscosity, which relates the shear stress of a fluid to an imposed shear rate, σ_xy = η ∂ v_x/∂ y, where σ_xy is the xy-component of the stress tensor, and (∂ v_x/∂ y) is the gradient of the x component of velocity in the y direction. The relevant molecular current for this process is the momentum flux, which is equivalent to σ_xy. The stress tensor is computable as σ_xy = 1/V ( ∑_i m_i v_xi v_yi + ∑_i x_i f_yi ), where V is the constant volume of the system, and v_ki and f_ki are the velocity and force exerted on particle i in the k direction, respectively. Given this identification of the current, its associated thermodynamic force is X=(V/T)(∂ v_x/∂ y). From Eqs. <ref> and <ref> we can identify the relation between the shear viscosity and L as η = L(V/T). We compute the shear viscosity for the TIP4P/2005 model of water <cit.>, which has been reported previously using Green-Kubo theory <cit.>. Our simulation system consists of 216 water molecules with density ρ = 1 g/cm^3 and temperature T = 298 K, integrated with the Langevin equation in Eq. <ref> with γ = 1 ps^-1. The simulation is thus done in an ensemble of constant number of molecules N, volume V, and temperature T. We have verified that for γ = 1 ps^-1, the shear viscosity computed is the same as that from an ensemble with constant energy or an NVT ensemble using a Nosé-Hoover thermostat <cit.>. The molecules are held rigid with the SHAKE algorithm <cit.> and we employ a timestep of 1 fs. For all of the calculations, we first equilibrate the simulation for 20 ns. First, we compute η using the Green-Kubo formula in Eq. <ref>. Note that other elements of the stress tensor can be averaged over to achieve better statistics. In both the Green-Kubo method and our new proposed calculation, the statistical benefit would be identical, so for notational clarity we will consider only the xy component. We average the stress-stress time correlation function over 20 ns, and this function is shown in Figure <ref>a.
The time correlation function is oscillatory due to the inertial recoil of the dense fluid, and has largely decayed within 1 ps, though there is a slow component to the decay from the approximate conservation of momentum for times shorter than the timescale of the Langevin thermostat. From Green-Kubo theory, the viscosity is the integral of this function. Shown in the main part of Figure <ref>a is η(t_M) as a function of the upper limit of the integral as in Eq. <ref>, which has plateaued by t_M = 10 ps. Also shown are the associated statistical errors, which grow with t_M. The calculated shear viscosity from 5 independent simulations and a cutoff time of 20 ns is 0.876 ± 0.015 mPa·s. This value is in good agreement with that previously reported <cit.>. Alternatively, we can compute the shear viscosity from the large deviation function for Σ_xy defined in Eq. <ref>. As the stress correlations decay quickly for this model, importance sampling is unnecessary, so we illustrate the basic principle by brute-force reweighting. Specifically, we generate an estimate of p[Σ_xy] with t_N = 80 ps, from a 20 ns long equilibrium trajectory. Then, we reweight the distribution to compute p_λ[Σ_xy] according to Eq. <ref>. Examples of the equilibrium and biased distributions are shown in the inset of Figure <ref>(b). The added bias shifts the distribution to a different mean, and the overlap between these two distributions determines the efficiency of our sampling. The large deviation function ψ(λ), shown in the main panel of Figure <ref>(b), is evaluated by Eq. <ref> for different λ's. The parabolic form of ψ(λ) is in agreement with the Gaussian distribution of the fluctuations in Σ_xy in the linear response regime. Given that ψ(λ) is a parabola centered at the origin, it is straightforward to compute η by fitting the curve in Figure <ref>(b) to Eq. <ref> over a range of |λ| ≤ 1.5×10^-4 atm^-1 ps^-1. From this, we obtain an estimate of the viscosity η = 0.882 ± 0.017 mPa·s, which is in agreement with our Green-Kubo result. Both errors reported are the standard error of the mean. §.§ Analysis of systematic error: interfacial friction coefficient Having validated the basic methodology, we next focus on the systematic errors determining its convergence. As a case study, we consider computing the interfacial friction coefficient at a liquid-solid interface. This friction coefficient is defined by the linear relationship f_x = -μ A v_s, where f_x is the total lateral force exerted on the solid wall in the x direction, A is the lateral area of the interface, and v_s is the tangential velocity of the fluid relative to the solid. As before, we can identify a relevant molecular current as the momentum flux along the wall, in this case proportional to f_x = -∑_i=1^N_l ∑_k=1^N_c d/dx_i u_ls(|𝐫_i - 𝐫_k|), the sum of the x component of the forces of all N_l liquid particles on the N_c wall particles, where the force is given by the gradient of the liquid-solid interaction potential, u_ls. Given this current, we can identify its conjugate force as X=(A/T)v_s, and consequently, the friction coefficient is given by μ = L(A/T). The system is modeled as a fluid of monatomic particles confined between two stationary atomistic walls parallel to the xy plane. The fluid particles interact through a Lennard-Jones (LJ) potential with characteristic length scale d, energy scale ϵ, time τ = √(md^2/ϵ) with m as the mass of the fluid particle, and is truncated at 2.5d.
Reduced units will be used throughout this and the following section, and we set k_B = 1. The walls are separated by a distance H_z = 18.17d along the z axis. Periodic boundary conditions are imposed along the x and y directions, with the lateral dimensions of the simulation domain H_x = H_y = 15.90d. Each wall is constructed with 1568 atoms distributed as (111) planes of a face-centered-cubic lattice with density ρ_w = 2.73d^-3, while the fluid density is ρ_f = 0.786d^-3. The wall atoms do not interact with each other, but are allowed to oscillate about their equilibrium lattice sites under the harmonic potential u_h(r) = kr^2/2, with a spring constant k = 600ϵ/d^2. The mass of the wall atoms is chosen to be m_c = 4m. The interaction between the wall and the fluid atoms is also modeled by a LJ potential with the same length scale d and truncation, but a slightly smaller energy ϵ_wf = 0.9ϵ, to model the solvophobicity of the wall <cit.>. Only the wall particles are thermostatted by the Langevin equations in Eq. <ref>, using γ = 1τ^-1. Previous studies have recognized that μ is difficult to compute due to the confinement of the corresponding hydrodynamic fluctuations <cit.>, which results in a large systematic error. This difficulty has led some to question the reliability and applicability of Green-Kubo calculations, such as the one derived in <cit.> and shown in Eq. <ref>, to compute μ. Indeed, we have found that the details of the simulation, such as the ensemble, system geometry and γ used in the Langevin thermostat, all have an important influence on the calculation of μ. This sensitivity arises because the fluctuations that determine the friction are largely confined to two spatial dimensions, which is well known to result in correlations that have hydrodynamic long time tails, whose integral may be divergent <cit.>. However, both our large deviation function method and the Green-Kubo calculations are based on equilibrium fluctuations. Provided a simulation geometry, equation of motion, and ensemble, the system samples the exact same trajectories, so we expect the friction coefficient computed in both ways to agree. Shown in the inset of Figure <ref>(a) is the Green-Kubo correlation function, which includes a very slow decay extending to at least 100τ, following short time oscillatory behavior from the layered density near the liquid-solid interface. The main panel of Figure <ref>(a) shows μ computed with increasing integration time, t_M. Averaging over 4 independent samples with a cutoff t_M = 1000τ, our estimate of the friction coefficient is μ = 0.109 ± 0.019 ϵτ/d^2. The interfacial friction coefficient is also computed from the large deviation function, with t_N = 400τ, using the time integrated force, Eq. <ref>, as our dynamical observable. The large deviation function and the average time integrated force, ⟨ F_x ⟩_λ, are shown in the main panel and inset of Figure <ref>(b), respectively, demonstrating that within the range of λ we consider the system still responds linearly. With λ = 10^-3 d/ϵτ and t_N = 4000τ, importance sampling gives us an estimate of the friction coefficient of μ = 0.121 ± 0.002 ϵτ/d^2, in reasonable agreement with the Green-Kubo estimate and with previous reports <cit.>. In both the Green-Kubo and the large deviation function calculations, the main source of systematic error is from finite time. This error is especially highlighted in this example, where the time correlation function decays very slowly.
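Before quantifying these errors for the actual simulations, it is instructive to isolate the effect of a slowly decaying correlation function on a truncated Green-Kubo integral. The following self-contained numerical illustration is purely synthetic (the model correlation function and the time scales are assumptions, not simulation data); it uses the ∼1/t^2 hydrodynamic tail discussed below, for which the truncation error of the time integral falls off only as ∼1/t_M.

import numpy as np

def L_of_tM(t_M, dt=1e-3, t0=1.0):
    # midpoint-rule integral of a model correlation function
    t = np.arange(dt / 2, t_M, dt)
    c = 1.0 / (1.0 + t / t0) ** 2     # model <j(0)j(t)>; exact integral = t0
    return c.sum() * dt

L_exact = 1.0
for t_M in (10, 100, 1000):
    err = (L_of_tM(t_M) - L_exact) / L_exact
    print(f"t_M = {t_M:5d}  relative error {err:+.5f}  (-1/t_M = {-1.0/t_M:+.5f})")

The printed relative error tracks -t0/(t0 + t_M) ≈ -t0/t_M, mirroring the slow ∼1/t convergence of the integrated force correlation function discussed below.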
We consider the systematic errors in the estimate of μ by defining a relative error as Err^(sys)[μ] = (μ(t) - μ)/μ, where μ(t) is the finite time value of the friction coefficient, and μ its asymptotic value at t→∞. The form of the time dependent systematic error is different in the Green-Kubo method compared to the large deviation estimate. In the Green-Kubo method, systematic errors come from truncating the integral before the correlation function has decayed, and we denote this time t_M, the cutoff time in the integral of the correlation function. In the large deviation calculation, systematic errors come from both truncating the integral as well as sub-time-extensive contributions to the exponential expectation value, which are more analogous to finite size effects in normal free energy calculations. These contributions are both determined by the path length t_N. The relative systematic error is shown in Figure <ref> for both methods. For this case, it appears that the Green-Kubo method always has a smaller error than the large deviation function method, though their magnitudes are comparable. In the Green-Kubo method, it follows that if we know the analytical form of the correlation function, we can determine the scaling of the relative error. In the case of interfacial friction, Barrat and Bocquet have proposed, using hydrodynamic arguments, that for a cylindrical geometry where the dimension in the confined direction is much smaller than the other two directions, the force autocorrelation should decay asymptotically as ∼1/t^2 <cit.>. This is a direct consequence of the fact that the velocity autocorrelation function decays as ∼1/t in a 2-dimensional system <cit.>, neglecting the self-consistent mode coupling correction that adds an imperceptible √(ln t) correction <cit.>. This is confirmed by our simulation result in Figure <ref> (orange line), where the integral of the force correlation function decays as ∼1/t. Since the large deviation function has a parabolic form, reflecting Gaussian current fluctuations, we can analyze the form of the finite time correction exactly as Err^(sys)[ψ] = (ψ̃(λ,t_N) - ψ(λ))/ψ(λ) = (μ(t_N) - μ)/μ + (1/(2 t_N μ λ^2)) ln[4π t_N μ(t_N)], where ψ(λ) is the long time limit of the large deviation function, and ψ̃(λ,t_N) is its finite time estimate. This follows from a fluctuation correction about a saddle point integration. Physically, this correction arises from a t_N that is too short, such that ψ(λ) is not the dominant contribution to the tilted propagator, but rather includes temporal boundary terms from the overlap of the distribution of initial conditions and the steady state distribution generated under finite λ <cit.>. If we expand the first term, we arrive at μ(t_N) - μ ≈ -∫_t_N^∞⟨ j(0)j(t) ⟩ dt + (1/t_N)∫_0^t_N t ⟨ j(0)j(t) ⟩ dt, which consists of the term included in the Green-Kubo expression, as well as an additional term modulated by a factor of 1/t_N. Given that the correlation decays as ∼1/t^2, the first term on the right hand side scales as ∼1/t_N, as in the Green-Kubo method, while the second term scales as ∼(ln t_N)/t_N. This form is shown in Figure <ref> and agrees very well with our data. These additional terms explain why the magnitude of the systematic error is larger for the large deviation function. In cases where the Green-Kubo correlation function decays faster than 1/t^2, we expect that the dominant contribution to the error will come from the last term in Eq.
<ref>. §.§ Analysis of statistical error: thermal conductivity We finally discuss the statistical error of our method by studying the thermal conductivity, κ, of a solid system with particles that interact via the Weeks-Chandler-Andersen potential <cit.>. The thermal conductivity is defined through Fourier's law, 𝐞 = -κ∇T, where 𝐞 is the energy current per unit area, and ∇T is the temperature gradient. From the expression for entropy production, the thermodynamic force is given by X = -(1/k_BT^2)∇T, and so the thermal conductivity is κ = L/(Vk_BT^2). As the relevant molecular current, we study the fluctuations of the heat flux 𝐪 given by 𝐪 = 𝐞V = ∑_i 𝐯_𝐢 e_i + 1/2∑_i≠ k(𝐟_𝐢𝐤·𝐯_𝐢)𝐫_𝐢𝐤, where e_i is the per-particle energy, 𝐟_𝐢𝐤 is the force on atom i due to its neighbor k from the pair potential, and 𝐫_𝐢𝐤 is the coordinate vector between the two particles. We use a system size of 10^3 unit cells, with lattice spacing 1.49d. A Langevin thermostat with γ = 0.01τ^-1 maintains the system at the state point T = 1.0ϵ/k_B, ρ = 1.2d^-3, which yields identical results for κ as an NVE calculation. We focus on the diagonal component, κ_xx, of the thermal conductivity tensor. Within Green-Kubo theory, the thermal conductivity can be computed by integrating the autocorrelation function of the x component of the heat flux, q_x, as in Eq. <ref>. The inset of Figure <ref>(a) shows the decay of the autocorrelation function, which comprises a fast decay from the high-frequency vibrational modes, followed by a slower decay that contributes most to the thermal conductivity and arises due to the low frequency acoustic modes <cit.>. To compute κ from the integral, as shown in the main part of Figure <ref>(a), the upper time limit is chosen as t_M = 1500τ, though the relaxation of the correlation extends only to around 5τ. To compute κ from the large deviation function, we study fluctuations in the time averaged heat flux, Q_x, defined in Eq. <ref>. The transport coefficient, κ, is again calculated using Eq. <ref> by assuming the large deviation function ψ(λ) is a parabola, which is justified in Figure <ref>(b). The inset there clearly shows the linear response of the biased ensemble average, ⟨ Q ⟩_λ, computed from Eq. <ref>. Given sufficient statistics the two methods converge to the same value. The estimate of thermal conductivity from the Green-Kubo method using a long trajectory of 1.5×10^6 τ is κ = 34.3 ± 2.2 1/τd, while the estimate from the large deviation function using N_w = 1000 walkers and λ = 10^-4 is κ = 34.01 ± 0.78 1/τd. While the average values of κ agree between the two methods, the statistical convergence varies significantly. To make a fair comparison, we set the total observation time of the trajectories to the same time as the upper limit of the Green-Kubo integral, i.e., t_N = t_M = 1500τ, which is much longer than the characteristic decay of the current autocorrelation function. To compensate for the computational overhead of propagating N_w trajectories in parallel in the cloning algorithm, the total averaging time of the Green-Kubo method is chosen as t_tot = t_M × N_a, with N_a equal to the walker number, N_w, so that the two methods require approximately the same computational effort. Both N_a and N_w will be denoted as N_s, reflecting the number of independent samples of each fluctuating quantity. We measure the statistical error by the relative error Err^(stat)[κ] = √(⟨δκ^2 ⟩)/κ, which is plotted in Figure <ref> for both methods.
As usual, the statistical error depends on both the relative size of observable fluctuations and the number of independent samples. We find that, while the standard deviations of both methods scale as 1/√(N_s) as expected, our importance sampling clearly helps to suppress the statistical error compared to the Green-Kubo method with similar computational effort, decreasing the magnitude of the error by an order of magnitude at fixed N_s. Even though we have to choose a bias small enough to guarantee a linear response, we do see that a larger bias helps to yield statistically reliable results. Jones and Mandadapu have performed a rigorous error analysis on the estimates of Green-Kubo transport coefficients with the assumption that the current fluctuations follow a Gaussian process <cit.>. They found that the variance of κ is a monotonically increasing function of t_M, and arrived at an upper bound for the relative error, Err^(stat)[κ] < 2√(t_M/t_tot) = 2√(1/N_a), which depends only on the number of trajectory segments of length t_M. As a consequence, the statistics become worse when the system has longer correlation times, and there is no way of controlling the intrinsic variance of the observable. On the other hand, in the large deviation method, the relative error in the large deviation function is Err^(stat)[ψ(λ)] = (1/ψ(λ))√(ψ^”(λ)/N_w) = (1/λ^2)√(2/(L N_w)) for |λ| > 0, which depends not only on the number of samples, in this case N_w, but also on λ and L. In general, as λ increases, the walkers will become more correlated. However, within the regime of linear response, or to first order in λ, the number of uncorrelated walkers should be N_w. Because the large deviation function, ψ(λ), scales as λ^2 while its second derivative, ψ^”(λ), has no dependence on λ, the relative size of the fluctuations can be tuned by changing λ away from 0. This is verified in Figure <ref>, where increased λ generates an order of magnitude reduction in the statistical error relative to the Green-Kubo calculation. This decrease in the statistical error is also confirmed for a series of λ's. This tunability afforded by the large deviation function calculation is the same advantage afforded by direct simulation of transport processes, where the relative size of fluctuations is determined by the size of the average current produced by driving the system away from equilibrium. Instead of evaluating κ from the large deviation function directly, we could have derived it from the change in the average current produced at a given λ. However, in such a case, the relative error would only scale as |λ| rather than λ^2. § CONCLUSIONS In this paper, we have explored the possibility of calculating transport coefficients from a large deviation function, or a path ensemble free energy. The robustness of our method is tested on a variety of model systems ranging in composition and complexity of molecular interactions. Our method is general, and we expect the addition of importance sampling to be beneficial in instances where statistical errors are dominant. More precisely, our analysis shows that the systematic errors for both the Green-Kubo calculation and the large deviation calculation are asymptotically the same if the time correlation function decays faster than 1/t^2. If the correlation function decays more slowly, then there will be a larger systematic error for the large deviation function calculation that will need to be converged at large t_N. In such cases, the form of this error follows from Eq. <ref> and scales as (ln t_N)/t_N.
Such slow decay is expected for low-dimensional systems where the current includes hydrodynamic modes. Our analysis of the relative statistical errors between the Green-Kubo and the large deviation function calculations shows that our method requires generically fewer statistically uncorrelated samples for comparable statistical accuracy. This is a consequence of the importance sampling employed. The magnitude of this statistical efficiency, defined as the ratio of the numbers of independent samples needed for a given error (N_a/N_w), increases linearly with the size of the transport coefficient, L, and increases rapidly with increasing bias, as λ^4. While we have considered only linear response coefficients, our method can be easily extended to the nonlinear regime or to off-diagonal entries in the Onsager matrix, where Green-Kubo formulas are even more cumbersome to evaluate and few direct methods exist or can be formulated. These extensions are possible since the diffusion Monte Carlo algorithm is capable of sampling rare fluctuations in the non-Gaussian tails of the distribution. Moreover, it is also possible to probe the response around nonequilibrium steady states, as the method presented here does not rely on an underlying Boltzmann distribution. D.T.L. and C.Y.G. were supported by the UC Berkeley College of Chemistry. The authors would like to thank Ushnish Ray for useful discussions and for developing the use of LAMMPS with the CANSS package, available at https://github.com/ushnishray/CANSS, for the calculation of nonequilibrium properties of complex systems. [Green(1954)]green1954markoff Green, M.S. Markoff random processes and the statistical mechanics of time-dependent phenomena. II. Irreversible processes in fluids. J. Chem. Phys. 1954, 22, 398–413.[Kubo(1957)]kubo1957statistical1 Kubo, R. Statistical-mechanical theory of irreversible processes. I. General theory and simple applications to magnetic and conduction problems. J. Phys. Soc. Jpn. 1957, 12, 570–586.[Levesque et al.(1973)Levesque, Verlet, and Kürkijarvi]levesque1973computer Levesque, D.; Verlet, L.; Kürkijarvi, J. Computer "experiments" on classical fluids. IV. Transport properties and time-correlation functions of the Lennard-Jones liquid near its triple point. Phys. Rev. A 1973, 7, 1690.[Schelling et al.(2002)Schelling, Phillpot, and Keblinski]schelling2002comparison Schelling, P.K.; Phillpot, S.R.; Keblinski, P. Comparison of atomic-level simulation methods for computing thermal conductivity. Phys. Rev. B 2002, 65, 144306.[Galamba et al.(2004)Galamba, Nieto de Castro, and Ely]galamba2004thermal Galamba, N.; Nieto de Castro, C.A.; Ely, J.F. Thermal conductivity of molten alkali halides from equilibrium molecular dynamics simulations. J. Chem. Phys. 2004, 120, 8676–8682.[Jones and Mandadapu(2012)]jones2012adaptive Jones, R.E.; Mandadapu, K.K. Adaptive Green-Kubo estimates of transport coefficients from molecular dynamics based on robust error analysis. J. Chem. Phys. 2012, 136, 154102.[Evans and Streett(1978)]evans1978transport Evans, D.J.; Streett, W.B. Transport properties of homonuclear diatomics: II. Dense fluids. Mol. Phys. 1978, 36, 161–176.[Hess(2002)]hess2002determining Hess, B. Determining the shear viscosity of model liquids from molecular dynamics simulations. J. Chem. Phys. 2002, 116, 209–217.[Tenenbaum et al.(1982)Tenenbaum, Ciccotti, and Gallico]tenenbaum1982stationary Tenenbaum, A.; Ciccotti, G.; Gallico, R. Stationary nonequilibrium states by molecular dynamics. Fourier's law. Phys. Rev.
A 1982, 25, 2778.[Baranyai and Cummings(1999)]baranyai1999steady Baranyai, A.; Cummings, P.T. Steady state simulation of planar elongation flow by nonequilibrium molecular dynamics. J. Chem. Phys. 1999, 110, 42–45.[Hoover et al.(1980)Hoover, Evans, Hickman, Ladd, Ashurst, and Moran]hoover1980lennard Hoover, W.G.; Evans, D.J.; Hickman, R.B.; Ladd, A.J.C.; Ashurst, W.T.; Moran, B. Lennard-Jones triple-point bulk and shear viscosities. Green-Kubo theory, Hamiltonian mechanics, and nonequilibrium molecular dynamics. Phys. Rev. A 1980, 22, 1690.[Evans(1982)]evans1982homogeneous Evans, D.J. Homogeneous NEMD algorithm for thermal conductivity - Application of non-canonical linear response theory. Phys. Lett. A 1982, 91, 457–460.[Mandadapu et al.(2009)Mandadapu, Jones, and Papadopoulos]mandadapu2009homogeneous Mandadapu, K.K.; Jones, R.E.; Papadopoulos, P. A homogeneous nonequilibrium molecular dynamics method for calculating thermal conductivity with a three-body potential. J. Chem. Phys. 2009, 130, 204106. [Müller-Plathe et al.(1997)Müller-Plathe, Florian]muller1997simple Müller-Plathe, F. A simple nonequilibrium molecular dynamics method for calculating the thermal conductivity. J. Chem. Phys. 1997, 106, 6082–6085. [Zhou et al.(2009)Zhou, Aubry, Jones, Greenstein, and Schelling]zhou2009towards Zhou, X.W.; Aubry, S.; Jones, R.E.; Greenstein, A.; Schelling, P.K. Towards more accurate molecular dynamics calculation of thermal conductivity: Case study of GaN bulk crystals. Phys. Rev. B 2009, 79, 115201.[Tuckerman et al.(1997)Tuckerman, Mundy, Balasubramanian, and Klein]tuckerman1997modified Tuckerman, M.E.; Mundy, C.J.; Balasubramanian, S.; Klein, M.L. Modified nonequilibrium molecular dynamics for fluid flows with energy conservation. J. Chem. Phys. 1997, 106, 5615–5621.[Tenney and Maginn(2010)]tenney2010limitations Tenney, C.M.; Maginn, E.J. Limitations and recommendations for the calculation of shear viscosity using reverse nonequilibrium molecular dynamics. J. Chem. Phys. 2010, 132, 014103.[Geissler and Dellago(2004)]geissler2004equilibrium Geissler, P.L.; Dellago, C. Equilibrium time correlation functions from irreversible transformations in trajectory space. J. Phys. Chem. B 2004, 108, 6667–6672.[Touchette(2009)]touchette2009large Touchette, H. The large deviation approach to statistical mechanics. Phys. Rep. 2009, 478, 1–69.[Touchette(2017)]touchette2017introduction Touchette, H. Introduction to dynamical large deviations of Markov processes. arXiv:1705.06492 2017.[Jarzynski(1997)]jarzynski1997nonequilibrium Jarzynski, C. Nonequilibrium equality for free energy differences. Phys. Rev. Lett. 1997, 78, 2690.[Crooks(1999)]crooks1999entropy Crooks, G.E. Entropy production fluctuation theorem and the nonequilibrium work relation for free energy differences. Phys. Rev. E 1999, 60, 2721.[Barato and Seifert(2015)]barato2015thermodynamic Barato, A.C.; Seifert, U. Thermodynamic uncertainty relation for biomolecular processes. Phys. Rev. Lett. 2015, 114, 158101.[Gingrich et al.(2016)Gingrich, Horowitz, Perunov, and England]gingrich2016dissipation Gingrich, T.R.; Horowitz, J.M.; Perunov, N.; England, J.L. Dissipation bounds all steady-state current fluctuations. Phys. Rev. Lett. 2016, 116, 120601.[Gaspard(2013)]gaspard2013multivariate Gaspard, P. Multivariate fluctuation relations for currents. New J. Phys. 2013, 15, 115014.[Andrieux and Gaspard(2004)]andrieux2004fluctuation Andrieux, D.; Gaspard, P. Fluctuation theorem and Onsager reciprocity relations. J. Chem. Phys. 
2004, 121, 6167–6174.[Andrieux and Gaspard(2007)]andrieux2007fluctuation Andrieux, D.; Gaspard, P. A fluctuation theorem for currents and non-linear response coefficients. J. Stat. Mech. Theor. Exp. 2007, P02006.[Dellago et al.(1998)Dellago, Bolhuis, Csajka, and Chandler]dellago1998transition Dellago, C.; Bolhuis, P.G.; Csajka, F.S.; Chandler, D. Transition path sampling and the calculation of rate constants. J. Chem. Phys. 1998, 108, 1964–1977.[Geissler et al.(1999)Geissler, Dellago, and Chandler]geissler1999kinetic Geissler, P.L.; Dellago, C.; Chandler, D. Kinetic pathways of ion pair dissociation in water. J. Phys. Chem. B 1999, 103, 3706–3710.[Geissler et al.(2001)Geissler, Dellago, Chandler, Hutter, and Parrinello]geissler2001autoionization Geissler, P.L.; Dellago, C.; Chandler, D.; Hutter, J.; Parrinello, M. Autoionization in liquid water. Science 2001, 291, 2121–2124.[Bolhuis et al.(2002)Bolhuis, Chandler, Dellago, and Geissler]bolhuis2002transition Bolhuis, P.G.; Chandler, D.; Dellago, C.; Geissler, P.L. Transition path sampling: Throwing ropes over rough mountain passes, in the dark. Annu. Rev. Phys. Chem. 2002, 53, 291–318.[Radhakrishnan and Schlick(2004)]radhakrishnan2004orchestration Radhakrishnan, R.; Schlick, T. Orchestration of cooperative events in DNA synthesis and repair mechanism unraveled by transition path sampling of DNA polymerase β's closing. Proc. Natl. Acad. Sci. U.S.A. 2004, 101, 5970–5975.[Basner and Schwartz(2005)]basner2005enzyme Basner, J.E.; Schwartz, S.D. How enzyme dynamics helps catalyze a reaction in atomic detail: a transition path sampling study. J. Am. Chem. Soc. 2005, 127, 13822–13831.[Hagan and Chandler(2006)]hagan2006dynamic Hagan, M.F.; Chandler, D. Dynamic pathways for viral capsid assembly. Biophys. J. 2006, 91, 42–54.[Peters(2010)]peters2010recent Peters, B. Recent advances in transition path sampling: accurate reaction coordinates, likelihood maximisation and diffusive barrier-crossing dynamics. Mol. Simul. 2010, 36, 1265–1281.[Limmer et al.(2014)Limmer and Chandler]limmer2014theory Limmer, D.T.; Chandler, D. Theory of amorphous ices. Proc. Natl. Acad. Sci. U.S.A. 2014, 111, 9413–9418.[Giardina et al.(2006)Giardina, Kurchan, and Peliti]giardina2006direct Giardina, C.; Kurchan, J.; Peliti, L. Direct evaluation of large-deviation functions. Phys. Rev. Lett. 2006, 96, 120603.[Giardina et al.(2011)Giardina, Kurchan, Lecomte, and Tailleur]giardina2011simulating Giardina, C.; Kurchan, J.; Lecomte, V.; Tailleur, J. Simulating rare events in dynamical processes. J. Stat. Phys. 2011, 145, 787–811.[Nemoto et al.(2016)Nemoto, Bouchet, Jack, and Lecomte]nemoto2016population Nemoto, T.; Bouchet, F.; Jack, R.L.; Lecomte, V. Population-dynamics method with a multicanonical feedback control. Phys. Rev. E 2016, 93, 062123.[Klymko et al.(2017)Klymko, Geissler, Garrahan, and Whitelam]klymko2017rare Klymko, K.; Geissler, P.L.; Garrahan, J.P.; Whitelam, S. Rare behavior of growth processes via umbrella sampling of trajectories. arXiv:1707.00767 2017.[Ray et al.(2017)Ray, Chan, and Limmer]ray2017exact Ray, U.; Chan, G.K.-L.; Limmer, D.T. Exact fluctuations of nonequilibrium steady states from approximate auxiliary dynamics. arXiv:1708.09482 2017.[Onsager(1931)]onsager1931reciprocal Onsager, L. Reciprocal relations in irreversible processes. I. Phys. Rev. 1931, 37, 405.[Palmer and Speck(2017)]palmer2017thermodynamic Palmer, T.; Speck, T. Thermodynamic formalism for transport coefficients with an application to the shear modulus and shear viscosity. J. Chem. Phys. 
2017, 146, 124130.[Abascal and Vega(2005)]abascal2005general Abascal, J.L.F.; Vega, C. A general purpose model for the condensed phases of water: TIP4P/2005. J. Chem. Phys. 2005, 123, 234505.[Weeks et al.(1971)Weeks, Chandler, and Andersen]weeks1971role Weeks, J.D.; Chandler, D.; Andersen, H.C. Role of repulsive forces in determining the equilibrium structure of simple liquids. J. Chem. Phys. 1971, 54, 5237–5247.[Chandler(1987)]chandler1987introduction Chandler, D. Introduction to Modern Statistical Mechanics; Oxford University Press: London, UK, 1987.[Morriss and Evans(2013)]morriss2013statistical Morriss, G.P.; Evans, D.J. Statistical Mechanics of Nonequilbrium Liquids; ANU Press, 2013.[Lebowitz and Spohn(1999)]lebowitz1999gallavotti Lebowitz, J.L.; Spohn, H. A Gallavotti–Cohen-type symmetry in the large deviation functional for stochastic dynamics. J. Stat. Phys. 1999, 95, 333–365.[Helfand(1960)]helfand1960transport Helfand, E. Transport coefficients from dissipation in a canonical ensemble. Phys. Rev. 1960, 119, 1.[Foulkes et al.(2001)Foulkes, Mitas, Needs, and Rajagopal]foulkes2001quantum Foulkes, W.M.C.; Mitas, L.; Needs, R.J.; Rajagopal, G. Quantum Monte Carlo simulations of solids. Rev. Mod. Phys. 2001, 73, 33.[Hurtado and Garrido(2011)]hurtado2011spontaneous Hurtado, P.I.; Garrido, P.L. Spontaneous symmetry breaking at the fluctuating level. Phys. Rev. Lett. 2011, 107, 180601.[Garrahan et al.(2009)Garrahan, Jack, Lecomte, Pitard, van Duijvendijk, and van Wijland]garrahan2009first Garrahan, J.P.; Jack, R.L.; Lecomte, V.; Pitard, E.; van Duijvendijk, K.; van Wijland, F. First-order dynamical phase transition in models of glasses: an approach based on ensembles of histories. J. Phys. A: Math. Theor. 2009, 42, 075007.[Bodineau et al.(2012)Bodineau, Lecomte, and Toninelli]bodineau2012finite Bodineau, T.; Lecomte, V.; Toninelli, C. Finite size scaling of the dynamical free-energy in a kinetically constrained model. J. Stat. Phys. 2012, 147, 1–17.[Allen et al.(2009)Allen, Valeriani, and ten Wolde]allen2009forward Allen, R.J.; Valeriani, C.; ten Wolde, P.R. Forward flux sampling for rare event simulations. J. Phys. Condens. Matter 2009, 21, 463102.[Frenkel and Smit(2001)]frenkel2001understanding Frenkel, D.; Smit, B. Understanding molecular simulation: from algorithms to applications; Vol. 1, Academic press,2001.[Hidalgo et al.(2017)Hidalgo, Nemoto, and Lecomte]hidalgo2017finite Nemoto, T.; Hidalgo, E.G.; Lecomte, V. Finite-time and finite-size scalings in the evaluation of large-deviation functions: Numerical approach in continuous time. Phys. Rev. E 2017, 95, 062134.[Plimpton(1995)]plimpton1995fast Plimpton, S. Fast parallel algorithms for short-range molecular dynamics. J. Comput. Phys. 1995, 117, 1–19.[Ray et al.(2017)Ray, Chan, and Limmer]ray2017importance Ray, U.; Chan, G.K.-L.; Limmer, D.T. Importance sampling large deviations in nonequilibrium steady states: Part 1. arXiv:1708.00459 2017.[González and Abascal(2010)]gonzalez2010shear González, M.A.; Abascal, J.L.F. The shear viscosity of rigid water models. J. Chem. Phys. 2010, 132, 096101.[Nosé(1984)]nose1984unified Nosé, S. A unified formulation of the constant temperature molecular dynamics methods. J. Chem. Phys. 1984, 81, 511–519.[Ryckaert et al.(1977)Ryckaert, Ciccotti, and Berendsen]ryckaert1977numerical Ryckaert, J.P.; Ciccotti, G.; Berendsen, H.J.C. Numerical integration of the cartesian equations of motion of a system with constraints: molecular dynamics of n-alkanes. J. Comput. Phys. 
1977, 23, 327–341.[Sendner et al.(2009)Sendner, Horinek, Bocquet, and Netz]sendner2009interfacial Sendner, C.; Horinek, D.; Bocquet, L.; Netz, R.R. Interfacial water at hydrophobic and hydrophilic surfaces: Slip, viscosity, and diffusion. Langmuir 2009, 25, 10768–10781.[Petravic and Harrowell(2007)]petravic2007equilibrium Petravic, J.; Harrowell, P. On the equilibrium calculation of the friction coefficient for liquid slip against a wall. J. Chem. Phys. 2007, 127, 174706.[Huang and Szlufarska(2014)]huang2014green Huang, K.; Szlufarska, I. Green-Kubo relation for friction at liquid-solid interfaces. Phys. Rev. E 2014, 89, 032119.[Bocquet and Barrat(2013)]bocquet2013green Bocquet, L.; Barrat, J.L. On the Green-Kubo relationship for the liquid-solid friction coefficient. J. Chem. Phys. 2013, 139, 044704.[Alder and Wainwright(1970)]alder1970decay Alder, B.J.; Wainwright, T.E. Decay of the velocity autocorrelation function. Phys. Rev. A 1970, 1, 18.[Wainwright et al.(1971)Wainwright, Alder, and Gass]wainwright1971decay Wainwright, T.; Alder, B.; Gass, D. Decay of time correlations in two dimensions. Phys. Rev. A 1971, 4, 233.[Isobe(2008)]isobe2008long Isobe, M. Long-time tail of the velocity autocorrelation function in a two-dimensional moderately dense hard-disk fluid. Phys. Rev. E 2008, 77, 021201.[Che et al.(2000)Che, Cagin, and Goddard III]che2000thermal Che, J.; Cagin, T.; Goddard III, W.A. Thermal conductivity of carbon nanotubes. Nanotechnology 2000, 11, 65.
http://arxiv.org/abs/1709.09187v3
{ "authors": [ "Chloe Ya Gao", "David T. Limmer" ], "categories": [ "cond-mat.stat-mech", "cond-mat.soft", "physics.chem-ph" ], "primary_category": "cond-mat.stat-mech", "published": "20170926180047", "title": "Transport Coefficients from Large Deviation Functions" }
http://arxiv.org/abs/1709.09014v1
{ "authors": [ "Reza Yazdanpanah Abdolmalaki" ], "categories": [ "cs.SY" ], "primary_category": "cs.SY", "published": "20170926134746", "title": "Implementation of Fuzzy Inference Engine for equilibrium and roll-angle tracking of riderless bicycle" }
http://arxiv.org/abs/1709.09417v2
{ "authors": [ "R. Jafari", "Henrik Johannesson" ], "categories": [ "cond-mat.stat-mech", "cond-mat.str-el", "quant-ph" ], "primary_category": "cond-mat.stat-mech", "published": "20170927094437", "title": "Decoherence from spin environments: Loschmidt echo and quasiparticle excitations" }
On the Circuit Diameter of some Combinatorial Polytopes Hernán A. Gonzá[email protected] Daniel [email protected] Wout [email protected] [email protected] December 30, 2023 ==================================================================================================================================================================================== The combinatorial diameter of a polytope P is the maximum value of a shortest path between two vertices of P, where the path uses the edges of P only. In contrast to the combinatorial diameter, the circuit diameter of P is defined as the maximum value of a shortest path between two vertices of P, where the path uses potential edge directions of P, i.e., all edge directions that can arise by translating some of the facets of P. In this paper, we study the circuit diameter of polytopes corresponding to classical combinatorial optimization problems, such as the Matching polytope, the Traveling Salesman polytope and the Fractional Stable Set polytope. § INTRODUCTION For a polytope P ⊆ℝ^d, the 1-skeleton of P is the graph given by the set of vertices (0-dimensional faces) of P, and the set of edges (1-dimensional faces) of P. The combinatorial diameter of P is the maximum shortest path distance between two vertices in this graph. Giving bounds on the combinatorial diameter of polytopes is a central question in discrete mathematics and computational geometry. Combinatorial diameter is fundamental to the theory of linear programming due to the long-standing open question about the existence of a pivoting rule that yields a polynomial runtime for the Simplex algorithm. Indeed, existence of such a pivoting rule requires a general polynomial bound on the combinatorial diameter of a polytope. The most famous conjecture in this context is the Hirsch Conjecture, proposed in 1957, which states that the combinatorial diameter of any d-dimensional polytope with f facets is at most f-d. While this conjecture has been disproved for both unbounded polytopes <cit.> and bounded ones <cit.>, its polynomial version is still open, i.e., it is not known whether there is some polynomial function of f and d which upper bounds the combinatorial diameter in general. Currently the best known upper bound on the diameter is exponential in d <cit.>. Recently, researchers started investigating whether the bound f-d is a valid upper bound for some different (more powerful) notions of diameter for polytopes. The present work is concerned with one such notion of diameter: the circuit diameter of a polytope, formalized by Borgwardt et al. <cit.>. Given a polytope of the form P={x∈ℝ^n : Ax=b, Bx≤d} for some rational matrices A and B and rational vectors b and d, the circuits of P are the set of potential edge directions that can arise by varying b and d (see Section <ref> for a formal definition). Starting from a point in P one is allowed to move along any circuit direction until the boundary of P is reached (see Section <ref> for a formal definition). Since for every polytope the set of circuit directions contains all edge directions, the combinatorial diameter is always an upper bound on the circuit diameter. Thus even if the Hirsch Conjecture does not hold for the combinatorial diameter, its analogue may be true for the circuit diameter. In particular, Borgwardt et al. <cit.> conjectured that the circuit diameter is at most f-d for every d-dimensional polytope with f facets.
We refer the reader to <cit.> for recent progress on this conjecture.Besides studies of upper bounds on combinatorial diameter for general polytopes, there is a long history of studies of such upper bounds for some special classes of polytopes. In particular, many researchers have investigated the combinatorial diameter of polytopes corresponding to classical combinatorial optimization problems. Prominent examples of these polytopes for which the combinatorial diameter have been widely studied are Transportation and Network Flow polytopes <cit.>, Matching polytopes <cit.>,Traveling Salesman (TSP) polytopes <cit.>,and many others. In this context, there are some questions and conjectures regarding the tightness of the developed bounds which are open, and it is natural to investigate them using a more powerful notion of diameter, like the circuit diameter. The authors of <cit.> gave upper bounds on the circuit diameter of Dual Transportation polytopes on bipartite graphs, and later in <cit.> gave upper bounds on the circuit diameter of Dual Network flow polytopes.Our results. In this paper, we study the circuit diameter of the Matching polytope, the Perfect Matching polytope,the TSP polytope, and the Fractional Stable Set polytope. Our first result (in Section <ref>) is an exact characterization of the circuit diameter of the Matching polytope (resp., Perfect Matching polytope), which is the convex hull of characteristic vectors of matchings (resp., perfect matchings) in a complete graph with n nodes. In particular, it is well-known that the combinatorial diameter of the Matching polytope equals ⌊n/2⌋ <cit.>. In Section <ref>, we show that the circuit diameter of the Matching polytope is upper bounded by a constant in contrast to the combinatorial diameter. In particular, we show that the circuit diameter of the Matching polytope equals 2 for all n ≥ 7.To this aim, we show that for any two different matchings such that one is not contained in the other, the corresponding two vertices are one circuit step away from each other or the corresponding vertices have a common neighbour vertex in the Matching polytope, and therefore their circuit distance is always at most 2.For the Perfect Matching polytope, we show that if n≠ 8 the circuit diameter is 1; and if n=8 the circuit diameter is 2. In contrast, the combinatorial diameter of the Perfect Matching polytope is known to be 2 for all n≥ 8 <cit.>.In Section <ref>, we give an exact characterization of the circuit diameter of the TSP polytope, which is the convex hull of all tours (i.e., Hamiltonian cycles) in a complete graph with n nodes.It is known that the combinatorial diameter of the TSP polytope is at most 4 <cit.>. In fact, Grötschel and Padberg conjectured in <cit.> that the combinatorial diameter of the TSP polytope is at most 2, and this conjecture is still open after more than 30 years. In Section <ref>, we show that this conjecture holds for the circuit diameter.In fact, the circuit diameter of the TSP polytope equals 1 whenever n≠ 5; while for n=5 the circuit diameter is 2. This result is proven by showing that for every two tours in a complete graph, the corresponding vertices are one circuit step from each other whenever n>5. Note that no linear description of the TSP polytope is known for general graphs. We achieve the above results for the TSP polytope by using only two famous classes of its facets: namely, subtour inequalities and (certain) comb inequalities <cit.>. 
Finally, we consider the Fractional Stable Set polytope in Section <ref>. This is the polytope given by the standard LP relaxation of the stable set problem for a graph G with n nodes. The Fractional Stable Set polytope was widely studied. In particular, it is known that this polytope is half-integral <cit.>, and that the vertices of this polytope have a nice graph interpretation: namely, they can be mapped to subgraphs of G with all connected components being trees and 1-trees[A 1-tree is a tree plus one edge between two nodes spanned by the tree.] <cit.>.This graphical interpretation of vertices was used in <cit.> to prove that the combinatorial diameter of the Fractional Stable Set polytope is upper bounded by n.In Section <ref>, we provide a characterization for circuits of this polytope. Specifically, we show that every circuit corresponds to a connected (non necessarily induced) bipartite subgraph of G. Our characterization allows us to show that the circuit diameter of the Fractional Stable Set polytope can be essentially upper bounded by the diameter of the graph G, which is significantly smaller than n in many graphs. § PRELIMINARIES Let P be a polytope of the form P={x∈^n:Ax=b,Bx≤d} for rational matrices A and B and rational vectors b and d.Let (A) denote the kernel of A i.e., (A):={ y∈^n : A y= 0}. Furthermore, we denote by(x) the support of a vector x.When talking about the circuit diameter of a polytope P, unless specified we assumethat the system of inequalities describing P is minimal with respect to its constraints i.e., each inequality of the above system defines a facet of P. Note that in contrast to the combinatorial diameter, the circuit diameter depends on the linear description of a polytope.In fact, redundant inequalities might become facet-defining after translating the corresponding hyperplanes.A non-zero vector g ∈^n is a circuit of Pif * g ∈(A) * (Bg) is not contained in any of the sets from the collection {(By):y ∈(A),y≠ 0}. (i.e., B g is support-minimal in the collection {By :y ∈(A),y ≠ 0 })Note that if c is a circuit of P, so is -c. Given the notion of circuits, we can formally define circuit steps, circuit walks, and circuit distance.Given x' ∈ P, we say that x”∈ P is one circuit step from x', if x”=x'+αc where c is a circuit of P and α>0 is chosen to be as large as possible so that x'+αc∈ P. Note that this definition does not specify that x' or x” are vertices of P. Given two points x' and x” in P, a circuit walk from x' to x” is a sequence of points in P, x'=z^0, z^1,⋯,z^l-1,z^l=x”, where z^i is one circuit step from z^i-1, for all i=1,⋯,l.We say such a circuit walk has length l.Given two points x' and x” in P, the circuit distance from x' to x”, called (x',x”), is the length of a shortest circuit walk from x' to x”. Note that from the latter two definitions, it follows that a circuit walk from x' to x” might not always be reversible. For example, let two points x' and x” be such that x” is one circuit step from x' i.e., we have that x”=x'+αc and α>0 is as large as possible so that x'+αc∈ P. However, it may be the case that x”+α'(-c)∈ P for some α' such that α'>α; and so x' is not one circuit step from x”.Therefore, it may be the case that (x',x”)≠(x”,x'). We refer to <cit.> for an extensive discussion about circuit distance.Given a polytope P, the circuit diameter of P, or (P), is the maximum circuit distance between any pair of vertices of P. 
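To make the circuit-step mechanics concrete, the following is a minimal numerical sketch (our own illustration, not part of the paper's formal development) of one maximal circuit step. It assumes Python with numpy, a polytope given by the inequalities Bx ≤ d (the equations Ax = b need no check along a circuit, since Ac = 0 by definition), and a feasibility tolerance of our own choosing; the function name circuit_step is ours.

import numpy as np

def circuit_step(x, c, B, d, tol=1e-9):
    # One maximal step from a feasible x along a circuit direction c
    # inside {y : B y <= d}; the equations A y = b are preserved
    # automatically because A c = 0 for every circuit.
    slack = d - B @ x                  # componentwise slack, >= 0 at x
    rate = B @ c                       # how fast each slack is consumed
    blocking = rate > tol              # only these rows can become tight
    if not np.any(blocking):
        raise ValueError("direction is unbounded, so the set is not a polytope")
    alpha = np.min(slack[blocking] / rate[blocking])   # standard ratio test
    return x + alpha * c

Iterating circuit_step along a chosen sequence of circuits produces exactly a circuit walk in the sense of the definitions above; note that, as remarked, reversing the sequence need not yield a valid walk back.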
Given a system of linear equations {Ax =0 , Bx = 0}, we say that a vectorc is a unique (up to scaling) solution of the system, if every vector y satisfying Ay =0 , By = 0 is of the form y = λc for some λ∈.The following proposition gives an alternative definition of circuits, that will be useful later. It is an easy corollary of the results in <cit.>, we report a proof here for completeness. Given a polytope P={x∈^n: Ax =b , Bx≤d}, a non-zero vector c∈^n is a circuit if and only if c isa unique (up to scaling) non-zero solution of {A y=0 , B'y=0} where B' is a submatrix of B.Let us be given a non-zero vector c such that Ac=0. Let B' be the maximal (with respect to the number of rows) submatrix of B such that B'c=0. Since P is a polytope the block matrix[ A; B ]has full column rank. Hence, there exists no non-zero vector d, Ad=0, (Bd)⊂(Bc) only if there isa unique (up to scaling) non-zero solution of {A y=0 , B'y=0}.Now, let B' be a submatrix of B such that the system Ay=0 ,B'y=0 has a unique (up to scaling) non-zero solution c.Suppose for the sake of contradiction that c is not a circuit of P.Then there exists a non-zero vector d such that Ad=0 and (Bd)⊂(Bc).In particular, this means that Ad=0 ,B'd=0. Hence d is a scaling of c; and thus c is a circuit as desired. The next lemma will be used in Section <ref> to study the circuit diameter of polytopes with linear descriptions, where the coefficients in each inequality are all non-negative or all non-positive. Let Q ⊆^n be a polytope of the form Q:={x∈^n:Ax≤b,Bx≤d}, where all entries of A are non-negative and all entries of B are non-positive.Then every circuit c∈^n of Q with c≥0 or c≤0 has exactly one non-zero coordinate. Suppose that c is a circuit of Q which has at least two non-zero coordinates.We may assume that c≥0, as the case where c≤0 is identical.Then by Proposition <ref>, c is the unique (up to scaling) non-zero solution of A'y=0, B'y=0 where A', B' are some submatrices of A, B respectively.Note that since all entries of A' and c are non-negative and A'c=0, we have that for every i∈(c) the i-th column of A' equals 0. Analogously, for every i∈(c) the i-th column of B' equals 0.Let i be any index such that c_i >0. Define the vector d asd_j:= 1if j=i 0otherwise .Then d is also a solution to A'y=0, B'y=0 and is not a scaling of c, contradicting that c is a circuit.§ MATCHING POLYTOPEThe Matching polytope is defined as the convex hull of all characteristic vectors of matchings in a complete graph i.e.,P_(n):={χ(M) :Mis a matching inK_n } ,where K_n=(V,E) denotes a complete graph with n nodes; and χ(M) ∈{0,1}^E denotes the characteristic vector of a matching M. The linear description of the Matching polytope is well-known and is due to Edmonds <cit.>. In particular, the following linear system constitutes a minimal linear description of P_(n)x(E[S])≤ (|S|-1)/2 for allS ⊆ V,|S|is odd,|S|≥ 3x(δ(v))≤ 1 for allv∈ V x≥0 ,where E[S] denotes the set of edges with both endpoints in S; δ(v) denotes the set of edges with one endpoint being v; and x(F) denotes the sum ∑_e ∈ Fx_e for F⊆ E. The combinatorial diameter of the Matching polytope P_(n) equals ⌊ n/2 ⌋ forall n≥ 2 <cit.>. Our next theorem provides the value of the circuit diameter of the Matching polytope P_(n) for all possible n. In particular,it shows that the circuit diameter of the Matching polytope is substantially smaller than the combinatorial diameter. For the Matching polytope we have: (P_(n))= 1n=2,3 2n=4,5 3n=6 2n≥ 7 .The rest of the section is devoted to proving Theorem <ref>. 
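Before turning to the proof, we note that Proposition <ref> yields a direct numerical test for circuits, convenient for checking the small cases of the theorem by computer. The sketch below is ours (Python with numpy/scipy; rank decisions are tolerance-based): by the argument in the proposition's proof, it suffices to take B' as the maximal submatrix of B with B'c = 0 and verify that the kernel of the stacked system is one-dimensional.

import numpy as np
from scipy.linalg import null_space

def is_circuit(c, A, B, tol=1e-9):
    # c must lie in ker(A), and the rows of B orthogonal to c (the
    # maximal submatrix B' with B'c = 0) must pin c down uniquely,
    # i.e. the kernel of [A; B'] must be one-dimensional.
    if np.linalg.norm(A @ c) > tol:
        return False
    B_tight = B[np.abs(B @ c) <= tol]
    return null_space(np.vstack([A, B_tight])).shape[1] == 1

Here A may have zero rows (as for the Matching polytope, whose minimal description contains no equations), in which case one passes an empty (0 x n) array.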
We first recallthe characterization of adjacency of vertices of the Matching polytope. In this paper, we use symbol Δ to represent the symmetric difference operator. Consider matchings M_1, M_2 in K_n, n≥ 2.χ(M_1) and χ(M_2) are adjacent vertices of P_(n) if and only if (V, M_1 M_2) has a single non-trivial connected component[Trivial components are components consisting of a single node.]. The above lemma has a straightforward corollary. Consider matchings M_1, M_2 in K_n, n≥ 2. If (V, M_1 M_2) has a single non-trivial connected component, then c:=χ(M_1)-χ(M_2) is a circuit of P_(n).The next lemma shows that the set of circuits of the Matching polytope is much richer than the set of its edge directions. In particular, it shows that for two matchings to define a circuit their symmetric difference does not necessarily have to consist of one non-trivial component only. The circuit directions provided by this lemma will be extensively used to construct short circuit walks in the proof of Theorem <ref>. Consider matchings M_1, M_2 in K_n, such that M_1⊈M_2 and M_2⊈M_1. Then either (V,M_1 Δ M_2) contains at most two (possibly trivial) connected components, or c:=χ(M_1)-χ(M_2) is a circuit of P_(n).Suppose that (V, M_1 Δ M_2) contains at least three connected components. Let us assume for the sake of contradiction that c=χ(M_1)-χ(M_2) is not a circuit. Since c is not a circuit there exists a non-zero vector y such that (Dy) ⊂(Dc), where D denotes the constraint matrix of the minimal linear description (<ref>) for the Matching polytope.Since the inequalities x_e ≥ 0, e ∈ E are present in the minimal linear description (<ref>) and (Dy) ⊂(Dc), we have thaty_e =0 for every edge e such that c_e=0. Let e'={v_1,v_2} be an edge so that y_e'≠ 0. Let C' be the connected component of (V, M_1 Δ M_2) containing the edge e'. Without loss of generality, possibly using rescaling of the vector y, we can assume y_e'=1.By exchanging the roles of M_1 with M_2 if necessary, we can assume that c_e'=1. Note that C' is either a path or a cycle. Moreover, for all nodes v with degree two in C' we have c(δ(v))=0. Since (Dy) ⊂(Dc), we have that c(δ(v))=0 implies y(δ(v))=0, leading to y_e =c_e for all e ∈ C'.Now let e”={u_1,u_2} be an edge such that c_e” = -1. Note that such an edge e” exists since M_1⊈M_2 and M_2⊈M_1. Let C” be the connected component of (V,M_1 Δ M_2) containing the edge e”.Let us prove that y_e =c_e for all e ∈ C”. If C' and C” are the same connected component, then this readily follows from the previous paragraph. If not, let z be a node that belongs to a (possibly trivial) connected component C̃ of (V, M_1 Δ M_2) different from C' and C”. Let S:={z,u_1,u_2,v_1,v_2} and note that c(E(S))=0. Since (Dy) ⊂(Dc), we get y(E(S))=0, implying y_e” =c_e” =-1. As in the previous paragraph, C” is either a path or a cycle, and for all v ∈ V with degree two in C” we have c(δ(v))=0. Since (Dy) ⊂(Dc), necessarily y(δ(v))=0, implying y_e =c_e for all e ∈ C”.Now let e”'={w_1,w_2} be an edge not in C' and not in C”, but in the connected component C”' of(V,M_1 Δ M_2), such thatc_e”'≠ 0. If c_e”' =1 (resp. c_e”' =-1), then we take the set S:={u_1,u_2,z,w_1,w_2}, where z is not in C” and not in C”' (resp. S:={v_1,v_2,z,w_1,w_2}, where z is not in C' and not in C”'). Since c(E(S))=0 and (Dy) ⊂(Dc), we get that y(E(S))=0 holds. On the other side, y(E(S))=0 implies y_e”' =c_e”' =1 (resp. y_e”' = c_e”' =-1). Repeating this argument for all edges in the support of c we show that y=c, a contradiction. 
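A quick computational illustration of the lemma (our own, using Python with networkx; the helper name is ours): counting the components of (V, M_1 Δ M_2) immediately classifies which pairs of matchings give circuits.

import networkx as nx

def symdiff_component_count(n, M1, M2):
    # Components (trivial ones included) of (V, M1 Δ M2) on {0,...,n-1}.
    H = nx.Graph()
    H.add_nodes_from(range(n))
    H.add_edges_from(set(M1) ^ set(M2))
    return nx.number_connected_components(H)

# In K_7, M1 = {01, 23} and M2 = {45} satisfy M1 ⊈ M2 and M2 ⊈ M1, and
# the symmetric difference has four components (three disjoint edges
# plus the isolated node 6), so by the lemma chi(M1) - chi(M2) is a
# circuit of the Matching polytope.
print(symdiff_component_count(7, {(0, 1), (2, 3)}, {(4, 5)}))   # prints 4

Such circuits, whose supports span several components, are precisely the directions beyond edge directions that make the short walks in the following proof possible.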
With the above lemma at hand, we are ready to prove Theorem <ref>.(Proof of Theorem <ref>) The cases n=2 and n=3 are trivial. Indeed, P_(2) and P_(3) are simplices, and thus every two vertices of P_(2) and P_(3) form an edge. For n≥ 4, we consider an empty matching M_1 and a matching M_2 consisting of two edges to establish(P_(n))≥ 2 .Indeed, (χ(M_1),χ(M_2))≥ 2, because c:=χ(M_2)-χ(M_1) satisfies c≥ 0 and has two non-zero entries, and thus c is not a circuitby Lemma <ref>. Hence, the vertex χ(M_1) is not one circuit step away from the vertex χ(M_2), implying (P_(n))≥ 2. For n=6, the lower bound on the circuit diameter can be improved to the one below(P_(6))≥ 3 .Consider an empty matching M_1 and a perfect matching M_2. For a walk from χ(M_1) to χ(M_2) the first circuit step at the vertex χ(M_1)=0 corresponds to a circuit c with c≥0. Thus, by Lemma <ref> the first circuit step corresponds to c with exactly one non-zero coordinate.After the first circuit step we get a vertex χ(M'), where M' is a matching consisting of a single edge e. Let us prove that c':=χ(M_2)-χ(M') is not a circuit and thus (χ(M_1),χ(M_2))≥ 3.If e∈ M_2, the vector c' is not a circuit by Lemma <ref>. If e∉M_2,let g be the edge in M_2 having no common vertex with the edge e. Then the vector c' is not a circuit, since the vector D χ(g) has a smaller support than D c', where D is the constraint matrix of the linear description (<ref>) for P_(6). Hence, we showed that any circuit step from χ(M_1) will always end in a vertex χ(M'), which is at least two circuit steps from χ(M_2), implying (P_(6))≥ 3.Now let us prove the corresponding upper bounds for (P_(n)), n≥ 4. For n=4, n=5 and two matchings M_1 and M_2, (V,M_1 M_2) has at most two non-trivial connected components. This fact together with Corollary <ref> implies (M_1,M_2) ≤ 2. For n=6 and two matchings M_1 and M_2, (V,M_1 M_2) has at most three non-trivial connected components. Again, this fact together with Corollary <ref> implies (M_1,M_2) ≤ 3.For n≥ 7, consider the graph (V, M_1 Δ M_2) given by the symmetric difference of two matchings M_1 and M_2. If the symmetric difference contains one e ∈ M_1 and one e' ∈ M_2, then by Lemma <ref>andCorollary <ref>, (M_1,M_2)is at most 2.Otherwise, the subset F of edges of M_1 Δ M_2 satisfieseither F ⊆ M_1 or F ⊆ M_2.If |F| =2, the results again follows by Corollary <ref>. So assume |F|≥ 3.First, suppose F ⊆ M_2. Let e be any edge connecting two endpoints of two distinct edges in F, and let M̃ := M_1 ∪{e}. Clearly, (M_1,M̃) =1.Now we claim that c:=χ(M_2)-χ(M̃) is a circuit. Indeed, (V, M̃Δ M_2) has at least three connected component:one path of length 3 and either at least two other edges, or one other edge plus at least one trivial connected component consisting of a single node (since n≥ 7).In both cases, Lemma <ref> implies that c:=χ(M_2)-χ(M̃) is a circuit, leading to the result. Finally, suppose F ⊆ M_1. Similarly to the previous case, we set M̃ := M_2 ∪{e}. Then, by Lemma <ref> we get that χ(M̃)-χ(M_1) is a circuit, andbyCorollary <ref> we get thatχ(M_2) - χ(M̃) is a circuit, leading to the result. §.§ Perfect Matching Polytope Let us define the Perfect Matching polytopeP_(n):={χ(M) :Mis a perfect matching inK_n} ,where n≥ 4 and n is even. In <cit.>, Edmonds showed that the following linear system constitutes a minimal linear description of P_(n)x(δ(S))≥ 1 for allS ⊂ V,|S|is odd ,|S|≥ 3x(δ(v))= 1 for allv∈ V x≥ 0 . For the perfect matching polytope we have: (P_(n))= 1n=4,6 2n= 8 1n≥ 10 . 
The rest of this section is devoted to prove Theorem <ref>. First, let us recall the characterization of adjacency of the vertices of the Perfect Matching polytope.Consider perfect matchings M_1, M_2 in K_n, n≥ 2.χ(M_1) and χ(M_2) are adjacent vertices of P_(n) if and only if (V, M_1 M_2) has a single non-trivial connected component. The above lemma has a straightforward corollary. Consider perfect matchings M_1, M_2 in K_n, n≥ 2. If (V, M_1 M_2) has a single non-trivial connected component, then c:=χ(M_1)-χ(M_2) is a circuit of P_(n). The next lemma shows that every two different matchings define a circuitwhenever n≥ 10. The circuit directions provided by this lemma will be extensively used to construct short circuit walks in the proof of Theorem <ref>. The proof of Lemma <ref> uses ideas similar to the ones in the proof of Lemma <ref>.Consider two different perfect matchings M_1, M_2 in K_n, n≥ 10. Then c:=χ(M_1)-χ(M_2) is a circuit of P_(n). Let us assume for the sake of contradiction that c is not a circuit. Then there exists a non-zero vector y such that (Dy) ⊂(Dc), where D is the constraint matrix of (<ref>). Since the inequalities x_e ≥ 0, e ∈ E are in the minimal linear description (<ref>) and (Dy) ⊂(Dc), we have y_e =0 for every edge e such that c_e =0. Let e'={v_1,v_2} be such that y_e'≠ 0. Without loss of generality, possibly rescaling vector y we can assume y_e'=1. Let C' be the connected component of (V, M_1 Δ M_2) containing e'. By exchanging the roles of M_1 with M_2, we can assume c_e'=1. Moreover, for every node v we have c(δ(v))=0. Since (Dy) ⊂(Dc), we have y(δ(v))=0 for every node v. Since C' is an even cycle, y(δ(v))=0, v∈ V implies y_e =c_e for all edges e ∈ C'. In particular, for an edge f = {v_2, v_3}, f∈ (M_1 Δ M_2), which is different from the edge e', we have y_f = c_f=-1.Now let C” be a connected component of (V,M_1 Δ M_2), different from C'. Note that such C” exists since otherwise (V,M_1 Δ M_2) contains only one non-trivial connected component, implying that c is a circuit by Lemma <ref>. Lete”={u_1,u_2} be an edge in C” such that c_e”=-1. Again, since y(δ(v))=0 for every node v and since C” is an even cycle, there exists γ such that y_e=γc_e for every edge e in C”. Let z be a node that is not adjacent to any of the nodes u_1,u_2,v_1,v_2 in the graph (V,M_1 Δ M_2). Note that such a node exists, because each node in (V,M_1 Δ M_2) has degree exactly 2, and we have n>8. Also note that such node z is not equal to any of the nodesu_1,u_2,v_1,v_2, since {u_1,u_2} and {v_1,v_2} are edges in (V,M_1 Δ M_2). Let us define S:={z,u_1,u_2,v_1,v_2}. It is straightforward to check that c(δ(S))=0. Indeed, since (Dy) ⊂(Dc) and the constraint x(δ(S))≥ 1 is present in (<ref>), we have that y(δ(S))=0. On the other side, y(δ(S))= -2 - 2γ = 0,implying γ = 1 and therefore y_e =c_e for all e∈ C”. Repeating this argument for all non-trivial connected components of (V,M_1 Δ M_2), we get y=c, a contradiction.Now, with Lemma <ref> at hand, we are ready to prove Theorem <ref>. 
(Proof of Theorem <ref>) To show that the corresponding lower bounds for the circuit diameter hold, it is enough to show thatP_(8)≥ 2 .To show this, let us define two perfect matchings in the complete graph K_8 with the node set {v_1,…, v_8}M_1:={v_1v_2, v_3v_4, v_5v_6, v_7v_8} andM_2:={v_1v_4, v_3v_2, v_5v_8, v_7v_6} .The vector c:=χ(M_1)-χ(M_2) is not a circuit, since the vector Dc has a larger support than D(χ({v_1v_2, v_3v_4})-χ({v_1v_4, v_3v_2})), where D is the linear constraint matrix of the linear description ofP_(8). Hence, we have(P_(8))≥ 2 . Now let us prove the corresponding upper bounds for (P_(n)), n≥ 4. For n=4, n=6 and two perfect matchings M_1 and M_2, (V, M_1 M_2) has at most one non-trivial connected component. This fact together with Corollary <ref> implies (P_(n))≤ 1for n=4, n=6.For n=8 and two perfect matchings M_1 and M_2, (V,M_1 M_2) has at most two non-trivial connected components.Again, this fact together with Corollary <ref> implies (P_(8))≤ 2. For n≥ 10, the upper bound follows from Lemma <ref>. § TRAVELING SALESMAN POLYTOPE The Traveling Salesman polytope is defined as the convex hull of characteristic vectors of Hamiltonian cycles in a complete graph i.e.,P_(n):={χ(T) :Tis a Hamiltonian cycle inK_n} .In fact, no linear description of the Traveling Salesman polytope is known for general n. Moreover any linear description of P_(n), which admits an efficient way to test whether a given linear constraint belongs to this description, would have consequences for the long-standing conjecture 𝒩𝒫=co-𝒩𝒫 <cit.>(Section 5.12). However, for some small values of n a linear description of the Traveling Salesman polytope is known. For example, P_(5) can be described by nonnegativity constraints and the constraints below <cit.>x(E(S))≤ |S|-1 for allS, S ⊆ V,2≤|S|≤ |V|-2x(δ(v))= 2 for allv∈ Vx≥0 .Moreover,the linear inequalities from (<ref>) define facets of the Traveling Salesman polytope P_(n) for all n ≥ 4 <cit.>. For n≥ 6 the inequalitiesx_uv+x_vw+x_wu+x_uu'+ x_vv'+x_ww'≤ 4for distinctu,v,w, u', v', w' ∈ Valso define facets of P_(n) <cit.>. The inequality (<ref>) belongs to the well-known family of comb inequalities, which are valid for the Traveling Salesman polytope. Surprisingly, such scarce knowledge on linear description of the Traveling Salesman polytope is enough for us to prove the following theorem.For the Traveling Salesman polytope we have: (P_(n))= 1n=3,4 2n=5 1n≥ 6 .The proof of Theorem <ref> follows from a series of lemmata below. For n=5 we have (P_(n))=2. Recall, that the Traveling Salesman polytope P_(5) admits the minimal linear description (<ref>) <cit.>.For two Hamiltonian cycles T_1, T_2 in K_5 without a common edge (see Figure <ref>), the vector c:=χ(T_1)-χ(T_2) is not a circuit of P_(5). Indeed, (D y)⊂ (D c) for the non-zero vector y:=χ(M_1)-χ(M_2), where D is the constraint matrix of (<ref>) and M_1, M_2 are two different matchings in K_5 on the same four nodes. Thus (P_(5))≥ 2.The bound (P_(5))≤ 2 follows from the fact that for any two Hamiltonian cycles T_1, T_2 such that T_1∩ T_2≠∅, χ(T_1)-χ(T_2) is a circuit of P_(5). Indeed, up to symmetry we have two possible cases (see Figure <ref>) and in each of these cases χ(T_1)-χ(T_2) is a circuit.For n=6 we have (P_(n))=1. Let us consider two different Hamiltonian cycles T_1 and T_2 in K_6, then up to symmetry and up to exchanging the roles of T_1 and T_2 we have one of the nine cases (see Figure <ref>). In all these nine cases, y :=χ(T_1)-χ(T_2) is a circuit of P_(6). For n≥7 we have (P_(n))=1. 
Consider two different Hamiltonian cycles T_1, T_2 in K_n, n≥ 7. For the sake of contradiction let us assume that c:=χ(T_1)-χ(T_2) is not a circuit for the Traveling Salesman polytope P_(n). Thus there exists some non-zero y, which is not a scaling of c, satisfying (Dy) ⊆(Dc), where D denotes the matrix of the linear constraints (<ref>) and (<ref>), since the linear inequalities in (<ref>) and (<ref>) define facets for P_(n), n≥ 7.Case 1: T_1 and T_2 are not disjoint.First, let us prove that c is a circuit when T_1∩ T_2≠∅. Then, there are two different nodes u and v such that |{e∈ E : c_e≠ 0,e∈δ(u)}|=|{e∈ E : c_e≠ 0,e∈δ(v)}|=2 and c_uv=0.Let w be such that |{e∈ E : c_e≠ 0,e∈δ(w)}|=4, and let the edgese,g∈δ(w) be such that c_e=c_g. Theny_e=y_g holds.For the values c_uw and c_vw, we have (up to symmetry) four possibilities: *c_uw=1 and c_vw=-1 *c_uw=1 and c_vw=1 *c_uw=0 and c_vw=1 * c_uw=0 and c_vw=0 . Case (<ref>). Let u' be the node such that c_uu'=-1; and w' be the node such that c_ww'=1 and u≠ w'. There are two possible cases: u'=w' (see Figure <ref>) and u'≠ w' (see Figure <ref>). In the first case (see Figure <ref>), the statement of the Claim follows by considering y(δ(u)), y(δ(w)), y(E[{u,v,w}]) and y_ww'+ y_uw'+y_wu+ y_wv+ y_ut+ y_w's, where c_ut=0, c_w's=0, s≠ t and s,t are different from u,v,w,w'. (Note that such s, t exist since there are at least 3 nodes in K_n different from u,v,w,w', because n≥ 7. For at most 2 nodes r of these 3 nodes, we have c_w'r≠ 0. For every node r of these 3 nodes, we have c_ur=0.)In the second case (see Figure <ref>), the statement of the Claim follows by considering y(δ(u)), y(δ(w)), y(E[{u,v,w}]) and y_wu+ y_uv+y_wv+ y_ww'+ y_uu'+ y_vs, where c_vs=0 and s is different from u,v,w,u',w'. (Note that such s exists since there are at least 2 nodes in K_n different from u,v,w,u',w', because n≥ 7. For at most 1 node r of these 2 nodes, we have c_vr≠ 0.)Case (<ref>). Let u' be the node such that c_uu'=-1; and w' be a node such that c_ww'=-1. There are two possible cases: u'=w' (see Figure <ref>) and u'≠ w' (see Figure <ref>). In the first case (see Figure <ref>), the statement of the Claim follows by considering y(δ(u)), y(δ(v)), y(δ(w)), y(E[{v,w,w'}]) and y_wu+ y_uv+y_vw+ y_ww'+ y_ut+ y_vs, where c_ut=0, c_vs=-1, s≠ t and s,t are different from u,v,w,w'. (Note that such s, t trivially exist. The node s is uniquely defined, and forevery node t different from u,v,w,w',s we havec_ut=0.)In the second case (see Figure <ref>), the statement of the Claim follows by considering y(δ(u)), y(δ(w)), y(E[{u,w,w'}]) and y_wu+ y_uv+y_wv+ y_ww'+ y_uu'+ y_vs, where c_vs=0 and s is different from u,v,w,u',w'. (Note that such s exists since there are at least 2 nodes in K_n different from u,v,w,u',w', because n≥ 7. For at most 1 node r of these 2 nodes, we have c_vr≠ 0.) Case (<ref>) Let w', w” be two different nodes such that c_ww'=-1 and c_ww”=-1(see Figure <ref>). The statement of the Claim follows by considering y(δ(w)) and y_wu+ y_uv+y_vw+ y_ww̅+ y_ut+ y_vs for each w̅∈{w',w”}, where c_ut=0, c_vs=0, s≠ t and s,t are different from u,v,w,w̅. (Note that such s and t exist. Indeed, there are at least 3 nodes in K_n different from u,v,w,w̅, because n≥ 7. For at most 2 nodes r of these 3 nodes, we have c_ur≠ 0.For at most 1 node r of these 3 nodes, we have c_vr≠ 0. ) Case (<ref>). Consider a node w' and a node u' such that c_ww'=-c_uu'. 
To prove the Claim, it is enough to show that y_ww'=-y_uu'.There are two possible cases: u'=w' (see Figure <ref>) and u'≠ w' (see Figure <ref>). In Figure <ref>, without loss of generality we assumed that c_ww'=-1 and c_uu'=1.) In the first case (see Figure <ref>), we can consider y (E[{w,u,u'}]) to establish y_ww'=-y_uu'. In the second case (see Figure <ref>), to establish y_ww'=-y_uu' we can consider y_wu+ y_uv+y_vw+ y_ww'+ y_uu'+ y_vs where c_vs=0 and s is different from u,v,w,u',w'. Such s exists unless n=7 and we have the situations in Figure <ref>. (Note that otherwise such s exists. Indeed, there are at least 3 nodes in K_n different from u,v,w,u',w', if n≥ 8. For at most 2 nodes r of these 3 nodes we have c_vr≠ 0.)Now in the case in Figure <ref> and n=7, it is straightforward to establish that there are at least two nodes r such that|{e∈ E : c_e≠ 0,e∈δ(r)}|=4. Moreover, if |{e∈ E : c_e≠ 0,e∈δ(w')}|=4 then there are at least four nodes r such that|{e∈ E : c_e≠ 0,e∈δ(r)}|=4. Now it is not difficult to use already considered cases (<ref>), (<ref>), (<ref>), to establish y_ww'=-y_uu'.Using the above Claim for all nodes of degree 4 in a same connected component C of T_1 T_2, we establish that y_e=y_g whenever c_e=c_g and e,g are both in C. On the other side, we have y(δ(v))=0 for all nodes v. Hence, we also have y_e=-y_g whenever c_e=-c_g and e,g are both in C. Moreover, y_e=-y_g holds for all edges e, g such that c_e=-c_g. Indeed, lete=vv' and g=uu' be two edges from different connected components of T_1 T_2 such that c_e=-c_g. Consider the constraint x(E[{v,v',u,u'}])≤ 3 from (<ref>). Since c(E[{v,v',u,u'}])=0, we have y(E[{v,v',u,u'}])=y_e+y_g=0, implying y_e=-y_g. Hence, for n≥ 7 we proved that χ(T_1)-χ(T_2) is a circuit whenever T_1∩ T_2 is not empty.Case 2: T_1 and T_2 are disjoint. Let us prove that for n≥ 7, χ(T_1)-χ(T_2)is a circuit whenever T_1∩ T_2=∅.For n=7 we have (up to symmetry) three possibilities for two different Hamiltonian cycles T_1 and T_2 without a common edge (see Figure <ref>). In all these cases χ(T_1)-χ(T_2) is a circuit.For n≥ 8 let us show the following Claim.Let w be a node and e,g∈δ(w) be such that c_e=c_g. Theny_e=y_g holds. Let e, g be wv, wu for some two nodes u, v. We may assume that u and v are different, since otherwise the statement of the Claim is trivial. Without loss of generality, we may assume c_e=1 and c_g=1. There are two possible cases *c_uv=-1 *c_uv=0. In the case (<ref>), let w', w” be two different nodes such that c_ww'=-1 and c_ww”=-1. For each w̅∈{w',w”}, to establish y_ww̅=y_uv consider y_wu+y_uv+y_wv+y_ww̅+y_vs+y_ut, where s, t, s≠ t are two nodes different from u, v, w, w̅ such that c_vs=0 and c_ut=0. (Note that such nodes s, t exist. Indeed, since n≥ 8 there are at least 4 nodes in K_n different from u, v, w, w̅. There are at most 2 nodes r of these 4 nodes such that c_vr≠ 0. Also there are at most 2 nodes r of these 4 nodes such that c_ur≠ 0.) To establish y_e=y_g, now it is enough to consider y(E[{s,w^⋆,w}]) and y(δ(w)), where s∈{u,v}, w^⋆∈{w',w”} such that c_sw^⋆=0. (Note that such nodes s, w^⋆ exist, since otherwise T_2 has a subtour.).In the case (<ref>), let u', u” be two different nodes such that c_uu'=c_uu”=-1, and let v', v” be two different nodes such that c_vv'=c_vv”=-1 (see Figure (<ref>)). First note that {u',u”}≠{v',v”} as otherwise T_2 contains a subtour.Then we may assume that v'∉{u',u”} and u'∉{v',v”}. 
(Note that v” could be equal to u”).It follows that y_uu'= y_uu” by considering y_wu+y_uv+y_vw+y_vv'+y_uu̅+y_wz for each u̅∈{u',u”}, where c_wz=0 and z is different from u,v,w,v', and u̅.(Note that such a z exists.Indeed there are at least 3 nodes in K_n different from u,v,w,v' and u̅ if n≥8.For at most 2 nodes r of these 3 nodes, we have c_wr≠0.).By symmetry, we also have that y_vv'= y_vv”.There exists u̅∈{u',u”} such that c_wu̅=0 as otherwise T_2 contains a subtour.Then it follows that y_uw=- y_uu̅ by considering y(E[{w,u,u̅}]).Therefore, y_uw=- y_uu' and y_uw=- y_uu”.Similarly, y_vw=- y_vv' and y_vw=- y_vv”. Now, if c_vu'≠0, then since u'∉{v',v”}, we have that c_vu'=1.Then it follows that y_vu'= y_vw by considering y(δ(v)).It follows that y_vu'=- y_uu' by considering y(E[{u,u',v}]).Then in this case we have thaty_uw=- y_uu'= y_vu'= y_vw,and therefore y_g= y_e, as desired.Otherwise, c_vu'=0, and by symmetry we may assume that c_uv'=0 as well. There exists a node v”'≠ u' such that c_vv”'=1.It follows that y_vv”'= y_vw by considering y(δ(v)). If c_wu'=0(see Figure (<ref>)), then it follows that y_vv”'=- y_uu' by considering y_uu'+y_u'v+y_vu+y_vv”'+y_u'w+y_uv'.Then in this case we have thaty_uw=- y_uu'= y_vv”'= y_vw,and therefore y_g= y_e, as desired.Otherwise, c_wu'=-1.Then c_wu”=0, as otherwise T_2 contains a subtour. If v”=u” (see Figure (<ref>)), it follows that y_vw=- y_uu' by consideringy_uu”+y_u”w+y_wu+y_uu'+y_wv+y_u”z, where c_u”z=0 and z is different from u,u”,w,u', and v. (Note that such a z exists.Indeed there are at least 3 nodes in K_n different from u,u”,w,u', and v if n≥8.For at most 2 nodes r of these 3 nodes, we have c_u”r≠0.).Then in this case we have thaty_uw=- y_uu'= y_vwand therefore, y_g= y_e, as desired. Otherwise, v”≠ u”. If c_vu”=0 (see Figure (<ref>)), it follows that y_vv”'=- y_uu” by considering y_u”v+y_vu+y_uu”+y_u”w+y_vv”'+ y_uz, where c_uz=0 and z is different from u”,v,u,w, and v”'. (Note that such a z exists.Indeed there are at least 3 nodes in K_n different from u”,v,u,w, and v”' if n≥8.For at most 2 nodes r of these 3 nodes, we have c_ur≠0.).Then in this case we have thaty_uw=- y_uu”= y_vv”'= y_vw,and therefore, y_g= y_e, as desired.Finally, if instead c_vu”=1 (that is, u”=v”'), then it follows that y_vu”=- y_uu” by considering y(E[{u,v,u”}]).Then in this case we have that y_uw=- y_uu”= y_vu”= y_vw,and therefore y_g= y_e, as desired.Together, claims <ref> and <ref> implies that, up to scaling, y =c, a contradiction.Thus, for n≥ 7 and for any two different Hamiltonian cycles T_1, T_2, we have that c =χ(T_1)-χ(T_2) is a circuit for the Traveling Salesman polytope.(Proof of Theorem <ref>) The cases n=3 and n=4 are trivial. Indeed, P_(3) and P_(4) are simplices, and thus every two vertices of P_(3) and P_(4) form an edge. The cases, n=5, n=6 and n≥ 7 are covered by Lemma <ref>, Lemma <ref> and Lemma <ref>,respectively. § FRACTIONAL STABLE SET POLYTOPE Given a connected graph G=(V,E) with at least two nodes, the Fractional Stable Set polytope is defined as followsP_(G):={x∈^V : x_u+x_v ≤ 1 for all uv∈ E, x≥0} .The Fractional Stable Set polytope is a well studied polytope. In particular, it is known that all vertices of it are half-integral <cit.> i.e., x∈{0,1/2,1}^V whenever x is a vertex of P_(G). In <cit.>, it is shown that the combinatorial diameter of P_(G) is bounded from above by the number of nodes in G. 
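Since all vertices are half-integral, they can be enumerated by brute force on small graphs, which is handy for experimenting with the results below. The sketch is ours (Python with numpy, tolerance-based rank test): a half-integral feasible point is a vertex exactly when n of its tight constraints are linearly independent.

import itertools
import numpy as np

def fss_vertices(n, edges):
    # Vertices of the Fractional Stable Set polytope of a graph on
    # {0,...,n-1}: by half-integrality it suffices to scan {0, 1/2, 1}^V.
    rows, rhs = [], []
    for u, v in edges:                       # edge constraints x_u + x_v <= 1
        r = np.zeros(n); r[[u, v]] = 1.0
        rows.append(r); rhs.append(1.0)
    for i in range(n):                       # nonnegativity as -x_i <= 0
        r = np.zeros(n); r[i] = -1.0
        rows.append(r); rhs.append(0.0)
    B, d = np.array(rows), np.array(rhs)
    vertices = []
    for cand in itertools.product((0.0, 0.5, 1.0), repeat=n):
        x = np.array(cand)
        slack = d - B @ x
        if np.any(slack < -1e-9):
            continue                         # infeasible point
        tight = B[np.abs(slack) <= 1e-9]
        if len(tight) >= n and np.linalg.matrix_rank(tight) == n:
            vertices.append(x)               # n independent tight rows
    return vertices

# On the 5-cycle the all-1/2 point is a fractional vertex.
C5 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
print(any(np.all(v == 0.5) for v in fss_vertices(5, C5)))   # True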
Before we study the circuit diameter of the Fractional Stable Set polytope let us study the circuits of this polytope.Its circuits admit a nice characterization captured by the lemma below.For a graph G=(V,E), a vector c, c≠0 is a circuit of P_(G) if and only if the graph G' with the node set V':={v∈ V : c_v≠ 0} and the edge set E':={e∈ E :e=uv,u,v ∈ V'andc_u+c_v=0} is connected. Let G' be not connected and let C be a connected component of G' with a node set U. Let us define the vector c'∈^V asc'_v:=c_v if v∈ U 0otherwise .The vector c is not a circuit of P_(G) since the vector D c' has a smaller support than D c, where D is the linear constraint matrix in the minimal description ofP_(G). On the other hand, it is straightforward to check that if G' is connected, then c is a unique (up to scaling) non zero solution of the below system y_v=0 for all v∈ V such thatc_v=0y_v+y_u=0 for all uv∈ E such thatc_v+c_u=0 ,showing that c is a circuit of P_(G). To study the circuit diameter of the Fractional Stable Set polytope we need the following notation. For a node v, let B(v,0) be defined as {v}. For integer positive k, we define B(v, k) to be the set of nodes which are at distance at most k from v. The set of nodes which are at distance exactly k from v is denoted by N(v,k) i.e., N(v, k):=B(v, k)∖ B(v, k-1). The eccentricity ε(v) of a node v∈ V is minimum k such that V=B(v, k). Let v be any node in a connected graph G=(V,E) with at least two nodes. Then (P_(G)) is 𝒪(ε(v)).Let x' and x” be two vertices of P_(G). Let us show that (x',x”) is at most  4ε(v)+c for some constant c. To do this we construct a circuit walk from x' to x”.The walk will correspond to two different phases. In Phase I we construct a circuit walk from x' to some “well structured" point y', and in Phase II we move from y' to x” by another circuit walk. To simplify the exposition, in the proof we assume that G is a non-bipartite graph. It will be clear from the analysis of the length of the circuit walk that the bound in the statement of the lemma is also satisfied in the bipartite case.Phase I: Let us assume that b is the smallest k such that the subgraph of G induced by B(v,k) is non-bipartite. Start of Phase I:If b is odd, we first take a circuit walk from x' to a point z with z_v=0 and z_u=ϕ for u∈ N(v,1), where ϕ:=1/2 if b=1 andϕ:=1 otherwise. If b is even, we start by a circuit walk from x' to a point z with z_v=1, z_u=0 for u∈ N(v,1) and z_u=ϕ for u∈ N(v,2), where ϕ:=1/2 if b=2 andϕ:=1 otherwise. Initialize t:=1 if b is odd and t:=2 otherwise. If at the beginning of Phase I we have t=1,then 4 circuit steps are enough to reach z from x'.In the proof we are going to show that from every point x'∈{0,1/2,1}^V, x'∈ P_(G) we can reach a desired zin at most 4 circuit steps. Note, that x'∈{0,1/2,1}^V, x'∈ P_(G) is a weaker assumption than the assumption that x' is a vertex of P_(G). We weaken the assumptions on x' in the proof of this claim for the sake of exposition.First suppose that b=1.There are three possible cases: *x'_v=1*x'_v=1/2*x'_v=0.In the case <ref>, we have x'_w=0 for all w∈ N(v,1), since x'_v+ x'_w≤1.Then let c be defined as the following vector:c_u= -1/2 ifu=v 1/2 if u∈ N(v,1) -1/2if u∈ N(v,2), x'_u=1 0else .By Lemma <ref>, c is a circuit.Let y= x'+ c. Then clearly, y is feasible for P_(G).In particular, for any edge uw with u,w∈ N(v,1), we have that y_u+ y_w=1.Similarly, for any edge uw withu∈ N(v,1) andw∈ N(v,2), we have that y_u+ y_w≤1. 
Furthermore,y is one circuit step from x' since b=1 implies that there exists an edge uw, u,w∈ N(v,1). Now, let c' be defined as the following vector:c'_u= -1/2 ifu=v 0else .By Lemma <ref>, c' is a circuit. Moreover, y+ c' is the desired point z. Note, that z is one circuit step from y as z_v=0.Hence, in this case 2 circuit steps are enough to reach z from x'. In the case <ref>, we have x'_w≤1/2 for all w∈ N(v,1).Then let c be defined as the following vector:c_u= -1/2 ifu=v 1/2 if u∈ N(v,1), x'_u=0 -1/2if u∈ N(v,2), x'_u=1 0else .By Lemma <ref>, c is a circuit.Let z= x'+ c. Then clearly, z is feasible for P_(G).In particular, for any edge uw with u,w∈ N(v,1), we have that z_u+ z_w=1.Similarly, for any edge uw with u∈ N(v,1) and w∈ N(v,2), we have that z_u+ z_w≤1. Furthermore,z is one circuit step away from x' as z_v=0. Thus z is the desired point, and in this case 1 circuit step is enough to reachz from x'.In the case <ref>, let c be defined as the following vector:c_u= 1/2 ifu=v -1/2 if u∈ N(v,1), x'_u>0 0else .By Lemma <ref>, c is a circuit.Let y= x'+α c be the point which is one circuit step from x', where α≥ 0. Clearly α∈{0,1,2}. We have α≥ 1, since x'+ c is feasible for P_(G). Thus α∈{1,2}, hence y∈{0,1/2,1}^V and y_v ∈{1/2,1}. Due to the considered cases <ref> and <ref>, we know that from y we can reach a desired z in at most 3 circuit steps. Thus, a desired point z can be reached from x' in at most 4 circuit steps.Now, suppose b>1.We have the same three cases as when b=1, and we will refer to them identically.In the case <ref>, we have that x'_w=0 for all w∈ N(v,1).Then let c be defined as the following vector:c_u= -1/2 ifu=v 1/2 if u∈ N(v,1) -1/2if u∈ N(v,2), x'_u>0 0else .By Lemma <ref>, c is a circuit.Let y= x'+α c be the point which is one circuit step from x', where α≥ 0. Clearly α∈{0,1,2}. We have α≥ 1, since x'+ c is feasible for P_(G). Thus α∈{1,2}.First, suppose that α=1. Then let c' be the following vector:c'_u= -1/2 ifu=v 1/2 if u∈ N(v,1) -1/2if u∈ N(v,2), y_u>0 0else .By Lemma <ref>, c' is a circuit.Let z= y+ c'.Then z is feasible and z is one circuit step from y as z_v=0. Thus, a desired point z is at most 2 circuit steps from x' if α=1. Now, suppose α=2.Then z=x'+2c is a desiredpoint.Thus, if α=2 then 1 circuit step is enough to reach z from x'.In the case <ref>, we have that x'_w≤1/2 for all w∈ N(v,1).Then let c be defined as the following vector:c_u= 1/2 ifu=v -1/2 if u∈ N(v,1), x'_u=1/2 0else .By Lemma <ref>, c is a circuit.Let y= x'+ c.Then y is feasible, and is one circuit step from x'.Note that y satisfies the conditions of case <ref>, thus a desired point z can be achieved in at most 2 circuit steps from the point y.Thus, z is at most 3 circuit steps from x'.In the case <ref>, let c be defined as the following vector:c_u= 1/2 ifu=v -1/2 if u∈ N(v,1), x'_u>0 0else .By Lemma <ref>, c is a circuit.Let y= x'+α c be the point which is one circuit step from x', where α≥ 0. Clearly α∈{0,1,2}. We have α≥ 1, since x'+ c is feasible for P_(G). Thus α∈{1,2}, hence y∈{0,1/2,1}^V and y_v ∈{1/2,1}. Due to the considered cases <ref> and <ref>, we know that from y we can reach a desired z in at most 3 circuit steps. Thus, a desired point z can be reached from x' in at most 4 circuit steps.Therefore, in all cases, we need at most 4 circuit steps to reach z from x'. The proof of the next claim is essentially identical to the proof of Claim <ref>. 
If at the beginning of Phase I we have t=2,then 6 circuit steps are enough to reach z from x'.Invariants for z and t in Phase I: During Phase I, we updatez and t such that at each moment of time the following holds for all u∈ N(v,k), for all k≤ t:⋆z_u= 0 ifk≡ b+1 2 1 if k < b and k≡ b 2 1/2if k≥ b and k≡ b 2 .By construction, z and t defined at the beginning of Phase I satisfy condition (<ref>) for all u∈ B(v,t). At each step (except possibly the last one) of Phase I, t is increased by 2 and the point z is updated to satisfy (<ref>) for all u∈ B(v,t). In the end of Phase I, t equals ε(v), and hence (<ref>) holds for all u∈ V. Step of Phase I: At each step we change coordinates of point z corresponding to the nodes in N(v, t+1) and N(v, t+2).If t < b-2, we walk from z to the point z', such that for all u∈ N(v,k), for all k≤ t+1z'_u= 1 ifk≡ b+1 2 0 ifk≡ b 2 .Such a point z' can be reached from z in at most two circuit steps. From z' we walk to the point z” such that for all u∈ N(v,k), for all k≤ t+2z”_u= 0 ifk≡ b+1 2 1 ifk≡ b 2 .A point z” with above properties can be reached from z' in one circuit step. Thus, in this case we are able to define z” to be the new point z and increase t by 2 using at most three circuit steps.If t=b-2, we walk from z to the point z', such that for all u∈ N(v,k), for all k≤ t+1z'_u= 1 ifk≡ b+1 2 0 ifk≡ b 2 .Such a point z' can be reached from z in at most two circuit steps. From z' we walk to the point z” such that for all u∈ N(v,k), for all k≤ t+2z”_u= 1/2 ifk≡ b+1 2 1/2 ifk≡ b 2 ,where k is such that u∈ N(v,k). A point z” with above properties can be reached from z' in one circuit step. From z” we walk to the point z”' such that for all u∈ N(v,k), for all k≤ t+2z”'_u= 0 ifk≡ b+1 2 1 ifk < b and k≡ b 2 1/2if k≥ b and k=b 2 .A point z”' with above properties can be reached from z” in one circuit step. Thus, in this case we are able to define z”' to be the new point z and increase t by 2 using at most four circuit steps. If t ≥ b, we walk from z to the point z', such that for all u∈ N(v,k), for all k≤ t+1z'_u= 1/2 if k≡ b+1 2 1/2 ifk < b and k≡ b 2 0if k≥ b and k≡ b 2 .Such a point z' can be reached from z in one circuit step. From z' we walk to the point z” such that for all u∈ N(v,k), for all k≤ t+2z”_u= 0 if k=b+1 2 1 if k < b and k≡ b 2 1/2if k≥ b and k≡ b 2 .A point z” with above properties can be reached from z' in one circuit step. Thus, in this case we are able to define z” to be the new point z and increase t by 2 using only two circuit steps. Note, that if at the beginning of a Phase step we have ε(v)= t+1, we are in the case t≥ b. In this case, we need only two circuits steps to update z and increase t by 1.Phase II: We are now at the “well structured” point y'=z.In this Phase, we construct a circuit walk from the current point z to the vertex x”. Recall that at the end of Phase I, z satisfies (<ref>) for all u∈ V and t=ε(v).Start of Phase II: If for w∈ N(v,t) we have z_w=0, then we first take two circuit steps from the current z to the point z', such that for all u∈ N(v,k), for all k≤ t-1z'_u= 0 ifk≡ b+1 2 1 ifk < b and k≡ b 2 1/2if k≥ b and k≡ b 2and for u∈ N(v,t)z'_u= 1/2 ifx”_u∈{1/2, 1} 0ifx”_u=0 . Now we take two circuit steps from z' to z”, such that for all u∈ N(v,k), for all k≤ t-2z”_u= 0 ifk≡ b+1 2 1 ifk < b and k≡ b 2 1/2if k≥ b and k≡ b 2and for u∈ N(v,t-1) we havez”_u= 0if uw∈ E for some w∈ N(v,t), x”_w=1 1/2otherwiseand for u∈ N(v,t) we have z”_u=x”_u. 
Thus, we define z” to be the new point z and decrease t by 1 using at most four circuit steps.Invariants for z and t in Phase II: During Phase II, we updatez and t such that at each moment of time the following holds for all u∈ N(v,k), for all k≤ t-1⋆⋆z_u= 0 ifk≡ b+1 2 1 ifk < b and k≡ b 2 1/2if k≥ b and k≡ b 2and for u∈ N(v,t), we have⋆⋆⋆z_u= 0 ifmax{x”_w : w∈ N(v,t+1),uw∈ E }=1 1/2 ifmax{x”_w : w∈ N(v,t+1),uw∈ E }=1/2 ϕ otherwise ,where ϕ:=1/2 if b≥ t andϕ:=1 if b<t. Moreover, for all u∈ N(v,k), k>t, we have z_u=x”_u. Again by construction, z and t defined at the beginning of Phase II satisfy condition (<ref>) for all u∈ B(v,t-1) and condition (<ref>) for all u∈ N(v,t). At each step (except possibly the last one) of Phase II, t is decreased by 2 and the point z is updated to satisfy condition (<ref>) for all u∈ B(v,t-1) and condition (<ref>) for all u∈ N(v,t). At every moment of Phase II, we have t=b 2.Step of Phase II:For all points on the circuit walk in a step of Phase II, we have z_u=x”_u for every u∈ N(v,k), for all k>t.If at the beginning of a step of Phase IIwe have t≥ b+2, we take a circuit step from z to a point z', such that for all u∈ N(v,k), for all k≤ t-1z'_u= 1/2 ifk≡ b+1 2 1/2ifk < b and k≡ b 2 0if k≥ b and k≡ b 2and for u∈ N(v,t), we have z'_u= 1/2 ifx”_u∈{1/2, 1} 0ifx”_u=0 .From z' we take a circuit step to a point z”, such that for all u∈ N(v,k), for all k≤ t-2z”_u= 0 ifk≡ b+1 2 1ifk < b and k≡ b 2 1/2if k≥ b and k≡ b 2for u∈ N(v,t-1), we have z”_u= 1/2 ifx”_u∈{1/2, 1} 0ifx”_u=0and for u∈ N(v,t) we have z”_u=x”_u. From z” we take a circuit step to z”' such that for all u∈ N(v,k), for k≤ t-2z”'_u= 1/2 if k≡ b+1 2 1/2if k < b and k≡ b 2 0if k≥ b and k≡ b 2 .Moreover, for u∈ N(v,t-1)∪ N(v,t) we have z”'_u=x”_u. It is not hard to see, that from z”' it takes at most one more additional circuit step to a point satisfying condition (<ref>) for all u∈ B(v,t-3) and condition (<ref>) for all u∈ N(v,t-2). Thus, for t≥ b+2 it takes at most four circuit steps to update z and decrease t by 2. In the case when t<b+2, in the same way t can be decreased by 2 and the point z can be updated in at most four circuit steps. Note that for u∈ V we have x”_u=1/2 only if k≥ b or x”_w=1/2 for some w∈ N(v,k+1), uw∈ E, where k is such that u∈ N(v,k). Furthermore, for the very last Phase II step we need only three circuit steps if t=1 and only one circuit step if t=0.Number of Circuit Steps in the Constructed Walk:The total number of circuit steps needed in both Phases is at most 4ε(v)+c for some constant c.Indeed, to start Phase I we need at most a constant number of circuit steps. With each step of Phase I, t increases by 2 and we use at most 4 circuit steps, until t=ε(v) or t=ε(v)-1. In the latter case, we still need at most 4 circuit steps to finish Phase I and increase t by 1. We also need at most 4 circuit steps to start Phase II by updating t to be equal to ε(v)-1.With each step of Phase II, t decreases by 2 and we use at most 4 circuit steps.This is done until t=0 or t=1.In both cases, we need an additional constant number of steps to finish Phase II. This gives the upper bound of 2ε(v)+2ε(v)+c for some constant c on the total number of circuit steps in the constructed circuit walk from x' to x”.Lemma <ref> immediately implies an upper bound on the diameter of the Fractional Stable Set polytope in terms of the diameter of the graph G, defined as (G):=max_v∈ V{ε(v)}. For a connected graph G=(V,E) with at least two nodes, (P_(G)) is 𝒪((G)). plain
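The circuit characterization above (connectivity of the auxiliary graph G') also translates directly into code, which is useful for experimenting on small instances with the walks behind the corollary. The sketch is ours (Python with networkx; c is passed as a dict over the nodes):

import networkx as nx

def is_fss_circuit(G, c, tol=1e-9):
    # Lemma-based test: a nonzero c is a circuit of the Fractional
    # Stable Set polytope of G iff the graph G' on the support of c,
    # keeping only the edges uv with c_u + c_v = 0, is connected.
    support = {v for v in G.nodes if abs(c[v]) > tol}
    if not support:
        return False
    Gp = nx.Graph()
    Gp.add_nodes_from(support)
    Gp.add_edges_from((u, v) for u, v in G.edges
                      if u in support and v in support
                      and abs(c[u] + c[v]) <= tol)
    return nx.is_connected(Gp)

# On the path 0-1-2, the alternating vector (1, -1, 1) is a circuit.
print(is_fss_circuit(nx.path_graph(3), {0: 1.0, 1: -1.0, 2: 1.0}))   # True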
http://arxiv.org/abs/1709.09642v1
{ "authors": [ "Sean Kafer", "Kanstantsin Pashkovich", "Laura Sanità" ], "categories": [ "math.OC", "math.CO" ], "primary_category": "math.OC", "published": "20170927172629", "title": "On the Circuit Diameter of some Combinatorial Polytopes" }
Ambient Backscatter Networking: A Novel Paradigm to Assist Wireless Powered CommunicationsXiao Lu, Dusit Niyato, Hai Jiang, Dong In Kim, Yong Xiao and Zhu Han Dong In Kim is the corresponding authorDecember 30, 2023 ===================================================================================================================== Ambient backscatter communication technology has been introduced recently, and is then quickly becoming a promising choice for self-sustainable communication systems as an external power supply or a dedicated carrier emitter is not required. By leveraging existing RF signal resources, ambient backscatter technology can support sustainable and independent communications and consequently open up a whole new set of applications that facilitate Internet-of-Things (IoT). In this article, we study an integration of ambient backscatter with wireless powered communication networks (WPCNs). We first present an overview of backscatter communication systems with an emphasis on the emerging ambient backscatter technology. Then we propose a novel hybrid transmitter design by combining the advantagesof both ambient backscatter and wireless powered communications. Furthermore, in the cognitive radio environment, we introduce a multiple access scheme to coordinate the hybrid data transmissions. The performance evaluation shows that the hybrid transmitter outperforms traditional designs. In addition, we discuss some open issues related to the ambient backscatter networking. Ambient backscatter communications, modulated backscatter, RF energy harvesting, self-sustainable communications, wireless powered communications, Internet-of-Things.§ INTRODUCTION Information transmission based on modulated backscatter of incident signals from external RF sources has emerged as a promising solution for low-power wireless communications. The power consumption of a typical backscatter transmitter is less than 1 μW <cit.>, which renders excessively long lifetime, e.g., 10 years, for an on-chip battery. This low power consumption well matches the harvestable wireless energy from RF sources, e.g., typically from 1 μW to tens of μW <cit.>. This additionally renders RF energy harvesting to be an alternative to power backscatter transmitters. Furthermore, backscatter communications can be embedded into small gadgets and objects, e.g., a radio frequency identification (RFID) and passive sensor. Therefore, backscatter communications is also envisioned as the last hop in the Internet-of-Things (IoT) <cit.>, which requires low cost and ubiquitous deployment of small-sized devices <cit.>. Due to recent dramatic increases in application demands, the requirement for backscatter communications has gone beyond the conventional RFID towards a more data-intensive way.This strongly raises the need for re-engineering backscatter transmitters for better reliability, higher data rates, and longer interrogation/transmission range. However, traditional backscatter communication techniques, e.g., RFID, are hindered by three major shortcomings: 1) The activation of backscatter transmitters relies on an external power supply such as an active interrogator (also called a reader or carrier emitter) which is costly and bulky. 2) A backscatter transmitter passively responds only when inquired by a reader. The communication link is restricted in one hop, typically with the distance ranging from a few centimeters to a few meters. 
3) A backscatter transmitter's reflected signal could be severely impaired by adjacent active readers, significantly limiting the device usage in a dense deployment scenario.Recently, ambient backscatter communication technology <cit.> has emerged to overcome some of the above limitations. An ambient backscatter transmitter functions using ambient RF signals for both backscattering and batteryless operations through RF energy harvesting.Energy is harvested from ambient RF sources in the radio environment, avoiding the development and deployment cost of readers as well as improving the flexibility in use. Despite many benefits, the study of ambient backscatter communications is still at its nascent stage. Various challenges arise from the data communication and networking perspectives <cit.>. This motivates our work in this article.§ BACKSCATTER COMMUNICATIONS This section first describes the backscatter communications from the system perspective and then reviews the fundamental principles of backscatter communications.§.§ Backscatter Communication Systems Following the standard communication terminology, a backscatter communication system has three major components, i.e., a carrier emitter, a backscatter transmitter, and a receiver. The carrier emitter can be either a dedicated RF generator that only broadcasts continuous sinusoidal waves <cit.> or an ambient RF transmitter communicating with its own intended receiver(s) <cit.>, e.g., a Wi-Fi router <cit.>. A backscatter transmitter is a device that backscatters a signal from the carrier emitter for data transmission to the backscatter receiver. The receiver is capable of decoding the data from the modulated backscatter signal. In an RFID context, the carrier emitter and receiver can be co-located, and it is called an interrogator or reader.Backscatter communication systems can be classified based on the type of power supply. * In passive systems, the backscatter transmitter relies on the exogenous RF waves from a carrier emitter for both operational power and a carrier signal to reflect. To communicate, the backscatter transmitter first harvests energy from incident RF waves through rectifying, typically by a rectenna or charge pump. Once the rectified DC voltage exceeds a minimum operating level, the device is activated. The device then tunes its antenna impedance to generate modulated backscatter using the instantaneously harvested energy. Therefore, passive systems are featured with coupled and concurrent backscattering and energy harvesting processes. For example, the experiment in <cit.> demonstrates that a batteryless backscatter sensor can function continuously with an input RF power of 18 dBm (equivalently, 0.1103 μW/cm^2 power density) for energy harvesting. Because part of incident RF waves are utilized for energy harvesting to attain the operational power, the effective transmission range of a passive device is relatively short, typically within a few meters. Moreover, it has limited data sensing, processing, and storage capabilities. Nevertheless, the key benefit of the passive device is its low cost and small size. For example, the recent advance in chipless implementation such as surface acoustic wave (SAW) tag <cit.> allows fabricating a passive device with the cost of only 10 cents. * In semi-passive systems, the backscatter transmitter, instead of relying on the harvested energy from the carrier emitter alone, is equipped with an internal power source. 
Thereby, without needing to wait until harvesting enough energy, the access delay is significantly reduced. The semi-passive device enjoys better reliability and wider range of accessibility, typically up to tens of meters. Another benefit is that the battery can support larger data memory size and on-chip sensors <cit.>, which remarkably widens the functionality and usability of the device. However, the semi-passive device still utilizes the RF signals from a carrier emitter for backscattering, and thus, does not largely improve the transmission rate over its passive counterpart. Moreover, the semi-passive device has limited operation time subject to the battery capacity. Figure <ref> illustrates three configurations of backscatter communication systems. * Monostatic backscatter configuration consists of two components, i.e., an interrogator and a backscatter transmitter, e.g., an RFID tag. The interrogator as the carrier emitter first releases RF signals to activate the backscatter transmitter. Once activated, the backscatter transmitter performs modulation utilizing the same RF signals from the interrogator. The reflected modulated backscatter signals are then captured by the interrogator which acts as the backscatter receiver. Since the carrier emitter and the backscatter receiver are co-located, the backscattered signal suffers from a round-trip path loss <cit.>. The monostatic configuration is mostly adopted for short-range RFID applications. * Bistatic backscatter configuration differs from the monostatic counterpart in that the interrogator is replaced by a separate carrier emitter and a separate backscatter receiver. The bistatic configuration allows setting up more flexible network topologies. For example, a carrier emitter can be placed at an optimal location for backscatter transmitters and receivers. Moreover, the bistatic configuration can mitigate the doubly near-far effect <cit.> by distributing several carrier emitters in the field. It was shown that the fabrication cost of the carrier emitter and backscatter receiver in the bistatic configuration is cheaper than that of the interrogator in monostatic configuration due to less complex design <cit.>. * Ambient backscatter configuration is similar to the bistatic configuration. However, the carrier emitter is an existing RF source, e.g., a TV tower, base station, or Wi-Fi access point, instead of a dedicated device. Unlike the bistatic backscatter configuration in which the communication is always initiated by the carrier emitter, the ambient backscatter transmitter can harvest RF energy from the ambient RF source and initiate transmission to its receiver autonomously. Nevertheless, compared with the bistatic configuration, the transmission range of an ambient backscatter transmitter is limited because the performance is affected by channel conditions and by natural variations of the ambient signals <cit.>. In addition to lower cost and less energy consumption, the ambient backscatter configuration does not need extra spectrum to operate. In the other configurations, the use of interrogators or dedicated carrier emitters costs not only additional power but also frequency resources.Besides, both the emitted signal from interrogators (or dedicated carrier emitters) and the reflected signal from backscatter transmitters can cause severe interference to other wireless devices, especially in a dense deployment scenario, e.g., large supply chains, and when high-gain antennas are employed. 
By contrast, the ambient backscatter signal does not cause any noticeable interference to other wireless devices unless their distance to the ambient backscatter transmitter is extremely close, e.g., less than 7 inches as demonstrated in a real experiment in <cit.>. Moreover, since ambient signals from different systems can be exploited, the working frequency of an ambient backscatter transmitter can be chosen to suit the specific application. For example, an outdoor implementation can be based on TV frequencies <cit.>, while Wi-Fi frequencies <cit.> would be an appropriate option for indoor applications. Interestingly, a recent study in <cit.> reveals that ambient backscatter communications can even improve the legacy receiver's performance if the legacy receiver can leverage the additional diversity from the backscattered signals. Moreover, in monostatic and bistatic backscatter configurations, all the backscatter transmitters need to be located within the coverage of their dedicated interrogators or carrier emitters in order to receive RF signals and to be scheduled, so usually only one-hop communication is feasible. By contrast, multiple ambient backscatter transmitters can initiate transmissions independently and simultaneously, making multi-hop communication possible and helping to overcome the short-range issue of backscatter links. Table <ref> summarizes and compares the important ambient backscatter communication prototypes. §.§ Fundamentals §.§.§ Basic principles of modulated backscatter Unlike conventional wireless communication systems, a backscatter transmitter does not generate active RF signals. Instead, the backscatter transmitter maps a sequence of digital symbols onto the RF backscattered waveforms at the antenna. The waveform adaptation is done by adjusting the load impedance, i.e., the reflection coefficient, of the antenna to generate waveforms different from that of the original signal. This is known as load modulation. Figure <ref> shows the diagram of a backscatter transmitter with binary load modulation. It has two loads whose impedances are intentionally matched and mismatched to the antenna impedance, respectively. The antenna reflection coefficient, and thus the amount of signal reflected from the antenna[The reflected signal is composed of two parts: structural mode and antenna mode scattering <cit.>. The former is determined by the antenna's physical properties, e.g., material and shape. The latter is a function of the antenna load impedance.], can be tuned by switching between the two impedance loads. Specifically, when the load with the matched impedance is chosen, most of the incident signal is harvested, i.e., an absorbing state. Conversely, if the antenna switches to the other load, a large amount of the signal is reflected, i.e., a reflecting state. A backscatter transmitter can utilize an absorbing state and a reflecting state to indicate a bit “0" and a bit “1", respectively, to its intended receiver. It follows that, in the reflecting state, the receiver observes a superposition of the original wave from the signal source (e.g., the interrogator) and the backscatter transmitter's reflected wave, whereas in the absorbing state the receiver only sees the original wave. The states are then interpreted as information bits. The information rate can be adapted by varying the bit duration.
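To make the load-modulation principle concrete, the following minimal sketch simulates binary backscatter at a scaled-down toy carrier frequency. All numerical values (carrier frequency, reflection coefficients, path attenuations) are illustrative assumptions rather than measured parameters: a bit “1" selects the reflecting state, a bit “0" selects the absorbing state, and the receiver observes the superposition of the direct wave and the reflected copy, exactly as described above.

```python
import numpy as np

# Toy parameters (assumptions for illustration, not measured values).
fc = 50e3            # carrier frequency (Hz), scaled down from UHF for a short run
fs = 10 * fc         # sample rate (Hz)
bit_rate = 1e3       # backscatter bit rate (bits/s)
gamma_reflect = 0.6  # reflection coefficient in the reflecting (mismatched) state
gamma_absorb = 0.05  # residual reflection in the absorbing (matched) state

bits = np.array([1, 0, 1, 1, 0, 0, 1, 0])
spb = int(fs / bit_rate)                      # samples per bit
t = np.arange(len(bits) * spb) / fs
carrier = np.cos(2 * np.pi * fc * t)          # continuous wave from the carrier emitter

# Load modulation: switch the antenna reflection coefficient once per bit.
gamma = np.where(np.repeat(bits, spb) == 1, gamma_reflect, gamma_absorb)

# The receiver sees the direct wave plus the scaled backscattered copy.
a_direct, a_backscatter = 1.0, 0.3            # path attenuations (assumptions)
rx = a_direct * carrier + a_backscatter * gamma * carrier
```

Varying spb, i.e., the bit duration, changes the information rate, mirroring the rate adaptation noted above.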
As the backscatter transmitter only works as a passive transponder that reflects part of the incident signals during modulation, the implementation can be very simple and requires no conventional communication components such as oscillators, amplifiers, filters, and mixers. Typically, the backscatter transmitter consists only of a digital logic integrated circuit, an antenna, and an optional power storage, making it cheap, small, and easy to deploy. §.§.§ Modulation and demodulation In backscatter communications, information is usually carried in the amplitude and/or phase of the reflected signals. Binary modulation schemes such as amplitude shift keying (ASK) and phase shift keying (PSK) are most commonly adopted in passive backscatter systems. However, due to low spectral efficiency, binary modulations result in a low data rate, e.g., up to 640 kbps for FM0 and Miller coding. This calls for research efforts in multilevel modulation, i.e., higher-order constellations, to accommodate the demand from data-intensive applications. Recent studies have shown that backscatter systems can support M-ary quadrature amplitude modulation (QAM), e.g., 16-QAM <cit.> and 32-QAM <cit.>, as well as N-PSK, e.g., 16-PSK <cit.>. A passive RF-powered backscatter transmitter <cit.> operating at 5.8 GHz can achieve a 2.5 Mbps data rate with 32-QAM modulation at a ten-centimeter distance. Moreover, 4-PSK and 16-PSK modulations have been implemented in an ambient backscatter prototype, i.e., BackFi <cit.>. The experiments demonstrate a 5 Mbps data rate at a range of one meter. Because of amplitude and/or phase modulation, a backscatter receiver needs to detect the amplitude and/or phase change. It is relatively simpler to implement an amplitude demodulator. Figure <ref> also shows the diagram of binary amplitude demodulation based on envelope detection at the receiver. The circuit has four components: an antenna, an envelope averager, a threshold calculator, and a comparator. The incoming waves at the antenna are smoothed by the envelope averager, which yields the envelope of the instantaneous signal. The threshold calculator generates a threshold value by taking the mean of the long-term averaged envelope. The comparator then compares the smoothed instantaneous envelope of the received modulated backscatter with the threshold to decide the value of each information bit. Demodulating backscattered waves with phase variation, by contrast, requires phase detection. The most common approaches to phase demodulation include a homodyne receiver with RF in-phase/quadrature demodulation <cit.> and channel estimation <cit.>. For example, BackFi <cit.> uses a preamble to estimate the combined forward-backward channel. The channel estimate is then used to decode the information bits modulated onto the phase. §.§.§ Operating Frequency Most existing backscatter systems operate at ultra-high frequency (UHF), i.e., 860-960 MHz, which falls within the Industrial, Scientific, and Medical (ISM) band. The EPCglobal Class 1 Gen 2 and ISO 18000-6C protocol stacks are the standardized regulations for RFID backscatter systems in UHF. Though widely adopted, UHF backscatter transmitters are susceptible to multipath signal cancellation (because of the narrow bandwidth) and to interference, both from existing radio systems and from multiuser concurrent transmissions.
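As a concrete illustration of the amplitude demodulation chain just described (envelope averager, threshold calculator, comparator), the sketch below implements the three stages digitally on a synthetic two-level input. The window length and the amplitude levels are assumptions chosen for illustration; practical receivers realize these stages in analog hardware.

```python
import numpy as np

def envelope_demodulate(rx, spb, n_bits, smooth_len=64):
    """Binary amplitude demodulation: smooth the rectified signal
    (envelope averager), take the long-term mean as the threshold
    (threshold calculator), and compare per-bit averages against it
    (comparator)."""
    envelope = np.convolve(np.abs(rx), np.ones(smooth_len) / smooth_len, mode="same")
    threshold = envelope.mean()
    per_bit = envelope[: n_bits * spb].reshape(n_bits, spb).mean(axis=1)
    return (per_bit > threshold).astype(int)

# Synthetic test input: amplitude 1.3 during "1" bits (the reflected wave
# adds to the direct wave) and 1.05 during "0" bits (absorbing state).
spb, bits = 500, np.array([1, 0, 1, 1, 0, 0, 1, 0])
t = np.arange(len(bits) * spb)
amp = np.where(np.repeat(bits, spb) == 1, 1.3, 1.05)
rx = amp * np.cos(2 * np.pi * 0.1 * t)        # ten samples per carrier cycle
assert (envelope_demodulate(rx, spb, len(bits)) == bits).all()
```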
Apart from UHF, some works have studied the super-high frequency (SHF) or microwave range, i.e., 5.725-5.875 GHz, and ultra-wideband (UWB) signals with an instantaneous spectral occupancy of 500 MHz. Increasing the frequency to SHF results in a smaller form factor and facilitates multiple-antenna implementations because the antenna size depends on the wavelength. Other advantages include an increased antenna gain and object immunity <cit.>. UWB backscatter transmitters adopt ultrashort pulses on the nanosecond scale to enhance frequency density for better reliability <cit.>. Using wideband signals has certain benefits, including immunity to multipath fading, lower power consumption, and robustness to interference. Ambient backscatter communications take advantage of existing radio signals, e.g., from TV or cellular stations. This allows high flexibility in the circuit design frequency, as an ambient backscatter transmitter can work on any frequency range allocated to existing communication systems. The choice of frequency can be determined based on the specific application. For example, an outdoor implementation can be based on TV frequencies <cit.>, while Wi-Fi frequencies <cit.> would be an appropriate option for indoor applications. § AMBIENT BACKSCATTER ASSISTED WIRELESS POWERED COMMUNICATION NETWORKS Wireless powered communication is an active radio transmission paradigm powered by RF energy harvesting <cit.>. Recent developments in circuit technology have greatly advanced RF energy harvester implementations in terms of hardware sensitivity, RF-to-DC conversion efficiency, circuit power consumption, etc. This has made powering active radio communications by ambient RF energy harvesting <cit.> feasible. For example, a recent design and implementation of an RF-powered transmitter <cit.> showed that the transmission rate could reach up to 5 Mbps. We propose a novel hybrid transmitter that combines ambient backscatter communications and wireless powered communications, both of which can harvest RF energy and support self-sustainable communications <cit.>. On the one hand, a wireless powered transmitter requires a long time to accumulate enough energy for active transmission when the ambient RF signals are not abundant <cit.>. In this circumstance, ambient backscattering may still be employed, as it has ultra-low power consumption. On the other hand, since the ambient RF signals may alternate between an ON state (i.e., the signals are present) and an OFF state (i.e., the signals are absent), ambient backscattering cannot be adopted when the RF signals are OFF. In this case, a wireless powered transmitter can be adopted to perform active RF transmission using energy that was harvested and stored earlier, while the RF signals were ON <cit.>. Therefore, these two technologies complement each other well and result in better data transmission performance. Figure <ref> illustrates the block diagram of the hybrid transmitter. The transmitter consists of the following components, detailed below: an antenna, a power management module, an energy storage, a load modulator, an RF energy harvester, an active RF transceiver, a digital logic and microcontroller unit, and a memory storage connected to an external application (e.g., a sensor). We can see that many circuit components, such as the RF energy harvester, memory, and energy storage, can be shared between ambient backscatter and wireless powered communications.
Based on state-of-the-art circuit technology for ambient backscatter transmitters <cit.> and wireless-powered transmitters <cit.>, the proposed design for the hybrid transmitter should not require considerably more complex hardware than that for performing ambient backscatter or wireless powered transmission alone. * Antenna, shared by the RF energy harvester, the load modulator, and the active RF transceiver, * Power management module, which controls the power flow of the circuit, * Energy storage, to reserve the harvested energy for later use, * Load modulator, to perform the ambient backscatter function with load modulation, * RF energy harvester, to perform RF-to-DC energy conversion, * Low-power active RF transceiver, for active RF transmission or reception, * Application (e.g., a sensor), either integrated or connected externally, to collect the measured data, * Memory storage, to store data generated by the application or the decoded information from the active RF transceiver, and * Low-power digital logic and microcontroller, to process data from the application and control the load modulator as well as the active RF transceiver. The proposed design allows highly flexible operation for RF energy harvesting, active data transmission/reception, and backscattering. Such integration has the following advantages. * It supports a long duty cycle. When the hybrid transmitter has data to transmit, but not enough energy to perform active RF transmission, the device can perform backscattering for urgent data delivery. Since ambient backscatter communications can use instantaneously harvested energy, it does not consume the energy in the storage reserved for active RF transmission. Consequently, the duty cycle is improved significantly. Additionally, the delay in responding to a data transmission request is much shorter. * In comparison with an ambient backscatter transmitter, the hybrid transmitter can achieve a longer transmission range by using active RF transmission when necessary. * The hybrid transmitter is capable of offloading transmission from active RF broadcast to passive backscattering, thus alleviating interference issues. This is especially beneficial in a dense/ultra-dense network with high spatial frequency reuse, as 1) there are various signal sources to facilitate backscattering and 2) ambient backscattering does not cause noticeable interference to other users. * The hybrid transmitter can be used even without licensed spectrum, e.g., in cognitive radio networks. Coexisting with a primary user who is assigned a licensed channel, the hybrid transmitter can harvest energy and perform backscattering when the primary user is transmitting. When the primary user is idle, the hybrid transmitter can use active RF transmission to access the channel as a secondary user. Consider a typical WPCN composed of a hybrid access point (HAP) that is co-located with a carrier emitter. The HAP can not only emit RF signals for monostatic backscattering but also serve as the data sink. The system performance can be improved by properly adjusting the operation mode of the hybrid transmitter based on channel conditions. Figure <ref> illustrates three different operation scenarios empowered by the hybrid transmitter. * When a hybrid transmitter is located near the HAP, e.g., T_1, it can enjoy full-time transmission by backscattering the carrier wave from the HAP (i.e., monostatic configuration).
* When a hybrid transmitter is located far from the HAP but still within its coverage range, e.g., T_2, it can perform harvest-then-transmit. * When a hybrid transmitter is outside the coverage range of the HAP, e.g., T_3, it can first perform ambient backscattering to a peer within coverage, which then relays the data to the HAP. Clearly, these operation scenarios of the hybrid transmitter render better throughput via mode switching and increase the coverage range of the WPCN. Despite the above advantages of the hybrid transmitter, some technical issues arise. With a single antenna, the hybrid transmitter cannot backscatter, harvest energy, and perform active data transmission simultaneously. Therefore, the transmitter has to determine when and for how long each operation mode should be activated to achieve the optimal tradeoff among RF energy harvesting, ambient backscattering, and active RF transmission. The problem becomes more complicated with multiple transmitters in the network, which is discussed next. § MULTIPLE ACCESS SCHEME FOR THE HYBRID TRANSMITTERS IN COGNITIVE RADIO NETWORKS In this section, we examine the proposed hybrid transmitter in a cognitive radio network scenario where no licensed channel exists for the hybrid transmitter. To coordinate the hybrid data transmissions of multiple devices, we devise a multiple access scheme for the ambient backscatter assisted wireless powered communications. §.§ System Model We consider a cognitive radio network consisting of a primary transmitter (PT), multiple secondary transmitters (STs), which are the hybrid transmitters described in Section <ref>, and a common secondary access point (SAP), e.g., a gateway. The SAP works as both the controller for coordinating data transmissions and the data sink for collecting data from the multiple STs, e.g., sensors. In order to decode the hybrid transmissions from the STs, the SAP is equipped with both a conventional radio receiver and a backscatter receiver. When the PT transmits data on the licensed channel, the STs can backscatter for data transmission to the SAP or harvest energy to replenish their batteries <cit.>. When the PT is not transmitting, i.e., the channel is idle, the STs can perform active data transmission to the SAP over the channel with the stored energy. The network requires a multiple access scheme for the STs and the SAP, which is proposed as shown in Figure <ref>. Let 𝒩={1, 2, ⋯, N} denote the set of the STs. Let P_T and τ denote the transmit power and the normalized transmission period of the PT, respectively. The normalized channel idle period is then 1-τ. The first part of the channel busy period is an energy harvesting subperiod with time fraction ρ. During this subperiod, all the STs harvest energy from the PT's signal to be used for later active data transmission. The remaining time fraction 1-ρ of the channel busy period is used for backscattering. Let α_n denote the time fraction assigned to ST_n for backscattering, with ∑_n ∈𝒩α_n=1. When an ST is backscattering, the other STs can also harvest energy from the PT's signal. Thus, the total energy harvesting duration for ST_n is [ρ+(1-ρ)∑_m ∈𝒩\{n}α_m]τ. Let h_n and μ denote the channel gain coefficient between the PT and ST_n and the RF-to-DC energy conversion efficiency of the STs, respectively. The total amount of energy harvested by ST_n can then be computed as E^H_n=P_T h_nμτ[ ρ + (1-ρ)∑_m ∈𝒩\{n}α_m].
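As a concrete check, the harvested-energy expression above can be evaluated directly. The sketch below adopts the free-space (Friis) channel model and the parameter values quoted later in the numerical results (1 W PT transmit power, 6 dBi antenna gains, 900 MHz, μ=0.15, two STs placed 11 m from the PT with equal backscatter fractions, and ρ=0.5); fixing these values here is an illustrative assumption.

```python
import numpy as np

# Parameter values as quoted in the numerical results section.
P_T, tau, rho, mu = 1.0, 0.7, 0.5, 0.15
G = 10 ** (6 / 10)            # 6 dBi antenna gain
lam = 3e8 / 900e6             # wavelength at 900 MHz

def friis_gain(d):
    """Free-space channel gain G_T * G_R * lambda^2 / (4 * pi * d)^2."""
    return G * G * lam ** 2 / (4 * np.pi * d) ** 2

h = friis_gain(np.array([11.0, 11.0]))   # PT -> ST_n gains, both STs at 11 m
alpha = np.array([0.5, 0.5])             # backscatter time fractions, sum to 1

# E^H_n = P_T * h_n * mu * tau * [rho + (1 - rho) * sum over m != n of alpha_m]
E_H = P_T * h * mu * tau * (rho + (1 - rho) * (alpha.sum() - alpha))
print(E_H)   # harvested energy per ST over one normalized period
```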
During the channel idle period, ST_n can perform active data transmission for a time fraction, denoted by β_n, in a sequential fashion, where ∑_n ∈𝒩β_n=1. Evidently, there exists a tradeoff among energy harvesting, backscattering, and active transmission. The throughput of the STs can be optimized by exploring this tradeoff. Let α={α_1,α_2,…, α_N} denote the set of the time fractions for backscattering and β={β_1,β_2,…, β_N} denote the set of time fractions for active data transmission of the STs. The objective of our proposed multiple access scheme is to find the optimal combination of ρ, α, and β that maximizes the sum throughput of the STs. §.§ Multiple Access Control The transmission rate of backscattering depends on the physical configuration of the circuit <cit.>. As shown by the real implementation in <cit.>, an ST can backscatter by load modulation without consuming energy from its battery. When the PT is transmitting and an ST is not scheduled for backscattering, it can harvest energy, part of which, denoted by E_C, is consumed by the circuit components. The surplus energy is stored in the battery and reserved for future active data transmission. We consider two important requirements in this multiple access scheme. One is the quality-of-service (QoS) requirement that the minimum throughput of each ST should be greater than or equal to a demand threshold, denoted by R_t. The other is the physical constraint that, during active data transmission, the allocated transmit power of ST_n cannot exceed the maximum power limit, denoted by P̅_t. Under these two requirements, we formulate an optimization problem to maximize the sum throughput of the STs. max_α, β {∑_n ∈𝒩(1-ρ)α_nτ R^b_n + ∑_n ∈𝒩 (1-τ) β_nη W log_2(1+max[0,E^H_n-E_C]g_n/(1-τ)β_nσ^2) } s. t. (1-ρ)τα_n R^b_n + (1-τ)β_nη W log_2(1+max[0, E^H_n-E_C ]g_n/(1-τ)β_nσ^2) ≥ R_t, ∀ n ∈𝒩, E^H_n-E_C/(1-τ)β_n≤P̅_t , ∀ n ∈𝒩, where R^b_n is the backscatter data rate of ST_n, η∈ [0,1] represents the data transmission efficiency, W is the bandwidth of the primary channel, σ^2 is the power of the additive white Gaussian noise (AWGN), and g_n is the channel gain coefficient between ST_n and the SAP. We consider the free-space propagation model and calculate the channel gain according to the Friis equation <cit.>, i.e., G_TG_Rλ^2/(4π d)^2, where G_T and G_R are the antenna gain coefficients of the transmitter and the receiver, respectively, λ is the wavelength of the emitted signal, and d is the distance between the transmitter and the receiver. In particular, the objective function in (<ref>) is the sum throughput, where * ∑_n ∈𝒩(1-ρ)α_nτ R^b_n is the sum throughput from backscattering by all STs, and * ∑_n ∈𝒩 (1-τ) β_nη W log_2(1+max[0, E^H_n-E_C ] g_n/(1-τ)β_nσ^2) is the sum throughput from the active data transmission by all STs. Here, max[0, E^H_n-E_C ] /(1-τ)β_n is the transmit power of ST_n, calculated by averaging the stored energy (harvested energy minus circuit power consumption if E^H_n> E_C) over the active transmission duration. The constraint in (<ref>) is the QoS requirement for each individual ST; its left-hand side is the individual throughput of an ST. The energy harvesting requirement is that the total amount of harvested energy has to exceed the circuit power consumption E_C to support the active RF generation function <cit.>. The expression in (<ref>) represents the transmit power allocation constraint. §.§ Numerical Results We consider the PT to be a small cell base station with 1 W transmit power.
The bandwidth and the frequency of the primary channel are 100 kHz and 900 MHz, respectively. The antenna gains of the PT, STs, and SAP are all set to 6 dBi. For ambient backscatter communications, we consider a 100 kbps transmission rate. The circuit power consumption E_C, minimal demand threshold R_t, and maximum transmit power P̅_t of each ST are -25 dBm, 10 kbps, and 1 W, respectively. The energy conversion efficiency μ and data transmission efficiency η are set to 0.15 and 0.6, respectively. The power of the additive white Gaussian noise is 10^-10 W. In addition, we use the CVX toolbox in MATLAB to solve the formulated sum throughput maximization problem. For ease of presentation, we first consider N=2 STs and show how the sum throughput changes with varying β_1 and ρ when the channel busy duration τ is 0.7. Both STs are placed 11 m away from the PT. The distance between ST_1 and the SAP is 2 m, and that between ST_2 and the SAP is 2.5 m. Figure <ref> shows the sum throughput versus ρ and β_1 when α_1=α_2=0.5, i.e., when ST_1 and ST_2 have equal backscatter durations. We observe that when the energy harvesting time fraction ρ varies from 0 to 1, the sum throughput first increases and then decreases after reaching a maximum value. The maximum sum throughput is achieved when β_1 is greater than 0.5. The reason is that ST_1 is located nearer to the SAP than ST_2 and consequently has a better active transmission rate. Therefore, to achieve better sum throughput, more active transmission time should be allocated to ST_1 once the minimal throughput requirement of ST_2 is met. Then, in Figure <ref>, we investigate how β_1 should be chosen as α_1 varies in order to maximize the sum throughput when ρ is fixed at 0.5. We observe that when α_1 takes a small value (e.g., close to 0), the maximal sum throughput is achieved by choosing a large value of β_1. On the other hand, when α_1 takes a large value (e.g., close to 1), the maximal sum throughput is obtained by assigning a small value of β_1. Figures <ref> and <ref> show the maximal sum throughput when either α_1 or ρ is fixed. This implies that the proposed multiple access scheme, by solving our formulated optimization problem, can achieve the design goal of maximizing the sum throughput over all possible combinations of α, β, and ρ. In Figure <ref>, we compare the sum throughput when the STs are (i) the proposed hybrid transmitters with the optimal solution to the formulated problem, labeled as “HT", (ii) wireless powered transmitters only, which are equivalent to HT with ρ=1, labeled as “WPT", and (iii) backscatter transmitters only, which are equivalent to HT with ρ=0 and ∑_n ∈𝒩β_n=0, labeled as “BT". The wireless powered transmitters harvest energy when the PT is transmitting and sequentially perform active RF transmission when the PT is idle. The backscatter transmitters sequentially perform backscattering during the PT's transmission and remain idle when the PT is silent. For demonstration purposes, we also plot the sum active transmission throughput and the sum backscattering throughput of the hybrid transmitters, labeled as “HT-WPT" and “HT-BT", respectively. We consider that the SAP is located 15 m away from the PT. Let N=1,2,...,8. The N STs are aligned on the line segment between the SAP and the PT at 1 m intervals, starting 2 m away from the SAP and moving towards the PT. As shown in Figure <ref>, the hybrid transmitter outperforms the others by better exploiting the signal resources.
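For readers wishing to reproduce such curves with open-source tooling, the sketch below poses the formulated sum throughput maximization in CVXPY, as a stand-in for the MATLAB CVX toolbox used above. It fixes ρ (an outer sweep over ρ then approximates the joint optimum), assumes E^H_n ≥ E_C so that max[0,·] can be dropped, and expresses the rate term through the concave identity β log(1+x/β) = -rel_entr(β, β+x); these modeling choices are assumptions of the sketch, not details taken from the article.

```python
import cvxpy as cp
import numpy as np

# Constants follow the numerical section where stated; the rest are assumptions.
N, tau, rho = 2, 0.7, 0.5            # two STs; busy period; fixed EH fraction
W, eta, sigma2 = 100e3, 0.6, 1e-10   # bandwidth (Hz), tx efficiency, noise (W)
P_T, mu = 1.0, 0.15                  # PT power (W), RF-to-DC efficiency
R_b = np.full(N, 100e3)              # backscatter rates (bps)
R_t, P_bar = 10e3, 1.0               # QoS threshold (bps), max tx power (W)
E_C = 10 ** (-25 / 10) * 1e-3        # circuit consumption, -25 dBm in watts

G, lam = 10 ** (6 / 10), 3e8 / 900e6
friis = lambda d: G * G * lam ** 2 / (4 * np.pi * d) ** 2
h = friis(np.array([11.0, 11.0]))    # PT -> ST gains
g = friis(np.array([2.0, 2.5]))      # ST -> SAP gains

alpha = cp.Variable(N, nonneg=True)
beta = cp.Variable(N, nonneg=True)

E_H = P_T * mu * tau * cp.multiply(h, rho + (1 - rho) * (cp.sum(alpha) - alpha))
x = cp.multiply(g, E_H - E_C) / sigma2    # SNR numerator, affine in alpha
bp = (1 - tau) * beta                     # active-transmission time fractions
# bp * log2(1 + x / bp) written as -rel_entr(bp, bp + x) / ln(2), which is concave.
active = eta * W * (-cp.rel_entr(bp, bp + x)) / np.log(2)
throughput = (1 - rho) * tau * cp.multiply(alpha, R_b) + active

prob = cp.Problem(
    cp.Maximize(cp.sum(throughput)),
    [throughput >= R_t,          # per-ST QoS requirement
     E_H - E_C <= P_bar * bp,    # transmit power limit
     E_H >= E_C,                 # justifies dropping max[0, .]
     cp.sum(alpha) == 1, cp.sum(beta) == 1],
)
prob.solve()                     # needs an exponential-cone solver, e.g., SCS
print(prob.value, alpha.value, beta.value)
```

With these elements in place, the per-scheme behavior in the figure can be examined in detail.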
In particular, the sum throughput of the backscatter transmitters remains constant because they transmit passively only when the PT is transmitting. By contrast, the sum throughput of the wireless powered transmitters gradually increases with the number of STs because more energy is harvested by more STs for active transmission. Indeed, the wireless powered transmitters achieve more active transmission throughput than HT-WPT. However, by exploiting ambient backscattering, the sum throughput obtained by the hybrid transmitters is greater than that of the wireless powered transmitters. Interestingly, under our parameter setting, the sum backscattering throughput of the STs remains unchanged (saturated) when there are three or more STs. This is because, with three or more STs, an ST is scheduled to harvest energy only when another is backscattering (i.e., ρ=0). As the number of STs increases, the sum throughput of the hybrid transmitters and that of the wireless powered transmitters level off because resource utilization saturates. § FUTURE DIRECTIONS AND OPEN ISSUES In this section, we discuss some future research directions for the proposed ambient backscatter assisted wireless powered communication network. §.§.§ Integration of Ambient Backscatter with RF-powered Relay Similar to the idea of the hybrid transmitter, incorporating ambient backscatter capability into an RF-powered relay network would also be an interesting research direction. The operation of such a relay network would additionally include scheduling and resource allocation for the relay links. §.§.§ Full-Duplex Wireless Powered Communications With full-duplex operation, our hybrid transmitter can perform energy harvesting and active wireless transmission simultaneously <cit.>. With this implementation, the optimal control of the hybrid transmitter needs to be obtained by taking into account the extra benefit of full-duplex operation. §.§.§ Multiple Access Scheme for the Hybrid Transmitters with Multiple Channels in Cognitive Radio Networks The performance of the hybrid transmitters can potentially be improved in a network with multiple PTs and multiple licensed channels, because there are richer signal resources for energy harvesting and more spectrum holes to allow concurrent active data transmissions. This dynamic environment complicates the coordination among the hybrid transmitters, as it introduces an additional tradeoff among energy harvesting, backscattering, active transmission, and channel selection. §.§ Open Issues for Ambient Backscatter Communications §.§.§ Anti-Collision Mechanism Due to hardware limitations, a backscatter device may not be capable of detecting the transmissions of its peers. Consequently, serious data collisions can occur, which will deteriorate network performance substantially. Potential solutions include collision avoidance mechanisms, more accurate collision recovery schemes, and more efficient bit-rate adaptation to mitigate interference. §.§.§ Multilevel Modulation The existing implementation of 16-PSK in the prototype <cit.> working in a Wi-Fi environment has already demonstrated the applicability of multilevel modulation to ambient backscatter communications. More advanced multilevel modulations, e.g., QAM and spread spectrum, can be explored. §.§.§ Mesh Networking Ambient backscatter communications have the potential to support diverse network topologies (e.g., mesh and tree structures).
Therefore, examining ambient backscatter performance in mesh networking, both experimentally and theoretically, will be important. Additionally, efforts need to be made to develop multi-hop relaying protocols under various conditions. § CONCLUSIONS Ambient backscatter communication technology, emerging as an integration of wireless backscatter technology and RF energy harvesting based on existing radio resources, enables sustainable and ubiquitous connections among small devices, paving the way for the development of the IoT. In this article, we have proposed and evaluated a hybrid transmitter that combines ambient backscatter and wireless powered communication capabilities. Through numerical simulation and comparison, we have demonstrated the superiority of the hybrid transmitter. With self-sustainability and licensed-channel-free operation, our design is suitable for low-power data communication applications. The tradeoff among energy harvesting, active transmission, and ambient backscatter communication has been studied. Future research issues that can be further addressed have also been discussed. B.2014Kellogg B. Kellogg, A. Parks, S. Gollakota, J. R. Smith, and D. Wetherall, “Wi-Fi Backscatter: Internet Connectivity for RF-powered Devices," in Proc. of the 2014 ACM Conference on SIGCOMM, Chicago, IL, August 2014. XLuSurvey X. Lu, P. Wang, D. Niyato, D. I. Kim, and Z. Han, “RF Energy Harvesting Network: A Contemporary Survey," IEEE Communications Surveys and Tutorials, vol. 17, no. 2, pp. 757-789, Second Quarter 2015. X.2015Lu X. Lu, P. Wang, D. Niyato, and Z. Han, “Resource Allocation in Wireless Networks with RF Energy Harvesting and Transfer," IEEE Network, vol. 29, no. 6, pp. 68-75, Dec. 2015. V.2013Liu V. Liu, A. Parks, V. Talla, S. Gollakota, D. Wetherall, and J. R. Smith, “Ambient Backscatter: Wireless Communication Out of Thin Air," in Proc. of the 2013 ACM Conference on Special Interest Group on Data Communication (SIGCOMM), Hong Kong, China, August 2013. D.2012Miorandi D. Miorandi, S. Sicari, F. D. Pellegrini, and I. Chlamtac, “Internet of things: Vision, applications and research challenges," Ad Hoc Networks, vol. 10, no. 7, pp. 1497-1516, September 2012. X.2017LuVTC X. Lu, H. Jiang, D. Niyato, D. I. Kim, and P. Wang, “Analysis of Wireless-powered Device-to-Device Communications with Ambient Backscattering," in Proc. of IEEE VTC 2017-Fall, Toronto, Canada, September 2017. J.2014Kimionis J. Kimionis, A. Bletsas, and J. N. Sahalos, “Increased Range Bistatic Scatter Radio," IEEE Trans. Commun., vol. 62, no. 3, pp. 1091-1104, March 2014. D.2016Assimonis S. D. Assimonis, S.-N. Daskalakis, and A. Bletsas, “Sensitive and Efficient RF Harvesting Supply for Batteryless Backscatter Sensor Networks," IEEE Transactions on Microwave Theory and Techniques, vol. 64, no. 4, pp. 1327-1338, March 2016. V.2010Plessky V. Plessky and L. Reindl, “Review on SAW RFID Tags," IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control, vol. 57, no. 3, pp. 654-668, March 2010. J.2010Yin J. Yin, J. Yi, M. K. Law, Y. Ling, M. C. Lee, K. P. Ng, B. Gao, H. C. Luong, A. Bermak, M. Chan, W.-H. Ki, C.-Y. Tsui, and M. Yuen, “A System-on-Chip EPC Gen-2 Passive UHF RFID Tag With Embedded Temperature Sensor," IEEE Journal of Solid-State Circuits, vol. 45, no. 11, pp. 2404-2420, October 2010. X.Lu2016 X. Lu, P. Wang, D. Niyato, D. I. Kim, and Z. Han, “Wireless Charging Technologies: Fundamentals, Standards, and Network Applications," IEEE Communications Surveys and Tutorials, vol. 18, no. 2, pp.
1413-1452, Second Quarter 2016. D.2015Bharadia D. Bharadia, K. R. Joshi, M. Kotaru, and S. Katti, “BackFi: High Throughput WiFi Backscatter," in Proc. of the 2015 ACM Conference on Special Interest Group on Data Communication (SIGCOMM), London, UK, August 2015. N.2014Parks A. N. Parks, A. Liu, S. Gollakota, and J. R. Smith, “Turbocharging Ambient Backscatter Communication," in Proc. of the 2014 ACM Conference on SIGCOMM, Chicago, IL, August 2014. D.1605.04805Darsena D. Darsena, G. Gelli, and F. Verde, “Modeling and Performance Analysis of Wireless Networks with Ambient Backscatter Devices," IEEE Transactions on Communications, vol. 65, no. 4, pp. 1797-1814, Jan. 2017. D.2010Dardari D. Dardari, R. D'Errico, C. Roblin, A. Sibille, and M. Z. Win, “Ultrawide Bandwidth RFID: The Next Generation?," Proceedings of the IEEE, vol. 98, no. 9, pp. 1570-1582, September 2010. J.2012Thomas S. J. Thomas and M. S. Reynolds, “A 96 Mbit/sec, 15.5 pJ/bit 16-QAM Modulator for UHF Backscatter Communication," in Proc. of IEEE International Conference on RFID (RFID), Orlando, FL, April 2012. A.2015Shirane A. Shirane, Y. Fang, H. Tan, T. Ibe, H. Ito, N. Ishihara, and K. Masu, “RF-Powered Transceiver With an Energy- and Spectral-Efficient IF-Based Quadrature Backscattering Transmitter," IEEE Journal of Solid-State Circuits, vol. 50, no. 12, pp. 2975-2987, August 2015. I.May2015Flint I. Flint, X. Lu, N. Privault, D. Niyato, and P. Wang, “Performance Analysis of Ambient RF Energy Harvesting with Repulsive Point Process Modelling," IEEE Transactions on Wireless Communications, vol. 14, no. 10, pp. 5402-5416, May 2015. X.March2015Lu X. Lu, I. Flint, D. Niyato, N. Privault, and P. Wang, “Performance Analysis for Simultaneously Wireless Information and Power Transfer with Ambient RF Energy Harvesting," in Proc. of IEEE WCNC, New Orleans, LA, USA, March 2015. I.December2014Flint I. Flint, X. Lu, N. Privault, D. Niyato, and P. Wang, “Performance Analysis of Ambient RF Energy Harvesting: A Stochastic Geometry Approach," in Proc. of IEEE Globecom, Austin, USA, December 2014. J.2015Kim Y.-J. Kim, H. S. Bhamra, J. Joseph, and P. P. Irazoqui, “An Ultra-Low-Power RF Energy-Harvesting Transceiver for Multiple-Node Sensor Application," IEEE Transactions on Circuits and Systems II: Express Briefs, vol. 62, no. 11, pp. 1028-1032, November 2015. X.May2016Lu X. Lu, I. Flint, D. Niyato, N. Privault, and P. Wang, “Self-Sustainable Communications with RF Energy Harvesting: Ginibre Point Process Modeling and Analysis," IEEE Journal on Selected Areas in Communications (JSAC), vol. 34, no. 5, pp. 1518-1535, May 2016. D.2015Niyato D. Niyato, P. Wang, D. I. Kim, Z. Han, and X. Lu, “Performance Analysis of Delay-Constrained Wireless Energy Harvesting Communication Networks under Jamming Attacks," in Proc. of IEEE WCNC, New Orleans, LA, USA, March 2015. X.June2014Lu X. Lu, P. Wang, D. Niyato, and E. Hossain, “Dynamic Spectrum Access in Cognitive Radio Networks with RF Energy Harvesting," IEEE Wireless Communications, vol. 21, no. 3, pp. 102-110, June 2014. D.May2016Niyato D. Niyato, X. Lu, P. Wang, D. I. Kim, and Z. Han, “Distributed Wireless Energy Scheduling for Wireless Powered Sensor Networks," in Proc. of IEEE ICC, Kuala Lumpur, Malaysia, 23-27 May 2016. D.June2015Niyato D. Niyato, P. Wang, D. I. Kim, Z. Han, and X. Lu, “Game Theoretic Modeling of Jamming Attack in Wireless Powered Networks," in Proc. of IEEE ICC, London, UK, June 2015. X.April2015Lu X. Lu, P. Wang, D. Niyato, D. I. Kim, and Z.
Han, “Wireless Charger Networking for Mobile Devices: Fundamentals, Standards, and Applications," IEEE Wireless Communications, vol. 22, no. 2, pp. 126-135, April 2015. Y.2015Zeng Y. Zeng and R. Zhang, “Full-duplex Wireless-powered Relay with Self-energy Recycling," IEEE Wireless Communications Letters, vol. 4, no. 2, pp. 201-204, April 2015. J.2012ThomasQAM S. J. Thomas, E. Wheeler, J. Teizer, and M. S. Reynolds, “Quadrature Amplitude Modulated Backscatter in Passive and Semipassive UHF RFID Systems," IEEE Transactions on Microwave Theory and Techniques, vol. 60, no. 4, pp. 1175-1182, February 2012. J.2016Mao J. Mao, Z. Zou, and L. Zheng, “An UWB-based Sensor-to-Time Transmitter for RF-powered Sensing Applications," to appear in IEEE Transactions on Circuits and Systems II: Express Briefs. K.2003Finkenzeller K. Finkenzeller, “RFID Handbook: Fundamentals and Applications in Contactless Smart Cards and Identification," 2nd ed. New York: John Wiley and Sons, 2003. K.2015Lu K. Lu, G. Wang, F. Qu, and Z. Zhong, “Signal detection and BER analysis for RF-powered devices utilizing ambient backscatter," in Proc. of International Conference on Wireless Communications & Signal Processing (WCSP), Nanjing, China, Oct. 2015. A.2008Hande A. Hande, R. Bridgelall, and D. Bhatia, “Energy harvesting for active RF sensors and ID tags," in Energy Harvesting Technologies. Berlin, Germany: Springer, 2008, ch. 18. Z.2015Ma Z. Ma, T. Zeng, G. Wang, and F. Gao, “Signal Detection for Ambient Backscatter System with Multiple Receiving Antennas," in Proc. of IEEE 14th Canadian Workshop on Information Theory (CWIT), St. John's, NL, July 2015. C2014Barott W. C. Barott, “Coherent Backscatter Communications Using Ambient Transmitters and Passive Radar Processing," in Proc. of National Wireless Research Collaboration Symposium (NWRCS), Idaho Falls, ID, May 2014. P.2015Hu P. Hu, P. Zhang, and D. Ganesan, “Laissez-Faire: Fully Asymmetric Backscatter Communication," in Proc. of the 2015 ACM Conference on Special Interest Group on Data Communication (SIGCOMM), August 2015. P.Hu2014 P. Hu, P. Zhang, and D. Ganesan, “Leveraging interleaved signal edges for concurrent backscatter," in Proc. of the 1st ACM Workshop on Hot Topics in Wireless (HotWireless), September 2014. V.2014Liu V. Liu, V. Talla, and S. Gollakota, “Enabling instantaneous feedback with full-duplex backscatter," in Proc. of the 20th Annual International Conference on Mobile Computing and Networking (MobiCom), September 2014. D.2009Griffin J. D. Griffin, “High-frequency modulated-backscatter communication using multiple antennas," ProQuest, 2009. F.2010Iannello F. Iannello, O. Simeone, and U. Spagnolini, “Energy Management Policies for Passive RFID Sensors with RF-Energy Harvesting," in Proc. of IEEE International Conference on Communications (ICC), Cape Town, South Africa, May 2010. P.2013Sample A. P. Sample, A. N. Parks, S. Southwood, and J. R. Smith, “Wireless Ambient Radio Power," in Wirelessly Powered Sensor Networks and Computational RFID, pp. 223-234. New York: Springer, 2013. F.2010Guidi F. Guidi, D. Dardari, C. Roblin, and A. Sibille, “Backscatter Communication Using Ultrawide Bandwidth Signals for RFID Applications," in The Internet of Things, pp. 251-261, January 2010. R.2015Correia R. Correia, N. Borges Carvalho, and S. Kawasaki, “Backscatter Radio Coverage Enhancements Using Improved WPT Signal Waveform," in Proc. of IEEE Wireless Power Transfer Conference (WPTC), Boulder, CO, May 2015. D.2011Fernandes R. D. Fernandes, N. B. Carvalho, and J. N.
Matos, “Design of a Battery-free Wireless Sensor Node," in Proc. of International Conference on Computer as a Tool (EUROCON), Lisbon, Portugal, April 2011. A.2015S A. S. Boaventura and N. B. Carvalho, “Evaluation of Simultaneous Wireless Power Transfer and Backscattering Data Communication Through Multisine Signals," in Proc. of IEEE Wireless Power Transfer Conference (WPTC), Boulder, CO, May 2015. J.2013Grosinger J. Grosinger, “Feasibility of Backscatter RFID Systems on the Human Body," EURASIP Journal on Embedded Systems, Dec. 2013. S.2012Trotter M. S. Trotter, C. R. Valenta, G. A. Koo, B. R. Marshall, and G. D. Durgin, “Multi-antenna Techniques for Enabling Passive RFID Tags and Sensors at Microwave Frequencies," in Proc. of IEEE International Conference on RFID (RFID), Orlando, FL, April 2012. J.2013Thomas S. J. Thomas, T. Deyle, R. Harrison, and M. S. Reynolds, “Rich-Media Tags: Battery-free Wireless Multichannel Digital Audio and Image Transmission with UHF RFID Techniques," in Proc. of IEEE International Conference on RFID (RFID), Penang, Malaysia, April 2013. K.2016Huang K. Huang, C. Zhong, and G. Zhu, “Some New Trends in Wirelessly Powered Communications." S.2014Gollakota S. Gollakota, M. S. Reynolds, J. R. Smith, and D. J. Wetherall, “The Emergence of RF-Powered Computing," Computer, vol. 47, no. 1, pp. 32-39, Jan. 2014.
http://arxiv.org/abs/1709.09615v1
{ "authors": [ "Xiao Lu", "Dusit Niyato", "Hai Jiang", "Dong In Kim", "Yong Xiao", "Zhu Han" ], "categories": [ "cs.NI" ], "primary_category": "cs.NI", "published": "20170927164228", "title": "Ambient Backscatter Networking: A Novel Paradigm to Assist Wireless Powered Communications" }
[email protected] ^1Department of Physics, University of Maryland Baltimore County, Baltimore, MD 21250, USA^2 Department of Physics, Virginia Tech, Blacksburg, VA 24061, USASinglet-triplet qubits in lateral quantum dots in semiconductor heterostructures exhibit high-fidelity single-qubit gates via exchange interactions and magnetic field gradients. High-fidelity two-qubit entangling gates are challenging to generate since weak interqubit interactions result in slow gates that accumulate error in the presence of noise. However, the interqubit electrostatic interaction also produces a shift in the local double well detunings, effectively changing the dependence of exchange on the gate voltages.We consider an operating point where the effective exchange is first order insensitive to charge fluctuations while maintaining nonzero interactions.This “sweet spot" exists only in the presence of interactions.We show that working at the interacting sweet spot can directly produce maximally entangling gates and we simulate the gate evolution under realistic 1/f noise. We report theoretical two-qubit gate fidelities above 99% in GaAs and Si systems.A robust operating point for capacitively coupled singlet-triplet qubits J. P. Kestner^1 December 30, 2023 ========================================================================§ INTRODUCTIONThe singlet-triplet qubit <cit.> is an attractive platform for quantum information processing due to its fast single-qubit operations <cit.> and extended coherence times <cit.>. Voltage gates “detune" the double quantum dot (DQD), i.e., adjust the energy difference ε between the two minima of the DQD potential, driving rotations around the z axis of the Bloch sphere via the exchange interaction J(ε), and rotations about the x axis are induced by a magnetic field difference across the DQD, h. Two-qubit entangling operations can be achieved via capacitive coupling <cit.> or interqubit exchange interaction <cit.>. Here, we consider the former because they have been demonstrated experimentally<cit.> and are naturally robust to leakage outside of the logical subspace. The primary source of error is fluctuation of the detuning due to charge noise in the device, inhibiting the singlet-triplet qubit from performing at fault tolerant levels. Single-qubit gates can be fast, with a π-rotation about the z axis demonstrated in 350 ps <cit.>, and precise, with 99% fidelity <cit.>. However, the relatively weak capacitive interaction generates two-qubit gates that are much slower, 140 ns for a controlled-phase (cphase) gate<cit.>, and these slow gates accumulate substantial errors in the presence of noise, with fidelities yet to exceed 90% <cit.>. Strategies such as dynamical decoupling<cit.>, pulse shaping <cit.>, composite pulse sequences <cit.>, and control tuning using iterative experimental feedback <cit.> can improve the fidelity of gating in the presence of noise. These methods are particularly effective against noise that fluctuates on timescales much longer than the time required to complete the quantum operation (i.e., the gate time). For the slower two-qubit gates, however, high-frequency charge noise is difficult to suppress. 
An alternative approach is to use a robust operating point in control parameter space, often called a “sweet spot," where the Hamiltonian is insensitive to certain perturbations and hence the effect of fluctuations of any frequency is reduced.The remarkable recent progress of superconducting qubits can be largely attributed to the introduction of the transmon sweet spot <cit.>.In this Rapid Communication we introduce such a sweet spot for two coupled singlet-triplet qubits. Previous investigations of sweet spots in singlet-triplet qubits have mainly focused on a single, isolated qubit <cit.>, in which case the sweet spot previously discussed is not appropriate for capacitive coupling and the sweet spot we present does not exist. Where the case of interacting qubits has been considered <cit.>, the focus was on the robustness of the coupling term in the Hamiltonian. However, the primary contribution to the error during a two-qubit gate is from fluctuations of the strong local terms in the Hamiltonian rather than in the weak coupling term itself. In the present work, using harmonic oscillator basis functions in a Hund-Mulliken (HM) model for the singlet-triplet two-qubit Hamiltonian <cit.>, we report an interacting sweet spot where the local effective exchange terms are insensitive to fluctuations in the detunings caused by charge noise, while maintaining a nonzero two-qubit coupling. Our results are meant to be taken qualitatively, as computational methods such as exact diagonalization<cit.> or a full configuration interaction<cit.> would be necessary for quantitative precision. However, since the experimental potential profile is typically not known precisely anyway, a qualitative approach is not inappropriate and provides a starting point for experimental fine tuning. While the sweet spot only suppresses charge noise, it can be combined with standard echo pulses to also mitigate magnetic field gradient noise, thus producing high-fidelity two-qubit entangling gates. We perform numerical simulations of performance at the interacting sweet spot in the presence of realistic noise with parameters typical for GaAs and Si devices, and find that fidelities above 99% are achievable simply by choosing the operating parameters wisely.§ SWEET SPOT ANALYSISThe Hamiltonian for two capacitively coupled singlet-triplet qubits in a linear four-dot array is written using a HM approximation<cit.>, where the two-qubit Hilbert space is spanned by products of the unpolarized triplet state |T_0⟩ and the hybridized singlet state |S̃⟩ formed by the lowest harmonic oscillator orbitals centered at each minima, where the singlet contains a small admixture of a doubly occupied orbital controlled by the detuning of the corresponding DQD, ε_i. In the basis {|S̃S̃⟩,|S̃T_0⟩,|T_0S̃⟩,|T_0T_0⟩},H(ε_1,ε_2,h_1,h_2) = ∑_i=1^2 [(J_i(ε_i)/2-β_i(ε_1,ε_2))σ_z^(i)+ h_i/2σ_x^(i)] + α(ε_1,ε_2)σ_z^(1)σ_z^(2),where σ⃗^(i) are the Pauli operators for qubit i, J_i is the local exchange splitting, and the interqubit Coulomb interactions contribute both a local shift in the effective exchange due to a monopole-dipole interaction, β_i(ε_1,ε_2), and a non-local term due to a dipole-dipole interaction α(ε_1,ε_2). The magnetic field difference between the two wells of each DQD is h_i, which, in GaAs, originates from an inhomogeneous Overhauser field, h_i=h≈ 2π× 30 MHz<cit.>, while in Si comparable values can be realized with integrated micromagnets,h_i=h≈ 2π× 15 MHz<cit.>. 
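For concreteness, the two-qubit Hamiltonian of Eq. (<ref>) can be assembled numerically in a few lines. The sketch below uses arbitrary illustrative values for J_i, β_i, h_i, and α (the HM expressions for these quantities are not reproduced here), which is convenient for inspecting spectra or simulating gate evolutions.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)   # Pauli x
sz = np.array([[1, 0], [0, -1]], dtype=complex)  # Pauli z
I2 = np.eye(2, dtype=complex)

def hamiltonian(J, beta, h, alpha):
    """Two-qubit Hamiltonian in the basis {|SS>, |ST0>, |T0S>, |T0T0>}:
    sum_i [(J_i/2 - beta_i) sz_i + (h_i/2) sx_i] + alpha * sz_1 sz_2."""
    ops_z = [np.kron(sz, I2), np.kron(I2, sz)]
    ops_x = [np.kron(sx, I2), np.kron(I2, sx)]
    H = np.zeros((4, 4), dtype=complex)
    for i in range(2):
        H += (J[i] / 2 - beta[i]) * ops_z[i] + (h[i] / 2) * ops_x[i]
    return H + alpha * np.kron(sz, sz)

# Toy values in arbitrary energy units (illustrative assumptions only).
H = hamiltonian(J=[1.0, 1.0], beta=[0.1, 0.1], h=[0.19, 0.19], alpha=0.02)
print(np.linalg.eigvalsh(H))
```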
We use the convention that ε = 0 corresponds to a symmetric double well and ε > 0 raises (lowers) the inner (outer) of the two dots. Pulsing both qubits to positive ε then corresponds to tilting the DQDs away from each other. Charge noise enters the Hamiltonian through J, β, and α terms since they are controlled electrically via ε_1 and ε_2. It has been found empirically in single-qubit experiments that the exchange splitting increases roughly exponentially with detuning, J_i(ε_i) ∝ J'_i(ε_i) <cit.>. Thus, while large detuning generates fast gates, the sensitivity to charge noise, proportional to J'_i(ε_i), also increases with detuning. However, when the Coulomb interaction from the neighboring qubit is considered, the effective exchange for qubit i, J_eff,i(ε_1,ε_2) ≡ J_i(ε_i)/2 - β_i(ε_1,ε_2), can markedly deviate from simple exponential behavior due to the monopole-dipole interaction <cit.>, as shown in Fig. <ref>a.To model experiments in GaAs<cit.> we take relative permittivity κ = 13.1ϵ_0, effective electron mass m^* = 0.067m_e, confinement energy of the quantum dots ħω_0 = 1meV (hence, an effective Bohr radius a_B=√(ħ/m^*ω_0)≈ 34nm), and intraqubit distance 2a=5a_B (a denoting the distance from center to minima of the DQD in the (1,1) charge configuration). Appropriately sized harmonic oscillator orbitals give an on-site interaction energy U=4.1meV.The tunneling rate, t_0, is computed assuming a quartic DQD potential <cit.>, resulting in t_0=7μeV. The resulting exchange model is near the limit of the validity of the HM approximation, as can be tested by checking the monotonicity of exchange with intraqubit distance <cit.>, but the qualitative physics is still safely captured.In Si, recent overlapping Al gate layering techniques have produced more densely packed dots with size a_B = 29nm and spacing 2a≃ 3.5 a_B <cit.>.Using these parameters, κ = 11.68ϵ_0, and m^* = 0.19m_e, one estimates U= 5.3meV.Again assuming a quartic DQD potential would produce a tunneling rate outside the validity of HM, but since the tunneling barrier can be independently tuned <cit.>, it is more reasonable anyways to directly set the tunneling parameter to an experimentally reported value. We take t_0 = 40μeV <cit.>, which allows a good approximation for an intraqubit distance of 2a = 160nm.The sensitivity of the Hamiltonian to charge noise is quantified by the Frobenius norm of the gradient on the ε_1–ε_2 plane,∇⃗ H(ε_1,ε_2)≃√(∑_{i,j}=1^2 (∂ J_eff,i/∂ε_j)^2),where we have omitted the derivatives of α because they are orders of magnitude smaller, as also noted experimentally<cit.>, and their effect is negligible. (This makes Eq. (<ref>) easier to measure experimentally too.) Numerically minimizing this function using the HM form of J_eff,i derived in Ref. Fernando2015, we find previously unreported minima that reduce the sensitivity by orders of magnitude.The locations of these interacting sweet spots depend on the interqubit distance 2R (i.e., the center-to-center distance between the DQDs in the (1,1,1,1) charge configuration), but always lie on the ε_1=ε_2 diagonal line in our search space. Therefore, from here on, we only consider symmetric operating points ε_1=ε_2=ε (though the fluctuations in ε_i are not restricted to be symmetric). At the sweet spot, ε_ss, J_eff'(ε_ss) ≈ 0 while α(ε_ss) ≠ 0 [Note that Ref. Ramon2011 also discusses a sweet spot resulting from the β terms, but it is where J_eff=0 rather than where its derivative is zero, so it is not the same in that it is not a robust operating point.]. 
This only occurs when interqubit interactions are present and, if one stays predominately in the (1,1,1,1) configuration, only when the qubits are tilted away from each other (i.e., in a “breathing" mode rather than a “sloshing" mode). Other sweet spots may exist at larger, possibly asymmetric, detunings, but we cannot use HM to explore those regions.In order to capture the effect of charge noise on an entangling gate, we define the two-qubit insensitivity in analogy to the single-qubit case of Ref. Reed2016 as something roughly akin to the rate of entanglement divided by the rate of decoherence,ℐ(ε) = α(ε)/∇⃗ H(ε).Fig. <ref>b shows that the insensitivity, though finite, increases by orders of magnitude at an interacting sweet spot.The HM approximation can break down at high detuning if the S(0,2) probability is large, so we restrict detunings such that this probability is below ħω_0/U (which is why the GaAs curves in Fig. <ref> appear truncated). For Si, two sweet spots are valid within this range, however, we focus on the lower one because it is deeper in the regime of HM applicability.The location of the interacting sweet spot depends on the interqubit separation, 2R, as shown in Figs. <ref>a and <ref>b. For the parameters given above, we find no interacting sweet spots at interqubit distances greater than 2R_max = 1544nm (674nm) for GaAs (Si). We do not consider interqubit distances less than four times the intraqubit distance, 2R_min=4a, so as to safely neglect tunneling between adjacent DQDs.Operating at these new sweet spots is useful because it generates an entangling operation while providing protection against charge noise. The entangling power can be quantified as<cit.>ep(U) = 2/9(1-|tr^2[(Q^†UQ)^TQ^†UQ]/16|),where Q is the transformation from the logical basis to the Bell basis<cit.>. One can always construct a controlled-NOT (cnot) gate from at most two applications of any maximally entangling gate <cit.>. We numerically search for the shortest time τ required to generate a maximally entangling gate in a single square pulse of the detunings to the sweet spot, ε_ss.The results are shown in Figs. <ref>c and <ref>d. For our GaAs parameters, there is a region of interqubit distances where no sweet spot gate can directly generate more than 99% of the maximal entangling power. For our Si parameters, there is no such excluded region, and gate times are generally faster in Si due to the smaller distance scale. Note that gate time actually decreases as the qubits are moved farther apart. Although this may seem counterintuitive at first, it can be understood by noting that increasing the interqubit distance moves the interacting sweet spot to stronger detuning.Since τ∝ 1/α(ε), and the nonlocal coupling α(ε) increases exponentially with the detuning (since α(ε) ∝ J_1(ε_1)J_2(ε_2) has been shown empirically<cit.>) but, as an electrostatic term, only decreases polynomially with distance, if we restrict ourselves to operations at the sweet spot, the gate time will decrease as the interqubit distance increases.§ SIMULATIONSThere are two main noise sources for singlet-triplet qubits: fluctuations in the magnetic field gradient, δ h_i, and fluctuations in the detunings, δε_i. We now simulate the evolution of the two-qubit singlet-triplet system when targeting maximally entangling gates at the interacting sweet spot. The fluctuations in h are predominantly a low-frequency noise source, with its power spectral density (PSD) S_h(ω)∝ 1/ω^2.6 <cit.>. 
We thus model the noise in h as quasistatic with a standard deviation of 8 neV <cit.> (4.2 neV<cit.>) in GaAs (Si). Detuning fluctuations due to charge noise also contain a quasistatic contribution, δε_i^(QS), with a standard deviation of 8 μV × (1 eV/9.4 V) (6.4 μeV) in GaAs<cit.> (Si<cit.>), but in addition include higher frequency noise, δε_i^(1/f), with a PSD S_ε(ω)∝ 1/ω^0.7 <cit.>. We generate this “1/f" noise by superimposing random telegraph noise (RTN) traces with a range of switching rates ν and varying amplitudes (1/2ν)^((0.7-2)/2). The total PSD is then computed in order to scale the noise to the experimentally reported magnitude, 0.09 neV/√(Hz) (10.04 neV/√(Hz)) at 1 MHz for GaAs<cit.> (Si<cit.>). The resulting PSD is shown in Fig. <ref>. In GaAs, the noise PSD has been measured to be proportional to 1/ω^0.7 for a range 50 kHz < ω < 1 GHz<cit.>. For ease of computation when solving for the evolution operator, we assume that the effect of the 1/f noise on the evolution operator of a gate of time length τ is predominantly from the frequency band ranging from 1/10τ to 10/τ. We absorb noise at lower frequencies into the quasistatic contribution and we ignore noise at higher frequencies since we confirmed numerically that it is too fast compared to the entanglement dynamics of the evolution to have a noticeable influence, essentially averaging itself out. We then numerically construct the 1/f noise from ten RTNs with switching rates logarithmically distributed uniformly in this range. The generated noise is ∝ 1/ω^0.7 in the relevant bandwidth and is Lorentzian elsewhere. Fig. <ref> shows an example of the noise generated in this way when τ=100ns and τ=10μs. The noisy gate is constructed by computing the time ordered exponential of Eq. (<ref>) where ε_i(t) = ε_ss + δε_i^(QS) + δε_i^1/f(t) and h_i = h + δ h_i, resulting in U'. We then compare U' to the same operation in the absence of noise U using the averaged two-qubit fidelity defined in Ref. Cabrera2007. While the sweet spot provides protection against charge fluctuations, it does not suppress δ h_i noise at all. However, the quasistatic nature of the magnetic field noise allows its effects to be suppressed with standard echo techniques. For example, a π-pulse about y applied to both qubits halfway through the maximally entangling gate operation suppresses (though does not completely remove) the errors due to both δ h and the DC component of δ J since it anticommutes with the dominant error terms produced by those noises, yet preserves the entanglement because it commutes with the non-local interaction. We compute gate fidelities both with and without the π-pulses at the halfway point, assuming errorless single qubit gates in our simulations, a reasonable approximation since single-qubit fidelities near 99% are accessible<cit.>. The results are shown in Figs. <ref>a and <ref>b. For our GaAs parameters, the magnetic field noise is much stronger than the residual charge noise coupled in through higher order derivatives at the sweet spot, so unless a pulse sequence is used to suppress magnetic noise, the optimal strategy is simply to use a sweet spot at the largest possible R, resulting in the shortest gate times, so as to reduce the accumulation of magnetic noise. However, if a simple echo is used to reduce the magnetic noise, its residual effects become comparable to the residual charge noise.
Reducing the interqubit distance decreases higher order sensitivity to charge perturbations at the cost of increasing the time over which the residual magnetic field noise accumulates, a tradeoff resulting in an optimal distance for a sweet spot operation of about 1 μm, corresponding to a gate time of about 950ns, and yielding a fidelity of 99.96%. For our Si parameters, due to the generally faster gate times and larger charge noise, the two types of error are already comparable without using a pulse sequence, and the tradeoff results in an optimal distance of 2R≈ 324nm, corresponding to τ≈ 830ns and a fidelity near 86%. Once a pulse sequence is used to further suppress magnetic noise, residual charge noise dominates and the optimal strategy is simply to use the sweet spot at the smallest possible R, resulting in the smallest second derivatives. Then one obtains fidelities of up to 99.86%.§ CONCLUSIONUsing the Hund-Mulliken model for two capacitively coupled singlet-triplet qubits, we have found a symmetric outward detuning, called the interacting sweet spot, where the effective exchange is insensitive to charge noise. This is particularly useful for combatting high-frequency noise that is difficult to correct with existing standard pulse sequences. We simulate the evolution at the sweet spot under realistic charge and magnetic field noise and show maximally entangling gates above the current 90% fidelity<cit.> are accessible. By using this interacting sweet spot to perform two-qubit gates and a noninteracting sweet spot <cit.> to perform the single-qubit gates, one can ensure that, with the exception of the ramping times, the qubits remain protected against charge noise for the entire duration of the computation.§ ACKNOWLEDGMENTSThis material is based upon work supported by the National Science Foundation under Grant No. 1620740, and addition of the Si results was supported by the Army Research Office (ARO) under Grant Number W911NF-17-1-0287. MAW acknowledges support from the UMBC Office of Undergraduate Education through an Undergraduate Research Award.apsrev4-1
http://arxiv.org/abs/1709.09165v2
{ "authors": [ "M. A. Wolfe", "F. A. Calderon-Vargas", "J. P. Kestner" ], "categories": [ "cond-mat.mes-hall", "quant-ph" ], "primary_category": "cond-mat.mes-hall", "published": "20170926175914", "title": "A robust operating point for capacitively coupled singlet-triplet qubits" }
/ł#1 #1#1 #1#1 #1#1 #1Schulich Faculty of Chemistry, Technion-Israel Institute of Technology, Haifa 32000, IsraelWe study the stochastic thermodynamics of resetting systems. Violation of microreversibility means that the well known derivations of fluctuations theorems break down for dynamics with resetting. Despite that we show that stochastic resetting systems satisfy two integral fluctuation theorems. The first is the Hatano-Sasa relation describing the transition between two steady states. The second integral fluctuation theorem involves a functional that includes both dynamical and thermodynamic contributions. We find that the second law-like inequality found by Fuchset al. for resetting systems [ EPL, 113, (2016)] can be recovered from this integral fluctuation theorem with the help of Jensen's inequality. Integral Fluctuation Theorems for Stochastic Resetting Systems Arnab Pal and Saar Rahav December 30, 2023 ==============================================================§ INTRODUCTIONDynamics with resetting, where a system is intermittently returned to a predetermined state, has been fascinating researchers from many fields and disciplines <cit.>. Indeed, dynamical systems with resetting have been employed as models for diverse situations such as searching for lost possessions, foraging for food in the wild, stochastic phenotype switching, optimal search algorithms, and random catastrophic events <cit.>. Part of the interest is due to the neat mathematical structure of resetting, but most of the interest is due to the usefulness of resetting in search problems <cit.>. It is now well understood that the inclusion of resetting can drastically affect the distribution of search times. Consider a particle that diffuses until it reaches a target position for the first time. Naively one would think that adding resetting to the system's dynamics is unlikely to be helpful, since some resetting events will occur when the particle already is near the target, and will therefore be detrimental. This naive intuition is often wrong. Resetting can be quite helpful in search problems in which a particle can diffuse far away from the target in the wrong direction. Resetting prevents such realizations from occurring, thereby removing realizations which take an exceedingly large times to reach their target <cit.>.One of the most fundamental characteristics of stochastic resetting systems is that they are inherently out of thermal equilibrium. Consider a Brownian particle diffusing in some potential landscape. In the absence of a non-conservative force (or boundary conditions that couple the system to imbalanced reservoirs) the system will relax to an equilibrium state, in which the probability to find the system at x is given by the Boltzmann distribution. Resetting can be added to the dynamics by mandating that at each time step dt the system has probability r dt to be reset to a preselected position x_r. If a reset has not happened the system simply continues to diffuse. The resetting step is clearly unidirectional, since the dynamics does not include anti-resetting transitions (i.e. the time-reversal of resetting events). This simple observation means that stochastic dynamics with resetting can not satisfy detailed balance, and the system must therefore relax to a non-equilibrium steady state <cit.>. It has been noticed that the dynamics of relaxation to the steady-state can be quite unusual in the presence of resetting <cit.>. 
With time, an inner core region near the resetting point relaxes to the steady state, while the outer region is still transient, and the location of the boundary separating them grows as a power law <cit.>.While the dynamics of stochastic resetting systems was studied extensively, the thermodynamic interpretation of resetting was largely overlooked. The first, and to the best of our knowledge the only paper to deal with this question was published fairly recently <cit.>. Fuchset. al.used the theory of stochastic thermodynamics in order to give a consistent thermodynamic interpretation to resetting. In particular, the authors of Ref. <cit.> identified the entropy change and work due to a resetting event eventually deriving the first and second law of thermodynamics in the presence of resetting. The change in the system's entropy during a reset was interpreted as the difference between the information created and erased in this step, making an interesting connection between resetting and the thermodynamics of information <cit.>.The theory of stochastic thermodynamics was developed to extend thermodynamics into the realm of small out-of-equilibrium systems <cit.>. The most important concept underlying the theory is the ability of assigning a meaningful thermodynamic interpretation to a single realization of a process. This in turn allows one to study distributions of thermodynamic quantities, such as heat or work <cit.>. The development of stochastic thermodynamics was largely motivated by the realization that such distributions satisfy a set of results known as fluctuation theorems <cit.>. Fluctuation theorems (FT) can be viewed as replacing the inequality of the second law by an equality expressed as an exponentially weighted average over distributions of thermodynamic variables. These celebrated results are a rare example of general laws that hold even for far from equilibrium processes. Their discovery spurred an extensive research effort focused on out-of-equilibrium systems and processes <cit.>.It is only natural to ask whether stochastic resetting systems satisfy any fluctuation theorems. The question is non trivial since the resetting transitions are unidirectional. Most derivations of fluctuation theorems are based on the ability to map a realization onto a time-reversed counterpart, implicitly assuming that such a counterpart exists. The absence of anti-resetting transitions therefore means that the derivations based on this assumption break down, and that the usual fluctuation theorem will be typically violated in systems with resetting. In this paper we nevertheless show that stochastic resetting systems satisfy two integral fluctuation theorems (IFTs). The first is the Hatano-Sasa relation <cit.>, which was derived for dynamics without resetting. It pertains to a processes in which a system is driven from one steady-state to another by the modulation of parameters. The derivation is illustrated in sec:HSFT. The thermodynamic interpretation of the Hatano-Sasa functional is discussed in sec:func. The second relation we derive is an extension of an IFT derived for Markov jump processes with unidirectional transitions in <cit.>. The fluctuating quantity appearing in this IFT is an interesting combination of thermodynamic and dynamic quantities. The derivation of this second IFT is presented in sec:IFTDR.We discuss how the presence of resetting affects the physical interpretation of the exponentially weighted functional that appears in each of the IFTs. 
We also present numerical simulations to support the validity of both IFTs. We discuss some implications of our results in Conclusion.§ THE HATANO-SASA INTEGRAL FLUCTUATION THEOREMIn 2001, Hatano and Sasa derived an insightful fluctuation relation for overdamped Langevin systems (without resetting) which are driven from one steady-state to another <cit.>. Consider a system that is prepared at a steady state matching an initial value of some parameter α (0). The system is then driven by varying α with time, leading to a finite-time transition between steady-states in which the actual time dependent probability distribution lags behind the distribution of the momentary steady-state (with parameter α (t)). On the other hand, for quasistatic variation of the parameters the system moves through a continuous sequence of stationary states since the lag between the actual distribution and the momentary steady state vanishes. A quantitative measure of such a lag between two equilibrium states is given by the Clausius inequality. The goal of Hatano and Sasa was to find an analogue in the case of steady states.Hatano and Sasa identified a functional Y[x(t)] of realizations of this process x(t) that satisfies an integral fluctuation theorem⟨ e^-Y ⟩ = 1, łHatano-Sasa-IFTwhere angular brackets denote the average over an ensemble of realizations of the process. The functional is given byY[x(t);α(t)]=∫_0^τ dt α̇(t) ∂ϕ/∂α(x(t);α(t)), łHatano-Sasa-functionalwhere ϕ (x;α) ≡ - lnρ(x;α) is the logarithm of the momentary steady-state distribution ρ(x;α) for a given value of α(t)<cit.>.Hatano and Sasa proceeded to show that the functional Y can be recast in a way that has an interesting thermodynamic interpretationY=β Q^ex[ x(t) ]+Δϕ[ x(t) ], łdefinition of Ywhere β^-1 is the temperature of the ambient medium, and Δϕ= ϕ(x(τ);α (τ))-ϕ(x(0);α (0)) is the difference between the final and initial values of ϕ along the realization. Q^ex was identified as the excess heat following Oono and Paniconi who studied heat dissipation in non-equilibrium steady states in <cit.>. This definition follows from a decomposition of total heat into excess heat, which is produced only during transitions between steady-states, and a housekeeping heat, which is constantly produced to maintain the steady state, so that Q=Q^hk+Q^ex. In equilibrium the housekeeping heat vanishes and the excess heat becomes identical to the total heat. The Hatano-Sasa relation implies thatβ⟨ Q^ex⟩+ ⟨Δϕ ⟩≥ 0 ,łsecond-lawwhich obtained from Eq.Hatano-Sasa-IFT with the help of Jensen's inequality. The excess heat is minimal for quasistatic processes where the external variation is slow compared to other time-scales, and system is effectively at the momentary steady state at each time step. For such processes ⟨ Q^ex⟩=-T⟨Δϕ ⟩. Eq.second-law can be recast as the second law for transitions between two steady states if one notes that the Shannon entropy can be defined as S(α)=-∫  dx  ρ(x;α)  lnρ(x;α), leading to a Clausius-like inequalityT Δ S ≥ -⟨ Q^ex⟩<cit.>.In this section we argue that Hatano-Sasa relation holds also for dynamics with resetting. In fact, the derivation presented by Hatano and Sasa in Ref. <cit.> holds without changes. We present the derivation below for completion and also to highlight a subtle point that needs some care when resetting is present. Consider an overdamped particle diffusing in a parameter dependent potential landscape U(x,α). The particle can also be reset to a fixed position x_r. 
This process occurs at a rate r which is taken to be spatially independent for simplicity. The probability distribution of finding the particle in different locations p(x,t;λ⃗) evolves according to a Fokker-Planck equation∂ p/∂ t= D ∂^2p/∂ x^2 +∂/∂ x[ ∂ U/∂ x p ]- rp + r δ (x-x_r).Here λ⃗ = {α, r, x_r} is the set of the process parameters that can be controlled externally, and D is the diffusion constant.If the parameters are not varied in time the system will decay to a steady state with distribution ρ (x;λ⃗)=e^-ϕ (x;λ⃗).We now imagine that the system is initially in a steady state. At time t ≥ 0, the system is driven out of this state via variation of some of the parameters in time according to a known protocol, λ⃗ (t) (where 0≤ t ≤τ). We further assume that the parameters are varied in a smooth manner. For the purpose of the derivation we divide the time interval into many small time segments of size τ/N. The original process can now be approximated by a process in which the parameters are kept constant in each time step and are changed suddenly in between the time steps. Specifically, we take λ⃗(t)=λ⃗_k for t_k < t < t_k+1 with 0≤ k ≤ N-1 and t_0 ≡ 0, t_N≡τ. For larger and largervalues of N this piecewise constant process will be a better and better approximation of the original process. Let us denote the transition probability between two states in one unit of time τ/N for a fixed λ⃗ by P(x^' | x ; λ⃗). By definition this propagator maps the steady state distribution onto itselfρ (x^';λ⃗) = ∫ dx  P(x^' | x ; λ⃗) ρ (x;λ⃗).One can now define functionals G [x(t);λ⃗ (t) ] over realizations using a limiting procedure where G is expressed in terms of the values of x(t) at each time step t_k=k/N τ and then by taking the limit N →∞. The ensemble average of this functional for a given value of N is given by ⟨ G ⟩≃∫ ∏_k=0^N dx_k ( ∏_k=0^N-1 P(x_k+1|x_k;λ⃗_k) ) ρ(x_0;λ⃗_0) G[x(t);λ⃗(t)].When the limit N →∞ is taken (with fixed τ) the piecewise constant process approaches the original smooth process, while the functional converges to a limiting form. The Hatano-Sasa fluctuation theorem is obtained by noting that the functionalR[ x_k; λ⃗_k] ≡∏_k=0^N-1 ρ(x_k+1;λ⃗_k+1)ρ (x_k+1;λ⃗_k),satisfies a relation ⟨  R[ x_k; λ⃗_k] ⟩≃ 1. łeq:relationr This can be verified by direct substitution of Eq. (<ref>) into Eq. (<ref>). Writing R as an exponent of a functional, using the definition of ϕ, and taking the limit N →∞ results in< exp[ - ∫_0^τ dt  λ̇⃗̇·∂ϕ (x;λ⃗)/∂λ⃗]>=1.Here λ̇⃗̇·∂/∂λ⃗≡α̇∂/∂α +ṙ∂/∂ r + ẋ_r∂/∂ x_r. The Hatano and Sasa functional as appeared in <cit.>is recovered when only α is varied, but one can see the resulting fluctuation theorem also holds when the parameters characterizing the resetting process are varied. One interesting difference between dynamics with and without resetting is that realizations with resetting need not be continuous. In fact, the resetting events involve finite and sudden changes in the particle position. Nevertheless, the dependence of ϕ (x;λ⃗) on parameters is smooth, and as a result the derivatives appearing in Eq. (<ref>) are well defined. In contrast, recasting the functional in a form that would involve derivative with respect to x must be done with proper care. This will become important when we try to give a physical interpretation to the functional appearing in (<ref>). 
This is the subject of the next section.§ THERMODYNAMIC INTERPRETATION OF THE HATANO-SASA FUNCTIONAL IN THE PRESENCE OF RESETTING The physical interpretation of the functional Y[ x(t);λ⃗(t)] is not obvious at first sight. For diffusive dynamics without resetting Hatano and Sasa showed that this functional expresses the excess heat, namely heat exchanged between the diffusing particle and its environment beyond the heat that would have been exchanged had the system been maintained at steady state. The ensemble average of the latter is the heat required to keep the system at a steady state and is therefore termed the house-keeping heat. The functional Y[ x(t);λ⃗(t) ] is therefore a measure for the deviation from quasistatic time-variation of parameters. In this section we shall see that this qualitative picture applies also to systems with resetting, but with some important differences.For this purpose we examine a single realization of a process where the particle diffuses in a potential U(x,α) but gets interrupted with resetting to x_r. The microscopic dynamics at an infinitesimal time step dt is given bydx = x(t+dt)-x(t)= N̂_t [ x_r-x(t)]+( 1-N̂_t) [ - ∂ U/∂ x  dt+√(2D)  d B_t].Here x(t) is the position of the particle, while N̂_t is a random variable which determines whether there was a resetting event in the time interval between t and t+dt. It can take only two values, 0 or 1. The probability for a reset event is given by P(N̂_t = 1 )=r dt, whereas the complementary probability is P(N̂_t = 0)=1-rdt. The change in particle position due to diffusion is simply d x_diff≡ - ∂ U/∂ x  dt+√(2D) d B_t, where dB_t is a Wiener process. dB_t therefore satisfies< dB_t >=0, < dB_t^2> = dt, and < dB_t dB_t^'> =0 for tt^' (assuming non overlapping time intervals). The stochastic dynamics of systems with resetting exhibits an interesting feature which is absent in purely diffusive dynamics. The difference dx needs not be small in an infinitesimal time step. The reason for this is obvious. In reset events the particle position is changed suddenly so that dx=x_r-x(t) can be arbitrarily large. As a result one should take care when making manipulations that require expansions in powers of dx.In their paper Hatano and Sasa used integration by parts to rewrite the functional in a more physically transparent form. This is precisely the type of manipulation that can be problematic at reset events. However, we note that the probability of a reset at each time step is infinitesimal. The random variables N̂_t that determine the epochs of reset constitute a Bernoulli process which furthermore converges to a Poisson process when dt → 0. As a result, the probability to find J_r resetting events in a realization of duration τ isP(J_r,τ) = (rτ)^J_r/J_r! e^-r τ,while the waiting time between two consecutive reset events is distributed according to P(Δ t) =r e^-rΔ t. One sees that averages over realizations are dominated by realizations with a finite number of separate resetting events. The relative weight of realizations with an extremely large number of resetting events is therefore negligible. One can then focus on realizations with a finite number of separate resetting events.We thus examine the functional Y[x(t);λ⃗(t)] in Eq. (<ref>) for a realization x(t) that has a finite number J_r of resettings at times 0<t_1<t_2<⋯<t_J_r<τ. The integrand in Eq. (<ref>) may change suddenly in the vicinity of the resetting points, but the change is a jump between two finite values. 
As a result excluding a finite number of infinitesimal time segments around the resetting times will not change the value of the functional. One can therefore rewrite the functional asY[x(t);λ⃗(t)] = ∫_0^t_1^- dt λ̇⃗̇·∂ϕ/∂λ⃗ + ∫_t_1^+^t_2^- dt λ̇⃗̇·∂ϕ/∂λ⃗ + ⋯ + ∫_t_n_r^+^τ dt λ̇⃗̇·∂ϕ/∂λ⃗ ,where t_i^-, t_i^+ denote the times just before and after the resetting events. Since resetting is instantaneous, the difference between these two times is infinitesimal. In each of the time segments in Eq. (<ref>) the particle performs diffusion without resetting. As a result, we can use integration by parts to obtainY[x(t);λ⃗(t)] = ϕ(x(t_1^-); λ⃗ (t_1))-ϕ (x_r;λ⃗ (0))- ∫_0^t_1^- dt dx/dt∂ϕ/∂ x + ϕ(x(t_2^-); λ⃗ (t_2))-ϕ (x_r;λ⃗ (t_1)) - ∫_t_1^+^t_2^- dt dx/dt∂ϕ/∂ x ⋯ + ϕ(x(τ); λ⃗ (τ))-ϕ (x_r;λ⃗ (t_J_r)) - ∫_t_J_r^+^τ dt dx/dt∂ϕ/∂ x ,Our use of integration by parts means that the stochastic integrals in Eq. (<ref>) should be interpreted according to the Stratonovich prescription <cit.>.We are now in position to recast the functional Y in terms of thermodynamic quantities. The thermodynamic interpretation of stochastic resetting was discussed in by Fuchset al.<cit.>. For instance, they noted that the resetting step involves a resetting work of U(x_r,α)-U(x(t^-),α) that is done on the system. In addition, each resetting step must also be associated with a change of the fluctuating entropy of the system, -ln p(x,t;λ⃗) where p satisfies Eq.eq:FPeq. In a seminal paper Seifert has shown that inclusion of such fluctuating system's entropy in the total entropy production results in exact, rather than asymptotic fluctuation theorems <cit.>. The change of this fluctuating entropy in a single resetting step (suppressing the explicit time dependence in p) isΔ S_reset[x(t);λ⃗(t)] = lnp(x(t^-);λ⃗(t))/p(x_r;λ⃗ (t)).Fuchset al. discussed the ensemble average of this quantity and showed that this contribution for the entropy production must enter the second law of thermodynamics of resetting systems, see Eqs. (10)-(13) in Ref. <cit.>.Examination of the functional (<ref>) shows that it has a sum of contribution of the form ϕ (x(t_i^-);λ⃗(t_i))-ϕ (x_r;λ⃗(t_i)) from all the reset events (as shown in fig:test for instance)along the realization. This can be identified as a change in entropy due to resetting, but with the momentary steady state distribution ρ replacing the actual probability distribution p Δ S^ex_reset [x(t);λ⃗(t)]≡∑_i=1^J_r[ϕ(x_r;λ⃗(t_i)) - ϕ(x (t_i^-);λ⃗(t_i)) ]= ∑_i=1^J_r ln ρ(x (t_i^-);λ⃗(t_i))/ρ(x_r;λ⃗(t_i)),where we have used the definition of ϕ.The summation runs over all resetting events in the realization [x(t); 0≤ t ≤τ].We use the notationex for this measure of entropy production to conform withcustomary notation (See e.g. <cit.>).As will be discussed later, for systems with resetting, the interpretationof Δ S^ex_reset as an excess quantity is somewhat misleading. We note that the mean rate of this resetting entropy production is given by Ṡ^ex_reset=r∫ dx p(x;λ⃗) lnρ(x ;λ⃗)/ρ(x_r;λ⃗).The functional Y can now be rewritten asY [x(t);λ⃗(t)] = Δϕ -Δ S^ex_reset -∫_0^t_1^- dt dx/dt∂ϕ/∂ x - ∫_t_1^+^t_2^- dt dx/dt∂ϕ/∂ x⋯ - ∫_t_J_r^+^τ dt dx/dt∂ϕ/∂ x. The stochastic integrals in Eq. (<ref>) can be expressed in terms of the excess heat produced during a realization in a similar way to the approach taken by Hatano and Sasa <cit.>. 
Heat is exchanged between the system and the thermal reservoir only when the particle diffuses and this is given byQ [x(t);λ⃗(t)] = -∫_0^t_1^- dt dx/dt∂ U/∂ x -∫_t_1^+^t_2^- dt dx/dt∂ U/∂ x⋯ -∫_t_J_r^+^τ dt dx/dt∂ U/∂ x, łtotal-heatwhich is based on the fact that in overdamped systems the force -∂ U/∂ x has to balance the force that the particle in the environment apply on the diffusing particle. The stochastic integrals here should also be interpreted according to the Stratonovich prescription. (See e.g. Ref. <cit.> for a more detailed discussion on the differences between the Ito and Stratonovich prescriptions in the context of stochastic thermodynamics). It will be now useful to define the housekeeping heat along the trajectory in the following way <cit.> Q^hk[x(t);λ⃗(t)] =∫ dt  v_ss (x(t); λ⃗) dx/dt,where v_ss (x; λ⃗)= J_ss (x;λ⃗)/ρ (x;λ⃗) is the mean local velocity of particles at the steady state distribution with the external parameters λ⃗. The diffusive particle current is given by J_ss(x;λ⃗) = - ∂ U/∂ xρ - D ∂ρ/∂ x also for diffusive dynamics with resetting. The main difference between the current case and that of systems without resetting is that here the current J_ss will generally be position dependent. Substitution of the expression for the current allows to express the house keeping heat asQ^hk[x(t);λ⃗(t)] = ∫ dt (-∂ U/∂ x + 1/β∂ϕ/∂ x)dx/dt.The stochastic integrals in Eq. (<ref>) are clearly the difference between the heat of a realization and its house keeping counterpart from Eq.eq:ekq2, namely,Y [x(t);λ⃗(t)]= Δϕ - Δ S^ex_reset + β Q^ex,where the excess heat is defined as the difference Q^ex≡ Q-Q^hk. In absence of resetting the so-called excess heat behaves like a proper excess quantity. By this we mean that its ensemble average is approximately proportional to Δϕ in slow, gradual, processes. It does not grow with the duration of the process. This is no longer true for the excess heat in the presence of resetting. This can be understood intuitively. The resetting dynamics is built out of a sequence of resetting events and periods of diffusion. The systematic bias of the former, due to the fact that resetting events always put the particle as x_r, indicates that the diffusion will also have a preferred direction. Both the mean excess heat and the mean resetting entropy are therefore expected to grow with time even for systems at steady state. Nevertheless, the derivation above shows that the combination β Q^ex -Δ S^ex_reset is the one which behaves like a proper excess quantity, namely that it has a mean that is not proportional to theduration in quasistatic processes. One should nevertheless note that separating this well defined excess quantity into resetting and heat related parts results inpartial contributions which are expected to behave awkwardly. We finally obtain a second law-like inequality using the Jensen's relation in Eq.eq:HSwithr, β⟨ Q^ex⟩+ ⟨Δϕ ⟩ - ⟨Δ S^ex_reset⟩≥ 0,for transitions between steady states in systems with resetting.To summarize, the derivation presented in Secs. <ref> and <ref> show that the Hatano-Sasa integral fluctuation theorem is also valid for systems with resetting. Its derivation is almost unchanged by the inclusion of resetting. A bit of care is needed since the functional Y [x(t);λ⃗(t)] in Eq.eq:HSwithr includes realizations with intermittent long range jumps due to resetting.§.§ Numerical SimulationsTo illustrate our results we performed simulations of a simple example of stochastic dynamics with resetting. 
Specifically, we considered an overdamped particle diffusing in a potential U(x,α)=α(t) |x|, where α(t) is the stiffness of the potential. The whole system is immersed in a bath with temperature β^-1. At each microscopic time step dt, the system may be reset to the origin with probability rdt. Alternatively, the system diffuses for a time dt with the complementary probability 1-rdt.The diffusion constant has been fixed at D=1/2 with Dβ=1. The system is initially prepared at a steady-state with α=1.0, r=0.6. It is then driven away from this steady state using a time variation of α, where 0 ≤ t ≤ 1. We used three different driving protocols: (a) α(t)=t, (b) α(t)=e^t and (c) α(t)=(1+t)^-1 respectively. As α is varied externally, the system probability distribution follows a series of non-equilibrium states which lag behind the momentary steady state.Each realization of the stochastic process Eq.eq:defdx is a fluctuating trajectory. For each such realization we calculate the value of the functional Y[x(t)] with the help of Eq.Hatano-Sasa-functional. This requires discretization of the integral appearing in the definition of Y, which is done usingY=∑_i=0^N-1{ϕ(x_i+1;α_i+1)-ϕ(x_i+1;α_i) }. łHSfunctionaldiscretizeEvaluation of this functional requires knowledge of the steady state distribution ρ(x;α) for any value of system and resetting parameters. For this particular system analytical expressions for the steady state distribution are known <cit.>, and they have been employed in our calculations.Fig. <ref> presents the resulting distribution function of the values of Y for the three processes mentioned above. They are generated from histograms of N_r=10^8 realizations of the dynamics. The Hatano-Sasa relation is verified by computing Y^e ≡ln1/N_r∑_i=1^N_r e^-Y_i for each one the different driving protocols. The solid vertical lines in Fig. <ref> depict the value of Y^e whereas the vertical dashed lines correspond to the value of ⟨ Y ⟩. Our numerical simulation returns values of Y_e ≃ 0.002 - 0.005 which is consistent with the predictions of the Hatano-Sasa relation, Y_e = 0. § INTEGRAL FT FOR DISCRETE JUMP PROCESSES WITH RESETTING In this section, we present another integral fluctuation theorem which holds for Markov jump processes. In this setup resetting is introduced by including unidirectional transitions that point towards a specific resetting site. The fluctuation theorem commonly emerges from a comparison of the probabilities of a realization and that of its time reversed counterpart. When the dynamics exhibits microreversibility each allowed trajectory has a time-reversed counterpart and vice-versa. Importantly, the mapping between trajectories and their time-reversed counterparts is one-to-one. Dynamics with resetting violates microreversibility. All resetting transitions put the particle at the reset site, whereas the dynamics does not include any anti-resetting transitions. Despite this, we show that by introducing an auxiliary dynamics with inverted (anti-resetting) transitions a one-to-one mapping of realizations can still be achieved. This allows us to derive an IFT which is solely given in terms of the original resetting dynamics. We model the resetting dynamics as a jump process on a discrete lattice with the sites {1, 2, 3, ... r ..., N_s }, where the reset site is labeled by r. We distinguish between two physically distinct types of Markov transitions. 
The first type consists of diffusive jumps between any two sites which occur due to the interaction between the system and the ambient medium, possibly including some external bias. These jumps are bidirectional. The second type of transitions is the resetting events. These are transitions from any site (except the reset site) to the particular reset site. We view the resetting events as done by some external agent. These transitions are unidirectional, as there are no anti-resetting transitions. To highlight this distinction we denote the rate of bidirectional jump from site m to n by W_nm(t), and the rate of resetting transitions from site m to r by R_rm(t) respectively. The two types of transitions are schematically depicted in Fig. <ref>a. The bi-directionality of the diffusive transitions is expressed by demanding that W_nm(t) >0 implies that also W_mn (t) >0. The probability to find the system in site n at time t, denoted by p(n,t), evolves according to a master equation dpdt=ℒp, whereℒ is the transition rate matrix. The off-diagonal elements of the transition rate matrix are composed from the transition rates. Its diagonal elements are chosen to ensure conservation of probability ℒ_ii = - ∑_jiℒ_ji. The transition rates may be time-dependent. Let us consider a particular realization of the jump process, Γ= { m(t)}, evolving between t=0 and t=t_f, where the state of the system m(t) transitions between a sequence of states m_j, such that m(t) ≡ m_j, for τ_j ⩽ t ⩽τ_j+1. In this notation m_0 is the initial state of this particular realization, while the system is at m_J at the final time t_f. This realization is heuristically depicted in Fig. <ref>b. The Markovian nature of this jump process allows the construction of the probability density of a realization from a few simple building blocks.During the realization Γ the system spends a finite amount of time in a sequence of sites. The probability that the system is at m(t)=n for the time segment (t_1,t_2), without making any transitions, is the so-called survival probability. It is given byS_n(t_2,t_1)= exp[-∫_t_1^t_2dτ e_n(τ) ], łsurvival-all-site where e_n(t) is the total rate of transitions out of site n e_n(t)=K_n(t)+ R_rn(t)[ 1-δ_rn]. łexit-rate-all-site Here K_n(t)=∑_m ≠ n W_mn(t) is the contribution of bidirectional transitions to this escape rate. The time that a realization spends in site n without leaving is therefore distributed according tof_n(t)=e_n(t)exp[-∫_0^tdt  e_n(τ) ].The expression for the escape rate from the reset site r is somewhat different, since there are no resetting transitions out of this site. It is given by e_r(t)=∑_m ≠ r W_mr(t). Since the jump process is Markovian the probability density of the realization Γ is given byP[ Γ]=p_i(m_0) S_m_0(τ_1,0)W_m_1m_0(τ_1) S_m_1(τ_2,τ_1)R_rm_1(τ_2)  S_r(τ_3,τ_2)....W_m_Jm_J-1(τ_J) S_m_J(t_f,τ_J).where the initial condition is chosen randomly from a distribution p_i. This specific realization includes a resetting event at t=τ_2, as well as several diffusive transitions at times τ_1, τ_3, τ_4, ⋯. This probability density can be recast in a way that separates the roles of diffusing and resetting transitionsP[ Γ]=p_i(m_0) ∏_j=1^Jexp[-∫_τ_j^τ_j+1 dt  K_m_j(t)-(1-δ_rm_j)∫_τ_j^τ_j+1 dt  R_rm_j(t) ] ∏_j ∈ J_d W_m_j+1m_j(τ_j) ∏_j ∈ J_r R_rm_j(τ_j), łforward-trajectory where J_d (J_r) is the set of j values of diffusive (resetting) jumps that occurred during the realizations. 
Note that the second term in the exponential of Eq.forward-trajectory only picks up contributions when the actual state m(t) is not at the reset site r.To proceed we examine an auxiliary dynamics in which the resetting transitions are replaced by anti-resetting transitions. This means that the `resetting' site has many outgoing anti-resetting transitions with a total rate of R^aux_r(t)=∑_m ≠ rR_mr(t), where R_mr(t) is the anti-resetting rate from the site r to site m. For each realization Γ of the original dynamics let us examine a realization Γ≡{m(t)}={ m(t_f-t)} of this auxiliary dynamics. The probability density of seeing Γ≡{m(t)} in this auxiliary dynamics is P[ Γ]=p_i(m_0) ∏_j=1^Jexp[-∫_τ_j^τ_j+1 dt K_m_j(t)-δ_rm_j∫_τ_j^τ_j+1 dt  R^aux_r(t) ] ∏_j ∈ J_d W_m_jm_j+1(t_f-τ_j) ∏_j ∈ J_r R_m_jr(t_f-τ_j). łtime-reversed-trajectory where p_i is the initial condition for the time reversed trajectory. Unlike in Eq.forward-trajectory, the second term in the exponential of Eq.time-reversed-trajectory gives us contributions only when the actual time reversed state m(t) is at the reset site r.Crucially, there is a one-to-one mapping between realizations of the resetting dynamics and their time-reversed counterparts in the auxiliary dynamics. The only requirement for this one-to-one mapping is that all the resetting transitions are replaced by the anti-resetting ones. This leaves some freedom in choosing the magnitude of various rates in the auxiliary dynamics. To proceed we choose W_mn(t) = W_mn(t)R_mr(t) = R_rm(t)/f(r,m,t), łtransition-laws where f(r,m,t) is an arbitrary function. Namely, we elect to keep the bidirectional transition rates as they were in the resetting dynamics, but allow for more general choice of the anti-resetting rates. We will see in the following that two specific choices of those rates have an interesting physical interpretation.The probability density of Γ can now be rewritten as P[ Γ]=p_i(m_J) ∏_j=1^Jexp[-∫_τ_j^τ_j+1 dt  K_m_j(t)-δ_r m_j∫_τ_j^τ_j+1 dt  R_r^aux(t) ] ∏_j ∈ J_d W_m_jm_j+1(τ_j) ∏_j ∈ J_r R_r m_j(τ_j)/f(r,m_j,τ_j), łtime-reversed-fictitious-trajectory where R_r^aux(t)=∑_m ≠ r R_rm(t)/f(r,m,t). The one-to-one mapping between the realizations of the resetting and auxiliary dynamics results in an integral fluctuation theorem. Let us define Σ[ Γ]≡lnP [ Γ]/P[ Γ]. łSigma-definition The fact that the auxiliary dynamics conserves probability means that ⟨ e^-Σ⟩ = ∑_Γ e^-Σ[ Γ] P [ Γ]= ∑_Γ P[ Γ] = 1, łIFT-discrete-resetting where the average is over an ensemble of realizations of the resetting dynamics. The Jensen's inequality can now be used to derive a second-law-like inequality, resulting in ⟨Σ[Γ] ⟩⩾ 0. We note that Eq.IFT-discrete-resetting is valid for any choice of the initial condition of the auxiliary dynamics. We choose the initial distribution of the auxiliary dynamics to be identical to the final distribution of the resetting dynamics such that p_i=p_f.The functional appearing in the integral fluctuation theorem (<ref>) is given by Σ[Γ]= Δ S_tot-Δ S_reset+Σ_dyn. łSigma-defn Here Δ S_tot=lnp_i(m_0)/p_f(m_J)+∑_j ∈ J_d lnW_m_j+1m_j(τ_j+1)/W_m_jm_j+1(τ_j+1),łtot-entropy is the total entropy production in the system. The contributions in Eq.tot-entropy are due to the changes in the fluctuating system entropy and the medium entropy from all thebidirectional transitions respectively. The resetting transitions are responsible for a resetting entropy production term. It has the following form Δ S_reset =- ∑_j ∈ J_r ln f(r,m_j,τ_j+1). 
łreset-entropy While the first two terms in Σ[Γ] have a thermodynamical interpretation, the last term Σ_dyn is dynamical in nature. This dynamical term is given by Σ_dyn = ∑_j=1^Jδ_rm_j∫_τ_j^τ_j+1 dt  R_r^aux(t)- ∑_j=1^J(1-δ_rm_j) ∫_τ_j^τ_j+1 dt R_rm_j(t). łDelta-S-dyn-1 This term can be rewritten as Σ_dyn = ∫ dt [R_r^aux(t) χ_r (t) - ∑_mr R_rm(t) χ_m (t) ], łDelta-S-dyn-2 where χ_i(Γ) is an indicator function so that χ_i = 1, for Γ(t)=i and 0 otherwise. In particular, for autonomous processesΣ_dyn has the following form Σ_dyn = ∑_m ≠ r  [R_r^aux Θ_r(Γ)-R_rm Θ_m(Γ)], łDelta-S-dyn-3 where Θ_i(Γ) ≡∫  dt  χ_i(Γ) is the so-called residence time. It is simplythe total time spent at the site i during the realization. The residence time is a stochastic quantity which fluctuates from one realization to another. When normalized by the observation time, this quantity is often known as the empirical density, which converges to the steady state distribution of the system. It is worth emphasizing that Eq.IFT-discrete-resetting holds for any choice of the function f(r,m,t).Up to now the physical interpretation of Σ[ Γ] was not fully clear as it depended on parameters of the non-physical auxiliary process. In the following, we consider two particular choices of f(r,m,t). These choices lead to integral fluctuation theorems with interesting physical interpretation that is expressed only in terms of the original resetting dynamics.§.§ f(r,m,t) = 1In this case the auxiliary dynamics is obtained by simply reversing the direction of the resetting transitions while maintaining their magnitude. This prescription was previously used to study Markov processes with unidirectional transitions <cit.>. Systems with resetting are a subtype of such processes where all the unidirectional transitions point to one preselected site.The choice of f(r,m,t)=1 is useful since it results in a functional with a physically meaningful interpretation. The resetting entropy contribution to Σ[ Γ] identically vanishes. In contrast, the dynamical contribution does not. For time-independent transitions we find Σ_dyn= ∑_ m ≠ r  R_rm[ Θ_r(Γ)-  Θ_m(Γ) ], łsigma-dynamic-independent where we have used the fact that the total absorption rate at the reset site is given by R_r^aux=∑_m ≠ r R_rm. The dynamical contribution to Σ[ Γ] therefore depends on the fluctuating residence times at all sites.The structure of the dynamical term Σ_dyn exhibits similarity to the so-called dynamical activity or traffic which basically counts the number of all jumps irrespectively of their direction in a general jump process <cit.>. An important distinction, however, is that the traffic is time symmetric by construction, namely it does not not change sign if the trajectories are observed backward in time. Detailed and integral fluctuation relations for the traffic functional were derived in <cit.> based on an artificial auxiliary dynamics <cit.>. §.§ f(r,m,t) = p(r,t)/p(m,t) An alternative choice of the auxiliary dynamics is obtained by choosing f(r,m,t)=p(r,t)/p(m,t), where p(m,t) is the time dependent solution of the master equation. This choice also results in an integral fluctuation theorem with an appealing physical interpretation. We first notice that the resetting entropy along a realization is readily obtained from Eq.reset-entropy Δ S_reset=∑_j ∈ J_rlnp(m_j,t)/p(r,t).It is evident that the resetting entropy does not vanish for this choice of resetting rates. 
Furthermore, the mean rate of resetting entropy production is given by Ṡ_reset(t)= ∑_m ≠ r  R_rm(t) p(m,t) lnp(m,t)/p(r,t). łreset-EP This is precisely the expression derived by Fuchs et. al. <cit.> (see Eq. (24) there).For this choice of f(r,m,t) the dynamical part of Σ[ Γ] is given byΣ_dyn=∫ dt ∑_mr R_rm (t) [p(m,t)/p(r,t)χ_r (t) - χ_m (t) ].The appealing feature of this choice of auxiliary dynamics is that the ensemble average of this dynamical term vanishes. Indeed, by definition < χ_m (t)>=p(m,t). Consequently, the ensemble average of Eq.eq:dyn2choice gives us< Σ_dyn> =∫ dt ∑_mr R_rm (t) [p(m,t)/p(r,t) p(r,t) - p(m,t) ]=0. The vanishing mean of the dynamical contribution to Σ[ Γ] results in a purely thermodynamic second-law-like inequality Ṡ_tot-Ṡ_reset⩾ 0.which is derived by using Jensen's inequality in the integral fluctuation theorem in Eq.IFT-discrete-resetting. At steady state Ṡ_sys=0, and the inequality simplifies to Ṡ_med-Ṡ_reset⩾ 0.Here Ṡ_med is the medium entropy production rate. This version of the second law, applicable to resetting systems, was originally derived in <cit.>. Our results show that it can also be derived from an integral fluctuation theorem by using the Jensen inequality. Interestingly, the fluctuating functional appearing in this IFT includes both thermodynamic and dynamical contributions. The latter turns out to have a vanishing mean and therefore does not appear in the second law, but it certainly contributes to the stochastic thermodynamics of the system.Several recent papers have derived integral fluctuation theorems for systems where only some of the transitions can be observed <cit.>. The derivation is based on a construction of auxiliary dynamics very similar to the one employed here. It is interesting to note that this particular prescription was found to be useful in a variety of physical contexts.§.§ Numerical Simulations To illustrate our considerations we simulated a stochastic jump process with both bidirectional and resetting transitions. We used a four-site system as depicted in fig-unidirection. While bidirectional transitions can occur between any two sites,the resetting transitions can occur only to a preselected site, chosen here to be site 2. In our simulation we used the transition rates W_12=0.3, W_13=1.0, W_14=0.7, W_21=0.5, W_23=0.6, W_24=0.7 , W_31=0.9, W_32=1.3, W_34=0.7, W_41=0.8, W_42=0.2, W_43=1.3, R_21=0.4, R_23=0.6, and R_24=1.0. We chose an example with time-independent rates to allow usage of the Gillespie algorithm. For this dynamics the waiting time between realizations is distributed exponentially. The system is initially in a uniform probability distribution, with p_i(n) ≡ p(n,0)=1/4. Then time evolution of p(n,t) is obtained by solving the master equation. To explore the stochastic thermodynamical properties of this model we need to generate single realizations of the jump process. To this aid, we have used Gillespie algorithm to generate the stochastic trajectories of the system by determining the epochs of jumps between the states. We simulated the jump process, by picking an initial state with probability p_i (n), and then following the transitions that the system makes until a final time of t_f=5. For each realization we computed the functional Σ [Eq.Sigma-defn], using the prescription f(r,m,t)=1, which results in a vanishing resetting entropy production. The system and the medium entropy are calculated with the help of Eq.tot-entropy. 
This requires following the initial and final states of each realization, as well as the transitions made during it. The calculation of the system's entropy also requires knowledge of the initial and final probability distributions. Finally, the dynamical contribution Σ_dyn is computed using Eq.sigma-dynamic-independent where the residence time Θ_i at site i is computed from the stochastic dynamics simulation.We obtain the complete statistics of Σ by taking an ensemble over N_r=10^9 independent realizations. To test the validity of the integral fluctuation theorem we computed Σ^e ≡ln1/N_r∑_i=1^N_r e^-Σ_i numerically. For the given set of parameters, we find Σ^e=0.04934.., which is marked by a solid vertical line in fig-unidirection (right panel). The dashed vertical line depicts ⟨Σ⟩. Σ^e is considerably smaller than ⟨Σ⟩ and is located near the origin. This is consistent with thepredictions of the IFT Eq.IFT-discrete-resetting. We note in passing that trying to numerically verify the IFTEq.IFT-discrete-resetting for processes of longer duration may be difficult. The reason is the growth of ⟨Σ⟩ with the duration, resulting in the need to sample an exceedingly large number of realizations to ensure convergence of the exponential average ⟨ e^- Σ⟩. § CONCLUSION In this paper we have studied the stochastic thermodynamics of resetting systems. In particular, we investigated whether stochastic dynamics with resetting satisfies fluctuation theorems. Our results are complementary to the ones recently presented by Fuchs et. al. <cit.>, where the work and entropy production of resetting were identified, and a version of the second law that holds with resetting was derived.The search for fluctuation theorems of resetting systems is complicated by the fact that resetting events violate microreversibility. This violation of microreversibility ultimately means that many of the known versions of fluctuation theorems are inapplicable in systems with resetting. Nevertheless, we identify two integral fluctuation theorems that hold for stochastic dynamics with resetting.The first IFT is the Hatano-Sasa relation for transitions between steady states <cit.>. The functional that appears in the fluctuation relation in systems with resetting includes both the usual excess heat but also a contribution due to resetting entropy change. Interestingly, while none of these terms behaves like a proper excess quantity, in the sense of exhibiting a small mean for quasistatic processes, their sum does.The second IFT describes stochastic jump processes with resetting. It is derived by comparing the resetting dynamics to an auxiliary dynamics in which resetting is replaced with anti-resetting. There is some freedom in choosing the transition rates of this auxiliary dynamics, resulting in some freedom in the final form of the IFT. We identify two choices for the auxiliary dynamics which lead to an IFT with interesting physical interpretation. The first choice leads to an IFT for a functional that has no contribution of resetting entropy. Instead, it includes a dynamical term that is calculated from fluctuating residence times in sites. The second choice leads to a functional that does have a contribution from the entropy changes in the resetting events. This functional also includes a modified dynamical contributions, but we find that the ensemble average of this term vanishes. Interestingly, we find that this fluctuation relation leads to the second-law-like inequality found in Ref. 
<cit.>.The Hatano-Sasa relation holds for quite general processes in which a system is driven out of a steady state. It is therefore very desirable to calculate analytically the distribution of values of the Hatano-Sasa functional. Several recent papers employed the theory of large deviations to calculatedistributions of thermodynamic observable (like work, heat and total entropy production) <cit.>.A similar approach could also be handy to describe the Hatano-Sasa functional. Another possible research direction would be to study the full statistics of the residence times introduced in Sec. <ref>. This could be done using the Feynman-Kac formalism following references <cit.>, where the authors have studied the full statistics of the residence time in generic diffusion processes. The natural extension of these studies to systems with resetting is to focus on the statistics of the residence time near the resetting point (or state).This work was supported by the the U.S.-Israel Binational Science Foundation (Grant No. 2014405), by the Israel Science Foundation (Grant No. 1526/15), and by the Henri Gutwirth Fund for the Promotion of Research at the Technion. 1EM11a M. R. Evans, and S. N. Majumdar, Phys. Rev. Lett. 106, 160601 (2011).EM11b M. R. Evans, and S. N. Majumdar, J. Phys. A: Math. Theor. 44, 435001 (2011).KMSS14 L. Kusmierz, S. N. Majumdar, S. Sabhapandit, and G. Schehr, Phys. Rev. Lett. 113, 220602 (2014).EM14 M. R. Evans, and S. N. Majumdar, J. Phys. A: Math. Theor.47, 285001 (2014).Pal14 A. Pal,Phys. Rev. E 91, 012113 (2015).SabhapanditTouchette15 J. M. Meylahn, S. Sabhapandit, and H. Touchette, Phys. Rev. E 92, 062148 (2015).MSS15 S. N. Majumdar, S. Sabhapandit, and G. Schehr, Phys. Rev. E91, 052131 (2015).Eule2016 S. Eule, and J. J. Metzger, New J. Phys.18, 033006 (2016).RoldanGupta2017 E. Roldan, and S. Gupta, Phys. Rev. E96, 022130 (2017).BMS13 A. J. Bray, S. N. Majumdar, and G. Schehr, Advances in Physics62, 225 (2013).Majumdar:2005 S. N. Majumdar, Curr. Sci.89, 2076 (2005).Redner S. Redner,A guide to First-Passage Processes (Cambridge University Press, Cambridge 2001).First-Passage-Book First-Passage Phenomena and Their ApplicationsEd.R. Metzler, G. Oshanin, S. Redner, (World Scientific, 2014).Benichou2011RMP O. Bénichou, C Loverdo, M. Moreau, and R. Voituriez, Rev. Mod. Phys. 8381 2011.AAM16 A. Pal, A. Kundu and M. R. Evans, J. Phys. A: Math. Theor.49,225001 (2016).Shamik16 A. Nagar, and S. Gupta, Phys. Rev. E93, 060102 (2016).SR2014 S. Reuveni, Michael Urbakh, and Joseph Klafter, PNAS 111, 12 (4391-4396) (2014).SR2016 S. Reuveni, Phys. Rev. Lett. 116, 170601 (2016).PalShlomi2016 A. Pal, and S. Reuveni, Phys. Rev. Lett.118, 030603 (2017).Seifert2016 J Fuchs, S Goldt, and U Seifert, EPL 113, 6 (2016).Parrondo2015 J. M. R. Parrondo, J. M. Horowitz, and T. Sagawa, Nature Phys.,11, 131 (2015).Sekimoto-book K. Sekimoto,Stochastic Energetics, (Springer, Berlin 2010).Seifert:2012 U. Seifert, Rep.Prog. Phys.,75, 126001 (2012).Jarzynski:11 C. Jarzynski, Annu. Rev. Condens. Matter Phys. 2, 329–51 (2011).VanZon-Cohen R. van Zon, and E. G. D. Cohen, Phys. Rev. Lett.91, 110601 (2003); Phys. Rev. E67, 046102 (2003); Phys. Rev. E.69, 056121 (2004).PalSanjib A. Pal, and S. Sabhapandit, Phys. Rev. E87, 022138 (2013); Phys. Rev. E90, 052116 (2014).Evans1993 D.J. Evans, E. G. D. Cohen,and G. P. Morriss, Phys. Rev. Lett.,71, 3616 (1993).Gallavotti1995 G. Gallavotti, and E.G. D. Cohen, Phys. Rev. Lett.,74, 2694 (1995).Jarzynski1997 C. Jarzynski, Phys. Rev. Lett.,78, 2690 (1997).Crooks1998 G. E. 
Crooks, J. Stat. Phys.,90, 1481 (1998).Kurchan1998 J. Kurchan, J. Phys. A: Math. Gen.,31, 3719 (1998).Lebowitz1999 J. L. Lebowitz, and H. Spohn, J. Stat. Phys.,95, 333 (1999).Seifert2005 U. Seifert, Phys. Rev. Lett.,95, 040602 (2005).HatanoSasa2001 T. Hatano, and S. Sasa, Phys. Rev. Lett. 86, 16 (2001).Trepagnier EH Trepagnier, C. Jarzynski, F. Ritort, GE Crooks, CJ Bustamante, J. Liphardt, PNAS 101, 42 (2004).OonoPaniconi Y. Oono, and M. Paniconi, Prog. Theor. Phys. Suppl.130, 29 (1998).Rahav2014 S. Rahav, and U. Harbola, J. Stat. Mech.P10044 (2014).Maes2008 C. Maes, K. Netǒcný, and B. Wynants ,Markov Proc. Rel. Fields,14,445 (2008).Baiesi2009 M. Baiesi, C. Maes, and B. Wynants,Phys. Rev. Lett.,103,010602 (2009).Baiesi2015 M. Baiesi, and G. Falasco, Phys. Rev. E92, 042162 (2015).Haritch2014 D. Hartich, A. C. Barato, and U. Seifert,J. Stat. Mech.,2014, P02016.Shiraishi2015 N. Shiraishi, and T. Sagawa,Phys. Rev. E,91, 012130 (2015).Polettini M. Polettini, and M. Esposito, preprtint, arXiv:1703.05715.Bisker G. Bisker, M. Polettini, T. R. Gingrich, and J. M. Horowitz, preprint, arXiv:1708:06769.Gardiner C. W. Gardiner,Handbook of Stochastic Methods, (Springer, Berlin 1985).Speck2005 T. Speck, and U. Seifert, J. Phys. A: Math. Gen.,38, L581 (2005).Touchette H. Touchette, Physics Reports478 (2009).SatyaCurrentScience S. N. Majumdar, Current Science,89, 2076 (2005).Sanjib-SatyalLT S. Sabhapandit, S. N. Majumdar, and A. Comtet, Phys. Rev. E73, 051102 (2006).PCB2017 Paul C. Bressloff, Phys. Rev. E95, 012130 (2017).
http://arxiv.org/abs/1709.09330v1
{ "authors": [ "Arnab Pal", "Saar Rahav" ], "categories": [ "cond-mat.stat-mech", "physics.chem-ph" ], "primary_category": "cond-mat.stat-mech", "published": "20170927043518", "title": "Integral Fluctuation Theorems for Stochastic Resetting Systems" }
Kotel'nikov Institute of Radio Engineering and Electronics, Russian Academy of Sciences, Fryazino, Moscow District, 141190, Russia We consider a gated one-dimensional (1D) quantum wire disturbed in a contactless manner by an alternating electric field produced by a tip of a scanning probe microscope. In this schematic 1D electrons are driven not by a pulling electric field but rather by a non-stationary spin-orbit interaction (SOI) created by the tip. We show that a charge current appears in the wire in the presence of the Rashba SOI produced by the gate net charge and image charges of 1D electrons induced on the gate (iSOI). The iSOI contributes to the charge susceptibility by breaking the spin-charge separation between the charge- and spin collective excitations, generated by the probe. The velocity of the excitations is strongly renormalized by SOI, which opens a way to fine-tune the charge and spin response of 1D electrons by changing the gate potential. One of the modes softens upon increasing the gate potential to enhance the current response as well as the power dissipated in the system. Dynamic transport in a quantum wire driven by spin-orbit interaction Yasha Gindikin December 30, 2023 ====================================================================Introduction.—Today we are witnessing the burst of interest in the ballistic electron transport in quantum wires <cit.>. For the last three decades the quantum wires formed by electrostatic gating of a high-mobility two-dimensional (2D) electron gas have been the favorite playground to study quantum many-body effects in one-dimensional (1D) electron systems <cit.>, where a strongly correlated state known as the Tomonaga-Luttinger liquid emerges as a result of the electron-electron (e-e) interaction <cit.>. The dynamic transport experiments are the most subtle and precise methods to extract the many-body physics <cit.>.Currently of most interest are the group III-V semiconductor nanowires as they represent basic building blocks for the topological quantum computing <cit.> and spintronics <cit.>. In particular, InAs and InSb nanowires are promising systems for the creation of helical states and as a host for Majorana fermions <cit.>. The fundamental reason behind these properties is the strong Rashba spin-orbit interaction (RSOI) in these materials <cit.>.Recently we have found that RSOI is created by the electric field of the image charges that electrons induce on a nearby gate <cit.>. A sufficiently strong image-potential-induced spin-orbit interaction (iSOI) leads to highly non-trivial effects such as the collective mode softening and subsequent loss of stability of the elementary excitations, which appear because of a positive feedback between the density of electrons and the iSOI magnitude.By producing a spin-dependent contribution to the e-e interaction Hamiltonian of 1D electron systems, the iSOI breaks the spin-charge separation (SCS), the hallmark of the Tomonaga-Luttinger liquid <cit.>. As a result, the spin and charge degrees of freedom are intertwined in the collective excitations, which both convey an electric charge and thus both contribute to the system electric response, in contrast to a common case of a purely plasmon-related ballistic conductivity. In addition, the iSOI renormalizes the velocities of the collective excitations. 
An attractive feature of the iSOI is that the spin-charge structure of the collective excitations in 1D electron systems and their velocities can be tuned by the gate potential.The iSOI signatures in the dynamics of a 1D electron system were studied in Ref. <cit.> in the absence of the RSOI owing to the external electric fields to show that the spin-charge structure of the excitations as well as their velocities can be determined from the Fabry-Pérot resonances in the frequency-dependent conductance of a 1D quantum wire coupled to leads. The goal of the present paper is to investigate the interplay of the iSOI and RSOI in the dynamic charge- and spin response of a 1D electron system without contacts that may dramatically affect the system response <cit.>. The search for non-invasive methods to excite the electron system and measure the response is actively pursued nowadays, especially in plasmonics. The tools currently used include the nanoantennas and electron probe techniques <cit.>, and even the Kelvin probe force microscopy <cit.>.We consider a single-mode 1D quantum wire subject to an alternating electric field produced by the conducting tip of the scanning probe microscope, as shown in Fig. <ref>. Such schematic was discussed in Ref. <cit.> in the context of local disturbance of the charge subsystem <cit.>. We emphasize that the probe electric field, which grows even faster than the potential as the probe approaches the wire, also gives an essential contribution to RSOI thereby disturbing the spin subsystem, too <cit.>.The quantum wire is supposed to be placed directly on a conductive gate <cit.>, so that the electron image charges on the interface become the source of the iSOI. Since the potential difference between the wire and the gate is negligible, the probe electric field screened by the gate has no pulling component along the wire. However, the probe electric field perpendicular to the wire is the source of the time-dependent RSOI. We show that in response to this, the charge current does appear in the wire in the presence of iSOI and/or RSOI caused by the gate net charge. The RSOI gives rise to an interesting mechanism of electric conductivity. Since the RSOI magnitude is getting modulated along the wire by the non-stationary tip-induced RSOI, there appears a modulation of the bottom of the conduction band that results in the charge current. The process is illustrated by the inset in Fig. <ref>. The iSOI produces a complementary conductivity mechanism by mixing the charge- and spin collective excitations, generated by the probe. We also find an unusual dependence of the dissipative conductivity on the gate potential. As the potential increases, one out of two collective modes softens, with its amplitude growing. This enhances the current response and the system conductivity as determined from the dissipated power.The model.—We start by formulating the Hamiltonian,H = H_kin + H_e-e + H_SOI + H_ext .The kinetic energy is H_kin = ∑_s∫ψ^+_s(x)(p̂_x^2/2m) ψ_s(x) dx, with the electron field operator ψ_s(x), the momentum p̂_x, the spin index s.The x axis is directed along the wire, and y axis is directed normally towards the gate, which is separated by a distance of a/2 from the wire. The e-e interaction operator reads asH_e-e =1/2∑_s_1 s_2∫ψ^+_s_1(x_1) ψ^+_s_2(x_2) U(x_1-x_2) ×ψ_s_2(x_2) ψ_s_1(x_1) dx_1 dx_2 ,where U(x) = e^2/√(x^2 + d^2) - e^2/√(x^2 + a^2) is the e-e interaction potential screened by the image charges, d being the quantum wire diameter. 
Its Fourier transform is U_q = 2e^2[K_0(qd) - K_0(qa)], with K_0 being the modified Bessel function <cit.>. A two-particle contribution to the SOI Hamiltonian equals <cit.>H_iSOI =α/2ħ∑_s_1s_2∫ψ^+_s_1(x_1) ψ^+_s_2(x_2) [ E(x_1-x_2)𝒮_12. + . 𝒮_12 E(x_1-x_2) ] ψ_s_2(x_2) ψ_s_1(x_1) dx_1 dx_2 . Here α is a material-dependent SOI constant, E(x_i - x_j) = -e a[(x_i - x_j)^2 + a^2]^ -3/2 is the y component of the electric field acting on an electron at point x_2 from the electron image charge at point x_1, and 𝒮_12 = (p̂_x_1 s_1 + p̂_x_2 s_2)/2. Eq. (<ref>) and Eq. (<ref>) together represent a spin-dependent pair interaction Hamiltonian.A single-particle contribution to the SOI Hamiltonian comes from the image of the positive background charge density n_ion in the wire, the charge density n_g in the gate, and the field of the electron's own image E(0) to giveH_RSOI = α/ħ∑_s∫ψ^+_s(x)𝔈 p̂_x s ψ_s(x)dx , with 𝔈 = E(0) -n_ionE_0 - 2π n_g, where E_0 is the q=0 component of the Fourier-transform E_q = -2e |q| K_1(|q|a) of the field E(x) <cit.>. Denote the y-component of the non-uniform ac-field produced by the probe and screened by the gate by 𝔉_y = F(x,t). Then the external perturbation can be written asH_ext = α/2ħ∑_s∫ dxψ^+_s(x) [𝔉_yp̂_x + p̂_x𝔉_y ] s ψ_s(x) = - α m/e ħ∫ F(x,t) j_σ(x)dx ,where j_σ(x) = ∑_s s j^(s)(x) stands for the spin current, withj^(s)(x) = - i e ħ/2 m[ ∂_x ψ^+_s ψ_s(x) - ψ^+_s(x) ∂_x ψ_s ] ,being the s-spin component of the electron current operator.In order to find a linear response of the system to H_ext we employ the equation of motion for the Wigner distribution function (WDF) defined asf^(s)(x,p,t) = 1/2 π∫ e^i pη⟨ψ_s^+(x + η/2,t)ψ_s(x - η/2,t)⟩dη .This technique is particularly well-suited for the problem at hand, since the lack of contacts in the system relieves us from non-trivial problems with the boundary conditions for the WDF <cit.>.Results.—Following Ref. <cit.>, we obtain the following equation for the WDF Fourier transform in the random-phase approximation,ħω f^(s)_1(q,p,ω) = (ħ^2 pq/m + α q s E) f^(s)_1(q,p,ω) - [f^(s)_0(p+q/2) - f^(s)_0(p-q/2)] ×{α p s F_qω - m α/e ħ E_q ∑_ςς j^(ς)_qω + (U_q + α p s E_q) ∑_ς n^(ς)_qω} .Here f^(s)_1(q,p,ω) stands for the deviation of f^(s)(q,p,ω) from its equilibrium value f_0^(s)(p) as a result of the external perturbation H_ext. Then, n^(s)_qω and j^(s)_qω are, respectively, the electron density and current response, related to the WDF by n^(s)_qω= ∫ f^(s)_1(q,p,ω) dp and j^(s)_qω= -e ħ/m∫ p f^(s)_1(q,p,ω) dp. The mean electric field is E = 𝔈 + n_0 E_0. The mean electron density n_0 is kept fixed, so the Fermi momentum is k_F^(s) = -s k_so± k_F, where k_so = α m E/ħ^2 and k_F stands for π n_0/2.To derive the closed equations for n^(s)_qω, first integrate Eq. (<ref>) with respect to p:ω e n^(s)_qω + q j^(s)_qω =α/2 ħ q s e n_0 F_qω+α q s e/2ħ[ 2 E n^(s)_qω + E_q n_0 ∑_ς n^(ς)_qω] .Substitute j^(s)_qω from Eq. (<ref>) to Eq. (<ref>), express f^(s)_1(q,p,ω), and integrate the latter with respect to p to get n^(s)_qω.Further notations will be simplified by introducing the dimensionless variables as α̃ = 2πα n_0e a_B, 𝒰_q = U_qπħ v_F, ℰ = Ee n_0^2, ℰ_q = E_qe n_0,ℱ_qω = F_qωe n_0, and v_q = ωv_F q, with v_F = ħ k_Fm and a_B being the Bohr radius in the material. 
The system response to the external perturbation is governed by the following equations (s = ± 1),n^(s)_qω((ℰ + ℰ_q/2 )ℰ_q α̃^2 - 𝒰_q + v_q^2 - 1- α̃ s v_q ℰ_q)+ n^(-s)_qω( (ℰ + ℰ_q/2 )ℰ_q α̃^2 - 𝒰_q) = φ^(s)_qω ,with the spin-dependent perturbationφ^(s)_qω = α̃ s/2ℱ_qω v_q - α̃^2/2ℱ_qω (ℰ + ℰ_q) . The first term on the right hand side of Eq. (<ref>) is a perturbation in the spin sector caused directly by the SOI produced by the probe. This term is linear in α̃. The second term describes an indirect perturbation of the charge sector that appears because of the SOI present in the system. Its magnitude is, correspondingly, proportional to α̃^2.The normalized phase velocities of the collective excitations v=ω/q v_F, obtained from Eq. (<ref>) by setting the determinant to zero, are given byv_±^2 = 1 + 𝒰_q - α̃^2ℰℰ_q ±√((𝒰_q - α̃^2ℰℰ_q)^2 + α̃^2ℰ_q^2) . The evolution of the excitation velocities v_± and the spin-charge separation parameter of the modes that depends on the velocities as ξ_± = (v_± - v_±^-1)/α̃ℰ_q with the change in the iSOI magnitude is analyzed in detail in Refs. <cit.>. Here we would like to stress that in the presence of iSOI (ℰ_q0) both v_± and ξ_± can be controlled via the mean electric field ℰ by tuning the gate potential. Thus, v_- goes to zero as ℰ grows, i.e. the corresponding mode softens. The possibility of tuning the plasmon velocity via the RSOI magnitude was discussed for 2D systems <cit.>. An important difference from the 2D case is that without iSOI, ℰ has no effect on the excitation velocities nor does it violate the SCS between the modes. This is related to the fact that a constant SOI can be completely eliminated in 1D by a unitary transformation <cit.>.The charge and spin susceptibilities defined by χ^ρ_q ω = (n^(+)_qω + n^(-)_qω)/ℱ_qω and χ^σ_q ω = (n^(+)_qω - n^(-)_qω)/ℱ_qω are equal toχ^ρ_q ω = α̃^2 ℰ_q + ℰ(1 - v_q^2)/(v_q^2 - v_+^2)(v_q^2 - v_-^2)andχ^σ_q ω = α̃ v_q v_q^2 -1 -2 𝒰_q + α̃^2 ℰℰ_q/(v_q^2 - v_+^2)(v_q^2 - v_-^2) .Their dependence on α̃ is explained similarly to Eq. (<ref>).According to Eq. (<ref>), the power fed to the system is given byP (ω)= - α m/e ħ∫∂ F/∂ t⟨ j_σ(x) ⟩dx = α̃ħ/4 e∫_0^∞ωℑ𝔪χ^j_σ_q ω |ℱ_qω|^2dq .The spin current susceptibility χ^j_σ_q ω = ∑_s s j^(s)_qω/ℱ_qω can be determined from Eq. (<ref>), which represents a continuity equation for a system with SOI. It is seen that the separate flow of the spin and charge is violated by the second term on the right hand side that refers to inherent mechanisms of mixing the spin and charge degrees of freedom by SOI. Using Eq. (<ref>), we obtainχ^j_σ_q ω = α̃ e v_F (1 + α̃^2 ℰ^2)(1 - v_q^2) + 2 𝒰_q/(v_q^2 - v_+^2)(v_q^2 - v_-^2) .The imaginary part of the susceptibility for ω >0 equalsℑ𝔪χ^j_σ_q ω = [(1 - v_-^2)(1 + α̃^2 ℰ^2) + 2 𝒰_q/2 v_-(v_+^2 - v_-^2)δ (ω - q v_- v_F).. + (v_+^2 - 1)(1 + α̃^2 ℰ^2) + 2 𝒰_q/2 v_+(v_+^2 - v_-^2)δ (ω - q v_+ v_F) ]πα̃ e v_F^2 q .The leading contribution to the dissipated power comes from the first δ-function,P (ω) = α̃^2 h ω^2 |ℱ_qω|^2 (1 - v_-^2)(1 + α̃^2 ℰ^2) + 2 𝒰_q/16 v_-^3(v_+^2 - v_-^2)(1 + ω/v_-^2 v_F∂ v_-/∂ q) ,with q determined from ω = q v_-(q) v_F. The dependence of the excitation velocity v_-(ℰ) on the electric field of the gate results in a sharp peak in P (ω,ℰ), as illustrated by Fig. 
<ref>.The dissipated heat could be measured by the scanning thermal microscopy <cit.>, but a detailed consideration of the heat release involving the kinetics of the phonon subsystem is beyond the scope of the present letter.Conclusion.—To summarize, the dynamic charge and spin response of a 1D electron system to an alternating electric field of the charged probe was investigated in the presence of the SOI. The electric response to the probe-induced non-stationary SOI appears because of the RSOI and iSOI present in the system. As a result of the interplay between the iSOI and RSOI, the velocities of the collective excitations and their spin-charge structure become tunable via the electric field of the gate, and so does the system conductivity determined from the dissipated power.I am grateful to Vladimir Sablikov for helpful discussions. This work was partially supported by Russian Foundation for Basic Research (Grant No 17–02–00309) and Russian Academy of Sciences.
http://arxiv.org/abs/1709.09530v3
{ "authors": [ "Yasha Gindikin" ], "categories": [ "cond-mat.str-el", "cond-mat.mes-hall" ], "primary_category": "cond-mat.str-el", "published": "20170927140045", "title": "Dynamic transport in a quantum wire driven by spin-orbit interaction" }
Département de Physique Théorique, Université de Genève, CH-1211 Genève 4, Switzerland This work is motivated by the puzzling results of the recent experiment [S. Tewari et al., Phys. Rev. B 93, 035420 (2016)], where a robust coherence recovery starting from a certain energy was detected for an electron injected into the quantum Hall edge at the filling factor 2. After passing through a quantum dot the electron then tunnels into the edge with a subsequent propagation towards a symmetric Mach-Zender interferometer, after which the visibility of Aharonov-Bohm (AB) oscillations is measured. According to conventional understanding, its decaywith the increasing energy of the injected electron was expected, which was confirmed theoretically in the bosonization framework. Here we analyze why such a model fails to account for the coherence recovery and demonstrate that the reason is essentially the destructive interference of the two quasiparticles (charge and neutral modes) forming at the edge out of the incoming electron wave packet. This statement is moreover robust with respect to the strength of the Coulomb interaction. We firstly exploit the idea of introducing an imbalance between the quasiparticles, by creating different conditions of propagation for them. It can be done by taking into account either dispersion or dissipation, which indeed results in the partial coherence recovery. The idea of imbalance can also be realized by applying a periodic potential to the arms of interferometer. We discuss such an experiment, which might also shed light on the internal coherence of the two edge excitations. Another scenario relies on the lowering of the energy density of the electron wave packet by the time it arrives at the interferometer in presence of dissipation or dispersion. This energy density is defined by a parameter completely independent of the injected energy, which naturally explains the emergence of a threshold energy in the experiment. Valid PACS appear here Coherence recovery mechanisms in quantum Hall edge states. Anna S. Goremykina, Eugene V. SukhorukovDecember 30, 2023 ==========================================================§ INTRODUCTION The edge excitations of the integer quantum Hall (QH) regime became the basis for the new field of electronic optics in two-dimensional electronic gases, due to their ballistic, one-dimensional and chiral behaviour<cit.>. At the same time a question about the nature of decoherencein the QH edge states, important to further quantum information applications, does not yet have a satisfactory answer. Quite a number of the experiments<cit.> based on<cit.> the electronic Mach-Zender interferometer (MZI)tried to shed light on the effects of dephasing and interaction taking place in such systems. Initially, it was demonstrated<cit.> that the coherence of the incoming electron current becomes greatly suppressed with increasing the energy and temperature at the filling factor ν=1. However, even a more striking behavior was revealed in the case of ν=2, where the lobe-structure of the visibility as a function of bias between the arms of the interferometer has been reported <cit.>. Such an effect was attributed <cit.> to a strong interaction between the channels, leading to a separation of the spectrum of the edge excitations into the fast charge and slow neutral modes. A later experiment<cit.> made it possible to concentrate on studying the decoherence of a single electron injected into the edge state. The scheme of the experiment is provided in the Fig. 
<ref>. In this set-up, created in the system of QH edge states at the filling factor ν=2, a single-electron wave packet (WP) is injected into one of the channels with the energy defined by the level of the QD, working as an energy filter for an electron passing through. After covering a distance |x_0| of 2.7μ m it eventually arrives at the MZI with the length of 7.2μ m for the both arms. The quantum interference is then analyzed by measuring the oscillations of the current I on the way out of the MZI and plotting subsequently the visibility V = I_max-I_min/I_max+I_min as a function of the injected energy. Strikingly, it does not vanish with the energy unlike in the cases mentioned above. Instead, after a short decay, it flattens for the energies larger than ∼20μ eV, with a significant coherence restoration of around 42%. In their paper experimentalists argued that such a behavior can be explained by a partial relaxation of the electron WP on its way to the interferometer. Thus, its energy is lowered to the extent when one of the main mechanisms of the decoherence in such a set-up, the inelastic scattering inside the interferometer, would be weak.An attempt to explain such an outcome was made in [expln_other] by taking into account the disorder present in the edge. However, interaction is considered perturbatively in that work and each of the edge states is studied as a Fermi liquid. Another paper [artur] adopted an approach from [dephase_int], which accounts for the strong Coulomb interaction between the edge channels via the bosonization approach in order to see if such a minimalistic model captures the effect. Alas, the visibility was predicted to decay in a power-law manner with increasing the energy of the electron. In the present paper – a logical continuation of the latter work – we are analyzing what prevented the existence of the visibility plateau in the previous model and describe several possible scenarios of the coherence recovery. Namely, we demonstrate within the initial model that it is the destructive interference of the two quasiparticles originating from the strong interaction between the edge channels that leads to a complete decoherence. Therefore, it is enough to introducedifferent conditions guiding their dynamics to prevent such an exact cancellation. Such conditions can already be applied in case of asymmetric interferometer. However, as we point out in the Appendix <ref>, a possible slight asymmetry cannot account for such a significant coherence restoration as witnessed in the experiment. Thus, we move on to more plausible scenarios. A natural assumption would be to include either dissipation or dispersion present in either of the charge or neutral modes. Both of the effects were found to be present<cit.> at least for the neutral mode, whereas the charge mode has not been studied in the experiment due to its larger excitation energy. At this point it might already seem intuitive that taking either of the effects into account for one of the modes results in the coherence recovery, which is indeed the case as we will demonstrate. On the other hand, there is much more to the physics, when modification of the dynamics is applied to the both modes.We start our analysis by taking dissipation into account via an additional imaginary term in the velocity of the neutral or the charge modes. 
In fact, the experiment [dissip_neutr] demonstrated more of a quadratic law behavior of the imaginary part of the dispersion relation, which we replace, however, by a linear one for simplicity. We then study the energy distribution function dependence from the injected energy _0 and demonstrate the emergence of the energy cut-off _0≪ 1/γ. Next, we arrive at the following important conclusions: * Characteristic time γ (i.e. the “strength” of dissipation as it is explained below) is directly proportional to x_0. Due to the relative analytical simplicity, we concentrate on the case of x_0≫ L, which corresponds literally to neglecting the dissipation inside the interferometer. We justify this approach by studyingthe limiting situation of x_0=0, while including the dissipation inside the MZI in the Appendix <ref>. Ultimately, we recover the same visibility decay, thus concluding that it is the relaxation of the WP before the interferometer which is more relevant. Hence, the MZI should be considered as a probe of the WP dynamics.* Unlike the initial model where the decoherence strength is governed by the parameter _0η, i.e. the one defining the possible phase space of the scattering states inside the interferometer, it is η/γ that replaces it in the dissipation case. The latter is derived from the energy loss of the WP and its proportionality to 1/γ by the time the WP reaches the interferometer. Throughout the paper we denote η=L/v-L/u, where u,v are the velocities of the charge and neutral modes respectively and L is a length of the arm of the interferometer.* It follows from the previous point that visibility decays for the energies _0γ≪ 1 and saturates at the plateau for _0γ≫ 1. The latter happens due to the inefficiency of the non-elastic scattering inside the interferometer. Particularly, in the case of a strong dissipation η/γ≪ 1 present in both modes, a complete coherence recovery is discovered! For a strong dissipation present in one mode we find a restoration of around 59%, while for a small dissipation the restoration appears to be of the order of η/γ.* The condition γ≫η, describing the strong dissipation can be interpreted as a comparison of the effective time broadening of the WP with η, which explains why non-elastic scattering becomes weak. This argument has direct connections to the dispersion case. Although the energy is conserved, the WP becomes naturally broadened in the space lowering down its energy density. Therefore, we expect the similar physics to take place and calculate only the case of the dispersion (quadratic) present in the neutral mode. Curiously, we arrive at the same result as for the dissipation. * Even though we are not in a position to make precise quantitative predictions we can still estimate the threshold energy _thr, for instance, in the case of dissipation. Judging roughly from the findings of [dissip_neutr], it is of order _thr∼ 40μ eV, which is in a good agreement with the experiment. These estimations can be found in Sec. <ref>. What is also intriguing is that despite a total dephasing of the WP in the linear dispersion case, each of the quasiparticles does not decohere on its own. We bring attention to the fact that this statement is robust with respect to the strength of interaction, i.e. supposing the charges of the quasiparticles to differ from 1/2, which is discussed in the Appendix <ref>. 
In order to measure this elusive coherence we propose a following experiment with a remark that we are going to ignore the effects of either dispersion or dissipation in its theoretical description. It is done to single out the bare contribution of a quasiparticle, which nevertheless does not undermine the goal of the experiment itself. Briefly, in addition to the DC current going through a QD one should apply a periodic bias of a small amplitudebetween the arms of the MZI, synchronized with the tunneling events. Next, one can separate the two contributions after the MZI and find the visibility of the DC current. What should be found is that it also saturates with the energy, oscillating at the same time, which is exactly the result of the modified time dynamics for one of the modes. Even if the experiment will also show the existence of an additional contribution from the dissipation or dispersion it would be instructive to compare them with such an oscillating part to check our theory.Finally, we briefly outline the structure of the paper. In Sec. <ref> and <ref> we present a formalism to describe a system introduced above as well a transport in it. Then, in Sec. <ref>, we bring up the main result of the paper <cit.> for a linear plasmon spectrum and formally analyze the decay of the visibility by studying the large injected energy asymptotics of the interference current. It is next followed by Sec. <ref>, where the general approach to finding the correlation functions is discussed. Particularly, we describe in detail, how they are modified by introducing the dissipation via the Fluctuation Dissipation Theorem (FDT). The details of the derivation are provided in the Appendix <ref>. Then we proceed to finding the energy distribution function of the WP as well as its energy in Sec. <ref>. There we provide a clear physical picture of the nature of the coherence recovery and explain how it is essentially connected to the dispersion case. Afterwards, our qualitative predictions are confirmedby the calculations for the visibility in the presence of either dissipation (Sec. <ref>) or dispersion (Sec. <ref>). To wrap up the paper the proposal of an experiment allowing for a periodic coherence recovery is discussed in Sec. <ref>. Finally, Sec. <ref> is devoted to the concluding remarks. § MODEL AND INITIAL STATE The basic characteristic of interest is the evolution of the initial state which is a charged QD with a corresponding energy _0. We will base its study upon the formalism developed in the previous work [artur], where the visibility is studied in the linear spectrum case. We will briefly outline the main steps of that approach and introduce necessary changes in the subsequent Sections <ref> and <ref>. Note that we work in the units, where ħ = c = e = 1.The general Hamiltonian describing our system (see Fig. <ref>) comprising a QD and a MZI has the formℋ = ℋ_0 + ℋ_d + ℋ_t + ℋ_t,d .Here ℋ_0 describes the Hamiltonian for the QH edge states for the filling factor ν = 2, ℋ_d corresponds to the QD's Hamiltonian, while the two left terms take into account the tunneling between the QD and the edge and the tunneling between the “up” and “down” arms of the interferometer. A standard approach would be to write down such a Hamiltonian in the bosonized form expressing the electron operator ψ_nm(x) in terms of a collective bosonic mode ϕ_nm(x):ψ_nm(x) =1/√(2π a) e^iϕ_nm(x), n=1,2;m=u,d,where n denotes a particular channel, while m corresponds to the “up” or “down” part of the interferometer. 
Also an ultraviolet cut-off a was introduced, whose role is to keep the correct electron anti-commutation relation. The fields ϕ_nm(x) are connected to charge densities ρ_nm(x) = 1/2π∂_xϕ_nm(x) and satisfy the commutation relations [ϕ_nm(x), ϕ_n'm'(y)] = iπδ_n,n'δ_m,m'(x-y). In terms of the new bosonic operators the Hamiltonian ℋ_0 reads:ℋ_0 = 1/8π^2∑_nn',m∫ dx V_nn'(x,y) ∂_x ϕ_nm(x) ∂_x ϕ_n'm(x),where the matrix V_nn'is presented in the form:V = [ 2π v_0 + UU;U2π v_0+ U ].The term U > 0 describes a strong screened Coulomb interaction, while v_0 is a Fermi velocity of the edge plasmon in the absence of interaction. Thus, the Hamiltonian ℋ_0 can be easily diagonalized applying a unitary transformation:ϕ_1m = 1/√(2)(_1m+_2m), ϕ_2m = 1/√(2)(_1m-_2m).The new fields _1m and _2m are called respectively charge and neutral modes according to their “physical sense”. In their basis ℋ_0 acquires a form:ℋ_0 = 1/4π∫ dx ∑_n,mv_n {∂_x_nm}^2,with the charge mode velocity v_1 ≡ u = v_0 + U/π being much larger than the velocity of the dipole mode v_2 ≡ v_0 due to the strong Coulomb interaction U≫ v_0. Note that we will mention below and discuss in the Appendix how this condition can be relaxed and outline the consequences.Moving to the Hamiltonian for the QD, we represent it in terms of the electron annihilation operator d, which givesℋ_d = _0 d^† d,where _0 is the energy level of the charged QD. Next, the tunneling term ℋ_t,d between the dot and the channel 1uin the Hamiltonian is presented as:ℋ_t,d=τ_d/√(2π a) e^-i ϕ_1u(x_0)d +h.c.Finally, the tunneling Hamiltonian ℋ_t between the 1d and 1u channels of the interferometer has the formℋ_t = ∑_j=ℒ,ℛτ_j/2π a e^iϕ_1u(x_j)e^-iϕ_1d(x_j)+ h.c,where x_ℒ=0 and x_ℛ=L.Having defined the Hamiltonian, we move to studying the evolution of a single-particle state injected into the QH edge from a QD. To account for tunneling into the edge, described by τ_d, non-perturbatively, the evolved single-particle state can be presented in a form of a wave packet (WP) of a certain width Γ, whose dynamics is governed by the Hamiltonian H_tot = H_0 +H_d + H_t,d. Such a packet will then describe a single-particle state at large times with an empty QD. The next idea is to construct an auxiliary initial state for a new system without a QD, whose evolution will still describe the old system comprising the edge and the QD.For that, the WP must be evolved back in time with H_0. In the simplest case of ν = 1 the result<cit.> reads|Ψ⟩_in = |τ_d|/v_0∫_-∞^x_0dx e^i(_0-i Γ)(x-x_0)/v_0e^iϕ(x)|Ω⟩,where Γ = |τ_d|^2/2v_0 is the QD level width and |Ω⟩ stands for the ground state of the system. This state, being evolved with the edge Hamiltonian H_0, coincides at time t = T ≫Γ^-1 with the state injected from a QD and evolved with the total Hamiltonian (<ref>). Thus, it allows reformulating the problem: instead of studying the evolution of an injected electron, one studies the dynamics of an auxiliary initial state of the system without a QD.But what happens if an electron tunnels into the edge of ν=2 with a non negligible interaction between the channels? We still may use the same initial state with a few modifications. Firstly, an obvious replacement of ϕ(x) by ϕ_1u(x) is in order. 
Secondly,this state should be considered as the one coming in from the free-fermion region x ≤ x_0 and propagating then into the region x>x_0, where there is interaction between the two channels.[Separation of the space into a “free” region and the one with interactions between the edge channels and/or other effects is somewhat artificial and originates from the choice of representing the initial state. Otherwise, we would have to account for higher order tunneling processes into the edge.] Next, one can add to that dissipation or dispersion. Eventually, the only quantities of interest are either the correlators for the fields inside the interferometer or the correlators between the free bosonic field ϕ(x), present in the WP (<ref>), and the charge or neutral modes in the region x>x_0. The contribution from those two modes can be treated separately, as if there was only a channel with no effective interaction region at x<x_0, and a region where an interaction is introduced, which changes the fields velocity and/or where the dissipation or dispersion are “turned on”. We will return to this matter later on, when we get down to the calculations for different regimes.§ TRANSPORT THROUGH THE INTERFEROMETER Tofind the visibility we first define currents through the interferometer, measured in the 1d channel, in the first order in tunneling amplitudes τ_L and τ_R in (<ref>) for the left and right junctions. The general expression for the current has the following structure<cit.>I(t) = ∑_j,j'=ℒ,ℛ⟨ I_jj'(t)⟩,where each term is of the formI_jj'(t)=-∫_-∞^t dt' I_jj'(t,t'),such that I_jj'(t,t') =[A^†_j'(t'), A_j(t)] + [A^†_j'(t), A_j(t')],A_j(t) = τ_j/2π a e^iϕ_1u(x_j)e^-iϕ_1d(x_j).The averaging in (<ref>) is taken over the initial state (<ref>) with two remarks. Firstly, the ground state of the system is a product state of the “up” and “down” subsystems: |Ω⟩ = |Ω_u⟩|Ω_d⟩, so that each |Ω_u,d⟩= | 0⟩_1| 0⟩_2 is the ground state of the two channels formed by the ground states of the charge and neutral modes. The terms ⟨ I_ℒℒ⟩ and ⟨ I_ℛℛ⟩ are the direct currents at the left and right tunnel junctions respectively, while the interference currents are represented by ⟨ I_ℒℛ⟩ and ⟨ I_ℛℒ⟩. These currents define the corresponding direct and interference charges Q_dir = ∫ dt[⟨ I_ℒℒ(t)⟩ + ⟨ I_ℛℛ(t)⟩], Q_int = 2∫ dt Re{⟨ I_ℒℛ(t)⟩},which in turns determine the visibility V:V = Q_max-Q_min/Q_max+Q_min = | Q_int|/Q_dir.The charges Q_max and Q_min in the above expression correspond to the maximum and minimum values of the charge Q = Q_dir + Q_int transmitted through the interferometer, which oscillates as a function of the AB phase _AB=Arg(τ_ℒτ^*_ℛ) in Q_int=| Q_int|cos(_AB+θ), where θ is an additional scattering phase shift.Throughout the paper we will be using a convenient notation Q≡∫_-∞^∞ dt ⟨ I_ℒℛ(t)⟩.§ VISIBILITY: CASE OF LINEAR SPECTRUM OF PLASMONSBefore we move to introducing dissipation or dispersion effects, it is instructive to understand what prevents the coherence recovery of visibility, if none of those phenomenon is taken into account. In what follows we are going to analyze previously obtained result for the visibility, looking at it, however, from a new point of view.Firstly, direct charge acquires a trivial form:Q_dir = |τ_|^2+|τ_|^2/uv.We note that the direct charge or current are the local characteristics, that originate from the local correlation functions either at the point x = 0 or x=L. 
As a result this quantity does not depend on the distance x_0 between the QD and the left tunneling junction. The interference current and hence the associated charge has a trickier structure, which reads⟨ I_(t)⟩ =-Γτ_τ^*_/2π^2 uvη∬_-∞^0 dτ dτ' e^i_0 (τ-τ')+Γ(τ+τ')/τ-τ'-iδ×[F(u,v)-F(v,u)],where F(u,v) =√(-i(τ'+t+x_0/v-L/v)+γ)/√(-i(τ'+t+x_0/v-L/u)+γ) × √(i(τ+t+x_0/v-L/u)+γ)/√(i(τ+t+x_0/v-L/v)+γ).Here η = L/v-L/u is a flight time difference between the two modes along the interferometer, mentioned in Sec. <ref>. In the original paper [artur], the expression (<ref>) was carefully transformed to a simpler form to perform further both analytical and numerical calculations. However, as we are only interested in whether the visibility saturates or decays, we are going to find the current asymptotics for a large initial energy _0→∞.In such a limit the biggest contribution to the integral comes from F(u,v)-F(v,u) taken at τ=τ'. Recalling that in order to obtain the interference charge (<ref>) one needs to integrate the current (<ref>) over t, we shift t by τ+ x_0/v-L/v and arrive at:Q_int∝∫ dt( √(t+iγ/t-iγ)√(t+η-iγ/t+η+iγ)..- √(t-iγ/t+iγ)√(t+η+iγ/t+η-iγ)) = 0,which is justified by the relation √(t+iγ/t-iγ) = (t) when γ→ 0. Therefore, there are two contributions from the fractional edge excitations (quasiparticles) F(u,v) and F(v,u) into the interference current that exactly cancel each other as energy _0 is increased, meaning the decay of the visibility. Remarkably, each quasiparticle does not decohere on its own and it is their total contribution to the current that leads to vanishing of the visibility. Interestingly, the effect is robust with respect to the strength of interaction when the fractional charges of the quasiparticles differ from 1/2, which is discussed in the Appendix <ref>. We, thus, conclude that the exact cancellation might be prevented by introducing different propagating conditions for the two modes. Remaining within the current model, we study the effect of a possible slight asymmetry in the interferometer (Appendix <ref>), which provides such conditions. However, we conclude that within the particular experiment, which is stated to be conducted with a symmetric interferometer, a slight asymmetry contribution can not playing a decisive role in the large coherence recovery effect.Finally, we note that the case of free fermions for ν=1 can be easily recovered from the above formulas by taking u=v, resulting in the following visibility:V_0= 2|τ_τ_|/|τ_|^2+|τ_|^2. § CORRELATION FUNCTIONS§.§ General approachExpression for the current (<ref>) consists of various electron correlation functions, which can be separated into the “up” and “down” groups, for the operators in the corresponding arms of the interferometer. As we are supposing the size L≪ |x_0| of the interferometer to be small and hence neglecting effects of dispersion or dissipation inside it, correlators for the “down” electron operators are of the following form:⟨Ω_d|Ψ_1d^†(x,t) Ψ_1d^†(y,t')|Ω_d⟩ =1/2π a1/√(i(y-vt')-i(x-vt)+δ)×1/√(i(y-ut')-i(x-ut)+δ),where δ^-1 is the energy ultra-violet cut-off. Such a result follows directly from the expansion of the bosonic field ϕ(x,t) into the eigenstates of a free Hamiltonian (<ref>) with a particular velocity v:ϕ(x,t) = ∫_-∞^∞dk/√(k){ e^ikx-ikvtâ_k +h.c.}and the relation ⟨â_kâ_k^†⟩ = θ(k) for the boson creation and annihilation operators at zero temperature. So averaging “down” operators with the initial state (<ref>) also becomes trivial. 
However, the “up” group includes correlators between the operators forming the initial wave-packet at x<x_0 and the ones in the region x>x_0. The way such correlators have been treated<cit.> greatly used translational symmetry both in time and space, which is only possible for the case of linear spectrum. Thus, we are going to need a more general approach. One could start with a specific 1D equation of motion for the bosonic field. For instance, the following equationϕ̇(x,t) +{ v_0 + [v-v_0]θ(x-x_0)}∂_x ϕ(x,t) = 0allows finding the bosonic field with a velocity v in the region x>x_0 in terms of the field ϕ(x<x_0,t). It then gives all necessary correlations for the linear spectrum case.[With such an approach one must not forget that the density of states proportional to 1/v_0 in the free region changes to 1/√(uv) in the interaction region. Thus, to keep electron operator continuous in space one must demand v_0 = √(uv).] To include the dispersion one needs to slightly modify the above equation and follow the same procedure. On the other hand, such a method fails to describe a dissipation, as it is impossible then to use the expansion over the Hamiltonian eigenstates in the region x>x_0. Therefore, we are going to handle it making use of the FDT, to which we devote the discussion below.§.§ Correlation functions for dissipation The linear response theory provides means of determining the correlator ⟨ϕ(y<x_0)ϕ(x>x_0,t)⟩ directly by connecting it to the response function to perturbation (FDT). In its turn, such a response function is found from the equation of motion for ϕ(x,t) with the source defined by the form of the perturbation. Such a method has a great advantage as it does not require any knowledge of the Hamiltonian, but the spectrum. It is therefore a perfect approach to describe a dissipative behavior.The dissipation itself will be accounted for phenomenologically by adding an imaginary term iv' to the plasmon velocity v in the region x>x_0, which is justified by the findings of the recent experiment.<cit.> For the sake of simplicity we suppose the velocity to be constant along x.To proceed with our scheme, we first choose a convenient form of perturbation: V(x,t) ≡δ(x-y) V(t), where y<x_0 is a particular coordinate. Then, the corresponding additional term to the Hamiltonian readsH_1 = -∫_-∞^∞ρ(x,t)V(x,t)= -ρ(y,t)V(t), where ρ(y,t) is a charge density. According to the Kubo formula we get then:⟨δρ(x,t)⟩ = i∫_-∞^0 dτ V(t+τ)K(y,x,τ),K(y,x,τ) = ⟨[ρ(y,τ),ρ(x,0)]⟩.Hence, the response function is of the formG(y,x,ω) = δρ_ω/δ V_ω = i∫_-∞^0dt e^iω tK(y,x,t).Next, one can easily demonstrate that S_ρ(k,ω) = -2θ(ω)ImG(k,-ω),where S_ρ(k,ω) = ∫ dx dt e^-ik(x-y)+iω t⟨ρ(y,t),ρ(x,0)⟩.The second step would be to find the response function to the perturbation from the following equation of motion with the source, generated by the form of the spectrum and the dissipation:-iωϕ(x,ω) +{ v+iv'θ(x-x_0)}∂_xϕ(x,ω) = V_ωδ(x-y) .Its solution in the frequency domain acquires the form:ϕ(y,x,x_0,ω) =V_ω/ve^-iω y/vθ(x-y)×{θ(x_0-x)e^iω x/v +θ(x-x_0)e^iω/v+iv'(x-x_0)+iω x_0/v}.Note, that the form of solution implies that v'<0, to ensure convergence in the region x>x_0. 
Moving then to the k domain for all the space variables and finding the response function (see Appendix <ref> for the details of calculation), we arrive at the following result:⟨ϕ(y)ϕ(x,t) -ϕ(y)ϕ(x,0)⟩= ln-i(y-x_0/v+x_0-x/v-iv')/-i(t+y-x_0/v+x_0-x/v-iv').To introduce a different velocity v_2 in the region x>x_0 one can simply replace v-iv' by v_2-iv', which can be reached by adding an additional term into (<ref>). We avoided that to concentrate on the dissipation effect.The expression (<ref>) has a very clear structure and one easily restores the correlation functions for a simple linear spectrum by putting v' to 0. Hence, the only difference between the two cases is the renormalization of the velocity for x>x_0: ṽ^-1 = v/v^2+v'^2, but more importantly a finite shift γ = x_0-x/v^2+v'^2v'of the logarithm branch point into the complex half-plane, which was infinitesimally small before. We are interested in the correlators like (<ref>) for x=0 or x=L. Consequently, γ acquires positive values and is proportional to the distance between the QD and the interferometer, which we assume to be the largest distance scale in our problem. One needs to remember that such a correlator goes into the exponent with a prefactor of 1/2 as ϕ(x) is either a neutral or charge field. § ENERGY DISTRIBUTION FUNCTION: THRESHOLD EMERGENCE In this section we concentrate mostly on the energy distribution function in presence of dissipation. Being relatively simple to account for on the calculations level, it also allows to grasp the physics quickly. Dispersion, on the other hand, is harder to deal with analytically, which nevertheless is not a problem on a qualitative level since the behaviour of the WP in its presence is quite clear: it broadens in space and cools down locally. We, therefore, linger on the case of dissipation and return to its comparison with the effect of dispersion in the end.Dissipation may be present both in the regions outside and inside the interferometer. First and foremost we would like to answer the question: which contribution is more important? Taking dissipation into account everywhere leads to cumbersome expressions without a clear analytical structure. So let us concentrate on two cases: x_0 = 0, so that the tunneling from the dot takes place directly into the interferometer, and |x_0|≫ L.Therefore, in the first case we only study the effect of dissipation inside the interferometer. We provide a careful discussion of this limit in the Appendix <ref>, where we demonstrate, that for _0→∞ the interference current vanishes. On the contrary, as it will be seen from the current Section, |x_0|≫ L case shows that it is the dissipation outside the interferometer that leads to the coherence recovery. Thus, we expect a crossover between the two cases and a partial coherence recovery for finite x_0. In this sense interferometer should be considered as a probe of the physics happening with the WP on its way to it.Before moving to any calculations for the current we start with discussing the energy distribution function of the injected electron in a particular channel n=1u; 2u:f_n (,x) = ∫ dt Δ W_n(,t,x),described by the Wigner function<cit.>W_n (,t,x) = ∫ dz e^-i z/2π⟨Ψ^†_n(x,t+z/2)Ψ_n(x,t-z/2)⟩,Δ W_n(,t,x) = W_n(,t,x) - W^FS_n(,t,x),where W^FS_n(,t,x) is a Fermi Sea contribution. 
We concentrate on f_1u(,x=0), whose exact form is presented below: f_1u(,0)= -Γ/4π^3∫_-∞^∞dt∫_-∞^∞dz e^-i z/z-iδ×∬_-∞^0 dτ dτ' e^i_0(τ-τ')+Γ(τ+τ')/τ-τ'-iδ×{χ_v(t,z,τ,τ')χ_u(t,z,τ,τ') - 1},where χ_v(t,z,τ,τ')= √(-i(t+z/2+τ+x_0/v)+γ)/√(-i(t-z/2+τ+x_0/v)+γ)×√(i(t-z/2+τ'+x_0/v)+γ)/√(i(t+z/2+τ'+x_0/v)+γ)and χ_u is obtained from χ_v by replacing v by u and γ by another one if necessary.Let us suppose that dissipation is present only in the neutral mode, so that γ in (<ref>) is finite and is infinitesimally small in the charge mode, represented by χ_u. To analyze the cumbersome expression for the energy distribution function we consider “weak” dissipation in a sense that γ≪ x_0/u-x_0/v. In such a case f_1u() can be easily treated[Note that while in the case of a linear spectrum, it was the two QPs, whose destructive interference lead to the complete decoherence, “turning” on the dissipation smears their distinctive characteristics, by destructing and cooling them down. Thus,we were able to separate them in χ_vχ_v-1 in (<ref>)only for a “small” dissipation.] by a separation of the contributions from the neutral and charge modes χ_vχ_u-1 = χ_v-1 + (χ_u-1), so that f_1u = f_v+f_u. The behaviour of f_u is known<cit.> and is governed by the law f_u(→∞) ∼ 1/ in the limit _0→∞. However, the contribution influenced by the dissipation acquires an energy cut-off of the order 1/γ in the same limit:f_v() ∼1/e^-2γ,which can be verified by an asymptotic integration off_v ∝∬_-∞^∞dtdz (e^-i z/z-iδ√(t+iγ/t-iγ)√(t+z-iγ/t+z+iγ)-1)derived from Eq. (<ref>) for a large initial energy. Alternatively, one can study a more convenient value ∂ f_v()/∂, whose exact form can be simply revealed:∂ f_v/∂ ∼ - γ^2/π^2{ K_1(γ||)()+K_0(γ||)}^2 ∼ -2γ/πe^-2γfor→∞,where K_0,1 is the modified Bessel function of the second kind. The plot of the above asymptotics for ∂ f_v/∂ is presented in the Fig. <ref> below. Such a behaviour implies that for _0γ≫ 1 it is the cut-off γ^-1 that defines the energy of the mode by the time it reaches the interferometer. Namely, the latter becomes proportional to γ^-1 as it is shown in Appendix <ref>. On the other hand, for energies below the cut-off _0 γ< 1, the injected state remembers its initial conditions. Hence, such a threshold phenomena will be reflected in the visibility behaviour in a form of a robust coherence plateau from a certain point of order γ^-1. At this point it is instructive to check if the estimation for the threshold energy _thr∼ħ/2γ gives a reasonable number. For that we use the definition (<ref>) of γ, with v' denoting the imaginary contribution to the velocity. We then roughly estimate the velocities v and v' from the experimental findings [dissip_neutr]. For the real contribution v to the velocity we take its low energy (up to ∼25μ eV) value of 4.6· 10^4 m/s. Concerning the imaginary part v', we note that although the curve for Im(k)/ω exhibits more of a quadratic behaviour we approximate it by a linear one and estimate v' ≃ - v Im[k(ω)]/Re[k(ω)]|_ω∼ 20μ eV∼ -v/5. Next, taking |x_0| = 3μ m, we find the threshold energy to be of the order _thr∼ 40μ eV. Surprisingly, using a very raw guess, we arrive at the value of the same order as in the experiment!Returning to the general discussion, we note that two scenarios are possible – dissipation present in one or both modes. In the former case it leads to the mere inequivalence between the two modes and thus the two contributions to the interference current do not cancel each other contrary to the case of linear dispersion (<ref>). 
The second possibility reveals even more interesting physics as one can demonstrate the full coherence recovery under certain conditions. Indeed, a parameter responsible for the decoherence strength<cit.> without dissipation is _0η, in a sense that for _0η≫ 1 the visibility vanishes. Therefore, “turning on”the dissipation the parameter becomes η/γ for _0γ≫ 1. Consequently, when η/γ≪ 1 in both modes the WP should not decohere at all, which we demonstrate in the next section. It can already be understood from the following arguments. Decoherence originates from the inelastic processes<cit.> inside the interferometer, whose probability increases with growing η/γ, as γ can be considered as a characteristic time width of the WP. Thus, in case of a large WP broadening scattering can be described perturbatively in terms of the above parameter. Therefore, the lower value of this parameter leads to a higher position of the visibility plateau, up to a complete coherence recovery, described by the visibility V_0 in (<ref>). It is therefore not surprising that the case of x_0 = 0 (dissipation only inside the interferometer) results in the complete decoherence of the WP with increasing initial energy. On the other hand, the situation of |x_0|≫ L implies a strong dissipation and a large energy loss before the interferometer. It is also the case that provides a clear physical picture and can be relatively simply analyzed analytically.Finally, we note that the similar reasoning is applicable to the dispersioncase. Although the energy is conserved, the energy density lowers down as a result of broadening of the WP. Hence, the strength of decoherence is described by the same ratio of η to the associated broadening of the WP in time. Therefore, such a threshold effect in the visibility might as well be explained by the presence of dispersion. We support this statement in Sec. <ref> byfinding the visibility asymptotics when a weak dispersion is present in one mode.§ VISIBILITY: EFFECT OF DISSIPATION To proceed we may directly use the expression for the interference current (<ref>), bearing in mind that the shift γ is not small and may be different for F(u,v) and F(v,u). Let us then study the current in case of γ being much larger or much smaller compared to the flight time difference η.§.§.§ Strong dissipation: γ≫η Considering the dissipation parameter γ to be large compared to η is natural in our model since we demonstrated that γ∝ x_0, while |x_0|≫ L. Let us first assume that dissipation is only present in the neutral mode, thenQ= -iτ_τ^∗_/2π u v η∫_-∞^∞dt { F(u,v)-F(v,u)},F(u,v) = √(t-iγ)/√(t+iγ)√(t-η+iγ)/√(t-η-iγ),with finite[In fact, in the lowest order the visibility does not depend on in which mode exactly the dissipation is “turned on”.] γ in F(u,v) and infinitesimally small in F(v,u). When γ≫η, we can expand F(u,v) as:F(u,v) = 1 + iηγ/γ^2+t^2.Meanwhile,for the part, originating from the charge modeF(v,u) = (t)(t-η),so that∫_-∞^∞dt{ F(u,v)-F(v,u)} = η(2 + iπ).Substituting it into (<ref>) and using a definition (<ref>) for the interference charge we get:Q_int = α2|τ_τ_|/uvcos(_AB+θ),with α e^iθ = 1/2-i/π, i.e. α∼ 0.592. As we mentioned before the direct charge does not change, so we arrive at the following result for the visibility in case of strong dissipation present in the neutral mode:V = α V_0. 
Interestingly, including strong dissipation into both modes, the coherence is completely recovered and the visibility acquires the maximum value V_0 in (<ref>), corresponding to the free fermion case!Particularly, using the expression for the current (<ref>) with dissipation γ_1 for the neutral mode and γ_2 for the charge one and expanding it in terms of η/γ_1,2, we can simply perform all integrations without taking the limit _0→∞. Eventually, we are able to present the visibility as a function of initial energy of the wave packet in the following form:V = V_0 (1-η^2/16A +η^2/128B), A =∑_n=1,21/γ_n(1/2γ_n-e^-2_0γ_n[_0+1/2γ_n]), B = [∑_n=1,2(-1)^n-11/γ_n(1-e^-2_0γ_n)]^2.Let us check simple asymptotics. When _0→ 0 visibility is strictly V_0, while for _0→∞ it saturates to a slightly smaller value V ≃ V_0[1-η^2/32(1/γ^2_1+1/γ^2_2)+η^2/128(1/γ_1-1/γ_2)^2].The plot of (<ref>) is presented in Fig. <ref>.§.§.§ Weak dissipation: γ≪η Let us study the other limit, when dissipation is small and is present in one, neutral, mode for simplicity. Again we are going to make use of the expressions for the current from the previous section (<ref>) and (<ref>) and study, therefore, the _0→∞ asymptotics. Note that the integral over time for the difference F(u,v)-F(v,u) obviously converges, although it is not the case for each of the terms separately. Hence, we are going to rewrite it as { F(u,v)-1} -{ F(v,u)-1}, so that each brace contains a well converging integrand over time. We denote the terms as F(u,v) and F(v,u) respectively.Consider F(v,u) (alternatively F(u,v)) in (<ref>) when γ≪η. Although it is already simply represented, we may rewrite it as follows for further needs:F(u,v) = - √(t-iγ)/√(t+iγ) +√(t-η+iγ)/√(t-η-iγ).The Fig. <ref> gives an idea of how the above expression looks like as a function of t. Finite γ smears sharp borders at t=0 and t=η in the real part of the function. However, the limit of small dissipation allows using of (<ref>) to integrate F(u,v) in the 1st and 3rd regions and alsoexpanding (<ref>) in terms of γ/η on the rest of the time axis.For that it is divided into five regions so that ∫ dt F(u,v)=∑_n=1^5 J_n, where J_n is a contribution to the integral from the nth area. Then, using t_0≪ηand taking into account the above explanations, the integrals over the regions in the vicinity of t=0 and t=η readJ_1+J_3= ∫_-t_0^t_0 dt(- √(t-iγ)/√(t+iγ) +√(t+iγ)/√(t-iγ))= 4iγln2t_0/γ.From the careful expansion of (<ref>) in other regions we get:J_2 = 2t_0-η + 2iγ lnη/t_0,J_4 = 4iγln2η/γ,J_5 = iγη/t_0.Hence, ∫ dt F(u,v) = 2t_0-η +4iγln2η/γ. The first two terms are eliminated in the expression for the current by the contribution∫ dt F^∗(v,u) = 2t_0 -η from the charge mode. Thus, the visibility is described by the following formula:V = V_0 2γ/πηln2η/γ. § VISIBILITY: EFFECT OF DISPERSION As we have already mentioned before dispersion effect on the visibility is similar to that of dissipation despite a qualitatively different behavior of the WP in its presence. However, the broadening of the WP in space in presence of dispersion reduces effectively the possibility of the inelastic scattering inside the interferometer, which should lead to a coherence recovery. To provide a quantitative evidence for it, we again study the limit of a small interferometer |x_0|≫ L and, for simplicity, assume a quadratic dispersion to be present in a neutral mode. We adopt the same method used in Sec. 
<ref> and start by solving the equation of motion for the bosonic field:{ iω-v_0∂_x - ( v_1-v_0)θ(x-x_0)∂_x+iγ∂_xθ(x-x_0)∂_x}ϕ(ω,x) = 0,with the dispersion of the form iγ∂_xθ(x-x_0)∂_x, which corresponds to a Hermitian operator. The velocity v_1 takes the desired values v or u, depending for which mode dispersion is taken into account. Unlike the dissipation case, we can directly solve this equation, using the solution for ϕ(x<x_0)≡ϕ^-, derived straightforwardly from the expression (<ref>), and performing the Fourier transform:ϕ^-(k) = ∫dk'/√(k')(â_k'δ(ω-k'v_0)/i(k'-k)+δe^i(k'-k)x_0. .+â^†_k'δ(ω+k'v_0)/-i(k+k')+δe^-(k'+k)x_0). Considering γ to be small, the solution for ϕ(x>x_0)≡ϕ^+ reads:ϕ^+≃∫dk'/√(k')[â_k'exp{-ik'(v_0 t-x_0)+iv_0/v_1k'(x-x_0)(1+k'γv_0/v^2_1)} + h.c.]Having such an expansion we formally write down all the needed correlation functions and arrive at the following expression for the integral over time for the auxiliary interference charge Q:Q = -Γτ_τ^∗_/2π^2uvη∫ dt∬_-∞^0 dτ dτ'e^i_0(τ-τ')+Γ(τ+τ')/τ-τ'-iδ ×(e^1/2{K_1(t,τ)+K_2(t,τ')}-F(v,u)),where F(v,u) was defined in (<ref>), while correlators K_1 and K_2 take dispersion into account as follows:K_1(t,τ) = ∫_0^∞dω/ωe^iωτ+i x_0ω/v(1+ωγ/v^2)+iω t(e^iηω-1),K_2(t,τ') = -K_1^∗(t,τ').Let us bring the integral (<ref>) to a dimensionless form by making a substitution of variablesω =βω̃,t = t̃/β,τ = τ̃/β, τ' = τ̃'/β,β = v √(v/x_0γ)and consider a limit _0→∞. For simplicity we are also studying the case βη≪ 1, making it analogous to the problem of a strong dissipation in one mode, discussed in Sec. <ref>. We, thus, arrive at the resultQ = iτ_τ^∗_/2π uvη∫_-∞^∞dt̃/β(sgn(t̃/β)sgn(t̃/β-η)-e^iηβ J(t̃)),J(t̃) = ∫_0^∞dω̃cos(ω̃t̃+ω̃^2).Note that J(t̃) as a function of t̃ is well bounded and hence the exponent in (<ref>) may be expanded in terms of ηβ:Q = -iτ_τ^∗_/2π uv(2+i∫_-∞^∞dt̃J(t̃)).The integral in the above expression can be handled with by rewriting it in a following way:∫_-∞^∞dt̃J(t̃) = ∫_0^∞dt̃∫_-∞^∞dω̃cos(ω̃t̃+ω̃^2) = π.Using the definitions (<ref>) and (<ref>), and taking into account that the direct charge does not change, we find the visibility to be exactly the same as the one found in the case of strong dissipation in one mode (<ref>)-(<ref>)! From such a result we may conclude that it is the energy density that plays a role in the coherence recovery effect. In case of a dissipation it decreases because of the energy loss, while for a dispersion the reason is the space broadening of the WP.§ PERIODIC COHERENCE RECOVERY FOR LINEAR SPECTRUM In Sec. <ref> we demonstrated how a separation of a WP into two quasiparticles in a linear spectrum case leads to visibility decay with increasing initial energy. Taking into account dissipation or dispersion effects we were able to present an explanation of partial coherence recovery within conditions of a particular experiment. On the other hand, it is of interest to imagine a new experiment which would allow to introduce “unequal” conditions of propagation for the two modes, so that their exact cancellation would be impossible already in a simple case of a linear spectrum. As a nature of the effects mentioned above is not yet understood, we consider an experiment, where they can be neglected (or one could take x_0=0). Then, a periodic bias Δμsin(Ω t) is applied between the upper and lower arms of the interferometer, so that Δμ≪Ω. 
Additional phase shifts appear then in the electron operators in the upper arm of MZI and the asymptotics for theinterference current at _0→∞ reads[The general shift is described by exp(iΔμ/Ω[cosΩ(t-L/v)-cosΩ t']). However, following (<ref>), such a phase is cancelled in front of F(v,u) when t'=t-L/v.]⟨ I_(t)⟩ = 2 i τ_τ^∗_/2π uvη×[ e^iΔμ/Ω{cosΩ(t-L/v)-cosΩ(t-L/u)} F(u,v)-F(v,u) ]Such a result implies the periodic coherence recovery, which we will study by expanding the exponent in the above expression up to the first order in Δμ/Ω. However, before proceeding, we point out that any additional random phase appearing in the arguments of the cosines results in averaging out of the effect. Hence, the periodic signal applied between the arms of MZI must be synchronized with the tunneling event from the QD.Integrating current over time, we arrive at the following expression:Q∝∫_-∞^∞dt(t-η) tsin(Ω[t-x_0/v-η]+φ),φ = arccos2Γ/√(4Γ^2+Ω^2).There are two contributions to the current: from the WP and the “background”, which is left after the WP has passed through the interferometer. It is periodic unlike the DC current, consisting of electrons rarely tunneling from the QD into the edge. Thus, these signals can be measured separately, which correspond to leaving integration over t∈[0,η] in (<ref>). Finding the interference charge and taking into account that the direct charge is not influenced by the applied bias, we express the saturated visibility in the lowest order of Δμ/Ω as V = 4V_0|Δμsinθ|sin^2ηΩ/2/ηΩ^2cosφ,where θ = Ω[x_0/v+η/2]-φ. As expected the periodic potential leads to the imbalance between the contributions of the two quasiparticles and an oscillating visibility for _0η≫ 1. Notably, in case ηΩ≪ 1 and η≪ x_0, the parameter determining the periodicity of the visibility recovery is mostly Ω x_0/v, while the amplitudebecomes proportional to Δμη. Thus, the periodic signal is able to partially restore the coherence of the WP, which is a direct consequence of the fact that the charge and neutral modes behave as quasiparticle, which do not dephase on their own.§ CONCLUSIONS In this paper we make several important statements. For a start, we provide a plausible explanation of the recent experiment [main_roche], where a robust coherence plateau in the visibility of a single electron state, initially injected into a QH edge channel, was detected. The visibility of this electron WP, sent to the MZI, was found to remain almost constant as a function of the injected energy starting from a certain threshold value, while having been expected to decay and vanish. This expectation was supportedin the theoretical work [artur], where a bosonization approach was used to account for a strong Coulomb interaction present in the QH edge system. It made obvious the inability of such a minimalistic model to capture the effect of the coherence recovery. Analyzing the reason behind it, we find that ultimately the origin of the visibility decay lies in the destructive interference between the two quasiparticles arising from the interaction in the QH edge at the filling factor ν=2. Nevertheless, each of the quasiparticles remains coherent, which is a puzzling outcome. We check that this conclusion is robust with respect to the strength of interaction, i.e. when the charges of these quasiparticles deviate from 1/2. Thus, a trivial mechanism of the partial coherence recovery could be implemented in the model by merely creating an imbalance between the two contributions of the quasiparticles. 
One source of such an imbalance is an asymmetry of the interferometer. However, we show that a slight asymmetry that could be present in the experiment is not responsible for such a strong coherence recovery. Moreover, modifying the lengths of the arms of the MZI with gates it is easy to get rid of this contribution completely. The two other candidates, namely dissipation or dispersion, which were found to be present<cit.> at least in the neutral mode, can also account for the aforementioned imbalance. However, most importantly, they can provide the second means of the coherence recovery. Indeed, in the presence of either dissipation or dispersion the energy density of the WP is lowered by the time it reaches the interferometer. Hence, it weakens the inelastic scattering inside it, which recovers the coherence. Therefore, the stronger the energy loss results in the bigger coherence recovery. In support of this reasoning, we demonstrate that coherence can be fully recovered if a strong dissipation is present in both modes. We point out that our calculations are performed assuming the distance between the QD and the interferometer |x_0| to be much larger than the size of interferometer L. In this limit we are able to predict the values of the coherence plateau in cases of strong/weak dissipation and also weak dispersion. Considering |x_0|≫ L allows to neglect the effects of dissipation or dispersion taking place inside the interferometer, so we are not in a position to predict exact values of the visibility plateau or the threshold energy in a particular experiment. However, what we do show is that dissipation or dispersion inside the interferometer on their own (x_0=0) do not lead to coherence recovery. Thus, it is the outside dynamics which plays a significant role. These assumptions should be easy to check experimentally by studying the values of the visibility plateau as a function of x_0 that must be increasing with it.Moreover, the rough estimations of a threshold energy, defined only by the dissipation strength, are in a good agreement with the experiment.Finally, we return to the question of the quasiparticle nature of the two edge excitations and propose an experiment which could shed light on their physics. Namely,we study a set-up where a periodic bias of a small amplitude, synchronized with the tunneling event from the QD, is applied between the arms of a symmetric interferometer. Assuming a linear spectrum, one then discovers sustained oscillations of the visibility at a large injected energy, which is exactly a consequence of the intrinsic coherence of the two modes. Although, dissipation or dispersion might also manifest themselves it would be important to compare all the contributions.Acknowledgements. This project was supported by the Swiss National Science Foundation. § CORRELATION FUNCTIONS WITH DISSIPATION Following the formula (<ref>) and a relation between a charge density and a bosonic field, one gets the following expression for the linear response function:𝒢(k',k,k”,-ω) = 2π k/v(δ(k'-ω/v)δ(k”+ω/v+k)/k+ω/v+iv' - δ(k'-ω/v)δ(k”+ω/v+k)-δ(k+k')δ(k”)/k+ω/v-iδ),which immediately revealsS_ρ(k',k,k”,ω) =-4π k ω/vṽ'θ(ω)δ(k'-ω/v)δ(k”+ω/v+k)/(k+ω /ṽ)^2+ω^2/ṽ'^2with the help of the FDT statement (<ref>), where ṽ' = v^2 + v'^2/v'. 
Hence, the spectral function acquires the following form in the coordinate space, valid for v' < 0:S_ρ(y,x,x_0,ω) = -ω(ṽ^-1+iṽ'^-1)/2π ve^iω/v(y-x_0)+iω(ṽ^-1+iṽ'^-1)(x_0-x).Integrating now this expression over the coordinates and taking care of the boundary conditions, we arrive at the correlator for the fields:S_ϕ(y,x,x_0,ω) = 4π^2∫_-∞^y dy'∫_x^+∞ dx' S_ρ(y',x',x_0,ω) =2π/ωe^-iω/v(x_0-y)-iω (x-x_0)(ṽ^-1+iṽ'^-1).The seeming divergence at zero ω is non-physical and disappears in the correlators for the electron operators. Performing a Fourier transform we find the expression (<ref>). A simple check in the last expression shows that in the limit ṽ' = 0, we restore the correlation function for free bosonic fields.§ ENERGY OF A MODE WITH DISSIPATION Here we find the energy E_v of the neutral mode with dissipation, following the discussion of the energy distribution in Sec. <ref> and using the expression (<ref>) in the limit _0→∞:E_v = ∫ d f_v()∝∫ dt ∫ dz δ'(z)/z-iδ(√(t+iγ)√(t+z-iγ)/√(t-iγ)√(t+z+iγ)-1) = i∫ dt 2tγ-iγ^2/2(t^2+γ^2)^2=π/4γ,where we used integration by parts. Restoring all the coefficients E_v = 1/16π√(v/u)γ^-1. Note that taking the limit γ→ 0 we recover the energy ultraviolet cut-off, as it should be. Energy for the charge mode is found exactly in the same manner. § DECOHERENCE IN CASE OF X_0=0. To include dissipation inside the interferometer into consideration we must modify the expression for the current (<ref>). Again we are going to study its integral over time Q, from which the interference charge is found. Then from the general definition(<ref>) in case x_0=0:Q≡∫_-∞^∞ dt ⟨ I_(t)⟩ = ∫_-∞^∞ dt ∫_-∞^t dt' (M(t,t')+M(t',t)),where M(t,t') ∝√(t-L/u+iγ_2)/√(t-L/u-iγ_2)√(t-L/v+iγ_1)/√(t-L/v-iγ_1)t'-iδ/t'+iδ(γ_1/(t'-t+L/v)^2+γ^2_1-γ_2/(t'-t+L/u)^2+γ^2_2),with δ being infinitesimally small and γ_1,2 being finite to account for the dissipation. Note that when γ_1,2 are also infinitesimally small the lorentzians in the above expressions become Dirac δ-functions. Therefore, as the integration for t' comes from the region t' ⩽ t, the contribution from M(t',t) vanishes. In this manner the expression (<ref>) can be found. However, with dissipation being present inside the interferometer, M(t',t) needs to be taken into account. We first rewrite the integrals over t', making them running up to t'=0.Next, we shift t by -t' and change the sign of t' in M(t',t), which allows to rewrite Q as an integral over the whole axis of t':Q∝∫_-∞^∞ dt√(t-L/u+iγ_2)/√(t-L/u-iγ_2)√(t-L/v+iγ_1)/√(t-L/v-iγ_1)∫_-∞^∞dt' t+t'-iδ/t+t'+iδ(γ_1/(t'+L/v)^2+γ^2_1-γ_2/(t'+L/u)^2+γ^2_2).Integration over t' can be easily performed by moving the integration contour in the positive half plane, which results in zero for the whole expression. § INTERFERENCE CURRENT IN THE LIMIT _0→∞ FOR ARBITRARY FRACTIONAL CHARGES OF THE QUASIPARTICLES The integral over time of the interference current at a large initial energy _0 acquires the form:𝒬 = ∫_-∞^∞dt⟨ I_(t)⟩∝∫_-∞^∞dt∫_-∞^∞ dt̃χ_v(t,t̃)χ_u(t,t̃)ℱ(t̃),where χ_v(t,t̃) = (t-t̃+x_0/v-iγ)^δ_1/(t-t̃+x_0/v+iγ)^δ_1(t+x_0-L/v+iγ)^δ_1/(t+x_0-L/v-iγ)^δ_1and χ_u(t,t̃) is defined in the same manner by replacing v→ u and δ_1→δ_2. The last multiplier in (<ref>) is defined as follows:ℱ(t̃) = 1/(i(t̃-L/v)+γ)^2δ_1( i(t̃-L/u)+γ)^2δ_2 -1/(-i(t̃-L/v)+γ)^2δ_1(-i(t̃-L/u)+γ)^2δ_2.If δ_1 = δ_2 = 1/2, then F∝δ(t̃-L/v) - δ(t̃-L/u),so that we easily arrive at the previous formulas for the interference charge(<ref>), which eventually results inzero for the asymptotics. 
When the charges are arbitrary, ℱ can not besimplified. However, we are still able to derive a few conclusions. Firstly,∫_-∞ ^∞dt̃ℱ=0. It can be justified by the fact thatfor t̃ outside the region [L/u,L/v] the integrand quickly decays, due to the1/t^2 decrease of both terms in ℱ. Hence, as both branch points in each term are situated in the same half-plane, the contour can be modified by being dragged into the infinity of the other half-plane. Such a procedure shows that the integral indeed vanishes. Thus, we can subtract the unity from χ_vχ_u in (<ref>). The advantage is that χ_vχ_u - 1 becomes non-zero only in certain regions of t. Obviously, these regions are defined by various t̃, but the only relevant t̃s belong to [L/u,L/v], as we realized above from analyzing ℱ. In this case the value of χ_vχ_u can be simply found:χ_vχ_u-1=e^2iπδ_1-1,for t∈ [L-x_0/u,t̃-x_0/u]∪ [t̃-x_0/v,L-x_0/v] and zero everywhere else.Therefore, integrating χ_vχ_u-1 over the indicated region in t shows that there is no a dependence on t̃, so we are left with 𝒬∝∫ dt̃ℱ = 0. § INTERFERENCE CHARGE IN CASE OF A SMALL ASYMMETRY IN THE MZI FOR A LINEAR DISPERSION Adding an asymmetry to the interferometer (the upper arm of the length L_1 and the lower one of L_2≳ L_1) one arrives at the expression for the interference charge of a form resembling the one (<ref>) in the previous Appendix:Q∼∫_-∞^∞ dt ∫_-∞^∞ dt̃χ_v(t,t̃)χ_u(t,t̃)F(t̃),with χ_v,u(t,t̃) describing the propagation of the neutral or charge modes in the upper arm of the interferometerχ_v(t,t̃) = (t+x_0-L_1/v)(t-t̃+x_0/v),andF() being responsible for the interference effect:F() = 1/{(-L_1/u-iδ)(-L_2/u-iδ)(-L_1/v-iδ)(-L_2/v-iδ)}^1/2 - c.c.,where δ is an infinitesimally small shift. In case of a symmetric interferometer L_1 = L_2 the above expression F() acquires a simple form (<ref>) in terms of the delta-functions. To find where the main contribution to F() comes from, we multiplyand divide the second term in it by the denominator of the first one, which immediately results in the simplificationF() =1-(-L_1/u)(-L_1/v)(-L_2/v)(-L_1/v)/{(-L_1/u-iδ)(-L_2/u-iδ)(-L_1/v-iδ)(-L_2/v-iδ)}^1/2.It now becomes clear from the form of the numerator that the only relevantlies in the following region: ∈ [L_1/u;L_2/u] ∪ [L_1/v;L_2/v]. Here we have assumed that L_1/v ≫ L_2/u, which seems natural due to the fact that u≫ v and that we are mainly interested in a case of a small difference between L_2 and L_1. Moreover, the integral of F() overis simply zero similarly to the statement made in the previous Appendix. Hence, we may rewrite Q asQ∼∬ dt d[χ_u(t,)χ_v(t,)-1] F().The idea is then to study the expression in the square brackets as a function of t, using the particular values for . Let us denote firstly t_1 = L_1-x_0/u, t_2 = L_1-x_0/v, t_3 =- x_0/u, t_4 =-x_0/v. Next, it is easy to show thatfor ∈ [L_1/u;L_2/u]: χ_u(t,)χ_v(t,) = -2, if t∈ [t_1,t_3] ∪ [t_4,t_2],0, elsewhere for ∈ [L_1/v;L_2/v]: χ_u(t,)χ_v(t,) = -2, if t∈ [t_1,t_3] ∪ [t_2,t_4],0, elsewhere,where the last expression holds for L_2-L_1/v<x_0(1/u-1/v), which essentially means that the difference L_2-L_1 must be smaller than x_0 and is quite applicable to the conditions of the experiment. With that in mind, we can perform integration over t and arrive at the following expression Q∼η_1 I_1 + I_2;I_1 = ∫ _L_1/u^L_2/udF(), and I_2 = ∫_L_1/v^L_2/vd(2 - L_1[1/u+1/v]) F().with the notation η_1 = L_1(1/v-1/u) similar to the one used throughout the main part of the paper. 
Both I_1 and I_2 are table integrals<cit.> if rewritten in a suitable form. Indeed, we express the first integral I_1 as followsI_1 = -2i∫_L_1/u^L_2/ud/{(-L_1/u)(L_2/u-)(L_1/v-)(L_2/v-)}^1/2 = -4i F(π/2, r)/√(L_1L_2)(1/v-1/u),where F(π/2,r)≡∫_0^π/2dα/√(1-r^2sin^2α) is an elliptic integral of the first kind and r = L_1-L_2/√(uvL_1L_2)(1/u-1/v). Note that, when L_1 = L_2, then r = 0 and F(π/2,0)=π/2 , while η_1I_1(L_1=L_2) = -2π i. We deal with the second integral in the same mannerI_2= 2i∫_L_1/v^L_2/v d2 -L_1(1/u+1/v)/{(-L_1/u)(-L_2/u)(-L_1/v)(L_2/v-)}^1/2= 4i/√(L_1L_2)(1/v-1/u)[2(L_2/v-L_1/u)Π(π/2, L_1-L_2/vη_1,r)+L_1/u F(π/2,r) - L_1(1/u+1/v)F(π/2,r)],where Π(π/2, L_1-L_2/vη_1,r) ≡∫_0^π/2dθ/(1-L_1-L_2/vη_1sin^2θ)√(1-r^2sin^2θ) is an elliptic integral of the third kind. Again a simple check for L_1=L_2 gives Π(π/2, L_1-L_2/vη_1, 0) = π/2, which results in I_2(L_1 = L_2) = 2π i. Therefore, the total charge Q = 0 in this limit as it should be. Combining the above findings for I_1 and I_2 we finally get:Q∼i/√(L_1L_2)(1/v-1/u)[(L_2/v-L_1/u)Π(π/2, L_1-L_2/vη_1,r)-η_1F(π/2,r)].The experiment [main_roche] was stated to be conducted with a symmetric interferometer. Thus, accounting for only a slight asymmetry and studying the expression (<ref>) as a function of L_2 in the vicinity of L_1 one arrives at the interference charge of the order Q ∼L_2-L1/vη_1. Therefore, an asymmetry indeed leads to a partial coherence recovery, but apparently it can not be the main reason behind its large value in the experiment. apsrev4-1
http://arxiv.org/abs/1709.08969v1
{ "authors": [ "Anna S. Goremykina", "Eugene V. Sukhorukov" ], "categories": [ "cond-mat.mes-hall" ], "primary_category": "cond-mat.mes-hall", "published": "20170926122335", "title": "Coherence recovery mechanisms in quantum Hall edge states" }
http://arxiv.org/abs/1709.09245v1
{ "authors": [ "I. A. Zhuravlev", "S. V. Barabash", "J. M. An", "K. D. Belashchenko" ], "categories": [ "cond-mat.mtrl-sci" ], "primary_category": "cond-mat.mtrl-sci", "published": "20170926200638", "title": "Phase stability, ordering tendencies, and magnetism in single-phase fcc Au-Fe nanoalloys" }
School of Physics and Astronomy,andCentre for the Mathematics and Theoretical Physics of Quantum Non-equilibrium Systems, University of Nottingham, Nottingham NG7 2RD, United Kingdom In these four lectures I describe basic ideas and methods applicable to both classical and quantum systems displaying slow relaxation and non-equilibrium dynamics. The first half of these notes considers classical systems, and the second half, quantum systems. In Lecture 1, I briefly review the glass transition problem as a paradigm of slow relaxation and dynamical arrest in classical many-body systems. I discuss theoretical perspectives on how to think about glasses, and in particular how to model them in terms of kinetically constrained dynamics. In Lecture 2, I describe how via large deviation methods it is possible to define a statistical mechanics of trajectories which reveals the dynamical phase structure of systems with complex relaxation such as glasses.Lecture 3 is about closed (i.e. isolated) many-body quantum systems. I review thermalisation and many-body localisation, and consider the possibility of slow thermalisation and quantum non-ergodicity in the absence of disorder, thus connecting with some of the ideas of the first lecture. Lecture 4 is about open quantum systems, that is, quantum systems interacting with an environment. I review the description of open quantum dynamics within the Markovian approximation in terms of quantum master equations and stochastic quantum trajectories, and explain how to extend the dynamical large deviation method to study the statistical properties of ensembles of quantum jump trajectories.My overall aim is to draw analogies between classical and quantum non-equilibrium and find connections in the way we think about problems in these areas.Aspects of non-equilibrium in classical and quantum systems: Slow relaxation and glasses, dynamical large deviations, quantum non-ergodicity, and open quantum dynamics Juan P. Garrahan Received: Jul 6, 2017 / Accepted: Sep 11, 2017 ========================================================================================================================================================================= § INTRODUCTION The purpose of these lecture notes is to introduce some general and simple ideas about slow relaxation and non-equilibrium dynamics in many-body systems, both classical and quantum.The aim is not to be comprehensive, but rather to describe particular ways in which to address certain interesting questions in non-equilibrium, and to highlight potential connections between problems in areas that may appear very different.The first part of these notes deals with classical systems. Lecture <ref> is about the glass transition problem, an important and yet not fully understood general problem in condensed matter science, and also a paradigm of slow and complex relaxation more generally. I describe basic questions that emerge from the phenomenology of glass forming systems such as supercooled liquids, and briefly discuss basic theoretical perspectives. 
Most of the focus is on a general modelling of glasses in terms of systems with constraints in their dynamics, an approach that has wider applicability, as discussed later in the notes.A key insight that emerges from these considerations is that the interesting behaviour in manysystems with cooperative dynamics is to be encountered in properties of the trajectories of the dynamics rather than in configurations, highlighting the need for a statistical mechanics approach to study trajectory ensembles. Lecture <ref> describes how such approach can be constructed with dynamical large deviation methods. Such an approach leads to thinking about dynamics in a thermodynamic-like fashion, for example by revealing the existence of dynamical phases - and phase transitions between them - that underly observed fluctuation behaviour in the evolution of systems with cooperative and collective dynamics. The second part of these notes is about non-equilibrium in quantum systems.Lecture <ref> discusses dynamics in isolated quantum many-body systems, where despite unitary evolution, there is both equilibration and thermalisation.I furthermore describe many-body localisation in disordered systems as a novel paradigm for quantum non-ergodicity. I also consider similarities and differences between many-body localisation and slowdown and arrest in classical glasses, contrasting mechanisms based on disorder to those based on dynamical constraints.Lecture <ref> is about open quantum systems.I discuss how the dynamics of quantum systems that interact with an environment can be described in an approximate Markovian way, considering similarities and differences with classical stochastic systems.I also explain how to extend large deviation methods to the open quantum case in order to study properties of quantum jump trajectories using similar ideas to those employed in classical non-equilibrium.There are many excellent reviews on several of the topics covered in these notes.For the glass transition problem these include Refs. <cit.>; for kinetically constrained models, Refs. <cit.>; for large deviations, Refs. <cit.>.In the case of quantum systems, comprehensive recent reviews on thermalisation and many-body localisation include Refs. <cit.>; and on open quantum systems, Refs. <cit.>. The selection of topics, many of the examples shown, and the overall approach tothe problems discussed here is also based on my own work in these areas.§ SLOW RELAXATION IN CLASSICAL SYSTEMS §.§ Phenomenology of the glass transition In the physical sciences glasses are the paradigm of non-equilibrium matter: when too cold or too dense fluids cease to flow, forming the amorphous solid-like material we call glass. This solidification occurs in the absence of any apparent structural ordering, in contrast to more conventional condensed matter. Dynamical arrest like that of glasses is ubiquitous in nature. It occurs in a vast range of systems spanning microscopic to macroscopic scales. Despite its practical importance, a fundamental understanding of the glass transition is still lacking, making it one of the outstanding problems of condensed-matter science. For reviews see <cit.>.Figure 1 illustrates the glass transition problem.The central physical ingredient necessary for glassy slowing down is that of excluded volume interactions at high densities. Under such conditions motion is severely restricted through steric constraints, cf. Fig. 
1(a), and particles can only move if the neighbouring particles that are blocking their path move before.The higher the density the more collective motion becomes.Cooperative relaxation leads to separation of timescales, where short scale motion (e.g. harmonic and anharmonic vibrations) is fast but the larger scale motion required for structural relaxation is slow. This becomes manifest in time correlations displaying metastability and decaying stretched exponential manner in time, indicative of a wide distribution of relaxation timescales, cf. Fig. 1(b). The typical relaxation time of supercooled liquids (inferred either from dielectric relaxation or from their viscosity) grows dramatically with decreasing temperature, a phenomenon which is quasi-universal, cf. Fig.1(c). By convention, when relaxation time becomes 100s (corresponding to a viscosity of fifteen orders of magnitude higher than that of a normal liquid), a liquid cannot be experimentally distinguished from a solid and such materials undergo a so-called experimental glass transition, corresponding to a falling out-of-equilibrium into the solid amorphous glass. A hallmark of dynamics close to the glass transition is dynamical heterogeneity, illustrated in Fig. 1(d) which shows the spatial pattern of relaxation in a two-dimensional supercooled Lennard-Jones mixture.The slower the relaxation the more relaxation fluctuates in time and space: dynamics of systems close to the glass transition is fluctuation dominated and far from mean-field. It is important to note that the generic physics of the glass transition occurs in classical many-body systems in the absence of disorder, making them different from disordered systems such as spin-glasses.§.§ Theoretical perspectives on glasses A theoretical understanding of the slowdown and arrest of glass formers is a long-sought after goal of condensed matter science. The key distinguishing feature of glasses in contrast to more conventional condensed matter is that their observed dynamics is not accompanied by obvious structural changes.Given that slowdown and arrest is widespread in soft materials it is expected that the underlying theory for such phenomena should be generic. Loosely speaking there are two classes of theoretical approaches.One can be termed thermodynamic in the sense that it purports that the observed dynamics is a manifestation of underlying thermodynamic singularities. The most prominent of these is the so-called random fist-order transition (RFOT) theory <cit.>.Figure 2(a) serves as a cartoon. It sketches the energy (or free-energy) of a glassy system as a function of its (perhaps coarse-grained) configuration.Due to (effective) frustration this free-energy landscape in configuration space is “rugged” with a potential proliferation of low lying states that under the right conditions may dominate the thermodynamics, leading eventually to a static singularity at some non-zero temperature <cit.>. While this thermodynamic transition to an “ideal glass state” may be in an unobservable range, RFOT purports that is has consequences in the observable dynamics, e.g., with growing timescales due to growing correlation lengths of the underlying“amorphous order” of the glass phase, and relaxation times diverging. 
Within RFOT the growth of the primary relaxation time τ with decreasing temperature is often assumed to follow a Vogel-Fulcher-Tammann (VFT) law, τ = τ_0 e^A/(T - T_c) <cit.>, where T_c is the temperature at which times would diverge, often identified with the temperature at which the static transition to the ideal glass state would occur. Further intricacy of the landscape is also predicted within RFOT to lead to subsequent singularities deep in the glass phase, such as the “Gardner transition” and consequent changes in the rigidity properties of the amorphous solid <cit.>. While many of the ideas behind RFOT trace back to systems with quenched disorder such as spin-glasses <cit.>, its application to structural glasses is validated by exact results in the mean-field (large dimensional) limit for both liquids and hard spheres <cit.>. An alternative perspective is dynamical, in particular the so-called dynamical facilitation (DF) theory <cit.>, which is the one that I favour. Figure 2(b) serves as its corresponding cartoon: the idea is that it is not the (statistical) properties of configurations or states that is relevant, but that of the pathways between them. That is, what matters is the connectivity of configuration space, and under the right conditions (as in cold liquids or dense colloids) the scarcity of dynamical pathways give rise to glassy slowing down, without the need for any underlying static singularity. Figures 2(b) and (c) illustrate the basic mechanisms by which this combination of simple thermodynamics and complex dynamics can be achieved. Imagine that the elementary excitations that give rise to motion are effectively localised and are scarce at low temperatures (e.g. they are energetically costly). These could for example be the kind of localised defects that are prominent in for example tilings and coverings at high densities <cit.>. In such cases, thermodynamics would be simple and non-singular (say that of a low density gas of such excitations), cf. Fig. 2(c). However, if their dynamical evolution is subject to local dynamical constraints (as for example excitations that cannot appear or disappear spontaneously, but can branch or coalesce) then the structure of their trajectories could be much more complex, cf. Fig. 2(d).§.§ Kinetically constrained models of glasses The basic tenets of DF theory are realised explicitly in a class of idealised models known as kinetically constrained models (KCMs) <cit.> and whose study serves as the basis for DF insights <cit.>. The key idea is to encode the local effects of excluded volume interactions in the definition of the rates for transitions for the stochastic dynamics of these models. Figure 3(a) illustrates this: consider the transition where the particle labelled i moves to occupy the empty space labelled j. Let us call the transition rate for this process γ_ij → ji, and that for the reverse process γ_ji → ij. In defining dynamics, we usually require that the rates obey detailed balance with respect to the Boltzmann distribution corresponding to some energy function E, such that γ_ij → ji / γ_ji → ij = exp(-βΔ E_ij), where Δ E_ij is the difference in energies before and after the transitions. Assuming ergodicity in the dynamical rules, this should guarantee that this dynamics converges to the thermodynamics associated with this Boltzmann distribution. 
Now, dynamics-to-thermodynamics is many-to-one, that is, there are actually many ways to satisfy the same detailed balance condition with different definitions for the dynamical rules. This fact is often exploited in Monte Carlo simulations by defining an artificial dynamics that speeds up convergence to equilibrium (and where actual dynamical aspects are not of interest).But it can also be exploited to model the actual evolution of systems with cooperative dynamics such as glasses.In Fig. 3(a) the neighbouring particles k and l are not changing positions in either the forward or backward transitions, but their presence may affect the rates at which the transitions take place.For example, if we think of excluded volume, the location of such neighbouring particles may determine whether the transitions can take place at all.That is, we can make the rates dependent on these neighbours, γ_ij → ji→ F(kl) γ_ij → ji , γ_ji → ij→ F(kl) γ_ji → ij,and detailed balance is still obeyed,F(kl) γ_ij → ji/F(kl) γ_ji → kl = γ_ij → ji/γ_ji → kl = e^-βΔ E_ij .In general, total transition rates are composed of an “asymmetric part”, γ_ij → ji (in the sense that it changes depending on the direction of the transition), and a “symmetric” part which is the same in both directions - this latter we call a kinetic constraint as it does not affect the eventual stationary state, cf. DB, but does determine the actual dynamical evolution <cit.>.§.§.§ Fredrickson-Andersen model Perhaps the simplest model that exploits the above idea to obtain non trivial relaxation dynamics is the one-spin facilitated Fredrickson-Andersen (FA) model <cit.>. We will consider it in one dimension for simplicity. The FA model is defined in terms of binary variables n_i = 0,1 occupying the sites of a one dimensional lattice.Its energy function is defined as, E = J ∑_i=1^N n_i,leading to trivial thermodynamics (setting k_ B = 1), Z= ( 1 + e^-J/T)^N , c≡⟨ n_i ⟩ = e^-J/T/1 + e^-J/T ,⟨ n_i n_j ⟩ = ⟨ n_i ⟩⟨ n_j ⟩ = c^2(i ≠ j) ,with no correlation between different sites and where the only relevant thermodynamic observable is the concentration c of excited spins (i.e., those with n=1) in equilibrium. Of course, due to the absence of interactions in the energetics, thermodynamic properties are smooth all the way to T=0. The dynamics of the FA model is defined in terms of single spin-flip dynamics with the following rules,11→ 10 rate = 1-c 10→ 11 rate = c 11→ 01 rate = 1-c 01→ 11 rate = c 010→ 000 rate = 0 000→ 010 rate = 0The rates above are reversible and obey detailed balance with respect to E. The kinetic constraint is such that only sites who have an excited nearest neighbour can flip, and in that case they do so with the natural rates determined by the change in energy. But sites whose two nearest neighbours are unexcited cannot flip, even if energetically favourable.The need for a neighbouring excitation to allow motion is often called facilitation <cit.>. 
At high temperatures the kinetic constraint does not play a major role: in a typical equilibrium configuration at T > J there are many excitations and most sites are therefore facilitated.In contrast, for T < J there is a conflict between the low number of excitations, c ≪ 1, thd, in equilibrium configurations, and the need for them to be present to allow dynamics to proceed, FA.Figure 3(b) illustrates this conflict: it shows the evolution of the energy per site after a “quench” from T=∞ and c=1/2 to various low temperatures, T < J, so that the target state for the dynamics has c ≪ 1.Initially, relaxation is fast and T independent as there are many excitations that facilitate dynamics - this evolution is similar to that of an unconstrained set of non-interacting spins. However, at some point most remaining excited sites that need to be relaxed are isolated: a plateau develops indicative of both a change of mechanism and a separation of timescales in the dynamics, and subsequent evolution is much slower and strongly T dependent. This slow evolution is due to an effective activated diffusion of excitations. At low temperatures excitations are rare and typically an excited site is surrounded by a region of unexcited sites, … 00100 ….While the central excitation here cannot relax, it can facilitate the excitation of one of its neighbours, say to the right, … 00110 …, a process that has rate c to occur as it increases the energy by one unit of J.Subsequently, the new excitation in turn can now facilitate the central spin to flip, … 00010 …, with rate (1-c)/2, the half coming from the fact that it could have been the rightmost excitation that relaxes, going back to the initial configuration (other processes are also possible, such as facilitating a third excitation, but this is suppressed by c, which is small at low T). The central excitation has then made a hop to the right with a rate that goes as c(1-c).Since it could have equally hopped to the left, overall what we get is effective diffusion of isolated excitations with diffusion rate D ∼ c when c ≪ 1.The upshot of this is that the relaxation of a site that is at distance l from its nearest excitation requires the diffusion of this excitation to it, and its relaxation time goes as τ(l) ∼ D^-1 l^2. From here we can get the typical relaxation time in equilibrium, τ_α: since in equilibrium the typical length between excitations is l_ eq = c^-1, we get, τ_α = τ(l_ eq) ∼ D^-1 l_ eq^2 = c^-(1+2)≈ e^3 J/T .The typical relaxation time of the FA model thus grows fast with decreasing temperature.It does so in an Arrhenius form, but it is important to note that this is not due to a single barrier of size 3J but due to a combination of the way that the diffusion rate scales with temperature and the dynamic exponent that enters into τ(l). The results above are a consequence of the fact that the dynamics of the FA model in one dimension is fluctuation dominated as a consequence of the kinetic constraint. §.§.§ East model A second simple KCM of interest is the East model <cit.>. 
Like the FA model it is defined in terms of binary variables on a lattice with a non-interacting energy function E = J ∑_i=1^N n_i, leading to the same simple thermodynamics, thd.The difference lies in the kinetic constraints which as slightly more restrictive than those of the FA model, with the only (also single spin-flip) transitions allowed being,11→ 10 rate = 1-c 10→ 11 rate = cthat is, up spins facilitate flip of neighbouring spins in the eastwards direction only; i.e., in the East model the third and fourth transitions in FA are not allowed.The stricter constraints, East, mean that the mechanism that gives rise to activated diffusion of excitations in the FA model is not available in the East, as it requires facilitation in both directions.This gives rise to further separation of timescale and more cooperative relaxation.This is apparent from the evolution of the energy after a quench to low temperatures, Fig. 3(c): in contrast to the FA model case, Fig. 3(b), the energy decays in stages, each plateau associated with a hierarchy of the dynamics corresponding to relaxation over a particular range of lengthscales.The hierarchical nature of the dynamics is easy to understand in terms of the optimal pathways for relaxation. Just like in the case of the FA model, at low temperatures excitations in typical equilibrium configurations will be isolated.While an up spin that is next to another up spin can relax immediately, first line in East, when they are further away intermediate steps that increase the energy are required. When the distance is l=2, as in ⋯ 101 ⋯, an extra excitation needs to be created, 101→111→110→100. Here the leftmost excitation facilitated the excitation of the middle site, which then facilitated the relaxation of the rightmost one. The energy barrier that needed crossing was Δ E(l=2) = J corresponding to the difference between the initial energy and the most energetic intermediate state visited. Consider now distance l=4, as in ⋯ 10001 ⋯.An obvious route to relaxing the rightmost spin is to excite all the ones in between so that it can be facilitated, and then de-excite them back, 10001→11001→11101→11111 → 11110 → 11100 → 11000 → 10000. In this case Δ E = 3J and in general such as procedure will give an energy barrier that scales with the length l that needs to be span. One can however do better: an optimally energetic path for l=4 is shown in Fig. 3(d), and only requires Δ E(l=4) = 2J. Figure 3(e) shows a similar construct for l=8.The key idea is that one only needs to create an excitation at distance l/2 and the subsequent relaxation is that of a domain of half the length <cit.>.This means that Δ E(2l) = Δ E(l) + J ⇒Δ E(l) = J ⌈log_2 l⌉,and the barrier only grows logarithmically with length.(In the expression above ⌈·⌉ means the ceiling function, so that l=2^k-1 + 1, …, 2^k all have barriers Δ E = J k.)At low temperatures relaxation will be activated and dominated by the smallest barrier that needs to be crossed.From this argument, the timescale τ(l) to relax a region of size l should go as <cit.>τ(l) ∼ e^Δ E(l)/T≈ l^z(T),with z(T) = J/T ln 2.This means that relaxation in the East model obeys a scaling relation, but the dynamical exponent z increases with decreasing T, indicating that dynamics becomes “stiffer” the lower the temperature.These energetic arguments would suggest that the typical relaxation time at equilibrium would be given by the energy barrier at the typical distance between excitations,τ(l_ eq) ≈ l_ eq^z(T)≈exp(J^2/T^2 ln 2 )<cit.>. 
However, as the length gets longer one needs also to consider the multiplicity of paths, which may give an entropic contribution <cit.>.In fact, while for modest distances the energetic contribution dominates and the scaling obeys tEastzEast, for lengths comparable to l_ eq, at low T, the entropic contribution is significant, and the overall typical relaxation time has been shown to be<cit.>τ_α∼exp( J^2/2 ln 2 T^2) .This shows that in contrast to the FA model, the East model relaxes in a super-Arrhenius way. Furthermore, note that while τ_α grows very fast with decreasing temperature, it only diverges at T=0.On the one hand, the above results are interesting conceptually.They show that, rather straightforwardly, local kinetic constraints give rise to hierarchical dynamics, cf. tEastzEast, and super-Arrhenius relaxation, tauEast.On the other, they are also useful in practice if one consider whether a relaxation law such as the one of the East model (or its higher dimensional generalisations <cit.>) is at all applicable to the observed phenomenology of supercooled liquids.Figure 1(c) shows this indeed to be the case. It plots the relaxation time (or the viscosity) for a large number of supercooled liquids, fitting the data to the form lnτ_α∝(J/T - J/T_ o)^2, the so-called “parabolic law” <cit.>, whose low T behaviour scales with T precisely like the East model, tauEast.Below an “onset” temperature T_ o dynamics becomes heterogeneous, and T < T_ o is the regime where DF ideas become applicable.In this regime the fit to the parabolic law is excellent, cf. Fig. 1(c). One can argue <cit.> furthermore that the parabolic law provides a superior fit to the data to that found using the traditional VFTform <cit.>, lnτ_ VFT∝(T - T_K )^-1, without the need to invoke an essential singularity at an unobservable T_K > 0.The effect of the kinetic constraints on the dynamics becomes apparent if one looks at trajectories <cit.>. Figures 3(f-h) show one-dimensional equilibrium trajectories for three models for comparison: panel (f) corresponds to a spin model with energy function E and where dynamics is unconstrained, so that single spin-flip transitions 1 → 0 and 0 → 1 can occur with rates 1-c and c, respectively; panel (g) is the FA model; and panel (h), the East model.While the three models have the same equilibrium distribution, their trajectories are very different.In particular, both the FA and East models display pronounced space and time fluctuations (that give rise to heterogeneous dynamics) indicative of strongly correlated dynamics, thus realising the basic idea illustrated in Figs. 2(c,d).§ LARGE DEVIATIONS AND THERMODYNAMICS OF TRAJECTORIES §.§ Fluctuations in trajectory space The key insight of the DF approach <cit.> to glasses is that the interesting structure is to be found in the trajectories of the dynamics rather than in configurations, cf. Figs. 3(f-h). Take the case of the East model. Each time slice of an equilibrium trajectory such as that of Fig. 3(h) - see also Inset to Fig. 4(a) - corresponds to an equilibrium configuration.As such, one-time (i.e., static) observables, like for example the magnetisation, have Gaussian distributions in the large size limit, given the non-interacting nature of the thermodynamics of the East model, cf. Ethd.Trajectories, however, have non-trivial correlations, something that is evident by the large “space-time bubbles” <cit.>, cf. Fig. 
3(h).This means that in contrast to static observables, the distributions of dynamical observables, corresponding to time-integrated quantities such as for example the time integral of the magnetisation, K_t = ∫_0^t dt' M(t'), must have non-Gaussian distributions <cit.>.This is indeed the case, see Fig. 4(a): P(K_t) has pronounced non-Gaussian tails in the large deviation regime (i.e., away from the mean and variance). The reason that time-integrated observables reveal information that static observables do not is that the moments of quantities like K_t contain time-integrals of time-correlation functions of its integrand, and thus encode both space and time correlations.Furthermore, a distribution like that of Fig. 4(a) is suggestive of an interesting phase behaviour in trajectory space, whose study requires a method for ensembles of trajectories that is analogous to the more standard equilibrium ensemble methods of equilibrium statistical mechanics.This can be readily defined using the machinery of large deviations <cit.>. §.§ Stochastic dynamics In what follows for concreteness I will consider stochastic dynamics corresponding to continuous time Markov chains.Consider a classical stochastic system evolving as a continuous time Markov chain.The Master Equation (ME) for the probability reads <cit.>∂_t P(C,t) = ∑_C' ≠ C W(C' → C) P(C',t)- R(C) P(C,t) ,where P(C,t) indicates the probability of the system being in configuration C at time t,W(C' → C) is the transition rate from C' to C, and R(C) = ∑_C' ≠ C W(C → C') the escape rate from C. The ME can be written in operator form,∂_t |P(t) ⟩ =|P(t) ⟩ ,with probability vector |P(t)⟩ |P(t)⟩ = ∑_C P(C,t) | C ⟩ ,where { | C ⟩} is an orthonormal configuration basis, ⟨ C | C' ⟩ = δ_C,C'.The Master operatoris defined as,= ∑_C,C' ≠ C W(C → C') |C' ⟩⟨ C| - ∑_C R(C) |C ⟩⟨ C|The stochastic generatoris in general non-Hermitian but has a real spectrum, with its largest eigenvalue equal to zero. The associated right eigenvector corresponds to the probability vector of the stationary state, |ss⟩ = 0 ,while the corresponding left eigenvectoris the “flat” or “trace” state= ∑_C⟨ C| ,and = 0 is the statement of probability conservation. The dynamics described by the ME is realised by stochastic trajectories.Each trajectory is a particular realisation of the noise that gives rise to a time record of configurations and of waiting times for jumps between them, observed up to some time.Figure 4(b) sketches such trajectories: the trajectory starts in configuration C_0 and then there is a succession of configuration changes, C_0→ C_1→ C_2→…→ C_n occurring at times t_1, t_2, …, t_n. Between the last jump and the end of the trajectory at τ there is no further change in configuration.The evolution described by the ME is recovered when averaging over all stochastic trajectories.§.§ Dynamical large deviations §.§.§ Example To introduce the ideas, Let us consider first an elementary example which is exactly solvable.Consider unidirectional hopping of a particle with rate γ between nearest neighbouring sites of a one-dimensional ring, as in Fig. 4(c). 
The possible positions of the particle determine the set of configurations,{ | C ⟩} = { | x ⟩ :x=1,…,L }.The Markov generatorreads,= γ∑_x=1^L |x+1⟩⟨ x| - γ∑_x=1^L |x ⟩⟨ x| .If we denote a stochastic trajectory of total time t by ω_t, the simplest dynamical observable one can consider is the total number of jumps in the trajectory K(ω_t).This observable for example can be used to classify trajectories according to how dynamically “active” they are: a more active trajectory will be one with a higher value of K, and a less active, one with a lower value of K.The distribution of K, P_t(K) = ∑_ω_t Prob(ω_t)δ[ K(ω_t) - K ] ,where Prob(ω_t) indicates the probability of getting trajectory ω_t with the dynamics Wex, provides information about the statistical properties of the dynamics. For the simple example we are considering this is of course a Poisson distribution, P_t(K) = e^-γ t(γ t)^K/K! .Using the Stirling approximation for the factorial, one finds that at long times the probability acquires a large deviation form, becoming an exponential in time, times a function of the intensive observable K/t, P_t(K) ∼e^-t [ γ - K/t + K/γ tln(K/γ t)].Essentially the same information as in the probability is contained in the moment generating function (MGF), Z_t(s) = ∑_K P_t(K) e^-s K,which for our simple problem reads, Z_t(s) = e^t γ( e^-s - 1 ) .We see that Zsex also has a large deviation form of an exponential in time, times a function of the conjugate variable s to the observable K.§.§.§ Generalisation The above approach directly generalises to any system with stochastic dynamics described by a ME such as ME1. A dynamical order parameter K is a time-extensive function of the trajectory. In its most general form it is reads,K(ω_t) = t ∑_C,C' ≠ C q_C,C'(ω_t) α_C → C' +t∑_C μ_C(ω_t) β_C .This form indicates that dynamical observables can increase either when the trajectories make jumps, or by accumulating the value of a static observable between jumps. The first summation in Kgen corresponds to advancing K by α_C → C' for each jump in the trajectory between configurations C and C'. The total number of jumps in trajectory ω_t between C and C' is indicated by t q_C,C'(ω_t), with q sometimes called the dynamical or empirical flux <cit.>. The second summation corresponds to the time-integral of a static observable of the configurations. The total time spent in configuration C in trajectory ω_t is indicated by t μ_C(ω_t), with μ referred to as the empirical measure (as it gives the estimate of the distribution over configurations that can be inferred from that specific trajectory).For example, for the case of the total number of jumps of the example above we would set all α_C → C'=1 and all β_C=0.Conversely, if we were time-integrating say a magnetisation we would set all α_C → C'=0 and β_C equal to the magnetisation of the configuration, β_C = M(C).The distribution PK for a dynamical order parameter such as Kgen in general acquires a LD form at long times <cit.>, P_t(K) ∼e^-t φ(K/t),where φ(k) is called the LD rate function.Similarly the MGF, Zs, also acquires a LD form <cit.>, Z_t(s) ∼ e^t θ(s) ,with θ(s) called alternatively the scaled cumulant generating function (SCGF) (as its derivatives evaluated at s=0 give the cumulants of K scaled by time), or simply the LD function. The rate function and the SCGF are related by a Legendre transform, which for our choice of convention reads <cit.>, θ(s) = - min_k [ k s + φ(k) ] . 
This approach provides a direct generalisation of the ensemble method of equilibrium statistical mechanics.The analogy is summarised in Table I. In equilibrium the relevant ensemble is that of configurations C, while for dynamics it is that of trajectories ω.In dynamics, the large size limit is that of long times. Static order parameters are functions extensive in volume, such as the magnetisation M of a configuration in a magnetic problem; the corresponding intensive order parameter is obtained by scaling out the system size, m = M/V.For dynamics, order parameters are functions of the trajectories extensive in time, such as time-integrated quantities K; the corresponding intensive order parameters are obtained as k = K/t. At large volumes (resp. long times) the distribution of the order parameter has a LD form, cf. PKLD, and is determined by an entropy density, i.e., the log of the number of configurations (resp. trajectories) where the order parameter takes a certain value, scaled by size.In the case of the magnetisation, for the static problem this is the Gibbs free-energy; in the case of dynamics, it is the rate function, PKLD.Associated to each order parameter there is a conjugate field.In the static case, for example when the order parameter is the magnetisation m, the conjugate field is the magnetic field. For dynamics, the conjugate field to a dynamical observable K is the counting field s, which is also the variable of its MGF. At large size (long time) the MGF has a LD form, cd. ZsLD, and is determined by a free-energy, in the static case, in the magnetic language we are using, it corresponds to the Helmholtz free-energy; for dynamics, it is the SCGF, ZsLD.The SCGF θ(s) is the free-energy for trajectory ensembles, so in analogy with equilibrium, its analytic structure tells us about the phase structure of the dynamics. Computing free-energies is often hard, but there is one property that can be exploited to obtain θ(s).In dynamics time is a special direction, which in turn allows to calculate the MGF (which plays the role of a partition sum) in terms of a “transfer matrix”.This reduces the problem of calculating a partition sum, to the often simpler problem of maximising an operator.In particular, the MGF, Zs, can be written as <cit.>Z_t(s) =e^t _s | P_0 ⟩ ,where P_0 is the initial distribution of configurations.The operator _s is obtained by deforming or tilting the Markov generator .For a general dynamical observable, Kgen, the tilted generator reads <cit.>_s =∑_C,C' ≠ C e^-sα_C → C' W(C → C') |C' ⟩⟨ C|            - ∑_C[ R(C) + sβ_C ] |C ⟩⟨ C| .The SCGF is the largest eigenvalue of _s, so that at long times from Zs2 we recover ZsLD.§.§ Phase transitions in trajectory space Let us come back to the problem of the East model. It was mentioned above that the large dynamical fluctuations evident in the East model trajectories, cf. Fig. 4(a), suggested a non trivial phase behaviour of the dynamics.We will now see how with the LD methods described above it is possible to show that there is indeed a novel class of dynamical phase transitions in KCMs such as the East model <cit.>.The Markov generator for the East model dynamics is given by <cit.>= ∑_in_i-1[ ϵσ_i^+ + σ_i^- - ϵ (1-n_i) - n_i ] ,where we have scaled out a factor of (1-c) and defined ϵ = c/(1-c). The factors in square brackets in WEast correspond to flipping the spins and their associated escape rates, and the factors of n_i before the brackets are the kinetic constraints. 
Let us consider as trajectory observable the dynamical activity <cit.> (sometimes also termed frenesi <cit.>) defined as the total number of configuration changes in a trajectory.In the general definition of Kgen this would correspond to,K(ω_t) = t ∑_C,C' ≠ C q_C,C'(ω_t) ,i.e., all jumps between configurations are counted equally.This is the natural order parameter for “glassiness”, as it measures whether trajectories display a lot of motion (the kind of dynamics one would associate with liquid-like ergodic relaxation) or very little motion (what one would associate with glassy slow down and arrest), and it does so in an “order agnostic” manner in the sense that it makes no assumptions as to which configurations give rise to which kind of dynamics. The tilted generator deformed according to act is then, cf. Ws, _s = ∑_in_i-1[ e^-s( ϵσ_i^+ + σ_i^- ) - ϵ (1-n_i) - n_i ] ,where σ_i^± are Pauli raising and lowering operators acting on site i, and n_i = σ_i^+σ_i^-. Calculating the largest eigenvalue of the operator WEasts for arbitrary s is a difficult challenge, so we will employ a series of approximations to simplify the problem so as to get a result that - while not quantitatively accurate - gives a qualitative idea of what the LD approach reveals. We first approximate the spin problem described by WEasts where sites can onlyhave single occupation, by a similar “bosonic” problem <cit.> where the single occupation restriction is lifted. The tilted operator reads,_s^ bos = ∑_in_i-1[ e^-s( ϵ a_i^† + a_i ) - ϵ - n_i ] ,where now n_i = a_i^† a_i. NB: we are using the Doi-Peliti formalism, see e.g. Ref. <cit.>, where the representation of creation and annihilation operators is not the standard one of quantum mechanics, but one more convenient for stochastic dynamics,a^† | n ⟩ = | n + 1 ⟩, a | n ⟩ = n | n - 1 ⟩, [a, a^†] = 1 .The approximation WEastsb is good for ϵ≪ 1, corresponding to low temperatures, as multiple site occupancy is suppressed; cf. reaction-diffusion problems <cit.>.The largest eigenvalue of WEasts can be estimated via a variational approximation, for example by maximising the expectation value of _s^ bos in a coherent state basis, cf. Ref. <cit.>.In practice this amounts to solving the Euler-Lagrange equations, ∂_s^ bos/∂ a_i = ∂_s^ bos/∂ a_i^† = 0 ,where operators are treated as c-numbers and the generator is normal ordered. We make a further mean-field approximation and drop the site index to obtain the equations, {[0= ∂_s^ bos/∂ a= a^†[ e^-s( ϵ a^† + a ) - ϵ - n ]+ n (e^-s - a^†); ;0= ∂_s^ bos/∂ a^†=a[ e^-s( ϵ a^† + a ) - ϵ - n ] + n (e^-sϵ - a );].Under a = ϵ a^† these two equations become equivalent, and there are two solutions,a_ I = 3/4ϵ e^-s + ϵ/4√(9 e^-s - 8)anda_ II = 0 .Inserting the two possible solutions of the EL equations back into _s^ bos we get two variational estimates for the SCGF, θ_ I(s) which is dependent on s, and θ_ II(s) = 0 for all s.Of these two possible branches we have to choose the largest one for each value of s. Due to convexity of the SCGF, θ_ I(s) crosses from positive to negative at s=0, and therefore θ(s)=θ_ I(s) for s<0 and θ(s)=θ_ II(s)=0 for s>0; see Fig. 5(a). The SCGF if singular at s=0 where the two solutions cross, corresponding to a first order transition in the ensemble of trajectories <cit.>. This is evident in the behaviour of the activity K. 
The s-dependent average activity, ⟨ K ⟩_s = ∑_K P_t(K) K e^-s K/Z_t(s) ,is the average activity when the probability of a trajectory ω_t is tilted by e^-s K(ω_t), and thus corresponds to the activity of trajectories of dynamics away from the typical one (corresponding to s=0) - such reweighed ensemble of trajectories is sometimes called the s-ensemble <cit.>. As such, ⟨ K ⟩_s serves as a dynamical order parameter for classifying the ensemble of trajectories controlled by s (cf. the magnetisation and magnetic field in the static case, see Table I). At long times ⟨ K ⟩_s is obtained from the first derivative of the SCGF with respect to s, k(s) = lim_t →∞⟨ K ⟩_s/t = - θ'(s) .As Figs. 5(a,b) show, our mean-field estimate of θ(s) for the East model has a first order singularity at s=0 and the corresponding order parameter shows a discontinuous jump at the transition point. Figure 5(c) provides a reminder of the connection between the free-energy and the order parameter distribution, as connected by the Legendre transform. When the free-energy is analytic, the distribution is unimodal. This is indicative of the system having a single phase. Instead when the free-energy is non-analytic, for example when two branches cross at some value of the controlling field, the corresponding distribution is one associated to two-phases made convex by a Maxwell construction.The same relation between SCGF and rate function holds for dynamics. In the case of the East model the transition at s=0, Figs. 5(a,b), is one between an active phase with finite activity for s ≤ 0, and an inactive phase of vanishing activity for s>0 <cit.>. The qualitative picture gleaned from the crude approximations above is confirmed by more quantitative studies of the East model and other KCMs <cit.>. Furthermore, it has been shown that similar active-inactive dynamical transitions are present in more realistic models of glasses <cit.>, cf. Fig. 5(d,e), and the fluctuations associated to such dynamical first-order transitions manifest in the heterogenous pattern of relaxation in these systems, cf. Fig. 1(d).§ ERGODICITY AND NON-ERGODICITY IN CLOSED QUANTUM SYSTEMS§.§ Quantum equilibrationGeneric quantum many-body systems - evolving unitarily according to their Hamiltonian - are said to equilibrate, meaning that at long times their state becomes indistinguishable, from the point of view of expectation values of observables, from the time-integrated state <cit.>; for reviews see <cit.>. This occurs due to dephasing of the state of the system in the energy eigenbasis (a condition we assume to hold, but which can be somewhat relaxed). Describing the state of a system in terms of its density matrix, we have for its time evolution, ρ(t) = U_t ρ_0 U_t^† ,where U_t = e^- i t H ,with H the Hamiltonian of the system (and we have set ħ = 1 from here onwards). Expanding in the energy eigenbasis we getρ(t) =∑_n| n ⟩⟨ n |ρ_nn +∑_n ≠ m| n ⟩⟨ m |ρ_nme^-i (E_m-E_n) t.Due to the absence of degeneracies in the energy differences, the oscillatory terms in the second line of rhott become negligible at large times, and in the long time limit the state becomes indistinguishable in practice from the time-integrated state ρ(t) ⟶ ρ(t) =lim_t →∞1/t∫_0^t U_t'ρ U_t'^† dt' ,meaning specifically,| ⟨ A(t) ⟩ -Tr[ Aρ(t)] |   small . 
Figure 6(a) illustrates what in practice is understood as equilibration in aquantum system.It describes the evolution of a spin-1/2 chain with repulsive density-density interactions that decay with distance as r^-6,H = Ω∑_i=1^L σ^x_i -μ∑_i=1^L n_i+ V/2∑_i≠ j^L n_i n_j/|m-k|^6,where n_i = σ^+_i σ^-_i. HRyd describesa problem of relevance to Rydberg atoms <cit.>.This Hamiltonian is generic, in the sense that it is not integrable and due to the nature of the interactions the likelihood of degenerate energy gaps is negligible, certainly in the large size limit.Equilibration becomes evident if one considers the unitary dynamics generated by HRyd: Fig. 6(a) shows the behaviour of local observables (i.e., observables that correspond to sums of local terms) in a numerical simulation of a finite system (from direct diagonalisation); at long times expectation values become stationary and close to their time average, something that becomes more precise with increasing system size. (Of course, for a finite system like this one there are renewals eventually, but these occur on timescales much larger than the ones shown.)It is important to note that “equilibration” as used in the context of quantum many-body systems is different from the usual meaning attributed to this concept in the case of classical stochastic systems where it is closer to the idea of “thermalisation” (see below).That is, that a quantum system equilibrates does not mean it is also ergodic, only that observations in long time state are in practice stationary in the large size limit, cf. Fig. 6(a). This effectively stationary state can still depend on details of the initial conditions, as we will see below.§.§ Quantum thermalisation Beyond equilibration, most quantum many-body systems are believed also to thermalise <cit.>.The meaning is the following. Consider two partitions A and B of an overall closed many-body quantum system. Let us assume further for simplicity that the initial state is a pure state | ψ_0 ⟩, so that ρ_0 = | ψ_0 ⟩⟨ψ_0 | .It follows, that at all subsequent times the state of the system is also pure,ρ(t) = | ψ(t) ⟩⟨ψ(t) | , | ψ(t) ⟩ = U_t | ψ_0 ⟩,cf. rhotU. It follows that the von Neumann entropy of the whole system is always zero,S(t) = - ρ(t) lnρ(t) = 0 . Consider now the reduced state in partition A obtained by tracing out partition B,ρ_A(t) =Tr_B ρ(t) .Even if ρ(t) is pure, in general ρ_A(t) will be mixed. Furthermore, the entropy of the reduced state,S_A(t) = - ρ_A(t) lnρ_A(t) ,quantifies the entanglement (i.e., quantum correlations) between partitions A and B, and is called the (bipartite) entanglement entropy. 
Note: (i) we obtain the same amount of entanglement if we trace out A instead, i.e., S_A = S_B; and (ii) as long as ρ(t) is pure, S_A is a measure of entanglement, i.e., it tells not only if there is bipartite entanglement, but also quantifies it, and therefore can be used to quantify its change under a certain process, e.g., how entanglement grows in time under coherent evolution (see below).We established above that ρ(t) for the whole system A+B equilibrates.Thermalisation refers to the form that the reduced state ρ_A(t) acquires at long times, from the point of view of expectation values of local observables in A,⟨ O_A(t) ⟩ =Tr_A [ O_A ρ_A(t) ] .Specifically,lim_t →∞⟨ O_A(t) ⟩ =Tr_A ( O_ATr_B ρ_ th) ,where ρ_ th is a thermal density matrix for the whole system,ρ_ th = e^- β H .The inverse temperature β is set by the energy of the initial state, which of course conserved under the evolution, that is, βs.t.: ⟨ψ_0 | H | ψ_0 ⟩ =Tr( H ρ_ th).This means that the only memory of the initial conditions that remains in the effective long time state of partition A is that of the initial energy. All other initial details are forgotten. Thermalisation is thus the general setup for quantum ergodicity where the system can act as its own thermal reservoir. For references see e.g., <cit.>. Quantum thermalisation can be seen as a consequence of the spectrum of a many-body system obeying the eigenstate thermalisation hypothesis (ETH)<cit.>. When ETH holds, the spectrum of such a system is similar - in the large size limit - to that of a random matrix with the same symmetries of the Hamiltonian. In particular, matrix elements of local observables in the energy eigenbasis have the following form, ⟨ E_n | 𝒪 | E_m ⟩ =δ_nmF(E̅)+e^-S(E̅)/2g(E̅,ω) R_nm,where E̅ = (E_n+E_m)/2, ω = E_n-E_m, F(E) is some smooth function of E, S(E) is the entropy of states with energy E, and R_nm are Gaussian random numbers with zero mean and unit variance. It follows from RMT thatfor large size local observables become diagonal in the energy eigenbasis with random off-diagonal corrections that vanish exponentially with size. This means that expectation values at long times are well approximated by micro-canonical averages in the energy shell set by the initial conditions (or equivalently, canonical averages at the corresponding temperature) and thermalisation ensues.Just like forGaussian random matrices, the spectrum of ETH obeying systems displays level repulsion, and the entanglement entropy of eigenstates in the bulk of the spectrum obeys a volume law (i.e., bipartite entanglement scales with the size of the smaller partition).This volume law makes S_A, cf. SA, generically extensive in size, thus giving the entanglement entropy its thermal character.§.§ Many-body localisation An exception to the above scenario are integrable systems <cit.> which equilibrate to a generalised Gibbs ensemble <cit.>. A second notable exception, and the one to be discussed here, is that of (non-integrable) many-body quantum systems with quenched disorder that display many-body localisation (MBL) <cit.>; for reviews see <cit.>. A typical system that can undergo a transition from a thermal to a MBL phase is the following <cit.>, cf. Fig. 6(b), H = -J∑_i=1^N( c^†_i c_i+1 + c^†_i+1 c_i)+ V ∑_i=1^N n_i n_i+1 + ∑_i=1^N h_i n_i ,for a system of spinless fermions in a one-dimensional lattice of size N. The first sum in HMBL corresponds to nearest neighbour hopping with rate J, and the second, to density-density interactions with coupling constant V. 
The third set of terms corresponds to random local energies, drawn i.i.d. uniformly from h_i ∈ [-h, h]. For h=0 the clean interacting problem thermalises for all values of V>0, and thermalisation is also achieved in the presence of weak disorder. In the non-interacting case, V=0, this is a typical system that displays Anderson localisation (AL) for all h > 0. A signature of ergodicity is that time correlations decay (appropriately normalised) to zero. A typical test, which is also applicable in experiments, is to start from an initial product state with a well defined density profile, such as the following half-filling state

|ψ_0⟩ = |∙∘∙∘∙∘∙∘⟩,

sometimes also called a charge density wave. A simple observable which quantifies the extent to which the system remembers the initial pattern is the difference in density between even and odd sites, or “imbalance" <cit.>,

ℐ(t) = [N_o(t)-N_e(t)]/[N_o(t)+N_e(t)].

Thermalisation implies forgetting details about initial conditions, and therefore the imbalance should go to zero at long times under ergodic conditions. (While the random fields in principle break translation invariance for any given sample, both for large systems and/or when averaging over disorder, this effect becomes negligible.) Figure 6(c) shows results from simulations of the system of HMBL: for small h (red curve) the imbalance both becomes stationary and goes to zero within simulation times, compatible with thermalisation. In contrast, when the random field exceeds a critical value, h > h_c, there is a transition into the MBL phase. In Fig. 6(c) this is manifested in the imbalance remaining non-zero at long times, while still stationary; this shows that for strong disorder the system maintains memory of the initial state, an indication of non-ergodicity. For reference, Fig. 6(c) also shows the non-interacting case of AL. The MBL state has been well characterised by now, both in terms of spectral properties and dynamics <cit.>. MBL implies a breakdown of ETH. Under MBL conditions the eigenstates in the bulk of the spectrum have low entanglement - similar to low lying states of gapped Hamiltonians (i.e., displaying area rather than volume law entanglement) - and there is no level repulsion. In fact, an MBL system has an extensive number of emergent local conservation laws - something which is believed to be true in general and has indeed been proven rigorously for certain specific problems. On the dynamical side, a key distinction between an interacting system in the thermal and MBL phase is the growth of entanglement. This is illustrated in Fig. 6(d) for the model of HMBL under the same conditions as Fig.
6(c): the initial product state CDW has zero bipartite entanglement; in the thermal phase, h < h_c, the half-chain entanglement entropy grows rapidly (red curve) towards a value which is extensive in system size; the growth on the MBL side, h > h_c, in contrast is much slower, first achieving a sub-extensive level similar to that of AL, and subsequently progressing logarithmically in time; while the asymptotic value is extensive, it is much lower than that of the thermal phase: since MBL eigenstates obey an area law, the extensivity is a consequence of the superposition that is reached at long times.

§.§ Slow quantum relaxation due to constraints

A key distinction between MBL systems and classical glasses is that the former are disordered systems while in the latter disorder is absent. Whether MBL can exist in translationally invariant systems is still debated <cit.>. Even if it were the case that strict asymptotic quantum non-ergodicity could not exist in the absence of disorder, an interesting question to address is whether the standard mechanisms for glassy slowing down - in particular kinetic constraints encoding steric restrictions to local motion - can give rise to analogous slow relaxation in closed quantum systems. Figure 7 summarises some of the findings from two recent papers that address this precise issue by considering two quantum models inspired by classical KCMs. The first model <cit.> is a quantum version of a constrained lattice gas <cit.>. It consists of hard-core particles moving on a 1D strip of a triangular lattice of L sites and occupation N; see Fig. 7(a). The Hamiltonian - described in terms of spin-1/2 operators - is

H_QLG = -1/2 ∑_⟨i,j⟩ Ĉ_ij {λ(σ^+_i σ^-_j + σ^+_j σ^-_i) - (1-λ)[n_i(1-n_j) + n_j(1-n_i)]}.

The operator Ĉ_ij = 1 - ∏_k n_k plays the role of a kinetic constraint: the product is over all sites k that are common neighbours of both sites i and j, and as Fig. 7(a) shows, it allows motion only when at least one of the common neighbouring sites is empty, thus mimicking excluded volume restrictions. The model conserves density but has no particle-hole symmetry. The effect of the constraints is only significant for large fillings, where many moves that would be possible in the unconstrained problem are blocked. Consider unitary evolution under Htlg. We take as initial states |ψ_0⟩ product states corresponding to classical configurations, discarding those with only isolated vacancies (which are disconnected under H). To quantify relaxation consider the two-time (connected and scaled) density autocorrelator

c(t) = (1/L)∑_i ⟨ψ_0|n_i(t) n_i(0)|ψ_0⟩/[ϕ(1-ϕ)] - ϕ/(1-ϕ),

where n_i(t) is the Heisenberg-picture number operator and ϕ = N/L is the filling fraction. Since |ψ_0⟩ is a product state, ⟨ψ_0|n_i(t) n_i(0)|ψ_0⟩ reduces to the expectation value ⟨n_i(t)⟩ for initially occupied sites i, so that c(t) plays a role similar to the imbalance, cf. Imb, in that it quantifies memory of the initial density profile. Figure 7(b) shows c(t) (and a running time average to smooth out short time fluctuations). The key observation is that for λ > 1/2, such that the kinetic terms in H dominate over the potential terms, thermalisation is fast. In sharp contrast, for λ < 1/2, where potential energy dominates over kinetic energy, there is a pronounced separation of timescales, with fast decay to a nonzero plateau, and eventual thermalisation only at much longer times. Compare this two-step relaxation to time-correlators in classical glassy systems, Fig.
1(b). This shows that, just like in the classical case, kinetic constraints can also induce slow and cooperative relaxation in quantum systems under the right conditions. Furthermore, for λ < 1/2, metastability and slow thermalisation is accompanied by spatially heterogeneous growth of entanglement, as shown in Fig. 7(c). Again this is reminiscent of dynamic heterogeneity in the classical case, cf. Fig. 1(d). The second model is the quantum East model <cit.>, defined in terms of quantum spins in a one-dimensional lattice with Hamiltonian,

H_East = -∑_i n_i-1 [λσ_i^x - (1-λ)].

This Hamiltonian is (up to a sign) the same operator as the tilted generator of the classical East model, WEasts, at temperature T=∞ ⇒ ϵ=1. As in the classical case, the constraint only allows spin flips on sites whose left neighbour has a projection on the spin-up state. As in the previous model, the parameter λ biases the weight of the kinetic versus the potential energy terms in H_East. Figure 7(c) illustrates the basic dynamics of this model. For λ > 1/2 the system thermalises: the left panel shows the total longitudinal magnetisation starting from all possible product states at half filling (note that in this model filling is not conserved); these states all have the same expectation value of the energy, so from ETH we expect that asymptotically the observable reaches the same value, as indeed it does. In contrast, for λ < 1/2 dynamics is orders of magnitude slower and relaxation is not achieved on numerically accessible timescales: the observable retains memory of details of the initial conditions for all accessible times, see right panel. While this does not prove asymptotic non-ergodicity, and might indeed be quasi-MBL (MBL-like dynamics for long times, with eventual thermalisation beyond times that can be reached numerically), it again shows that kinetic constraints can lead to dynamical arrest over very long times, in analogy with what occurs in glasses.

§ NON-EQUILIBRIUM IN OPEN QUANTUM SYSTEMS

§.§ Markovian open quantum dynamics

Consider a closed quantum system, Fig. 8(a), composed of the subsystem of interest (denoted sys in the figure) and the rest, which we will call the environment (denoted env in the figure). The total system evolves unitarily according to a total Hamiltonian,

H_tot = H + H_env + H_int,

where H acts on the system, H_env acts on the environment, and H_int couples the two. A general form for the interaction Hamiltonian is

H_int = ∑_μ=1^N_J (J_μ b_μ^† + J_μ^† b_μ),

where J_μ, J_μ^† are operators that act on the system (which we call jump operators below), and b_μ, b_μ^† are operators on the environment. For the environment we assume the diagonal form,

H_env = ∑_μ=1^N_J ω_μ b_μ^† b_μ.

We will further assume that the environment is large, that correlation times in the environment are short compared to relevant timescales of the system, and that the system-environment coupling is weak. Under these conditions it is possible to trace out the environment and obtain an effective Markovian description of the dynamics of the system in terms of a quantum master equation (QME) <cit.>,

∂_t ρ = -i[H, ρ] + ∑_μ J_μ ρ J_μ^† - 1/2{J_μ^† J_μ, ρ}.

On the r.h.s. of QME the first term describes the part of the dynamics due to the system Hamiltonian - i.e., the coherent part of the evolution - while the rest corresponds to the effect of the environment on the system under the Markovian approximation - i.e., the dissipative part of the evolution, cf. Fig.
8(a). We can write the QME as

∂_t ρ = Ł(ρ),

where the generator Ł is a superoperator (an operator acting on matrices) <cit.>,

Ł(·) = -i[H, (·)] + ∑_μ J_μ(·)J_μ^† - 1/2{J_μ^† J_μ, (·)}.

The QME equation is often called the Lindblad equation, and the generator Ł the Lindbladian. Formally integrating QME2 we obtain the state of the system at any time t > 0 given the initial state ρ_0,

ρ(t) = e^{tŁ}(ρ_0).

In contrast to unitary evolution, the dynamics generated by Ł is dissipative, and as such generically the state of the system at long times tends to a stationary state,

lim_t→∞ ρ(t) = ρ_ss,

which in general is unique. (Just like in the classical case, under certain conditions dynamics can be reducible, with the existence of more than one stationary state <cit.>.) The form QME is the general form for a local equation that gives rise to evolution which preserves the properties of the density matrix - rhot is a completely positive trace preserving (CPTP) map. In fact, it is easy to show that if ρ_0 is a valid density matrix, then under rhot,

ρ(t)^† = ρ(t), Tr[ρ(t)] = 1, Tr[ρ(t)^2] ≤ 1 ∀ t,

where we have assumed Tr[ρ_0] = 1. The structure and properties of the QME QMEL are very similar to those of the classical ME ME1ME2. Table 2 provides a dictionary between the classical ME and the QME. The QME QME is a deterministic equation for the density matrix, a quantum dissipative analog of the deterministic equation ME1 for the probability in the classical case. Both the classical Markov generator W and the Lindbladian Ł have zero as their largest eigenvalue. The corresponding right eigenvector in the classical case is the stationary state probability |ss⟩, while in the quantum case the right eigenmatrix is the stationary state ρ_ss. The corresponding left eigenvector/eigenmatrix are the flat state ⟨-| for W and the identity matrix 𝟙 for Ł. These left eigenstates are always the same irrespective of the form of W and Ł, as this is the statement of probability conservation. In terms of Ł, “action to the left" is defined in terms of the adjoint Ł^†, where

Ł^†(ρ) = i[H, ρ] + ∑_μ J_μ^† ρ J_μ - 1/2{J_μ^† J_μ, ρ},

and it is easy to show that Ł^†(𝟙) = 0 holds. The structure of the operator W and the superoperator Ł is also analogous. The classical generator W has off-diagonal terms in the configuration basis which are positive and encode the possible jumps between configurations and their probabilities. Each of these operators is of the form W(C → C')|C'⟩⟨C|. The corresponding operators in the quantum case are the jump operators J_μ, acting on the density matrix as J_μ(·)J_μ^†. The diagonal part of the classical generator W is the escape rate operator, ∑_C R(C)|C⟩⟨C|, with each entry corresponding to the escape rate of each configuration. A similar role is played in the quantum case by -iH_eff, where the effective Hamiltonian is defined as

H_eff = H - (i/2)∑_μ J_μ^† J_μ.

In fact, if we rewrite the Lindbladian L as

Ł(·) = ∑_μ J_μ(·)J_μ^† - i[H_eff(·) - (·)H_eff^†],

we see that Ł has a structure analogous to that of W. An interesting observation is that the classical ME is contained in the QME: for example, if H = 0 and all the jump operators are rank-1, i.e., of the form J_μ ∝ |C'⟩⟨C|, the dynamics of the diagonal of the density matrix decouples from the dynamics of the off-diagonal elements, and the diagonal dynamics is equivalent to that of a classical probability vector; this point is illustrated below. For a small sample of applications of this formalism to problems in open quantum dynamics see e.g. <cit.>.
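Since Ł acts linearly on d×d matrices, it can be represented concretely as a d²×d² matrix. The following sketch (an illustration, not code from the notes, using the row-major vectorisation identity vec(AXB) = (A ⊗ B^T)vec(X)) builds that matrix, checks trace preservation (vec(𝟙) is a left null vector of Ł), and, anticipating the example below, computes the spectrum for the driven dissipative 2-level system.

```python
import numpy as np

def lindbladian_matrix(H, jumps):
    """Matrix of the superoperator L acting on row-major-vectorised rho."""
    d = H.shape[0]
    I = np.eye(d)
    # Coherent part: -i[H, rho]  ->  -i(H (x) I - I (x) H^T)
    L = -1j * (np.kron(H, I) - np.kron(I, H.T))
    for J in jumps:
        JdJ = J.conj().T @ J
        # Dissipator: J rho J^dag - (1/2){J^dag J, rho}
        L += np.kron(J, J.conj()) - 0.5 * (np.kron(JdJ, I) + np.kron(I, JdJ.T))
    return L

# 2-level example of the text: H = Omega sigma_x, J = sqrt(kappa) sigma_-,
# with kappa = 4 Omega (working in units Omega = 1).
sx = np.array([[0, 1], [1, 0]], complex)
sm = np.array([[0, 1], [0, 0]], complex)   # sigma_- = |0><1| in basis (|0>,|1>)
L = lindbladian_matrix(1.0 * sx, [np.sqrt(4.0) * sm])

# Eigenvalues 0, -2, -3 +/- i sqrt(3), in units of Omega.
print(np.sort_complex(np.linalg.eigvals(L)))

# Trace preservation: vec(identity) annihilates L from the left.
print(np.allclose(np.eye(2).reshape(-1) @ L, 0))
```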
§.§.§ Example: 2-level system at T=0

Consider the problem illustrated in Fig. 8(b). It depicts a system with two levels |0⟩ and |1⟩ (i.e., a qubit) under the action of a driving Hamiltonian

H = Ωσ_x,

where σ_x = |0⟩⟨1| + |1⟩⟨0|. (This could model, e.g., an atom where two atomic levels are driven by a laser on resonance with the line between them, described in the rotating wave approximation with Ω the strength of the drive, or Rabi frequency.) The system of Fig. 8(b) is also subject to a dissipative process described by a single jump operator

J = √κ σ_-,

where σ_- = |0⟩⟨1| (and σ_+ = σ_-^†). This models for example the interaction with a zero temperature environment where the system can only emit, say photons, into the environment, but not absorb from it. The rate for these emissions is κ. The Lindbladian for this 2-level problem explicitly reads

Ł(·) = -iΩ[σ_x, (·)] + κσ_-(·)σ_+ - (κ/2){n, (·)},

where n = σ_+σ_-. Given that ρ is a 2×2 matrix, the superoperator Ł can be thought of as a 4×4 matrix and diagonalised explicitly. For simplicity let us consider the case where κ = 4Ω, for which expressions are more compact (as certain square roots in the general solution vanish). The four eigenvalues of Ł are

λ_0 = 0, λ_1 = -2Ω, λ_2 = λ_3^* = -3Ω - i√3 Ω,

where we have labelled the eigenvalues by decreasing real part. The first eigenvalue vanishes, as expected, and corresponds to the stationary state. All other eigenvalues have negative real parts, indicating that their modes decay exponentially in time. For each eigenvalue there are right and left eigenmatrices, i.e., Ł(R_n) = λ_n R_n, Ł^†(L_n) = λ_n L_n. For Ł of L2 the right eigenmatrices are

R_0 = ([1/6, -i/3; i/3, 5/6]), R_1 = ([0, 1/2; 1/2, 0]), R_2 = R_3^† = ([-(1+i√3)/12, i/6; -i/6, (1+i√3)/12]),

and the left eigenmatrices are

L_0 = 𝟙, L_1 = σ_x, L_2 = L_3^† = ([1 + 2√3 i, 2√3 i/(√3 - i); -2√3 i/(√3 - i), 1]),

and we have chosen a normalisation such that Tr(L_n · R_m) = δ_nm. We can use the above for a spectral decomposition of the generator, so that rhot becomes

ρ(t) = ∑_n e^{tλ_n} (L_n · ρ_0) R_n.

From here we see that the stationary state coincides with R_0, and in the particular case of the 2-level system of Fig. 8(b) we have

ρ_ss = R_0 = ([1/6, -i/3; i/3, 5/6]).

rhoss2 says that in the stationary state the average occupation of the upper state is ⟨n⟩ = 1/6 and that of the lower state 1 - ⟨n⟩ = 5/6, and there are also steady state coherences, so that ⟨σ_y⟩ = 2/3 ≠ 0.

§.§.§ Example: classical 2-level system

It was mentioned above that a classical master equation can be embedded in the QME. Consider the example of the same 2-level system, but now we replace the coherent term in the dynamics by a dissipative jump from the |0⟩ to the |1⟩ state, see Fig. 8(c). Of course this corresponds to the dynamics of a classical Ising spin, but let us consider how such a system is described in the QME framework. In this case we have the following Hamiltonian and jump operators,

H = 0, J_1 = √κ σ_-, J_2 = √γ σ_+,

leading to the Lindbladian

Ł(·) = κσ_-(·)σ_+ + γσ_+(·)σ_- - [(κ-γ)/2]{n, (·)} - γ(·).

If we decompose the density matrix as

ρ = ([p_1, x - iy; x + iy, p_0]),

where p_0,1 are the probabilities of occupation of the states, with p_0 + p_1 = 1, and x, y the coherences, we find that the QME, QME2, splits into two disconnected parts,

∂_t ρ = Ł(ρ) ⇒ ∂_t([p_1; p_0]) = ([γ p_0 - κ p_1; κ p_1 - γ p_0]), ∂_t([x; y]) = -[(κ+γ)/2]([x; y]),
and the probabilities and coherences decouple. If the initial state ρ_0 is diagonal, then the dynamics described by the Lindbladian L2c is the same as that of a classical generator

W = ([-κ, γ; κ, -γ]),

acting on a probability vector (p_1, p_0). If the initial state has any coherences, from QME2c it follows that they decay exponentially fast, taking the system into the classical subspace.

§.§ Quantum jump trajectories

The Lindbladian, L, generates a continuous quantum Markov chain <cit.>. Just like the classical ME is associated to stochastic trajectories between configurations, cf. Fig. 4(b), the QME can be unravelled in terms of stochastic quantum trajectories <cit.>. As in the classical case, averaging the state of the system at any one time over the set of stochastic trajectories recovers the deterministic evolution of the density matrix under the QME QME. Quantum unravellings are not unique. They depend on the choice of observational basis in the environment: for example, for situations common in quantum optics, different unravellings could be those given by counting photons as opposed to homodyne measurement of photocurrents (associated with quantum trajectories described in terms of quantum stochastic differential equations, where the quantum noise due to the environment enters as a quantum Wiener process). The particular class of unravellings we will consider, which makes the analogy to the classical case more direct, is that based on quantum jumps. Figure 9(a) sketches a quantum trajectory associated with the QME QME, corresponding to a quantum jump unravelling. Let us assume for simplicity that we start from a pure state |ψ_t_0⟩ at time t_0. A trajectory will correspond to the evolution of a pure state |ψ_t⟩ given by periods of deterministic evolution, generated by H_eff, cf. Heff, punctuated by sudden jumps of the wavefunction due to the action of the jump operators J_μ occurring at random times. The periods between quantum jumps are analogous to the periods between classical jumps in the case of classical Markov chains; compare Fig. 9(a) with Fig. 4(b). The evolution between jumps is given by

|ψ_t_k+Δt⟩ = e^{-iΔt H_eff}|ψ_t_k⟩,

where |ψ_t_k⟩ is the state after the previous quantum jump, which occurred at a time we denote t_k. As mentioned above, the operator H_eff plays the role of the escape rate operator for the quantum trajectories. In contrast to the classical case, H_eff is in general non-diagonal, and as such the state evolves in time between jumps. Since in general H_eff ≠ H_eff^†, the evolution in psieff is non-unitary, and as such the norm of the state is not preserved. This non-conservation of the norm is associated with the survival time before the next quantum jump occurs. In particular, if the last jump occurred at time t_k, the probability of surviving (i.e., having no jump) until time t_k + Δt is related to the norm of psieff by

P_{|ψ_t_k⟩}(Δt) = ‖|ψ_t_k+Δt⟩‖^2/‖|ψ_t_k⟩‖^2 = ⟨ψ_t_k|e^{iΔt H_eff^†} e^{-iΔt H_eff}|ψ_t_k⟩/⟨ψ_t_k|ψ_t_k⟩.

This expression should be compared to the classical survival probability in a configuration, which is the exponential of the escape rate, P(Δt|C) = e^{-Δt R(C)}. After the no-jump evolution psieff from the time of the last jump at t_k, a new jump occurs at t_k+1 = t_k + Δt. The wavefunction changes by direct application of the jump operator to the state before the jump, i.e.,

|ψ_t_k+1⟩_after = J_μ_k+1 |ψ_t_k+1⟩.

The change in the state occurs instantaneously, and thus the time index is the same before and after the jump, cf.
Fig. 9(a). The probability of it being jump μ out of the N_J possible jumps is given by

p_μ(|ψ_t_k+1⟩) = ⟨ψ_t_k+1|J_μ^† J_μ|ψ_t_k+1⟩ / ∑_ν=1^N_J ⟨ψ_t_k+1|J_ν^† J_ν|ψ_t_k+1⟩,

where |ψ_t_k+1⟩ is the state just before the jump, psiJ. Each jump is associated with an emission event observed in the environment, while the periods between jumps correspond to periods where no emission is observed. The time record of these observed emissions is called a quantum jump trajectory. Let us consider the 2-level system of Example 1 above (for the choice of parameters κ = 4Ω). Let us assume for simplicity that the initial state is the state |0⟩. From H2J2 we get the effective Hamiltonian, cf. Heff,

H_eff = Ωσ_x - i(κ/2)n.

The survival probability between jumps starting from state |0⟩ is then

P_{|0⟩}(Δt) = e^{-2ΩΔt}[1 + 2ΩΔt + 2(ΩΔt)^2].

Given that there is a single jump operator, we have for psiJ

|ψ_t_k+1⟩_after = J|ψ_t_k+1⟩ ∝ |0⟩,

so that after each quantum jump the system is reverted to the initial state. In terms of the states after the jumps this corresponds to a renewal process,

|0⟩ → |0⟩ → |0⟩ → ⋯,

which, in contrast to classical renewals, has waiting times Δt that are not exponentially distributed, cf. Pw2. The associated quantum jump trajectories are a history of the times at which the single kind of quantum jump occurs, such as the sequence of clicks sketched in Fig. 9(b).

§.§ Dynamical large deviations

We can generalise <cit.> the LD approach of Lecture <ref> to study ensembles of quantum jump trajectories (QJTs). In particular, we wish to classify QJTs through dynamical observables such as the total number of jumps K in a trajectory (e.g., the total number of photons emitted). PKLDLT generalise straightforwardly. The SCGF can also be calculated from the largest eigenvalue of a tilted generator: if the observable of interest for a QJT ω_t is defined as

K(ω_t) = ∑_μ α_μ K_μ(ω_t),

where K_μ(ω_t) is the total number of jumps of kind μ in trajectory ω_t, then the associated tilted operator reads

Ł_s(·) = -i[H, (·)] + ∑_μ e^{-sα_μ} J_μ(·)J_μ^† - 1/2{J_μ^† J_μ, (·)}.

Note that K of Koq is a “counting operator" - cf. Kgen when β_C = 0 - and thus the tilting affects only the jump terms in Ł_s.

§.§.§ Example: 2-level system

Let us go back to the example of the 2-level system of section 4.1.1. As an observable we consider the total number of jumps. The corresponding tilted operator reads <cit.>

Ł_s(·) = -iΩ[σ_x, (·)] + e^{-s}κσ_-(·)σ_+ - (κ/2){n, (·)}.

Fixing again κ = 4Ω for simplicity, we can obtain immediately the SCGF as the largest eigenvalue of L2s,

θ(s) = 2Ω(e^{-s/3} - 1).

From a Legendre transform like LT we obtain the corresponding rate function

φ(k) = 3[k ln(3k/2Ω) - (k - 2Ω/3)].

The SCGF theta2 and the rate function phi2 are shown in Fig. 10(a,b). The moments of the number of jumps are obtained from the derivatives of θ(s) evaluated at zero. For example, for the average number of jumps we get

⟨K⟩/t = -dθ(s)/ds|_s=0 = (2/3)Ω,

which also corresponds to the minimum of the rate function phi2. The variance in turn reads

(⟨K^2⟩ - ⟨K⟩^2)/t = d^2θ(s)/ds^2|_s=0 = (2/9)Ω.

The whole of the SCGF and/or the rate function describes the shape of the overall distribution of K. While theta2phi2 resemble the form of the SCGF and rate function for a Poisson process, cf. PKexLDZsex, there are small but crucial differences: in the quantum 2-level problem the total number of emissions is sub-Poissonian, as seen from the ratio of the variance to the mean, cf.
K2var2, expressed as a Mandel-Q parameter,

Q = (⟨K^2⟩ - ⟨K⟩^2)/⟨K⟩ - 1 = -2/3,

where Q = 0 for a Poisson process, and Q positive (resp. negative) indicates super-Poissonian (resp. sub-Poissonian) statistics. Figure 10(b) shows that the rate function, and thus the distribution, is narrower than a Poisson rate function, so that the statistics of K displays smaller fluctuations than a Poisson process. Quantum jumps are “anti-bunched" in time - i.e., events are anti-correlated. This is a quantum effect: immediately after a jump the system is in the |0⟩ state, and coherent build-up to the |1⟩ state is needed before another jump can occur; this is manifested in the survival probability not being exponential, cf. Pw2. In fact, the rate function phi2 is that of a Conway-Maxwell-Poisson distribution <cit.>. While the behaviour of θ(s) around s = 0 - or equivalently φ(k) around its minimum - describes properties of typical dynamics, the behaviour of θ(s) for s ≠ 0 encodes properties of atypical fluctuations of the dynamics. The moments of the dynamical observable - such as the total number of jumps we are considering in this example - in the tilted ensemble of QJTs are

⟨K^n⟩_s = ∑_ω_t K^n(ω_t) Prob(ω_t) e^{-sK(ω_t)} / ∑_ω_t Prob(ω_t) e^{-sK(ω_t)},

where Prob(ω_t) is the probability of trajectory ω_t and the tilt is controlled by the counting field s. At long times, the corresponding cumulants are obtained from the derivatives of θ(s). Figure 10(a) shows the tilted average of K,

k(s) = lim_t→∞ ⟨K⟩_s/t = -θ'(s) = (2/3)Ω e^{-s/3},

for the 2-level example. We see that k(s) goes from larger to smaller values as s increases, as expected: s < 0 describes the more active than typical side of the dynamics, while s > 0 the more inactive side. Figure 10(c) shows three sample trajectories taken from the s = 0 (i.e., typical) ensemble, for which k(s=0) = 2Ω/3 (centre), and from the values of s (i.e., tilted ensembles) for which k(s) = 2Ω (left) and k(s) = 2Ω/9 (right). Higher derivatives of θ(s) describe properties of the fluctuations in the atypical QJT subensembles. An interesting observation is that, even if the average rate of emissions k(s) changes with s, the properties of the fluctuations around the average, as measured by the s-dependent Q parameter,

Q_s = lim_t→∞ (⟨K^2⟩_s - ⟨K⟩_s^2)/⟨K⟩_s - 1 = -θ''(s)/θ'(s) - 1 = -2/3,

are independent of s, describing a form of dynamical self-similarity in the 2-level system for this choice of parameters <cit.>.

§.§.§ Example: 3-level system and intermittency

As a second example we consider <cit.> the 3-level problem of Fig. 11(a), obtained by adding a third level |2⟩ coherently coupled to level |0⟩. The Hamiltonian now reads

H = Ω_1(|1⟩⟨0| + |0⟩⟨1|) + Ω_2(|2⟩⟨0| + |0⟩⟨2|),

while the single jump operator is given by

J = √γ |1⟩⟨0|.

For Ω_2 ≪ Ω_1 this problem describes “blinking" dynamics, cf. Fig. 11(b), with a switching between periods of high emissions - corresponding to the system being in the subspace spanned by |0⟩ and |1⟩ - and periods of no emissions, when the system is “shelved" in state |2⟩. Such intermittent dynamics has a natural interpretation in terms of our “thermodynamics of trajectories". Figure 11(c) shows the SCGF for this problem (where we have chosen γ = 4Ω_1 and Ω_2 = Ω_1/10). We see that for s < 0 it tracks that of the 2-level system: atypically active dynamics is one where emissions have similar statistics to those of the 2-level system. In contrast, for s > 0 the SCGF turns abruptly and becomes very flat. The corresponding average emission rate as a function of s is shown in Fig. 11(d).
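Numerically, θ(s) for this model is the eigenvalue of Ł_s with the largest real part, exactly as in the 2-level case. The following sketch (illustrative, not from the notes; it uses the same row-major vectorisation convention as the earlier Lindbladian snippet, with γ = 4Ω_1 and Ω_2 = Ω_1/10 in units Ω_1 = 1) extracts k(s) = -θ'(s) by a central finite difference.

```python
import numpy as np

def tilted_lindbladian(H, J, s):
    """Tilted generator L_s counting jumps of the single jump operator J:
    the term J rho J^dag is dressed by e^{-s}."""
    d = H.shape[0]
    I = np.eye(d)
    JdJ = J.conj().T @ J
    return (-1j * (np.kron(H, I) - np.kron(I, H.T))
            + np.exp(-s) * np.kron(J, J.conj())
            - 0.5 * (np.kron(JdJ, I) + np.kron(I, JdJ.T)))

def theta(H, J, s):
    """SCGF: eigenvalue of L_s with the largest real part."""
    return np.linalg.eigvals(tilted_lindbladian(H, J, s)).real.max()

# 3-level "blinking" model in the basis (|0>, |1>, |2>).
O1, O2, gamma = 1.0, 0.1, 4.0
H = np.zeros((3, 3), complex)
H[0, 1] = H[1, 0] = O1      # Omega_1 (|1><0| + |0><1|)
H[0, 2] = H[2, 0] = O2      # Omega_2 (|2><0| + |0><2|)
J = np.sqrt(gamma) * np.array([[0, 0, 0], [1, 0, 0], [0, 0, 0]], complex)

# k(s) = -theta'(s) changes rapidly across s = 0: the active/inactive
# crossover of Fig. 11.
for s in (-0.5, -0.1, 0.0, 0.1, 0.5):
    ds = 1e-4
    k = -(theta(H, J, s + ds) - theta(H, J, s - ds)) / (2 * ds)
    print(f"s = {s:+.1f}   k(s) = {k:.4f}")
```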
For s < 0 it has values similar to those of the 2-level system at the same s, and the trajectories look dense in jump events, cf. Fig. 11(e). For s > 0, k(s) is close to zero and the trajectories are nearly empty of jump events, cf. Fig. 11(e). The order parameter k(s) changes rapidly around s = 0, and its variance displays a peak indicating enhanced fluctuations: this shows that there is a first-order crossover in the dynamics (a phase transition is not possible in such a finite-sized system) between an active phase at s < 0 and a largely inactive one at s > 0, with typical dynamics occurring at the crossover, and thus displaying strong fluctuations and intermittency. This is the kind of “thermodynamic" interpretation of dynamical behaviour that the dynamical LD method permits.

§ ACKNOWLEDGEMENTS

I am very grateful to my collaborators over the years in the joint work reviewed here, including R. Jack, I. Lesanovsky, V. Lecomte, F. van Wijland, S. Powell, M. Guta, B. Olmos, Y. Elmatad, L. Hedges, A. Keys, Z. Lan, E. Levi, K. Macieszczak, M. Merolle, E. Pitard, and M. van Horssen. Much of the work described in the early part of these notes was done together with my late friend and long-time collaborator David Chandler, to whom this paper is dedicated. Financial support was provided by EPSRC Grant No. EP/M014266/1.
http://arxiv.org/abs/1709.09208v2
{ "authors": [ "Juan P. Garrahan" ], "categories": [ "cond-mat.stat-mech" ], "primary_category": "cond-mat.stat-mech", "published": "20170926182130", "title": "Aspects of non-equilibrium in classical and quantum systems: slow relaxation and glasses, dynamical large deviations, quantum non-ergodicity, and open quantum dynamics" }
Ivan Sosnovik^{uva,sk} [email protected], Ivan Oseledets^{sk,inm} [email protected]
^uva University of Amsterdam, Amsterdam, The Netherlands
^sk Skolkovo Institute of Science and Technology, Moscow, Russia
^inm Institute of Numerical Mathematics RAS, Moscow, Russia

In this research, we propose a deep learning based approach for speeding up topology optimization methods. The problem we seek to solve is the layout problem. The main novelty of this work is to state the problem as an image segmentation task. We leverage the power of deep learning methods as an efficient pixel-wise image labeling technique to perform the topology optimization. We introduce a convolutional encoder-decoder architecture and the overall approach of solving the above-described problem with high performance. The conducted experiments demonstrate significant acceleration of the optimization process. The proposed approach has excellent generalization properties. We demonstrate the ability of the application of the proposed model to other problems. The successful results, as well as the drawbacks of the current method, are discussed.

deep learning, topology optimization, image segmentation

§ INTRODUCTION

Topology optimization solves the layout problem with the following formulation: how to distribute the material inside a design domain such that the obtained structure has optimal properties and satisfies the prescribed constraints? The most challenging formulation of the problem requires the solution to be binary, i.e. it should state whether there is a material or a void for each of the parts of the design domain. One of the common examples of such an optimization is the minimization of the elastic strain energy of a body for a given total weight and boundary conditions. Initiated by the demands of the automotive and aerospace industry in the 20^th century, topology optimization has spread its application to a wide range of other disciplines: e.g. fluids, acoustics, electromagnetics, optics and combinations thereof <cit.>. All modern approaches for topology optimization used in commercial and academic software are based on finite element methods. SIMP (Solid Isotropic Material with Penalization), which was introduced in 1989 <cit.>, is currently a widely used, simple and efficient technique. It proposes to use penalization of intermediate values of the material density, which improves the convergence of the solution to a binary one. The topology optimization problem can alternatively be solved using BESO (bi-directional evolutionary structural optimization) <cit.>. The key idea of this method is to remove the material where the stress is the lowest and add material where the stress is higher. A more detailed review is given in Section <ref>. For all of the above-described methods, the process of optimization can be roughly divided into two stages: general redistribution of the material and refinement. During the first stage, the material layout varies a lot from iteration to iteration, while during the second stage the material distribution converges to the final result: the global structure remains unchanged and only local alterations are observed. In this paper, we propose a deep learning based approach for speeding up the most time-consuming part of traditional topology optimization methods. The main novelty of this work is to state the problem as an image segmentation task.
We leverage the power of deep learning methods as an efficient pixel-wise image labeling technique to accelerate modern topology optimization solvers. The key features of our approach are the following:

* acceleration of the optimization process;
* excellent generalization properties;
* absolutely scalable techniques.

§ TOPOLOGY OPTIMIZATION PROBLEM

The current research is devoted to the topology optimization of mechanical structures. Consider a design domain Ω: {ω_j}_j=1^N, filled with a linear isotropic elastic material and discretized with square finite elements. The material distribution is described by the binary density variable x_j that represents either absence (0) or presence (1) of the material at each point of the design domain. Therefore, the problem that we seek to solve can be written in mathematical form as:

min_x c(u(x), x) = ∑_j=1^N E_j(x_j) u^T_j k_0 u_j
s.t. V(x)/V_0 = f_0
KU = F
x_j ∈ {0; 1}, j = 1…N

where c is the compliance, u_j is the element displacement vector, k_0 is the element stiffness matrix for an element with unit Young's modulus, U and F are the global displacement and force vectors, respectively, and K is the global stiffness matrix. V(x) and V_0 are the material volume and design domain volume, respectively, and f_0 is the prescribed volume fraction. The discrete nature of the problem makes it difficult to solve. Therefore, the last constraint in (<ref>) is replaced with the following one: x_j ∈ [0; 1], j = 1…N. The most common method for the topology optimization problem with continuous design variables is the so-called SIMP or power-law approach <cit.>. This is a gradient-based iterative method with penalization of non-binary solutions, which is achieved by choosing Young's modulus of a simple but very efficient form:

E_j(x_j) = E_min + x^p_j (E_0 - E_min)

The exact implementation of the SIMP algorithm is out of the scope of the current paper. The updating schemes, as well as different heuristics, can be found in excellent papers <cit.>. The topology optimization code in Matlab is described in detail in <cit.> and the Python implementation of the SIMP algorithm is presented in <cit.>. The standard half MBB-beam problem is used to illustrate the process of topology optimization. The design domain, constraints, and loads are represented in Figure <ref>. The optimization of this problem is demonstrated in Figure <ref>. During the initial iterations, a general redistribution of the material inside the design domain is performed. The rest of the optimization process includes the filtering of the pixels: the densities with intermediate values converge to binary values and the silhouette of the obtained structure remains almost unchanged.

§ LEARNING TOPOLOGY OPTIMIZATION

As was illustrated in Section <ref>, it is enough for the solver to perform a small number N_0 of iterations to obtain a preliminary view of the structure. The fraction of non-binary densities could be close to 1; however, the global layout pattern is close to the final one. The obtained image I could be interpreted as a blurred image of the final structure, or an image distorted by other factors. The point is that there are just two types of objects in this image: the material and the void. The image I^*, obtained as a result of topology optimization, does not contain intermediate values and, therefore, could be interpreted as the mask of image I. According to this notation, starting from iteration N_0 the process of optimization I → I^* mimics the process of image segmentation for two classes, or foreground-background segmentation.
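For orientation, it may help to recall what one solver iteration does to the densities. The following is a sketch of the classical optimality-criteria update used in SIMP-type codes (in the spirit of the 88-line Matlab code cited above), not code from this paper; the FEM solve that produces the compliance sensitivities dc is assumed to be available.

```python
import numpy as np

def oc_update(x, dc, dv, f0, move=0.2, eta=0.5):
    """One optimality-criteria update of the element densities x in [0, 1].

    x  : current densities        dc : compliance sensitivities (<= 0)
    dv : volume sensitivities     f0 : prescribed volume fraction
    """
    lo, hi = 1e-9, 1e9
    while (hi - lo) / (hi + lo) > 1e-4:
        lmid = 0.5 * (lo + hi)             # bisect on the Lagrange multiplier
        x_new = x * (-dc / (lmid * dv)) ** eta
        x_new = np.clip(x_new, np.maximum(x - move, 0.0),
                               np.minimum(x + move, 1.0))
        if x_new.mean() > f0:              # too much material: raise multiplier
            lo = lmid
        else:
            hi = lmid
    return x_new
```

In a full solver this update alternates with an FEM solve, where dc_j = -p x_j^{p-1}(E_0 - E_min) u_j^T k_0 u_j follows from differentiating the compliance above.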
We propose the following pipeline for topology optimization: use the SIMP method to perform the initial iterations and get the distribution with non-binary densities; then use a neural network to perform the segmentation of the obtained image and drive the distribution to a {0, 1} solution.

§.§ Architecture

Here we introduce the Neural Network for Topology Optimization - a deep fully-convolutional neural network aimed at performing the convergence of densities during the topology optimization process. The input of the model is two grayscale images (or a two-channel image). The first one is the density distribution X_n inside the design domain which was obtained after the last performed iteration of the topology optimization solver. The second input is the last performed update (gradient) of the densities δX = X_n - X_n-1, i.e. the difference between the densities after the n-th iteration and the n-1-th iteration. The output of the proposed model is a grayscale image of the same resolution as the input, which represents the predicted final structure. The architecture of our model mimics the hourglass shape common for image segmentation (a code sketch combining this architecture with the training objective is given at the end of the Training subsection below). The proposed model has an encoder network and a corresponding decoder network, followed by a final pixel-wise classification layer. The architecture is illustrated in Figure <ref>. The encoder network consists of 6 convolutional layers. Each layer has kernels of size 3 × 3 and is followed by a ReLU nonlinearity. The first two layers have 16 convolutional kernels. This block is followed by max-pooling over windows of size 2 × 2. The next two layers have 32 kernels and are also followed by a MaxPooling layer. The last block consists of 2 layers with 64 kernels each. The decoder network copies the architecture of the encoder part and reverses it. The MaxPooling layers are replaced with Upsampling layers followed by concatenation with features from the corresponding low-level layer, as is done in U-Net <cit.>. The pooling operation introduces invariance of the subsequent network to small translations of the input. The concatenation of features from different layers allows one to benefit from the use of both the raw low-level representation and the significantly encoded parametrization from the higher levels. The decoder is followed by a Convolutional layer with 1 kernel and a sigmoid activation function. We included 2 Dropout layers <cit.> as regularization for our network. The width and height of the input image can vary, but they must be divisible by 4 in order to guarantee the coherence of the shapes of tensors in the computational graph. The proposed neural network has just 192,113 parameters.

§.§ Dataset

To train the above-described model, we need example solutions of System (<ref>). The collection of a large dataset from real-life examples is difficult or even impossible. Therefore, we use synthetic data generated by using Topy <cit.> - an open source solver for 2D and 3D topology optimization, based on the SIMP approach. To generate the dataset we sampled pseudo-random problem formulations and performed 100 iterations of the standard SIMP method. Each problem is defined by its constraints and loads. The strategy of sampling is the following:

* The number of nodes with fixed x and y translations and the number of loads are sampled from Poisson distributions: N_x ∼ P(λ = 2), N_y, N_L ∼ P(λ = 1)
* The nodes for each of the above-described constraints are sampled from a distribution defined on the grid.
The probability of choosing a boundary node is 100 times higher than that for an inner node.
* The load values are chosen as -1.
* The volume fraction is sampled from the normal distribution f_0 ∼ 𝒩(μ = 0.5, σ = 0.1)

The obtained dataset[The dataset and the related code are available at <https://github.com/ISosnovik/top>] has 10,000 objects. Each object is a tensor of shape 100 × 40 × 40: 100 iterations of the optimization process for a problem defined on a regular 40 × 40 grid.

§.§ Training

We used the dataset described in Section <ref> to train our model. During the training process we “stopped" the SIMP solver after k iterations and used the obtained design variables as an input for our model. The input images were augmented with transformations from the group D4: horizontal and vertical flips and rotation by 90 degrees. k was sampled from a certain distribution ℱ; the Poisson distribution P(λ) and the discrete uniform distribution U[1, 100] are of interest to us. For training the network we used an objective function of the following form:

ℒ = ℒ_conf(X_true, X_pred) + βℒ_vol(X_true, X_pred)

where the confidence loss ℒ_conf is a binary cross-entropy:

ℒ_conf(X_true, X_pred) = -(1/NM)∑_i=1^N ∑_j=1^M [X_true^ij log(X_pred^ij) + (1 - X_true^ij) log(1 - X_pred^ij)]

where N × M is the resolution of the image. The second summand in Equation (<ref>) represents the volume fraction constraint:

ℒ_vol(X_true, X_pred) = (X̅_pred - X̅_true)^2

We used the ADAM <cit.> optimizer with default parameters. We halved the learning rate once during the training process. All code is written in Python[The implementation is available at <https://github.com/ISosnovik/nn4topopt>]. For the neural networks, we used Keras <cit.> with a TensorFlow <cit.> backend. An NVIDIA Tesla K80 was used for deep learning computations. The training of a neural network from scratch took about 80-90 minutes.
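Combining the architecture of Section 3.1 with the objective above, a minimal Keras sketch might look as follows. The channel counts, skip connections, pooling, and final 1×1 sigmoid layer follow the text; the dropout rate and placement and the value of β are illustrative assumptions (so the parameter count will not match 192,113 exactly), and the per-image volume term is approximated by a batch mean.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def conv_block(x, filters):
    x = layers.Conv2D(filters, 3, padding='same', activation='relu')(x)
    return layers.Conv2D(filters, 3, padding='same', activation='relu')(x)

def build_model(shape=(40, 40, 2)):
    """Encoder-decoder: input (X_n, delta X), sigmoid output mask."""
    inp = layers.Input(shape)
    c1 = conv_block(inp, 16)
    c2 = conv_block(layers.MaxPooling2D(2)(c1), 32)
    c3 = conv_block(layers.MaxPooling2D(2)(c2), 64)
    c3 = layers.Dropout(0.25)(c3)                       # placement assumed
    d2 = conv_block(layers.concatenate([layers.UpSampling2D(2)(c3), c2]), 32)
    d1 = conv_block(layers.concatenate([layers.UpSampling2D(2)(d2), c1]), 16)
    d1 = layers.Dropout(0.25)(d1)                       # placement assumed
    out = layers.Conv2D(1, 1, activation='sigmoid')(d1)
    return models.Model(inp, out)

def total_loss(beta=0.01):                              # beta value assumed
    bce = tf.keras.losses.BinaryCrossentropy()
    def loss(y_true, y_pred):
        # Volume term: batch-mean approximation of (mean(X_pred)-mean(X_true))^2
        l_vol = tf.square(tf.reduce_mean(y_pred) - tf.reduce_mean(y_true))
        return bce(y_true, y_pred) + beta * l_vol
    return loss

model = build_model()
model.compile(optimizer='adam', loss=total_loss())
```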
§ RESULTS

The goal of our experiments is to demonstrate that the proposed model and the overall pipeline are useful for solving topology optimization problems. We compare the performance of our approach with the standard SIMP solver <cit.> in terms of the accuracy of the obtained structure and the average time consumption. We report two metrics from common image segmentation evaluation: binary accuracy and intersection over union (IoU). Let n_l, l = 0, 1 be the total number of pixels of class l, and let ω_tp, t, p = 0, 1 be the total number of pixels of class t predicted to belong to class p. Then:

Bin. Acc. = (ω_00 + ω_11)/(n_0 + n_1); IoU = (1/2)[ω_00/(n_0 + ω_10) + ω_11/(n_1 + ω_01)]

We examine 4 neural networks with the same architecture but trained with different policies. The number of iterations after which we “stopped" the SIMP algorithm was sampled from different distributions. We trained one neural network by choosing the discrete uniform distribution U[1, 100], and another three models were trained with the Poisson distribution P(λ) with λ = 5, 10, 30.

§.§ Accuracy and performance

We conducted several experiments to illustrate the results of the application of the proposed pipeline and the exact model to mechanical problems. Figure <ref> demonstrates that our neural network restores the final structure even when used after only 5 iterations. The output of the model is close to that of the SIMP algorithm. The overall topology of the structure is the same. Furthermore, the time consumption of the proposed method, in this case, is almost 20 times smaller. Neural networks trained with different policies produce close results: the models preserve the final structure up to some rare pixel-wise changes. However, the accuracy of these models depends on the number of initial iterations performed by the SIMP algorithm. Tables <ref>, <ref> summarize the results obtained in the series of experiments. The trained models demonstrate significantly more accurate results compared to thresholding applied after the same number of iterations of the SIMP method. Some models benefit when they are applied after 5-10 iterations, while others demonstrate better results in the middle or at the end of the process. The proposed pipeline can significantly accelerate the overall algorithm with minimal reduction in accuracy, especially when the CNN is used at the beginning of the optimization process. The neural network which used the discrete uniform distribution during the training process does not demonstrate the highest binary accuracy and IoU compared to the other models until the latest iterations. However, this model allows one to outperform the SIMP algorithm with thresholding throughout the optimization process.

§.§ Transferability

This research is dedicated to the application of neural networks to the topology optimization of minimal compliance problems. Nevertheless, the proposed model does not rely on any prior knowledge of the nature of the problem. Despite the fact that we used a mechanical dataset during training, other types of problems from the topology optimization framework could be solved by using the proposed pipeline. To examine the generalization properties of our model, we generated a small dataset of heat conduction problems defined on a regular 40 × 40 grid. The exact solutions and the intermediate densities for these problems were obtained in exactly the same way as described in Section <ref>. The conducted experiments are summarized in Tables <ref>, <ref>. During the initial part of the optimization process, the results of the pre-trained CNNs are more accurate than those of thresholding. Our model approximates the mapping to the final structure precisely when the training dataset and the validation dataset are from the same distribution. However, it mimics the updates of the SIMP method during the initial iterations even when the CNN is applied to another dataset. Therefore, this pipeline could be useful for the fast prediction of a rough structure in various topology optimization problems. The neural network described in Section <ref> is fully-convolutional, i.e. it consists of Convolutional, Pooling, Upsampling and Dropout layers. The architecture itself places no constraints on the size of the input data. In this experiment, we checked the scaling properties of our method. The model we examined had been trained on the original dataset with square images of size 40 × 40. Figure <ref> visualizes the result of the application of the CNN to problems defined on grids with different resolutions. Here we can see that changes in the aspect ratio and reasonable changes in the resolution of the input data do not affect the accuracy of the model. The pre-trained neural network successfully reconstructs the final structure for a given problem. Significant changes in the size of the input data require additional training of the model because the typical size of common patterns changes with the resolution of the image.
Nevertheless, the demonstrated cases did not require tuning the neural network and allowed us to transfer the model from one resolution to another.

§ RELATED WORK

The current research is, to the best of our knowledge, the first one which utilizes a deep learning approach for the topology optimization problem. It is inspired by the recent successful application of deep learning to problems in computational physics. Greff et al. <cit.> used a fully-connected neural network as a mapping function from the nano-material configuration and the input voltage to the output current. The adaptation of a restricted Boltzmann machine for solving the quantum many-body problem was demonstrated in <cit.>. Mills et al. <cit.> used the machinery of deep learning to learn the mapping between potential and energy, bypassing the need to numerically solve the Schrödinger equation and the need for computing wave functions. Tompson et al. <cit.> and Kiwon et al. <cit.> accelerated the process of modeling of liquids by the application of neural networks. The paper <cit.> demonstrates how a deep neural network trained on quantum mechanical density functional theory calculations can learn an accurate and transferable potential for organic molecules. The cutting-edge research <cit.> shows how generative adversarial networks could be used for simulating 3D high-energy particle showers in multi-layer electromagnetic calorimeters.

§ CONCLUSION

In this paper, we proposed a neural network as an effective tool for the acceleration of the topology optimization process. Our model learned the mapping from the intermediate result of the iterative method to the final structure of the design domain. This allowed us to stop the SIMP method earlier and significantly decrease the total time consumption. We demonstrated that the model trained on a dataset of minimal compliance problems can produce a rough approximation of the solution for other types of topology optimization problems. Various experiments showed that the proposed neural network transfers successfully from a dataset with a small resolution to problems defined on grids with better resolution.

§ DATASET

§ RESULTS
http://arxiv.org/abs/1709.09578v1
{ "authors": [ "Ivan Sosnovik", "Ivan Oseledets" ], "categories": [ "cs.LG", "math.NA" ], "primary_category": "cs.LG", "published": "20170927152200", "title": "Neural networks for topology optimization" }
We study gluings of asymptotically cylindrical special Lagrangian submanifolds in asymptotically cylindrical Calabi–Yau manifolds. We prove both that there is a well-defined gluing map, and, after reviewing the deformation theory for special Lagrangians, prove that this gluing map defines a local diffeomorphism from matching pairs of deformations of asymptotically cylindrical special Lagrangians to deformations of special Lagrangians. We also give some examples of asymptotically cylindrical special Lagrangian submanifolds to which these results apply.

§ INTRODUCTION

Calabi–Yau manifolds have various families of distinguished submanifolds. The most obvious family is the complex submanifolds, since Calabi–Yau manifolds are complex. There are other families that are less well understood. In this paper, we are concerned with special Lagrangian submanifolds, first introduced by Harvey and Lawson in <cit.> in 1982. They used special Lagrangian submanifolds as an example of their notion of a calibrated submanifold. As calibrated submanifolds, special Lagrangians are minimal submanifolds; in fact, they are volume-minimising in their homology class. Rather than working with the calibration, we shall regard special Lagrangians as submanifolds L on which the imaginary part of the holomorphic volume form Ω and the Kähler form ω vanish. We shall prove two theorems concerning asymptotically cylindrical special Lagrangian submanifolds, one of which has a variant extending to a slightly more general setting. First we prove a gluing theorem

[Theorem <ref>] Let M_1 and M_2 be asymptotically cylindrical Calabi–Yau manifolds, whose limit cylinders can be identified so that we can construct a “connected sum" M^T, and such that the limits of the Calabi–Yau structures agree under this identification. Suppose that L_1 and L_2 are asymptotically cylindrical special Lagrangian submanifolds and their limit cylinders agree under this identification. By cutting off, we construct a pair of closed forms (Ω^T, ω^T) and a submanifold L_0^T. Suppose that (Ω^T, ω^T) can be perturbed to give a Calabi–Yau structure, only changing the cohomology classes up to scaling. Then we may perturb L_0^T to form a special Lagrangian L^T in M^T with this Calabi–Yau structure.

We also sketch an additional argument that extends Theorem A to

[Theorem <ref>] Let M be a Calabi–Yau manifold and let L be a closed submanifold. If Im Ω|_L and ω|_L are sufficiently small depending on M, L and the inclusion, and are exact, then L can be perturbed to a special Lagrangian L'.

In the second part of the paper, we prove

[Theorem <ref>] Let M_1, M_2, L_1, L_2, M^T and L^T be as in Theorem A. For a deformation of (L_1, L_2) as a pair of special Lagrangians whose limits are identified, we may define a gluing map. Moreover, the space of all such pairs is a manifold around (L_1, L_2), and this gluing map defines a smooth map from this manifold to the special Lagrangian deformations of L^T. This is a local diffeomorphism; in particular, it constructs an open subset of the special Lagrangians on M^T around L^T.
We shall suppose throughout that our ambient Calabi–Yau manifolds are connected, but that the special Lagrangian submanifolds need not be, to allow for the possibility of gluing finitely many special Lagrangians with the same total collection of cross-sections. On physical grounds, it is conjectured that there is a duality between Calabi–Yau threefolds known as “mirror symmetry" (see for example the review of Gross <cit.>). One of the formulations of this given in <cit.> is that there should be an isomorphism between the special Lagrangian submanifolds of a Calabi–Yau manifold equipped with a flat U(1) bundle and the complex submanifolds of its mirror pair equipped with a holomorphic line bundle. Specifically, a conjecture of Strominger, Yau and Zaslow <cit.> says that a Calabi–Yau manifold and its mirror pair both admit special Lagrangian torus fibrations and the generic tori are dual (in the sense that one is an appropriately generalised first cohomology group of the other). This gives another reason to study special Lagrangian submanifolds. It is obvious that a torus would be obtained from Theorem A if the special Lagrangians we glue are topologically ℝ × T^n-1, and then Theorem B would yield an interaction between deformations of these tori and deformations of the original ℝ × T^n-1 special Lagrangians. The deformation theory of calibrated submanifolds in general and special Lagrangian submanifolds in particular was initiated by McLean <cit.>. One of the key ideas in the special Lagrangian case is the use of the Kähler form to translate normal vector fields on L into one-forms on L, though this of course works more generally for Lagrangian submanifolds of symplectic manifolds. If M is an asymptotically cylindrical Calabi–Yau manifold, and L is an asymptotically cylindrical special Lagrangian submanifold, then the induced metric on L is itself asymptotically cylindrical. Hence, deformation results follow by combining the asymptotically cylindrical Laplace–Beltrami theory of Lockhart <cit.> with the McLean results. This was carried out by Salur and Todd <cit.>. A slightly different deformation result for minimal Lagrangian submanifolds with boundary constrained to lie in an appropriate symplectic submanifold has also been obtained by Butscher <cit.>. Joyce has written a series of papers <cit.> on special Lagrangians with conical singularities in a fixed compact (generalised) Calabi–Yau manifold. For instance, he provides desingularisations of such singularities by gluing in a sufficiently small asymptotically conical submanifold of ℂ^n. This gluing argument uses the Lagrangian Neighbourhood Theorem to ensure that all submanifolds we deal with are Lagrangian. However, once we have a compact Lagrangian submanifold that is close to special Lagrangian, he gives a general result on perturbing it to become special Lagrangian <cit.>. Pacini extended this result to the case of asymptotically conical Lagrangian submanifolds in ℂ^n given by patching together special Lagrangians with conical singularities and asymptotically conical ends (see <cit.>). In the asymptotically cylindrical setting, where there is a change of Calabi–Yau structure arising from the Calabi–Yau gluing, it is much harder to remain Lagrangian and so the analysis is rather different.
There are also other gluing constructions of submanifolds satisfying appropriate partial differential equations: for instance, Lotay has considered similar desingularisation problems for coassociative submanifolds of G_2 manifolds (see for example <cit.>) and Butscher constructs contact stationary Legendrian submanifolds by a gluing method in <cit.>. We now outline the content of this paper. As a preliminary, in section <ref>, we will first describe asymptotically cylindrical submanifolds and discuss the restriction map of forms in section <ref>. We then discuss patching in the purely Riemannian case. We also briefly describe patching methods for asymptotically cylindrical submanifolds. We give our definitions and some examples and then review (and correct slightly, altering the dimension of the asymptotically cylindrical deformation space in Theorem <ref>) the deformation theory of McLean and Salur–Todd in subsection <ref>. We then turn to gluing. In Hypothesis <ref>, we assume a result on the gluing of Calabi–Yau manifolds that generalises to higher n the one obtained by the author in <cit.>, and then prove in Theorem <ref> (Theorem A) that asymptotically cylindrical special Lagrangian submanifolds can be glued. This follows by considering the same argument as in the deformation case. The main difficulty is to find a bound on the inverse of the linearisation, and this is done by showing that the inverse of the linearisation depends continuously on the Calabi–Yau structure and so, by comparing with the original structures, we can assume we are just working with d + d* as in the deformation case. Extending this slightly, by finding an auxiliary Calabi–Yau structure around any given nearly special Lagrangian submanifold, leads to Theorem <ref> (Theorem A1), that any nearly special Lagrangian submanifold can be perturbed to a special Lagrangian. In section <ref>, we find the derivative of the map perturbing submanifolds to special Lagrangians that we have just constructed, subject to an analytic condition. In order to do this, we first give a careful discussion of how to identify normal vector fields on nearby pairs of submanifolds and how these give derivatives for various natural maps on the “manifold of submanifolds". We then briefly discuss how corresponding analysis applies to taking the harmonic parts of normal vector fields (that is, the part corresponding to a harmonic one-form), and hence find the derivative of the map making things special Lagrangian from the previous section. For instance, we show that given two curves of submanifolds L_s and L'_s, the curve of normal vector fields v_s such that exp_v_s(L_s) = L'_s is smooth, and give an expression for the tangent to v_s at zero in terms of the tangents to the curves L_s and L'_s (for this expression, see Proposition <ref>). This material is inspired by Palais <cit.> and Hamilton <cit.>, but we could not find it in the literature. In section <ref>, we proceed to the same kind of analysis of the patching of submanifolds. This is analytically more complicated, but because there is no need to worry about harmonic parts, it is conceptually simpler. Finally, we show in Theorem <ref> (Theorem B) as the final part of section <ref> that the gluing map of special Lagrangians so defined is a local diffeomorphism of moduli spaces.
Intuitively, this is obvious, because the gluing map is just "patch and become special Lagrangian", so its derivative should just be "patch and become harmonic", and Nordström (in <cit.>) says that this is an isomorphism, at least for long enough necks. Acknowledgement. The work in this paper is a slight extension of part of the author's PhD thesis <cit.>. He is indebted to his supervisor, Alexei Kovalev, for much useful advice. He would also like to thank the examiners, especially Dominic Joyce, for helpful comments.§ ASYMPTOTICALLY CYLINDRICAL GEOMETRY In this section, we make the basic Riemannian geometry and analytic definitions required for the rest of the paper, with particular emphasis on asymptotically cylindrical submanifolds. Throughout, all manifolds will be oriented. We make these definitions before defining special Lagrangians, as some of this material will be needed for our review of basic special Lagrangian theory in section <ref>. The section falls into three parts. First, in subsection <ref>, we define norms for objects on manifolds and submanifolds and briefly explain how these norms interact with the inclusion. Specifically, we state Theorem <ref>, that (locally) if the second fundamental form of L in M is bounded in C^{k-1}, then the restriction maps are C^k bounded. In subsection <ref>, we give definitions of asymptotically cylindrical manifolds and define some appropriate approximate gluing or patching maps. We briefly discuss the Laplace–Beltrami operator on the resulting manifolds. We then proceed in subsection <ref> to define asymptotically cylindrical submanifolds (Definition <ref>) and give their basic properties. Finally, we describe an approximate gluing or patching map of asymptotically cylindrical submanifolds in Definition <ref>, and describe how this map interacts with the restriction maps of forms. §.§ Norms To define norms, we shall use the notion of a jet bundle. We shall use the following basic results. Let M be a Riemannian manifold and let E be a vector bundle over M. There exists a vector bundle J^s(E), called the jet bundle, such that at every point p ∈ M, the fibre J^s(E)_p is given by the quotient of smooth sections of E by the sections vanishing to order s at p, and whenever σ is a smooth section of E, we can write j^s(σ) for the smooth section of J^s(E) given by the equivalence class of σ at every point. We call the operation of getting j^s(σ) from σ jet prolongation. Moreover, suppose we have a linear connection on E. Then using the Levi-Civita connection we have induced connections on tensor products. Hence, we may define a bundle map
\[
J^s(E) \to \bigoplus_{l=0}^{s} S^l(T^*M) \otimes E, \qquad j^s(\sigma) \mapsto \big(\sigma_p,\, \operatorname{sym}(\nabla\sigma)_p,\, \dots,\, \operatorname{sym}(\nabla^s\sigma)_p\big),
\]
where S^l is the symmetric product and sym denotes the symmetrisation operators. (<ref>) is a vector bundle isomorphism. A metric and connection on J^s(E) follow immediately. We note also that if F is a smooth fibre bundle, Palais <cit.> shows that we can also define a jet bundle J^s(F). Moreover, this construction is functorial, so that if F is a subbundle of a vector bundle E associated to the tangent bundle, J^s(F) is a smooth subbundle of J^s(E). This means we have a metric and connection on J^s(F) also.
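For instance (the case s = 1, spelled out purely as an illustration), the isomorphism is simply
\[
J^1(E) \xrightarrow{\ \cong\ } E \oplus (T^*M \otimes E), \qquad j^1(\sigma) \mapsto (\sigma, \nabla\sigma),
\]
with no symmetrisation required, since ∇σ is already a section of T^*M ⊗ E; for s = 2 the term ∇²σ must be symmetrised, its antisymmetric part being expressible in terms of curvature applied to σ and therefore recoverable from the lower-order data.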
We may then define: Let (M, g) be a Riemannian manifold. Let s be a non-negative integer, and μ ∈ [0, 1). Let E be a bundle associated to the tangent bundle, so that the Riemannian metric and Levi-Civita connection define a metric and connection on the bundle E. Given a section σ of E, define
\[
\|\sigma\|_{C^{0,\mu}} = \sup_{x \in M} \big(g(\sigma_x, \sigma_x)\big)^{1/2} + \sup_{\substack{x, y \in M \\ d(x,y) < \delta}} \frac{|\sigma_x - \sigma_y|}{d(x,y)^{\mu}},
\]
where δ is the injectivity radius and |σ_x − σ_y| is given by parallel transporting σ_x with ∇ from the fibre E_x to the fibre E_y along the geodesic from x to y and then measuring the difference. Note that since parallel transport is an isometry, this is symmetric in x and y. Since J^s(E) is also a bundle associated to the tangent bundle, we then define
\[
\|\sigma\|_{C^{s,\mu}} = \|j^s \sigma\|_{C^{0,\mu}},
\]
where j^s σ is the jet prolongation from Proposition <ref>. If μ = 0, we just write C^s. This definition is taken from Joyce <cit.>. Note that if M is not compact, smooth forms need not have finite C^{s,μ} norm for any s and μ. As we will mostly be working with forms on compact manifolds and asymptotically translation invariant forms on asymptotically cylindrical manifolds, this will not be a major issue. The following easy proposition explains why C^s norms are convenient to work with. Let f: E → F be a smooth bundle map between fibre bundles over a manifold M that are subbundles of vector bundles associated to the tangent bundle. For each open subset U in M with compact closure, open subset V of E|_U with compact closure in E, constant K, and positive integer s, we have a constant C_{s,K,V} such that whenever the sections σ_1 and σ_2 of E|_U lie in V and have derivatives up to order s bounded by K,
\[
\|f(\sigma_1)|_U - f(\sigma_2)|_U\|_{C^s} \le C_{s,K,V}\, \|\sigma_1|_U - \sigma_2|_U\|_{C^s}.
\]
We now pass to a submanifold L of the Riemannian manifold (M, g). (L, g|_L) is itself Riemannian: write ∇^L for the induced Levi-Civita connection. It follows that we have natural C^{k,μ} norms on sections of bundles associated to the tangent bundle TL, just as in Definition <ref>. We will also need norms on sections of the normal bundle ν_L. The metric g defines a metric on this bundle: in order to define a C^k norm on its sections, we need a connection. The natural connection on ν_L is given by taking the normal part after applying the Levi-Civita connection on M. We again call this connection ∇^L. Combining it with the Levi-Civita connection ∇^L on T^*L yields connections on tensor products by the Leibniz rule, so we may immediately extend the C^s norms from Definition <ref> to normal vector fields. Similarly, we may use ∇^L to define a C^s norm on the second fundamental form II of L in M, as this is a section of T^*L ⊗ T^*L ⊗ ν_L. We now pass to the relation between norms on L and M. Suppose α is a differential form on M with \|α\|_{C^k} small. We would like to know that \|α|_L\|_{C^k} is also small. It can be shown that this holds locally given a local bound on the second fundamental form. More precisely, we have: Let L be an immersed submanifold of (ℝ^n, g), where the metric g need not be Euclidean. If the second fundamental form II has finite C^{k-1}(L) norm, then for each p there exists C_p such that
\[
\|\alpha|_L\|_{C^k(L)} \le C_p\, \|\alpha\|_{C^k(M)}
\]
for every p-form α on M; moreover, C_p depends only on p and the C^{k-1}(L) bound on II. This result is not immediately apparent in the literature, and proving it in full generality is somewhat complicated. The idea, however, is easy. Since everything is multilinear, it suffices to work with one-forms, and since we have a Riemannian metric, we can regard T^*L as a subbundle of T^*M|_L. The restriction map corresponds to orthogonal projection to this subbundle. The case k=0 follows immediately. For higher k, we know essentially by the Gauss–Weingarten formulae that derivatives on L are just the T^*L-components of derivatives on M.
Hence, for k=1 for instance, we obtain that the first derivative of α|_L is the T^*L-component of the first derivative of the T^*L-component of α. But the difference between this and the T^*L-component of the first derivative of α is the tangential part of the derivative of a normal part, and so can be controlled essentially by the second fundamental form. For k>1, there are simply more terms to deal with. Moreover, this also shows that the norms defined on forms on L by using the connection on M rather than ∇^L in the isomorphism of Proposition <ref> are Lipschitz equivalent to the standard C^k norms. The Lipschitz constant depends only on the C^{k-1} norm of the second fundamental form of L in M.
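Concretely, the k = 1 case just described amounts to the following pointwise identity (a consequence of the Gauss–Weingarten formulae, recorded here as an illustration; α = α^T + α^N is the splitting into tangential and normal parts along L, and II the second fundamental form): for X, Y tangent to L,
\[
\big(\nabla^L_X (\alpha^T)\big)(Y) = (\nabla_X \alpha)(Y) + \alpha^N\big(\mathrm{II}(X, Y)\big),
\]
so that |∇^L(α|_L)| ≤ |∇α| + |II| |α| pointwise, which is exactly the C^1 case of the bound above.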
§.§ Asymptotically cylindrical manifolds and their patching In this subsection, we briefly review the definition of asymptotically cylindrical manifolds and asymptotically translation invariant objects, appropriate norms, and patching maps for such manifolds and closed forms on them. This is well-known; for instance, the ideas are present in Kovalev <cit.> and appear more definitely in Nordström <cit.>. We also discuss the Laplace–Beltrami operator on the result of a patching in Theorem <ref>, and explain that it can, up to harmonic forms, be bounded below in terms of the length of the neck created by the patching. Finally, we extend Nordström's work in <cit.> to give a slightly more concrete statement on the gluing of harmonic forms in Proposition <ref>. We begin by defining an asymptotically cylindrical manifold. A Riemannian manifold (M, g) is said to be asymptotically cylindrical with rate δ>0 if there exists a compact manifold with boundary M^{cpt}, whose boundary is the compact manifold N, a metric g_N on N, and constants C_r satisfying the following. Firstly, M is the union of M^{cpt} and N × [0, ∞), where these two parts are identified by the obvious identification between N = ∂M^{cpt} and N × {0}. Secondly, we have the estimate
\[
|\nabla^r (g|_{N \times [0,\infty)} - g_N - dt^2)| < C_r e^{-\delta t}
\]
for every r = 0, 1, …, where t is the coordinate on [0, ∞) extended to a global smooth function on M, ∇ is the Levi-Civita connection induced by g, and |·| is the norm induced by g on the appropriate space of tensors. (M, g) is said to be asymptotically cylindrical if it is asymptotically cylindrical with rate δ for some δ>0. We refer to the cylindrical part N × [0, ∞) as the end(s) of M, and we write g̃ for the cylindrical metric g_N + dt^2; (M, g) is said to be (eventually) cylindrical if g = g̃ for t large enough. The analogous sections of vector bundles are said to be asymptotically translation invariant. Suppose that M is an asymptotically cylindrical manifold. Given a bundle E associated to the tangent bundle over M, a section α̃ of E|_N extends to a section of E|_{N × [0, ∞)} by extending parallel in t. A section α of E is then said to be asymptotically translation invariant (with rate 0<δ'<δ) if there is a section α̃ of E|_N such that (<ref>) holds (with δ') for |∇^r(α − α̃)| for t>T. In general, given an asymptotically translation invariant section α, we will write α̃ for its limit in this sense. If α is asymptotically translation invariant with α̃ = 0, we say α is exponentially decaying; this just means that (<ref>) holds for |∇^r α| itself. We will need norms adapted to asymptotically translation invariant forms. Specifically, we define a C^s_δ norm on a subset of asymptotically translation invariant sections α of a bundle E associated to the tangent bundle by setting
\[
\|\alpha\|_{C^s_\delta} = \|(1-\psi)\alpha + \psi e^{\delta t}(\alpha - \tilde\alpha)\|_{C^s} + \|\tilde\alpha\|_{C^s},
\]
where the cutoff function ψ(t) has ψ(t) = 1 for t>2, and ψ(t) = 0 for t<1. The topology induced by (<ref>) is called the extended weighted topology with weight δ. If we restrict to exponentially decaying forms (that is, the closed subset with α̃ = 0), we obtain an ordinary weighted topology. In the same way we have a C^{s,μ}_δ topology, and by taking the inverse limit we also have a C^∞_δ topology. See also the discussion of Sobolev extended weighted topologies in, for example, <cit.> and <cit.>. We now consider patching. We make the following definitions. Suppose that M_1 and M_2 are asymptotically cylindrical manifolds with ends, with corresponding cross-sections N_1 and N_2 and metrics g_1 and g_2 with limits g̃_1 and g̃_2. Suppose given an orientation-reversing diffeomorphism F: N_1 → N_2. Combining F with t ↦ 1−t induces an orientation-preserving map F: N_1 × (0, 1) → N_2 × (0, 1). (M_1, g_1) and (M_2, g_2) match if there exists F as above such that F^* g̃_2 = g̃_1. We may then fix T>1, a "gluing parameter", and let
\[
M_i \supset M_i^{(T+1)} := M_i^{\mathrm{cpt}} \cup N_i \times (0, T+1).
\]
Moreover, we may define additional metrics on M_i by ĝ_i = g_i + ψ_T(g̃_i − g_i), and restrict these to M_i^{(T+1)}, where ψ_T is a cutoff function with ψ_T(t) = 1 for t ≥ T and ψ_T(t) = 0 for t < T−1. Now F defines an orientation-preserving diffeomorphism between N_1 × (T, T+1) and N_2 × (T, T+1), so we may use it to join together M_1^{(T+1)} with M_2^{(T+1)} and obtain the closed manifold M^T. Note that a subset of M^T is parametrised by (−T−1/2, T+1/2) × N, with (−1/2, 1/2) × N corresponding to the identification region; we shall call this subset the neck of M^T. On this identification region, ĝ_1 and ĝ_2 are identified, and so we can combine them to form the metric g on M^T. We shall use the subsets M_i^{(T')} for varying T' in our analysis. Evidently, M_i^{(T')} is always a subset of M_i and defines a subset of M^T if T' ≤ T+1. With respect to the metrics g on M^T, we have a family of Riemannian manifolds with the same local geometry but global geometry becoming increasingly singular. We nevertheless have: Let (M^T, g) be a compact manifold obtained by approximately gluing two asymptotically cylindrical manifolds with ends M_1 and M_2 as in Definition <ref>. Let k be a positive integer and μ ∈ (0, 1). If α is orthogonal to harmonic forms on M^T, then
\[
\|d\alpha\|_{C^{k,\mu}} + \|d^*\alpha\|_{C^{k,\mu}} \ge C T^{-l}\, \|\alpha\|_{C^{k+1,\mu}},
\]
for some positive constants C and l, which may both depend on the geometry of M_1 and M_2; C may also depend on the regularity k+μ. Moreover, the same holds if g is only uniformly close (with all derivatives) to the glued metric. We will not give a proof of this; fundamentally it follows by combining standard elliptic regularity results with the L^2 lower bound on the exterior derivative given by Chanillo–Treves in <cit.> and its extension to d^* in <cit.>. A similar argument using <cit.> gives an extension of Theorem <ref> to the Laplacian.
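To see that some polynomial loss in T is genuinely necessary, consider the following caricature (a standard separation-of-variables observation, not needed for the proof): on a neck (−T, T) × N with product metric, the function u(x, t) = sin(πt/(2T)) is odd in t, hence L²-orthogonal to the constants (the harmonic 0-forms there), while
\[
du = \frac{\pi}{2T} \cos\!\Big(\frac{\pi t}{2T}\Big)\, dt, \qquad \frac{\|du\|_{L^2}}{\|u\|_{L^2}} = \frac{\pi}{2T},
\]
so already the lowest nonzero eigenvalue of the Laplacian on functions degrades like T^{-2}, consistent with the factor T^{-l} above.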
We also require a method of patching closed differential forms, and make [cf. Nordström <cit.>]: Let (M_1, g_1) and (M_2, g_2) be matching asymptotically cylindrical manifolds as in Definition <ref>. Suppose that α_1 and α_2 are asymptotically translation invariant closed p-forms on M_1 and M_2 respectively. The diffeomorphism F (extended as in Definition <ref>) induces a pullback map F^* from the limiting bundle ⋀^p T^*M_2|_{N_2} to ⋀^p T^*M_1|_{N_1}. α_1 and α_2 are said to match if F^*α̃_2 = α̃_1. Because we have convergence with all derivatives, the limits α̃_i are closed on N_i, and hence, extended t-independently, closed on the end of M_i. Since α_i − α̃_i is then a decaying closed form, as in for instance Kovalev <cit.> or Melrose <cit.> we may choose η_i with α_i − α̃_i = dη_i on the end. Let ψ_T be a cutoff function as in Definition <ref> and define
\[
\hat\alpha_i = \alpha_i - d(\psi_T \eta_i)
\]
on the end, and α̂_i = α_i off the end. Note that the α̂_i are closed. On the overlap of M_1^{(T+1)} and M_2^{(T+1)}, α̂_i = α̃_i, and so the two forms are identified by F. Thus they define a closed form γ_T(α_1, α_2) on M^T.
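For instance, one can check directly from the definition that the cutoff interpolates between α_i and its limit (a two-line verification): for t > T we have ψ_T ≡ 1 and dψ_T = 0, so
\[
\hat\alpha_i = \alpha_i - d(\psi_T \eta_i) = \alpha_i - d\eta_i = \tilde\alpha_i,
\]
while for t < T−1 we have ψ_T ≡ 0 and α̂_i = α_i; all the interpolation happens on N_i × [T−1, T].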
Finally, we note the following variant of the result of Nordström <cit.>, giving a more quantified estimate. Let M_1 and M_2 be matching asymptotically cylindrical Riemannian manifolds. Suppose we have a metric g on M^T and constants ϵ and C_k such that
\[
\|g - g^T\|_{C^k} \le C_k e^{-\epsilon T},
\]
where g^T is the metric on M^T given by Definition <ref>, and this norm is taken with respect to either metric. Given a pair of harmonic forms α_1 and α_2 on M_1 and M_2, matching as in Definition <ref>, they are closed, and so we may define γ_T(α_1, α_2). We may then obtain a harmonic form Γ_T(α_1, α_2) on M^T by taking the harmonic part. For T sufficiently large, Γ_T is an isomorphism. Moreover, we have an estimate
\[
\|\Gamma_T(\alpha_1, \alpha_2)\|_{C^k} \ge C\big(\|\alpha_1\|_{C^k_\delta} + \|\alpha_2\|_{C^k_\delta}\big),
\]
where C is independent of T and the norms on the right hand side are the extended weighted norms of Definition <ref>. Nordström shows in <cit.> that this gluing map is an isomorphism except for a finite exceptional set of values of T. In particular, the map is a linear map between vector spaces of the same dimension; this can also be proved a little more directly. That the gluing map is an isomorphism follows from <cit.>, using <cit.> to reduce to the simpler one-form version, so it will suffice to demonstrate (<ref>), which shows the map is injective. We show first that (<ref>) holds when we replace Γ_T with γ_T. Since γ_T(α_1, α_2) is just given by identifying the cutoffs α̂_1 and α̂_2, it suffices to show that there exists C independent of T such that
\[
\|\hat\alpha_i\|_{C^k(M^T)} \ge C\, \|\alpha_i\|_{C^k_\delta}.
\]
We argue using the restriction of α̂_i to the neighbourhood M_i^{(2)} of M_i^{cpt}, and the limit α̃_i of α̂_i. Note that this limit has two parts, a part with and a part without dt. If T>3, then α̂_i|_{M_i^{(2)}} = α_i|_{M_i^{(2)}}; for any T, α̃_i is again the limit of α̂_i. We thus obtain for every T>3
\[
\|\alpha_i|_{M_i^{(2)}}\|_{C^k(M_i^{(2)})} + \|\tilde\alpha_i\|_{C^k(N)} \le 2\, \|\hat\alpha_i\|_{C^k(M^T)}.
\]
Hence, it suffices to prove that there is a constant C such that
\[
\|\alpha_i\|_{C^k_\delta} \le C\big(\|\alpha_i|_{M_i^{(2)}}\|_{C^k(M_i^{(2)})} + \|\tilde\alpha_i\|_{C^k(N)}\big).
\]
Since the space of harmonic one-forms is finite-dimensional and the map α_i ↦ (α_i|_{M_i^{(2)}}, α̃_i) is linear, such a constant must exist provided that this map is injective. Suppose then that α_i|_{M_i^{(2)}} = 0 and α̃_i = 0. As in Definition <ref>, we see that, working over (0, ∞) × N, α_i = dη for some decaying η. Moreover, η is closed on (0, 2) × N, and as α_i is zero there, we may choose η to be translation invariant there as well. Thus η|_{(0, 2) × N} defines a closed translation invariant form on the whole end. Subtracting this translation invariant form from η and extending over M^{cpt} by zero, we find that α_i is the differential of an asymptotically translation invariant form on M_i. In particular, α_i must define the trivial class in H^1(M_i). It follows from Nordström <cit.> that α_i is zero. This proves that (<ref>) holds when we replace Γ_T by γ_T. It only remains to prove that Γ_T(α_1, α_2) can be bounded below in terms of γ_T(α_1, α_2); it will suffice to show that the part of γ_T(α_1, α_2) orthogonal to harmonic forms satisfies an estimate like (<ref>). We will use Theorem <ref>: that in our setting, a form orthogonal to the harmonic forms satisfies (<ref>). It follows that if d^*γ_T(α_1, α_2) satisfies an estimate like (<ref>), so too does the part orthogonal to harmonic forms. We consider d^*α̂_1. We know that this is zero for t < T−1, as there α̂_1 coincides with α_1 and the metric g on M^T coincides with g_1. It remains to consider the neck. Clearly, ψ_T and its derivatives are bounded above independently of T; this implies that the glued metric g^T and α̂_1 are exponentially close to g̃ and α̃_1 here. We know that α̃_1 is g̃-harmonic, and so we see that d^*α̂_1 satisfies (<ref>). Hence Γ_T(α_1, α_2) − γ_T(α_1, α_2) satisfies (<ref>) and the result follows. §.§ Asymptotically cylindrical submanifolds and their patching In this subsection, we describe what it will mean for a submanifold to be asymptotically cylindrical (Definition <ref>), following Joyce–Salur <cit.> and Salur–Todd <cit.>. We show that an asymptotically cylindrical submanifold of an asymptotically cylindrical manifold is itself an asymptotically cylindrical manifold (globalising a variant of Theorem <ref>) when equipped with the restricted metric. We then extend the patching or approximate gluing of subsection <ref> and give such a patching of matching submanifolds, and explain how this relates to the other patchings in that subsection. For instance, in cohomology, the patching map of forms commutes with the restriction map to a submanifold. Let (M, g) be an asymptotically cylindrical Riemannian manifold, with cross-section N. Let L be a submanifold of M. L is called asymptotically cylindrical (with cross-section K) if there exists R ∈ ℝ and an exponentially decaying normal vector field v on K × (R, ∞) (in the sense that its inclusion and derivatives are exponentially decaying in the metric on M) such that L is the union of a compact part with boundary exp_v(K × {R}) and exp_v(K × (R, ∞)). If v ≡ 0 far enough along K × (R, ∞), we shall call L cylindrical. For notational clarity, we shall write the coordinate on (R, ∞) as r. We note that K need not be connected, even if L is. We note the following: Suppose that (M, g) is an asymptotically cylindrical manifold with cross-section N, and that L is an asymptotically cylindrical submanifold in the sense of Definition <ref>. Then L, with its restricted metric, is itself an asymptotically cylindrical manifold. It suffices to work on each end. Since v is exponentially decaying, exp_v^* g also defines an asymptotically cylindrical metric, so it suffices to suppose that L is cylindrical. On the other hand, by an analogue of Theorem <ref> and the fact that the second fundamental form depends continuously on the inclusion and metric, which both converge as t → ∞, exponentially decaying tensors remain exponentially decaying on restriction, so it suffices to suppose the metric on M is cylindrical. The result follows. We may thus define extended weighted spaces of forms on asymptotically cylindrical submanifolds. Since a similar argument applies to the metric on the bundle TM|_L, we may then define spaces of exponentially decaying normal vector fields on asymptotically cylindrical submanifolds just as in subsection <ref>.
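A toy example may help fix the definition (this is only an illustration; a genuine example would also have a compact piece): on the end N × (0, ∞) with N = S^1 and the product metric, let K = {θ_0} be a single point and v = e^{-r} ∂/∂θ, so that
\[
\exp_v\big(K \times (0, \infty)\big) = \{(\theta_0 + e^{-r}, r) : r > 0\},
\]
an asymptotically cylindrical curve with cross-section K. By contrast, taking v = (1+r)^{-1} ∂/∂θ produces a curve converging to the same cylinder, but too slowly to satisfy the definition.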
Note that to define extended weighted spaces we need a notion of translation invariant normal vector fields, which we do not yet have. See Definition <ref> below. We now extend the notion of approximate gluing to submanifolds. Let M_1 and M_2 be matching asymptotically cylindrical manifolds in the sense of Definition <ref>, so that we have an orientation-reversing F: N_1 → N_2. Let L_1 and L_2 be asymptotically cylindrical submanifolds of M_1 and M_2; by definition, they have limits K_1 × (R_1, ∞) and K_2 × (R_2, ∞). Say that they match if F(K_1) = K_2. Given such a pair, by definition
\[
L_i = L_i^{\mathrm{cpt}} \cup \exp_v\big(K_i \times (R_i, \infty)\big).
\]
Consider the cutoff function φ_T, with φ_T(t) = 0 for t ≥ T and φ_T(t) = 1 for t < T−1. Note that φ_T ∘ ι̃ is a well-defined function on K_i × (R_i, ∞) with (φ_T ∘ ι̃)(k, r) = 1 for r < T−1, as r = t ∘ ι̃ for the cylindrical inclusion ι̃. Hence for T > R_i + 1,
\[
\hat L_i = L_i^{\mathrm{cpt}} \cup \exp_{(\varphi_T \circ \tilde\iota)\, v}\big(K_i \times (R_i, \infty)\big)
\]
is a submanifold. We want to be able to identify L̂_1 and L̂_2 over the identified regions N_i × (T, T+1) of M_i; that is, we want to show that the sets L̂_i ∩ (N_i × (T, T+1)) are identified by F. We show that L̂_i ∩ (N_i × (T, T+1)) = K_i × (T, T+1), which is clearly so identified. For r > T, we have (φ_T ∘ ι̃)(k, r) = 0, so that ι̂(k, r) = ι̃(k, r), and so t ∘ ι̂ = r there. In particular, t ∘ ι̂(k, T) = T for all k ∈ K_i. Furthermore, note that by choosing T sufficiently large, ι̂_* and ι̃_* can be made as close as we like. Since ι̃_* ∂/∂r = ∂/∂t, it follows that dt(ι̂_* ∂/∂r) > 0 throughout L̂_i. In particular, t ∘ ι̂(k, r) is always a strictly increasing function of r for fixed k. It then follows that for r < T, t ∘ ι̂(k, r) < T, and for r > T+1, t ∘ ι̂(k, r) > T+1, so that L̂_i ∩ (N_i × (T, T+1)) = K_i × (T, T+1). Since the L_i match, these are identified by F and we can form the gluing of L̂_1 and L̂_2, a submanifold L^T of M^T. We may also consider L_1, L_2 and L^T constructed in Definition <ref> as ambient manifolds. L_1 and L_2 match in the sense of Definition <ref> and L^T is their gluing, though not necessarily with parameter T. Suppose that a matching pair of closed forms on L_1 and L_2 is induced from a matching pair of closed forms α_1 and α_2 on M_1 and M_2; that is, the pair of forms is α_1|_{L_1} and α_2|_{L_2}. Then both γ_T(α_1, α_2)|_{L^T} and γ_T(α_1|_{L_1}, α_2|_{L_2}) define forms on L^T. These forms need not be equal in general. Nevertheless, slightly weaker results will hold. To state these, we shall assume that the ends of M_i and L_i have the same parametrisation, so that γ_T is the gluing map on L_1 and L_2 corresponding to their gluing as submanifolds with gluing parameter T, and similarly for the cutoff functions; by reparametrising M_i this may be assumed without loss of generality. Let M_1 and M_2 be a matching pair of asymptotically cylindrical manifolds, and let L_1 and L_2 be matching asymptotically cylindrical submanifolds. Let L^T be the glued submanifold of Definition <ref>, and α_1 and α_2 be a matching pair of asymptotically translation invariant closed forms on M_1 and M_2. Then as cohomology classes
\[
[\gamma_T(\alpha_1|_{L_1}, \alpha_2|_{L_2})] = [\gamma_T(\alpha_1, \alpha_2)]|_{L^T},
\]
and there exist constants C_k and ϵ such that for all k
\[
\|\gamma_T(\alpha_1|_{L_1}, \alpha_2|_{L_2}) - \gamma_T(\alpha_1, \alpha_2)|_{L^T}\|_{C^k} \le C_k e^{-\epsilon T}.
\]
This is easy to prove: indeed, if L_1 and L_2 are cylindrical and the cutoff functions are chosen appropriately, we get equality, and it is easy to see that changing cutoff functions and passing to the asymptotically cylindrical case introduces terms which are exact and exponentially decaying in T.
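To spell out the cylindrical case (under the matching-parametrisation assumption above): on a cylindrical end the inclusion ι of L_i satisfies t ∘ ι = r, so ψ_T ∘ ι = ψ_T(r), and since pullback commutes with d,
\[
\big(\alpha_i - d(\psi_T \eta_i)\big)\big|_{L_i} = \alpha_i|_{L_i} - d\big(\psi_T\, (\eta_i|_{L_i})\big);
\]
that is, cutting off upstairs and then restricting agrees with restricting and then cutting off, provided η_i|_{L_i} is used as the primitive on L_i. Identifying over the neck then gives γ_T(α_1, α_2)|_{L^T} = γ_T(α_1|_{L_1}, α_2|_{L_2}) exactly in this case.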
A similar argument proves: Let M^T and L^T be as in Definition <ref>. Consider the metric given on L^T by direct gluing of the metrics on L_1 and L_2, cutting them off and identifying them over an appropriate region; consider also the metric given on L^T by restricting the metric on M^T defined similarly. The difference of these two metrics satisfies an estimate like (<ref>), with respect to either of them. We also note at this point that the second fundamental form of L^T in M^T can be bounded independently of T; this follows by looking at the compact parts and the neck separately, and noting that things converge either to the behaviour of L_i in M_i or to the cylindrical behaviour. § SPECIAL LAGRANGIANS In this section, we first define special Lagrangian submanifolds and make some elementary remarks on the structure of the ends of asymptotically cylindrical such submanifolds. Then, in subsection <ref>, we show that there are asymptotically cylindrical special Lagrangian submanifolds in some asymptotically cylindrical Calabi–Yau manifolds. Finally, in subsection <ref> we summarise the deformation theory due to McLean <cit.> in the compact case and extended by Salur and Todd <cit.> to the asymptotically cylindrical case (where the limit may alter). We give fuller details in the asymptotically cylindrical case, carefully stating the result (Theorem <ref>) that the deformations form a manifold with specified dimension, as the argument in <cit.> is somewhat unclear and the dimension found in <cit.> is not quite correct. §.§ Definitions In this subsection we make the necessary definitions of Calabi–Yau structures and their generalisations, SU(n) structures, and define special Lagrangian submanifolds. We begin with a Calabi–Yau structure on the ambient manifold. This definition is adapted from Hitchin <cit.>. Let M be a 2n-dimensional manifold. A Calabi–Yau structure on M is (induced by) a pair of closed forms (Ω, ω), where Ω is a smooth complex n-form on M and ω is a smooth real 2-form on M, such that at every point p of M:
* Ω_p = β_1 ∧ ⋯ ∧ β_n for some β_i ∈ T^*_p M ⊗ ℂ,
* Ω_p ∧ Ω̄_p ≠ 0,
* \(\Omega_p \wedge \bar\Omega_p = \frac{(-2)^n i^{n^2}}{n!}\, \omega_p^n\),
* ω_p ∧ Ω_p = 0,
* ω_p(v, J_p v) > 0 for every nonzero v ∈ T_pM, where J is the unique complex structure such that Ω is a holomorphic volume form.
We shall call (Ω_p, ω_p) satisfying (i)–(v) above an SU(n) structure on the vector space T_pM. A Calabi–Yau structure on a manifold induces a complex structure and a Ricci-flat Kähler metric; as indicated earlier, we shall assume our manifolds with Calabi–Yau structures are connected. Apart from the condition that the forms are closed, a Calabi–Yau structure is a pair of forms that satisfy certain pointwise conditions. Therefore, it is a section of a subbundle of the bundle of forms. We make: Let 𝒮𝒰(M) be the subset of the bundle (⋀^n T^*M ⊗_ℝ ℂ) ⊕ ⋀^2 T^*M given by
\[
\bigcup_{p \in M} \Big\{(\Omega_p, \omega_p) \in \big(\textstyle\bigwedge^n T_p^*M \otimes_{\mathbb{R}} \mathbb{C}\big) \oplus \textstyle\bigwedge^2 T_p^*M : (\Omega_p, \omega_p) \text{ is an } SU(n) \text{ structure on } T_pM\Big\}.
\]
A section of 𝒮𝒰(M) will be called an SU(n) structure on M. Similarly, if L is a submanifold of M, a section of 𝒮𝒰(M)|_L will be called an SU(n) structure around L. Of course, a Calabi–Yau structure on M is in particular an SU(n) structure on M, and any SU(n) structure on M or on a tubular neighbourhood of L in M restricts to an SU(n) structure around L.
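The model example, included only for orientation, is ℂ^n with Ω_0 = dz_1 ∧ ⋯ ∧ dz_n and ω_0 = (i/2) ∑_j dz_j ∧ dz̄_j. For n = 1, writing z = x + iy, condition (iii) can be checked directly:
\[
\Omega_0 \wedge \bar\Omega_0 = dz \wedge d\bar z = -2i\, dx \wedge dy = \frac{(-2)^1 i^{1^2}}{1!}\, \omega_0,
\]
since ω_0 = (i/2)\, dz ∧ dz̄ = dx ∧ dy; the higher-dimensional cases follow by taking products.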
We now have to combine Definition <ref> with Definition <ref> to define an asymptotically cylindrical Calabi–Yau structure. In so doing, we make a further restriction. A Calabi–Yau structure (Ω, ω) on a manifold M with an end is said to be asymptotically cylindrical if the cross-section N of M is of the form S^1 × X, the induced metric g is asymptotically cylindrical and, with respect to g, Ω and ω have limits (dt + i dθ) ∧ Ω_X and dt ∧ dθ + ω_X, where (Ω_X, ω_X) is a Calabi–Yau structure on X. In their study of asymptotically cylindrical Ricci-flat Kähler manifolds, Haskins–Hein–Nordström <cit.> show that, for a somewhat more general definition of asymptotically cylindrical Calabi–Yau manifolds, and provided n>3, simply connected irreducible asymptotically cylindrical Calabi–Yau manifolds must have N = (S^1 × X)/Φ for a certain isometry Φ of finite order (which may be greater than one: see <cit.>). We restrict to the case where Φ is simply the identity, for simplicity; similar arguments should work in the more general setting. If n=2, or if for instance there is a torus factor that can be split off so that a factor can be taken with n=2, then there are further examples with worse limit behaviour; again, we are ignoring these. This restriction follows Salur and Todd <cit.>, as we will use the deformation theory of that paper. Note that it follows from the Cheeger–Gromoll splitting theorem that if M is connected and N (or equivalently X) is not, then the manifold reduces to a product cylinder, and so we may freely assume N and X are connected. We now define special Lagrangian submanifolds and discuss asymptotically cylindrical special Lagrangian submanifolds. Let M be a Calabi–Yau n-fold and L an n-submanifold. L is special Lagrangian if and only if Im Ω|_L = 0 and ω|_L = 0. We now turn to the idea of an asymptotically cylindrical special Lagrangian submanifold of an asymptotically cylindrical Calabi–Yau. We make the obvious definition: an asymptotically cylindrical special Lagrangian submanifold of an asymptotically cylindrical Calabi–Yau manifold is a special Lagrangian submanifold in the sense of Definition <ref> which is asymptotically cylindrical in the sense of Definition <ref>. Note that although the end N × (0, ∞) = X × S^1 × (0, ∞) of the Calabi–Yau manifold M must be connected, asymptotically cylindrical special Lagrangians can have multiple ends contained in this one end. The following result gives the structure of the cross-section K of L. It is straightforward to prove by considering the components of the restrictions of each form. Let M be an asymptotically cylindrical Calabi–Yau manifold with cross-section N = X × S^1 as in Definition <ref>, and L be an asymptotically cylindrical special Lagrangian submanifold with cross-section K, so that L = exp_v(K × (R, ∞)) far enough along the end, where v decays exponentially with all derivatives. For each connected component K' of K, there is a special Lagrangian Y in X, and a point p ∈ S^1, such that K' = Y × {p} ⊂ X × S^1 = N. Definition <ref> and Proposition <ref> together form a version of the definition of asymptotically cylindrical special Lagrangian found in <cit.>. Our definition, which appears simpler, makes tacit the identifications that Salur and Todd give concretely.
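For orientation, the cylindrical model can be checked directly (a routine computation in the notation of Definition <ref>): on ℝ × S^1 × X with ω̃ = dt ∧ dθ + ω_X and Ω̃ = (dt + i dθ) ∧ Ω_X, the submanifold L̃ = ℝ × {p} × Y satisfies dθ|_{L̃} = 0, and hence
\[
\tilde\omega|_{\tilde L} = \omega_X|_Y, \qquad \operatorname{Im}\tilde\Omega|_{\tilde L} = \big(d\theta \wedge \operatorname{Re}\Omega_X + dt \wedge \operatorname{Im}\Omega_X\big)\big|_{\tilde L} = dt \wedge \big(\operatorname{Im}\Omega_X|_Y\big),
\]
so L̃ is special Lagrangian exactly when Y is special Lagrangian in X, consistent with Proposition <ref>.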
§.§ Examples We now give some examples of asymptotically cylindrical special Lagrangian submanifolds in asymptotically cylindrical Calabi–Yau threefolds to which our results will apply. We show (Proposition <ref>) that for an asymptotically cylindrical Calabi–Yau threefold obtained as the complement of an anticanonical divisor in a Kähler manifold, an antiholomorphic involutive isometry acting on the divisor will yield an antiholomorphic involutive isometry of the asymptotically cylindrical manifold, and hence essentially a special Lagrangian submanifold. To explain the construction in detail, we first outline the construction of asymptotically cylindrical Calabi–Yau threefolds. We quote the following from Haskins–Hein–Nordström <cit.>. Let M̅ be a smooth compact Kähler manifold of complex dimension n ≥ 2, and D be a smooth anticanonical divisor in it which has holomorphically trivial normal bundle. Then for each Kähler class [ω] on M̅, M = M̅ ∖ D admits an asymptotically cylindrical Ricci-flat Kähler metric g, where asymptotically cylindrical means with respect to a diffeomorphism around D,
\[
U \setminus D \to D \times \{0 < |z| < 1\} \cong D \times S^1 \times (0, \infty),
\]
with the first map constructed using the trivial normal bundle of D to satisfy (<ref>) below, and the second given by writing z = e^{-t+iθ}. The Kähler form is in the cohomology class [ω|_M], and the limit of the metric is dt^2 + dθ^2 + g_D, where g_D is the Calabi–Yau metric on D in the Kähler class [ω|_D]. Moreover, the metric is unique subject to the diffeomorphism (<ref>) of the tubular neighbourhood to D × S^1 × (0, ∞) and these properties. We need such an M̅. There are many possible options; see, for instance, the discussion by Haskins–Hein–Nordström <cit.>. The simplest example, however, is to take M̅ to be complex projective space ℂP^n. Anticanonical divisors D are then the zero sets of homogeneous polynomials of degree n+1. Provided none of the zeros of such a polynomial are also zeros of its first partial derivatives, the zero set D is smooth by the implicit function theorem. D might nevertheless have nontrivial self-intersection. We thus blow up the self-intersection as in Kovalev <cit.>, and as in that paper the resulting submanifold has trivial normal bundle. We can improve the uniqueness result of Theorem <ref> slightly, and this will be useful for constructing special Lagrangian submanifolds. (<ref>) requires a diffeomorphism Φ between Δ × D and an appropriate neighbourhood of D in M̅. There may not be such a diffeomorphism that is biholomorphic, in general. Haskins–Hein–Nordström prove, however, that provided the diffeomorphism satisfies <cit.>, Theorem <ref> follows. We shall show that the constructed metric is in fact independent of the choice of diffeomorphism here. By the uniqueness part of Theorem <ref>, it suffices to show that if Φ_1 and Φ_2 are two such diffeomorphisms, the metrics we construct with each are asymptotically cylindrical with respect to the structure given by the other, since we are by hypothesis using the same Kähler class. Let Φ_1 and Φ_2 be diffeomorphisms from Δ × D to their images in M̅, satisfying
\[
\Phi_i(0, x) = x \quad \text{for all } x \in D, \qquad \Phi_i^* J - J_0 = 0 \quad \text{along } \{0\} \times D, \qquad \Phi_i^* J - J_0 = 0 \quad \text{on all of } T\Delta,
\]
where J and J_0 are the complex structures on M̅ and Δ × D respectively. (These are the conditions in <cit.>.) Given a fixed Kähler class on M̅, let g_1 and g_2 be the Calabi–Yau metrics on M̅ ∖ D constructed by Theorem <ref> with these diffeomorphisms, so that it is clear that g_i is asymptotically cylindrical with respect to the diffeomorphism Ψ_i given by composing Φ_i with z = e^{-t+iθ}. Then g_1 is also asymptotically cylindrical with respect to Ψ_2, and vice versa; the limits are the same.
In particular, g_1 = g_2. The construction in <cit.> is that if Φ_i is such a diffeomorphism, then an asymptotically cylindrical Kähler metric near D can be chosen with Kähler form, near D,
\[
\frac{i}{2}\,\frac{dz_i \wedge d\bar z_i}{|z_i|^2} + \omega,
\]
where ω is the Kähler form on D corresponding to the Calabi–Yau metric in the restriction of the Kähler class, and z_i is the function M̅ → Δ given by composing Φ_i^{-1} with the projection. Because we are using the same Kähler class in both cases, ω does not depend on i. Much of the work in <cit.> is simply in showing that this can be cut off under the differentials without affecting the asymptotics. For instance, on Δ × D, (<ref>) can be expressed as (i/2) ∂∂̄ (log|z|)^2 + ω; the conditions in, and arising from, (<ref>) are primarily to ensure that this still has the same asymptotics when ∂ and ∂̄ are given by the complex structure on M̅. Using the same ideas, we shall show that
\[
\frac{dz_1 \wedge d\bar z_1}{|z_1|^2} = \frac{dz_2 \wedge d\bar z_2}{|z_2|^2} + \text{decaying terms},
\]
where the decaying terms become O(e^{-t}) under the identification. This implies that g_1 has the same limit as g_2. To do this, we shall primarily work locally in D. We know by <cit.> that for any local holomorphic defining function w for D in M̅, we may write locally
\[
f_1 z_1 + z_1^2 h_1 = w = f_2 z_2 + z_2^2 h_2,
\]
where f_1 and f_2 are nonzero holomorphic functions on an open subset of D, and h_1 and h_2 are smooth functions on a neighbourhood of this open subset in M̅. By rearranging, and expanding using the Leibniz rule, it is easy to see that
\[
\frac{dz_1}{z_1} = \frac{dz_2}{z_2} + \frac{f_1}{f_2}\, d\Big(\frac{f_2}{f_1}\Big) + \text{decaying terms}.
\]
It remains to show that f_1/f_2 is constant; then the middle term vanishes and (<ref>) holds. Since f_1 and f_2 are nonzero holomorphic functions, f_1/f_2 is holomorphic. We claim that this holomorphic function is independent of the local defining function w. Then, by taking a cover of D, f_1/f_2 extends to a holomorphic function on the compact complex manifold D and so is a constant. To do this, we note that f_i = dw(∂/∂z_i). Let w' be another local holomorphic defining function, and let p ∈ D be in the relevant open subset. (dw)_p and (dw')_p both lie in (T^{1,0})^*_p M̅, and moreover, as w and w' are defining functions, they both vanish on T^{1,0}_p D. It follows that (dw)_p = a (dw')_p for some a, and a cancels in the quotient f_1/f_2.
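Explicitly, the elementary computation behind the last displayed formula is the following (a sketch, with O(z_2) denoting the terms that become O(e^{-t}) under the identification): from f_1 z_1 + z_1^2 h_1 = f_2 z_2 + z_2^2 h_2 we get z_1 = (f_2/f_1) z_2 (1 + O(z_2)) near D, and logarithmic differentiation gives
\[
\frac{dz_1}{z_1} = \frac{dz_2}{z_2} + d \log\frac{f_2}{f_1} + O(z_2) = \frac{dz_2}{z_2} + \frac{f_1}{f_2}\, d\Big(\frac{f_2}{f_1}\Big) + \text{decaying terms}.
\]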
In particular, we may obtain the following. Suppose that M̅_1 and M̅_2 are smooth compact Kähler manifolds of complex dimension n ≥ 2. Suppose D_i is a smooth anticanonical divisor in M̅_i, for each i, with holomorphically trivial normal bundle. Suppose that f: M̅_1 → M̅_2 is a biholomorphism with f(D_1) = D_2. Suppose [ω̅_2] is a Kähler class on M̅_2, so that [ω̅_1] = f^*[ω̅_2] is a Kähler class on M̅_1. Theorem <ref> constructs asymptotically cylindrical metrics on M̅_1 ∖ D_1 and M̅_2 ∖ D_2. The map f induces an isometry between these. This follows from Proposition <ref> by composing f with the required diffeomorphism and noting that it still satisfies (<ref>). Using Corollary <ref>, we will now explain a construction of asymptotically cylindrical special Lagrangian submanifolds that is fairly general in a subset of the asymptotically cylindrical Calabi–Yau manifolds of Proposition <ref>; we will then specialise and give a concrete example. To do this, we will use antiholomorphic involutive isometries, as mentioned, for instance, by Joyce–Salur <cit.>. We make two similar definitions, one of which applies for a general complex manifold such as M̅, and one of which is specialised for the Calabi–Yau case. Let M be a Kähler manifold with complex structure J and Kähler form ω. A diffeomorphism σ of M is called an antiholomorphic involutive isometry if
\[
\sigma^2 = \mathbb{I}, \qquad \sigma^* J = -J, \qquad \sigma^* \omega = -\omega.
\]
If M is in fact a Calabi–Yau manifold with Calabi–Yau structure (Ω, ω), the antiholomorphic involutive isometry σ has fixed phase if also
\[
\sigma^* \Omega = \bar\Omega.
\]
Note that any antiholomorphic involutive isometry σ of a Calabi–Yau manifold must satisfy σ^*Ω = e^{iα} Ω̄ for some α. It follows that σ is an antiholomorphic involutive isometry with fixed phase for (e^{-iα/2} Ω, ω). Thus, for our purposes it is enough to find antiholomorphic involutive isometries and then adjust the Calabi–Yau structure. It is easy to see: The fixed point set of an antiholomorphic involutive isometry with fixed phase is a special Lagrangian submanifold.
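The model case, included for illustration, is ℂ^n with the flat structure Ω_0 = dz_1 ∧ ⋯ ∧ dz_n, ω_0 = (i/2) ∑_j dz_j ∧ dz̄_j and σ(z) = z̄, for which σ^*(dz_j) = dz̄_j and so
\[
\sigma^* \omega_0 = \frac{i}{2} \sum_j d\bar z_j \wedge dz_j = -\omega_0, \qquad \sigma^* \Omega_0 = d\bar z_1 \wedge \cdots \wedge d\bar z_n = \bar\Omega_0;
\]
thus σ is an antiholomorphic involutive isometry with fixed phase, and its fixed point set ℝ^n is the model special Lagrangian.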
Now for M an asymptotically cylindrical Calabi–Yau, we want to find an antiholomorphic involutive isometry σ (with fixed phase) such that this special Lagrangian is asymptotically cylindrical. To do this, we induce σ from our construction of M. Specifically, we have: As in Theorem <ref>, let M̅ be a smooth compact Kähler manifold of complex dimension n ≥ 2 and D a smooth anticanonical divisor with holomorphically trivial normal bundle. Let σ be an antiholomorphic involutive isometry of M̅. Suppose moreover that σ acts on D, and D contains some fixed points of σ. By Theorem <ref>, M = M̅ ∖ D admits an asymptotically cylindrical Ricci-flat Kähler metric, in the restriction of the corresponding Kähler class. Moreover, the restriction of σ to M defines an antiholomorphic involutive isometry of M which is asymptotically cylindrical. That is, there exist a diffeomorphism σ̃ of N and a parameter l ∈ ℝ such that
\[
\sigma(n, t) \to (\tilde\sigma(n), t + l)
\]
exponentially, meaning that on restriction to N × (T, ∞) for some large T, we have σ = exp_V ∘ ((n, t) ↦ (σ̃(n), t+l)) for some vector field V on M decaying exponentially with all derivatives. The fixed point set of σ is an asymptotically cylindrical submanifold in the sense of Definition <ref>. Since σ acts on D, it also acts on M. Hence the restriction of σ to M is well-defined and squares to the identity. Now the complex structure on M is just the restriction of the complex structure on M̅. Hence, we immediately have σ^* J = −J. It remains to show that σ induces an isometry. To do this, we apply Corollary <ref>: σ is precisely a biholomorphism between M̅ and its complex conjugate, preserving the anticanonical divisor. Hence, Corollary <ref> gives that it induces an isometry. Now <cit.> implies that σ is an asymptotically cylindrical diffeomorphism, as an isometry of asymptotically cylindrical metrics. It only remains to show that the fixed point set, which we know by Lemma <ref> is a submanifold, is in fact asymptotically cylindrical. To do this, we use that σ is an involution again. We start by noting that since σ is an involution, we must have l = 0. By abuse of notation, we shall also write σ̃ for the induced diffeomorphism on (T, ∞) × N. We now show that the fixed points of σ, far enough along the end, are the image of the fixed points of σ̃ under exp_{V/2}. The fixed points of σ̃ are certainly a cylindrical submanifold. Since V is exponentially decaying, and the map from a vector field along a submanifold to the normal vector field with the same image preserves exponential decay (this follows by arguments similar to Proposition <ref> below), this shows that the fixed points of σ are an asymptotically cylindrical submanifold, as required. Given p with σ̃(p) = p, we must have σ(p) = exp_V(p). For t large enough, uniformly in the cross-section, there is a unique minimising geodesic from p to exp_V(p). Since σ is an involution, σ(exp_V(p)) = p, and so this geodesic is reversed by the isometry σ. Hence its midpoint exp_{V/2}(p) is fixed by σ. This shows that the fixed points of σ contain this image (at least far enough along the end); the reverse argument is only slightly more complicated. This shows that the fixed points of σ are an asymptotically cylindrical submanifold with the same limit as the fixed points of σ̃ (in other words, the limit is the fixed point set of σ̃ in N). It remains to prove that σ̃ fixes some points of N = D × S^1, since otherwise all we have done is exhibit a closed special Lagrangian submanifold. We shall consider σ̃(x, θ). By <cit.>, this can be given by the limit approached by the image under σ of the unique geodesic half-line approaching (x, θ). Any curve approaching (x, θ) must, passing to a curve in M̅, approach (x, 0). Hence the image approaches (σ|_D(x), 0). This shows that σ̃(x, θ) = (σ|_D(x), θ') for some θ'. Now we also know by continuity that σ̃ is antiholomorphic on the Calabi–Yau cylinder ℝ × S^1 × D. Since we clearly have σ̃_* ∂/∂t = ∂/∂t, we must have σ̃_* ∂/∂θ = −∂/∂θ. Hence also the isometry σ̃_* preserves the orthogonal complement TD of ⟨∂/∂t, ∂/∂θ⟩. It follows that θ' depends only on θ. Moreover, the map thus given on S^1 is a reflection. Hence it has two fixed points. Thus for each fixed point of σ|_D we have two fixed points of σ̃ on N. Since by hypothesis σ|_D has fixed points, σ̃ has fixed points, and the resulting submanifold is indeed asymptotically cylindrical. Hence, if we choose the appropriate phase for the holomorphic volume form, we get an asymptotically cylindrical special Lagrangian submanifold. In particular, we shall consider the simplest examples of such M̅, that is, the blowups of Fano manifolds in the self-intersection of their anticanonical divisor, as described below Theorem <ref>. There is no need to worry about metrics here, and so we restrict to antiholomorphic involutions, that is, involutions σ with σ^* J = −J. We state the following: Suppose that M̅ is a Fano manifold with D a smooth anticanonical divisor and σ is an antiholomorphic involution which acts on D. Suppose further that we may find a submanifold representing the self-intersection of D on which σ acts. Then if we blow up this self-intersection, the resulting manifold admits an antiholomorphic involution acting on the anticanonical divisor given by the proper transform of D. Essentially this follows because the blowup is unique and determined by the existence of a suitable projection. We can then choose any Kähler class [ω] on the blowup and consider [ω] − σ^*[ω] to get an antiholomorphic involutive isometry. The only explicit example we discussed above was to take
\[
\bar M = \mathbb{CP}^n, \qquad D = \{p(x_1, x_2, x_3, x_4, \dots, x_{n+1}) = 0\},
\]
for a homogeneous polynomial p of degree n+1 that is always a submersion at its zeros. If p has real coefficients, D is preserved by the involution of ℂP^n given by complex conjugation of ℂ^{n+1}. Since we may perturb p to get a transverse submanifold without losing the real coefficients, we may suppose σ acts on the self-intersection, and so apply Proposition <ref> to find an antiholomorphic involution of the blowup. Choosing an appropriate Kähler class, applying Proposition <ref>, and then choosing the appropriate phase for the holomorphic volume form, we find an asymptotically cylindrical Calabi–Yau manifold admitting a special Lagrangian submanifold.
We know that each end of this special Lagrangian has limit components {p} × Y, for some special Lagrangian Y in the proper transform of D. It is clear from the construction that Y is a component of the fixed points of the induced map on the blowup restricted to this proper transform. This set of fixed points is identified with the fixed points of σ|_D, which are given by the product of the solutions to {p(x_1, x_2, x_3, x_4, …, x_{n+1}) = 0} in the real projective space ℝP^n with P^1 (as we may change the argument). As a basic example, therefore, taking n=3 and the divisor x_1^4 + x_2^4 + x_3^4 + x_4^4 = 0, this method does not yield an asymptotically cylindrical special Lagrangian, since this polynomial has no nonzero real solutions. However, x_1^4 + x_2^4 + x_3^4 − x_4^4 will; so will other similar examples. Moreover, each component of this set corresponds to two ends of the special Lagrangian, again by the construction. Thus a special Lagrangian constructed by this method always has an even number of ends; since the fixed points of σ|_D may not be connected, it follows that there are examples with more than two. Of course, the special Lagrangian constructed in this way may itself be disconnected, so by taking connected components we may be able to get examples with odd numbers of ends. Note also that the two ends are given by antipodal points of S^1; we shall show in the next subsection that this condition need not be preserved by deformations, so that there are further examples not constructed by this method. Similar arguments will apply more generally to Fano manifolds constructed as complete intersections; we should be able to choose the required polynomials to be real inductively, including the anticanonical divisor and its perturbation. §.§ Deformation theory of special Lagrangians In this subsection, we give a summarised version of McLean's deformation result <cit.> and the extension to asymptotically cylindrical special Lagrangians given by Salur and Todd <cit.>. We shall outline the proof in the compact case, as our gluing result will rest on applying the same idea to perturb a nearly special Lagrangian submanifold to a special Lagrangian submanifold; in the asymptotically cylindrical case, we shall give some details to improve the results and provide clearer proofs. In particular, note that Theorem <ref> is not quite in agreement with <cit.>. We begin with the compact case. Let M be a Riemannian manifold. If L and L' are closed submanifolds of M with L and L' close in a C^1 sense, then there is a normal vector field v on L such that L' is the image of v under the Riemannian exponential map; in this case, we write L' = exp_v(L). Conversely, of course, a C^1-small normal vector field v defines a C^1-close submanifold L' = exp_v(L). Moreover, for smooth L, the regularities of L' and v are the same. Thus, to work with submanifolds it suffices to work with normal vector fields. Hence, to understand which submanifolds L' close to L are special Lagrangian, we want to understand the zero set of the nonlinear map
\[
F: v \mapsto \big(\exp_v^* \operatorname{Im}\Omega|_L,\ \exp_v^* \omega|_L\big).
\]
To do this, we restrict to appropriate Banach spaces and apply the implicit function theorem to F to obtain families of special Lagrangian deformations; we then show that the family obtained is independent of the choice of Banach space. The derivative D_0F and its surjectivity between appropriate spaces are covered in detail by McLean <cit.>. A calculation using Cartan's magic formula shows that the linearisation D_0F is
\[
D_0F(u) = \big(d(\iota_u \omega|_L),\ d^*(\iota_u \omega|_L)\big),
\]
recalling that since L is Lagrangian, u ↦ ι_u ω|_L is an isomorphism between the normal and cotangent bundles of L. We will generalise this isomorphism for our purposes in Lemma <ref> below.
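In outline, the computation is the standard one (stated here under our identifications, and up to the sign and Hodge star conventions implicit in the second component): for a normal vector field u,
\[
\frac{d}{ds}\Big|_{s=0} \exp_{su}^* \omega\big|_L = (\mathcal{L}_u \omega)\big|_L = \big(d \iota_u \omega + \iota_u\, d\omega\big)\big|_L = d\big(\iota_u \omega|_L\big),
\]
since ω is closed; the Im Ω component is computed in the same way, and the pointwise identity relating ι_u Im Ω|_L to the Hodge star of ι_u ω|_L on a special Lagrangian converts it into the d^* term.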
Surjectivity is slightly more technical. We need to use some Banach space of forms, say C^{1,μ}. It is easy to see that F maps only to exact forms, and so we consider F as a map from C^{1,μ} normal vector fields to C^{0,μ} forms. By Baier <cit.>, we know that F is then a smooth map of Banach spaces. D_0F then becomes (<ref>), between C^{1,μ} normal vector fields and exact C^{0,μ} forms; since u ↦ ι_u ω|_L is an isomorphism between C^{1,μ} normal vector fields and C^{1,μ} one-forms, and d + d^* is surjective, it follows that (<ref>) is surjective. Finally, to show the result is independent of the Banach space used: the choice of Banach space is a choice of regularity for our solutions of F. But these vector fields have the same regularity as the submanifolds concerned. Thus this reduces to the smoothness of calibrated (and hence minimal) submanifolds. We also review an asymptotically cylindrical extension of this theory. This is due to Salur and Todd <cit.>. We begin by discussing normal vector fields corresponding to asymptotically cylindrical deformations of an asymptotically cylindrical submanifold. We would like to say that these are asymptotically translation invariant in some sense. We will give a definition of these much later, in Definition <ref>. In particular, if the asymptotically cylindrical deformations have the same limit, we expect to get exponentially decaying normal vector fields. It is clear that adding an exponentially decaying normal vector field on K × (R, ∞) to the vector field v of Definition <ref> also gives an asymptotically cylindrical submanifold. The setup of the tubular neighbourhood theorem used by Salur and Todd <cit.> uses precisely this: they define the map Ξ to be applying the exponential map to this sum, and then choose an isomorphism ζ between the normal bundles of (the end of) L and K × (R, ∞); this isomorphism is just a pushforward, and so it is reasonably obvious that it preserves exponentially decaying normal vector fields. The converse result follows in the same way. For the asymptotically translation invariant case, Salur and Todd use the rigidity of cylindrical special Lagrangians to reduce the problem (and hence to avoid needing a general definition of asymptotically translation invariant normal vector fields). They say that a cylindrical special Lagrangian deformation of the limit corresponds to a translation-invariant one-form, using the isomorphism between one-forms and normal vector fields on a special Lagrangian on K × (R, ∞), and hence gives a well-defined one-form on the end of L; they then combine these one-forms with exponentially decaying one-forms. We are proceeding more directly: we have L a well-defined submanifold of M, and consider the exponential map. It is simplest to suppose that we take the exponential map corresponding to a cylindrical metric on M; our choice of exponential map does not affect the final result. We have to show that there is a uniform lower bound on the injectivity radius of M with the cylindrical metric, to ensure our tubular neighbourhood is going to contain all reasonably nearby submanifolds.
But this is clear, as with the cylindrical metric the injectivity radius only depends on the geometry of the cylinder and of the compact complement M^{cpt}. As before, C^1-uniformly close C^{k,μ} submanifolds correspond to C^1-uniformly-small C^{k,μ} normal vector fields. We now have to pass to the subset corresponding to asymptotically cylindrical submanifolds. Again using the isomorphism between one-forms and normal vector fields, we can assume that the Riemannian exponential map is defined on T^*L. If a one-form on an asymptotically cylindrical submanifold is asymptotically translation invariant, then its image under the Riemannian exponential map is also an asymptotically cylindrical submanifold. This result is still needed by Salur and Todd, in the specific cylindrical case. This is much less obvious a priori than the exponentially decaying case: whilst the image certainly decays in a C^0 sense to the corresponding cylindrical manifold, it is not clear why the associated normal vector field must decay with all its derivatives. If we suppose that the Riemannian metric we use on M is actually cylindrical, this follows because combining one-forms is a smooth operation. Specifically, by the definition of "asymptotically cylindrical submanifold", we have to show that exp_β(exp_α(K × (R, ∞))) = exp_γ(K' × (R, ∞)) is asymptotically cylindrical, that is, that we can find K' and γ so that γ is exponentially decaying. K' can be obtained from the limit, and then, pointwise, γ can be chosen to depend smoothly on α and β, with a well-defined limit in this map as t → ∞. Indeed, since the metric is cylindrical, we do not need to worry about it in this limit: for everything else, we have smooth dependence and so exponential convergence to this limit. Conversely, given two asymptotically cylindrical submanifolds with ends exp_α(K_1 × (R, ∞)) and exp_β(K_2 × (R, ∞)), close enough that one is the image under the Riemannian exponential map of a one-form γ, γ must have a well-defined limit (corresponding to the normal vector field between K_1 and K_2), and again γ at each point depends smoothly on α and β and continuously on t with a well-defined limit, so decays exponentially to the limit γ̃. For further details of arguments like this, see section <ref> below, particularly Proposition <ref>. That is, a sensible definition of an asymptotically translation invariant normal vector field is just "a normal vector field whose corresponding one-form is asymptotically translation invariant". We shall show this is equivalent to the general definition we shall give in Definition <ref>, in Proposition <ref> later. In particular, an exponentially decaying normal vector field gives an asymptotically cylindrical submanifold with the same limit. Conversely, given a close asymptotically cylindrical submanifold with the same limit, we can find such an asymptotically translation invariant normal vector field, and this normal vector field must decay exponentially. We will now sketch how to prove the following variant of <cit.>. Let M be an asymptotically cylindrical Calabi–Yau manifold, with cross-section X × S^1, where X is a compact connected Calabi–Yau manifold, and let L be an asymptotically cylindrical special Lagrangian in M with cross-section asymptotic to ⋃ Y_i × {p_i}, where, for each end i, Y_i is a special Lagrangian submanifold of X.
The space of special Lagrangian deformations of L that are also asymptotically cylindrical with sufficiently small decay rate is a manifold with tangent space at L given by the normal vector fields v on L such that ι_v ω|_L is a bounded harmonic one-form. The most important difference between this and <cit.> is that there it is claimed that the tangent space has slightly greater dimension. Salur and Todd claim that we can find a deformation curve with limit corresponding to every normal vector field with limit ∑ c_i ∂/∂θ, and this is not the case in general, as we shall see in Proposition <ref> below. We also give a slightly clearer explanation of which deformations of the Y_i correspond to deformation curves in Proposition <ref>, and we will need this argument later in Proposition <ref>, where we will apply it to show that the space of matching special Lagrangian deformations is a submanifold. Finally, we drop the connectedness assumption from Salur–Todd, though this has no practical effect on the proof. Hence we give some details. By considering each component separately, we may suppose L is connected throughout the proof. Suppose that L' is such a deformation and the corresponding asymptotically translation invariant normal vector field v has limit ṽ. Then in the cylindrical Calabi–Yau manifold ℝ × S^1 × X, if we extend ṽ to be translation invariant, then exp_ṽ(ℝ × {p} × Y) must also be a translation invariant special Lagrangian. Our first problem is thus to identify the possible limits ṽ. This can be done for each end separately. Again, we apply (<ref>) and (<ref>); since the only noncompactness is ℝ, and we assume translation-invariance, McLean's result applies and locally the ṽ such that exp_ṽ(ℝ × {p} × Y) is special Lagrangian are diffeomorphic to translation-invariant harmonic normal vector fields, meaning the normal vector fields whose corresponding one-form is harmonic. It is easy to see that these are precisely of the form u + C, where u is a harmonic normal vector field on Y, and C is a constant multiple of ∂/∂θ. We further have to consider which of these deformations arises as the limit of an asymptotically cylindrical deformation. Let L_0 be an asymptotically cylindrical special Lagrangian submanifold of the asymptotically cylindrical Calabi–Yau manifold M, with cross-sections K_0 = {p} × Y and N = S^1 × X respectively. We know that the set of cylindrical deformations of the special Lagrangian ℝ × {p} × Y in ℝ × S^1 × X is a manifold. The set of limits of asymptotically cylindrical deformations of L is a submanifold 𝒦 of this manifold of cylindrical deformations. Its tangent space is precisely those harmonic normal vector fields on K that arise as limits of bounded harmonic normal vector fields on L. Let K' be a deformation of the limit K, and extend it to L'. We have to show that we can choose L' to be special Lagrangian. This can be done if and only if the exact and exponentially decaying forms Im Ω|_{L'} and ω|_{L'} are the differentials of decaying forms, for any such extension: see <cit.>. We know that Im Ω|_{L'} and ω|_{L'} are decaying closed forms. The decaying forms form a complex under exterior differentiation. We call the cohomology of this complex H^*_{exp}(L); the argument of Melrose <cit.> says that this is the same as the compactly supported cohomology. To ask for Im Ω|_{L'} and ω|_{L'} to be the differentials of decaying forms is to ask for their classes in these cohomology groups to be the trivial classes.
As in <cit.>, we know that ω and Im Ω restrict to exact forms on a tubular neighbourhood of L_0 (since they restrict to zero on L_0). Consequently we may find τ_1 and τ_2 asymptotically translation invariant with limits τ̃_1 and τ̃_2 such that τ_i|_L_0 = 0 and dτ_1 = ω, dτ_2 = Im Ω on a tubular neighbourhood of L_0. We have a coboundary map ∂: H^p(K) → H^p+1_exp(L) given by the long exact sequence of relative cohomology. Examining the definition of ∂, we see that [ω|_L'] ∈ H^2_exp(L) is given by ∂([τ̃_1|_K']) and [Im Ω|_L'] ∈ H^n_exp(L) is ∂([τ̃_2|_K']). That is, we need to show that the space of deformations K' such that ([τ̃_1|_K'], [τ̃_2|_K']) both lie in the kernel of ∂ is a submanifold with the desired tangent space. By Proposition <ref>, we know each component of the cross-section K is of the form {p} × Y for some point p ∈ S^1 and some special Lagrangian Y in X. Thus we may write
K = ⋃ {p^i} × Y^i.
It similarly follows that any special Lagrangian deformation is of the form
K_s = ⋃ {(p^i)'} × (Y^i)'.
We now note that since Im Ω̃ = dθ ∧ Re Ω_X + dt ∧ Im Ω_X restricts to zero on each space {(p^i)'} × X, τ̃_2 is closed on this submanifold. Consequently, [τ̃_2|_K_s] = [τ̃_2|_⋃ {(p^i)'} × Y^i]. Y^i is special Lagrangian, so that Re Ω_X|_Y^i = vol_Y^i for a suitable choice of orientation. Calculation shows that this choice can be made so that dr ∧ vol_Y_i is the limit of vol_L on this end. Applying Stokes' theorem, we find
[τ̃_2|_K_s] = ∑ ((p^i)' - p^i) [vol_Y_i].
Now the kernel of ∂ on H^n-1(K) is the image of the restriction map H^n-1(L) → H^n-1(K). It is known (Nordström <cit.>) that for any given metric this is equivalent to being L^2-orthogonal to those β so that there exists a harmonic form on L with limit dr ∧ β. As L is connected by assumption, up to scale there is exactly one such harmonic form, namely vol_L, with limit dr ∧ (∑ vol_Y_i). It immediately follows that K_s satisfies ∂([τ̃_2|_K_s]) = 0 if and only if
∑ ((p^i)' - p^i) Vol(Y_i) = 0.
As the end is cylindrical, this defines a linear subspace of the potential deformations K_s (the p^i and Y^i components are essentially independent of each other, so this is still a manifold). We now restrict to this subspace, and show that within it the deformations satisfying ∂([τ̃_1|_K']) = 0 form a manifold with the desired properties. We consequently consider the nonlinear map K' ↦ ∂[τ̃_1|_K']. This linearises using Cartan's magic formula and the assumption τ̃_1|_K = 0 to give
v ↦ ∂[ι_v ω̃|_K].
To prove that the kernel of (<ref>) is a submanifold, we only have to prove that its linearisation (<ref>) is surjective onto the image of ∂ in H^2_exp(L). We begin by noting that if we allowed all normal vector fields v with ι_v ω̃|_K harmonic, the classes [ι_v ω̃|_K] would certainly include all of H^1(K), and hence (<ref>) would be surjective. Thus it suffices to show that the normal vector field we removed by the condition (<ref>) maps to zero. But this is ∑ c_i dr|_K, and dr|_K = 0. Thus we have shown that the deformations of K that extend to deformations of L form a submanifold of all special Lagrangian deformations of K. Its tangent space is the space of normal vector fields ∑ c_i + v_i, with v_i normal to Y_i, such that ι_∑ c_i + v_i ω̃|_K is harmonic,
∑ c_i Vol(Y_i) = 0, ∑ ∂(ι_v_i ω̃|_Y_i) = 0.
It remains to check that for every such normal vector field, there is a harmonic normal vector field v on L with these limits, and conversely. In terms of one-forms, the limit becomes
∑ c_i dr + ι_v_i ω̃|_Y^i.
We know (compare Nordström <cit.>) that we can treat the two parts of this separately by considering exact and coexact harmonic forms.
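Schematically, the Stokes computation behind the displayed formula runs as follows; this is a sketch in the conventions above, using dτ̃_2 = Im Ω̃, the fact that dt vanishes on the cross-section, and τ̃_2|_K = 0 (since τ_2|_L_0 = 0):
∫_{(p^i)'} × Y^i τ̃_2 - ∫_{p^i} × Y^i τ̃_2 = ∫_[p^i, (p^i)'] × Y^i dτ̃_2 = ∫_[p^i, (p^i)'] × Y^i dθ ∧ Re Ω_X = ((p^i)' - p^i) Vol(Y^i),
and summing over the ends gives [τ̃_2|_K_s] = ∑ ((p^i)' - p^i)[vol_Y_i] after pairing with the fundamental classes of the Y^i.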
Thus we have to show a harmonic one-form with limit either of the form ∑ c_i dr or a one-form on the Y^i exists if and only if it satisfies the relevant condition. The second is obvious from exactness; the first can be dealt with again using L^2-orthogonality of the two kinds of limit. ∑ c_i is a harmonic 0-form; there is a harmonic form with limit ∑ c_i dr if and only if ∑ c_i is L^2-orthogonal to the limits of harmonic 0-forms; there is only one of these, namely 1, so this is
∑ c_i Vol(Y_i) = 0
exactly as required. Hence 𝒦 has tangent space the normal vector fields corresponding to limits of bounded harmonic one-forms.
Proposition <ref> is essentially claimed in <cit.>, and we have followed the ideas of their proof. Their proof itself is somewhat unclear and possibly erroneous: we know that the τ_i decay on L, but Salur–Todd seem in that proposition to claim they have zero limit on L' too, so we have expanded on it. In <cit.>, Salur and Todd further claim that the normal vector fields C give further perturbations L' of L, which is unfortunately also untrue. Note that in the examples from subsection <ref>, assuming we only have two ends, these ends are {p} × Y and {p+π} × Y. Thus the condition on the c_i becomes c_1 + c_2 = 0, and so in particular the claimed perturbations corresponding to arbitrary pairs (c_1, c_2) do not exist in general. To obtain our final result, we take a space of vector fields v defined as "decaying vector fields" ⊕ "acceptable limits". In order to do this, we have to take some choice of vector field for each acceptable limit ṽ. Whilst there is no canonical way to do this, if we assume that they are eventually translation invariant any differences can be absorbed into the decaying vector fields. The image of such a vector field under the map v ↦ (exp_v^* Im Ω|_L, exp_v^* ω|_L) is obviously exponentially decaying, and is still exact by homotopy, and so we may still consider a right hand side of decaying exact forms; furthermore, by the above, the right hand side can be considered as differentials of decaying forms. We thus work on the (nonlinear) subspace of all normal vector fields corresponding to asymptotically cylindrical submanifolds that decay to a limit in 𝒦; the tangent space to this is normal vector fields decaying to harmonic normal vector fields that are limits of harmonic normal vector fields on L, and so the linearisation comes down to the effect of (d, d^*) from forms that decay to harmonic limits that are the limits of harmonic forms to the differentials of decaying forms. This is surjective exactly as in the decaying case, and evidently its kernel is precisely the bounded harmonic forms. The dimension of the moduli space is thus the dimension of the space of bounded harmonic one-forms, which is b^1(L). Finally, we have to show that the resulting moduli space is independent of the regularities used. By standard minimal surface regularity as in the compact case, we certainly know that everything involved is smooth. We thus only need to show that the moduli space is independent of the decay rate implicitly chosen in the previous paragraph. It is clear that if we consider things decaying slower, we get the same solutions and maybe some more: we have to show that as we consider the decay rate getting faster, the space of deformations does not shrink to {L}, which would correspond to all the deformations decaying more slowly than the chosen rate. Suppose L' is any such deformation. Then there is a curve of special Lagrangians L_s with L_0 = L and L_1 = L'. The tangents to this curve are asymptotically translation invariant harmonic normal vector fields on L_s.
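As a quick sketch of the orthogonality computation just used: the bounded harmonic 0-forms on the connected L are the constants, with limit 1 on every end, and
⟨∑ c_i, 1⟩_L^2(K) = ∑_i ∫_Y_i c_i vol_Y_i = ∑_i c_i Vol(Y_i),
so L^2-orthogonality to 1 is exactly the displayed condition.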
The limits of the submanifolds L_s lie within an open subset of the limit K_0 which we can identify; hence, for all of these we can find a uniform bound on the first nonzero eigenvalue of the Laplacian on the cross-section, and it follows that there exists δ > 0 (depending only on L) such that all the normal vector fields decay at least at rate δ (essentially by Nordström <cit.>). Hence, so does the whole curve L_s and in particular L'.
This completes the proof of Theorem <ref>.
§ CONSTRUCTING AN APPROXIMATE SPECIAL LAGRANGIAN
In this section, we apply the approximate gluing definitions given in subsection <ref> to the asymptotically cylindrical special Lagrangian submanifolds of section <ref>, to construct a submanifold L^T. We will primarily be showing that L^T is nearly special Lagrangian. We first discuss the hypothesis we shall assume for Calabi–Yau gluing, which can be obtained from <cit.> in the case n=3; then we deduce in Proposition <ref> that L^T is nearly special Lagrangian. We shall work with a pair of matching asymptotically cylindrical Calabi–Yau manifolds, and we shall assume the following gluing result for Calabi–Yau manifolds.
[Ambient Gluing] Let M_1 and M_2 be a matching pair of asymptotically cylindrical Calabi–Yau manifolds. There exist T_0 > 0, a constant ϵ and a sequence of constants C_k such that for gluing parameter T > T_0 there exists a Calabi–Yau structure (Ω^T, ω^T) on the glued manifold M^T constructed in Definition <ref> with
Ω^T - γ_T(Ω_1, Ω_2)_C^k + ω^T - γ_T(ω_1, ω_2)_C^k ≤ C_k e^-ϵ T,
[Ω^T] = [γ_T(Ω_1, Ω_2)], [ω^T] = c[γ_T(ω_1, ω_2)],
for some c > 0, where γ_T is the patching map of closed forms defined in Definition <ref>. This hypothesis says that we can perturb the approximate gluing (γ_T(Ω_1, Ω_2), γ_T(ω_1, ω_2)) to get a Calabi–Yau structure. (<ref>) says that the perturbation from our approximate gluing to the glued structure is small, and (<ref>) says that these perturbations are basically by exact forms. Hypothesis <ref> holds if n=3. This follows from the previous work in <cit.>, especially Proposition 5.8. Note that that proposition would give (<ref>) for iΩ, rather than Ω as here: but this is fine, as (iΩ, ω) is a Calabi–Yau structure whenever (Ω, ω) is.
If n=4, a gluing result for SU(4) structures is known. For instance, Doi–Yotsutani have given a gluing result by passing through Spin(7) structures: see <cit.>. It is not immediate from this work that we can achieve (<ref>) and (<ref>). Indeed, the analogue of (<ref>) for the Spin(7) structures is very unlikely to be true, as (as with SU(4) structures) applying γ_T to Spin(7) structures yields a four-form that is very unlikely to define a Spin(7) structure. We would have to show that there was an exact perturbation that was a Spin(7) structure, rather than merely any perturbation as Doi–Yotsutani use (by taking the nearest Spin(7) structure pointwise and then perturbing as in <cit.>). Even then, the SU(4) structure induced from a Spin(7) structure is not unique, and further work would be required to get (<ref>).
For higher n still, it is not immediately obvious that the complex structure parts of asymptotically cylindrical Calabi–Yau structures can be glued easily (because the set of decomposable complex n-forms is highly nonlinear). Hence, it seems unlikely that Hypothesis <ref> (in particular (<ref>)) holds in the case n>3. In Proposition <ref> below, we will obtain the consequences of Hypothesis <ref> for our approximately glued submanifold; we will then discuss what weaker conditions than Hypothesis <ref> might yield these consequences.
In Joyce's work on the desingularisation of cones (<cit.>), (<ref>) is not difficult to obtain. The gluing is carried out in such a way that ω|_L = 0. For [Im Ω|_L], we observe that H^n(L) is one-dimensional, so it suffices to consider the pairing with the fundamental class of L: but the desingularised submanifold L is homologous to the original special Lagrangian and so this pairing is zero.
Hypothesis <ref> has the following consequence, which may be proved by considering the compact parts and the neck separately, and noting that we have convergence either to the behaviour on M_i or to the cylindrical behaviour.
Let M_1 and M_2 be a matching pair of asymptotically cylindrical Calabi–Yau manifolds. Let g^T be the gluing of the asymptotically cylindrical metrics g_1 and g_2 on M^T and g(Ω^T, ω^T) be the metric given by the Calabi–Yau structures in Hypothesis <ref>. Then there exist constants C_k and ϵ such that
g^T - g(Ω^T, ω^T)_C^k ≤ C_k e^-ϵ T,
where these norms are taken with either metric. Combining Proposition <ref> with Proposition <ref> and the second fundamental form bound mentioned below it, we also obtain the following. Let M_1 and M_2 be a matching pair of asymptotically cylindrical Calabi–Yau manifolds, and let L_1 and L_2 be a matching pair of asymptotically cylindrical submanifolds. Let M^T and L^T be the glued manifold and submanifold as in Definition <ref> and suppose Hypothesis <ref> holds. We have two metrics on L^T. Firstly, we have a metric obtained by direct gluing of the metrics on L_1 and L_2 as in Definition <ref>. Secondly, we have the metric given on L^T by restricting the metric on M^T induced by (Ω^T, ω^T). The difference of these two metrics decays exponentially in T to zero with all derivatives, with respect to either of them. Moreover, with respect to the metric induced from (Ω^T, ω^T), the second fundamental form of L^T in M^T is bounded in C^k for all k uniformly in T. In particular, the restriction maps of C^k p-forms are bounded independently of T.
Combining Hypothesis <ref> with Lemma <ref> and Corollary <ref>, we obtain the following. Suppose that (M_1, M_2) is a pair of matching asymptotically cylindrical Calabi–Yau manifolds, and L_1 and L_2 are asymptotically cylindrical special Lagrangian submanifolds matching in the sense of Definition <ref>. Let L^T be the glued submanifold and suppose that Hypothesis <ref> holds. Then
* [Im Ω^T|_L^T] = 0 = [ω^T|_L^T],
* There exist constants C_k and ϵ such that
Im Ω^T|_L^T_C^k + ω^T|_L^T_C^k ≤ C_k e^-ϵ T.
That is, as T becomes larger, L^T becomes close to special Lagrangian. We now return to the question of what we may expect when n>3. We certainly need to suppose that our Calabi–Yau structures can be glued, and it seems likely that a perturbative argument ought to work. Hence, suppose that we have Calabi–Yau structures (Ω^T, ω^T) and that (<ref>) holds; the question is how we may obtain something resembling [Im Ω^T|_L^T] = 0 = [ω^T|_L^T]. If we have a Kähler class [ω^T_0] whose restriction to L^T is zero, then, rescaling Ω^T if necessary, by the Calabi conjecture we may find ω^T_1 in this class such that (Ω^T, ω_1^T) is a Calabi–Yau structure and [ω_1^T|_L^T] = 0. In particular, if the holonomy of (Ω^T, ω^T) is exactly SU(n), then we know that there are no parallel (2, 0) + (0, 2) forms and hence by a Bochner argument no harmonic such forms; see, for instance, <cit.>.
With J the complex structure corresponding to (Ω^T, ω^T), we thus know by Hodge theory that J γ_T(ω_1, ω_2) lies in the same cohomology class as γ_T(ω_1, ω_2), since they have the same harmonic representative. Then ω^T_0 = 1/2 (J γ_T(ω_1, ω_2) + γ_T(ω_1, ω_2)) is a closed (1, 1) form close to γ_T(ω_1, ω_2), and in particular defines a Kähler metric. ω^T_0 is in the cohomology class [γ_T(ω_1, ω_2)], and so restricts to an exact form on L^T just as before. We can then find a Ricci-flat metric in this Kähler class as above. We always have holonomy SU(n) if n is odd and M^T is simply connected; see <cit.>. If n is even, nevertheless the asymptotically cylindrical Calabi–Yau manifolds M_1 and M_2 have holonomy exactly SU(n) if they are simply connected and irreducible (see <cit.>). It seems unlikely to be difficult to prove that then M^T must have holonomy exactly SU(n) as well. It remains to deal with the holomorphic volume form; we suppose ω_1^T has been found as above in general. Now, [Re Ω^T|_L^T] and [Im Ω^T|_L^T] lie in the one-dimensional space H^n(L^T). Hence, there exists α such that [Im(e^iα Ω^T)|_L^T] is zero. Thus (e^iα Ω^T, ω_1^T) is a Calabi–Yau structure for which Proposition <ref> (i) holds. It remains to verify (ii) for this structure, that is, it remains to show that ω_1^T and α are close to ω^T and 0 respectively. This, again, should not be difficult. In particular, assuming the details above can be filled in, the argument we give should extend to a gluing map of minimal Lagrangian submanifolds provided that the holonomy of M^T is exactly SU(n). Alternatively, we can glue special Lagrangians provided our ω_1^T can be chosen in general: but it may be necessary to choose this form depending on the submanifolds we are interested in.
§ THE SLING MAP: EXISTENCE AND WELL-DEFINITION
Suppose that M_1 and M_2 are a pair of matching asymptotically cylindrical Calabi–Yau manifolds, and L_1 and L_2 are a pair of matching asymptotically cylindrical special Lagrangian submanifolds, and that Hypothesis <ref> applies. Then we know from Proposition <ref> that the submanifold L^T constructed by gluing L_1 and L_2 as in Definition <ref> is close to special Lagrangian. To prove Theorem A, we now have to show that L^T can be perturbed into a special Lagrangian submanifold. More generally, we seek to show that any such nearly special Lagrangian submanifold L can be perturbed into a special Lagrangian submanifold, and to define a uniform "SLing map" giving this perturbation. This is carried out in this section. We first state Condition <ref> on a submanifold L of a Calabi–Yau manifold M, such that an inverse function (or contraction mapping) argument shows that such an L can always be perturbed into a special Lagrangian. One of the most important conditions is then dealt with in Lemma <ref>. This will be absolutely essential for all our later work, and extends the isomorphism between one-forms and normal vector fields used by McLean to the nearly special Lagrangian case. We then show that the "remainder term" can be sensibly bounded, in Proposition <ref>, and complete the proof of Theorem A in Theorem <ref>: L^T satisfies Condition <ref> and so can be perturbed into a special Lagrangian. We then give a definition (Definition <ref>) of an "SLing map" applicable whenever Condition <ref> holds.
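To spell out the phase-rotation step (a sketch in the notation above): since Im(e^iα Ω^T) = cos α Im Ω^T + sin α Re Ω^T, restricting and passing to cohomology gives
[Im(e^iα Ω^T)|_L^T] = cos α [Im Ω^T|_L^T] + sin α [Re Ω^T|_L^T].
Writing [Re Ω^T|_L^T] = a e and [Im Ω^T|_L^T] = b e for a generator e of the one-dimensional space H^n(L^T), this vanishes precisely when b cos α + a sin α = 0, which is solvable for α for any pair (a, b).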
Finally, we deal with Theorem A1 (Theorem <ref>): we explain that Condition <ref> holds whenever Im Ω|_L and ω|_L are sufficiently small and exact, with sufficient smallness depending on L, so that any nearly special Lagrangian submanifold can be perturbed to a special Lagrangian.
§.§ General setup
In this first subsection, we identify the inverse function (or contraction mapping) argument that we will use and describe the required hypotheses in Condition <ref>. We shall then prove our first results towards obtaining them, Lemma <ref> and Proposition <ref>. Lemma <ref> says firstly that if L is nearly special Lagrangian then the map v ↦ ι_v ω|_L gives an isomorphism between normal vector fields and one-forms, generalising the special Lagrangian case, and secondly gives (not particularly explicit) bounds on the C^k, μ norms of this map in both directions. Proposition <ref> says that the linearisation of the map to which we apply the inverse function theorem does not change very much if we change the SU(n) structure by a small amount. For the inverse function argument, we rely on essentially the same idea as in the deformation theory of subsection <ref>. That is, we consider the nonlinear map of (<ref>),
F: {normal vector fields on L^T} → {n-forms and 2-forms on L^T}, v ↦ ((exp_v^* Im Ω^T)|_L^T, (exp_v^* ω^T)|_L^T).
Since (Ω^T, ω^T) is a torsion-free Calabi–Yau structure, this linearises as in subsection <ref> using Cartan's magic formula to give
D_0F: v ↦ (d(ι_v Im Ω^T)|_L^T, d(ι_v ω^T)|_L^T).
However, since L^T is not special Lagrangian, we do not have that ι_v ω^T has no normal component, and we certainly do not have the equality ι_v Im Ω^T|_L^T = *(ι_v ω^T|_L^T) of <cit.>. If we suppose that these do hold, and further that the inverse of the linearisation D_0F can be controlled appropriately, then an inverse function argument would prove that if Im Ω^T|_L^T and ω^T|_L^T are sufficiently small then L^T can be perturbed to special Lagrangian. This is moreover the case if these equalities only nearly hold (indeed, since the second is only used for bounding the linearisation, we do not need to assume it separately), and thus we formalise this as
[SLing conditions] Let M be a Calabi–Yau manifold. The closed submanifold L satisfies the SLing conditions if for some k and μ:
* The map u ↦ ι_u ω|_L is an isomorphism between C^k+1, μ normal vector fields to L and C^k+1, μ 1-forms on L and there exists C_1 ≥ 1 such that 1/C_1 u_C^k+1, μ ≤ ι_u ω|_L_C^k+1, μ ≤ C_1 u_C^k+1, μ.
* The linear map u ↦ (dι_u ω|_L, dι_u Im Ω|_L) is an isomorphism from C^k+1, μ normal vector fields u such that ι_u ω|_L is L^2-orthogonal to harmonic forms (these normal vector fields will later be called orthoharmonic: see the discussion after Definition <ref>) onto exact C^k, μ 2- and n-forms, and there exists C_2 ≥ 1 such that for any such u, u_C^k+1, μ ≤ C_2 (dι_u ω|_L_C^k, μ + dι_u Im Ω|_L_C^k, μ).
* There exists r>0 such that whenever u_C^k+1, μ < r and v_C^k+1, μ < r, the remainder term satisfies the following bound for some constant C_3 depending on r
(exp^*_u Im Ω|_L - exp^*_v Im Ω|_L - dι_u Im Ω|_L + dι_v Im Ω|_L, exp^*_u ω|_L - exp^*_v ω|_L - dι_u ω|_L + dι_v ω|_L)_C^k, μ ≤ C_3 u-v_C^k+1, μ (u_C^k+1, μ + v_C^k+1, μ).
* (Im Ω|_L, ω|_L)_C^k, μ ≤ min{1/8 C_1^2 C_2^2 C_3, r/2 C_1 C_2}, and we have the cohomology conditions [Im Ω|_L] = 0 and [ω|_L] = 0.
If L satisfies Condition <ref>, we can perturb it.
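For reference, the linearisation is the standard Cartan computation; as a sketch, for a closed form χ (here χ = Im Ω^T or ω^T) and a normal vector field v,
d/ds|_s=0 exp_sv^* χ = L_v χ = d(ι_v χ) + ι_v dχ = d(ι_v χ),
so restricting to L^T gives the formula for D_0F above. We suppress the routine identification of the variation of the restricted form with the restriction of the Lie derivative.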
A standard application of the contraction mapping principle yields the following precise implicit function theorem. Suppose that L is a closed submanifold of the Calabi–Yau manifold M and that L satisfies Condition <ref>. Then there exists a normal vector field v to L such that exp_v(L) is special Lagrangian. We have v_C^k+1, μ ≤ 2C_1 C_2 (Im Ω|_L, ω|_L)_C^k, μ; note that by Condition <ref> (iv), this is always less than min{1/4 C_1 C_2 C_3, r}. It therefore suffices to show that if M_1 and M_2 are a matching pair of Calabi–Yau manifolds and L_1 and L_2 a matching pair of special Lagrangian submanifolds and Hypothesis <ref> holds, then the submanifold L^T of M^T constructed as in Definition <ref> satisfies Condition <ref>. Parts (iii) and (iv) of Condition <ref> are relatively straightforward; we need to discuss parts (i) and (ii). We begin with (i). We know from McLean <cit.> that v ↦ ι_v ω|_L defines an isomorphism between normal vectors to a special Lagrangian submanifold, and cotangent vectors to this submanifold. Moreover, this map is an isometry, so that if L is special Lagrangian (i) holds with C_1 = 1. We extend this to the case where (Ω, ω) is only defined locally and L is close to special Lagrangian. Suppose that (Ω, ω) is an SU(n) structure around L in the sense of Definition <ref> and p is a point of L such that |(ω|_L)_p| < 1 (i.e. |ω(u, v)| < |u||v| for all nonzero u, v ∈ T_pL). Then the complex structure J_p: T_pM → T_pM does not take any nonzero tangent vector to L to another tangent vector, and at p the map u ↦ ι_u ω|_L from normal vectors to tangent covectors is an isomorphism.
Now suppose the uniform norm ω|_L_C^0 < 1, so that the preceding paragraph holds at all points. Then v ↦ ι_v ω|_L defines a map from smooth normal vector fields on L to smooth 1-forms on L. Provided that ω|_L_C^k, μ is sufficiently small compared to the C^k, μ norm of J and the C^k norm of the second fundamental form of L in M, we have bounds
c v_C^k, μ ≤ ι_v ω|_L_C^k, μ ≤ C v_C^k, μ,
where the constants c and C depend on the C^k, μ norms of ω|_L and the induced almost complex structure J and the C^k norm of the second fundamental form of L in M.
For the first part, we work at p ∈ L. Suppose that v ∈ T_pL ⊂ T_pM is nonzero and such that J_pv ∈ T_pL. J_p is an isometry and so, using |(ω|_L)_p| < 1 (both v and J_pv are tangent to L),
|v|^2 = g(v, v) = ω(v, J_pv) < |v||J_pv| = |v|^2.
This is a contradiction, proving the first claim.
It follows that the map from tangent vectors to normal vectors given by taking the normal component of J_pv is an isomorphism, as it is an injective linear map between spaces of the same dimension. That is, given a normal vector v we can find a tangent vector u such that v is the normal component of J_pu; equivalently, we can find a pair u and u' of tangent vectors such that v = J_pu + u'. To show u ↦ ι_u ω|_L is an isomorphism, we show that it too is injective. It is easy to see that if not, there exists a nonzero normal vector u such that J_pu is also normal. As above, we can find tangent vectors v and v' such that J_pv = u + v', and hence, applying J_p and rearranging, J_pv' = -v - J_pu.
Since v and v' are tangential, u and J_pu are normal and nonzero, and J_p is an isometry, we obtain
|v| < |-v - J_pu| = |J_pv'| = |v'| < |u + v'| = |J_pv| = |v|,
which is evidently a contradiction. This completes the first part of the proof.
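For orientation, here is a minimal sketch of where the constants in this proposition come from; the bookkeeping here is ours. Writing F(v) = F(0) + D_0F(v) + Q(v) with F(0) = (Im Ω|_L, ω|_L), a zero of F among orthoharmonic normal vector fields is a fixed point of
v = -(D_0F)^-1(F(0) + Q(v)),
which makes sense since F(0) and Q(v) are exact by (iv) and homotopy invariance. Conditions (i) and (ii) bound the inverse, contributing the factor C_1 C_2; condition (iii) gives Q(u) - Q(v)_C^k, μ ≤ C_3 u-v_C^k+1, μ (u_C^k+1, μ + v_C^k+1, μ), so the map above is a contraction on the ball of radius 2C_1C_2 F(0)_C^k, μ; and the smallness in (iv) ensures this ball is preserved, giving the stated bound on v.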
For the second part, we shall work with the norms on sections of TM|_L induced by the ambient connection; as described after Theorem <ref>, the restrictions of these norms to normal vector fields are Lipschitz equivalent to the standard norms, with Lipschitz constant depending on the C^k norm of the second fundamental form. We introduce the notations π^1_1 and π^1_0 for the normal and tangent parts of vector fields and one-forms. By the remark on Lipschitz equivalence after, and the argument indicated for, Theorem <ref>, we obtain
π^1_i α_C^k, μ ≤ C' α_C^k, μ,
where C' depends on the C^k norm of the second fundamental form. The upper bound in (<ref>) follows immediately; it remains to show the lower bound. To do this, we use a similar idea to the first part. Given a normal vector field v, we can find local tangent vector fields u and u' such that
v = Ju + u'.
Now we note that u' is small: if w is also a local tangent vector field, taking the inner product of w with (<ref>) yields
0 = ω(u, w) + g(u', w).
u and w are both tangential, so ω(u, w)_C^k, μ can be bounded by ω|_L_C^k, μ u_C^k, μ w_C^k, μ. Hence, since the C^k, μ norm of a covector field can be identified as its operator norm as a map from C^k, μ vector fields to C^k, μ functions, we have that u'_C^k, μ can be bounded by u_C^k, μ ω|_L_C^k, μ. In particular, we find
u_C^k, μ ≤ C_1 Ju_C^k, μ ≤ C_1 v_C^k, μ + C_1 u_C^k, μ ω|_L_C^k, μ,
where C_1 depends on the C^k, μ norm of J, and hence that if C_1 ω|_L_C^k, μ is sufficiently small,
u_C^k, μ ≤ C_1/(1 - C_1 ω|_L_C^k, μ) v_C^k, μ.
Now we apply J to (<ref>), obtaining
Jv = -u + Ju'.
Consequently, the normal part of Jv is the normal part of Ju' and we have the estimate
π^1_1 Jv_C^k, μ ≤ C_2 Ju'_C^k, μ ≤ C_3 u'_C^k, μ ≤ C_3 u_C^k, μ ω|_L_C^k, μ ≤ C_1C_3 ω|_L_C^k, μ/(1 - C_1 ω|_L_C^k, μ) v_C^k, μ,
where C_2 depends on the C^k norm of the second fundamental form and C_3 depends on the C^k norm of the second fundamental form and the C^k, μ norm of J. If C_1C_3 ω|_L_C^k, μ/(1 - C_1 ω|_L_C^k, μ) is sufficiently small we then have the desired lower bound
π^1_0 Jv_C^k, μ ≥ (1 - C_1C_3 ω|_L_C^k, μ/(1 - C_1 ω|_L_C^k, μ)) v_C^k, μ.
We now turn to part (ii) of Condition <ref>. To understand this, we will show that v ↦ ι_v Im Ω^T|_L^T and v ↦ *ι_v ω^T|_L^T are similar. We will work locally, and take a different SU(n) structure (Ω', ω') around an open subset U of L^T, again in the sense of Definition <ref>, so that U is special Lagrangian with respect to (Ω', ω'), and then we know from <cit.> that ι_v Im Ω'|_U = *'(ι_v ω'|_U). Then we have to show that v ↦ ι_v Im Ω^T|_U is close to v ↦ ι_v Im Ω'|_U and similarly for v ↦ *ι_v ω^T|_U. That these follow provided that (Ω', ω') is close enough to (Ω^T, ω^T) is the content of the following. Suppose that M is a 2n-dimensional manifold, and L is an n-dimensional submanifold. Let SU(M) be the bundle of SU(n) structures over M from Definition <ref>. Suppose that (Ω_1, ω_1) and (Ω_2, ω_2) are two sections of SU(M)|_L. Assume further that (Ω_1, ω_1) is the restriction to L of a Calabi–Yau structure on M, so that there is a well-defined metric g on M, giving a well-defined C^k norm on TM|_L, and suppose that the second fundamental form with respect to g of L in M is bounded in C^k-1 by R.
Then there exist C and δ_0 depending on R such that if
Ω_1 - Ω_2_C^k + ω_1 - ω_2_C^k < δ < δ_0,
then
(u ↦ *_1(ι_u ω_1|_L)) - (u ↦ *_2(ι_u ω_2|_L))_C^k + (u ↦ ι_u Im Ω_1|_L) - (u ↦ ι_u Im Ω_2|_L)_C^k < Cδ,
where these norms are the induced norms on the bundle ν^*_L ⊗ ⋀^n-1 T^*L.
Since the map (Ω, ω) ↦ g, contraction, and the map g ↦ *_g are smooth bundle maps,
(Ω, ω) ↦ (u ↦ *(ι_u ω|_L) - ι_u Im Ω|_L)
is a smooth bundle map. We may then apply Proposition <ref> using the ambient Levi-Civita connection. Since (Ω_1, ω_1) is parallel and δ_0 is small, this proposition applies provided (Ω_1, ω_1) and (Ω_2, ω_2) lie in a compact subset of SU(M). By trivialising so that (Ω_1, ω_1) is the standard structure, we may assume this locally. It follows just as in Proposition <ref>, by extending to a smooth, hence Lipschitz continuous, map of jet bundles, that (<ref>) holds when the C^k norms are those induced using the ambient connection. But as described after Theorem <ref>, the C^k norms induced using the ambient connection are Lipschitz equivalent, with a constant depending on the C^k-1 norm of the second fundamental form, which is bounded by R, to the C^k norms induced using the restricted connection. (<ref>) with the C^k norms induced using the restricted connection follows.
§.§ The gluing theorem
In this subsection we combine Theorem <ref> with a special case of Proposition <ref> to complete the proof of Theorem A by proving Theorem <ref>, that the approximate special Lagrangian constructed in section <ref> can be perturbed to be special Lagrangian. As a preliminary, we will consider the nonlinearity and show that Condition <ref> (iii) holds with a constant independent of T.
We are interested in the map u ↦ (exp_u^* Ω|_L - dι_u Ω|_L, exp_u^* ω|_L - dι_u ω|_L); bounding this for the full complex form Ω in particular bounds its imaginary part, which is what Condition <ref>(iii) requires. We note that the restriction to L is always controlled by the second fundamental form, and so it suffices to consider the rest of the map, which we formalise as follows.
Let 𝒲 be the subset of (J^1 TM ⊕ J^1 ⋀^n T^*M ⊕ J^1 ⋀^2 T^*M) × (⋀^n T^*M ⊕ ⋀^2 T^*M) consisting of pairs (((u, ∇u), (α_0, ∇α_0), (β_0, ∇β_0)), (α, β)) such that π_⋀^n T^*M ⊕ ⋀^2 T^*M(α, β) = exp(u).
It is easy to see that the intersection of 𝒲 with the set of elements with (u, ∇u) small is a submanifold and a smooth bundle over M. This simply follows because this open subset of 𝒲 is the preimage of the diagonal submanifold of M × M under an obvious submersion. We may now make the following definition. Let G be the bundle map from 𝒲 to ⋀^n T^*_pM ⊕ ⋀^2 T^*_pM given by
(((u, ∇u), (α_0, ∇α_0), (β_0, ∇β_0)), (α, β)) ↦ (exp_u^* α - dι_u α_0, exp_u^* β - dι_u β_0).
Note that exp_u^* depends only on the first jet. Note further that isometrically embedding an incomplete manifold M into M' will enlarge 𝒲, as more points exp_u(p) will be in M'. The extended map G will of course depend on the metric on M' ∖ M. This idea will be needed below to deal with behaviour on the neck of a glued manifold. We may then prove the following. The map G is smooth. Moreover, it depends continuously on the metric on M in the sense that if g_s is a finite-dimensional family of metrics, then G depends continuously on s. Moreover, we have that for every ϵ, δ_0 and p there exists δ depending on p, ϵ and δ_0 such that if π(u) = p, |s - s'| < δ, and |(u, ∇u)| < δ_0, then
|G(s, (((u, ∇u), (α_0, ∇α_0), (β_0, ∇β_0)), (α, β))) - G(s', (((u, ∇u), (α_0, ∇α_0), (β_0, ∇β_0)), (α, β)))| < ϵ |(u, ∇u)| |(α, β)|.
In particular, if there is a metric g_∞ such that g_s → g_∞ as s → ∞, then G converges to the corresponding G_∞ as s → ∞, and we have that for every ϵ, δ_0 and p there exists K such that if π(u) = p, s > K and |(u, ∇u)| < δ_0, (<ref>) holds.
The contraction and exterior derivative terms are obviously smooth, and are independent of the metric. The difficulty is the terms exp_u^* α and exp_u^* β.
We extend the jet (u, ∇u) to a local field, so defining a local diffeomorphism. We begin by showing that the pushforward map (exp_u)_* from T_pM to T_exp_p(u)M has these properties (viewed as a map on J^1(TM) ⊕ TM); then we may dualise. Given ((u, ∇u), v) at p ∈ M, by hypothesis we have a unique geodesic γ with initial velocity u. Construct the Jacobi field X along γ with X_0 = v and (∇_γ̇ X)_0 = ∇_v u. It is straightforward to see that the pushforward (exp_u)_* v is precisely the final value of this Jacobi field.
To prove that pushforward has the desired properties, therefore, we just have to show that this Jacobi field does. But locally, we are just solving a system of ordinary differential equations. The smoothness of the map then reduces to the fact that the solution to a system of ordinary differential equations depends smoothly on the initial conditions, which is well-known; see <cit.>. Similarly, continuity in s for a smooth family of metrics g_s is simply continuity in the finite-dimensional parameter s, and again this is well-known. The analogue of (<ref>) follows by noting that the map giving the derivative of pushforward in u is also continuous in s and that the pushforward is independent of the metric if (u, ∇u) = 0.
Dualising by choosing a smooth local trivialisation of TM immediately gives the corresponding results for the pullback map from the relevant components of 𝒲 to T^*M; the final result is then immediate.
Similarly, in <cit.>, Joyce prepares for analysis of the nonlinear term by passing to work with finite-dimensional spaces. He goes on to give a more direct analysis of this term than we will; we will only show that we can bound things uniformly in T. We now argue directly using this.
Let M_1 and M_2 be a matching pair of asymptotically cylindrical Calabi–Yau manifolds and let L_1 and L_2 be a matching pair of asymptotically cylindrical special Lagrangian submanifolds. Let L^T be the approximate gluing of L_1 and L_2 defined in Definition <ref>. Suppose Hypothesis <ref> applies, so that M^T is Calabi–Yau with a suitable Calabi–Yau structure (Ω^T, ω^T). Then for any k and μ, for T sufficiently large, Condition <ref>(iii) holds with constants C_3 and r independent of T.
We may work locally around each point p of L^T, and look for a pointwise estimate,
|G(((u, ∇u), (Ω_p, ∇Ω_p), (ω_p, ∇ω_p)), (Ω_exp_u(p), ω_exp_u(p))) - G(((v, ∇v), (Ω_p, ∇Ω_p), (ω_p, ∇ω_p)), (Ω_exp_v(p), ω_exp_v(p)))| < C (|(u, ∇u)| + |(v, ∇v)|) |(u - v, ∇u - ∇v)|.
As G has zero linearisation around u = 0, and all of these maps are smooth in p (including that the sections Ω and ω are), some such local constant can be found for each p when we restrict to (u, ∇u) in a ball around zero of radius r_p.
Since M^T is compact, we can find C and r_p independently of p for each T; it remains to prove that they are independent of T, as the difference between this and the remainder term we wish to bound is just the restriction to L^T, which we know is uniformly bounded in T. We appeal to the continuity of G and hence of these constants in the metric. Similarly, by differentiating G we obtain an estimate for the derivatives, analogously to Proposition <ref>, and using (<ref>) these derivatives are also continuous, so the estimates can again be chosen continuously. We will note when we do this that (Ω, ω) also converge. The most obvious space of parameters to consider is just T (large enough). Unfortunately, as T → ∞ the metric on M^T becomes increasingly singular and so there is no g_∞ with which we may compare.
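The identification of the pushforward with a Jacobi field used above is the usual variation-through-geodesics computation; as a sketch in this notation: let c(s) be a curve in M with c(0) = p and ċ(0) = v, let u(s) denote the extended field along c, and set
f(s, t) = exp_c(s)(t u(s)).
Each f(s, ·) is a geodesic, so X_t = ∂f/∂s(0, t) is a Jacobi field along γ(t) = exp_p(tu); symmetry of the connection gives X_0 = v and (∇_γ̇ X)_0 = ∇_v u, while the chain rule gives X_1 = (exp_u)_* v.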
Hence we need to be slightly more careful, and we work locally. We begin by choosing T'_0 > T_0, with T_0 from Hypothesis <ref>, and considering G on M_1^T'_0 ⊂ M^T. M_1^T'_0 is incomplete, so we further consider it as a subspace of M_1^(T'_0+1) as indicated above to make sure 𝒲 is large enough; G is then well-defined on M_1^T'_0 for (u, ∇u) small enough, uniformly in T. Now by Hypothesis <ref>, the Calabi–Yau structure (Ω^T|_M_1^(T'_0+1), ω^T|_M_1^(T'_0+1)) converges to (Ω_1|_M_1^(T'_0+1), ω_1|_M_1^(T'_0+1)) as T → ∞; hence, the metric converges to the metric g_1 induced by (Ω_1, ω_1). C and r_p can now be bounded independently of T on M_1^T'_0, by first finding C and r_p for g_1 and using smoothness in the forms and the continuity part of Proposition <ref>. Similar arguments apply for M_2^T'_0. If T < T'_0 - 1/2, then M_1^T'_0 and M_2^T'_0 intersect and so we have covered all of M^T. Otherwise, it remains to consider the subset (-T - 1 + T'_0, T + 1 - T'_0) × N of the neck of M^T. Since we want to work on a fixed manifold, we shall consider (-1, 1) × N, and include this as the subset (t - 1, t + 1) × N for |t| < T - T'_0. Again, the Calabi–Yau structures (Ω^T, ω^T)|_(t-1, t+1) × N define incomplete metrics, and so to make 𝒲 large enough we extend to the corresponding (t-2, t+2) × N, and then G applied to sufficiently small tangent vectors over (-1, 1) × N depends continuously on the metric on (-2, 2) × N. Now (Ω^T, ω^T)|_(t-2, t+2) × N gives a family of Calabi–Yau structures on (-2, 2) × N. This family can be parametrised by the pair (t, T) in the set {T ≥ T'_0 - 1/2, |t| < T - T'_0}. Clearly, for each t as T approaches infinity, we approach the cylindrical metric g̃ on (-2, 2) × N and the Calabi–Yau structure approaches the cylindrical Calabi–Yau structure. Thus by choosing T'_0 large enough we may consider the g|_(t-2, t+2) × N as perturbations of g̃ and the forms as perturbations of the cylindrical forms. Hence, we can find C and r_p independent of both t and T.
This provides the required constants uniformly in T.
We may now prove Theorem A. Let M_1 and M_2 be a matching pair of asymptotically cylindrical Calabi–Yau manifolds and let L_1 and L_2 be a matching pair of asymptotically cylindrical special Lagrangian submanifolds. Let L^T be the approximate gluing of L_1 and L_2 defined in Definition <ref>. Suppose Hypothesis <ref> applies, so that M^T is Calabi–Yau with a suitable Calabi–Yau structure (Ω^T, ω^T). Then for any k and μ, for T sufficiently large, Condition <ref> holds for the submanifold L^T of M^T, so there is a normal vector field v such that exp_v(L^T) is a special Lagrangian submanifold for (Ω^T, ω^T). v is smooth and decays exponentially with T, in the sense that there is ϵ > 0 and a sequence C_k so that v_C^k ≤ C_k e^-ϵ T.
Proposition <ref> gives that ω^T|_L^T decays exponentially. Lemma <ref> gives precisely Condition <ref>(i), with constant C_1 depending on how small ω^T|_L^T is, provided it is small enough. Hence, Condition <ref>(i) holds for T sufficiently large and C_1 can be taken uniform in T. Proposition <ref> says that Condition <ref>(iii) holds for T sufficiently large with C_3 bounded independently of T. It remains to show that Condition <ref>(ii) and (iv) hold. We shall first show that (ii) holds and that C_2 grows at most polynomially in T. Proposition <ref> will then precisely give (iv). That is, we will have proved that Condition <ref> holds for L^T, so by Proposition <ref> there is such a v perturbing L^T to a special Lagrangian.
The norm of v is controlled again by Proposition <ref>: since C_1, C_2 and C_3 grow at most polynomially, and Proposition <ref> says that Im Ω^T|_L^T + ω^T|_L^T decays exponentially in T, this norm decays exponentially in T. To prove Condition <ref>(ii), we consider the two linear maps
v ↦ (dι_v ω^T|_L^T, dι_v Im Ω^T|_L^T)
and
v ↦ (dι_v ω^T|_L^T, d*ι_v ω^T|_L^T).
Corollary <ref> says that the metric of (Ω^T, ω^T) is close to the glued metric. Applying Theorem <ref> on a lower bound for d + d^* and Condition <ref>(i), we see (<ref>) is an isomorphism with a lower bound of the form
u_C^k+1, μ ≤ CT^r (dι_u ω^T|_L^T_C^k, μ + d*ι_u ω^T|_L^T_C^k, μ).
We shall apply Proposition <ref> to show that the difference of (<ref>) and (<ref>) is exponentially small in T. It will follow by openness of isomorphisms of Banach spaces that (<ref>) also has a lower bound of the form (<ref>), which is Condition <ref>(ii) with C_2 growing at most polynomially in T. (<ref>) and (<ref>) are the composites with the exterior derivative of
v ↦ (ι_v ω^T|_L^T, ι_v Im Ω^T|_L^T)
and
v ↦ (ι_v ω^T|_L^T, *ι_v ω^T|_L^T).
To show that (<ref>) and (<ref>) are exponentially close as maps from C^k+1, μ normal vector fields to C^k, μ forms, therefore, it suffices to show that (<ref>) and (<ref>) are exponentially close as maps from C^k+1, μ normal vector fields to C^k+1, μ forms, and this is an application of Proposition <ref>.
Proposition <ref> is essentially a local result. We pick some fixed A(T) decaying exponentially in T, and have to show around every point p of L^T we can find an open neighbourhood U ∩ L^T of p and an SU(n) structure (Ω_U, ω_U) around U ∩ L^T with respect to which U ∩ L^T is special Lagrangian and so that
Ω_U - Ω^T_C^k+2(U ∩ L^T) + ω_U - ω^T_C^k+2(U ∩ L^T) ≤ A(T).
Proposition <ref> would then imply that, as maps from C^k+2 to C^k+2, (<ref>) and (<ref>) differ by A(T)B for some fixed constant B. Note that B may depend on the second fundamental form of L^T in M^T, but since we know that the second fundamental form is bounded uniformly in T by Corollary <ref>, A(T)B also decays exponentially in T. On the other hand, (<ref>) and (<ref>) are function-linear and so are given by tensors. Thus, this bound is a bound on the difference of the tensors, and so the maps also differ by at most A(T)B as maps from C^k+1, μ normal vector fields to C^k+1, μ forms. As A(T)B is exponentially decaying, we would obtain that (<ref>) and (<ref>) are exponentially close, and so that Condition <ref>(ii) holds with C_2 growing at most polynomially.
We shall now find (Ω_U, ω_U). In order to do this, we will work with three open subsets U_1, U_2 and U_3 forming an open cover of M^T, so that {U_i ∩ L^T} form an open cover of L^T. For each U_i we may find A_i(T) decaying exponentially, and by taking the maximum we obtain A(T) decaying exponentially. Specifically, let U_1 = M_1^(T-2) ⊂ M^T, U_2 = M_2^(T-2) ⊂ M^T, and U_3 be the subset (-3, 3) × N of the neck. We first deal with U_i for i = 1, 2. Here, we have by construction that
γ_T(Ω_1, Ω_2) = Ω_i, γ_T(ω_1, ω_2) = ω_i,
and for T sufficiently large L^T ∩ U_i = L_i ∩ U_i. Consequently, (Ω_i, ω_i) defines an SU(n) structure around U_i ∩ L^T for which U_i ∩ L^T is special Lagrangian. By Hypothesis <ref>, (Ω_i, ω_i) = (γ_T(Ω_1, Ω_2), γ_T(ω_1, ω_2)) is exponentially close to (Ω^T, ω^T) on U_i ∩ L^T, so we are done. It remains to find an SU(n) structure around U_3 ∩ L^T with the desired properties.
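The "openness of isomorphisms" step is the usual Neumann series argument; schematically, in our labelling: if A is the map (<ref>), with inverse bounded in operator norm by CT^r on the appropriate subspaces, and B is the map (<ref>) with A - B bounded by C'e^-ϵT, then for T large enough CT^r C'e^-ϵT ≤ 1/2, so
B^-1 = (I + A^-1(B - A))^-1 A^-1,
with the Neumann series bounding the operator norm of B^-1 by 2CT^r; in particular the polynomially growing lower bound transfers from (<ref>) to (<ref>).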
We know that U_3 ∩ L^T is the intersection with U_3 of the image under an exponentially small normal vector field of K × (T-4, T+4) with respect to the asymptotically cylindrical metrics. Note that where these metrics are not defined or do not agree, the vector field is zero, so this is a well-defined notion. Since the glued metric is exponentially close to the asymptotically cylindrical metrics, this is still the case for the glued metric. That is, there is some normal vector field v_T, so that L^T ∩ U_3 = exp_v_T(K × (T-4, T+4)) ∩ U_3 and v_T decays exponentially in T. Note that K × (T-4, T+4) is special Lagrangian with respect to the cylindrical SU(n) structure (Ω̃, ω̃). By construction, we have that (γ_T(Ω_1, Ω_2), γ_T(ω_1, ω_2)) is exponentially close to (Ω̃, ω̃) on U_3 and by Hypothesis <ref> again we consequently have that (Ω̃, ω̃) is exponentially close to (Ω^T, ω^T) on U_3. We may extend v_T to a tubular neighbourhood V_1 of K × (T-4, T+4), containing L^T ∩ ((T-4, T+4) × N), which exp_v_T maps locally diffeomorphically to a tubular neighbourhood V_2 of L^T ∩ U_3. We may choose this extension, which we shall also call v_T, to also decay exponentially in T. Let F be the inverse of the diffeomorphism exp_v_T: V_1 → V_2. Let our SU(n) structure (Ω_U_3, ω_U_3) around L^T ∩ U_3 be F^*(Ω̃, ω̃) (which we think of as defined on V_1 for simplicity). Since v_T is exponentially small, and Ω̃ and ω̃ are bounded with their derivatives independently of T, (Ω_U_3, ω_U_3) - (Ω̃, ω̃) is exponentially small, and so too is (Ω_U_3, ω_U_3) - (Ω^T, ω^T) on V_1 ∩ V_2. In particular, (<ref>) is exponentially small on L^T ∩ U_3. However, it is easy to see that
F^*((dt + i dθ) ∧ Ω_X, dt ∧ dθ + ω_X)|_exp_v_T(K × (T-4, T+4)) = ((dt + i dθ) ∧ Ω_X, dt ∧ dθ + ω_X)|_K × (T-4, T+4) = (vol, 0),
where vol is the volume form of K × (T-4, T+4). That is, L^T ∩ U_3 is special Lagrangian with respect to (Ω_U_3, ω_U_3). Hence (Ω_U_3, ω_U_3) has the desired properties. This completes the proof of Condition <ref>(ii). v is smooth since the special Lagrangian is smooth and it is a normal vector field between smooth submanifolds, as at the beginning of subsection <ref>. For the exponential decay of all the C^k, μ norms at a fixed rate, we note that how large T needs to be in this argument depends on k and μ. However, for all k and μ we have v_C^k+1, μ decaying exponentially in T for T large enough; moreover, with the same rate. Since v is smooth, it follows that we can extend this to smaller T. This is the gluing theorem for special Lagrangians. It follows from the proof of Proposition <ref> that there is more than one possible special Lagrangian perturbation of L^T, and the family of perturbations corresponds to the harmonic normal vector fields on L^T. In particular, if we vary the harmonic normal vector field we can construct a whole local deformation space to our glued special Lagrangian; the estimate in Proposition <ref> follows from choosing the normal vector field v so that its corresponding one-form is L^2-orthogonal to harmonic forms. We may compare this with the similar analysis in Joyce <cit.> and Pacini <cit.>. Both of these construct a submanifold that is close to special Lagrangian and then argue that it may be deformed. However, in both those cases, careful application of the Lagrangian neighbourhood theorem is used to ensure that the initial submanifold corresponding to L^T is itself Lagrangian.
If we defined L^T with similar care we could presumably obtain that γ_T(ω_1, ω_2)|_L^T = 0, but as we have had to introduce a perturbation to ω to obtain a Calabi–Yau structure, we cannot obtain that ω^T|_L^T = 0. If L^T is Lagrangian, then we may again apply the Lagrangian neighbourhood theorem to infer that the one-forms corresponding to Lagrangian deformations of it under the specialised isomorphism of Condition <ref>(i) are closed. The assumption that they are orthoharmonic thus becomes that they are exact, and this rather simplifies the analysis.
In section <ref>, we will analyse a gluing map to show that it defines a local diffeomorphism on deformations of special Lagrangians. This means we need one single gluing map defined on nearly special Lagrangian submanifolds L, and we need to make a uniform choice of this harmonic normal vector field. Thus we make the following definition. The map SLing is defined from submanifolds of M^T satisfying Condition <ref> to special Lagrangian submanifolds of M^T by, given a submanifold L, finding a normal vector field v to L such that the corresponding one-form ι_v ω^T|_L is L^2-orthogonal to harmonic forms as in Proposition <ref> and letting SLing(L) = exp_v(L). Theorem <ref> can then be interpreted as saying that the domain of SLing contains all approximate gluings of asymptotically cylindrical special Lagrangians for sufficiently large T.
§.§ Generalisations
In this subsection, we explain that the domain of SLing, or equivalently the set of submanifolds L satisfying Condition <ref>, is larger than merely the patchings of section <ref>. We first observe that this is an open subset of submanifolds; then, in Theorem <ref> we prove Theorem A1: any "nearly special Lagrangian", in the sense that ω|_L and Im Ω|_L are exact and sufficiently small (with the required smallness unfortunately depending on L), can also be perturbed to be special Lagrangian. The work of this subsection is not required for the remainder of the paper, and so our treatment is somewhat brief. We begin with the openness result.
Condition <ref> is open in submanifolds with the C^∞ topology, so that the domain of SLing is an open set. This is essentially immediate: the four conditions all depend continuously on the structure, and so the set where they are satisfied is open. We now turn to a more direct generalisation. We note first of all that the proof of Theorem <ref> gives the following statement. Suppose M is a 2n-dimensional Calabi–Yau manifold with Calabi–Yau structure (Ω, ω) and that L is a closed submanifold. Let k be a positive integer, and A be a positive constant. Suppose that around each point p of L there is a local SU(n) structure (Ω'_p, ω'_p) around a neighbourhood of p in L, in the sense of Definition <ref>, with
Ω'_p - Ω_C^k+2 + ω'_p - ω_C^k+2 ≤ A,
and so that a neighbourhood of p in L is special Lagrangian with respect to (Ω'_p, ω'_p). If A is sufficiently small, then for any μ ∈ (0, 1) Condition <ref> (ii) holds for k and μ. We note that (iii) holds automatically with some constants C_3 and r. Hence, if Im Ω|_L_C^k+1 + ω|_L_C^k+1 is also sufficiently small depending on A, the second fundamental form of L in M, the inverse Laplacian bound, r, and C_3, and also [Im Ω|_L] = 0 = [ω|_L], then (iv) holds and SLing(L) exists. When k = 0 and restricting to the Lagrangian case, Joyce <cit.> gave a rather more direct estimate of the constant corresponding to C_3 in (iii).
Fundamentally, that argument shows that for any given r, estimating C_3 rests on estimating the derivatives of the nonlinear map v ↦ (exp_v^* Ω|_L, exp_v^* ω|_L); that is, by the chain rule, linearity of restriction to L, and Theorem <ref>, it rests on the derivatives of v ↦ (exp_v^* Ω, exp_v^* ω) and the second fundamental form. Since Ω and ω are parallel and of fixed size, estimating these derivatives depends entirely on estimating the pullback map J^1(ν_L) → ⋀^n T^*M ⊕ ⋀^2 T^*M|_L, and its derivatives; r controls exactly how large a ball in J^1(ν_L) we admit. Based on the Rauch comparison result (Proposition <ref>) compared with the identification of pushforward with Jacobi fields in Proposition <ref>, it seems plausible that this should only depend on the curvature of M and its derivatives. In any case, it should be possible to choose r and then estimate these derivatives independently of L, by extending to work with the corresponding map J^1(TM) → ⋀^n T^*M ⊕ ⋀^2 T^*M. This estimate on the derivatives of pullback corresponds roughly to (iii) of <cit.>, which is an estimate on certain adapted derivatives of Ω considered as a form on T^*L, under the identification given by the Lagrangian neighbourhood theorem (that is, on the pullback of Ω under an appropriate diffeomorphism). In <cit.>, this yields the required derivatives, because the pushforward just reduces to an algebraic map under this identification.
We shall now explain that the existence of (Ω'_p, ω'_p) follows from smallness of Im Ω|_L, ω|_L and Re Ω|_L - vol_L. Combining this with Proposition <ref> leads to Theorem <ref> (Theorem A1).
We state the following result. Let V be a 2n-dimensional vector space, and L an n-dimensional subspace. For every ϵ > 0 there exists δ > 0 such that if (Ω, ω) is an SU(n) structure on V and |Im Ω|_L| + |ω|_L| + |Re Ω|_L - vol_L| < δ with respect to the metric induced by (Ω, ω), then there is an SU(n) structure (Ω', ω') such that L is special Lagrangian with respect to (Ω', ω') and |Ω' - Ω| + |ω' - ω| < ϵ. A full proof of this is technically slightly involved, but conceptually straightforward. We may assume without loss of generality that (Ω, ω) is the standard SU(n) structure Ω = (e_1 + i e_2) ∧ ⋯ ∧ (e_2n-1 + i e_2n), ω = e_1 ∧ e_2 + ⋯ + e_2n-1 ∧ e_2n. Then setting
ω' = ω - ∑_i < j ω(e_i, e_j) (ξ_i ∧ ξ_j + Jξ_i ∧ Jξ_j),
where the e_i are a basis for L and the ξ_i and -Jξ_i form the dual basis to the e_i and Je_i, yields another hermitian metric with ω'|_L = 0. We then simply have to rescale Ω in absolute value so that it satisfies Definition <ref>(iii) and adjust its phase so that Im Ω'|_L = 0. It is not hard to show that these only yield a small change in the structure.
In fact, the construction in Lemma <ref> defines a bundle map, and so we can simply apply it globally on L to obtain the following. Let M be a 2n-dimensional manifold, and L an n-dimensional submanifold. For every ϵ > 0 there exists δ > 0 such that if (Ω, ω) is a torsion-free SU(n) structure on M and Im Ω|_L_C^k + ω|_L_C^k + Re Ω|_L - vol_L_C^k < δ, then there is an SU(n) structure (Ω', ω') around L such that L is special Lagrangian with respect to (Ω', ω') and Ω' - Ω_C^k + ω' - ω_C^k < ϵ. Consequently, we find the following theorem saying that all "nearly special Lagrangian" submanifolds, in some sense, can be perturbed to special Lagrangian.
Suppose M is a 2n-dimensional Calabi–Yau manifold with Calabi–Yau structure (Ω, ω) and that L is a closed submanifold.
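As a sanity check on the formula for ω' (a sketch; note that we read the sum over i < j, since summing over all ordered pairs would double-count): the dual-basis relations give ξ_i(e_k) = δ_ik and (Jξ_i)(e_k) = 0, so the Jξ_i ∧ Jξ_j term vanishes on L and, for k < l,
ω'(e_k, e_l) = ω(e_k, e_l) - ∑_i < j ω(e_i, e_j)(δ_ik δ_jl - δ_il δ_jk) = ω(e_k, e_l) - ω(e_k, e_l) = 0,
as required.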
Suppose that for some k, Im Ω|_L_C^k+2 + ω|_L_C^k+2 is sufficiently small, in terms of a constant depending on the second fundamental form of L in M, a lower bound on d + d^* on L in the sense of Theorem <ref>, and the constant C_3 of Condition <ref>(iii) (with appropriate regularities). Suppose further that [Im Ω|_L] = 0 and [ω|_L] = 0. Then Condition <ref> holds (for any μ) and we can thus perturb L to a special Lagrangian submanifold.
As mentioned after Proposition <ref>, a bound on C_3 could presumably be obtained by the argument of Joyce <cit.>. Hence Theorem <ref> could be regarded as a rather less precise but somewhat extended version of Joyce's result <cit.>, which says that Lagrangian submanifolds that are sufficiently close to special Lagrangian can be perturbed to be special Lagrangian.
§ THE DERIVATIVE OF SLING
In the remainder of the paper, we shall show Theorem B: that the gluing map of special Lagrangians defined by combining the approximate gluing map of Definition <ref> with the SLing map of Definition <ref> is a local diffeomorphism from the space of matching special Lagrangian deformations of a matching pair to the space of special Lagrangian deformations of its gluing. To do this, we show that these maps are smooth and find their derivatives, and then apply the inverse function theorem. Smoothness is rather complicated. Although we know in the compact case that closed submanifolds of a given manifold M form a Fréchet manifold (see Hamilton <cit.>), we will need to apply the inverse function theorem, and this does not hold for Fréchet manifolds without further assumptions. On the other hand, it is not at all straightforward to define a Banach manifold of submanifolds, as the transition functions often involve composition. Consequently, we will restrict to finite-dimensional spaces of submanifolds. To prove that maps of forms and of normal vector fields (hence of submanifolds) are smooth, we will exhibit them as composites with smooth maps of bundles. For instance, a map F: TM ⊕ TM → TM such that π(F(u, v)) = exp(u) will, on restriction to a fixed family u_s, induce a smooth map of vector fields. We can then find their derivatives by looking at the derivatives of these bundle maps, and give bounds in this way. This analysis is simpler for SLing, though there are other terms involved in that case making things more difficult. Thus, we will deal with that case first in this section before extending the relevant analysis to the approximate gluing map in section <ref>. Finally, in section <ref>, we obtain the estimates on the derivatives which we will use to prove the result. We shall do much of this section and the next in the case where M is Riemannian and L is any submanifold, rather than restricting to the case where M is Calabi–Yau and the submanifold L is close to special Lagrangian. This is chiefly because the derivatives involved turn out, in terms of normal vector fields, to relate to Jacobi fields, and to work instead with one-forms simply adds additional complexity. In this section, we deal with closed manifolds and submanifolds, though our arguments will mostly be local and so will readily generalise.
§.§ General Riemannian manifolds
We begin by working with identifications between various tubular neighbourhoods, which we explain with an example. Suppose that we have a curve of submanifolds L_s, and another curve of submanifolds L'_s (in due course we will take L'_s = SLing(L_s)).
We suppose also that we know about the normal vector field w_s to L_s such that L'_s = exp_w_s(L_s); for instance, if L'_s = SLing(L_s), this vector field is identified by the construction. Since we are interested in the derivative, there are further natural normal vector fields u on L_0 giving the tangent of L_s and v on L'_0 giving the tangent of L'_s. Intuitively, it seems clear that "v = w' + u". This is in fact the case, and we shall prove it carefully. The above explanation also shows why this looks like a general discussion of identifications between normal vector fields on different submanifolds, since we need to know precisely how to identify u, v, and w' so that this makes sense; for the answer, see Proposition <ref>. We also will prove a basic smoothness result, Proposition <ref>. This result is structurally the same as Proposition <ref> that we needed in the previous section; it is used first to show that the maps are smooth as indicated above, and second as in Proposition <ref> to obtain analytic controls on the maps.
We begin by defining some ways of identifying normal vector fields on L_0 with normal vector fields on L'_0. Suppose that L is a submanifold of the Riemannian manifold (M, g) and that L' = exp_v(L) is a small deformation of it. We note the following estimate. Let (M, g) be a Riemannian manifold with sectional curvature bounded below and above by the constants -c and C respectively. Let γ be a geodesic in M with initial velocity v_0. Let X be a Jacobi field along γ with |X_0| = A and |(∇_γ̇ X)_0| = B. Then for |v_0| < π/2√(C) we have estimates
A cos(|v_0|√(C)) - B/√(c) sinh(|v_0|√(c)) ≤ |X_1| ≤ A cosh(|v_0|√(c)) + B/√(c) sinh(|v_0|√(c)).
Moreover, we have the following similar result, using a result in Jost <cit.>. Let (M, g) be a Riemannian manifold with sectional curvature bounded below and above by the constants -C and C, for some fixed C > 0. Let γ be a geodesic in M with initial velocity v_0. Let X be a Jacobi field along γ with |X_0| = A and |(∇_γ̇ X)_0| = B. Let P_t be parallel transport from γ(0) to γ(t) along γ. Then for each t,
|X_t - P_t(X_0 + t(∇_γ̇ X)_0)| ≤ A (cosh(√(C)|v_0|t) - 1) + B (1/√(C)|v_0| sinh(√(C)|v_0|t) - t).
To give our definitions, we note that at a point we may represent L by its tangent space. This is a point of the Grassmannian fibre bundle Gr_k(TM). To determine L', we use the normal vector field v such that L' = exp_v(L). The value of v at a point of L just determines a point of L'; we will need the tangent space to L' also. We will use the first jet of v. This is comparable to the use of the first jet bundle for u in Definition <ref>. As we will mostly be working with n-dimensional special Lagrangian submanifolds of 2n-dimensional Calabi–Yau manifolds, we will restrict to the case of n-dimensional submanifolds. Consequently, we will define our transfer maps on the Whitney sum TM ⊕ J^1(TM) ⊕ Gr_n(TM). Note that the J^1(TM) term will be the 1-jet at a point of a small deformation and so can be expected to be small.
Let M be a 2n-dimensional Riemannian manifold. Suppose that
(u, (v, ∇v), ℓ) ∈ TM ⊕ J^1(TM) ⊕ Gr_n(TM),
with π(u, (v, ∇v), ℓ) = p and (v, ∇v) small. We note that v, being small, defines a geodesic γ in M. Choose a basis {e_1, …, e_n} for the subspace ℓ of T_pM. For each e_i construct the Jacobi field X^(i) along γ with X^(i)_0 = e_i and (∇_γ̇ X^(i))_0 = ∇_e_i v. The final values of the X^(i) form a subset of T_exp_p(v)M.
If this subset is not linearly independent, then it follows by linearity of the Jacobi equation that we could choose e_1 so that the final value of X^(1) is zero; but for v and ∇ v sufficiently small this is impossible by Proposition <ref>. Hence, this set is linearly independent and forms a basis for an n-dimensional subspace ℓ' of T_exp_p(v)M. Now find Jacobi fields X_1 and X_2 along γ with initial conditions (X_1)_0 = 0 = (∇_γ̇ X_2)_0 and (∇_γ̇ X_1)_0 = u = (X_2)_0, and suppose that X_i has final value w_i ∈ T_exp_p(v)M. We then take T_i(u, (v, ∇ v), ℓ) to be the part of w_i orthogonal to the subspace ℓ'.

T_i is only defined on an open subset of TM ⊕ J^1(TM) ⊕ Gr_n(TM), as the geodesic γ must exist. This cannot necessarily be stated as "|(v, ∇ v)| < ϵ" for some uniform ϵ, because M might be incomplete. In this case, we shall sometimes extend to a manifold M' into which we can embed M isometrically, as for G in Definition <ref>. We now consider what the T_i look like for fixed submanifolds.

Let M be a Riemannian manifold, and let L and L' be submanifolds of M. Suppose that there is a normal vector field v on L such that exp_v(L) = L'. Suppose that v_C^1 is sufficiently small. Define two bundle maps ν_L → TM|_L' by writing T_i(u) = T_i(u, (v, ∇ v), T_pL), where u ∈ (ν_L)_p, (v, ∇ v) is some extension of the first jet of v at p to J^1(TM)|_L, and T_i is as in Definition <ref>, for i = 1, 2.

It is easy to see by examining Definition <ref> that these bundle maps are independent of the extension of the jet required. It is also easy to see, using the identification of pushforward in the proof of Proposition <ref>, that the projection in Definition <ref> is just the orthogonal projection onto the normal bundle of L', so that T_1 and T_2 define bundle maps from ν_L to ν_L'. We now show that these bundle maps are bundle isomorphisms.

The bundle maps T_1 and T_2 of Definition <ref> define bundle isomorphisms ν_L → ν_L' if v_C^1 is sufficiently small.

By definition T_i: ν_L → ν_L' are bundle maps over exp_v: L → L'. Both ν_L and ν_L' have rank n, so it suffices to prove that there is no nonzero normal vector u ∈ (ν_L)_p such that T_i(u) = 0. Let γ be the geodesic with initial velocity v_p. We must show that if we have nonzero e ∈ T_pL and u ∈ (ν_L)_p, and Jacobi fields X^0 and X^1 along γ with (X^0)_0 = e and (∇_γ̇ X^0)_0 = ∇_e v, and either (X^1)_0 = u and (∇_γ̇ X^1)_0 = 0, or vice versa, then the final values of X^0 and X^1 differ. To do this, we will apply Proposition <ref>. Let P_t be the parallel transport along γ, and let

Y^0 = P_1(e + ∇_e v),   Y^1 = P_1(u).

By Proposition <ref>, we have

|X^0_1 - Y^0| ≤ |e|(cosh(√(C)|v_p|) - 1) + |e||(∇ v)_p|(sinh(√(C)|v_p|)/(√(C)|v_p|) - 1),
|X^1_1 - Y^1| ≤ |u| max{cosh(√(C)|v_p|) - 1, sinh(√(C)|v_p|)/(√(C)|v_p|) - 1}.

This tells us that by choosing v_C^1 small enough, we can make X^i_1 and Y^i as close as we like, in terms of |e| and |u|. It is easy to see that the inner product of Y^0 and Y^1 is small in terms of |e| and |u|, since e and u are orthogonal, and also that Y^0 and Y^1 are similar in size to |e| and |u|. It follows that for v_C^1 small enough,

|⟨X^0_1, X^1_1⟩| < |X^0_1||X^1_1|,

and thus X^0_1 and X^1_1 cannot be equal for v_C^1 sufficiently small. Hence, the T_i define isomorphisms from normal vector fields on L to normal vector fields on L'.

Note also that the construction of ℓ' in Definition <ref> precisely gives a further isomorphism T_pM ≅ T_exp_p(v)M, with ℓ' being the image of ℓ.
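As a sanity check on these constructions (our remark, not part of the source argument), consider the flat model M = ℝ^2n. There Jacobi fields are affine in t: identifying all tangent spaces canonically,

X_t = X_0 + t(∇_γ̇ X)_0,

so the right hand side of the comparison estimate in the second proposition above vanishes in the limit C → 0, as it should. Moreover, the Jacobi fields X_1 and X_2 of Definition <ref> then have the same final value, namely the translate of u, so that in the flat model

T_1(u, (v, ∇ v), ℓ) = T_2(u, (v, ∇ v), ℓ) = π_(ℓ')^⊥(u),

where π_(ℓ')^⊥ denotes orthogonal projection: both transfer maps reduce to translating u to exp_p(v) and projecting, and the difference between T_1 and T_2 is entirely a curvature effect.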
Passing to a map of tangent vectors as in Definition <ref>, the isomorphism T_pM ≅ T_exp_p(v)M constructed above corresponds to the pushforward of tangent vectors by the diffeomorphism exp_v from L to L'. We have shown as part of Proposition <ref> that it is smooth and depends continuously on the metric as a map of triples (u, (v, ∇ v), ℓ); in fact, of course, this map is essentially independent of the subspace ℓ.

We now pass to a finite-dimensional family of submanifolds L_s. We assume we choose the parametrisation diffeomorphically, so that smooth dependence on s and on L_s are equivalent. We will show that the transfer maps T_i define smooth maps from normal vector fields on L_0 to normal vector fields on L_s, for s ∈ 𝒮. As with the definition, we begin with a pointwise result.

Let M be a Riemannian manifold. Consider the maps

T_i: TM ⊕ J^1(TM) ⊕ Gr_n(TM) → TM

defined as in Definition <ref>. These maps are smooth. They depend continuously on the metric on M in the sense described in Proposition <ref>, and (<ref>) becomes

|T_i(s, u, (v, ∇ v), ℓ) - T_i(s', u, (v, ∇ v), ℓ)| < ϵ|(u, (v, ∇ v))|.

Moreover, if we extend these maps by the identity to

T_i: TM ⊕ J^1(TM) ⊕ Gr_n(TM) → TM ⊕ J^1(TM) ⊕ Gr_n(TM),

they are injective immersions. That is, their images are submanifolds, and the inverse maps defined on these images are smooth and depend continuously on the metric in the same sense.

We omit the proof, which is essentially identical to that of Proposition <ref>: note that since the fibres of the Grassmannian are compact, we do not need to estimate the size of ℓ in (<ref>). We may now prove

Let L_s and L'_s' be finite-dimensional smooth compact families of submanifolds of M, with L_0 and L'_0 close enough that there exists a normal vector field v(0, 0) to L_0 whose image under the Riemannian exponential map is L'_0. Then for s and s' small enough and v(0, 0) small enough, there exists a normal vector field to L_s whose image under the Riemannian exponential map is L'_s'. This normal vector field is the image under T_2 of a normal vector field v(s, s') on L_0, and the map (s, s') ↦ v(s, s') is smooth.

The existence of the normal vector fields is immediate, so we have to prove that v(s, s') depends smoothly on (s, s'). We know that exp is a smooth map TM → M. Write u_s for the normal vector field to L_0 whose image is L_s, and consider the map

ν_L × 𝒮 → M × 𝒮,   (v, s) ↦ (exp(T_2(v, j^1(u_s)_π(v), T_π(v)L_0)), s),

where j^1 is the jet prolongation map of Proposition <ref>, and we choose some smooth extension of this to J^1(TM)|_L as before. Using Proposition <ref>, we have that (<ref>) is smooth. It is also easy to see that its derivative at (0_p, s) is an isomorphism if and only if the component T_0_pν_L → T_exp_p(u_s)M is an isomorphism. We show that this is the case if s is sufficiently small, by finding curves for each tangent vector and considering their images under (<ref>). T_0_pν_L can be decomposed as the direct sum T_0_p((ν_L)_p) ⊕ T_pL; clearly, T_0_p((ν_L)_p) ≅ (ν_L)_p. For v' ∈ T_pL, we can choose some curve γ in L through p with tangent v' and then consider the corresponding curve 0_γ in ν_L. The image of this curve under (<ref>) is precisely exp_u_s∘γ, so for u_s sufficiently small, T_pL ⊂ T_0_pν_L is mapped isomorphically to T_exp_p(u_s)L_s ⊂ T_exp_p(u_s)M. On the other hand, for v' ∈ T_0_p((ν_L)_p), we choose the curve sv'.
The image of this curve is exp(T_2(sv')); since the derivative of exp at zero is the identity, the tangent at zero is just T_2(v'), and hence T_0_p((ν_L)_p) ⊂ T_0_pν_L is mapped isomorphically to (ν_L_s)_exp_p(u_s) ⊂ T_exp_p(u_s)M, by Proposition <ref>. Hence, for s sufficiently small, the derivative T_0_pν_L → T_exp_p(u_s)M is an isomorphism, and so (<ref>) is a local diffeomorphism. Since L_0 is compact, we can find a tubular neighbourhood T of L_0 on which the inverse map

T × 𝒮 → ν_L_0 × 𝒮

is well-defined. If v(0, 0) is sufficiently small, then L'_0 lies in T, and hence so does L'_s' for s' sufficiently small. Clearly, L'_s' can equivalently be viewed as a smooth family of inclusions of L into M. The map desired is given by the composition of these inclusions with (<ref>); as a composition with a smooth map, this is smooth.

The same holds if L_s and L'_s' are asymptotically cylindrical submanifolds with decay rate uniformly bounded from below. The argument to construct a tubular neighbourhood on which (<ref>) is smooth goes through just as before, noting that the metric converges to a limit, so (<ref>) does too, essentially by the same argument as in Proposition <ref>. We have to check that composition behaves well in this case: but this works exactly as in the compact case.

We now explain how the maps T_1 and T_2 yield the identifications required at the beginning of this subsection. We begin with T_1.

Let M, L, L', v and T_1 be as in Definition <ref>. Let v_s be a smooth curve of normal vector fields on L with v_0 = v. Then exp_v_s(L) defines a smooth curve of submanifolds, and when s = 0 it passes through exp_v(L) = L'. Therefore, there exists a smooth curve w_s of normal vector fields to L' such that

exp_v_s(L) = exp_w_s(L').

We have w' = T_1 v', where w' and v' are the derivatives at 0 of these curves of vector fields.

To see that w_s is a smooth curve, we apply Proposition <ref> with L_s = L' for all s, and L'_s' = exp_v_s'(L). Then w_s is precisely the normal vector field to L' given by Proposition <ref> and therefore depends smoothly on s. We now need to show that the derivative of this curve at zero is given by T_1(v'). We begin by noting that we can find sections x_s of TM|_L' such that for all p ∈ L we have exp_v_s(p) = exp_x_s(exp_v_0(p)). We can evaluate x' as follows. By construction, and the fact that the derivative of exp at 0 is the identity, x'_exp_v_0(p) is the derivative of the curve exp_p(v_s) at s = 0. For each s, exp_p(v_s) is given by the final position of a geodesic; since v_s(p) is smooth in s, we have a smooth variation through geodesics. x' is then the final value of the corresponding Jacobi field X along the geodesic γ corresponding to v_0(p). Since all the geodesics start at the same point p, we certainly have X_0 = 0. For (∇_γ̇ X)_0, we note that this as usual is equal to the derivative in s of the initial velocities, hence v'. That is, x' is given by the final value of the Jacobi field along γ with initial conditions X_0 = 0, (∇_γ̇ X)_0 = v'. Note that this is just evaluating the derivative of the Riemannian exponential map: for an alternative proof, see <cit.>.

We now have to move from x' to w'. To determine w_s from x_s, we compose exp_x_s with the projection π of a tubular neighbourhood map around L', invert the resulting diffeomorphism of L', and then compose this inverted diffeomorphism with exp_x_s. Evaluating the derivative of this is fairly straightforward. The derivative of the curve π∘exp_x_s of diffeomorphisms of L' is just the tangential part of x' (see <cit.>).
On inversion around the identity π∘exp_x_0, we get the negative of this tangential part. Recomposing with exp_x_s gives addition of the derivatives, and so w' is the normal part of x'; that is, w' = T_1 v'.

T_2 corresponds to the other natural way of constructing a curve of submanifolds through L', though not quite in the most obvious fashion. We introduce the following notation.

Let M be a Riemannian manifold and let L be a submanifold. Let u be a normal vector field on L, so that exp_su(L) forms a curve of submanifolds for u sufficiently small. Let v be a normal vector to L, with π(v) = p ∈ L. Let γ be the geodesic with initial velocity u_p, so that γ(s) ∈ exp_su(L) for each s. Let J be the Jacobi field along γ with initial conditions J_0 = v and (∇_γ̇ J)_0 = 0. Then J_s ∈ TM|_exp_su(L) for each s, so we can consider its normal part. This gives another curve of tangent vectors N_s with π(N_s) = γ(s). We may consider the derivative N' of N_s at s = 0, and we define

𝒟_u v = N'.

Concretely, if {e_i(s)} is a smooth family of orthonormal bases for T_γ(s)exp_su(L), then

𝒟_u v = -∑_i (g'(e_i, v)e_i + g(e'_i, v)e_i).

We will not use this form explicitly, but will rely on the fact that 𝒟_u v is small if v is. (In the flat model, with L affine and u constant, the tangent spaces T_γ(s)exp_su(L) do not vary and J_s ≡ v, so 𝒟_u v = 0: the operator measures the rotation of the tangent spaces along the deformation.)

Now 𝒟_u v is not necessarily a normal vector field on L, but we can extend the map T_1 of Definition <ref> to sections of TM|_L. We can thus make

Let M be a Riemannian manifold and let L be a submanifold. Let u and v be normal vector fields on L, so that 𝒟_u v is a section of TM|_L. Then let T_3 u be T_1(𝒟_u v), where we extend the map T_1 of Definition <ref> constructing a normal vector field on exp_v(L).

Note that T_1 u and T_2 u are linear in u, but T_3 u is not. Moreover, we can also write T_3 u as a map induced from a map on a bundle as in Definition <ref>, but we would have to define it on J^1(TM) ⊕ J^1(TM) ⊕ Gr_n(TM), as the condition in Definition <ref> that we take the normal part of J_s on exp_su(L) requires the first jet of u to determine the tangent space T_exp_su(p)exp_su(L). We will not give details of these arguments. We can now explain the relevance of T_2.

Let M, L, L', v and T_2 be as in Definition <ref>. Let u_s be a smooth curve of normal vector fields on L with u_0 = 0 and derivative u'; let T_3 be as in Definition <ref>. Let T_2, s be the transfer maps from L to exp_u_s(L) given by taking the appropriate map T_2 from Definition <ref>. Then exp_T_2, s(v)exp_u_s(L) defines a smooth curve of submanifolds, and when s = 0 it passes through exp_v(L) = L'. Therefore, there exists a smooth curve w_s of normal vector fields to L' such that

exp_T_2, s(v)exp_u_s(L) = exp_w_s(L').

We have w' = T_2 u' + T_3 u'.

To see that w_s is a smooth curve, we show that exp_T_2, s(v)exp_u_s(L) is a smooth curve of submanifolds; then we may apply the argument at the beginning of Proposition <ref>. We have to show that if ι is the inclusion of L then exp_T_2, s(v)exp_u_s ι is a smooth curve of maps. But this is the image of the smooth curve j^1(u_s) of first jets under the smooth map

J^1(ν_L) → M,   (u, ∇ u) ↦ exp(T_2(v_π(u), (u, ∇ u), T_π(u)L)),

where T_2 is as in Definition <ref>, and hence is indeed smooth.

To find w', as in Proposition <ref>, we first construct for each p ∈ L a family of tangent vectors x_s at exp_p(v) such that

exp_T_2, s(v)exp_u_s(p) = exp_x_s(exp_v(p)).

Again as in Proposition <ref>, we identify exp_T_2, s(v)exp_u_s(p) as the curve of final positions of a variation through geodesics, so that x'(p) is the final value of the Jacobi field X corresponding to this variation. These geodesics have initial positions exp_u_s(p) and initial velocities T_2, s(v_p).
Differentiating in s, we see that X_0 = u'_p. Now x_s at exp_p(v_p) is given by composing exp^-1_exp_p(v) with the left hand side of (<ref>). Hence, it depends smoothly on the curve of 1-jets of u_s at p. In particular, x' at exp_p(v_p) depends smoothly on the 1-jets of u_0 = 0 and of u'. These 1-jets remain the same if we replace (u_s) by (su'), and so to find x' we may suppose that u_s = su'. With this curve, we are in the situation of Definition <ref>. We let γ(s) be the geodesic with initial velocity u'_p, and let J_s be the Jacobi field along γ with initial conditions J_0 = v_p and (∇_γ̇ J)_0 = 0. Then T_2, s(v_p) is just the normal part of J_s. Hence, the derivative in s of T_2, s(v_p) is just 𝒟_u' v; equivalently, it is (∇_γ̇ X)_0. That w' is given by taking the normal part of x' follows exactly as in Proposition <ref>. Hence w' is given by taking the normal part of the final value of Jacobi fields, and so is a linear combination of T_1 and T_2: examining the initial conditions precisely gives w' = T_2 u' + T_3 u'.

Putting the ideas of Propositions <ref> and <ref> together, we obtain

Let M, L, L' and v be as in Definition <ref>. Suppose that u_s is a curve of normal vector fields to L_0 and w_s is a curve of normal vector fields to L'. For each s, we can find a normal vector field v_s to exp_u_s(L_0) so that

exp_v_s exp_u_s(L_0) = exp_w_s exp_v_0(L_0).

The transfer map T_2 of Definition <ref> defines an isomorphism between the normal bundles to L_0 and exp_u_s(L_0). Hence, we may identify v_s with a normal vector field on L_0, and so (v_s) is a smooth curve of such normal vector fields. We have

w' = T_1 v' + T_2 u' + T_3 u',

and so the derivative of the map (u_s, w_s) ↦ v_s is given by

v' = T_1^-1(T_2 u' + T_3 u' - w').

§.§ Nearly special Lagrangian submanifolds

We now pass to the case where our submanifolds are close to special Lagrangian. By Lemma <ref>, on such submanifolds normal vector fields can be identified with one-forms. This means that we can write the maps T_1 and T_2 of Definition <ref>, and hence of Definition <ref>, and the map T_3 of Definition <ref>, in terms of one-forms: we carry this out carefully. To do this, we first need to introduce the requirement that the submanifold be "nearly special Lagrangian" to the bundle TM ⊕ J^1(TM) ⊕ Gr_n(TM) used in Definition <ref>, and to the corresponding "target bundle" in which the image of the extended map (<ref>) lies. We also have to restrict to bundles containing normal vectors to deal with inverses, and make a corresponding definition for one-forms. We thus make the following

Let M be a Calabi–Yau manifold. Let 𝒢 be the subbundle of Gr_n(TM) consisting of those subspaces ℓ for which |ω|_ℓ| < 1. Let 𝒢' be the subbundle of J^1(TM) ⊕ 𝒢 such that also the subspace ℓ' constructed in Definition <ref> satisfies |ω|_ℓ'| < 1. Let N be the subbundle of TM ⊕ J^1(TM) ⊕ Gr_n(TM) (over M) consisting of those (u, (v, ∇ v), ℓ) such that u is normal to the subspace ℓ. Let N' be the subspace

{(u, (v, ∇ v), ℓ) ∈ TM × (J^1(TM) ⊕ Gr_n(TM)): π_TM(u) = exp_p(v), where p = π_J^1(TM) ⊕ Gr_n(TM)((v, ∇ v), ℓ), and u is normal to ℓ'},

where ℓ' is the subspace constructed in Definition <ref>. Let N^* be the subbundle of T^*M ⊕ J^1(TM) ⊕ Gr_n(TM) consisting of (α, (v, ∇ v), ℓ) with α^♯ in ℓ.

𝒢 and 𝒢' are open subbundles. It is easy to see, similarly to Lemma <ref>, that N, N' and N^* are smooth manifolds and the projection maps define bundle structures. The intersections of N, N' and N^* with 𝒢' in their Gr_n(TM) components are then also open subbundles.
Note that N and N' may be defined with merely a Riemannian metric, and this extension will be used in Proposition <ref> below. We may make

Let I: N ∩ 𝒢' → N^* ∩ 𝒢' be given by taking (u, (v, ∇ v), ℓ), contracting u with ω, and then taking the orthogonal projection to the tangential part. Let I': N' ∩ 𝒢' → N^* ∩ 𝒢' be given similarly by contracting u with ω at exp_v(p), pulling back the resulting covector using (v, ∇ v) by applying the dual of the map (<ref>), and then taking the orthogonal projection to the tangential part.

We have, reasonably simply (using the argument in Proposition <ref> for the pullback),

I and I' are smooth maps, depending continuously on the metric as in Proposition <ref>, and locally around any (u, 0, 0, ℓ) they are diffeomorphisms.

This enables us to make our definitions of what T_1 and T_2 look like in terms of one-forms.

Let M be a Calabi–Yau manifold. First consider N^* ∩ 𝒢'. Define maps τ_1 and τ_2 on this space by τ_i = I'∘T_i∘I^-1, with T_i the extended map from Proposition <ref>. These maps are also local diffeomorphisms around any (α, 0, 0, ℓ). Similarly define a map τ_3 on a suitable bundle by composing the pointwise version of the map T_3 of Definition <ref> with suitably extended versions of I' and I.

Now let L be a submanifold of M such that ω|_L_C^0 < 1. Given a sufficiently small normal vector field v on L, we can construct L' = exp_v(L) and again have ω|_L'_C^0 < 1. For i = 1, 2, we can then define a map τ_i from one-forms on L to one-forms on L' by taking α to τ_i(α, (v, ∇ v), T_pL), where ∇ v is extended somehow as in Definition <ref>. Similarly, we can define a map τ_3 by taking some extension of α; the map will not depend on the extension.

Note from the discussion after Definition <ref> that we need the first jet of α to define our pointwise τ_3. This is thus more complicated, and throughout this discussion we will omit the details. It is then immediate from the definitions that these maps of one-forms correspond to the maps T_i of normal vector fields, since I and I' are the pointwise versions of v ↦ ι_v ω|_L and v ↦ ι_v ω|_L'. Specifically, we have

Let M, L and L' be as in Definition <ref>, and i ∈ {1, 2}. Then the following diagram commutes:

normal vector fields on L   ⟷ (v ↦ ι_v ω|_L)   one-forms on L
   T_i (Def. <ref>) ↕                      τ_i (Def. <ref>) ↕
normal vector fields on L'  ⟷ (v ↦ ι_v ω|_L')  one-forms on L = L',

where the horizontal maps are determined by Lemma <ref>. For example, given a one-form α on L, we have the one-form τ_i α on L'. We find another one-form on L' by taking a normal vector field u on L such that ι_u ω|_L = α, applying T_i to u to get a normal vector field on L', and then finding the corresponding one-form ι_T_i u ω|_L'; and these two one-forms are equal. Similarly, given a one-form on L = L', the one-forms on L constructed by τ_i^-1 and T_i^-1 are the same. Finally, the corresponding diagram with T_3 and τ_3 also commutes.

Furthermore, we have the following regularity result for the τ_i of Definition <ref>, corresponding to Proposition <ref> and following from it together with Proposition <ref>.

For i = 1, 2, 3, the τ_i are smooth maps, and depend continuously on the Calabi–Yau structure in the same sense as the T_i depend continuously on the metric (described in Proposition <ref>); in particular, we have the estimate corresponding to (<ref>).
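The following model computation (ours, not part of the source argument) may help to fix ideas. In the flat Calabi–Yau ℂ^n, with complex structure J, metric g and Kähler form ω(u, e) = g(Ju, e), let ℓ be a Lagrangian n-plane and u a vector normal to ℓ. Then for e ∈ ℓ,

I(u, 0, 0, ℓ)(e) = ω(u, e) = g(Ju, e),

and Ju ∈ ℓ, so nothing is lost in the tangential projection: I is the composition of J with the metric isomorphism, and identifies the normal space of ℓ isometrically with ℓ^*. The condition |ω|_ℓ| < 1 defining 𝒢 ensures that this identification survives, with uniform bounds, for subspaces that are only nearly Lagrangian.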
§.§ The Laplacian on normal vector fields

Finally, SLing relies on the identification of the harmonic part of a given normal vector field (that is, the normal vector field corresponding to the harmonic part of the corresponding one-form). We thus need to check this is also smooth and find its derivative. In this subsection, we use Lemma <ref>, similarly to subsection <ref>, to define a Laplacian on the normal vector fields on a submanifold that is close to special Lagrangian. This consequently defines a notion of the harmonic part of a normal vector field. We then show that, with an appropriate identification of normal vector fields taken from section <ref>, this harmonic part depends smoothly on the submanifold concerned, and in Proposition <ref> we identify the derivative of the map v ↦ ℋv in terms of the derivative of the Laplacian induced by the identifications. We make

Suppose that L ⊂ M is a submanifold and (Ω, ω) is an SU(n) structure around it in the sense of Definition <ref>. This induces a Riemannian metric on L and consequently a Laplace–Beltrami operator on the differential forms on L. Suppose that Lemma <ref> holds for (Ω, ω), that is,

v ↦ ι_v ω|_L

is an isomorphism between normal vectors and one-forms. Then (<ref>) induces linear differential operators Δ and d+d^* on normal vector fields.

Since by the estimates of (<ref>) we have an isomorphism between differential one-forms and normal vector fields of given regularity, all the properties of d+d^* and Δ carry over. Hence we have harmonic normal vector fields, whose corresponding one-forms are harmonic, and orthoharmonic normal vector fields, whose corresponding one-forms are L^2-orthogonal to harmonic forms (or, equivalently, lie in the image of Δ). Note that as (<ref>) need not be an isometry if L is not Lagrangian, harmonic normal vector fields and orthoharmonic normal vector fields need not be L^2-orthogonal.

We now note that ℋ and a left inverse Δ^-1, considered as maps of forms, depend smoothly on the metric. This seems to be well-known, and can be shown by first fixing a cohomology class and showing that its harmonic representative depends smoothly on the metric, and then simply applying inner products: concretely, the g_s-harmonic representative of a fixed class [α_0] may be written α_0 + dβ_s, where β_s solves the elliptic equation d^*_g_s(α_0 + dβ_s) = 0 and so depends smoothly on s. As in, for instance, Proposition <ref>, we shall choose a finite-dimensional family of metrics parametrised by 𝒮.

Let M be a compact manifold and g_s a finite-dimensional smooth family of smooth metrics on it parametrised by s ∈ 𝒮 ⊂ ℝ^m, with a base point 0 ∈ 𝒮. Then the map

ℋ: 𝒮 × Ω^p(M) → Ω^p(M)

is smooth, perhaps after shrinking the neighbourhood 𝒮 of g_0.

Equally, we can find a left inverse to the Laplacian smoothly in s. These metric smoothness results pass immediately to smoothness of the corresponding operators on normal vector fields on a compact nearly special Lagrangian submanifold L of a Calabi–Yau manifold M. We begin by defining these operators.

Let M be a Calabi–Yau manifold and let L_s be a finite-dimensional smooth family of smooth submanifolds parametrised by s ∈ 𝒮 ⊂ ℝ^m, with a base point 0 ∈ 𝒮. Suppose that for all s ∈ 𝒮, v ↦ ι_v ω|_L_s is an isomorphism of bundles. Let T_s be the family of transfer maps from L_0 to L_s, given by T_2 from Definition <ref>, so that T_s = T_2, s in Proposition <ref>. Define maps

ℋ: 𝒮 × {normal vector fields on L_0} → {normal vector fields on L_0}

and

Δ^-1: 𝒮 × {normal vector fields on L_0} → {orthoharmonic normal vector fields on L_0},

as follows. ℋ(s, v) is given by taking T_s(v), a normal vector field on L_s, taking the harmonic part u using the correspondence between 1-forms and normal vector fields, and then setting ℋ(s, v) = T_s^-1(u).
Δ^-1(s, v) is given by taking the normal vector field T_s(v) on L_s and then finding an orthoharmonic normal vector field u on L_0 such that T_s^-1 Δ T_s u = v - ℋ(s, v).

Let M, {L_s}, ℋ and Δ^-1 be as in Definition <ref>. Then the maps ℋ and Δ^-1 are well-defined and smooth, perhaps after reducing the neighbourhood 𝒮 of L_0.

Since the transfer maps T_s and the identifications between normal vector fields and one-forms are smooth, this follows immediately from composition. Indeed, we can explicitly evaluate the derivative of ℋ:

Suppose that M is a Calabi–Yau manifold and L_s is a finite-dimensional smooth family of smooth submanifolds as in Definition <ref>. Let T_s and ℋ be as in Definition <ref>, and write ℋ_s v = ℋ(s, v). Let v_s be a smooth curve of normal vector fields to L_0. Then ℋ_s v_s is again a smooth curve of normal vector fields to L_0, and its derivative satisfies

ℋ_0(d/ds|_s=0 ℋ_s v_s) = ℋ_0(v' - Δ' u),
Δ_0(d/ds|_s=0 ℋ_s v_s) = -Δ' ℋ_0 v_0,

where u satisfies Δ_0 u = v_0 - ℋ_0 v_0, and Δ' is the derivative in s of the induced operator Δ_s (where Δ_s is again induced by the transfer operator T_s).

That ℋ_s(v_s) is a smooth curve follows immediately from Proposition <ref>. To identify the derivative, we observe that we must have

Δ_s ℋ_s v_s ≡ 0,   v_s - ℋ_s v_s = Δ_s u_s,

for some smooth curve of normal vector fields u_s. Note that this curve exists and is smooth, again by Proposition <ref> (and u_s is chosen to be orthoharmonic on L_0). Differentiating (<ref>) in s gives

0 = Δ' ℋ_0 v_0 + Δ_0(d/ds|_s=0 ℋ_s v_s),   v' - d/ds|_s=0 ℋ_s v_s = Δ' u_0 + Δ_0 u';

applying ℋ_0 to the second equation, and using ℋ_0 Δ_0 = 0, gives the stated result.

§.§ The derivative of SLing

We may now prove that, on restriction to a finite-dimensional manifold 𝒰 of perturbations L_s of a nearly special Lagrangian submanifold L_0, SLing is smooth, and give its derivative, under a hypothesis that the transfer maps of subsection <ref> behave well with respect to the harmonic normal vector fields; this is essentially a condition that the base point L_0 is close enough to SLing(L_0). Recall that SLing(L_s) is defined as the special Lagrangian L' given by exp_v(L_s) where ℋ(v) = 0, in the sense of the discussion after Definition <ref>. Let 𝒰 be an open set in a finite-dimensional space of perturbations of the submanifold L_0 and 𝒰' an open set of the special Lagrangian deformations of L'_0 = SLing(L_0). We write

F: 𝒰 × 𝒰' → 𝒰 × H^1(L),   (L_s, L'_s') ↦ (L_s, α),

where, if v is the normal vector field on L_s giving L'_s', α is the cohomology class of the harmonic part of ι_v ω|_L_s. F is a well-defined map if 𝒰 and 𝒰' are small enough. We have

F(L_s, SLing(L_s)) = (L_s, 0)

for all L_s ∈ 𝒰. We shall show that F is smooth and establish its derivative D_(L_0, SLing(L_0))F at (L_0, SLing(L_0)), give a condition under which D_(L_0, SLing(L_0))F is an isomorphism, and consequently invert it under this condition to find D_L_0 SLing.

Let M be a Calabi–Yau manifold, and let L_0 be a submanifold of M in the domain of the SLing map of Definition <ref>. Let 𝒰, L'_0 and 𝒰' be as described above. Let v_0 be the orthoharmonic normal vector field on L_0 such that exp_v_0(L_0) = SLing(L_0). Then the map F of (<ref>) is smooth. We have

D_(L_0, L'_0)F: T_L_0 𝒰 ⊕ T_L'_0 𝒰' → T_L_0 𝒰 ⊕ H^1(L),   (v, u) ↦ (v, [ι_{T_1^-1(T_2 v + T_3 v - u) - Δ'_v x} ω|_L_0]),

where T_1 and T_2 are as in Definition <ref> (for the transfer from L_0 to L'_0 via v_0), T_3 is similarly as in Definition <ref>, Δ'_v is the derivative defined in Proposition <ref>, and x is (any) normal vector field to L_0 with Δ x = v_0.

We consider F as a composition of two maps.
Firstly, we find a normal vector field w(s, s') on L_0 such that T_2, s(w(s, s')) is the normal vector field on L_s giving L'_s', where T_2, s is taken between L_0 and L_s. Then we take the cohomology class of the harmonic part of the one-form ι_{T_2, s(w(s, s'))} ω|_L_s with respect to the metric induced on L_s. By Proposition <ref>, w(s, s') depends smoothly on (s, s'). Analogously to Proposition <ref> (composing a further map induced from I' of Definition <ref> to translate normal vector fields into one-forms), we know that the harmonic part map ℋ is smooth in precisely the sense required. It follows that F is smooth.

We now find the derivative D_(L_0, L'_0)F. We first recall from Proposition <ref> that the derivative of the map (s, s') ↦ w(s, s') is given by

(v, u) ↦ T_1^-1(T_2 v + T_3 v - u),

where v is a normal vector field to L_0 and u is a normal vector field to L'_0. Given curves L_s and L'_s with normal vector fields v and u, let w' = T_1^-1(T_2 v + T_3 v - u) be the derivative of the corresponding curve w(s) of normal vector fields on L_0. We computed in Proposition <ref> that the derivative of the curve s ↦ ℋ_s(w(s)) has harmonic part ℋ_0(w' - Δ' x), where Δ_0 x gives the orthoharmonic part of w(0), and Laplacian -Δ' ℋ_0 w(0). In this case, w(0) = v_0, and consequently it is orthoharmonic. Consequently the derivative of the curve ℋ_s(w(s)) is harmonic, and so it is

ℋ_0(w' - Δ' x) = ℋ_0(T_1^-1(T_2 v + T_3 v - u) - Δ' x),

where Δ x = v_0.

From Proposition <ref> we obtain

Let M, L_0, 𝒰, L'_0, 𝒰', and v_0 be as in Proposition <ref>. Suppose that for every nonzero harmonic normal vector field u on L'_0, ℋ T_1^-1(u) is a nonzero harmonic normal vector field on L_0. Then SLing, restricted to a sufficiently small open subset of 𝒰, is a smooth map. Its derivative is given by mapping a normal vector field v to the unique harmonic normal vector field u to L'_0 such that

ℋ T_1^-1 u = ℋ(T_1^-1(T_2 v + T_3 v) - Δ'_v x),

where T_1, T_2 and T_3 are as in Proposition <ref>, Δ' is as in Proposition <ref> and Δ x = v_0. Note that if L_0 is special Lagrangian, so that L'_0 = L_0 and v_0 = 0 (and we may take x = 0), then T_1 and T_2 are the identity and T_3 is zero, so (<ref>) reduces to u = ℋv.

We apply the inverse function theorem to the map F of (<ref>) at (L_0, L'_0). We first show that D_(L_0, L'_0)F is an isomorphism. We know that D_(L_0, L'_0)F is a linear map between spaces of the same finite dimension, so it suffices to prove that it is injective. That is, we suppose that v ∈ T_L_0 𝒰 and u ∈ T_L'_0 𝒰' satisfy D_(L_0, L'_0)F(v, u) = 0. Applying (<ref>), v = 0 automatically and

0 = [ι_{T_1^-1(T_2 v + T_3 v - u) - Δ'_v x} ω|_L_0] = [ι_{-T_1^-1 u} ω|_L_0].

The hypothesis then implies that u must be zero, so that (v, u) = 0 and D_(L_0, L'_0)F is injective. By the inverse function theorem, therefore, when we restrict to a small neighbourhood of (L_0, L'_0) in 𝒰 × 𝒰', F becomes a diffeomorphism. SLing(L_s) is precisely given by the second component of F^-1(L_s, 0). Consequently, the derivative (D SLing)(v) is given by the harmonic normal vector field u to L'_0 such that D_(L_0, L'_0)F(v, u) = 0. Rearranging, using the linearity of ℋ_0 and T_1^-1, to obtain (<ref>) is straightforward.

§ THE DERIVATIVE OF APPROXIMATE GLUING

We now prove the results corresponding to the previous section for the approximate gluing map of Definition <ref>.
As this map does not involve the Laplacian, the results corresponding to subsection <ref> are not required; however, the analysis corresponding to subsections <ref> and <ref> is more involved, as we need to extend these results to asymptotically cylindrical submanifolds and asymptotically translation invariant normal vector fields. We carry out this extension in subsection <ref>, and then pass to the approximate gluing itself in subsection <ref>.

§.§ Asymptotically translation invariant normal vector fields

In this subsection we give a general definition of asymptotically translation invariant normal vector fields, show that it behaves well with respect to the maps T_i from subsection <ref>, and finally show that this definition is, for an asymptotically cylindrical close-to-special-Lagrangian submanifold of an asymptotically cylindrical Calabi–Yau manifold, equivalent to the corresponding one-form being asymptotically translation invariant.

We briefly discussed the definition of asymptotically translation invariant normal vector fields for the asymptotically cylindrical deformation theory of Theorem <ref>. In that case, we explained briefly that for special Lagrangian L, we could take v asymptotically translation invariant if and only if ι_v ω|_L is. To define asymptotically translation invariant vector fields in general, we use the fact that each asymptotically cylindrical submanifold has a corresponding cylindrical submanifold.

Let L be an asymptotically cylindrical submanifold of the asymptotically cylindrical manifold M. Let K × (R, ∞) be the cylindrical end, so that the end of L is exp_v(K × (R, ∞)) for v decaying. Translation gives an action on TM|_K × (R, ∞), and consequently a notion of translation invariant vector field. Since we always have a notion of exponentially decaying vector fields, we obtain a notion of asymptotically translation invariant vector fields on K × (R, ∞). We may extend v to obtain an asymptotically cylindrical diffeomorphism, with limit the identity (as in Proposition <ref>), between tubular neighbourhoods of K × (R, ∞) and of the end of L. Pushforward by this diffeomorphism induces a map from vector fields along K × (R, ∞) to vector fields along the end of L; as v and its first derivative are exponentially decaying, it follows as in Definition <ref> (that is, by using the Rauch comparison estimate of Proposition <ref>) that this defines an isomorphism far enough along the end. We then say that a vector field along L is asymptotically translation invariant precisely if it is the image of an asymptotically translation invariant vector field under this pushforward. An asymptotically translation invariant normal vector field is simply an asymptotically translation invariant vector field which is normal. The limit of an asymptotically translation invariant vector field is the translation invariant vector field along K × (R, ∞) given by the limit of the vector field on K × (R, ∞) of which it is the pushforward.
We can define an extended weighted norm on such asymptotically translation invariant vector fields by using the standard extended weighted norm, corresponding to exp_v^* g for a suitable extension of v, on the corresponding vector field on the cylindrical submanifold with limit K × (R, ∞).

The purpose of using pushforward for this transfer, and of not restricting to normal vector fields in the definition, is that it makes the following result trivial.

Suppose that L_1 and L_2 are asymptotically cylindrical submanifolds with the same limit, so that there is an exponentially decaying normal vector field w to L_1 such that exp_w(L_1) = L_2. Extend w to define an asymptotically cylindrical diffeomorphism of tubular neighbourhoods, with limit the identity. A vector field along L_1 is asymptotically translation invariant if and only if its image under pushforward by this diffeomorphism is. Moreover, the vector field and its pushforward have the same limit.

We restrict to the ends of L_1 and L_2 and the corresponding cylindrical end L̃. As the compact parts are irrelevant for this discussion, we will just write these ends as L_1 and L_2. We have L_1 = exp_v_1(L̃) and L_2 = exp_v_2(L̃); again, make some extensions v_1 and v_2 to define asymptotically cylindrical diffeomorphisms, with limit the identity, between tubular neighbourhoods. Suppose u is an asymptotically translation invariant vector field on L_1; that is, it is the pushforward by exp_v_1 of an asymptotically translation invariant vector field on L̃. We want to show that the pushforward by exp_w of u is an asymptotically translation invariant vector field on L_2 with the same limit; that is, we want to show that the pushforward by the composition exp_w∘exp_v_1 of an asymptotically translation invariant vector field is the pushforward by exp_v_2 of an asymptotically translation invariant vector field with the same limit. Equivalently, it suffices to show that exp_v_2^-1∘exp_w∘exp_v_1 (which is essentially a diffeomorphism of the end L̃) preserves asymptotically translation invariant vector fields and their limits. But this is an asymptotically cylindrical diffeomorphism, and so pullback induces a corresponding asymptotically cylindrical metric on the tubular neighbourhood of L̃, and the question just becomes the independence of asymptotic translation invariance from the metric. This is immediate as usual. As the diffeomorphism has limit the identity, it preserves the limits of the vector fields. The reverse implication is equally obvious.

This implies in particular that Definition <ref> is independent of the extension of v used. We will now explain some consequences of Definition <ref>, particularly with respect to the transfer maps T_1, T_2 and T_3.

Let L be an asymptotically cylindrical submanifold of M; let L' be another asymptotically cylindrical submanifold with the same limit. Let u be an asymptotically translation invariant normal vector field on L. Then

* u_C^k is finite for every k.

* T_1 u and T_2 u are asymptotically translation invariant normal vector fields on L' with the same limit as u. Similarly, if u' is an asymptotically translation invariant normal vector field to L', then T^-1_1 u' and T^-1_2 u' are asymptotically translation invariant normal vector fields on L with the same limits as u'. Finally, T_3 u is an exponentially decaying normal vector field on L'.
* If the ambient metric is cylindrical as in subsection <ref>, asymptotically translation invariant normal vector fields correspond to nearby asymptotically cylindrical submanifolds, with the limits also corresponding.

We begin with (i). By definition, away from a compact part, u is the image under pushforward of an asymptotically translation invariant vector field w on K × (R, ∞). w is clearly bounded in C^k for every k, at least for R large enough. We have to show that pushforward preserves this, and this follows easily from the smoothness and continuity result obtained as part of Proposition <ref>. For instance, |u_exp_v(p)| is precisely given by applying the pointwise pushforward map to w_p and (v_p, (∇ v)_p); this depends smoothly on p and converges to a limit as we approach the end, using the continuity in the metric. This proves the C^0 bound; the C^k bounds are similar, using the derivatives.

The major part of this proposition is (ii). We shall prove the result for T_1 and T_2 and their inverses; T_3 is similar. As usual we shall argue using the ideas of Proposition <ref>. It follows from Lemma <ref> that the result is true for pushforward itself, so it suffices to check two things. Firstly, we shall show that these four maps are close to pushforward, in the sense that their differences decay exponentially. Secondly, we shall show that exponentially decaying vector fields are asymptotically translation invariant with zero limit. By the linearity of Definition <ref>, it follows that the image under these maps is asymptotically translation invariant, and that the limits are preserved.

We begin by working with T, which is either T_1 or T_2. This is defined pointwise by a map on TM ⊕ J^1(TM) ⊕ Gr_n(TM). By Proposition <ref>, this map is smooth and depends continuously on the metric in the sense of that proposition. Similarly, pushforward, which we shall denote P, is defined pointwise by a map on TM ⊕ J^1(TM), which by the appropriate part of Proposition <ref> is smooth and depends continuously on the metric. We shall restrict to the submanifold N of TM ⊕ J^1(TM) ⊕ Gr_n(TM) defined in Definition <ref>; that is, to (u, (v, ∇ v), ℓ) with u normal to ℓ. For any such u and ℓ, we immediately have

|T(u, 0, ℓ) - P(u, 0)| = 0.

Since these maps are smooth, it follows that for each p we have a constant C with

|T(u, (v, ∇ v), ℓ) - P(u, (v, ∇ v))| ≤ C|(v, ∇ v)||u|

as a pointwise estimate. As this constant may be chosen smoothly, and we may differentiate and obtain the same results for jets, we have a local estimate

T(u, (v, ∇ v), ℓ) - P(u, (v, ∇ v))_C^k ≤ C_k v_C^k+1 u_C^k.

Just as in Proposition <ref>, we now only have to show that C_k may be chosen uniformly, as v is exponentially decaying and u is bounded by (i). But, using (<ref>) and its analogue for pushforward, the derivatives are continuous in the metric. L itself can be regarded as a finite-dimensional parameter space of metrics, and as p heads to the end of L, the metric converges to a limit g̃. Hence, the whole map depends continuously on the point of L, and we may choose a uniform bound.

This shows that the images of a normal vector field under T and under pushforward differ by an exponentially decaying vector field. As for the inverses, we know from the inverse part of Proposition <ref>, and the analogous result, proved identically, for the inverse of pushforward, that these satisfy the same properties on N' from Definition <ref>: the result is then proved in exactly the same way.
It now only remains to show that exponentially decaying vector fields are asymptotically translation invariant with zero limit; that is, that they are the pushforwards of exponentially decaying vector fields on L̃. To do this, we again apply the inverse of pushforward. Just as above, we know that we can take an appropriate smooth submanifold of TM × J^1(TM), and define a map that is smooth and continuous in the metric. Note that as here we need to ensure that the vector field to L is normal, this submanifold is not N'. It follows in the same way that the image of a pair of exponentially decaying objects is exponentially decaying, since the pushforward of zero is always zero and we converge to a constant metric. The converse result can be proved the same way: an asymptotically translation invariant vector field with zero limit is exponentially decaying.

As for (iii), the relationship between asymptotically translation invariant normal vector fields and asymptotically cylindrical manifolds holds just as sketched in subsection <ref>; passing to pointwise operators on appropriate bundles as in this section enables us to formalise that argument. Note that the condition that the limit was zero in (ii) meant we did not have to assume cylindricality of the ambient metric, whereas that will be necessary in this case.

We now prove that asymptotically translation invariant one-forms correspond to asymptotically translation invariant normal vector fields. That is, Definition <ref> is equivalent to the notion for one-forms we used in subsection <ref>.

Let M be an asymptotically cylindrical Calabi–Yau manifold, and let L be an asymptotically cylindrical submanifold such that ω|_L is small with all derivatives, so that the estimate (<ref>) of Lemma <ref> applies in every C^k space. Then a normal vector field v on L is asymptotically translation invariant in the sense of Definition <ref> if and only if the one-form ι_v ω|_L is asymptotically translation invariant.

As usual, we are only interested in the end of L, and this is asymptotic to a cylindrical submanifold L̃. There is an exponentially decaying normal vector field v on L whose image under the exponential map is L̃; this extends to a diffeomorphism exp_v between tubular neighbourhoods decaying to the identity, and Definition <ref> says a vector field u along L is asymptotically translation invariant if and only if it is (exp_v)_* w for some asymptotically translation invariant vector field w along L̃.

It suffices to suppose that L is cylindrical, as follows. With the notation as above,

ι_u ω|_L = ι_{(exp_v)_* w} ω|_exp_v(L̃) = ι_w exp_v^* ω|_L̃.

Moreover, if u is normal, then w is normal with respect to the metric exp_v^* g, and vice versa. Hence, the result is true for L if and only if it is true for L̃ with the metric exp_v^* g and the 2-form exp_v^* ω; these correspond to the asymptotically cylindrical Calabi–Yau structure (exp_v^* Ω, exp_v^* ω), and as v is exponentially decaying and ω asymptotically translation invariant, exp_v^* ω also restricts to a small form (essentially as in Proposition <ref>), and so this reduces the problem to L̃.

If L is cylindrical, we just note that ω determines an asymptotically translation invariant section of ⋀^2 T^*M|_L. Composing with the projection map T^*M|_L → T^*L, we obtain an asymptotically translation invariant section of T^*L ⊗ (T^*M|_L). Hence, if v is an asymptotically translation invariant normal vector field, the corresponding one-form is asymptotically translation invariant.
Moreover, we know that if we restrict this section to ν_L, it defines an asymptotically translation invariant section of T^*L ⊗ (ν_L)^*. By Lemma <ref>, this section consists of isomorphisms, so we can invert it; it is easy to see that the inverse section is again asymptotically translation invariant. This completes the cylindrical case, and hence the proof.

§.§ The derivative of approximate gluing

In this subsection, we will deal with the approximate gluing map of Definition <ref>. We will identify its derivative, and use this to define an approximate gluing map of normal vector fields. We will also show that this gluing map, when we pass to special Lagrangians and use the identifications between normal vector fields and one-forms, is close to the approximate gluing map of one-forms given in Definition <ref>. In particular, this shows that for special Lagrangian submanifolds the approximate gluing map of submanifolds is smooth, and its derivative is close to the approximate gluing map of one-forms.

We make the following preliminary observation. To discuss smoothness of maps of submanifolds, we treat submanifolds as equivalent to normal vector fields using the Riemannian exponential map. It is not hard to see, essentially by the argument of Hamilton <cit.>, that smoothness of maps is independent of the metric used, that the identity map on submanifolds defines a smooth map of vector fields normal with respect to different metrics, and that the derivative of this map is just given by taking the normal part of the vector field with respect to the new metric. Compare Proposition <ref>.

This observation is relevant to this analysis for various reasons, but most obviously because we have to change the metric in gluing the ambient spaces M_1 and M_2. Recall from Definition <ref> that to glue asymptotically cylindrical submanifolds L_1 and L_2, we cut them off to form L̂_1 and L̂_2 and then identify. The observation shows that, regardless of the metric, the identification part of this is smooth. Its derivative is given by taking the normal parts with respect to the cutoff metrics and then identifying; equivalently, it is given by identification followed by taking the normal part with respect to the final metric on M^T. We thus need to understand the cutoff map L_i ↦ L̂_i, and may work with only one asymptotically cylindrical submanifold L, with cutoff L̂, in a fixed asymptotically cylindrical M. Of course, since it depends on cutting off a normal vector field, this map depends on some fixed metric on M, but we may fix one once and for all.

Since we need to find a derivative, we in fact work with a family L_s of asymptotically cylindrical submanifolds with cutoffs L̂_s and cross-sections K_s. We assume that this family decays at a fixed uniform rate, so that we can fix a decay rate for normal vector fields; note that if L_s is a deformation family of special Lagrangians this is immediate by the argument at the end of subsection <ref>.

We now introduce families of normal vector fields giving L_s and L̂_s, and prove that L̂_s is a smooth family. First of all, we have normal vector fields u_s and û_s to L_0 and L̂_0 such that exp_u_s(L_0) = L_s and exp_û_s(L̂_0) = L̂_s. These normal vector fields are not well-adapted to the cutoff, and so we define some further normal vector fields. There is a normal vector field v_s to K_s × (R, ∞) giving the end of L_s; it depends smoothly on s by the remark after Proposition <ref>.
Moreover, there is a normal vector field w_s to K_0 × (R, ∞) giving K_s × (R, ∞); this also depends smoothly on s. The cutoff function needed in Definition <ref> is then φ_s = φ_T∘exp_w_s∘ι, where ι is the inclusion of K_0 × (R, ∞). Thus the resulting cutoff normal vector field depends smoothly on s, and so does L̂_s. This shows that the map we consider is smooth. By using the transfer map T_2, s as in Proposition <ref> to consider the family v_s of normal vector fields in the previous paragraph on the same space, we can summarise this as

L_s = exp_T_2, s(v_s)(K_s × (R, ∞)) = exp_T_2, s(v_s)exp_w_s(K_0 × (R, ∞)),
L̂_s = exp_T_2, s(φ_s v_s)exp_w_s(K_0 × (R, ∞)),

where we now regard v_s as a normal vector field on K_0 × (R, ∞).

This expression enables us to find the derivative. Let T_1 and T_2 be the transfer maps of Definition <ref> with L = K_i × (R_i, ∞) and L' = exp_v_i(K_i × (R_i, ∞)); let T̂_1 and T̂_2 be the corresponding transfer maps with L = K_i × (R_i, ∞) and L' = exp_φ v_i(K_i × (R_i, ∞)). Define T_3 and T̂_3 similarly using Definition <ref>. It follows from Proposition <ref> that the normal vector field to L_0 giving the tangent to the curve L_s is u' = T_2 w' + T_1 v' + T_3 w'; similarly, the normal vector field to L̂_0 giving the tangent to the curve L̂_s is

T̂_2 w' + T̂_1((φ_s v_s)') + T̂_3 w' = T̂_2 w' + T̂_1(φ_0 v' + (∇_u'φ_T)v_0) + T̂_3 w'.

Note that φ_T is defined on M, so the normal derivative ∇_u'φ_T makes sense. The derivative is consequently the map

u' = T_2 w' + T_1 v' + T_3 w' ↦ T̂_2 w' + T̂_1(φ_0 v' + (∇_u'φ_T)v_0) + T̂_3 w' = û'.

In order to understand this derivative explicitly, we have to explain how to obtain w' and v' from u'. Note that if M were cylindrical, w_s would be translation invariant for all s, and thus so would w'. Hence, by our preliminary observation, w' is the normal part of the translation invariant vector field corresponding to the behaviour of the limits, and is determined by its limit. On the other hand, by assumption the family decays at a uniform rate, and so v' also decays. Since v_0 and v' are exponentially decaying, it follows by (ii) of Proposition <ref> that T_1 v' is too. Similarly it follows that T_3 w' is exponentially decaying. On the other hand, since w' is determined by its limit, the proof of Proposition <ref> implies T_2 w' is also determined by its limit: we may find the limit of w' from the limit of T_2 w', hence w', and thence T_2 w'. The previous paragraph says the limit of T_2 w' is the limit of u', and hence the map of (<ref>) is well-defined. That is, the limit of u' gives us w' as above; then T_1 v' = u' - T_2 w' - T_3 w' enables us to find v'.

Note that at a point with r < T-1, we have T_1 = T̂_1, T_2 = T̂_2, T_3 = T̂_3, φ_T ≡ 1, and ∇φ_T = 0. Hence (<ref>) becomes the identity map at these points, and so it can be extended to a map from normal vector fields on L_0 to normal vector fields on L̂_0. This identifies the derivative of the approximate gluing map of submanifolds from Definition <ref>. It also defines an approximate gluing map of matching asymptotically translation invariant normal vector fields (that is, of those vector fields v_i on L_i whose limits ṽ_i satisfy F_* ṽ_1 = ṽ_2). To check this, we just have to show that the matching condition implies that the constructed normal vector fields match in the identification region. Since the limits are the same, we know that the limit of w' is the same; thus T̂_2 w' agrees in the identification region provided we choose the metrics to agree.
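Returning to (<ref>), the only real computation in the derivative formula is the product rule for the cutoff; for clarity (our display, writing φ'_0 for the s-derivative of φ_s at 0):

(φ_s v_s)'|_s=0 = φ'_0 v_0 + φ_0 v',   φ'_0 = d/ds|_s=0 (φ_T∘exp_w_s∘ι) = ∇_u'φ_T,

where the last equality holds because φ_T is a function on the ambient manifold M, whose derivative along the variation may be written as the normal derivative ∇_u'φ_T.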
There are simpler definitions of such a gluing map, but we will have to use the transfer maps, because to define a normal vector field on L^T we will need normal vector fields on L̂_1 and L̂_2.

We now translate this to one-forms. If L_1 and L_2 are a matching pair of special Lagrangians, and Hypothesis <ref> holds, then by Proposition <ref> and Lemma <ref>, v ↦ ι_v ω|_L^T gives an isomorphism between normal vector fields and one-forms on L^T. As L_1 and L_2 are themselves special Lagrangian, we also have such isomorphisms on L_1 and L_2. With respect to these isomorphisms, we have

Let M_1, M_2, L_1, L_2 and L^T be as in Proposition <ref>. Suppose T is sufficiently large that v ↦ ι_v ω|_L^T is an isomorphism. Then we can induce a gluing map of normal vector fields from the gluing map of one-forms. This map differs from the gluing map described above by a linear map decaying exponentially in T. That is, there exist constants ϵ and C_k so that the two gluing maps G_1 and G_2 satisfy

G_1(v_1, v_2) - G_2(v_1, v_2)_C^k ≤ C_k e^-ϵ T(v_1_C^k + v_2_C^k).

Let α_1 and α_2 be matching asymptotically translation invariant one-forms with common limit α̃. We shall show that the difference between the one-form given by gluing these as above and γ_T(α_1, α_2) is exponentially decaying: this proves the result, as it follows from Proposition <ref> and Lemma <ref> that the isomorphism between normal vector fields and one-forms on L^T is bounded uniformly in T. Let u_i be the normal vector field corresponding to α_i using the Calabi–Yau structure on M_i, and let u be the gluing of u_1 and u_2 as above.

We prove this in two stages. Firstly, we prove that on each cutoff cylindrical submanifold L̂_i, with cutoff normal vector field û_i as in (<ref>) and cutoff one-form α̂_i, the difference ι_û_iω_i|_L̂_i - α̂_i is exponentially small in T. We then note that ι_û_iω_i|_L̂_i - ι_û_iω̂_i|_L̂_i is exponentially small in T. This proves that ι_u γ_T(ω_1, ω_2)|_L^T - γ_T(α_1, α_2) is exponentially small in T. Secondly, we use Hypothesis <ref> to show that ι_u(γ_T(ω_1, ω_2) - ω^T)|_L^T is exponentially small in T.

To do the first of these, we simply replace T_1, T_2, T_3, T̂_1, T̂_2 and T̂_3 with the corresponding maps τ_1, τ_2, τ_3, τ̂_1, τ̂_2, τ̂_3 of one-forms as in Definition <ref>: Proposition <ref> says that these are just the one-form versions of the T_i and T̂_i. By construction, α̂_i and ι_û_iω̂_i|_L̂_i are both equal to α_i for t < T-3/2, say. Hence, it suffices to consider the difference where t > T-3/2, and so it suffices to prove that τ_1, τ_2, τ̂_1 and τ̂_2 are exponentially close in T to the identity, and that τ_3 and τ̂_3 are exponentially small in T in the sense of (<ref>). This follows by an argument similar to Proposition <ref>, using Proposition <ref>: we obtain local bounds on τ_i α - α in terms of v and α using smoothness, and then continuity shows that these bounds may be chosen uniformly; since we may suppose v is exponentially small in T, as we are only interested in this behaviour far enough along the end, the result follows. Note that this is essentially independent of which metric is used.

It now only remains to show that ι_u(γ_T(ω_1, ω_2) - ω^T)|_L^T is exponentially small in T. Hypothesis <ref> says that γ_T(ω_1, ω_2) - ω^T is exponentially small in T; u is uniformly bounded in T since û_1 and û_2 are, so ι_u(γ_T(ω_1, ω_2) - ω^T) is exponentially small in T, and Corollary <ref> then implies that the restriction is too.
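Schematically (our summary of the two stages of the proof just given, with ι_u ω^T|_L^T the one-form produced by the normal vector field gluing):

ι_u ω^T|_L^T - γ_T(α_1, α_2) = ι_u(ω^T - γ_T(ω_1, ω_2))|_L^T + (ι_u γ_T(ω_1, ω_2)|_L^T - γ_T(α_1, α_2)),

where the first term is exponentially small in T by Hypothesis <ref> and the uniform bound on u, and the second by the estimates on the τ_i and τ̂_i.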
Note that the norms v_1_C^k may readily be bounded by the extended weighted norms, so we may suppose we have extended weighted norms on the right hand side of (<ref>).

§ ESTIMATES IN THE GLUING CASE AND PROOF OF THEOREM B

We will now show that the gluing map of special Lagrangians given by combining the SLing map of Definition <ref> with the approximate gluing map of Definition <ref> is indeed a local diffeomorphism of moduli spaces for T sufficiently large. This is Theorem B, Theorem <ref> below. We will show that the derivative of this map, which we identified in the previous two sections under certain hypotheses, is an isomorphism. We will first show that the hypotheses required are satisfied. This will imply in particular that the derivative of SLing is close to taking the harmonic part of the one-form on the special Lagrangian submanifold given by gluing our asymptotically cylindrical pair (Proposition <ref>). Finally, we show in Proposition <ref> that the space of matching special Lagrangian submanifolds around any pair is a manifold, so that our derivative for the approximate gluing map in subsection <ref> applies, and prove using the linear harmonic theory in Proposition <ref> that the composition of the derivatives is an isomorphism.

We set up notation for our gluing analysis as follows; unfortunately, this notation is rather involved.

Let (M_1, M_2) be a matching pair of asymptotically cylindrical Calabi–Yau manifolds and let (L_1, L_2) be a matching pair of asymptotically cylindrical special Lagrangian submanifolds as in Definition <ref>. Suppose that Hypothesis <ref> holds, so that (Ω^T, ω^T) is a Calabi–Yau structure on M^T. Let L_0(T) be the family of glued submanifolds of M^T given by approximate gluing. Suppose T_0 is sufficiently large that for T > T_0 Condition <ref> applies with k and μ, and let L'_0(T) be the family of special Lagrangian submanifolds for (Ω^T, ω^T) given by perturbing L_0(T) as in Theorem A (Theorem <ref>). Let v_0(T) be the normal vector field to L_0(T) giving L'_0(T), and let x^T be a normal vector field on L_0(T) with Δ x^T = v_0(T).

We immediately have the following estimates.

Suppose we are in the situation of Convention <ref>. We may choose x^T such that there exist a fixed ϵ > 0 and constants C_k, μ such that

v_0(T)_C^k+1, μ + x^T_C^k+3, μ ≤ C_k, μ e^-ϵ T

for every k and μ.

The estimate on v_0(T)_C^k+1, μ follows immediately from Theorem <ref>. As for x^T, this follows essentially from the Laplacian version of Theorem <ref>, using Corollary <ref> to note that L_0(T) essentially has the glued metric, and Lemma <ref> to note that bounding one-forms and bounding normal vector fields are essentially equivalent.

As L_0(T) and L'_0(T) are close to special Lagrangian and special Lagrangian, respectively, we can identify normal vector fields on them with one-forms. We then obtain from Proposition <ref> that D_L_0(T)SLing, if defined, is the map from a one-form α on L_0(T) to the unique harmonic one-form β on L'_0(T) such that

ℋ τ_1^-1 β = ℋ(τ_1^-1(τ_2 α + τ_3 α) - Δ'_α x^T),

where Δ'_α x^T is given by finding the normal vector field v on L_0(T) corresponding to α, constructing the transfer operators T_2, s from L_0(T) to exp_sv(L_0(T)), constructing the curve of normal vector fields T_2, s^-1 Δ T_2, s x^T, and then taking the derivative of this curve in s at zero, and where τ_1, τ_2 and τ_3 are as in Definition <ref>.
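Heuristically (our gloss; the estimates below make it precise), the results of this section show that

τ_i = id + O(e^-ϵT),   Δ'_α x^T = O(e^-ϵT)α_C^k+1,

so that (<ref>) degenerates as T → ∞ to β = ℋ_L'_0(T)α up to errors exponentially small in T: this is the sense in which D_L_0(T)SLing is close to taking the harmonic part.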
We first show that for T large enough, β satisfying (<ref>) is defined, by showing that the τ_i behave well as T gets large.

Let M_1, M_2, L_1, L_2, L_0(T), L'_0(T) be as in Convention <ref>, and ϵ as in Lemma <ref>. Consider the maps τ_1, τ_2 and τ_3 defined in Definition <ref> from one-forms on L_0(T) to one-forms on L'_0(T); these maps also depend on T. There exists a sequence of constants C_k such that for every k

‖α - τ_1 α‖_C^k + ‖α - τ_2 α‖_C^k + ‖τ_3 α‖_C^k ≤ C_k e^-ϵ T ‖α‖_C^k.

Essentially, this follows by the same proof as Proposition <ref>. We can always find a local constant, giving a bound in terms of v_0(T); by the argument in Proposition <ref> this constant can be chosen uniformly in T, and the exponential decay then follows from the exponential decay of v_0(T).

The other major ingredient in (<ref>) (as well as the τ_i) is the Laplacian. We have the following.

Let M_1, M_2, L_1, L_2, M^T, L_0(T), L'_0(T), v_0(T) be as in Convention <ref>, and ϵ as in Lemma <ref>. We have two metrics g(Ω^T, ω^T)|_L_0(T) and g(Ω^T, ω^T)|_L'_0(T) on the submanifold L_0(T) of M^T. There exist constants C_k such that for all k

‖g(Ω^T, ω^T)|_L_0(T) - g(Ω^T, ω^T)|_L'_0(T)‖_C^k ≤ C_k e^-ϵ T.

Secondly, these two metrics induce two Laplacians Δ_L_0(T) and Δ_L'_0(T) on forms on these submanifolds. There exist constants C'_k, μ such that for any integer k, any μ ∈ (0, 1) and any form α

‖Δ_L_0(T)α - Δ_L'_0(T)α‖_C^k, μ ≤ C'_k, μ e^-ϵ T ‖α‖_C^k+2, μ,

and the same estimate holds for d^*. Thirdly, possibly increasing C'_k, μ, there also exists r such that if α is orthoharmonic with respect to the metric on L_0(T) or L'_0(T),

‖d α‖_C^k, μ + ‖d^* α‖_C^k, μ ≥ C'_k, μ T^-r ‖α‖_C^k+1, μ,

where the Laplacian is that of L_0(T) or L'_0(T) respectively.

(<ref>) follows by an argument similar to that of Proposition <ref>; (<ref>) then follows from smoothness of the Laplacian and the Hodge star, written in local coordinates as functions of the metric. As for (<ref>), as in the proof of Theorem <ref> this is an immediate consequence of Theorem <ref> provided the metric is close to the glued metric. For L_0(T) we know this by Corollary <ref>; for L'_0(T), we use (<ref>).

Proposition <ref> has the following two corollaries. Firstly, it implies that the harmonic parts of a form taken on L_0(T) and on L'_0(T) are similar.

Let M_1, M_2, L_1, L_2, M^T, L_0(T), L'_0(T), v_0(T) be as in Convention <ref>, and ϵ as in Lemma <ref>. Given a form α on L_0(T) = L'_0(T), write H_L_0(T)α and H_L'_0(T)α for its harmonic part with respect to the two metrics of Proposition <ref>. Then for every k and μ there exist constants C_k, μ such that for every form α,

‖H_L_0(T)α - H_L'_0(T)α‖_C^k, μ ≤ C_k, μ e^-ϵ T ‖α‖_C^k, μ.

Furthermore, for some constants r and C_k, μ we have

‖H_L'_0(T)α‖_C^k, μ ≤ C_k, μ T^r ‖α‖_C^k, μ.

The first part follows straightforwardly: the Laplacians converge together exponentially and can be bounded below polynomially, so that the orthoharmonic part on L'_0(T) of a form harmonic on L_0(T) must decay exponentially. As for the second part, Theorem <ref> shows that we can bound the orthoharmonic part polynomially in terms of the Laplacian, and the Laplacian can obviously be bounded uniformly provided that the metric is not too far from the glued metric: this follows from the metric comparison results of (<ref>) and Corollary <ref>.
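A heuristic way to see the first estimate of this corollary is to treat the polynomial lower bound of Proposition <ref> as a spectral gap. Writing λ_1(T) ≳ T^{-r} for the smallest non-zero eigenvalue of the Laplacian (a simplifying assumption made only for this sketch, not part of the argument above), the difference of the two projections is controlled by the difference of the two Laplacians divided by the gap:

\[
\| H_{L_0(T)}\alpha - H_{L'_0(T)}\alpha \| \lesssim \lambda_1(T)^{-1}\,\| (\Delta_{L_0(T)} - \Delta_{L'_0(T)})\alpha \| \lesssim T^{r} e^{-\epsilon T}\,\|\alpha\|,
\]

which still decays exponentially at any slightly smaller rate ϵ' < ϵ, since polynomial growth cannot compete with exponential decay.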
Secondly, combining Proposition <ref> with Proposition <ref> implies that the transfer maps induce isomorphisms between harmonic normal vector fields.

Let M_1, M_2, L_1, L_2, M^T, L_0(T), L'_0(T) and v_0(T) be as in Convention <ref>. Let T_i be either T_1 or T_2 from Definition <ref>. For T sufficiently large, H_L'_0(T)∘ T_i defines an isomorphism between harmonic normal vector fields. So does H_L_0(T)∘ T_i^-1.

That H_L'_0(T)∘ T_i is an isomorphism is equivalent to H_L'_0(T)∘τ_i being an isomorphism. We shall show that H_L'_0(T)∘τ_i is injective on harmonic one-forms: since L_0(T) and L'_0(T) have the same first Betti number, it will follow that H_L'_0(T)∘τ_i is an isomorphism. Pick some k and μ, and suppose that α is a harmonic one-form on L_0(T) with ‖α‖_C^k, μ = 1. By Proposition <ref>, we see that ‖τ_i α - α‖_C^k, μ is bounded by a constant decaying exponentially in T. By the bound (<ref>), we obtain that ‖H_L'_0(T)τ_i α - H_L'_0(T)α‖_C^k, μ is similarly bounded. By Corollary <ref>, we also have that ‖H_L'_0(T)α - α‖_C^k, μ can be bounded by a constant decaying exponentially in T. Hence ‖H_L'_0(T)τ_i α - α‖_C^k, μ decays exponentially in T, and it follows that for T sufficiently large H_L'_0(T)τ_i α cannot be zero. That H_L_0(T)∘ T_i^-1 is an isomorphism follows in exactly the same way.

In particular, for T sufficiently large, combining Corollary <ref> with Proposition <ref> shows that the SLing map is a smooth map on a small neighbourhood of L_0(T) with derivative at L_0(T) given by Proposition <ref>. Slightly restricted versions of Propositions <ref> and <ref> also hold in a more general setting. If M is a Calabi–Yau manifold and L_0 is a closed nearly special Lagrangian submanifold satisfying Condition <ref>, then the SLing perturbation of L_0 exists. If this perturbation is close enough to L_0 in C^k, then working with a suitably low regularity in the propositions shows again that H_L_0∘τ_i^-1 is an isomorphism. Then, as in the gluing case, Proposition <ref> applies and we have that the SLing map is again a smooth map with derivative given by (<ref>) on a small enough space of perturbations of L_0.

We now know that D_L_0(T) exists and in terms of one-forms is given by (<ref>). We now show that this map is close to taking the harmonic part H_L'_0(T). Proposition <ref> controls the τ_i; it remains to deal with the term Δ'_α x^T. We have

Let M_1, M_2, L_1, L_2, M^T, L_0(T), L'_0(T), v_0(T) and x^T be as in Convention <ref>, and ϵ as in Lemma <ref>. Consider the map of one-forms

α ↦ Δ'_α x^T,

defined after (<ref>). This map is linear in α, and for every k there are C_k and C'_k, independent of T, such that

‖Δ'_α x^T‖_C^k ≤ C_k ‖α‖_C^k+1 ‖x^T‖_C^k+2 ≤ C'_k e^-ϵ T ‖α‖_C^k+1.

Δ'_α x^T is the derivative at zero of the curve obtained by finding the normal vector field v corresponding to α, constructing the corresponding transfer operators T_2, s, and then evaluating T_2, s^-1Δ T_2, s x^T. Since this is a smooth map by the argument of Proposition <ref>, it follows immediately that the derivative is well-defined and in particular linear. The proof of the bound is similar to Proposition <ref>, though more involved. We argue that the map s ↦ T_2, s^-1Δ T_2, s x^T can be determined locally, as for the τ_i, by the first jet of α, the second jet of x^T, and the tangent space to L at the point, and is continuous in the Calabi–Yau structure in the same sense as Proposition <ref>. Hence the derivative Δ'_α x^T is likewise determined by these local data, and is continuous in the Calabi–Yau structure in the sense of (<ref>).
But then the bound follows by exactly the same argument as in Proposition <ref>: the argument of Proposition <ref> gives it locally, and the uniform constant follows by comparing with (Ω_1, ω_1), (Ω_2, ω_2) and the cylindrical Calabi–Yau structure (Ω̃, ω̃). The second inequality follows immediately from Lemma <ref>.

We can now turn to our main result identifying D_L_0(T) with H_L'_0(T) on one-forms.

Let M_1, M_2, L_1, L_2, M^T, L_0(T), L'_0(T) be as in Convention <ref>, and ϵ as in Lemma <ref>. Given a one-form α on L_0(T), we can construct a harmonic one-form with respect to the metric on L'_0(T) by applying either the harmonic projection H_L'_0(T) or D_L_0(T). There are constants C_k such that

‖(H_L'_0(T) - D_L_0(T)) α‖_C^k ≤ C_k e^-ϵ T ‖α‖_C^k+1.

We recall from (<ref>) that D_L_0(T)(α) is the unique L'_0(T)-harmonic β such that

H_L_0(T)(τ_1^-1β) = H_L_0(T)(τ_1^-1(τ_2 α + τ_3 α) + Δ'_α x^T).

By applying Proposition <ref> repeatedly and Proposition <ref>, we find that there exist C_k such that

‖τ_1^-1(τ_2 α + τ_3 α) + Δ'_α x^T - α‖_C^k ≤ C_k e^-ϵ T ‖α‖_C^k+1.

By applying H_L_0(T), which is bounded at most polynomially in T by Proposition <ref>, and increasing C_k if necessary, it follows that

‖H_L_0(T)(τ_1^-1 D_L_0(T)(α)) - H_L_0(T)α‖_C^k ≤ C_k e^-ϵ T ‖α‖_C^k+1.

Applying Proposition <ref> to the difference τ_1^-1 D_L_0(T)(α) - D_L_0(T)(α), and combining the resulting estimate with (<ref>) and the polynomial bound on H_L_0(T) again, we find another C_k such that

‖H_L_0(T) D_L_0(T)(α) - H_L_0(T)α‖_C^k ≤ C_k e^-ϵ T (‖α‖_C^k+1 + ‖D_L_0(T)(α)‖_C^k).

But then by Corollary <ref> and the fact that D_L_0(T)(α) is L'_0(T)-harmonic, we obtain for yet another C_k

‖D_L_0(T)(α) - H_L'_0(T)α‖_C^k ≤ C_k e^-ϵ T (‖α‖_C^k+1 + ‖D_L_0(T)(α)‖_C^k).

We can then simply use the triangle inequality

‖D_L_0(T)(α)‖_C^k ≤ ‖H_L'_0(T)α‖_C^k + ‖D_L_0(T)(α) - H_L'_0(T)α‖_C^k

to bound the right-hand side of (<ref>). The ‖D_L_0(T)(α) - H_L'_0(T)α‖_C^k term can be absorbed into the left-hand side for T large enough; the ‖H_L'_0(T)α‖_C^k term is polynomially bounded in terms of ‖α‖_C^k+1 using Corollary <ref> again. Thus we get the required estimate.

We now turn to the approximate gluing map. We know from subsection <ref> that on any manifold of matching pairs this map is smooth and its derivative is exponentially close in T to the approximate gluing map of one-forms under the identification. From the asymptotically cylindrical deformation Theorem <ref>, we infer that the set of matching special Lagrangians is a finite-dimensional manifold.

Let M_1 and M_2 be a matching pair of asymptotically cylindrical Calabi–Yau manifolds, and let L_1 and L_2 be a matching pair of asymptotically cylindrical special Lagrangian submanifolds as in Definition <ref>. We have a set of pairs (L'_1, L'_2) where L'_1 is a special Lagrangian deformation of L_1 and L'_2 is a special Lagrangian deformation of L_2. The subset of pairs (L'_1, L'_2) such that L'_1 and L'_2 also match is a manifold. The tangent space is just the space of matching normal vector fields corresponding to matching bounded harmonic forms.

The map from L'_i to its cross-section K'_i is evidently smooth. By Proposition <ref>, which follows <cit.>, K'_i lies in a submanifold 𝒦_i of possible limits. The derivative of L'_i ↦ K'_i is the map v_i ↦ ṽ_i given by taking the limit of a harmonic normal vector field v_i such that ι_v_i ω|_L_i is a harmonic one-form. That is, the derivative is the limit map from harmonic normal vector fields to their limits, and hence is surjective. For notational simplicity, write K_1 = K_2 = K: that is, use F to identify the two cross-sections.
Let ∂_i: H^1(K) → H^2_(L_i) be the coboundary map of the exact sequence needed in Proposition <ref> for the asymptotically cylindrical manifold L_i. Let τ̃_i be the limit of an asymptotically translation invariant form τ_i on a tubular neighbourhood of L_i with τ_i|_L_i = 0 and dτ_i = ω on this tubular neighbourhood. Note that both τ_1 and τ_2 are examples of the form which we called τ_1 in Proposition <ref>, with τ_1 on L_1 and τ_2 on L_2. By Proposition <ref>, if L_i is connected, 𝒦_i is the kernel of the map

K'_i ↦ ∂_i([τ̃_i|_K'_i])

on special Lagrangian deformations K'_i = ⋃{(p^j)'_i} × (Y^j)'_i of K = ⋃{p^j} × Y^j satisfying

∑_j ((p^j)'_i - p^j) vol(Y^j) = 0.

If L_i is not connected, we simply get additional constraints of the form (<ref>) corresponding to each connected component. We may simply apply all of these linear constraints to get a linear subspace of the required normal vector fields, since we are working with the cylindrical limit. Then, to prove the result, we just have to prove that the joint kernel of the two maps (<ref>), restricted to this space, is a manifold.

We first want to show that the two maps are as similar as possible. We note that dτ̃_1 - dτ̃_2 = ω̃ - ω̃ = 0. Consequently, for any deformation K' of K, we have that [(τ̃_1 - τ̃_2)|_K'] = [(τ̃_1 - τ̃_2)|_K] = 0. Since we shall only need these restricted cohomology classes, we shall write τ̃ = τ̃_i. We know, as a consequence of the calculation of its linearisation in Proposition <ref>, that the map from K' to [τ̃|_K'] is a submersion into H^1(K), even with the further restriction above. Consequently, as ker ∂_1 ∩ ker ∂_2 is a vector subspace of H^1(K), we have that 𝒦_1 ∩ 𝒦_2 is a submanifold of the manifold of all special Lagrangian deformations of K.

Consequently, the diagonal space {(K', K'): K' ∈ 𝒦_1 ∩ 𝒦_2} is a submanifold of 𝒦_1 × 𝒦_2. The desired submanifold of matching deformations of (L_1, L_2) is precisely the inverse image of this submanifold under the submersion (L'_1, L'_2) ↦ (K'_1, K'_2). As for the tangent space, the last part shows that it is the inverse image, under the projection map, of the tangent space of {(K', K'): K' ∈ 𝒦_1 ∩ 𝒦_2}, consisting of pairs of harmonic normal vector fields on K; that is, it is all pairs of matching harmonic normal vector fields whose limit lies in the tangent space of 𝒦_1 ∩ 𝒦_2. This is the intersection of the tangent spaces of 𝒦_1 and 𝒦_2, that is, the harmonic normal vector fields on K which arise as limits of harmonic normal vector fields on both L_1 and L_2. Thus we have all matching pairs of harmonic normal vector fields, as required.

Consequently, we have

If we restrict the gluing map of submanifolds given by Definition <ref> to pairs of matching asymptotically cylindrical special Lagrangian submanifolds, then for T large enough the approximate gluing map is a smooth immersion, and so maps to a finite-dimensional submanifold of the deformations of L^T.

Subsection <ref> shows that the derivative of this gluing map is close to the approximate gluing map of one-forms for T large enough.
By the argument of Proposition <ref>, the approximate gluing map of one-forms is injective and bounded below independently of T when restricted to the matching harmonic forms, and so this derivative must also be injective.

We may now combine Proposition <ref> with Proposition <ref> (on the harmonic gluing map) to prove that the gluing map of special Lagrangians is a local diffeomorphism of moduli spaces.

Let M_1 and M_2 be a matching pair of asymptotically cylindrical Calabi–Yau manifolds, and suppose that Hypothesis <ref> holds, so that M_1 and M_2 can be glued to give a Calabi–Yau manifold M^T. Let L_1 and L_2 be a matching pair of asymptotically cylindrical special Lagrangians in M_1 and M_2. By Theorem A (Theorem <ref>), there exists T_0 > 0 such that L_1 and L_2 can be glued to form a special Lagrangian in M^T for all T > T_0. Moreover, this applies for any sufficiently small deformation of L_1 and L_2 as a matching pair, and hence we obtain a gluing map from the deformation space of matching pairs of submanifolds in Proposition <ref> to the space of deformations of the gluing of L_1 and L_2. This map is a local diffeomorphism for T sufficiently large.

This gluing map is the composition of the approximate gluing map of Definition <ref> with the SLing map of Definition <ref>. We shall show that the derivative of this gluing map, regarded as a map of one-forms, is exponentially close to the gluing map Γ_T of harmonic one-forms described in Proposition <ref>; this derivative is the composition of the derivatives of the approximate gluing map and the SLing map. For the proof we use the notation of Convention <ref>. As in the previous proposition, by subsection <ref>, the derivative of the approximate gluing map is exponentially close to the approximate gluing map of one-forms of Definition <ref>. By Proposition <ref>, D_L_0(T) is exponentially close to H_L'_0(T); in particular, it is bounded polynomially in T, by Corollary <ref>. By composition (using that the harmonic part map is polynomially bounded, so that this exponential decay is preserved), we find that the difference between the derivative of the gluing map of special Lagrangians and the gluing map Γ_T of harmonic forms with respect to the metric g(Ω^T, ω^T)|_L'_0(T) is exponentially small in T. Combining Corollary <ref> with Proposition <ref>, this metric is exponentially close in T to that given by Definition <ref>, so, since by Proposition <ref> Γ_T is bounded below uniformly in T and is an isomorphism, it follows that the derivative of the gluing map of special Lagrangians is an isomorphism for T sufficiently large. By the inverse function theorem, the result follows.
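Schematically, the chain of estimates assembled in this proof can be summarised (purely as a mnemonic for the argument above, not as an additional claim) as

\[
D(\text{gluing}) = D(\text{SLing}) \circ D(\text{approximate gluing})
= \bigl(H_{L'_0(T)} + O(e^{-\epsilon T})\bigr) \circ \bigl(\gamma_T + O(e^{-\epsilon T})\bigr)
= \Gamma_T + O(T^{r} e^{-\epsilon T}),
\]

where Γ_T is the gluing map of harmonic one-forms of Proposition <ref>; since Γ_T is an isomorphism bounded below uniformly in T, the left-hand side is an isomorphism for T sufficiently large.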
http://arxiv.org/abs/1709.09564v1
{ "authors": [ "Tim Talbot" ], "categories": [ "math.DG" ], "primary_category": "math.DG", "published": "20170927145402", "title": "Gluing and deformations of asymptotically cylindrical special Lagrangians" }
In the present study we perform simulations of a simple model of two-patch colloidal particles undergoing irreversible diffusion-limited cluster aggregation, using patchy Brownian cluster dynamics. In addition to the irreversible aggregation of the patches, modeled by a Kern–Frenkel potential, the spheres are coupled by a reversible isotropic square-well attraction. Owing to the presence of both the anisotropic and the isotropic potential, we define three different kinds of clusters: those formed by the anisotropic potential alone, by the isotropic potential alone, and by the two potentials together. We have investigated the effect of patch size on self-assembly under different solvent qualities for various volume fractions. We show that at low volume fractions the aggregation process ends in a chain conformation for small patch sizes, while it ends in a globular conformation for larger patch sizes. We also observe a chain-to-bundle transformation depending on the attractive interaction strength between the chains, or in other words on the quality of the solvent, and we show that the bundling process is very similar to the nucleation and growth phenomenon observed in colloidal systems with short-range attraction. We have also studied the bond-angle distribution of this system: for small patches only two angles are highly probable, indicating chain formation, while for bundling at very low volume fraction a tail develops in the distribution; for larger patch angles the distribution is broad compared to the case of small patch angles, showing that we have a more globular conformation. Finally, we propose two-patch colloidal particles as a model for the formation of bundles similar to amyloid fibers.

§ INTRODUCTION

Biological particles self-organize into highly monodisperse structures due to the presence of specific interaction sites on their surface <cit.>; examples include viruses and proteins <cit.>. Some specific kinds of proteins aggregate into ordered bundles, leading to diseases such as cataract, Alzheimer's disease and Parkinson's disease <cit.>. In fact, it has been shown that almost all neurodegenerative diseases occur due to abnormal protein aggregation; proteins of this kind are generally termed amyloid proteins <cit.>. This type of bundling is also observed in semi-flexible polymers grafted on a surface <cit.>; in that case an attraction between the polymer chains leads to the collapse of the homogeneous phase to a bundled state. To mimic some of these biological structures, colloidal particles which are asymmetric, patterned or patchy <cit.> have received considerable attention in recent years, as their isotropic counterparts <cit.> were not able to explain some of the experimental observations, such as bundling. Particles which are anisotropic in shape or interactions are synthesized as they have very promising applications in electronics <cit.>, drug delivery <cit.>, and in fabricating photonic and plasmonic materials <cit.>. Several patchy particles have been developed in which colloidal particles undergo DNA-mediated interactions; these are known as DNA-functionalised colloids <cit.>. Patchy models have already been used to study the equilibrium properties of polymerization through Wertheim theory <cit.>, a thermodynamic perturbation theory developed for patchy particles having two patch sites. Sciortino et al.
<cit.> did an extensive study of the two-patch-site model and showed that simulations match the predictions of Wertheim theory. In their study they looked at the equilibrium structures formed when only one bond per patch was allowed. Structures such as tubes and lamellae have been predicted in experiments and computer simulations by varying the size and shape of the patches <cit.>. Although extensive work has been carried out for patchy particles and inverse patchy colloidal particles <cit.>, very little work has addressed the case where, in addition to the patchy interaction, an isotropic potential is also considered <cit.>. These studies were confined to finding equilibrium properties like the liquid-liquid coexistence curve, or to studying the competition between phase separation and polymerization. Liu et al. <cit.> observed that as the patch number decreases, the critical point of the liquid-liquid binodal curve shifts towards lower volume fraction and the width of the binodal curve increases. Using these results they were able to explain the phase separation of proteins like lysozyme and α-crystallin. Dudowicz and coworkers <cit.> studied the competition between phase separation and polymerization using lattice-based linear polymerization models. Audus et al. <cit.> recently studied the structural properties of the equilibrium structures in a similar system, where they showed that the fractal dimension of the system turns out to be 2. It has already been shown that, for an isotropic potential alone, the reversible system is close to the reaction-limited aggregation model, which also has a fractal dimension of 2 <cit.>. M. Kastelic et al. <cit.> also studied a similar system, in which they estimated the number of patches and the interaction strength of the patches from experimental data in order to reproduce the liquid-liquid coexistence curve, but in their study an isotropic potential was not considered. In the present work, we have modeled a unique system where the bonds between the patches are irreversible while the bonds formed via the isotropic potential are reversible. When two patches come into close contact they form a bond; in other words, the patches follow the diffusion-limited cluster aggregation (DLCA) model <cit.>, which has a fractal dimension of 1.8. As Prabhu et al. <cit.> have shown for monovalent patches, particles organize themselves into chains (fibers), and these chains form an arrested structure which consists of strands. Formation of fibers and bundles has also been observed by Huisman et al. <cit.>, who used an anisotropic Lennard-Jones potential to model two-patch particles. In this system, above a particular temperature, these particles form bundles, and they have shown that this transition is very similar to the sublimation transition for polymers. A transition from small clusters to long, straight, rigid tubes has also been observed by Vissers et al. <cit.> for one-patch colloids with 30% surface coverage, below a specific temperature which is density dependent. In the present work we use the simulation technique called Brownian cluster dynamics (BCD). This method was developed for studying the structure, kinetics and diffusion of aggregating systems of particles interacting via an isotropic potential <cit.>. It has already been shown that BCD agrees with other Brownian dynamics simulations <cit.> and that the predicted phase diagrams are similar to those of Monte Carlo simulations <cit.>.
The advantage over other methods is that BCD can handle very large numbers of particles, up to 10^6. Acutha et al. <cit.> modified this method to accommodate an anisotropic potential in addition to the isotropic potential on the particles. They showed that when a single polymer chain was simulated using this technique, the correct static and dynamic properties were obtained, ignoring hydrodynamic interactions. They went further and modeled step-growth polymerization, in which each patch could form only one irreversible bond, under different solvent conditions, and observed that the kinetically driven system formed an arrested structure. In the present work we do not impose any constraint on the number of bonds a patch can form, which leads to quite fascinating structures.

The paper is arranged as follows. In Model and Simulation Techniques we briefly describe our simulation technique and explain how we change the quality of the solvent. Then we present our results on the change in the structure of the system as we vary the patch size: for very small patches we form linear chains, while on increasing the patch size we end up with a more globular structure; the effect of changing the volume fraction of the system is also discussed. We also discuss the possible link between protein and patchy-particle aggregation in the context of bundling, which is believed to be the origin of many neurodegenerative diseases, followed by conclusions in the last section.

§ MODEL AND SIMULATION TECHNIQUES

A model potential was developed by Kern and Frenkel <cit.> to simulate colloidal systems with anisotropic interactions. This model consists of hard spheres of diameter σ (kept as unity) with two patches, where the orientation of the patches is specified by a unit vector v̂_i which passes through the center of the patches. A patch can be viewed as the intersection of the sphere with a cone of opening angle 2ω having its vertex at the center of the sphere. We simulate hard spheres complemented with patch vectors v_i, interacting through an isotropic square-well potential U_iso(r_i,j) together with the Kern–Frenkel patchy potential U_patchy(r̂_i,j, v̂_i, v̂_j). Here r̂_i,j = r_i,j/|r_i,j|, where r_i,j is the vector connecting the centers of mass of spheres i and j, and v̂_i = v_i/|v_i| and v̂_j = v_j/|v_j| are the unit patch vectors of spheres i and j. The potential is given by

U_tot(r_i,j, v_i, v_j) =
  ∞, for r_i,j ≤ σ,
  U_iso + U_patchy, for σ < r_i,j ≤ σ(1+ϵ),
  0, for r_i,j > σ(1+ϵ),

with

U_iso(r_i,j) = -u_0,

where ϵ is the reduced width of the well, kept at ϵ = 0.1, u_0 is the depth of the square well, and σ is the diameter, kept as unity in the present study. U_patchy depends on the orientation of the two particles and is defined as

U_patchy(r̂_i,j, v̂_i, v̂_j) = -u_1 if r̂_i,j·v̂_i > cos ω and r̂_j,i·v̂_j > cos ω, and 0 otherwise.

In the present study each patch vector is associated with two oppositely located patches, and ω is a tunable parameter through which we can change the percentage of the sphere surface covered by the patches: ω = 90^∘ corresponds to irreversible aggregation with a pure isotropic square-well potential, and ω = 0^∘ corresponds to hard spheres. We start our simulation with an ensemble of N_tot randomly distributed spheres (each associated with a randomly oriented patch vector) of diameter σ = 1 in a box of length L, where the volume fraction is ϕ = (π/6) N_tot/L^3. We have used a cubic box of fixed length L = 50 with periodic boundary conditions in the present study.
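The patch-overlap criterion above translates directly into a short geometric test. The following is a minimal sketch in reduced units (σ = k_B = 1); the function name and array conventions are ours and not taken from the production code, and the absolute value implements the two oppositely located patches carried by each patch vector:

```python
import numpy as np

def pair_energy(r_i, r_j, v_i, v_j, u0, u1, sigma=1.0, eps=0.1, omega_deg=22.5):
    """Total pair energy U_tot for two two-patch Kern-Frenkel spheres.

    r_i, r_j : centre-of-mass positions; v_i, v_j : unit patch vectors.
    Returns np.inf for hard-core overlap and 0 outside the square well.
    """
    r_ij = np.asarray(r_j) - np.asarray(r_i)
    r = np.linalg.norm(r_ij)
    if r <= sigma:
        return np.inf                       # hard-core overlap
    if r > sigma * (1.0 + eps):
        return 0.0                          # outside the interaction range
    energy = -u0                            # isotropic square-well contribution
    r_hat = r_ij / r
    cos_w = np.cos(np.radians(omega_deg))
    # each patch vector carries two opposite patches, hence the |.|:
    # |r_hat . v_i| > cos(omega) covers both the +v_i and -v_i patches
    if abs(np.dot(r_hat, v_i)) > cos_w and abs(np.dot(r_hat, v_j)) > cos_w:
        energy -= u1                        # the patches overlap as well
    return energy
```

With ω = 90^∘ the patchy test is always satisfied and the model reduces to a plain square well, consistent with the limit noted above.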
The simulation procedure involves two steps: a cluster construction step and a movement step. In the cluster construction step, when two monomers or spheres are within the interaction range, a bond is formed with a probability α_0, while if a bond already exists it is broken with a probability β_0, so that P = α_0/(α_0+β_0). The relation connecting the temperature T with the probability P is P = 1 - exp(-u_0/(k_B T)), as already shown by Prabhu et al. <cit.>. The collection of all bonded spheres together is defined as a cluster. For the square-well system we define the reduced second virial coefficient B_2 = B_rep - B_att, where B_rep = 4 is the repulsive part coming from the hard-core repulsion between the spheres. B_att is the attractive part of the second virial coefficient due to the square well and is given by

B_att = 4 [(1+ϵ)^3 - 1] [exp(u_0/(k_B T)) - 1]

<cit.>, where T is the temperature and k_B is the Boltzmann constant, which is kept as unity in the present work. When two patches overlap, the probability to form a bond is α_p = 1 and the probability to break a bond is kept at β_p = 0, thus forming irreversible bonds. In the present study we have allowed multiple bonds per patch, which is quite different from previous studies, where an artificial constraint of a single bond per patch was employed.

In the movement step we randomly select a sphere 2N_tot times; the selected sphere is either rotated, whereby its patch vector is displaced randomly by an angular step of size s_R, or translated by a small step of size s_T in a random direction. Thus, on average, every particle undergoes both rotational and translational diffusion independently and in an uncorrelated manner, as already demonstrated by Prabhu et al. <cit.>. If a rotation or translation step leads to the breaking of a bond or to overlap with other spheres, that movement is rejected in our simulations. The step sizes have to be very small, as otherwise the rejection of movement steps would lead to nonphysical motion. It has already been shown that, for the parameters chosen in the present study, the translational step size s_T = 0.013 and the rotational step size s_R = 0.018 are the best choices to obtain the correct diffusional behavior <cit.>. After a cluster construction and a movement step are over, the simulation time t_sim is incremented. The relation between the physical time t and t_sim comes from the fact that for a free particle undergoing Brownian motion the mean squared displacement is given by <R^2> = 6 D_1^T t, where D_1^T = 1/6 is the translational diffusion coefficient of a single sphere. For a particle undergoing a random walk, <R^2> = t_sim s_T^2, where s_T is the constant step size and t_sim is the number of simulation steps taken. The time taken for a sphere to diffuse over its own diameter σ is t_0 = σ^2/(6 D_1^T), so the reduced time is given by t/t_0 = t_sim s_T^2. In the present work we have used three different B_att values:

* B_att = 0 : In this case the system is in good solvent conditions; there is no reversible isotropic interaction and the particles aggregate only through the irreversible bonding between the patches.

* B_att = 4 : As B_att increases, the quality of the solvent deteriorates, or in other words reversible isotropic aggregation also contributes to the structure. At this value of B_att the hard-core repulsion is balanced by the attractive part of the potential and B_2 = 0.
This condition corresponds to the Boyle temperature of the fluid.

* B_att = 12 : For the pure isotropic fluid, this B_att corresponds to the value where the gas-crystal binodal is observed <cit.>, and it can be considered a bad solvent condition.

We will be working with the above three B_att conditions, mainly for two different volume fractions, ϕ = 0.02 and ϕ = 0.2, and for two different ω values, 22.5^∘ and 45^∘. As a result of the irreversible patchy and reversible isotropic interactions, we are able to define three different types of clusters (Fig. <ref>):

* Spheres which interact only through the patches, i.e. spheres which are irreversibly connected to their neighbors, form what we call a P cluster, as shown in figure <ref>(a).

* Spheres which interact only through the isotropic potential form a different type of cluster, which we call an NPI cluster; see figure <ref>(b).

* Clusters formed as a result of both the patchy and the isotropic interaction are called PI clusters.

In the present work we run our simulation till the system forms a percolating cluster in the PI construction, i.e. a cluster that extends from one end of the box to the opposite end.

§ RESULTS AND DISCUSSION

In the present work we have used two different patch angles, ω = 22.5^∘ and ω = 45^∘, to study the aggregation of patchy particles, mainly at two different volume fractions, ϕ = 0.02 and ϕ = 0.2. The effect of the solvent condition was taken into account by changing B_att, as explained in the previous section. In Fig. <ref> we show the structures obtained from the simulations at B_att = 0, B_att = 4 and B_att = 12 for the two ω values 22.5^∘ and 45^∘ at ϕ = 0.02, after the same time t/t_0 = 1800, by which the kinetics of the system no longer evolves. In fig <ref>(a), with B_att = 0 and ω = 22.5^∘, we observe the formation of chains. As the patch angle is very small, it is not possible to form more than one bond per patch, due to the hard-core repulsion between the spheres; the reversible isotropic potential plays no part in the aggregation process since B_att = 0, so only chain formation is possible. In fig <ref>(b) we show the same system for B_att = 4; visual inspection itself reveals that the chains are longer than for B_att = 0. This is due to the presence of the isotropic attractive potential: the spheres stay in each other's range for a longer period of time than at B_att = 0, and as the particles diffuse within the bonds while also undergoing rotational diffusion, they are more likely to form irreversible bonds through the patches. For the case of B_att = 12 we observe that the reversible part of the potential plays a major role, which results in the transformation from chains to bundles, as can be observed in fig <ref>(c). In fig <ref>(d) we show the structure for B_att = 0 and ω = 45^∘, where we observe the presence of dense clusters, as the spheres are able to form multiple bonds per patch. For B_att = 4 the number of bonds per particle increases, as the reversible aggregation also contributes to the structure formation, and we observe denser clusters. For B_att = 12 the clusters might be expected to be denser still, which we do not observe in fig <ref>(f), as the dominant contribution to aggregation comes from the irreversible part of the potential: irreversible aggregation always leads to branching and forms fractal-type aggregates.
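The three solvent conditions translate into bond probabilities through the two relations quoted in the Model section. A short sketch of this arithmetic is given below; it assumes the square-well virial expression B_att = 4[(1+ϵ)^3 - 1][exp(u_0/(k_B T)) - 1] together with P = 1 - exp(-u_0/(k_B T)), and the function name is ours:

```python
def bond_probability(B_att, eps=0.1):
    """Bond probability P = alpha_0/(alpha_0 + beta_0) for a given B_att.

    Combining B_att = 4[(1+eps)^3 - 1][exp(u0/kT) - 1] with
    P = 1 - exp(-u0/kT) gives P = x/(1+x), where x = exp(u0/kT) - 1.
    """
    x = B_att / (4.0 * ((1.0 + eps) ** 3 - 1.0))  # x = exp(u0/kT) - 1
    return x / (1.0 + x)

for B_att in (0.0, 4.0, 12.0):
    print(f"B_att = {B_att:4.1f}  ->  P = {bond_probability(B_att):.3f}")
# B_att = 0 gives P = 0 (no reversible bonds, good solvent),
# B_att = 4 gives P ~ 0.75, and B_att = 12 gives P ~ 0.90.
```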
§.§ Kinetics of aggregation

In order to understand the kinetics of aggregation we have followed the average number of bonded neighbors Z as a function of time for the three different cluster construction types explained in the previous section. In Fig. <ref> we have plotted the average number of bonded neighbors for the P construction (clusters formed only through the patches) at ϕ = 0.02 for different B_att values. The aggregation process starts from randomly distributed spheres, and thus for all three B_att we start at the same value of Z_P, as the bonds between the patches are irreversible. If the distance between the centers of mass of two spheres is within 1+ϵ and the patches are also aligned, an irreversible bond is formed between the patches, while if the patches are not aligned but the spheres are within the interaction range, a reversible bond is formed. The average lifetime of such a reversible bond is given by 1/β_0, so as B_att increases the lifetime of the bond increases, which means the spheres remain in each other's interaction range for a longer time. As the spheres stay in each other's interaction range longer, their patch vectors have more opportunity to align, whereby they form an irreversible bond. As a result, in the P construction the kinetics of aggregation becomes faster as we increase the attraction strength. We observe that for B_att = 0 the number of bonded neighbors for the P construction stagnates at a value ∼ 2, which means that for ω = 22.5^∘ on average only two bonds per particle are formed, or in other words a patch is able to form only one bond.

In the inset of Fig. <ref> we have plotted the average number of neighbors connected to a sphere through either the irreversible or the reversible part of the potential (PI construction). For B_att = 0 and B_att = 4 the average number of neighbors reaches a steady-state value close to 2, signifying that there is no densification in these systems. Z_PI ∼ 2 also signifies that in the PI construction we have on average two bonds per particle, so the system will have only chain-like configurations for the smaller values of B_att. For B_att = 12 we observe that the average number of neighbors increases above 2, indicating that we have a locally denser system. This is expected, because the pure isotropic square-well counterpart (a system without the anisotropic patchy interaction) undergoes phase separation through nucleation and growth at B_att = 12 <cit.>. For comparison we have also plotted the case of a pure square-well potential undergoing irreversible DLCA <cit.>, shown as dotted lines in the inset of Fig. <ref>. Here we observe that the kinetics of aggregation of the irreversible pure isotropic square well and of the patchy particles are very different, indicating that the isotropic reversible part of the potential plays a major role in the aggregation phenomena as well as in the structure. To understand the role played by the reversible part of the potential, we have plotted in Fig. <ref> the evolution of the average number of neighbors (Z_NPI) formed only through the reversible part of the potential as a function of time for ϕ = 0.02 and ω = 22.5^∘. For B_att = 0 we have α_0 = 0, so no reversible bonds are formed, while for B_att = 4 and 12 we have both α_0 > 0 and β_0 > 0.
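The three coordination numbers discussed here follow directly from the bond bookkeeping. The sketch below shows one way to compute Z_P, Z_NPI and Z_PI from lists of bonded pairs; the data layout (sets of index pairs with i < j) is an assumption of ours, not the layout of the production code:

```python
import numpy as np

def average_coordination(n_particles, patch_bonds, iso_bonds):
    """Average bonded neighbours per sphere for the P, NPI and PI
    constructions, given the irreversible patchy bonds and the
    reversible isotropic bonds as sets of pairs (i, j), i < j."""
    z_p = np.zeros(n_particles)
    z_npi = np.zeros(n_particles)
    z_pi = np.zeros(n_particles)
    for i, j in patch_bonds:                # irreversible patchy bonds
        z_p[i] += 1; z_p[j] += 1
        z_pi[i] += 1; z_pi[j] += 1
    for i, j in iso_bonds:                  # reversible isotropic bonds
        z_npi[i] += 1; z_npi[j] += 1
        if (i, j) not in patch_bonds:       # count each neighbour once in PI
            z_pi[i] += 1; z_pi[j] += 1
    return z_p.mean(), z_npi.mean(), z_pi.mean()
```

Tracking these averages after every construction step, against the reduced time t/t_0 = t_sim s_T^2, produces curves of the type shown in Fig. <ref>.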
For the case of B_att = 4, the contribution of Z_NPI towards Z_PI is larger than that of Z_P in the initial stage of the aggregation process, t/t_0 < 0.2, which means that the initial aggregation process is dominated by the reversible part of the potential; at later times Z_NPI attains a constant value of 0.14. In Fig. <ref> we also observe that the kinetics of aggregation of the reversible part of the patchy particles and of the pure isotropic square-well potential are very similar, although for the patchy case the curve always lies below the pure square-well case, since in this calculation we do not consider particles that are bonded by irreversible bonds. After a time t/t_0 > 10, Z_NPI starts to decrease slightly as more and more particles become part of the P cluster, as is evident from Fig. <ref>, where Z_P keeps increasing; that is, the aggregation process becomes dominated by the irreversible aggregation of the patchy sites. For the case of B_att = 12 we observe that Z_NPI follows a trend similar to that of B_att = 4, although the average number of neighbors is higher than for B_att = 4, as the attraction strength is larger in the former case. We also observe that after a time t/t_0 > 30 the kinetics of aggregation is accelerated for the patchy particles at B_att = 12, indicating a chain-to-bundle transition. For the pure isotropic potential this upturn indicates gas-crystal phase separation, which we do not observe in Z_NPI, as that system is stuck in the metastable state, as shown by Babu et al. <cit.>. For the patchy particles we do not observe any kind of metastable state for the chain-to-bundle transition.

As observed in the irreversible aggregation of pure square-well fluids, the volume fraction of the system also plays a major role in determining the kinetics and the structure of the patchy-particle aggregates <cit.>. In figure <ref>(a) we have plotted the evolution of the average number of neighbors due to the patchy part of the potential, Z_P, as a function of time for ϕ = 0.02 and ϕ = 0.2 at ω = 22.5^∘. As we increase the volume fraction, the number of particles with favourably oriented patch vectors, as well as the number of particles within each other's interaction range, increases. This is the reason why Z_P starts at a higher value for ϕ = 0.2 than for ϕ = 0.02, both aggregation processes having been started from a random system. For the P construction we observe that Z_P ∼ 2, indicating that at the higher ϕ the patches are also able to form only one bond per patch when ω = 22.5^∘. The value of Z_P also reaches a steady-state value of ∼ 2 for B_att = 4 and B_att = 12, but lies slightly above it for B_att = 0, indicating that more patches have formed bonds and thereby the clusters have increased in size. For the NPI construction, see figure <ref>(b), the number of neighbors Z_NPI for B_att = 4 rises initially and then goes down before attaining a steady-state value, quite similar to the case of ϕ = 0.02. When the spheres are in each other's interaction range they may form a reversible bond; they then diffuse within the bonds until their patches align, whereby they form an irreversible patchy bond, and the number of reversible bonds is thus reduced. At the higher volume fraction and the weaker reversible attraction, B_att = 4, we form chains, and the chain length is larger than for B_att = 12, which forms bundles.
For the case of B_att = 12, even though the upturn in Z_NPI is not as evident as for ϕ = 0.02, the system still tries to densify, but it is hindered because the movement of the spheres is restricted by the increased number of particles in the system. As explained in the previous section, our simulations allow multiple bonds per patch, which means that the patch angle also plays a significant role in the kinetics as well as in the structure formed during the aggregation process. In Fig. <ref> we have plotted Z_P as a function of the reduced time for ω = 45^∘ at ϕ = 0.02. B_att = 0 has the slowest kinetics and attains a steady-state value for t/t_0 > 50. As expected, B_att = 12 has a higher number of patchy neighbors than the other B_att values: as the spheres are connected through the reversible part of the potential, they are more likely to diffuse within the bonds and form irreversible patchy bonds. In all three cases Z_P attains a steady-state value greater than 2, indicating that multiple bonds are formed per patch. Since Z_P > 2, we do not observe any chain formation for the larger ω value, but rather more globular structures. In the inset we have shown the evolution of Z_PI as a function of time; in all cases it goes to the same steady-state value, which means that for larger patch angles the irreversible part of the potential dominates, in contrast to ω = 22.5^∘. At early times Z_PI for B_att = 4 and B_att = 12 starts from a higher value than for B_att = 0, as the number of bonded neighbors is increased by the reversible interaction. For B_att > 0 the value of Z_PI will always be smaller than for the case of pure isotropic irreversible DLCA. The evolution nevertheless follows the pure isotropic DLCA case very closely, indicating that the system tries to evolve in the same way but is restricted by the finite patch size; on increasing ω further we would converge towards the irreversible pure isotropic DLCA system <cit.>.

In Fig. <ref> we have plotted Z_NPI as a function of the reduced time at ϕ = 0.02 for a purely reversibly aggregating square-well system and for the reversible part of the patchy system with ω = 45^∘. We observe that at early times the contribution of the isotropic part of the potential is not significant compared to the irreversible aggregation of the patchy particles, as the maximum steady-state value is less than 0.9, which means that on average less than one particle is reversibly connected to each particle. The particles which were connected through the reversible part of the potential later diffuse and form irreversible patchy bonds, which is the reason for the dip we observe in Z_NPI. The small clusters formed then diffuse and densify further into a globular form, whereby Z_NPI starts to increase, even though the rate of increase is very slow. The increase in Z_NPI for ω = 45^∘ is characterized by the increase in the size of the clusters, as observed in figure <ref>(e), whereas for ω = 22.5^∘ the sudden increase in Z_NPI leads to the chain-to-bundle transition.

We have already shown that for ω = 22.5^∘ chains are formed irrespective of the B_att used. In Fig. <ref> we have plotted the average chain length as a function of time for a volume fraction ϕ = 0.02. For calculating the chain length we identify a sphere with only one bonded neighbor through the patches; the neighboring sphere will have two neighbors through the patches if it is connected to a chain.
We keep counting the connected neighbors through the patches till we reach a monomer with only one bond, which is the opposite end of the chain. The average chain length is defined as <l> = ∑ m_l N(m_l)/∑ N(m_l), where m_l is the mass of a chain and N(m_l) is the size distribution of chain lengths. For the case of B_att = 0 the average chain length reaches a constant value of 9.7, which is consistent with the predictions of Sciortino et al. <cit.>. For the cases B_att > 0 studied here, the chain length keeps increasing, indicating that as the reversible interaction between the particles increases, the chains are longer at early times, as can be seen by comparing Fig. <ref> and Fig. <ref>. At longer times we observe that the chain length for B_att = 4 crosses over that for B_att = 12, i.e. on average we have longer chains for the weaker attraction. This may seem counter-intuitive, but what we observe is that for B_att = 12 there is a chain-to-bundle transition, while for B_att = 4 the system remains as chains. For B_att = 4, particles or chains get attached to other chains and then have enough time either to break away or to diffuse and form irreversible patchy bonds, thereby increasing the chain length. For B_att = 12 we observe similar kinetics, but as the attraction between the particles is higher, once they form part of a chain they start to aggregate into bundles.

§.§ Structural Analysis

To identify how the spheres are distributed around the patches, we have calculated the probability of occurrence g(θ) of the angle between the patch vectors of two bonded particles (the area under the curve has been normalized to one). If both patch vectors point in the same direction, the angle between them is zero, while if they face each other the angle between them is defined as 180^∘. The distribution g(θ) of the angles defined between all the patch vectors in the system is quite similar to the pair correlation function, where we use angles instead of positions. In Fig. <ref> we observe that for B_att = 0 and B_att = 4, for ω = 22.5^∘ at ϕ = 0.02, only two angles are highly probable, 20^∘ and 160^∘. We already know that Z_PI < 2, which means that on average we form one bond per patch, or in other words we have chain conformations. We also observe that around the angle of 40^∘, g(θ) converges to zero faster for B_att = 0 and B_att = 4 than for B_att = 12. A tail is developed in the distribution for B_att = 12 close to 40^∘ and 140^∘, which is the signature of bundling at very low concentration. Tavares et al. <cit.> have shown that for B_att = 0 the most probable angle observed for ω = 22.5^∘ turns out to be 22^∘, which agrees with our results as well. In Fig. <ref> we have plotted the distribution g(θ) for three different ϕ and B_att values at ω = 22.5^∘. Here we observe that even for B_att = 0 all the angles between 40^∘ and 140^∘ become possible at moderate to high volume fractions. As the volume fraction increases, the rearrangement of the patches as well as the diffusion of the clusters becomes difficult, as either may lead to bond breakage or to overlap with the neighboring spheres. Also, the maximum at the most probable angle that we observed at low volume fraction comes down and shifts inwards for moderate and high ϕ values, as the particles maximize the reversible bonds, thus going to a lower energy state.
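The distribution g(θ) discussed above is a straightforward histogram over bonded pairs. A minimal sketch of this analysis is given below; the array layout and function name are assumptions of ours, and the normalization (unit area, with θ in degrees) matches the convention stated above:

```python
import numpy as np

def patch_angle_distribution(patch_vectors, bonded_pairs, n_bins=90):
    """Histogram g(theta) of the angle between patch vectors of bonded
    pairs, with the area under the curve normalized to one.

    patch_vectors : (N, 3) array of unit patch vectors v_i.
    bonded_pairs  : iterable of index pairs (i, j).
    """
    angles = []
    for i, j in bonded_pairs:
        c = np.clip(np.dot(patch_vectors[i], patch_vectors[j]), -1.0, 1.0)
        angles.append(np.degrees(np.arccos(c)))   # 0 parallel, 180 facing
    g, edges = np.histogram(angles, bins=n_bins, range=(0.0, 180.0), density=True)
    centres = 0.5 * (edges[:-1] + edges[1:])
    return centres, g
```

For chains at ω = 22.5^∘ such a histogram is sharply peaked near 20^∘ and 160^∘, while the tail near 40^∘ and 140^∘ signals bundling, as discussed above.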
At these moderate and high volume fractions we also observe another peak appearing at 90^∘, which means that the particles form bundles and these bundles branch out, similar to a spaghetti-like structure. In the present study, even though multiple bonds are allowed per patch, ω = 22.5^∘ was able to form only one bond per patch. For the case of ω = 45^∘, as shown in Fig. <ref>, we observe that the peaks which existed for ω = 22.5^∘ are no longer prominent, and that the maxima have shifted to 60^∘ and 120^∘ for all the B_att and ϕ values. This can be understood from the fact that for ω = 45^∘ the patches are allowed to form multiple bonds. The patchy particles try to maximize the number of irreversible bonds, so the most probable angles change from those of ω = 22.5^∘. It has already been shown that the average number of bonded neighbors per particle increases from 2 to 4; see Fig. <ref>. As the attraction increases, another probable angle appears at 90^∘ for all the volume fractions we have studied. This is due to the reversible part of the potential, since for B_att = 0 (see also Fig. <ref>) we observe a minimum at 90^∘ for all the volume fractions considered in the present study. We can thereby conclude that the reversible part of the potential favors branching of the clusters, as observed for ω = 22.5^∘.

In the present work we have shown that for ω = 22.5^∘ the patchy particles always form chains for B_att = 0 and B_att = 4, whereas at B_att = 12 we observe that the chains aggregate together to form bundles. It was suggested by Huisman et al. <cit.> that this transformation is similar to the sublimation transition of polymers; they have shown, using Monte Carlo simulations, that the transition happens at a specific temperature. In the present work we also observe a very sharp transition from chains to bundles. The kinetics of aggregation in the NPI cluster construction gives us a clear indication that the transition follows a nucleation-and-growth type mechanism. We also observe that the chain length at B_att = 4, where only chains are formed, is larger than at B_att = 12, where bundling happens. This is contrary to the work of Huisman et al. <cit.>, because in their case the individual particles can break and re-form bonds, while in the present study we have irreversible bonds between the patches. Once the bonds are formed among the patches, the particles grow as chains for ω = 22.5^∘, and these chains aggregate together to form bundles. For a chain to grow further, two chains have to align themselves, and the probability for two chains to align is very small, due to the hindrance created by the other clusters around them.

The chain-to-bundle transition is also believed to be the origin of many neurodegenerative diseases like Alzheimer's. There, monomers aggregate to form oligomers, which then transform into a bundle configuration, commonly called amyloid fibers. These fibers are chemically stable, quite similar to our present model, where the irreversible bonds give the structures formed a stable conformation. The fibers then aggregate together to form a percolating cluster, or gel, depending on the pH of the solution; this is very similar to the present model, where we start with monomers which aggregate together to form chains, which then transform into bundles depending on the interaction strength, or in other words the quality of the solvent. Amyloid fibers form helical bundles similar to the individual arms of the gel formed in our system at ϕ = 0.02, ω = 22.5^∘ and B_att = 12; see figure <ref>(c).
The present model gives a good qualitative description of the different processes involved in amyloid fiber formation; a more quantitative study will be reported in the future.

§ CONCLUSIONS

We have used Brownian cluster dynamics to investigate the kinetics of aggregation and the structure of the resulting aggregates for the two-patch system, varying the patch size and the solvent condition. In the model studied, the anisotropic potential is complemented by an isotropic square-well potential, and we have also defined three different types of clusters. We have observed that for small patches we form chain-like configurations, and that the average chain length increases on increasing the isotropic interaction from B_att = 0 to B_att = 4. For B_att = 12 the average chain length is smaller than for B_att = 4 as a result of the chain-to-bundle transition via a nucleation-and-growth mechanism. When the size of the patch was increased to ω = 45^∘, we observed globular structures instead of chain-like configurations. We have shown that for small patch size (ω = 22.5^∘) the structure and the kinetics of aggregation are dominated by the reversible isotropic interaction, while for larger patch size (ω = 45^∘) they are dominated by the irreversible patchy interaction; on further increasing the patch size, the system will tend towards isotropic irreversible DLCA aggregation. For ω = 22.5^∘ the bond-angle distribution around a single particle showed that only two bond angles are highly probable, indicating chain formation; on further increasing the attraction, the distribution developed a tail, indicating the chain-to-bundle transition. For ω = 45^∘ we observed contributions from other angles as well, giving us a more globular conformation. It will be interesting to study how the kinetics of aggregation and the structure of the aggregates change on making the anisotropic interactions also reversible, which will be addressed in future work.

§ ACKNOWLEDGEMENTS

We would like to thank the HPC facilities Padum and Badal of IIT Delhi for providing us the necessary computational resources.

[Huisman et al.(2008)Huisman, Bolhuis, and Fasolino]huisman2008phase B. Huisman, P. Bolhuis, A. Fasolino. Phys. Rev. Lett. 100, 188301(2008). [Whitesides and Grzybowski(2002)Whitesides, and Grzybowski]whitesides2002self G. M. Whitesides, B. Grzybowski. Science 295, 2418–2421(2002). [Bancroft et al.(1967)Bancroft, Hills, and Markham]bancroft1967study J. Bancroft, G. Hills, R. Markham. Virology 31, 354–379(1967). [Salunke et al.(1989)Salunke, Caspar, and Garcea]salunke1989polymorphism D. Salunke, D. Caspar, R. Garcea. Biophys. J. 56, 887–900(1989). [Rombaut et al.(1990)Rombaut, Vrijsen, and Boeyé]rombaut1990new B. Rombaut, R. Vrijsen, A. Boeyé. Virology 177, 411–414(1990). [Prevelige et al.(1993)Prevelige, Thomas, and King]prevelige1993nucleation P. Prevelige, D. Thomas, J. King. Biophys. J. 64, 824–835(1993). [Mateu(2013)]mateu2013assembly M. G. Mateu. Arch. Biochem. Biophys. 531, 65–79(2013). [Chiti and Dobson(2006)Chiti, and Dobson]chiti2006protein F. Chiti, C. M. Dobson. Annu. Rev. Biochem. 75, 333–366(2006). [Woodard et al.(2014)Woodard, Bell, Tipton, Durrance, Cole, Li, and Xu]woodard2014gel D. Woodard, D. Bell, D. Tipton, S. Durrance, L. Cole, B. Li, S. Xu. PloS One 9, e94789(2014). [Benetatos et al.(2013)Benetatos, Terentjev, and Zippelius]benetatos2013bundling P. Benetatos, E. M. Terentjev, A. Zippelius. Phys. Rev. E 88, 042601(2013). [Glotzer and Solomon(2007)Glotzer, and Solomon]glotzer2007dimensions S. Glotzer, M. Solomon. Nat. Mater. 6, 557–562(2007).
[Dorsaz et al.(2011)Dorsaz, Thurston, Stradner, Schurtenberger, and Foffi]dorsaz2011phase N. Dorsaz, G. M. Thurston, A. Stradner, P. Schurtenberger, G. Foffi. Soft Matter 7, 1763–1776(2011). [Valadez-Pérez et al.(2012)Valadez-Pérez, Benavides, Schöll-Paschinger, and Castañeda-Priego]valadez2012phase N. E. Valadez-Pérez, A. L. Benavides, E. Schöll-Paschinger, R. Castañeda-Priego. J. Chem. Phys. 137, 084905(2012). [Abramo et al.(2012)Abramo, Caccamo, Costa, Pellicane, Ruberto, and Wanderlingh]abramo2012effective M. Abramo, C. Caccamo, D. Costa, G. Pellicane, R. Ruberto, U. Wanderlingh. J. Chem. Phys. 136, 01B610(2012). [Gratzel(2001)]gratzel2001photoelectrochemical M. Gratzel. Nature 414, 338(2001). [Langer and Tirrell(2004)Langer, and Tirrell]langer2004designing R. Langer, D. A. Tirrell. Nature 428, 487(2004). [Champion et al.(2007)Champion, Katare, and Mitragotri]champion2007making J. A. Champion, Y. K. Katare, S. Mitragotri. Proc. Natl. Acad. Sci. U.S.A. 104, 11901–11904(2007). [Liddell et al.(2003)Liddell, Summers, and Gokhale]liddell2003stereological C. Liddell, C. Summers, A. Gokhale. Mater. Charact. 50, 69–79(2003). [Zhang et al.(2005)Zhang, Keys, Chen, and Glotzer]zhang2005self Z. Zhang, A. S. Keys, T. Chen, S. C. Glotzer. Langmuir 21, 11547–11551(2005). [Halverson and Tkachenko(2013)Halverson, and Tkachenko]halverson2013dna J. D. Halverson, A. V. Tkachenko. Phys. Rev. E 87, 062310(2013). [Coluzza et al.(2013)Coluzza, van Oostrum, Capone, Reimhult, and Dellago]coluzza2013design I. Coluzza, P. D. van Oostrum, B. Capone, E. Reimhult, C. Dellago. Soft Matter 9, 938–944(2013). [Liu et al.(2016)Liu, Tagawa, Xin, Wang, Emamy, Li, Yager, Starr, Tkachenko, and Gang]liu2016diamond W. Liu, M. Tagawa, H. L. Xin, T. Wang, H. Emamy, H. Li, K. G. Yager, F. W. Starr, A. V. Tkachenko, O. Gang. Science 351, 582–586(2016). [Wang et al.(2012)Wang, Wang, Breed, Manoharan, Feng, Hollingsworth, Weck, and Pine]wang2012colloids Y. Wang, Y. Wang, D. R. Breed, V. N. Manoharan, L. Feng, A. D. Hollingsworth, M. Weck, D. J. Pine. Nature 491, 51–55(2012). [Feng et al.(2013)Feng, Dreyfus, Sha, Seeman, and Chaikin]feng2013dna L. Feng, R. Dreyfus, R. Sha, N. C. Seeman, P. M. Chaikin. Adv. Mater. 25, 2779–2783(2013). [Di Michele and Eiser(2013)Di Michele, and Eiser]di2013developments L. Di Michele, E. Eiser. Phys. Chem. Chem. Phys. 15, 3115–3129(2013). [Wertheim(1987)]wertheim1987thermodynamic M. Wertheim. J. Chem. Phys. 87, 7323–7331(1987). [Sciortino et al.(2007)Sciortino, Bianchi, Douglas, and Tartaglia]sciortino2007self F. Sciortino, E. Bianchi, J. F. Douglas, P. Tartaglia. J. Chem. Phys. 126, 194903(2007). [Chen et al.(2011)Chen, Whitmer, Jiang, Bae, Luijten, and Granick]chen2011supracolloidal Q. Chen, J. K. Whitmer, S. Jiang, S. C. Bae, E. Luijten, S. Granick. Science 331, 199–202(2011). [Munaò et al.(2013)Munaò, Preisler, Vissers, Smallenburg, and Sciortino]munao2013cluster G. Munaò, Z. Preisler, T. Vissers, F. Smallenburg, F. Sciortino. Soft Matter 9, 2652–2661(2013). [Preisler et al.(2013)Preisler, Vissers, Smallenburg, Munaò, and Sciortino]preisler2013phase Z. Preisler, T. Vissers, F. Smallenburg, G. Munaò, F. Sciortino. J. Phys. Chem. B 117, 9540–9547(2013). [Vissers et al.(2014)Vissers, Smallenburg, Munaò, Preisler, and Sciortino]vissers2014cooperative T. Vissers, F. Smallenburg, G. Munaò, Z. Preisler, F. Sciortino. J. Chem. Phys. 140, 144902(2014). [Shireen and Babu(2017)Shireen, and Babu]shireen2017lattice Z. Shireen, S. B. Babu. J. Chem. Phys. 147, 054904(2017).
[Preisler et al.(2014)Preisler, Vissers, Munaò, Smallenburg, and Sciortino]preisler2014equilibrium Z. Preisler, T. Vissers, G. Munaò, F. Smallenburg, F. Sciortino Soft Matter 10, 5121–5128(2014). [Bianchi et al.(2006)Bianchi, Largo, Tartaglia, Zaccarelli, and Sciortino]bianchi2006phase E. Bianchi, J. Largo, P. Tartaglia, E. Zaccarelli, F. Sciortino. Phys. Rev. Lett. 97, 168301(2006). [Foffi and Sciortino(2007)Foffi, and Sciortino]foffi2007possibility G. Foffi, F. Sciortino. J. Phys. Chem. B 111, 9702–9705(2007). [Bianchi et al.(2008)Bianchi, Tartaglia, Zaccarelli, and Sciortino]bianchi2008theoretical E. Bianchi, P. Tartaglia, E. Zaccarelli, F. Sciortino. J. Chem. Phys. 128, 144504(2008). [Bianchi et al.(2011)Bianchi, Blaak, and Likos]bianchi2011patchy E. Bianchi, R. Blaak, C. N. Likos. Phys. Chem. Chem. Phys. 13, 6397–6410(2011). [Noya et al.(2014)Noya, Kolovos, Doppelbauer, Kahl, and Bianchi]noya2014phase E. G. Noya, I. Kolovos, G. Doppelbauer, G. Kahl, E. Bianchi, Soft Matter 10, 8464–8474(2014). [Kalyuzhnyi et al.(2015)Kalyuzhnyi, Bianchi, Ferrari, and Kahl]kalyuzhnyi2015theoretical Y. V. Kalyuzhnyi, E. Bianchi, S. Ferrari, G. Kahl. J. Chem. Phys. 142, 114108(2015). [Ferrari et al.(2015)Ferrari, Bianchi, Kalyuzhnyi, and Kahl]ferrari2015inverse S. Ferrari, E. Bianchi, Y. V. Kalyuzhnyi, G. Kahl. J. Phys. Condens. Matter 27, 234104(2015). [Noya and Bianchi(2015)Noya, and Bianchi]noya2015phase E. G. Noya, E. Bianchi. J. Phys: Condens. Matter 27, 234103(2015). [Wolters et al.(2015)Wolters, Avvisati, Hagemans, Vissers, Kraft, Dijkstra, and Kegel]wolters2015self J. R. Wolters, G. Avvisati, F. Hagemans,T. Vissers, D. J. Kraft, M. Dijkstra, W. K. Kegel. Soft Matter 11,1067–1077(2015). [Hatch et al.(2015)Hatch, Mittal, and Shen]hatch2015computational H. W. Hatch, J. Mittal, V. K. Shen. J. Chem. Phys. 142, 164901(2015). [Ferrari et al.(2017)Ferrari, Bianchi, and Kahl]ferrari2017spontaneous S. Ferrari, E. Bianchi, G. Kahl, Nanoscale 9,1956–1963(2017). [Dudowicz et al.(2003)Dudowicz, Freed, and Douglas]dudowicz2003lattice J. Dudowicz, K. F. Freed, J. F. Douglas. J. Chem. Phys. 119, 12645–12666(2003). [Rah et al.(2006)Rah, Freed, Dudowicz, and Douglas]rah2006lattice K. Rah, K. F. Freed, J. Dudowicz, J. F. Douglas. J. Chem. Phys. 124, 144906(2006). [Liu et al.(2007)Liu, Kumar, and Sciortino]liu2007vapor H. Liu, S. K. Kumar, F. Sciortino. J. Chem. Phys. 127, 084902(2007). [Dudowicz et al.(2009)Dudowicz, Douglas, and Freed]dudowicz2009exactly J. Dudowicz, J. F. Douglas, K. F. Freed. J. Chem. Phys. 130, 224906(2009). [Li et al.(2009)Li, Gunton, and Chakrabarti]li2009simple X. Li, J. Gunton, A. Chakrabarti. J. Chem. Phys. 131, 09B614(2009). [Audus et al.(2016)Audus, Starr, and Douglas]audus2016coupling D. J. Audus, F. W. Starr, J. F. Douglas. J. Chem.Phys. 144,074901(2016).[Babu et al.(2006)Babu, Rottereau, Nicolai, Gimel, and Durand]babu2006flocculation S. Babu, M. Rottereau, T. Nicolai, J. C. Gimel, D. Durand.The Eur. Phy. J. E 19, 203–211(2006).[Kastelic et al.(2015)Kastelic, Kalyuzhnyi, Hribar-Lee, Dill, and Vlachy]kastelic2015protein M. Kastelic, Y. V. Kalyuzhnyi, B. Hribar-Lee, K. A. Dill, V. Vlachy Proc. Natl. Acad. Sci. U.S.A. 112, 6766–6770(2015). [Prabhu et al.(2014)Prabhu, Babu, Dolado, and Gimel]prabhu2014brownian A. Prabhu, S. B. Babu, J. S. Dolado, J. C. Gimel. J. Chem. Phys. 141, 024904(2014). [Babu et al.(2009)Babu, Gimel and Nicolai]babu2009crystallization S. Babu, J. C. Gimel, T. Nicolai. J. Chem. Phys. 130, 064504(2009). [Kihara(1953)]kihara1953virial T. Kihara. 
Rev Mod Phys 25, 831(1953). [Rottereau et al.(2005)Rottereau, Gimel, Nicolai, and Durand]rottereau2005influence M. Rottereau, J. C. Gimel, T. Nicolai, D. Durand. Eur. Phys. J. E 18, 15–19(2005). [Babu et al.(2008)Babu, Gimel, Nicolai, and De Michele]babu2008influence S. Babu, J. C. Gimel, T. Nicolai, C. De Michele. J. Chem. Phys. 128, 204504(2008). [Babu et al.(2006)Babu, Gimel, and Nicolai]babu2006phase S. Babu, J. C. Gimel, T. Nicolai J. Chem. Phys. 125, 184512(2006). [Kern and Frenkel(2003)Kern, and Frenkel]kern2003fluid N. Kern, D. Frenkel. J. Chem. Phys. 118, 9882–9889(2003). [Babu et al.(2008)Babu, Gimel, and Nicolai]babu2008diffusion S. Babu, J. C. Gimel, T. Nicolai. Eur. Phys. J. E 27, 297–308(2008). [Tavares et al.(2012)Tavares, Rovigatti, and Sciortino]tavares2012quantitative J. M. Tavares, L. Rovigatti, F. Sciortino, F. J. Chem. Phys. 137, 044901(2012).
http://arxiv.org/abs/1710.09255v1
{ "authors": [ "Isha Malhotra", "Sujin B. Babu" ], "categories": [ "cond-mat.soft" ], "primary_category": "cond-mat.soft", "published": "20170927105631", "title": "Aggregation kinetics of irreversible patches coupled with reversible isotropic interaction leading to chains, bundles and globules" }
A new approach for short-spacing correction of radio interferometric data sets

S. Faridani^1 (corresponding author: [email protected]), F. Bigiel^2, L. Flöer^1, J. Kerp^1, S. Stanimirović^3

^1 Argelander-Institut für Astronomie (AIfA), Universität Bonn, Auf dem Hügel 71, 53121 Bonn, Germany
^2 Institut für theoretische Astrophysik, Zentrum für Astronomie der Universität Heidelberg, Albert-Ueberle Str. 2, 69120 Heidelberg, Germany
^3 Department of Astronomy, University of Wisconsin-Madison, 475 North Charter Street, Madison, WI 53706, USA

Received — ; accepted —

The short-spacing problem describes the inherent inability of radio-interferometric arrays to measure the integrated flux and structure of diffuse emission associated with extended sources. New interferometric arrays, such as the SKA, require solutions to efficiently combine interferometer and single-dish data. We present a new and open-source approach for merging single-dish and cleaned interferometric data sets that requires a minimum of data manipulation while offering a rigorous flux determination and the full high angular resolution. Our approach combines single-dish and cleaned interferometric data in the image domain. This approach is tested for both Galactic and extragalactic data sets. Furthermore, a quantitative comparison of our results to commonly used methods is provided. Additionally, for the interferometric data sets of NGC 4214 and NGC 5055, we study the impact of different imaging parameters as well as their influence on the combination for NGC 4214. The approach does not require the raw data (visibilities) or any additional special information such as antenna patterns. This is advantageous especially in the light of upcoming radio surveys with heterogeneous antenna designs.

§ INTRODUCTION Future radio-interferometer arrays will enable a new generation of 21 cm line surveys for studying different scientific aspects of galaxy dynamics and evolution as well as the interstellar medium (ISM) <cit.>. Various surveys will be conducted with the Australian Square Kilometre Array Pathfinder <cit.>, such as the GASKAP survey <cit.>, or with MeerKAT <cit.>, the South African SKA pathfinder, such as the LADUMA survey <cit.>. These instruments will not only measure the gas distribution of the Milky Way with high angular resolution, but will also investigate the gas content of galaxies in the local universe <cit.>. New technologies such as focal plane arrays increase the survey speed by more than an order of magnitude <cit.>. With these improvements, large-scale surveys at arcsecond angular resolution become feasible. For these new facilities, one of the major issues will be handling the huge amount of data <cit.>. For these surveys, long-term storage of the raw data (visibilities) is not planned and most likely not feasible. Automated on-the-fly data reduction pipelines and parameterized source-finding algorithms will be used <cit.> to extract science-ready data from these observations. All these new facilities are interferometric arrays, which are, by design, subject to the so-called short-spacing problem <cit.>. The lack of very short baselines leads to insensitivity to emission on large angular scales.
This is particularly an issue when observing the neutral Galactic ISM or diffuse HI halos around galaxies <cit.>. In this respect, single-dish telescopes will still be important to study Galactic emission or the diffuse ISM in nearby galaxies. Single dishes provide the missing short-spacing information, which can be added to an interferometric observation. This process is called the short-spacing correction (SSC) <cit.>. The aim of this procedure is to recover the integrated flux density and diffuse emission, as measured with a single dish, while preserving the high angular resolution of an interferometer. Interferometry is an important observational approach for wavelengths from the sub-millimeter and millimeter regimes to the cm regime and beyond. The SSC is important and desirable especially for instruments such as ALMA and NOEMA <cit.>, where the combination of interferometric and single-dish data sets is indispensable and constantly used. However, the combination of single-dish and interferometric data sets is non-trivial. Typically, SSC implementations are either developed for a specific purpose or data set, or require the user to adjust various parameters <cit.>. This situation motivated us to perform a detailed investigation of standard SSC schemes, which eventually led to the development of a new approach that operates in the image domain. Moreover, this new approach is also ideally suited for online use on the large data sets from future facilities. In addition, methods using the Fast Fourier Transformation (FFT) inherently assume periodicity of the signal. This may cause artifacts produced by structures close to the edge of, or even beyond, the primary beam of the radio interferometer. This is a common case for observations of extended Galactic structures, where a significant amount of bright emission is located at the borders of each map <cit.>. We also investigate differences between existing methods for the SSC from the perspective of the observer. Analyzing real observations rather than simulations allows us to test our approach under inherently realistic conditions (artifacts, calibration offsets, radio frequency interference (RFI), unstable baselines, etc.). In particular, we use observations of the Small Magellanic Cloud <cit.> and different interferometric data cubes of NGC 4214 and NGC 5055 from The HI Nearby Galaxy Survey (THINGS) <cit.>. These different objects, with large and variable angular extents on the sky, provide ideal test cases for our method. Additionally, we investigate the impact of two different imaging parameters, i.e., the weighting scheme and the pixel size, on the resulting interferometric data and on the combination. We investigate the changes of different characteristics of the interferometric map, in particular the flux density, as a function of the aforementioned imaging parameters. These parameters are of great importance for the combination, since the characteristics of each interferometric map affect the result of the combination significantly. Furthermore, long-term storage of interferometric raw data is not feasible for the upcoming interferometric facilities; therefore, the data reduction process cannot be repeated arbitrarily. The structure of the paper is as follows: Section <ref> provides a brief introduction to the principles of synthesis imaging, the short-spacing problem, and the principal approaches to perform the SSC. Section <ref> presents the details of the SMC, NGC 4214, and NGC 5055 data. Section <ref> describes our approach for performing the SSC in the image domain.
Section <ref> presents the evaluation of our approach. Section <ref> comprises a comparison of the results of the combination for the SMC data sets using three different methods. Section <ref> discusses the interferometric imaging parameters (weighting scheme and pixel size), and Section <ref> discusses their impact on the flux distribution of the resulting synthesized image and on the SSC. Section <ref> summarizes our results and provides an outlook regarding possible future work. § SYNTHESIS IMAGING AND MISSING SPACINGS The smallest angular scale that can be resolved decreases with increasing frequency and telescope diameter. In synthesis imaging, the signals from a large number of medium-sized telescopes are combined. In principle, the largest separation between the array dishes (the longest baseline) determines the best achievable angular resolution. The synthesized image presents a best model of the true sky intensity distribution I^(ν)(l,m) of a source as a function of the direction cosines l,m. The final image is reconstructed from a non-uniformly sampled visibility function V^(ν)(u,v) via the Fourier transformation, where the observation is conducted at a specific frequency ν (line observations) or over a finite frequency range Δν (continuum observations). Equation <ref> describes the relationship between the visibility V^(ν)(u,v) and the brightness distribution I^(ν)(l,m): V(u,v) = ∬ I(l,m) A(l,m) e^-2π i(ul+vm) dl dm, where A(l,m) is the primary beam of a single antenna of the array at a certain frequency ν. Visibilities V(u_ij(t), v_ij(t)) are measured in the (u,v)-domain, where each sample is the cross-correlation of the incoming signals for an antenna pair (i,j); (u_ij(t), v_ij(t)) is referred to as a baseline b. Henceforward, we neglect the degradation of the visibilities due to integration time and finite bandwidth <cit.>. An interferometer samples the visibility space at discrete points given by the array properties. In this case, one writes: V^(ν)_obs(u, v) = V^(ν)_true(u, v) · S^(ν)(u, v), with the sampling function S^(ν)(u,v) = 1 for sampled (u,v), and S^(ν)(u,v) = 0 otherwise. Using the inverse Fourier transformation and the convolution theorem, one retrieves the dirty image I^D(ξ, η) and the dirty beam; the latter is the inverse Fourier transformation of the sampling function S^(ν)(u,v): 𝔉^-1(V^(ν)_obs(u,v)) = 𝔉^-1(V^(ν)_true(u,v) · S^(ν)(u,v)), I^D(ξ, η) = 𝔉^-1(V^(ν)_true(u,v)) ∗ 𝔉^-1(S^(ν)(u,v)). V^(ν)_true(u,v) and V^(ν)_obs(u,v) are the true and observed visibilities, and the "∗" symbol denotes the convolution of the two inverse Fourier transformations. Measuring as many data points as possible in the (u,v)-plane is essential for an interferometric observation. In practice, however, it is impossible to position antennas at arbitrarily many locations, and the entire (u,v)-plane can never be sampled. Each missing baseline means that certain spatial frequencies are not measured; thus, the integral is not uniquely solvable and can only be determined approximately. Aperture synthesis makes use of the Earth's rotation to increase the (u,v)-coverage. Mosaicing is another approach to improve the (u,v)-sampling, where a measurement consists of a concatenation of different pointings <cit.>. Before the interferometric data can be used for scientific purposes, deconvolution is necessary. Deconvolution tries to reconstruct the true brightness distribution from the limited sample of visibilities.
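These relations can be made concrete with a short numerical sketch; the snippet below is purely illustrative and not part of any pipeline discussed here. It builds the observed visibilities as the product of the true visibilities with a binary sampling function and obtains the dirty image as the inverse FFT of that product; the grid size and the annular sampling mask are arbitrary choices for the demonstration.

    import numpy as np

    # Toy "true sky": two point sources on a 256 x 256 grid.
    sky = np.zeros((256, 256))
    sky[100, 120] = 1.0
    sky[150, 90] = 0.5

    # True visibilities: FFT of the sky (ignoring the primary beam A(l,m)).
    vis_true = np.fft.fftshift(np.fft.fft2(sky))

    # Binary sampling function S(u,v): an annulus mimicking the missing
    # short spacings (central hole) and a finite longest baseline (outer cut).
    u, v = np.meshgrid(np.arange(256) - 128, np.arange(256) - 128)
    uvdist = np.hypot(u, v)
    sampling = (uvdist > 8) & (uvdist < 100)

    # Observed visibilities, dirty image, and dirty beam.
    vis_obs = vis_true * sampling
    dirty_image = np.fft.ifft2(np.fft.ifftshift(vis_obs)).real
    dirty_beam = np.fft.ifft2(np.fft.ifftshift(sampling.astype(float))).real

    # Because the (u,v) = (0,0) cell is not measured, the dirty image
    # integrates to zero: the total flux information is lost.
    print(dirty_image.sum())   # ~0 up to floating-point noise
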
The most common deconvolution technique is the CLEAN algorithm as introduced by <cit.>, together with its variants <cit.>.§.§ Short-Spacing Problem (SSP) For any interferometer, the central region of the (u,v)-plane is never sampled (u=v=0). This is due to the physical size of the dishes and their minimal separation, which is referred to as the shortest baseline. The incompleteness of the (u,v)-coverage at low spatial frequencies, known as the short-spacing problem (SSP), leads to an insensitivity of interferometers towards emission on large angular scales. The point u=v=0 contains the total power information; the total flux density of a source is given by: V(0,0) = ∬ I(l,m) dl dm = ∫ I dΩ = S_tot. Note that the integrated flux density in the dirty image is zero <cit.>. By cleaning, part of the integrated flux density can be reconstructed. The effect of short spacings is negligible for objects that are small in comparison to the extent of the primary beam. For Galactic objects and nearby galaxies, however, which are large extended structures, the lack of sensitivity towards low spatial frequencies is a severe shortcoming. One specific example is the diffuse, low-column density, extended disks around galaxies <cit.>. The interferometric observations of such objects suffer from so-called negative bowls: an image degradation that arises due to the lack of information on emission from large angular scale structures during the imaging process. To overcome the missing-spacing problem, data from a single-dish telescope are required to fill the gap, because single dishes measure the total power. For the combination, the (u,v) overlap region of both instruments is of great importance <cit.>: here, both instruments are sensitive to the object's structure (Fig. <ref>). The most common techniques for adding the missing spacings are part of the major astronomical data reduction packages (e.g., IMERG in AIPS <cit.>, feather in CASA <cit.>, and immerge in MIRIAD <cit.>), where the combination occurs in the Fourier domain using single-dish and deconvolved interferometric data. The combination can also be performed prior to deconvolution; in this case a proper combined beam is necessary, as shown by <cit.>. Next, we introduce the data sets that are used to compare our combination method in the image domain to these established techniques.§ OBSERVATIONS AND DATA In this section, we present the Small Magellanic Cloud (SMC), NGC 4214, and NGC 5055 data sets. The SMC data sets presented here stem from <cit.>, who used observations obtained with the 64 m Parkes telescope and the Australia Telescope Compact Array (ATCA) <cit.>. We use the SMC data sets to evaluate the performance of our approach. Furthermore, we use NRAO VLA observations of NGC 4214 <cit.> and NGC 5055 <cit.>, obtained as part of the THINGS survey <cit.>, in order to demonstrate the impact of the imaging parameters (pixel size and weighting scheme) on the interferometric data. We do not use the SMC data sets to study the impact of different imaging parameters, since this data set is an interferometric mosaic consisting of 320 pointings <cit.>. The (u,v)-coverage of this observation is very complex and therefore inappropriate for the purpose of our study, whereas the interferometric observations of NGC 4214 and NGC 5055 are single pointings.§.§ The SMC observations and data The Small Magellanic Cloud (SMC) is a nearby dwarf galaxy located at a distance of approximately 60 kpc <cit.>. The measured mass of the SMC varies between 3.4 × 10^8 M_⊙ and 5.5 × 10^8 M_⊙ <cit.>.
The variations in the measured masses are probably caused by the different fields of view of the various observations. The galaxy reveals a complex morphology with a non-symmetric shape (Fig. <ref>). Various studies show that the galaxy has a strong filamentary structure, with small, compact clumps embedded in a considerable amount of diffuse gas <cit.>. This is obvious in the interferometric and single-dish observations of the galaxy. The presence of both warm and cold components, in combination with the nearby location, makes it an ideal test object for various SSC methods. While the single-dish observation reveals the non-symmetric shape of the SMC, the interferometric observation shows a wealth of small-scale structures. Both observations cover an area of approximately 20 degrees^2 <cit.>. The angular resolution of the single-dish data is 18.8'; the corresponding value for the interferometric data is 98". The spectral resolution of the interferometric and regridded single-dish data cubes is 1.65 km s^-1, with heliocentric velocities 88 ≤ v_helio ≤ 216 km s^-1. The measured rms noise levels in the low- and high-resolution data sets are ≈ 145 m and ≈ 18 m, respectively. The quality of both the single-dish and the interferometric data is of great importance for the combination. Therefore, <cit.> performed different calibration and data editing measures (e.g., removing solar interference) to obtain the best possible image for both data sets. Both the Parkes and ATCA images were tapered by multiplying with a function which smoothly decreases the image intensities to zero near the edges. This is important, since the Parkes image in particular has non-zero emission observed all over the map: sharp edges produce strong horizontal and vertical ringing (spikes) in the center of the Fourier plane <cit.> after Fourier transforming <cit.>.§.§ NGC 4214 and NGC 5055 observations and data In the following, we present the different data sets of the NGC 4214 and NGC 5055 observations. The reason for this selection is that these galaxies are located at different distances and have substantial differences in their morphology, extent, and physical parameters <cit.>; Table <ref> presents the physical and observational parameters of these two galaxies. They also differ in the amount of faint diffuse gas present in and around each galaxy <cit.>. Furthermore, these galaxies are well separated from Galactic emission in velocity <cit.>. For each of these galaxies, four different interferometric data cubes have been imaged. New data sets have been produced based on the VLA raw data from the THINGS observations. The data reduction has been performed using the THINGS pipeline (Bigiel, priv. comm.), and the details of the imaging process are presented by <cit.>. Note that all the data presented in this work are corrected for the primary beam efficiency. These four data cubes differ in the applied weighting scheme and pixel size. For consistency, the pixel sizes and the number of pixels along the R.A. and Dec. axes have been chosen such that all the data sets have the same FoV (≈ 0.4 degree). For each galaxy, four data sets are imaged using the robust parameters 5 and 0.5. While a robust parameter of 5 is nearly pure natural weighting and achieves data sets with higher sensitivity, the latter robust parameter produces data sets with higher angular resolution. Henceforward, we refer to these data sets as NA for naturally (robust parameter 5) and UN for uniformly (robust parameter 0.5) weighted, respectively.
Note that the quoted robust parameters are those of AIPS and differ for other astronomical software frameworks. For each pair of data sets with natural and robust weighting, two different pixel sizes of 1.5" and 3" are chosen. Table <ref> summarizes the important characteristics of both the NGC 4214 and NGC 5055 interferometric data sets. The last column shows the measured total flux in the unmasked velocity-integrated intensity maps. Note that both the angular resolutions of the data sets and the measured integrated flux densities change depending on the applied weighting scheme and the chosen pixel size.§ THE NEW SSC APPROACH APPLIED TO THE SMC DATA We present our new and open-source approach for combining single-dish and cleaned interferometric data sets in the image domain [The code is publicly available. <https://bitbucket.org/snippets/faridani/pRX6r>]. In the first step, both input FITS[Flexible Image Transport System (FITS) is a standardized file format commonly used for storing astronomical data.] files (the low- and high-resolution data sets) are imported, and the angular resolutions of the single-dish and interferometric data are retrieved from the corresponding FITS headers. Additionally, we check whether both data sets have compatible brightness units. Since the low- and high-resolution data sets have different coordinate systems and projections, regridding is required. Regridding is the process of interpolation from a specific coordinate grid to a different one. The chosen default regridding scheme is linear interpolation, which conserves surface brightness/intensity; this is a crucial factor for the combination. Linear interpolation is prone to block-like artifacts, and the significance of the artifacts is higher when the difference in the grid resolution of the low- and high-resolution data is large. The need for interpolation can be circumvented if an appropriate pixel grid is chosen during the single-dish data reduction process (due to the lower resolution of the single-dish data, these are often on a coarser grid, i.e., the angular size of each pixel is larger). Note that if the intensities are in units of Jy/beam, the units are different for the interferometric and the single-dish data sets (Jy/beam_int and Jy/beam_sd, respectively). This is the reason why Eq. <ref> contains the correction factor α, which is the ratio of the interferometric and the single-dish beam areas: α = beam area_int / beam area_sd. Furthermore, it is important to note that for data sets in units of K, Jy/pixel, Jy/arcsec^2, etc., the factor α is not necessary and must be omitted from Eq. <ref>. Our pipeline recognises and treats all these units appropriately. For the determination of the missing flux, the interferometric data are convolved with a two-dimensional, normalized Gaussian kernel such that the angular resolution of the convolved interferometric data matches that of the single-dish data. The difference between the convolved interferometric and the regridded single-dish data cube is proportional to the missing flux, i.e., the missing information that the interferometer lacks. The additional flux is added to the interferometric data set, and the combined data set is exported. Mathematically, this is written as follows: I_missing = I_sd^reg - I_int^conv, I_comb = I_int + α · I_missing. I_missing is the missing flux observed only by the single dish, I_sd^reg is the regridded single-dish data set, and I_int^conv is the convolved interferometric data set as described before. The combination I_comb is the result of the summation of the interferometric data set I_int and the missing flux multiplied by α.
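To make the procedure explicit, a minimal single-channel sketch is given below. It is an illustration rather than the published Python/CASA pipeline: it assumes that the single-dish map has already been regridded onto the interferometric pixel grid, that both maps are in Jy/beam with circular Gaussian beams, and that the unit bookkeeping follows Eqs. <ref> as written.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    FWHM_TO_SIGMA = 1.0 / (2.0 * np.sqrt(2.0 * np.log(2.0)))

    def ssc_combine(im_int, im_sd_regrid, fwhm_int, fwhm_sd, pix):
        # im_int       : cleaned interferometric image (Jy/beam_int)
        # im_sd_regrid : single-dish image regridded to the same pixel grid
        #                (Jy/beam_sd)
        # fwhm_int     : FWHM of the (circular) clean beam, arcsec
        # fwhm_sd      : FWHM of the single-dish beam, arcsec
        # pix          : pixel size, arcsec

        # Gaussian kernel that degrades the interferometric resolution to
        # the single-dish resolution (the FWHMs add in quadrature).
        fwhm_kern = np.sqrt(fwhm_sd**2 - fwhm_int**2)
        im_int_conv = gaussian_filter(im_int, FWHM_TO_SIGMA * fwhm_kern / pix)

        # Ratio of the two beam solid angles (Eq. for alpha above).
        alpha = fwhm_int**2 / fwhm_sd**2

        # Flux seen only by the single dish, added back to the
        # interferometric image.
        im_missing = im_sd_regrid - im_int_conv
        return im_int + alpha * im_missing
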
Figure <ref> shows the data flow of the developed short-spacing approach, where the solid lines show the data flow and the dotted lines the information retrieved from the headers. Our code is written in Python and makes use of existing CASA tasks and image tools <cit.>.§ EVALUATION OF THE SSC METHOD FOR SMC DATA The measured total flux densities in the low- and high-resolution data cubes are 4.5 × 10^5 and 1.4 × 10^5, respectively; hence, the interferometer measures less flux than the single dish. The corresponding value in the combined data cube is 4.5 × 10^5. The interferometer alone only receives about 30% of the total flux. The recovered angular resolution in the combined data is 98". The mean rms noise level in the combined data cube is about 20 m, which is slightly higher than the corresponding value in the high-resolution data set, but considerably lower than the value of the low-resolution data cube. Nevertheless, it is important to recall that the noise in both the interferometric and the combined map is a strong function of the considered angular scales; the noise level is directly related to the sampling of the (u,v)-plane. Figure <ref> shows velocity-integrated maps of the SMC. Panel (a) shows the flux density map of ATCA, panel (b) that of the combination, and panel (c) quantifies the relative contribution of emission gained by the SSC. Apparently, there is a considerable amount of diffuse extended structure in the SMC, demonstrating how significantly the interferometric observation of the SMC suffers from the negative bowls <cit.>. Additionally, the cumulative flux as a function of radial separation from the center of the map (Fig. <ref>, panel a) as well as sum spectra (Fig. <ref>, panel b) are calculated for all three data sets. In both panels, the blue line presents the regridded low-resolution data (Parkes_reg), the green line the high-resolution data (ATCA), and the red line the combined data. The measured cumulative flux density shows that the result of the combination is in line with the measured values from the regridded Parkes data at all radii. This is also true for the flux density values in panel (b), where the total flux density is determined separately for each spectral channel. Note the strong deviation between the measured values in the first channels of the sum spectra (Fig. <ref>, panel b) for the single-dish and interferometric data sets: here the interferometer receives significantly more flux than the single dish. The origin of this deviation could not be conclusively determined from the data at hand; however, these first few channels are mainly noise dominated. The flux difference can be a result of using the MEM algorithm for cleaning the interferometric data cube, as discussed in <cit.>. The combination results demonstrate the importance of the zero-spacing correction for the determination of the physical and morphological properties of the objects. The results also show that the total flux in the combined map is quantitatively consistent with the total flux measured with the Parkes telescope, whereas the angular resolution of the ATCA data set is preserved.
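The radial cumulative-flux diagnostic used in this evaluation is straightforward to reproduce. The schematic below (function and variable names are illustrative, not from the pipeline) sums a velocity-integrated map within ever larger circular apertures around the map center, so that negative bowls appear as a declining curve at large radii.

    import numpy as np

    def cumulative_flux_profile(mom0, pix, radii):
        # mom0  : 2D velocity-integrated intensity map
        # pix   : pixel size in arcmin
        # radii : aperture radii in arcmin (monotonically increasing)
        ny, nx = mom0.shape
        y, x = np.indices((ny, nx))
        r = np.hypot(x - nx / 2, y - ny / 2) * pix  # radius of each pixel
        return np.array([mom0[r <= rmax].sum() for rmax in radii])

    # A profile that rises, peaks, and then declines towards large radii
    # indicates negative bowls around the source (missing short spacings).
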
§ A COMPARISON OF DIFFERENT SSC APPROACHES We present a comparison of the results of our combination method with two other common approaches. These are introduced in Sects. <ref> and <ref>, respectively; the results for the SMC data sets are discussed in Sect. <ref>.§.§ Combination before deconvolution (CBD) This method makes use of the linearity of the Fourier transform, which allows one to perform the combination in the image domain before deconvolution. The result of the combination is a combined dirty image, which then needs to be deconvolved using an appropriate combined beam. The following equations describe the method mathematically: I_comb^D = (I_int^D + α · f_cal · I_sd^D) / (1 + α), B_comb = (B_int + α · B_sd) / (1 + α). I_int^D is the interferometric dirty image, I_comb^D the combined dirty image, and B_comb the combined synthesized beam. α is a factor accounting for the resolution difference between the interferometric and single-dish data. f_cal is the measured calibration factor of the flux-density scales of the interferometric and single-dish data; it is retrieved from the overlap region of the single-dish and interferometric data (compare Fig. <ref>) and represents the systematic calibration difference between interferometer and single dish. For the presented SMC data sets, the value is f_cal = 1.05 ± 0.05. For the combined data set, the deconvolution is performed in MIRIAD using the maximum entropy algorithm <cit.>. This method requires both the visibilities and exact knowledge of the single-dish and interferometric antenna patterns; the former is not a well-determined quantity.§.§ Combination in the Fourier domain (Feather) Feathering and its variations are the most commonly used approaches to perform the SSC, where the combination occurs in the Fourier (spatial frequency) domain. The feather task in CASA operates in a similar fashion as the immerge task in MIRIAD and the IMERG task in AIPS <cit.>. The combination method can be summarized as follows: First, both the single-dish and the imaged interferometric data cubes are Fourier transformed. Second, the Fourier transforms of the regridded single-dish and the interferometric data are tapered with two tapering functions w'(k) and w''(k). The sum of the two tapering functions w'(k) and w''(k) is a Gaussian function with a FWHM value equal to that of the interferometric image <cit.>; this ensures that the interferometric angular resolution is preserved after the combination. For the combination, the single-dish data are deconvolved. The deconvolution is necessary since the single-dish data also have an antenna pattern. The deconvolved visibilities can be retrieved via: V(u,v) = V_sd(u,v) / b_sd(u,v), where V_sd(u,v) are the single-dish visibilities. In this case, the single dish is considered as an interferometer with an infinitely large number of receiving elements and a monotonically decreasing distribution of baselines from zero to D_sd, where D_sd is the diameter of the single-dish telescope <cit.>, and b_sd(u,v) is the antenna pattern of the single dish. The resulting visibilities from Eq. <ref> need to be rescaled by the scaling factor f_cal as described before. The combination is then: V_comb(k) = w'(k) · V_int(k) + f_cal · w''(k) · V(k), where V_int(k) denotes the Fourier transform of the interferometric image. After the combination, the result is transformed back to the image domain <cit.>. For feathering it is important that the input images have a well-defined beam shape. The deconvolution step (Eq. <ref>) also requires care: the Fourier transform of the beam approaches zero for large values of k, and as a result the noisier high spatial frequencies are amplified. To minimize this effect, an appropriate tapering function has to be applied. We used the feather task in CASA to perform the combination in the Fourier domain. The inputs are the regridded single-dish and the interferometric data, observed with the 64 m Parkes telescope and the ATCA (Sect. <ref>). The applied scaling factor f_cal for the single-dish data is ∼ 1 (Sect. <ref>).
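The essence of this Fourier-domain combination can be sketched compactly. The snippet below is a schematic of the feathering idea, not the CASA implementation: it assumes both images share one pixel grid and unit convention and that the single-dish beam is Gaussian, and it exploits the fact that the beam deconvolution and a taper w'' = b_sd cancel analytically, which avoids the noise amplification at large k mentioned above.

    import numpy as np

    def feather(im_int, im_sd, fwhm_sd_pix, f_cal=1.0):
        # im_int      : interferometric image
        # im_sd       : regridded single-dish image (same grid and units)
        # fwhm_sd_pix : single-dish beam FWHM in pixels
        # f_cal       : relative flux calibration factor (about 1.05 here)
        ny, nx = im_int.shape
        ky = np.fft.fftfreq(ny)[:, None]
        kx = np.fft.fftfreq(nx)[None, :]
        k2 = kx**2 + ky**2

        # Fourier transform of a Gaussian single-dish beam, b_sd(k).
        sigma_im = fwhm_sd_pix / (2.0 * np.sqrt(2.0 * np.log(2.0)))
        b_sd = np.exp(-2.0 * (np.pi * sigma_im)**2 * k2)

        # The single dish supplies the low spatial frequencies directly,
        # while w' = 1 - b_sd keeps the interferometer's high spatial
        # frequencies.
        V_comb = (1.0 - b_sd) * np.fft.fft2(im_int) \
                 + f_cal * np.fft.fft2(im_sd)
        return np.fft.ifft2(V_comb).real
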
§.§ Results of combination for the SMC data sets Figure <ref> shows the velocity-integrated maps for all three combination methods. The left panel is the CBD result, the middle map the combination in the spatial frequency domain applying feathering, and the right panel the result of our combination approach. The amount of recovered integrated flux density in the velocity-integrated maps is very similar in all three cases, and the values are consistent with the corresponding value from the regridded single-dish data. Both CBD and feathering are sensitive to bright isolated structures within the primary beam and close to its rim. Thus, the single-dish data need to be tapered prior to the combination <cit.>, which leads to a lower final angular resolution of the scientific data. This is not mandatory for our combination method, since this method does not require any Fourier transform. Figure <ref> shows the power spectral density (PSD) profiles of two combined data sets. The green line shows the PSD profile of the combined data set using our approach, the blue line that of the combined data set produced with the feather task in CASA. Both methods show very similar results at intermediate and high spatial frequencies. However, there is a difference in the amount of measured power at the lowest spatial frequencies, where feathering shows higher values. It is unclear what causes this difference, but since feathering involves tapering and deconvolution, we suspect that these operations may change the flux at the largest scales; it might be related to the tapering functions w' and w'' introduced in Eq. <ref>.§ SYNTHESIZED IMAGING PARAMETERS An interferometric image is a model, or best guess, of the true brightness distribution of the object. Different choices and strategies during the imaging process affect the final result substantially. In the current section, we discuss the impact of two imaging parameters, pixel size and weighting scheme, on the reduced interferometric data.§.§ Pixel size For the Fourier transformation, the visibilities need to be brought onto a regular grid. This is realized by convolving the raw data with a specific gridding kernel. We chose to Nyquist sample the data, and thus the pixel size is set to ≈ 1/(2·√(2)) of the FWHM <cit.>.§.§ Weighting scheme For an interferometric observation, the density of the sampling points in the (u,v)-plane is not uniform and varies with the observing time. Because of redundancy, the coverage of the central regions of the (u,v)-plane is commonly more complete than that of its outer regions. Different weighting schemes have been introduced to emphasize different regions of the (u,v)-plane <cit.>. For a specific weighting scheme, Eq. <ref> can be modified as follows: 𝔉^-1(V^(ν)_obs(u,v)) = 𝔉^-1(V^(ν)_true(u,v) · S^(ν)(u,v) · W(u,v)), where W(u,v) describes the applied weighting scheme. Consequently, the dirty beam and the clean beam change through the multiplication with W(u,v). For interferometric observations, each visibility sample is given a weight during the imaging process (see Eq. <ref>). Different weighting schemes give a trade-off between higher sensitivity and higher angular resolution <cit.>. In the following we briefly introduce the relevant weighting schemes: Natural weighting emphasizes all visibilities equally; for this weighting scheme W(u,v) ∝ 1. In uniform weighting, the weights are inversely proportional to the density N of the sampling function, W(u,v) ∝ 1/N. The latter scheme emphasizes the less sampled long baselines, resulting in a higher noise level in the reconstructed interferometric image; it yields, however, a better resolution, i.e., a smaller synthesized beam, compared to natural weighting. Robust weighting parameterizes the weighting function with a single parameter R to vary between the natural and uniform weighting schemes. By varying this parameter, images with sensitivities close to those of naturally weighted maps, but with angular resolutions closer to those of uniform weighting, are obtained <cit.>.
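The three schemes can be sketched in a few lines of Python. The density estimate and, in particular, the robust normalization below are schematic (the exact Briggs normalization differs between packages), so the snippet is an illustration rather than any package's implementation.

    import numpy as np

    def imaging_weights(u, v, scheme="natural", robust=0.0, ngrid=256):
        # u, v : baseline coordinates of the visibility samples
        # Grid the samples to estimate the local sampling density N(u,v).
        extent = 1.05 * max(np.abs(u).max(), np.abs(v).max())
        H, _, _ = np.histogram2d(u, v, bins=ngrid,
                                 range=[[-extent, extent], [-extent, extent]])
        iu = np.clip(((u + extent) / (2 * extent) * ngrid).astype(int),
                     0, ngrid - 1)
        iv = np.clip(((v + extent) / (2 * extent) * ngrid).astype(int),
                     0, ngrid - 1)
        density = H[iu, iv]

        if scheme == "natural":        # W ∝ 1: best sensitivity
            w = np.ones_like(u, dtype=float)
        elif scheme == "uniform":      # W ∝ 1/N: best angular resolution
            w = 1.0 / density
        else:                          # robust: smooth interpolation between
            s2 = (5.0 * 10.0**(-robust))**2 / density.mean()
            w = 1.0 / (1.0 + density * s2)
        return w / w.sum()
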
§ IMPACT OF IMAGING PARAMETERS ON THE INTERFEROMETRIC AND SSC DATA In this section, we discuss the influence of the visibility weighting scheme and the pixel size on the resulting interferometric image and on the SSC. Figure <ref> demonstrates the effect of different weighting schemes on the resulting sampling function as well as on the final interferometric image for the NGC 4214 data sets. The presented results stem from the same visibilities and have the same FoV, but differ in the applied weighting schemes. The arrangement of the figure is as follows: the top panels show the result of a Fast Fourier Transformation (FFT) of the cleaned interferometric observations of NGC 4214 with different weighting schemes. The top left panel shows the gridded (u,v)-coverage for natural weighting, the top right panel for robust weighting. The bottom panels show the velocity-integrated maps of the same observation for the natural and uniform weighting schemes, respectively. The chosen pixel size for this observation is 3 arcseconds. In the top left panel, the more numerous visibilities at small (u,v)-distances lead to a lower rms noise level and a higher sensitivity towards structures on large angular scales, whereas the weighting scheme applied in the top right panel increases the weight of the visibilities at large (u,v)-distances, resulting in a higher noise level. The latter weighting scheme puts the emphasis on the small angular scale structures and achieves a better angular resolution compared to the former one. The measured beam size for the naturally weighted data set is about 20 arcseconds, whereas the corresponding value for the uniformly weighted data is about 10 arcseconds.§.§ Flux variations as a function of weighting scheme and pixel size For all interferometric data sets of NGC 4214 and NGC 5055, the cumulative flux is measured as a function of radial separation from the center of the map within ever larger radii. Thus, the value measured at the largest radius corresponds to the integrated flux density of each data set. Panel (a) of Fig. <ref> shows the measured flux densities for all data sets of NGC 4214, and Fig. <ref> for NGC 5055, respectively. For the NGC 4214 data sets, the measured total flux increases in the central regions. The maximum value is measured at a radius of about 8 arcminutes for all four data sets; at this radius, the bulk of the emission from the galaxy is measured. At larger radii the cumulative flux decreases. This is due to the deep negative bowls around the galaxy, which are the result of the missing spacings as described in Sect. <ref>. Note the significant difference between the total fluxes measured in the two data sets with natural weighting (panel a, red and green lines) and the values measured in the data cubes with robust weighting (panel a, blue and black lines).
For a given weighting scheme, the measured total flux is higher for the data sets with the larger pixel size (in this case, 3 arcseconds). The effect of the applied weighting scheme can be summarized as follows: the uniform (robust 0.5) weighting scheme puts the emphasis on the long baselines. The sampling points in these regions of the (u,v)-coverage are sparser than in the central regions, which results in a higher noise level and a lower sensitivity towards large-scale structures. Additionally, the amplitude of the sidelobes in the synthesized antenna pattern is higher, which results in deeper negative bowls around the structure. Note the difference in the measured total flux between data sets with natural and uniform weighting: the total fluxes measured in the naturally weighted data sets are significantly higher than those of the uniformly weighted data sets. The overlap region between the single-dish and the interferometric observation also decreases significantly for uniform weighting. This is an important factor for the combination; therefore, uniformly weighted data sets are less appropriate for the combination. Note that the effect of different pixel sizes is smaller than that of the applied weighting scheme, but not negligible. The analysis demonstrates that the effect of the pixel size is purely a smoothing effect: a larger pixel grid smoothes the visibilities. Panel (b) of Fig. <ref> shows the PSD results for the different NGC 4214 data sets. The PSD profiles show that, for a given pixel size, we measure higher power at lower spatial frequencies for the naturally weighted data sets, corresponding to higher power on larger angular scales. The trend, however, changes at higher spatial frequencies, where the uniformly weighted data sets (with their smaller synthesized beam) recover more emission. The smoothing resulting from a larger pixel size suppresses both low and high spatial frequencies, where the PSD profiles reveal less power (red and cyan lines compared to blue and green lines). The result also shows that the smoothing affects the measured total flux; however, it is scale independent. The smoothing decreases the amplitude of the negative bowls around the bright structures (Sect. <ref>); therefore, the total fluxes measured in the data sets with the larger pixel size are higher for a given weighting scheme (panel (a) of Fig. <ref>). Overall, our case study provides an idea of the magnitude of the effect of changing the pixel size or the weighting scheme on the final flux distribution in these data sets. Figure <ref> shows the resulting flux profiles for NGC 5055. The curves for the NGC 5055 data sets are quite different from those of the NGC 4214 data sets (panel (a) of Fig. <ref>). The most significant difference is the strong increase of the measured total flux values for the naturally (robust 5) weighted data sets, whereas the corresponding values are constant for the uniformly (robust 0.5) weighted data. This result suggests that the interferometric array retrieves more information on the different scales and that, therefore, the negative bowls are flatter compared to those of NGC 4214. It is important to mention that the position as well as the shape and depth of the negative bowls depend on the distance, extent, and orientation of the galaxy on the sky, as well as on the antenna pattern of the interferometric array. The negative bowls arise when the interferometer lacks large-scale information. NGC 5055 is located at a larger distance than NGC 4214 (Table <ref>).
This suppresses the amplitude of the negative bowls, and is the reason why the drops visible in Fig. <ref> are smaller than those present in panel (a) of Fig. <ref>.§.§ SSC for NGC 4214 The current section presents the result of the combination for all four NGC 4214 data sets as described in Sect. <ref>. The SSC for NGC 5055 will be presented in a follow-up paper, including yet deeper and more extended observations from the HALOGAS survey <cit.>. The SSC is performed using the combination method described in Sect. <ref>. The missing spacings are provided by the Effelsberg-Bonn HI Survey (EBHIS) <cit.>. The angular resolution of the EBHIS data cube is 10.8'; the corresponding values in the VLA and combined data sets vary approximately between 6" and 20". The spectral resolution of the regridded EBHIS, VLA, and combined cubes is about 1.3 km s^-1. Figure <ref> shows the result of the combination for the naturally weighted NGC 4214 data set with a pixel size of 3". The left panel shows the velocity-integrated VLA map, the right panel that of the combination. Note the prominent negative bowls around the structure in panel (a). Figure <ref> shows the measured cumulative fluxes for the combined (blue), VLA (green), and regridded Effelsberg (red) data cubes of the different NGC 4214 data sets. The maximum radius probed by the observation is 12.4' and is marked with a dashed line. We distinguish between naturally and uniformly weighted data with the designations NA and UN, respectively. For both naturally weighted data sets, the flux density measured at the largest radius, i.e., the accumulated flux across the entire map, is in good agreement with the corresponding value measured in the regridded EBHIS data. For the uniformly weighted data, the flux densities measured in the combined maps are smaller than the values measured in the regridded EBHIS data; note, however, that these values are still significantly higher than the values measured in the VLA data alone. The difference reveals that the galaxy contains a considerable amount of diffuse gas which cannot be traced by the interferometer. It is apparent that within the inner radii the measured flux density for the VLA data is higher than that of the Effelsberg data; these regions are dominated by structures on small angular scales (compared to the EBHIS beam). For all data sets, the measured flux densities of the VLA data reveal a steep drop at larger radii. This is the result of the aforementioned negative bowls around the structure. These regions correspond to structures on large angular scales, where the Effelsberg data provide the missing information and compensate for this effect. The results demonstrate that the Effelsberg data can overcome the short-spacing problem and provide the missing short-spacing data. They also show that the naturally weighted data sets lead to better results regarding the short-spacing correction; this is of great importance if the focus is on measuring total fluxes or studying extended structures.§ SUMMARY AND OUTLOOK The new era of radio astronomy will be characterized by new large interferometric arrays. However, the observations of extended Galactic objects as well as of many nearby galaxies performed with these new instruments will be subject to the short-spacing problem (SSP).
This is due to the fact that interferometric arrays are not sensitive to the emission on the largest angular scales, which is important for studying the extended and diffuse gas component. Additionally, data handling is an important aspect for current and next-generation facilities: due to the huge amount of raw data produced by such arrays, long-term storage of the raw data is not feasible. In this paper, a new combination method is introduced to perform the short-spacing correction (SSC) in the image domain. The method operates on reduced, science-ready data. The only inputs are the single-dish and interferometric data cubes as FITS files and the corresponding telescope beams; additional information such as visibilities or dirty beam images is not required. This is a key advantage for the observations of future telescopes such as ASKAP and WSRT/APERTIF, as the method can operate on-the-fly as part of online data processing tools. The comparison with other methods shows that our approach yields results very similar to those of feathering and of the combination before deconvolution. Moreover, no Fourier transformation is performed, as the method operates directly in image space. Thus, the resulting combined data product is not subject to aliasing if strong emission is present at the border of the interferometric map. The crucial step in the pipeline is the regridding of the single-dish data, where interpolation inaccuracies can cause flux inconsistencies or induce artifacts. However, this can be efficiently circumvented if an appropriate pixel grid is chosen during the single-dish data reduction process, such that the difference in the grid resolution of the low- and high-resolution data is not large. We present archival deep observations of the SMC carried out with the 64 m Parkes telescope and the ATCA <cit.> and the result of their combination. The result of the combination underlines the importance of the SSC for nearby, extended galaxies with a considerable amount of structure on large angular scales. It also shows that the combination method meets the expectations regarding the measured flux density and the angular resolution. Another important consideration is the choice of the imaging parameters for interferometric data sets. This topic is of great importance, since re-imaging is not possible if the raw data are not stored, as will be the case for future facilities producing large data rates. We study the impact of two imaging parameters, the weighting scheme and the pixel size, on the reconstructed synthesized image for the two nearby galaxies NGC 4214 and NGC 5055 from the THINGS survey <cit.>. Our analysis shows that, as expected, the synthesized images reconstructed from the same raw data can have significantly different properties (e.g., resolution, noise level, sensitivity towards extended structures) depending on the chosen parameters. We also perform the SSC for NGC 4214; in this case the single-dish data are provided by the Effelsberg-Bonn HI Survey <cit.>. The results show that, for the purpose of the short-spacing correction, the naturally weighted interferometric data set is the more appropriate choice. The lead author is grateful to the Deutsche Forschungsgemeinschaft (DFG) for support under grant numbers KE757/7-1-3. Frank Bigiel acknowledges support from DFG grant BI1546/1-1. The lead author is very thankful to Tobias Röhser and the referee for their very useful comments and suggestions. Based on observations performed with the 64 m Parkes telescope.
Based on observations performed with the Australia Telescope Compact Array (ATCA). Based on observations performed with the 100-m Effelsberg telescope. Based on observations with the Very Large Array (VLA).
http://arxiv.org/abs/1709.09365v1
{ "authors": [ "S. Faridani", "F. Bigiel", "L. Floeer", "J. Kerp", "S. Stanimirovic" ], "categories": [ "astro-ph.IM", "astro-ph.GA" ], "primary_category": "astro-ph.IM", "published": "20170927072902", "title": "A new approach for short-spacing correction of radio interferometric data sets" }
Accepted; Received

Spectroscopic surveys require fast and efficient analysis methods to maximize their scientific impact. Here we apply a deep neural network architecture to analyze both SDSS-III APOGEE DR13 and synthetic stellar spectra. When our convolutional neural network model (StarNet) is trained on APOGEE spectra, we show that the stellar parameters (temperature, gravity, and metallicity) are determined with similar precision and accuracy to the APOGEE pipeline. StarNet can also predict stellar parameters when trained on synthetic data, with excellent precision and accuracy for both APOGEE data and synthetic data, over a wide range of signal-to-noise ratios. In addition, the statistical uncertainties in the stellar parameter determinations are comparable to the differences between the APOGEE pipeline results and those determined independently from optical spectra. We compare StarNet to other data-driven methods; for example, StarNet and the Cannon 2 show similar behaviour when trained with the same datasets, however StarNet performs poorly on small training sets like those used by the original Cannon. The influence of the spectral features on the stellar parameters is examined via partial derivatives of the StarNet model results with respect to the input spectra. While StarNet was developed using the APOGEE observed spectra and corresponding ASSET synthetic data, we suggest that this technique is applicable to other wavelength ranges and other spectral surveys.

techniques: spectroscopic - methods: numerical - surveys - infrared: stars - stars: fundamental parameters

§ INTRODUCTION Spectroscopic surveys provide a homogeneous database of stellar spectra that are ideal for machine learning applications. A variety of techniques have been studied in the past two decades. These range from the SDSS SEGUE stellar parameter pipeline, using a decision-tree architecture to arbitrate between spectral matching to a synthetic grid and line-index measurements <cit.>, to the detailed algorithms employed for the analysis of the Gaia spectroscopic data <cit.>. The use of artificial neural networks in astrophysical applications has a history going back more than 20 years, with pioneering research in stellar classification by authors such as <cit.> and <cit.>. In <cit.> and <cit.>, a neural network was applied to synthetic stellar spectra to predict the effective temperature T_eff, surface gravity log g, and metallicity [Fe/H]. More recently, dramatic improvements have occurred in the usability and performance of the algorithms implemented in machine learning software. This, combined with the increase in computing power and the availability of large data sets, has led to the successful implementation of more complex neural network architectures, which have proven to be pivotal in difficult image recognition tasks and natural language processing.
The earlier attempts at neural networks for stellar spectra analyses <cit.> would train the neural network on synthetic spectra and also test the model on synthetic spectra. Machine learning methods were also used in one of the SEGUE pipelines <cit.>, where two neural networks were trained: one on synthetic spectra and the other on previous SEGUE parameters. In this paper, we examine several cases of training on synthetic spectra and predicting stellar parameters for both synthetic and observed spectra, or training and predicting on only observed spectra in a purely data-driven approach. The methods we use follow the supervised learning approach, where a representative subset of stellar spectra (either synthetic or observed) with known stellar parameters is selected. This subset can be further divided into a reference set, from which the neural network learns the mapping function from spectra to stellar parameters, and a test set, used for ensuring the accuracy of the predictions. Once the network model is trained (i.e. with fixed network parameters), the stellar parameters can be predicted for the rest of the sample. The stellar parameters we consider for this spectral analysis are the effective temperature (T_eff), the surface gravity (log g), and the metallicity ([Fe/H]). In this paper we present StarNet: a convolutional neural network model applied to the analysis of stellar spectra. We introduce our machine learning methods in Section 2, and evaluate our model for a set of synthetic data in Section 3. As an exercise of its effectiveness, we apply StarNet to the APOGEE survey in Section 4 (DR13 and earlier data releases when appropriate), and compare the stellar parameters to those from the APOGEE pipeline(s). In Section 5, we compare the success of our StarNet results to other stellar analyses, and confirm that neural networks can significantly increase the robustness, efficiency, and scientific impact of spectroscopic surveys.§ MACHINE LEARNING METHODOLOGY Supervised learning has been shown to be well adapted to continuous variable regression problems. Given a training set in which, for each input spectrum, there are known stellar parameters, a supervised learning model is capable of approximating a function that transforms the input spectra to the output values. The learned function can then ideally be used to predict the output values for a different dataset. The particular form of this function and how it is learned depend on the neural network architecture. Summarized below is the convolutional neural network that we have implemented for the analysis of stellar spectra; we provide more details about deep neural networks and the mathematical operations of our selected architecture in Appendix <ref>.§.§ The StarNet convolutional neural network A neural network (NN) can be arranged in layers of artificial neurons: an input layer (i.e. the input data), a number of hidden layers, and an output layer. Depending on the architecture of the NN, there may be several hidden layers, each composed of multiple neurons that weight and offset the outputs from the previous layer to compute the input values for the following layer. The more hidden layers are used in the NN, the deeper the model is. The combination of these layers acts as a single function, and in the case of StarNet, this function predicts three stellar parameters. Inspired by recent studies of deep neural networks on stellar spectra <cit.>, we have focused our analysis on deep architectures.
The convolutional neural network (CNN) selected for StarNet is shown schematically in Fig. <ref>. This architecture is composed of a combination of fully connected layers and 1-dimensional convolutional layers. Fully connected layers are the classical neural network layers that compute multiple linear combinations of all of the input values to produce an output vector. In the case of a convolutional layer, a series of filters is applied, extracting local information from the previous layer. Throughout the training phase the network learns the filters that are activated most strongly when detecting specific features, thus producing a collection of feature maps. Using two successive convolutional layers, the second of the two convolves across the previous layer's feature maps, which allows the model to learn higher order features.

The combination of convolutional layers and fully connected layers in our StarNet implementation means that the output parameters are not only affected by individual features in the input spectrum, but also by combinations of features found in different areas of the spectrum. This technique strengthens the ability of StarNet to generalize its predictions on spectra with a wide range of signal-to-noise (S/N) ratios across a larger stellar parameter space. More details of the StarNet model itself are discussed in Section <ref>.

§.§ Training and testing of the Model

Before training the model, the reference set is split into a training set and a cross-validation set. Training is performed through a series of forward and back propagation iterations, using batches of the training set. The forward propagation is the model function itself: at each layer, weights are applied to all of the input values, and at the output layer, a prediction is computed. These predictions are then compared to the target values through a loss function. In our StarNet model, a mean-squared-error loss function computes the loss between predictions and target values, and the total loss over all training examples is minimized. Initially, the weights of the model are randomly set and therefore the predictions will be quite poor. To improve these predictions, the model weights are updated following each batch forward propagation; the weights are therefore adjusted multiple times per iteration.

A minimum was usually reached after 20-25 iterations (although this could vary greatly depending on the complexity of the model used). The training of our StarNet model reached convergence in an amount of time that depended on the size of the training set and the model architecture; for ∼41,000 spectra (close to 300 million intensity values) and ∼7.5 million NN parameters, the training converged in 30 minutes using a 16-core virtual machine. When the training stage reaches convergence, the weight values are frozen. The estimated model is evaluated on a test set of spectra with a wide range of S/N. All tests of StarNet are quantified with location (mean, median) and spread (standard deviation, mean absolute difference) summary statistics on the residuals against a test set of known parameters.

§ SYNTHETIC SPECTRA

As mentioned in Section 1, machine learning techniques have been applied to stellar spectra in the past using synthetic data for training and testing <cit.>.
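A recurring ingredient of the experiments in this section is the injection of Gaussian noise into the synthetic spectra to mimic a chosen S/N, described further below. A minimal sketch of such an augmentation step, assuming continuum-normalized flux arrays (the function and parameter names are illustrative):

```python
import numpy as np

def add_noise(spectrum, snr, rng=None):
    """Degrade a continuum-normalized spectrum (flux ~ 1) to roughly the
    requested S/N by adding zero-mean Gaussian noise with sigma = 1/snr."""
    rng = rng if rng is not None else np.random.default_rng()
    return spectrum + rng.normal(0.0, 1.0 / snr, size=spectrum.shape)

# Drawing a fresh S/N at each training iteration mimics multiple noisy
# visits to the same star:
# rng = np.random.default_rng(42)
# noisy = add_noise(clean_spectrum, snr=rng.uniform(20, 200), rng=rng)
```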
Synthetic spectra provide a unique application to evaluate the StarNet model, since all details of the spectra are known a priori, and the S/N ratios and wavelength regions can be varied. Since we will evaluate StarNet on the APOGEE observed data sets, we have applied StarNet to synthetic spectra generated by the APOGEE collaboration.

§.§ Training and testing StarNet with the APOGEE ASSET spectra

The synthetic spectra used by the APOGEE consortium have been generated using MARCS and ATLAS9 model atmospheres, as described in <cit.>, and the radiative transfer code “ASSET” <cit.>. These synthetic spectra were continuum normalized in the same way as the observed APOGEE spectra to facilitate a closer match between the synthetic and observed datasets. They are provided in a publicly available 6D spectral grid compressed with Principal Components Analysis (see Table <ref> for the parameter distribution), and we have used them as a first test of StarNet. To sample a spectrum at any desired location within the synthetic grid, we used a third order interpolation routine between spectra at the existing grid points.

In all instances of training StarNet on synthetic data, we added Gaussian noise at levels ranging from S/N ≈ 20 up to noiseless spectra. Each spectrum was used several times in a particular training process (though only once per training iteration), and therefore different amounts of noise were added with each iteration to mimic the behaviour of multiple visits to a single object in the APOGEE survey. This method also contributes to generalizing the predictive capabilities of StarNet on lower S/N spectra.

As a first test, we generated a dataset of 300,000 synthetic spectra via random sampling of the stellar parameters within the limits of the ASSET grid. Of these spectra, 260,000 were randomly selected as the reference set. A subset of these (224,000, or 86% of the reference set) were used to train StarNet. The remaining 36,000 spectra from the reference set were used to cross-validate the model following each training iteration. The 40,000 spectra left out of the reference set were used as a test set, and the residuals between the StarNet predictions and the generated parameters are shown in Fig. <ref>. For all three predicted stellar parameters, the variances of the residual distributions are inversely proportional to the S/N. At [Fe/H] < -1.9 and lower S/N, there is a distinct trend where StarNet appears to over-predict the metallicity. In this region, absorption features are not as prominent, and the noise effectively causes the StarNet [Fe/H] predictions to be similar for all of these stars (i.e. [Fe/H] ≈ -2.2).

§.§ Stellar Parameter Predictions and Precision

To evaluate the errors in the StarNet predictions, it is necessary to understand how the output of the model is affected by the uncertainties in the input spectra (x). Treating the entire network as a vector valued function f(x, w, b) = (f_T_eff, f_log g, f_[Fe/H]), the Jacobian of the StarNet model for a given star and stellar label, j, is given by:

𝐉(x, w, b) = (∂f_j/∂x_1, ..., ∂f_j/∂x_n)(x, w, b)

Here we assume that each spectrum is accompanied by a corresponding error spectrum, e_x.
Propagating the error spectrum with the Jacobian of that spectrum results in an approximation of the statistical uncertainty, σ_prop, of the output prediction for each stellar parameter, computed as the product:

σ_prop^2 ≈ 𝐉^2 · e_x^2

This approximation is restricted to the small-error domain; for practical reasons, we also do not propagate the uncertainty contributed by the StarNet CNN model weights. To compensate for this deficit, an empirical intrinsic scatter, σ_int, is derived from the synthetic test data, which is then added in quadrature - element-wise - with the propagated uncertainty. This gives a total estimated uncertainty:

σ = √(σ_prop^2 + σ_int^2)

The intrinsic scatter term is slightly dependent on the region of the parameter space in which the predicted parameter lies, reflecting the capacity of the physical model to match observed spectral features. For instance, this intrinsic scatter term will be larger for stars that are predicted to have low metallicities compared to those that are found to be metal-rich (see Fig. <ref>). Therefore, we derive this scatter term in bins, within the same range of the ASSET synthetic grid found in Table <ref>: for T_eff, log g, and [Fe/H] the bin sizes are 450 K, 0.5 dex, and 0.3 dex, respectively.

§ STARNET APPLICATIONS TO APOGEE SPECTRA

The APOGEE survey has been carried out at the 2.5-m Sloan Foundation Telescope in New Mexico, and covers the wavelength range from 1.5 to 1.7 microns in the H band, with spectral resolution R = 22,500. Targets are revisited several times until S/N ≥ 100 is reached <cit.>. All visits are processed with a data reduction pipeline to 1-D spectra and are wavelength calibrated before being combined. The individual visits, unnormalized combined spectra, and normalized combined spectra are all recorded into the publicly-available APOGEE database <cit.>.

The APOGEE Stellar Parameters and Chemical Abundances Pipeline (ASPCAP) is a post data reduction pipeline tool for advanced science products. This pipeline relies on the nonlinear least squares fitting algorithm FERRE <cit.>, which compares an observed spectrum to a grid of synthetic spectra generated from detailed stellar model atmospheres and radiative line transfer calculations. For DR12, this synthetic grid was the ASSET grid used for training in the previous section. The fitting process estimates the stellar parameters (T_eff, log g, [Fe/H]) and abundances for 15 different elements (C, N, O, Na, Mg, Al, Si, S, K, Ca, Ti, V, Mn, Fe and Ni). These results are further calibrated with comparisons to stellar parameters from optical spectral analyses of stars from a variety of star clusters <cit.>; see the Appendix for more details.

§.§ Pre-processing of the Input Data

As a test, StarNet was also trained using the APOGEE observed spectra and ASPCAP DR13 stellar parameters <cit.>. The APOGEE error spectra and the estimated errors from ASPCAP were not included as part of the reference set. This effectively limits the accuracy of StarNet to that of the reference set parameters. To minimize the propagation of the ASPCAP errors in the StarNet results, we limited the range of our dataset. For example, <cit.> caution against using stars cooler than T_eff = 4000 K, where models are less certain, as well as hotter stars, where the spectral features are less defined. For these reasons, our dataset was reduced to stars with 4000 K ≤ T_eff ≤ 5500 K. The APOGEE team have also flagged specific stars and visits for a variety of other reasons.
Stars from the DR13 dataset with either a STARFLAG or ASPCAPFLAG set were removed from the dataset, which includes stars contaminated with persistence <cit.>, stars that were flagged for having a bright neighbour, or stars having estimated parameters near the parameter grid edge of the synthetic models. Furthermore, stars with [Fe/H] < -3 were removed, as were those with a radial velocity scatter, v_scatter, greater than 1 km s^-1. Spectra with a high v_scatter are possibly binary stars <cit.>, with potentially duplicate spectral lines. Implementing these restrictions on the reference set allowed the neural network to be trained on a cleaner dataset and learn from more accurate stellar parameters, while placing these restrictions on the test set was an attempt to maximize the validity of the ASPCAP stellar parameters used when comparing with StarNet predictions. All of these cuts to the APOGEE dataset are summarized in Table <ref>.

In addition to the restrictions mentioned above, the reference set only included individual visit spectra from stars where the combined spectrum had S/N > 200. ASPCAP stellar parameter precision is known to degrade at lower S/N <cit.>. However, using high S/N combined spectra as the training input resulted in over-fitting the model (see Appendix <ref>) to spectra with high S/N. To compensate for this issue, StarNet was trained on individual visit spectra, while using the stellar parameters obtained from the combined spectra. This step allowed the model to be trained on lower S/N spectra with high precision parameters. Limiting our training sample to stellar parameters from high S/N spectra was also practical: currently StarNet is not able to weight the input spectra according to their noise properties during training. When combining various data sets, which we will address in future work, this might become more critical, but it is less relevant for the current analysis.

The last phase of the data pre-processing was normalizing the spectra. Both the Cannon 2 <cit.> and ASPCAP have implemented independent continuum normalization techniques; in addition, the APOGEE DR13 spectra were further normalized in certain spectral regions. The ASSET synthetic spectra were similarly normalized to facilitate proper matching with the data. It has been suggested <cit.> that the ASPCAP continuum normalization techniques result in a S/N dependency. This is a potential limitation of StarNet when it has been trained on synthetic spectra, since these spectra were ASPCAP normalized without the addition of noise beforehand. Any non-linearities in the continuum normalization would be excluded from the synthetic spectra dataset. If this inconsistent normalization is indeed a problem, it could create an inherent mismatch between a synthetic spectrum and a low S/N APOGEE spectrum of identical stellar parameters, leading to erroneous stellar parameter estimates.

This potential for a continuum normalization bias motivated us to adopt a simple and linear normalization method in StarNet; for training and testing on APOGEE spectra, the spectra were split into blue, green, and red chips, and each chip was divided by its median flux value. The three chips were then combined to create a single spectrum vector (a minimal sketch of this step follows below). Since this normalization requires no external information, our simple approach also tests the network's capabilities relative to a more physically driven continuum removal. Further analysis is, however, required to determine the full impact of continuum normalization in this NN approach.
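A minimal sketch of this per-chip median normalization, assuming the detector chips are supplied as separate flux arrays (names illustrative):

```python
import numpy as np

def normalize_spectrum(blue, green, red):
    """Divide each APOGEE detector chip by its median flux and concatenate
    the three chips into a single spectrum vector, as described above."""
    chips = [np.asarray(c, dtype=float) for c in (blue, green, red)]
    return np.concatenate([c / np.median(c) for c in chips])
```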
§.§ Training and testing StarNet with APOGEE spectra

StarNet has been trained and tested on the ASPCAP stellar parameters (T_eff, log g, [Fe/H]) corresponding to individual visit spectra and combined spectra, respectively. As discussed previously, this included stars from the APOGEE DR13 dataset with the cuts outlined in Table <ref>. There were 17,149 stars with S/N > 200 that met these requirements, of which 2,651 were used as part of the test set, and 14,498 stars (containing 44,784 individual visits) were used as the StarNet reference set. This is slightly less than 10% of the total APOGEE DR13 dataset. A subset of these individual visits (41,000, or 92% of the reference set) were randomly selected for the training set. The remaining 3,784 visit spectra were used to cross-validate the StarNet model following each training iteration. No significant deviations were found between random selections for the training and cross-validation samples.

Once trained, StarNet was applied to the test set containing both high and low S/N combined spectra. The StarNet predictions are compared to ASPCAP parameters in Fig. <ref>. For the high S/N spectra, StarNet predictions show excellent agreement with the ASPCAP DR13 results. For the lower S/N spectra, there are more and larger deviations between StarNet and DR13. For example, at high temperatures (T_eff > 5000 K), StarNet predicts lower effective temperatures than DR13. The quality of the model predictions depends on the number of stars in the training set that span the parameter space of the test set; we therefore suggest that these deviations are due to an insufficient number of stars with T_eff > 5000 K (∼4%) in our reference set. Similarly, there are few stars at low metallicities, [Fe/H] < -1.5 (∼0.2% of the reference set), and therefore applying StarNet to the most metal-poor stars also results in larger deviations from the DR13 results. Sample size bias, as seen in these two regions, reinforces the need for large training sets for deep neural networks such as StarNet.

Another potential source of error in the predictions at high temperatures and low metallicities lies in the spectra themselves. In these regions, the spectral features are weaker, and therefore the neural network will struggle to locate the most important features during training. Similarly, the parameters determined by ASPCAP will have larger intrinsic uncertainties. Furthermore, these effects can be amplified when testing on lower S/N spectra.

After training, StarNet was applied to 148,724 stars in the APOGEE DR13 database. This final inference step is very fast, taking about 1 minute of CPU time for the whole data set. The short amount of time required to make these predictions is a huge advantage for the rapid data reduction of large spectral surveys. StarNet predictions for 99,211 spectra are shown in Fig. <ref>, where stars with ASPCAP T_eff, log g, or [Fe/H] values of -9999 were removed, along with those that are presumed to be M dwarf stars (see Section <ref>). StarNet predictions are compared to stellar isochrones to show the expected trend in the parameter relationships; four isochrones with different metallicities ([Fe/H] = 0.25, -0.25, -0.75 and -1.75) were generated from the Dartmouth Stellar Evolution Database <cit.>, with an age of 5 Gyr and [α/Fe] = 0. Fig.
<ref> also shows the StarNet reference set of 14,498 stars, highlighted to show the spread in their abundances across the parameter space.

To analyze the StarNet model self-consistency, predictions were made for both the individual visit spectra and the combined spectra of the same objects. Targets with four or more individual visits were selected for this test, and the differences in the residuals were calculated. For each stellar parameter bin, the mean value for 100 objects is shown in Fig. <ref> (left panel), whereas there were 230 objects in each S/N bin (right panel). The propagated and total errors shown are discussed in Section <ref>. When comparing StarNet predictions for the individual visits to those for the combined spectra, the results are quite consistent across the majority of the parameter space, though as discussed previously, there is a noticeable increase in the deviations at lower metallicities ([Fe/H] < -1.2 dex) and higher temperatures (T_eff > 5100 K). Also as expected, the scatter increases at lower S/N, although only marginally until S/N ≲ 60. Even at the lowest values (S/N ∼ 15) the results are quite consistent. The ability to predict well on lower S/N spectra is largely due to the fact that StarNet was trained on individual visits - rather than combined spectra only - with high-fidelity stellar parameters.

The trends seen in the residuals between predictions for individual visits and combined spectra (i.e., an increase in scatter at lower metallicities and lower S/N) are also reflected in the StarNet propagated errors (see Section <ref>), also shown in Fig. <ref>. This provides confidence that the error propagation methods used give an adequate estimate of the uncertainties in the StarNet predictions.

§.§ Model Selection

Deep learning architectures, such as StarNet, typically involve experimentation and tuning of hyper-parameters in order to converge to an adequate model. Some of these hyper-parameters include: the number of filters in each convolutional layer; the length of the filters that are convolved across the inputs in each of these convolutional layers; the number of connection nodes in each fully connected layer; the pooling window size in the maxpooling layer (the number of inputs which are compared against each other to find the maximum value); and the learning rate of the gradient descent optimizer.

The final selection of hyper-parameters was automated as follows: to select the optimal model architecture, a hyper-parameter optimization was run in two stages. During the first stage, the number of convolutional layers, fully connected layers, filters, and nodes were varied randomly, along with the length of the filters, the pooling window size, and the learning rate. This allowed the optimizer to test different combinations of the number of convolutional and fully-connected layers with a variety of other hyper-parameters, to ensure that each combination could reach its maximum predictive potential. The same training and cross-validation sets (as described in Section <ref>) were used during the training of these models, and the same test set (discussed in Section <ref>) was used to test each model. The metric used for model comparisons was the “Mean Squared Error" (MSE) between the target and predicted parameters. To have each parameter weighted equally, the parameters were normalized to have approximately zero mean and unit variance. An example of the best performing models, evaluated for each combination of layers, is shown in Fig.
<ref>. It was found that increasing the number of layers improved the model predictions, but that there was a plateau in performance when combining two convolutional layers with two fully connected layers. The addition of more layers onto this architecture does not clearly improve the prediction results, while fewer layers led to significantly worse MSE. A second hyper-parameter optimization was then started, where the number of convolutional and fully-connected layers were fixed at two each, while the remaining hyper-parameters were varied and optimized using a Tree-structured Parzen Estimator <cit.>. The model selected by this second stage of hyper-parameter optimization was similar to that shown in Fig. <ref>, and this model was used as the starting point for our model architecture selection. Small changes were made to discover possible improvements until the StarNet model architecture performed consistently with adequate results. While simpler models could perform comparably well to the StarNet architecture when trained on synthetic data, it was decided that a single model architecture should be used to train on both synthetic data and APOGEE spectra.

Earlier studies found that applying neural networks to stellar spectra requires at most two hidden layers, with fewer nodes than used in StarNet <cit.>, but these were applied to much smaller data sets. Current machine learning methods - along with large datasets - allow for more complex models to improve performance while still avoiding over-fitting. The max pooling layer reduces the degrees of freedom of the model by minimizing the impact of unnecessary weights and therefore simplifying the model function. To speed up convergence of the StarNet model fit, we use the He Normal <cit.> weight initialization, which also permits convergence of deeper models and eventually enables the neural network to find more complex features. In short, the He Normal initializer, ReLU activation, and ADAM optimizer (see Appendix <ref>) allow a deeper model to reach convergence more easily and at an accelerated rate, while the max pooling layer reduces the effective model complexity. The use of a cross-validation set is another technique implemented in StarNet to mitigate over-fitting. These techniques and methods are described further in Appendix <ref>.

§.§ Comparisons with The Cannon

The Cannon <cit.> is a data-driven, generative model that shares the same limitations as a supervised learning approach, i.e., both rely on a reference data set. Unlike StarNet, the generative approach uses stellar parameters as the inputs and spectra as the outputs during training. The Cannon uses a quadratic polynomial function to translate the stellar parameters into spectra, and the best-fitting coefficients of the function are found through least-squares fitting. During the test phase, the stellar parameters are determined with another regression step, where the stellar parameters are varied until they produce a spectrum that best matches the observed spectrum.

The Cannon was originally trained on APOGEE DR10 <cit.> combined spectra from 542 stars in 19 clusters. These are far fewer training examples than StarNet uses from DR13, and yet The Cannon was able to predict stellar parameters for the DR10 spectra extremely well. In an effort to compare the two techniques, we trained the same StarNet architecture, labeled StarNet_C1, on the same data set of 542 stars from DR10.
StarNet_C1 was then applied to 29,891 combined spectra that had both ASPCAP and The Cannon predictions for stellar parameters. These are compared in Fig. <ref>, where the limitations of the StarNet_C1 model are clearly seen. The machine learning method used for StarNet requires a large training sample that spans a wide parameter space. Training on 542 combined spectra does not span enough of the parameter range, nor enough of the accumulated S/N range, to fit the complex StarNet convolutional neural network. This can also be seen in the large residuals when comparing StarNet_C1 predictions and ASPCAP DR10 parameters. Predictions for low S/N spectra show the most significant scatter. The requirement of large training sets that adequately represent the test set is a limitation of many deep learning methods, such as StarNet. However, as larger data sets become available, these methods can be extremely effective, as discussed in the next Section.

§.§ Comparisons with The Cannon 2

The Cannon 2 <cit.> is a continuation of The Cannon project. In addition to the three stellar parameters predicted by The Cannon, The Cannon 2 predicts 14 additional chemical abundances. The Cannon 2 is also trained on individual visit spectra, but it trains on a larger number of stars than The Cannon. Also, The Cannon 2 adds a regularization term to the function at training, to reduce the likelihood of the model over-fitting the training data. The training set for The Cannon 2 consisted of stars with 200 < S/N < 300, [Fe/H] > -3, [X/Fe] < 2, [α/Fe] > -0.1, [V/Fe] > -0.6, V_scatter < 1 km s^-1, and excluded targets with the ASPCAPFLAG set. These cuts removed stars near the edge of the grid of synthetic models, along with other sources of error resulting in poor ASPCAP parameter determinations. However, it did not take into account spectra flagged with persistence. The Cannon 2 training set consisted of visits from 12,681 stars.

To make an adequate comparison between the two models, StarNet_C2 was trained on 39,098 visits from 12,681 stars that met the same restrictions set by The Cannon 2 training set, from APOGEE DR12 <cit.>. A test set of 85,341 combined spectra with 4000 K < T_eff < 5500 K and log g < 3.9 was adopted, with similar restrictions as The Cannon 2 test set. We compile the Root Mean Square Error (RMSE) and the Mean Absolute Error (MAE) for an identical test set for both The Cannon 2 and StarNet_C2 in Table <ref>. This confirms the visual inspection of the residuals in Fig. <ref>: StarNet_C2 is capable of predicting values closer to the ASPCAP stellar parameters. Also seen in Fig. <ref>, StarNet_C2 differs from ASPCAP at lower metallicities and hotter temperatures, where the S/N of the spectra is lower. These effects are also seen in The Cannon 2 predictions. In fact, a comparison of StarNet_C2 to The Cannon 2 shows that the effect is larger in The Cannon 2 results.

§.§ Training StarNet with Synthetic Spectra

A crucial next step in the development of a NN approach to analyzing stellar spectra is to show that StarNet can also predict stellar parameters without an external pipeline. In this Section, we present stellar parameter results for APOGEE data after training StarNet on synthetic spectra only. Our goal is to show that StarNet can operate as a standalone data processing pipeline, producing an independent database of stellar parameters that does not depend on previous ASPCAP pipeline results for training.

§.§.§ Synthetic Gap

Differences in feature distributions between synthetic and observed spectra are referred to as the synthetic gap.
To probe the feasibility of training StarNet on synthetic spectra and accurately predicting the parameters of observed spectra, it was necessary to ensure that the synthetic gap was relatively small. Since each spectrum consists of 7214 data points, i.e. represents a point in 7214-dimensional space, one would expect a synthetic and an observed spectrum with the same parameters to occupy approximately the same region in this space if the gap was indeed small. Visualizing this space is possible only through dimensionality reduction. We employed the t-Distributed Stochastic Neighbour Embedding (t-SNE) <cit.>, a technique often used in machine learning to find clusters of similar data in a two dimensional space. This two dimensional space is composed of arbitrary variables - not physical stellar parameters - meant to serve as a lower dimensional representation of the spectra to facilitate visualization.

A subset of 4000 APOGEE DR13 spectra with S/N > 200 and known ASPCAP parameters were randomly selected, along with 4000 interpolated synthetic ASSET spectra with the same parameters. After applying t-SNE, a distinct separation could be seen between the synthetic and APOGEE spectra (left image in Fig. <ref>). Examination of mismatched spectra showed that there were zero-flux values in wavelength bins along the spectra, a method used in the APOGEE pipeline to flag bad pixels. Unfortunately, it is not possible to mask these values at the testing stage with the current implementation of StarNet, thus a nearest-neighbour interpolation was carried out to smooth over the zeros. Another round of t-SNE revealed closer agreement between the synthetic and APOGEE spectra (right image in Fig. <ref>). These zero-flux values were fixed for all APOGEE spectra before predicting their stellar parameters in subsequent StarNet training from synthetic spectra. While interpolating over the zero-flux values in the observed spectra results in more accurate predictions, we caution that this may not be an ideal solution, because we are modifying the data in a way that may not resemble a spectrum without the zero-flux values. An alternative method would be to artificially inject zero-flux values into the synthetic spectra during training. This would allow StarNet to distribute weights more evenly across the spectrum, mimicking the training process on observed spectra. We note that this is not an issue when StarNet is trained on real APOGEE spectra, which inherently have the zero-flux values.

§.§.§ Predictions for APOGEE DR12 Spectra

After training StarNet with synthetic spectra from the ASSET code (as described in Section <ref>), we applied it to a dataset of 21,787 combined spectra from APOGEE DR12[The ASSET spectra are only available with continuum normalization using the ASPCAP method, therefore we adopted the ASPCAP normalized DR12 spectra to test this implementation of StarNet. The non-ASPCAP normalization scheme described in Section <ref> was only applied when StarNet was trained on APOGEE DR13 spectra], using the same restrictions found in Table <ref>. We compare our results to the ASPCAP DR12 parameters (see Fig. <ref>), since those were also determined using the ASSET spectral grid (unlike the APOGEE DR13 results). The distribution of the residuals is similar to those seen in Fig. <ref>, where StarNet was trained on observed APOGEE spectra. Using the method described in Section <ref>, the intrinsic scatter error terms for spectra with S/N > 150 were calculated to be ΔT_eff = 51 K, Δlog g = 0.06, and Δ[Fe/H] = 0.08.
For the entire spectral data set, including the lower S/N spectra, the intrinsic scatter errors are two times larger than when StarNet was trained on ASPCAP parameters. This is likely due to systematic effects unaccounted for at training time, e.g., instrumental or extinction effects. An extension of StarNet to include unaccounted synthetic modeling effects is planned for future work. Interstellar extinction, atmospheric extinction, and instrumental signatures are not simulated in the synthetic spectra. These effects can vary from star to star, generally affecting the bluer regions of a spectrum as a smoothly varying function. Given that the APOGEE spectra cover infrared wavelengths for mostly nearby bright objects (H < 12.5), the effects of extinction are not expected to be significant. In Section <ref>, the sensitivity of the StarNet model to various features in the DR13 spectra is discussed.

Adding noise to the synthetic set at the training stage also allowed StarNet to learn which features impacted the stellar parameter estimates, while discounting weak features that would not be detectable in noisy APOGEE spectra. By varying the noise levels, it was found that an S/N > 20 is necessary to decrease the residual intrinsic scatter before saturation. Adding noise to the synthetic training set not only helps to reproduce a more realistic setting, but also accounts for some of the uncertainties in the physical and instrumental modeling and decreases over-fitting. Since the APOGEE data set consists of mostly high S/N spectra, adding a simple noise model appears to be sufficient for modeling the APOGEE spectra. However, potential applications to non-APOGEE data (or APOGEE data with more sensitive stellar parameters) may require a more thorough noise model.

§.§ Partial Derivatives

It is possible to examine the learned model to determine which parts of the spectra the neural network is weighting when predicting stellar parameters. We do this by calculating the partial derivatives of each of the outputs (T_eff, log g, and [Fe/H]) with respect to every input flux value of a particular spectrum to obtain the Jacobian (as described in Section <ref>). In Figs. <ref> and <ref> - for StarNet trained on APOGEE DR13 spectra and on synthetic spectra, respectively - we show an average of the absolute-valued Jacobian from 2000 stars located in particular ranges of the parameter space. The Jacobians of metal-poor stars were compared against those of metal-rich stars. Comparisons were also made between hot and cool stars. We focus on a subsection of the green chip from 15950 Å to 16180 Å. Some features are labeled from the APOGEE input line list.
A few notable features include:
* Hydrogen Brackett (HBR) lines, which play a significant role in the determination of gravity, but also in the determination of temperature in metal-poor stars
* Atomic metal lines (e.g., FeI, CaI, CuI) that play a significant role in the determination of temperature and metallicity throughout our stellar parameter range
* Certain atomic metal lines in the synthetic grid that play a role in the determination of temperature, e.g., SiI appears more significant and CaI less significant
* Unidentified or weaker lines that are not used in the APOGEE window functions and yet play an important role in the determination of all of the stellar parameters from the observed spectra
* Unidentified or weaker lines that play a significant role only in the determination of metallicity for cool stars and metal-rich stars

None of these features were pre-selected or externally weighted before the neural network was trained. We also compared the partial derivatives for subsets of stars with S/N < 60 versus S/N > 200, but found no notable differences in the predictive power. Similar results are seen in the other wavelength regions across the whole APOGEE spectral range.

§ DISCUSSION

§.§ Optical Benchmark Stars

The accuracy of the parameters returned by StarNet is limited by the quality of the training data. When StarNet is trained on the ASPCAP stellar parameters, uncertainties in the ASPCAP pipeline are implicitly propagated throughout the results. One option to examine the fidelity of the APOGEE StarNet results is to compare them to optical analyses. Stellar parameters and abundances for individual stars derived through optical analysis can have higher precision due to the availability of higher resolution optical spectrographs, access to more spectral features, and higher confidence in the optical stellar atmosphere models <cit.>.

To investigate the accuracy of the StarNet predictions from training on synthetic spectra, all stars from the APOGEE DR13 database with the “Calibration-Cluster” or “Standard Star” flag were selected to create a group of benchmark stars with ASPCAP parameters. Parameters determined from optical spectral analyses were then obtained for these stars through direct matching of the APOGEE/2MASS identifiers <cit.> to stars in the Pastel 2016 catalog <cit.>, and supplemented with additional data from references in <cit.> and <cit.>. Fig. <ref> compares the ASPCAP DR13 stellar parameters to the optically determined parameters for 102 benchmark stars (tabulated in Appendix B, with references).

Examining Fig. <ref>, there are small offsets between the ASPCAP DR13 and the optical values for all three stellar parameters. Comparing Fig. <ref> to Fig. <ref> - where StarNet is trained on observed spectra and compared to ASPCAP - a similar bias at low [Fe/H] is seen in the residuals. The bias seen in Fig. <ref> is likely due to a sample size bias in the StarNet training set, though the bias in Fig. <ref> could be due to insufficient coverage in the synthetic grid. These systematic biases highlight the need for larger reference samples across the parameter space, as well as possible improvements in the coverage of theoretical spectra. Comparing the residual scatter between the DR13 ASPCAP and the StarNet predictions (Fig. <ref>) to the scatter in the optical analyses predictions (Fig. <ref>), we see that the optical analyses have larger scatter by a factor of ≳ 2 in T_eff, ≳ 4 in log g, and ≳ 4 in [Fe/H].
The larger scatter suggests that our simplified derivation of statistical uncertainties is sufficient until we can calibrate stellar parameters for a larger set of more homogeneous benchmark stars. Well calibrated stellar parameters would require a more thorough probabilistic model of both StarNet training and StarNet predictions. Therefore, we consider this very good agreement, and do not investigate these intrinsic dispersions further at this time.

A comparison of Figs. <ref> and <ref> reveals that, when StarNet is trained on synthetic spectra and compared to DR12, the scatter is a factor of ∼2 lower than the residuals between the DR13 and optical analyses. However, due to the different input physics, analysis techniques, spectral resolution, etc. between StarNet, APOGEE, and the optical analyses, we cannot directly compare the precision of the stellar parameters returned by StarNet and an optical analysis at this time. Rather, this similar degree of scatter in the residuals indicates that the systematic errors intrinsic to StarNet are comparable to the systematic errors expected from an optical analysis. This indicates that the StarNet model is sufficient for a single survey analysis, and can provide a new and exciting tool for the efficient analyses of spectroscopic surveys.

§.§ M Dwarfs in DR13

While APOGEE has primarily observed red giant and subgiant stars, DR13 includes spectra for known M dwarfs. These stars are accompanied with a specific Target Flag: “APOGEE MDWARF”. Initially, we had not removed these stars from the dataset, and found obvious discrepancies between the StarNet results and those provided by ASPCAP DR13. On closer inspection, we found ∼5900 stars that resembled the known M dwarfs, but that had not been flagged as M dwarfs. Those stars were removed from our data sets, using one of the following cuts:
* Metallicity parameters of -0.9 < (ASPCAP [Fe/H]) < 1.0, and where (StarNet [Fe/H] - ASPCAP [Fe/H]) < -0.4, or
* Gravity parameters of (ASPCAP log g) > 3.6, and where (StarNet log g - ASPCAP log g) < -0.3

The ASSET grid provided with the APOGEE DR12 data release does not include synthetic spectra for M dwarfs, and therefore StarNet cannot be trained on those stars. Furthermore, StarNet cannot be trained on the observed spectra at this time, since there are too few M dwarfs in the APOGEE dataset (<1000 flagged). Our method for flagging M dwarfs is not robust, e.g., some RGB stars may have been removed, while other M dwarfs were not.

§.§ Neural Network Considerations

One limitation of a deep neural network is the necessity of large training sets spanning a wide range of the parameter space. The distribution of the stellar parameters in the training set for StarNet (when trained on observed spectra) is shown in Fig. <ref>. From these plots, it is evident that there are fewer spectra for stars at high temperatures, low metallicities, and low gravities. As seen in Figs. <ref> and <ref>, having fewer training spectra in these regions of the parameter space results in less accurate predictions when testing on stars in those regions. StarNet is also challenged when certain combinations of parameters are scarce in the training set. For example, when StarNet was trained on APOGEE DR13 spectra, there were very few stars with 4500 K ≤ T_eff ≤ 5200 K, 1 ≤ log g ≤ 2, and [Fe/H] ≤ -1.6 in the reference set (see Fig. <ref>).
This issue could be mitigated by adjusting the restrictions set on the StarNet reference set, but at a possible cost in accuracy. While stellar observations are limited by time and resources, generating synthetic spectra is limited only by computing resources. Being able to produce synthetic spectra that evenly span a parameter space can improve the ability of a neural network to predict in regions which may be affected by a sample bias. We see this as one of the strengths of StarNet. In the future, we envision combining observed spectra with synthetically generated spectra to improve the precision of the StarNet predictions. This would increase the number of training spectra over the entire parameter range, while also reducing the effect of the synthetic gap. Of course, this NN application is not limited to the APOGEE wavelength region, but can be extended to other spectroscopic surveys.

Finally, some skepticism exists around machine learning due to its historical inaccessibility and complexity, sometimes being perceived as a black box. New developments have improved usability and made machine learning much more accessible. The training and testing of all of the models used in this paper were performed with the neural network library Keras <cit.>, which provides a high level application program interface to the TensorFlow <cit.> machine intelligence software package. We provide code to reproduce the steps of training and testing on the APOGEE DR13 data set with StarNet at https://github.com/astroai/starnet.

§ CONCLUSION

We have presented a convolutional neural network model that is capable of determining the stellar parameters T_eff, log g, and [Fe/H] directly from spectra. Our development of this model, StarNet, includes training and testing on real spectra (from the APOGEE database) and synthetic spectra (from the APOGEE ASSET grid). We have implemented an optimization procedure for hyper-parameter tuning, examining the precision in the stellar parameter predictions when developing the StarNet architecture. By applying this StarNet model to various data releases from the APOGEE survey, we show that it is capable of estimating stellar parameters even when trained only on synthetic spectra. These parameters and their uncertainties are similar to the ASPCAP results. We also compare StarNet to other data-driven methods (such as The Cannon 2) and find similar results when trained on sufficiently large observational data sets.

In our applications to the APOGEE data sets, we also explore the limitations of the training sets; the sample sizes; the stellar parameter ranges; the impact of zero-flux values in real spectra during training and during testing; and the effect of the signal-to-noise of the spectra on the accuracy of the StarNet predictions. Our results are robust over a large range of S/N values for the APOGEE spectra, including low S/N (∼15). In the future, the intrinsic errors in neural network models should be further examined, as well as variations in the synthetic spectral grids; e.g., variations in the model atmospheres adopted when generating the synthetic spectra, changes in the continuum normalization methods, as well as the exploration of different wavelength regions.
We are especially interested in applying StarNet to optical spectroscopic surveys and observations.

§ ACKNOWLEDGEMENTS

We thank the anonymous referee for helpful comments that have improved this manuscript. This paper was first presented at l'Observatoire de la Côte d'Azur in Nice, France on 1 June 2017, and again (in part) at the Joint Institute for Nuclear Astrophysics Forging Connections meeting in Lansing, Michigan on 28 June 2017. The authors wish to thank Dr. Mike Irwin (Institute of Astronomy, Univ. of Cambridge) and Dr. Jo Bovy (University of Toronto) for helpful comments on this work, and Drs. Vanessa Hill, Alejandra Recio-Blanco, and Patrick de Laverny (Observatoire de la Côte d'Azur, France) for many interesting discussions on spectral analyses. SB and TO thank the National Research Council Herzberg Astronomy and Astrophysics for funding through their co-op program. KAV, CLK, FJ, SB, TO, and SM acknowledge partial funding for this work through a Natural Sciences and Engineering Research Council Collaborative Research and Training Experience Program in New Technologies for Canadian Observatories.

§ THE STARNET CONVOLUTIONAL NEURAL NETWORK

In this paper, we approach the prediction of stellar parameters from spectra with supervised learning. From a data set consisting of individual stellar spectra and associated physical parameters, we approximate a function to learn the mapping between the two. For other similar spectra, we then assume the trained model is capable of predicting stellar parameters. Our mapping function is parameterized and organized as a neural network (NN). It consists of a collection of artificial neurons, or nodes, arranged in layers: an input layer (in our case the input spectra), a number of hidden layers, and an output layer (in our case the stellar parameters). Each node is parameterized as a linear function of weights, w, applied to an input, x, with an additional bias value, b. A node is then activated by a nonlinear function, g, so that the output of a node can be written as:

h(x) = g(w^T x + b)

Common activation functions include the sigmoid function and the Rectified Linear Unit (ReLU):

g(z) = max(0, z)

which allow the network to adapt to non-linear problems <cit.>. The ReLU function was used in the implementation for all StarNet layers except for the last output layer.

In a traditional sequential NN architecture, each node is connected to every node from the previous layer as well as every node in the following layer; such layers are thereafter referred to as fully connected layers. At hidden layer l, the output, h^(l), is a vector valued function of the previous layer, h^(l-1), and is given by:

h^(l) = g(w^(l) h^(l-1) + b^(l))

The first layer is simply the input, h^(0)(x) = x, in our case the spectra. The next two layers of StarNet are convolutional layers, which are better adapted to higher dimensional inputs by leveraging local connectivity in the previous layer. In convolutional layers, the weights are applied as filters. The filter slides across the previous layer, taking the dot product between the filter weights and sections of the input. For a given filter covering a section, s, this operation can be summarized as:

h^(l)_s = g(w_s^(l) ⊗ h^(l-1) + b_s)

These filters allow for the extraction of features in the input, and the network learns which features to extract through training. After the convolutional layers in StarNet, we use a max pooling layer.
A max pooling layer is a non-linear down-sampling technique typically used in convolutional neural networks to decrease the number of free parameters and to extract the strongest features from a convolutional layer. In a max pooling layer, a window moves along the feature map generated by each of the filters in the previous convolutional layer - in strides of length equal to the length of the window - extracting the maximum value from each sub-region. These pools of maxima are then passed on to the following layer. The next two layers in StarNet are fully-connected layers.

The combination of all those layers allows for the formation of non-linear combinations of an input vector, x, to produce an output vector prediction, f(x; w, b). For each training sample of spectra, x_t, and corresponding known stellar parameters, y_t, the NN model weights and biases are estimated by minimizing the empirical risk, which computes the loss between the predictions and targets for a batch of T training samples, often supplemented with a regularizing function. We adopted a mean-squared-error loss function without regularization for StarNet, such that the StarNet empirical risk to be minimized reads:

min_w,b 1/T ∑_t=1^T (y_t - f(x_t; w, b))^2

The minimization is performed with a stochastic gradient descent (SGD) algorithm. SGD algorithms require the computation of the loss function gradients with respect to the weights, and make adjustments to those weights iteratively until reaching a minimum. In our case, the optimization is performed using the ADAM optimizer <cit.>, an SGD variant that uses adaptive estimates of the gradient moments to adjust the learning rates. Initially, the weights of the model are randomly set and therefore the predictions will be quite poor. The gradients are computed backwards through each sequential layer, a process referred to as back-propagation, which is the computationally expensive part of the training.

In the case of StarNet, a cross-validation set was used to test the model following every iteration, to evaluate whether or not the model had improved; if improvements were not made after a given number of iterations, the training was stopped. This minimum may differ depending on the complexity of the model architecture as well as various hyper-parameters (discussed in Section <ref>). Following each iteration, the cross-validation set is sent through a single forward propagation, where the outputs are predicted and compared against the target values. This set is not used for training, but only to ensure that the model is not over-fitting the training set. Over-fitting occurs when a model learns a function that may be able to compute the outputs for the training set very well, but cannot generalize that function to a test set that is not included in the training. A cross-validation set is used as a type of middle ground between the training and test sets, and helps ensure that over-fitting does not occur. If the cross-validation predictions do not improve after several iterations, the training is stopped. Using a cross-validation set to avoid over-fitting and to tune hyper-parameters is common practice in machine learning applications <cit.>.

Once a minimum has been reached, a separate test set - not included in training or cross-validation - is selected to ensure that the model performs well on new data. For StarNet, this test set consists of stellar spectra with "known" stellar parameters that are used to compare with StarNet predictions.
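To make these operations concrete, the following is a minimal NumPy sketch of the forward pass defined by the equations above, reduced to a single convolutional layer for brevity, together with a finite-difference Jacobian of the kind used for the error propagation in Section 3. All array shapes and names here are illustrative assumptions, not the StarNet implementation itself:

```python
import numpy as np

def relu(z):
    """Rectified Linear Unit: g(z) = max(0, z)."""
    return np.maximum(0.0, z)

def conv1d(x, filters, b):
    """A 'valid' 1-D convolutional layer: h_s = g(w_s (x) h + b_s).
    filters has shape (n_filters, width); x is a 1-D spectrum."""
    n_f, w = filters.shape
    out = np.empty((n_f, x.size - w + 1))
    for i in range(n_f):
        # np.convolve flips its second argument, so reverse the filter
        # to obtain the sliding dot product described in the text.
        out[i] = np.convolve(x, filters[i][::-1], mode='valid') + b[i]
    return relu(out)

def max_pool(h, window=4):
    """Non-overlapping max pooling along each feature map."""
    n = (h.shape[1] // window) * window
    return h[:, :n].reshape(h.shape[0], -1, window).max(axis=2)

def forward(x, W1, b1, W2, b2):
    """Reduced forward pass: conv -> max pool -> flatten -> linear output
    f(x; w, b), here the three parameters (T_eff, log g, [Fe/H])."""
    h = max_pool(conv1d(x, W1, b1)).ravel()
    return W2 @ h + b2

def jacobian(x, *params, eps=1e-4):
    """Finite-difference Jacobian J_ji = df_j/dx_i; slow but simple.
    In practice, analytic back-propagated gradients would be used."""
    f0 = forward(x, *params)
    J = np.empty((f0.size, x.size))
    for i in range(x.size):
        xp = x.copy()
        xp[i] += eps
        J[:, i] = (forward(xp, *params) - f0) / eps
    return J

# Propagating an error spectrum e_x as sigma_prop^2 ≈ J^2 · e_x^2:
# sigma_prop = np.sqrt((jacobian(x, W1, b1, W2, b2) ** 2) @ (e_x ** 2))
```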
For a more in-depth explanation of deep neural networks, we refer the reader to <cit.>.

§ OPTICAL BENCHMARK STARS IN CLUSTERS

In this Appendix, we tabulate the stellar parameters for stars in globular clusters that have both APOGEE DR13 results (based on IR spectral analyses) and results in the literature (based on high-resolution optical spectral analyses). Our systematic investigation has revealed numerous stars with incomplete optical parameters in the literature (e.g., missing T_eff, log g, or [Fe/H]), or incomplete IR parameters in the ASPCAP database (e.g., presumably due to data acquisition issues such as low S/N or persistence). We have removed those stars from our benchmark sample, as well as targets in the APOGEE database with a high RV scatter between individual visits (which usually indicates a potential binary system). Fig. <ref> compares the stellar parameters for our final sample of optical benchmark stars (as listed below, with the obvious outliers removed). Small systematic offsets in T_eff, log g, and [Fe/H] are found, though typically within the quoted 1σ errors.

Table columns: 2MASS ID; Cluster; APOGEE persistence flag; ASPCAP DR13 parameters^1 (T_eff [K], log g [dex], [Fe/H] [dex]); optical parameters^2 (T_eff, log g, [Fe/H]); StarNet_ASPCAP parameters^3 (T_eff, log g, [Fe/H]); StarNet_Synth parameters (T_eff, log g, [Fe/H]); references. Representative errors: 92 K, 0.11 dex, 0.05 dex (ASPCAP DR13)^1; 86 K, 0.24 dex, 0.17 dex (optical)^2; 47 K, 0.14 dex, 0.07 dex (StarNet)^3.

2M16412709+3628002 M13 – 4184 0.81 -1.54 4348 0.98 -1.54 4201 0.96 -1.48 4405 0.99 -1.61 h, ac, af, al 2M16413053+3629434 M13 – 4431 1.55 -1.39 4410 1.2 -1.44 4434 1.49 -1.34 4429 1.47 -1.42 ac, af 2M16413072+3630075 M13 High 4816 2.21 -1.29 4750 1.9 -1.5 4809 2.2 -1.27 4742 1.9 -1.36 k 2M16413082+3630130 M13 – 4658 1.45 -1.58 4825 1.77 -1.55 4665 1.74 -1.56 4869 1.98 -1.6 k, ag 2M16413476+3627596 M13 – 4213 1.2 -1.33 4230 0.85 -1.44 4205 1.14 -1.3 4249 1.02 -1.34 ad, af 2M16413482+3627197 M13 High 4171 1.17 -1.34 4173 0.7 -1.49 4157 1.12 -1.3 4220 1.0 -1.37 j, ad, af 2M16413684+3629289 M13 – 4547 1.68 -1.43 4750 2.0 -1.48 4582 1.89 -1.5 4869 2.25 -1.53 ag 2M16413707+3630378 M13 – 4550 1.62 -1.48 4700 1.7 -1.58 4597 1.87 -1.49 4829 2.16 -1.51 ag 2M16413870+3625380 M13 High 4168 1.17 -1.31 4111 0.67 -1.56 4134 1.09 -1.29 4205 0.95 -1.33 f, j, ac, ad, af, al 2M16413945+3632029 M13 – 4883 2.14 -1.41 4850 2.1 -1.56 4839 2.21 -1.44 4970 2.47 -1.45 ag 2M16414398+3622338 M13 High 4568 1.49 -1.47 4600 1.4 -1.55 4638 1.62 -1.45 4712 1.63 -1.47 k 2M16414478+3623273 M13 High 4529 1.67 -1.47 4625 1.65 -1.6 4522 1.57 -1.49 4657 1.58 -1.59 k, ag 2M16414528+3631068 M13 – 4679 1.8 -1.41 4600 1.6 -1.57 4742 2.0 -1.34 4701 2.07 -1.35 k, ag 2M16414558+3630328 M13 – 4711 1.86 -1.43 4750 1.9 -1.57 4781 2.19 -1.35 4733 2.18 -1.37 k 2M16414744+3628284 M13 – 4214 1.21 -1.34 4180 0.8 -1.46 4203 1.14 -1.32 4261 1.14 -1.34 ad, af 2M16414932+3625264 M13 High 4686 1.99 -1.3 4600 1.6 -1.58 4736 2.05 -1.3 4741 1.92 -1.27 k 2M16415037+3623417 M13 High 4702 1.9 -1.4 4700 1.9 -1.54 4771 1.94 -1.35 4697 1.76 -1.37 ag 2M16415160+3629363 M13 High 4914 2.02 -1.38 4975 1.7 -1.68 4782 1.95 -1.51 4923 1.62 -1.61 ag 2M16415239+3628395 M13 High 4668 1.96 -1.39 4800 2.0 -1.52 4631 1.85 -1.52 4858 2.04 -1.46 k 2M16415842+3628312 M13 High 4860 2.28 -1.32 4900 2.0 -1.53 4754 2.18 -1.36 4901 2.05 -1.48 k 2M16415862+3627465 M13 High 4719 1.94 -1.49 4850 1.9 -1.64 4618 1.89 -1.43 5002 2.35 -1.52 ag 2M13131736+1814463 M53 – 4314 0.74 -1.84 4425 1.06 -2.04 4414 1.04 -1.79 4620 1.19 -1.9 ah 2M07380627+2136542 N2420 – 4725 2.55 -0.22 4850 2.6
-0.07 4725 2.56 -0.22 4696 2.57 -0.22 am 2M07381549+2138015 N2420 – 4872 2.57 -0.21 4800 2.6 -0.06 4890 2.57 -0.21 4855 2.71 -0.18 am 2M07382696+2138244 N2420 – 4832 2.48 -0.18 4800 2.6 -0.03 4825 2.48 -0.17 4824 2.65 -0.16 am 2M12100405+1832532 N4147 High 4235 1.12 -1.53 4383 1.1 -1.3 4270 0.92 -1.73 4458 0.79 -1.86 ak 2M13414576+2824597 M3 – 4051 0.84 -1.46 4137 0.69 -1.62 4057 0.89 -1.46 4277 0.76 -1.52 j, r, x, y, ac, af, al 2M13421204+2826265 M3 – 3968 0.95 -1.27 4000 0.54 -1.64 3975 0.87 -1.27 4172 0.55 -1.42 r, x, y, ac, af, al 2M13421679+2823479 M3 – 4123 1.01 -1.33 4250 0.93 -1.42 4119 0.95 -1.32 4260 0.97 -1.4 j, ac, af 2M13423922+2827574 M3 Low 4104 1.06 -1.27 4175 0.63 -1.48 4110 1.03 -1.3 4263 0.94 -1.35 x, z 2M13424150+2819081 M3 – 3853 0.55 -1.17 3966 0.45 -1.53 3910 0.98 -1.12 4110 0.51 -1.27 r, x, y, ac, af 2M14052071+2829419 N5466 High 4341 1.28 -1.8 4499 1.27 -1.94 4472 1.38 -1.67 4846 1.76 -1.87 ah 2M08514401+1146245 M67 High 5476 3.93 -0.02 5541 3.8 0.02 5363 3.62 -0.1 5242 3.7 -0.13 i 2M15174702+0204519 M5PAL5 – 4361 1.22 -1.25 4381 1.05 -1.53 4362 1.27 -1.25 4528 1.47 -1.28 u 2M15180831+0158530 M5PAL5 – 4840 1.73 -1.37 4961 1.84 -1.59 4861 1.9 -1.39 5164 2.36 -1.25 u 2M15180987+0210088 M5PAL5 Low 4557 1.45 -1.3 4630 1.44 -1.56 4558 1.53 -1.32 4751 1.77 -1.32 u 2M15181075+0212356 M5PAL5 Low 4698 1.85 -1.18 4740 1.82 -1.37 4710 1.99 -1.13 4843 2.43 -1.13 u 2M15181867+0204327 M5PAL5 Low 3995 1.04 -1.05 4113 0.73 -1.15 3996 1.09 -1.06 4184 0.69 -1.19 l, af, ak 2M15182262+0200305 M5PAL5 – 4035 1.05 -1.03 4100 0.66 -1.3 4034 1.11 -1.02 4203 0.94 -1.12 u 2M15182283+0203097 M5PAL5 – 4251 1.35 -1.11 4250 1.1 -1.12 4234 1.3 -1.11 4352 1.43 -1.09 l, af 2M15182345+0159572 M5PAL5 – 4483 1.76 -1.02 4300 1.3 -1.11 4459 1.69 -1.01 4381 1.61 -1.05 l, af 2M15182591+0205076 M5PAL5 – 4198 1.37 -1.06 4300 0.8 -1.02 4175 1.23 -1.07 4283 1.31 -1.09 l, af 2M15183223+0201341 M5PAL5 – 4060 1.05 -1.06 4200 1.0 -1.09 4053 1.13 -1.06 4236 0.98 -1.13 l, af 2M15183531+0207400 M5PAL5 – 4327 1.51 -1.19 4409 1.3 -1.35 4340 1.45 -1.19 4487 1.7 -1.21 af, aj, ak 2M15183738+0206079 M5PAL5 Low 4241 1.1 -1.17 4236 0.61 -1.49 4240 1.08 -1.2 4332 1.09 -1.19 u 2M15183765+0201212 M5PAL5 – 4839 2.31 -1.25 4845 2.2 -1.46 4835 2.28 -1.27 5132 2.7 -1.22 u 2M15184132+0205014 M5PAL5 – 4534 1.73 -1.19 4460 1.42 -1.34 4534 1.82 -1.2 4589 1.96 -1.16 u, af, aj 2M15184164+0203533 M5PAL5 – 4128 1.19 -1.07 4128 0.83 -1.11 4103 1.09 -1.07 4216 1.05 -1.13 l, af, ak 2M15184346+0203074 M5PAL5 – 4581 1.82 -1.12 4475 1.3 -1.38 4615 1.91 -1.13 4540 1.9 -1.1 u 2M15184374+0208171 M5PAL5 Low 4938 2.37 -1.19 4860 2.25 -1.4 4865 2.29 -1.18 5036 3.15 -1.04 u 2M15184495+0202034 M5PAL5 – 3990 1.17 -1.0 4017 0.55 -1.3 3981 1.01 -1.02 4123 0.79 -1.14 ak, al 2M15184540+0204302 M5PAL5 – 4223 1.14 -1.21 4266 0.8 -1.27 4222 1.11 -1.25 4363 1.09 -1.25 l, u, af, ak, al 2M15185515+0214337 M5PAL5 – 4487 1.38 -1.28 4584 1.3 -1.5 4489 1.51 -1.31 4669 1.75 -1.29 u 2M17163427+4307363 M92 – 4638 1.19 -2.24 4750 1.63 -2.54 4791 1.78 -1.86 4771 1.41 -2.14 ab, ae 2M17165185+4308031 M92 – 4620 1.41 -2.24 4740 1.75 -2.72 4761 1.73 -1.81 4765 1.98 -2.03 ae 2M17165557+4309277 M92 – 4264 0.6 -2.19 4340 0.35 -2.27 4382 1.05 -1.93 4454 0.74 -2.3 x 2M17165738+4307236 M92 – 4254 0.67 -2.15 4335 0.8 -2.28 4352 1.03 -1.95 4430 0.81 -2.28 n, x, ab, af 2M17165772+4314115 M92 – 4443 1.11 -2.21 4615 1.48 -2.34 4515 1.34 -1.89 4612 1.41 -2.24 ab, ae 2M17165883+4315116 M92 – 4513 1.08 -2.2 4685 1.55 -2.34 4746 1.65 -1.78 4654 1.27 -1.99 ab, ae 2M17165956+4306456 M92 – 
4618 1.35 -2.32 4880 1.9 -2.34 4723 1.63 -1.88 4979 1.99 -2.31 ab 2M17165967+4301058 M92 High 4746 1.77 -2.11 4875 1.9 -2.34 4803 1.87 -1.73 4904 2.32 -1.92 ab, ae 2M17170033+4311478 M92 – 4680 1.26 -2.19 4830 1.75 -2.54 4677 1.54 -1.86 4817 1.77 -2.0 ab, ae 2M17170043+4305117 M92 – 4601 1.35 -2.19 4725 1.65 -2.5 4727 1.77 -1.84 4626 1.56 -2.08 ab, ae 2M17170081+4310251 M92 – 4531 1.19 -2.18 4630 1.52 -2.54 4655 1.6 -1.88 4679 1.52 -2.08 ab, ae 2M17170647+4306029 M92 – 4613 1.37 -2.22 4780 1.75 -2.55 4679 1.61 -1.88 4812 1.88 -2.06 ae 2M17170731+4309308 M92 – 4447 1.05 -2.16 4540 1.2 -2.34 4532 1.33 -1.9 4542 1.16 -2.11 ab 2M17171221+4302209 M92 High 4833 1.93 -2.08 4940 2.0 -2.35 4768 1.83 -1.73 4920 2.28 -1.9 ae 2M17171307+4309483 M92 – 4341 0.88 -2.08 4373 0.87 -2.19 4470 1.28 -1.88 4385 0.91 -2.0 n, ab, af 2M17172166+4311031 M92 – 4611 1.38 -2.2 4880 1.85 -2.39 4693 1.63 -1.84 4888 2.03 -2.03 ae 2M21290843+1209118 M15 – 4467 1.08 -2.14 4640 1.4 -2.37 4595 1.35 -1.91 4507 1.09 -2.0 ab 2M21294465+1207307 M15 – 4451 0.96 -2.19 4610 1.35 -2.37 4570 1.26 -1.88 4600 1.34 -2.08 ab 2M21294979+1211058 M15 Low 4303 0.69 -2.16 4423 0.86 -2.26 4475 1.15 -1.94 4420 0.75 -2.12 e, n, n, o, o, q, ab, af, af 2M21295311+1212310 M15 – 4382 0.83 -2.22 4604 1.33 -2.34 4468 1.25 -1.98 4596 1.11 -2.21 d, ab 2M21295492+1213225 M15 Low 4145 0.29 -2.27 4305 0.47 -2.39 4303 0.82 -1.95 4405 0.66 -2.36 d, m, o, o, ab 2M21295560+1212422 M15 – 4395 0.8 -2.18 4429 0.79 -2.38 4498 1.13 -1.9 4542 1.12 -2.03 d, o, o 2M21295562+1210455 M15 – 4250 0.56 -2.16 4200 0.15 -2.37 4387 1.04 -1.92 4376 0.63 -2.1 o, o, ab 2M21295618+1212337 M15 – 4327 0.77 -2.21 4418 0.76 -2.29 4489 1.17 -1.92 4456 0.9 -2.09 n, n, o, o, af 2M21295666+1209463 M15 High 4120 0.28 -2.34 4269 0.3 -2.44 4274 0.81 -1.94 4381 0.66 -2.33 m, o, o, ab 2M21295678+1210269 M15 – 4134 0.04 -2.27 4325 0.45 -2.42 4310 0.76 -1.98 4349 0.33 -2.22 o, o 2M21295801+1214260 M15 – 4605 1.29 -2.26 4855 2.0 -2.37 4569 1.32 -1.95 4929 2.03 -2.19 ab 2M21295856+1209214 M15 – 4197 0.41 -2.23 4300 0.3 -2.43 4373 0.97 -1.97 4392 0.73 -2.14 m, o, o, ab 2M21300033+1210508 M15 – 4292 0.68 -2.17 4341 0.43 -2.4 4427 1.08 -1.94 4434 0.83 -2.12 d, o, o 2M21300038+1207363 M15 – 4394 0.72 -2.26 4552 1.24 -2.4 4546 1.16 -2.0 4628 1.05 -2.22 d, ab 2M21300224+1211215 M15 Low 4084 0.18 -2.29 4288 0.62 -2.39 4255 0.78 -1.96 4371 0.59 -2.32 d, o, o, ab 2M21300274+1210438 M15 High 4271 0.59 -2.24 4275 0.5 -2.39 4432 1.05 -1.94 4389 0.66 -2.16 o, o, ab 2M21300637+1206592 M15 High 4430 0.6 -2.33 4625 1.3 -2.4 4640 1.2 -1.9 4515 0.66 -2.11 o, o, ab 2M21300696+1207465 M15 – 4709 0.85 -2.3 4940 1.5 -2.37 4656 1.02 -1.93 4908 0.83 -2.14 ab 2M21301049+1210061 M15 – 4486 1.02 -2.1 4470 1.07 -2.22 4668 1.41 -1.86 4562 1.12 -1.93 e, o, af 2M21304412+1211226 M15 – 4539 1.01 -2.08 4640 1.4 -2.37 4741 1.54 -1.88 4729 1.27 -1.95 ab 2M19533747+1844596 M71 High 4069 1.33 -0.7 4200 1.2 -0.83 4068 1.34 -0.69 4127 1.1 -0.8 p, af 2M19533757+1847286 M71 High 3906 1.22 -0.58 3996 0.88 -0.43 3918 1.15 -0.56 3974 0.84 -0.68 p, w, af 2M19533986+1843530 M71 High 4230 1.52 -0.68 4300 1.25 -0.7 4204 1.42 -0.66 4257 1.27 -0.74 p, af 2M19534750+1846169 M71 High 4259 1.6 -0.65 4367 1.55 -0.72 4270 1.58 -0.65 4283 1.25 -0.76 a, p, af, aj 2M19534827+1848021 M71 High 3979 1.33 -0.6 4055 0.8 -0.89 3968 1.19 -0.59 3988 0.79 -0.71 p, aa, af 2M19535064+1849075 M71 High 4223 1.71 -0.65 4291 1.4 -0.83 4212 1.55 -0.65 4226 1.37 -0.71 a, p, af, aj, ak 2M19535325+1846471 M71 High 3896 1.15 -0.63 4099 0.8 -0.89 3922 1.11 -0.61 3988 0.8 
-0.74 a, p, aa, af 2M19205287+3745331 N6791 High 4500 2.58 0.3 4512 2.32 0.47 4520 2.37 0.32 4590 2.31 0.26 b, t 2M19205629+3744334 N6791 High 4472 2.53 0.32 4473 2.33 0.56 4456 2.26 0.34 4581 2.27 0.3 b, t 2M19411355+4012205 N6819 – 4802 2.51 0.02 4835 2.61 0.1 4805 2.61 0.01 4783 2.67 0.03 c 2M19413031+4009005 N6819 – 4079 1.55 0.04 4046 1.77 -0.18 4082 1.54 0.04 4139 1.57 0.04 v 2M23565751+5645272 N7789 – 4560 2.37 -0.01 4345 2.2 0.15 4562 2.36 -0.01 4626 2.45 0.05 ai 2M06293009-3116587 N2243 – 4595 2.08 -0.47 4561 1.9 -0.69 4603 2.14 -0.48 4639 2.08 -0.46 g, s 1. Average error from the ASPCAP DR13 parameters.2. Average standard deviation of the mean from multiple (N∼3) optical parameter sources.3. Average statistical errors for StarNet predictions with respect to ASPCAP DR13 parameters.Source references: (a) <cit.>; (b) <cit.>; (c) <cit.>; (d) <cit.>; (e) <cit.>; (f) <cit.>; (g) <cit.>; (h) <cit.>; (i) <cit.>; (j) <cit.>; (k) <cit.>; (l) <cit.>; (m) <cit.>; (n) <cit.>; (o) <cit.>; (p) <cit.>; (q) <cit.>; (r) <cit.>; (s) <cit.>; (t) <cit.>; (u) <cit.>; (v) <cit.>; (w) <cit.>; (x) <cit.>; (y) <cit.>; (z) <cit.>; (aa) <cit.>; (ab) <cit.>; (ac) <cit.>; (ad) <cit.>; (ae) <cit.>; (af) <cit.>; (ag) <cit.>; (ah) <cit.>; (ai) <cit.>; (aj) <cit.>; (ak) <cit.>; (al) <cit.>; (am) <cit.>
http://arxiv.org/abs/1709.09182v2
{ "authors": [ "Sebastien Fabbro", "Kim Venn", "Teaghan O'Briain", "Spencer Bialek", "Collin Kielty", "Farbod Jahandar", "Stephanie Monty" ], "categories": [ "astro-ph.IM", "astro-ph.SR" ], "primary_category": "astro-ph.IM", "published": "20170926180016", "title": "An Application of Deep Neural Networks in the Analysis of Stellar Spectra" }
http://arxiv.org/abs/1709.09444v1
{ "authors": [ "Stefan Kindermann", "Sergiy Pereverzyev Jr.", "Andrey Pilipenko" ], "categories": [ "math.NA" ], "primary_category": "math.NA", "published": "20170927105442", "title": "The quasi-optimality criterion in the linear functional strategy" }
Imaging the water snowline in protostellar envelopes

Merel L. R. van 't Hoff

Determining the locations of the major snowlines in protostellar environments is crucial to fully understand the planet formation process and its outcome. Despite being located far enough from the central star to be spatially resolved with ALMA, the CO snowline remains difficult to detect directly in protoplanetary disks. Instead, its location can be derived from N_2H^+ emission, when chemical effects like photodissociation of CO and N_2 are taken into account. The water snowline is even harder to observe than that for CO, because in disks it is located only a few AU from the protostar, and from the ground only the less abundant isotopologue H_2^18O can be observed. Therefore, using an indirect chemical tracer, as done for CO, may be the best way to locate the water snowline. A good candidate tracer is HCO^+, which is expected to be particularly abundant when its main destructor, H_2O, is frozen out. Comparison of H_2^18O and H^13CO^+ emission toward the envelope of the Class 0 protostar IRAS2A shows that the emission from both molecules is spatially anticorrelated, providing a proof of concept that H^13CO^+ can indeed be used to trace the water snowline in systems where it cannot be imaged directly.

§ INTRODUCTION

The formation of low-mass stars begins with the collapse of a dense core in a molecular cloud. To conserve angular momentum, the infalling material forms a disk around the protostar. Due to the ongoing accretion of material from the surrounding envelope through the disk onto the star, together with the launching of an outflow, the envelope dissipates over time. What remains is a pre-main sequence star surrounded by a protoplanetary disk that contains the gas and dust from which a planetary system may be forming. The composition of the planets is thus determined by the chemical structure of the protostellar environment. A key aspect of protostellar chemistry is the presence of snowlines. A snowline marks the midplane radius at which a molecular species freezes out from the gas phase onto dust grains. The location of a snowline depends both on the species-dependent sublimation temperature and on the physical structure of the protostellar envelope or protoplanetary disk (e.g., temperature and density; see Fig. <ref>). The selective freeze-out of major carbon- or oxygen-carrying species at different snowlines causes the elemental C/O ratio of the planet-forming material to vary with radial distance from the star (<cit.>; <cit.>). The bulk composition of planets may therefore be regulated by their formation location with respect to the major snowlines (e.g., <cit.>; <cit.>; <cit.>). Moreover, the growth of dust particles, and thus the planet formation efficiency, is thought to be enhanced in these freeze-out zones, for example due to the increase in solid density (e.g., <cit.>; <cit.>; <cit.>). Snowlines thus play a crucial role in the formation and composition of planets. Determining their locations is therefore key to fully understand the planet formation process and its outcome.

§ THE CO SNOWLINE

Of the major snowlines, the CO snowline is particularly interesting because CO ice is the starting point for the formation of many complex organic molecules (e.g., <cit.>). In addition, due to the low sublimation temperature of CO (∼20 K), the CO snowline is located relatively far away from the central star (10s–100 AU in protoplanetary disks; see Fig. <ref>).
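To make the dependence on the thermal structure concrete, the following minimal sketch inverts an assumed power-law midplane temperature profile, T(r) = T_0 (r/1 AU)^(-q), to estimate snowline radii from the sublimation temperatures quoted above; the normalization T_0 and index q are illustrative placeholders, not values derived in this work.

```python
# Minimal sketch: snowline radius from an assumed power-law midplane
# temperature profile T(r) = T0 * (r / 1 AU)**(-q). T0 and q are
# illustrative placeholders, not fitted disk values.

def snowline_radius_au(t_sub, t0=200.0, q=0.5):
    """Radius (AU) where T(r) drops to the sublimation temperature t_sub (K)."""
    return (t0 / t_sub) ** (1.0 / q)

print("H2O snowline (T_sub ~ 100 K): %.1f AU" % snowline_radius_au(100.0))  # a few AU
print("CO  snowline (T_sub ~  20 K): %.0f AU" % snowline_radius_au(20.0))   # ~100 AU
```

With these placeholder numbers the water snowline falls at a few AU and the CO snowline at roughly 100 AU, consistent with the ranges quoted above.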
Although we are now able to spatially resolve these radii with ALMA, locating the CO snowline directly remains difficult. CO line emission is generally optically thick and does not reveal the cold disk midplane. The logical step in such a situation would be to use optically thin isotopologues. However, even C^18O does not always offer a solution, as is seen for TW Hya (<cit.>): the CO abundance remains 1–2 orders of magnitude below the 10^-4 ISM abundance far within the radius where the CO snowline is expected based on the disk temperature. Furthermore, a recent ALMA survey of all disks in the Lupus star forming region suggests that CO abundances below the ISM abundance may be common (<cit.>; <cit.>), indicating that processes other than freeze-out may contribute to lowering the gas-phase CO abundance. In contrast to C^18O, the double isotopologue ^13C^18O has recently been shown to reveal the snowline location in TW Hya (<cit.>). However, the low ^13C^18O abundance restricts observations to the very few nearest and brightest disks. An alternative approach is to use a molecule whose abundance is strongly affected by the freeze-out of CO. One such molecule is DCO^+, which forms via reaction of H_2D^+ and CO. Based on these chemical considerations, DCO^+ emission is expected to form a ring inside the CO snowline, where the CO abundance is low enough to enhance H_2D^+ formation, while there is still enough gaseous CO left to act as a parent molecule for DCO^+. <cit.> indeed observed ring-shaped DCO^+ emission toward HD 163296, but the outer radius was later shown not to correspond exactly to the CO snowline (<cit.>); probably because DCO^+ can also form in warmer regions higher up in the disk from CH_2D^+ (<cit.>). ALMA observations of six protoplanetary disks by <cit.> corroborate that DCO^+ emission, while still tracing relatively cold gas, does not have a direct relation with the CO snowline in disks. Another molecule suggested to trace the CO snowline is N_2H^+. Formation of N_2H^+ occurs through proton transfer from H_3^+ to N_2,

N_2 + H_3^+ → N_2H^+ + H_2,

but is impeded when CO is present in the gas phase because CO competes with N_2 for reaction with H_3^+,

CO + H_3^+ → HCO^+ + H_2.

In addition, reaction with gaseous CO is the dominant destruction route of N_2H^+:

N_2H^+ + CO → HCO^+ + N_2.

N_2H^+ is therefore expected to be abundant only beyond the CO snowline, where CO is depleted from the gas phase. A CO snowline location has indeed been derived from N_2H^+ emission for the disks around TW Hya and HD 163296 (<cit.>). However, a simple chemical model incorporating the main reactions for N_2H^+ (Eqs. <ref>–<ref>) in addition to freeze-out, thermal desorption and photodissociation of CO and N_2, combined with radiative transfer modeling, shows that the relationship between N_2H^+ and the CO snowline is more complicated (<cit.>). First, the N_2H^+ abundance peaks at temperatures slightly below the CO freeze-out temperature, instead of directly at the CO snowline (see Fig. <ref>), as also found by <cit.>. The snowline marks the radius where 50% of the CO is present in the gas phase and 50% is frozen out. Apparently, this reduction in gaseous CO is not yet enough to diminish the N_2H^+ destruction. Second, N_2H^+ can be formed higher up in the disk, above the layer where CO is frozen out, due to a small difference in the photodissociation rates for CO and N_2 (see Fig. <ref>). The slightly higher rate for CO creates another region where N_2 is still present in the gas phase, while CO is not.
This “surface” layer of N_2H^+ can contribute significantly to the emission, shifting the N_2H^+ emission peak to larger radii, away from the CO snowline. N_2H^+ emission alone then merely provides an upper limit for the snowline location. Applying our modeling approach to the TW Hya disk, using the physical model from <cit.>, we derive a CO snowline location of ∼18 AU, instead of the ∼30 AU suggested by <cit.>. The latter authors fitted a radial N_2H^+ column density profile to the emission with a steep rise at the CO snowline. Our outcome is consistent with the results from <cit.> based on multiple ^13CO and C^18O lines, and the ^13C^18O analysis by <cit.>. Deriving the CO snowline location from N_2H^+ emission is thus not as straightforward as was generally assumed, but, given a good physical model for the target disk, the location can be determined using simple chemistry in combination with radiative transfer modeling.

§ THE H_2O SNOWLINE

The most important snowline is the water snowline, since the bulk of the oxygen budget and ice mass is in water ice. Unfortunately, this snowline is even more difficult to observe than that for CO. Because of the large binding energy of H_2O, water sublimates off the grains only at temperatures above ∼100 K. This means that the snowline is located a few AU from the star in protoplanetary disks, that is, ∼0.01^'' in the nearest star-forming region Taurus. High angular resolution is thus required to observe it. Furthermore, except for the H_2O line at 183 GHz, the only thermal water lines observable from the ground are those from the less abundant isotopologue H_2^18O. As such, even ALMA will have great difficulty detecting the water snowline in protoplanetary disks. Cold water lines (<100 K) have been detected from space toward TW Hya with Herschel (<cit.>), but the Herschel beam is too large to resolve the snowline. So far, a snowline location has been reported only for the disk around V883 Ori, but this was inferred from a steep drop in the dust optical depth (<cit.>). The best way to locate the water snowline may therefore be by applying the same strategy as done for CO, that is, using a chemical tracer. The best candidate to trace the water snowline is HCO^+, because gaseous H_2O is its most abundant destroyer (<cit.>; <cit.>). A strong decline in HCO^+ abundance is thus expected when H_2O desorbs from the dust grains. This is corroborated by the results from chemical models. Figure <ref> shows the outcome of the three-phase astrochemical code GRAINOBLE (<cit.>) for the 1D temperature and density structure of the envelope around NGC1333 IRAS2A (<cit.>). Three-phase models consider reactions for gas-phase species, reactions for species on the ice surface, and reactions for bulk ice species. As expected, the HCO^+ and H_2O abundances show a strong anticorrelation. Hints of this anticorrelation are seen in observations towards the protostar IRAS 15398-3359, which show ring-shaped H^13CO^+ emission surrounding CH_3OH emission, another grain-surface molecule with a snowline location similar to that of water (<cit.>). The non-detection of the high excitation H_2^18O 4_1,4-3_2,1 line prevented unambiguous confirmation, but the detected HDO emission, although more complex, is consistent with the H_2O-HCO^+ anticorrelation scenario (<cit.>).
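The expected anticorrelation can be illustrated with a deliberately minimal steady-state toy model (not the GRAINOBLE network used above): HCO^+ is formed at a constant rate and destroyed by gaseous H_2O inside the water snowline and by electron recombination outside it. The temperature profile, abundances, formation rate and rate coefficients below are all illustrative placeholders, chosen only to reproduce the qualitative behavior.

```python
import numpy as np

# Toy steady-state sketch of the HCO+ / H2O anticorrelation in an envelope.
# Every number here is an illustrative placeholder, not a real chemical network.

r = np.logspace(0, 3, 200)                # radius [AU]
T = 1000.0 * r**-0.5                      # assumed envelope temperature profile [K]

x_h2o = np.where(T > 100.0, 1e-4, 1e-10)  # gaseous H2O: abundant only where T > 100 K
x_e = 1e-8                                # assumed constant electron fraction

F = 1e-20                                 # assumed constant HCO+ formation rate (CO + H3+)
k_h2o, k_e = 1e-9, 1e-7                   # placeholder destruction rate coefficients
x_hcop = F / (k_h2o * x_h2o + k_e * x_e)  # steady state: formation = destruction

# x_hcop is suppressed by ~2 orders of magnitude inside the water snowline and
# jumps outside it, mimicking ring-shaped H13CO+ around compact H2^18O emission.
print("water snowline at ~%.0f AU" % r[np.argmax(T < 100.0)])
```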
Protostellar envelopes, like IRAS 15398-3359, are the best sources to establish whether HCO^+ is a good snowline tracer: the water snowline is located further away from the star than in disks (10s–100s AU rather than a few AU; <cit.>; <cit.>), and compact warm water has already been observed toward four sources (<cit.>; <cit.>, 2013; <cit.>). The only thing lacking is thus HCO^+ observations. We have therefore observed the optically thin isotopologue H^13CO^+ toward the Class 0 protostar NGC1333 IRAS2A using NOEMA (<cit.>). Comparison with the H_2^18O emission presented by <cit.> shows that while H_2^18O peaks on source, H^13CO^+ has its emission peak ∼2^'' off source (see Fig. <ref>). As a first analysis, we performed a 1D radiative transfer calculation with Ratran (<cit.>), using the 1D temperature and density structure derived by <cit.> from DUSTY modeling (<cit.>) of the continuum emission. A simple parametrized abundance profile for H^13CO^+, with sharp decreases inside the H_2O snowline and outside the CO snowline, as predicted by a full chemical model, can reproduce the observed location of the emission peak (see Fig. <ref>). The H^13CO^+ abundance drops outside the CO snowline because its parent molecule, CO, is frozen out. The temperature, however, has to be increased by a factor of ∼2 to reproduce the observed peak location with a drop in H^13CO^+ at 100 K. A constant H^13CO^+ abundance at all radii, on the other hand, produces emission peaking on source, unlike what is observed. These results suggest that water and HCO^+ are indeed anticorrelated, and provide a proof of concept that the optically thin isotopologue H^13CO^+ can be used to trace the water snowline.

§ SUMMARY AND OUTLOOK

Due to its close proximity to the central star and the fact that only the less abundant isotopologue H_2^18O can be observed from the ground, both high angular resolution and high sensitivity are required to observe the water snowline. Chemical imaging may therefore be the only way to locate this snowline in protoplanetary disks. This approach has proven useful for the CO snowline, although simple chemical considerations have to be taken into account when deriving the CO freeze-out radius from N_2H^+ emission. The best candidate to image the water snowline is HCO^+, because its main destructor is gaseous H_2O. The first observations of both H^13CO^+ and H_2^18O toward the protostellar envelope of NGC1333 IRAS2A show that these molecules are indeed spatially anticorrelated. This suggests that H^13CO^+ may be used as a tracer of the water snowline in protoplanetary disks.

§ ACKNOWLEDGMENTS

I would like to acknowledge and thank Catherine Walsh, Mihkel Kama, Stefano Facchini, Magnus Persson, Daniel Harsono, Vianney Taquet, Jes Jørgensen, Ruud Visser, Edwin Bergin and Ewine van Dishoeck for their contributions to this work, and support from a Huygens fellowship from Leiden University.

[Aikawa et al. 2015]Aikawa2015 Aikawa, Y., Furuya, K., Nomura, H., & Qi, C. 2015, ApJ, 807, 120
[Ansdell et al. 2016]Ansdell2016 Ansdell, M., Williams, J.P., van der Marel, N. 2016, ApJ, 828, 46
[Bergin et al. 1998]Bergin1998 Bergin, E.A., Melnick, G.J., & Neufeld, D.A. 1998, ApJ, 499, 777
[Bjerkeli et al. 2016]Bjerkeli2016 Bjerkeli, P., Jørgensen, J.K., Bergin, E.A., 2016, A&A, 595, A39
[Cieza et al. 2016]Cieza2016 Cieza, L.A., Casassus, S., Tobin, J., 2016, Nature, 535, 258
[Eistrup et al. 2016]Eistrup2016 Eistrup, C., Walsh, C., & van Dishoeck, E.F. 2016, A&A, 595, A83
[Favre et al. 2015]Favre2015 Favre, C., Bergin, E.A., Cleeves, L.I., Hersant, F., Qi, C., & Aikawa, Y. 2015, ApJL, 802, L23
[Herbst & van Dishoeck 2009]Herbst2009 Herbst, E., & van Dishoeck, E.F. 2009, ARA&A, 47, 427
[Harsono et al. 2015]Harsono2015 Harsono, D., Bruderer, S., & van Dishoeck, E.F. 2015, A&A, 582, A41
[Hogerheijde & van der Tak 2000]Hogerheijde2000 Hogerheijde, M.R., & van der Tak, F.F.S. 2000, A&A, 362, 697
[Hogerheijde et al. 2011]Hogerheijde2011 Hogerheijde, M.R., Bergin, E.A., Brinch, E.A., 2011, Science, 334, 338
[Huang et al. 2017]Huang2017 Huang, J., Öberg, K.I., Qi, C., 2017, ApJ, 835, 231
[Ivezić & Elitzur 1997]Ivezic1997 Ivezić, Z., & Elitzur, M. 1997, MNRAS, 287, 799
[Jørgensen & van Dishoeck 2010]Jorgensen2010 Jørgensen, J.K., & van Dishoeck, E.F. 2010, ApJL, 710, L72
[Jørgensen et al. 2013]Jorgensen2013 Jørgensen, J.K., Visser, R., Sakai, N., 2013, ApJL, 779, L22
[Kama et al. 2016]Kama2016 Kama, M., Bruderer, S., van Dishoeck, E.F., 2016, A&A, 592, A83
[Kristensen et al. 2012]Kristensen2012 Kristensen, L.E., van Dishoeck, E.F., Bergin, E.A., 2012, A&A, 542, A8
[Madhusudhan et al. 2014]Madhusudhan2014 Madhusudhan, N., Amin, M.A., & Kennedy, G.M. 2014, ApJL, 794, L12
[Mathews et al. 2013]Mathews2013 Mathews, G.S., Klaassen, P.D., Juhász, A., 2013, A&A, 557, A132
[Miotello et al. 2017]Miotello2017 Miotello, A., van Dishoeck, E.F., Williams, J.P., 2017, A&A, 599, A113
[Öberg et al. 2011]Oberg2011 Öberg, K.I., Murray-Clay, R., & Bergin, E.A. 2011, ApJL, 743, L16
[Öberg & Bergin 2016]Oberg2016 Öberg, K.I., & Bergin, E.A. 2016, ApJL, 831, L19
[Persson et al. 2012]Persson2012 Persson, M.V., Jørgensen, J.K., & van Dishoeck, E.F. 2012, A&A, 541, A39
[Persson et al. 2013]Persson2013 Persson, M.V., Jørgensen, J.K., & van Dishoeck, E.F. 2013, A&A, 549, L3
[Phillips et al. 1992]Phillips1992 Phillips, T.G., van Dishoeck, E.F., & Keene, J. 1992, ApJ, 399, 533
[Qi et al. 2013]Qi2013 Qi, C., Öberg, K.I., Wilner, D.J., 2013, Science, 341, 630
[Qi et al. 2015]Qi2015 Qi, C., Öberg, K.I., Andrews, S.M., 2015, ApJ, 813, 128
[Ros & Johansen 2013]Ros2013 Ros, K., & Johansen, A. 2013, A&A, 552, A137
[Schoonenberg & Ormel 2017]Schoonenberg2017 Schoonenberg, D., & Ormel, C.W. 2017, A&A, 602, A21
[Schwarz et al. 2016]Schwarz2016 Schwarz, K., Bergin, E.A., Cleeves, L.I., 2016, ApJ, 823, 91
[Stevenson & Lunine 1988]Stevenson1988 Stevenson, D.J., & Lunine, J.I. 1988, Icarus, 75, 146
[Taquet et al. 2013]Taquet2013 Taquet, V., López-Sepulcre, A., Ceccarelli, C., 2013, ApJL, 768, L29
[Taquet et al. 2014]Taquet2014 Taquet, V., Charnley, S.B., & Sipilä, O. 2014, ApJ, 791, 1
[van 't Hoff et al. 2017a]vantHoff2017 van 't Hoff, M.L.R., Walsh, C., Kama, M., Facchini, S., & van Dishoeck, E.F. 2017a, A&A, 599, A101
[van 't Hoff et al. 2017b]vantHoff2017b van 't Hoff, M.L.R., Persson, M.V., Harsono, D., 2017b, A&A, submitted
[Walsh et al. 2015]Walsh2015 Walsh, C., Nomura, H., & van Dishoeck, E.F. 2015, A&A, 582, A88
[Zhang et al. 2017]Zhang2017 Zhang, K., Bergin, E.A., Blake, G.A., Cleeves, L.I., & Schwarz, K.R. 2017, Nat. Astron., 1, 0130
http://arxiv.org/abs/1709.09184v1
{ "authors": [ "Merel L. R. van 't Hoff" ], "categories": [ "astro-ph.SR" ], "primary_category": "astro-ph.SR", "published": "20170926180018", "title": "Imaging the water snowline in protostellar envelopes" }
Zero-rating of Content and its Effect on the Quality of Service in the Internet

Manjesh K. Hanawal, Fehmina Malik and Yezekael Hayel

Manjesh K. Hanawal and Fehmina Malik are with IEOR, IIT Bombay, India. E-mail: {mhanawal, fmalik}@iitb.ac.in. Yezekael Hayel is with LIA/CERI, University of Avignon, France. E-mail: [email protected]

The ongoing net neutrality debate has generated a lot of heated discussions on whether or not monetary interactions should be regulated between content and access providers. Among the several topics discussed, `differential pricing' has recently received attention due to `zero-rating' platforms proposed by some service providers. In the differential pricing scheme, Internet Service Providers (ISPs) can exempt data access charges on content from certain CPs (zero-rated), while no exemption applies to content from other CPs. This allows the possibility for Content Providers (CPs) to make `sponsorship' agreements to zero-rate their content and attract more user traffic. In this paper, we study the effect of differential pricing on various players in the Internet. We first consider a model with a monopolistic ISP and multiple CPs where users select CPs based on the quality of service (QoS) and data access charges. We show that in a differential pricing regime 1) a CP offering low QoS can have higher surplus than a CP offering better QoS through sponsorships, and 2) the overall QoS (mean delay) for end users can degrade under differential pricing schemes. In the oligopolistic market with multiple ISPs, users tend to select the ISP with the lowest price, resulting in the same type of conclusions as in the monopolistic market. We then study how differential pricing affects the revenue of ISPs.

§ INTRODUCTION

The term 'network neutrality' generally refers to the principle that Internet Service Providers (ISPs) must treat all Internet traffic of a particular class on an equal basis, without regard to the type, origin, or destination of the content or the means of its transmission. What it implies is that all points in a network should be treated equally, without any discrimination on speed, quality or price. Some of these features have made the Internet grow at such a fast rate. Any practice of blocking, throttling, preferential treatment, or discriminatory tariffs on content or applications is treated as non-neutral behavior. In this paper we study non-neutral behavior related to discriminatory access prices – the price an ISP charges the end users to provide access to the CPs.

In recent years, with the growing popularity of data intensive services, e.g., online video streaming and cloud-based applications, Internet traffic has been growing more than 50% per year <cit.>, causing serious network congestion, especially in wireless networks which are bandwidth constrained. To sustain such rapid traffic growth and enhance user experiences, ISPs need to upgrade their network infrastructures and expand capacities. However, the revenues from end-users are often not enough to recoup the corresponding costs, and ISPs are looking at other methods for revenue generation. Among the methods being adopted is a move away from flat-rate pricing to volume-based pricing <cit.>, especially by the wireless ISPs.
On the other hand, CPs that work on different revenue models (mainly advertisement driven) are seeing growth in revenues <cit.> due to higher traffic. Pointing to this disparity, some ISPs have proposed that CPs return the benefit of the value chain by sharing their revenues with them. In <cit.>, the author describes the evolution of the pricing structure on the Internet along the different relationships between CPs, ISPs and end users. It is observed that: “the network neutrality issue is really about economics rather than freedom or promoting/stifling innovation." While ISPs' earnings depend on the total volume of traffic that flows through their networks (under volume-based pricing), a CP's revenue often depends on what fraction of that traffic is directed to it. Also, as noted in <cit.>, there is a lot of competition on the content side of the Internet, but not enough at the last-mile ISPs, as most of the time it is a monopolistic market. To increase their share of traffic, many CPs are exploring various smart data pricing (SDP) schemes <cit.> and also exploring favors from ISPs so that they stand out in the competition. Some CPs prefer that ISPs incentivize the end users to access their content more, either by giving higher priority to their content or by exempting access charges on it. In turn, CPs can share the gains with the ISPs. This looks like an attractive proposition for ISPs, who anyway want CPs to share their revenue with them. Among the various models for monetary interactions, zero-rating has found traction. In zero-rating, an ISP and CPs enter into a sponsorship agreement such that the ISP subsidizes the data traffic charges (or access price) applicable to the content accessed from the CPs. The CPs compensate the ISP either by repaying the subsidy amount or by offering free advertisement services. In return, sponsoring CPs hope to get more traffic and earn higher advertisement revenues. The article <cit.> describes an economic mechanism based on subsidization of the usage-based access cost of users by the CPs. The author argues that this induces an improvement of the revenue of the ISP and thus strengthens the investment incentives. Zero-rating of content has no effect on the end users under flat-rate pricing models. However, it affects their decisions under volume-based subscriptions, where they are charged based on the amount of traffic they consume. We thus focus on volume-based pricing models, which are predominantly followed (or being planned) by the wireless ISPs. Differential pricing schemes, and in particular zero-rating methods, are non-neutral as they allow ISPs to discriminate content based on its origin. Net neutrality advocates argue that differential pricing hinders innovation at the CPs, as new entrants cannot afford to get their content zero-rated and will be left out in the competition. Those favoring differential pricing argue that it helps more users connect to the Internet, especially in the developing countries where Internet penetration is still low. Some zero-rated platforms like BingeOn by T-Mobile, FreeBee by Verizon, and Free Basics by Facebook are accused of violating net neutrality principles and are under scrutiny. The issue of differential pricing is now a part of consultations launched by several regulatory authorities including the FCC, European Commission, and CRTC <cit.>, seeking the public's and stakeholders' opinion. Differential pricing is banned in Chile and The Netherlands, with India being the latest to do so <cit.>. Zero-rating is an SDP scheme through which CPs aim to attract more user traffic.
However, pricing schemes alone do not guarantee higher traffic, as quality of service (QoS) also matters to users. While the QoS experienced at CPs depends on long-term planning and investment in the service capacities of their facilities, pricing strategies can be based on short-term planning and constitute running costs. The CPs can trade off between long-term and running costs to maximize their revenues. Our aim in this paper is to understand how zero-rating schemes affect the revenues of CPs and the QoS experienced by the users. Specifically, we ask whether a CP offering lower QoS can earn more revenue than CPs offering higher QoS through differential pricing, and whether the QoS experienced by users under differential schemes can degrade compared to the case where it is not allowed (the neutral regime). As no market data is available for an empirical study of the effect of differential pricing, we take an analytical approach and model the scenario as follows: we consider a single ISP connecting users to a set of CPs that offer the same/similar content. The users are mobile devices or any Internet-enabled devices that generate requests to access content from the CPs. The requests are assumed to be generated according to a Poisson process. We consider a hierarchical game where the ISP acts as a leader and sets the access price. The CPs then negotiate with the ISP and competitively decide what fraction of the access price they will sponsor (or subsidize) such that their utility, defined as the difference between the average revenues earned from the user traffic and the amount they have to pay to the ISP, is maximized. Finally, knowing the CPs' decisions, users compete and select CPs such that their cost, defined as the sum of mean delay and the corresponding access price, is minimized. We analyze the hierarchical game via a backward induction technique. Our analysis reveals that the answers to both of the previous questions can be positive. We identify the regimes where differential pricing schemes lead to an unfair distribution of revenues among CPs and where users' QoS experience degrades. However, if the access prices set by ISPs are regulated, both unfavorable scenarios can be avoided.

The rest of the paper is organized as follows: In Section <ref>, we discuss the model and settings of the hierarchical game. We begin with the equilibrium behavior of the users in Section <ref> and study the mean delay experienced by them in Section <ref>. In Section <ref>, we study the preference of CPs for differential pricing and demonstrate that the game need not have Nash equilibria. In Section <ref>, we consider the exogenous demand and study its effect on the CPs' behavior. In Section <ref>, we analyze the game where CPs' decisions are restricted to either fully sponsoring the access price or not sponsoring at all. We study the resulting monetary gain for the ISP at equilibrium in Section <ref>. We extend our analysis to include multiple ISPs in Section <ref>. Finally, we discuss regulatory implications of our analysis and future extensions in Section <ref>.

§.§ Literature review

There is a significant amount of literature that analyzes various aspects of the net neutrality debate, like incentives for investment, QoS differentiation, side payments or off-network pricing, through analytical models. For a detailed survey see <cit.>.
However, the literature on the effect of differential pricing and the QoS experienced by users in this regime is sparse, and we discuss it below. In <cit.>, the authors study a game between a CP and an ISP where the ISP first sets the price parameters for sponsorship and the CP responds by deciding what volume of content it will sponsor. This model assumes that the users always access the sponsored content irrespective of the QoS, which is not always true. Multiple CPs with larger (richer) and smaller revenues are considered in <cit.>. When the ISP charges both the end users and the CPs, it is argued that richer CPs derive more benefit through sponsorship in the long run (market shares are dynamic). Negotiation between the ISP and CPs is studied in <cit.> using a Stackelberg game or a Nash bargaining game where CPs negotiate with the ISP for higher QoS for their content. It is argued that QoS at equilibrium improves in both games. The analysis involving the Stackelberg game is extended to multiple CPs in <cit.>. A scheme named Quality Sponsored Data (QSD) is proposed in <cit.>, where ISPs make a portion of their resources available to CPs for sponsorship. Voluntary subsidization of the data usage charges by CPs for accessing their content is proposed in <cit.>. It is argued that voluntary subsidization of traffic increases the welfare of the Internet content market. A hierarchical game involving an ISP, CPs and multiple types of users is analyzed in <cit.>, and it is shown that all parties benefit from sponsored data. However, that work ignores the effect of QoS. In <cit.>, QoS parameters are considered in deriving the total traffic generated by the users. A Stackelberg game between an ISP and CPs is analyzed where the ISP decides the access price and the CPs decide whether or not to sponsor user traffic. Competition among the users and their strategic behavior is not considered in this work. Furthermore, this work considers a `QoS index' as an exogenous parameter (required bandwidth), whereas we consider the QoS perceived by the users. In <cit.>, the authors study the ISP's optimal service differentiation strategy subject to its network capacity constraints. Based on optimal taxation theory, the authors consider an optimal control problem. In <cit.>, the authors propose a model considering sponsored data plans in which content providers pay for traffic on behalf of their consumers. In that paper, the authors consider a demand-response-type model for end users' consumption, whereas in our framework we assume a congestion game in order to model the interaction through a quality metric that directly impacts the demand. Agreements between CPs to link their content and thereby propose a better offer are analyzed in <cit.> through the network neutrality prism. The authors show that CPs are interested in reaching a linking agreement when the termination fee set by the Internet Service Provider (ISP) is not particularly high. In their model, the authors also do not consider explicit congestion on the demand side, as we do in our analysis. Moreover, we do not consider agreements between CPs about content; we assume only possible economic agreements between CPs and one ISP. Our work differs from all the models studied in the literature as we consider users to be strategic, and the traffic distribution in the network is derived from their equilibrium behavior. Further, we focus on the QoS offered at the CPs and study how it influences the CPs' sponsorship decisions and how it in turn affects the revenue of the ISPs.
This work is a significant extension of an earlier conference version <cit.> that only considered the single-ISP case, with preliminary discussions about exogenous arrivals.

§ MODEL AND SETUP

We first consider a monopolistic market with a single Internet Service Provider (ISP) that connects end users to the Internet. We focus on a particular type of inelastic traffic, namely content, say videos, music, or online shopping, that the users can access from content providers (CPs). Multiple CPs offer the same/similar type of content, and end users can access content from any of them. The ISP charges c monetary units per unit of traffic accessed through its network to the end users. We refer to this price as the `access price'. As specified in <cit.>, broadband ISPs in the US and Europe have recently introduced data caps and adopted a two-part tariff structure, a combination of flat-rate and usage-based pricing. Under such a two-part scheme, additional charges are imposed if a user's data usage exceeds the data cap, and the excess amount is charged based on a per-unit fee. Therefore, we consider in our model the usage-based part of the scheme, which captures the relationship between access price and demand. Let N denote the number of CPs and [N] the set of CPs. Each CP can enter into a zero-rating agreement with the ISP. In this case, the CP pays a proportion of the access price for the content accessed from it, and the ISP passes on the benefit to the end users. This proportion may be different for each CP. Let γ_i ∈ [0, 1], i ∈ [N], denote the fraction of the access price end users pay to access content of the i-th CP (also denoted CP_i). The value of γ_i, i ∈ [N], is decided by the i-th CP, and we refer to it as the subsidy factor. For every unit of traffic accessed from the i-th CP, end users and the i-th CP pay γ_i c and (1-γ_i)c, respectively, to the ISP. We allow the possibility for the i-th CP to pay the ISP only ρ(1-γ_i)c per unit of traffic accessed from it, where ρ ∈ (0, 1]. This parameter determines the level of negotiation between CPs and the ISP. We refer to the special cases when γ_i ∈ (0, 1), γ_i = 0, and γ_i = 1, as the i-th CP being Partially-sponsoring (P), Sponsoring (S), and No-sponsoring (N), respectively. We consider a large population of end users that access the content from CPs by sending requests to them. The requests are generated according to a Poisson process with rate λ. Each request results in a certain amount of traffic flow between the CPs and end users going through the ISP. Without loss of generality, we assume that the mean traffic flow per request is one unit[Otherwise ρ can be appropriately scaled. See (<ref>).]. Each end user decides to which CP to send its request, and this selection does not depend on the choice of other users or on its past decisions[The assumption that users do not have memory is not overly restrictive as the population is large and each new request is likely from a different user.]. The quality of service (QoS) experienced by end users depends on the quantity of requests served and the service capacity of the CPs. The latter depends on the investments in infrastructure: the higher the investment, the larger the capacity. Let m_i, i ∈ [N], denote the service capacity of the i-th CP, and let T_i(λ_i) denote the QoS experienced by end users at the i-th CP when the CP is serving λ_i requests. We stress that the QoS here refers to that experienced at the CPs and not on the Internet backbone or the ISP network.
The network model for the case of two CPs is depicted in Figure <ref>.

§.§ Utility of Users

Each end user wants its request to be served as early as possible. We consider the mean delay experienced at each CP as the QoS metric and set T_i(x) = 1/(m_i - x) for all i ∈ [N], which is the well-known mean (stationary) delay in an M/M/1 queue <cit.>[Note that if the service times at the CPs are exponentially distributed, then 1/(m_i - x) gives the exact value of the mean delay at the i-th CP when it receives requests at Poisson rate x.]. End users' decision on which CP to send their request to depends on the QoS as well as the access price. We define the utility (or cost) of each end user selecting a CP as the sum of the mean delay and the access price it incurs at that CP. Specifically, the cost of an end user served at the i-th CP receiving requests at rate x is[A more general cost function C_i(x) that is convex and strictly increasing in x can also be analyzed using the techniques we use in this paper.]

C_i(x) := C_i(x, m_i, c, γ_i) = 1/(m_i - x) + γ_i c if m_i > x, and C_i(x) = ∞ otherwise.

Each end user aims to select a CP that gives the minimum cost. Note that this formulation is the same as allowing each user to choose a CP with the lowest mean delay subject to a price constraint. This criterion also turns out to optimize a weighted sum of mean delay and price, with the weight corresponding to the Lagrangian multiplier associated with the constraint.

§.§ Utility of Content Providers

The revenue of a CP is an increasing function of the number of requests it receives. Generally, CPs' revenue comes from per-click advertisements accessible on their webpages, and more requests imply more views of advertisements (for example, a CP can embed an advertisement in each request served), resulting in higher income. We define the utility of CP_i as the net revenue earned after discounting the access price paid through sponsorship. Let U_i denote the utility of CP_i, i ∈ [N], when it sponsors a fraction γ_i of the access price and receives a mean number of requests λ_i; then:

U_i(γ_i) = f_i(λ_i) - ρ(1-γ_i)cλ_i.

The function f_i(·) is monotonically increasing and denotes, for example, the revenue from per-click advertisements. The exact definition of this function is out of the scope of this paper. The number of requests received at a CP is a function of the γ_i, i ∈ [N], set by all the CPs. End users react to these decisions, and hence the decisions of the CPs are coupled through the end users' behavior. We assume that the values of m_i, i ∈ [N], are fixed and are public knowledge.

§.§ Decision Timescale and Hierarchy

We now explain the decision stages of our system, which involves different types of players: the ISP, the CPs and the end users. Each of them makes decisions in a hierarchy, and their decisions change over different timescales. At the top of the hierarchy, the ISP sets the access price c, which remains fixed for a long duration. Next in the hierarchy, the CPs make sponsorship decisions after knowing the access price set by the ISP. The CPs' decisions (parameters γ_i) can change at a timescale that is smaller than that of the ISP. At the bottom level of the hierarchy, the end users decide from which CP to access content, knowing the subsidized access prices of all CPs. The users' decisions change at a smaller timescale than that of the CPs. In the following, we study the game among each type of players and how the decisions of each type influence the performance of the other players. We make the following assumptions in the rest of the paper:

A1: ∑_i=1^N m_i > λ.

A2: m_i < λ for all i ∈ [N].
A3: m_1 < m_2 ≤ m_3 ≤ … ≤ m_N and m_1 > λ/N.

The first assumption ensures that a user can always find a CP where the mean waiting time is bounded. The second assumption implies that no single CP is capable of handling all the requests alone, and hence requests get spread across multiple CPs. The last condition indexes the CPs according to their rank in service capacity and ensures that the CP with the lowest service capacity gets a non-zero amount of requests at equilibrium. For notational convenience, we write m = ∑_i=1^N m_i - λ, which denotes the excess service capacity in the network. To maintain analytic tractability and get clear insights, we restrict our theoretical analysis to the case with two CPs, i.e., we set N = 2. However, for general N, numerical studies can be done, as it turns out that the equilibrium rates can generally be found efficiently by solving a convex program (see <cit.>). We start with the study of the end users' behavior.

§ USER BEHAVIOR

In this section, we study how end users respond to the sponsorship decisions of the CPs. Each user selects one CP to process its request without knowing the current occupancy of the CPs nor the past (and future) arrivals of requests from other end users. The end users' decision to select a CP is then necessarily probabilistic, and users aim to select CPs according to a distribution that results in the smallest expected cost. Since all users are identical, we are interested in symmetric decisions where every user applies the same probabilistic decision on the arrival of a new request. For any symmetric decision, the arrival process of requests at each CP is also Poisson, by the thinning property of Poisson processes. We apply Wardrop equilibrium conditions to find the equilibrium decision strategy of the large population of users <cit.>. Let γ_-i denote the subsidy factor of the CP other than CP_i, and let λ^* = (λ_1^*, λ_2^*), where λ_i^* := λ_i^*(γ_i, γ_-i), i ∈ [N], is the arrival rate at the i-th CP at equilibrium. We have that ∑_i=1^N λ_i^* = λ. Then we have the following properties coming from Wardrop's principles, which can be summarized in our context by the following sentence: "every strictly positive request rate implies minimum cost"[We can apply Wardrop equilibrium conditions as the request arrivals satisfy the PASTA (Poisson Arrivals See Time Averages) property <cit.>.]:

for all i with λ_i^* > 0, C_i(λ_i^*) ≤ C_j(λ_j^*) for all j ≠ i,

which can also be expressed as

for all i, λ_i^* · (C_i(λ_i^*) - α) = 0, where α = min_i C_i(λ_i^*).

For N = 2 and a given action profile (γ_1, γ_2) of the CPs, the equilibrium rates are as follows:

λ_i^* = m_i - 1/(α - γ_i c) for all i = 1, 2,

where α := α(γ_1, γ_2) is the equilibrium cost given by

α = c(γ_1 + γ_2)/2 + 1/m + √((c(γ_1 - γ_2))^2/4 + 1/m^2).

The following lemma follows immediately from the previous lemma. For given γ_i, i = 1, 2, the equilibrium rate λ_i^* is strictly increasing in γ_-i, i.e., the equilibrium rate at a CP increases if the subsidy factor of the other CP increases.

From equation (<ref>), it is also clear that the equilibrium cost α is monotonically increasing in c. However, the effect of c on the equilibrium rates depends on the values of (γ_1, γ_2) relative to each other. We have the following proposition that describes the monotonicity properties of the equilibrium rates depending on the subsidy factors set by the CPs. For any (γ_1, γ_2), the equilibrium rates satisfy the following properties:

* if γ_1 < γ_2, then λ_1^* is monotonically increasing in c (and λ_2^* is decreasing in c).

* if γ_1 ≥ γ_2, then λ_1^* is monotonically decreasing in c (and λ_2^* is increasing in c).
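As a quick numerical illustration of the lemma above, the following sketch evaluates the closed-form equilibrium for N = 2 and verifies the Wardrop property that both CPs offer the same total cost α. The parameter values are illustrative choices satisfying A1-A3, not values taken from the paper.

```python
import math

def wardrop_eq(m1, m2, lam, c, g1, g2):
    """Closed-form Wardrop equilibrium for N = 2 (lemma above).

    Returns (lam1, lam2, alpha): the equilibrium request rates and the
    common equilibrium cost alpha.
    """
    m = m1 + m2 - lam                                  # excess capacity
    alpha = c * (g1 + g2) / 2.0 + 1.0 / m \
            + math.sqrt((c * (g1 - g2)) ** 2 / 4.0 + 1.0 / m ** 2)
    lam1 = m1 - 1.0 / (alpha - g1 * c)
    lam2 = m2 - 1.0 / (alpha - g2 * c)
    return lam1, lam2, alpha

# Illustrative numbers (not from the paper): m1 < m2 and A1-A3 hold.
m1, m2, lam, c = 2.0, 3.0, 3.5, 1.0
l1, l2, a = wardrop_eq(m1, m2, lam, c, g1=0.2, g2=0.8)

cost1 = 1.0 / (m1 - l1) + 0.2 * c     # mean delay + subsidized price at CP_1
cost2 = 1.0 / (m2 - l2) + 0.8 * c     # mean delay + subsidized price at CP_2
print(l1 + l2, lam)                   # the rates sum to the total demand
print(cost1, cost2, a)                # both costs equal alpha (Wardrop)
```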
* if _1≥_2, then _1^* is monotonically decreasing in c (and _2^* is increasing in c).Interestingly, when the sponsorship decisions of the CPs are symmetric, i.e., all CPs set the same subsidy factor, users are indifferent to the sponsorship decisions. To see this, notice from (<ref>) that when _i=∀ i=1,2for some ∈ [0 1] the equilibrium rates in (<ref>) are independent of . Thus, if the access prices for the content across all the CPs if either increased or decreased by the same amount, users preferences for the CPs do not change. § MEAN DELAY IN THE NETWORK In this section, we analyze the average waiting time (or delay) experienced by a 'typical' end user at equilibrium situation. We refer to any user that arrives at equilibrium as a typical end user.Given a global rate λ, a fraction λ_i^*/λ of the users' requests are served at i-th CP at equilibrium, and each one of them incur mean delay of 1/(m_i-λ_i^*). Since the end users are homogeneous, the equilibrium strategy of each end user is equivalent to the strategy to select CP_i with probability λ_i^*/λ. Hence we define mean delay (henceforth referred simply as delay) experienced by a typical user as: D(c,γ_1,γ_2)=∑_i=1^Nλ_i^*/λ1/m_i- λ_i^*. Given (γ_1,γ_2) and c, we have D(c,γ_1, γ_2)=α/λ∑_i=1^2m_i - c/λ∑_i=1^2m_iγ_i -2/λ,where α is given by Equation (<ref>). And when γ_i=γ for all i=1,2, for some γ∈ [0 1] we get D(c,γ):=D(c, γ, γ)=2/m.When γ_i are the same across all CPs, i.e., sponsorship decisions are symmetric, then the delay does not depend neither on the price set by the ISP nor on the subsidy factor set by theCPs. In this case, delay only depends on the excess capacity– larger the excess capacity smaller the delay. When N>2, it can be shown that the delay is given by N/m, i.e., increases linearly with the number of CPs. For each couple (γ_1,γ_2), the delayD(c, γ_1,γ_2) is convex inc. Further, if γ_1 ≥γ_2 it is monotonically increasing in c. An heuristic argument is as follows. Recall the assumption that m_1< m_2. When _1>_2, the equilibrium arrival rate increases at CP_2 (see Prop. <ref>) with c which in turn increases delay experienced by a typical user at CP_2. However, the corresponding rate of delay decrease at CP_1 (from the decreased arrival rate) is smaller due to its smaller capacity and it results in a overall increase in delay. On the other hand, when _1<_2, the equilibrium arrival rate increases at CP_1 (see Prop. <ref>) with c which in turn increases delay for a typical user at CP_1 but decreases at CP_2. For smaller values of c, the rate of increase in delay at CP_1 is small compared to the rate of decrease in delay at CP_2 as m_2>m_1. However, as more arrivals shift to CP_1 with larger value of c, the delay increases significantly at CP_1 and can dominate the rate of decrease of delay at CP_2. Hence, with increasing value c, delay first decreases and then can increase. The threshold on c where the delay changes its behavior, depends on the gap between service capacity of the CPs, i.e., m_2-m_1; larger the gap more prominent is the rate of decrease of delay at CP_2 and hence decrease in delay continues over a larger values of c. In the following, we study mean delay experienced by a typical user when the decisions of CPs are asymmetric and compare it with the case when decision of theCPs are symmetric. 
Since the mean delay is invariant to the amount of subsidy in the latter case, we consider the symmetric action profile (N,N) (corresponding to γ_1 = γ_2 = 1) as a reference and treat it as the regime with no differential pricing, or the `neutral' regime. The main result of this section follows. For any (γ_1, γ_2), the following properties hold:

* If γ_2 ≤ γ_1, then D(c, γ_1, γ_2) ≥ D(c, 1, 1) for all c.

* If γ_2 > γ_1, then D(c, γ_1, γ_2) ≥ D(c, 1, 1) if and only if c(γ_2 - γ_1) ≥ (1/m)(m_2/m_1 - m_1/m_2).

Notice that D(0, γ_1, γ_2) = D(c, γ, γ) for all γ_1, γ_2, γ ∈ [0, 1] and c. On one hand, for the case γ_2 ≤ γ_1, it is shown in Lemma <ref> that the delay is increasing in c, hence the first assertion in the theorem, implying that the delay in the differential regime is always higher compared to the neutral regime. On the other hand, when γ_2 > γ_1, the delay experienced by a typical user is larger than in the neutral regime provided the access price is larger than a certain threshold; otherwise it will be smaller. To understand this behavior, notice that D(c, γ_1, γ_2) is convex in c (see Lemma <ref>). This indicates that there exists a threshold on c above which the delay will be higher than in the neutral regime. As given in (<ref>), this threshold increases if the disparity between the service capacities of the CPs (m_2/m_1) increases and/or the disparity between the access prices (γ_2 - γ_1) for the content of the CPs decreases. In summary, the above result suggests that differential pricing can be unfavorable to the users. Specifically, when the CPs' sponsorship decisions are asymmetric, the mean delay for a typical user can be higher than that in the neutral regime. Theorem <ref> also explains the effect of the access price set by the ISP on the delay experienced by the users. When γ_2 > γ_1, smaller access prices actually benefit the users, as the delay in the differential regime is smaller compared with the neutral regime. However, if the access price is large, more than a certain threshold, this favorable scenario no longer holds and the delay in the network can be higher. In a differential regime, the QoS for end users can be degraded compared to that in the neutral regime if the CP with poor QoS offers a higher subsidy than the CP with better QoS.

§ BEHAVIOR OF CPS

In this section, we analyze the behavior of the CPs in this hierarchical system. In particular, we are interested in the question of which type of CPs prefer differential pricing or zero-rating schemes. We particularly study whether a CP with lower service capacity can earn more revenue than the other CP by allowing a higher subsidy. We then study the competition between the CPs and analyze their equilibrium behavior. In the following, we assume that the advertisement revenue of each CP is proportional to its request arrival rate. We set f(λ_i) = βλ_i, where β > 0 is a constant that depends on how the traffic translates into revenues at the CPs, which can be obtained from statistical analysis of per-click data usage. The utility of each CP depends on the vector (λ_1^*, …, λ_N^*) of equilibrium rates and is defined by (with abuse of notation):

U_i(γ_i, γ_-i) = (β - (1-γ_i)ρc)λ_i^*.

Clearly, CP_i will sponsor its content if and only if (1-γ_i) < β/(cρ).

§.§ Which CPs Prefer Differential Pricing?

In the differential pricing regime, CP_1 can attract more user traffic than CP_2 by offering a higher subsidy. However, a higher subsidy may also increase the amount CP_1 has to pay to the ISP. Then, a natural question is the following: can CP_1 set its subsidy factor such that it gets higher utility than CP_2? The following proposition gives the condition for this to happen.
Let β̄ = ρc/β ≤ 1. For given γ_1, γ_2, we have that U_1(γ_1, γ_2) ≥ U_2(γ_1, γ_2) if and only if

λ_1^* ≥ λ / ( (1-(1-γ_1)β̄)/(1-(1-γ_2)β̄) + 1 ).

The technical condition β̄ < 1 allows the CPs to consider all subsidy factors in the interval [0, 1]. If this condition is not met, the CPs have to keep their subsidy factors within a restricted range to get positive utility. Note that λ_1^* and the ratio on the right-hand side of the above condition are both decreasing in γ_1. The rate of decrease of λ_1^* is lower when γ_1 is small, and the above condition holds. Indeed, a numerical example in Fig. <ref> demonstrates that the condition holds. In Fig. <ref> we also show a case where the condition does not hold. In particular, when the difference between the service capacities of the CPs is relatively small, condition (<ref>) holds for some γ_1, and, in spite of its inferior service capacity, CP_1 can earn higher utility. However, when the difference between the service capacities is large, CP_1 cannot get higher utility than CP_2 through a higher subsidy. These examples demonstrate that the differential pricing regime can de-incentivize the CPs from increasing their capacity, resulting in degraded QoS for the end users. Specifically, a CP with lower service capacity can prefer to offer a higher subsidy rather than invest in its infrastructure, and still end up earning higher revenue than a CP that has invested more in its infrastructure but does not offer a higher subsidy. In a differential pricing regime, a CP with lower QoS can earn more revenue than a CP with higher QoS by offering an appropriate subsidy on the access price.

§.§ Non-Cooperative Game between CPs

The CPs aim to maximize their utility by appropriately setting their subsidy factors γ_i. We study the interaction as a non-cooperative game between the CPs where the action of CP_i is to set a subsidy factor γ_i that maximizes its utility. The CPs know that the end users select a CP to serve their request based on the quality and the (subsidized) access price they offer. The objective of the i-th CP is given by the optimization problem:

max_{γ ∈ [0, 1]} U_i(γ, γ_-i).

We say that (γ_1^*, γ_2^*) is a Nash equilibrium for the non-cooperative game between the CPs if U_i(γ_i, γ_-i^*) ≤ U_i(γ_i^*, γ_-i^*) for all i and all γ_i ∈ [0, 1]. For a given γ_-i, i = 1, 2, let γ_i(γ_-i) denote the best response of CP_i, i.e., γ_i(γ_-i) ∈ arg max_{γ ∈ [0, 1]} U_i(γ, γ_-i). The utility functions are non-linear, as illustrated in Figs. <ref>. In Fig. <ref>, we depict the utility functions of CP_1 and CP_2 as a function of γ_1 for a fixed value of γ_2. Note that CP_1 has multiple optimum points. Similar behavior is observed in Fig. <ref>. Further, notice that the utilities of CP_1 and CP_2 have steep slopes as γ_1 → γ_2 and γ_2 → γ_1, respectively. This property of the CP utilities gives rise to discontinuities in their best-response behavior. The discontinuity in the best-response behavior leads to non-existence of a Nash equilibrium, as illustrated in Figs. <ref>-<ref>. As seen there, the best-response functions have a point of discontinuity and do not intersect. Hence an equilibrium does not exist in these examples. In the following, we restrict the actions of the CPs to γ_i ∈ {0, 1}, i.e., either sponsor or not sponsor, and study the resulting non-cooperative game between the CPs.

§ EXOGENOUS ARRIVAL OF DEMAND

In the previous setup, the total traffic rate is constant and the total revenue for the ISP is λc irrespective of the subsidies set by the CPs. This makes the ISP indifferent to the differential pricing and zero-rating strategies proposed by the CPs.
However, higher subsidy by the CPs may attract more traffic and in turn result in higher demand and then potential revenue for the ISP. To account for this, we next consider a model with exogenous generation of traffic in addition to the usual traffic of rate λ. The exogenous traffic corresponds to the increaseddemand that the CPs attract, in addition to their usual traffic, by offering subsidies which increase the total traffic generated by end users. Naturally, higher is the subsidy offered by an CP higher is the exogenous traffic it attracts.We model the exogenous traffic generated for CP_i offering subsidy γ_i to be linear in γ_iand given by λ_0 (1-γ_i), where λ_0 ≥ 0 is a fixed constant which represents the global maximum demand. The total traffic λ̃_i for CP_i, i=1,2, when it attracts the usual traffic λ_i and offers subsidy γ_i, is then determined by: λ̃_i=λ_i + λ_0 (1-γ_i), where λ_1+λ_2 =λ, and the total traffic in the networks is given by λ̃:=λ̃(γ_1, γ_2)=λ + ∑_i=1^2 λ_0 (1-γ_i). We continue to make the assumptions A1-A3 as earlier. Note that with exogenous arrivals it may happen that ∑_i=1^2 m_i < λ̃ if the subsidy offered by the CPs is too high hence restraining the CPs from taking such actions. The usual traffic λ gets split between the CPs competitively by the end users based on the QoS experienced and the access price, whereas the exogenous traffic is fixed and depends only on the subsidy factory. At equilibrium, the total traffic at CP_i is therefore given by λ̃_i^*=λ_i^* + λ_0(1-γ_i) where (λ_1^*, λ_2^*) are equilibrium flows set according to Wardrop criteria as earlier.With exogenous arrivals, mean delay defined in Section <ref> can be redefined asD(c,γ_1,γ_2)=∑_i=1^Nλ̃_i^*/λ̃1/m_i- λ̃_i^*.The analysis of mean delay and the results derived in the previous sections remain thesame after replacing the capacities m_1 and m_2 by m_1 -λ_0 (1-γ_1) and m_2-λ_0(1-γ_2), respectively, and ∑_i=1^2 m_i ≤λ̃. The utilities of CPs with exogenous arrivals are redefined asU_i(_i, _-i)=(β- (1-_i)ρ c)λ̃_i^* ∀i=1,2.The condition for CP_1 to earn higher revenue than that of CP_2 under exogenous arrivals remains the same as in Proposition<ref> after replacing λ by λ̃ and λ_1^* by λ̃_1^*. Hence, our earlier observation that the CP with lower QoS may earn higher revenue that of the CP offering higher QoS holds under exogenous arrivals. Similar to previous model, the best response for CPs with exogenous traffic also have discontinuities and leads to non-existence of pure strategy Nash equilibrium.Elastic Demand: One could also model the total demand generated by the users as elastic in the subsidy factor and model it as Λ+Λ_0(1-γ_1)+λ_0(1-γ_1), where Λ and Λ_0 are constants, which gets competitively split across the CPs. The difference between this model compared to exogenous arrivals is that here subsidy of each CP's effects the total demand not just its demand. The analysis with the elastic model remains essentially same as that in Sections <ref> and <ref>. Hence we skip the details. In the subsequent models we only consider the model with exogenous arrivals.§ NON-COOPERATIVE GAME WITH ACTIONS {S, N} In this section we focus on the study of non cooperative game where CPs only play either Sponsor (S) or Not-sponsor (N) actions, i.e., _i ∈{0,1} for all i=1,2 (no partial sponsorship is allowed). The four possible action profiles are denoted as (S,S), (N,N), (S,N), and (N,S). Here, profile (S,N) indicates that action of CP_1 is S (sponsor) and that of CP_2 is N (non-sponsor). 
For ease of notation, we add subscripts 00, 11, 01, 10 to the minimum equilibrium utility (α) of these action profiles, respectively. The following theorem gives equilibrium rates associated with these action profiles. In the following we assume that m > 2 λ_0, i.e., the excess capacity is enough to support all of the exogenous traffic and m_i > λ_0 ∀ i, i.e., each CP has enough capacity to handle all of the exogenous traffic it can attract. The equilibrium rates are described as follows: * For action profile (S,S):λ̃_̃ĩ^̃*̃=m_i-λ_0-1/α_00∀ iwhere α_00=2/m-2λ_0. * For action profile (N,N): λ̃_̃ĩ^̃*̃=m_i-1/α_11-c∀ i where α_11=c+2/m. * For action profile (S,N): λ̃_̃1̃^̃*̃=m_1-λ_0-1/α_01 and λ̃_̃2̃^̃*̃=m_2-1/α_01-c where α_01=c/2+1/(m-λ_0)+√(c^2/4+ 1/(m- λ_0)^2). * For action profile (N,S): λ̃_̃ĩ^̃*̃=m_1-1/α_10-c and λ̃_̃2̃^̃*̃=m_2-λ_0-1/α_10 where α_10=α_01. The minimum equilibrium cost under different action profiles satisfy the following properties: * α_00≥α_11-c, * α_01≥α_00≥α_10-c, * c+ 1/(m-λ_0) ≤α_01≤ c+ 2/(m-λ_0). Considering that each CP play only pure actions {S,N}, the non-cooperative setting is a described by a 2× 2 matrix game between the CPs. The utilities for both CPs over actions {S,N} are given in Table 1. Though it is well known that a mixed Nash equilibria always exists in a finite matrix game but pure Nash equilibria may not exists[Though γ_i ∈[0,1] can be thought of as probability distribution over γ_i∈{0,1}, it is to be noted that our earlier observation that non existence of Nash equilibrium does not contradict the fact mixed Nash equilibrium always exits. The CP utilities are highly nonlinear in γ_is and expected utility over any mixed equilibria dervied from utilities associated with pure actions does not have the same form as in (<ref>). ]. Since the decision of CPs has to be deterministic, we are interested in the study of Pure strategy Nash Equilibria (PNE) and its properties. The following theorem characterizes all possible PNE. Let ρ, c, β be given. Then, * (S,S) is a PNE if and only if ρ/β≤ 1/(α_10-c)+λ_0-1/α_00/c(m_2+λ_0-1/α_00):=A * (N,N) is a PNE if and only if ρ/β≥1/(α_11-c)+λ_0-1/α_01/c(m_1+λ_0-1/α_01):=B * (S,N) is a PNE if and only if 1/(α_10-c)+λ_0-1/α_00/c(m_2+λ_0-1/α_00)≤ρ/β ρ/β≤1/(α_11-c)+λ_0-1/α_01/c(m_1+λ_0-1/α_01) * (N,S) cannot be a PNE.Theorem <ref> characterizes all PNE and Figure <ref> depicts all possible PNE in different range over ρ/β .§ REVENUE GAIN FOR ISP In this section we study the revenue gain for ISP induced by applying differential pricing mechanism. The exogenous traffic generated by the end users increases with higher subsidy proposed by the CPs, which induces higher revenue for the ISP. To measure this revenue gain, we consider the metric called 'Revenue Gain Factor' (RGF) defined as the ratio of ISP revenue under the differential pricing scheme and that under the neutral regime where none of the CPs subsidize the access price, i.e., γ_i=1 for any CP i, then: RGF=(λ+λ_0(1-γ_1)+λ_0(1-γ_2))ρ c/λρ c =1+λ_0(1-γ_1)+λ_0(1-γ_2)/λ. The following proposition describes explicit value of RGF depending on the action profile of CPs. RGF under different action profiles are as follows: * For action profile (S,S): RGF=1+2λ_0/λ * For action profile (N,N): RGF=1, i.e., there is no revenue gain for the ISP * For action profile (S,N): RGF =1+λ_0/λ * For action profile (N,S): RGF=1+λ_0/λ. Obviously, the RGF is the highest when both CPs sponsor. However, the action profile (S,S) may not be always a pure Nash equilibrium as proved in previous section. 
In the following we illustrate the behavior of the RGF metric at equilibrium with respect to parameters λ, λ_0 and c. We note that the value of RGF at equilibrium depends on c through equilibrium values of γ_i^*, i=1,2.From Fig. <ref> it is observed that as λ increases, RGF initially decreases steeply and then at a slower rate. This change is due to shift of PNE point from (S,S) to (S,N) when increasing λ. This observation is natural – if the usual traffic is already high, then the additional traffic from subsidy may not improve the ISP revenue much. Thus the ISP prefers a differential pricing regime only in the case where the intrinsic traffic is not significant compared to the exogenous traffic.Fig. <ref> describes RGF as a function of λ_0 for a fixed λ. Initially the gain increases at smaller rate with increase in exogenous traffic (λ_0) and later the rate increases. This shift in behavior is again due to shifting PNE points from (S,N) to (S,S) as CPs are likely to go for full sponsorship if more exogenous traffic is generated. Thus, higher the exogenous traffic, more is the the ISP's preference for the differential pricing.Fig. <ref> shows that RGF is unaffected by the change in cost keeping PNE point fixed. However, with the increase in cost, PNE shifts from (S,S) to (S,N) and thereby causing the drop in RGF. This implies fixing higher access price may not be beneficial for ISP under exogenous arrival of demand.§ MULTIPLE ISPS In this section, we extend our analysis for the case of oligopoly market with several ISPs. We consider two ISPs but the analysis can be extended to more than two. Each ISP connects both the CPs to the end users and set the access price independently. Knowing the access price, the CPs set the subsidy factor for each ISP independently. Let c_i, i=1,2 denotes the access price set by ISP_iand γ_ij denote the subsidy factor set by CP_j for traffic generated through ISP_i. We denote the traffic rate that flows to CP_j from ISP_i as λ_ij. The exogenous traffic rate from ISP_i to CP_j is λ_0(1-γ_ij). Without loss of generality we assume that c_1<c_2. The interaction framework is described in figure <ref>.The cost of an end user that connects viaISP_i to CP_j while it is receiving traffic at rate x is given by:C_ij(x):=C_ij(x,m_j,c_i,γ_ij)=1/m_j -x+ γ_ijc_i m_j > x,∞ Let λ_ij^* denote the flows at equilibrium for all i,j=1,2. For a given γ_ji and c_i, ∀ i,j=1,2, we have * λ^*_1j>0 and λ_2j^*=0 if and only if γ_1jc_1 < γ_2jc_2, * λ_1j^*=0 and λ_2j^*>0 if and only if γ_1jc_1 > γ_2jc_2, * λ_1j^*=λ_2j^* if and only if γ_1jc_1 = γ_2jc_2.As in Section <ref>, we assume that CPs either Sponsor or Not Sponsor traffic of the ISPs, i.e. γ_ij∈{0,1}, ∀ i,j. This decision is based on the access price proposed by ISPs. Further, CPs are assumed to sponsor only traffic on one ISP only, not both. Indeed, a CP cannot contract with two different ISPs. 
The resulting action profile for CPs is then {SN, NS, NN}, where action SN denote that traffic form ISP_1 is sponsored while that from ISP_2 is not sponsored, NS denotes that the other way around and NN denotes that traffic from none of the ISPs is sponsored.The CP_j's utility is given by:U_j(γ_ij)=∑_i(β_j-ρ(1-γ_ij)c_i)( λ_ij^*+λ_0(1-γ_ij)).One can compute the equilibrium traffic λ_ij^* each CP j gets from each ISP i using Wardrop conditions as earlier and the resulting utilities are summarized in Table <ref>.Utility of ISP i is the total revenue earned from traffic that goes through his network, i.e.,R_i(c_i)=∑_jλ_ij^*c_i.The following theorem describes the pure Nash equilibrium of the non-cooperative game between CPs, depending on main parameters of the model which are the access prices c_i proposed by ISP i. Let ρ, c and β_j=β ∀ j given. Assume c_1 < c_2, then only (SN,SN), (NN,NN)and (SN,NN) can be PNE. Specifically, * (SN, SN)is the PNE if and only ifρ/β≤1/(α-c_1)-m/2+λ_0/c_1(m_2-m/2+λ_0),* (NN, NN) is the PNE if and only if ρ/β≥m/2-1/α+λ_0/c_1(m_1-1/α+λ_0),* (SN, NN) is the PNE if and only if 1/(α-c_1)-m/2+λ_0/c_1(m_2-m/2+λ_0)≤ρ/β≤m/2-1/α+λ_0/c_1(m_1-1/α+λ_0). The previous theorem suggests that CPs will not subsidize the traffic from ISP with higher access price. Further, coupled with Lemma <ref>, we note that traffic flow through ISP with higher cost is null at equilibrium. Let RGF_i, i=1,2 denotes the revenue gain factor for ISP_i defined as follows:RGF_i = (∑_jλ_ij+λ_0(1-γ_ij))ρ c_i/(∑_jλ_ij) ρ c_i, = 1+∑_jλ_0(1-γ_ij)/∑_jλ_ij. Considering exogenous arrivals, the usual traffic λ gets split between the CPs competitively based on the QoS experienced and the access price whereas the exogenous traffic is fixed and depends only on the subsidy factory. The total traffic at CP_j at equilibrium is given by λ̃^*_j=∑_i λ_ij^*+λ_0 ∑_i(1-γ_ij), where λ_ij^* are set according to the Wardrop equilibrium. For all possible action profiles, the equilibrium rates and the corresponding RGF are given as follows: * For (SN,SN): λ_11^*=(m_1-m/2+λ_0), λ_21^*=0, λ_12^*=(m_2-m/2+λ_0) and λ_22^*=0. RGF_1=1+2λ_0/λRGF_2=1. * For (SN,NS):λ_11^*=(m_1-m/2+λ_0), λ_21^*=0, λ_12^*=0 and λ_22^*=(m_2-m/2+λ_0). RGF_1=1+λ_0/λ_11^*RGF_2=1+λ_0/λ_22^*. * For (SN,NN): λ_11^*=m_1-1/α+λ_0, λ_21^*=0, λ_12^*=m_2-1/(α-c_1 ) and λ_22^*=0, where α=c_1/2+1/m+√(c_1^2/4+ 1/m^2) and can be bounded as c_1+ 1/m≤α≤ c_1+ 2/m. RGF_1=1+λ_0/λ RGF_2=1.* For (NS,SN): λ_11^*=0, λ_21^*=(m_1-m/2+λ_0), λ_12^*=(m_2-m/2+λ_0) and λ_22^*=0.RGF_1=1+λ_0/λ_12^* RGF_2=1+λ_0/λ_22^*.* For (NS,NS): λ_11^*=0, λ_21^*=(m_1-m/2+λ_0), λ_12^*=0 and λ_22^*=(m_2-m/2+λ_0). RGF_1=1RGF_2=1+2λ_0/λ.* For (NS,NN): λ_11^*=0, λ_21^*=m_1-1/α+λ_0, λ_12^*=m_2-1/(α-c_1) and λ_22^*=0.RGF_1=1RGF_2=1+λ_0/λ_21^*.* For (NN,SN): λ_11^*=m_1-1/(α-c_1), λ_21^*=0, λ_12^*=m_2-1/α+λ_0 and λ_22^*=0. RGF_1=1+λ_0/λ RGF_2=1. * For (NN,NS): λ_11^*=m_1-1/(α-c_1), λ_21^*=0, λ_12^*=0 and λ_22^*=m_2-1/α+λ_0. RGF_1=1RGF_2=1+λ_0/λ_22^*. * For (NN,NN): λ_11^*=(m_1-m/2), λ_21^*=0, λ_12^*=(m_2-m/2) and λ_22^*=0. RGF_1=1RGF_2=1. We illustrate the behavior of RGF for each ISP at different PNE in Figure <ref>. Since at PNE, none of the CPs make sponsorship with ISP_2 (ISP with higher access price) and the RGF of ISP_2 remains the same. Therefore, we focus on the revenue gain of ISP_1. It can be seen from Figures <ref>-<ref> that the behavior of the RGF of ISP_1 is exactly same as that of single ISP case. And there is no impact of c_2 on RGF for both ISPs which is because c_2>c_1. 
§ CONCLUSIONS AND POLICY RECOMMENDATIONS In this work we have analyzed interaction between ISPs, CPs and end users as a complex interacting system where sponsoring of the content (differential pricing) is allowed in the Internet. Our analysis suggests that a CP with poor QoS can attract more end user traffic demand by offering heavy subsidy and earn more revenue than a CP with better QoS if the later do not offer higher subsidy. This implies that differential pricing schemes can be unfair and decentivize the CPsto improve their QoS through systematic long term investments, but encourage them to focus more on the running costs involving subsidies. Zero-rating schemes thus suit CPs with poor QoS that like to improve their revenues without updating QoS at their facilities.Our analysis also suggests that overall QoS experienced by end users can worsen in the differential pricing regime – if a CP with poor QoS offers heavy subsidy on access price, it increases congestion at the CPlevel (and reduces at the others) and effectively increases the overall mean delay experienced by the end users in the network. However, if a CP with better QoS offers more subsidy than that of the CPs with lower QoS, the overall delay experienced by the users decreases, making the differential pricing regime favorable to end users.As our analysis suggests, differential schemes result in unfair distribution of revenues among the CPs and degrade QoS experience for the end users if CPs with poor QoS offer higher subsidy on access price than the CPs with better QoS. Thus, as a policy recommendation, we suggest that the CPs with poor QoS should not be allowed to offer higher subsidy on the access price than that offered by the CPs with higher QoS.Alternatively, CPs should be only allowed to subsidize the access price in proportion to their QoS guarantees.Our current model looks at single type of content. As a future work, we will look into a more general model with multiple types of populations corresponding to different content types with different QoS requirement at CPs. § APPENDICES§ PROOF OF LEMMA <REF> First note that by Assumption (2), λ_i^* >0 for i=1,2. From equation (4), there exists α>0 such that 1/m_1-λ_1^*+γ_1c=1/m_2-λ_2^*+γ_2 c=α. Simplifying the above we get: λ_i^* = m_i - 1/α- γ_ic i=1,2. In order to compute the value of α, we use the relation λ_1^*+λ_2^*=λ which yields m_1 +m_2-λ= 1/α- γ_1c+ 1/α- γ_2c. Simplifying the above, we get a quadratic equation in α. It is easy to argue that one of the roots is not feasible and the other root gives the required value of α. § PROOF OF LEMMA <REF> From (<ref>) for all i=1,2 we have λ_i^*=m_i-1/α-γ_i c1/m_i-λ_i^*=α-cγ_i. Substituting in (<ref>) and simplifying we get (<ref>). When γ_i=γ for all i, the minimum cost at equilibrium (from (<ref>)) is given as α:=α(γ, ⋯, γ)=cγ + 2/m. Finally, Substituting the above in (<ref>) we get (<ref>).§ PROOF OF LEMMA <REF> Substituting value of α from (<ref>) in (<ref>) and simplifying we have D(c,γ_1,γ_2) = c(_1-_2)(m_2-m_1)/2λ +m_1+m_2/λm- 2/ +(m_1+m_2)√((cm(_1-_2))^2+4)/2m. First consider the case _1 ≥_2. It is clear that D(c,_1,_2) is monotonically increasing in c. Now consider the case _1 < _2. Differentiating D(c,_1,_2) with respect to c and simplifying we get∂ D(c,_1,_2)/∂ c = |_1-_2|(m_1+m_1)/2λ× { -m_2-m_1/m_1+m_2+cm|_1-_2|/√((cm|_1-_2|^2)+4)}.It is now easy to verify that the above derivative is positive for allc≥( √(m_2/m_1)-√(m_1/m_2))/(m|_1-_2|)and it is negative otherwise. 
Hence D(c,_1,_2) is convex in c with a unique minimum. § PROOF OF THEOREM <REF>From Lemma (<ref>) recall thatD(c,γ_1,γ_2) = c(_1-_2)(m_2-m_1)/2λ +m_1+m_2/λm-2/+ (m_1+m_2)√((cm(_1-_2))^2+4)/2m.andD(c,1,1)= 2/m. We haveD(c,γ_1,γ_2) ≥ D(c,1,1)c(_1-_2)(m_2-m_1)/2λ+m_1+m_2/λm+ (m_1+m_2)√((cm(_1-_2))^2+4)/2m≥2/ + 2/m (m_1+m_2)√((cm(_1-_2))^2+4) ≥2(m_1+m_2)+ cm(_2-_1)(m_2-m_1)(cm(_1-_2))^2≥ 4cm(_2-_1)(m_2-m_1/m_2+m_1) +(cm(_2-_1))^2(m_2-m_1/m_2+m_1)^2 (cm(_2-_1))^2(1-(m_2-m_1/m_2+m_1)^2 ) ≥4cm(_2-_1)(m_2-m_1/m_2+m_1).When _1 > _2, the last inequality holds for all c. Hence the first claim is proved. Now consider the case _2> _1. Dividing both sides of the last inequality by _2-_1 >0 and continuing the chain of if and only if conditions, we haveD(c,γ_1,γ_2) ≥ D(c,1,1) cm(_2-_1)(1-(m_2-m_1/m_2+m_1)^2 )≥ 4(m_2-m_1/m_2+m_1)c ≥ (m_2/m_1-m_1/m_2 ) 1/m(_2-_1).§ PROOF OF PROPOSITION <REF> Consider the case _1<_2. From (<ref>) we have - c_1=c(_2-_1)/2+2/m+√((cm(_1-_2))^2+4)/2m. Differentiating both with respect to parameter c we get: ∂( - c_1)/∂ c=(_2-_1)/2+cm(_1-_2)^2/2√((cm(_1-_2))^2+4). It is clear that ∂( - c_1)/∂ c> 0 for all c. Hence α -_1c is monotonically increasing in c. The claim follows from (<ref>) and noting that λ_2^*=λ-λ_1^*. Now consider the case _2<_1. From (<ref>) we have - c_2=c(_1-_2)/2+2/m+√((cm(_1-_2))^2+4)/2m. Following similar steps above we observe that ∂( - c_2)/∂ c> 0 for all c. Hence α -_2c is monotonically increasing in c. The claim follows from (<ref>) and noting that λ_1^*=λ-λ_2^*.§ PROOF OF PROPOSITION <REF> We have U_1(_1, _2)/U_2(_2,_1)=(1-(1-_1)β)λ_1^*/(1-(1-_2)β)λ_2^* U_1(_1, _2)≥ U_2(_2,_1) λ_2^*/λ_1^*≤(1-(1-_1)β)/(1-(1-_2)β) λ_2^*+λ_1^*/λ_1^*≤(1-(1-_1)β)/(1-(1-_2)β)+1 λ_1^* ≥λ1/1-(1-_1)β/1-(1-_2)β+1. § PROOF OF THEOREM <REF>(S,S) is a PNE: From Table <ref>, (S,S) is PNF iff the following two conditions hold (β-ρ c)(m_1 - 1/α_00)≥β (m_1 - 1/(α_10-c) (β-ρ c)(m_2 - 1/α_00)≥β (m_2-2/(α_01-c)) Simplifying these two conditions and using our conventions that m_1 < m_2, we get (<ref>). (N,N) is a PNE: From Table <ref>, (N,N) is PNE iff the following conditions holds β(m_1 - (1/α_11-c))≥ (β-ρ c) (m_1-1/α_01) β(m_2 - 1/(α_11-c))≥ (β-ρ c) (m_2 - 1/α_10).Simplifying and using our conventions that m_1< m_2, we get (<ref>). (S,N) is a PNE: From Table <ref>, (S,N) is PNF iff the following two conditions hold (β-ρ c)(m_1 - 1/α_01)≥β (m_1 - 1/(α_11-c))(β-ρ c)(m_2 - 1/(α_01-c))≥ (β -ρ c) (m_2-2/α_00).Simplifying these two conditions we get (<ref>). (N,S) is a PNE: From Table <ref>, (S,N) is PNF iff the following two conditions hold β (m_1 - 1/(α_10-c))≥ (β-ρ c)(m_1 - 1/α_00) (β-ρ c)(m_2 - 1/α_10)≥β(m_2-1/(α_11-c)).i.e., C:= 1/(α_10-c)+λ_0-1/α_00/c(m_1+λ_0-1/α_00)≤ρ/β ρ/β≤1/(α_11-c)+λ_0-1/α_01/c(m_2+λ_0-1/α_01)=:D.We will next argue that C≤ D leads to a contradiction.We have C≤ D1/(α_10-c)-1/α_00/c(m_1-1/α_00)≤1/(α_11-c)-1/α_01/c(m_2-1/α_01) (m_2-1/α_11-c)(m_1-1/α_00) ≤(m_1-1/α_10-c) (m_2-1/α_10)(m_2-m/2)(m_1-m/2)≤(m_1-1/α_10-c) (m_2-1/α_10) ()m_1(1/α_10-m/2)+ m_2(1/α_10-c-m/2) ≤1/α_10(α_10-c)-m^2/4 () m_1(1/α_10-m/2)+ m_2(1/α_10-c-m/2) ≤m^2/2+√((cm)^2+4)-m^2/4where the last inequality follows by substituting value of α_10 on RHS and simplifying. It is clear that m^2/2+√((cm)^2+4)-m^2/4 <0. We will next show that m_1(1/α_10-m/2)+ m_2(1/α_10-c-m/2) is nonnegative.From Corollary <ref> we have α_10-c≤α_00=2/m. 
Hence we get m_1(1/α_10-m/2)+ m_2(1/α_10-c-m/2)≥m_1(1/α_10-m/2)+ m_1(1/α_10-c-m/2)=m_1 2α_10-c/α_10(α_10-c)-m_1m= m_1m-m_1m=0,where the last equality follows after substituting the value of α_10 and simplifying.§ PROOF OF LEMMA 5 Assume λ_1j^*>0 and λ_2j^*=0, then by Wardrop condition we have 1/m_j-λ_j^*+γ_1jc_1 <1/m_j-λ_j^*+γ_2jc_2whereλ_j^*= λ_1j^*+λ_2j^*. Hence γ_1jc_1 < γ_2jc_2.The other direction follows by noting that λ_j^* ≠ 0 ∀ j and applying the Wardrop conditions.Proof of the other items is similar.§ PROOF OF THEOREM <REF>(SN,SN) is PNE iff the following four conditions hold (β-ρ c_1)(m_1-m/2+λ_0) ≥(β-ρ c_2)(m_1-m/2+λ_0)(β-ρ c_1)(m_1-m/2+λ_0) ≥β(m_1-1/(α-c_1)(β-ρ c_1)(m_2-m/2+λ_0)≥ (β-ρ c_2)(m_2-m/2+λ_0)(β-ρ c_1)(m_2-m/2+λ_0)≥β(m_2-1/(α-c_1)Solving first two inequalities, we getc_2 ≥ c_1; ρ/β≤1/(α-c_1)-m/2+λ_0/c_1(m_1-m/2+λ_0) Solving second two inequalities, we getc_2 ≥ c_1 ; ρ/β≤1/(α-c_1)-m/2+λ_0/c_1(m_2-m/2+λ_0) Using our conventions that m_1 ≤ m_2 , we get (<ref>). (NN,NN) is PNE iff the following four conditions holds β(m_1-m/2))≥(β-ρ c_1)(m_1-1/α+λ_0) β(m_1-m/2)≥ (β-ρ c_2)(m_1-1/α+λ_0) β(m_2-m/2) ≥ (β-ρ c_2)(m_2-1/α+λ_0) β(m_2-m/2) ≥ (β-ρ c_1)(m_2-1/α+λ_0)Solving first two inequalities, we getρ/β≥m/2-1/α+λ_0/c_1(m_1-1/α+λ_0) ρ/β≥m/2-1/α+λ_0/c_2(m_1-1/α+λ_0) Solving second two inequalities, we getρ/β≥m/2-1/α+λ_0/c_2(m_2-1/α+λ_0) ρ/β≥m/2-1/α+λ_0/c_1(m_2-1/α+λ_0) Using our conventions that m_1 ≤ m_2 ; c_1 ≤ c_2, we get (<ref>). (SN,NN) is PNE iff following conditions hold (β-ρ c_1)(m_1-1/α+λ_0) ≥ (β-ρ c_2)(m_1-1/α+λ_0) (β-ρ c_1)(m_1-1/α+λ_0) ≥β(m_2-m/2)β(m_2-1/(α-c_1)) ≥ (β-ρ c_1)(m_2-m/2+λ_0) β(m_2-1/(α-c_1)) ≥ (β-ρ c_2)(m_2-m/2+λ_0)Solving first two inequalities, we getc_2 ≥ c_1 ; ρ/β≤1/α-m/2+λ_0/c_1(m_1-m/2+λ_0) Solving second two inequalities, we getρ/β≥1/(α-c_1)-m/2+λ_0/c_1(m_2-m/2+λ_0) ρ/β≥1/(α-c_1)-m/2+λ_0/c_2(m_2-m/2+λ_0) Using our conventions that m_1 ≤ m_2 and c_1 ≤ c_2, we get (4). (NN,SN) is PNE iff the following four conditions holdsβ(m_1-1/(α-c_1)) ≥ (β-ρ c_2)(m_1-m/2+λ_0) β(m_1-1/(α-c_1)) ≥ (β-ρ c_1)(m_1-m/2+λ_0)(β-ρ c_1)(m_2-1/α+λ_0) ≥ (β-ρ c_2)(m_2-1/α)(β-ρ c_1)(m_2-1/α+λ_0) ≥β(m_2-m/2)Solving first two inequalities, we getρ/β≥1/(α-c_1)-m/2+λ_0/c_2(m_1-m/2+λ_0) ρ/β≥1/(α-c_1)-m/2+λ_0/c_1(m_1-m/2+λ_0):=A1Solving second two inequalities, we getc_1≤ c_2; ρ/β≤m/2-1/α+λ_0/c_1(m_2-1/α+λ_0):=B1Using our conventions that m_1 ≤ m_2 and c_1 ≤ c_2, we get1/(α-c_1)-m/2+λ_0/c_1(m_1-m/2+λ_0)≤ρ/β≤m/2-1/α+λ_0/c_1(m_2-1/α+λ_0)which is not true because A1>B1 (the proof is exactly on same lines to that of C>D). Other strategies cannot be PNE. This is because the conditions to be PNE gives c_1≥ c_2 which is contradiction to our assumption. 100 Sigcom10_InternetTraffic C. Labovitz, D. McPherson, S. Iekel-Johnson, J. Oberheide, and F. Jahanian, “Internet inter-domain traffic,” in Proc. ACM SIGCOMM, New Delhi, India, 2010. KearneyPaper M. Page, L. Rossi, and C. Rand, “A viable future model for the Internet,” A. T. Kearney Paper, 2010. MishraViewPoint Vishal Mishra, “Routing money, not packets," ACMMagazine, Communications, Vol. 58,Issue 6, June 2015.Pages 24-27 Wang17 X. Wang, R. Ma, and Y. Xu, "The Role of Data Cap in Optimal Two-Part Network Pricing", in IEEE/ACM Transactions on Networking, vol. 25, no. 6, 2017. CS2013_SDP S. Sen, C. Joe-wong, S. Ha, and M. Chiang, “A Survey of Smart Data Pricing: Past Proposals, Current Plans, and Future Trends," ACM Computing Surveys, 2013. Ma16 R. T. B. Ma. 
"Subsidization Competition: Vitalizing the Neutral Internet", IEEE/ACM Transactions on Networking, vol. 24, no. 4, 2016. EUConsultation http://ec.europa.eu/competition/publications/reports/kd0217687enn.pdf. CanadaConsultation http://www.crtc.gc.ca/eng/archive/2016/2016-192.pdf. CRCOM https://www.crcom.gov.co/pp/Zero_rating_eng.pdf. TRAI http://trai.gov.in/sites/default/files/CP-Differential-Pricing-09122015.pdf. Altman11 E. Altman, J. Rojas, S. Wong, M. K. Hanawal, and Y. Xu, “Net neutrality and Quality of S ervice," in Proc. of the Game Theory for Networks, GameNets (invited paper), Sanghai, Chaina, 2011. Infocom13_SponsoringContent M. Andrews, U. Ozen, M. I. Reiman, and Q. Wang, “Economic models of sponsored content in wireless networks with uncertain demand,"in Proc. of IEEE INFOCOM Workshop on Smart Data Pricing, 2013. Infocom14_SponsoringContent L. Zhang, D. Wang, “Sponsoring Content: Motivation and Pitfalls for Content Service Providers," Proc. of IEEE INFOCOM Workshop on Smart Data Pricing, 2014. La1 R. ElDelgawy and R. J. La, “Interaction between a content provider and a service provider and its efficiency," in Proc. of IEEE INFOCOM, 2015. La2 R. ElDelgawy and R. J. La, “A case study of internet fast lane," in Proc. of Annual Conference on Information Sciences and Systems (CISS), 2015. WiOpt15_SponsoringContent M. H. Lotfi, K. Sundaresan, S. Sarkar, and M. A. Khojastepour, and S. Rangarajan “The economics of quality sponsored data in non-neutral networks," in Proc. of International Symposium on Modeling and Optimization in Mobile, Ad Hoc and Wireless Networks (WiOpt) , 2015. CoNEXT_Ma R. T. B. Ma, “Subsidization competition: Vitalizing the neutral internet," in Proc. of ACM CoNEXT, 2014. Infocom15_SponsoringData C. Joe-Wong, S. Ha, and M. Chiang, “Sponsoring mobile data: An economic analysis of the impact on users and content providers,"in Proc. of IEEE INFOCOM, 2015. Sigmetrics15_SponsoredData L. Zhang, W. Wu, and D. Wang, “Sponsored data plan: A two-class service model in wireless data networks,"in Proceedings of ACM SIGMETRICS,2015. Zou17 M. Zou, R. T. B. Ma, X. Wang, and Y. Xu. "On Optimal Service Differentiation in Congested Network Markets". In Proceedings of the IEEE International Conference on Computer Communications (INFOCOM), 2017. Jullien18 B. Jullien, and W. Sand-Zantman, "Internet regulation, two-sided pricing, and sponsored data", in International Journal of Industrial Organization, vol. 58, pp. 31-52, 2018. calzada18 J. Calzada, and M. Tselekounis, "Net Neutrality in a hyperlinked Internet economy", in International Journal of Industrial Organization, vol. 59, pp. 190-221, 2018. Kleinrock WiOpt18 Manjesh K. Hanawal, Fehmina Malik and Yezekael Hayel, “Differential Pricing of Traffic in the Internet" in Proc. of International Symposium on Modeling and Optimization in Mobile, Ad Hoc and Wireless Networks (WiOpt), 2018. L. Kleinrock, “Queuing Systems," Vol-1 Theory, John Wiley & Sons, 1975 FrankWolf M. Frank, and P. Wolfe, “An Algorithm for Quadratic Programming," Naval Research Logistics Quarterly, 3(1-2), 1956 . Wardrop J. G. Wardrop, “Some Theoretical Aspects of Road traffic Research,” in Proc. of the Institution of Civil Engineers, 1 (3), 1952. PASTA R. W. Wolff, “Poisson Arrivals See Time Averages," Operations Research, 30 (2), 1982. Manjesh K. Hanawal received the M.S. degree in ECE from the Indian Institute of Science, Bangalore, India, in 2009, and the Ph.D. degree from INRIA, Sophia Antipolis, France, and the University of Avignon, Avignon, France, in 2013. 
After spending two years as a postdoctoral associate at Boston University, he is now an Assistant Professor in Industrial Engineering and Operations Research at the Indian Institute of Technology Bombay, Mumbai, India. His research interests include communication networks, machine learning and network economics.Fehmina Malikis currently pursuing Ph.D. at IEOR, IIT Bombay, Mumbai, India. She received her B.Sc. Hons. Mathematics degree,M.Sc. and M.Phil in Operations Research from University of Delhi, Delhi, India in 2011, 2013 and 2015 respectively.Her current research interests include Game theory, Internet Economics, Supply Chain and Inventory Management.Yezekael Hayel He has been Assistant/Associate Professor with the University of Avignon, France, since 2006. He has held a tenure position (HDR) since 2013. His research interests include performance evaluation and optimization of networks based on game theoretic and queuing models. He looks at applications in communication/transportation and social networks, such as wireless flexible networks, bio-inspired and self-organizing networks, and economic models of the Internet and yield management. Since joining the Networking Group of the LIA/CERI, he has participated in several projects. He was also involved in workshops and conference organizations. He participates in several national (ANR) and international projects with industrial companies, such as Orange Labs, Alcatel-Lucent, and IBM, and academic partners, such as Supelec, CNRS, and UCLA. He has been invited to give seminal talks in institutions, such as CRAN, INRIA, Supelec, UAM (Mexico), ALU (Shanghai), TU Delft, UGent and Boston University. He was a visiting professor at NYU Polytechnic School of Engineering in 2014/2015.He is now the head of the computer science/engineering institute (CERI) of the University of Avignon.
http://arxiv.org/abs/1709.09334v2
{ "authors": [ "Manjesh K. Hanawal", "Fehmina Malik", "Yezekael Hayel" ], "categories": [ "econ.EM" ], "primary_category": "econ.EM", "published": "20170927045132", "title": "Zero-rating of Content and its Effect on the Quality of Service in the Internet" }
conditions
http://arxiv.org/abs/1709.09161v1
{ "authors": [ "Emmanuel Dufourq", "Bruce A. Bassett" ], "categories": [ "stat.ML", "cs.LG", "cs.NE" ], "primary_category": "stat.ML", "published": "20170926175631", "title": "EDEN: Evolutionary Deep Networks for Efficient Machine Learning" }
firstpage–lastpage The Deep Underground Neutrino Experiment – DUNE: the precision era of neutrino physics for the DUNE collaboration December 30, 2023 ======================================================================================The first direct detection of the asteroidal YORP effect, a phenomenon that changes the spin states of small bodies due to thermal reemission of sunlight from their surfaces, was obtained for (54509) YORP 2000 PH_5. Such an alteration can slowly increase the rotation rate of asteroids, driving them to reach their fission limit and causing their disruption. This process can produce binaries and unbound asteroid pairs. Secondary fission opens the door to the eventual formation of transient but genetically-related groupings. Here, we show that the small near-Earth asteroid (NEA) 2017 FZ_2 was a co-orbital of our planet of the quasi-satellite type prior to their close encounter on 2017 March 23. Because of this flyby with the Earth, 2017 FZ_2 has become a non-resonant NEA. Our N-body simulations indicate that this object may have experienced quasi-satellite engagements with our planet in the past and it may return as a co-orbital in the future. We identify a number of NEAs that follow similar paths, the largest named being YORP, which is also an Earth's co-orbital. An apparent excess of NEAs moving in these peculiar orbits is studied within the framework of two orbit population models. A possibility that emerges from this analysis is that such an excess, if real, could be the result of mass shedding from YORP itself or a putative larger object that produced YORP. Future spectroscopic observations of 2017 FZ_2 during its next visit in 2018 (and of related objects when feasible) may be able to confirm or reject this interpretation. methods: numerical – celestial mechanics –minor planets, asteroids: general –minor planets, asteroids: individual: 2017 FZ_2 –minor planets, asteroids: individual: 2017 DR_109 –planets and satellites: individual: Earth. § INTRODUCTION Small near-Earth asteroids (NEAs) are interesting targets because their study can lead to a better understanding of the evolution of the populations of larger minor bodies from which they originate. Most large asteroids consist of many types of rocks held together bygravity and friction between fragments; in stark contrast, most small asteroids are thought to be fast-spinning bare boulders (see e.g.Harris 2013; Statler et al. 2013; Hatch & Wiegert 2015; Polishook et al. 2015; Ryan & Ryan 2016). Small NEAs can be chipped off by another small body from a larger parent asteroid through subcatastrophic impacts (see e.g. Durda et al. 2007), they can also be released during very close encounters with planets following tidal disruption (see e.g. Keane & Matsuyama 2014; Schunová et al. 2014), or due to the action of the Yarkovsky–O'Keefe–Radzievskii–Paddack (YORP) mechanism (see e.g. Bottke et al. 2006).The asteroidal YORP effect changes the spin states of small bodies as a result of thermal reemission of starlight from their surfaces. Such an alteration can secularly increase the rotation rate of asteroids, driving them to reach their fission limit and subsequently triggering their disruption (Walsh, Richardson & Michel 2008). This process can produce binary systems and unbound asteroid pairs (Vokrouhlický & Nesvorný 2008; Pravec et al. 2010; Scheeres 2017). Asteroids formed by rotational fission escape from each other if the size of one of the members of the pair is small enough (see e.g. 
Jacobson & Scheeres 2011). Secondary fission opens the door to the formation of transient, but genetically-related, dynamic groupings.The discovery and ensuing study of the orbital evolution of (54509) YORP 2000 PH_5 led to the first direct observational detection of the YORP effect (Lowry et al. 2007; Taylor et al. 2007). While YORP spin-up is now widely considered as the dominant formation mechanism for small NEAs (see e.g. Walsh et al. 2008; Jacobson & Scheeres 2011; Walsh, Richardson & Michel 2012; Jacobson et al. 2016), there are still major questions remaining to be answered. In particular, meteoroid impacts can also affect asteroid spins at a level comparable to that of the YORP effect under certain circumstances (Henych & Pravec 2013; Wiegert 2015). In fact, it has been suggested that small asteroids can be used to deflect incoming NEAs via kinetic impacts (Akiyama, Bando & Hokamoto 2016). The asteroidal YORP effect has been measured in objects other than YORP; for example, (25143) Itokawa 1998 SF_36 (Kitazato et al. 2007; Ďurech et al. 2008a; Lowry et al. 2014), (1620) Geographos 1951 RA (Ďurech et al. 2008b), (3103) Eger 1982 BB (Ďurech et al. 2012) and small asteroids in the Karin cluster (Carruba, Nesvorný & Vokrouhlický 2016). P/2013 R3, a recent case of asteroid breakup, has been found to be consistent with the YORP-induced rotational disruption of a weakly bound minor body (Jewitt et al. 2017). On the other hand, YORP is not only known for being affected by the YORP effect, it is also a well-studied co-orbital of the Earth (Wiegert et al. 2002; Margot & Nicholson 2003). Members of this peculiar dynamical class are subjected to the 1:1 mean-motion resonance with our planet.Earth's co-orbital zone currently goes from ∼0.994 au to ∼1.006 au, equivalent to a range in orbital periods of 362 to 368 d (see e.g. de la Fuente Marcos & de la Fuente Marcos 2016f). Models (see Section 3.2) indicate that the probability of a NEA ever ending up on an orbit within this region has an average value of nearly 0.0025. Such an estimate matches the observational results well (see Section 3.2), although NEAs in Earth's co-orbital zone can only be observed for relatively brief periods of time due to their long synodic periods. About 65 per cent of all known NEAs temporarily confined or just passing through Earth's co-orbital zone have absolute magnitude, H, greater than 22 mag, or a size of the order of 140 m or smaller, making them obvious candidates to being by-products of YORP spin-up or perhaps other processes capable of triggering fragmentation events (see above).Here, we show that the recently discovered minor body 2017 FZ_2 was until very recently a quasi-satellite of the Earth and argue that it could be related to YORP, which is also a transient companion to the Earth moving in a horseshoe-type orbit (Wiegert et al. 2002; Margot & Nicholson 2003). This paper is organized as follows. In Section 2, we present data, details of our numerical model, and 2017 FZ_2's orbital evolution. Section 3 explores the possibility of the existence of a dynamical grouping, perhaps related to YORP. Mutual close encounters between members of this group are studied in Section 4. Close approaches to other NEAs, Venus, the Earth and Mars are investigated in Section 5. Our results are discussed in Section 6. 
Section 7 summarizes our conclusions.§ ASTEROID 2017 FZ_2: DATA, INTEGRATIONS AND ORBITAL EVOLUTION The recently discovered NEA 2017 FZ_2 was originally identified as an Earth's co-orbital candidate because of its small relative semimajor axis, |a-a_ Earth|∼0.0012 au, at discovery time (now it is 0.008 au). Here, we present the data currently available for this object, outline the techniques used in its study, and explore both its short- and medium-term orbital evolution.§.§ The dataAsteroid 2017 FZ_2 was discovered on 2017 March 19 by G. J. Leonard observing with the 1.5-m reflector telescope of the Mt.Lemmon Survey at an apparent magnitude V of 19.2 (Urakawa et al. 2017).[http://www.minorplanetcenter.net/mpec/K17/K17F65.html]It subsequently made a close approach to our planet on 2017 March 23 when it came within a nominal distance of 0.0044 au, travelling at a relative velocity of 8.46 km s^-1.[https://ssd.jpl.nasa.gov/sbdb.cgi?sstr=2017%20FZ2;old=0;orb=0; cov=0;log=0;cad=1#cad] The object was observed with radar from Arecibo Observatory (Rivera-Valentin et al. 2016) on 2017 March 27, when it was recedingfrom our planet.[http://www.naic.edu/%7Epradar/asteroids/2017FZ2/2017FZ2.2017Mar 27.s2p0Hz.cw.png] The orbital solutioncurrently available for this object (see Table <ref>) is based on 152 observations spanning a data-arc of 8 d (includingthe Doppler observation); its minimum orbit intersection distance (MOID) with the Earth is 0.0014 au. Asteroid 2017 FZ_2 was initially included in the list of asteroids that may be involved in potential future Earth impact eventscompiled by the Jet Propulsion Laboratory (JPL) Sentry System (Chamberlin et al. 2001; Chodas 2015),[https://cneos.jpl.nasa.gov/sentry/]with a computed impact probability of 0.000092 for a possible impact in 2101–2104, but it has been removedsince.[https://cneos.jpl.nasa.gov/sentry/details.html#?des=2017%20FZ2] It is, however, a small object with H=26.7 mag(assumed G = 0.15), which suggests a diameter in the range 13–30 m for an assumed albedo in the range 0.20–0.04. For thisreason, the explosive energy associated with a hypothetical future impact of this minor body could be comparable to those of typicalnuclear weapons currently stocked, i.e. a locally dangerous impact not too different from that of the Chelyabinsk event (see e.g.Brown et al. 2013; Popova et al. 2013; de la Fuente Marcos & de la Fuente Marcos 2015c).§.§ The approachAs explained in de la Fuente Marcos & de la Fuente Marcos (2016f), confirmation of co-orbital candidates of a given host comes onlyafter the statistical analysis of the behaviour of a critical or resonant angle, λ_ r,[In our case, therelative mean longitude or difference between the mean longitude of the object and that of its host.] in a relevant set of numericalsimulations that accounts for the uncertainties associated with the orbit determination of the candidate. If the value ofλ_ r librates or oscillates over time, the object is actually trapped in a 1:1 mean-motion resonance with its host astheir orbital periods are virtually the same; if λ_ r circulates in the interval (0, 360), we speak of anon-resonant, passing body. Librations about 0 (quasi-satellite), ±60 (Trojan) or 180 (horseshoe) are oftencited as the signposts of 1:1 resonant behaviour (see e.g. 
Murray & Dermott 1999), although hybrids of the three elementaryco-orbital states are possible and the actual average resonant value of λ_ r depends on the orbital eccentricity andinclination of the object (Namouni, Christou & Murray 1999; Namouni & Murray 2000). Here, we use a direct N-body code[http://www.ast.cam.ac.uk/%7Esverre/web/pages/nbody.htm] implemented by Aarseth (2003)and based on the Hermite scheme described by Makino (1991) —i.e. no linear or non-linear secular theory is used in this study—to investigate the orbital evolution of 2017 FZ_2 and several other, perhaps related, NEAs. The results of Solar systemcalculations performed with this code are consistent with those obtained by other authors using different softwares (see de laFuente Marcos & de la Fuente Marcos 2012); for further details, including the assumed physical model, see de la Fuente Marcos & dela Fuente Marcos (2012, 2016f). Initial conditions in the form of positions and velocities in the barycentre of the Solar system for2017 FZ_2, other relevant NEAs, and the various bodies that define the physical model have been obtained from JPL'shorizons[https://ssd.jpl.nasa.gov/?horizons] system (Giorgini et al. 1996; Standish 1998; Giorgini & Yeomans1999; Giorgini, Chodas & Yeomans 2001; Giorgini 2011, 2015) at epoch JD 2458000.5 (2017-September-04.0 TDB, Barycentric DynamicalTime), which is the t = 0 instant in our figures unless explicitly stated.§.§ The evolutionFig. <ref>, top panel, shows that, prior to its close encounter with our planet on 2017 March 23, 2017 FZ_2 was aquasi-satellite of the Earth with a period close to 60 yr as the value of the resonant angle was librating about zero with anamplitude of nearly 30. The bottom panel in Fig. <ref> only displays the time interval (-250, 1) yr for clarity andshows complex, drifting yearly loops (the annual epicycles) as seen in a frame of reference centred at the Sun and rotating with theEarth. This result —for the time interval (-225, 50) yr— is statistically robust as it is common to all the control orbits(over 10^3) investigated in this work. Extensive calculations (see below) show that the orbital evolution of this NEA is highlysensitive to initial conditions, much more than for any other previously documented quasi-satellite of our planet. A very chaoticorbit implies that it will be difficult to reconstruct its past dynamical evolution or make reliable predictions about its futurebehaviour beyond a few hundred years.When observed from the ground, a quasi-satellite of the Earth traces an analemma in the sky (de la Fuente Marcos & de la FuenteMarcos 2016e,f).[There is an error in terms of quoted units in figs 1 and 2 of de la Fuente Marcos & de la Fuente Marcos(2016e), and figs 2 and 3 of de la Fuente Marcos & de la Fuente Marcos (2016f), the right ascension is measured in hours notdegrees in those figures.] Fig. <ref> shows ten loops of the analemmatic curve described by 2017 FZ_2 (in red, nominalorbit) that is the result from the interplay between the tilt of the rotational axis of the Earth and the properties of the orbit ofthe quasi-satellite. Due to its significant eccentricity but low orbital inclination, its apparent motion traces a very distortedteardrop. Non-quasi-satellite co-orbitals do not trace analemmatic loops as seen from the Earth (see the blue curve inFig. 
<ref> that corresponds to 2017 DR_109, an Earth's co-orbital that follows a horseshoe-type path).Consequently, 2017 FZ_2 joins the list of quasi-satellites of our planet that already includes (164207) 2004 GU_9 (Connorset al. 2004; Mikkola et al. 2006; Wajer 2010), (277810) 2006 FV_35 (Wiegert et al. 2008; Wajer 2010), 2013 LX_28 (Connors2014), 2014 OL_339 (de la Fuente Marcos & de la Fuente Marcos 2014, 2016c) and (469219) 2016 HO_3 (de la Fuente Marcos &de la Fuente Marcos 2016f). Although it was the smallest known member of the quasi-satellite dynamical class —independent of thehost planet and significantly smaller than the previous record holder, 469219, that has H = 24.2 mag— it is no longer a memberof this category; our calculations indicate that, after its most recent close flyby with our planet, it has become a non-resonantNEA.The quasi-satellite resonant state —as the one experienced by 2017 FZ_2 and the other five objects pointed out above— wasfirst described theoretically by Jackson (1913), but without the use of modern terminology. The energy balance associated with itwas studied by Hénon (1969), who called the objects engaged in this unusual resonant behaviour `retrograde satellites'. However,such objects are not true satellites because they are not gravitationally bound to a host (i.e. have positive planetocentricenergy). The term `quasi-satellite' itself was first used in its present sense by Mikkola & Innanen (1997). Although the firstquasi-satellite (of Jupiter) may have been identified (and lost) in 1973 (Chebotarev 1974),[Originally published inRussian, Astron. Zh., 50, 1071-1075 (1973 September–October).] the first bona fide quasi-satellite (of Venus in this case),2002 VE_68, was documented by Mikkola et al. (2004). A modern theory of quasi-satellites has been developed in the papers byMikkola et al. (2006), Sidorenko et al. (2014) and Pousse, Robutel & Vienne (2017). A recent review of confirmed quasi-satelliteshas been presented by de la Fuente Marcos & de la Fuente Marcos (2016e).Fig. <ref> shows that the most recent quasi-satellite episode of 2017 FZ_2 started nearly 275 yr ago (but see below) andended after a close encounter with the Earth on 2017 March 23. The past and future orbital evolution of this object as described byits nominal orbit in Table <ref>, left-hand column, is shown in Fig. <ref>, central panels; the evolution of tworepresentative control orbits based on the nominal solution but adding (+) or subtracting (-) six times the correspondinguncertainty from each orbital element (all the six parameters) in Table <ref>, left-hand column, and labelled as`±6σ' are displayed as well (right-hand and left-hand panels, respectively). These two examples of orbit evolution usinginitial conditions that are most different from those associated with the nominal orbit are not meant to show how wide thedispersion of the various parameters could be as they change over time; the statistical effect of the uncertainties will be studiedlater. Figs <ref> and <ref> show that prior to its most recent flyby with our planet this NEA was an Aten asteroid (now it isan Apollo) following a moderately eccentric orbit, e = 0.26, with low inclination, i = 171, that kept the motion of thisminor body confined between the orbits of Venus and Mars as it experienced close approaches to both Venus and the Earth (A- andH-panels). These two planets are the main direct perturbers of 2017 FZ_2 —although Jupiter drives the precession of the nodes(see e.g. Wiegert, Innanen & Mikkola 1998). 
For this reason, the dynamical context of this NEA is rather different from that of therecently identified Earth's quasi-satellite 469219 (de la Fuente Marcos & de la Fuente Marcos 2016f). The geocentric distanceduring the closest approaches shown in Fig. <ref>, A-panels, is often lower than the one presented in the figure due to itslimited time resolution (a data output interval of 36.5 d was used for these calculations); for example, our (higher timeresolution) calculations show that on 2017 March 23 the minimum distance between 2017 FZ_2 and our planet was about 0.0045 aualthough Fig. <ref>, central A-panel, shows a value well above the Hill radius of the Earth, 0.0098 au, during theencounter.Fig. <ref> shows that 2017 FZ_2 experienced brief quasi-satellite engagements with our planet in the past and it willreturn as Earth's co-orbital in the future (C-panels); it may remain inside or close to the neighbourhood of Earth's co-orbital zonefor 20 kyr and possibly more although its orbital evolution is very chaotic (see below). The value of the Kozai-Lidov parameter√(1 - e^2)cos i (Kozai 1962; Lidov 1962) remains fairly constant (B-panels). In our case, oscillation of the argument ofperihelion is observed for a certain period of time (Fig. <ref>) at ω = 270 (left-hand G-panels); librationabout 180 has also been observed for other control orbits, but it is not shown here. When ω oscillates about180 the NEA reaches perihelion while approaching the descending node; when ω librates about 270 (-90),aphelion always occurs away from the orbital plane of the Earth (and perihelion away from Venus). Some of these episodes correspondto domain III evolution as described in Namouni (1999), i.e. horseshoe-retrograde satellite orbit transitions and librations. Thisbehaviour is also observed for 469219 (de la Fuente Marcos & de la Fuente Marcos 2016f) and other Earth's co-orbitals or nearco-orbitals (de la Fuente Marcos & de la Fuente Marcos 2015b, 2016a,b).The current orbital solution for 2017 FZ_2 is not as good as that of 469219, the fifth quasi-satellite of our planet, which hasa data-arc spanning 13.19 yr; in any case, the overall evolution of 2017 FZ_2 is significantly more chaotic as it is subjectedto the direct perturbation of Venus and the Earth–Moon system. The combination of relatively poor orbital determination and strongdirect planetary perturbations makes it difficult to predict the dynamical status of this object beyond a few centuries. Followingthe approach detailed in de la Fuente Marcos & de la Fuente Marcos (2015d), we have applied the Monte Carlo using the CovarianceMatrix (MCCM) method to investigate the impact of the uncertainty in the orbit of 2017 FZ_2 on predictions of its past andfuture evolution. Here and elsewhere in this paper, covariance matrices have been obtained from JPL's horizons. Fig. <ref> shows results for 250 control orbits. Its future evolution is rather uncertain, which is typical of minor bodieswith a perhaps non-negligible probability of colliding with our planet during the next century or so —as pointed out above, animpact was thought to be possible in 2101–2104 and the dispersion grows very significantly after that. In the computation of thisset of control orbits (their orbital elements), the Box-Muller method (Box & Muller 1958; Press et al. 2007) has been used togenerate random numbers according to the standard normal distribution with mean 0 and standard deviation 1 (for additional details,see de la Fuente Marcos & de la Fuente Marcos 2015d, 2016f). 
Our calculations show that the orbit of 2017 FZ_2 is inherentlyvery unstable due to its close planetary flybys and has a Lyapunov time —or characteristic time-scale for exponential divergenceof integrated orbits that start arbitrarily close to each other— of the order of 10^2 yr. Such short values of the Lyapunovtime are typical of planet-crossing co-orbitals (see e.g. Wiegert et al. 1998). Fig. <ref> clearly shows how thiscircumstance affects the value of the resonant angle and the quasi-satellite nature of 2017 FZ_2 (compare with Fig. <ref>,top panel).Asteroid 2017 FZ_2 will reach its next visibility window for observations from the ground starting in February 2018. FromFebruary 27 to March 18, it will be observable at an apparent visual magnitude ≤23 cruising from right ascension 10^ hto 8^ h and declination +1 to +14. Unfortunately, the Moon will interfere with any planned observations untilMarch 10. This will be the best opportunity to gather spectroscopy of this object for the foreseeable future.About 20 days before the discovery of 2017 FZ_2, another small NEA, 2017 DR_109, had been found following an orbit similarto that of 2017 FZ_2.[http://www.minorplanetcenter.net/mpec/K17/K17E31.html] The new minor body was observed by D. C.Fuls with the 0.68-m Schmidt camera of the Catalina Sky Survey at a visual apparent magnitude of 19.6 (Fuls et al. 2017). It is alsoa possible co-orbital of our planet, |a-a_ Earth|∼0.0011 au, but smaller (9–20 m) than 2017 FZ_2. Its current orbitalsolution is not as robust as that of 2017 FZ_2 —see Table <ref>, central column— but it is good enough to arrive tosolid conclusions regarding its current dynamical status, in the neighbourhood of t=0. Fig. <ref> shows the behaviourof the resonant angle of 2017 DR_109 during the time interval (-75, 75) yr; it is indeed a co-orbital and follows a horseshoeorbit with respect to our planet. Fig. <ref> indicates that the orbital evolution of this object is nearly as chaotic asthat of 2017 FZ_2. Although it may remain inside or in the neighbourhood of Earth's co-orbital zone for many thousands of years(see D-panels), it switches between the various co-orbital states (quasi-satellite, Trojan or horseshoe) and hybrids of themmultiple times within the time interval displayed (see C-panels). Many of the dynamical aspects discussed regarding 2017 FZ_2are also present in Fig. <ref>. As for its next window of visibility, from 2018 February 23 to March 6 it will have anapparent visual magnitude <22, moving from right ascension 0^ h to 9^ h and declination +60 to +10, butthe Moon will interfere with the observations during this period. In general, objects following horseshoe-type paths with respect tothe Earth can be observed favourably only for a few consecutive years (often less than a decade), remaining at low solar elongations—and beyond reach of ground-based telescopes— for many decades afterwards.§ ARE THERE TOO MANY OF THEM? The discovery of two small NEAs moving in rather similar orbits within 20 days of each other hints at the possible existence of a dynamical grouping as the past and future orbital evolution of both objects also bears some resemblance. 
In order to search for additional NEAs that may be following paths consistent with those of 2017 DR_109 and 2017 FZ_2, we use the D-criteria of Southworth & Hawkins (1963), D_ SH, Lindblad & Southworth (1971), D_ LS (in the form of equation 1 in Lindblad 1994 or equation 1 in Foglia & Masi 2004), Drummond (1981), D_ D, and the D_ R from Valsecchi, Jopek & Froeschlé (1999). These criteria are customarily applied using proper orbital elements not osculating Keplerian orbital elements like those in Table <ref> (see e.g. Milani 1993, 1995; Milani & Knežević 1994; Knežević & Milani 2000; Milani et al. 2014, 2017).Unfortunately, our exploration of the orbital evolution of 2017 DR_109 and 2017 FZ_2 in Section 2 indicates that, due to their short Lyapunov times, it may not be possible to estimate meaningful proper orbital elements for objects moving in such chaotic orbits. Proper elements are expected to behave as quasi-integrals of the motion, and are often computed as the mean semimajor axis, eccentricity and inclination from a numerical simulation over a long timespan; when close planetary encounters are at work, no relevant timespan can lead to parameters that remain basically unchanged. Although the dynamical context associated with these objects makes any exploration of their possible past connections difficult, a representative set of orbits of a given pair with low values of the D-criteria could be integrated to investigate whether their orbital evolution over a reasonable amount of time is also similar or not, confirming or disproving any indirect indication given by the values of the D-criteria based on osculating Keplerian orbital elements. Such an approach will be tested here. §.§ The evidenceTable <ref>, top section, shows various orbital parameters and the values of the D-criteria for objects with D_ LSand D_ R < 0.05 with respect to 2017 FZ_2 as described by its nominal orbit in Table <ref>, left-hand column.Unfortunately, the closest dynamical relatives of 2017 FZ_2 are also small NEAs with poor orbital determinations, 2009 HE_60 and 2012 VZ_19. There are however in Table <ref>, top section, some relatively large NEAs with well-constrained orbitalsolutions, (488490) 2000 AF_205 and (54509) YORP 2000 PH_5. Asteroid 488490 is considered as an accessible NEA suitable forsample return missions (Christou 2003; Sears, Scheeres & Binzel 2003). Besides YORP, 2017 DR_109 and 2017 FZ_2, two otherobjects, 2009 HE_60 and 2015 YA, are also confined within Earth's co-orbital zone. Asteroid 2015 YA follows an asymmetrichorseshoe trajectory (a hybrid of quasi-satellite and horseshoe that may evolve into a pure quasi-satellite state or a plainhorseshoe one within the next century or so) with respect to the Earth, but it may not stay as Earth co-orbital companion for longdue to the very chaotic nature of its orbital evolution (de la Fuente Marcos & de la Fuente Marcos 2016b); its orbit determination is as reliable as that of 2017 DR_109 and it is in need of improvement as well. Although the quality of the orbital determination of 2009 HE_60 (Gibbs et al. 2009) is inferior to those of 2015 YA,2017 DR_109 or 2017 FZ_2, our simulations show that this NEA is currently engaged in a brief quasi-satellite episode withour planet that started nearly 70 yr ago and will end in about 15 yr from now, to become a regular horseshoe librator for over10^3 yr. Fig. 
<ref>, top panel, shows the evolution of the value of λ_ r; the bottom panel displays thepath followed by 2009 HE_60 in a frame of reference centred at the Sun and rotating with our planet, projected on to theecliptic plane. Fig. <ref> shows ten loops of the analemmatic curve described by 2009 HE_60 (in green). We thereforeconfirm that 2009 HE_60 is a robust candidate to being a current quasi-satellite of our planet. We speak of a candidate becauseits orbital solution is in need of some improvement; unfortunately, it has not been observed since 2009 April 29. Its next window ofvisibility spans from 2018 May 29 to June 9, when it will have an apparent visual magnitude <23.5, moving from right ascension17^ h to 16^ h and declination -20 to -18, but the Moon will interfere with the observations duringmost of this period. In any case, our planet appears to be second to none regarding the number of known quasi-satellites (see thereview by de la Fuente Marcos & de la Fuente Marcos 2016e).The presence of YORP among the list of candidates to being dynamical relatives of 2017 FZ_2 in Table <ref>, top section,makes one wonder whether some of those small NEAs may be the result of mass shedding from YORP itself or a putative larger objectthat produced YORP. The data show that 2012 BD_14 appears to follow an orbit akin to that of YORP. Table <ref>, bottomsection, explores the presence of NEAs moving in YORP-like orbits (the nominal orbit of YORP is given in Table <ref>,right-hand column). The orbital similarity between YORP and 2012 BD_14 is confirmed; another relatively large NEA with goodorbital determination, (471984) 2013 UE_3, is also uncovered. The sample of known NEAs moving in YORP-like orbits comprises afew objects with probable sizes larger than 100 m and several more down to the meteoroid size (a few metres). Some or all of themcould be trapped in a web of overlapping mean-motion and secular resonances as described by de la Fuente Marcos & de la FuenteMarcos (2016d), but the very chaotic nature of this type of orbits subjected to the recurrent direct perturbations of Venus and theEarth–Moon system could favour an alternative scenario where some of these objects may have a past physical relationship. If someof these NEAs had a common genetic origin, one may expect that their dynamical evolution back in time kept them within a relativelysmall volume of the orbital parameter space.Fig. <ref> shows a number of backwards integrations for 10 000 yr corresponding to the nominal orbits of representative NEAs in Table <ref>. The points show the current parameters of the 19 different objects in Table <ref>. As this figure uses only nominal orbits and some of the 19 NEAs have poor orbit determinations, the purpose of this plot is simply act as a guide to the reader, not to provide an extensive exploration of the backwards dynamical evolution of these objects, which is out of the scope ofthis paper. In general, the backwards time evolution of these objects seems to keep them clustered within a relatively small region.Some pairs follow unusually similar tracks, for example YORP and 2012 BD_14 or 471984 and 2007 WU_3. In principle, it isdifficult to conclude that the present-day orbits followed by some of these objects could be alike due to chance alone. 
But howoften do NEAs end up and remain on an orbit within this region of the orbital parameter space?§.§ The expectationsIn order to provide a statistically robust answer to the question asked in the previous section, we use the list of near-Earthobjects (NEOs) currently catalogued (as of 2017 July 20, 16 498 objects, 16 323 NEAs) by JPL's Solar System Dynamics Group (SSDG) Small-Body Database (SBDB),[http://ssd.jpl.nasa.gov/sbdb.cgi] concurrently with the NEOSSat-1.0 orbit model developed byGreenstreet, Ngo & Gladman (2012) and the newer one developed within the framework of the Near-Earth Object Population ObservationProgram (NEOPOP) and described by Granvik et al. (2013a,b) and Bottke et al. (2014), which is intended to be a state-of-the-artreplacement for a widely used model described by Bottke et al. (2000, 2002). We use data from these two NEO models because suchsynthetic data do not contain any genetically related objects and they are free from the observational biases and selection effectsthat affect the actual data. These two features are critical in order to decide whether some of the NEAs in Table <ref> mayhave had a physical relationship in the past or not. The data from JPL's SSDG SBDB (see Fig. <ref>, left-hand panels)[In this and subsequent histograms, the bin size hasbeen computed using the Freedman-Diaconis rule (Freedman & Diaconis 1981), i.e. 2IQR n^-1/3, where IQR is theinterquartile range and n is the number of data points.] show that the probability of finding a NEA within Earth's co-orbital zoneis 0.0025±0.0004[The uncertainty has been computed assuming Poissonian statistics, σ=√(n), see e.g. Wall &Jenkins (2012).] (40 objects out of 16 323, but they may or may not be trapped in the 1:1 mean-motion resonance with our planet)and that the probability of finding a NEA following a YORP-like orbit —in other words, D_ LS and D_ R<0.05 withrespect to YORP— is 0.00067±0.00025[As before, we adopt Poissonian statistics to compute the uncertainty —applyingthe approximation given by Gehrels (1986) when n<21, σ∼ 1 + √(0.75 + n).] (11 objects out of 16 323). These are ofcourse biased numbers, but how do they compare with unbiased theoretical expectations?A quantitative answer to this question can be found looking at the predictions made by a scientifically validated NEO model. Forthis task, we have used first the codes described in Greenstreet et al. (2012)[http://www.phas.ubc.ca/%7Esarahg/n1model/]with the same standard input parameters to generate sets of orbital elements including 16 498 virtual objects. The NEOSSat-1.0orbit model was originally applied to compute the number of NEOs with H<18 mag (or a diameter of about 1 km) in each dynamicalclass —Vatiras, Atiras, Atens, Apollos and Amors. It can be argued that only about 6.4 per cent of known NEOs have H<18 mag, butif we assume that the size and orbital elements of asteroids are uncorrelated we can still use the NEOSSat-1.0 orbit model tocompute reliable theoretical probabilities —this customary assumption has been contested by e.g. Bottke et al. (2014) and it isnot used by the NEOPOP model. Fig. <ref>, central panels, shows a typical outcome from this NEO model. When we compareleft-hand and central panels in Fig. <ref>, we observe that the two distributions are quite different in terms of the values ofthe orbital inclination. 
There is also a significant excess of synthetic NEOs with values of the semimajor axis close to 2.5 au thatcorresponds to minor bodies escaping the main asteroid belt through the 3:1 mean-motion resonance with Jupiter; the Alinda family ofasteroids are held by this resonance (see e.g. Simonenko, Sherbaum & Kruchinenko 1979; Murray & Fox 1984). The NEOSSat-1.0 model predicts that the probability of finding a NEO within Earth's co-orbital zone is 0.0028±0.0003 and that ofa NEO following a YORP-like orbit is 0.00003±0.00003 (averages and standard deviations come from ten instances of the modelusing different seeds to generate random numbers). It is clear that the values of the theoretical and empirical probabilities offinding a NEO within Earth's co-orbital zone are statistically consistent; therefore, our assumption of lack of correlation betweensize and orbital elements might be reasonably correct, at least within the context of this research. Surprisingly, the empiricalprobability of finding a NEA following a YORP-like orbit is significantly higher than the theoretical one, over 21σ higher.While it can be argued that our approach is rather crude, it is difficult to explain such a large difference as an artefact when theprobabilities of finding a co-orbital agree so well (but see below for a more detailed discussion). The NEOSSat-1.0 model is not the most recent and highest-fidelity NEO model currently available but, how different are the estimatesprovided by a more up-to-date model like NEOPOP? The software implementing the NEOPOP model is publiclyavailable[http://neo.ssa.esa.int/neo-population] and the model has been successfully applied in several recent NEO studies(Granvik et al. 2016, 2017). It accurately reproduces the orbits of the known NEOs as well as their absolute magnitudes and albedos,i.e. size and orbital elements of NEOs are correlated in this framework. The model is calibrated from H=15 mag up to H=25 mag;as our knowledge of the NEO population with H<15 mag is assumed to be complete, known NEOs have been used for this range ofabsolute magnitudes. Fig. <ref>, right-hand panels, shows a typical outcome from this model. The synthetic, unbiaseddistributions are quite different from the real ones (left-hand panels). The excess of synthetic NEOs with values of the semimajoraxis close to 2.5 au is less significant than that of the NEOSSat-1.0 model. Using the standard options of the NEOPOP model, thecode produces 802 457 synthetic NEOs with H<25 mag. Out of this data pool, we select random permutations (only for H>15 mag,the ones with H<15 mag remain unchanged) of 16 498 objects to compute statistics or construct the histograms in Fig. <ref>,right-hand panels.The NEOPOP model predicts that the probability of finding a NEO within Earth's co-orbital zone is 0.0021±0.0004 and that of a NEO following a YORP-like orbit is 0.000024_-0.000024^+0.000043 (averages and standard deviations come from 25 instances ofthe model). These two values of the probability agree well with those derived from the NEOSSat-1.0 model. Therefore, the theoreticaland empirical probabilities of finding a NEO within Earth's co-orbital zone are statistically consistent, but the empirical one offinding a NEA following a YORP-like orbit is well above any of the theoretical ones. It could be argued that the theoretical andempirical probabilities of finding a co-orbital are equal by chance. 
In addition, known NEAs in YORP-like orbits are simply morenumerous in relative terms because they tend to pass closer to our planet and in consequence are easier to discover as accidentaltargets of NEO surveys. Although these arguments have significant weight, one can also argue that all the known Earth co-orbitalsare serendipitous discoveries that were found as a result of having experienced very close encounters with our planet.Regarding the issue of observed and predicted absolute magnitudes, let us focus only on NEOs, real or synthetic, with H<25 mag.The probabilities given by the NEOPOP model are the values already cited. The data from JPL's SSDG SBDB give a value of0.0021±0.0004 (27 out of 12736 objects) for the probability of finding a NEA with H<25 mag within Earth's co-orbital zone,which is a perfect match for the theoretical estimate from the NEOPOP model. As for YORP-like orbits with H<25 mag, the empirical value of the probability is 0.0003_-0.0003^+0.0002 (4 out of 12736 objects). Even if we take into account the values of H, the theoretical value of the probability of observing a YORP-like orbit is significantly below the empirical value. In addition, the NEOPOP model predicts no NEOs with H<23.5 mag moving in YORP-like orbits but there are two known, YORP and 471984. It is true thatthe evidence is based on small samples but one tantalizing albeit somewhat speculative possibility is that some of these objects have been produced in situ via fragmentation events, i.e. they are direct descendants of minor bodies that were already moving inYORP-like orbits.On the other hand, the synthetic data generated to estimate the theoretical values of the probabilities can be used together withthe D-criteria to uncover unusual pairs among the NEAs in Table <ref>. The procedure is simple, each set of synthetic NEOsfrom NEOSSat-1.0 or NEOPOP comprises fully unrelated virtual objects. If we compute the D-criteria with respect to the real YORP (or2017 FZ_2) for all the virtual NEOs in a given set and perform a search looking for those with D_ LS and D_ R<0.05,we can extract a sample of relevant unrelated (synthetic) NEAs and compute the expected average values and standard deviations ofthe D-criteria when no orbital correlation between pairs of objects is present in the data. These averages and dispersions can beused to estimate how unlikely (in orbital terms) the presence of a given pair of real objects is. In order to evaluate statistically this likelihood, we compute the absolute value of the difference between the value of a certainD-criterion for a given pair of objects in Table <ref> and the average value for uncorrelated virtual objects, then divide bythe dispersion found from the synthetic data. If the estimator gives a value close to or higher than 2σ we may assume thatthe pair of objects could be unusual (an outlier, in statistical parlance) within the fully unrelated pairs scenario and performadditional analyses. 
We have carried out this investigation and the estimator discussed gives values around 1σ for all thetested pairs with one single exception, the pair YORP–2012 BD_14 that gives values of the estimator in the range 2–3σfor all the D-criteria.A simple scenario that may explain the observed excess or even the presence of the unusual pair YORP–2012 BD_14 is in-situproduction of NEAs moving in YORP-like orbits either as a result of mass shedding from YORP itself (or any other relatively largeNEA in this group) or a putative bigger object that produced YORP (or any other of the larger objects in Table <ref>).However, it is not just the size of this population that matters: we also have the issue of its stability and how many concurrentresonant objects should be observed in these orbits at a given time. Finding answers to these questions requires the analysis ofrelevant N-body simulations. Given the chaotic nature of these orbits, relatively short calculations should suffice to arrive torobust conclusions.In order to explore the stability and resonant properties of YORP-like orbits, we have performed 20 000 numerical experiments usingthe same software and physical model considered in the previous section and orbits for the virtual NEOs uniformly distributed (i.ewe do not make any assumptions on the origin, natural versus artificial, of the virtual objects) within the rangesa∈(0.98, 1.02) au, e∈(0.20, 0.30), i∈(0, 4), Ω∈(0, 360), and ω∈(0, 360), i.e.consistent with having D_ LS and D_ R<0.05 with respect to YORP. After an integration of 2 000 yr forward in time weobtain Figs <ref> and <ref>. For each simulated particle we compute the average value and the standard deviation of theresonant angle in the usual way (see e.g. Wall & Jenkins 2012); the standard deviation of a non-resonant angle is 103923. InFig. <ref>, nearly 66 per cent of the virtual NEOs do not exhibit any sign of resonant behaviour during the simulation, 0.7 percent are quasi-satellites, nearly 1.8 per cent are Trojans, and 5.6 per cent move in horseshoe orbits, the remaining 25.9 per centfollow elementary or compound 1:1 resonant states for at least some fraction of the simulated time. Fig. <ref> shows that themost stable configurations are invariably characterized by very long synodic periods (i.e. a∼1 au) and lower e. From Table <ref>, bottom section, and results in Sections 2 and 3, as well as results from the literature, we have at leastthree objects held by the 1:1 mean-motion resonance with our planet out of 11 listed NEAs. In other words, over 27 per cent of allthe objects moving in YORP-like orbits exhibit signs of resonant behaviour when the theoretical expectation in the framework of afully random scenario is about 34 per cent. This result can be regarded as another piece of evidence in support of our previousinterpretation because the orbital distribution of NEAs is not expected to be random (in the sense of uniform), see Fig. <ref>,and real NEAs are not supposed to have formed in these peculiar orbits, but come from the main asteroid belt. In addition we havefound one quasi-satellite (2017 FZ_2) out of 11 NEAs in Table <ref>, bottom section, and two NEAs moving in horseshoe-type paths (YORP and 2017 DR_109), i.e. 
a quasi-satellite probability of about 9 per cent when the random scenario predicts 0.7 percent and a horseshoe probability of about 18 per cent when the random scenario predicts 5.6 per cent.The various aspects discussed above strongly suggest that some of the NEAs in Table <ref> may form a dynamical grouping and perhaps have a genetic connection. Evidence for the presence of possible dynamical groupings among the co-orbital populations of ourplanet has been discussed in the case of NEAs moving in Earth-like orbits, the Arjunas (de la Fuente Marcos & de la Fuente Marcos2015a), and also the group of objects following paths similar to those of 2015 SO_2, (469219) 2016 HO_3 and 2016 CO_246(de la Fuente Marcos & de la Fuente Marcos 2016a,f). However, the study by Schunová et al. (2012) could not find any statistically significant group of dynamically related NEAs among those known at the time, but it is also true that many objects in Table <ref> have been discovered after the completion of that study.§ MUTUAL CLOSE ENCOUNTERS: HAPPENING BY CHANCE OR NOT? If the excess of NEAs following YORP-like orbits is statistically significant, one would expect that some of the asteroid pairs understudy had their orbits intersecting one another in the past and perhaps that their relative velocities were very low during such very close approaches. An exploration of this scenario requires the analysis of large sets of N-body simulations backwards in time for each relevant pair, but in order to identify any results as statistically significant we must study first what happens in the case of a random population of virtual NEAs moving in YORP-like orbits. The statistical analysis of minimum approach distances and relative velocities between unrelated objects can help us distinguish between close encounters happening by chance and those resulting from an underlying dynamical (and perhaps physical) relationship.Using a sample of 10 000 pairs of unrelated, virtual NEAs with orbits uniformly distributed within the ranges considered in the numerical experiments performed in the previous section, and integrated backwards in time for 2 000 yr, we have studied how is the distribution of relative velocities at minimum approach distance (<0.1 au, the first quartile is 0.0046 au and the average is 0.017 au) during mutual encounters. Given the fact that these NEAs are too small to change significantly their orbits due to their mutual gravitational attraction during close encounters, such a velocity distribution can be considered as a reasonably robust proxy for the representative collision velocity distribution for pairs of NEAs following YORP-like orbits. The results of this analysis may apply to both natural and artificial objects.Fig. <ref>, left-hand panels, shows the distribution of relative velocities at minimum distance for pairs of virtual NEAs in YORP-like orbits like those in the numerical experiments performed in the previous section. The most likely encounter velocities are near the high velocity extreme, which is characteristic of encounters near perihelion where multiple encounter geometries are possible. The mean approach velocity falls in the low-probability dip between the high-probability peak at about 14 km s^-1 (encounters at perihelion) and the secondary peak at about 3 km s^-1 (encounters at aphelion). However, the closest approaches (at about 10 000 km from each other) are characterized by values of the relative velocity close to 13 km s^-1 due to their relatively high eccentricity. 
The results in Fig. <ref>, left-hand panels, do not take into account predictions from the NEOPOP model and thismay be seen as a weakness of our analysis. In order to explore possible systematic differences, we have completed an additional set of calculations using the actual synthetic orbits predicted by the NEOPOP model as input, but randomizing the values of Ω and ω, then picking up pairs at random to compute the new set of simulations. The results of these calculations are shown in Fig. <ref>, central panels, and are quite similar to those in the left-hand panels. However, the ranges in the values of a, e and i are slightly different. We have repeated the calculations using uniform orbital distributions, but for the ranges a∈(0.985, 1.035) au, e∈(0.19, 0.27), and i∈(0, 4.5); Fig. <ref>, right-hand panels, shows that the results are very much alike. Although no actual collisions have been observed during all these simulations, the velocity distribution in Fig. <ref> is the one we seek as the mutual gravitational perturbation is negligible even during the closest mutual flybys.Fig. <ref> shows that the mean value of the velocity and its dispersion do not represent the distribution well. Collisions between these minor bodies yield higher probable collision velocities although high speed lowers the overall collision probability (the probability of experiencing an encounter closer than 30 000 km is of the order of 0.0002). However, impacts from meteoroids originated within this population on members of the same population happen at most probable velocities high enough to perhaps induce catastrophic disruption or at least partial destruction of the target as the impact kinetic energy goes as the square of the collision velocity. Our results closely resemble those obtained by Bottke et al. (1994) for the velocity distributions of colliding asteroids in the main asteroid belt. YORP-induced catastrophic asteroid breakups can generate multiple small NEAs characterized by pair-wise velocity dispersions under 1 m s^-1 and also clouds of dust as observed in the case of P/2013 R3 (Jewitt et al. 2017). Groups of objects moving initially along similar paths lose all dynamical coherence in a rather short time-scale (Pauls & Gladman 2005; Rubin & Matson 2008; Lai et al. 2014). This randomization is accelerated when recurrent close planetary encounters are possible. As the virtual NEAs used in our numerical experiments move in uncorrelated orbits, the results in Fig. <ref> can be used to single out pairs of objects that exhibit some level of orbital coherence; this feature may signal a very recent (not more than a few thousand years ago) breakup event. The velocity distribution of any pair of unrelated NEAs following YORP-like orbits is expected to be strongly peaked towards the high end of the distribution in Fig. <ref>, i.e. yield a most probable collision velocity close to or higher than 13 km s^-1. We have explored the velocity distributions of multiple pairs of objects in Table <ref> using the same approach applied before and found that this is true for the vast majority of them. However, we have found two notable exceptions, the pairs (54509) YORP 2000 PH_5–2012 BD_14 and YORP–2007 WU_3. The first one has a non-negligible probability of experiencing close encounters with relative velocities well under 1 km s^-1, 0.0007. 
The second one has a most probable close approach velocity <4 km s^-1—the probability of a value of the close approach velocity that low or lower is 0.77—well below the high-probability peak at 14 km s^-1 in Fig. <ref>. The probability of having a close approach velocity <4 km s^-1 from Fig. <ref> is 0.14. Asteroid 2012 BD_14 has a relatively good orbit (Holmes et al. 2012) as it has one radar Doppler observation, but lacks any additional data other than those derived from the available astrometry/photometry. Assuming similar composition, this Apollo asteroid must be as large as 2017 FZ_2 (15–33 m). In order to compute the velocity distribution for the pair YORP–2012 BD_14, we have performed an analysis analogous to that in Fig. <ref> but using control orbits for both objects generated by applying the MCCM method as described above and integrating backwards for 1 000 yr. Fig. <ref> shows that the most probable collision velocity in this case is ∼4 km s^-1 and about 7 per cent of flybys have relative velocities close to or below 2 km s^-1, but down to 336 m s^-1. In addition, the probability of experiencing an encounter closer than 30 000 km is 0.0013, which is significantly higher than that of analogous encounters for random YORP-like orbits. In other words, it is possible to find control orbits —albeit with relatively low probability— of YORP and 2012 BD_14, statistically compatible with the available observations, that place them very close to each other and moving at unusually low relative speed in the recent past. These approaches are observed at 0.75–0.9 au from the Sun, i.e. in the inmediate neighbourhood of Venus. Although outside the scope of this paper, it may be possible to find control orbits of these two objects that may lead to grazing encounters at velocities of a few m s^-1 if a much larger sample of orbits is analysed (the current one consists of 3 000 virtual pairs). It is difficult to explain such a level of coherence in their recent past orbital evolution as due to chance causes; these two objects could be genetically related. Unfortunately, 2012 BD_14 reached its most recent visibility window late in summer 2017, from August 25 to September 18, but this NEA will become virtually unobservable from the ground during the next few decades.The case of YORP and 2007 WU_3 (Gilmore et al. 2007; Bressi, Schwartz & Holvorcem 2015) is markedly different from the previous one and also from the bulk of pairs in Table <ref>, and deserves a more detailed consideration. A velocity distribution analysis similar to that in Fig. <ref> but using control orbits for both objects generated by applying the covariance matrix methodology described before and integrating backwards in time for 2 000 yr (see Fig. <ref>) shows that the most probable (close to 80 per cent) collision speed in this case is <4 km s^-1 and over 20 per cent of encounters have relative velocities close to or below 2 km s^-1. The velocity distribution is clearly bimodal with the most probable encounter velocities near the low-velocity extreme, which is typical of encounters near aphelion where a multiplicity of flyby geometries are possible. The mean approach velocity falls in the low-probability dip between the high-probability peak at about 2.3 km s^-1 (encounters at aphelion) and the secondary peak at about 3.8 km s^-1 (encounters at perihelion). However, the closest approaches (at about 19 000 km from each other) are characterized by values of the relative velocity close to 2.5 km s^-1. 
In addition, the probability of experiencing an encounter closer than 30 000 km is 0.0007.Although the orbital solutions of both objects are not as precise as those of widely accepted young genetic pairs —for example, 7343 Ockeghem (1992 GE_2) and 154634 (2003 XX_28) as studied by Duddy et al. (2012) or 87887 (2000 SS_286) and 415992 (2002 AT_49) as discussed by Žižka et al. (2016)— and their dynamical environment is far more chaotic, this unusual result hints at a possible physical connection between these two NEAs although perhaps the YORP mechanism was not involved in this case. The velocity distributions for asteroid families studied by Bottke et al. (1994) —see their figs 10a, 11a, 12a and 13a— clearly resemble what is observed in Fig. <ref>, in particular that of the Eos family (their fig 10a) which is non-Gaussian and bimodal with the most probable collisions occurring near the apses; the lowest encounter velocities are observed when their perihelia are aligned and the highest when they are anti-aligned. In the case of YORP and 2007 WU_3, low-velocity encounters take place at about 1.22 au from the Sun and the high-velocity ones at 0.81 au. These two objects are no longer observable from the ground and theywill remain at low solar elongation as observed from the Earth for many decades.The Eos family is the third most populous in the main asteroid belt and is thought to be the result of a catastrophic collision (Hirayama 1918; Mothé-Diniz & Carvano 2005), although its long-term evolution is also driven by the YORP effect (Vokrouhlický et al. 2006); many of its members are of the K spectral type and others resemble the S-type (see e.g. Mothé-Diniz, Roig & Carvano 2005). Having a comparable short-term orbital evolution can be used to argue for a dynamical connection but not to claim a physical connection. Genetic pairs must also have a similar chemical composition that can be studied via visible or near-infrared spectroscopy. Gietzen & Lacy (2007) carried out near-infrared spectroscopic observations of YORP and concluded that it belongs to either of the silicaceous taxonomic classes S or V. Mueller (2007) has pointed out that a S-type classification would be in excellent agreement with data from the Spitzer Space Telescope. Asteroid 2007 WU_3 may be of Sq or Q taxonomy according to preliminary results obtained by the NEOShield-2 collaboration (Dotto et al. 2015).[http://www.neoshield.eu/wp-content/uploads/NEOShield-2_D10.2_i1__Intermediate-observations-and-analysis-progress-report.pdf]If YORP and 2007 WU_3 are genetically related, their mutual short-term dynamical evolution suggests that they are the result of acatastrophic collision not the YORP effect. Such collisions are unusual, but not uncommon; asteroid (596) Scheila 1906 UA experienced a sub-critical impact in 2010 December by another main belt asteroid less than 100 m in diameter (Bodewits et al. 2011; Ishiguro et al. 2011; Jewitt et al. 2011; Moreno et al. 2011; Yang & Hsieh 2011; Bodewits, Vincent & Kelley 2014). In this case, the impact velocity was probably close to 5 km s^-1 (Ishiguro et al. 2011).§ ENCOUNTERS WITH PLANETS AND OTHER NEOS In addition to experiencing close encounters with other objects moving in YORP-like orbits, Fig. <ref> shows that these interesting minor bodies can undergo close encounters with Venus, the Earth and (rarely) Mars. Close encounters with members of the general NEO population are possible as well. 
The circumstances surrounding such encounters are explored statistically in this section.§.§ Encountering other NEOsWe have already found that collisions between NEAs following YORP-like orbits are most likely happening at relative velocities closeto 13 km s^-1, but most NEAs are not moving in YORP-like orbits. Here, we study the most general case of a close encounterbetween a virtual NEA moving along a YORP-like trajectory and a member of the general NEO population. In order to explore properlythe available orbital parameter space, our population of virtual general NEAs has values of the orbital elements uniformlydistributed within the domain q<1.3 au (uniform in q, not in a), e∈(0, 0.9), i∈(0, 50), Ω∈(0, 360),and ω∈(0, 360). Fig. <ref> shows a velocity distribution that is very different from that in Fig. <ref>;instead of being nearly bimodal, it is clearly unimodal although the most likely value of the encounter velocity is similar to that in Fig. <ref>. About 90 per cent of encounters/collisions have characteristic relative velocities in excess of 7 km s^-1 and about 55 per cent, above 15 km s^-1. If one of the minor bodies discussed here experiences a collision with another smallNEA, the impact speed will be probably high enough to disrupt the object, partially or fully, creating a group of genetically (bothphysically and dynamically) related bodies. Fig. <ref> shows the encounter velocity (colour coded) for the pairs in Fig. <ref> as a function of the initialvalues of the orbital parameters —q, e and i— of the general NEA. As NEAs with low values of q and high values of iare rare, very high-speed collisions are unlikely. Relatively low-speed collisions mostly involve NEAs moving in low-eccentricity, low-inclination orbits. The lowest velocities (<2.5 km s^-1) have been found for a∈(0.9, 1.1) au, e∈(0.2, 0.3), andi<4 which is precisely the volume of the orbital parameter space enclosing those NEAs following YORP-like orbits. Asexpected, very low-speed collisions are only possible among members of the same dynamical class (i.e. when they have very similarorbits). The previous analysis is based on the wrong assumption that NEOs have values of the orbital elements uniformly distributed;Fig. <ref> clearly shows that this is not the case. On the other hand, sampling uniformly can oversample high eccentricities and inclinations, which lead to higher encounter velocities. In order to obtain unbiased estimates, we have performed additionalcalculations using a population of synthetic NEOs from the NEOPOP model to obtain Figs <ref> and <ref>. Themost probable value of the relative velocity during encounters is ∼15 km s^-1 and the most likely value of the semimajoraxis of the orbits followed by NEOs experiencing flybys with NEAs following YORP-like orbits is ∼2.5 au. This more realisticanalysis still shows that the most probable impact speed during a hypothetical collision between a general NEO and one objectfollowing a YORP-like orbit could be high enough to cause significant damage to both NEOs.§.§ Encountering VenusTables <ref> and <ref>, and Figs <ref> and <ref> show that minor bodies following YORP-like orbitshave perihelion distances that place them in the neighbourhood of the orbit of Venus. This fact clears the way to potential closeencounters with this planet as one of the nodes could be in the path of Venus (see Fig. <ref>, H-panels). 
In order toinvestigate this possibility, we have performed short integrations (50 yr) of virtual NEAs with orbits as those described inSection 3 and under the same conditions. We decided to use short integrations to minimize the effects on our results derived fromthe inherently chaotic orbital evolution of these objects. Out of all the experiments performed, we have focused on 25 000 caseswhere the minimum distance between the virtual NEA and Venus during the simulation reached values under 0.1 au. We have found thatfor this type of orbit, the probability of experiencing such close encounters with Venus is about 82.2 per cent (this value of the probability has been found dividing the number of experiments featuring encounters under 0.1 au by the total number of experimentsperformed). Our calculations show that nearly 2 per cent of YORP-like orbits can lead to encounters under one Hill radius of Venus(0.0067 au), about 0.03 per cent approach closer than 20 planetary radii and 0.01 per cent closer than 10 planetary radii. Although relatively close encounters are frequent, the probability of placing the virtual NEA within the volume of space that couldbe occupied by putative natural satellites of Venus is rather small for these orbits. Outside 10 planetary radii, the velocity ofthe incoming body with respect to Venus during the closest encounters is in the range (5, 7) km s^-1; inside 10 planetary radiithe speed of the virtual NEA is significantly increased due to gravitational focusing by Venus. Fig. <ref> shows that thedistribution of relative velocities at minimum distance is far from symmetric and the peak of the distribution does not correspondto the typical values observed when the distance from Venus to the virtual NEA is the shortest possible. In principle, NEAs thatexperience close encounters with Venus (or the Earth) can suffer resurfacing events when large rocks are dislodged from the surfaceof the minor body. Our calculations indicate that this may be happening to NEAs following YORP-like orbits, but the chance of thishappening is low during encounters with Venus —the probability of encounters under 10 Venus radii is just 0.01 per cent. §.§ Encountering the Earth: evaluating the impact riskHere, we use the same set of simulations analysed in the previous section to perform a risk impact assessment evaluating theprobability that an object moving in a YORP-like orbit experiences a close encounter with our planet. We found that the probabilityof suffering a close encounter under 0.1 au with the Earth is about 74.8 per cent (from 20 000 experiments). Our calculations alsoshow that over 10 per cent of YORP-like orbits can approach the Earth closer than one Hill radius (0.0098 au), about 0.14 per centapproach under 20 planetary radii and 0.05 per cent below 10 planetary radii. The overall upper limit for the impact probabilitywith our planet could be < 0.00005 which is higher than that with Venus. Although the objects of interest here tend to approach Venus more frequently, when they approach our planet, they do it at a closerdistance. In principle, the Earth could be more efficient in altering the orbits of these small bodies, but Fig. <ref> showsthat the most probable value of the relative velocity at closest approach is nearly 8 km s^-1 with a Gaussian spread or standarddeviation of 1.230 km s^-1. This is higher than for flybys with Venus. 
Encounters at lower relative speed could be moreeffective at modifying the path of the perturbed minor body because it will spend more time under the influence of the perturberduring the flyby. Taking into account that the values of the masses of Venus and the Earth are very close, the overall influence ofboth planets on the orbits of NEAs following YORP-like orbits could be fairly similar. In addition, close encounters can happen in sequence; i.e. an inbound NEO can experience a close flyby with our planet that may facilitate a subsequent close encounter withVenus or an outbound NEO can approach Venus at close range making a close flyby with the Earth possible immediately afterwards. Figs <ref> and <ref> show that the most probable value of the relative velocity at closest approach is higher forclose encounters with our planet than for flybys with Venus and one may find this counterintuitive because at ∼0.7 au Keplerian velocities are higher than at ∼1 au. However, for flybys near perihelion multiple encounter geometries are possible. Thesituation could be analogous to that of Amor asteroids (or Apollos with perihelion distances ∼1 au) encountering the Earth.Independent calculations by JPL's horizons give a range of 4.94 to 11.64 km s^-1 for the relative velocity duringencounters between 2017 FZ_2 and Venus during a time span of nearly 300 yr; in the case of encounters with the Earth it gives arange of 4.63 to 22.52 km s^-1.Several objects in Table <ref> have very small values of their MOIDs with respect to the Earth, some as small as 30 000 km orunder 5 Earth radii. NEAs moving in these orbits clearly pose a potential risk of impact with our planet. In addition to2017 FZ_2 (and its initially possible impact in 2101–2104), two other small NEAs in Table <ref> have a non-negligibleprobability of colliding with our planet in the near future as computed by JPL's Sentry System. Asteroid 2010 FN has an estimatedimpact probability of 0.0000039 for a possible collision in 2079–2116.[https://cneos.jpl.nasa.gov/sentry/details.html#?des=2010%20FN]Asteroid 2016 JA has an impact probability of 0.000001 for a possible impact in2064–2103.[https://cneos.jpl.nasa.gov/sentry/details.html#?des=2016%20JA]§.§ Encountering MarsCalculations analogous to the ones described in the cases of Venus and the Earth show that although relatively distant encounters (∼0.1 au) between objects moving in YORP-like orbits and Mars are possible, the actual probability is about 0.6 per cent and wefound no encounters under one Hill radius of Mars (0.0066 au). The velocity of these bodies relative to Mars at its smallestseparation is in the interval (4, 5) km s^-1 (see Fig. <ref>). The role played by Mars on the dynamical evolution ofthese minor bodies is clearly negligible. § DISCUSSION Asteroid 2017 FZ_2 was, prior to its close encounter with the Earth on 2017 March 23, a quasi-satellite of our planet. It was thesmallest detected so far (H=26.7 mag) and also moved in the least inclined orbit (i=17). This minor body is no longer trappedin the 1:1 mean-motion resonance with our planet, but it has been trapped in the past and it could be again in the future. 
Its orbit is highly chaotic and it was even suggested that this NEA may collide with our planet in the near future (<100 yr), but it is seen as of less potential concern because of its small size.Although apparently insignificant, this small body has led us to uncover a larger set of NEAs that appears to be at least some sort of grouping of dynamical nature that perhaps includes several objects that may be genetically related. It is not possible to reach robust conclusions regarding a possible connection between them because there is no available spectroscopic information for the vast majority of these NEAs and their current orbital solutions are not yet sufficiently reliable. Most of these NEAs have few observations and/or have not been re-observed for many years; a few of them, (54509) YORP 2000 PH_5 included, may remain out of reach of ground-based telescopes for many decades into the future.Objects in YORP-like orbits[As pointed out above, D_ LS and D_ R<0.05 with respect to YORP or (see Table <ref>, bottom section) a∈(0.98, 1.02) au, e∈(0.20, 0.30), i∈(0, 4), Ω∈(0, 360), and ω∈(0, 360).] might be returning spacecraft, but their obvious excess makes this putative origin very unlikely. Althougha number of items of space debris and working spacecraft (e.g. Rosetta, Gaia or Hayabusa 2) have been erroneously given minor body designations, their orbits tend to be comparatively less inclined and far less eccentric, and their absolute magnitudes closer to 30. In addition, any artificial interloper within the group of objects moving in YORP-like orbits would have been originally Venus-bound and there are not many of those; besides, artificial objects without proper mission control input tend to drift away from Earth's co-orbital region rather quickly (see e.g. section 9 of de la Fuente Marcos & de la Fuente Marcos 2015b). They may be lunar ejecta, but our calculations suggest that these objects cannot remain in their orbits for a sufficiently long period of time, say for the last 100 000 yr. The only alternative locations for the origin of these bodies are either the asteroid belt or having been produced in situ (i.e. within Earth's immediate neighbourhood) via YORP spin-up or any other mechanism able to generate fragments. Our analysis using the NEOSSat-1.0 and NEOPOP orbital models indicates that the known delivery routes from the asteroid belt to the YORP-like orbital realm might not be efficient enough to explain the high proportion of small NEAs currently moving in YORP-like orbits. The presence of several relatively large minor bodies, YORP included, and a rather numerous dynamical cohort of much smaller objects hints at a possible genetic relationship between several of the members of this group. Arguing for a genetic relationship requires both robust dynamical and compositional similarities. However, it is also entirely possible that the existence of observational biases may account, at least partially, for the observed excess. Small NEAs could be the result of the YORP mechanism, but they may also be the by-product of collisions or the aftermath of tidal stripping events during low-velocity close encounters with the planets, in particular with the Earth (see e.g. Bottke & Melosh 1996a,b). Asteroids encountering the Earth (or Venus) with low relative velocity are good candidates for disruption because tidal splitting is more likely at low relative velocity. 
Objects following YORP-like orbits encounter the Earth preferentially with relative velocity in the range 6–8 km s^-1 (the encounter statistics analysed in Section 5.3). Morbidelli et al. (2006) have shown that (see their fig. 2), when encountering the Earth at these low relative velocities, any fragment resulting from tidal stripping can become physically unbound in a relatively short time-scale, producing a pair of genetically related NEAs.On the other hand, low-speed flybys, which are more sensitive to gravitational perturbations from our planet, tend to retain the coherence of the nodal distances for shorter time-scales and are more prone to generate a dynamical grouping or meteoroid stream after tidal disruption. Fig. <ref> shows the behaviour of the nodal distances for a number of objects in Table <ref> (also see H-panels in Figs <ref> and <ref>). As the orbit determinations of 2009 HE_60 and 2012 VZ_19 are rather poor, Fig. <ref> is only meant to be an example, not a numerically rigorous exploration of this important issue; in any case and although certain objects appear to exhibit some degree of (brief) nodal coherence, the evolution is often too chaotic to be able to arrive to any reliable conclusions. The pair-wise approach followed in Section 4 seems to be the only methodology capable of producing robust evidence, either in favour or against a possible dynamical relationship. The picture that emerges from our extensive analysis is a rather complex one. YORP-induced rotational disruption may be behind the origin of some of the small bodies discussed in this paper (as it could be the case of the pair YORP–2012 BD_14), but the fact that they also experience very close encounters with both Venus and our planet cannot be neglected. On the other hand, once fragments are produced either during planetary encounters or as a result of the YORP effect, these meteoroids can impact other objects moving along similar paths at relatively high velocities triggering additional fragmentations (perhaps the case of the pair YORP–2007 WU_3) or even catastrophic disruptions leading to some type of cascading effect that can eventually increase the size of the population moving along these peculiar orbits and perhaps explain the observed excess. Subcatastrophic collisions and tidal encounters can also lead to further rotational disruptions. The dynamical environment found here is rather different from the one surrounding typical main-belt asteroid families; here, very close planetary encounters are possible in addition to the active role played by mean-motion and secular resonances that are also present in the main asteroid belt. In this paper, we have not included the results of the Yarkovsky and YORP effects (see e.g. Bottke et al. 2006), but ignoring these effects has no relevant impact on our analysis, which is based on relatively short integrations, or our conclusions. Not including these non-gravitational forces in our integrations is justified by the fact that the largest predicted Yarkovsky drift rates are ∼10^-7 au yr^-1 (see e.g. Farnocchia et al. 2013) and also because the physical properties —such as rotation rate, albedo, bulk density, surface conductivity or emissivity— of most of the NEAs under study here are not yet known and without them, noreliable calculations can be attempted. 
The width of the co-orbital region of the Earth is about 0.012 au; at the highest predicted Yarkovsky drift rates, the characteristic time-scale to leak from the co-orbital region of our planet is equal to or longer than 120 000 yr that is much larger than the time-scales pertinent to this study —for an average value for the Yarkovsky drift of 10^-9 au yr^-1, the time-scale to abandon the region of interest reaches 12 Myr.§ CONCLUSIONS In this paper, we have studied the orbital evolution of the recently discovered NEA 2017 FZ_2 and other, perhaps related, minor bodies. This research has been performed using N-body simulations and statistical analyses. Our conclusions can be summarized as follows.* Asteroid 2017 FZ_2 was until very recently an Earth's co-orbital, the sixth known quasi-satellite of our planet and thesmallest by far. Its most recent quasi-satellite episode may have started over 225 yr ago and certainly ended after a closeencounter with the Earth on 2017 March 23. * Extensive N-body simulations show that the orbit of 2017 FZ_2 is very unstable, with a Lyapunov time of the order of100 yr. * Our orbital analysis shows that 2017 FZ_2 is a suitable candidate for being observed spectroscopically as it will remainrelatively well positioned with respect to our planet during its next visit in 2018. * Over a dozen other NEAs move in orbits similar to that of 2017 FZ_2, the largest named being (54509) YORP 2000 PH_5.Among these objects, we have identified two present-day co-orbitals of the Earth not previously documented in the literature:2017 DR_109 follows a path of the horseshoe type and 2009 HE_60 is confirmed as a strong candidate to being aquasi-satellite. * We find an apparent excess of small bodies moving in orbits akin to that of YORP that amounts to an over twentyfold increasewith respect to predictions from two different orbital models and we argue that this could be the result of mass shedding fromYORP.* NEAs moving in YORP-like orbits may experience close encounters with both Venus and the Earth. Such encounters might lead to tidal disruption events, full or partial. We also find that mutual collisions are also possible within this group.* N-body simulations in the form of extensive backwards integrations indicate that the pair YORP–2012 BD_14 might be therecent (∼10^3 yr) outcome of a YORP-induced rotational disruption.* N-body simulations and the available spectroscopic evidence suggest that the pair YORP–2007 WU_3 may have a common genetic origin, perhaps a subcatastrophic collision.Spectroscopic studies during the next perigee of some of these objects should be able to confirm if YORP could be the source of any ofthe small NEAs studied here —or if any of them is a relic of human space exploration.§ ACKNOWLEDGEMENTS We thank the anonymous referee for a particularly constructive, detailed and very helpful first report and for additional comments, S. J. Aarseth for providing the code used in this research, A. I. Gómez de Castro, I. Lizasoain and L. Hernández Yáñez of the Universidad Complutense de Madrid (UCM) for providing access to computing facilities. This work was partially supported by the Spanish `Ministerio de Economía y Competitividad' (MINECO) under grant ESP2014-54243-R. Part of the calculations and the data analysis were completed on the EOLO cluster of the UCM, and we thank S. Cano Alsúa for his help during this stage. EOLO, the HPC of Climate Change of the International Campus of Excellence of Moncloa, is funded by the MECD and MICINN. 
This is a contribution to the CEI Moncloa. In preparation of this paper, we made use of the NASA Astrophysics Data System, the ASTRO-PH e-print server, and the MPC data server.99[Aarseth2003]2003gnbs.book.....A Aarseth S. J., 2003,Gravitational N-body simulations.Cambridge Univ. Press, Cambridge, p. 27[Akiyama, Bando, & Hokamoto2016]2016AdSpR..57.1820A Akiyama Y., Bando M., Hokamoto S., 2016, Adv. Space Res., 57, 1820[Bodewits et al.2011]2011ApJ...733L...3B Bodewits D., Kelley M. S., Li J.-Y., Landsman W. B., Besse S., A'Hearn M. F., 2011, ApJ, 733, L3[Bodewits, Vincent & Kelley2014]2014Icar..229..190B Bodewits D., Vincent J.-B., Kelley M. S. P., 2014, Icarus, 229, 190[Bottke & Melosh1996a]1996Natur.381...51B Bottke W. F., Melosh H. J., 1996a, Nature, 381, 51[Bottke & Melosh1996b]1996Icar..124..372B Bottke W. F., Jr., Melosh H. J., 1996b, Icarus, 124, 372[Bottke et al.1994]1994Icar..107..255B Bottke W. F., Nolan M. C., Greenberg R., Kolvoord R. A., 1994, Icarus, 107, 255[Bottke et al.2000]2000Sci...288.2190B Bottke W. F., Jedicke R., Morbidelli A., Petit J.-M., Gladman B., 2000, Science, 288, 2190[Bottke et al.2002]2002Icar..156..399B Bottke W. F., Morbidelli A., Jedicke R., Petit J.-M., Levison H. F., Michel P., Metcalfe T. S., 2002, Icarus, 156, 399[Bottke et al.2006]2006AREPS..34..157B Bottke W. F., Jr., Vokrouhlický D., Rubincam D. P., Nesvorný D., 2006,Annu. Rev. Earth Planet. Sci., 34, 157[Bottke et al.2014]2014AGUFM.P12B..09B Bottke W. F., Jr. et al., 2014, American Geophysical Union, Fall Meeting 2014, abstract #P12B-09[Box & Muller1958]BM58 Box G. E. P., Muller M. E., 1958,Ann. Math. Stat., 29, 610 [Bressi, Schwartz & Holvorcem2015]2015MPEC....N...20B Bressi T. H., Schwartz M., Holvorcem P. R., 2015,MPEC Circ., MPEC 2015-N20[Brown et al.2013]2013Natur.503..238B Brown P. G. et al., 2013, Nature, 503, 238[Carruba, Nesvorný & Vokrouhlický2016]2016AJ....151..164C Carruba V., Nesvorný D., Vokrouhlický D., 2016, AJ, 151, 164[Chamberlin et al.2001]2001DPS....33.4108C Chamberlin A. B., Chesley S. R., Chodas P. W., Giorgini J. D., Keesey M. S., Wimberly R. N., Yeomans D. K., 2001, AAS/Div. Planet. Sci. Meeting Abstr., 33, 41.08[Chebotarev1974]1974SvA....17..677C Chebotarev G. A., 1974, SvA, 17, 677[Chodas2015]2015DPS....4721409C Chodas P., 2015, AAS/Div. Planet. Sci. Meeting Abstr., 47, 214.09[Christou2003]2003P SS...51..221C Christou A. A., 2003, Planet. Space Sci., 51, 221 [Connors2014]2014MNRAS.437L..85C Connors M., 2014, MNRAS, 437, L85[Connors et al.2004]2004M PS...39.1251C Connors M., Veillet C., Brasser R., Wiegert P., Chodas P.,Mikkola S., Innanen K., 2004, Meteorit. Planet. 
http://arxiv.org/abs/1709.09379v1
{ "authors": [ "C. de la Fuente Marcos", "R. de la Fuente Marcos" ], "categories": [ "astro-ph.EP" ], "primary_category": "astro-ph.EP", "published": "20170927081316", "title": "Asteroid 2017 FZ2 et al.: signs of recent mass-shedding from YORP?" }
e-Sem: Dynamic Seminar Management System for Primary, Secondary and Tertiary Education Ioannis A. Skordas^7, Nikolaos Tsirekas^1, Nestoras Kolovos^5, George F. Fragulis^1, Athanasios G. Triantafyllou^7 and Maria G. Bouliou^4 ^1Laboratory of Web Technologies & Applied Control Systems, Dept. of Electrical Engineering, Western Macedonia Univ. of Applied Sciences, Kozani, Hellas ^7Lab. of Atmospheric Pollution & Environmental Physics, Dept. of Geotechnology and Environmental Engineering, Western Macedonia Univ. of Applied Sciences ^5Engineering Geologist ^4Economist researcher Received …; accepted … ============================================================ This paper describes the dynamic seminar management system named "e-Sem", developed according to the open-source software philosophy. Owing to its dynamic management functionality, it can adapt equally well to any educational environment (primary, secondary or tertiary). The proposed dynamic system aims at ease of use and handling by any class of users, without the need for special guidance. Students are given the opportunity to: a) register as users; b) enroll in seminars in a simple way; c) receive e-learning material at any time of day, any day of the week; and d) be informed of new announcements concerning the seminars in which they are enrolled. In addition, the administrator and the tutors have a number of tools, such as: managing seminars and trainees in a friendly way; sending educational material as well as new announcements to the trainees; electronic recording of the presence or absence of trainees in a seminar; and direct printing of a certificate of successful attendance of a seminar for each trainee. The application also offers features such as electronic organization, storage and presentation of educational material, overcoming the limiting factors of space and time of classical teaching, thus creating a dynamic environment. Web Based Application; MySQL; PHP; Open Source Software; Tele-education; Distance Learning; Distance Teaching § INTRODUCTION The World Wide Web is undoubtedly a part of everyday life. Conventional communication methods included printed reports as well as radio, newspaper and television announcements. Nowadays, reading news articles, planning trips, extracting information from encyclopedias and buying goods are indicative activities that many people carry out mainly via the Web <cit.>. Any information can now be sent over the Internet quickly, with additional historical data, through a website and in a simplified way to a wide audience. Therefore, the combination of telecommunications and new technologies creates an evolving development framework, making systems more flexible, more capable and more user-friendly <cit.>. There are many commercial Internet applications on the market which are subject to restrictions such as high purchase cost, maintenance cost, absolute dependence on the manufacturer, and oligopolistic practices of some software companies (e.g.
high annual utility costs, unexpected price increases, product withdrawal and lack of support) and the absence of innovation on the part of several manufacturers <cit.>. In contrast, "Free Software / Open Source Software" (FOSS) provides significant advantages for users, giving them the freedom to run, copy, distribute, study, modify and improve the software without restrictions (Free Software Foundation, 1996-2007), and it has numerous scientific applications (see <cit.>, <cit.>, <cit.>, <cit.>-<cit.> and the references therein). Seminar Management Systems are sophisticated web applications developed by institutions or companies that want to work with digital learning (e-learning/distance learning), either to provide services to third parties or to train their staff <cit.>. § SYSTEM DESCRIPTION Several open seminar management software systems (SMS) are in use today. The most popular (by number of users) are CoMPUS, e-Class and Moodle. Most SMSs are composed of many individual parts and have most of the following in common <cit.>: * users in a seminar management system are divided into students, tutors and administrators, and access to the system is determined by the distinct role of each of them. * SMSs operate in a client-server model. * they maintain some form of authentication for users and divide them into groups, so that the same platform can be used for more than one seminar. * they have a friendly interface for all users (students and tutors). * there is the ability to save data about the user (creating profiles) and to provide "help" during navigation of the application. * the environment works with web browsers, so it is accessible from anywhere with any operating system, and users do not need to install any other software. * they make use of open source tools such as HTML, PHP, MySQL and the Apache Server <cit.>. In line with the above requirements, we propose "e-Sem", a new dynamic seminar management system that can be utilized at the primary, secondary and tertiary levels of education as well as at summer schools and in other scientific/research communities. "e-Sem" is an application based on open source software tools, such as HTML, MySQL, PHP, JavaScript and jQuery; see <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>. The purpose of the proposed "e-Sem" management system is to provide trainees with the ability to participate in online tutorials (seminars) in an easy, fast and comprehensible manner. In addition, all data are managed dynamically, while the necessary data validation checks are performed with automated actions. The complete diagram of the system is shown in Figure 1, where three different user groups are supported. The first group are the "Trainees/Students", who can enroll in seminars and see a list of seminars taken in the past. In addition, they are given the opportunity to edit their profile, and they can receive educational material and be informed about news and announcements for the specific seminar in which they are enrolled. The second group of users are the "Teachers/Tutors", with management rights (adding, deleting and modifying students) on seminars. It is also possible to record attendance, as well as to print a certificate on successful completion of a seminar for each trainee. The system automatically refuses to register a trainee in a seminar when the maximum number of participants has been reached. The "Teachers/Tutors" also have the ability to send educational material (lectures, exercises, exams), news and announcements to the trainees who participate in the specific seminar.
Finally, the third group consists of the "Administrator", with the exclusive privilege of adding, deleting and modifying students, tutors and seminars. We describe below the functional and non-functional requirements in detail: Non-Functional: * Operation in a client-server model * There is no need to install additional software, as the environment works with web browsers and is therefore accessible from anywhere with any operating system. Functional: * The system supports three different user groups: * The Trainees/Students have the ability to: * enroll in seminars * view their participation history * edit their profile * download educational material for seminars * view news and announcements * print certificates of successfully completed seminars * The Teachers/Tutors have the ability to: * manage seminars * keep an attendance log for each of their seminars * mark a student as a successful participant * manage their seminars' participants * edit their profile * upload educational material * post news and announcements for their seminars * The Administrator has the ability to: * manage seminars * manage seminar participants * edit the profiles of trainees and tutors * add new tutors * post general news and announcements § USER INTERFACE AND FUNCTIONALITY This section describes the user interface, emphasizing the features and functions of the e-Sem application. As shown in Figure 2, two groups of users have access to the form for adding seminars: the tutors and the administrator. A number of validation checks are performed on the entered data when adding new seminars to the application. In addition, it is possible to send educational material, as well as to delete, edit and display detailed information for each seminar. Another basic feature of the e-Sem application is that the teacher/administrator can record the attendance or absence of participants at a seminar on an hourly basis. The advantage in this case is that the successful or unsuccessful completion of the seminar is determined easily and automatically for each trainee (Figure 3). The trainee can enroll in a seminar through a dynamic form that informs the trainee about the available seminars and, at the same time, about the number of participants who have expressed interest so far (Figure 4). As shown in Figure 5, the seminar history is an additional feature that informs trainees in which seminars they have successfully participated in the past. A trainee also has the ability to directly obtain a certificate of successful attendance for any completed seminar. The teacher/administrator can send educational material in the form of a file to the participants of each seminar (Figure 6). Another function, shown in Figure 7, is sending new messages and announcements to specific tutors or trainees/students, or multicasting them to everyone. This action can be performed by the tutors as well as by the administrator. In order to evaluate the functionality and effectiveness of the application, the following steps were performed. The application was uploaded to a web server and test data were imported. We then tested the application's speed and responsiveness. The results were very satisfactory, as the application worked fast enough. Next, we created an account for each user and asked a number of users to test the application. These users came from various social groups and professions (developers, analysts, teachers, students, pupils).
We then briefly informed these users (by e-mail) about the subject that our application deals with. We did not provide detailed usage instructions, in order to see whether our application is functional and easy to use. The comments we received were quite satisfactory and encouraging for the work we had done. Most test users found the application very easy to handle and its operations quite clear. Most of the negative comments concerned the visual appearance of the application. The majority of these were taken into account and we took the necessary steps to improve it. In general, the feedback of our test users helped us to improve several parts of the application. § MYSQL AND DATABASE ARCHITECTURE MySQL is the most well-known database management system <cit.>. It is an open source tool, and its advantage is that it is constantly improved at regular intervals. As a result, MySQL has evolved into a fast and extremely powerful database. It performs all functions, such as storing, sorting, searching and recalling data, in an efficient way. Moreover, it uses SQL, the worldwide standard query language. Finally, it is distributed free of charge. The database used in the application has been designed to use the InnoDB storage engine, so that foreign key constraints can be created. For the needs of the web application «e-Sem», a database was built that stores the necessary information in order to refresh the dynamic content of the website each time the database is modified. In this way, it becomes easier to manage and view the contents of the application. The database, called dbseminars, consists of eight tables in total and is shown in Figure 8. The two main tables of the application are users and seminars, where the first consists of 15 fields with id as the primary key. One of the most important fields is "accesslevel", which accepts only three values (0, 1 and 2). This value identifies the type of user, i.e. zero (0) for the administrator, one (1) for the tutor and two (2) for the trainees/students. The remaining fields hold the user's details, entered when they first register with the system. The main purpose of the seminars table is to store seminars, with the primary key "semid" uniquely characterizing each entry. In addition, the "maxparticipants" field represents the maximum number of participants for each seminar. In the users seminars table, the students participating in the seminars are stored, with "userid" and "semid" as foreign keys and their combination as the primary key. The "attendancebook" table records the presence or absence of each student during seminar sessions. Its primary key is the combination of "semid" and "userid", with the corresponding fields as foreign keys. The "history" table represents the history of a student's successful participation in seminars; "userid" is a foreign key and the primary key is the combination of "userid" and "semtitle". The "news" table stores news and announcements and consists of seven fields with id as the key. The "files" table stores information such as the name, type and size of each file associated with the training material; the key is id. Finally, the "presenthours" table records at which hour a student was marked as present or absent in a particular seminar. Its primary key is the combination of the three fields "semid", "userid" and "hour".
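To make the schema description above concrete, the following is a minimal, runnable sketch of a subset of the dbseminars tables. The actual system uses MySQL with the InnoDB engine accessed from PHP; Python's built-in sqlite3 module is used here only so that the example is self-contained. The table and key names (users, seminars, presenthours, id, accesslevel, semid, maxparticipants, userid, hour) follow the text above, while the users_seminars spelling, the username column and the exact column types are assumptions, since the full field lists are not given.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # enforce the foreign keys below
conn.executescript("""
CREATE TABLE users (
    id          INTEGER PRIMARY KEY,
    username    TEXT NOT NULL,
    accesslevel INTEGER NOT NULL CHECK (accesslevel IN (0, 1, 2))
    -- 0 = administrator, 1 = tutor, 2 = trainee/student
);
CREATE TABLE seminars (
    semid           INTEGER PRIMARY KEY,
    semtitle        TEXT NOT NULL,
    maxparticipants INTEGER NOT NULL
);
-- Enrolment of trainees in seminars; composite primary key.
CREATE TABLE users_seminars (
    userid INTEGER REFERENCES users(id),
    semid  INTEGER REFERENCES seminars(semid),
    PRIMARY KEY (userid, semid)
);
-- Hourly presence/absence record for each trainee in each seminar.
CREATE TABLE presenthours (
    semid   INTEGER REFERENCES seminars(semid),
    userid  INTEGER REFERENCES users(id),
    hour    INTEGER,
    present INTEGER NOT NULL,  -- 1 = present, 0 = absent
    PRIMARY KEY (semid, userid, hour)
);
""")

def enroll(userid: int, semid: int) -> bool:
    """Register a trainee in a seminar unless it is already full."""
    count = conn.execute(
        "SELECT COUNT(*) FROM users_seminars WHERE semid = ?",
        (semid,)).fetchone()[0]
    limit = conn.execute(
        "SELECT maxparticipants FROM seminars WHERE semid = ?",
        (semid,)).fetchone()[0]
    if count >= limit:
        return False  # seminar full: registration is refused
    conn.execute("INSERT INTO users_seminars VALUES (?, ?)", (userid, semid))
    return True

conn.execute("INSERT INTO users VALUES (1, 'student1', 2)")
conn.execute("INSERT INTO seminars VALUES (1, 'Intro to PHP', 30)")
print(enroll(1, 1))  # True
```

The enroll helper illustrates the behaviour described earlier, whereby the system automatically refuses to register a trainee once the maximum number of participants has been reached.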
§ CONCLUSION In the present paper we propose a Seminar Management System (e-Sem) with a dynamic environment that meets the needs of several levels of education (primary, secondary, tertiary, summer schools, etc.), developed with open source software tools and taking into account their advantages and features. Students/trainees can easily access the system in a friendly and simplified way, while tutors have a management tool with many capabilities and functions. Our application uses recent client-side technologies such as CSS3, the Bootstrap v3.3.7 framework for text, forms, buttons, tables and navigation, and jQuery for JavaScript plugins. Extending the capabilities of the e-Sem system is under development by our research team. Specifically, we are working to add the following items (among others): * the ability to present the educational material in multimedia form, such as video and audio * a live streaming capability * a live chat room, a shoutbox and a discussion forum for all users. Another possible improvement of the proposed application could be the use of a Model View Controller (MVC) framework so that its modules are better organized. Finally, we list the differences as well as the similarities between our application and existing ones such as Moodle, CoMPUS and e-Class: Similarities * Client-server technology * Search for educational material * Frequently Asked Questions (FAQ) * News and announcements * File exchange Differences * Automatic e-mails to registered users when announcements and educational material are introduced by the administrator * Insertion of future events by all application users * Multimedia * Forum and live chat § ACKNOWLEDGMENT Dr. George F. Fragulis's work is supported by the program Rescom 80159 of the Western Macedonia Univ. of Applied Sciences, Kozani, Hellas. 1 Avgeriou Avgeriou Paris, Papasalouros Andreas, Retalis Symeon, Skordalakis Manolis, (2003). Towards a Pattern Language for Learning Management Systems. In Journal of Educational Technology & Society. Atkinson Atkinson L. & Suraski Z. (2004). Core PHP Programming, 3rd edition. Pearson Education Inc., publishing as Prentice Hall Professional Technical Reference. Binstock Binstock A. (2013). Developer reading list: The must-have books for JavaScript. O'Reilly. Berners Berners-Lee, T., Hall, W., Hendler, J.A., O'Hara, K., Shadbolt, N. & Weitzner, D.J. (2006). A framework for web science. Foundations and Trends in Web Science. Britain Britain Sandy, Liber Oleg, (1999). A Framework for Pedagogical Evaluation of Virtual Learning Environments. JTAP, JISC Technology Applications, Report 41. University of Wales - Bangor. Chaffer Chaffer J. & Swedberg K. (2013). Learning jQuery, fourth edition. Packt. Duckett Duckett J. (2008). Beginning Web Programming with HTML, XHTML, and CSS. Wiley Publishing. Laurie Laurie B. & Laurie P. (2003). Apache: The Definitive Guide, 3rd edition. O'Reilly. Sturgess Sturgess, Phillipa. (2004). Evaluation of Online Learning Management Systems. Vaswani Vaswani V. (2004). MySQL: The Complete Reference. McGraw-Hill. Xuan Xuan Z., Allan P. Dale. (2001). JavaAHP: A web-based decision analysis tool for natural resource and environmental management. Environmental Modelling & Software 16, 251-262. web2008 HTML, XHTML, and CSS: Complete, Gary B. Shelly and Denise M. Woods, 2010. php1 PHP Solutions: Dynamic Web Design Made Easy, David Powers, 2010. mysql High Performance MySQL: Optimization, Backups, and Replication, Baron Schwartz & Peter Zaitsev & Vadim Tkachenko, 2012.
apache Apache: The Definitive Guide, 3rd Edition, Ben Laurie and Peter Laurie, O'Reilly, 2003. Java1 JavaScript & jQuery: The Missing Manual, David Sawyer McFarland, 2011. Skordas_Fragulis_Triant2011 e-AirQuality: A Dynamic Web Based Application for Evaluating the Air Quality Index for the City of Kozani, Hellas, J. Skordas, G.F. Fragulis and Ath. Triantafyllou, 15th Panhellenic Conference on Informatics (PCI), Sept. 30 2011 - Oct. 2 2011, pp. 171-174, 2011. Skordas_Fragulis_Triant2014 A.Q.M.E.I.S.: Air Quality Meteorological and Environmental Information System in Western Macedonia, Hellas, J. Skordas, G.F. Fragulis and Ath. Triantafyllou, 1st International Conference on Buildings Energy Efficiency and Renewable Energy Sources 2014, Kozani, Greece. (arXiv:1406.0975, 2014). Tele1 TelE-Learning: The Challenge for the Third Millennium (IFIP Advances in Information and Communication Technology), edited by Don Passey and Mike Kendal, 2011. Tele2 International Perspectives on Tele-Education and Virtual Learning Environments, edited by Graham Orange and Dave Hobs, 2010. Tele3 Towards a versatile tele-education platform for computer science educators based on the Greek School Network, Michael Paraskevas, Thomas Zarouchas et al., IADIS International Conference e-Learning 2013. Tele4 The Greek School Network: A Paradigm of Successful Educational Services Based on the Dynamics of Open-Source Technology, Michael N. Kalochristianakis, Michael Paraskevas et al., IEEE Transactions on Education 50(4):321-330, December 2007. Tele5 Asynchronous Tele-education and Computer-Enhanced Learning Services in the Greek School Network, Michael N. Kalochristianakis, Michael Paraskevas and Emmanouel Varvarigos, Emerging Technologies and Information Systems for the Knowledge Society, First World Summit on the Knowledge Society, WSKS 2008, Athens, Greece, September 24-26, 2008. SATEP SATEP: Synchronous-Asynchronous Tele-education Platform, Lazaridis, Lazaros, Papatsimouli, Maria and Fragulis, George F. In: South-East Europe Design Automation, Computer Engineering, Computer Networks and Social Media Conference SEEDA-CECNSM 2016, Kastoria, Hellas. ACM, 2016, pp. 92-98.
http://arxiv.org/abs/1709.09409v2
{ "authors": [ "Ioannis A. Skordas", "Nikolaos Tsirekas", "Nestoras Kolovos", "George F. Fragulis", "Athanasios G. Triantafyllou", "Maria G. Bouliou" ], "categories": [ "cs.CY" ], "primary_category": "cs.CY", "published": "20170927093135", "title": "e-Sem: Dynamic Seminar Management System for Primary, Secondary and Tertiary Education" }
We report the discovery, spectroscopic confirmation, and mass modelling of the gravitationally lensed quasar system PS J0630-1201. The lens was discovered by matching a photometric quasar catalogue compiled from Pan-STARRS1 and WISE photometry to the Gaia DR1 catalogue, exploiting the high spatial resolution of the latter (FWHM ∼ 0.1^'') to identify the three brightest components of the lensed quasar system. Follow-up spectroscopic observations with the WHT confirm the multiple objects are quasars at redshift z_q=3.34. Further follow-up with Keck AO high-resolution imaging reveals that the system is composed of two lensing galaxies and the quasar is lensed into a ∼2.8^'' separation four-image cusp configuration with a fifth image clearly visible, and a 1.0^'' arc due to the lensed quasar host galaxy. The system is well-modelled with two singular isothermal ellipsoids, reproducing the position of the fifth image. We discuss future prospects for measuring time delays between the images and constraining any offset between mass and light using the faintly detected Einstein arcs associated with the quasar host galaxy. gravitational lensing: strong – quasars: general – methods: observational – methods: statistical § INTRODUCTION Gravitationally lensed quasars can be used as tools for a variety of astrophysical and cosmological applications <cit.>, including: mapping the dark matter substructure of the lensing galaxy <cit.>; determining the mass <cit.> and spin <cit.> of black holes; and measuring the properties of distant host galaxies <cit.>. In addition, they can be used to constrain cosmological parameters with comparable precision to baryonic acoustic oscillation methods <cit.> and to probe the physical properties of quasar accretion disks through microlensing studies <cit.>. <cit.> found that, for the Dark Energy Survey (DES - ), no simulated gravitationally lensed quasar system with image separation less than 1.5^'' is segmented into multiple catalogue sources due to limitations on survey resolution. As a result, only 23% of simulated systems with pairs of quasar images are segmented into two sources. This means that to identify lensed quasars as groups of close sources of similar colours, one has to either employ 2D modelling techniques (Ostrovski et al. 2017, in prep.) or rely on cross-matching to higher resolution imaging surveys such as Gaia (FWHM ∼ 0.1^''; ). Gaia <cit.> is a space observatory performing a full sky survey to study the Milky Way. Gaia data release 1 (DR1 - ) contains positions for a total of 1.1×10^9 sources across the whole sky with a point source limiting magnitude of G = 20.7 <cit.>. Here, we present the discovery of the lensed quasar PS J0630-1201 from a preliminary search for gravitationally lensed quasars from Pan-STARRS1 (PS - ) combined with the Wide-field Infrared Survey Explorer (WISE - ) by cross-matching to Gaia DR1 detections. All magnitudes are quoted on the AB system. Conversions from Vega to AB for the WISE data are W1_AB = W1_Vega+2.699 and W2_AB = W2_Vega+3.339 <cit.>.
§ LENS DISCOVERY AND CONFIRMATION To select gravitationally lensed quasar candidates, we first create a morphology-independent (i.e., not restricted to objects consistent with the point spread function) photometric quasar catalogue using Gaussian Mixture Models, a supervised machine learning method of generative classification, as described in <cit.>. We use WISE and PS photometry to define g-i, i-W2, z-W1, and W1-W2 colours, and we employ three model classes (point sources, extended sources, and quasars) to generate a catalogue of 378,061 quasar candidates from a parent sample of 80,028,181 objects with i<20. To remove stellar contaminants at low galactic latitude, we apply conservative colour cuts based on comparisons between the known quasar distribution and the distribution of our candidates, as well as removing objects with high point source probabilities. We also discard objects with a neighbouring candidate within 5^'', yielding a total of 296,967 objects, since the spatial resolution of PS is enough to flag such systems as potential lensed quasar candidates or binary pairs. We then exploit the excellent spatial resolution and all-sky coverage of the Gaia DR1 catalogue to identify lensed quasar candidates by resolving the photometric quasars into multiple components. Gaia DR1 is known to be enormously incomplete for close-separation objects but nevertheless can identify multiple components of lensed quasars (see <cit.> for details). We cross-match our photometric quasar candidates with the Gaia DR1 catalogue using a 3^'' search radius and find that 1,401 of the quasar candidates have 2 Gaia objects associated with their PS position, whilst 26 of the candidates have 3 Gaia associations. Visual inspection of these 26 objects revealed one lens candidate of interest, PS J0630-1201, shown in Fig. <ref> with Gaia positions overlaid on the PS cutout. Its PS catalogue position, and PS and WISE photometry, are listed in Table <ref>. The other objects with triple Gaia matches were ruled out as common contaminants, mainly single quasars with other objects nearby (that, despite being resolved in PS images, were not quasar candidates in our catalogue), as well as apparently faint interacting starbursting galaxies and likely duplicate entries in the Gaia catalogue (objects with separations of <0.1^'').
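To make the cross-match step above concrete, the following is a minimal sketch of the 3^'' match of photometric quasar candidates against Gaia DR1 using astropy (acknowledged at the end of this paper). The coordinate arrays are placeholders; in practice they would be read from the candidate catalogue and the Gaia DR1 source table, and this is not necessarily the exact implementation used in the search.

```python
import numpy as np
import astropy.units as u
from astropy.coordinates import SkyCoord

# Placeholder coordinate arrays (degrees); in practice these come from the
# PS+WISE quasar candidate catalogue and from Gaia DR1.
cand = SkyCoord(ra=np.array([97.537]) * u.deg,
                dec=np.array([-12.022]) * u.deg)
gaia = SkyCoord(ra=np.array([97.5372, 97.5368, 97.5374]) * u.deg,
                dec=np.array([-12.0221, -12.0218, -12.0223]) * u.deg)

# All Gaia sources within 3 arcsec of each candidate; the first index array
# refers to the candidates, the second to the Gaia sources.
idx_cand, idx_gaia, sep2d, _ = gaia.search_around_sky(cand, 3 * u.arcsec)

# Count Gaia associations per candidate; candidates resolved into two or
# more Gaia sources are flagged as potential lensed quasars.
counts = np.bincount(idx_cand, minlength=len(cand))
flagged = np.flatnonzero(counts >= 2)
print(counts, flagged)
```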
§.§ Spectroscopic Follow-up at the WHT Spectroscopic follow-up observations to confirm that the multiple components are multiply-imaged quasars were performed with the dual-arm Intermediate dispersion Spectrograph and Imaging System (ISIS) on the William Herschel Telescope (WHT) on the night of April 01 2017 UT. The R300B grating was employed on the blue arm, providing wavelength coverage from ∼3200Å to ∼5400Å with ∼4Å resolution, and the R158R grating on the red arm resulted in coverage from ∼5300Å to ∼10200Å with a resolution of ∼7.7Å; the 5300Å dichroic was used to split the beam to the blue and red arms. Two exposures of 600s each were obtained with a slit PA of 22.5 degrees, i.e., along the direction of the three brightest images, and two clearly separated traces were visible on the red arm, with the two southern-most images not deblended, whilst no clear separation was apparent in the blue data. Extractions of these traces are shown in Figure <ref>, revealing very similar spectra indicating a lensed quasar at z_q = 3.34. §.§ Adaptive Optics Follow-up with Keck After the spectroscopic confirmation of multiple quasars, the system was observed on April 11 2017 UT with the NIRC2 camera mounted on Keck II, using Keck's adaptive optics (AO) system. The NIRC2 narrow camera was used, giving a 10^''×10^'' field of view and 10 mas pixels, and four 180s exposures were obtained with the K^' filter. These data clearly resolve the three quasars observed with WHT spectroscopy (A, B, and C in Figure <ref>), and also reveal two additional point-like objects (D and E) and two extended objects (G1 and G2). Note that most of the structure around the bright images is an artefact ("waffling") due to AO correction problems with the low-bandwidth wavefront sensor. The PSF of image C appears to have a structure extending down from the core of the PSF that is not seen in images A or B but could be consistent with a lensed arc. We therefore produced a pixellated model of the PSF around the images ABC to remove it and increase the dynamic range of the image. In the first iteration of this procedure an arc between images B and C was clearly visible, and we therefore re-fitted for the PSF excluding pixels that contain arc flux. The residuals of this fit are shown in the middle and right panels of Figure <ref>. § LENS MASS MODELLING The AO imaging data show that PS J0630-1201 is a lensed quasar in a `cusp' configuration, with two lensing galaxies. The presence of a fifth point source, E in Figure <ref>, is intriguing as it is located approximately where the fifth (demagnified) image would be expected to appear. To understand the nature of E as a possible fifth quasar image, we initially create a mass model using the positions of the brighter quasar images (A to D) and the two lensing galaxies, and use this model to predict the location of any additional images. We first determine the position of each point source by modelling them with Gaussian or Moffat profiles, and we fit Sersic profiles to G1 and G2. We also perform moment-based centroiding in a range of apertures, and use the spread of all of our measurements to estimate the uncertainties on the positions; these are typically ∼1 mas for the point sources and ∼10 to 20 mas for the galaxies. However, we also find that the relative positions of the quasar images change from exposure to exposure, presumably due to atmospheric fluctuations, and we therefore impose a 5 mas minimum uncertainty on each position. We simultaneously fit the PS imaging data (grizY) whilst fitting the Keck data (K^'), and our photometric and astrometric results are given in Table <ref>. We use the positions of the galaxies and point sources to constrain a lensing mass model. The two galaxies are initially modelled as singular isothermal spheres and we use the positions of the four brighter images to infer their Einstein radii using the software <cit.>. The best-fit lens model predicts a fifth image near the location of image E with a flux approximately half that of image D, comparable to the observed flux ratio. We consequently use the position of E to constrain a more realistic lens model, allowing the two galaxies to have some ellipticity, and we include an external shear. This model has five free parameters for each galaxy (two position parameters, an Einstein radius, and an ellipticity with its orientation) and two additional parameters each for the position of the source and the external shear, i.e., there are 14 parameters in total. We likewise have 14 constraints from the observed positions of the five quasar images and the two lensing galaxies. Using image plane modelling, we sample the parameter space using <cit.> and, as expected, find a best-fit model with χ^2∼ 0. Both mass components are inferred to be coincident with the light, with Einstein radii for G1 and G2 of 1.01^''±0.01^'' and 0.58^''±0.01^'', respectively. Both masses are also mildly flattened, but we find that the light and mass orientations, given in Table <ref>, are misaligned, significantly so for G2. We also note that the shear is well constrained (γ=0.14±0.01 with PA -76°) and is particularly large, though there is no nearby galaxy along the shear direction. The total magnification for the system is ∼53.
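As a rough illustration of this kind of parameterization, the sketch below sets up a two-lens-plus-shear model and samples it with the affine-invariant MCMC sampler emcee (the sampler used in the actual analysis is cited, not named, in this extraction). It is deliberately simplified relative to the fit described above: the galaxies are circular singular isothermal spheres rather than ellipsoids, a source-plane χ² (requiring all images to map back to a common source position) replaces the image-plane fit, the priors on the galaxy positions are omitted, and the image positions are placeholders rather than the measured ones. Source-plane χ² is also known to bias parameters in careful work, so this is an illustration only.

```python
import numpy as np
import emcee

# Placeholder image positions (arcsec, relative coordinates); the real
# constraints are the measured positions of images A-E and galaxies G1, G2.
theta_obs = np.array([[1.2, 1.0], [0.3, 1.5], [-0.8, 1.1],
                      [0.2, -1.3], [0.1, 0.2]])
sigma = 0.005  # 5 mas minimum positional uncertainty, as adopted above

def deflection(theta, g1, g2, b1, b2, gam1, gam2):
    """Deflection of two circular SIS lenses plus an external shear."""
    alpha = np.zeros_like(theta)
    for g, b in ((g1, b1), (g2, b2)):
        d = theta - g
        alpha += b * d / np.linalg.norm(d, axis=1, keepdims=True)
    x, y = theta[:, 0], theta[:, 1]
    alpha += np.column_stack((gam1 * x + gam2 * y, gam2 * x - gam1 * y))
    return alpha

def log_prob(p):
    g1, g2 = p[0:2], p[2:4]
    b1, b2, gam1, gam2 = p[4:8]
    if np.any(np.abs(p[:4]) > 3) or not (0 < b1 < 2 and 0 < b2 < 2
                                         and abs(gam1) < 0.5
                                         and abs(gam2) < 0.5):
        return -np.inf
    # Source-plane chi^2: all five images must map back to a common source.
    beta = theta_obs - deflection(theta_obs, g1, g2, b1, b2, gam1, gam2)
    return -0.5 * np.sum((beta - beta.mean(axis=0)) ** 2) / sigma ** 2

ndim, nwalkers = 8, 32
guess = np.array([0.9, 0.7, 0.2, -0.3, 1.0, 0.6, 0.1, 0.0])
p0 = guess + 1e-3 * np.random.randn(nwalkers, ndim)
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
sampler.run_mcmc(p0, 2000)
```

Dropping the two ellipticities reduces the 14-parameter model described above to 8 parameters here; the full fit would restore them and penalize the predicted image positions directly in the image plane.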
§ DISCUSSION AND CONCLUSIONS We have presented spectroscopic and imaging data that confirm that PS J0630-1201 is a quasar at z_q = 3.34 lensed into five images by two lensing galaxies. We are able to fit the positions of the five images well with a two-SIE lens model and recover flux ratios to within 30%, with discrepancies likely caused by microlensing and/or differential extinction and reddening, as evidenced by the strongly varying flux ratio between images B and C from optical to near-infrared wavelengths. However, we find that for both lenses the ellipticity of the mass is not consistent with the ellipticity of the galaxy light (Table <ref>) and the inferred shear is quite large. This could be the result of an additional mass component, e.g., a dark matter halo that is not coincident with either galaxy <cit.>. In that case, the weak demagnification of the fifth image might indicate that the dark matter halo is not cuspy <cit.>, although constraints from the quasar image positions alone are not sufficient to test this. Deeper imaging of the arc of the lensed host galaxy and observations at radio wavelengths, where extinction and the effects of microlensing are no longer important, will help to constrain a more complex model for the mass distribution. The relatively bright fifth image of PS J0630-1201 also presents the possibility of obtaining four new time delay measurements, for a total of ten time delays. Based upon our current best lens model, these delays should range between 1 and 245 days, assuming that the lens redshifts are z∼1 (Table <ref>). Because of the overall compactness of the system and the presence of the two lensing galaxies, it would be difficult to obtain time delays for the fifth image with conventional seeing-limited monitoring programmes. However, if such a campaign observed a sudden brightening (or dimming) event in one of the brighter images, dedicated monitoring with a high-resolution facility <cit.> could yield an observation of the delayed brightening of the fifth image. Several other lensed quasars have been observed to have bright additional images, and these systems typically also have multiple lensing galaxies <cit.>. The cluster lens SDSS J1004+4112 <cit.> is a quad with an observed `central' image, but the complexity of the mass distribution makes it very difficult to estimate time delays <cit.>. PMN J0134-0931 is a radio-loud quasar that is lensed into five images by two galaxies <cit.>. However, the image separations are very small, so measuring time delays – expected to range from an hour to two weeks <cit.> – will be extremely difficult. B1359+154 <cit.> and SDSS J2222+2745 <cit.> are six-image lens systems, and in both cases the lens is a compact group of three galaxies.
These lenses have intrinsically more complex mass distributions, but time delays may still be informative for cosmography. However, most of the independent time delays are expected to be less than a day for B1359+154 <cit.> and quite long (400 to 700 days) for SDSS J2222+2745 <cit.>. PS J0630-1201 therefore appears to be the most promising lens for measuring additional independent time delays. § ACKNOWLEDGEMENTS We are grateful to the STRIDES collaboration for many useful discussions about quasar lens finding. FO, CAL, RGM, and SLR acknowledge the support of the UK Science and Technology Facilities Council (STFC), and MWA also acknowledges STFC support in the form of an Ernest Rutherford Fellowship. FO acknowledges the current support of the Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq - grant number 150151/2017-9) and was supported jointly by CAPES (the Science without Borders programme) and the Cambridge Commonwealth Trust during part of this research. CDF and CER acknowledge support from the US National Science Foundation through grant number AST-1312329. The Pan-STARRS1 Surveys (PS1) and the PS1 public science archive have been made possible through contributions by the Institute for Astronomy, the University of Hawaii, the Pan-STARRS Project Office, the Max-Planck Society and its participating institutes, the Max Planck Institute for Astronomy, Heidelberg and the Max Planck Institute for Extraterrestrial Physics, Garching, The Johns Hopkins University, Durham University, the University of Edinburgh, the Queen's University Belfast, the Harvard-Smithsonian Center for Astrophysics, the Las Cumbres Observatory Global Telescope Network Incorporated, the National Central University of Taiwan, the Space Telescope Science Institute, the National Aeronautics and Space Administration under Grant No. NNX08AR22G issued through the Planetary Science Division of the NASA Science Mission Directorate, the National Science Foundation Grant No. AST-1238877, the University of Maryland, Eotvos Lorand University (ELTE), the Los Alamos National Laboratory, and the Gordon and Betty Moore Foundation.
The authors wish to recognize and acknowledge the very significant cultural role and reverence that the summit of Mauna Kea has always had within the indigenous Hawaiian community. We are most fortunate to have the opportunity to conduct observations from this mountain. This research made use of Astropy, a community-developed core Python package for Astronomy (Astropy Collaboration, 2013).
http://arxiv.org/abs/1709.08975v2
{ "authors": [ "Fernanda Ostrovski", "Cameron A. Lemon", "Matthew W. Auger", "Richard G. McMahon", "Christopher D. Fassnacht", "Geoff C. -F. Chen", "Andrew J. Connolly", "Sergey E. Koposov", "Estelle Pons", "Sophie L. Reed", "Cristian E. Rusu" ], "categories": [ "astro-ph.GA" ], "primary_category": "astro-ph.GA", "published": "20170926123802", "title": "The Discovery of a Five-Image Lensed Quasar at z = 3.34 using PanSTARRS1 and Gaia" }
Khatereh Azizi^a,b, Petri Hirvonen^b, Zheyong Fan^b (corresponding author, e-mail: [email protected]), Ari Harju^b, Ken R. Elder^c, Tapio Ala-Nissila^b,d and S. Mehdi Vaez Allaei^a,e (corresponding author, e-mail: [email protected]) ^a Department of Physics, University of Tehran, Tehran 14395-547, Iran ^b COMP Centre of Excellence, Department of Applied Physics, Aalto University School of Science, P.O. Box 11000, FIN-00076 Aalto, Espoo, Finland ^c Department of Physics, Oakland University, Rochester, Michigan 48309, USA ^d Departments of Mathematical Sciences and Physics, Loughborough University, Loughborough, Leicestershire LE11 3TU, UK ^e School of Physics, Institute for Research in Fundamental Sciences (IPM), Tehran 19395-5531, Iran We study heat transport across individual grain boundaries in suspended monolayer graphene using extensive classical molecular dynamics (MD) simulations. We construct bicrystalline graphene samples containing grain boundaries with symmetric tilt angles using the two-dimensional phase field crystal method and then relax the samples with MD. The corresponding Kapitza resistances are then computed using nonequilibrium MD simulations. We find that the Kapitza resistance depends strongly on the tilt angle and shows a clear correlation with the average density of defects in a given grain boundary, but is not strongly correlated with the grain boundary line tension. We also show that quantum effects are significant in the quantitative determination of the Kapitza resistance by applying the mode-by-mode quantum correction to the classical MD data. The corrected data are in good agreement with quantum mechanical Landauer-Büttiker calculations. Grain boundary; Kapitza resistance; Graphene; Molecular dynamics; Phase field crystal § INTRODUCTION Graphene <cit.>, the famous two-dimensional allotrope of carbon, has been demonstrated to have extraordinary electronic <cit.>, mechanical <cit.>, and thermal <cit.> properties in its pristine form. However, large-scale graphene films, which are needed for industrial applications, are typically grown by chemical vapor deposition <cit.> and are polycrystalline in nature <cit.>, consisting of domains of pristine graphene with varying orientations separated by grain boundaries (GBs) <cit.>. They play a significant or even dominant role in influencing many properties of graphene <cit.>. One of the most striking properties of pristine graphene is its extremely high heat conductivity, which has been shown to be in excess of 5000 W/mK <cit.>. Grain boundaries in graphene act as line defects or one-dimensional interfaces which lead to a strong reduction of the heat conductivity in multigrain samples <cit.>. The influence of GBs can be quantified by the Kapitza or thermal boundary resistance R. The Kapitza resistance of graphene grain boundaries has been previously computed using molecular dynamics (MD) <cit.> and Landauer-Büttiker <cit.> methods, and has also been measured experimentally <cit.>. However, these works have only considered a few separate tilt angles, and a systematic investigation of the dependence of the Kapitza resistance on the tilt angle between any two pristine grains is still lacking.
The relevant questions here concern both the magnitude of R for different tilt angles and possible correlations between the structure or line tension of the GBs and the corresponding value of R. Modelling realistic graphene GBs has remained a challenge due to the multiple length and time scales involved. Recently, an efficient multiscale approach <cit.> for modelling polycrystalline graphene samples was developed based on phase field crystal (PFC) models <cit.>. The PFC models are a family of continuum methods for modelling the atomic-level structure and energetics of crystals, and their evolution at diffusive time scales (as compared to the vibrational time scales in MD). The PFC models retain full information about the atomic structure and elasticity of the solid <cit.>. It has been shown <cit.> that using the PFC approach in two-dimensional space one can obtain large, realistic and locally relaxed microstructures that can be mapped to atomic coordinates for further relaxation in three-dimensional space with the usual atomistic simulation methods. In this work, we employ the multiscale PFC strategy of Ref. <cit.> to generate large samples of tilted, bicrystalline graphene with a well-defined GB between the two grains. These samples are then further relaxed with MD at T = 300 K. A heat current is generated across the bicrystals using nonequilibrium MD (NEMD) simulations, and the Kapitza resistance is computed from the temperature drop across the GB. We map out the values of R(θ) for a range of different tilt angles θ and demonstrate how R correlates with the structure of the GBs. Finally, we demonstrate that quantum corrections need to be included in R to obtain quantitative agreement with experiments and lattice dynamical calculations. § MODELS AND METHODS §.§ PFC models PFC approaches typically employ a classical density field ψ(r) to describe the system. The ground state of ψ is governed by a free energy functional F[ψ(r)] that is minimized either by a periodic or a constant ψ, corresponding to crystalline and liquid states, respectively. We use the standard PFC model F = ∫ dr (1/2 ψ[ϵ+(q^2+∇^2)^2]ψ + 1/3 τψ^3 + 1/4 ψ^4), where ϵ and τ are phenomenological model parameters related to temperature and average density, respectively. The component (q^2+∇^2)^2 penalizes deviations from the length scale set by the wave number q, giving rise to a spatially oscillating ψ and to elastic behaviour <cit.>. The crystal structure in the ground state is dictated by the formulation of F and the average density of ψ, and for certain parameter values the ground state of ψ displays a honeycomb lattice of density maxima, as appropriate for graphene <cit.>. The PFC calculations are initialized with symmetrically tilted bicrystals in a periodic, two-dimensional computational unit cell. The initial guess for the crystalline grains is obtained by using the one-mode approximation <cit.> ψ(x,y) = cos(qx)cos(qy/√(3)) - 1/2 cos(2qy/√(3)), and by rotating it alternatingly by ±θ. The tilt angle between two adjacent grains is θ-(-θ) = 2θ, which ranges from 2θ = 0° to 2θ = 60° (see Fig. <ref> for examples). We consider a subset of the tilt angles investigated in Ref. <cit.>, with the exact values listed in Table <ref>.
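A minimal sketch of this initialization is given below: the one-mode approximation is evaluated in two coordinate frames rotated by ±θ, and the two patterns are stitched into a periodic cell containing two grain boundaries. The grid dimensions and spacing are illustrative only; the unit-cell commensurability and interface treatment described next, the average-density offset, and the subsequent relaxation of F are all glossed over here.

```python
import numpy as np

def one_mode(x, y, q=1.0):
    """One-mode approximation for the PFC density field."""
    return (np.cos(q * x) * np.cos(q * y / np.sqrt(3.0))
            - 0.5 * np.cos(2.0 * q * y / np.sqrt(3.0)))

def rotated_grain(X, Y, theta, q=1.0):
    """Evaluate the one-mode pattern in a frame rotated by theta."""
    c, s = np.cos(theta), np.sin(theta)
    return one_mode(c * X + s * Y, -s * X + c * Y, q)

# Computational grid (illustrative values; in practice the cell must be
# made commensurate with both rotated lattices).
Lx, Ly, dx = 400.0, 200.0, 0.7
X, Y = np.meshgrid(np.arange(0.0, Lx, dx), np.arange(0.0, Ly, dx))

# Middle half of the cell rotated by +theta, outer half by -theta, giving
# a tilt angle of 2*theta across the two boundaries of the periodic cell.
theta = np.deg2rad(9.43 / 2.0)
psi = np.where(np.abs(X - Lx / 2.0) < Lx / 4.0,
               rotated_grain(X, Y, +theta),
               rotated_grain(X, Y, -theta))
```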
The rotated grains and the unit cell size are matched together as follows: if just one of the rotated grains filled the whole unit cell, it would be perfectly continuous at the periodic edges. Along both interfaces, narrow strips a few atomic spacings wide are set to the average density – corresponding to a disordered state – to give the grain boundaries some additional freedom to find their lowest-energy configuration. We assume non-conserved dynamics to relax the systems, in analogy to chemical vapour deposition <cit.> – the number of atoms in the monolayer can vary as if due to exchange with a vapor phase. In addition, the unit cell dimensions are allowed to vary to minimize strain. Further details of the PFC calculations can be found in Ref. <cit.>. The relaxed density field is mapped to a discrete set of atomic coordinates suited for the initialization of MD simulations <cit.>. §.§ NEMD simulations We use the NEMD method as implemented in the GPUMD (graphics processing units molecular dynamics) code <cit.> to calculate the Kapitza resistance, using the Tersoff <cit.> potential with optimized parameters <cit.> for graphene. The initial structures obtained with the PFC method are rescaled by an appropriate factor to have zero in-plane stress at 300 K in the MD simulations with the optimized Tersoff potential <cit.>. In the NEMD simulations, periodic boundary conditions are applied in the transverse direction, whereas fixed boundary conditions are applied in the transport direction. We first equilibrate the system at 1 K for 1 ns, then increase the temperature from 1 K to 300 K during 1 ns, and then equilibrate the system at 300 K for 1 ns. After these steps, we apply a Nosé-Hoover chain of thermostats <cit.> to the heat source and sink, chosen as two blocks of atoms around the two ends of the system, as schematically shown in Fig. <ref>. The temperatures of the heat source and sink are maintained at 310 K and 290 K, respectively. We have checked that a steady state is well established within 5 ns. In view of this, we calculate the temperature profile T(x) of the system and the energy exchange rate Q between the system and the thermostats using data sampled during another 5 ns. The velocity-Verlet integration scheme <cit.> with a time step of 1 fs is used for all the calculations. Three independent calculations are performed for each system, and the error estimates reported in Table <ref> correspond to the standard error of the independent results. In steady state, apart from the nonlinear regions around the heat source and the sink that are intrinsic to the method, a linear temperature profile is established on each side of the GB, but with an inherent discontinuity (temperature jump) at the GB. An example of this for the system with 2θ = 9.43° is shown in Fig. <ref>. The Kapitza resistance R is defined as the ratio of the temperature jump ΔT to the heat flux J across the grain boundary: R = ΔT/J, where J can be calculated from the energy exchange rate Q (between the system and the thermostats) and the cross-sectional area S (the graphene thickness is taken as 0.335 nm in our calculations), i.e. J = Q/S. § RESULTS AND DISCUSSION It is well known <cit.> that the calculated Kapitza resistance depends on the sample length in NEMD simulations. Figure <ref> shows the calculated Kapitza resistance R in the 2θ = 9.43° case as a function of the sample length L_x. Using fixed boundary conditions as described above, R saturates at around L_x = 400 nm. On the other hand, using periodic boundaries as described in Ref. <cit.>, R converges more slowly. To this end, we have used fixed boundary conditions and a sample length of 400 nm for all the systems.
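The extraction of R from a NEMD run can be summarized by the short sketch below: linear fits to the temperature profile on the two sides of the grain boundary (excluding a buffer region around the boundary; the thermostatted blocks are assumed to have been trimmed already) are extrapolated to the boundary position to obtain ΔT, and R = ΔT/(Q/S). The synthetic profile and the values of Q and S in the demonstration are placeholders, chosen only to exercise the function.

```python
import numpy as np

def kapitza_resistance(x, T, x_gb, Q, S, exclude=20.0):
    """Kapitza resistance from an NEMD temperature profile T(x).

    x, T : position (any length unit) and temperature (K) samples
    x_gb : grain boundary position; `exclude` sets the buffer around it
    Q    : energy exchange rate with the thermostats (W)
    S    : cross-sectional area (m^2); returns R in m^2 K / W
    """
    left = x < x_gb - exclude
    right = x > x_gb + exclude
    p_left = np.polyfit(x[left], T[left], 1)
    p_right = np.polyfit(x[right], T[right], 1)
    dT = np.polyval(p_left, x_gb) - np.polyval(p_right, x_gb)
    J = Q / S  # heat flux (W / m^2)
    return dT / J

# Synthetic illustration: linear profiles with a 2 K jump at x_gb = 200 nm.
x = np.linspace(0.0, 400.0, 200)                          # nm
T = np.where(x < 200.0, 303.0, 301.0) - 0.005 * (x - 200.0)  # K
S = 0.335e-9 * 100e-9   # thickness x (assumed) width, m^2
Q = 1.0e-6              # W (placeholder)
print(kapitza_resistance(x, T, x_gb=200.0, Q=Q, S=S))     # m^2 K / W
```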
The calculated temperature jump ΔT, heat flux J, and Kapitza resistance R for the 13 bicrystalline systems are listed in Table <ref>. The Kapitza resistance calculated from the heat flux does not contain any information on the contributions from individual phonon modes. Methods for the spectral decomposition of both the heat current (flux) <cit.> and the temperature <cit.> within the NEMD framework have been developed recently. Here, we use the spectral decomposition formalism as described in Ref. <cit.> to calculate the spectral conductance g(ω) of the 2θ = 9.43° system. In this method, one first calculates the following nonequilibrium heat current correlation function (t is the correlation time): K(t) = ∑_i∈A ∑_j∈B ⟨ ∂U_i(0)/∂r⃗_ij · v⃗_j(t) - ∂U_j(0)/∂r⃗_ji · v⃗_i(t) ⟩, where U_i and v⃗_i are the potential energy and velocity of particle i, respectively, r⃗_ij = r⃗_j - r⃗_i (r⃗_i is the position of particle i), and K(t=0) measures the heat current flowing from a block A to an adjacent block B arranged along the transport direction. Then, one performs a Fourier transform to get the spectral conductance: g(ω) = 2/(SΔT) ∫_-∞^+∞ dt e^{iωt} K(t). The spectral conductance is normalized as G = ∫_0^∞ (dω/2π) g(ω), where G is the total Kapitza conductance (also called the thermal boundary conductance), which is the inverse of the Kapitza resistance, G = 1/R. Figure <ref>(a) shows the calculated correlation function K(t), which resembles the velocity autocorrelation function, whose Fourier transform is the phonon density of states <cit.>. Indeed, thermal conductance in the quasi-ballistic regime is intimately related to the phonon density of states. The corresponding spectral conductance g(ω) is shown as the solid line in Fig. <ref>(b). The total thermal boundary conductance is G ≈ 33 GW/m^2/K, corresponding to a Kapitza resistance of R ≈ 0.03 m^2K/GW. In view of the high Debye temperature (around 2000 K) of pristine graphene <cit.>, we expect it to be necessary to correct the classical results to properly account for possible quantum effects. While using classical statistics can lead to <cit.> an underestimate of the scattering time of the low-frequency phonons as well as an overestimate of the heat capacity of the high-frequency phonons for thermal transport in the diffusive regime, only the second effect matters here in the quasi-ballistic regime. Therefore, one can correct the results by multiplying the classical spectral conductance by the ratio of the quantum heat capacity to the classical one: x^2 e^x/(e^x-1)^2, where x = ħω/k_BT, with ħ, k_B and T being the Planck constant, the Boltzmann constant, and the system temperature, respectively. This factor is unity in the low-frequency (high-temperature) limit and zero in the high-frequency (low-temperature) limit. Applying this mode-to-mode quantum correction to the classical spectral conductance gives the quantum spectral conductance represented by the dashed line in Fig. <ref>(b). The integral of the quantum-corrected spectral conductance is reduced by a factor of about 2.3 compared to the classical one. Figure <ref>(a) shows the calculated Kapitza resistances for all 13 systems as a function of the tilt angle, both before (red squares) and after (blue circles) applying the quantum correction. It clearly shows that the Kapitza resistance depends strongly on the tilt angle, varying by more than one order of magnitude.
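The mode-to-mode correction described above amounts to a simple reweighting of the classical spectral conductance, sketched below. The flat toy spectrum is a placeholder for the actual g(ω); applied to the real spectral conductance of the 2θ = 9.43° system, this reweighting yields the reduction factor of about 2.3 quoted above, and even the flat spectrum cut off near 50 THz gives a ratio of the same order at 300 K.

```python
import numpy as np

hbar = 1.0545718e-34  # J s
kB = 1.380649e-23     # J / K

def quantum_correct(omega, g_classical, T=300.0):
    """Multiply the classical spectral conductance by the mode-to-mode
    correction factor x^2 e^x / (e^x - 1)^2, with x = hbar*omega/(kB*T)."""
    x = hbar * omega / (kB * T)
    with np.errstate(divide="ignore", invalid="ignore"):
        factor = x ** 2 * np.exp(x) / np.expm1(x) ** 2
    return g_classical * np.where(x < 1e-12, 1.0, factor)

def total_conductance(omega, g):
    """G = (1/2pi) * integral of g(omega) d omega."""
    return np.trapz(g, omega) / (2.0 * np.pi)

# Toy flat classical spectrum up to ~50 THz (placeholder for the real g).
omega = np.linspace(0.0, 2.0 * np.pi * 50e12, 2000)  # rad/s
g_cl = np.ones_like(omega)                           # arbitrary units
ratio = total_conductance(omega, g_cl) / total_conductance(
    omega, quantum_correct(omega, g_cl))
print(ratio)  # classical-to-quantum conductance ratio
```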
The Kapitza resistance increases monotonically from both sides towards the middle angle of 2θ∼ 30°, except for one “anomalous” system with 2θ=21.79°, which has a smaller R than the system with 2θ=18.73°. One intuitive explanation is that this system is relatively flat compared to the other systems, as can be seen from Figs. <ref>(e)-(h). Similar “anomalous” heat transport has been reported in Ref. <cit.> for the same grain boundary tilt angle. The largest Kapitza resistances, occurring around the intermediate angles and amounting to about 0.12 m^2 K/GW after quantum corrections, are more than an order of magnitude smaller than those of grain boundaries in silicon nanowires <cit.>.

A more reasonable comparison between different materials is in terms of the Kapitza length L_K <cit.>, defined as the length of the corresponding pristine material at which the bulk thermal resistance due to phonon-phonon scattering equals the Kapitza resistance. Mathematically, we have L_K=κ R, where κ is the thermal conductivity of the bulk material. We calculate L_K by assuming a value of κ=5200 W/mK for pristine graphene according to very recent experiments <cit.> and list the values in Table <ref>. The largest Kapitza lengths (corresponding to the largest Kapitza resistances) are about 300 nm before quantum corrections and about 700 nm after them. These values are actually larger than those for silicon nanowires. Therefore, the effect of grain boundaries on heat transport in graphene is not small, even though the Kapitza resistances themselves are relatively small.

To facilitate comparison with previous works, we also show the Kapitza conductances in Fig. <ref>(b). The Kapitza conductances in our systems range from about 17 GW/m^2/K to more than 500 GW/m^2/K before applying the quantum corrections. Bagri et al. <cit.> reported Kapitza conductance values (obtained by NEMD simulations with periodic boundary conditions in the transport direction) ranging from 15 GW/m^2/K to 45 GW/m^2/K. The lower limit of 15 GW/m^2/K does not conflict with our data, as this value was reported for a system with a grain size of 25 nm, where the data cannot have converged yet. On the other hand, Cao and Qu <cit.> reported saturated Kapitza conductance values (obtained by NEMD simulations with fixed boundary conditions in the transport direction) in the range of 19-47 GW/m^2/K, which fall well within the values that we obtained. Last, we note that quantum mechanical Landauer-Büttiker calculations by Serov et al. <cit.> predicted the Kapitza conductance to be about 8 GW/m^2/K for graphene grain boundaries comparable to those in our samples with intermediate tilt angles (2θ∼ 30°). This is much smaller than the classical Kapitza conductances (about 20 GW/m^2/K), but agrees well with our quantum-corrected values. This comparison justifies the mode-to-mode quantum correction we applied to the classical data and resolves the discrepancy between the results from classical NEMD simulations and quantum mechanical Landauer-Büttiker calculations.

The last remaining issue concerns the possible correlation of the values of R(θ) with the energetics and structure of the GBs. The grain boundary line tension and the defect density are closely related to the tilt angle. The line tension γ is defined in the thermodynamic limit as γ=lim_{L_y →∞} Δ E/L_y, where Δ E is the formation energy for a GB of length L_y. The defect density is defined as ρ=N_p-h/L_y, where N_p-h is the number of pentagon-heptagon pairs in the grain boundary.
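As a quick worked illustration of the Kapitza length defined above, the following sketch converts Kapitza resistances to L_K = κR assuming κ = 5200 W/mK; the two R values are round numbers representative of the range discussed in the text, not entries from Table <ref>.

```python
# Kapitza length L_K = kappa * R: the length of pristine graphene whose
# intrinsic (phonon-phonon) thermal resistance equals the GB resistance.
kappa = 5200.0  # W/(m K), assumed bulk thermal conductivity of graphene

def kapitza_length_nm(R_m2K_per_GW: float) -> float:
    """Convert a Kapitza resistance in m^2 K/GW to a Kapitza length in nm."""
    R_SI = R_m2K_per_GW * 1e-9   # m^2 K/W
    return kappa * R_SI * 1e9    # metres -> nanometres

# Illustrative values spanning the range discussed in the text
# (classical ~0.03 m^2 K/GW at small angles; ~0.12 m^2 K/GW after
# quantum correction near the intermediate angles).
for R in (0.03, 0.12):
    print(f"R = {R} m^2 K/GW  ->  L_K = {kapitza_length_nm(R):.0f} nm")
```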
The calculated γ and ρ values for all the tilt angles are listed in Table <ref> and plotted in Figs. <ref>(a)-(b). In Figs. <ref>(c)-(d), we plot the Kapitza resistance against γ and ρ, respectively. At small and large tilt angles, where the defect density is relatively small, there is a clear linear dependence of R on both γ and ρ. However, at intermediate tilt angles (2θ≈ 30°), where the defect density is relatively large, the linear dependences become less clear, especially between R and γ, which may indicate increased interactions between the defects. Overall, there is a stronger correlation between the Kapitza resistance and the defect density, which is consistent with the idea of enhanced phonon scattering with increasing ρ.

§ SUMMARY AND CONCLUSIONS

In summary, we have employed an efficient multiscale modeling strategy based on the PFC approach and atomistic MD simulations to systematically evaluate the Kapitza resistances in graphene grain boundaries for a wide range of tilt angles between adjacent grains. Strong correlations between the Kapitza resistance and the tilt angle, the grain boundary line tension, and the defect density are identified. Quantum effects, which have been ignored in previous studies, are found to be significant. By applying a mode-to-mode quantum correction method based on spectral decomposition, we have demonstrated that good agreement between the classical molecular dynamics data and the quantum mechanical Landauer-Büttiker method can be obtained.

We emphasize that we have only considered suspended systems in this work. In a recent experimental work by Yasaei et al. <cit.>, Kapitza conductances (inverse of the Kapitza resistance) were measured for a few supported (on a SiN substrate) samples containing grain boundaries with different tilt angles. The Kapitza conductances reported in that work are about one order of magnitude smaller than our quantum-corrected values. This large discrepancy indicates that certain substrates may strongly affect heat transport across graphene grain boundaries, and more work is needed to clarify this.

§ ACKNOWLEDGEMENTS

This research has been supported by the Academy of Finland through its Centres of Excellence Program (Project No. 251748). We acknowledge the computational resources provided by the Aalto Science-IT project and Finland's IT Center for Science (CSC). K.A. acknowledges financial support from the Ministry of Science and Technology of the Islamic Republic of Iran. P.H. acknowledges financial support from the Foundation for Aalto University Science and Technology, and from the Vilho, Yrjö and Kalle Väisälä Foundation of the Finnish Academy of Science and Letters. Z.F. acknowledges the support of the National Natural Science Foundation of China (Grant No. 11404033). K.R.E. acknowledges financial support from the National Science Foundation under Grant No. DMR-1506634.
http://arxiv.org/abs/1709.09529v1
{ "authors": [ "Khatereh Azizi", "Petri Hirvonen", "Zheyong Fan", "Ari Harju", "Ken R Elder", "Tapio Ala-Nissila", "S Mehdi Vaez Allaei" ], "categories": [ "cond-mat.mes-hall", "cond-mat.mtrl-sci" ], "primary_category": "cond-mat.mes-hall", "published": "20170927135956", "title": "Kapitza thermal resistance across individual grain boundaries in graphene" }
Veto Interval Graphs and Variations

Breeann Flesch, Computer Science Division, Western Oregon University, 345 North Monmouth Ave., Monmouth, OR 97361
Jessica Kawana^*, Joshua D. Laison^*, Mathematics Department, Willamette University, 900 State St., Salem, OR 97301
Dana Lapides^*, Earth and Planetary Science, University of California, Berkeley, 307 McCone Hall, Berkeley, CA 94720-4767
Stephanie Partlow^*, Mathematics Department, Woodburn Wellness, Business And Sports School, 1785 N Front St, Woodburn, OR 97071
Gregory J. Puleo, Department of Mathematics and Statistics, College of Sciences and Mathematics, Auburn University, 221 Parker Hall, Auburn, Alabama 36849
^*Funded by NSF Grant DMS 1157105

Received: Jul 6, 2017 / Accepted: Sep 11, 2017

We introduce a variation of interval graphs, called veto interval (VI) graphs. A VI graph is represented by a set of closed intervals, each containing a point called a veto mark. The edge ab is in the graph if the intervals corresponding to the vertices a and b intersect, and neither contains the veto mark of the other. We find families of graphs which are VI graphs, and prove results towards characterizing the maximum chromatic number of a VI graph. We define and prove similar results about several related graph families, including unit VI graphs, midpoint unit VI (MUVI) graphs, and single and double approval graphs. We also highlight a relationship between approval graphs and a family of tolerance graphs.

Keywords: interval graph, veto interval graph, MUVI graph, approval graph, tolerance graph, bitolerance graph
Mathematics Subject Classification (2010): 05C62, 05C75, 05C15, 05C05, 05C20

§ INTRODUCTION

An interval representation of a graph G is a set of intervals S on the real line and a bijection from the vertices of G to the intervals in S, such that for any two vertices a and b in G, a and b are adjacent if and only if their corresponding intervals intersect. A graph is an interval graph if it has an interval representation. Interval graphs were introduced by Hajós in <cit.>, and were characterized by the absence of asteroidal triples and induced cycles larger than three by Lekkerkerker and Boland in 1962 <cit.>. An asteroidal triple in G is a set A of three vertices such that for any two vertices in A there is a path within G between them that avoids all neighbors of the third. Interval graphs have been extensively studied and characterized, and fast algorithms for finding the clique number, chromatic number, and other graph parameters have been developed <cit.>. Furthermore, many variations of interval graphs, including interval p-graphs, interval digraphs, circular arc graphs, and probe interval graphs, have been introduced and investigated <cit.>.

We define the following variations of interval graphs. A veto interval I(a) on the real line, corresponding to a vertex a, is a closed interval
[a_l,a_r] together with a veto mark a_v with a_l<a_v<a_r.The numbers a_l and a_r are the left endpoint and right endpoint of I(a), respectively.We denote a veto interval as an ordered triple I(a)=(a_l,a_v,a_r).A veto interval representation of a graph G is a set of veto intervals S and a bijection from the vertices of G to the veto intervals in S, such that for any two vertices a and b in G, a and b are adjacent if and only if either a_v<b_l<a_r<b_v or b_v<a_l<b_r<a_v.In other words, a and b are adjacent if and only if their corresponding intervals intersect and neither contains the veto mark of the other.In this case we say that I(a) and I(b) are adjacent.Ifb_v<a_l<b_r<a_v we say that b intersects a on the left, and if a_v<b_l<a_r<b_v we say that b intersects a on the right.A graph G is a veto interval (VI) graph if G has a veto interval representation.If the intervals in S are all the same length, then S is a unit veto interval representation, and the corresponding graph G is a unit veto interval (UVI) graph.If no interval in S properly contains another, then S is a proper veto interval representation, and the corresponding graph G is a proper veto interval (PVI) graph.If every interval in S has its veto mark at its midpoint, then S is a midpoint veto interval representation, and the corresponding graph G is a midpoint veto interval (MVI) graph.We similarly abbreviate midpoint unit veto interval graphs and midpoint proper veto interval graphs as MUVI graphs and MPVI graphs, respectively.In Section <ref>, we show that a graph is a PVI graph if and only if it's an UVI graph, but not all MPVI graphs are MUVI graphs.If we instead put a directed edge from a to b if a intersects b on the left, we say that G is a directed veto interval graph.Note that a directed veto interval graph is an orientation of the veto interval graph with the same veto interval representation.Also note that different veto interval representations may yield the same veto interval graph but different directed veto interval graphs.We use facts about directed veto interval graphs to prove results about the underlying veto interval graphs. If G is a directed veto interval graph with directed path a_1 a_2 … a_k, k ≥ 3, then a_1 and a_k are not adjacent.Since I(a_1) intersects I(a_2) on the left, a_1_r<a_2_v<a_3_l.Likewise a_1_r<a_k_l, so I(a_1) and I(a_k) are disjoint.Veto interval graphs are triangle-free.In any orientation of a triangle, there is a directed path of length 2.By Lemma <ref>, the graph cannot contain the third edge.The complete graph K_n is not a veto interval graph for n ≥ 3.Let a and b be vertices of a veto interval graph G, and I(a) and I(b) be their veto intervals in a veto interval representation of G. If I(a) is contained in I(b), then a and b are not adjacent.Assume I(a) is contained in I(b). Then, b_l ≤ a_l<a_v<a_r ≤ b_r. Thus by definition, a and b are not adjacent. Note that every induced subgraph of a VI graph is also a VI graph, since given a VI representation R of a graph G with vertex v, R-I(v) is a VI representation of G-v.We say that a marked point of a veto interval representation S is a left endpoint, veto mark, or right endpoint of some veto interval in S. 
If a graph G has a veto interval representation S, then G has a VI-representation T in which all marked points are distinct. Furthermore, if S is unit, proper, and/or midpoint, then such a VI-representation T exists that is unit, proper, and/or midpoint also.

Let G be a veto interval graph and S be a veto interval representation of G. Sort the set of marked points in S in increasing order, x_1 through x_k. Suppose x_i is the smallest x-value shared by multiple marked points in S. We will construct a new VI-representation S' of G. Let ε be a distance smaller than the difference between any pair of distinct marked points in S.

First, if there is a veto interval with right endpoint a_r at x_i, we move I(a) to the right by ε to make S'. If another interval I(b) has a left endpoint at x_i, then I(a) and I(b) are adjacent in S and in S'. If another interval I(b) has a veto mark or right endpoint at x_i, then I(a) and I(b) are not adjacent in S and in S'. An example of these intervals is shown on the left in Figure <ref>. Note that no other interval can share marked points with a_l or a_v, by our choice of x_i as the smallest x-value shared by multiple marked points in S.

Now suppose there are no intervals with right endpoint at x_i and there is a veto interval with veto mark a_v at x_i. Again we move I(a) to the right by ε to make S'. If another interval I(b) has a left endpoint or veto mark at x_i, then I(a) and I(b) are not adjacent in S and in S'. If another interval I(b) has a left endpoint b_l=a_r, then I(a) and I(b) are adjacent in S and in S'. If another interval I(b) has a veto mark b_v=a_r, then I(a) and I(b) are not adjacent in S and in S'.

Now suppose there are no intervals with right endpoint or veto mark at x_i and there is a veto interval with left endpoint a_l at x_i. In this case we move I(a), and all intervals I(c) for which c_r=a_v and c_v<a_l, to the right by ε. If another interval I(b) has a left endpoint at x_i, then I(a) and I(b) are not adjacent in S and in S'. Similarly in the cases b_l=a_v, b_v=a_v, b_r=a_v, b_l=a_r, b_v=a_r, and b_r=a_r, I(a) and I(b) are either adjacent in both S and S' or not adjacent in both S and S'. Examples of these intervals are shown on the right in Figure <ref>.

We iterate this process at each shared marked point, sweeping from left to right in the representation, until all marked points are distinct. This is the representation T. Since the lengths of all intervals in T are the same as they were in S, S is a unit VI-representation if and only if T is. Similarly, veto marks are at the midpoints of intervals in T if and only if they are in S, and since there are no pairs of marked points p_1 and p_2 for which p_1<p_2 in S and p_2<p_1 in T, S is a proper VI-representation if and only if T is.

By Lemma <ref>, in what follows we may assume that every veto interval representation has distinct marked points. Note also that given a veto interval representation R with distinct marked points, its corresponding veto interval graph is completely determined by the ordering of the 3n marked points in R.
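Since the graph is determined by the ordering of the marked points, a VI graph can be built from a representation by a direct pairwise check. A small sketch using the vi_adjacent test above; the four-interval representation is a hypothetical example:

```python
from itertools import combinations

def vi_graph(rep):
    """Build the VI graph of a representation given as a dict mapping
    vertex names to (l, v, r) triples; returns the edge set."""
    edges = set()
    for a, b in combinations(rep, 2):
        if vi_adjacent(rep[a], rep[b]):
            edges.add(frozenset((a, b)))
    return edges

# A hypothetical 4-vertex representation: a path u - v - w plus an
# isolated vertex z whose interval is contained in I(w).
rep = {
    "u": (0, 1, 3),
    "v": (2, 4, 5),
    "w": (4.5, 6, 8),
    "z": (5.5, 6.5, 7),
}
print(vi_graph(rep))  # expect exactly the edges {u,v} and {v,w}
```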
§ FAMILIES OF VETO INTERVAL GRAPHS

The following families of graphs are veto interval graphs:
* Complete bipartite graphs K_m,n, for m ≥ 1 and n ≥ 1.
* Cycles C_n, for n ≥ 4.
* Trees.

* The set of intervals with m copies of the veto interval (0,2,4) and n copies of the interval (3,5,7) is a veto interval representation of K_m,n.
* Theorem <ref> shows that all cycles C_n with n>3 are MUVI graphs, and hence are veto interval graphs.
* We will prove this by induction. Let T be a tree with k vertices, k>1. Let a be a leaf of T and v be its neighbor. By induction, T-a has a veto interval representation R. By Lemma <ref> we may assume that all marked points in R are distinct. We will add I(a) to R adjacent to I(v) to obtain a veto interval representation of T. Let I(x) be the interval with marked point x_m closest to v_r on the left, and I(w) be the interval with marked point w_m closest to v_r on the right. We add I(a) such that x_m<a_l<v_r<a_v<a_r<w_m, as shown in Figure <ref>. Note that I(a) and I(v) are adjacent. No other interval has a marked point contained in I(a), so every other interval either does not overlap I(a) or contains I(a). In either case it is not adjacent to I(a), the latter by Lemma <ref>. So a is adjacent to v and no other vertex in T, and we have obtained a veto interval representation of T.

Recall that Lekkerkerker and Boland proved that both large induced cycles and asteroidal triples, some of which are trees, are forbidden induced subgraphs of interval graphs <cit.>. However, by Proposition <ref> the same cannot be said of veto interval graphs. We now construct a family of graphs G_k for positive integers k, and prove that G_10 is not a VI graph. Note that G_10 is bipartite, and hence a triangle-free graph which is not a VI graph.

Let G_k be the graph with vertex set A={ a_1, a_2, …, a_k}, a vertex v adjacent to every vertex in A, and an additional set of vertices B={b_ij | 1 ≤ i < j ≤ k }, with b_ij adjacent to the two vertices a_i and a_j in A. Note that G_k is a bipartite graph with 1+k+k(k-1)/2 vertices.

In any veto interval representation of G_k, veto intervals corresponding to the vertices in A intersect I(v) on the left or on the right. In what follows, suppose veto intervals I(a_1), …, I(a_p) intersect I(v) on the left and veto intervals I(a_p+1), …, I(a_p+q) intersect I(v) on the right, and let A_1 = {a_1, …, a_p} and A_2 = A - A_1, so p+q=k. Additionally, let B_1={b_ij | i,j ≤ p}, B_2={b_ij | i,j ≥ p+1}, and B_3={b_ij | i ≤ p and j ≥ p+1}. We note that B_1, B_2, and B_3 partition B. By construction |B_1|=p(p-1)/2 and |B_2|=q(q-1)/2, so |B_3| = (p+q)(p+q-1)/2 - p(p-1)/2 - q(q-1)/2 = pq.

There is no veto interval representation of G_k with p > 7 or q > 7.

By way of contradiction, consider a veto interval representation of G_k with p>7. In this representation, note that an interval I(b_i,j) ∈ B_1 cannot intersect I(a_i) on the left and I(a_j) on the right, since I(a_i) and I(a_j) intersect. We may therefore further partition B_1 into subsets β_1 = { b_i,j | I(b_i,j) intersects I(a_i) and I(a_j) on the left} and β_2 = { b_i,j | I(b_i,j) intersects I(a_i) and I(a_j) on the right}. We consider the 2p-1 regions of the line between the 2p left endpoints and veto marks of intervals in A_1, and suppose a right endpoint b_i,j_r of an element b_i,j∈ B_1 is in one of these regions, as shown in Figure <ref>. Since b_i,j intersects intervals in A_1 only on the left, the set S of left endpoints and veto marks of intervals in A_1 to the left of b_i,j_r determines which vertices in A_1 that b_i,j may be adjacent to, namely elements in A_1 with left endpoints in S and veto marks not in S. Since b_i,j is adjacent to exactly two of these vertices in G_k, b_i,j_v must veto all but two of these edges, and these two must be the vertices a_i and a_j in S with left endpoints furthest to the right. Therefore if two vertices b_i,j and b_s,t in β_1 have their right endpoints in the same region, they are adjacent to the same two vertices, i.e.
i=s and j=t. So there is at most one right endpoint b_i,j_r in each of these 2p-1 regions. Additionally, there is no such right endpoint in the region furthest to the left, since in that case I(b_i,j) would intersect only one interval I(a_i), and there is no such right endpoint in the region furthest to the right, since in that case I(b_i,j) would intersect the veto mark of every interval I(a_i). Therefore |β_1| ≤ 2p-3. By an equivalent argument, |β_2| ≤ 2p-3, so |B_1| ≤ 4p-6. Since |B_1|=p(p-1)/2, we have p(p-1)/2 ≤ 4p-6, which yields p ≤ 7. Equivalently, q ≤ 7.

There is no veto interval representation of G_k in which pq > 2k-4.

We prove that in any veto interval representation of G_k, |B_3| ≤ 2k-4. Let R be a veto interval representation of G_k with distinct endpoints. Let a ∈ A_1 be such that a_r ≥ x_r for all x ∈ A_1, and b ∈ A_1 be such that b_v ≤ x_v for all x ∈ A_1. Similarly, let c ∈ A_2 be such that c_l ≤ x_l for all x ∈ A_2, and d ∈ A_2 be such that d_v ≥ x_v for all x ∈ A_2. We will prove that for every b_ij∈ B_3, either a_i = a, a_i = b, a_j = c or a_j = d. Assume not, for contradiction, and consider I(b_i,j). Since b_i,j_l < a_i_r < a_j_l < b_i,j_r, I(b_i,j) intersects both I(a) and I(c). By assumption, a_i ≠ a and a_j ≠ c, so either b_i,j vetoes a or a vetoes b_i,j, and similarly for c. Assume without loss of generality that v_v < b_i,j_v. Then b_i,j cannot veto a, so b_i,j_l < a_v. But since a_v < v_l < a_t_r for all 1 ≤ t ≤ p, b_i,j overlaps a_t for all 1 ≤ t ≤ p. Therefore a_t vetoes b_i,j for all 1 ≤ t ≤ p, t ≠ i. This contradicts the fact that b_v < x_v for all x ∈ A_1 - {b}. Now note that for every i, 1 ≤ i ≤ p, there are at most two possible values of j for b_i,j, and similarly, for every j, p+1 ≤ j ≤ p+q, there are at most two possible values of i for b_i,j. This gives |B_3| ≤ 2p + 2q - 4 = 2k-4 after subtracting the overcounting. Combining the facts that |B_3| = pq and |B_3| ≤ 2k-4, we see that there is no veto interval representation of G_k in which pq > 2k-4.

G_10 is not a VI graph.

Suppose G_10 has a veto interval representation, with p and q defined above, and p ≤ q. By Lemma <ref>, p ≤ 7 and q ≤ 7, so since p+q=10 and p ≤ q, we have p=3, p=4, or p=5. In each case we have pq>16, contradicting Lemma <ref>.

G_10 is the smallest bipartite graph we have found which is not a veto interval graph, with 56 vertices.

Not every subgraph of a VI graph is a VI graph. The graph G_10 is not a veto interval graph, and is bipartite, but all complete bipartite graphs are veto interval graphs by Lemma <ref>.

Up to isomorphism, the directed graphs shown in Figure <ref> are the only orientations of C_5 which are directed veto interval graphs.

Let G be a directed veto interval graph which is an orientation of C_5, with vertices a, b, c, d, and e. The graph G does not have a directed path of length 4, by Lemma <ref>. But G must have a directed path of at least 2 edges, since otherwise G would have alternating directed edges, which can only happen for even cycles. So the longest directed path in G has exactly 2 or 3 edges. If G has longest directed path a → b → c → d, then G also has edges a → e and e → d, or this directed path would be longer. This yields the orientation shown on the left in Figure <ref>. If G has longest directed path a → b → c, then G also has directed edges a → e and d → c. Then a → e → d → c is a longer path unless G also has edge d → e. This yields the orientation shown on the right in Figure <ref>. Each of these orientations is a directed veto interval graph, as shown in Figure <ref>.

The Grötzsch Graph is not a VI graph.

Label the Grötzsch graph G as shown in Figure <ref>.
Assume by way of contradiction that the Grötzsch Graph has a VI representation R. Direct the edges of G using R to obtain G as a directed VI graph. We will show that this orientation of G fails to satisfy Lemma <ref>. By Lemma <ref>, the 5-cycle abcde must be directed in one of the two ways shown in Figure <ref>.

Case 1. G has directed edges a → b → c → d and a → e → d. Then the additional orientations on the edges a → g, g → c, b → h, h → d, a → j, and j → d, shown in the first drawing in Figure <ref>, are forced by Lemma <ref>.

Subcase 1a. G has directed edge j → k. Then the directed edges h → k, g → k, b → f, f → k, e → f, e → i, and i → k, shown in the second drawing in Figure <ref>, are forced again by Lemma <ref>. Applying Lemma <ref> to the cycle cdei forces c → i, but applying Lemma <ref> to the cycle gcik forces i → c. Hence we have reached a contradiction.

Subcase 1b. G has directed edge k → j. Then the directed edges k → g, k → h, k → i, i → c, i → e, k → f, and f → e, shown in the third drawing in Figure <ref>, are forced again by Lemma <ref>. Applying Lemma <ref> to the cycle bfhk forces b → f, but applying Lemma <ref> to the cycle abfe forces f → b, which is a contradiction.

Case 2. G has directed edges a → b → c, a → e, d → e, and d → c. Then the additional orientations a → g and g → c are forced by Lemma <ref>, as shown in the first drawing of Figure <ref>.

Subcase 2a. G has directed edge g → k. Then the directed edges a → j, j → k, d → j, d → h, h → k, b → h, b → f, f → k, e → f, e → i, and i → k, shown in the second drawing in Figure <ref>, are forced again by Lemma <ref>. Applying Lemma <ref> to the cycle cdei forces c → i, but applying Lemma <ref> to the cycle gcik forces i → c, which is a contradiction.

Subcase 2b. G has directed edge k → g. Then the directed edges k → i, i → c, i → e, k → f, f → e, f → b, k → h, h → b, h → d, k → j, and j → d, shown in the third drawing in Figure <ref>, are forced again by Lemma <ref>. Applying Lemma <ref> to the cycle agkj forces a → j, but applying Lemma <ref> to the cycle ajde forces j → a, which is a contradiction.

Since we have reached a contradiction in all cases, the Grötzsch Graph is not a VI graph.

The Grötzsch Graph is a minimal forbidden subgraph for VI graphs.

Let G be the Grötzsch Graph. By symmetry, it is sufficient to show that G-k, G-i, and G-a are all VI graphs. Their VI representations are shown in Figure <ref>.

§ UNIT, PROPER, AND MIDPOINT VI GRAPHS

In this section we focus on the unit, proper, and midpoint veto interval graphs defined in Section <ref>. First we prove theorems about MUVI graphs (the most restricted class), and then we investigate the relationships between UVI, MVI, MUVI, PVI, and MPVI graphs. Recall that a graph G is a caterpillar if G is a tree with a path s_1 s_2 ⋯ s_k, called the spine of G, such that every vertex of G has distance at most one from the spine.

The following families of graphs are MUVI graphs:
* Complete bipartite graphs K_m,n, for m ≥ 1 and n ≥ 1.
* Caterpillars.
* The cycle C_n, for n ≥ 4.

* The representation given in Proposition <ref> is a MUVI representation of K_m,n, with intervals of length 4.
* Let G be a caterpillar with spine vertices s_1, s_2, ..., s_k and additional vertices a_1, a_2, …, a_j. We let I(s_1)=(1,2,3), I(s_2)=(3,4,5), …, I(s_k)=(2k-1, 2k, 2k+1), and for every vertex a adjacent to s_i, I(a)=(2i-3+i/(k+1), 2i-2+i/(k+1), 2i-1+i/(k+1)). Figure <ref> shows an example of a MUVI representation of a caterpillar using this construction.
* Label the vertices of C_n for n ≥ 4 as v_1, v_2, ..., v_n. If n is odd, let I(v_k) = (20k-18, 20k-6, 20k+6) for 1 ≤ k ≤ (n+1)/2, I(v_(n+3)/2) = (10n-22, 10n-10, 10n+2), I(v_i) = (20(n-i)+12, 20(n-i)+24, 20(n-i)+36) for (n+5)/2 ≤ i ≤ n-1, and I(v_n) = (15,27,39). If n is even, let I(v_k) = (20k-18, 20k-6, 20k+6) for 1 ≤ k ≤ n/2, I(v_(n+2)/2) = (10n-4, 10n+8, 10n+20), I(v_i) = (20(n-i)+12, 20(n-i)+24, 20(n-i)+36) for (n+4)/2 ≤ i ≤ n-1, and I(v_n) = (15,27,39). See Figure <ref> for the MUVI representation for C_9.
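The case analysis above is straightforward to mechanize. A sketch that generates the triples for C_n from these formulas, checked here against the adjacency test given earlier for the odd case n=9 (for even n one pair of intervals can meet exactly at a marked point, in which case the ε-perturbation of the distinct-marked-points lemma should be applied first):

```python
def muvi_cycle(n):
    """Triples (l, v, r) for the MUVI representation of C_n (n >= 4)
    following the construction in the proof above. Every interval has
    length 24 with the veto mark at its midpoint."""
    assert n >= 4
    rep = {}
    half = (n + 1) // 2 if n % 2 else n // 2
    for k in range(1, half + 1):
        rep[k] = (20 * k - 18, 20 * k - 6, 20 * k + 6)
    mid = half + 1
    rep[mid] = ((10 * n - 22, 10 * n - 10, 10 * n + 2) if n % 2
                else (10 * n - 4, 10 * n + 8, 10 * n + 20))
    for i in range(mid + 1, n):
        rep[i] = (20 * (n - i) + 12, 20 * (n - i) + 24, 20 * (n - i) + 36)
    rep[n] = (15, 27, 39)
    return rep

edges = vi_graph(muvi_cycle(9))
print(sorted(tuple(sorted(e)) for e in edges))
# expect the 9 edges (1,2), (2,3), ..., (8,9), (1,9) of C_9
```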
The 5-lobster L_5, shown in Figure <ref>, is the graph formed by subdividing each edge of the star S_5.

The 5-lobster L_5 is not a MUVI graph.

Label the vertices of L_5 as in Figure <ref>. Assume R is a MUVI representation of L_5. Since a has degree 5, by the pigeonhole principle at least three intervals intersect I(a) on the same side. Without loss of generality, let I(b), I(d), and I(f) intersect I(a) on the right. Also without loss of generality, let b_l<d_l<f_l. Then f_l<b_v<d_v<f_v<b_r<d_r<f_r. The vertex e must be adjacent to d only. If e intersects d on the right, then d_v<e_l. If e_l<b_r, then b_v<e_l<b_r<d_r<e_v, so e is adjacent to b, a contradiction. If b_r<e_l<d_r, then f_v<b_r<e_l<d_r<f_r<e_v, so e is adjacent to f, a contradiction. If e intersects d on the left, then the same arguments follow in mirror. Therefore, no MUVI representation exists for the 5-lobster.

The class of MUVI graphs is properly contained in the class of UVI graphs.

All MUVI graphs are UVI graphs by definition. However, Lemma <ref> shows that the 5-lobster is not a MUVI graph, and Figure <ref> shows a UVI representation of the 5-lobster.

Given a MPVI representation R, the left endpoints, right endpoints, and veto marks in R appear in the same order. In other words, for any two intervals I(a) and I(b) in R, if a_l < b_l then a_v < b_v and a_r < b_r.

Consider two intervals I(a) and I(b) in R with a_l < b_l. Since R is proper, a_r < b_r. We will show that a_v < b_v. Denote the intervals (a_l,a_v) and (a_v,a_r) by AL and AR, and similarly (b_l,b_v) and (b_v,b_r) by BL and BR. Note that since R is a midpoint representation, AL and AR have the same length, and BL and BR have the same length. Assume by way of contradiction that b_v < a_v. Then BL ⊂ AL and AR ⊂ BR. This contradicts the fact that AL and AR have the same length, and BL and BR have the same length.

A graph G is a PVI graph if and only if G is a UVI graph.

Note that if intervals are unit length, no interval can be properly contained in another, so every UVI graph is a PVI graph. For the converse, we follow the proof technique of Bogart and West in their proof that G is a proper interval graph if and only if G is a unit interval graph <cit.>. Note that if no interval in R is properly contained in another, then the left endpoints in R appear in the same order as the right endpoints. We order the intervals in R by their left endpoints, which is also the order of their right endpoints. We start with a proper interval representation R_0 of G and iteratively construct a new representation R_i in which the first i intervals in this ordering are unit length. We consider the representation R_i-1 of G in which the first i-1 intervals are unit length, and we adjust the ith interval I=[a,b] to be unit length. In R_i-1, let α=a if I contains no right endpoints, and α=c otherwise, where c is the rightmost right endpoint contained in I. Note that the interval with right endpoint c is unit length in R_i-1, so α<a+1 and α<b. We transform R_i-1 into R_i by uniformly shrinking or expanding the part of R_i-1 in the interval [α, b] to [α, a+1], and translating the part of R_i-1 in the interval [b, ∞) to [a+1, ∞). Since this transformation preserves the order of the marked points in R_i-1, R_i has the same adjacencies as R_i-1 and also has interval I with unit length. Hence R_n is a UVI representation of G.

The class of MUVI graphs is properly contained in the class of MPVI graphs.

Again, if the intervals are unit length, then no interval is properly contained in another, so every MUVI representation is also a MPVI representation. Conversely, Figure <ref> shows a MPVI representation of the 5-lobster, which is not a MUVI graph by Lemma <ref>.

§ COLORING VETO INTERVAL GRAPHS

We ask how large the chromatic number of VI graphs can be. Since VI graphs are triangle-free, examples of VI graphs with large chromatic number are relatively hard to find. By Proposition <ref>, odd cycles are VI graphs, so there are examples of VI graphs with chromatic number 3. On the other hand, by Theorem <ref>, not all bipartite graphs are VI graphs. The Grötzsch graph is an example of a triangle-free graph with chromatic number 4, but Theorem <ref> shows that this graph is not a VI graph. Proposition <ref> below shows that the circulant graph (13, {1,5}) is a 4-chromatic veto interval graph.

The circulant graph (13, {1,5}), shown in Figure <ref>, is a 4-chromatic veto interval graph.

Heuberger proved that the chromatic number of (13, {1,5}) is 4 <cit.>. A veto interval representation of (13, {1,5}) is given in Figure <ref>.

This example was found via a computer search on a collection of 4-chromatic graphs from the House of Graphs website <cit.>. A similar search on 5-chromatic graphs has not found any 5-chromatic veto interval graphs. We do not know whether there exists a VI graph with chromatic number greater than 4, or indeed any upper bound on the chromatic number of VI graphs. If the veto intervals are unit length, however, we obtain the following result.

UVI graphs are 4-colorable.
Let G be a UVI graph with UVI representation R, rescaled so that every interval has length 1. For each interval I(v) ∈ R, color I(v) red if ⌊ v_l ⌋ is odd and ⌊ v_v ⌋ is odd, blue if ⌊ v_l ⌋ is odd and ⌊ v_v ⌋ is even, purple if ⌊ v_l ⌋ is even and ⌊ v_v ⌋ is even, and orange if ⌊ v_l ⌋ is even and ⌊ v_v ⌋ is odd. We show that this is a proper 4-coloring of G by verifying that any two intervals in R with the same color are not adjacent.

Suppose first that I(v) and I(w) have the same color and ⌊ v_l ⌋ ≠ ⌊ w_l ⌋. Since I(v) and I(w) have the same color, ⌊ v_l ⌋ and ⌊ w_l ⌋ have the same parity, so v_l and w_l differ by more than 1. Since I(v) and I(w) are unit length, they do not overlap.

Now suppose that I(v) and I(w) have the same color and ⌊ v_l ⌋ = ⌊ w_l ⌋ = k for some integer k. If I(v) and I(w) are red or purple, then ⌊ v_l ⌋, ⌊ v_v ⌋, ⌊ w_l ⌋, and ⌊ w_v ⌋ all have the same parity, and since I(v) and I(w) are unit length, v_l, v_v, w_l, and w_v are all contained in the same unit interval [k, k+1), and one must veto the other. If I(v) and I(w) are blue or orange, and if without loss of generality v_v < w_v, then we also have w_l < v_v, and I(v) vetoes I(w). In all cases I(v) and I(w) are not adjacent.
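The coloring rule itself is a two-bit parity computation. A minimal sketch, assuming the representation has been rescaled to intervals of length 1; the three-interval representation below is a hypothetical path:

```python
import math

def uvi_4coloring(rep):
    """Assign one of four colors to each unit veto interval (l, v, r),
    following the parity rule in the proof above. Assumes every
    interval in the representation has length exactly 1."""
    palette = {(1, 1): "red", (1, 0): "blue",
               (0, 0): "purple", (0, 1): "orange"}
    return {name: palette[(math.floor(l) % 2, math.floor(v) % 2)]
            for name, (l, v, r) in rep.items()}

# Hypothetical unit representation of the path u - v - w:
rep = {"u": (0.2, 0.7, 1.2), "v": (0.9, 1.5, 1.9), "w": (1.7, 2.4, 2.7)}
print(uvi_4coloring(rep))
# {'u': 'purple', 'v': 'orange', 'w': 'blue'}: adjacent vertices differ
```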
Let k be the largest chromatic number of any UVI graph. By Theorems <ref> and <ref>, 3 ≤ k ≤ 4. The exact value of k is still open.

§ ADDITIONAL VARIATIONS

§.§ Double Veto Interval Graphs

A double veto interval I(a) has two veto marks a_v and a_w with a_l<a_v<a_w<a_r. We denote a double veto interval as an ordered quadruple I(a)=(a_l,a_v,a_w,a_r). A double veto interval representation of a graph G is a set of double veto intervals S and a bijection from the vertices of G to the veto intervals in S, such that for any two vertices a and b in G, a and b are adjacent if and only if either a_w<b_l<a_r<b_v or b_w<a_l<b_r<a_v. In other words, a and b are adjacent if and only if their corresponding intervals intersect and neither contains either veto mark of the other. A graph G is a double veto interval graph if G has a double veto interval representation. We define k-veto graphs and k-veto interval representations analogously, with k veto marks in each interval, and denote the veto marks in an interval in such a representation by v_1 through v_k. Since the proof of Lemma <ref> applies to double veto interval graphs and k-veto interval graphs, these graphs are also triangle-free.

Note that every veto interval graph is also a double veto interval graph. Given a veto interval representation R of a graph G, we can construct a double veto interval representation of G by splitting each veto mark in R into two veto marks a small distance apart. We know of no example of a double veto interval graph that is not a veto interval graph, so these graph classes may be equal. We do have the following result showing that we do not obtain a different graph class with more than two veto marks.

A graph G is a k-veto graph if and only if G is a double veto graph, for k ≥ 2.

Given a double veto representation of G, we obtain a k-veto representation of G by splitting the left veto mark of each interval in R into k-1 veto marks a small distance apart. Conversely, suppose R is a k-veto representation of G. Let R' be the double veto representation obtained by removing the veto marks v_2 through v_k-1 from each interval in R. We claim that R' still represents G. Let I(x) be an arbitrary interval in R. We check that for any other interval I(y), I(x) is adjacent to I(y) in R if and only if I(x) is adjacent to I(y) in R'. We consider the different ways I(x) and I(y) can intersect in R. If I(y) does not contain any of the veto marks x_v_2 through x_v_k-1, then removing these veto marks does not affect the adjacency of I(x) and I(y). If I(y) does contain one of these veto marks, then either I(y) also contains x_v_1 or x_v_k, or I(y) is contained in I(x). In either case, I(x) and I(y) are not adjacent in both R and R'. Therefore R' has the same adjacencies as R.

§.§ Single approval graphs

Two veto intervals are considered adjacent if their intersection contains neither of their veto marks. In this section we consider alternative definitions of adjacency: two intervals are adjacent if their intersection contains one or both of their marks. To help intuition, in this section we call veto intervals and veto marks approval intervals and approval marks, respectively, and denote the approval mark of the interval I_a by a_a. A single approval interval representation (respectively, double approval interval representation) has two intervals I_a and I_b adjacent if they intersect and their intersection contains exactly one of their approval marks (respectively, both of their approval marks). If the intersection contains a_a we say that I(a) approves I(b). A graph G is a single approval interval graph if G has a single approval interval representation, and a double approval interval graph if G has a double approval interval representation.

Single and double approval graphs are related to tolerance graphs in the following way. Tolerance graphs are a generalization of interval graphs in which each vertex in a graph G is assigned an interval and a tolerance such that a and b are adjacent in G if I(a) and I(b) intersect, and this intersection is at least as large as either the tolerance of I(a) or the tolerance of I(b). Tolerance graphs were introduced by Golumbic and Monma in <cit.> and have been studied extensively since then. For a thorough treatment of tolerance graphs, see <cit.> by Golumbic and Trenk. Specifically, using the terminology of Golumbic and Trenk, a bounded bitolerance representation of a graph G has an interval I_a=[L(a),R(a)] and two tolerant points p(a) and q(a) for each vertex a of G, such that p(a) and q(a) are both contained in the interior of I_a. For each interval I_a, the lengths p(a)-L(a) and R(a)-q(a) are the left tolerance and right tolerance of I_a. The vertices a and b are adjacent in G if I_a and I_b intersect and their intersection contains points in [p(a),q(a)] or in [p(b),q(b)] (i.e. they intersect by more than their corresponding tolerance). If all intervals I_a in a bounded bitolerance representation R have p(a)=q(a), then R is a point-core bitolerance representation and its corresponding graph G is a point-core bitolerance graph. If we direct the edges of a point-core bitolerance graph we get a point-core bitolerance digraph, and removing the loops then gives an interval catch digraph; these have been studied extensively <cit.>.

In the terminology of approval graphs, a point-core bitolerance representation has a marked point p(a)=q(a)=a_a, which is both the endpoint of the left and right tolerance of I(a) and the approval mark of I(a), and two intervals I(a) and I(b) are adjacent if they intersect and their intersection contains either one or both of their approval marks. In this light, we may ask whether the class of point-core bitolerance graphs is equal to either the class of single approval graphs or the class of double approval graphs. To answer this question, we note that Golumbic et al.
showed that C_n is not a tolerance graph for n ≥ 5 <cit.>, but by Theorems <ref> and <ref>, C_n is both a single approval and a double approval interval graph for n ≥ 5. Thus tolerance graphs and approval graphs are distinct classes of graphs.

To further highlight the similarities between approval and tolerance graphs, let R be a set of closed intervals with middle marks, and let G be the interval graph with interval representation R (ignoring middle marks), G_1 be the veto interval graph with veto interval representation R, and G_2 be the point-core bitolerance graph with point-core bitolerance representation R. Then G is the disjoint union of G_1 and G_2. Furthermore, let G_3 be the single approval graph and G_4 the double approval graph with representation R. Then G_2 is the disjoint union of G_3 and G_4, so G is also the disjoint union of G_1, G_3, and G_4. Also note that if R is a midpoint unit representation, then G_3 is empty. An example is shown in Figure <ref>.

We now investigate families of single approval graphs.

The following families of graphs are single approval interval graphs:
* Complete graphs K_n, for any positive integer n.
* Cycles C_n, for any positive integer n ≥ 3.
* Wheels W_n, for any positive integer n ≥ 3.
* Trees.
* Complete k-partite graphs.

* Label the vertices of K_n as v_1, v_2, …, v_n. To each vertex v_i assign the single approval interval (-2i, -2i+1, 2i). This is a single approval interval representation of K_n, since each pair of intervals intersect, but only the approval mark of the smaller-labeled vertex is contained in the intersection.
* Label two adjacent vertices of C_n as v_1 and v_2. Then label the other vertex adjacent to v_1 as v_3 and the other vertex adjacent to v_2 as v_4. Continue labeling in this way, making v_5 the other vertex adjacent to v_3 and v_6 the other vertex adjacent to v_4, until the last vertex gets label v_n. Now assign the single approval interval (2,6,7) to vertex v_1, (2i, 2i+4, 2i+7) to vertex v_i, 2 ≤ i ≤ n-1, and (2n, 2n+6, 2n+7) to vertex v_n. See Figure <ref> for an example of the vertex labeling and the corresponding single approval interval representation for C_7.
* Label the n-cycle of W_n in the same way as C_n was labeled in part 2, and assign the same single approval intervals, creating a single approval interval representation of this cycle. To the remaining vertex assign the single approval interval (2, 2n+8, 2n+10). This vertex is adjacent to all of the vertices of the cycle, so its interval overlaps with all of the intervals from the cycle, containing exactly one approval mark in each intersection.
* We define an algorithm for assigning single approval intervals to the vertices of a tree T. Select a vertex v_1 as the root of T. Let l(v) be the distance from a vertex v of T to v_1. Label the remaining vertices of T with v_2, …, v_n such that if k<j then l(v_k) ≤ l(v_j). We construct a single approval representation S_i of the induced subgraph of T with vertices {v_1, …, v_i}. For S_1, assign v_1 the single approval interval (0,1,2). Given the single approval representation S_i-1, we construct the single approval interval I(v_i) to obtain the representation S_i. We define three regions, R_1, R_2 and R_3, in the following way. Let v_k be the parent of v_i. Let R_1 be the region between the approval mark v_k_a and the marked point immediately to its left in S_i-1. Let R_2 be the region between v_k_r and the first marked point to the right of v_k_r in S_i-1; if there is no marked point to the right of v_k_r in S_i-1, then let R_2 be the region between v_k_r and v_k_r+1. Suppose v_j_r is the largest right endpoint in S_i-1 with l(v_j)<l(v_i). Then we define R_3 to be the interval from v_j_r+1 to v_j_r+2. Now we assign the single approval interval of v_i from level j such that v_i_l∈ R_1, v_i_a∈ R_2, and v_i_r∈ R_3, and v_i_r appears in the same order among the right endpoints of the vertices in level j as v_i_a. We also assign marked points to be distinct from previous marked points. Each pair of vertices within a given level double approve each other, so no adjacencies result. By construction, each vertex in level i is adjacent to only its parent from level i-1. Lastly, there are no approval marks in any intersection between a vertex in level i and vertices in level j, j < i-1, so no adjacencies result. A tree T with corresponding regions R_1, R_2 and R_3 for vertex v_7 is shown in Figure <ref>. The final single approval interval representation of this tree constructed using this technique is given in Figure <ref>. More space has been added between marked points in this figure to more easily see the construction.
* Let G be a complete k-partite graph with vertices v_1, v_2, …, v_n, and partite sets {v_1, …, v_i_1-1}, {v_i_1, …, v_i_2-1}, …, {v_i_k-1, …, v_n}. Given a vertex v_a, the bth vertex in a partite set with c total vertices, we assign the single approval interval I(v_a)=(a, n+2a-b, n+2a-b+c). An example showing a single approval interval representation of a complete 3-partite graph is shown in Figure <ref>.

The class of interval graphs is properly contained in the class of single approval graphs.

Consider an interval graph representation. We can add an approval mark arbitrarily close to the left endpoint of each interval. Whenever two intervals intersect, the left endpoint of one interval is contained within the other interval, so there is at least one approval mark in each intersection. Also, since all marked points are distinct in the representation, an intersection of two intervals can contain only one of these left endpoints, and hence exactly one approval mark. Thus all interval graphs are single approval graphs. Furthermore, the containment is proper, since single approval graphs include cycles, which are not interval graphs.

§.§ Double approval graphs

The following families of graphs are double approval interval graphs:
* Complete graphs K_n, for any positive integer n.
* Cycles C_n, for any positive integer n ≥ 3.
* Wheels W_n, for any positive integer n ≥ 3.
* Complete bipartite graphs K_m,n, for any positive integers m and n.
* Trees.

* Label the vertices of K_n with v_1, v_2, …, v_n. We let I(v_i)=(i, i+n, i+2n) for 1 ≤ i ≤ n. This is a double approval representation of K_n. An example for n=5 is shown in Figure <ref>.
* Since a double approval representation for C_3=K_3 is given in part 1, let n ≥ 4. Label the vertices of C_n with v_1, v_2, …, v_n. We let I(v_i)=(i, 2i+n, 2i+n+3) for 1 ≤ i ≤ n-3, I(v_n-2)=(-2, 3n-4, 3n-2), I(v_n-1)=(-1, 0, 3n-1), and I(v_n)=(-3, n+1, n+3). An example for n=6 is shown in Figure <ref>.
* A double approval representation for W_3=K_4 is given in part 1. For the wheel W_n with n+1 vertices, n ≥ 4, add the approval interval (-4, n, 3n) to the representation of C_n in part 2. An example for n=7 is shown in Figure <ref>.
* In K_m,n, label the vertices in the first partite set with v_1, v_2, …, v_m, and the vertices in the second partite set with w_1, w_2, …, w_n. We let I(v_i)=(2i, 2m+2n+2i, 2m+2n+2i+1) and I(w_i)=(2m+2i, 2m+2i+1, 4m+2n+2i). An example is shown in Figure <ref>.
* We consider rooted trees T with root r, and for a vertex v ∈ T, let l(v) be its distance from r. Let the height of T be h=max(l(v), v∈ T). We prove a stronger statement by induction, namely that T has a double approval representation R with v_l < r_a and v_a > r_a if l(v)=1, and v_l > r_a if l(v)>1. We induct on h. For the base case, consider a tree T with h=1, i.e. a star with central vertex r and leaves v_1 through v_k. Let I(r)=(0, k+1, 3k+2) and I(v_i)=(i, k+2i, k+2i+1). For the induction step, consider a tree T with height h>1, and delete its leaves to obtain a tree T' with height h-1. By induction, T' has a double approval interval representation R'. We add intervals to R' for the leaves of T to obtain a double approval interval representation of T in the following way. For each leaf x of T', let y_1, …, y_j be the children of x in T. We consider an interval (x_a-d, x_a+d) around the approval mark of x in R' that contains no other marked points of R'; this can always be done by Lemma <ref>. We place the intervals I(y_1), …, I(y_j) in this interval in a similar way as the base case, by letting I(y_i)=(x_a-d+id/(2j), x_a+id/(2j), x_a+(2i+1)d/(4j)), as shown in Figure <ref>. Doing this for every leaf of T' yields a double approval interval representation of T.
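Both approval adjacency rules reduce to counting approval marks inside the intersection. A small sketch of the two tests, which also verifies the two K_n constructions above for n = 4:

```python
def approvals_in_intersection(A, B):
    """Count how many of the two approval marks lie in the intersection
    of two approval intervals given as triples (l, a, r); returns None
    for disjoint intervals."""
    al, aa, ar = A
    bl, ba, br = B
    lo, hi = max(al, bl), min(ar, br)
    if lo > hi:
        return None
    return sum(lo <= m <= hi for m in (aa, ba))

def single_approval_adjacent(A, B):
    return approvals_in_intersection(A, B) == 1

def double_approval_adjacent(A, B):
    return approvals_in_intersection(A, B) == 2

# The K_n constructions from the text, for n = 4:
n = 4
single_rep = [(-2*i, -2*i + 1, 2*i) for i in range(1, n + 1)]
double_rep = [(i, i + n, i + 2*n) for i in range(1, n + 1)]
assert all(single_approval_adjacent(single_rep[i], single_rep[j])
           for i in range(n) for j in range(i + 1, n))
assert all(double_approval_adjacent(double_rep[i], double_rep[j])
           for i in range(n) for j in range(i + 1, n))
print("both representations realize K_4")
```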
The tripartite graph K_2,2,2 is not a double approval graph.

Suppose K_2,2,2 has vertices a, b, c, d, e, and f, with missing edges ab, cd, and ef. Suppose by way of contradiction that K_2,2,2 has a double approval interval representation R. For each non-adjacent pair of vertices ab, cd, and ef, the approval mark of one of their intervals is outside the other. Say a_a is outside I_b, c_a is outside I_d, and e_a is outside I_f. Without loss of generality, two of these approval marks are to the left of their corresponding intervals, say a_a < b_l and c_a < d_l. But since b and c are adjacent, b_l < c_a. Therefore a_a < b_l < c_a < d_l, contradicting the fact that a and d are adjacent.

If a, b, and c are positive integers with a ≤ b ≤ c, then K_a,b,c is a double approval interval graph if and only if a=1.

By Proposition <ref>, if a ≥ 2 then K_a,b,c is not a double approval interval graph. Conversely, we may add the interval (1, 2m+2n+1.5, 4m+4n+1) to the representation of K_m,n in the proof of Theorem <ref> (with m=b and n=c) to obtain a double approval interval representation of K_1,b,c for all positive integers b and c.

A graph G is a midpoint unit double approval interval graph if and only if G is a unit interval graph.

Let G be a midpoint unit double approval graph, and let R be its midpoint unit double approval interval representation. We transform R into a unit interval representation S of G as follows. Suppose every interval in R has length 2c for some constant c. Given an approval interval (a, a+c, a+2c) in R, define the corresponding interval in S to be (a+c/2, a+3c/2). Since every interval in S has length c, S is a unit interval representation. We verify that two intervals I_1=(a, a+c, a+2c) and I_2=(b, b+c, b+2c) are adjacent in R if and only if they are adjacent in S. First suppose I_1 and I_2 are adjacent in R, and assume without loss of generality that a<b. Then the approval mark of I_1 is contained in I_2, so b<a+c<b+2c. Hence a<b<a+c, so a+c/2<b+c/2<a+3c/2, and I_1 and I_2 intersect in S. Conversely, if I_1 and I_2 intersect in S and a<b, then a+c/2<b+c/2<a+3c/2, and by an analogous argument I_1 and I_2 are adjacent in R. Since this construction is invertible, we can similarly transform a unit interval representation S of G into a midpoint unit double approval interval representation R of G.

§ OPEN QUESTIONS

We conclude with a list of open questions.
* Theorem <ref> shows that G_10 is a bipartite graph with 56 vertices which is not a VI graph. What is the smallest bipartite non-VI graph?
* Does there exist a VI graph with chromatic number 5? More generally, what is the maximum chromatic number of VI graphs?
* What is the maximum chromatic number of UVI, MUVI, or MPVI graphs?
* Is there a double veto interval graph which is not a VI graph?
* Is there an interval graph which is not a double approval graph?
* Is there a VI graph which is not a MVI graph?
* Theorem <ref> shows that all caterpillars are MUVI graphs, but Proposition <ref> shows that the 5-lobster is not a MUVI graph. Which trees are MUVI graphs?
* Propositions <ref> and <ref> show that the class of MUVI graphs is properly contained in both the class of UVI graphs and the class of MPVI graphs. Is one of these classes of graphs contained in the other?
* Several classes of intersection graphs generalize interval graphs to two and more dimensions. It would be interesting to study any of these classes of graphs, such as rectangle intersection graphs, with veto marks.

§ ACKNOWLEDGEMENTS

We thank Benjamin Reiniger for suggesting the House of Graphs as a source of triangle-free graphs with large chromatic number.
http://arxiv.org/abs/1709.09259v2
{ "authors": [ "Breeann Flesch", "Jessica Kawana", "Joshua D. Laison", "Dana Lapides", "Stephanie Partlow", "Gregory J. Puleo" ], "categories": [ "math.CO", "05C62, 05C75, 05C15, 05C05, 05C20" ], "primary_category": "math.CO", "published": "20170926205844", "title": "Veto Interval Graphs and Variations" }
The discovery of 1991 VG on 1991 November 6 attracted an unprecedented amount of attention as it was the first near-Earth object (NEO) ever found on an Earth-like orbit. At that time, it was considered by some as the first representative of a new dynamical class of asteroids, while others argued that an artificial (terrestrial or extraterrestrial) origin was more likely. Over a quarter of a century later, this peculiar NEO has been recently recovered and the new data may help in confirming or ruling out early theories about its origin. Here, we use the latest data to perform an independent assessment of its current dynamical status and short-term orbital evolution. Extensive N-body simulations show that its orbit is chaotic on time-scales longer than a few decades. We confirm that 1991 VG was briefly captured by Earth's gravity as a minimoon during its previous fly-by in 1991–1992; although it has been a recurrent transient co-orbital of the horseshoe type in the past and it will return as such in the future, it is not a present-day co-orbital companion of the Earth. A realistic NEO orbital model predicts that objects like 1991 VG must exist and, consistently, we have found three other NEOs —2001 GP_2, 2008 UA_202 and 2014 WA_366— which are dynamically similar to 1991 VG. All this evidence confirms that there is no compelling reason to believe that 1991 VG is not natural.

methods: numerical – celestial mechanics – minor planets, asteroids: general – minor planets, asteroids: individual: 1991 VG – planets and satellites: individual: Earth.
§ INTRODUCTION

Very few near-Earth objects (NEOs) as small as 1991 VG (about 10 m) have given rise to so much controversy and so many imaginative conjectures. Asteroid 1991 VG was the first NEO ever discovered moving in an orbit that is similar to that of the Earth (Scotti & Marsden 1991). This element of novelty led to a stimulating debate on how best to interpret the new finding. On the one hand, it could be the first member of a new orbital class of NEOs (e.g. Rabinowitz et al. 1993); on the other, it could be a relic of space exploration (e.g. Scotti & Marsden 1991; Steel 1995b). In addition to the primary debate on the possible character —natural versus artificial— of 1991 VG, a lively discussion resulted in multiple theories about its most plausible origin; the main asteroid belt (e.g. Brasser & Wiegert 2008), lunar ejecta (e.g. Tancredi 1997), a returning spacecraft (e.g. Steel 1995b) or space debris (e.g. Scotti & Marsden 1991), and being an extraterrestrial artefact (e.g. Steel 1995a) were all argued for and against as possible provenances for this object.

After being last observed in 1992 April, 1991 VG has spent over a quarter of a century at small solar elongation angles, out of reach of ground-based telescopes. Now, this peculiar minor body has been recovered (Hainaut, Koschny & Micheli 2017)[<http://www.minorplanetcenter.net/mpec/K17/K17L02.html>] and the new data may help in confirming or ruling out early speculative theories about its origin. Here, we use the latest data available on 1991 VG to study its past, present and future orbital evolution in an attempt to understand its origin and current dynamical status.

This paper is organized as follows. In Section 2, we present historical information, current data, and the various theories proposed to explain the origin of 1991 VG. Details of our numerical model and 1991 VG's orbital evolution are presented and discussed in Section 3. In Section 4, we show that even if 1991 VG is certainly unusual, other known NEOs move in similar orbits and orbital models predict that such objects must exist naturally. Arguments against 1991 VG being a relic of alien or even human space exploration are presented in Section 5. Our results are discussed in Section 6. Section 7 summarizes our conclusions.

§ ASTEROID 1991 VG: DATA AND THEORIES

Asteroid 1991 VG was discovered on 1991 November 6 by J. V. Scotti observing with the Spacewatch 0.91-m telescope at Steward Observatory on Kitt Peak, at an apparent visual magnitude of 20.7 and nearly 0.022 au from the Earth (Scotti & Marsden 1991). The rather Earth-like orbit determination initially led to the suspicion that the object was of artificial origin, i.e. returning space debris, perhaps a Saturn S-IVB third stage. It experienced a close encounter with our planet at nearly 0.0031 au on 1991 December 5 and with the Moon at 0.0025 au on the following day, but it was not detected by radar at NASA's Goldstone Deep Space Network on December 12 (Scotti et al. 1991). After being last imaged in 1992 April, 1991 VG remained unobserved until it was recovered on 2017 May 30 by Hainaut et al. (2017) observing with the Very Large Telescope (8.2-m) from Cerro Paranal, at a magnitude of 25.
The new orbit determination (see Table <ref>) is based on 70 observations that span a data-arc of 9 339 d or 25.57 yr and shows that 1991 VG is an Apollo asteroid following a somewhat Earth-like orbit —semimajor axis, a=1.026 au, eccentricity, e=0.04975, and inclination, i=1.44°— with a minimum orbit intersection distance (MOID) with the Earth of 0.0053 au. As for the possible origin of 1991 VG and based only on its orbital elements, Scotti & Marsden (1991) suggested immediately that it might be a returning spacecraft. West et al. (1991) pointed out that its light curve might be compatible with that of a rapidly rotating satellite —probably tumbling— with highly reflective side panels, further supporting the theory that 1991 VG could be an artificial object. Although an artificial origin was proposed first, T. Gehrels pointed out that if 1991 VG was natural, it might be a representative of a new orbital class of objects, the Arjunas; an unofficial dynamical group of small NEOs following approximately Earth-like orbits which could be secondary fragments of asteroids that were originally part of the main belt and left their formation region under the effect of Jupiter's gravity (Cowen 1993; Rabinowitz et al. 1993; Gladman, Michel & Froeschlé 2000). In his analysis of terrestrial impact probabilities, Steel (1995b) assumed that 1991 VG was a returned spacecraft. An artificial nature for 1991 VG was discussed in detail by Steel (1995a), concluding that it was a robust candidate alien artefact (inert or under control); this conclusion was contested by Weiler (1996). The controversy on a possible alien origin for 1991 VG was continued by Steel (1998) and Weiler (1998) to conclude that either the detection of 1991 VG was a statistical fluke (unusual NEO or space debris of terrestrial origin) or a very large number of alien probes are following heliocentric orbits. Tancredi (1997) reviewed all the available evidence to conclude that 1991 VG could be a piece of lunar ejecta, the result of a relatively large impact. Tatum (1997) favoured a natural origin for 1991 VG, stating that any asteroid moving in an Earth-like orbit with semimajor axis in the range 0.9943–1.0057 au will inevitably collide with our planet, i.e. observed NEOs in such paths must be relatively recent arrivals. Brasser & Wiegert (2008) re-examined the topic of the origin of 1991 VG and argued that it had to have its origin on a low-inclination Amor- or Apollo-class object. Here and in order to identify the most Earth-like orbits among those of known NEOs, we have used the D-criteria of Southworth & Hawkins (1963), D_SH, Lindblad & Southworth (1971), D_LS (in the form of equation 1 in Lindblad 1994 or equation 1 in Foglia & Masi 2004), Drummond (1981), D_D, and the D_R from Valsecchi, Jopek & Froeschlé (1999) to search the known NEOs for objects that could be dynamically similar to our planet, considering the orbital elements of the Earth for the epoch JDTDB 2458000.5 (see below), which is the standard time reference used throughout this research. The actual values are: semimajor axis, a = 0.999215960 au, eccentricity, e = 0.017237361, inclination, i = 0.000524177°, longitude of the ascending node, Ω = 230.950190495°, and argument of perihelion, ω = 233.858793714°.
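Since the D-criteria are used repeatedly in what follows, a minimal sketch of the oldest of them, the Southworth & Hawkins (1963) metric, may help. The function and variable names are ours, angles are taken in radians, q is the perihelion distance in au, and the sign of the secular term assumes the usual |ΔΩ| ≤ 180° branch:

```python
import numpy as np

def d_sh(q1, e1, i1, node1, peri1, q2, e2, i2, node2, peri2):
    """Southworth & Hawkins (1963) D-criterion between two osculating
    orbits (angles in radians, perihelion distances q in au)."""
    dnode = node2 - node1
    # (2 sin(I/2))^2, with I the mutual inclination of the two orbit planes
    sin2_half_i = ((2.0 * np.sin(0.5 * (i2 - i1)))**2
                   + np.sin(i1) * np.sin(i2) * (2.0 * np.sin(0.5 * dnode))**2)
    big_i = 2.0 * np.arcsin(0.5 * np.sqrt(sin2_half_i))
    # difference between the longitudes of perihelion, measured from the
    # common node of the two orbital planes
    big_pi = (peri2 - peri1) + 2.0 * np.arcsin(
        np.cos(0.5 * (i2 + i1)) * np.sin(0.5 * dnode) / np.cos(0.5 * big_i))
    d2 = ((e2 - e1)**2 + (q2 - q1)**2 + sin2_half_i
          + ((e1 + e2) / 2.0 * 2.0 * np.sin(0.5 * big_pi))**2)
    return np.sqrt(d2)
```

Small values flag dynamically similar orbits; a cut of 0.05 on D_LS and D_R is the one adopted later in the paper.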
The list of NEOs has been retrieved from JPL's Solar System Dynamics Group (SSDG) Small-Body Database (SBDB).[<http://ssd.jpl.nasa.gov/sbdb.cgi>] Considering the currently available data on NEOs, the orbit of 1991 VG is neither the (overall) most Earth-like known —the current record holder is 2006 RH_120 (Bressi et al. 2008a; Kwiatkowski et al. 2009) which has the lowest values of the D-criteria, but it has not been observed since 2007 June 22— nor the one with the orbital period closest to one Earth year —which is 2014 OL_339 with a=0.9992 au or an orbital period of 364.83227±0.00002 d (Vaduvescu et al. 2014, 2015; de la Fuente Marcos & de la Fuente Marcos 2014; Holmes et al. 2015)— nor the least eccentric —which is 2002 AA_29 with e=0.01296 (Connors et al. 2002; Smalley et al. 2002), followed by 2003 YN_107 with e=0.01395 (McNaught et al. 2003; Connors et al. 2004)— nor the one with the lowest inclination —which is probably 2009 BD with i=0.38° (Buzzi et al. 2009; Micheli, Tholen & Elliott 2012), followed by 2013 BS_45 with i=0.77° (Bressi, Scotti & Hug 2013; de la Fuente Marcos & de la Fuente Marcos 2013). Other NEOs with very Earth-like orbits are 2006 JY_26 (McGaha et al. 2006; Brasser & Wiegert 2008; de la Fuente Marcos & de la Fuente Marcos 2013) and 2008 KT (Gilmore et al. 2008; de la Fuente Marcos & de la Fuente Marcos 2013). In summary and within the context of the known NEOs, although certainly unusual, the orbital properties of 1991 VG are not as remarkable as initially thought. On a more technical side, the orbit determinations of 2006 RH_120 and 2009 BD required the inclusion of non-gravitational accelerations (radiation-pressure related) in order to reproduce the available astrometry (see e.g. Micheli et al. 2012). The orbital solution of 1991 VG (see Table <ref>) was computed without using any non-gravitational forces and reproduces the observations used in the fitting. § INTEGRATIONS AND ORBITAL EVOLUTION Understanding the origin and current dynamical status of 1991 VG demands a detailed study of its past, present and future orbital evolution. A careful statistical analysis of the behaviour over time of its orbital elements and other relevant parameters using a sufficiently large set of N-body simulations, including the uncertainties associated with the orbit determination in a consistent manner, should produce reasonably robust conclusions. In this section, we use a publicly available direct N-body code[<http://www.ast.cam.ac.uk/~sverre/web/pages/nbody.htm>] originally written by Aarseth (2003) that implements a fourth order version of the Hermite scheme described by Makino (1991). The suitability of this software for Solar system studies has been successfully and extensively tested (see de la Fuente Marcos & de la Fuente Marcos 2012). Consistent with the new data on 1991 VG (Hainaut et al. 2017), non-gravitational forces have been excluded from the calculations; the effects of solar radiation pressure have been found to be negligible in the calculation of the orbit in Table <ref> and for an average value for the Yarkovsky drift of 10^-9 au yr^-1 (see e.g. Nugent et al.
2012), the time-scale to escape the orbital neighbourhood of the Earth is about 12 Myr, which is some orders of magnitude longer than the time intervals discussed in this research. Our calculations make use of the physical model described by de la Fuente Marcos & de la Fuente Marcos (2012) and of initial conditions —positions and velocities in the barycentre of the Solar system for the various bodies involved, including 1991 VG— provided by JPL's horizons[<https://ssd.jpl.nasa.gov/?horizons>] (Giorgini et al. 1996; Standish 1998; Giorgini & Yeomans 1999; Giorgini, Chodas & Yeomans 2001; Giorgini 2011, 2015) at epoch JD 2458000.5 (2017-September-04.0 TDB, Barycentric Dynamical Time), which is the t = 0 instant in our figures. Fig. <ref> shows the evolution backwards and forward in time of several orbital elements and other relevant parameters of 1991 VG using initial conditions compatible with the nominal orbit in Table <ref>. The time interval displayed (10 kyr) is consistent with the analysis of its dynamical lifetime carried out by Brasser & Wiegert (2008). Fig. <ref>, top panel (geocentric distance), shows that this NEO experiences recurrent close encounters with our planet well within the Hill radius, which is 0.0098 au. The Hill radius of the Earth is the maximum orbital distance at which an object (natural or artificial) can remain gravitationally bound to our planet, i.e. be a satellite. When inside the Hill sphere, the Earth's attraction dominates that of the Sun even for objects with a positive value of the geocentric energy, i.e. unbound passing objects. In order to be captured as a satellite of our planet, the geocentric energy of the object must be negative (Carusi & Valsecchi 1979). This simple criterion does not include any constraint on the duration of the capture event; Rickman & Malmort (1981) recommended adding the further restriction that the object completes at least one revolution around our planet while its geocentric energy is still negative (see Section 5 for a more detailed analysis applied to 1991 VG). Given its low eccentricity, 1991 VG cannot undergo close encounters with major bodies other than the Earth–Moon system. As the orbit of 1991 VG is somewhat Earth-like, these are often low-velocity encounters (as low as 0.9 km s^-1). Such fly-bys can be very effective in perturbing an orbit even if the close approaches are relatively distant, but in this case we observe very frequent (every few decades) close fly-bys. Under such conditions, one may expect a very chaotic evolution as confirmed by the other panels in Fig. <ref>. Although very chaotic orbits present great challenges in terms of reconstructing the past dynamical evolution of the affected objects or making reliable predictions about their future behaviour, Wiegert, Innanen & Mikkola (1998) have shown that it is still possible to arrive at scientifically robust conclusions if a proper analysis is performed. On the other hand, low-velocity encounters well within the Hill radius can lead to temporary capture events (see Section 5). Fig. <ref>, second to top panel, shows the evolution of the value of the so-called Kozai-Lidov parameter √(1 - e^2)cos i (Kozai 1962; Lidov 1962) that measures the behaviour of the component of the orbital angular momentum of the minor body perpendicular to the ecliptic. The value of this parameter remains fairly constant over the time interval studied; the dispersion is smaller than the one observed for typical NEOs following Earth-like paths (see e.g.
figs 3, 6 and 9, B-panels, in de la Fuente Marcos & de la Fuente Marcos 2016b). The variation of the relative mean longitude of 1991 VG or difference between the mean longitude of the object and that of the Earth, λ_r (see e.g. Murray & Dermott 1999), is shown in Fig. <ref>, third to top panel. When λ_r changes freely in the interval (0°, 360°) —i.e. λ_r circulates— 1991 VG is not subjected to the 1:1 mean-motion resonance with our planet. If the value of λ_r oscillates or librates —about 0° (quasi-satellite), ±60° (Trojan) or 180° (horseshoe)— then the orbital periods of 1991 VG and the Earth are virtually the same and we speak of a co-orbital companion to our planet. Fig. <ref>, third to top panel, shows that 1991 VG has been a recurrent transient co-orbital of the horseshoe type in the past and it will return as such in the future; however, it is not a present-day co-orbital companion of the Earth. Fig. <ref> shows the most recent co-orbital episode of 1991 VG in further detail; the variation of the relative mean longitude indicates that 1991 VG followed a horseshoe path for about 300 yr. Fig. <ref>, fourth to top panel, shows the evolution of the value of the semimajor axis of 1991 VG. Earth's co-orbital region goes from ∼0.994 au to ∼1.006 au, or a range in orbital periods of 362–368 d, and the figure shows that 1991 VG only enters this zone for relatively brief periods of time although it remains in its neighbourhood during the entire integration. Fig. <ref>, fourth and third to bottom panels, shows how the eccentricity and the inclination, respectively, change over time. Although in both cases the evolution is very irregular, there is some weak coupling between both orbital elements and in some cases, when the eccentricity reaches a local maximum, the value of the inclination reaches a local minimum and vice versa. This explains why the value of the Kozai-Lidov parameter (Fig. <ref>, second to top panel) remains relatively stable throughout the integrations; this is also a sign that the Kozai-Lidov mechanism (Kozai 1962; Lidov 1962) may be at work, at least partially. This interpretation is confirmed in Fig. <ref>, second to bottom panel, as the value of the argument of perihelion, ω, shows signs of libration (it does not circulate), which is a typical side effect of the Kozai-Lidov mechanism. Fig. <ref>, bottom panel, shows the evolution of the nodal distances of 1991 VG; encounters with the Earth–Moon system are only possible in the neighbourhood of the nodes and both nodes tend to drift into the path of our planet in a chaotic manner. The orbital evolution displayed in Fig. <ref> gives a general idea of the dynamical behaviour of 1991 VG, but it does not show the effect of the uncertainties in the orbit determination (see Table <ref>). In order to account for this critical piece of information, we use the Monte Carlo using the Covariance Matrix (MCCM) method detailed in section 3 of de la Fuente Marcos & de la Fuente Marcos (2015c) —the covariance matrix to generate initial positions and velocities has been obtained from JPL's horizons (see the sketch below). Fig. <ref> shows the results of the evolution of 250 control orbits generated using the MCCM method. These simulations confirm that the orbital evolution of 1991 VG is chaotic on time-scales longer than a few decades. § UNUSUAL BUT NOT UNCOMMON One of the arguments originally applied to reject a natural origin for 1991 VG was that, in accordance with the evidence available at that time, such a NEO was highly improbable, from an orbital point of view.
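In essence, the MCCM draw behind the 250 control orbits reduces to correlating Gaussian deviates through a Cholesky factor of the covariance matrix of the orbit determination. A minimal sketch, with the caveat that the paper draws Cartesian positions and velocities from horizons whereas this illustration (names ours) samples six osculating elements directly:

```python
import numpy as np

def mccm_draw(nominal, covariance, n_orbits=250, seed=1991):
    """Monte Carlo using the Covariance Matrix (MCCM): draw control
    orbits around a nominal solution.  nominal: length-6 vector of
    elements (a, e, i, Omega, omega, M); covariance: their 6x6 matrix."""
    rng = np.random.default_rng(seed)
    lower = np.linalg.cholesky(covariance)   # covariance = lower @ lower.T
    deviates = rng.standard_normal((n_orbits, len(nominal)))
    return np.asarray(nominal) + deviates @ lower.T
```

Each row of the result is then propagated with the same N-body code as the nominal orbit.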
Brasser & Wiegert (2008) using numerical simulations estimated that the probability of a NEO ever ending up on an Earth-like orbit could be about 1:20 000. If we use the latest NEO orbital model described by Granvik et al. (2013a,b) and Bottke et al. (2014) and implemented in the form of a publicly available survey simulator,[<http://neo.ssa.esa.int/neo-population>] we obtain a probability of finding a NEO moving in an orbit akin to that of 1991 VG of about 10^-6 —the degree of similarity between two orbits has been estimated as described before, using the D-criteria. As the previously mentioned NEO orbit model only strictly applies to NEOs with H<25 mag (in fact, the single object predicted by the model has a magnitude of 24.24) and 1991 VG is smaller, it cannot be ruled out that objects other than 1991 VG may be moving along similar paths. In order to confirm or reject this plausible hypothesis, we have used the D-criteria mentioned above to search the known NEOs (data from JPL's SSDG SBDB as before) for objects dynamically similar to 1991 VG. We apply these criteria using osculating Keplerian orbital elements, not the customary proper orbital elements (see e.g. Milani 1993, 1995; Milani & Knežević 1994; Knežević & Milani 2000; Milani et al. 2014, 2017), because 1991 VG-like orbits are inherently chaotic on very short time-scales. Table <ref> shows the data of three other NEOs —2001 GP_2 (McMillan et al. 2001), 2008 UA_202 (Bressi et al. 2008b) and 2014 WA_366 (Gibbs et al. 2014)— which are dynamically similar to 1991 VG (they have D_LS and D_R < 0.05). Integrations analogous to those in Fig. <ref> (not shown) indicate that the evolution of these three NEOs (although their orbit determinations are in need of significant improvement) bears some resemblance to that of 1991 VG. Common features include relatively frequent close encounters with the Earth–Moon system and very chaotic short-term evolution (all of them), small values of their MOIDs, recurrent trapping by 1:1 mean-motion resonances with the Earth (particularly 2008 UA_202), evolution temporarily affected by the Kozai-Lidov effect and other consistent properties. Asteroid 2008 UA_202 is considered an easily retrievable NEO (García Yárnoz, Sanchez & McInnes 2013). NEOs 2001 GP_2 and 2008 UA_202 are included in the list of asteroids that may be involved in potential future Earth impact events maintained by JPL's Sentry System (Chamberlin et al. 2001; Chodas 2015).[<https://cneos.jpl.nasa.gov/sentry/>] Asteroid 2001 GP_2 has a computed impact probability of 0.00021 for a possible impact in 2043–2107; asteroid 2008 UA_202 is listed with an impact probability of 0.000081 for a possible impact in 2050–2108. Their very similar range of years for a most probable potential impact —i.e. they reach their closest perigees at similar times even if their synodic periods are long— suggests a high degree of dynamical coherence for these two objects even if their values of Ω and ω are very different. Although they have relatively high values of the impact probability, their estimated diameters are smaller than 20 m. In the improbable event of an impact, its effects would be local, not too different from those of the Chelyabinsk event or some other recent minor impacts (see e.g. Brown et al. 2013; Popova et al.
2013; de la Fuente Marcos & de la Fuente Marcos 2015b; de la Fuente Marcos, de la Fuente Marcos & Mialle 2016); however, an object like 2008 UA_202 probably would break up in the Earth's atmosphere and few fragments, if any, would hit the ground (or the ocean). § NATURAL OR ARTIFICIAL? Some of the orbital properties of 1991 VG have been used to argue for or against a natural or artificial origin for this object. A very unusual dynamical feature that was pointed out by Tancredi (1997) was the fact that 1991 VG experienced a temporary satellite capture by the Earth during its 1991–1992 fly-by; such satellite capture showed a recurrent pattern. The backward evolution of the new orbit determination fully confirms the analysis made by Tancredi (1997). Fig. <ref>, top panel, shows that the Keplerian geocentric energy of 1991 VG (relative binding energy) became negative during the encounter and also that 1991 VG completed an entire revolution around our planet (bottom panel). Therefore, it might have matched both criteria (see above, Carusi & Valsecchi 1979; Rickman & Malmort 1981) to be considered a bona fide satellite or, perhaps more properly, a minimoon of the Earth; using the terminology in Fedorets, Granvik & Jedicke (2017) we may speak of a temporarily captured orbiter. However, the relative binding energy was not negative for the full length of the loop around our planet pictured in Fig. <ref>, bottom panel. As the loop followed by 1991 VG with respect to the Earth was travelled in the clockwise sense, this event may be regarded as the first ever documented retrograde capture of a satellite by our planet, even if it had a duration of about 28 d. Fig. <ref>, top panel, also shows that this unusual phenomenon is recurrent in the case of 1991 VG. But, if there are other objects moving in 1991 VG-like orbits, how often do they become temporary satellites of the Earth? Figs <ref> and <ref> show the results of 10^6 numerical experiments in which a virtual object moving in a 1991 VG-like orbit —orbital elements assumed to be uniformly distributed in the volume of orbital parameter space defined by a∈(0.95, 1.05) au, e∈(0.0, 0.1), i∈(0°, 3°), Ω∈(0°, 360°), ω∈(0°, 360°), and the time of perihelion passage τ_q∈(2458000.5, 2458365.75) JD— undergoes a fly-by with our planet. The region chosen encloses the orbit solutions of 1991 VG, 2001 GP_2, 2008 UA_202 and 2014 WA_366, and those of other NEOs cited earlier in this paper as well. We did not use the NEO orbit model mentioned before to generate the synthetic orbits because we wanted to survey the relevant volume of orbital parameter space in full detail so our results could be applied to both natural and artificial objects. The evolution of the virtual objects was followed just for one year of simulated time to minimize the impact of orbital chaos and resonant returns (see e.g. Milani, Chesley & Valsecchi 1999) on our conclusions. This short time interval is fully justified because our previous analyses show that, after experiencing a close fly-by with the Earth, an object moving in a 1991 VG-like orbit most likely jumps into another 1991 VG-like orbit. These experiments have been carried out with the same software, physical model and relevant initial conditions used in our previous integrations. Our calculations show that the probability of becoming a temporary (for any length of time) satellite of our planet for members of this group of objects is 0.0036. The overall capture rate is roughly similar to that found by Granvik et al. (2012).
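The setup of this 10^6 virtual-object experiment is straightforward to reproduce schematically; the capture test is the Carusi & Valsecchi (1979) criterion quoted in Section 3, and the N-body propagation itself is omitted. A sketch (names and the exact bookkeeping are ours):

```python
import numpy as np

RNG = np.random.default_rng(19911106)
HILL_RADIUS_AU = 0.0098          # Hill radius of the Earth, Section 3

def draw_virtual_orbit():
    """One virtual object, uniform over the box of orbital elements used
    for the 10^6 experiments (angles in degrees, a in au, tau_q in JD)."""
    return dict(a=RNG.uniform(0.95, 1.05), e=RNG.uniform(0.0, 0.1),
                i=RNG.uniform(0.0, 3.0), node=RNG.uniform(0.0, 360.0),
                peri=RNG.uniform(0.0, 360.0),
                tau_q=RNG.uniform(2458000.5, 2458365.75))

def is_temporarily_captured(r_geo, v_geo, gm_earth):
    """Bound in the Keplerian two-body sense while inside the Hill sphere
    (Carusi & Valsecchi 1979).  r_geo, v_geo: geocentric state vectors;
    gm_earth: G times the Earth's mass, in units consistent with r and v."""
    r = float(np.linalg.norm(r_geo))
    energy = 0.5 * float(np.dot(v_geo, v_geo)) - gm_earth / r
    return r < HILL_RADIUS_AU and energy < 0.0
```

Counting, per object, the stretches of time steps for which the test returns true yields the 0.0036 overall capture probability and the duration statistics discussed next.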
In our case, the probability of a capture for less than one month is 0.0023, for one to two months is 0.0011, and for more than two months is 0.00021 (see Fig. <ref>). Captures for less than 7 d are less probable than captures for 7 to 14 d. Therefore, most objects temporarily captured as Earth's transient bound companions spend less than one month in this state and they do not complete one full revolution around our planet. These results also indicate that as long as we have NEOs in 1991 VG-like orbits, they naturally tend to become temporary satellites of our planet. However, captures for more than two months are rather unusual and those lasting one year, exceedingly rare. This result is at odds with those in Granvik et al. (2012) and Fedorets et al. (2017), but this is not surprising because our short integrations are biased against producing temporarily captured orbiters due to the comparatively small size of our synthetic sample —10^6 experiments versus 10^10 in Granvik et al. (2012)— and our choice of initial conditions —e.g. their integrations start at 4–5 Hill's radii from the Earth. Fedorets et al. (2017) show explicitly in their table 1 that 40 per cent of all captures should be temporarily captured orbiters. As objects moving in 1991 VG-like paths are being kicked from one of these orbits into another, it is perfectly normal to observe recurrent but brief capture episodes such as those discussed for 1991 VG. Such temporary captures are also observed during the integrations of 2001 GP_2, 2008 UA_202 and 2014 WA_366 although they tend to be shorter in duration and less frequent. In addition, no virtual object collided with the Earth during the calculations, which strongly suggests that even if impact probabilities are theoretically high for many of them, it is much more probable to be captured as an ephemeral satellite of our planet than to become an actual impactor; this conclusion is consistent with recent results by Clark et al. (2016), but it is again at odds with results obtained by Granvik et al. (2012) and Fedorets et al. (2017), who found that about one per cent of their test particles impacted the Earth. This significant discrepancy arises from the factors pointed out before: the comparatively small size of our synthetic sample and the different initial conditions. Fig. <ref> shows how the duration of the episode of temporary capture as natural satellite of our planet depends on the initial values of the semimajor axis, eccentricity and inclination. The colours (or grey scale) in the maps depend on the duration of the episode in days as indicated in the associated colour box. NEOs moving in 1991 VG-like orbits of the Amor- or Apollo-class are more likely to experience longer capture episodes. The probability of getting captured decreases rapidly for objects with e>0.05 and/or i>15°, and the duration of the recorded episodes is shorter. It is important to notice that one of these virtual objects might leave the assumed initial volume of NEO orbital parameter space (see above) in a time-scale of the order of 1 kyr as Fig. <ref> shows. In addition, the longest orbital period of a satellite of our planet is about 205 d (if it is at the Hill radius); therefore, most of the temporary captures recorded in our numerical experiment do not qualify as true satellites according to Rickman & Malmort (1981) because they did not complete at least one revolution when bound to the Earth; following the terminology used by Fedorets et al. (2017), we may speak of temporarily captured fly-bys in these cases.
In fact and strictly speaking, the event in Fig. <ref> is compatible with a temporarily captured fly-by, not a temporarily captured orbiter; one of the annual epicycles happens to (somewhat accidentally) loop around the Earth lasting several months, but the geocentric energy is negative only for a fraction of the time taken to travel the loop. Indeed, our limited numerical experiment has been optimized to show how frequent temporarily captured fly-bys —not orbiters— are; the histogram in Fig. <ref> matches reasonably well that of temporarily captured fly-bys in fig. 2 in Fedorets et al. (2017). The topic of the capture of irregular satellites by planets has been studied by e.g. Astakhov et al. (2003), Nesvorný, Vokrouhlický & Morbidelli (2007) and Emel'yanenko (2015). Jupiter is a well-documented host for these captures (see e.g. Rickman & Malmort 1981; Tancredi, Lindgren & Rickman 1990; Kary & Dones 1996). In regards to the Earth, the topic has only recently received attention (see e.g. Baoyin, Chen & Li 2010; Granvik, Vaubaillon & Jedicke 2012; Bolin et al. 2014; Brelsford et al. 2016; Clark et al. 2016; Jedicke et al. 2016; Fedorets et al. 2017). Fedorets et al. (2017) have predicted that the largest body constantly present on a geocentric orbit could have a diameter of the order of 0.8 m. The recent scientific interest in the subject of transient bound companions of our planet was triggered by the exceptional close encounter between our planet and 2006 RH_120 (Bressi et al. 2008a; Kwiatkowski et al. 2009). Kwiatkowski et al. (2009) showed that 2006 RH_120 was temporarily captured into a geocentric orbit from 2006 July to 2007 July. In their work, they confirmed that 2006 RH_120 is a natural object and it cannot be lunar ejecta; they favour a scenario in which its capture as transient satellite was the result of aerobraking in the Earth's atmosphere of a NEO previously moving in a standard Earth-crossing orbit with very low MOID, a low-eccentricity Amor- or Apollo-class minor body. If we interpret the capture of 2006 RH_120 as a minimoon within the context of our previous numerical experiment, in which the probability of remaining captured for an entire year is about 10^-6, this episode might be a statistical fluke or perhaps indicate that the population of NEOs capable of experiencing such episodes is exceedingly large. In principle, our results strongly favour the interpretation of the 2006 RH_120 capture episode as a clear outlier. Short-lived satellite capture events consistent with those in Figs <ref> and <ref> have been routinely observed in simulations of real NEOs moving in Earth-like orbits (see e.g. the discussion in de la Fuente Marcos & de la Fuente Marcos 2013, 2014, 2015a). While our simulations show that the capture episode experienced by 1991 VG is unusual but not uncommon, the one by 2006 RH_120 seems to be truly uncommon and it is difficult to assume that the same scenario that led 1991 VG to become a minimoon can be applied to 2006 RH_120 as well. However, as 2006 RH_120 is a confirmed (by radar) natural object, it is reasonable to assume that 1991 VG is natural too. Within the context of numerical experiments optimized to study temporarily captured orbiters —not fly-bys— the case of 2006 RH_120 is not unusual though. In fact, Granvik et al. (2012) and Fedorets et al. (2017) found a good match between the probability associated with the capture of 2006 RH_120 and predictions from their models for the temporarily captured orbiter population.
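The fly-by versus orbiter bookkeeping used in this section can be made precise in a few lines: an episode counts as a temporarily captured orbiter only if the geocentric position vector sweeps at least one full turn while the object is energetically bound. A sketch of that test on a sampled trajectory, assuming densely sampled and near-planar geocentric motion (names ours):

```python
import numpy as np

def swept_revolutions(r_geo, bound):
    """Angle swept by the geocentric position vector, in revolutions,
    accumulated only over steps where the object is bound.
    r_geo: (n, 3) sampled geocentric positions; bound: (n,) booleans."""
    unit = r_geo / np.linalg.norm(r_geo, axis=1, keepdims=True)
    cosang = np.clip(np.einsum("ij,ij->i", unit[:-1], unit[1:]), -1.0, 1.0)
    steps = np.arccos(cosang)                 # angle swept per time step
    both_bound = bound[:-1] & bound[1:]
    return steps[both_bound].sum() / (2.0 * np.pi)

def capture_class(r_geo, bound):
    """Rickman & Malmort (1981) / Fedorets et al. (2017) terminology."""
    if not bound.any():
        return "no capture"
    return ("temporarily captured orbiter"
            if swept_revolutions(r_geo, bound) >= 1.0
            else "temporarily captured fly-by")
```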
The orbital solution in Table <ref> predicts that in addition to its close approaches to our planet in 2017–2018 (2017 August 7 and 2018 February 11) and 1991–1992 (1991 December 5 and 1992 April 9), 1991 VG experienced similar fly-bys on 1938 August 31 and 1939 March 14, then on 1956 June 14 and 1957 March 26, and more recently on 1974 August 27 and 1975 March 15. The first two dates predate the start of any human space exploration programme, so they can be safely discarded. The date in 1956 follows some suborbital tests, one of them on June 13; the same can be said about 1957: another suborbital flight took place on March 25. It is highly unlikely that debris from suborbital tests could have been able to escape into a heliocentric orbit to return at a later time. There were no documented launches in or around 1975 March 15, but the spacecraft Soyuz 15 was launched on 1974 August 26 at 19:58:05 UTC (Clark 1988; Newkirk 1990). This manned spacecraft failed to dock with the Salyut 3 space station due to some electronic malfunction and returned to the ground safely two days later. One may argue that 1991 VG might be some part of Soyuz 15 (perhaps some stage of the Proton K/D launch system) as the time match is very good, but Soyuz 15 followed a low-Earth orbit and it is extremely unlikely that any of the stages of the heavy-lift launch vehicle (e.g. the second stage, 8S11K, of 14 m and 11 715 kg or the third stage of 6.5 m and 4 185 kg, empty weights) could have escaped the gravitational field of our planet. Although both space debris and active spacecraft have received temporary designations as minor bodies by mistake (see e.g. section 9 of de la Fuente Marcos & de la Fuente Marcos 2015d), objects with initial conditions coming from artificial paths tend to be removed from Earth's orbital neighbourhood rather fast (de la Fuente Marcos & de la Fuente Marcos 2015d). This is to be expected as spacecraft move under mission control commands and trajectories must be corrected periodically. In addition, the orbital solutions of the NEOs mentioned in the previous sections (including that of 1991 VG) did not require the inclusion of any non-gravitational acceleration (e.g. pressure related) to reproduce the available observations, with the exception of two objects, 2006 RH_120 and 2009 BD, for which the effects of the radiation pressure were detected (see e.g. Micheli et al. 2012). Objects of artificial origin are characterized by a low value of their bulk density (the average asteroid density is 2 600 kg m^-3, whereas 2006 RH_120 has 400 kg m^-3 and 2009 BD may have 640 kg m^-3; Micheli et al. 2012) or, conversely, by a high value of their proxy, the Area to Mass Ratio (AMR), which may be >10^-3 m^2 kg^-1 for an artificial object. The lowest values of the bulk density of natural objects are linked to the presence of highly porous rocky materials. The density of a fully loaded 8S11K was about 886 kg m^-3 and an empty one was significantly less dense at perhaps 62 kg m^-3 (as a hollow metallic shell); the AMR of an empty 8S11K is close to 5.4×10^-3 m^2 kg^-1. The AMR values of the NEOs cited in this work are compatible with a natural origin for all of them (including 1991 VG). Reproducing the paths followed by objects of artificial origin (e.g. space debris) requires the inclusion of non-gravitational accelerations in order to properly account for the observational data; this is also applicable to inert or active spacecraft and very likely to any putative extraterrestrial artefact.
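The bulk-density and AMR figures quoted for the empty 8S11K stage can be checked with a back-of-the-envelope model. The 4.15 m diameter used below is our assumption (only the length and empty mass are quoted above), and the mean projected area of a randomly tumbling convex body is taken to be one quarter of its surface area (Cauchy's formula):

```python
import numpy as np

length_m, diameter_m, mass_kg = 14.0, 4.15, 11_715.0   # empty 8S11K stage
radius = 0.5 * diameter_m

volume = np.pi * radius**2 * length_m        # stage modelled as a cylinder
bulk_density = mass_kg / volume              # ~ 62 kg m^-3

surface = 2.0 * np.pi * radius * (length_m + radius)
amr = 0.25 * surface / mass_kg               # ~ 4.5e-3 m^2 kg^-1
print(f"density ~ {bulk_density:.0f} kg m^-3, AMR ~ {amr:.1e} m^2 kg^-1")
```

The density reproduces the 62 kg m^-3 quoted above, and the AMR lands within roughly 20 per cent of the quoted 5.4×10^-3 m^2 kg^-1; the residual difference presumably reflects the true stage geometry.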
As 1991 VG does not exhibit any of the properties characteristic of artificial objects, it must be natural. § DISCUSSION The data review and analyses carried out in the previous sections show that, although certainly unusual, the orbital properties and the dynamical evolution of 1991 VG are not as remarkable as originally thought. Although initially regarded as a mystery object, it is in fact less of a puzzle and more of a dynamically complex NEO, one of a few dozens which roam temporarily in the neighbourhood of the path of the Earth. Steel (1995a, 1998) hypothesized that 1991 VG could be an alien-made object, some type of self-replicating probe akin to that in von Neumann's concept. In perspective, this imaginative conjecture may indeed be compatible with what is observed in the sense that if an inert fleet of this type of alien-made objects is actually moving in Earth-like orbits, they would behave as an equivalent population of natural objects (i.e. NEOs) moving in 1991 VG-like orbits. The presence of a present-day active (i.e. under intelligent control) fleet of alien probes can be readily discarded because the observed objects do not appear to be subjected to any non-gravitational accelerations other than those linked to radiation pressure and perhaps the Yarkovsky effect, and also because of the lack of detection of any kind of alien transmissions. An inert (or in hibernation mode) fleet of extraterrestrial artefacts would be dynamically indistinguishable from a population of NEOs if the values of their AMRs are low enough. However, and adopting Occam's razor, when there exist two explanations for an observed event, the simpler one must be given precedence. Human space probes, radar, spectroscopic and photometric observations performed over several decades have all shown that, in the orbital neighbourhood of the Earth, it is far more common to detect space rocks than alien artefacts. Scotti & Marsden (1991) and West et al. (1991) used orbital and photometric data to argue that 1991 VG could be a human-made object, a piece of rocket hardware or an old spacecraft that was launched many decades ago. The orbital evolutions of relics of human space exploration exhibit a number of traits that are distinctive, if not actually unique, and the available observational evidence indicates that none of them are present in the case of 1991 VG and related objects; almost certainly, 1991 VG was never launched from the Earth. The putative fast rotation period and large-amplitude light curve reported in West et al. (1991) could be compatible with 1991 VG being the result of a relatively recent fragmentation event, where the surface is still fresh and an elongated boulder is tumbling rapidly; 2014 WA_366 has an orbit quite similar to that of 1991 VG, perhaps they are both fragments of a larger object. Tancredi (1997) put forward a novel hypothesis for the origin of 1991 VG: ejecta from a recent lunar impact. Our calculations strongly suggest that objects moving in 1991 VG-like orbits may not be able to remain in this type of orbit for an extended period of time. These integrations indicate that, perhaps, the present-day 1991 VG is less than 10 kyr old in dynamical terms, but impacts on the Moon capable of ejecting objects the size of 1991 VG are nowadays very, very rare. Brasser & Wiegert (2008) have studied this issue in detail and the last time that a cratering event powerful enough to produce debris consistent with 1991 VG took place on the Moon could be about 1 Myr ago.
Unless we assume that 1991 VG was born that way, then left the orbital neighbourhood of the Earth, and recently was reinserted there, the presence of 1991 VG is difficult to reconcile with an origin as lunar ejecta. After discarding an artificial (alien or human) or lunar origin, the option that remains is the most natural one: 1991 VG could be an unusual but not uncommon NEO. We know of dozens of relatively well-studied NEOs that move in Earth-like orbits and our analysis shows that three of them follow 1991 VG-like orbits. All these orbits are characterized by relatively high probabilities of experiencing temporary captures as satellites of the Earth and also of becoming trapped in a 1:1 mean motion resonance with our planet, in a recurrent manner for both cases. We have robust numerical evidence that 1991 VG has been a co-orbital and a satellite of our planet in the past and our calculations predict that these events will repeat in the future. Therefore, the peculiar dynamics of 1991 VG is not so remarkable after all, when studied within the context of other NEOs moving in Earth-like orbits. In addition to the few discussed here, multiple examples of these behaviours can be found in the works by de la Fuente Marcos & de la Fuente Marcos (2013, 2015a,b, 2016a,b). NEOs moving in 1991 VG-like orbits tend to spend less than one month as satellites (i.e. inside the Hill sphere) of our planet and follow paths of the horseshoe type when moving co-orbital (i.e. outside the Hill radius) to our planet; they appear to avoid the Trojan and quasi-satellite resonant states (see e.g. de la Fuente Marcos & de la Fuente Marcos 2014, 2015a) perhaps because of their comparatively wide semimajor axes relative to that of the Earth. Another unusual dynamical property of 1991 VG and related objects is that of being subjected to the Kozai-Lidov mechanism (see Fig. <ref>, second to bottom panel, libration in ω), at least for brief periods of time. This is also a common behaviour observed for many NEOs moving in Earth-like orbits (see e.g. de la Fuente Marcos & de la Fuente Marcos 2015d). In a seminal work, Rabinowitz et al. (1993) argued that 1991 VG and other NEOs were signalling the presence of a secondary asteroid belt around the path of our planet. These objects are unofficially termed the Arjunas and are a loosely resonant family of small NEOs which form the near-Earth asteroid belt. It is difficult to imagine that all these objects were formed as they are today within the main asteroid belt and eventually found their way to the NEO population unchanged. None the less, the Hungarias have been suggested as a possible direct source for this population (Galiazzo & Schwarz 2014). NEO orbit models like the one used here show that this is possible, but it is unclear whether the known delivery routes are efficient enough to explain the size of the current population of small NEOs, and in particular the significant number of Arjunas like 1991 VG. Fragments can also be produced in situ (i.e. in the neighbourhood of the path of the Earth) by multiple mechanisms: subcatastrophic impacts (see e.g. Durda et al. 2007), tidal disruptions after close encounters with planets (see e.g. Schunová et al. 2014) or the action of the Yarkovsky–O'Keefe–Radzievskii–Paddack (YORP) mechanism (see e.g. Bottke et al. 2006). These processes can generate dynamically coherent groups of genetically related objects —although YORP spin-up is considered dominant (see e.g. Jacobson et al.
2016)— which may randomize their orbits in a relatively short time-scale as they move through an intrinsically chaotic environment (see Fig. <ref>). In addition, the superposition of mean motion and secular resonances creates dynamical families of physically unrelated objects (see e.g. de la Fuente Marcos & de la Fuente Marcos 2016c) which intertwine with true families resulting from fragmentation. On a more practical side, all these objects are easily accessible targets for any planned NEO sample-return missions (see e.g. García Yárnoz et al. 2013) or even commercial mining (see e.g. Lewis 1996; Stacey & Connors 2009). Bolin et al. (2014) have predicted that, while objects like 1991 VG or 2006 RH_120 are extremely challenging to discover using the currently available telescope systems, the Large Synoptic Survey Telescope or LSST (see e.g. Chesley et al. 2009) may be able to start discovering them on a monthly basis when in full operation, commencing in January 2022. Our independent assessment of the current dynamical status and short-term orbital evolution of 1991 VG leads us to arrive at the same basic conclusion reached by Brasser & Wiegert (2008): asteroid 1991 VG had to have its origin on a low-inclination Amor- or Apollo-class object. However, and given its size, it must be a fragment of a larger object and as such it may have been produced in situ, i.e. within the orbital neighbourhood of the Earth–Moon system, during the relatively recent past (perhaps a few kyr ago). § CONCLUSIONS In this paper, we have studied the dynamical evolution of 1991 VG, an interesting and controversial NEO. This investigation has been carried out using N-body simulations and statistical analyses. Our conclusions can be summarized as follows. * Asteroid 1991 VG currently moves in a somewhat Earth-like orbit, but it is not an Earth co-orbital now. It has been a transient co-orbital of the horseshoe type in the past and it will return as such in the future. * Extensive N-body simulations confirm that the orbit of 1991 VG is chaotic on time-scales longer than a few decades. * Our calculations confirm that 1991 VG was a natural satellite of our planet for about one month in 1992 and show that this situation may have repeated multiple times in the past and it is expected to happen again in the future. Being a recurrent ephemeral natural satellite of the Earth is certainly unusual, but a few other known NEOs exhibit this behaviour as well. * A realistic NEO orbit model shows that, although quite improbable, the presence of objects moving in 1991 VG-like orbits is not impossible within the framework defined by our current understanding of how minor bodies are delivered from the main asteroid belt to the NEO population. * Consistently, we find three other minor bodies —2001 GP_2, 2008 UA_202 and 2014 WA_366— that move in orbits similar to that of 1991 VG. * NEOs moving in 1991 VG-like orbits have a probability close to 0.004 of becoming transient irregular natural satellites of our planet. * Our results show that, although featuring unusual orbital properties and dynamics, there is no compelling reason to consider that 1991 VG could be a relic of human space exploration and it is definitely not an alien artefact or probe. The remarkable object 1991 VG used to be considered mysterious and puzzling, but the new data cast serious doubt on any possible origin for this NEO other than a natural one. We find no evidence whatsoever of an extraterrestrial or intelligent origin for this object.
Spectroscopic studies during its next perigee on 2018 February may be able to provide better constraints about its most plausible source, in particular whether it is a recent fragment or not.§ ACKNOWLEDGEMENTS We thank the referee, M. Granvik, for his constructive, thorough and very helpful reports, S. J. Aarseth for providing the code used in this research, A. I. Gómez de Castro, I. Lizasoain and L. Hernández Yáñez of the Universidad Complutense de Madrid (UCM) for providing access to computing facilities. This work was partially supported by the Spanish `Ministerio de Economía y Competitividad' (MINECO) under grant ESP2014-54243-R. Part of the calculations and the data analysis were completed on the EOLO cluster of the UCM, and we thank S. Cano Alsúa for his help during this stage. EOLO, the HPC of Climate Change of the International Campus of Excellence of Moncloa, is funded by the MECD and MICINN. This is a contribution to the CEI Moncloa. In preparation of this paper, we made use of the NASA Astrophysics Data System, the ASTRO-PH e-print server, and the MPC data server.99[Aarseth2003]2003gnbs.book.....A Aarseth S. J., 2003,Gravitational N-body Simulations.Cambridge Univ. Press, Cambridge, p. 27[Astakhov et al.2003]2003Natur.423..264A Astakhov S. A., Burbanks A. D., Wiggins S., Farrelly D., 2003, Nature, 423, 264[Baoyin, Chen & Li2010]2010RAA....10..587B Baoyin H.-X., Chen Y., Li J.-F., 2010, Res. Astron. Astrophys., 10, 587[Bolin et al.2014]2014Icar..241..280B Bolin B. et al., 2014, Icarus, 241, 280[Bottke et al.2006]2006AREPS..34..157B Bottke W. F., Jr., Vokrouhlický D., Rubincam D. P.,Nesvorný D., 2006,Annu. Rev. Earth Planet. Sci., 34, 157[Bottke et al.2014]2014AGUFM.P12B..09B Bottke W. F., Jr. et al., 2014, Am. Geophys. Union, Fall Meeting 2014, Abstract No. P12B-09[Brasser & Wiegert2008]2008MNRAS.386.2031B Brasser R., Wiegert P., 2008, MNRAS, 386, 2031[Brelsford et al.2016]2016P SS..123....4B Brelsford S., Chyba M., Haberkorn T., Patterson G., 2016, Planet. Space Sci., 123, 4[Bressi et al.2008a]2008MPEC....D...12B Bressi T. H. et al., 2008a, MPEC Circ., MPEC 2008-D12.[Bressi et al.2008b]2008MPEC....V...06B Bressi T. H. et al., 2008b, MPEC Circ., MPEC 2008-V06.[Bressi, Scotti & Hug2013]2013MPEC....B...72B Bressi T. H., Scotti J. V., Hug G., 2013, MPEC Circ., MPEC 2013-B72.[Brown et al.2013]2013Natur.503..238B Brown P. G. et al., 2013, Nature, 503, 238[Buzzi et al.2009]2009MPEC....B...14B Buzzi L. et al., 2009, MPEC Circ., MPEC 2009-B14.[Carusi & Valsecchi1979]1979RSAI...22..181C Carusi A., Valsecchi G. B., 1979, Riun. Soc. Astron. Ital., 22, 181[Chamberlin et al.2001]2001DPS....33.4108C Chamberlin A. B., Chesley S. R., Chodas P. W., Giorgini J. D.,Keesey M. S., Wimberly R. N., Yeomans D. K., 2001,AAS/Div. Planet. Sci. Meeting Abstr., 33, 41.08[Chesley et al.2009]2009AAS...21346017C Chesley S. R., Brown M. E., Durech J., Harris A. W., Ivezić Ž., Jones R. L., Knežević Z., LSST Solar System Science Collaboration, 2009, AAS Meeting Abstr., 41, 460.17[Chodas2015]2015DPS....4721409C Chodas P., 2015,AAS/Div. Planet. Sci. Meeting Abstr., 47, 214.09[Clark1988]Clark1988 Clark P., 1988,The Soviet Manned Space Program.Orion Books, New York[Clark et al.2016]2016AJ....151..135C Clark D. L., Spurný P., Wiegert P., Brown P., Borovička J., Tagliaferri E., Shrbený L., 2016, AJ, 151, 135[Connors et al.2002]2002M PS...37.1435C Connors M., Chodas P., Mikkola S., Wiegert P., Veillet C., Innanen K., 2002, Meteorit. Planet. 
Sci., 37, 1435[Connors et al.2004]2004M PS...39.1251C Connors M., Veillet C., Brasser R., Wiegert P., Chodas P., Mikkola S., Innanen K., 2004, Meteorit. Planet. Sci., 39, 1251[Cowen1993]1993SciN..143..117C Cowen R., 1993, Sci. News, 143, 117[de la Fuente Marcos & de la Fuente Marcos2012]2012MNRAS.427..728D de la Fuente Marcos C.,de la Fuente Marcos R., 2012,MNRAS, 427, 728[de la Fuente Marcos & de la Fuente Marcos2013]2013MNRAS.434L...1D de la Fuente Marcos C., de la Fuente Marcos R., 2013, MNRAS, 434, L1[de la Fuente Marcos & de la Fuente Marcos2014]2014MNRAS.445.2961D de la Fuente Marcos C., de la Fuente Marcos R., 2014, MNRAS, 445, 2985[de la Fuente Marcos & de la Fuente Marcos2015a]2015AN....336....5D de la Fuente Marcos C., de la Fuente Marcos R., 2015a, Astron. Nachr., 336, 5[de la Fuente Marcos & de la Fuente Marcos2015b]2015MNRAS.446L..31D de la Fuente Marcos C., de la Fuente Marcos R., 2015b, MNRAS, 446, L31[de la Fuente Marcos & de la Fuente Marcos2015c]2015MNRAS.453.1288D de la Fuente Marcos C., de la Fuente Marcos R., 2015c, MNRAS, 453, 1288[de la Fuente Marcos & de la Fuente Marcos2015d]2015A A...580A.109D de la Fuente Marcos C., de la Fuente Marcos R., 2015d, A&A, 580, A109[de la Fuente Marcos & de la Fuente Marcos2016a]2016Ap SS.361...16D de la Fuente Marcos C., de la Fuente Marcos R., 2016a, Ap&SS, 361, 16[de la Fuente Marcos & de la Fuente Marcos2016b]2016Ap SS.361..121D de la Fuente Marcos C., de la Fuente Marcos R., 2016b, Ap&SS, 361, 121[de la Fuente Marcos & de la Fuente Marcos2016c]2016MNRAS.456.2946D de la Fuente Marcos C., de la Fuente Marcos R., 2016c, MNRAS, 456, 2946[de la Fuente Marcos, de la Fuente Marcos & Mialle2016]2016Ap SS.361..358D de la Fuente Marcos C., de la Fuente Marcos R., Mialle P., 2016, Ap&SS, 361, 358[Drummond1981]1981Icar...45..545D Drummond J. D., 1981, Icarus, 45, 545[Durda et al.2007]2007Icar..186..498D Durda D. D., Bottke W. F., Nesvorný D., Enke B. L., Merline W. J., Asphaug E., Richardson D. C., 2007, Icarus, 186, 498[Emel'yanenko2015]2015SoSyR..49..404E Emel'yanenko N. Y., 2015, Sol. Sys. Res., 49, 404[Fedorets, Granvik, & Jedicke2017]2017Icar..285...83F Fedorets G., Granvik M., Jedicke R., 2017, Icarus, 285, 83[Foglia & Masi2004]2004MPBu...31..100F Foglia S., Masi G., 2004, MPBu, 31, 100[Galiazzo & Schwarz2014]2014MNRAS.445.3999G Galiazzo M. A., Schwarz R., 2014, MNRAS, 445, 3999[García Yárnoz, Sanchez & McInnes2013]2013CeMDA.116..367G García Yárnoz D., Sanchez J. P., McInnes C. R., 2013, Celest. Mech. Dyn. Astron., 116, 367[Gibbs et al.2014]2014MPEC....X...13G Gibbs A. R. et al., 2014, MPEC Circ., MPEC 2014-X13.[Gilmore et al.2008]2008MPEC....K...48G Gilmore A. C. et al., 2008, MPEC Circ., MPEC 2008-K48.[Giorgini2011]2011jsrs.conf...87G Giorgini J., 2011,in Capitaine N., ed.,Proc. Journées 2010 `Systèmes de référence spatio-temporels' (JSR2010),New Challenges for Reference Systems and Numerical Standards in Astronomy.Observatoire de Paris, Paris, p. 87[Giorgini2015]2015IAUGA..2256293G Giorgini J. D., 2015,IAU Gen. Assem. Meeting, 29, 2256293[Giorgini & Yeomans1999]GY99 Giorgini J. D., Yeomans D. K., 1999,On-Line System Provides Accurate Ephemeris and Related Data,NASA TECH BRIEFS, NPO-20416, p. 48[Giorgini et al.1996]1996DPS....28.2504G Giorgini J. D. et al., 1996,BAAS, 28, 1158[Giorgini, Chodas, & Yeomans2001]2001DPS....33.5813G Giorgini J. D., Chodas P. W., Yeomans D. 
K., 2001,BAAS, 33, 1562[Gladman, Michel, & Froeschlé2000]2000Icar..146..176G Gladman B., Michel P., Froeschlé C., 2000, Icarus, 146, 176[Granvik, Vaubaillon & Jedicke2012]2012Icar..218..262G Granvik M., Vaubaillon J., Jedicke R., 2012, Icarus, 218, 262[Granvik et al.2013a]2013EPSC....8..999G Granvik M. et al., 2013a, Eur. Planet. Sci. Congr. 2013, 8, EPSC2013-999[Granvik et al.2013b]2013DPS....4510602G Granvik M. et al., 2013b, AAS/Div. Planet. Sci. Meeting Abstr., 45, 106.02[Hainaut, Koschny & Micheli2017]2017MPEC....L...02H Hainaut O., Koschny D., Micheli M., 2017,MPEC Circ., MPEC 2017-L02.[Holmes et al.2015]2015MPEC....P...64H Holmes R., Foglia S., Buzzi L., Linder T., 2015, MPEC Circ., MPEC 2015-P64.[Jacobson et al.2016]2016Icar..277..381J Jacobson S. A., Marzari F., Rossi A., Scheeres D. J., 2016, Icarus, 277, 381[Jedicke et al.2016]2016IAUS..318...86J Jedicke R., Bolin B., Bottke W. F., Chyba M., Fedorets G., Granvik M., Patterson G., 2016, in Chesley S. R., Morbidelli A., Jedicke R., Farnocchia D., eds, Proc. IAU Symp. 318, Asteroids: New Observations, New Models.Cambridge Univ. Press, Cambridge, p. 86[Kary & Dones1996]1996Icar..121..207K Kary D. M., Dones L., 1996, Icarus, 121, 207[Knežević & Milani2000]2000CeMDA..78...17K Knežević Z., Milani A., 2000, Celest. Mech. Dyn. Astron., 78, 17[Kozai1962]1962AJ.....67..591K Kozai Y., 1962, AJ, 67, 591[Kwiatkowski et al.2009]2009A A...495..967K Kwiatkowski T. et al., 2009, A&A, 495, 967[Lewis1996]LW96 Lewis J. S., 1996,Mining the Sky: Untold Riches from the Asteroids, Comets, and Planets.Addison-Wesley, Reading, p. 82[Lidov1962]1962P SS....9..719L Lidov M. L., 1962, Planet. Space Sci., 9, 719[Lindblad1994]1994ASPC...63...62L Lindblad B. A., 1994, in Kozai Y., Binzel R. P., Hirayama T., eds, ASP Conf. Ser. Vol. 63,Seventy-five Years of Hirayama Asteroid Families: the Role of Collisions in the Solar System History.Astron. Soc. Pac., San Francisco, p. 62[Lindblad & Southworth1971]1971NASSP.267..337L Lindblad B. A., Southworth R. B., 1971, in Gehrels T., ed., Proc. IAU Colloq. 12: Physical Studies of Minor Planets.Univ. Sydney, Tucson, AZ, p. 337[Makino1991]1991ApJ...369..200M Makino J., 1991,ApJ, 369, 200[McGaha et al.2006]2006MPEC....J...38M McGaha J. E. et al., 2006, MPEC Circ., MPEC 2006-J38.[McMillan et al.2001]2001MPEC....H...04M McMillan R. S., Scotti J. V., Gleason A. E., Spahr T. B., 2001, MPEC Circ., MPEC 2001-H04.[McNaught et al.2003]2003MPEC....Y...76M McNaught R. H. et al., 2003, MPEC Circ., MPEC 2003-Y76.[Micheli, Tholen & Elliott2012]2012NewA...17..446M Micheli M., Tholen D. J., Elliott G. T., 2012, New Astron., 17, 446[Milani1993]1993CeMDA..57...59M Milani A., 1993, Celest. Mech. Dyn. Astron., 57, 59[Milani1995]1995ASIB..336...47M Milani A., 1995, in Roy A. E., Steves B. A., eds,From Newton to Chaos. Modern Techniques for Understanding and Coping with Chaos in N-body Dynamical Systems, Vol. 336.Plenum Press, New York, p. 47[Milani & Knežević1994]1994Icar..107..219M Milani A., Knežević Z., 1994, Icarus, 107, 219[Milani, Chesley & Valsecchi1999]1999A A...346L..65M Milani A., Chesley S. R., Valsecchi G. B., 1999, A&A, 346, L65[Milani et al.2014]2014Icar..239...46M Milani A., Cellino A., Knežević Z., Novaković B., Spoto F., Paolicchi P., 2014, Icarus, 239, 46[Milani et al.2017]2017Icar..288..240M Milani A., Knežević Z., Spoto F., Cellino A., Novaković B., Tsirvoulis G., 2017, Icarus, 288, 240[Murray & Dermott1999]1999ssd..book.....M Murray C. D., Dermott S. F., 1999,Solar System Dynamics.Cambridge Univ. Press, Cambridge, p. 
97[Nesvorný, Vokrouhlický & Morbidelli2007]2007AJ....133.1962N Nesvorný D., Vokrouhlický D., Morbidelli A., 2007, AJ, 133, 1962[Newkirk1990]Newkirk1990 Newkirk D., 1990,Almanac of Soviet Manned Space Flight.Gulf Publishing Company, Texas[Nugent et al.2012]2012AJ....144...60N Nugent C. R., Margot J. L., Chesley S. R., Vokrouhlický D., 2012, AJ, 144, 60[Popova et al.2013]2013Sci...342.1069P Popova O. P. et al., 2013, Science, 342, 1069[Rabinowitz et al.1993]1993Natur.363..704R Rabinowitz D. L. et al., 1993, Nature, 363, 704[Rickman & Malmort1981]1981A A...102..165R Rickman H., Malmort A. M., 1981, A&A, 102, 165[Schunová et al.2014]2014Icar..238..156S Schunová E., Jedicke R., Walsh K. J., Granvik M., Wainscoat R. J., Haghighipour N., 2014, Icarus, 238, 156[Scotti & Marsden1991]1991IAUC.5387....1S Scotti J. V., Marsden B. G., 1991, IAU Circ., 5387, 1[Scotti et al.1991]1991IAUC.5401....2S Scotti J. et al., 1991, IAU Circ., 5401, 2[Smalley et al.2002]2002MPEC....A...92S Smalley K. et al., 2002, MPEC Circ., MPEC 2002-A92.[Southworth & Hawkins1963]1963SCoA....7..261S Southworth R. B., Hawkins G. S., 1963, Smithsonian Contrib. Astrophys., 7, 261 [Stacey & Connors2009]2009P SS...57..822S Stacey R. G., Connors M., 2009,Planet. Space Sci., 57, 822[Standish1998]ST98 Standish E. M., 1998,JPL Planetary and Lunar Ephemerides, DE405/LE405,Interoffice Memo. 312.F-98-048, Jet Propulsion Laboratory, Pasadena, California[Steel1995a]1995Obs...115...78S Steel D., 1995a, The Observatory, 115, 78[Steel1995b]1995MNRAS.273.1091S Steel D. I., 1995b, MNRAS, 273, 1091[Steel1998]1998Obs...118..226S Steel D., 1998, The Observatory, 118, 226[Tancredi1997]1997CeMDA..69..119T Tancredi G., 1997, Celest. Mech. Dyn. Astron., 69, 119[Tancredi, Lindgren & Rickman1990]1990A A...239..375T Tancredi G., Lindgren M., Rickman H., 1990, A&A, 239, 375[Tatum1997]1997JRASC..91..276T Tatum J. B., 1997, J. R. Astron. Soc. Canada, 91, 276[Vaduvescu et al.2014]2014MPEC....P...23V Vaduvescu O. et al., 2014, MPEC Circ., MPEC 2014-P23.[Vaduvescu et al.2015]2015MNRAS.449.1614V Vaduvescu O. et al., 2015, MNRAS, 449, 1614[Valsecchi, Jopek, & Froeschle1999]1999MNRAS.304..743V Valsecchi G. B., Jopek T. J., Froeschle C., 1999, MNRAS, 304, 743[Weiler1996]1996Obs...116..316W Weiler H., 1996, The Observatory, 116, 316[Weiler1998]1998Obs...118..226W Weiler H., 1998, The Observatory, 118, 226[West et al.1991]1991IAUC.5402....1W West R. M., Wisniewski W., McDowell J., Rast R., Chodas P., Marsden B. G., 1991, IAU Circ., 5402, 1[Wiegert, Innanen & Mikkola1998]1998AJ....115.2604W Wiegert P. A., Innanen K. A., Mikkola S., 1998, AJ, 115, 2604
http://arxiv.org/abs/1709.09533v2
{ "authors": [ "C. de la Fuente Marcos", "R. de la Fuente Marcos" ], "categories": [ "astro-ph.EP" ], "primary_category": "astro-ph.EP", "published": "20170927140237", "title": "Dynamical evolution of near-Earth asteroid 1991 VG" }
Kucerovsky's theorem provides a method for recognizing the interior Kasparov product of selfadjoint unbounded cycles. In this paper we extend Kucerovsky's theorem to the non-selfadjoint setting by replacing unbounded Kasparov modules with Hilsum's half-closed chains. On our way we show that any half-closed chain gives rise to a multitude of twisted selfadjoint unbounded cycles via a localization procedure. These unbounded modular cycles allow us to provide verifiable criteria avoiding any reference to domains of adjoints of symmetric unbounded operators. § INTRODUCTION In recent years a lot of attention has been given to the non-unital framework for noncommutative geometry, where the absence of a unit is interpreted as a non-compactness condition on the underlying noncommutative space, <cit.>. For a more detailed analysis of the non-compact setting it is important to distinguish between the complete and the non-complete case, <cit.>. Whereas the complete case is still modelled by a (non-unital) spectral triple or more generally an unbounded Kasparov module, the lack of completeness leads to the non-selfadjointness of symmetric differential operators. A noncommutative geometric framework that captures the non-complete setting is provided by Hilsum's notion of a half-closed chain, where the selfadjointness condition on the unbounded operator is replaced by a more flexible symmetry condition, <cit.>. This framework is supported by results of Baum, Douglas, Taylor and Hilsum showing that any first-order symmetric elliptic differential operator on any Riemannian manifold gives rise to a half-closed chain, <cit.>. Unbounded Kasparov modules give rise to classes in Kasparov's KK-theory via the Baaj-Julg bounded transform and this result has been extended by Hilsum to cover half-closed chains, <cit.>. This transform contains information about the algebraic topology of the original geometric situation described by a half-closed chain. The main structural property of Kasparov's KK-theory is the interior Kasparov product, <cit.>: ⊗_B : KK(A,B) × KK(B,C) → KK(A,C). The interior Kasparov product is however not explicitly constructed and it is therefore important to develop tools for computing the interior Kasparov product of two given Kasparov modules. Given three classes in KK-theory, Connes and Skandalis developed suitable conditions for verifying whether one of these three classes factorizes as an interior Kasparov product of the remaining two classes, <cit.>. The conditions of Connes and Skandalis were translated to the unbounded setting by Kucerovsky, <cit.>. Thus, given three unbounded Kasparov modules, Kucerovsky's theorem provides criteria for verifying that one of these unbounded Kasparov modules factorizes as an unbounded Kasparov product of the remaining two unbounded Kasparov modules. In many cases, the conditions are easier to verify directly at the unbounded level, using Kucerovsky's theorem, instead of first applying the bounded transform and then relying on the results of Connes and Skandalis.
Indeed, in the unbounded setting we are usually working with first-order differential operators whereas their bounded transforms are zeroth-order pseudo-differential operators involving a square root of the resolvent.In this paper we extend Kucerovsky's theorem to cover the non-complete setting, where the unbounded Kasparov modules are replaced by half-closed chains. The main challenge in carrying out such a task is that the domain of the adjoint of a symmetric unbounded operator can be difficult to describe. The original proof of Kucerovsky therefore does not translate to the non-selfadjoint setting, since the correct conditions have to be formulated without any reference to maximal domains of symmetric unbounded operators. The main technique that we apply is a localization procedure related to the work of the first author in <cit.>. This procedure allows us to pass from a symmetric regular unbounded operator D to an essentially selfadjoint regular unbounded operator of the form x D x^* for an appropriate bounded adjointable operator x. In the case where D is a Dirac operator, the localization corresponds to a combination of two operations: restricting all data to an open subset and passing from the non-complete Riemannian metric on this open subset to a conformally equivalent but complete Riemannian metric. The size of the open neighborhood and the relevant conformal factor are both determined by the positive function xx^*. In particular, our technique allows us to construct a multitude of unbounded modular cycles out of a given half-closed chain. We interpret this localization procedure in terms of the unbounded Kasparov product by the module generated by the localizing element x. In this way, we may work with selfadjoint unbounded operators and hence eliminate the difficulties relating to the description of maximal domains. On the other hand, the “conformal factor” (xx^*)^-2 produces a twist of the commutator condition and this twist is described by the modular automorphism σ(·) = (xx^*)( · )(xx^*)^-1. We refer to Connes and Moscovici for further discussion of this issue in the case where x is positive and invertible, see <cit.>. The present paper is motivated by the geometric setting of a proper Riemannian submersion of spin^c-manifolds, and the criteria that we develop here have already been applied in <cit.> to obtain factorization results involving the corresponding fundamental classes in KK-theory.Our results may also be of importance for the further development of the unbounded Kasparov product as initiated by Connes in <cit.> and developed further by Mesland and others in <cit.>.The structure of this paper is as follows: In Section <ref> and Section <ref> we review the concepts of a half-closed chain and of an unbounded modular cycle. In Section <ref>, Section <ref> and Section <ref> we prove our results on the localization procedure and investigate how it relates to the Kasparov product. In Section <ref> we prove Kucerovsky's theorem for half-closed chains.§.§ AcknowledgementsWe would like to thank Georges Skandalis for a highly stimulating remark concerning the “locality” of Kucerovsky's theorem.This work also benefited from various conversations with Magnus Goffeng and Bram Mesland.The authors gratefully acknowledge the Syddansk Universitet Odense and the Radboud University Nijmegen for their financial support in facilitating this collaboration. During the initial stages of this research project the first author was supported by the Radboud excellence fellowship.
The first author was partially supported by the DFF-Research Project 2 “Automorphisms and Invariants of Operator Algebras”, no. 7014-00145B and by the Villum Foundation (grant 7423).The second author was partially supported by NWO under VIDI-grant 016.133.326. § HALF-CLOSED CHAINSLet us fix two σ-unital C^*-algebras A and B.Let E be a countably generated Hilbert C^*-module over B. We recall that a closed (densely defined) unbounded operator D : Dom(D) → E is said to be regular when it has a densely defined adjoint D^* : Dom(D^*) → E and when 1 + D^* D : Dom(D^* D) → E has dense range. It follows from this definition that 1 + D^* D : Dom(D^* D) → E is in fact densely defined and surjective, <cit.>. In particular we have a bounded adjointable inverse (1 + D^* D)^-1 : E → E.For two countably generated Hilbert C^*-modules E and F over B, we let 𝕃(E,F) and 𝕂(E,F) denote the bounded adjointable operators from E to F and the compact operators from E to F, respectively. When E = F we put 𝕃(E) := 𝕃(E,F) and 𝕂(E) := 𝕂(E,F). We let ‖·‖_∞ : 𝕃(E,F) → [0,∞) denote the operator norm.The following definition is due to Hilsum, <cit.>: A half-closed chain from A to B is a triple (𝒜,E,D), where 𝒜 ⊆ A is a norm-dense *-subalgebra, E is a countably generated C^*-correspondence from A to B and D : Dom(D) → E is a closed, symmetric and regular unbounded operator such that * a(1 + D^* D)^-1 is a compact operator on E for all a ∈ A;* a( Dom(D^*) ) ⊆ Dom(D) for all a ∈ 𝒜;* [D,a] : Dom(D) → E extends to a bounded operator d(a) : E → E for all a ∈ 𝒜. A half-closed chain (𝒜,E,D) from A to B is said to be even when E comes equipped with a ℤ/2-grading operator γ : E → E (γ = γ^*, γ^2 = 1), such that [a,γ] = 0 for all a ∈ A and Dγ = -γD.A half-closed chain which is not even is said to be odd. Let (𝒜,E,D) be a half-closed chain from A to B. A few observations are in place:* d(a) : E → E, a ∈ 𝒜, is automatically adjointable with d(a)^* = - d(a^*).* The difference D a - a D^* : Dom(D^*) → E, a ∈ 𝒜, extends to the bounded adjointable operator d(a) : E → E.* a(1 + D D^*)^-1 ∈ 𝕂(E) for all a ∈ A. (Remark that D^* is automatically regular by <cit.>). We recall that a Kasparov module from A to B is a pair (E,F) where E is a countably generated C^*-correspondence from A to B and F : E → E is a bounded adjointable operator such that a( F - F^*) , a(F^2 - 1) , [F,a] ∈ 𝕂(E) , for all a ∈ A. A Kasparov module (E,F) from A to B is even when it comes equipped with a ℤ/2-grading operator γ : E → E such that [a,γ] = 0 for all a ∈ A and Fγ + γF = 0. Otherwise we say that (E,F) is odd.For an unbounded regular operator D : Dom(D) → E we let F_D := D(1 + D^* D)^-1/2 ∈ 𝕃(E) denote the bounded transform of D. We have that (F_D)^* = F_D^* = D^*(1 + D D^*)^-1/2, where F_D^* denotes the bounded transform of D^*.The next result creates the main link between half-closed chains and Kasparov modules. This result is due to Hilsum, <cit.>, and it generalizes the corresponding result of Baaj and Julg for unbounded Kasparov modules, <cit.>. Remark however that the condition [F_D,a] ∈ 𝕂(E), a ∈ A, is for some reason left unproved in <cit.>. We therefore give a full proof of this commutator condition here: Suppose that (𝒜,E,D) is a half-closed chain from A to B. Then (E,F_D) is a Kasparov module from A to B of the same parity as (𝒜,E,D) and with the same ℤ/2-grading operator γ : E → E in the even case. We have to show that [F_D,a] ∈ 𝕂(E) for all a ∈ A.
Since the *-algebra 𝒜 ⊆ A is dense in C^*-norm and since the C^*-algebra 𝕂(E) ⊆ 𝕃(E) is closed in operator norm it suffices to show that [F_D,a]b ∈ 𝕂(E) for all a,b ∈ 𝒜.We recall that (1 + D^* D)^-1/2 = 1/π ∫_0^∞ λ^-1/2 (1 + λ + D^* D)^-1 dλ, where the integral converges absolutely in operator norm and where the integrand is continuous in operator norm. Remark here that ‖(1 + λ + D^* D)^-1‖_∞ ≤ (1 + λ)^-1 for all λ ≥ 0. For a ∈ 𝒜 and λ ≥ 0 we then compute that [D (1 + λ + D^* D)^-1 , a ] = - D D^* (1 + λ + D D^*)^-1 d(a) (1 + λ + D^* D)^-1 - D (1 + λ + D^* D)^-1 d(a) D (1 + λ + D^* D)^-1 + d(a)(1 + λ + D^* D)^-1. In particular, it holds for each a,b ∈ 𝒜 that the map M : (0,∞) → 𝕃(E), M(λ) := λ^-1/2 [D (1 + λ + D^* D)^-1 , a ] b is continuous in operator norm and that M(λ) ∈ 𝕂(E) for all λ ∈ (0,∞). Moreover, we have the estimate ‖M(λ)‖_∞ ≤ 3 λ^-1/2 ‖d(a)‖_∞ ‖b‖ (1 + λ)^-1, for all λ > 0. We may thus conclude that [F_D,a] b = 1/π ∫_0^∞ M(λ) dλ ∈ 𝕂(E), for all a,b ∈ 𝒜. This proves the theorem. § UNBOUNDED MODULAR CYCLESLet us fix σ-unital C^*-algebras A and B together with a dense *-subalgebra 𝒜 ⊆ A.The following definition is from <cit.>: An unbounded modular cycle from 𝒜 to B is a triple (E,D,Δ) where E is a countably generated C^*-correspondence from A to B, D : Dom(D) → E is an unbounded selfadjoint and regular operator, and Δ : E → E is a bounded positive and selfadjoint operator with norm-dense image such that * a (i + D)^-1 : E → E is a compact operator for all a ∈ A;* (a + λ) Δ has Dom(D) ⊆ E as an invariant submodule and D (a + λ)Δ - Δ(a + λ) D : Dom(D) → E extends to a bounded adjointable operator d_Δ(a,λ) : E → E for all a ∈ 𝒜, λ ∈ ℂ;* The supremum sup_ε > 0 ‖(Δ^1/2 + ε)^-1 d_Δ(a,λ) (Δ^1/2 + ε)^-1‖_∞ is finite for all a ∈ 𝒜, λ ∈ ℂ;* The sequence { (Δ + 1/n)^-1 Δ a } converges in operator norm to a for all a ∈ A. An unbounded modular cycle is even when E comes equipped with a ℤ/2-grading operator γ : E → E (γ = γ^*, γ^2 = 1), such that [a,γ] = 0 for all a ∈ A and Dγ = -γD.An unbounded modular cycle is odd when it is not even. Note that if Δ has a bounded inverse then (3) and (4) are automatic. If, in addition, A is unital, Δ, Δ^-1 ∈ 𝒜 and B = ℂ then the modular cycle (E,D,Δ) defines a twisted spectral triple in the sense of <cit.>, with the twisting automorphism σ : 𝒜 → 𝒜 given by σ(a) = Δ a Δ^-1 for all a ∈ 𝒜.In <cit.> it is assumed that 𝒜 is equipped with a fixed operator space norm ‖·‖_1 : M_n(𝒜) → [0,∞), n ∈ ℕ, such that the inclusion 𝒜 → A is completely bounded. In the above definition it is then required that the supremum in (3) is completely bounded in the sense that sup_ε > 0 ‖(Δ^1/2 + ε)^-1 d_Δ(a,0) (Δ^1/2 + ε)^-1‖_∞ ≤ C ‖a‖_1 for all a ∈ M_n(𝒜), n ∈ ℕ (thus, the constant C is independent of the size of the matrices). This structure is relevant for the construction of the unbounded Kasparov product, but will not play a role in the present text. As in the case of half-closed chains, each unbounded modular cycle represents an explicit class in KK-theory. This result can be found as <cit.>. We state it here for the convenience of the reader. We recall that F_D := D(1 + D^2)^-1/2 denotes the bounded transform of D : Dom(D) → E (but now D is selfadjoint and regular). Suppose that (E,D,Δ) is an unbounded modular cycle from 𝒜 to B. Then (E, F_D) is a Kasparov module from A to B of the same parity as (E,D,Δ) and with the same ℤ/2-grading operator γ : E → E in the even case.§ LOCALIZATION OF REGULAR UNBOUNDED OPERATORSLet E be a countably generated Hilbert C^*-module over a σ-unital C^*-algebra B and let D : Dom(D) → E be a closed, symmetric and regular unbounded operator.
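As a brief aside before stating the standing assumptions of this section: the integral formula for (1 + D^* D)^-1/2 used in the proof of the theorem above is easy to test numerically in finite dimensions, where a matrix stands in for the regular operator D. The following sketch is our illustration only (numpy/scipy; the random matrix and the tolerance are arbitrary choices) and is not part of the argument.

import numpy as np
from scipy.integrate import quad
from scipy.linalg import inv, sqrtm

rng = np.random.default_rng(0)
n = 6
D = rng.standard_normal((n, n))            # a matrix standing in for the operator D

# Left-hand side: (1 + D^*D)^{-1/2}
lhs = inv(sqrtm(np.eye(n) + D.T @ D))

# Right-hand side: (1/pi) int_0^infty lam^{-1/2} (1 + lam + D^*D)^{-1} dlam, entrywise
def integrand(lam, i, j):
    return lam ** (-0.5) * inv((1.0 + lam) * np.eye(n) + D.T @ D)[i, j]

rhs = np.array([[quad(integrand, 0.0, np.inf, args=(i, j))[0]
                 for j in range(n)] for i in range(n)]) / np.pi

print(np.max(np.abs(lhs - rhs)))           # agreement up to quadrature tolerance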
It will be assumed that Θ : E → E is a bounded selfadjoint operator such that * Θ( Dom(D^*) ) ⊆ Dom(D);* DΘ - ΘD : Dom(D) → E extends to a bounded operator d(Θ) : E → E.Remark that it follows by the above assumption and the inclusion D ⊆ D^* that DΘ - ΘD^* : Dom(D^*) → E also has d(Θ) : E → E as a bounded extension. Moreover, d(Θ) : E → E is automatically adjointable with d(Θ)^* = - d(Θ).Before proving our first result, we notice that DΘ : Dom(DΘ) → E is a closed unbounded operator on the domain Dom(DΘ) := { ξ ∈ E | Θ(ξ) ∈ Dom(D) }. A similar remark holds for D^*Θ : Dom(D^*Θ) → E.Suppose that the conditions in Assumption <ref> hold. Then DΘ = D^*Θ and DΘ : Dom(DΘ) → E is a regular unbounded operator with core Dom(D) and with (DΘ)^* = DΘ - d(Θ). In particular, we have that Dom( (DΘ)^* ) = Dom(DΘ). We first claim that the unbounded operators DΘ : Dom(DΘ) → E and D^*Θ : Dom(D^*Θ) → E are regular with cores Dom(D^*) and Dom(D), respectively, and with adjoints (DΘ)^* = DΘ - d(Θ) and (D^*Θ)^* = D^*Θ - d(Θ). To prove this claim, we recall that D : Dom(D) → E is regular by assumption, and we thus have that \begin{pmatrix} 0 & D^* \\ D & 0 \end{pmatrix} : Dom(D) ⊕ Dom(D^*) → E ⊕ E is selfadjoint and regular. Moreover, we have that \begin{pmatrix} 0 & Θ \\ Θ & 0 \end{pmatrix}( Dom(D) ⊕ Dom(D^*) ) ⊆ Dom(D) ⊕ Dom(D^*) and the identities [ \begin{pmatrix} 0 & D^* \\ D & 0 \end{pmatrix} , \begin{pmatrix} 0 & Θ \\ Θ & 0 \end{pmatrix} ] = \begin{pmatrix} DΘ - ΘD & 0 \\ 0 & DΘ - ΘD^* \end{pmatrix} = \begin{pmatrix} d(Θ) & 0 \\ 0 & d(Θ) \end{pmatrix} hold on Dom(D) ⊕ Dom(D^*). This means that \begin{pmatrix} 0 & D^* \\ D & 0 \end{pmatrix} and \begin{pmatrix} 0 & Θ \\ Θ & 0 \end{pmatrix} satisfy the conditions of <cit.> and we may conclude that \begin{pmatrix} 0 & D^* \\ D & 0 \end{pmatrix} \begin{pmatrix} 0 & Θ \\ Θ & 0 \end{pmatrix} = \begin{pmatrix} D^*Θ & 0 \\ 0 & DΘ \end{pmatrix} : Dom(D^*Θ) ⊕ Dom(DΘ) → E ⊕ E is a regular unbounded operator with \begin{pmatrix} D^*Θ & 0 \\ 0 & DΘ \end{pmatrix}^* = \begin{pmatrix} D^*Θ & 0 \\ 0 & DΘ \end{pmatrix} - \begin{pmatrix} d(Θ) & 0 \\ 0 & d(Θ) \end{pmatrix} : Dom(D^*Θ) ⊕ Dom(DΘ) → E ⊕ E. Moreover, we know that \begin{pmatrix} D^*Θ & 0 \\ 0 & DΘ \end{pmatrix} : Dom(D^*Θ) ⊕ Dom(DΘ) → E ⊕ E has Dom(D) ⊕ Dom(D^*) as a core. This proves the claim.To end the proof of the proposition, it now suffices to prove that DΘ = D^*Θ. To this end, we notice that (D^*Θ)(ξ) = (DΘ)(ξ) for all ξ ∈ Dom(D^*). Since Dom(D) ⊆ Dom(D^*) is a core for D^*Θ we obtain from Equation (<ref>) that D^*Θ ⊆ DΘ. Moreover, since Dom(D^*) is a core for DΘ we also obtain from Equation (<ref>) that DΘ ⊆ D^*Θ. We conclude that DΘ = D^*Θ.It will be assumed that x : E → E is a bounded adjointable operator such that * x( Dom(D^*) ) ⊆ Dom(D) and x^* ( Dom(D^*) ) ⊆ Dom(D);* D x - x D and D x^* - x^* D : Dom(D) → E extend to bounded operators d(x) and d(x^*) : E → E, respectively.As above, d(x) and d(x^*) : E → E are automatically adjointable with d(x)^* = - d(x^*). Moreover, d(x) and d(x^*) are bounded extensions of D x - x D^* and D x^* - x^* D^* : Dom(D^*) → E, respectively.We define the localization of E (with respect to x : E → E) as the Hilbert C^*-submodule E_x ⊆ E given by the norm-closure of the image of x: E_x := cl( Im(x) ) . We define Δ := x x^* : E → E. Suppose that the conditions of Assumption <ref> are satisfied. Then the unbounded operator DΔ - d(x) x^* : Dom(DΔ) → E is selfadjoint and regular and it has Dom(D) ⊆ Dom(DΔ) as a core. Moreover, we have that ( DΔ - d(x) x^* )(ξ) = (x D x^*)(ξ), for all ξ ∈ Dom(D x^*) ⊆ Dom(DΔ). Clearly, Δ = x x^* : E → E satisfies the conditions of Assumption <ref> and it therefore follows from Proposition <ref> that DΔ : Dom(DΔ) → E is regular with core Dom(D) and that (DΔ)^* = DΔ - d(Δ) = DΔ - d(x) x^* - x d(x^*). Since d(x) x^* : E → E is a bounded adjointable operator, it follows by <cit.> that DΔ - d(x) x^* : Dom(DΔ) → E is regular. It is moreover clear that Dom(D) is also a core for DΔ - d(x) x^* and that ( DΔ - d(x) x^* )^* = (DΔ)^* - ( d(x) x^* )^* = DΔ - d(x) x^* - x d(x^*) + x d(x^*) = DΔ - d(x) x^*, proving that our unbounded operator is selfadjoint as well. The final statement of the lemma is obvious. Suppose that the conditions of Assumption <ref> are satisfied.
We define the localization of D : Dom(D) → E (with respect to x : E → E) as the closure of the unbounded symmetric operator x D x^* : Dom(D) ∩ E_x → E_x. The localization of D is denoted by D_x : Dom(D_x) → E_x. Remark that x( Dom(D) ) ⊆ Dom(D) ∩ E_x, implying that the localization D_x is densely defined. Suppose that the conditions of Assumption <ref> are satisfied and let r ∈ ℝ with |r| > ‖d(x^*) x‖_∞ be given. Then i r + D x^* x : Dom(D x^* x) → E is a bijection and the resolvent is a bounded adjointable operator (i r + D x^* x)^-1 : E → E satisfying the relation (i r + DΔ - d(x) x^* )^-1 x = x ( i r + D x^* x)^-1. By replacing x with x^* in Assumption <ref> we see from Lemma <ref> that the unbounded operator D x^* x - d(x^*) x : Dom(D x^* x) → E is selfadjoint and regular. In particular, we know that the resolvent (i r + D x^* x - d(x^*) x)^-1 : E → E is a well-defined bounded adjointable operator. Since ‖d(x^*) x (i r + D x^* x - d(x^*) x)^-1‖_∞ ≤ ‖d(x^*) x‖_∞ |r|^-1 < 1 we may conclude that i r + D x^* x : Dom(D x^* x) → E is a bijection and that the resolvent is a bounded adjointable operator. In fact, we have that (i r + D x^* x)^-1 = (i r + D x^* x - d(x^*) x)^-1 ( 1 + d(x^*) x (i r + D x^* x - d(x^*) x)^-1 )^-1. The relation in Equation (<ref>) now follows since (i r + DΔ - d(x) x^*) x = (i r + x D x^* ) x = x (i r + D x^* x) on Dom(D x^* x).Suppose that the conditions of Assumption <ref> are satisfied. Then the localization of D : Dom(D) → E with respect to x : E → E is a selfadjoint and regular unbounded operator D_x : Dom(D_x) → E_x, with core x(Dom(D)) ⊆ Dom(D_x). Moreover, we have the identity ( i μ + D_x)^-1(ξ) = (i μ + DΔ - d(x) x^*)^-1(ξ), for all ξ ∈ E_x and all μ ∈ ℝ ∖ {0}. In particular, E_x ⊆ E is an invariant submodule for (i μ + DΔ - d(x) x^*)^-1 : E → E for all μ ∈ ℝ ∖ {0}. To show that D_x : Dom(D_x) → E_x is selfadjoint and regular, it suffices to verify that i r + x D x^* : x( Dom(D) ) → E_x has dense image whenever r ∈ ℝ satisfies |r| > ‖d(x^*) x‖_∞, see <cit.>. Let such an r ∈ ℝ be given.Clearly, x^* x : E → E satisfies the conditions of Assumption <ref> and it therefore follows from Proposition <ref> that D x^* x : Dom(D x^* x) → E is regular with core Dom(D) ⊆ E. Combining this with Lemma <ref> we may find a norm-dense submodule 𝒢 ⊆ E such that (i r + D x^* x)^-1( 𝒢 ) = Dom(D). Moreover, we have that (i r + x D x^*) x (i r + D x^* x)^-1(ξ) = x( ξ ) for all ξ ∈ 𝒢. Since x(𝒢) ⊆ E_x is norm-dense and x (i r + D x^* x)^-1(𝒢) = x( Dom(D) ), this proves the desired density result and hence that the localization D_x : Dom(D_x) → E_x is selfadjoint and regular.Let μ ∈ ℝ ∖ {0}. The identity in Equation (<ref>) can now be verified on the image of i μ + x D x^* : x( Dom(D) ) → E_x, but here it follows immediately since (x D x^*)(ξ) = (DΔ - d(x) x^*)(ξ) for all ξ ∈ x( Dom(D) ). The result of Proposition <ref> can be generalized by replacing the bounded adjointable operator x : E → E by a sequence of bounded adjointable operators x_n : E → E, n ∈ ℕ, each of them satisfying the conditions of Assumption <ref>. Suppose moreover that the sums ∑_n = 1^∞ x_n x_n^* and ∑_n = 1^∞ d(x_n) d(x_n)^* are norm-convergent in 𝕃(E) (this can of course always be obtained by rescaling the operators x_n : E → E, n ∈ ℕ).
In this context, we define the localization of E with respect to the sequence x = {x_n} as the closed submoduleE_x := cl( span_{ x_n (ξ) | n ∈,ξ∈ E })E .The localization D_x of D : Dom(D) → E is defined as the closure of the symmetric unbounded operator∑_n = 1^∞ x_n D x_n^* :Dom(D) ∩ E_x → E_x.As in Proposition <ref>, we then obtain that D_x : Dom(D_x) → E_x is a selfadjoint and regular unbounded operator.§ LOCALIZATION OF HALF-CLOSED CHAINSLet A and B be -unital C^*-algebras. Throughout this section (,E,D) will be a half-closed chain from A to B. We denote by ϕ : A →𝕃(E) the *-homomorphism that provides the left action of A on E. Moreover, x ∈ will be a fixed element.Notice that ϕ(x) : E → E satisfies the condition of Assumption <ref> with respect to the symmetric and regular unbounded operator D : Dom(D) → E. Recall then that the localization of E is the norm-closed submodule E_x := cl( Im( ϕ(x)) )E and that the localization D_x of D : Dom(D) → E is the closure of the symmetric unbounded operatorϕ(x) D ϕ(x^*) : Dom(D) ∩ E_x → E_x .By Proposition <ref>, the localization D_x : Dom(D_x) → E_x is selfadjoint and regular. We put:= x x^* ∈. By definition, the localization of A with respect to x ∈ A is the hereditary C^*-subalgebra of A defined byA_x := cl( x A x^*)A .The *-homomorphism ϕ : A →𝕃(E) restricts to a *-homomorphism ϕ_x : A_x →𝕃(E_x) and in this way E_x becomes a C^*-correspondence from A_x to B. We remark that ∈ A_x and that ϕ_x() : E_x → E_x is a bounded positive and selfadjoint operator with norm-dense image. We define the *-subalgebra _xA_x as the intersection_x := ∩ A_x .Remark that _xA_x is automatically norm-dense.When the half-closed chain (,E,D) is even with /2-grading operator : E → E, then E_x can be equipped with the /2-grading operator |_E_x : E_x → E_x obtained by restriction of : E → E.We are going to prove the following: Suppose that (,E,D) is a half-closed chain and that x is an element in . Then the triple (E_x, D_x, ϕ_x() ) is an unbounded modular cycle from _x to B of the same parity as (,E,D) and with grading operator |_E_x: E_x → E_x in the even case. Clearly the C^*-correspondence E_x is countably generated (since E is countably generated by assumption). Moreover, we have already established that the unbounded operator D_x : Dom(D_x) → E_x is selfadjoint and regular in Proposition <ref> and that ϕ_x() : E_x → E_x is bounded positive and selfadjoint with norm-dense image. So it only remains to check conditions (1), (2), (3) and (4) of Definition <ref>. The last condition (4) follows immediately since ( + 1/n)^-1 a → a in C^*-norm for all a ∈ A_x. The remaining three conditions are proved in Proposition <ref>, Proposition <ref> and Proposition <ref> below. We will refer to the unbounded modular cycle (E_x,D_x,ϕ_x()) as the localization of the half-closed chain (E,ϕ,D) with respect to x ∈.We start by proving the compactness condition (1) of Definition <ref>. We put D_x := D ϕ() - d( x ) ϕ(x^*) : Dom(D ϕ()) → Eand recall that D_x is a selfadjoint and regular unbounded operator by Lemma <ref>. We remark that D_x agrees with D_x if and only if the image of ϕ(x) : E → E is norm-dense. 
In fact, when the image of ϕ(x) is not norm-dense then these two unbounded operators do not even act on the same Hilbert C^*-module.We have the resolvent identitycc0ϕ()ϕ() 0cc(i + D_x)^-10 0 (i + D_x)^-1- cci D^* D i^-1 =cci D^* D i^-1cc d(x) ϕ(x^*) - i i ϕ() i ϕ() d(x) ϕ(x^*) - i (i + D_x)^-1.It suffices to notice that the identitiescci D^* D icc0ϕ()ϕ() 0 - cci + D_x0 0 i + D_x = ccD ϕ() - i - D_xi ϕ() i ϕ() D ϕ() - i - D_x = ccd( x) ϕ(x^*) - i i ϕ() i ϕ() d( x) ϕ(x^*) - ihold on Dom(D ϕ() ) Dom(D ϕ()). Recall in this respect that D ϕ() = D^* ϕ() by Proposition <ref>. The bounded adjointable operatorϕ_x(a) ( i + D_x)^-1 : E_x → E_xis compact for all a ∈ A_x. Notice that ∈ A_x and that the left ideal A_x Δ A_x is norm-dense. It thus suffices to show that ϕ_x()(i + D_x)^-1∈𝕂(E_x).We apply the notation 𝕂(E,E_x) 𝕂(E) for the closed right ideal generated by all compact operators on E of the form |ξ⟩⟨η| with ξ∈ E_x and η∈ E. Similarly, we let 𝕂(E_x,E) 𝕂(E) denote the closed left ideal generated by all compact operators of the form |η⟩⟨ξ| for ξ∈ E_x and η∈ E. We remark that 𝕂(E_x,E) = 𝕂(E,E_x)^*. Since (E,ϕ,D) is a half-closed chain we know that ccϕ() 0 0ϕ() cci D^* D i^-1∈𝕂(EE)and it therefore follows from Lemma <ref> thatϕ()^2(i + D_x)^-1∈𝕂(E,E_x) .Since (ϕ(Δ)+1/n)^-1ϕ(Δ)^2→ϕ(Δ) as n →∞ this implies that also ϕ() (i + D_x)^-1∈𝕂(E,E_x) and thus that (-i + D_x)^-1ϕ() ∈𝕂(E_x,E). We may thus conclude that ϕ() (1 + D_x^2)^-1ϕ() ∈𝕂(E,E_x) 𝕂(E_x,E) restricts to a compact operator on the Hilbert C^*-module E_x ⊆ E. But this proves the present proposition since we have from Proposition <ref> thatϕ_x() (1 + D_x^2)^-1ϕ_x() = ( ϕ() (1 + D_x^2)^-1ϕ() )|_E_x.We continue by proving the twisted commutator condition (2) of Definition <ref>. Let a ∈_x, ∈. Then (ϕ_x(a) + )ϕ_x() : E_x → E_x has Dom(D_x)E_x as an invariant submodule andD_x (ϕ_x(a) + )ϕ_x() - ϕ_x() (ϕ_x(a) + ) D_x : Dom(D_x) → E_xextends to a bounded adjointable operator d_(a,) : E_x → E_x. In fact we have thatd_(a,) = ( ϕ(x) d( x^* (a + ) x ) ϕ(x^*) )|_E_x.Let ξ∈Dom(D) ∩ E_x. We then have that( ϕ_x(a) +) ϕ_x()(ξ) ∈Dom(D) ∩ E_xand thatD_x (ϕ_x(a) + )ϕ_x()(ξ) - ϕ_x() (ϕ_x(a) + ) D_x(ξ)= ϕ(x) D ϕ(x^*) (ϕ(a) + )ϕ(x x^*)(ξ) - ϕ(x x^*) (ϕ(a) + ) ϕ(x) D ϕ(x^*) (ξ)=ϕ(x) d( x^* (a + ) x ) ϕ(x^*)(ξ) .Since Dom(D) ∩ E_x is a core for the localization D_x : Dom(D_x) → E_x, this proves the proposition. We finally prove the supremum condition (3) of Definition <ref>. Let a ∈_x, ∈. Then we have thatsup_ > 0 (ϕ_x()^1/2 + )^-1 d_(a,) (ϕ_x()^1/2 + )^-1_∞ < ∞.This follows immediately from Proposition <ref>. Indeed, the operator norm of(ϕ_x()^1/2 + )^-1ϕ(x) : E → E_xis bounded by 1 for all > 0.One may equip _x with the operator space norm _1 : M_n(_x) → [0,∞), n ∈, defined bya _1 := sup{ a,d(a) _∞} for alla ∈ M_n(_x) ,where the norms inside the supremum are the C^*-norm on M_n(A) and the operator-norm on 𝕃(E^ n), respectively. Clearly, the inclusion _x → A_x is then completely bounded. It is moreover possible to find a constant C > 0 such thatsup_ > 0 (ϕ_x()^1/2 + )^-1 d_(a,0) (ϕ_x()^1/2 + )^-1_∞≤ Ca _1 ,for all a ∈ M_n(_x). Cf. Remark <ref>.§ LOCALIZATION AS AN UNBOUNDED KASPAROV PRODUCTIn this section we continue under the conditions spelled out in the beginning of Section <ref>. We thus have a half-closed chain (,E,D) and an element x ∈. The element x ∈ provides us with a closed right ideal I_xA defined as the norm-closure:I_x := cl( x A) .In particular, we may consider I_x as a countably generated Hilbert C^*-module over A. 
The hereditary C^*-subalgebra A_x = cl(xAx^*) ⊆ A can be identified with the compact operators on I_x via the *-homomorphism ψ : A_x →𝕃(I_x) induced by the multiplication in A. We thus obtain an even Kasparov module (I_x,0) from A_x to A with corresponding class [I_x,0] ∈ KK_0(A_x,A) in KK-theory.Moreover, by Theorem <ref>, our half-closed chain (,E,D) (of parity p ∈{0,1}) yields a Kasparov module (E,F_D) from A to B with corresponding class [E,F_D] ∈ KK_p(A,B).Finally, the unbounded modular cycle (∩ A_x, E_x, ϕ_x()) constructed in Section <ref> yields a Kasparov module (E_x,F_D_x) from A_x to B with corresponding class [E_x,F_D_x] ∈ KK_p(A_x,B), see Theorem <ref>.In this section we will prove the following theorem:Suppose that (,E,D) is a half-closed chain, that x ∈ and that A_x is separable. Then we have the identity[ E_x,F_D_x] = [ I_x,0] _A [E, F_D]in KK_p(A_x,B), where _A : KK_0(A_x,A)KK_p(A,B) → KK_p(A_x,B) denotes the Kasparov product. The C^*-correspondence E_x from A_x to A is unitarily isomorphic to the interior tensor product of C^*-correspondences I_x _ϕ E (via the unitary isomorphism xa ξ↦ϕ(xa)(ξ)). For each a ∈ A, we define the bounded adjointable operator T_xa : E → E_x by ξ↦ϕ(xa)(ξ). By <cit.> it suffices to prove the connection condition, thus thatF_D_x T_xa -T_xa F_D ,F_D_x T_xa- T_xa F_D^*∈𝕂(E,E_x)for all a ∈ A. Indeed, the positivity condition of <cit.> is obviously satisfied since the bounded adjointable operator in the Kasparov module (I_x,0) from A_x to A is trivial. See also Section <ref> for more details.However, since T_xa = T_x ϕ(a) : E → E_x and ϕ(a) (F_D - F_D^*) ∈𝕂(E) it suffices to prove the first of these inclusions. This proof will occupy the remainder of this section, see Proposition <ref>.In the case where xAA is norm-dense and A is separable, we have that (I_x,0) = (A, 0) and it therefore follows from the above theorem that the two Kasparov modules (E_x,F_D_x) and (E,F_D) represents the same class in KK_p(A,B).§.§ The modular transformWe continue working under the general assumptions stated in the beginning of Section <ref>. We recall that := x x^*. We will in the following suppress the *-homomorphism ϕ_x : A_x →𝕃(E_x).For each ≥ 0, we introduce the notationR_x(^2) := (1 + ^2 + D_x^2)^-1∈𝕃(E_x)R_x() := (1 ++ D_x^2)^-1∈𝕃(E_x) .In general, we are not able to estimate the norm of R_x(^2) from above by (1 + )^-1 since : E_x → E_x may have zero in the spectrum. Instead, we recall the following basic estimate from <cit.>:R_x(^2) _∞≤2/(1 + )∀≥ 0 . The next definition is from <cit.>: The modular transform of the unbounded modular cycle (E_x,D_x, ) is the unbounded operatorG_(D_x,) : ( Dom(D_x) ) → E_xdefined byG_(D_x,) : η↦1/π∫_0^∞^-1/2 (1 + ^2 + D_x^2)^-1 D_x(η) d.We remark that G_(D_x,) : ( Dom(D_x) ) → E_x is well-defined. Indeed, for η = (ξ) with ξ∈Dom(D_x) we have from Proposition <ref> thatR_x( ^2) D_x(η)=R_x(^2)D_x(ξ) +R_x(^2) x d(x^* x) x^* (ξ) .Using the estimate from Equation (<ref>), we may thus find a constant C > 0 such that(1 + ^2 + D_x^2)^-1 D_x(η) ≤ C(1 + )^-3/4∀≥ 0 ,implying that the integral in Equation (<ref>) converges absolutely in the norm on E_x.The following result is a consequence of <cit.>: The differenceF_D_x^6 - G_(D_x,)^6 : Dom(D_x) → E_xextends to a compact operator on E_x. Notice that the above result implies that the unbounded operatorG_(D_x,)^6 : Dom(D_x) → E_xextends to a bounded adjointable operator on E_x. 
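Before turning to the connection condition, we note that the resolvent relation (i r + DΔ - d(x) x^*)^-1 x = x (i r + D x^* x)^-1 underlying the localization procedure can be checked numerically in finite dimensions, where d(x) is simply the commutator of D and x and Δ = x x^*. The following numpy sketch is our illustration only; the random matrices are placeholders standing in for the Hilbert-module operators.

import numpy as np

rng = np.random.default_rng(1)
n = 5
D = rng.standard_normal((n, n)); D = D + D.T     # selfadjoint D on a finite-dimensional E
x = rng.standard_normal((n, n))                  # the localizing element
dx  = D @ x - x @ D                              # d(x), the extension of [D, x]
dxs = D @ x.T - x.T @ D                          # d(x^*)
Delta = x @ x.T                                  # Delta = x x^*

r = 1.0 + np.linalg.norm(dxs @ x, 2)             # |r| > ||d(x^*) x||, as in the lemma
I = np.eye(n)

lhs = np.linalg.inv(1j * r * I + D @ Delta - dx @ x.T) @ x
rhs = x @ np.linalg.inv(1j * r * I + D @ x.T @ x)

print(np.max(np.abs(lhs - rhs)))                 # zero up to round-off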
§.§ The connection conditionWe will continue working under the assumptions of Section <ref>.We recall from Lemma <ref> thatD_x := D ϕ() - d(x) ϕ(x^*) : Dom(D ϕ()) → Eis a selfadjoint and regular unbounded operator and we putR_x( ϕ(^2)) := (1 + ϕ(^2) + (D_x)^2)^-1∈𝕃(E)R() := (1 ++ D^* D)^-1∈𝕃(E) ,for all ≥ 0. For each ≥ 0, we have the identityR() - R_x( ϕ(^2)) ϕ(^2)= R_x(ϕ(^2)) ( 1 - ϕ(^2) + ϕ(x) d(x^* x x^*) D) R()+ ( D_xR_x(ϕ(^2) ) )^* ϕ(x) d(x^*) R()of bounded adjointable operators on E. We have the identities1 - R_x(ϕ(^2)) ϕ(^2) (1 ++ D^* D) = 1 - R_x(ϕ(^2)) ( 1 + ϕ(^2) + ϕ(x) D ϕ(x^* x x^*) D )+ R_x( ϕ(^2)) ( 1 - ϕ(^2) + ϕ(x) d(x^* x x^*) D)= ( D_xR_x(ϕ(^2)) )^* ϕ(x) d(x^*)+ R_x( ϕ(^2)) ( 1 - ϕ(^2) + ϕ(x) d(x^* x x^*) D)on Dom(D^* D). But this proves the lemma after multiplying with R() = (1 ++ D^* D)^-1 from the right. For each y ∈ I_x = cl(xA), we recall that T_y : E → E_x denotes the bounded adjointable operator T_y : ξ↦ϕ(y)(ξ). Notice then that it follows from Proposition <ref> that T_R_x(ϕ(^2)) ϕ() =R_x(^2) T_ : E → E_x .The differenceT_ R() D ϕ() -R_x(^2) D_x T_^2 : Dom(D ϕ()) → E_xextends to a compact operator M_ : E → E_x for all ≥ 0. Moreover, there exists a constant C > 0 such thatM__∞≤ C(1 + )^-3/4∀≥ 0 .Since (,E,D) is a half-closed chain and (E_x, D_x, ) is an unbounded modular cycle we obtain that the differenceT_ R() D ϕ() -R_x(^2) D_x T_^2 : Dom(D ϕ()) → E_xextends to a compact operator M_ : E → E_x for all ≥ 0. Indeed, this is already true for each of the terms viewed separately. So we only need to prove the norm-estimate. To this end, we let ξ∈Dom(D ϕ()) and compute that( T_ R() D ϕ() -R_x(^2) D_x T_^2)(ξ)= T_ R() D ϕ()(ξ) - T_R_x(ϕ(^2)) ϕ(x) D ϕ(x^* ^2)(ξ)= T_ R() D ϕ()(ξ) - T_R_x(ϕ(^2))ϕ(^2) D ϕ()(ξ) - T_R_x(ϕ(^2)) ϕ(x) d(x^* x x^*) ϕ()(ξ) .Since T_R_x(ϕ(^2)) ϕ(x) _∞≤ 2^3/4 (1 + )^-3/4 by the estimate in Equation (<ref>) we may focus on the differenceT_ R() D ϕ()(ξ) - T_R_x(ϕ(^2)) ϕ(^2) D ϕ()(ξ) .However, using Lemma <ref> we get thatT_ R() D ϕ()(ξ) - T_R_x(ϕ(^2)) ϕ(^2) D ϕ()(ξ)= T_R_x(ϕ(^2)) ( 1 - ϕ(^2) + ϕ(x) d(x^* x x^*) D)R() D ϕ()(ξ)+ T_( D_xR_x(ϕ(^2) ) )^* ϕ(x) d(x^*) R() D ϕ()(ξ) .The result of the lemma then follows from the basic estimate D R() _∞≤ (1 + )^-1/2 and the estimate in Equation (<ref>) a few times. The differenceT_^2 F_D - G_(D_x,) T_^2 : Dom(D) → E_xextends to a compact operator from E to E_x. Since ϕ() F_D - F_D^*ϕ() : E → E is compact, we only need to show thatT_ F_D^*ϕ() - G_(D_x,) T_^2 : Dom(D) → E_xextends to a compact operator from E to E_x. Now, recall thatT_ F_D^*ϕ()(ξ) = 1/π∫_0^∞^-1/2 T_ (1 ++ D^* D)^-1 D ϕ()(ξ) dfor all ξ∈Dom(D). The result of the proposition now follows by Lemma <ref> sinceT_ D^* (1 + DD^*)^-1/2ϕ()(ξ) -G_(D_x,) T_^2(ξ)= 1/π∫_0^∞^-1/2( T_ (1 ++ D^* D)^-1 D ϕ()-R_x(^2) D_x T_^2)(ξ) d = 1/π∫_0^∞^-1/2 M_(ξ) d.Remark that it follows from the above proposition that the unbounded operatorG_(D_x,) T_^2 : Dom(D) → E_xextends to a bounded adjointable operator on E_x. The differenceF_D_x T_xa -T_xa F_D : E → E_xis a compact operator for all a ∈ A. Since [ϕ(b), F_D ] ∈𝕂(E) for all b ∈ A and since ^7(1/n + ^7)^-1 x → x in the norm on A, it suffices to show thatF_D_x T_^7 - T_^7 F_D : E → E_xis a compact operator. 
But now Proposition <ref> and Theorem <ref> imply that the following identities hold modulo 𝕂(E,E_x): F_D_x T_^7 - T_^7 F_D ∼ F_D_x T_^7- T_^2 F_D ϕ(^5) ∼ F_D_x T_^7 - cl( G_(D_x,) T_^2 ) ϕ(^5)= F_D_x^6 T_ - cl( G_(D_x,)^6 ) T_∼ 0 .§ KUCEROVSKY'S THEOREMLet us fix three C^*-algebras A,B and C with A separable and B and C both -unital. Throughout this section we will assume that (,E_1,D_1), (,E_2,D_2) and (,E,D) are even half-closed chains from A to B, from B to C and from A to C, respectively. We denote the associated *-homomorphisms by ϕ_1 : A →𝕃(E_1), ϕ_2 : B →𝕃(E_2) and ϕ : A →𝕃(E) and the /2-grading operators by _1 : E_1 → E_1, _2 : E_2 → E_2 and : E → E, respectively. We will moreover assume that E := E_1 _ϕ_2 E_2 agrees with the interior tensor product of the C^*-correspondences E_1 and E_2. In particular, we assume that ϕ(a) = ϕ_1(a)1 for all a ∈ A and that = _1 _2. We will denote the bounded transforms of our half-closed chains by (E_1, F_D_1), (E_2,F_D_2) and (E,F_D) and the corresponding classes in KK-theory by [E_1,F_D_1] ∈ KK_0(A,B), [E_2,F_D_2] ∈ KK_0(B,C) and [E,F_D] ∈ KK_0(A,C). We may then form the interior Kasparov product[E_1, F_D_1] _B [E_2,F_D_2] ∈ KK_0(A,C)and it becomes a highly relevant question to find an explicit formula for this class in KK_0(A,C). In this section we shall find conditions on the half-closed chains (,E_1,D_1), (,E_2,D_2) and (,E,D) entailing that the identity[E,F_D] = [E_1,F_D_1] _B [E_2,F_D_2]holds in KK_0(A,C). This kind of theorem was first proved by Kucerovsky in <cit.> under the stronger assumption that the half-closed chains (,E_1,D_1), (,E_2,D_2) and (,E,D) were in fact unbounded Kasparov modules. Thus under the strong assumption that all the involved symmetric and regular unbounded operators were in fact selfadjoint. As in the case of Kucerovsky's theorem we rely on the work of Connes and Skandalis for computing the interior Kasparov product, see <cit.>.We recall from <cit.> that an even Kasparov module (E,F) from A to C is the Kasparov product of the even Kasparov modules (E_1,F_1) and (E_2,F_2) from A to B and from B to C, respectively, when the following holds: * E = E_1 _ϕ_2 E_2, ϕ = ϕ_11.* For every homogeneous ξ∈ E_1 we have thatF T_ξ - (-1)^ξ T_ξ F_2 ,F^* T_ξ - (-1)^ξ T_ξ F_2^* ∈𝕂(E_2,E) ,where T_ξ : E_2 → E is defined by T_ξ(y) := ξη for all η∈ E_2 and where ξ∈{0,1} denotes the degree of ξ∈ E_1.* There exists a ν < 2 such that((F_11)^*F^* + F(F_11) ) ϕ(a^*a) + νϕ(a^* a)is positive in the Calkin algebra 𝕃(E)/𝕂(E) for all a ∈ A. The condition in Equation (<ref>) is often referred to as the connection condition and the condition in Equation (<ref>) is referred to as the positivity condition.Before we state our conditions on half-closed chains we recall that the odd symmetric and regular unbounded operator D_1 : Dom(D_1) → E_1 can be promoted to an odd symmetric and regular unbounded operator D_11 : Dom(D_11) → E_1 _ϕ_2 E_2 with resolvent (1 + D_1^* D_1)^-1 1 ∈𝕃(E_1 _ϕ_2 E_2).We now introduce the analogues for the above connection and positivity condition for half-closed chains. They will be shown in Theorem <ref> below to indeed correspond to the above two conditions for Kasparov modules. Given three even half-closed chains (,E_1,D_1), (,E_2,D_2) and (,E_1 _ϕ_2 E_2,D) as above, the connection condition demands that there exist a dense -submodule _1E_1 and cores _2 andfor D_2 : Dom(D_2) → E_2 and D : Dom(D) → E, respectively, such that (a) For each ξ∈_1:T_ξ( _2 ) Dom(D),T_ξ^*() Dom(D_2), _1(ξ) ∈_1 . 
(b) For each homogeneous ξ∈_1, the graded commutatorD T_ξ - (-1)^ξ T_ξ D_2 : _2 → Eextends to a bounded operator L_ξ : E_2 → E.Given three even half-closed chains (,E_1,D_1), (,E_2,D_2) and (,E_1 _ϕ_2 E_2,D) as above, a localizing subset is a countable subsetwith = ^* such that (a) The subspace A := span_ℂ{ xa | x ∈,a ∈ A }⊆ Ais norm-dense.(b) The commutator[D_11, ϕ(x) ] : Dom(D_11) → Eistrivial for all x ∈.(c) We have the domain inclusionDom(D) ∩Im(ϕ(x^* x)) Dom(D_11) ,for all x ∈.Given three even half-closed chains (,E_1,D_1), (,E_2,D_2) and (,E_1 _ϕ_2 E_2,D) and a localizing subset Λ, the local positivity condition requires that for each x ∈, there exists a constant _x > 0 such that (D_11) ϕ(x^*) ξ, D ϕ(x^*) ξ +D ϕ(x^*) ξ, (D_11) ϕ(x^*) ξ≥ - _x ξ, ξ,for all ξ∈Im(ϕ( x)) ∩Dom( D ϕ(x^*) ). Note that the local positivity condition makes sense because of (d) in Definition <ref>. Indeed, for each ξ∈Im(ϕ( x)) ∩Dom( D ϕ(x^*) ) we have thatϕ(x^*) ξ∈Im(ϕ(x^* x)) ∩Dom(D) Dom(D_11) .Suppose that ⊆ A is unital and that ϕ_1(A)E_1E_1 is norm-dense. Then the half-closed chains (,E,D) and (,E_1,D_1) are in fact unbounded Kasparov modules (thus D = D^* and D_1 = D_1^*). The choice := {1}⊆ automatically satisfies the conditions (a) and (b) for a localizing subset in Definition <ref> and the last condition (c) amounts to the requirementDom(D) ⊆Dom(D_11) .Moreover, in this case, the local positivity condition in Definition <ref> means that there exists a constant > 0 such that(D_11) ξ, D ξ + D ξ, (D_11) ξ≥ - ξ,ξ,for all ξ∈Dom(D). Finally, the connection condition in Definition <ref> can be seen to be equivalent to the connection condition applied by Kucerovsky in <cit.>. In this setting, we therefore recover the assumptions applied by Kucerovsky in <cit.> (except that the domain condition in <cit.> is marginally more flexible). The corresponding special case of Theorem <ref> here below, is therefore in itself an improvement to <cit.> because of the extra flexibility in the choice of localizing subset(if one is willing to disregard the minor domain issue mentioned earlier in this remark). We record the following convenient lemma, which can be proved by standard techniques: Suppose that the connection condition of Definition <ref> holds. Then the connection condition holds for _2 := Dom(D_2) and := Dom(D). Moreover, L_ξ : E_2 → E is adjointable with(L_ξ)^*(η) = (T_ξ^* D - (-1)^ξ D_2 T_ξ^*)(η) ∀η∈Dom(D)whenever ξ∈_1 is homogeneous. The next lemma provides a convenient sufficient condition for verifying the inequality in Definition <ref>: Let x ∈ A and suppose that Im(ϕ(x^* x)) ∩Dom(D) Dom(D_11) and that there exists a constant _x > 0 such that(D_11) η, D η + D η, (D_11) η≥ - _x η,η,for all η∈Im(ϕ(x^* x)) ∩Dom(D). Then we have that(D_11) ϕ(x^*) ξ, D ϕ(x^*) ξ + D ϕ(x^*) ξ, (D_11) ϕ(x^*) ξ≥ - ϕ(x) ^2 _x ξ,ξ,for all ξ∈Im(ϕ( x)) ∩Dom( D ϕ(x^*) ). This follows immediately since - _x ϕ(x^*) ξ, ϕ(x^*) ξ≥ - ϕ(x) ^2 _x ξ,ξ∀ξ∈ E .The next lemma is straightforward to prove by rescaling the elements inby elements in (0,∞). It will nonetheless play a very important role: Suppose that the local positivity condition of Definition <ref> holds with localizing subset ⊆. 
Then we may rescale the elements inand obtain a localizing subset ' such that the local positivity condition of Definition <ref> holds with the additional requirement that_x = 1/4 and d(x^*) ϕ(x) _∞ < 1 ∀ x ∈' .Suppose that the three even half-closed chains (,E_1,D_1), (,E_2,D_2) and (,E_1 _ϕ_2 E_2,D) satisfy the connection condition and the local positivity condition. Then (E,F_D) is the Kasparov product of (E_1,F_D_1) and (E_2,F_D_2). In particular we have the identity[E,F_D] = [E_1,F_D_1] _B [E_2,F_D_2]in the KK-group KK_0(A,C). Without loss of generality we may assume that _x = 1/4 and that d(x^*) ϕ(x) _∞ < 1 for all x ∈.We need to prove the connection condition in Equation (<ref>) and the positivity condition in Equation (<ref>) for the even Kasparov modules (E,F_D), (E_1,F_D_1) and (E_2,F_D_2). But these two conditions are proved in Proposition <ref> and Proposition <ref> below, respectively. The positivity condition will be satisfied with ν = 1 = 4 _x.§.§ The connection conditionWe continue working in the setting explained in the beginning of Section <ref>.Before proving our first proposition on the connection condition in Equation (<ref>), it will be convenient to introduce some extra notation. For ∈ [0,∞), define the bounded adjointable operatorsR() := (1 ++ D^* D)^-1 ,R() := (1 ++ D D^*)^-1 : E → E R_2() := (1 ++ D_2^* D_2)^-1 ,R_2() := (1 ++ D_2 D_2^*)^-1 : E_2 → E_2 .Suppose that the connection condition of Definition <ref> holds. Then we have thatF_D T_ξ - (-1)^ξ T_ξ F_D_2,F_D^* T_ξ - (-1)^ξ T_ξ F_D_2^* ∈𝕂(E_2,E) ,for all homogeneous ξ∈ E_1.Without loss of generality we may assume that ξ = η b_1 b_2 with η∈_1 homogeneous and b_1,b_2 ∈. Using Lemma <ref> we compute as follows, for each ∈ [0,∞):R() T_η b_1 - T_η b_1 R_2() = R() T_η b_1 D_2^* D_2 R_2() - D^* D R() T_η b_1 R_2()= - R() T_η d_2(b_1)D_2 R_2() - (-1)^η R() L_ηϕ_2( b_1)D_2 R_2()+ (-1)^η R() D T_η b_1 D_2 R_2() - D^* D R() T_η b_1 R_2()= - R() ( T_η d_2(b_1) + (-1)^η L_ηϕ_2(b_1) )D_2 R_2()- D^* R() L_η b_1 R_2() ,where d_2(b_1) : E_2 → E_2 is the bounded extension of the commutator D_2 ϕ_2(b_1) - ϕ_2(b_1) D_2^* : Dom(D_2^*) → E. In particular, we may find a constant C > 0 such thatD R() T_η b_1 - D T_η b_1 R_2() _∞≤ C(1 + )^-1,for all ≥ 0.We now use the integral formulaeF_D = 1/π D ∫_0^∞^-1/2 R() dF_D_2= 1/π D_2 ∫_0^∞^-1/2 R_2() dfor the bounded transforms. Indeed, using Lemma <ref> one more time, these formulae allow us to compute thatF_D T_ξ= F_D T_η b_1ϕ_2(b_2)= 1/π DT_η b_1∫_0^∞^-1/2 R_2() ϕ_2(b_2) d + 1/π D ∫_0^∞^-1/2( R() T_η b_1 - T_η b_1 R_2() ) ϕ_2(b_2) d = (-1)^ξ T_η b_1 F_D_2ϕ_2(b_2) + 1/π∫_0^∞^-1/2 L_η b_1 R_2() ϕ_2(b_2) d + 1/π∫_0^∞^-1/2 D ( R() T_η b_1 - T_η b_1 R_2() ) ϕ_2(b_2) d.The fact that D_2 R_2() ϕ_2(b_2) and R_2() ϕ_2(b_2) ∈𝕂(E_2), for all ∈ [0,∞), combined with the estimate in Equation (<ref>) now imply that both of the integrals on the right hand side of Equation (<ref>) converge absolutely to elements in 𝕂(E_2,E) (remark that the integrands also depend continuously on ∈ (0,∞) with respect to the operator norm). We thus conclude thatF_D T_ξ - (-1)^ξ T_η b_1 F_D_2ϕ_2(b_2) ∈𝕂(E_2,E) .Since [F_D_2, ϕ_2(b_2)] ∈𝕂(E_2) we have proved that F_D T_ξ - (-1)^ξ T_ξ F_D_2∈𝕂(E_2,E). A similar argument shows that F_D^* T_ξ - (-1)^ξ T_ξ F_D_2^* ∈𝕂(E_2,E) as well. §.§ LocalizationThroughout this subsection the conditions stated in the beginning of Section <ref> are in effect.We are now going to apply the localization results obtained in Section <ref>, <ref> and <ref>. 
Recall from Definition <ref> and Proposition <ref> that whenever x ∈, then the localization D_x : Dom(D_x) → E_x is the selfadjoint and regular unbounded operator defined as the closure ofϕ(x) D ϕ(x^*) : Dom(D) ∩ E_x → E_x ,where E_x := cl( Im(ϕ(x)))E. The core idea is to replace the bounded transform of D : Dom(D) → E by the bounded transforms of sufficiently many localizations D_x : Dom(D_x) → E_x, when verifying the positivity condition in Equation (<ref>). The precise result is given here:Suppose that conditions (a) and (b) of Definition <ref> hold for some localizing subsetand that ν∈ is given. Suppose moreover thatT_x^* ( (F_D_1^*1)|_E_x F_D_x + F_D_x (F_D_1 1)|_E_x) T_x + νϕ(x^* x)is positive in 𝕃(E)/ 𝕂(E) for all x ∈. Then we have thatϕ(a^*) ( (F_D_1^*1) F_D^* + F_D (F_D_1 1) ) ϕ(a) + νϕ(a^* a)is positive in 𝕃(E)/ 𝕂(E) for all a ∈ A. For x ∈ we have that [F_D_1 1, ϕ(x) ] = 0 and the closed submodule E_xE is thus invariant under F_D_1 1. The restriction (F_D_1 1)|_E_x : E_x → E_x is therefore a well-defined bounded adjointable operator. The same observation holds for the adjoint F_D_1^*1.Sinceis countable we may write the elements inas a sequence {x_1,x_2,x_3,…}. For each n ∈, we choose a constantC_n > 2 +x_n ^2 +F_D T_x_n^* - T_x_n^* F_D_x_∞x_nand define the element:= ∑_n = 1^∞1/n^2 C_n x_n^* x_n∈ A ,where the series is absolutely convergent. Since A ⊆ A is norm-dense and = ^* we have thatA ⊆ Ais norm-dense as well. It therefore suffices to show that( (F_D_1^*1) F_D^* + F_D (F_D_1 1) )+ νϕ(^2)is positive in the Calkin algebra 𝕃(E)/𝕂(E). We now compute modulo 𝕂(E), using Proposition <ref>, thatcommutes with F_D_1 1 and that (F_D,E) is a Kasparov module:( (F_D_1^*1) F_D^* + F_D (F_D_1 1) ) ∼^1/2( (F_D_1^*1) F_D + F_D (F_D_1 1) ) ^3/2 = ^1/2∑_n = 1^∞1/n^2 C_n( (F_D_1^*1) F_D + F_D (F_D_1 1) ) T_x_n^* T_x_n∼^1/2∑_n = 1^∞1/n^2 C_n T_x_n^* ( (F_D_1^*1)|_E_x F_D_x + F_D_x (F_D_1 1)|_E_x) T_x_n^1/2.But this proves the present proposition since ^1/2∑_n = 1^∞1/n^2 C_n T_x_n^* ( (F_D_1^*1)|_E_x F_D_x + F_D_x (F_D_1 1)|_E_x) T_x_n^1/2 + νϕ(^2)=^1/2∑_n = 1^∞1/n^2 C_n( T_x_n^* ( (F_D_1^*1)|_E_x F_D_x + F_D_x (F_D_1 1)|_E_x) T_x_n + ν T_x_n^* T_x_n)^1/2is positive in 𝕃(E)/𝕂(E) by assumption.§.§ The positivity conditionWe remain in the setup described in the beginning of Section <ref>.Before continuing our treatment of the positivity condition in Equation (<ref>) we introduce some further notation: For each x ∈ satisfying condition (c) in Definition <ref> we putDom(Q_x) := Dom(D ϕ(x^*)) ∩Im(ϕ(x))and define the map Q_x : Dom(Q_x) → C byQ_x(ξ) := 2 ReD ϕ(x^*) ξ, (D_11)ϕ(x^*) ξ,where Re : C → C takes the real part of an element in the C^*-algebra C.For each ≥ 0 and x ∈ satisfying condition (b) of Definition <ref> we define the bounded adjointable operators on E_x:R_1()|_E_x := (1 ++ (D_1^*1)(D_11))^-1|_E_x S_1()|_E_x := (D_11) (1 ++ (D_1^*1)(D_11))^-1|_E_x R_x() := (1 ++ D_x^2)^-1 S_x() := D_x (1 ++ D_x^2)^-1. The next lemma follows by standard functional calculus arguments: Suppose that x ∈ satisfies condition (b) of Definition <ref>. Then the maps [0,∞)^2 →𝕃(E_x) defined byM_1(,μ,x) := S_x() S_1(μ)|_E_xM_2(,μ,x) := S_x() R_1(μ)|_E_x√(1+ μ)M_3(,μ,x):= R_x() S_1(μ)|_E_x√(1 + )M_4(,μ,x) := R_x() R_1(μ)|_E_x√((1 + )(1 + μ))are all continuous in operator norm and satisfy the estimateM_j(,μ,x) _∞≤ (1 + )^-1/2 (1 + μ)^-1/2 j ∈{1,2,3,4},for all ,μ∈ [0,∞). 
In particular, it holds that the integral1/π^2∫_0^∞∫_0^∞ (μ)^-1/2 (M_j^* M_j)(,μ,x)d dμconverges absolutely to a bounded adjointable operator K_j(x) ∈𝕃(E_x) with 0 ≤ K_j(x) ≤ 1 for all j ∈{1,2,3,4}. In order to ensure that later computations are well-defined we prove the following: Suppose that x ∈ satisfies condition (c) of Definition <ref> and that d(x^*) ϕ(x) _∞ < 1. ThenIm( R_x() T_x ) Dom(Q_x) andIm( S_x() T_x ) Dom(Q_x) ,for all ≥ 0. In particular, if x ∈ moreover satisfies condition (b) of Definition <ref>, thenIm( M_j(,μ,x) T_x) Dom(Q_x) ,for all j ∈{1,2,3,4} and all ,μ∈ [0,∞). Recall from Lemma <ref> and Proposition <ref> that( i r + D_x)^-1 T_x = T_x (i r + D ϕ(x^* x))^-1,for all r ∈ with |r| ≥ 1 >d(x^*) ϕ(x) _∞. We thus see thatIm( (i r + D_x)^-1 T_x ) Im(ϕ(x)) ∩Dom(D ϕ(x^*)) = Dom(Q_x) . The inclusions in Equation (<ref>) now follow sinceR_x() T_x = (-i √(1 + ) + D_x)^-1 ( i √(1 + ) + D_x)^-1 T_xand sinceS_x() T_x = D_x R_x() T_x = ( i √(1 + ) + D_x)^-1 T_x + i √(1 + ) R_x() T_x ,for all ≥ 0. We now start a more detailed computation of the application Q_x : Dom(Q_x) → C from Definition <ref>. Suppose that x ∈ satisfies condition (b) and (c) of Definition <ref> and that d(x^*) ϕ(x) _∞ < 1. ThenQ_x( S_x() T_x(ξ)) = 2 Re (D_11) ϕ(x) ξ, S_x() T_x ξ - (1 + ) Q_x( R_x() T_x(ξ)) ,for all ∈ [0,∞) and ξ∈Dom( (D_11)ϕ(x) ). Let ∈ [0,∞) and let ξ∈Dom( (D_11)ϕ(x) ) be given. We first claim thatD T_x^* S_x() T_x ξ∈Dom( (D_11) ϕ(x^* x) )and that(D_11 ) ϕ(x^* x) D T_x^* S_x() T_x ξ = (D_11) ϕ(x^* x) ξ - (1 + ) (D_11) T_x^* R_x() T_x ξ.But this follows sinceϕ(x^* x) D T_x^* S_x() T_xξ= T_x^* D_x S_x() T_x ξ = ϕ(x^* x) ξ - (1 + ) T_x^* R_x() T_x ξ∈Dom(D_11) ,where we remark that ϕ(x^* x) ξ∈Dom(D_11) since x^* ∈ and that T_x^* R_x() T_x ξ∈Dom(D) ∩Dom(D_11) by condition (c) and Lemma <ref>. Notice now that condition (b) and Proposition <ref> implies that (D_11) ϕ(x^* x) : Dom( (D_11) ϕ(x^* x) ) → Eis selfadjoint and regular. Putting η := T_x^* R_x() T_x(ξ) ∈Dom(D) ∩Dom(D_11) and using the above claim, the lemma is then proved by the following computation:1/2 Q_x( S_x() T_x(ξ)) = Re D T_x^* S_x() T_x(ξ), (D_11) T_x^* S_x() T_x(ξ) = Re (D_11) ϕ(x^* x) D T_x^* S_x() T_x ξ, D η = Re (D_11) ϕ(x^* x) ξ, D η - (1 + ) Re (D_11) η, D η = Re (D_11) ϕ(x) ξ, S_x() T_x ξ - (1 + ) Re (D_11) η, D η. For each x ∈ satisfying condition (b) and (c) of Definition <ref> and that d(x^*) ϕ(x) _∞ < 1, we define the assignmentQ_j(,μ,x) : Im(T_x) → CQ_j(,μ,x)( T_x ξ) := Q_x( M_j(,μ,x) T_x ξ) ,for all ,μ∈ [0,∞), j ∈{1,2,3,4}. The main algebraic result of this section can now be stated and proved: Suppose that x ∈ satisfies condition (b) and (c) of Definition <ref> and that d(x^*) ϕ(x) _∞ < 1. Then we have the identity∑_j = 1^4 Q_j(,μ,x)(T_x ξ) = 2 ReT_x ξ, S_x() S_1(μ)|_E_x T_x ξ,for all ,μ∈ [0,∞) and all ξ∈ E. Let ,μ∈ [0,∞) and ξ∈ E be given. Remark that S_1(μ)ξ,R_1(μ)ξ∈Dom( (D_11) ϕ(x) ). 
We may thus use Lemma <ref> to compute as follows:∑_j = 1^4 Q_j(,μ,x)(T_x ξ)= Q_x( S_x() T_x S_1(μ) ξ) + Q_x( S_x() T_x R_1(μ) ξ) (1 + μ)+ Q_x( R_x() T_x S_1(μ) ξ) (1 + )+ Q_x( R_x() T_x R_1(μ) ξ) (1 + ) (1 + μ)= 2 Re (D_11) ϕ(x) S_1(μ) ξ, S_x() T_x S_1(μ) ξ + 2 Re (D_11) ϕ(x) R_1(μ) ξ, S_x() T_x R_1(μ) ξ (1 + μ)= 2 Re T_x (D_1^*1) S_1(μ) ξ, S_x() T_x S_1(μ) ξ + 2 Re T_x S_1(μ) ξ, S_x() T_x R_1(μ) ξ (1 + μ)= 2 ReT_x ξ, S_x() S_1(μ)|_E_x T_x ξThis proves the present lemma.We are now ready to treat the positivity condition in Equation (<ref>): Suppose that ⊆ is a localizing subset satisfying the local positivity condition, that d(x^*) ϕ(x) _∞ < 1 for all x ∈ and that there exists a > 0 such that _x ≤ for all x ∈. Then the inequalityϕ(a)^* ( (F_D_1^*1) F_D^* + F_D (F_D_1 1)) ϕ(a) ≥ -4 κϕ(a^* a)holds in the quotient C^*-algebra 𝕃(E)/𝕂(E) for all a ∈ A. By Proposition <ref>, it suffices to show thatT_x^* ( (F_D_1^*1)|_E_x F_D_x + F_D_x (F_D_1 1)|_E_x) T_x + 4 κϕ(x^* x)is positive in 𝕃(E)/𝕂(E) for all x ∈. Let thus x ∈ be fixed. We will prove the inequality2 Re F_D_x (F_D_1 1)|_E_xT_x ξ, T_x ξ≥ - 4 κ T_x ξ, T_x ξin the C^*-algebra C, for all ξ∈Dom(D) ∩Dom(D_11). Remark that this is enough since Dom(D) ∩Dom(D_11)E is norm-dense. Let thus ξ∈Dom(D) ∩Dom(D_11) be given. We have that2 Re F_D_x (F_D_1 1)|_E_x) T_x ξ, T_x ξ =2/π^2∫_0^∞∫_0^∞ (μ)^-1/2ReS_x() S_1(μ)|_E_x T_x ξ , T_x ξd dμ,where the integral converges absolutely in the norm on C and the integrand is norm-continuous from [0,∞)^2 to C. Now, by Lemma <ref> and the local positivity condition we have that2 ReS_x() S_1(μ)|_E_x T_x ξ , T_x ξ = ∑_j = 1^4 Q_j(,μ,x)(T_x ξ) ≥ - κ∑_j = 1^4 M_j(,μ,x) T_x ξ, M_j(,μ,x) T_x ξ.It therefore follows by Lemma <ref> that2/π^2∫_0^∞∫_0^∞ (μ)^-1/2ReS_x() S_1(μ)|_E_x T_x ξ , T_x ξ≥ -κ1/π^2∫_0^∞∫_0^∞ (μ)^-1/2∑_j = 1^4 M_j(,μ,x) T_x ξ, M_j(,μ,x) T_x ξd dμ = - κ∑_j = 1^4 T_x ξ , K_j(x) T_x ξ≥ - 4 κT_x ξ , T_x ξ.But this proves the proposition. amsalpha-lmp CGRS14[BaJu83]BaJu:TBK S. Baaj and P. Julg, Théorie bivariante de Kasparov et opérateurs non bornés dans les C∗-modules hilbertiens, C. R. Acad. Sci. Paris Sér. I Math. 296 (1983), no. 21, 875–878. 715325 (84m:46091)[BDT89]BDT:CRA P. Baum, R. G. Douglas, and M. E. Taylor, Cycles and relative cycles in analytic K-homology, J. Differential Geom. 30 (1989), no. 3, 761–804. 1021372[BMvS16]BMS13 S. Brain, B. Mesland, and W. D. van Suijlekom, Gauge theory for spectral triples and the unbounded Kasparov product, J. Noncommut. Geom. 10 (2016), no. 1, 135–206. 3500818[CGRS14]CGRS:ILN A. L. Carey, V. Gayral, A. Rennie, and F. A. Sukochev, Index theory for locally compact noncommutative geometries, Mem. Amer. Math. Soc. 231 (2014), no. 1085, vi+130. 3221983[CoMo08]CM08 A. Connes and H. Moscovici, Type III and spectral triples, Traces in number theory, geometry and quantum fields, Aspects Math., E38, Friedr. Vieweg, Wiesbaden, 2008, pp. 57–71.[Con94]Con:NCG A. Connes, Noncommutative geometry, Academic Press, Inc., San Diego, CA, 1994. 1303779 (95j:46063)[Con96]Con:GMF , Gravity coupled with matter and the foundation of non-commutative geometry, Comm. Math. Phys. 182 (1996), no. 1, 155–176. 1441908[CoSk84]CoSk:LIF A. Connes and G. Skandalis, The longitudinal index theorem for foliations, Publ. Res. Inst. Math. Sci. 20 (1984), no. 6, 1139–1183. 775126 (87h:58209)[Hil10]Hil:BIK M. Hilsum, Bordism invariance in KK-theory, Math. Scand. 107 (2010), no. 1, 73–89. 2679393[Kaa15]Kaa:UKM J. 
Kaad, The unbounded Kasparov product by a differentiable module,.[Kaa16]Kaa:MEU , Morita invariance of unbounded bivariant KK-theory, .[Kaa17]Kaa:DAH , Differentiable absorption of Hilbert C^*-modules, connections, and lifts of unbounded operators, To appear in J. Noncommut. Geom. 11 (2017), 1–32. [KaLe13]KaLe:SFU J. Kaad and M. Lesch, Spectral flow and the unbounded Kasparov product, Adv. Math. 248 (2013), 495–530. 3107519[Kas80]Kas:OFE G. G. Kasparov, The operator K-functor and extensions of C∗-algebras, Izv. Akad. Nauk SSSR Ser. Mat. 44 (1980), no. 3, 571–636, 719. 582160 (81m:58075)[KavS17]KaSu:FDA J. Kaad and W. D. van Suijlekom, Factorization of Dirac operators on almost-regular fibrations of spin^c manifolds, Preprint, 2017.[Kuc97]Kuc:KUM D. Kucerovsky, The KK-product of unbounded modules, K-Theory 11 (1997), no. 1, 17–34. 1435704 (98k:19007)[Lan95]Lan:HMT E. C. Lance, Hilbert C *-modules, London Mathematical Society Lecture Note Series, vol. 210, Cambridge University Press, Cambridge, 1995, A toolkit for operator algebraists. 1325694 (96k:46100)[Lat13]Lat:QLM F. Latrémolière, Quantum locally compact metric spaces, J. Funct. Anal. 264 (2013), no. 1, 362–402. 2995712[MeRe16]MeRe:NMU B. Mesland and A. Rennie, Nonunital spectral triples and metric completeness in unbounded KK-theory, J. Funct. Anal. 271 (2016), no. 9, 2460–2538. 3545223[Mes14]Mes:UCN B. Mesland, Unbounded bivariant K-theory and correspondences in noncommutative geometry, J. Reine Angew. Math. 691 (2014), 101–172. 3213549[Wor91]Wor:UAQ S. L. Woronowicz, Unbounded elements affiliated with C *-algebras and noncompact quantum groups, Comm. Math. Phys. 136 (1991), no. 2, 399–432. 1096123 (92b:46117)
http://arxiv.org/abs/1709.08996v1
{ "authors": [ "Jens Kaad", "Walter D. van Suijlekom" ], "categories": [ "math.OA", "math.FA", "math.KT" ], "primary_category": "math.OA", "published": "20170926131258", "title": "On a theorem of Kucerovsky for half-closed chains" }
http://arxiv.org/abs/1709.09010v1
{ "authors": [ "Kai-He Ding", "Lih-King Lim", "Gang Su", "Zheng-Yu Weng" ], "categories": [ "cond-mat.mes-hall" ], "primary_category": "cond-mat.mes-hall", "published": "20170926133910", "title": "Quantum Hall effect in ac driven graphene: from half-integer to integer case" }
Deep convolutional neural networks for estimating porous material parameters with ultrasound tomography Timo Lähivaara^a, Leo Kärkkäinen^b, Janne M.J. Huttunen^b, and Jan S. Hesthaven^c ^aDepartment of Applied Physics, University of Eastern Finland, Kuopio, Finland ^bNokia Technologies, Espoo, Finland Present address: Nokia Bell Labs, Espoo, Finland ^cComputational Mathematics and Simulation Science, Ecole Polytechnique Fédérale de Lausanne, Lausanne, Switzerland ================================================================================================================================================================================================================================================================================================== §.§ AbstractWe study the feasibility of data-based machine learning applied to ultrasound tomography to estimate water-saturated porous material parameters.In this work, the data to train the neural networks is simulated by solving wave propagation in coupled poroviscoelastic-viscoelastic-acoustic media.As the forward model, we consider a high-order discontinuous Galerkin method, while deep convolutional neural networks are used to solve the parameter estimation problem.In the numerical experiment, we estimate the material porosity and tortuosity, while the remaining parameters, which are of less interest, are successfully marginalized in the neural networks-based inversion. Computational examples confirm the feasibility and accuracy of this approach.§ INTRODUCTION Because of the existence of anomalous systems... Measuring the porous properties of a medium is a demanding task. Different parameters are often measured by different application-specific methods; e.g., the porosity of rock is measured by weighing water-saturated samples and comparing their weight with dried samples. The flow resistivity of damping materials can be computed from the pressure drop caused to the gas flowing through the material.In medical ultrasound studies, the porosity of bone is determined indirectly by measuring the ultrasound attenuation as the wave passes through the bone.For porous material characterization, the information carried by waves provides a potential way to estimate the corresponding material parameters.Ultrasound tomography (UST) is one technique that can be used for material characterization purposes. In this technique, an array of sensors is placed around the target. Typically, one of the sensors acts as a source while the others receive the data.By changing the sensor that acts as a source, a comprehensive set of wave data can be recorded, which can be used to infer the material properties.For further details on UST, we refer to <cit.> and references therein.The theory of wave propagation in porous media was rigorously formulated in the 1950s and 1960s by Biot <cit.>. The model was first used to study the porous properties of bedrock in oil exploration. Since then, the model has been applied and further developed in a number of different fields <cit.>. The challenge of Biot's model is its computational complexity. The model produces several different types of waveforms, i.e., the fast and slow pressure waves and the shear wave, the computational simulation of which is a demanding task even for modern supercomputers.
Computational challenges further increase when attempting to solve inverse problems, as the forward model has to be evaluated several times.In this work, we consider a process for the parameter estimation comprising two sub-tasks: 1) the forward model: the simulation of the wave fields for given parameter values, and 2) the inverse problem: estimation of the parameters from the measurements of the wave fields.The inverse problem is solved using deep convolutional neural networks that provide a framework to solve the inverse problem: we can train a neural network as a model from wave fields to the parameters.During the last decades, neural networks have been applied in various research areas such as image recognition <cit.>, cancer diagnosis <cit.>, forest inventory <cit.>, and groundwater resources <cit.>.Deep convolutional neural networks (CNN) are a special type of deep neural networks <cit.> that employ convolutions instead of matrix multiplications (e.g. to reduce the number of unknowns).In this study, we apply deep convolutional neural networks to two-dimensional ultrasound data.The structure of the rest of the paper is as follows. First, in Section <ref>, we formulate the poroviscoelastic and viscoelastic models and describe the discontinuous Galerkin method. Then, in Section <ref>, we describe the neural networks technique for the prediction of material parameters. Numerical experiments are presented in Section <ref>. Finally, conclusions are given in Section <ref>. §.§ Model justification The purpose of this paper is to study the feasibility of using a data-based machine learning approach in UST to characterize porous material parameters. In the synthetic model setup, we have a cylindrical water tank including an elastic shell layer.Ultrasound sensors are placed inside the water tank.In the model, we place a cylindrically shaped porous material sample in water and estimate its porosity and tortuosity from the corresponding ultrasound measurements by a convolutional neural network. Figure <ref> shows a schematic drawing of the UST setup studied in this work.The proposed model has a wide range of potential applications. Cylindrically shaped core samples can be taken of bedrock or man-made materials, such as ceramics or concrete, to investigate their properties. In addition, from the medical point of view, core samples can be taken, for example, of cartilage or bone for diagnostic purposes. Depending on the application, the model geometry and sample size, together with the sensor setup, need to be scaled.§ WAVE PROPAGATION MODEL In this paper, we consider wave propagation in isotropic coupled poroviscoelastic-viscoelastic-acoustic media.In the following Sections <ref> and <ref>, we follow <cit.> and formulate the poroviscoelastic and viscoelastic wave models. The discontinuous Galerkin method is discussed in Section <ref>. §.§ Biot's poroelastic wave equation We use the theory of wave propagation in poroelastic media developed by Biot in <cit.>.We express the Biot equations in terms of the solid displacement u_s and the relative displacement of the fluid w = ϕ(u_f - u_s), where ϕ is the porosity and u_f is the fluid displacement. In the following, ρ_s and ρ_f denote the solid and fluid densities, respectively. We have ρ_a ∂^2 u_s/∂ t^2 + ρ_f ∂^2 w/∂ t^2 = ∇·σ, ρ_f ∂^2 u_s/∂ t^2 + (τ ρ_f/ϕ) ∂^2 w/∂ t^2 + (η/k) ∂ w/∂ t = ∇·σ_f, where ρ_a = (1-ϕ)ρ_s + ϕρ_f is the average density, σ is the total stress tensor, and σ_f is the fluid stress tensor.
In Eq. (<ref>), τ is the tortuosity, η is the fluid viscosity, and k is the permeability. The third term in (<ref>) is a valid model at low frequencies, when the flow regime is laminar (Poiseuille flow). At high frequencies, inertial forces may dominate the flow regime. In this case, the attenuation model may be described in terms of viscous relaxation mechanisms as discussed, for example, in references <cit.>. In this paper, we use the model derived in reference <cit.>. The level of attenuation is controlled by the quality factor Q_0.

One can express the stress tensors as

σ = 2μ_fr E + (K_fr + α^2 M - 2/3 μ_fr) tr(E) I - α M ζ I,
σ_f = M(α tr(E) - ζ) I,

where μ_fr is the frame shear modulus, E = 1/2(∇u + (∇u)^T) denotes the solid strain tensor, K_fr is the frame bulk modulus, tr(·) is the trace, I is the identity matrix, and ζ = -∇·w is the variation of the fluid content. The effective stress constant α and modulus M, given in Eqs. (<ref>) and (<ref>), can be written as α = 1 - K_fr/K_s, where K_s is the solid bulk modulus, and M = K_s/(α - ϕ(1 - K_s/K_f)), where K_f is the fluid bulk modulus.
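To make the constitutive relations above concrete, the short sketch below evaluates the effective stress constant α and the modulus M from the frame, solid, and fluid bulk moduli as defined in the text; the numerical values in the call are placeholders, not parameters from the experiments reported later.

```python
# Sketch: Biot effective stress constant and modulus (alpha and M above).
# Input values are placeholders, not the paper's experimental settings.

def biot_alpha_M(K_fr, K_s, K_f, phi):
    """Return (alpha, M) given frame/solid/fluid bulk moduli and porosity."""
    alpha = 1.0 - K_fr / K_s                      # effective stress constant
    M = K_s / (alpha - phi * (1.0 - K_s / K_f))   # effective modulus
    return alpha, M

alpha, M = biot_alpha_M(K_fr=2.0e9, K_s=35.0e9, K_f=2.295e9, phi=0.3)
print(alpha, M)
```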
§.§ Viscoelastic wave equation
The following discussion on the elastic wave equation with viscoelastic effects follows Carcione's book <cit.>, in which a detailed treatment can be found. Expressed as a second-order system, the elastic wave equation can be written in the form

ρ_e ∂^2 u_e/∂ t^2 = ∇·σ_e + s,

where ρ_e is the density, u_e the elastic displacement, σ_e the stress tensor, and s a volume source. In the two-dimensional viscoelastic (isotropic) case considered here, the components of the solid stress tensor σ_e may be written as <cit.>

σ_11 = (λ+2μ)ϵ_11 + λϵ_22 + (λ+μ)∑_ℓ=1^L_e ν_1^(ℓ) + 2μ∑_ℓ=1^L_e ν_11^(ℓ),
σ_22 = (λ+2μ)ϵ_22 + λϵ_11 + (λ+μ)∑_ℓ=1^L_e ν_1^(ℓ) - 2μ∑_ℓ=1^L_e ν_11^(ℓ),
σ_12 = 2μϵ_12 + 2μ∑_ℓ=1^L_e ν_12^(ℓ),

where λ and μ are the unrelaxed Lamé coefficients, ϵ_11, ϵ_22, and ϵ_12 are the strain components, and L_e is the number of relaxation terms. The memory variables ν_1^(ℓ), ν_11^(ℓ), and ν_12^(ℓ) satisfy

∂ν_1^(ℓ)/∂ t = -ν_1^(ℓ)/τ_σℓ^(1) + ϕ_1ℓ(0)(ϵ_11+ϵ_22),
∂ν_11^(ℓ)/∂ t = -ν_11^(ℓ)/τ_σℓ^(2) + ϕ_2ℓ(0)(ϵ_11-ϵ_22)/2, ℓ=1,…,L_e,
∂ν_12^(ℓ)/∂ t = -ν_12^(ℓ)/τ_σℓ^(2) + ϕ_2ℓ(0)ϵ_12,

where

ϕ_kℓ(t) = 1/τ_σℓ^(k) (1 - τ_ϵℓ^(k)/τ_σℓ^(k)) (∑_ℓ=1^L_e τ_ϵℓ^(k)/τ_σℓ^(k))^-1 exp(-t/τ_σℓ^(k)), k=1,2.

In Eq. (<ref>), τ_ϵℓ^(k) and τ_σℓ^(k) are relaxation times corresponding to dilatational (k=1) and shear (k=2) attenuation mechanisms. The acoustic wave equation can be obtained from the system by setting the Lamé coefficient μ to zero.

§.§ Discontinuous Galerkin method
Wave propagation in coupled poroviscoelastic-viscoelastic-acoustic media can be solved using the discontinuous Galerkin (DG) method (see e.g. <cit.>), a well-known numerical approach for solving differential equations. The DG method has properties that make it well-suited for wave simulations: the method can be effectively parallelized, and it can handle complex geometries and, due to its discontinuous nature, large discontinuities in the material parameters. These are all properties that are essential for the method to be used in complex wave problems. Our formulation follows <cit.>, where a detailed account of the DG method can be found.

§ ESTIMATING MATERIAL PARAMETERS BY NEURAL NETWORKS
The aim of this paper is to estimate porous material parameters by applying artificial neural networks trained on simulated data. Compared to traditional inverse methods, neural networks have the advantage that they allow computationally efficient inference. In other words, after the network has been trained, inferences can be carried out using the network without evaluating the forward model. Furthermore, the neural networks provide a straightforward approach to marginalize uninteresting parameters in the inference. First, we give a brief summary of deep neural networks. For a wider treatment of the topic, the reader is referred to the review article by LeCun, Bengio, and Hinton <cit.>, to Bengio <cit.>, or to the book by Buduma <cit.>.

We consider the following supervised learning task. We have a set of input data {X_ℓ} with labels {Y_ℓ} (ℓ=1,…,N). In our study, the input data comprise the measured ultrasound fields considered as “images”, such that rows correspond to temporal data and columns to receivers, and the outputs are the corresponding material parameters. The “image” should be interpreted as two-dimensional data, not a traditional picture; there is no color mapping used in our algorithm. The aim is to find a function Θ between the inputs and outputs:

Y = Θ(X).

The task is to find a suitable form for the function and learn it from the given data. Contrary to traditional machine learning, in which the features are pre-specified, deep learning has the advantage that the features are learned from the data.

Neural networks are a widely used class of functions in machine learning. Figure <ref> (left) shows an example of a neural network with one hidden layer. Each circular node represents an artificial neuron and an arrow represents a connection from the output of one neuron to the input of another. This network has four inputs x_1, …, x_4 (input layer), five hidden units with output activations a_1, …, a_5 (hidden layer), and two outputs y_1 and y_2 (output layer). Each neuron in the hidden layer is a computational unit that, for a given input (x_1,…,x_4), calculates its activation as

a_j = σ(∑_i=1^4 w^(1)_ji x_i + b^(1)_j), j=1,…,5,

where w^(1)_ji is the weight assigned to the connection between the ith input and the jth activation, b^(1)_j is a bias term, and σ is a non-linear function (nonlinearity). Similarly, the neurons at the output layer are units for which the output is calculated as

y_j = σ(∑_i=1^5 w^(2)_ji a_i + b^(2)_j), j=1,2,

where w^(2)_ji and b^(2)_j are again weights and biases. At present, the most commonly used nonlinearity is the Rectified Linear Unit (ReLU), σ(x)=max(0,x) <cit.>. During past decades, smoother nonlinearities such as the sigmoid function σ(x)=(1+exp(-x))^-1 or tanh have been used, but the ReLU typically allows better performance in the training of deep neural architectures on large and complex datasets <cit.>. Some layers can also have different nonlinearities than others, but here we use the same σ for notational convenience. It is easy to see that the neural network can be presented in matrix form as

A = σ(w^(1)·X + b^(1)), Y = σ(w^(2)·A + b^(2)),

where X=(x_1,…,x_4)^T, Y=(y_1,y_2)^T, and A=(a_1,…,a_5)^T; w^(1) and w^(2) are matrices/tensors comprising the weights w^(1)_ij and w^(2)_ij, respectively; b^(1) and b^(2) are the vectors comprising the bias variables b^(1)_i and b^(2)_i, respectively; and the function σ operates on the vectors element-wise. The network can also be written as a nested function Θ_θ(X) = σ(w^(2)·σ(w^(1)·X + b^(1)) + b^(2)), where θ represents the unknowns of the network (θ=(w^(1),b^(1),w^(2),b^(2))). Deeper networks can be constructed similarly. For example, a network with two hidden layers can be formed as A^(1)=σ(w^(1)·X+b^(1)), A^(2)=σ(w^(2)·A^(1)+b^(2)), and Y=σ(w^(3)·A^(2)+b^(3)), or Θ_θ(X)=σ(w^(3)·σ(w^(2)·σ(w^(1)·X+b^(1))+b^(2))+b^(3)), where θ=(w^(1),b^(1),w^(2),b^(2),w^(3),b^(3)). The neural networks described above are called fully connected: there are connections between all nodes of the network.
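As an illustration of the matrix form above, the following minimal NumPy sketch implements the forward pass of the one-hidden-layer network with a ReLU nonlinearity; the layer sizes (4 inputs, 5 hidden units, 2 outputs) follow the example in the text, and the random weights are placeholders.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def forward(X, w1, b1, w2, b2):
    """Forward pass Theta_theta(X) of the one-hidden-layer network above."""
    A = relu(w1 @ X + b1)   # hidden activations a_j
    Y = relu(w2 @ A + b2)   # outputs y_j
    return Y

rng = np.random.default_rng(0)
w1, b1 = rng.standard_normal((5, 4)), rng.standard_normal(5)  # 4 inputs -> 5 hidden
w2, b2 = rng.standard_normal((2, 5)), rng.standard_normal(2)  # 5 hidden -> 2 outputs
print(forward(rng.standard_normal(4), w1, b1, w2, b2))
```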
A disadvantage of such fully connected networks is that the total number of free parameters can be enormous, especially when the dimension of the input data is high (e.g., high-resolution pictures) and/or the network has several hidden layers. Convolutional neural networks (CNN) were introduced <cit.> to overcome this problem. Figure <ref> (right) shows an example of a 1D convolutional layer. In a convolutional layer, the matrix multiplication is replaced with a convolution: for example, in 1D, the activations of the layer are calculated as

A = σ(w^(1) * X + B^(1)),

where * denotes the discrete 1D convolution and w^(1) is a set of filter weights (a feature to be learned). In 2D, the input X can be considered as an image and the convolution w^(1) * X is two-dimensional with a convolution matrix or mask w^(1) (a filtering kernel). In practice, however, a single convolution mask per layer is typically not enough for good performance. Therefore, each layer includes multiple masks that are applied simultaneously, and the output of the convolutional layer is a bank (channels) of images. This three-dimensional object forms the input of the next layer, such that each input channel has its own bank of filtering kernels and the convolution is effectively three-dimensional. Convolutional neural networks usually also employ pooling <cit.> for down-sampling (to reduce the dimension of the activations). See, for example, LeCun, Bengio, and Hinton <cit.> or Buduma <cit.> for details.

The training of the neural networks is based on a set of training data {X_ℓ, Y_ℓ}. The purpose is to find a set of weights and biases that minimize the discrepancy between the outputs {Y_ℓ} and the corresponding predicted values given by the neural networks {Θ_θ(X_ℓ)}. Typically, in regression problems, this is achieved by minimizing the quadratic loss function f(θ) over the training dataset

f(θ) = f(θ; {X_ℓ}, {Y_ℓ}) = 1/N_nn ∑_ℓ=1^N_nn (Θ_θ(X_ℓ) - Y_ℓ)^2

to obtain the network parameters, i.e., the weights and biases of the network. The optimization problem can be solved using gradient descent methods. The gradient can be calculated using backpropagation, which essentially is a procedure of applying the chain rule to the loss function in an iterative backward fashion <cit.>. The computation of the predictions Θ_θ(X_ℓ) and their gradients is an expensive task in the case of a large training dataset. Therefore, during the iteration, the cost and gradients are calculated for mini-batches that are randomly chosen from the training set. This procedure is called stochastic gradient descent (SGD) <cit.>. The validation of the network is commonly carried out using a test set, which is either split from the original data or collected separately. This test set is used to carry out the final evaluation of the performance. This is done due to the tendency of deep networks to overfit, which comes out as good performance on the training set but very poor performance on the test set (i.e., the network “remembers” the training samples instead of learning to generalize). A third set, the validation data set, is traditionally used to evaluate performance during the training process.
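The sketch below illustrates mini-batch SGD on the quadratic loss above for a simple linear model Θ_θ(X) = wX + b; this is a toy stand-in for the CNN (whose gradients would come from backpropagation through all layers), with made-up data, step size, and batch size.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((1000, 3))            # toy inputs
Y = X @ np.array([0.5, -1.0, 2.0]) + 0.1      # toy targets

w, b, lr, batch = np.zeros(3), 0.0, 0.05, 50  # parameters, step size, batch size
for step in range(2000):
    idx = rng.choice(len(X), size=batch, replace=False)  # random mini-batch
    Xb, Yb = X[idx], Y[idx]
    r = Xb @ w + b - Yb                        # residuals Theta(X) - Y
    grad_w = 2.0 * Xb.T @ r / batch            # gradient of the quadratic loss
    grad_b = 2.0 * r.mean()
    w -= lr * grad_w
    b -= lr * grad_b
print(w, b)
```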
§ NUMERICAL EXPERIMENT
In this section, we present the results obtained from testing the data-driven approach to estimate porous material parameters in ultrasound tomography. In this paper, the initial conditions are always assumed to be zero and we apply a free boundary condition. In the following results, time integration is carried out using an explicit low-storage Runge-Kutta scheme <cit.>. For each simulation, the length of the time step Δt is computed from

Δt = min_ℓ ( h^ℓ_min / (2 c^ℓ_max (N^ℓ)^2) ), ℓ=1,…,K,

where c^ℓ_max is the maximum wave speed, N^ℓ is the order of the polynomial basis, h^ℓ_min is the smallest distance between two vertices in element ℓ, and K is the number of elements.

§.§ Model setup
Let us first introduce the model problem. Figure <ref> shows the studied two-dimensional problem geometry. The propagation medium contains three subdomains: a cylindrically shaped poroelastic inclusion (black), a fluid (light gray), and a solid shell (dark gray). The computational domain is a circle with a radius of 10 cm with a 1 cm thick shell layer. A circularly shaped poroelastic inclusion with a radius of 4 cm is vertically shifted 2 cm from the center of the circular domain. Shifting the target from the center prevents symmetrically positioned sensors (with respect to the x-axis) from receiving identical signals, and therefore provides more information on the target. A total of 26 uniformly distributed ultrasound sensors are located at a distance of 8 cm from the center of the circle. The source is introduced on the strain components ϵ_11 and ϵ_22 by the first derivative of a Gaussian function with frequency f_0 = 40 kHz and a time delay t_0 = 1.2/f_0. The sensor that is used as a source does not collect data in our simulations. Receivers collect the solid velocity components v_x and v_y. In the following, the simulation time is 0.4 ms. Note that the recorded data are downsampled to a sampling frequency of 800 kHz on each receiver.

The inclusion is fully saturated with water. The fluid parameters are given by: the density ρ_f = 1020 kg/m^3, the fluid bulk modulus K_f = 2.295 GPa, and the viscosity η = 1.0e-3 Pa·s. All other material parameters of the inclusion are assumed to be unknown. In this paper, we assume a relatively wide range of possible parameter combinations, see Table <ref>. Furthermore, the unknown parameters are assumed to be uncorrelated. The physical parameter space gives ∼1.5 kHz as an upper bound for Biot's characteristic frequency

f_c = ηϕ / (2πτ k ρ_f),

and hence we operate in Biot's high-frequency regime (f_c < f_0) for all possible parameter combinations. For the fluid subdomain (water), we set: the density ρ = 1020 kg/m^3, the first Lamé parameter λ = 2.295 GPa, and the second Lamé parameter μ = 0. The elastic shell layer has the following parameters: ρ_e = 2000 kg/m^3, λ_e = 12.940 GPa, and μ_e = 5.78 GPa. The relaxation times for the viscoelastic attenuation in the shell layer are given in Table <ref>. The derived wave speeds for each subdomain are given in Table <ref>. For the poroelastic inclusion, both the minimum and maximum wave speeds are reported. It should be noted that the reported values for the wave speeds in the inclusion correspond to the values generated by sampling the material parameters and hence may not correspond to the global maximum and minimum values. A detailed approach for calculating the wave speeds is given in <cit.>.
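For concreteness, the following sketch evaluates the global time-step rule for Δt and the Biot characteristic frequency f_c given above; the element data and material values passed in are placeholders, not the paper's actual mesh or parameter samples.

```python
import numpy as np

def time_step(h_min, c_max, N):
    """Global dt = min over elements of h_min / (2 * c_max * N^2)."""
    h_min, c_max, N = map(np.asarray, (h_min, c_max, N))
    return np.min(h_min / (2.0 * c_max * N**2))

def biot_fc(eta, phi, tau, k, rho_f):
    """Biot characteristic frequency f_c = eta*phi / (2*pi*tau*k*rho_f)."""
    return eta * phi / (2.0 * np.pi * tau * k * rho_f)

dt = time_step(h_min=[0.0081, 0.0032], c_max=[2500.0, 1500.0], N=[3, 4])
fc = biot_fc(eta=1.0e-3, phi=0.4, tau=2.0, k=1.0e-11, rho_f=1020.0)
print(dt, fc)
```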
It should be noted that, depending on the application, only some of the material parameters are unknowns of interest. In this work, we focus on estimating the porosity and tortuosity of the inclusion, while the solid density and bulk modulus, frame bulk and shear modulus, permeability, and quality factor are marginalized in the neural-networks-based inversion algorithm, discussed in detail below.

§.§ Training, validation, and test data
For the convolutional neural networks algorithm used in this work, the recorded wave data are expressed as images X ∈ ℝ^d, d = N_t × N_r, where N_t denotes the number of time steps and N_r the number of receivers. The input dimension is d = 320 × 25, as the data X can be seen as a 2D image comprising 25 pixels, corresponding to the receiver positions, times 320 pixels, corresponding to the time evolution of the signal. As the data on each column of the image, we use

X(:, ℓ) = [x_r^ℓ(v_x^ℓ - v_x,noi^ℓ) + y_r^ℓ(v_y^ℓ - v_y,noi^ℓ)] / √((x_r^ℓ)^2 + (y_r^ℓ)^2), ℓ=1,…,N_r,

where (x_r^ℓ, y_r^ℓ) are the coordinates of the ℓth receiver and (v_x^ℓ, v_y^ℓ) are the horizontal and vertical velocity components at the ℓth receiver. In Eq. (<ref>), the subscript noi denotes data that are simulated without the porous inclusion. Two example images are shown in Fig. <ref>.

We have generated a training data set comprising 15,000 samples using computational grids that have ∼3 elements per wavelength. The physical parameters for each sample are drawn from the uniform distribution (bounds given in Table <ref>). The order of the basis functions is selected separately for each element of the grid. The order N_ℓ of the basis function in element ℓ is defined by

N_ℓ = ⌈ 2π a h^ℓ_max / λ^w_ℓ + b ⌉,

where λ^w_ℓ = c^ℓ_min/f_0 is the wavelength, c^ℓ_min is the minimum wave speed, and ⌈·⌉ is the ceiling function. The parameters a and b control the local accuracy on each element. Following <cit.>, we set (a, b) = (1.0294, 0.7857). Figure <ref> shows two examples of computational grids and the corresponding basis order selection on each triangle. The example grids consist of 466 elements and 256 vertices (h_min = 0.81 cm and h_max = 1.95 cm) (sample 1) and 1156 elements and 601 vertices (h_min = 0.32 cm and h_max = 1.75 cm) (sample 2).

Figure <ref> shows two snapshots of the scattered solid velocity field (√((v_x - v_x,noi)^2 + (v_y - v_y,noi)^2)) at two time instants. At the first time instant, the transmitted fast pressure wave and also the first reflected wave front are clearly visible. At the second time instant, all wave components have reflected back from the left surface of the phantom. Furthermore, more complicated wave scattering patterns can be seen inside the porous inclusion. These wave fields demonstrate how the received signals are obtained as combinations of multiple wave fronts.

To include observation noise in the training, each image in the simulation set is copied 5 times and each of the copies is corrupted with Gaussian noise of the form

X_ℓ^noisy = X_ℓ + Aϵ^A + B|X_ℓ|ϵ^B,

where ϵ^A and ϵ^B are independent zero-mean identically distributed Gaussian random variables. The second term represents additive white noise and the last term represents noise relative to the signal strength. To represent a wide range of different noise levels, for each sample image the coefficients A and B are randomly chosen such that the standard deviation of the white noise component is between 0.03% and 5% (varying logarithmically) and the standard deviation of the relative component is between 0% and 5%. The total number of samples in the training set is N_nn = 5 × 15000 = 75000. Furthermore, two additional data sets were generated: a validation data set and a test set, both comprising 3000 samples.
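A minimal sketch of the noise model above is given below: each clean image is replicated five times and corrupted with an additive white component and a component relative to the signal strength. The way A and B are drawn here (log-uniform/uniform within the quoted ranges, scaled by the signal level) is our reading of the text, not a verbatim reproduction of the paper's procedure.

```python
import numpy as np

rng = np.random.default_rng(2)

def corrupt(X, copies=5):
    """Return noisy copies X + A*eps_A + B*|X|*eps_B (noise model above)."""
    scale = np.std(X)                              # reference signal level
    out = []
    for _ in range(copies):
        A = scale * 10 ** rng.uniform(np.log10(0.0003), np.log10(0.05))
        B = rng.uniform(0.0, 0.05)
        noisy = (X + A * rng.standard_normal(X.shape)
                   + B * np.abs(X) * rng.standard_normal(X.shape))
        out.append(noisy)
    return np.stack(out)

X = rng.standard_normal((320, 25))  # one simulated input "image"
print(corrupt(X).shape)             # (5, 320, 25)
```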
In machine learning, the validation data set is traditionally used to evaluate performance during the training process, while the test set is used for the final evaluation of the network. These data sets are generated in the same way as the training set, except that the computational grids were required to have ∼4 elements per wavelength to avoid an inverse crime <cit.>. Furthermore, for the test set, the non-uniform basis order parameters are (a, b) = (1.2768, 1.4384) <cit.> (Eq. (<ref>)) and the noise is added in a more systematic manner (instead of choosing A and B randomly) to study the performance at different noise levels (see the Results section).

§.§ Convolutional neural networks architecture
The neural network architecture is shown in Table <ref> and Fig. <ref>. The CNN architecture is similar to the ones used for image classification, for example, Alexnet <cit.>, with some exceptions. In our problem, the input data can be considered as one-channel images instead of color images with three channels (i.e., we do not apply any color mapping). Our network also lacks the softmax layer, which is used as the outermost layer to provide the classification. Instead, the outermost two layers are simply fully connected layers. In addition, our network has a smaller number of convolutional and fully connected layers and smaller dimensions in the filter banks, which leads to a significantly smaller number of unknowns. We wish to note, however, that our aim was not to find the most optimal network architecture. For example, there may be other architectures that provide similar performance with an even smaller number of unknowns. Furthermore, similar performance can also be achieved with fully connected networks with ∼3 layers, but at the expense of a significantly larger number of unknowns. The loss function is chosen to be quadratic (Eq. (<ref>)). The implementation is carried out using Tensorflow, which is a Python toolbox for machine learning. The optimization was carried out with the Adam optimizer <cit.>. The batch size for the stochastic optimization is chosen to be 50 samples.
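Since the layer-by-layer architecture of Table <ref> is not reproduced here, the sketch below is only a hypothetical Keras layout in the same spirit: a few convolutional layers on the one-channel 320×25 input, followed by two fully connected layers with a two-dimensional regression output, trained with the Adam optimizer on the quadratic loss with batch size 50. All layer sizes are placeholders.

```python
import tensorflow as tf

# Hypothetical CNN in the spirit of the text; layer sizes are placeholders,
# not the architecture of Table <ref>.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(320, 25, 1)),                  # one-channel "image"
    tf.keras.layers.Conv2D(16, (5, 3), activation="relu"),
    tf.keras.layers.MaxPooling2D((2, 1)),
    tf.keras.layers.Conv2D(32, (5, 3), activation="relu"),
    tf.keras.layers.MaxPooling2D((2, 1)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),        # fully connected
    tf.keras.layers.Dense(2),                            # porosity, tortuosity
])
model.compile(optimizer="adam", loss="mse")              # quadratic loss, Adam
# model.fit(X_train, Y_train, batch_size=50, epochs=..., validation_data=...)
```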
§.§ Results
Figure <ref> shows the loss for the training and validation data. The loss is shown for the two unknown parameters of interest, i.e., porosity and tortuosity. In both cases, we observe that the network has practically reached its generalization capability after about 2000 full training cycles. The accuracy of the network is affected by the marginalization over all other parameters. In principle, the effect of the other parameters could compensate for the changes that using, for example, a different porosity would cause, leaving the waveform of the measurement intact.

We have applied the trained network to predict porosity and tortuosity from images of the test set that are corrupted with the white noise component at a low noise level (Fig. <ref>), a moderate noise level (Fig. <ref>), and a high noise level (Fig. <ref>). The figures also include error histograms. Table <ref> shows statistics of the prediction error for different noise levels. Figure <ref> shows the maximum absolute error and the root-mean-square error (the square root of Eq. (<ref>)) as a function of the noise level in the white noise component. The predictions are slightly positively biased at smaller noise levels, but the positive bias diminishes at higher noise levels. Such behavior might be due to the discretization error in the forward models (the simulation of the training and testing data was carried out using different levels of discretization), which may dominate at lower observation noise levels but becomes negligible at higher noise levels. On the contrary, with high levels of noise, the predictions are negatively biased, especially for larger values of porosity and tortuosity. We have also studied the effect of the relative noise (the third term in Eq. (<ref>)) on the results. Figure <ref> shows the maximum absolute error and the root-mean-square error as a function of the noise level in the relative component. As we can see, the predictions are almost unaffected by the relative error even at significantly high noise levels (∼5-10%).

§ CONCLUSIONS
In this paper, we proposed the use of convolutional neural networks (CNN) for the estimation of porous material parameters from synthetic ultrasound tomography data. In the studied model, ultrasound data were generated in a water tank into which the poroelastic material sample was placed. A total of 26 ultrasound sensors were positioned in the water. One of the sensors generated the source pulse while the others were used in receiving mode. The recorded velocity data were represented as images, which were further used as input to the CNN. In the experiment, the parameter space for the porous inclusion was assumed to be large. For example, the porosity of the inclusion was allowed to span the interval from 1% to 99%. The selected parameter space models different types of materials.

We estimated the porosity and tortuosity of the porous material sample while all other material parameters were considered as nuisance parameters (see Table <ref>). Based on the results, it seems that these parameters can be estimated with acceptable accuracy over a wide range of noise levels, while the nuisance parameters are successfully marginalized. The error histograms for both porosity and tortuosity show excellent accuracy in terms of root-mean-square error and bias. We have marginalized our inference of porosity and tortuosity over 6 other material parameters (listed in Table <ref>), which makes it possible to detect the primary porous material parameters from the waveforms without knowledge of the values of the nuisance parameters, even though they can have a significant impact on the waveforms themselves. The success of the marginalization significantly increases the potential of neural networks for material characterization. Future studies should include a more comprehensive investigation of model uncertainties, including geometrical inaccuracies (positioning and size of the material sample), fluid parameter changes, viscoelastic parameters of the shell layer, material inhomogeneities, and the sensor setup. In addition, the extension to three spatial dimensions together with actual measurements are essential steps to guarantee the effectiveness of the proposed method.

§.§ Acknowledgments
This work has been supported by the strategic funding of the University of Eastern Finland and by the Academy of Finland (project 250215, Finnish Centre of Excellence in Inverse Problems Research).
This article is based upon work from COST Action DENORMS CA-15125, supported by COST (European Cooperation in Science and Technology).

§.§ References
[1] N. Duric, P. J. Littrup, C. Li, O. Roy, and S. Schmidt, "Ultrasound tomography: A decade-long journey from the laboratory to clinic," in Ultrasound Imaging and Therapy, edited by A. Karellas and B. R. Thomadsen (CRC Press, Taylor & Francis Group, 2015).
[2] M. Biot, "Theory of propagation of elastic waves in a fluid saturated porous solid. I. Low frequency range," J. Acoust. Soc. Am. 28(2), 168–178 (1956).
[3] M. Biot, "Theory of propagation of elastic waves in a fluid saturated porous solid. II. Higher frequency range," J. Acoust. Soc. Am. 28(2), 179–191 (1956).
[4] M. Biot, "Mechanics of deformation and acoustic propagation in porous media," J. Appl. Phys. 33(4), 1482–1498 (1962).
[5] M. Biot, "Generalized theory of acoustic propagation in porous dissipative media," J. Acoust. Soc. Am. 34(5), 1254–1264 (1962).
[6] N. Sebaa, Z. Fellah, M. Fellah, E. Ogam, A. Wirgin, F. Mitri, C. Depollier, and W. Lauriks, "Ultrasonic characterization of human cancellous bone using the Biot theory: Inverse problem," J. Acoust. Soc. Am. 120(4), 1816–1824 (2006).
[7] M. Yvonne Ou, "On reconstruction of dynamic permeability and tortuosity from data at distinct frequencies," Inverse Problems 30(9), 095002 (2014).
[8] T. Lähivaara, N. Dudley Ward, T. Huttunen, Z. Rawlinson, and J. Kaipio, "Estimation of aquifer dimensions from passive seismic signals in the presence of material and source uncertainties," Geophys. J. Int. 200, 1662–1675 (2015).
[9] J. Parra Martinez, O. Dazel, P. Göransson, and J. Cuenca, "Acoustic analysis of anisotropic poroelastic multilayered systems," J. Appl. Phys. 119(8), 084907 (2016).
[10] A. Krizhevsky, I. Sutskever, and G. Hinton, "Imagenet classification with deep convolutional neural networks," in Advances in Neural Information Processing Systems 25, edited by F. Pereira, C. J. C. Burges, L. Bottou, and K. Q. Weinberger (Curran Associates, Inc., 2012), pp. 1097–1105.
[11] Y. LeCun, Y. Bengio, and G. Hinton, "Deep learning," Nature 521(7553), 436–444 (2015).
[12] P. S. Maclin, J. Dempsey, J. Brooks, and J. Rand, "Using neural networks to diagnose cancer," J. Med. Syst. 15(1), 11–19 (1991).
[13] L. A. Menendez, F. J. de Cos Juez, F. S. Lasheras, and J. A. Riesgo, "Artificial neural networks applied to cancer detection in a breast screening programme," Math. Comput. Model. 52(7-8), 983–991 (2010).
[14] P. Muukkonen and J. Heiskanen, "Estimating biomass for boreal forests using ASTER satellite data combined with standwise forest inventory data," Remote Sens. Environ. 99(4), 434–447 (2005).
[15] H. Niska, J.-P. Skön, P. Packalen, T. Tokola, M. Maltamo, and M. Kolehmainen, "Neural networks for the prediction of species-specific plot volumes using airborne laser scanning and aerial photographs," IEEE Trans. Geosci. Remote Sens. 48(3), 1076–1085 (2009).
[16] I. N. Daliakopoulos, P. Coulibaly, and I. K. Tsanis, "Groundwater level forecasting using artificial neural networks," J. Hydrol. 309(1-4), 229–240 (2005).
[17] S. Mohanty, K. Jha, A. Kumar, and K. P. Sudheer, "Artificial neural network modeling for groundwater level forecasting in a River Island of Eastern India," Water Resour. Manage. 24(9), 1845–1865 (2010).
[18] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, "Gradient-based learning applied to document recognition," Proceedings of the IEEE 86(11), 2278–2324 (1998).
[19] Y. Bengio, "Learning deep architectures for AI," Found. Trends Mach. Learn. 2(1), 1–127 (2009).
[20] J. Carcione, Wave Fields in Real Media: Wave Propagation in Anisotropic, Anelastic and Porous Media (Elsevier, 2001).
[21] C. Morency and J. Tromp, "Spectral-element simulations of wave propagation in porous media," Geophys. J. Int. 175(1), 301–345 (2008).
[22] L. Wilcox, G. Stadler, C. Burstedde, and O. Ghattas, "A high-order discontinuous Galerkin method for wave propagation through coupled elastic-acoustic media," J. Comput. Phys. 229, 9373–9396 (2010).
[23] N. Dudley Ward, T. Lähivaara, and S. Eveson, "A discontinuous Galerkin method for poroelastic wave propagation: Two-dimensional case," J. Comput. Phys. 350, 690–727 (2017).
[24] M. Käser and M. Dumbser, "An arbitrary high-order discontinuous Galerkin method for elastic waves on unstructured meshes - I. The two-dimensional isotropic case with external source terms," Geophys. J. Int. 166(2), 855–877 (2006).
[25] J. de la Puente, M. Dumbser, M. Käser, and H. Igel, "Discontinuous Galerkin methods for wave propagation in poroelastic media," Geophysics 73(5), T77–T97 (2008).
[26] G. Gabard and O. Dazel, "A discontinuous Galerkin method with plane waves for sound-absorbing materials," Int. J. Numer. Meth. Eng. 104(12), 1115–1138 (2015).
[27] J. Hesthaven and T. Warburton, Nodal Discontinuous Galerkin Methods: Algorithms, Analysis, and Applications (Springer, 2007).
[28] N. Buduma, Fundamentals of Deep Learning: Designing Next-Generation Machine Intelligence Algorithms (O'Reilly Media, 2017).
[29] R. Hahnloser, R. Sarpeshkar, M. A. Mahowald, R. J. Douglas, and H. S. Seung, "Digital selection and analogue amplification coexist in a cortex-inspired silicon circuit," Nature 405, 947–951 (2000).
[30] K. Fukushima, "Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position," Biological Cybernetics 36, 193–202 (1980).
[31] D. Scherer, A. Müller, and S. Behnke, "Evaluation of pooling operations in convolutional architectures for object recognition," pp. 92–101 (Springer Berlin Heidelberg, Berlin, Heidelberg, 2010).
[32] S. Linnainmaa, "The representation of the cumulative rounding error of an algorithm as a Taylor expansion of the local rounding errors" (in Finnish), Master's thesis, University of Helsinki, http://people.idsia.ch/~juergen/linnainmaa1970thesis.pdf (1970).
[33] M. Carpenter and C. Kennedy, "Fourth-order 2N-storage Runge-Kutta schemes," Technical report NASA-TM-109112 (1994).
[34] E. Blanc, D. Komatitsch, E. Chaljub, B. Lombard, and Z. Xie, "Highly accurate stability-preserving optimization of the Zener viscoelastic model, with application to wave propagation in the presence of strong attenuation," Geophys. J. Int. 205(1), 427–439 (2016).
[35] T. Lähivaara and T. Huttunen, "A non-uniform basis order for the discontinuous Galerkin method of the acoustic and elastic wave equations," Appl. Numer. Math. 61, 473–486 (2011).
[36] T. Lähivaara and T. Huttunen, "A non-uniform basis order for the discontinuous Galerkin method of the 3D dissipative wave equation with perfectly matched layer," J. Comput. Phys. 229, 5144–5160 (2010).
[37] J. Kaipio and E. Somersalo, "Statistical inverse problems: Discretization, model reduction and inverse crimes," J. Comput. Appl. Math. 198(2), 493–504 (2007).
[38] D. Kingma and J. Ba, "Adam: A method for stochastic optimization," arXiv e-prints (2014).
http://arxiv.org/abs/1709.09212v2
{ "authors": [ "Timo Lähivaara", "Leo Kärkkäinen", "Janne M. J. Huttunen", "Jan S. Hesthaven" ], "categories": [ "physics.comp-ph" ], "primary_category": "physics.comp-ph", "published": "20170926182847", "title": "Deep convolutional neural networks for estimating porous material parameters with ultrasound tomography" }
A Policy Search Method For Temporal Logic Specified Reinforcement Learning Tasks
Xiao Li, Yao Ma and Calin Belta
December 30, 2023
================================================================================

Reward engineering is an important aspect of reinforcement learning. Whether or not the users' intentions can be correctly encapsulated in the reward function can significantly impact the learning outcome. Current methods rely on manually crafted reward functions that often require parameter tuning to obtain the desired behavior. This operation can be expensive when exploration requires systems to interact with the physical world. In this paper, we explore the use of temporal logic (TL) to specify tasks in reinforcement learning. A TL formula can be translated to a real-valued function that measures its level of satisfaction against a trajectory. We take advantage of this function and propose temporal logic policy search (TLPS), a model-free learning technique that finds a policy satisfying the TL specification. A set of simulated experiments is conducted to evaluate the proposed approach.

§ INTRODUCTION
Reinforcement learning (RL) has enjoyed groundbreaking success in recent years, ranging from playing Atari games at super-human level <cit.> and playing competitively with world champions in the game of Go <cit.> to generating visuomotor control policies for robots <cit.>, <cit.>. Despite much effort being put into developing sample-efficient algorithms, an important aspect of RL remains less explored. The reward function is the window for designers to specify the desired behavior and impose important constraints on the system. While most reward functions used in the current RL literature have been based on heuristics for relatively simple tasks, real-world applications typically involve tasks that are logically more complex.

Commonly adopted reward functions take the form of a linear combination of basis functions (often quadratic) <cit.>. This type of reward function has limited expressibility and is semantically ambiguous because of its dependence on a set of weights. Reward functions of this form have been used to successfully learn high-dimensional control tasks such as humanoid walking <cit.> and multiple household tasks (e.g., placing coat-hangers, twisting bottle caps, etc.) <cit.>. However, parameter tuning of the reward function is required, and this iteration is expensive for robotic tasks. Moreover, these tasks are logically straightforward in that there is little logical interaction between sub-tasks (such as sequencing, conjunction/disjunction, implication, etc.). Consider the harder task of learning to use an oven. The agent is required to perform a series of sub-tasks in the correct sequence (set temperature and timer → preheat → open oven door → place item in oven → close oven door). In addition, the agent has to make the simple decision of when to open the oven door and place the item (i.e., preheat finished implies open oven door). Tasks like this are commonly found in household environments (using the microwave, refrigerator or even a drawer), and a function that correctly maps the desired behavior to a real-valued reward can be difficult to design. If the semantics of the reward function cannot be guaranteed, then an increase in the expected return will not necessarily represent better satisfaction of the task specification. This is referred to as reward hacking by <cit.>.

Reward engineering has been briefly explored in the reinforcement learning literature.
Authors of <cit.> and <cit.> provide general formalisms for reward engineering and discuss its significance. Authors of <cit.> proposed potential-based reward shaping and proved policy invariance under this type of reward transformation. Another line of work aims to infer a reward function from demonstration. This idea is called inverse reinforcement learning and is explored by <cit.> and <cit.>.

In this paper, we adopt the expressive power of temporal logic and use it as a task specification language for reinforcement learning in continuous state and action spaces. Its quantitative semantics (also referred to as robustness degree or simply robustness) translates a TL formula to a real-valued function that can be used as the reward. By definition of the quantitative semantics, a robustness value greater than zero guarantees satisfaction of the temporal logic specification.

Temporal logic (TL) has been adopted as the specification language for a wide variety of control tasks. Authors of <cit.> use linear temporal logic (LTL) to specify a persistent surveillance task carried out by aerial robots. Similarly, <cit.> and <cit.> applied LTL to traffic network control. Application of TL in reinforcement learning has been less investigated. <cit.> combined signal temporal logic (STL) with Q-learning while also adopting the log-sum-exp approximation of robustness. However, their focus is on discrete state and action spaces, and satisfiability is ensured by expanding the state space to a history-dependent state space. This does not scale well for large or continuous state-action spaces, which is often the case for control tasks.

Our main contributions in this paper are as follows:
* we present a model-free policy search algorithm, which we call temporal logic policy search (TLPS), that takes advantage of the robustness function to facilitate learning. We show that an optimal parameterized policy that maximizes the robustness can be obtained by solving a constrained optimization,
* a smooth approximation of the robustness degree is proposed, which is necessary for obtaining the gradients of the objective and constraints. We prove that using the smoothed robustness as reward provides similar semantic guarantees to the original robustness definition while providing a significant speedup in learning,
* finally, we demonstrate the performance of the proposed approach using simulated navigation tasks.

§ PRELIMINARIES

§.§ Truncated Linear Temporal Logic (TLTL)
In this section, we provide definitions for TLTL (refer to our previous work <cit.> for a more elaborate discussion of TLTL). A TLTL formula is defined over predicates of the form f(s) < c, where f: ℝ^n → ℝ is a function of the state and c is a constant. We express a task as a TLTL formula with the following syntax:

ϕ := ⊤ | f(s) < c | ¬ϕ | ϕ∧ψ | ϕ∨ψ | ◇ϕ | □ϕ | ϕ 𝒰 ψ | ϕ 𝒯 ψ | ◯ϕ | ϕ⇒ψ,

where ⊤ is the Boolean constant true, f(s) < c is a predicate, ¬ (negation/not), ∧ (conjunction/and), and ∨ (disjunction/or) are Boolean connectives, and ◇ (eventually), □ (always), 𝒰 (until), 𝒯 (then), and ◯ (next) are temporal operators. Implication is denoted by ⇒ (implication). TLTL formulas are evaluated against finite-time sequences of states {s_0, s_1, …, s_T}. We denote s_t ∈ S to be the state at time t, and s_t:t+k to be a sequence of states (state trajectory) from time t to t+k, i.e., s_t:t+k = (s_t, s_t+1, ..., s_t+k).
The Boolean semantics of TLTL is defined as:

s_t:t+k ⊨ f(s) < c ⇔ f(s_t) < c,
s_t:t+k ⊨ ¬ϕ ⇔ ¬(s_t:t+k ⊨ ϕ),
s_t:t+k ⊨ ϕ⇒ψ ⇔ (s_t:t+k ⊨ ϕ) ⇒ (s_t:t+k ⊨ ψ),
s_t:t+k ⊨ ϕ∧ψ ⇔ (s_t:t+k ⊨ ϕ) ∧ (s_t:t+k ⊨ ψ),
s_t:t+k ⊨ ϕ∨ψ ⇔ (s_t:t+k ⊨ ϕ) ∨ (s_t:t+k ⊨ ψ),
s_t:t+k ⊨ ◯ϕ ⇔ (s_t+1:t+k ⊨ ϕ) ∧ (k > 0),
s_t:t+k ⊨ □ϕ ⇔ ∀t' ∈ [t, t+k), s_t':t+k ⊨ ϕ,
s_t:t+k ⊨ ◇ϕ ⇔ ∃t' ∈ [t, t+k), s_t':t+k ⊨ ϕ,
s_t:t+k ⊨ ϕ 𝒰 ψ ⇔ ∃t' ∈ [t, t+k) s.t. s_t':t+k ⊨ ψ ∧ (∀t'' ∈ [t, t'), s_t'':t' ⊨ ϕ),
s_t:t+k ⊨ ϕ 𝒯 ψ ⇔ ∃t' ∈ [t, t+k) s.t. s_t':t+k ⊨ ψ ∧ (∃t'' ∈ [t, t'), s_t'':t' ⊨ ϕ).

Intuitively, a state trajectory s_t:t+k ⊨ □ϕ (reads s_t:t+k satisfies □ϕ) if the specification defined by ϕ is satisfied for every subtrajectory s_t':t+k, t' ∈ [t, t+k). Similarly, s_t:t+k ⊨ ◇ϕ if ϕ is satisfied for at least one subtrajectory s_t':t+k, t' ∈ [t, t+k). s_t:t+k ⊨ ϕ 𝒰 ψ if ϕ is satisfied at every time step before ψ is satisfied, and ψ is satisfied at a time between t and t+k. s_t:t+k ⊨ ϕ 𝒯 ψ if ϕ is satisfied at least once before ψ is satisfied between t and t+k. A trajectory s of duration k is said to satisfy formula ϕ if s_0:k ⊨ ϕ.

TLTL is equipped with quantitative semantics (robustness degree), i.e., a real-valued function ρ(s_t:t+k, ϕ) that indicates how far s_t:t+k is from satisfying or violating the specification ϕ. We define the task satisfaction measurement ρ(τ, ϕ), which is recursively expressed as:

ρ(s_t:t+k, ⊤) = ρ_max,
ρ(s_t:t+k, f(s_t) < c) = c - f(s_t),
ρ(s_t:t+k, ¬ϕ) = -ρ(s_t:t+k, ϕ),
ρ(s_t:t+k, ϕ⇒ψ) = max(-ρ(s_t:t+k, ϕ), ρ(s_t:t+k, ψ)),
ρ(s_t:t+k, ϕ_1∧ϕ_2) = min(ρ(s_t:t+k, ϕ_1), ρ(s_t:t+k, ϕ_2)),
ρ(s_t:t+k, ϕ_1∨ϕ_2) = max(ρ(s_t:t+k, ϕ_1), ρ(s_t:t+k, ϕ_2)),
ρ(s_t:t+k, ◯ϕ) = ρ(s_t+1:t+k, ϕ) (k > 0),
ρ(s_t:t+k, □ϕ) = min_t'∈[t,t+k) (ρ(s_t':t+k, ϕ)),
ρ(s_t:t+k, ◇ϕ) = max_t'∈[t,t+k) (ρ(s_t':t+k, ϕ)),
ρ(s_t:t+k, ϕ 𝒰 ψ) = max_t'∈[t,t+k) ( min( ρ(s_t':t+k, ψ), min_t''∈[t,t') ρ(s_t'':t', ϕ) ) ),
ρ(s_t:t+k, ϕ 𝒯 ψ) = max_t'∈[t,t+k) ( min( ρ(s_t':t+k, ψ), max_t''∈[t,t') ρ(s_t'':t', ϕ) ) ),

where ρ_max represents the maximum robustness value. Moreover, ρ(s_t:t+k, ϕ) > 0 ⇒ s_t:t+k ⊨ ϕ and ρ(s_t:t+k, ϕ) < 0 ⇒ s_t:t+k ⊭ ϕ, which implies that the robustness degree can substitute for the Boolean semantics in order to enforce the specification ϕ.

Consider the specification ϕ = ◇(s > 5 ∧ s < 10), where s is a one-dimensional state. Intuitively, this formula specifies that s eventually reaches the region (5, 10) for at least one time step. Suppose we have a state trajectory s_0:3 = s_0 s_1 s_2 = [11, 6, 7] of horizon 3. The robustness is ρ(s_0:3, ϕ) = max_t∈[0,3) (min(10 - s_t, s_t - 5)) = max(-1, 1, 2) = 2. Since ρ(s_0:3, ϕ) > 0, s_0:3 ⊨ ϕ, and the value ρ(s_0:3, ϕ) = 2 is a measure of the satisfaction margin. Note that both states s_1 and s_2 stayed within the specified region, but s_2 "more" satisfies the predicate (s > 5 ∧ s < 10) by being closer to the center of the region, thereby achieving a higher robustness value than s_1.
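To make the example concrete, the following short sketch evaluates the robustness of ϕ = ◇(s > 5 ∧ s < 10) on the trajectory [11, 6, 7] directly from the quantitative semantics (the max over time of the min of the two predicate robustnesses).

```python
import numpy as np

def rho_eventually_in_interval(traj, lo=5.0, hi=10.0):
    """Robustness of eventually(s > lo and s < hi) over a 1D trajectory."""
    traj = np.asarray(traj, dtype=float)
    per_step = np.minimum(hi - traj, traj - lo)  # min of the two predicates
    return np.max(per_step)                      # max over time (eventually)

print(rho_eventually_in_interval([11.0, 6.0, 7.0]))  # -> 2.0 > 0, satisfied
```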
§.§ Markov Decision Process
In this section, we introduce the finite-horizon infinite Markov decision process (MDP) and the semantics of a TLTL formula over an MDP. We start with the following definition:

A finite-horizon infinite MDP is defined as a tuple ⟨S, A, p(·|·,·)⟩, where S ⊆ ℝ^n is the continuous state space; A ⊆ ℝ^m is the continuous action space; p(s'|s,a) is the conditional probability density of taking action a ∈ A at state s ∈ S and ending up in state s' ∈ S. We denote T as the horizon of the MDP.

Given an MDP in Definition <ref>, a state trajectory of length T (denoted τ = s_0:T-1 = (s_0, ..., s_T-1)) can be produced. The semantics of a TLTL formula ϕ over τ can be evaluated with the robustness degree ρ(τ, ϕ) defined in the previous section. ρ(τ, ϕ) > 0 implies that τ satisfies ϕ, i.e., τ ⊨ ϕ, and vice versa. In the next section, we will take advantage of this property and propose a policy search method that aims to maximize the expected robustness degree.

§ PROBLEM FORMULATION AND APPROACH
We first formulate the problem of policy search with a TLTL specification as follows: Given an MDP in Definition <ref> and a TLTL formula ϕ, find a stochastic policy π(a|s) (π determines the probability of taking action a at state s) that maximizes the expected robustness degree

π^⋆ = argmax_π E_p^π(τ)[ρ(τ, ϕ)],

where the expectation is taken over the trajectory distribution p^π(τ) under policy π, i.e.,

p^π(τ) = p(s_0) ∏_t=0^T-1 p(s_t+1|s_t, a_t) π(a_t|s_t).

In reinforcement learning, the transition function p(s'|s,a) is unknown to the agent. The solution to Problem <ref> learns a stochastic time-varying policy π(a_t|s_t) <cit.>, which is a conditional probability density function of action a given the current state s at time step t. In this paper, a parameterized policy π(a_t|s_t; θ_t), ∀t = 1, …, T (also written as π_θ in short, where θ = {θ_0, θ_1, …, θ_T-1}) is used to represent the policy. The objective defined in Equation (<ref>) then becomes finding the optimal policy parameter θ^* such that

θ^⋆ = argmax_θ E_p^π_θ(τ)[ρ(τ, ϕ)].

To solve Problem <ref>, we introduce temporal logic policy search (TLPS), a model-free RL algorithm. At each iteration, a set of sample trajectories is collected under the current policy. Each sample trajectory is updated to a new one with a higher robustness degree by following the gradient of ρ while also staying close to the sample so that the system dynamics are not violated. A new trajectory distribution is fitted to the set of updated trajectories. Each sample trajectory is then assigned a weight according to its probability under the updated distribution. Finally, the policy is updated with weighted maximum likelihood. This process ensures that each policy update results in a trajectory distribution with higher expected robustness than the current one. Details of TLPS are discussed in the next section.

As introduced in Section <ref>, the robustness degree ρ consists of embedded max/min functions, and calculating its gradient is not possible. In Section <ref>, we discuss the use of log-sum-exp to approximate the robustness function and provide proofs of some properties of the approximated robustness.

§ TEMPORAL LOGIC POLICY SEARCH (TLPS)
Given a TLTL formula ϕ over predicates of S, TLPS finds the parameters θ of a parametrized stochastic policy π_θ(a|s) that maximize the following objective function:

J^π_θ = E_p^π_θ[ρ(τ, ϕ)], (T < ∞),

where p^π_θ = p^π_θ(τ) is defined in Equation (<ref>). In TLPS, we model the policy as a time-varying linear Gaussian, i.e., π(a_t|s_t) = 𝒩(K_t s_t + k_t, C_t), where K_t, k_t, and C_t are the feedback gain, feed-forward gain, and covariance of the policy at time t (a similar approach has been adopted in <cit.>, <cit.>). The trajectory distribution in Equation (<ref>) is modeled as a Gaussian p^π_θ(τ) = 𝒩(τ | μ_τ, Σ_τ), where μ_τ = (μ_s_0, ..., μ_s_T) and Σ_τ = diag(Σ_s_0, ..., Σ_s_T).

At each iteration, N sample trajectories are collected (denoted τ^i, i ∈ [1, N]). For each sample trajectory τ^i, we find an updated trajectory τ̅^i by solving

max_τ̅^i ρ̂(τ̅^i, ϕ), s.t. (τ̅^i - τ^i)^T(τ̅^i - τ^i) < ϵ.

In the above equation, ρ̂ is the log-sum-exp approximation of ρ. This allows us to take advantage of the many off-the-shelf nonlinear programming methods that require gradient information of the Lagrangian (sequential quadratic programming is used in our experiments).
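A minimal sketch of the per-sample trajectory update in Equation (<ref>) is shown below, using SciPy's SLSQP solver as the sequential quadratic programming routine; the smoothed robustness used is the one from the example above (eventually reach an interval), and the trust-region radius ϵ and β are placeholder values.

```python
import numpy as np
from scipy.optimize import minimize

BETA, EPS = 9.0, 1.0  # smoothness parameter and trust-region radius (placeholders)

def rho_smooth(traj):
    """Log-sum-exp approximation of max_t min(10 - s_t, s_t - 5)."""
    per_step = -np.logaddexp(-BETA * (10.0 - traj), -BETA * (traj - 5.0)) / BETA
    return np.log(np.sum(np.exp(BETA * per_step))) / BETA

tau = np.array([11.0, 6.0, 7.0])  # a sample trajectory collected under the policy
res = minimize(lambda x: -rho_smooth(x), tau, method="SLSQP",
               constraints=[{"type": "ineq",
                             "fun": lambda x: EPS - (x - tau) @ (x - tau)}])
print(res.x, rho_smooth(res.x))   # updated trajectory with higher smoothed robustness
```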
Using the log-sum-exp approximation, we can show that its approximation error is bounded. In addition, the local ascent directions on the approximated surface coincide with those on the actual surface under mild constraints (these will be discussed in more detail in the next section). Equation (<ref>) aims to find a new trajectory that achieves higher robustness. The constraint limits the deviation of the updated trajectory from the sample trajectory so that the system dynamics are not violated. After we obtain a set of updated trajectories, an updated trajectory distribution p̅(τ) = 𝒩(τ | μ̅_τ, Σ̅_τ) is fitted using

μ̅_τ = 1/N ∑_i=1^N τ̅^i, Σ̅_τ = 1/N ∑_i=1^N (τ̅^i - μ̅_τ)(τ̅^i - μ̅_τ)^T.

The last step is to update the policy. We only update the feed-forward terms k_t and the covariance C_t. The feedback term K_t is kept constant (the policy parameters are θ_t = (k_t, C_t), t ∈ [0, T)). This significantly reduces the number of parameters to be updated and increases the learning speed. For each sample trajectory, we obtain its probability under p̅(τ), p(τ^i) = 𝒩(τ^i | μ̅_τ, Σ̅_τ) (p(τ^i) is also written in short as p^i), where i ∈ [1, N] is the sample index. Using these probabilities, a normalized weight for each sample trajectory is calculated using the softmax function w^i = e^α p^i / ∑_i=1^N e^α p^i (α > 0 is a parameter to be tuned). Finally, similar to <cit.>, the policy is updated using weighted maximum likelihood by

k_t' = ∑_i=1^N w^i k_t^i, C_t' = ∑_i=1^N w^i (k_t^i - k_t')(k_t^i - k_t')^T.

According to <cit.>, such an update strategy will result in convergence. The complete algorithm is described in Algorithm <ref>.
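The sketch below illustrates this policy update step: softmax weights computed from the trajectory probabilities under the fitted distribution, followed by the weighted maximum-likelihood update of the feed-forward gain and covariance at a single time step. Dimensions and inputs are placeholders, and the max-shift inside the softmax is a standard numerical-stability trick.

```python
import numpy as np

def policy_update(k_samples, p, alpha=1.0):
    """Weighted ML update of feed-forward gain k_t and covariance C_t.

    k_samples: (N, m) feed-forward gains associated with the N samples
    p: (N,) probabilities of the samples under the fitted distribution
    """
    z = alpha * (p - p.max())          # max-shift for a stable softmax
    w = np.exp(z) / np.exp(z).sum()    # w_i = e^{alpha p_i} / sum_j e^{alpha p_j}
    k_new = w @ k_samples              # k_t' = sum_i w_i k_t^i
    d = k_samples - k_new
    C_new = (w[:, None] * d).T @ d     # C_t' = sum_i w_i (k_t^i - k_t')(k_t^i - k_t')^T
    return k_new, C_new

rng = np.random.default_rng(3)
k_new, C_new = policy_update(rng.standard_normal((40, 2)), rng.random(40))
print(k_new, C_new.shape)
```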
§ ROBUSTNESS SMOOTHING
In the TLPS algorithm introduced in the previous section, one of the steps requires solving a constrained optimization problem that maximizes the robustness (Equation (<ref>)). The original robustness definition in Section <ref> is non-differentiable and thus rules out many efficient gradient-based methods. In this section, we adopt a smooth approximation of the robustness function using log-sum-exp. Specifically,

max(x_1, ..., x_n) ≈ 1/β log ∑_i^n exp(β x_i),
min(x_1, ..., x_n) ≈ -1/β log ∑_i^n exp(-β x_i),

where β > 0 is a smoothness parameter. We denote an iterative max-min function as

M(x) = ⊙_i f_i(x),

where f_i(x) = ⊙_j f_j(x). Here ⊙ denotes an operator with ⊙ ∈ {max, min, ℐ}, where ℐ is the identity operator such that ℐ f_j(x) = f_j(x); the indices i and j of the functions in the composition can be any positive integers. As we showed in Section <ref>, the robustness of any TL formula can be expressed as an iterative max-min function. Following the log-sum-exp approximation, any iterative max-min function (i.e., the robustness of any TL formula) can be approximated as

M̂(x) = 1/β_i log(∑_i exp(β_i f_i(x))),

where β_i > 0 if ⊙_i = max_i and β_i < 0 if ⊙_i = min_i. In the remainder of this section, we provide three lemmas that show the following:
* the approximation error between M(x) and M̂(x) approaches zero as β_i → ∞. This error is always bounded by the log of the number of terms f_i(x), which is determined by the number of predicates in the TL formula and the horizon of the problem. Tuning β_i trades off the differentiability of the robustness function against the approximation error,
* despite the error introduced by the approximation, the optimal points remain invariant (i.e., argmax_x M(x) = argmax_x M̂(x)). This result guarantees that the optimal policy is unchanged when using the approximated TL reward,
* even though the log-sum-exp approximation smooths the robustness function, locally the ascent directions of M(x) and M̂(x) can be tuned to coincide with small error, and the deviation is controlled by the parameter β. As many policy search methods are local methods that improve the policy near samples, it is important to ensure that the ascent direction of the approximated TL reward does not oppose that of the real one.

Due to space constraints, we will only provide sketches of the proofs for the lemmas.

Lemma 1: Let N_i be the number of terms of ⊙_i. Then M and M̂ satisfy

M - ∑_i∈S_min 1/|β_i| log N_i ≤ M̂ ≤ M + ∑_i∈S_max 1/β_i log N_i,

where S_min = {i : ⊙_i = min_i} and S_max = {i : ⊙_i = max_i}.

Proof sketch: For simplicity and without loss of generality, we illustrate the proof of Lemma <ref> by constructing an approximation for a finite max-min-max problem

Φ(x) = max_i∈I min_j∈J max_k∈K f_i,j,k(x).

Let M_I = |I|, M_J = |J|, M_K = |K|, and β_I > 0, β_J < 0, β_K > 0. First, we define Φ_j(x) = max_k∈K f_i,j,k(x). Straightforward algebraic manipulation reveals that

log(∑_j∈J exp(β_J Φ_j)) + β_J/β_K log(M_K) ≤ log(∑_j∈J [∑_k∈K exp(β_K f_i,j,k(x))]^β_J/β_K) ≤ log(∑_j∈J exp(β_J Φ_j)).

Furthermore, let us define Φ_i = min_j∈J Φ_j; we have

β_J Φ_i ≤ log(∑_j∈J exp(β_J Φ_j)) ≤ log(M_J) + β_J Φ_i.

By substituting into Equation (<ref>), we obtain

β_J Φ_i + log(M_J) ≥ log(∑_j∈J exp(β_J Φ_j)) ≥ β_J Φ_i + β_J/β_K log(M_K).

Multiplying by 1/β_J on both sides, we get

log(∑_i∈I exp(β_I Φ_i)) + β_I/β_J log(M_J) ≤ log(∑_i∈I [∑_j∈J (∑_k∈K exp(β_K f_i,j,k(x)))^β_J/β_K]^β_I/β_J) ≤ log(∑_i∈I exp(β_I Φ_i)) + β_I/β_K log(M_K).

Finally, let Φ = max_i∈I Φ_i; then we have

exp(β_I Φ) ≤ ∑_i∈I exp(β_I Φ_i) ≤ M_I exp(β_I Φ),
β_I Φ ≤ log(∑_i∈I exp(β_I Φ_i)) ≤ log(M_I) + β_I Φ.

Substituting into Equation (<ref>),

β_I Φ + β_I/β_J log(M_J) ≤ log(∑_i∈I [∑_j∈J (∑_k∈K exp(β_K f_i,j,k(x)))^β_J/β_K]^β_I/β_J) ≤ β_I Φ + log(M_I) + β_I/β_K log(M_K).

This concludes the proof.

Lemma 2: Suppose X^* = {x^* : x^* ∈ argmax_x M(x)}. There exists a positive constant B such that for all |β| ≥ B, x^* is also one of the maximum points of M̂(x), i.e., x^* ∈ argmax_x M̂(x).

Proof sketch: We start by considering M as a maximum function, i.e., M(x) = max_i f_i(x). Let us denote I_max = argmax_i f_i(x^*); then x^* ∈ argmax_x M̂(x) when

∑_i≠I_max exp(β f_i(x^*)) - ∑_i≠I_max exp(β f_i(x)) ≤ exp(β f_I_max(x^*)) - exp(β f_I_max(x)).

There always exists a positive constant B such that for all β > B the above statement holds. Lemma <ref> can be obtained by applying the above proof to the ⊙ function in general.

Lemma 3: Let us denote the sub-gradient of M as ∂M/∂x = {∂M/∂x_1, …, ∂M/∂x_N} and the gradient of M̂ as ∂M̂/∂x = {∂M̂/∂x_1, …, ∂M̂/∂x_N}. There exists a positive constant B such that for all |β| ≥ B, ∂M/∂x and ∂M̂/∂x satisfy ⟨∂M/∂x, ∂M̂/∂x⟩ ≥ 0, where ⟨·,·⟩ denotes the inner product.

Proof sketch: Here we only provide the proof for the case where M is a point-wise maximum of convex functions. One can generalize it to any iterative max-min function using the chain rule. Supposing M(x) = max_i f_i(x), the sub-gradient of M(x) is

∂M/∂x = ∂f_i(x), i ∈ I(x),

where I(x) = {i | f_i(x) = M(x)} is the set of "active" functions. The corresponding M̂ is defined as

M̂ = 1/β log(∑_i exp(β f_i(x))),

whose first-order derivative is

∂M̂/∂x = ∑_i exp(β f_i(x)) ∂f_i(x) / ∑_k exp(β f_k(x)).

⟨∂M/∂x, ∂M̂/∂x⟩ > 0 if

exp(β f_i(x))/∑_k exp(β f_k(x)) f_i(x) ≥ ∑_j∉I(x) exp(β f_j(x))/∑_k exp(β f_k(x)) f_j(x), ∀i ∈ I(x).

Therefore, there always exists a positive constant B such that ⟨∂M/∂x, ∂M̂/∂x⟩ > 0 holds for all β > B.
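As a concrete illustration of the approximation above, the sketch below implements the smooth max and min and compares them with the exact values for a few β; the max-shift inside the log-sum-exp is a standard trick for numerical stability and does not change the result.

```python
import numpy as np

def smooth_max(x, beta):
    """(1/beta) * log(sum_i exp(beta * x_i)), computed stably."""
    x = np.asarray(x, dtype=float)
    m = x.max()
    return m + np.log(np.sum(np.exp(beta * (x - m)))) / beta

def smooth_min(x, beta):
    return -smooth_max(-np.asarray(x), beta)

x = [-1.0, 1.0, 2.0]
for beta in (1.0, 9.0, 100.0):
    print(beta, smooth_max(x, beta), smooth_min(x, beta))
# smooth_max decreases toward max(x)=2 and smooth_min increases toward
# min(x)=-1 as beta grows; the error is bounded by log(len(x))/beta.
```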
§ CASE STUDIES
In this section, we apply TLPS to a vehicle navigation example. As shown in Figure 1, the vehicle navigates in a 2D environment. It has a six-dimensional continuous state feature space s = [x, y, θ, ẋ, ẏ, θ̇], where (x, y) is the position of its center and θ is the angle its heading makes with the x-axis. Its two-dimensional action space a = [a_v, a_Φ] consists of the forward driving speed and the steering angle of the front wheels. The car moves according to the dynamics

ẋ = a_v cos θ,
ẏ = a_v sin θ,
θ̇ = a_v/L tan a_Φ,

with added Gaussian noise (L is the distance between the front and rear axles). However, the learning agent is not provided with this model and needs to learn the desired control policy through trial and error. We test TLPS on two tasks of increasing difficulty. In the first task, the vehicle is required to reach the goal g while avoiding the obstacle o. We express this task as the TLTL specification

ϕ_1 = ◇(x > x_g^l ∧ x < x_g^u ∧ y > y_g^l ∧ y < y_g^u) ∧ □(d_o > r_o).

In Equation (<ref>), (x_g^l, x_g^u, y_g^l, y_g^u) defines the square-shaped goal region, d_o is the Euclidean distance between the vehicle's center and the center of the obstacle, and r_o is the radius of the obstacle. In English, ϕ_1 describes the task of "eventually reach goal g and always stay away from the obstacle". Using the quantitative semantics described in Section <ref>, the robustness of ϕ_1 is

ρ_1(ϕ_1, (x, y)_0:T) = min( max_t∈[0,T)( min( x_t - x_g^l, x_g^u - x_t, y_t - y_g^l, y_g^u - y_t ) ), min_t∈[0,T)( d_o^t - r_o ) ),

where (x_t, y_t) and d_o^t are the vehicle position and the distance to the obstacle center at time t. Using log-sum-exp, an approximation of ρ_1(ϕ_1, (x, y)_0:T) can be obtained as

ρ̂_1(ϕ_1, (x, y)_0:T) = -1/β log ∑_t=0^T ( exp[-β(x_t - x_g^l)] + exp[-β(x_g^u - x_t)] + exp[-β(y_t - y_g^l)] + exp[-β(y_g^u - y_t)] + exp[-β(d_o^t - r_o)] ).

Because we used the same β throughout the approximation, the intermediate log and exp cancel and we end up with Equation (<ref>). ρ̂_1(ϕ_1, (x, y)_0:T) is used in the optimization problem defined in Equation (<ref>).

In task 2, the vehicle is required to visit goals 1, 2, 3 in this specific order while avoiding the obstacle. Expressed in TLTL, this results in the specification

ϕ_2 = (ψ_g_1 𝒯 ψ_g_2 𝒯 ψ_g_3) ∧ (¬(ψ_g_2 ∨ ψ_g_3) 𝒰 ψ_g_1) ∧ (¬ψ_g_3 𝒰 ψ_g_2) ∧ (⋀_i=1,2,3 □(ψ_g_i ⇒ ◯□¬ψ_g_i)) ∧ □(d_o > r_o),

where ⋀ is a shorthand for a sequence of conjunctions and ψ_g_i : x > x_g_i^l ∧ x < x_g_i^u ∧ y > y_g_i^l ∧ y < y_g_i^u is the predicate for goal g_i. In English, ϕ_2 states "visit g_1 then g_2 then g_3, and don't visit g_2 or g_3 until visiting g_1, and don't visit g_3 until visiting g_2, and always if visited g_i this implies next always don't visit g_i (don't revisit goals), and always avoid the obstacle". Due to space constraints, the robustness of ϕ_2 and its approximation will not be presented explicitly, but they take a similar form of nested min()/max() functions that can be generated from the quantitative semantics of TLTL.

During training, the obstacle is considered "penetrable" in that the car can cross its boundary, with a negative reward granted according to the penetration depth. In practice we find that this facilitates learning compared to a single negative reward given at contact with the obstacle and restarting the episode. Each episode has a horizon of T = 200 time steps. 40 episodes of sample trajectories are collected and used for each update iteration. The policy parameters are initialized randomly in a given region (the policy covariances should be initialized to relatively high values to encourage exploration). Each task is trained for 50 iterations and the results are presented in Figures 2 and 3.
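As an illustration of this setup, the sketch below rolls out the noisy bicycle dynamics under a random policy and evaluates the smoothed robustness ρ̂_1 of the reach-and-avoid task; the goal region, obstacle, wheelbase, time step, and noise scale are placeholder values, not the paper's experimental settings.

```python
import numpy as np

rng = np.random.default_rng(4)
L, DT, T, BETA = 0.5, 0.05, 200, 9.0                # placeholder constants
GOAL = (4.0, 6.0, 4.0, 6.0)                          # x_g^l, x_g^u, y_g^l, y_g^u
OBS, R_OBS = np.array([2.0, 2.0]), 1.0               # obstacle center and radius

def rollout():
    """Simulate the noisy bicycle dynamics under random actions."""
    x, y, th = 0.0, 0.0, 0.0
    traj = []
    for _ in range(T):
        a_v, a_phi = rng.uniform(0, 1), rng.uniform(-0.3, 0.3)
        x += DT * a_v * np.cos(th) + 0.01 * rng.standard_normal()
        y += DT * a_v * np.sin(th) + 0.01 * rng.standard_normal()
        th += DT * a_v / L * np.tan(a_phi)
        traj.append((x, y))
    return np.array(traj)

def rho1_smooth(traj):
    """Smoothed robustness of phi_1 (reach goal, avoid obstacle)."""
    x, y = traj[:, 0], traj[:, 1]
    d_o = np.linalg.norm(traj - OBS, axis=1)
    terms = np.stack([x - GOAL[0], GOAL[1] - x, y - GOAL[2], GOAL[3] - y, d_o - R_OBS])
    return -np.log(np.sum(np.exp(-BETA * terms))) / BETA

print(rho1_smooth(rollout()))
```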
Lighter shade indicates earlier time in the training process. We used β=9 for this set of results. We can see from Figure 2 that the trajectory distribution is able to converge and satisfy the specification. Satisfaction occurs much sooner for task 1 (around 30 iterations) compared with task 2 (around 50 iterations).Figure 3 compares the average robustness (of 40 sample trajectories) per iteration for TLPS with different values of the approximation parameters βin (<ref>). As a baseline, we also compare TLPS with episode-based relative entropy policy search (REPS) <cit.>. The original robustness function is used as the terminal reward for REPS and our previous work <cit.> has shown that this combination outperforms heuristic reward designed for the same robotic control task.The magnitude of robustness value changes with varying β. Therefore, in order for the comparison to be meaningful (putting average returns on the same scale), sample trajectories collected for each comparison case are used to calculate their original robustness values against the TLTL formula and plotted in Figure 3 (a similar approach taken in <cit.>). The original robustness is chosen as the comparison measure for its semantic integrity (value greater than zero indicates satisfaction).Results in Figure 3 shows that larger β results in faster convergence and higher average return. This is consistent with the results of Section <ref> since larger β indicates lower approximation error. However, this advantage diminishes as β increases due to the approximated robustness function losing differentiability. For the relatively easy task 1, TLPS performed comparatively with REPS. However, for the harder task 2, TLPS exhibits a clear advantage both in terms of rate of convergence and quality of the learned policy.TLPS is a local policy search method that offers gradual policy improvement, controllable policy space exploration and smooth trajectories. These characteristics are desirable for learning control policies for systems that involve physical interactions with the environment. S (likewise for other local RL methods). Results in Figure 3 show a rapid exploration decay in the first 10 iterations and little improvement is seen after the 40^th iteration. During experiments, the authors find that adding a policy covariance damping schedule can help with initial exploration and final convergence. A principled exploration strategy is possible future work. Similar to many policy search methods, TLPS is a local method. Therefore, policy initialization is a critical aspect of the algorithm (compared with value-based methods such as Q-learning). In addition, because the trajectory update step in Equation (<ref>) does not consider the system dynamics and relies on being close to sample trajectories, divergence can occur with a small β or a large learning rate. Making the algorithm more robust to hyperparameter changes is also an important future direction.§ CONCLUSIONAs reinforcement learning research advance and more general RL agents are developed, it becomes increasingly important that we are able to correctly communicate our intentions to the learning agent. A well designed RL agent will be proficient at finding a policy that maximizes its returns, which means it will exploit any flaws in the reward function that can help it achieve this goal. Human intervention can sometimes help alleviate this problem by providing additional feedback. 
However, as discussed in <cit.>, if the communication link between human and the agent is unstable (space exploration missions) or the agent operates on a timescale difficult for human to respond to (financial trading agent), it is critical that we are confident about what the agent will learn.In this paper, we applied temporal logic as the task specification language for reinforcement learning. The quantitative semantics of TL is adopted for accurate expression of logical relationships in an RL task. We explored robustness smoothing as a means to transform the TL robustness to a differentiable function and provided theoretical results on the properties of the smoothed robustness. We proposed temporal logic policy search (TLPS), a model-free method that utilizes the smoothed robustness and operates in continuous state and action spaces. Simulation experiments are conducted to show that TLPS is able to effectively find control policies that satisfy given TL specifications. IEEEtran
http://arxiv.org/abs/1709.09611v1
{ "authors": [ "Xiao Li", "Yao Ma", "Calin Belta" ], "categories": [ "cs.AI" ], "primary_category": "cs.AI", "published": "20170927163751", "title": "A Policy Search Method For Temporal Logic Specified Reinforcement Learning Tasks" }
IMAGInstitut Montpelliérain Alexander Grothendieck Université de Montpellier. CC051, Place Eugńe Bataillon F-34095 Montpellier, [email protected] 2 Laboratory of Algebra and Number Theory Faculté de MathématiquesUniversity of Science and Technology Houari Boumediene (USTHB)16111 Bab Ezzouar, Algeria [email protected] Functors and Dynamics in Gauge Structures Michel Nguiffo Boyom1 Ahmed Zeglaoui1,2 December 30, 2023 ================================================ We deal with finite dimensional differentiable manifolds. All items are concerned with are differentiable as well. Theclass of differentiability is C^∞. A metric structure in a vector bundle E is a constant rank symmetric bilinear vector bundle homomorphism of E× E in the trivial bundle line bundle. We address the question whether a given gauge structure in E is metric. That is the main concerns. We use generalized Amari functors of the information geometry for introducing two index functions defined in the moduli space of gauge structures in E. Beside we introduce a differential equation whose analysis allows to link thenew index functions just mentioned with the main concerns. We sketch applications in the differential geometry theory of statistics.Reader interested in a former forum on the question whether a giving connection is metric are referred to appendix.§ INTRODUCTIONA metric structure (𝐄,𝐠) in a vector bundle 𝐄 assigns to every fiber 𝐄_x a symmetric bilinear form 𝐠_x : 𝐄_x ×𝐄_x →ℝ. Every finite rank vector bundle admits nondegenerate positive metric structures. One uses the paracompacity for constructing those positive regular metric structures. At another side every nondegenerate metric vector bundle (𝐄,𝐠) admits metric gauge structures, viz gauge structures (𝐄,∇) subject to the requirement ∇𝐠 = 0. In a nondegenerate structure the values of the curvature tensor of a metric gauge structure (𝐄,∇) belong to the orthogonal sub-algebra o(𝐄, ∇) of the Lie algebra 𝔾(𝐄). Arises the question whether a gauge structure (𝐄,∇) is a metric gauge structure in 𝐄. Our concern is to relate this existence question with some methods of the information geometry. In fact in the family ∇^α of α-connections in a non singular statistical model [𝐄,π,M,D,p] the 0-connection yields a metric gauge structure in (TM,g). Here g in the Fisher information of the statistical model as in <cit.>, <cit.>. The question what about the cases α≠ 0 deserves the attention. More generally arises the question when the pair (∇, ∇^⋆) in a statistical manifold (M, g, ∇, ∇^⋆) is a pair of a metric gauge structures? Our aim is to address those questions in the general framework of finite rank real vector bundle over finite dimensional smooth manifolds. Our investigation involve two dynamics in the category 𝔊𝔞(𝐄). The first dynamic is the natural action of the gauge group 𝒢(𝐄). The second is the action of the infinitely generated Coxeter group generated by the family 𝔐𝔢(𝐄) of regularmetric structures (𝐄, 𝐠). This second dynamic is derived from Amari functors. § THE GAUGE DYNAMIC IN 𝔊𝔞(𝐄) §.§ The gauge group of a vector bundleLet 𝐄^⋆ be the dual vector bundle of 𝐄. Throughout this section 2 we go to identify the vector bundles 𝐄^⋆⊗𝐄 and Hom(𝐄, 𝐄). Actually Hom(𝐄,𝐄) is the vector bundle of vector bundle homomorphisms from 𝐄 to 𝐄. The sheaf of sections of 𝐄^⋆⊗𝐄 is denoted by 𝔾(𝐄). This 𝔾(𝐄) is a Lie algebra sheaf bracket is defined by(ϕ ,ψ) ⟼[ ϕ ,ψ] = ϕ∘ψ -ψ∘ϕ.Actually 𝐄^⋆⊗𝐄 is a Lie algebras bundle. It is called the Lie algebra of infinitesimals gauge transformations. 
The sheaf of inversible sections of 𝐄^⋆⊗𝐄 is denoted by 𝒢(𝐄). This 𝒢(𝐄) is a Lie groups sheaf whose composition is the composition of applications of 𝐄 in 𝐄. Elements of 𝒢(𝐄) are called gauge transformations of the vector bundle 𝐄. Consequently the set 𝒢_x(𝐄) ⊂ Hom(𝐄_x, 𝐄_x) is nothing but the Lie group GL(𝐄_x). This 𝒢(𝐄) is the seheaf of sections of the Lie groups bundle 𝐄̃^̃⋆̃⊗̃𝐄̃⊂𝐄^⋆⊗𝐄. We abuse by calling 𝒢(𝐄) and 𝐄̃^̃⋆̃⊗̃𝐄̃ the gauge group of the vector bundle 𝐄. §.§ Gauge structures in a vector bundle 𝐄A gauge structure in a vector bundle 𝐄 is a pair (𝐄, ∇) where ∇ is a Koszul connection in 𝐄. The set of gauge structures is denoted by 𝔊𝔞(𝐄). We define the action of the gauge group in 𝔊𝔞(𝐄) as it follows𝒢(𝐄) ×𝔊𝔞(𝐄) ⟶𝔊𝔞(𝐄), ϕ^⋆(𝐄, ∇) = (𝐄, ϕ^⋆∇).The Koszul connection ϕ^⋆∇ is defined by(ϕ^⋆∇)_Xs = ϕ(∇_Xϕ^-1s)for all s ∈𝔊a(𝐄) and all vectors field X on M. We denoted the gauge moduli space by Ga(𝐄), vizGa(𝐄) = 𝔊𝔞(𝐄)/𝒢(𝐄) §.§ The equation FE(∇∇^⋆)Inspired by the appendix to <cit.> and by <cit.> and bywe define a map from pairs of gauge structures in the space of differential operators DO(Eo^*⊗ E,T^* M⊗ E^*⊗ E). To every pair of gauge structures [(𝐄, ∇) , (𝐄, ∇^⋆)] we introduce the first order differential operator D^∇∇^⋆ of 𝐄^⋆⊗𝐄 in T^⋆M ⊗𝐄^⋆⊗𝐄 which is defined as it follows D^∇∇^*(ϕ)(X,s) = ∇^*_X(ϕ(s)) - ϕ(∇_Xs) for all s and for all vector fields X. Assume the rank of 𝐄 is equal to r and the dimension of M is equal to m. Assume ( x_i) is a system of local coordinate functions defined in an open subset U ⊂ M and ( s_α) is a basis of local sections of 𝐄 defined in U. We set[ ∇_∂ _x_is_α=∑_βΓ _i:α^βs_β, ∇_∂ _x_i^⋆s_α=∑_βΓ _i:α^⋆βs_β andϕ( s_α) =∑_βϕ _α^βs_β. ]Our concern is the analysis of system of partial derivative equations[ FE( ∇∇^⋆) ] _i:α^γ:∂ϕ _α^β/∂ x_i +∑_β =1^r{ϕ _α^βΓ _i:β^⋆γ-ϕ _β^γΓ _i:α^β} =0. When we deal with the vector tangent bundles the differential operator D^∇∇^* plays many outstanding roles in the global analysis of the base manifold <cit.>. In general though every vector bundle admits positive metric structures thissame claim is far from being true for symplectic structure and for positive signature metric structures. We aim at linking those open problems with the differential equation FE(∇∇^*). The sheaf of germs of solutions to FE(∇∇^⋆) is denoted by 𝒥_∇∇^⋆(𝐄). § THE METRIC DYNAMICS IN 𝔊𝔞(𝐄) §.§ The Amari functors in the category 𝔊𝔞(𝐄)Without the express statement of the contrary a metric structure in a vector bundle 𝐄 is a constant rank symmetric bilinear vector bundle homomorphism 𝐠 of 𝐄⊗𝐄 in ℝ̃. Such a metric structure is denoted by (𝐄, 𝐠). A nondegenerate metric structure is called regular, otherwise it is called singular. The category of regular metric structures in 𝐄 is denoted by 𝔐𝔢(𝐄). Henceforth the concern is the dynamic[ 𝒢(𝐄) ×𝔐𝔢(𝐄) ⟶ 𝔐𝔢(𝐄); ( ϕ ,( 𝐄,𝐠) ) ⟼ ( E,ϕ _⋆𝐠). ]Here the metric ϕ _⋆𝐠 is defined byϕ _⋆𝐠 (s,s^') = 𝐠(ϕ ^-1(s),ϕ ^-1(s^')).This leads to the moduli space of regular metric structures in a vector bundle 𝐄Me(𝐄) = 𝔐𝔢(𝐄)/𝒢(𝐄).A gauge structure (𝐄, ∇) is called metric if there exist a metric structure (𝐄, 𝐠) subject to the requirement ∇𝐠 = 0.We consider the functor 𝔐𝔢(𝐄) ×𝔊𝔞(𝐄) →𝔊𝔞(𝐄) which is defined by [ (𝐄, 𝐠) , (𝐄, ∇) ⟼ (𝐄, 𝐠.∇) ].Here the Koszul connection 𝐠.∇ is defined by𝐠(𝐠.∇_Xs, s^') = X(𝐠(s, s^'))-𝐠(s, ∇_Xs^').The functor just mentioned is called the general Amari functor of the vector bundle 𝐄. 
According to <cit.>, the general Amari functor yield two restrictions :[ {𝐠}×𝔊𝔞( 𝐄,) ⟶𝔊𝔞( 𝐄); ∇ ⟼ 𝐠.∇ ] [ 𝔐𝔢( 𝐄) ×{∇} ⟶𝔊𝔞( 𝐄); 𝐠 ⟼ 𝐠.∇ ]The restriction (1) is called the metric Amari functor of the gauge structure (𝐄,∇). The restriction (2) is called the gauge Amari functor of the metric vector bundle (𝐄,𝐠) . We observe that ∇𝐠 = 0 if and only if 𝐠.∇ =∇.The restriction (1) gives rise to the involution of 𝔊𝔞( 𝐄) : ∇→𝐠.∇. In other words 𝐠.(𝐠.∇) = ∇ for all (𝐄, ∇) ∈𝔊𝔞(𝐄).In general the question whether an involution admits fixed points has negative answers. In the framework we are concerned withevery involution defined by a regular metric structure has fixed points formed by metric gauge structures in (𝐄, 𝐠).The dynamics[ 𝒢( 𝐄) ×𝔊𝔞( 𝐄 )⟶ 𝔊𝔞( 𝐄);( ϕ ,∇)⟼ϕ ^⋆∇ ] [ 𝒢( 𝐄) ×𝔐𝔢( 𝐄 )⟶ 𝔐𝔢( 𝐄);( ϕ ,𝐠)⟼ϕ _⋆𝐠 ]are linked with the metric Amari functor by the formulaϕ ^⋆𝐠.∇ = ϕ _⋆𝐠 . ϕ ^⋆∇.We go to introduce the metric dynamics in 𝔊𝔞(𝐄). The abstract group of all isomorphisms of 𝔊𝔞(𝐄) is denoted by ISO(𝔊𝔞(𝐄)). By the metric Amari functor every regular metric structure (𝐄, 𝐠) yields the involution (𝐄, ∇) → (𝐄, 𝐠.∇). The subgroup of ISO(𝔊𝔞(𝐄)) which is generated by all regular metric structures in 𝐄 is denoted by 𝒢m(𝐄). This group 𝒢m(𝐄) looks like an infinitely generated Coxeter group. Using this analogy we call 𝒢m(𝐄) the metric Coxeter group of 𝔊𝔞(𝐄). For instance every metric structure (𝐄, 𝐠) generates a dihedral group of order 2.§.§ The quasi-commutativity property of the metric dynamic and the gauge dynamicAt the present step we are dealing with both the gauge dynamic[ 𝒢( 𝐄) ×𝔊𝔞( 𝐄) ⟶𝔊𝔞( 𝐄); ( ϕ ,∇) ⟼ ϕ ^⋆∇ ]and the metric dynamic[ 𝒢m( 𝐄) ×𝔊𝔞( 𝐄)⟶ 𝔊𝔞( 𝐄);( γ, ∇)⟼ γ .∇ ]What we call the quasi commutativity property of (1) and (2) is the linkϕ ^⋆𝐠.∇ = ϕ _⋆𝐠 .ϕ ^⋆∇ . We consider two regular metric structures (𝐄, 𝐠^0) and (𝐄, 𝐠), There exists a unique ϕ∈𝒢(𝐄) subject to the requirement𝐠^0(s, s^') = 𝐠(ϕ(s), s^').By direct calculations one sees that for every gauge structure (𝐄, ∇) one has𝐠.∇ = ϕ^⋆(𝐠^0.∇).The quasi-commutativity property shows that every regular metric structure acts in the moduli space Ga(𝐄). Further the gauge orbit 𝒢(𝐄)(𝐠.∇) does not depend on the choice of the regular metric structure (𝐄,𝐠). Thus the metric Coxeter group 𝒢m(𝐄) acts in the moduli space Ga(𝐄).When there is no risk of confusion the orbit of [∇] ∈ Ga(𝐄) is denoted by𝒢m.[∇] while its stabilizer subgroup is denoted by 𝒢m_[∇]. Consequently one has The index of every stabilizer subgroup 𝒢m_[∇]⊂𝒢m(𝐄) is equal to 1 or to 2.We go to rephrase Proposition 1 versus the orbits of the metric Coxeter group in the moduli space Ga(𝐄).For every orbit 𝒢m(𝐄).[∇]cardinal ♯( 𝒢m(𝐄).[∇] ) ∈{ 1, 2 } §.§ The metric index function.Theconcern is the metric dynamic [ 𝒢m( 𝐄) ×𝔊𝔞( 𝐄)⟶ 𝔊𝔞( 𝐄); ( γ,∇)⟼ γ .∇ ]The length of γ∈𝒢m(𝐄 ) is denoted by 𝔩(γ). It is defined as it follows𝔩(γ) = min{ p ∈ℕ : γ = 𝐠_1𝐠_2…𝐠_p,𝐠_j∈𝔐e(𝐄) }For every gauge structure (𝐄, ∇) the metric index of (𝐄,∇) is defined byind(∇) = min_γ∈𝒢^*m_∇{𝔩(γ) - 1}.Here 𝒢^*m_∇ stands for the subset formed of elements of the isotropy subgroup that differ from the unit element. The flowing statement is astraightforward consequence of the quasi-commutativity property. 
The non negative integer ind(∇) is a gauge invariant.Consequently we go to encode every orbit [∇] = 𝒢(𝐄)^⋆∇ with metric indexind([∇]) =ind(∇).By Lemma 1 we get the metric index function𝒢a(𝐄) ∋ [∇]→ ind([∇]) ∈ℤ§.§ The gauge index function.We consider the general Amari functor[ 𝔐𝔢( 𝐄) ×𝔊𝔞( 𝐄)⟶ 𝔊𝔞( 𝐄); ( 𝐠,∇)⟼𝐠.∇ ]For convenience we set ∇^𝐠 = 𝐠.∇. Therefore to a pair [(𝐄, 𝐠), (𝐄, ∇)] we assign the differential equation FE(∇∇^𝐠). The sheaf of solutions to FE(∇∇^𝐠) is denoted by 𝒥_∇∇^𝐠(𝐄). We go to perform a formalism which is developed in <cit.>. See also <cit.> for the case of tangent bundles of a manifolds. The concerns are metric structures in vector bundles. We recall that a singular metric structure in 𝐄 is a constant rank degenerate symmetric bilinear vector bundles homomorphism 𝐠 : 𝐄×𝐄→ℝ̃. Let (𝐄,𝐠) be a regular metric structure. We pose ∇^𝐠 = 𝐠.∇. For every ϕ∈𝔾(𝐄) there exists a unique pair (Φ,Φ^* ) ⊂𝔾(𝐄) subject to the the following requirements𝐠(Φ(s), s^') = 1/2[ 𝐠(ϕ(s), s^')+ 𝐠(s, ϕ(s^'))], 𝐠(Φ^⋆(s), s^') = 1/2[ 𝐠(ϕ(s), s^')- 𝐠(s, ϕ(s^'))]We put q(s, s^') = 𝐠(Φ(s), s^') and ω (s, s^') = 𝐠(Φ^⋆(s), s^').(<cit.>)If ϕ is a solution to FE(∇∇^⋆) then Φ and Φ^⋆ are solutions to FE(∇∇^⋆). Furthermore,∇q = 0, ∇ω = 0. By the virtue of the proposition 3 one has rank(Φ) = Constantand rank(Φ^⋆) = Constant. We assume that theregular metric structure (𝐄, 𝐠) is positive definite then we have 𝐄 = (Φ) ⊕ Im(Φ) 𝐄 = (Φ^⋆) ⊕ Im(Φ^⋆)Further one has the following gauge reductions((Φ), ∇) ⊂ (𝐄, ∇), (Im(Φ), ∇^𝐠) ⊂ (𝐄, ∇^𝐠), ((Φ^⋆), ∇) ⊂ (𝐄, ∇), (Im(Φ^⋆), ∇^𝐠) ⊂ (𝐄, ∇^𝐠).Assume that (𝐄, 𝐠, ∇, ∇^𝐠) is the vector bundle versus of a statistical manifold (M, 𝐠, ∇, ∇^𝐠) here ∇ =. Then (10, 11, 12, 13)is (quasi) 4-web in the basemanifold M.Given a metric vector bundle (𝐄, 𝐠) and gauge structure (𝐄, ∇). The triple (𝐄, 𝐠, ∇) is called special if the differential equation FE(∇∇^𝐠) has non trivial solutions. We deduce from Corollary 2that every special statistical manifold supports a canonical (quasi) 4-web, viz 4 foliations in (quasi) general position. Before pursing we remark that among formalisms introduce in <cit.>, many (of them) walk in the category of vector bundles. We go to perform this remark. To every special triple (𝐄, 𝐠, ∇) we assign the function[ 𝒥_∇∇^𝐠( 𝐄)⟶ℤ;ϕ⟼ rank(Φ ) ]Reminder : The map Φ is the solution to FE(∇∇^g) given by𝐠(Φ(s), s^') = 1/2[ 𝐠(ϕ(s), s^')+ 𝐠(s, ϕ(s^')) ]. We define the following non negatives integerss^b( ∇,𝐠) = min_ϕ∈𝒥 _∇∇^𝐠( 𝐄) corank( Φ) , s^b( ∇) = min_( 𝐄,𝐠) ∈𝔐𝔢( 𝐄) s^b( ∇,𝐠 ) . The non negative integer s^b(∇) is a gauge invariant 𝔊a(𝐄), viz s^b(∇) = s^b(Φ^⋆∇) for all gauge transformation Φ. By Proposition 4 we get the gauge index function𝒢a(𝐄) ∋ [∇] → s^b([∇]) ∈ℤ § THE TOPOLOGICAL NATURE OF THE INDEX FUNCTIONS §.§ Index functions as characteristic obstructionAccording to <cit.>, every positive Riemannian foliation (nice singular metric in the tangent bundle of a smooth manifold) admits a unique symmetric metric connection. A combination of <cit.> and <cit.> shows that all those metrics are constructed using methods of the information geometry as in <cit.> (see the exact sequence (16) below). Remind that we are concerned with the question whether a gauge structure (𝐄, ∇) is metric. 
By the virtue of <cit.> and <cit.> one has In a finite rank vector bundle 𝐄 a gauge structure (𝐄,∇) is metric if and only if for some regular metric structure (𝐄, 𝐠) the differential equationFE(∇∇^𝐠) admits non trivial solutions.If for some metric structure (𝐄,𝐠^0) the differential equation FE(∇∇^g^0) admits non trivial solutions then for every regular metric structure (𝐄, 𝐠) the differential equationFE(∇∇^g) admits non trivial solutions.Hint : use the following the short exact sequence as in <cit.>0 ⟶Ω_2^∇(TM) ⟶𝒥_∇∇^𝐠(TM) ⟶ S_2^∇(TM) ⟶ 0. We recall that the concern is the question whether a gauge structure (𝐄, ∇) is metric. By the remark raised above, this question is linked with the solvability of differential equations FE(∇∇^𝐠) which locally is a system of linear PDE with non constant coefficients. Theorem 1 highlights the links of its solvability with the theory of Riemannian foliations which are objects of the differential topology. The key of those links are items of the information geometry. So giving (𝐄, ∇), the property of (𝐄, ∇) to be metric is equivalent to the property of FE(∇∇^𝐠) to admit non trivial solutions. Henceforth, our aim is to relate the question just mentioned and the invariants ind(∇) and s^b(∇). We assume that (𝐄, ∇) is special.In a gauge structure (𝐄, ∇), the following assertions are equivalent * The gauge structure (𝐄, ∇) is regularly special.* The metric index function vanishes at [∇] ∈𝒢a(𝐄) i.e. ind([∇]) = 0.* The gauge index function vanishes at [∇] ∈𝒢a(𝐄) i.e. s^b([∇]) = 0.A gauge structure (𝐄,∇) is regularly metric if and only if (𝐄,𝐠.∇) is regularly metric for all regular metric structure (𝐄,𝐠)By theorem 1 both ind(∇) and s^b(∇) are characteristic obstructions to (𝐄, ∇) being regularly special. We have no relevant interpretation of the case ind(∇) ≠ 0. Regarding the case s^b(∇) ≠ 0, we haveLet (𝐄, ∇) be a gauge structure with s^b(∇) ≠ 0. Then there exists a metric structure (𝐄, 𝐠) such subject to the following requirement : rank(𝐠) = s^b(∇), further 𝐠 is optimal for those requirement, viz every ∇-parallel metric structure (𝐄, 𝐠) has rank smaller than s^b(∇) §.§ Applications to the statistical geometry Let {∇^α} be the family ofα-connections of a statistical manifold. If ∇^α is regularly metric for all of the positive real numbers α then all of the α-connections are regularly metric. Appendix : When can a Connection Induce a Riemannian Metric for which it is the Levi-Civita Connection?https://mathoverflow.net/questions/54434/when-can-a-connection-induce-a-riemannian-metric-for-which-it-is-the-levi-civita5 aff:che Affane, A.; Chergui, A.: Quasi-connections on degenerate semi-riemanniann manifolds. Mediterranean Journal of Mathematics, Springer, vol 14, number 3, 1-15 (2017) ama:nag Amari, S. I; Nagaoka, H.: Methods of information geometry. Translations of Mathematical Monographs. AMS-OXFORD, 191 (2007) bel Bel'ko, I. V.: Degenerate Riemannian metric. Math. Notes. Acad. Sci. USSR 18, 5, 1046-1049 (1975) boyom1 Nguiffo Boyom, M.: Foliations-Webs-Hessian Geometry-Information Geometry and Cohomology. Entropy12, 18, 433 (2016) boyom2 Nguiffo Boyom, M.:Analytic anchored Victor bundle, metric algebroids and stratified Riemannian foliation.In Algebra, Geometry, Analysis and their Applications, Naseem A, SHehzad H,Arshad K and Yaya A. edts Narosa Publishing House 2016, New Delhi - Chênaie - Mumbai - Kalkata, 1-23 (2016) boyom3 Nguiffo Boyom, M.:Numerical properties of Koszul connections.To appear boyom:wol Nguiffo Boyom, M. ; Wolak, R. 
A.: Transversely Hessian foliations and information geometry. International Journal of Mathematics, World Scientific, vol. 27, No. 11, 17 pages (2016) koz Kozlov, S. E.:Levi-Civita connections on degenerate pseudo-Riemannian manifolds. J. Math. Sci. 104, 4, 1338-1342 (2001)
http://arxiv.org/abs/1710.00681v1
{ "authors": [ "Michel Nguiffo Boyom", "Ahmed Zeglaoui" ], "categories": [ "math.DG" ], "primary_category": "math.DG", "published": "20170927165818", "title": "Amari Functors and Dynamics in Gauge Structures" }
Department of Physics, University of Seoul, Seoul 02504, Korea SKKU Advanced Institute of Nanotechnology, Sungkyunkwan University, Suwon, 16419, KoreaDepartment of Physics, University of Seoul, Seoul 02504, KoreaDepartment of Physics, The University of Texas at Austin, Austin, Texas 78712, USA [email protected] SKKU Advanced Institute of Nanotechnology, Sungkyunkwan University, Suwon, 16419, [email protected] Department of Physics, University of Seoul, Seoul 02504, KoreaWe present a density functional theory study of the carrier-density and strain dependenceof magnetic order in two-dimensional (2D) MAX_3 (M= V, Cr, Mn, Fe, Co, Ni; A= Si, Ge, Sn, and X= S, Se, Te) transition metal trichalcogenides. Our ab initio calculations show that this class of compoundsincludes wide and narrow gap semiconductors and metals and half-metals,and that most of these compounds are magnetic. Although antiferromagnetic order is most common, ferromagnetism is predicted inMSiSe_3 for M= Mn, Ni, in MSiTe_3 for M= V, Ni, in MnGeSe_3, in MGeTe_3 for M=Cr, Mn, Ni, in FeSnS_3, and in MSnTe_3 for M= V, Mn, Fe.Among these compoundsCrGeTe_3 and VSnTe_3 are ferromagnetic semiconductors.Our calculations suggest that the competition between antiferromagnetic and ferromagnetic order can be substantially altered bystrain engineering, and in the semiconductor case also by gating.The associated critical temperatures can be substantially enhanced by means of carrier doping and strains.75.70.Ak,85.75.Hh,77.80.B-,75.30.Kz,75.50.PpCarrier and strain tunable intrinsic magnetism in two-dimensional MAX_3 transition metal chalcogenides Jeil Jung December 30, 2023 =========================================================================================================§ INTRODUCTION Recently interest in 2D materials research has expanded beyondgraphene<cit.> to include other layered van der Waals materials. <cit.> For example ultrathin transition metal dichalcogenides (TMDC),<cit.> a family that includes metals, semiconductors with exceptionally stronglight-matter coupling, <cit.> charge density waves, and superconductors, <cit.> have emerged as a major research focus.Two-dimensional (2D) materials with room-temperature magnetic order are of particular interestbecause they appear to be potentially attractive hosts for non-volatile information storage and logic devices. Unfortunately single-layer magnetism has so far been realized only inrelatively fragile 2D materials, <cit.>and no 2D materials have yet been discovered that exhibit room-temperature magnetism. Recent density functional theory (DFT) studies have proposed several potentially magnetic single-layer van der Waals materials,including the group-V based dichalcogenides VX_2 (X= S, Se), <cit.> FeBr_3, chromium based ternary tritellurides CrSiTe_3 and CrGeTe_3, <cit.> and MnPX_3ternary chalcogenides. <cit.> Separately theCrATe_3 (A= Si, Ge) <cit.> ternary tritellurides have been predicted by local density approximation (LDA) density functional theory (DFT) calculations to be ferromagnetic semiconductors with small band gaps of 0.04 and 0.06 eV respectively. The few layer limits of these materials have been studiedexperimentally by performing temperature dependent transport <cit.>, and neutron scattering experiments, and these have been interpreted asproviding suggestions of 2D magnetism. 
<cit.>In its single layer limit CrSiTe_3 is found to be a semiconductor with a generalized gradient approximation (GGA)gap of 0.4 eV, <cit.> substantially larger than its bulk value, and to have negative thermal expansion. <cit.>Different authors have reached various conclusions concerning therelative stability between ferromagnetic <cit.> andantiferromagnetic phases, <cit.> reflecting a relatively small total energy differences between the energies of the two magnetic phases. For compounds like CrGeTe_3 and CrSnTe_3 involving larger atomicnumber Ge and Sn atoms, DFT predicts ferromagnetic semiconducting phases with Curie temperatures between 80-170K. <cit.> Among non-chalcogenide compounds the CrX_3 (X= F, Cl, Br, I) <cit.> trihalidesare expected to be ferromagnetic semiconductors, with Curie temperatures T_ C < 100 K. A Recent breakthrough experiment has realized CrI_3 based devices in ultrathin layered formand demonstrated an intricate competition between ferromagnetic (FM) and antiferromagnetic (AFM)states as a function of layer number and external fields <cit.>. In this paper we present an exhaustive DFT survey of the magnetic phases of single-layer transition metal trichalcogenide compounds of the MAX_3 family. Our survey covers a variety of late 3d transition metals (M= V, Cr, Mn, Fe, Co, Ni), the group IV elements (A=Si, Ge, Sn), and the three chalcogen atoms (X= S, Se, Te).These single-layer compounds are structurally closely related to theirtransition metal trichalcogenide MPX_3compound cousins, which we studied in a recent work <cit.>. The main difference between MAX_3 compounds andMPX_3 compounds is that the group V phosphorus (P) atom inside the (P_2X_6)^4- skeleton is replaced by(A_2X_6)^6- bipyramid ions with X = (S, Se, Te) group IV elements. We find that this change is responsible for important modifications in the resulting electronic and magnetic properties. We have examined how the electronic structures are modified when the magnetic phases changes from antiferromagnetic to ferromagnetic, or from magnetic to non-magnetic,when we modify the electron carrier density, and when strains are applied. Our results indicate a strong interdependence between magnetism and structuralproperties in MAX_3 compounds due to a surprisingly strong dependence of exchange interactionstrengths on electron densities and strains. Because these materials involve transition metals and therefore have strong correlations, we do not expect quantitatively reliable predictions of density functional theory for all properties. Our goal with this survey is to provide insight into expected materials-property trends, and to examine the possibility of engineering magnetic properties in these materials using field effects and strains. Our paper is structured as follows. We begin in Sec. <ref> with a summary of some technical details of our first principles electronic structure calculations. In Sec. <ref> we discuss our results for ground-state properties including structure, magnetic properties,and electronic band structures and densities-of-states.Sec. <ref> is devoted to an analysis of the carrier-density dependence of the magnetic ground states, and to the influence of strains on the magnetic phase diagram.Finally in Sec. <ref> we present a summary and discussion of our results. § AB INITIO CALCULATION DETAILS Ground-state electronic structure and magnetic property calculationshave been carried out using plane-wave density functional theory as implemented in Quantum Espresso. 
<cit.> We have used the Rappe-Rabe-Kaxiras-Joannopoulos ultrasoft (RRKJUS) pseudoptentials for the semi-local Perdew-Burke-Ernzerhof (PBE) generalized gradient approximation (GGA) <cit.> together with the VdW-D2. <cit.> GGA+D2 (hereafter DFT-D2) wass chosen as an electronic properties reference because of the overall improvement of the GGA over the LDA <cit.> for the description of in-plane covalent bonds, and the interlayer binding captured by the longer ranged D2 correction. <cit.> We have also carried out calculations using GGA+D2+U (hereafter referred to as DFT-D2+U),using U=4 eV.Some larger values of U were used fora few specific cases involving Co and Ni metals to obtain magnetic ground states.The structures were optimized until the forces on each atom reached 10^-5 Ry/a.u. The self-consistency convergence for total energies was set to 10^-10 Ry.The momentum space integrals for rectangular unit cells were performed using a regularly spaced 4×8×1 k-point sampling density, and the plane wave energy cutoffwas set to 60 Ry.The out-of-plane vertical size of the periodic supercell was chosen to be 25 , which left a vacuum spacing of more than 10  between the two-dimensional layers.§ STRUCTURAL AND MAGNETIC PROPERTIES The MAX_3 transition metal trichalcogenide layers consist of 3d transition metals M= (V, Cr, Mn, Fe, Co, Ni) anchored by (A_2X_6)^6- bipyramid ions with X = (S, Se, Te)and A = (Si, Ge, Sn).The 12 electrons taken by the six chalcogen atoms per unit cell are partly compensated by the 6 electrons provided by the sp^3 bonds with the bridge A atoms, leaving a final 6- valence state for the anionic enclosure.The triangular lattice of bipyramids provides enclosuresfor the transition metal atoms, forming a structure that is practically identical to that of MPX_3 compounds enclosed by(P_2X_6)^4- bipyramids. (See Fig. <ref> for a schematic illustration of the single layer unit cell )The main difference is that the A atoms have one fewer electron compared to P atoms, yielding compounds withlarger nominal metal cation valences 3+ rather than 2+ in the MPX_3 compounds. The interaction between the metal cations with both the chalcogen X and the bridge A atoms will determine the magnetic moment that usually concentrates at the metal atom sites.In our calculations all the 2D MAX_3 crystals we considered are magnetically ordered within DFT-D2 except for CoAX_3, NiGeS_3 and NiSnX_3.These are magnetic only within DFT-D2+U(See Fig. <ref> for an illustration of the trends in magnetic condensation energy, i.e. in the energy gained by forming magnetically ordered states.).In the following we present an analysis of the structural, magnetic and electronic propertiesof representative 3d transition metal MAX_3 trichalcogenides, emphasizingtheir dependence on the chalcogen element (Si, Ge, Sn)and their underlying electronic band structures. §.§ Structural propertiesIn their bulk form the MAX_3 single layers are ABC-stacked and are held together mainly by van der Waals forces <cit.>. Although the atomic structures of the single layer transition metal trichalcogenide crystals are similar to truncatedbulk structures, small changes do appear due to the absence of the interlayer couplingand distortions in the ground-state crystalgeometries are correlated with the magnetic phase<cit.>.Theanalysis of magneticproperties is simplified by the fact that the magnetic moments develop almost entirely at the metal atom sites. 
We have optimized the MAX_3 single layer lattice structures using the rectangular unit cell shown in Fig. <ref> which ischaracterized by the values of the in-plane lattice constants a and b.We define the layer thickness c' as the distance between the chalcogen atoms betweenthe top and bottom layer in a MAX_3 monolayer. The relaxed in-plane lattice parameters and layer thickness of the rectangular unit cells, as obtained using DFT-D2 (See Tables in Supplemental Material<cit.>), are found to increase for larger chalcogen atoms for given A (Si, Ge and Sn) atoms. In general the calculated self-consistent lattice constants depend on the magnetic ordering, The variation is substantial for the transition metals V, Cr, Mn, Fe for all combinations of A (Si, Ge, Sn) and chalcogens S, Se, Te, up to 10% for in-plane lattice constants and up to 20% for the layer thickness. For compounds with Co and Ni the lattice distortions are much smaller, see Fig. <ref>.As a rule of thumb we can see that the magnitude in the distortion of the bondsis roughly proportional to the total energy differences represented in Fig. <ref> and therefore they are largest when we compare magnetic and non-magnetic phases. We have also optimized all the structures in the presence of local electron repulsion introducedthrough Hubbard U i.e. DFT-D2+U. This contribution leads to total energy differences between magnetic and non-magnetic phases that are roughlydoubled (see Fig. <ref>) within DFT-D2+U when compared to DFT-D2, and this difference is reflected in the increase of the lattice distortions. The relative difference of the lattice parameters between DFT-D2-U and DFT-D2 geometries are comparable to the difference between magnetic and non-magnetic phases within the same DFT approximation, see Fig. <ref> and Fig. 1 in the Supplemental Materials. From this observation we can expect that the short-range correlations of the transition metal atomscan substantially modify the energetics of the ground-state magnetic configurations. §.§ Magnetic properties[htbp!] 6.4in3.6in< g r a p h i c s > (Color online)The DFT-D2 density of states (DOS) for single-layer MAX_3 compounds in their lowest-energy magnetic configurationsfor M = V, Cr, Mn, Fe, Co and Ni transition-metal atoms with combination of A =Si, Ge, Sn and X = S, Se, Te chalcogen atoms.Different colors, gray for AFM configurations, red for states in the FM configurations, and green for the NM phases are used to facilitate theclassification of the expectedmagnetic phases.Most ferromagnetic solutions are metallic except for VSnTe_3 and CrGeTe_3,while both gapped and metallic antiferromagnetic phases are common. The magnetic ground-state and meta-stable magnetic configurations are obtained by identifyingenergy extremas via converged self-consistency initiated using initial conditionscorresponding to Néel antiferromagnetic (nAFM), zigzag antiferromagnetic (zAFM),stripy aniferromagnetic (sAFM), ferromagnetic (FM), and nonmagnetic (NM) states. The calculations we have carried out for single layer MAX_3 compoundsconfirm that the magnetic moments develop mainlyat the metal atom sites, with significant net spin polarization induced ongroup IV and chalcogen atoms only in the ferromagnetic configuration case. The late 3d transition metal elements Cr, Mn, Fe, Co and Ni stand out in the periodic table as elements that tend to order magnetically.The bonding arrangements of particular compounds can however enhance or suppress magnetism. 
In 2D MAX_3 crystals, transition metal ions are contained within trichalcogenidebipyramidal cages, and have weak direct hybridization with other transition metal atoms. The exchange interactions between the metal atoms are therefore mainly mediated by indirect exchange through the intermediate chalcogen and A atoms. Magnetic interactions can be extracted from ab initio electronic structurecalculations by comparing the total energies ofantiferromagnetic, ferromagnetic, and nonmagnetic phasesin V, Cr, Mn, Fe, Co, Ni based compounds as shown in Fig. <ref>. The total energy values in the data set used to construct the magnetic interactionmodel are gathered inTable. I of the Supplemental Material<cit.>, where we find thatmagnetic phases are normally favored over the NM phase. Exceptions to this rule are that DFT-D2 predicts non-magnetic phasesfor CoAX_3, NiGeS_3 and NiSnX_3.These compoundsdo develop magnetic solutions when we use a sufficiently large onsite repulsion parameter U withinDFT-D2+U.We have generally used the U=4 eV value to assess the role of onsite Coulombrepulsion on the magnetic ground state energies, and used larger values (U=5 eV for CoGeTe_3, CoSnTe_3,U=6 eV for CoGeSe_3, CoSnSe_3,and U=7 eV for CoSnS_3) to obtain magnetic solutions when they did not appearfor U=4 eV. Our DFT calculations predict that the Ni based trichalcogenides are magneticonly for some of the considered spin configurations.For instance NiGeS_3 and NiSnX_3 are non-magnetic within DFT-D2,and zAFM and sAFM phases are missing in NiSiX_3 and NiGeSe_3,and NiGeTe_3 has only ferromagnetic ordering.Within DFT-D2+U it is possible to obtain both FM and nAFM for NiGeTe_3, and FM only for NiSnTe_3. The magnetic anisotropy energy estimates that we obtained from non-collinear magnetizationcalculations are 616, 750 and 1166 μeV per formula unit for ferromagnetic order in the compounds CrSiS_3, CrSiSe_3 and CrSiTe_3, with magnetization favored perpendicular to the plane. Assuming that in general these compounds will have a well defined anisotropy axis we make use of theIsing model for the spin-Hamiltonian to obtain an upper bound for the T_c values. The critical temperature estimate would belower when we use a Heisenberg model with weak anisotropy. Using the fact that the magnetic moments are mostly concentrated at the metal atom sites we can map the total energies to an effective classical spin Hamiltonian on a honeycomb lattice:H =∑_⟨ ij ⟩ J_ij S⃗_i ·S⃗_j = 1/2∑_ i ≠ jJ_ij S⃗_i ·S⃗_jwhereJ_ij are the exchange coupling parameters between two local spins, S⃗_i is the total spin magnetic moment of the atomic site i,and the prefactor 1/2 accounts for double-counting.By calculating the three independent total energy differences between the four magneticconfigurations <cit.> illustrated in Fig. <ref>,namely the ferromagnetic (FM), Néel (nAFM),zigzag AFM (zAFM), and stripy AFM (sAFM) configurations, and assuming that the magnetic interactions are relatively short ranged,we can extract the nearest neighbor (J_1), second neighbor (J_2),and third neighbor (J_3) coupling constants:<cit.>E_ FM-E_ AFM =3 (J_1+J_3)S⃗_A·S⃗_BE_ zAFM-E_ sAFM =(J_1 - 3J_3)S⃗_A·S⃗_BE_ FM+E_ AFM -E_ zAFM-E_ sAFM = 8J_2S⃗_A·S⃗_Awhere S⃗_A/B is the average spin magnetic moment on the honeycomb sublattice. The average magnetic moment S = |S⃗_A/B| at each lattice site obtained within DFT-D2and DFT-D2+U are listed in Table <ref>. 
For MAX_3 compunts where A=Si we get numerical values of magnetization for V, Cr, Mn, Fe close to 2, 3, 4, 1 Bohr magneton units while for A=Ge, Sn we get 2, 3, 3, 1. The magnetization can be understood counting the 4s and 3d valence electrons in the metal atom used in the bonding with the anionic enclosure.For example, in the case of vanadium with three unpaired 3d electrons we expect that the three electrons required for the bonds originate from two 4s electrons and one 3d electron,which leaves two upaired 3d electrons responsible fo the 2μ_B. The magnetization increases as we move to the right in the periodic table until it saturates for Mn with the maximum of five unpaired 3d electrons. We notice that the sudden drop in magnetization for Fe down to one Bohr magneton may be attributed to changes in the 4s and 3d energy level orderingsuch that three out of the four unpaired 3d valence electrons are used for the bonds with the enclosure.Single-layer magnetic ordering temperatures T_ c were estimated by running Monte Carlo simulations of the three-coupling-constant Ising models using the Metropolis algorithm in lattice sizes of up to N=32×64 with periodic boundary conditions <cit.> calculating the heat capacity C = k β^2 ( ⟨ E^2 ⟩ - ⟨ E ⟩^2 ) as a function of temperature and identified its diverging point as the Neel and Curie temperatures. This value can be considered as an upper bound for a Heisenberg model with strong anisotropy. See the Supplemental Material<cit.> for plots of representative resultsfor the temperature dependent heat capacity.The calculated average values of the magnetic momentsvary widely from compound to compound (See Table <ref>),while the magnitudes of the magnetic moments at the metal atoms generally have relativelysmall differences (between 3%∼10%) between different magnetic configurations of the samecompound.The largest variations within a compound were found in FeSnX_3 and NiSnX_3.Within the DFT-D2 approximation we find that the magnetic moments in 2D MAX_3 developalmost entirely at the metal atom sites while the use of on-site U introduces an enhancement of the spinpolarization at non-metal atom sites.Variation of moment from configuration to configuration within a compound is an indication that the system is less accurately described by alocal moment model.§.§ Band structure and density of states Understanding the electronic properties is an essential stepping stoneto seek spintronics device integrations based on MAX_3 magnetic 2D materials. Specifically, it is desirable to understand how the electronic structure depends on the type of magnetic phase in order to seek ways to couple charge and spin degrees of freedom.It is found that the MAX_3 class of materials includes almost all of the behaviors studied in current spintronics research, includingboth antiferromagnets and ferromagnets, and a variety of electrical properties including metals, semi-metals, half-metals and semiconductors. We have classified as semi-metallic those states with vanishingly small gaps or small density of states (DOS) near the Fermi energy from inspection of the electronic structure.The electronic band structures corresponding to the ground state configurations are represented in Fig. <ref> and the respective DOS are in Fig. <ref>. The band structures are plotted using a triangular unit cell around one of the K valleys, at times doubling the cell size to allow for longer period magnetic configurations. 
The DOS for all magnetic (FM, nAFM, zAFM and sAFM) and non-magnetic phases within DFT-D2 and DFT-D2+U can be found in the Supplemental Material<cit.>.The analysis of the orbital projected partial density of states in the Supplemental Material<cit.> for the MAX_3compounds reveals that the conduction band edges have an important contribution from the d orbitals of metal atoms,as well as s and p orbitals of the A atom for the valence band edges.From the difference in the associated density of states between DFT+D2 and DFT+D2+U we notice the strong sensitivity of the electronic structure to the choice of electron-electron interaction model. As expected, most of the AFM cofigurations are found to be semiconductors while semi-metallic and metallic solutions are alsofound for select spin configurations in V and Cr tellurides, in two instances of Mn sulfides, and in several Fe based compounds, see Fig. <ref>. As a general trend, we notice that both AFM as well as NM band gaps reduce when the chalcogen's atomic number increases from S to Te. The FM configurations are generally metallic, with half metallic solutions for VSiTe_3,MnSiSe_3,FeSnS_3,NiSiSe_3,and are semiconducting for CrGeTe_3, FeSnS_3, FeSnTe_3.We notice that the addition of U switches some of the metallic FM solutions into half-metals, and it leads to semiconducting FM solutions for CrSnS_3, CrSnSe_3 and CoSnS_3.Most of the Co based compounds predict NM states with a semiconducting gap, while the few NM states of Ni based compounds are found to be metallic. We notice that the magnitude of the band gaps in AFM, FM and NM states do not experience notable changes betweenDFT-D2 and DFT-D2+U, as illustrated in Table <ref>. However, the corresponding density of states (DOS), plotted in Fig. <ref>,shows the strong influence on the ground-state electronic structure when Coulomb correlations are included. This suggests that the physics of MAX_3 compounds can be dominated by correlation effects and modeling will be most successful when we rely on effective models that feed from experimental input or high level ab initio calculations. The orbital content of the valence and conduction band edges that are most relevant for studies of carrier-density dependent magnetic properties can be extracted from the orbital projected partial density of states (PDOS). Depending on the specific material composition and the magnetic configuration, the valence and conduction band edge orbitals can be dominatedby metal, non-metal or chalcogen atoms. § TUNABILITY OF MAGNETIC PROPERTIES As we reported in the case of MPX_3,<cit.> the two dimensional magnetic materials are of interestprimarily because of the prospect thattheir properties might be more effectively altered by tuning parameters controllable in situ.Two potentially important control knobs that can be exploited experimentally in two-dimensional-material based nano-devices arecarrier-density and strain.The dependence of magnetic properties oncarrier density is particularly interesting because it provides a convenient route for electrical manipulation of magnetic properties in electronic devices. In this section we explore the possibility of tailoring the electronic and magnetic properties of MAX_3 ultrathin layers by adjusting carrier density or by subjecting the MAX_3 layers to external strains. 
§.§ Field-effect modification of magnetic propertiesThe possibility of modifying the magnetic properties of a material simply by applying a gate voltage offers advantages overmagnetic-field mediated information writing in magnetic media such as the higher density storage and enhanced information access speed. Electric field control of magnetic order has been demonstrated in ferromagnetic semiconductors and metal films through carrierdensity or Fermi level dependent variation of magnetic exchange or magnetic anisotropy. <cit.> The gate voltage control of magnetism in 2D materials would allow control over magnetically stored information at negligible energy cost.A schematic illustration for a field effect transistor device where magnetic order is modified through a carrier inducing backgate is shown in Fig. <ref>, where we alsosummarize the theoretically predicted trends for the competitionbetween AFM and FM states for 2D MAX_3 compounds that we have considered. In our calculations we have obtained the variation of the total energy differences between magnetic configurations as a function ofcarrier density neglecting the possible role of charge polarization within a MAX_3 layer induced by the external fields. We find that when the ground state is in the AFM phase,generally transitions to FM phases can be achieved by adding sufficiently large electron or hole carrier densities.The origin of this trend can be understood based on energetic considerationswhen FM phases are gapless or have smaller gaps than the AFM phases. <cit.>If we denote by Δ E_gap = E_ AFM^gap - E_ FM^gap the difference between the gap in the AFM and the FM phases, it follows that the energy difference per area unit between AFM and FMphases δ E ≡ E_ AFM -E_ FM is given at low carrier densities by δ E(±δ n)= δ E_0+ ( Δ E_gap/2 ±δμ ) (±δ n) where +δ n is the carrier density of n-type samples, and -δ n is the carrier density of p-type samples,δ E_0 is the energy difference per area unit between AFM and FM phases in neutralMAX_3 sheets, and δμ is the difference between the mid-gap energy of theAFM semiconductors and the chemical potential of the ferromagnetic metals or semiconductors. Introducing n-carriers is most effective in driving a transition from AFM toFM phases when δμ is positive, while introducing p-carriers ismost effective when δμ is negative. The total DOS corresponding to FM and AFM phases in the presence of carrier doping is illustrated for CrSiTe_3 in Fig. <ref>(a) as a specific example.A detailed breakdown of the projected density of states at each atomic sitecan be found in the Supplemental Material <cit.>.Because the energy difference per formula unit between FM and AFM phases ismuch smaller than the energy gap, a transition between them can be driven by carrier density changes per formula unit that are much smaller than one,especially so when δμ plays a favorable role. In particular we see in Fig. <ref> that atransition between FM and AFM phases are predicted inCrSiTe_3, MnSiX_3, CrGeSe_3 at electron carrier densities andMnSiS_3, FeSiSe_3, FeGeSe_3, VSnSe_3 at hole densities as small as ∼0.05 electrons per formula unit which correspond to carrier densities on the order of 10^13 cm^-2. Our calculations show that the FM solution is the preferred stable magnetic configurationin almost all cases when the system is subject to large electron or hole densities within therange of a few times ± 10^14 cm^-2. 
Even then, cases like vanadium based VSiSe_3, VGeS_3, VGeSe_3,or iron based FeSiTe_3, FeSnS_3, & FeSnSe_3,or manganese based MnGeX_3, MnSnS_3,or NiGeX_3, CrSnTe_3, have not shown any transition within the selected range of electron or hole carrier density.Carrier density changes of this magnitude can be achieved by ionic liquid or gel gating, or through interfaces with ferroelectric materials. Since this size of carrier density is sufficient to completely change the character of the magnetic order,we can expect substantial changes in magnetic energy landscapes and their stability at much smaller carrier densities.Our calculations therefore motivate efforts to find materials which can be used to establishgood electrical contacts to MAX_3 compounds to facilitate either n or p carrier dopingby aligning their fermi levels towards the conduction or valence bands. For compounds whose ground-states are FM at charge neutralitywe find that the transitions to the AFM phase can be achievedfor n-doping in VSiTe_3, VSnTe_3, MnSiSe_3, & FeSnTe_3 and for p-doping for NiSiSe_3, NiSTe_3, & MnSnTe_3.§.§ Strain-tunable magnetic propertiesThe flexible membrane-like behavior of 2D-materials can be used to tailor their electronic structure by means of strains. Examples of strain induced electronic structure modification in 2D materials discussed in theliterature include the observation of Landau-level like density of states near high curvaturegraphene bubbles <cit.>, or the commensuration moiré strains that open up a band gap at the primary Dirac pointin nearly aligned graphene on hexagonal boron nitride <cit.>.Noting that strains can play an important role in configuring the electronic structure we calculate the total energies for different magnetic phases in 2D MAX_3 materials in the presence of expansive or compressive in-plane biaxial strains that we model by uniformly scaling the rectangular unit cell described in Fig. <ref>. The uniform biaxial strains lead to modifications in the magnetic phaseenergy differences E_ AFM - E_ FM that can trigger phase transitionsfor strains as small as 2-4% in certain cases, while much larger strain fields are required in general. In the following we list the five different types of strain-induced effects expected in charge neutral MAX_3 compounds:*No phase change (expansion and compression):The ground-states are not altered in the presence of strains forvanadium based VSiS_3 (sAFM), VSnS_3 (zAFM),chromium based CrSiS_3 (nAFM), CrSiSe_3 (nAFM), CrGeS_3 (nAFM), CrSnS_3 (zAFM), manganese based MnGeTe_3 (FM), nickel based NiGeTe_3 (FM), and iron based FeSnS_3 (FM), FeSnTe_3 (FM). *AFM to FM (compression):For compressive strains we find transitions for vanadium based VSiSe_3 (4%), VGeS_3 (∼1%), VGeSe_3 (4%), VGeTe_3 (9%), chromium based CrSiTe_3 (∼1%), VGeTe_3 (9%), manganese based MnSiS_3 (2%), MnSiTe_3 (∼1%), MnGeS_3(9%), MnSnS_3 (4%), MnSnSe_3 (8%), nickel based NiGeSe_3 (4%), and iron based FeSiTe_3 (4%), FeGeTe_3 (4%). 
*AFM to FM (expansion): Conversely for expansive strains we find transitionsin vanadium based VGeS_3 (∼12%), VSnSe_3 (∼2%) chromium based CrSiTe_3 (∼2%), CrSnSe_3 (8%), manganese based MnSiTe_3 (∼6%), and iron based FeSnSe_3 (4%).*FM to AFM (compression):Transitions are seen for compressive strains (%) in vanadium based VSiTe_3 (∼1%),VSnTe_3 (∼1%), nickel based NiSiSe_3 (4%), NiSiTe_3 (4%),and chromium based CrGeTe_3 (∼2%), CrSnTe_3 (∼2%).*FM to AFM (expansion): Transitions for expansive strains (%) are found for manganese based MnSiSe_3 (∼1%), MnSnTe_3 (4%), iron based FeSiS_3 (4%), and nickel based NiSiSe_3 (4%), NiSiTe_3 (4%).An example about the DOS evolution as a function of strain is shown in Fig. <ref> (b) for charge neutral CrSiTe_3 monolayer subjected to -4% (compressive) and4% (expansive) strains in the FM and zAFM phases. The expansion strains are found to have a small effect in both the FM and zAFM phases of CrSiTe_3but compressive strains of 4% lead to a closure of theFM-CrSiTe_3 band gap turning it into a semi-metal with finite density of states at the Fermi level. As mentioned earlier, from the projected density of states analysis in the Supplemental Material <cit.> we can observe a relatively large content of Cr-d and Si-s, p orbitals at the bands near the Fermi energy.The Fig. <ref>(c) shows that the T_c can be substantially enhanced when the device is subject to carrier density variations or to strains.For monolayer CrSiTe_3 we showedthat the T_c increased from 16K to almost 700K for carrier densities of 0.2 electrons (holes) per CrSiTe_3, while in the presence of strains they enhanced to 476K(4% compression) and 104K (4% expansion).§ SUMMARY AND DISCUSSION In this paper we have carried out an ab initio study of the MAX_3 transition metaltrichalcogenide class of two-dimensional materials considering differentcombinations of 3d metal (M = V, Cr, Mn, Fe, Co, Ni), group IV (A = Si, Ge, Sn) and chalcogen (X = S, Se, Te) atoms in an effort to make an exhaustive search for magnetic 2D materials useful for spintronics applications.Our calculations suggest that magnetic phases are common in the single-layer limit of thesevan der Waals materials, and that the configuration of the magnetic phase depends sensitivelyon the transition metal/chalcogen element combination.We find that semiconducting antiferromagnetic (AFM) phases are the most common ground state configurations and appear in Néel, zigzag, or stripy configurations. 
The ferromagnetic (FM) phases are predominantly metallic although semiconducting band structures are found for several compounds, whilethe non-magnetic (NM) phases exist for both metallic and semiconducting states.In compounds with larger chalcogen atoms we have relatively smaller band gaps inthe AFM phases and smaller ground state energy differences between the FM and AFM phases.Compounds such as CoAX_3 and NiAX_3 are found to be non-magnetic within DFT-D2,although they can stabilize magnetic phases unpon inclusion of a sufficiently large U at the metal atom sites.The electronic structures predicted by density functional theory for these materials are sensitive to the choiceof the Coulomb interaction model as manifested in the substantial differences in predicted DOS betweenthe standard semi-local DFT-D2, and calculations including a local U correction.Since the exchange interactions in the AFM phase are expected to vary inversely with the band gap the approximations that underestimate the gap will overestimate the interaction strengths.This sensitivity of the optimized ground-state results to the choice of the DFT approximation scheme which can alter the strength and range of the exchange interaction makes it desirable to benchmark the results against experiment in order to establishtheoretical models upon which we can make further predictions of material's properties.We have found a variety of different stable and meta-stable magnetic configurations in single layer MAX_3 compounds. The critical temperatures for the magnetic phases in the single layer limit are expected to be lowerthan in the bulk due to the reduction in the number of close neighbor exchange interactions. We analyzed the magnetic phases of the 2D MAX_3 compounds by building a model Hamiltonianwith exchange coupling parameters extracted by mapping the total energies from our ab initio calculationsonto an effective classical spin modeland obtained the critical temperatures through a statistical analysis based on the Metropolis algorithm <cit.>. The calculated critical temperatures assumed an Ising model that provides an upper bound for the expected critical temperatures and were found to vary widelyranging between a few tens to a few hundred Kelvin.Control of magnetic phases by varying the electric field in a field effect transistor device is a particularly appealing strategy for 2D magnetic materials. Our calculations indicate that a transition between AFM and FM phases can be achieved by inducing carrier densities in 2D MAX_3 compounds which are effectively injected through field effects using high κ-dielectrics,using ionic liquids, or by interfacing with ferroelectric materials.For materials exhibiting magnetic phases we find that use of the heavier chalcogen Te atoms can reduce the carrier densities required for magnetic transitions. The interdependence between atomic and electronic structure suggest that strains can be employed to tune magnetic phases,and they can be used to facilitate switching the magnetic configurations when they are combined together with carrier density variations. We have found that the ground state magnetic configuration can undergo phase transitionsdriven by in-plane compression or expansion of the lattice constants as small as a few percents in certain cases.Based on our calculations, we conclude that single layer MAX_3 transition metal trichalcogenides are interesting candidate materials for 2D spintronics. 
Their properties, including their magnetic transition temperatures, can be adjusted by the application of external strains or by modifying the carrier densities in field effect transistor devices.For monolayer CrSiTe_3 we showed in Fig. <ref>(c) that the T_c values can undergo an order of magnitude enhancementwhen subject to large carrier doping or in the presence of compressive or expansive strains. The sensitivity of these systems to variations in system parameters, such as composition, details of the interface, and the exchange coupling of the magnetic properties with external fields, offers ample room for future research that seek new functionalities in magnetic 2D materials. § ACKNOWLEDGEMENTSWe are thankful to the assistance from and computational resources providedby the Texas Advanced Computing Centre (TACC).We acknowledge financial support from NRF-2017R1D1A1B03035932 for BLC, the 2017 Research Fund of the University of Seoul for JJ,NRF-2014R1A2A2A01006776 for EHH,DOE BES Award SC0012670, and Welch Foundationgrant TBF1473 for AHM.99 novoselov_nature K. S. Novoselov, A. K. Geim, S. V. Morozov, D. Jiang, M. I. Katsnelson, I. V. Grigorieva, S. V. Dubonos and, A. A. Firsov, Two-dimensional gas of massless Dirac fermions in graphene,Nature. 438, 197 (2005).philipkim_nature Y. Zhang, Y.-W. Tan, H. L. Stormer, and P. Kim, Experimental observation of the quantum Hall effect and Berry's phase in graphene, Nature. 438, 201 (2005).novoselov_pnas K. S. Novoselov, D. Jiang, F. Schedin, T. J. Booth, V. V. Khotkevich, S. V. Morozov, and A. K. Geim, Two-dimensional atomic crystals, Proc. Nat. Ac. Sci. 102, 10451 (2004). chowalla M. Chhowalla, H. S. Shin, G. Eda, L-J. Li, K. P. Loh and, H. Zhang, The chemistry of two-dimensional layered transition metal dichalcogenide nanosheets, Nat. Chem. 5,263 (2013).lightmatter L. Britnell, R. M. Ribeiro, A. Eckmann , R. Jalil, B. D. Belle, A. Mishchenko, Y. J. Kim, R. V. Gorbachev, T. Georgiou, S. V. Morozov, A. N. Grigorenko, A. K. Geim, C. Casiraghi, A. H. Castro Neto, K. S. Novoselov, Strong light-matter interactions in heterostructures of atomically thin films, Science. 340, 1311 (2013).tas2 B. Burk, R. E. Thomson, J. Clarke, A. Zettl, Surface and Bulk Charge Density Wave Structure in 1 T-TaS_2, Science. 257, 362 (1992).tmdneto A. H. Castro Neto, Charge Density Wave, Superconductivity, and Anomalous Metallic Behavior in 2D Transition Metal Dichalcogenides, Phys. Rev. Lett. 86, 4382 (2001).frindt R. F. Frindt, Superconductivity in ultrathin NbSe_2 layers,Phys. Rev. Lett. 28, 299 (1972).mos2sc D. Costanzo, S. Jo, H. Berger, and A. F. Morpurgo, Gate-induced superconductivity in atomically thin MoS_2 crystals, Nat. Nanotech. 11, 339 (2016). mpx3other16c J.-U. Lee, S. Lee, J. H. Ryoo, S. Kang, T. Y. Kim, P. Kim, C.-H. Park, J.-G. Park, and H. Cheong, Ising-Type Magnetic Ordering in Atomically Thin FePS3,Nano. Lett. 16, 7433 (2016).xiaodong2017 B. Huang, G. Clark, E. Navarro-Moratalla, D. R. Klein, R. Cheng, K. L. Seyler, D. Zhong, E. Schmidgall, M. A. McGuire, D. H. Cobden, W. Yao, D. Xiao, P. Jarillo-Herrero, X. Xu,Layer-dependent ferromagnetism in a van der Waals crystal down to the monolayer limit, Nature 546, 270 (2017). zhangxiang2017 C. Gong, L. Li, Z. Li, H. Ji, A. Stern, Y. Xia, T. Cao, W. Bao, C. Wang, Y. Wang, Z. Q. Qiu, R. J. Cava, S. G. Louie, J. Xia, X. Zhang, Discovery of intrinsic ferromagnetism in two-dimensional van der Waals crystals, Nature 546, 265 (2017).hanwei2017 W. Xing, Y. Chen, P. M. Odenthal, X. Zhang, W. Yuan, T. Su, Q. Song, T. 
Wang, J. Zhong, S. Jia, X. C. Xie, Y. Li and W. Han, Electric field effect in multilayer Cr_2Ge_2Te_6: a ferromagnetic 2D material, 2D materials 4, 024009 (2017). 2dmagnet1 Magnetic Properties of Layered Transition Metal Compounds Ed by L. J. de Jongh, series ofPhysics and Chemistry of Materials with Low-Dimensional Structures, vol- 9 (1990).2dmagtmd Y. Ma, Y. Dai, M. Guo, C. Niu, Y. Zhu, and B. Huang, Evidence of the Existence of Magnetism in Pristine VX_2 Monolayers (X = S, Se) and Their Strain-Induced Tunable Magnetic Properties, ACS Nano, 6, 1695 (2012).lebegue S. Lebegue, T. Björkman, M. Klintenberg, R. M. Nieminen, and O. Eriksson, Two-Dimensional Materials from Data Filtering and Ab Initio Calculations, Phys. Rev. X 3, 031002 (2013). max1 B. Siberchicot, S. Jobic , V. Carteaux , P. Gressier , and G. Ouvrard, Band Structure Calculations of Ferromagnetic Chromium Tellurides CrSiTe_3 and CrGeTe_3, J. Phys. Chem. 100, 5863, (1996).max6 M.-W. Lin, H. L. Zhuang, J. Yan, T. Z. Ward, A. A. Puretzky, C. M. Rouleau, Z. Gai, L. Liang, V. Meunier, B. G. Sumpter, P. Ganesh, P. R. C. Kent, D. B. Geohegan, D. G. Mandrus and K. Xiao, Ultrathin nanosheets of CrSiTe_3: a semiconducting two-dimensional ferromagnetic material J. Mater. Chem. C, 4, 315 (2016).max3 T. J. Williams, A. A. Aczel, M. D. Lumsden, S. E. Nagler, M. B. Stone, J.-Q. Yan, D. Mandrus, Magnetic Correlations in the Quasi-2D Semiconducting Ferromagnet CrSiTe_3, Phys. Rev. B. 92, 144404 (2015).max5 X. Chen, J. Qi, and D. Shi, Strain-engineering of magnetic coupling in two-dimensional magnetic semiconductor CrSiTe3: Competition of direct exchange interaction and superexchange interaction, Phys. Lett. A. 379, 60 (2015).max2 L. D. Casto, A. J. Clune, M. O. Yokosuk, J. L. Musfeldt, T. J. Williams, H. L. Zhuang, M.-W. Lin, K. Xiao, R. G. Hennig, B. C. Sales, J.-Q. Yan, and D. Mandrus, Strong spin-lattice coupling in CrSiTe_3,APL Mater. 3, 041515 (2015).sivadas N. Sivadas, M. W. Daniels, R. H. Swendsen, S. Okamoto, and D. Xiao, Magnetic ground state of semiconducting transition-metal trichalcogenide monolayers, Phys. Rev. B. 91, 235425 (2015).max4 H. L. Zhuang, Y. Xie, P. R. C. Kent, and P. Ganesh, Computational discovery of ferromagnetic semiconducting single-layer CrSnTe_3, Phys. Rev. B. 92, 035407 (2015).CGeT-PRB2017 Y. Liu and C. Petrovic,Critical behavior of quasi-two-dimensional semiconducting ferromagnet Cr_2Ge_2Te_6,Phy. Rev. B, 96, 054406 (2017). CGeT-ChemMat2017 X. Tang, D Fan, K. Peng, D. Yang, L. Guo, X. Lu, J. Dai, G. Wang, H. Liu, X. Zhou,Dopant Induced Impurity Bands and Carrier Concentration Control for Thermoelectric Enhancement in p-Type Cr_2Ge_2Te_6, Chem. Mater, 29, 7401 (2016).CGeT-ChemMat2016 D. Yang, W. Yao, Q. Chen, K. Peng, P. Jiang, X. Lu, C. Uher, T. Yang, G. Wang, X. Zhou,Cr_2Ge_2Te_6: High Thermoelectric Performance from Layered Structure with High Symmetry, Chem. Mater, 28, 1611 (2016).CGeT-JJAP2016 X. Zhang, Y. Zhao, Q. Song, S. Jia, J. Shi, W. Han,Magnetic anisotropy of the single-crystalline ferromagnetic insulator Cr_2Ge_2Te_6, Jap. J. App. Phys, 55, 033001 (2016).CGeT-PRB2017-2 G. T. Lin, H. L. Zhuang, X. Luo, B. J. Liu, F. C. Chen, J. Yan, Y. Sun, J. Zhou, W. J. Lu, P. Tong, Z. G. Sheng, Z. Qu, W. H. Song, X. B. Zhu, and Y. P. Sun,Tricritical behavior of the two-dimensional intrinsically ferromagnetic semiconductor CrGeTe_3, Phy. Rev. B, 95, 245212 (2017).CGeT-2DMat2017 W. Xing, Y. Chen, P. M. Odenthal, X. Zhang, W. Yuan, T. Su, Q. Song, T. Wang, J. Zhong, S. Jia, X. C. Xie, Y. Li, W. 
Han,Electric field effect in multilayerCr_2Ge_2Te_6 : a ferromagnetic 2D material,2D Materials, 4, 024009 (2017).MnST-PRB2017 Andrew F. May, Yaohua Liu, Stuart Calder, David S. Parker, Tribhuwan Pandey, Ercan Cakmak, Huibo Cao, Jiaqiang Yan, Michael A. McGuire,Magnetic order and interactions in ferrimagnetic Mn_3Si_2Te_6, Phys. Rev. B, 96, 054406 (2017).CrST-SciRep2016 B. Liu, Y. Zou, L. Zhang, S. Zhou, Z. Wang, W. Wang, Z. Qu, Y. Zhang, Critical behavior of the quasi-two-dimensional semiconducting ferromagnet CrSiTe_3, Sci. Rep, 6, 33873 (2016)CrSGT-PRL2016 J. Liu, S. Y. Park, K. F. Garrity, D. Vanderbilt, Flux States and Topological Phases from Spontaneous Time-Reversal Symmetry Breaking in CrSi(Ge)Te_3 -Based Systems, Phy. Rev. Lett, 117, 257201 (2016).MnPSe-PRL2016 N. Sivadas, S. Okamoto, D. Xiao, Gate-Controllable Magneto-optic Kerr Effect in Layered Collinear Antiferromagnets, Phys. Rev. Lett, 117, 267203 (2016).CGeT-2DMat2016 Y. Tian, M. J. Gray, H. Ji, R. J. Cava, K. S. Burch, Magneto-elastic coupling in a potential ferromagnetic 2D atomic crystal, 2D Materials, 3, 025035 (2016). cri3 M. A. McGuire, H. Dixit, V. R. Cooper, and B. C. Sales, Coupling of Crystal Structure and Magnetism in the Layered, Ferromagnetic Insulator CrI_3, Chem. Mater. 27, 612 (2015).crx3 W. B. Zhang, Q. Qu, P. Zhu, C. H. Lam, Robust intrinsic ferromagnetism and half semiconductivity in stable two-dimensional single-layer chromium trihalides, J. Mater. Chem. C, 3, 12457 (2015).jacs X. Li, X. Wu, and J. Yang, Half-Metallicity in MnPSe_3 Exfoliated Nanosheet with Carrier Doping, J. Am. Chem. Soc.136, 11065 (2014).niu X. Li, T. Cao, Q. Niu, J. Shi, and J. Feng, Coupling the valley degree of freedom to antiferromagnetic order, Proc. Nat. Ac. Sci. 110, 3738 (2013).mpx3struct1 R. Brec,Review on structural and chemical properties of transition metal phosphorous trisulfides MPS_3,Solid State Ionics 22, 3, (1986).Bheema_PRB B. Chittari, Y. Park, D. Lee, M. Han, A. H. MacDonald, E. Hwang, J.l Jung, Electronic and magnetic properties of single-layer MPX3 metal phosphorous trichalcogenides,Phys. Rev. B. 94, 184428 (2016).espresso P. Giannozzi, S. Baroni, N. Bonini, M. Calandra, R. Car, C. Cavazzoni, D. Ceresoli, G. L. Chiarotti, M. Cococcioni, I. Dabo, A. D. Corso, S. de Gironcoli, S. Fabris, G. Fratesi, R. Gebauer, U. Gerstmann, C. Gougoussis, A. Kokalj, M. Lazzeri, L. Martin-Samos, N. Marzari, F. Mauri, R. Mazzarello, S. Paolini, A. Pasquarello, L. Paulatto, C. Sbraccia, S. Scandolo, G. Sclauzero, A. P. Seitsonen, A. Smogunov, P. Umari, and R. M. Wentzcovitch, QUANTUM ESPRESSO: a modular and open-source software project for quantum simulations of materials, J. Phys.: Cond. Matter. 21, 395502 (2009).GGA J. P. Perdew, K. Burke, and M. Ernzerhof, Generalized Gradient Approximation Made Simple, Phys. Rev. Lett. 77, 3865 (1996).d2grimme S. Grimme,Semiempirical GGA-type density functional constructed with a long-range dispersion correction,J. Comp. Chem. 27, 1787 (2006).LDA J. P. Perdew and A. Zunger, Self-interaction correction to density-functional approximations for many-electron systems, Phys. Rev. B. 23, 5048 (1981).noamarom N. Marom, J. Bernstein, J Garel, A. Tkatchenko, E. Joselevich, L. Kronik, and O. Hod, Stacking and Registry Effects in Layered Materials: The Case of Hexagonal Boron Nitride, Phys. Rev. Lett. 
105, 046801 (2010).SI See the Supplemental Material for the total energy difference, lattice constants, th J-Coupling parameters along with magnetic moments and transition temperatures using DFT-D2, band structures, the associated density of states and the orbital projected density of states (PDOS) calculated for self-consistently converged magnetic configurations. Carrier density dependent magnetic phase transition is calculated using a finite onsite repulsion U=4 eV on the of DFT-D2. We also obtain the heat capacity as a function of temperature through the Metropolis Monte Carlo simulation in a 32×64 superlattice.chaloupka J. Chaloupka, G. Jackeli, and G. Khaliullin, Zigzag Magnetic Order in the Iridium Oxide Na_2IrO_3, Phys. Rev. Lett. 110, 097204 (2013).metropolis N. Metropolis, A. W. Rosenbluth, M. N. Rosenbluth, A. H. Teller, E. Teller, Equation of State Calculations by Fast Computing Machines J. Chem. Phys. 21, 1087 (1953).newman M. E. J. Newman and G. T. Barkema, Monte Carlo Methods in Statistical Physics, Clarendon Press, Oxford (1999).landau D. P. Landauand K. Binder, Monte Carlo Simulations in Statistical Physics, Cambridge University Press, Cambridge (2000)/ohno1 F. Matsukura, Y. Tokura, and Hideo Ohno, Control of magnetism by electric fields, Nat. Nanotech. 10, 209 (2015).ohno2 H. Ohno, D. Chiba, F. Matsukura, T. Omiya, E. Abe, T. Dietl, Y. Ohno, and K. Ohtani, Electric-field control of ferromagnetism, Nature. 408, 944 (2000).endo M. Endo, S. Kanai, S. Ikeda,F. Matsukura and H. Ohno, Electric-field effects on thickness dependent magnetic anisotropy of sputtered MgO/Co_40Fe_40B_20/Ta structures, App. Phys. Lett. 96, 212503(2010).guineabubbles N. Levy, S. A. Burke, K. L. Meaker, M. Panlasigui, A. Zettl, F. Guinea, A. H. Castro Neto, M. F. Crommie, Strain-Induced Pseudo-Magnetic Fields Greater Than 300 Tesla in Graphene Nanobubbles, Science. 329, 544 (2010).jarillogap B. Hunt, J. D. Sanchez-Yamagishi, A. F. Young, M. Yankowitz, B. J. LeRoy, K. Watanabe, T. Taniguchi, P. Moon, M. Koshino, P. Jarillo-Herrero, and R. C. Ashoori,Massive Dirac Fermions and Hofstadter Butterfly in a van der Waals Heterostructure, Science. 340, 1427 (2013).woods C. R. Woods, L. Britnell, A. Eckmann, R. S. Ma, J. C. Lu, H. M. Guo, X. Lin, G. L. Yu, Y. Cao, R. V. Gorbachev, A. V. Kretinin, J. Park, L. A. Ponomarenko, M. I. Katsnelson, Yu. N. Gornostyrev, K. Watanabe, T. Taniguchi, C. Casiraghi, H-J. Gao, A. K. Geim and K. S. Novoselov, Commensurate-incommensurate transition in graphene on hexagonal boron nitride, Nat. Phys. 10, 451 (2014).origingap J. Jung, A. M. DaSilva, A. H. MacDonald, and Shaffique Adam, Origin of band gaps in graphene on hexagonal boron nitride, Nat. Commun. 6 6308 (2015).Supplemental Material In this supplement we present calculation results obtained within DFT-D2+U for the results discussed in the main text within DFT-D2.These comprise the total energy difference, lattice constants, the J-Coupling parameters the magnetic moments, transition temperatures,band structures, the associated density of states and the orbital projected density of states (PDOS) for selected compounds calculated for self-consistently for a different magnetic configurations. We generally use the finite onsite repulsion U=4 eV and larger values for a few select compounds.The temperature dependence of the heat capacity is obtained through the Metropolis Monte Carlo simulation in a 32×64 lattice. [htbp!] 
6.4in3.6in< g r a p h i c s >(Color online)The DFT-D2+U density of states (DOS) for single-layer MAX_3 compounds in their lowest-energy magnetic configurations for M = V, Cr, Mn, Fe, Co and Ni transition-metal atoms with combination of A =Si, Ge, Sn and X = S, Se, Te chalcogen atoms. The plotted DOS were calculated using the triangular structural unit cell, except for the cases of sAFM and zAFM, which are predicted to have a larger periodicity magnetic structure that has triangular unit cell with a doubled lattice constant. The state are grey for AFM configurations, red for states in the FM configurations, and green for the NM phases.
http://arxiv.org/abs/1709.09386v1
{ "authors": [ "Bheema Lingam Chittari", "Dongkyu Lee", "Allan H. MacDonald", "Euyheon Hwang", "Jeil Jung" ], "categories": [ "cond-mat.mes-hall" ], "primary_category": "cond-mat.mes-hall", "published": "20170927084148", "title": "Carrier and strain tunable intrinsic magnetism in two-dimensional MAX$_3$ transition metal chalcogenides" }