text (string, lengths 0 – 2.11M) | id (string, lengths 33 – 34) | metadata (dict)
---|---|---|
On certain orbits of geodesic flow and (a,b)-continued fractions

Manoj Choudhuri, Institute of Infrastructure Technology Research and Management, Near Khokhara Circle, Maninagar (East), Ahmedabad-380026, Gujarat, India. email: [email protected]

In this article, we characterize two kinds of exceptional orbits of the geodesic flow associated with the modular surface in terms of a two-parameter family of continued fraction expansions of endpoints of the lifts to the hyperbolic plane of the corresponding geodesics. As a consequence, we obtain an extension of the Dani correspondence between homogeneous dynamics and Diophantine approximation.

Keywords: Geodesic flow; Modular surface; continued fractions; coding of geodesics. Mathematics Subject Classification: 37A17, 11J70, 53C22.

§ INTRODUCTION

Let ℍ = {z = x+iy : y > 0} be the upper half plane endowed with the hyperbolic metric ds^2 = (dx^2 + dy^2)/y^2. The group PSL(2,ℝ) = SL(2,ℝ)/{± I}, acting by fractional linear transformations (see <cit.>), is the orientation preserving isometry group of ℍ. The discrete group PSL(2,ℤ) = SL(2,ℤ)/{± I}, acting properly discontinuously on ℍ, gives rise to the modular surface M = ℍ/PSL(2,ℤ), which is topologically a sphere with two singularities and one cusp. Let T^1ℍ be the unit tangent bundle of the hyperbolic plane, which is the collection of pairs (z,ζ) with z in ℍ and ζ a tangent vector of norm one in T_zℍ; PSL(2,ℤ) acts on T^1ℍ as well, and the quotient space T^1ℍ/PSL(2,ℤ) can be identified with the unit tangent bundle of M, which we denote by T^1M. We denote (z,ζ) ∈ T^1M by v. Any v ∈ T^1M determines a unique geodesic in M. If we consider the geodesic along with its tangent vector at each point, then it is the orbit of v under the geodesic flow. This orbit is denoted by {g_t v}, where g_t denotes the geodesic flow on T^1M. It is well known that the geodesic flow on T^1M is ergodic with respect to the Liouville measure (see <cit.> for more details and original references). This means in particular that, with respect to the Liouville measure, the orbits of almost all v in T^1M are equidistributed, i.e., if μ denotes the (normalized) Liouville measure on T^1M, then for almost all v ∈ T^1M, (1/T)∫_0^T χ_A(g_t v) dt → μ(A) as T → ∞, for any measurable A ⊂ T^1M, where χ_A denotes the characteristic function of the set A. Apart from these generic orbits there are many interesting orbits of the geodesic flow associated with the modular surface. By the Dani correspondence (see <cit.>, <cit.> for details), we know that badly approximable numbers (see <cit.> for definition) correspond to bounded orbits and rational numbers correspond to divergent orbits. In this article, we are going to characterize two kinds of orbits in terms of their asymptotic rate of time spent in cusp neighbourhoods. One of these kinds of orbits contains the bounded orbits and the other one contains the divergent orbits. These characterizations are given in terms of the continued fraction expansions of certain real numbers associated with those orbits. In order to do so, we use the arithmetic coding of geodesics on the modular surface which originated in the 1924 paper of E. Artin (<cit.>), who proved the existence of a dense geodesic using the classical continued fraction. A more precise description of this machinery using the classical continued fraction can be found in <cit.>.
We do not restrict ourselves only to the classical continued fraction; rather, we consider a two-parameter family of continued fractions and obtain the characterizations in terms of that family, from which the characterizations in terms of the classical continued fraction follow. The two-parameter family of continued fractions we are going to consider in this article is known as the (a,b)-continued fractions, with a, b ∈ ℝ satisfying a technical condition (see the next section for more details about (a,b)-continued fractions). Using these continued fraction expansions of real numbers, S. Katok and I. Ugarcovicci describe in <cit.> a coding of geodesics on the modular surface, which enables one to give a symbolic description of the geodesic flow associated with the modular surface. We use this coding of geodesics, which also relies on another paper (<cit.>) by the same authors, as the base of the arguments used to prove the results of the present article. In this article, we consider (a,b)-continued fractions for (a,b) in a particular subset 𝒫 of ℝ^2, where 𝒫 is given as follows: 𝒫 = {(a,b) ∈ ℝ^2 | -1 ≤ a < 0 < b ≤ 1, b-a ≥ 1}. Note that this excludes the possibilities a < -1 and b > 1, though in the work of Katok and Ugarcovicci (<cit.>, <cit.>) those possibilities were also considered, with -ab ≤ 1. Also let ℰ be the exceptional set discussed in <cit.>, the elements of which do not satisfy the finiteness condition (see the next section for the definition). Let 𝔖 := {(a,b) ∈ 𝒫∖ℰ : a, b have the strong cycle property} (see the next section for the definition of the cycle properties), and 𝒮 = 𝔖 ∪ {(-1,1), (-1/2,1/2)}. Now let ℍ_d := {x+iy ∈ ℍ : y > d}, and let Ω_d ⊂ T^1ℍ be given by Ω_d := ⋃_{ℑ(z)>d} T^1_zℍ, where ℑ(z) denotes the imaginary part of the complex number z and T^1_zℍ denotes the set of unit tangent vectors in T_zℍ. Let π denote both the projections from ℍ to M and from T^1ℍ to T^1M. Also let M_d := π(ℍ_d) and ℳ_d := π(Ω_d). Note that ℳ_d is a typical neighbourhood of the cusp in T^1M. We say that an orbit {g_t v}_{t≥0} visits the cusp with frequency 0 if there exists some d > 1 such that (1/T)∫_0^T χ_{ℳ_d}(g_t v) dt → 0 as T → ∞. On the other hand, an orbit {g_t v}_{t≥0} is said to visit the cusp with frequency 1 if, for all d > 1, (1/T)∫_0^T χ_{ℳ_d}(g_t v) dt → 1 as T → ∞. Now let x ∈ ℝ and let x := [a_0,a_1,...]_{a,b} be its (a,b)-continued fraction expansion. Given ξ > 1 and j ≥ 0, we define the modified partial quotients of the (a,b)-continued fraction expansion of x as follows: a_j^(ξ) = a_j if |a_j| > ξ, and a_j^(ξ) = 1 if |a_j| ≤ ξ. For a given v ∈ T^1M, let γ_v be the corresponding geodesic in M, and let γ̃_v be one of its lifts to the hyperbolic plane. Let x be the attracting end point of γ̃_v. For (a,b) ∈ 𝒮, let the (a,b)-continued fraction expansion of x be given by x = [a_0,a_1,a_2,...]_{a,b}. Also, for ξ > 1, let {a_j^(ξ)}_{j≥0} be the modified sequence of partial quotients as defined above, and let A_N^(ξ) = (1/N)∑_{j=0}^{N-1} log|a_j^(ξ)| and A_N = (1/N)∑_{j=0}^{N-1} log|a_j|. Then (i) the forward orbit {g_t v}_{t≥0} visits the cusp with frequency 0 if and only if A_N^(ξ) → 0 as N → ∞ for some ξ > 1, and (ii) {g_t v}_{t≥0} visits the cusp with frequency 1 if and only if A_N → ∞ as N → ∞. The restriction of the parameters (a,b) to the set 𝒮 ensures that any geodesic in ℍ is PSL(2,ℤ)-equivalent to an (a,b)-reduced geodesic (a notion to be made clear in the next section). This is essential for this article, as we are not looking at a generic set of orbits of the geodesic flow.
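The averages appearing in this characterization are straightforward to evaluate numerically once a stretch of partial quotients is available. The following minimal Python sketch (our own naming, not from the paper; zero quotients, which can occur only for a_0, are simply skipped) computes A_N and A_N^(ξ):

    import math

    def cusp_averages(quotients, xi):
        """Return (A_N, A_N_xi) for the partial quotients a_0, ..., a_{N-1}.

        A_N    = (1/N) sum_j log|a_j|
        A_N_xi = (1/N) sum_j log|a_j^(xi)|, with a_j^(xi) = a_j if |a_j| > xi
                 and 1 otherwise, so quotients with |a_j| <= xi contribute 0.
        """
        N = len(quotients)
        A_N = sum(math.log(abs(a)) for a in quotients if a != 0) / N
        A_N_xi = sum(math.log(abs(a)) for a in quotients if abs(a) > xi) / N
        return A_N, A_N_xi

In the characterization just stated, A_N_xi → 0 for some ξ > 1 corresponds to cusp visits of frequency 0, while A_N → ∞ corresponds to frequency 1.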
It follows from Theorem 7.1 of <cit.> that if a and b have the strong cycle property, then every geodesic in ℍ is PSL(2,ℤ)-equivalent to an (a,b)-reduced geodesic, whereas in <cit.> the same statement was shown to be true for (a,b) = (-1,1), (-1/2,1/2). It is easy to see that the (-1,1)-continued fraction expansion of a real number x is nothing but the classical continued fraction expansion of x with alternating signs (see <cit.> for details). A similar relation holds between the (-1/2,1/2)-continued fraction expansion and the nearest integer continued fraction expansion of a real number. Then the characterizations in Theorem <ref> in terms of the classical and the nearest integer continued fractions follow from the characterizations in terms of the (-1,1) and (-1/2,1/2)-continued fractions respectively. If we consider the algebraic description of the geodesic flow, then Theorem <ref> may be thought of as an extension of the Dani correspondence between homogeneous dynamics and Diophantine approximation. We know that PSL(2,ℝ) = SL(2,ℝ)/{±I} can be identified with T^1ℍ (see <cit.> for details), where the identification is given by g ⟼ g(i,î) for g ∈ PSL(2,ℝ); here î denotes the unit tangent vector based at the point i and pointing upwards. Similarly, SL(2,ℤ)∖SL(2,ℝ) ≃ PSL(2,ℤ)∖PSL(2,ℝ) can be identified with T^1M. The right action of the one-parameter subgroup {a_t := (e^{-t/2}, 0; 0, e^{t/2})} on PSL(2,ℤ)∖PSL(2,ℝ), given by Γg ↦ Γg a_t for g ∈ PSL(2,ℝ), where we denote PSL(2,ℤ) by Γ, corresponds to the geodesic flow on T^1M. Given a real number x, let Γ_x = Γ(1, x; 0, 1). Then the simplest form of the Dani correspondence says that the orbit {Γ_x a_t}_{t≥0} is bounded (relatively compact) in Γ∖PSL(2,ℝ) if and only if x is a badly approximable number. On the other hand, {Γ_x a_t}_{t≥0} is divergent if and only if x is rational. It is well known (see <cit.> for instance) that a real number x is badly approximable if and only if the partial quotients in the classical continued fraction of x are bounded. The same is true for the (a,b)-continued fraction expansion of x with a = -1 and b = 1, because the (-1,1)-continued fraction expansion of x is nothing but the classical continued fraction expansion of x with alternating signs. In Remark 3.3 of <cit.>, it was shown that a number is badly approximable if and only if the partial quotients in its (-1/2,1/2)-continued fraction expansion are bounded. By the same reasoning (with the help of Proposition <ref>), the same statement is true for the (a,b)-continued fraction as well, for (a,b) ∈ 𝔖. So, in the statement of the Dani correspondence, the term badly approximable can be replaced by the partial quotients in the (a,b)-continued fraction expansion of x being bounded, and the rational numbers can be replaced by x having a finite (a,b)-continued fraction expansion. One may then think of Theorem <ref> as an extension of the Dani correspondence stated above. Let E_0 = {x ∈ ℝ : {Γ_x a_t}_{t≥0} visits the cusp with frequency 0} and E_∞ = {x ∈ ℝ : {Γ_x a_t}_{t≥0} visits the cusp with frequency 1}. Then E_0 has Hausdorff dimension 1, since it contains the set of badly approximable numbers, and it was shown by Jarnik in <cit.> that the set of badly approximable numbers has Hausdorff dimension 1. On the other hand, it was shown in <cit.> that if [a_0,a_1,...] (see below for the definition) is the classical continued fraction expansion of x, then the set of those x for which (1/N)∑_1^N log a_j → ∞ as N → ∞ has Hausdorff dimension 1/2.
It then follows that E_∞ has Hausdorff dimension 1/2. The above remark ensures that E_∞ is a bigger set than the set of rational numbers, as the set of rational numbers has Hausdorff dimension 0. Now we show that E_∞ contains some very well approximable numbers. A real number x is said to be very well approximable if there exists ε > 0 such that |x - p/q| < 1/q^{2+ε} holds for infinitely many q ∈ ℕ and p ∈ ℤ. Recall that each real number x has a classical continued fraction expansion x = a_0 + 1/(a_1 + 1/(a_2 + ⋱)) (a_0 ∈ ℤ, a_j ∈ ℕ for j ≥ 1), written as x := [a_0,a_1,a_2,...], with p_j/q_j = [a_0,a_1,...,a_j] denoting the jth convergent (see <cit.> for more details). Now construct a real number x = [a_0,a_1,a_2,...] using the classical continued fraction with the a_j chosen as follows. Fix some ε > 0, choose a_0 ∈ ℤ and a_1 ∈ ℕ arbitrarily, and inductively choose a_{j+1} = [q_j^ε] + 1 for j ≥ 1. Then, as {q_j}_{j≥1} is an increasing sequence, {a_j}_{j≥1} is also an increasing sequence, and consequently (1/N)∑_1^N log a_j → ∞ as N → ∞. Hence x ∈ E_∞. On the other hand, it follows from the construction of x that the sequence of convergents p_j/q_j satisfies the inequality |x - p_j/q_j| < 1/q_j^{2+ε} for all j ≥ 1, showing that x is a very well approximable number. Note that E_∞ cannot contain all very well approximable numbers, as the set of very well approximable numbers has Hausdorff dimension 1 (<cit.>) and E_∞ has Hausdorff dimension 1/2.

Summary of revisions over the previous version: * We have made some rearrangements and modifications in the introduction, which lead to a modification in the exposition of the article. Accordingly, we have chosen a better-suited title and modified the abstract. * In the present version of the article, the set 𝒮 of parameters (a,b), which is an important object in Theorem <ref>, now has a more restricted description compared to the one in the previous version.

§ (A,B)-CONTINUED FRACTIONS AND GEODESIC FLOW

Following S. Katok and I. Ugarcovicci (<cit.>), for (a,b) ∈ 𝒫, the (a,b)-continued fraction expansion of a real number x can be defined using a generalized integer part function: [x]_{a,b} := [x-a] if x < a; 0 if a ≤ x < b; ⌈x-b⌉ if x ≥ b, where ⌈x⌉ := [x] + 1, [x] being the largest integer ≤ x. For (a,b) ∈ 𝒫, every irrational number x can be expressed uniquely as an infinite (a,b)-continued fraction of the form (see <cit.> for details) x = a_0 - 1/(a_1 - 1/(a_2 - ⋱)) (a_j ∈ ℤ, a_j ≠ 0 for j ≥ 1), which we denote by x := [a_0,a_1,a_2,...]_{a,b}. Here x_0 = x, a_0 = [x_0]_{a,b}, and x_j = -1/(x_{j-1} - a_{j-1}), a_j = [x_j]_{a,b} for j ≥ 1; a_j is called the jth partial quotient. The rational number r_j = p_j/q_j = a_0 - 1/(a_1 - 1/(a_2 - ⋱ - 1/a_j)) is called the jth convergent. The sequence {|q_j|} is eventually increasing, and r_j converges to x. As mentioned earlier, a particular case of the (a,b)-continued fraction, viz. the (-1,1)-continued fraction (also called the alternating continued fraction), is closely related to the classical continued fraction. If {a_j}_{j≥0} is the sequence of partial quotients in the classical continued fraction expansion of a real number x, then {(-1)^j a_j}_{j≥0} is the sequence of partial quotients in the (-1,1)-continued fraction expansion of x. A similar relation holds between the nearest integer continued fraction expansion (also known as Hurwitz's continued fraction, introduced by Hurwitz) and the (-1/2,1/2)-continued fraction expansion of any real number. Note that we write the (a,b)-continued fraction expansion of any real number as in (<ref>) using a minus sign, while in the case of the classical or the nearest integer continued fraction expansion it is written using a plus sign.
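Since the expansion algorithm is completely explicit, it is easy to implement. The sketch below (our own function names; floating-point arithmetic, so only reliable for moderately many quotients) implements the generalized integer part and the expansion, and also the classical-continued-fraction construction of the very well approximable number given above.

    import math

    def floor_ab(x, a, b):
        """Generalized integer part [x]_{a,b}; note the convention
        ceil(x) := [x] + 1 used in the text, so the last branch is floor(x-b)+1."""
        if x < a:
            return math.floor(x - a)
        if x < b:
            return 0
        return math.floor(x - b) + 1

    def ab_expansion(x, a, b, n_terms=25):
        """Partial quotients of x = a_0 - 1/(a_1 - 1/(a_2 - ...)) for (a,b) in P."""
        quotients = []
        for _ in range(n_terms):
            a_j = floor_ab(x, a, b)
            quotients.append(a_j)
            rem = x - a_j
            if abs(rem) < 1e-12:      # (numerically) finite expansion: stop
                break
            x = -1.0 / rem            # x_j = -1/(x_{j-1} - a_{j-1})
        return quotients

    # The (-1,1)-expansion is the classical one with alternating signs;
    # for sqrt(2) the classical expansion is [1, 2, 2, 2, ...]:
    assert ab_expansion(math.sqrt(2), -1.0, 1.0, 8) == [1, -2, 2, -2, 2, -2, 2, -2]

    def vwa_quotients(eps=1.0, n=5, a0=0, a1=2):
        """Classical (plus-sign) partial quotients with a_{j+1} = [q_j^eps] + 1,
        as in the construction of the very well approximable number above.
        eps, a0, a1 are arbitrary admissible choices; n is kept small because
        the denominators q_j grow roughly doubly exponentially."""
        quots = [a0, a1]
        p_prev, q_prev = 1, 0            # p_{-1}, q_{-1}
        p, q = a0, 1                     # p_0,  q_0
        p, p_prev = a1 * p + p_prev, p   # advance to the first convergent p_1/q_1
        q, q_prev = a1 * q + q_prev, q
        for _ in range(n):
            a_next = math.floor(q ** eps) + 1
            quots.append(a_next)
            p, p_prev = a_next * p + p_prev, p
            q, q_prev = a_next * q + q_prev, q
        return quots

By construction, the number x = [a_0,a_1,a_2,...] encoded by vwa_quotients satisfies |x - p_j/q_j| < 1/q_j^{2+ε}, so it is very well approximable, while its quotient averages (1/N)∑ log a_j diverge.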
The use of minus sign while writing the (a,b)-continued fraction expansion, presents some advantages which will be clear when we discuss the coding of geodesics using these continued fractions. Let =∪{∞} and f_a,b:→ be defined byf_a,b(x):={[ x+1 ifx<a; -1/ xifa≤ x < b;x-1ifx≥ b. ]. Note that f_a,b is defined using the standard generators T(x)=x+1 andS(x)=- 1/ x of the modular group (2,), and the continued fraction algorithm described above can be obtained using the first return map of f_a,b to the interval [a,b).The main object of study in <cit.> is a two dimensional realization of the natural extension map F_a,b:^2\Δ→^2\Δ, Δ={(x,y)∈^2)|x=y} of f_a,b, which is defined as follows:F_a,b(x,y):={[ (x+1,y+1)ifx<a; (-1/ x,-1/ y) ifa≤ x < b;(x-1,y-1) ifx≥ b. ]. The following theorem is just a restatement of the main result of <cit.> for the restricted set of parameters 𝒫. (<cit.>) There exists a one-dimensional Lebesgue measure zero, uncountable set ℰ containedin {(a,b)∈𝒫:b=a+1}, such that for all (a,b)∈𝒫\ℰ, (i) the map F_a,b has an attractor D_a,b=⋂_n=0^∞F_a,b^n(^2\Δ) on which F_a,b is essentially bijective. (ii) The set D_a,b consists of two (or one in degenerate cases) connected components each having finite rectangular structure, i.e., bounded by non-decreasing step-functions with finitely many steps.(iii) Every point (x,y) of the plane (x≠ y) is mapped to D_a,b after finitely many iterations of F_a,b.In <cit.>, to deduce the above theorem, a crucial role in the arguments used, is played by the orbits of a and b under f_a,b, viz. to a, the upper orbit 𝒪_u(a) (i.e., the orbit of Sa) and the lower orbit 𝒪_l(a) (i.e., the orbit of Ta), and to b, the upper orbit 𝒪_u(b) (i.e., the orbit of T^-1b) and the lower orbit 𝒪_l(b) (i.e., the orbit of Sb). Let us denote the set 𝒫\ℰ by the symbol 𝒮. It was proved in <cit.> that if (a,b)∈𝒮, then f_a,b satisfies the finiteness condition. This means that for both a and b, their upper and lower orbits are either eventually periodic, or they satisfy the cycle property, i.e., they meet forming a cycle, in other words there exist integers k_1,m_1,k_2,m_2≥0 such that f_a,b^m_1(Sa)=f_a,b^k_1(Ta)=c_a(respectivelyf_a,b^m_2(T^-1b)=f_a,b^k_2(Sb)=c_b),where c_a and c_b are the ends of the cycles. If the products of transformations over the upper and lower sides of the cycle of a (respectively b) are equal, a (respectively b) is said to have strong cycle property, otherwise it has weak cycle property. Letℒ_a={[ 𝒪_l(a) if a has no cycle property; lower part of a-cycleif a has strong cycle property; lower part of a-cycle∪{0}if a has weakcycle property, ].𝒰_a={[𝒪_u(a) if a has no cycle property;upper part of a-cycleif a has strong cycle property; upper part of a-cycle∪{0}if a has weakcycle property ]. and ℒ_b, 𝒰_b be defined similarly. Also let ℒ_a,b= ℒ_a∪ℒ_b and 𝒰_a,b=𝒰_a∪𝒰_b. So, f_a,b satisfies the finiteness condition means that both the sets ℒ_a,b and 𝒰_a,b are finite, which is true when(a,b)∈𝒮. In <cit.>, first a set A_a,b, having finite rectangular structure, was constructed (see Theorem 5.5 in <cit.>) using the values in the sets 𝒰_a,b and ℒ_a,b, and then it was shown (Theorem 6.4 in <cit.>) that A_a,b actually coincides with the attractor D_a,b. The upper component of D_a,b is bounded by non-decreasing step functions with values in the set𝒰_a,b and the lower component of D_a,b is bounded by non-decreasing step functions with values in the set ℒ_a,b. Making use of the properties of the map F_a,b and the attractor D_a,b, in a subsequent paper (<cit.>), S. Katok and I. 
Ugarcovicci developed a generalmethod of coding geodesics on the modular surface and gave a symbolic description of the geodesic flow associated with the modular surface. We first recall from<cit.>, the notion of (a,b)-reduced geodesics, which plays a crucial role in determining the cross-section for the geodesic flow needed for coding purposes. A geodesic in $̋ with real endpointsuandw,wbeing the attracting andubeing the repelling endpoints, is called(a,b)-reduced if(u,w)∈Λ_a,b,whereΛ_a,b:=F_a,b(D_a,b∩{a≤ w < b})=S(D_a,b∩{a≤ w < b}).Given any geodesicγ'in$̋, one can obtain an (a,b)-reduced geodesic (2,)-equivalent to γ' by using the reduction property (3rd assertion in Theorem <ref>) of the map F_a,b. More precisely, if γ' is a geodesic which is not (a,b)-reduced and if w'=[a'_0,a'_1,a'_2,...]_a,b is the attracting end point of γ', then there exists some positive integer n such that ST^-a'_n...ST^-a'_1ST^-a'_0(γ') is an (a,b)-reduced geodesic (see <cit.> for details). Now let γ be an (a,b)-reduced geodesic with attracting and repelling endpoint w and u respectively, and [a_0,a_1,a_2,...]_a,b be the (a,b)-continued fraction expansion of w. Using the essential bijectivity of the map F_a,b, one can extend the sequence (a_0,a_1,a_2,...) in the past as well to get a bi-infinite sequence (...,a_-2,a_-1,a_0,a_1,a_2,...), called the coding sequence of γ and written as [γ]_a,b=(...,a_-2,a_-1,a_0,a_1,a_2,...), where a_-1- 1/a_-2- 1/⋱=1/ u(see Section 3 of <cit.> for details). Now we recall from <cit.>, the description of the cross-section. LetC={z∈|̋ |z|=1,Imz≥0} be the upper half of the unit circle and ℱ denote the standard fundamental domain for the action of (2,) on $̋, given by ℱ:={z=x+iy∈|̋|z|≥1, |x|≤1/2}. Using the definition of(a,b)-reduced geodesic it is easy see the following fact. (<cit.>)For (a,b)∈𝒮, every (a,b)-reduced geodesic intersects C.Given an(a,b)-reduced geodesicγwith attracting and repelling endpointswandurespectively, thecross-section point onγis theintersection point ofγwithC. Letϕ:Λ_a,b→T^1$̋ be defined by ϕ(u,w):=(z,ζ), where z∈$̋ is the cross-section point on the geodesicγjoininguandw, andζis the unit vector tangent toγatz. The mapϕis clearly injective and after composing with the Canonical projectionπwe obtain a mapπ∘ϕ:Λ_a,b→ T^1M. LetC_a,b:=π∘ϕ(Λ_a,b)⊂T^1M. ThenC_a,bis a cross-section for the geodesic flow associated with the modular surface (see <cit.> for details). The lift ofC_a,btoT^1$̋ restricted to the unit tangent vectors having base points on the fundamental domain ℱ, can be described as follows: π^-1(C_a,b)∩(⋃_z∈ℱT_z^1)=P∪ Q_1 ∪ Q_2 (see Figure 1), where P consists of unit tangent vectors on the circular boundary of the fundamental region ℱ and pointing inward such that the corresponding geodesic γ on $̋ is(a,b)-reduced;Q_1consists of unit tangent vectors with base points on the right vertical boundary ofℱand pointing inward such that ifγis the geodesic corresponding to one such unit vector, thenTSγis(a,b)-reduced;Q_2consists of unit tangent vectors with base points on the left vertical boundary ofℱand pointing inward such that ifγis the geodesic corresponding to one such unit vector, thenT^-1Sγis(a,b)-reduced. Now letv∈T^1Mandγ_vbe the corresponding geodesic inMandγ̃_vbe an(a,b)-reduced lift of it inside$̋. Also let η:T^1M→ M be the Canonical projection of T^1M onto M. The following theorem from <cit.> provides the base for coding geodesics on the modular surface using (a,b)-continued fractions.(<cit.>) Let γ_v and γ̃_v be as above. 
Then each geodesic segment of γ_v between successive returns toη(C_a,b), while extended to a geodesic, produces an (a,b)-reduced geodesic on $̋, and each(a,b)-reduced geodesic(2,)-equivalent toγ̃_vis obtainedin this way.The first return ofγ_vtoη(C_a,b)corresponds to a left shift of the coding sequence ofγ̃_v. Let{g_tv}be the orbit of the geodesic flow onT^1Mcorresponding to the geodesicγ_v, i.e.,γ_v(t)=η(g_tv)and letγ_v^jbe the segment of the geodesicγ_vcorresponding to the portion of the orbit{g_tv}_t≥0between(j-1)th andjth returns to the cross-sectionC_a,b. We call the segmentγ_v^jthejthexcursion of the geodesicγ_vinto the cusp. Letw=[a_0,a_1,a_2,...]_a,bbe the attracting end point ofγ̃_vandγ̃_vj := ST^-a_j-1...ST^-a_1ST^-a_0(γ̃_v). Then the segment ofγ̃_vjbetweenCanda_j+C, denoted byγ̃_v^jis a lift ofγ_v^jto the hyperbolic plane. Assuming the geodesics to be parameterized by arc length, the time between the(j-1)th and thejth return of{g_tv}to the cross-section, called thejthreturn time, is given byt_j:=h(γ_v^j)=h(γ̃_v^j),wherehstands for the hyperbolic length of the geodesic segment. Also let𝒮':=𝒮\(-1,1). If (a,b)∈𝒮', then C_a,b is contained inside a compact subset of T^1M. The structure of D_a,b is discussed in detail in Theorem 5.5 of <cit.>. D_a,b has two connected components, the lower one we denote by D_a,b^l and the upper one wedenote by D_a,b^u. Both the sets D_a,b^l and D_a,b^u have finite rectangular structure i.e., bounded by non-decreasing step functions with finite number of steps. For D_a,b^l the values of the step function are given by the set ℒ_a,b, and for D_a,b^u the values of the step function are given by the set 𝒰_a,b. The structure of the boundary (see Figure 2 for a typical picture of D_a,b) of D_a,b consists of finite number of horizontal segments at different points of the set ℒ _a,b, called the different levels of D_a,b^l and consecutive levels are joined by vertical segments, where the highest level is y=a+1. D_a,b^u has a similar description with the lowest level being y=b-1. Let x_a^- be the x-coordinate of the vertical segment joining two consecutive levels y_a^- and y_a^+ of D_a,b^l with y_a^-≤ a < y_a^+, and x_a^+ be the x-coordinate of the vertical segment joining two consecutive levels y_- and y_+ with y_-≤ 0 < y_+. Similarly, let x_b^- be the x-coordinate of the vertical segment joining two consecutive levels y_-' and y_+' of D_a,b^uwith y_-'<0≤ y_+', and x_b^+ be the x-coordinate of the vertical segment joining two consecutive levels y_b^- and y_b^+ with y_b^-<b≤ y_b^+. Also let y_l be the level above Sb and next to Sb; y_u be the level below Sa and next to Sa.It follows from these assertions and the definition of Λ_a,b, that a geodesic γ̃_v with attracting and repelling endpoints w and u respectively with w>0, is (a,b)-reduced if and only if (u,w)∈[-1/x_a^-,0)×[-1/ a,∞)⋃(0,-1/x_b^-]×[-1/b-1,∞).On the other hand if w<0, then γ̃_v is (a,b)-reduced if and only if(u,w)∈(0,-1/x_b^+]×(-∞,-1/ b]⋃[-1/x_a^+,0)×(-∞,-1/a+1]. We show that x_a^-, x_a^+>1 and x_b^-, x_b^+<-1.For (a,b)∈𝒮', let m_a and m_b be positive integers such that a≤ T^m_aSTa<a+1 and a≤ T^m_bSb<a+1. Let m_a, m_b≥3, then the the proof of Lemma 5.6 of <cit.> shows that the vertical segment joining Sb and y_l has x-coordinate greater than 1, and the vertical segment joining y_u and Sa has x-coordinate less than -1. Therefore, in these cases we have x_a^-, x_a^+>1 and x_b^-, x_b^+<-1. Now we consider the situation when m_a, m_b≤2. 
Note that m_a can never be 1, for if m_a=1, then a=0 since a>-1, but we have assumed that a<0. So, m_a≥2. Now if either m_a or m_b is 2, then from the explicit cycle description of a and b discussed in <cit.>, we see that there is always one level between y_l and a; similarly there is always one level between b and y_u. As the statement of Lemma 5.6 of <cit.> guarantees that the vertical segment joining Sb and y_l has x-coordinate greater than orequal to 1 and the vertical segment joining y_u and Sa has x-coordinate less than orequal to -1, it follows that x_a^-, x_a^+>1 and x_b^-, x_b^+<-1 in these cases aswell. From the discussion above we have, -1/x_b^-<1 and-1/x_a^->-1. Now let μ_a^+ be the intersection point of the geodesic joining 0 and -1/a, and C; μ_b^+ be the intersection point of the geodesic joining -1/x_b^- and-1/b-1, and C. We chose one of μ_a^+ and μ_b^+, which has y-coordinate less than or equal to the other and denote it by μ_p^+. Also let μ_p^- be the intersection point of C and the vertical geodesic based at the point -1/x_a^-. Then any (a,b)-reduced geodesic γ̃_v having attracting endpoint w>0, intersects the segment joining μ_p^- and μ_p^+ of C (see Figure 3). Consequently the cross-section point for any (a,b)-reduced geodesic having positive attracting endpoint,has y-coordinate uniformly bounded away from 0.The same is true for any (a,b)-reduced geodesic with negative attracting endpoint as well, which can be shown similarly by using the fact that-1/x_b^+<1 and-1/x_a^+>-1. This completes the proof of the proposition. § CUSP EXCURSIONS WITH EXTREME FREQUENCIESIn this section, we prove the main results of this article. The results are about classifying two kinds of forward orbits of geodesic flow apart from the generic ones. This is done by relating the time spent by the orbits in cusp neighbourhoods compared to the total time parameter, and the average growth rate of the partial quotients of the continued fraction expansion of the attracting end points of the corresponding geodesics. It is worth mentioning that there are many interesting results relating cusp excursions of geodesics on hyperbolic 2-orbifolds and Diophantine approximation. For example, see <cit.>, <cit.> and the references given there (As there is a large body of literature around this phenomena, the reference list given here is not complete by any means). In <cit.> and <cit.>, various aspects of cusp excursions of a generic set of geodesics have been studied and analogue of various results from classical Diophantine approximation in the context of Fuchsian groups have been obtained, while restricting to the case of the modular surface these produce new proofs of classical results (see <cit.>, <cit.> for details). For example, it was shown in <cit.> that ford<1and for almost allv∈T^1M, if{γ_v^j_k}is the subsequence of{γ_v^j}which intersectM_d, thenlim_n→∞1/n∑_k=1^n h(M_d∩γ_v^j_k)=π. In this article, we consider a certain class of geodesics apart from the generic ones and look at their behaviour in terms of spending time inside cusp neighbourhoods compared to their length parameter. Letv,γ_v,γ̃_v,w=[a_0,a_1,a_2,...]_a,bbe as in the previous section. The partial quotients of the continued fraction expansion ofwdetermine how much further the orbit{g_tv}of the geodesic flow goes into a typical neighbourhood of the cusp before returning to the cross-section. This particular fact is easier to see when the cross-sectionC_a,bis contained inside a compact set which is the case when(a,b)∈𝒮'. 
Whereas for(a,b)=(-1,1), the cross-sectionC_-1,1is not contained inside a compact set. In this casewe use the formula for return times given by S. Katok and I. Ugarcovicci and some other facts which are particularto the(-1,1)-continued fraction. §.§ (a,b)∈𝒮'Let j be a positive integer.(i) Assume that a_j>0. Then γ̃_vj intersects or does not intersect _̋d accordingly as a_j>2d-a-1/x_b^- or a_j<2d-b-1/x_a^-.(ii) Assume that a_j<0. Then γ̃_vj intersects or does not intersect _̋d accordingly as |a_j|>2d+b+1/x_a^+ or |a_j|<2d+a+1/x_b^+.If a_j>0, then the attracting endpoint w_j of γ̃_vj lies in the interval [a_j+a,a_j+b) and the repelling endpoint u_j is contained in the interval (-1/x_a^-,-1/x_b^-). So, in this case, γ̃_vj lies above the geodesicγ̃_a_j^-, where γ̃_a_j^- is the geodesic joining -1/x_b^- and a_j+a. Also, γ̃_vj liesbelow the geodesic γ̃_a_j^+, where γ̃_a_j^+ is the geodesic joining-1/x_a^- and a_j+b. So, if the radius of γ̃_a_j^+ is less than d, then γ̃_vj does not intersect _̋d; on the other hand if the radius of γ̃_a_j^- is greater than d, then γ̃_vj does intersect _̋d. Now a simple calculation gives the assertion of the lemma in the case a_j>0. If a_j<0, then γ̃_vj lies above the geodesic joining a_j+b and -1/x_a^+, and lies below the geodesic joining a_j+a and -1/x_b^+. Again a simple calculation gives the assertion of the lemma in the case a_j<0.The following two lemmas which are crucial to the arguments to follow, can be proved easily using the fact that the cross-sectionC_a,bis contained inside a compact set inT^1M. The proof of similar statements for the particular case(a,b)=(-1/2, 1/2)is contained in <cit.> (Proposition3.4and Proposition3.5respectively) and the same proofs work for any(a,b)∈𝒮'as well.Let d>1 be such that M_d∩ C_a,b=∅, then ifγ_v^j∩ M_d is nonempty, γ̃_v^j∩_̋d is the onlyconnected component of π^-1(γ_v^j∩ M_d). Let v∈ T^1M, γ_v be the corresponding geodesic in M, and γ̃_v be an (a,b)-reduced lift of γ_v inside $̋. Letw=[a_0,a_1,a_2,...]_a,bbe the attracting end point ofγ̃_vandt_jbe thejth return time for the corresponding orbit{g_tv}of the geodesic flow. Then there exist a constantκ>0such that|t_j-2log |a_j||≤κ, ∀ j≥ 0.The asymptotic estimates for values of binary quadratic forms at integer points were obtained in <cit.> in terms of (-1/2, 1/2)-continued fraction expansion of the coefficients of the quadratic forms, and the (-1/2, 1/2)-continued fraction coding of geodesics on the modular surface was used to obtain the estimates. The facts that the cross-section for geodesic flow corresponding to the (-1/2, 1/2)-continued fraction coding, is contained inside a compact subset of T^1M and the return times can be bounded uniformly by the partial quotients as in (<ref>), were used crucially to obtain those estimates. Since the above two properties hold for (a,b)-continued fraction coding as well for (a,b)∈𝒮', one can obtain similar estimates as in <cit.> for values of binary quadratic forms at integer points in terms of the (a,b)-continued fraction expansions of its coefficients as well.Givend>1, letd^+=2d-b-1/x_a^-,d^-=2d+a+1/x_b^+, and 𝔧_d^N=#(0≤ j<N:eithera_j> d^+ ifa_j>0ora_j<-d^- ifa_j<0). Letd̅_+=2d-a-1/x_b^-,d̅_-=2d+b+1/x_a^+, and𝔧_d̅^N=#(0≤ j<N:eithera_j> d̅_+ifa_j>0ora_j <-d̅_-ifa_j<0). LetS_N=t_1+t_2+...+t_N. 
Also letI_N^d:=1/S_N∫_0^S_Nχ_d(g_tv)dt, I_T^d:=1/T∫_0^Tχ_d(g_tv)dt,whereχ_ddenotes the characteristic function of the neighbourhoodM_dof the cusp andv∈ T^1M.It is evident from Lemma <ref>, Lemma <ref> and Lemma <ref> that thejth excursion of the geodesic goes more and more into the cusp as the value of|a_j|gets bigger and bigger and vice versa. The following proposition uses this fact to characterize those orbits of geodesic flow which visit the cusp with full frequency. It is easy to see that to conclude about the extreme behaviour ofI_T^d, it is enough to considerI_N^d.Let v∈ T^1M, γ_v be the corresponding geodesic on M and γ̃_v be an (a,b)-reduced lift of γ_v in $̋. Letw=[a_0,a_1,a_2,...]_a,bbe the attracting endpoint ofγ̃_v. ThenI_N^d=1/S_N∫_0^S_Nχ_d(g_tv)dt→1asN→∞for alld>1, if and only if, 1/ N(log|a_0|+log|a_2|.......+log|a_N-1|) →∞asN→∞.We enumerate those j for which either a_j>d̅_+ for a_j>0, or a_j<-d̅_- fora_j<0, by the subsequence {j_k}, and by ∑_k=1^𝔧_d̅^Nlog|a_j_k| we mean the sum∑_a_j>d̅_+ ora_j<-d̅_-,0≤ j≤ N-1log|a_j|. On the other hand, we enumerate those j for which a_j≤d̅_+ if a_j>0, or a_j≥-d̅_- if a_j<0, by the subsequence {j_l}, and by ∑_l=1^N-𝔧_d̅^Nlog|a_j_l| we mean the sum ∑_0<a_j≤d̅_+ or-d̅_-≤ a_j<0,0≤ j≤ N-1log|a_j|. Now suppose 1/ N∑_j=0^N-1log|a_j|→∞ as N→∞ which implies by Lemma <ref>, that 1/ N∑_j=1^ N t_j→∞ as N→∞. Let d_a,b>0 be such that M_d_a,b∩ C_a,b=∅. Now for any d>d_a,b, letc_j_k:=h(M_d\γ_v^j_k). ThenI_N^d=1/S_N∫_0^S_Nχ_d(g_tv)dt≥∑_k=1^𝔧_d̅^N(t_j_k-c_j_k)/∑_j=1^N t_j=1-1/N∑_l=1^N-𝔧_d̅^Nt_j_l/1/N∑_j=1^N t_j- 1/N∑_k=1^𝔧_d̅^Nc_j_k/1/N∑_j=1^N t_j,As both the quantities 1/ N∑_l=1^N-𝔧_d̅^Nt_j_l and 1/ N∑_k=1^𝔧_d̅^Nc_j_k are bounded, and 1/ N∑_j=1^N t_j⟶∞ as N→∞, it follows that I_N^d→ 1 as N→∞.To prove the converse statement, we show that if 1/ N∑_j=0^N-1log|a_j|↛∞ as N→∞, then there is some >̣1 such that I_N^d can not go to 1 as N→∞. Now1/ N∑_j=1^ Nlog|a_j|↛∞ as N→∞ means that there is a subsequence {N_s} and 𝔪>0 such that 1/N_s∑_j=1^N_s-1log|a_j| <𝔪 for all s∈ℕ, which again means, by Lemma <ref>, that 1/N_s∑_j=1^N_s t_j<𝔪̃ for some 𝔪̃>0 and for all s∈ℕ. Since1/N𝔧_d̅^N→ 1 as N→∞ for all d>1 implies 1/ N∑_j=1 ^N t_j→∞ as N→∞, which again by Lemma <ref> implies1/ N∑_j=0^N-1log|a_j|→∞ as N→∞, we may assume that there exists some r>0 and d>1 such that (if needed by considering a subsequence of {N_s} and denoting it again by{N_s}) 1/N_s𝔧_d̅^N_s<1-r for all s∈ℕ. NowI_N_s^d≤ 1-1/N_s∑_l=1^ N_s-𝔧_d̅^N_st_j_l/1/N_s∑_j=1^N_s t_j.Since the cross-section point for any (a,b)-reduced geodesic, is uniformly bounded away from the real line, it follows that t_j has a uniform lower bound, i.e., t_j>𝔱 for some 𝔱>0 and all j≥0. Since 1/N_s(N_s-𝔧_d̅^ N_s)>r and 1/N_s∑_j=1 ^N_st_j<𝔪̃ for all s, it follows that I_N_s^d≤ 1-r𝔱/𝔪̃<1, for all s. Hence I_N^d↛ 1 as N→∞, a contradiction.Let us now concentrate on those orbits whose frequency of visiting the cusp is zero. A complete characterization of such orbits is given by the following proposition.If 1/ N(log|a_j_1|+log|a_j_2|.......+ log|a_j_𝔧_d^N|) → 0 as N→∞ for some d>1, then I_N^d'=1/S_N∫_0^S_Nχ_d'(g_tv)dt→ 0 as N→∞ for all d'>d. On the other hand if I_N^d'→ 0 as N→∞ for somed'>1, then 1/ N(log|a_j_1|+log|a_j_2|.......+log|a_j_𝔧_d̅^N|)→ 0asN→∞ for all d>d'.From Lemma <ref>, we have1/ N∑_k=1^𝔧_ d^N2log|a_j_k|-1/ N𝔧_d^N κ≤1/ N∑_k=1^𝔧_d^Nt_j_k≤1/ N∑_k=1^𝔧_d^N2log|a_j_k|+1/N𝔧_d^N κ where κ is as in that Lemma. 
Note that1/ N∑_k=1^𝔧_d^N2log|a_j_k|→ 0 implies 1/ N𝔧_d^N→ 0 as N→∞.Then from (<ref>), we conclude that1/ N(log|a_j_1|+log|a_j_2|.......+ log|a_j_𝔧_d^N|)→ 0is equivalent to 1/ N∑_k=1^𝔧_d^Nt_j_k→ 0 as N→∞.Now for any d'> d, I_N^d'≤ I_N^d=1/S_N∫_0^S_Nχ_d(g_tv)dt≤1/N∑_k=1^𝔧_d^Nt_j_k/1/N∑_j=1^N t_j, which tends to 0 as N→∞ since1/N∑_j=1^ N t_j is bounded below by 𝔱. To prove the converse statement, let us assume that I_N^d'→ 0 as N→∞, and d>d'. Suppose1/ N(log|a_j_1|+log|a_j_2|.......+log|a_j_𝔧_d̅^N|)↛ 0 as N→∞. Then using another version of (<ref>), with d replaced by d̅, there is a subsequence {N_s} and r>0, such that 1/N_s∑_k=1^ 𝔧_d̅^N_st_j_k>rfor all s∈. Note that, as I_N_s^d≤ I_N_s^d'→ 0 when s→∞, we have 1/N_s𝔧_d̅^N_s→ 0 as s→∞. Because if 1/N_s𝔧_d̅^N_s↛ 0 as s→∞, then 1/N_s𝔧_d̅^N_s>r̃ for some r̃>0 and for infinitely many s∈. Then 1/N_s∑_k=1^𝔧_d̅'̅^N_s(t_j_k-c_j_k)>r̃ c_1, for infinitely many s, which in turn implies that I_N_s^d'>r̃c_1 for infinitely many s≥1, where t_j_k-c_j_k>c_1>0. This is a contradiction to the fact that I_N^d'→ 0 as N→∞.Now let h_a,b^d denote the least upper bound of the distances from the cross-section point on C to the horizontal line y=d, for all (a,b)-reduced geodesics. ThenI_N_s^d'≥ I_N_s^d ≥1/N_s∑_k=1^𝔧_d̅^N_s(t_j_k-c_j_k)/1/N_s∑_j=1^N_s t_j≥1/N_s∑_k=1^𝔧_d̅^N_st_j_k-1/N_s𝔧_d̅^N_s2h_a,b^d/1/N_s∑_j=1^N_s t_j.Since 1/N_s𝔧_d̅ ^N_s→ 0 as s→∞, it follows thatthere is some 𝔪>0 such that 1/N_s∑_j=1^N_s t_j <𝔪 for all s∈.Therefore, from (<ref>) and (<ref>), we conclude that there exists some 0<r_1<r, such that I_N_s^d'>r_1/𝔪 for sufficiently large s. Which is a contradiction to the assumption that I_N^d'→ 0 as N→∞. This completes the proof of the proposition.Now the proof of Theorem <ref> for(a,b)∈𝒮'follows from Proposition <ref> and Proposition <ref>.Note that in Proposition <ref> and Proposition <ref>, we have considered an (a,b)-reduced lift of γ_v, whereas in Theorem <ref>, we have consideredany lift of γ_v to $̋. This does not lead to any ambiguity because if we obtain an(a,b)-reduced geodesicγwith attracting end pointw=[a_0,a_1,a_2,...]_a,b, from a geodesicγ'with attracting end pointw'=[a'_0,a'_1, a'_2,...]_a,b, thena_j=a'_j+nfor somen∈. §.§ (a,b)=(-1,1)Now we concentrate on the special case(a,b)=(-1,1). Recall that the coding of geodesics on the modular surface using this particular continued fraction is discussed in detail in <cit.>, where it is called the alternating continued fraction coding. The name alternating continued fraction comes from the fact that the partial quotients of the(-1,1)-continued fraction expansion of a real number has alternate signs. This particular coding procedure does not provide a cross-section contained in a compact subset ofT^1M. Recall form <cit.>, that a geodesic in$̋ is called A-reduced ((-1,1)-reduced with our convention), if its attracting endpoint w and repelling endpoint u satisfy |w|>1 and -1<sgn(w)u<0 respectively. So the cross-section point for an A-reduced geodesic can be as close to the real line as one wants, showing that the cross-section is not contained inside a compact set in T^1M. So the jth return time may not be at a bounded distance from 2log|a_j|. But t_j can be controlled using a couple of preceding and couple of succeeding entries in the sequence of partial quotients. 
We recall from <cit.>, the following formula for the jth return time: t_j=2log |w_j|+log|w_j-u_j|√(w_j^2-1)/w_j^2√(1-u_j^2)-log|w_j+1-u_j+1|√(w_j+1^2-1)/w_j+1^2√(1-u_j+1^2).Now assume that w_j>0, then it follows from the definition of A-reduced geodesics that u_j<0. Since the partial quotients have alternate signs, we also have w_j+1<0 and consequently u_j+1> 0. Then,t_j≤ 2log |w_j|+|log|1-u_j/w_j|/|1-u_j+1/w_j+1||+ 1/2[|log(1-1/w_j)|+|log(1+u_j)|+|log(1+1/w_j+1)|+|log(1-u_j+1)|]+ 1/2|log(1+1/w_j)(1+u_j+1)/(1-u_j)(1-1/w_j+1)|.Now using the assumption that a_j>0 and consequently a_j+1<0, a_j+2>0, it is easy to see that 1-1/w_j≥1-1/1+1/|a_j+1|+δ, where δ is some real number such that 0≤δ≤1. Then it follows that, |log(1-1/w_j)| ≤log|a_j+1|+log3. By a similar reasoning, 1+u_j≥ 1-1/1+1/|a_j-2|+δ^', with 0≤δ^'≤1, and it follows that, |log(1+u_j)|≤log|a_j-2|+log3. Using the continued fraction expansions for w_j+1 and u_j+1, we obtain similar estimates for other quantities in the above inequality involving t_j. The case w_j<0 can be treated similarly and we get the following estimate for the return time t_j:t_j≤ 2log|a_j|+2max{log|a_j+1|+log|a_j-1|+log|a_j+2|+log|a_j-2|}+c,here c is some constant which is independent of j. On the other hand, considering the definition of A-reduced geodesics, and the fact that the length of the geodesic segment joining the point iand k+i is at a bounded distance from 2log |k|, independent of k∈, it is easy to see that t_j≥ 2log|a_j|-c',where c' can be taken as the hyperbolic length of the segment of the unit circle joining the point i and 1/2+√(3)/2i.Also note that in this special case, whenever γ_v^j∩ M_d is non-empty, the number of connected components of π^-1(γ_v^j∩ M_d) can be more than one, in fact it can be at most three. One component is γ̃_v^j∩_̋d; one of the other two may be the segment starting from the cross-section point up to theintersection point of γ̃_v^j with the horocycle H_d, where H_d is the image of the horocycle y=d under T^-1S as shown in Figure 4; the third component may be a similar one coming from near the other end of γ̃_v^j. In Figure 4, the geodesic γ_1 is the geodesic which is tangent to the horizontal line y=d and passes through the intersection point of the vertical line based at -1 and the horocycle H_d. Let h_u^d be the hyperbolic length of the segment of γ_1 joining the pair of points where it cuts the horocycle H_d and where it touches the line y=d. Then h(γ_v^j \ M_d)≤ 2 h_u^d. On the other hand, let h_l^d denote the hyperbolic distance between the points i and the horizontal line y=d. Then if γ_v^j∩ M_d≠ϕ, h(γ_v^j\ M_d)≥ 2 h_l^d. Using these observations, the following two propositions from which the proof of Theorem <ref> follows in the case (a,b)=(-1,1), can be proved by adopting the similar strategies as in the proofs of Proposition <ref> and Proposition <ref> respectively. Let v∈ T^1M, γ_v be the corresponding geodesic on M and γ̃_v be an A-reduced lift of γ_v in $̋. Letw=[a_0,a_1,a_2,...]_(-1,1)be the attracting endpoint ofγ̃_v. ThenI_N^d=1/S_N∫_0^S_Nχ_d(g_tv)dt→ 1asN→∞for alld>1if and only if1/N∑_j=1^N t_j→∞asN→∞.If 1/ N∑_t_j>𝔠, 1≤ j≤ Nt_j→ 0 as N→∞ for some 𝔠>0,then there exists d>1, such that I_N^d'→ 0 as N→∞ for all d'> d. 
On the other hand, if I_N^d → 0 as N → ∞ for some d > 1, then there exists 𝔠 > 0 such that (1/N)∑_{t_j > 𝔠', 1 ≤ j ≤ N} t_j → 0 as N → ∞ for all 𝔠' > 𝔠. Proof of Theorem <ref> in the case of the (-1,1)-continued fraction. It follows easily from (<ref>) and (<ref>) that (1/N)∑_{j=0}^{N-1} log|a_j| → ∞ as N → ∞ is equivalent to (1/N)∑_{j=1}^N t_j → ∞ as N → ∞. Now let 𝔧_d^N = #{1 ≤ j ≤ N : γ_v^j ∩ ℳ_d ≠ ∅}. Then, for d > 1, from (<ref>) we get ∑_{t_j > 10 log d, 1 ≤ j ≤ N} t_j ≤ ∑_{|a_j| > d, -2 ≤ j ≤ N+2} 10 log|a_j| + 𝔧_d^N c. Therefore, if (1/N)∑_{|a_j| > d, 0 ≤ j ≤ N-1} log|a_j| → 0 as N → ∞ for some d > 1, which also implies (1/N)𝔧_d^N → 0 as N → ∞, then it follows from (<ref>) that (1/N)∑_{t_j > d', 1 ≤ j ≤ N} t_j → 0 as N → ∞ for all d' > 10 log d. On the other hand, it follows easily from (<ref>) that if (1/N)∑_{t_j > d, 1 ≤ j ≤ N} t_j → 0 for some d > 0, which also implies (1/N)𝔧_d^N → 0 as N → ∞, then there exists d' > 1 such that (1/N)∑_{|a_j| > d'', 0 ≤ j ≤ N-1} log|a_j| → 0 as N → ∞ for all d'' > e^{d'/2}. With these observations, the proof of Theorem <ref> in the case of the (-1,1)-continued fraction now follows from Proposition <ref> and Proposition <ref>.

§ ACKNOWLEDGEMENTS

The author is thankful to S. G. Dani for suggesting the problem and for his constant help in writing the paper. Thanks are also due to the referee of this version for valuable suggestions which have helped to improve the exposition of the article. The author thanks the Indian Statistical Institute Bangalore, the Harish-Chandra Research Institute Allahabad and the Indian Statistical Institute Kolkata for their hospitality during the author's stay there, which has made this work possible. Financial support from the National Board for Higher Mathematics, India, through an NBHM post-doctoral fellowship is duly acknowledged. | http://arxiv.org/abs/1703.09002v2 | {
"authors": [
"Manoj Choudhuri"
],
"categories": [
"math.DS",
"37A17, 11J70"
],
"primary_category": "math.DS",
"published": "20170327105412",
"title": "On certain orbits of geodesic flow and (a,b)-continued fractions"
} |
Talks presented at the International Workshop on Future Linear Colliders (LCWS16), Morioka, Japan, 5-9 December 2016. C16-12-05.4

Studies of the response of the SiD silicon-tungsten electromagnetic calorimeter (ECal) are presented. Layers of highly granular (13 mm^2 pixels) silicon detectors embedded in thin gaps (∼ 1 mm) between tungsten alloy plates give the SiD ECal the ability to separate electromagnetic showers in a crowded environment. A nine-layer prototype has been built and tested in a 12.1 GeV electron beam at the SLAC National Accelerator Laboratory. This data was simulated with a Geant4 model. Particular attention was given to the separation of nearby incident electrons, which demonstrated a high (98.5%) separation efficiency for two electrons at least 1 cm from each other. The beam test study will be compared to a full SiD detector simulation with a realistic geometry, where the ECal calibration constants must first be established. This work is continuing, as the geometry requires that the calibration constants depend upon energy, angle, and absorber depth. The derivation of these constants is being developed from first principles.

Studies of the Response of the SiD Silicon-Tungsten ECal
Amanda Steinhebel and James Brau, University of Oregon, Center for High Energy Physics, 1274 University of Oregon, Eugene, Oregon 97403-1274 USA

§ INTRODUCTION

SiD is one of two detectors under consideration for use in the International Linear Collider (ILC) <cit.>. Its electromagnetic calorimeter (ECal) is a sampling calorimeter constructed of alternating layers of silicon diodes and DENS-24, a tungsten alloy used as an absorber. The silicon diode pixels individually record charge deposited by particles from the electron-positron collision. Each pixel is individually read out by a KPiX chip <cit.>. This document summarizes work done at the University of Oregon regarding the response of the KPiX readout chip and the geometry of the SiD ECal.

§ KPIX BACKGROUND

Thirty-one 0.3 mm thick silicon layers are created from a tiling of hexagonal wafers, each containing 1024 individual 13 mm^2 pixels. As a photon or electron from the collision passes through the ECal, the tungsten layers produce showering. The silicon layers then measure any charge deposited on them as the shower progresses through the calorimeter. One KPiX readout chip is bump-bonded to the center of each silicon wafer (Fig. <ref>) and is connected via channels to each pixel on the wafer. In this way, measurements from every individual pixel are read out, providing measurements for every 13 mm^2. The use of KPiX allows for thin sampling layers of 1.25 mm. Prototype versions of silicon wafers mounted with KPiX chips were tested at the SLAC National Accelerator Laboratory in 2013 to test the response of the calorimeter <cit.>. The prototype calorimeter consisted of nine repeated alternating layers of silicon wafers and 2.5 mm DENS-24 plates (Fig. <ref>), for a total depth of 5.8 radiation lengths. A 12.1 GeV electron beam was directed through the prototype calorimeter, and the KPiX response was recorded. The alternating silicon/tungsten pattern allowed for testing both silicon-first and tungsten-first setups.

§ TEST BEAM MODELING STUDIES

Before the analysis, the beam test data was cleaned.
This cleaning included the removal of "monster events", or events in which all pixels unrealistically reported a large amount of deposited charge. This phenomenon has since been understood. After the monster events were removed, a large number of low energy events remained from the data set of more than 30,000 events (Fig. <ref>). These are accompanied by peaks at intervals of 150 × 10^-14 C, indicating electron events. The peaks at higher measured charge imply multiple electron events. Many low energy events are soft photon contamination from the electron beam. Setting a higher threshold on the recorded charge would eliminate the consideration of this contamination, but would also neglect low shower-energy electron events. In order to clean out only the contamination, an algorithm was designed to categorize showers. In this way, "photon" showers could be separated from "electron" showers and eliminated. A simple categorization technique is to count how many layers of silicon record hits in a given event. Roughly 45% of events contain hits in only one layer. This type of event is characteristic of soft photon contamination, and can be immediately removed from consideration. A weighting algorithm was developed to further categorize showers. The silicon layers were labeled from 1 → 9, with layer one being the first silicon layer the beam encounters. Then, the ratio R = (∑_h L_h^2 𝒞_h)/(∑_h 𝒞_h) was calculated, where 𝒞_h is the measured charge for a given hit and L_h is the layer number of the hit, summed over all hits h. If for some layer there were no hits, a weight of 5 × 10^-14 C was inserted. This is roughly the charge that a minimum ionizing particle would deposit. Photon events tend to appear early and only in a few layers of the detector, causing R to be small due to the quadratic dependence on layer number. Similarly, electron showers peak later in the detector and drive R up. A cut on R was then applied, and events with R less than the cut value were disregarded as photon contamination. After this procedure, nearly 50% of all events from the data set were removed. The resulting data set (Fig. <ref>) retains low shower-energy electron events while the large photon contamination peak has clearly been removed. A Geant4 simulation was created to model the beam test scenario. The simulation consisted of 8,000 single electron events transversely distributed in the calorimeter to match the beam test data (Fig. <ref>). The collection of single-electron events was then used to create a Poisson distribution of multi-electron events by overlaying multiple single-electron events. In order to simulate inactive pixels observed during the beam test, 10% of the pixels of each layer were randomly removed. The resulting data set is shown in Fig. <ref>. Once all runs were normalized to one hundred events, the simulated and collected data agree well (Fig. <ref>). This agreement holds not only for the total measured charge in each event, but for the total measured charge in each layer of the prototype detector in each event as well (Fig. <ref>).

§ SHOWER SEPARATION EFFICIENCY STUDIES

The high spatial granularity provided by the silicon pixels and KPiX can help distinguish the showers of two nearby particles. Since two-electron events were clearly observed in the beam test prototype run, a study of the prototype's ability to separate showers was done. An algorithm was created to count the number of incident particles detected in each event.
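The inputs and logic of this counter are spelled out in the next paragraph. As an illustration of that logic (local charge maxima confirmed in at least four layers, with maxima in bordering pixels vetoed), a minimal Python sketch, with data structures and names of our own choosing rather than those of the actual analysis code, might read:

    from collections import defaultdict

    def count_incident_particles(event, neighbors, min_layers=4):
        """event: {layer: {pixel_id: charge}}; neighbors: {pixel_id: set of
        bordering pixel_ids on the wafer}. Returns the number of particles tagged."""
        layers_as_max = defaultdict(int)
        for charges in event.values():
            for pix, c in charges.items():
                # local maximum: a hit pixel that no bordering pixel out-measures
                if c > 0 and all(c >= charges.get(nb, 0.0)
                                 for nb in neighbors.get(pix, ())):
                    layers_as_max[pix] += 1
        # pixels that are maxima in enough layers, most frequent first
        candidates = sorted((p for p, n in layers_as_max.items() if n >= min_layers),
                            key=lambda p: layers_as_max[p], reverse=True)
        tagged, vetoed = 0, set()
        for pix in candidates:
            if pix in vetoed:
                continue
            tagged += 1
            vetoed.add(pix)
            vetoed.update(neighbors.get(pix, ()))  # no maxima in bordering pixels
        return tagged

The final veto loop corresponds to the correction, described below, for shower maxima that land near a pixel border.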
The counting algorithm is simple in nature, but robust enough to examine data from both the beam test prototype and the Geant4 simulation. It requires inputs that describe the geometry of the silicon wafer, including which pixels border which other pixels, and the charge measured in each pixel. The algorithm simply examines each layer of each event and determines local maxima of charge deposits. It then compares the position of this maximum against all other layers, and requires that the same pixel location be a local maximum in at least four layers. If this condition is met, then an electron event is counted. In this way, the algorithm can account for multiple electrons occurring within one event, but it also biases against late-forming showers that do not develop fully enough to create four layers with notable maxima. Occasionally, a shower maximum occurred near the border of two pixels. In this case, the recorded local maximum was equally likely to be located in either pixel. This fooled the algorithm into tagging two incident particles though there was truly only one. The counting of more particles than the true number of incident particles is considered "over-counting". The algorithm corrects for this by disallowing the presence of maxima in neighboring pixels. With simulated data, truth information regarding the number of incident electrons is available and the accuracy of the algorithm can be examined. Among simulated two-electron events, the algorithm correctly counted 82.6% of events. 17.3% of two-electron simulated events were under-counted (the algorithm detected fewer incident particles than there really were), meaning that if the algorithm miscounted it was far more likely to under-count than over-count (Fig. <ref>). When events were incorrectly under-counted, the two electrons tended to be less than 1 cm apart. The algorithm counted two-electron events with an average efficiency of 98.5% when the incident electrons were separated by more than 1 cm (Fig. <ref>). The algorithm can also analyze data from the beam test prototype, though truth information is unknown. Events the algorithm tags from the beam test data as "two-electron events" compare appropriately to those that the algorithm tags from the simulated data set (which can also be compared to simulation truth data) (Fig. <ref>). This implies both that the simulation is correctly modeling the system, and that the algorithm can be trusted to identify multi-electron events in prototype data with nearly perfect efficiency, provided that incident electrons are separated by more than 1 cm. The ability to discern multiple electrons incident in similar spatial regions is important for the reconstruction of particles using the ECal information - especially the ability to reconstruct boosted π^0 mesons from their decay products of two photons.

§ ECAL OVERVIEW

In the SiD design, the ECal barrel sits between the vertex tracker and the hadron calorimeter, at an inner radius of 1264 mm from the collision point, with a z extent of 3.53 m. It is made of twelve trapezoidal modules that extend the full z length of the detector, with overlapping ends to avoid projective cracks through the detector, creating a structure that is periodic in increments of π/6 radians (Fig. <ref> shows the view in the xy plane, with the z dimension coming out of the page, and also indicates the angle φ). These trapezoids have a small inner angle of 30^o <cit.>. The region of overlap between two modules spans φ ∈ [(4.03+30n)^o, (15+30n)^o], where n is an integer, n = 0, 1, …, 11.
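Together with the thin overlap band quoted in the next paragraph, these boundaries give a simple 30^o-periodic classification of the azimuth. A small sketch (the region labels are ours):

    def barrel_phi_region(phi_deg):
        """Classify an azimuthal angle (degrees) using the SiD ECal barrel's
        30-degree periodicity; boundary values are those quoted in the text."""
        phi = phi_deg % 30.0
        if 8.786 <= phi <= 10.14:
            return "thin overlap"   # only 2.5 mm tungsten plates along the path
        if 4.03 <= phi <= 15.0:
            return "overlap"        # two modules overlap
        return "single module"

For example, barrel_phi_region(9.3) returns "thin overlap", matching one of the survey points used in the geometry studies below.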
This overlap region covers approximately 30% of the detector. Each module consists of 31 layers of tiled silicon wafers and 30 layers of DENS-24 (as detailed in Section II). The first layer of each module is a silicon tracking layer, followed by twenty iterations of 2.5 mm DENS-24 and a 1.25 mm gap in which the 0.3 mm silicon layer resides. This is followed by ten iterations of 5 mm DENS-24 and the same 1.25 mm silicon-containing gap <cit.>. Therefore, the ECal begins and ends with a sensitive silicon layer. This structure is identically repeated for all twelve modules. The combination of a nontrivial ECal geometry and an unequal division of tungsten absorber throughout each module creates complicated calibration and geometric effects. For example, a smaller subset of the overlap region, where φ ∈ [(8.786+30n)^o, (10.14+30n)^o], contains only thin layers of tungsten absorber, and no thicker 5 mm tungsten layers. This is the "thin overlap region". All the following studies were conducted using the full SiD simulation, SiD_o1_v03. Single photons are directed into the detector at various φ angles incident to the ECal surface (θ = 90^o) and at initial energies of 10 GeV or 100 GeV. Only charge deposited in the silicon layers of the ECal barrel is considered.

§ GEOMETRY STUDIES

The overlapping geometry of the ECal barrel creates various effects that must be taken into account in the calorimeter calibration, including a varied absorber depth and number of traversed sensitive layers that depend on the angle φ (Figs. <ref> and <ref>). To investigate this, 500 events of 100 GeV and 10 GeV photons were run through the full SiD simulation at φ = 0^o, 3.25^o, 7.5^o, 9.3^o, 11.25^o, 15^o, 18.25^o, 26.25^o, and 30^o. This surveys one 30^o period of the detector, with two points in the overlap region (φ = 7.5^o and 11.25^o) and one point in the thin overlap region (φ = 9.3^o). The tracking layer of silicon at the beginning of each module is excluded from these studies. In overlap regions, this layer falls near the middle of the calorimeter and samples showers before they have traversed a full absorber layer. In this sense, the shower is being double-sampled due to the presence of this sensitive layer[This conclusion has led the SiD collaboration to consider designing the modules so that the tracking silicon layer is only present around the inner circumference of the ECal. Reducing the extent of this layer can be a cost-saving method.]. At each φ angle, a histogram was made of the total charge measured in the ECal barrel[Deposits in the ECal endcap were not considered in this study.]. Since the last ten silicon layers follow thicker tungsten layers than the first 20, deposits in these layers are weighted by a factor of two to account for the differing sampling fraction[The true value is 1.98; however, using a factor of 2 agrees to within a few percent.]. Figure <ref> shows an example of 500 events of 100 GeV photons with θ = 90^o and φ = 0^o. The data are fit with a Gaussian, giving an average standard deviation of 2.1% of the mean for the 100 GeV events. From analogous distributions for all φ angles, the mean value from the Gaussian fit was recorded and plotted as a function of φ, with error bars representing the standard deviation. This was done for 100 GeV initial photon energies and 10 GeV initial photon energies, where the mean energies of the 10 GeV runs are scaled up by a factor of ten to compare to the 100 GeV runs (Fig.
The spread of the measured charge distribution remains fairly constant throughout the entire φ range, with standard deviations of the 100 GeV and 10 GeV runs in the range of 2.0%–2.3% and 5.7%–6.6% of the mean, respectively. This holds even in the overlap region of φ∈[4.03^o,15^o], where the showers experience higher sampling rates (see Fig. <ref>). The 10 GeV runs, once scaled up by a factor of ten, have a slightly higher mean than the 100 GeV runs due to lower leakage into the hadron calorimeter (Fig. <ref>). Deposits in the hadron calorimeter notably see an increase in measured charge of more than 60% in the overlap region of the ECal. This is due to the lower total radiation lengths within this region (Fig. <ref>), where the number of radiation lengths decreases from 26 X_0 at normal incidence to a minimum of 23.7 X_0 at φ = (8.786+30n)^o.

These effects are currently under investigation at the University of Oregon, as effort continues to optimize the SiD ECal design from first principles and to formulate calibration constants that include energy- and angular-dependence.

§ ACKNOWLEDGMENTS

We would like to thank Jason Barkeloo, Teddy Hay, and Dylan Mead for their extensive previous work with the electron-counting algorithm and SLAC beam test data. We would also like to thank Dan Protopopescu for providing a geometry driver that correctly created the SiD barrel's overlapping module structure, and Marco Oriunno for providing the photo of the prototype calorimeter used in Fig. <ref>.

[tdr4] T. Behnke, J. Brau et al., "The International Linear Collider Technical Design Report - Volume 4: Detectors," arXiv:1306.6329 [physics.ins-det], https://arxiv.org/abs/1306.6329
[kpix] J. Brau, M. Breidenbach et al., "KPiX - A 1,024 Channel Readout ASIC for the ILC," SLAC-PUB-15285 (2013), 2012 IEEE Nuclear Science Symposium, http://slac.stanford.edu/pubs/slacpubs/15250/slac-pub-15285.pdf
[lcws13] M. Breidenbach et al., "Prototype Silicon-tungsten Ecal with Integrated Electronics: First Look with Test Beam," 2013 Linear Collider Workshop (LCWS), Tokyo, https://agenda.linearcollider.org/event/6000/contributions/27576/ | http://arxiv.org/abs/1703.08605v1 | {
"authors": [
"Amanda Steinhebel",
"James Brau"
],
"categories": [
"physics.ins-det"
],
"primary_category": "physics.ins-det",
"published": "20170324213503",
"title": "Studies of the Response of the SiD Silicon-Tungsten ECal"
} |
L. Iorio
Ministero dell'Istruzione, dell'Università e della Ricerca (M.I.U.R.)-Istruzione. Permanent address for correspondence: Viale Unità di Italia 68, 70125, Bari (BA), Italy

We develop a general approach to analytically calculate the perturbations Δδτ_p of the orbital component of the change δτ_p of the times of arrival of the pulses emitted by a binary pulsar p induced by the post-Keplerian accelerations due to the mass quadrupole Q_2, and the post-Newtonian gravitoelectric (GE) and Lense-Thirring (LT) fields. We apply our results to the so-far still hypothetical scenario involving a pulsar orbiting the Supermassive Black Hole in the Galactic Center at Sgr A^∗. We also evaluate the gravitomagnetic and quadrupolar Shapiro-like propagation delays δτ_prop. By assuming the orbit of the existing S2 main sequence star and a time span as long as its orbital period P_b, we obtain |Δδτ_p^GE|≲ 10^3 s, |Δδτ_p^LT|≲ 0.6 s, |Δδτ_p^Q_2|≲ 0.04 s. Faster (P_b = 5 yr) and more eccentric (e=0.97) orbits would imply net shifts per revolution as large as |Δδτ_p^GE|≲ 10 Ms, |Δδτ_p^LT|≲ 400 s, |Δδτ_p^Q_2|≲ 10^3 s, depending on the other orbital parameters and the initial epoch. For the propagation delays, we have |δτ_prop^LT|≲ 0.02 s, |δτ_prop^Q_2|≲ 1 μs. The results for the mass quadrupole and the Lense-Thirring field depend, among other things, on the spatial orientation of the spin axis of the Black Hole. The expected precision in pulsar timing in Sgr A^∗ is of the order of 100 μs, or, perhaps, even 1-10 μs. Our method is, in principle, neither limited just to some particular orbital configuration nor to the dynamical effects considered in the present study.

keywords: gravitation–celestial mechanics–binaries: general–pulsars: general–stars: black holes

§ INTRODUCTION

In a binary hosting at least one emitting pulsar[See, e.g., <cit.> and references therein.] p, the times of arrival τ_p of the emitted radio pulses change primarily because of the orbital motion about the common center of mass caused by the gravitational tug of the unseen companion c, which can be, in principle, either a main sequence star or an astrophysical compact object like, e.g., another neutron star which does not emit or whose pulses are, for some reason, not detectable, a white dwarf or, perhaps, even a black hole <cit.>. Such a periodic variation δτ_pf can be modeled as the ratio of the projection of the barycentric orbit 𝐫_p of the pulsar p onto the line of sight to the speed of light c <cit.>.
By assuming a coordinate system centered in the binary's center of mass whose reference z-axis points toward the observer along the line of sight in such a way that the reference x, y plane coincides with the plane of the sky, we have δτ_pf = r_z^pc=r_pc=a_p1-e^2c1+e=m_cpsinω + fm_totc1+e.tau In obtaining tau, which is somewhat the analogous of the range in Earth-Moon or Earth-planets studies <cit.>, we used the fact that, to the Keplerian level, the barycentric semimajor axis of the pulsar A isa_p≃m_cm_tota.In a purely Keplerian scenario, there is no net variation Δδτ_p over a full orbital cycle. In this paper, we illustrate a relatively simple and straightforward approach to analytically calculate the impact that several post-Keplerian (pK) features of motion, both Newtonian (quadrupole) and post-Newtonian (1pN static and stationary fields), have on such a key observable. As such, we will analytically calculate the corresponding net time delays per revolution Δδτ_p; the instantaneous shifts Δδτ_pf will be considered as well in order to copewith systems exhibiting very long orbital periods with respect to the time spans usually adopted for data collection. Our strategy has a general validity since, in principle, it can be extended to a wide range of dynamical effects, irrespectively of their physical origin, which mayinclude, e.g., modified models of gravity as well. Furthermore, it is applicable to systems whose constituents may have arbitrary masses and orientations of their spin axes, and orbital configurations. Thus, more realistic sensitivity analyses, aimed to both re-interpreting already performed studies and designing future targeted ones, could be conducted in view of a closer correspondence with which is actually measured. We we will also take into account the Shapiro-like time delays due to the propagation of the electromagnetic waves emitted by the visible pulsar(s) throughout the spacetime deformed by axisymmetric departures from spherical symmetry of the deflecting bodies<cit.>.Our results, which are not intended to replace dedicated, covariance-based real data analyses, being, instead, possible complementary companions,will be applied to the so far putative scenario involving emitting radiopulsars, not yet detected, orbiting the Supermassive Black Hole (SMBH) in the Galactic Center (GC) at Sgr A^∗<cit.>. Moreover, we will perform also quantitative sensitivity analyses on the measurability of frame-dragging and quadrupolar-induced time delays in such a hypothesized system. In principle, our results may be applicable even to anthropogenic binaries like, e.g., those contrived in past concept studies to perform tests of fundamental physics in space <cit.>, or continuously emitting transponders placed on the surface of some moons of larger astronomical bodies.The paper is organized as follows. Section <ref> details the calculational approach. The 1pN Schwarzschild-type gravitoelectric effects are calculated in Section <ref>, while Section <ref> deals with the 1pN gravitomagnetic ones.The impact of the quadrupole mass moment of the SMBH is treated in Section <ref>.Section <ref> summarizes our findings. § OUTLINE OF THE PROPOSED METHOD metodo If the motion of a binary is affected by some relatively small post-Keplerian (pK) acceleration A, either Newtonian or post-Newtonian (pN) in nature,its impact on the projection of the orbit onto the line of sight can be calculated perturbatively as follows. 
<cit.> analytically worked out the instantaneous changes of the radial, transverse and out-of-plane components r_ρ, r_σ, r_νof the positionvector 𝐫, respectively, for the relative motion of a test particle about its primary: they areΔr_ρfrR = rfaΔ af -acos fΔ ef +aesin f√(1-e^2)Δℳf,Δr_σfrT = asin f1 + rfpΔ ef + rfΔΩf+Δωf +a^2rf√(1-e^2)Δℳf,Δr_νfrN = rfsin u Δ If -cos u ΔΩf.In rRrN, the instantaneous changes Δ af, Δ ef, Δ If, ΔΩf, Δωfare to be calculated as Δκf=∫_f_0^fκttf^' df^', κ=a, e, I, Ω, ω,Dk where the time derivatives dκ/dt of the Keplerian orbital elements κ are to be taken from the right-hand-sides of the Gauss equationsa t dadt= 2√(1-e^2)e A_ρ + A_σpr, e t dedt= √(1-e^2) aA_ρ + A_σ + 1e1 - ra, I t dIdt= 1 a √(1 - e^2)A_νra,Ω t dOdt= 1 a √(1 - e^2)A_νra,ω t dodt= -Ω t + √(1-e^2) a e -A_ρ + A_σ1 + rp,evaluated onto theKeplerian ellipse given byr=p1+ecos fKepless and assumed as unperturbed reference trajectory; the same holds also fort f = r^2√(μ p)= 1-e^2^3/21+ecos f^2dtdfKep entering Dk. The case of the mean anomaly ℳ is subtler, and requires more care. Indeed, in the most general case encompassing the possibility that the mean motionis time-dependent because of some physical phenomena, it can be written as[The mean anomaly at epoch is denoted as η by <cit.>, l_0 by <cit.>, and ϵ^' by <cit.>. It is a slow variable in the sense that its time derivative vanishes in the limit A→ 0; cfr. with detadt. ]<cit.>ℳt = η + ∫_t_0^tt^'dt^';Mt the Gauss equation for the variation of the mean anomaly at epoch is[It is connected with the Gauss equation for the variation of the time of passage at pericenter t_p by dη/dt = - dt_p/dt.]<cit.>η t = - 2 aA_ρra -1-e^2 a e -A_ρ + A_σ1 + rpdetadt.Ifis constant, as in the Keplerian case, Mt reduces to the usual form ℳt= η + t-t_0.In general, when a disturbing acceleration is present, the semimajor axis a does vary according to dadt; thus, also the mean motionexperiences a change[We neglect the case μt.]→+Δt which can be calculated in terms of the true anomaly f as Δf=aΔ af= -32a∫_f_0^f a t tf^'df^'Dn by means of dadt and dtdfKep. Depending on the specific perturbation at hand, Dn does not generally vanish. Thus, the total change experienced by the mean anomaly ℳ due to the disturbing acceleration A can be obtained as Δℳf = Δηf + ∫_t_0^tΔt^' dt^',anom whereΔηf = ∫_f_0^fη t tf^' df^',∫_t_0^tΔt^' dt^'inte= -32a∫_f_0^fΔ af^'tf^'df^'.In the literature, the contribution due to inte has been often neglected. An alternative way to compute the perturbation of the mean anomaly with respect to anom implies the use of the mean longitude λ and the longitude of pericenter ϖ. It turns out that[The mean longitude at epoch is denoted as ϵ by <cit.>. It is better suited than η at small inclinations <cit.>.]<cit.>Δℳf = Δϵf - Δϖf + ∫_t_0^tΔt^' dt^',longi where the Gauss equations for the variation of ϖ, ϵ are <cit.> ϖ t =2sin^2I2Ω t +√(1-e^2) a e-A_ρcos f+A_σ1+rpsin f,ϵ t = e^21+√(1-e^2)ϖ t +2√(1-e^2)Ω t-2 aA_ρra.It must be remarked that, depending on the specific perturbing acceleration A at hand, the calculation of inte may turn out to be rather uncomfortable.The instantaneous change experienced by the projection of the binary's relative motion onto the line of sight can be extracted from rRrN by taking thez component Δr_z of the vector Δ𝐫 = Δr_ρ + Δr_σ + Δr_ν expressing the perturbation experienced by the binary's relative position vector 𝐫. 
It isΔr_zfDzf= 1-e^21+eΔ af + +a1+11+e-Δ ef ++ a1-e^21+eΔ If + a1-e^21+eΔωf + +ae + √(1-e^2)Δℳf.It is possible to express the true anomaly as a function of time through the mean anomaly according to <cit.> ft = ℳt + 2∑_s = 1^s_max1s J_sse + ∑_j = 1^j_max1-√(1-e^2)^je^j J_s-jse + J_s+jsesin sℳt, fMt where J_kse is the Bessel function of the first kind of order k and s_max, j_max are some values of the summation indexes s, j adequate for the desired accuracy level. Having at disposal such analytical time series yielding the time-dependent patternof Dzf allows one to easily study some key features of it such as, e.g., its extrema along with the corresponding epochs and the values of some unknown parameters which may enter the disturbing acceleration. The net change per orbit Δr_z can be obtained by calculating Dzf with f=f_0+2, and using Dk and anominte integrated from f_0 to f_0+2.In order to have the change of the times of arrival of the pulses from the binary's pulsar p, Dzf and its orbit averaged expression have to be scaled by m_c m_tot^-1c^-1.In the following, we will look at three pK dynamical effects: the Newtonian deviation from spherical symmetry of the binary's bodies due to their quadrupole mass moments, and the velocity-dependent 1pN static (gravitoelectric) and stationary (gravitomagnetic) accelerations responsible of the time-honored anomalous Mercury's perihelion precession and the Lense-Thirring frame-dragging, respectively. § THE 1PN GRAVITOELECTRIC EFFECT GEeffects Let us start with the static component of the 1pN field which, in the case of our Solar System, yields the formerly anomalous perihelion precession of Mercuryof ϖ̇_☿=42.98 arcsec cty^-1<cit.>.The 1pN gravitoelectric, Schwarzschild-type, acceleration of the relative motion is, in General Relativity, <cit.> A_GE=μc^2 r^24 + 2 ξμr -1 + 3ξ𝐯·𝐯+ 32ξ𝐯·^2 +4 - 2ξ𝐯·𝐯.AGE By projecting AGE onto the radial, transverse, out-of-plane unit vectors , ,, its corresponding components areA_ρ^GEARGE=μ^21 + e ^2 4 - 13ξ e^2+ 43 - ξ + 81 - 2ξe-8 - ξe^2cos 2f 4 c^2 a^31-e^2^3 ,A_σ^GEATGE= 2μ^2 1 + e ^3 2 - ξec^2 a^31 - e^2^3,A_ν^GEANGE= 0.Here, we use the true anomaly f since it turns out computationally more convenient.The resulting net shifts per orbit of the osculating Keplerian orbital elements, obtained by integrating Dk and anominte from f_0 to f_0+2, areΔ a^GEDaeIOGE =Δ e^GE = Δ I^GE = ΔΩ^GE = 0,Δω^GEDomega= Δϖ^GE = 6μc^2 p,Δℳ^GEDMGE= μ4c^2 a1-e^2^28-9+2ξ+4e^4-6+7ξ +e^2-84+76ξ + .+.3e 8-7+3ξ+e^2-24+31ξcos f_0 + . +. 3e^2 4-5+4ξcos 2f_0+eξcos 3 f_0.If, on the one hand, Domega is the well known relativistic pericenter advance per orbit, on the other hand, DMGE represents a novel result which amends several incorrect expressions existing in the literature <cit.>, mainly because based only on detadt. Indeed, it turns out that inte, integrated over an orbital revolution, does not vanish. By numerically calculating DMGE with the physical and orbital parameters of some binary, it can be shown that it agrees with the expression obtainable for Δℳ from Equations (A2.78e) to (A2.78f) by <cit.> in which all the three anomalies f, E, ℳ appear. It should be remarked that DMGE is an exact result in the sense that no a-priori assumptions on e were assumed. It can be shown that, to the zero order in e, DMGE is independent of f_0.We will not explicitly display here the analytical expressions for the instantaneous changes Δκ^GEf,κ=a, e, I, Ω, ω, Δℳ^GEf because of their cumbersomeness, especially as far as the mean anomaly is concerned. 
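As a consistency check on Domega, the per-orbit pericenter advance takes the familiar form Δω^GE = 6πμ/[c²a(1-e²)]; evaluated for Mercury, it must reproduce the centennial precession of 42.98 arcsec quoted at the beginning of this section. A short numerical sketch follows (rounded constants; the values used are ours, not from the text):

```python
import math

# 1pN gravitoelectric pericenter advance per orbit: 6*pi*mu / (c^2 * a * (1 - e^2)),
# evaluated for Mercury as a sanity check.
mu_sun = 1.32712e20     # G * M_sun [m^3 s^-2]
c      = 2.99792458e8   # speed of light [m/s]
a      = 5.7909e10      # Mercury semimajor axis [m]
e      = 0.20563        # Mercury eccentricity
P_orb  = 87.969         # Mercury orbital period [days]

domega = 6.0 * math.pi * mu_sun / (c**2 * a * (1.0 - e**2))   # [rad/orbit]
orbits_per_century = 36525.0 / P_orb                          # Julian century
arcsec = math.degrees(domega * orbits_per_century) * 3600.0
print(f"{arcsec:.2f} arcsec per century")                     # ~42.98
```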
However, Δκ^GEf,κ=a, e, I, Ω, ω can be found in Equations (A2.78b) to (A2.78d) of <cit.>. Equations (A2.78e) to (A2.78f) of <cit.> allow to obtain the instantaneous shift of the mean anomaly, although in terms of the three anomalies f, E, ℳ; instead, our (lengthy) expression contains only the true anomaly f. See also Equations (3.1.102) to (3.1.107) of <cit.>.The net time change per revolution of the pulsar p can be calculated with Dzf together with DaeIOGEDMGE, by obtainingc^3 Gm_cΔδτ_p^GEdtGE= 6cos u_01+ecos f_0+ +e+cos u_04 1-e^2^5/28-9+2ξ+4e^4-6+7ξ +e^2-84+76ξ + .+.3e 8-7+3ξ+e^2-24+31ξcos f_0 + .+. 3e^2 4-5+4ξcos 2f_0 + eξcos 3 f_0.It should be noted that dtGE is independent of the semimajor axis a, depending only on the shape of the orbit through e and its orientation in space through I, ω. Furthermore, dtGE does depend on the initial epoch t_0 through f_0. In the limit e→ 0, dtGE does not vanish, reducing to Δδτ_p^GE≃4 Gm_csin I-3+ξcos u_0c^3 + 𝒪e.In view of its cumbersomeness, we will not display here the explicit expression of Δδτ^GE_pf whose validity was successfully checked by numerically integrating the equations of motion for a fictitious binary system, as shown by Figure <ref>; see also Section <ref>.We will not deal here with the Shapiro-like propagation delay since it was accurately calculated in the literature; see, e.g., <cit.> and references therein.§.§ The pulsar in Sgr A^∗ and the gravitoelectric orbital time delay psrsgrage An interesting, although still observationally unsupported, scenario involves the possibility that radio pulsars orbit the SMBH at the GC in Sgr A^∗; in this case, the unseen companion would be the SMBH itself. Thus, in view of its huge mass, the expected time shift per orbit Δδτ^GE_p would be quite large.By considering a hypothetical pulsar with standard mass m_p = 1.4 M_⊙ and, say, the same orbital parameters of the main sequence star S2 actually orbiting the Galactic SMBH <cit.>, dtGE yields Δδτ^GE_p = 1,722.6948 s.megaGE Figure <ref> displays the temporal pattern of Δδτ_p^GEt for the same hypothetical pulsar calculated both analytically with DzffMt applied to AGE and numerically by integrating its equations of motion: their agreement is remarkable. It turns out that.Δδτ_p^GE|^max= 2520.3557 s,.Δδτ_p^GE|^min= -6119.2341 s. dtGE allows to find the maximum and minimum values of the net orbital change per revolution of the putative pulsar in Sgr A^∗by suitably varyinge, I, ω, f_0 within given ranges. By limiting ourselves to 0 ≤ e ≤ 0.97 for convergence reasons of the optimization algorithm adopted, we haveΔδτ_p^GE_max= 1.74521212562× 10^7 e_max =0.97, I_max = 94.98 deg,..ω_max = 184.56 deg, f_0^max = 357.23 deg, Δδτ_p^GE_min= -1.7568613043× 10^7 e_min =0.97, I_min = 89.95 deg,..ω_min = 359.68 deg, f_0^min = 0.28 deg. Such huge orbital time delays would be accurately detectable, even by assuming a pessimisticpulsar timing precision of just 100 μs<cit.>; more optimistic views point towards precisions of the order of even 1-10 μs<cit.>. § THE 1PN GRAVITOMAGNETIC LENSE-THIRRING EFFECT LTeffects The stationary component of the 1pN field, due to mass-energy currents, is responsible of several aspects of the so-called spin-orbit coupling, or frame-dragging <cit.>.The 1pN gravitomagnetic, Lense-Thirring-type, acceleration affecting the relative orbital motion of a generic binary made of two rotating bodies A, B is <cit.> A_LT = 2Gc^2 r^3 3𝒮·×𝐯+ 𝐯×𝒮.ALT In general, it is ≠, i.e. the angular momenta of the two bodies are usually not aligned. 
Furthermore, they are neither aligned with the orbital angular momentum L, whose unit vector is given by . Finally, also the magnitudes S^A, S^B are, in general, different. The radial, transverse and out-of-plane components of the gravitomagnetic acceleration, obtained by projecting ALT onto the unit vectors , ,, turn out to beA_ρ^LTARLT= 2G1+e^4𝒮·c^2 a^21-e^2^7/2,A_σ^LTATLT= -2eG1+e^3 𝒮·c^2 a^21-e^2^7/2,A_ν^LTANLT= -2G1+e^3c^2 a^2 1-e^2^7/2𝒮·e -2+3e -.-. 12e +4 + 3esinω+2f. By using ARLTANLT in Dk and anominte and integrating them from f_0 to f_0+2, it is possible to straightforwardly calculate the 1pN gravitomagnetic net orbital changes for a generic binary arbitrarily oriented in space: they areΔ a^LTDaeMLT= Δ e^LT=Δℳ^LT = 0,Δ I^LTDILT= 4 G𝒮·c^2a^31-e^2^3/2,ΔΩ^LTDOLT= 4 G I𝒮·c^2a^31-e^2^3/2,Δω^LTDoLT= -4 G𝒮·2 +Ic^2a^31-e^2^3/2.It is interesting to remark that, in the case of ALT, both detadt and inte yield vanishing contributions to Δℳ^LT. For previous calculations based on different approaches and formalisms, see, e.g., <cit.>, and references therein.Dzf, calculated with DaeMLTDoLT, allows to obtain the net orbit-type time change per revolution of the pulsar p as Δδτ_p^LT = 4 G m_cm_tot c^3 a^2√(1-e^2)1+ecos f_0𝒮·sin u_0 -+2cos u_0.DtauLT Note that, contrary to dtGE, DtauLT does depend on the semimajor axis as a^-1/2. As dtGE, also DtauLT depends on f_0. The instantaneous orbital time shift Δδτ_p^LTf turns out to be too unwieldy to be explicitly displayed here. Its validity was successfully checked by numerically integrating the equations of motion for a fictitious binary system, as shown by Figure <ref>; see also Section <ref>.The gravitomagnetic propagation time delay is treated in Section <ref>.§.§ The pulsar in Sgr A^∗ and the Lense-Thirring orbital time delay psrsgralt Let us, now, consider the so-far hypothetical scenario of an emitting radio pulsar orbiting the SMBH in Sgr A^∗<cit.>.It turns out that, in some relevant astronomical and astrophysical binary systems of interest like the one at hand, the (scaled) angular momentum 𝒮^A/B of one of the bodies is usually much smaller than the other one. Let us assume that the pulsar under consideration has the same characteristics of PSR J0737-3039A. By assuming <cit.> I_NS≃ 10^38 kg m^2inerzia for the moment of inertia of a neutron star (NS), the spin of PSR J0737-3039A is S^AspinA = 2.8× 10^40 kg m^2 s^-1. The angular momentum of a NS of mass m_NS can also be expressed in terms of the dimensionless parameter χ_g>0 as <cit.> S^NS = χ_gm_NS^2 Gc.SNS Thus, spinA implies χ_g^AminiA = 0.01755for PSR J0737-3039A. Since for the Galactic SMBH it is <cit.> S_∙ = χ_g M_∙^2 Gc≃ 9.68× 10^54 kg m^2 s^-1sBH with[Let us recall that, for a Kerr BH, it must be χ_g≤ 1.]<cit.>χ_g ≃ 0.6,chig we have 𝒮_p𝒮_∙≃ 6× 10^-9mini.Thus, in this case, the dominant contribution to ALT is due to the pulsar's companion c. As far as the orientation of the SMBH's spin is concerned, we model it as Ŝ^∙ =sin i_∙cosε_∙ + sin i_∙sinε_∙ +cos i_∙ .The angles i_∙, ε_∙ are still poorly constrained <cit.>, so that we prefer to treat them as free parameters by considering their full ranges of variation0ispin ≤ i_∙≤ 180 deg,0espin ≤ε_∙≤ 360 deg. 
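For orientation, sBH and the angular parametrisation of Ŝ^∙ can be made concrete with a few lines of code. The SMBH mass value below is our own assumption (roughly 4.3×10^6 M_⊙, consistent with the quoted S_∙ ≃ 9.68×10^54 kg m² s⁻¹ for χ_g ≃ 0.6); it is not taken from the text.

```python
import numpy as np

# Spin magnitude of the Sgr A* SMBH: S = chi_g * G * M^2 / c (cf. sBH above).
G, c, M_sun = 6.674e-11, 2.998e8, 1.989e30
chi_g = 0.6
M_bh  = 4.3e6 * M_sun            # assumed SMBH mass, ~4.3 million solar masses
S_bh  = chi_g * G * M_bh**2 / c
print(f"S_bh = {S_bh:.2e} kg m^2 s^-1")   # ~9.7e54, cf. 9.68e54 quoted above

def spin_axis(i_deg, eps_deg):
    """Unit spin vector from the polar angles (i, eps) used in the text."""
    i, eps = np.radians([i_deg, eps_deg])
    return np.array([np.sin(i) * np.cos(eps),
                     np.sin(i) * np.sin(eps),
                     np.cos(i)])

# Any point in the full ranges 0 <= i <= 180 deg, 0 <= eps <= 360 deg is allowed;
# this is the orientation that maximises the LT orbital delay found below.
print(spin_axis(20.9, 317.9))
```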
Also in this case, we assume for our putative pulsar the same orbital parameters of, say, the S2 star.By using our analytical expression for Δδτ_p^LTt, calculated with 𝒮^p→ 0 in view of mini, one gets.Δδτ^LT_p|^maxmegaLTmax = 0.6054 s i_∙ = 20.9 deg, ε_∙ = 317.9 deg, .Δδτ^LT_p|^minmegaLTmin = -0.6053 s i_∙= 159.1 deg, ε_∙ = 137.9 deg.within the assumed ranges of variation for the angles i_∙, ε_∙ provided by ispinespin. To this aim, see Figure <ref> which shows both the analytical time series, calculated with DzffMt applied to ALT for i_∙ = 20.9 deg, ε_∙ = 317.9 deg and the numerically integrated one for the same values of the SMBH's spin axis angles: they are in good agreement. The maximum and the minimum values of the propagation delays for the same orbital configuration of the pulsar are displayed in propLTmaxpropLTmin. Let us, now, remove any limitation on the orbital configuration of the pulsar. By restricting ourselves to 0 ≤ e ≤ 0.97 for convergence reasons of the optimization algorithm adopted, we haveΔδτ_p^LT_maxletimax= 411.1823 s ^max = 5 yr, e_max =0.97, I_max = 90 deg,..ω_max = 180 deg, Ω^max = 167.21 deg, f_0^max = 180 deg,.. i_∙^max = 90 deg, ε_∙^max = 257.21 deg, Δδτ_p^LT_minletimin= -293.1933 s ^min = 5 yr, e_min =0.97, I_min = 46.03 deg,..ω_min = 26.31, Ω^min = 14.35 deg, deg, f_0^min = 179.43 deg,.. i_∙^min = 159.76 deg, ε_∙^min = 36.41 deg,where we considered 5 yr≤≤ 16 yr; the ranges of variation assumed for the other parameters I, ω, Ω, f_0, i_∙, ε_∙ are the standard full ones. The values of megaLTmaxmegaLTmin and letimaxletimin should be compared with the expected pulsar timing precision of about 100 μs, or, perhaps, even 1-10 μs<cit.>. §.§ The Lense-Thirring propagation time shift propgLT The gravitomagnetic propagation delay for a binary with relative separation r and angular momentum S of the primary is <cit.> δτ_prop^LTpropLT = -2G S·× r c^4 r (r - r·)==2 G S1+e -++ c^4 p1-.According to Figure 2 of <cit.>, their unit vector K_0agrees with oursince it is directed towards the Earth.About the spin axis of the primary, identified with a BH by <cit.>, our i_∙ coincides with their λ_∙. Instead, their angle η_∙ is reckoned from our unit vector , i.e. it is as if <cit.> set Ω=0. On the contrary, our angle ε_∙ is counted from the reference x direction in the plane of the sky whose unit vector , in general, does not coincide with . Furthermore, <cit.> use the symbol i for the orbital inclination angle, i.e., our I. It is important to notice that, contrary to the orbital time delay of DtauLT, propLT is a short-term effect in the sense that there is no net shift over one orbital revolution. It is also worth noticing that propLT is of order 𝒪c^-4, while DtauLT is of order𝒪c^-3.As far as the putative scenario of the pulsar in the GC is concerned, the emitting neutron star is considered as the source s of the electromagnetic beam delayed by the angular momentum of the SMBH. Thus, by calculating propLT for a S2-type orbit and with S = S^∙, it is possible to obtain.δτ^LT_prop|^maxpropLTmax =0.0195 s i_∙=90 deg, ε_∙=80.1 deg, f=360 deg, .δτ^LT_prop|^minpropLTmin = -0.0213 s i_∙=90 deg, ε_∙=235.5 deg, f=18.5 deg. Such values are one order of magnitude smaller than megaLTmaxmegaLTmin for the orbital time delay calculated with the same orbital configuration of the pulsar. 
It turns out that values similar to those of propLTmaxpropLTmin are obtained by discarding the S2 orbital configuration for the pulsar:.δτ_prop^LT|^maxmaxleti= 0.0105 s ^max = 10.5 yr, e_max =0.97, I_max = 54.98 deg,.. ω_max = 70.09 deg, Ω^max = 70.33 deg, f^max = 69.94 deg,.. i_∙^max = 55.11 deg, ε_∙^max = 72.19 deg, .δτ_prop^LT|^minminleti= -0.0375 s ^min = 5 yr, e_min =0.97, I_min = 0 deg,.. ω_min = 325.24 deg, Ω^min = 80.08 deg, f^min = 0 deg,.. i_∙^min = 90.01 deg, ε_∙^min = 135.3 deg.§ THE QUADRUPOLE-INDUCED EFFECT Qeffects If both the bodies of an arbitrary binary system are axisymmetric about their spin axes Ŝ^A/B, a further non-central relative acceleration arises; it is <cit.>2r^43μ A_J_2 = J_2^AR_A^25^2 - 1 - 2 +A⇆B,AJ2 in which the first even zonal parameter J_2^A/B is dimensionless. In the notation of <cit.>, their J_2^A/B parameter is not dimensionless as ours, being dimensionally an area because it corresponds to our J_2^A/B R_A/B^2. Furthermore, <cit.> introduce an associated dimensional quadrupolar parameter Δ I^A/B, having the dimensions of a moment of inertia, which is connected to our J_2^A/B byJ_2^A/B = Δ I^A/BM_A/B R^2_A/B.Thus, Δ I^A/B corresponds to the dimensional quadrupolar parameter Q_2^A/B customarily adopted when astrophysical compact objects like neutron stars and black holes are considered <cit.>, up to a minus sign, i.e.J_2^A/B=-Q^A/B_2M_A/B R_A/B^2.J2Q Thus, AJ2 can be written as 2r^43G A_Q_2 = 𝒬_2^A1 - 5^2+ 2+A⇆BAQ2. Projecting AJ2 onto the radial, tranvserseand out-of-planeunit vectors, , provides us with2a^41-e^2^43μ1+e^4A_ρ^J_2ARJ2= J_2^AR_A^23 + ^2 - 1+A⇆B,-a^41-e^2^43μ1+e^4A_σ^J_2ATJ2= J_2^AR_A^2 +- ++ A⇆B,-a^41-e^2^43μ1+e^4A_ν^J_2ANJ2= J_2^AR_A^2 + +A⇆B. A straightforward consequence of ARJ2ANJ2 is the calculation of the net quadrupole-induced shifts per revolution of the Keplerian orbital elements by means of Dk and anominte, which turn out to beΔ a^J_2 =Δ e^J_2DaeJ2=0,-p^23Δ I^J_2DIJ2= J_2^AR^2_A+A⇆B,-p^23ΔΩ^J_2DOJ2= J_2^AR^2_A I+A⇆B,2p^23Δω^J_2DoJ2= J_2^AR^2_A2 - 3^2+^2+2 I+ + A⇆B,2a^21-e^2^331+ecos f_0^3Δℳ^J_2DMJ2= J_2^AR^2_A2 - 3^2+^2 - .-. 3^2-^2 cos 2u_0 - 6sin 2u_0+ +A⇆B.Also DMJ2, as DMGE for the Schwarzschild-like 1pN acceleration, is a novel result which amends the incorrect formulaswidely disseminated in the literature <cit.>; indeed, it turns out that, in the case of AJ2,inte does not vanish when integrated over a full orbital revolution. Furthermore, contrary to almost all of the other derivations existing in the literature, DMJ2 is quite general since it holds for a two-body system with generic quadrupole mass moments arbitrarily oriented in space, and characterized by a general orbital configuration. The same remark holds also for DaeJ2DoJ2; cfr. with the corresponding (correct) results by <cit.> in the case of a test particle orbiting an oblate primary. According to Dzf and DaeJ2DMJ2, the net orbit-like time change of the pulsar p after one orbital revolution is2m_totca1-e^21+ecos f_03 m_cΔδτ_p^J_2DtauJ2 = J_2^pR^2_p2 - 3^2 - 3^2cos u_0++2J_2^pR^2_pcos u_0-sin u_0 --J_2^pR^2_p1+ecos f_0^41-e^2^5/2e + cos u_0 ·· -2 + 3^2+^2+ .+.3^2 - ^2 cos 2u_0 + .+ . 6sin 2u_0 +p⇆c.It turns out that DtauJ2 does not vanish in the limit e→ 0. If, on the one hand, DtauJ2 depends of f_0 as dtGE and DtauLT, on the other hand, it depends on the orbital semimajor axis through a^-1. As far as Δδτ^J_2_pf is concerned, it will not be displayed explicitly because it is far too ponderous. 
Also in this case, a numerical integration of the equations of motion for a fictitious binary system, displayed in Figure <ref>, confirmed our analytical result for the temporal pattern of Δδτ^J_2_pf; see also Section <ref>.The propagation time delay is dealt with in Section <ref>.§.§ The pulsar in Sgr A^∗ and the quadrupole-induced orbital time delay orbQBH A rotating NS acquires a non-zero quadrupole moment given by <cit.> Q_2^NS = qm_NS^3 G^2c^4;QNS the absolute values of the dimensionless parameter q<0 ranges from 0.074 to 3.507 for a variety of Equations of State (EOSs) and m_NS = 1.4 M_⊙; cfr. Table 4 of <cit.>. It is interesting to note that <cit.> find the relationq ≃ -αχ_g^2,chiq where the parameter α of the fit performed by <cit.> depends on both the mass of the neutron star and the EOS used. According to Table 7 of <cit.>,it isα^maxamax1.4= 7.4 m_NS=1.4 M_⊙, EOS L,α^minamin1.4= 2.0 m_NS=1.4 M_⊙, EOS Gfor some of the EOSs adopted by <cit.>. In the case of PSR J0737-3039A, QNS yieldsQ_2^AQA = q_A 1.04× 10^37 kg m^2. According to miniA and chiq, it isq_AqA = -α_A 3.1× 10^-4.As a consequence of the no-hair or uniqueness theorems <cit.>, the quadrupole moment of a BH is uniquely determinedby its mass and spin according to <cit.> Q_2^∙ = -S^2_∙c^2 M_∙;qBH in the case of the SMBH in Sgr A^∗, it is (χ_g=0.6)Q_2^∙ = -1.2× 10^56 kg m^2QBH. QA and QBH imply that, in the GC,Q_2^pm_cm_pminiq1= q_p 3.8× 10^43 kg m^2,Q_2^∙miniq2= -1.2× 10^56 kg m^2,so that the quadrupole of a hypothetical emitting neutron star p orbiting the SMBH in Sgr A^∗ can be completely neglected with respect to the quadrupole of the latter one in any practical calculation. According to our analytical expression for Δδτ^Q_2_pt applied to a pulsar moving along a S2-type orbit, in view of the ranges of variation assumed in ispinespin for i_∙, ε_∙, it is.Δδτ^Q_2_p|^maxmegaQ2max = 0.0215 s i_∙ = 146.7 deg, ε_∙ = 148.8 deg, .Δδτ^Q_2_p|^minmegaQ2min = -0.0393 s i_∙ = 30.5 deg, ε_∙ = 331.6 deg.See Figure <ref> which displays the outcome of a numerical integration of the equations of motion of the pulsar considered for i_∙ = 146.7 deg, ε_∙ = 148.8 deg, and the corresponding analytical time series calculated by means of DzffMt applied to AJ2: they agree quite well. The maximum and minimum values of the propagation delay for the same orbital configuration of the pulsar are in maxQBHminQBH. By removing the restrictions on the orbit of the pulsar and assuming the same ranges of variation for , e, I, Ω, ω, f_0, i_∙, ε_∙ as in Section <ref>, apart from 0≤ e≤ 0.96 for convergence issues of the optimization algorithm adopted, it is possible to obtainΔδτ_p^Q_2_maxqu2max= 1392.3665 s ^max = 5 yr, e_max =0.96, I_max = 90 deg,.. ω_max = 180 deg, Ω^max = 17.89 deg, f_0^max = 0 deg,.. i_∙^max = 90 deg, ε_∙^max = 17.89 deg, Δδτ_p^Q_2_minqu2min= -696.1481 s ^min = 5 yr, e_min =0.96, I_min = 90.05 deg,.. ω_min = 180.18 deg, Ω^min = 37.28 deg, f_0^min = 0 deg,.. i_∙^min = 179.75 deg, ε_∙^min = 0.89 deg.The bounds of megaQ2maxmegaQ2min and qu2maxqu2min can be compared with the minimum and maximum values of the gravitomagnetic orbital shift of megaLTminmegaLTmax for an S2-type orbit, which are about one order of magnitude larger than megaQ2maxmegaQ2min, and letimaxletimin, which, instead, are smaller than qu2maxqu2min. 
Furthermore, the values of megaQ2maxmegaQ2min and qu2maxqu2min seem to be potentially measurable in view of the expected pulsar timing precision of about 100 μs, or, perhaps, even 1-10 μs<cit.>.§.§ The quadrupole-induced propagation time shift propQ The propagation delay δτ_prop^J_2 due to the quadrupole mass moment is rather complicated to be analytically calculated; see, e.g., <cit.>. No explicit expressions analogous to the simple one of propLT for frame-dragging exist in the literature. Here, we will obtain an analytical formula for δτ_prop^J_2 which will be applied to the double pulsar and the pulsar-Sgr A^∗ systems. The approach by <cit.> will be adopted by adapting it to the present scenario. In the following, the subscripts d, s, o will denote the deflector, the source, and the observer, respectively. In the case of, say, the double pulsar, d is the pulsar B, while s is the currently visible pulsar A; in the pulsar-Sgr A^∗ scenario, d is the SMBH and s is the hypothetical pulsar p orbiting it. See Figure <ref> for the following vectors connectingd, s, o. The origin O is at the binary's center of mass, so thatr^s_emi≐ r^s(t_emi)is the barycentric position vector of the source s at the time of emission t_emi,r^d_emi≐ r^d(t_emi)is the barycentric position vector of the deflector d att_emi,r_emi =r^s_emi- r^d_emi is the relative position vector of the source s with respect to the deflector d at t_emi. Thus, to the Newtonian order, it isr^d_emi ≃ -m_sm_tot r_emi, r^s_emi ≃m_dm_tot r_emi,where m_s, m_d are the masses of source and deflector, respectively. Furthermore,r_rec^o≐ r^o(t_rec)is the barycentric position vector of the observer o at the time of reception t_rec,r_rec^d= r_rec^o- r_emi^d is the position vector of the observer o at t_rec with respect to the deflector d at t_emi, ands= r_rec^o- r_emi^s= r_rec^d- r_emi is the position vector of the observer o at t_rec with respect to the source s at t_emi. With our conventions for the coordinate axes, it isr_rec^o = D ,where D is the distance of the binary at t_emi from us at t_rec, which is usually much larger than the size r_emi of the binary's orbit. Thus, the following simplificationscan be safely mades=r_rec^o- r_emi^s = D - m_dm_tot r_emi≃ D ,kappas ≃, r_rec^d=r_rec^o- r_emi^d =D + m_sm_tot r_emi≃ D .To order, 𝒪c^-2, the impact parameter vector can be calculated as <cit.>ℓ^d≃× r_emi× = r_emi - r_emi·.In view of kappas, it turns out that ℓ^d, evaluated onto the unperturbed Keplerian ellipse, lies in the plane of the sky, being made of the x, y components ofscaled by Kepless.The coefficients of Equations (A.18) to (A.20) of <cit.> ℰ_d= · r_emir_emi^3 - · r_rec^dr_rec^d^3 ,ℱ_d= ℓ^d1 r_emi^3 - 1r_rec^d^3,𝒱_d= -1ℓ^d^2· r_emir_emi - · r_rec^dr_rec^d,required to calculate δτ_prop^J_2, can be approximated toℰ_dEd ≃· r_emir_emi^3,ℱ_dFd ≃ℓ^dr_emi^3 ,𝒱_dVd ≃ -1ℓ^d^2· r_emir_emi - 1 . The rotation matrix which brings the deflector's symmetry axis fromto a generic position in space characterized by the usual polar angles i, ε is _ij = ( [ -;; - 0; ])It is made of an anticlockwise rotation by an amount i around , followed by an anticlockwise rotation by an amount ε around . 
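Since the entries of the rotation matrix above were lost in extraction, a numerical sketch may help. The particular composition below (an anticlockwise rotation by i about the y-axis followed by one by ε about the z-axis, which carries ẑ into (sin i cos ε, sin i sin ε, cos i)) is our own reading of the description, and it anticipates the body-frame quadrupole diag(1,1,-2) given next.

```python
import numpy as np

def rotated_quadrupole(m_d, R_d, J2_d, i_deg, eps_deg):
    """Trace-free quadrupole tensor of the deflector with its symmetry axis
    tilted by the polar angles (i, eps).

    Body frame: M = (1/3) m R^2 J2 diag(1, 1, -2); the rotation is one
    consistent choice mapping the z-axis to (sin i cos e, sin i sin e, cos i).
    """
    i, eps = np.radians([i_deg, eps_deg])
    Ry = np.array([[ np.cos(i), 0.0, np.sin(i)],
                   [ 0.0,       1.0, 0.0      ],
                   [-np.sin(i), 0.0, np.cos(i)]])
    Rz = np.array([[np.cos(eps), -np.sin(eps), 0.0],
                   [np.sin(eps),  np.cos(eps), 0.0],
                   [0.0,          0.0,         1.0]])
    R = Rz @ Ry
    M_body = (m_d * R_d**2 * J2_d / 3.0) * np.diag([1.0, 1.0, -2.0])
    return R @ M_body @ R.T

M = rotated_quadrupole(1.0, 1.0, 1.0, 30.0, 60.0)
print(np.trace(M))   # ~0: the tensor stays trace-free under rotation
```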
The symmetric trace-free quadrupole momentof the deflector <cit.>^d = 13m_d R_d^2 J_2^d [1,1,-2] ^T,where ∗,∗,∗ denotes the diagonal matrix along with the associated entries, becomes ^d_ij = m_d R_d^2 J_2^d ( [ 13 -cos^2εsin^2 i-12sin^2 i -12;-12sin^2 i -23 +cos^2 ε +cos^2 isin^2ε -12; -12 -1213-cos^2 i; ]).supermatsupermat agrees with Equations (48) to (53) of <cit.> for i→/2-δ, ε→α, where δ, α are the declination and right ascension, respectively. supermat is needed to work out the coefficients of Equations (A.21) to (A.23) of <cit.> β_dbetad ≐^d_ijŝ_i ŝ_j - ^d_ijℓ̂_i^dℓ̂_j^d,γ_d ≐ 2^d_ijŝ_i ℓ̂_j^d,θ_ddeltad ≐^d_ijŝ_i ŝ_j + 2^d_ijℓ̂_i^dℓ̂_j^d,which are the building blocks of the calculation of δτ_prop^J_2 along with EdVd.Finally, the quadrupole-induced propagation delay due to the deflector d can be obtained by evaluating <cit.>δτ_prop^J_2 = Gc^3β_dℰ_d + γ_dℱ_d + θ_d𝒱_d onto the unperturbed Keplerian ellipse by means of EdVd and betaddeltad. The resulting explicit expression is δτ^J_2_prop = Gm_d J_2^d R^2_d1+ecos f^24c^3p^2𝒯I, Ω, u, i, ε,dtq with𝒯urca = 22+sin^2 i1+^2cos 2Ω+sin 2Ωsin 2u - 1 + cos^2 Icos 2Ωsin^2 u -. -. sin 2u +sin 2Ω -2sin^2Ωsin 2u-sin 2Ω1+cos^2 Isin^2 u -- 4sin 2 i -++++ -1 - 3 cos 2 i.It is interesting to note thatdtq does not vanish for circular orbits. Furthermore, from dtq it turns out that there is no net quadrupolar propagation delay per cycle. If the pulsar-SMBH in the GC is considered, the quadrupole Shapiro-type time delay is much smaller than the orbital time shift.Indeed, by using dtqurca for a S2-type orbital configuration, it turns out that.δτ_prop^Q_2|^maxmaxQBH= 0.6 μs i_∙ = 90 deg, ε_∙ = 281.8 deg, f = 339.8 deg,.δτ_prop^Q_2|^minminQBH= -1.1 μs i_∙ = 35.8 deg, ε_∙ = 182.5 deg, f = 349.0 deg.The bounds of maxQBHminQBH should be compared with those of megaQ2maxmegaQ2min, which are about four orders of magnitude larger. Values as little as those of maxQBHminQBH should be hard to be detectable in view of the expected pulsar timing precision, even in the optimistic case of 1-10 μs<cit.>. If the orbital configuration of S2 is abandoned letting I, Ω, ω, f, i_∙, ε_∙ freely vary within their full natural ranges, we get values which can reach the 100 μs level for 0≤ e≤ 0.97, 5 yr≤≤ 16 yr. § SUMMARY AND CONCLUSIONS fine In order to perform sensitivity studies, designing suitable tests and reinterpreting existing data analyses in a way closer to the actual experimental practice in pulsar timing, we devised a method to analytically calculate the shifts Δδτ_p^A experienced by the orbital component of the time changes δτ_p of a binary pulsar p due to some perturbing post-Keplerian accelerations A: Schwarzschild, Lense-Thirring and mass quadrupole. We applied it to the stillhypothetical scenario encompassing an emitting neutron star which orbits the Supermassive Black Hole in Sgr A^∗; its timing precision could reach 100 μs, or, perhaps, even 1-10 μs<cit.>. The main results of the present study are resumed in Table <ref>. By assuming a S2-like orbital configuration and a time span as long as its orbital period, the magnitude of the post-Newtonian Schwarzschild-type gravitoelectric signature can reach |Δδτ_p^GE|≲ 10^3 s. The post-Newtonian Lense-Thirring gravitomagnetic and quadrupolar effects are much smaller, amounting to at most |Δδτ_p^LT|≲ 0.6 s,|Δδτ_p^Q_2|≲ 0.04 s, depending on the orientation of the Black Hole's spin axis. 
Faster = 5 yr and more eccentric e=0.97 orbits would imply net shifts per revolution |Δδτ_p^GE|≲ 10 Ms, |Δδτ_p^LT|≲ 400 s,|Δδτ_p^Q_2|≲ 10^3 s or so, depending on the other orbital parameters and the initial epoch.Among other things, we also explicitly calculated an analytical formula for the Shapiro-like time delay δτ_prop due to the propagation of electromagnetic waves in the field of a spinning oblate body, which we applied to the aforementioned binary system. As far as the Lense-Thirring and the quadrupolar effects are concerned, the Shapiro-like time shifts δτ_prop are, in general, much smaller than the orbital ones Δδτ_p which, contrary to δτ_prop, are cumulative. In the case of the pulsar-Sgr A^∗ scenario, we have, for a S2-type orbit, that the Lense-Thirring propagation delay is as little as |δτ_prop^LT|≲ 0.02 s, while the quadrupolar one is of the order of |δτ_prop^Q_2|≲ 1 μs, both depending on the spin orientation of the Black Hole. Removing the limitation to the S2 orbital configuration yields essentially similar values for δτ_prop^LT, δτ_prop^Q_2, even for highly eccentric and faster orbits.Finally, we remark that our approach is general enough to be extended to arbitrary orbital geometries and symmetry axis orientations of the binary's bodies, and to whatsoever disturbing accelerations. As such, it can be applied to other binary systems as probes for, say, modified models of gravity. In principle, also man-made binaries could be considered. § ACKNOWLEDGEMENTSI would like to thank an attentive referee for her/his precious critical remarks § NOTATIONS AND DEFINITIONS appen Here, some basic notations and definitions used in the text are presented <cit.> G: Newtonian constant of gravitationc: speed of light in vacuumê_z: unit vector directed along the line of sight towards the observer. 
ê_x, ê_y: unit vectors spanning the plane of the sky.: mass of the body A: mass of the body Bm_p: mass of the pulsar pm_c: mass of the unseen companion cm_tot≐ +: total mass of the binaryμ≐ Gm_tot: gravitational parameter of the binaryξ≐ m_tot^-2: dimensionless mass parameter of the binaryS: magnitude of the angular momentum of any of the binary's components𝒮^A/B≐1+34M_B/AM_A/BS^A/B: magnitude of the scaled angular momentum of any of the binary's componentŜ: unit vector of the spin axis of any of the binary's componentsi, ε: spherical angles determining the spatial orientation of Ŝ; i=90 deg implies that the latter lies in the plane of the sky𝒮≐+: sum of the scaled angular momenta of the binaryχ_g: dimensionless angular momentum parameter of a Kerr black holeR: equatorial radius of any of the binary's componentsJ_2: dimensionless quadrupole mass moment of any of the binary's componentsQ_2: dimensional quadrupole mass moment of any of the binary's components𝒬_2^A/B≐1+M_B/AM_A/BQ_2^A/B: scaled dimensional quadrupole mass moment of any of the binary's components𝐫: relative position vector of the binary's orbit𝐯: relative velocity vector of the binary's orbita:semimajor axis of the binary's relative orbit≐√(μ a^-3): Keplerian mean motion= 2^-1: Keplerian orbital perioda_A= M^-1_tot a: semimajor axis of the barycentric orbit of the binary's visible component Ae:eccentricityp≐ a(1-e^2):semilatus rectumI:inclination of the orbital planeΩ:longitude of the ascending nodeω:argument of pericenterϖ≐Ω+ω: longitude of pericentert_p: time of periastron passaget_0: reference epochℳ≐t - t_p: mean anomalyη≐t_0-t_p: mean anomaly at epochλ≐ϖ + ℳ: mean longitudeϵ: mean longitude at epochf:true anomalyu≐ω + f:argument of latitudel̂≐, , 0: unit vector directed along the line of the nodes toward the ascending nodem̂≐-, , : unit vector directed transversely to the line of the nodes in the orbital planer: magnitude of the binary's relative position vector≐𝐫 r^-1=l̂cos u + m̂sin u: radial unit vector≐, -, : unit vector of the orbital angular momentum≐×: transverse unit vectorA: disturbing accelerationA_ρ= A·: radial component of A A_σ= A·: transverse component of A A_ν= A·: normal component of A δτ_p: periodic variation of the time of arrivals of the pulses from the pulsar p due to its barycentric orbital motion§ TABLES AND FIGURES | http://arxiv.org/abs/1703.09049v2 | {
"authors": [
"Lorenzo Iorio"
],
"categories": [
"gr-qc",
"astro-ph.HE",
"physics.space-ph"
],
"primary_category": "gr-qc",
"published": "20170327131729",
"title": "Post-Keplerian perturbations of the orbital time shift in binary pulsars: an analytical formulation with applications to the Galactic Center"
} |
Approaching Confinement Structure for Light Quarks in a Holographic Soft Wall QCD Model
Meng-Wei Li^a, Yi Yang^b and Pei-Hung Yuan^a

In this paper, we present a transfer learning approach for music classification and regression tasks. We propose to use a pre-trained convnet feature, a concatenated feature vector using the activations of feature maps of multiple layers in a trained convolutional network. We show how this convnet feature can serve as a general-purpose music representation. In the experiments, a convnet is trained for music tagging and then transferred to other music-related classification and regression tasks. The convnet feature outperforms the baseline MFCC feature in all the considered tasks and several previous approaches that aggregate MFCCs as well as low- and high-level music features.

§ INTRODUCTION

In the field of machine learning, transfer learning is often defined as re-using parameters that are trained on a source task for a target task, aiming to transfer knowledge between the domains. A common motivation for transfer learning is the lack of sufficient training data in the target task. When using a neural network, by transferring pre-trained weights, the number of trainable parameters in the target-task model can be significantly reduced, enabling effective learning with a smaller dataset.

A popular example of transfer learning is semantic image segmentation in computer vision, where the network utilises rich information, such as basic shapes or prototypical templates of objects, that was captured when trained for image classification <cit.>. Another example is pre-trained word embeddings in natural language processing. Word embedding, a vector representation of a word, can be trained on large datasets such as Wikipedia <cit.> and adopted to other tasks such as sentiment analysis <cit.>.

There have been several works on transfer learning in Music Information Retrieval (MIR). Hamel et al. proposed to directly learn music features using linear embedding <cit.> of mel-spectrogram representations and genre/similarity/tag labels <cit.>. Oord et al. outline a large-scale transfer learning approach, where a multi-layer perceptron is combined with the spherical K-means algorithm <cit.> trained on tags and play-count data <cit.>. After training, the weights are transferred to perform genre classification and auto-tagging with smaller datasets. In music recommendation, Choi et al. used the weights of a convolutional neural network for feature extraction in playlist generation <cit.>, while Liang et al. used a multi-layer perceptron for feature extraction in content-aware collaborative filtering <cit.>.

§ TRANSFER LEARNING FOR MUSIC

In this section, our proposed transfer learning approach is described. A convolutional neural network (convnet) is designed and trained for a source task, and then the network with trained weights is used as a feature extractor for target tasks. The schematic of the proposed approach is illustrated in Figure <ref>.

§.§ Convolutional Neural Networks for Music Tagging

We choose music tagging as a source task because i) large training data is available and ii) its rich label set covers various aspects of music, e.g., genre, mood, era, and instrumentation. In the source task, a mel-spectrogram (X), a two-dimensional representation of a music signal, is used as the input to the convnet.
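For concreteness, the input computation can be sketched with librosa, which is also used later for the baseline features. Apart from the 96 mel bins and the decibel scaling stated below, the file name, sampling rate, and FFT/hop sizes are illustrative assumptions, not values from the text.

```python
import librosa

# Log-amplitude mel-spectrogram input for the tagging convnet (sketch).
y, sr = librosa.load("example_preview.mp3", sr=12000, duration=29.0)
S = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=512, hop_length=256,
                                    n_mels=96)
X = librosa.power_to_db(S)   # decibel-scaled magnitude, shape (96, n_frames)
print(X.shape)
```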
The mel-spectrogram is selected since it is psychologically relevant and computationally efficient. It provides a mel-scaled frequency representation which is an effective approximation of human auditory perception <cit.> and typically involves compressing the frequency axis of short-time Fourier transform representation (e.g., 257/513/1025 frequency bins to 64/96/128 Mel-frequency bins). In our study, the number of mel-bins is set to 96 and the magnitude of mel-spectrogram is mapped to decibel scale (log_10X), following <cit.> since it is also shown to be crucial in <cit.>.In the proposed system, there are five layers of convolutional and sub-sampling in the convnet as shown in Figure <ref>.This convnet structure with 2-dimensional 3×3 kernels and 2-dimensional convolution, which is often called Vggnet <cit.>, is expected to learn hierarchical time-frequency patterns. This structure was originally proposed for visual image classification and has been found to be effective and efficient in music classification[For more recent information on kernel shapes for music classification, please see <cit.>.] <cit.>.§.§ Representation TransferIn this section, we explain how features are extracted from a pre-trained convolutional network. In the remainder of the paper, this feature is referred to as pre-trained convnet feature, or simply convnet feature. It is already well understood how deep convnets learn hierarchical features in visual image classification <cit.>. By convolution operations in the forward path, lower-level features are used to construct higher-level features. Subsampling layers reduce the size of the feature maps while adding local invariance. In a deeper layer, as a result, the features become more invariant to (scaling/location) distortions and more relevant to the target task. This type of hierarchy also exists when a convnet is trained for a music-related task. Visualisation and sonification of convnet features for music genre classification has shown the different levels of hierarchy in convolutional layers <cit.>, <cit.>.Such a hierarchy serves as a motivation for the proposed transfer learning. Relying solely on the last hidden layer may not maximally extract the knowledge from a pre-trained network. For example, low-level information such as tempo, pitch, (local) harmony or envelop can be captured in early layers, but may not be preserved in deeper layers due to the constraints that are introduced by the network structure: aggregating local information by discarding less-relevant information in subsampling. For the same reason, deep scattering networks <cit.> and a convnet for music tagging introduced in <cit.> use multi-layer representations.Based on this insight, we propose to use not only the activations of the final hidden layer but also the activations of (up to) all intermediate layers to find the most effective representation for each task. The final feature is generated by concatenating these features as demonstrated in Figure <ref>, where all the five layers are concatenated to serve as an example. Given five layers, there are ∑_n=1^5_5 C_n=31 strategies of layer-wise combination. In our experiment, we perform a nearly exhaustive search and report all results. We designate each strategy by the indices of layers employed. 
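The strategy space and naming scheme are easy to make concrete: every non-empty subset of the five layers is a strategy, named by its concatenated layer indices. A minimal sketch:

```python
from itertools import combinations

# Enumerate the layer-wise combination strategies: every non-empty subset of
# the five layers, sum_{n=1..5} C(5, n) = 31 in total.
layers = [1, 2, 3, 4, 5]
strategies = [''.join(map(str, c))
              for n in range(1, 6)
              for c in combinations(layers, n)]
print(len(strategies))    # 31
print(strategies[:8])     # ['1', '2', '3', '4', '5', '12', '13', '14']
```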
For example, a strategy named `135' refers to using a 32 × 3 = 96-dimensional feature vector that concatenates the first, third, and fifth layer convnet features. During the transfer, average-pooling is used for the 1st–4th layers to reduce the size of the feature maps to 1×1, as illustrated in Figure <ref>. Averaging is chosen instead of max-pooling because it is more suitable for summarising the global statistics of large regions, as done in the last layer in <cit.>. Max-pooling is often more suitable for capturing the existence of certain patterns, usually in small and local regions[Since the average is affected by zero-padding, which is applied to signals that are shorter than 29 seconds, those signals are repeated to create 29-second signals. This only happens in Tasks 5 and 6 in the experiment.].

Lastly, there have been works suggesting that random-weights (deep) neural networks, including deep convnets, can work well as feature extractors <cit.> <cit.> (a similar, though not identical, approach is transferring knowledge from an unrelated domain, e.g., visual image recognition, to a music task <cit.>). We report such results as well and denote them as random convnet features. Assessing the performance of random convnet features helps to clarify the contribution of the pre-trained knowledge transfer versus the contributions of the convnet structure and the nonlinear high-dimensional transformation.

§.§ Classifiers and Regressors of Target Tasks

Variants of support vector machines (SVMs) <cit.> are used as classifiers and regressors. SVMs work efficiently in target tasks with small training sets, and outperformed K-nearest neighbours in our work for all the tasks in a preliminary experiment. Since there are many works that use hand-written features and SVMs, using SVMs enables us to focus on comparing the performances of features.

§ PREPARATION

§.§ Source Task: Music Tagging

In the source task, 244,224 preview clips of the Million Song Dataset <cit.> are used (201,680/12,605/25,940 for the training/validation/test sets, respectively) with the top-50 last.fm tags including genres, eras, instrumentations, and moods. Mel-spectrograms are extracted from music signals in real-time on the GPU using Kapre <cit.>. Binary cross-entropy is used as the loss function during training. The ADAM optimisation algorithm <cit.> is used for accelerating stochastic gradient descent. The convnet achieves a 0.849 AUC-ROC score (Area Under Curve - Receiver Operating Characteristic) on the test set. We use the Keras <cit.> and Theano <cit.> frameworks in our implementation.

§.§ Target Tasks

Six datasets are selected to be used in six target tasks. They are summarised in Table <ref>.
* Task 1: The Extended Ballroom dataset consists of specific Ballroom dance sub-genres.
* Task 2: The Gtzan genre dataset has been extremely popular, although some flaws have been found <cit.>.
* Task 3: The dataset size is smaller than the others by an order of magnitude.
* Task 4: Emotion prediction on the arousal-valence plane. We evaluate arousal and valence separately. We trim and use the first 29 seconds of the 45-second signals.
* Task 5: Excerpts are subsegments from tracks with binary labels (`vocal' and `non-vocal'). Many of them are shorter than 29s. This dataset is provided for benchmarking frame-based vocal detection, while we use it as a pre-segmented classification task, which may be easier than the original task.
* Task 6: This is a non-musical task. For example, the classes include air conditioner, car horn, and dog bark.
All excerpts are shorter than 4 seconds.

§.§ Baseline Feature and Random Convnet Feature

As a baseline feature, the means and standard deviations of 20 Mel-Frequency Cepstral Coefficients (MFCCs) and their first- and second-order derivatives are used. In this paper, this baseline feature is called MFCCs or MFCC vectors. MFCC is chosen since it has been adopted in many music information retrieval tasks and is known to provide a robust representation. Librosa <cit.> is used for MFCC extraction and audio processing. The random convnet feature is extracted using the convnet structure of the source task, after random weight initialisation with a normal distribution <cit.>, but without any training.

§ EXPERIMENTS

§.§ Configurations

For Tasks 1-4, the experiments are done with 10-fold cross-validation using stratified splits. For Task 5, pre-defined training/validation/test sets are used. The experiment on Task 6 is done with 10-fold cross-validation without replacement to prevent using sub-segments from the same recordings in training and validation. The SVM parameters are optimised using grid-search based on the validation results. The kernel type, the bandwidth of the radial basis function, and the penalty parameter are selected from the ranges below:
* Kernel type: [linear, radial]
* Bandwidth γ in radial basis function: [1/2^3, 1/2^5, 1/2^7, 1/2^9, 1/2^11, 1/2^13, 1/N_f]
* Penalty parameter C: [0.1, 2.0, 8.0, 32.0]
A radial basis function is exp(-γ‖x - x'‖^2), where γ and N_f refer to the radial kernel bandwidth and the dimensionality of the feature vector, respectively. With larger C, the penalty (or regularisation) parameter, the loss function gives more penalty to misclassified items, and vice versa. We use Scikit-learn <cit.> for these target tasks. The code for the data preparation, experiments, and visualisation is available on GitHub[<https://github.com/keunwoochoi/transfer_learning_music>].

§.§ Results and Discussion

Figure <ref> shows a summary of the results: the scores of i) the best performing convnet feature, ii) the `12345'[Again, `12345' refers to the convnet feature concatenated from the 1st–5th layers. For another example, `135' means concatenating the features from the first, third, and fifth layers.] convnet feature concatenated with MFCCs, iii) the MFCC feature, and iv) state-of-the-art algorithms for all the tasks. In all six tasks, the majority of convnet features outperform the baseline feature. Concatenating MFCCs with the `12345' convnet feature usually does not show improvement over a pure convnet feature except in Task 6, audio event classification. Although the reported state of the art is typically better, almost all such methods rely on musical knowledge and hand-crafted features, yet our features perform competitively. An in-depth look at each task is therefore useful to provide insight.

In the following subsections, the details of each task are discussed, with more results presented from (almost) exhaustive combinations of convnet features as well as random convnet features at all layers. For example, in Figure <ref>, the scores of 28 different convnet feature combinations are shown with blue bars. The narrow, grey bars next to the blue bars indicate the scores of random convnet features. The other three bars on the right represent the scores of the `12345' + MFCC feature, the MFCC feature, and the reported state-of-the-art methods, respectively.
§.§ Results and Discussion Figure <ref> shows a summary of the results. It compares the scores of i) the best performing convnet feature, ii) the `12345'[Again, `12345' refers to the convnet feature concatenated from the 1st–5th layers. For another example, `135' means concatenating the features from the first, third, and fifth layers.] convnet feature concatenated with MFCCs, iii) the MFCC feature, and iv) state-of-the-art algorithms for all the tasks.In all six tasks, the majority of convnet features outperform the baseline feature. Concatenating MFCCs with the `12345' convnet feature usually does not show improvement over a pure convnet feature, except in Task 6, audio event classification. Although the reported state-of-the-art is typically better, almost all of those methods rely on musical knowledge and hand-crafted features, yet our features perform competitively. An in-depth look at each task is therefore useful to provide insight.In the following subsections, the details of each task are discussed, with more results presented from (almost) exhaustive combinations of convnet features as well as random convnet features at all layers. For example, in Figure <ref>, the scores of 28 different convnet feature combinations are shown with blue bars. The narrow grey bars next to the blue bars indicate the scores of random convnet features. The other three bars on the right represent the scores of the `12345' + MFCC concatenation, the MFCC feature, and the reported state-of-the-art methods respectively. The rankings within the convnet feature combinations are also shown in the bars, where the top-7 and lower-7 are highlighted.We only briefly discuss the results of random convnet features here. The best performing random convnet features do not outperform the best performing convnet features in any task. In most of the combinations, convnet features outperformed the corresponding random convnet features, although there are a few exceptions. However, random convnet features also achieved scores comparable to or even better than MFCCs, indicating that i) a significant part of the strength of convnet features comes from the network structure itself, and ii) random convnet features can be useful, especially if there is no suitable source task. §.§.§ Task 1. Ballroom Genre ClassificationFigure <ref> shows the performances of different features for ballroom dance classification. The highest score is achieved using the convnet feature `' with 86.7% accuracy. The convnet feature shows good performance, even outperforming some previous works that explicitly use rhythmic features. The result clearly shows that low-level features are crucial in this task. All of the top-7 convnet feature strategies include the second layer, and 6/7 of them include the first layer. On the other hand, the lower-7 are [`', `', `', `', `', `', `'], none of which includes the first layer. Even `' achieves a reasonable performance (73.8%).The importance of low-level features is also supported by known properties of this task. The ballroom genre labels are closely related to rhythmic patterns and tempo <cit.> <cit.>. However, there is no label directly related to tempo in the source task. Moreover, deep layers in the proposed structure are conjectured to be mostly invariant to tempo. As a result, high-level features from the fourth and fifth layers contribute poorly to the task relative to those from the first, second, and third layers.The state-of-the-art algorithm, which is also the only algorithm that used the same dataset due to its recent release, uses the 2D scale transform, an alternative representation of music signals for rhythm-related tasks <cit.>, and reports 94.9% weighted average recall.For additional comparison, there are several works that use the Ballroom dataset <cit.>. This dataset has 8 classes and is smaller in size than the Extended Ballroom dataset (13 classes). Laykartsis and Lerch <cit.> combine beat histogram and timbre features to achieve 76.7%. Periodicity analysis with an SVM classifier in Gkiokas et al. <cit.> shows 88.9%/85.6 - 90.7% before and after feature selection, respectively.§.§.§ Task 2. Gtzan Music Genre ClassificationFigure <ref> shows the performances on Gtzan music genre classification. The convnet feature shows 89.8%, while the concatenated feature and MFCCs respectively show only 78.1% and 66.0% accuracy. Although there are methods that report accuracies higher than 94.5%, we set 94.5% as the state-of-the-art score following the dataset analysis in <cit.>, which shows that a perfect score cannot surpass 94.5% considering the noise in the Gtzan dataset.Among a significant number of works that use the Gtzan music genre dataset, we describe four methods in more detail. Three of them use an SVM classifier, which enables us to focus on the comparison with our feature. Arabi and Lu <cit.> is most similar to the proposed convnet features in that it combines low-level and high-level features, and it shows a similar performance. Beniya et al. <cit.> and Huang et al. 
<cit.> report the performances of many low-level features before and after applying feature selection algorithms. Only the latter outperforms the proposed method, and only after feature selection. * Arabi and Lu <cit.> use not only low-level features such as {spectral centroid/flatness/roll-off/flux}, but also high-level musical features such as {beat, chord distribution and chord progressions}. The best combination of the features shows 90.79% accuracy. * Beniya et al. <cit.> use a particularly rich set of statistics such as{mean, standard deviation, skewness, kurtosis, covariance} of many low-level features including {RMS energy, attack, tempo, spectral features, zero-crossing, MFCC, dMFCC, ddMFCC, chromagram peak and centroid}. The feature vector dimensionality is reduced by MRMR (max-relevance and min-redundancy) <cit.> to obtain the highest classification accuracy of 87.9%. * Huang et al. <cit.> adopt another feature selection algorithm, self-adaptive harmony search <cit.>. The method uses statistics such as{mean, standard deviation} of many features including {energy, pitch, and timbral features} and their derivatives. The original 256-dimensional feature achieved 84.3% accuracy, which increases to 92.2% and 97.2% after feature selection. * Reusing AlexNet <cit.>, a convnet pre-trained for visual image recognition, achieved 78% accuracy <cit.>. In summary, the convnet feature achieves better performance than many approaches which use extensive music feature sets without feature selection, as well as some of the approaches with feature selection.For this task, it turns out that combining features from all layers is the best strategy. In the results, `', `', and `' are the three best configurations, and all of the top-7 scores are from strategies that use more than three layers. On the contrary, all lower-7 scores are from those with only 1 or 2 layers. This is interesting, since the majority (7/10) of the target labels already exist in the source task labels, from which it would be reasonable to assume that the necessary information for those labels can be provided by the last layer alone. Even in such a situation, however, low-level features contribute to improving the genre classification performance[On the contrary, in Task 4 - music emotion prediction, the high-level feature plays a dominant role (see Section <ref>).].Among the classes of the target task, classical, disco, and reggae do not exist in the source task classes. Based on this, we consider two hypotheses: i) the performances for those three classes may be lower than for the others; ii) low-level features may play an important role in classifying them, since the high-level feature from the last layer may be biased towards the other 7 classes, which exist in the source task.However, both hypotheses are rebutted by comparing the performances for each genre with the convnet features `5' and `12345', as in Figure <ref>. First, with the `5' convnet feature, classical shows the highest accuracy, while both disco and reggae show accuracies around the average accuracy reported over the classes. Second, aggregating early-layer features affects all the classes rather than only the three omitted classes. This suggests that the convnet features are not strongly biased towards the genres that are included in the source task and can be used generally for target tasks with music different from those genres.§.§.§ Task 3. 
Gtzan Speech/music Classification Figure <ref> shows the accuracies of the convnet features, the baseline feature, and the state of the art <cit.>, which uses low-level features including MFCCs with sparse dictionary learning, for Gtzan music/speech classification. A majority of the convnet feature combinations achieve 100% accuracy. MFCC features achieve 99.2%, but the error rate is trivial (0.8% is one sample out of 128 excerpts).Although the source task is only about music tags, the pre-trained feature at any layer easily solved the task, suggesting that the nature of the music and speech signals in the dataset is highly distinctive.§.§.§ Task 4. Music Emotion PredictionFigure <ref> shows the results for music emotion prediction (Task 4). The best performing convnet features achieve 0.633 and 0.415 r^2 scores on the arousal and valence axes respectively. On the other hand, the state-of-the-art algorithm reports 0.704 and 0.500 r^2 scores using music features with a recurrent neural network as a classifier <cit.>; it uses 4,777 audio features, including many functionals (such as quantiles, standard deviation, mean, inter-peak distances) of 12 chroma features, loudness, RMS energy, zero-crossing rate, 14 MFCCs, spectral energy, spectral roll-off, etc.For the prediction of arousal, there is a strong dependency on the last-layer feature. All top-7 performances are from feature vectors that include the fifth layer. The first-layer feature also seems important, since all of the top-5 strategies include the first and fifth layer features.For valence prediction, the third-layer feature seems to be the most important one. The third layer is included in all of the top-6 strategies. Moreover, the `3' strategy was found to be the best performing among strategies with a single-layer feature.To summarise the results, the predictions of arousal and valence rely on different layers, for which they should be optimised separately.In order to remove the effect of the choice of classifier and assess solely the effect of the features, we compare our approach to the baseline method of <cit.>, which is based on the same 4,777 features with an SVM rather than a recurrent neural network.The baseline method achieves 0.541 and 0.320 r^2 scores on arousal and valence respectively, both of which are lower than those achieved by using the proposed convnet feature. This further confirms the effectiveness of the proposed convnet features.§.§.§ Task 5. Vocal/non-vocal ClassificationFigure <ref> presents the performances on vocal/non-vocal classification using the Jamendo dataset <cit.>. There is no known state-of-the-art result, as the dataset is usually used for frame-based vocal detection/segmentation; pre-segmented excerpt classification is the task we formulate in this paper. For this dataset, the fourth layer plays the most important role: all 14 combinations that include the fourth layer outperformed the other 14 strategies without it.§.§.§ Task 6. Acoustic Event DetectionFigure <ref> shows the results on acoustic event classification using the Urbansound8K dataset <cit.>. Since this is not a music-related task, there are no common tags between the source and target tasks, and therefore the final-layer feature is not expected to be useful for the target task. The strategy of concatenating the `12345' convnet feature and MFCCs yields the best performance. Among convnet features, `', `', `', and `' achieve good accuracies. In contrast, those with only one or two layers do not perform well. 
We were not able to observe any particular dependency on a certain layer.Since the convnet features are trained on music, they do not outperform a dedicated convnet trained for the target task. The state-of-the-art method is based on a deep convolutional neural network with data augmentation <cit.>. Without augmenting the training data, the accuracy of the convnet in the same work is reported to be 74%, which is still higher than our best result (71.4%).[Transfer learning targeting audio event classification was recently introduced in <cit.> and achieved a state-of-the-art performance.]The convnet feature still shows better results than conventional audio features, demonstrating its versatility even for non-musical tasks. The method in <cit.>, with {minimum, maximum, median, mean, variance, skewness, kurtosis} of 25 MFCCs and {mean and variance} of the first and second MFCC derivatives (a 225-dimensional feature), achieved only 68% accuracy using an SVM classifier. This is worse than the performance of the best performing convnet feature.It is notable again that, unlike in the other tasks, concatenating the convnet feature and MFCCs results in an improvement over either the convnet feature or MFCCs alone (71.4%). This suggests that they are complementary to each other in this task.§ CONCLUSIONS We proposed a transfer learning approach using deep learning and evaluated it on six music information retrieval and audio-related tasks. The pre-trained convnet was first trained to predict music tags, and aggregated features from its layers were then transferred to solve genre classification, vocal/non-vocal classification, emotion prediction, speech/music classification, and acoustic event classification problems. Unlike the common approach in transfer learning, we proposed to use the features from every convolutional layer after applying average-pooling to reduce their feature map sizes.In the experiments, the pre-trained convnet feature showed good performance overall. It outperformed the baseline MFCC feature on all six tasks, a feature that is very popular in music information retrieval tasks because it gives a reasonable baseline performance in many of them. It also outperformed the random-weights convnet features on all six tasks, demonstrating the improvement gained by pre-training on a source task. Somewhat surprisingly, the performance of the convnet feature is also very competitive with state-of-the-art methods designed specifically for each task. The most important layer turns out to differ from task to task, but concatenating features from all the layers generally worked well. For all five music tasks, concatenating the MFCC feature onto the convnet features did not improve the performance, indicating that the music information in the MFCC feature is already included in the convnet feature. We believe that transfer learning can alleviate the data sparsity problem in MIR and can be used for a large number of different tasks.
"authors": [
"Keunwoo Choi",
"György Fazekas",
"Mark Sandler",
"Kyunghyun Cho"
],
"categories": [
"cs.CV",
"cs.AI",
"cs.MM",
"cs.SD"
],
"primary_category": "cs.CV",
"published": "20170327164803",
"title": "Transfer learning for music classification and regression tasks"
} |
Ambiguity and noise in natural language instructions create a significant barrier towards adopting autonomous systems into safety-critical workflows involving humans and machines. In this paper, we propose to build on recent advances in electrophysiological monitoring methods and augmented reality technologies to develop alternative modes of communication between humans and robots involved in large-scale proximal collaborative tasks.We first introduce augmented reality techniques for projecting a robot's intentions to its human teammate, who can interact with these cues to engage in real-time collaborative plan execution with the robot. We then look at how electroencephalographic (EEG) feedback can be used to monitor the human response both to discrete events and to longer-term affective states during the execution of a plan. These signals can be used by a learning agent, a.k.a. an affective robot, to modify its policy. We present an end-to-end system capable of demonstrating these modalities of interaction. We hope that the proposed system will inspire research in augmenting human-robot interactions with alternative forms of communication in the interests of safety, productivity, and fluency of teaming, particularly in engineered settings such as the factory floor or the assembly line in the manufacturing industry, where the use of such wearables can be enforced.§ INTRODUCTION The last decade has seen a massive increase in robots deployed on the factory floor <cit.>. This has led to fears of massive job losses for humans in the manufacturing industry, as well as concerns about safety for the jobs that do remain. The latter is not an emerging concern, though.Automation of the manufacturing industry has gone hand in hand with incidents of misaligned intentions between robots and their human co-workers, leading to at least four instances of fatality <cit.>.This dates back to as early as 1979, when a robot arm crushed a worker to death while gathering supplies in the Michigan Ford Motor factory, and extends to as recently as 2015 in a very similar and much publicized accident in the Volkswagen factory in Baunatal, Germany. With 1.3 million new robots predicted to enter the workspace by next year <cit.>, such concerns are only expected to escalate.A closer look at the dynamics of employment in the manufacturing industry also reveals that the introduction of automation has in fact increased productivity <cit.> and, surprisingly, contributed to a steady increase in the number of jobs for human workers <cit.> in Germany (which so far dominates in terms of robots deployed in the industry).We thus posit either a semi-autonomous workspace in the future with increased hazards due to misaligned interests of robots in the shared environment, or a future where the interests of human workers are compromised in favor of automation. In light of this, it is essential that the next-generation factory floor be able to cope with the needs of these new technologies. At the core of this problem is the impedance mismatch between humans and robots in how they communicate, as illustrated in Figure <ref>.Despite the progress made in natural language processing, natural language understanding is still a largely unsolved problem, and as such robots find it difficult to (1) express their own goals and intentions effectively, as well as (2) understand human expressions and emotions. 
Thus there exists a significant communication barrier to be overcome on either side, and robots are essentially still "autistic" <cit.> in many aspects.While this may not be a serious concern for deploying completely autonomous agents in isolated environments, such as for space or underwater exploration, the priorities change considerably when humans and robots are involved in collaborative tasks, especially for concerns of safety, if not just to improve the effectiveness of collaboration. This is also emphasized in the Roadmap for U.S. Robotics report, which outlines that "humans must be able to read and recognize robot activities in order to interpret the robot's understanding" <cit.>.Recent work on this has focused on the generation of legible robot motion planning <cit.> and explicable task planning <cit.>, as well as the verbalization of robot intentions using natural language <cit.>.§.§ The Manufacturing Environment.Our primary focus here is on structured settings like the manufacturing environment, where wearables can be a viable solution for improving the workspace. Indeed, a reboot of the safety helmet and goggles, as illustrated in Figure <ref>, only requires retrofitting existing wearables with sensors that can enable these new technologies.Imagine, then, a human and a robot engaged in an assembly task, where they are constructing a structure collaboratively. Further suppose that the human now needs a tool from the shared workspace. At this time, neither agent is sure what tools and objects the other is going to access in the immediate future - this calls for seamless transfer of relevant information without loss of workflow. Existing (general-purpose) solutions would suggest intention recognition <cit.> or natural language <cit.> communication as a means to respond to this situation. With regard to naturalistic modes of interaction among agents, natural language and intent or gesture recognition techniques remain the ideal choice in most cases, and perhaps the only choice in some (such as robots that interact with people in their daily lives).However, we note that these are inherently noisy and ambiguous, and not necessary in controlled environments such as the factory floor or the assembly line, where the workspace can be engineered to enforce protocols in the interests of safety and productivity, in the form of safety helmets integrated with wearable technology <cit.>.Thus, in our system, the robot instead projects its intentions as holograms, making them directly accessible to the human in the loop, e.g. by projecting a pickup symbol on a tool it might use in the future. Further, unlike in traditional mixed-reality projection systems, the human can directly interact with these holograms to make his own intentions known to the robot, e.g. by gazing at and selecting the desired tool, thus forcing the robot to replan. To this end, we develop, with the power of the HoloLens, an alternative communication paradigm that is based on the projection of explicit visual cues pertaining to the plan under execution via holograms, such that they can be intuitively understood and directly read by the human partner.The "real" shared human-robot workspace is thus augmented with a virtual space where the physical environment is used as a medium to convey information about the intended actions of the robot, the safety of the workspace, or task-related instructions. We call this the Augmented Workspace. 
Recent development of augmented reality techniques <cit.> has opened up endless possibilities for such modes of communication.This, by itself, however, provides little indication of the mental state of the human, i.e. how he is actually responding to the interactions - something that human teammates naturally keep track of during a collaborative exercise. In our system, we propose to use real-time EEG feedback from the Emotiv EPOC+ headset for this purpose. This has several advantages - specific signals in the brain are understood to have known semantics (more on this later) and are detected immediately and with high accuracy, thus short-circuiting the need for the relatively inaccurate and slower signal processing stage in rival techniques such as emotion and gesture recognition. Going back to our previous use case, if the robot now makes an attempt to pick up the same tool again, the error can fire an event-related EEG response - which may readily be used in a closed-loop feedback to control or stop the robot. Further, if the robot is making the same mistake again and again, causing the human to be stressed and/or irritated, it can listen to the human's affective states to learn better, and more human-aware, policies over time. We demonstrate these capabilities as part of the Consciousness Cloud, which provides the robots real-time shared access to the mental state of all the humans in the workspace.The agents are thus able to query the cloud about particulars (e.g. stress levels) of the current mental state, or receive specific alerts related to the human's response to events (e.g. oddball incidents like safety hazards and corresponding ERP spikes) in the environment. Finally, instead of a single human and robot collaborating over an assembly task, imagine now an entire workspace shared by many such agents, as is the case in most manufacturing environments. Traditional notions of communication become intractable in such settings. With this in mind, we make the entire system cloud-based - all the agents log their respective states onto a central server, and can also access the states of their co-workers from it. As opposed to peer-to-peer information sharing, this approach provides a distinct advantage towards making the system scalable to multiple agents, both humans and robots, sharing and collaborating in the same workspace, as envisioned in Figure <ref>. [1]https://www.microsoft.com/microsoft-hololens/en-us [2]https://www.emotiv.com/epoc/ §.§ Contributions.Thus, in this paper, we propose approaches to tear down the communication barrier between human and robot team members (1) by means of holograms/projections as part of a shared alternative vocabulary for communication in the Augmented Workspace, and (2) by using direct feedback from physiological signals to model the human mental state in the shared Consciousness Cloud. The former allows for real-time interactive plan monitoring and execution by the robot with a human in the loop, while the latter, in addition to passive plan monitoring, also allows a planning agent to learn the preferences of its human co-worker and update its policies accordingly.We will demonstrate how this can be achieved on an end-to-end cloud-based platform built specifically to scale up to the demands of the next-generation semi-autonomous workspace envisioned in Figure <ref>. § RELATED WORK§.§ Intention Projection and Mixed RealityThe concept of intention projection for autonomous systems has been explored before. 
An early attempt was made by <cit.> in their prototype Interactive Hand Pointer (IHP) to control a robot in the human's workspace. Similar systems have since been developed to visualize trajectories of mobile wheelchairs and robots <cit.>, which suggest that humans prefer to interact with a robot when it presents its intentions directly as visual cues. The last few years have seen active research <cit.> in this area, but most of these systems were passive, non-interactive, and quite limited in their scope, and did not consider the state of the objects or the context of the plan pertaining to the action while projecting information.As such, the scope of intention projection has remained largely limited. Instead, in this paper, we demonstrate a system that is able to provide much richer information to the human in the loop during collaborative plan execution, in terms of the current state information, the action being performed, and future parts of the plan under execution. We also demonstrate how recent advances in the field of augmented reality make this form of online interactive plan execution particularly compelling. In Table <ref> we provide the relative merits of augmented reality compared with the state of the art in mixed-reality projections. §.§ EEG Feedback and Robotics Electroencephalography (EEG) is an electrophysiological monitoring method that measures voltage fluctuations resulting from ionic currents within the brain. The use of EEG signals in the design of BCIs has been of considerable interest in recent times. The aim of our project is to integrate EEG-based feedback into human-robot interaction or HRI. Of particular interest to us are Event-Related Potentials or ERPs, which are measured as the response to specific sensory, cognitive, or motor events, and may be especially useful in gauging the human reaction to specific actions during the execution of a robot's plan <cit.>. Recently, researchers have tried to improve performance in robotics tasks by applying error-related potentials or ErrPs <cit.> in a reinforcement learning process <cit.>. These are error signals produced due to undesired or unexpected effects after performing an action. The existence of ErrPs and the possibility of classifying them in online settings have been studied in driving tasks <cit.>, as well as to change the robot's immediate behavior <cit.>. However, almost all of the focus has remained on the control of robots rather than on learning behavior <cit.>, and very little has been made of the effect of such signals on the task-level interactions between agents. This remains the primary focus of our system.§ SYSTEM OVERVIEW There are two major components of the system (refer to Figure <ref>) - (1) the Augmented Workspace, which allows the robots to communicate with their human co-workers in the virtual space; and(2) the Consciousness Cloud, which provides the robots real-time shared access to the mental state of all the humans in the workspace.This is visible in the centralized Dashboard that provides a real-time snapshot of the entire workspace, as seen in Figure <ref>. The Augmented Workspace Panel shows the real-time stream from the robot's point of view, the augmented reality stream from the human's point of view, and information about the current state of plan execution. 
The Consciousness Cloud Panel displays the real-time affective states (engagement, stress, relaxation, excitement and interest), the raw EEG signals from the four channels (AF3, F3, AF4 and F4) used to detect responses to discrete events, as well as alerts signifying abnormal conditions (p300, control blink, high stress, etc.). The Dashboard allows the humans to communicate or visualize the collaborative planning process between themselves. It can be especially useful in factory settings to the floor manager, who can use it to effectively monitor the shared workspace. We will now go into the implementation and capabilities of these two components in more detail. § THE AUGMENTED WORKSPACE In the augmented workspace (refer to Figure <ref>), the HoloLens communicates with the user endpoints through the API server. The API server is implemented in python using a standard web server framework.All external traffic to the server is handled by a front-end web server that communicates with the python application through a middleware layer; this front end ensures that the server can easily support a large number of concurrent requests. The service exposes both GET and POST endpoints: the GET links provide the HoloLens application with a way of accessing information from the robot, while the POST links provide the HoloLens application control over the robot's operation.Currently, we are using the API to expose information like the robot's planning state, the robot joint values, and transforms to special markers in the environment. Most API GET calls will first try to fetch the requested information from the memcached layer, and will only try a direct query to the database if the cache entry is older than a specified limit. Each query to the database also causes the corresponding cache entry to be updated. The server itself is updated by a daemon that runs on the cloud server and keeps consuming messages sent from the robot through various queues implemented using the rabbitMQ service.
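As an illustration of the GET path, here is a minimal sketch in the spirit of the description above. It is not the authors' code: the endpoint path, cache key, expiry limit, and query_database() helper are all assumed placeholders, and Flask and pymemcache stand in for the unnamed framework and cache client.

```python
import flask
from pymemcache.client.base import Client

app = flask.Flask(__name__)
cache = Client(("localhost", 11211))  # memcached layer

def query_database(key):
    # Stand-in for the real database read that refreshes the cache.
    raise NotImplementedError

@app.route("/robot/joints", methods=["GET"])
def joint_values():
    value = cache.get("joints")
    if value is None:  # memcached expiry plays the role of the age limit here
        value = query_database("joints")
        cache.set("joints", value, expire=1)  # refresh the cache entry
    return flask.jsonify(joints=value)

# POST endpoints would mirror this pattern, forwarding commands to the robot.
```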
§.§ Modalities of Interaction We will now demonstrate different ways augmented reality can improve the human-robot workspace, either by providing a platform for interactive plan execution for online collaboration, or as a means of providing assistive cues to guide the plan execution process. A video demonstrating all these capabilities is available at <https://goo.gl/pWWzJb>. §.§.§ Interactive Plan Execution. Perhaps the biggest use of AR techniques in the context of planning is for human-in-the-loop plan execution. For example, a robot involved in an assembly task can project the objects it intends to manipulate into the human's point of view, and annotate them with holograms that correspond to intentions to use or pick them up. The human can, in turn, access or claim a particular object in the virtual space and force the robot to re-plan, without there ever being any conflict of intentions in the real space. The humans in the loop can thus not only infer the robot's intent immediately from these holographic projections, but can also interact with them to communicate their own intentions directly and thereby modify the robot's behavior online. The robot can then also ask for help from the human, using these holograms. Figure <ref> shows, in detail, one such use case in our favorite BlocksWorld domain.The human can go into finer control of the robot by accessing the Holographic Control Panel, as seen in Figure <ref>(a). The panel provides the human controls to start and stop execution of the robot's plan, as well as to achieve fine-grained motion control of both the base and the arm by making them mimic the user's arm motion gestures on the MoveArm and MoveBase holograms attached to the robot.§.§.§ Assistive Cues. The use of AR is, of course, not just restricted to the procedural execution of plans. It can also be used to annotate the workspace with artifacts derived from the current plan under execution in order to improve the fluency of collaboration. For example, Figure <ref>(b-e) shows the robot projecting its area of influence in its workspace either as a 3D sphere around it, or as a 2D circle on the area it is going to interact with. This is rendered dynamically in real-time based on the distance of the end effector to its center, and to the object to be manipulated. This can be very useful in determining safety zones around a robot in operation. As seen in Figure <ref>(f-i), the robot can also render hidden objects or partially observable state variables relevant to a plan, as well as indicators to enhance the peripheral vision of the human and improve his/her situational awareness. § THE CONSCIOUSNESS CLOUD The Consciousness Cloud has two components - the affective state monitor and the discrete event monitor (as shown in Figure <ref>). In the affective state monitoring system, metrics corresponding to affective signals recorded by the Emotiv EPOC+ headset are directly fed into a rabbitMQ queue called the "Raw Affective Queue", to be used for visualization, and a reward signal (calculated from the metrics) is fed into the "Reward Queue". The robot directly consumes the "Reward Queue", and the signals that appear during an action execution are treated as the action reward or environment feedback for the AI agent (implementing a reinforcement learning agent). For the discrete event monitoring system, the raw EEG signals from the brain are sampled and written to a rabbitMQ queue called the "EEG Queue". This queue is consumed by our machine learning or classifier module, which is a python daemon running on an Azure server. When this python daemon is spawned, it trains an SVM classifier using a set of previously labelled EEG signals. The signals consumed from the queue are first passed through a feature extractor, and the extracted features are then used by the SVM to detect specific events (e.g. blinks). For each event, a corresponding command is sent to the "Robot Command" queue, which is consumed by the robot. For example, if a STOP command is sent for the blink event, it causes the robot to halt its current operation.
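The discrete event pipeline can be condensed into a short sketch. The following is illustrative rather than the deployed code: the queue names follow the description above, but the feature extractor, the training data, and the classifier configuration are our own placeholder assumptions (using the pika client for rabbitMQ and scikit-learn for the SVM).

```python
import pika
import numpy as np
from sklearn.svm import SVC

# Assumed training data: feature vectors of previously labelled EEG windows.
X_train = np.random.randn(40, 8)
y_train = np.array(["blink", "rest"] * 20)
clf = SVC(kernel="rbf").fit(X_train, y_train)

def extract_features(raw_window):
    # Placeholder extractor: per-channel mean and variance of a 4-channel window.
    w = np.frombuffer(raw_window, dtype=np.float32).reshape(-1, 4)
    return np.concatenate([w.mean(axis=0), w.var(axis=0)]).reshape(1, -1)

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="eeg_queue")
channel.queue_declare(queue="robot_command")

def on_eeg_window(ch, method, properties, body):
    event = clf.predict(extract_features(body))[0]
    if event == "blink":
        # A detected blink maps to a STOP command for the robot consumer.
        ch.basic_publish(exchange="", routing_key="robot_command", body="STOP")

channel.basic_consume(queue="eeg_queue", on_message_callback=on_eeg_window,
                      auto_ack=True)
channel.start_consuming()
```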
§.§ Modalities of Interaction Figure <ref> demonstrates different ways in which EEG signals can be used to provide closed-loop feedback to control the behavior of robots. This can be useful in two ways - either as a means of plan monitoring, i.e. controlling the plan execution process using immediate feedback, or as a reward signal for shaping and refining the policies of a learning agent. A video demonstrating these capabilities is available at <https://goo.gl/6LhKNZ>.§.§.§ Discrete Events. Discrete events refer to instantaneous or close-to-instantaneous events producing certain typical (and easy to classify) signals. We identify the following modalities of EEG-based feedback in this regard - (1) Event-Related Potentials or ERPs (e.g. p300), which can provide insight into the human's responses, like surprise; (2) Affective States like stress, valence, anger, etc., which can provide longer-term feedback on how the human evaluates interactions with the robot; and finally (3) the Alpha Rhythm, which can relate to factors such as task engagement and focus of the human teammate.This type of feedback is useful in the online monitoring of the plan execution process, providing immediate feedback on errors or mistakes made by the robot. The video demonstration shows a particular example where the human avoids coming into harm's way by stopping the robot's arm with a blink. Figure <ref> shows another such use case where the robot is building words (chosen by the human) out of lettered blocks and makes a wrong choice of letter at some stage - the mistake may be measured as the presence of an ERP signal here. The latter has so far given mixed results, leading us to shift to different EEG helmets for better accuracy (the Emotiv EPOC+ lacks electrodes in the central area of the brain, where p300s are known to be elicited).§.§.§ Affective States. Here, our aim is to train a learning agent to model the preferences of its human teammate by listening to his/her emotions or affective states. We refer to this as affective robotics (analogous to the field of affective computing). As we mentioned before, the Emotiv SDK currently provides five performance metrics, namely valence/excitement, stress/frustration, engagement, attention, and meditation. At this time, we have limited ourselves to excitement and stress as our positive (R^H+) and negative (R^H-) reward signals. We use a linear combination of these two metrics to create a feedback signal that captures the human's emotional response to a robot's action.It is important to note that these signals do not capture the entire reward signal but only soft goals or preferences that the robot should satisfy, which means the total reward for the agent is given by R = R^T + R^H, where R^T is the reward for the original task. However, learning this from scratch is hard given the number of episodes it would require, and also somewhat unnecessary if the domain physics is already known. Keeping this in mind, we adopt a two-stage approach where the learning agent is first trained on the task in isolation, without the human in the loop (i.e. Q-learning with only R^T), so that it can learn a policy that solves the problem (π^T). We then use this policy as the initial policy for a new Q-learning agent that considers the full reward (R) with the human in the loop; a sketch of this two-stage setup is given below. This "bootstrapping" approach should reduce the training time.The scenario, as seen in Figure <ref>, involves a workspace that is shared by a robot and a human. The workspace consists of a table with six multicolored blocks. The robot is expected to form a three-block tower from these blocks. As far as the robot is concerned, all the blocks are identical, and thus the tower can be formed from any of them. The human has a goal of using one specific block for his/her own purpose. This means that whenever the robot uses that specific block, it produces high levels of frustration in the human. The goal of the robot is thus to use this negative reward to update its policy and make sure that it does not use the block that the human requires.For the first phase of training, we trained the agent using a simulated model of the task. For the state representation, we used a modified form of the IPC BlocksWorld pddl domain. We used a factored representation of the state with 36 predicates and one additional predicate to detect task completion. 
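A minimal sketch of the two-stage scheme follows. It is not the deployed implementation: the environment interface (reset/actions/step), the read_affective_metrics() helper that would poll the Reward Queue, the unit weights in the linear combination, and the hyperparameters are all illustrative assumptions.

```python
import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1  # assumed hyperparameters

def q_learning(env, episodes, Q=None, human_reward=None):
    """Tabular Q-learning; with human_reward, the reward is R = R_T + R_H."""
    Q = Q if Q is not None else defaultdict(float)
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            acts = env.actions(s)
            if random.random() < EPSILON:
                a = random.choice(acts)                      # explore
            else:
                a = max(acts, key=lambda act: Q[(s, act)])   # exploit
            s2, r_task, done = env.step(a)
            r = r_task + (human_reward() if human_reward else 0.0)
            best_next = 0.0 if done else max(Q[(s2, a2)] for a2 in env.actions(s2))
            Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
            s = s2
    return Q

def affective_reward():
    # R_H = w1 * excitement - w2 * stress, read from the Reward Queue.
    excitement, stress = read_affective_metrics()  # hypothetical queue reader
    return 1.0 * excitement - 1.0 * stress

Q_task = q_learning(env, episodes=800)              # stage 1: R_T only
Q_full = q_learning(env, episodes=200, Q=Q_task,
                    human_reward=affective_reward)  # stage 2: R_T + R_H
```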
At every step, the agent has access to 50 actions to manipulate the blocks on the table and 80 additional actions to check for the goal. As for the task rewards, each action is associated with a small negative reward, and if the agent achieves the goal it receives a large positive reward. We also introduced an additional reward each time the number of predicates encoding blocks lying on the table reduces (which means the agent is forming larger towers), to improve the convergence rate. We found that the agent converged to the optimal policy (the agent achieves the goal in 5 steps) at around 800 iterations. Figure <ref> shows the length of the episodes produced after each iteration and the distribution of Q-values across the table. Once the initial bootstrapping process was completed, we used the resultant Q-value table as our input for the second phase of the learning, as seen in the video demonstration. While there are some issues with convergence that are yet to be resolved, initial results showing the robot exploring new policies using the stress signals are quite exciting.§ CONCLUSIONS & FUTURE WORK In conclusion, we presented two approaches to improve collaboration among humans and robots from the perspective of task planning, either in terms of an interactive plan execution process or in gathering feedback to inform the human-aware decision making process. To this end, we discussed the use of holograms as a shared vocabulary for effective communication in an augmented workspace. We also discussed the use of EEG signals for immediate monitoring, as well as for longer-term feedback on the human response to the robot, which can be used by a learning agent to shape its policies towards increased human-awareness.Such modes of interaction open up several exciting avenues of research. We mention a few of these below.§.§.§ Closing the planning-execution loop The ability to project intentions and interact via those projections may be considered in the plan generation process itself - e.g. the robot can prefer a plan that is easier to project to the human for the sake of smoother collaboration. This notion of projection-aware task or motion planning adds a new dimension to the area of human-aware planning.A holographic vocabulary also calls for the development of representations - PDDL3.x - that can capture complex interaction constraints modeling not just the planning ability of the agent but also its interactions with the human. Further, such representations can be learned to generalize to methods that can, given a finite set of symbols or vocabulary, compute domain-independent projection policies that decide what and when to project to reduce the cognitive load on the human.§.§.§ ERP and timed eventsPerhaps the biggest challenge towards adopting ERP feedback over a wide variety of tasks is that detecting these signals relies on knowing the exact time of occurrence of the event. Recent advancements in machine learning techniques can potentially allow windowed approaches to detect such signals from raw data streams.§.§.§ Evaluations While preliminary studies with fellow graduate student subjects have been promising, we are currently working towards a systematic evaluation of our system under controlled conditions, complying with the ISO 9241-11:1998 standards, targeted at professionals who are engaged in similar activities repeatedly over prolonged periods. This is essential in evaluating such systems, since the value of information in projections is likely to diminish significantly with expertise and experience. 
"authors": [
"Tathagata Chakraborti",
"Sarath Sreedharan",
"Anagha Kulkarni",
"Subbarao Kambhampati"
],
"categories": [
"cs.RO",
"cs.HC"
],
"primary_category": "cs.RO",
"published": "20170327050802",
"title": "Alternative Modes of Interaction in Proximal Human-in-the-Loop Operation of Robots"
} |
1 Guangdong Province Key Laboratory of Popular High Performance Computers, College of Computer Science and Software Engineering, Shenzhen University, Shenzhen 518060, P.R. China2 Department of Physics, University of Fribourg, Chemin du Musée 3, CH-1700 Fribourg, Switzerland3 Department of Modern Physics, University of Science and Technology of China, Hefei 230027, P. R. China. As a fundamental challenge in vast disciplines, link prediction aims to identify potential links in a network based on incomplete observed information, and it has broad applications ranging from uncovering missing protein-protein interactions to predicting the evolution of networks. Some of the most influential methods rely on similarity indices characterized by common neighbors or their variations. We construct a hidden space mapping a network into Euclidean space based solely on the connection structure of the network. Compared with the real geographical locations of nodes, our reconstructed locations are in good agreement with the real ones. The distances between nodes in our hidden space can serve as a novel similarity metric in link prediction. In addition, we hybridize our hidden space method with other state-of-the-art similarity methods, which substantially outperforms the existing methods on prediction accuracy. Hence, our hidden space reconstruction model provides a fresh perspective to understand network structure, which in particular casts a new light on link prediction. 89.75.HC, 02.50.-r Hidden space reconstruction inspires link prediction in complex networks Hao Liao^1,2, Mingyang Zhou^1,3[[email protected]], Zong-wen Wei^1,3, Rui Mao^1, Alexandre Vidmer^1,2, Yi-Cheng Zhang^2 December 30, 2023 =========================================================================================================================== § INTRODUCTION In recent decades, understanding the organization of real networks has been a major challenge <cit.>. Many different disciplines, such as information technology, biology, and physics, have been studying the organization of real networks <cit.>. One of the crucial tasks in complex networks is to reduce the noise and fill in vacant records in large and sparse networks <cit.>. Vacant records refer not only to missing past connections between nodes, but also to future connections. Besides, link prediction can be used to detect hidden relationships between terrorists <cit.>, predict the coverage of a certain virus, identify spurious connections in networks <cit.>, and so on.The essential problem of link prediction is to measure the likelihood of a link between nodes accurately <cit.>. A straightforward method to measure similarity is based on the number of common neighbors between two nodes. However, this method favors large-degree nodes. In order to overcome this disadvantage of the common-neighbor method, several weighted methods have been proposed, such as the Jaccard index <cit.>, the Salton index <cit.> and Resource Allocation <cit.>. These local-information-based methods have attracted great attention due to their efficiency and low computational complexity. Moreover, to suppress the imbalance caused by popular nodes' attractivity and to overcome the cold-start problem <cit.>, various methods based on global information were introduced into link prediction, for example, SimRank <cit.>, the hierarchical random graph <cit.>, and the stochastic block model <cit.>. 
However, global-information-based methods are computationally intractable, which limits their application in large complex networks <cit.>.Our main motivation for this paper comes from networks in which nodes possess a real geographic location, such as power grid networks. In those networks, costs constrain the geographic layout (e.g., energy cost for a power grid, or efficiency cost for a road network), which shapes the network connections.Generally, most real-world networks lack real geographic information, and Ref. <cit.> suggests that most of these networks populate some hidden metric space, where a proximity rule governs the connections; that is, the closer nodes are in the hidden space, the more likely they are to be linked together <cit.>. A typical example is the homophily effect in social networks <cit.>. Hidden space theory can be used to devise efficient network routing strategies <cit.>, or community detection algorithms <cit.>, to name a few.Here we leverage the proximity rule in hidden space for link prediction by embedding networks into Euclidean space based on the modified normal matrix <cit.>. Previous hidden space models were applied to the link prediction problem without showing an explicit correlation between the model and real physical space. Here, we demonstrate that there is a marked positive correlation between the distances of nodes in hidden space and those in the corresponding real geographical space. We then predict potential links between pairs of nodes with similar hidden locations. This paper is organized as follows: In Section 2, we illustrate our hidden space reconstruction process for a network and give the example of the real Italian power grid network to verify our reconstruction efficiency. In Section 3, the application of our hidden space model to link prediction is presented. We highlight the achievements of this paper in Section 4. Finally, in Section 5, we give a theoretical analysis of our hidden space method and introduce other state-of-the-art link prediction methods.§ RESULTSWe start with a brief introduction of the hidden space. We then give our hidden space reconstruction process based on the advanced normal matrix in Section 2. Furthermore, the distance between nodes in hidden space is utilized to evaluate the similarity of non-existing edges in Section 3.a. Finally, the experimental results are shown in Sections 3.b and 3.c. §.§ Hidden space reconstructionConsider an unweighted undirected network G(V,E), where V is the set of nodes and E is the set of links that connect the nodes. There can be only one link between each pair of nodes, and self-connections are not allowed. The neighbors of a node are the set of nodes that are connected to it by a link. Link prediction is achieved by calculating a similarity score s_ij for each pair of nodes i and j in V. This score measures the likelihood for nodes i and j to be connected by a link. Since G is undirected, the score is symmetric, i.e. s_ij = s_ji. Then, we sort the non-existing links in descending order of similarity score. The scores at the top of the list correspond to the links that are the most likely to exist according to the chosen link prediction method. Therefore, how the similarity scores are calculated is the key problem.Previous methods mainly employ characteristics of neighboring nodes to measure similarity. In this paper, the hidden space behind the observed network is extracted to characterize the similarity. 
In some practical networks, such as power grids, airports, and road networks, nodes usually have fixed locations and connect to geographically closer nodes with higher probability. The probability that a link exists between two nodes is negatively correlated with the distance between the two nodes <cit.>, such that p_ij∝ d_ij^-β, with d_ij the distance between nodes i and j, and β a tunable parameter (β>0). The preference for connecting to geographically closer nodes is present in networks with ground-truth locations (i.e. networks in which nodes possess a fixed location in reality), and also in some social networks: people move within certain areas and are restricted by financial and time costs, so people living in the same area are more likely to build friendships.Recent empirical experiments reveal that online social networks also have spatially aggregating characteristics: users in the same region have a higher connection density than users across different regions, since people in the same region have similar interests and customs <cit.>. Thus an underlying metric space that determines the topological connections has a strong relationship with geographic location. Nodes' locations can be utilized to measure the similarity of two nodes and predict potential links. However, it is difficult to obtain geographic coordinates for many networks. Besides, real network connections are also influenced by mountains, valleys and rivers, which are not reflected in nodes' geographic coordinates. Therefore extracting the hidden space is crucial for understanding the underlying mechanism of networks.Though there exist some investigations that apply the underlying hidden space to navigation and community detection (categorization of nodes into groups) <cit.>, the relationship between network structure and the nodes' locations in the underlying space, as well as the connection between real and underlying space, is still far from understood.In this paper, we embed the network into a d-dimensional Euclidean space based on the adjacency matrix 𝐀=(a_ij)_n× n representing the links between the n nodes of the network, with a_ij=1 representing the existence of a link between nodes i and j, and a_ij=0 if no link is present. Previous research <cit.> reveals that similar nodes aggregate together in the spectral space of the Laplacian matrix L=K-A and the normal matrix 𝐍=𝐊^-1·𝐀, where 𝐊=diag{k_1,k_2,...,k_n} with k_i the degree of node i (the number of links it is connected to). For community detection, the normal matrix usually outperforms the Laplacian matrix <cit.>, implying that the normal matrix reveals the hidden space better. The maximal eigenvalue of the normal matrix is 1 (the trivial eigenvalue) and corresponds to the eigenvector 𝐯_1=(1,1,...,1)_n× 1^T. The other n-1 non-trivial eigenvalues lie in the range (-1,1), and the eigenspace characterized by the non-trivial eigenvectors reflects the topological structure.The matrix 𝐍 can represent a process of heat conduction <cit.>: each node absorbs heat according to the average temperature of its neighbors. In practical scenarios, however, the heat capacity of a node may not be linearly proportional to the node degree <cit.>.In order to take this fact into account we introduce a tunable parameter:𝐍_α=𝐊^-α·𝐀,where 𝐊^-α=diag{k_1^-α,k_2^-α,...,k_n^-α}. 𝐍_α degenerates into the normal matrix 𝐍 when α=1.We use the eigenspace of 𝐍_α to create the hidden space. 
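As an illustration, this construction takes only a few lines of numpy. The sketch below makes our own simplifying assumptions (a connected network given as a dense 0/1 adjacency matrix); since 𝐊^-α𝐀 is similar to the symmetric matrix 𝐊^-α/2𝐀𝐊^-α/2, it diagonalizes the symmetric form for numerical stability and maps the eigenvectors back.

```python
import numpy as np

def hidden_space(A, alpha=1.0, d=3):
    """Embed a network into the hidden space spanned by the non-trivial
    eigenvectors of N_alpha = K^{-alpha} A."""
    k = A.sum(axis=1).astype(float)         # node degrees
    Dm = k ** (-alpha / 2.0)
    S = (A * Dm[:, None]) * Dm[None, :]     # K^{-a/2} A K^{-a/2}, symmetric
    vals, U = np.linalg.eigh(S)             # real eigenvalues, ascending order
    order = np.argsort(vals)[::-1]          # re-sort descending
    V = (Dm[:, None] * U)[:, order]         # eigenvectors of K^{-alpha} A
    V /= np.linalg.norm(V, axis=0)          # normalize so that ||v_i|| = 1
    # drop the trivial leading eigenvector; keep coordinates (v_2i, ..., v_di)
    return V[:, 1:d]

def similarity(coords):
    """Similarity s_ij = -||c_i - c_j|| for all node pairs."""
    diff = coords[:, None, :] - coords[None, :, :]
    return -np.linalg.norm(diff, axis=2)
```

For large sparse networks, a truncated routine such as scipy.sparse.linalg.eigsh applied to the sparse symmetric form would replace the dense eigendecomposition.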
Suppose that λ_i (λ_1>λ_2>...>λ_n) are the eigenvalues of the matrix 𝐍_α and that the corresponding eigenvectors are 𝐯_1,𝐯_2,...,𝐯_n (with ‖𝐯_i‖=1). After removing the trivial eigenvector 𝐯_1, and given the dimension d of the hidden space, we construct the hidden space as𝐖=Span{𝐯_2,𝐯_3,...,𝐯_d},where d>2 and the span refers to the set of all linear combinations of these eigenvectors. The coordinate 𝐜_i of node i in the hidden space is 𝐜_i=(v_2i,v_3i,...,v_di). Empirical experiments on many networks suggest that embedding a real network into a low-dimensional space can reproduce its effective navigation <cit.>. Therefore, we map networks into Euclidean spaces with dimension smaller than 40. In Section 3.b, experimental results illustrate the effectiveness of our method.After constructing the hidden space, the distance between nodes i and j is d_ij=||𝐜_i-𝐜_j||. Since nodes prefer to connect to geographically nearby nodes, we take the negative value of the distance as the similarity score, s_ij=-d_ij. Non-existing links with the top-L scores are predicted as potential links. §.§ Correlation between hidden space and real space We explore the relation between the hidden space coordinates and the real geographical locations. We find that the distances between the nodes in real space are strongly correlated with those in the hidden space. In Figure <ref>, we show the Spearman correlation between hidden and real geographical locations in real networks as a function of α and dimension d. For most networks, the maximal correlation is around α=1. Table <ref> shows the maximal Spearman correlation and the corresponding optimal α and dimension d for different networks. For the Italian PowerGrid network, the optimal dimension is d=1 due to the linear outline of the Italian map (see Fig. <ref>). The optimal dimension of the model network is d=3, which differs from the real geographical dimension d=2. This is because, apart from location factors, the degree distribution also shapes the network, which is reflected in the additional dimension.The Euroroad network has a much larger optimal dimension, d_optimal=6, than the other networks, meaning that the Euroroad structure is determined by many non-geographical underlying factors such as policy, country, and economics. In the Euroroad network, when d=3 and α=1, Spearman=0.4815 is close to the optimal Spearman_optimal=0.5047, implying that geographical location dominates the main body of the Euroroad network.To better understand the relationship between the hidden and real geographic locations, we further compare different values of α in Figure <ref> and Figure <ref>. In Fig. <ref>, the hidden locations reveal the real geographic locations in the skeleton illustration network with the optimal value α=2.In the rest of the paper, as we are interested in the hidden relationships between nodes in real networks, we take the value of α that maximizes the correlation between the hidden and real distances of all pairwise nodes.§ LINK PREDICTION IN REAL-WORLD NETWORKS§.§ Coordinate determination In link prediction, the set of observed links M of a network is randomly divided into two parts: the training set M^T, treated as known information, and the probe set M^p, used to verify the accuracy of the prediction. The information contained in the probe set is considered unknown and is not used during the prediction process. The union of the two sets, M^T plus M^p, equals the whole data set. Besides, disconnected nodes in the training set are not considered. 
We choose the training set to contain 90% of the links and the probe set the remaining 10%. The aim of link prediction is to use the links in the training set to predict the probe set as accurately as possible.Note that only the training set is used to reconstruct the hidden space underlying a network in our experiments. Based on the hidden locations of all nodes, links between pairs of nodes with close locations are predicted as potential links. Each link e_ij is assigned a score s_ij=-d_ij. Links ranked in the top-L list are predicted as potential links of the probe set.In this paper, we employ a standard metric, the area under the receiver operating characteristic curve (AUC) <cit.>, to measure the accuracy of the prediction. AUC can be interpreted as the probability that a randomly chosen missing link from M^p is given a higher score than a randomly chosen nonexistent link. The AUC estimate requires n independent comparisons: each time, we randomly choose a missing link and a nonexistent link and compare their scores. Over these comparisons, we record the number n_1 of times the missing link has a higher score, and the number n_2 of times the two links have the same score. The final AUC is calculated as AUC=(n_1+0.5×n_2)/n. If all the scores were given by independent and identical distributions, the AUC would be around 0.5. A higher AUC corresponds to a more accurate prediction.The key issue of the proposed method is to determine the optimal parameters α and d. For a given network with geographical locations, the optimal α and d can be obtained by comparing the hidden space with the real locations (see Fig. <ref>). For the many networks without geographical locations, following the empirical studies in Table <ref>, we first fix d=3 and calculate the AUC as a function of α, from which we obtain the local optimum α_optimal. Then, we set α=α_optimal and calculate the AUC as a function of d, from which the local optimum d_optimal is obtained. Experiments on real networks reveal that α_optimal is around 0.95, and the dimension of the hidden space is less than 10 (see Table <ref>). (We could also first fix α, and later fix d. However, according to Fig. <ref>, the Spearman correlation fluctuates little around the optimal d, whereas it changes sharply around the optimal α. Therefore it is better to set d first, and consider α later.) §.§ Empirical analysisWe apply the hidden space method to six real networks, all of which exist in the physical world. The first four networks possess real locations; the last two, a social network and a protein-protein interaction network, do not. All the simulations in this section are averaged over 50 different divisions of the dataset.(1) PowerGrid <cit.>: the electrical power grid of the western US, with nodes representing generators, transformers and substations, and links corresponding to the high-voltage transmission lines between nodes. This network contains 4,941 nodes, and they are well connected.(2) Maayan-faa <cit.>: this network represents the flight routes in the USA. The nodes are airports and the links represent the presence of a flight route between two airports. It is a directed and unweighted network, containing 1,226 nodes and 2,615 edges.(3) OpenFlights <cit.>: a directed network containing flights between airports of the world, in which a directed edge represents a flight from one airport to another. It has 2,939 nodes and 30,501 edges.(4) Euroroad <cit.>: an undirected and unweighted network representing the international roads connecting the cities in Europe (E-roads). The nodes represent the cities and the links represent the roads. 
§.§ Empirical analysis

We apply the hidden space method to six real networks, all of which exist in the physical world. The first four networks possess real locations; the last two have no physical locations. All simulations in this section are averaged over 50 different divisions of the dataset.

(1) PowerGrid <cit.>: the electrical power grid of the western US, with nodes representing generators, transformers and substations, and links corresponding to the high-voltage transmission lines between them. This network contains 4,941 nodes, and they are well connected.

(2) Maayan-faa <cit.>: this network represents the flight routes in the USA. The nodes are airports and the links represent the presence of a flight route between two airports. It is a directed and unweighted network containing 1,226 nodes and 2,615 edges.

(3) OpenFlights <cit.>: a directed network containing flights between airports of the world, in which a directed edge represents a flight from one airport to another. It has 2,939 nodes and 30,501 edges.

(4) Euroroad <cit.>: an undirected and unweighted network representing the international roads connecting cities in Europe (E-roads). The nodes represent the cities and the links represent the roads. The network contains 1,174 nodes and 1,419 edges.

(5) Yelp <cit.>: an undirected and unweighted social network from round 4 of the Yelp academic challenge dataset. Yelp is a website where users can review and rate various businesses such as restaurants, doctors, and bars. For our analysis, we keep only the users who have at least one friend. The full network contains 123,368 nodes and 1,911,997 edges; in this paper we randomly sample 1,417 connected nodes with 4,472 edges and keep their connections.

(6) Maayan-pdzbase <cit.>: a network of protein-protein interactions, which is undirected and unweighted, containing 212 nodes and 244 edges.

We only take into account the giant connected component of these networks, because for a pair of nodes located in two disconnected components the similarity score is zero according to most prediction methods. Moreover, self-loops and edge directions are ignored for convenience. After this data processing, Table 1 shows the basic statistics of the giant components of all networks.

Figure <ref> shows the local optimal α and d in the US PowerGrid network. Fig. <ref>(a) plots AUC as a function of α for d=3, with α_optimal=1; note that α_optimal is smaller than 1 for most networks (cf. Table <ref>), and that AUC varies sharply around 1. Figure <ref>(b) shows AUC as a function of the dimension d, where d_optimal=4 and α_optimal=1. AUC becomes stable around d_optimal=4: for dimensions d∈[2,6] it fluctuates within 2%, revealing great robustness with respect to d. As for the US PowerGrid network, we also obtain the optimal α and d for the other networks in Table <ref>. Notice that although α_optimal is around 1, we cannot simply set α_optimal=1, since AUC decreases sharply between α_optimal and 1.

Further, we compare our method, the hidden space method (HS), with five state-of-the-art similarity indices: Common Neighbor (CN), Jaccard coefficient (Jaccard) <cit.>, Resource Allocation (RA) <cit.>, Katz <cit.>, and the Structural Perturbation Method (SPM) <cit.>. In these indices, two nodes are considered similar if they share important topological features <cit.>. The results are shown in Table <ref>. CN, Jaccard, and RA are local index methods, while Katz and SPM are based on global information. The AUC of the hidden space method is substantially higher than that of all local indices on most networks, and also better than SPM. The Katz index performs better than our method on three networks, but at an extremely high computational cost.

The performance of link prediction can also be evaluated by the precision metric. Precision is the ratio of correctly predicted links within a given set of potential links: if we take the L links with the highest scores as the predicted ones, and L_r of them are in the probe set M^p, then the precision is P(L)=L_r/L. Clearly, higher precision means higher accuracy. In our experiments, we choose the length of the prediction list equal to the size of the probe set, L=|M^p|, so that P(L)∈(0,1).

The results for precision are shown in Table <ref>, using the same values for α as in Table <ref>. The HS method achieves high AUC, yet with smaller precision compared to the other methods. Intuitively, higher accuracy should mean both higher AUC and higher precision. The reason for the deviation between AUC and precision is that AUC evaluates the score difference between probe links and non-existing links as a whole, whereas precision only concerns the top-L high-scoring links. This result also holds for different values of L; therefore we only present the results at L = |M^p|.
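The precision metric just described can be computed directly from the score list. A minimal sketch, assuming the candidate scores are stored in a dictionary over non-training pairs:

```python
# Sketch of the precision metric: with L = |M^p|, take the L highest-
# scoring candidate pairs and count how many fall into the probe set.
def precision(scores, probe_set):
    """scores: dict {(i, j): s_ij} over all candidate (non-training)
    pairs; probe_set: set of held-out links M^p."""
    L = len(probe_set)
    ranked = sorted(scores, key=scores.get, reverse=True)[:L]
    hits = sum(1 for pair in ranked if pair in probe_set)
    return hits / L
```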
§.§ Hybrid prediction method

In order to improve the precision of our method, we compare the overlap of the prediction lists of the different methods. There is less than 1% of common links between the Katz and HS lists, and around 5% between CN and HS. This small overlap reveals that the other methods and the HS method tend to predict different kinds of potential links; they could therefore complement each other's advantages to improve the precision metric.

Due to these differences, we propose a hybrid approach to enhance the prediction precision: combining HS with another method, we multiply the similarity obtained by HS with the similarity obtained by the other method. For example, if the similarity scores of the CN and HS methods for nodes i and j are s^CN_ij and s^HS_ij, the hybridized score between nodes i and j becomes s_ij'=s^CN_ij∗ s^HS_ij. The prediction list is then obtained by choosing the links with the top-L scores s_ij'. We show the results of this hybridization in Tab. <ref>; the precision is remarkably improved in most of the networks.
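A minimal sketch of this hybridization follows. The text only specifies the product s^CN_ij ∗ s^HS_ij; since the raw HS score is a negative distance, we rescale it here to the positive value 1/(1+d_ij) before multiplying, which is our assumption — any monotonically decreasing function of the distance would serve the same purpose.

```python
# Sketch of the hybrid score s'_ij = s^CN_ij * s^HS_ij, with the
# hidden-space score rescaled to 1/(1 + d_ij) (an assumption; the text
# does not specify how the distance-based score is made positive).
def hybrid_scores(dist, s_cn, pairs):
    """dist(i, j): hidden-space distance; s_cn: dict of CN scores."""
    return {(i, j): s_cn[(i, j)] / (1.0 + dist(i, j)) for (i, j) in pairs}

def top_L(scores, L):
    """Prediction list: the L pairs with the highest hybrid score."""
    return sorted(scores, key=scores.get, reverse=True)[:L]
```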
§ CONCLUSION

We conclude that the network topology, like the real locations of the nodes, is strongly affected by the distance between nodes in the hidden space. Our experimental results on both artificial and real-world networks show that the hidden space locations, which are highly correlated with the geographic locations, can be reconstructed merely from the connectivity matrix, without knowledge of the real geographic locations. This is a very strong point, as geographic coordinates are not always available in networks; for instance, we may possess only the connections between power stations and want to retrieve the distances between them.

In this paper, the hidden space distances are used to predict missing links, giving high similarity scores to pairs of nodes which are close in the hidden space. Our results show that the hidden space method improves AUC significantly. Additionally, we find an interesting phenomenon: the hidden space method obtains high AUC but low precision. This means that the HS method can find missing links which cannot be identified by other methods. Since the results on the two metrics are so different for the hidden space method, we complemented it with other methods, which significantly enhances the prediction precision. We believe that present and future work on the hidden space and link prediction will deepen our understanding of the fundamental relationships between the structure and function of complex networks.

§ MATERIALS AND METHODS

§.§ Illustrating the ground truth locations of a network

To verify the effectiveness of the hidden space method, we explore the hidden metric space in an artificial model network and in the Italian Power-Grid network, a network with ground truth locations. The Italian Power-Grid is the topology of the Italian high-voltage electrical network, which contains 98 nodes and 175 edges. Since real networks usually follow a scale-free or similar degree distribution, a Newtonian model <cit.> is utilized to generate scale-free networks embedded in metric spaces as follows. First we set the final network size N, then we assign geographic coordinates to each node in the metric space, as well as its expected degree. Nodes are distributed in a D-dimensional space with uniform density, and their degree values are generated according to a power-law distribution p_0(k)=c_0k^-γ, k∈[k_0,+∞), where k_0 is the minimum expected degree and c_0 is a normalization constant. A pair of nodes i and j is connected by an edge with probability

r(d_ij,k_i,k_j)=1/(1+d_ij/(μ k_i k_j))^β.

In our experiments, nodes are distributed in the 2-dimensional space {x,y | 0≤ x,y≤ 1}, and we set n=700, γ=2.5, k_0=1, μ=(β-1)/(2⟨k⟩) and β=2 <cit.>. Isolated nodes are removed from the model network.

Based on the ground truth locations of the model network, a theoretical AUC can be calculated. For two random nodes i(x_i,y_i) and j(x_j,y_j) in the 2-dimensional square, the probability that the distance r_ij=||𝐝_i-𝐝_j|| between the nodes is equal to a value r is

p_2(r_ij=r) = ∫_0^min(r,1) p_1(l_1) p_1(√(r^2-l_1^2)) dl_1, 0≤ r≤√(2),

where p_1(|x_i-x_j|=l)=2(1-l), 0≤ l≤ 1, is the probability of a coordinate difference l. Given a random edge e_ij (e_ij=1,0), the distance of its two endpoint nodes has the conditional probability p_3(r_e_ij|e_ij),

p_3(r_e_ij|e_ij) = ∫∫_k_i,k_j p(r_e_ij,k_i,k_j|e_ij) dk_i dk_j = ∫∫_k_i,k_j p(e_ij|r_e_ij,k_i,k_j) p(r_e_ij,k_i,k_j)/p(e_ij) dk_i dk_j
= ∫∫_k_i,k_j r(r_e_ij,k_i,k_j) p_2(r_e_ij) p_0'(k_i) p_0'(k_j) dk_i dk_j, if e_ij=1,
= ∫∫_k_i,k_j (1-r(r_e_ij,k_i,k_j)) p_2(r_e_ij) p_0'(k_i) p_0'(k_j) dk_i dk_j, if e_ij=0,

where p_0'(k)=(k/⟨k⟩) p_0(k) is the probability that one endpoint of a random edge has degree k, and ⟨k⟩ is the average degree of the network. The theoretical AUC is then obtained by comparing the scores of an existing edge and a non-existing edge; if the two scores are s_1=-r_1 and s_2=-r_2, respectively,

AUC = ∫ ds_1 ∫ p(s_1>s_2) ds_2 = ∫ p_3(r_1|e_1=1) dr_1 ∫_r_2≥ r_1 p_3(r_2|e_2=0) dr_2.

Figure <ref> shows the AUC in the model network. As α increases from 0 to 2, AUC increases sharply in the beginning and then decreases slowly for α>1. Combining Fig. <ref> and <ref>, the optimal AUC appears at α≈1, as expected from the previous results on real geographic locations.
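The model network used above can be generated in a few lines. This is a minimal sketch under the stated parameters; the inverse-transform sampling of the power-law degrees and the fixed random seed are implementation choices of ours, not taken from the text.

```python
# Sketch of the Newtonian model network described above. Nodes receive
# uniform positions in the unit square and power-law expected degrees,
# and each pair is linked with probability
#   r(d_ij, k_i, k_j) = 1 / (1 + d_ij / (mu * k_i * k_j))**beta.
# Isolated nodes would still have to be removed afterwards, as in the text.
import math
import random

def newtonian_model(n=700, gamma=2.5, k0=1.0, beta=2.0, seed=0):
    rng = random.Random(seed)
    pos = [(rng.random(), rng.random()) for _ in range(n)]
    # Inverse-transform sampling of p_0(k) = c_0 * k**(-gamma), k >= k0.
    ks = [k0 * (1.0 - rng.random()) ** (-1.0 / (gamma - 1.0))
          for _ in range(n)]
    mu = (beta - 1.0) / (2.0 * (sum(ks) / n))   # mu = (beta-1) / (2<k>)
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            d = math.dist(pos[i], pos[j])
            p = 1.0 / (1.0 + d / (mu * ks[i] * ks[j])) ** beta
            if rng.random() < p:
                edges.append((i, j))
    return pos, ks, edges
```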
§.§ Indices for link prediction

(i) Common Neighbor (CN). The idea of this metric is that the more neighbors two nodes i and j have in common, the more likely they are to form a link. Let Γ(i) denote the set of neighbors of node i; the simplest measure of the neighborhood overlap can be calculated directly as

s^CN_ij = |Γ(i)∩Γ(j)|.

CN is the method used by most websites [...]. However, the drawback of CN is that it favors nodes with large degree. Using the adjacency matrix, (A^2)_ij is the number of different paths of length two connecting i and j, so we can rewrite s_ij=(A^2)_ij. Newman <cit.> used this quantity in the study of collaboration networks, showing the correlation between the number of common neighbors and the probability that two scientists will collaborate in the future. We therefore select CN as the representative of all CN-based measures.

(ii) Jaccard coefficient (Jaccard) <cit.>. This index was proposed by Jaccard over a hundred years ago and is a traditional similarity measurement in the literature. It is defined as

s^Jaccard_ij = |Γ(i)∩Γ(j)|/|Γ(i)∪Γ(j)|.

The motivation for this index is that the raw number of common neighbors favors large-degree nodes, simply because they have more neighbors than small-degree ones. The normalization gives more credit to nodes sharing a high number of neighbors relative to their total joint number of neighbors, eventually removing the bias towards high-degree nodes. Note that there are many other ways to remove this tendency of CN, such as cosine similarity, the Sorensen index, the Hub promoted index and so on; see <cit.>.

(iii) Resource Allocation (RA) <cit.>. This index is inspired by resource allocation dynamics on complex networks. Consider a pair of nodes i and j which are not directly connected, and suppose that node i needs to send some resource to j, using their common neighbors as transmitters. Each transmitter (common neighbor) starts with a single unit of resource and distributes it equally among all its neighbors. The similarity between i and j can then be calculated directly as the amount of resource j receives from their common neighbors:

s^RA_ij = ∑_z∈Γ(i)∩Γ(j) 1/k_z.

This measure is symmetric. By using log(k_z) instead of k_z in Eq. <ref>, the index becomes the Adamic-Adar (AA) index <cit.>. The difference between RA and AA is small if k_z is small. However, in heterogeneous networks k_z can be very large, and then the difference between RA and AA becomes large. By giving less contribution to high-degree common neighbors, RA usually achieves a higher link prediction accuracy than AA.

(iv) Katz <cit.>. This index takes all paths between the two nodes i and j into consideration. It is defined as

s^Katz_ij = α A_ij + α^2 A^2_ij + α^3 A^3_ij + …,

where α is a free parameter and A is the adjacency matrix of the network. If the parameter is small, the index is close to CN. In order for the sum to converge, α must be chosen such that α<1/λ_max, where λ_max is the maximum eigenvalue of A. In that case the score matrix S=(s_ij)_n×n can be written in closed form as

S=(I-α A)^-1-I.

(v) Structural Perturbation Method (SPM) <cit.>. This index is based on the hypothesis that the features of a network are stable if a small fraction of edges is randomly removed. In SPM, we perturb a network by removing ΔE edges; the matrix corresponding to the randomly removed edges is ΔA, and the remaining edges are represented by the matrix A^R, with A=A^R+ΔA. Assuming that the perturbation of the eigenvectors between A and A^R is only minor, the perturbed matrix reads

Ã=∑_k=1^N (λ_k+Δλ_k)x_kx_k^T,

where λ_k and x_k are the eigenvalues and the corresponding orthogonal, normalized eigenvectors of A^R, respectively, and Δλ_k≈x_k^TΔAx_k/x_k^Tx_k. The similarity of nodes i and j is given by the corresponding entry ã_ij of the matrix Ã.
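The following sketch implements the local indices and the closed-form Katz score for an undirected graph. These are illustrative implementations of the formulas above, not the authors' original code; the choice of α = 0.9/λ_max in the Katz function is an arbitrary safe default of ours.

```python
# Sketches of the similarity indices defined above, for an undirected
# graph given as a dict of neighbor sets `nbr` and, for Katz, as a
# symmetric numpy adjacency matrix A.
import numpy as np

def cn(nbr, i, j):                       # common neighbors
    return len(nbr[i] & nbr[j])

def jaccard(nbr, i, j):                  # normalized overlap
    union = nbr[i] | nbr[j]
    return len(nbr[i] & nbr[j]) / len(union) if union else 0.0

def ra(nbr, i, j):                       # resource allocation
    return sum(1.0 / len(nbr[z]) for z in nbr[i] & nbr[j])

def katz(A, alpha=None):
    # alpha must satisfy alpha < 1/lambda_max for the series to converge.
    lam_max = np.linalg.eigvalsh(A).max()
    if alpha is None:
        alpha = 0.9 / lam_max
    n = A.shape[0]
    return np.linalg.inv(np.eye(n) - alpha * A) - np.eye(n)
```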
§ ACKNOWLEDGMENTS

We thank Prof. Matus Medo, Prof. Chi Ho Yeung, and Prof. Bing-Hong Wang for fruitful discussion and comments. This work is sponsored by the National Natural Science Foundation of China (Grant No. 11547040), Guangdong Province Natural Science Foundation (Grant No. 2016A030310051, 2015KONCX143), Shenzhen Fundamental Research Foundation (JCYJ20150625101524056, JCYJ20160520162743717, JCYJ20150529164656096), National High Technology Joint Research Program of China (Grant No. 2015AA015305), Project SZU R/D Fund (Grant No. 2016047), CCF-Tencent (Grant No. AGR20160201), and Natural Science Foundation of SZU (Grant No. 2016-24).

[1] M. E. J. Newman, The structure and function of complex networks. SIAM Rev. 45, 167-256 (2003).
[2] R. Albert and A.-L. Barabási, Statistical mechanics of complex networks. Rev. Mod. Phys. 74, 47 (2002).
[3] D. Wang, D. Pedreschi, C. Song, F. Giannotti, and A.-L. Barabási, Human mobility, social ties, and link prediction. In Proceedings of the 17th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 1100-1108. ACM, 2011.
[4] D. Liben-Nowell and J. Kleinberg, The link-prediction problem for social networks. J. Am. Soc. Inform. Sci. Technol. 58, 1019-1031 (2007).
[5] S. Redner, Networks: teasing out the missing links. Nature 453, 47-48 (2008).
[6] A. Clauset, C. Moore, and M. E. J. Newman, Hierarchical structure and the prediction of missing links in networks. Nature 453, 98-101 (2008).
[7] F. Spezzano, V. S. Subrahmanian, and A. Mannes, Reshaping terrorist networks. Communications of the ACM 57, 60-69 (2014).
[8] A. Zeng and G. Cimini, Removing spurious interactions in complex networks. Phys. Rev. E 85, 036101 (2012).
[9] R. N. Lichtenwalter, J. T. Lussier, and N. V. Chawla, New perspectives and methods in link prediction. In Proceedings of the 16th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 243-252. ACM, 2010.
[10] Z. Lu, B. Savas, W. Tang, and I. S. Dhillon, Supervised link prediction using multiple sources. In 10th International Conference on Data Mining (ICDM), pages 923-928. IEEE, 2010.
[11] P. Jaccard, Etude comparative de la distribution florale dans une portion des Alpes et du Jura. Impr. Corbaz, 1901.
[12] G. Salton, A. Wong, and C.-S. Yang, A vector space model for automatic indexing. Communications of the ACM 18, 613-620 (1975).
[13] T. Zhou, L. Lü, and Y.-C. Zhang, Predicting missing links via local information. Eur. Phys. J. B 71, 623-630 (2009).
[14] Z. Wang, J. Liang, R. Li, and Y. Qian, An approach to cold-start link prediction: Establishing connections between non-topological and topological information. IEEE Transactions on Knowledge and Data Engineering 28, 2857-2870 (2016).
[15] V. Leroy, B. B. Cambazoglu, and F. Bonchi, Cold start link prediction. In Proceedings of the 16th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 393-402. ACM, 2010.
[16] Z.-K. Zhang, C. Liu, Y.-C. Zhang, and T. Zhou, Solving the cold-start problem in recommender systems with social tags. Europhys. Lett. 92, 28002 (2010).
[17] G. Jeh and J. Widom, SimRank: a measure of structural-context similarity. In Proceedings of the Eighth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 538-543. ACM, 2002.
[18] B. Karrer and M. E. J. Newman, Stochastic blockmodels and community structure in networks. Phys. Rev. E 83, 016107 (2011).
[19] J. J. Whang, P. Rai, and I. S. Dhillon, Stochastic blockmodel with cluster overlap, relevance selection, and similarity-based smoothing. In 13th International Conference on Data Mining (ICDM), pages 817-826. IEEE, 2013.
[20] V. Martínez, F. Berzal, and J.-C. Cubero, A survey of link prediction in complex networks. ACM Computing Surveys (CSUR) 49, 69 (2016).
[21] L. Lü and T. Zhou, Link prediction in complex networks: A survey. Physica A 390, 1150-1170 (2011).
[22] D. Krioukov, Clustering implies geometry in networks. Phys. Rev. Lett. 116, 208302 (2016).
[23] M. Barthélemy, Spatial networks. Phys. Rep. 499, 1-101 (2011).
[24] D. Brockmann and D. Helbing, The hidden geometry of complex, network-driven contagion phenomena. Science 342, 1337-1342 (2013).
[25] M. McPherson, L. Smith-Lovin, and J. M. Cook, Birds of a feather: Homophily in social networks. Annu. Rev. Sociol. 27, 415-444 (2001).
[26] E. Stai, V. Karyotis, and S. Papavassiliou, A hyperbolic space analytics framework for big network data and their applications. IEEE Network 30, 11-17 (2016).
[27] M. Boguñá, D. Krioukov, and K. C. Claffy, Navigability of complex networks. Nat. Phys. 5, 74-80 (2009).
[28] M. Boguñá and D. Krioukov, Navigating ultrasmall worlds in ultrashort time. Phys. Rev. Lett. 102, 058701 (2009).
[29] M. E. J. Newman and T. P. Peixoto, Generalized communities in networks. Phys. Rev. Lett. 115, 088701 (2015).
[30] P. D. Hoff, A. E. Raftery, and M. S. Handcock, Latent space approaches to social network analysis. Journal of the American Statistical Association 97, 1090-1098 (2002).
[31] P. Sarkar, D. Chakrabarti, and A. W. Moore, Theoretical justification of popular link prediction heuristics. In IJCAI Proceedings-International Joint Conference on Artificial Intelligence 22, 2722 (2011).
[32] M. Á. Serrano, M. Boguñá, and F. Sagués, Uncovering the hidden geometry behind metabolic networks. Molecular BioSystems 8, 843-850 (2012).
[33] G. Mangioni and A. Lima, A growing model for scale-free networks embedded in hyperbolic metric spaces. In Complex Networks, pages 9-17. Springer, 2013.
[34] O. Roick and S. Heuser, Location based social networks - definition, current state of the art and research agenda. Transactions in GIS 17, 763-784 (2013).
[35] J. Bao, Y. Zheng, D. Wilkie, and M. Mokbel, Recommendations in location-based social networks: a survey. GeoInformatica 19, 525-565 (2015).
[36] K.-K. Kleineberg, M. Boguñá, M. Á. Serrano, and F. Papadopoulos, Hidden geometric correlations in real multiplex networks. Nat. Phys. (2016).
[37] L. Donetti and M. A. Muñoz, Detecting network communities: a new systematic and efficient algorithm. J. Stat. Mech. 2004, P10012 (2004).
[38] T. Zhou, Z. Kuscsik, J.-G. Liu, M. Medo, J. R. Wakeling, and Y.-C. Zhang, Solving the apparent diversity-accuracy dilemma of recommender systems. Proc. Natl. Acad. Sci. U.S.A. 107, 4511-4515 (2010).
[39] W.-X. Wang, B.-H. Wang, C.-Y. Yin, Y.-B. Xie, and T. Zhou, Traffic dynamics based on local routing protocol on a scale-free network. Phys. Rev. E 73, 026111 (2006).
[40] M.-Y. Zhou, S.-M. Cai, and Z.-Q. Fu, Traffic dynamics in scale-free networks with tunable strength of community structure. Physica A 391, 1887-1893 (2012).
[41] D. Liben-Nowell, J. Novak, R. Kumar, P. Raghavan, and A. Tomkins, Geographic routing in social networks. Proc. Natl. Acad. Sci. U.S.A. 102, 11623-11628 (2005).
[42] D. J. Watts, P. S. Dodds, and M. E. J. Newman, Identity and search in social networks. Science 296, 1302-1305 (2002).
[43] J. A. Hanley and B. J. McNeil, The meaning and use of the area under a receiver operating characteristic (ROC) curve. Radiology 143, 29-36 (1982).
[44] J. Kunegis, KONECT: the Koblenz network collection. In Proceedings of the 22nd International Conference on World Wide Web, pages 1343-1350. ACM, 2013.
[45] L. Šubelj and M. Bajec, Robust network community detection using balanced propagation. Eur. Phys. J. B 81, 353-362 (2011).
[46] Yelp's academic dataset. <http://www.yelp.com/academic_dataset>, 2014.
[47] T. Beuming, L. Skrabanek, M. Y. Niv, P. Mukherjee, and H. Weinstein, PDZBase: a protein-protein interaction database for PDZ-domains. Bioinformatics 21, 827-828 (2005).
[48] L. Katz, A new status index derived from sociometric analysis. Psychometrika 18, 39-43 (1953).
[49] L. Lü, L. Pan, T. Zhou, Y.-C. Zhang, and H. E. Stanley, Toward link predictability of complex networks. Proc. Natl. Acad. Sci. U.S.A. 112, 2325-2330 (2015).
[50] L. A. Adamic and E. Adar, Friends and neighbors on the web. Social Networks 25, 211-230 (2003).
| http://arxiv.org/abs/1705.02199v1 | {
"authors": [
"Hao Liao",
"Mingyang Zhou",
"Zong-Wen Wei",
"Rui Mao",
"Alexandre Vidmer",
"Yi-Cheng Zhang"
],
"categories": [
"cs.SI",
"physics.data-an",
"physics.soc-ph"
],
"primary_category": "cs.SI",
"published": "20170325143031",
"title": "Hidden space reconstruction inspires link prediction in complex networks"
} |
Extending Growth Mixture Models Using Continuous Non-Elliptical Distributions

Yuhong Wei, Department of Mathematics & Statistics, McMaster University, Hamilton, ON, Canada; Yang Tang^*; Emilie Shireman, Department of Psychological Sciences, University of Missouri, Columbia, MO, U.S.; Paul D. McNicholas^*; Douglas L. Steinley^†

===================================================================

Let G, H be groups, ϕ: G → H a group morphism, and A a G-graded algebra. The morphism ϕ induces an H-grading on A, and on any G-graded A-module, which thus becomes an H-graded A-module. Given an injective G-graded A-module, we give bounds for its injective dimension when seen as an H-graded A-module. Following ideas by Van den Bergh, we give an application of our results to the stability of dualizing complexes through change of grading.

2010 MSC: 16D50, 16E10, 16E65, 16W50, 18G05.
Keywords: injective modules, change of grading, dualizing complexes.

§ INTRODUCTION

Graded rings are ubiquitous in algebra. One of the main reasons is that the presence of a grading simplifies proofs and allows one to generalize many results (for example, the theories of commutative and noncommutative graded algebras are easier to reconcile than their ungraded counterparts). Furthermore, results can often be transferred from the graded to the ungraded context through standard techniques. In more categorical terms, there is a natural forgetful functor from the category _G A of graded modules over a G-graded algebra A to the category A of modules over A, and the challenge is to find a way to transfer information in the opposite direction. When G = ℤ this is usually done through "filtered-and-graded" arguments and spectral sequences. In this article we exploit a different technique, namely the existence of three functors ϕ_!, ϕ^*, ϕ_*, where ϕ_!: _G A → A is the usual forgetful functor (sometimes also called the push-down functor), ϕ^* is its right adjoint, and ϕ_* is the right adjoint of ϕ^*.

This technique has two advantages over the usual filtered-and-graded methods, namely that it does not depend on the choice of a non-canonical filtration, and that the group G is arbitrary. Its main drawback is that the functors in this triple do not preserve finite generation, noetherianity, or other "finiteness" properties unless further hypotheses are in place.

The problem we consider is the following. Suppose you are given an injective object I in the category _ℤ A. In general I is not injective as an A-module, but if A is noetherian then its injective dimension is at most one. Now, what happens if we consider gradings by more general groups? In general, given groups G, H and a group morphism ϕ: G → H, any G-graded object can be seen as an H-graded object through ϕ, see paragraph <ref>. In particular a G-graded algebra A inherits an H-grading, and there is a natural functor ϕ_!: _G A →_H A between the categories of G-graded and H-graded modules. The question thus becomes: given an injective object I in _G A, what is the injective dimension of ϕ_!(I) in _H A?

This question has been considered several times in the literature, but it has received no unified treatment. A classical result of R. Fossum and H.-B. Foxby <cit.>*Theorem 4.10 states that if A is ℤ-graded, noetherian and commutative, then a ℤ-graded-injective module has injective dimension at most 1. M.
Van den Bergh claims in the article <cit.>*below Definition 6.1 that this result extends to the noncommutative case if the algebra is ℤ-graded and A_0 is equal to the base field; a proof of this fact can be found in the preprint <cit.>. Other antecedents include <cit.>, where it is shown that if A is a noetherian ℤ-graded algebra then the injective dimension of A is finite if and only if its graded injective dimension is finite. Following the ideas of <cit.>*section 3, one can show that if A is ℤ-graded and noetherian, and M is a ℤ-graded module such that M_n = 0 for n ≪ 0, then the graded injective dimension of M coincides with its injective dimension as an A-module. Most of these results are obtained by the usual route of going from ungraded to graded objects through filtrations and spectral sequences. The only result that we could find in the literature regarding injective modules graded by groups other than ℤ states that if A is graded over a finite group then a graded module is graded injective if and only if it is injective <cit.>*2.5.2.

In order to give a general answer to the question we work with the functors ϕ_!, ϕ^*, ϕ_* mentioned above, which were originally introduced by A. Polishchuk and L. Positselski in <cit.>. These functors, collectively called the change of grading functors, turn out to be particularly well-adapted to the transfer of information of homological nature. Our main result, which includes most of the previous ones as special cases, is the following.

Let ϕ: G → H be a group morphism, let L = ker ϕ and let d be the projective dimension of the trivial L-module 𝕜. Let A be a G-graded noetherian algebra, and let I be an injective object of _G A. Then the injective dimension of ϕ_!(I) is at most d.

The proof depends on two facts. First, that if I is G-graded injective then ϕ_!(I) is an injective object in the additive subcategory generated by all modules of the form ϕ_!(M) with M a G-graded A-module; in other words, modules in the image of ϕ_! are _A^H(-,ϕ_!(I))-acyclic and hence can be used to build acyclic resolutions, see Lemma <ref>. The second is a result of independent interest, stating that given an H-graded A-module N we can obtain a resolution of N by objects in the additive category generated by ϕ_!(ϕ^*(N)), see Proposition <ref>; this resolution can be used to calculate the H-graded extension modules between N and ϕ_!(I), which gives the desired bound.

The article is structured as follows. In Section <ref> we review some basic facts on the category of graded modules and recall some general properties of the change of grading functors established in the article <cit.>. In Section <ref> we prove our main results on how regrading affects injective dimension. Finally, in Section <ref> we give similar results at the derived level and use them to study the behavior of dualizing complexes with respect to regradings, a question originally raised by Van den Bergh in <cit.>.

Throughout the article 𝕜 is a commutative ring, and unadorned spaces and tensor products are always over 𝕜. Also, all modules over rings are left modules unless otherwise stated. The letters G, H will always denote groups, and ϕ: G → H will be a group morphism.

Acknowledgements: The authors would like to thank Mariano Suárez-Álvarez for a careful reading of a previous version of this article.

§ THE CHANGE OF GRADING FUNCTORS

A G-graded 𝕜-module is a 𝕜-module V with a fixed decomposition V = ⊕_g ∈ G V_g; we say that v ∈ V is homogeneous of degree g if v ∈ V_g, and V_g is called the g-homogeneous component of V.
We usually say graded instead ofG-gradedifGis clear from the context.Given twoG-graded modulesVandW, their tensor product is also aG-graded module, where for eachg ∈G(VW)_g = ⊕_g' ∈ G V_ g' W_(g')^-1gA map between graded$̨-modules f: V → W is said to beG-homogeneous, or simply homogeneous, if f(V_g) ⊂ W_g for allg ∈ G. By definition, a homogeneous map f: V → W induces mapsf_g: V_g → W_g for each g ∈ G, and f = ⊕_g ∈ G f_g;we refer to f_g as the homogeneous component of degree g of f. Thesupport of a G-graded $̨-moduleVisV = {g ∈G |V_g ≠0}.The category_G $̨ has G-graded modules as objects and homogeneous$̨-linear maps as morphisms. Kernels and cokernels of homogeneous maps between graded$̨-modules are graded in a natural way, so a complex0 → V' → V → V”→ 0in _G $̨ is a short exact sequence if and only if it is a short exact sequence of$̨-modules, or equivalently if for each g ∈ G the sequence formed by taking g-homogeneous components is exact.Given an object V in _G $̨ andg ∈G, we denote byV[g]theG-graded$̨-module whose homogeneous component of degree g' is V[g]_g' = V_g'g. This gives a natural autoequivalence of _G $̨.We now recall the general definitions regardingG-graded$̨-algebras. The reader is referred to <cit.>*Chapter 2 for proofs anddetails.A G-graded $̨-algebra is aG-graded$̨-module A which is also a$̨-algebra, such that for allg,g' ∈Gand alla ∈A_g, a' ∈A_g'we haveaa' ∈A_gg'. IfAis aG-graded algebra then its structural mapρ: A →A [̨G]is defined asa ∈A_g↦a g ∈A_g [̨G]_gfor eachg ∈G; the fact thatAisaG-graded algebra implies that this is a morphism of algebras.AG-gradedA-module is anA-moduleMwhich is also aG-graded$̨-module such that for each g,g' ∈ G and all a ∈ A_g,m ∈ M_g' it happens that am ∈ M_gg'. Once again, we usually saygraded instead of G-graded. We say that A is graded left noetherian if every graded A-submodule of a finitely generated graded A-module is also finitely generated. If G is a polycyclic-by-finite group then A is graded noetherian if and only if it is noetherian<cit.>*Theorem 2.2.We denote by _G A the category whose objects are G-graded A-modules and whose morphisms are G-homogeneous A-linear maps.Notice that if M is a graded A-module then the graded $̨-moduleM[g]is also a gradedA-module, with the same underlyingA-module structure, so shifting also induces an autoequivalence of_G A. The category_G Ahas arbitrary direct sums and products. The direct sum of graded modules is again graded in an obvious way, but this is not the case for direct products. Given a collection of gradedA-modules{V^i |i∈I}, their direct product is the gradedA-module whose homogeneousdecomposition is given by⊕_g ∈ G∏_i ∈ I V^i_g.In other words, the forgetful functorØ: _G A →Apreserves direct sums, but not direct products.The category_G Ais a Grothendieck category with enough projective andinjective objects. Given an objectMof_G A, we will denote by_A^G Mand_A^G Mits projective and injective dimensions,respectively. Given two gradedA-modulesM, Nwe denote by^G_A(M,N)the$̨-module of all G-homogeneous A-linear morphisms from M to N.Since _G A has enough injectives, we can define for each i ≥ 0 thei-th right derived functor of ^G_A, which we denote by ^i^G_A. There is also an enriched homomorphism functor _A^G, given by_A^G(M,N) = ⊕_g ∈ G_A^G(M,N[g]),which is a G-graded $̨-submodule of_(̨M,N). We denote its right derived functors by^i_A^G.LetAbe aG-graded$̨-algebra. As shown in<cit.>*Section 1.3, a group homomorphism ϕ: G → H induces functors ϕ_!, ϕ_*: _G A →_H A and ϕ^*:_H A →_G A. 
We quickly review the construction for completeness.Let V be a G-graded $̨-module. We defineϕ_!(V)to be theH-graded$̨-module whose homogeneous component of degree h ∈ H isgiven byϕ_!(V)_h = ⊕_{g ∈ G |ϕ(g) = h} V_g.Analogously given a map f: V → W between G-graded $̨-modules, we defineϕ_!(f)to be the$̨-linear map whose homogeneous component ofdegree h ∈ H is given byϕ_!(f)_h = ⊕_{g ∈ G |ϕ(f) = h} f_g.Notice that ϕ_!(V) has the same underlying $̨-module asV. Inparticular,ϕ_!(A)is anH-graded$̨-algebra which is equal to A as$̨-algebra, and ifVis aG-gradedA-module thenϕ_!(V)is anH-gradedϕ_!(A)-module with the same underlyingA-module structure.Since the action ofAremains unchanged, iffisA-linear then so isϕ_!(f). This defines the functorϕ_!: _G A →_H ϕ_!(A).From now on we usually writeAinstead ofϕ_!(A)to lighten up thenotation, since the context will make it clear whether we are considering itas aG-graded or as anH-graded algebra.We defineϕ_*(V)andϕ_*(f), to be theH-graded$̨-module, and H-homogeneous map whose homogeneous components of degree h ∈ Hare given byϕ_*(V)_h = ∏_{g ∈ G |ϕ(g) = h} V_g,ϕ_*(f)_h = ∏_{g ∈ G |ϕ(f) = h} f_g,respectively. If V is also an A-module, we define the action of ahomogeneous element a ∈ A_g' with g' ∈ G over an element(v_g)_g ∈ϕ^-1(h)∈ϕ_*(V)_h as a(v_g) = (av_g). With thisaction ϕ_*(V) becomes an H-graded A-module, and we have defined thefunctor ϕ_*: _G A →_H A.Now let V',W' be H-graded $̨-modules and letf': V' →W'be ahomogeneous map. We setϕ^*(V') ⊂V' [̨G]to be thesubspace generated by all elements of the formv gwithv ∈V'homogeneous of degreeϕ(g), andϕ^*(f)(v g) = f(v) g. Inother words, for eachg ∈Gthe homogeneous components ofϕ^*(V')andϕ(f')of degreegare given byϕ^*(V')_g= V'_ϕ(g)g̨, f_g = f_ϕ(g).IfV'is anH-gradedA-module, thenV' [̨G]is anA [̨G]-module, and it is an inducedA-module through the structure mapρ: A →A [̨G]; it is immediate to check that with this action itbecomes aG-gradedA-module with(V' [̨G])_g = V' g̨foreachg ∈G, and thatϕ^*(V') ⊂V' [̨G]is aG-gradedA-submodule. It is also easy to check that iff'is homogeneous andA-linear then so isϕ^*(f'). Thus we have defined a functorϕ^*:_H A →_G A. We refer toϕ_!, ϕ^*andϕ_*collectively as the change of grading functors. It is clear from the definitions that the change of gradingfunctors are exact, and thatϕ_!, ϕ_*reflect exactness, i.e. a complex is exact if and only if its image by any of them is also exact. The functorϕ^*reflects exactness if and only ifϕis surjective. As mentionedbefore, we have some adjointness relations between these functors.[<cit.>*Proposition 3.2.1] The functor ϕ^* is right adjoint to ϕ_! and left adjoint to ϕ_*.Let M be an object of _G A and N an object of _H A. We definemaps_A^H(ϕ_!(M), N) @/^6pt/[r]^-α _A^G(M, ϕ^*(N)) @/^6pt/[l]^-βas follows. Given f: ϕ_!(M) → N, for each g ∈ G and each m ∈ M_g set α(f)(m) = f(m)g. Conversely, given f: M →ϕ^*(N), letϵ: [̨G] →$̨ be the counit of[̨G], i.e. the algebra mapdefined by settingϵ(g) = 1, and setβ(f) = 1 ϵ∘ f. Direct computation shows that these maps are well defined, natural, and mutual inverses. Thusϕ_!is the left adjoint ofϕ^*.Now we define maps _A^G(ϕ^*(N), M) @/^6pt/[r]^-γ _A^H(N, ϕ_*(M)) @/^6pt/[l]^-δas follows. Givenf: ϕ^*(N) → M, for eachh ∈ Hand eachn ∈ N_hwe setγ(f)(n) = (f(ng))_g ∈ϕ^-1(h). Conversely, givenf: N →ϕ_*(M), for eachg ∈ Gandn ∈ N_ϕ(g)we havef(n) ∈∏_g' ∈ϕ^-1(h) M_g', so we can setδ(f)(ng)astheg-th component off(n). Once again direct computation shows that thesemaps are well defined, natural, and mutual inverses. 
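To make the definition of ϕ_! concrete, the following toy sketch tracks only the dimensions of homogeneous components: a G-graded vector space with finite support is modeled as a dict {g: dim V_g}, and (ϕ_! V)_h = ⊕_{ϕ(g)=h} V_g means that dimensions add up along the fibers of ϕ. This is purely illustrative bookkeeping, not a formalization of the categorical statements above.

```python
# Toy illustration of phi_! at the level of dimensions of homogeneous
# components: dimensions are summed over each fiber of the group
# morphism phi, exactly as in (phi_! V)_h = (+)_{phi(g)=h} V_g.
from collections import defaultdict

def phi_shriek(graded_dims, phi):
    regraded = defaultdict(int)
    for g, dim in graded_dims.items():
        regraded[phi(g)] += dim
    return dict(regraded)

# Example: a Z^2-graded module pushed down along the total-degree map
# phi(a, b) = a + b (cf. the morphism used for Z^r-graded algebras below).
V = {(0, 0): 1, (1, 0): 2, (0, 1): 2, (1, 1): 3}
print(phi_shriek(V, lambda g: g[0] + g[1]))   # {0: 1, 1: 4, 2: 3}
```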
§ INJECTIVE DIMENSION AND CHANGE OF GRADINGRecall thatG,Hare groups andϕ: G → His a group morphism. We setL = ϕ. Throughout this sectionAdenotes aG-graded$̨-algebra.As stated in the Introduction, a G-graded A-module is projective if andonly if it is projective as A-module, i.e. the functor ϕ_! preservesthe projective dimension of an object. Our aim is to describe how ϕ_!affects the injective dimension of an object. We begin by recalling a previousresult related to this problem. [<cit.>*Corollaries 3.2.2, 3.2.3] Let M be an object of _G A. Then the following hold.* ^G_A M = ^H_A ϕ_!(M) and ^G_A M ≤^H_A ϕ_!(M).* ^G_A M ≤^H_A ϕ_*(M) and ^G_A M =^H_A ϕ_*(M).The natural inclusion of the direct sum of a family into itsproduct gives rise to a natural transformation η: ϕ_! ⇒ϕ_*. Notice that η(M): ϕ_!(M) →ϕ_*(M) is an isomorphism ifand only if for each h ∈ H the set M ∩ϕ^-1(h) is finite.If this happens we say that M is ϕ-finite. The following theorem follows immediately from Proposition <ref>. If an object M of _G A is ϕ-finite then ^G_A M =_A^H ϕ_!(M). If |L| < ∞ then every G-graded A-module is ϕ-finite. Also, ifA is ϕ-finite then every finitely generated G-graded A-module isϕ-finite, so this result applies in many usual situations. For example,assume A is ^r-graded for some r > 0, i.e. A is ^r-graded and A_ξ = 0 if ξ∉^r. Let ψ: ^r →be the morphism ψ(z_1, …, z_r) = z_1 + ⋯ + z_r. Then ψ_!(A) is -graded, and furthermore A_z = 0 if z ∉. Since for eachz ∈ the set ψ^-1(z) ∩^r is finite, the algebra A isψ-finite. Applying the theorem we see that _A^^r A =_A^ψ_!(A). If A is also noetherian then by<cit.>*3.3 Lemma we see that _A^^r A = _A A. The algebra [̨G] is a G-graded $̨-algebra, and hence throughϕit is also anH-graded algebra, so we may consider thecategory ofH-graded[̨G]-modules_H [̨G]. The algebra[̨H]is an object in this category with its usualH-grading and theaction of[̨G]induced byϕ. By <cit.>*Theorem 8.5.6,the functor- [̨H]: [̨L] →_H [̨G]is an equivalence ofcategories. In particular the projective dimension of[̨H]in_H [̨G]equals_[̨L]$̨.Given an object N of _H A we denote by 𝒮(N) the smallestsubclass of objects of _H A containing the set {ϕ_!(ϕ^*(N[h]))| h ∈ H} and closed under direct sums and direct summands.Set d = _[̨G]^H [̨H] = _[̨L]$̨. EveryH-gradedA-moduleNhas a resolution of length at mostdby objects of𝒮(N).We begin by defining a functor D_N: _H [̨G] →_H A. Given an object V of _H [̨G], the tensor product NV is an A-module with action induced by the map ρ: A → A [̨G], and we set D_N(V) to be the A-submodule ⊕_h ∈ H N_hV_h,with the obvious H-grading. Given a morphism f: V → W in _H [̨G],we set D_N(f) as the restriction and correstriction of _Nf.Fix h ∈ H. By definition D_N([̨G][h]) and ϕ_!(ϕ^*(N[h^-1]))[h] are A-submodules of N [̨G], and it is immediate to check that in bothcases the homogeneous component of degree h' ∈ H is N_h [̨G]_hh',so in fact these two H-graded A-modules are equal. Furthermore, if P isany projective object in _H [̨G] then there exists an object Q suchthat P ⊕ Q is a free H-graded [̨G]-module, which is isomorphic to⊕_i ∈ I ([̨G])[h_i] for some index set I, not necessarilyfinite, with h_i ∈ H. Now D_N commutes with direct summs, D_N(P) is adirect summand of D_N(P ⊕ Q) ≅⊕_i ∈ I D_N([̨G][h_i]) =⊕_i ∈ Iϕ_!(ϕ^*(N[h_i^-1]))[h_i], which obviously lies in𝒮(N). For each h ∈ H we define a map n ∈ N_h ↦ nh ∈D_N([̨H])_h; the direct sum of these maps gives us an isomorphism N ≅D_N([̨H]). 
Taking a projective resolution P^∙ of [̨H] oflength d and applying D_N, we obtain a complex D_N(P^∙) →D_N([̨H]) ≅ N; since [̨G] is a free $̨-module, projective[̨G]-modules are projective over$̨ so this is an exact complex, and from the previous paragraph we see that it is a resolution of N by objects in 𝒮(N). LetMbe aG-gradedA-module. Recall thatϕ^*(ϕ_!(M)) ⊂ M[̨G]consists of allmg'withm ∈ M_gandϕ(g) =ϕ(g'). For eachl ∈ Lwe have a mapM[l] →ϕ^*ϕ_!(M)whosehomogeneous component of degreeg ∈ Gis given bym ∈ M[l]_g ↦ m gl ∈ϕ^*ϕ_!(M). This induces a natural map⊕_l ∈ LM[l] →ϕ^*ϕ_!(M). This map has an inverse, given bymg' ∈ϕ^*(ϕ_!(M))↦ m ∈ M[g^-1g'], so we get a natural isomorphismϕ^*(ϕ_!(M)) ≅⊕_l ∈ L M[l]. This observation is usedin the following lemma. Assume A is left G-graded noetherian. Let I, M be objects of _G Awith I injective, and let N be a direct summand of ϕ_!(M). Then^i_A^H(N, ϕ_!(I)) = 0 for all i > 0.It is enough to show that the result holds for N = ϕ_!(M). In that casewe have isomorphisms_A^H(ϕ_!(M), ϕ_!(I))≅_A^G(M, ϕ^*(ϕ_!(I)))≅_A^G(M, ⊕_l ∈ L I[l] ).Since this isomorphism is natural in the first variable, we obtain for eachi ≥ 0 an isomorphism^i _A^H(ϕ_!(M), ϕ_!(I))≅^i_A^G(M, ⊕_l ∈ L I[l] ).Now by the graded version of the Bass-Papp Theorem (see<cit.>*Theorem 5.23 for a proof inthe ungraded case, which adapts easily to the graded context), the fact thatA is left G-graded noetherian implies that ⊕_l ∈ L I[l] isinjective, and hence the last isomorphism implies ^i _A^H(ϕ_!(M),ϕ_!(I)) = 0.We point out that the proof does not use the full Bass-Papp Theorem, just the fact that the direct sum of an arbitrary family of shifted copies of the sameinjective module is again injective, so we may wonder whether this property isweaker than G-graded noetherianity. In the ungraded case a module is calledΣ-injective if the direct sum of arbitrarily many copies of it is injective. Say that a G-graded A-module is graded Σ-injective if an arbitrary direct sum of shifted copies of itself is injective. Then by areasoning analogous to that of <cit.>*Theorem, pp. 205-6 one can prove that an algebra is left G-graded noetherian if and only if every injective object of _G A is graded Σ-injective. We thank MathOverflow user Fred Rohrer for the reference. We are now ready to prove the main result of this section.Set d = _[̨L]$̨. AssumeAis leftG-graded noetherian. Forevery objectMof_G Awe have_A^G M ≤^H_Aϕ_!(M) ≤^G_A M + dThe first inequality holds by Proposition <ref>. The casewhere M is of infinite injective dimension is trivially true, so let us consider the case where n = _A^G M is finite. In this case we work by induction.If n = 0 then M is injective in _G A. Let N be an object of_H A, and let P^∙→ N be a resolution of N of length d by objects of 𝒮(N) as in Proposition <ref>. It followsfrom Lemma <ref> that ^i _A^H(P,ϕ_!(I)) = 0 for everyobject P of 𝒮(N), so in fact P^∙ is an acyclic resolution of N and^i_A^H(N, ϕ_!(M))≅ H^i(_A^H(P^∙, ϕ_!(M)))for each i ≥ 0. Thus ^i_A^H(N, ϕ_!(M)) = 0 for all i > d,and since N was arbitrary this implies that _A^H ϕ_!(M) ≤ d.Now assume that the result holds for all objects of _G A with injective dimension less than n. Let M → I be an injective envelope of M in _G A, and let M' be its cokernel. 
Then ^G_A M' =n-1, and so by the inductive hypothesis ^H_A ϕ_!(M') ≤ n-1+d.Now we have an exact sequence in _H A of the form0 →ϕ_!(M) →ϕ_!(I) →ϕ_!(M') → 0.By standard homological algebra the injective dimension of ϕ_!(M) isbounded above by the maximum between _A^H ϕ_!(I) + 1 ≤ d + 1 and _A^H ϕ_!(M') + 1 ≤ n + d. This gives us the desiredinequality. § CHANGE OF GRADING AT THE DERIVED LEVEL AND DUALIZING COMPLEXESDualizing complexes for noncommutative rings were introduced by A. Yekutieliin the context of connected-graded algebras in order to study theirlocal cohomology; they have proven to be very useful in the study of ringtheoretical properties of non commutative rings, see for exampleYek-dc, Jor-lc, VdB-existence-dc, YZ-aus-dc, WZ-survey-dc,YZ-rigid-dc, etc. A dualizing complex is essentially an objectR^∙inthe derived category of A^esuch that the functor_A(-,R^∙)is a duality between^b( A)and^b( A^), for aprecise definition see Definition <ref>. A graded dualizingcomplex in principle only guarantees dualities at the graded level, butaccording to Van den Bergh, a-graded dualizing complex is also anungraded dualizing complex <cit.>. In this section we show that in fact a^r-graded dualizing complex remains a dualizing complex after regrading. Once you have Theorem <ref>, the proof in the^r-graded case is no more difficult than in the-graded case, except for the technical complications due to the extra gradings. Still, wefelt it was worthwhile to develop these technicalities in order to obtain aprecise statement of Theorem <ref>.Throughout this section$̨ is a field, G is an abelian group, and A is aG-graded $̨-algebra. We denote byA^ethe enveloping algebraA A^; sinceGis abelian bothA^andA^eareG-graded algebras. Let us fix some notation regarding derived categories. Given an abeliancategory, we denote by(A)the category of complexes of objects ofwith homotopy classes of maps of complexes as morphisms, and by()the derived category of. As usual we denote by^+(), ^-(),^b()the full subcategories of(A)consisting of left bounded, rightbounded and bounded complexes. Recall that an injective resolution of aleft bounded complexR^∙is a quasi-isomorphismR^∙→I^∙whereI^∙is a left bounded complex formed by injectiveobjects of. Ifhas enough injectives then every left bounded complexhas an injective resolution. Analogous remarks apply for projectiveresolutions of right bounded complexes.IfF: →ℬis an exact functor between abelian categories, then by the universal property of derived categories there is an induced functor() →(ℬ), which by abuse of notation we will also denote byF.The mapsa ∈ A ↦ a1 ∈ A^eanda ∈ A^↦ 1a ∈ A^einduce restriction functors_A: _G A^e →_G Aand_A^: _G A^e →_G A^. These functors are exact andpreserve projectives and injectives, which can be proved following the linesof the proof in the caseG = found in <cit.>*Lemma 2.1. IfHis any group andϕ: G → His a group morphism then it is clear that the associated change of grading functors commute with therestriction functors in the obvious sense. Since restriction and change ofgrading functors are exact, they induce exact functors between thecorresponding derived categories.There exists a functor_A^G: (_G A^e)^×(_G A^e) →(_G A^e)defined as follows. 
Given complexesM^∙, N^∙, for eachn ∈we set_A^G(N^∙,M^∙)^n = ∏_p ∈_A^G(N^p, M^p+n),where the product is taken in the category ofG-gradedA^e-modules; this sequence ofG-gradedA^e-modules is made into a complex withdifferentiald^n= ∏_p ∈ ((-1)^n+1_A^G(d_N^p,M^p+n) + _A^G(N^p,d_M^p+n)).The action of_A^Gon maps is defined in the usual way.The functor_A^Ghas a right derived functor_A^G:(_G A^e)^×(_G A^e)→(_G A^e).WhenM^∙is an object of^+(_G A^e)such thatM^iis injective as leftA-module for eachi ∈, then _A^G(N^∙, M^∙)≅_A^G(N^∙,M^∙) for every objectN^∙of(_G A^e). Analogously, ifN^∙is an object of^-(_G A^e)such thatN^iisprojective as leftA-module for eachi ∈, then _A^G(N^∙, M^∙) ≅_A^G(N^∙, M^∙)for every objectM^∙of(_G A^e). This is proved in the caseG = in <cit.>*Theorem 2.2, and the general proof follows thesame reasoning. There is a completely analogous functor^G_A^whose derived functor_A^^Ghas similar properties.LetR^∙be a complex ofA^e-modules. SeeingA^as a complex ofA^e-modules concentrated in homological degree0, there is a mapA^→_A^G(R^∙, R^∙)given by sendinga∈ A^to right multiplication byaacting onR^∙.Now letP^∙→ R^∙be a projective resolution ofR^∙,so there is an isomorphism _A^G(R^∙, R^∙) ≅_A^G(P^∙, P^∙),and we get a map_A: A^→_A^^r(R^∙, R^∙).This map is independent of the projective resolution we choose, so we refer toit as the natural map fromA^to_A^^r(R^∙,R^∙). In the same way there is a natural map fromAto_A^^^r(R^∙, R^∙). The proof that these maps areindependent of the chosen resolution is quite tedious but elementary; thereader is referred to <cit.>*Appendix A for details.Assume thatG = ^rfor somer ≥ 0. We say thatAis^r-graded if A ⊂^r, and that it is connected ifA_0 =$̨. If A is ^r-graded then so are A^ and A^e, and they areconnected if and only if A is connected. The following definition is adapted from <cit.>*Definition 3.3.Let A be a connected ^r-graded noetherian algebra. A^r-graded dualizing complex over A is a bounded complexR^∙ of A^e-modules with the following properties. *The cohomology modules of _A(R^∙) and _A^(R^∙)are finitely generated. *Both _A(R^∙) and _A^(R^∙) have finite injective dimension. *The maps _A: A^→_A^^r (R^∙,R^∙) and _A^: A →ℛ_A^^^r(R^∙,R^∙) are isomorphisms in (_^r A^e).A dualizing complex in the ungraded sense is an object of ( A^e) which complies with the ungraded analogue of the previous definition. Our objectiveis to show that a ^r-graded dualizing complex remains a dualizing complexif we change (or forget) the grading. Since being finitely generated isindependent of grading, item <ref> of the definition remains true if wechange or forget the grading. To see how item <ref> behaves withrespect to change of grading requires a derived version of Theorem<ref>, while item <ref> is also invariant by change of grading by a simple argument. We provide the details in the followinglemmas, in a slightly more general context.Recall that given a group morphism ϕ: G → H, a G-graded $̨-vectorspaceMis said to beϕ-finite if M ∩ϕ^-1(h)is afinite set for eachh ∈ H.Let ϕ: G → H be a group morphism and set L = ϕ. LetR^∙ be a bounded complex of G-graded A-modules. *If the cohomology modules of R^∙ are ϕ-finite then _A^G R^∙ = _A^H ϕ_!(R^∙) *Let d = _[̨L]$̨. IfAis leftG-graded noetherian thenthe following inequalities hold_A^G R^∙≤_A^H ϕ_!(R^∙)≤_A^G R^∙ + d.Let R^∙→ I^∙ be an injective resolution of minimal length. It is enough to prove the statement with I^∙ instead of R^∙.Suppose I^∙ has ϕ-finite cohomology modules. Recall that there isa natural transformation η: ϕ_! 
⇒ϕ_*, and that η(M)is an isomorphism if an only if M is ϕ-finite. The class ofϕ-finite G-graded A-modules is closed by extensions, so applying<cit.>*Proposition 7.1 (in the reference “thick” stands for“closed by extensions”) we get that the map ϕ_!(I^∙) →ϕ_*(I^∙) is a quasi-isomorphism, and since ϕ_* preservesinjectives it is an injective resolution, so _A^G R^∙≥_A^H ϕ_!(R^∙). If the inequality were strict, then we couldtruncate ϕ_*(I^∙) to obtain a shorter complex of the form⋯→ϕ_*(I^j-1) →ϕ_*(I^j) →ϕ_*( d^j) → 0 →⋯with ϕ_*( d^j) an injective H-graded A-module. Since ϕ_* preserves injective dimension by Proposition <ref>, this would contradict the fact that I^∙ is a minimal resolution ofR^∙, so in fact _A^G R^∙ = _A^Hϕ_!(R^∙). This proves item <ref>For item <ref>, assume first that I^∙ isbounded. We proceed by induction on s, the length of I^∙. The cases = 0 is a special case of Theorem <ref>. Now let t ∈be the minimal homological degree such that I^t ≠ 0, and consider theexact sequence of complexes0 → I^> t→ I^∙→ I^t → 0,where I^t is seen as a complex concentrated in homological degree t and I^> t is the subcomplex of I^∙ formed by all components inhomological degree larger than t. Thus there is a distinguished triangle ϕ_!(I^> t) →ϕ_!(I^∙) →ϕ_!(I^t) → in (_H A). By the inductive hypothesis the inequality holds for the first and thirdcomplexes of the triangle, so a simple argument with long exact sequencesshows that the corresponding inequality holds for ϕ_!(I^∙).Finally, if I^∙ is not bounded then we only have to prove thatϕ_!(I^∙) does not have finite injective dimension. Now ϕ^* preserves injective dimensions, and since ϕ^*(ϕ_!(I^∙)) ≅⊕_l ∈ L I[l]^∙ has infinite injective dimension, so does ϕ_!(I^∙).Let G,H be abelian groups and ϕ: G → H a group morphism. Assume Ais G-graded noetherian. Let S^∙, R^∙ be boundedcomplexes of G-graded A^e-modules such that the cohomology modules ofR^∙ are finitely generated as left A-modules. *The mapϕ_!(_A^G(R^∙, S^∙))→_A^H(ϕ_!(R^∙), ϕ_!(S^∙))is an isomorphism. *The composition ϕ_!(A)[r]^-ϕ_!(_A) ϕ_!(_A^G(R^∙, R^∙)) [r]_A^H(ϕ_!(R^∙), ϕ_!(R^∙))equals _ϕ_!(A): ϕ_!(A) →_A^H(ϕ_!(R^∙),ϕ_!(R^∙)) The map from item <ref> is obtained as follows. Let P^∙→ R^∙ be a projective resolution. Then ϕ_!(P^∙) →ϕ_!(R^∙) is also a projective resolution since ϕ_! is exact andpreserves projectives. Now by definition of ^G_A(R^∙, S^∙),we have ϕ_!(_A^G(P^∙, S^∙)) ⊂_A^H (ϕ_!(P^∙), ϕ_!(S^∙)), and the desired map is the inclusion.Once again this map is independent of the chosen projective resolution.Clearly item <ref> follows from this.If R^∙ and S^∙ are concentrated in homological degree 0, item <ref> is a well-known result, see for example<cit.>*Proposition 1.3.7. The general result follows by standardarguments using <cit.>*Proposition I.7.1(i). We are now ready to prove the main result of this section. Let A be a connected ^r-graded noetherian $̨-algebra and letR^∙be a^r-graded dualizing complex overA.*Let s > 0 and let ϕ: ^r →^s be a group morphism such that ϕ_!(A) is ^s-graded connected. Then ϕ_!(R^∙) is a ^s-graded dualizing complex overϕ_!(A) of injective dimension ^^r_A R^∙. *Let Ø: (_^r A^e) →( A^e) be the forgetful functor. Then Ø(R^∙) is a dualizing complex over A in the ungraded sense, of injective dimension at most ^^r_A R^∙ + 1.Let us prove item <ref>. As we have already noticed, ϕ_! commutes with the restriction functors and does not change the fact that a bimodule is finitely generated as left or right A-module, soϕ_!(R^∙) complies with item <ref> of Definition<ref>. 
Since A is ^r-graded noetherian it is also^s-graded noetherian, and hence ϕ_!(A) is locally finite; thisimplies that A is ϕ-finite, otherwise ϕ_!(A) would have ahomogeneous component of infinite dimension. Since the cohomology modules ofR^∙ are finitely generated, they are also ϕ-finite and hence byitem <ref> of Lemma <ref>_A^^sϕ_!(R^∙) = _A^^r R^∙, so item <ref> of Definition <ref> also holds for R^∙.Finally item <ref> of the definition follows immediately from item<ref> of Lemma <ref>.We now prove item <ref>. Let ψ: ^r → be the mapψ(z_1, …, z_r) = z_1 + ⋯ + z_r. Then A is ψ-finite andψ_!(A) is connected -graded, so by the first item ψ_!(R^∙) is a -graded dualizing complex over A of injective dimension_A^^r R^∙. Now a similar reasoning as the one we used forthe first item, but this time using item <ref> ofLemma <ref>, shows that Ø(ψ_!(R^∙)) =Ø(R^∙) is a dualizing complex and gives the bound for its injectivedimension. CQ-polycyclicarticle author=Chin, William, author=Quinn, Declan, title=Rings graded by polycyclic-by-finite groups, journal=Proc. Amer. Math. Soc., volume=102, date=1988, number=2, pages=235–241, Eks-auslanderarticle author=Ekström, Eva Kristina, title=The Auslander condition on graded and filtered Noetherian rings, conference=title= Année,address=Paris, date=1987/1988, ,book= series=Lecture Notes in Math.,volume=1404, publisher=Springer,place=Berlin,, date=1989, pages=220–245,FW-direcsumrepsarticle author=Faith, Carl, author=Walker, Elbert A., title=Direct-sum representations of injective modules, journal=J. Algebra, volume=5, date=1967, pages=203–221, FF-gradedarticle author=Fossum, Robert, author=Foxby, Hans-Bjørn, title=The category of graded modules, journal=Math. Scand., volume=35, date=1974, pages=288–300, GW-noetherian-bookbook author=Goodearl, K. R., author=Warfield, R. B., Jr., title=An introduction to noncommutative Noetherian rings, series=London Mathematical Society Student Texts, volume=61, edition=2, publisher=Cambridge University Press, Cambridge, date=2004, pages=xxiv+344,Hart-RDbook author=Hartshorne, Robin, title=Residues and duality, series=Lecture notes of a seminar on the work of A. Grothendieck, given at Harvard 1963/64. With an appendix by P. Deligne. Lecture Notes in Mathematics, No. 20, publisher=Springer-Verlag, place=Berlin, date=1966, pages=vii+423, Jor-lcarticle author=Jørgensen, Peter, title=Local cohomology for non-commutative graded algebras, journal=Comm. Algebra, volume=25, date=1997, number=2, pages=575–591, Lev-ncregarticle author=Levasseur, Thierry, title=Some properties of noncommutative regular graded rings, journal=Glasgow Math. J., volume=34, date=1992, number=3,pages=277–300,Mont-hopf-bookbook author=Montgomery, Susan, title=Hopf algebras and their actions on rings, series=CBMS Regional Conference Series in Mathematics, volume=82, publisher=Published for the Conference Board of the Mathematical Sciences, Washington, DC, date=1993, pages=xiv+238, NV-graded-book3book author=Năstăsescu, Constantin, author=Van Oystaeyen, Freddy, title=Methods of graded rings, series=Lecture Notes in Mathematics,volume=1836, publisher=Springer-Verlag, place=Berlin, date=2004, pages=xiv+304, RZ-twistedarticleauthor=Rigal, L., author=Zadunaisky, P., title=Twisted Semigroup Algebras,journal=Alg. Rep. Theory,date=2015,number=5,pages=1155–1186, PP-secondHHarticle author=Polishchuk, Alexander, author=Positselski, Leonid, title=Hochschild (co)homology of the second kind I, journal=Trans. Amer. Math. 
Soc., volume=364, date=2012, number=10, pages=5311–5368,VdB-existence-dcarticle author=van den Bergh, Michel, title=Existence theorems for dualizing complexes over non-commutative graded and filtered rings, journal=J. Algebra, volume=195, date=1997, number=2, pages=662–679, WZ-survey-dcarticle author=Wu, Q.-S., author=Zhang, J. J., title=Applications of dualizing complexes, conference=title=Proceedings of the Third International Algebra Conference,address=Tainan,date=2002, , book=publisher=Kluwer Acad. Publ., Dordrecht, , date=2003, pages=241–255, Yek-dcarticle author=Yekutieli, Amnon, title=Dualizing complexes over noncommutative graded algebras, journal=J. Algebra, volume=153, date=1992, number=1, pages=41–84, issn=0021-8693, Yek-notearticle author=Yekutieli, A., title=Another proof of a theorem of Van den Bergh about graded-injectivemodules, date=2014, note=Available at <http://arxiv.org/abs/1407.5916>,YZ-aus-dcarticle author=Yekutieli, Amnon, author=Zhang, James J., title=Rings with Auslander dualizing complexes, journal=J. Algebra, volume=213, date=1999, number=1, pages=1–51, YZ-rigid-dcarticle author=Yekutieli, Amnon, author=Zhang, James J., title=Rigid dualizing complexes over commutative rings, journal=Algebr. Represent. Theory, volume=12, date=2009, number=1, pages=19–52, Zad-thesisbookauthor=Zadunaisky, Pablo,title=Homological regularity properties of quantum flag varieties and related algebras,year=2014,note=PhD Thesis. Available online at<http://cms.dm.uba.ar/academico/carreras/doctorado/desde>, A.S.: IMAS-CONICET y Departamento de MatemáticaFacultad de Ciencias Exactas y Naturales,Universidad de Buenos Aires,Ciudad Universitaria, Pabellón 11428, Buenos Aires, Argentina.P.Z. :Instituto de Matemática e Estatística, Universidade de São Paulo. Rua do Matão, 1010 CEP 05508-090 - São Paulo - SP | http://arxiv.org/abs/1703.08721v2 | {
"authors": [
"Andrea Solotar",
"Pablo Zadunaisky"
],
"categories": [
"math.KT",
"math.RA"
],
"primary_category": "math.KT",
"published": "20170325173827",
"title": "Change of grading, injective dimension and dualizing complexes"
} |
| http://arxiv.org/abs/1703.08597v2 | {
"authors": [
"Yu Lei",
"Srimanta Pakhira",
"Kazunori Fujisawa",
"Xuyang Wang",
"Oluwagbenga Oare Iyiola",
"Nestor Perea Lopez",
"Ana Laura Elias",
"Lakshmy Pulickal Rajukumar",
"Chanjing Zhou",
"Bernd Kabius",
"Nasim Alem",
"Morinobu Endo",
"Ruitao Lv",
"Jose L. Mendoza-Cortes",
"Mauricio Terrones"
],
"categories": [
"cond-mat.mtrl-sci"
],
"primary_category": "cond-mat.mtrl-sci",
"published": "20170324210839",
"title": "Low temperature synthesis of heterostructures of transition metal dichalcogenide alloys (WxMo1-xS2) and graphene with superior catalytic performance for hydrogen evolution"
} |
Evaluating Connection Resilience for the Overlay Network Kademlia
Henner Heck, Olga Kieselmann and Arno Wacker
===================================================================
Kademlia is a decentralized overlay network, up to now mainly used for highly scalable file sharing applications. Due to its distributed nature, it is free from single points of failure. Communication can happen over redundant network paths, which makes information distribution with Kademlia resilient against failing nodes and attacks. This makes it applicable to more scenarios than Internet file sharing. In this paper, we simulate Kademlia networks with varying parameters and analyze the number of node-disjoint paths in the network, and thereby the network connectivity. A high network connectivity is required for communication and system-wide adaptation even when some nodes or communication channels fail or get compromised by an attacker. With our results, we show the influence of these parameters on the connectivity and, therefore, the resilience against failing nodes and communication channels.
§ INTRODUCTION
Kademlia <cit.> is a well-known distributed overlay network which is mainly used for Internet file sharing, e.g., BitTorrent <cit.>. It has a decentralized structure with redundant communication paths eliminating single points of failure. This makes it suitable for other research fields, e.g., the emerging Industry 4.0 context. In Industry 4.0, distributed Cyber-Physical Systems (CPS), consisting of multiple networked nodes, are expected to improve automated industrial processes significantly <cit.>. The networked nodes of a CPS interact with their physical environment using sensors and actuators, and store information about its state and development. Two examples for distributed CPS are a smart camera network (SCN) and a network intrusion detection system (IDS). In an SCN, multiple networked cameras collaborate in surveilling and tracking developments in an observed area. An IDS secures corporate networks with several branches by collaboratively detecting distributed attacks. A common requirement for all these systems is the ability to exchange information between nodes. This information must be exchanged via communication channels, which can be either direct or indirect via other nodes. However, we must consider that nodes or communication channels might fail. Since some nodes might be publicly accessible, we must consider that they can fail due to an attack. To still achieve reliable communication, we require redundant communication channels for resilient inter-node communication <cit.>. More precisely, to tolerate failing nodes, there must be multiple node-disjoint communication paths through the network for any node pair. The minimum number of node-disjoint paths for any node pair in a network is the network connectivity.
The main contribution of this paper is a thorough evaluation of the connectivity of the Kademlia overlay network, yielding the resilience of the network against node failures and disturbed communication channels. The rest of this paper is organized as follows: First, we discuss related research about overlay network connectivity in Section <ref> and present our assumptions in Section <ref>. After that, we briefly describe in Section <ref> the Kademlia protocol and the mathematical foundations for computing the network connectivity. Based on this, we present and discuss the results of our connectivity measurements in Section <ref>.
Finally, we conclude our paper in Section <ref> with a brief summary and provide an outlook on future research.
§ RELATED WORK
Kademlia and overlay networks in general have been studied extensively in the scientific literature. A survey about research on robust peer-to-peer networks from 2006 <cit.> already lists several hundred references. Another survey from 2011 with a focus on security aspects reaches close to a hundred references <cit.>. Despite the large number of publications in general, the global network connectivity of Kademlia has not been thoroughly evaluated. We limit our discussion of related work to literature with relevance for connectivity of structured overlay networks built with Kademlia or its descendants.
In <cit.>, the authors simulate Kademlia networks and apply churn (joining/leaving of nodes) to evaluate resilience. While the basic premise is similar to ours, they measure response times and number of message hops, not network connectivity. In <cit.>, the authors insert nodes into a real-world BitTorrent network. The main focus of that paper is on connectivity problems within the network caused by technical obstacles such as firewalls and network address translation (NAT). The authors analyze connectivity properties of small groups of nodes. They do not measure the network-wide connectivity. Similarly, the authors of <cit.> insert nodes into real-world overlay networks built by the BitTorrent protocol to measure round trip times and message rates for resource lookups. Additionally, they measure “connectivity artifacts” and “communication locality”. Artifacts emerge from nodes that make contact with the authors' nodes but cannot be contacted by them. As in <cit.>, the authors conclude that such a behaviour is most likely caused by firewalls and NAT. The communication locality measurements show to what degree nodes preferably communicate with other nodes that, according to the protocol's definition of node distance, are near to them. While both properties are related to the network connectivity, it is not measured or derived. The authors of <cit.> present a crawling software for capturing connectivity graphs of networks built by the KAD protocol, a descendant of Kademlia. They insert specially modified crawling nodes into real-world networks to contact other nodes and dump the contents of their routing tables. Those tables are then used to create connectivity graphs of the networks. In <cit.>, the same authors characterize the resilience of those connectivity graphs and of other graphs resulting from simulations. While their goal is similar to ours, their approach is of a statistical nature and does not calculate the network connectivity. In <cit.>, the authors propose different measures to make Kademlia networks more resilient towards malicious nodes. One of those measures is the use of node-disjoint paths for lookup procedures. The authors measure success rates for lookup procedures using different numbers of disjoint paths. Their simulations imply that a certain average level of connectivity is present in a network, but they do not measure the actual connectivity. In contrast, our main goal is to determine the network connectivity of Kademlia in dependence on its parameters. Some of the related work, e.g., <cit.>, even relies on a given network connectivity, but it was determined neither analytically nor experimentally before.
§ SYSTEM MODEL
We consider a distributed system consisting of multiple networked nodes.
The basic functioning of one node is not dependent on the functioning of others. The nodes exchange information for collaboration purposes, depending on the system's specific purpose and implementation. Each node is able to communicate with any other node, either directly or indirectly via others. The communication structure is organized by the Kademlia overlay network (cf. Section <ref>). We assume the presence of an attacker with the goal of disturbing, disabling or controlling nodes and communication channels. We call a node which has been successfully attacked a compromised node. There are several other causes exhibiting the same effect as a compromised node, e.g., maintenance, failures from defects, or other disturbances like power outages. Without additional measures, these are indistinguishable from an attack. If an attacker has compromised a node, we assume that she is able to fully impersonate the node towards the rest of the system. Therefore, an attacker can disseminate information into the network as a legitimate part of the system and also deny requests coming from other nodes and, thus, hinder or prevent information exchange. Communication between two nodes is not always direct, so other nodes can be necessary for message transfer. Additionally, we assume that the attacker can subvert at most a arbitrary nodes at any time. With regard to communication channels, we assume an attacker or other causes can disturb the channel, causing message loss. This leads to a certain percentage of sent messages not reaching their destination.
§ CONNECTIVITY
In this section, first, we present the properties and mechanisms of Kademlia important for routing and contact management. To analyze the network connectivity, we introduce the mathematical foundations to transfer the network structure of Kademlia into the domain of graph theory by creating a connectivity graph. Next, we describe the mathematical algorithms and necessary graph transformations for calculating the graph connectivity. Finally, we use the mathematical foundations to define the resilience of the communication network.
§.§ Kademlia
With Kademlia, each node and each stored data object is identified by a numerical id with the fixed bit-length b. These identifiers are generated from a node's network address or the data object, respectively, using a cryptographically secure hash function with the goal of an equal distribution of identifiers in the identifier space. Each node maintains a routing table with identifiers and network addresses of other nodes, its so-called contacts. The routing table consists of b so-called k-buckets to store the contacts of the node. The buckets are indexed from 0 to b-1, and the contacts are distributed into these buckets depending on the distance between their identifiers 𝑖𝑑_i and the node's id. For this, the distance between two identifiers is computed using the XOR metric, meaning that for two identifiers 𝑖𝑑_a and 𝑖𝑑_b the distance is 𝑑𝑖𝑠𝑡(𝑖𝑑_a,𝑖𝑑_b) = 𝑖𝑑_a ⊕ 𝑖𝑑_b, interpreted as an integer value. The buckets are populated with those contacts 𝑖𝑑_i fulfilling the condition 2^i ≤ 𝑑𝑖𝑠𝑡(𝑖𝑑,𝑖𝑑_i) < 2^(i+1), with i being the bucket index. This means that the bucket with the highest index covers half of the id space, the next lower bucket a quarter of the id space, and so on.
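As an illustration of the distance and bucket arithmetic just described, the following minimal Python sketch (our own, not code from any Kademlia implementation) computes the XOR metric and the resulting bucket index for integer identifiers; the rule 2^i ≤ 𝑑𝑖𝑠𝑡(𝑖𝑑,𝑖𝑑_i) < 2^(i+1) reduces to taking the bit length of the XOR distance.

def xor_distance(id_a: int, id_b: int) -> int:
    """XOR metric: the bitwise XOR of two identifiers, read as an integer."""
    return id_a ^ id_b

def bucket_index(own_id: int, contact_id: int) -> int:
    """Index i of the k-bucket holding contact_id, i.e. the unique i with
    2**i <= xor_distance(own_id, contact_id) < 2**(i + 1)."""
    d = xor_distance(own_id, contact_id)
    if d == 0:
        raise ValueError("a node does not store itself in its routing table")
    return d.bit_length() - 1

# Identifiers differing in the most significant of b = 8 bits land in the
# highest bucket, which covers half of the id space.
assert bucket_index(0b00000000, 0b10000001) == 7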
The maximum number of contacts stored in one bucket is k. Next to b and k, another defining property of a Kademlia setup is the request parallelism α, which determines how many contacts are queried in parallel when a node wants to either locate another node or retrieve/store a data object. Greater values can speed up the operation, while at the same time increasing the resulting network load. The staleness limit s determines how often in a row the communication with a contact must fail, so that it is considered stale and removed from the routing table. Greater values of s delay the removal of actually stale nodes by waiting for more failed communication attempts, while small values might lead to a frequent removal of non-stale nodes due to a disturbed communication channel. The Kademlia authors set the default values b=160, k=20, α=3, and s=5. The nodes of a Kademlia network can locate resources (other nodes, data objects) by means of their identifiers. Given a target identifier, a node queries the α nodes from its routing table closest to that identifier. Those, in turn, answer with their own list of closest nodes, which can then be used in new queries. This way, the requesting node iteratively gets closer to the target identifier. This process ends when k nodes have been successfully contacted, or no more progress is made in getting closer to the target identifier. The purpose of a lookup procedure is to locate a node or data object; the purpose of a dissemination procedure is to locate appropriate nodes for storing a data object.
§.§ Connectivity Graph
The representation of the network structure as a connectivity graph enables the application of concepts and algorithms from graph theory to analyze properties of the network. The connectivity graph D(V,E), with the vertices V and edges E, is a directed graph representation of the nodes and their routing tables. Each vertex from the connectivity graph represents a distinct node from the network. Hence, the number of vertices equals the number of network nodes. To construct the connectivity graph, we add edges to the graph according to the routing tables of Kademlia. For each node pair 𝑖𝑑_i, 𝑖𝑑_j, represented in the graph by vertices v and w respectively, we insert the directed edge (v,w) into the set of edges E if and only if node 𝑖𝑑_j is present in the routing table of 𝑖𝑑_i.
Generally, in network graphs, a capacity value is assigned to the edges for expressing the communication bandwidth between nodes. This is not a necessity for connectivity graphs, since the existence of the edges is enough to indicate a connection between nodes. However, since it is necessary for later steps, we assign a capacity of 1 to each edge.
§.§ Vertex Connectivity for Vertex Pairs
A directed edge in the connectivity graph D(V,E) can be interpreted as a one-way water pipe. The maximum amount of water able to flow through the pipe per time unit is modeled by the edge capacity. The maximum flow between two vertices v and w is the sum of the capacities of the minimum edge cut. This is the set of edges with the smallest total capacity whose removal would cut off any flow from v to w. In other words, the minimum edge cut is the bottleneck which determines the maximum possible flow from v to w. Analogous to the minimum edge cut for two vertices v and w, the minimum vertex cut is the minimum number of vertices whose removal cuts all paths from v to w. The order of the minimum vertex cut is called the vertex connectivity from v to w, i.e., κ(v,w).
Menger's theorem for directed graphs states that for two non-adjacent vertices v and w the vertex connectivity is equal to the maximum number of pairwise vertex-disjoint paths from v to w <cit.>. This number correlates directly with the communication resilience (cf. Section <ref>). Therefore, to evaluate the resilience, we need to calculate the vertex connectivity. There are multiple algorithms to compute the maximum flow/minimum edge cut between any two vertices in a graph. However, in general, the vertex connectivity does not correspond to the maximum flow/minimum edge cut. To bridge the gap from computing the maximum flow/minimum edge cut to computing the vertex connectivity, we apply Even's algorithm (e.g., <cit.>). It transforms the connectivity graph D(V,E) such that the maximum flow between two non-adjacent vertices is equal to their vertex connectivity. This allows the application of maximum flow algorithms to calculate the vertex connectivity. Even's graph transformation is applied to the original connectivity graph D(V,E) consisting of n vertices and m edges. We assume that D(V,E) has neither self-loops nor parallel edges. The problem transformation is done by applying the following steps to each vertex of D(V,E):
* Let v be a vertex of the directed graph D(V,E) with the incoming degree d_𝑖𝑛,v and the outgoing degree d_𝑜𝑢𝑡,v.
* Split v into the two vertices v' (incoming vertex) and v” (outgoing vertex).
* Make all incoming edges of v point to v', so that it has the incoming degree d_𝑖𝑛,v.
* Make all outgoing edges of v originate from v”, so that it has the outgoing degree d_𝑜𝑢𝑡,v.
* Insert the edge (v',v”) with capacity 1. Now the outgoing degree of v' and the incoming degree of v” are 1.
The resulting graph D'(V',E') has 2n vertices and m+n edges and can be used to calculate the vertex connectivity by applying a maximum flow algorithm. An example for such a graph transformation is shown in Figure <ref>.
§.§ Vertex Connectivity for Graphs
The vertex connectivity of a graph D(V,E) is the minimum of the vertex connectivities of all pairs of distinct non-adjacent vertices in the graph, i.e.,
κ(D) = min{ κ(v,w) : v,w ∈ V, v ≠ w, (v,w) ∉ E }.
If D(V,E) is not a complete graph, we determine the vertex connectivity κ(v,w) for a pair of non-adjacent vertices v and w by computing the maximum flow from the outgoing vertex v” to the incoming vertex w' in the transformed graph D'(V',E'). Therefore, the vertex connectivity κ(D) for the whole graph can be determined by finding the minimum of the maximum flows between all pairs of outgoing and incoming vertices in the transformed graph D'(V',E'). If D(V,E) is complete, meaning that any vertex is adjacent to any other vertex, the vertex connectivity is the number of vertices in the graph, n, minus one <cit.>.
To find the minimum of the maximum flows for a transformed directed graph D'(V',E') with 2n vertices, it is generally necessary to compute the maximum flow for all n(n-1) distinct pairs of outgoing/incoming vertices. This makes the time complexity in terms of maximum flow computations 𝒪(n^2). To find the minimum of the maximum flows for a transformed undirected graph G'(V',E') with 2n vertices, it is sufficient to compute the maximum flow for n-1 distinct pairs of outgoing/incoming vertices <cit.>. This makes the time complexity in terms of maximum flow computations 𝒪(n).
§.§ Resilience
We call a network that can function properly even when a number of r nodes have been compromised an r-resilient network.
This means that with up to r compromised nodes a path still exists between any pair of nodes. As stated in our system model (cf. Section <ref>), we assume that an attacker is able to compromise a number of a nodes. We require that information exchange in the network is still possible even under this condition. Hence, to tolerate those a compromised nodes, we need an r-resilient network with r ≥ a. This is fulfilled when the connectivity κ(D) is greater than the necessary resilience, i.e., κ(D) > r. Since each compromised node can disconnect at most one of the κ(D) node-disjoint paths (cf. Section <ref>), there is still at least one path remaining. The correlation between the graph connectivity, the resilience and the number of attackers is summarized in Equation <ref>.
κ(D) > r ≥ a
From this equation, one can determine (1) the resilience of a given network as r = κ(D)-1 and (2) the required connectivity of a network for a specific a as κ(D) > a.
§ EVALUATION
In this section, we first describe our simulation environment, i.e., the tools used to determine the network connectivity. After that, we present our evaluation methodology and the simulation scenarios. Finally, we present the achieved results and discuss them.
§.§ Environment
For our simulations, we use the network simulation software PeerSim <cit.>. It is implemented in the Java programming language and includes an event protocol class for event-driven simulations. We added Kademlia as an instance of this “EDProtocol”. Additionally, we wrote software components to provide functionality for creating network churn (addition and removal of nodes) as well as for requesting data objects and disseminating information into the network. For the graph transformation, we implemented Even's algorithm in Java. To calculate the maximum flow between a pair of vertices, we use the software “HIPR” <cit.>. It is a C implementation of the hi-level variant of the push-relabel algorithm presented in <cit.>. In its original form, HIPR only calculates the maximum flow for one vertex pair. Therefore, we modified it to support calculations with multiple vertex pairs per program invocation. As adjacent vertex pairs do not influence the graph connectivity in our context (cf. <ref>), we also added program logic to detect such pairs. We further wrote multiple software tools and scripts, both for generating maximum flow computation tasks and for validating and aggregating the output created by these tasks. We ran our simulations on a dual-socket system with Intel Xeon E5-2690 CPUs (2.6 GHz), each with 14 cores plus hyper-threading. For the maximum flow computations, we used a Linux cluster provided by our university. We distributed the computations to 24 cluster nodes, each providing two 16-core AMD Opteron 6276 CPUs (2.3 GHz) with hyper-threading.
§.§ Methodology
To calculate the graph connectivity over time, we persist the connectivity graph of a network at pre-defined time stamps in a simulation. For that purpose, we interrupt the simulation and save the current contents of the routing tables of all network nodes to disk into a snapshot file. We use this snapshot file to transform the connectivity graph with Even's algorithm. Next, we convert the transformed graph to the supported input format of HIPR (i.e., DIMACS <cit.>) to calculate the maximum flow.
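To make this snapshot-to-connectivity pipeline concrete, the sketch below rebuilds the connectivity graph from saved routing tables, applies Even's vertex-splitting transformation, and takes the minimum over the pairwise maximum flows. It is a simplified stand-in for our Java/HIPR toolchain, assuming routing tables given as a dictionary from node ids to contact ids, and it uses the generic maximum flow routine of the networkx library instead of HIPR's push-relabel implementation; it illustrates the logic, not the performance, of the actual setup.

import networkx as nx

def connectivity_graph(routing_tables):
    """routing_tables: dict mapping a node id to the ids in its k-buckets.
    The edge (v, w) exists iff w appears in the routing table of v."""
    D = nx.DiGraph()
    D.add_nodes_from(routing_tables)
    for v, contacts in routing_tables.items():
        for w in contacts:
            if w != v:
                D.add_edge(v, w, capacity=1)
    return D

def even_transform(D):
    """Split every vertex v into v_in and v_out joined by a unit-capacity
    edge, so that the maximum flow between two non-adjacent vertices of D
    equals their vertex connectivity."""
    H = nx.DiGraph()
    for v in D.nodes():
        H.add_edge((v, "in"), (v, "out"), capacity=1)
    for u, v in D.edges():
        H.add_edge((u, "out"), (v, "in"), capacity=1)
    return H

def vertex_connectivity(D):
    """kappa(D): the minimum over all ordered non-adjacent pairs (v, w) of
    the maximum flow from v_out to w_in; n - 1 for a complete graph."""
    H = even_transform(D)
    kappa = None
    for v in D.nodes():
        for w in D.nodes():
            if v == w or D.has_edge(v, w):
                continue  # adjacent pairs do not constrain kappa(D)
            flow = nx.maximum_flow_value(H, (v, "out"), (w, "in"))
            kappa = flow if kappa is None else min(kappa, flow)
    return kappa if kappa is not None else D.number_of_nodes() - 1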
The push-relabel algorithm used for the maximum flow computation for a single vertex pair in HIPR has a worst-case time complexity of 𝒪(n^2 √(m)), where n is the number of vertices and m the number of edges in the processed graph <cit.>. Since the transformed graph D'(V',E') contains 2n nodes and n+m edges, the complexity of calculating the maximum flow of a single vertex pair in D' is 𝒪(n^2 √(n+m)). To calculate the graph connectivity κ(D'), we need to apply the above calculation to the transformed graph from all outgoing vertices to all incoming vertices, i.e., n(n-1) times. Thus, the overall time complexity for calculating κ(D') is 𝒪(n^4 √(n+m)). This complexity makes the maximum flow computation very expensive. For instance, the full maximum flow computation for a transformed connectivity graph with 2500 vertices takes about 250 hours on a single CPU core.
The nodes in Kademlia attempt to add each other to their respective routing tables. This would result in an undirected connectivity graph. However, due to size restrictions of the buckets in the routing table and race conditions, these attempts are not always successful. Hence, there is no guarantee that the connectivity graph is undirected. Nevertheless, our analysis of simulation runs shows that the connectivity graphs come very close to being undirected. This allows us to reduce the number of maximum flow computations from n(n-1) to c · n(n-1), 0 < c ≤ 1. We achieve this reduction by using only c · n outgoing vertices for the maximum flow calculation. Since the outgoing degree d_𝑜𝑢𝑡,v of a vertex v is an upper limit for the outgoing flow, we select those c · n outgoing vertices with the smallest d_𝑜𝑢𝑡. As we calculate the maximum flow from these c · n outgoing vertices to all n-1 incoming vertices, the limiting incoming degree d_𝑖𝑛 is still taken into account. We verified this with 20 randomly selected connectivity graphs, for which we performed a full analysis, i.e., calculated the maximum flow for all n(n-1) vertex pairs. In all cases, c=0.02 (2%) was sufficient to determine the minimum of the maximum flows, i.e., the graph's vertex connectivity.
§.§ Scenarios
In a two-page short paper <cit.>, we briefly presented some shorter simulations done with an earlier version of our simulator, varying a single Kademlia parameter, namely k. Based on these, we have made several improvements. Previously, we investigated three different mechanisms for node bootstrapping, which turned out to show no significant differences with regard to connectivity. Therefore, we now apply only one bootstrap mechanism, in which the bootstrap nodes are completely random, and any node can be affected by churn. Also, to bring the simulations closer to a real-world scenario, actions affecting the network structure, e.g., lookup procedures and node removals, are done at random points in time within a predetermined time frame. As a result, the initial bootstrap procedure to create the network is done randomly in terms of time and bootstrap node selection. A new node joins the network at a random point in the simulated time that is evenly distributed between 0 and 30 minutes. The bootstrap node is randomly chosen from the already joined nodes. Beyond that, we extended the number of varied Kademlia parameters in our simulations from one to four and also added scenarios with communication channels affected by message loss.
To determine how different environments and protocol parameters influence the connectivity of the network, we devised a total of eight dimensions for the simulations, i.e., network size, network churn, network traffic, message loss, the Kademlia bucket size k, the parallelism factor α, the bit-length b, and the staleness limit s.
§.§.§ Network Size
We consider two different scenarios for the network size, i.e., a small network with 250 nodes and a large one with 2500 nodes. Our choice for these network sizes is based on the CPS examples introduced in Section <ref>. For the smart camera scenario, a large number of smart cameras may be necessary for reliably observing and controlling an industrial complex. Thus, we simulate it with 250 nodes. In contrast, a distributed IDS can be used for securing corporate networks spanning several branches. Such networks usually comprise several hundreds to thousands of nodes. Exemplarily, we choose 2500 nodes for this scenario.
§.§.§ Network Churn
We consider three different churn scenarios. In the scenario (0/1), we remove a single node from the network in every minute of simulated time and add no nodes. In the scenario (1/1), we add one node and remove one node every minute. Similarly, in the scenario (10/10), we remove ten nodes and add ten nodes per minute. The add/remove actions happen at random points in time within each minute range. We chose these high churn rates to get a clear indication of effects related to churn in our simulations.
§.§.§ Network Traffic
We distinguish two different scenarios with respect to data traffic, i.e., with and without data traffic. In the scenario with data traffic, all nodes regularly look up data objects and disseminate them. For this, each node performs 10 lookup procedures and 1 dissemination procedure per minute during the whole simulation. The procedures happen at random points in time within each minute range. In the scenario without data traffic, the nodes do not look up data objects or disseminate them. However, for maintenance purposes Kademlia requires each node to perform a so-called “bucket-refresh” every 60 minutes. For this, a node randomly generates an id from the id range of each k-bucket and performs lookup procedures for these ids. This way, it can learn about previously unknown contacts and stale contacts in its routing table. Hence, even in the scenario without data traffic, there is some basic maintenance traffic.
§.§.§ Message Loss
Since two-way communication in the form of request/response is the most used communication type in Kademlia, we tailor our message loss l towards it. We apply four different message loss scenarios with different probabilities for a two-way communication to fail. Those probabilities apply to any communication taking place between nodes. The first scenario, none, has no loss at all; all messages reach their destination. Unless marked otherwise, this is the default case. The three other scenarios are low, medium and high. Table <ref> shows the loss probabilities for one-way and two-way communication for all four scenarios.
§.§.§ Kademlia Bucket Size
In Kademlia, the bucket size k is directly responsible for the number of contacts a node can keep in its routing table. To determine its effect on the network connectivity, we use four different values for k, i.e., k ∈ {5,10,20,30}.
§.§.§ Kademlia Request Parallelism
The request parallelism α determines how many queries are made in parallel when locating a node or data object. We use the values 3 and 5 for α.
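Pulling together the scenario dimensions of this section (the staleness limit and bit-length values are given in the two subsections that follow), the full design space can be enumerated mechanically, as in the sketch below. The variable names are our own shorthand, not identifiers from the simulator, and the two-way failure helper assumes that the two one-way losses of a request/response round trip are independent, which is one natural reading of the loss scenarios above.

from itertools import product

network_size = (250, 2500)
churn        = ("0/1", "1/1", "10/10")
traffic      = (False, True)               # without / with data traffic
loss         = ("none", "low", "medium", "high")
bucket_k     = (5, 10, 20, 30)
alpha        = (3, 5)
staleness_s  = (1, 5)
bitlength_b  = (80, 160)

grid = list(product(network_size, churn, traffic, loss,
                    bucket_k, alpha, staleness_s, bitlength_b))
assert len(grid) == 1536  # 2 * 3 * 2 * 4 * 4 * 2 * 2 * 2

def two_way_failure(p_one_way):
    """A request/response round trip fails unless both messages arrive;
    with independent one-way losses this is 1 - (1 - p)**2."""
    return 1.0 - (1.0 - p_one_way) ** 2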
§.§.§ Kademlia Staleness Limit
The staleness limit s is the number of unsuccessful communication attempts in a row that leads to the removal of a contact from the routing table, assuming it has left the network. We use the values 1 and 5 for s. In simulations with churn, which are not specifically meant for evaluating s and have the loss scenario none, we set s=1. This allows a quick reaction to nodes leaving the network and, therefore, provides a clearer picture of the influence of churn.
§.§.§ Kademlia Bit-length
The bit-length b is the size of the numerical identifier of a node or data object in bits. We use the values 160 and 80 for b.
In summary, we have eight dimensions with several scenarios for each of them, i.e., 1536 combinations. We simulated a majority of these combinations to determine how the dimensions and the connectivity correlate and present our results in the next section.
§.§ Simulation Phases
In all simulations the network is fully set up after 30 minutes (setup phase), as described in detail above. From minute 30 to minute 120 (stabilization phase), we allow the network to stabilize. These 90 minutes guarantee that for scenarios without data traffic each node performs a bucket refresh. After that, starting at minute 120, we apply a churn scenario, if the simulation requires churn (churn phase).
§.§ Results for Traffic, Churn, and Bucket Size k
In this section, we present the simulations and measurement results for different network scenarios. The first four simulations focus on the effect of traffic, while the remaining simulations focus on the effect of churn. In each graph, we present the simulations for all four bucket sizes.
§.§.§ Without data traffic
In Simulations A & B, no data traffic is present. The churn scenario is 0/1. We present the simulation for the network size 250 in Figure <ref> and the simulation for the network size 2500 in Figure <ref>. After the setup phase, the connectivity for k ∈ {20,30} is at roughly k for both network sizes. For k=10, this is also true for the small network, whereas the connectivity is zero in the large network. For k=5, the connectivity is zero in both networks. For the smaller k values, the setup appears to be more problematic the bigger the network. Further investigation showed that this is caused by a single-digit number of disconnected nodes. While those nodes do not have significantly fewer entries in their own routing table than others, they themselves only appear in the routing tables of fewer than k other nodes or none at all. This issue is resolved during the stabilization phase for k=10, so that the connectivity is roughly k for k ∈ {10,20,30}. In the churn phase, the minimum connectivity first increases overall for all k values. This effect also applies to k=5, so that now the minimum connectivity for the smallest k value rises to k and above. With continuing churn and decreasing network size, the minimum connectivity drops again. It appears that the network state after stabilization is not ideal from a connectivity point of view. The leaving nodes enable the network to reconfigure (freed-up entries in the k-buckets) towards a higher connectivity. This continues until the network size becomes too small to sustain this behavior.
§.§.§ With data traffic
In Simulations C & D, data traffic is present. The churn scenario is 0/1. We present the simulation for the network size 250 in Figure <ref> and the simulation for the network size 2500 in Figure <ref>. The setup phase is similar to that in Simulations A & B.
At its end, the connectivity for k ∈ {20,30} is at roughly k for both network sizes. For k=10, this is also true for the small network, whereas the connectivity is zero in the large network. For k=5, the connectivity is zero for both network sizes. For the smaller k values, the setup again appears to be more problematic the bigger the network. The cause is, as before, a single-digit number of disconnected nodes, which do not appear in the routing tables of other nodes. This issue is resolved during the stabilization phase for all four k values, so that the connectivity is roughly k. In the churn phase, the minimum connectivity first increases overall for all k values. With continuing churn and decreasing network size, the minimum connectivity drops again. Compared to Simulations A & B, the observed effects are similar, but their timing and strength are different. Connectivity values of k or above are reached earlier in the simulations. The increase in minimum connectivity with churn is much more pronounced and its maximum values are also greater. Towards the end of the simulation, with 10 nodes left in the network, the network is now fully connected for each bucket size except the smallest one. As one would expect, the data traffic results in an overall improved connectivity.
§.§.§ With 1/1 churn
In Simulations E & F, data traffic is present. The churn scenario is 1/1. We present the simulation for the network size 250 in Figure <ref> and the simulation for the network size 2500 in Figure <ref>. As this is a simulation with data traffic, the setup phase and stabilization phase are similar to those in Simulations C & D. An exception here is the large network with k=5. Its minimum connectivity does not quite reach k at the end of the stabilization phase, but is nonetheless greater than zero. Whereas, similar to the 0/1 churn, the average connectivity benefits from the churn phase, the minimum connectivity does not. For the greater values of k the minimum connectivity oscillates around k; for smaller values it drops significantly, even down to 0. This effect is more pronounced in the larger network, where for k=5 the minimum connectivity is 0 throughout almost the whole churn phase.
§.§.§ With 10/10 churn
In Simulations G & H, data traffic is present. The churn scenario is 10/10. We present the simulation for the network size 250 in Figure <ref> and the simulation for the network size 2500 in Figure <ref>. The setup phase and stabilization phase are basically identical to those in Simulations C & D. With the increased churn, the average connectivity reaches the same levels as in Simulations E & F, but rises much faster as soon as the churn sets in. For the minimum connectivity the differences are more significant. Where the absolute values allow it, the oscillation increases. The overall level drops for all k values, so that, e.g., the minimum connectivity for k=5 is now almost always at 0 also for the small network.
§.§.§ Relative Variance for Churn
A numerical comparison of the 1/1 and 10/10 churn scenarios is given in Table <ref>. It shows the means and the Relative Variance (RV), i.e., Variance/Mean, of the minimum connectivity during the churn phase for the simulations E to H for all four k values. From the graphs, our impression was that increased churn leads to increased variance relative to the connectivity. We, therefore, calculated the RV to express the effects of increased churn.
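For reference, the RV used here is the variance-to-mean ratio of the minimum-connectivity samples collected during the churn phase. A minimal sketch of the computation follows; it is our own helper, not part of the simulator, and it uses the population variance since the text does not specify an estimator.

from statistics import mean, pvariance

def relative_variance(samples):
    """RV = Variance / Mean of the minimum connectivity during churn.
    Undefined when the mean is zero, as for k = 5 in the large network."""
    m = mean(samples)
    if m == 0:
        raise ValueError("RV is undefined for a zero mean")
    return pvariance(samples) / m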
As the RV values in Table <ref> show, the increase in churn from 1/1 to 10/10 leads to an increased RV in all simulations. The exception is the network size 2500 with k=5, where the minimum connectivity is zero throughout the whole churn phase for both churn scenarios.
§.§ Results for Request Parallelism α
Figures <ref> and <ref> show the means of the minimum connectivity during churn for all four k values for simulations E to H (see Table <ref>) and for additional simulations. For simulations E to H the request parallelism α equals 3. The additional simulations have the same scenarios as G (small network, churn 10/10) and H (large network, churn 10/10), except now α equals 5. The figures show the following: 1) The scenarios with churn 1/1 show a higher connectivity than those with churn 10/10. This is more prominent in the large network than in the small one. 2) For k=5 the connectivity is zero for all scenarios of the large network and for churn 10/10 with α=5 in the small network. Therefore, k ≥ 10 seems to be the minimum advised k for a connected network. 3) The increase of α from 3 to 5 with churn 10/10 has a very negative impact on connectivity for the smaller k values. The connectivity for k=5 is zero for both network sizes and almost zero for k=10 in the large network. This is possibly due to the fact that a node contacts more other nodes in parallel and, therefore, takes up places in more routing tables. Those places are not available for joining nodes, so that for small k a disconnected node is very likely.
§.§ Results for Bit-length b
In other simulations we used the same scenarios as in C and D, except for the identifier size b, which changed from 160 to 80. They showed no significant difference from C and D with regard to connectivity.
§.§ Message Loss and Staleness Limit s
In this section we present results from simulations with a focus on message loss and the staleness limit s. The following settings apply to all shown simulations: Data traffic is present, the bucket size k is 20, and the bit-length b is 160. Since the observed effects are very similar for both network sizes, we present the results for the large network only.
§.§.§ Staleness Limits without Message Loss
Simulation I shows the effect of the two different staleness limits s ∈ {1,5} in a network affected by churn and without message loss (l=none). We present the simulation for churn 1/1 in Figure <ref> and the simulation for churn 10/10 in Figure <ref>. With churn 1/1, there is no significant difference between the two staleness limits. With churn 10/10, the average connectivity for s=5 drops below that of s=1 as soon as the churn phase begins. It remains that way for the remainder of the simulation. Three effects are responsible for this: 1) With the stronger network churn more nodes become stale per minute, resulting in more stale routing table entries. 2) The stronger churn also results in more nodes joining per minute, each of them needing to become connected. 3) With the greater staleness limit it takes all nodes longer to detect and remove stale entries in their routing tables. Since the routing table size is limited and each stale entry potentially keeps a new contact from entering the routing table, the average connectivity decreases. Interestingly, the minimum connectivity is not affected. It is the same for s=1 as for s=5. At the moment we don't know the reason for this, but we will investigate this issue further.
§.§.§ Staleness Limit with Message Loss
The simulations J, K and L show the effect of the three message loss scenarios l ∈ {low, medium, high} on the network connectivity, together with both staleness limits s ∈ {1,5}, and three different churn scenarios. Message loss is present throughout all three simulation phases: setup, stabilization, and churn.
Simulation J has no churn. We present the measurements for this simulation in Figure <ref>. The connectivity during the setup phase is very poor for all loss scenarios and staleness limit values. Nodes are not able to achieve connectivity immediately on joining the network. For s=1 the network shows a quick increase in minimum connectivity immediately after the setup phase. The minimum connectivity reaches values between 80 and 110, far greater than the bucket size k=20. For our three loss scenarios, higher message loss results in higher connectivity. A similar behavior is visible in Simulations A to D with churn 0/1, where nodes leave the network but no new nodes join. In both cases communication attempts can fail: in Simulation J due to message loss, in Simulations A to D due to stale nodes. This leads to the removal of contacts from the routing tables, making room for other contacts. These results again show that the structure of a Kademlia network is not ideal regarding connectivity after the network setup or node joins in general. For s=5 any structure change due to message loss is much less pronounced than with s=1, since now a contact is removed from a routing table only after five failing communication attempts in a row, not just one. The greater staleness limit has a damping effect on both the absolute connectivity and its variance. Any increase in minimum and average connectivity happens far slower, and the resulting connectivity is lower. This is especially the case with the loss scenarios medium and low. Here, both minimum and average connectivity show a severe decrease compared to s=1. For low loss, the positive effect of message loss on minimum connectivity is hardly visible, as the connectivity remains just slightly above k=20.
We want to remark that, despite its positive effect on connectivity, message loss has, of course, a negative impact on other network aspects, e.g., the latency or result quality of lookup procedures. Here, as described in Section <ref>, the termination criterion is either k successfully contacted nodes or a lack of progress. Message loss can increase the lookup latency, since more communication attempts can be necessary to reach k successfully contacted nodes. Also, progress may stop earlier because information that would further the lookup never reaches the node performing it.
In Simulation K the churn scenario is 1/1. We show the results in Figure <ref>. For s=1 the different loss scenarios on average still result in different levels of minimum connectivity during the churn phase. However, the churn visibly reduces the positive effect of message loss, as those connectivity levels are significantly lower than without churn. As in Simulation J, a damping effect on connectivity is visible with s=5. Combined with the churn, it limits the minimum connectivity to about k for all loss scenarios. Also, the minimum connectivity drops far below k and even down to zero multiple times in the simulation with both staleness limits. This is due to a small number of nodes not being able to establish connectivity right away in the bootstrap process.
In Simulation L the churn scenario is 10/10. We show the results in Figure <ref>.
The stronger churn counters the positive effects of message loss even more, so that now the average connectivity is affected as well. Furthermore, the drops in connectivity due to bootstrap problems are much more frequent. With the added damping effect from s=5, the minimum connectivity stays below k at all times during the churn phase.
§ CONCLUSION & FUTURE WORK
In this paper, we analyzed the connectivity of the overlay network Kademlia in multiple simulated scenarios. We conclude several results from our work. The network connectivity κ of Kademlia strongly correlates with the bucket size k. To achieve a certain resilience level r for an overlay network, we require a network connectivity κ > r. With our results, we determined that the bucket size needs to be set to a value greater than r, i.e., k > r. Nevertheless, especially for scenarios with strong churn, the resilience level cannot be guaranteed. In situations with no or few nodes joining the network, the network connectivity was equal to or greater than k. The presence of network traffic greatly enhances the network connectivity, both in terms of absolute connectivity and the time to reach this connectivity. The effect of 1/1 and 10/10 churn on the network connectivity is ambivalent. While it can even have a positive effect on the average connectivity, the minimum connectivity drops significantly below k with stronger churn and shows greater variance relative to its mean. The staleness limit s also has ambivalent effects. While a greater value reduces connectivity variance, it also reduces the overall connectivity level. Message loss, though generally an undesired property in networks, actually increases the Kademlia network connectivity.
Future work will include investigation of the effects of message loss on the network connectivity. The goal is the development of mechanisms that provide similar connectivity improvements, while avoiding the negative effects of loss. We further plan to extend Kademlia to improve upon the minimum connectivity in all cases and to introduce a parameter to control its connectivity independently of the bucket size.
§ ACKNOWLEDGMENT
The authors thank the German Research Foundation (DFG) for support within the project CYPHOC (WA 2828/1-1). | http://arxiv.org/abs/1703.09171v1 | {
"authors": [
"Henner Heck",
"Olga Kieselmann",
"Arno Wacker"
],
"categories": [
"cs.NI"
],
"primary_category": "cs.NI",
"published": "20170327163051",
"title": "Evaluating Connection Resilience for the Overlay Network Kademlia"
} |
Microstructure under the Microscope: Tools to Survive and Thrive in The Age of (Too Much) Information
Ravi Kashyap
IHS Markit / City University of Hong Kong
December 30, 2023
Microstructure, Marketstructure, Microscope, Dimension Reduction, Distance Measure, Covariance, Distribution, Uncertainty
JEL Codes: D53 Financial Markets; G17 Financial Forecasting and Simulation; C43 Index Numbers and Aggregation
http://www.iijournals.com/doi/abs/10.3905/jot.2017.12.2.005
Edited Version: Kashyap, R. (2017). Microstructure under the Microscope: Tools to Survive and Thrive in The Age of (Too Much) Information. The Journal of Trading, 12(2), 5-27.
§ ABSTRACT
Market Microstructure is the investigation of the process and protocols that govern the exchange of assets with the objective of reducing frictions that can impede the transfer. In financial markets, where there is an abundance of recorded information, this translates to the study of the dynamic relationships between observed variables, such as price, volume and spread, and hidden constituents, such as transaction costs and volatility, that hold sway over the efficient functioning of the system.
“My dear, here we must process as much data as we can, just to stay in business. And if you wish to make a profit you must process at least twice as much data.” - Red Queen to Alice in Hedge-Fund-Land.
Necessity is the mother of all invention / creation / innovation, but the often forgotten father is frustration. In this age of (Too Much) Information, it is imperative to uncover nuggets of knowledge (signal) from buckets of nonsense (noise). To aid in this effort to extract meaning from chaos and to gain a better understanding of the relationships between financial variables, we summarize the application of the theoretical results from (Kashyap 2016b) to microstructure studies. The central concept rests on a novel methodology based on the marriage between the Bhattacharyya distance, a measure of similarity across distributions, and the Johnson Lindenstrauss Lemma, a technique for dimension reduction, providing us with a simple yet powerful tool that allows comparisons between data-sets representing any two distributions. We provide an empirical illustration using prices, volumes and volatilities across seven countries and three different continents. The degree to which different markets or subgroups of securities have different measures of their corresponding distributions tells us the extent to which they are different. This can aid investors looking for diversification or looking for more of the same thing.
In Indian mythology, it is believed that in each era, God takes on an avatar or reincarnation to fight the main source of evil in that epoch and to restore the balance between good and bad. In this age of too much information and complexity, perhaps the supreme being needs to be born as a data scientist, conceivably with an apt nickname, the Infoman. Until higher powers intervene and provide the ultimate solution to completely eliminate information overload, we have to make do with marginal methods, such as this composition, to reduce information. As we wait for the perfect solution, it is worth meditating upon what superior beings would do when faced with a complex situation, such as the one we are in. It is said that the Universe is but the Brahma's (Creator's) dream. Research (Effort / Struggle) can help us understand this world; Sleep (Ease / Peace of Mind) can help us create our own world.
A lesson from close by and down under: We need to “Do Some Yoga and Sleep Like A Koala”.
§ OBJECTIVELY SUBJECTIVE
A hallmark of the social sciences is the lack of objectivity. Here we assert that objectivity is with respect to comparisons done by different participants and that a comparison is a precursor to a decision. Despite the several advances in the social sciences, we have yet to discover an objective measuring stick for comparison, a so-called True Comparison Theory, which can be an aid for arriving at objective decisions. The search for such a theory could again be compared to the medieval alchemists' obsession with turning everything into gold (Kashyap 2014a). For our present purposes, the lack of such an objective measure means that the difference in comparisons, as assessed by different participants, can effect different decisions under the same set of circumstances. Hence, despite all the uncertainty in the social sciences, the one thing we can be almost certain about is the subjectivity in all decision making.
§.§ Merry-Go-Round of Comparisons, Decisions and Actions
This lack of an objective measure for comparisons makes people react at varying degrees and at varying speeds, as they make their subjective decisions. A decision gives rise to an action and subjectivity in the comparison means differing decisions and hence unpredictable actions. This inability to make consistent predictions in the social sciences explains the growing trend towards comprehending better and deciphering the decision process and the subsequent actions, by collecting more information across the entire cycle of comparisons, decisions and actions. Another feature of the social sciences is that the actions of participants affect the state of the system, effecting a state transfer which perpetuates another merry-go-round of comparisons, decisions and actions from the participants involved. This means: the more the participants, the more the changes to the system, the more the actions and the more the information that is generated to be gathered.
Restricted to the particular sub-universe of economic and financial theory, this translates to the lack of an objective measuring stick of value, a so-called True Value Theory. This lack of an objective measure of value (hereafter, value will be synonymously referred to as the price of a financial instrument) makes prices react at differing degrees and at varying velocities to the pull of different macro and micro factors. (Lawson 1985) argues that the Keynesian view on uncertainty (that it is generally impossible, even in probabilistic terms, to evaluate the future outcomes of all possible current actions; Keynes 1937; 1971; 1973), far from being innocuous or destructive of economic analysis in general, can give rise to research programs incorporating, amongst other things, a view of rational behavior under uncertainty, which could be potentially fruitful. (McManus and Hastings 2005) clarify the wide range of uncertainties that affect complex engineering systems and present a framework to understand the risks (and opportunities) they create and the strategies system designers can use to mitigate or take advantage of them.
These viewpoints hold many lessons for policy designers in the social sciences and could be instructive for researchers looking to create methods to compare complex systems, keeping in mind the caveats of dynamic social systems.
§.§ Interplay of Information and Intelligence
On the surface, it would seem that there is a repetitive nature to portfolio management, which we can term The Circle of Investment (Kashyap 2014b), making it highly amenable to automation. But we need to remind ourselves that the reiterations happen under the purview of a special kind of uncertainty that applies to the social sciences. (Kashyap 2014a) goes into greater depth on how the accuracy of predictions and the popularity of generalizations might be inversely related in the social sciences. In the practice of investment management, and also to aid other business decisions, more data sources are being created, collected and used along with increasing automation and artificial intelligence.
If Alice and the Red Queen of Wonderland fame (Carroll 1865; 1871; End-note <ref>) were to visit Hedge-Fund-Land (or even Business-Land), the following modification of their popular conversation would aptly describe the situation today, “My dear, here we must process as much data as we can, just to stay in business. And if you wish to make a profit you must process at least twice as much data.”
We could also apply this to HFT-Land and say: “My dear, here we must trade as fast as we can, just to stay in business. And if you wish to make a profit, you must trade at least twice as fast as that.”, while reminiscing that the jury is still out on whether HFT is Good, Bad or Just Ugly and Unimportant.
In Academic-Land, this would become: “My dear, here we must process as much data (and include as many strange symbols or obfuscating terms) as we can, just to create a working paper. And if you wish to make a publication you must process at least twice as much data (and include at least twice as many strange characters or obfuscating expressions).”
We currently lack a proper understanding of how, in some instances, our brains (or minds; and right now it seems we don't know the difference!) make the leap of learning from information to knowledge to wisdom (See Mill 1829; Mazur 2015 for more about learning and behavior). The problem of creating artificial intelligence can be child's play, depending on which adult's brainpower acts as our gold standard. Perhaps, the real challenge is to replicate the curiosity and learning an infant displays. Intellect might be a byproduct of Inquisitiveness, demonstrating another instance of an unintended yet welcome consequence (Kashyap 2016e). This brings up the question of Art and Science in the practice of asset management (and everything else in life?), which are more related than we probably realize: “Art is Science that we don't know about; Science is Art restricted to a set of symbols governed by a growing number of rules” (Kashyap 2014a).
While the similarities between art and science should give us hope, we need to face the realities of the situation. Right now, arguably, in most cases, we (including computers and intelligent machines?) can barely make the jump from the information to the knowledge stage, even with the use of cutting / (bleeding?) edge technology and tools. This exemplifies three things:
* We are still in the information age. As another route to establishing this, consider this: Information is Hidden; Knowledge is Exchanged or Bartered; Wisdom is Dispersed.
Surely we are still in the Information Age since a disproportionate amount of our actions are geared towards accumulating unique data-sets for the sole benefit of the accumulators.
* Automating the movement to a higher level of learning, which is necessary for dealing with certain doses of uncertainty, is still far away.
* Some of us missed the memo that the best of humanity are actually robots in disguise, living amongst us.
Hence, it is not Manager versus Machine (Portfolio Manager vs Computing Machine or MAN vs MAC, in short; End-notes <ref>, <ref>, <ref>). Not even MAN and MAC against the MPC (Microsoft Personal Computer; End-notes <ref>, <ref>, <ref>)? It is MAN, MAC and the MPC against increasing complexity! (Also in scope are other computing platforms from the past, present and the future: Williams 1997; Ifrah, Harding, Bellos and Wood 2000; Ceruzzi 2003; End-notes <ref>, <ref>, <ref>, <ref>). This increasing complexity and information explosion is perhaps due to the increasing number of complex actions perpetrated by the actors that comprise the financial system. The human mind will be obsolete if machines can fully manage assets and we would have bigger problems on our hands than who is managing our money. We need, and will continue to need, massive computing power to mostly separate the signal from the noise.
§.§ Simply Too Complex
(Simon 1962) points out that any attempt to seek properties common to many sorts of complex systems (physical, biological or social) would lead to a theory of hierarchy, since a large proportion of complex systems observed in nature exhibit hierarchic structure; that is, a complex system is composed of subsystems that, in turn, have their own subsystems, and so on. This might hold a clue to the miracle that our minds perform: abstracting away from the dots that make up a picture to fully visualizing the image, which seems far removed from the pieces that give it form and meaning. To help us gain a better understanding of the relationships between financial variables, we construct a metric that is built from smaller parts but gives optimal benefits when seen from a higher level. Contrary to what conventional big picture conversations suggest, as the spectator steps back and the distance from the picture increases, the image becomes smaller yet clearer.
As a first step, we recognize that one possible categorization (Kashyap 2016c) of different fields can be done by the set of questions a particular field attempts to answer. The answers to the questions posed by any domain can come from anywhere or from phenomena studied under a combination of many other disciplines. Hence, the answers to the questions posed under the realm of economics and finance can come from seemingly diverse subjects, such as physics, biology, mathematics, chemistry, and so on. As we embark on the journey to apply the knowledge from other fields to finance, we need to be aware that finance is Simply Too Complex, since all of finance, through time, has involved three simple outcomes - “Buy, Sell or Hold”. The complications are mainly to get to these results.
Market Microstructure is the investigation of the process and protocols that govern the exchange of assets with the objective of reducing frictions that can impede the transfer.
In financial markets, where there is an abundance of recorded information, this translates to the study of the dynamic relationships between observed variables, such as price, volume and spread, and hidden constituents, such as transaction costs and volatility, that hold sway over the efficient functioning of the system (Kashyap 2015b). While it might be possible to observe historical trends (or other attributes) and make comparisons across a small number of entities, in large systems with numerous components or contributing elements this becomes a daunting task. If time travel were to become possible, time series would no longer be relevant. We are accustomed to using time and money as our units of measurement. But time and money are but means to an end. If we start viewing our efforts and the world in terms of what we hope to accomplish ultimately, it might lead to better results. In the present paper, we put aside the fundamental question of whether we need complicated models or merely better morals, and present quantitative measures across aggregations of smaller elements that can aid decision makers by providing simple yet powerful metrics to compare groups of entities. The results draw upon sources from statistics, probability, economics / finance, communication systems, pattern recognition and information theory, becoming one example of how elements of different fields can be combined to provide answers to the questions raised by a particular field. The degree to which the corresponding distributions of different markets or subgroups of securities differ, as captured by our measures, tells us the extent to which those markets or subgroups differ. This can aid investors looking for diversification, or looking for more of the same thing.§.§ Nuggets of Knowledge from Buckets of Nonsense Necessity is the mother of all invention / creation / innovation, but the often forgotten father is frustration. In this age of (Too Much) Information, it is imperative to uncover nuggets of knowledge from buckets of nonsense. To aid in this effort to extract meaning from chaos, we summarize the application of the theoretical results from (Kashyap 2016b) to microstructure studies. The central concept rests on a novel methodology based on the marriage between the Bhattacharyya distance, a measure of similarity across distributions, and the Johnson Lindenstrauss Lemma, a technique for dimension reduction, providing us with a simple yet powerful tool that allows comparisons between data-sets representing any two distributions, perhaps also becoming, to our limited knowledge, an example of perfect matrimony. We return to Sergei Bubka, our Icon of Uncertainty (Kashyap 2016a). As a refresher for the younger generation, he broke the pole vault world record 35 times. We can think of regulatory change or the utilization of newer methods and techniques as raising the bar. Each time the bar is raised, the spirit of Sergei Bubka, in all of us, will find a way over it. The varying behavior of participants in a social system will give rise to unintended consequences (Kashyap 2016e), and as long as participants are free to observe the results and modify their actions, this effect will persist. (Kashyap 2015a) considers ways to reduce the complexity of social systems, which could be one way to mitigate the effect of unintended outcomes.
While attempts at designing less complex systems are worthy endeavors, reduced complexity might be hard to accomplish in certain instances; even where complexity is successfully reduced, alternate techniques for dealing with uncertainty remain commendable complementary pursuits (Kashyap 2016d). Asset price bubbles are seductive, but scary when they burst. What we learn from the story of Beauty and the Beast is that they must coexist; we need to learn to love the beast before we can uncover the beauty. Similarly, bubbles and busts must live close to one another. If we find that microstructure variables, especially implicit trading costs, are showing steady movement, the change in transaction costs could be a signal of a potential building up of a bubble and a later bust. Our study will allow the comparison of trading costs across aggregations of individual securities, allowing inferences to be drawn across sectors or markets, enabling us to find early indications of bubbles building up in corners of the economy.§.§ The Miracle of Mathematics Lastly, on a cautionary note, since the concepts mentioned below involve non-trivial mathematical principles, we point out that the source of most (all) human conflict (and misunderstanding) is not because of what is said (written) and heard (read), but is partly due to how something is said, and mostly because of the difference between what is said and heard and what is meant and understood. We list a few different ways of describing what mathematics is, and perhaps why it is miraculously magical most of the time but mere minutiae at other times; this list could be relegated to an appendix and safely ignored. * Mathematics is built on one simple operation, addition, making it a fractal with addition as its starting point.* Mathematics has become complex because of the confusion that different notations, assumptions not made explicit and missed steps can create. * Mathematics without the steps is like a treasure hunt without the clues.* Mathematics is like a swimsuit model wearing a Burkha; we need to see beyond the symbols and the surface to appreciate the beauty. In a complex system, deriving equations can be a daunting exercise and, not to mention, of limited practical validity. Hence, to supplement equations, we need to envision the numerous unknowns that can cause equations to go awry, while remembering that a candle in the dark is better than nothing at all. Pondering on the sources of uncertainty and the tools we have to capture it might lead us to believe that either the level of our mathematical knowledge is not advanced enough, or we are using the wrong methods. The dichotomy between logic and randomness is a topic for another time.
§ METHODOLOGICAL FUNDAMENTALS §.§ Notation and Terminology for Key Results* D_BC(p_i,p_i^'), the Bhattacharyya distance between two multinomial populations, each consisting of k categories with associated probabilities p_1,p_2,...,p_k and p_1^',p_2^',...,p_k^' respectively.* ρ(p_i,p_i^'), the Bhattacharyya coefficient.* D_BC-N(p,q) is the Bhattacharyya distance between two normal distributions or classes, p and q.* D_BC-MN(p_1,p_2) is the Bhattacharyya distance between two multivariate normal distributions, p_1,p_2, where p_i∼𝒩(μ_i, Σ_i).* D_BC-TN(p,q) is the Bhattacharyya distance between two truncated normal distributions or classes, p and q.* D_BC-TMN(p_1,p_2) is the Bhattacharyya distance between two truncated multivariate normal distributions, p_1,p_2, where p_i∼𝒩(μ_i, Σ_i, a_i, b_i).§.§ Bhattacharyya Distance We use the Bhattacharyya distance (Bhattacharyya 1943, 1946) as a measure of similarity or dissimilarity between the probability distributions of the two entities we are looking to compare. These entities could be two securities, groups of securities, markets or any statistical populations that we are interested in studying. The Bhattacharyya distance is defined as the negative logarithm of the Bhattacharyya coefficient, D_BC(p_i,p_i^')=-ln[ρ(p_i,p_i^')]. The Bhattacharyya coefficient is calculated as shown below for discrete and continuous probability distributions, respectively: ρ(p_i,p_i^')=∑_i^k√(p_ip_i^') and ρ(p_i,p_i^')=∫√(p_i(x)p_i^'(x))dx. Bhattacharyya’s original interpretation of the measure was geometric (Derpanis 2008). He considered two multinomial populations, each consisting of k categories with associated probabilities p_1,p_2,...,p_k and p_1^',p_2^',...,p_k^' respectively. Then, as ∑_i^kp_i=1 and ∑_i^kp_i^'=1, he noted that (√(p_1),...,√(p_k)) and (√(p_1^'),...,√(p_k^')) could be considered as the direction cosines of two vectors in k-dimensional space, referred to a system of orthogonal co-ordinate axes. As a measure of divergence between the two populations, Bhattacharyya used the square of the angle between the two position vectors. If θ is the angle between the vectors, then ρ(p_i,p_i^')=cosθ=∑_i^k√(p_ip_i^'). Thus if the two populations are identical, cosθ=1, corresponding to θ=0; hence we see the intuitive motivation behind the definition, as the vectors are then co-linear. Bhattacharyya further showed that, by passing to the limiting case, a measure of divergence could be obtained between two populations defined in any way, given that the two populations have the same number of variates. The value of the coefficient then lies between 0 and 1: 0≤ρ(p_i,p_i^')≤ 1 and 0≤ D_BC(p_i,p_i^')≤∞. We get the following formula (Lee and Bretschneider 2012) for the Bhattacharyya distance when applied to the case of two uni-variate normal distributions: D_BC-N(p,q)=1/4ln(1/4(σ_p^2/σ_q^2+σ_q^2/σ_p^2+2))+1/4((μ_p-μ_q)^2/(σ_p^2+σ_q^2)), where σ_p^2 is the variance of the p-th distribution, μ_p is the mean of the p-th distribution, and p,q are two different distributions. The original paper on the Bhattacharyya distance (Bhattacharyya 1943) mentions a natural extension to the case of more than two populations. For an M population system, each with k random variates, the definition of the coefficient becomes ρ(p_1,p_2,...,p_M)=∫⋯∫[p_1(x)p_2(x)...p_M(x)]^1/Mdx_1⋯ dx_k. For two multivariate normal distributions, p_1,p_2, where p_i∼𝒩(μ_i, Σ_i), D_BC-MN(p_1,p_2)=1/8(μ_1-μ_2)^TΣ^-1(μ_1-μ_2)+1/2ln(detΣ/√(detΣ_1 detΣ_2)), where μ_i and Σ_i are the means and covariances of the distributions, and Σ=(Σ_1+Σ_2)/2.
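To make these formulae concrete, here is a minimal numerical sketch (our own Python rendering with names of our choosing; the implementation pointers referenced later are in R) of the discrete Bhattacharyya coefficient and the closed form D_BC-N for two univariate normals.

```python
# A minimal sketch (names ours) of the Bhattacharyya coefficient for
# discrete distributions and the closed form for two univariate normals.
import numpy as np

def bhattacharyya_discrete(p, q):
    """Coefficient rho = sum_i sqrt(p_i * q_i); distance is -ln(rho)."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    rho = np.sum(np.sqrt(p * q))
    return rho, -np.log(rho)

def bhattacharyya_normal(mu_p, var_p, mu_q, var_q):
    """D_BC-N: variance term plus mean term, as in the formula above."""
    return (0.25 * np.log(0.25 * (var_p / var_q + var_q / var_p + 2.0))
            + 0.25 * (mu_p - mu_q) ** 2 / (var_p + var_q))

# Identical populations give rho = 1 and distance 0 (theta = 0).
print(bhattacharyya_discrete([0.2, 0.3, 0.5], [0.2, 0.3, 0.5]))
print(bhattacharyya_normal(0.0, 1.0, 1.0, 2.0))
```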
We need to keep in mind that a discrete sample could be stored in matrices of the form A and B, where n is the number of observations and m denotes the number of variables captured by the two matrices: A_m× n∼𝒩(μ_1,Σ_1) and B_m× n∼𝒩(μ_2,Σ_2).§.§ Dimension Reduction A key requirement for applying the Bhattacharyya distance in practice is to have data-sets with the same number of dimensions. (Fodor 2002; Burges 2009; Sorzano, Vargas and Montano 2014) are comprehensive collections of methodologies aimed at reducing the dimensions of a data-set using Principal Component Analysis or Singular Value Decomposition and related techniques. (Johnson and Lindenstrauss 1984) proved a fundamental result (the JL Lemma) that says that any n point subset of Euclidean space can be embedded in k=O(log n/ϵ^2) dimensions without distorting the distances between any pair of points by more than a factor of (1±ϵ), for any 0<ϵ<1. Whereas principal component analysis is only useful when the original data points are inherently low dimensional, the JL Lemma requires absolutely no assumption on the original data. Also, note that the final data points have no dependence on d, the dimension of the original data, which could live in an arbitrarily high dimension. We use the version of the bounds for the dimensions of the transformed subspace given in (Frankl and Maehara 1988; 1990; Dasgupta and Gupta 1999). For any 0<ϵ<1 and any integer n, let k be a positive integer such that k≥4(ϵ^2/2-ϵ^3/3)^-1ln n. Then for any set V of n points in R^d, there is a map f:R^d→R^k such that for all u,v∈ V, (1-ϵ)‖ u-v‖^2≤‖ f(u)-f(v)‖^2≤(1+ϵ)‖ u-v‖^2. Furthermore, this map can be found in randomized polynomial time, and one such map is f(x)=1/√(k)Ax, where x∈R^d and A is a k× d matrix in which each entry is sampled i.i.d. from a Gaussian N(0,1) distribution. (Kashyap 2016b) provides expressions for the density functions after dimension transformation when considering log normal distributions, truncated normal and truncated multivariate normal distributions (Appendix A: <ref>). These results are applicable in the context of many variables observed in real life, such as stock prices, heart rates and volatilities, which do not take on negative values. For completeness, we also include the expression for the dimension transformed normal distribution. A relationship between covariance and distance measures is also derived. An asset pricing application and a biological one show the limitless possibilities such a comparison affords. Some pointers for implementation and R code snippets for the Johnson Lindenstrauss matrix transformation, and a modification to the routine currently available to calculate the Bhattacharyya distance, are also listed. This modification allows much larger numbers and dimensions to be handled by utilizing the properties of logarithms and the eigenvalues of a matrix.
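A short sketch of both tools may be helpful; this is our own Python rendering (not the referenced R snippets), using numpy's slogdet in place of the eigenvalue route for the log-determinant modification just mentioned.

```python
# Sketch: JL random projection and a log-determinant implementation of
# D_BC-MN that avoids overflow in high dimensions (names are ours).
import numpy as np

def jl_project(X, eps=0.5, seed=0):
    """Map n points in R^d to R^k, with k >= 4 ln(n) / (eps^2/2 - eps^3/3)."""
    n, d = X.shape
    k = int(np.ceil(4.0 * np.log(n) / (eps ** 2 / 2.0 - eps ** 3 / 3.0)))
    A = np.random.default_rng(seed).standard_normal((d, k))  # i.i.d. N(0,1)
    return X @ A / np.sqrt(k)                # f(x) = (1/sqrt(k)) A x, rowwise

def bhattacharyya_mvn(mu1, S1, mu2, S2):
    """D_BC-MN: 1/8 Mahalanobis term + 1/2 log-determinant ratio."""
    S = 0.5 * (S1 + S2)
    diff = mu1 - mu2
    maha = 0.125 * diff @ np.linalg.solve(S, diff)
    ld = lambda M: np.linalg.slogdet(M)[1]   # log|det M|, numerically stable
    return maha + 0.5 * (ld(S) - 0.5 * (ld(S1) + ld(S2)))
```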
§ FROM SYMBOLS TO NUMBERS, EMPIRICAL ILLUSTRATIONS ACROSS MARKETS We illustrate several examples of how this measure could be used to compare different countries based on the time series variables across all equity securities traded in each market. Our data sample contains prices (open, close, high and low) and trading volumes for most of the securities from six different markets from Jan 01, 2014 to May 28, 2014 (Figure <ref>). Singapore, with 566 securities, is the market with the least number of traded securities. Even if we reduce the dimension of all the other markets with a larger number of securities, for a proper comparison of these markets we would need more than two years' worth of data. Hence, as a simplification, we first reduce the dimension of the matrix holding the prices or volumes for each market using principal component analysis (PCA; see Shlens 2014), so that the number of tickers retained is comparable to the number of days for which we have data. We report the results of using distance measures over the full sample after PCA reduction. We report the full matrix, and not just the upper or lower triangle, since the PCA reduction we do takes the first country, reduces its dimensions up to a certain number of significant digits, and then reduces the dimension of the second country to match the number of dimensions of the first country. For example, this would mean that comparing AUS and SGP is not exactly the same as comparing SGP and AUS. As a safety step before calculating the distance, which requires the same dimensions for the structures holding data for the two entities being compared, we could perform dimension reduction using the JL Lemma if the dimensions of the two countries differ after the PCA reduction. We repeat the calculations for different numbers of significant digits in the PCA reduction. This shows the fine granularity of the results that our distance comparison produces, and highlights the issue that with PCA reduction there is loss of information, since different numbers of significant digits employed in the PCA reduction yield different pairs of markets as the most similar. We illustrate another example, where we compare a randomly selected sub universe of securities in each market, so that the number of tickers retained is comparable to the number of days for which we have data. This approach could also be used when groups of securities are being compared within the same market, a very common scenario when deciding on the group of securities to invest in within a market, as opposed to deciding which markets to invest in. Such an approach would be highly useful for index construction or comparison across sectors within a market. We report the full matrix for the same reason as explained earlier, and perform multiple iterations when reducing the dimension using the JL Lemma. A key observation is that the magnitudes of the distances are very different when using PCA reduction and when using JL Lemma dimension reduction, due to the loss of information that comes with the PCA technique. It is apparent that using dimension reduction via the JL Lemma produces consistent results, since the same pairs of markets are seen to be similar in different iterations. It is worth remembering that in each iteration of the JL Lemma dimension transformation we multiply by a different random matrix, and hence the distance is slightly different in each iteration, but within the bound established by the JL Lemma. When the distance is somewhat close between two pairs of entities, we could observe an inconsistency due to the JL Lemma transformation in successive iterations. Lastly, we calculate sixty day moving volatilities on the close price and trading volume, and calculate the distance measure over the full sample and also across each of the randomly selected sub-samples.§.§ Speaking Volumes Of: Comparison of Trading Volumes The results of the volume comparison over the full sample are shown in Figure <ref>.
For example, in Figure <ref>, AUS - GBR are the most similar markets when two significant digits are used, and AUS - GBR remain the most similar with six significant digits. In this case the PCA and JL Lemma dimension reductions give similar results. The random sample results are shown in Figure <ref>. The left table (Figure <ref>) is for PCA reduction on a randomly chosen sub universe, and the right table (Figure <ref>) is for dimension reduction using the JL Lemma on the same sub universe.§.§ A Pricey Prescription: Comparison of Prices (Open, Close, High and Low)§.§.§ Open Close The results of a comparison between open and close prices over the full sample are shown in Figures <ref>, <ref>, <ref>. For example, in Figure <ref>, AUS - SGP are the most similar markets when two significant digits are used, and AUS - HKG are the most similar with six significant digits. The similarities between open and close prices, in terms of the distance measure, are also easily observed. The random sample results are shown in Figures <ref>, <ref>. The left table (Figures <ref>, <ref>) is for PCA reduction on a randomly chosen sub universe, and the right table (Figures <ref>, <ref>) is for dimension reduction using the JL Lemma on the same sub universe. In Figure <ref>, AUS - IND are the most similar in iteration one and also in iteration five.§.§.§ High Low The results of a comparison between high and low prices over the full sample are shown in Figures <ref>, <ref>, <ref>. For example, in Figure <ref>, AUS - SGP are the most similar markets when two significant digits are used, and AUS - HKG are the most similar with six significant digits. The similarities between high and low prices are also easily observed. The random sample results are shown in Figures <ref>, <ref>. The left table (Figures <ref>, <ref>) is for PCA reduction on a randomly chosen sub universe, and the right table (Figures <ref>, <ref>) is for dimension reduction using the JL Lemma on the same sub universe. In Figures <ref> and <ref>, AUS - IND are the most similar in iteration one and also in iteration five. §.§ Taming the (Volatility) Skew: Comparison of Close Price / Volume Volatilities The results of a comparison between close price volatilities and volume volatilities over the full sample are shown in Figures <ref>, <ref>, <ref>. For example, in Figure <ref>, AUS - GBR are the most similar markets when two significant digits are used, and AUS - HKG are the most similar with six significant digits. In Figure <ref>, AUS - GBR - IND are equally similar markets when two significant digits are used, and AUS - GBR are the most similar with six significant digits. The difference in magnitudes of the distance measures for prices, volumes and volatilities is also easily observed. What this indicates is that prices come from the most dissimilar or distant distributions, volatilities lie in between, and volumes come from the most similar or overlapping distributions. As also observed in the volume comparisons, volume volatility comparisons give seemingly similar results whether PCA or JL Lemma dimension reductions are used. By considering the price volatilities, and creating portfolios of instruments that have dissimilar volatility distributions, we could reduce the overall risk or variance of the portfolio returns, which becomes one potential way of mitigating the effects of wild volatility swings. The random sample results are shown in Figures <ref>, <ref>.
The left table (Figures <ref>, <ref>) is for PCA reduction on a randomly chosen sub universe, and the right table (Figures <ref>, <ref>) is for dimension reduction using the JL Lemma on the same sub universe. In Figure <ref>, AUS - SGP are the most similar in iteration one and also in iteration five. In Figure <ref>, AUS - SGP are the most similar in iteration one and AUS - GBR in iteration five.§ POSSIBILITIES FOR FUTURE RESEARCH* A key limitation of this study is that we have reduced dimensions using PCA, or randomly sampled a sub-matrix from the overall data-set, so that the length of the time series available is comparable to the number of securities being compared. Using a longer time series for the variables would be a useful extension, and a real application would benefit immensely from more history.* We have used the simple formula for the Bhattacharyya distance applicable to multivariate normal distributions. The formulae we have developed for a truncated multivariate normal distribution, or using a Normal Log-Normal Mixture, could give more accurate results. Again, later works should look into tests that can establish which of the distributions would apply depending on the data-set under consideration.* For each market we have looked at seven variables: open, close, low, high, volume, close volatility and volume volatility. These variables can be combined using the expression for the multinomial distance to get a complete representation of which markets are more similar than others. We aim to develop this methodology and illustrate these techniques further in later works.* Once we have the similarity measures across groups of securities, portfolios could be constructed to see how sensitive they are to different explanatory factors, and then performance benchmarks could be used to gauge the risk-return relationship. § CONCLUSIONS We have discussed how the combination of the Bhattacharyya distance and the Johnson Lindenstrauss Lemma provides us with a practical and novel methodology that allows comparisons between any two probability distributions. This approach can help in the comparison of systems that generate prices and quantities, and can aid in the analysis of shopping patterns and the understanding of consumer behavior. The systems could be goods transacted at different shopping malls or trade flows across entire countries. A study of traffic congestion, road accidents and other fatalities across two regions could be performed to get an idea of similarities and to seek common answers where such simplifications might be applicable. Clearly, this methodology lends itself to numerous applications outside the realm of finance and economics. We have illustrated the comparison of prices, volumes and volatilities across six different markets from three continents, demonstrating the power this methodology holds for big (small?) picture decision making. In Indian mythology (End-note <ref>; Zimmer 1972; Doniger 1976; Rao 1993; Flood 1996; Parrinder 1997; Swami 2011), it is believed that in each era, God takes on an avatar or reincarnation to fight the main source of evil in that epoch and to restore the balance between good and bad. In this age of too much information and complexity, perhaps the supreme being needs to be born as a data scientist, conceivably with an apt superhero nickname, the Infoman (for society's fascination with superheroes or superhumans see: Eco and Chilton 1972; Reynolds 1992; Fingeroth 2004; Haslem, Ndalianis and Mackie 2007; Coogan 2009).
Until higher powers intervene and provide the ultimate solution to completely eliminate information overload, we have to make do with marginal methods to reduce information, such as this composition. As we wait for the perfect solution, it is worth meditating upon what superior beings would do when faced with a complex situation, such as the one we are in. It is said that the Universe is but Brahma's (the Creator's) dream (Barnett 1907; Ramamurthi 1995; Ghatage 2010). Research (Effort / Struggle) can help us understand this world; Sleep (Ease / Peace of Mind) can help us create our own world. We just need to be mindful that the most rosy and well intentioned dreams can have unintended consequences (Kashyap 2016e) and turn into nightmares (Nolan 2010; Lehrer 2010; Kashyap 2016f). Native to Australia (End-note <ref>), “Koalas spend about 4.7 hours eating, 4 minutes traveling, 4.8 hours resting while awake and 14.5 hours sleeping in a 24-hour period” (Nagy and Martin 1985). See also (Smith 1979; Moyal 2008). The benefits of yoga on sleep quality are well documented (End-note <ref>; Cohen, Warneke, Fouladi, Rodriguez and Chaoul-Reich 2004; Khalsa 2004; Manjunath and Telles 2005; Chen, Chen, Chao, Hung, Lin and Li 2009; Vera, Manzaneque, Maldonado, Carranque, Rodriguez, Blanca and Morell 2009). A lesson from close by and down under: we need to “Do Some Yoga and Sleep Like A Koala” (Figure <ref>). With that, we present a list of sleeping aids in section <ref>. § SLEEPING AIDS (NOTES AND REFERENCES)* Dr. Yong Wang, Dr. Isabel Yan, Dr. Vikas Kakkar, Dr. Fred Kwan, Dr. William Case, Dr. Srikant Marakani, Dr. Qiang Zhang, Dr. Costel Andonie, Dr. Jeff Hong, Dr. Guangwu Liu, Dr. Humphrey Tung and Dr. Xu Han at the City University of Hong Kong provided advice and, more importantly, encouragement to explore and, where possible, apply cross disciplinary techniques. The views and opinions expressed in this article, along with any mistakes, are mine alone and do not necessarily reflect the official policy or position of either of my affiliations or any other agency.* The Red Queen's race is an incident that appears in Lewis Carroll's Through the Looking-Glass and involves the Red Queen, a representation of a Queen in chess, and Alice constantly running but remaining in the same spot. “Well, in our country,” said Alice, still panting a little, “you'd generally get to somewhere else, if you run very fast for a long time, as we've been doing.” “A slow sort of country!” said the Queen. “Now, here, you see, it takes all the running you can do, to keep in the same place. If you want to get somewhere else, you must run at least twice as fast as that!” https://en.wikipedia.org/wiki/Red_Queen%27s_raceThe Red Queen's Race, Wikipedia LinkThis quote is commonly attributed as being from Alice in Wonderland as: “My dear, here we must run as fast as we can, just to stay in place.
And if you wish to go anywhere you must run twice as fast as that.”* https://en.wikipedia.org/wiki/Portfolio_managerPortfolio Manager, Wikipedia Link* https://en.wikipedia.org/wiki/Universal_Turing_machineUniversal Computing Machine, Wikipedia Link* https://en.wikipedia.org/wiki/ComputerComputer, Wikipedia Link* https://en.wikipedia.org/wiki/MacintoshMAC or Macintosh, Wikipedia Link* https://en.wikipedia.org/wiki/Personal_computerPersonal Computer, Wikipedia Link* https://en.wikipedia.org/wiki/Apple_Computer,_Inc._v._Microsoft_Corp.MAC vs MPC, Wikipedia Link* https://en.wikipedia.org/wiki/History_of_computingHistory Computing, Wikipedia Link* https://en.wikipedia.org/wiki/Computing_platformComputing Platform, Wikipedia Link* https://en.wikipedia.org/wiki/Cloud_computingCloud Computing, Wikipedia Link* https://en.wikipedia.org/wiki/Quantum_computingQuantum Computing, Wikipedia Link* https://en.wikipedia.org/wiki/AvatarAvatar or Reincarnation, Wikipedia Link* https://en.wikipedia.org/wiki/YogaYoga, Wikipedia Link* https://en.wikipedia.org/wiki/Down_UnderAustralia or Down Under, Wikipedia Link* Barnett, L. D. (1907). The Brahma Knowledge. An Outline of the Philosophy of the Vedanta as Set Forth by the Upanishads and by Sankara. Wisdom of the East series. E.P. Dutton Publishing, Boston, Massachusetts.* Bhattacharyya, A. (1943). On a Measure of Divergence Between Two Statistical Populations Defined by their Probability Distributions, Bull. Calcutta Math. Soc., 35, pp. 99-110.* Bhattacharyya, A. (1946). On a measure of divergence between two multinomial populations. Sankhyā: The Indian Journal of Statistics, 401-406.* Burges, C. J. (2009). Dimension reduction: A guided tour. Machine Learning, 2(4), 275-365.* Burkardt, J. (2014). The Truncated Normal Distribution. Department of Scientific Computing Website, Florida State University.* Carroll, L. (1865). (2012 Reprint) Alice's adventures in wonderland. Random House, Penguin Random House, Manhattan, New York.* Carroll, L. (1871). (2009 Reprint) Through the looking glass: And what Alice found there. Random House, Penguin Random House, Manhattan, New York. * Ceruzzi, P. E. (2003). A history of modern computing. MIT press.* Chen, K. M., Chen, M. H., Chao, H. C., Hung, H. M., Lin, H. S., & Li, C. H. (2009). Sleep quality, depression state, and health status of older adults after silver yoga exercises: cluster randomized trial. International journal of nursing studies, 46(2), 154-163.* Chiani, M., Dardari, D., & Simon, M. K. (2003). New exponential bounds and approximations for the computation of error probability in fading channels. Wireless Communications, IEEE Transactions on, 2(4), 840-845.* Clark, P. K. (1973). A subordinated stochastic process model with finite variance for speculative prices. Econometrica: journal of the Econometric Society, 135-155.* Cody, W. J. (1969). Rational Chebyshev approximations for the error function. Mathematics of Computation, 23(107), 631-637.* Cohen, L., Warneke, C., Fouladi, R. T., Rodriguez, M., & Chaoul-Reich, A. (2004). Psychological adjustment and sleep quality in a randomized trial of the effects of a Tibetan yoga intervention in patients with lymphoma. Cancer, 100(10), 2253-2260. * Coogan, P. (2009). The Definition of the Superhero. A comics studies reader, 77. * Dasgupta, S., & Gupta, A. (1999). An elementary proof of the Johnson-Lindenstrauss lemma. International Computer Science Institute, Technical Report, 99-006.* Derpanis, K. G. (2008). The Bhattacharyya Measure. 
Mendeley Computer, 1(4), 1990-1992.* Doniger, W. (1976). The origins of evil in Hindu mythology (No. 6). Univ of California Press. * Eco, U., & Chilton, N. (1972). The myth of Superman.* Fingeroth, D. (2004). Superman on the Couch: What Superheroes Really Tell Us about Ourselves and Our Society. A&C Black. * Frankl, P., & Maehara, H. (1988). The Johnson-Lindenstrauss lemma and the sphericity of some graphs. Journal of Combinatorial Theory, Series B, 44(3), 355-362.* Frankl, P., & Maehara, H. (1990). Some geometric applications of the beta distribution. Annals of the Institute of Statistical Mathematics, 42(3), 463-474.* Flood, G. D. (1996). An introduction to Hinduism. Cambridge University Press.* Fodor, I. K. (2002). A survey of dimension reduction techniques. Technical Report UCRL-ID-148494, Lawrence Livermore National Laboratory.* Ghatage, S. (2010). Brahma's Dream. Anchor Canada, Penguin Random House, Manhattan, New York. * Haslem, W., Ndalianis, A., & Mackie, C. J. (Eds.). (2007). Super/Heroes: From Hercules to Superman. New Academia Publishing, LLC. * Horrace, W. C. (2005). Some results on the multivariate truncated normal distribution. Journal of Multivariate Analysis, 94(1), 209-221.* Ifrah, G., Harding, E. F., Bellos, D., & Wood, S. (2000). The universal history of computing: From the abacus to quantum computing. John Wiley & Sons, Inc. * Johnson, W. B., & Lindenstrauss, J. (1984). Extensions of Lipschitz mappings into a Hilbert space. Contemporary mathematics, 26(189-206), 1. * Kashyap, R. (2014a). Dynamic Multi-Factor Bid–Offer Adjustment Model. The Journal of Trading, 9(3), 42-55.* Kashyap, R. (2014b). The Circle of Investment. International Journal of Economics and Finance, 6(5), 244-263.* Kashyap, R. (2015a). Financial Services, Economic Growth and Well-Being: A Four Pronged Study. Indian Journal of Finance, 9(1), 9-22.* Kashyap, R. (2015b). A Tale of Two Consequences. The Journal of Trading, 10(4), 51-95.* Kashyap, R. (2016a). Hong Kong - Shanghai Connect / Hong Kong - Beijing Disconnect (?), Scaling the Great Wall of Chinese Securities Trading Costs. The Journal of Trading, 11(3), 81-134.* Kashyap, R. (2016b). Combining Dimension Reduction, Distance Measures and Covariance. Working Paper.* Kashyap, R. (2016c). Solving the Equity Risk Premium Puzzle and Inching Towards a Theory of Everything. Working Paper.* Kashyap, R. (2016d). Fighting Uncertainty with Uncertainty. Working Paper.* Kashyap, R. (2016e). Notes on Uncertainty, Unintended Consequences and Everything Else. Working Paper.* Kashyap, R. (2016f). The American Dream, An Unsustainable Nightmare. Working Paper.* Kattumannil, S. K. (2009). On Stein’s identity and its applications. Statistics & Probability Letters, 79(12), 1444-1449.* Keynes, J. M. (1937). The General Theory of Employment. The Quarterly Journal of Economics, 51(2), 209-223.* Keynes, J. M. (1971). The Collected Writings of John Maynard Keynes: In 2 Volumes. A Treatise on Money. The Applied Theory of Money. Macmillan for the Royal Economic Society. * Keynes, J. M. (1973). A treatise on probability, the collected writings of John Maynard Keynes, vol. VIII.* Khalsa, S. B. S. (2004). Treatment of chronic insomnia with yoga: A preliminary study with sleep–wake diaries. Applied psychophysiology and biofeedback, 29(4), 269-278.* Kiani, M., Panaretos, J., Psarakis, S., & Saleem, M. (2008). Approximations to the normal distribution function and an extended table for the mean range of the normal variables.* Kimeldorf, G., & Sampson, A. (1973). 
A class of covariance inequalities. Journal of the American Statistical Association, 68(341), 228-230.* Lawson, T. (1985). Uncertainty and economic analysis. The Economic Journal, 95(380), 909-927.* Lee, K. Y., & Bretschneider, T. R. (2012). Separability Measures of Target Classes for Polarimetric Synthetic Aperture Radar Imagery. Asian Journal of Geoinformatics, 12(2).* Lehrer, J. (2010). https://www.wired.com/2010/07/the-neuroscience-of-inception/The Neuroscience of Inception. Wired 26 Jul. 2010. Web. 13 Aug. 2013.* Manjunath, N. K., & Telles, S. (2005). Influence of Yoga & Ayurveda on self-rated sleep in a geriatric population. Indian Journal of Medical Research, 121(5), 683.* Mazur, J. E. (2015). Learning and behavior. Psychology Press.* McManus, H., & Hastings, D. (2005, July). 3.4. 1 A Framework for Understanding Uncertainty and its Mitigation and Exploitation in Complex Systems. In INCOSE International Symposium (Vol. 15, No. 1, pp. 484-503).* Mill, J. (1829). Analysis of the Phenomena of the Human Mind (Vol. 1, 2). Longmans, Green, Reader, and Dyer.* Miranda, M. J., & Fackler, P. L. (2002). Applied Computational Economics and Finance.* Moyal, A. (Ed.). (2008). Koala: a historical biography. CSIRO PUBLISHING.* Nagy, K. A., & Martin, R. W. (1985). Field Metabolic Rate, Water Flux, Food Consumption and Time Budget of Koalas, Phascolarctos Cinereus (Marsupialia: Phascolarctidae) in Victoria. Australian Journal of Zoology, 33(5), 655-665.* Nolan, C. (2010). Inception [film]. Warner Bros.: Los Angeles, CA, USA.* Parrinder, E. G. (1997). Avatar and incarnation: the divine in human form in the world's religions. Oneworld Publications Limited.* Ramamurthi, B. (1995). The fourth state of consciousness: The Thuriya Avastha. Psychiatry and clinical neurosciences, 49(2), 107-110.* Rao, T. G. (1993). Elements of Hindu iconography. Motilal Banarsidass Publisher. * Reynolds, R. (1992). Super heroes: A modern mythology. Univ. Press of Mississippi. * Rubinstein, M. E. (1973). A comparative statics analysis of risk premiums. The Journal of Business, 46(4), 605-615.* Rubinstein, M. (1976). The valuation of uncertain income streams and the pricing of options. The Bell Journal of Economics, 407-425. * Shlens, J. (2014). A tutorial on principal component analysis. arXiv preprint arXiv:1404.1100. * Simon, H. A. (1962). The Architecture of Complexity. Proceedings of the American Philosophical Society, 106(6), 467-482.* Smith, M. (1979). Behaviour of the Koala, Phascolarctos Cinereus Goldfuss, in Captivity. 1. Non-Social Behaviour. Wildlife Research, 6(2), 117-129.* Soranzo, A., & Epure, E. (2014). Very simply explicitly invertible approximations of normal cumulative and normal quantile function. Applied Mathematical Sciences, 8(87), 4323-4341.* Sorzano, C. O. S., Vargas, J., & Montano, A. P. (2014). A survey of dimensionality reduction techniques. arXiv preprint arXiv:1403.2877.* Stein, C. M. (1973). Estimation of the mean of a multivariate normal distribution. Proceedings of the Prague Symposium of Asymptotic Statistics.* Stein, C. M. (1981). Estimation of the mean of a multivariate normal distribution. The annals of Statistics, 1135-1151.* Swami, B. (2011). Bhagavad Gita as it is. The Bhaktivedanta book trust, Mumbai, India.* Tauchen, G. E., & Pitts, M. (1983). The Price Variability-Volume Relationship on Speculative Markets. Econometrica, 51(2), 485-505.* Teerapabolarn, K. (2013). Stein's identity for discrete distributions. International Journal of Pure and Applied Mathematics, 83(4), 565.* Vera, F. 
M., Manzaneque, J. M., Maldonado, E. F., Carranque, G. A., Rodriguez, F. M., Blanca, M. J., & Morell, M. (2009). Subjective sleep quality and hormonal modulation in long-term yoga practitioners. Biological psychology, 81(3), 164-168.* Williams, M. R. (1997). A history of computing technology. IEEE Computer Society Press.* Yang, M. (2008). Normal log-normal mixture, leptokurtosis and skewness. Applied Economics Letters, 15(9), 737-742.* Zimmer, H. R. (1972). Myths and symbols in Indian art and civilization (Vol. 6). Princeton University Press.* Zogheib, B., & Hlynka, M. (2009). Approximations of the Standard Normal Distribution. University of Windsor, Department of Mathematics and Statistics. § APPENDIX A: DIMENSION REDUCTION, DISTANCE MEASURES AND COVARIANCE All the results below are from (Kashyap 2016b). Other useful references are pointed out in the relevant sections below.§.§ Normal Log-Normal Mixture Transforming log-normal multi-variate variables into a lower dimension by multiplication with an independent normal distribution (see Lemma <ref>) results in a sum of variables with a normal log-normal mixture distribution (Clark 1973; Tauchen and Pitts 1983; Yang 2008), evaluation of which requires numerical techniques (Miranda and Fackler 2002). A random variable, U, is termed a normal log-normal mixture if it is of the form U=Xe^Y, where X and Y are random variables with correlation coefficient ρ satisfying [[ X; Y ]]∼ N([[ μ_X; μ_Y ]],[[ σ_X^2 ρσ_Xσ_Y; ρσ_Xσ_Y σ_Y^2 ]]). We note that for σ_Y=0, when Y degenerates to a constant, this is just the distribution of X and ρ is unidentified. To transform a column vector with d observations of a random variable into a lower dimension of order k<d, we can multiply the column vector with a matrix A∼ N(0,1/k) of dimension k× d. A dimension transformation of d observations of a log-normal variable into a lower dimension k, using Lemma <ref>, yields a probability density function for the sum of random variables, each with a normal log-normal mixture distribution, given by the convolution f_S(s)=f_U_1(u_1)*f_U_2(u_2)*...*f_U_k(u_k). Here, f_U_i(u_i)=√(k)/(2πσ_Y_i)∫_-∞^∞ exp(-y-ku_i^2/(2e^2y)-(y-μ_Y_i)^2/(2σ_Y_i^2))dy, with U_i=X_ie^Y_i and [[ X_i; Y_i ]]∼ N([[ 0; μ_Y_i ]],[[ 1/k 0; 0 σ_Y_i^2 ]]). The convolution of two probability densities arises when we have the sum of two independent random variables, Z=X+Y. The density of Z, h_Z(z), is given by h_Z(z)=(f_X⁎f_Y)(z)=∫_-∞^∞f_X(z-y)f_Y(y)dy=∫_-∞^∞f_X(x)f_Y(z-x)dx. When the number of independent random variables being added is more than two, or the reduced dimension after the Lemma <ref> transformation is more than two, k>2, we can take the convolution of the density resulting after the convolution of the first two random variables with the density of the third variable, and so on in a pairwise manner, till we have the final density. §.§ Normal Normal Product For completeness, we illustrate how dimension reduction would work on a data-set containing random variables that have normal distributions.
This can serve as a useful benchmark given the wide usage of the normal distribution, and can be an independently useful result, though most variables observed in real life are normally not so normal. A dimension transformation of d observations of a normal variable into a lower dimension k, using Lemma <ref>, yields a probability density function for the sum of random variables with a normal normal product distribution, given by the convolution f_S(s)=f_U_1(u_1)*f_U_2(u_2)*...*f_U_k(u_k). Here, f_U_i(u_i)=∫_-∞^∞(1/|x|)(1/(σ_Y_i√(2π))) e^-(x-μ_Y_i)^2/(2σ_Y_i^2)√(k/(2π)) e^-k(u_i/x)^2/2dx, with U_i=X_iY_i and [[ X_i; Y_i ]]∼ N([[ 0; μ_Y_i ]],[[ 1/k 0; 0 σ_Y_i^2 ]]).§.§ Truncated Normal Distribution A truncated normal distribution is the probability distribution of a normally distributed random variable whose value is bounded below, above or both (Horrace 2005; Burkardt 2014). (Kiani, Panaretos, Psarakis and Saleem 2008; Zogheib and Hlynka 2009; Soranzo and Epure 2014) list some of the numerous techniques to calculate the normal cumulative distribution. Approximations to the error function are also feasible options (Cody 1969; Chiani, Dardari and Simon 2003). Despite the truncation, this could be a potent extension when it is known a priori that the values a variable can take are almost surely bounded. Suppose X∼ N(μ,σ^2) has a normal distribution and lies within the interval X∈(a,b), -∞≤ a<b≤∞. Then X conditional on a<X<b has a truncated normal distribution, with probability density function f_X(x|μ,σ^2,a,b)=(1/σ)ϕ((x-μ)/σ)/(Φ((b-μ)/σ)-Φ((a-μ)/σ)) for a≤ x≤ b, and 0 otherwise. Here, ϕ(ξ)=(1/√(2π))exp(-ξ^2/2) is the probability density function of the standard normal distribution and Φ(·) is its cumulative distribution function. There is an understanding that if b=∞, then Φ((b-μ)/σ)=1, and similarly, if a=-∞, then Φ((a-μ)/σ)=0. When two truncated normal distributions p,q have disjoint supports, the Bhattacharyya coefficient is zero (so the distance diverges); when their supports overlap, the distance is given by D_BC-TN(p,q) =1/4((μ_p-μ_q)^2/(σ_p^2+σ_q^2))+1/4ln(1/4(σ_p^2/σ_q^2+σ_q^2/σ_p^2+2))+1/2ln[Φ((b-μ_p)/σ_p)-Φ((a-μ_p)/σ_p)]+1/2ln[Φ((d-μ_q)/σ_q)-Φ((c-μ_q)/σ_q)]-ln{Φ[(u-ν)/ς]-Φ[(l-ν)/ς]}. Here, p∼ N(μ_p,σ_p^2,a,b) ; q∼ N(μ_q,σ_q^2,c,d) ; l=max(a,c) ; u=min(b,d) ; ν=(μ_pσ_q^2+μ_qσ_p^2)/(σ_p^2+σ_q^2) ; ς=√(2σ_p^2σ_q^2/(σ_p^2+σ_q^2)).§.§ Truncated Multivariate Normal Distribution Similarly, a truncated multivariate normal distribution X has the density function f_𝐗(x_1,…,x_k|μ_p, Σ_p, a, b)=exp(-1/2(𝐱-μ_p)^TΣ_p^-1(𝐱-μ_p))/∫_a^bexp(-1/2(𝐱-μ_p)^TΣ_p^-1(𝐱-μ_p))dx; x∈R_a≤x≤b^k. Here, μ_p is the mean vector and Σ_p is the symmetric positive definite covariance matrix of the p distribution, and the integral is a k dimensional integral with lower and upper bounds given by the vectors (a,b), with x∈R_a≤x≤b^k. The Bhattacharyya distance when we have truncated multivariate normal distributions p,q, and all the k dimensions have some overlap, is given by D_BC-TMN(p,q) =1/8(μ_p-μ_q)^TΣ^-1(μ_p-μ_q)+1/2ln(detΣ/√(detΣ_p detΣ_q))+1/2ln[1/√((2π)^k detΣ_p)∫_a^bexp(-1/2(𝐱-μ_p)^TΣ_p^-1(𝐱-μ_p))dx; x∈R_a≤x≤b^k]+1/2ln[1/√((2π)^k detΣ_q)∫_c^dexp(-1/2(𝐱-μ_q)^TΣ_q^-1(𝐱-μ_q))dx; x∈R_c≤x≤d^k]-ln[1/√((2π)^k det(Σ_pΣ^-1Σ_q))
∫_l^uexp(-1/2{(𝐱-𝐦)^T(Σ_q^-1ΣΣ_p^-1)(𝐱-𝐦)})dx; x∈R_max(a,c)≤x≤min(b,d)^k]. Here, p∼ N(μ_p, Σ_p, a, b) ; q∼ N(μ_q, Σ_q, c, d) ; u=min(b,d) ; l=max(a,c) ; 𝐦=[(μ_p^TΣ_p^-1+μ_q^TΣ_q^-1)(Σ_p^-1+Σ_q^-1)^-1]^T ; Σ=(Σ_p+Σ_q)/2.§.§ Covariance and Distance The following is a general extension to Stein's lemma (Stein 1973, 1981; Rubinstein 1973, 1976) that does not require normality, involving the covariance between a random variable and a function of another random variable. (Kattumannil 2009) extends the Stein lemma by relaxing the requirement of normality, and (Teerapabolarn 2013) is a further extension of this normality relaxation to discrete distributions. Another useful reference, (Kimeldorf and Sampson 1973), provides a class of inequalities between the covariance of two random variables and the variance of a function of the two random variables. The following equations govern the relationship between the Bhattacharyya coefficient, ρ(f_X,f_Y), and the covariance between any two distributions with joint density function f_XY(t,u), means μ_X and μ_Y, and density functions f_X(t) and f_Y(t): Cov[c(X),Y]=Cov(X,Y)-E[√(f_Y(X)/f_X(X))Y]+μ_Y ρ(f_X,f_Y) and Cov(X,Y)+μ_Y ρ(f_X,f_Y)=E[c'(X) g(X,Y)]+E[√(f_Y(X)/f_X(X))Y]. Here, c(t)=t-√(f_Y(t)/f_X(t)) and g(t,u) is a non-vanishing function such that f_XY'(t,u)/f_XY(t,u)=-g'(t,u)/g(t,u)+(μ_Y-u)/g(t,u), for t,u∈(a,b).
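To illustrate the truncated-normal distance formula of this appendix numerically, the following sketch (our own Python rendering; it makes the overlap convention l = max(a,c), u = min(b,d) explicit and returns an infinite distance for disjoint supports, where the coefficient vanishes) evaluates D_BC-TN with the standard normal CDF from scipy.

```python
# A numerical sketch of D_BC-TN for p ~ N(mu_p, s_p^2, a, b) and
# q ~ N(mu_q, s_q^2, c, d); function and variable names are ours.
import numpy as np
from scipy.stats import norm

def d_bc_truncnorm(mu_p, s_p, a, b, mu_q, s_q, c, d):
    l, u = max(a, c), min(b, d)          # overlap of the two supports
    if l >= u:
        return np.inf                    # disjoint supports: rho = 0
    nu = (mu_p * s_q**2 + mu_q * s_p**2) / (s_p**2 + s_q**2)
    sig = np.sqrt(2.0 * s_p**2 * s_q**2 / (s_p**2 + s_q**2))
    D = 0.25 * (mu_p - mu_q)**2 / (s_p**2 + s_q**2)
    D += 0.25 * np.log(0.25 * (s_p**2 / s_q**2 + s_q**2 / s_p**2 + 2.0))
    D += 0.5 * np.log(norm.cdf((b - mu_p) / s_p) - norm.cdf((a - mu_p) / s_p))
    D += 0.5 * np.log(norm.cdf((d - mu_q) / s_q) - norm.cdf((c - mu_q) / s_q))
    D -= np.log(norm.cdf((u - nu) / sig) - norm.cdf((l - nu) / sig))
    return D
```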
"authors": [
"Ravi Kashyap"
],
"categories": [
"q-fin.TR"
],
"primary_category": "q-fin.TR",
"published": "20170326125419",
"title": "Microstructure under the Microscope: Tools to Survive and Thrive in The Age of (Too Much) Information"
} |
http://arxiv.org/abs/1703.08964v3 | {
"authors": [
"D. Blaschke",
"D. Ebert"
],
"categories": [
"hep-ph",
"nucl-th"
],
"primary_category": "hep-ph",
"published": "20170327080757",
"title": "Variational path-integral approach to back-reactions of composite mesons in the Nambu-Jona-Lasinio model"
} |
|
Biologically inspired protection of deep networks from adversarial attacks. Aran Nayebi (Neurosciences PhD Program, Stanford University) and Surya Ganguli (Department of Applied Physics, Stanford University). Inspired by biophysical principles underlying nonlinear dendritic computation in neural circuits, we develop a scheme to train deep neural networks to make them robust to adversarial attacks. Our scheme generates highly nonlinear, saturated neural networks that achieve state of the art performance on gradient based adversarial examples on MNIST, despite never being exposed to adversarially chosen examples during training. Moreover, these networks exhibit unprecedented robustness to targeted, iterative schemes for generating adversarial examples, including second-order methods. We further identify principles governing how these networks achieve their robustness, drawing on methods from information geometry. We find these networks progressively create highly flat and compressed internal representations that are sensitive to very few input dimensions, while still solving the task. Moreover, they employ highly kurtotic weight distributions, also found in the brain, and we demonstrate how such kurtosis can protect even linear classifiers from adversarial attack. § INTRODUCTION Deep Neural Networks (DNNs) have demonstrated success in many machine learning tasks, including image recognition <cit.>, speech recognition <cit.>, and even modelling mathematical learning <cit.>, among many other domains. However, recent work has exposed a remarkable weakness in deep neural networks <cit.> (see <cit.> for a survey), namely that very small perturbations to the input of a neural network can drastically change its output. In fact, in image classification tasks, it is possible to perturb the pixels in such a way that the perturbed image is indistinguishable from its original counterpart to a human observer, but the network's class prediction is completely altered. These adversarial examples suggest that, despite the above successes, machine learning models are not fundamentally understanding the tasks that they are trained to perform. Furthermore, the imperceptibility of these adversarial perturbations to human observers suggests that these machine learning algorithms are performing computations that are vastly different from those performed by the human visual system. This discrepancy is of particular scientific concern as deep neural networks now form foundational models in neuroscience for the visual processing stream <cit.>. Their susceptibility to adversarial perturbations that are imperceptible to us suggests our models are missing a fundamental ingredient that is implemented in the brain. However, the existence of adversarial examples is also of particular technological concern in machine learning, as these adversarial examples generalize across architectures and training data, and can therefore be used to attack machine learning systems deployed in society, without requiring knowledge of their internal structure <cit.>. It is important to note that adversarial examples of this form are not limited to deep networks but are an issue even in linear high dimensional classification and regression problems.
A plausible explanation <cit.> for the existence of these adversarial examples lies in the idea that any algorithm that linearly sums its high dimensional input vectors with many small weights can be susceptible to an attacker that adversarially perturbs each of the individual inputs by a small amount, so as to move the entire sum in a direction that would make an incorrect classification likely. This idea led to a fast method to find adversarial examples, which could then be used to explicitly train neural networks to be robust to their own adversarial examples <cit.>. However, it is unclear that biological circuits explicitly find their own adversarial examples by optimizing over inputs and training against them. Therefore, we are interested in guarding against adversarial examples in a more biologically plausible manner, without explicitly training on adversarial examples themselves. Of particular interest is isolating and exploiting fundamental regimes of operation in the brain that prevent the imperceptible perturbations that fool deep networks from fooling us. In this paper, we take inspiration from one fundamental aspect of single neuron biophysics that is not often included in artificial deep neural networks, namely the existence of nonlinear computations in intricate, branched dendritic structures <cit.>. These nonlinear computations prevent biological neurons from performing weighted sums over many inputs, the key factor thought to lead to susceptibility to adversarial examples. Indeed, the biophysical mechanism for linear summation in neurons corresponds to the linear superposition of trans-membrane voltage signals as they passively propagate along dendrites. These voltage waves can linearly sum synaptic inputs. However, there is also a high density of active ionic conductances spread through the dendritic tree that can destroy the linear superposition property enjoyed by purely passive dendrites, thereby limiting the number of synapses that can linearly sum to O(10)-O(100). These active conductances lead to high-threshold, nonlinear, switch-like behavior for voltage signalling. As a result, many parts of the dendritic tree exist in voltage states that are either far below threshold, or far above, and therefore saturated. Thus biological circuits, due to the prevalence of active dendritic processing, may operate in a highly nonlinear switch-like regime in which it is very difficult for small input perturbations to propagate through the system to create large errors in output. Rather than directly mimic this dendritic biophysics in artificial neural networks, here we take a more practical approach and take inspiration from this biophysics to train artificial networks into a highly nonlinear operating regime with many saturated neurons. We develop a simple training scheme to find this nonlinear regime, and we find, remarkably, that these networks achieve state of the art robustness to adversarial examples despite never having access to adversarial examples during training. Indeed, we find 2-7% error rates on gradient-based adversarial examples generated on MNIST, with little to no degradation in the original test set performance. Furthermore, we go beyond performance to scientifically understand which aspects of learned circuit computation confer such adversarial robustness. We find that our saturated networks, compared to unsaturated networks, have highly kurtotic weight distributions, a property that is shared by synaptic strengths in the brain <cit.>.
Also, our networks progressively create, across layers, highly clustered internal representations of different image classes, with widely separated clusters for different classes. Furthermore, we analyze the information geometry of our networks, finding that our saturated networks create highly flat input-output functions in which one can move large distances in pixel space without moving far in output probability space. Moreover, our saturated networks create highly compressed mappings that are typically sensitive to only one direction in input space. Both these properties make it difficult even for powerful adversaries capable of iterative computations to fool our networks, as we demonstrate. Finally, we show that the highly kurtotic weight distributions that are found both in our model and in biological circuits can by themselves confer robustness to adversarial examples in purely linear classifiers. § ADVERSARIAL EXAMPLE GENERATION We consider a feedforward network F with D layers of weights W^1,…,W^D and D+1 layers of neural activity vectors x^0,…,x^D, with N_l neurons in each layer l, so that x^l∈ℝ^N_l and W^l is an N_l× N_l-1 weight matrix. The feedforward dynamics elicited by an input x^0 are x^l = ϕ(h^l), h^l = W^l x^l-1 + b^l for l=1,…,D-1, and x^D = softmax(h^D), where b^l is a vector of biases, h^l is the pattern of inputs to neurons at layer l, and ϕ is a single neuron scalar nonlinearity that acts component-wise to transform inputs h^l to activities x^l. We take y to be the class indicator vector generated from x^D. We also denote by x^D = F(x^0) the network's composite transformation from input to output. For such networks, the essential idea underlying adversarial examples is to start with a test example x^0 that is correctly classified by the network with class indicator vector y, and transform it through an additive perturbation Δx^0 into a new input x^0 + Δx^0 that is incorrectly classified by the network F as having a “goal” class label y^G ≠ y. Moreover, the perturbation Δx^0 should be of bounded norm so as to be largely imperceptible to a human observer. This idea leads naturally to an optimization problem: min_Δx^0‖Δx^0‖ s.t. F(x^0 + Δx^0) = y^G. However, as this is a complex optimization, many simpler methods have been proposed to efficiently generate adversarial examples (e.g. <cit.>). In particular, the fast gradient sign method of <cit.> is perhaps the most efficient method. Motivated by the notion that adversarial attacks can arise even in linear problems in high dimensional spaces, <cit.> linearized the input-output map F around the test example and searched for bounded l_∞ norm perturbations that maximize the network's cost function over the linearized network. More precisely, suppose the cost function of the network is C_0 = C(F(x^0), y); then its linearization is C(F(x^0+Δx^0), y) ≈ C_0 + (∇_FC)J Δx^0, where J is the Jacobian of F. Then the bounded l_∞ norm optimization that maximizes cost has the exact solution Δx^0 = ϵ sgn((∇_FC) J) = argmax_Δx^0 (∇_FC)J Δx^0 s.t. ‖Δx^0‖_∞≤ϵ. If a network is susceptible to these gradient-based adversaries, then we can choose ϵ to be small enough for the given dataset so that the perturbation is imperceptible to human observers, yet large enough for the network to misclassify. For MNIST, <cit.> took ϵ = 0.25, since each pixel is in [0, 1]. We follow this prescription in our experiments.
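A schematic of the fast gradient sign attack just described is given below; it is a framework-agnostic sketch in which grad_fn (an assumed helper, not from any particular library) returns the gradient of the cost C(F(x), y) with respect to the input.

```python
# Sketch of the fast gradient sign method: perturb by eps * sgn(dC/dx),
# then clip to the valid pixel range [0, 1] as for MNIST.
import numpy as np

def fgsm(x0, y, grad_fn, eps=0.25):
    g = grad_fn(x0, y)                 # dC/dx = (grad_F C) J at the example
    x_adv = x0 + eps * np.sign(g)      # bounded l_inf perturbation
    return np.clip(x_adv, 0.0, 1.0)
```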
With efficient methods of generating adversarial examples (<ref>), <cit.> harnessed them to develop adversarial training, whereby the network is trained with the interpolated cost function α C(F(x^0), y) + (1-α)C(F(x^0 + ϵ sgn((∇_FC)J)), y). As a result, the network is trained at every iteration on adversarial examples generated from the current version of the model. On maxout networks trained on MNIST, <cit.> found an error rate of 89.4% on adversarial examples, and with adversarial training (where α = 0.5), they were able to lower this to an error rate of 17.9%. We now turn to ways to avoid training on adversarial examples, in order to make networks more intrinsically robust to their adversarial examples. <cit.> suggested knowledge distillation, which involves changing a temperature parameter T on the final softmax output in order to ensure that the logits are more spread apart. However, the authors do not try their approach on adversarial examples generated by the fast gradient sign method, nor does this approach address the broader criticism of <cit.> that models susceptible to gradient-based adversaries operate heavily in the linear regime. We develop a method that strongly departs from the high dimensional linear regime in which adversarial examples abound. The basic idea is to force networks to operate in a nonlinear saturating regime. § SATURATING NETWORKS A natural starting point to achieve adversarial robustness is to ensure that each element of the Jacobian of the model, J = ∂ F/∂x^0, is sufficiently small, so that the model is not sensitive to perturbations in its inputs. Jacobian regularization is therefore the most direct method of attaining this goal; however, for sufficiently large networks, it is computationally expensive to regularize the Jacobian, as its dimensions can become cumbersome to store in memory. An immediate alternative would be to use a contractive penalty as in <cit.>, whereby the Frobenius norm of the layer-wise Jacobian is penalized: ∑_l = 1^Dλ_l‖∂x^l/∂x^l-1‖_F, where each λ_l∈ℝ. For element-wise nonlinearities, <cit.> show that this penalty can be computed in O(max_l(|x^l|×|x^l-1|)) time, where |·| denotes the length (number of units). While indirectly encouraging the activations to be pushed into the saturating regime of the nonlinearity, this contractive penalty can nonetheless be practically difficult to compute efficiently for networks with a large number of hidden units per layer, and it also tends to limit the model's capacity to learn from data, degrading test set accuracy. Saturating autoencoders were introduced by <cit.> as a means of explicitly encouraging activations to be in the saturating regime of the nonlinearity, in order to limit the autoencoder's ability to reconstruct points that are not close by on the data manifold. Their penalty takes the following form for a given pre-activation h = 𝐖x + b and λ∈ℝ: λ∑_i = 1^|h|ϕ_c(h_i), where the complementary function is defined as ϕ_c(z) ≡inf_z'∈ S|z - z'|, S = {z' |ϕ'(z') = 0}, and reflects the distance of any individual activation to the nearest saturation region. Not only is this penalty simple, but it can be cheaply computed in O(|h|) time.
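As a concrete illustration, here is a minimal numpy sketch of this penalty (our own rendering, not the paper's TensorFlow code), using the closed-form complementary functions for ReLU and sigmoid units given just below.

```python
# Sketch of the complementary saturation penalty phi_c, summed over all
# pre-activations h at every layer; function names are ours.
import numpy as np

def phi_c_relu(h):
    return np.maximum(0.0, h)            # phi_c(z) = max{0, z} for ReLU

def phi_c_sigmoid(h):
    s = 1.0 / (1.0 + np.exp(-h))
    return np.abs(s * (1.0 - s))         # |sigma'(h)| for the sigmoid

def saturation_penalty(pre_activations, phi_c, lam):
    """lam * sum over layers l and units i of phi_c(h_i^l)."""
    return lam * sum(np.sum(phi_c(h)) for h in pre_activations)
```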
We found that applying this regularization to every network layer, including the readout layer prior to the softmax output, worked best against adversarial examples generated by the fast gradient sign method. Thus, our penalty took the form
λ ∑_l=1^D ∑_i=1^N_l ϕ_c(h^l_i).
Observe that for a ReLU, the complementary function in (<ref>) is the ReLU itself, ϕ_c(z) = max{0, z}. While the definition in (<ref>) can also be intricately extended to differentiable functions (as is done in <cit.>), for a sigmoid we can simply take ϕ_c(z) = |σ'(z)| = |σ(z)(1-σ(z))|, since the sigmoid is monotonic.

We used TensorFlow for all of our models <cit.>, and we trained both 3-layer multilayer perceptrons (MLPs) with sigmoid and ReLU nonlinearities, as well as convolutional neural networks (CNNs), on 10-class MNIST. For comparison, we trained adversarially trained networks as in (<ref>), finding that α = 0.5 gave the best performance. Each network was optimized for performance separately, and we varied the number of hidden units in the MLPs between 200 and 2000 to choose the architecture with the best performance. Our CNN architecture is detailed in Table <ref>, and we used the stronger penalty f(z) = z only at the last layer of the CNN. We used Adam <cit.> as our optimizer.

In order to train effectively with the saturating penalty in (<ref>), we found that annealing λ during training was essential. Starting from λ_min = 0, λ was progressively increased to λ_max = 1.74 in steps of size 0.001 for the sigmoidal MLP, to λ_max = 3.99×10^-8 in steps of size 10^-10 for the ReLU MLP, and to λ_max = 10^-5 in steps of size 10^-5 for the CNN. We ultimately found it easier to find an annealing schedule for the CNN than for the MLPs, further suggesting the viability of this approach in practice.

We list our results in Table <ref>. For each model class, we are able to maintain (with little degradation) the original test set accuracy of the network's vanilla counterpart, while also outperforming the adversarially trained counterpart on the adversarial set generated from the test set. We now turn to analyzing the source of adversarial robustness in our networks.

§ INTERNAL REPRESENTATION ANALYSIS

We now examine the internal representations learned by saturating networks (in particular the MLPs) and compare them to those learned by their vanilla counterparts, to gain insight into the distinguishing features that make saturating networks intrinsically robust to adversarial examples. In Figure <ref>, we compare the weight distributions of the vanilla MLP and the saturating MLP. The saturating MLP weights take values in a larger range, with a tail that tends to extreme values in the saturating regime of the nonlinearity. For the sigmoid, this leads to extreme weight values at both ends, while for the saturating ReLU MLP it leads to extreme negative values. A particularly dramatic change is the much larger positive excess kurtosis of the weight distribution for saturating versus vanilla networks. Indeed, high kurtosis is a property shared by weight distributions in biological networks <cit.>, raising the question of whether it plays a functional role in protection against adversarial examples.
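The excess kurtosis quoted here can be estimated directly from the weight matrices; a short sketch follows, where the Fisher convention (a Gaussian has excess kurtosis 0) matches the usage in this paper.

```python
import numpy as np
from scipy.stats import kurtosis

def excess_kurtosis(weights):
    # Fisher convention: a Gaussian distribution scores 0.
    return kurtosis(np.asarray(weights).ravel(), fisher=True)

rng = np.random.default_rng(0)
print(excess_kurtosis(rng.normal(size=100000)))   # ~0, the Gaussian baseline
print(excess_kurtosis(rng.laplace(size=100000)))  # ~3: heavier tails
```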
In Section 8, we will demonstrate that highly kurtotic weight distributions can act as a linear mechanism of protection against adversarial examples, in addition to the nonlinear mechanism of saturation. Moreover, in Figure <ref> we see that the pre-nonlinearity activations at each layer, across all 10,000 test examples, also tend to extreme values, as expected, validating that these models are indeed operating in the saturating regime of their respective nonlinearities.

Beyond examining the weights and the activations, we also examine the global structure of internal representations by constructing, for each network and layer, the representational dissimilarity matrix (RDM) of its activities <cit.>. For each of the 10 classes, we chose 100 test set examples at random and computed the pairwise squared-distance matrix
d(ϕ(h^l,a), ϕ(h^l,b)) = 1/N_l ∑_i=1^N_l (ϕ(h^l,a_i) - ϕ(h^l,b_i))^2
between all pairs a and b of the 1000 test examples. Here h^l,a and h^l,b are the inputs to the hidden units at layer l on network inputs x^0,a and x^0,b, respectively.

As shown in Figure <ref>, a distinguishing feature emerges between the RDMs of the vanilla network and the saturated network. At every layer, while the within-class dissimilarity, in the diagonal blocks, is close to zero for both networks, the between-class dissimilarities in the off-diagonal blocks are much larger in the saturated network than in the vanilla network. Moreover, this dissimilarity is progressively enhanced in saturating networks as one traverses deeper into the network towards the final output. Thus, while both networks form internal representations in which images from each class are mapped to tight clusters in internal representation space, these internal clusters are much further apart from each other in saturating networks. This increased cluster separation likely contributes to adversarial robustness because it necessitates larger-norm input perturbations to move representations in deeper layers across decision boundaries, not only in the output layer, but also in intermediate layers.

§ THE GEOMETRY OF SATURATING NETWORKS

While the RDM analysis above showed increased cluster separation in internal representations, we would like to understand better the geometry of the network input-output map and how it contributes to adversarial robustness. To this end, we seek to understand how motions in input space are transformed into motions in the output space of probability distributions over class labels. To do so, we rely on the framework of information geometry and Fisher information <cit.>. In particular, the network output, as a probability distribution over class labels, is endowed with a natural Riemannian metric, given by the Fisher information. We can think of the 10-dimensional vector of inputs h^D to the final layer as coordinates on this space of distributions (modulo the irrelevant global shift h^D → h^D + λ, which corresponds to an overall rescaling of the unnormalized probabilities e^h^D_i and leaves the softmax invariant). In terms of these coordinates, the actual probabilities are determined through the softmax function: p_i(h^D) = e^h^D_i / Z, where Z = ∑_i e^h^D_i. The Fisher information metric on the coordinates h^D_i is then given by
G^F_ij = ∑_k p_k (∂_i log p_k)(∂_j log p_k) = p_i δ_ij - p_i p_j.
In turn, this metric on h^D induces a metric on input space x^0 via the pullback operation on metrics. The resultant metric G^in on input space is given by
G^in = J^T G^F J,
where J = ∂h^D/∂x^0 is the Jacobian from input space to layer D.
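A hedged sketch of this construction follows, assuming the Jacobian J = ∂h^D/∂x^0 has already been obtained (e.g. by automatic differentiation); the helper names are ours.

```python
import numpy as np

def fisher_metric_logits(p):
    """Fisher metric on softmax inputs h^D: G^F = diag(p) - p p^T."""
    return np.diag(p) - np.outer(p, p)

def pullback_metric(J, p):
    """Induced metric on input space: G_in = J^T G^F J,
    with J the Jacobian dh^D/dx^0 (supplied by the caller)."""
    return J.T @ fisher_metric_logits(p) @ J

def length_element(G_in, dx):
    # dl = sqrt(dx^T G_in dx) for a small input step dx
    return np.sqrt(dx @ G_in @ dx)
```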
Geometrically, if one moves a small amount from x^0 to x^0 + dx, the resultant distance dl one moves in output probability space, as measured by the Fisher information metric, is given by dl = √(∑_ij G^in_ij dx_i dx_j). Thus, the metric assigns lengths to curves in input space according to how far they induce motions in output space. The Jacobian J is also of independent geometric interest: as a local linearization of the input-output map, the number of non-trivially large singular values of J determines how many directions in input space the network's input-output map is locally sensitive to.

To explore the geometric structure of both vanilla and saturating deep network maps, we move continuously in input space between the most confident image in a given source class, x^0_S, and one in a target class, x^0_T, along a simple linear interpolation path in pixel space:
x^0(λ) = (1-λ) x^0_S + λ x^0_T, λ∈[0,1].
As we move along this path, in Figure <ref> we plot the length element in (<ref>), the induced trajectory in output probability space, and the spectrum of singular values of the Jacobian J. As expected, the length element increases precisely when the output trajectory in probability space makes large transitions. At these points, one or more singular values of J also inflate.

Several distinguishing features arise in the geometry of vanilla versus saturated networks in Figure <ref>. The first is that the length element is smooth and continuous for the vanilla network, but locally flat with sharp peaks at class probability transitions for the saturating network. Thus, for saturating networks one can move long distances in input space without moving much at all in output space. This property likely confers robustness to gradient-based adversaries, which would have difficulty traversing input space under such constant, or flat, input-output maps. A second distinguishing feature is that, at probabilistic transition points, multiple singular values inflate in vanilla networks, while only one singular value does so in saturating networks. This implies that vanilla networks are sensitive to multiple dimensions, while saturating networks perform extremely robust and rapid transitions between distinct probabilistic outputs in a way that strongly suppresses sensitivity to input perturbations in all directions orthogonal to the transition. This property again likely confers robustness to adversaries, as it strongly constrains the number of directions of expansion that an adversary can exploit to alter output probabilities.

Finally, it is interesting to compare the geometry of these trained networks to the Riemannian geometry of random neural networks, which often arise as initial conditions before training. An extensive analysis of this geometry, performed by <cit.>, revealed the existence of two phases in deep networks: a chaotic (ordered) phase when the random weights have high (low) variance. In the chaotic (ordered) phase the network locally expands (contracts) input space everywhere. In contrast, trained networks flexibly deploy both ordered and chaotic phases differentially across input space; they contract space at the center of decision volumes and expand space in the vicinity of decision boundaries. Saturating networks, however, do this in a much more extreme manner than vanilla networks.

§ MORE POWERFUL ITERATIVE ADVERSARIES

One can construct more powerful adversaries than the fast gradient sign method by iteratively finding sensitive directions in the input-output map and moving along them.
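One simple instantiation is iterated gradient descent on the cross-entropy toward a chosen target label; the sketch below reuses the linear softmax stand-in from the FGSM example and plain gradient steps in place of an adaptive optimizer.

```python
import numpy as np

def iterative_targeted_attack(x0, target_onehot, W, b, lr=0.1, steps=100):
    """Iterated gradient descent on the cross-entropy toward a target
    label, for the linear softmax classifier of the FGSM sketch."""
    x = x0.copy()
    for _ in range(steps):
        z = W @ x + b
        p = np.exp(z - z.max())
        p /= p.sum()
        x -= lr * (W.T @ (p - target_onehot))  # step toward the target class
        x = np.clip(x, 0.0, 1.0)               # stay in pixel range
    return x
```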
How robust are saturating networks to these types of adversaries? From the information-geometric standpoint described above, given the local flatness of the input-output map, an iterative gradient-based adversary may still encounter difficulty with the saturated network, especially since the number of directions of expansion is additionally constrained by the compressive nature of the map.

We first created adversaries via iterative first-order methods. For each chosen source image x_S and its associated correct source class y_S, we chose a target class y_T ≠ y_S. We then attempted to find adversarial perturbations that cause the network to misclassify x_S as belonging to class y_T. To do so, starting from x^(0)_adv = x_S, we iteratively minimized the cross-entropy loss ℓ via gradient descent:
x^(t+1)_adv = x^(t)_adv - α_t ∇_x^(t)_adv ℓ(x^(t)_adv, y_T).
This procedure adjusts the adversarial example x^(t)_adv so as to make the incorrect label y_T more likely. We used Adam <cit.> so that the learning rate α_t would adapt at each iteration t. For a given source class, we started with the source image the network was either least or most confident on. Although we were able to get the vanilla network to misclassify in either case (usually within fewer than 10 iterations), there were several cases (such as source class 3 and target class 7) where we were unable to get the saturated network to misclassify, even in the most extreme case where we ran Adam for 30 million iterations. Although the image was changing at each iteration, and the mean pixel distance from the starting image steadily increased and converged, the resultant image did not cause the saturated network to misclassify.

As a result, we moved on to second-order adversaries, as <cit.> had similarly considered. Thus, we considered quasi-Newton methods such as L-BFGS, minimizing the cross-entropy loss ℓ as follows:
x^(t+1)_adv = x^(t)_adv - α_t B^-1_t ∇_x^(t)_adv ℓ(x^(t)_adv, y_T),
where B_t is the approximate Hessian at iteration t and the learning rate α_t is obtained by performing a line search in the direction p_t satisfying B_t p_t = -∇_x^(t)_adv ℓ(x^(t)_adv, y_T).

In Figure <ref>, we ran L-BFGS for 1000 iterations on both the vanilla network and the saturated network, starting with a source image that each network correctly classified but with the lowest softmax probability in that class (lowest confidence). In Figure <ref> in the Supplementary Material (SM), we include the same analysis starting with the most confident source image in each class. Regardless of whether we start with the least or most confident source image in a class, we can always find an adversarial image that fools the vanilla network into misclassifying as the intended target class (usually within 1-2 iterations). For the saturated network, however, even starting with the least confident source image, we were in the majority of cases unable to fool the network. Moreover, as depicted in Figure <ref> in the SM, it was even more difficult to fool the saturated network with the most confident source image, with only 5 successful cases even after 1000 iterations.

§ ROLE OF WEIGHT KURTOSIS: A LINEAR MECHANISM FOR ROBUSTNESS TO ADVERSARIES

As we observed in Section 5, saturating networks had high-kurtosis weight distributions in every layer when compared to their vanilla counterparts. Indeed, such kurtotic weight distributions are prevalent in biological neural networks <cit.>.
Here we demonstrate that high-kurtosis weight distributions can act as a linear mechanism to guard against adversarial examples. Indeed, sensitivity to adversarial examples is not unique to neural networks, but arises in many machine learning methods, including linear classification.

Consider, for example, a classification problem with two cluster prototypes with weight vectors w_1∈ℝ^n and w_2∈ℝ^n. For simplicity, we assume w_1 and w_2 lie in orthogonal directions, so w_1·w_2 = 0. An input x is classified as class 1 if w_1·x > w_2·x; otherwise x is classified as class 2. Now consider a test example that is the cluster prototype for class 1, i.e. x = w_1, and an adversarial perturbation w_1 + Δx. This perturbed input will be misclassified if and only if
(w_2 - w_1)·(w_1 + Δx) > 0.
Following the fast gradient sign method, we can choose Δx to be the maximum perturbation under the constraint ‖Δx‖_∞ < ϵ in (<ref>). This optimal perturbation is
Δx = ϵ sgn(w_2 - w_1).
For this bounded l_∞ norm perturbation to cause a misclassification, we must then have
ϵ > ϵ_min ≡ ‖w_1‖_2^2 / ‖w_2 - w_1‖_1,
where we again use the assumption w_1·w_2 = 0. Thus, if the l_1 norm in the denominator is small, then the network is adversarially robust in the sense that a large perturbation is required to cause a misclassification, whereas if the l_1 norm is large, it is not.

Now, in high dimensional spaces, l_1 norms can be quite large relative to l_2 norms. In particular, for any unit l_2 norm vector v we have 1 ≤ ‖v‖_1 ≤ √n, where the upper bound is realized by a dense uniform vector with each entry 1/√n and the lower bound by a coordinate vector with one nonzero entry equal to 1. Both these vectors lie on the l_2 ball of radius 1, but this l_2 ball intersects the circumscribing l_1 ball of radius √n at the former vector and the inscribing l_1 ball of radius 1 at the latter. This intersection of l_1 and l_2 balls of very different radii in a high dimensional space likely contributes to the prevalence of adversarial examples in high dimensional linear classification problems, by allowing the denominator in (<ref>) to be large and the numerator to be small.

However, we can avoid the bad regime of dense uniform vectors with large l_1 norm if the weights are sampled from a kurtotic distribution. In this case, we may expect the numerator ‖w_1‖_2^2 in (<ref>) to be large, as we are likely to sample extreme values, but the denominator ‖w_2 - w_1‖_1 to be small, due to the peak of the distribution near 0. To test this idea, we sampled unit norm random vectors of dimension 20000, so that w_1·w_2 ≈ 0, with values drawn iid from a Pearson Type VII distribution, whose density is
f(x; γ_2) = c(γ_2) (1 + (x/√(2+6/γ_2))^2)^(-5/2-3/γ_2),
where c(γ_2) = 1/(√(2+6/γ_2) B(2+3/γ_2, 1/2)), B is the Euler Beta function, and γ_2 denotes the excess kurtosis of the distribution. In Figure <ref>, we computed the ratio in (<ref>) and scaled it by the input intensity, given by the average absolute value of a nonzero component of w_1. The resultant scaled ratio was computed for each value of γ_2. Note that a standard Gaussian has an excess kurtosis of 0, which serves as a baseline. Increasing the excess kurtosis via, for example, a Pearson Type VII density increases the scaled ratio by almost 40% over the Gaussian baseline.
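A sketch of this experiment is given below. It relies on our reading of the parameterization above, under which this Pearson Type VII density is a rescaled Student-t with ν = 4 + 6/γ_2 degrees of freedom (consistent with the fact that a t_ν variable has excess kurtosis 6/(ν-4) = γ_2); the text does not say whether the ratio is divided or multiplied by the input intensity, and we divide.

```python
import numpy as np
from scipy.stats import t as student_t

def sample_pearson7(gamma2, size, seed):
    # Pearson VII as a rescaled Student-t: nu = 4 + 6/gamma2 degrees of
    # freedom, scale a/sqrt(nu) with a = sqrt(2 + 6/gamma2).
    nu = 4.0 + 6.0 / gamma2
    a = np.sqrt(2.0 + 6.0 / gamma2)
    return student_t.rvs(df=nu, size=size, random_state=seed) * a / np.sqrt(nu)

def scaled_eps_min(gamma2, n=20000):
    w1 = sample_pearson7(gamma2, n, seed=0)
    w2 = sample_pearson7(gamma2, n, seed=1)
    w1 /= np.linalg.norm(w1)
    w2 /= np.linalg.norm(w2)
    # eps_min = ||w1||_2^2 / ||w2 - w1||_1; after normalizing, ||w1||_2 = 1.
    eps_min = 1.0 / np.linalg.norm(w2 - w1, 1)
    # "Scaled by the input intensity": here divided by the mean |component|.
    return eps_min / np.abs(w1).mean()

# The scaled ratio should grow with excess kurtosis:
print(scaled_eps_min(0.01), scaled_eps_min(10.0))
```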
In fact, if we sample, via inverse transform sampling, from the weight distribution of the saturated network at a given layer, the scaled ratio can increase by as much as 300% relative to sampling from the weight distribution of the same layer in the vanilla network. Thus, even in the case of linear classification, kurtotic weight distributions, including the weight distributions learned by our saturating networks, can improve robustness to adversarial examples.

§ DISCUSSION

In summary, we have shown that a simple, biologically inspired strategy for finding highly nonlinear networks operating in a saturated regime provides interesting mechanisms for guarding DNNs against adversarial examples without ever computing them. Not only do we gain improved performance over adversarially trained networks on adversarial examples generated by the fast gradient sign method, but our saturating networks are also relatively robust against iterative, targeted methods, including second-order adversaries. We additionally move beyond empirical results to analyze the sources of intrinsic robustness to adversarial perturbations. Our information-geometric analyses reveal several important features, including highly flat and low dimensional internal representations that nevertheless widely separate images from different classes. Moreover, we have demonstrated that the highly kurtotic weight distributions found both in our networks and in our brains can act as a linear mechanism against adversarial examples. Overall, we hope our results can aid in combining theory and experiment to form the basis of a general theory of biologically plausible mechanisms for adversarial robustness.

§ ACKNOWLEDGEMENTS

We thank Ben Poole and Niru Maheswaranathan for helpful comments on the manuscript, and the ONR and the Burroughs Wellcome, Simons, McKnight, and James S. McDonnell foundations for support. | http://arxiv.org/abs/1703.09202v1 | {
"authors": [
"Aran Nayebi",
"Surya Ganguli"
],
"categories": [
"stat.ML",
"cs.LG",
"q-bio.NC"
],
"primary_category": "stat.ML",
"published": "20170327174507",
"title": "Biologically inspired protection of deep networks from adversarial attacks"
} |
A circuit-preserving mapping from multilevel to Boolean dynamics

Adrien Fauré and Shizuo Kaji (the authors contributed equally to this work)

A. Fauré: Department of Physics and Information Science, Yamaguchi University, 1677-1 Yoshida, Yamaguchi 753-8512, Japan. [email protected]
S. Kaji: Department of Mathematics, Yamaguchi University, 1677-1 Yoshida, Yamaguchi 753-8512, Japan / JST PRESTO. [email protected]
2010 Mathematics Subject Classification: Primary 68R05; Secondary 92D99

Many discrete models of biological networks rely exclusively on Boolean variables, and many tools and theorems are available for the analysis of strictly Boolean models. However, multilevel variables are often required to account for threshold effects, to which knowledge of the Boolean case does not generalise straightforwardly. This motivated the development of conversion methods from multilevel to Boolean models. In particular, Van Ham's method has been shown to yield a one-to-one, neighbour- and regulation-preserving dynamics, making it the de facto standard approach to the problem. However, Van Ham's method has several drawbacks: most notably, it introduces vast regions of “non-admissible” states that have no counterpart in the original multilevel model. This raises special difficulties for the analysis of interactions between variables and of circuit functionality, which is believed to be central to the understanding of the dynamic properties of logical models. Here, we propose a new multilevel-to-Boolean conversion method, with a software implementation. Contrary to Van Ham's, our method does not yield a one-to-one transposition of multilevel trajectories; however, it maps each and every Boolean state to a specific multilevel state, thus getting rid of the non-admissible regions, at the expense of (apparently) more complicated, “parallel” trajectories. One of the prominent features of our method is that it preserves dynamics and interactions of variables in a certain manner. As a demonstration of its usability, we apply it to construct a new Boolean counter-example to the well-known conjecture that a local negative circuit is necessary to generate sustained oscillations. This result illustrates the general relevance of our method for the study of multilevel logical models.

§ BACKGROUND

Boolean models have proved very useful in the analysis of various networks in biology. However, it is often convenient to introduce multilevel variables to account for multiple threshold effects. We are thus often faced with a choice between using Boolean or multilevel variables. This can be crucial, since theoretical results are sometimes proved only for Boolean or only for multilevel networks. A particular example of this situation is René Thomas' conjecture that a local negative circuit is necessary to produce sustained (asynchronous) oscillations. This paper stems from the simple idea that a Boolean counter-example to that conjecture could be found by transposing a multilevel counter-example found earlier by Richard and Comet. However, we believe the method developed in this paper, together with a handy script which implements it, is widely applicable to other theoretical studies which involve discrete networks.
We also find the notion of asymptotic evolution function defined in this paper sheds light on the understanding of relation between the state transition graph and the interaction graph.§.§ IntroductionIntroduced in the 1960s-70s to model biological regulatory networks, the logical (discrete) formalism has gained increasing popularity, with recent applications as diverse as drosophila development, cell cycle control, or immunology (see <cit.> for a survey).While many of these models rely exclusively on Boolean variables,it is often useful to introduce multilevel variablesto account for more refined behaviour. However, many tools and theoretical results are restricted to the Boolean case (see e.g. <cit.>) This situation motivated the development of methods to convert multilevel models to Boolean ones <cit.>. A simple idea for such a conversion was introduced by Van Ham <cit.>, and this method has been shown to be essentiallythe only one that could provide a “one-to-one, neighbour and regulation preserving map” <cit.>. One problem with the conversion is that the resulting Boolean model isdefined only on asub-region of the whole Boolean state space,called the admissible region, and how to extend the model outside thatregion is not trivial. This leads to potential problems with analytical tools designed to deal with the whole state space, as a property that is true in the restricted domain may be false on the whole state space, and vice versa. The primary goal of the present paper is to address this issue by introducing an extension of Van Ham's method. More precisely, we introduce a new method for multilevel to Boolean model conversion which extends the domain of Van Ham's model to the whole state space while preserving edge functionality and, therefore, local circuits. Our mapping yields a state transition graph with “parallel” trajectories that contains the one obtained by Van Ham's mapping as a sub-graph in such a way that attractors of the dynamics are preserved. We apply our method to investigate a particular class of theoretical results that connect the asynchronous behaviour of a model to the presence ofregulatory circuits in the interaction graph. In the early 1980s, R. Thomas conjectured that the presence of a positive circuit (i.e. a circuit where each component directly or indirectly has a positive effect upon itself) in the interaction graph is a necessary condition for multi-stability, and a negative circuit (where each component has a negative effect upon itself) is necessary for sustained oscillations <cit.>.One particular formulation of the conjecture focuses on local or “type-1” circuits <cit.>, i.e. circuits whose arcs are all functional in the same single point of the system's state transition graph – as opposed to global circuits whose arcs may be functional anywhere. While the conjecture holds for positive circuits both at the global and local levels, and for multilevel as well as Boolean models <cit.>, in the negative case the conjecture could only be proved true at the global level <cit.>. At the local level, a counter-example has first been published for multilevel models <cit.>, while the Boolean case remained open <cit.> until a Boolean counter-example was eventually discovered <cit.>, showing that contrary to expectations, a local negative circuit was not necessary to generate sustained oscillations. Interestingly, the approaches taken by P. Ruet and A. Richard are rather different, and their counter-examples have little in common. 
Applying our method to the Richard-Comet multilevel counter-example, we obtain a new Boolean counter-example to the conjecture that a local negative circuit is necessary to produce sustained oscillations. §.§ Definitions §.§.§ Evolution function and State transition graphWe work within the generalised logical framework introduced by René Thomas and collaborators <cit.>; see Abou-Jaoudé et al. <cit.> for a recent review. Here, we introduce the notation we use throughout this paper.Fix positive integers n and m_i(1≤ i≤ n). Consider a system consisting of mutually interacting n genes, indexed by the set I={1,2,…,n}. Each gene a_i takes expression levels in the integer interval {0,1,…,m_i}. The state of the system evolves depending on the current state. This leads to a discrete dynamical system represented by a evolution function over Mf=(f_1,f_2,…,f_n): M → M, where M={ (x_1,…,x_n)| x_i∈{0,1,…,m_i}}. As a special case when m_i=1 for all i∈ I, we denote M=^n with ={0,1} and call the system Boolean. A basic question asks what we can tell about the asymptotic global behaviour of the dynamics, which is encoded in the state transition graph,from local data of f, which are encoded in the partial derivatives of f or the interaction graph. The evolution of the whole system can be formally modelled by a certain kind of directed graph on M. We equip M with the usual metric d(x,x')=∑_i=1^n |x_i-x'_i| for x,x'∈ M. Denote by e_1=(1,0,0,…), e_2=(0,1,0,0,…), etc. the coordinate vectors of M.A grid graph Γ over M is a graph with the vertex set M satisfying that * each directed edge connects a pair of vertices of distance one* at each vertex x there are no two parallel outward edges; that is,x-e_j ← x → x+e_jis not allowed.The state of the whole system is represented by the levels of genes,and corresponds to a vertex in Γ. At each time step, the state evolves to one of its neighbouring vertices connected by an arrow in the following way.To an evolution function over M, we associate a grid graph Γ(f) over M called the (asynchronous) state transition graph with the edge set {(x_1,x_2,…,x_j,…,x_n) → (x_1,x_2,…,x_j+δ,…,x_n), δ= -1 (f_j(x)<x_j) +1 (f_j(x)>x_j) }.Note that here we follow the standard convention that transition of states is unitary (see <cit.>) so that the existence of an edge x→ x' implies d(x,x')=1; that is, at each step the level of a single gene changes at most by one.Asymptotic behaviour of the evolution of a system can be captured in a graph theoretical entity of the state transition graph. An attractor is a terminal strongly connected sub-graph of Γ; that is, any two elements of it are connected by a path and there is no edge from its elements to one in the complement. An attractor consisting of a single vertex is called a stable state,otherwise it is called a cyclic attractor. Intuitively, attractors are domains in Γ in which the system eventually resides; there is no way to escape once the system arrives in it, but each state in the domain can be visited after arbitrarily many steps. 
§.§.§ Interaction graph and circuit functionality

A common practice in analysing interactions among genes in a network is to encode them in the form of a labelled directed graph called the interaction graph, where interaction is measured by the partial derivatives of the evolution function f=(f_1,f_2,…,f_n): M→ M. The forward partial derivative of f_i along the j-th coordinate at x=(x_1,…,x_n) with x_j<m_j is defined by
∂^+_j f_i(x)=f_i(x_1,…,x_j+1,…,x_n)-f_i(x_1,…,x_j,…,x_n)=f_i(x+e_j)-f_i(x).
The backward partial derivative along the j-th coordinate at x with x_j>0 is defined similarly by
∂^-_j f_i(x)=f_i(x_1,…,x_j,…,x_n)-f_i(x_1,…,x_j-1,…,x_n)=f_i(x)-f_i(x-e_j).
The partial derivatives ∂^+_j f_i(x) and ∂^-_j f_i(x) are non-trivial when the i-th gene's target value changes along with a change of the j-th gene. They encode the dependence between genes locally at the state x∈ M. For a Boolean network, only one of the forward or the backward partial derivative exists at each x, so we put them together to define the ordinary partial derivative, denoted by ∂_j. In the multilevel case, on the other hand, we have both the forward and the backward partial derivatives at some x, and it is important to consider both of them (c.f. <cit.>).

The (local) interaction graph Gf(x) of f at x is a graph over the vertex set I such that there exists an edge from j to i
* with label “+” if ∂^+_j f_i(x)>0 or ∂^-_j f_i(x)>0
* with label “-” if ∂^+_j f_i(x)<0 or ∂^-_j f_i(x)<0.
Note that we can have both positive and negative edges from j to i at the same time. We define the global interaction graph Gf(M) as the union of the edges of Gf(x) for all x∈ M.

[<cit.>]
* A cycle C in Gf(M) is called a positive (resp. negative) circuit if it contains an even (resp. odd) number of negative edges.
* A circuit C is said to be type-1 functional if C⊂ Gf(x) for some x.

As in the continuous case, a function f is recovered up to a constant by its partial derivatives: for two evolution functions f,g: M→ M satisfying ∂^+_j f_i=∂^+_j g_i for all i,j∈ I (or ∂^-_j f_i=∂^-_j g_i for all i,j∈ I), the difference f_i(x)-g_i(x) is constant for any i∈ I. In particular, two distinct Boolean evolution functions have the same partial derivatives if and only if each component in which they differ is constant; in that case they do not coincide at any point. This means the partial derivatives carry almost all the information of the network. However, the next example shows that the partial derivatives are not enough to determine the asymptotic behaviour of the dynamics.

Consider the Boolean evolution functions defined by
f(x_1,x_2) = (0,0) ((x_1,x_2)=(1,0)), (1,0) (otherwise),
g(x_1,x_2) = (0,1) ((x_1,x_2)=(1,0)), (1,1) (otherwise).
Since they differ by a constant, their partial derivatives agree. There exists a unique cyclic attractor (0,0) ↔ (1,0) in Γ(f), whereas there exists a unique stable state (1,1) in Γ(g).

§ METHODS

§.§ Asymptotic evolution function

The correspondence between evolution functions and state transition graphs is not bijective. In fact, as discussed by Streck et al. <cit.>, for a given (multilevel) grid graph Γ there are multiple evolution functions which have Γ as their state transition graph. To have a bijective correspondence between the two representations of the system, we restrict ourselves to a certain class of evolution functions. There are two major conventions:
* We say f is stepwise or unitary if |f_i(x_1,…,x_n)-x_i| ≤ 1 for all i∈ I and x∈ M.
* We say f is asymptotic if f_i(x_1,…,x_i,…,x_n)∈{0,x_i,m_i} for all i∈ I and x∈ M.
In both cases, f_i encodes only the sign of f_i(x)-x_i.
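Both conventions are straightforward to check programmatically; a small sketch, continuing the Python setting used earlier (an evolution function being a map from state tuples to state tuples):

```python
def is_stepwise(f, states):
    # |f_i(x) - x_i| <= 1 for every state x and every component i
    return all(abs(f(x)[i] - x[i]) <= 1
               for x in states for i in range(len(x)))

def is_asymptotic(f, states, maxes):
    # f_i(x) is one of 0, x_i, m_i for every state x and component i
    return all(f(x)[i] in (0, x[i], maxes[i])
               for x in states for i in range(len(x)))
```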
For any evolution function, there exists a unique asymptotic and a unique stepwise evolution functions having the same state transition graph.For any evolution function f, definef̅_i(x)= m_i (f_i(x)>x_i) x_i (f_i(x)=x_i) 0 (f_i(x)<x_i) f̂_i(x)= x_i+1 (f_i(x)>x_i) x_i (f_i(x)=x_i) x_i-1 (f_i(x)<x_i).Then, f̅ is asymptotic and f̂ is stepwise with Γ(f)=Γ(f̅)=Γ(f̂).Similarly, for any grid graph Γ, there exists a unique asymptotic function f̅^Γ and a unique stepwise function f̂^Γ such thatΓ=Γ(f̅^Γ)=Γ(f̂^Γ). We see how to define f̅^Γ. For a vertex x∈Γ and i∈ I,we have only one of the three possibilities: * x→ x-e_i* x→ x+e_i* there is no edge from x in the direction of e_i.We define an asymptotic evolution function f̅^Γ by setting f^Γ_i(x)=0, m, x_i accordingly. If we are interested in the evolution of a system, which is encoded in the state transition graph, we can restrict ourselves to either the class of stepwise evolution functions or the class of asymptotic evolution functions. Our choice in this paper is to restrict ourselves to the latter, andwe identify an asymptotic evolution function with its state transition graph and vice versa. In the rest of the paper, we assume functions are asymptotic unless otherwise stated and denoted just by f without a bar over it. There is a little difference in the interaction graph when we consider the stepwise case instead of the asymptotic case. When i≠ j, ∂^+_i f_j(x) and ∂^+_i f̂_j(x) have the same sign, and same is true for the backward partial derivatives. However, when i=j, we have the following difference.Consider the asymptotic evolution function f_1(0)=2, f_1(1)=2, f_1(2)=2 over M={0,1,2}. The corresponding stepwise evolution function is f̂_1(0)=1, f̂_1(1)=2, f̂_1(2)=2. At x=1, there is no edges in the interaction graph of f while there is a positive self-loop in the one of f̂. On the other hand, consider the asymptotic evolution function f_1(0) = 2, f_1(1) = 1, f_1(2) = 0. At x=1, there is a negative self-loop in the interaction graph of f,whereas there is no arrow in the one of the corresponding stepwise evolution functionf̂_1(0)= 1, f̂_1(1) = 1, f̂_1(2) = 1.In short, the interaction graphs of an asymptotic function and its corresponding stepwise function are the same only up to self-loops.A function which is neither asymptotic nor stepwise has in general more non-trivial partial derivatives than the asymptotic and the stepwise functions sharing the same state transition graph given in Proposition <ref>. (See <cit.> for a detailed discussion. The stepwise function in our papercan be seen as a special case of the canonical function defined there.)An asymptotic function f: {0,1,2}^2 →{0,1,2}^2 defined by (f_1,f_2)(x_1,x_2)=(2,2) have the same state transition graph with(g_1,g_2)(x_1,x_2)= (1,2) (x_1,x_2)=(0,0) (2,2) (otherwise) .However, ∂_2 f_1(0,0)=0 and ∂_2 g_1(0,0)=1. This means, there is a positive arrow x_2→ x_1 in Gg(0,0) while there is no arrow in Gf(0,0). §.§ A mapping from multilevel to Boolean networksFix the set of states M and a natural number l. We consider mappings from the set (M) of asymptotic evolution functions on M to the set (^l) of l-dimensional Boolean evolution functions. Mappings between grid graphs are obtained from them by the correspondence given in Proposition <ref>. 
Following <cit.>, we introduce two preferable properties of such mappings.A mapping Ψ: (M) →(^l) is said to be * neighbour-preserving if there exists a map b: M →^l and ψ: ^l→ M such that ψ∘ b is the identity on M andb and ψ induce graph homomorphisms b̃: Γ(f)→Γ(Ψ(f)) and ψ̃: Γ(Ψ(f)) →Γ(f) for any f∈(M).* globally regulation-preserving if GΨ(f)(^l)≠ GΨ(f')(^l) for anyf, f'∈(M) with Gf(M)≠ Gf'(M).* locally regulation-preserving if there exists a map b: M →^l such that GΨ(f)(b(y))≠ GΨ(f')(b(y)) for any y∈ M and any f, f'∈(M) with Gf(y)≠ Gf'(y). These properties are practically useful. For a neighbour-preserving mapping, the two maps b and ψ give correspondence between the multilevel states and the Boolean states in such a way that the state transition graph of any multilevel model is embedded in that of a Boolean model. With a regulation-preserving mapping, one can recover the interaction graph of a multilevel network from the corresponding Boolean one.A naive idea to convert an evolution function f: M→ M to a Boolean oneis to use an embedding (one-to-one map) b: M →^l of the set of multilevel states to a higher dimensional set of Boolean states. Then, the conjugate of f with respect to b is defined asf_b(x):=b∘ f∘ b^-1(x),which is defined only on Im(b)⊂^l, the image of b. The domain Im(b) is called the admissible region for f_b. The state transition graph Γ(f_b) in this case is defined to be the full sub-graphon Im(b) of the one defined by Eq. (<ref>). Van Ham <cit.> proposed one particular embedding b_0: M→^m_1+m_2+⋯+m_n which is defined as the direct product of(b_0)_i: {0,1,…,m_i}→^m_i,(b_0)_i(k)=(1,1,…,1_k, 0,…,0)for alli∈ I. Didier et al. <cit.> showed that Van Ham's embeddingis essentially the only one satisfying nice properties which they callneighbour preservation and regulation preservation (see Remark <ref> below). On the other hand, an apparent inconvenience of this methodis that the resulting evolution function is defined only on the restricted domain Im(b),the set of admissible states <cit.>. In contrast, we will give a construction which produces a Boolean network defined on the whole state space ^m_1+⋯+m_n. The idea is to use a surjective map ψ: ^m_1+⋯+m_n→ M rather than an embedding in the opposite direction.Properties similar to the first two in Definition <ref> were introduced by Didier et al. <cit.> but only for embeddings b: M→^l (and mappings obtained by conjugation with embeddings Eq. (<ref>)). Recall that an embedding b: M→^l is said to be neighbour-preserving if it satisfies d(b(y),b(y'))=1 for any y,y'∈ M with d(y,y')=1. Also an embedding is said to be regulation-preserving if Gf_b(^l)≠ Gf'_b(^l) when Gf(M) ≠ Gf'(M) for any f,f': M→ M; that is, the global interaction graphs of the Boolean networks obtained by conjugation differwhen so do those of the multilevel networks. Our definitions are modified versions of theirs which apply to any mapping. Here, we define a mapping from (M) to (^l) with l=(m_1+⋯+m_n), and a mapping from grid graphs over M to grid graphs over ^l, which possesses all three above properties.Define a map ψ: ^l→ M byψ(x_1,1,x_1,2,…,x_1,m_1,x_2,1,…, x_n,m_n) =(|y_1|,…,|y_n|),where y_i=(x_i,1,…,x_i,m_i)∈^m_i and |y_i|=∑_k=1^m_i x_i,k. We denote the index set of ^l=^m_1+⋯+m_n by I_={(i,j_i)| 1≤ i≤ n, 1≤ j_i ≤ m_i}.For an asymptotic multilevel evolution function f∈(M), its binarisation (f)∈(^l) is defined by(f)_i,j(x):= 0 (f_i(ψ(x))<ψ(x)_i) x_i,j(f_i(ψ(x))=ψ(x)_i) 1 (f_i(ψ(x))>ψ(x)_i).Conversely, we havef_i(ψ(x)) = ∑_j=1^m_i(f)_i,j(x). 
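The map ψ and the binarisation defined above are cheap to implement; a minimal sketch in the same Python setting follows (we write binarise for the binarisation operator, and it is exercised on a concrete model further below).

```python
def psi(x, maxes):
    """Collapse a Boolean state, grouped per gene, to a multilevel one
    by summing each gene's block of m_i Boolean coordinates."""
    out, k = [], 0
    for m in maxes:
        out.append(sum(x[k:k + m]))
        k += m
    return tuple(out)

def binarise(f, maxes):
    """Binarisation of an asymptotic evolution function f on M."""
    def bf(x):
        y = psi(x, maxes)
        fy = f(y)
        out, k = [], 0
        for i, m in enumerate(maxes):
            for j in range(m):
                if fy[i] < y[i]:
                    out.append(0)          # drive the whole block down
                elif fy[i] > y[i]:
                    out.append(1)          # drive the whole block up
                else:
                    out.append(x[k + j])   # leave the coordinate fixed
            k += m
        return tuple(out)
    return bf
```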
The binarisation of a grid graph Γ on M, denoted by (Γ), is defined to be the grid graph on ^l such that there exists a directed edge x→ x' in (Γ) if and only if d(x,x')=1 and there exists a directed edge ψ(x)→ψ(x') in Γ.It is trivial to see (f^Γ)=f^(Γ) and (Γ(f))=Γ((f)). We now identify the image of binarisation (M)→(^l). The symmetric group S_m_i acts on ^m_i by permuting the coordinates. We consider the coordinate-wise action of =S_m_1× S_m_2×⋯× S_m_n on ^l=^m_1×^m_2×⋯^m_n. Since the map ψ is invariant under this action, the binarisation (f) has symmetry with respect to this action.A Boolean network f': ^l→^l is said to be -symmetric iff'_i,j(x_1,1,,…, x_n,m)= f'_i,σ_i(j)(x_1,σ_1(1),x_1,σ_1(2),…,x_1,σ_1(m_1),x_2,σ_2(1),…, x_n,σ_n(m_n)) for any σ=(σ_1,…,σ_n)∈. Similarly, a Boolean grid graph Γ'is said to be -symmetric when an edge x→ x' exists if and only if so does σ(x)→σ(x').For an -symmetric Boolean evolution function f'we obtain a well-defined evolution function ψ∘ f' ∘ψ^-1.Similarly,for an -symmetric Γ', we obtain a grid graph over M as the image under ψ.The binarisation induces a bijective mapping between the set (M) of asymptotic evolution functions on M and the set (^m_1+⋯+m_n) of Boolean evolution functions on ^m_1+⋯+m_n which are -symmetric. It immediately follows that Van Ham's embedding b_0: M→^l and our own map ψ: ^l→ M induce graph homomorphisms b_0: Γ(f) →Γ((f)) andψ̃: Γ((f)) →Γ(f) for any f∈(M) such that ψ̃∘b_0 is the identity. Thus, the binarisation is neighbour-preserving. We will see that it is also locally, and hence globally as well, regulation-preserving. We show that the dynamics of the system, namely, attractors in the state transition graph are preserved under binarisation.In what follows, we often make use of the following two obvious facts:* When there exists an edge x→ x' in (Γ),there exists an edge ψ(x)→ψ(x') in Γ.* When there exists an edge y→ y' in Γ, for any x∈ψ^-1(y)there exists an edge x→ x' in (Γ) for some x'∈ψ^-1(y'). The strongly connected components of (Γ) map surjectively onto those of Γ via ψ̃. Moreover, attractors of (Γ) map surjectively onto those of Γ via ψ̃. Assume that x,x'∈(Γ) are in the same strongly connected component. This means, there is a cycle containing x,x' and it maps to a cyclecontaining ψ(x),ψ(x')∈Γ. Therefore, ψ(x),ψ(x') are in the same strongly connected component. Conversely, assume that there exists a cycle containing y,y'∈Γ. For any vertex x∈ψ^-1(y), there exists a vertex x'∈ψ^-1(y') and a cycle containing both x and x'.To sum up, the image of a strongly connected component of (Γ) is a strongly connected component of Γ, and for any strongly connected component of Γ there exists astrongly connected component of (Γ) which maps to it. Since ψ̃ is a surjective graph homomorphism, attractors of (Γ) map surjectively onto those of Γ via ψ̃.* A stable state exists in (Γ) if and only if it does in Γ.* A cyclic attractor exists in (Γ) if and only if it does in Γ.The first statement is trivial. For any cyclic attractor in Γ, there exists an attractor in (Γ) which maps to it by the previous proposition. Since it contains more than one element, it is a cyclic attractor in (Γ). Conversely, assume that there is a cyclic attractor in (Γ).It contains at least two elements x,x' with d(x,x')=1. 
Their images ψ(x),ψ(x') should be different sinceany two distinct elements in a single fibre (the inverse image of a point) ψ^-1(ψ(x)) have at least distance two.Thus, the image of the cyclic attractor contains at least two distinct elements ψ(x),ψ(x')and is a cyclic attractor in Γ.The map I_→ I defined by (i,j_i)↦ i induces a surjective graph homomorphismon G(f)(x) → Gf(y) for y=ψ(x) and any x∈^l. More precisely, the following two statements hold. * At any y∈ M, if a positive (resp. negative) edge i→ i' exists in Gf(y), so does a positive (resp. negative) edge (i,j)→ (i',j')in G(f)(x) for some j and j'at any x∈ψ^-1(y).* At any x∈^l, ifa positive (resp. negative) edge (i,j)→ (i',j') exists in G(f)(x), so does a positive (resp. negative) edge i → i' in Gf(ψ(x)).We only show the statements for the case of a positive edge, as the case of a negative edge follows by a similar argument.For the first statement, assume that there exists a positive edge i→ i' in Gf(y). We have two cases ∂^+_i f_i'(y)>0 and ∂_i^- f_i'(y)>0. When ∂^-_i f_i'(y)=f_i'(y)-f_i'(y-e_i)>0, for any x∈ψ^-1(y) there exists j such that x_i,j=1 since y_i>0. By Eq. (<ref>) there must exist some j' such that (f)_i',j'(x)-(f)_i',j'(x-e_i,j)=1. This means there exists a positive edge (i,j)→ (i',j') in G(f)(x). A similar argument applies when ∂_i^+ f_i'(y)>0.For the second statement, assume that there exists a positive edge (i,j)→ (i',j') in G(f)(x). When x_i,j=0, this means(f)_i',j'(x)=0 and (f)_i',j'(x+e_i,j)=1. Since ψ(x)+e_i=ψ(x+e_i,j), we havef_i'(ψ(x)+e_i)-f_i'(ψ(x))=∑_k=1^m_i'((f)_i',k(x+e_i,j) -(f)_i',k(x) ).Since (f)_i',k(x+e_i,j) -(f)_i',k(x) ≥ 0 for all k and (f)_i',j'(x+e_i,j) -(f)_i',j'(x)=1,we have f_i'(ψ(x)+e_i)-f_i'(ψ(x))>0. This in turn means that there exists a positive edge i→ i' in Gf(ψ(x)). A similar argument applies to the case when x_i,j=1. Intuitively speaking, (1) says all the regulation in the original multilevel network is capturedin the converted Boolean network, while (2) says all the regulation in the converted network comes from the original multilevel network. An asymptotic evolution function f over M has a positive (negative) type-1 functional circuit if so does its binarisation (f).Note that two negative arrows (i,j)→ (i,j') and (i,j')→ (i,j) in G(f)(x), both of which correspond to a negative self-loop i→ i in G(f)(x), can be composed to produce a positive circuit. This positive functional type-1 circuit corresponds the one which is the composition of a single negative self-circuit with itself in Gf(ψ(x)).We give two characteristic examples of the binarisation.Consider the evolution function f(y)=2-y over M={0,1,2}. Its binarisation is (f)(x)= (1,1) (x=(0,0)) x (x=(1,0), (0,1)) (0,0) (x=(1,1)).The corresponding state transition graphs are0[r] 1 2[l] ,10 00[ru] [rd]11 [lu] [ld]01The interaction graphs at y=0 and x=(0,0) respectively look:Gf(0)=y @->@(ul,ur)^-,G(f)(0,0)= x_11@/^1pc/[r]^-x_12@/^1pc/[l]^-Notice that the self-loop on y_1 corresponds to each of the edges x_11→ x_12 and x_11← x_12. In particular, the converse to Corollary <ref> does not hold.Consider another evolution functionover M={0,1,2} defined by f(y)= y (y=0,1) 0 (y=2). Its binarisation is (f)(x)= x (x≠(1,1)) (0,0) (x=(1,1)). 
The corresponding state transition graphs are0 @–[r] 1 2[l] ,10 00@–[ru] @–[rd] 11[lu] [ld]01The interaction graphs at y=1 and x=(0,1)respectively look:Gf(1)=y @->@(ul,ur)^-@->@(dl,dr)_+,G(f)(0,1)=x_11@/^1pc/[r]^-x_12@->@(dr,ur)_+ §.§ Another extension methodRecently, Tonello <cit.> has independently constructed a mapping which also extends Van Ham's while preserving the dynamicsand the local regulations in a more stringent sense than ours. Her method was used to produce a counter-example to Conjecture <ref> as well. Her mapping can be described in our context as follows:f ↦ b_0 ∘ f ∘ψ,where f is a stepwise function.Compared to ours, her method yields fewerarrows in the state transition graph. Her strategy was to stipulate the converted function to take values in the admissible region Im(b_0),whereas ours was to equip the converted function with the symmetry described in Theorem <ref>.§ RESULTS§.§ Lambda phageAs an illustration, we first apply our method to the 2-variable lambda phage model proposed by Thieffry and Thomas <cit.>.The lambda phage is a bacterial virus that infects E. coli. It is a temperate phage, i.e. it can either multiply and eventually kill the host cell (lytic phase), or integrate its DNA into the bacterial chromosome (lysogenic phase), conferring the cell immunity against super-infection by other lambda phage. The switch between lysis and lysogeny, as modelled by Thieffry and Thomas, is essentially controlled by a positive feedback circuit between genes cI and cro. The two genes inhibit each other, such that cI dominates the lysogenic phase, whereas cro is active during the lytic phase. The gene cro further inhibits its own activity. The system is modelled by a discrete system with the state space M={(cI,cro)∈{0,1}×{0,1,2}}. The dynamics displays a stable state with high cI and low cro activity, and a two-state cyclic attractor with low cI, and cro oscillating around its activity threshold (Fig. <ref>, left), which the authors describe as homeostasis.Table <ref> shows how the same dynamics can be encoded using a stepwise or asymptotic evolution function. Notice that the stepwise function creates a positive feedback on cro (f̂_cro(0,1)-f̂_cro(0,0)>0, and f̂_cro(1,2)-f̂_cro(1,1)>0) that is not visible in the asymptotic function. This difference in the global regulatory graphs is shown in Fig. <ref>. Boolean systems are generated by Van Ham's method and ours with the state space {(cI,cro1,cro2)∈^3}.However, since Van Ham's method yields a dynamics with as many states as the original, multilevel dynamics (Fig. <ref>, centre), the system thus obtained includes a ”non-admissible” region (grey area in the Figure) whose states do not have any counterpart in the multilevel model. To make comparison, we extended the domain of the Boolean model obtained by Van Ham's method (Fig. <ref>, centre) by completing the grey dashed arrows in such a way that* it does not create any extra arrows in the global interaction graph that is not already visible elsewhere within the admissible region* and there is no outgoing arrow in the state transition graph from an admissible state to a non-admissible state.This extension is based on our understanding of Van Ham's original publication <cit.>. The corresponding global regulatory graph is shown in Fig. <ref> (centre). In contrast, the dynamics produced by our method is more complex, and it occupies the whole state space: every state has a counterpart in the original multilevel model. 
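To make the comparison reproducible, we give a hedged sketch of the lambda phage model itself. The update rules below are reconstructed from the verbal description above (cro represses cI, cI represses cro, and cro inhibits its own activity at its highest level); the authoritative values are those of Table <ref>, so this transcription is only our best reading. It reuses the async_stg, attractors and binarise helpers sketched earlier.

```python
def lambda_phage(state):
    # Asymptotic convention: each component records only the direction
    # of change (0, current level, or the maximum level).
    cI, cro = state
    f_cI = 1 if cro == 0 else 0            # cro represses cI
    if cI == 1:
        f_cro = 0                          # cI represses cro
    else:
        f_cro = 2 if cro <= 1 else 0       # cro self-inhibits at level 2
    return (f_cI, f_cro)

# Multilevel dynamics: a stable state (1,0) and the cycle {(0,1),(0,2)}.
print(attractors(async_stg(lambda_phage, (1, 2))))

# Binarised dynamics on (cI, cro1, cro2): the stable state (1,0,0) and a
# three-state cycle {(0,0,1), (0,1,0), (0,1,1)}.
print(attractors(async_stg(binarise(lambda_phage, (1, 2)), (1, 1, 1))))
```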
However, the two-state attractor of the original model is now represented by a three-state attractor, where state 011 corresponds to state 02 and both states 001 and 010 correspond to state 01. Another difference is that, in the model obtained with Van Ham's method, variables cro1 and cro2 are ordered and represent levels 1 and 2, respectively, of the original multilevel cro. In the Boolean model generated with our method, cro1 and cro2 are interchangeable and equivalent: whether one represents level 1 or 2 depends on the other, and thus, on the context of each particular state.Finally, an important difference appears at the local level (Fig. <ref>). Using Van Ham's method, local graphs may include edges that have no visible counterpart in the local graph of the corresponding multilevel state. For example, while in 02 the only visible regulation is the negative loop on cro, Van Ham's method adds an edge from cro1 to cI in 011, whereas the corresponding local graph obtained using our method includes only regulations between cro1 and cro2. Similarly, in 10, the only visible edges occur between cI and cro, and the same is true in 100 using our method; however, using Van Ham's method an additional edge between cro1 and cro2 becomes visible. Our method generates an “extra” positive circuit between cro1 and cro2, which in multilevel model corresponds to the composition of the negative self circuit on cro with itself. Such positive circuits can only appear between variables that represent the same multilevel variable (see Corollary <ref>).§.§ Boolean counter-example to Conjecture <ref>One of the main goals in genetic regulatory network analysis is to find relations between circuits in the interaction graphs andattractors in the state transition graphs. Here we list a few known results in this direction: * If f has no type-1 functional circuit, Γ(f) has a unique stable state x (<cit.>). * If f has no type-1 functional positive circuit, Γ(f) has a unique attractor (<cit.>). * If there exists a cyclic attractor in Γ(f), Gf(M) must contain a negative circuit (<cit.>). Notice that the first two statements connect the asymptotic behaviour of the dynamics with the local interaction graph at some x∈ M, while the last one does so with the global interaction graph Gf(M). This naturally gives rise to the following conjectures: * If f has no type-1 functional negative circuit, then Γ(f) has no cyclic attractors.* If f has no type-1 functional negative circuit, then Γ(f) has a stable state. Note that the second statement is weaker in the sense that it follows from the first one.Richard, together with Comet <cit.>, gave a counter-example tothe conjectures when M={0,1,2,3}^2.The grid graph Γ over M={0,1,2,3}^2 given in <cit.> has no stable state and there exists no type-1 negative functional circuit in the interaction graph of f̅^Γ(see Fig. <ref>). The existence of Boolean counter-examples was left as an open problem in <cit.>.By Corollary <ref>, the Boolean grid graph (Γ) yielded by our method (Fig. <ref>) has no type-1 negative functional circuit in the interaction graph of the binarisation (f̅^Γ). Furthermore, by Corollary <ref> there is no stable state in the state transition graph of (f̅^Γ). Thus, we obtain a new Boolean counter-example to Conjecture <ref>.Finally, we note that Ruet recently gave a systematic way to produce counter-examples to the conjectures for the Boolean case <cit.>. 
Our method is very different from Ruet's and while his example possesses a special property that it has an attractive cycle, our counter-example has a smaller dimension of 6. For comparison, we include an example based on Ruet's theory <cit.> Fig. <ref>. §.§ Implementation We implemented our method in the form of a Perl script. For comparison, Tonello's method <cit.> is also implemented in the same script. It is available at <cit.> under the MIT license. The input is a multilevel evolution function described in the Truth table format <cit.> and the output is a Boolean evolution function described in the same format. The maximum levels m_i for each gene is automatically detected. When the input function is defined on {0,1,…,m_1}×{0,1,…,m_2}×⋯×{0,1,…,m_n}, the converted Boolean network is defined on ^m_1+m_2+⋯+m_n. § DISCUSSION AND CONCLUSIONS In the discrete formulation of gene regulatory networks, a system is commonly modelled by a function. When some genes take more than two levels, there are multiple choices for functions having the same (asynchronous) state transition graph. We single out a unique choice, which we call the asymptotic evolution function (Proposition <ref>). Then, we introduced a mapping which converts an asymptotic evolution function to a Boolean evolution function (Definition <ref>). This mapping preserves dynamical and regulatory properties (Proposition <ref>, Theorem <ref>),thus allowing us to analyse multilevel networks by methods developed for Boolean networks.Mappings from multilevel to Boolean networks have been used in the study of gene regulatory networks. In particular, Van Ham's mapping has been shown by Didier et al. to be essentially the only method to provide a one-to-one, neighbour-preserving and regulation-preserving Boolean representation of multilevel models <cit.>. However, although the authors did suggest that the mapping could be useful to study the role of regulatory circuits, the question of how interaction functionality contexts are preserved had not been studied so far.One such instance is Thomas's conjecture, which states that the existence of a cyclic attractor in the asynchronous state transition graph requires that of local negative circuit. The conjecture has recently been given a Boolean counter-example by P. Ruet <cit.>. Until then, although A. Richard and and J-P. Comet had produced a multilevel counter-example <cit.>, the Boolean case remained open. It is straightforward to apply Van Ham's method to the counter-example in order to obtain a Boolean model, but it is defined only on the admissible region. To extend the model to the whole Boolean state space, while preserving its dynamics and regulatory relation, is highly non-trivial. The method we propose has been designed specifically to circumvent the problem. The idea was to avoid extra interactions and circuitsby extending the state transition graph obtained with Van Ham's methodin such a way that we have “parallel trajectories” going through the whole state space. This was achieved by loosening the one-to-one criterion such that states including intermediate values in the multilevel model would match with several states in the Boolean version, effectively creating equivalent Boolean transitions for each multilevel transition in the original model.In a sense, our method works opposite to Van Ham's: instead of embedding multilevel states into Boolean states, we define a Boolean model such that each Boolean state can be mapped to a multilevel state. 
With our method, interaction functionality is preserved, and thus all local interaction graphs in the Boolean model come from their counterparts in the original, multilevel model. In contrast, Van Ham's method only preserves the global interaction graph.

One limitation of our method is that the synchronous dynamics of the original multilevel model cannot be directly retrieved from the Boolean model produced by the conversion. Here, by synchronous dynamics we mean the state transition graph having edges x→ f(x) instead of (<ref>) (see, for example, <cit.>). Synchronous state transition is often deemed unrealistic, since it assumes all processes are realised simultaneously with the same delay (see the discussion by Abou-Jaoudé et al. <cit.>). Nevertheless, the synchronous mode is still a popular update method in simulation due to its simplicity, and it is occasionally used for multilevel models (see e.g. Chifman et al. <cit.> for a recent example). If our binarisation is used for a synchronous simulation, any increase in a variable would be translated into its increase to the maximum value. It is worth noting that the counter-examples for Conjecture <ref> considered in <ref> (Richard-Comet's one and its binarisation) have no stable state in the synchronous state transition graph either (and have no type-1 negative functional circuit in the interaction graph).

Finally, our results highlight two opposite strategies, stepwise and asymptotic, for writing the evolution function of a multilevel model. While both suppress inter-gene regulations, the stepwise convention tends to add positive self-regulations, whereas the asymptotic one tends to add negative self-regulations. This work contributes to a better understanding of the different ways to represent a multilevel system, for different functions can represent the same model <cit.>, which causes ambiguities in the notation.

§ ACKNOWLEDGEMENTS

The authors are grateful to Paul Ruet for explaining his result in <cit.>, to Elisa Tonello for fruitful discussion and careful reading of our draft, and to Yuki Ikawa and Sergey Tishchenko for their help in the early stage of this work. The second named author is partially supported by JST PRESTO Grant Number JPMJPR16E3, Japan. | http://arxiv.org/abs/1703.08934v2 | {
"authors": [
"Adrien Fauré",
"Shizuo Kaji"
],
"categories": [
"q-bio.MN",
"math.CO",
"68R05, 92D99"
],
"primary_category": "q-bio.MN",
"published": "20170327052522",
"title": "A circuit-preserving mapping from multilevel to Boolean dynamics"
} |
| http://arxiv.org/abs/1703.08788v3 | {
"authors": [
"C. Alexandrou",
"M. Constantinou",
"P. Dimopoulos",
"R. Frezzotti",
"K. Hadjiyiannakou",
"K. Jansen",
"C. Kallidonis",
"B. Kostrzewa",
"G. Koutsou",
"M. Mangin-Brinet",
"A. Vaquero Avilès-Casco",
"U. Wenger"
],
"categories": [
"hep-lat",
"hep-ph",
"nucl-ex",
"nucl-th"
],
"primary_category": "hep-lat",
"published": "20170326085358",
"title": "Nucleon scalar and tensor charges using lattice QCD simulations at the physical value of the pion mass"
} |
Status of the B→ K^*μ^+μ^- anomaly after Moriond 2017 Wolfgang Altmannshofer^a, Christoph Niehoff^b, Peter Stangl^b, David M. Straub^b ^a Department of Physics, University of Cincinnati, Cincinnati, Ohio 45221, USA ^b Excellence Cluster Universe, Boltzmannstr. 2, 85748 Garching, Germany Motivated by recent results by the ATLAS and CMS collaborations on the angular distribution of the B → K^* μ^+μ^- decay, we perform a state-of-the-art analysis of rare B meson decays based on the b → s μμ transition. Using standard estimates of hadronic uncertainties, we confirm the presence of a sizable discrepancy between data and SM predictions. We do not find evidence for a q^2 or helicity dependence of the discrepancy. The data can be consistently described by new physics in the form of a four-fermion contact interaction (s̅γ_α P_L b)(μ̅γ^αμ). Assuming that the new physics affects decays with muons but not with electrons, we make predictions for a variety of theoretically clean observables sensitive to violation of lepton flavour universality. § INTRODUCTION The angular distribution of the decay B→ K^*μ^+μ^- has been known to be a key probe of physics beyond the Standard Model (SM) at the LHC already before its start (see e.g. <cit.>) and the observable S_5 was recognized early on to be particularly promising <cit.>. A different normalization for this observable, reducing form factor uncertainties, was suggested in ref. <cit.>, rebranded as P_5'. While B factory and Tevatron measurements of the forward-backward asymmetry and longitudinal polarization fraction had been in agreement with SM expectations <cit.>, in 2013, the LHCb collaboration announced the observation of a tension in the observable P_5' at the level of around three standard deviations. It was quickly recognized <cit.> that a new physics (NP) contribution to the Wilson coefficient C_9 of a semi-leptonic vector operator was able to explain this “B→ K^*μ^+μ^- anomaly”, confirmed a few days later by an independent analysis <cit.> and also by other groups with different methods <cit.>. Further measurements have shown additional tensions, e.g. branching ratio measurements in B→ Kμ^+μ^- and B_s→ϕμ^+μ^- <cit.>, as well as, most notably, a hint for lepton flavour non-universality in B^+→ K^+ℓ^+ℓ^- decays <cit.>. While progress has also been made on the theory side, most notably improved B→ K^* form factors from lattice QCD (LQCD) <cit.> and light-cone sum rules (LCSR) <cit.>, the “anomaly” has also led to a renewed scrutiny of theoretical uncertainties due to form factors <cit.> as well as non-factorizable hadronic effects <cit.> (cf. also the earlier works <cit.>). In 2015, the LHCb collaboration presented their B→K^*μ^+μ^- angular analysis based on the full Run 1 data set, confirming the tension found earlier <cit.>. Several updated global analyses have confirmed that a consistent description of the tensions in terms of NP is possible <cit.>, while an explanation in terms of an unexpectedly large hadronic effect cannot be excluded. Recent analyses by Belle <cit.> also seem to indicate tensions in angular observables consistent with LHCb. At Moriond Electroweak 2017, ATLAS <cit.> and CMS <cit.> finally presented their preliminary results for the angular observables based on the full Run 1 data sets. The aim of the present paper is to reconsider the status of the B→ K^*μ^+μ^- anomaly in view of these results.
Our analysis is built on our previous global analyses of NP in b→ s transitions <cit.> and makes use of the open source code <cit.>. § EFFECTIVE HAMILTONIAN AND OBSERVABLES The effective Hamiltonian for b→ s transitions can be written as ℋ_eff = - (4 G_F/√2) V_tb V_ts^* (e^2/16π^2) ∑_i (C_i O_i + C'_i O'_i) + h.c. and we consider NP effects in the following set of dimension-6 operators, O_9 = (s̅γ_μ P_L b)(ℓ̅γ^μℓ) , O_9^' = (s̅γ_μ P_R b)(ℓ̅γ^μℓ) , O_10 = (s̅γ_μ P_L b)( ℓ̅γ^μγ_5 ℓ) , O_10^' = (s̅γ_μ P_R b)( ℓ̅γ^μγ_5 ℓ) . We neither consider new physics in scalar operators, as they are strongly constrained by B_s→μ^+μ^- (see <cit.> for a recent analysis), nor in dipole operators, which are strongly constrained by inclusive and exclusive radiative decays (see <cit.> for a recent analysis). We also do not consider new physics in four-quark operators, although an effect in certain b→ cc̅s operators could potentially relax some of the tensions in B→ K^*μ^+μ^- angular observables <cit.>. In our numerical analysis, we include the following observables. * Angular observables in B^0→ K^*0μ^+μ^- measured by CDF <cit.>, LHCb <cit.>, ATLAS* <cit.>, and CMS* <cit.>, * B^0,±→ K^*0,±μ^+μ^- branching ratios by LHCb* <cit.>, CMS <cit.>, and CDF <cit.>, * B^0,±→ K^0,±μ^+μ^- branching ratios by LHCb <cit.> and CDF <cit.>, * B_s→ϕμ^+μ^- branching ratio by LHCb* <cit.> and CDF <cit.>, * B_s→ϕμ^+μ^- angular observables by LHCb* <cit.>, * the branching ratio of the inclusive decay B→ X_sμ^+μ^- measured by BaBar <cit.>. Items marked with an asterisk have been updated since our previous global fit <cit.>. Concerning B^0→ K^*0μ^+μ^-, both LHCb and ATLAS have performed measurements of CP-averaged angular observables S_i as well as of the closely related “optimized” observables P_i'. While LHCb also gives the full correlation matrices and the choice of basis is thus irrelevant (up to non-Gaussian effects which are anyway impossible to take into account using publicly available information), ATLAS does not give correlations, so the choice can make a difference in principle. We have chosen to use the P_i' measurements, but have explicitly checked that the best-fit regions and pulls do not change significantly when using the S_i observables. We do not include the following measurements. * Angular observables in B→ Kμ^+μ^-, which are only relevant in the presence of scalar or tensor operators <cit.>, * measurements of lepton-averaged observables, as we want to focus on new physics in b→ sμ^+μ^- transitions, * the Belle measurement of B→ K^*μ^+μ^- angular observables <cit.>, as it contains an unknown mixture of B^0 and B^± decays that receive different non-factorizable corrections at low q^2, * the LHCb measurement of the decay Λ_b→Λμ^+μ^- <cit.>, as it still suffers from large experimental uncertainties and the central values of the measurement are not compatible with any viable short-distance hypothesis <cit.>. We do not make use of the LHCb analysis attempting to separately extract the short- and long-distance contributions to the B^+→ K^+μ^+μ^- decay <cit.>, but we note that these results are in qualitative agreement with our estimates of long-distance contributions to this decay. Finally, we do not include the decay B_s→μ^+μ^- in our fit, as it can be affected by scalar operators, as discussed above. For all these semi-leptonic observables, which are measured in bins of q^2, we discard the following bins from our numerical analysis. * Bins below the J/ψ resonance that extend above 6 GeV^2.
In this region, theoretical calculations based on QCD factorization are not reliable <cit.>. * Bins above the ψ(2S) resonance that are less than 4 GeV^2 wide. This is because theoretical predictions are only valid for sufficiently global, i.e. q^2-integrated, observables in this region <cit.>. * Bins with upper boundary at or below 1 GeV^2, because this region is dominated by the photon pole and thus by dipole operators, while we are interested in the effect of semi-leptonic operators in this work. For the SM predictions of these observables, we refer the reader to refs. <cit.>, where the calculations, inputs, and parametrization of hadronic uncertainties have been discussed in detail. Our predictions are based on the implementation of these calculations in the open source code <cit.>. With respect to our previous analysis <cit.>, we use improved predictions for B → K^* and B_s →ϕ form factors from <cit.> and B → K form factors from <cit.>. Note that the B → K form factors from <cit.> have substantially smaller uncertainties compared to the ones used in <cit.>, which were based on the results in <cit.>. The increased tension due to these form factors was also pointed out in <cit.>. § RESULTS AND DISCUSSION From the measurements and theory predictions, we construct a χ^2 function where theory uncertainties are combined with experimental uncertainties, such that the χ^2 only depends on the Wilson coefficients. Both for the theoretical and the experimental uncertainties, we take into account all known correlations and approximate the uncertainties as (multivariate) Gaussians, and we neglect the dependence of the uncertainties on the NP contributions. This procedure, which was proposed in <cit.> and later adopted by other groups <cit.>, is implemented in flavio as a dedicated class. From the observable selection discussed in section <ref>, we end up with a total number of 86 measurements of 81 distinct observables. These observables are not independent, but their theoretical and experimental uncertainties are correlated. We take into account the experimental correlations where known (this is the case only for the angular analyses of B→ K^*μ^+μ^- and B_s→ϕμ^+μ^- by LHCb), and include all theory correlations. Before considering NP effects, we can evaluate the χ^2 function within the SM to get a feeling for the agreement of the data with the SM hypothesis. However, this absolute χ^2 is not uniquely defined. For instance, averaging multiple measurements of identical observables by different experiments before they enter the χ^2, we obtain χ^2_SM=98.5 for 81 observables. Adding all individual measurements separately instead, we obtain χ^2_SM=100.6 for 86 measurements. For the Δχ^2 used in the remainder of the analysis, these procedures are equivalent. §.§ New physics in individual Wilson coefficients As a first step, we switch on NP contributions in individual Wilson coefficients, determine the best-fit point in the one- or two-dimensional space, and evaluate the χ^2 difference Δχ^2 with respect to the SM point. The “pull” in σ is then defined as √(Δχ^2) in the one-dimensional case, while in the two-dimensional case it can be evaluated using the inverse cumulative distribution function of the χ^2 distribution with two degrees of freedom; for instance, Δχ^2≈ 2.3 for 1σ. The results are shown in table <ref>.
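As an aside, the Δχ^2-to-σ conversion described above takes only a few lines; the following is a generic statistics sketch using scipy, not an excerpt from flavio.

```python
from scipy import stats

def pull_sigma(delta_chi2, n_par):
    # Gaussian-equivalent significance of a chi^2 improvement with n_par fitted
    # parameters; survival functions keep the tails numerically stable.
    # For n_par = 1 this reduces to sqrt(delta_chi2).
    return stats.norm.isf(0.5 * stats.chi2.sf(delta_chi2, df=n_par))

print(pull_sigma(1.0, 1))    # -> 1.0 (one-parameter fit, 1 sigma)
print(pull_sigma(2.30, 2))   # -> ~1.0 (the Delta chi^2 ~ 2.3 quoted above)
print(pull_sigma(27.0, 1))   # -> ~5.2, the size of the strongest pull found below
```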
We make the following observations. * The strongest pull is obtained in the scenario with NP in C_9 only and it amounts to slightly more than five standard deviations. Consistently with fits before the updated ATLAS and CMS measurements, the best-fit point corresponds to a value around C_9∼ -1, i.e. destructive interference with the SM Wilson coefficient. The increase in the significance for a non-standard C_9 (3.9σ in <cit.> vs. 5.2σ here) can be largely traced back to the new and more precise form factors we are using, with only a moderate impact of the added experimental measurements. * A scenario with NP in C_10 only also gives an improved fit, although less significantly than the C_9 scenario. We note that this suppression of C_10 by roughly 20% would imply a suppression of the B_s→μ^+μ^- branching ratio – which, we stress again, we have not included in the fit – by roughly 35%. * A scenario with C_9^NP=-C_10^NP, which is well motivated by models with mediators coupling only to left-handed leptons, leads to a comparably good fit as the C_9-only scenario. To understand where the large global tension comes from, it is instructive to perform one-dimensional fits with NP in C_9 using only a subset of the data. We find for instance that * measurements of the B_s→ϕμ^+μ^- branching ratio alone lead to a pull of 3.5σ, * all branching ratio measurements combined lead to a pull of 4.6σ, * the B→ K^*μ^+μ^- angular analysis by LHCb alone leads to a pull of 3.0σ, * the new B→ K^*μ^+μ^- angular analysis by CMS reduces the pull, but the new ATLAS measurement increases it. The significance of the tension between the branching ratio measurements and the corresponding SM predictions depends strongly on the form factors used. To estimate the possible impact of underestimated form factor uncertainties, we repeat the fit with NP in C_9, doubling the form factor uncertainties with respect to our nominal fit. We find that the pull is reduced from 5.2σ to 4.0σ. Significant tensions remain in this scenario, indicating that underestimated form factor uncertainties are likely not the only source of the discrepancies. We also perform a fit doubling the uncertainties of the non-factorizable hadronic corrections (see <cit.> for details on how we estimate these uncertainties). We find a reduced pull of 4.4σ. §.§ New physics in pairs of Wilson coefficients Next, we consider pairs of Wilson coefficients. In the last four rows of table <ref>, we show the best-fit points and pulls for four different scenarios. We observe that adding one of the primed coefficients does not improve the fit substantially. In fig. <ref> we plot contours of constant Δχ^2 in the planes of two Wilson coefficients for the scenarios with NP in C_9 and C_10 or in C_9 and C_9', assuming the remaining coefficients to be SM-like. In both plots, we show the 1, 2, and 3σ contours for the global fit, but also 1σ contours showing the constraints coming from the angular analyses of individual experiments, as well as from branching ratio measurements of all experiments. We observe that the individual constraints are all compatible with the global fit at the 1σ or 2σ level. While the CMS angular analysis shows good agreement with the SM expectations, all other individual constraints show a deviation from the SM. In view of their precision, the angular analysis and branching ratio measurements of LHCb still dominate the global fit (cf. Figs. <ref>, <ref>, <ref> and <ref>), leading to a similar allowed region as in previous analyses.
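For completeness, the Δχ^2 levels used for the two-dimensional contours follow from the same logic; the grid and the best-fit point in this sketch are purely illustrative, not our actual likelihood.

```python
import numpy as np
from scipy import stats

# Delta chi^2 levels enclosing 1, 2, 3 sigma for a fit with two parameters
levels = [stats.chi2.ppf(2.0 * stats.norm.cdf(s) - 1.0, df=2) for s in (1, 2, 3)]
print(np.round(levels, 2))    # [ 2.3   6.18 11.83]

# toy paraboloid chi^2 around an assumed best-fit point near (C9, C10) = (-1.0, 0.2)
c9, c10 = np.meshgrid(np.linspace(-2.0, 1.0, 200), np.linspace(-1.0, 1.0, 200))
delta_chi2 = ((c9 + 1.0) / 0.2) ** 2 + ((c10 - 0.2) / 0.25) ** 2
# with matplotlib: plt.contour(c9, c10, delta_chi2, levels=levels)
```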
We do not find any significant preference for non-zero NP contributions in C_10 or C_9' in these two simple scenarios. Similarly to our analysis of scenarios with NP in one Wilson coefficient, we repeat the fits doubling the form factor uncertainties and doubling the uncertainties of non-factorizable corrections. For NP in C_9 and C_10, we find that the pull is reduced from 5.0σ to 3.7σ and 4.1σ, respectively. For NP in C_9 and C_9^', the pull is reduced from 5.3σ to 4.1σ and 4.4σ, respectively. The impact of the inflated uncertainties is also illustrated in Fig. <ref>. Doubling the hadronic uncertainties is not sufficient to achieve agreement between data and SM predictions at the 3σ level. §.§ New physics or hadronic effects? It is conceivable that hadronic effects that are largely underestimated could mimic new physics in the Wilson coefficient C_9 <cit.>. As first quantified in <cit.> and later considered in <cit.>, there are ways to test this possibility by studying the q^2 and helicity dependence of a non-standard effect in C_9. Without loss of generality, any photon-mediated hadronic contribution to the B → K^* μ^+μ^- helicity amplitudes can be expressed as a q^2 and helicity dependent shift in C_9, since the photon has a vector-like coupling to leptons and flavour-violation always involves left-handed quarks in the SM. A new physics contribution to the Wilson coefficient C_9 is by definition independent of the di-muon invariant mass q^2, and it is universal for all three helicity amplitudes. For hadronic effects, the situation is rather different. It can be argued that hadronic effects in the λ = + helicity amplitudes are suppressed <cit.> and a priori there is no reason to expect that hadronic effects in the λ=0 and λ = - amplitudes are of the same size. Moreover, one would naively expect that hadronic effects that can arise e.g. from charm loops show a non-trivial q^2 behaviour. However, we would like to stress that no robust predictions about the precise properties of the hadronic effects can be made at present. Another interesting possibility is to have NP contributions in b→ cc̅s operators as speculated in <cit.> and recently worked out in <cit.>. In this case, the shift in C_9 would be q^2 dependent, but helicity independent up to corrections of order α_s and Λ_QCD/m_b. In order to understand whether the data show a preference for a non-trivial q^2 dependence, we perform a series of fits to non-standard contributions to the Wilson coefficient C_9 in individual bins of q^2, using B^0→ K^*0μ^+μ^- measurements only. In particular, we consider separately the experimental data in bins below 2.5 GeV^2, between 2 GeV^2 and 4.3 GeV^2, between 4 GeV^2 and 6 GeV^2, and between 6 GeV^2 and 8.7 GeV^2 (the overlaps are due to the different binning unfortunately still used by different experiments). While the latter bin is not included in our NP fit as discussed in section <ref>, we include it here as we are explicitly interested in the hadronic effects mimicking a shift in C_9. The results are shown in the left plot of Fig. <ref>. While the significance of the tension is more pronounced in the region above 4 GeV^2, this is not surprising as the observables are more sensitive to C_9 in this region. At 1σ, the fits are compatible with a flat q^2 dependence. Moreover, every single bin shows a preference for a shift in C_9, compatible with a constant new physics contribution of C_9^NP∼ -1. In the right plot of Fig.
<ref> we show results of fits that allow for helicity dependent shifts in the Wilson coefficient C_9, which we denote as Δ C_9^0 and Δ C_9^-. As before we split the data into q^2 bins. The fit results are perfectly consistent with a universal effect Δ C_9^0 = Δ C_9^- for each individual q^2 bin. Furthermore, we also find that the fit results of the different q^2 bins are consistent with each other. The absence of a q^2 and helicity dependence is intriguing, but cannot exclude a hadronic effect as the origin of the apparent discrepancies. §.§ Predictions for LFU Observables As discussed, the “B → K^* μ^+μ^- anomaly” can be consistently described by new physics contributions to Wilson coefficients of the effective Hamiltonian (<ref>). In order to determine the best-fit values for the various Wilson coefficients, we considered exclusively data on rare decays with muons in the final state. In this section, we use the obtained best-fit ranges from sections <ref> and <ref> to make predictions for theoretically clean lepton flavour universality (LFU) observables. In contrast to hadronic effects, NP can lead to lepton flavour non-universality. NP predictions for LFU observables depend on additional assumptions about how the NP affects b → s e e transitions. Well motivated are NP scenarios where b → s e e transitions remain approximately SM-like. This is realized for example in models that are based on the L_μ - L_τ gauge symmetry <cit.> and is also naturally the case in models based on partial compositeness <cit.>. We will therefore assume that b → s e e transitions are unaffected by NP. We use our fit results to map out the allowed ranges for a variety of LFU observables. We consider the following ratios of branching ratios <cit.>: R_K = Br(B → K μ^+μ^-)/Br(B → K e^+e^-) , R_K^* = Br(B → K^* μ^+μ^-)/Br(B → K^* e^+e^-) , R_ϕ = Br(B_s →ϕμ^+μ^-)/Br(B_s →ϕ e^+e^-) , at low q^2 and at high q^2. The SM predictions for these ratios are unity to a very high accuracy up to kinematical effects at very low q^2 (cf. appendix <ref>). We also consider differences of B → K^* ℓ^+ ℓ^- angular observables as introduced in <cit.>[The observable D_P_5^' has recently also been considered in <cit.> and <cit.>, where it is referred to as Q_5. See <cit.> for an alternative set of observables.]: D_P_5^' = P_5^'(B → K^* μμ) - P_5^'(B → K^* ee) , D_S_5 = S_5(B → K^* μμ) - S_5(B → K^* ee) , D_A_FB = A_FB(B → K^* μμ) - A_FB(B → K^* ee) . The angular observables P_5^', S_5, and A_FB do not differ significantly from their SM predictions in the high q^2 region across the whole NP parameter space that provides a good fit of the b→ s μμ data. Therefore, we consider the above LFU differences only in the low q^2 region. In the SM the LFU differences vanish to an excellent approximation. In Tab. <ref> and in Fig. <ref> we show the predictions for the LFU observables for two scenarios: (i) new physics in the Wilson coefficients C_9 and C_10; (ii) new physics in the Wilson coefficients C_9 and C_9^'. We observe that in both scenarios, the observables R_K, R_K^* and R_ϕ are all suppressed with respect to their SM predictions. Since the best-fit regions of both scenarios correspond to similar values of the Wilson coefficients – a sizable shift in C_9^μ and small effects in C_10^μ or C_9^'μ, respectively – the predictions for the observables are very similar both for the branching ratios and for the angular observables. The LHCb measurement of R_K <cit.> is in excellent agreement with our predictions.
The recent results on D_P_5^' by Belle <cit.> are compatible with our predictions but still afflicted by large statistical uncertainties. If future measurements of any of the discussed LFU observables show a significant discrepancy with respect to SM predictions, it would be clear evidence for new physics. § CONCLUSIONS In this paper, we have analyzed the status of the “B→ K^*μ^+μ^- anomaly”, i.e. the tension with SM predictions in various b→ sμ^+μ^- processes, after the new measurements of B→ K^*μ^+μ^- angular observables by ATLAS and CMS and including updated measurements by LHCb. We find that the significance of the tension remains strong. Assuming the tension to be due to NP, a good fit is obtained with a negative NP contribution to the Wilson coefficient C_9. Models predicting the NP contributions to the coefficients C_9 and C_10 to be equal with an opposite sign give a comparably good fit. We also studied the q^2 and helicity dependence of the non-standard contribution to C_9. We find that the data agrees well with a q^2 and helicity independent new physics effect in C_9. A hadronic effect with these properties might appear surprising, but cannot be excluded as an explanation of the tensions. Finally, again under the hypothesis of NP explaining the tensions, we provided a set of predictions for LFU observables. Assuming that the new physics affects only b → s μμ but not b → s e e transitions, we confirm that the latest B → K^* μ^+μ^- data shows astonishing compatibility with the LHCb measurement of the LFU ratio R_K. Future measurements of LFU observables that show significant deviations from SM predictions could not be explained by underestimated hadronic contributions but would be clear evidence for a new physics effect. § ACKNOWLEDGMENTS We thank Ayan Paul, Javier Virto, Jure Zupan, and Roman Zwicky for useful comments. WA acknowledges financial support by the University of Cincinnati. DS thanks Christoph Langenbruch for reporting a bug in flavio and the organizers of the LHCb Workshop in Neckarzimmern for hospitality while this paper was written. The work of CN, PS, and DS was supported by the DFG cluster of excellence “Origin and Structure of the Universe”. § PREDICTIONS Figures <ref>–<ref> compare the binned experimental measurements to the SM predictions in the same bins, obtained with flavio version 0.21.2. We only show the bins included in our fits (cf. the discussion in section <ref>). “ABSZ” refers to the predictions for B→Vℓ^+ℓ^- observables in flavio, which are based on the results of <cit.> (BSZ) for low q^2 and <cit.> (AS) for high q^2. Table <ref> shows the SM predictions for observables sensitive to violation of LFU. The uncertainties are parametric uncertainties only, i.e. it is assumed that final state radiation effects are simulated fully on the experimental side and QED corrections due to light hadrons are neglected (cf. <cit.>).
"authors": [
"Wolfgang Altmannshofer",
"Christoph Niehoff",
"Peter Stangl",
"David M. Straub"
],
"categories": [
"hep-ph",
"hep-ex"
],
"primary_category": "hep-ph",
"published": "20170327172200",
"title": "Status of the $B\\to K^*μ^+μ^-$ anomaly after Moriond 2017"
} |
| http://arxiv.org/abs/1703.08963v2 | {
"authors": [
"P. P. Pancholy",
"K. Clemens",
"P. Geoghegan",
"M. Jermy",
"M. Moyers-Gonzalez",
"P. L. Wilson"
],
"categories": [
"physics.flu-dyn"
],
"primary_category": "physics.flu-dyn",
"published": "20170327080602",
"title": "Numerical Study of Flow Structure and Pedestrian Level Wind Comfort Inside Urban Street Canyons"
} |
Dispersive dam-break flow of a photon fluid Stefano Trillo December 30, 2023 ============================================ So far, more than 130 extrasolar planets have been found in multiple stellar systems. Dynamical simulations show that the outcome of the planetary formation process can lead to different planetary architectures (i.e. location, size, mass, and water content) when the star system is single or double. In the late phase of planetary formation, when embryo-sized objects dominate the inner region of the system, asteroids are also present and can provide additional material for objects inside the habitable zone (HZ). In this study, we make a comparison of several binary star systems and aim to show how efficient they are at moving icy asteroids from beyond the snow line into orbits crossing the HZ. We also analyze the influence of secular and mean motion resonances on the water transport towards the HZ. Our study shows that small bodies also participate in bearing a non-negligible amount of water to the HZ. The proximity of a companion moving on an eccentric orbit increases the flux of asteroids to the HZ, which could result in a more efficient water transport on a short timescale, causing a heavy bombardment. In contrast to asteroids moving under the gravitational perturbations of one G-type star and a gas giant, we show that the presence of a companion star not only favors a faster depletion of our disk of planetesimals, but can also bring 4–5 times more water into the whole HZ. However, due to the secular resonance located either inside the HZ or inside the asteroid belt, impacts between icy planetesimals from the disk and big objects in the HZ can occur at high impact speeds. Therefore, real collision modeling using a GPU 3D-SPH code shows that in reality, the water content of the projectile is greatly reduced and therefore also the water transported to planets or embryos initially inside the HZ. § INTRODUCTION Nearly 130 extrasolar planets in double and multiple star systems have been discovered to date. Roughly one quarter of these planets are orbiting close to or even crossing their systems' habitable zone (HZ), i.e. the region where an Earth-analogue could retain liquid water on its surface <cit.>. While most of these planets are gas giants, the striking ratio of one in four planets being at least partly in the HZ seems to make binary star systems promising targets for the search of a second Earth, especially for the next generation of photometry missions CHEOPS, TESS, and PLATO-2.0. About 80 percent <cit.> of the currently known planets in double star systems are in so-called S-type configurations <cit.>, i.e. the two stars are so far apart that the planet orbits only one stellar component without being destabilized. As many of the wide binary systems host more than one gas giant, their dynamical evolution is quite complex. The question whether habitable worlds can actually exist in such environments is, therefore, not a trivial one. Previous works on early stages of planetary formation have shown that planetesimal accretion can be more difficult than in single star systems <cit.>. This in turn raises the question of whether embryos can form at all in such systems. However, studies of late stages of planetary formation show that, should embryos manage to form despite these adverse conditions, the dynamical influence of companion stars is not prohibitive to forming Earth-like planets <cit.>.
Furthermore, it was shown that binary star systems in the vicinity of the solar system are capable of sustaining habitable worlds once they are formed <cit.>. As the amount of water on a planet's surface seems to be crucial to sustaining a temperate environment <cit.>, it is important to identify possible sources. For Earth, two mechanisms seem to be important: i) endogenous outgassing of primitive material and ii) exogenous delivery by asteroid and comet impacts. Since neither can by itself explain the amount and isotopic composition of Earth's oceans, models that favour a combination of both sources seem to be more successful <cit.>. The amount of primordial water that is collected during formation phases of planets in S-type orbits in binary star systems containing additional gas giants has been studied by <cit.>. They have shown that the planets formed in a circumstellar HZ may have collected between 4 and 40 Earth oceans from planetary embryos, but a main trend appeared: the more eccentric the orbit of the binary is, the more eccentricity is also injected into the gas giant's orbit. This in turn leads to fewer and drier terrestrial planets. Stochastic simulations proved that almost dry planets can also be formed in the circumprimary HZ of binary star systems <cit.>. However, as emphasized by these authors, water delivery in the inner solar system is not only due to radial mixing of planetary embryos. Smaller objects can also contribute as shown in <cit.>. Indeed, mean motion resonances (MMRs) and secular resonances (SR) play a key role in the architecture of a planetary system. It is well known that they had a strong influence on the dynamics in the early stage <cit.> and late stage <cit.> of planetary formation in our solar system. Icy bodies trapped in orbital resonances could be potential water sources for planets in the HZ. These water-rich objects can be embryos and small bodies (asteroids) as shown in <cit.> and <cit.> for our Earth. In this study, we aim to answer how much water can be transported into the HZ via small bodies, thus providing additional water sources to objects orbiting in the HZ. In Sect. <ref>, we study statistically the dynamics of a circumprimary asteroid belt in some binary star configurations <cit.>. We treat this problem in a self-consistent manner as all gravitational interactions in the system as well as water loss of the planetesimals due to outgassing are accounted for. Then, in Sect. <ref>, we aim to emphasize and characterize the dynamical effects of orbital resonances on a disk of planetesimals, in various binary star systems hosting a gas giant planet, as well as to what extent such resonances are likely to enable icy asteroids to bring water material into the HZ in comparison to single star systems <cit.>. For various binary star – giant planet configurations, we investigate in detail in Sect. <ref> the influence of the secular resonance, located at ∼ 1.0 au, on the water transport to bigger objects (embryos or planets) orbiting the host star within the HZ <cit.>. Finally, we conclude our work in Sect. <ref>. § WATER TRANSPORT: STATISTICAL OVERVIEW §.§ Initial modelling We focus this study on binary star systems with two G-type stars with masses equal to one solar mass. The initial orbital separations are either a_b = 50 au or 100 au and the binary's eccentricity is e_b = 0.1 or 0.3 (see also <cit.> for more binary star configurations).
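The stability limit that bounds circumprimary orbits in such binaries can be estimated from the binary parameters alone. Below is a small sketch, assuming the widely used Holman & Wiegert (1999) polynomial fit for S-type orbits — which we take to be the critical semi-major axis a_c referred to in the next subsection; its coefficients carry few-percent uncertainties, the source of Δ a_c.

```python
def critical_sma(a_b, e_b, m1, m2):
    # Holman & Wiegert (1999) fit for the largest stable circumprimary orbit
    mu = m2 / (m1 + m2)
    return a_b * (0.464 - 0.380 * mu - 0.631 * e_b
                  + 0.586 * mu * e_b + 0.150 * e_b ** 2 - 0.198 * mu * e_b ** 2)

for a_b in (50.0, 100.0):
    for e_b in (0.1, 0.3):
        print(f"a_b = {a_b:5.1f} au, e_b = {e_b}: "
              f"a_c ~ {critical_sma(a_b, e_b, 1.0, 1.0):5.1f} au")
# equal-mass G+G binaries: a_c ~ 12.0 / 8.9 au for a_b = 50 au,
# and a_c ~ 24.1 / 17.7 au for a_b = 100 au
```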
Our studied systems host a gas giant planet initially at a semi-major axis a_GG = 5.2 au, moving on a circular orbit with a mass equal to the mass of Jupiter. Since we study the planar case, initial inclinations are set to 0^∘. A disk of planetesimals is modelled as a ring of 10000 asteroids with masses similar to main-belt objects in the solar system, and each asteroid was assigned an initial water mass fraction (hereafter wmf) of 10% <cit.>. To determine the lower and upper limits for the asteroids' masses, we performed independent preliminary simulations with a 3D smooth particle hydrodynamics (SPH) code <cit.>. The scenarios involve collisions of rocky basaltic objects with one lunar mass at different encounter velocities and angles (see Tab. 1 in <cit.>). Our results show that in hit-and-run and merging scenarios (see Fig. 1 in <cit.>), all of the ten largest fragments possess masses ≥ 1 % of the total system mass, which is approximately Ceres' mass, and hundreds are above the “significant” fragment threshold in the sense of <cit.>. The smallest fragment consists of one SPH particle (0.001% of the total mass for 100k SPH particles), which corresponds to ∼ 0.1% of Ceres' mass. As increasing the number of SPH particles will result in even smaller fragments, this mass is an upper limit for the smallest fragment. As this fragment will contain only ∼ 0.006% of an Earth ocean[1 ocean = 1.5 × 10^21 kg of H_2O], we chose to neglect the water contribution of smaller particles. Our minimum and maximum mass are thus defined according to the fragments' mass after one impact. Therefore, members of our ring will have masses randomly and uniformly distributed between 0.1% of Ceres' mass and one Ceres mass. The total mass of the ring of 10000 asteroids amounts to 0.5 M_⊕. Thus, the quantity of water in terms of Earth-ocean units available in a ring will be 200. We randomly distributed asteroids inside and beyond the orbit of the gas giant. To avoid strong initial interaction with the gas giant, we assumed that it has gravitationally cleared a path in the disk around its orbit. We defined the width of this path as ± 3 R_H,GG, where R_H,GG is the giant planet's Hill radius. The inner border of the disk is set to the snow-line position <cit.>, the border between icy and rocky planetesimals. In the case of a G-type primary, the value of the snow line is 2.7 au. The outer border is defined according to the critical semi-major axis a_c and its uncertainty Δ a_c <cit.>. All the simulations are purely gravitational since our numerical integrations start after the gas has vanished (therefore, we do not consider gas-driven migration and eccentricity damping). The initial eccentricities and inclinations are randomly chosen below 0.01 and 1^∘ respectively. We limited our study to 10 Myr of integration time and integrated our systems numerically using the nine package <cit.>. The numerical integrator used for the computations is based on the Lie-series (see e.g. <cit.> and more recently <cit.>). As our planetesimals do not interact with each other, the disk was divided into 100 sub-rings which were integrated separately. §.§ Statistics of the disk dynamics During the simulation, each particle is tracked until the end of the integration time in order to assess: (a) asteroids crossing the HZ. They will be referred to as habitable zone crossers (hereafter HZc).
As we assume a two-dimensional HZ, an asteroid will be considered as an HZc if the intersection point between its orbit and the HZ plane lies within the HZ borders; (b) asteroids leaving the system when their semi-major axis ≥ 500 au; (c) asteroids colliding with the gas giant or the stars; and (d) asteroids still alive in the belt after 10 Myr. Figure <ref> shows the resulting statistics on the asteroids' dynamics for a_b = 50 au (left panel) and 100 au (right panel) (see also Fig. 4 in <cit.>). For each semi-major axis, we show the dynamical outcome of our asteroids expressed in terms of probability, as a function of the secondary's eccentricity. Below 100%, the percentage of asteroids that are still present in the belt (“alive”), that were ejected, or that collided with the stars or the gas giant is shown. The black area of each histogram above 100% indicates the probability that asteroids will enter the HZ, i.e. become HZc. A comparison of the different histograms indicates that the most important parameter is the periapsis of the binary system, which is defined by the semi-major axis and eccentricity of the binary. Figure <ref> clearly shows that the probability of asteroids becoming HZc increases if the secondary's periapsis distance decreases. Indeed, for a given value of a_b (for instance 50 au), one can see that this probability is at least doubled when e_b increases. As a consequence, the asteroid belt will be depopulated because of dynamically induced ejections, as well as collisions with the giant planet and the stars. Since the rate of colliding and ejected asteroids is higher, a ring will be depopulated faster when e_b becomes larger. Therefore, the statistics in Fig. <ref> show that the probability for an asteroid to remain in the ring after 10 Myr decreases with the periapsis distance. §.§ Timescale statistics Depending on the periapsis distance of the secondary, the disk of planetesimals can be perturbed more or less rapidly. Asteroids will suffer from the gravitational perturbations of the secondary star and the gas giant, and their eccentricity may increase quickly. Figure <ref> shows the statistical results of the average time needed by an asteroid to become an HZc, i.e. the time it takes to reach the HZ. This corresponds to the time spans until the first asteroid enters the HZ. The median value and its absolute deviation (error bars) are presented for a set of 10000 asteroids for a secondary with e_b = 0.1 (▪) and e_b = 0.3 (∙). This confirms a strong correlation between the periapsis distance and the time of first crossing. Figure <ref> shows clearly that the average time varies from a few centuries to tens of thousands of years. The closer the secondary star, the sooner asteroids can reach the HZ. §.§ Water transport statistics We now compare the water transport efficiency between binary and single star systems. For this purpose, we considered the same initial conditions for the gas giant and the asteroid belt distribution in both cases, i.e. single and binary star. As the comparisons are made for the same initial conditions of the asteroids, we have to consider, for the single star system, the same disk size as given by the binaries' characteristics. Figure <ref> shows the total amount of water transported into the HZ (expressed in terrestrial-ocean units) in various systems. The histograms marked with the letter B refer to binary star systems, those with the letter S to single star systems.
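The terrestrial-ocean unit used here is simple mass bookkeeping; a one-line sanity check of the ~200-ocean budget quoted for the ring in the previous subsection (round constants, not the exact simulation values):

```python
M_EARTH = 5.97e24   # kg
M_OCEAN = 1.5e21    # kg of H2O, one Earth ocean (see footnote above)
ring_mass = 0.5 * M_EARTH             # total mass of the 10000-asteroid ring
print(ring_mass * 0.10 / M_OCEAN)     # wmf = 10%  ->  ~199 oceans
```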
The color code indicates the amount of water that ended up in four equally spaced sub-rings (Inner, Central 1, Central 2, Outer) of the corresponding permanently habitable zone as defined in <cit.>. For a single star system, each sub-ring is computed using 0.950 au as inner edge value and 1.676 au as outer edge value <cit.>. It is not surprising that the outer HZ is the ring with most incoming HZc. Indeed, its area is much larger than that of the other rings. This figure also indicates that in such a single star – giant planet system, basically all the water is transported into the outer HZ since the perturbation is not strong enough to increase drastically the eccentricity of an asteroid in the belt. These results show the efficiency with which a binary star transports water into the entire HZ over a shorter timescale compared to a single star system. § WATER TRANSPORT: DYNAMICS In this section, to explain the differences in the statistical results of Sect. <ref>, we aim to emphasize and characterize the dynamical effects of orbital resonances (SR and MMRs) on a disk of planetesimals, as well as to what extent such resonances are likely to enable the transport of water material into the HZ by icy asteroids. §.§ Initial modelling In order to highlight the influence of orbital resonances, we used a larger range of binary star configurations. The primary is still a G-type star but the secondary is either an F-, G-, K- or M-type star with mass M_b equal to 1.3M_⊙, 1.0M_⊙, 0.7M_⊙ and 0.4M_⊙, respectively. The initial orbital parameters of the secondary and gas giant planet remain the same as defined in Sect. <ref>. In order to allow easy comparisons of the dynamics in the various systems, we consider a different distribution for the asteroid belt and therefore, we defined three different regions in our planetesimal disk: * ℛ_1: this region extends from 0.5 au to the snow line position at ∼ 2.7 au. 200 particles were initially placed in this region. * ℛ_2: this region extends from beyond the snow line and up to the distance a_GG - 3 R_H,GG ≈ 4.1 au. We define this region as the inner disk. As we are mainly interested in icy bodies that are likely to bring water to the HZ, we densified this region and 1 000 particles were distributed therein. * ℛ_3: this region extends from a_GG + 3 R_H,GG ≈ 6.3 au and up to the stability limit defined by a_c and Δ a_c. It is obvious that the size of the external disk will vary according to (a_b, e_b, M_b). The larger a_b and the smaller (e_b, M_b), the wider this region, which is called the outer disk and in which 1 000 particles were placed. For all three cases, the initial orbital separation between each particle is uniform and is defined by the ratio of the width of the region and the number of particles. Their initial motion is taken as nearly circular and planar. We also assumed that all asteroids in ℛ_2 and ℛ_3 have equal mass and an initial wmf of 10%. Water mass-loss owing to ice sublimation was also taken into account during the numerical integrations. All the simulations are purely gravitational and we also assumed that, at this stage, planetary embryos have been able to form. Our simulations were performed for 50 Myr using the Radau integrator in the Mercury6 package <cit.>. §.§ Dynamics of the icy belt In Fig. <ref>, we show the maximum eccentricity ecc_max reached by the asteroids at different initial semi-major axes, in the regions ℛ_1 and ℛ_2 (separated by the vertical dashed line representing the snow-line position).
The four panels correspond to the values of q_b investigated and each sub-panel is for different secondary stellar types (F, G, K, and M). We can distinguish MMRs [only the main ones are indicated] with the gas giant and also a secular resonance: on the bottom panels, which represent the results for a_b = 100 au (q_b = 70 au and q_b = 90 au), we can see a spike located close to or inside the HZ [the borders are defined according to <cit.>] (continuous vertical lines) and moving outward (to larger semi-major axes) when increasing the secondary's mass. This spike represents the SR. When increasing q_b, not only does it slightly move inward, but also the maximum eccentricity reached by the particles is higher. This is because the gravitational perturbation from the secondary increases the gas giant's eccentricity. When decreasing a_b to 50 au (top panels, for q_b = 35 au and q_b = 45 au), the SR also moves outward and reaches the MMR region. As a consequence, the inner disk will suffer from an overlap of these orbital resonances that could cause a fast depletion. However, particles inside the HZ will remain in nearly circular motion. §.§ Dynamical lifetime of asteroids near MMRs We investigate in detail the dynamical lifetime of massless particles D_L which are initially close to or inside internal MMRs. These occur when the orbital periods of the gas giant and the particle are in commensurability, such as a_n = (q/p)^2/3 a_GG, with p, q ∈ ℕ, where a_n < a_GG for p > q (internal MMRs) and a_n > a_GG for p < q (external MMRs), and where a_n is the position of the nominal resonance. As we aim to correlate D_L with the binary star characteristics, we preferred to do a separate analysis to ensure that each MMR contains the same number of particles. We limited this study to resonances with integers p and q ≤ 10. In each MMR, we uniformly distributed 25 particles initially on circular and planar orbits. In addition, as suggested by the studies of <cit.> and <cit.>, each particle is cloned into four starting points with mean anomalies 0^∘, 90^∘, 180^∘ and 270^∘, since it is well known that the starting position plays an important role for the dynamical behaviour in MMRs. This accounts for 100 particles in each MMR. Each system was integrated for 50 Myr. We consider a particle as leaving its initial location inside a specific MMR when its dynamical evolution leads to a collision with either one of the stars or the gas giant. Finally, we defined D_L as the time required for 50% of the population to leave a specific MMR <cit.>. In Fig. <ref>, we show the dynamical lifetime in Myr of particles near the internal MMRs. On the top panel of this figure, the influence of M_b is shown for a certain periapsis distance q_b = 35 au as, for this particular value, the SR overlaps with the MMRs in ℛ_2. The bottom panel summarizes the results for a certain mass of the secondary star (i.e. a G-type star) and different periapsis distances of this star. We can see that prior to the 8:3 MMR, the border between icy and rocky asteroids, a secondary M-type will favour chaos inside the rocky bodies region located in ℛ_1. This is not surprising since Fig. <ref> clearly shows the SR overlapping with MMRs located inside the snow line at 2.7 au. Beyond this limit, a higher value of M_b leads to a lower D_L – values can reach 0.1 Myr – since the SR moves outward. From the bottom panel, one can recognize that the lower q_b, the lower D_L. Some MMRs can be quickly emptied within 0.1 Myr.
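The nominal resonance positions used throughout this section follow directly from Kepler's third law. A short sketch — note that reproducing the quoted locations (e.g. the 2:1 MMR at ∼ 3.28 au) requires the period ratio written as q/p for internal resonances, which is how we reconstructed the formula above:

```python
def mmr_position(p, q, a_gg):
    # p:q commensurability of orbital periods; internal MMRs (p > q) lie inside a_GG
    return (q / p) ** (2.0 / 3.0) * a_gg

for p, q in ((2, 1), (5, 3), (8, 3)):
    print(f"{p}:{q} MMR at {mmr_position(p, q, 5.2):.2f} au")
# -> 2:1 at 3.28 au, 5:3 at 3.70 au, 8:3 at 2.70 au (the snow line)

# Sect. 4 quotes a 3:1 MMR at 1.44 au, which back-solves to a_GG ~ 3.0 au there:
print(f"3:1 MMR at {mmr_position(3, 1, 3.0):.2f} au")
```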
With these tests, we highlight that in binary star systems, D_L of particles initially orbiting inside or close to MMRs can be variable according to the location of the SR. Provided that particles can reach the HZ region before colliding with one of the massive bodies (i.e. the stars or the gas giant) or before being ejected from the system, they could rapidly cause an early bombardment on any embryo or planet moving in the HZ. §.§ Consequences for the water transport Here, we compare the flux of icy particles from ℛ_2 and ℛ_3 towards the HZ (see also Figs. 6 and 7 in <cit.>). On the y-axis in Fig. <ref>, we represent the evolution of ecc_max (red line) of particles inside ℛ_1, ℛ_2 (as already drawn in Fig. <ref>) and ℛ_3. The top panel corresponds to a secondary star at q_b = 35 au and the bottom panel is for q_b = 70 au, each sub-panel corresponding either to an F- or M-type secondary star. From left to right, each figure is for a different period of integration (0.1 Myr, 10 Myr and 50 Myr). We also show the normalized HZc distribution (blue line) calculated with respect to the total number of HZc produced by the corresponding systems for each period of integration time. For all cases considered in the top panels (q_b = 35 au), within 10 Myr, the 2:1 MMR located at ∼ 3.28 au and the SR, when it lies beyond the snow line, are the primary sources of HZc in the inner disk. In addition, the external disk can produce a non-negligible or equivalent number of HZc, compared to the inner disk. On the same figure, the y-axis also corresponds to the cumulative fraction of water (dashed black line) brought by the HZc. This fraction is determined with respect to the final amount of transported water from ℛ_2 and ℛ_3 within 50 Myr. For the masses investigated at q_b = 35 au, the results exhibit the same trend: the quantity of incoming water inside the HZ drastically increases when particles orbit initially inside the SR and the 2:1 MMR. For q_b = 70 au (bottom panels), contrary to the previous case, the SR does not contribute at all to bearing water material into the HZ since it lies in this region (see Fig. <ref>). We show that the two main sources of HZc in ℛ_2 are the 2:1 and 5:3 MMR. The contribution of ℛ_3 is smaller than in the previous case, since this region is more extended and only weakly perturbed. § INFLUENCE OF ORBITAL RESONANCES ON THE WATER TRANSPORT The results of Sect. <ref> revealed how secular and mean motion resonances can affect the dynamics of an asteroid belt. However, we also showed that for a given binary star – gas giant configuration, particles initially orbiting inside the HZ or beyond the snow line can show different dynamical outcomes depending on the location of the SR, i.e. inside or outside the HZ. The aim of this section is to highlight in detail the influence of orbital resonances on the water transport by icy asteroids from a planetesimal disk, initially beyond the snow line, to embryo-to-planet-sized bodies (EPs), initially inside the HZ. We mainly focused on the following scenarios: a) eccentric planetary motion inside the HZ (induced by an SR at ∼ 1.0 au and/or MMRs) and an asteroid belt perturbed by MMRs; b) nearly circular planetary orbits inside the HZ and an asteroid belt perturbed by an SR and MMRs. §.§ Initial modelling As in Sect. <ref>, we investigate only binary star systems with two G-type stars with the same initial orbital parameters and also hosting a gas giant planet.
However, as pointed out by <cit.>, the location of the SR depends both on the orbital elements of the secondary and the gas giant. Since we fixed the binary's semi-major axis to 50 and 100 au for our numerical study, the position of the gas giant a_GG changes for the different systems depending on the investigated scenarios described above. With the conditions that the SR should either be beyond 2.7 au or around 1.0 au, we determine the corresponding semi-major axis of the giant planet using the semi-analytical method of <cit.> and <cit.>. We summarize the values of a_GG in Tab. <ref>. We uniformly distribute 1000 icy Ceres-like asteroids at initial semi-major axis a into three regions defined as follows: * ℛ_1^' is for 2.0 ≤ a ≤ 2.7 au, * ℛ_2^' is for 2.7 ≤ a ≤ a_GG - 3 R_H,GG, * ℛ_3^' is for a ≥ a_GG + 3 R_H,GG, and each asteroid is assigned a wmf using a linear approximation and with borders defined in the following way: * a ∈ ℛ_1^', 1 ≤ wmf ≤ 10% * a ∈ ℛ_2^', 10 ≤ wmf ≤ 15% * a ∈ ℛ_3^', wmf = 20% Contrary to the previous sections, as the distribution of a within the disk is different for the various binaries investigated (as it depends on the location of the gas giant), the total amount of water 𝒲_TOT (expressed in Earth-ocean units) contained in the disk varies as shown in Tab. <ref>. §.§ Collision modelling For both scenarios, i.e. the SR either inside the HZ or inside the asteroid belt, we performed two simulations for 100 Myr using the Radau integrator in order to simulate impacts between icy objects initially located in the asteroid belt and EPs initially inside the HZ. They are either Moon- or Mars-sized (embryos) or Earth-sized (planets). Considering the gravitational perturbations of the stars and the gas giant, we integrated separately: * the asteroid belt, to assess the orbital distribution of asteroids crossing the (x,y) plane with a distance r_A < 2.0 au to the primary star; * EPs at initial location a_EP, to assess the evolution of their eccentricity with time. They are uniformly distributed over 48 positions within the HZ (with borders defined according to <cit.>) and initially move on circular orbits. We combined the results of these two integrations to analytically compute the minimum orbital intersection distance (MOID) <cit.>. The MOID corresponds to the closest distance between two Keplerian orbits, regardless of the bodies' actual positions on their respective trajectories. We define a collision if the MOID is comparable to the EP's radius R_EP. For the collision assessment, we use the following five-step algorithm: (1) For a given position a_EP, we check if q_EP ≤ r_A ≤ Q_EP, where q_EP and Q_EP are the periapsis and apoapsis distances of the EPs respectively, defined using e_EP = e_max(t), with e_max(t) the maximum of the EP's eccentricity over different periods of integration. If this condition is fulfilled, then we go to the next step. (2) In reality, e_EP has periodic variations between 0 (its initial value) and e_max(t). Thus, we define the function Y(e_EP) = d_min(e_EP) - R_EP, where d_min is the MOID. According to the sign of the product Y(e_EP = 0) × Y(e_EP = e_max) we use different procedures: if the sign is negative, then we use a regula falsi procedure in order to find the value of e_EP giving Y(e_EP) ≈ 0. If the sign is positive, then we use a dichotomy procedure to find a value of e_EP leading to an impact. (3) When a collision is found, i.e.
if d_min ≤ R_EP, we also derive, from the MOID, the true anomalies of the EP and the asteroid in order to compute the relative impact velocity and impact angle of the asteroid. (4) If the previous condition is fulfilled, then we define 𝒲_k(a_EP, t) = M_CERES × wmf_k / M_H_2O and 𝒲̃_k(a_EP, t) = (1/N_k) ∑_j=1^N_k M_CERES × wmf_k × (1 - ω_c(a_EP)) / M_H_2O as the quantities of water (in Earth-ocean units) delivered by asteroid k (with water mass fraction wmf_k), at an intermediate integration time t, to the EP initially at a semi-major axis a_EP. Here, M_H_2O = 1.5 × 10^21 kg of H_2O is the mass of one Earth ocean. The first quantity 𝒲_k assumes a merging approach in which the whole water content of the asteroid is delivered to the EP without assuming any water loss processes. The second quantity 𝒲̃_k takes into account a water loss factor ω_c induced by a water loss mechanism. Here, N_k is the number of possible collisions of asteroid k. (5) At the end of the procedure, we can derive the total fraction of water delivered to the EP: 𝒲_EP(a_EP,t) = (1/𝒲_TOT) ∑_k=1^n_i 𝒲_k(a_EP,t) and 𝒲̃_EP(a_EP,t) = (1/𝒲_TOT) ∑_k=1^n_i 𝒲̃_k(a_EP,t), where n_i is the number of impactors. We also compute the median values of impact velocities v_i(a_EP) and angles θ_i(a_EP) according to the total number of possible collisions found. §.§ Dynamics statistics In Fig. <ref>, we compare results obtained when an SR is inside and outside the HZ (left and right panels, respectively) for a binary at a_b = 50 (bottom panels) and 100 au (top panels) with e_b = 0.3 (see also Figs. 2 and 3 in <cit.> for more configurations). In addition, for the SR inside the HZ, the top sub-panels represent the maximum eccentricity e_max of the EPs and the bottom sub-panels are for the number of impactors n_i. The results for an Earth-sized EP are shown on the left sub-panels and for Mars/Moon-sized objects on the right sub-panels. For the SR inside the belt, only n_i is displayed as the EPs' orbits remain almost circular during the integration time, regardless of the size of the EP. One can recognize in the left panels the SR around 1.0 au causing a relatively high eccentric motion with decreasing q_b and M_EP. Due to the proximity of the gas giant, we can also notice, for q_b = 35 au, the presence of the 3:1 MMR at 1.44 au also causing high eccentric motions in this area. When the SR is inside the HZ (left panels), we can notice that EPs can collide with more asteroids than if they were orbiting outside the orbital resonances. Both regions ℛ_1^' and ℛ_3^' can be water sources for EPs within the entire HZ and mainly at ∼ 1.0 au. However, an SR located inside the belt (right panels) can boost the number of impactors on the EP's surface and this can lead to up to 200 impactors from region ℛ_2^' for q_b = 35 au. One can notice, for the case where the SR is not in the HZ, that most of the impactors are concentrated near the outer HZ border. This was already pointed out in <cit.>. Therefore, only a small fraction of n_i can impact at ∼ 1.0 au, but this number is comparable to the case of the SR located inside the HZ. §.§ Collision parameters statistics In Fig. <ref> (see also Fig. 4 in <cit.> for more configurations), we represent the impact angle and velocity distributions θ_i and v_i (top and bottom panels respectively). Results are compared for both locations of the SR: inside the HZ (grey solid line for θ_i and panels (a) for v_i) and outside the HZ (black solid line for θ_i and panels (b) for v_i).
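Steps (2) and (3) of the algorithm above amount to root-finding on the MOID as a function of the EP's eccentricity. The sketch below is a simplified, coplanar stand-in: a brute-force sampling of the two ellipses replaces the analytical MOID of <cit.>, and the orbit parameters in the example are arbitrary.

```python
import numpy as np

def moid_coplanar(a1, e1, w1, a2, e2, w2, n=2000):
    # approximate MOID of two coplanar confocal ellipses, sampled in true anomaly;
    # accuracy is limited by the grid, which is fine for illustration only
    th = np.linspace(0.0, 2.0 * np.pi, n)
    r1 = a1 * (1.0 - e1 ** 2) / (1.0 + e1 * np.cos(th))
    r2 = a2 * (1.0 - e2 ** 2) / (1.0 + e2 * np.cos(th))
    x1, y1 = r1 * np.cos(th + w1), r1 * np.sin(th + w1)
    x2, y2 = r2 * np.cos(th + w2), r2 * np.sin(th + w2)
    return np.hypot(x1[:, None] - x2[None, :], y1[:, None] - y2[None, :]).min()

def regula_falsi(Y, lo, hi, itmax=60):
    # step (2): find e_EP with Y(e_EP) ~ 0, given Y(lo) * Y(hi) < 0
    flo, fhi = Y(lo), Y(hi)
    mid = lo
    for _ in range(itmax):
        mid = (lo * fhi - hi * flo) / (fhi - flo)
        fmid = Y(mid)
        if flo * fmid < 0.0:
            hi, fhi = mid, fmid
        else:
            lo, flo = mid, fmid
    return mid

R_EP = 4.3e-5                          # au, roughly one Earth radius
ast = dict(a2=2.5, e2=0.58, w2=0.7)    # asteroid with periapsis at 1.05 au
Y = lambda e: moid_coplanar(1.0, e, 0.0, **ast) - R_EP
print(regula_falsi(Y, 0.0, 0.3))       # ~0.05: the EP's apoapsis reaches the asteroid
```

In the production runs, the impact velocity and angle of step (3) then follow from the true anomalies at the closest-approach geometry; the sketch only illustrates the eccentricity scan.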
In the bottom panels, we distinguished impact velocities on Earth-, Mars- and Moon-sized objects (solid blue, red and black lines respectively). For both cases (SR inside and outside the HZ), the impact velocity distribution inside the HZ shows different modes depending on a_EP: outside the orbital resonances, statistically, we have for an Earth-sized EP 1.0 < v_i < 1.5 v_e, for a Mars-sized EP 1.5 < v_i < 2.5 v_e and for a Moon-sized EP 2.5 < v_i < 6.0 v_e (all expressed in units of the escape velocity v_e of the respective EP). However, near an orbital resonance, v_i can be slightly higher as seen in Fig. <ref>. For instance, for q_b = 35 au (for this particular system, one should read results for an Earth on the left y-axis and for a Mars/Moon on the right y-axis), v_i can reach up to 6.0 v_e for an Earth-, 25 v_e for a Mars- and beyond 40 v_e for a Moon-sized body. In a similar way, θ_i has high variations between 20^∘ and 80^∘, mainly near the SR. However, when an EP is not initially located inside an orbital resonance, θ_i is on average above 50^∘. §.§ Water transport: merging vs real collisions modelling In this section, we compare results of the water transport to EPs orbiting inside the HZ, considering two collision models: i) a merging approach in which we consider that the whole water content of the asteroid is delivered to an EP without assuming any water loss processes such as atmospheric drag or ice sublimation. The results using this approach give 𝒲_EP. ii) a real collision model using v_i as a key parameter in order to derive the water loss ω_c on the asteroid's surface. 𝒲̃_EP contains the results for the water transport from this model. For case ii), we performed detailed simulations of water-rich Ceres-sized asteroids with dry targets in the mass range M_EP equal to 1 M_MOON, 1 M_MARS and 1 M_EARTH. We assume a Ceres-like impactor with wmf = 15% water content in a mantle around a rocky core. The simulations are performed with our parallel (GPU) 3D smooth particle hydrodynamics (SPH) code. We simulated impacts on targets with different sizes at an impact angle of 30^∘ with initial collision velocities (taken from Fig. <ref>) v_i = 2; 5 and 30 v_e for the Moon, v_i = 1; 5 and 20 v_e for Mars and v_i = 1; 3 and 5 v_e for the Earth. Except for the latter case (v_i = 5 v_e), for which about 1 million SPH particles are used, most of the scenarios are resolved with about 500,000 SPH particles. All objects were relaxed self-consistently as described in <cit.>. Figure <ref> summarizes the water loss ω_c in the collision scenarios. All but the most extreme scenario result in a merged main survivor retaining most of the mass. The exception is the 30 v_e impact of a Ceres-like body onto the Moon which leads to mutual destruction of the bodies into a debris cloud and hence a loss of all volatile constituents. The other very fast impact (Mars at 20 v_e) results in a merged survivor that retains just under 2 wt-% of the available water. For the lower collision velocities we observe water loss rates between 11 wt-% and 68 wt-%. If plotted versus the impact velocity in terms of the mutual escape velocity as in Fig. <ref>, there is a strong correlation with the impact velocity but only weak dependence on the absolute mass. Using a linear extrapolation of ω_c between the minimum and maximum impact velocities derived for each EP, we are able to provide better estimates for the water transport. In Fig. <ref> (see also Figs.
2 and 3 in <cit.> for more configurations), we compare the fraction of water reaching the EP's surface with and without taking into account our study of SPH collisions, represented by the solid and dotted lines, respectively (i.e. 𝒲̃_EP and 𝒲_EP), when SR ∈ HZ or SR ∉ HZ (left and right panels, respectively). The two top panels show results for Moon- and Mars-sized objects and the bottom panels for Earth-sized bodies. We show the comparison for a binary with q_b = 35 au and a computation time of 100 Myr. One can clearly see that the water delivered by collisions between EPs and asteroids is greatly overestimated if we consider a merging approach. Indeed, we highlight that close to the MMR inside the HZ, the water transport to an EP's surface, i.e. 𝒲̃_EP, can be reduced significantly: by almost 50% for an Earth-, 68% for a Mars- and 75% for a Moon-sized object. We have the same statistics near the SR for Mars- and Moon-sized objects, but not for an Earth-sized one, for which v_i = 3 v_e. In this case, 𝒲̃_EP is reduced by ∼ 30%. Even if no strong perturbation is located in the HZ (right panels), the real collision process shows that the incoming amount of water is reduced to 𝒲̃_EP ∼ 50% 𝒲_EP for a Moon-sized EP and to ∼ 20% 𝒲_EP for Mars- and Earth-sized bodies. A comparison of the left and respective right panels shows that around 1.0 au, 𝒲̃_EP is nearly the same in both cases (SR ∈ HZ and SR ∉ HZ), which indicates the importance of including SPH collisions in dynamical studies to avoid false effects, as shown by the dotted lines of the left panels in Fig. <ref>:* The apparent positive aspect of orbital resonances is that, due to the eccentric motion of the EP at 1.0 au, the transported water (𝒲_EP) is boosted and ensures higher values than in the case of circular motion of the EP. Moreover, if SR ∉ HZ, then even with a higher crossing frequency, the nearly circular motion of EPs close to 1.0 au limits the number of collisions with asteroids. * On the other hand, the negative aspect of highly eccentric motion near 1.0 au is the high impact velocities, which drastically reduce the efficiency of the water transport and lead to significantly lower values of 𝒲̃_EP. It seems that nearly circular motion in the HZ is important to prevent relatively high water loss during collisions, as v_i is much lower when SR ∉ HZ. § CONCLUSIONS In this work, we investigated the influence of a secondary star on the flux of asteroids into the habitable zone (HZ). We estimated the quantity of water brought by asteroids located beyond the snow line into the HZ of various double star configurations (for which we varied the stellar separation, the eccentricity, and the mass of the secondary). An overlap of perturbations from the secondary and the giant planet in the primordial asteroid belt causes rapid and violent changes in the asteroids' orbits. This leads to asteroids crossing the HZ soon after the gas has dissipated in the system and the gravitational dynamics become dominant. Our results point out that binary systems are more efficient for transporting water into the HZ than a single star system. Not only is the asteroid flux 4 – 6 times higher when a secondary star is present, but the number of oceans transported into the HZ can also be 4 – 5 times higher, providing additional water sources to embryos, in the whole HZ, in the late phase of planetary formation.
We highlighted that in tight binaries (a_b = 50 au), an SR can lie within the inner asteroid belt, overlapping with MMRs, which enables, on a short timescale, an efficient and significant flux of icy asteroids towards the HZ, in which particles orbit with nearly circular motion. In contrast thereto, in the study of wide binaries (a_b = 100 au), particles inside the HZ can move on eccentric orbits when the SR lies in the HZ. The outer asteroid belt is only perturbed by MMRs. As a consequence, a longer timescale is needed to produce a significant flux of icy asteroids towards the HZ. These dynamics drastically impact the dynamical lifetime of particles initially located inside MMRs. It can range from thousands of years to several million years depending on the location of the MMR and whether there is an overlap with the SR. This can favor a fast and significant contribution of MMRs in producing HZc, which are asteroids with orbits crossing the HZ and bearing water therein. In any case, we highlighted that, for the studied binary star systems, the inner disk (region ℛ_2) is the primary source of HZc (and therefore of the water in the HZ), by means of the 2:1 MMR, the 5:3 MMR, and the SR, especially when the latter lies close to or beyond the snow line. Finally, we focused on specific binary star – gas giant configurations where the SR lies either around 1.0 au in the HZ (causing highly eccentric motion therein) or inside the asteroid belt beyond the snow line at 2.7 au (and nearly circular motion in the HZ). We showed that the presence of an SR and an MMR inside the HZ could boost the water transport as the EPs can collide with more asteroids. This is apparently a positive mechanism for the water transport efficiency. However, our study shows clearly that dynamical results overestimate the water transport and need to be corrected by realistic simulations of collisions, which provide the water loss due to an impact. Indeed, we showed that collisions in the HZ can occur with high impact velocities, causing significant water loss for the asteroid (up to 100%) when colliding with the EP's surface, and the amount of water reaching the EP can be reduced by more than 50%. DB, EPL, TIM and AB acknowledge the support of the Austrian Science Foundation (FWF) NFN project: Pathways to Habitability and related sub-projects S11608-N16 "Binary Star Systems and Habitability" and S11603-N16 "Water transport". DB and EPL also acknowledge the Vienna Scientific Cluster (VSC project 70320) for computational resources. | http://arxiv.org/abs/1703.09000v2 | {
"authors": [
"D. Bancelin",
"E. Pilat-Lohinger",
"T. I. Maindl",
"Á. Bazsó"
],
"categories": [
"astro-ph.EP"
],
"primary_category": "astro-ph.EP",
"published": "20170327105009",
"title": "Water transport to circumprimary habitable zones from icy planetesimal disks in binary star systems"
} |
http://arxiv.org/abs/1703.09255v1 | {
"authors": [
"Md Shipon Ali",
"Ekram Hossain",
"Dong In Kim"
],
"categories": [
"cs.NI"
],
"primary_category": "cs.NI",
"published": "20170327183503",
"title": "Coordinated Multi-Point (CoMP) Transmission in Downlink Multi-cell NOMA Systems: Models and Spectral Efficiency Performance"
} |
|
E-mail: [email protected], [email protected] Department of Electrical Engineering, Polytechnique Montréal, Montréal, Québec, Canada We propose a discussion on the synthesis and scattering analysis of nonlinear metasurfaces. For simplicity, we investigate the case of a second-order nonlinear isotropic metasurface possessing both electric and magnetic linear and nonlinear susceptibility components. We next find the synthesis expressions relating the susceptibilities to the specified fields, which leads to the definition of the nonlinear metasurface conditions for no reflection, themselves revealing the nonreciprocal nature of such structures. Finally, we provide the approximate expressions of the scattered fields based on perturbation theory and compare the corresponding results to finite-difference time-domain simulations. Mathematical Synthesis and Analysis of Nonlinear Metasurfaces Karim Achouri, Yousef Vahabzadeh, and Christophe Caloz December 30, 2023 =============================================================§ INTRODUCTION Over the past few years, metasurfaces, the two-dimensional counterparts of three-dimensional metamaterials, have proven to be particularly effective at controlling electromagnetic waves. However, most studies on metasurfaces have been restricted to purely linear structures, and only a few studies, such as <cit.>, have investigated the synthesis and/or scattering from nonlinear metasurfaces, without providing extensive discussion on the topic. Since nonlinearity may potentially bring about a wealth of new applications to the realm of metasurface-based effects, such as, for instance, nonreciprocity, second-harmonic generation and wave-mixing <cit.>, we propose here a rigorous discussion on the synthesis and scattering analysis of second-order nonlinear metasurfaces. In the following, we will use the generalized sheet transition conditions (GSTCs) to obtain the metasurface susceptibilities (linear and nonlinear components) in terms of specified incident, reflected and transmitted waves. Based on that, the conditions for no reflection for second-order nonlinear metasurfaces will be derived, which will next be used to analyze the scattering from such structures. The scattered field will be computed using perturbation analysis and the results will be compared with FDTD simulations. § SYNTHESIS OF SECOND-ORDER NONLINEAR METASURFACES The GSTCs are boundary conditions that apply to zero-thickness discontinuities such as metasurfaces <cit.>. These conditions relate the discontinuities of the electric and magnetic fields to the presence of excitable surface polarization densities. In the case of a metasurface lying in the xy-plane at z=0 and assuming only transverse polarizations, the GSTCs read [We assume here the harmonic time dependence e^jω t.] ẑ×ΔH =jωP_∥, ΔE×ẑ =jωμM_∥, where Δ indicates the difference of the fields between both sides of the metasurface. Let us now investigate the case of a metasurface with nonzero second-order nonlinear electric and magnetic susceptibility tensors. Non-negligible electric nonlinearities may be found, for instance, in optical nonlinear crystals <cit.>, while magnetic nonlinearities may be found in ferrofluids <cit.>.
We restrict our attention to the case of monoanisotropic metasurfaces <cit.>, whose electric and magnetic polarization densities are P = ϵ_0χ^(1)_eeE_av + ϵ_0χ^(2)_eeE_av^2, M = χ^(1)_mmH_av + χ^(2)_mmH_av^2, where χ^(1) and χ^(2) correspond to the first-order (linear) and second-order (nonlinear) susceptibility tensors, respectively, and the subscript “av” denotes the average of the field between both sides of the metasurface. For simplicity, we assume that the only nonzero components of these tensors are χ^aa,(1) and χ^aaa,(2), where a = {x,y}, which generally corresponds to the case of a birefringent nonlinear metasurface. Since nonlinear media generate new frequencies <cit.>, the frequency-domain GSTCs in (<ref>) are not appropriate to investigate the response of nonlinear metasurfaces since they relate electromagnetic fields with the same frequency. To overcome this issue, we express the GSTCs in the time-domain instead of the frequency-domain. To further simplify the discussion, we consider, without loss of generality, only the case of x-polarized waves which, upon insertion of (<ref>) into the time-domain version of (<ref>), reduces the time-domain GSTCs to -Δ H= ϵ_0 χ^(1)_ee∂/∂ t E_av + ϵ_0 χ^(2)_ee∂/∂ t E^2_av, -Δ E= μ_0 χ^(1)_mm∂/∂ t H_av + μ_0 χ^(2)_mm∂/∂ t H^2_av, where E and H are, respectively, the x-component of the electric field and the y-component of the magnetic field, and where the susceptibility components are those corresponding to x-polarized excitation. To synthesize a nonlinear metasurface, one needs to solve (<ref>) so as to express the susceptibilities as functions of the electromagnetic fields on both sides of the metasurface. As it stands in (<ref>), the system has two equations for four unknowns, and is hence under-determined. If we consider two arbitrary transformations instead of just one <cit.>, then the system becomes a full-rank one, reading [ -Δ H_1; -Δ H_2; -Δ E_1; -Δ E_2 ] = ∂/∂ t [ ϵ_0 E_av,1, ϵ_0 E^2_av,1, 0, 0; ϵ_0 E_av,2, ϵ_0 E^2_av,2, 0, 0; 0, 0, μ_0 H_av,1, μ_0 H^2_av,1; 0, 0, μ_0 H_av,2, μ_0 H^2_av,2 ][ χ^(1)_ee; χ^(2)_ee; χ^(1)_mm; χ^(2)_mm ], where the subscripts 1 and 2 refer to the fields of two arbitrary transformations. This matrix system is easily solved and yields the following expressions for the susceptibilities χ^(1)_ee = (-Δ H_2 ∂/∂ t E^2_av,1 - Δ H_1 ∂/∂ t E^2_av,2)/(ϵ_0 (∂/∂ t E^2_av,1∂/∂ t E_av,2 -∂/∂ t E_av,1∂/∂ t E^2_av,2 )), χ^(1)_mm = (-Δ E_2 ∂/∂ t H^2_av,1 - Δ E_1 ∂/∂ t H^2_av,2)/(μ_0 (∂/∂ t H^2_av,1∂/∂ t H_av,2 -∂/∂ t H_av,1∂/∂ t H^2_av,2 )), χ^(2)_ee = (Δ H_2 ∂/∂ t E_av,1 - Δ H_1 ∂/∂ t E_av,2)/(ϵ_0 (∂/∂ t E^2_av,1∂/∂ t E_av,2 -∂/∂ t E_av,1∂/∂ t E^2_av,2 )), χ^(2)_mm = (Δ E_2 ∂/∂ t H_av,1 - Δ E_1 ∂/∂ t H_av,2)/(μ_0 (∂/∂ t H^2_av,1∂/∂ t H_av,2 -∂/∂ t H_av,1∂/∂ t H^2_av,2 )). At this stage, one may think that substituting the specified arbitrary incident, reflected and transmitted fields into (<ref>) would lead to well-defined susceptibilities. However, the specification of even “simple” transformations such as, for instance, E_i = E_0 e^j(ω_it-k_iz), E_r = R e^j(ω_rt-k_rz) and E_t = T e^j(ω_tt-k_tz), leads to susceptibilities that are time-varying, irrespective of the values of ω_i, ω_r and ω_t. The fact that the susceptibilities are, in this case, time-varying is inconsistent with the implicit assumption in Eqs. (<ref>) that they are not [If the susceptibilities were time-dependent, insertion of (<ref>) into the time-domain version of (<ref>) would lead, from the derivation chain rule, to additional terms of the form (∂χ_uu^(k)/∂ t)E_av,H_av, where u=e,m and k=1,2.].
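In practice, the synthesis step amounts to assembling and solving this 4×4 system from sampled field data at a given time instant. The following minimal sketch shows the structure of that computation; all numerical values are illustrative placeholders of ours and do not correspond to any particular transformation.

import numpy as np

eps0 = 8.854e-12    # F/m
mu0 = 4e-7 * np.pi  # H/m

# Illustrative field samples at one time instant for two specified
# transformations: jumps across the metasurface and time derivatives
# of the averaged fields (placeholder values only).
dH   = np.array([0.02, -0.015])    # Delta H_1, Delta H_2
dE   = np.array([-3.1, 4.7])       # Delta E_1, Delta E_2
dtE  = np.array([2.0e9, -1.5e9])   # d/dt E_av,1 and d/dt E_av,2
dtE2 = np.array([5.0e9, 3.0e9])    # d/dt E_av,1^2 and d/dt E_av,2^2
dtH  = np.array([6.0e6, -4.0e6])   # d/dt H_av,1 and d/dt H_av,2
dtH2 = np.array([1.2e7, 8.0e6])    # d/dt H_av,1^2 and d/dt H_av,2^2

# Block-diagonal 4x4 system of the synthesis equations above.
A = np.array([
    [eps0 * dtE[0], eps0 * dtE2[0], 0.0,          0.0],
    [eps0 * dtE[1], eps0 * dtE2[1], 0.0,          0.0],
    [0.0,           0.0,            mu0 * dtH[0], mu0 * dtH2[0]],
    [0.0,           0.0,            mu0 * dtH[1], mu0 * dtH2[1]],
])
b = np.array([-dH[0], -dH[1], -dE[0], -dE[1]])

chi_ee1, chi_ee2, chi_mm1, chi_mm2 = np.linalg.solve(A, b)

Repeating such a solve at several time instants makes the time dependence of the resulting susceptibilities, and hence the inconsistency just discussed, directly visible.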
At this point, one may therefore wonder whether the problem has not been inadequately posed. However, fortunately, we shall see in the next section that this is not the case: the problem is adequately posed but the specified fields must satisfy a specific constraint, while being otherwise still synthesizable. This constraint will be established by considering specified fields satisfying the postulate that the susceptibilities in (<ref>) not be functions of time. For this purpose, we will look at the problem from a different perspective. Instead of trying to synthesize the metasurface, we shall heuristically analyze its scattering with (arbitrary) known susceptibilities in order to understand what kind of reflected and transmitted fields are produced by such nonlinear metasurfaces, and hence deduce a proper way to perform the synthesis. § SCATTERING FROM SECOND-ORDER NONLINEAR METASURFACES The fields scattered from a nonlinear metasurface may be obtained by solving (<ref>). However, Eqs. (<ref>) form a set of nonlinear inhomogeneous first-order coupled differential equations that is not trivial to solve analytically. The problem may be simplified by assuming that the metasurface is reflectionless, which reduces (<ref>) to a single equation. The conditions for no reflection in a nonlinear metasurface may be obtained by specifying E_r = H_r = 0 in (<ref>) and assuming normally incident and transmitted plane waves, i.e. E = ±η_0 H, where + corresponds to waves propagating in the +z-direction and vice-versa for -. To obtain the susceptibilities in (<ref>), we have to consider the transformation of two sets of independent waves. One may assume that the incident and transmitted waves for the two transformations are either both propagating in the +z-direction, as in Fig. <ref>, or in the -z-direction, as in Fig. <ref>. One may also consider the case where the waves Ψ_1 are propagating in the +z-direction and the waves Ψ_2 are propagating in the -z-direction (or vice-versa), but it may be shown that conditions for no reflection do not exist in this case. It will next be shown that the conditions for no reflection are not the same in the two cases depicted in Figs. <ref>. This is because of the presence of the square of the electric and magnetic fields in (<ref>), which introduces an asymmetry in the definition of the susceptibilities with respect to the direction of wave propagation. This asymmetry is due to the different relations between the electric and magnetic fields (E = ±η_0 H) for forward or backward propagating waves. Solving (<ref>) for the case depicted in Fig. <ref> leads to the following conditions for no reflection: χ^(1)_ee = χ^(1)_mm, η_0χ^(2)_ee = χ^(2)_mm. Similarly, the conditions for no reflection in the case depicted in Fig. <ref> read χ^(1)_ee = χ^(1)_mm, -η_0χ^(2)_ee = χ^(2)_mm. Note the minus sign difference in the relations (<ref>) and (<ref>) between the second-order susceptibilities. The fact that different reflectionless metasurface conditions are obtained for different directions of propagation means that the considered nonlinear metasurface inherently exhibits a nonreciprocal response, which we will discuss in more detail later on. For now, we continue the evaluation of scattering from the nonlinear metasurface considering the case in Fig. <ref>.
Substituting (<ref>) along with the difference of the specified fields, Δ E = E_t - E_i, the average of the specified fields, E_av = 1/2(E_t + E_i), and the squared average of the specified fields, E_av^2 = E_avE_av^∗= 1/4(E_i^2 + E_iE_t^∗+E_tE_i^∗+E_t^2), and similarly for the magnetic fields, transforms (<ref>) into 2χ^(1)_ee ∂/∂ t E_t+ χ^(2)_ee∂/∂ t ( E_iE_t^∗+E_tE_i^∗+E_t^2) + 4η_0/μ_0 E_t=4η_0/μ_0 E_i -χ^(2)_ee∂/∂ tE_i^2-2χ^(1)_ee∂/∂ t E_i, where E_i is a known excitation and E_t is the unknown transmitted field. Assuming that E_i = E_0 cos(ω_0 t) and that E_0, χ^(1)_ee and χ^(2)_ee are real quantities, corresponding to a lossless system, the relation (<ref>) becomes χ^(2)_ee ∂/∂ tE_t^2 + ( 2χ^(1)_ee +2E_0 cos(ω_0 t)χ^(2)_ee)∂/∂ t E_t+(4η_0/μ_0- 2ω_0χ^(2)_ee E_0sin(ω_0 t)) E_t= 4η_0/μ_0 E_0 cos(ω_0 t)+ω_0χ^(2)_ee E_0^2sin(2ω_0 t)+2ω_0χ^(1)_ee E_0 sin(ω_0 t). This is an inhomogeneous nonlinear first-order differential equation that allows one to find the transmitted field from a reflectionless birefringent second-order nonlinear metasurface with purely real susceptibilities and assuming a normally incident plane wave excitation in the +z-direction. The large number of assumptions that were required to obtain Eq. (<ref>) reveals the inherent complexity of analyzing nonlinear metasurfaces. Equation (<ref>) does not admit an analytical solution. To obtain an approximate expression of the transmitted field, we consider that the second-order susceptibilities are much smaller than the first-order ones, which is typically a valid assumption in the absence of second-order resonance, χ^(2)≈ 10^-12χ^(1) <cit.>. From this consideration, perturbation analysis <cit.> may be used to approximate the value of the transmitted field. Perturbation analysis stipulates that the approximate solution may be expressed in terms of a power series of the following form E_t≈ E_t,0 + ϵ E_t,1 + ϵ^2 E_t,2+... where ϵ is a small quantity. Truncating the series and solving recursively for E_t,0, E_t,1 and so on, may help reduce the complexity of the problem. Since χ_ee^(1)≫χ_ee^(2)≈ϵ, it is possible to simplify Eq. (<ref>) using (<ref>) and solving for E_t,0 while neglecting all terms containing ϵ. This reduces (<ref>) to 2χ^(1)_ee ∂/∂ t E_t,0+4η_0/μ_0E_t,0= 4η_0/μ_0 E_0 cos(ω_0 t)+2ω_0χ^(1)_ee E_0 sin(ω_0 t), which does not contain any nonlinear term and therefore corresponds to a simple reflectionless linear metasurface. The steady-state solution of (<ref>) is, in complex form, given by E_t,0 = E_0 (2-jk_0χ^(1)_ee)/(2+jk_0χ^(1)_ee) e^jω_0 t, which exactly corresponds to the expected transmitted field <cit.>, where the frequency of E_t,0 is the same as that of the incident wave. Now, E_t,1 can be found by inserting E_t≈ E_t,0 + ϵ E_t,1 into (<ref>), with E_t,0 as given in (<ref>), and neglecting all the terms containing ϵ^2 (and higher powers). This leads to the following linear differential equation χ^(2)_ee ∂/∂ tE_t,0^2+( 2χ^(1)_ee +2E_0 cos(ω_0 t)χ^(2)_ee)∂/∂ t E_t,0+(4η_0/μ_0- 2ω_0χ^(2)_ee E_0sin(ω_0 t)) E_t,0+ 2χ^(1)_ee∂/∂ t E_t,1+ 4η_0/μ_0 E_t,1= 4η_0/μ_0 E_0 cos(ω_0 t) +ω_0χ^(2)_ee E_0^2 sin(2ω_0 t)+2ω_0χ^(1)_ee E_0 sin(ω_0 t), which now contains the nonlinear susceptibility χ^(2)_ee. This equation, of the same type as (<ref>) in E_t,1, is readily solved, and yields the steady-state solution E_t,1=E_0 k_0 χ^(2)_ee e^j2ω_0 t· [4+12 E_0 + χ^(1)_eek_0(χ^(1)_eek_0 -4j)(E_0 -1)/4(χ^(1)_eek_0-j)(χ^(1)_eek_0-2j)^2 ], which corresponds to second-harmonic generation, i.e. a wave at frequency 2ω_0, twice that of the incident wave.
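As an independent check of these perturbation results, Eq. (<ref>) can also be integrated directly in time and its steady-state harmonic content extracted numerically. A minimal sketch follows, assuming the normalized units c_0 = ϵ_0 = μ_0 = f_0 = 1 (so that η_0 = 1 and ω_0 = 2π) and the parameters E_0 = 1.5, χ_ee^(1) = 0.1 and χ_ee^(2) = 0.004 used in the full-wave simulations below; the implementation details are ours.

import numpy as np

eta0, mu0, w0 = 1.0, 1.0, 2.0 * np.pi
E0, chi1, chi2 = 1.5, 0.1, 0.004

def dEt(t, Et):
    # Eq. (<ref>) solved for dE_t/dt, using d/dt E_t^2 = 2 E_t dE_t/dt
    rhs = (4 * eta0 / mu0 * E0 * np.cos(w0 * t)
           + w0 * chi2 * E0**2 * np.sin(2 * w0 * t)
           + 2 * w0 * chi1 * E0 * np.sin(w0 * t))
    damp = 4 * eta0 / mu0 - 2 * w0 * chi2 * E0 * np.sin(w0 * t)
    slope = 2 * chi1 + 2 * chi2 * (E0 * np.cos(w0 * t) + Et)
    return (rhs - damp * Et) / slope

dt, n_periods = 1.0e-3, 40        # 1000 samples per period
t, Et, trace = 0.0, 0.0, []
for _ in range(int(n_periods / dt)):   # fixed-step RK4 integration
    k1 = dEt(t, Et)
    k2 = dEt(t + dt / 2, Et + dt / 2 * k1)
    k3 = dEt(t + dt / 2, Et + dt / 2 * k2)
    k4 = dEt(t + dt, Et + dt * k3)
    Et += dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    t += dt
    trace.append(Et)

steady = np.array(trace[len(trace) // 2:])       # discard the transient
spec = 2 * np.abs(np.fft.rfft(steady)) / len(steady)
per = int(round(len(steady) * dt))               # FFT bins per harmonic
print(spec[[per, 2 * per, 3 * per]])             # amplitudes at w0, 2w0, 3w0

The extracted amplitudes decrease with harmonic order, consistently with the relation E(ω_n)>E(ω_n+1) observed in the simulations discussed below.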
The procedure used to obtain E_t,1 may now be applied to find E_t,2. In this case, the differential equation becomes quite lengthy and is not shown here for the sake of conciseness. The differential equation for E_t,2 is also a linear first-order equation and is thus easily solved. The steady-state solution is E_t,2 = C_t,2 [(2j-3χ^(1)_eek_0)e^jω_0 t + 3(2j+χ^(1)_eek_0)e^j3ω_0 t], where the complex constant C_t,2 reads C_t,2 =j(χ^(2)_eek_0E_0)^2/2(χ^(1)_eek_0-2j)^3· [4+12 E_0 + χ^(1)_eek_0(χ^(1)_eek_0 -4j)(E_0 -1) ]/(χ^(1)_eek_0-j)(χ^(1)_eek_0+2j)(3χ^(1)_eek_0-2j). The expression of E_t,2 corresponds to a superposition of two waves, at frequencies ω_0 and 3ω_0. From (<ref>), we see that the amplitude of E_t,2 is directly proportional to the square of χ^(2)_ee while the amplitude of E_t,1 in (<ref>) is linearly proportional to χ^(2)_ee. Similarly, E_t,2 is proportional to the cube of E_0 while E_t,1 is proportional to the square of E_0. Consequently, relations (<ref>) and (<ref>) remain valid only for values of E_0, χ^(1)_ee and χ^(2)_ee such that E_t,0≫ E_t,1≫ E_t,2 and χ^(1)_ee≫χ^(2)_ee. According to (<ref>), (<ref>) and (<ref>), the scattered field from nonlinear metasurfaces may generally be expressed as E_s = ∑_n=1^∞ E_s,ne^j n ω_0 t, where E_s represents either the reflected or the transmitted electric field and E_s,n are complex constants. The form (<ref>) reveals why the synthesis of a nonlinear metasurface is not trivial: all the harmonics playing a significant role in (<ref>) should be included in the specified fields. In other words, if those harmonics were not included in the specified fields, one would not properly describe the physics of the problem. This would in fact precisely lead to the aforementioned contradiction that the susceptibilities in (<ref>) would be found to depend on time, in contradiction with the implicit assumption in (<ref>) that they do not. We shall now validate our theory with full-wave analysis. The metasurface is synthesized so as to satisfy the conditions for no reflection given in (<ref>), and we consider the following arbitrary parameters: E_0 = 1.5 V/m, χ_ee^(1)=0.1 m and χ_ee^(2)=0.004 m^2/V. The simulations are performed with the 1D FDTD zero-thickness metasurface simulation code developed in <cit.>, modified to account for the nonlinear susceptibilities (see Appendix <ref>) [Normalized constants are used in all the simulations, so that c_0 = ϵ_0 = μ_0 = f_0 = 1.]. In the first simulation, whose results are plotted in Fig. <ref>, the metasurface is illuminated by a plane wave propagating in the +z-direction. In the figure, the metasurface is placed at the center and the simulation area is split into a scattered-field region (SF) and a total-field region (TF). The source is placed at the SF/TF boundary on the left of the metasurface. As expected, the metasurface is reflectionless (E=0 in the SF region). A time-domain Fourier transform of the steady-state transmitted field is performed and the normalized (to E_0) result is plotted in Fig. <ref>. It may be seen that the metasurface generates a transmitted field with several visible harmonics satisfying E(ω_n)>E(ω_n+1), as expected. In order to investigate its nonreciprocity, the metasurface is now illuminated from the right by a plane wave propagating in the -z-direction. The corresponding simulated waveform is plotted in Fig. <ref>, where the positions of the SF/TF regions have been changed accordingly.
We can see that when this metasurface is illuminated from the right, it is not reflectionless anymore, as evidenced by the nonzero electric field in the SF region. This is in agreement with the fact that different conditions for no reflection apply for different directions of propagation, according to the discussion that led to Eqs. (<ref>) and (<ref>). The time-domain Fourier transform of the transmitted field in Fig. <ref> is plotted in Fig. <ref>. As may be seen, the transmitted field is missing frequencies that are even multiples of ω_0. In fact, since the system is assumed to be lossless, these missing frequencies are reflected by the metasurface instead of being transmitted. Thus, the nonreciprocal behavior of the second-order nonlinear metasurface only affects frequencies that are even multiples of ω_0. Next, we shall compare the theoretical results with FDTD simulations. For this purpose, two metasurfaces satisfying (<ref>) with different susceptibilities are considered. The values of E_0 and χ_ee^(1) are specified while those of χ_ee^(2) are swept. The amplitudes of the first three harmonics (at ω_0, 2ω_0 and 3ω_0) of the transmitted field are obtained from (<ref>), (<ref>) and (<ref>), and compared to the corresponding amplitudes found by FDTD simulation. Note that the amplitude of the harmonic at ω_0 is computed from both (<ref>) and (<ref>) since both of these equations include terms contributing to this harmonic. As explained in Appendix <ref>, the FDTD field update equations are only valid for specific values of E_0, χ_ee^(1) and χ_ee^(2), with other specifications leading to nonphysical behavior. In addition to choosing those parameters so as to follow this constraint, we have ensured χ_ee^(1) > χ_ee^(2) in order to be consistent with the assumptions of the perturbation analysis method [see Eq. (<ref>)]. The first comparison, presented in Fig. <ref>, considers E_0 = 1.5 V/m, χ_ee^(1)=0.1 m and χ_ee^(2) swept in [0,0.04] m^2/V, while the second comparison, presented in Fig. <ref>, considers E_0 = 10 V/m, χ_ee^(1)=0.3 m and χ_ee^(2) swept in [0,0.01] m^2/V. Both comparisons show good agreement between theory and simulation. In both cases, the discrepancies between the results of the two methods increase with increasing χ_ee^(2), as expected from the perturbation assumption χ_ee^(1) > χ_ee^(2). Thus, as χ_ee^(2) increases towards χ_ee^(1), the error in the approximation of the transmitted field (Eq. (<ref>)) progressively increases. Another source of error is the fact that Eq. (<ref>) is truncated at the third term and that higher-order terms are neglected. We have verified that the FDTD simulations in Figs. <ref> and <ref>, as well as in Figs. <ref> and <ref>, satisfy the losslessness and passivity power conservation condition. In contrast, this condition does not hold in the case of the theoretical results, due to the truncation of (<ref>), as is clearly apparent in Fig. <ref>, where ∑_n=1^3|E(ω_n)|^2 > |E_0|^2 for values of χ_ee^(2) that are close to 0.01 m^2/V. § CONCLUSION We have investigated a particular case of second-order nonlinear isotropic metasurfaces that possess both electric and magnetic nonlinear susceptibilities. We have found the synthesis expressions relating the susceptibilities to the fields on both sides of the metasurface, which lead to the derivation of the reflectionless metasurface conditions. These conditions reveal the inherent nonreciprocal nature of nonlinear metasurfaces.
Then, the scattered field from such metasurfaces was analyzed based on perturbation theory as well as full-wave simulations, and good agreement was found between the two approaches. § ACKNOWLEDGMENT This work was accomplished in the framework of the Collaborative Research and Development Project CRDPJ 478303-14 of the Natural Sciences and Engineering Research Council of Canada (NSERC) in partnership with the company Metamaterial Technology Inc. § FINITE-DIFFERENCE TIME-DOMAIN SCHEME FOR NONLINEAR METASURFACES Here, we extend the 1D finite-difference time-domain (FDTD) simulation scheme developed in <cit.> for the analysis of metasurfaces to the case of nonlinear susceptibility components. This FDTD scheme consists in using traditional FDTD update equations everywhere on the simulation grid except at the nodes positioned before and after the metasurface. For these specific nodes, the update equations are modified, using the GSTC relations, to take into account the effect of the metasurface. The conventional Yee-grid FDTD 1D equations are given by H_y^n+1(i) = H_y^n(i) - Δ t/μ_0Δ z(E_x^n+1/2(i+1)- E_x^n+1/2(i)), E_x^n+1/2(i) = E_x^n-1/2(i) - Δ t/ϵ_0Δ z(H_y^n(i)- H_y^n(i-1) ), where i and n correspond to the cell number and time coordinates and Δ z and Δ t are their respective position and time steps. The metasurface is placed at a virtual position between cell number i=n_d and i=n_d+1, corresponding to a position between an electric node and a magnetic node. To take into account its effect, a virtual electric node is created just before the metasurface (at i=0^-) and a virtual magnetic node is created just after the metasurface (at i=0^+). From (<ref>), the update equations for H_y^n+1(n_d) and E_x^n+1/2(n_d+1) are connected to these virtual nodes via the following relations H_y^n+1(n_d) = H_y^n(n_d) + Δ t/μ_0Δ z(E_x^n+1/2(0^-)- E_x^n+1/2(n_d)), E_x^n+1/2(n_d+1)= E_x^n-1/2(n_d+1) + Δ t/ϵ_0Δ z(H_y^n(n_d+1)- H_y^n(0^+)), where the values of the electric and magnetic fields at the virtual nodes are obtained from the GSTC relations -Δ H_y= ϵ_0 χ_ee^(1)∂/∂ t E_x,av + ϵ_0 χ_ee^(2)∂/∂ t E_x,av^2, -Δ E_x= μ_0 χ_mm^(1)∂/∂ t H_y,av + μ_0 χ_mm^(2)∂/∂ t H_y,av^2. Using (<ref>), the expressions of the electric and magnetic fields at the virtual nodes in (<ref>) read H_y^n(0^+) = H_y^n(n_d) - ϵ_0χ_ee^(1)/Δ t(E_x,av^n+1/2- E_x,av^n-1/2)- ϵ_0χ_ee^(2)/Δ t((E_x,av^n+1/2)^2- (E_x,av^n-1/2)^2 ), E_x^n+1/2(0^-) = E_x^n+1/2(n_d+1) + μ_0χ_mm^(1)/Δ t(H_y,av^n+1- H_y,av^n) + μ_0χ_mm^(2)/Δ t((H_y,av^n+1)^2- (H_y,av^n)^2), where the average electric field is defined by E_x,av^n+1/2 = (E_x^n+1/2(n_d) + E_x^n+1/2(n_d+1))/2, and similarly for the average magnetic field. Substituting (<ref>) along with (<ref>) into (<ref>) leads to two quadratic equations that may be independently solved to obtain the final update equations. Each of these two quadratic equations yields two possible solutions, but only one of the two corresponds to physical behavior.
The two solutions that produce physical results areH_y^n+1(n_d) = (2Δ z -χ_ee^(1) - H_y^n+1(n_d+1)χ_ee^(2))μ_0 - √(Δ_h)/μ_0χ_ee^(1), E_x^n+1/2(n_d+1) = (2Δ z -χ_mm^(1) - E_x^n+1/2(n_d)χ_mm^(2))ϵ_0 - √(Δ_e)/ϵ_0χ_mm^(1)where the discriminant Δ_h is given by Δ_h = μ_0{4 Δ t(E_x^n+1/2(n_d)-E_x^n+1/2(n_d +1))χ_ee^(2)+ μ_0[4 Δ z^2 + (χ_ee^(1)+(H_y^n(n_d)+H_y^n+1(n_d+1))χ_ee^(2))^2 -4 Δ z (χ_ee^(1) +(H_y^n(n_d)+H_y^n+1(n_d+1))χ_ee^(2) )]}, and the discriminant Δ_e is given by Δ_e = ϵ_0{4 Δ t(H_y^n(n_d)-H_y^n(n_d +1))χ_mm^(2)+ ϵ_0[4 Δ z^2 + (χ_mm^(1)+(E_x^n-1/2(n_d)+E_x^n-1/2(n_d+1))χ_mm^(2))^2 -4 Δ z (χ_mm^(1) +(E_x^n+1/2(n_d)+E_x^n-1/2(n_d+1))χ_mm^(2) )]}. Because of the square roots in (<ref>), the update equations may lead to nonphysical behavior depending on the values of the two discriminants. This limits the range of allowable values that the susceptibilities and the amplitude of the incident field may take. | http://arxiv.org/abs/1703.09082v1 | {
"authors": [
"Karim Achouri",
"Yousef Vahabzadeh",
"Christophe Caloz"
],
"categories": [
"physics.optics"
],
"primary_category": "physics.optics",
"published": "20170327135730",
"title": "Mathematical Synthesis and Analysis of Nonlinear Metasurfaces"
} |
Transductive Zero-Shot Learning with a Self-training dictionary approach Yunlong Yu, Zhong Ji, Xi Li, Jichang Guo, Zhongfei Zhang, Haibin Ling, Fei Wu December 30, 2023 ==========================================================================================We provide a general discussion of Smolyak's algorithm for the acceleration of scientific computations. The algorithm first appeared in Smolyak's work on multidimensional integration and interpolation. Since then, it has been generalized in multiple directions and has been associated with the keywords: sparse grids, hyperbolic cross approximation, combination technique, and multilevel methods. Variants of Smolyak's algorithm have been employed in the computation of high-dimensional integrals in finance, chemistry, and physics, in the numerical solution of partial and stochastic differential equations, and in uncertainty quantification. Motivated by this broad and ever-increasing range of applications, we describe a general framework that summarizes fundamental results and assumptions in a concise application-independent manner.Keywords Smolyak algorithm, sparse grids, hyperbolic cross approximation, combination technique, multilevel methods § INTRODUCTION We study Smolyak's algorithm for the convergence acceleration of general numerical approximation methods ^:={0,1,…,}^ →, which map discretization parameters =(ı_1,…,ı_)∈^ to outputs () in a Banach space . For instance, a straightforward way to approximate the integral of a function f [0,1]^n→ is to employ tensor-type quadrature formulas, which evaluate f at the nodes of a regular grid within [0,1]^n. This gives rise to an approximation method where ı_j determines the grid resolution in the direction of the j-th coordinate axis, j∈{1,…,}. Smolyak himself derived and studied his algorithm in this setting, where it leads to evaluations in the nodes of sparse grids <cit.>. Another example, which emphasizes the distinctness of sparse grids and the general version of Smolyak's algorithm considered in this work, is integration of a univariate function f→ that is not compactly supported but exhibits sufficient decay at infinity. In this case, ı_1 could as before determine the resolution of regularly spaced quadrature nodes and ı_2 could be used to determine a truncated quadrature domain. Smolyak's algorithm then leads to quadrature nodes whose density is high near the origin and decreases at infinity, as intuition would dictate. To motivate Smolyak's algorithm, assume that the approximation method converges to a limit _∞∈ at the rate ()-_∞≤ K_1 ∑_j=1^ı_j^-_̧j∀∈^ and requires the work (())= K_2∏_j=1^ı_j^_j∀∈^ for some K_1>0,K_2>0 and _̧j>0, _j>0, j∈{1,…,}. An approximation of _∞ with accuracy ϵ>0 can then be obtained with the choice ı_j:=(ϵ/ K_1)^-1/_̧j, j∈{1,…,}, which requires the work C(,K_1,K_2,_1,…,_,_̧1,…,,_̧)ϵ^-(_1/_̧1+…+_/_̧). Here and in the remainder of this work we denote by C(…) generic constants that depend only on the quantities in parentheses but may change their value from line to line and from equation to equation. The appearance of the sum _1/_̧1+…+_/_̧ in the exponent above is commonly referred to as the curse of dimensionality. Among other things, we will show (see <Ref>) that if the bound in <Ref> holds in a slightly stronger sense, then Smolyak's algorithm can replace this dreaded sum by max_j=1^_j/_̧j, which means that it yields convergence rates that are, up to possible logarithmic factors, independent of the number of discretization parameters.
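The advertised blow-up is easy to reproduce numerically. In the following sketch (the variable names beta and gamma for the convergence and work exponents are ours, and the equal splitting of the tolerance over the n error terms is one admissible choice), the work of the straightforward approach grows like ϵ^-n already for unit exponents:

import math

def naive_parameters(eps, beta, K1=1.0):
    # one admissible choice: make each of the n error terms <= eps / n
    n = len(beta)
    return [math.ceil((eps / (n * K1)) ** (-1.0 / b)) for b in beta]

def naive_work(eps, beta, gamma, K1=1.0, K2=1.0):
    i = naive_parameters(eps, beta, K1)
    return K2 * math.prod(ij ** g for ij, g in zip(i, gamma))

for n in (1, 2, 4):   # work explodes with the number of parameters
    print(n, naive_work(1e-3, (1.0,) * n, (1.0,) * n))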
In the general form presented here, Smolyak's algorithm forms linear combinations of the values (), ∈^, based on * an infinite decomposition of _∞ and* a knapsack approach to truncate this decomposition. Since the decomposition is independent of the particular choice ofand the truncation relies on easily verifiable assumptions on the decay and work of the decomposition terms, Smolyak's algorithm is a powerful black box for the non-intrusive acceleration of scientific computations. In the roughly 50 years since its first description, applications in various fields of scientific computation have been described; see, for example, the extensive survey article <cit.>. The goal of this work is to summarize previous results in a common framework and thereby encourage further research and exploration of novel applications.While some of the material presented here may be folklore knowledge in the sparse grids community, we are not aware of any published sources that present this material in a generally applicable fashion.The remainder of this work is structured as follows. In <Ref>, we introduce the infinite decomposition of _∞ that is at the core of Smolyak's algorithm. In <Ref>, we introduce spaces of approximation methods ^→ that allow for efficient solutions of the resulting truncation problem. In <Ref>, we derive explicit convergence rates for Smolyak's algorithm in common examples of such spaces. Finally, in <Ref>, we discuss how various previous results can be deduced within the framework presented here.§ DECOMPOSITION Smolyak's algorithm is based on a decomposition of _∞ that is maybe most simply presented in the continuous setting. Here, Fubini's theorem and the fundamental theorem of calculus show that any function f^:=[0,∞)^→ with f≡ 0 on ∂^ satisfiesf(x)=∫_∏_j=1^ [0,x_j]∂_1…∂_f(s)ds∀ x∈^,questions of integrability and differentiability aside. Moreover, if f converges to a limit f_∞∈ as min_j=1^x_j→∞, then f_∞=lim_min_j=1^x_j→∞∫_∏_j=1^ [0,x_j]∂_1…∂_f(s)ds=∫_^∂_f(s) ds,where we introduced the shorthand ∂_ for the mixed derivative ∂_1…∂_. The crucial observation is now that an approximation of f_∞ can be achieved not only by rectangular truncation of the integral in <Ref>, which according to <Ref> is equivalent to a simple evaluation of f at a single point, but also by truncation to more complicated domains. These domains should ideally correspond to large values of ∂_f in order to minimize the truncation error, but also have to take into consideration the associated computational work. To transfer the decomposition in <Ref> to the discrete setting, we denote by ^^:={^→} the space of all functions from ^ into the Banach space . Next, we define the discrete unidirectional difference and sum operatorsj^^→^^(j)():= (ı_1,…,ı_)-(ı_1,…,ı_j-1,ı_j-1,ı_j+1,…,ı_)if ı_j>0, (ı_1,…,ı_)else, j:=j^-1^^→^^(j )():=∑_s=0^ı_j(ı_1,…,ı_j-1,s,ı_j+1,…,ı_),Finally, weintroduce their compositions, the mixed difference operator:=1∘…∘^^→^^,and the rectangular sum operatorR:=1∘…∘^^→^^,which replace the mixed derivative and integral operators that map f^→ to f↦∂_f and x↦∫_∏_j=1^[0,x_j] f(s) ds, respectively. The discrete analogue of <Ref> is now a matter of simple algebra. * We have R=^-1, that is ()=∑_s_1=0^ı_1…∑_s_=0^ı_(s_1,…,s_)∀∈^. * We have =∑_𝐞∈{0,1}^ (-1)^|𝐞|_1S_𝐞, where S_𝐞 is the shift operator defined by (S_𝐞)():=(-𝐞),if -𝐞∈^0else.Part (i) follows directly from the commutativity of the operators {j}_j=1^. 
Part (ii) follows from plugging the representation j=-S_𝐞_j, where 𝐞_j is the j-th standard basis vector in ^, into the definition =1∘…∘, and subsequent expansion.Part (i) of the previous proposition shows that, ignoring questions of convergence, discrete functions ^→ with limit _∞ satisfy_∞=∑_∈^()in analogy to <Ref>. In the next section, we define spaces of discrete functions for which this sum converges absolutely and can be efficiently truncated. We conclude this section by the observation that a necessary condition for the sum in <Ref> to converge absolutely is that the unidirectional limits (ı_1,…,∞,…,ı_):=lim_ı_j→∞(ı_1,…,ı_j,…,ı_) exist. Indeed, by part (i) of the previous proposition, these limits correspond to summation ofover hyperrectangles that are growing in direction of the j-th coordinate axis and fixed in all other directions. For instance, in the context of time-dependent partial differential equations this implies stability requirements for the underlying numerical solver, prohibiting explicit time-stepping schemes that diverge when the space-discretization is refined while the time-discretization is fixed.§ TRUNCATION For any index set ⊂^, we may define Smolyak's algorithm as the approximation of _∞ that is obtained by truncation of the infinite decomposition in <Ref> to ,_():=∑_∈().By definition of , the approximation _() is a linear combination of the values (), ∈^ (see <Ref> for explicit coefficients). This is the reason for the name combination technique that was given to approximations of this form in the context of the numerical approximation of partial differential equations <cit.>.When one talks about the Smolyak algorithm, or the combination technique, a particular truncation is usually implied. The general idea here is to include those indices for which the ratio between contribution (measured in the norm of ) and required work of the corresponding decomposition term is large.To formalize this idea, we require decay of the norms of the decomposition terms and bounds on the work required for their evaluation.To express the former, we define for strictly decreasing functions _j→:=(0,∞), j∈{1,…,} the spaces():={^→ :∃ K_1>0∀∈^ ()≤ K_1∏_j=1^_j(ı_j)}. * If ∑_∈^∏_j=1^_j(ı_j)<∞, then any ∈() has a limit _∞:=lim_min_j=1^ı_j→∞(). Furthermore, the decomposition in <Ref> holds and converges absolutely.* The spaces () are linear subspaces of Y^^. * (Error expansions) Assume that the ratios _j(k)/_j(k+1) are uniformly bounded above for k∈ and j∈{1,…,}. For ∈^ and J⊂{1,…,} let_J:=(ı_j)_j∈ J∈^|J|. If the approximation error can be written as ()-_∞=∑_∅≠J⊂{1,…,}_J(_J)∀∈^with functions _J^|J|→, J⊂{1,…,} that satisfy_J(_J)≤∏_j∈ J_j(ı_j)then ∈().* (Multilinearity <cit.>) Assume (_i)_i=1^m and Y are Banach spaces and ∏_i=1^m_i→ is a continuous multilinear map. If_i∈_(e_j)_j=_1+…+_i-1+1^_1+…+_i(_i)∀ i∈{1,…,m},then(_1,…,_m)∈_(e_j)_j=1^(), where :=_1+…+_m and(_1,…,_m)():=(_1(_1),…,_m(_m))∀=(_1,…,_m)∈^. Sinceis a Banach space, the assumption in part (i) shows that for any ∈() the infinite sum in <Ref> converges absolutely to some limit . Since rectangular truncations of this sum yield point values (), ∈^ by part (i) of<Ref>, the limit _∞:=lim_min_j=1^ı_j→∞() exists and equals . Part (ii) follows from the triangle inequality. For part (iii), observe that by part (ii) it suffices to show _J∈() for all J⊂{1,…,}, where we consider _J as functions on ^ depending only on the parameters indexed by J. 
Since =^J∘^J^C, where ^J denotes the mixed difference operator acting on the parameters in J, we then obtain_J()=^J_J(_J) if∀ j∈ J^C:_j=00else.Hence, it suffices to consider J={1,…,}. In this case, the assumption _J(_J)≤ C∏_j∈ J_j(ı_j) is equivalent to ^-1_J∈(). Thus, it remains to show thatpreserves (). This holds by part (ii) of this proposition together with part (ii) of <Ref> and the fact that shift operators preserve (), which itself follows from the assumption that the functions _j(·)/_j(·+1) are uniformly bounded.Finally, for part (iv) observe that by multilinearity ofwe have(_1,…,_m)=(^(1)_1,…,^(m)_m),where the mixed difference operator on the left hand side acts on =_1+…+_m coordinates, whereas those on the right hand side only act on the _i coordinates of _i. By continuity ofwe have(^(1)_1,…,^(m)_m)()≤ C∏_i=1^m^(i)_i(_i)_i,for some C>0, from which the claim follows.Parts (iii) and (iv) of the previous proposition provide sufficient conditions to verify ∈() without analyzing mixed differences directly. * After an exponential reparametrization, the assumptions in<Ref> become ()-_∞≤ K_1 ∑_j=1^exp(-_̧jı_j) ∀∈^and (())= K_2∏_j=1^exp(_jı_j) ∀∈^,respectively. If we slightly strengthen the first and assume that ()-_∞=∑_j=1^_j(ı_j)∀∈^ with functions _j that satisfy _j(ı_j)≤ Cexp(-_̧jı_j),∀ı_j∈for some C>0 and _̧j>0, j∈{1,…,}, then ∈with _j(ı_j):=exp(-_̧jı_j), by part (iii) of <Ref>. <Ref> below then shows that Smolyak's algorithm applied torequires only the work ϵ^-max_j=1^{_j/_̧j}, up to possible logarithmic factors, to achieve the accuracy ϵ>0. * Assume we want to approximate the integral of a function [0,1]→ but are only able to evaluate approximations _ı_2, ı_2∈ ofwith increasing cost as ı_2→∞. Given a sequence S_ı_1, ı_1∈ of linear quadrature formulas, the straightforward approach would be to fix sufficiently large values of ı_1 and ı_2 and then approximate the integral of _ı_2 with the quadrature formula S_ı_1. Formally, this can be written as (ı_1,ı_2):=S_ı_1_ı_2. To show decay of the mixed differences , observe that the application of quadrature formulas to functions is linear in both arguments, which means that we may write (ı_1,ı_2)=(S_ı_1,_ı_2)=(_1(ı_1),_2(ı_2)) where _1(ı_1):=S_ı_1, _2(ı_2):=_ı_2, andis the application of linear functionals to functions on [0,1]. Assume, for example, that the functions _ı_2 converge toin some Banach space B of functions on [0,1] as ı_2→∞, and that the quadrature formulas S_ı_1 converge to the integral operator ∫ in the continuous dual space B^* as ı_1→∞. The decay of the mixed differences (ı_1,ı_2) then follows from part (iv) of <Ref>, sinceis a continuous bilinear map from B^* × B to . We will see in <Ref> below that the application of Smolyak's algorithm in this example yields so called multilevel quadrature formulas. This connection between Smolyak's algorithm and multilevel formulas was observed in <cit.>. *Assume that we are given approximation methods _j→_j, j∈{1,…} that converge at the rates _j(ı_j)-_∞,j_j≤_j(ı_j) to limits _∞,j∈_j, where _j→ are strictly decreasing functions. Define the tensor product algorithm ^→:=_1⊗…⊗_, ():=_1(ı_1)⊗…⊗_(ı_). If the algebraic tensor productis equipped with a norm that satisfies y_1⊗…⊗ y_≤y_1_1…y__, then ∈(). Indeed, _j∈__j(_j) by part (iii) of <Ref>, thus ∈() by part (iv) of the same proposition. Similar to the product type decay assumption on the norms (), which we expressed in the spaces , we assume in the remainder that (())≤ K_2∏_j=1^_j(ı_j)∀∈^for some K_2>0 and increasing functions _j→. 
By part (ii) of <Ref>, such a bound follows from the same bound on the evaluations () themselves. §.§ Knapsack problemThe goal of this subsection is todescribe quasi-optimal truncations of the decomposition in <Ref> for functions ∈() that satisfy <Ref>.Given a work budget W>0, a quasi-optimal index set solves the knapsack problemmax_⊂^ ||_:=K_1∑_∈∏_j=1^_j(ı_j) subject to ||_:=K_2∑_∈∏_j=1^_j(ı_j)≤ W.The term that is maximized here is motivated by _()-_∞=∑_∈()≈∑_∈()≈ ||_ <Ref> below shows that for any W>0 the knapsack problem has an optimal value. However, finding corresponding optimal sets is NP-hard <cit.>. As a practical alternative one can use Dantzig's approximation algorithm <cit.>, which selects indices for which the ratio between contribution and work is above some threshold δ(W)>0,_W:={∈^:∏_j=1^_j(ı_j)/_j(ı_j)>δ(W)},where δ(W) is chosen minimally such that |_W|_≤ W. * The knapsack problem in <Ref> has a (not necessarily unique) solution, in the sense that a maximal value of ||_ is attained. We denote this maximal value by E^*(W). * Any set ^* for which ||_=E^*(W) is finite and downward closed: If ∈^* and∈^ satisfies ≤ componentwise, then ∈^*. The same holds for the set _W from <Ref>. * The set _W from <Ref> satisfies |_W|_≥|_W|_/W E^*(W). This means that if _W uses all of the available work budget, |_W|_=W, then it is a solution to the knapsack problem. In particular, Dantzig's solutions are optimal for the work |_W|_ they require, but not necessarily for the workW they were designed for. There is an upper bound N on the cardinality of admissible sets in <Ref> since the functions _j are increasing and strictly positive. Furthermore, replacing an elementof an admissible set bywith ≤ decreases |·|_ and increases |·|_. This proves parts (i) and (ii), as there are only finitely many downward closed sets of cardinality less than N (for example, all such sets are subsets of {0,…,N-1}^). Part (iii) follows directly from the inequality |_W|_/|_W|_≥ |^*|_/|^*|_, where ^* is a set that attains the maximal value E^*(W). Even in cases where no bounding functions _j and _j are available, parts (ii) and (iii) of the previous proposition serve as motivation for adaptive algorithms that progressively build a downward closed setby adding at each step a multi-index that maximizes a gain-to-work estimate <cit.>. §.§ Combination rule Part (ii) of <Ref> provides a way to express the approximations _() in a succinct way as linear combinations of different values of . This yields the combination rule, which in its general form says that_()=∑_∈c_()withc_=∑_e∈{0,1}^: +e∈(-1)^|e|_1for any downward closed set . It is noteworthy that c_=0 for allwith +(1,…,1)∈, because for suchthe sum in <Ref> is simply the expansion of (1-1)^.Whenis a standard simplex, ={∈^:||_1≤ L}, the following explicit formula holds <cit.>:c_= (-1)^L-||_1-1L-||_1ifL-+1≤ ||_1≤ L0else. § CONVERGENCE ANALYSIS §.§ Finite-dimensional case We consider an approximation method ∈() with _j(ı_j)=K_j,1exp(-_̧jı_j)(ı_j+1)^_j∀ j∈{1,…,}and assume that(())≤∏_j=1^K_j,2exp(_jı_j)(ı_j+1)^_j∀∈^with K_j,1>0, K_j,2>0, _̧j> 0, _j> 0, _j≥ 0, _j≥ 0. The required calculations with ≡≡ 0 were previously done in various specific contexts, see for example <cit.>. According to <Ref>, quasi-optimal index sets are given by _δ: ={∈^:∏_j=1^K_j,1exp(-_̧jı_j)(ı_j+1)^_j/K_j,2exp(_jı_j)(ı_j+1)^_j>δ}={∈^:K_1/K_2exp(-(+)·)∏_j=1^(ı_j+1)^_j-_j>δ}for δ>0, where K_1:=∏_j=1^K_j,1, K_2:=∏_j=1^K_j,2, and:=(_̧1,…,_̧), :=(_1,…,_). 
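In computations, the sets _δ can be enumerated directly. The following minimal sketch does so by brute force over a box, which suffices for moderate dimensions (the variable names are ours: b and lam play the roles of the exponential rate vectors above, t and s the polynomial exponents); for larger dimensions one would instead traverse the downward closed structure recursively.

import itertools, math

def quasi_optimal_set(K1, K2, b, lam, t, s, delta, cap=40):
    # { k : (K1/K2) * exp(-(b+lam).k) * prod_j (k_j+1)^(t_j-s_j) > delta }
    n, out = len(b), []
    for k in itertools.product(range(cap), repeat=n):
        log_ratio = (math.log(K1 / K2)
                     - sum((bj + lj) * kj for bj, lj, kj in zip(b, lam, k))
                     + sum((tj - sj) * math.log(kj + 1)
                           for tj, sj, kj in zip(t, s, k)))
        if log_ratio > math.log(delta):
            out.append(k)
    return out

I = quasi_optimal_set(1.0, 1.0, b=(1.0, 2.0), lam=(1.0, 1.0),
                      t=(0.0, 0.0), s=(0.0, 0.0), delta=1e-6)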
For the analysis in this section, we use the slightly simplified sets_L:={∈^:exp((+)·)≤exp(L) }={∈^:(+)·≤ L },with L→∞, where, by abuse notation, we distinguish the two families of sets by the subscript letter.The work required by _L():=__L() satisfies(_L())≤∑_∈_L∏_j=1^K_j,2exp(_jı_j)(ı_j+1)^_j=K_2∑_(+)·≤ Lexp(·)(+1)^with (+1)^:=∏_j=1^(ı_j+1)^_j. Similarly, the approximation error satisfies_L()-_∞≤∑_∈_L∏_j=1^K_j,1exp(-_̧jı_j)ı_j^_j=K_1 ∑_(+)·>Lexp(-·)(+1)^.The exponential sums appearing in the work and residual bounds above are estimated in the appendix of this work, with the results(_L())≤ K_2C(,,)exp(ρ/1+ρL)(L+1)^^*-1+^*and_L()-_∞≤ K_1C(,,)exp(-1/1+ρL)(L+1)^^*-1+^*,where ρ:=max_j=1^_j/_̧j, J:={j∈{1,…,}:_j/_̧j=ρ}, ^*:=|J|, ^*:=∑_j∈ J_j, ^*:=∑_j∈ J_j.We may now formulate the main result of this section by rewriting the bound in <Ref> in terms of the right-hand side of <Ref>. Under the previously stated assumptions onand for small enough ϵ>0, we may choose L>0 such that _L()-_∞≤ϵand(_L())≤ K_1^ρK_2C(,,,,)ϵ^-ρ|logϵ|^(^*-1)(1+ρ)+ρ^*+^*. This means that we have eliminated the sum in the exponent of the bound in <Ref>, as announced in <Ref>. The additional logarithmic factors in <Ref> vanish if the worst ratio of work and convergence exponents, ρ, is attained only for a single index j_max∈{1,…,} and if _j_max=_j_max=0. If ≡ 0 and ≡ 0, that is when both work and residual depend algebraically on all parameters, then an exponential reparametrization, exp():=, takes us back to the situation considered above. The preimage of _L={ : (+)·≤ L} under this reparametrization is { : ∏_j=1^ı_j^_j+_j≤exp(L)}, whence the name hyperbolic cross approximation <cit.>. When the terms (), ∈^ are orthogonal to each other, we may substitute the Pythagorean theorem for the triangle inequality in <Ref>. As a result, the exponent of the logarithmic factor in <Ref> reduces to (^*-1)(1+ρ/2)+ρ^*+^*.§.§ Infinite-dimensional case The theory of the previous sections can be extended to the case =∞. In this case the decomposition in <Ref> becomes_∞=∑_∈(),whereare the sequences with finite support, and () is defined as 1∘…∘_max(), where _max is a bound on the support of . In particular, every term in <Ref> is a linear combination of values ofwith only finitely many nonzero discretization parameters.We consider the case ∈() for_j(ı_j):=K_j,1exp(-β_jı_j)(ı_j+1)^∀ j≥ 1 and ≥ 0, K_1:=∏_j=1^∞K_j,1<∞ ≥ 0, and we assume constant computational work for the evaluation of the mixed differences (), i.e. _j≡ C in <Ref> for all j≥ 1.Similarly to the finite-dimensional case, we considersets_L:={∈:∑_j=1^∞_̧jı_j≤ L}and the associated Smolyak algorithm_L():=∑__L().The following theorem is composed of results from <cit.> on interpolation and integration of analytic functions; the calculations there transfer directly to the general setting. Let L>0 and define N:=|_L|=(_L()). * Assume =0. * <cit.> If there exists _̧0>1 such that M:=M(_̧0,(_̧j)_j=1^∞):=∑_j=1^∞1/exp(_̧j/_̧0)-1<∞, then _L()-_∞≤K_1 /_̧0exp(_̧0M)N^-(_̧0-1), which implies (_L())≤ C(K,β_0,M)ϵ^-1/(_̧0-1) for ϵ:=K_1/_̧0exp(_̧0M)N^-(_̧0-1). * <cit.> If _̧j≥_̧0j for _̧0>0, j≥ 1, then _L()-_∞≤2/_̧0√(log N)N^1+1/4_̧0-3/8_̧0(log N)^1/2. * Assume >0. * <cit.> If there exist _̧0>1 and δ>0 such that M(_̧0,((1-δ)_̧j)_j=1^∞)<∞, then _L()-_∞≤ C(K_1,δ,_̧0,M,(_̧j)_j∈,)N^-(_̧0-1), which implies (_L())≤ C(K_1,δ,_̧0,M,(_̧j)_j∈,)ϵ^-1/(_̧0-1) for ϵ:=C(K_1,δ,_̧0,M,(_̧j)_j∈,)N^-(_̧0-1). * <cit.> If _̧j≥_̧0j for _̧0>0, then for every _0<_̧0 we have _L()-_∞≤C(_0,M,b)/√(log N)N^1+_0/4-3/8_0(log N)^1/2. 
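Before turning to concrete applications, we remark that the algorithm itself condenses into a few generic lines. The sketch below (our own notation: A maps multi-index tuples to elements of the Banach space, here floats or arrays) computes the coefficients of the combination rule and evaluates the truncated decomposition on a downward closed set; the limit value 4.0 in the closing self-test is an assumption of the toy example, not of the theory.

import itertools

def combination_coefficients(index_set):
    # c_k = sum over e in {0,1}^n with k+e in the set of (-1)^(|e|_1)
    index_set = set(map(tuple, index_set))
    n = len(next(iter(index_set)))
    coeffs = {}
    for k in index_set:
        c = sum((-1) ** sum(e)
                for e in itertools.product((0, 1), repeat=n)
                if tuple(ki + ei for ki, ei in zip(k, e)) in index_set)
        if c != 0:
            coeffs[k] = c
    return coeffs

def smolyak(A, index_set):
    # linear combination of evaluations of A prescribed by the combination rule
    return sum(c * A(k) for k, c in combination_coefficients(index_set).items())

def simplex(n, L):
    # standard simplex {k : |k|_1 <= L}
    return [k for k in itertools.product(range(L + 1), repeat=n) if sum(k) <= L]

# toy check: A(k) -> 4 as min(k) -> infinity, mixed differences decay like 2^(-|k|_1)
A = lambda k: (2.0 - 2.0 ** -k[0]) * (2.0 - 2.0 ** -k[1])
print(smolyak(A, simplex(2, 8)))   # close to the limit 4.0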
For alternative approaches to infinite-dimensional problems, which allow even for exponential type work bounds, _j(ı_j)=K_j,2exp(_jı_j), consider for example <cit.>.§ APPLICATIONS§.§ High-dimensional interpolation and integration Smolyak introduced the algorithm that now bears his name in <cit.> to obtain efficient high-dimensional integration and interpolation formulas from univariate building blocks. For example, assume we are given univariate interpolation formulas S_ı, ı∈ for functions in a Sobolev spacethat are based on evaluations in 2^ı points in [0,1] and converge at the rate S_ı-→≤ C 2^-k(β-α)for some 0≤α<β. A straightforward high-dimensional interpolation formula is then the corresponding tensor product formula⊗_j=1^S_ı_j^⊗=:H^β_([0,1]^)→^⊗=:H^α_([0,1]^)for (k_1,…,k_)∈^, where we consider both tensor product spaces to be completed with respect to the corresponding Hilbert space tensor norm <cit.>. This can be interpreted as a numerical approximation method with values in a space of linear operators, ():=⊗_j=1^S_ı_j∈ℒ(H^β_([0,1]^),H^α_([0,1]^))=:,whose discretization parameters =(ı_1,…,ı_) determine the resolution of interpolation nodes in each direction j∈{1,…,}.If we associate as work with () the number of required point evaluations, (()):=∏_j=1^2^ı_j,then we are in the situation described in <Ref>. Indeed, we have ∈() with _j(ı_j):=2^-ı_j(β-α) by part (iii) of <Ref>, since the operator norm of a tensor product operator between Hilbert space tensor products factorizes into the product of the operator norms of the constituent operators (see <cit.> and <cit.>). In particular, the straightforward tensor product formulas (ı,…,ı) require the workϵ^-/(β-α)to approximate the identity operator with accuracy ϵ>0 in the operator norm, whereas Smolyak's algorithm _L() with an appropriate choice of L=L(ϵ) achieves the same accuracy with (_L())≲ϵ^-1/(β-α)|logϵ|^(-1)(1+1/(β-α)),according to <Ref>.Here and in the following, we denote by ≲ estimates that hold up to factors that are independent of ϵ. As a linear combination of tensor product operators, Smolyak's algorithm _L() is a linear interpolation formula based on evaluations in the union of certain tensor grids. These unions are commonly known as sparse grids <cit.>.Interpolation of functions in general Banach spaces, with convergence measured in different general Banach spaces can be treated in the same manner. However, more care has to be taken with the tensor products. Once the algebraic tensor products of the function spaces are equipped with reasonable cross norms <cit.> and completed, it has to be verified that the operator norm of linear operators between the tensor product spaces factorizes. Unlike for Hilbert spaces, this is not always true for general Banach spaces. However, it is true whenever the codomain is equipped with the injective tensor norm, or when the domain is equipped with the projective tensor norm <cit.>. For example, the L^∞-norm (and the similar C^k-norms) is an injective tensor norm on the product of L^∞-spaces, while the L^1-norm is a projective norm on the tensor product of L^1-spaces.§.§ Monte Carlo path simulation Consider a stochastic differential equation (SDE)dS(t)=a(t,S(t))dt+b(t,S(t))dW(t)0≤ t≤ TS(0)=S_0∈^d,with a Wiener process W(t) and sufficiently regular coefficients a,b [0,T]×^d→. A common goal in the numerical approximation of such SDE is to compute expectations of the formE[Q(S(T))], where Q^d→ is a Lipschitz-continuous quantity of interest of the final state S(T). 
To approach this problem numerically, we first define random variables S_N(t), 0≤ t≤ T as the forward Euler approximations of <Ref> with N≥ 1 time steps. Next, we approximate the expectations E[Q(S_N(T))] by Monte Carlo sampling using M≥ 1 independent samples S^1_N(T),…,S^M_N(T) that are computed using independent realizations of the Wiener process. Together, this gives rise to the numerical approximation (M,N):=1/M∑_i=1^MQ(S^i_N(T)). For fixed values of M and N this is a random variable that satisfies E[((M,N)-E[Q(S(T))])^2] =(E[(M,N)]-E[Q(S(T))])^2+[(M,N)]=(E[Q(S_N(T))]-E[Q(S(T))])^2+M^-1[Q(S_N(T))]≲ N^-2 + M^-1, where the last inequality holds by the weak rate of convergence of the Euler method <cit.> and by its L^2-boundedness as N→∞. This shows that the random variables (M,N) converge to the limit _∞=E[Q(S(T))], which itself is just a deterministic real number, in the sense of probabilistic mean square convergence as M,N→∞. To achieve a mean square error of order ϵ^2>0, this straightforward approximation requires the simulation of M≈ϵ^-2 sample paths of <Ref>, each with N≈ϵ^-1 time steps, which incurs the total work ((M,N))=MN≈ϵ^-3. Smolyak's algorithm allows us to achieve the same accuracy with the reduced work ϵ^-2 of usual Monte Carlo integration. To apply the results of <Ref>, we consider the reparametrized algorithm (k,l) with M_k:=M_0exp(2k/3), N_l:=N_0exp(2l/3), for which the convergence and work parameters of <Ref> attain the values _̧j=1/3, _j=2/3, and _j=_j=0, j∈{1,2}. (Here and in the following we implicitly round up non-integer values, which increases the required work only by a constant factor.) Indeed, we may write (k,l)=(_1(k),_2(l)), where _1(k), k∈ is the operator that maps random variables to an empirical average over M_k independent samples, _2(l), l∈ is the random variable Q(S_N_l(T)), anddenotes the application of linear operators to random variables. Since _1(k) converges in the operator norm to the expectation operator on the space of square integrable random variables at the usual Monte Carlo convergence rate M_k^-1/2 as k→∞, and _2(l) converges to Q(S(T)) at the strong convergence rate N_l^-1/2 of the Euler method in the L^2-norm <cit.> as l→∞, and sinceis linear in both arguments, the claimed values of the convergence parameters _̧j, j∈{1,2} hold by part (iv) of <Ref>. <Ref> now shows that choosing L=L(ϵ) such that E[(_L()-E[Q(S(T))])^2]≤ϵ^2 incurs the work (_L())≲ϵ^-2|logϵ|^3. To link this result to the keyword multilevel approximation, we observe that, thanks to our particular choice of parametrization, Smolyak's algorithm from <Ref> takes the simple form _L()=∑_k+l≤ L(k,l). Since =1∘2 and 1=1^-1 we may further write _L()= ∑_l=0^L∑_k=0^L-l(k,l)= ∑_l=0^L2(L-l,l)= 1/M_L∑_i=1^M_LQ(S^i_N_0(T))+∑_l=1^L1/M_L-l∑_i=1^M_L-l(Q(S^i_N_l(T))-Q(S^i_N_l-1(T))), which reveals that Smolyak's algorithm employs a large number of samples from the coarse approximation S_N_0(T), and subsequently improves on the resulting estimate of E[Q(S(T))] by adding approximations of the expectations E[Q(S_N_l(T))-Q(S_N_l-1(T))], l∈{1,…,L} that are computed using fewer samples. <Ref> is a multilevel formula of the form analyzed in <cit.> and <cit.>. Alternatively, this formula could also be deduced directly from the combination rule for triangles in <Ref>. Compared to the analysis in <cit.>, our presentation has two shortcomings: First, our analysis only exploits the strong rate of the discretization method used to approximate <Ref>.
In the situation considered above, this does not affect the results, but for more slowly converging schemes a faster weak convergence rate may be exploited to obtain improved convergence rates. Second, the bound in<Ref> is larger than that in <cit.> by the factor |logϵ|. This factor can be removed by using independent samples for different values of l in <Ref>, since we may then apply <Ref>. §.§ Multilevel quadratureAs in <Ref> of <Ref>, assume that we want to approximate the integral ∫_[0,1](x) dx∈ using evaluations of approximations _[0,1]→, ∈. This is similar to the setting of the previous subsection, but with random sampling replaced by deterministic quadrature.As before, denote by S_ı, ı∈ a sequence of quadrature formulas based on evaluations in 2^ı nodes. If we assume that point evaluations of _ require the work exp() for some >0, that _-B≲ 2^-κ for some κ>0 and a Banach space B of functions on [0,1] and that S_ı-∫_[0,1]· dxB^*≲exp(-i̧) for some >̧0, then (ı,):=S_ı_ satisfies |S_ı_-∫_[0,1](x) dx|≲exp(-i̧)+exp(-κ). Hence, an accuracy of order ϵ>0 can be achieved by settingı:=-log(ϵ)/,̧:=-log_2(ϵ)/κ, which requires the work 2^ıexp()=ϵ^-1/-̧/κ. We have already shown the decay of the mixed differences, |(ı,)|≲exp(-i̧)2^-κ,in <Ref>. Thus, <Ref> immediately shows thatwe can choose L=L(ϵ) such that Smolyak's algorithm satisfies|_L()-∫_[0,1](x) dx|≤ϵ, with (_L())≲ϵ^-max{1/,̧/κ}|logϵ|^r for some r=r(,̧γ,κ)≥ 0. As in <Ref>, we may rewrite Smolyak's algorithm _L() in a multilevel form, which reveals that a Smolyak's algorithm employs a large number of evaluations of _0, and subsequently improves on the resulting integral approximation by adding estimates of the integrals ∫_[0,1]_l(x)-_l-1(x) dx, l>0, that are computed using less quadrature nodes.§.§ Partial differential equations The original Smolyak algorithm inspired two approaches to the numerical solution of partial differential equations (PDEs). The intrusive approach is to solve discretizations of the PDE that are built on sparse grids. The non-intrusive approach, which we describe here, instead applies the general Smolyak algorithm to product type discretizations whose resolution in the j-th direction is described by the parameter ı_j <cit.>. We discuss here how the non-intrusive approach can be analyzed using error expansions of finite difference approximations. For example, the work <cit.>, which introduced the name combination technique, exploited the fact that for the Poisson equation with sufficiently smooth data on [0,1]^2, finite difference approximations u_ı_1,ı_2∈ L^∞([0,1]^2) with meshwidths h_j=2^-ı_j in the directions j∈{1,2} satisfy u-u_ı_1,ı_2=w_1(h_1)+w_2(h_2)+w_1,2(h_1,h_2),where u is the exact solution and w_1(h_1),w_2(h_2),w_1,2(h_1,h_2)∈ L^∞([0,1]^2) are error terms that converge to zero in L^∞ at the rates Ø(h_1^2), Ø(h_2^2), and Ø(h_1^2h_2^2), respectively. Since the work required for the computation of (ı_1,ı_2):=u_ı_1,ı_2 usually satisfies ((ı_1,ı_2))≈ (h_1h_2)^-γfor some γ≥ 1 depending on the employed solver, an error bound of size ϵ>0 could be achieved with the straightforward choice ı_1:=ı_2:=-(log_2ϵ)/2, which would require the work((ı_1,ı_2))≈ϵ^-γ.Since <Ref> in combination with part (iii) of <Ref> shows that ∈_(_j)_j=1^2 with _j(ı):=2^-2ı_j, we may deduce from <Ref> that Smolyak's algorithm applied torequires only the workϵ^-γ/2|logϵ|^1+γ/2 to achieve the same accuracy. The advantage of Smolyak's algorithm becomes even more significant in higher dimensions. 
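The multilevel form derived above, and the two-diagonal combination rule of this subsection, are short enough to state as code. The sketch below continues the earlier OCaml fragment (gauss and path are reused from it); the factor-2 refinement per level and the geometric decay of samples per level are illustrative stand-ins for the exp(2/3)-parametrization used in the text.

(* one coupled sample of Q(S_{2n}(T)) - Q(S_n(T)); both paths are driven by
   the same Brownian increments, which keeps the variance of the correction small *)
let coupled_diff ~a ~b ~q ~s0 ~tfin ~n =
  let dt = tfin /. float_of_int (2 * n) in
  let sf = ref s0 and sc = ref s0 in
  for _ = 1 to n do
    let dw1 = sqrt dt *. gauss () and dw2 = sqrt dt *. gauss () in
    sf := !sf +. a !sf *. dt +. b !sf *. dw1;                  (* two fine steps *)
    sf := !sf +. a !sf *. dt +. b !sf *. dw2;
    sc := !sc +. a !sc *. (2.0 *. dt) +. b !sc *. (dw1 +. dw2) (* one coarse step *)
  done;
  q !sf -. q !sc

(* many samples of the coarsest approximation, few samples of each correction *)
let mlmc ~a ~b ~q ~s0 ~tfin ~n0 ~m0 ~lmax =
  let mean m f =
    let acc = ref 0.0 in
    for _ = 1 to m do acc := !acc +. f () done;
    !acc /. float_of_int m
  in
  let est = ref (mean m0 (fun () -> q (path ~a ~b ~s0 ~tfin ~n:n0))) in
  for l = 1 to lmax do
    let n = n0 * (1 lsl (l - 1)) in            (* coarse step count at level l *)
    let m = max 1 (m0 / (1 lsl (2 * l))) in    (* illustrative per-level decay *)
    est := !est +. mean m (fun () -> coupled_diff ~a ~b ~q ~s0 ~tfin ~n)
  done;
  !est

(* 2-D Smolyak / combination technique over a black-box product approximation:
   the mixed differences D(i,j) = A(i,j) - A(i-1,j) - A(i,j-1) + A(i-1,j-1),
   with A(-1,.) = A(.,-1) = 0, summed over i + j <= l, telescope to two diagonals *)
let smolyak_2d (approx : int -> int -> float) (l : int) : float =
  let diag s =
    if s < 0 then 0.0
    else begin
      let acc = ref 0.0 in
      for i = 0 to s do acc := !acc +. approx i (s - i) done;
      !acc
    end
  in
  diag l -. diag (l - 1)

Instantiating approx in smolyak_2d with the finite-difference solutions u_{ı_1,ı_2} above recovers the combination technique; instantiating it with tensor quadrature recovers the multilevel quadrature of the previous subsection.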
All that is required to generalize the analysis presented here to high-dimensional problems, as well as to different PDE and different discretization methods, are error expansions such as <Ref>.§.§ Uncertainty quantificationA common goal in uncertainty quantification <cit.> is the approximation of response surfaces∋↦():=Q(_)∈.Here, ∈⊂^m represents parameters in a PDE and Q(_) is a real-valued quantity of interest of the corresponding solution _.For example, a thoroughly studied problem is the parametric linear elliptic second order equation with coefficients a U×→, -∇_x ·(a(x,)∇_x _(x))=g(x)in U⊂^d _(x)=0on ∂ U,whose solution for any fixed ∈ is a function _ U→. Approximations of response surfaces may be used for optimization, for worst-case analysis, or to compute statistical quantities such as mean and variance in the case whereis equipped with a probability distribution. The non-intrusive approach to compute such approximations, which is known as stochastic collocation in the case whereis equipped with a probability distribution, is to compute the values offor finitely many values ofand then interpolate. For example, if we assume for simplicity that =∏_j=1^m[0,1], then we may use, as in <Ref>, a sequence of interpolation operators S_ı H^β([0,1])→ H^α([0,1]) based on evaluations in (_ı,i)_i=1^2^ı⊂ [0,1]. However, unlike in <Ref>, we cannot compute values ofexactly but have to rely on a numerical PDE solver. If we assume that this solver has discretization parameters =(_1,…,_d)∈^d and returns approximations _, such that the functions_ →↦_():=Q(_,)are elements of H^β_([0,1]^m), then we may define the numerical approximation method^m×^d→ H^α_([0,1]^m)=:(,):=(⊗_j=1^mS_ı_j)_,with n:=m+d discretization parameters. At this point the reader should already be convinced that straightforward approximation is a bad idea. We therefore omit this part of the analysis, and directly move on to the application of Smolyak's algorithm. To do so, we need to identify functions _j→ such that ∈_(_j)_j=1^n(). For this purpose, we writeas(,)=(_1(),_2()),where _1():=⊗_j=1^mS_ı_j∈(H^β_([0,1]^m);H^α_([0,1]^m))=:_1 ∀∈^m _2():=_∈ H^β_([0,1]^m)=:_2∀∈^dandY_1 ×Y_2 → is the application of linear operators in _1 to functions in _2. Sinceis continuous and multilinear, we may apply part (iv) of <Ref> to reduce our task to the study of _1 and _2. The first part can be done exactly as in <Ref>. The second part can be done similarly to <Ref>. However, we now have to verify not only that the approximations _, converge to the exact solutions _ for each fixed value ofas min_j=1^d_j→∞, but that this convergence holds in some uniform sense over the parameter space.More specifically, let us denote by ^() the mixed difference operator with respect to the parametersand let us assume that ^()_H^β_([0,1]^m)≲∏_j=1^dexp(-κ_j _j)=:∏_j=1^d^(2)_j(_j)∀∈^d.For example, such bounds are proven in <cit.>.If the interpolation operators satisfy as beforeS_ı-H^β([0,1])→ H^α([0,1])≲ 2^-ı(β-α)=:^(1)(ı)∀ ı∈,then the results of <Ref> together with part (iv) of <Ref> shows that ∈_(^(1))_j=1^m∪ (^(2)_j)_j=1^d().If we further assume that the work required by the PDE solver with discretization parametersis bounded by exp(^(2)·) for some ∈^d, then we may assign as total work to the algorithm (,) the value((,)):=2^||_1exp(·),which is the number of required samples, 2^||_1, times the bound on the work per sample, exp(·). 
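The mixed difference operator invoked throughout, in particular in the uncertainty-quantification bound above, is itself a black-box construction. A generic d-dimensional version (our naming) reads:

(* First-order mixed difference of a black-box family of approximations:
   D(A)(i) = sum over subsets e of {1,...,d} of (-1)^|e| A(i - 1_e),
   where A is taken to be 0 whenever some index drops below 0.
   Smolyak's algorithm is the sum of these differences over the chosen index set. *)
let mixed_difference (approx : int array -> float) (i : int array) : float =
  let d = Array.length i in
  let acc = ref 0.0 in
  for mask = 0 to (1 lsl d) - 1 do
    let j = Array.copy i in
    let sign = ref 1.0 in
    let valid = ref true in
    for k = 0 to d - 1 do
      if mask land (1 lsl k) <> 0 then begin
        j.(k) <- j.(k) - 1;
        sign := -. !sign;
        if j.(k) < 0 then valid := false
      end
    done;
    if !valid then acc := !acc +. !sign *. approx j
  done;
  !acc

Decay assumptions on the norm of mixed_difference, of the product form stated above, are exactly what the truncation estimates consume.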
Thus, by <Ref>, Smolyak's algorithm achieves the accuracy_L()-≲ϵwith(_L())≲ϵ^-ρ|logϵ|^r,where ρ:=max{1/(β-α),max{_j/κ_j}_j=1^d} and r≥ 0 as in <Ref>.§ CONCLUSIONWe showed how various existing efficient numerical methods for integration, Monte Carlo simulations, interpolation, the solution of partial differential equations, and uncertainty quantification can be derived from two common underlying principles: decomposition and efficient truncation. The analysis of these methods was divided into proving decay of mixed differences by means of <Ref> and then applying general bounds on exponential sums in form of <Ref>. Besides simplifying and streamlining the analysis of existing methods, we hope that the framework provided in this work encourages novel applications. Finally, we believe that the general version of Smolyak's algorithm presented here may be helpful in designing flexible and reusable software implementations that can be applied to future problems without modification. § EXPONENTIAL SUMSLet _j>0, _̧j>0, and _j>0 for j∈{1,…,}. Then∑_(+)·≤ Lexp(·)(+1)^≤ C(,,)exp(μ L)(L+1)^^*-1+^*,where ρ:=max_j=1^_j/_̧j, μ:=ρ/1+ρ, J:={j∈{1,…,}:_j/_̧j=ρ}, ^*:=|J|, ^*:=∑_j∈ J_j, and (+1)^:=∏_j=1^(ı_j+1)^_j. First, we assume without loss of generality that the dimensions are ordered according to whether they belong to J or J:={1,…,}∖ J. To avoid cluttered notation we then separate dimensions by plus or minus signs in the subscripts; for example, we write=(_J,_J)=:(_+,_-). Next, we may replace the sum by an integral over {(+)·≤ L}. Indeed, by monotonicity we may do so if we replace L by L+|+|_1, but looking at the final result we observe that a shift of L only affects the constant C(,,). Finally, using a change of variables y_j:=(_̧j+_j)x_j and the shorthand :=/(+) (with componentwise division) we obtain∫_(+)·≤ Lexp(·) (+1)^ d≤ C∫_||_1≤ Lexp(·)(+1)^ d=C∫_|_+|_1≤ L exp(_+·_+)(_++1)^_+∫_|_-|_1≤ L-|_+|_1exp(_-·_-)(_-+1)^_- d_- d_+ ≤ C∫_|_+|_1≤ L exp(μ|_+|_1)(_++1)^_+∫_|_-|_1≤ L-|_+|_1exp(μ_-|_-|_1)(_-+1)^_- d_- d_+=(⋆),where the last equality holds by definition of μ=max{_+} and μ_-:=max{_-}. We use the letter C here and in the following to denote quantities that depend only on , andbut may change value from line to line. Using (_ ++1)^_+≤ (|_+|_1+1)^|_+|_1 and (_-+1)^_-≤ (|_-|_1+1)^|_-|_1 and the linear change of variables ↦ (||_1,y_2,…,y_n) in both integrals, we obtain (⋆) ≤ C∫_|_+|_1≤ Lexp(μ|_+|_1)(|_+|_1+1)^|_+|_1∫_|_-|_1≤ L-|_+|_1exp(μ_-|_-|_1)(|_-|_1+1)^|_-|_1 d_- d_+≤ C∫_0^Lexp(μ u)(u+1)^|_+|_1u^|J|-1∫_0^L-uexp(μ_-v)(v+1)^|_-|_1v^|J|-1 dv du≤ C(L+1)^|_+|_1L^|J|-1∫_0^Lexp(μ u)((L-u)+1)^|_-|_1(L-u)^|J|-1∫_0^L-uexp(μ_-v) dv du≤ C(L+1)^|_+|_1+|J|-1∫_0^Lexp(μ u)(L-u+1)^|_-|_1(L-u)^|J|-1exp(μ_-(L-u)) du=C(L+1)^|_+|_1+|J|-1exp(μ L)∫_0^Lexp(-(μ-μ_-)w)(w+1)^|_-|_1w^|J|-1 dw≤ C(L+1)^|_+|_1+|J|-1exp(μ L),where we used supremum bounds for both integrals for the third inequality, the change of variables w:=L-u for the penultimate equality, and the fact that μ>μ_- for the last inequality. Let _j>0, _̧j>0, and _j>0 for j∈{1,…,}. Then∑_(+)·>Lexp(-·)(+1)^≤ C(,,)exp(-ν L)(L+1)^^*-1+^*,where ρ:=max_j=1^_j/_̧j, ν:=1/1+ρ, J:={j∈{1,…,}:_j/_̧j=ρ}, ^*:=|J|, ^*:=∑_j∈ J_j, and (+1)^:=∏_j=1^(ı_j+1)^_j.First, we assume without loss of generality that the dimensions are ordered according to whether they belong to J or J. To avoid cluttered notation we then separate dimensions by plus or minus signs in the subscripts; for example, we write=(_J,_J)=:(_+,_-). Next, we may replace the sum by an integral over {(+)·>L}. 
Indeed, by monotonicity we may do so if we replace L by L-|+|_1, but looking at the final result we observe that a shift of L only affects the constant C(,,). Finally, using a change of variables y_j:=(_̧j+_j)x_j and the shorthand :=/(+) (with componentwise division) we obtain∫_(+)·>Lexp(-·)(+1)^ d≤ C∫_||_1>Lexp(-·)(+1)^ d=C∫_|_+|_1> Lexp(-_+·_+)(_++1)^_+∫_|_-|_1>(L-|_+|_1)^+exp(-_-·_-)(_-+1)^_-d_-d_+≤ C∫_|_+|_1> Lexp(-ν|_+|_1)(_++1)^_+∫_|_-|_1>(L-|_+|_1)^+exp(-ν_-|_-|_1)(_-+1)^_-d_-d_+=:(⋆),where the last equality holds by definition of ν=max{_+} and ν_-:=max{_-}. We use the letter C here and in the following to denote quantities that depend only on , andbut may change value from line to line. Using (_ ++1)^_+≤ (|_+|_1+1)^|_+|_1 and (_-+1)^_-≤ (|_-|_1+1)^|_-|_1 and the linear change of variables ↦ (||_1,y_2,…,y_n) in both integrals, we obtain(⋆)≤ C∫_|_+|_1>0exp(-ν|_+|_1)(|_+|_1+1)^|_+|_1∫_|_-|_1>(L-|_+|_1)^+exp(-ν_-|_-|_1)(|_-|_1+1)^|_-|_1d_-d_+ ≤C∫_0^∞exp(-ν u)(u+1)^|_+|_1u^|J|-1∫_(L-u)^+^∞exp(-ν_-v)(v+1)^|_-|_1v^|J|-1 dv du=C∫_0^Lexp(-ν u)(u+1)^|_+|_1+|J|-1∫_L-u^∞exp(-ν_-v)(v+1)^|_-|_1+|J|-1 dv du + C∫_L^∞exp(-ν u)(u+1)^|_+|_1+|J|-1∫_0^∞exp(-ν_-v)(v+1)^|_-|_1+|J|-1 dv du= :(⋆⋆)+(⋆⋆⋆).To bound (⋆⋆), we estimate the inner integral using the inequality ∫_a^∞exp(-b v)(v+1)^c dv≤ Cexp(-b a)(a+1)^c <cit.>, which is valid for all positive a, b, c:(⋆⋆) ≤ C∫_0^Lexp(-ν u)(u+1)^|_+|_1+|J|-1exp(-ν_-(L-u))(L-u+1)^|_-|_1+|J|-1 du≤ C(L+1)^|_+|_1+|J|-1∫_0^Lexp(-ν(L-w))exp(-ν_-w)(w+1)^|_-|_1+|J|-1 dw=C(L+1)^|_+|_1+|J|-1exp(-ν L)∫_0^Lexp(-(ν_--ν)w)(w+1)^|_-|_1+|J|-1 dw≤ C(L+1)^|_+|_1+|J|-1exp(-ν L),where we used a supremum bound and the change of variables w:=L-u for the second inequality, and the fact that ν_->ν for the last inequality. Finally, to bound (⋆⋆⋆), we observe that the inner integral is independent of L, and bound the outer integral in the same way we previously bounded the inner integral. This shows(⋆⋆⋆)≤ Cexp(-ν L)(L+1)^|_+|_1+|J|-1. | http://arxiv.org/abs/1703.08872v3 | {
"authors": [
"Raul Tempone",
"Soeren Wolfers"
],
"categories": [
"math.NA"
],
"primary_category": "math.NA",
"published": "20170326210424",
"title": "Smolyak's algorithm: A powerful black box for the acceleration of scientific computations"
} |
Normal form of swallowtail and its applications Kentaro Saji ======================================================================================================================================================================================================== We construct a normal form of the swallowtail singularity in ℝ^3, using coordinate transformations on the source and isometries on the target. As an application, we classify configurations of asymptotic curves and characteristic curves near the swallowtail.[2010 Mathematics Subject classification. Primary 53A05; Secondary 58K05, 57R45][Key Words and Phrases. swallowtails, flat approximations, curves on surfaces, Darboux frame, developable surfaces, contour edges]§ INTRODUCTIONWave fronts and frontals are surfaces in 3-space, and they may have singularities. They always have normal directions even along singularities. Recently, there have appeared several articles concerning the differential geometry of wave fronts and frontals <cit.>. Surfaces which have only cuspidal edges and swallowtails as singularities are the generic wave fronts in Euclidean 3-space. Fundamental differential geometric invariants of the cuspidal edge are defined in <cit.>. They are further investigated in <cit.>, where the normal form of the cuspidal edge plays an important role. The normal form of a singular point is a parametrization obtained by coordinate transformations on the source and isometric transformations on the target <cit.>. For the purpose of differential geometric investigation of singularities, it is not only convenient, but also indispensable for studying higher order invariants. Higher order invariants of cuspidal edges are studied in <cit.>, and in <cit.>, the moduli of isometric deformations of the cuspidal edge are determined. In this paper, we give a normal form of the swallowtail, and study relationships to previous investigations of the swallowtail. As an application, we study geometric foliations near the swallowtail.The precise definition of the swallowtail is given as follows: The unit cotangent bundle T^*_1ℝ^3 of ℝ^3 has the canonical contact structure and can be identified with the unit tangent bundle T_1ℝ^3. Let α denote the canonical contact form on it. A map i:M→ T_1ℝ^3 is said to be isotropic if the pull-back i^*α vanishes identically, where M is a 2-manifold. If i is an immersion, then we call the image of π∘ i the wave front set of i, where π:T_1ℝ^3→ℝ^3 is the canonical projection, and we denote it by W(i). Moreover, i is called the Legendrian lift of W(i).With this framework, we define the notion of fronts as follows: A map-germ f:(ℝ^2,0) → (ℝ^3,0) is called a frontal if there exists a unit vector field (called a unit normal of f) ν of ℝ^3 along f such that L=(f,ν):(ℝ^2,0)→ (T_1ℝ^3,0) is an isotropic map under an identification T_1ℝ^3 = ℝ^3 × S^2, where S^2 is the unit sphere in ℝ^3 (cf. <cit.>, see also <cit.>). A frontal f is a front if the above L can be taken to be an immersion. A point q∈ (ℝ^2, 0) is a singular point if f is not an immersion at q. A map f:M→ N between a 2-dimensional manifold M and a 3-dimensional manifold N is called a frontal (respectively, a front) if for any p∈ M, the map-germ of f at p is a frontal (respectively, a front).
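For instance (a standard computation, recorded here for later use), the model maps of the cuspidal edge and of the swallowtail defined next can be checked directly against these definitions:
\[
f(u,v)=(u,\,v^2,\,v^3):\quad f_u=(1,0,0),\quad f_v=(0,\,2v,\,3v^2),\quad f_u\times f_v=v\,(0,\,-3v,\,2),
\]
so that ν=(0,-3v,2)/\sqrt{9v^2+4} is a smooth unit normal field along f with \langle f_u,ν\rangle=\langle f_v,ν\rangle=0. Hence L=(f,ν) is isotropic and f is a frontal, singular exactly along \{v=0\}; since ν_v(u,0)=(0,-3/2,0)\neq(0,0,0) while f_v(u,0)=(0,0,0), the lift L is an immersion and f is a front. The same computation for
\[
g(u,v)=(u,\,4v^3+2uv,\,3v^4+uv^2):\quad g_v=(12v^2+2u)\,(0,1,v),\quad g_u\times g_v=2(6v^2+u)\,(v^2,\,-v,\,1),
\]
gives the smooth unit normal ν=(v^2,-v,1)/\sqrt{v^4+v^2+1} and the parabola \{u=-6v^2\} as singular set, so g is again a front.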
A singular point p of a map f is called acuspidal edge if the map-germ f at p is 𝒜-equivalent to (u,v)↦(u,v^2,v^3) at 0, and a singular point p is called a swallowtail if the map-germ f at p is 𝒜-equivalent to (u,v)↦(u,4v^3+2uv,3v^4+uv^2) at 0,(Two map-germs f_1,f_2:(^n,0)→(^m,0) are 𝒜- equivalent if there exist diffeomorphisms S:(^n,0)→(^n,0) and T:(^m,0)→(^m,0) such that f_2∘ S=T∘ f_1.) Therefore if the singular point p of f is a swallowtail, then f at p is a front. § SINGULAR POINTS OF K-TH KIND Let f:(^2,0) → (^3,0) be a frontal and ν its unit normal. Let λ be a function which is a non-zero functional multiplication of the function(f_u,f_v,ν)for some coordinate system (u,v), and( )_u=∂/∂ u, ( )_v=∂/∂ v. We call such function singularity identifier. A singular point p of f is called non-degenerate if dλ(p)0. Let 0 be a non-degenerate singular point of f. Then the set of singular points S(f) is a regular curve, we take a parameterization γ(t) (γ(0)=0) of it. We set γ̂=f∘γ and call γ̂singular locus. One can show that there exists a vector field η such that if p∈ S(f), thendf_p=η_p.We call η the null vector field. On S(f), η can be parameterized by the parameter t of γ. We denote by η(t) the null vector field along γ. We set ϕ(t)=(dγdt(t),η(t)). A non-degenerate singular point 0 isthe first kind if ϕ(0)0. A non-degenerate singular point 0 isthe k-th kind (k≥2) ifdϕdt(0)=⋯ = d^k-2ϕdt^k-2(0)=0,d^k-1ϕdt^k-1(0)0. The definition does not depend on the choice of the parameterization of γ and choice of η. We remark that if f is a front, then the singular point of the first kind is the cuspidal edge, and the singular point of the second kind is the swallowtail <cit.>. We can rephrase the definition of the k-th kind singularity as follows. Let 0 be a non-degenerate singular point of f. Then there exists a vector field η̃ such that if p∈ S(f) then df_p=η_p. We call η̃ the extended null vector field.Let 0 be a non-degenerate singularpoint 0 of a frontal f:(^2,0)→(^3,0), and let λ be a singularity identifier. Then the followings are equivalent. *0 is a singular point of the k-th kind,*ηλ=⋯=η^k-1λ(0)=0, η^kλ(0)0, where η is a null vector field, and η^i stands for the i times directional derivative by η. Firstly we show that the condition <ref> does not depend on the choices of η and λ. It is obvious that for the choice ofλ, we shoe it for the choice of η̃. We show the following lemma.Let λ:(^2,0)→(,0) bea function satisfying dλ(0)0, and let η be a vector field. Let η be another vector field satisfyingη=hη̃,where h is a function h(0)0, on λ^-1(0). Then, ifη̃λ=⋯=η̃^k-1λ(0)=0,η̃^kλ(0)0(k≥1)hold, thenηλ=⋯=η^k-1λ(0)=0,η^kλ(0)0(k≥1)hold. Without loss of generality, we can assume the coordinate system (u,v) satisfies η̃=∂ v. By ηλ=0 and dλ(0)0, we have λ_u0. Thus by the implicit function theorem, there exists a function a(v) such thatλ(a(v),v)=0.Thus λ is proportional to a(v)-u, and without loss of generality, we can assume λ=a(v)-u. By the assumption (<ref>), a(0)=⋯=a^(k-1)(0)=0, a^(k)(0)0 holds. We show (<ref>).Since (<ref>) does not depend on the non-zero functional multiplicationof η, we may assumeη=b(u,v)∂ u+∂ v.We show thatη^lλ = b(u,v)h_0(u,v) + ∑_j=1^l-1∂^j b∂ v^j(u,v)h_j(u,v) +a^(l) (h_0,…,h_l-1 are functions),by the induction. When l=1, since ηλ=bλ_u+λ_v =-b+a', (<ref>) is true. We assume that (<ref>) for l=i. Sinceη^i+1λ = η(η^iλ)= b( bh_0 + b_vh_1 + ⋯ + b_v^i-1h_i-1+a^(i))_u + b_vh_0+b(h_0)_v + b_vvh_1+b_v(h_1)_v + ⋯ + b_v^ih_i-1 b_v^i-1(h_i-1)_v+a^(i+1),(<ref>) is true for l=i+1. 
We show the case k≥2, since it is clear when k=1. Since 0 is non-degenerate, we can take a coordinate system (u,v) satisfying S(f)={v=0}. By the non-degeneracy, we may assume λ=v. Furthermore, we can take η(u)=∂_u+(u)∂_v as a null vector field. Then ϕ(u)=(u), <ref> is equivalent to '(0)=⋯=^(k-2)(0)=0 and ^(k-1)(0)0. On the other hand, since λ=v, it holds that ηλ=(u). Then this depends only on u, η^2λ='(u) holds, and η^lλ=^(l-1)(u) holds. Thus <ref> is equivalent to '(0)=⋯=^(k-2)(0)=0 and ^(k-1)(0)0. Hence we have the equivalency of <ref> and <ref>. § NORMAL FORM OF SINGULAR POINT OF THE SECOND KINDIn this section, we construct a normal form of the singular point of the second kind which includes swallowtail. Furthermore, we study the relationships to the known invariants of swallowtail. Throughout this section, let f:(^2,0) → (^3,0) be a frontal and ν its unit normal, and let 0 be a singular point of the second kind.§.§ Normal form of singular point of the second kindWe take coordinate transformation on the source and isometric transformation on the target, we detect the normal form of singular points of second kind. By the non-degeneracy, df_0=1 follows, and by rotating coordinate system on the target, we may assume that f_u(0,0)=(a,0,0), a>0. By changing coordinate system on the source, we may assume f has the formf(u,v)=(u,f_2(u,v),f_3(u,v)), f_u(0,0)=(1,0,0).Since the Jacobian matrix of f is1 0(f_2)_u(u,v) (f_2)_v(u,v)(f_3)_u(u,v) (f_3)_v(u,v),S(f)={(f_2)_v=(f_3)_v=0}. Thus we can take the null vector field η=∂ v. Since 0 is non-degenerate, S(f) can be parametrized by (s(v),v) near 0. Moreover, 0 is a singular point of the second kind, s(0)=s'(0)=0, and s”(0)0 hold. We may assume s”(0)>0 bychanging (u,v)↦(-u,-v) if necessary. Thus there exists a function s̃ such thats(v)=v^2s̃(v)2,s̃(0)>0.Setting t(v)=√(s(v)), we haves(v)=(vt(v))^22, t(0)0.We take the diffeomorphism ϕ on the source defined byϕ(u,v)=(u,vt(v)),and consider (ũ,ṽ)=ϕ(u,v) as the new coordinate system. Sinceϕ(s(v),v) = (s(v),vt(v)) = ((vt(v))^22,vt(v)) = (ṽ^2/2,ṽ),S(f)={(ṽ^2/2,ṽ)} holds. Furthermore, the first component of f(ũ,ṽ) is ũ, we see that ∂ṽ is a null vector field. Now we may assume that f has the form * f(u,v)=(u,f_2(u,v),f_3(u,v)),* ∂ v is a null vector field,* S(f)={(v^2/2,v)}.Since (f_2)_v and (f_3)_v vanish on S(f)={u=v^2/2}, there exist functions g_1,h_1 such that(f_2)_v(u,v)=(v^2/2-u)g_1(u,v), (f_3)_v(u,v)=(v^2/2-u)h_1(u,v).By the non-degeneracy,(g_1(0,0),h_1(0,0))(0,0). Sincef_2(u,v)=∫(v^2/2-u)g_1(u,v) dv, f_3(u,v)=∫(v^2/2-u)h_1(u,v) dv,taking the partial integration,[ f_2(u,v)=∫(v^2/2-u)g_1(u,v) dv; =(v^2/2-u)g_2(u,v)-∫ v g_2(u,v) dv; = (v^2/2-u)g_2(u,v)-v g_3(u,v)+∫ g_3(u,v) dv; =(v^2/2-u)g_2(u,v)-v g_3(u,v)+g_4(u,v) ]holds, whereg_i(u,v)=∂ g_i+1∂ v(u,v).Similarly, we havef_3(u,v) =(v^2/2-u)h_2(u,v)-v h_3(u,v)+h_4(u,v), h_i(u,v)=∂ h_i+1∂ v(u,v).Since (g_1(0,0),h_1(0,0))(0,0),ν=ν_2|ν_2|,(ν_2 = (h_1f_2u-g_1f_3u,-h_1,g_1))gives a unit normal vector for f, because of f_u(u,v) =(1,(f_2)_u,(f_3)_u)=(1, -g_2+(v^2/2-u)g_2u-vg_3u+g_4u, -h_2+(v^2/2-u)h_2u-vh_3u+h_4u), f_v(u,v)= (0,(v^2/2-u)g_1,(v^2/2-u)h_1).Since f_v(0,0)=0, f is a front if and only if ν_v(0,0)0, and it is equivalent to that ν_2 and ν_2v are linearly independent. 
Sinceν_2v =(h_0f_2u+h_1f_2uv-g_0f_3u-g_1f_3uv,-h_0,g_0)= (h_0f_2u+h_1(-g_1(u,v)+(v^2/2-u)g_1u(u,v))-g_0f_3u-g_1(-h_1(u,v)+(v^2/2-u)h_1u(u,v)),-h_0,g_0 )=(h_0f_2u+(v^2/2-u)h_1g_1u(u,v) -g_0f_3u-(v^2/2-u)g_1h_1u(u,v),-h_0,g_0)andh_1f_2u-g_1f_3u -h_1 g_1h_0f_2u+(v^2/2-u)h_1g_1u(u,v) -g_0f_3u-(v^2/2-u)g_1h_1u(u,v) -h_0 g_0(0)=0 -h_1 g_1(v^2/2-u)h_1g_1u(u,v) -(v^2/2-u)g_1h_1u(u,v) -h_0 g_0(0) =0 -h_1 g_10 -h_0 g_0(0),it is equivalent tog_1(0) g_0(0)h_1(0) h_0(0) 0.By rotating coordinate system on the target around the axis which contains (1,0,0), we may assumeν_2/|ν_2| = (0,0,1), namely,g_1(0,0)=g_4vvv(0,0)>0 and h_1(0,0)=h_4vvv(0,0)=0. Moreover, by f(0,0)=(0,g_4(0,0),h_4(0,0)), we have g_4(0,0)=0, h_4(0,0)=0, and by f_u(0,0)=(1,0,0) and (<ref>),f_2u(0,0)= -g_2(0,0)+g_4u(0,0)=-g_4vv(0,0)+g_4u(0,0)=0,f_3u(0,0)= -h_2(0,0)+h_4u(0,0)=-h_4vv(0,0)+h_4u(0,0)=0.Summarizing up the above arguments, we have the following proposition.For any function g and h satisfying g_vvv(0,0)>0, g(0,0)=h(0,0)=0, g_u(0,0)-g_vv(0,0)=0, h_u(0,0)-h_vv(0,0)=0 and h_vvv(0,0)=0,[ f(u,v)= ( u, (v^22-u) g_vv(u,v)-vg_v(u,v)+g(u,v),;(v^22-u) h_vv(u,v)-vh_v(u,v)+h(u,v)) ]is a frontal satisfying that 0 is a singular point of the second kind, and f_u(0,0)=(1,0,0), η=∂_v, S(f)={v^2/2-u=0}. Moreover, ifh_vvvv(0,0)0,then 0 is a swallowtail. Conversely, for anysingular point of the second kind p of a frontal f:U→^3, there exists a coordinate system (u,v) on U,and an orientation preserving isometry Φ on ^3 such that Φ∘ f(u,v) can be written in the form (<ref>). Conditions g(0,0)=h(0,0)=0, g_u(0,0)-g_vv(0,0)=0, h_u(0,0)-h_vv(0,0)=0, g_vvv(0,0)>0, h_vvv(0,0)=0 are just for the reducing coefficients. If one want to obtaina second kind singular point (respectively swallowtail), taking g and h satisfying that(g_vvv(0,0),h_vvv(0,0))(0,0),(respectively,g_vvv(0,0) h_vvv(0,0)g_vvvv(0,0) h_vvvv(0,0)0),and forming (<ref>) is enough.We remark thatanother different normal form of swallowtail is obtained in <cit.> by a different view point.Let us setg=v^3/6,h=v^k/k!,(k=4,5,6),and consider (<ref>). Then the figure of f can be drawn as in Figure <ref>.By the above construction, we can obtain the normal forms forsingular points of k-th kind by the same mannar. For functions g,h,[ (u, (v^kk!-u)g^(k)+ ∑_i=1^k(-1)^iv^k-i(k-i)!g^(k-i)(u,v),;(v^kk!-u)h^(k)+ ∑_i=1^k(-1)^iv^k-i(k-i)!h^(k-i)(u,v) ) ]at 0 is a k-th kind of singular pointif (g^(k),h^(k))(0,0)(0,0). Moreover, ifg^(k+1)(0) g^(k+2)(0) h^(k+1)(0) g^(k+2)(0) 0,then it is a front. Here, g'=∂ g/∂ v=g^(1), and g^(i)=∂ g^(i-1)/∂ v, for example. Let us set g=v^4/4! and h=u^2/2+v^5/5!.Then the surface obtained by (<ref>) is (u,v)↦ (u, -u v + v^4/24, (-15 u^2 + 15 u v^2 - v^5)/30), and itcan be drawn in Figure <ref>. This singularity is called cuspidal butterfly. §.§ Normal form that the singular set is the u-axis The singular set S(f) of f in the form (<ref>) is a parabola and the null vector field on S(f) is constantly ∂_v. On the other hand, sometimes we want to have a form satisfying that the singular set is the u-axis, although the null vector field is not constant. For that purpose, take f as in (<ref>), and set f̃(ũ,ṽ)= f(ũ + ṽ^2/2, ṽ). ThenF(x,y)=-f̃(-y,x) is a frontal, and 0 is a singular point of the second kind satisfying η=∂_u+u∂_v and S(f)={v=0}. §.§ Forms in the low degreesIn the Proposition <ref>, we have the normal forms in the low degrees in the following manner. 
In the form (<ref>), we setg(u,v)=g_5(u,v)+g_6(u,v), h(u,v)=h_5(u,v)+h_6(u,v),where g_6,h_6 satisfy j^5g_6(0,0)=j^5h_6(0,0)=0, andg_5,h_5 areg_5(u,v) = ∑_i+j=1^5a_iji!j!u^iv^j, h_5(u,v) = ∑_i+j=1^5b_iji!j!u^iv^j,wherea_ij,b_ij∈ and a_02=a_10, b_02=b_10, a_030, b_03=0. Thenf has the form[ (u, -2 a_12 + a_202 u^2+ -3 a_22 + a_306 u^3+ -4 a_32 + a_4024 u^4 - a_03 u v - a_13 u^2 v - a_232 u^3 v; - a_042 u v^2- a_142 u^2 v^2 + a_036 v^3+ -a_05 + a_136 u v^3+ a_048 v^4+G(u,v),;-2 b_12 + b_202 u^2+-3 b_22 + b_306 u^3+ -4 b_32 + b_4024 u^4 - b_13 u^2 v- b_232 u^3 v;- b_042 u v^2- b_142 u^2 v^2 + -b_05 + b_136 u v^3+ b_048 v^4+H(u,v) ), ]where, G,H are functions their 4-jet vanishes: j^4G(0,0)=j^4H(0,0)=0, and[G(u,v)=12(g_5)_vv(u,v)v^2 +(v^22-u)(g_6)_vv(u,v) -v(g_6)_v(u,v)+g_6(u,v); H(u,v)=12(h_5)_vv(u,v)v^2 +(v^22-u)(h_6)_vv(u,v) -v(h_6)_v(u,v)+h_6(u,v). ] §.§ InvariantsIn <cit.>, several invariants ofsingular points of the second kind are introduced. We take a parametrization γ(t) of S(f) and assume γ(0)=0. We set γ̂=f∘γ as above. The limiting normal curvature κ_ν of f at 0 is defined by κ_ν(0)= lim_t→ 0γ̂”(t)ν(γ(t))|γ̂(t)|^2with respect to the unit normal vector ν (cf. <cit.>), where γ̂ is the singular locus. The normalized cuspidal curvature μ_c is defined by μ_c=.-|f_u|^3f_uvν_v|f_uv× f_u|^2|_(u,v)=(0,0)(cf. <cit.>), where (u,v) is a coordinate system satisfying df_0=∂ v. The limiting normal curvature and normalized cuspidal curvature relate the boundedness of the Gaussian and mean curvature near singular points of the second kind <cit.>. The limiting singular curvature τ_s is defined by the limit of singular curvature <cit.> and it is computed byτ_s= .(γ̂”,γ̂”',ν(γ))|γ̂”|^5/2|_t=0(cf. <cit.>). The limiting singular curvature measures the wideness of the cusp of the singular points of the second kind.We assume that f is a singular points of the second kind given in the form (<ref>). Thenκ_ν(u) = -2 b_12 + b_20,μ_c = b_04a_03^2,τ_s=2a_03,whereν is the unit normal vector satisfying ν(0,0)=(0,0,1). § GEOMETRIC FOLIATIONS NEAR SWALLOWTAILIn this section, as an application of the normal form of swallowtail, we study geometric foliations near swallowtail defined by binary differential equations.§.§ Binary differential equations Let U⊂^2 be an open set and (u,v) a coordinate system on U. Consider a 2-tensorω=p du^2+2q dudv+r dv^2where p,q,r are functions on U. If a vector field X=x_1∂_u+x_2∂_vsatisfies ω(X,X)=px_1^2+2qx_1x_2+rx_2^2=0,thendirection of X is called the direction of ω=0, and the integral curves of X is called the solutions ofω=0. We call ω=0 a binary differential equation (BDE). We set δ=q^2-pr. Thenω=0 defines two linearly independent directions on {δ>0}⊂ U, and it defines one direction on{δ=0}, and it defines no direction on{δ<0}. Two BDEs ω_1=0, ω_2=0 are equivalent if there exist diffeomorphism Φ:(^2,0)→(^2,0), and a function ρ:(^2,0)→ (ρ(0)0) such thatρ Φ^*ω_1=ω_2. We identify two BDEs if they are equivalent. If a 2-tensor ω as in (<ref>) satisfies δ(0)>0, then the BDE ω=0 is equivalent todx^2-dy^2=0. We consider here the case r(0)0, p_u(0)=0, p_v(0)0 following <cit.>,since only this case is needed for our consideration. See <cit.> for general study of BDEs. Dividing ω by r, and putting p̃=p/r, q̃=q/r, we consider [ ω̃ = p̃ du^2+2q̃ dudv+dv^2,; p̃ =p_01v +p_202u^2 +p_11uv +p_022v^2+O(3), p_010; q̃ =q_10u+q_01v +q_202u^2 +q_11uv +q_022v^2+O(3),;]where O(r) stands for the terms whose degrees are greater than or equal to r. We may assume p_01>0 without loss of generality. 
Considering the coordinate changeu= -√(p_01) U, v=V - q_102p_01 U^2+ q_01√(p_01) UV,and dividing by the coefficient of dv^2, we seeω̃=0 is equivalent to (A/C) dU^2+2(B/C) dUdV+dV^2=0, whereA = V + p_20- 2q_10^2-p_01q_102 p_01^2U^2 - p_11-2q_10q_01-p_01q_01p_01√(p_01) U V + p_02-2 q_01^22 p_01V^2 +O(3)2B =q_10q_01-q_20p_01√(p_01)U^2 - 2q_01^2-q_11p_01U V - q_02√(p_01)V^2 +O(3)C = 1+2 q_01 U√(p_01)+q_01^2 U^2p_01 +O(3)and it is equal to A' dU^2+2B' dUdV+dV^2=0, whereA' = V + p_20- 2q_10^2-q_10p_01p_01^2U^22 - p_11-2 q_01 q_10+q_01 p_01p_01√(p_01) U V + p_02-2 q_01^2p_01V^22 +O(3)2B' = q_01 q_10-q_20p_01√(p_01)U^2 - 2q_01^2-q_11p_01U V - q_02√(p_01)V^2 +O(3)Now we consider a BDEω=p du^2+2q dudv +dv^2=0, wherep = v+p_202u^2 +p_11uv+p_022v^2+O(3),q=q_202u^2 +q_11uv+q_022v^2+O(3), p_200.Consider a coordinate transformationu= U+x_202 U^2 +x_11 UV, v=V +x_306 U^3+ x_212 U^2 V+ x_122 UV^2,wherex_20=-p_112,x_11=-p_024,y_30=-q_20,y_21=-4 q_11+ p_024,y_12=-p_02,and dividing by the coefficient of dv^2, we see ω=0 is equivalent to (P/R) dU^2+2(O(3)/R) dUdV+dV^2=0, whereP=V+p_202U^2+O(3), R= 1+(p_02-4 q_11) U^2/4 -2 q_02 UV+O(3)and it is equal to k=3 of(V+p_202U^2+O(k))dU^2 + 2O(k) dUdV+dV^2=0.It known thatfor any r≥3, the BDE (<ref>) is equivalent to k=r of (<ref>) (<cit.> see also <cit.>). We setA(ω̃)= p_20- 2q_10^2-p_01q_10p_01^2for a BDE ω̃=0 of the form(<ref>). Summarizing up the above arguments, we have the following fact.For any r≥3, the BDE ω̃ of the form (<ref>) is equivalent to(u+ A(ω̃)u^22 +O(r))du^2 + 2O(r) dudv+dv^2=0. On the other hand, the configuration of the solutions of the BDEω_l =(v+lu^2/2) du^2+dv^2=0iscalled folded saddle if l<0, called folded node if 0<l<1/8, andcalled folded focus if l>1/8, and they are drawnas in Figure <ref>.§.§ Geometric foliations near swallowtailHere we consider the following three 2-tensors.[ω_lc = (F N - G M) du^2+ (E N - G L) dudv+ (E N - F L) dv^2,;ω_as =L du^2+ 2M dudv+ N dv^2,;ω_ch = (L (G L - E N) + 2 M (EM-F L)) du^2; + 2(M (G L + E N) - 2 FLN) dudv;+(N (EN-G L)+2 M (G M - F N) ) dv^2. ]The configuration of the solutions of ω_lc is called the lines of curvature, and that of ω_as is called the asymptotic curves. Sinceω_ch=0can be deformed to[ (NH-GK) dv^2+2(MH-FK) dudv+(LH-EK) du^2=0;⇔ N dv^2+2M dudv+L du^2G dv^2+2F dudv+E du^2=KH(= 2κ_1^-1+κ_2^-1), ]along the solution curves of ω_ch, its normal curvature is equal to the harmonic mean of principal curvatures, where K,H are the Gaussian and mean curvatures respectively. The discriminant of ω_ch=0 is a positive multiplication of K. Thus the solutions of ω_ch=0 lie in the region of positive Gaussian curvature. The the solutions of ω_ch=0 is called characteristic curves (see <cit.>). We consider three foliations of (<ref>) near swallowtail. Configurations of these foliations near singular points are intensively studied. See <cit.>, for example. Let ν_2 be a normal vector to f where we do not assume |ν_2|=1, and setL_2=f_uuν_2,M_2=f_uvν_2,N_2=f_vvν_2. One can easily see that all BDEs of (<ref>) are equivalent to that of changing L,M,N to L_2,M_2,N_2.We take the coordinate system asin subsection <ref>. Then the singular set is {v=0} and the null vector field on the u-axis is ∂_u+u∂_v. Thus F_v(u,0)=0 for any u. Hence there exists a vector valued function ϕ such that F_u(u,v)+uF_v(u,v)=vϕ(u,v). 
We setẼ_2=ϕϕ, F̃_2=ϕf_v, G̃_2=f_vf_v,andL̃_2=-ϕ(ν_2)_u, M̃_2=-ϕ(ν_2)_v, Ñ_2=-f_v(ν_2)_v.Thenω_lc =v( (F̃Ñ_2 -G̃M̃_2 ) du^2 + ( -G̃L̃_2 +(G̃M̃_2 -2 F̃Ñ_2) u +ẼÑ_2 v )dudv + (G̃L̃_2 u -F̃L̃_2 v +F̃Ñ_2 u^2 +(-F̃M̃_2 -ẼÑ_2) u v +ẼM̃_2 v^2 )dv^2), ω_as = (L̃_2v +Ñ_2 u^2 -M̃_2 uv )du^2+ 2(-Ñ_2 u+M̃_2 v)dudv + Ñ_2dv^2, ω_ch =v( (G̃L̃_2^2 v -G̃L̃_2 Ñ_2 u^2 +4 F̃L̃_2 Ñ_2 u v -(2 F̃L̃_2 M̃_2 +ẼL̃_2 Ñ_2) v^2-G̃M̃_2 Ñ_2 u^3 +(G̃M̃_2^2 +2 F̃M̃_2 Ñ_2 +ẼÑ_2^2) u^2 v-(2 F̃M̃_2^2 +3 ẼM̃_2 Ñ_2) u v^2 +2 ẼM̃_2^2 v^3 ) du^2 + 2(G̃L̃_2 Ñ_2 u +(G̃L̃_2 M̃_2 -2 F̃L̃_2 Ñ_2 )v +G̃M̃_2 Ñ_2 u^2 -(G̃M̃_2^2 +ẼÑ_2^2) u v +ẼM̃_2 Ñ_2 v^2 )dudv + ( -G̃L̃_2 Ñ_2 -G̃M̃_2 Ñ_2 u +(2 G̃M̃_2^2-2 F̃M̃_2 Ñ_2+ẼÑ_2^2 )v )dv^2).We factor out it from ω_lc and ω_ch and setω̃_lc=ω_lc/v,ω̃_as=ω_as,ω̃_ch=ω_ch/v.We consider the solutions ofω̃_lc, ω̃_as and ω̃_ch instead ofω_lc, ω_as and ω_ch. Let us consider ω̃_lc=0. Then the discriminant δ of ω̃_lc=0 satisfies δ(0)>0. It is known that such BDE is equivalent to dx^2-dy^2=0, and its configuration is a pair of transverse smooth foliations. Existence of lines of curvature coordinate system near swallowtail is shown by <cit.>. Let us consider ω̃_as=0 and ω̃_ch=0. We set ω̃_as= p_as du^2+2q_as dudv+r_as dv^2 and ω̃_ch= p_ch du^2+2q_ch dudv+r_ch dv^2. We assume that κ_ν(0)=-2 b_12 + b_200. Then r_as and r_ch does not vanish at 0. Thusω̃_as (respectively, ω̃_ch) is equivalent toω̅_as=p_asr_as du^2 +2q_asr_as dudv+dv^2( respectively, ω̅_ch=p_chr_ch du^2 +2q_chr_ch dudv+dv^2 ).We see thatp_asr_as = -b_042 b_12-b_20v+u^2 +*u v+*v^2+O(3),q_asr_as = -u+*v+O(2)andp_asr_as = b_042 b_12-b_20v+u^2 +*u v+*v^2+O(3),q_asr_as = -u+*v+O(2).By (<ref>), we have A(ω̃_as) = -b_042 b_12-b_20 = -μ_cτ_s4κ_ν( respectively,A(ω̃_ch) = b_042 b_12-b_20 = μ_cτ_s4κ_ν).Thus we know the configuration of the solutions of ω̃_as is folded saddle ifl=-μ_cτ_s/(4κ_ν) <0, folded node if0<l<1/8, and folded focus if1/8<l. The same holds for ω̃_ch by setting l=-μ_cτ_s/(4κ_ν)<0.We can draw models of ω̃_as and ω̃_ch at swallowtails as in Figure <ref>. Since swallowtails appear as points on fronts, we would like to say that the generic configuration of ω̃_as and ω̃_ch are the these types. We note that since ω̃_as|_v=0 =Ñ_2(u^2 du^2-2u dudv+dv^2), ω̃_ch|_v=0 =-(ÑL̃_2Ñ_2(u^2 du^2-2u dudv+dv^2), the direction of ω̃_as and ω̃_ch are not the direction of the null vector field, the solutions of them on S(f) forms 3/2-cusps on the image of f.99 AGV V. I. Arnold, S. M. Gusein-Zade and A. N. Varchenko, Singularities of differentiable maps, Vol. 1, Monographs in Mathematics 82, Birkhäuser, Boston, 1985. bt J. W. Bruce and F. Tari, Implicit differential equations from the singularity theory viewpoint, Singularities and differential equations (Warsaw, 1993), 23–38,Banach Center Publ., 33, Polish Acad. Sci. Inst. Math., Warsaw, 1996. btnl J. W. Bruce and F. Tari, On binary differential equations, Nonlinearity 8 (1995), no. 2, 255–271.bt J. W. Bruce and F. Tari, Implicit differential equations from the singularity theory viewpoint, Singularities and differential equations (Warsaw, 1993), 23–38,Banach Center Publ., 33, Polish Acad. Sci. Inst. Math., Warsaw, 1996. bw J. W. Bruce and J. West, Bruce, J. W.; West, J. M. Functions on a crosscap, Math. Proc. Cambridge Philos. Soc. 123 (1998), no. 1, 19–39. dara L. Dara,Singularités génériquesdes équations differentielles multiformes, Bol. Soc. Brasil Math. 6 (1975), 95–128.d A. A. 
Davydov, Normal forms of differential equations unresolved with respect to derivatives in a neighbourhood of its singular point, Functional Anal. Appl. 19 (1985), 1–10. d2 A. A. Davydov, Qualitative Theory of Control Systems, Translations of Mathematical Monographs 142, American Mathematical Society, Providence, R.I., Moscow 1994. e L. P. Eisenhart, A treatise on the differential geometry of curves and surfaces, Dover Publications, Inc., New York 1960. fsuy S. Fujimori, K. Saji, M. Umehara and K. Yamada, Singularities of maximal surfaces, Math. Z. 259 (2008), 827–848. fukui T. Fukui, Local differential geometry of cuspidal edge and swallowtail, preprint. ggs R. Garcia, C. Gutierrez and J. Sotomayor, Lines of principal curvature around umbilics and Whitney umbrellas, Tohoku Math. J. 52 (2000), 163–172. gs R. Garcia and J. Sotomayor, Harmonic mean curvature lines on surfaces immersed in ℝ^3, Bull. Braz. Math. Soc. (N.S.) 34 (2003), no. 2, 303–331. hhnsuy M. Hasegawa, A. Honda, K. Naokawa, K. Saji, M. Umehara and K. Yamada, Intrinsic properties of singularities of surfaces, Internat. J. Math. 26, 4 (2015), 1540008 (34 pages). hhnuy M. Hasegawa, A. Honda, K. Naokawa, M. Umehara and K. Yamada, Intrinsic invariants of cross caps, Selecta Math. 20, 3 (2014), 769–785. hnuy A. Honda, K. Naokawa, M. Umehara and K. Yamada, Isometric realization of cross caps as formal power series and its applications, preprint, arXiv:1601.06265. IO S. Izumiya and S. Otani, Flat approximations of surfaces along curves, Demonstr. Math. 48 (2015), no. 2, 217–241. ist S. Izumiya, K. Saji and N. Takeichi, Flat surfaces along cuspidal edges, preprint. krsuy M. Kokubu, W. Rossman, K. Saji, M. Umehara and K. Yamada, Singularities of flat fronts in hyperbolic 3-space, Pacific J. Math. 221 (2005), 303–351. MS L. F. Martins and K. Saji, Geometric invariants of cuspidal edges, Canad. J. Math. 68 (2016), 445–462. MSUY L. F. Martins, K. Saji, M. Umehara and K. Yamada, Behavior of Gaussian curvature near non-degenerate singular points on wave fronts, Geometry and topology of manifolds, 247–281, Springer Proc. Math. Stat., 154, Springer, Tokyo, 2016. mu S. Murata and M. Umehara, Flat surfaces with singularities in Euclidean 3-space, J. Differential Geom. 82 (2009), 279–316. nuy K. Naokawa, M. Umehara and K. Yamada, Isometric deformations of cuspidal edges, Tohoku Math. J. (2) 68, 1 (2016), 73–90. OTflat R. Oset Sinha and F. Tari, On the flat geometry of the cuspidal edge, preprint, arXiv:1610.08702. SUY K. Saji, M. Umehara and K. Yamada, The geometry of fronts, Ann. of Math. 169 (2009), 491–529. ttohoku F. Tari, On pairs of geometric foliations on a cross-cap, Tohoku Math. J. (2) 59 (2007), no. 2, 233–258. t F. Tari, Pairs of foliations on surfaces, Real and complex singularities, 305–337, London Math. Soc. Lecture Note Ser., 380, Cambridge Univ. Press, Cambridge, 2010. WE J. West, The differential geometry of the cross-cap, Ph.D. thesis, University of Liverpool (1995). Department of Mathematics, Kobe University, Rokko 1-1, Nada, Kobe 657-8501, Japan. saji@math.kobe-u.ac.jp | http://arxiv.org/abs/1703.08904v1 | {
"authors": [
"Kentaro Saji"
],
"categories": [
"math.GT",
"math.DG",
"Primary 53A05, Secondary 58K05, 57R45"
],
"primary_category": "math.GT",
"published": "20170327023130",
"title": "Normal form of swallowtail and its applications"
} |
The framework Pure Type System (PTS) offers a simple and general approach to designing and formalizing type systems. However, in the presence of dependent types, there often exist certain acute problems that make it difficult for PTS to directly accommodate many common realistic programming features such as general recursion, recursive types, effects (e.g., exceptions, references, input/output), etc. In this paper, Applied Type System (ATS) is presented as a framework for designing and formalizing type systems in support of practical programming with advanced types (including dependent types). In particular, it is demonstrated that ATS can readily accommodate a paradigm referred to as programming with theorem-proving (PwTP) in which programs and proofs are constructed in a syntactically intertwined manner, yielding a practical approach to internalizing constraint-solving needed during type-checking. The key salient feature of ATS lies in a complete separation between statics, where types are formed and reasoned about, and dynamics, where programs are constructed and evaluated. With this separation, it is no longer possible for a program to occur in a type as is otherwise allowed in PTS. The paper contains not only a formal development of ATS but also some examples taken from ATS, a programming language with a type system rooted in ATS, in support of employing ATS as a framework to formulate advanced type systems for practical programming.§ INTRODUCTION A primary motivation for developing Applied Type System (ATS) stems from an earlier attempt to support a restricted form of dependent types in practical programming <cit.>. While there is already a framework Pure Type System (PTS) <cit.> that offers a simple and general approach to designing and formalizing type systems, it is well understood that there often exist some acute problems (in the presence of dependent types) making it difficult for PTS to accommodate many common realistic programming features. In particular, various studies reported in the literature indicate that great efforts are often required in order to maintain a style of pure reasoning as is advocated in PTS when features such as general recursion <cit.>, recursive types <cit.>, effects <cit.>, exceptions <cit.> and input/output are present. The framework ATS is formulated to allow for designing and formalizing type systems that can readily support common realistic programming features. The formulation of ATS given in this paper is primarily based on the work reported in two previous papers <cit.>, but there are some fundamental changes in terms of the handling of proofs and proof construction. In particular, the requirement is dropped that a proof in ATS must be represented as a normalizing lambda-term <cit.>. In contrast to PTS, the key salient feature of ATS lies in a complete separation between statics, where types are formed and reasoned about, and dynamics, where programs are constructed and evaluated. This separation, with its origin in a previous study on a restricted form of dependent types developed in Dependent ML (DML) <cit.>, makes it straightforward to support dependent types in the presence of effects such as references and exceptions.
Also, with the introduction of two new (and thus somewhat unfamiliar) forms of types: guarded types and asserting types, ATS is able to capture program invariants in a manner that is similar to the use of pre-conditions and post-conditions <cit.>. By now, studies have shown amply and convincingly that a variety of traditional programming paradigms (e.g., functional programming, object-oriented programming, meta-programming, modular programming) can be directly supported in ATS without relying on ad hoc extensions, attesting to the expressiveness of ATS. In this paper, the primary focus of study is set on a novel programming paradigm referred to as programming with theorem-proving (PwTP) and its support in ATS. In particular, a type-theoretical foundation for PwTP is to be formally established and its correctness proven.The notion of type equality plays a pivotal rôle in type system design. However, the importance of this rôle is often less evident in commonly studied type systems. For instance, in the simply typed λ-calculus, two types are considered equal if and only if they are syntactically the same; in the second-order polymorphic λ-calculus (λ_2) <cit.> and System F <cit.>, two types are considered equal if and only if they are α-equivalent; in the higher-order polymorphic λ-calculus (λ_ω), two types are considered equal if and only if they are βη-equivalent. This situation immediately changes in ATS, and let us see a simple example that stresses this point.In Figure <ref>, the presented code implements a list-append function in ATS <cit.>, which is a substantial system such that its compiler alone currently consists of more than 165K lines of code implemented in ATS itself.[Please see http://www.ats-lang.org for more details.] The concrete syntax used in the implementation should be accessible to those who are familiar with Standard ML (SML) <cit.>. The type constructor list takes two arguments; when applied to a type T and an integer I, list(T, I) forms a type for lists of length I in which each element is of type T. Also, the two list constructors nil and cons are assigned the following types: nil : ∀a:type. () → list(a, 0) cons : ∀a:type.∀n:nat. (a, list(a, n)) → list(a, n+1) So nil constructs a list of length 0, and cons takes an element and a list of length n to form a list of length n+1. The header of the function indicates that it is assigned the following type: ∀a:type.∀m:nat.∀n:nat. (list(a, m), list(a, n)) → list(a, m+n) which simply means that the function returns a list of length m+n when applied to one list of length m and another list of length n. Note that type is a built-in sort in ATS, and a static term of the sort type stands for a type (for dynamic terms). Also, int is a built-in sort for integers in ATS, and nat is the subset sort {a:int | a ≥ 0} for all nonnegative integers. When the above implementation is type-checked, the following two constraints are generated:[ 1. ∀m:nat.∀n:nat. m=0 ⊃ n=m+n; 2. ∀m:nat.∀n:nat.∀m':nat. m=m'+1 ⊃ (m'+n)+1 = m+n; ]
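Both constraints are one-line consequences of linear integer arithmetic, precisely the fragment that the constraint solvers discussed next are asked to handle:
\[
m=0 \;\supset\; m+n=0+n=n,
\qquad\qquad
m=m'+1 \;\supset\; m+n=(m'+1)+n=(m'+n)+1 .
\]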
Clearly, certain restrictions need to be imposed on the form of constraints allowed in practice so that an effective approach can be found to perform constraint-solving.In , a programming language based on <cit.>, the constraints generated during type-checking are required to be linear inequalities on integers so that the problem of constraint satisfaction can be turned into the problem of linear integer programming, for which there are many highly practical solvers (albeit the problem of linear integer programming itself is NP-complete). This is indeed a very simple design, but it can also be too restrictive, sometimes, as nonlinear constraints (e.g., ∀ n:int. n*n≥ 0) are commonly encountered in practice. Furthermore, the very nature of such a design indicates its being inherently ad hoc. By combining programming with theorem-proving, a fundamentally different design of constraint-solving can provide the programmer with an option to handle nonlinear constraints through explicit proof construction.For the sake of a simpler presentation, let us assume for this moment that even the addition function on integers cannot appear in the constraints generated during type-checking.Under such a restriction, it is still possible to implement a list-append function inthat is assigned a type capturing the invariant that the length of the concatenation of two given listsandequals m+n ifandare of length m and n, respectively.Let us first see such an implementation given in Figure <ref>, which is presented here as a motivating example for programming with theorem-proving (PwTP).\begin figure \begin verbatim datatype Z() = Z of () datatype S(a:type) = S of a // datatype mylist(type, type) = | a:type mynil(a, Z()) | a:typen:type mycons(a, S(n)) of (a, mylist(a, n)) // datatype addrel(type, type, type) = | n:type addrel_z(Z(), n, n) of () | m,n:typer:type addrel_s(S(m), n, S(r)) of addrel(m, n, r) // fun myappend a:type m,n:type ( xs: mylist(a, m) , ys: mylist(a, n) ) : [r:type] ( addrel(m, n, r), mylist(a, r) ) = ( case xs of | mynil() => let val pf = addrel_z() in (pf, ys) end // end of [mynil] | mycons(x, xs) => let val (pf, res) = myappend(xs, ys) in (addrel_s(pf), mycons(x, res)) end // end of [mycons] )A motivating example for PwTP in ATSThe datatypesandare declared in Figure <ref> solely for representing natural numbers:represents 0, and (N) represents the successor of the natural number represented by N. The data constructors associated withandare of no use.Given a type T and another type N, (T, N) is a type for lists containing n elements of the type T, where n is the natural number represented by N. Note thatis not a standard datatype (as is supported in ML); it is a guarded recursive datatype (GRDT) <cit.>, which is also known as generalized algebraic datatype (GADT) <cit.> in Haskell and OCaml. The datatypeis declared to capture the relation induced by the addition function on natural numbers. Given types M, N, and R representing natural numbers m, n, and r, respectively, the type (M, N, R) is for a value representing some proof of m+n=r. Note thatis also a GRDT or GADT. There are two constructorsandassociated with , which encode the following two rules:\begin arrayrcll 0 + n = n(m+1) + n = (m+n)+1Let us now take a look at the implementation of . Formally, the type assigned tocan be written as follows:\begin arrayl∀ a:.∀ m:.∀ n:. ((a, m), (a, n)) ∃ r:. 
((m, n, r), (a, r))In essence, this type states the following: Given two lists of length m and n,returns a pair such that the first component of the pair is a proof showing that m+n equals r for some natural number r and the second component is a list of length r.Unlike , type-checkingdoes not generate any linear constraints on integers.As a matter of fact,can be readily implemented in both Haskell and OCaml (extended with support for generalized algebraic datatypes), where there is no built-in support for handling linear constraints on integers.This is an example of great significance in the sense that it demonstrates concretely an approach to allowing the programmer to write code of the nature of theorem-proving so as to simplify or even eliminate certain constraints that need otherwise to be solved directly during type-checking.With this approach, constraint-solving is effectively internalized, and the programmer can actively participate in constraint simplification, gaining a tight control in determining what constraints should be passed to the underlying constraint-solver.There are some major issues with the implementation given in Figure <ref>. Clearly, representing natural numbers as types is inadequate since there are types that do not represent any natural numbers. More seriously, this representation turns quantification over natural numbers (which is predicative) into quantification over types (which is impredicative), causing unnecessary complications. Also, proof construction (that is, construction of values of types formed by ) needs to be actually performed at run-time, which causes inefficiency both time-wise and memory-wise. Probably the most important issue is that proof validity is not guaranteed. For instance, it is entirely possible to fake proof construction by making use of non-terminating functions.mynat\begin figure \begin verbatim datasort mynat = Z of () | S of mynat // datatype mylist(type, mynat) = | a:type mynil(a, Z()) | a:typen:mynat mycons(a, S(n)) of (a, mylist(a, n)) // dataprop addrel(mynat, mynat, mynat) = | y:mynat addrel_z(Z, y, y) of () | x,y:mynatr:mynat addrel_s(S(x), y, S(r)) of addrel(x, y, r) // fun myappend a:type m,n:mynat ( xs: mylist(a, m) , ys: mylist(a, n) ) : [r:mynat] ( addrel(m, n, r) | mylist(a, r) ) = ( case xs of | mynil() => let val pf = addrel_z() in (pf | ys) end // end of [mynil] | mycons(x, xs) => let val (pf | res) = myappend(xs, ys) in (addrel_s(pf) | mycons(x, res)) end // end of [mycons] )An example making use of PwTP inIn Figure <ref>, another implementation ofis given that makes use of the support for PwTP in . Instead of representing natural numbers as types, a datasort of the nameis declared and natural numbers can be represented as static terms of the sort . Also, a datapropis declared for capturing the relation induced by the addition function on natural numbers. As a dataprop,can only form types for values representing proofs, which are erased after type-checking and thus need no construction at run-time. In the implementation of , the bar symbol () is used in place of the comma symbol to separate components in tuples; the components appearing to the left of the bar symbol are proof expressions (to be erased) and those to the right are dynamic expressions (to be evaluated). 
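As remarked above, the first version is directly expressible in OCaml with GADTs; the following transcription is ours, with the existential in the result type packed into an auxiliary constructor Ex:

(* type-level naturals and length-indexed lists, mirroring Z/S and mylist *)
type z = Z
type 'n s = S of 'n

type ('a, 'n) mylist =
  | Nil  : ('a, z) mylist
  | Cons : 'a * ('a, 'n) mylist -> ('a, 'n s) mylist

(* addrel m n r witnesses m + n = r *)
type ('m, 'n, 'r) addrel =
  | Addrel_z : (z, 'n, 'n) addrel
  | Addrel_s : ('m, 'n, 'r) addrel -> ('m s, 'n, 'r s) addrel

(* the existential  [r] (addrel(m,n,r), mylist(a,r))  packed as a GADT *)
type ('a, 'm, 'n) appended =
  | Ex : ('m, 'n, 'r) addrel * ('a, 'r) mylist -> ('a, 'm, 'n) appended

let rec myappend
  : type a m n. (a, m) mylist -> (a, n) mylist -> (a, m, n) appended =
  fun xs ys ->
    match xs with
    | Nil -> Ex (Addrel_z, ys)
    | Cons (x, xs') ->
        (match myappend xs' ys with
         | Ex (pf, res) -> Ex (Addrel_s pf, Cons (x, res)))

In contrast to the proof-erased setting discussed next, the Addrel_z/Addrel_s proof values here are ordinary data and are actually built at run-time.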
After proof-erasure, the implementation of myappend essentially matches the one given in Figure <ref>.As a framework to facilitate the design and formalization of advanced type systems for practical programming, ATS is first formulated with no support for PwTP <cit.>. This formulation is the basis for a type system referred to as ATS₀ in this paper. The support for PwTP is added into ATS in a subsequent formulation <cit.>, which serves as the basis for a type system referred to as ATS₁ in this paper. However, a fundamentally different approach is adopted here to justify the soundness of PwTP, which essentially translates each well-typed program in ATS₁ into another well-typed one in ATS₀ of the same dynamic semantics. The identification and formalization of this approach, which is both simpler and more general than the one used previously <cit.>, constitutes a major technical contribution of the paper.It is intended that the paper should focus on the theoretical development of ATS, and the presentation given is of a minimalist style. The organization for the rest of the paper is given as follows. An untyped λ-calculus with constants is first presented in Section <ref> for the purpose of introducing some basic concepts needed to formally assign dynamic (that is, operational) semantics to programs. In Section <ref>, a generic applied type system ATS₀ is formulated and its type-soundness established. Subsequently, ATS₀ is extended to ATS₁ in Section <ref> with support for PwTP, and the type-soundness of ATS₁ is reduced to that of ATS₀ through a translation from well-typed programs in the former to those in the latter. Lastly, some closely related work is discussed in Section <ref> and the paper concludes.§ UNTYPED Λ-CALCULUS The purpose of formulating the untyped λ-calculus of this section, which extends the pure λ-calculus with constants (including constant constructors and constant functions), is to set up some machinery needed to formalize dynamic (that is, operational) semantics for programs. It is to be proven that a well-typed program in ATS can be turned into one in this calculus through type-erasure and proof-erasure while retaining its dynamic semantics, stressing the point that types and proofs in ATS play no active rôle in the evaluation of a program. In this regard, the form of typing studied in ATS is of Curry-style (in contrast with Church-style) <cit.>.There are no static terms in this calculus. The syntax for its dynamic terms is given as follows: e ::= x | dc(e⃗) | ⟨e_1, e_2⟩ | fst(e) | snd(e) | lam x. e | app(e_1, e_2) | let x = e_1 in e_2 where the notation e⃗ is for a possibly empty sequence of dynamic terms. Let dc range over external dynamic constants, which include both dynamic constructors dcc and dynamic functions dcf. The arguments taken by a dynamic constructor or function are often primitive values (instead of those constructed by dcc and ⟨·,·⟩) and the result returned by it is often a primitive value as well. The meaning of various forms of dynamic terms should become clear when the rules for evaluating them are given.The values in this calculus are just special forms of dynamic terms, and the syntax for them is given as follows: v ::= x | dcc(v⃗) | ⟨v_1, v_2⟩ | lam x. e where v⃗ is for a possibly empty sequence of values.
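This syntax admits a direct executable rendering; the following OCaml sketch is ours and also anticipates the call-by-value one-step reduction specified next via evaluation contexts. Here is_cc distinguishes constructor constants from function constants, delta supplies the meaning of the latter on values (raising Stuck where undefined), and the rule for let x = v in e, left implicit in the redex list below, is the evident substitution.

(* dynamic terms of the untyped calculus, with our constructor names *)
type dexp =
  | Var of string
  | Cst of string * dexp list          (* dc(e1,...,en), dc a dcc or a dcf *)
  | Pair of dexp * dexp
  | Fst of dexp
  | Snd of dexp
  | Lam of string * dexp
  | App of dexp * dexp
  | Let of string * dexp * dexp

let rec is_value ~is_cc = function
  | Var _ | Lam _ -> true
  | Cst (c, es) -> is_cc c && List.for_all (is_value ~is_cc) es
  | Pair (e1, e2) -> is_value ~is_cc e1 && is_value ~is_cc e2
  | _ -> false

(* e[x |-> v]; the naive version suffices for closed programs *)
let rec subst x v = function
  | Var y -> if y = x then v else Var y
  | Cst (c, es) -> Cst (c, List.map (subst x v) es)
  | Pair (e1, e2) -> Pair (subst x v e1, subst x v e2)
  | Fst e -> Fst (subst x v e)
  | Snd e -> Snd (subst x v e)
  | Lam (y, e) -> if y = x then Lam (y, e) else Lam (y, subst x v e)
  | App (e1, e2) -> App (subst x v e1, subst x v e2)
  | Let (y, e1, e2) -> Let (y, subst x v e1, if y = x then e2 else subst x v e2)

exception Stuck

let rec step ~is_cc ~delta e =
  let value = is_value ~is_cc in
  match e with
  | Fst (Pair (v1, v2)) when value v1 && value v2 -> v1
  | Snd (Pair (v1, v2)) when value v1 && value v2 -> v2
  | App (Lam (x, body), v) when value v -> subst x v body
  | Let (x, v, body) when value v -> subst x v body
  | Cst (c, es) when not (is_cc c) && List.for_all value es -> delta c es
  | Cst (c, es) -> Cst (c, step_list ~is_cc ~delta es)
  | Pair (e1, e2) when value e1 -> Pair (e1, step ~is_cc ~delta e2)
  | Pair (e1, e2) -> Pair (step ~is_cc ~delta e1, e2)
  | Fst e -> Fst (step ~is_cc ~delta e)
  | Snd e -> Snd (step ~is_cc ~delta e)
  | App (v1, e2) when value v1 -> App (v1, step ~is_cc ~delta e2)
  | App (e1, e2) -> App (step ~is_cc ~delta e1, e2)
  | Let (x, e1, e2) -> Let (x, step ~is_cc ~delta e1, e2)
  | _ -> raise Stuck              (* values, and stuck forms, do not step *)

and step_list ~is_cc ~delta = function
  | [] -> raise Stuck
  | e :: es when is_value ~is_cc e -> e :: step_list ~is_cc ~delta es
  | e :: es -> step ~is_cc ~delta e :: es

Iterating step until Stuck is raised either ends in a value or exposes a stuck form, matching the discussion of evaluation that follows.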
A standard approach to assigning dynamic semantics to terms is based on the notion of evaluation contexts:\begin arraylrclE ::= []|(v_1,…,v_i-1,E,e_i+1,…,e_n)|E,e|v,E|Ee|vE|x=EeEssentially, an evaluation context E is a dynamic term in which a subterm is replaced with a hole denoted by []. Note that only subterms at certain positions in a dynamic term can be replaced to form valid evaluation contexts.→ →^*\begin definition The redexes inand their reducts are defined as follows: \begin itemize* (v_1,v_2) is a redex, and its reduct is v_1.* (v_1,v_2) is a redex, and its reduct is v_2.* xev is a redex, and its reduct is vxe.* (v⃗) is a redex if it is defined to equal some value v; if so, its reduct is v. Note that it may happen later that a new form of redex can have more than one reducts. Given a dynamic term of the form E[e_1] for some redex e_1, E[e_1] is said to reduce to E[e_2] in one-step if e_2 is a reduct of e_1, and this one-step reduction is denoted by E[e_1] E[e_2].Letstand for the reflexive and transitive closure of .Given a program (that is, a closed dynamic term) e_0 in , a finite reduction sequence starting from e_0 can either lead to a value or a non-value. If a non-value cannot be further reduced, then the non-value is said to be stuck or in a stuck form. In practice, values can often be represented in special manners to allow various stuck forms to be detected through checks performed at run-time. For instance, the representation of a value in a dynamically typed language most likely contains a tag to indicate the type of the value. If it is detected that the evaluation of a program reaches a stuck form, then the evaluation can be terminated abnormally with a raised exception.Detecting potential stuck forms that may occur during the evaluation of a program can also be done statically (that is, at compiler-time). One often imposes a type discipline to ensure the absence of various stuck forms during the evaluation of a well-typed program. This is the line of study to be carried out in the rest of the paper.§FORMAL DEVELOPMENT OFAs a generic applied type system,consists of a static component (statics), where types are formed and reasoned about, and a dynamic component (dynamics), where programs are constructed and evaluated.The statics itself is a simply typed lambda-calculus (extended with certain constants), and the types in it are called sorts so as to avoid confusion with the types for classifying dynamic terms, which are themselves static terms.b → ⇒[]The syntax for the statics ofis given in Figure <ref>.Letrange over the base sorts in , which include at leastfor static booleans andfor types (assigned to dynamic terms). The base sortfor static integers is not really needed for formalizingbut it is often used in the presented examples. Let a and s range over static variables and static terms, respectively.There may be some built-in static constants , which are either static constant constructorsor static constant functions .A c-sort is of the form (σ_1,…,σ_n) b, which can only be assigned to static constants. 
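Rendered concretely (our naming), the sorts and static terms of the statics are:

(* sigma ::= b | sigma -> sigma, with base sorts bool and type (int is kept
   for the examples); a c-sort (sigma_1,...,sigma_n) => b is deliberately a
   separate species, since it is assigned to static constants only and its
   result is meant to be a base sort *)
type sort = Sbool | Sint | Stype | Sfun of sort * sort

type csort = Csort of sort list * sort

type sexp =
  | SVar of string                  (* a *)
  | SLam of string * sort * sexp    (* lam a:sigma. s *)
  | SApp of sexp * sexp             (* s1(s2) *)
  | SCst of string * sexp list      (* sc(s1,...,sn), sc a static constant *)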
Note that a c-sort is not considered a (regular) sort. Given a static constant sc, a static term sc(s₁,…,sₙ) is of sort b if sc is assigned a c-sort (σ₁,…,σₙ) ⇒ b for some sorts σ₁,…,σₙ and s_i can be assigned the sort σ_i for i=1,…,n. It is allowed to write sc for sc() if there is no risk of confusion. The existence of the following static constants with the assigned c-sorts is assumed (among others):

1 : () ⇒ type
≤_ty : (type, type) ⇒ bool
⊃ : (bool, type) ⇒ type
∧ : (bool, type) ⇒ type
→ : (type, type) ⇒ type
* : (type, type) ⇒ type
∀_σ : (σ ⇒ type) ⇒ type
∃_σ : (σ ⇒ type) ⇒ type

Note that infix notation may be used for certain static constants. For instance, s₁ → s₂ stands for →(s₁,s₂) and s₁ * s₂ stands for *(s₁,s₂). In addition, ∀a:σ.s and ∃a:σ.s stand for ∀_σ(λa:σ.s) and ∃_σ(λa:σ.s), respectively. Given a static constant constructor scc, if the c-sort assigned to scc is (σ₁,…,σₙ) ⇒ type for some sorts σ₁,…,σₙ, then scc is a type constructor. For instance, ⊃, ∧, →, *, ∀_σ and ∃_σ are all type constructors. Additional built-in base type constructors may be assumed.

Given a proposition B and a type T, B ⊃ T is a guarded type and B ∧ T is an asserting type. Intuitively, if a value v is assigned a guarded type B ⊃ T, then v can be used only if the guard B is satisfied; if a value v of an asserting type B ∧ T is generated at a program point, then the assertion B holds at that point. For instance, suppose that int is a sort for (static) integers and int is also a type constructor of the c-sort (int) ⇒ type; given a static term s of the sort int, int(s) is a singleton type for the integer equal to s; hence, the usual type Int for (dynamic) integers can be defined as ∃a:int. int(a), and the type Nat for natural numbers can be defined as ∃a:int. (a≥0) ∧ int(a). Moreover, the following type is for the (dynamic) division function on integers:

∀a₁:int.∀a₂:int. (a₂≠0) ⊃ ((int(a₁), int(a₂)) → int(a₁/a₂))

where the meaning of ≠ and / should be obvious. With such a type, division by zero is disallowed during type-checking (at compile-time). Also, suppose that bool is a type constructor of the c-sort (bool) ⇒ type such that for each proposition B, bool(B) is a singleton type for the truth value equal to B. Then the usual type Bool for (dynamic) booleans can be defined as ∃a:bool. bool(a). The following type is an interesting one:

∀a:bool. bool(a) → (a ⊃ 1)

where 1 stands for the unit type. Given a function f of this type, we can apply f to a boolean value v of type bool(B) for some proposition B; if f(v) returns, then B must be true; therefore f acts like dynamic assertion-checking.

For those familiar with qualified types <cit.>, which underlie the type class mechanism in Haskell, it should be noted that a qualified type cannot be regarded as a guarded type. The simple reason is that the proof of a guard bears no computational significance, that is, it cannot affect the run-time behaviour of a program, while a dictionary, which is just a proof of some predicate on types in the setting of qualified types, can and most likely does affect the run-time behaviour of a program.

The standard rules for assigning sorts to static terms are given in Figure <ref>, where the judgement sc : (σ₁,…,σₙ) ⇒ b means that the static constant sc is assumed to be of the c-sort (σ₁,…,σₙ) ⇒ b. Given s⃗=s₁,…,sₙ and σ⃗=σ₁,…,σₙ, a judgement of the form Σ ⊢ s⃗:σ⃗ means Σ ⊢ s_i:σ_i for i=1,…,n. Let B stand for a static term that can be assigned the sort bool (under some context Σ) and B⃗ a possibly empty sequence of static boolean terms. Also, let T stand for a type (for dynamic terms), which is a static term that can be assigned the sort type (under some context Σ).
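Returning to the division example above: the guard a₂≠0 is discharged statically during type-checking. In an untyped setting the same safety requires a dynamic test, as the following illustrative Python sketch (not ATS code) makes explicit; it is the run-time counterpart of the static guard.

def checked_div(a1: int, a2: int) -> int:
    # Dynamic counterpart of the guard (a2 != 0) in the division type:
    # without the static guard, the test must happen at run-time.
    if a2 == 0:
        raise ZeroDivisionError("guard a2 != 0 violated")
    return a1 // a2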
Given contexts Σ₁ and Σ₂ and a substitution Θ, the judgement Σ₁ ⊢ Θ:Σ₂ means that Σ₁ ⊢ Θ(a):Σ₂(a) is derivable for each a ∈ dom(Θ) = dom(Σ₂).

Proposition. Assume Σ ⊢ s:σ is derivable. If Σ=Σ₁,Σ₂ and Σ₁ ⊢ Θ:Σ₂ holds, then Σ₁ ⊢ s[Θ]:σ is derivable.
Proof. By structural induction on the derivation of Σ ⊢ s:σ.

Definition [Constraints]. A constraint is of the form Σ; B⃗ ⊨ B₀, where Σ ⊢ B:bool holds for each B in B⃗ and Σ ⊢ B₀:bool holds as well, and the constraint relation is the one that determines whether each constraint is true or false. Each regularity rule in Figure <ref> is assumed to be met, that is, the conclusion of each regularity rule holds if all of its premisses hold, and the following regularity conditions on ≤ are also satisfied:
* Σ;B⃗ ⊨ T ≤ T holds for every T.
* Σ;B⃗ ⊨ T ≤ T' and Σ;B⃗ ⊨ T' ≤ T'' imply Σ;B⃗ ⊨ T ≤ T''.
* Σ;B⃗ ⊨ T₁*T₂ ≤ T'₁*T'₂ implies Σ;B⃗ ⊨ T₁ ≤ T'₁ and Σ;B⃗ ⊨ T₂ ≤ T'₂.
* Σ;B⃗ ⊨ T₁→T₂ ≤ T'₁→T'₂ implies Σ;B⃗ ⊨ T'₁ ≤ T₁ and Σ;B⃗ ⊨ T₂ ≤ T'₂.
* Σ;B⃗ ⊨ B∧T ≤ B'∧T' implies Σ;B⃗,B ⊨ B' and Σ;B⃗,B ⊨ T ≤ T'.
* Σ;B⃗ ⊨ B⊃T ≤ B'⊃T' implies Σ;B⃗,B' ⊨ B and Σ;B⃗,B' ⊨ T ≤ T'.
* Σ;B⃗ ⊨ ∀a:σ.T ≤ ∀a:σ.T' implies Σ,a:σ;B⃗ ⊨ T ≤ T'.
* Σ;B⃗ ⊨ ∃a:σ.T ≤ ∃a:σ.T' implies Σ,a:σ;B⃗ ⊨ T ≤ T'.
* ∅;∅ ⊨ T' ≤ (T₁,…,Tₙ)→T implies that T' is of the form (T'₁,…,T'ₙ)→T'' for some T'₁,…,T'ₙ,T''.

The need for these conditions is to become clear when proofs are constructed in the following presentation for formally establishing various meta-properties of the base system. For instance, the last of the above conditions can be invoked to make the claim that T' ≤ T₁→T₂ implies T' being of the form T'₁→T'₂. Note that this condition actually implies the consistency of the constraint relation, as not every constraint is valid.

Figure: the syntax for the dynamics of the base system.

e ::= x | dc{s⃗}(e₁,…,eₙ) | ⟨e₁,e₂⟩ | fst(e) | snd(e) | lam x.e | app(e₁,e₂) | ⊃⁺(e) | ⊃⁻(e) | slam a.e | sapp(e,s) | ∧(e) | let x=e₁ in e₂ | ⟨s,e⟩ | let ⟨a,x⟩=e₁ in e₂
v ::= x | dcc{s⃗}(v₁,…,vₙ) | ⟨v₁,v₂⟩ | lam x.e | ⊃⁺(e) | slam a.e | ∧(v) | ⟨s,v⟩
Δ ::= ∅ | Δ, x:T
θ ::= [] | θ[x↦e]

Let us now move on to the dynamic component (dynamics). The syntax for the dynamics is given in Figure <ref>. Let x range over dynamic variables and dc over dynamic constants, which include both dynamic constant constructors dcc and dynamic constant functions dcf. Some (unfamiliar) forms of dynamic terms are to be understood when the rules for assigning types to them are presented. Let v range over values, which are dynamic terms of certain special forms, and Δ range over dynamic variable contexts, which assign types to dynamic variables.

During the formal development, proofs are often constructed by induction on derivations (represented as trees). Given a judgement J, 𝒟::J means that 𝒟 is a derivation of J, that is, the conclusion of 𝒟 is J. Given a derivation 𝒟, ht(𝒟) stands for the height of the tree that represents 𝒟.

A typing judgement is of the form Σ;B⃗;Δ ⊢ e:T, and the rules for deriving such a judgement are given in Figure <ref>. Note that certain obvious side conditions associated with some of the typing rules are omitted for the sake of brevity. For instance, the variable a is not allowed to have free occurrences in B⃗, Δ, or T when the rule (slam) is applied. Given B⃗=B₁,…,Bₙ, B⃗ ⊃ T stands for B₁⊃(⋯(Bₙ⊃T)⋯). Given a⃗=a₁,…,aₙ and σ⃗=σ₁,…,σₙ, ∀a⃗:σ⃗ stands for the sequence of quantifiers ∀a₁:σ₁.⋯∀aₙ:σₙ. A c-type is of the form ∀a⃗:σ⃗. B⃗ ⊃ ((T₁,…,Tₙ) ⇒ T). The notation

dc : ∀a⃗:σ⃗. B⃗ ⊃ ((T₁,…,Tₙ) ⇒ T)

means that dc is assumed to have the c-type following it; if dc is a constructor dcc, then T is assumed to be constructed by some type constructor tc, and dcc is said to be associated with tc. For instance, the list constructors and the integer addition, subtraction, multiplication, and division functions can be given the following c-types:

nil : ∀a:type. () ⇒ list(a, 0)
cons : ∀a:type.∀n:int. (n≥0) ⊃ ((a, list(a,n)) ⇒ list(a, n+1))
add : ∀a₁:int.∀a₂:int.
(int(a₁), int(a₂)) ⇒ int(a₁+a₂)
sub : ∀a₁:int.∀a₂:int. (int(a₁), int(a₂)) ⇒ int(a₁−a₂)
mul : ∀a₁:int.∀a₂:int. (int(a₁), int(a₂)) ⇒ int(a₁*a₂)
div : ∀a₁:int.∀a₂:int. (a₂≠0) ⊃ ((int(a₁), int(a₂)) ⇒ int(a₁/a₂))

where int and list are type constructors of the c-sorts (int) ⇒ type and (type, int) ⇒ type, respectively, and +, −, *, and / are static constant functions of the c-sort (int, int) ⇒ int. For a technical reason, the rule (var) is to be replaced with the following one:

\[ \frac{\Delta(x)=T \qquad \Sigma;\vec{B} \models T \le T'}{\Sigma;\vec{B};\Delta \vdash x : T'} \]

which combines (var) with (sub). This replacement is needed for establishing the following lemma:

Lemma. Assume 𝒟::Σ;B⃗;Δ,x:T₁ ⊢ e:T₂ and Σ;B⃗ ⊨ T'₁ ≤ T₁. Then there is a derivation 𝒟' for the typing judgement Σ;B⃗;Δ,x:T'₁ ⊢ e:T₂ such that ht(𝒟')=ht(𝒟).

The proof follows from structural induction on 𝒟 immediately. The only interesting case is the one where the last applied rule is the (combined) variable rule, and this case can be handled by simply merging two consecutive applications of the rule into one (with the help of the regularity condition stating that ≤ is transitive).

Given Σ, B⃗, Δ₁, Δ₂ and θ, the judgement Σ;B⃗;Δ₁ ⊢ θ:Δ₂ means that the typing judgement Σ;B⃗;Δ₁ ⊢ θ(x):Δ₂(x) is derivable for each x ∈ dom(θ) = dom(Δ₂).

Lemma [Substitution]. Assume 𝒟::Σ;B⃗;Δ ⊢ e:T.
* If B⃗=B⃗₁,B⃗₂ and Σ;B⃗₁ ⊨ B⃗₂ holds, then Σ;B⃗₁;Δ ⊢ e:T is also derivable, where Σ;B⃗₁ ⊨ B⃗₂ means that Σ;B⃗₁ ⊨ B holds for each B∈B⃗₂.
* If Σ=Σ₁,Σ₂ and Σ₁ ⊢ Θ:Σ₂ holds, then Σ₁;B⃗[Θ];Δ[Θ] ⊢ e[Θ]:T[Θ] is also derivable.
* If Δ=Δ₁,Δ₂ and Σ;B⃗;Δ₁ ⊢ θ:Δ₂ is derivable, then Σ;B⃗;Δ₁ ⊢ e[θ]:T is also derivable.
Proof. By structural induction on the derivation 𝒟.

Lemma [Canonical Forms]. Assume 𝒟::∅;∅;∅ ⊢ v:T. Then the following statements hold:
* If T=T₁*T₂, then v is of the form ⟨v₁,v₂⟩.
* If T=T₁→T₂, then v is of the form lam x.e.
* If T=B∧T₀, then v is of the form ∧(v₀).
* If T=B⊃T₀, then v is of the form ⊃⁺(e).
* If T=∀a:σ.T₀, then v is of the form slam a.e.
* If T=∃a:σ.T₀, then v is of the form ⟨s,v₀⟩.
* If T=tc(s⃗₁), then v is of the form dcc{s⃗₂}(v⃗) for some dcc associated with tc.
With Definition <ref>, the lemma follows from structural induction on 𝒟. If the last applied rule in 𝒟 is (sub), then the proof goes through by invoking the induction hypothesis on the immediate subderivation of 𝒟. Otherwise, the proof follows from a careful inspection of the typing rules in Figure <ref>.

In order to assign (call-by-value) dynamic semantics to the dynamic terms, let us introduce evaluation contexts as follows:

E ::= [] | dc{s⃗}(v⃗,E,e⃗) | ⟨E,e⟩ | ⟨v,E⟩ | app(E,e) | app(v,E) | ⊃⁻(E) | ∧(E) | let x=E in e | sapp(E,s) | ⟨s,E⟩ | let ⟨a,x⟩=E in e

Definition. The redexes and their reducts are defined as follows.
* fst(⟨v₁,v₂⟩) is a redex, and its reduct is v₁.
* snd(⟨v₁,v₂⟩) is a redex, and its reduct is v₂.
* app(lam x.e, v) is a redex, and its reduct is e[x↦v].
* dcf{s⃗}(v⃗) is a redex if it is defined to equal some value v; if so, its reduct is v.
* ⊃⁻(⊃⁺(e)) is a redex, and its reduct is e.
* sapp(slam a.e, s) is a redex, and its reduct is e[a↦s].
* let x=v in e is a redex, and its reduct is e[x↦v].
* let ⟨a,x⟩=⟨s,v⟩ in e is a redex, and its reduct is e[a↦s][x↦v].

Given two dynamic terms e₁ and e₂ such that e₁=E[e] and e₂=E[e'] for some redex e and its reduct e', e₁ is said to reduce to e₂ in one step, and this one-step reduction is denoted by e₁ → e₂. Let →* stand for the reflexive and transitive closure of →. It is assumed that the type assigned to each dynamic constant function dcf is appropriate, that is, ∅;∅;∅ ⊢ v:T is derivable whenever ∅;∅;∅ ⊢ dcf{s⃗}(v₁,…,vₙ):T is derivable and v is a reduct of dcf{s⃗}(v₁,…,vₙ).

Lemma [Inversion]. Assume 𝒟::Σ;B⃗;Δ ⊢ e:T.
* If e=⟨e₁,e₂⟩, then there exists 𝒟'::Σ;B⃗;Δ ⊢ e:T such that ht(𝒟')≤ht(𝒟) and the last rule applied in 𝒟' is the tuple rule.
* If e=lam x.e₁, then there exists 𝒟'::Σ;B⃗;Δ ⊢ e:T such that ht(𝒟')≤ht(𝒟) and the last applied rule in 𝒟' is the lambda rule.
* If e=⊃⁺(e₁), then there exists 𝒟'::Σ;B⃗;Δ ⊢ e:T such that ht(𝒟')≤ht(𝒟) and the last rule applied in 𝒟' is the ⊃-introduction rule.
* If e=∧(e₁), then there exists 𝒟'::Σ;B⃗;Δ ⊢ e:T such that ht(𝒟')≤ht(𝒟) and the last rule applied in 𝒟' is the ∧-introduction rule.
* If e=slam a.e₁, then there exists 𝒟'::Σ;B⃗;Δ ⊢ e:T such that ht(𝒟')≤ht(𝒟) and the last rule applied in 𝒟' is the slam rule.
* If e=⟨s,e₁⟩, then there exists 𝒟'::Σ;B⃗;Δ ⊢ e:T such that ht(𝒟')≤ht(𝒟) and the last rule applied in 𝒟' is the ∃-introduction rule.

Let 𝒟' be 𝒟 if 𝒟 does not end with an application of the rule (sub). Hence, in the rest of the proof, it can be assumed that the last applied rule in 𝒟 is (sub), that is, 𝒟 is of the following form:

\[ \frac{\mathcal{D}_1::\Sigma;\vec{B};\Delta \vdash e:T' \qquad \Sigma;\vec{B} \models T' \le T}{\Sigma;\vec{B};\Delta \vdash e:T} \]

Let us prove (1) by induction on ht(𝒟). By the induction hypothesis on 𝒟₁, there exists a derivation 𝒟'₁::Σ;B⃗;Δ ⊢ e:T' such that ht(𝒟'₁)≤ht(𝒟₁) and the last applied rule in 𝒟'₁ is the tuple rule:

\[ \frac{\mathcal{D}'_{21}::\Sigma;\vec{B};\Delta \vdash e_1:T'_1 \qquad \mathcal{D}'_{22}::\Sigma;\vec{B};\Delta \vdash e_2:T'_2}{\Sigma;\vec{B};\Delta \vdash \langle e_1,e_2\rangle : T'_1 * T'_2} \]

where T'=T'₁*T'₂ and e=⟨e₁,e₂⟩. By one of the regularity conditions, T=T₁*T₂ for some T₁ and T₂. By another regularity condition, both Σ;B⃗ ⊨ T'₁ ≤ T₁ and Σ;B⃗ ⊨ T'₂ ≤ T₂ hold. By applying (sub) to 𝒟'₂₁, one obtains 𝒟₂₁::Σ;B⃗;Δ ⊢ e₁:T₁. By applying (sub) to 𝒟'₂₂, one obtains 𝒟₂₂::Σ;B⃗;Δ ⊢ e₂:T₂. Let 𝒟' be

\[ \frac{\mathcal{D}_{21}::\Sigma;\vec{B};\Delta \vdash e_1:T_1 \qquad \mathcal{D}_{22}::\Sigma;\vec{B};\Delta \vdash e_2:T_2}{\Sigma;\vec{B};\Delta \vdash \langle e_1,e_2\rangle : T_1 * T_2} \]

and the proof for (1) is done since ht(𝒟')=1+max(ht(𝒟₂₁),ht(𝒟₂₂)), which equals 1+1+max(ht(𝒟'₂₁),ht(𝒟'₂₂))=1+ht(𝒟'₁)≤1+ht(𝒟₁)=ht(𝒟).

Let us prove (2) by induction on ht(𝒟). By the induction hypothesis on 𝒟₁, there exists a derivation 𝒟'₁::Σ;B⃗;Δ ⊢ e:T' such that ht(𝒟'₁)≤ht(𝒟₁) and the last applied rule in 𝒟'₁ is the lambda rule:

\[ \frac{\mathcal{D}'_2::\Sigma;\vec{B};\Delta,x:T'_1 \vdash e_1:T'_2}{\Sigma;\vec{B};\Delta \vdash \mathrm{lam}\ x.e_1 : T'_1 \to T'_2} \]

where T'=T'₁→T'₂ and e=lam x.e₁. By one of the regularity conditions, T=T₁→T₂ for some T₁ and T₂. By another regularity condition, both Σ;B⃗ ⊨ T₁ ≤ T'₁ and Σ;B⃗ ⊨ T'₂ ≤ T₂ hold. Hence, by Lemma <ref>, there is a derivation 𝒟''₂::Σ;B⃗;Δ,x:T₁ ⊢ e₁:T'₂ such that ht(𝒟''₂)=ht(𝒟'₂). Let 𝒟' be the derivation obtained by applying the lambda rule on top of an application of (sub) to 𝒟''₂:

\[ \frac{\dfrac{\mathcal{D}''_2::\Sigma;\vec{B};\Delta,x:T_1 \vdash e_1:T'_2 \qquad \Sigma;\vec{B} \models T'_2 \le T_2}{\Sigma;\vec{B};\Delta,x:T_1 \vdash e_1:T_2}}{\Sigma;\vec{B};\Delta \vdash \mathrm{lam}\ x.e_1 : T_1 \to T_2} \]

and the proof for (2) is done since ht(𝒟')=1+1+ht(𝒟''₂)=1+1+ht(𝒟'₂)=1+ht(𝒟'₁)≤1+ht(𝒟₁)=ht(𝒟). The remaining statements (3), (4), (5), and (6) can all be proven similarly.

Theorem [Subject Reduction]. Assume 𝒟::Σ;B⃗;Δ ⊢ e:T and that e → e' holds. Then Σ;B⃗;Δ ⊢ e':T is also derivable.

The proof proceeds by induction on ht(𝒟).
* The last applied rule in 𝒟 is (sub), with immediate subderivation 𝒟₁::Σ;B⃗;Δ ⊢ e:T' and Σ;B⃗ ⊨ T' ≤ T. By the induction hypothesis on 𝒟₁, 𝒟'₁::Σ;B⃗;Δ ⊢ e':T' is derivable, and applying (sub) to 𝒟'₁ yields Σ;B⃗;Δ ⊢ e':T.
* The last applied rule in 𝒟 is not (sub). Assume that e=E[e₀] and e'=E[e'₀], where e₀ is a redex and e'₀ is a reduct of e₀. All the cases where E is not [] can be readily handled, and some details are given as follows on the case where E=[] (that is, e is itself a redex).
* 𝒟 is of the following form:
\[ \frac{\mathcal{D}_1::\Sigma;\vec{B};\Delta \vdash \langle v_{11},v_{12}\rangle : T_1 * T_2}{\Sigma;\vec{B};\Delta \vdash \mathrm{fst}(\langle v_{11},v_{12}\rangle) : T_1} \]
where T=T₁ and e=fst(⟨v₁₁,v₁₂⟩). By Lemma <ref>, 𝒟₁ may be assumed to end with the tuple rule, with subderivations 𝒟₂₁::Σ;B⃗;Δ ⊢ v₁₁:T₁ and 𝒟₂₂::Σ;B⃗;Δ ⊢ v₁₂:T₂. Note that e'=v₁₁, and the case concludes.
* 𝒟 is of the following form:
\[ \frac{\mathcal{D}_1::\Sigma;\vec{B};\Delta \vdash \mathrm{lam}\ x.e_1 : T_1 \to T_2 \qquad \mathcal{D}_2::\Sigma;\vec{B};\Delta \vdash v_2:T_1}{\Sigma;\vec{B};\Delta \vdash \mathrm{app}(\mathrm{lam}\ x.e_1, v_2) : T_2} \]
where T=T₂ and e=app(lam x.e₁, v₂).
By Lemma <ref>, 𝒟₁ may be assumed to end with the lambda rule, with subderivation Σ;B⃗;Δ,x:T₁ ⊢ e₁:T₂. By Lemma <ref> (Substitution), Σ;B⃗;Δ ⊢ e₁[x↦v₂]:T₂ is derivable. Note that e'=e₁[x↦v₂], and the case concludes.
All of the other cases can be handled similarly.

For a less involved presentation, let us assume that any well-typed closed value of the form dcf{s⃗}(v₁,…,vₙ) is a redex, that is, the dynamic constant function dcf is well-defined at the arguments v₁,…,vₙ.

Theorem [Progress]. Assume that 𝒟::∅;∅;∅ ⊢ e:T. Then either e is a value or e → e' holds for some dynamic term e'.
With Lemma <ref> (Canonical Forms), the proof proceeds by a straightforward structural induction on 𝒟.

By Theorem <ref> and Theorem <ref>, it is clear that for each closed well-typed dynamic term e, either e →* v holds for some value v, or there is an infinite reduction sequence starting from e: e=e₀ → e₁ → e₂ → ⋯. In other words, the evaluation of a well-typed program either reaches a value or goes on forever (as it can never get stuck). This meta-property is often referred to as type-soundness. Per Robin Milner, a catchy slogan for type-soundness states that a well-typed program can never go wrong.

Figure: the type-erasure function |·| on dynamic terms.

|x| = x
|dc{s⃗}(e₁,…,eₙ)| = dc(|e₁|,…,|eₙ|)
|⟨e₁,e₂⟩| = ⟨|e₁|,|e₂|⟩
|fst(e)| = fst(|e|)    |snd(e)| = snd(|e|)
|lam x.e| = lam x.|e|
|app(e₁,e₂)| = app(|e₁|,|e₂|)
|⊃⁺(e)| = |e|    |⊃⁻(e)| = |e|    |∧(e)| = |e|
|let x=e₁ in e₂| = let x=|e₁| in |e₂|
|slam a.e| = |e|    |sapp(e,s)| = |e|
|⟨s,e⟩| = |e|    |let ⟨a,x⟩=e₁ in e₂| = let x=|e₁| in |e₂|

After a program passes type-checking, it goes through a process referred to as type-erasure to have the static terms inside it completely erased. In Figure <ref>, a function performing type-erasure is defined, which maps each dynamic term into an untyped dynamic term of the untyped λ-calculus. In order to guarantee that a value is mapped to another value by the function |·|, the following syntactic restriction is needed:
* Only when e is a value can the dynamic term ⊃⁺(e) be formed.
* Only when e is a value can the dynamic term slam a.e be formed.
This kind of restriction is often referred to as value-form restriction.

Proposition. With the value-form restriction being imposed, |v| is a value in the untyped λ-calculus for every value v.
Proof. By structural induction on v.

Note that it is certainly possible to have a non-value e whose type-erasure is a value in the untyped λ-calculus. From this point on, the value-form restriction is always assumed to have been imposed when type-erasure is performed.

Proposition. Assume that e₁ is a well-typed closed dynamic term. If e₁ → e₂ holds, then either |e₁|=|e₂| or |e₁| → |e₂| holds in the untyped λ-calculus.
Proof. By a careful inspection of the forms of redexes in Definition <ref>.

Proposition. Assume that e₁ is a well-typed closed dynamic term. If |e₁| → e'₂ holds in the untyped λ-calculus, then there exists e₂ such that e₁ → e₂ holds and |e₂|=e'₂.
Proof. By induction on the height of the typing derivation for e₁.

By Proposition <ref> and Proposition <ref>, it is clear that type-erasure cannot alter the dynamic semantics of a well-typed dynamic term.

The formulation of the base system presented in this section is of a minimalist style. In particular, the constraint relation is treated abstractly. In practice, if a concrete instance is to be implemented, then rules need to be provided for simplifying constraints. For instance, the following rule may be present:

\[ \frac{\Sigma;\vec{B} \models I_1=I_2}{\Sigma;\vec{B} \models \mathrm{int}(I_1) \le \mathrm{int}(I_2)} \]

With this rule, int(I₁) ≤ int(I₂) can be simplified to the constraint I₁=I₂, where the equality is on static integer terms.
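The way such simplification rules might be applied can be sketched in a few lines of Python; the constraint encodings are hypothetical, and the sketch covers the rule just shown together with the analogous rule for list types given next (this is not a full solver, only the rewriting step).

def simplify(c):
    """Rewrite one subtype constraint into residual subgoals.
    Constraints are ('sub', lhs, rhs); types are ('int', i)
    or ('list', t, i)."""
    op, lhs, rhs = c
    assert op == 'sub'
    if lhs[0] == rhs[0] == 'int':
        return [('eq', lhs[1], rhs[1])]        # int(I1) <= int(I2) ~> I1 = I2
    if lhs[0] == rhs[0] == 'list':
        return [('sub', lhs[1], rhs[1]),       # T1 <= T2
                ('eq', lhs[2], rhs[2])]        # I1 = I2
    return [c]                                 # leave other constraints alone

goal = ('sub', ('list', ('int', 'a'), 'n'), ('list', ('int', 'b'), 'm'))
assert simplify(goal) == [('sub', ('int', 'a'), ('int', 'b')), ('eq', 'n', 'm')]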
The following rule may also be present:

\[ \frac{\Sigma;\vec{B} \models T_1 \le T_2 \qquad \Sigma;\vec{B} \models I_1=I_2}{\Sigma;\vec{B} \models \mathrm{list}(T_1,I_1) \le \mathrm{list}(T_2,I_2)} \]

With this rule, list(T₁,I₁) ≤ list(T₂,I₂) can be simplified to the two constraints T₁ ≤ T₂ and I₁=I₂. For those interested in implementing an applied type system, please find more details in <cit.>, which describes a special kind of applied type system.

§ FORMAL DEVELOPMENT OF THE EXTENDED SYSTEM

Let us extend the base system in this section with support for programming with theorem-proving (PwTP). A great limitation on employing the base system as the basis for a practical programming language lies in its very rigid handling of constraint-solving. One is often forced to impose various ad hoc restrictions on the syntactic form of a constraint that can actually be supported in practice (so as to match the capability of the underlying constraint-solver), greatly diminishing the effectiveness of using types to capture programming invariants. For instance, only quantifier-free constraints that can be translated into problems of linear integer programming are allowed in the DML programming language <cit.>.

With PwTP being supported in a programming language, programming and theorem-proving can be combined in a syntactically intertwined manner <cit.>; if a constraint cannot be handled directly by the underlying constraint-solver, then it is possible to simplify the constraint or even eliminate it through explicit proof construction. PwTP advocates an open style of constraint-solving by providing a means within the programming language itself for the programmer to actively participate in constraint-solving. In other words, PwTP can be viewed as a programming paradigm for internalizing constraint-solving.

Let us now start with the formulation of the extended system, which extends that of the base system fairly lightly. In addition to the base sorts of the base system, there is another base sort prop, which is for static terms representing types for proofs. A static term of the sort prop may be referred to as a prop (or, sometimes, a type for proofs). Also, it is assumed that the static constants listed in Figure <ref> are included. Note that the symbols referring to these static constants may be overloaded. In the following presentation, P stands for a prop, T stands for a type, and T̂ stands for either a prop or a type.

The syntax for dynamic terms in the extended system is essentially the same as in the base system but with a few minor changes, mentioned as follows. Some dynamic constructs of the base system need to be split when they are incorporated. The construct ⟨e₁,e₂⟩ for forming tuples is split into ⟨e₁,e₂⟩_pp, ⟨e₁,e₂⟩_pt, and ⟨e₁,e₂⟩_tt for prop-prop pairs, prop-type pairs, and type-type pairs, respectively. For instance, a prop-type pair is one where the first component is assigned a prop and the second one a type. Note that there are no type-prop pairs. The construct lam x.e for forming lambda-abstractions is split into lam_pp x.e, lam_pt x.e, and lam_tt x.e for prop-prop functions, prop-type functions, and type-type functions, respectively. For instance, a prop-type function is one where the argument is assigned a prop and the body a type. The construct app(e₁,e₂) for forming applications is split into app_pp(e₁,e₂), app_tp(e₁,e₂), and app_tt(e₁,e₂) for prop-prop applications, type-prop applications, and type-type applications. For instance, a type-prop application is one where the function part is assigned a type and the argument a prop.
Note that there are no type-prop functions. The dynamic variable contexts are defined as follows:

Δ ::= ∅ | Δ, x:T̂

The regularity conditions need to be extended with the following two for the new forms of types:
(3.2) Σ;B⃗ ⊨ P₁*T₂ ≤ P'₁*T'₂ implies Σ;B⃗ ⊨ P₁ ≤ P'₁ and Σ;B⃗ ⊨ T₂ ≤ T'₂.
(4.2) Σ;B⃗ ⊨ P₁→T₂ ≤ P'₁→T'₂ implies Σ;B⃗ ⊨ P'₁ ≤ P₁ and Σ;B⃗ ⊨ T₂ ≤ T'₂.
It should be noted that there are no regularity conditions imposed on props (as proofs are not expected to have any computational meaning).

There are two kinds of typing rules in the extended system: p-typing rules and t-typing rules, where the former are for assigning props to dynamic terms (encoding proofs) and the latter for assigning types to dynamic terms (to be evaluated). The typing rules are essentially those listed in Figure <ref> except for the following changes:
* Each occurrence of T in the rules needs to be replaced with T̂.
* The premisses of each p-typing rule (that is, one for assigning a prop to a dynamic term) are required to be p-typing rules themselves.

As an example, let us take a look at the following rule:

\[ \frac{\Sigma;\vec{B};\Delta \vdash e:\hat{T}' \qquad \Sigma;\vec{B} \models \hat{T}' \le \hat{T}}{\Sigma;\vec{B};\Delta \vdash e:\hat{T}} \]

which yields the following two valid versions:

\[ \frac{\Sigma;\vec{B};\Delta \vdash e:P' \qquad \Sigma;\vec{B} \models P' \le P}{\Sigma;\vec{B};\Delta \vdash e:P} \qquad \frac{\Sigma;\vec{B};\Delta \vdash e:T' \qquad \Sigma;\vec{B} \models T' \le T}{\Sigma;\vec{B};\Delta \vdash e:T} \]

As another example, let us take a look at the following rule:

\[ \frac{\Sigma;\vec{B};\Delta \vdash e: \hat{T}_1 * \hat{T}_2}{\Sigma;\vec{B};\Delta \vdash \mathrm{fst}(e):\hat{T}_1} \]

which yields the following two valid versions:

\[ \frac{\Sigma;\vec{B};\Delta \vdash e: P_1 * P_2}{\Sigma;\vec{B};\Delta \vdash \mathrm{fst}(e):P_1} \qquad \frac{\Sigma;\vec{B};\Delta \vdash e: T_1 * T_2}{\Sigma;\vec{B};\Delta \vdash \mathrm{fst}(e):T_1} \]

Note that there is no type of the form T₁*P₂ (for the sake of simplicity). The following version is invalid:

\[ \frac{\Sigma;\vec{B};\Delta \vdash e: P_1 * T_2}{\Sigma;\vec{B};\Delta \vdash \mathrm{fst}(e):P_1} \]

because a p-typing rule cannot have any t-typing rule as its premiss. Instead, the following typing rule is introduced as the elimination rule for P₁*T₂:

\[ \frac{\Sigma;\vec{B};\Delta \vdash e: P_1 * T_2 \qquad \Sigma;\vec{B};\Delta,x_1:P_1,x_2:T_2 \vdash e_0:T_0}{\Sigma;\vec{B};\Delta \vdash \mathrm{let}\ \langle x_1,x_2\rangle = e\ \mathrm{in}\ e_0 : T_0} \]

As yet another example, let us take a look at the following rule:

\[ \frac{\Sigma;\vec{B};\Delta \vdash e_1: \hat{T}_1 \to \hat{T}_2 \qquad \Sigma;\vec{B};\Delta \vdash e_2: \hat{T}_1}{\Sigma;\vec{B};\Delta \vdash \mathrm{app}(e_1,e_2): \hat{T}_2} \]

which yields the following three versions:

\[ \frac{\Sigma;\vec{B};\Delta \vdash e_1: P_1 \to P_2 \qquad \Sigma;\vec{B};\Delta \vdash e_2: P_1}{\Sigma;\vec{B};\Delta \vdash \mathrm{app}_{pp}(e_1,e_2): P_2} \qquad \frac{\Sigma;\vec{B};\Delta \vdash e_1: P_1 \to T_2 \qquad \Sigma;\vec{B};\Delta \vdash e_2: P_1}{\Sigma;\vec{B};\Delta \vdash \mathrm{app}_{tp}(e_1,e_2): T_2} \qquad \frac{\Sigma;\vec{B};\Delta \vdash e_1: T_1 \to T_2 \qquad \Sigma;\vec{B};\Delta \vdash e_2: T_1}{\Sigma;\vec{B};\Delta \vdash \mathrm{app}_{tt}(e_1,e_2): T_2} \]

The first one is a p-typing rule while the other two are t-typing rules.

In the extended system, the two sorts bool and prop are intimately related but are also fundamentally different. Gaining a solid understanding of the relation between these two is the key to understanding the design. One may see prop as an internalized version of bool. Given a static boolean term B, its truth value is determined by a constraint-solver outside the language. Given a static term P of the sort prop, a proof of P can be constructed inside the language to attest to the validity of the boolean term encoded by P. For clarification, let us see a simple example illustrating the relation between bool and prop in concrete terms.
Figure: a dataprop for encoding the factorial function.

dataprop fact_p(int, int) =
  | fact_p_bas(0, 1) of ()
  | {n:nat}{r:int} fact_p_ind(n+1, (n+1)*r) of fact_p(n, r)

Figure: a static predicate and two associated proof functions.

stacst fact_b : (int, int) -> bool

praxi fact_b_bas (
  // argless
) : [fact_b(0, 1)] unit_p

praxi fact_b_ind {n:int}{r:int} (
  // argless
) : [n >= 0 && fact_b(n, r) ->> fact_b(n+1, (n+1)*r)] unit_p

In Figure <ref>, the dataprop fact_p is associated with two proof constructors that are assigned the following c-types (or, more precisely, c-props):

fact_p_bas : () ⇒ fact_p(0, 1)
fact_p_ind : ∀n:nat.∀r:int. (fact_p(n, r)) ⇒ fact_p(n+1, (n+1)·r)

Let fact(n) be the value of the factorial function on n, where n ranges over natural numbers. Given a natural number n and an integer r, the prop fact_p(n, r) encodes the relation fact(n)=r. In other words, if a proof of the prop fact_p(n, r) can be constructed, then fact(n) equals r.

In Figure <ref>, a static predicate fact_b is introduced, which corresponds to fact_p. Given a natural number n and an integer r, fact_b(n, r) simply means fact(n)=r. The two proof functions fact_b_bas and fact_b_ind are assigned the following c-props:

fact_b_bas : () ⇒ (fact_b(0, 1)) ∧ unit_p
fact_b_ind : ∀n:int.∀r:int. () ⇒ ((n≥0 ∧ fact_b(n, r) ⊃ fact_b(n+1, (n+1)·r)) ∧ unit_p)

where unit_p is the unit prop (instead of the unit type) that encodes the static truth value ⊤. Note that the keyword praxi is used to introduce proof functions that are treated as axioms.

Figure: a verified implementation of the factorial function.

fun f_fact_p {n:nat} (
  n: int(n)
) : [r:int] (fact_p(n, r) | int(r)) = let
  //
  fun loop {i:nat | i <= n}{r:int} (
    pf: fact_p(i, r) | i: int(i), r: int(r)
  ) : [r:int] (fact_p(n, r) | int(r)) =
    if i < n
      then loop(fact_p_ind(pf) | i+1, (i+1)*r)
      else (pf | r)
  // end of [loop]
  //
in
  loop(fact_p_bas() | 0(*i*), 1(*r*))
end // end of [f_fact_p]

Figure: another verified implementation of the factorial function.

fun f_fact_b {n:nat} (
  n: int(n)
) : [r:int] (fact_b(n, r) && int(r)) = let
  //
  prval () = solver_assert(fact_b_bas)
  prval () = solver_assert(fact_b_ind)
  //
  fun loop {i:nat | i <= n}{r:int | fact_b(i, r)} (
    i: int(i), r: int(r)
  ) : [r:int] (fact_b(n, r) && int(r)) =
    if i < n then loop(i+1, (i+1)*r) else (r)
  // end of [loop]
  //
in
  loop(0, 1)
end // end of [f_fact_b]

In Figure <ref>, a verified implementation of the factorial function is given. Given a natural number n, f_fact_p returns an integer r paired with a proof of fact_p(n, r) that attests to the validity of fact(n)=r. Note that this implementation makes explicit use of proofs. The constraints generated from type-checking the code in Figure <ref> are quantifier-free, and they can be readily solved by the built-in constraint-solver (based on linear integer programming) for ATS.

In Figure <ref>, another verified implementation of the factorial function is given. Given a natural number n, f_fact_b returns an integer r plus the assertion fact_b(n, r), which states fact(n)=r. This implementation does not make explicit use of proofs. Applying the keyword solver_assert to a proof turns the prop of the proof into a static boolean term (of the same meaning) and then adds the term as an assumption to be used for solving the constraints generated subsequently in the same scope. For instance, the two applications of solver_assert essentially add the following two assumptions:

fact_b(0, 1)
∀n:int.∀r:int. n≥0 ∧ fact_b(n, r) ⊃ fact_b(n+1, (n+1)·r)

Note that the second assumption is universally quantified.
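After type- and proof-erasure, both verified implementations collapse to the same untyped tail-recursive loop; the following Python sketch (illustrative only) shows that common erased form.

def f_fact(n: int) -> int:
    # Erased form of f_fact_p / f_fact_b: proofs and static
    # arguments are gone; only the loop on (i, r) remains.
    def loop(i: int, r: int) -> int:
        return loop(i + 1, (i + 1) * r) if i < n else r
    return loop(0, 1)

assert f_fact(5) == 120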
In general, solving constraints involving quantifiers is much more difficult than solving those that are quantifier-free. For instance, the constraints generated from type-checking the code in Figure <ref> cannot be solved by the built-in constraint-solver for ATS. Instead, these constraints need to be exported so that external constraint-solvers (for instance, one based on the Z3 theorem-prover <cit.>) can be invoked to solve them.

By comparing these two verified implementations of the factorial function, one sees a concrete case where PwTP (as is done in Figure <ref>) is employed to simplify the constraints generated from type-checking. This kind of constraint simplification through PwTP is a form of internalization of constraint-solving, and it can often play a pivotal rôle in practice, especially when there is no effective method available for solving general unsimplified constraints.

Instead of assigning (call-by-value) dynamic semantics to the dynamic terms of the extended system directly, a translation often referred to as proof-erasure is to be defined that turns each dynamic term into one of the base system with the same dynamic semantics. Given a sort σ, its proof-erasure ‖σ‖ is the one in which every occurrence of prop in σ is replaced with bool. Given a static variable context Σ, its proof-erasure ‖Σ‖ is obtained by replacing each declaration a:σ with a:‖σ‖. For every static constant sc of the c-sort (σ₁,…,σₙ) ⇒ σ, it is assumed that there exists a corresponding sc' of the c-sort (‖σ₁‖,…,‖σₙ‖) ⇒ ‖σ‖; this corresponding sc' may be denoted by ‖sc‖. Note that it is possible to have ‖sc₁‖=‖sc₂‖ for different constants sc₁ and sc₂. Let us assume the existence of the following static constants:

∧ : (bool, bool) ⇒ bool
⊃ : (bool, bool) ⇒ bool
∀_σ : (σ ⇒ bool) ⇒ bool
∃_σ : (σ ⇒ bool) ⇒ bool

Note that the symbols referring to these static constants are all overloaded. Naturally, ∧ and ⊃ are interpreted as boolean conjunction and boolean implication, respectively, and ∀_σ and ∃_σ are interpreted as the standard universal quantification and existential quantification, respectively. For instance, some pairs of corresponding static constants are listed as follows:
* The boolean implication function ⊃ corresponds to the prop predicate ≤_pr.
* The boolean implication function ⊃ corresponds to the prop constructor ⊃ of the c-sort (bool, prop) ⇒ prop.
* The boolean implication function ⊃ corresponds to the prop constructor → of the c-sort (prop, prop) ⇒ prop.
* The boolean conjunction function ∧ corresponds to the prop constructor ∧ of the c-sort (bool, prop) ⇒ prop.
* The boolean conjunction function ∧ corresponds to the prop constructor * of the c-sort (prop, prop) ⇒ prop.
* The type constructor → of the c-sort (prop, type) ⇒ type corresponds to the type constructor ⊃ of the c-sort (bool, type) ⇒ type.
* The type constructor * of the c-sort (prop, type) ⇒ type corresponds to the type constructor ∧ of the c-sort (bool, type) ⇒ type.
* For each sort σ, the universal quantifier ∀_σ of the sort (σ ⇒ prop) ⇒ prop corresponds to the universal quantifier ∀_{‖σ‖} of the sort (‖σ‖ ⇒ bool) ⇒ bool.
* For each sort σ, the existential quantifier ∃_σ of the sort (σ ⇒ prop) ⇒ prop corresponds to the existential quantifier ∃_{‖σ‖} of the sort (‖σ‖ ⇒ bool) ⇒ bool.

For every static term s, ‖s‖ is the static term obtained by replacing in s each σ with ‖σ‖ and each sc with ‖sc‖.

Proposition. Assume that Σ ⊢ s:σ is derivable. Then ‖Σ‖ ⊢ ‖s‖:‖σ‖ is also derivable.
Proof. By induction on the sorting derivation of Σ ⊢ s:σ.
For a sequence B⃗ of static boolean terms, ‖B⃗‖ is the sequence obtained by applying ‖·‖ to each B in B⃗. There are two functions ·_p and ·_t for mapping a given dynamic variable context Δ to a sequence of boolean terms and a dynamic variable context, respectively:
* Δ_p is the sequence of boolean terms B⃗ such that each B in B⃗ is ‖P‖ for some x:P declared in Δ.
* Δ_t is the dynamic variable context such that each declaration in it is of the form x:‖T‖ for some x:T declared in Δ.

Figure: the proof-erasure function ‖·‖ on dynamic terms.

‖x‖ = x
‖dc{s⃗}(e⃗)‖ = dc{‖s⃗‖}(‖e⃗‖)
‖⟨e₁,e₂⟩_pt‖ = ∧(‖e₂‖)    ‖⟨e₁,e₂⟩_tt‖ = ⟨‖e₁‖,‖e₂‖⟩
‖fst(e)‖ = fst(‖e‖)    ‖snd(e)‖ = snd(‖e‖)
‖let ⟨x_p,x_t⟩ = e₁ in e₂‖ = let x_t = ‖e₁‖ in ‖e₂‖
‖lam_pt x.e‖ = ⊃⁺(‖e‖)    ‖lam_tt x.e‖ = lam x.‖e‖
‖app_tp(e₁,e₂)‖ = ⊃⁻(‖e₁‖)    ‖app_tt(e₁,e₂)‖ = app(‖e₁‖,‖e₂‖)
‖⊃⁺(e)‖ = ⊃⁺(‖e‖)    ‖⊃⁻(e)‖ = ⊃⁻(‖e‖)    ‖∧(e)‖ = ∧(‖e‖)
‖let x=e₁ in e₂‖ = let x=‖e₁‖ in ‖e₂‖
‖slam a.e‖ = slam a.‖e‖    ‖sapp(e,s)‖ = sapp(‖e‖,‖s‖)

The proof-erasure function on dynamic terms is defined in Figure <ref>. Clearly, given a dynamic term e in the extended system, ‖e‖ is a dynamic term in the base system if it is defined. As the proof-erasure of ≤_pr is chosen to be the boolean implication function, it needs to be assumed that Σ;B⃗ ⊨ P₁ ≤ P₂ implies ‖Σ‖;‖B⃗‖ ⊨ ‖P₁‖ ⊃ ‖P₂‖.

Lemma [Constraint Internalization]. Assume that the typing judgement Σ;B⃗;Δ ⊢ e:P is derivable. Then the constraint ‖Σ‖;‖B⃗‖,Δ_p ⊨ ‖P‖ holds.

Proof. By structural induction on the typing derivation 𝒟 of Σ;B⃗;Δ ⊢ e:P. Note that the subsumption rule is handled by the assumption that Σ;B⃗ ⊨ P₁ ≤ P₂ implies ‖Σ‖;‖B⃗‖ ⊨ ‖P₁‖ ⊃ ‖P₂‖ for any props P₁ and P₂.
* Assume that the last applied rule in 𝒟 is the pair rule:
\[ \frac{\mathcal{D}_1::\Sigma;\vec{B};\Delta \vdash e_1:P_1 \qquad \mathcal{D}_2::\Sigma;\vec{B};\Delta \vdash e_2:P_2}{\Sigma;\vec{B};\Delta \vdash \langle e_1,e_2\rangle_{pp} : P_1 * P_2} \]
where P=P₁*P₂. By the induction hypothesis on 𝒟₁, ‖Σ‖;‖B⃗‖,Δ_p ⊨ ‖P₁‖ holds. By the induction hypothesis on 𝒟₂, ‖Σ‖;‖B⃗‖,Δ_p ⊨ ‖P₂‖ holds. Note that ‖P‖=‖P₁*P₂‖=‖P₁‖∧‖P₂‖, where ∧ stands for boolean conjunction. Therefore, ‖Σ‖;‖B⃗‖,Δ_p ⊨ ‖P‖ holds.
* Assume that the last applied rule in 𝒟 is the first or second projection. This case immediately follows from the fact that ‖P₁*P₂‖=‖P₁‖∧‖P₂‖ for any props P₁ and P₂, where ∧ stands for boolean conjunction.
* Assume that the last applied rule in 𝒟 is the lambda rule:
\[ \frac{\mathcal{D}_1::\Sigma;\vec{B};\Delta,x_1:P_1 \vdash e_2:P_2}{\Sigma;\vec{B};\Delta \vdash \mathrm{lam}_{pp}\ x_1.e_2 : P_1 \to P_2} \]
where P=P₁→P₂. By the induction hypothesis on 𝒟₁, ‖Σ‖;‖B⃗‖,Δ_p,‖P₁‖ ⊨ ‖P₂‖ holds. By the regularity rules, ‖Σ‖;‖B⃗‖,Δ_p ⊨ ‖P₂‖ holds whenever ‖Σ‖;‖B⃗‖,Δ_p ⊨ ‖P₁‖ holds. Therefore, ‖Σ‖;‖B⃗‖,Δ_p ⊨ ‖P₁‖⊃‖P₂‖ holds, where ⊃ stands for boolean implication. Note that ‖P‖=‖P₁‖⊃‖P₂‖, and this case concludes.
* Assume that the last applied rule in 𝒟 is the application rule:
\[ \frac{\mathcal{D}_1::\Sigma;\vec{B};\Delta \vdash e_1:P_1 \to P_2 \qquad \mathcal{D}_2::\Sigma;\vec{B};\Delta \vdash e_2:P_1}{\Sigma;\vec{B};\Delta \vdash \mathrm{app}_{pp}(e_1,e_2): P_2} \]
where P=P₂. By the induction hypothesis on 𝒟₁, ‖Σ‖;‖B⃗‖,Δ_p ⊨ ‖P₁‖⊃‖P₂‖ holds, where ⊃ stands for boolean implication. By the induction hypothesis on 𝒟₂, the constraint ‖Σ‖;‖B⃗‖,Δ_p ⊨ ‖P₁‖ holds. Therefore, the constraint ‖Σ‖;‖B⃗‖,Δ_p ⊨ ‖P₂‖ also holds.
The rest of the cases can be handled similarly.

Note that a proof in the extended system can be non-constructive, as a proof is not expected to have any computational meaning. In particular, one can extend the proof construction with any kind of reasoning based on classical logic (e.g., double-negation elimination).

If a c-type assigned to a dynamic (proof) constant is of the form ∀Σ.(P⃗) ⇒ P₀, then it is assumed that the following constraint holds:

∅;∅ ⊨ ∀‖Σ‖.(‖P⃗‖ ⊃ ‖P₀‖)

For instance, the c-types assigned to fact_b_bas and fact_b_ind imply the validity of the following constraints:

∅;∅ ⊨ fact_b(0, 1)
∅;∅ ⊨ ∀n:int.∀r:int. (n≥0 ∧ fact_b(n, r) ⊃ fact_b(n+1, (n+1)·r))

which are encoded directly into the c-types assigned to fact_b_bas and fact_b_ind. If a c-type T̂ is of the form ∀Σ.(P⃗,T₁,…,Tₙ) ⇒ T₀, then ‖T̂‖ is defined as follows:

∀‖Σ‖.(‖P⃗‖ ⊃ ((‖T₁‖,…,‖Tₙ‖) ⇒ ‖T₀‖))

If a dynamic constant dc is assigned the c-type T̂ in the extended system, then it is assumed to be of the c-type ‖T̂‖ in the base system.

Theorem. Assume that Σ;B⃗;Δ ⊢ e:T is derivable in the extended system. Then ‖Σ‖;‖B⃗‖,Δ_p;Δ_t ⊢ ‖e‖:‖T‖ is derivable in the base system.

Proof. By structural induction on the typing derivation 𝒟 of Σ;B⃗;Δ ⊢ e:T.
* Assume that the last applied rule in 𝒟 is the elimination rule for P₁*T₂:
\[ \frac{\mathcal{D}_1::\Sigma;\vec{B};\Delta \vdash e : P_1 * T_2 \qquad \mathcal{D}_2::\Sigma;\vec{B};\Delta,x_1:P_1,x_2:T_2 \vdash e_0:T_0}{\Sigma;\vec{B};\Delta \vdash \mathrm{let}\ \langle x_1,x_2\rangle = e\ \mathrm{in}\ e_0 : T_0} \]
where T=T₀. By the induction hypothesis on 𝒟₁, there exists the following derivation in the base system:
𝒟'₁ :: ‖Σ‖;‖B⃗‖,Δ_p;Δ_t ⊢ ‖e‖ : ‖P₁‖∧‖T₂‖
By the induction hypothesis on 𝒟₂, there exists the following derivation in the base system:
𝒟'₂ :: ‖Σ‖;‖B⃗‖,Δ_p,‖P₁‖;Δ_t,x₂:‖T₂‖ ⊢ ‖e₀‖ : ‖T₀‖
Applying the elimination rule for the asserting type ‖P₁‖∧‖T₂‖ to 𝒟'₁ and 𝒟'₂ yields the following derivation:
𝒟' :: ‖Σ‖;‖B⃗‖,Δ_p;Δ_t ⊢ let x₂=‖e‖ in ‖e₀‖ : ‖T₀‖
Note that ‖let ⟨x₁,x₂⟩=e in e₀‖ equals let x₂=‖e‖ in ‖e₀‖, and the case concludes.
* Assume that the last applied rule in 𝒟 is the type-prop application rule:
\[ \frac{\mathcal{D}_1::\Sigma;\vec{B};\Delta \vdash e_1: P_1 \to T_2 \qquad \mathcal{D}_2::\Sigma;\vec{B};\Delta \vdash e_2: P_1}{\Sigma;\vec{B};\Delta \vdash \mathrm{app}_{tp}(e_1,e_2): T_2} \]
where e is app_tp(e₁,e₂) and T=T₂. By the induction hypothesis on 𝒟₁, there exists the following derivation in the base system:
𝒟'₁ :: ‖Σ‖;‖B⃗‖,Δ_p;Δ_t ⊢ ‖e₁‖ : ‖P₁‖⊃‖T₂‖
Applying Lemma <ref> (Constraint Internalization) to 𝒟₂ yields that the constraint ‖Σ‖;‖B⃗‖,Δ_p ⊨ ‖P₁‖ is valid. Applying the elimination rule for the guarded type ‖P₁‖⊃‖T₂‖ to 𝒟'₁ and this valid constraint yields the following derivation:
‖Σ‖;‖B⃗‖,Δ_p;Δ_t ⊢ ⊃⁻(‖e₁‖) : ‖T₂‖
Note that ‖e‖ equals ⊃⁻(‖e₁‖), and the case concludes.
The rest of the cases can be handled similarly.

By Theorem <ref>, the proof-erasure of a program is well-typed in the base system if the program itself is well-typed in the extended system. In other words, Theorem <ref> justifies PwTP as an approach to internalizing constraint-solving through explicit proof-construction.

§ RELATED WORK AND CONCLUSION

Constructive type theory, which was originally proposed by Martin-Löf for the purpose of establishing a foundation for mathematics, requires pure reasoning on programs. Generalizing as well as extending Martin-Löf's work, the framework Pure Type System (PTS) offers a simple and general approach to designing and formalizing type systems. However, type equality depends on program equality in the presence of dependent types, making it highly challenging to accommodate effectful programming features, as these features often greatly complicate the definition of program equality <cit.>.

The framework Applied Type System (ATS) <cit.> introduces a complete separation between statics, where types are formed and reasoned about, and dynamics, where programs are constructed and evaluated, thus eliminating by design the need for pure reasoning on programs in the presence of dependent types. The development of ATS primarily unifies and also extends the previous studies on both Dependent ML (DML) <cit.> and guarded recursive datatypes <cit.>. DML enriches ML with a restricted form of dependent datatypes, allowing for specification and inference of significantly more precise type information (when compared to ML), and guarded recursive datatypes can be thought of as an impredicative form of dependent types in which type indexes are themselves types. Given the similarity between these two forms of types, it is only natural to seek a unified presentation for them. Indeed, both DML-style dependent types and guarded recursive datatypes are accommodated in ATS.

In terms of theorem-proving, there is a fundamental difference between ATS and various theorem-proving systems such as NuPrl <cit.> (based on Martin-Löf's constructive type theory) and Coq <cit.> (based on the calculus of construction <cit.>). In ATS, proof construction is solely meant for constraint simplification, and proofs are not expected to contain any computational meaning. On the other hand, proofs in NuPrl and Coq are required to be constructive, as they are meant to support program extraction.

The theme of combining programming with theorem-proving is also present in the programming language Ωmega <cit.>. The type system of Ωmega is largely built on top of a notion called equality constrained types (a.k.a.
phantom types <cit.>), which are closely related to the notion of guarded recursive datatypes <cit.>. In Ωmega, there seems to be no strict separation between programs and proofs. In particular, proofs need to be constructed at run-time. In addition, an approach to simulating dependent types through the use of type classes in Haskell is given in <cit.>, which is closely related to proof construction in the design of ATS. Please also see <cit.> for a critique of the practicality of simulating dependent types in Haskell.

In summary, a framework is presented in this paper to facilitate the design and formalization of type systems to support practical programming. With a complete separation between statics and dynamics, the framework removes by design the need for pure reasoning on programs in the presence of dependent types. Additionally, it allows programming and theorem-proving to be combined in a syntactically intertwined manner, providing the programmer with an approach to internalizing constraint-solving through explicit proof-construction. As a minimalist formulation, the base system is first presented and its type-soundness formally established. Subsequently, the base system is extended so as to support programming with theorem-proving, and the correctness of this extension is proven on the basis of a translation often referred to as proof-erasure, which turns each well-typed program in the extended system into a corresponding well-typed program in the base system of the same dynamic semantics.
"authors": [
"Hongwei Xi"
],
"categories": [
"cs.PL",
"cs.LO"
],
"primary_category": "cs.PL",
"published": "20170325123311",
"title": "Applied Type System: An Approach to Practical Programming with Theorem-Proving"
} |
Motivated by the theory of quasi-determinants, we study non-commutative algebras of quasi-Plücker coordinates. We prove that these algebras provide new examples of non-homogeneous quadratic Koszul algebras by showing that their quadratic duals have quadratic Gröbner bases.

§ INTRODUCTION

The Koszul property of the commutative quadratic algebra of Plücker coordinates is a well-known fact (see <cit.>*Theorem 14.6 for a textbook exposition). In this paper we introduce and study non-commutative analogues of this algebra, using the quasi-Plücker coordinates defined in <cit.>*Section II. In particular, we establish the Koszul property for these non-homogeneous quadratic algebras.

We denote by n the set {1,2,…,n}. Given ordered sets I⊂J, we denote by J∖I the ordered set obtained by removing I, and by J|K the ordered set obtained by appending an ordered set K. The set {j} containing one element j is simply denoted by j.

§.§ Commutative Plücker Coordinates

For k≤n and a k×n-matrix A with commutative entries we can choose a subset I={i_1,…,i_k} of the column indices n and consider the Plücker coordinates

p_I(A) := det A(i_1,…,i_k),

the determinant of the k×k submatrix with columns corresponding to the indices in I. It is well-known (see e.g. <cit.>*Chapter VII.6) that the p_I(A) satisfy GL_n-invariance, skew-symmetry with respect to commuting columns, and the Plücker identity

∑_{t=1}^{k+1} (−1)^t p_{I|j_t}(A) p_{J∖j_t}(A) = 0,

for subsets I={i_1,…,i_{k−1}} and J={j_1,…,j_{k+1}} of the column indices. The ideal of all relations among the Plücker coordinates is generated by these relations (<cit.>*Chapter IV 5, cf. <cit.>*Section 3.1 or <cit.>*Theorem 4.4.5).

For example, let k=2<3=n. Then we can study a commutative algebra generated by elements p_12, p_13, p_23, and the Plücker relations add no extra relations. Letting k=2<4=n and choosing I={1}, J={2,3,4}, one obtains the classical identity

p_12 p_34 − p_13 p_24 + p_14 p_23 = 0.

For k=3 and n=6 we, for example, get the relations

p_123 p_456 − p_124 p_356 + p_125 p_346 − p_126 p_345 = 0,
p_123 p_245 − p_124 p_235 + p_125 p_234 = 0,

where I={1,2} and J={3,4,5,6} in Equation (<ref>) and J={2,3,4,5} in Equation (<ref>), plus similar relations interchanging the roles of the numbers in 6. One can consider the symbols p_I as generators of a quadratic commutative algebra, the quadratic quotient P_{k,n} of the polynomial algebra 𝕜[p_I | I⊂n] by the relations (<ref>) and skew-symmetry with respect to commuting indices. It is well-known that P_{k,n} is a Koszul ring, since the relations give a quadratic Gröbner basis. This was proved in <cit.>, and also follows from <cit.> (where Koszul rings are called wonderful rings), using results of <cit.>. The result of <cit.> was reinterpreted in Gröbner basis terminology in <cit.>; see also <cit.>*Theorem 14.6 for a textbook exposition.

The Hilbert series of P_{k,n} can be computed combinatorially using methods from <cit.>. In the above example of P_{2,4}, one obtains the closed formula (see <cit.>*Section 7)

H(P_{2,4},t) = (1+t)/(1−t)^5 = 1 + 6t + 20t² + 50t³ + 105t⁴ + 196t⁵ + O(t⁶).

The Plücker coordinates p_I(A) define an embedding of the Grassmannian G_{k,n} into projective space of dimension \binom{n}{k}−1.
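The three-term identity for k=2, n=4 is easy to check numerically; the following Python sketch samples a random 2×4 matrix and verifies p_12 p_34 − p_13 p_24 + p_14 p_23 = 0 up to floating-point error.

import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 4))

def p(i, j):
    # Plucker coordinate: 2x2 minor on columns i, j (1-indexed)
    return np.linalg.det(A[:, [i - 1, j - 1]])

lhs = p(1, 2) * p(3, 4) - p(1, 3) * p(2, 4) + p(1, 4) * p(2, 3)
assert abs(lhs) < 1e-12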
The coordinate ring of G_{k,n} via the Plücker embedding is the quadratic algebra P_{k,n} considered above.

§.§ Non-commutative Plücker Coordinates

Analogues of Plücker coordinates for a k×n-matrix with non-commuting entries are obtained using the theory of quasi-determinants <cit.> as ratios of two quasi-minors. More precisely, given a choice of two indices i,j∈n, a subset I⊂n of size k−1 with i∉I, and a matrix A with coefficients in a division ring, the quasi-Plücker coordinate q_ij^I is defined as the following ratio of non-commutative analogues of maximal minors:

\[
q_{ij}^I = q_{ij}^I(A) = \underbrace{\begin{vmatrix} a_{1i} & a_{1i_1} & \cdots & a_{1i_{k-1}} \\ \vdots & \vdots & & \vdots \\ a_{ki} & a_{ki_1} & \cdots & a_{ki_{k-1}} \end{vmatrix}_{si}^{-1}}_{p^{(s)}_{i|I}(A)^{-1}} \underbrace{\begin{vmatrix} a_{1j} & a_{1i_1} & \cdots & a_{1i_{k-1}} \\ \vdots & \vdots & & \vdots \\ a_{kj} & a_{ki_1} & \cdots & a_{ki_{k-1}} \end{vmatrix}_{sj}}_{p^{(s)}_{j|I}(A)},
\]

where |·|_{si} denotes the quasi-determinant with respect to the entry in row s and the column indexed by i; the ratio is independent of the choice of s, undefined if i∈I, and zero if j∈I. The following analogue of the Plücker relations holds for these non-commutative analogues of Plücker coordinates:

∑_{j∈L} q_ij^M q_ji^{L∖j} = 1.

In the case where the entries of A commute, (<ref>) recovers the classical relation (<ref>). Moreover, symmetry under changing the order of the elements of I holds, replacing skew-symmetry for these ratios, and q_ji^I is inverse to q_ij^I if non-zero. By considering the ratios q_ij, an additional relation appears:

q_ij^{N∖{i,j}} q_jm^{N∖{j,m}} = −q_im^{N∖{i,m}}.

See Section <ref> for the list of relations among quasi-Plücker coordinates. For example, in the case k=2 and n=4, Equation (<ref>) gives the formula

q_13^2 q_31^4 + q_14^2 q_41^3 = 1.

This translates to

p_12^{−1} p_32 p_34^{−1} p_14 + p_12^{−1} p_42 p_43^{−1} p_13 = 1.

If the elements p_ij commute, this equality reduces to the classical formula (<ref>).

As a second example, consider the case k=3 and n=6. Let M={1,2} and L={3,4,5}. Then, with i=6, we obtain the equation

q_63^{12} q_36^{45} + q_64^{12} q_46^{35} + q_65^{12} q_56^{34} = 1 ⟺ p_612^{−1} p_312 p_345^{−1} p_645 + p_612^{−1} p_412 p_435^{−1} p_635 + p_612^{−1} p_512 p_543^{−1} p_643 = 1.

Assuming that the variables commute, this recovers relation (<ref>). Similarly, Equation (<ref>) can be recovered using M={1,2}, L={2,3,4}, i=5.

Note that it was shown in <cit.>*Theorem 2.1.6 that any GL(n)-invariant rational function over a free skew-field is a rational function of the quasi-Plücker coordinates. Moreover, <cit.>*Proposition 2.41 shows that, for k=2, the quasi-Plücker coordinates form a free skew-subfield within the free skew-field with 2n generators.

§.§ Quantum Plücker Coordinates

A quantum analogue of Equation (<ref>) was considered in <cit.>*Eq. (3.2c) in order to construct a quantum analogue of the coordinate algebra P_{k,n} of the Grassmannian G_{k,n}. For this, more general exchange relations appear, called Young symmetry relations:

∑_{Λ⊆I, |Λ|=r} (−q)^{−ℓ(I∖Λ|Λ)} f_{I∖Λ} f_{Λ|J} = 0,

for 1≤r≤d, and I, J index sets of size d+r and d−r, respectively. Here, we use notation adapted from <cit.>*Eq. (9). The classical Plücker relations (<ref>) can be recovered as the case r=1, q=1. It was further shown in <cit.> that the relations (<ref>) can be reduced successively to relations with r=1. In fact, the Young symmetry relations are consequences of the quasi-Plücker relations (<ref>) <cit.>*Theorem 28.

§.§ Sagbi and Gröbner Bases for Coordinates of Grassmannians

In the commutative setting, maximal minors form a sagbi basis (canonical subalgebra basis) according to <cit.>*3.2.9. (The relations among these maximal minors give a quadratic Gröbner basis, as mentioned in Section <ref>.) This result was generalized to another approach to quantum Grassmannians, which emerges from geometry and quantum cohomology and is a commutative construction (rather than using q-commutators).
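Before moving on, the commutative specialization of the quasi-Plücker relation above is also easy to test numerically: with commuting entries, q_ij^I(A) becomes the ratio p_{i|I}(A)^{−1} p_{j|I}(A) of column-ordered minors. The sketch below checks the k=2, n=4 identity q_13^2 q_31^4 + q_14^2 q_41^3 = 1 for a random real matrix.

import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((2, 4))

def p(i, m):
    # column-ordered 2x2 minor with columns (i, m), 1-indexed
    return np.linalg.det(A[:, [i - 1, m - 1]])

def q(i, j, m):
    # commutative specialization: q_ij^{m} = p_{i|m}^{-1} p_{j|m}
    return p(j, m) / p(i, m)

lhs = q(1, 3, 2) * q(3, 1, 4) + q(1, 4, 2) * q(4, 1, 3)
assert abs(lhs - 1.0) < 1e-10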
The coordinate ring, denoted by 𝕜[𝒞_{k,n}^q], of the quantum Grassmannian consists of maximal graded minors (of degree up to q∈ℕ) of k×n-matrices with graded entries. In <cit.>*Theorem 1 it is proved that these maximal graded minors give a sagbi basis for the coordinate ring 𝕜[𝒞_{k,n}^q] within the polynomial ring of graded entries of the matrix. Further, the relations among these maximal graded minors have a quadratic Gröbner basis <cit.>*Theorem 2.

§.§ This Paper's Approach

This paper takes the approach of starting with a quadratic algebra of quasi-Plücker coordinates (introduced in Section <ref>). This algebra is quadratic-linear, and a theory for the Koszulity of such algebras has been developed in <cit.>. Our main result is that the associated quadratic algebra of this algebra has a quadratic Gröbner basis. Hence the algebra of non-commutative Plücker coordinates is a non-homogeneous Koszul algebra (Theorem <ref>). In Section <ref> we consider colimits of these algebras, varying k≥2. We further study a second version of algebras of quasi-Plücker coordinates, which is not quadratic-linear but also non-homogeneous Koszul, in Section <ref>. In Section <ref> we study the Koszul dual dg algebras explicitly in the case k=2, and we finish the exposition by considering an algebra of non-commutative flag coordinates, which is also non-homogeneous Koszul, in Section <ref>.

There are different approaches to non-commutative Grassmannian coordinate rings, see e.g. <cit.>, which are not discussed here.

§ DEFINITION OF THE ALGEBRA R_n^{(k)}

We want to define a quadratic algebra of quasi-Plücker coordinates. As outlined in Section <ref>, quasi-Plücker coordinates were constructed using quasi-determinants in <cit.>*Section II, cf. <cit.>*4.3. For fixed integers n, k ≥ 2 we define the algebra Q_n^{(k)} as having generators q_ij^I, where I⊂n has size k−1 and i∉I, which satisfy the following relations, obtained from <cit.>*4.3:
(i) q_ij^I does not depend on the ordering of the elements of I;
(ii) q_ij^I=0 whenever j∈I and i≠j;
(iii) q_ii^I=1, and q_ij^I q_jl^I = q_il^I;
(iv) q_ij^{N∖{i,j}} q_jm^{N∖{j,m}} = −q_im^{N∖{i,m}};
(v) if i∉M, then ∑_{j∈L} q_ij^M q_ji^{L∖{j}} = 1.
Relation (iv) is called non-commutative skew-symmetry, and (v) is a non-commutative analogue of the Plücker relations.

The algebra Q_n^{(2)} is studied in <cit.>, as the algebra of non-commutative sectors, where it is denoted by 𝒬_n. Given a k×n-matrix A with entries in a division ring, we note that the description of q_ij^I in terms of quasi-determinants in Equation (<ref>) provides a morphism of algebras from Q_n^{(k)} to the skew-field generated by the non-commutative entries a_ij of the matrix A.

We also consider the subalgebra R_n^{(k)} of Q_n^{(k)} generated by those of the q_ij^I for which i<j. The restriction to R_n^{(k)} can be justified by noting that the skew-fields generated by the images of Q_n^{(k)} and R_n^{(k)} in the skew-field generated by the matrix entries a_ij coincide. One advantage of considering R_n^{(k)} is that it admits a presentation as a quadratic-linear algebra:

Proposition. The subalgebra R_n^{(k)} can be described by the relations

q_ij^I q_jl^I = q_il^I, ∀ i<j<l, i,j∉I,
∑_{j=1}^{k−1} q_{l_0 l_j}^M q_{l_j l_k}^{L∖{l_j,l_k}} + q_{l_0 l_k}^{L∖{l_0,l_k}} = q_{l_0 l_k}^M,

where L={l_0<l_1<…<l_k}, l_0∉M, which are read in the way that q_ij^M=0 if j∈M. Hence, R_n^{(k)} is a quadratic-linear algebra.

Proof. Starting with formula (v), we distinguish three cases, depending on the value of the index i. In the case when l_j<i<l_{j+1} for some j=1,…,k−1, we obtain Equation (<ref>) by multiplying with q_{l_1,i}^M on the left and q_{i,l_k}^{L∖l_k} on the right.
If i<l_1, it suffices to multiply by q_{i,l_k}^{L∖l_k} on the right; and if l_k<i, it is enough to multiply by q_{l_1,i}^M on the left. In all three cases, we obtain the same relation after relabelling so that the index set L contains i in the correct position. These are all possible relations between generators q_ij^I with i<j, as in this case Equation (iv) is a special case of Equation (<ref>), with M=L∖{l_0,l_j} for some 1≤j≤k.

Let us consider the case k=2. In this case, the skew-symmetry relation (iv) and the Plücker relation (v) in Q_n^{(2)} become

q_ij^l q_jl^i = −q_il^j,
q_ij^m q_ji^l + q_il^m q_li^j = 1,

where all indices are distinct. In this case, the algebra R_n^{(2)} has (n−2)\binom{n}{2} generators q_ij^l with i<j and l≠i,j. The relations governing this algebra are

q_ij^m q_jl^m − q_il^m = 0,
q_ij^m q_jl^i + q_il^j = q_il^m,

for all i<j<l and m an element distinct from i, j, l, and if m=l we have the relation

q_ij^l q_jl^i + q_il^j = 0.

§ KOSZULNESS OF THE ALGEBRA R_n^{(k)}

The Koszul property for quadratic algebras can, more generally, be studied for non-homogeneous quadratic algebras <cit.>*Chapter 5. A non-homogeneous quadratic algebra A is Koszul if the corresponding quadratic algebra A^{(0)}, obtained by taking the homogeneous parts of the quadratic relations, is Koszul. In this case, A^{(0)} is isomorphic to the associated graded algebra of A.

We shall prove that such an algebra A is Koszul by showing that the quadratic dual (A^{(0)})^! of A^{(0)} is Koszul (cf. <cit.>*Chapter 2, Corollary 3.3). This, in turn, is proved by showing that (A^{(0)})^! has a quadratic Gröbner basis of relations (giving a non-commutative PBW basis for the algebra) using the rewriting method, see e.g. <cit.>*Theorem 4.1.1.

The associated quadratic algebra (R_n^{(k)})^{(0)} is generated by the relations

q_ij^I q_jl^I = 0, ∀ i<j<l, i,j∉I,
∑_{j=1}^{k−1} q_{l_0 l_j}^M q_{l_j l_k}^{L∖{l_j,l_k}} = 0,

where L={l_0<l_1<…<l_k}, l_0∉M, which are read in the way that q_ij^M=0 if j∈M.

We consider the quadratic dual of (R_n^{(k)})^{(0)}, which is denoted by B_n^{(k)}. It consists of generators r_ij^I for i<j and i∉I, with r_ij^I=0 if j∈I. Again, we regard I as a strictly ordered set of indices.

The following lemma follows by carefully constructing a basis for the orthogonal complement of the subspace of relations (<ref>)–(<ref>) in degree two. We will write i<K<j to denote that i<l<j for every element l∈K.

Lemma. The algebra B_n^{(k)} is given by the relations

r_ij^I = 0, if j∈I,
r_ij^I r_ab^J = 0, unless j=a and either (i∈J and i<J∖i<b) or I=J,
r_ij^I r_jl^J = r_ij'^I r_j'l^{(J∖j')∪j}, ∀ j'∈J∖(J∩I), provided i<J∖i<l,

provided that i∉I, a∉J for (<ref>), and i∉I, j∉J for (<ref>).

Theorem. The algebra B_n^{(k)} has a quadratic non-commutative Gröbner basis, for k,n≥2, given by the relations (<ref>)–(<ref>) on generators r_ij^I, i,j∉I.

Note that for k≥n, B_n^{(k)}=0 is trivial as no subset of size k+1 of n can be chosen. In order to prove the theorem, we have to show that there are no obstructions of degree larger than two (see e.g. <cit.> for the terminology). We choose the following ordering on the generators r_ij^I: we first order by the size of (i,j) lexicographically; given equal subscripts, we order according to the lexicographic order on the superscripts I. The monomials are then ordered graded reverse lexicographically (the degrevlex order).
There are two different types of normal words of degree two:

r_ij^I r_jl^J, for i∈J and i < j < J∖{(J∩I)∪i} < l,
r_ij^I r_jl^I, for j∉I,

with i∉I in both cases. We claim that a basis for B_n^{(k)} is given by monomials of the form

r_{i_0,i_1}^{I_0} r_{i_1,i_2}^{I_1} ⋯ r_{i_{t−1},i_t}^{I_{t−1}},

with i_0<i_1<…<i_t, where for each j=1,…,t−1 we have, for r_{i_{j−1},i_j}^{I_{j−1}} r_{i_j,i_{j+1}}^{I_j}, either I_{j−1}=I_j, or i_{j−1}∈I_j and i_j < I_j∖{(I_j∩I_{j−1})∪i_{j−1}} < i_{j+1}.

To prove this, we note that an arbitrary non-zero monomial M might contain a degree-two sub-word of the form r_{i_{j−1},i_j}^{I_{j−1}} r_{i_j,i_{j+1}}^{I_j} where i_j is not necessarily smaller than all elements in I_j∖{(I_j∩I_{j−1})∪i_{j−1}}. In this case, we can replace the sub-word by r_{i_{j−1},i'_j}^{I_{j−1}} r_{i'_j,i_{j+1}}^{I_j}, where i'_j is smaller than all elements in I_j∖{(I_j∩I_{j−1})∪i_{j−1}}, using relation (<ref>). Assume that j corresponds to the right-most occurrence of such a degree-two sub-word. If now r_{i_{j−2},i_{j−1}}^{I_{j−2}} r_{i_{j−1},i_j}^{I_{j−1}} is of the same form, but there exists an element of I_{j−1} which is larger than i'_j, then the monomial M was zero by relation (<ref>). Hence, such a situation cannot occur, and by replacing all the non-normal degree-two sub-words of M we obtain that M equals a monomial of the form (<ref>).

It is now clear from the description of the monomial basis in (<ref>) that the quadratic relations given in Lemma <ref> give a non-commutative Gröbner basis (non-commutative PBW basis) for the algebra B_n^{(k)}. We note, in particular, that B_n^{(k)} is a monomial algebra if and only if k=2.

Corollary. The algebra B_n^{(k)} is Koszul, and hence the algebra R_n^{(k)} is non-homogeneous Koszul for all k,n≥2.

(i) Consider R_n^{(2)} for small values of n. The algebra B_3^{(2)} has a basis given by

1 < r_12^3 < r_13^2 < r_23^1 < r_12^3 r_23^1,

so the Hilbert series are

H(B_3^{(2)},t) = 1+3t+t²,
H(R_3^{(2)},t) = (1−3t+t²)^{−1} = 1+3t+8t²+21t³+55t⁴+144t⁵+O(t⁶).

According to Anick <cit.> (see <cit.>*Theorem 7.1), this implies that the global dimension of R_3^{(2)} equals two. The n-th coefficient of H(R_3^{(2)},t) is the 2(n−1)-th Fibonacci number.[According to the On-Line Encyclopaedia of Integer Sequences®, <https://oeis.org/A001906>.]

(ii) The Hilbert series for k=2 and n=4 is

H(R_4^{(2)},t) = (1−12t+12t²−5t³)^{−1} = 1 + 12t + 132t² + 1,445t³ + O(t⁴).

(iii) The Hilbert series for k=2 and n=5 is

H(R_5^{(2)},t) = (1−30t+50t²−45t³+17t⁴)^{−1} = 1 + 30t + 850t² + 24,045t³ + 680,183t⁴ + O(t⁵).

In general, the top degree of H(B_n^{(2)},t) is n−1. The leading coefficient h_{n−1} is given by

h_{n−1} = ∑_{i=3}^{n} (n−i+1)2^{n−i} = ∑_{i=0}^{n−3} (i+1)2^i,

while the coefficient h_1 = ½ n(n−1)(n−2). The other coefficients can be computed as

h_{n−l} = \binom{n}{l−1} ( l−1 + ∑_{i=0}^{n−l−2} (i+l)2^i ),

for 1≤l≤n−1. This can be seen by systematically counting normal words in the algebra B_n^{(2)}. Note that in top degree, these are of the form

r_12^{j_1} r_23^{j_2} ⋯ r_{n−1,n}^{j_{n−1}},

where for each i=2,…,n−1 we can have either j_i=j_{i−1} or j_i=i−1. In Equation (<ref>) we count such monomials where j_1=…=j_{i−2} for i=3,…,n separately. Using the same counting method for an arbitrary ordered subset of size n−l in n, Equation (<ref>) follows.

If k=3 and n=4, then r_12^{34} r_24^{13} is the only non-zero quadratic monomial in B_4^{(3)}, and hence

H(R_4^{(3)},t) = (1−6t+t²)^{−1} = 1 + 6t + 35t² + 204t³ + 1,189t⁴ + O(t⁵),

for which the coefficients satisfy the recursion a_n=6a_{n−1}−a_{n−2}, with a_0=1. In general, we find that H(R_n^{(n−1)},t) = (1 − ½n(n−1)t + t²)^{−1}, since the monomial r_12^{n∖{1,2}} r_2n^{n∖{1,n}} is the only non-zero quadratic monomial in B_n^{(n−1)}, and hence the coefficients of this Hilbert series satisfy the recursion a_n = ½n(n−1)a_{n−1} − a_{n−2}.
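The leading coefficient can be checked by direct enumeration; the following Python sketch counts the top-degree normal words r_12^{j_1}⋯r_{n−1,n}^{j_{n−1}} subject to j_i ∈ {j_{i−1}, i−1} and j_i ∉ {i, i+1}, and compares with the closed formula h_{n−1} = ∑_{i=0}^{n−3}(i+1)2^i.

def top_normal_words(n):
    """Count degree-(n-1) normal words of B_n^(2)."""
    count = 0
    for j1 in range(3, n + 1):             # r_12^{j1} needs j1 != 1, 2
        seqs = [[j1]]
        for i in range(2, n):
            new = []
            for s in seqs:
                for j in {s[-1], i - 1}:   # j_i = j_{i-1} or j_i = i-1
                    if j not in (i, i + 1):  # r_{i,i+1}^j needs j != i, i+1
                        new.append(s + [j])
            seqs = new
        count += len(seqs)
    return count

for n in range(3, 8):
    closed = sum((i + 1) * 2**i for i in range(n - 2))   # h_{n-1}
    assert top_normal_words(n) == closed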
§ KOSZULNESS OF THE COLIMIT ALGEBRA R_n

The relations in <cit.>*4.8.1 link the quasi-Plücker coordinates for k×n-matrices with those of (k−1)×n-matrices. In our algebraic setting, this gives the non-homogeneous relations

q_ij^J = q_ij^{J∪m} + q_im^J q_mj^{J∪i},

where i,m∉J. This relation links Q_n^{(k+1)} and Q_n^{(k)}, where J is a set of size k−1. We can inductively define the quadratic algebra Q_n^{(≤k)} as the coproduct of the algebras Q_n^{(k')}, for k'≤k, with the additional relations of the form (<ref>). Accordingly, we define the algebra of quasi-Plücker coordinates Q_n:

Q_n := Q_n^{(≤n)}.

Note that Q_n^{(k)} is trivial for k≥n, as then it is not possible to choose an index set of size k+1 in n. The colimit algebra Q_n is again a quadratic-linear algebra with finitely many generators.

The subalgebra R_n^{(≤k)} of Q_n^{(≤k)} generated by the q_ij^J with i<j can be described as the quotient of the colimit over the subalgebras R_n^{(k')} together with the relations (<ref>) for i<m<j. In the larger algebra Q_n^{(≤k)}, all relations of the form (<ref>) can be transformed into relations of the same form where the lower indices are in strictly increasing order. This can be checked by distinguishing cases depending on the order of {i,m,j} according to size, and multiplying by the correct inverse. Hence all the relations in the subalgebra R_n^{(≤k)} are of the same form. Therefore, we define the colimit algebra

R_n := R_n^{(≤n)}.

Theorem. The algebras R_n^{(≤k)} are quadratic-linear Koszul algebras, and hence the quadratic-linear algebra R_n is Koszul.

Proof. The quadratic part of the relation (<ref>) gives that

q_ij^J q_jm^{J∪i} = 0, ∀ i<j<m,

where i,j∉J. Consider the quadratic dual B_n^{(≤k)} of (R_n^{(≤k)})^{(0)}. In this algebra, all products of generators r_ij^J r_ab^K with different sizes of the index sets J, K are zero unless j=a and K=J∪i. We extend the linear ordering on generators by requiring that r_ij^I < r_ab^K if |I|<|K|, again using the degrevlex ordering on monomials. Then a non-commutative PBW basis is given by products M₁M₂⋯M_s of monomials of the form (<ref>) which give a non-zero product only if the last generator in M_t is of the form r_ij^K and the first generator of M_{t+1} has the form r_jl^{K∪i}. This shows that a quadratic non-commutative Gröbner basis exists for B_n^{(≤k)}. In particular, (R_n^{(≤k)})^{(0)} is Koszul, and so R_n^{(≤k)} is non-homogeneous Koszul.

Example. Consider the algebra R_4. Its quadratic dual has the PBW basis

r_12^3 < r_12^4 < r_13^2 < r_13^4 < r_14^2 < r_14^3 < r_23^1 < r_23^4 < r_24^1 < r_24^3 < r_34^1 < r_34^2 < r_12^{34} < r_13^{24} < r_14^{23} < r_23^{14} < r_24^{13} < r_34^{12} <
r_12^3 r_23^1 < r_12^3 r_24^1 < r_12^3 r_24^3 < r_12^3 r_24^{13} < r_12^4 r_23^1 < r_12^4 r_23^4 < r_12^4 r_24^1 < r_12^4 r_23^{14} < r_13^2 r_34^1 < r_13^2 r_34^2 < r_13^2 r_34^{12} < r_13^4 r_34^1 < r_23^1 r_34^1 < r_23^1 r_34^2 < r_23^1 r_34^{12} < r_23^4 r_34^2 <
r_12^3 r_23^1 r_34^1 < r_12^3 r_23^1 r_34^2 < r_12^3 r_23^1 r_34^{12} < r_12^4 r_23^1 r_34^1 < r_12^4 r_23^1 r_34^2 < r_12^4 r_23^1 r_34^{12} < r_12^4 r_23^4 r_34^2.

Hence the Hilbert series for R_4 is given by

H(R_4,t) = (1−18t+16t²−7t³)^{−1} = 1 + 18t + 308t² + 5,263t³ + 89,932t⁴ + O(t⁵).

§ THE ALGEBRAS Q_n^{(k)} AND Q_n ARE ALSO KOSZUL

The non-homogeneous quadratic algebras Q_n^{(k)} can also be shown to be Koszul. However, they are not quadratic-linear, as constant terms appear in the relations (cf. <cit.>*Chapter 5). We change the presentation from Section <ref> slightly: the algebra Q_n^{(k)} has generators q_ij^I, where |I|=k−1 and i∉I, subject to the relations
(i) q_ij^I does not depend on the ordering of the elements of I;
(ii) q_ij^I=0 whenever j∈I;
(iii) q_ii^I=1, and q_ij^I q_jl^I = q_il^I, i,j∉I;
(v') if i∉M, i∈L, then ∑_{j∈L∖{i}} q_ij^M q_jl^{L∖{j,l}} + q_il^{L∖{i,l}} = 0.
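The Hilbert series above (and those of the previous section) are determined by the inverse polynomials, so the listed coefficients can be reproduced by expanding the series; a short Python sketch:

def series_from_denominator(coeffs, n_terms):
    """Expand 1/(1 + c1 t + c2 t^2 + ...) for coeffs = [c1, c2, ...];
    returns [a0, a1, ...] via the recursion a_n = -sum c_k a_{n-k}."""
    a = [1]
    for n in range(1, n_terms):
        a.append(-sum(c * a[n - k - 1]
                      for k, c in enumerate(coeffs) if k < n))
    return a

# H(R_4, t) = (1 - 18t + 16t^2 - 7t^3)^(-1)
assert series_from_denominator([-18, 16, -7], 4) == [1, 18, 308, 5263]
# H(R_3^(2), t) = (1 - 3t + t^2)^(-1): every other Fibonacci number
assert series_from_denominator([-3, 1], 6) == [1, 3, 8, 21, 55, 144]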
The non-homogeneous quadratic algebras Q_n^(k) and Q_n^(≤k) (and hence, in particular, Q_n) are non-homogeneous Koszul.
The proof is similar to that for R_n^(k) in Theorem <ref>, but there are fewer restrictions on the order of the indices. We consider the quadratic dual C_n^(k) of the associated quadratic algebra (Q_n^(k))^(0) from <cit.>*Section 4.1. Denote the generators of this algebra by r_ij^K (dual to q_ij^K). Then r_ij^K r_ab^L = 0 if j≠a or, if j=a, r_ij^K r_jb^L = 0 if K≠L and i∉L. The relations in C_n^(k) are fully described by
r_ij^M r_jl^{L∖{j,l}} = r_ij'^M r_j'l^{L∖{j',l}}, ∀ j,j' ∈ L∖((L∩M)∪{i}),
requiring distinct sub-indices and i∈L. This means we have a rewriting rule
r_ij^M r_jl^L ↦ r_{i,l_{L∖M}}^M r_{l_{L∖M},l}^{L'},
where l_{L∖M} = min L∖((L∩M)∪{i}) and L' = (L∖{l_{L∖M}})∪{j}. One checks that for a critical triple r_ij^I r_jl^J r_lm^L, applying the rewriting rule to the first two generators and then to the last two generators gives a reduced monomial, and the same reduced monomial emerges if we apply the rewriting rules in the opposite order. Hence, the process of applying rewriting rules stabilizes after two steps. This means every critical pair is confluent, and hence the algebra is Koszul (cf. e.g. <cit.>*Section 4.1 for these general results and terminology). This implies that Q_n^(k) is non-homogeneous Koszul. Further, after adding relation (<ref>), the same remains true. Moreover, the process of passing to the quadratic part of the relations still commutes with taking the coproduct, and hence the algebras Q_n^(≤k) are also non-homogeneous Koszul.
A consequence of Theorem <ref> is that it gives a non-homogeneous PBW basis for Q_n^(k), cf. <cit.>*Sections 4.4, 5.2. Note that an alternative approach to finding a basis for an algebra of quasi-Plücker coordinates was given in <cit.>*Section 7.4.

§ DIFFERENTIAL GRADINGS ON THE QUADRATIC DUALS

Using the non-homogeneous quadratic duality of <cit.>*5.4, it follows that the algebras B_n^(k) are differentially graded (dg) algebras. That is, for each of these algebras there exists a graded map ∂ of degree one such that
∂^2 = 0, ∂(xy) = ∂(x)y + (−1)^{|x|} x ∂(y),
for homogeneous x, which is referred to as the differential. We study the case k=2 in more detail and relate it to certain refinements of triangles with labelled corners. To a generator r_ij^k with i<j and k≠i,j we associate the triangle with bottom corners labelled i (left) and j (right) and top corner labelled k; write (a,b;c) for the triangle with bottom corners a,b and top corner c, so that
r_ij^k ⟷ (i,j;k).
Consider three types of ways to add a corner to the triangle:
(<ref>) for i<l<j and l≠k, add a corner l on the bottom edge between i and j and join it to k, splitting (i,j;k) into (i,l;k) and (l,j;k); the refinement carries a minus sign;
(<ref>) for i<l<j and l≠k, add a corner l on the edge from j to k and join it to i, splitting (i,j;k) into (i,l;k) and (l,j;i); again with a minus sign;
(<ref>) for i<k<j and l≠i,k, add a corner labelled k on the edge from j to the top, relabel the old top corner l, and join the new corner to i, splitting the triangle into (i,k;l) and (k,j;i); with a plus sign.
The triangulations on the right-hand side correspond to the products
−r_il^k r_lj^k ⟷ (<ref>), −r_il^k r_lj^i ⟷ (<ref>), r_ik^l r_kj^i ⟷ (<ref>).
We can recover a quadratic monomial from a triangulation by reading from left to right, reflecting the second triangle in the cases (<ref>) and (<ref>) so that the left corner becomes the top corner.
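Stepping back to the proof sketched above, the rewriting rule for C_n^(k) can be transcribed directly. The following sketch is ours; in particular, the normality test (leave the pair unchanged when no candidate index is smaller than the inner index j) is our reading of the normal-form condition.

```python
def rewrite(pair):
    """One application of the rewriting rule stated above:
       r_ij^M r_jl^L  ->  r_{i,m}^M r_{m,l}^{L'},
    with m = min(L \ ((L & M) | {i})) and L' = (L \ {m}) | {j}.
    The relation requires i in L; generators are encoded as (i, j, frozenset)."""
    (i, j, M), (_, l, L) = pair
    cand = L - ((L & M) | {i})
    if not cand or min(cand) > j:
        return pair                                   # already normal
    m = min(cand)
    return ((i, m, M), (m, l, frozenset((L - {m}) | {j})))

# Example with n = 5, k = 3 (index sets of size 2):
p = ((1, 4, frozenset({2, 5})), (4, 5, frozenset({1, 3})))
q = rewrite(p)
print(q)                  # ((1, 3, {2, 5}), (3, 5, {1, 4}))
print(rewrite(q) == q)    # True: a second application changes nothing
```

The idempotence after one step is consistent with the two-step stabilization used in the confluence argument above.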
The map ∂_B(r_ij^k) is then the sum over all ways to triangulate the labelled triangle (i,j;k) by adding one corner in any of the three ways described above. The map ∂_B : B_n^(2) → B_n^(2) given by
∂_B(r_ij^k) = −∑_{l=i+1}^{j−1}( r_il^k r_lj^k + r_il^k r_lj^i ) + ∑_{l≠i} r_ik^l r_kj^i, if i<k<j,
∂_B(r_ij^k) = −∑_{l=i+1}^{j−1}( r_il^k r_lj^k + r_il^k r_lj^i ), otherwise,
is a differential for B_n^(2). We can explicitly compute the homology of these dg algebras in small examples: the homology of B_3^(2) has graded dimensions 1+2t, and the homology of B_4^(2) has graded dimensions 1+7t+2t^2. The algebras C_n^(k) will not give dg algebras, but rather examples of non-trivial curved dg algebras <cit.>*5.4, Definition 1.

§ NON-COMMUTATIVE FLAG COORDINATES

Note that in addition to the algebra of quasi-Plücker coordinates, one can consider the algebra of flag coordinates. Flag coordinates also generalize to non-commutative entries, using quasi-determinants <cit.>*Section II.2.7, <cit.>*Section 4.10. Given a k×n-matrix A, n≥k, choose distinct indices i,j_1,…,j_{k−1} in {1,…,n} and denote by
f_{i,{j_1,…,j_{k−1}}}(A) = | a_1i a_1j_1 ⋯ a_1j_{k−1} ; ⋮ ⋮ ⋱ ⋮ ; a_ki a_kj_1 ⋯ a_kj_{k−1} |_{k1}
the quasideterminant of the k×k matrix with columns indexed by i,j_1,…,j_{k−1}, which is independent of the order of {j_1,…,j_{k−1}}. These functions are referred to as non-commutative flag coordinates and were introduced in <cit.>. For a set I of smaller size, one can consider f_{i,I}(A) by restricting to the first |I|+1 rows of A. Then the following relations hold <cit.>*Section 4.10.2:
f_{i,I} f_{i,I∖k}^{−1} = −f_{k,(I∖k)∪i} f_{k,I∖k}^{−1}, ∀ k∈I,
∑_{i=1}^{k} f_{j_i,J∖j_i} f_{j_i,J∖{j_i,j_{i−1}}}^{−1} = 0,
where J = {j_1,…,j_k} and we also denote j_0 = j_k. For n≥2, we denote by F_n the non-homogeneous quadratic algebra with generators f_{i,I}, where I is a subset of {1,…,n} and i∉I, and relations given by (<ref>)–(<ref>) as well as
f_{i,I} f_{i,I}^{−1} = f_{i,I}^{−1} f_{i,I} = 1.
Note that by virtue of the relations
q_ij^I(A) = f_{i,I}(A)^{−1} f_{j,I}(A),
there exists a homomorphism of algebras from Q_n^(k) to the quotient skew-field of F_n <cit.>*Section 4.10. See also <cit.>*Proposition 70.
The algebras F_n are non-homogeneous Koszul.
Consider the quadratic dual G_n := (F_n^(0))^! of the homogeneous part of the relations (<ref>)–(<ref>). In this algebra, denoting the dual generator of f_{i,I} by g_{i,I}, we have
g_{i,I} g_{i,I}^{−1} ≠ 0, g_{i,I}^{−1} g_{i,I} ≠ 0,
g_{i,I} g_{i,I∖k}^{−1} = g_{k,(I∖k)∪i} g_{k,I∖k}^{−1}, ∀ k∈I,
g_{j,J∖j} g_{j,J∖{j,k}}^{−1} = g_{l,J∖l} g_{l,J∖{l,j}}^{−1}, l,j∈J,
plus all other quadratic monomials not appearing in these relations are zero. We order the generators g_{i,I}^{±1} lexicographically according to the triple (|I|,i,I), with g < g^{−1}, and use the corresponding ordering on monomials. Then the normal words of degree two are
g_{i,I} g_{i,I}^{−1}, g_{i,I}^{−1} g_{i,I}, g_{j,J} g_{j,J∖k}^{−1},
where i∉I, and j<J (in particular j<k; otherwise there is no restriction on k∈J). The rewriting rule is given by
g_{i,I} g_{i,I∖k}^{−1} ↦ g_{m_I,(I∖m_I)∪i} g_{m_I,I∖m_I}^{−1},
where m_I = min I. Monomials in which every two neighbouring generators form one of these normal words give a basis for G_n, which is thus a non-commutative PBW basis, and hence F_n is non-homogeneous Koszul.
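Returning to the differential ∂_B of the previous section: a direct transcription (ours) of the formula for n=3 reproduces the quoted homology of B_3^(2).

```python
from itertools import combinations

n = 3
gens = [(i, j, k) for i, j in combinations(range(1, n + 1), 2)
        for k in range(1, n + 1) if k not in (i, j)]   # r_12^3, r_13^2, r_23^1

def d_B(g):
    """Signed quadratic terms of the differential applied to r_ij^k, transcribed
    from the formula above; terms whose superscript would coincide with a lower
    index are not valid generators and are dropped."""
    i, j, k = g
    terms = []
    for l in range(i + 1, j):
        if l != k:
            terms += [(-1, (i, l, k), (l, j, k)), (-1, (i, l, k), (l, j, i))]
    if i < k < j:
        terms += [(+1, (i, k, l), (k, j, i))
                  for l in range(1, n + 1) if l not in (i, k)]
    return terms

for g in gens:
    print(g, d_B(g))
# d_B(r_12^3) = d_B(r_23^1) = 0 and d_B(r_13^2) = + r_12^3 r_23^1, so
# H_0 = <1>, H_1 = <r_12^3, r_23^1>, H_2 = 0: graded dimensions 1 + 2t,
# matching the value quoted above for B_3^(2).
```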
"authors": [
"Robert Laugwitz",
"Vladimir Retakh"
],
"categories": [
"math.RA",
"Primary 16S37, Secondary 15A15"
],
"primary_category": "math.RA",
"published": "20170325233247",
"title": "Algebras of Quasi-Plücker Coordinates are Koszul"
} |
^1Institute for Astronomy, Swiss Federal Institute of Technology, 8093 Zürich, Switzerland ^2Infrared Processing and Analysis Center, California Institute of Technology, Pasadena, CA 91125, USA ^3Cahill Center for Astronomy and Astrophysics, California Institute of Technology, Pasadena, CA 91125, USA ^4INAF - Osservatorio Astronomico di Padova, Vicolo dell'Osservatorio 5, I-35122 Padova, Italy ^5Aix Marseille Université, CNRS, LAM (Laboratoire d'Astrophysique de Marseille) UMR 7326, 13388, Marseille, France ^6Institut d'Astrophysique de Paris, CNRS & UPMC, UMR 7095, 98 bis Boulevard Arago, 75014, Paris, France
[email protected]; Twitter: @astrofaisst
We use >9400 quiescent and star-forming galaxies with log(m/M_⊙) > 10 at z≲2 in COSMOS/UltraVISTA to study the average size evolution of these systems, with focus on the rare, ultra-massive population at log(m/M_⊙) > 11.4. The large 2-square degree survey area delivers a sample of ∼400 such ultra-massive systems. Accurate sizes are derived using a calibration based on high-resolution images from the Hubble Space Telescope. We find that, at these very high masses, the size evolution of star-forming and quiescent galaxies is almost indistinguishable in terms of normalization and power-law slope. We use this result to investigate possible pathways of quenching massive m>M^* galaxies at z<2. We consistently model the size evolution of quiescent galaxies from the star-forming population by assuming different simple models for the suppression of star formation. These models include an instantaneous and a delayed quenching without altering the structure of galaxies, and a central starburst followed by compaction. We find that instantaneous quenching reproduces well the observed mass-size relation of massive galaxies at z>1. Our starburst+compaction model followed by individual growth of the galaxies by minor mergers is preferred over other models without structural change for log(m/M_⊙) > 11.0 galaxies at z>0.5. None of our models is able to meet the observations at m>M^* and z<1 without a significant contribution of post-quenching growth of individual galaxies via mergers. We conclude that quenching is a fast process in galaxies with m ≥ 10^11 M_⊙, and that major mergers likely play a major role in the final steps of their evolution.

§ INTRODUCTION

Quiescent (or quenched) galaxies – here defined to be galaxies that have heavily suppressed specific star-formation rates (specific SFRs) relative to the star-forming “main sequence” <cit.> – host about half of the mass in stars in the local Universe <cit.>, and have been observed in substantial numbers as early as z∼2 <cit.>. Understanding the dominant processes responsible for the shut-down of their star formation (often referred to as "quenching"), as well as the connection between these processes and galaxy structure, is key to understanding the evolution of the whole galaxy population over cosmic time. Suppressed specific SFRs are not the only intriguing property of quiescent galaxies. At least in terms of 'light', these galaxies have on average substantially larger spheroidal components and smaller half-light radii (R_e) compared to their star-forming counterparts at a given stellar mass and redshift <cit.>. At least in part this size difference is likely contributed by 'nurture', in particular post-quenching 'fading' of stellar populations at large radii <cit.>, but it is also possible that part of the difference may be imprinted by 'nature', i.e., different formation processes for spheroids and disks.
Also intriguing is that the population-averaged size of quiescent galaxies of a given mass has increased by a factor of ∼3 since z=2. This average size growth is similar to that of star-forming disk galaxies, which are expected and observed to increase their individual (disk) sizes more or less proportionally to (1+z)^-1, through continuous accretion of gas from their halos <cit.>. Individual quiescent galaxies form, however, by definition no new stars, and thus their only channel for individual mass and size growth is provided by gas-poor mergers. Averaging over a large mass range, several studies indeed suggest that mergers are important contributors to the size growth of quiescent galaxies <cit.>. Analyses in thinner bins of stellar mass suggest however a threshold mass – roughly around M^* ∼ 10^11 M_⊙, the characteristic mass of the Schechter <cit.> fit to galaxy mass functions[The value of M^* is remarkably constant for star-forming and quiescent galaxies and at all epochs since z∼4; <cit.>] – below and above which different mechanisms may be responsible for the average size growth of quiescent galaxies. In particular, at m ≲ M^*, a number of studies indicate that the growth in average size of the quiescent population is dominated by the addition of larger galaxies at later times, as a result of the continuous addition of newly quenched galaxies to the large-size end of the size function <cit.>. This picture is substantiated by the stellar ages of compact (older) and large (younger) quiescent galaxies at a given stellar mass and epoch <cit.>. It is only above M^* that dissipationless mergers are expected to be important <cit.>, and all studies of galaxy sizes indeed agree on them playing the dominant role in the growth of individual quiescent galaxies in mass and size <cit.>.
The above results may indicate that different quenching mechanisms could be at work below and above M^*. Theoretically, there are many candidate mechanisms for quenching <cit.>, and identifying the correct ones observationally is a non-negligible challenge – not least since, as discussed in <cit.> and demonstrated in <cit.>, correlations of observed quantities do not necessarily indicate a causal relation between them. Different quenching mechanisms are expected to act on different time scales and to result in different morphological transformations of galaxies. Therefore, constraining these is an important step towards understanding the dominant processes that lead to galaxy quiescence in these populations. For example, the cut-off of gas inflow onto a star-forming galaxy is expected to lead to the exhaustion of star formation over long timescales, which are set by the time needed for star formation to consume the gas reservoir of the galaxy. It is likely that the star formation ceases smoothly over the galaxy's disk, thereby not significantly changing its observed morphology. In contrast, a gas-rich major merger might lead to a starburst and thus to a fast consumption of gas on dynamical timescales of order 100-200 Myr. Furthermore, a substantial change in the morphology of the galaxies is expected, with an apparent compaction in light induced by the centrally confined starburst <cit.>. A number of studies have focused their attention on quenching timescales and their dependence on galaxy properties.
For relatively massive galaxies, and in particular satellites in groups and clusters at low redshifts, there is growing evidence that the transition from active star formation to quiescence takes of order 2-4 Gyr <cit.>. At redshifts of order z∼2 and for massive galaxies, <cit.> show that the suppression of star formation starts at the center of galaxies and slowly progresses outwards on timescales of 1-3 Gyr. It is however unclear at this point whether the observed centrally suppressed specific SFRs are the outcome of a “quenching mechanism” <cit.> or the natural outcome of inside-out galaxy formation <cit.>. In this paper we make a new attempt to constrain the processes that quench massive, m>M^* star-forming galaxies at z<2 by studying the timescales and morphological changes, using as a diagnostic tool the size evolution of both the star-forming and quiescent galaxy populations at z<2. We will show that this further enables us to set constraints on the amount and properties of mergers in this massive population.
It is now well established that studies of z∼2 galaxies crucially need imaging in the near-infrared in order to measure their rest-frame optical properties (especially sizes); the near-infrared images furthermore need to be quite deep in order to detect the faint and fading stellar populations of quiescent systems. Much progress has been made using data from the HST CANDELS survey <cit.>. Very massive galaxies are however rare, and increasingly so at increasingly higher redshifts <cit.>. Assembling a sufficiently large number of such galaxies to enable a statistical study requires imaging over a large area of sky. With its two square-degree area coverage, the Cosmological Evolution Survey <cit.> enables us to assemble a sample of more than 400 ultra-massive galaxies (UMGs, log(m/M_⊙) > 11.4) in the redshift range 0.2 < z < 2.5. Another advantage of COSMOS is its >30 pass-band coverage from UV to IR wavelengths, which enables the derivation of very accurate stellar masses and photometric redshifts. Last but not least, the deep near-IR data of the UltraVISTA survey on COSMOS <cit.> allow an accurate separation of star-forming and quiescent galaxies across this entire redshift range <cit.>. A drawback of the UltraVISTA data is their seeing-limited Point Spread Function (PSF), whose full width at half maximum (FWHM) is typically about 0.8", which hampers the measurement of reliable galaxy sizes. To overcome this limitation, we correct the UltraVISTA size measurements using as a calibration reference the ∼3% of the COSMOS area that is covered by the CANDELS/COSMOS legacy survey.
The paper is organized as follows. In sec:data we describe the datasets that we have used in this work. In sec:sample we describe the selection criteria for separating star-forming and quiescent UMGs, and in sec:size the procedure that we have followed to measure the galaxy sizes. In the same section we also present the calibration of the UltraVISTA sizes that we have performed using the HST CANDELS size measurements for the ∼9000 galaxies for which both datasets are available. The final (calibrated) size measurements are presented and discussed in sec:results. In sec:model we present the model that we use to predict the average size evolution of quiescent galaxies through the redshift range of our analysis. The model predictions are compared with the observed size evolutions in sec:discussion, where we furthermore describe the additional modifications to the predicted trends that are introduced by galaxy mergers.
We summarize our main results in sec:ending. Note that all magnitudes are given in the AB system <cit.>; stellar masses (m) are scaled to a <cit.> initial mass function (IMF); we assume a flat cosmology with Ω_Λ=0.7, Ω_m=0.3, and H_0 = 70 km s^-1 Mpc^-1.

§ DATA

§.§ UltraVISTA near-IR imaging data

As mentioned in the previous section, near-IR data over a large area are crucial for the study of massive galaxies at high redshifts. The backbone of this work is therefore the UltraVISTA survey, carried out on the 4.1 meter Visible and Infrared Survey Telescope for Astronomy (VISTA) located at the Paranal observatory in Chile. This survey covers 1.5 deg^2 of the COSMOS field in the near-infrared bands Y, J, H, and K_s. Specifically, we use the (unpublished) UltraVISTA data release (DR) 2 imaging data. Compared to DR1, this release has an improvement in H-band of up to 1 magnitude in the ultra-deep stripes (covering roughly 50% of the field) and ∼0.2 magnitudes in the deep stripes. The typical exposure times per pixel are between 53 and 82 hours, leading to 5σ sensitivities of 25.4 AB, 25.1 AB, 24.7 AB, and 24.8 AB in the Y, J, H, and K_s bands within a 2" aperture. The reduction of the imaging data is similar to DR1 <cit.> and is briefly outlined in the following: the data were taken in three complete observing seasons between December 2009 and May 2012. The individual science frames are visually inspected to remove bad frames (e.g., due to loss of auto-guiding). Each frame is sky subtracted before stacking, which leads to a very flat combined image with a very small variation in background flux. The combined frames have an average H-band seeing of 0.75"±0.10". The final photometric calibration is done using non-saturated stars from the Two Micron All Sky Survey <cit.> sample, leading to an absolute photometric error of less than 0.2 magnitudes.

§.§ Photometric redshift and stellar mass catalog

Our galaxy selection (see below) is based on the public COSMOS/UltraVISTA catalog in which galaxies are selected from a combined YJHK_s image <cit.>. This has advantages compared to purely optically selected catalogs, as it is more sensitive to galaxies with red colors, e.g., dusty star-forming galaxies or quiescent galaxies with old stellar populations. The catalog comprises photometric redshifts, stellar masses, and other physical quantities derived from SED fitting in >30 pass-bands from UV to IR (PSF homogenized) for more than 250,000 galaxies in COSMOS <cit.>. The photometric redshifts in that catalog are derived with a template-fitting code <cit.>, employing different templates including a range of galaxy types from elliptical to young and star-forming. These redshifts have been verified to have a precision of σ_Δz/(1+z) = 0.01 up to z=3 by comparison to a sample of more than ∼10,000 spectroscopically confirmed star-forming and quiescent galaxies. Physical quantities (mass, SFR, etc.) are fitted at fixed photometric redshift using a library of synthetic composite stellar population models based on <cit.>. These models include different dust extinctions (following a <cit.> dust extinction law), metallicities, and star-formation histories (following exponentially declining τ models). Also, emission line templates are included. The emission line flux is derived from the observed UV light using empirical relations. All these parameters have been verified by a number of other fitting routines, including <cit.> and its upgraded version <cit.>. The typical uncertainties in the masses are of the order of 0.3 dex. All quantities are computed for a <cit.> IMF.
The stellar masses are defined as the integral of the star-formation histories of the galaxies, thus representing the total stellar mass of a galaxy rather than its mass in active stars. In the following, the stellar masses quoted by other studies are converted to total masses if necessary. These corrections, calculated using <cit.> models with solar metallicity and exponentially declining as well as constant star-formation histories, can be up to 0.2 dex for quiescent galaxies with ages of 1 billion years and above, while they are less substantial for star-forming galaxies.

§.§ CANDELS/COSMOS near-IR imaging data

To calibrate the sizes measured on the ground-based UltraVISTA imaging data, we make use of the overlap between UltraVISTA and the HST-based CANDELS/COSMOS survey <cit.>. The latter covers 0.06 deg^2 on the sky (roughly 1/25th of the total UltraVISTA field) in the WFC3/IR F160W pass-band, similar to the UltraVISTA H-band, however at a much higher resolution (more than 8 times smaller PSF). We use the latest publicly available data release of the COSMOS/F160W mosaic (as of February 2013) with a total exposure time of 3200s and a sensitivity of 26.9 AB (5σ for a point source).

§ THE SAMPLE

In the following, we describe the selection of massive galaxies at log(m/M_⊙) > 11.4, building our main galaxy sample, as well as of less massive galaxies (10.0 < log(m/M_⊙) < 11.4) that we use for the calibration of the ground-based size measurements. Furthermore, we split this sample into quiescent and star-forming galaxies.

§.§ High- and low-mass galaxies

The selection of the high- and low-mass galaxy samples is based on the near-IR COSMOS/UltraVISTA photometric catalog (as described above), which allows for the selection of dusty star-forming and quiescent galaxies. We select a total sample of 403 massive galaxies satisfying log(m/M_⊙) > 11.4 and 0.2 < z_phot < 2.5 (green hatched region in <ref>). We have verified these galaxies visually to be real (i.e., not artifacts or stars). The exact value of this mass limit has been chosen to correspond to the 90% completeness limit at an H-band magnitude of 21.5 AB at z<2.5, which allows us to provide reliable size measurements for these galaxies (see sec:size). For the estimation of the mass completeness we have used the identical method as described in <cit.>. With this mass cut, we select the most massive observable galaxies, with number densities of less than 10^-4 Mpc^-3 and 10^-5 Mpc^-3 at z∼0.5 and z∼2, respectively. These galaxies may be the progenitors of today's most massive galaxies, assuming these most massive galaxies keep their ranking through cosmic time. This is verified by more complicated methods of progenitor selection, including the selection of galaxies at a constant galaxy number density <cit.>, or using semi-empirical models that take into account galaxy mergers <cit.>. The (mass-complete) low-mass galaxy control/calibration sample is selected in a similar way to have 10.0 < log(m/M_⊙) < 11.4 and H < 21.5 AB. The mass completeness limit at H=21.5 AB as a function of redshift is shown in <ref> by the red line (solid for star-forming and dashed for quiescent galaxies).
The low-mass control sample (9000 galaxies in total) is consequently selected to be above the combined completeness limit of the star-forming and quiescent galaxies and comprises three stellar mass bins, 10.0 < log(m/M_⊙) < 10.5, 10.5 < log(m/M_⊙) < 11.0, and 11.0 < log(m/M_⊙) < 11.4, with the corresponding redshift ranges 0.2 < z < 0.45, 0.2 < z < 0.75, and 0.2 < z < 1.25.

§.§ Selection of quiescent and star-forming galaxies

We split our sample into quiescent and star-forming galaxies by making use of the rest-frame (NUV - r) versus (r - J) color diagnostics <cit.>. In <ref> we show the rest-frame (NUV - r) versus (r - J) diagram for six different redshift bins with our main sample of massive galaxies. The black line in each panel divides the quiescent (upper left) from the star-forming (lower right) galaxy population. Our log(m/M_⊙) > 11.4 galaxies are shown with large symbols, color coded by their specific star-formation rate (sSFR ≡ SFR/m, the inverse of the mass doubling time scale) derived from SED fitting. All the other galaxies at lower stellar masses in the same redshift bin and with H < 21.5 AB are shown in gray scale. We find that the color-color diagram efficiently isolates quiescent galaxies with log(sSFR × Gyr) ∼ -1 to -2 (depending on redshift, as expected). We note that this color selection is very similar to the widely used (U-V) versus (V-J) selection, but it is a slightly better indicator of the current versus past star-formation activity <cit.>. We have verified that other selections of quiescent and star-forming galaxies (e.g., by sSFR or (U-V) versus (V-J)) do not change the results of this paper.

§ SIZE MEASUREMENTS AND CALIBRATION

As we have already discussed in the introduction to this paper, we investigate the quenching process in massive galaxies via the average size evolution of star-forming and quiescent galaxies. Reliable size measurements are therefore crucial. We denote by "size" the observed semi-major axis half-light radius, R_e. While we benefit from the large area of the COSMOS/UltraVISTA survey to select very massive galaxies, its poor resolution and PSF hamper the accurate measurement of galaxy structure parameters. In this section, we describe in detail (i) the determination of a spatially varying PSF, (ii) the basic measurement of galaxy sizes, and (iii) our 2-step size-calibration procedure using simulated galaxies and the HST-based CANDELS imaging. Finally, we outline how we correct for the band-shifting across redshift in our sample.

§.§ Determination of the spatially varying PSF

Galaxy sizes are measured with GALFIT, which takes into account the effect of the PSF <cit.>. Therefore, the understanding of the PSF size (full width at half maximum, FWHM), shape, and spatial variation is crucial. We represent the 2-dimensional PSF at a given position (x,y) by a Moffat profile <cit.>:
F(x,y) = (β-1)/(πα^2) × [ 1 + ((x-μ_x)^2+(y-μ_y)^2)/α^2 ]^-β,
where μ_x, μ_y, α, and β are free fitting parameters. The FWHM of a PSF in this parametrization is given by FWHM(α,β) = 2α√(2^(1/β)-1) (see the short numerical check below). This parametrization has been shown to be a good approximation for ground-based PSFs and has the advantage over a pure Gaussian that it better represents the wings of the PSF <cit.>. In order to create a spatially comprehensive PSF map, we select unsaturated stars between 16 AB and 21 AB from the HST-based COSMOS/ACS I_F814W-band catalog <cit.>. We select them according to their stellarity parameter (larger than 0.9) and using diagnostic diagrams such as color vs. color and magnitude vs. size.
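As a quick numerical sanity check of the Moffat parametrization above (a sketch of ours, not the pipeline code), the profile indeed falls to half of its central value at r = FWHM/2:

```python
import numpy as np

def moffat(r, alpha, beta):
    """Circular Moffat profile with the normalization used above."""
    return (beta - 1) / (np.pi * alpha**2) * (1 + (r / alpha)**2) ** (-beta)

def fwhm(alpha, beta):
    return 2 * alpha * np.sqrt(2 ** (1 / beta) - 1)

alpha, beta = 0.5, 2.5                        # arbitrary test values
ratio = moffat(fwhm(alpha, beta) / 2, alpha, beta) / moffat(0.0, alpha, beta)
print(ratio)                                  # 0.5, by construction
```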
Furthermore, we inspect the stars visually and make sure that there are no close companion stars (or galaxies) visible on the ACS images. For each of these more than 3000 stars, we extract a 10"×10" image stamp from the UltraVISTA H-band mosaic on which we fit the PSF. We notice small shifts of the centers of the stars between the ACS and UltraVISTA data of a few tenths of an arc second (likely caused by small differences in the coordinate systems, the large differences in the PSF size, and differences in the resolution of the images), which we correct for. We then fit the selected stars according to the above parametrization F(x,y | μ_x, μ_y, α, β). The accuracy and robustness of the fitting method was verified by generating stars with random FWHM between 0.2" and 1.2", adding noise taken from real background images, and fitting them in the same way as the real data. This test shows that we are able to recover the FWHM with an accuracy of better than 0.05". As a last cut, we require less than 5% difference between the model and the data in the enclosed flux up to 1.5 times the PSF FWHM. We end up with ∼800 PSF models across COSMOS/UltraVISTA. The PSFs show variations in their FWHM between 0.65" and 0.80". We assign to each galaxy an average PSF model created from the stars within 6', which we use for GALFIT.

§.§ Guess-parameters for surface brightness fitting

In this section, we describe the determination of the initial values which are fed to GALFIT. In order to have consistency between the initial values and the actual images on which we run GALFIT, we do not use the values given in the public COSMOS/UltraVISTA catalog, but re-run Source Extractor (SExtractor, version 2.5.0, <cit.>) on the DR2 UltraVISTA H-band images. We run SExtractor with two different values of the DEBLEND_MINCONT parameter for a better de-blending of galaxies next to brighter galaxies or stars. The SExtractor input parameters are tuned manually in order to optimize the source extraction. We mask <cit.> each star identified on the HST-based COSMOS/ACS I_F814W-band images by a circle with a maximal radius r_σ at which its flux decays to the background flux level. This maximal radius (which depends on the magnitude of the star) is determined by fitting r_σ as a function of magnitude for a number of different stars in a broad magnitude range. Furthermore, we match our catalog to the public UltraVISTA catalog and compare the measured magnitudes, which we find to be in excellent agreement. Finally, we extract each of our galaxies from our SExtractor catalog and use the measured galaxy position (X_IMAGE and Y_IMAGE), magnitude (MAG_AUTO), half-light radius (FLUX_RADIUS), axis ratio (ratio of A_IMAGE and B_IMAGE), and position angle (THETA_IMAGE) as initial parameters for GALFIT.

§.§ Uncalibrated size measurements

We use GALFIT to fit single Sersic profiles (parametrized by the half-light radius R_e, Sersic index n, total magnitude M_tot, axis ratio b/a, and position angle θ) to the observed surface brightness of our galaxies. As described in the previous section, we use the SExtractor values measured on the DR2 COSMOS/UltraVISTA images as initial parameters. For the Sersic index, which is not known a priori, we assume n=2 (and let it vary between 0 < n < 8 during the fitting process). The size of the image cutout on which GALFIT is run varies between 71×71 and 301×301 pixels. The size is set to optimize the estimate of the local sky background and to minimize the running time of GALFIT, and is defined such that the cutout contains three times more sky pixels than pixels attributed to galaxy detections.
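The bookkeeping of the guess-parameter step can be summarized as follows (a schematic sketch of ours, not the actual pipeline; the SExtractor column names are those listed above):

```python
def galfit_initial_guesses(row):
    """Map SExtractor measurements to initial GALFIT parameters.
    `row` is a dict-like catalog row with the SExtractor column names."""
    return {
        "x0": row["X_IMAGE"],                      # centroid [pix]
        "y0": row["Y_IMAGE"],
        "mag": row["MAG_AUTO"],                    # total magnitude
        "r_e": row["FLUX_RADIUS"],                 # half-light radius [pix]
        "n": 2.0,                                  # Sersic index, not known a priori
        "q": row["B_IMAGE"] / row["A_IMAGE"],      # axis ratio b/a
        "pa": row["THETA_IMAGE"],                  # position angle [deg]
    }
```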
Companion galaxies on the image cutout are fit simultaneously with the main galaxy if they are brighter than 25 AB in the H-band. All other detections of fainter objects are masked out and not taken into account in the χ^2 minimization. To assess the stability of the fits, we run GALFIT in two different configurations: in the first configuration (referred to as “VARPOS”) we let GALFIT fit the center of the galaxy within ±10 pixels of the SExtractor input. In the second configuration (referred to as “FIXPOS”) we fix the galaxy position to its initial SExtractor value. We select good fits (either from the FIXPOS or the VARPOS run) by comparing the results from the two configurations. We require that (i) R_e > 0.1 px, (ii) the fitted position differs by less than √2/2 times the PSF FWHM from the SExtractor input, (iii) the R_e of the two configurations agree to better than 50%, and (iv) the total magnitude does not differ by more than 0.5 mag from the SExtractor total magnitude. Roughly 70% of our total sample galaxies satisfy these criteria and are used in the following for assessing the size evolution as a function of cosmic time. Due to their brightness and relatively large sizes, the above criteria result in a negligible cut for our massive log(m/M_⊙) > 11.4 galaxies, but in principle they could affect the following results and conclusions. We have investigated this in depth and find that mostly unresolved galaxies are affected, without any clear relation with redshift. Moreover, adding this small number of galaxies back to our sample at log(m/M_⊙) > 11.4 (keeping their small sizes as lower limits) impacts the median size versus redshift relations by less than 5%, small compared to the general systematic uncertainties of the ground-based sizes of up to 50%. Furthermore, star-forming and quiescent galaxies are equally affected, and therefore we do not expect significant impacts on our results.

§.§ Correcting for measurement biases using simulated galaxies

The measurement of galaxy structure is prone to many biases, as discussed by several authors <cit.>. Small and compact galaxies are affected by the PSF (leading to an over-estimation of R_e); large and extended galaxies suffer surface brightness dimming in the outskirts (leading to an under-estimation of R_e). Although GALFIT does take into account the effects of the PSF and therefore partially cures these problems, it has its limits. It is therefore important to investigate possible biases and correct for them by using simulated galaxies. In the following, we outline this first step of our 2-step calibration process in more detail.

§.§.§ Simulating galaxies

We use GALFIT to create ∼1.5 million model galaxies on a grid in (R_e, M_tot, n, b/a)_in parameter space: 0.2 < n < 10, 15 mag < M_tot < 26 mag, 0.2 < b/a < 1, and 0.5 < R_e < 15 pixels (corresponding to 0.075" < R_e < 2.250"). The model galaxies are subsequently convolved with a PSF, equipped with Poisson noise, and added onto realistic sky backgrounds. For the latter, we account for the fact that the sky background noise (σ_sky) varies across the COSMOS/UltraVISTA field by a factor of 2 or more (mainly between the deep and ultra-deep stripes). We compute σ_sky automatically in rectangles of ∼0.1×0.1 degrees across the field. To this end, we use the SExtractor catalog (see sec:sex) to mask out all detections and fit σ_sky to the remaining non-masked pixels by assuming a Gaussian noise distribution. In order to make sure to remove all the light of galaxies and stars, we increase their semi-major and semi-minor axes as given by SExtractor by a factor of 10. We verify this procedure by manually measuring σ_sky at random positions.
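A minimal sketch (ours) of the σ_sky estimate described above; a sigma-clipped standard deviation stands in for the Gaussian fit to the non-masked pixels:

```python
import numpy as np

def sky_sigma(image, source_mask):
    """Estimate the Gaussian sky noise sigma_sky from non-detection pixels.
    `source_mask` is True on pixels attributed to detections (with apertures
    enlarged by a factor of 10, as in the text); a minimal sketch, not the
    actual pipeline code."""
    sky = image[~source_mask].ravel()
    for _ in range(5):                        # iterative 3-sigma clipping
        med, sig = np.median(sky), np.std(sky)
        sky = sky[np.abs(sky - med) < 3 * sig]
    return np.std(sky)
```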
To take into account the variations in PSF and σ_sky, we simulate galaxies in four different representations which are interpolated in the end. We use two bracketing PSFs (FWHM = 0.65" and 0.85") as well as two bracketing σ_sky (5.5×10^-6 and 2.0×10^-5 counts/s). On each of these model galaxies we run SExtractor and GALFIT in the same manner as for the real galaxies (as described in sec:uncalibmeas) to obtain (R_e, M_tot, n, b/a)_out. This allows us to derive a correction function and discuss possible measurement biases, as outlined below.

§.§.§ Correction function

We obtain a correction function, 𝒮(R_e, M_tot, n, b/a), in an identical fashion as in <cit.>, and we refer the reader to this paper for additional details. We construct 𝒮 such that it returns a 4-dimensional median correction vector (ΔR_e, ΔM_tot, Δn, Δb/a) for each point in measured (R_e, M_tot, n, b/a)_meas parameter space. The median correction vector is constructed as the difference between the median of the 50 closest (R_e, M_tot, n, b/a)_out (with respect to (R_e, M_tot, n, b/a)_meas) and the median of their true values (R_e, M_tot, n, b/a)_in. We obtain this correction vector for each combination of PSF and σ_sky. The final correction vector is then obtained by an interpolation of the grid at the PSF and σ_sky attributed to the galaxy for which the correction is computed. Because of our bright imposed magnitude cut of H=21.5 AB, the correction in size (usually over-estimated) is of the order of less than 20%. The simulations also show that the detection rate of galaxies is 100% in the worst case up to half-light sizes of at least 3" at H=21.5 AB, corresponding to a surface brightness limit of ∼25.2 mag arcsec^-2. This size corresponds to ∼25 kpc (∼20 kpc) at z∼2 (z∼0.5). The correction function allows an assessment of detection limits and a first correction for measurement biases. However, the simulated galaxies are ideal cases. The overlap between UltraVISTA and CANDELS is ideal for a more thorough calibration of our size measurements.

§.§ Final calibration of size measurements using CANDELS

The second step of our calibration process consists of the comparison of our measured (and 𝒮-corrected) sizes with HST-based structural measurements on COSMOS/CANDELS, which overlaps with 3% of the central part of COSMOS. Because of the 2.5 times higher resolution and 4 times smaller PSF of the HST images, we consider the HST-based size measurements to reflect the true galaxy sizes. We first measure the sizes of galaxies on the publicly available CANDELS F160W mosaic, as these most closely match the UltraVISTA H-band data. To this end, we use SExtractor in order to extract the sources and to get the initial parameters for GALFIT in the same manner as described above for the UltraVISTA-based measurements. Subsequently, we run GALFIT on the extracted sources in the two configurations FIXPOS and VARPOS, thereby applying the same selection criteria for good fits as described in sec:uncalibmeas. Furthermore, we apply a correction function 𝒮 as done before, but with the PSF and σ_sky matching those of the COSMOS/CANDELS images. In turn, we find corrections of less than 5% for galaxies at H < 21.5 AB. As a further check, we compare the size measurements to the publicly available COSMOS/CANDELS size catalog by <cit.> and find excellent agreement. The comparison between the HST-based (R_candels) and ground-based (R_ultravista) galaxy sizes and their calibration is shown in <ref>.
Shown are galaxies with Sersic indices n<2.5 (blue) and n>2.5 (orange) measured on the ground-based images in two magnitude bins at H<21.5 AB (top and bottom row). Looking at the empty histograms (showing the log-ratio of the sizes) in the right panels, we see an under-estimation of R_ultravista by a factor of 3 and more, which we find to happen preferentially for galaxies smaller than the (UltraVISTA) PSF radius (∼0.3") and with large Sersic n (i.e., compact light distribution). Furthermore, an over-estimation of galaxy sizes happens preferentially for large galaxies (R_e > 2") with small Sersic n. We calibrate our ground-based size measurements by constructing a calibration function 𝒞(R_e, M_tot, n, b/a) in a similar way as described in sec:corrfunction. Going back to <ref>, the measurements with the calibration function applied are shown as the filled and hatched histograms in the right two panels (for different magnitudes and n). Furthermore, the left panels show the 1-to-1 comparison of the size measurements with a running median with 1σ scatter (dashed). The comparison of the fully calibrated sizes with the HST-based size measurements shows that we are able to recover R_e on UltraVISTA to an accuracy of better than 50% (1σ scatter). As shown in Figure <ref>, the uncertainty of the calibrated sizes of galaxies close to the resolution limit of UltraVISTA can be up to a factor of three. We note that less than 5% of our massive log(m/M_⊙) > 11.4 galaxies are unresolved and thus could have much larger uncertainties.

§.§ Correction for internal color gradients

Mostly negative internal color gradients are ubiquitously measured in star-forming galaxies up to at least z∼3, whereas this effect is much less strong in quiescent galaxies <cit.>. The observed color gradients are caused by different stellar populations and dust, attributed to the inside-out growth of galaxies, and therefore depend on galaxy age, stellar mass, redshift, and star-formation activity. Such color gradients cause the observed size to change as a function of wavelength. Vice versa, at a fixed observed wavelength the observed size of galaxies changes as a function of redshift, since the rest-frame wavelength shifts. The effect of color gradients may therefore introduce artificial trends in the size evolution across redshift. Several studies have constrained this effect using observations at different wavelengths for different types of galaxies and stellar masses at various redshifts <cit.>. Typical gradients for galaxies at log(m/M_⊙) = 10 are of the order of |Δlog R / Δlog λ| = 0.1-0.3, depending on data quality, resolution, and redshift. This leads to corrections in size of 10-50% over a wavelength range of rest-frame 0.5-1.0 μm. In the following, we use the parameterization by <cit.> to correct our size measurements for internal color gradients. However, other parametrizations <cit.> result in similar corrections and do not change the results of this paper.

§.§ Verification of accuracy of size measurements

Because our measurements at log(m/M_⊙) > 11.4 are unique so far, we cannot directly check whether they are reasonable. In the following, we use our (fully calibrated and mass-complete) low-mass control samples at 10.0 < log(m/M_⊙) < 11.4 (see sec:galselection) to investigate possible systematics in our size measurements. In panels B through D of <ref> we compare our measured size evolution of quiescent (open, color) and star-forming (filled, color) galaxies to measurements taken from the literature <cit.>.
The latter are based on high-resolution HST imaging and corrected for color gradients in the same way as we do here. We find a very good agreement with our measurements. In panel A we compare our final size evolution at log(m/M_⊙) > 11.4 to spectroscopically confirmed quiescent galaxies at the same stellar mass in two redshift bins from the literature (black circles) <cit.>. These galaxies reside well within the 1-2σ scatter of our measurements (indicated by the thin error bar), although at the lower end. This can be explained by the higher success rate of spectroscopic surveys for compact galaxies with high surface brightness. Concluding, we do not expect any severe systematic biases in our measurements.

§ RESULTS: SIZE EVOLUTION OF VERY MASSIVE GALAXIES

§.§ Size evolution of massive galaxies

In <ref> (panel A) we show the final median size evolution with cosmic time of our massive log(m/M_⊙) > 11.4 quiescent (red, open) and star-forming (red, filled) galaxies. These are compared to literature measurements at lower masses <cit.> and spectroscopically confirmed quiescent galaxies at z>1 <cit.>. The dashed and solid lines show fits to the size evolution of quiescent and star-forming galaxies, respectively, parametrized as R_e = B × (1+z)^-β. We find slopes β=1.22±0.20 and β=1.18±0.15 for quiescent and star-forming galaxies with log(m/M_⊙) > 11.4, respectively. Note that these slopes are statistically identical, in contrast to lower masses where quiescent galaxies show a faster size increase with cosmic time than star-forming galaxies on average. Also, very massive star-forming galaxies are only ∼20% larger on average at a fixed redshift and stellar mass, whereas at lower masses the difference can be as large as a factor of two (see gray lines).

§.§ The stellar mass vs. size relation

The relation between stellar mass and size (MR relation) has so far been measured on statistically large samples at log(m/M_⊙) < 11.0. Our measurement on a large sample of galaxies at log(m/M_⊙) > 11.4 enables us to provide an additional data point at high masses. In <ref>, we show the MR relation in three redshift bins measured over two orders of magnitude in stellar mass. Shown are our data at log(m/M_⊙) > 11.4 (large filled dots) for quiescent (red) and star-forming (blue) galaxies, as well as measurements at lower masses. The latter include the 3D-HST survey <cit.>, spectroscopically confirmed quiescent galaxies at z > 1 <cit.>, and galaxies at z<0.1 with measurements in the g-band from the Galaxy and Mass Assembly survey <cit.>. The large blue and red symbols show the median sizes of star-forming and quiescent galaxies in different stellar mass bins. The lines show the corresponding log-linear fits (R_e(m) ∝ m^α, see <ref>) to the medians, with errors from bootstrapping.

Power-law slopes (R_e(m) ∝ m^α) of the stellar mass vs. size relation at z>0.5 (this work, including 3D-HST and spectroscopically confirmed quiescent galaxies) as well as integrated over cosmic time at lower redshifts (see references):

redshift range | α_sf (star-forming) | α_qu (quiescent) | reference
z ∼ 0          | 0.14 to 0.39        | 0.56             | (1)
z < 0.1        | 0.19 ± 0.02         | 0.41 ± 0.06      | (2)
0.5 < z < 1.0  | 0.30 ± 0.10         | 0.55 ± 0.05      | this work
1.0 < z < 1.5  | 0.22 ± 0.08         | 0.62 ± 0.09      | this work
1.5 < z < 2.0  | 0.14 ± 0.06         | 0.59 ± 0.15      | this work
z < 3          | 0.22 ± 0.05         | 0.75 ± 0.05      | (3)

(1) <cit.>. For star-forming galaxies they fit α=0.14 at log(m/M_⊙) < 10.6 and α=0.39 at log(m/M_⊙) > 10.6. (2) <cit.>. (3) <cit.>; they report no significant change in slope over 0 < z < 3.

The MR relation of quiescent galaxies is much steeper than for star-forming galaxies.
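Both parametrizations used in this section, R_e = B × (1+z)^-β and the log-linear R_e(m) ∝ m^α, are simple power-law fits to the binned medians. A minimal sketch (ours, with made-up illustrative numbers rather than our measurements):

```python
import numpy as np
from scipy.optimize import curve_fit

# Size evolution, R_e = B (1+z)^(-beta); illustrative median sizes in kpc.
z = np.array([0.4, 0.8, 1.2, 1.6, 2.0])
re = np.array([7.5, 6.1, 5.2, 4.6, 4.1])
(B, beta), _ = curve_fit(lambda z, B, beta: B * (1 + z) ** (-beta),
                         z, re, p0=(10.0, 1.0))
print(B, beta)

# Mass-size relation, R_e(m) ∝ m^alpha, fitted log-linearly as in the text.
logm = np.array([10.2, 10.7, 11.2, 11.5])      # made-up bin medians
logre = np.array([0.45, 0.60, 0.80, 0.95])
alpha, intercept = np.polyfit(logm, logre, 1)
print(alpha)
```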
The average sizes of quiescent and star-forming galaxies are comparable at log(m/M_⊙) ∼ 11.5, independent of redshift. The logarithmic slope of the MR relation (⟨α_qu⟩ ∼ 0.6 for quiescent and ⟨α_sf⟩ ∼ 0.2 for star-forming galaxies) does not evolve significantly with cosmic time. Also, it is very consistent with the measurements in the local universe <cit.>, which find values between α_qu ∼ 0.35-0.60 and α_sf ∼ 0.15-0.25 for quiescent and star-forming galaxies, respectively (see also <ref>). The study by <cit.> finds steeper slopes for quiescent galaxies (α ∼ 0.75), likely due to missing very massive quiescent galaxies at log(m/M_⊙) > 11.5. The constant slope of the MR relation is indicative of a constant relation between the growth of galaxies in size (e.g., due to accretion) and in stellar mass over cosmic time. Finally, we note that a recent study by <cit.> suggests that the bulk of star-forming z∼0 galaxies at log(m/M_⊙) < 11 are being quenched via strangulation[Strangulation means that the supply of cold gas is halted and thus star formation is shut down.] within ∼4 Gyr. We would therefore expect the m-R_e relation of star-forming galaxies at z∼0.5 and log(m/M_⊙) < 11.0 to be similar to the relation of the quiescent galaxies at z∼0, if the observed sizes of the galaxies do not change during or after quenching. This is, however, not seen in the left panel of <ref>, which shows that star-forming galaxies ∼4 Gyr ago were significantly (up to a factor of two) larger than local quiescent galaxies at log(m/M_⊙) < 11.0. This “tension” can be alleviated by post-quenching disk-fading, which would substantially decrease the observed sizes of quiescent galaxies and is shown to be at work at low redshifts <cit.> and most likely also at z∼2 <cit.>. In addition to this, at high redshifts, morphological transformation as a result of quenching cannot be ruled out.

§ MODEL FOR THE SIZE EVOLUTION OF MASSIVE QUIESCENT GALAXIES

The similar sizes at a given redshift of star-forming and quiescent galaxies at log(m/M_⊙) > 11.0 at all redshifts z<2 suggest a very close connection between these galaxies. This might have important implications for the process that quenches these galaxies. In this section, we investigate this further by modeling the size evolution of quiescent galaxies, thereby applying different assumptions about the quenching process.

§.§ Evolution of star-forming model galaxies

Our model assumes that galaxies – as long as they are forming stars – evolve along the star-forming main sequence (MS) spanned by stellar mass and SFR. In addition, we assign our model galaxies a half-light radius R_e and a gas fraction f_gas using empirical relations and observations. The galaxies eventually get quenched in concordance with the quiescent fraction observed as a function of redshift and stellar mass. As fiducial model for quenching we assume an instantaneous (on-the-spot) quenching process without any change in the structure (i.e., half-light size R_e) of the galaxies. We complement this model with two additional models featuring a structural change (compaction due to a starburst) as well as a delayed quenching. These different assumptions on the quenching process are explained in more detail later on. The main steps of our empirical model are the following.
* Our model starts at z = 2.5 and uses the observed stellar mass function by <cit.> as initial condition for the mass distribution of the 100,000 simulated star-forming galaxies with stellar masses between 7 < log(m/M_⊙) < 12.
The initial mass distribution and fraction of quiescent galaxies are derived from the quiescent fraction f_q(m,z) at a given redshift and stellar mass (<ref>).
* We evolve the stellar mass and SFR of star-forming galaxies along their MS, for which we use the parameterization by <cit.> compiled from deep Herschel observations. We verified that the use of other parameterizations of the MS does not change the results and conclusions of this work. Furthermore, we assign to each of the star-forming model galaxies gas fractions f_gas(m,z) from our compilation of the literature, as outlined in <ref>, as well as sizes according to the measured size vs. stellar mass relation R_e(m,z) for star-forming galaxies (including our new measurements at log(m/M_⊙) > 11.0). When drawing values from the above empirical relations, we also include the observed scatter, which we characterize by a Gaussian centered on the median. The typical scatter in the MS and MR relations is assumed to be ∼0.3 dex.
* At each redshift, we quench galaxies in mass bins randomly, such that the model quiescent fraction reproduces the observed f_q(m,z). After a galaxy is quenched, we set its gas fraction and SFR to zero. The remaining gas is added instantaneously to the stellar mass under the simple assumption that the gas is fully converted into stars and not stripped. Depending on the quenching model (see below), the new stars are either distributed evenly over the galaxy's disk or placed in a central region of 1 kpc. We also do not implement the rejuvenation of galaxies once they are quiescent.
For visualization purposes, example tracks of our model galaxies with different initial stellar masses at z=2.5 as well as the fraction of quiescent galaxies (f_q) are shown in <ref>.

§.§ Quenching of model galaxies

We implement three simple models that should bracket different pathways of quenching processes. In the following, we describe these in more detail.
Instantaneous / no structural change. This is our fiducial model, in which galaxies quench instantaneously without any structural change. A physical scenario could be the cut-off of cold gas inflow by the heating of the gas in massive dark matter halos above 10^12 M_⊙ <cit.>. The galaxy then consumes its remaining gas according to its SFR on the star-forming MS and increases its mass evenly over its disk.
Instantaneous / compaction. In this model, the galaxy decreases its overall size (compaction) due to an increase of its surface density instantaneously after the shut-down of star formation. We assume that this compaction is triggered by a starburst in the inner 1 kpc region of the galaxy, which may be induced by a major merger event <cit.>. We compute the decrease in overall half-light radius after the starburst by adding the gas of the galaxy to a 1 kpc bulge component, characterized by an n=4 Sersic profile, on top of the disk-dominated (n=1) star-forming galaxy. For simplicity we assume that all of the gas mass is turned into stars in the bulge component. Furthermore, we assume that the bulge component has the same mass-to-light ratio as the disk, such that the ratio in luminosity of the bulge component and the disk is proportional to the ratio of the stellar mass added to the bulge and the stellar mass in the disk.
Delayed / no structural change. This model is similar to our fiducial model; however, the quenching does not happen instantaneously but with a delay. A possible scenario could be the slow consumption of gas off the star-forming MS after the gas supply onto the galaxy is cut off.
We assume the delay (i.e., the time the galaxy spends in the green valley) to be 50% of the cosmic time between the start of quenching and z=2.5 (∼800 Myr at z=1.5 and ∼3 Gyr at z=0.5).

§ DISCUSSION

We now compare the predicted size evolution of quiescent galaxies from our simple empirical models with observations to investigate possible processes that quench massive galaxies at z<2. <ref> shows the MR relation of our quiescent model galaxies (symbols) together with the observed relations for star-forming (blue – hatched and dashed line) and quiescent (red – hatched and solid line) galaxies, respectively. The width of the hatched bands and the error bars on the points represent the scatter in the observed as well as the modeled relations. The different quenching models are shown in different colors and symbols as indicated in the legend. Focusing on galaxies above the characteristic knee of the stellar mass function <cit.>, we note the following. (i) The instantaneous quenching model without altering the structure of the galaxies (green circles) predicts the quiescent MR relation well at all masses at z>0.5. However, this model under-predicts the sizes of the most massive galaxies (log(m/M_⊙) ∼ 11.5) at z<0.5. (ii) The instantaneous quenching model followed by a compaction triggered by a starburst within a 1 kpc central region (orange squares) predicts well the sizes of log(m/M_⊙) < 11 galaxies down to z∼0.5, but under-predicts the sizes at later times as well as at higher masses. (iii) The delayed quenching model without structural change (blue diamonds) is only able to explain the MR relation at log(m/M_⊙) > 11 at the highest redshifts, but under-predicts the sizes at lower redshifts by factors of 2-3. It reproduces well the relations at log(m/M_⊙) < 11 and z<1. We explain and interpret these findings in more detail in the following sub-sections.

§.§ Slow versus fast quenching at m>M^*

The star formation in very massive galaxies can be shut down without significant structural change of the light profile by cutting off the gas supply onto the galaxies. In current theoretical models and simulations, this can be achieved in galaxies with massive dark matter halos of m_DM > 10^12 M_⊙, which cause the infalling gas to be heated up <cit.>. This may result in a uniform decrease of the star formation in the galaxy's disk without altering its structure significantly. Taking the above results at face value suggests that, if there is no net structural change after the turn-off of star formation, massive galaxies (log(m/M_⊙) > 11) have to transition from star-forming to quiescent on relatively short time scales. This is suggested by the fact that our fiducial model (instantaneous quenching) is able to reproduce the sizes of galaxies at these stellar masses reasonably well, at least in the two upper redshift bins at z≳0.5. An instantaneous quenching might be too much of a simplification, and a non-zero quenching time is suggested by recent observational studies <cit.>. Our delayed quenching model works well for redshifts z≳1 and log(m/M_⊙) > 11, where the delay times are shorter than 1-1.5 Gyr according to our definition (50% of the difference in cosmic time between the quenching event and z=2.5). Note that this is compatible with the time a galaxy on the star-forming MS needs to consume all of its gas given its main-sequence gas fraction and SFR: less than 1-2 Gyr for a galaxy at log(m/M_⊙) > 11 and z>0.5 <cit.>. Note that our delayed model over-predicts the sizes of galaxies at m < M^* at z>1.
This would suggest that the delay as defined here is not long enough and that instead a longer delay (2 Gyr or more) is favored. This mass dependence of the quenching time (i.e., slow vs. fast quenching) is also strongly suggested by recent simulations <cit.> and could be explained by different quenching processes taking place at different stellar masses and as a function of the environment the galaxies live in.

§.§ Merger-induced starbursts and compaction

It is suggested that mergers play an important role in shaping galaxies at high stellar masses. Thus a smooth quenching without significantly altering the structure of a galaxy is likely too simplistic. Our model of a merger-triggered compact starburst, inducing a fast consumption of gas and subsequent quiescence, might therefore be a better approach to characterize the quenching mechanism at high redshifts and high stellar masses. As shown in Figure <ref>, such a scenario under-predicts the sizes of massive quiescent galaxies by factors of two or more at all redshifts. If such a scenario is the dominant way of quenching massive galaxies, then the galaxies have to grow individually to meet the observed MR relation. This is similar to the fast-track quenching mechanism proposed by <cit.> <cit.>, in which galaxies experience a compaction phase with subsequent growth due to minor and major mergers. Note that changing the parameters of this particular model does not significantly change this conclusion. For example, assuming a 2 kpc central starburst would only increase the sizes by ∼50% and still lead to a significant under-prediction.

§.§ Post-quenching growth through mergers in massive galaxies

Figure <ref> shows that all of our bracketing models in some cases severely under-predict the sizes of quiescent galaxies above M^* and at z≲1. One possible way to bring the models into agreement with observations is to introduce a series of minor and/or major mergers following the quenching event. We investigate this further by assuming a simple toy model in which 90% of the quiescent galaxies above log(m/M_⊙) ∼ 10.8 experience ten 1:10 minor mergers during their lives after being quenched. We choose this case because minor mergers are more common and dominantly increase the size of galaxies and less so their stellar mass. For the implementation of this model, we assume that the virial condition holds for gas-poor ellipticals and compute the resulting size increase (ΔR_e) as a function of the merger mass fraction (Δm) and the change in velocity dispersion (Δσ) during the merger event as
ΔR_e = Δm (1/Δσ)^2,
where ΔR_e = R_e,post/R_e,pre, Δm = m_post/m_pre, and Δσ = σ_post/σ_pre are the ratios of the quantities before (“pre”) and after (“post”) the merging event. We assume that the change in the velocity dispersion is negligible during the merger event, i.e., Δσ ∼ 1 <cit.>. The open symbols in <ref> show the impact of post-quenching mergers on our previous results. The addition of a series of minor mergers to our instantaneous quenching + compaction model (orange open squares) indeed leads to a good agreement with the observed MR relation at z>0.5 at all stellar masses probed here. We note, however, that the sizes of massive m>M^* galaxies are still under-estimated at z<0.5. It is therefore likely that, if the compaction model holds, these galaxies must experience more minor mergers cumulatively than anticipated in our simple merger toy model.
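As a quick numerical illustration of this toy model (simple arithmetic based on the relations above):

```python
# Per 1:10 merger: dm = 1.1 and, with dsigma ~ 1 in the virial relation above,
# dRe = dm.  Cumulative factors after ten such mergers:
n_merg, dm = 10, 1.1
print(dm ** n_merg)          # ~2.59: growth in stellar mass and in size (dRe = dm)
print((dm ** 2) ** n_merg)   # ~6.73: size growth for a steeper dRe = dm^2 scaling
```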
Alternatively, massive galaxies at later cosmic times might be quenched via other paths that do not include a compaction phase, such as heating of cold gas alone. Such a possibility is shown by our fiducial model with post-quenching minor mergers (green open circles), which is able to predict the sizes of massive galaxies at z<0.5 well (though it fails at high redshifts). We note that Equation <ref> describes the effect of size growth by major mergers. Strictly speaking, the size growth due to minor mergers is expected to be steeper (Δ R_e = Δ m^α with α>1) <cit.>, which would decrease the number of mergers needed in our model. For example, assuming α=2, we find that only ∼30% of the galaxies at >10.8 need to experience a 1:10 merger in order to meet the observations. Finally, we note that mergers for low-redshift (z<1), less-massive (m<M^*) galaxies are not needed to bring our models into agreement with observations. This is in line with the idea that the size evolution of quiescent galaxies below ∼11 with cosmic time is mainly driven by the addition of newly quenched galaxies, while at higher masses it is more dominated by individual growth due to mergers <cit.>. § SUMMARY & CONCLUSIONS We use the size evolution of massive star-forming and quiescent galaxies as an independent diagnostic tool to investigate the process of quenching at >11 and z≲2. To this end, we measure the half-light size evolution of a large sample of very massive star-forming and quiescent galaxies at ≳ 11.4 on the 2-square degree survey field of COSMOS/UltraVISTA. We find the size evolution of both populations of galaxies at > 11.4 to be similar in slope and normalization and to be consistent with the extrapolation of the mass versus size relation from lower masses. In order to investigate different quenching mechanisms and the impact of mergers, we predict the MR relation of massive m>M^* quiescent galaxies within our simple empirical models as a function of redshift. Our main results are the following. * Massive galaxies quench fast. Models with instantaneous quenching or a short delay of up to ∼1Gyr are able to predict the sizes of quenching galaxies at z>1 and m>M^*. Longer quenching times are favored at lower masses and redshifts. * A more realistic model incorporating a compaction phase (e.g., due to a merger-triggered central starburst within 1kpc) followed by quiescence and subsequent individual growth by mergers is able to reproduce the observed MR relation of massive m>M^* quiescent galaxies at all redshifts. * None of our models is able to predict the size evolution of m>M^* galaxies at low redshifts (z≲1). We show that with 1:10 minor mergers for 90% of the quiescent galaxies at m>M^* the models can be brought into agreement with observations. In contrast, no mergers are needed at lower stellar masses, in agreement with the size evolution there being driven by the addition of bigger, newly quenched galaxies. It is important to note that we are not able to distinguish the dominant pathways of quenching of massive quiescent galaxies with our simple models, as these yield very similar predictions for the size evolution. Nonetheless, our study suggests that quenching is likely a fast process at the stellar masses probed here, with a significant involvement of mergers in the post-quenching growth of massive galaxies. For further distinguishing these models, more information on the (resolved) structural properties of the galaxies is necessary.
This will be possible with high-resolution imaging and spectroscopy of massive quiescent galaxies by the HST or the James Webb Space Telescope. We would like to thank Dan Masters, Charles Steinhardt, Behnam Darvish, and Bahram Mobasher for valuable discussions. Furthermore, we would like to thank the referee for valuable feedback that greatly improved this manuscript. AF acknowledges support from the Swiss National Science Foundation. Based on data products from observations made with ESO Telescopes at the La Silla Paranal Observatory under ESO programme ID 179.A-2005 and on data products produced by TERAPIX and the Cambridge Astronomy Survey Unit on behalf of the UltraVISTA consortium. This work is based on observations taken by the CANDELS Multi-Cycle Treasury Program with the NASA/ESA HST, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555. § THE GAS FRACTION F_GAS(M,Z) We use studies from the literature to fit an empirical relation f_gas(m,z), which is used in our models. The data used include PHIBSS at z∼1-1.5 <cit.> and COLDGASS at z∼0 <cit.>, as well as data from lensed and other star-forming galaxies from <cit.> and references therein. The result is shown in <ref> for four different bins in stellar mass. The PHIBSS and COLDGASS data are shown in black, the other measurements in gray. We also show in color f_gas derived from our galaxies (UMGs and lower mass control sample) using the Kennicutt-Schmidt relation <cit.>, relating Σ_SFR∝Σ_gas^N, where we take N=1.31 <cit.>. Note that these derivations are not used for the fitting of the parametrization for f_gas(m,z). We derive f_gas(m,z) and its uncertainty (95% CLs) by fitting the observed data as follows. We first perform a linear fit forced through the COLDGASS data point at z=0 in order to determine the slope. The error on the slope is derived from the systematic error of the fit and the uncertainty of the data points obtained by bootstrapping, which we add in quadrature. In a second fit, we fix the slope to the one determined before and fit for the intercept including its error. The resulting uncertainty region, shown as the hatched region in <ref>, is then derived as the union of the errors of the two fits. To get a continuous function for f_gas, we interpolate between the four stellar mass bins. We compared our fit to the recent work by <cit.>. We find that their f_gas(m,z) parametrization has a slightly steeper redshift dependence, resulting in 10-30% larger gas fractions at the highest redshifts. We have verified that our results do not change if using the <cit.> parametrization for f_gas(m,z). | http://arxiv.org/abs/1703.09234v1 | {
"authors": [
"A. L. Faisst",
"C. M. Carollo",
"P. L. Capak",
"S. Tacchella",
"A. Renzini",
"O. Ilbert",
"H. J. McCracken",
"N. Z. Scoville"
],
"categories": [
"astro-ph.GA",
"astro-ph.CO"
],
"primary_category": "astro-ph.GA",
"published": "20170327180006",
"title": "Constraints on Quenching of $z\\lesssim2$ Massive Galaxies from the Evolution of the average Sizes of Star-Forming and Quenched Populations in COSMOS"
} |
A numerical method for the estimation of time-varying parameter models in large dimensions ========================================================================================== We develop a general class of Bayesian repulsive Gaussian mixture models that encourage well-separated clusters, aiming at reducing the potentially redundant components produced by independent priors for locations (such as the Dirichlet process). The asymptotic results for the posterior distribution of the proposed models are derived, including posterior consistency and the posterior contraction rate in the context of nonparametric density estimation. More importantly, we show that compared to the independent prior on the component centers, the repulsive prior introduces an additional shrinkage effect on the tail probability of the posterior number of components, which serves as a measurement of the model complexity. In addition, an efficient and easy-to-implement blocked-collapsed Gibbs sampler is developed based on the exchangeable partition distribution and the corresponding urn model. We evaluate the performance and demonstrate the advantages of the proposed model through extensive simulation studies and real data analysis. The R code is available at <https://drive.google.com/open?id=0B_zFse0eqxBHZnF5cEhsUFk0cVE>. § INTRODUCTION In Bayesian analysis of mixture models, independent priors on the component-specific parameters have been widely used because of their flexibility and technical convenience. A nonparametric example is the renowned Dirichlet process (DP), where the atoms in the stick-breaking representation are independent and identically distributed (i.i.d.) from a base distribution. One of the potential but non-negligible issues for such an approach is the presence of redundant components, especially when parsimony on the number of components is preferred. For example, when a mixture model is used in biomedical applications, each component of the mixture may be interpreted as a clinically or biologically meaningful subpopulation (of patients, disease types, etc.). To address this challenge, in this paper we argue for a Bayesian approach for modeling repulsive mixtures as a competitive alternative, establish its posterior consistency and posterior contraction rate, and study the shrinkage effect on the posterior number of components in the presence of such repulsion. Mixture models have been extensively studied from both the frequentist and the Bayesian perspectives. Formally, given the parameter space Θ, a mixture model with a kernel density ψ:ℝ^p×Θ→ℝ_+ and a mixing distribution G∈ℳ(Θ) can be represented as _i∼∫_Θψ(,)dG(), where ℳ(Θ) is a class of probability distributions on Θ (equipped with an implicitly specified suitable σ-field). The most commonly used kernel density ψ is the normal density, which leads to the Gaussian mixture model (GMM). In particular, the GMM with a discrete (potentially infinitely supported) mixing G=∑_kw_kδ_(_k,_k) has been widely used for clustering, since an equivalent characterization is _i| z_i∼N(_z_i,_z_i), ℙ(z_i = k)=w_k, where z_i encodes the clustering membership of the corresponding observation _i. The parameters for each component (_k,_k), k=1,⋯,K, are referred to as the cluster/component-specific parameters.
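As an illustration of this latent-class characterization, the following R sketch (a hypothetical two-dimensional, three-component example; the MASS package is assumed to be available) generates data from a finite GMM through the labels z_i:

```r
# Sketch: simulating from a finite GMM via its latent-class representation
# x_i | z_i ~ N(mu_{z_i}, Sigma_{z_i}), P(z_i = k) = w_k.
library(MASS)  # for mvrnorm()
set.seed(1)
K  <- 3
w  <- c(0.4, 0.3, 0.3)                               # mixing weights
mu <- list(c(0, 0), c(-6, -6), c(6, 6))              # component means
Sg <- list(diag(c(2, 1)), 3 * diag(2), 2 * diag(2))  # component covariances

n <- 500
z <- sample(seq_len(K), n, replace = TRUE, prob = w)  # cluster memberships
x <- t(vapply(z, function(k) mvrnorm(1, mu[[k]], Sg[[k]]), numeric(2)))
table(z)  # component counts encoded by the labels z_i
```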
Throughout we use K to denote the (potentially infinite) number of components in a mixture model. When G is completely unknown, the GMM is referred to as the nonparametric GMM <cit.>. Frequentists' ways of modeling mixture models require a finite and fixed K, the estimation of which can be accomplished using model selection approaches. Nonparametric Bayesian priors allow us to perform inference without an a priori fixed and finite K. For example, the DP prior on G yields an exchangeable partition distribution on (_z_1,⋯,_z_n), which in turn induces a distribution on the number of clusters among (_z_1,⋯,_z_n). The development of Markov chain Monte Carlo sampling techniques <cit.> further popularized the DP mixture model in a wide array of applications, such as biomedicine, machine learning, pattern recognition, etc. Meanwhile, the asymptotic results of the DP mixture of Gaussians as a method of nonparametric density estimation have been studied. In the univariate case, the posterior consistency of the DP mixture of univariate Gaussians was established by <cit.>, and the posterior convergence rate in the context of density estimation in the nonparametric Gaussian mixture model was studied by <cit.>. Posterior consistency in the multivariate setting <cit.> is harder due to the exponential growth of the L_1-entropy of sieves. <cit.> derived the posterior contraction rates of general smooth densities for multivariate density estimation using the DP mixture of Gaussians. Nevertheless, as shown in <cit.>, the DP mixture model typically produces a relatively large number of clusters, some of which are redundant. Theoretically, <cit.> showed that when the underlying data generating density is a finite mixture of Gaussians, the posterior number of clusters under the DP mixture model is not consistent. In other words, the posterior distribution of the number of clusters does not converge to the point mass at the underlying true K. Alternatively, finite mixture models with a prior on K, referred to as mixtures of finite mixtures (MFM) <cit.>, were developed. The posterior inference of the MFM can be carried out either by reversible-jump Markov chain Monte Carlo (RJ-MCMC) <cit.>, or by the collapsed Gibbs sampler derived via the exchangeable partition representation <cit.>. Meanwhile, the posterior asymptotics for the MFM as a nonparametric density estimator is, to the best of our knowledge, restricted to the cases of univariate location-scale mixtures <cit.> and multivariate location mixtures <cit.>, in which the priors on locations are assumed to be conditionally i.i.d. given K. These approaches, however, assume independent priors on the component-specific parameters (_1,⋯,_K). In the context of parametric inference, where the underlying data generating distribution is a finite mixture of Gaussians, repulsive priors <cit.> and non-local priors <cit.> were developed as shrinkage methods to penalize mixture models with redundant components. In particular, theoretical properties regarding only univariate density estimation in the parametric GMM (i.e., assuming the true underlying density is a finite mixture of Gaussians) were discussed in <cit.> and <cit.>. In addition, <cit.> proposed repulsive mixtures via the determinantal point process (DPP) with a prior on K, where the RJ-MCMC sampler for the posterior inference is potentially inefficient in high-dimensional settings. In this paper, we propose a Bayesian repulsive Gaussian mixture (RGM) model. The main contributions of this paper are as follows.
First, under certain mild regularity conditions, we establish the posterior consistency for density estimation in the nonparametric GMM under the RGM prior, and obtain an “almost” parametric posterior contraction rate (log n)^t/√(n) for t>p+1. To the best of our knowledge, earlier work, such as <cit.>, <cit.>, and <cit.>, has not addressed the asymptotic analysis of repulsive mixture models for density estimation in the nonparametric GMM. <cit.> was the earliest work that discussed the posterior contraction rate for density estimation in the nonparametric GMM, where the Dirichlet process (DP) prior is used. <cit.> and <cit.> discussed the posterior contraction rate using repulsive priors, but under the parametric assumption that the mixing distribution is finitely discrete. Second, the relationship between the posterior of K (i.e., the number of components), which serves as a measurement of the model complexity, and the repulsive prior is studied as well. It turns out that compared to the independent prior on the component centers, the repulsive prior introduces an additional shrinkage effect on the tail probability of the posterior of K under the nonparametric GMM assumption. Furthermore, instead of fixing K or implementing an RJ-MCMC sampler for the posterior inference of the RGM model, we develop a more efficient blocked-collapsed Gibbs sampler that is based on the exchangeable partition distributions. The remainder of the paper is organized as follows. In Section <ref> we formulate the Bayesian repulsive Gaussian mixture model. Section <ref> presents the theoretical properties of the posterior distribution. In particular, we establish the posterior consistency, investigate the posterior contraction rate, and study the shrinkage effect on the posterior number of components in the presence of the repulsive prior. In Section <ref> we develop the generalized urn model for the RGM model by integrating out the mixing weights and K, and design an efficient blocked-collapsed Gibbs sampler. Section <ref> demonstrates the advantages of the proposed model as well as the efficiency of the proposed inference algorithm via simulation studies and real data analysis. We conclude the paper in Section <ref>. § BAYESIAN REPULSIVE MIXTURE MODEL In this section we formulate the RGM model in a Bayesian framework. Suppose 𝒮⊂ℝ^p× p is a collection of positive definite matrices, equipped with the Borel σ-field on 𝒮. We consider the Gaussian mixture model, a family of densities of the form f_F()=∫_ℝ^p×ϕ(|,)dF(,), where ϕ(|,)=(2π)^-p/2(det)^-1/2exp[-1/2(-)^⊤^-1(-)] is the density of the p-dimensional Gaussian distribution N(,) with mean and covariance matrix , and F is a distribution on ℝ^p×. We shall also use the shorthand notation ϕ_(-)=ϕ(|,) and f_F=ϕ_* F, where * is the conventional notation for the convolution of two functions. We assume that the data (_n)_n=1^∞ are i.i.d. generated from some unknown density f_0, the estimation of which is of interest. Denote the space of all probability distributions over ℝ^p× by ℳ(ℝ^p×), and that over ℝ^p by ℳ(ℝ^p).
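For concreteness, the following R sketch (with hypothetical component values; the mvtnorm package is assumed to be available) evaluates f_F = ϕ * F at a point for a finitely discrete mixing distribution F:

```r
# Sketch: evaluating f_F = phi * F when F is finitely discrete,
# i.e., f_F(x) = sum_k w_k * phi(x | mu_k, Sigma_k).
library(mvtnorm)  # for dmvnorm()
f_F <- function(x, w, mu, Sg) {
  sum(mapply(function(m, S, wk) wk * dmvnorm(x, mean = m, sigma = S),
             mu, Sg, w))
}
w  <- c(0.5, 0.5)                # hypothetical weights
mu <- list(c(-2, 0), c(2, 0))    # hypothetical means
Sg <- list(diag(2), diag(2))     # hypothetical covariances
f_F(c(0, 0), w, mu, Sg)          # mixture density at the origin
```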
We define a prior Π on f over the space of all density functions in ℝ^p by the following hierarchical model: (f()| F) = ∫_ℝ^p×ϕ(|,)dF(,), (F| K, {w_k, _k, _k}_k=1^K) = ∑_k=1^Kw_kδ_(_k,_k), (_1,_1,⋯,_K,_K| K) ∼ p(_1,_1,⋯,_K,_K| K), (w_1,⋯,w_K| K)∼𝒟_K(β), K∼ p_K(K), K∈ℕ_+. Here p(_1,_1,⋯,_K,_K| K)>0 is some density function with respect to the Lebesgue measure on (ℝ^p×)^K, and 𝒟_K(β) is the symmetric Dirichlet distribution over Δ^K with density function p(w_1,⋯,w_K)=Γ(Kβ)/Γ(β)^K∏_k=1^Kw_k^β-1, where Δ^K={(w_1,⋯,w_K):∑_k=1^Kw_k=1,w_k≥0} is the ℓ_1-simplex on ℝ^K. The prior on K being supported on all positive integers is essential, as we allow the number of components to grow with the sample size in order to fit the data well. Instead of assuming (_k,_k)_k=1^K being i.i.d. draws from a “base measure”, we introduce repulsion among the components N(_k,_k) through their centers _k, such that they are well separated. We assume the density p(_1,_1,⋯,_K,_K| K) is of the following form, p(_1,_1,⋯,_K,_K| K) = 1/Z_K[∏_k=1^Kp_(_k)p_(_k)]h_K(_1,⋯,_K), where Z_K=∫⋯∫_ℝ^p× K h_K(_1,⋯,_K)[∏_k=1^Kp(_k)]d_1⋯d_K is the normalizing constant, and the function h_K:(ℝ^p)^K→[0,1] is invariant under permutation of its arguments: h_K(_1,⋯,_K)=h_K(_𝔗(1),⋯,_𝔗(K)) for any permutation 𝔗:{1,⋯,K}→{1,⋯,K}. We require that h_K satisfies the following repulsive condition: h_K(_1,⋯,_K)=0 if and only if _k=_k' for some k≠ k', k,k'∈{1,⋯,K}. In this paper, we focus on the case where the repulsive property is introduced only through the mean vectors (_1,⋯,_K), i.e., we allow a nonvanishing density even when distinct components share an identical covariance matrix. The case where repulsion is introduced through the covariance matrices is of independent interest and may be further explored. We consider the following two classes of repulsive functions h_K(_1,⋯,_K): h_K(_1,⋯,_K) = min_1≤ k<k'≤ Kg(_k-_k'), h_K(_1,⋯,_K) = [∏_1≤ k<k'≤ Kg(_k-_k')]^1/K, for K≥2, and h_1(_1)≡ 1, where g:ℝ_+→[0,1] is a strictly monotonically increasing function with g(0)=0. Notice that the repulsive functions defined here generalize those in <cit.>, who fix K due to the challenges in estimating K caused by the complicated relation between Z_K and K. However, for the two repulsive functions (<ref>) and (<ref>), we are able to find the connection between Z_K and K in Theorem <ref>, the proof of which is deferred to Section <ref> of the Supplementary Material. We will discuss the non-asymptotic behavior of the posterior distribution of K in Section <ref>. Suppose the repulsive function h_K is either of the form (<ref>) or (<ref>). If ∬_ℝ^p×ℝ^p[log g(_1-_2)]^2p(_1)p(_2)d_1d_2<∞, then 0≤ -log Z_K≤ c_1K for some constant c_1>0. We refer to the prior Π on f∈(ℝ^p) given by (<ref>), (<ref>), (<ref>) or (<ref>) as the Bayesian repulsive Gaussian mixture (RGM) model, denoted by f∼RGM_1(β;g, p_, p_, p_K) if h_K is of the form (<ref>), or f∼RGM_2(β;g, p_, p_, p_K) if h_K is of the form (<ref>). § THEORETICAL PROPERTIES OF THE POSTERIOR DISTRIBUTION In this section we discuss the theoretical properties of the posterior of the RGM model defined in Section <ref>. In particular, in the context of density estimation in the nonparametric GMM, we establish the posterior consistency, discuss the posterior contraction rate, and study the shrinkage effect, introduced by the repulsive prior, on the tail probability of the posterior number of components. We defer the proofs of all theorems, corollaries, propositions, and lemmas to Sections <ref>, <ref>, and <ref> of the Supplementary Material.
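To make the two repulsive functions concrete, the following R sketch evaluates both forms for a configuration of centers containing one near-duplicate pair, under the particular (assumed) choice g(x)=x/(g_0+x) used later in the numerical examples:

```r
# Sketch of the two repulsive functions: the minimum form and the
# (K-th root of the) product form, with g(x) = x / (g0 + x).
g <- function(x, g0 = 1) x / (g0 + x)

h_min <- function(mu, g0 = 1) {   # min over all pairwise distances
  min(g(as.numeric(dist(mu)), g0))
}
h_prod <- function(mu, g0 = 1) {  # product form, tempered by 1/K
  K <- nrow(mu)
  prod(g(as.numeric(dist(mu)), g0))^(1 / K)
}

mu <- rbind(c(0, 0), c(0.1, 0), c(5, 5))  # one near-duplicate pair of centers
h_min(mu)   # close to 0: the configuration is heavily penalized
h_prod(mu)  # also small, but less extreme than the minimum form
```

Both forms vanish exactly when two centers coincide, which is the repulsive condition stated above.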
§.§ Preliminaries and Notations We begin with some useful notations. Given a positive definite matrix , we use λ() to denote any eigenvalue of , and λ_max(), λ_min() to denote the largest and smallest eigenvalues of , respectively. Denote the identity matrix, and _p∈ℝ^p× p the identity matrix of size p× p if specifying the matrix dimension is needed. The Kullback-Leibler (KL) divergence between two densities f and g is denoted by D(f|| g)=∫ flog(f/g). Denote · the Euclidean norm on ℝ^p. We use ·_1 to denote both the L_1-norm on L^1(ℝ^p) and the ℓ_1-norm on the finite dimensional Euclidean space ℝ^d for any d≥1. ·_∞ is used to denote both the ℓ_∞-norm of a vector and the supremum norm of a bounded function. We use ⌊ a⌋ to denote the maximum integer that does not exceed a. The notation a ≲ b is used throughout to represent a≤ cb for some constant c that is universal or unimportant for the analysis. Whenever possible, we use Π to represent the prior/posterior probability measure, ℙ_0 and 𝔼_0 to denote the probability and expectation with respect to the distribution f_0, and p to denote all density functions in the model except f_0, f, and {f_F:F∈(ℝ^p×)}. For random variables, we slightly abuse the notation and do not distinguish between the random variables themselves and their realizations. We shall also use p(x) or p_x(x) to denote the density of the random variable x. A weak neighborhood of f_0 is a set of densities containing a set of the form V={ f∈(ℝ^p):|∫φ_i f_0-∫φ_i f|<ϵ, i=1,⋯,I }, where the φ_i's are bounded continuous functions on ℝ^p <cit.>. The posterior distribution is said to be weakly consistent at f_0 if Π(f∈ U|_1,⋯,_n)→ 1 a.s. with respect to ℙ_0 for all weak neighborhoods U of f_0. Given a prior Π on (ℝ^p), a density function f_0∈ℳ(ℝ^p) is said to be in the KL-support of Π, or to have the KL-property (with respect to Π), if Π(f∈(ℝ^p):D(f_0|| f)<ϵ)>0 for all ϵ>0. The posterior distribution is said to be L_1 (strongly) consistent at f_0 if for all >0, Π(f∈(ℝ^p):f-f_0_1>|_1,⋯,_n)→0 as n→∞ a.s. or in ℙ_0-probability. The posterior contraction rate is any sequence (_n)_n=1^∞ such that Π(f∈(ℝ^p):f-f_0_1>M_n|_1,⋯,_n)→ 0 as n→∞ in ℙ_0-probability for some constant M>0. Given a family of densities ℱ on ℝ^p with a metric d on ℱ, the ϵ-covering number of ℱ with respect to d, denoted by 𝒩(ϵ, ℱ,d), is defined to be the minimum number of ϵ-balls of the form {g∈ℱ:d(f,g)<ϵ} that are needed to cover ℱ. The d-metric entropy is the logarithm of the covering number under the d-metric. Throughout, we assume that f∼RGM_r(β;g, p_,p_,p_K), r=1 or 2. In order to develop the posterior convergence theory, we need some regularity conditions, most of which are typically satisfied in practice. We group these conditions into two categories. The first set of conditions consists of requirements on the model. A0. The data generating density f_0 is of the form f_0=ϕ_*F_0 for some F_0∈(ℝ^p×) that has a sub-Gaussian tail: F_0(≥ t)≤ B_1exp(-b_1t^2) for some B_1,b_1>0. A1. For some δ>0, c_2>0, we have g(x)≥ c_2ϵ whenever x≥ϵ and ϵ∈(0,δ). A2. g satisfies ∬_ℝ^p×ℝ^p [log g(_1-_2)]^2p(_1)p(_2)d_1d_2<∞. A3. For some ^2,^2∈(0,+∞), we have ^2≤inf_λ()≤sup_λ()≤^2. A4. For some (non-random) unitary ∈ℝ^p× p, is diagonal for all ∈. By Theorem <ref>, condition A2 guarantees that 1/Z_K does not grow super-exponentially in K. Conditions A0 and A3 assume that both f_0 and f are of the nonparametric GMM form, and hence guarantee that f_0 and f are not too “spiky”, so that a faster rate of convergence is obtainable.
Condition A4, the simultaneous diagonalizability of all ∈, appears to be of less importance, but it turns out that a structured space of covariance matrices decreases the ·_1-metric entropy of the proposed sieves in Section <ref>, and hence affects the posterior contraction rate. We assume that =diag(λ_1,⋯,λ_p) for all ∈, i.e., the eigenvalues of ∈ are ordered according to the orthonormal eigenvectors in . We also need some requirements on the prior distributions. B1. (w_1,⋯,w_K| K)∼𝒟_K(β) is weakly informative: β∈(0,1]. B2. p_ has a sub-Gaussian tail: ∫_{≥ t}p()d≤ B_2exp(-b_2t^2) for some B_2,b_2>0. B3. For all ∈ℝ^p, p()≥ B_3exp(-b_3^α) for some α≥2, B_3,b_3>0. B4. p() is induced by ∏_j=1^p p_λ(λ_j()) with supp(p_λ)=[^2,^2]. B5. There exist some B_4,b_4>0 such that for sufficiently large K, we have p_K(K)≥exp(-b_4Klog K), ∑_N=K^∞p_K(N)≤exp(-B_4Klog K). Condition B1 assumes a vague prior on (w_1,⋯,w_K). Conditions B2 and B3 are requirements on the tail behavior of the function p_, in the sense that the tail is neither heavier than Gaussian nor thinner than that of an exponential power density <cit.>. Alternatively, one may assume p()∝exp(-b_3^α) for some b_3>0, as suggested by <cit.>. Condition B4 is adopted in <cit.> to obtain an “almost” parametric convergence rate. We will also discuss possible extensions to the case where p_λ has full support on (0,+∞) later in this section. Condition B5 is a requirement on the tail behavior of the prior on K. A similar assumption on the tail behavior of the prior on K is adopted in <cit.> and <cit.> for finite mixture models. As a useful example, we show that the commonly used zero-truncated Poisson prior on K satisfies condition B5. The zero-truncated Poisson prior has density function p_K(K)=λ^K/(e^λ-1)K!𝕀(K≥1) with respect to the counting measure on ℕ_+ for some intensity parameter λ>0. Direct computation gives ∑_N=K+1^∞p_K(N)=1/e^λ-1(e^λ-∑_N=0^Kλ^N/N!)=1/e^λ-1∫_0^λ(λ-t)^Ke^tdt/K!≲λ^K+1/(K+1)!, where the second equality is due to Taylor's expansion. By Stirling's formula, this is further upper bounded by (λ e/(K+1))^K+1. Therefore, substituting K+1 with K, we obtain ∑_N=K^∞p_K(N)≲exp(Klog (λ e)-Klog K)≤exp(-1/2Klog K) for sufficiently large K. The constant in ≲ can be absorbed into the exponent, and hence we conclude that ∑_N=K^∞p_K(N)≤exp(-B_4Klog K) for some B_4>0. For the lower bound on p_K(K), for sufficiently large K we again use Stirling's formula: p_K(K)=1/e^λ-1λ^K/K!≥exp(Klog(λ e)-log K-Klog K)≥exp(-2Klog K). Hence the zero-truncated Poisson prior on K satisfies condition B5. §.§ Posterior Consistency Weak consistency. Using the result from <cit.>, a sufficient condition for Π to be weakly consistent at f_0 is that f_0 is in the KL-support of Π. The following lemma is useful in that it provides a compactly supported F_m such that f_F_m can approximate f_0 arbitrarily well in the KL divergence sense. Assume conditions A0-A4 and B1-B5 hold. For all m∈ℕ_+, define a sequence of distributions (F_m)_m=1^∞ by F_m(A)=c_mF_0(A∩_m) for any measurable A⊂ℝ^p×, where _m={(:)∈ℝ^p×:≤ m,^2+1/m≤λ_min()≤λ_max()≤^2-1/m} and c_m is the normalizing constant for F_m with c_m^-1=F_0(_m). Then ∫ f_0logf_0/f_F_m→ 0 as m→∞. We remark that the construction in <cit.> is not directly applicable. The major reason is that the variance of the convolving ϕ is allowed to be arbitrarily close to 0 there, whereas we impose uniform boundedness on the eigenvalues of the covariance matrices.
The sequence of densities constructed in <cit.> is (f_m())_m=1^∞=(∫_ℝ^pϕ_σ_m^2(-)f_0()d)_m=1^∞, where (σ_m)_m=1^∞ is a sequence that converges to 0 at a certain rate. This construction does not apply when the covariance matrices are bounded in spectrum. The construction of the sequence of densities (f_F_m)_m=1^∞ in Lemma <ref> thus also serves as a technical contribution to the Kullback-Leibler property of the Bayesian nonparametric GMM. Based on Lemma <ref>, we are able to establish the weak consistency via the KL-property. Assume conditions A0-A4 and B1-B5 hold. Then f_0 is in the KL-support of Π, and hence Π(·|_1,⋯,_n) is weakly consistent at f_0. Strong consistency. To establish the posterior strong consistency, we utilize Theorem 1 in <cit.>, which is a standard result for proving consistency of general Bayesian nonparametric density estimation methods (see Section <ref> of the Supplementary Material). Specializing to the RGM model, we need to construct a sequence of submodels and partitions of each of these submodels that satisfy the conditions in Theorem 1 in <cit.>. We now make these statements precise. Consider the following submodels of (ℝ^p): ℱ_K_n={f_F:F=∑_k=1^Kw_kδ_(_k,_k),K≤ K_n,_k∈ℝ^p,_k∈} and the following partition of the submodel ℱ_K_n: 𝒢_K(_K)=ℱ_K(∏_k=1^K(a_k,a_k+1]), _K=(a_1,⋯,a_K)∈ℕ^K, K=1,⋯,K_n, where ℱ_K(∏_k=1^K(a_k,b_k])={f_F:F=∑_k=1^Kw_kδ_(_k,_k),_k_∞∈(a_k,b_k]}. According to Theorem 1 in <cit.>, it suffices to show the following: f_0 is in the KL-support of Π, and there exist some b,b̃>0 and some sequence (K_n)_n=1^∞, such that Π(ℱ_K_n^c)≲e^-bn for sufficiently large n, and for all ϵ>0, lim_n→∞e^-(4-b̃)nϵ^2∑_K=1^K_n∑_a_1=0^∞⋯∑_a_K=0^∞√(𝒩(ϵ,𝒢_K(_K),·_1))√(Π(𝒢_K(_K)))=0. Let a_k<b_k be non-negative integers, k=1,⋯,K. Then for sufficiently small δ>0, there exists a constant c_3>0 such that 𝒩(δ, ℱ_K(∏_k=1^K(a_k,b_k]),·_1) ≤(c_3/δ^2p+1)^K(∏_k=1^Kb_k)^p. Assume conditions A0-A4 and B1-B5 hold. Then we have ∑_K=1^K_n∑_a_1=0^∞⋯∑_a_K=0^∞√(𝒩(δ,𝒢_K(_K),·_1))√(Π(𝒢_K(_K)))≤ K_n(M/δ^p+1/2)^K_n for all sufficiently small δ and some constant M>0. Based on Lemma <ref> and Lemma <ref>, we are able to verify (<ref>) and hence establish the strong consistency. Assume conditions A0-A4 and B1-B5 hold. Then Π(·|_1,⋯,_n) is strongly consistent at f_0. §.§ Posterior Contraction Rate To compute the posterior contraction rate, it is sufficient to find two sequences (_n)_n=1^∞,(_n)_n=1^∞ such that Π(ℱ_n^c)≲exp(-4n_n^2), exp(-n_n^2)∑_K=1^K_n∑_a_1=0^∞⋯∑_a_K=0^∞√(𝒩(_n,𝒢_K(_K),·_1))√(Π(𝒢_K(_K)))→ 0, Π(f:∫ f_0logf_0/f≤_n^2,∫ f_0(logf_0/f)^2≤_n^2)≥exp(-n_n^2) (see Theorem 3 in <cit.>, which is also provided in Section <ref> of the Supplementary Material). For notational convenience we refer to the set of densities (f:∫ f_0logf_0/f≤ϵ^2,∫ f_0(logf_0/f)^2≤ϵ^2) as the KL-type ball, and denote it by B(f_0,ϵ). Equation (<ref>) is also known as the prior concentration condition. Lemma <ref> not only plays a fundamental role in establishing the posterior strong consistency, but also provides an upper bound for the sum in terms of δ, which is again used to verify equation (<ref>). Proposition <ref> finds the rates (_n)_n=1^∞,(_n)_n=1^∞ that satisfy (<ref>) and (<ref>). Assume conditions A0-A4 and B1-B5 hold. Let _n=(log n)^t_0/√(n), _n=(log n)^t/√(n), where t and t_0 satisfy t>t_0+1/2>1/2, and K_n=⌊(p+1)^-1(log n)^2t-1⌋. Then (<ref>) and (<ref>) hold. We are now left with finding the prior concentration rate (_n)_n=1^∞ that satisfies (<ref>). In particular, we need to bound the KL-type balls B(f_0,ϵ) by the L_1 distance.
The strategy is to approximate F_0 using a finitely discrete distribution with a sufficiently small number of support points. Lemma <ref> allows us to formalize this idea. Assume conditions A0-A4 and B1-B5 hold. For some constant η>0 and for all sufficiently small ϵ>0, there exists a discrete distribution F^⋆=∑_k=1^Nw_k^⋆δ_(_k^⋆,_k^⋆) supported on a subset of {(,)∈ℝ^p×:_∞≤ 2a} with a=b_1^-1/2(log1/ϵ)^1/2, _k^⋆-_k'^⋆_∞≥2ϵ, |λ_j(_k^⋆)-λ_j(_k'^⋆)|≥2ϵ whenever k≠ k', j=1,⋯,p, and N≲(log1/ϵ)^2p, such that {f_F:F=∑_k=1^Nw_kδ_(_k,_k):(_k,_k)∈ E_k,∑_k=1^N|w_k-w_k^⋆|<ϵ}⊂ B(f_0,ηϵ^1/2(log1/ϵ)^p+4/4), where E_k={(,)∈ℝ^p×:-_k^⋆_∞<ϵ/2,|λ_j()-λ_j(_k^⋆)|<ϵ/2,j=1,⋯,p}. We are now in a position to derive the posterior contraction rates for the RGM model. Assume conditions A0-A4 and B1-B5 hold. Then the posterior distribution Π(·|_1,⋯,_n) contracts at f_0 with rate ϵ_n=(log n)^t/√(n), t>p+(α+2)/4. It is interesting that the RGM model and some other independent-prior models (e.g., DP mixtures of Gaussians) yield similar posterior contraction rates. The major complication for the RGM model comes from proving that the KL-type ball is assigned sufficiently large prior probability, since in the RGM model the repulsive function h can only be lower bounded by 0, whereas h is always unity in the independent-prior model. Notice that the optimal rate (log n)^(p+1)+/√(n) is achieved when α=2, where (p+1)+ means that any t>p+1 is allowed. Namely, the posterior contraction rate is optimal when p_ has a Gaussian tail. For comparison, recall that for the general location-scale Gaussian mixture problem with bounded variance, Theorem 6.2 in <cit.> gives a contraction rate of (log n)^3.5/√(n) in the univariate case (p=1) using the DP mixture model, in which the distribution of the location parameters is Gaussian. Analogously, in the RGM model, we may use a Gaussian p_ to control the tail rate of the joint distribution of (_1,⋯,_K) as _k gets large, since the repulsive function h_K is bounded. Theorem <ref> improves the contraction rate to (log n)^t/√(n) with t>2 compared to that given by <cit.>. However, such an improvement is not due to the repulsive structure of the prior. The underlying reason is that we use Theorem 3 in <cit.> to derive the posterior contraction rate, whereas <cit.> use Theorem 2.1 in <cit.>, a weaker version of Theorem 3 in <cit.>, to derive it. In other words, this suggests that it is also possible to obtain an improved posterior contraction rate for a GMM with independent priors on the component centers using Theorem 3 in <cit.>. The boundedness of the eigenvalues of the covariance matrices (condition A3) was originally adopted in <cit.>, and is necessary to obtain an “almost” parametric rate (log n)^t/√(n) for some t>0. <cit.> adopted the same assumption and improved the posterior contraction rate of the location mixture problem. Requiring p_λ to have full support on (0,+∞), however, is necessary in cases where the underlying true density f_0 is no longer of the form f_0=ϕ_ * F_0 for some F_0∈(ℝ^p×). Beyond finite location mixture models, for general smooth densities the contraction rate is known to be (log n)^t n^-β̃/(2β̃+d) for some t>0, when f_0 is in a locally β̃-Hölder class <cit.>. It will be interesting to extend Theorem <ref> to the case where supp(p_λ)=(0,+∞) and explore the corresponding posterior contraction rate. §.§ Shrinkage Effect on the Posterior of K The behavior of the posterior of K is of great interest, since it is a measurement of the complexity of a nonparametric density estimator.
If a parametric assumption on f_0 is made in the sense that f_0=ϕ_ * F_0 for some finitely discrete F_0∈(ℝ×), then under mild regularity conditions, <cit.> proved that the posterior distribution p(K|_1,⋯,_n) converges weakly to the point mass at K_0 a.s. under the MFM model, where K_0 is the number of support points of F_0. However, when F_0 is no longer assumed to be finitely discrete, and a repulsive prior is introduced among the components in the MFM, there are few results concerning the mixture complexity in the literature. This issue is addressed in Theorem <ref> in terms of the shrinkage effect on the tail probability of the posterior of K in the presence of the repulsive prior. For simplicity, we only consider the case where both f_0 and the model are of the location-mixture form. We also assume that the g function is of the form g(x)=x/(g_0+x) for some g_0∈[0,∞), x>0. In particular, we allow g_0=0 so that the RGM model includes the special case of the independent-prior GMM. Suppose f_0()=∫_ℝ^pϕ__0(-)F_0(d) for some fixed _0∈ and conditions A0-A3 and B1-B3 hold with β=1, p() = ϕ(|,τ^2), and p_ = δ__0. Without loss of generality assume ∫_ℝ^p F_0(d) =0. Further assume p(K)=Ω Z_K λ^K/K!, where Ω=[∑_K=1^∞ Z_Kλ^K/K!]^-1, and that g is of the form g(x)=x/(g_0+x) for some g_0≥0, x>0. Suppose that f∼RGM_r(1,g,ϕ(|,τ^2),δ__0,p(K)), where r=1 or 2. Then for N≥ 3, we have the following result: 𝔼_0[Π(K≥ N|_1,⋯,_n)]≤ C(λ)χ_r(g_0;n,N)exp[nτ^2/2tr(_0^-1)]∑_K=N+1^∞λ^K/(e^λ - 1)K!, where χ_r(g_0;n,N) = { (1+g_0^3/2δ(τ))^2/3[2pτ^2+2n/Nτ^4𝔼_0(_0^-2)]^1/2/g_0+[2pτ^2+2n/Nτ^4𝔼_0(_0^-2)]^1/2, if r=1, (1+δ(τ)√(g_0))[2pτ^2+2n/Nτ^4𝔼_0(_0^-2)]^1/2/g_0+[2pτ^2+2n/Nτ^4𝔼_0(_0^-2)]^1/2, if r=2. . Here C(λ) is a constant depending on λ only, δ(τ) is a constant depending on τ only such that δ(τ)<1 for sufficiently large τ, 𝔼_0(_0^-2):=∫_ℝ^p_0^-2 F_0(d), and χ_r(g_0;n,N) is referred to as the shrinkage constant. As pointed out in Section <ref>, the normalizing constant Z_K complicates the posterior inference of K. In Theorem <ref> the prior density p(K) of the number of components is therefore assumed to be proportional to the Poisson density function modulo the factor Z_K, i.e., p(K)∝ Z_Kλ^K/K!, to eliminate this effect. Theorem <ref> unveils the relationship between the tail probability of the marginal posterior of K and the hyperparameter g_0 that introduces the repulsion: as long as τ is moderately large so that δ(τ)<1 (corresponding to a weakly informative prior), the upper bound for 𝔼_0[Π(K>N|_1,⋯,_n)] decreases as g_0 increases when g_0 is large enough. In particular, the shrinkage constant χ_r(g_0;n,N) is 1 when g_0=0 (i.e., no repulsion is enforced among the component centers), decreases as g_0 increases for g_0 large enough, and is smaller than 1 for sufficiently large g_0. Namely, compared to the independent prior for the component centers _k, the repulsive prior introduces an additional shrinkage effect on the tail probability of the posterior of K. In addition, it is worth mentioning that Theorem <ref> is a non-asymptotic result. Theorem <ref> also serves as guidance for constructing a sample-size dependent RGM prior that yields a slower rate of growth of K compared to the independent-prior Gaussian mixture model. Specifically, instead of using a hyperparameter g_0 that does not change with n, it is possible to choose a sample-size dependent hyperparameter g_0(n) that tends to infinity and thus affects the rate of decay of 𝔼_0[Π(K≥ K_n|_1,⋯,_n)] for certain sequences (K_n)_n=1^∞.
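A small numerical illustration of the shrinkage constant in R (the r=2 case; the values of τ, p, n, N, 𝔼_0(_0^-2), and δ(τ) below are hypothetical):

```r
# Sketch: the shrinkage constant chi_2(g0; n, N) from the theorem above.
# chi_2 equals 1 at g0 = 0 (no repulsion) and decays to 0 as g0 -> infinity.
chi_2 <- function(g0, n, N, p = 1, tau = 10, E0 = 1, delta_tau = 0.5) {
  s <- sqrt(2 * p * tau^2 + 2 * (n / N) * tau^4 * E0)
  (1 + delta_tau * sqrt(g0)) * s / (g0 + s)
}
g0_grid <- c(0, 1e2, 1e4, 1e6, 1e8)
round(chi_2(g0_grid, n = 1000, N = 10), 4)  # falls below 1 for large g0
```

For large g_0 the constant behaves like δ(τ)s/√(g_0), with s=[2pτ^2+2(n/N)τ^4𝔼_0(_0^-2)]^1/2, so the tail bound on the posterior of K eventually shrinks as the repulsion strengthens.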
Under such a sample-size dependent choice, however, the prior concentration condition might no longer hold, potentially resulting in a slower posterior contraction rate. It might be interesting to explore the trade-off between the shrinkage effect on K and the posterior contraction rate using a sample-size dependent repulsive prior. Assume the conditions in Theorem <ref> hold. If the sequence (K_n)_n=1^∞⊂ℕ_+ satisfies lim inf_n→∞K_n/n>0, then the tail probability of the posterior distribution of K satisfies Π(K≥ K_n|_1,⋯,_n)→ 0 in ℙ_0-probability as n→∞. In terms of K, the number of support points in the RGM model, which is a measurement of the model complexity of estimating an unknown density, Corollary <ref> says that the posterior probability that K is at least a non-negligible fraction of n (in the limit) converges to 0 in ℙ_0-probability as n→∞. In other words, the posterior number of components grows sub-linearly with respect to the sample size. § POSTERIOR INFERENCE For the DPP mixture model, <cit.> developed a variation of the RJ-MCMC sampler that can be extended to the RGM model. However, the reversible-jump moves in multi-dimensional problems could be challenging and inefficient. In this section, we design an efficient and easy-to-implement blocked-collapsed Gibbs sampler by representing the RGM model using the random partition distribution. Let us begin with characterizing the RGM model using the latent cluster configurations. Given a random measure F=∑_k=1^Kw_kδ_(_k,_k) with (w_1,⋯,w_K)∼𝒟_K(β), we may represent the finite mixture model as follows by integrating out (w_1,⋯,w_K): (_i| z_i,{_k,_k}_k=1^K,K) ∼ N(_z_i,_z_i), p(z_1,⋯,z_n| K) = Γ(Kβ)/Γ(n+Kβ)∏_k=1^KΓ(β+∑_i=1^n𝕀(z_i=k))/Γ(β). Let 𝒞_n denote the partition of {1,⋯,n} induced by =(z_1,⋯,z_n) as 𝒞_n={E_k:|E_k|>0}, where E_k={i:z_i=k} for k=1,⋯,K, and |E| denotes the cardinality of a finite set E. For example, if one has =(z_1,z_2,z_3,z_4,z_5,z_6)=(1, 3, 4, 4, 3, 1) with n=6, then the corresponding partition is 𝒞_6={{1,6},{2,5},{3,4}}. Using the exchangeable partition distribution in <cit.>, we establish the generalized urn model induced by the RGM model in Theorem <ref> after marginalizing out the intractable random distribution F. The proof is provided in Section <ref> of the Supplementary Material. Suppose the prior Π on ℳ(ℝ^p) is defined as in Section <ref>, and the latent class configuration variables = (z_1,⋯,z_n) are defined as in (<ref>). Let _i=_z_i, _i=_z_i, _i=(_i,_i), i=1,⋯,n, let 𝒞_n-1 be the partition on {1,⋯,n-1} induced by _1,⋯,_n-1, let (_c^⋆:c∈𝒞_n-1) be the unique values of (_1,⋯,_n-1), and let (_c^⋆:c∈𝒞_n-1) be those of (_1,⋯,_n-1). Let ℓ=|𝒞_n-1| be the number of clusters, and K be the number of components in F, where K≥ℓ. Denote by 𝒞_∅⊂ℕ_+ the indices of the components associated with no observations, with |𝒞_∅|=K-ℓ, by ((_c^⋆,_c^⋆)∈ℝ^p×:c∈𝒞_∅) the component-specific parameters of the components that are not associated with any observation, and by c=min(c:c∈𝒞_∅), provided that K≥ℓ+1. Denote by Π(_n∈·|-) the full conditional distribution of _n with F marginalized out.
Then Π(_n∈·|-) ∝[V_n(ℓ+1)β/V_n(ℓ)]∑_K=ℓ+1^∞α_K G_K(·)+∑_c∈𝒞_n-1(|c|+β)ϕ(_n|_c^⋆,_c^⋆)δ_(_c^⋆,_c^⋆)(·), where V_n(ℓ) = ∑_K=ℓ^∞K(K-1)⋯ (K-ℓ+1)/(β K)(β K + 1)⋯(β K+n-1)p_K(K), α_K = m_Kp(K|𝒞_n=𝒞_n-1∪{{n}}), m_K = ∫⋯∬ϕ(_n|_c^⋆,_c^⋆) h_K(_c^⋆:c∈𝒞_n-1∪𝒞_∅)p_(_c^⋆)d_c^⋆∏_c∈𝒞_∅p_(_c^⋆)d_c^⋆/∫⋯∫ h_K(_c^⋆:c∈𝒞_n-1∪𝒞_∅)∏_c∈𝒞_∅p_(_c^⋆)d_c^⋆, G_K(A) ∝ ∬_AL_K(_c^⋆)ϕ(_n|_c^⋆,_c^⋆)p_(_c^⋆)p_(_c^⋆)d_c^⋆d_c^⋆, L_K(_c^⋆) = ∫⋯∫ h_K(_c^⋆:c∈𝒞_n-1∪𝒞_∅)∏_c∈𝒞_∅,c≠cp_(_c^⋆)d_c^⋆, and h_K(_c:c∈_n-1∪_∅)=h_K(_c_1^⋆,⋯,_c_K^⋆) if one labels _n-1∪_∅ as {c_1,⋯,c_K}. Theorem <ref> is instructive for deriving the blocked-collapsed Gibbs sampler for the posterior inference of the proposed RGM model. We follow the notation in Theorem <ref>. Let 𝒞_-i be the partition induced by _-i:=(_1,⋯,_n)\{_i}, and (_c^⋆,_c^⋆:c∈_-i) be the unique values of _-i. Notice that by exchangeability, Π(𝒞=𝒞_-i∪{{i}}|_i,_-i,_-i)∝[V_n(|𝒞_-i|+1)β/V_n(|𝒞_-i|)]∑_K=|𝒞_-i|+1^∞α_K, Π(𝒞=(𝒞_-i\{c})∪{c∪{i}}|_i,_-i,_-i)∝ϕ(_i|_c^⋆,_c^⋆)(|c|+β), where c∈𝒞_-i. Namely, given a partition 𝒞_-i on {1,⋯,n}\{i}, the left-out index i forms a new singleton cluster with probability proportional to [V_n(|𝒞_-i|+1)β/V_n(|𝒞_-i|)]∑_K=|𝒞_-i|+1^∞α_K, and is merged into an existing cluster c∈𝒞_-i with probability proportional to ϕ(_i|_c^⋆,_c^⋆)(|c|+β). Instead of directly sampling from the above categorical distribution, which involves computing the intractable α_K's, we take advantage of the integral structure of α_K and design auxiliary variables following the data augmentation technique in <cit.>. Roughly speaking, when sampling from p(x,y) via MCMC, one introduces an auxiliary variable z and samples p(z| x,y), p(y| x,z), and p(x| z) alternately (collapsing). The auxiliary z is discarded after such an update. Using the above notation, we denote G(A|_-i,_-i)=∑_K=|_-i|+1^∞ p(K|=_-i∪{{i}}) ∬_A L_K(_c^⋆)p_(_c^⋆)p_(_c^⋆)d_c^⋆d_c^⋆/∫ L_K(_c^⋆)p_(_c^⋆)d_c^⋆, where L_K is defined in Theorem <ref>. Let g(_c^⋆,_c^⋆|_-i,_-i) be the density of G(·|_-i,_-i), and let the density of the auxiliary variable (_c^⋆,_c^⋆) be p(_c^⋆,_c^⋆|_i,_-i,_-i)=[V_n(|_-i|+1)β/V_n(|_-i|)]ϕ(_i|_c^⋆,_c^⋆)+∑_c∈_-i(|c|+β)ϕ(_i|_c^⋆,_c^⋆) /[V_n(|_-i|+1)β/V_n(|_-i|)]∑_K=|_-i|+1^∞α_K+∑_c∈_-i(|c|+β)ϕ(_i|_c^⋆,_c^⋆) g(_c^⋆,_c^⋆|_c^⋆,_c^⋆,c∈_-i). Given the auxiliary variable (_c^⋆,_c^⋆), suppose 𝒞 and _n are sampled as follows: ℙ(𝒞=𝒞_-i∪{{i}}|_c^⋆,_c^⋆,_i,_-i,_-i)∝[V_n(|𝒞_-i|+1)β/V_n(|𝒞_-i|)]ϕ(_i|_c^⋆,_c^⋆), ℙ(𝒞=(𝒞_-i\{c})∪{c∪{i}}|_c^⋆,_c^⋆,_i,_-i,_-i)∝(|c|+β)ϕ(_i|_c^⋆,_c^⋆), ℙ(_i∈ A|=_-i∪{{i}},_c^⋆,_c^⋆,_i,_-i,_-i)=δ_(_c^⋆,_c^⋆)(A), ℙ(_i∈ A|=(_-i\{c})∪({c∪{i}}),_c^⋆,_c^⋆,_i,_-i,_-i)=δ_(_c^⋆,_c^⋆)(A). Then the marginal posterior (_i|_i,_-i,_-i) with (_c^⋆,_c^⋆) and |_-i integrated out coincides with (<ref>), and the complete conditional distribution of (_c^⋆,_c^⋆) is given by ℙ((_c^⋆,_c^⋆)∈ A|_i,,_-i,_i)=𝕀( = _-i∪{{i}})δ__i(A)+𝕀(≠_-i∪{{i}})G(A|_-i,_-i). The proof of Theorem <ref> is deferred to Section <ref> of the Supplementary Material. Now we are in a position to introduce the blocked-collapsed Gibbs sampler for the posterior inference. We remark that this Gibbs sampler can also be regarded as a generalization of “Algorithm 8” in <cit.> to the case where a repulsive prior among the component centers is introduced. The basic idea is to draw samples from ℙ(_n|_c^⋆,_c^⋆,_i,_-i,_-i), ℙ(_i|_n,_c^⋆,_c^⋆,_i,_-i,_-i), and ℙ(_c^⋆,_c^⋆|_n,_i,_i,_-i,_-i) alternately, where (_c^⋆,_c^⋆) is the auxiliary variable introduced in Theorem <ref>. Suppose the current state of the Markov chain consists of (_c^⋆,_c^⋆:c∈𝒞_n) and a partition 𝒞_n on {1,⋯,n}.
We instantiate (_1,⋯,_n) using (_c^⋆,_c^⋆:c∈𝒞_n) and 𝒞_n by letting _z_i=(_c^⋆,_c^⋆) if i∈ c. A complete iteration of the blocked-collapsed Gibbs sampler is described below. * Step 1: For i=1,⋯,n: * Sample the auxiliary variable (_c^⋆,_c^⋆) from (<ref>): If 𝒞_n=𝒞_-i∪{{i}}, then set (_c^⋆,_c^⋆) = _i; Otherwise sample (_c^⋆,_c^⋆) from G(·|_-i,_-i) as follows: i) Sample K∼ p(K|𝒞_n=𝒞_-i∪{{i}}), set ℓ=|𝒞_-i|, compute 𝒞_∅ with |𝒞_∅|=K-ℓ, and set _-i=(_1,⋯,_n)\{_i}. ii) Sample _c^⋆∼ p_(_c^⋆). Sample (_c^⋆:c∈𝒞_∅) by accept-reject sampling: Sample (_c^⋆:c∈𝒞_∅) independently from p_ and U∼Unif(0,1), independent of (_c^⋆:c∈𝒞_∅); If U<h_K(_c^⋆:c∈𝒞_-i∪𝒞_∅), then accept the newly proposed samples; Otherwise resample (_c^⋆:c∈𝒞_∅) from p_ and U until U<h_K(_c^⋆:c∈𝒞_-i∪𝒞_∅). Discard all (_c^⋆,_c^⋆:c∈_∅\{c}). * Sample 𝒞_n from p(𝒞_n|,_c^⋆,_c^⋆,_i,_-i,_-i) according to (<ref>) and (<ref>): Π(𝒞_n=𝒞_-i∪{{i}}|-)∝[V_n(|𝒞_-i|+1)β/V_n(|𝒞_-i|)]ϕ(_i|_c^⋆,_c^⋆), Π(𝒞_n=(𝒞_-i\{c})∪{c∪{i}}|-)∝(|c|+β)ϕ(_i|_c^⋆,_c^⋆). * Assign _i a value according to ℙ(_i∈·|,_c^⋆,_c^⋆,_i,_-i,_-i): Set _i=(_c^⋆,_c^⋆) if _n = _-i∪{{i}}, and set _i=(_c^⋆,_c^⋆) if _n=(_-i\{c})∪({c∪{i}}) for some c∈_-i. * Step 2: Sample K from p(K|𝒞_n,_1,⋯,_n,_c^⋆:c∈_n); Set ℓ=|𝒞_n|, and compute 𝒞_∅ such that |𝒞_∅|=K-ℓ. * Step 3: Sample (_c^⋆:c∈𝒞_n) from p(_c^⋆|_i:i∈ c,_c^⋆,_n): For all c∈𝒞_n, sample _c^⋆ from p(_c^⋆|-)∝ p_(_c^⋆)∏_i∈ cϕ(_i|_c^⋆,_c^⋆). * Step 4 (Blocking): Sample (_c^⋆:c∈𝒞_n) from p(_c^⋆:c∈_n| K,_c^⋆,_1,⋯,_n,_n). This can be done by accept-reject sampling: For each c∈𝒞_n, sample from p(_c^⋆|-)∝ p_(_c^⋆)∏_i∈ cϕ(_i|_c^⋆,_c^⋆), and for each c∈𝒞_∅, sample _c^⋆∼ p_(_c^⋆). Next independently sample U∼Unif(0,1); If U<h_K(_c^⋆:c∈𝒞_n∪𝒞_∅), then accept the newly proposed samples; Otherwise resample (_c^⋆:c∈_n∪_∅) and U until U<h_K(_c^⋆:c∈_n∪_∅). * Step 5: Change the current state to (_c^⋆,_c^⋆:c∈𝒞_n) and 𝒞_n. The detailed implementation of the blocked-collapsed Gibbs sampler, including the discussion of sampling from p(K|_n) and p(K|_n,_1,⋯,_n,_c^⋆:c∈_n), is provided in Section <ref> of the Supplementary Material. It is worth noticing that, in theory, only Step 1 in the above Gibbs sampler is necessary to create a Markov chain whose stationary distribution is the full posterior distribution. Nevertheless, such an urn-model-based sampler could potentially yield a Markov chain that converges rather slowly, as has been pointed out in <cit.>. The resampling steps (Steps 2 through 5) are hence introduced to improve the mixing of the chain. The proposed sampler can be easily extended to the case where a non-Gaussian mixture model is used, provided that we use priors p_,p_ in (<ref>) that are conjugate to the non-Gaussian kernel density. In cases where non-conjugate priors p_,p_ are used, it is also possible to extend the blocked-collapsed Gibbs sampler either by the “no-gaps” method proposed by <cit.> or by a Metropolis-within-Gibbs sampler <cit.>. § NUMERICAL EXAMPLES We evaluate the performance of the RGM model and the blocked-collapsed Gibbs sampler proposed in Section <ref> through extensive simulation studies and real data analysis. Subsections <ref> and <ref> aim to illustrate the advantages of the RGM concerning accurate density estimation, identification of the correct number of components, and the shrinkage effect on the model complexity. Subsection <ref> demonstrates the efficiency of the proposed blocked-collapsed Gibbs sampler compared to the DP mixture model and the DPP mixture model <cit.>.
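The accept-reject moves in Steps 1 and 4 above are straightforward to implement. A minimal R sketch is given below, assuming the repulsive form (<ref>) with g(x)=x/(g_0+x) and a N(0,τ^2_p) proposal for the centers (the settings used in the examples that follow); since h_K≤1, accepting a proposal drawn from ∏_k p_(_k) with probability h_K yields exact draws from the repulsive prior on the centers:

```r
# Sketch of the accept-reject move for the component centers:
# propose mu_1, ..., mu_K i.i.d. from p_mu and accept with probability
# h_K(mu_1, ..., mu_K) <= 1 (valid rejection sampling).
sample_repulsive_centers <- function(K, p = 2, tau = 10, g0 = 10) {
  g <- function(x) x / (g0 + x)
  repeat {
    mu <- matrix(rnorm(K * p, sd = tau), nrow = K)   # proposal from p_mu
    hK <- if (K >= 2) min(g(as.numeric(dist(mu)))) else 1
    if (runif(1) < hK) return(mu)                    # accept w.p. h_K
  }
}
mu_star <- sample_repulsive_centers(K = 3)
min(dist(mu_star))  # accepted centers tend to be well separated
```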
In Subsection <ref> we apply the RGM model to analyze the Old Faithful geyser eruption data <cit.>. We assume β=1, indicating a uniform prior on (w_1,⋯,w_K| K). We assign a zero-truncated Poisson prior on K with intensity λ=1 (i.e., p(K)=𝕀(K≥ 1)/(e-1)K!) for all numerical examples except the location-mixture problem in Section <ref>. The repulsive function is defined through g(x)=x/(g_0+x) for some g_0>0, and without loss of generality, we let h_K be of the form (<ref>). Lastly, we assume p()=ϕ(|0,τ^2_p) and a truncated inverse Gamma prior on λ(), p(λ)∝𝕀(^2≤λ≤^2)λ^-a_0-1exp(-b_0/λ) for some a_0,b_0>0. We give the convergence diagnostics via trace plots and autocorrelation plots in Section <ref> of the Supplementary Material. To compare the performance of the proposed models with that of the competitors (e.g., the DP mixture (DPM) model and the DPP mixture model), we follow the ideas in <cit.> and compute the logarithm of the conditional predictive ordinate (log-CPO) of different models using the post-burn-in samples as follows: log-CPO=-∑_i=1^nlog[1/n_mc∑_i_it=1^n_mcp(_i|Θ_mc^i_it)^-1], where n_mc is the number of post-burn-in MCMC samples, i_it indexes the post-burn-in iterations, and Θ_mc^i_it represents the post-burn-in samples of all parameters generated by the MCMC at the i_it-th iteration. §.§ Fitting Multi-modal Density: Finite Gaussian Mixtures In this subsection, to demonstrate multi-modal density fitting, we fit a finite mixture of Gaussians using the RGM model, and evaluate its performance regarding density estimation and the identification of the number of components. In particular, suppose the simulated data _1,⋯,_n, n = 1000, are i.i.d. generated from the bivariate density: f_0()=0.4ϕ(|0,diag(2,1)) + 0.3ϕ(|(-6,-6), 3_2) + 0.3ϕ(|(6,6), 2_2). We implement the proposed blocked-collapsed Gibbs sampler with g_0 = 10, τ = 10, m = 2, =0.1, = 10, and a total number of 2000 iterations with the first 1000 iterations discarded as burn-in. For comparison, we consider the following DPM model: (_i|_z_i,_z_i)∼ N(_z_i,_z_i), (_z_i,_z_i| G)∼ G, and (G|α, G_0)∼DP(α, G_0), where G_0=N(, ) with ∼N(_1,/k_0) and ∼Inv-Wishart(4,Ψ_1), α∼Gamma(1, 1), _1∼N(0,2_2), k_0∼Gamma(0.5,0.5), and Ψ_1∼Inv-Wishart(4, 0.5_2). For the DP mixture model, we use K to represent the number of clusters throughout this section, since the number of components is always infinite. Table <ref> shows that the log-CPO of the RGM model is higher than that of the DPM model, indicating that the RGM is preferred according to the data. Figures <ref>a and <ref>c show the posterior density estimation under the RGM model and the DP mixture model, respectively, indicating that both methods perform well in terms of density estimation. However, as shown in the histograms of the posterior numbers of components/clusters in Figures <ref>b and <ref>d, the posterior distribution of the number of components is highly concentrated around the underlying true K under the RGM model, whereas the DPM model assigns relatively higher posterior probability to redundant clusters. This agrees with the inconsistency phenomenon of the DPM model for the identification of the number of components reported in <cit.>. §.§ Fitting Uni-modal Density: Continuous Gaussian Mixtures Besides generating the simulated data from a finite discrete Gaussian mixture model, in this subsection we consider a continuous mixture of Gaussians, f_0(y_1,y_2)=∏_i=1^2∫_0^∞ϕ(y_i-μ_i-μ_0| 0,1)exp(-μ_i)dμ_i. Notice that f_0 is uni-modal. The random variables y_i, i=1,2, can be i.i.d.
generated as the sum of a normal random variable and an exponential random variable with intensity parameter 1, i.e., y_i=z_i+μ_i, where z_i∼N(μ_0,1) and μ_i∼Exp(1), i=1,2. Then =(y_1,y_2) is the random vector following the distribution in (<ref>). The marginal distribution of y_i is referred to as the exponentially modified Gaussian (EMG) distribution, the density of which can alternatively be represented as f(y)=1/2exp(μ_0-y+1/2)erfc(μ_0+1-y/√(2)), where erfc is the well-known complementary error function erfc(x)=2/√(π)∫_x^∞exp(-t^2)dt. We generate n=1000 i.i.d. samples from f_0 with μ_0=-4, and implement the proposed blocked-collapsed Gibbs sampler with g_0 = 7, τ = 10, m = 2, =0.1, = 10, and a total number of 2000 iterations with the first 1000 iterations discarded as burn-in. For comparison, we consider a similar DPM model with the same settings as in Section <ref>. Figures <ref>a and <ref>c show that the RGM model and the DPM model provide similarly accurate estimates of the underlying true density f_0. However, Figures <ref>b and <ref>d indicate that under the DPM model, the number of active components tends to be larger than that under the RGM model in order to fit the data well. In other words, the posterior of the RGM model provides the same level of accuracy in density estimation as the DPM model does, but with fewer components. In this simulation study, with high posterior probability, the RGM model only utilizes 3 components to fit the density, whereas the DPM model assigns large posterior probability to utilizing 4 or more components. The log-CPO comparison in Table <ref> clearly shows that the RGM model outperforms the DPM model. To demonstrate the parsimony effect on the number K of components needed to fit the density well, we also compare the RGM with the independent-prior MFM. As suggested by Theorem <ref>, we consider only the location-mixture problem here. That is, the covariance matrices for all components under both the RGM and the MFM are fixed at _k=_2, k=1,⋯,K. We use the prior p(K)∝Z_K𝕀(K≥ 1)/K! for the RGM, and p(K)∝𝕀(K≥ 1)/K! for the MFM. We implement the proposed blocked-collapsed Gibbs sampler with τ = 10, m = 2, g_0=7 for the location-RGM, g_0=0 for the MFM, and a total number of 2000 iterations with the first 1000 iterations discarded as burn-in. Since the data generating density is a continuous mixture of Gaussians, there is no “ground-truth” K. We evaluate the two methods in terms of the posterior of K and the log-CPO values. Figures <ref>a and <ref>c show that the location-RGM and the MFM provide similarly accurate estimates of the underlying true density f_0 and yield similar log-CPO values. Nevertheless, it can be seen from Figures <ref>b and <ref>d that the MFM model assigns a larger number of components than the location-RGM. This phenomenon also numerically verifies Theorem <ref>: compared to the independent prior (g_0=0), the posterior number K of components under the repulsive prior (g_0>0) tends to be smaller. We also observe that both the location-RGM and the MFM provide similar performance in terms of density estimation, as measured by the log-CPO (-3355.629 and -3366.545 under the location-RGM and the MFM, respectively). §.§ Multivariate Model-Based Clustering Now we focus on a higher dimensional model-based clustering problem. Suppose that we generate n = 500 i.i.d.
samples from a mixture of three 10-dimensional Gaussians: f_0()=0.4ϕ(|_1,_1) + 0.3ϕ(|_2, 3_10) + 0.3ϕ(|_3, 2_10), where the covariance matrix for the first component is a randomly generated diagonal matrix: _1=diag(5.5729, 5.0110, 3.6832, 8.1931, 5.7717, 3.0267, 3.5011, 7.8291, 4.2233, 4.3885), and _1=0, _2 = (-6,⋯,-6)∈ℝ^10, _3 = -_2. In this simulation study, we focus on model-based clustering without fixing the number K of components a priori. Due to the challenge of visualizing high-dimensional clustering, we only show the scatter plot of the 4th versus the 8th coordinate of the simulated data in Figure <ref>a. These two dimensions correspond to the two largest eigenvalues of the covariance matrix. The projection of the data onto this 2-dimensional subspace shows that the three clusters are not well-separated. We implement the proposed blocked-collapsed Gibbs sampler with g_0 = 70, τ = 10, m = 2, =0.1, = 10. To demonstrate the efficiency of the proposed sampler, we keep all MCMC samples and compare the efficiency of the algorithms in terms of their numbers of burn-in iterations. For comparison, we consider two alternative clustering models and evaluate their performance in terms of efficiency in estimating the posterior number of components. The first one is the DPM model: (_i|_z_i,_z_i)∼ N(_z_i,_z_i), (_z_i,_z_i| G)∼ G, and (G|α, G_0)∼DP(α, G_0), where G_0=N(,) with ∼N(0,/k_0) and ∼Inv-Wishart(12,Ψ_1), α=1, k_0∼Gamma(0.005,0.005), and Ψ_1=0.1_10. The second alternative model is the DPP mixture model proposed in <cit.>, who used the determinantal point process as a repulsive function: h_K(_1,⋯,_K)=det{[exp(-_k-_k'^2/2θ^2)]_K× K} for K≥ 2, and h_K≡1 otherwise. The posterior inference of the DPP mixture model was performed using a potentially inefficient RJ-MCMC sampler. We initialize the Markov chains with K=10 for all three models. Comparing the histograms and trace plots of the posterior number of components/clusters in Figures <ref>b, <ref>c, and <ref>d, we find that the DPM model significantly over-estimates the number of components at 23 in order to fit the 10-dimensional data well; the DPP mixture inferred with RJ-MCMC, though it eventually stabilizes at the correct K = 3, requires a relatively large number of iterations to find the underlying truth (approximately 500 iterations). In contrast, the posterior number of components under the RGM model is highly concentrated around the underlying true K = 3, and stabilizes within only 100 iterations. In terms of the efficiency of the Markov chain, the blocked-collapsed Gibbs sampler of the RGM model outperforms the other two alternatives. We further report the performance of the model-based clustering procedure under the RGM model. Adopting the ideas in <cit.> and <cit.>, we define the association matrix S∈{0,1}^n× n with (i,j)th entry 𝕀(_i=_j), and H∈[0,1]^n× n with (i,j)th entry ℙ(_i=_j|_1,⋯,_n). Using the posterior samples, H can be approximated by the posterior mean of 𝕀(_i=_j) for all (i,j) pairs. We compute the mean of the absolute mis-classification matrix (|H_ij-S_ij|)_n× n. The mis-classification error defined by 1/n^2Ĥ-S_F is 1.0215× 10^-5, where Ĥ is computed using the posterior means. §.§ Old Faithful Geyser Eruption Data In this subsection, we consider, as a real-world example, the Old Faithful geyser eruption data, which record the eruption lengths of the Old Faithful geyser in Yellowstone National Park, with n=272 observations.
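As an aside, the co-clustering error used in the preceding subsection can be computed with the following short R sketch, where z_draws is a hypothetical n_mc × n matrix of sampled label vectors and z_true holds the true labels:

```r
# Sketch: mis-classification error (1/n^2) * ||H - S||_F, where H is the
# posterior mean of I(z_i = z_j) over MCMC draws and S uses the true labels.
coclust_error <- function(z_draws, z_true) {
  n <- ncol(z_draws)
  H <- matrix(0, n, n)
  for (t in seq_len(nrow(z_draws))) {
    H <- H + outer(z_draws[t, ], z_draws[t, ], "==")
  }
  H <- H / nrow(z_draws)              # posterior co-clustering probabilities
  S <- outer(z_true, z_true, "==")    # true association matrix
  sqrt(sum((H - S)^2)) / n^2          # Frobenius norm scaled by 1/n^2
}
```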
§.§ Old Faithful Geyser Eruption Data
In this subsection, we consider, as a real-world example, the Old Faithful geyser eruption data, which record the eruption lengths of the Old Faithful geyser in Yellowstone National Park, with n=272 observations. Following the procedure described in <cit.>, for each observed eruption duration we pair it with the duration of the next eruption, so that we obtain bivariate data of sample size 271. The points with the "short followed by short" eruption property were identified as outliers in <cit.>, in which a robust trimmed-mean procedure was used to reduce the effect of these outliers. Alternatively, we apply the RGM model to analyze the bivariate dataset, and show that the outliers can actually be identified as an extra component. We also compare the proposed method with the two alternative models: the DPM model and the DPP mixture model as described in subsection <ref>. Figure <ref> shows the predictive densities and the histograms of the number of components/clusters estimated by the three models: the RGM model, the DPM model, and the DPP mixture model. The proposed RGM not only identifies the outlier component (Figure <ref>a), but also provides a posterior number of components that is highly concentrated at K = 4 (Figure <ref>b). In contrast, Figure <ref>c shows that the DPP mixture fails to identify the outliers at the bottom-left corner of the scatter plot – instead, they are merged into the existing cluster located at the bottom-right corner. The corresponding posterior number of components K, as illustrated in Figure <ref>d, is highly concentrated at K = 3, failing to detect the outlier component. In addition, notice that the failure to identify the outliers significantly affects the posterior predictive density estimate, as shown by the comparison of the level curves among Figures <ref>a, <ref>c, and <ref>e. The DPM model in Figure <ref>e, although it successfully detects the outlier component, still assigns relatively large posterior probability to redundant components (Figure <ref>f). Hence the proposed RGM model outperforms the other two alternatives in terms of robustness and of model complexity as measured by the posterior of K. This conclusion is also supported by the fact that the log-CPO of the RGM model is higher than those of the DPM model and the DPP mixture model (Table <ref>).
§ CONCLUSION
We propose the RGM model, in which the location parameters of the components are not a priori independent, but jointly distributed according to a symmetric repulsive distribution that encourages the separation of the locations of different components. We establish posterior consistency and obtain an "almost" parametric posterior contraction rate ((log n)^t/√(n) with t> p+1), generalizing the repulsive mixture model proposed by <cit.> to the context of density estimation in nonparametric GMM. Furthermore, we study the shrinkage effect of the proposed RGM model on the model complexity, i.e., on the number of components necessary to fit the data well. Based on the exchangeable partition distribution, we develop a blocked-collapsed Gibbs sampler for the posterior inference. Through extensive simulation studies and real data analysis, we demonstrate that the proposed RGM model is able to detect outliers and simultaneously penalize the number of components to reduce model complexity and accurately estimate the underlying true density. Moreover, the proposed sampler converges much faster than the RJ-MCMC sampler in <cit.>, even in slightly higher-dimensional clustering problems. There are several potential further extensions.
Beyond mixture models for density estimation, it is also interesting to extend the repulsive mixture model to the nested clustering of grouped data, and to perform simultaneous clustering of individuals within each group and of the group-level features, when the inference favors a parsimonious model and the focus is on interpreting the clusters as meaningful subgroups. Secondly, the posterior distribution of the number of components under the RGM model is potentially sensitive to the hyperparameters in the repulsive function h_K. Performing sensitivity analysis by imposing suitable priors on the hyperparameters is possible if an efficient updating rule for them can be integrated within the blocked-collapsed Gibbs sampler. Lastly, instead of implementing a Gibbs sampler, which is not scalable to a large number of observations, one can develop an optimization-based fast inference algorithm, which would greatly improve the computational efficiency and scalability of the posterior inference.

Bayesian Repulsive Gaussian Mixture Model: Supplementary Material

§ SUPPORTING RESULTS
§.§.§ Sufficient Conditions for Posterior Weak Consistency
We use the results in <cit.> to establish the weak consistency of Π. Denote by Π^⋆ the prior on F∈ℳ(ℝ^p×) that induces the prior Π on f. Noticing that the prior Π^* on F is supported on the class of all finitely discrete probability distributions on ℝ^p×, which is dense in ℳ(ℝ^p×) under the weak topology, we conclude that Π^* has full weak support on ℳ(ℝ^p×). As a consequence, we need to verify conditions A1, A7, A8, and A9 of <cit.> (which we list as C1, C2, C3, and C4): for all ϵ>0 there exist some F_ϵ∈supp(Π^*) and a closed set D⊃supp(F_ϵ) such that
C1 ∫_ℝ^pf_0()logf_0()/f_F_ϵ()d<ϵ;
C2 ∫_ℝ^pf_0()|logf_F_ϵ()/inf_(,)∈ Dϕ(|,)|d<∞;
C3 For any compact C⊂ℝ^p, c:=inf_(, ,)∈ C× Dϕ(|,)>0;
C4 For any compact C⊂ℝ^p, there exists some E⊂ℝ^p× such that D is contained in the interior of E, the class of functions {(,)↦ϕ(|,):∈ C} is uniformly equicontinuous on E, and sup{ϕ(|,):∈ C, (,)∈ E^c}<cϵ/4.
§.§.§ Sufficient Conditions for Posterior Strong Consistency
To prove the posterior strong consistency of the RGM model we apply Theorem 1 in <cit.>. Consider a statistical model ℱ with a prior Π, and let (_i)_i=1^n be an i.i.d. sequence with density f_0∈ℱ. Assume that there exists a sequence of submodels (ℱ_n)_n=1^∞ with partitions ℱ_n=⋃_j=1^∞ℱ_nj. If f_0 is in the KL-support of Π, and there exist some a,b>0 such that Π(ℱ_n^c)≲e^-bn, and exp(-(4-a)nϵ^2)∑_j=1^∞√(𝒩(2ϵ, ℱ_nj,·_1))√(Π(ℱ_nj))→ 0, then Π(f:f-f_0>ϵ|_1,⋯,_n)→ 0 in ℙ_0-probability.
§.§.§ Theorem 3 in <cit.>
To compute the posterior rate of convergence of the RGM model, we rely on the conditions of Theorem 3 in <cit.>. Given a statistical model ℱ with a prior Π, let (_i)_i=1^n be an i.i.d. sequence with density f_0∈ℱ. Assume that there exists a sequence of submodels (ℱ_n)_n=1^∞ with partitions ℱ_n=⋃_j=1^∞ℱ_nj, and two sequences (_n)_n=1^∞,(_n)_n=1^∞ with _n,_n→0, n_n^2,n_n^2→∞, _n≥_n, such that Π(ℱ_n^c)≲exp(-4n_n^2), exp(-n_n^2)∑_j=1^∞√(𝒩(_n,ℱ_nj,·_1))√(Π(ℱ_nj))→0, Π(f:∫ f_0logf_0/f≤_n^2,∫ f_0(logf_0/f)^2≤_n^2)≥exp(-n_n^2). Then Π(f:f-f_0>_n|_1,⋯,_n)→ 0 in ℙ_0-probability.
§ PROOF OF THEOREM <REF>
First of all, since h_K(_1,⋯,_K)≤ 1, we see immediately that Z_K≤∫_ℝ^p⋯∫_ℝ^p∏_k=1^K p_(_k)d_1⋯d_K=1, and hence -log Z_K≥ 0. Now we consider the upper bound for -log Z_K. Suppose h_K is of the form (<ref>). Let _1,⋯,_K be i.i.d. with density p_.
Then by Jensen's inequality,-log Z_K=-log𝔼[min_1≤ k<k'≤ Kg(_k-_k')]≤𝔼[max_1≤ k<k'≤ K-log g(_k-_k')].Observing that[max_1≤ k<k'≤ K-log g(_k-_k')]^2=max_1≤ k<k'≤ K[log g(_k-_k')]^2,we obtain-log Z_K ≤ {𝔼[max_1≤ k<k'≤ K[log g(_k-_k')]^2]}^1/2≤{∑_1≤ k<k'≤ K𝔼[log g(_k-_k')]^2}^1/2= {1/2K(K-1)𝔼[log g(_1-_2)]^2}^1/2≤ c_1K,where the constant c_1 can be taken asc^2_1=1/2𝔼[log g(_1-_2)]^2=1/2∬_ℝ^p×ℝ^p[log g(_1-_2)]^2p(_1)p(_2)d_1d_2<∞.Now we consider the case where h_K is of the form (<ref>). Still let _1,⋯,_K p(). Jensen's inequality yields-log Z_K = -log𝔼[∏_1≤ k<k'≤ Kg(_k-_k')^1/K] ≤∑_1≤ k<k'≤ K1/K𝔼[-log g(_k-_k')] ≤ K-1/2{𝔼[log g(_1-_2)]^2}^1/2≤ c_2Kfor some constant c_2>0. § PROOFS OF POSTERIOR CONSISTENCY§.§ Proof of Lemma <ref> Without loss of generality we assume that _1 is non-empty. Clearly, _m↑ℝ^p× and c_m↓ 1 as m→∞ by the monotone continuity of F_0. Furthermore, ϕ(|,)≤(2π^2)^-p/2. Hence, f_F_m=c_m[ϕ_(-)𝕀__m()]*F_0→ϕ_*F_0=f_0 by the bounded convergence theorem, implying that logf_0/f_F_m→ 0 as m→∞. In order to show ∫ f_0logf_0/f_F_m→0 as m→∞, it suffices to find a dominating function g() such that |logf_0/f_F_m|≤ g for all m∈ℕ_+, and the conclusion is guaranteed by the dominating convergence theorem. First of all, notice that for all m∈ℕ_+, we have f_F_m≤ c_mϕ_*F_0≤ c_1(2π^2)^-p/2, and thus f_0≤ c_1(2π^2)^-p/2 by letting m→∞. It follows that logf_0/f_F_m≥logf_0/c_1(2π^2)^-p/2. Next, we see that f_F_m() = c_m∫__mϕ(|,)dF_0(,)≥ ∫__1ϕ(|,)dF_0(,)≥ (2π^2)^-p/2∫__1exp(-1/2^2-^2)dF_0(,). If ≤1, then -≤ 2 as ≤1, and hence exp(--^2/2^2)≥exp(-2/^2); If >1, then -≤2 as ≤, and hence exp(--^2/2^2)≥exp(-2^2/^2). It follows that f_F_m()≥ξ():=(2π^2)^-p/2{ exp(-2/^2)F_0({:≤1}×_1),if ≤ 1,exp(-2^2/^2)F_0({:≤1}×_1),if >1. . and thus, logf_0/f_F_m≤logf_0/ξ. In particular, f_0≥ξ by letting m→∞. Together we have logf_0/c_1(2π^2)^-p/2≤logf_0/f_F_m≤logf_0/ξ⟹|logf_0/f_F_m|≤ g:=max{|logf_0/c_1(2π^2)^-p/2|, logf_0/ξ}. To show that g is f_0-integrable, it suffices to verify the f_0-integrability of log f_0 and logξ. Notice that log c_1-(p/2)log(2π^2)≥log f_0≥logξ, implying |log f_0|≤ |log c_1|+p/2|log(2π^2)|+|logξ|,it is only left to verify the f_0-integrability of logξ. When ≤1, logξ is constant, and when >1, we have ∫_{≥1}f_0()|logξ()|d≤p/2|log(2π^2)|+|log F_0({:≤ 1}×_1)|+2/^2∫_{≥1}^2f_0()d≤p/2|log(2π^2)|+|log F_0({:≤ 1}×_1)|+2/^2𝔼_0^2<∞, where the finiteness of 𝔼_0^2 is guaranteed by condition A1 and Fubini's theorem. Hence logξ is f_0-integrable. §.§ Proof of Theorem <ref> By Theorem 1 and Lemma 3 in <cit.>, it suffices to verify conditions C1, C2, C3, and C4. By Lemma <ref>, for all ϵ>0, there exists an integer m such that F_ϵ = F_m satisfies C1. Noticing that F_ϵ∈supp(Π^*) automatically holds since supp(Π^*)=(ℝ^p×), and thatitself is compact, we can take D=_m. For any compact C⊂ℝ^p, take large enough a such that C⊂{:≤ a}. In addition, C3 automatically holds, since C× D is compact in ℝ^p×ℝ^p×, and ϕ is strictly positive. It suffices to verify C2 and C4. To verify C2, it suffices to show that log f_F_m and loginf_(,)∈ Dϕ(|,) are f_0-integrable. Notice that (2π^2)^-p/2≥inf_(,)∈ Dϕ(|,)≥ζ_m():=(2π^2)^-p/2{ exp(-2m^2/^2),if ≤ m,exp(-2^2/^2),if >m, . since when ≤ m we have (-)^T^-1(-)≤^-2-^2≤ 4^-2m^2, and when >m we have (-)^T^-1(-)≤^-2-^2≤ 4^-2^2. It follows that loginf_(,)∈ Dϕ(|,) is f_0-integrable if logζ_m is integrable. When ≤ m, ζ_m is a constant, and when >m, ∫_{>m}f_0()|logζ_m()|dy≤p/2|log(2π^2)|+2/^2𝔼_0^2<∞. Hence loginf_(,)∈ Dϕ(|,) is f_0-integrable. 
Using the ξ function constructed in (<ref>) in the proof of Lemma <ref>, we see that c_1(2π^2)^-p/2≥ f_F_m()≥ξ(), and it is proved that logξ() is f_0-integrable. It follows that log f_F_m is f_0-integrable. To verify C4, given compact C with C⊂{:≤ a} for some large enough a>0,let E={:≤max(a, m)+max[1,√(2^2log(8/(2π^2)^p/2cϵ))]}×. Then E contains D in its interior, and E is also compact. Therefore the function (,,)↦ϕ(|,) on C× E is uniformly continuous, and hence, asvaries over C, the class of functions {(,)∈ E↦ϕ(|,):∈ C} is also uniformly equicontinuous. Now we show that sup{ϕ(|,):∈ C, (,)∈ E^c}<cϵ/4. Since for any (,,)∈ C× E^c, we have ≤ a,>a+max[1,√(2^2log(8/(2π^2)^p/2cϵ))]⟹-≥-≥max[1,√(2^2log(8/(2π^2)^p/2cϵ))], then we obtain sup_(,,)∈ C× E^cϕ(|,) ≤1/(2π^2)^p/2exp[-1/2^2(-)^2]<cϵ/4. The proof is thus completed.§.§ Proof of Lemma <ref> Suppose δ>0 is given. By Lemma A.4 in <cit.>, there exists an ℓ_1 δ-net ℐ_0 of Δ^K, such that the cardinality |ℐ_0| of ℐ_0 is upper bounded by (5/δ)^K. Now let ℛ_k be an δ-net of {_k:_k_∞∈(a_k,b_k]} under the ·_∞-metric. Clearly, one can make |ℛ_k|≤(b_k/δ+1)^p. Furthermore let 𝒮_jk be an δ-net of {√(λ_j(_k)):λ_j(_k)∈[^2,^2]} with cardinality |𝒮_jk|≤(-)/δ+1 under the ·_∞-metric. It follows that for all f_F∈ℱ_K(∏_k=1^K(a_k,b_k]) with F=∑_k=1^Kw_kδ_(_k,_k), there exists some ^⋆=(w_1^⋆,⋯,w_K^⋆)∈ℐ_0, _k^⋆∈ℛ_k, λ_jk^⋆∈𝒮_jk for j=1,⋯,p with _k^⋆=diag(λ_1k^⋆,⋯,λ_pk^⋆) for k=1,⋯,K, such that ∑_k=1^K|w_k-w_k^⋆|<δ, _k-_k^⋆<√(p)_k-_k^⋆_∞<√(p)δ, and |√(λ_j(_k))-√(λ_jk^⋆)|<δ for j=1,⋯,p. Denote H(f,g) to be the Hellinger distance between densities f and g, defined by H(f,g)=(1/2∫(√(f)-√(g))^2)^1/2. Observe that H(ϕ__k(-_k),ϕ__k^⋆(-_k^⋆))^2 ≤1-∏_j=1^p(1-(√(λ_j(_k))-√(λ_jk^⋆))^2/λ_j(_k)+λ_jk^⋆)^1/2exp(-_k-_k^⋆^2/8^2)≤1-(1-δ^2/2^2)^p/2exp(-pδ^2/8^2) ≤ 1-(1-pδ^2/2^2)^p/2+1 where we use the fact exp(-x)≥ 1-x in the last inequality. Denote F^⋆=∑_k=1^Kw_k^⋆δ_(_k^⋆,_k^⋆). It follows by the triangle inequality that f_F-f_F^⋆_1 ≤∑_k=1^Kw_kϕ__k(-_k)-ϕ__k^⋆(-_k^⋆) _1+∑_k=1^K|w_k-w_k^⋆|≤∑_k=1^K2√(2)w_kH(ϕ__k(-_k),ϕ__k^⋆(-_k^⋆))+δ≤δ+2√(2)[1-(1-pδ^2/2^2)^p/2+1]^1/2. Observing that lim_t↓ 01-(1-t)^a/at=1 holds for a>1, we see that for sufficiently small δ, f_F-f_F^⋆_1≤ C_3 δ for some constant C_3>0, and therefore (C_3δ,ℱ_K(∏_k=1^K(a_k,b_k]),·_1) ≤(5/δ)^K(2(-)/δ)^Kp∏_k=1^K(b_k/δ+1)^p.≤c̃_3^K/δ^Kp+K∏_k=1^K(b_k+δ/δ)^p. for some constant c̃_3>0. This yields that (δ,ℱ_K(∏_k=1^K(a_k,b_k]),·_1) ≤(c_3/δ^2p+1)^K(∏_k=1^Kb_k)^p for some constant c_3>0. §.§ Proof of Lemma <ref> First we need to bound √(Π(𝒢_K(_K))). Recall that e^-c_1K≤ Z_K≤ 1 for some constant c_1>0 by Theorem <ref> and condition A2. We estimate Π(𝒢_K(_K)) ≤ Π(_1,⋯,_K:_k≥√(p)a_k, k=1,⋯,K| K)p(K)≤ p(K)/Z_K∫⋯∫∏_k=1^K𝕀(_k^2≥ p a_k^2)p(_1)d_1⋯ p(_K)d_K≤ e^c_1K∏_k=1^K∫_{_k^2≥ pa_k^2}p(_k)d_k(by Theorem <ref>) ≤ e^c_1KB_2^K∏_k=1^Kexp(-pb_2a_k^2).(by condition B2) Now by Lemma <ref> for some constant c_3>0, we have (δ, 𝒢_K(_K),·_1)≤(c_3/δ^2p+1)^K∏_k=1^K(a_k+1)^p. Hence, by defining S=∑_a_k=0^∞(a_k+1)^p/2exp(-pb_2a_k^2/2)<∞, we estimate ∑_K=1^K_n∑_a_1=0^∞⋯∑_a_K=0^∞√((δ,𝒢_K(_K),·_1))√(Π(𝒢_K(_K)))≤∑_K=1^K_n[√(B_2c_3e^c_1)/δ^p+1/2]^K[∏_k=1^K∑_a_k=0^∞(a_k+1)^p/2exp(-b_2pa_k^2/2)]=∑_K=1^K_n[S√(B_2c_3e^c_1)/δ^p+1/2]^K≤ K_n(M/δ^p+1/2)^K_n, for some constant M>0 for sufficiently small δ. §.§ Proof of Theorem <ref> It is sufficient to verify (<ref>) and that Π(ℱ_K_n^c)≲exp(-bn) for some b>0, since the KL-property is satisfied. Now take K_n=⌊ n/log n⌋. 
Then K_nlog K_n≥ n-loglog n/log n≥ n/2 for large n, which yields Π(ℱ_K_n^c)≲exp(-B_4n/2) condition B5. Furthermore by Lemma <ref> we have∑_K=1^K_n∑_a_1=0^∞⋯∑_a_K=0^∞√((ϵ,𝒢_K(_K),·_1))√(Π(𝒢_K(_K)))≤exp[log K_n+K_nlog M+(2p+1/2)K_n(log1/ϵ)] ≤exp[(p+1)K_n(log1/ϵ)]for sufficiently small ϵ and sufficiently large n. The proof is completed by observing that (p+1)K_nlog(1/ϵ)-(4-b̃)nϵ^2→ -∞ as n→∞ for any fixed ϵ>0 and fixed b̃∈(0,4).§ PROOFS FOR POSTERIOR CONTRACTION RATE§.§ Proof of Proposition <ref> Denote C=1/(p+1). Then by condition B5 we have Π(ℱ_K_n^c)=Π(K>K_n)≤exp(-B_4K_nlog K_n)≤exp[-B_4Clog C(log n)^2t-1]≤exp(-4n_n^2) with t>t_0+1/2 for sufficiently large n. Next, by Lemma <ref> exp(-n_n^2)∑_K=1^K_n∑_a_1=0^∞⋯∑_a_K=0^∞√((_n,𝒢_K(a_1,⋯,a_K),·_1))√(Π(𝒢_K(a_1,⋯,a_K)))≤exp[-(log n)^2t+(p+1)C(log n)^2t-1(1/2log n-tloglog n)]≤exp[-1/2(log n)^2t]. The RHS of the last display converges to 0 as n→∞. §.§ Proof of Lemma <ref> The proof of Lemma <ref> requires the following auxiliary Lemmas <ref>-<ref> thatgeneralize Lemma 3.4, Lemma 4.1, and Lemma 5.1 in <cit.>. Since the proofs are quite similar to those there, we defer them in Section <ref>. Let F be a probability distribution compactly supported on a subset of {(,)∈ℝ^p×:_∞≤ a} with a≲(log1/ϵ)^1/2. Then for sufficiently small ϵ>0, there exists a discrete probability distribution F^⋆ on a subset of {(,)∈ℝ^p×:_∞≤ a} with at most N≲(log1/ϵ)^2p support points, such that f_F-f_F^⋆_∞≲ϵ, and f_F-f_F^⋆_1≲ϵ(log1/ϵ)^p/2. Let F be a probability distribution compactly supported on a subset of {(,)∈ℝ^p×:_∞≤ a} with a≲(log1/ϵ)^1/2. Then for sufficiently small ϵ>0, there exists a discrete probability distribution F^⋆ on {(,)∈ℝ^p×:_∞≤ 2a} with at most N≲(log1/ϵ)^2p support points that are taken from {(,)∈ℝ^p×:/2ϵ∈ℤ^p,λ_j()/2ϵ∈ℕ_+,j=1,⋯,p}, such that f_F-f_F^⋆_1≲ϵ(log1/ϵ)^p/2. If F(≤ B)>1/2 for some constant B and F_0 is such that for all t≥0, F_0(>t)≤exp(-b't^2) for some b'>0, then for ϵ=H(f_F_0,f_F) sufficiently small, ∫ f_0(logf_0/f_F)^2≲ϵ^2(log1/ϵ)^2,∫ f_0logf_0/f_F≲ϵ^2(log1/ϵ). Let ϵ>0 be sufficiently small, F^⋆=∑_k=1^Nw_k^⋆δ_(_k^⋆,_k^⋆) be such that _k^⋆-_k'^⋆_∞≥ 2ϵ, and |λ_j(_k^⋆)-λ_j(_k'^⋆)|≥ 2ϵ whenever k≠ k', j=1,⋯,p. Define E_k={(,)∈ℝ^p×:-_k^⋆_∞<ϵ/2,|λ_j()-λ_j(_k^⋆)|<ϵ/2,j=1,⋯,p}. Then for any probability distribution F on ℝ^p×, f_F-f_F^⋆≲ϵ+∑_k=1^N|P_F(E_k)-w_k^⋆|. The proof is similar to those in Theorem 5.1 and Theorem 5.2 in <cit.>. First let F_0' be the re-normalized restriction of F_0 on {(,)∈ℝ^p×:≤ a}. By Lemma A.3 in <cit.> we obtain f_0-f_F_0'_1≲ϵ. Next find F^⋆=∑_k=1^Nw_k^⋆δ_(_k^⋆,_k^⋆) by Lemma <ref> such that N≲(log1/ϵ)^2p, f_F_0'-f_F^⋆_1≲ϵ(log1/ϵ)^p/2, (_k^⋆,_k^⋆)∈{(,)∈ℝ^p×:/2ϵ∈ℤ^p,λ_j()/2ϵ∈ℕ_+,j=1,⋯,p}, k =1,⋯,N, and F^⋆ is supported on a subset of {(,)∈ℝ^p×:_∞≤ 2a}. In addition, we can require that ∫^2dF_0'=∫^2dF^⋆ and still N≲(log1/ϵ)^2p. Now we claim that there exists some constant γ>0 such that {F=∑_k=1^Nw_kδ_(_k,_k):(_k,_k)∈ E_k,∑_k=1^K|w_k-w_k^⋆|<ϵ}⊂{F:f_0-f_F_1≤γϵ(log1/ϵ)^p/2}. Suppose F is in the LHS of the last display. Observing that F(E_k)=w_k, by Lemma <ref>, F must satisfy f_F-f_F^⋆_1≲ϵ. By the construction of F^⋆ and F_0', f_F_0'-f_F^⋆_1≲ϵ(log1/ϵ)^p/2, and f_F_0'-f_0_1≲ϵ. The result follows from the triangle inequality. Now still let F be on the LHS of the last display. Observe that H(f_0,f_F)≲f_F-f_0_1^1/2≲ϵ^1/2(log1/ϵ)^p/4. Let B=2(∫^2dF_0)^1/2. 
It follows that F^⋆(> B)≤1/B^2∫^2dF^⋆=1/B^2∫^2dF_0'≤1/B^2∫^2dF_0=1/4, where the second equality is due to the requirement ∫^2dF_0'=∫^2dF^⋆, and the last inequality is because the second moment of F_0' is no greater than that of F_0. Therefore for ϵ<min(B/√(p),1/4), we have _k-_k^⋆≤√(p)_k-_k^⋆_∞<B, and hence _k>2B⟹_k^⋆≥_k-_k-_k^⋆>2B-B=B. Hence F(>2B) =∑_k=1^Nw_k𝕀(_k>2B) ≤∑_k=1^N|w_k-w_k^⋆|𝕀(_k>2B)+∑_k=1^Nw_k^⋆𝕀(_k>2B)<ϵ+∑_k=1^Nw_k^⋆𝕀(_k>2B) ≤ϵ+∑_k=1^Nw_k^⋆𝕀(_k^⋆>B)=ϵ+F^⋆(_k^⋆>B)≤1/2. Hence by Lemma <ref>, we have ∫ f_0(logf_0/f_F)^2≲ϵ(log1/ϵ)^p+4/2,∫ f_0logf_0/f_F≲ϵ(log1/ϵ)^p+2/2≤ϵ(log1/ϵ)^p+4/2, and, as a consequence, {f_F:F=∑_k=1^Nw_kδ_(_k,_k):(_k,_k)∈ E_k,∑_k=1^N|w_k-w_k^⋆|<ϵ}⊂ B(f_0,ηϵ^1/2(log1/ϵ)^p+4/4). §.§ Proof of Theorem <ref>By Proposition <ref> it suffices to find the prior concentration rate. Motivated by Lemma <ref>, we are interested in finding the prior probability of the following event:B̃(F^⋆,ϵ):={f_F:F=∑_k=1^Nw_kδ_(_k,_k):(_k,_k)∈ E_k,∑_k=1^N|w_k-w_k^⋆|<ϵ}.where F^⋆=∑_k=1^Nw_k^⋆δ_(_k^⋆,_k^⋆), _k^⋆≤κ(log1/ϵ)^1/2 for k=1,⋯,K for some κ>0, _k^⋆-_k'^⋆_∞≥2ϵ, |λ_j(_k^⋆)-λ_j(_k'^⋆)|≥2ϵ whenever k≠ k', j=1,⋯,p, N≲(log1/ϵ)^2p, andE_k={(,)∈ℝ^p×:-_k^⋆_∞<ϵ/2,|λ_j()-λ_j(_k^⋆)|<ϵ/2,j=1,⋯,p}.It follows thatΠ(B̃(F^⋆,ϵ))=Π(K=N)Π(.⋂_k=1^N{(_k,_k)∈ E_k}|K=N) Π(-^⋆_1<ϵ| K=N),where =(w_1,⋯,w_N),^⋆=(w_1^⋆,⋯,w_N^⋆)∈Δ^N. Since (_k,_k)∈ E_k implies _k-_k'>ϵ, for sufficiently small ϵ we see that⋂_k=1^N{(_k,_k)∈ E_k}⊂{(_k,_k)_k=1^N:h_N(_1,⋯,_N)≥(c_2ϵ)^N}by condition A1 for both r=1 and r=2. Notice that _k-_k^⋆_∞<ϵ/2 for sufficiently small ϵ implies that_k_∞≤_k^⋆_∞+ϵ/2≤ 2κ(log1/ϵ)^1/2⟹_k≤ 2κ√(p)(log1/ϵ)^1/2,in which case we have∫__k-_k^⋆_∞<ϵ/2p(_k)d_k ≥ B_3ϵ^pexp[-b_3(2κ√(p))^α(log1/ϵ)^α/2].Hence we may proceed to computeΠ(⋂_k=1^N{(_k,_k)∈ E_k})≥1/Z_K∏_k=1^N[∫__k-_k^⋆_∞<ϵ/2 c_2ϵ p(_k)d_k]∏_k=1^N∏_j=1^p[∫_λ_j(_k^⋆)-ϵ/2^λ_j(_k^⋆)+ϵ/2p_λ(λ_jk)dλ_jk]≥∏_k=1^N{c_2B_3ϵ^p+1exp[-b_3(2κ√(p))^α(log1/ϵ)^α/2]}(ϵmin_^2≤λ≤^2 p_λ(λ))^Np≥ϵ^2Np+N[c_2B_3min_^2≤λ≤^2p_λ(λ)^p]^Nexp[-b_3(2κ√(p))^α N(log1/ϵ)^α/2],For sufficiently small ϵ>0, taking logarithm yields-N(log1/ϵ)^α/2≲logΠ((_k,_k)∈ E_k,k=1,⋯,N).Using condition B5 and the fact N≲(log1/ϵ)^2p, we may further obtain-(log1/ϵ)^2p+α/2≲logΠ(K=N)+logΠ((_k,_k)∈ E_k,k=1,⋯,N).By Lemma A.2 in <cit.>, we have-(log1/ϵ)^2p+1≲ -N(log1/ϵ)≲logΠ(w_1,⋯,w_N:∑_k=1^N|w_k-w_k^⋆|<ϵ).Observing that α≥2, we obtainexp[-c_5(log1/ϵ)^2p+α/2]≲Π(B̃(F^⋆,ϵ))≲Π(B(f_0,ηϵ^1/2(log1/ϵ)^p+4/4))for some constant c_5>0. Since log[ηϵ^1/2(log1/ϵ)^p+4/4] and logϵ are of the same order in the sense that their ratio converges to a positive constant as ϵ→0, we conclude thatexp[-c_5(log1/ϵ)^2p+α/2]≲Π(B(f_0,ϵ)).Setting _n=(log n)^t_0/√(n), _n=(log n)^t/√(n) with t_0>p+α/4, t>t_0+1/2>p+α+2/4, we see that-n_n^2=-(log n)^2t_0<-(log1/_n)^2p+α/2≲logΠ(B(f_0,_n)).Hence (<ref>) is satisfied with _n=(log n)^t_0/√(n), t_0>p+α/4. The proof is thus completed by applying Proposition <ref> and Theorem 3 in <cit.>.§ PROOFS FOR THE MODEL COMPLEXITY§.§ Preliminary Lemmas for Theorem <ref>The proof of Theorem <ref> is seemingly daunting but quite straightforward: By repeatedly using Jensen's inequality, we directly bound the marginal density p(_1,⋯,_n) of the data and the joint density p(_1,⋯,_n,K) between the data and K under the RGM prior. 
To keep track of the road map of the proof, we begin with several preliminary lemmas, the proofs of which are deferred to the end of this section.To avoid the confusion of using the parameter _1,⋯,_K in the RGM prior and the dummy variablein the underlying true density f_0()=∫_ℝ^pϕ__0(-)F_0(d), we shall write f_0()=∫_ℝ^pϕ__0(-)F_0(d).For convenience we use the following notation (only for the proof of Theorem <ref> in this section): _i=(y_i1,⋯,y_ip), _1:n=(_1,⋯,_n), _i=(m_i1,⋯,m_ip), _1:n=(_1,⋯,_n), _k=(μ_k1,⋯,μ_kp), _1:K=(_1,⋯,_K), z_1:n=(z_1,⋯,z_n), n_k=∑_i=1^n𝕀(z_i=k), _(k)j=(y_ij:z_i=k)∈ℝ^n_k, _(k)j=(m_ij:z_i=k), 1_n_k=(1,⋯,1)∈ℝ^n_k, _i=_i=(y_i1,⋯,y_ip), _1:n=(_1,⋯,_n), _k = _k=(μ_k1,⋯,μ_kp), _1:K = (_1,⋯,_K), _i=_i=(m_i1,⋯,m_ip), _1:n=(_1,⋯,_n), _0=_0 = diag(σ_1^2,⋯,σ_p^2), _(k)j=(y_ij:z_i=k), _(k)j=(m_ij:z_i=k), κ_j^2=σ_j^2/τ^2, and F_0^(n) be the n-fold product measure of F_0over ℝ^p× n. Assume the conditions in Theorme <ref> hold. Then for K≥ 3p(_1:n|z_1:n,K)≤1/Z_K∏_j=1^p∏_k=1^Kϕ(_(k)j|_n_k,σ_j^2_n_k+τ^2_n_k_n_k)×K 2^-1∑_k<k'g([∑_j=1^p(_n_k_(k)j/n_k+κ_j^2-_n_k'_(k')j/n_k'+κ_j^2)^2+∑_j=1^pσ_j^2(1/n_k+κ_j^2+1/n_k'+κ_j^2)]^1/2). Assume the conditions in Theorem <ref> hold. Then the marginal density of the data _1,⋯,_n under the RGM prior satisfies: (i) If f∼RGM_1(1,g,ϕ(|,τ^2),δ__0,p(K)), i.e. h is of the form of (<ref>), then p(_1:n)≥ C(λ)exp[-nτ^2/2tr(_0^-1)]∏_i=1^nϕ__0(_i) Ω(e^λ-1)(1+g_0^2/3δ(τ))^-3/2; (ii) If f∼RGM_2(1,g,ϕ(|,τ^2),δ__0,p(K)), i.e. h is of the form of (<ref>), then p(_1:n)≥ C(λ)exp[-nτ^2/2tr(_0^-1)]∏_i=1^nϕ__0(_i)Ω(e^λ -1)(1+δ(τ)√(g_0))^-1. Here C(λ) is a constant only depending on λ, and δ(τ)<1 when τ is sufficiently large. Assume the conditions in Theorem <ref> hold. Then ∫_ℝ^p⋯∫_ℝ^pp(_1:n|z_1:n,K)/p(_1:n)∏_i=1^nϕ(_i|_i,_0)d_1⋯d_n≤ C(λ) exp[nτ^2/2tr(_0^-1)]ω(g_0) /Z_KΩ(e^λ - 1)×K 2^-1∑_1≤ k<k'≤ K g([∑_j=1^p1/κ_j^4(_n_k_(k)j-_n_k'_(k')j)^2+2pτ^2]^1/2) , where ω(g_0)=(1+δ(τ)g_0^2/3)^3/2 if r=1, and ω(g_0) = 1+δ(τ)√(g_0) if r=2. Assume the conditions in Theorem <ref> hold. Then∑_j=1^p1/κ_j^4𝔼_[∫_ℝ^p× n∫_ℝ^p× n(_n_k_(k)j-_n_k'_(k')j)^2F_0^(n_k+n_k')(d_(k)jd_(k')j)] = τ^4𝔼_0(_0^-2)2n/K,where 𝔼_ is the expected value with respect to p(z_1:n|K). §.§ Proof of Theorem <ref>By Fubini's theorem we directly write 𝔼_0[Π(K>N|_1,⋯,_n)]=∫_ℝ^p⋯∫_ℝ^p∑_K=N+1^∞𝔼_[p(_1:n|z_1:n,K)]π(K)/p(_1:n)∏_i=1^n∫_ℝ^pϕ__0(_i-_i)F_0(d_i)d_1⋯d_n=∑_K=N+1^∞π(K) 𝔼_{∫_ℝ^p× n[∫_ℝ^p× np(_1:n|z_1:n,K)/p(_1:n)∏_i=1^nϕ(_i|_i,_0)d_1:n] F_0^n(d_1:n)}. We may without loss of generality assume that h is of the form of (<ref>), since the following proof directly applies to the case where h is of the form (<ref>). By Lemma <ref> the quantity in the square bracket is upper bounded by C(λ)exp[nτ^2/2tr(_0^-1)](1+δ(τ)g_0^2/3)^3/2/Z_KΩ(e^λ - 1)×K 2^-1∑_1≤ k<k'≤ K g([∑_j=1^p1/κ_j^4(_n_k_(k)j-_n_k'_(k')j)^2+2pτ^2]^1/2) , where C(λ) is a constant only depending on λ. Observing that x↦ g(√(x)) is concave, we directly obtain by Jensen's inequality and Lemma <ref> that 𝔼_0[Π(K>N|_1,⋯,_n)]≤ C_1(λ)exp[nτ^2/2tr(_0^-1)]∑_K=N+1^∞λ^K/(e^λ - 1)K!(1+δ(τ)g_0^2/3)^3/2K 2^-1×∑_k<k'g( {2pτ^2+∑_j=1^p1/κ_j^4𝔼_[∬(_n_k_(k)j-_n_k'_(k')j)^2F_0^(n_k+n_k')(d_(k)jd_(k')j)]}^1/2)≤ C_1(λ)exp[nτ^2/2tr(_0^-1)]∑_K=N+1^∞λ^K/(e^λ - 1)K!(1+δ(τ)g_0^2/3)^3/2×K 2^-1∑_1≤ k<k'≤ Kg( [2pτ^2+2n/Nτ^4𝔼_0(_0^-2)]^1/2). The proof is completed by observing the fact that g( [2pτ^2+2n/Nτ^4𝔼_0(_0^-2)]^1/2) = [2pτ^2+2n/Nτ^4𝔼_0(_0^-2)]^1/2/g_0+[2pτ^2+2n/Nτ^4𝔼_0(_0^-2)]^1/2. 
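Every factor in the final bound except the prior tail is fixed once N is fixed, so the rate at which the posterior mass on {K>N} vanishes is governed by the truncated-Poisson tail ∑_{K=N+1}^∞ λ^K/((e^λ-1)K!). A small numerical sketch (ours, with an illustrative λ) makes its super-exponential decay concrete:

```python
from math import exp, log, lgamma

def prior_tail(N, lam, terms=200):
    # sum_{K=N+1}^{N+terms} lam^K / ((e^lam - 1) K!), the truncated-Poisson
    # tail in the bound on E_0[Pi(K > N | data)]; log space avoids overflow.
    log_norm = log(exp(lam) - 1.0)
    return sum(exp(K * log(lam) - lgamma(K + 1) - log_norm)
               for K in range(N + 1, N + 1 + terms))

for N in (5, 10, 20, 40):
    print(N, prior_tail(N, lam=1.0))
# The tail decays roughly like lam^N / N!, which is what allows the
# Corollary to trade it off against the exp[n tau^2 tr(Sigma_0^{-1}) / 2] factor.
```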
§.§ Proof of Corollary <ref> By Theorem 5, we see that for any sufficiently large N, 𝔼_0[Π(K>N|_1,⋯,_n)] ≲exp[nτ^2/2tr(_0^-1)]∑_K=N+1^∞λ^K/(e^λ - 1)K!≤exp[nτ^2/2tr(_0^-1)+1/2Nlog N]. Since lim inf_n→∞K_n/n>0, then there exists some δ_0>0, such that K_n≥δ_0n for sufficiently large n. Hence for sufficiently large n, nτ^2/2tr(_0^-1)-1/2K_nlog K_n≤nτ^2/2tr(_0^-1)-δ_0/2 n(logδ_0 n)→-∞as n→∞. Sincelim sup_n→∞𝔼[Π(K≥ K_n|_1,⋯,_n)]≤lim_n→∞exp[nτ^2/2tr(_0^-1)-1/2K_nlog K_n]=0.By Markov's inequality, for any ϵ>0, ℙ_0[Π(K≥ K_n|_1,⋯,_n)>ϵ]≤1/ϵ𝔼[Π(K≥ K_n|_1,⋯,_n)]→ 0as n→∞, and the proof is thus completed. §.§ Proofs of Preliminary Lemmas Directly compute p(_1:n| z_1:n,K)=1/Z_K∫_ℝ^p⋯∫_ℝ^ph(_1,⋯,_K)∏_k=1^K[∏_i:z_i=kϕ__0(_i-_k)]p(_k)d_1⋯d_K =1/Z_K∫_ℝ^p⋯∫_ℝ^ph(_1,⋯,_K)∏_j=1^p{∏_k=1^K[∏_i:z_i=kϕ_σ_j(y_ij-μ_kj)]p(μ_kj)}d_1⋯d_K=1/Z_K∏_j=1^p∏_k=1^Kϕ(_(k)j|_n_k,σ_j^2_n_k+τ^2_n_k_n_k)×∫_ℝ^p⋯∫_ℝ^ph(_1,⋯,_K)∏_j=1^p∏_k=1^Kϕ(μ_kj|_n_k_(k)j/n_k+κ_j^2,σ_j^2/n_k+κ_j^2)d_1⋯d_K = 1/Z_K∏_j=1^p∏_k=1^Kϕ(_(k)j|_n_k,σ_j^2_n_k+τ^2_n_k_n_k)×∫_ℝ^p⋯∫_ℝ^ph(_1,⋯,_K)∏_j=1^p∏_k=1^Kϕ(μ_kj|_n_k_(k)j/n_k+κ_j^2,σ_j^2/n_k+κ_j^2)d_1⋯d_K. where we have used the fact that h is unitary invariant. For any k≠ k', denote Δ_kk'j=μ_kj-μ_k'j, x_kk'j=_n_k_(k)j/n_k+κ_j^2-_n_k'_(k')j/n_k'+κ_j^2,σ_kk'j^2=σ_j^2(1/n_k+κ_j^2+1/n_k'+κ_j^2). * Suppose h is of the form (<ref>). Since K≥ 3, then K(K-1)/2≥ K and hence by the geometric-algorithmic mean inequality h(_1,⋯,_K) =[∏_1≤ k<k'≤ K(_k-_k'/g_0+_k-_k')]^1/K≤[∏_1≤ k<k'≤ K(_k-_k'/g_0+_k-_k')]^2/K(K-1) ≤K 2^-1∑_k<k'g(_k-_k'). Notice that g is concave, it follows by Jensen's inequality that ∫_ℝ^p⋯∫_ℝ^ph(_1,⋯,_K)∏_j=1^p∏_k=1^Kϕ(μ_kj|_n_k_(k)j/n_k+κ_j^2,σ_j^2/n_k+κ_j^2)d_1⋯d_K≤K 2^-1∑_k<k'∫_ℝ^p⋯∫_ℝ^pg(_k-_k')∏_j=1^p∏_k=1^Kϕ(μ_kj|_n_k_(k)j/n_k+κ^2_j,σ_j^2/n_k+κ_j^2)d_1⋯d_K=K 2^-1∑_k<k'∫_ℝ^p⋯∫_ℝ^pg((∑_j=1^pΔ_kk'j^2)^1/2)∏_j=1^p∏_k=1^Kϕ(μ_kj|_n_k_(k)j/n_k+κ^2_j,σ_j^2/n_k+κ_j^2)d_1⋯d_K ≤K 2^-1∑_1≤ k<k'≤ Kg([∑_j=1^p∫_ℝΔ_kk'j^2ϕ(Δ_kk'j| x_kk'j,σ_kk'j^2 )dΔ_kk'j]^1/2)= K 2^-1∑_1≤ k<k'≤ Kg([∑_j=1^p(x_kk'j^2+σ_kk'j^2)]^1/2). * Suppose h is of the form (<ref>). Since for K≥ 3, the following holds: h(_1,⋯,_K) =min_1≤ k<k'≤ Kg(_k-_k')≤K 2^-1∑_1≤ k<k'≤ Kg(_k-_k'), then the above derivation directly applies. The proof is thus completed. First we obtain directly by Jensen's inequality thatlog p(_1:n|z_1:n,K) ≥ -log Z_K+∫_ℝ^p⋯∫_ℝ^plog h(_1,⋯,_K)∏_k=1^K p(_k)d_1⋯d_K +∑_k=1^K∑_i:z_i=k∫_ℝ^plogϕ__0(_i-_k)p(_k)d_k. Now compute ∑_k=1^K∑_i:z_i=k∫_ℝ^plogϕ__0(_i-_k)p(_k)d_k =∑_k=1^K∑_i:z_i=k[-1/2log((2π_0))-1/2_i_0^-1_i-1/2tr(_0^-1τ^2_p)]=∑_i=1^nlogϕ__0(_i)-nτ^2/2tr(_0^-1). * Suppose h is of the form (<ref>). Then by Jensen's inequality we obtain ∫_ℝ^p⋯∫_ℝ^plog h(_1,⋯,_K)∏_k=1^K p(_k)d_1⋯d_K=-∫_ℝ^p⋯∫_ℝ^plog(1+max_k≠ k'g_0/_k-_k') ∏_k=1^Kp(_k)d_1⋯d_K≥ -3/2∫_ℝ^p⋯∫_ℝ^plog[(1+∑_1≤ k<k'≤ Kg_0/_k-_k')^2/3]∏_k=1^K p(_k)d_1⋯d_K≥ -3/2∫_ℝ^p⋯∫_ℝ^plog[1+∑_1≤ k<k'≤ K(g_0/_k-_k')^2/3]∏_k=1^K p(_k)d_1⋯d_K≥ -3/2log[1+∫_ℝ^p⋯∫_ℝ^p∑_1≤ k<k'≤ K(g_0/_k-_k')^2/3∏_k=1^K p(_k)d_1⋯d_K] ≥ -3/2log[1+g_0^2/3K^21/2(1/2τ^2)^1/3∫_0^∞Δ^-1/31/2^p/2Γ(p/2)Δ^p/2-1exp(-Δ/2) dΔ] =-3/2log(1+K^2δ(τ)g_0^2/3)≥ -log[(1+δ(τ)g_0^2/3)^3/2K^3] where Δℒ=1/2τ^2_k-_k'^2∼χ^2(p), and δ(τ) is a constant with δ(τ)<1 for sufficiently large τ. 
Hence we can integrate p(_1:n|z_1:n,K) against π(z_1:n|w_1:K,K), π(w_1:K|K) and obtain p(_1:n|K)≥1/Z_Kexp[-nτ^2/2tr(_0^-1)]∏_i=1^nϕ__0(_i)1/(1+g_0^2/3δ(τ))^3/2K^3, and hence by the fact that 𝔼(K^-3)≥[𝔼(K^3)]^-1, p(_1:n) ≥exp[-nτ^2/2tr(_0^-1)]∏_i=1^nϕ__0(_i) [Ω(e^λ-1)/(1+δ(τ)g_0^2/3)^3/2][∑_K=1^∞K^3λ^K/(e^λ-1)K!]^-1=exp[-nτ^2/2tr(_0^-1)]∏_i=1^nϕ__0(_i) [Ω(e^λ-1)^2/e^λ(1+δ(τ)g_0^2/3)^3/2][∑_K=0^∞K^3e^-λλ^K/K!]^-1=C(λ)exp[-nτ^2/2tr(_0^-1)]∏_i=1^nϕ__0(_i) Ω(e^λ-1)^2(1+δ(τ)g_0^2/3)^-3/2, where C(λ) only depends on λ, and δ(τ)<1 for sufficiently large τ. * Suppose h is of the form (<ref>). Then by Jensen's inequality we obtain for K≥ 2 ∫_ℝ^p⋯∫_ℝ^plog h(_1,⋯,_K)∏_k=1^K p(_k)d_1⋯d_K=-1/K∑_1≤ k<k'≤ K∫_ℝ^p⋯∫_ℝ^plog(1+g_0/_k-_k') ∏_k=1^Kp(_k)d_1⋯d_K= -1/K∑_k<k'2∫_ℝ^p⋯∫_ℝ^plog(1+g_0/_k-_k')^1/2∏_k=1^K p(_k)d_1⋯d_K≥ -1/K2∑_k<k'∫_ℝ^p⋯∫_ℝ^plog[1+(g_0/_k-_k')^1/2]∏_k=1^Kp(_k)d_1⋯d_K≥ -1/K2∑_k<k'log[1+(g_0^2/2τ^2)^1/4∫_0^∞Δ^-1/41/2^p/2Γ(p/2)Δ^p/2-1exp(-Δ/2) dΔ] = -(K-1)log(1+√(g_0)δ(τ))≥ -Klog(1+√(g_0)δ(τ)), where δ(τ)<1 when τ is sufficiently large. When K=1 the above inequality still holds. Hence we can integrate p(_1:n|z_1:n,K) against π(z_1:n|w_1:K,K), π(w_1:K|K) and obtain p(_1:n|K)≥1/Z_Kexp[-nτ^2/2tr(_0^-1)]∏_i=1^nϕ__0(_i)exp[-log(1+g_0^1/2δ(τ))K], and hence, p(_1:n) ≥exp[-nτ^2/2tr(_0^-1)]∏_i=1^nϕ__0(_i)∑_K=1^∞exp[-log(1+g_0^1/2δ(τ))K]π(K)=exp[-nτ^2/2tr(_0^-1)]∏_i=1^nϕ__0(_i)Ωe^λ∑_K=1^∞exp[-log(1+g_0^1/2δ(τ))K]e^-λλ^K/K! =exp[-nτ^2/2tr(_0^-1)])∏_i=1^nϕ__0(_i)Ωe^λ[exp(-λ√(g_0)δ(τ)/1+√(g_0)δ(τ))-e^-λ] ≥exp[-nτ^2/2tr(_0^-1)]∏_i=1^nϕ__0(_i)(Ω(e^λ - 1)/C(λ)(1+δ(τ)√(g_0))) for some constant C(λ) that depends on λ only, where the last inequality is due to the mean-value theorem. The proof is thus completed. Suppose h is of the form (<ref>). Then by Lemma <ref> and Lemma <ref> we can write p(_1:n|z_1:n,K)/p(_1:n)∏_i=1^nϕ(_i|_i,_0) ≤ C(λ) exp[nτ^2/2tr(_0^-1)](1+δ(τ)g_0^2/3)^3/2/Ω Z_K(e^λ - 1)×K 2^-1∑_k<k' g([∑_j=1^p ( _n_k_(k)j/n_k+κ_j^2-_n_k'_(k')j/n_k'+κ_j^2)^2 +∑_j=1^pσ_j^2(1/n_k+κ_j^2+1/n_k'+κ_j^2)]^1/2)×∏_j=1^p∏_k=1^Kϕ(_(k)j|_n_k,σ^2_j_n_k+τ^2_n_k_n_k)ϕ(_(k)j|_(k)j,σ^2_j_n_k)/ϕ(_(k)j|_n_k,σ^2_j_n_k). Simple algebra shows that ∏_j=1^p∏_k=1^Kϕ(_(k)j|_n_k,σ^2_j_n_k+τ^2_n_k_n_k)ϕ(_(k)j|_(k)j,σ^2_j_n_k)/ϕ(_(k)j|_n_k,σ^2_j_n_k) =∏_j=1^p∏_k=1^Kϕ(_(k)j|(_n_k+τ^2/σ_j^2_n_k_n_k) _(k)j,σ_j^2_n_k+τ^2_n_k_n_k)exp[-(_n_k_(k)j)^2/2σ^2_j(n_k+κ_j^2)]≤∏_j=1^p∏_k=1^Kϕ(_(k)j|(_n_k+τ^2/σ_j^2_n_k_n_k) _(k)j,σ_j^2_n_k+τ^2_n_k_n_k). It follows by Jensen's inequality that ∫_ℝ^n⋯∫_ℝ^np(_1:n|z_1:n,K)/p(_1:n)∏_i=1^nϕ(_i|_i,_0)d_1⋯d_n ≤ C(λ) exp[nτ^2/2tr(_0^-1)](1+δ(τ)g_0^2/3)^3/2/Ω Z_K(e^λ - 1) ×K 2^-1∑_k<k'∫_ℝ g([∑_j=1^p ( _n_k_(k)j/n_k+κ_j^2-_n_k'_(k')j/n_k'+κ_j^2)^2 +∑_j=1^pσ_j^2(1/n_k+κ_j^2+1/n_k'+κ_j^2)]^1/2)× ∏_j=1^pϕ(x_kk'j|1/κ_j^2(_n_k_(k)j-_n_k'_(k')),σ^2_j(n_k/κ_j^2(n_k+κ_j^2)+n_k'/κ_j^2(n_k'+κ_j^2)))dx_kk'j≤ C(λ) exp[nτ^2/2tr(_0^-1)](1+δ(τ)g_0^2/3)^3/2/Ω Z_K(e^λ - 1)×K 2^-1∑_k<k' g([∑_j=1^p1/κ_j^4(_n_k_(k)j-_n_k'_(k')j)^2+2pτ^2]^1/2) where x_kk'j=_n_k_(k)j/n_k+κ_j^2-_n_k'_(k')j/n_k'+κ_j^2,andσ_kk'j^2=σ^2_j(1/n_k+κ_j^2+1/n_k'+κ_j^2). The case where h is of the form (<ref>) can be proved in the exactly same fashion. The proof is thus completed. First for each fixed j we write ∫_ℝ^p× n∫_ℝ^p× n(_n_k_(k)j-_n_k'_(k')j)^2F_0^(n_k+n_k')(d_(k)d_(k')) = ∬[(∑_i:z_i=km_ij)^2+(∑_i:z_i=k'm_ij)^2-2(∑_i:z_i=km_ij)(∑_i:z_i=k'm_ij)]F_0^(n_k+n_k')(d_(k)d_(k')) = (n_k𝔼_0m_j)^2+n_kVar_0(m_j)+(n_k'𝔼_0m_j)^2+n_k'Var_0(m_j)-2n_kn_k'(𝔼_0m_j)^2 = 𝔼_0[(m_j)^2](n_k+n_k'). 
Writting =(_1,⋯,_p) where _j∈ℝ^p, it follows that ∑_j=1^p1/κ_j^4𝔼_[∬(_n_k_(k)j-_n_k'_(k')j)^2F_0^(n_k+n_k')(d_(k)d_(k'))] =𝔼__1:K[𝔼__1:n( n_k+n_k'|_1:K)] ∑_j=1^pτ^4/σ_j^4𝔼_0([_j]^2) =2n/Kτ^4𝔼_0(∑_j=1^p1/σ_j^4_j_j) = τ^42n/K𝔼_0[(∑_j=1^p1/σ_j^4_j_j)] = τ^4𝔼_0(_0^-2)2n/K.§ PROOFS OF AUXILIARY RESULTS FOR LEMMA <REF> §.§ Proof of Lemma <ref> The proofs are similar to those in Lemma 3.1, Lemma 3.2, and Lemma 3.4 in <cit.>. Let M=max{2a,√(8)(log1/ϵ)^1/2}, and let ϵ be sufficiently small such that M>2a. Then sup_≥ M|f_F()-f_F^⋆()|≤ 2ϕ_(M-a)≤ 2ϕ_(M/2)≲exp(-M^2/(8^2))= ϵ, so that it suffices to consider ≤ M. Denote Q_()=^-1. By Taylor's expansion we have |ϕ_(-)-∑_j=1^J-1(-1)^j/2^j(2π)^p/2()^-1/2Q^j_(-)|≲(e/2Q_(-)/J)^J. Hence for any probability distribution F^⋆ on {:≤ a}×, a standard argument of triangle inequality yields sup_≤ M|f_F()-f_F^⋆()| ≤ sup_≤ M|∑_j=1^J-1(-1)^j/2^j(2π)^p/2∫()^-1/2Q^j_(-)(dF-dF^⋆)| +2sup_≤ M,≤ a| ϕ_(-)-∑_j=1^J-1(-1)^j/2^j(2π)^p/2()^-1/2Q^j_(-) |≤ sup_≤ M|∑_j=1^J-1(-1)^j/2^j(2π)^p/2∫()^-1/2Q^j_(-)(dF-dF^⋆)| +2c_1sup_≤ M,≤ a(e/2Q_(-)/J)^J, for some constant c_1>0. Suppose =_p. Expanding Q^j_(-) by multinomial theorem: Q_^j(-) = ∑_r+s+t=jr_1+⋯+r_p=rt_1+⋯+t_p=ts_1+⋯+s_p=s(j r_1⋯ r_p,s_1⋯ s_p, t_1⋯ t_p∏_i=1^py_i^2r_i)(∏_i=1^pμ_i^s_i+2t_i/λ_i^r_i+s_i+t_i()). In order that the first term on the RHS of (<ref>) vanishes, it is sufficient that ∫()^-1/2Q_^j(-)(dF-dF^⋆)=0 for all j=0,1,⋯,J-1. By the multinomial expansion, a sufficient condition for the last display is that ∫()^-1/2∏_i=1^pμ_i^s_i+2t_i/λ_i^r_i+s_i+t_i()(dF'-dF^⋆)=0 for all possible r_i,s_i,t_i,i=1,⋯,p. According to Lemma A.1 in <cit.>, F^⋆ can be select to be a discrete distribution with at most N≲ J^p(2J-1)^p+1≲ J^2p support points. For the caseis not the identity matrix, the above argument can be applied with y_i and μ_i replaced by ( y)_i and ()_i, respectively. Now we focus on the selection of J. Notice that sup_≤ M,≤ aQ_(-)≲sup_≤ M,≤ a-^2≲ M^2≲(log1/ϵ). Hence the second term on the RHS of (<ref>) is upper bounded by a constant multiple of ((c_2log1/ϵ)/J)^J for some constant c_2>0. Set J=⌈(1+c_2)(log1/ϵ)⌉. Then sup_≤ M|f_F()-f_F^⋆()|≲((c_2log1/ϵ)/J)^J≲(c_2/1+c_2)^(1+c_2)log(1/ϵ)=ϵ^(1+c)log(1+1/c)≤ϵ for sufficiently small ϵ>0, where the last inequality is due to the fact (1+c)log(1+1/c) decrease with c and converges to 1 as c→∞.Hence the number N of support points for discrete F^⋆ such that f_F-f_F^⋆_∞≲ϵ is of order J^2p∝(log1/ϵ)^2p. For the inequality regarding L_1 distance, notice that for >T≥ 2a, f_F()≲exp(-^2/8^2), so that f_F-f_F^⋆_1 ≲ ∫_>Texp(-^2/8^2)d+∫_<Tf_F-f_F^⋆_∞d≲ exp(-T^2/8^2)+T^pf_F-f_F^⋆_∞. Now take T=max{2a,√(8log(1/f_F-f_F^⋆_∞))}. It follows that the first term on the RHS of (<ref>) is bounded by f_F-f_F^⋆_∞≲ϵ, while the second term is bounded by a multiple of f_F-f_F^⋆_∞max{a^p,log(1/f_F-f_F^⋆_∞)^p/2}≲ϵ(log1/ϵ)^p/2. Therefore, for sufficiently small ϵ>0, f_F-f_F^⋆_1≲ϵ(log1/ϵ)^p/2. §.§ Proof of Lemma <ref> First for a given ϵ, obtain F' by Lemma <ref> with at most n≲(log1/ϵ)^1/2 support points. Write F'=∑_kw_kδ_(_k,_k). For each k, find _k^⋆∈{:/(2ϵ)∈ℤ^p},^⋆_k∈{:λ_j()/(2ϵ)∈ℕ_+,j=1,⋯,p} such that _k-_k^⋆≲ϵ and _k-_k^⋆≲ϵ. Furthermore the function class {(,)↦ϕ(|,)}_∈ℝ^p indexed by ∈ℝ^p is uniformly Lipschitz continuous, since ∇_ϕ_(-) is uniformly bounded and ∈ is compact. 
Therefore, by taking F^⋆=∑_kw_kδ_(_k^⋆,_k^⋆), we have by the triangle inequality f_F-f_F^⋆_∞≤ f_F-f_F'_∞+∑_k=1^Kw_kϕ__k(-_k)-ϕ_^⋆_k(-_k^⋆)_∞ ≲ ϵ+∑_k=1^Kw_kL(_k-_k^⋆+_k-_k^⋆) ≲ϵ where L is the (uniform) Lipschitz constant for the function class {(,)↦ϕ_(-)}_∈ℝ^p. Now applying the exactly same argument used in deriving (<ref>) yields f_F-f_F^⋆_1≲ϵ(log1/ϵ)^p/2.§.§ Proof of Lemma <ref> Since f_0()≤^pϕ__p(0), and f_F()≥1/^p∫_{≤ B}ϕ__p(-/)dF≥1/2^pϕ__p((+B)/), then we see that f_0/f_F≲exp(b_1^2) for some constant b_1>0. Hence for sufficientl small δ>0, ∫(f_0()/f_F())^δ f_0()d≲∫∫exp(δ b_1^2)exp(-1/2^2-^2)dF_0d<∞. The proof is completed by applying Theorem 5 in <cit.>.§.§ Proof of Lemma <ref> Let E_0=(⋃_k E_k)^c. We estimate |f_F()-f_F^⋆()| ≤ ∫_E_0ϕ_(-)dF+∑_k=1^N∫_E_k|ϕ_(-)-ϕ__k^⋆(-_k^⋆)|dF +∑_k=1^Nϕ_^⋆_k(-_k^⋆)|P_F(E_k)-w_k^⋆|. For (,)∈ E_k, we see that -_k^⋆≲ϵ and |λ_j()-λ_j(_k^⋆)|≲ϵ. Since eigenvalues of covariance matrices are bounded away from 0 and ∞, we see that |√(λ_j())-√(λ_j(_k^⋆))|≲ϵ/|√(λ_j())+√(λ_j(_k^⋆))|≲ϵ. Hence by (<ref>) and the relation between Hellinger distance and ·_1, we have ϕ_(-)-ϕ__k^⋆(-_k^⋆)_1≲ϵ whenever (,)∈ E_k for all k and all sufficiently small ϵ. Thus we obtain from Fubini's theorem that f_F-f_F^⋆_1 ≤ ∫_E_0∫ϕ_(-)ddF +∑_k=1^N∫_E_kϕ_(-)-ϕ__k^⋆(-_k^⋆)_1dF +∑_k=1^N|F(E_k)-w_k^⋆|∫ϕ__k^⋆(-_k^⋆)d≲ [∑_k=1^Nw_k^⋆-∑_k=1^N F(E_k)]+ϵ+∑_k=1^N|F(E_k)-w_k^⋆| ≲ϵ+∑_k=1^N|F(E_k)-w_k^⋆|.§ DERIVATION OF THE GENERALIZED URN MODELAs shown in <cit.>, the marginal distribution of 𝒞_n with K and z=(z_1,⋯,z_n) marginalized out is given byp(𝒞_n)=V_n(|𝒞_n|)∏_c∈𝒞_nΓ(β+|c|)/Γ(β)whereV_n(t):=∑_K=t^∞Γ(K+1)Γ(β K+1)/Γ(K-t+1)Γ(β K+n+1)p(K).The following generalized Bayes rule is useful: If p(|)=ϕ(|) and ∼Π, thenΠ(∈ A|)=.∫_A ϕ(|)Π(d)/∫ϕ(|)Π(d)∝∫_A ϕ(|)Π(d). The restaurant process for the exchangeable partition model proposed by <cit.> is given by Π(𝒞_n=𝒞_n-1∪{{n}}|𝒞_n-1) ∝ V_n(ℓ+1)/V_n(ℓ)β Π(𝒞_n=(𝒞_n-1\{c})∪{c∪{n}}|𝒞_n-1) ∝ |c|+β where |𝒞_n-1|=ℓ. Then for any measurable A, the following derivation using chain rule of conditional distributions is available Π(_n∈ A|_1,⋯,_n-1)=∑_𝒞_nΠ(_n∈ A|_1,⋯,_n-1,𝒞_n)p(𝒞_n|_1,⋯,_n-1)=∑_𝒞_nΠ(_n∈ A|_1,⋯,_n-1,𝒞_n)p(𝒞_n|𝒞_n-1)∝[V_n(ℓ+1)β/V_n(ℓ)]Π(_n∈ A|_1,⋯,_n-1,𝒞_n=𝒞_n-1∪{{n}})+∑_c∈𝒞_n-1(|c|+β)Π(_n∈ A|_1,⋯,_n-1,𝒞_n=(𝒞_n-1\{c})∪{c∪{n}}) Since Π(_n∈ A|_1,⋯,_n-1,𝒞_n=(𝒞_n-1\{c})∪{c∪{n}})=δ_(_c^⋆,_c^⋆)(A), we focus on deriving Π(_n∈ A|_1,⋯,_n-1,𝒞_n=𝒞_n-1∪{{n}}). Since Π(_n∈ A|_1,⋯,_n-1,𝒞_n=𝒞_n-1∪{{n}})=∑_K=ℓ+1^∞Π(_n∈ A|_1,⋯,_n-1,K)p(K|𝒞_n=𝒞_n-1∪{{n}})=∑_K=ℓ+1^∞Π(_n∈ A|_c^⋆,_c^⋆,c∈𝒞_n-1,K)p(K|𝒞_n=𝒞_n-1∪{{n}}). Hence Π(_n∈ A|_1,⋯,_n-1)∝[V_n(t+1)β/V_n(t)]∑_K=ℓ+1^∞ p(K|𝒞_n=𝒞_n-1∪{{n}}) Π(_n∈ A|_c^⋆,_c^⋆,c∈𝒞_n-1,K) +∑_c∈𝒞_n-1(|c|+β)δ_(_c^⋆,_c^⋆)(A), and hence, by generalized Bayes rule (<ref>), Π(_n∈ A|_n,_1,⋯,_n-1) ∝ [V_n(t+1)β/V_n(t)]∑_K=ℓ+1^∞p(K|𝒞_n=𝒞_n-1∪{{n}})× ∬_Aϕ(_n|_n,_n)Π(d_nd_n|_c^⋆,_c^⋆,c∈𝒞_n-1,K) +∑_c∈𝒞_n-1(|c|+β)δ_(_c^⋆,_c^⋆)(A)ϕ(_n|_c^⋆,_c^⋆). By definition, for any measurable A⊂ℝ^p×, when K≥ℓ+1, we have Π(_n∈ A|_c^⋆,_c^⋆,c∈𝒞_n-1,K)∝∬_A [∫⋯∫ h_K(_c^⋆:c∈𝒞_n-1∪_∅)∏_c∈_∅\{c}p_(_c^⋆)d_c^⋆]p_(_c^⋆)p_(_c^⋆)d_c^⋆d_c^⋆.=∬_A L_K(_c^⋆)p_(_c^⋆)p_(_c^⋆)d_c^⋆d_c^⋆. Normalizing the above conditional probability distribution yields Π(_n∈ A|_c^⋆,_c^⋆,c∈𝒞_n-1,K)= ∬_A L_K(_c^⋆)p_(_c^⋆)p_(_c^⋆)d_c^⋆d_c^⋆/∬ L_K(_c^⋆)p_(_c^⋆)p_(_c^⋆)d_c^⋆d_c^⋆. Hence the generalized Bayes rule (<ref>) yields Π(_n∈ A|_n,_c^⋆,_c^⋆,c∈𝒞_n-1,K)= ∬_Aϕ(_n|_c^⋆,_c^⋆) L_K(_c^⋆)p_(_c^⋆)p_(_c^⋆)d_c^⋆d_c^⋆/∬ϕ(_n|_c^⋆,_c^⋆) L_K(_c^⋆)p_(_c^⋆)p_(_c^⋆)d_c^⋆d_c^⋆. 
Notice that, again, by the generalized Bayes rule (<ref>), we have p(K|𝒞_n=𝒞_n-1∪{{n}})∬_Aϕ(_n|_n,_n)Π(d_nd_n|_c^⋆,_c^⋆,c∈𝒞_n-1,K)=p(K|𝒞_n=𝒞_n-1∪{{n}})∬ϕ(_n|_n,_n)Π(d_nd_n|_c^⋆,_c^⋆,c∈𝒞_n-1,K)×Π(_n∈ A|_n,_c^⋆,_c^⋆,c∈𝒞_n-1,K)=p(K|𝒞_n=𝒞_n-1∪{{n}}) [∬ϕ(_n|_c^⋆,_c^⋆) L_K(_c^⋆)p_(_c^⋆)p_(_c^⋆)d_c^⋆d_c^⋆/∬ L_K(_c^⋆)p_(_c^⋆)p_(_c^⋆)d_c^⋆d_c^⋆] ×Π(_n∈ A|_n,_c^⋆,_c^⋆,c∈𝒞_n-1,K)=p(K|𝒞_n=𝒞_n-1∪{{n}})×[∫⋯∬ϕ(_n|_c^⋆,_c^⋆) h_K(_c^⋆:c∈_n-1∪_∅) p_(_c^⋆)d_c^⋆∏_c∈_∅p_(_c^⋆)d_c^⋆/∫⋯∫ h_K(_c^⋆:c∈𝒞_n-1∪𝒞_∅)∏_c∈𝒞_∅p_(_c^⋆)d_c^⋆] ×[ ∬_A ϕ(_n|_c^⋆,^⋆_c)L_K(_c^⋆)p_(_c^⋆)d_c^⋆d_c^⋆/∬ϕ(_n|_c^⋆,^⋆_c)L_K(_c^⋆)p_(_c^⋆)d_c^⋆d_c^⋆]=α_K G_K(A). The proof is thus completed. We first check the first assertion. By definition∬_A ϕ(_i|_c^⋆,_c^⋆)g(_c^⋆,_c^⋆|_-i,_-i)d_c^⋆d_c^⋆ =∑_K=|_-i|+1^∞ p(K|=_-i∪{{i}}) ∬_A ϕ(_i|_c^⋆,_c^⋆) L_K(_c^⋆)p_(_c^⋆)p_(_c^⋆)d_c^⋆d_c^⋆/∫ L_K(_c^⋆)p_(_c^⋆)d_c^⋆ = ∑_K=|_-i|+1 m_Kp(K|=_-i∪{{i}}) ∬_A ϕ(_n|_c^⋆,_c^⋆) L_K(_c^⋆)p_(_c^⋆)p_(_c^⋆)d_c^⋆d_c^⋆/∬ϕ(_n|_c^⋆,_c^⋆) L_K(_c^⋆)p_(_c^⋆)p_(_c^⋆)d_c^⋆d_c^⋆ = ∑_K=|_-i|+1α_KG_K(A),and hence we see that∬ϕ(_i|_c^⋆,_c^⋆)g(_c^⋆,_c^⋆|_-i,_-i)d_c^⋆d_c^⋆=∑_K=|_-i|+1α_K.Given observation _i, denoteG(A|_i,_-i,_-i)= ∬_Aϕ(_i|_c^⋆,_c^⋆)g(_c^⋆,_c^⋆|_i,_-i,_-i)d_c^⋆d_c^⋆/∬ϕ(_i|_c^⋆,_c^⋆)g(_c^⋆,_c^⋆|_i,_-i,_-i)d_c^⋆d_c^⋆,and let g(_c^⋆,_c^⋆|_i,_-i,_-i) be the corresponding density of G(·|_i,_-i,_-i).By construction, given the auxiliary variable _c^⋆,_c^⋆, we haveℙ(_i∈ A|_c^⋆,_c^⋆,_i,_-i,_-i) =[V_n(|_-i|+1)β/V_n(|_-i|)]ϕ(_i|_c^⋆,_c^⋆)δ_(_c^⋆,_c^⋆)(A)/[V_n(|_-i|+1)β/V_n(|_-i|)]ϕ(_i|_c^⋆,_c^⋆)+∑_c∈_-i(|c|+β)ϕ(_i|_c^⋆,_c^⋆)+ ∑_c∈_-i(|c|+β)ϕ(_i|_c^⋆,_c^⋆)δ_(_c^⋆,_c^⋆)(A)/[V_n(|_-i|+1)β/V_n(|_-i|)]ϕ(_i|_c^⋆,_c^⋆)+∑_c∈_-i(|c|+β)ϕ(_i|_c^⋆,_c^⋆).Integrate the RHS of the last display against p(_c^⋆,_c^⋆|_i,_-i,_-i) yieldsℙ(_i∈ A|_i,_-i,_-i)=∬ℙ(_i∈ A|_c^⋆,_c^⋆,_i,_-i,_-i)p(_c^⋆,_c^⋆|_i,_-i,_-i)d_c^⋆d_c^⋆=[V_n(|_-i|+1)β/V_n(|_-i|)]∬_Aϕ(_i|_i,_i)g(_i,_i|_-i,_-i)d_id_i/[V_n(|_-i|+1)β/V_n(|_-i|)]∑_K=|_-i|+1^∞α_K+∑_c∈_-i(|c|+β)ϕ(_i|_c^⋆,_c^⋆) + ∑_c∈_-i(|c|+β)ϕ(_i|_c^⋆,_c^⋆)δ_(_c^⋆,_c^⋆)(A)/[V_n(|_-i|+1)β/V_n(|_-i|)]∑_K=|_-i|+1^∞α_K+∑_c∈_-i(|c|+β)ϕ(_i|_c^⋆,_c^⋆)∝[V_n(|_-i|+1)β/V_n(|_-i|)]∬_Aϕ(_i|_i,_i)g(_i,_i|_-i,_-i)d_id_i + ∑_c∈_-i(|c|+β)ϕ(_i|_c^⋆,_c^⋆)δ_(_c^⋆,_c^⋆)(A) = [V_n(|_-i|+1)β/V_n(|_-i|)]∑_K=|_-i|+1^∞α_KG_K(A)+ ∑_c∈_-i(|c|+β)ϕ(_i|_c^⋆,_c^⋆)δ_(_c^⋆,_c^⋆)(A),which coincides with (<ref>). This completes the proof the first assertion. 
For the second assertion,by construction, we haveℙ(=(_-i\{c})∪({c∪{i}}),(_c^⋆,_c^⋆)∈ A|_-i,_-i,_i) =∬_A ℙ(=(_-i\{c})∪({c∪{i}})|_-i,_-i,_c^⋆,_c^⋆,_i)p(_c^⋆,_c^⋆|_i,_c^⋆,_c^⋆,c∈_-i)d_c^⋆d_c^⋆ = ∬_A(|c|+β)ϕ(_i|_c^⋆,_c^⋆)p(_c^⋆,_c^⋆|_i,_c^⋆,_c^⋆,c∈_-i)/[V_n(|_-i|+1)β/V_n(|_-i|)]ϕ(_i|_c^⋆,_c^⋆)+∑_c∈_-i(|c|+β)ϕ(_i|_c^⋆,_c^⋆)d_c^⋆d_c^⋆ = ∬_A(|c|+β)ϕ(_i|_c^⋆,_c^⋆)g(_c^⋆,_c^⋆|_c^⋆,_c^⋆,c∈_-i)/[V_n(|_-i|+1)β/V_n(|_-i|)]∑_K=|_-i|+1^∞α_K+∑_c∈_-i(|c|+β)ϕ(_i|_c^⋆,_c^⋆)d_c^⋆d_c^⋆.Since given =(_-i\{c})∪({c∪{i}}), _-i=(_1,⋯,_n), it follows that the conditional distribution can be directly computed:ℙ((_c^⋆,_c^⋆)∈ A|=(_-i\{c})∪({c∪{i}}), _-i,_-i,_i,_i) = ℙ((_c^⋆,_c^⋆)∈ A|=(_-i\{c})∪({c∪{i}}), _-i,_-i,_i) = ℙ(=(_-i\{c})∪({c∪{i}}),(_c^⋆,_c^⋆)∈ A|_-i,_-i,_i)/ℙ(=(_-i\{c})∪({c∪{i}})|_-i,_-i,_i) = G(A|_c^⋆,_c^⋆,c∈_-i).On the other hand, we know from definition thatℙ(_i∈ A|=_-i∪{{i}},_i,_c^⋆,_c^⋆,_-i,_-i)=δ_(_c^⋆,_c^⋆)(A).It follows directly that ℙ((_c^⋆,_c^⋆)∈ A|_-i∪{{i}},_i,_i,_-i,_-i)=δ__i(A), and hence the second assertion is proved.§ DETAILS OF POSTERIOR INFERENCEIn this section we provide the detailed blocked-collapsed Gibbs sampler in Algorithm <ref> when a conjugate prior on the covariance matrices for all components is used: _k=diag(λ_1k,⋯,λ_pk) and λ_jk p(λ)∝𝕀(λ∈[^-2,^-2])λ^-a_0-1exp(-b_0/λ), j=1,⋯,p,k=1,⋯,K. Easy extension of the sampler is available when one use Inverse-Wishart distribution on the non-diagonal covariance matrices _k's. A practical issue for the implementation of the Gibbs sampler is sampling from the conditional prior p(K|𝒞) as well as the conditional posterior p(K|𝒞,_1,⋯,_n,_c^⋆:c∈). Using formula (3.7) in <cit.>, we see that p(K|𝒞)∝K!/(K+n)!(K-|𝒞|)!. Notice that for K>>|𝒞|, p(K)≈ 0, and therefore in practice one may use the following approximate sampling schemep(K|𝒞)∝K!/(K+n)!(K-|𝒞|)!,K=|𝒞|,|𝒞|+1,⋯,|𝒞|+mfor a moderate choice of the perturbation range m, especially when n is large, in which case the probability of having large number of empty components(i.e. K>>|𝒞|) is negligible. Sampling from the conditional posterior p(K|𝒞,_1,⋯,_n,_c^⋆:c∈), however, is a slightly harder issue.Denote p_(_c^⋆|_i:i∈ c,_c^⋆)= p_(_c^⋆)∏_i∈ cϕ(_i|_c^⋆,_c^⋆)/∫ p_(_c^⋆)∏_i∈ cϕ(_i|_c^⋆,_c^⋆)d_c^⋆to be the conditional posterior of _c^⋆ given observations when the repulsive prior is not introduced, namely, when _c^⋆∼ p_ independently. Given the partition , the cluster-spefic covariance matrices (_c^⋆:c∈), and the observations (_i)_i=1^n, the posterior of (_c^⋆:c∈) p(_c^⋆:c∈|_c^⋆,c∈,(_i)_i=1^n)∝∑_K=||^∞ p(_c^⋆:c∈| K,_c^⋆,c∈,(_i)_i=1^n)p(K|_c^⋆,c∈,(_i)_i=1^n),wherep(_c^⋆:c∈| K,_c^⋆,c∈,(_i)_i=1^n)∝ ∫⋯∫ h_K(_c^⋆:c∈∪_∅)[∏_c∈p(_c^⋆|_i:i∈ c,_c^⋆)][∏_c∈_∅p_(_c^⋆)d_c^⋆] ,andp(K|_c^⋆,c∈,(_i)_i=1^n)∝p(K|)/Z_K∫⋯∫ h_K(_c^⋆:c∈∪_∅)[∏_c∈∪_∅p(_c^⋆|_i:i∈ c,_c^⋆)d_c^⋆].Step 4 of the blocked-collapsed Gibbs sampler in Section <ref> of the manuscript samples from p(_c^⋆:c∈|K,_c^⋆,c∈,(_i)_i=1^n). To sample from p(K|_c^⋆,c∈,(_i)_i=1^n), we use numerically compute Z_K and the intractable integral when sampling from p(_c^⋆|_i:i∈ c,_c^⋆) is tractable, which is usually the case when p_ is the conjugate normal prior. In what follows we provide the detailed blocked-collapsed Gibbs sampler. Alternatively, to gain computational efficiency, one can use p(K|) to approximate p(K|_c^⋆,c∈,(_i)_i=1^n) in the resampling steps. 
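For concreteness, the truncated update for K just described can be sketched as follows; this is our own illustration (not the released sampler code), with the unnormalized weights taken from the displayed formula and evaluated via log-gamma functions for numerical stability.

```python
import numpy as np
from scipy.special import gammaln

def sample_K_given_partition(t, n, m, rng):
    # Draw K from p(K | C) proportional to K! / ((K + n)! (K - t)!),
    # restricted to K = t, ..., t + m, where t = |C| is the number of
    # occupied clusters and m is the perturbation range.
    Ks = np.arange(t, t + m + 1)
    log_w = gammaln(Ks + 1) - gammaln(Ks + n + 1) - gammaln(Ks - t + 1)
    w = np.exp(log_w - log_w.max())      # stabilize before normalizing
    return int(rng.choice(Ks, p=w / w.sum()))

rng = np.random.default_rng(1)
K = sample_K_given_partition(t=4, n=272, m=2, rng=rng)   # m = 2 as in our runs
```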
§ CONVERGENCE DIAGNOSTICS
§.§ Convergence Check for Subsection <ref>
We check convergence via the trace plots and autocorrelations of some randomly selected _i's (which are identifiable, in contrast to the exact means of the different components) in Figure <ref>, showing no signs of non-convergence.
§.§ Convergence Check for Subsection <ref>
We check convergence via the trace plots and the autocorrelations of some randomly selected _i's in Figure <ref>, showing no signs of non-convergence.
§.§ Convergence Check for Subsection <ref>
The trace plots and the autocorrelations of some randomly selected _i's in Figure <ref> indicate no signs of non-convergence.
§.§ Convergence Check for Subsection <ref>
The trace plots and the autocorrelations of some randomly selected _i's in Figure <ref> indicate no signs of non-convergence.
§ ADDITIONAL SIMULATION STUDY
In this section we consider a synthetic example where the number of observations and the number of components are moderately large. The ground-truth density is given by a mixture of K=13 Gaussians. The first 12 Gaussian components are equally weighted, each with mixing weight 1/24, and the weight of the last component is 12/24. The first 12 components are centered at (6, 6), (6, 12), (12, 6), (-6, 6), (-6, 12), (-12, 6), (6, -6), (6, -12), (12, -6), (-6, -6), (-6, -12), (-12, -6), respectively, with identical covariance matrix _2. The last component is centered at the origin with covariance matrix 30_2. We collect 2000 i.i.d. observations from this Gaussian mixture distribution, and implement the proposed blocked-collapsed Gibbs sampler with g_0 = 10, τ = 10, m = 2, the lower and upper scale bounds set to 0.1 and 10, and a total number of 2000 iterations with the first 1000 iterations discarded as burn-in. For comparison, we consider the following DPM model, (_i|_z_i,_z_i)∼ N(_z_i,_z_i), (_z_i,_z_i| G) G, and (G|α, G_0)∼DP(α, G_0), where G_0=N(, ) with ∼N(_1,/k_0) and ∼Inv-Wishart(4,Ψ_1), α∼Gamma(1, 1), _1∼N(0,2_2), k_0∼Gamma(0.5,0.5), and Ψ_1∼Inv-Wishart(4, 0.5_2). Figures <ref>a and <ref>c compare the posterior mean of the density under the RGM model and under the DP mixture model, respectively, with the data-generating density, together with the corresponding log-CPO values. The log-CPO values indicate that the RGM model is a better choice compared to the DP mixture model. Furthermore, Figure <ref>b indicates that the posterior distribution of K is highly concentrated around the underlying true K=13 under the RGM model, whereas the DPM model assigns relatively higher posterior probability to redundant clusters (see Figure <ref>d). | http://arxiv.org/abs/1703.09061v2 | {
"authors": [
"Fangzheng Xie",
"Yanxun Xu"
],
"categories": [
"stat.ME"
],
"primary_category": "stat.ME",
"published": "20170327133309",
"title": "Bayesian Repulsive Gaussian Mixture Model"
} |
Department of Computer Science^1, Department of Cognitive Science^2, Rensselaer Polytechnic Institute (RPI), Troy NY 12180 USA [email protected][email protected] Proof Verification Can Be Hard! This abstract was accepted for a presentation at Computability in Europe 2014: http://cie2014.inf.elte.hu/?Accepted_Papers. Naveen Sundar Govindarajulu^2 & Selmer Bringsjord^1,2 December 30, 2023 ======================================================================================================================================================================================================== The generally accepted wisdom in computational circles is that pure proof verification is a solved problem and that the computationally hard elements and fertile areas of study lie in proof discovery.[Conjecture generation in our experience is also commonly regarded to be genuinely difficult.] This wisdom presumably does hold for conventional proof systems such as first-order logic with a standard proof calculus such as natural deduction or resolution. But this folk belief breaks down when we consider more user-friendly/powerful inference rules. One such rule is the restricted ω-rule, which is not even semi-decidable when added to a standard proof calculus of a nice theory.[Nice theories are consistent, decidable, and allow representations <cit.>. Roughly put, if a theory allows representations, it can prove facts about the primitive-recursive relations and functions. (See Smith's An Introduction to Gödel's Theorems.) A formal system (a theory Γ and a proof calculus ρ) is decidable/semi-decidable/not-semi-decidable if the decision problem Γ⊢_ργ is decidable/semi-decidable/not-semi-decidable.] While presumably not a novel result, we feel that the hardness of proof verification is under-appreciated in most communities that deal with proofs. A proof-sketch follows. We set some context before we delve into the sketch. The formal machinery and conventions follow <cit.>. We assume the standard apparatus of first-order logic, and that we are concerned only with theories of arithmetic and with machine checking and discovery of proofs of theorems of arithmetic. A theory Γ is said to be negation-incomplete (incomplete) iff there is at least one ϕ such that Γ⊬ϕ and Γ⊬¬ϕ. As readers will recall, Gödel's first incompleteness theorem states that any sufficiently strong theory of arithmetic that has certain desirable attributes is incomplete. Peano Arithmetic (𝖯𝖠) is one of the smallest incomplete theories that covers all of standard arithmetic. One way to surmount incompleteness is to add more user-friendly (or mathematician-friendly) rules of inference.
The ω-rule is one such rule of inference. The ω-rule can be added in order to complete proof calculi; specifically, the ω-rule renders 𝖯𝖠 complete. This infinitary rule is of the following form: from the infinitely many premises ϕ(0), ϕ(1), ϕ(2), …, infer ∀ x ϕ(x). The above rule has an infinite number of premises and is clearly not suitable for implementation. A restricted ω-rule is a finite form of the rule which still keeps 𝖯𝖠 complete. Assume that we have machines operating over representations of numerals and proofs. Suppose we have a machine 𝗆_ϕ which, for every n∈ℕ and a given formula ϕ with one free variable, produces a proof of ϕ(n) from some set of axioms Γ; that is, 𝗆_ϕ: n↦ρ(Γ,ϕ(n)).[An accessible reference for the ω-rule is <cit.>. All these results, except the main argument in the present abstract, are available in <cit.>.] Given this, one form of the restricted ω-rule is as follows: from Γ together with the machine 𝗆_ϕ, infer ∀ x ϕ(x). Though the restricted ω-rule can (as just seen) be written down in full, complete checking of the rule is beyond any machine implementation, since in the general case a proof verification system that handles the rule would have to check, in all possible cases, whether the program supplied halts with the correct proof. A simple proof of this limit is given in the appendix. We feel that this limitative result demonstrates that proof representation and proof verification in mathematics can be a fertile area of study involving a rich interplay between expressibility and computational costs.
§ APPENDIX: PROOF
Theorem: 𝖯𝖠_ω is not semi-decidable.
Let 𝖯𝖠_ω denote the formal system comprised of 𝖯𝖠 with a standard proof calculus ρ augmented with the restricted ω-rule. Assume that we are only talking about Turing machines which output exactly one of {𝗍𝗋𝗎𝖾, 𝖿𝖺𝗅𝗌𝖾} on all inputs, or else go on forever without halting. The inputs are numerals which encode natural numbers. Given: 𝖯𝖠_ω is negation-complete and all its theorems are true on the standard model ⟨ℕ;0,𝖲,+,1⟩. The following three statements can be coded up as arithmetic statements in the language of 𝖯𝖠_ω.
* Machine m on input n halts with 𝗍𝗋𝗎𝖾
* Machine m on input n halts with 𝖿𝖺𝗅𝗌𝖾
* Machine m on input n never halts
For any machine m and any input n, exactly one of the above is true in the standard model and therefore a theorem in 𝖯𝖠_ω. Assumption 1: 𝖯𝖠_ω is semi-decidable. That is, we have a machine G which on input ⟨ p, q⟩ outputs 𝗍𝗋𝗎𝖾 if p represents a proof in 𝖯𝖠_ω of the statement encoded by q; otherwise it outputs 𝖿𝖺𝗅𝗌𝖾 or never halts. If Assumption 1 holds, then we can have a machine H which on input ⟨ m,n⟩ decides if machine m halts on input n; i.e., H solves the halting problem. The machine H is specified below as an algorithm. One of the three threads in H will halt. Therefore H decides the halting problem. We have arrived at a contradiction by supposing Assumption 1, which can now be discarded, and our main thesis is established.
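The algorithm listing for H referenced in the proof did not survive the text extraction, so the following is our reconstruction: a hedged Python sketch rather than the authors' original pseudocode. G, the statement encoders, and the step-bounded interpreter run_for are assumed primitives; since G is only guaranteed to semi-decide proofhood, the three proof searches are dovetailed over candidate proofs and step budgets.

```python
from itertools import count

def H(m, n, G, encode_proof, stmt_halts_true, stmt_halts_false, stmt_diverges,
      run_for):
    # stmt_*(m, n): codes of the three arithmetic statements about machine m
    # on input n ("halts with true", "halts with false", "never halts").
    # run_for(G, args, steps): simulate G on args for at most `steps` steps,
    # returning G's output if it halted within the budget and None otherwise.
    targets = [(stmt_halts_true(m, n), "halts"),
               (stmt_halts_false(m, n), "halts"),
               (stmt_diverges(m, n), "does not halt")]
    for budget in count(1):             # dovetail over step budgets ...
        for p in range(budget):         # ... and over candidate proofs p
            for q, verdict in targets:  # the three "threads" of H
                if run_for(G, (encode_proof(p), q), budget) is True:
                    return verdict      # exactly one statement has a proof
```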
"authors": [
"Naveen Sundar Govindarajulu",
"Selmer Bringsjord"
],
"categories": [
"cs.LO",
"cs.CC"
],
"primary_category": "cs.LO",
"published": "20170325225020",
"title": "Proof Verification Can Be Hard!"
} |
§ INTRODUCTION Blazars are the most luminous persistent (as opposed to gamma-ray bursts) gamma-ray emitters in the Universe. Among extragalactic sources, they are the brightest in the MeV-GeV range and the only detected ones at TeV energies (with the exception of a few very nearby radiogalaxies and starbursts). They represent more than 50% of the significantly-detected sources in the 3rd Fermi-LAT catalog (Ackermann et al. 2015), and virtually the majority of the unidentified LAT targets. The typical "camel's back" (Falcke et al. 2004) shape of blazar spectral energy distribution is attributed to synchrotron radiation (responsible for emission at the lower frequencies) in a relativistic jet pointing at a small angle (5-10 deg) with respect to the observer and inverse Compton scattering (at higher frequencies, up to GeV-TeV) of relativistic particles off synchrotron radiation or photon fields external to the jet (accretion disk, broad emission line region, dusty torus). The mutual relevance of external photon fields and jet photons in the inverse Compton scattering process determines the peak frequencies of the synchrotron and inverse Compton emission components. In flat-spectrum radio quasars, that have broad emission lines, the particles cool more effectively than in BL Lac objects and the characteristic frequencies are accordingly lower (Ghisellini et al. 1998). A range of continuous properties are observed between the most efficient coolers, flat-spectrum radio quasars with synchrotron peaks located at far-infrared wavelengths, and "extreme-synchrotron" BL Lac objects, where the synchrotron peak is located at soft X-rays and occasionally moves, during outbursts, to hard X-rays (Pian et al. 1998). These and the sources with intermediate properties represent a "blazar sequence" whereby total luminosities increase with decreasing characteristic frequencies (Fossati et al. 1998). Notable exceptions to this picture are a number of high-redshift blazars with broad luminous emission lines and high-frequency-peaked emission components (Arsioli et al. 2015). The phenomenology of the "blazar sequence" is naturally explained within a scenario where the relativistic jet has a predominantly leptonic composition, which found confirmation in many studies of spectral energy distributions and multi-wavelength variability of blazars (e.g. Ackermann et al. 2012; Ghisellini et al. 2013; Marscher 2014; D'Ammando et al. 2015; Hayashida et al. 2015; Baring et al. 2017). However, an alternative "hadronic" scenario cannot be discounted, and occasionally reproduces the observed spectral energy distribution in a more consistent way than the leptonic scenario, although it requires large powers in relativistic protons (Böttcher et al. 2013; Diltz et al. 2015; Ackermann et al. 2016; Paliya et al. 2016). § INTEGRAL HIGHLIGHTS IN BLAZAR RESEARCH INTEGRAL detected a large number of blazars with the IBIS/ISGRI instrument both in surveys (Beckmann et al. 2009; Bassani et al. 2014; Krivonos et al. 2015; Malizia et al. 2016) and in targeted observations during flaring states, when the measurement of the hard X-ray spectrum (15-100 keV) and variability contributes, sometimes in a critical way, to the interpretation of the multi-wavelength observations (Courvoisier et al. 2003; Türler et al. 2006; Foschini et al. 2006; Lichti et al. 2008; Vercellone et al. 2009; Collmar et al. 2010; Bottacini et al.
2010a). Here I will concentrate on the results obtained with the program on blazars in outburst that I have led since mission launch, and a few more specific results led by collaborators. In flat-spectrum radio quasars and "intermediate BL Lac objects", the hard X-ray emission has a flat spectrum and is generally located below the spectral maximum of the inverse Compton component (see Figure 1 in Falomo et al. 2014 and Fig. A1 in Ghisellini et al. 2011); therefore, it traces the behavior of the relativistic particles that produce the radio and infrared part of the synchrotron spectrum and helps locate the frequency of the inverse Compton maximum. This was clearly measured in one of the first blazar sources observed by INTEGRAL IBIS, the high-redshift (z = 2.172) flat-spectrum radio quasar S5 0836+71, that was detected in a high state and with a very flat spectrum (photon index Γ = 1.3) during an INTEGRAL pointing of the intermediate BL Lac S5 0716+71 (Pian et al. 2005). This observation revealed one of the best advantages of INTEGRAL for investigation of the high energy sky, namely the large field of view of its cameras, that allows serendipitous detections of several interesting targets (in that same pointing also two Seyfert galaxies were observed). Among the brightest blazars of the 3rd Fermi-LAT AGN catalog (Ackermann et al. 2015), the flat-spectrum radio quasars 3C 279, 3C 454.3, PKS 1502+106, and PKS 1510–089 were INTEGRAL targets on various occasions. Our collaboration observed 3C 454.3 (z = 0.859) in May 2005 at many frequencies other than hard X-rays, and found a rather different multi-wavelength spectral shape with respect to an earlier epoch, although the total energy had not varied significantly (Pian et al. 2006). This made this source (which is also a strong AGILE source and TeV candidate, Vercellone et al. 2008) an excellent testbed for the "economic" jet model, whereby variations in individual bands can be conspicuous (larger than a factor of 10), with the total energy injected in the emitting region remaining however approximately constant (Katarzyński & Ghisellini 2007; Ghisellini et al. 2007). PKS 1502+106 (z = 1.839) was observed by INTEGRAL in 2008 during a bright flare observed by Fermi-LAT. IBIS did not detect it, although it set a significant constraint on the behavior of the inverse Compton spectrum (Pian et al. 2011). Incidentally, one of the best studied Seyfert 1 galaxies, Mkn 841, is ∼8 arcmin away from PKS 1502+106 and detected by IBIS, so that we could compare its hard X-ray spectrum with prior observations and with a "classical" model including a soft X-ray excess, a Comptonized spectrum of a hot corona with high energy cutoff and a reflected component (Bianchi et al. 2001). The flat-spectrum radio quasar PKS 1510–089 (z = 0.36) is well studied at all frequencies, including TeV (HESS Collaboration 2013), and was repeatedly observed by INTEGRAL. We used this circumstance to assemble data for this source over a rather long baseline (20 years) and to reconstruct its multi-wavelength history. The spectral energy distribution was consistently compared with a model to trace the time-dependent behavior of the physical parameters, and we searched the long data series for possible (quasi-)periodicities, which were found however to be not significant (Castignani et al. 2017). IBIS/ISGRI observed 3C 279 (z = 0.539), the best known blazar, for 50 ks during its June 2015 high state, as part of the INTEGRAL survey of the Coma region (Bottacini et al.
This high state of 3C 279 adds to the rather complex multi-wavelength time behavior known from previous campaigns centered on INTEGRAL observations (Collmar et al. 2010). Examples of the good synergy between INTEGRAL and XMM-Newton are the high-redshift blazars PKS 0537-286 (z = 3.1) and PKS 2149–306 (z = 2.345), whose flat hard X-ray spectra made them optimal targets for INTEGRAL. PKS 0537-286 was observed twice, in 2006 and 2008, and the IBIS spectrum constrained the modelling of the spectral energy distribution (Bottacini et al. 2010b). PKS 2149–306 was observed by INTEGRAL IBIS and Swift BAT at hard X-rays. The comparison, together with XMM-Newton data, reveals two different states, which can be reproduced solely by a change of the bulk Lorentz factor of the jet (Bianchin et al. 2009). Particular interest is associated with the BL Lac object Mkn 421 (z = 0.031), both for its X-ray brightness and for the similarity of its overall spectral shape with Mkn 501, the prototypical "extreme synchrotron" blazar. Hard X-ray observations of Mkn 421 in flaring state have the potential to detect a shift of the synchrotron peak frequency of about two orders of magnitude, analogous to that detected in Mkn 501 (Pian et al. 1998), and to establish this source as an equally powerful accelerator. Our IBIS observation in April 2013, triggered by a high X-ray state seen by Swift-XRT and a simultaneous TeV flare seen by VERITAS, showed the source in a somewhat dimmer hard X-ray state than seen in 2006 (Lichti et al. 2008) and with a steeper X-ray spectrum (Pian et al. 2014). The synchrotron peak energy is limited to ∼1 keV, although Swift and NuSTAR observations in Jan-Jun 2013, bracketing the interval of our IBIS campaign, detected it to vary between 1 and 10 keV (Kapanadze et al. 2016). The simultaneous INTEGRAL IBIS and JEM-X light curves in April 2013 indicate complex variability with direct correlation at the various frequencies, and a possible delay that increases monotonically with energy difference and reaches about 1 hour between the fluxes at the lowest (3-5 keV) and the highest (40-100 keV) frequencies (Pian et al. 2014).

§ CONCLUSION AND FUTURE PROSPECTS The simultaneous use of INTEGRAL and other space- and ground-based multi-wavelength facilities has led to substantial progress in mapping the behavior of jets both for BL Lac objects and for emission line radio quasars, owing to its sensitivity in a critical interval of the blazar spectrum. This is due both to its flexibility of scheduling and repointing at blazar targets of opportunity following alerts from other missions (see Figure 1), and to the large field-of-view of its cameras that allows serendipitous detection of blazars in active state (see Bottacini et al. 2010a, 2016). INTEGRAL's blazar legacy is bestowed on future space missions and ground-based telescope networks that will refine the level of coordination, coverage, monitoring duration and sampling rate.
Optimization of blazar campaigns will lead to understanding the details of blazar jet structure and physics. In particular, it is becoming increasingly clear that the geometry of the emitting region is complex, so that a homogeneous region approximation may not be valid for an accurate description of multi-wavelength variability. Moreover, the jet composition, leptonic vs lepto-hadronic, may play a critical role (Böttcher 2010). Blazar investigation will benefit from the addition of the important multi-messenger aspect represented by ultra-high energy neutrinos. IceCube has so far detected a few extremely energetic events above 100 TeV, one of which was recently proposed to be associated with a gamma-ray flare detected by Fermi-LAT in the blazar PKS B1424-418 at z = 1.522 (Kadler et al. 2016). While neutrino detection may favour a hadronic scenario for blazars, a structured jet as envisaged in a spine-and-sheath geometry may also be viable (Tavecchio & Ghisellini 2015).

§ ACKNOWLEDGEMENTS I am indebted to all collaborators who have helped with the success of the blazar INTEGRAL program with inputs at so many different levels: P. Barr, A. Bazzano, V. Beckmann, T.M. Belloni, S. Bianchi, V. Bianchin, M. Böttcher, R. Boissay, E. Bottacini, G. Castignani, S. Ciprini, W. Collmar, T. Courvoisier, F. D'Ammando, G. Di Cocco, D. Eckert, C. Ferrigno, M.T. Fiocchi, L. Foschini, L. Fuhrmann, N. Gehrels, G. Ghisellini, P. Giommi, R. Hudec, D. Impiombato, E. Lindfors, G. Malaguti, L. Maraschi, A. Marcowith, P. Michelson, K. Nilsson, G. Palumbo, M. Pasanen, M. Persic, T. Pursimo, C.M. Raiteri, P. Romano, T. Savolainen, M. Sikora, A. Sillanpää, S. Soldi, A. Stamerra, G. Tagliaferri, L. Takalo, F. Tavecchio, D. Thompson, M. Tornikoski, G. Tosti, A. Treves, M. Türler, P. Ubertini, E. Valtaoja, S. Vercellone, M. Villata, R. Walter, and A. Wolter. I am grateful also to E. Kuulkers, P. Kretschmar, G. Belanger and the staff at the INTEGRAL Science Operations Center and Science Data Center for their support of the program. Finally, I thank the organizers for a memorable conference, a nice tour of Amsterdam and an excellent Chinese banquet.

Ackermann2012 M. Ackermann, et al. 2012, ApJ, 751, 159 Ackermann2015 M. Ackermann, et al. 2015, ApJ, 810, 14 Ackermann2016 M. Ackermann, et al. 2016, ApJ, 824, L20 Arsioli2015 B. Arsioli, et al. 2015, A&A, 579, A34 Baring2017 M. Baring, M. Böttcher, E.J. Summerlin 2017, MNRAS, 464, 4875 Bassani2014 L. Bassani, et al. 2014, A&A, 561, A108 Beckmann2009 V. Beckmann, et al. 2009, A&A, 505, 417 Bianchi2001 S. Bianchi, et al. 2001, A&A, 376, 77 Bianchin2009 V. Bianchin, et al. 2009, A&A, 496, 423 Boettcher2010 M. Böttcher 2010, "Fermi meets Jansky – AGN at Radio and Gamma-Rays", Savolainen, T., Ros, E., Porcas, R.W., & Zensus, J.A. (eds.), June 21–23, 2010, Bonn, Germany (arXiv:1006.5048) Boettcher2013 M. Böttcher, A. Reimer, K. Sweeney, A. Prakash 2013, ApJ, 768, 54 Bottacini2010a E. Bottacini, et al. 2010a, ApJ, 719, L162 Bottacini2010b E. Bottacini, et al. 2010b, A&A, 509, A69 Bottacini2016 E. Bottacini, M. Böttcher, E. Pian, W. Collmar 2016, ApJ, 832, 17 Castignani2017 G. Castignani, et al. 2017, A&A, in press (arXiv:1612.05281) Collmar2010 W. Collmar, et al. 2010, A&A, 522, A66 Courvoisier2003 T. J.-L. Courvoisier, et al. 2003, A&A, 411, L343 DAmmando2015 F. D'Ammando, et al. 2015, MNRAS, 450, 3975 Diltz2015 C. Diltz, M. Böttcher, G. Fossati 2015, ApJ, 802, 133 Falcke2004 H. Falcke, E. Körding, S. Markoff 2004, A&A, 414, 895 Falomo2014 R. Falomo, E. Pian, A. Treves 2014, A&ARv, 22, 73 Foschini2006 L. Foschini, et al.
2006, A&A, 450, 77 Fossati1998 G. Fossati, et al. 1998, MNRAS, 299, 433 Ghisellini1998 G. Ghisellini, et al. 1998, MNRAS, 301, 451 Ghisellini2007 G. Ghisellini, L. Foschini, F. Tavecchio, E. Pian 2007, MNRAS, 382, L82 Ghisellini2011 G. Ghisellini, et al. 2011, MNRAS, 414, 2674 Ghisellini2013 G. Ghisellini, et al. 2013, MNRAS, 428, 1449 Hayashida2015 M. Hayashida, et al. 2015, ApJ, 807, 79 HESS2013 HESS Collaboration 2013, A&A, 554, A107 Kadler2016 M. Kadler, et al. 2016, Nature Physics, 12, 807 Kapanadze2016 B. Kapanadze, et al. 2016, ApJ, 831, 102 Katarzynski2007 K. Katarzyński & G. Ghisellini 2007, A&A, 463, 529 Krivonos2015 R. Krivonos, et al. 2015, MNRAS, 448, 3766 Lichti2008 G. G. Lichti, et al. 2008, A&A, 486, 721 Malizia2016 A. Malizia, et al. 2016, MNRAS, 460, 19 Marscher2014 A. P. Marscher 2014, ApJ, 780, 87 Paliya2016 V. Paliya, et al. 2016, ApJ, 817, 61 Pian1998 E. Pian, et al. 1998, ApJ, 492, L17 Pian2005 E. Pian, et al. 2005, A&A, 429, 427 Pian2006 E. Pian, et al. 2006, A&A, 449, L21 Pian2011 E. Pian, et al. 2011, A&A, 526, A125 Pian2014 E. Pian, et al. 2014, A&A, 570, A77 Tavecchio2015 F. Tavecchio & G. Ghisellini 2015, MNRAS, 451, 1502 Turler2006 M. Türler, et al. 2006, A&A, 451, L1 Vercellone2008 S. Vercellone, et al. 2008, ApJ, 676, L13 Vercellone2009 S. Vercellone, et al. 2009, ApJ, 690, 1018 | http://arxiv.org/abs/1703.08873v1 | {
"authors": [
"Elena Pian"
],
"categories": [
"astro-ph.HE"
],
"primary_category": "astro-ph.HE",
"published": "20170326211349",
"title": "Multiwavelength observations of blazars"
} |
Distributed Adaptive Gradient Optimization Algorithm This work was supported by the National Science Foundation under Grants ECCS-1307678 and ECCS-1611423, and the National Natural Science Foundation of China (61203080, 61573082, 61528301). Peng Lin is with the School of Information Science and Engineering, Central South University, Changsha 410083, China. Wei Ren is with the Department of Electrical and Computer Engineering, University of California, Riverside, CA 92521, USA. E-mail: [email protected], [email protected]. Peng Lin and Wei Ren Received: date / Accepted: date ==============================================================================================================================================================================================================================================================================================================

In this paper, a distributed optimization problem with general differentiable convex objective functions is studied for single-integrator and double-integrator multi-agent systems. Two distributed adaptive optimization algorithms are introduced which use relative state information to construct the gains of the interaction terms. The analysis is performed based on Lyapunov functions, the analysis of the system solution, and the convexity of the local objective functions. It is shown that if the gradients of the convex objective functions are continuous, the team convex objective function is minimized as time evolves for both single-integrator and double-integrator multi-agent systems. Numerical examples are included to illustrate the obtained theoretical results.

Keywords: Optimization, Consensus, Distributed adaptive algorithm

§ INTRODUCTION As an important branch of distributed control, distributed optimization has attracted more and more attention from the control community <cit.>. The aim is to use a distributed approach to minimize a team objective function composed of a sum of local objective functions, where each local objective function is known to only one agent. In the past few years, researchers have obtained many results on distributed optimization problems from different perspectives. For example, based on the gradient descent method, articles <cit.> studied distributed optimization problems with and without state constraints, while by introducing a dynamic integrator, articles <cit.> investigated distributed optimization problems for general strongly connected balanced directed graphs. Recently, some researchers have turned their attention to solving the distributed optimization problem from the viewpoint of nonsmooth approaches. For example, article <cit.> proposed several algorithms using nonsmooth functions to solve the distributed optimization problem with the consideration of finite-time consensus and optimization convergence. Also, articles <cit.> introduced adaptive algorithms using nonsmooth functions to solve the distributed optimization problem for general differentiable convex functions or general linear multi-agent systems. However, in <cit.>, it is required that the gradients or subgradients of the local objective functions be bounded, or that a period of the previous information be used by each agent. To this end, we continue the work of <cit.> to study the distributed optimization problem for general differentiable objective functions using nonsmooth functions. Two distributed adaptive optimization algorithms are introduced which use relative state information to construct the gains of the interaction terms.
The analysis is performed based on Lyapunov functions, the analysis of the system solution, and the convexity of the local objective functions. It is shown that if the gradients of the convex objective functions are continuous, the team convex objective function is minimized as time evolves for both single-integrator and double-integrator multi-agent systems.

Notations. ℝ^m denotes the set of all m-dimensional real column vectors; ℐ denotes the index set {1,…,n}; s_i denotes the ith component of the vector s; s^T denotes the transpose of the vector s; ||s|| denotes the Euclidean norm of the vector s; d/ds denotes the differential operator with respect to s; ∇ f(s) denotes the gradient of the function f(s) at s; sgn(s) denotes the component-wise sign function of s; and P_X(s) denotes the projection of the vector s onto the closed convex set X, i.e., P_X(s) = argmin_s̅∈ X ||s-s̅||.

§ PRELIMINARIES In this section, we introduce preliminary results about graph theory and convex functions (see <cit.>). Consider a multi-agent system consisting of n agents. Each agent is regarded as a node in an undirected graph 𝒢(𝒱,ℰ,𝒜) of order n, where 𝒱={1,⋯,n} is the set of nodes, ℰ⊆𝒱×𝒱 is the set of edges, and 𝒜=[a_ij]∈ℝ^n× n is the weighted adjacency matrix. An edge (i,j)∈ℰ denotes that agents i and j can obtain information from each other. The weighted adjacency matrix 𝒜 is defined such that a_ii=0, a_ij=a_ji≠ 0 if (i,j)∈ℰ, and a_ij=0 otherwise. The set of neighbors of node i is denoted by N_i={j∈𝒱:(i,j)∈ℰ}. The Laplacian of the graph 𝒢, denoted by L, is defined as ⌊L⌋_ii=∑_j=1^n a_ij and ⌊L⌋_ij=-a_ij for all i≠ j. A path is a sequence of edges of the form (i_1,i_2),(i_2,i_3),⋯, where i_j∈𝒱. The graph 𝒢 is connected if there is a path from every node to every other node.

<cit.> If the graph 𝒢 is connected, then its Laplacian L has a simple eigenvalue at 0 with associated eigenvector 1, and all its other n-1 eigenvalues are positive and real.

<cit.> Let f_0(s): ℝ^m→ℝ be a differentiable convex function. f_0(s) is minimized if and only if ∇ f_0(s)=0.

<cit.> Under Assumption <ref>, all X_i and X are nonempty closed bounded convex sets.

§ DISTRIBUTED OPTIMIZATION PROBLEM Suppose that each agent has the following dynamics

ẋ_i(t)=u_i(t), i∈ℐ,

where x_i(t)∈ℝ^m is the state of agent i, and u_i(t)∈ℝ^m is the control input of agent i. Our objective is to use only local information to design u_i(t) for all agents to cooperatively solve the following optimization problem

minimize ∑_i=1^n f_i(x_i), subject to x_i = x_j ∈ ℝ^m for all i,j∈ℐ.

<cit.> Each set X_i ≜ {s | ∇ f_i(s)=0} is nonempty and bounded.

<cit.> The length of the time interval between any two contiguous switching times is no smaller than a given constant, denoted by d_w.

§ MAIN RESULTS §.§ Single-Integrator Multi-Agent Systems In this section, we design a distributed adaptive algorithm for (<ref>) to solve the optimization problem (<ref>) for general convex local objective functions. The algorithm is given by

u_i(t) = ∑_j∈ N_i(t) q_ij(t)[x_j(t)-x_i(t)]/||x_j(t)-x_i(t)|| - ∇ f_i(x_i(t)),
q̇_ij(t) = sgn(||x_j(t)-x_i(t)||) if (i,j)∈𝒢(t), and q̇_ij(t)=0 otherwise,
q_ij(0)=q_ji(0)=0,

for all i. In (<ref>), the role of the term ∑_j∈ N_i(t) q_ij(t)[x_j(t)-x_i(t)]/||x_j(t)-x_i(t)|| is to make all agents converge to a consensus point, while the second term, -∇ f_i(x_i(t)), is the negative gradient of f_i(x_i(t)), which is used to minimize f_i(x_i(t)). As algorithm (<ref>) uses the sign function, which is nonsmooth, the system (<ref>) with (<ref>) is discussed in the Filippov sense <cit.>.
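To illustrate the behavior of algorithm (<ref>), the following minimal Python sketch (not part of the original paper) runs a forward-Euler discretization on a fixed connected ring graph with simple quadratic objectives f_i(x) = 0.5||x - a_i||^2; the graph, objectives, step size, and horizon are all illustrative assumptions. For this quadratic choice the team minimizer is the average of the a_i, so the printed team-gradient norm at the consensus point should approach zero.

```python
import numpy as np

# Forward-Euler simulation of the adaptive law (7); illustrative setup only.
n, m, dt, steps = 6, 2, 1e-3, 20000
rng = np.random.default_rng(0)
a = rng.normal(size=(n, m))                   # f_i(x) = 0.5*||x - a_i||^2
x = rng.normal(size=(n, m))                   # initial agent states
q = np.zeros((n, n))                          # adaptive gains, q_ij(0) = 0
edges = [(i, (i + 1) % n) for i in range(n)]  # fixed, connected ring graph

for _ in range(steps):
    u = -(x - a)                              # -grad f_i(x_i) term
    for i, j in edges:
        d = x[j] - x[i]
        nd = np.linalg.norm(d)
        if nd > 1e-9:                         # normalized attraction term
            u[i] += q[i, j] * d / nd
            u[j] -= q[j, i] * d / nd
            q[i, j] += dt                     # qdot_ij = sgn(||x_j - x_i||) = 1
            q[j, i] += dt
    x += dt * u

x_bar = x.mean(axis=0)
print("max distance from average:", np.linalg.norm(x - x_bar, axis=1).max())
print("team gradient at average:", np.linalg.norm((x_bar - a).sum(axis=0)))
```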
Suppose that the graph 𝒢(t) is undirected and connected for all t, that ∇ f_i(s) is continuous with respect to s for all i, and that Assumptions <ref> and <ref> hold. For system (<ref>) with algorithm (<ref>), all agents reach a consensus in finite time and minimize the team objective function (<ref>) as t→+∞.

Proof: First, we prove that all x_i(t) are bounded for all t. Under Assumption <ref>, from Lemma <ref>, we have that all X_i and X are nonempty closed bounded convex sets. It is clear that x_i(0)∈ Y, X⊂ Y and X_i⊂ Y for all i and some closed bounded set Y. Let Y be sufficiently large that, whenever x_i(t)∉ Y, f_i(x_i(t))-f_i(z) ≥ ∑_j=1,j≠ i^n [f_j(z)-f_j(z_j)] for all i, any z∈ X and all z_j∈ X_j. Construct a Lyapunov function candidate as V(t)=(1/2)∑_i=1^n ||x_i(t)-z||^2 for some z∈ X. Calculating V̇(t) along the solutions of system (<ref>) with (<ref>), we have

V̇(t) = ∑_i=1^n [x_i(t)-z]^T [ ∑_j∈ N_i(t) q_ij(t)[x_j(t)-x_i(t)]/||x_j(t)-x_i(t)|| - ∇ f_i(x_i(t)) ].

Since the graph 𝒢(t) is undirected, it follows that

∑_i=1^n [x_i(t)-z]^T ∑_j∈ N_i(t) q_ij(t)[x_j(t)-x_i(t)]/||x_j(t)-x_i(t)||
= ∑_i=1^n ∑_j∈ N_i(t) q_ij(t)[x_i(t)-z]^T [x_j(t)-x_i(t)]/||x_j(t)-x_i(t)||
= ∑_i=1^n ∑_j∈ N_i(t) { (q_ij(t)/2)[x_i(t)-z]^T [x_j(t)-x_i(t)]/||x_j(t)-x_i(t)|| + (q_ij(t)/2)[x_j(t)-z]^T [x_i(t)-x_j(t)]/||x_j(t)-x_i(t)|| }
= ∑_i=1^n ∑_j∈ N_i(t) (q_ij(t)/2)[x_i(t)-z-x_j(t)+z]^T [x_j(t)-x_i(t)]/||x_j(t)-x_i(t)||
= ∑_i=1^n ∑_j∈ N_i(t) (q_ij(t)/2)[x_i(t)-x_j(t)]^T [x_j(t)-x_i(t)]/||x_j(t)-x_i(t)||
≤ 0.

Since z∈ X, from the convexity of the function f_i, we have ∇ f_i(x_i(t))^T(z-x_i(t)) ≤ f_i(z)-f_i(x_i(t)). It follows that V̇(t) ≤ -∑_i=1^n [f_i(x_i(t))-f_i(z)]. If x_i_0(t)∉ Y for some i_0, we have f_i_0(x_i_0(t))-f_i_0(z) ≥ ∑_j=1,j≠ i_0^n [f_j(z)-f_j(z_j)] for all z_j∈ X_j, and hence V̇(t) ≤ -[f_i_0(x_i_0(t))-f_i_0(z)] + ∑_j=1,j≠ i_0^n [f_j(z)-f_j(z_j)] ≤ 0. This implies that all x_i(t) remain in Y. Note that each ∇ f_i(x_i(t)) is continuous with respect to x_i(t) for all i, X⊂ Y and Y is bounded. Thus, max{||x_i(t)||, ||∇ f_i(x_i(t))||} < ρ for all i and some constant ρ>0.

Next, we prove that all agents reach a consensus as t→+∞. Let 0<t_k1<t_k2<t_k+1,1<t_k+1,2 denote the contiguous switching times for all k∈{1,2,⋯} such that x_i(t)≠ x_j(t) for some two integers i,j∈ℐ and all t∈[t_k1,t_k2), and x_i(t)=x_j(t) for all i,j∈ℐ and all t∈[t_k2,t_k+1,1). Suppose that consensus is not reached as t→+∞ and that ∑_k=1^+∞(t_k2-t_k1)<+∞. It is clear that lim_k→+∞(t_k2-t_k1)=0. Moreover, from the dynamics of q_ij(t), we have that q_ij(t)<ρ_q for some constant ρ_q>0. Since each ∇ f_i(s) is bounded and x_i(t_k1^-)=x_j(t_k1^-) for all i,j and all k>1, where t_k1^- denotes the time just before t_k1, u_i(t) is bounded for all t∈[t_k1,t_k2), and hence 0 ≤ lim_k→+∞ max_t∈[t_k1,t_k2) ||x_i(t)-x_j(t)|| ≤ lim_k→+∞ ∫_t_k1^t_k2 (||u_i(s)||+||u_j(s)||) ds = 0 for all i,j. That is, consensus is reached as t→+∞, which yields a contradiction. Suppose instead that ∑_k=1^+∞(t_k2-t_k1)=+∞. Similar to the proof of Theorem 2 in <cit.>, it can be proved that all agents reach a consensus in finite time.

Summarizing the above analysis, consensus can be reached as t→+∞. Let x^*(t)=(1/n)∑_i=1^n x_i(t). Note that each ∇ f_i(x_i(t)) is continuous with respect to x_i(t). For any ϵ>0 there is a constant T>0 such that ||x_i(t)-x^*(t)||<ϵ and ||∇ f_i(x^*(t))-∇ f_i(x_i(t))||<ϵ for all t>T. Recall that ||x_i(t)||<ρ. Consider the Lyapunov function candidate V_1(t)=(1/2)||x^*(t)-P_X(x^*(t))||^2 for t>T.
Calculating V̇_1(t), we have

V̇_1(t) = -[x^*(t)-P_X(x^*(t))]^T (1/n)∑_i=1^n ∇ f_i(x_i(t))
= -[x^*(t)-P_X(x^*(t))]^T [(1/n)∑_i=1^n ∇ f_i(x^*(t)) + e(t)]
≤ -[(1/n)∑_i=1^n f_i(x^*(t)) - (1/n)∑_i=1^n f_i(P_X(x^*(t)))] + 2ρϵ,

where e(t) = (1/n)∑_i=1^n [∇ f_i(x_i(t))-∇ f_i(x^*(t))] satisfies ||e(t)|| ≤ ϵ, and where the inequality uses the convexity of the f_i and ||x^*(t)-P_X(x^*(t))|| ≤ 2ρ. Note that when (1/n)∑_i=1^n f_i(x^*(t)) - (1/n)∑_i=1^n f_i(P_X(x^*(t))) ≥ 4ρϵ, we have V̇_1(t) ≤ -2ρϵ. It follows that there exists a constant T_1>T such that (1/n)∑_i=1^n f_i(x^*(t)) - (1/n)∑_i=1^n f_i(P_X(x^*(t))) < 4ρϵ for t>T_1. Since ϵ can be arbitrarily small, it follows that lim_t→+∞ [(1/n)∑_i=1^n f_i(x^*(t)) - (1/n)∑_i=1^n f_i(P_X(x^*(t)))] = 0. It then follows from Lemma <ref> that the team objective function (<ref>) is minimized as t→+∞.

In <cit.>, a distributed algorithm was proposed to solve the optimization problem. However, it is required that a period of the previous information be used by each agent. In contrast to <cit.>, in this paper the previous information is not used; the current information is sufficient for the proposed algorithm to make all agents minimize the team objective function as time evolves.

§.§ Double-Integrator Multi-Agent Systems In this part, our goal is to extend the results in Subsection A to second-order multi-agent systems with the following dynamics

ẋ_i(t) = v_i(t), v̇_i(t) = u_i(t),

where x_i(t)∈ℝ^m and v_i(t)∈ℝ^m are the position and velocity states of agent i and u_i(t)∈ℝ^m is the control input. To solve the distributed optimization problem, we use the following algorithm

u_i(t) = -p v_i(t) + ∑_j∈ N_i(t) q_ij(t)[x_j(t)+(2/p)v_j(t)-x_i(t)-(2/p)v_i(t)] / ||x_j(t)+(2/p)v_j(t)-x_i(t)-(2/p)v_i(t)|| - ∇ f_i(x_i(t)+(2/p)v_i(t)),
q̇_ij(t) = sgn(||x_j(t)-x_i(t)||) if (i,j)∈𝒢(t), and q̇_ij(t)=0 otherwise,
q_ij(0)=q_ji(0)=0,

where p>0 is the feedback damping gain of the agents. Let v̅_i(t) = x_i(t) + 2v_i(t)/p. Then the system (<ref>) with (<ref>) can be written as

ẋ_i(t) = (p/2)v̅_i(t) - (p/2)x_i(t),
d v̅_i(t)/dt = -(p/2)v̅_i(t) + (p/2)x_i(t) + (2/p)∑_j∈ N_i(t) q_ij(t)[v̅_j(t)-v̅_i(t)]/||v̅_j(t)-v̅_i(t)|| - (2/p)∇ f_i(v̅_i(t)).

For convenience of expression, we assume m=1 in the proof of the following theorem. Suppose that the graph 𝒢(t) is undirected and connected for all t, that ∇ f_i(s) is continuous with respect to s for all i, and that Assumptions <ref> and <ref> hold. For system (<ref>) with algorithm (<ref>), all agents reach a consensus in finite time and minimize the team objective function (<ref>) as t→+∞.

Proof: Construct a Lyapunov function candidate as V(t) = (1/2)∑_i=1^n ||x_i(t)-s||^2 + (1/2)∑_i=1^n ||v̅_i(t)-s||^2 for some s∈ X. Let z(t)=[x_1(t)^T, v̅_1(t)^T, ⋯, x_n(t)^T, v̅_n(t)^T]^T, A = [p/2, -p/2; -p/2, p/2], B = [0, 0; 0, 2/p], and let Φ(t) be the matrix with entries [Φ(t)]_ij = -∑_k=1,k≠ i^n [Φ(t)]_ik if i=j; [Φ(t)]_ij = -q_ij(t)/(2||x_j(t)-x_i(t)||) if i≠ j and (i,j)∈ℰ(𝒢(t)); and [Φ(t)]_ij = 0 otherwise. Regarding A and Φ(t) as the Laplacians of certain undirected graphs, it follows from Lemma <ref> that -z(t)^T(I_n⊗ A)z(t) ≤ 0 and -z(t)^T[Φ(t)⊗ B]z(t) ≤ 0. Calculating V̇(t), we have

V̇(t) = -z(t)^T(I_n⊗ A)z(t) - z(t)^T[Φ(t)⊗ B]z(t) - ∑_i=1^n (2/p)(v̅_i(t)-s)^T ∇ f_i(v̅_i(t))
≤ -∑_i=1^n (2/p)||v_i(t)||^2 - (2/p)∑_i=1^n [f_i(v̅_i(t))-f_i(s)] - z(t)^T[Φ(t)⊗ B]z(t),

where the inequality uses the convexity of f_i(·). Then, by a similar approach to the proof of Theorem 1, it can be proved that all x_i(t) and v̅_i(t) remain in a bounded closed convex set, denoted by Y, for all t, such that X⊂ Y, X_i⊂ Y and x_i(0)∈ Y for all i. Note that each ∇ f_i(v̅_i(t)) is continuous with respect to v̅_i(t). Thus, max{||x_i(t)||, ||v̅_i(t)||, ||∇ f_i(v̅_i(t))||} < ρ for all i and some constant ρ>0. Next, we prove that all agents reach a consensus as t→+∞.
Let 0<t_k1<t_k2<t_k+1,1<t_k+1,2 denote the contiguous switching times for all k∈{1,2,⋯} such that x_i(t)≠ x_j(t) for some two integers i,j∈ℐ and all t∈[t_k1,t_k2), and x_i(t)=x_j(t) for all i,j∈ℐ and all t∈[t_k2,t_k+1,1). Suppose that consensus is not reached as t→+∞ and that ∑_k=1^+∞(t_k2-t_k1)<+∞. It is clear that lim_k→+∞(t_k2-t_k1)=0. Moreover, from the dynamics of q_ij(t), we have that q_ij(t)<ρ_q for some constant ρ_q>0. Note that max{||x_i(t)||, ||v̅_i(t)||}<ρ for all i and that x_i(t_k1^-)=x_j(t_k1^-) for all i,j and all k>1, where t_k1^- denotes the time just before t_k1. Hence 0 ≤ lim_k→+∞ max_t∈[t_k1,t_k2) ||x_i(t)-x_j(t)|| = 0 for all i,j. That is, lim_t→+∞[x_i(t)-x_j(t)]=0 for all i,j. Since x_i(t)=x_j(t) for all i,j∈ℐ and all t∈[t_k2,t_k+1,1), it follows from the dynamics of each agent that v_i(t)=v_j(t) for all i,j∈ℐ and all t∈(t_k2,t_k+1,1). Since q_ij(t)<ρ_q and max{||x_i(t)||, ||v̅_i(t)||, ||∇ f_i(v̅_i(t))||}<ρ for all i, it follows that each u_i(t) is bounded. Hence 0 ≤ lim_k→+∞ max_t∈[t_k1,t_k2) ||v_i(t)-v_j(t)|| ≤ lim_k→+∞ ∫_t_k1^t_k2 (||u_i(s)||+||u_j(s)||) ds = 0 for all i,j. Recall that x_i(t)=x_j(t) for all i,j and all t∈(t_k2,t_k+1,1). Clearly, V̇(t) ≤ -∑_i=1^n (2/p)||v_i(t)||^2 for all t∈(t_k2,t_k+1,1). Since q_ij(t)<ρ_q and max{||x_i(t)||, ||v̅_i(t)||, ||∇ f_i(v̅_i(t))||}<ρ for all i, V̇(t) is bounded for all t. Since ∑_k=1^+∞(t_k2-t_k1)<+∞, ∑_k=1^+∞∫_t_k1^t_k2 V̇(s)ds is bounded. Thus, V(t) is bounded for all t, and ∑_k=1^+∞∫_t_k2^t_k+1,1 V̇(s)ds ≤ -∑_k=1^+∞∑_i=1^n ∫_t_k2^t_k+1,1 (2/p)||v_i(s)||^2 ds is also bounded. This means that lim_k→+∞ max_t∈[t_k2,t_k+1,1] ||v_i(t)|| = 0. By an approach similar to the one used to prove that lim_t→+∞[x_i(t)-x_j(t)]=0 for all i,j, using the continuity of v_i(t), it can be proved that lim_t→+∞ v_i(t)=0 for all i. It follows from the definition of v̅_i(t) that lim_t→+∞[v̅_i(t)-x_i(t)] = lim_t→+∞[v̅_i(t)-v̅_j(t)] = 0 for all i,j.

Suppose instead that ∑_k=1^+∞(t_k2-t_k1)=+∞. Then from the dynamics of q_ij(t), there must exist a pair of agents, denoted by i_0≠ j_0, such that lim_t→+∞ q_i_0j_0(t)=+∞. In the following, we prove that there exists a pair of agents, denoted by i_1≠ j_1, such that (i_1,j_1)∉{(i_0,j_0),(j_0,i_0)}, i_1∈{i_0,j_0} and lim_t→+∞ q_i_1j_1(t)=+∞. If this is not true, we have q_ii_0(t)<γ_q and q_ij_0(t)<γ_q for some constant γ_q>max{ρ, pρ}, all t, and all i∈∪_s∈[0,+∞)[N_i_0(s)∪ N_j_0(s)] with i≠ i_0 and i≠ j_0. Since lim_t→+∞ q_i_0j_0(t)=+∞, for any γ_0>16nmγ_q there exists a sufficiently large constant T_0>0 such that q_i_0j_0(t)>γ_0 for all t>T_0. By simple calculations based on (<ref>), when (i_0,j_0)∈𝒢(t) and x_i_0(t)-x_j_0(t)≠ 0 for t>T_0, we have

d||v_i_0(t)-v_j_0(t)||/dt ≤ ([x_i_0(t)-x_j_0(t)]^T/||x_i_0(t)-x_j_0(t)||) 2q_i_0j_0(t)[x_j_0(t)-x_i_0(t)]/||v_i_0(t)-v_j_0(t)|| + 2nmγ_q ≤ -2nmγ_q.

When there exists at least one agent i such that i∈ N_ĩ(t) and x_ĩ(t)-x_i(t)≠ 0 for some ĩ∈{i_0,j_0}, and either (i_0,j_0)∉𝒢(t) or x_i_0(t)-x_j_0(t)=0 holds, we have d||v_i_0(t)-v_j_0(t)||/dt ≤ 2nmγ_q for t>T_0. Let T_0<t_k3<t_k4<t_k+1,3<t_k+1,4 denote the contiguous switching times for all k∈{1,2,⋯} such that the first case holds for all t∈[t_k3,t_k4) and the second case holds for all t∈[t_k4,t_k+1,3). Note that ||x_i_0(t)-x_j_0(t)||<2ρ. Calculating ||x_i_0(t)-x_j_0(t)|| based on Newton's law, we have that

0 ≤ ||x_i_0(+∞)-x_j_0(+∞)||
≤ ||x_i_0(T_0)-x_j_0(T_0)|| + ∑_k=1^+∞ 2nmγ_q(t_k+1,3-t_k4)^2 - ∑_k=1^+∞ nmγ_q(t_k4-t_k3)^2
≤ 2ρ + 2nmγ_q[∑_k=1^+∞(t_k+1,3-t_k4)]^2 - 0.25nmγ_q[∑_k=1^+∞(t_k4-t_k3)]^2.
Since lim_t→+∞ q_i_0j_0(t)=+∞, from the dynamics of q_ij(t) we have ∑_k=1^+∞(t_k4-t_k3)=+∞, and hence from (<ref>) we have ∑_k=1^+∞(t_k+1,3-t_k4)=+∞. That is, there exists a pair of agents i_1≠ j_1 such that (i_1,j_1)∉{(i_0,j_0),(j_0,i_0)}, i_1∈{i_0,j_0} and lim_t→+∞ q_i_1j_1(t)=+∞. Similarly, it can be proved that there exists a pair of agents i_2≠ j_2 such that (i_2,j_2)∉{(i_0,j_0),(j_0,i_0),(i_1,j_1),(j_1,i_1)}, i_2∈{i_0,j_0,i_1} and lim_t→+∞ q_i_2j_2(t)=+∞. By analogy, it can be proved that lim_t→+∞ q_ij(t)=+∞ for all i,j. Then there is a constant T_1>0 such that q_ij(t) is far larger than ρ for all i,j and all t>T_1. Since ||v̅_i(t)||≤ρ for all i,t, it follows from (<ref>) that ||∑_j∈ N_i(t) q_ij(t)[x_j(t)-x_i(t)]/||x_j(t)-x_i(t)|| || must be far smaller than min_j∈ N_i(t) q_ij(t) for t>T_1 and all i. Consider a group of agents E={i_1,⋯,i_q} such that x_i(t)∈co{x_i_1(t),⋯,x_i_q(t)} for all i and x_j(t)∉co{x_k(t) | k∈ℐ, k≠ j} for all j∈ E, where co denotes the convex hull operator. It is clear that [x_k(t)-x_i_0(t)]^T[x_j(t)-x_i_0(t)] / (||x_j(t)-x_i_0(t)|| ||x_k(t)-x_i_0(t)||) ≥ 0 for all i_0∈ E. If x_j(t)≠ x_i_0(t) for some j∈ N_i_0(t), we have ||∑_j∈ N_i_0(t) q_i_0j(t)[x_j(t)-x_i_0(t)]/||x_j(t)-x_i_0(t)|| || ≥ min_j∈ N_i_0(t) q_i_0j(t). This yields a contradiction. Thus, ∑_k=1^+∞(t_k2-t_k1)<+∞. Based on the above analysis, using an approach similar to the proof of Theorem <ref>, the team objective function (<ref>) is minimized as t→+∞.

§ SIMULATIONS Consider a multi-agent system consisting of 8 agents in a plane. The communication graph is switched among the connected subgraphs of the graph in Fig. <ref>. The local objective functions are f_1(x_1)=(1/2)x_11^2+(1/2)x_12^2, f_2(x_2)=(1/2)(x_21+2)^2+(1/2)x_22^2, f_3(x_3)=(1/2)x_31^2+(1/2)(x_32+2)^2, f_4(x_4)=(1/2)(x_41+2)^2+(1/2)(x_42+2)^2, f_5(x_5)=(1/4)x_51^4+(1/4)x_52^4, f_6(x_6)=(1/4)(x_61+2)^4+(1/4)x_62^4, f_7(x_7)=(1/4)x_71^4+(1/4)(x_72+2)^4 and f_8(x_8)=(1/4)(x_81+2)^4+(1/4)(x_82+2)^4, where x_i1 and x_i2 denote the two components of x_i. By simple calculations, when ∑_i=1^n ∇ f_i(s)=0, we have that s=[-1,-1]^T. From Lemma <ref>, the minimum set of the team objective function (<ref>) is {[-1,-1]^T}. The simulation results are shown in Figs. <ref> and <ref>. It can be observed that the team objective function (<ref>) is minimized as t→+∞, which is consistent with Theorems 1 and 2.
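As a quick numerical cross-check of the stationarity condition above (an illustrative sketch, not part of the original paper), one can solve ∑_i ∇ f_i(s)=0 for these eight objective functions with a few componentwise Newton iterations:

```python
import numpy as np

# Offsets of the local objectives: the quadratics f_1..f_4 and the quartics
# f_5..f_8 both use the offsets (0,0), (2,0), (0,2), (2,2) listed in the text.
offsets = [np.array(o, dtype=float) for o in [(0, 0), (2, 0), (0, 2), (2, 2)]]

def team_grad(s):
    # grad of 0.5*(s+o)^2 is (s+o); grad of 0.25*(s+o)^4 is (s+o)^3
    return sum((s + o) + (s + o) ** 3 for o in offsets)

def team_hess_diag(s):
    # second derivatives: 1 for the quadratics, 3*(s+o)^2 for the quartics
    return sum(1.0 + 3.0 * (s + o) ** 2 for o in offsets)

s = np.zeros(2)
for _ in range(50):                 # Newton's method, componentwise
    s -= team_grad(s) / team_hess_diag(s)
print(s)                            # converges to [-1. -1.]
```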
§ CONCLUSIONS In this paper, a distributed optimization problem with general differentiable convex objective functions was studied for single-integrator and double-integrator multi-agent systems. Two distributed adaptive optimization algorithms were introduced which use relative state information to construct the gains of the interaction terms. The analysis was performed based on Lyapunov functions, the analysis of the system solution, and the convexity of the local objective functions. It was shown that if the gradients of the convex objective functions are continuous, the team convex objective function is minimized as time evolves for both single-integrator and double-integrator multi-agent systems.

angelia A. Nedić, A. Ozdaglar, P. A. Parrilo, "Constrained consensus and optimization in multi-agent networks", IEEE Transactions on Automatic Control, vol. 55, no. 4, pp. 922-938, 2010. angelia1 A. Nedić and A. Ozdaglar, "Distributed subgradient methods for multi-agent optimization", IEEE Transactions on Automatic Control, vol. 54, no. 1, pp. 48-61, 2009. shi G. Shi, K. H. Johansson and Y. Hong, "Reaching an optimal consensus: dynamical systems that compute intersections of convex sets", IEEE Transactions on Automatic Control, vol. 58, no. 3, pp. 610-622, 2013. liu S. Liu, Z. Qiu and L. Xie, "Continuous-time distributed convex optimization with set constraints", in Proceedings of the IFAC World Congress, vol. 29, pp. 9762-9767, 2014. linren3 P. Lin, W. Ren and J. A. Farrell, "Distributed continuous-time optimization: nonuniform gradient gains, finite-time convergence, and convex constraint set", IEEE Transactions on Automatic Control, 2017, available online. Zhu M. Zhu and S. Martínez, "An approximate dual subgradient algorithm for multi-agent non-convex optimization", IEEE Transactions on Automatic Control, vol. 58, no. 6, pp. 1534-1539, 2013. Johansson B. Johansson, T. Keviczky, M. Johansson, and K. H. Johansson, "Subgradient methods and consensus algorithms for solving convex optimization problems", in Proceedings of the IEEE Conference on Decision and Control, 2008, pp. 4185-4190. lu J. Lu, C. Y. Tang, P. Regier and T. D. Bow, "Gossip algorithms for convex consensus optimization over networks", IEEE Transactions on Automatic Control, vol. 56, no. 12, pp. 2917-2923, 2011. lup J. Lu and C. Y. Tang, "Zero-gradient-sum algorithms for distributed convex optimization: the continuous-time case", IEEE Transactions on Automatic Control, vol. 57, no. 9, pp. 2348-2354, 2012. Kvaternik K. Kvaternik and L. Pavel, "A continuous-time decentralized optimization scheme with positivity constraints", in Proceedings of the IEEE Conference on Decision and Control, pp. 6801-6807, 2012. Elia J. Wang and N. Elia, "A control perspective for centralized and distributed convex optimization", in Proceedings of the IEEE Conference on Decision and Control, pp. 3800-3805, 2011. cotes B. Gharesifard and J. Cortés, "Distributed continuous-time convex optimization on weight-balanced digraphs", IEEE Transactions on Automatic Control, vol. 59, no. 3, pp. 781-786, 2014. Cortes3 S. S. Kia and J. Cortés, "Distributed convex optimization via continuous-time coordination algorithms with discrete-time communication", Automatica, vol. 55, pp. 254-264, 2015. linren1 P. Lin and W. Ren, "Distributed shortest distance consensus problem in multi-agent systems", in Proceedings of the IEEE Conference on Decision and Control, 2012, pp. 4696-4701. linren2 P. Lin, W. Ren, Y. Song and J. A. Farrell, "Distributed optimization with the consideration of adaptivity and finite-time convergence", in Proceedings of the American Control Conference, 2014, pp. 3177-3182. zhao Y. Zhao, Y. Liu, G. Wen and G. Chen, "Distributed optimization of linear multi-agent systems: edge- and node-based adaptive designs", IEEE Transactions on Automatic Control, available online. boyd S. Boyd and L. Vandenberghe, Convex Optimization, Cambridge University Press, 2004. Fili A. F. Filippov, Differential Equations with Discontinuous Righthand Sides. Amsterdam, The Netherlands: Kluwer Academic, 1988. s10 C. Godsil and G. Royle, Algebraic Graph Theory. New York: Springer-Verlag, 2001. | http://arxiv.org/abs/1703.08896v1 | {
"authors": [
"Peng Lin",
"Wei Ren"
],
"categories": [
"math.OC",
"OC.org"
],
"primary_category": "math.OC",
"published": "20170327014247",
"title": "Distributed Adaptive Gradient Optimization Algorithm"
} |
Los Alamos National Laboratory, Los Alamos, NM, USA <http://orcid.org/0000-0003-0130-2097> [email protected]
Los Alamos National Laboratory, Los Alamos, NM, USA <http://orcid.org/0000-0002-0715-6126> [email protected]

Archival efforts such as (C)LOCKSS and Portico are in place to ensure the longevity of traditional scholarly resources like journal articles. At the same time, researchers are depositing a broad variety of other scholarly artifacts into emerging online portals that are designed to support web-based scholarship. These web-native scholarly objects are largely neglected by current archival practices and hence they become scholarly orphans. We therefore argue for a novel paradigm that is tailored towards archiving these scholarly orphans. We are investigating the feasibility of using Open Researcher and Contributor ID (ORCID) as a supporting infrastructure for the process of discovery of web identities and scholarly orphans for active researchers. We analyze ORCID in terms of coverage of researchers, subjects, and location, and assess the richness of its profiles in terms of web identities and scholarly artifacts. We find that ORCID currently lacks in all considered aspects and hence can only be considered in conjunction with other discovery sources. However, ORCID is growing fast, so there is potential that it could achieve a satisfactory level of coverage and richness in the near future.

Discovering Scholarly Orphans Using ORCID Herbert Van de Sompel December 30, 2023 =========================================

§ INTRODUCTION Over the past two decades, research communication has transitioned from a paper-based endeavor to a web-based digital enterprise. More recently, the research process itself has started to evolve from being a largely hidden activity to one that becomes plainly visible on the global network. To support researchers in this process, a wide variety of online portals have emerged which largely exist outside the established scholarly publishing system. These portals can be dedicated to scholarship, such as <experiment.org>, or general purpose, such as <SlideShare.net>. The "101 Innovations in Scholarly Communication" project[<https://101innovations.wordpress.com/>] provides a first-of-its-kind overview of such platforms. The large number of readily available web portals prompts some to even argue that there are too many of them, leading to decision fatigue <cit.>. Regardless, the potential of increased productivity and global exposure attracts researchers, and so they happily deposit scholarly artifacts there. However, history has shown that even popular web platforms can disappear without a trace. To make matters worse, they rarely provide any explicit archival guarantees; many times quite the opposite. Whereas initiatives such as LOCKSS[<https://www.lockss.org/>] and Portico[<http://www.portico.org/digital-preservation/>] have emerged to make sure that the output of the established scholarly publishing system gets archived, to the best of our knowledge, no comparable efforts exist for scholarly artifacts deposited in these online platforms. We are therefore motivated to explore how these scholarly artifacts deposited in online portals could be archived.

§.§ Current Archival Paradigm To a large extent, the paradigm that underlies current approaches to capture and archive web-based scholarly resources has its origin in the paper-based era.
It can be characterized as a back-office procedure in which the owner of a scholarly object decides when to hand over a finalized and atomic object to a custodian that will take care of its long-term preservation. The transfers by a publisher of its journals to Portico and the upload of an article by its author into an Institutional Repository are examples of such procedures. However, we see several signs indicating that this paradigm's capture approach is failing even for journal articles, the most traditional of scholarly resources. David Rosenthal, amongst others, has reported that a significant portion of journal articles does not make it into an archive, and several reasons can be attributed to that <cit.>. He observes, for example, an apparent focus on articles that are technically not too complex to capture and on those published by large publishers. To make matters worse, this traditional paradigm insufficiently accounts for the fact that journal articles no longer exhibit inherent fixity but rather are "living things" with versions. It also does not incorporate attempts to capture web content that is directly related to journal articles, i.e., web resources linked from these articles <cit.>. The reason for this failure is probably the fact that journal articles are largely still regarded as static atomic objects, despite the overwhelming evidence that they have become dynamic and firmly embedded in the web.

§.§ Exploring a Novel Paradigm We postulate that a paradigm that fails for the most traditional scholarly outputs is highly likely to fail when novel, web-native scholarly objects used in research communication and the research process are at stake. Such objects include all sorts of scholarly artifacts deposited in web portals, such as slide decks, videos, simulations, software, workflows, and ontologies. Since these web-native scholarly objects are largely neglected by the current archival paradigm <cit.>, we refer to them as scholarly orphans. They also have dramatically different characteristics than traditional articles or monographs in that they are compound (aggregations of related resources), dynamic (versioning), interdependent, distributed across the web <cit.>, and created at another scale altogether. We therefore argue for a new archival paradigm. We envision an archival paradigm inspired by web archiving concepts that is web-centric in order to cope with the scale of the problem, both in terms of the number of platforms and the number of artifacts involved. Because the artifacts are oftentimes created by researchers affiliated with an institution, we assume that these institutions are interested in collecting the artifacts. Therefore, and for the sake of efficiency and scale, we explore a new archival paradigm built around highly automated web-scale processes operated on behalf of a scholarly institution.

§.§ Outline of a Novel Archival Paradigm A conceptual view of the high-level processes in our paradigm under exploration is depicted in Figure <ref>. * The first step is to discover the web identities of institutional scholars in various online portals, such as SlideShare handles, FigShare names, etc. This can either be achieved with an algorithmic approach, for example, by using web discovery on the basis of metadata about the scholar <cit.>, or by means of registries that list researcher profiles, such as ORCID. * The second step, which builds on the web identities discovered in step 1, is to discover actual artifacts created or contributed to by the scholar. The discovery of the artifacts on the basis of
The discovery of the artifacts on the basis ofthose web identities largely depends on the functionality of the portal. One option is to subscribe to the portal's notification service that, if available, sends messages whenever new objects are created. An alternative is to recurrently visit a registry e.g., a list of artifacts that indexes the scholar's artifactsdeposited in the portal. If neither of these options are available, an algorithmic approach could alsobe deployed here. * Several resources, each with their distinct URI, may pertain to any given artifact. As such, inorder to capture the entire artifact, its web boundary - the list of all URIs that pertain to theartifact - must be determined. This can either be done in an algorithmic manner, which requiresextensive portal-specific heuristics <cit.> or by means of information explicitlyexposed by the portals in manners proposed by, for example, Signposting[<http://signposting.org/>]and OAI-ORE[<http://www.openarchives.org/ore/1.0/datamodel>] <cit.>. * The final step in the process is the capture of discovered artifacts, that is, capture allURIs that are within the web boundary of the artifact. A variety of tools haveemerged from the web archiving community that could be used for the capture such asHeritrix[<https://webarchive.jira.com/wiki/display/Heritrix>],Brozzler[<https://github.com/internetarchive/brozzler>],Webrecorder[<https://webrecorder.io/>], and iCrawl <cit.>. that can be deployed here. To accommodate concerns regarding the quality and trustworthiness ofcaptures, this step can also include a capture quality evaluation and a capture authenticityverification.§.§ ORCID A detailed analysis of all of the components of these processes outlined above is beyond thescope of this paper. The focus of this paper is on determining whether Open Researcher andContributor ID (ORCID), a rapidly growing database of scholarly web identities (ORCIDs) andassociated profiles can play a role in the archival paradigm that we explore and that isdepicted in Figure <ref>. The ORCID database has become increasingly popular since its inception in 2012. At the timeof writing it registers just over three million profiles. Its core motivation was to solve theissue of name disambiguation and provide a platform for the unique identification of contributorsto scholarly work <cit.>. Scholars are motivated to create, populate, and maintaintheir profile to advertise their accomplishments and gain credit for them.Publishers and funding agencies are also recognizing the merit of ORCIDs and have begun to mandatetheir inclusion in papers and project proposals.However, in the greater picture of scholarly communication the ORCID platform has the potentialto emerge as crucial infrastructure to unambiguously bind scholars to their work. In addition,implementations emerge that allow researchers to authenticate against scholarlyportals with their ORCID and use the same identity in many different platforms. Consequently,opportunities arise to bind a researcher's scholarly web identity to other web identities. 
§.§ ORCID A detailed analysis of all of the components of the processes outlined above is beyond the scope of this paper. The focus of this paper is on determining whether Open Researcher and Contributor ID (ORCID), a rapidly growing database of scholarly web identities (ORCIDs) and associated profiles, can play a role in the archival paradigm that we explore and that is depicted in Figure <ref>. The ORCID database has become increasingly popular since its inception in 2012. At the time of writing it registers just over three million profiles. Its core motivation was to solve the issue of name disambiguation and provide a platform for the unique identification of contributors to scholarly work <cit.>. Scholars are motivated to create, populate, and maintain their profile to advertise their accomplishments and gain credit for them. Publishers and funding agencies are also recognizing the merit of ORCIDs and have begun to mandate their inclusion in papers and project proposals. However, in the greater picture of scholarly communication, the ORCID platform has the potential to emerge as crucial infrastructure to unambiguously bind scholars to their work. In addition, implementations emerge that allow researchers to authenticate against scholarly portals with their ORCID and use the same identity in many different platforms. Consequently, opportunities arise to bind a researcher's scholarly web identity to other web identities.

We believe that the ORCID platform has enormous potential to play a core role as a web identity registry in step 1 and as an artifact registry in step 2 of the archival paradigm (top two boxes in Figure <ref>) that we explore. However, in order for ORCID to be able to play such a role, the platform must have substantial coverage of active scholars and rich scholar profiles. We investigate the suitability of ORCID for this purpose, and to make this assessment we ask the following research questions: * Does the ORCID platform represent the broadest possible coverage of researchers, in absolute numbers, coverage of subjects, and coverage per geographical area? (RQ1) * Are ORCID profiles rich with information about the scholar that is useful for our cause, as well as with web identities and artifacts? (RQ2) Addressing these two questions (RQ1 and RQ2), combined with offering insight into the evolution of ORCID adoption and ORCID profiles over time, is the main contribution of this paper. We conduct a study to evaluate ORCID records over time to assess whether trends support our intuition that ORCIDs could be leveraged in steps 1 and 2 of our archival paradigm.

§ RELATED WORK Given the novel and exploratory nature of this work, to the best of our knowledge, there are no comparable efforts in this realm that are addressing the same issues. However, web-centric archiving of scholarly resources is not a novel concept. The Lots of Copies Keep Stuff Safe (LOCKSS) program is built on open source peer-to-peer technology <cit.> to focus on preserving scholarly content for long-term access. The recent work by Van de Sompel, Rosenthal and Nelson <cit.> outlines a multitude of problems in this regard, for example, the fact that e-journal preservation systems have to spend a lot of time and effort on developing crawlers that grab articles from publishers' websites. This is a time-consuming and hence expensive endeavor that requires a lot of expert knowledge about a publisher's website structure, especially when dealing with the long tail of smaller publishers. The LOCKSS system is relevant to our paradigm but not directly comparable, since we are targeting scholarly orphans, artifacts that are neglected by existing archival approaches. The EgoSystem <cit.> developed at the Los Alamos National Laboratory was designed to discover web identities of the lab's postdoctoral students. It used basic information about the student, such as name, degree-awarding institution, and the student's field of study, as the seed to search for web identities via the Yahoo! search engine and in a pre-defined list of social and academic web portals. In its initial phase, EgoSystem targeted web identities within Microsoft Academic, LinkedIn, Twitter, and SlideShare but also searched for personal homepages and Wikipedia articles. Not only did EgoSystem successfully return a list of web identities, it also kept a record of search results and learned additional associations with every new query. Northern and Nelson <cit.> developed an unsupervised approach to discover web identities on social media sites. The discovery phase was based on queries to search engines and to social media sites directly with the name of an individual as well as with variations of the name. The process also included a disambiguation step that was based on comparing key features extracted from discovered candidate profiles. Both systems are related to our approach as they offer approaches for the algorithmic discovery of web identities, even if the motivation to do so was different from ours.
It is worth noting that services essential to the operation of both systems are no longer available. For example, the Yahoo! Search API as well as the Microsoft Academic API have been discontinued.

§ EXPERIMENT SETUP The ORCID organization publishes high-level statistics and updates them on a regular basis[<https://orcid.org/statistics>]. Amongst the statistics are the total number of ORCIDs, the number of ORCIDs with at least one "work" (reference to a publication, dataset, patent, or other research output), and employment as well as education activities. The ORCID organization has been providing data dumps of all records and all publicly available information within these records once a year since 2013. We were therefore able to download all available datasets of ORCID records from 2013 <cit.>, 2014 <cit.>, 2015 <cit.>, and 2016 <cit.>. Table <ref> summarizes the size of the obtained ORCID datasets and the number of ORCID records they contain. Each dataset represents a snapshot of ORCID records at a particular point in time. For example, the 2016 dataset contains all records as of October 1st, 2016. The datasets contain two serializations for each ORCID record, one in XML and one in JSON format, and we chose to work from the JSON files.

§.§ Data Preparation and Enrichment for ORCID Coverage (RQ1) To approach RQ1, we investigate ORCID coverage in terms of absolute numbers of researchers, in terms of subjects, and in terms of geographical coverage. To do so, we extract particular data from the ORCID profiles. All we need to assess the coverage of the number of researchers is the total number of ORCID records in a dataset. This data is available from Table <ref>. In order to evaluate the geographical coverage of ORCID records, we extract the most recent affiliation information from all profiles. This data not only comes with the name of the institution but also with its location. We are therefore able to map the distribution of locations (at country granularity) from ORCID profiles. To determine the subject coverage, however, a more elaborate data preparation process is needed. We first extract all available information about scholars' works, in particular, the name of the author(s), the title, the publication year, and, if provided, the work's DOI. Since the works records in ORCID profiles do not contain subject information, we need to acquire this information from another source. The CrossRef Metadata Search API[<https://github.com/CrossRef/rest-api-doc/blob/master/rest_api.md>] returns metadata about DOI-identified scholarly objects such as title, author, publisher, and license information. In addition, it provides a set of subject terms describing the work and its field of study. These subject terms are provided by the publisher and therefore not all of them necessarily adhere to the same ontology. However, it is not unreasonable to assume that individual publishers use the same set of subject terms for all their papers.
For example, two papers in the area of high-energy physics that are published by the same publisher are both very likely to be assigned the subject "Physical Science". We utilize this service, query all extracted DOIs against the API, and extract the returned subject terms for each work. To unify the results, we are in need of a standardized set of subjects. Fortunately, the Classification of Instructional Programs (CIP) published by the Institute of Education Sciences' National Center for Education Statistics[<https://nces.ed.gov/ipeds/cipcode/Default.aspx>] offers just that. The CIP provides a taxonomy that is made up of 47 high-level subjects (each having multiple finer-granularity subjects) that maps the most common fields of study. We can therefore match our subject terms obtained from CrossRef against the CIP subjects. The matching is based on simple word comparison after minor pre-processing, such as transforming all strings to upper case and ignoring trailing quantifiers such as "Other" and "General". For example, we transformed the CIP subject "Agricultural Business and Management, Other" into simply "Agricultural Business and Management". To decrease the granularity of subjects, we bin all matches of a lower-level subject into the highest-level subject. For example, if a DOI matches the lower-level subject "Agricultural Business and Management" (CIP code 01.0199), it is binned into the highest-level subject "Agriculture, Agriculture Operations, and Related Sciences" (CIP code 01).
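A minimal sketch of this lookup-and-matching step (illustrative only; it assumes the public CrossRef REST API at api.crossref.org, whose responses have carried the publisher-supplied terms in the message.subject field, and a locally prepared dictionary of normalized CIP names):

```python
import requests

def crossref_subjects(doi):
    """Fetch publisher-supplied subject terms for a DOI from the CrossRef API."""
    r = requests.get("https://api.crossref.org/works/" + doi, timeout=30)
    if r.status_code != 200:
        return []
    return r.json().get("message", {}).get("subject", [])

def normalize(term):
    """Upper-case and drop trailing quantifiers such as 'Other' and 'General'."""
    t = term.upper().strip()
    for suffix in (", OTHER", ", GENERAL"):
        if t.endswith(suffix):
            t = t[: -len(suffix)]
    return t

def match_cip(terms, cip):
    """Map normalized subject terms to their 2-digit (highest-level) CIP codes."""
    return sorted({cip[normalize(t)] for t in terms if normalize(t) in cip})

# Hypothetical sample data; a real run loads all 47 top-level CIP subjects
# and iterates over every DOI extracted from the ORCID works records.
cip = {"AGRICULTURAL BUSINESS AND MANAGEMENT": "01", "EDUCATION": "13"}
print(match_cip(crossref_subjects("10.1234/example"), cip))  # placeholder DOI
```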
§.§ Data Preparation and Enrichment for Richness of Profiles (RQ2) RQ2 aims at investigating the richness of ORCID profiles in terms of web identities, artifacts of interest to our archiving paradigm, and other information they contain about the scholar. The data preparation processes here are fairly straightforward. We extract data on web identities as well as other information about the scholar (for example, given and family name, affiliation) from the metadata section of each profile. To assess the suitability of the web identities for our purpose, we extract and analyze their associated labels. We further obtain the type information for all artifacts in order to evaluate whether they are in scope for our new archival paradigm. It is worth noting that the information in an ORCID profile can be subject to access restrictions if the owner chooses to establish them. However, after an email exchange with the ORCID customer support, we can confirm that the majority of the data we are interested in is publicly accessible. For example, 88.6% of works, 96.2% of names, and 87.0% of affiliations do not have any access restrictions.

§ ORCID COVERAGE (RQ1) To address RQ1, we investigate to what extent ORCID covers a broad spectrum of researchers, subjects, and geographical locations.

§.§ Coverage of ORCID in Absolute Numbers Our first coverage-related investigation concerns the raw numbers of ORCID records and how they compare to the total number of researchers worldwide. The latest UNESCO Science Report, published in 2015 <cit.>, states that in 2013 there were 7,758,900 researchers worldwide. As shown in Table <ref>, even the largest ORCID dataset, from 2016, holds 2,528,933 profiles, only about one third of the total number of researchers. The UNESCO report also provides the total number of researchers in the U.S. only. For the year 2013 this number is 1,265,100. In comparison, by extracting metadata from the 2016 dataset, we find a total of only 112,577 ORCID profiles that list their most recent affiliation as located in the U.S., which equals 8.9%. Neither the comparison worldwide nor the one specific to the U.S. indicates that ORCID has a representative coverage of researchers in absolute numbers. It is worth noting, though, that the number of ORCID records is growing at a faster pace than the number of researchers. As shown in Table <ref>, the increase of profiles is initially very steep, with more than 2.5 times as many records in 2014 as in 2013. The increases of 74% in 2015 and 59% in 2016 are still significant compared to the respective previous years. Worldwide, the growth in the number of researchers has been between 5% and 7% since 2007. If these trends continue, there is potential for ORCID to achieve full coverage by 2020.

§.§ Subjects Covered by ORCID The second part of our ORCID coverage analysis focuses on the coverage of research subjects. We obtain data about the number of recipients of doctorate degrees as well as the number of scientific publications as a proxy to assess subject coverage. First, in order to assess the subject distribution of ORCIDs, we need to compute the subjects covered by each individual scholar with an ORCID identity on the basis of the CIP terms obtained via her DOI-identified artifacts, as described above. If an ORCID has only one DOI associated with it that matches against one CIP term only, this ORCID provides a score of 1 to the matched subject. However, it is entirely possible that one publication falls into multiple areas of study, is associated with multiple subject terms from the publisher, and hence is matched against more than one CIP subject. In this case we distribute the subject score for that DOI accordingly. For example, if a DOI matches the two subjects "Agriculture, Agriculture Operations, and Related Sciences" and "Education" (CIP code 13), both of these subjects get a score of 0.5. The sum of the matches per DOI is always 1, so if a DOI matches three subjects, each receives a score of 1/3. We aggregate all scores per subject and rank them in decreasing order of their scores. To assess the distribution of subjects for all DOI-identified artifacts contributed by a single researcher, we aggregate the individual DOI scores per ORCID.
The report classifies all recipients' disciplines into subjects that are very similar to the CIP subjectswe used and hence can easily be compared.We take the relative numbers of recipients by subject and compare this data to the relative score distributionof subjects derived from publications in ORCID records. We further obtain the total numbers of scientific publications in the U.S. in 2014 from the same UNESCO Science Report <cit.> mentioned earlier. Similar to the NSF data described above, this report also classifies all publications into subjects thatare very similar to the CIP subjects we used. We extract the numbers from the UNESCO report and computethe relative numbers of publications by subject. Note that the UNESCO report does not maintain specificdata for the fields of “Education” and “Humanities and Arts”. It is likely that publications fromthese areas are binned into the generic “Other” category and hence prohibits a comparison for them. Figure <ref> shows the results of comparing the above data with subject data derived from ORCID profiles from the 2016 dataset.The first thing that immediately becomes apparent is that the ORCID-specific data (in blue) and theUNESCO publication data (in red) are very similar. This seems to indicate that ORCIDs mirror the scientific publication landscape fairly well.In terms of specific subjects, we note that “Life Sciences” holds the top spot across all rankings.The percentage of doctoral degrees awarded, indicated in green, however, is less than half thatof the ORCID-specific data and of the UNESCO publication data. Our interpretation of this finding is thatthere are proportionally many more life science researchers represented in ORCID than in thereal world. We observe a similar pattern of over-representation of ORCID records for the area of “Physical Sciences” compared to the fraction of Ph.D. researchers.On the other hand, the fields of “Engineering”, “Psychology and Social Sciences”, “Education”, and“Humanities and Arts” seems to be under-represented in ORCID records. The fraction of doctorate recipientsin this area is much greater than the fraction of ORCID subjects.It is important to note that Figure <ref> conveys relative numbers. This means that even though for a subject such as “Mathematics and Computer Sciences” the numbers are proportional, in terms of absolute numbers, as shown in Section <ref>, ORCID still needs to catch up. §.§ ORCID Subjects over Time The results from the previous section raise the question whether the ORCID subject distribution is stable over time. If we saw significant movement in subject distribution over time, we could argue that the subject coverage is likely to change in the future.Figure <ref> shows ORCID subject distributions for all four datasets. From 2013 (Figure <ref>) on we can see a clear dominance of the medical fields.Three out of the top four subjects are from the medical area with “Biological and Biomedical Science”in the lead with around 35%. The subject “Physical Science” comes in second with 18% followedby “Health Professions and Related Programs” and “Residency Programs” third and fourth with eacharound 10%. Together, the three medial fields make up for more than 53% of all scores, whichunderlines their dominance. Other sciences such as engineering and mathematics get only around 5%of the scores and other disciplines, for example, education, history and the performing arts get veryfew scores and therefore land at the tail end of the graph. 
Figure <ref> shows the subject ranking for the 2014 dataset and also highlights the changes in the ranking compared to the previous year. Subjects represented by blue bars have an unchanged rank compared to the previous year. Subjects with a green bar have climbed up the ranking, and a red bar indicates a drop in the ranking. We see the top subjects mostly unchanged in both ranking and percentage of scores. Somewhat surprisingly, “Social Science”, “Education”, and “History” gained higher ranks whereas “Computer and Information Science” dropped. Figures <ref> and <ref> show the distribution of subjects for the 2015 and 2016 datasets, respectively. It is worth noting that “Social Science” and “Education” climbed yet again in the rankings in 2015 and that “Natural Resources and Conservation” jumped up the ranking by three spots in 2016. All graphs in Figure <ref> confirm that ORCID records are dominated by the medical fields and the physical sciences. They also show that there has been no change in the top subject ranks since the first available dataset in 2013. Figure <ref> does not show a lot of change in the subject distributions and hence does not indicate that an improved subject coverage can be expected in the near future.

§.§ Geographical Coverage of ORCID Records

In order to gain insight into the global coverage of researchers from a geographical point of view, we extract the location information of the most recent affiliation per ORCID record. Table <ref> lists the top 20 locations by country code. We can see that U.S. affiliations dominate the datasets, with two European countries (Great Britain and Spain) ranked second and third. The fact that China is only ranked fourth is surprising and indicates a much lower adoption rate there than elsewhere in the world. Brazil and India follow in the rankings. The 2015 UNESCO Science Report <cit.> provides data on the world shares of researchers for selected countries in 2013. The numbers are interesting as, for example, the ORCID representation for the U.S. (16.9% and 17.1%) is almost identical to the number reported by UNESCO (16.7%). China, on the other hand, seems to be under-represented in the ORCID index, where we only see 5.6% compared to 19.1% reported by UNESCO. The same seems to hold true for Japan, Russia, and Germany. The numbers for other countries in the report such as the United Kingdom (3.3%), India (2.7%), and Brazil (2.0%) are lower compared to what we find in the ORCID profiles. These results indicate that the geographical coverage of ORCID records does not fully mirror the worldwide picture. In relative terms, the numbers for the U.S. are comparable, but China and Japan are significantly under-represented. Other countries such as the United Kingdom, India, and Brazil appear to be over-represented in ORCID.

§ RICHNESS OF ORCID PROFILES (RQ2)

To address RQ2, we now investigate the richness of ORCID profiles. For our paradigm, profiles are rich when they contain web identities, further profile information about the scholar, as well as artifacts of potential interest to our efforts. We examine web identities contained in ORCID profiles as they may lead to the discovery of in-scope artifacts in web portals where these identities were ultimately minted.
We further consider additional profile information that, using an algorithmic approach (see Section <ref>), may help facilitate the unveiling of web identities, which in turn may again help surface artifacts of interest. Lastly, we analyze extracted artifacts as they may be orphans that are subject to archiving under our novel paradigm. ORCID records contain several metadata fields that are relevant for this investigation. The values of the fields “Given Name”, “Family Name”, and “Affiliations” (previously used for the geographical coverage assessment) can jointly be used to discover web identities with an algorithmic approach, as shown previously <cit.>. The field “External URIs” represents URIs that lead to web identities such as personal homepages or a scholar's Twitter or LinkedIn page. The artifacts are extracted from the section in the ORCID profile called “Works”. As a first step to evaluate the richness of ORCID profiles, we are interested in the number of ORCIDs that actually contain the desired information. Figure <ref> summarizes our findings. Almost all ORCIDs contain a given and a family name, but no affiliations are recorded in the 2013 and 2014 datasets. We notice a slow but steady increase of the number of ORCIDs with works (19.3% in 2016), affiliations (26.0% in 2016), and web identities (6.4% in 2016).

§.§ Richness of Web Identities

As seen in Figure <ref>, a small percentage of ORCID records contain web identities. Nevertheless, we are interested in extracting them and analyzing their type as they may lead to the discovery of artifacts of interest. Each web identity in an ORCID profile has a type associated with it. Unfortunately, this type field lacks a controlled vocabulary, which makes this data very hard to interpret. Table <ref> lists the top 20 web identity labels from the ORCID profiles of the 2016 dataset. We immediately observe the vocabulary problem as there are five different labels that describe presumably the same thing, a personal website (Personal Website, Homepage, Home Page, Personal, Personal Webpage). The label issue aside, these references to personal websites are of interest to us as they potentially are artifact registries. Most likely, the majority of them have a different structure, so extracting information would require additional programmatic intelligence. Further, we recognize expected web identities such as LinkedIn, which is the most frequently found one, and Twitter. However, even these identities suffer from the vocabulary problem as, for example, both “LinkedIn” and “LinkedIn Profile” make it into the list of the top 20. We extracted other anticipated web identities such as SlideShare and FigShare, but they are ranked 137th and 198th, respectively, and hence did not make it into Table <ref>. The fact that web identities are not particularly common in ORCID profiles (see Figure <ref>), combined with the label vocabulary problem for those that are available, makes us conclude that the richness of web identities required for our archival paradigm is not apparent. It is worth noting, though, that the web identity labels may not be essential to extract and interpret web identities. If an archival tool is aware of base URIs of web portals, it could potentially match the identities regardless of their labels.
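A minimal sketch of such label-agnostic matching follows; the base-URI registry below is an illustrative example of ours, not an exhaustive or authoritative list.

```python
from urllib.parse import urlparse
from typing import Optional

# Illustrative base-URI registry; a real tool would maintain a curated list.
PORTAL_HOSTS = {
    "linkedin.com": "LinkedIn",
    "twitter.com": "Twitter",
    "slideshare.net": "SlideShare",
    "figshare.com": "FigShare",
}

def classify_identity(external_uri: str) -> Optional[str]:
    """Map an ORCID 'External URI' to a known portal, ignoring its free-text
    label, by comparing the URI's host against registered base URIs."""
    host = urlparse(external_uri).netloc.lower()
    for base, portal in PORTAL_HOSTS.items():
        if host == base or host.endswith("." + base):
            return portal
    return None  # e.g. a personal homepage, or an unregistered portal

print(classify_identity("https://www.linkedin.com/in/some-scholar"))  # LinkedIn
print(classify_identity("https://example.org/~scholar"))              # None
```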
§.§ Richness of Artifacts

We extract information about artifacts from ORCID profiles by looking at records of works. Figure <ref> shows that only a minority of ORCID records actually contains information about a scholar's work; in fact, less than one in five ORCID records contain such data. As a first result of this investigation, this does not suggest the desired level of richness in ORCID profiles. Each work entry we do extract, however, contains a label that conveys the type of the work. This label enables a high-level disambiguation of the work and hence can help with the scoping of an artifact for our archiving paradigm. If the label, for example, conveys that a particular work is a publication of type “journal article”, we can, with some level of confidence, say that this work is out of scope for our approach as it stands a good chance of being covered by existing alternative archiving approaches such as LOCKSS, CLOCKSS, or Portico, which are specialized in archiving journal articles. Table <ref> summarizes the top ten work types over time. The dominance of journal articles is apparent for all four ORCID datasets. Conference papers as well as books and book chapters seem to be gaining in importance in the more recent past but still fade in comparison. It is important to note that the sort of artifacts that most likely would be in scope for our archiving paradigm are not well represented in ORCID records. For example, the type of work labeled “Scholarly Project” is ranked 20th in 2013 and “Artistic Performance” is ranked 24th in 2016. The type “Other” may represent artifacts we are potentially interested in, but since the label is very ambiguous, these artifacts will need further evaluation. Given the rather low percentage of ORCID profiles containing works, plus the fact that none of the top-ranked work types are in scope for us, we realize that ORCID profiles lack the desired level of richness of artifacts. We hence conjecture that, at this moment, the ORCID platform is not a good fit for step 2 in our high-level processes outlined in Figure <ref>.

§ CONCLUDING REMARKS

We propose a novel archiving paradigm that is aimed at archiving web-based scholarly orphans. The first and second steps in this paradigm (Figure <ref>) are focused on the discovery of web entities and artifacts in scope of our web archiving approach. Since ORCID has emerged as a high-potential scholarly web infrastructure that assigns web identities to scholars and allows listing additional web identities as well as artifacts per scholar, we were interested in determining whether it would be suitable as a discovery component in our archival processes. We approached this work in two dimensions. First, we evaluated the coverage of ORCID in terms of the number of researchers, in terms of subjects, and in terms of geographical coverage. Second, we analyzed the richness of ORCID profiles regarding information about a scholar, web identities, and artifacts. We found that the ORCID subject coverage is proportional to subject coverage worldwide (as per publications) but that in absolute numbers there is still significant room for growth. We found more divergence with respect to the geographical coverage. Countries like China, Japan, Russia, and Germany seem under-represented in ORCID. However, we also discovered that ORCID grows at a very significant rate that outpaces, for example, the growth in the number of researchers. We therefore see a real chance that ORCID may achieve a level of coverage in the near future that is more suitable for our needs.
The results of the evaluation of the richness of ORCID profiles revealed that only one out of five profiles contains information about the scholar's work. This number is surprisingly low and may indicate that scholars use other services such as ResearchGate or Academia.edu for their profile data. The majority of works we found in ORCID profiles are journal articles, which are out of scope for our use case. Given these observations, it seems unreasonable to assume that researchers will eventually create entries for orphans in their profiles. The works component of ORCID profiles is therefore less promising for our approach. We further found that few profiles (less than 10%) contain web identities, which may be another indicator that researchers do not consider ORCID as their profile but rather as their identity. Nevertheless, since an ORCID is a web identity, it would make sense for ORCID to promote adding additional web identities so as to become an “identity hub” for researchers. This would be very beneficial for many use cases that involve access to machine-readable researcher profiles, as it would allow automatic navigation from a scholar's ORCID to their web presence in other portals. To be able to better interpret web identities, however, it would help if a controlled vocabulary for types were used. We acknowledge that ORCID profiles can provide rich data that can be used to algorithmically discover other web identities of researchers. The given and family name(s) of scholars, their affiliations, URIs to personal home pages (one of the most frequent web identities provided), and even subjects extracted from their works can be used for this purpose. Clearly, ORCID adoption is on the rise but, at this point, relying on it as basic infrastructure for steps 1 and 2 in our paradigm is not an option. We are optimistic that the coverage will improve over time and eventually better align with researchers and research subjects. To improve the richness of the profiles, some emphasis on promoting the addition of web identities would be required from ORCID. We believe this aligns with ORCID's mission of unambiguously identifying researchers.

§ ACKNOWLEDGMENTS

This work is in part supported by the Andrew W. Mellon Foundation (grant number 11600663). We would like to express our gratitude to the ORCID support staff, in particular Alainna Therese, who provided invaluable feedback regarding components and history of ORCID records.

UNESCO Science Report, 2015. <http://unesdoc.unesco.org/images/0023/002354/235406e.pdf>.

S. Bechhofer, D. D. Roure, M. Gamble, C. Goble, and I. Buchan. Research Objects: Towards Exchange and Reuse of Digital Knowledge. 2010.

G. Gossen, E. Demidova, and T. Risse. iCrawl: Improving the Freshness of Web Collections by Integrating Social Web and Focused Web Crawling. In Proceedings of JCDL '15, pages 75–84, 2015.

L. Haak, J. Brown, M. Buys, A. P. Cardoso, P. Demain, T. Demeranville, M. Duine, S. Harley, S. Hershberger, L. Krznarich, A. Meadows, N. Miyairi, A. Montenegro, L. Paglione, L. Pessoa, R. Peters, F. R. Monge, W. Simpson, C. Wilmers, and D. Wright. ORCID Annual Public Data File, ORCID Inc., 2016. <https://dx.doi.org/10.6084/m9.figshare.4134027.v1>.

L. L. Haak, M. Fenner, L. Paglione, E. Pentz, and H. Ratner. ORCID: A System to Uniquely Identify Researchers. Learned Publishing, 25(4):259–264, 2012.

S. M. Jones, H. Van de Sompel, H. Shankar, M. Klein, R. Tobin, and C. Grover.
Scholarly Context Adrift: Three out of Four URI References Lead to Changed Content. PLoS ONE, 11(12), 2016.

M. Klein, H. Van de Sompel, R. Sanderson, H. Shankar, L. Balakireva, K. Zhou, and R. Tobin. Scholarly Context Not Found: One in Five Articles Suffers from Reference Rot. PLoS ONE, 9(12), 2014.

C. Lagoze, H. Van de Sompel, M. L. Nelson, S. Warner, R. Sanderson, and P. Johnston. Object Re-Use & Exchange: A Resource-Centric Approach. CoRR, abs/0804.2273, 2008.

C. T. Northern and M. L. Nelson. An Unsupervised Approach to Discovering and Disambiguating Social Media Profiles. In Proceedings of the Mining Data Semantics Workshop, 2011.

L. Paglione, R. Peters, C. Oyler, W. Simpson, A. Montenegro, J. F. R. Monge, R. Bryant, and L. Haak. ORCID Annual Public Data File, ORCID Inc., 2013. <http://dx.doi.org/10.14454/07243.2013.001>.

L. Paglione, R. Peters, C. Oyler, W. Simpson, A. Montenegro, J. F. R. Monge, E. Krznarich, J. Brown, and L. Haak. ORCID Annual Public Data File, ORCID Inc., 2014. <http://dx.doi.org/10.14454/07243.2014.001>.

L. Paglione, R. Peters, C. Wilmers, W. Simpson, A. Montenegro, F. R. Monge, S. Tyagi, E. Krznarich, T. Demeranville, J. Brown, N. Miyairi, M. Buys, A. Cardoso, C. Sethate, and L. Haak. ORCID Annual Public Data File, ORCID Inc., 2015. <https://dx.doi.org/10.6084/m9.figshare.1582705.v1>.

J. Powell, H. Shankar, M. Rodriguez, and H. Van de Sompel. EgoSystem: Where are our Alumni? Code4Lib, 24, 2014.

D. Rosenthal. Patio Perspectives at ANADP II: Preserving the Other Half, 2013. <http://blog.dshr.org/2013/11/patio-perspectives-at-anadp-ii.html>.

D. Rosenthal. The Evanescent Web, 2015. <http://blog.dshr.org/2015/02/the-evanescent-web.html>.

D. S. H. Rosenthal, D. L. Vargas, T. A. Lipkis, and C. T. Griffin. Enhancing the LOCKSS Digital Preservation Technology. D-Lib Magazine, 21(9/10), 2015.

A. Tattersall. Disentangling the Academic Web: What Might Have Been Learnt from Discogs and IMDb, 2017. <http://blogs.lse.ac.uk/impactofsocialsciences/2017/02/01/disentangling-the-academic-web-what-might-have-been-learnt-from-discogs-and-imdb/>.

H. Van de Sompel and C. Lagoze. Interoperability for the Discovery, Use, and Re-Use of Units of Scholarly Communication. CTWatch Quarterly, 3(3), 2007.

H. Van de Sompel, D. S. H. Rosenthal, and M. L. Nelson. Web Infrastructure to Support e-Journal Preservation (and More). CoRR, abs/1605.06154, 2016. | http://arxiv.org/abs/1703.09343v1 | {
"authors": [
"Martin Klein",
"Herbert Van de Sompel"
],
"categories": [
"cs.DL"
],
"primary_category": "cs.DL",
"published": "20170327233602",
"title": "Discovering Scholarly Orphans Using ORCID"
} |
Implementing Monte Carlo Tests with P-value Buckets
Axel Gandy, Georg Hahn, Dong Ding
December 30, 2023
====================================================================================

Software packages usually report the results of statistical tests using p-values. Users often interpret these by comparing them to standard thresholds, e.g. 0.1%, 1% and 5%, which is sometimes reinforced by a star rating (***, **, *). We consider an arbitrary statistical test whose p-value p is not available explicitly, but can be approximated by Monte Carlo samples, e.g. by bootstrap or permutation tests. The standard implementation of such tests usually draws a fixed number of samples to approximate p. However, the probability that the exact and the approximated p-value lie on different sides of a threshold (the resampling risk) can be high, particularly for p-values close to a threshold. We present a method to overcome this. We consider a finite set of user-specified intervals which cover [0,1] and which can be overlapping. We call these p-value buckets. We present algorithms that, with arbitrarily high probability, return a p-value bucket containing p. We prove that for both a bounded resampling risk and a finite runtime, overlapping buckets need to be employed, and that our methods both bound the resampling risk and guarantee a finite runtime for such overlapping buckets. To interpret decisions with overlapping buckets, we propose an extension of the star rating system. We demonstrate that our methods are suitable for use in standard software, including for low p-value thresholds occurring in multiple testing settings, and that they can be computationally more efficient than standard implementations.

Keywords: Algorithm, Bootstrap, Hypothesis testing, P-value, Resampling, Sampling

§ INTRODUCTION

Software packages usually report the significance of statistical tests using p-values. The result of the test will often be interpreted by comparing those p-values to thresholds. To facilitate this, many tests in statistical software such as R <cit.>, SAS <cit.> or SPSS <cit.> translate the significance into a star rating system, in which typically p∈ (0.01,0.05] is denoted by *, p∈ (0.001,0.01] is denoted by ** and p≤ 0.001 is denoted by ***. As pointed out in the literature, such levels of significance are sensible since they capture the magnitude of a p-value rather than its precise value, which, in contrast to the magnitude, is usually not reliably estimated <cit.>.

In this article, we are concerned with statistical tests whose p-value p can only be approximated by sequentially drawn Monte Carlo samples. Among others, this scenario arises in bootstrap or permutation tests <cit.>. Standard implementations of Monte Carlo tests in software packages usually take a fixed number of samples and estimate p as the proportion of exceedances over the observed value of the test statistic. Examples of this include the computation of a bootstrap p-value inside the function chisq.test in R or the function t-test in SPSS. However, there is no control of the resampling risk, the probability that the exact and the approximated p-value lie on two opposite sides of a testing threshold (usually 0.1%, 1% or 5%). Sequential methods to approximate p-values have been studied in the literature. Early works provided ad hoc attempts to reduce the computational effort without focusing on a specific error criterion <cit.>. Further developments aimed at a uniform bound on the resampling risk for a single threshold <cit.>.
<cit.> shows that such a uniform bound necessarily results in an infinite runtime. There are also approaches that aim to bound an integrated resampling risk for a single threshold <cit.>. Such an error criterion is weaker than a uniform bound on the resampling risk and can be achieved with finite effort.

In this article, we present algorithms that work with multiple thresholds, aim for uniform bounds on the resampling risk and, under conditions, have a finite runtime. We first generalize testing thresholds to a finite set of user-specified intervals (called “p-value buckets”) which cover [0,1] and which can be overlapping. Our algorithms return one of those p-value buckets, which is guaranteed to contain the unknown (true) p up to a uniformly bounded error. We prove that methods achieving both a finite runtime and a bounded resampling risk need to operate on overlapping p-value buckets. In order to report decisions computed with overlapping buckets, we propose to use an extension of the classical star rating system (*, **, ***) used to indicate the significance of a hypothesis. Our methods rely on the computation of a confidence sequence for p, i.e. a sequence of random intervals with a joint coverage probability. We present two approaches to compute such a confidence sequence, prove that both approaches indeed bound the resampling risk, and show that they achieve a finite runtime for overlapping buckets. We compare both approaches in a simulation section and demonstrate that they achieve a competitive computational effort which is close to a theoretical lower bound on the effort we derive.

The article is structured as follows. Section <ref> introduces the mathematical setting of our article (Section <ref>), the rationale behind overlapping p-value buckets (Section <ref>), our proposed extension of the traditional star rating system (Section <ref>) and a general algorithm to compute a decision for p with respect to a set of p-value buckets (Section <ref>). The general algorithm relies on the construction of certain confidence sequences for p, for which we present two approaches: one based on likelihood martingales <cit.> in Section <ref> and one based on the Simctest algorithm <cit.> in Section <ref>. In Section <ref> we first derive a theoretical lower bound on the expected effort (Section <ref>) and demonstrate that our methods achieve a computational effort which stays within a multiple of the optimal effort (Sections <ref> and <ref>). An application to multiple testing is considered in Section <ref>. The article concludes with a discussion in Section <ref>. All proofs can be found in Appendix <ref>. The Supplementary Material includes R code to implement the algorithms as well as to reproduce all figures and tables. We have also implemented the method in the function mctest of the R-package simctest, which is available on CRAN.

§ GENERAL ALGORITHM

§.§ Setting

We consider one hypothesis H_0 which we would like to test with a given statistical test. Let T denote the test statistic and let t be the evaluation of T on some given data. For simplicity, we assume that H_0 should be rejected for large values of t. In this case the p-value is commonly computed as the probability of observing a statistic at least as extreme as t, i.e. p = ℙ(T ≥ t), where ℙ is a probability measure under the null hypothesis. For our purposes, we assume that ℙ is either the true distribution of T under H_0 or an estimate of it, e.g.
the distribution implied via bootstrapping. We assume that the p-value p is not available analytically but can be approximated using Monte Carlo simulation by drawing independent realizations of the test statistic T under ℙ. We will assume that we can generate a sequence X_i, i∈ℕ, of those draws, and we let X_i=1 if the ith replicate is greater than or equal to t and X_i=0 otherwise. As a consequence, the X_i have a Bernoulli(p) distribution.

The algorithms we consider aim to return an interval containing p from a given set J of possibly overlapping sub-intervals of [0,1]. The algorithms are sequential; for n=1,2,…, based on X_1:n=(X_1,…,X_n), they will decide if they can stop and return an interval or whether they need to observe more X_i. We use A to denote a generic algorithm of this type, I_A (or simply I) to denote the interval returned by A, and τ_A for the stopping time of A, i.e. the number of X_i that the algorithm observes before returning I_A. We use J for generic elements of J. Our algorithms are built on the sequence (X_i), which has p as the unknown parameter. To emphasize that the underlying distribution is determined by p, we will use it as a subscript in the notation of probabilities and expected values, writing ℙ_p and 𝔼_p.

Formally, we require J to be a set of p-value buckets, which we define to be a set of sub-intervals of [0,1] of positive length that cover [0,1], i.e. ⋃_{J∈ J} J = [0,1]. For example,

J^0 := {[0,10^-3], (10^-3,0.01], (0.01,0.05], (0.05,1]}

is a set of p-value buckets, which we will refer to in the remainder of the article as classical buckets. Deciding which of those buckets p falls into is equivalent to deciding where p lies in relation to the three traditional thresholds 0.001, 0.01 and 0.05. A natural error criterion for an algorithm A is the risk of a wrong decision, defined as RR_p(A) = ℙ_p(p ∉ I_A), which we call the resampling risk. RR_p(A) is a function of the p-value p. The algorithms A that we propose in this article bound the resampling risk uniformly in p at a given ϵ∈ (0,0.5), i.e.

RR_p(A) ≤ ϵ for all p ∈ [0,1].

§.§ Overlapping buckets

We say that the buckets J are overlapping if for all p ∈ (0,1) there exists J ∈ J such that p is contained in the interior of J. The following theorem shows that overlapping buckets are both a necessary and sufficient prerequisite for a finite time algorithm A satisfying (<ref>) to exist, where the effort is measured in terms of the stopping time τ_A.

The following statements are equivalent:

* There exists an algorithm A satisfying (<ref>) with 𝔼_p(τ_A)<∞ for all p ∈ [0,1].

* The p-value buckets J are overlapping.

* There exists an algorithm A satisfying (<ref>) with τ_A<C for some deterministic C>0.

All proofs can be found in Appendix <ref>. A consequence of the theorem is that there is no algorithm A with finite expected effort (i.e., 𝔼_p(τ_A)<∞ for all p ∈ [0,1]) that achieves (<ref>) for J^0. To turn the classical buckets J^0 into a set of overlapping p-value buckets, we can add intervals that contain the classical thresholds in their interior. As a specific choice, we recommend

J^∗ = J^0 ∪ { (5× 10^-4, 2× 10^-3], (0.008, 0.012], (0.045, 0.055] }.

We will use J^∗ throughout the article and refer to these as the extended buckets. We recommend J^∗ for three reasons: First, this choice results in roughly an equal maximal effort when p is close to all three classical thresholds (see Example <ref>). Second, the maximal effort and the expected effort under the null are reasonable in practical applications. Third, the interval limits in J^∗ have only a few decimal places and can thus easily be written down. Section <ref> discusses additional (heuristic) ways of choosing buckets.
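For concreteness, a minimal Python sketch (not the R code of the Supplementary Material) of the classical and extended buckets, together with a numerical check of the overlap property; the grid-based check and all names are ours.

```python
from typing import List, Tuple

Bucket = Tuple[float, float]  # encode a bucket by its endpoints (lo, hi)

J0: List[Bucket] = [(0.0, 1e-3), (1e-3, 0.01), (0.01, 0.05), (0.05, 1.0)]
J_star: List[Bucket] = J0 + [(5e-4, 2e-3), (0.008, 0.012), (0.045, 0.055)]

def is_overlapping(buckets: List[Bucket], grid: int = 10**5) -> bool:
    """Numerically check that every p in (0,1) on a grid lies in the
    interior (lo, hi) of at least one bucket."""
    return all(
        any(lo < k / grid < hi for lo, hi in buckets)
        for k in range(1, grid)
    )

print(is_overlapping(J0))      # False: the thresholds are only boundary points
print(is_overlapping(J_star))  # True: each threshold lies inside an added bucket
```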
§.§ Extended star rating system

It is commonplace to report the significance of a hypothesis using a star rating system: strong significance is encoded as *** (p<0.1%), significance at 1% is encoded as **, and weak significance (p<5%) as a single star. This classification, recommended in the publication manual of the American Psychological Association <cit.>, is the de facto standard for reporting significance. We propose to extend the star rating system for the overlapping buckets in J^∗ in the manner given in Table <ref> (referred to as the extended star rating system). The same coding could be used for other p-value buckets that contain J^0. If the p-value bucket I returned by our algorithm allows for a clear decision with respect to the classical thresholds (first row of Table <ref>), we report the classical star rating. Otherwise, we propose to report significance with respect to the smallest classical threshold larger than max I and to indicate the possibility of a higher significance with a tilde symbol (second row of Table <ref>). For instance, suppose an algorithm returns the bucket I=(0.05%,0.2%] for p upon stopping. This implies p ≤ 1% and thus we can safely report a ** significance. However, as p could either be smaller or larger than the next classical threshold 0.1%, we report **~ to indicate the possibility of a higher significance.

§.§ The general construction

We suppose that we can compute a confidence sequence C(X_1:n), n∈ℕ, for p, i.e. a sequence of intervals such that its joint coverage probability is at least 1-ϵ, where ϵ>0 is the desired uniform bound on the resampling risk. Formally, we require

ℙ_p(p ∈ C(X_1:n) for all n ∈ ℕ) ≥ 1-ϵ for all p∈ [0,1].

In Sections <ref> and <ref> we consider two constructions satisfying (<ref>). The generic algorithm we propose will depend on the choice of p-value buckets J and the method C for computing a confidence sequence. We will denote the algorithm by A( J, C). We define the stopping time

τ_A( J, C) = inf{ n∈ℕ: there exists J ∈ J such that C(X_1:n) ⊆ J },

which denotes the minimal number of samples n needed until a confidence interval C(X_1:n) is fully contained in a bucket J ∈ J. If τ_A( J, C)<∞, the result of our algorithm is a bucket I∈ J such that C(X_1:τ_A( J, C)) ⊆ I. If multiple buckets exist with this property then an arbitrary one is chosen. If τ_A( J, C)=∞, our algorithm returns an arbitrary element I ∈ J such that lim_n→∞ S_n/n ∈ I, where S_n = ∑_i=1^n X_i. The limit exists by the law of large numbers. If (<ref>) holds then A=A( J, C) satisfies (<ref>). This is an immediate consequence of the construction and the strong law of large numbers.

The confidence interval C(X_1:n) for p and the bucket I ∈ J that our algorithm returns are related but not equivalent. Following <cit.>, we are ultimately only interested in reporting one of the pre-specified p-value buckets that p falls in; a more precise confidence statement on p is not required. The confidence interval C(X_1:n) for p serves to quantify the uncertainty in the estimation of p, and since C(X_1:n) ⊆ I it ensures that the bucket I we report satisfies (<ref>). Lastly, if there exists N∈ℕ such that τ_A( J, C)<N, we can relax (<ref>) to

ℙ_p(p ∈ C(X_1:n) for all n < N) ≥ 1-ϵ for all p∈ [0,1].
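To fix ideas, a sketch of the generic algorithm A( J, C) together with the extended star rating described above; conf_int is a placeholder for one of the two constructions of Section <ref>, and all names are illustrative rather than the interface of the simctest package.

```python
from typing import Callable, List, Tuple

Bucket = Tuple[float, float]  # a bucket (lo, hi]

def run_bucket_algorithm(
    sample: Callable[[], int],                            # draws X_n ~ Bernoulli(p)
    conf_int: Callable[[int, int], Tuple[float, float]],  # C(X_1:n) from (n, S_n)
    buckets: List[Bucket],
    max_n: int = 10**7,
) -> Bucket:
    """Generic algorithm A(J, C): sample until C(X_1:n) fits into some bucket."""
    s = 0
    for n in range(1, max_n + 1):
        s += sample()
        lo, hi = conf_int(n, s)
        for b_lo, b_hi in buckets:
            if b_lo <= lo and hi <= b_hi:  # C(X_1:n) contained in the bucket
                return (b_lo, b_hi)
    raise RuntimeError("no decision reached; increase max_n")

def extended_stars(bucket: Bucket) -> str:
    """Extended star rating: classical stars if the bucket sits between two
    adjacent classical thresholds, otherwise the stars of the smallest
    classical threshold above max(bucket), with '~' appended."""
    lo, hi = bucket
    classical = [(1e-3, "***"), (1e-2, "**"), (5e-2, "*"), (1.0, "")]
    stars = next(s for t, s in classical if hi <= t)
    crossed = any(lo < t < hi for t, _ in classical[:-1])
    return stars + ("~" if crossed else "")

# extended_stars((5e-4, 2e-3)) == '**~', extended_stars((1e-3, 1e-2)) == '**'
```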
Suppose we are solely interested in the 5% threshold. Testing at 5% corresponds to the two classical buckets J^e = { [0,0.05], (0.05,1] }. Using the approach of Section <ref> with ϵ=10^-3 to compute a confidence sequence for p, we arrive at the non-stopping region displayed in Figure <ref> (left). We define the non-stopping region as the region in which sampling progresses until the sampling path (n,S_n) hits either its lower or upper boundary. As displayed in Figure <ref> (left), we report the interval [0,0.05] (respectively (0.05,1]) upon hitting the lower (respectively upper) boundary first. Adding the bucket (0.03,0.07] to J^e results in overlapping buckets with a finite non-stopping region displayed in Figure <ref> (right). In Figure <ref> (right), the sample path can leave the non-stopping region in three ways: either to the top via the former upper boundary of Figure <ref> (left), in which case we report the classical interval (0.05,1], to the bottom via the former lower boundary, corresponding to the bucket [0,0.05], or to the middle, corresponding to the added bucket (0.03,0.07].

Similarly to Example <ref>, Figure <ref> shows the non-stopping region for J^0 and J^∗. The non-stopping region is infinite for the non-overlapping J^0 and finite for the overlapping buckets J^∗. How likely is it to observe the different decisions which can occur when testing with J^∗? Figure <ref> shows the probability of obtaining each decision in the extended star rating system for J^∗ as a function of p. These probabilities are computed as follows: For a given p, we iteratively (over n) compute the distribution of S_n conditional on not stopping. This allows us to compute the probability of stopping and the resulting decision. Figure <ref> shows that intermediate decisions (~, *~, **~) only occur with appreciable probability for a narrow range of p-values. For most p-values, a decision in the sense of the classical star rating system is reached.

§ CONSTRUCTION OF CONFIDENCE SEQUENCES

We now present two approaches for computing confidence sequences and show that, for overlapping buckets, the resulting stopping times are bounded.

§.§ The Robbins-Lai approach

<cit.> showed that the sequence of sets

C_RL(X_1:n) = { p ∈ [0,1]: (n+1) b(n,p,S_n) > ϵ }

satisfies (<ref>), where b(n,p,s) = \binom{n}{s} p^s (1-p)^{n-s} (see eq. (<ref>)). <cit.> showed that the sets C_RL(X_1:n) are intervals. Using these intervals with overlapping buckets leads to a bounded effort: If J are overlapping buckets then the stopping time τ_A( J, C_RL) can be bounded by a deterministic positive constant. The intervals C_RL(X_1:n) need not be computed explicitly in order to check (<ref>). Appendix <ref> gives a simple criterion to check whether C_RL(X_1:n) ⊆ J for J ∈ J.
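As a minimal sketch of this criterion (detailed in the appendix), the following checks containment of C_RL(X_1:n) in a bucket via the endpoint and derivative-sign conditions; the numerical example at the end is ours.

```python
from math import lgamma, log

def log_b(n: int, s: int, p: float) -> float:
    """log of b(n, p, s) = binom(n, s) * p**s * (1 - p)**(n - s), p in (0, 1)."""
    return (lgamma(n + 1) - lgamma(s + 1) - lgamma(n - s + 1)
            + s * log(p) + (n - s) * log(1 - p))

def endpoint_ok(n: int, s: int, p: float, eps: float, lower: bool) -> bool:
    """Check (n+1) b(n, p, S_n) <= eps at a bucket endpoint p, together with
    the derivative sign condition from the appendix (increasing at the lower
    bucket endpoint, decreasing at the upper one)."""
    if p <= 0.0 or p >= 1.0:
        return True  # endpoints 0 and 1 need no check: C_RL(X_1:n) lies in [0,1]
    small = log(n + 1) + log_b(n, s, p) <= log(eps)
    slope = s / p - (n - s) / (1 - p)  # sign of d/dp log b(n, p, S_n)
    return small and (slope >= 0 if lower else slope <= 0)

def rl_in_bucket(n: int, s: int, bucket, eps: float = 1e-3) -> bool:
    lo, hi = bucket
    return (endpoint_ok(n, s, lo, eps, lower=True)
            and endpoint_ok(n, s, hi, eps, lower=False))

# After n = 100000 draws with S_n = 100 exceedances (estimated p = 0.001):
print(rl_in_bucket(100000, 100, (0.0, 1e-3)))   # False: 0.001 is still plausible
print(rl_in_bucket(100000, 100, (5e-4, 2e-3)))  # True: fits the added bucket
```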
§.§ The Simctest approach

<cit.> provides a method to compute a decision for H_0 with respect to a single threshold in the same Monte Carlo setting as the one of Section <ref>. This approach can also be used to construct confidence sequences for multiple thresholds. For the purposes of this article, it suffices to mention that for a given threshold α∈ [0,1], <cit.> constructs two integer-valued stopping boundaries (L_n,α)_{n∈ℕ} and (U_n,α)_{n∈ℕ}, and defines a stopping time

τ_α = inf{ k ∈ ℕ: S_k ≥ U_k,α or S_k ≤ L_k,α }.

The construction is parametrized by a spending sequence (ϵ_n)_{n∈ℕ} that is nonnegative, nondecreasing and converges to some 0<ρ<1. <cit.> shows that, under conditions, 𝔼_p(τ_α)<∞ for p≠α and that the probability of hitting the wrong boundary is bounded by ρ, i.e. ℙ_p(S_τ_α ≥ U_τ_α,α) < ρ for p<α, and similarly for p>α.

In order to extend this approach to multiple thresholds, we first define the set of boundaries of intervals in J that are in the interior of [0,1]:

B_ J = {min J, max J: J ∈ J} ∖ {0,1}.

Then, for each α∈ B_ J we construct the stopping boundaries L_n,α and U_n,α using the same ρ. We define

I_n,α = [0,1] if n<τ_α; [0,α) if n≥τ_α and S_τ_α ≤ L_τ_α,α; (α,1] if n≥τ_α and S_τ_α ≥ U_τ_α,α.

We define the confidence sequence of the Simctest approach as C_S(X_1:n) = ⋂_{α∈ B_ J} I_n,α. The following theorem shows that C_S(X_1:n) has the desired joint coverage probability given in (<ref>) (or (<ref>) for overlapping buckets) when setting ρ=ϵ/2. Moreover, the theorem shows that the algorithm A( J,C_S) has a bounded stopping time if J is a finite set of overlapping buckets.

Let ϵ∈ (0,1). For each α∈ B_ J, construct L_n,α and U_n,α with error probability ρ=ϵ/2. Let N∈ℕ∪{∞}. Suppose that U_n,α ≤ U_n,α' and L_n,α ≤ L_n,α' for all α, α' ∈ B_ J, α < α', and n < N.

* Then ℙ_p(p ∈ C_S(X_1:n) for all n < N) ≥ 1-ϵ for all p∈ [0,1].

* Suppose N=∞, ρ≤ 1/4 and log(ϵ_n - ϵ_{n-1}) = o(n) as n →∞. If J is a finite set of overlapping p-value buckets then there exists c<∞ such that τ_A( J,C_S) ≤ c.

Allowing N<∞ in Theorem <ref> is useful for stopping boundaries constructed to yield a finite runtime (see (<ref>)). The condition on the spending sequence in part <ref> of Theorem <ref> is identical to the condition imposed in Theorem 1 of <cit.>. It is satisfied by the default spending sequence defined in <cit.> as ϵ_n=ρ n/(n+k) with k=1000, which is also employed in the remainder of this article. The condition on the monotonicity of the boundaries (U_n,α ≤ U_n,α' and L_n,α ≤ L_n,α' for all n ∈ ℕ and α, α' ∈ B_ J with α < α') can be checked for a fixed spending sequence (ϵ_n)_{n∈ℕ} in two ways: For finite N, the two inequalities can be checked manually after constructing the boundaries. For N=∞, the following lemma shows that, under conditions, the monotonicity of the boundaries holds true for all n ≥ n_0, where n_0 ∈ ℕ can be computed as a solution to inequality (<ref>) given in the proof of Lemma <ref> in Appendix <ref>.

Suppose ρ≤ 1/4 and log(ϵ_n - ϵ_{n-1}) = o(n) as n →∞. Let α,α' ∈ B_ J with α < α'. Then there exists n_0∈ℕ such that for all n ≥ n_0, L_n,α ≤ L_n,α' and U_n,α ≤ U_n,α'.

For n<n_0, the inequalities again have to be checked manually.
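The per-threshold boundaries (L_n,α) and (U_n,α) follow the construction of <cit.> and are treated as given in the following sketch, which only combines precomputed boundaries into C_S(X_1:n); the interface is our own illustrative simplification.

```python
from typing import Dict, Sequence, Tuple

def simctest_confidence(
    s_path: Sequence[int],                                     # S_1, ..., S_n
    bounds: Dict[float, Tuple[Sequence[int], Sequence[int]]],  # alpha -> (L, U)
) -> Tuple[float, float]:
    """Return C_S(X_1:n), the intersection over thresholds alpha of I_{n,alpha}.

    For each threshold, I_{n,alpha} is [0,1] while the path stays strictly
    between L_{k,alpha} and U_{k,alpha}; once a boundary is hit, it becomes
    [0,alpha) or (alpha,1] permanently, which we track via lo/hi below.
    """
    lo, hi = 0.0, 1.0
    for alpha, (L, U) in bounds.items():
        for k, s in enumerate(s_path):
            if s >= U[k]:        # upper boundary hit first: conclude p > alpha
                lo = max(lo, alpha)
                break
            if s <= L[k]:        # lower boundary hit first: conclude p < alpha
                hi = min(hi, alpha)
                break
    return lo, hi
```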
§ COMPUTATIONAL EFFORT

This section investigates the expected computational effort of the algorithm of Section <ref>. We start by deriving a theoretical lower bound on the expected effort in Section <ref>. We then compare both the Simctest and the Robbins-Lai approach of Section <ref> in terms of their expected effort as a function of p (Section <ref>). Integrating this effort for certain p-value distributions of practical interest allows us to compare both approaches in practical situations (Section <ref>). Section <ref> shows that the algorithm can be used for small p-values arising in multiple testing settings.

§.§ Lower bounds on the expected effort

In this section we construct lower bounds on the expected number of steps of sequential procedures satisfying (<ref>). The key idea is to consider hypothesis tests implied by (<ref>) and then to use the lower bounds for the expected effort of sequential tests <cit.>.

Let τ be the number of steps taken by a sequential procedure returning I∈ J which respects (<ref>). Then, for every p̃∈ [0,1],

𝔼_p̃(τ) ≥ sup_{q∉J̃} e(p̃, q, ϵ, ϵ),

where J̃ = ⋃_{J∈ J, p̃∈ J} J is the union of all buckets containing p̃ and

e(p,q,α,β) = [(1-α) log(β/(1-α)) + α log((1-β)/α)] / [p log(q/p) + (1-p) log((1-q)/(1-p))].

Furthermore, if p̃∈ [0,1] is such that exactly two elements of J contain p̃, say J_1 and J_2, then

𝔼_p̃(τ) ≥ min_{η∈ [0,1]} max{ sup_{q∉ J_1} e(p̃, q, 1-η, ϵ), sup_{q∉ J_2} e(p̃, q, min(η+ϵ,1), ϵ) }.

We call the bound given by (<ref>) the basic lower bound and the bound given by the maximum of (<ref>) and (<ref>) the improved lower bound. The suprema in (<ref>) and (<ref>) can be evaluated by looking at the boundary points of J̃, J_1 and J_2. The minimum can be bounded from below by looking at a grid of values for η and by conservatively replacing e(p̃, q, η+ϵ, ϵ) by e(p̃, q, η+ϵ+δ, ϵ), where δ is the grid width. This is because e is decreasing in its third argument. Figure <ref> gives an example of both the basic and the improved lower bounds on 𝔼_p̃(τ) for the extended buckets J^∗. The improved bound is much higher (and thus better) in the areas where there are overlapping buckets.
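A small numerical sketch of the basic lower bound (<ref>); the bucket encoding is ours, and we use that for J^∗ the union of buckets containing p̃ is an interval, so the supremum is attained at its endpoints.

```python
from math import log
from typing import List, Tuple

def e(p: float, q: float, a: float, b: float) -> float:
    """Wald-type lower bound e(p, q, alpha, beta) on the expected number of
    steps of a sequential test of p against q with error probabilities a, b."""
    num = (1 - a) * log(b / (1 - a)) + a * log((1 - b) / a)
    den = p * log(q / p) + (1 - p) * log((1 - q) / (1 - p))
    return num / den

def basic_lower_bound(p: float, buckets: List[Tuple[float, float]],
                      eps: float = 1e-3) -> float:
    """sup over q outside the union of buckets containing p of e(p, q, eps, eps);
    here the supremum is taken over the endpoints of that union."""
    containing = [(lo, hi) for lo, hi in buckets if lo <= p <= hi]
    lo = min(b[0] for b in containing)
    hi = max(b[1] for b in containing)
    candidates = [q for q in (lo, hi) if 0 < q < 1 and q != p]
    return max(e(p, q, eps, eps) for q in candidates) if candidates else 0.0

J_star = [(0.0, 1e-3), (1e-3, 0.01), (0.01, 0.05), (0.05, 1.0),
          (5e-4, 2e-3), (0.008, 0.012), (0.045, 0.055)]
print(basic_lower_bound(0.01, J_star))  # minimal expected effort near p = 0.01
```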
§.§ Expected effort for (non-)overlapping buckets

This section investigates both the classical buckets J and the extended buckets J^∗ with respect to the implied expected effort as a function of p. Using the non-stopping regions depicted in Figure <ref>, Figure <ref> shows the expected effort (measured in terms of the number of samples drawn) to compute a decision with respect to J (left) and J^∗ (right) as a function of p ∈ [10^-6,1]. For any given p, the expected effort is computed by iteratively (over n) updating the distribution of S_n conditional on not having stopped up to time n. Using this distribution, we work out the probability of stopping at step n and add the appropriate contribution to the overall effort. For both the Robbins-Lai and the Simctest approach, the files RL.cpp and simctest.cpp included in the Supplementary Material contain an implementation that computes the effort for a fixed set of p-value buckets. The effort diverges as p approaches any of the thresholds in J. For J^∗ the effort stays finite even in the case that p coincides with one of the thresholds (Figure <ref>, right). The effort is maximal in a neighborhood around each threshold, while in between thresholds the effort slightly decreases. For p-values larger than the maximal threshold in J or J^∗ the effort decreases to zero. The effort for Simctest seems to be uniformly smaller than the one for Robbins-Lai for both J and J^∗. Figure <ref> also shows the lower bound (dashed line) on the effort derived in Section <ref>. Using Simctest, the effort of our algorithm of Section <ref> differs from the theoretical lower bound by only a small factor.

§.§ Expected effort for three specific p-value distributions

The expected effort of the proposed methods for repeated use can be obtained by integrating the expected effort for a fixed p (see Figure <ref>, right) with respect to certain p-value distributions. Here, we consider using the extended buckets J^∗ with three different p-value distributions. These are a uniform distribution on the interval [0,1] (H_0), as well as two alternatives given by the density 1/2 + 10·𝟙(x ≤ 0.05) (H_1a) and by a Beta(0.5,25) distribution (H_1b), where 𝟙 denotes the indicator function. Table <ref> shows the expected effort as well as the lower bound on the expected effort. The Simctest approach (Section <ref>) dominates the one of Robbins-Lai (Section <ref>) for this specific choice of distributions. As expected, the effort is lowest for a uniform p-value distribution, and more extreme for the alternatives having higher probability mass on low p-values. Using Simctest, the expected effort stays within roughly a factor of two of the theoretical lower bound derived in Section <ref>.

§.§ Application to multiple testing

We consider the applicability of our algorithm of Section <ref> to the (lower) testing thresholds occurring in multiple testing scenarios. In the following example, we demonstrate that our algorithm is well suited as a screening procedure for the most significant hypotheses. Even for small threshold values, it is capable of detecting more rejections than a naïve sampling procedure that uses an equal number of samples for each hypothesis. We assume we want to test n=10^4 hypotheses using the <cit.> correction to correct for multiplicity. In order to be able to compute numbers of false classifications, we assign n_alt=100 hypotheses to the alternative; the remaining n-n_alt=9900 hypotheses are from the null. The p-values of the alternative are then set to 1-F(X), where F is the cumulative distribution function of a Student's t-distribution with 100 degrees of freedom and X is a random variable sampled from a t-distribution with 100 degrees of freedom and noncentrality parameter uniformly chosen in [2,6]. The p-values of the null are sampled uniformly in [0,1]. In order to screen hypotheses, we aim to group them by the order of magnitude of their p-values. For this we employ the overlapping buckets

J^s = {[0,10^-7]} ∪ {(10^{i-2},10^i]: i = -6,…,0}

which group the p-values in buckets spanning two orders of magnitude each (and [0,10^-7]). We apply our algorithm A( J^s,C_S) of Section <ref> to J^s using confidence sequences computed with the Simctest approach (Section <ref>) and parameter ϵ=10^-3. To speed up the Monte Carlo sampling, we sample in batches of geometrically increasing size ⌊ a^i b ⌋ in each iteration i ∈ ℕ, where b=10 and a=1.1. Likewise, both the stopping boundaries and the stopping condition (hitting of either boundary) in Simctest are updated and checked in batches of the same size. We now report the results from a single run of this setup. Our algorithm draws N=3.2 × 10^5 samples per hypothesis. Of the 10^4 hypotheses, 28 are correctly allocated to the two lowest buckets. As expected, the p-values from the null are all allocated to larger buckets (covering values from 10^-4 onwards). An alternative approach would be to draw an equal number of N samples per hypothesis and to compute a p-value using a pseudo-count <cit.>. Due to this pseudo-count, this naïve approach is incapable of observing p-values below (N+1)^-1 = 3.125 × 10^-6 (see also <cit.>), and in particular incapable of observing any p-values in the two lowest buckets.
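The geometric batching can be sketched as follows; the stopping check itself (hitting of the Simctest boundaries) is abstracted into the placeholder stop_check, which is our own illustrative interface.

```python
from math import floor
from typing import Callable, Optional, Tuple

def batched_sampling(
    sample_batch: Callable[[int], int],            # exceedances in a batch
    stop_check: Callable[[int, int], Optional[str]],  # decision from (n, S_n)
    a: float = 1.1, b: float = 10, max_draws: int = 10**7,
) -> Tuple[Optional[str], int]:
    """Draw samples in batches of size floor(a**i * b), i = 1, 2, ...,
    checking the stopping condition only after each batch."""
    n = s = 0
    i = 1
    while n < max_draws:
        batch = floor(a**i * b)
        s += sample_batch(batch)
        n += batch
        decision = stop_check(n, s)
        if decision is not None:
            return decision, n
        i += 1
    return None, n
```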
§ DISCUSSION

The overlapping p-value buckets presented in Section <ref> were chosen to be easily written down and to yield an equal maximal effort for all classical thresholds as well as a reasonable expected effort. However, these criteria are essentially arbitrary. A variety of further (heuristic) criteria can be used to obtain overlapping buckets from traditional testing thresholds T={ t_0,…,t_m }. These include (a sketch of the first two choices is given at the end of this section):

* The bucket overlapping each threshold t ∈ T can be chosen as [ρ t, ρ^-1 t] for a fixed proportion ρ∈ (0,1).

* Since the length of a confidence interval for a binomial quantity (with success probability p) behaves proportionally to √(p(1-p)) ∈ O(√(p)) as p → 0, we can define a bucket for t ∈ T as J_t,ρ = [t-ρ√(t), t+ρ√(t)], where ρ>0 is chosen such that 0 ∉ J_t,ρ.

* The buckets can be chosen to match the precision of a naïve sampling method which draws a fixed number of samples n ∈ ℕ per hypothesis. For this we compute all n+1 possible confidence intervals (one for each possible S_n ∈{0,…,n}) for each threshold t ∈ T and record all confidence intervals which cover t. The union of those intervals can then be used as a bucket for t.

The tuning parameter ρ can be chosen, for instance, to minimize the maximal (worst case) effort of the resulting overlapping buckets. The article leaves scope for a variety of future research directions. For instance, how can the overlapping p-value buckets be chosen to maximize the probability of obtaining a classical decision (*, ** or ***), subject to a suitable optimization criterion? How can the lower bound on the computational effort derived in Section <ref> be improved? Which algorithm (possibly based on our generic algorithm) is capable of meeting the effort of the lower bound?
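As referenced above, a minimal sketch of the first two heuristic constructions; the ρ values in the example are illustrative only.

```python
from math import sqrt
from typing import List, Tuple

def multiplicative_buckets(thresholds: List[float], rho: float) -> List[Tuple[float, float]]:
    """First heuristic: a bucket [rho*t, t/rho] around each threshold t."""
    assert 0 < rho < 1
    return [(rho * t, t / rho) for t in thresholds]

def sqrt_buckets(thresholds: List[float], rho: float) -> List[Tuple[float, float]]:
    """Second heuristic: J_{t,rho} = [t - rho*sqrt(t), t + rho*sqrt(t)],
    with rho small enough that the lower endpoint stays positive."""
    assert all(t - rho * sqrt(t) > 0 for t in thresholds)
    return [(t - rho * sqrt(t), t + rho * sqrt(t)) for t in thresholds]

T = [1e-3, 1e-2, 5e-2]
print(multiplicative_buckets(T, rho=0.5))
print(sqrt_buckets(T, rho=0.01))
```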
American Psychological Association (2010). Publication manual of the American Psychological Association (6th ed.). American Psychological Association, Washington, DC.

Andrews, D. and Buchinsky, M. (2000). A three-step method for choosing the number of bootstrap repetitions. Econometrica, 68(1):23–51.

Andrews, D. and Buchinsky, M. (2001). Evaluation of a three-step method for choosing the number of bootstrap repetitions. J Econometrics, 103(1-2):345–386.

Asomaning, N. and Archer, K. (2012). High-throughput DNA methylation datasets for evaluating false discovery rate methodologies. Comput Stat Data An, 56(6):1748–1756.

Besag, J. and Clifford, P. (1991). Sequential Monte Carlo p-values. Biometrika, 78(2):301–4.

Bonferroni, C. (1936). Teoria statistica delle classi e calcolo delle probabilità. Pubblicazioni del R Istituto Superiore di Scienze Economiche e Commerciali di Firenze, 8:3–62.

Boos, D. and Stefanski, L. (2011). P-Value Precision and Reproducibility. The American Statistician, 65(4):213–221.

Clopper, C. J. and Pearson, E. S. (1934). The use of confidence or fiducial limits illustrated in the case of the binomial. Biometrika, 26:404–413.

Davidson, R. and MacKinnon, J. (2000). Bootstrap Tests: How Many Bootstraps? Economet Rev, 19(1):55–68.

Davison, A. C. and Hinkley, D. V. (1997). Bootstrap Methods and their Application. Cambridge University Press.

Dazard, J.-E. and Rao, J. (2012). Joint adaptive mean–variance regularization and variance stabilization of high dimensional data. Comput Stat Data An, 56(7):2317–2333.

Ding, D., Gandy, A., and Hahn, G. (2016). A simple method for implementing Monte Carlo tests. arXiv:1611.01675.

Fay, M. and Follmann, D. (2002). Designing Monte Carlo Implementations of Permutation or Bootstrap Hypothesis Tests. Am Stat, 56(1):63–70.

Gandy, A. (2009). Sequential Implementation of Monte Carlo Tests With Uniformly Bounded Resampling Risk. J Am Stat Assoc, 104(488):1504–1511.

Gandy, A. and Hahn, G. (2014). MMCTest – A Safe Algorithm for Implementing Multiple Monte Carlo Tests. Scand J Stat, 41(4):1083–1101.

Gandy, A. and Hahn, G. (2017). QuickMMCTest: quick multiple Monte Carlo testing. Stat Comput, 27(3):823–832.

Hoeffding, W. (1963). Probability inequalities for sums of bounded random variables. J Am Stat Assoc, 58(301):13–30.

IBM Corp. (2013). IBM SPSS Statistics for Windows. IBM Corp., Armonk, NY.

Kim, H.-J. (2010). Bounding the Resampling Risk for Sequential Monte Carlo Implementation of Hypothesis Tests. J Stat Plan Infer, 140(7):1834–1843.

Lai, T. (1976). On Confidence Sequences. Ann Stat, 4(2):265–280.

Liu, J., Huang, J., Ma, S., and Wang, K. (2013). Incorporating group correlations in genome-wide association studies using smoothed group Lasso. Biostatistics, 14(2):205–219.

Lourenco, V. and Pires, A. (2014). M-regression, false discovery rates and outlier detection with application to genetic association studies. Comput Stat Data An, 78:33–42.

Martínez-Camblor, P. (2014). On correlated z-values distribution in hypothesis testing. Comput Stat Data An, 79:30–43.

R Development Core Team (2008). R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria. ISBN 3-900051-07-0.

Robbins, H. (1970). Statistical Methods Related to the Law of the Iterated Logarithm. Ann Math Stat, 41(5):1397–1409.

SAS Institute Inc. (2011). Base SAS 9.3 Procedures Guide. SAS Institute Inc., Cary, NC.

Silva, I. and Assunção, R. (2013). Optimal generalized truncated sequential Monte Carlo test. J Multivariate Anal, 121:33–49.

Silva, I., Assunção, R., and Costa, M. (2009). Power of the Sequential Monte Carlo Test. Sequential Analysis, 28(2):163–174.

Wald, A. (1945). Sequential tests of statistical hypotheses. Ann Math Stat, 16(2):117–186.

Wu, H., Wang, C., and Wu, Z. (2013). A new shrinkage estimator for dispersion improves differential expression detection in RNA-seq data. Biostatistics, 14(2):232–243.

§ APPENDIX

§ PROOFS

We prove a circular equivalence of the three statements.

(1.) ⇒ (2.): Suppose the buckets J are not overlapping. This implies that there exists α∈ (0,1) which is not contained in the interior of any J∈ J. Let I∈ J be the (random) interval reported by algorithm A which satisfies (<ref>).
Let n∈ℕ be such that α-1/n ≥ 0 and α+1/n ≤ 1. Consider the hypotheses H_0: p=α-1/n and H_1: p=α+1/n and the test that rejects H_0 iff α-1/n ∉ I. As I cannot contain both α-1/n and α+1/n (otherwise α would be in the interior of the interval I) and because of (<ref>), this test has type I and type II error of at most ϵ. Hence, by the lower bound on the expected number of steps of a sequential test given in <cit.>, see also <cit.>, we have

𝔼_{α+1/n}(τ) ≥ [ϵ log(ϵ/(1-ϵ)) + (1-ϵ) log((1-ϵ)/ϵ)] / [(α+1/n) log((α+1/n)/(α-1/n)) + (1-α-1/n) log((1-α-1/n)/(1-α+1/n))].

As n→∞, the right hand side converges to ∞, contradicting (1.).

(2.) ⇒ (3.): We construct an explicit (but not very efficient) algorithm for this. Let a_0<a_1<⋯<a_k be the ordered boundaries of the buckets in J, i.e. {a_0,…,a_k} = {max J: J∈ J} ∪ {min J: J∈ J}. Let Δ = min{a_i - a_{i-1}: i=1,…,k} be the minimal gap between those boundaries. Let I(S,n) be the two-sided <cit.> confidence interval with coverage probability 1-ϵ for p, where n∈ℕ is the number of samples and S is the number of exceedances observed among those n samples. Let n be such that the length of all Clopper-Pearson intervals is less than Δ, i.e. n = min{ m∈ℕ: |I(S,m)|<Δ for all S∈{0,…,m}}. This is well-defined as the length of the Clopper-Pearson confidence interval I(S,n) decreases to 0 uniformly in S as n→∞ (see e.g. the proof of Condition 2 in Lemma 2 of <cit.>). Consider the algorithm that takes n samples X_1,…,X_n and then returns an arbitrary interval I∈ J that satisfies I ⊇ I(∑_i=1^n X_i, n) (to be definite, order all elements in J arbitrarily and return the first element satisfying the condition). Such an I always exists as the buckets are overlapping by (2.) and as |I(∑_i=1^n X_i, n)|<Δ, implying that it overlaps with at most one possible boundary. This algorithm satisfies (<ref>) due to the coverage probability of 1-ϵ of the Clopper-Pearson interval.

(3.) ⇒ (1.): Since finite effort implies finite expected effort, (1.) follows immediately.

We first prove that the length of C_RL(X_1:n) uniformly goes to zero. The bounded stopping time then follows after proving that once an interval is below a certain length, it is guaranteed to be contained in one of the buckets. If 0 ≤ p ≤ S_n/n - [log((n+1)/ϵ)/(2n)]^{1/2} then, by Hoeffding's inequality <cit.>,

b(n,p,S_n) = ℙ(X=S_n) ≤ ℙ(X/n - p ≥ S_n/n - p) ≤ exp(-2(S_n-np)^2/n) ≤ ϵ/(n+1),

where X ∼ Binomial(n,p). Hence, p ∉ C_RL(X_1:n). A similar argument shows that b(n,p,S_n) ≤ ϵ/(n+1) for S_n/n + [log((n+1)/ϵ)/(2n)]^{1/2} ≤ p ≤ 1. Thus, |C_RL(X_1:n)| ≤ [2 log((n+1)/ϵ)/n]^{1/2}.

Now assume no c>0 exists such that any interval I⊆ [0,1] with length less than c is contained in a J∈ J. Then for all n ∈ ℕ there exists an interval C_RL(X_1:n) ⊂ [0,1] with 0<|C_RL(X_1:n)|<1/n such that C_RL(X_1:n) ⊈ J for all J∈ J. Let a_n be the midpoint of C_RL(X_1:n). As (a_n) is a bounded sequence, there exists a convergent subsequence (a_{n_k}). Let b = lim_{k→∞} a_{n_k}. If b∈ (0,1) then, as J is overlapping, there exist ϵ>0 and J∈ J such that (b-ϵ,b+ϵ) ⊆ J. For large enough k we have C_RL(X_1:n_k) ⊆ (b-ϵ, b+ϵ), contradicting C_RL(X_1:n_k) ⊈ J. If b=0 then, as J is a covering of [0,1] consisting of intervals of positive length, there exist ϵ>0 and J∈ J such that [0,ϵ) ⊆ J. For large enough k we have C_RL(X_1:n_k) ⊆ [0,ϵ), again contradicting C_RL(X_1:n_k) ⊈ J. If b=1, a contradiction can be derived similarly.

* For threshold α∈ B_ J, let E_α^N = { S_τ_α ≥ U_τ_α,α, τ_α < N } be the event that the upper boundary is hit first before time N and let F_α^N = { S_τ_α ≤ L_τ_α,α, τ_α < N } be the event that the lower boundary is hit first.
Then, for all α,α' ∈ B_ J with α<α',

E_α^N ⊇ E_α'^N and F_α^N ⊆ F_α'^N.

Indeed, to see E_α^N ⊇ E_α'^N, we can argue as follows. On the event E_α'^N, as U_n,α ≤ U_n,α' for all n ∈ ℕ, the trajectory (n, S_n) must hit the upper boundary U_n,α of α no later than τ_α', hence τ_α ≤ τ_α' < N. It remains to prove that the trajectory does not first hit the lower boundary L_n,α of α. Indeed, if the trajectory does hit the lower boundary of α before hitting its upper boundary, it also hits the lower boundary of α' (as L_n,α ≤ L_n,α' for all n < N) before time τ_α', thus contradicting being on the event E_α'^N. Hence, we have E_α^N ⊇ E_α'^N. The proof of F_α^N ⊆ F_α'^N is similar.

Using this notation, for all p∈ [0,1],

ℙ_p(there exists n < N: p ∉ C_S(X_1:n)) ≤ ℙ_p(there exist n < N, α∈ B_ J: p ∉ I_n,α) = ℙ_p( ⋃_{α∈ B_ J: α≤ p} F_α^N ∪ ⋃_{α∈ B_ J: α≥ p} E_α^N ) ≤ ℙ_p( ⋃_{α∈ B_ J: α≤ p} F_α^N ) + ℙ_p( ⋃_{α∈ B_ J: α≥ p} E_α^N ).

If p < min B_ J, the first term is equal to 0. Otherwise, let α' = max{α∈ B_ J: α≤ p}. Then, by (<ref>),

ℙ_p( ⋃_{α∈ B_ J: α≤ p} F_α^N ) = ℙ_p(F_α'^N) ≤ ρ.

The second term on the right hand side of (<ref>) can be dealt with similarly.

* By (<ref>) and as Δ_n = o(n) there exists n_0 ∈ ℕ such that |{α∈ B_ J: τ_α > n_0}| ≤ 1. We will show that τ_A( J,C_S) ≤ n_0. First, the assumption on the ordering of L_n and U_n excludes the possibility that C_S(X_1:n_0)=∅. Second, (<ref>) implies |C_S(X_1:n_0) ∩ B_ J| ≤ 1. If |C_S(X_1:n_0) ∩ B_ J| = 1 then let α∈ B_ J be such that α∈ C_S(X_1:n_0). As J is overlapping, there exists J∈ J such that α is in the interior of J. Hence, α cannot be a boundary of J, implying C_S(X_1:n_0) ⊆ J due to |C_S(X_1:n_0) ∩ B_ J|=1, thus showing τ_A( J,C_S) ≤ n_0. If |C_S(X_1:n_0) ∩ B_ J| = 0 then let β be in the interior of C_S(X_1:n_0). As J is overlapping, there exists J∈ J such that β∈ J. As C_S(X_1:n_0) ∩ B_ J=∅ this implies C_S(X_1:n_0) ⊆ J, thus showing τ_A( J,C_S) ≤ n_0.

By arguments in <cit.>, we have

(U_n,α - nα)/n ≤ (Δ_n+1)/n → 0, (L_n,α' - nα')/n ≥ -(Δ_n+1)/n → 0,

as n →∞, where Δ_n = √(-n log(ϵ_n - ϵ_{n-1})/2). Since Δ_n = o(n) there exists n_0 ∈ ℕ such that

2(Δ_n/n + 1/n) ≤ α' - α for all n ≥ n_0.

Splitting 2/n = 1/n + 1/n and multiplying by n yields nα + Δ_n + 1 ≤ nα' - Δ_n - 1, from which U_n,α ≤ L_n,α' follows by (<ref>). By definition, we have L_n,α ≤ U_n,α and L_n,α' ≤ U_n,α' for all n∈ℕ, thus implying L_n,α ≤ L_n,α' and U_n,α ≤ U_n,α' for all n ≥ n_0, as desired.

We suppose that I∈ J is the (random) bucket reported by a sequential algorithm that respects (<ref>). Let p̃∈ [0,1]. For any q∈ [0,1]∖J̃, we can consider the hypotheses H_0: p=p̃ against H_1: p=q and the test that rejects H_0 if and only if p̃∉ I. By (<ref>), the type I error of such a test is at most ϵ. Also, the type II error is at most ϵ, as q∉J̃ implies ℙ_q(p̃∈ I) ≤ ℙ_q(q∉ I) ≤ ϵ. Hence, using the lower bound in <cit.>, we get (<ref>).

To see (<ref>): For any q∈ [0,1]∖ J_1 consider the hypotheses H_0: p=p̃ and H_1: p=q and the test that rejects H_0 if and only if I≠ J_1. This test has type I error 1-η, where η = ℙ_p̃(I=J_1), and type II error of at most ϵ. Using <cit.> we get 𝔼_p̃(τ) ≥ e(p̃, q, 1-η, ϵ). Similarly, for any q∈ [0,1]∖ J_2, we can test the hypotheses H_0: p=p̃ and H_1: p=q by rejecting H_0 if and only if I≠ J_2. This test has type I error of at most min(η+ϵ,1) and type II error of at most ϵ. Again, using <cit.> we get 𝔼_p̃(τ) ≥ e(p̃, q, min(η+ϵ,1), ϵ). Eq. (<ref>) follows as these inequalities hold for all q and due to the fact that we can account for the unknown η by minimizing over it.
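The explicit algorithm from the proof of (2.) ⇒ (3.) can be sketched as follows, assuming scipy for the Clopper-Pearson interval via beta quantiles; returning a power of two rather than the minimal n is a simplification of ours.

```python
from scipy.stats import beta

def clopper_pearson(s: int, n: int, eps: float):
    """Two-sided Clopper-Pearson interval with coverage 1 - eps."""
    lo = 0.0 if s == 0 else beta.ppf(eps / 2, s, n - s + 1)
    hi = 1.0 if s == n else beta.ppf(1 - eps / 2, s + 1, n - s)
    return lo, hi

def max_cp_width(n: int, eps: float) -> float:
    return max(hi - lo for lo, hi in (clopper_pearson(s, n, eps)
                                      for s in range(n + 1)))

def fixed_n_for_buckets(buckets, eps: float) -> int:
    """An n such that every Clopper-Pearson interval is shorter than the
    minimal gap Delta between bucket boundaries (cf. the proof above). We
    double n until the condition holds, which suffices since the maximal
    interval width tends to 0."""
    bounds = sorted({b for lo, hi in buckets for b in (lo, hi)})
    delta = min(b - a for a, b in zip(bounds, bounds[1:]))
    n = 1
    while max_cp_width(n, eps) >= delta:
        n *= 2
    return n

# e.g. fixed_n_for_buckets([(0.0, 0.05), (0.03, 0.07), (0.05, 1.0)], eps=1e-3)
```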
§ A SIMPLE STOPPING CRITERION FOR ROBBINS-LAI

The following describes a simple criterion to determine whether a confidence interval computed via the Robbins-Lai approach of Section <ref> is fully contained in a bucket. For a single threshold this approach has been suggested in <cit.>. Let the interval C_RL and the bucket J ∈ 𝒥, as well as n, S_n and ϵ, be as in Sections <ref> and <ref>. Then C_RL ⊆ J if and only if, for p ∈ {min J, max J},

(n+1) b(n,S_n,p) = (n+1) \binom{n}{S_n} p^S_n (1-p)^n-S_n ≤ ϵ.

As (<ref>) is also satisfied if C_RL and J are simply disjoint, we verify that (n+1) b(n,S_n,p) is indeed increasing at min J and decreasing at max J, using the derivative of (n+1) b(n,S_n,p) with respect to p. After applying a (monotonic) log transformation to (<ref>), taking the derivative with respect to p yields

S_n/p - (n-S_n)/(1-p) ≥ 0 for p = min J,  ≤ 0 for p = max J.

If (<ref>) and (<ref>) are satisfied, then C_RL ⊆ J. | http://arxiv.org/abs/1703.09305v5 | {
"authors": [
"Axel Gandy",
"Georg Hahn",
"Dong Ding"
],
"categories": [
"stat.ME"
],
"primary_category": "stat.ME",
"published": "20170327204725",
"title": "Implementing Monte Carlo Tests with P-value Buckets"
} |
Politecnico di Torino, Department of Electronic and Telecommunications, Torino, 10129, Italy
The University of Texas at Austin, Department of Electrical and Computer Engineering, Austin, Texas, 78712, USA
Politecnico di Torino, Department of Electronic and Telecommunications, Torino, 10129, Italy, and with Macquarie University, Sydney, 2109, Australia

In this paper, we address non-radiating and cloaking problems by exploiting the surface equivalence principle, imposing at an arbitrary boundary the control of the admittance discontinuity between the overall object (with or without cloak) and the background. After a rigorous demonstration, we apply this model to a non-radiating problem, relevant to the modeling of anapole modes and metamolecules, and to a cloaking problem, relevant to the design of non-Foster metasurfaces. A straightforward analytical condition is obtained for controlling the scattering of a dielectric object over a surface boundary of interest. Previous quasi-static results are confirmed, and a general closed-form solution beyond the subwavelength regime is provided. In addition, this formulation can be extended to other wave phenomena once the proper admittance function is defined (thermal, acoustic, elastomechanic, etc.).

A Surface Admittance Equivalence Principle for Non-Radiating and Cloaking Problems
Ladislau Matekovits
December 30, 2023
===================================================================================

In search of a method for calculating the radiated power from infinitely thin scattering structures, Schelkunoff was led in 1936 to “certain equivalence theorems”, in order to find the causal relation between arbitrary radiating fields and sources located at a surface boundary <cit.>. In 1938, he also highlighted the concept of impedance for radiating problems as a powerful tool that “brings out a certain underlying unity in what otherwise appear diverse physical phenomena” <cit.>. In 1973, Devaney and Wolf established necessary and sufficient conditions for localized sources with special arbitrary fields, mainly non-radiating outside their domain of definition, according to a set of theorems they rigorously derived <cit.>. In recent years, such non-radiating sources, which are difficult to excite naturally in a bare particle, have been imposed artificially through the insertion of a properly designed cloak <cit.>, attached to or detached from the original (uncloaked) scatterer. In this framework, exploiting the surface equivalence principle, we impose the design of non-radiating sources in volumetric domains as projected at an arbitrary surface boundary, enclosing the bare particle (non-radiating problem) or the uncloaked object with its coating layer (cloaking problem).

Recently, the surface equivalence principle has been applied to the synthesis of planar devices with reflectionless sheets <cit.> and to the cloaking of conformal structures with antenna elements <cit.>. Instead of reasoning on the sources as discontinuities of tangential fields at a thin surface <cit.>, we show that all the useful information relevant to the volumetric interactions between the object (without or with the surrounding cloak) and the background can be recast in terms of a field ratio over a surface of choice, directly providing admittance functions: this concept is valid at any frequency regime and without any approximation, as highlighted by Schelkunoff <cit.>.
For cloaking problems, several approaches exist in the literature for the design of the coating layer according to the size and/or constitutive parameters of the object to be hidden, such as Plasmonic Cloaking (PC) <cit.>, Transformation Optics (TO) <cit.> and Mantle Cloaking (MC) <cit.>. The TO technique is based on a spatial transformation of fields, preserving their free-propagating characteristic outside a certain region (exact zero scattering), while compressing wave propagation in the annular cloaking medium, through its anisotropic layout, rerouting the energy flow <cit.>. In such a way, a hole for the fields is created, with the possibility of hiding whatever object inside the cloak once its size is defined, regardless of the constitutive parameters of the object itself. The PC and MC approaches are based on the scattering cancellation method <cit.>, where, by taking into account the scattering from the uncloaked object, the outgoing scattered fields are turned off to zero or to very low values by a bulk plasmonic or volumetric metamaterial coating (PC) <cit.> or, in the case of the MC approach <cit.>, by a thin metasurface, mathematically modeled as an impedance sheet Z_s. The use of such surface impedance cloaks turns out to be useful also in relation to what Schelkunoff reported in his paper on the impedance concept <cit.>, about “the idea of extending the V/I relation (voltage-current ratio) from circuits to radiation fields”.

In order to establish a complete non-radiating and cloaking condition valid at any frequency regime, we exploit the surface equivalence principle <cit.> combined with the impedance concept <cit.>, by considering the bare object (without or with cloak) and the background in terms of their admittance functions relative to a specific incoming wave: this concept is generally exploited in static or quasi-static models (mainly circuit problems), but it is also implicitly contained in Lorenz-Mie theory <cit.>. In this work, the problem is particularized to electromagnetics, even though this methodology can be extended in a straightforward manner to any other physical scattering phenomenon, once the admittance function is properly defined, such as in elastic <cit.>, thermal <cit.> or acoustic <cit.> non-radiating and cloaking problems.

As a starting point, we reconsider one of the two classes of non-radiating solutions derived from the Devaney-Wolf theorem <cit.>. According to theorem III as originally derived <cit.>, a necessary and sufficient condition for a physical (or equivalent) surface source distribution Q⃗_s, projected along its unit polarization vector q̂, to be non-radiating is

∫_Γ [ Q⃗_s(ρ) · q̂ ] e^{-i k_b ρ} dΓ = 0,

where k_b is the wavenumber in the background medium. By forcing all of its Fourier components to zero, theorem (<ref>) predicts the existence of non-radiating sources. An apparently trivial solution that satisfies the Devaney-Wolf theorem is the one with vanishing surface sources,

Q⃗_s(ρ=Γ) = 0:  J⃗_s(Γ) = 0,  M⃗_s(Γ) = 0,

where the general surface sources Q⃗_s(ρ=Γ) have been made explicit in terms of J⃗_s and M⃗_s, the electric and magnetic surface current densities. Possessing all local zeros in the domain where the source itself is localized (in this case, the surface boundary Γ), this configuration has been referred to as the strong solution for non-radiating and cloaking problems <cit.>, due to its vanishing components at any considered point.
Because theorem (<ref>) is applied to equivalent rather than physical sources, this kind of solution leads to non-trivial results. As rediscovered by Schelkunoff <cit.>, a general radiating event, supported by physical scatterers, can be represented and replaced by electric and magnetic surface equivalent sources located at an arbitrary boundary Γ as

J⃗_s(Γ) = n̂ × [ H⃗(Γ^+) - H⃗(Γ^-) ],  M⃗_s(Γ) = -n̂ × [ E⃗(Γ^+) - E⃗(Γ^-) ],

where the surface sources J⃗_s and M⃗_s, located at Γ, are proportional to the discontinuity of the tangential magnetic and electric fields at the outer (^+) and the inner (^-) side of the surface boundary Γ. If such independent electromagnetic sources are identically zero at Γ, according to Eq. (<ref>), a non-radiating and cloaking condition is achieved, giving

H⃗(Γ^+) = H⃗(Γ^-),  E⃗(Γ^+) = E⃗(Γ^-).

The simultaneous conditions in Eq. (<ref>)-(<ref>) can be rewritten in a compact form, using the admittance functions defined in terms of the magnetic-to-electric field ratio, forming for each component the condition

Y_b(Γ^+) - Y_d(Γ^-) = 0,

which aims to control the ratio of the tangential magnetic and electric fields (normalized admittance functions), by properly incorporating the scatterers at the outer (background material) and the inner side (dielectric material) of the surface boundary Γ.

Consider for simplicity, as shown in Fig. <ref>, a dielectric cylinder of absolute permittivity ε_d, permeability μ_b and circular transverse section of radius a in an infinite background medium of permittivity ε_b and permeability μ_b. Due to the arbitrary choice of the surface boundary, we choose Γ to be directly attached at ρ=a to the surface of the bare dielectric cylinder, positioned with its axis parallel to ẑ and illuminated by an incoming TM_z polarized wave (the largest contribution to scattering for dielectrics). In this scenario, the total fields can be analytically computed using Lorenz-Mie theory <cit.>. Even if the object is still without any cloak, there is a non-radiating condition for which the coefficient c_n^TM vanishes or becomes near-zero for a specific scattering order, without any coating. Applying Cramer's rule, such a condition reads

det [ J_n(k_d a)   J_n(k_b a) ;  k_d J'_n(k_d a)   k_b J'_n(k_b a) ] = 0,

which is valid for each harmonic index n. Solving this determinant and rewriting the relation in terms of admittance functions, the non-radiating condition reads

-i J'_n(k_b a)/J_n(k_b a) + i √(ε_r) J'_n(k_d a)/J_n(k_d a) = 0,

where ε_d ≡ ε_r ε_b. The first term can be recognized as the normalized admittance Y_b(Γ^+) (ratio of normalized magnetic and electric field functions) computed at ρ=a^+ for the specific cylindrical harmonic of choice in a free-space (complete background) scenario <cit.>. This is consistent with Schelkunoff's idea of considering the impedance/admittance “as an attribute of the field as well as of the body or the medium which supports the field, so that the impedance to a plane wave is not the same as the impedance to a cylindrical wave, even when both are propagated in infinite free-space” <cit.>. The same reasoning can be applied to the second term, recognized to be the normalized admittance Y_d(Γ^-) = -i √(ε_r) J'_n(k_d a)/J_n(k_d a) for each incoming cylindrical harmonic traveling in a medium with relative permittivity ε_r <cit.>. Beyond the trivial case when ε_r = 1, Eq.
(<ref>) can therefore be interpreted as the fact that the scattering due to a certain harmonic can be minimized when the admittance (or impedance) of the two media (bare particle and background) is the same at the surface of the object for the harmonic of interest. As shown in Fig. <ref>, this interpretation of a homogeneous dielectric particle in a volumetric non-radiating condition, which can show weak scattering responses even without any loading surface for some frequency values, can serve as a surface admittance model for dielectric nanoparticles supporting a non-radiating anapole mode and for metamolecules <cit.>.

Having recast the non-radiating volumetric problem at the surface boundary Γ, it becomes straightforward to solve the cloaking problem as well. Eq. (<ref>) can now be altered and controlled through the insertion of a proper cloaking surface impedance, or mantle cloak, which is an additional normalized admittance sheet Y_s: the intentional choice of Γ, directly attached to the surface of the dielectric cylinder, leads the fictitious equivalent zero surface sources to be sustained and implemented by a physical dispersive cloak. Due to the insertion of such a lumped surface admittance in parallel, the relation in Eq. (<ref>) is modified into

Y_b(k_b a, n) - [ Y_d(k_b a, ε_r, n) + Y_s ] = 0,

where the dependence on the three main dimensionless variables of the cloaking problem is made explicit: the normalized size with respect to the background wavelength, k_b a = 2π a/λ (hereafter denoted x ≡ k_b a), the relative permittivity of the uncloaked object ε_r, and the cylindrical harmonic index n. Similarly to Eq. (<ref>), we can expect that the proper tailoring of Y_s in Eq. (<ref>) is able to match the two impedances on the surface of the object, and therefore suppress the scattering contribution from harmonic n. Without solving the entire Lorenz-Mie scattering problem for the overall cloaking configuration <cit.>, but in full consistency with it, the analytical formula (<ref>) predicts the required cloaking impedance Z_s = Z_B/Y_s = R_s + iX_s, where Z_B = √(μ_b/ε_b) is assumed as the reference background impedance, with respect to which all the other values are normalized (tilde sign on top), as shown in Fig. <ref>.

In essence, Eq. (<ref>) exploits the equivalence principle as applied to non-radiating problems: if the scattered field on a surface is identically zero, being sustained by the absence of sources, it must be zero everywhere outside the surface boundary Γ. Therefore, by making sure that the impedance is matched on the surface of interest, which is achieved by adding a suitably designed mantle cloak satisfying Eq. (<ref>), we can make sure that, for the harmonic of interest, the scattering is zero everywhere. Once the constitutive parameter ε_r and the radius a of the object to be hidden are defined, the residual scattering for each harmonic can be associated with the imaginary part of the difference between the background and dielectric object admittances. Considering the frequency regime x ≡ k_b a and each cylindrical harmonic n, we define the function Δ(x,n) as

-iΔ(x,n) ≡ Y_b(x, n) - Y_d(x, ε_r, n).

When such a residual function is different from zero for the bare particle, a surface load is required in order to achieve a cloaking effect for the harmonic of interest, and it is found to be equal to Y_s(x,n) = -iΔ(x,n), or explicitly

Y_s(x,n) = -i [ J'_n(x)/J_n(x) - √(ε_r) J'_n(√(ε_r) x)/J_n(√(ε_r) x) ]   ∀ n, ∀ x,

which is exactly the value needed to compensate such a residual difference or mismatch.
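As a numerical cross-check of the closed-form admittance above (an illustrative sketch, not material from the paper itself), Y_s(x,n) and the corresponding surface impedance Z_s = Z_B/Y_s can be evaluated with SciPy's Bessel routines jv and jvp; for x = 0.3π, ε_r = 3 and n = 0 this reproduces the optimal impedance value quoted in the example below.

```python
import numpy as np
from scipy.special import jv, jvp  # J_n(z) and its first derivative J'_n(z)

Z_B = 120 * np.pi  # free-space background impedance assumed in the text (Ohm)

def Y_s(x, n, eps_r):
    """Normalized cloaking admittance Y_s(x, n), with x = k_b * a."""
    xr = np.sqrt(eps_r) * x
    return -1j * (jvp(n, x) / jv(n, x) - np.sqrt(eps_r) * jvp(n, xr) / jv(n, xr))

def Z_s(x, n, eps_r):
    """Required surface impedance Z_s = Z_B / Y_s, in Ohm."""
    return Z_B / Y_s(x, n, eps_r)

print(Z_s(0.3 * np.pi, n=0, eps_r=3))  # ~ (0 + 216.8j) Ohm
```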
Interestingly, it is only over a single surface that each Y function (bare object, cloak and background) takes into account both the information about the field shape and the material constitutive parameters of the object: consistently with the surface equivalence principle <cit.>, this ensures that the cloaking effect is achieved at any arbitrary distance away from the initial surface boundary. Eq. (<ref>) is consistent with the closed-form analytical formula previously derived in the subwavelength (or quasi-static) frequency regime for mantle cloaks <cit.>. In the conformal case, γ ≡ a/a_c = 1, the value of the normalized surface admittance Y_s^QS = G_s^QS + iB_s^QS in the quasi-static regime can be obtained directly from Lorenz-Mie theory as <cit.>

X_s^QS = +2/(ω a ε_b (ε_r-1)),   B_s^QS = -x(ε_r-1)/2.

The same result is confirmed as a particular case of Eq. (<ref>), solved for the dominant mode n=0 in the quasi-static frequency regime: in the small-argument limit x → 0, the Bessel functions become J_0(x) ≈ 1 and J'_0(x) ≈ -x/2, thus the result reads

Y_s(x ≪ 1, n=0) = -i[ -x/2 + √(ε_r)·(√(ε_r) x/2) ] = -i x(ε_r-1)/2 = iB_s^QS.

We now consider a couple of cloaking examples for cylinders, in order to highlight the efficient analytical design of mantle cloaks based on this formula. Consider first a dielectric cylinder with a = 0.15λ (thus, x = 0.3π), possessing a relative permittivity ε_r = 3 with respect to the background ε_b = ε_0 (free space, thus Z_B = Z_0 = 120π Ω), loaded by a surface impedance at a_c = a to achieve an optimal scattering suppression. Using classical Lorenz-Mie theory as applied to the uncloaked cylinder plus the impedance sheet, the analytical result in the quasi-static condition, using Eq. (<ref>), leads to the normalized value Y_s^QS = -i 0.3π (Z_s^QS = +i400 Ω).

We can now analytically derive the optimal value of the required surface impedance by using Eq. (<ref>). Information about the dominant mode for a certain frequency regime x can be derived in a straightforward manner in terms of Δ(x,n), a dimensionless quantity that remains real for lossless scatterers and backgrounds: the larger the mismatch, the larger the (dominant) contribution of the n-th cylindrical harmonic to the outgoing scattered field. For this reason, the strategy adopted here to build the complete dispersion response of the normalized surface admittance is

Y_s^opt(x) = -iΔ(x, n̄),

where n̄ is the index of the dominant harmonic at x, i.e. the one maximizing |Δ(x,n)|. For monochromatic illumination, the value of the normalized admittance gives a surface impedance value of Z_s^opt = +i216.80 Ω in the frequency regime x = 0.3π, for which the first dominant mode to be suppressed turns out to be n=0.

In Fig. <ref>, the three cases are shown in terms of the absolute value of the scattered fields (here, the incoming field is completely polarized parallel to the cylinder's axis): the uncloaked dielectric object (a), the cloaked device with Z_s = Z_s^QS (b) and the cloaked system with Z_s = Z_s^opt (c). For a monochromatic incoming field, with TM_z polarization, traveling from the left to the right of each panel, the scattered field is maximum for the uncloaked dielectric, whereas it is clearly reduced for the two cloaked devices: for the impedance sheet with Z_s^QS, reflections exist in the backward direction, whereas for the impedance coating with Z_s^opt very low scattered energy is observed in the outside region. For wideband incoming signals, the optimal Z_s, once the geometrical and constitutive parameters of the cloaked system are fixed, can be analytically derived from the complete function Y_s(x) using Eq. (<ref>).
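A possible sketch of the dominant-mode strategy just described (again illustrative; it assumes, as stated above, that the dominant harmonic at a given x is the one with the largest mismatch |Δ(x,n)|, and it truncates the harmonic index at the arbitrary value n_max = 5). The jump of the dominant index discussed next corresponds to a crossing of the |Δ| curves.

```python
import numpy as np
from scipy.special import jv, jvp

Z_B = 120 * np.pi

def Delta(x, n, eps_r):
    """Real mismatch Delta(x, n): -i*Delta = Y_b - Y_d (lossless media)."""
    xr = np.sqrt(eps_r) * x
    y_s = -1j * (jvp(n, x) / jv(n, x) - np.sqrt(eps_r) * jvp(n, xr) / jv(n, xr))
    return (1j * y_s).real  # Y_s = -i*Delta  =>  Delta = i*Y_s

def Z_s_opt(x, eps_r, n_max=5):
    """Compensate the harmonic with the largest |Delta| at this x."""
    d = [Delta(x, n, eps_r) for n in range(n_max + 1)]
    n_dom = int(np.argmax(np.abs(d)))
    return n_dom, Z_B / (-1j * d[n_dom])

for x in np.linspace(0.1, 0.7, 7) * np.pi:  # D_lambda swept from 0.1 to 0.7
    n_dom, zs = Z_s_opt(x, eps_r=3)
    print(f"D_lambda = {x / np.pi:.2f}: dominant n = {n_dom}, X_s = {zs.imag:+.2f} Ohm")
```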
In order to suppress the first dominant mode at any frequency, it is expected that, as the frequency changes, the index n of the first dominant mode also moves, with consequent jumps in the dispersion of X_s as obtained from Eq. (<ref>). In order to validate this effect, the dispersion of the surface impedance Z_s^opt(x) is depicted as a function of the normalized diameter D_λ = x/π in the range [0.1÷0.7]λ for the same cylindrical object. Interestingly, the functional dependence of this surface impedance on the inverse of the wavelength (thus, directly proportional to ω) is monotonically decreasing: in order to realize the dispersion of such an admittance/impedance cloak, which would produce a broadband cloaking device, Foster's reactance theorem has to be broken <cit.>, and non-Foster metasurfaces, loaded with active elements <cit.>, have to be employed. For this reason, this formulation is appealing to explore the limitations dictated by all-passive Foster cloaks <cit.>.

As shown in Fig. <ref>, a jump arises when x = 0.45π, for which Δ(x,n=1) > Δ(x,n=0), and the first dominant term passes from n=0 (the same as in the quasi-static regime) to n=1 (and it is maintained up to the final value x = 0.7π). The gain in terms of SCS for the cloaked scenario is defined, with respect to the uncloaked scenario, as

G_SCS(x) = [ ∑_n=0^N_max (2 - δ_n,0) |c_n(x)|²_clk ] / [ ∑_n=0^N_max (2 - δ_n,0) |c_n(x)|²_unc ],

where δ_n,0 is the Kronecker delta, which takes into account the symmetric contribution of the ±n harmonics with respect to the central index n=0, as mentioned above. The SCS gain takes less-than-unity values (thus a negative sign in dB units) if the scattering of the uncloaked structure is very large with respect to the cloaked case. As reported in Fig. <ref>, this is the case over the entire frequency regime window. As also shown in Fig. <ref> at x = 0.3π, the improvement of the optimal surface impedance with proper dispersion (red triangle line) is around -10 dB with respect to the cloaked case with the quasi-static approximation formula (blue point line). Around the frequency regime value x = 0.45π ≈ 1.41, the function G_SCS^opt(x) becomes slightly worse than G_SCS^QS(x). This can be due to a trade-off achieved by Eq. (<ref>), aiming not at the cancellation of the entire harmonic n=0 but at a minimization of the overall mismatch for n=0 and n=1 simultaneously. From x = 0.45π to x = 0.55π, both gain functions are similar, because by chance this corresponds to similar values of Z_s as reported in Fig. <ref>; towards the end, while the quasi-static approximation design gets closer and closer to 0 dB (the uncloaked case), the analytical formula exploited for the first dominant term is able to achieve a drastic reduction of around -6 dB from x = 0.65π until the end of this frequency regime window. The real part of the total electric field is shown for the frequency regime x = 0.7π ≈ 2.20 in Fig. <ref>, for the uncloaked and optimally cloaked cases, with the first dominant mode (n=1) suppressed with Z_s^opt = +i4.93 Ω. When the overall object size is such that x ≥ 2, it becomes more and more challenging to reduce the SCS using a single impedance sheet, as observed beyond this frequency window (not reported here).
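Once the scattering coefficients c_n of the cloaked and uncloaked configurations are available from a full Lorenz-Mie solution (not reproduced here), the SCS gain above reduces to two weighted sums; a minimal sketch with placeholder coefficient arrays:

```python
import numpy as np

def g_scs_db(c_clk, c_unc):
    """SCS gain: (2 - delta_{n,0})-weighted sums of |c_n|^2, expressed in dB."""
    w = np.array([1.0] + [2.0] * (len(c_clk) - 1))  # n = 0 once, |n| >= 1 twice
    g = np.sum(w * np.abs(c_clk) ** 2) / np.sum(w * np.abs(c_unc) ** 2)
    return 10 * np.log10(g)

# Placeholder coefficients for N_max = 3 (a real check would use Mie-computed c_n).
print(g_scs_db(c_clk=[0.01, 0.05, 0.02, 0.001], c_unc=[0.6, 0.3, 0.05, 0.01]))
```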
This implies that for x ≥ 2 the scattering is dominated by two (or more) different cylindrical harmonics; since each cloaking shell (even in the volumetric case) can control one single harmonic at a time, only the first dominant term can be directly cancelled, which does not ensure a reduction in the overall SCS if the second dominant term (or the third, and so on) is comparable with the first contribution. However, a systematic generalization of this surface admittance equivalence principle towards multilayer impedance cloaks is under investigation.

In conclusion, a reformulation of the surface equivalence principle, originally stated in terms of discontinuities of tangential field components <cit.>, has been proposed in terms of field ratios at the same surface boundary. For non-radiating and cloaking problems, the overall scattering interactions can be written in terms of the admittance functions of the bare object (without or with cloak) and of the background, calculated over a surface of choice. For non-radiating problems, the zero-surface-sources solution of the Devaney-Wolf theorem provides an admittance model relevant to anapole modes and metamolecules with weak radiation properties <cit.>. For cloaking problems, the closed-form solution for the required surface impedance is adjusted to ensure zero scattering for a specific cylindrical harmonic excitation: previous findings, based on Lorenz-Mie scattering theory, are confirmed in the quasi-static regime. Comparisons, with an analysis of the role of the frequency regime x in terms of harmonic scattering control, have also been performed, validating this cloaking admittance model, which is appealing for the design of non-Foster metasurfaces. These findings can be generalized in a straightforward manner towards multilayer structures and, in addition, to any wave phenomenon once the admittance/impedance concept, as envisioned by Schelkunoff <cit.>, can be properly defined and applied.

[Schelkunoff_1] S. A. Schelkunoff, “Some equivalence theorems of electromagnetics and their application to radiation problems”, Bell System Tech. J., Vol. 15, pp. 92-112, 1936.
[Imp_Gen] S. A. Schelkunoff, “The Impedance Concept and Its Application to Problems of Reflection, Refraction, Shielding and Power Absorption”, Bell Labs Technical Journal, Vol. 17, No. 1, 1938.
[Dev_Wolf] A. J. Devaney and E. Wolf, Physical Review D 8, 4 (1973).
[PC] A. Alù and N. Engheta, “Achieving transparency with plasmonic and metamaterial coatings”, Physical Review E, Vol. 72, No. 1, 2005.
[TO1] U. Leonhardt, “Optical Conformal Mapping”, Science, Vol. 312, No. 781, 2006.
[TO2] J. B. Pendry, D. Schurig and D. R. Smith, “Controlling electromagnetic fields”, Science, Vol. 312, No. 781, 2006.
[MC] A. Alù, “Mantle cloak: Invisibility induced by a surface”, Physical Review B, Vol. 80, 245115, 2009.
[Grbic] C. Pfeiffer and A. Grbic, “Metamaterial Huygens' Surfaces: Tailoring Wave Fronts with Reflectionless Sheets”, Phys. Rev. Lett., Vol. 110, 197401, 2013.
[Eleft] M. Selvanayagam and G. V. Eleftheriades, “An Active Electromagnetic Cloak Using the Equivalence Principle”, IEEE Antennas and Wireless Propagation Letters, Vol. 11, pp. 1226-1229, 2012.
[Patt_Meta] P.-Y. Chen and A. Alù, “Mantle cloaking using thin patterned metasurfaces”, Physical Review B, Vol. 84, 205110, 2011.
[Fost_Meta] P.-Y. Chen, C. Argyropoulos and A. Alù, “Broadening the Cloaking Bandwidth with Non-Foster Metasurfaces”, Physical Review Letters, Vol. 111, 233001, 2013.
[Bohren] C. F. Bohren and D. R. Huffman, Absorption and Scattering of Light by Small Particles (Wiley, New York, 1983).
[Elastic] M. Farhat, P.-Y. Chen, H. Bagci, S. Enoch, S. Guenneau and A. Alù, “Platonic Scattering Cancellation for Bending Waves in a Thin Plate”, Scientific Reports, Vol. 4, 4644, 2014.
[Thermal] M. Farhat, P.-Y. Chen, H. Bagci, C. Amra, S. Guenneau and A. Alù, “Thermal invisibility based on scattering cancellation and mantle cloaking”, Scientific Reports, Vol. 5, 9876, 2015.
[Acoustic] Y. I. Bobrovnitskii, “Impedance acoustic cloaking”, New Journal of Physics, Vol. 12, 2010.
[Laby_opex] G. Labate and L. Matekovits, “Invisibility and cloaking structures as weak or strong solutions of Devaney-Wolf theorem”, Optics Express, Vol. 24, No. 17, pp. 19245-19253, 2016.
[Marcuvitz] N. Marcuvitz, Waveguide Handbook, Electromagnetic Wave Series 21, Sec. 1.7, pp. 29-47, IET, 1986.
[ana1] A. E. Miroshnichenko, A. B. Evlyukhin, Y. F. Yu, R. M. Bakker, A. Chipouline, A. I. Kuznetsov, B. Luk'yanchuk, B. N. Chichkov and Y. S. Kivshar, “Nonradiating anapole modes in dielectric nanoparticles”, Nat. Commun., Vol. 6, No. 8069, 2015.
[ana2] A. A. Basharin, V. Chuguevsky, N. Volsky, M. Kafesaki and E. N. Economou, “Extremely high Q-factor metamaterials due to anapole excitation”, Phys. Rev. B 95, 035104, 2017.
[Foster] R. M. Foster, “A Reactance Theorem”, Bell System Technical Journal, Vol. 3, No. 259, 1924.
[Mont_inv_exp] F. Monticone and A. Alù, “Invisibility exposed: physical bounds on passive cloaking”, Optica, Vol. 3, No. 7, pp. 718-724, 2016. | http://arxiv.org/abs/1704.00039v1 | {
"authors": [
"Giuseppe Labate",
"Andrea Alù",
"Ladislau Matekovits"
],
"categories": [
"physics.class-ph",
"physics.optics"
],
"primary_category": "physics.class-ph",
"published": "20170327174342",
"title": "A Surface Admittance Equivalence Principle for Non-Radiating and Cloaking Problems"
} |
Distributed Voting/Ranking with Optimal Number of States per Node

Saber Salehkaleybar, Member, IEEE, Arsalan Sharif-Nassab, and S. Jamaloddin Golestani, Fellow, IEEE
Dept. of Electrical Engineering, Sharif University of Technology, Tehran, Iran
Emails: [email protected], [email protected], [email protected]
December 30, 2023
========================================================================================================================================================================================================

Considering a network with n nodes, where each node initially votes for one (or more) choices out of K possible choices, we present a Distributed Multi-choice Voting/Ranking (DMVR) algorithm to determine either the choice with the maximum vote (the voting problem) or the ranking of all the choices in terms of their acquired votes (the ranking problem). The algorithm consolidates node votes across the network by updating the states of interacting nodes using two key operations: the union and the intersection. The proposed algorithm is simple, independent of the network size, and easily scalable in terms of the number of choices K, using only K × 2^(K-1) nodal states for voting, and K × K! nodal states for ranking. We prove the number of states to be optimal in the ranking case; this optimality is conjectured to also apply to the voting case. The time complexity of the algorithm is analyzed in complete graphs. We show that the time complexity for both ranking and voting is O(log(n)) for given vote percentages, and is inversely proportional to the minimum of the vote percentage differences among the various choices.

§ INTRODUCTION

One of the key building blocks in distributed function computation is “Majority Voting”. It can be employed as a subroutine in many network applications such as target detection in sensor networks <cit.>, distributed hypothesis testing <cit.>, quantized consensus <cit.>, voting in distributed systems <cit.>, and molecular nanorobots <cit.>. In distributed majority voting, each node chooses a candidate from a set of choices, and the goal is to determine the candidate with the majority vote by running a distributed algorithm. As an example, in target detection <cit.>, wireless sensors combine their binary decisions about the presence of a target through majority voting, and send a report to the fusion center if the majority is in favor of presence.

The majority voting problem for the binary case has been extensively studied in the cellular automata (CA) literature. In <cit.>, it has been shown that there is no synchronous deterministic two-state automaton that can solve the binary voting problem in a connected network. Several two-state automata have been proposed for the ring topology <cit.>, the most successful of which obtains the correct result in nearly 83% of the initial configurations of selected votes <cit.>. In order to circumvent the impossibility result of <cit.>, asynchronous and probabilistic automata have also been presented in the CA community <cit.>. However, none of them can obtain the correct result with probability one <cit.>. Using a different approach, the binary voting problem can be solved by a randomized gossip algorithm <cit.> that computes the average of the initial node values.
The drawback of this approach is that the number of required states in its quantized version <cit.> grows linearly with the network size <cit.>.

In applying gossip algorithms to the implementation of binary majority voting, a node does not need to come up with the exact average of the node values; it suffices to determine the interval to which the average of the node values belongs. From this observation, Bénézit et al. <cit.> proposed an elegant solution based on an automaton with the state space {0, 0.5^-, 0.5^+, 1}, which resembles the idea in <cit.>. The initial state of a node is 0 or 1, according to its vote. When two neighbor nodes get in contact with each other, they exchange their states and update them according to a transition rule. It can be shown that the states of all nodes will be in the set {0, 0.5^-} at the end of the algorithm if the choice “0” is in majority. Otherwise, the states of all nodes will belong to the set {0.5^+, 1}. In <cit.>, a Pairwise Asynchronous Graph Automata (PAGA) has been used to extend the above idea to the multiple-choice voting problem, and sufficient conditions for convergence are stated. This approach results in a 15-state automaton and a 100-state automaton for the ternary and quaternary voting problems, respectively. For majority voting with more than four choices, pairwise and parallel comparison among the choices has been proposed <cit.>, requiring Θ(2^(K(K-1))) states in terms of the number of choices K. At the end, the authors posed a few open problems. One of the main problems is whether voting automata exist for any number of choices without running multiple binary or ternary voting automata in parallel, and, furthermore, what the minimum number of states of a possible solution is. In more recent works <cit.>, it has been shown that the majority vote can be obtained with high probability if the initial votes are sufficiently biased towards the majority or the network size is large enough. However, none of these works can guarantee convergence to the correct result.

A generalization of the distributed voting problem is the distributed ranking problem, in which the goal is to rank all K choices in terms of the number of votes each gets from the network nodes <cit.>. In this paper, we propose a Distributed Multi-choice Voting/Ranking (DMVR) algorithm for solving the majority voting and ranking problems in general networks. The proposed algorithm may also be applied where each node is allowed to vote for more than one choice. Our main contributions are summarized as follows: * Our proposed DMVR algorithm provides a simple and easily scalable approach for distributed voting and ranking that works for any number K of choices, requiring K × 2^(K-1) and K × K! states for the voting and ranking problems, respectively. For instance, the number of required states is 12 for ternary voting and 32 for quaternary voting, compared to 15 and 100 states, respectively, for the PAGA algorithm <cit.>. Furthermore, unlike the randomized gossip algorithms <cit.>, the number of states is independent of the network size. * We establish a lower bound on the number of required states in any ranking algorithm, and show that the DMVR algorithm achieves this bound. Compared to the existing algorithms, the state of the DMVR algorithm can be encoded by roughly Θ(K log(K)) bits. * In complete graphs, we analyze the time complexity of the DMVR algorithm for the ranking problem.
We will show how the time complexity is related to the percentages of nodes voting for the different choices. Besides, we propose a modification for speeding up the DMVR algorithm in the majority voting problem.

The remainder of this paper is organized as follows: In Section II, the DMVR algorithm for majority voting and ranking is described. Section III studies the convergence of the DMVR algorithm. Furthermore, the number of states of the DMVR algorithm is analyzed in both the voting and ranking cases. Section IV is devoted to analyzing the time complexity of the DMVR algorithm in complete graphs. In Section V, simulation results are provided. Finally, we conclude the paper in Section VI.

§ THE DISTRIBUTED MULTI-CHOICE VOTING/RANKING (DMVR) ALGORITHM

§.§ Problem Statement

Consider a network with n nodes. The topology of the network is represented by a connected undirected graph, G=(V,E), with the vertex set V={1,...,n} and the edge set E ⊆ V×V, such that (i,j) ∈ E if and only if nodes i and j can communicate directly. Furthermore, it is assumed that each node is equipped with a local clock which ticks according to a Poisson process with rate one. Initially, each node i chooses a choice from a set of K choices 𝒞={c_1,⋯,c_K}. Let #c_k be the number of nodes that select the choice c_k, and ρ_k ≜ #c_k/n. In the majority voting problem, the goal is to find the choice in majority, i.e. the choice c_k satisfying #c_k ≥ #c_j for all j ∈ {1,⋯,K}. In the ranking problem, the desired output is a permutation [π_1,⋯,π_K] of 𝒞 such that #π_k ≥ #π_k+1 for all k.

§.§ Description of the DMVR algorithm

A value set v_i(t) is associated with each node i at time t. At t=0, the only member of v_i(0) is the selected choice of node i. Throughout the algorithm, v_i(t) always remains a subset of 𝒞. The algorithm essentially performs two functions. One function deals with consolidating node choices across the network. This function utilizes two key operations, the union and the intersection, in order to update the value sets v_i(t) and v_j(t) of nodes i and j when they interact. The second function has to do with disseminating the consolidated result of the first one throughout the network. For reasons to be clarified later, the above two functions of the algorithm are executed in parallel, not sequentially. The dissemination function operates on a collection of sets m_i,k(t), 1 ≤ k ≤ K, at each node i. We collectively refer to the sets m_i,k(t), 1 ≤ k ≤ K, of node i as its memory. Each m_i,k(t) is a subset of 𝒞. Unlike the dissemination function, the consolidation function is identical for voting and ranking. In the following, we describe the dissemination function for the more general case, i.e. for ranking, the dissemination for the voting case being a special and simplified version of it.

When node i's clock ticks at time t, it chooses one of its neighbor nodes, say node j, at random. Then, nodes i and j update their value sets and memories according to the following transition rules:

v_i(t^+) := v_i(t) ∪ v_j(t),  v_j(t^+) := v_i(t) ∩ v_j(t),   if |v_i(t)| ≤ |v_j(t)|,
v_i(t^+) := v_i(t) ∩ v_j(t),  v_j(t^+) := v_i(t) ∪ v_j(t),   otherwise,

m_i,|v_i(t^+)|(t^+) := v_i(t^+),   m_j,|v_j(t^+)|(t^+) := v_j(t^+),

where there is no memory updating if the corresponding value set v_i(t^+) (or v_j(t^+)) is ∅.
Furthermore, we have m_i,k(t^+) = m_i,k(t) for all k ≠ |v_i(t^+)| and all i ∈ {1,⋯,n}. When the algorithm converges[In Section III, Theorem 1, we will describe when the algorithm eventually converges to the correct result.], each node i can obtain the correct ranking as follows[The “\” is the set-theoretic difference operator, i.e. A\B = {x: x∈A, x∉B} for any sets A and B.]:

π_k = m_i,k(t) \ m_i,k-1(t)  for k > 1,   π_1 = m_i,1(t).

In the case of the majority voting problem, it suffices to keep the memory m_i,1(t) at each node i. We denote m_i,1(t) by m_i(t) when the DMVR algorithm is executed for the majority voting problem. The description of the DMVR algorithm is given in Algorithm 1.

Suppose that |v_i(t)| ≤ |v_j(t)|. It is not difficult to show that the updating rule in (<ref>) has the following properties: * Define the size of choice c_k as |{i | c_k ∈ v_i(t)}|. The size of every choice c_k is preserved during the updates. * We have v_j(t^+) ⊆ v_i(t^+). If v_i(t) ⊆ v_j(t), the two nodes just exchange their value sets. * The quantity |v_i(t)|² + |v_j(t)|² strictly increases if v_i(t) ⊈ v_j(t); otherwise, it remains unchanged.

§ CONVERGENCE ANALYSIS

In this section, we show that the DMVR algorithm converges to the correct solution for the majority voting and ranking problems. First, we study how value sets consolidate and reach a convergence set, by defining a Lyapunov function. Then, we discuss how memory updating disseminates the correct result in parallel with the value-set updating. Next, we merge the value sets and memories of the DMVR algorithm in order to reduce the memory usage in both the majority voting and ranking problems. At the end, we prove that the proposed implementation is optimal in terms of the required number of states for the ranking problem.

§.§ Consolidation of Value Sets

In this part, we analyze how value sets consolidate in the network until the state of the system reaches a convergence set. Let the network state vector at time t be defined as X(t)=[v_1(t),⋯,v_n(t)]. The set of all state vectors X(t)=[v_1(t),⋯,v_n(t)] with the following property is called the convergence set and is denoted by 𝒳_0:

|v_i(t)| ≤ |v_j(t)| ⟹ v_i(t) ⊆ v_j(t),  ∀ i,j ∈ {1,⋯,n}.

Consider a network of n=8 nodes and three possible choices 𝒞={c_1,c_2,c_3}. Assume that X(0)=[{c_1},{c_1},{c_2},{c_3},{c_1},{c_3},{c_2},{c_1}]. The state vector X(t)=[{c_1},∅,{c_1,c_2,c_3},∅,∅,{c_1,c_2},{c_1,c_3},∅] cannot be in the set 𝒳_0, since {c_1,c_2} ⊈ {c_1,c_3}. However, the state vector X(t)=[{c_1,c_2,c_3},∅,{c_1,c_2,c_3},∅,∅,{c_1},{c_1},∅] is a member of the convergence set.

If X(τ) ∈ 𝒳_0 at some time τ > 0, then X(t) ∈ 𝒳_0 for all t ≥ τ. Indeed, assume that the state vector X(τ) is in 𝒳_0. If two nodes i and j get in contact with each other at any time t > τ, then, according to Definition <ref>, the outputs of the transition, i.e. the sets v_i(t) ∩ v_j(t) and v_i(t) ∪ v_j(t), are simply v_i(t) and v_j(t) (in one order or the other). Since X(τ) ∈ 𝒳_0, X(t) is also in 𝒳_0. Thus, the proof is complete.

The Lyapunov function V(X(t)) is defined as follows:

V(X(t)) = nK² - ∑_i=1^n |v_i(t)|².

If two nodes i and j get in contact with each other at time t, and v_i(t) ⊈ v_j(t), v_j(t) ⊈ v_i(t), then there is a reduction in the Lyapunov function V(X(t)). Suppose that |v_i(t)| = l, |v_j(t)| = l', and |v_i(t) ∩ v_j(t)| = r. The change in the Lyapunov function is

V(X(t^+)) - V(X(t)) = -[(l+l'-r)² + r²] + (l² + l'²) ≤ -1,

which is strictly less than zero for all 0 ≤ r ≤ min(l,l')-1. Let X(0)=x.
We denote the time at which the state vector hits the set 𝒳_0 for the first time by τ_x, i.e.,

τ_x = min{t > 0 | X(t) ∈ 𝒳_0}.

Let Y(t) be a random walk on the network starting from node p. We define T_pq to be the first time Y(t) visits node q:

T_pq = min{t ≥ 0 | Y(0)=p, Y(t)=q},

and the worst-case hitting time, σ, is defined as follows:

σ = max_p,q∈V 𝔼{T_pq}.

There exists ϵ > 0 such that

𝔼{V(X(t+2σ)) - V(X(t)) | X(t)=y} ≤ -ϵ,  y ∉ 𝒳_0,

where σ is defined in (<ref>). To see this, consider two random walks Y(t) and Z(t) on the network, starting from nodes p and q, respectively. We define the coalescing time of the two random walks Y(t), Z(t) as follows:

C_pq = min{t ≥ 0 | Y(t)=Z(t), Y(0)=p, Z(0)=q}.

It follows from the Markov inequality that

min_p,q∈V ℙ{C_pq ≤ t_0} ≥ 1 - σ/t_0.

If X(t) ∉ 𝒳_0, then there exist v_i(t), v_j(t) such that v_i(t) ⊈ v_j(t) and v_j(t) ⊈ v_i(t). The corresponding random walks meet each other (or some other intermediate such value sets) by time t+2σ with probability at least 1/2, which results in a reduction of V(X(t)) according to Lemma <ref>. Otherwise, X(t) is in the set 𝒳_0.

The state vector X(t) hits the set 𝒳_0 in bounded time with probability one. Indeed, the Lyapunov function V(X(t)) is lower bounded by zero; according to Foster's criteria (see <cit.>, page 21) and Lemma <ref>, it follows that

ℙ(τ_x < ∞) = 1,  ∀ x ∉ 𝒳_0.

§.§ Dissemination of Result in Memories

In this part, we show how memory updating disseminates the correct result in the network. Without loss of generality, in the remainder of this paper we assume that #c_1 > #c_2 > ⋯ > #c_K.[We assume that #c_K > 0. Otherwise, we can reduce the problem to the case with fewer choices. With slight modifications, the DMVR algorithm can also work for the cases where we have choices of equal size.] Assume that the state vector X(t) gets in 𝒳_0 at time τ. We define the vector v^⋆=[v^1,⋯,v^K] as follows:

v^k = v_i(τ) if there exists i ∈ {1,⋯,n} with |v_i(τ)| = k, and v^k = ∅ otherwise,

and r_k(t) = |{i | |v_i(t)| = k}|.

The vector [π_1,⋯,π_K] defined in (<ref>) gives the correct ranking of the choices in a finite time with probability one. From Lemma <ref>, we know that the state vector X(t) eventually gets in the set 𝒳_0 at time τ_x. For a choice c_k, let α(k) be the smallest index such that c_k ∈ v^α(k). According to the definition of the convergence set and the preservation property, we know that

#c_k = ∑_i=α(k)^K r_i(t).

Hence, we have #c_k > #c_k' ⟺ α(k) < α(k'). Thus, based on the assumption #c_1 > #c_2 > ⋯ > #c_K, the only possibility is r_k(t) > 0 and α(k) = k for all k ∈ {1,⋯,K} and all t > τ_x. Since r_k(t) > 0, there exists at least one value set v^k, 1 ≤ k ≤ K, in the network. The value sets v^k take random walks in the network and set the memories m_i,k(t) to v^k for all i ∈ {1,⋯,n}. Let τ' > τ_x be the time at which m_i,k(t) = v^k for all i ∈ {1,⋯,n} and 1 ≤ k ≤ K. Based on the definition of α(k), we have c_k = v^α(k) \ v^α(k)-1 = m_i,k(t) \ m_i,k-1(t). Hence, all nodes obtain the correct ranking after time τ'.

From the above theorem, m_i,1(t) gives the choice in majority. Hence, the DMVR algorithm can solve the majority voting problem by just updating m_i(t) = m_i,1(t). In the proposed solution, each node can also vote for more than one choice. To do so, it is sufficient to initialize the value set of each node to the union of its preferred choices. In this general case as well, the DMVR algorithm gives the correct ranking based on the sizes of the choices.

§.§ State-optimal Implementation of the DMVR Algorithm

For the case of majority voting, the state of node i is the pair (m_i(t), v_i(t)), where the sets m_i(t) and v_i(t) have K and 2^K possible states, respectively. Thus, the total number of states is K × 2^K.
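To make the mechanics concrete, the following Python sketch (illustrative only; the vote multiset, contact count and random seed are arbitrary choices) implements the pairwise update of Section II together with the ranking extraction of Theorem 1:

```python
import random

def dmvr_contact(v_i, v_j, m_i, m_j):
    """One DMVR interaction: the smaller value set takes the union, the other the intersection."""
    if len(v_i) <= len(v_j):
        v_i, v_j = v_i | v_j, v_i & v_j
    else:
        v_i, v_j = v_i & v_j, v_i | v_j
    for v, m in ((v_i, m_i), (v_j, m_j)):
        if v:                    # no memory update on an empty value set
            m[len(v)] = set(v)   # m[k] stores the last held value set of size k
    return v_i, v_j

def ranking_from_memories(m, K):
    """Recover [pi_1, ..., pi_K] via pi_1 = m[1] and pi_k = m[k] minus m[k-1]."""
    out = [next(iter(m[1]))]
    for k in range(2, K + 1):
        out.append(next(iter(m[k] - m[k - 1])))
    return out

random.seed(0)
votes = ["a"] * 5 + ["b"] * 2 + ["c"] * 1            # #a > #b > #c
v = [{c} for c in votes]
m = [{1: set(vi)} for vi in v]                       # memories m[i][k]
for _ in range(20000):                               # random contacts (complete graph)
    i, j = random.sample(range(len(v)), 2)
    v[i], v[j] = dmvr_contact(v[i], v[j], m[i], m[j])
print(ranking_from_memories(m[0], K=3))              # with high probability: ['a', 'b', 'c']
```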
However, we can implement the DMVR algorithm with fewer states by adding the following rules: * If the output of the updating rule, v_i(t) ∩ v_j(t), is the empty set, we replace it by the set 𝒞. * If m_i(t^+) ⊈ v_i(t^+), we select a random member c_k ∈ v_i(t^+) and change m_i(t^+) to {c_k}; otherwise, m_i(t^+) remains unchanged. Then, the state of node i is saved in the form (m_i(t^+), v_i(t^+) \ m_i(t^+)).

When X(t) ∈ 𝒳_0, there is at least one value set {c_1} in the network. When this value set meets a new node with m_i(t) ≠ {c_1}, it updates m_i(t) to {c_1}, and m_i(t) will never change after that, because {c_1} ⊆ v_i(t) for all i ∈ {1,⋯,n} once X(t) ∈ 𝒳_0. For a fixed m_i(t^+), the number of possible states for v_i(t^+) \ m_i(t^+) is 2^(K-1). Consequently, the total number of states reduces to K × 2^(K-1). As an example, in ternary voting, we have the following 12 states:

({c_1}, ∅), ({c_1},{c_2}), ({c_1},{c_3}), ({c_1},{c_2,c_3})
({c_2}, ∅), ({c_2},{c_1}), ({c_2},{c_3}), ({c_2},{c_1,c_3})
({c_3}, ∅), ({c_3},{c_1}), ({c_3},{c_2}), ({c_3},{c_1,c_2})

Thus, the number of states for ternary voting is 12, compared to 15 for the PAGA automaton <cit.>, and it is equal to 32 for quaternary voting, while the number of states for the PAGA is 100.

In the case of the ranking problem, we replace the value set and memories by an ordered K-tuple a_i(t)=[a_i^1(t),⋯,a_i^K(t)], which is a permutation of the set 𝒞, along with an integer 1 ≤ p_i(t) ≤ K which we perceive as a pointer to an entry of a_i(t). At the beginning of the algorithm, each node i puts its preferred choice in the first entry of a_i(0), and sets an arbitrary permutation of the other choices in the remaining entries. It also sets p_i(0)=1. Let a_i^1:p_i(t) be the p_i(t)-tuple containing the first p_i(t) entries of a_i(t), and let {a_i^1:p_i(t)} be its set representation without any order. For a set A ⊆ 𝒞, we define Π_A,a_i(t) as the permutation of A that preserves the order of entries in accordance with a_i(t). Now, assume that two nodes i and j get in contact with each other at time t, and let p_i(t) ≤ p_j(t). Then, we apply the following updating rule:

a_i(t^+) := [Π_A_1, a_i(t), Π_A_2\A_1, a_i(t), Π_𝒞\A_2, a_i(t)],
a_j(t^+) := [Π_A_1, a_j(t), Π_A_2\A_1, a_j(t), Π_𝒞\A_2, a_j(t)],

where A_1 = {a_i^1:p_i(t)} ∩ {a_j^1:p_j(t)} and A_2 = {a_i^1:p_i(t)} ∪ {a_j^1:p_j(t)}. We also set p_i(t^+) := |A_2| and p_j(t^+) := |A_1|. It is not difficult to verify that this form of implementing the DMVR algorithm solves the ranking problem by the same arguments as in Theorem <ref>. From the above implementation, we can run the DMVR algorithm for the ranking problem with K × K! states, where the factors K and K! are the numbers of possible values of the pointer p_i(t) and of the ordered tuple a_i(t), respectively.

Suppose that 𝒞={c_1,c_2,c_3,c_4}. Two examples of the updating rule in (<ref>) are given as follows:

a_i(t)=[↓1,3,2,4], a_j(t)=[2,↓1,4,3] ⟶ a_i(t^+)=[1,↓2,3,4], a_j(t^+)=[↓1,2,4,3],
a_i(t)=[↓1,4,2,3], a_j(t)=[↓3,1,2,4] ⟶ a_i(t^+)=[1,↓3,4,2], a_j(t^+)=[3,1,2,↓4],

where the arrow symbol points to the p_i(t)-th entry of a_i(t).

The following theorem gives a lower bound on the required number of states of any ranking algorithm. This bound meets the number of states of the DMVR algorithm, proving its optimality for the ranking problem. Theorem: Any algorithm that finds the correct ranking in a finite time with probability one over an arbitrary network topology requires at least K × K! states per node. It suffices to show that the theorem holds for complete graphs.
Suppose that the ranking problem can be solved by running an algorithm 𝒜 at each node in a finite time. Consider that each node i has a state s_i(t). Let the integer r correspond to the ranking [π_1,⋯,π_K], 1 ≤ r ≤ K!. We define a class of states, 𝒮_r, as the set of states which nodes associate with ranking r. Algorithm 𝒜 is said to converge to ranking r at time τ if s_i(t) ∈ 𝒮_r for all i ∈ {1,⋯,n} and all t ≥ τ. We denote the members of the class 𝒮_r by 𝒮_r={𝒮_r^1,⋯,𝒮_r^D_r}, where D_r=|𝒮_r|. Let N_1 be the set of initial configurations that are consistent with the ranking r=[π_1,⋯,π_K], i.e.,

N_1 = {[#π_1,⋯,#π_K] ∈ ℤ_+^K : ∑_i=1^K #π_i = n; #π_i > #π_j, ∀ i < j}.

Let n_i, 1 ≤ i ≤ D_r, be the number of nodes in {1,⋯,n} whose state is 𝒮_r^i. In complete graphs, the whole information of [s_1(t),⋯,s_n(t)] can be represented by the vector [n_1,⋯,n_D_r]. Thus, the class 𝒮_r corresponds to a subset of the following set:

N_2 = {[n_1,⋯,n_D_r] ∈ ℤ_+^D_r : ∑_i=1^D_r n_i = n}.

Let B_z be the set of vectors [n_1,⋯,n_D_r] that are achievable from the initial configuration z ∈ N_1.

Consider two initial configurations x,y ∈ N_1, x ≠ y. Then, we have B_x ∩ B_y = ∅. We prove this by contradiction. Suppose that there exists x_0 ∈ B_x ∩ B_y. We run algorithm 𝒜 in a complete graph with nodes {1,⋯,n+n'} for the two different initial configurations x,y of nodes {1,⋯,n}. Algorithm 𝒜 should give the correct result for any scheduling of the local clocks at the nodes. Consider a scheduling in which only the clocks of nodes in the set {1,⋯,n} tick up to time T, and the states of these nodes become x_0. Now, suppose that a centralized algorithm 𝒜_c wants to obtain the correct ranking by just looking at the states of nodes in {1,⋯,n} and the votes of nodes in {n+1,⋯,n+n'}. Algorithm 𝒜_c finds the ranking of the votes of nodes in {1,⋯,n} from the vector x_0. But the centralized algorithm still needs to obtain all the differences #π_i - #π_i+1, 1 ≤ i ≤ K-1; otherwise, it cannot rank the votes of the nodes in the whole network correctly. However, the two different initial configurations x,y are mapped to the same state vector x_0. Consequently, algorithm 𝒜_c cannot recover the correct initial configuration, which is a contradiction. Thus, the proof of the lemma is complete.

We know that |N_2| = \binom{n+D_r-1}{D_r-1} = Θ(n^(D_r-1)). Furthermore, we have

|N_1| = (1/K!) |{[#π_1,⋯,#π_K] ∈ ℤ_+^K : ∑_i=1^K #π_i = n, #π_i ≠ #π_j for i ≠ j}| ≥ (1/K!) ( |{[#π_1,⋯,#π_K] ∈ ℤ_+^K : ∑_i=1^K #π_i = n}| - ∑_j<k |{[#π_1,⋯,#π_K] ∈ ℤ_+^K : ∑_i=1^K #π_i = n, #π_j = #π_k}| ) = (1/K!) [ \binom{n+K-1}{K-1} - \binom{K}{2} \binom{n+K-2}{K-2} ] = Θ(n^(K-1)).

Since the sets B_x, x ∈ N_1, are nonempty and pairwise disjoint subsets of N_2 by Lemma <ref>, the mapping F: x ↦ B_x is invertible, and we must have |N_1| ≤ |N_2|. For sufficiently large n, this can occur only if D_r ≥ K. Consequently, it can be concluded that each class 𝒮_r has at least K members, and the total number of states is at least K × K!.

§ TIME COMPLEXITY

In this section, we first analyze the time complexity of the DMVR algorithm for the binary voting problem in complete graphs. Then, we study the multiple-choice case and derive a tight bound on the running time of the DMVR algorithm for the ranking problem. At the end, we propose a method to speed up the DMVR algorithm in the majority voting problem.

§.§ Binary Voting Case

In order to study the time complexity of the DMVR algorithm, we divide its execution time into two phases: * First phase (extinction of {c_2}): This phase starts at the beginning of the algorithm and continues until none of the value sets is {c_2}. We denote the finishing time of this phase by τ_1. * Second phase (dissemination of {c_1} in memories): This phase follows the first phase and ends when the memories of all nodes are {c_1}.
The execution time of this phase is represented by τ_2.

§.§.§ Time complexity of the first phase

For the binary case, the transition rule of the DMVR algorithm is exactly the same as that of the PAGA algorithm. In <cit.>, an upper bound, O(log(n)/(1-2ρ)), is given for the PAGA algorithm in complete graphs, where ρ_2=ρ. Here, we give the exact average time complexity of the first phase. Suppose that the numbers of nodes voting for c_1 and c_2 are s=n(1-ρ) and r=nρ at the beginning of the algorithm. We denote the sets of nodes having value sets {c_1} and {c_2} at time t by S_1(t) and S_2(t), respectively. Consider the Markov chain in Fig. <ref>. The state r-i, 0 ≤ i ≤ r, represents the number of nodes whose value set is {c_2}. Suppose that the state of the Markov chain is r-i at time t. The chain undergoes a transition from state r-i to r-i-1 if one of the nodes in the set S_2(t) gets in contact with one of the nodes in S_1(t), which occurs with rate 2(r-i)(s-i)/n. After updating the value sets, both |S_1(t)| and |S_2(t)| decrease by exactly one. Let T_r-i^1 be the sojourn time in state r-i. Hence, we have

𝔼{τ_1} = ∑_i=0^r-1 𝔼{T^1_r-i} = ∑_i=0^r-1 n/(2(r-i)(s-i)) ≈ (n/(2(s-r))) log(r(s-r)/s),

in time units. Thus, the average running time of the first phase is 𝔼{τ_1} ≈ (1/(2(1-2ρ))) log(nρ(1-2ρ)/(1-ρ)) time units. Furthermore, we can obtain the variance of τ_1 as follows:

Var(τ_1) = ∑_i=0^r-1 Var{T^1_r-i} = ∑_i=0^r-1 n²/(4(r-i)²(s-i)²).

§.§.§ Time complexity of the second phase

At the beginning of the second phase, the number of nodes in S_1(t) is n(1-2ρ), and all the remaining nodes have value sets {c_1,c_2} or ∅. Furthermore, in the extreme case, the memories of all of these nodes are {c_2}. Consider the Markov chain in Fig. <ref>. The state r-i, 0 ≤ i ≤ r, represents the number of nodes having value set {c_1,c_2} or ∅ with memory {c_2}. We denote the set of such nodes by ℳ_2(t). There is a reduction in |ℳ_2(t)| if and only if a node in ℳ_2(t) gets in contact with a node in S_1(t). If |ℳ_2(t)| = r-i, then with rate 2(r-i)(n-2nρ)/n there is a transition from state r-i to state r-i-1. Let T_r-i^2 be the sojourn time in state r-i. Then, we have

𝔼{τ_2} ≤ ∑_i=0^r-1 𝔼{T^2_r-i} = ∑_i=0^r-1 n/(2(r-i)(n-2nρ)) ≈ (1/(2(1-2ρ))) log(2nρ).

Thus, we can conclude that the time complexity of the DMVR algorithm is

𝔼{τ_1+τ_2} ≤ (1/(2(1-2ρ))) ( log(nρ(1-2ρ)/(1-ρ)) + log(2nρ) ).

§.§ Multiple Choice Voting Case

Consider two choices c_k and c_l. From the state vector X(t)=[v_1(t),⋯,v_n(t)], we define a new state vector X^k,l(t)=[v'_1(t),⋯,v'_n(t)] by projecting the value set of each node i on {c_k,c_l}, i.e. v'_i(t) = v_i(t) ∩ {c_k,c_l}. Thus, the projected state vector X^k,l(t) represents the execution path of a binary voting with just the two choices c_k, c_l. We define 𝒳_0^k,l to be the convergence set of the projected system as follows:

X^k,l(t) ∈ 𝒳_0^k,l  ⟺  ( |v'_i(t)| ≤ |v'_j(t)| ⟹ v'_i(t) ⊆ v'_j(t),  ∀ i,j ∈ {1,⋯,n} ).

Let τ_x and τ_x^k,l be the times at which the state vectors X(t) and X^k,l(t) hit their corresponding convergence sets. Then, we have

τ_x = max_k,l∈V, k≠l τ_x^k,l.

First, we prove that τ_x ≥ max_k,l∈V, k≠l τ_x^k,l. If X(t) ∈ 𝒳_0, the value sets are totally ordered by inclusion, and hence so are their projections; a totally ordered family satisfies the defining property of the convergence set, so X^k,l(t) ∈ 𝒳_0^k,l for all k,l ∈ {1,⋯,K}, k ≠ l.

Now, we show that τ_x ≤ max_k,l∈V, k≠l τ_x^k,l. Consider any two nodes i and j at time t = max_k,l∈V, k≠l τ_x^k,l. Without loss of generality, assume that |v_i(t)| ≤ |v_j(t)|. We will show that v_i(t) ⊆ v_j(t). By contradiction, suppose that there exists a choice c_k such that c_k ∈ v_i(t) and c_k ∉ v_j(t).
Now, consider any choice c_l ∈ v_j(t). Since t ≥ τ_x^k,l, the state vector X^k,l(t) has already hit its convergence set. Given c_k ∈ v_i(t), c_k ∉ v_j(t) and c_l ∈ v_j(t), this can occur only if v_i(t) ∩ {c_k,c_l} = {c_k,c_l} and v_j(t) ∩ {c_k,c_l} = {c_l}. Hence, we can conclude that c_l ∈ v_i(t) for all c_l ∈ v_j(t). However, this means that |v_i(t)| > |v_j(t)|, which is a contradiction.

From (<ref>) and (<ref>), 𝔼{τ_x^k,l} and Var(τ_x^k,l) can be obtained by substituting r and s with nρ_k and nρ_l, respectively.

(Order Statistics <cit.>) Let [Z_1,⋯,Z_R] denote R ≥ 2 random variables (not necessarily independent or identically distributed) with means [μ_r] and variances [σ_r²]. Let Z_max = max_r=1,⋯,R Z_r. Then, we have

𝔼{Z_max} ≤ (1/R)∑_r=1^R μ_r + √( ((R-1)/R) ∑_r=1^R [ σ_r² + (μ_r - (1/R)∑_r'=1^R μ_r')² ] ).

Now, we can derive an upper bound on 𝔼{τ_x} from the above lemma:

𝔼{τ_x} ≤ μ + √( ∑_k,l∈V, k≠l [ Var(τ_x^k,l) + (𝔼{τ_x^k,l} - μ)² ] ) = O( log(n) / min_j=1,⋯,K-1 (ρ_j - ρ_j+1) ),

where μ = (1/\binom{K}{2}) ∑_k,l∈V, k≠l 𝔼{τ_x^k,l}.

After the state vector gets in the convergence set, we should still wait for the vector v^⋆ to be copied into the memories of all nodes. At time τ_x, the number of nodes with value set {c_1,c_2,⋯,c_j} is nρ_j - nρ_j+1, 1 ≤ j < K. Let ℳ'_j be the set of such nodes and τ'_j the time until the memories m_i,j(t) of all nodes are set to {c_1,⋯,c_j}. When a node i gets in contact with any node in ℳ'_j, its memory m_i,j(t) is set to {c_1,⋯,c_j}. By the same arguments as in the previous part, we have

𝔼{τ'_j} ≤ (n/2) ∑_i=0^n-1 1/((n-i)(nρ_j - nρ_j+1)) ≈ (1/(2(ρ_j - ρ_j+1))) log(n),
Var(τ'_j) ≤ (n²/4) ∑_i=0^n-1 1/((n-i)²(nρ_j - nρ_j+1)²) ≈ 1/(4(ρ_j - ρ_j+1)²).

Thus, an upper bound on τ' = max_1≤j<K τ'_j can be obtained from order statistics:

𝔼{τ'} ≤ μ' + √( ∑_j=1^K-1 [ Var(τ'_j) + (𝔼{τ'_j} - μ')² ] ) = O( log(n) / min_j=1,⋯,K-1 (ρ_j - ρ_j+1) ),

where μ' = (1/(K-1)) ∑_j=1^K-1 𝔼{τ'_j}. From the bounds in (<ref>) and (<ref>), we can conclude that the time complexity of the DMVR algorithm is O( log(n) / min_j=1,⋯,K-1 (ρ_j - ρ_j+1) ).

§.§ Speeding up the DMVR algorithm for the majority voting problem

The execution time of the DMVR algorithm can be divided into two phases: the first phase starts at time zero and ends when the state vector X(t) gets in the convergence set 𝒳_0; afterwards, the second phase starts, and it terminates when the memories of all nodes are set to the majority vote. In order to speed up the second phase, we add a further memory-update rule, applied whenever two nodes i and j get in contact with each other at time t. It is worth mentioning that the added rule is executed from the beginning of the algorithm. The idea behind this rule is that even nodes with value sets other than the majority vote cooperate in spreading the majority vote into the memories of all nodes. We call the proposed solution the enhanced version of the DMVR algorithm. Simulation results show that the enhanced version of the DMVR algorithm speeds up the DMVR algorithm in complete-graph, torus, and ring networks.

Each node converges to the majority vote in a finite time with probability one by running the enhanced version of the DMVR algorithm. To see this, let τ_x be the time at which the state vector X(t) gets in the convergence set. From then on, the only value set of size one in the network is {c_1}. Consider the vector M(t)=[m_1(t),⋯,m_n(t)]. We define the Lyapunov function V'(M(t)), t > τ_x, as follows:

V'(M(t)) = n - |{i | c_1 ∈ m_i(t)}|,  t > τ_x.

Suppose that two nodes i and j get in contact with each other at time t > τ_x. Let M_0 be the vector of length n with all entries equal to {c_1}. Then, we have

𝔼{V'(M(t^+)) - V'(M(t)) | M(t)=y} ≤ -ϵ  if |v_i(t^+)| = 1 or |v_j(t^+)| = 1,
𝔼{V'(M(t^+)) - V'(M(t)) | M(t)=y} = 0   if |v_i(t^+)| > 1 and |v_j(t^+)| > 1,

where y ≠ M_0.
§.§ Speeding up the DMVR algorithm for the majority voting problem

The execution time of the DMVR algorithm can be divided into two phases: The first phase starts at time zero and ends when the state vector X(t) gets into the convergence set 𝒳_0. Afterwards, the second phase starts, and it terminates when the memories of all nodes are set to the majority vote. In order to speed up the second phase, we add the following rule, applied whenever two nodes i and j get in contact with each other at time t:

It is worth mentioning that the added rule is executed from the beginning of the algorithm. The idea behind this rule is that even nodes with value sets other than the majority vote cooperate in spreading the majority vote to the memories of all nodes. We call the proposed solution the enhanced version of the DMVR algorithm. Simulation results show that the enhanced version of the DMVR algorithm speeds up the DMVR algorithm in complete-graph, torus, and ring networks.

Each node converges to the majority vote in finite time with probability one by running the enhanced version of the DMVR algorithm.

Let τ_x be the time at which the state vector X(t) gets into the convergence set. From then on, the only value set of size one in the network is {c_1}. Consider the vector M(t)=[m_1(t),⋯,m_n(t)]. We define the Lyapunov function V^'(M(t)), t>τ_x, as follows:

V^'(M(t)) = n - |{i | c_1∈ m_i(t)}|, t>τ_x.

Suppose that two nodes i and j get in contact with each other at time t>τ_x. Let M_0 be the vector of length n with all entries equal to {c_1}. Then, for y≠ M_0, we have:

𝔼{V^'(M(t^+))-V^'(M(t)) | M(t)=y} ≤ -ϵ if |v_i(t^+)|=1 or |v_j(t^+)|=1, and = 0 if |v_i(t^+)|>1 and |v_j(t^+)|>1.

Hence, by the same arguments as in Lemma <ref>, we conclude that M(t) converges to the vector M_0 with probability one.

§ SIMULATIONS

In this section, we evaluate the time complexity of the DMVR algorithm through simulations and compare it with the PAGA automaton in binary and ternary voting. Furthermore, we study the proposed time-complexity bounds in complete graphs. Each point in the simulations is averaged over 1000 runs.

We compare the proposed bounds on 𝔼{τ_1} and 𝔼{τ_2} derived in (<ref>) and (<ref>) with simulation results for binary voting in Fig. <ref>. As can be seen, the bound on 𝔼{τ_1} is exact, as we expected, while there is a constant gap between simulation and analysis for 𝔼{τ_2}.

In Fig. <ref>, the time complexities of the DMVR algorithm, its enhanced version, and the PAGA automaton are depicted versus ρ_1. Since the transition rule of the DMVR algorithm is identical to that of the PAGA automaton in the binary case, the performances of the two algorithms are very close to each other. However, the enhanced version of the DMVR algorithm outperforms the other two algorithms as ρ_1 gets close to 0.5. In Fig. <ref>, we can also see this trend in ring and torus networks.

For the ternary voting problem, we consider the percentages of initial votes in the form [ρ_1,ρ_2,ρ_3]=[1/3+δ, 1/3, 1/3-δ] where 0<δ<1/3. In Fig. <ref>, the time complexities of the enhanced version of the DMVR algorithm and the PAGA automaton are given for δ∈[0.005,0.041] and n=198. As can be seen, the enhanced version of the DMVR algorithm outperforms the PAGA automaton for small δ.

Finally, we compare the bounds on 𝔼{τ_x} and 𝔼{τ^'} derived in (<ref>) and (<ref>) for the ranking problem with three votes. In Fig. <ref>, the bounds from order statistics show only a small gap with respect to the simulation results, and they predict the behaviour of the DMVR algorithm accurately based on the percentages of initial votes.

§ CONCLUSIONS

In this paper, we proposed the DMVR algorithm in order to solve the majority voting and ranking problems for any number of choices. The DMVR algorithm is a simple solution with bounded memory, and it is optimal for the ranking problem in terms of the number of states. Furthermore, we analyzed the time complexity of the DMVR algorithm and showed that it is inversely related to min_{i=1,⋯,K-1}(ρ_i-ρ_{i+1}). As future work, it would be important to obtain the minimum required number of states for solving the majority voting problem. We conjecture that the DMVR algorithm is an optimal solution for the majority voting problem, i.e. at least K×2^{K-1} states are required for any possible solution.
"authors": [
"Saber Salehkaleybar",
"Arsalan Sharif-Nassab",
"S. Jamaloddin Golestani"
],
"categories": [
"cs.DC",
"cs.LG"
],
"primary_category": "cs.DC",
"published": "20170326161931",
"title": "Distributed Voting/Ranking with Optimal Number of States per Node"
} |
ZU-TH 06/17
CERN-TH-2017-065

W^±Z production at the LHC: fiducial cross sections and distributions in NNLO QCD

Massimiliano Grazzini^(a), Stefan Kallweit^(b), Dirk Rathlev^(a) and Marius Wiesemann^(a,b)

^(a)Physik-Institut, Universität Zürich, CH-8057 Zürich, Switzerland
^(b)TH Division, Physics Department, CERN, CH-1211 Geneva 23, Switzerland

Abstract

We report on the first fully differential calculation for W^±Z production in hadron collisions up to next-to-next-to-leading order (NNLO) in QCD perturbation theory. Leptonic decays of the W and Z bosons are consistently taken into account, i.e. we include all resonant and non-resonant diagrams that contribute to the process pp→ℓ^'±ν_ℓ^'ℓ^+ℓ^-+X both in the same-flavour (ℓ'=ℓ) and the different-flavour (ℓ'≠ℓ) channel. Fiducial cross sections and distributions are presented in the presence of standard selection cuts applied in the experimental W^±Z analyses by ATLAS and CMS at centre-of-mass energies of 8 and 13 TeV. As previously shown for the inclusive cross section, NNLO corrections increase the NLO result by about 10%, thereby leading to an improved agreement with experimental data. The importance of NNLO-accurate predictions is also shown in the case of new-physics scenarios, where, especially in high-p_T categories, their impact can reach O(20%). The availability of differential NNLO predictions will play a crucial role in the rich physics programme that is based on precision studies of W^±Z signatures at the LHC.

March 2017

§ INTRODUCTION

The production of a pair of vector bosons is among the most relevant physics processes at the Large Hadron Collider (LHC). Besides playing a central role in precision tests of the gauge structure of electroweak (EW) interactions and in studies of the mechanism of EW symmetry breaking, vector-boson pair production constitutes an irreducible background in most of the Higgs-boson measurements and in many searches for physics beyond the Standard Model (SM). The production of W^±Z pairs, in particular, offers a valuable test of the triple gauge-boson couplings, and is an important SM background in many SUSY searches (see e.g. Morrissey:2009tf). The W^±Z cross section has been measured at the Tevatron <cit.> and at the LHC for centre-of-mass energies of 7 TeV <cit.>, 8 TeV <cit.> and 13 TeV <cit.>. Thanks to the increasing reach in energy of LHC Run 2, more statistics (the above-cited 13 TeV results are based only on the early 2015 data) will make W^±Z measurements a powerful tool to extend the current bounds on the corresponding anomalous couplings. To this purpose, a good control over the SM predictions in the tails of some kinematic distributions is particularly important.

As a SM background, W^±Z production is especially relevant in searches based on final states with three leptons and missing transverse energy, which feature a clean experimental signature but miss a full reconstruction of the W boson. As a result, the irreducible W^±Z background is not easily extracted from data with a simple side-band approach.

For the above reasons, the availability of accurate theoretical predictions of the differential W^±Z cross section is necessary in order to ensure a high sensitivity to anomalous couplings and to control the SM background in searches based on the trilepton plus missing transverse energy signature.

Accurate theoretical predictions for the W^±Z cross section were obtained at NLO in perturbative QCD a long time ago <cit.>. Leptonic decays of the W and Z bosons were added only a few years later <cit.>, while initially omitting spin correlations in the virtual matrix elements.
The first complete off-shell computations, including leptonic decays and spin correlations, were performed <cit.> after the relevant one-loop helicity amplitudes <cit.> became available. The corresponding computation of the off-shell W^±Z + jet cross section at NLO was presented in Campanario:2010hp. EW corrections to W^±Z production are known only in an on-shell approach <cit.> so far. Recently, the first NNLO QCD prediction of the inclusive W^±Z cross section became available in Grazzini:2016swo. Due to the difference of the W- and Z-boson masses, this computation already used the off-shell two-loop helicity amplitudes of Gehrmann:2015ora (another calculation of these amplitudes was described in Caola:2014iua), which allow for the computation of all vector-boson pair production processes, including leptonic decays, spin correlations and off-shell effects.

W^±Z production is the only remaining di-boson process for which a fully exclusive NNLO calculation was not available so far. In this paper, we finally fill this gap by presenting, for the first time, NNLO-accurate fully differential predictions for the W^±Z cross section. More precisely, our off-shell calculation includes the leptonic decays of the vector bosons by considering the full process that leads to three leptons and one neutrino,

pp→ℓ^'±ν_ℓ^'ℓ^+ℓ^-+X,

in both the same-flavour (ℓ'=ℓ) and the different-flavour (ℓ'≠ℓ) channel. Thereby, we take into account all non-resonant, single-resonant and double-resonant components, including intermediate W^±γ^* contributions and all interference effects as well as spin correlations and off-shell effects, consistently in the complex-mass scheme <cit.>.

Our calculation is performed in the MATRIX framework [MATRIX is the abbreviation of “MUNICH Automates qT subtraction and Resummation to Integrate X-sections”, by M. Grazzini, S. Kallweit, D. Rathlev, M. Wiesemann. In preparation.], which applies the q_T-subtraction <cit.> and q_T-resummation <cit.> formalisms in their process-independent implementation within the Monte Carlo program MUNICH [MUNICH is the abbreviation of “MUlti-chaNnel Integrator at Swiss (CH) precision” — an automated parton-level NLO generator by S. Kallweit. In preparation.]. MUNICH facilitates the fully automated computation of NLO corrections to any SM process by using the Catani–Seymour dipole subtraction method <cit.>, an efficient phase-space integration, as well as an interface to the one-loop generator OpenLoops <cit.> to obtain all required (spin- and colour-correlated) tree-level and one-loop amplitudes. For numerical stability in the tensor reductions of the one-loop amplitudes, OpenLoops relies on the Collier library <cit.>. Our implementation of q_T subtraction and resummation [The first application of the transverse-momentum resummation framework implemented in MATRIX at NNLL+NNLO to on-shell ZZ and W^+W^- production was presented in Grazzini:2015wpa (see also Wiesemann:2016tae for more details).] for the production of colourless final states is fully general, and it is based on the universality of the hard-collinear coefficients <cit.> appearing in the transverse-momentum resummation formalism. These coefficients were explicitly computed for quark-initiated processes in Catani:2012qa, Gehrmann:2012ze, Gehrmann:2014yya. For the two-loop helicity amplitudes we use the results of Gehrmann:2015ora, and of Matsuura:1988sm for Drell–Yan-like topologies. Their implementation in MATRIX is applicable to any four-lepton final state.
This widely automated framework has already been used, in combination with the two-loop scattering amplitudes of Gehrmann:2011ab, Gehrmann:2015ora, for the calculations of Zγ <cit.>, ZZ <cit.>, W^+W^- <cit.>, W^±γ <cit.> and W^±Z <cit.> production at NNLO QCD, as well as in the resummed computations of the ZZ and W^+W^- transverse-momentum spectra <cit.> at NNLL+NNLO.

NNLO corrections to the W^±Z process have been shown to be sizeable already in the case of the total inclusive cross section <cit.>. This is explained by the existence of an approximate radiation zero <cit.> at LO, which is broken only by real corrections starting at NLO. In this paper we will show that NNLO corrections to W^±Z production are equally relevant to provide reliable QCD predictions for fiducial cross sections and distributions, and to obtain agreement with the LHC data. At the same time, the inclusion of NNLO corrections will be shown to be essential to obtain a good control of SM backgrounds in SUSY searches based on the trilepton + missing energy signature <cit.>.

The manuscript is organized as follows. In sec:calculation we give details on the technical implementation of our computation, including a brief introduction of the MATRIX framework (subsec:matrix) and a discussion of the stability of the W^±Z cross section at (N)NLO based on q_T subtraction (subsec:stability). sec:results gives an extensive collection of numerical results for pp→ℓ^(')±ν_ℓ^(')ℓ^+ℓ^-+X: We present cross sections (sec:fiducial) and distributions (sec:distributions) in the fiducial volume for W^±Z measurements, including their comparison to experimental data, where available, and with cuts corresponding to new-physics searches (sec:results-np). The main results are summarized in sec:summary.

§ DESCRIPTION OF THE CALCULATION

We study the process

pp→ℓ^'±ν_ℓ^'ℓ^+ℓ^-+X, ℓ,ℓ'∈{e,μ},

including all Feynman diagrams that contribute to the production of three charged leptons — one opposite-sign, same-flavour (OSSF) lepton pair, and another charged lepton of either the same (ℓ'=ℓ) or a different (ℓ'≠ℓ) flavour, later referred to as same-flavour (SF) and different-flavour (DF) channels — and one corresponding neutrino. Our calculation is performed in the complex-mass scheme <cit.>, and besides the W^±Z resonances, it includes also contributions from off-shell EW bosons and all relevant interferences; no resonance approximation is applied. Our implementation can deal with any combination of leptonic flavours, ℓ,ℓ^'∈{e,μ,τ}. For the sake of brevity, we will often denote this process as W^±Z production, though.

The ℓ^'±ν_ℓ^'ℓ^+ℓ^- final states are generated, as shown in fig:Borndiagrams for the ud̅→ℓ^'+ν_ℓ^'ℓ^-ℓ^+ process at LO, (a) via resonant t-channel W^+Z production with subsequent W^+→ℓ^'+ν_ℓ^' and Z→ℓ^-ℓ^+ decays, where the intermediate Z boson can be replaced by an off-shell photon γ^∗; (b) via s-channel production topologies through a triple-gauge-boson vertex WWZ or WWγ with subsequent Z→ℓ^-ℓ^+ and W^+→ℓ^'+ν_ℓ^' decays; (c) and (d) via single-resonant W^±(∗) production, where the off-shell W boson decays into the full four-lepton final state, with the ℓ^-ℓ^+ pair radiated either from the W boson itself or from one of its decay products. In the SF channel, each diagram has to be supplemented with the analogous diagram obtained by exchanging the momenta of the identical charged leptons, but the generic resonance structure is not modified as compared to the DF channel. Note that in both SF and DF channels the appearance of infrared (IR) divergent γ^∗→ℓ^-ℓ^+ splittings prevents a fully inclusive phase-space integration for massless leptons.
In the DF channel, the usual experimental requirement of a mass window around the Z-boson mass for the OSSF lepton pair is already sufficient to avoid such divergences and render the cross section finite, while in the SF channel a lepton separation must be applied to both possible combinations of OSSF lepton pairs.

The NNLO computation requires the following scattering amplitudes at 𝒪(α_s^2):
* tree amplitudes for qq̅^'→ℓ^'±ν_ℓ^'ℓ^+ℓ^- gg and qq̅^'→ℓ^'±ν_ℓ^'ℓ^+ℓ^- q^''q̅^'', and crossing-related processes;
* one-loop amplitudes for qq̅^'→ℓ^'±ν_ℓ^'ℓ^+ℓ^- g, and crossing-related processes;
* squared one-loop and two-loop amplitudes for qq̅^'→ℓ^'±ν_ℓ^'ℓ^+ℓ^-.

The qq̅^' pair is of type ud̅ and du̅ for W^+Z and W^-Z production, respectively, and q^''=q or q^''=q^' are explicitly allowed. Note that there is no loop-induced gg channel in W^±Z production due to the electric charge of the final state.

All required tree-level and one-loop amplitudes are obtained from the OpenLoops generator <cit.>, which implements a fast numerical recursion for the calculation of NLO scattering amplitudes within the SM. For the numerically stable evaluation of tensor integrals we employ the Collier library <cit.>, which is based on the Denner–Dittmaier reduction techniques <cit.> and the scalar integrals of Denner:2010tr. To guarantee numerical stability in exceptional phase-space regions — more precisely, for phase-space points where the two independent tensor-reduction implementations of Collier disagree by more than a certain threshold — OpenLoops provides a rescue system based on the quadruple-precision implementation of CutTools <cit.>, which applies scalar integrals from OneLOop <cit.>. For the two-loop helicity amplitudes we rely on a public C++ library <cit.> that implements the results of Gehrmann:2015ora, and for the numerical evaluation of the relevant multiple polylogarithms we use the implementation <cit.> in the GiNaC library <cit.>. The contribution of the massive-quark loops is neglected in the two-loop amplitudes, but accounted for everywhere else.

§.§ Organization of the calculation in MATRIX

The widely automated framework MATRIX is used for our NNLO calculation of the W^±Z cross section. MATRIX entails a fully automated implementation of the q_T-subtraction formalism to compute NNLO corrections, and is thus applicable to any production process of an arbitrary set of colourless final-state particles in hadronic collisions, as long as the respective two-loop virtual amplitudes of the Born-level process are known.
On the same basis, MATRIX automates also the small-q_T resummation of logarithmically enhanced terms at NNLL accuracy (see Grazzini:2015wpa, and Wiesemann:2016tae for more details).

The core of the MATRIX framework is the Monte Carlo program MUNICH, which includes a fully automated implementation of the Catani–Seymour dipole-subtraction method for massless <cit.> and massive <cit.> partons, an efficient phase-space integration, as well as an interface to the one-loop generator OpenLoops <cit.> to obtain all required (spin- and colour-correlated) tree-level and one-loop amplitudes. The extension of MUNICH and OpenLoops to deal with EW corrections <cit.> allows for the fully automated computation of EW and QCD corrections to arbitrary SM processes at NLO accuracy. Through an extension of MUNICH by a generic implementation of the q_T-subtraction and q_T-resummation techniques, MATRIX achieves NNLL+NNLO accuracy in QCD for the production of colourless final states at a level of automation that is limited only by the process dependence of the two-loop amplitudes that enter the hard-collinear coefficient H^F_NNLO. Any other process-dependent constituents of the calculation are formally (N)LO quantities and can thus be automatically computed by MUNICH+OpenLoops.

In order to give some technical details on its practical implementation, we recall the master formula of q_T subtraction for the calculation of the pp→F+X cross section at (N)NLO accuracy:

σ^F_(N)NLO = H^F_(N)NLO ⊗ σ^F_LO + [ σ^F+jet_(N)LO - σ^CT_(N)NLO ].

In eq:main the label F denotes an arbitrary combination of colourless particles, and σ^F+jet_(N)LO is the (N)LO cross section for F+jet production. The explicit expression of the process-independent counterterm σ^CT_(N)NLO is provided in Bozzi:2005wk. The general structure of the hard-collinear coefficient H^F_NLO is known from deFlorian:2001zd, and that of H^F_NNLO from Catani:2013tia. The latter exploits the explicit results for Higgs <cit.> and vector-boson <cit.> production. More details on the implementation of eq:main in MATRIX can be found in Grazzini:2016ctr.

The subtraction in the square brackets of eq:main is not local, but the cross section is formally finite in the limit q_T→0. In practice, a technical cut on q_T is introduced to render σ^F+jet_(N)LO and σ^CT_(N)NLO separately finite. In this respect, the q_T-subtraction method is very similar to a phase-space slicing method. It turns out that a cut, r_cut, on the dimensionless quantity r=q_T/M, where M denotes the invariant mass of F, is more convenient from a practical point of view. The absence of any residual logarithmic dependence on r_cut is strong evidence of the correctness of the computation, as any mismatch between the contributions would result in a divergence of the cross section when r_cut→0. The remaining power-suppressed contributions vanish in that limit, and can be controlled by monitoring the r_cut dependence of the cross section.

§.§ Stability of q_T subtraction for W^±Z production

In the following we investigate the stability of the q_T-subtraction approach for pp→ℓ^'±ν_ℓ^'ℓ^+ℓ^-+X. To this end, in fig:stability we plot the NLO and NNLO cross sections as functions of the q_T-subtraction cut, r_cut, which acts on the dimensionless variable r=q_T/M. Sample validation plots are presented for two scenarios investigated in this paper, namely the ATLAS analysis at 13 TeV and the CMS analysis at 8 TeV (see sec:fiducial), summed over all leptonic channels contributing to the ℓ^'±ν_ℓ^'ℓ^+ℓ^- final state.
All other scenarios considered in the paper lead essentially to the same conclusions.

At NLO the r_cut-independent cross section obtained with Catani–Seymour subtraction is used as a reference for the validation of the q_T-subtraction result. The comparison of the NLO cross sections in the left panels of fig:stability demonstrates that q_T subtraction agrees at the sub-permille level with the r_cut-independent result. This is true already at the moderate value of r_cut=1%. At NNLO, where an r_cut-independent control result is not available, we observe no significant dependence, i.e. beyond the numerical uncertainties, below about r_cut=1%; we thus use the finite-r_cut results to extrapolate to r_cut=0, taking into account the breakdown of predictivity for very low r_cut values, and conservatively estimate a numerical error due to the r_cut dependence of our results. [In the NNLO calculation the O(α_s) contributions are evaluated by using Catani–Seymour subtraction.] This procedure allows us to control all NNLO predictions for fiducial cross sections presented in sec:results to better than one permille in terms of numerical uncertainties. An analogous bin-wise extrapolation procedure was also performed for all distributions under consideration in sec:results, and no significant dependence on r_cut was found, thus confirming the robustness of our results also at the differential level.
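As an illustration of the extrapolation step, the following sketch fits a weighted straight line σ(r_cut)=a+b·r_cut to toy finite-r_cut results and quotes the intercept as the r_cut→0 estimate. The numbers are invented for illustration only, and the linear Ansatz is just one possible choice; the text above does not commit to a specific fit form.

```python
import numpy as np

# Toy finite-r_cut NNLO cross sections (fb) with Monte Carlo errors;
# these values are purely illustrative, not results of this paper.
r_cut = np.array([0.0005, 0.001, 0.0025, 0.005, 0.01])
sigma = np.array([24.70, 24.69, 24.66, 24.61, 24.52])
err   = np.array([0.03, 0.02, 0.02, 0.02, 0.02])

# Weighted least-squares fit sigma(r_cut) = a + b * r_cut;
# the intercept a estimates the r_cut -> 0 cross section.
w = 1.0 / err**2
A = np.vstack([np.ones_like(r_cut), r_cut]).T
cov = np.linalg.inv(A.T @ (A * w[:, None]))
a, b = cov @ (A.T @ (w * sigma))
print(f"sigma(r_cut -> 0) = {a:.3f} +- {np.sqrt(cov[0, 0]):.3f} fb")
```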
§ RESULTS

In this section we present our results on fiducial cross sections and distributions for W^±Z production in proton–proton collisions as defined in eq:process. We thus consider the inclusive production of three leptons and one neutrino including all possible flavour combinations, apart from channels involving τ leptons. In particular, this involves the SF channels e^±ν_e e^+e^- and μ^±ν_μ μ^+μ^- as well as the DF channels μ^±ν_μ e^+e^- and e^±ν_e μ^+μ^-. Because of the availability of experimental results we consider LHC energies of 8 and 13 TeV and compare our predictions to the respective measurements by ATLAS and CMS. We finally study the impact of QCD radiative corrections when selection cuts designed for new-physics searches are applied.

For the input of the weak parameters we apply the G_μ scheme with complex W- and Z-boson masses to define the EW mixing angle as cos^2θ_W = (m_W^2-iΓ_W m_W)/(m_Z^2-iΓ_Z m_Z). We use the PDG <cit.> values G_F = 1.16639×10^-5 GeV^-2, m_W=80.385 GeV, Γ_W=2.0854 GeV, m_Z=91.1876 GeV, Γ_Z=2.4952 GeV, and m_t=173.2 GeV. The CKM matrix is set to unity. [The numerical effect of the CKM matrix up to NLO is to reduce the cross section by less than 1%. K-factors are generally affected below the numerical uncertainties.] We consider N_f=5 massless quark flavours, and we use the corresponding NNPDF3.0 <cit.> sets of parton distributions (PDFs) with α_s(m_Z)=0.118. In particular, N^nLO (n=0,1,2) predictions are obtained by using PDFs at the respective perturbative order and the evolution of α_s at (n+1)-loop order, as provided by the PDF set.

Our reference choice for the renormalization (μ_R) and factorization (μ_F) scales is μ_R=μ_F=μ_0≡(m_Z+m_W)/2=85.7863 GeV. Uncertainties from missing higher-order contributions are estimated as usual by independently varying μ_R and μ_F in the range 0.5μ_0≤μ_R,μ_F≤2μ_0, with the constraint 0.5≤μ_R/μ_F≤2. We note that a fixed scale choice is only adequate as long as the scales in the kinematic distributions do not become too large, which is indeed the case in the fiducial phase-space regions of W^±Z measurements (see sec:fiducial and sec:distributions). For backgrounds in new-physics searches, on the other hand, which typically focus on the high-p_T tails of distributions, a dynamic scale is more appropriate, as discussed and applied in sec:results-np. [In W^±Z measurements the tails of the p_T,Z and m_T,WZ (see eq:mTWZ) distributions are particularly sensitive to triple-gauge couplings. In such high-p_T regions, where also EW corrections play a non-negligible role, the choice of a dynamical scale turns out to be more appropriate. The extraction of the triple-gauge couplings, however, is not considered in the present paper.]

§.§ Fiducial cross sections

We start the presentation of our results by considering fiducial cross sections. We compute the W^±Z cross section up to NNLO in the same phase space defined by the LHC experiments and compare our results with ATLAS data at 8 <cit.> and 13 TeV <cit.>, and with CMS data at 13 TeV <cit.>. The selection cuts defining the ATLAS and CMS fiducial volumes are summarized in tab:cuts.

The fiducial cuts used by ATLAS are identical at both collider energies, and they are close to the applied event-selection cuts <cit.>. The cuts require an identification of the leptons stemming from the Z and W bosons. This is trivial in the DF channel, where they are unambiguously assigned to the parent boson. In the SF channel, there are, in a theoretical computation of W^±Z production, two possible combinations of opposite-sign leptons that can be matched to the Z boson. ATLAS applies the so-called resonant-shape procedure <cit.>, where, among the two possible assignments, the one that maximizes the estimator

P = |1/(m^2_ℓℓ-m^2_Z+iΓ_Z m_Z)|^2 · |1/(m^2_ℓ'ν_ℓ'-m^2_W+iΓ_W m_W)|^2

is chosen. After this identification, the cuts involve standard requirements on the transverse momenta and pseudo-rapidities of the leptons as well as lepton separations in the ΔR=√(Δη^2+Δϕ^2) plane. The latter already regularize all possible divergences from collinear γ^∗→ℓ^-ℓ^+ splittings by implying an effective invariant-mass cut on each OSSF lepton pair. The invariant mass of the lepton pair assigned to the Z-boson decay is further required not to deviate by more than 10 GeV from the Z-boson mass, and the transverse mass of the W boson, defined as

m_T,W = √((E_T,ℓ^'+E_T,ν_ℓ^')^2 - p^2_T,W) with E_T,x^2=m_x^2+p_T,x^2,

where p_T,W denotes the transverse momentum of the ℓ^'ν_ℓ^' system, is bounded from below.
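For concreteness, a minimal sketch of the two assignment procedures follows (our own illustration; the function names are ours, not from any experimental code). For each of the two possible OSSF pairings in the SF channel it takes the dilepton invariant mass and the invariant mass of the remaining lepton–neutrino system, and picks the Z candidate either by maximizing the resonant-shape estimator P or by minimizing |m_ℓℓ-m_Z|.

```python
MZ, GZ = 91.1876, 2.4952  # GeV, as in the setup of this section
MW, GW = 80.385, 2.0854

def resonant_shape_P(m_ll, m_lnu):
    """ATLAS resonant-shape estimator P for one candidate assignment:
    product of the squared moduli of the Z and W Breit-Wigner propagators."""
    pz = 1.0 / ((m_ll**2 - MZ**2) ** 2 + (GZ * MZ) ** 2)
    pw = 1.0 / ((m_lnu**2 - MW**2) ** 2 + (GW * MW) ** 2)
    return pz * pw

def pick_z_pair(pairings, mode="resonant-shape"):
    """pairings: list of (m_ll, m_lnu) tuples for the possible OSSF
    assignments; returns the index of the pairing identified as the Z."""
    if mode == "resonant-shape":  # ATLAS: maximize P
        return max(range(len(pairings)),
                   key=lambda i: resonant_shape_P(*pairings[i]))
    # "closest-mass" (CMS): dilepton mass closest to the Z-boson mass
    return min(range(len(pairings)),
               key=lambda i: abs(pairings[i][0] - MZ))
```

The two procedures are contrasted quantitatively in the new-physics discussion of sec:results-np, where the closest-mass choice turns out to populate the m_T,W tail in the SF channel much more strongly.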
A CMS measurement of the fiducial cross section is available only at 13 TeV <cit.>. The analysis applies a simple identification of the leptons in the SF channel by associating the lepton pair whose invariant mass is closest to the Z-boson mass with the Z-boson decay. The leptons then must meet standard requirements on their transverse momenta and pseudo-rapidities, which are chosen differently for the hardest and second-hardest lepton assigned to the Z-boson decay and for the lepton from the W boson. Additionally, the invariant mass of the lepton pair associated with the Z boson is required to lie in a fixed range around the Z-boson mass. To guarantee infrared safety in the SF channel in spite of possible divergences from collinear γ^∗→ℓ^-ℓ^+ splittings, this requirement is supplemented by a lower 4 GeV cut on the invariant mass of any OSSF lepton pair.

We note that the CMS selection cuts at the detector level are somewhat different from those defining the fiducial volume <cit.>. In particular, the invariant-mass cut on the identified lepton pair from the Z boson is much tighter than in the fiducial volume, and a b-jet veto is applied at the detector level, which is absent in the definition of the fiducial phase space. As a meaningful comparison to theoretical predictions can only be pursued at the fiducial level, these differences require an extrapolation from the detector to the fiducial level, which could lead to additional theoretical uncertainties.

§.§.§ ATLAS 8 TeV

ATLAS presents its fiducial results split into both SF/DF channels and W^-Z/W^+Z production <cit.>. In tab:ATLAS8 we compare our theoretical predictions for the fiducial rates at LO, NLO and NNLO at 8 TeV to the measured cross sections. Since the cuts do not depend on the lepton flavour, the theoretical predictions are identical when exchanging electrons and muons, e.g. σ(μ^+ν_μ e^+e^-)≡σ(e^+ν_e μ^+μ^-). The statistical uncertainties of the experimental results are strongly reduced upon combination, from ∼5%-10% for the individual channels to 3%-4% when combined.

For proton–proton collisions the cross sections in the W^+Z and W^-Z channels are different due to their charge-conjugate partonic initial states: The W^+Z final state is mainly produced through ud̅ scattering (see fig:Borndiagrams), while W^-Z originates from u̅d scattering. Roughly speaking, the u valence density is larger than the d valence density and u̅∼d̅, so we have σ_W^+Z>σ_W^-Z.

It is clear from tab:ATLAS8 that the inclusion of higher-order corrections is crucial for a proper prediction of the fiducial cross sections. NLO corrections increase the corresponding LO results by up to 85%, and NNLO effects further increase the NLO result by about 10%. The LO cross section is thus increased by almost a factor of two upon inclusion of higher-order corrections. The scale uncertainties are reduced from about 4%-6% at NLO to only about 2% at NNLO. The inclusion of NNLO corrections nicely improves the agreement between the theoretical predictions and the data, which are largely consistent within the uncertainties. These observations hold irrespective of whether W^+Z, W^-Z or their combination is considered, and are very similar to what has been found for the total inclusive cross sections in Grazzini:2016swo. As pointed out there, the origin of the large radiative corrections is an approximate radiation zero <cit.>: The LO cross section in the leading helicity amplitude vanishes at a specific scattering angle of the W boson in the centre-of-mass frame. This phase-space region is filled only upon inclusion of higher-order contributions, thereby effectively decreasing the perturbative accuracy in that region by one order. Therefore, the perturbative uncertainties at LO and NLO, estimated from scale variations, fail to cover the actual size of missing higher-order corrections. Nonetheless, the convergence of the perturbative series is noticeably improved beyond LO, and we expect NNLO scale uncertainties to provide the correct size of yet uncalculated perturbative contributions.

§.§.§ ATLAS 13 TeV

ATLAS has reported experimental results for the fiducial W^±Z cross section also for the early 13 TeV data set collected in 2015 <cit.>. At the level of the inclusive cross section, very good agreement with our NNLO computation of Grazzini:2016swo is quoted. tab:ATLAS13 confirms that agreement also for the fiducial cross sections.
There is also a marked improvement in the accuracy of the NNLO cross section regarding its scale uncertainties, which have been reduced to ∼2% from ∼4%-6% at NLO. Overall, the findings at 13 TeV draw essentially the same picture as those at 8 TeV discussed in the previous section.

§.§.§ CMS 13 TeV

CMS provides a cross-section measurement in the fiducial phase space for W^±Z production only for their 13 TeV analysis, summed over all individual lepton channels <cit.>. [The 8 TeV W^±Z measurement by CMS <cit.> does not provide fiducial cross sections, and the differential results are extrapolated to the full phase space. Since such results depend on the underlying Monte Carlo used for the extrapolation, we refrain from including them in our comparison. The full set of predictions for all individual channels for CMS at 8 TeV and 13 TeV is reported in app:rates_full.] tab:CMS13 contains our theoretical predictions at LO, NLO and NNLO for the combination of all leptonic channels. The cuts are looser as compared to the ones applied by ATLAS, but the relative size of radiative corrections is rather similar. The comparison to the fiducial cross section measured by CMS shows quite a large discrepancy: The theoretical prediction is 2.6σ above the experimental result. We point out that CMS uses fiducial cuts that are quite different from those used in their event selection. This comes at the price that the extrapolation from the CMS selection cuts to the fiducial phase space is affected by an uncertainty from the employed Monte Carlo generator. The observed discrepancy, however, might well be due to a statistical fluctuation of the limited dataset used in this early measurement. Further data collection at 13 TeV will hopefully clarify this issue.

§.§ Distributions in the fiducial phase space

We now turn to the discussion of differential observables in the fiducial phase space. In fig:pTZW-fig:dyZlWnjet we consider W^±Z predictions up to NNLO accuracy for various distributions that have been measured by ATLAS at 8 TeV <cit.>. The fiducial phase-space definition is discussed in sec:fiducial, see also tab:cuts. All figures share the same layout: The main frame shows the predictions at LO (black dotted histogram), NLO (red dashed histogram) and NNLO (blue solid histogram) with their absolute normalization as cross section per bin (i.e. the sum of the bins equals the fiducial cross section), compared to the cross sections measured by ATLAS (green data points with error bars). The lower panel displays the respective bin-by-bin ratios normalized to the NLO prediction (LO is not shown here). The shaded uncertainty bands of the theoretical predictions correspond to scale variations as discussed above, and the error bars are the combined experimental uncertainties quoted by ATLAS. Unless stated otherwise, all distributions include the combination of all relevant leptonic channels (SF/DF channels and W^+Z/W^-Z production). Note that, in order to compare to ATLAS results, we combine different lepton channels by averaging them for both the fiducial cross sections and distributions, while summing the cross sections for W^+Z and W^-Z production.

Some general statements regarding the scale uncertainties, which are common to all subsequent plots, are in order: NNLO corrections further reduce the scale dependence of the NLO cross sections in all distributions. In absolute terms, the NLO uncertainties generally vary within 5%-10%, and reach up to 20% only in the tails of some transverse-momentum distributions.
The NNLO uncertainties, on the other hand, hardly ever exceed 5% in all differential observables. Correspondingly, given that the NNLO corrections on the fiducial rate are about +8.5%, NLO and NNLO scale-uncertainty bands mostly do not overlap, in particular in the bins that provide the bulk of the cross section. Nonetheless, we expect NNLO uncertainties to generally provide the correct size of missing higher-order contributions (see our corresponding comments at the end of sec:atlas8).

fig:pTZW shows the transverse-momentum spectra of the reconstructed Z and W bosons, which both peak around p_T,V∼30 GeV. As can be seen from the ratio plots, the inclusion of NNLO corrections affects the shapes of both distributions at the 10% level, the effect being largest in the region p_T,V ≲ 150 GeV. The comparison with the data is good already at NLO, but it is further improved, in particular in terms of shape, at NNLO. All data points agree within roughly 1σ with the NNLO predictions.

In fig:pTmissmTWZ (a), we consider the distribution in the transverse mass of the WZ system, defined by

m_T,WZ = √((E_T,ℓ^'+E_T,ν_ℓ^'+E_T,ℓ^++E_T,ℓ^-)^2 - |p⃗_T,ℓ^'+p⃗_T,ν_ℓ^'+p⃗_T,ℓ^++p⃗_T,ℓ^-|^2) with E_T,x^2=m_x^2+p_T,x^2.

With shape effects of about 15%, the NNLO corrections significantly soften the spectrum. Already the NLO prediction is in good agreement with data, and the NNLO corrections tend to slightly improve that agreement, mainly due to the shape correction, so that the measured results are well described by the theoretical predictions within roughly 1σ of the experimental errors.
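A minimal implementation of this observable from the transverse momenta of the final-state fermions reads as follows (our sketch; massless leptons are assumed, so E_T,x = p_T,x for each particle):

```python
import math

def mT_WZ(particles):
    """Transverse mass of the WZ system: particles is a list of four
    (px, py) transverse-momentum pairs (three charged leptons plus the
    neutrino); massless fermions are assumed, so E_T = p_T for each."""
    sum_ET = sum(math.hypot(px, py) for px, py in particles)
    sum_px = sum(px for px, _ in particles)
    sum_py = sum(py for _, py in particles)
    return math.sqrt(max(sum_ET**2 - sum_px**2 - sum_py**2, 0.0))

# Example with toy momenta in GeV:
print(mT_WZ([(45.0, 10.0), (-30.0, 25.0), (20.0, -40.0), (-15.0, 5.0)]))
```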
The ATLAS result for the missing transverse energy distribution in fig:pTmissmTWZ (b) shows some discrepancy in shape compared to the NLO prediction. The NNLO corrections are essentially flat, so they cannot account for that difference. Overall, the uncertainties of the measured results are still rather large, such that the deviation of the predicted cross section in each bin stays within 1σ-2σ. Looking at fig:pTmissWmWp, where we plot the missing transverse energy distribution separately for W^-Z and W^+Z production, we see that the observed discrepancy between theory and data appears only for W^-Z production, where it extends up to roughly 2σ-3σ for the lowest and highest bins. To clarify the origin of this discrepancy, more precise data are needed, given that only four separate bins are measured at the moment.

Next, we discuss the absolute rapidity difference between the reconstructed Z boson and the lepton associated with the W-boson decay, shown in fig:dyZlWnjet (a). This |dy_Z,ℓ_W| distribution has a distinctive shape, with a dip at vanishing rapidity difference and a maximum around |dy_Z,ℓ_W|=0.8, and it is sensitive to the approximate radiation zero <cit.> mentioned before. As expected, the LO prediction does not describe the data in any sensible way. The NLO prediction already captures the dominant shape effects. The NNLO corrections are rather flat and are consistent within uncertainties with (and in most cases right on top of) the data, thanks to the improved normalization.

Finally, fig:dyZlWnjet (b) shows the distribution in the jet multiplicity. Jets are defined with the anti-k_T algorithm <cit.> with radius parameter R=0.4. A jet must have a minimum transverse momentum of 25 GeV and a maximum pseudo-rapidity of 4.5. We already know that the measured fiducial cross section is in excellent agreement with the NNLO prediction. As expected, radiative corrections are strongly reduced when considering a jet veto (0-jet bin). NLO and NNLO predictions are essentially indistinguishable, apart from the reduction of the theoretical uncertainties when going from NLO to NNLO. The experimental result is right on top of them. In the exclusive 1-jet bin NLO (NNLO) predictions are formally only LO (NLO) accurate. It is well known that LO-accurate predictions tend to underestimate the uncertainties. The blue solid NNLO result decreases the cross section in that bin by almost a factor of two with respect to NLO, well beyond the given uncertainties. The data point is significantly closer to the NNLO prediction and fully consistent with it within uncertainties. Finally, in the 2-jet bin even the NNLO contribution is effectively only LO, and our computation cannot provide a reliable prediction. Indeed, it significantly overestimates the measured cross section. A more accurate description of the 2-jet bin requires at least NLO QCD corrections to the W^±Z+2 jets process <cit.>.

We conclude our discussion of differential distributions by considering ratios of W^+Z over W^-Z cross sections. In fig:ratiopTZW-<ref> (a) such ratios are compared to the ATLAS 8 TeV data. Otherwise, these plots have exactly the same structure as the previous ones. The uncertainty bands are computed by taking fully correlated scale variations, i.e., using the same scale in numerator and denominator. The ensuing bands are extremely small, with relative uncertainties never exceeding ∼1%-2% both at NLO and NNLO. In most cases the perturbative computation of the ratios is very stable, and in particular NNLO corrections are very small, which justifies fully correlated scale variations to estimate the perturbative uncertainties. Nevertheless, some observables are affected by 𝒪(α_s^2) corrections beyond the residual uncertainty bands: Such cases are discussed at the end of this section.
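As an illustration of what "fully correlated" means in practice, the following sketch (our own, with hypothetical input dictionaries) builds the ratio uncertainty from a standard seven-point scale variation consistent with the constraint quoted above, applying the same (μ_R,μ_F) pair to numerator and denominator before taking the envelope:

```python
def correlated_ratio_envelope(sig_plus, sig_minus):
    """sig_plus / sig_minus: dicts mapping (muR/mu0, muF/mu0) to cross
    sections over the seven-point variation with 0.5 <= muR/muF <= 2.
    The same scale pair is used upstairs and downstairs (correlated)."""
    points = [(1, 1), (2, 2), (0.5, 0.5), (2, 1), (1, 2), (0.5, 1), (1, 0.5)]
    ratios = {p: sig_plus[p] / sig_minus[p] for p in points}
    central = ratios[(1, 1)]
    return central, min(ratios.values()), max(ratios.values())
```

With uncorrelated variations one would instead scan the numerator and denominator scales independently (subject to the same μ_R/μ_F constraint), which is the alternative alluded to at the end of this section.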
By and large, we find reasonable agreement between the predicted and the measured ratios in all distributions under consideration, which is, in part, due to the relatively large experimental uncertainties. The latter prevent a clear discrimination of whether NNLO corrections improve the agreement with data. Nevertheless, for each distribution at least one data point deviates from the prediction by more than 2σ, some of which appear even quite significant. For example, in fig:ratiopTZW (a) there is one bin in the transverse-momentum spectrum of the reconstructed Z boson with a discrepancy of roughly 4σ and another one with more than 2σ. However, the experimental results fluctuate too much to claim that these are genuine effects beyond statistics. In fact, similar differences as we observe here are evident also in the ATLAS study <cit.> when data are compared to NLO+PS predictions. Only higher experimental accuracy, to become available at 13 TeV soon, will allow for a more conclusive comparison in these cases. Indeed, even the distribution in the missing transverse energy in fig:ratiopTmissmTWZ, where we found some apparent difference in the shape for W^-Z, but not for W^+Z production (see fig:pTmissWmWp), does not seem to be particularly (more) significant when considering the W^+Z/W^-Z ratio, due to the large experimental errors.

Finally, we point out certain distributions which show prominent shape differences between W^+Z and W^-Z production, while featuring visible effects from the NNLO corrections. Several such distributions exist, see, e.g., fig:ratiodyZlW (b)-<ref>, which depend rather strongly on the charge of the W boson. Unfortunately, large NNLO effects often appear only in corners of phase space that are strongly suppressed and thus have low experimental sensitivity. One example is the absolute rapidity difference between the reconstructed Z boson and the lepton associated with the W-boson decay, which is compared to data in fig:ratiodyZlW (a), but shown with a finer binning in fig:ratiodyZlW (b): The effect of NNLO corrections in the forward region is manifest, but it is entirely due to differences between NLO and NNLO PDFs. [We have checked that by using the NNLO set also for the NLO predictions the difference disappears.] There are, however, examples where the effects of NNLO corrections on the W^+Z/W^-Z ratio are evident already in the bulk region of the distribution. Such examples are given in fig:ratioinv3linvWZ-<ref>. The W^+Z/W^-Z ratio for the invariant mass of the three leptons in fig:ratioinv3linvWZ (a) evidently increases for small m_ℓℓℓ values and decreases in the tail of the distribution upon inclusion of higher-order corrections, the effect being at the 5% level. Also the W^+Z/W^-Z ratio as a function of the invariant mass of the ℓ^+ℓ^- pair in fig:ratioinv3linvWZ (b) shows a large impact of NNLO corrections, although this is close to the kinematical boundary where the cross section is strongly suppressed.

The largest impact of NNLO corrections on the ratio of W^+Z and W^-Z cross sections is found for the distribution in the transverse momentum of the lepton associated with the W-boson decay (p_T,ℓ_W) in fig:ratiopTlepWpTlepone (a). The shape of the ratio significantly changes when going from NLO to NNLO, the effects being more than 10% in the tail of the distribution. Qualitatively similar, though smaller, effects can be observed in fig:ratiopTlepWpTlepone (b) for the leading-lepton p_T.

We conclude our presentation of the differential distributions with a comment on the perturbative uncertainties affecting the W^+Z/W^-Z ratios. The NLO uncertainties reported in fig:ratioinv3linvWZ-fig:ratiopTlepWpTlepone underestimate the actual size of the NNLO corrections in certain phase-space regions. We note, however, that such uncertainties are computed by performing fully correlated variations. While in the majority of the cases this procedure is justified by the small size of the perturbative corrections, in some phase-space regions independent scale variations in numerator and denominator would be more appropriate to obtain realistic perturbative uncertainties. This is demonstrated in fig:ratioW+W-pTlw, which separately shows the absolute p_T,ℓ_W distribution for W^+Z and W^-Z production. Indeed, the NLO and NNLO predictions are actually quite consistent within uncertainties. Similar conclusions can be drawn also for the other observables in fig:ratioinv3linvWZ-fig:ratiopTlepWpTlepone when separately looking at their absolute distributions for W^+Z and W^-Z production.

§.§ New-physics searches

In sec:fiducial and sec:distributions we have presented cross sections and distributions in the fiducial regions defined by ATLAS and CMS to isolate the W^±Z signature. The comparison between theoretical predictions and experimental data in this region is certainly important to test the SM. The W^±Z signature, however, and, more precisely, the production of three leptons + missing energy, is important in many BSM searches, for which the SM prediction provides an irreducible background. One important example in this respect are searches for heavy supersymmetric (SUSY) particles: The extraction of limits on SUSY masses relies on a precise prediction of the SM background.
In the following, we present an illustrative study where we focus on a definite scenario for SUSY searches, and we study the impact of higher-order QCD corrections on both cross sections and distributions.

Typical experimental new-physics searches that consider three leptons plus missing energy apply basic cuts which are rather similar to those considered in SM measurements. Here we follow as closely as possible the selection cuts used in the CMS analysis of CMS:2016gvu at 13 TeV. The selection cuts are summarized in tab:SUSYcuts; they differ in some details from those considered in sec:fiducial: First of all, lepton cuts are chosen differently for electrons and muons. More precisely, all leptons are first ordered in p_T, and then the p_T threshold for each lepton is set according to its flavour and to whether it is the leading or a subleading lepton. Also the pseudo-rapidity cuts are different for electrons and muons. These cuts imply that the theoretical prediction of the cross section in this case is no longer symmetric under e↔μ exchange, and the full set of eight channels must be computed separately for the ℓ^'±ν_ℓ^'ℓ^+ℓ^- final state. Furthermore, the invariant mass of the three leptons is required to differ by at least 15 GeV from the Z-boson mass, and the invariant mass of every OSSF lepton pair is bounded from below to ensure IR safety.

Our goal is to study QCD effects on distributions which are known to provide a high experimental sensitivity to isolate a SUSY signal over the SM background. The essential observables, ordered by their relevance, are: [We note that, contrary to the SM studies of sec:fiducial and sec:distributions, the cuts we consider here do not require to identify the lepton pair coming from a Z boson. A Z-boson identification is needed only for specific observables, namely m_T,W and m_ℓℓ. The identification is the same as used by the CMS SM analysis at 13 TeV, outlined in sec:fiducial: The OSSF lepton pair with the invariant mass closest to m_Z is associated with the Z boson.]

* the missing transverse energy E_T^miss, which (in particular in its tail) is highly sensitive if unobserved SUSY particles, usually the lightest supersymmetric particle (LSP), are produced via chargino–neutralino pair production;
* the transverse mass of the W boson m_T,W, more precisely of the system of missing energy and the lepton not associated with the Z-boson decay, which is to some extent complementary to E_T^miss;
* the invariant mass of the lepton pair associated with the Z-boson decay, m_ℓℓ, which allows a discrimination between searches in the SUSY parameter space with a small (≪ m_Z), intermediate (∼ m_Z) and large (≫ m_Z) mass difference of neutralino and LSP.

Based on these considerations, we choose four different categories, which are inspired by the categories considered in CMS:2016gvu (their definition is recovered from the discussion below):

* Category I: the baseline selection of tab:SUSYcuts, with no additional cuts;
* Category II: Category I plus E_T^miss > 200 GeV;
* Category III: Category I plus m_T,W > 120 GeV;
* Category IV: Category I plus m_ℓℓ > 105 GeV.

Our calculation is performed by using the setup discussed at the beginning of this section and employed in sec:fiducial and sec:distributions. However, since we are interested in studying the impact of QCD radiative corrections in a phase-space region which is characterized by relatively large transverse momenta (up to O(1 TeV)), the fixed scale μ_0=(m_Z+m_W)/2 is not fully appropriate. In the present study we use instead a dynamic scale defined as

μ_R=μ_F=μ_0 ≡ 1/2 (√(m_Z^2+p^2_T,Z)+√(m_W^2+p^2_T,W)),

where p_T,Z and p_T,W are the transverse momenta of the identified Z and W bosons, i.e. of the ℓ^+ℓ^- and ℓ^'±ν_ℓ^' systems, respectively. In the limit of small transverse momenta, eq:dynscale reduces to the fixed scale μ_0=(m_Z+m_W)/2 used in sec:fiducial and sec:distributions.
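A one-line implementation of this scale choice, in the same illustrative spirit as the previous sketches, reads:

```python
import math

MZ, MW = 91.1876, 80.385  # GeV

def mu0_dynamic(pt_z, pt_w):
    """Dynamic central scale of eq:dynscale: the average of the
    transverse masses of the reconstructed Z and W bosons."""
    return 0.5 * (math.hypot(MZ, pt_z) + math.hypot(MW, pt_w))

# Reduces to (MZ + MW)/2 ~ 85.79 GeV at vanishing transverse momenta:
print(mu0_dynamic(0.0, 0.0), mu0_dynamic(400.0, 350.0))
```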
In tab:NP_rates we report our results for the integrated cross sections in the four categories. Four separate results are given in that table by dividing into W^+Z and W^-Z production as well as SF and DF channels: ℓ^'+ν_ℓ^'ℓ^+ℓ^-, ℓ^+ν_ℓℓ^+ℓ^-, ℓ^'-ν_ℓ^'ℓ^+ℓ^- and ℓ^-ν_ℓℓ^+ℓ^-. Throughout this section, flavour channels related by e↔μ exchange are summed over, and the combination of individual channels is always done by summing them.

We start our discussion from Category I, for which the cross section is of the order of the fiducial cross sections presented in sec:fiducial for the SM measurements at 13 TeV, although with somewhat looser selection cuts. The relative radiative corrections are large: They amount to about 94% at NLO and 13% at NNLO. These relative corrections are slightly larger for W^-Z production as compared to W^+Z production, as can be inferred from the separate rows of the table. Results in the SF and DF channels are of the same size.

An additional and stringent cut on the missing transverse energy of E_T^miss > 200 GeV (Category II) changes this picture dramatically: The cross section is reduced by roughly two orders of magnitude. The LO prediction vastly underestimates the cross section, with NLO corrections of several hundred percent. These corrections are significantly larger for the W^+Z cross section (∼320%) than for W^-Z production (∼240%). This is not unexpected: A hard cut on E_T^miss enhances the relevance of the high-p_T region, where QCD corrections are more important. Moreover, the W^+Z final state is mainly produced through ud̅ scattering, while W^-Z originates from u̅d scattering. The u quark carries on average more momentum than the d quark, thus leading to harder p_T spectra for the W^+Z final states compared to W^-Z. Following similar arguments, also the NNLO contribution is sizeable. It is roughly 22%, which is in particular larger than in the more inclusive Category I. This clearly confirms the importance of NNLO corrections when scenarios with cuts on observables relevant to new-physics searches, such as E_T^miss, are under consideration.

In Category III (additional cut m_T,W > 120 GeV), on the other hand, the cut has a rather mild effect on the NLO corrections, which are about 70%, i.e. even slightly lower than in Category I. NNLO corrections have an effect of about 8%. What turns out to be striking in this category is the difference between the SF and DF channels, which are of similar size in the two previous categories. Here, the SF results are more than a factor of three higher than the corresponding DF cross sections. We will discuss the origin and the implications of this observation in detail below.

QCD corrections are also only mildly affected by a high cut on m_ℓℓ in Category IV (m_ℓℓ > 105 GeV), which forces the Z boson to be off-shell. The difference between SF and DF results is smaller and has the opposite sign with respect to Category III, being, however, still of order 10%-20%, depending on the perturbative order.

Comparing the W^+Z and W^-Z cross sections in the four categories, we see that, due to the different contributing partonic channels, their ratio strongly depends on the applied phase-space cuts, with σ_W^+Z/σ_W^-Z≈1.47 in Category I, σ_W^+Z/σ_W^-Z≈2.71 in Category II, σ_W^+Z/σ_W^-Z≈1.69 in Category III and σ_W^+Z/σ_W^-Z≈1.48 in Category IV at NNLO.
We note that the precise value of the ratio of W^+Z and W^-Z cross sections may be affected by the specific choice of the used PDFs.

Let us discuss in more detail the large difference between SF and DF cross sections in Category III. This seems surprising at first sight, since, as outlined in sec:calculation, the SF and DF channels feature the same diagrams and have the same generic resonance structures. Indeed, all SM results as well as BSM results in Categories I and II show at most minor differences between SF and DF channels. This is true both for rates and distributions. Category III differs from Category I only by an additional cut on m_T,W, whose distribution in Category I is shown separately for the SF and DF channels in the left and centre plots of fig:mTWcomp. For reference we have added a green vertical line at m_T,W=120 GeV, which indicates the additional cut in Category III. Apparently, the m_T,W tail, which is dominated by off-shell W bosons, is considerably higher in the SF channel than in the DF channel. Thus, the origin of the different SF and DF rates is a different distribution of events, which are moved from the W-peak region to the tail.

This behaviour is not a particular feature of the SF channel, but a consequence of the Z (and W) identification we are using, which is entirely based on the invariant masses of the two possible combinations of OSSF pairs, by associating the Z boson with the one closer to the Z mass. We have repeated the computation of the m_T,W distribution by replacing the CMS identification with the ATLAS resonant-shape identification (see sec:fiducial and in particular eq:pestimator). The ensuing distribution is shown in the right plot of fig:mTWcomp. Indeed, by eye, no difference between the right (SF channel with ATLAS identification) and centre (DF channel) plots is visible. We stress that in the DF channel the Z and W bosons are unambiguously identified by the lepton flavours in the final state. The resonant-shape identification takes into account information on both the W- and the Z-boson propagators in the dominant double-resonant topologies, which leads to a more accurate modelling of the W-boson peak in the m_T,W distribution. This identification procedure distributes fewer events into the tail (similar to the DF channel) than the CMS identification. The resonant-shape identification is therefore much more effective in removing events from the peak region when cutting on m_T,W>120 GeV. This is also reflected by the ensuing total cross sections in Category III: At NNLO, for example, the SF cross section with the resonant-shape identification (0.9265(7)_-1.5%^+1.5% fb) is of similar size as the one in the DF channel (1.010(2)_-1.6%^+1.6% fb), as compared to 3.303(4)_-1.8%^+1.9% fb in the SF channel when using the CMS identification. Thus, in more than two out of three events in Category III, the identification of the Z and the W boson is swapped in the case of CMS with respect to using the resonant-shape identification.
Let us finally remark that also Category IVwould benefit from a more effective identification, although the effects are much smaller and negative in that case. In terms of differential distributions, as previously pointed out, the most relevant observables for SUSY searches are ,and . These distributions are shownin fig:catI for the first category, i.e. without any additional restrictions on top of the default selectioncuts of tab:SUSYcuts. The distribution in the missing transverse energy in the left panel of fig:catI features largeradiative corrections, ranging up to 30% for the central curve, which, however, primarily affect the normalization. Nevertheless, the shape of the distribution is affected by NNLO corrections at the 10%-20% level inthe range up to =1 TeV. We point out that the rather flat corrections at NNLO can only beachieved by using a dynamic scale (see eq:dynscale) that takes into account the effects ofhard-parton emissions to properly model the tails of the distributions.We have explicitly checked that the NLOdistribution computed with a fixedscale is significantly harder in the tail with relatively large scale uncertainties,while the NNLO cross section — as expected — is quite stable with respect tothe scale choice. As a consequence, a fixed scale choice would lead to much larger, but negative NNLO corrections at high transverse momenta.Despite the considerable improvement in the perturbative stability achieved with the use of a dynamic scale, a precise prediction of the fiducial crosssection in Categories based onstill requires the inclusion of O(^2) terms, since depending on thecut theNNLO effects may still change by up to 20%.Similarly, also theanddistributions, in the centre and right plots of fig:catI, are subject to sizeable corrections due to the inclusion ofO(^2) terms. While in the tails of the spectra (for ≳300 GeV and≳200 GeV) the NLO and NNLO predictionsroughly agree within their respectiveuncertainties, at smallerandvalues the shapes of the distributions areconsiderably modified, leading to NNLO corrections that are not covered by the lower-orderuncertainty bands. These differences are alleviated to some extent by the fact that the low-and - regions are usually less important to new-physics searches(where usually the phase-space region below ∼ 120 GeV and ∼ 100 GeV) is cut),but some region of phase space remainswhere NNLO corrections ought to be taken into account.In fig:catII we consider theandspectra in Category II. Thus, these distributions include anadditional cut of > 200 GeV as compared to those in fig:catI. As pointed out before, such cut onrequires NNLO accuracy on its own to ensure a propermodelling of the SM background. The specific value of 200 GeV, in fact, is incidentally ina region where the NNLO corrections start to become particularly large (>20%), as can be inferred fromthedistribution in fig:catI. Indeed, looking at fig:catII both the distribution inandfeature NNLO and NLO cross sections without overlapping uncertaintybands in each peak region, with NNLO corrections of the order of 20%. For smallvalues NNLO effects increase up to more than 40%. This region, however, is less relevant to new-physics searches. We note that, when going from NLO to NNLOscale uncertainties are reduced from about 15% to at most 10%. Overall, the results of the two distributions are very similar to the corresponding ones infig:catI for Category I. 
Although the NLO and NNLO scale uncertainties are generally larger, the ensuing bands do not overlap around the peak of the distributions.

fig:catIII shows the E_T^miss and m_ℓℓ spectra while including a cut of m_T,W>120 GeV in addition to the standard selection cuts (Category III). Also in this case the general behaviour of these distributions is quite similar to those in Category I; however, the absolute size of the corrections at NNLO is reduced. Thanks to the dynamic scale choice, the dependence of the NNLO correction on the value of E_T^miss is quite flat. With a fixed scale we find a similarly strong E_T^miss dependence in the tail of the distribution as pointed out for Category I. NLO and NNLO uncertainty bands feature a satisfactory overlap starting from E_T^miss ≳ 200 GeV. The m_ℓℓ distribution shows consistent NLO and NNLO predictions in the tail of the distribution. The NNLO corrections become larger (∼10%) only at m_ℓℓ ≲ 150 GeV, where W^±Z production becomes less important as a SM background to new-physics searches. We point out that, as shown in fig:catIIImllsplit, the increase of the NNLO corrections at m_ℓℓ ≲ 150 GeV is only present in the SF channel, while the DF channel features a steep increase at m_ℓℓ ≲ 50 GeV. It is clear from the main frame of that figure that the distributions in the two channels are modelled very differently, which can again be traced back to the identification procedure used.

In fig:catIV the E_T^miss and m_T,W distributions in Category IV are shown. We see that the m_ℓℓ > 105 GeV cut has almost no impact on the shapes of the E_T^miss and m_T,W spectra, apart from the general reduction of the absolute size of the NNLO corrections compared to Category I. Also in this category NNLO corrections are quantitatively relevant, and their impact on the tails of the distributions is reduced with the use of a dynamic scale.

In conclusion, for the three observables relevant to new-physics searches that have been considered in this section, the sizeable (10%-30%) NNLO corrections depend on the specific cut values. This demands NNLO-accurate predictions for the W^±Z background when categories based on these observables are defined. Furthermore, a dynamic scale choice is crucial to properly model the various distributions, in particular the tail of the E_T^miss spectrum. Moreover, NNLO corrections considerably reduce the perturbative uncertainties in all three distributions we investigated, regardless of the category under consideration.

§ SUMMARY

In this paper, we have presented the first computation of fully differential cross sections for the production of a W^±Z pair at NNLO in QCD perturbation theory. Our computation consistently includes the leptonic decays of the weak bosons, accounting for off-shell effects, spin correlations and interference contributions in all double-, single- and non-resonant configurations in the complex-mass scheme, i.e. we have performed a complete calculation for the process pp→ℓ^'±ν_ℓ^'ℓ^+ℓ^-+X with ℓ,ℓ'∈{e,μ}, both in the SF and in the DF channel.

Our results are obtained with the numerical program MATRIX, which employs the q_T-subtraction method to evaluate NNLO QCD corrections to a wide class of processes. We have shown that the ensuing fiducial cross sections and distributions depend very mildly on the technical cut-off parameter r_cut, thereby allowing us to numerically control the predicted NNLO cross section at the one-permille level or better.

We have presented a comprehensive comparison of our numerical predictions with the available data from ATLAS and CMS at √(s)=8 and 13 TeV for both the fiducial cross sections and differential distributions in W^±Z production.
As in the case of the inclusive cross section <cit.>, QCD radiative corrections are essential to properly model the cross section. They amount to up to 85% at NLO, and NNLO corrections further increase the NLO result by about 10%. The inclusion of NNLO corrections significantly improves the agreement with the cross sections measured by ATLAS at both 8 and 13 TeV centre-of-mass energies. The 13 TeV CMS result is somewhat (∼ 2.6σ) lower than the theoretical prediction, which is about the same discrepancy that has been observed in the result extrapolated to the total inclusive cross section <cit.>. The full data set collected by the end of 2016 (∼ 40 fb^-1) will show whether this difference is a plain statistical effect of the small data set (∼ 2.3 fb^-1) used for that measurement.

Distributions in the fiducial phase space of the W^±Z final states are available only for the ATLAS 8 TeV data set. Our comparison reveals a remarkable agreement with the measured cross section in each bin upon inclusion of higher-order corrections, typically within 1σ of the quoted experimental errors. Although this statement holds already at NLO, the NNLO cross sections display an improved description of the data, not only in terms of normalization but also regarding the shapes. Only the distribution in the missing transverse energy exhibits some tension between theory and data: We observe deviations at the level of 1σ-2σ in some bins, leading to a more evident discrepancy in the shape of the distribution. We have shown that this discrepancy is present only in W^-Z production, while our NNLO prediction nicely describes the data in the case of W^+Z production. We have further shown that our computation of the ratio of W^+Z over W^-Z distributions agrees well with the experimental data, given the rather large experimental uncertainties. Along with this study we have pointed out a number of distributions which signal significant differences between W^+Z and W^-Z production, and may be sensitive enough to disentangle genuine perturbative effects at NNLO.

We have completed our phenomenological study by considering a scenario where W^±Z production is a background to new-physics searches in the three-leptons-plus-missing-energy channel. NNLO effects on the background rates have been discussed in the relevant categories, together with the corresponding distributions. Our findings can be summarized as follows:
* LO predictions cannot be used to model cross sections and distributions in a meaningful way: The size of NLO corrections can be, in some categories, of the order of several hundred percent.
* NNLO corrections on the W^±Z rates range between roughly 8% and 23%, while distributions are subject to considerable shape distortions when going from NLO to NNLO.
* For cuts on the E_T^miss observable, which is particularly important for categorization in new-physics scenarios, NNLO corrections turn out to be particularly important, as they may vary between 10% and 30% depending on the specific value of the cut.
* Only by using a dynamic scale (see eq:dynscale) is the shape of the relevant distributions perturbatively stable.
This is in particular true for the E_T^miss distribution, which was found to be drastically impacted by NNLO corrections if a fixed scale was applied.
* Finally, we have shown that in the SF channel an identification of the Z boson based solely on how close the dilepton-pair mass is to m_Z may lead to problems: When an m_T > 120 GeV cut is enforced, in more than two out of three events the Z- and W-boson identification is swapped, leading to SF and DF rates that differ by more than a factor of three. We find that a resonant-shape identification (see eq:pestimator) is much more efficient, thereby leading to a more effective background suppression.

We conclude by adding a few comments about the residual uncertainties of our calculation. As is customary in perturbative QCD computations, the uncertainties from missing higher-order contributions were estimated by studying scale variations. We have seen that, when going from NLO to NNLO, scale uncertainties are generally reduced, both for fiducial cross sections and for kinematical distributions. It should be noted, however, that these uncertainties seem to underestimate the size of missing higher-order corrections at LO and NLO. This tendency decreases with increasing perturbative order: While the LO uncertainty grossly underestimates the size of the NLO corrections (which, for this process, is in part due to the existence of an approximate radiation zero), the NLO and NNLO predictions are much closer, and almost consistent within uncertainties. Considering that at NNLO all partonic channels are included and no regions of phase space that are effectively only LO-accurate remain, we conclude that the O(2-5%) NNLO uncertainties on our fiducial cross sections (see Tables <ref>, <ref>, <ref> and <ref>) are expected to provide the correct order of magnitude of yet uncalculated higher-order contributions.

EW corrections would affect the fiducial cross sections at the 1% level or less <cit.>, but are expected to become relevant in the tails of the distributions, which will be potentially important for new-physics searches. The inclusion of EW corrections is, however, left to future investigations. PDF uncertainties are expected to be at the 1%-2% level.

We believe that the calculation and the results presented in this paper will be highly valuable both for experimental measurements of the W^±Z signal and for new-physics searches involving the three-lepton-plus-missing-energy signature. The computation is available in the numerical program MATRIX, which is able to carry out fully exclusive computations for a wide class of processes at hadron colliders. We plan to release a public version of our program in the near future.

Acknowledgements. We would like to thank Lucia Di Ciaccio, Günther Dissertori, Thomas Gehrmann, Constantin Heidegger, Jan Hoss and Kenneth Long for useful discussions and comments on the manuscript. This research was supported in part by the Swiss National Science Foundation (SNF) under contracts CRSII2-141847 and 200020-169041, by the Research Executive Agency (REA) of the European Union under Grant Agreement number PITN–GA–2012–316704 (HiggsTools), and by the National Science Foundation under Grant No. NSF PHY11-25915. MW has been partially supported by ERC Consolidator Grant 614577 HICCUP.

§ CMS CROSS SECTIONS AT 8 TEV AND 13 TEV

For completeness we quote below the cross-section predictions in the fiducial phase space for CMS at 8 TeV and 13 TeV, separated by the individual leptonic channels, in tab:CMS8_full and tab:CMS13_full, respectively.
| http://arxiv.org/abs/1703.09065v1 | {
"authors": [
"Massimiliano Grazzini",
"Stefan Kallweit",
"Dirk Rathlev",
"Marius Wiesemann"
],
"categories": [
"hep-ph",
"hep-ex"
],
"primary_category": "hep-ph",
"published": "20170327133703",
"title": "$W^\\pm Z$ production at the LHC: fiducial cross sections and distributions in NNLO QCD"
} |
Cooperative Raman Spectroscopy for Real-time In Vivo Nano-biosensing

Hongzhi Guo, Student Member, IEEE, Josep Miquel Jornet, Member, IEEE, Qiaoqiang Gan, Member, IEEE, and Zhi Sun, Member, IEEE
This work was supported by the U.S. National Science Foundation (NSF) under Grant No. CBET-1445934. The authors are with the Department of Electrical Engineering, University at Buffalo, the State University of New York, Buffalo, NY 14260, United States. E-mail: {hongzhig, jmjornet, qqgan, zhisun}@buffalo.edu.
December 30, 2023
===============================================================================================================================================================================================================================================================================================================================================================================================================================================================

In the last few decades, the development of miniature biological sensors that can detect and measure different phenomena at the nanoscale has led to transformative disease diagnosis and treatment techniques. Among others, biofunctional Raman nanoparticles have been utilized in vitro and in vivo for multiplexed diagnosis and detection of different biological agents. However, existing solutions require the use of bulky lasers to excite the nanoparticles and similarly bulky and expensive spectrometers to measure the scattered Raman signals, which limits the practicality and applications of this nano-biosensing technique. In addition, due to the high path loss of the intra-body environment, the received signals are usually very weak, which hampers the accuracy of the measurements. In this paper, the concept of cooperative Raman spectrum reconstruction for real-time in vivo nano-biosensing is presented for the first time. The fundamental idea is to replace the single excitation and measurement points (i.e., the laser and the spectrometer, respectively) with a network of interconnected nano-devices that can simultaneously excite and measure nano-biosensing particles. More specifically, in the proposed system a large number of nanosensors jointly and distributively collect the Raman response of nano-biofunctional particles (NBPs) traveling through the blood vessels. This paper presents a detailed description of the sensing system and, more importantly, proves its feasibility by utilizing accurate models of optical signal propagation in the intra-body environment and low-complexity estimation algorithms. The numerical results show that, with a certain density of NBPs, the Raman spectrum can be accurately reconstructed and utilized to extract the targeted intra-body information.

Cooperative Raman spectroscopy, distributed sensing, signal estimation, wireless intra-body communications, wireless nanosensor network.

§ INTRODUCTION

Driven by the development of nanotechnology, emerging nanosensors have been envisioned to provide unprecedented sensing accuracy for many important applications, such as food safety detection <cit.>, agricultural disease monitoring <cit.>, and health monitoring <cit.>, among others. Since nanosensors can interact directly with the most fundamental elements in matter, e.g., atoms and molecules, they can provide ultra-high sensitivity. One of the most promising applications of nanosensors is in vivo biosensing <cit.>, where nanosensors are injected into the human body to collect real-time information.
Nanosensors can be utilized both to detect well-known diseases at a very early stage and to provide new fundamental insights into biological processes that cannot be observed at the macroscopic level.

The use of nanoscale communication techniques can enable data transmission among nanosensors <cit.>. However, whether molecular, acoustic or electromagnetic communication is used, there are two fundamental limitations to directly using active nanosensors in the human body. First, wireless nanosensors require a continuous power supply to support wireless data transmission and motion control. Due to the limited size of the nanosensor, a large battery cannot be accommodated and, even worse, recharging the battery is difficult. Second, a wireless nanosensor requires circuitry and an antenna to process and radiate wireless signals, which further increases its size. In order to alleviate the side-effects caused by nanosensors in the human body, we need to reduce their size by removing the battery and the wireless components.

Metallic nanoparticles coated with Raman-active reporter molecules have been widely used as surface-enhanced Raman scattering labels for multiplexed diagnosis and bio-detection of DNA and proteins <cit.>. This is a promising solution since it does not require power or wireless components on the nanoparticles. Their motion is driven by the dynamic fluids in the human circulatory system, and the information is delivered by electromagnetic scattering. The Raman-active reporter molecules interact with chemicals inside the human body, and the incident single-frequency optical light is scattered into a wide frequency band with a unique power spectrum due to molecular vibration. Based on this unique spectrum, we can identify the molecules. Although this approach suffers from low detected power due to the small scattering cross section, the scattering efficiency can be improved by placing the Raman-active reporter molecules on the surface of metallic nanoparticles <cit.>.

While this solution can dramatically reduce the size of the nano-device that is injected into the human body, it still has limitations that prevent it from being widely used. First, a laser is needed to excite the engineered nanoparticles inside the human body, and a spectrometer is required to detect the scattered Raman signal. Both the laser and the spectrometer are bulky and expensive and, thus, neither portable nor affordable. In addition, the accuracy of this sensing setup is not high enough, since the scattered Raman signal is much weaker than the signal emitted by the laser due to the small scattering cross section of the nanoparticle and the dispersive and lossy propagation medium.

To address the aforementioned challenges, we propose the concept of cooperative Raman spectroscopy, which can be integrated on wearable devices <cit.>, such as a smart nanophotonic ring. The system consists of external nanosensors and internal nano-biofunctional particles (NBPs), as shown in Fig. <ref>. The bulky, expensive lasers and spectrometers are replaced with distributed nanosensors on a smart ring, which can both emit and detect optical signals, by leveraging the state of the art in nano-lasers and nano-photodetectors <cit.>. Placing the nanosensors on a smart ring reduces the distance to the intra-body particles and thus increases the received signal strength. Moreover, by installing the nanosensors distributively, we can increase the diversity of detection and optimally allocate resources to make the sensing system more robust.
In this paper, we design a sensing system for cooperative Raman spectroscopy. More specifically, first, we present the system architecture and describe the processes of signal generation, scattering and detection. Based on the operational framework, we provide theoretical models to describe each part of the system, including signal propagation, noise, NBP density, and nanosensor positions. In addition, we provide a detailed description of the information carried by NBPs and the method to extract this information. In contrast to conventional sensing systems, the signals are not only distorted by the propagation channel but are also corrupted by molecular noise and shot noise. The limited power on the smart ring poses another challenge. Based on the system model, we derive the sensing capacity and define optimal power allocation schemes to increase the sensing accuracy in each sub-band of the Raman spectrum. We also derive the expected detected power of each nanosensor using the stochastic system model. Based on the theoretical model and the nanosensor observations, we provide both centralized and distributed Raman spectrum estimation algorithms, from which the molecule information is extracted. Numerical simulations validate the accuracy of the proposed estimation methods.

The remainder of this paper is organized as follows. The system architecture, operational framework, and system model are introduced in Section II. After that, the sensing capacity and optimal power allocation strategy are discussed in Section III. This is followed by the signal estimation algorithms presented in Section IV. The proposed system's performance is numerically evaluated in Section V. Finally, the paper is concluded in Section VI.

§ SYSTEM ARCHITECTURE AND MODEL

The architecture of the cooperative Raman spectroscopy system consists of two important units, as shown in Fig. <ref>. The first key element is the set of external nanosensors on a smart ring, which are employed to 1) radiate optical signals, 2) detect the signals scattered by NBPs, and 3) process the detected information to reconstruct the Raman spectrum. The second key component is the set of internal NBPs, which are injected into blood vessels to sense bioinformation. The bioinformation on the NBPs can be extracted by using electromagnetic scattering. In the following, we first introduce the system architecture.

§.§ System Architecture

An NBP flowing in the human body can interact with different types of molecules. Once it is illuminated by a monochromatic (single-frequency) optical signal, it absorbs the signal and scatters it into a wide spectrum. The spectrum is unique for different molecules due to their different chemical structures <cit.>. The objective of the proposed sensing system is to excite the NBP using a single-frequency optical signal and reconstruct the wide-band spectrum to identify the molecule. With this in mind, a large number of interconnected nanosensors are installed on a smart ring, and each nanosensor has many nano-emitters and nano-detectors. In transmission, the nano-emitters generate and radiate the same monochromatic optical signal. In reception, due to the challenges in creating broadband detectors able to capture the entire Raman spectrum, each nano-detector is tuned to a different narrow sub-band and many of them are placed together on a nanosensor to cover the whole wide-band spectrum. The nanosensors are uniformly distributed on the ring.
In this way, how the ring is worn does not affect the sensing results. Once the raw spectrum data are collected by each nanosensor, there are primarily two approaches to reconstruct the spectrum and detect the molecules. 1) As shown in Fig. <ref>, the first one is a centralized architecture, where the raw data are sent directly to a data fusion center for further processing and identification. This method can provide the most accurate results since all the raw data are considered in the estimation algorithm. Besides estimating the spectrum directly, the data fusion center can also first compress the raw spectrum data and then send them to the smartphone. In this way, the smartphone takes charge of spectrum reconstruction and molecule identification. However, there are two drawbacks which can prevent us from applying this architecture. First, the communication overhead is large since all the data need to be sent, which can increase the system delay, so that real-time detection may not be possible. Second, the signal processing in the data fusion center requires a large amount of energy and computational resources, which increases the burden on the ring. 2) The second architecture relies on a distributed sensing concept, as shown in Fig. <ref>. Each nanosensor runs the estimation algorithm locally and sends its quantized single-bit results to the data fusion center. Based on the local results, the data fusion center performs a global estimation and identification and then sends the results to the smartphone. In this way, most of the data are processed locally and thus the communication overhead can be dramatically reduced. Nevertheless, this system requires more computational resources on the nanosensors, and the estimation accuracy may not be as high as in the centralized system.

The operational framework of the cooperative Raman spectroscopy consists of three phases.
* First, the synchronized nano-emitters on the smart ring radiate optical signals at the same frequency into the finger. The wavelength of the signal is usually between 450 nm and 1100 nm.
* Second, the particles flowing in the blood vessels absorb the optical signal radiated by the emitters. Then, the particles scatter the power into a wide spectrum.
* Lastly, the scattered signals propagate towards the nano-detectors, and the nano-detectors operating at different frequencies receive the corresponding photons. After that, one can use the different data fusion and sensing architectures shown in Fig. <ref> and Fig. <ref> to process the sensed data, upon which the Raman spectrum can be reconstructed and machine learning algorithms can be applied to identify the category of the molecules.

Based on the sensing system architecture and operational framework, we provide the mathematical model for each component in the following.

§.§ System Model

Consider that there are N_s nanosensors uniformly installed on a ring and each nanosensor has N_f pairs of nano-emitters and nano-detectors. The positions of a nano-emitter and its paired nano-detector are considered to be the same since they are very close to each other. The whole Raman spectrum is divided into N_f sub-bands and each nano-detector on the nanosensor can detect signals in one sub-band. Note that, due to the noise and the low density of NBPs, some detectors may not receive enough power; thus, multiple nanosensors are employed to make the system reliable. Since the bone is relatively far from the skin and hard to penetrate, it blocks the propagation of the optical signal.
We assume both the finger and the bone are cylinders, with radii r_f and r_b, respectively. The blood vessels, including arteries, veins, and capillaries, are randomly distributed between the skin and the bone with density λ_b. In each blood vessel, the NBPs arrive with a density proportional to the area of the blood vessel's cross section, which is denoted by λ_pb=λ_0 S_b, where λ_0 is the NBP density per unit area and S_b is the area of the blood vessel's cross section. In reality, λ_0 is a function of time. When the NBPs are injected into the circulatory system, λ_0 gradually increases. After a while, some of the NBPs are disposed of by natural physiological processes and the density gradually decreases. Due to the high directivity of the nano-emitters and nano-detectors, we assume they can only radiate/detect signals with a large gain within a narrow beam. The system parameters are depicted in Fig. <ref> and the symbol notations are provided in Table <ref>. In this paper, we consider the sensing to be quasi-static, since light propagates much faster than the NBPs move. Thus, in the following the NBPs are assumed to be static and the optical channel remains constant during the sensing period.

§.§.§ Signal Propagation Model

The optical signals need to penetrate skin, fat, and blood vessels to reach the NBPs. Extensive analytical and empirical models have been derived to capture this process <cit.>. There are many categories of cells and tissues, and their properties can be drastically different. In <cit.>, an analytical channel model for intra-body in vivo biosensing is developed by considering the properties of individual cells. In this paper, we use the same model to describe the propagation loss of the EM wave radiated by the emitters, which can be written as
h(f,d)=e^-2/(k_w r_c)^2 ℜ{∑_n=1^N_stop(2n+1)(ℱ_M^n+ℱ_N^n)}d,
where k_w is the propagation constant, r_c is the cell radius, N_stop is the numerical truncation order, ℱ_M^n and ℱ_N^n are the wave vector coefficients in <cit.>, ℜ{·} denotes the real part of a complex number, d is the propagation distance and f is the operating frequency. Besides this large-scale fading, a Rayleigh fading coefficient with scale parameter σ_c is also considered, due to the multipath effect caused by scattering.

§.§.§ Particle Scattering Coefficient and Quantization Model

The NBPs first absorb power from the incident light and then scatter the power with the embedded information. Therefore, the NBPs can be regarded as an information source which sends encoded data x to the detectors. This process consists of two steps. First, the NBP absorbs the incident signal power at frequency f_t. Then, the NBP reallocates the absorbed power according to the scattering coefficients η_f_t,f_j, where f_j is the center frequency of a sub-band. Consequently, the scattered power forms a wide-band power spectrum that contains the information of the scattering coefficients.

As shown in Fig. <ref>, the signal scattered by the NBP spreads over a wide spectrum with varying signal intensity. The intensity in the figure can be regarded as received power, which is proportional to the particle scattering coefficient when the transmission power is a given constant. This scattering coefficient is considered as the transmitted signal x. As shown in the figure, the spectrum is a continuous signal; however, the estimation is discrete, i.e., we can only estimate a single coefficient within each sub-band to approximate the continuously varying power spectrum, as depicted in Fig. <ref>.
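To make this discretization step concrete, the following Python sketch collapses a continuous spectrum into one coefficient per sub-band; the Lorentzian toy line shape and all parameter values are assumptions for illustration only:

import numpy as np

# Sketch: approximate a continuous Raman spectrum by a single scattering
# coefficient per sub-band (here: the band average). The Lorentzian toy
# line shape, peak list and band edges are illustrative assumptions.
def toy_spectrum(shift_cm, width=12.0):
    peaks = [(1013.0, 30.0), (1200.0, 18.0), (1342.0, 22.0)]  # (center, height)
    return sum(h / (1.0 + ((shift_cm - c) / width) ** 2) for c, h in peaks)

N_f = 64
edges = np.linspace(800.0, 1800.0, N_f + 1)          # sub-band edges in cm^-1
eta = np.array([toy_spectrum(np.linspace(a, b, 50)).mean()
                for a, b in zip(edges[:-1], edges[1:])])  # coefficient vector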
As a result, we have to sample and quantize the continuous spectrum; then, based on the scattering coefficient vector η=[η_f_t,f_1, η_f_t,f_2,⋯,η_f_t,f_N_f], we can reconstruct the Raman spectrum.

As mentioned in the description of the system architecture, we can use either a centralized or a distributed system to estimate the coefficients. In the centralized algorithm, the number of bits that a nanosensor uses to describe the received signal can significantly affect the system communication overhead. In the distributed system, each nanosensor has its own estimate and the number of bits it utilizes is also crucial. To reduce the computational burden of the nanosensors and the system communication overhead, we use simple binary quantization. When the scattering coefficient (Raman intensity) is higher than a threshold, η_f_t,f_j is quantized to 1; when the estimated coefficient is smaller than the threshold, it is quantized to 0. To estimate the value, we set several such thresholds and divide the sensors into subgroups, each with its own threshold. Finally, based on the quantization results of all the nanosensors, we can estimate η. The details are discussed in the spectrum estimation in Section <ref>. In addition, since different molecules have different spectra, the event of transmitting 1 or 0 is a random process. In the following, we consider the probability of transmitting 1 to be p and the probability of transmitting 0 to be 1-p.

§.§.§ Noise Model

The noise in a sensing system can corrupt the detected signals and significantly affect the sensing capability. In the cooperative Raman spectroscopy system, there are primarily two noise sources, namely molecule noise and shot noise.

The NBPs flow through the circulatory system and interact with plenty of molecules. On the one hand, they encounter the valuable molecules carrying health information. Through optical scattering, we can detect those molecules by identifying the power spectrum. On the other hand, the NBPs also encounter many unexpected molecules in the intra-body environment. Although the particles are not designed to interact with these molecules, some chemical reactions can occur and change the particles' properties randomly, which is reflected in the received power spectrum. The original power spectrum is thus corrupted by unexpected noise power, and this noise needs to be taken into account when reconstructing the power spectrum.

Since the molecules in the human body belong to a large variety of categories exhibiting different resonant frequencies in the Raman spectrum, we can consider the noise power to be the same for all frequency bands. Therefore, the noise can be considered white, with uniform power across a wide band. Due to the large number of molecules, the noise value can be positive or negative, i.e., chemical reactions can enhance or cancel the original resonance, and its distribution is Gaussian with mean 0 and standard deviation σ_m. Consequently, the noise caused by molecules can be regarded as additive white Gaussian noise κ^m ∼𝒩(0, σ_m^2). With this molecule noise, the scattering coefficient of the biofunctional particle can be written as η_f_t,f_j+κ^m=η_f_t,f_j(1+κ). Note that if 1+κ<0, we consider the total scattering coefficient to be 0.

Shot noise, which obeys a Poisson distribution, is dominant in the detector.
Let x(t)=η_f_t,f_j(1+κ(t)) be the scattering coefficient of an NBP including molecule noise, P^t be the emitter transmission power, and h(f,d,t) be the response of the channel from nano-emitter to particle and then from particle to nano-detector. The received signal at a nano-detector using direct detection can be written as
y(t)=h(f,d,t)x(t)P^t+υ(t),
where υ(t) is the dark current. Then, the light intensity can be converted into a doubly stochastic Poisson process, which represents the number of photons arriving at the detector in a time interval Δ t. The probability that N_p photons arrive within Δ t is <cit.>
Pr {ŷ(t+Δ t)-ŷ(t)=N_p}=e^-γ_p·γ_p^N_p/N_p!,
where ŷ(t) is y(t) converted from light intensity to photon intensity and
γ_p=∫_t^t+Δ t y(τ) dτ ≈Δ t y(t),
where the approximation can be applied when Δ t is small enough. Note that here υ is a nonnegative constant <cit.> and y can be taken to have units of photons per second at the operating wavelength <cit.>.

§.§.§ Particle Arriving Model

The NBPs are injected into the circulatory system with a certain density. They arrive at the target sensing area with a diluted density. To model this process, we consider the arrival rate of NBPs in a unit cross section of a blood vessel to be λ_0. Since different blood vessels have different cross-section areas, their NBP arrival rates are also different. Moreover, the NBP arrival process is modeled as a Poisson process since the NBPs move independently and randomly in the blood <cit.>. The number of NBPs that can be excited by a nano-emitter depends on the position of the blood vessel, the distance to the nano-emitter, and the density of NBPs. The optical signal radiated by a nano-emitter covers a three-dimensional cone, and each nano-detector can receive the scattered optical signal in the same cone since the nano-emitter and nano-detector have almost the same position. As shown in Fig. <ref>, the blood vessels are homogeneously distributed between the skin and the bone. Although NBPs can receive power from multiple beams as long as the nano-emitters are close enough to each other, we consider that adjacent nano-emitters with overlapping beams work in different time slots, to eliminate the correlation among them and reduce the complexity of the analysis; i.e., in each time slot the NBPs within a beam can only receive power from one nano-emitter. Since the beam angle is small, we safely assume that all the NBPs on the same horizontal plane of the cone have the same distance to the emitter. For instance, the NBPs within Δ h in Fig. <ref> have the same distance to the nano-emitter. To find the number of particles in a blood vessel and the received power, we need the distributions of the length of the blood vessels within a cone and of their distance to the nano-emitter. Given the blood vessel's effective length l and its cross-section area, the number of particles within it is given as
Pr(n_p=N_p|L=l, S_b=s_b)=(λ_0s_bl /u)^N_p/N_p!e^-λ_0s_b l/u,
where s_b is the cross-section area of the blood vessel and u is the velocity of the blood. We assume the cross section of a blood vessel is uniformly distributed in [S_l, S_u] with probability density function f(s_b)=1/(S_u-S_l).

§.§.§ Nanosensor Position and Minimum Number

One of the design objectives is that how the smart ring is worn should not affect its performance. With this in mind, we place the nanosensors in a homogeneous way, as shown in Fig. <ref>.
When there are N_f sub-bands and N_s nanosensors, we first place the nanosensors of sub-band 1 at [0, 2π/N_s, ⋯, 2(N_s-1)π/N_s]. Then, the nanosensors of sub-band 2 are placed at [2π/N_s N_f, 2π/N_s+2π/N_s N_f,⋯,2(N_s-1)π/N_s+2π/N_s N_f]. Similarly, the nanosensors of the n^th sub-band are placed at [2(n-1)π/N_s N_f, 2π/N_s+2(n-1)π/N_s N_f,⋯,2(N_s-1)π/N_s+2(n-1)π/N_s N_f]. Three examples are provided in Fig. <ref> for N_f=8 and N_s=1, 2, and 3, respectively.

As discussed in the preceding sections, it is possible that there is no blood vessel within a nano-emitter/detector's beam. As a result, the nano-detector cannot receive any signal. If this happens for all the nano-detectors of a sub-band, the power spectrum of that sub-band is missing. Hence, in our design we need to guarantee that this can only happen with arbitrarily low probability. The blood vessels are homogeneously distributed and thus the probability that there are N_b blood vessels within the effective area of the f_j sub-band is
Pr(n_b=N_b)=(0.5λ_b N_s h_c^2 tanα/2)^N_b/N_b!e^-0.5λ_b N_s h_c^2 tanα/2.
Thus, when N_b=0,
Pr(n_b=0)=e^-0.5λ_b N_s h_c^2 tanα/2.
Since the blood vessel density is a constant, which we cannot freely adjust, and the nano-emitter/detector's beamwidth is preconfigured, only the number of nanosensors can be varied. An arbitrarily small threshold τ_b is set to guarantee that Pr(n_b=0)≤τ_b, and the minimum sensor number is
N_b^s ≥-2 lnτ_b/λ_b h_c^2 tanα/2.
Note that this minimum number can only promise that there are blood vessels going through a nano-emitter's/detector's beam. It does not guarantee that the detector can receive a scattered signal, because this also depends on the NBP density.

§ NANOSENSOR OPTIMAL POWER ALLOCATION

As with other wearable devices, power consumption is a critical issue for the smart ring utilized for cooperative Raman spectroscopy <cit.>. In this section, we first derive a capacity for optical signal transmission in the intra-body environment, to measure the information delivered by a sub-band, upon which we develop the optimal power allocation scheme. In this paper, both the power and the photon intensity are utilized. As described in (<ref>), the received signal can be expressed in terms of the input signal and the dark current, which are both denoted in photon intensity. The photon intensity can be converted into power by multiplying by the energy per photon E_p=h_PCc_LT/λ_w, where h_PC is Planck's constant, c_LT is the speed of light, and λ_w is the wavelength.

§.§ Capacity Analysis

The capacity analysis is mainly based on (<ref>). Since the detection takes a very short time, we assume the particle positions and the channel state are constant within such a period, and thus the time t is omitted. When the nano-detector receives one photon, it considers the scattering coefficient to be 1, which can be related to the results after quantization. Otherwise, the nano-detector considers the scattering coefficient to be 0. Following the method in <cit.>, when the nano-detector receives more than one photon, the signal is regarded as 0 by considering it an error. Since we consider a very short period, the probability of receiving more than one photon is extremely low. If 0 is transmitted, we can only receive 0, which delivers no information.
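This single-photon detection rule can be checked numerically; the following Python sketch estimates, by Monte Carlo, the probability of declaring 1 for each transmitted symbol (all parameter values are illustrative assumptions):

import numpy as np

rng = np.random.default_rng(0)

# Sketch: Monte Carlo estimate of the on-off photon channel. A "1" is
# declared iff exactly one photon arrives within delta_t. The channel gain
# h, power P_t, coefficient eta and the remaining values are assumptions.
h, P_t, eta, sigma_m, upsilon, dt = 1e-6, 1e9, 0.5, 0.1, 1e2, 1e-3
trials = 200_000

def pr_declare_one(bit):
    kappa_m = rng.normal(0.0, sigma_m, trials)        # molecule noise
    x = np.maximum(bit * eta + kappa_m, 0.0)          # coefficient is nonnegative
    photons = rng.poisson((h * P_t * x + upsilon) * dt)
    return np.mean(photons == 1)

print("Pr(1|1) ~", pr_declare_one(1))
print("Pr(1|0) ~", pr_declare_one(0))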
Consequently, we consider only the scenario in which 1 is received; the transition probabilities of a sub-band channel are
Pr(1|0) =(h_i,j,k·κ^m_i,j,k· P^t_i,j+υ )·δ_t · e^-(h_i,j,k·κ^m_i,j,k· P^t_i,j+υ )·δ_t ;
Pr(1|1) =[h_i,j,k·(η_f_t,f_j+κ^m_i,j,k)· P^t_i,j+υ] ·δ_t · e^-[h_i,j,k·(η_f_t,f_j +κ^m_i,j,k)· P^t_i,j+υ] ·δ_t,
where i ranges from 1 to N_s (nanosensor index), j from 1 to N_f (sub-band index), k from 1 to N_p^i,j (NBP index within a nano-emitter's/detector's beam), and h_i,j,k=h(f,d^ep)· h(f,d^pd), where d^ep and d^pd are the distance from nano-emitter to NBP and from NBP to nano-detector, respectively. Then, the mutual information can be written as
I(X,Y) =H{Y}-H{Y|X}=H{p · Pr(1|1)+(1-p) · Pr(1|0)}-p · H{Pr(1|1)}-(1-p) · H{Pr(1|0)}.
As pointed out in <cit.>, δ_t is very small and two approximations can be made to simplify I(X,Y), i.e., H{x}=-xlog x+x and e^xδ_t≈ 1. In addition, we define the following three functions:
ξ_1(x_1,x_2,x_3)=-(x_1 +x_2 +x_3)log(x_1 +x_2 +x_3);
ξ_2(x_1,x_2,x_3,x_4)=x_1(x_2+x_3 +x_4) log(x_2+x_3 +x_4);
ξ_3(x_1,x_2)=(1-x_1)x_2log(x_2).
As a result, the ergodic capacity of the information within δ_t that we can obtain from the Raman signal is
C=max_x(t)≤η_f_t,f_j E{I(X,Y)/δ_t} ≈ E{ξ_1(ph_i,j,kP^t_i,jη_f_t,f_j, h_i,j,kP^t_i,jκ^m_i,j,k,υ ) +ξ_2(p,h_i,j,kP^t_i,jη_f_t,f_j,h_i,j,kP^t_i,jκ^m_i,j,k,υ)+ξ_3(p,h_i,j,kκ^m_i,j,k P^t_i,j+υ )}.
Up to this point, we have implicitly assumed N_p^i,j=1, i.e., there is only one NBP within the nano-emitter/detector's beam cone. When there are multiple NBPs, the transition probabilities can be updated as
Pr(1|0) =(∑_k=1^N_p^i,jh_i,j,k·κ^m_i,j,k· P^t_i,j+υ )·δ_t · e^-(∑_k=1^N_p^i,jh_i,j,k·κ^m_i,j,k· P^t_i,j+υ )·δ_t ;
Pr(1|1) =[∑_k=1^N_p^i,jh_i,j,k·(η_f_t,f_j+κ^m_i,j,k)· P^t_i,j+υ] ·δ_t · e^-[∑_k=1^N_p^i,jh_i,j,k·(η_f_t,f_j +κ^m_i,j,k)· P^t_i,j+υ] ·δ_t.
When there are N_s nanosensors and each nanosensor has N_f sub-bands, the system ergodic capacity can be written as
C_sys=∑_i=1^N_s∑_j=1^N_fC_i,j≈ ∑_i=1^N_s∑_j=1^N_fE{ξ_1(∑_k=1^N_p^i,jph_i,j,kP^t_i,jη_f_t,f_j, ∑_k=1^N_p^i,jh_i,j,kP^t_i,jκ^m_i,j,k,υ ) +ξ_2(p,∑_k=1^N_p^i,jh_i,j,kP^t_i,jη_f_t,f_j,∑_k=1^N_p^i,jh_i,j,kP^t_i,jκ^m_i,j,k,υ)+ξ_3(p,∑_k=1^N_p^i,jh_i,j,kκ^m_i,j,k P^t_i,j+υ )}.
Based on this equation, in the next subsection we optimally allocate P_i,j^t to achieve the best estimation results.

§.§ Optimal Power Allocation

Since the Raman spectrum occupies a wide frequency band and different frequencies experience different absorption and scattering, it is inefficient to allocate the same amount of power to all the nano-emitters.
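Before deriving the allocation itself, note that the capacity expression above is straightforward to evaluate numerically once the channel and noise statistics are fixed. A minimal Python sketch in normalized, illustrative units (the Rayleigh channel draw and all parameter choices are assumptions) is:

import numpy as np

rng = np.random.default_rng(1)

# Sketch: Monte Carlo evaluation of the approximate per-sub-band ergodic
# capacity using the xi_1, xi_2, xi_3 functions defined above. Normalized,
# illustrative units; all parameter choices are assumptions.
def xi1(x1, x2, x3):     return -(x1 + x2 + x3) * np.log(x1 + x2 + x3)
def xi2(x1, x2, x3, x4): return x1 * (x2 + x3 + x4) * np.log(x2 + x3 + x4)
def xi3(x1, x2):         return (1.0 - x1) * x2 * np.log(x2)

p, eta, P_t, upsilon, sigma_m = 0.5, 0.5, 1.0, 0.01, 0.1
trials = 100_000
h = rng.rayleigh(0.5, trials)                     # fading channel samples
kappa = np.abs(rng.normal(0.0, sigma_m, trials))  # molecule-noise magnitude

C = np.mean(xi1(p * h * P_t * eta, h * P_t * kappa, upsilon)
            + xi2(p, h * P_t * eta, h * P_t * kappa, upsilon)
            + xi3(p, h * kappa * P_t + upsilon))
print("per-sub-band capacity estimate:", C)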
Due to the unique nature of the sensing system, we do not have real-time channel state information; thus, the power allocation is based on prior knowledge of the channel, which is derived in <cit.> and measured experimentally in <cit.>. Let the total sensing power in the ring be P^t. Since the nanosensors have the same sub-band emitters and detectors, the power can first be equally allocated to each nanosensor and then optimally allocated to each nano-emitter. Therefore, the transmission power of each nanosensor is P^s=P^t/N_s and we can optimize the power allocation within one nanosensor instead of all the nanosensors, i.e., C_sys≈ N_s∑_j=1^N_fC_j. We implicitly assume all the nanosensors have the same configuration, and the subscript i is dropped.

To guarantee that all the sub-bands have the same capability to extract information from the biofunctional particle, their capacities should be the same. Thus, the condition that needs to be satisfied is
C_1 =C_2=⋯=C_N_f, s.t. ∑_j=1^N_fP_j^t=P^s,
where P_j^t is the transmission power of the j^th sub-band emitter. By observing (<ref>), we find that the transmission power always appears multiplied by the channel coefficients. If all the h_j,kP^t_j are the same, then (<ref>) is satisfied. As a result, the optimal power for the j^th sub-band is given as
P_j^t=P_s/N_f ∑_k=1^N_p^j h_j,k.
However, N_p^j and h_j,k are dynamic random variables determined by the NBPs and, as discussed before, the power allocation must be based on prior knowledge of the channel. Therefore, using the system model provided in Section II, we derive the expected value of ∑_k=1^N_p^j h_j,k, which eliminates the randomness in the power allocation.

When the transmission power of an emitter is P_j^t, the detected power without noise can be written as
P_j^d=∑_k=1^N_p^j h_j,kη_f_t,f_j P_j^t.
In view of (<ref>), if we can find P_j^d given η_f_t,f_j and P_j^t, then ∑_k=1^N_p^j h_j,k can be found. It is worth noting that, since the bandwidth B_sub=f_j+1-f_j is small enough, the channel can be considered flat fading within a sub-band. Our analysis is general and holds for all the sub-bands. We first derive the expected detected power for one nano-detector. As shown in Fig. <ref>, we divide the cross section of the cone into sub-regions of height Δ h. Then, we classify the NBPs into the sub-regions based on their positions. Here, the height Δ h is taken as the largest height of a blood vessel's cross section, i.e., Δ h=2√(S_u/π). The expected detected power can be expressed as
E{P_j^d} =E{∑_k=1^N_p^jP_j,k^d}≈ E{∑_n=1^R_s∑_k=1^N̂_p_n^j P_j,n,k^d}≈∑_n=1^R_s E{N̂_p_n^j} E{P̂_j,n^d},
where P_j,k^d is the detected power scattered by the k^th NBP, N̂_p_n is the NBP number within the n^th sub-region, and P̂_j,n^d is the expected detected power scattered by the n^th sub-region. Due to the division of the cross section of the beam cone, (<ref>) can be approximated by (<ref>). Next, we look at each sub-region and find the expected detected power.

In each sub-region, we consider all the NBPs to have the same distance to the nano-detector, since the beam angle is very small. The expected NBP number in a sub-region can be found by using
E{N̂_p_n^j} =∑_n=1^∞[ n · Pr(N̂_p_n^j=n)].
Due to the complicated blood vessel distribution and the different cross-section areas, we consider an equivalent scenario, i.e., the randomly distributed blood vessels in the same sub-region of the cone are considered as one equivalent blood vessel.
The average length of a blood vessel in a sub-region can be expressed as
l̂ =∫_0^d tanα/22√((d tanα/2)^2-x^2)/dtanα/2dx=π d tanα/2/2.
The cross section of the equivalent blood vessel can be approximated by (S_u+S_l)/2, since the cross section is uniformly distributed. The expected number of blood vessels in a sub-region can be expressed as
λ_eq=λ_b d Δ h tanα/2/2 r_f^2.
Then, the length of the equivalent blood vessel is l_eq=l̂·λ_eq and the probability that there are n NBPs in the equivalent blood vessel can be written as
Pr(N̂_p_n^j=n)=(λ_0s_eql_eq /u)^n/n!e^-λ_0s_eq l_eq/u.
Next, the expected detected power from one particle at distance d is given as
E{P̂_j,n^d}=E{P_j^t G_t(f_t) h_j,nη_(f_t,f_j) G_r(f_j)},
where G_t(f_t) is the gain of the nano-emitter at frequency f_t and G_r(f_j) is the gain of the nano-detector. Since on the right-hand side of (<ref>) only h_j,n is a random variable (it is a function of distance and subject to Rayleigh fading), (<ref>) can be simplified as
E{P̂_j,n^d}=π/2P_j^tG_t(f_t) η_f_t,f_j G_r(f_j) h_j,nσ^2.
By substituting (<ref>) and (<ref>) into (<ref>), we obtain the expected power detected by a nano-detector. Finally, the expected value of ∑_k=1^N_p^j h_j,k can be found by dividing E{P_j^d} by η_f_t,f_j P_j^t.

Unlike conventional wireless communication, which uses the water-filling algorithm to optimally allocate power <cit.>, the power allocation in (<ref>) is inversely proportional to the channel quality. In wireless communications, more power is often given to the sub-bands with less attenuation to increase the system throughput. In this sensing system, giving more power to the sub-bands with less attenuation would also yield accurate estimation results there; however, the high-attenuation sub-bands with less allocated power may generate unexpected peaks, which makes it hard to identify the molecules. For instance, we can express the idea using the simplified notation P̃_t h̃η̃ + ñ=P̃_r, where P̃_t, h̃, η̃, ñ, and P̃_r are the transmission power, channel coefficient, scattering coefficient, system noise, and received power, respectively. If we use the water-filling algorithm, when h̃ is large, P̃_t is also large and thus ñ is relatively small compared with P̃_t. Hence, η̃ can be accurately estimated by the maximum-likelihood estimate P̃_r/(P̃_th̃). When h̃ is small, P̃_t is also small according to the water-filling algorithm. The estimation then becomes inaccurate; in particular, when the noise is strong (i.e., the received power is large), the estimated η̃ deviates substantially from the original value, which generates a spurious peak/null in the spectrum. Since identifying a Raman spectrum is mainly based on its resonant peaks, such unexpected peaks can cause misleading detection results. Consequently, the conventional water-filling algorithm does not work here and we need to allocate power following (<ref>).

The above power allocation does not include the scattering coefficient; we only use the channel condition, for the following reasons. Since the variation of the scattering coefficient is much larger than the distortion of the channel, an allocation including it would be dominated by the scattering coefficients. In other words, the variation of η_f_t,f_j is larger than that of h(f,d), and thus the emitter transmission power would be almost inversely proportional to the scattering coefficient. When the noise is small or the transmission power is high enough, the estimation accuracy can then be reasonable. However, when the system becomes highly distorted, the detected signal can be considered as noise.
When we calculate the scattering coefficient, the transmission power needs to be divided out, and then we have two scenarios. First, when the detected signal variation is smaller than the scattering coefficient, this division yields approximately the original spectrum, which is dominated by the scattering coefficient. Thus, if we want to detect a molecule and allocate power based on its scattering coefficient, the detection result is always positive, no matter what kind of molecules are actually inside the human body. Second, when the detected signal variation is large, the scattering coefficient cannot be recovered, since the noise is strong. Generally, the sensing system fails at high noise. Since spurious positive detection results can thus be obtained when the noise is strong, we do not consider the scattering coefficient in the power allocation; only the channel dispersion is taken into account. In addition, the optimal power allocation strategy is not affected by the quantization threshold; it is only determined by the optical channel condition.

§ SPECTRUM RECONSTRUCTION

In this section, we provide both the centralized and the distributed sensing algorithms to reconstruct the Raman spectrum based on the observations of the nanosensors. Within the sensing period, the photon numbers received by the nano-detectors form N^d∈ℝ^N_s × N_f, whose element (i,j) is the photon number received by the i^th sensor's j^th nano-detector. Based on it, we estimate the NBP scattering coefficient η to find the Raman intensity.

§.§ Spectrum Estimation with Shot Noise

The detected photon number is a random number according to (<ref>). Based on the photon number, we need to estimate the received signal y in (<ref>). As suggested by (<ref>), the relation between the received signal and the photon number obeys a Poisson distribution. Maximum likelihood can then be utilized to estimate the received signal. We define
g=e^-y· y^N^d_i,j/N^d_i,j!≈e^-y/√(2π)(ye)^N^d_i,j (N^d_i,j)^-N^d_i,j-1/2.
Note that we consider the time interval Δ t to be a constant, so that γ_p is simply approximated by y. Then, taking the derivative with respect to N^d_i,j,
(ln g)'=ln(ye)-1/2N^d_i,j-ln N^d_i,j-1.
The estimated received signal ŷ which maximizes (<ref>) is
ŷ=e^ln N^d_i,j+1/2 N^d_i,j≈ N^d_i,j.
The estimation mean square error can be written as
e_a=∑_y=0^∞( e^-y· y^N^d_i,j/N^d_i,j!·( y-N^d_i,j )^2 ).
Once we have the estimate ŷ, we need to estimate the coefficient η_f_t,f_j based on the knowledge of the system model. The shot noise υ is a nonnegative constant <cit.> which can be subtracted from ŷ, and the channel information can be found by using the derived expected detected power, upon which we can estimate η_f_t,f_j.

§.§ Scattering Coefficient Estimation

§.§.§ Centralized Sensing

Up to this point, we have knowledge of the received signal, the shot noise, and the expected value of the received power. Then, an estimate of η_f_t,f_j can be written as
η̂_f_t, f_j=∑_i=1^N_s(ŷ_i,j-υ)^+/ E{∑_k=1^N_p^j h_j,k}·N̂_s=η_f_t,f_j+Δ n,
where Δ n is the estimation error, ŷ_i,j is the estimated signal of the i^th nanosensor's j^th sub-band nano-detector, (x)^+=max(0,x), N̂_s is the number of detectors for which ŷ_i,j-υ≥0, and E{∑_k=1^N_p^jh_j,k} can be found via (<ref>) to (<ref>).

In the centralized architecture, each nano-detector sends the received photon number directly to the data fusion center. Based on N^d and each detector's operating frequency, the received signal ŷ_i,j is first estimated using (<ref>). Then, the signal, expressed as a photon number, is converted to power.
The data fusion center can directly use (<ref>) to estimate the scattering coefficient. The centralized sensing algorithm is summarized in Algorithm <ref>. As we can see, the centralized sensing is very simple, but it relies on the full information of all the sensed data, which results in high communication overhead and high power consumption.

§.§.§ Distributed Sensing

In contrast to the centralized estimation, in the distributed estimation each nanosensor's detector first estimates and quantizes the scattering coefficient. Only one bit is sent to the data fusion center for the final spectrum reconstruction. In this way, the data communication overhead between the nanosensors and the data fusion center can be significantly reduced. Although we do not have knowledge of the PDF (probability density function) of Δ n, we can still estimate η_f_t,f_j by using the method in <cit.>. However, different from <cit.>, the scattering coefficient lies in [0, ∞), i.e., it cannot be negative. Therefore, the algorithm needs to be updated before it can be applied to Raman spectrum reconstruction. It should be noted that we assume the sensors have prior knowledge of the coefficients η_f_t,f_j, i.e., the sensing system tries to detect whether a given molecule is present in the intra-body environment or not.

A nano-detector has the information of its shot noise υ, the detected photon number N^d_i,j, the expected channel condition E{∑_k=1^N_p^j h_j,k}, and the corresponding target NBP coefficient η_f_t,f_j, where f_j is its detection center frequency. First, by using the detected photon number and (<ref>), the nano-detector can find the received signal and convert it into the power notation ŷ_i,j. Then, it can estimate η_f_t,f_j locally by using
η̂_f_t,f_j^local=ŷ_i,j-υ/E{∑_k=1^N_p^j h_j,k}.
Now, instead of sending η̂_f_t,f_j^local to the data fusion center, the nano-detector first quantizes it; the quantization threshold is determined by the nanosensor.

The N_s nanosensors are divided into K groups and group G_k uses τ_k as its quantization threshold. Each τ_k is considered as a threshold for binary quantization. Consider that the nanosensor collects the local estimation results and sets the maximum quantization threshold as
T_i=max(η_f_t,f_j)+∑_j=1^N_fη̂_f_t,f_j^local/N_f.
Ideally, max(η_f_t,f_j) is the maximum value of the coefficient. However, due to the noise, the dynamic NBP number, and the channel distortion, the estimated value may be larger or smaller than the original scattering coefficient, and different nanosensors may have drastically different estimated values, although the reconstructed spectra may have similar shapes. The mean estimated scattering coefficient is therefore added to adjust the level of the threshold. As a result, Pr(η̂_f_t,f_j>T)≈ 0. The interval [0,T] is divided into K sub-intervals [τ_i,0,τ_i,1,⋯, τ_i,K], where τ_i,K=T_i. Then, the nano-detector can quantize η̂_f_t,f_j^local using these thresholds.

The estimate of η̂_f_t,f_j can then be updated as
η̂_f_t,f_j=1/4∑_k=1^K{1/N_G_k∑_s=1^N_G_k[b_s,j(τ_i,k+1-τ_i,k-1)]},
where N_G_k is the number of nanosensors in group k whose estimated received signal ŷ is nonzero. The distributed sensing algorithm is summarized in Algorithm <ref>.
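In code form, the two fusion rules can be sketched compactly as follows; the inputs (received powers ŷ, dark current υ, and the expected channel sum E{∑h}) are placeholders for the quantities derived above, and the threshold bookkeeping is a simplified rendering of the grouping scheme:

import numpy as np

# Sketch of the two fusion rules. y_hat is an (N_s x N_f) matrix of received
# powers, upsilon the dark current, E_sum_h a stand-in for E{sum_k h_jk}.
def centralized(y_hat, upsilon, E_sum_h):
    excess = np.maximum(y_hat - upsilon, 0.0)           # (x)^+ per detector
    n_valid = np.maximum((excess > 0).sum(axis=0), 1)   # detectors above noise
    return excess.sum(axis=0) / (E_sum_h * n_valid)     # eta_hat per sub-band

def distributed(y_hat, upsilon, E_sum_h, K=4):
    local = (y_hat - upsilon) / E_sum_h                 # local estimates
    T = local.max() + local.mean()                      # simplified T_i choice
    tau = np.linspace(0.0, T, K + 1)                    # K binary thresholds
    groups = np.array_split(np.arange(local.shape[0]), K)
    est = np.zeros(local.shape[1])
    for k, g in enumerate(groups):                      # fuse 1-bit decisions
        bits = (local[g] > tau[k + 1]).mean(axis=0)
        est += bits * (tau[min(k + 2, K)] - tau[k])     # weight tau_{k+1}-tau_{k-1}
    return est / 4.0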
In Algorithm <ref>, steps 1 to 11 are performed by the nano-detector and step 12 is conducted in the data fusion center.

§.§.§ Estimation Error Evaluation

By using the estimated scattering coefficients of the NBP obtained above, we can find the Raman intensity in the j^th sub-band by using I_R=P^t_expη_f_t,f_jλ_j/h_PC c_LT, where P^t_exp is the transmission power used in the experiment in <cit.> and λ_j is the wavelength of the j^th sub-band. Note that identifying the molecule is mainly based on the resonant peaks in the Raman spectrum, and thus the level of the intensity is not crucial (it can also be adjusted by using a different transmission power). Motivated by this observation, we first normalize the spectrum by dividing by its mean value and then calculate the Mean Square Error (MSE), i.e.,
e_s=1/N_f∑_j=1^N_f(I_R,j/I̅_R-Î_R,j/Î̅_R)^2,
where I_R,j is the original Raman intensity in the j^th sub-band, I̅_R is the mean value of the original Raman intensity across all the sub-bands, Î_R,j is the estimated Raman intensity in the j^th sub-band, and Î̅_R is the mean value of the estimated Raman intensity across all the sub-bands. The outage probability is defined as Pr(e_s>τ_t), where τ_t is a threshold. When e_s is smaller than τ_t, we consider the estimated results to maintain a certain accuracy. In the numerical analysis of the system performance and the optimal configuration, we use the outage probability as a guideline.

§ NUMERICAL ANALYSIS AND OPTIMAL SYSTEM CONFIGURATION

In this section, we find the optimal configuration of the system based on the system model and the developed estimation algorithms. The optimal configuration design is constrained by the total transmission power P^t and the maximum number of nano-emitters and nano-detectors. The optimal configuration of the system should meet three objectives, namely: 1) a minimum number of nanosensors that ensures we can successfully reconstruct the spectrum; 2) a minimum NBP density that guarantees the accuracy and reliability of the estimation results; and 3) a minimum transmission power that reduces the overall power consumption of the system.

Before embarking on the analyses of different system configurations, we give an ideal estimated spectrum obtained with optimized numbers of nanosensors, NBP density, and transmission power. In addition, the considered molecule noise and shot noise powers are relatively small. In this way, we show the characteristics of good estimations; in the following discussions we then investigate the effect of each parameter and find its optimal value.

§.§ Ideal Estimation

The molecule utilized in this numerical simulation is 1,2-bis(4-pyridyl)-ethylene, whose scattering coefficient and Raman spectrum are measured in <cit.>. In the numerical analysis, we first randomly generate a set of blood vessels, but we do not change their positions and number in the subsequent numerical analyses since the blood vessels are fixed in reality. Other random parameters, such as the NBP density and positions, channel fading, and noise, are randomly generated in each numerical simulation. The numerical parameters are provided in Table <ref>.

As shown in <cit.>, the Raman peaks of 1,2-bis(4-pyridyl)-ethylene molecules are at 1013, 1200, 1342, 1608, and 1636 cm^-1. As depicted in Fig. <ref>, using the centralized sensing architecture the Raman peaks are at 1016, 1205, 1350, 1616, and 1641 cm^-1 and the MSE is 0.4. Using the distributed sensing architecture the Raman peaks are at 1016, 1205, 1350, 1603, and 1641 cm^-1 and the MSE is 1.1.
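These MSE values follow directly from the normalized metric e_s defined above; as a Python sketch:

import numpy as np

# Sketch: shape-normalized MSE e_s and the corresponding outage probability.
def mse_normalized(I_ref, I_est):
    a = np.asarray(I_ref, float) / np.mean(I_ref)   # remove overall scale
    b = np.asarray(I_est, float) / np.mean(I_est)
    return np.mean((a - b) ** 2)

def outage_probability(e_s_samples, tau_t=1.5):
    return np.mean(np.asarray(e_s_samples, float) > tau_t)   # Pr(e_s > tau_t)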
The estimated spectrum matches the original spectrum very well. Moreover, the maximum difference in the resonant peaks' Raman shift between the estimated and the original signal is 8 cm^-1. However, if we reduce the transmission power or the NBP density, the accuracy of the estimation results cannot be maintained. For example, in Fig. <ref> the NBP density is reduced to 1× 10^10 /s/m^2. The MSEs of the centralized and distributed sensing results are 1.75 and 2.0, respectively. As we can see in the figure, within the left-hand-side oval the estimated signals have two peaks, while the original signal has only one. Within the right-hand-side oval, the original signal has two peaks, while the estimated signals have only one. Due to the low NBP density, the estimation accuracy is reduced. In the following, we investigate the effects of the nanosensor number, the biofunctional particle density, the noise, and the transmission power. The outage probability threshold τ_t is set to 1.5 and 3. If the MSE e_s is smaller than 1.5, we can reconstruct the Raman spectrum accurately. When 1.5≤ e_s ≤ 3, there are some unexpected or missed peaks in the spectrum, but the shape of the reconstructed Raman spectrum is still very similar to the original one. When e_s>3, the reconstructed Raman spectrum is highly distorted and becomes very different from the original one, which means the results are not acceptable.

§.§ Nanosensor Number

In (<ref>) we derived the minimum nanosensor number based on the blood vessel density. The nanosensor number should satisfy (<ref>) to guarantee that there are blood vessels crossing the beam cones for all the sub-bands. In Fig. <ref>, the nanosensor number is varied and the outage probability of the estimation error is evaluated. The threshold τ_b in (<ref>) is set equal to the outage probability. As we can see in the figure, the theoretical minimum number of nanosensors derived in (<ref>) lies below the numbers required by the estimation outage probabilities. Hence, fewer nanosensors suffice to satisfy the condition in (<ref>), but more nanosensors are needed to achieve a certain estimation accuracy. Moreover, it is obvious that the centralized sensing architecture requires fewer nanosensors than the distributed sensing architecture. When the nanosensor number is larger than 30, both the centralized and the distributed sensing architectures can achieve very high estimation accuracy. Observe that there are some fluctuations in the curves; this is mainly because, depending on the distribution of the nanosensors, some blood vessels in the nano-detectors' beams are far from the detectors, which makes the detected power small. As the number of nanosensors increases, this effect decreases.

§.§ Nano-biofunctional Particle Density

A minimum biofunctional particle density is always desired to reduce side-effects. In Fig. <ref> the density is varied from 10^8/s/m^2 to 10^11/s/m^2. Similarly, the centralized sensing architecture still outperforms the distributed sensing architecture, i.e., it requires a smaller NBP density. In addition, to achieve near-zero outage probability with high estimation accuracy (τ_t=1.5), the required density is 2.6× 10^10/s/m^2 for both the centralized and the distributed sensing architectures, which was the value adopted in the ideal estimation. We also notice that the outage probability of the centralized sensing results decreases gradually as the NBP density increases, while the outage probability of the distributed sensing results drops much faster.
Ultimately, the two architectures require almost the same NBP density to obtain accurate estimation results. The reason is that, when some nanosensors receive highly distorted data, the centralized algorithm can mitigate this effect by averaging the data. However, the distributed sensing architecture has already lost a certain amount of accuracy during quantization. Moreover, the weight of the highly distorted data is large in the distributed estimation algorithm, since the nanosensors are divided into sub-groups and each nanosensor plays an important role in its sub-group. This effect can be reduced by using more nanosensors.

§.§ Effect of Noise and Transmission Power

The detected signal-to-noise ratio is mainly determined by the noise level and the transmission power. As discussed in the preceding sections, the molecule noise and the shot noise (mainly dark current) affect the estimation in different ways. In Fig. <ref> the influence of molecule noise is evaluated. As we can see, when σ_m is smaller than 4, both the centralized and distributed sensing architectures can achieve very accurate estimation. However, as the noise increases, the distributed sensing architecture becomes inaccurate. Also, when σ_m is larger than 25, the outage probability of the centralized sensing architecture with threshold 1.5 also increases slowly. Generally, the molecule noise does not have a strong influence on the spectrum reconstruction as long as it is not very strong. The reason is that the molecule noise is added to the scattering coefficient, i.e., η_f_t,f_j+κ^m, and the primary feature of the Raman spectrum is its resonant peaks. Since η_f_t,f_j is large at the resonant Raman shifts, the noise has negligible effects there. As a result, the resonant peaks are not prone to corruption by molecule noise.

The effect of shot noise is shown in Fig. <ref>. Different from the molecule noise, shot noise can influence the estimation accuracy dramatically. Here we mainly consider the dark current noise. If the signal power is comparable with the dark current noise, the detected photon number may shift drastically from the accurate value according to (<ref>). Moreover, as analyzed in Section <ref>, the dark current noise can create unexpected peaks in the Raman spectrum, which makes the spectrum unrecognizable. On the other hand, we can increase the estimation accuracy by increasing the transmission power. As shown in Fig. <ref>, when the dark current noise is larger than 2.5, the estimation results become inaccurate. When it is larger than 25, both the centralized and distributed sensing architectures become unacceptable.

Next, we evaluate the effect of transmission power. As depicted in Fig. <ref>, when the transmission power is low, the signal is corrupted by the noise in the system and the outage probability is high. For both the centralized and distributed sensing architectures, 10 dBm is the minimum transmission power required to achieve high estimation accuracy. We also note that when the transmission power increases further above 20 dBm, the outage probability of distributed sensing increases slightly. This is because the high received power increases the variance in (<ref>), which reduces the estimation accuracy. Moreover, although the centralized sensing architecture requires less transmission power, this does not imply that it is more power efficient, because data communication and quantization also consume power, which is not counted here.
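As a toy illustration (the numbers below are illustrative only, not the system parameters) of why dark current is more damaging than molecule noise: the snippet draws Poisson photon counts with an additive dark rate. Once the dark rate is comparable to the per-sub-band signal, even exact dark subtraction leaves large fractional errors in the weak sub-bands, which is how spurious peaks can appear:

```python
import numpy as np

rng = np.random.default_rng(1)
signal = np.array([5.0, 80.0, 6.0, 120.0, 4.0])  # photons per sub-band (toy)
for dark in (0.0, 2.5, 25.0):
    counts = rng.poisson(signal + dark)          # shot noise incl. dark rate
    est = counts - dark                          # exact dark subtraction
    print(dark, np.abs(est - signal) / signal)   # fractional error per band
```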
§ CONCLUSION

Biosensing using nanotechnology can provide unprecedented accuracy for the bio-detection of DNA and proteins, and for disease diagnosis and treatment. Although conventional Raman spectroscopy can provide nanoscale information in the intra-body environment, the equipment is bulky and expensive. In this paper, we propose cooperative Raman spectroscopy using a large number of nanosensors on a smart ring. In this way, the sensing device can be portable and affordable. The nanosensors can jointly and distributively emit and detect optical signals. Meanwhile, the nano-biofunctional particles (NBPs) carrying health information can absorb optical power and then send the information to nano-detectors via Raman scattering. We propose centralized and distributed sensing architectures to estimate the Raman spectrum. The mathematical models of each component in the sensing system are introduced, and the information capacity of the sensing system is derived to optimally allocate power among nano-emitters. The effects of the NBP density and molecule noise are analyzed, and the accuracy of the sensing system is evaluated. The results show that cooperative Raman spectroscopy is able to provide an accurate estimate of the Raman spectrum, which can be utilized for molecule and chemical identification. Because of its small profile and low power consumption, we believe cooperative Raman spectroscopy can find significant applications in future smart health.

| http://arxiv.org/abs/1703.08906v1 | {
"authors": [
"Hongzhi Guo",
"Josep Miquel Jornet",
"Qiaoqiang Gan",
"Zhi Sun"
],
"categories": [
"cs.SY"
],
"primary_category": "cs.SY",
"published": "20170327023619",
"title": "Cooperative Raman Spectroscopy for Real-time In Vivo Nano-biosensing"
} |
We combine Gaia data release 1 astrometry with Sloan Digital Sky Survey (SDSS) images taken some ∼ 10-15 years earlier to measure proper motions of stars in the halo of our Galaxy. The SDSS-Gaia proper motions have typical statistical errors of 2 mas/yr down to r ∼ 20 mag, and are robust to variations with magnitude and colour. Armed with this exquisite set of halo proper motions, we identify RR Lyrae, blue horizontal branch (BHB), and K giant stars in the halo, and measure their net rotation with respect to the Galactic disc. We find evidence for a gently rotating prograde signal (⟨ V_ϕ⟩∼ 5-25 km s^-1) in the halo stars, which shows little variation with Galactocentric radius out to 50 kpc. The average rotation signal for the three populations is ⟨ V_ϕ⟩ = 14 ± 2 ± 10 (syst.) km s^-1. There is also tentative evidence for a kinematic correlation with metallicity, whereby the metal-richer BHB and K giant stars have slightly stronger prograde rotation than the metal-poorer stars. Using the Auriga simulation suite, we find that the old (T > 10 Gyr) stars in the simulated haloes exhibit mild prograde rotation, with little dependence on radius or metallicity, in general agreement with the observations. The weak halo rotation suggests that the Milky Way has a minor in situ halo component, and has undergone a relatively quiet accretion history.

Galaxy: halo – Galaxy: kinematics and dynamics – Galaxy: stellar content

§ INTRODUCTION

Dark matter haloes have spin. This net angular momentum is acquired by tidal torquing in the early universe <cit.>, and is later modified and shaped by the merging and accretion of substructures (e.g. ). The acquisition and distribution of angular momenta in haloes is intimately linked to the evolution of the galaxies at their centres. Indeed, the relationship between halo spin and disc/baryonic spin is a fundamental topic in galaxy formation, and has been studied extensively in the literature (e.g. ). Initially, the angular momentum of the galaxy and the dark matter halo can be very well aligned. However, material is continually accreted onto the outer parts of the halo, which can alter its net angular momentum. Hence, while the galaxy and the halo often have aligned angular momentum vectors near their centres, they can be significantly misaligned at larger radii (e.g. ). Furthermore, major mergers can cause drastic “spin flips” in both the dark matter angular momenta and the central baryonic component <cit.>. It is clear that the net spin of haloes is critically linked to their merger histories, and thus their stellar haloes could provide an important link between the angular momenta of the central baryonic disc and the dark matter halo. A large fraction of the halo stars in our Galaxy are the tidal remnants of destroyed dwarfs. Hence, to first order, the spin of the Milky Way stellar halo represents the net angular momentum of all of its past (stellar) accretion events. The search for a rotation signal in the Milky Way halo dates back to the seminal work by <cit.>. The authors used line-of-sight velocities of the Galactic globular cluster system to infer a prograde (i.e. aligned with the disc) rotation signal of V_rot ∼ 60 km s^-1.
A prograde signal, with V_ rot∼ 40-60 km s^-1,in the (halo) globular cluster system has also been seen in several later studies (e.g. ). However, the situation for the halo stars is far less clear. While most studies agree that the overall rotation speed of the stellar halo is probably weak and close to zero <cit.>, there is some evidence for a kinematic correlation between metal-rich and metal-poor populations <cit.> and/or different rotation signals in the inner and outer halo <cit.>.An apparent kinematic dichotomy in the stellar halo (either inner vs. outer, or metal-rich vs. metal-poor) could be linked to different formation mechanisms. For example, state-of-the-art hydrodynamical simulations find that a significant fraction of the stellar haloes in the inner regions of Milky Way mass galaxies likely formed in situ, and are more akin (at least kinematically) to a puffed up disc component <cit.>. Thus, one would expect a stronger prograde rotation signal in the inner and/or metal-rich regions of the Milky Way stellar halo <cit.>, and this theoretical scenario could account for the kinematic differences seen in the observations. However, as the detailed examination by <cit.> shows, apparent kinematic signals depending on distance and/or metallicity can be wrongly inferred due to contamination in the halo star samples and/or systematic errors in the distance estimates to halo stars. Moreover, our observational inferences and comparisons with simulations should (but often do not) take into account the type of stars used to trace the halo. For example, commonly used tracers such as blue horizontal branch (BHB) and RR Lyrae (RRL) stars are biased towards old, metal-poor stellar populations, and this can affect the halo parameters we derive (see e.g. ). So far, our examination of the kinematics of distant halo stars has been almost entirely based on one velocity component. For large enough samples over a wide area of sky, kinematic signatures such as rotation can be teased out using line-of-sight velocities alone. However, at larger and larger radii this line-of-sight component gives less and less information on the azimuthal velocities of the halo stars. Moreover, the presence of cold structures in line-of-sight velocity space <cit.> can also bias results. It is clearly more desirable to infer a direct rotation estimate from the 3D kinematics of the stars. Studies of distant halo stars with proper motion measurements are scarce <cit.>, but this limitation will become a distant memory as we enter the era of Gaia. Gaia is an unprecedented astrometric mission that will measure proper motions for hundreds of millions of stars in our Galaxy. In this contribution, we exploit the first data release of Gaia (DR1, ) to measure the net rotation of the Milky Way stellar halo. Although the first Gaia data release does not contain any proper motions, we combine the exquisite astrometry of DR1 with the Sloan Digital Sky Survey (SDSS) images taken some ∼ 10-15 years earlier to provide a stable and robust catalog of proper motions. Halo star tracers that have previously been identified in the literature are cross-matched with this new proper motion catalog to create a sample of halo stars with 2/3D kinematics.The paper is arranged as follows. In Section <ref> we introduce the SDSS-Gaia proper motion catalogue and investigate the statistical and systematic uncertainties in these measurements using spectroscopically confirmed QSOs. 
Our halo star samples are described in Section <ref>, and we provide further validation of our proper motion measurements by comparison with models and observations of the Sagittarius stream in Section <ref>. In Section <ref>, we introduce our rotating stellar halo model and apply a likelihood analysis to RRL, BHB and K giant halo star samples. We compare our results with state-of-the-art simulations in Section <ref>, and re-evaluate our expectations for the stellar halo spin. Finally, we summarise our main conclusions in Section <ref>.

§ SDSS-GAIA PROPER MOTIONS

The aim of this work is to infer the average rotation signal of the Galactic halo using a newly calibrated SDSS-Gaia catalog. This catalog (described below) is robust to systematic biases, which is vital in order to measure a rotation signal. Indeed, even with large proper motion errors (of order the size of the proper motions themselves!), with large enough samples distributed over the sky the rotation signal can still be recovered, provided that the errors are largely random rather than systematic. The details of the creation of the recalibrated SDSS astrometric catalogue and the measurement of SDSS-Gaia proper motions will be described in a separate paper (Koposov 2017, in preparation), but here we give a brief summary of the procedure. In the original calibration of the astrometry of SDSS sources, laid out in detail by <cit.>, there are two key ingredients. The first is the mapping between pixel coordinates on the CCD (x,y) and the coordinates corrected for the differential chromatic refraction and distortion of the camera (x',y') (see Eqn. 5-10 in ). The second is the mapping between (x',y') and the great circle coordinates on the sky (μ, ν) aligned with the SDSS stripe (Eqn. 9, 10, 13, 14 of ). The first transformation does not change strongly with time, requires only a few free parameters and is well determined in SDSS. However, the second transformation, which describes the scanning of the telescope, how non-uniform it is and how it deviates from a great circle, as well as the behaviour of anomalous refraction, is much harder to measure. In fact, anomalous refraction and its variation on small timescales is the dominant effect limiting the quality of SDSS astrometry (see Fig. 13 of ). The reason why these systematic effects could not be properly addressed by the SDSS project itself is that the density of astrometric standards from the UCAC <cit.> and Tycho catalogues used for the derivation of the (x',y') to (μ,ν) transformation was too low. This is where Gaia DR1 comes to the rescue, with its astrometric catalogue being ∼ 4 magnitudes deeper than UCAC. The only issue with using the Gaia DR1 catalogue as a reference for the SDSS calibration is that the epoch of the Gaia catalogue is 2015.0, as opposed to ∼ 2005 for SDSS, and that proper motions are not yet available for the majority of Gaia DR1 stars. To address this issue, we first compute the relative proper motions between the Gaia and original SDSS positions in bins of colour-magnitude space and pixels on the sky (HEALPix level 16, angular resolution 3.6 deg; ), which gives us the estimates

⟨μ_α(hpx, g-i, i)⟩, ⟨μ_δ(hpx, g-i, i)⟩.

These average proper motions can be used to estimate the expected positions of Gaia stars at the epoch of each SDSS scan,

α̂_SDSS = α_Gaia - ⟨μ_α(hpx, g-i, i)⟩ δT,

and similarly for δ̂_SDSS, where δT is the timespan between the Gaia and SDSS observations of a given star, hpx is the HEALPix pixel number of the star, and g-i and i are the colour and magnitude of the star.
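A schematic sketch of this first step is given below; it assumes nside=16 (matching the quoted 3.6 deg resolution) and illustrative colour-magnitude bin edges, and is not the production pipeline. It averages the apparent Gaia-minus-SDSS motions per (HEALPix pixel, g-i, i) bin and applies the equation above to predict each star's position at the SDSS epoch:

```python
import numpy as np
import healpy as hp

DEG2MAS = 3.6e6  # 1 degree = 3.6 million mas

def predict_sdss_positions(ra_g, dec_g, ra_s, dec_s, gi, imag, dt, nside=16):
    """Predict SDSS-epoch positions of Gaia stars; dt in years."""
    # Apparent relative proper motions between the two epochs (mas/yr).
    mu_a = (ra_g - ra_s) * np.cos(np.radians(dec_g)) * DEG2MAS / dt
    mu_d = (dec_g - dec_s) * DEG2MAS / dt
    # Bin stars by sky pixel, g-i colour and i magnitude.
    pix = hp.ang2pix(nside, ra_g, dec_g, lonlat=True)
    cb = np.digitize(gi, np.arange(-0.5, 3.5, 0.5))
    mb = np.digitize(imag, np.arange(14.0, 22.0, 1.0))
    key = (pix * 100 + cb) * 100 + mb
    # Mean apparent motion per bin, subtracted from the Gaia position.
    ma, md = np.zeros_like(mu_a), np.zeros_like(mu_d)
    for k in np.unique(key):
        sel = key == k
        ma[sel] = mu_a[sel].mean()
        md[sel] = mu_d[sel].mean()
    ra_hat = ra_g - ma * dt / (DEG2MAS * np.cos(np.radians(dec_g)))
    dec_hat = dec_g - md * dt / DEG2MAS
    return ra_hat, dec_hat
```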
With these positions (α̂_SDSS, δ̂_SDSS) computed for all the stars with both SDSS and Gaia measurements, we redetermine the astrometric mapping in SDSS between the (x',y') pixel coordinates and the on-sky great circle (μ,ν) coordinates using a flexible spline model. There are many more stars available in Gaia DR1 compared to the UCAC catalog, so in the model we are able to much better describe the anomalous refraction along the SDSS scans and, therefore, noticeably reduce the systematic uncertainties of the astrometric calibration. Furthermore, as a final step of the calibration, we also utilise the galaxies observed by Gaia and SDSS to remove any residual large-scale astrometric offsets in the calibrated SDSS astrometry. With the SDSS astrometry recalibrated, the SDSS-Gaia proper motions are then simply obtained from the Gaia positions and the recalibrated SDSS positions.

§.§ Proper motion errors

We quantify the uncertainties in the SDSS-Gaia proper motion measurements using spectroscopically confirmed QSOs from SDSS DR12 <cit.>. This QSO sample is cross-matched with the SDSS-Gaia catalog by searching for the nearest neighbour within 1 arcsec. There are N=71,799 QSOs in the catalog with r < 20, and we show the distribution of QSO proper motions in the left-hand panel of Fig. <ref>. The QSO proper motions are nicely centred around μ = 0 mas/yr, and there are no significant high proper motion tails to the distribution. Note that we find no significant differences between the QSO proper motion components μ_α and μ_δ, so we group both components together (i.e. μ=[μ_α, μ_δ]) in the figure. However, we do show the μ_α and μ_δ components separately (green and blue dashed lines in the top-right panel) when we show the median proper motions, to illustrate that these components individually have no significant systematics. The proper motion errors should roughly scale as σ(μ) ∝ 1/Δ T, where Δ T is the timescale between the first epoch SDSS measurements and the second epoch Gaia data[Note we compute Δ T using the modified Julian dates (MJD) of the SDSS observations and the last date of data collection for Gaia DR1, i.e. Δ T = MJD(Gaia)-MJD(SDSS), where MJD(Gaia)=MJD(16/9/2015)]. The SDSS photometry was taken over a significant period of time, and data from later releases have shorter time baselines. Thus, this variation in astrometry timespan is an important parameter when quantifying the proper motion uncertainties in our SDSS-Gaia catalog. The top-right panel of Fig. <ref> shows a (normalised) histogram of the time baselines (Δ T). There is a wide range of time baselines, but most of the SDSS data were taken ∼ 10-15 years ago. In the bottom-right panel of Fig. <ref> we show the dispersion in QSO proper motion measurements (defined as σ = 1.48 times the median absolute deviation) as a function of Δ T, and the middle-right panel shows the median values. The median values are consistent with zero at the level of ∼ 0.1 mas/yr, and there is no systematic dependence on Δ T. As expected, there is a strong correlation between the dispersion of the QSO proper motions and Δ T. The dashed red line shows a model fit to the relation of the form

σ = A + B/Δ T,

where A=0.157 mas/yr and B=22.730 mas. It is encouraging that this simple A+B/Δ T model agrees well with the QSO data, and we find no significant systematic differences between different SDSS data releases. Note that we show in Appendix <ref> that there is no significant systematic variation in the QSO proper motions with position on the sky.
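The error-model fit is straightforward to reproduce. A sketch (with illustrative bin edges) that estimates the robust dispersion, 1.48 times the median absolute deviation, in bins of time baseline and solves for A and B by linear least squares:

```python
import numpy as np

def fit_error_model(mu, dt, edges=np.arange(5.0, 16.0, 1.0)):
    """Fit sigma(mu) = A + B/Delta_T to QSO proper motions (mas/yr)."""
    centres, sigmas = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        m = mu[(dt >= lo) & (dt < hi)]
        if m.size < 50:
            continue
        sigmas.append(1.48 * np.median(np.abs(m - np.median(m))))
        centres.append(0.5 * (lo + hi))
    centres, sigmas = np.array(centres), np.array(sigmas)
    # The model is linear in (A, B): sigma = A * 1 + B * (1/Delta_T).
    X = np.column_stack([np.ones_like(centres), 1.0 / centres])
    A, B = np.linalg.lstsq(X, sigmas, rcond=None)[0]
    return A, B
```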
We also use the QSO sample to check whether or not the proper motion uncertainties vary significantly with magnitude or colour. In Fig. <ref> we show the dispersion in QSO proper motions as a function of r-band magnitude (left panel) and g-r colour (right panel). The dotted lines indicate the median standard deviation in proper motion of 2 mas/yr. There is a weak dependence on r-band magnitude, whereby the QSO proper motion distributions become slightly broader at fainter magnitudes. However, most of the halo stars in this work have r < 19, and there is little variation at these brighter magnitudes. Finally, we find no detectable dependence of σ(μ) on g-r colour. It is worth remarking that the stability of these proper motion measurements to changes in magnitude and colour is a testament to the astrometric quality of the improved SDSS-Gaia catalog.

In Section <ref> we introduce a rotating velocity ellipsoid model for the Milky Way halo stars. In order to test the effects of any systematic uncertainties in the SDSS-Gaia proper motions, we also apply this modeling procedure to the sample of SDSS DR12 QSOs. We adopt a “distance” of 20 kpc, which is the mean distance to our halo star samples, and find the best fit rotation (⟨ V_ϕ⟩) value. This procedure gives a best fit value of ⟨ V_ϕ⟩ ∼ 10 km s^-1. Note that if there were no systematics present, then there would be no rotation signal. In Fig. <ref> we showed that the median proper motion of the QSOs is ∼ 0.1 mas/yr. Indeed, at a distance of 20 kpc, this proper motion corresponds to a velocity of 10 km s^-1. Thus, although the astrometry systematics in our SDSS-Gaia proper motion catalog are small, at the typical distances of our halo stars we cannot robustly measure rotation signals weaker than 10 km s^-1. We discuss this point further in Section <ref>.

In the remainder of this work, we use Eqn. <ref> to define the proper motion uncertainties of our halo star samples (see below). Thus, we assume that the proper motion errors are random, independent and normally distributed, with variance depending only on the time baseline between the SDSS and Gaia measurements. Note that since we are trying to measure the centroid of the proper motion distribution (i.e. the net rotation), rather than deconvolve it into components or measure their width, we are not very sensitive to knowing the proper motion errors precisely.

§ STELLAR HALO STARS

§.§ RR Lyrae

RR Lyrae (RRL) stars are pulsating horizontal branch stars found abundantly in the stellar halo of our Galaxy. These variable stars have a well-defined Period-Luminosity-Metallicity relation, and their distances can typically be measured with uncertainties of less than 10 percent. Furthermore, RRL have bright absolute magnitudes (M_V ∼ 0.6), so they can be detected out to large distances in relatively shallow surveys. These low-mass, old (ages typically in excess of 10 Gyr) stars are ideal tracers of the Galactic halo and, indeed, RRL have been used extensively in the literature to study the stellar halo (e.g. ). In this work, we use a sample of type AB RRL stars from the Catalina Sky Survey <cit.> to infer the rotation signal of the Milky Way stellar halo. This survey has amassed a large number (N ∼ 22,700) of RRL stars over 33,000 deg^2 of the sky, with distances in excess of 50 kpc. The RRL sample is matched to the SDSS-Gaia proper motion catalog by searching for the nearest neighbours within 10 arcsec.
Our resulting sample contains N=8590 RRL stars with measured 3D positions, photometric metallicities (derived using Eqn. 7 from ) and proper motions. The distribution of this sample on the sky in Equatorial coordinates is shown in Fig. <ref>. When evaluating the Galactic velocity components of the RRL stars, the random proper motion errors (derived in Section <ref>) dominate over the distance errors (typically ∼ 7%; see e.g. ), so we can safely ignore the RRL distance uncertainties in our analysis. Note that we have checked, using mock stellar haloes from the Auriga simulation suite (see Section <ref>), that statistical distance uncertainties of ∼ 10% make little difference to our results.

§.§ Blue Horizontal Branch

Blue Horizontal Branch (BHB) stars, like RRL, are an old, metal-poor population used widely in the literature to study the distant halo (e.g. ). BHBs have relatively bright absolute magnitudes (M_g ∼ 0.5), which can be simply parametrised as a function of colour and metallicity (e.g. ). However, unlike their RRL cousins, photometric samples of BHB stars are often significantly contaminated by blue straggler stars, which have similar colours but higher surface gravity. Spectroscopic samples of BHBs can circumvent this problem by using gravity-sensitive indicators to separate out the contaminants (e.g. ). In this work we use the spectroscopic SEGUE sample of BHB stars compiled by <cit.>. This sample was selected to be relatively “clean” of higher surface gravity contaminants, and has already been exploited in a number of works to study the stellar halo (e.g. ). By cross-matching this sample with the SDSS-Gaia catalog, we identify N=4553 BHB stars. We estimate distances to these stars using the g-r colour- and metallicity-dependent relation derived by <cit.>. As for the RRL stars, we do not take into account the relatively small (∼ 10%) distance uncertainties of the BHBs in our analysis. Our resulting BHB sample has 3D positions, 3D velocities and spectroscopic metallicity estimates.

§.§ K Giants

Giant stars are often a useful probe of the stellar halo, owing to their bright absolute magnitudes (M_r ∼ 1 to -3) and large numbers in wide-field spectroscopic surveys (e.g. ). Moreover, giants are one of the most common tracers of external galaxy haloes (e.g. ). In contrast to BHB and RRL stars, giant stars populate all metallicities in old populations. Thus, they represent a less biased tracer of the stellar halo. The drawback of using giant stars to trace the halo is that spectroscopic samples are required to limit contamination from dwarf stars, and the absolute magnitudes of giants are strongly dependent on colour and metallicity. Here, we use the spectroscopic sample of K giants compiled by <cit.>, who derive distance moduli for each star using a probabilistic framework based on colour and metallicity. A distance modulus PDF is constructed for each star, and we use the mode of the distribution, DM_peak, and the interval between the 84th and 16th percentiles, Δ DM = (DM_84-DM_16)/2, as the 1σ uncertainty. We find N = 5814 K giants cross-matched with the SDSS-Gaia proper motion sample. Thus, our resulting K giant sample has 3D positions (with distance moduli described by a Gaussian PDF), 3D velocities and spectroscopic metallicities.

§ SAGITTARIUS STREAM

Before introducing our model for halo rotation, we identify RRL stars in our sample that likely belong to the Sagittarius (Sgr) stream.
This vast substructure is very prominent in the SDSS footprint <cit.>, and thus it may overwhelm any halo rotation signatures associated with earlier accretion events. Furthermore, previous works have independently measured proper motions of Sgr stars<cit.>, and hence we can provide a further test of our SDSS-Gaia proper motions. Note that we use RRL stars (rather than BHBs or K giants) in Sgr as these stars have the most accurate distance measurements, and thus Sgr members can be identified relatively cleanly.We identify Sgr stars according to position on the sky (α,δ) and heliocentric distance using the approximate stream coordinates used by <cit.> and <cit.>. The top panel of Fig. <ref> shows that our distance selection of Sgr stars agrees well with the <cit.> model. Our selection procedure identifies N=830 candidate Sgr associations, which corresponds toroughly 10% of our RRL sample. In Fig. <ref> we show proper motions in Galactic coordinates (μ_ℓ, μ_b) as a function of longitude in the Sgr coordinate system (see ). The red and blue points show the leading and trailing arms of the <cit.> model of the Sgr stream. Note that we only show material stripped within the last 3 pericentric passages of the model orbit. The black filled squares show the median SDSS-Gaia proper motions for RRL stars associated with the Sgr stream in bins of Sgr longitude, and the error bars indicate 1.48MAD/√(N), where MAD = median absolute deviation and N is the number of stars in each bin. It is encouraging that the Sgr stars in our RRL sample agree very well with the model predictions by <cit.>. Proper motion measurements of Sgr stars in the literature are also shown in Fig. <ref>: these are given by the orange diamonds <cit.>, cyan squares <cit.> and grey triangles <cit.>. Our SDSS-Gaia proper motions are in excellent agreement with these other (independent) measures (see also Fig. <ref>). Finally, we show the proper motions for the entire sample of SDSS-Gaia RRL stars with the open green circles. The stars associated with Sgr are clearly distinct from the overall halo in proper motion space. The solid black line shows the maximum likelihood model for halo rotation computed in Section <ref>. A model with mild prograde rotation agrees very well with the proper motion data. Note that the variation in proper motion with Λ_⊙ in the model is largely due to the solar reflex motion. Indeed, the solar reflex motion (in proper motion space) for Sgr stars is lower because they are typically further away than the halo stars. This is the main reason for the stark difference between the proper motions of the two populations in Fig. <ref>.We also show the heliocentric distances of the Sgr stars as a function of Sgr Longitude in the top panel of Fig. <ref>. Again, there is excellent agreement with the <cit.> models.This figure shows that we can probe the Sgr proper motions out to D ∼ 50 kpc, and thus we can accurately trace halo proper motions out to these distances (see Section <ref>).In Fig <ref> we zoom in on the regions along the Sgr stream where proper motions have been measured previously in the literature. Here, the agreement with the other observational data is even clearer. In particular, our Sgr leading arm proper motions at 240^∘≲Λ_⊙≲ 360^∘ are in excellent agreement with the HST proper motions measured by <cit.>. This is a wonderful validation of two completely independent astrometric techniques! 
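For reference, the binned statistic plotted along the stream is simple to compute. A sketch of our own (mirroring the median and 1.48 MAD/√(N) error bars quoted above) in bins of Sgr longitude:

```python
import numpy as np

def binned_median(lam, mu, edges):
    """Median proper motion and robust error per bin of Sgr longitude."""
    mid, med, err = [], [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        m = mu[(lam >= lo) & (lam < hi)]
        if m.size == 0:
            continue
        mad = np.median(np.abs(m - np.median(m)))
        mid.append(0.5 * (lo + hi))
        med.append(np.median(m))
        err.append(1.48 * mad / np.sqrt(m.size))
    return np.array(mid), np.array(med), np.array(err)
```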
Note that the Sgr stream is not the focus of this study, but the proper motion catalog we present here is a useful probe of the stream dynamics. For example, the slight differences between the <cit.> model predictions and our measurements could be used to refine/improve models of the Sgr orbit. We leave this task, and other applications of the Sgr proper motions, to a future study. We have now shown, using both spectroscopically confirmed QSOs and stars belonging to the Sgr stream, that our SDSS-Gaia proper motions are free of any significant systematic uncertainties. In the following Section we use this exquisite sample to infer the rotation signal of the stellar halo.§ HALO ROTATIONIn this Section, we use the SDSS-Gaia sample of RRL, BHB and K giant stars to measure the average rotation of the Galactic stellar halo. Below we describe our rotating halo model, and outline our likelihood analysis. In order to convert observed heliocentric velocities into Galactocentric ones, we adopt a distance to the Galactic centre of R_0=8.3 ± 0.3 kpc <cit.>, and we marginalise over the uncertainty in this parameter in our analysis. Given R_0, the total solar azimuthal velocity in the Galactic rest frame is strongly constrained by the observed proper motion of Sgr A^*, i.e. V_g, ⊙ = μ (Sgr A^*) × R_0. We adopt the <cit.> proper motion measurement of Sgr A^*, which gives a solar azimuthal velocity of V_g, ⊙ = 250 ± 9 km s^-1. Finally, we use the solar peculiar motions (U_⊙, V_⊙, W_⊙)=(11.1, 12.24, 7.25) km s^-1 derived by <cit.>. Thus, in our analysis, the circular speed at the position of the Sun is V_c = 238 km s^-1 (where V_g, ⊙= V_c +V_⊙). We note that the combination of R_0=8.5 kpc and V_c = 220 km s^-1 has been used widely in the literature, so in Section <ref> we show how our halo rotation signal is affected if we instead adopt these parameters. §.§ Model We define a (rotating) 3D velocity ellipsoid aligned in spherical coordinates: P(v_r,v_θ,v_ϕ|σ_r,σ_ϕ,σ_θ,⟨ V_ϕ⟩) =1/(2π)^3/2σ_r σ_θσ_ϕexp[-v^2_r/2σ^2_r-v^2_θ/2 σ^2_θ-(v_ϕ-⟨ V_ϕ⟩)^2/2 σ^2_ϕ]Here, we only allow net streaming motion in the v_ϕ velocity coordinate, and assume Gaussian velocity distributions. Note that positive ⟨ V_ϕ⟩ is in the same direction as the disc rotation. For simplicity, we assume an isotropic ellipsoid where σ_r=σ_θ=σ_ϕ=σ_*, but we have ensured that this assumption of isotropy does not significantly affect our rotation estimates (see also Section <ref>).This velocity distribution function can be transformed to Galactic coordinates (μ_l, μ_b, v_ los) by using the Jacobian of the transformation J=4.74047^2 D^2, which gives P(μ_l, μ_b,v_ los|σ_*,⟨ V_ϕ⟩, D).The RRL stars only have proper motion measurements, so, in this case, we marginalise the velocity distribution function along the line-of-sight to obtain P(μ_l, μ_b|σ_*,⟨ V_ϕ⟩). Furthermore, while we can safely ignore the distance uncertainties for the RRL and BHB stars, we do need to take the K giant absolute magnitude uncertainties into account (typically, Δ DM ∼ 0.35) . Thus, for the K giants we include a distance modulus PDF in the analysis. Here, we follow the prescription by <cit.> and assume a Gaussian distance modulus distribution with mean, ⟨ DM ⟩ = DM_ peak and standard deviation, σ_DM = (DM_84-DM_16)/2. Here, DM_ peak is the most probable distance modulus derived by <cit.>, and (DM_84-DM_16)/2 is the central 68% interval. 
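As an illustration of how this Gaussian distance-modulus PDF can be propagated into the analysis (a sketch with illustrative array names, not the production code), one can draw Monte Carlo distance samples per star:

```python
import numpy as np

def sample_distances_kpc(dm_peak, dm16, dm84, n=1000, rng=None):
    """Draw distances (kpc) from N(DM_peak, (DM_84 - DM_16)/2) per star."""
    rng = rng or np.random.default_rng()
    sigma = 0.5 * (np.asarray(dm84) - np.asarray(dm16))
    dm = rng.normal(dm_peak, sigma, size=(n,) + np.shape(dm_peak))
    return 10.0 ** (0.2 * dm - 2.0)  # from DM = 5 log10(D / 10 pc)
```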
The distance modulus PDFs themselves were derived by <cit.> using empirically calibrated colour-luminosity fiducials, at the observed colour and metallicity of the K giants. Marginalising over the distance modulus gives

P(μ_l, μ_b, v_los|σ_*,⟨ V_ϕ⟩) = ∫ P(μ_l, μ_b, v_los|σ_*,⟨ V_ϕ⟩, DM) 𝒩(DM|DM_0, σ_DM) dDM,

where 𝒩(DM|DM_0, σ_DM) is the normal distribution describing the uncertainty in measuring the distance modulus to a given star. We then use a likelihood analysis to find the best-fit ⟨ V_ϕ⟩ value. The (isotropic) dispersion, σ_*, is also a free parameter in our analysis. As we are mainly concerned with net rotation, we assume a flat prior on σ_* in the range σ_* = [50,200] km s^-1, and marginalise over this parameter to find the posterior distribution for ⟨ V_ϕ⟩. When evaluating the likelihoods of individual stars under our model, we also take into account the Gaussian uncertainties on the proper motions, as prescribed by Eq. <ref>. As the likelihood functions are normal distributions, this amounts to a simple convolution operation.

§.§ Results

In this Section, we apply our likelihood procedure to RRL, BHB and K giant stars with SDSS-Gaia proper motions. For all halo tracers, we only consider stars with r < 50 kpc and |z| > 4 kpc. The latter cut is imposed to avoid potential disc stars. In addition, we remove any stars with considerable proper motion (μ > 100 mas/yr), although in practice this amounts to removing only a handful (≪ 1%) of stars, and their exclusion does not affect our rotation estimates. The best fit values of ⟨ V_ϕ⟩ described in this section are summarised in Table <ref>. In Fig. <ref> we show the posterior distribution for ⟨ V_ϕ⟩ for each of the halo tracers. The solid black, dashed orange and dot-dashed purple lines show the results for RRL, BHBs and K giants, respectively. All the halo tracers favour a mild prograde rotation signal, with ⟨ V_ϕ⟩∼ 5-25 km s^-1. Note that the RRL model is shown against the proper motion data in Fig. <ref>. In general, the K giants show the strongest rotation signal of the three halo tracers. This is likely because the K giants have a broader age and metallicity spread than the RRL and BHB stars (see Section <ref>). However, the K giant rotation signal is still relatively mild (∼ 20 km s^-1) and similar (within 10-15 km s^-1) to the RRL and BHB results. The three tracer populations have different distance distributions, so it is not immediately obvious that their rotation signals can be directly compared. However, as we show in Fig. <ref>, we find little variation in the rotation signal with Galactocentric radius, so a comparison between the “average” rotation signals of the populations is reasonable. Finally, we also check that the Sgr stars in our sample make little difference to the overall rotation signal of the halo (see Table <ref>). For comparison, the right-hand panel of Fig. <ref> shows the posterior distributions if we adopt other commonly used parameters for the distance from the Galactic centre and the circular velocity at the position of the Sun: R_0=8.5 kpc, V_c = 220 km s^-1. In this case, only the K giants exhibit a detectable rotation signal. It is worth emphasizing that current estimates of the solar azimuthal velocity favour the larger value of V_c ∼ 240 km s^-1 <cit.> that we use, but it is important to keep in mind that the rotation signal is degenerate with the adopted solar motion. In addition, as discussed in Section <ref>, the systematic uncertainties of our SDSS-Gaia proper motion catalogue are at the level of ∼ 0.1 mas/yr.
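To make the method concrete, the following is a condensed sketch of the proper-motion-only (RRL) log-likelihood, using our own coding conventions rather than the authors' pipeline. For an isotropic ellipsoid the velocity distribution remains isotropic in any rotated frame, so each proper motion component is Gaussian with variance (σ_*/(4.74047 D))^2 plus the measurement variance, with mean given by projecting the rotation vector ⟨ V_ϕ⟩ φ̂ minus the solar velocity; marginalising over the unmeasured v_los simply drops that component. We assume here that μ_l includes the cos b factor:

```python
import numpy as np

K = 4.74047                            # (km/s) per (mas/yr kpc)
R0 = 8.3                               # kpc
VSUN = np.array([11.1, 250.0, 7.25])   # solar (U, V_g, W) in km/s

def loglike(l_deg, b_deg, d_kpc, mul, mub, sig_mu, vphi, sigma_star):
    """ln L of (mu_l cos b, mu_b) [mas/yr] for the rotating halo model."""
    l, b = np.radians(l_deg), np.radians(b_deg)
    # Galactocentric positions: x towards l=0, y towards l=90, z to NGP.
    x = d_kpc * np.cos(b) * np.cos(l) - R0
    y = d_kpc * np.cos(b) * np.sin(l)
    phi = np.arctan2(y, x)
    # Mean velocity <V_phi> along the disc spin, in the heliocentric frame.
    vmean = np.stack([vphi * np.sin(phi) - VSUN[0],
                      -vphi * np.cos(phi) - VSUN[1],
                      np.zeros_like(phi) - VSUN[2]])
    # Unit vectors along l and b, and the predicted proper motions.
    el = np.stack([-np.sin(l), np.cos(l), np.zeros_like(l)])
    eb = np.stack([-np.sin(b) * np.cos(l), -np.sin(b) * np.sin(l), np.cos(b)])
    mul0 = np.sum(vmean * el, axis=0) / (K * d_kpc)
    mub0 = np.sum(vmean * eb, axis=0) / (K * d_kpc)
    # Isotropic halo dispersion plus measurement error, per component.
    s2 = sig_mu ** 2 + (sigma_star / (K * d_kpc)) ** 2
    return np.sum(-0.5 * ((mul - mul0) ** 2 + (mub - mub0) ** 2) / s2
                  - np.log(2.0 * np.pi * s2))
```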
Given the ∼ 0.1 mas/yr systematic floor, for typical halo star distances of 20 kpc we cannot robustly measure a rotation signal weaker than 10 km s^-1 (v_t = 4.74047 μ D ≈ 9.5 km s^-1 for μ = 0.1 mas/yr and D = 20 kpc).

In Fig. <ref> we compare the model predictions for μ_l with the observed data. We show the Galactic longitude proper motion μ_l because this component is more sensitive than μ_b to variations in ⟨ V_ϕ⟩. The solid black line shows the difference between the maximum likelihood models and the data as a function of Galactocentric longitude. The error bars indicate the median absolute deviation of the data in each bin. For comparison, we also show with the dashed blue and dot-dashed red lines the model predictions with ⟨ V_ϕ⟩ ± 20 km s^-1. For all three tracers, the models with very mild prograde rotation agree well with the data. Our maximum likelihood models give σ_* values of 138, 121 and 111 km s^-1 for the RRL, BHBs and K giants, respectively. These values agree well with previous estimates in the literature <cit.>. Note that our models assume isotropy, but we find that both radially and tangentially biased models make little difference to our estimates of ⟨ V_ϕ⟩.

We now investigate whether there is a radial dependence of the rotation signal of the stellar halo. Our likelihood analysis is applied to halo stars in radial bins 10 kpc wide between Galactocentric radii 0 < r/kpc < 50. The results of this exercise are shown in Fig. <ref>. The solid black circles show all halo stars, and the open orange circles show the rotation signal when stars likely associated with the Sagittarius (Sgr) stream are removed. Here, the error bars indicate the 1σ confidence levels. We find that the (prograde) rotation signal stays roughly constant at 10 ≲ ⟨ V_ϕ/km s^-1⟩ ≲ 20. We do find a stronger rotation signal in the radial bin 30 < r/kpc < 40 for both RRL and K giants, but this is attributed to a significant number of Sgr stars in this radial regime. The shaded grey regions in Fig. <ref> indicate the approximate systematic uncertainty in the velocity measurements in each radial bin, assuming a systematic proper motion uncertainty of 0.1 mas/yr. Thus, the prograde rotation is very mild, and we are only just able to discern a rotation signal that is not consistent with zero.

In Fig. <ref> we explore whether or not the rotation signal of the halo stars is correlated with metallicity. The spectroscopic BHB and K giant samples have measured [Fe/H] values, and for the RRL we use photometric metallicities measured from the light curves. The metallicity distribution functions of the three halo tracers are different, and we are using both spectroscopic and photometric metallicities. Thus, we only compare “metal-richer” and “metal-poorer” stars using a metallicity boundary of [Fe/H] = -1.5. This boundary was chosen as the median value of the K giant sample, which is the least (metallicity) biased tracer. In Fig. <ref> we show the posterior probability distributions for the average rotation of the metal-rich (solid red) and metal-poor (dashed blue) tracers. The thinner lines show the posteriors when stars likely associated with the Sgr stream are excluded. There is no evidence for a metallicity dependence in the RRL sample, but both the BHBs and K giants show a slight (∼ 1σ) bias towards stronger prograde rotation for metal-rich stars. The lack of a metallicity correlation in the rotation of the RRL stars could be due to the relatively poor photometric metallicity estimates (see e.g. Fig. 10 in ), which could wash out any apparent signal.
On the other hand, the apparent metallicity correlation in the BHB and K giant samples could be caused by contamination. We explore this scenario in more detail below.

Previous work using only line-of-sight velocities has also found evidence for a metal-rich/metal-poor kinematic dichotomy in spectroscopic samples of BHB stars <cit.>. However, <cit.> argue that this signal is due to (1) contamination by blue straggler stars, (2) incorrect distance estimates and (3) potential pipeline systematics in the <cit.> BHB sample. The BHB sample used in this work should not suffer from significant blue straggler (or main sequence star) contamination. Moreover, our distance calibration is robust to systematic metallicity differences <cit.>. However, we cannot ignore the potential line-of-sight velocity systematics in the <cit.> sample. <cit.> find that a subsample of hot metal-poor BHB stars exhibits peculiar line-of-sight kinematics, which likely causes the metallicity bias in the rotation estimates. It is worth noting that the peculiar line-of-sight kinematics of the hot BHB stars could also be due to a stream-like structure in the halo, and is not necessarily a pipeline failure. In Table <ref> we also give the rotation estimates for metal-rich/metal-poor stars computed with proper motions only. The results change only slightly when we do not use the BHB line-of-sight velocities, and they agree to within 1σ with the rotation estimates obtained when 3D velocities are used.

We also investigate whether or not the apparent metallicity correlation in the K giant sample could be due to contamination. For example, if there are (metal-rich) disc stars present in the sample, this could lead to a stronger prograde signal in the metal-richer stars. Disc contamination could arise from stars (e.g. dwarfs or red clump stars) being misclassified as red giant branch stars, and thus having overestimated distances. To this end, we use a stricter cut on the P_RGB parameter provided by <cit.>, which gives the probability of being a red giant branch star. Our fiducial sample has P_RGB > 0.5. We find that using P_RGB > 0.8 results in little difference to the rotation signal of the metal-rich stars, and the rotation signal of the metal-poor stars becomes slightly stronger (see Table <ref>). It does not appear that the sample is contaminated by disc stars, but the (slight) metallicity correlation in the K giant sample does lose statistical significance if a stricter cut on red giant branch classification is used. However, this is likely because the error bars are inflated due to smaller number statistics. It is worth noting that the tests we perform above on the BHB and K giant samples do not significantly change the rotation signals of the stars (differences are less than 1σ), so we are confident that contamination in these samples is not significantly affecting our results. Thus, we conclude that there does appear to be a mild correlation between rotation signal and metallicity in the halo star kinematics.

In summary, we find that the (old) stellar halo, as traced by RRL, BHB and K giant stars, has a very mild prograde rotation signal, and there is a weak correlation between rotation signal and metallicity. Is this the expected result for a Milky Way-mass galaxy stellar halo? Or, indeed, is this rotation signal consistent with the predictions of the ΛCDM model?
In the following Section, we exploit a suite of state-of-the-art cosmological simulations in order to address these questions.

§ SIMULATED STELLAR HALOES

§.§ Auriga Simulations

In this Section, we use a sample of N=30 high-resolution Milky Way-mass haloes from the Auriga simulation suite. These simulations are described in more detail in <cit.>, and we only provide a brief description here. A low-resolution dark matter only simulation with box size 100 Mpc h^-1 was used to select candidate Milky Way-mass (1 < M_200/10^12 M_⊙ < 2) haloes. These candidate haloes were chosen to be relatively isolated at z=0. More precisely, there are no objects with masses greater than half of the parent halo mass closer than 1.37 Mpc. A ΛCDM cosmology consistent with the <cit.> data release is adopted, with parameters Ω_m=0.307, Ω_b=0.048, Ω_Λ=0.693 and H_0=100 h km s^-1 Mpc^-1, where h=0.6777. Each candidate halo was re-simulated at higher resolution using a multi-mass particle “zoom-in” technique. The zoom re-simulations were performed with the state-of-the-art cosmological magneto-hydrodynamical code arepo <cit.>. Gas was added to the initial conditions by adopting the same technique described in <cit.>, and its evolution was followed by solving the MHD equations on a Voronoi mesh. At the resolution level used in this work (level 4), the typical mass of a dark matter particle is 3 × 10^5 M_⊙, and the baryonic mass resolution is 5 × 10^4 M_⊙. The softening length of the dark matter and star particles grows with time in physical space until a maximum of 369 pc is reached at z=1.0 (where z is the redshift). The gas cells have a softening length that scales with the mean radius of the cell, and the maximum physical softening is 1.85 kpc. The Auriga simulations employ a model for galaxy formation physics that includes the critical physical processes, such as star formation, gas heating/cooling, feedback from stars, metal enrichment, magnetic fields, and the growth of supermassive black holes (see <cit.> for more details). The simulations have been successful in reproducing a number of observable disc galaxy properties, such as rotation curves, star formation rates, stellar masses, sizes and metallicities. This work is concerned with the stellar haloes of the Auriga galaxies. A future study (Monachesi et al., in preparation) will present a more general analysis of the simulated stellar halo properties. Here, we focus on the net rotation of the Auriga stellar haloes for comparison with the observational results in the preceding sections.

§.§ Rotation of Auriga Stellar Haloes

The definition of “halo stars”, in both observations and simulations, is somewhat arbitrary, and often varies widely between different studies. In this work, for a more direct comparison with our observational results, we spatially select stars within the SDSS survey footprint (see Fig. <ref>) with Galactocentric radius 5 < r/kpc < 50 and height above the disc plane |z| > 4 kpc. Note that the Auriga discs generally have larger scale heights than the Milky Way disc (see ), so our spatial selection will likely include some disc star particles, particularly at small radii. Finally, for a fair comparison with the old halo tracers (i.e. RRL, BHBs, and K giants) used in this work, we also select “old” star particles. For this purpose, we consider halo stars that formed more than 10 Gyr ago in the simulations.
Note that we align each halo with the stellar disc angular momentum vector, which we compute using all star particles within 20 kpc.

In the left-hand panel of Fig. <ref> we show the distribution of the average azimuthal velocity (⟨ V_ϕ⟩) of halo stars in the 30 Auriga simulations. Here, halo stars are selected within the SDSS survey footprint between 5 and 50 kpc from the Galactic centre, and with height above the disc plane |z| > 4 kpc. The average rotation for all halo stars in this radial range is shown with the grey histogram. Old halo stars (with T_form > 10 Gyr) are shown with the green line-filled histogram. The stellar haloes show a broad range of rotation velocities, spanning 0 ≲ ⟨ V_ϕ⟩/km s^-1 ≲ 120, but they are generally prograde. Similarly, the old halo stars exhibit prograde rotation, but they have much milder rotation amplitudes, with ⟨ V_ϕ⟩ ≲ 80 km s^-1. The average rotation signal of the three Milky Way halo populations we used in Section <ref> is 14 km s^-1. Only 3 percent of the Auriga haloes have net rotation signals ≤ 14 km s^-1; however, the fraction of “old” simulated haloes with similarly low rotation amplitudes is higher (20 percent).

In the middle and right-hand panels we show the radial dependence of the rotation signal in the simulations. In the middle panel, the solid black line shows the median value of the 30 Auriga haloes and the grey shaded region indicates the 10th/90th percentiles. Similarly, in the right-hand panel, the solid green line shows the median value for the old halo stars and the green shaded region indicates the 10th/90th percentiles. The rotation signal of the whole halo sample varies with radius, declining from ⟨ V_ϕ⟩ ∼ 70 km s^-1 at r ∼ 10 kpc to ⟨ V_ϕ⟩ ∼ 25 km s^-1 at r ∼ 50 kpc. In contrast, the old halo stars have a fairly constant rotation amplitude with Galactocentric distance of 20-30 km s^-1. It is likely that the higher rotation amplitude for halo stars at small Galactocentric distances is due to disc contamination and/or the presence of in situ stellar halo populations more akin to a “thick disc” component (e.g. ). However, the old halo stars suffer much less contamination from the disc (or disc-like) populations[It is worth noting that not all old stars will have an external origin, as there are old (T_form > 10 Gyr) populations present in the disc and in situ halo components (see e.g. ).], and they are dominated by stellar populations accreted from dwarf galaxies (see e.g. Figure 10 in ). This is likely the reason why the rotation amplitude of the old halo stars is fairly constant with Galactocentric radius. Finally, it is worth noting that a significant number of the Auriga galaxies (∼ 1/3) have an “ex situ disc” formed from massive accreted satellites <cit.>. Some of these ex situ discs can extend more than 4 kpc above the disc plane, and can be the cause of significant rotation in the stellar haloes. However, this is not true for all of the ex situ discs in the simulations: some are largely confined to small |z| and will not necessarily affect the rotation signal at the larger Galactocentric radii probed in this work (see ).

We also show our observational results from the RRL, BHB and K giant stars in Fig. <ref> (cf. Fig. <ref>). Here, we show the average (inverse variance weighted) rotation signal from the three populations. In practice, the rotation of the three populations is very similar (see Fig. <ref>).
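A sketch of the simulation measurement described above (array names are placeholders for snapshot data; positions in kpc, velocities in km/s): the halo is aligned with the disc angular momentum of the stars within 20 kpc, and v_ϕ, defined to be positive along the disc spin, is then mass-averaged over old halo star particles:

```python
import numpy as np

def rotation_to_z(j):
    """Rotation matrix taking unit vector j onto the z axis (Rodrigues)."""
    j = j / np.linalg.norm(j)
    v = np.cross(j, [0.0, 0.0, 1.0])
    s, c = np.linalg.norm(v), j[2]
    if s < 1e-12:
        return np.eye(3) if c > 0 else np.diag([1.0, -1.0, -1.0])
    vx = np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])
    return np.eye(3) + vx + vx @ vx * (1.0 - c) / s ** 2

def mean_vphi(pos, vel, mass, age_gyr, rmin=5.0, rmax=50.0, zcut=4.0):
    """Mass-weighted <v_phi> of old halo star particles (prograde > 0)."""
    disc = np.linalg.norm(pos, axis=1) < 20.0
    J = np.sum(mass[disc, None] * np.cross(pos[disc], vel[disc]), axis=0)
    R = rotation_to_z(J)
    p, v = pos @ R.T, vel @ R.T
    r = np.linalg.norm(p, axis=1)
    halo = (r > rmin) & (r < rmax) & (np.abs(p[:, 2]) > zcut) & (age_gyr > 10)
    rcyl = np.hypot(p[halo, 0], p[halo, 1])
    vphi = (p[halo, 0] * v[halo, 1] - p[halo, 1] * v[halo, 0]) / rcyl
    return np.average(vphi, weights=mass[halo])
```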
The observed rotation amplitude in the Galactic halo broadly agrees with that of the old halo population in the simulations: a mild prograde signal is consistent with, and indeed typical of, the cosmological simulations. The dashed green line in the right-hand panel of Fig. <ref> indicates the 20th percentile level, which agrees well with the observed values. Thus, while the mild prograde rotation of the old Milky Way halo stars is consistent with the simulated haloes, the observed rotation amplitude is on the low side of the distribution of Auriga haloes. We note that the Auriga haloes are randomly selected from the most isolated quartile of haloes (in the mass range 1-2 × 10^12 M_⊙), and thus they are typical field disc galaxies (as opposed to those in a cluster environment). Thus, we can infer that the rotation signal of the old Milky Way halo is fairly low compared to the general field disc galaxy population.

It is not immediately obvious why the old halo stars in the simulations, even out to r ∼ 50 kpc, have (mild) prograde orbits. If most of these stars come from destroyed dwarf galaxies, then their net spin will be related to the original angular momentum vectors of the accreted dwarfs. Previous studies using cosmological simulations have shown that subhalo accretion is anisotropic along filamentary structures, and is generally biased along the major axis of the host dark matter halo <cit.>. Indeed, <cit.> showed that the subhalo orbits in the Aquarius simulations are mainly aligned with the main halo spin. Hydrodynamic simulations predict that the angular momentum vector of disc galaxies tends to be aligned with the dark matter halo spin, at least in the inner parts of haloes (e.g. ). Thus, the slight preference for prograde orbits in the accreted stellar haloes is likely due to the filamentary accretion of subhaloes, which tend to align with the host halo major axis and stellar disc. Note that the imperfect alignment between filaments, dark matter haloes and stellar discs will naturally lead to a relatively weak (but non-zero!) signal. In addition, the orbital angular momentum of massive accreted satellites can align with the host disc angular momentum after infall. Indeed, <cit.> show that when ex situ discs are formed from the accretion of massive satellites, the angular momentum of the dwarfs can be initially misaligned with the disc but can rapidly become aligned after infall. Furthermore, this alignment is not just due to a change in the satellite orbit, but also because of a response of the host galactic disc!

Note that, as mentioned above, some of the old stars will also belong to the in situ halo component, whose stars are more likely biased towards prograde (or disc-like) orbits. Thus, it is likely that those haloes with minor net rotation are less dominated by in situ populations. Indeed, the mild prograde rotation we see in the observational samples suggests that the in situ component of the Milky Way is relatively minor. Moreover, as more recent, massive mergers will lead to a higher net spin in the halo, the weak rotation signal in the Milky Way halo is indicative of a quiescent merger history (see e.g. ).

In Fig. <ref> we show how the rotation signal of the Auriga stellar haloes depends on metallicity. We define “metal-rich” and “metal-poor” populations as halo stars with metallicities above/below 1/10th of solar ([Fe/H] = -1). This metallicity boundary was chosen as it roughly corresponds to the median metallicity of the old halo stars in the simulations.
However, as is the case in the observations, our choice of metallicity boundary is fairly arbitrary. When all halo stars are considered, there is a tendency for the metal-richer stars to have stronger prograde rotation. This metallicity correlation is more prominent in the inner regions of the halo. It is likely that the correlation in the inner regions of the halo is, at least in part, attributable to disc contamination and/or the presence of in situ (disc-like) stellar halo populations. Furthermore, most of the strongly rotating ex situ disc material in the simulations is contributed by one massive, and thus metal-rich, satellite, which could also cause a metallicity correlation in the halo stars. The old halo stars, which suffer less from disc contamination, show only a very mild (∼ 5-10 km s^-1) bias towards more strongly rotating metal-rich populations. Indeed, we found a weak metallicity correlation in the observed samples of old halo stars, which seems to be in good agreement with the predictions of the simulations.

§.§.§ Tests with mock observations

In Figure <ref> we showed the “true” average rotation signal of the Auriga stellar haloes. This is computed for all halo stars within the SDSS footprint with 5 < r/kpc < 50 and |z| > 4 kpc directly from the simulations. Now, we generate mock observations from the simulated stellar haloes to see if we can recover this rotation signal using the likelihood method described in Section <ref>. For the mock observations, we convert spherical coordinates (r, θ, ϕ) into Galactic coordinates (D, ℓ, b), placing the “observer” at the position of the Sun, (x,y,z)=(-8.5, 0, 0) kpc. Old halo stars are identified (T_form > 10 Gyr) in the coordinate ranges 5 < r/kpc < 50 and |z| > 4 kpc, and N ∼ 4000-8000 are randomly selected within the SDSS footprint (see Fig. <ref>). The tangential Galactic velocity components (V_ℓ, V_b) are converted into proper motions, and we apply a scatter of 2 mas/yr, which is the typical observational uncertainty in the SDSS-Gaia sample. After applying our modeling technique, we show the resulting best-fit ⟨ V_ϕ⟩ parameters in Fig. <ref>. The left, middle and right panels show RRL-, BHB- and K giant-like mocks. The RRL mocks have N ∼ 8000 stars randomly selected, and we marginalise over the line-of-sight velocity. All three Galactic velocity components are used for the BHB and K giant mocks, but the sample sizes are smaller (N ∼ 4000-5000), and we apply a scatter of 0.35 dex to the distance moduli of the “K giant” stars. Note that we also use these mocks to ensure that we can safely ignore the small (∼ 10%) distance uncertainties in the RRL and BHB populations. In Fig. <ref> we show the difference between the true and inferred ⟨ V_ϕ⟩ values as a function of the true rotation signal. The distribution of Δ⟨ V_ϕ⟩ = ⟨ V_ϕ⟩_LIKE - ⟨ V_ϕ⟩_TRUE, HALO is similar for all three mock tests, with a median offset of ∼ 1 km s^-1 and σ = 1.48 × MAD of ∼ 5 km s^-1 (see right-hand inset)[Note that we attribute the outliers with large Δ⟨ V_ϕ⟩ to significant substructures in the Auriga haloes.]. Thus, even with observational proper motion errors of order the proper motions themselves, we are able to recover the average rotation signal of the stellar halo to < 10 km s^-1.
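A sketch of the mock-generation step (our own minimal version; velocities are assumed to be already in the solar rest frame, and the 2 mas/yr scatter matches the quoted SDSS-Gaia uncertainty):

```python
import numpy as np

K = 4.74047  # (km/s) per (mas/yr kpc)

def make_mocks(pos, vel, sig_mu=2.0, rng=None):
    """Convert Galactocentric (pos, vel) to (l, b, D, mu_l cos b, mu_b)."""
    rng = rng or np.random.default_rng()
    rel = pos - np.array([-8.5, 0.0, 0.0])   # heliocentric positions, kpc
    d = np.linalg.norm(rel, axis=1)
    l = np.arctan2(rel[:, 1], rel[:, 0])
    b = np.arcsin(rel[:, 2] / d)
    el = np.stack([-np.sin(l), np.cos(l), np.zeros_like(l)], axis=1)
    eb = np.stack([-np.sin(b) * np.cos(l), -np.sin(b) * np.sin(l),
                   np.cos(b)], axis=1)
    mul = np.sum(vel * el, axis=1) / (K * d)  # mas/yr, incl. cos(b) factor
    mub = np.sum(vel * eb, axis=1) / (K * d)
    return (np.degrees(l), np.degrees(b), d,
            mul + rng.normal(0.0, sig_mu, d.size),
            mub + rng.normal(0.0, sig_mu, d.size))
```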
Note that this level of scatter in the simulations is typically less than the systematic uncertainty in the SDSS-Gaia proper motion catalogue of 0.1 mas/yr.§ CONCLUSIONSWe have combined the exquisite astrometry from Gaia DR1 and recalibrated astrometry of SDSS images taken some ∼ 10-15 years earlier to provide a stable and robust catalogue of proper motions. Using spectroscopically confirmed QSOs, we estimate typical proper motion uncertainties of ∼ 2 mas/yr down to r ∼ 20 mag, which are stable to variations in colour and magnitude. Furthermore, we estimate systematic errors to be of order 0.1 mas/yr, which is unrivalled by any other dataset of similar depth. We exploit this new SDSS-Gaia proper motion catalogue to measure the net rotation of the Milky Way stellar halo using RRL, BHB and K giant halo tracers. Our main conclusions are summarised as follows. * We identify (RRL) halo stars that belong to the Sgr stream and compare the SDSS-Gaia proper motions along the stream to the <cit.> model. In general, there is excellent agreement with the model predictions for the Sgr leading and trailing arms. Furthermore, previous proper motion measurements in the literature of the Sgr stream <cit.> agree very well with the new SDSS-Gaia proper motions. These comparisons are a reassuring validation that these new proper motions can be used to probe the Milky Way halo. * We construct samples of RRL, BHB and K giant stars in the halo with measured proper motions, distances, and (for the spectroscopic samples) line-of-sight velocities. Using a likelihood procedure, we measure a weak prograde rotation of the stellar halo, with ⟨ V_ϕ⟩∼ 5-25 km s^-1. This weak rotation signal is similar for all three halo samples, and varies little with Galactocentric radius out to 50 kpc. In addition, there is tentative evidence that the rotation signal correlates with metallicity, whereby metal-richer BHB and K giant stars exhibit slightly stronger prograde rotation.* The state-of-the-art Auriga simulations are used to compare our results with the expectations from the ΛCDM model. The simulated stellar haloes tend to have a net prograde rotation with 0 ≲ V_ϕ/km s^-1≲ 120. However, when we compare with “old” (T_ form > 10 Gyr) halo stars in the simulations, which are more akin to the old halo tracers like BHBs and RRL, the prograde signal is weaker and typically V_ϕ≲ 80 km s^-1, in good agreement with the observations. Metal-rich(er) halo stars in the simulations are biased towards stronger prograde rotation than metal-poor(er) halo stars. It is likely that this correlation is, in part, due to contamination by disc stars and/or halo stars formed in situ, which are more (kinematically) akin to a disc component. However, the rotation signal of the old halo stars, which are likely dominated by accreted stars, shows only weak, if any, dependence on metallicity. Again, this is in line with the observations. * The weak prograde rotation of the Milky Way halo is in agreement with the simulations, but is still relatively low compared to the full Auriga suite of 30 haloes (∼ 20th percentile). It is also worth remembering that the net spin of the halo disappears entirely if the circular velocity at the position of the Sun is set to the “standard” 220 km s^-1. Furthermore, the systematic uncertainty in the SDSS-Gaia proper motions of ∼ 0.1 mas/yr means that rotation signals ≲ 10 km s^-1 are also consistent with zero.
This mild, or zero, halo rotation suggests that above z = 4 kpc, the Milky Way has (a) a minor, or non-existent, in situ halo component and (b) has undergone a relatively quiescent merger history. * Finally, we use the simulated stellar haloes to quantify the systematic uncertainties in our modeling procedure. Using mock observations, we find that the rotation signals can typically be recovered to < 10 km s^-1. However, we do find that substructures in the halo can significantly bias the results. Indeed, in regions where the Sgr stream is prominent (e.g. 20 < r/kpc < 30) our measured rotation signal is increased by the Sgr members.§ ACKNOWLEDGEMENTSWe thank Carlos Frenk and Volker Springel for providing comments on an earlier version of this manuscript. We also thank the anonymous referee for providing valuable comments that improved the quality of our paper. A.D. is supported by a Royal Society University Research Fellowship. The research leading to these results has received funding from the European Research Council under the European Union's Seventh Framework Programme (FP/2007-2013) / ERC Grant Agreement n. 308024. V.B. and S.K. acknowledge financial support from the ERC. A.D. and S.K. also acknowledge the support from the STFC (grants ST/L00075X/1 and ST/N004493/1). RG acknowledges support by the DFG Research Centre SFB-881 “The Milky Way System” through project A1. This work has made use of data from the European Space Agency (ESA) mission Gaia (<http://www.cosmos.esa.int/gaia>), processed by the Gaia Data Processing and Analysis Consortium (DPAC, <http://www.cosmos.esa.int/web/gaia/dpac/consortium>). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the Gaia Multilateral Agreement. This work used the DiRAC Data Centric system at Durham University, operated by the Institute for Computational Cosmology on behalf of the STFC DiRAC HPC Facility (www.dirac.ac.uk). This equipment was funded by BIS National E-infrastructure capital grant ST/K00042X/1, STFC capital grants ST/H008519/1 and ST/K00087X/1, STFC DiRAC Operations grant ST/K003267/1 and Durham University. DiRAC is part of the National E-Infrastructure.§ QSO PROPER MOTIONSIn Fig. <ref> we explore how the median QSO proper motions vary with position on the sky. We find that the systematics on the sky are at the level of 0.1-0.2 mas/yr with maximal (mostly non-systematic) deviations of 0.5 mas/yr. This is in stark contrast to what <cit.> found for their Gaia-PS1-SDSS proper motion catalogue, where the QSO proper motions have systematic patterns with amplitudes of 2 mas/yr. <cit.> suggest that these large variations could be due to differential chromatic refraction (DCR) induced motions in the QSOs. Although QSOs are appealing objects to test for proper motion uncertainties and systematics, the possibility of DCR effects is worrisome. However, for discernible DCR effects we would expect strong correlations with airmass and a QSO redshift dependence that does not average to zero (see Figure 3 of <cit.>). By comparison with Figure 11 in <cit.> we find little correlation of the QSO proper motions with airmass. Furthermore, we showed in Fig. <ref> that there is little dependence of the QSO proper motion distributions on g-r colour (and therefore redshift). We therefore conclude that DCR-related effects in our proper motion catalogue are minimal, and we can safely use QSOs to quantify our statistical and systematic proper motion uncertainties.
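As a rough illustration of how the QSO test above can be implemented, the following Python sketch (a hypothetical helper of our own, not code from the paper) bins spectroscopically confirmed QSOs into coarse sky cells and takes the median proper motion per cell; since QSOs are effectively stationary, any nonzero median traces the systematic floor (here ∼ 0.1-0.2 mas/yr).

import numpy as np

def qso_systematics(ra, dec, pm_ra, pm_dec, nbins=12, min_qsos=50):
    # Median QSO proper motion (mas/yr) per coarse sky cell.
    ra_edges = np.linspace(0.0, 360.0, nbins + 1)
    dec_edges = np.linspace(-90.0, 90.0, nbins + 1)
    med = np.full((nbins, nbins, 2), np.nan)
    i = np.digitize(ra, ra_edges) - 1
    j = np.digitize(dec, dec_edges) - 1
    for a in range(nbins):
        for b in range(nbins):
            sel = (i == a) & (j == b)
            if sel.sum() > min_qsos:   # require enough QSOs per cell
                med[a, b] = np.median(pm_ra[sel]), np.median(pm_dec[sel])
    return med  # values of ~0.1-0.2 mas/yr indicate the systematic level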
"authors": [
"Alis J. Deason",
"Vasily Belokurov",
"Sergey E. Koposov",
"Facundo A. Gomez",
"Robert J. Grand",
"Federico Marinacci",
"Rudiger Pakmor"
],
"categories": [
"astro-ph.GA"
],
"primary_category": "astro-ph.GA",
"published": "20170327180004",
"title": "The slight spin of the old stellar halo"
} |
Indian Institute of Technology Bombay, Mumbai, India, [email protected] Heavy-flavour measurements in p-Pb collisions with ALICE at the LHC Jitendra Kumar, for the ALICE collaboration Received; accepted =================================================================== The production of open heavy flavours, i.e. D mesons at central rapidity and leptons from charm and beauty hadron decays at central and forward rapidity, was studied in p-Pb collisions at √(s_NN) = 5.02 TeV using the ALICE detector. The results are presented and compared to model predictions including cold nuclear matter effects.§ INTRODUCTIONHeavy quarks (charm and beauty), due to their large masses, are predominantly produced in hard-scattering processes in the initial phase of hadronic collisions. Therefore, they are excellent probes to study the properties of the Quark-Gluon Plasma created in relativistic heavy-ion collisions. The measurement of their production in p-Pb collisions is important to disentangle the hot nuclear matter effects present in heavy-ion collisions from cold nuclear matter (CNM) effects, such as transverse momentum broadening, nuclear modification of the parton distribution functions, initial-state multiple scatterings and energy loss. These effects can be investigated by measuring the nuclear modification factor R_pPb, defined as the ratio of the particle cross section dσ/dp_T measured in p-Pb collisions to that measured in pp collisions scaled by the atomic mass number of the Pb nucleus. In the absence of CNM effects R_pPb is expected to be unity. The R_pPb of D mesons and leptons from charm and beauty hadron decays at central and forward rapidities was studied in p-Pb collisions at √(s_NN) = 5.02 TeV with ALICE. § ANALYSIS DETAILS Prompt D mesons and their charge conjugates are reconstructed via their hadronic decay channels: D^0→K^-π^+, D^+→K^-π^+π^+ and D^*+→D^0π^+ <cit.>. The extraction of the signal is based on an invariant mass analysis of reconstructed decay vertices displaced from the primary vertex by a few hundred microns. The necessary spatial resolution on the track position is guaranteed by the Inner Tracking System (ITS) and the Time Projection Chamber (TPC) covering a pseudorapidity region |η| < 0.8. Particle identification (PID) of the decay particles is also exploited, using the measurement of the specific energy loss (dE/dx) in the TPC and of the time of flight with the Time-Of-Flight (TOF) detector. Kaons and pions are identified up to p_T = 2 GeV/c. The electrons from heavy-flavour (HF) hadron decays are identified using the ITS, TPC and TOF detectors in the range 0.5 < p_T < 6 GeV/c and using the TPC and the Electromagnetic Calorimeter (EMCal) for p_T > 6 GeV/c <cit.>. The background from π^0 and η Dalitz decays and from photon conversions is subtracted via the invariant mass method, and the hadron contamination is statistically subtracted <cit.>. The muons from heavy-flavour hadron decays are measured with the muon spectrometer in the rapidity range 2.5 < y_lab < 4 <cit.> (additional information in <cit.>). The background from π and K decays is subtracted using a data-tuned Monte Carlo cocktail.§ RESULTS The R_pPb of prompt D mesons (D^0, D^+, and D^*+ average) is found to be compatible with unity, as shown in Figure <ref> (left plot), and is described by models which include CNM effects <cit.>.
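As a minimal sketch of the observable defined in the introduction (illustrative code of our own, not part of the ALICE analysis; a simple uncorrelated error propagation is assumed), the nuclear modification factor can be computed per p_T bin as follows.

import numpy as np

def nuclear_modification_factor(dsig_pPb, dsig_pp, err_pPb, err_pp, A=208):
    # R_pPb = (dsigma/dpT)_pPb / (A * (dsigma/dpT)_pp), per pT bin;
    # A = 208 for the Pb nucleus; errors propagated as uncorrelated.
    r = dsig_pPb / (A * dsig_pp)
    rel_err = np.sqrt((err_pPb / dsig_pPb) ** 2 + (err_pp / dsig_pp) ** 2)
    return r, r * rel_err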
The comparison to the nuclear modification factor in Pb-Pb collisions, R_AA, is reported in Figure <ref> (right plot) and highlights a strong suppression for p_T > 3 GeV/c in central (0-10%) and semi-central (30-50%) Pb-Pb collisions <cit.>. This comparison allows us to conclude that the suppression observed in Pb-Pb collisions is due to final-state effects induced by the interaction of heavy quarks with the QGP produced in these collisions. The R_pPb of HF-hadron decay electrons shown in Figure <ref> (left plot) is consistent with unity and is also described by various models including CNM effects <cit.>. The impact-parameter distribution of beauty decay electrons is expected to be broader than that of charm decay electrons due to the larger separation between the primary and decay vertices. Therefore, one can separate the contributions of charm and beauty production. The R_pPb of beauty decay electrons is shown in the right panel of Figure <ref> <cit.>. The results are similar and consistent with unity within the uncertainties. Figure <ref> shows the R_pPb of heavy-flavour hadron decay muons, which is also consistent with unity at both forward (2.03 < y_cms < 3.53, left panel) and backward (-4.46 < y_cms < -2.96, right panel) rapidities. However, an enhancement above unity is observed at backward rapidity for 2 < p_T < 4 GeV/c <cit.>. The results in both rapidity ranges are described within uncertainties by model calculations that include CNM effects. alice2 ALICE Collaboration, Phys. Rev. Lett. 113 (23) (2014) 232301. alice4 ALICE Collaboration, Phys. Lett. B 754 (2016) 81. alice5 ALICE Collaboration, arXiv:1702.01479 [nucl-ex]. alice1 ALICE Collaboration, Int. J. Mod. Phys. A 29 (2014) 1430044. alice3 ALICE Collaboration, Phys. Rev. C 94 (2016) 054908. alice0 ALICE Collaboration, JHEP 1603 (2016) 081. alice6 ALICE Collaboration, arXiv:1609.03898 [nucl-ex]. | http://arxiv.org/abs/1703.08681v1 | {
"authors": [
"Jitendra Kumar"
],
"categories": [
"hep-ex",
"nucl-ex"
],
"primary_category": "hep-ex",
"published": "20170325120045",
"title": "Heavy-flavour measurements in p-Pb collisions with ALICE at the LHC"
} |
NCTS-PH/1702
Detection prospects for the Cosmic Neutrino Background using laser interferometers
Valerie Domcke^a and Martin Spinrath^b,
^a AstroParticule et Cosmologie (APC)/Paris Centre for Cosmological Physics, Université Paris Diderot
^b Physics Division, National Center for Theoretical Sciences, National Tsing-Hua University, Hsinchu, 30013, Taiwan
The cosmic neutrino background is a key prediction of Big Bang cosmology which has not been observed yet. The movement of the earth through this neutrino bath creates a force on a pendulum, as if it were exposed to a cosmic wind. We revise here estimates for the resulting pendulum acceleration and compare it to the theoretical sensitivity of an experimental setup where the pendulum position is measured using current laser interferometer technology as employed in gravitational wave detectors. We discuss how a significant improvement of this setup can be envisaged in a micro gravity environment. The proposed setup could also function as a dark matter detector in the sub-MeV range, which currently eludes direct detection constraints.§ INTRODUCTION The cosmic neutrino background (CNB) is a robust prediction of the standard model of particle physics in standard ΛCDM cosmology. A measurement of this elusive background would confirm or challenge our understanding of these standard models up to an energy scale of about 1 MeV, far beyond the reach of its cousin, the cosmic microwave background (CMB), which over the last decades has provided us with invaluable information covering cosmological energy scales up to about 0.3 eV. Although cosmic neutrinos are very abundant in the universe, their weakly interacting nature (which makes them so valuable as probes of the early universe) makes them inherently hard to detect. Several proposals have been put forward, for some recent overviews see, e.g., <cit.>, all of them beyond the reach of current technology. The most promising proposal at the moment seems to be the PTOLEMY experiment <cit.>, which aims at detecting the CNB through the inverse beta decay of tritium. However, very recently, the first discovery of gravitational waves by the LIGO/VIRGO collaboration <cit.> proved the possibility of detecting an even more elusive potential messenger of our cosmic past, namely gravitational waves. In this paper, we investigate the prospects of searching for the CNB with laser interferometer technology, similar to the technology currently developed for gravitational wave detectors. A related proposal was recently put forward for searching for sub-eV dark matter <cit.>. Let us briefly recall some key features of the CNB. As the temperature decreases in the course of the evolution of the Universe, the weak interactions keeping neutrinos in equilibrium with the thermal bath freeze out (at about T ∼ 1 MeV) and the CNB decouples from the thermal bath of photons, electrons and positrons. At T ∼ 0.5 MeV, the production of electrons and positrons freezes out, leading to a reheating of the photon bath sourced by the annihilation of electrons and positrons. Consequently, the CNB temperature T_ν is predicted to be slightly lower than the observed CMB temperature of T_0 = 2.735 K, T_ν = (4/11)^1/3 T_0 ≈ 1.95 K, which corresponds to a thermal energy of k_B T_ν≈ 0.16 meV.
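The quoted CNB temperature and thermal energy follow from a two-line evaluation; a quick Python check (with standard, rounded constants):

T0 = 2.735                 # CMB temperature today [K]
kB = 8.617e-5              # Boltzmann constant [eV/K]
T_nu = (4.0 / 11.0) ** (1.0 / 3.0) * T0
print(T_nu)                # ~1.95 K
print(kB * T_nu * 1e3)     # ~0.17 meV, consistent with the ~0.16 meV quoted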
The average neutrino number density today is determined by the thermal neutrino abundance at the time of decoupling, n̅_ν = 3/22 n_γ≃ 56 cm^-3 per flavour and per chirality.[ Recently, it was also discussed that the CNB might have a non-thermally produced component <cit.> increasing the total neutrino number density by an O(1) factor. Since the momentum distribution of this non-thermal component is model-dependent, we will not address this possibility here any further. ] This average density as well as the momentum distribution may however be altered locally if the neutrinos are heavy enough to cluster to astrophysical gravitational structures such as our galaxy. The remainder of this paper is organized as follows. In Sec. <ref> we review the predicted local density and momentum distribution of the CNB neutrinos. In Sec. <ref>, after a brief sketch of a possible simple experimental setup, we propose avenues by which a significant improvement might be achieved. Then we update and revise theoretical expectations for a mechanical acceleration induced by the CNB wind in Sec. <ref>. Comparing these values with current interferometer technology as employed by gravitational wave experiments, we conclude that in the simplest setup the sensitivity still falls short by many orders of magnitude using current technology. For comparison we also quote estimates for the solar neutrino wind and a possible dark matter wind. In the latter case, the expected sensitivity is many orders of magnitude below current direct detection bounds for dark matter masses above a few GeV, but could provide competitive bounds for elastic dark matter-nucleon scattering in the sub-MeV range.§ NEUTRINO MASSES AND THE CNBContrary to the photons of the CMB, neutrinos have a (small) mass, rendering the CNB phenomenology more diverse than that of the more familiar CMB. Current bounds from laboratory experiments require m_ν≲ 2.8 eV/c^2 <cit.>, whereas cosmological bounds are pushing down to ∑ m_ν≲ 0.23 eV/c^2 <cit.>. At the same time, the heaviest eigenstate must be heavier than about 0.05 eV/c^2 to explain neutrino oscillation data <cit.>. Depending on their mass, CNB neutrinos may be relativistic or non-relativistic, and they may cluster gravitationally in the potential wells formed by dark matter. We will assume here that unclustered neutrinos have no average relative velocity with respect to the CMB rest frame as measured by the CMB dipole. Hence, for sufficiently light neutrinos the total neutrino flux on earth is simply the average neutrino density, 2 n̅_ν per flavour or mass eigenstate, multiplied by the velocity β^CMB_⊕ c ≈ 369 km/s of the earth traveling through the CNB rest frame as measured by the CMB dipole. Since the orientation of this dipole is known from the observation of the CMB, so is the direction of the `neutrino wind' on earth. The momentum distribution of these unclustered neutrinos is to good approximation a red-shifted copy of the Fermi-Dirac distribution describing the neutrino bath at decoupling. However, from neutrino oscillation data we know that at least two neutrino mass eigenstates are non-relativistic, √(|Δ m^2_31|)c^2 ≫√(|Δ m^2_21|)c^2 ≈ 8.5 · 10^-3 eV ≫ 3.15 k_B T_ν≈ 5 · 10^-4 eV. These non-relativistic neutrinos might gravitationally cluster to large DM structures such as galaxies, clusters and superclusters. Clustering becomes relevant if the intrinsic neutrino velocity drops below the escape velocity of the corresponding astrophysical structure, v ∼⟨ p_ν⟩ c^2/E_ν∼ 3 k_B T_ν c/(m_ν c^2) < v_esc.
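To make the clustering criterion concrete, a small Python sketch of our own (it anticipates the Milky Way escape velocity of ∼ 500 km/s quoted in the next paragraph, and uses the thermal momentum ⟨p⟩ c ≈ 3.15 k_B T_ν):

kT = 1.68e-4       # k_B T_nu today [eV]
c_kms = 2.998e5    # speed of light [km/s]

def cnb_regime(m_nu_eV, v_esc_kms=500.0):
    # Classify a mass eigenstate: relativistic (R), non-relativistic
    # unclustered (NR-NC) or non-relativistic clustered (NR-C).
    if m_nu_eV < 3.15 * kT:                 # still relativistic today
        return "R"
    v = 3.15 * kT / m_nu_eV * c_kms          # v ~ <p> c^2 / E_nu [km/s]
    return "NR-C" if v < v_esc_kms else "NR-NC"

for m in (1e-4, 8.5e-3, 5e-2):               # ~massless and the two splittings
    print(m, cnb_regime(m))

For the minimal masses implied by the oscillation data the thermal velocity still exceeds the Milky Way escape velocity, so substantial clustering requires masses towards the upper end of the allowed range.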
For the Milky Way, the escape velocity is about v_esc^MW≃ 500 km/s while for our supercluster it is estimated to be O(10^3) km/s. Gravitational clustering will enhance the local neutrino density and modify the momentum distribution. Ref. <cit.> studied the clustering of neutrinos to the Milky Way and to supercluster structures, finding an enhancement factor of n_ν/n̅_ν = O(1 - 100), depending on the mass of the neutrinos and the size of the astrophysical structure. The velocity dispersion of these clustered neutrinos can be estimated from the virial velocity, which for neutrinos bound to the galaxy is about β_vir c ∼ 10^-3 c at the position of the earth. Up to an O(1) factor this agrees with the simulations of Ref. <cit.>, which indicate that the momentum distribution is well approximated by a Fermi-Dirac distribution with a cut-off around the escape velocity. In the following we will take the velocity dispersion of clustered neutrinos to be β_vir c. However, this value might be enhanced by a factor of O(1 - 10). Note that the direction of the neutrino wind will differ compared to the unbound case. For unbound neutrinos, the neutrino wind is expected at an angle of ∼ 10^∘ to the ecliptic plane, for bound neutrinos we expect ∼ 60^∘ <cit.>. In practice the neutrino wind experienced on earth can be a combination of all these possibilities, including relativistic and non-relativistic states as well as (partially) clustered populations. In addition to the effects described above, gravitational focusing effects within the solar system may induce an annual modulation of the neutrino rate on earth, which is also sensitive to the neutrino mass <cit.>. It has also been argued that the CNB could be asymmetric (i.e., containing different number densities of neutrinos and anti-neutrinos) in one or more flavours, which could result in an enhancement of the average neutrino density <cit.>. While Big Bang Nucleosynthesis severely constrains such an asymmetry for electron neutrinos <cit.>, the constraints on the muon and tau neutrinos (which would contribute to the extra relativistic degrees of freedom as measured in the CMB) are much weaker <cit.>. In the following we will hence distinguish three cases with increasing absolute neutrino mass scale: relativistic (R), non-relativistic unclustered (NR-NC) and non-relativistic clustered (NR-C) neutrinos. As a reference value we will work with the standard average neutrino density per flavour or mass eigenstate set by 2n̅_ν = 112 cm^-3.§ THE EXPERIMENTAL SETUP In this section we will first discuss a simple toy setup for the kind of experiment we are considering to estimate the sensitivity which could be achieved in the near future. A comparison with the magnitude of the expected signal, derived in Sec. <ref>, will reveal that current interferometer technology falls orders of magnitude short of the sensitivity required to detect the CNB. With this in mind, we limit our discussion in this section to a schematic description of the possible experimental setup. In Sec. <ref> we will outline some potential modifications which might help to drastically increase our expected sensitivity. §.§ Sketch of the experimental setup We consider test masses mounted on classical pendulums. The neutrino wind will result in a force on the test masses which leads to an excursion in the direction of the wind, see Fig. <ref>. If the force and the excursion d are extremely small and slowly varying (see Sec.
<ref> for details), we may use the small angle approximation d = l sinθ≈ l a_ν/g, where l is the length of the pendulum and g ≈ 980 cm/s^2 is the standard acceleration due to the gravitational field of the earth. Let us assume that the pendulum fits nicely into an ordinary lab and hence for simplicity we take l = 100 cm. As a reference value, the current sensitivity of LIGO is Δ d = h L Δ f^1/2≃ 1 · 10^-17 (Δ f / 10 Hz)^1/2 cm, with L = 4 km denoting the length of the interferometer arms, h ≃ 10^-23/√(Hz) denoting the current peak strain sensitivity at f_0 = 100 Hz and Δ f indicating the bandwidth used for the analysis <cit.>. This translates to a sensitivity for the acceleration of a_min^now = g Δ d/l ≈ 1 · 10^-16 cm/s^2, where we have set Δ f = 10 Hz. The design sensitivity of advanced LIGO is expected to lower this by a factor of 3, with future upgrades expected to improve the current sensitivity by about a factor 10 <cit.>. For the Einstein telescope with 10 km arm length strain sensitivities of a few times 10^-25/√(Hz) are envisaged <cit.>. In the future one could thus optimistically estimate a sensitivity of a_min≈ 3 · 10^-18 cm/s^2 for terrestrial experiments. Space-based GW interferometers have been designed for lower frequencies f_0 ∼ few mHz, but their expected sensitivity in terms of absolute distance changes is lower (Δ d ≃ 10^-11 cm for LISA <cit.> for a bandwidth of Δ f = 10^-4 Hz). Note that we will assume here for simplicity that the experiment has an optimal orientation with respect to the neutrino wind, i.e., that the movement of the earth rotates the experiment such that within a day, the pendulum interferometer arm is orientated parallel as well as orthogonal to the neutrino wind. The optimal situation is achieved for a neutrino wind orthogonal to the earth's axis, in which case the pendulum interferometer arm can reach an orientation parallel and anti-parallel to the neutrino wind within a day. In a realistic setup there should also be an additional annual modulation, see <cit.> for a recent discussion. It is crucial to note that our above estimates apply for the frequency band of the corresponding detector. In particular, any terrestrial setup will suffer drastically from seismic noise for frequencies below about 1 Hz. On the contrary, the intrinsic frequency of a signal induced by the CNB is set by the earth's rotation, 1/day ∼ 10^-5 Hz. It thus seems extremely difficult at best to exploit the remarkable sensitivity of laser interferometers to search for the CNB in an earth-based laboratory. A possible way out might be to focus not on the daily variation of the excursion d, but on the high frequency component due to individual neutrino interactions, governed by the rate of neutrinos scattering off the test mass. Optimizing the setup for this measurement requires adjusting the pendulum length as well as the size and material of the test mass. More concretely, using the expressions derived in Sec. <ref>, we note that the number of scattering events per unit time, Γ, depends on the mass of the pendulum M whereas the resulting acceleration a_G_F^2 to leading order does not, a_G_F^2 = R ⟨Δ p ⟩, Γ = R M. Here ⟨Δ p ⟩ denotes the average momentum transfer and R denotes the event rate per second and per gram. For example, in the case of non-relativistic non-clustered neutrinos and approximating LIGO's mirrors as 40 kg pure silicon, we find Γ = 86 Hz (M/40 kg) ((A-Z)^2/A^2 / 0.25) (ρ/2.34 g/cm^3), which is right within the LIGO sensitivity band.
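The sensitivity numbers just quoted can be reproduced directly; a short Python check using only the values given in the text:

g, l = 980.0, 100.0        # cm/s^2, pendulum length [cm]
h, L = 1.0e-23, 4.0e5      # LIGO strain [1/sqrt(Hz)] and arm length [cm]
df = 10.0                  # analysis bandwidth [Hz]

dd = h * L * df ** 0.5     # ~1e-17 cm, smallest resolvable displacement
a_now = g * dd / l         # ~1e-16 cm/s^2, i.e. a_min^now
print(dd, a_now)
print(a_now / 30.0)        # roughly the combined 3 x 10 future improvement,
                           # giving the a_min ~ 3e-18 cm/s^2 benchmark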
However, as we will see in the next section, the corresponding acceleration of a_G_F^2 = 7.3 · 10^-32 cm/s^2 for silicon is very far beyond the current reach of LIGO. This acceleration could even drop down to a_G_F^2≈ 10^-33 cm/s^2 for light normal hierarchical Majorana neutrinos. The situation is somewhat more optimistic for DM searches, where the event rate is sensitive to the mass and cross-section of the DM particle. Using the same approximation as above to model LIGO's mirrors, we find that LIGO's peak sensitivity of 100 Hz corresponds approximately, e.g., to the following combinations of acceleration, DM mass and cross-section: a_G_F^2 ≈ 10^-18 cm/s^2, m_X = 10 GeV/c^2, σ_X-N = 3 · 10^-34 cm^2; a_G_F^2 ≈ 10^-20 cm/s^2, m_X = 0.1 GeV/c^2, σ_X-N = 3 · 10^-36 cm^2; a_G_F^2 ≈ 10^-22 cm/s^2, m_X = 1 MeV/c^2, σ_X-N = 1 · 10^-40 cm^2. More details on the dependence of Γ and a_G_F^2 on the DM mass and cross-section are given in Sec. <ref>. The numbers found above have to be compared with Cavendish-type torsion balances which try to measure the same kind of acceleration we study here <cit.>. Recent torsion-balance tests of the weak equivalence principle have sensitivities for differential accelerations of the order of 10^-13 cm/s^2 <cit.>, and it has been claimed that accelerations down to 10^-23 cm/s^2 may be reached <cit.>. In <cit.> it was argued that assuming an optimal, shot noise limited laser read-out, torsion balance experiments can always do better than a linear displacement experiment as we suggest here. To keep the discussion simple we will imagine a linear setup in the following, keeping in mind that further improvements might be possible with different geometries. As we will review in Sec. <ref>, the expected displacements of the pendulum due to the CNB are tiny, far beyond the sensitivity of current and upcoming laser interferometers. We hence do not want to make the discussion unnecessarily complicated with lengthy musings about the exact shape of the modulation or the experimental setup. With all these caveats in mind, we will nevertheless refer to Eqs. (<ref>) and (<ref>) as benchmark values for what might be achieved with this kind of experiment. We do however dedicate the following subsection to some speculation on avenues which might increase the sensitivity by many orders of magnitude. §.§ Possible avenues for significant improvement The two distinctive features of the signal are the directional information and the characteristic frequency. Adding a second interferometer orthogonal to the first to get two-dimensional information about the excursion of the pendulum or putting several copies of the experimental setup at different locations on earth would help to discriminate the signal from background. We further point out that the sensitivity could be significantly improved if the acceleration g is significantly lowered. Implementing such a setup in space inside a rotating satellite with tiny centrifugal forces might sound utopian now, but might be possible in the future. On the International Space Station, experiments in micro gravity are routinely performed with a net acceleration of the order of 10^-6 g <cit.>. Hence, the sensitivity of our setup placed in space could conceivably be increased by six orders of magnitude or more. This could also ameliorate the problem of the required stability over time, since the rotation frequency of a space-based experiment could be much faster than the corresponding signal frequency of 1/day on earth.
One could instead imagine putting the pendulum mass in some kind of electromagnetic suspension to compensate for earth's gravity. Cleverly arranged, this might also damp much larger background effects. We also note the possibility of replacing the pendulum with two or more free falling masses with different total neutrino cross sections: as we will see below, dramatically different cross sections can be obtained by varying the target size and material, due to an atomic enhancement factor. The free falling test masses would thus drift apart under the influence of the neutrino wind, seemingly violating the equivalence principle. In this context, it is remarkable that LISA Pathfinder has probed the relative acceleration between two free falling (identical) test masses down to (5.2 ± 0.1) · 10^-15 g/√(Hz) for frequencies around 1 mHz <cit.>. Of course, a measurable effect would require stability over much longer time scales. A quick estimate shows that a cm-sized lead test mass subject to a constant acceleration of a = 10^-27 cm/s^2 by the CNB over one month (see Sec. <ref>) would be displaced by a (30 days)^2/2 = 3 · 10^-15 cm, a distance which is in principle measurable with current laser technology. § THEORETICAL EXPECTATIONS FOR THE ACCELERATION The mechanical effect of the cosmic neutrino background has been known already for a long time and we will refer here to the calculations of Duda, Gelmini and Nussinov <cit.>. The formulas in this section are mostly based on their work, subject to some improvements as we will detail below. We restrict ourselves to the case of Dirac neutrinos, which is the more optimistic case for this kind of experiment. For relativistic neutrinos the results for Majorana and Dirac neutrinos are the same, while for non-relativistic neutrinos the G_F effect would vanish for clustered neutrinos and the G_F^2 effect is suppressed by a factor of (v_ν/c)^2 ≪ 1 <cit.>.§.§ Magnetic torque (G_F effect) There have been some early proposals to detect cosmic neutrinos using an optical refractive effect <cit.>, which however does not give a net acceleration <cit.>. But there is another effect linear in Fermi's constant G_F which was originally proposed by Stodolsky <cit.>, remarkably before the discovery of neutral currents. It is due to the energy splitting of the two spin states of the electrons of the detector material in the bath of cosmic neutrinos. If there is an asymmetry between the densities of neutrinos and anti-neutrinos in the CNB, this results in a net torque on the test mass. The acceleration of the test mass of the pendulum reads <cit.> a_G_F = N_AV/(A m_AV) Δ E/π γ/R, where N_AV/(A m_AV) is the number of nuclei in 1 g of test material. N_AV = 6.022 · 10^23 is Avogadro's constant, A the number of nucleons in an atom and m_AV = 1 g is introduced here for proper normalization. R is the radius of the test mass and γ = M R^2/I is a geometrical factor related to the moment of inertia I of the detector with mass M. Using the expression found in Ref. <cit.> for the induced energy splitting Δ E of the electrons we find a^R_G_F = N_AV/(A m_AV) 2√(2)/π G_F β^CMB_⊕ γ/R ∑_α = e,μ,τ (n_ν_α - n_ν̅_α) g^α_A, for relativistic neutrinos. For non-relativistic neutrinos this could be at most one order of magnitude larger <cit.>. Note that this effect only exists in the presence of a lepton asymmetry in the CNB such that the number of neutrinos and anti-neutrinos do not cancel. In the conventions of Ref. <cit.>, g_A^e = 0.5 = - g_A^μ,τ.
The effect is larger for test masses with a small A and the probe has to be magnetized. This is most easily realized for ferromagnets. The stable, elementary ferromagnet with the smallest atomic number is the iron isotope with A = 54. Furthermore we assume the test mass to be a massive sphere with a radius of 1 cm (γ = 0.4) and we find a^R_G_F≈ 4 · 10^-29 (n_ν̅_μ - n_ν_μ)/(2n̅_ν) cm/s^2, where we have for simplicity taken the other two neutrino flavours to be symmetric, n_ν_e,τ = n_ν̅_e,τ. With the expressions above it is straightforward to derive an estimate for more complicated admixtures of flavours and neutrinos and anti-neutrinos. Unfortunately, our estimate here is many orders of magnitude away from the benchmark sensitivity of 3 · 10^-18 cm/s^2, cf. (<ref>).§.§ Scattering processes (G_F^2 effect) Next we turn to the force due to the momentum transfer of CNB neutrinos scattering off the target material, first discussed by Opher <cit.> (for other early works, see, e.g. <cit.>). Since this force is proportional to G_F^2, one might expect this effect to be suppressed compared to the magnetic effect discussed above. However, due to the macroscopic wavelength of the low-energy CNB neutrinos, the cross section may be enhanced not only by a nuclear coherence factor ∼ A^2 but also by a coherence factor N_c from the scattering of multiple nuclei <cit.>, see also <cit.>, such that this effect can be dominant. It also does not require an asymmetry in the CNB. The resulting acceleration of the test mass can be written as <cit.> a_G_F^2 = Φ_ν N_AV/(A m_AV) N_c σ_ν-A ⟨Δ p ⟩, where Φ_ν = n_ν p_ν c^2/E_ν is the neutrino flux with E_ν denoting the energy of the CNB neutrinos and p_ν the average relative momentum between these neutrinos and the earth. Further, N_c denotes the coherence enhancement factor, σ_ν-A the neutrino-nucleus cross-section containing the nuclear enhancement factor and ⟨Δ p ⟩ is the average momentum transfer from the scattered neutrinos. The neutrino cross section at low energies (small recoil energies) is <cit.> σ_ν-A≈ G_F^2/(4 π ħ^4 c^4) (A-Z)^2 E_ν^2, with E_ν≈ m_ν c^2 for non-relativistic and E_ν≈ 3.15 k_B T_ν for relativistic neutrinos. In addition, the neutrinos will also scatter off the electrons in the material. For E_ν≪ m_e c^2 the cross section on a single electron is approximately <cit.> σ_ν_e-e≈ 7 G_F^2/(4 π ħ^4 c^4) E_ν^2 and σ_ν_μ,τ-e = (3/7) σ_ν_e-e, which is comparable to the nucleus cross section. However, contrary to the nucleus cross section, the electron cross section is sensitive to the flavour composition of the CNB and the effective momentum transfer from the electrons to the macroscopic target will depend on the details of the target material. We will hence omit this contribution in the following, noting however that including this effect may moderately increase the cross section. The atomic coherence factor N_c is given by the number of nuclei within the de Broglie wavelength λ_ν = 2πħ/p_ν of the neutrinos <cit.>, N_c = N_AV/(A m_AV) ρ λ_ν^3, where ρ denotes the density of the test mass at the end of the pendulum. To maximize the coherence effect the test mass should ideally have the same size as the de Broglie wavelength. Alternatively, one could think of using foam-like <cit.> or laminated materials <cit.> or some embedding of the detector material in a matrix material <cit.>. For CNB neutrinos, the typical de Broglie wavelength is O(0.1 cm), leading easily to N_c ∼ 10^20.
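Both of the numbers just quoted, the ∼ 4 · 10^-29 cm/s^2 magnetic-torque estimate and N_c ∼ 10^20, can be checked with a few lines of Python. We assume the standard CGS value G_F ≈ 1.44 · 10^-49 erg cm^3 and a fully asymmetric muon-neutrino population, |n_ν̅_μ - n_ν_μ| = 2n̅_ν = 112 cm^-3 (both assumptions, not values spelled out in the text above).

import math

G_F, N_AV = 1.436e-49, 6.022e23          # erg cm^3; per mole (m_AV = 1 g)
beta = 369.0e5 / 2.998e10                # earth velocity through the CNB
A, gamma, R = 54, 0.4, 1.0               # iron-54 sphere of radius 1 cm
a_GF = (N_AV / A) * (2 * math.sqrt(2) / math.pi) * G_F * beta \
       * (gamma / R) * 112.0 * 0.5       # g_A = 0.5
print(a_GF)                              # ~4e-29 cm/s^2, as quoted

hbar_c = 1.973e-5                        # [eV cm]
lam = 2 * math.pi * hbar_c / (3.15 * 1.68e-4)   # de Broglie wavelength [cm]
N_c = (N_AV / 208.0) * 11.34 * lam**3    # lead target, rho = 11.34 g/cm^3
print(lam, N_c)                          # ~0.2 cm and ~4e20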
So we can rewrite the acceleration as a_G_F^2 = 2π^2 G_F^2/(ħ c^2) n_ν N_AV^2 (A-Z)^2 ρ/(A^2 m_AV^2) ⟨Δ p ⟩ E_ν/p_ν^2. Now we are left with ⟨Δ p ⟩, which can be categorized roughly into three cases, see the discussion in Sec. <ref>. The first case are relativistic neutrinos, m_ν c^2 ≪ k_B T_ν, where p_ν^(R)≃ 3.15 k_B T_ν/c, ⟨Δ p ⟩_(R)≃ 3.15 β_⊕^CMB k_B T_ν/c, with the Boltzmann constant k_B = 1.38 · 10^-16 g cm^2 s^-2 K^-1 and the speed of light c = 2.998 · 10^10 cm/s. The factor 3.15 arises from the thermal average over the Fermi-Dirac distribution. Here β_⊕^CMB denotes the velocity of the earth in the CNB frame. If the earth were at rest in this frame, the neutrinos would arrive uniformly from all directions and the average momentum transfer would vanish. The net effect is thus proportional to the velocity of the earth (the pendulum) moving through the neutrino bath. Next we consider non-relativistic neutrinos (m_ν c^2 ≫ k_B T_ν), which can be divided into two sub-cases. First, we consider neutrinos which do not cluster gravitationally. Even though they are non-relativistic, their average momentum is still determined by the CNB temperature, p_ν^(NR-NC)≃ 3.15 k_B T_ν/c, ⟨Δ p ⟩_(NR-NC)≃ 3.15 β_⊕^CMB k_B T_ν/c. Note that in general, the relative momentum should be estimated as p_ν^(NR-NC)≃ max{3.15 k_B T_ν/c, m_ν β_⊕^CMB c}. However, since we are considering non-clustered neutrinos, their velocity must be larger than the escape velocity. Since v_esc≳β_⊕^CMB c, the former term will always dominate. On the other hand, the rest frame of clustered non-relativistic neutrinos is the frame of the galaxy or (super)cluster. As a reference value for their velocity dispersion we use β_vir, which also determines the relative velocity of the earth in this frame, v_⊕≃β_vir c, p_ν^(NR-C)≃ m_ν β_vir c, ⟨Δ p ⟩_(NR-C)≃ m_ν β_vir c. With this we find that the acceleration is given by a_G_F^2 = 2π^2 G_F^2/(ħ c^2) n_ν N_AV^2 (A-Z)^2 ρ/(A^2 m_AV^2) × {β_⊕^CMB c for (R); m_ν β_⊕^CMB c^3/(3.15 k_B T_ν) for (NR-NC); c/β_vir for (NR-C)}. We can now plug in some numbers and compare them to our benchmark sensitivities. As a target we choose lead, which has a high density, ρ≈ 11.34 g/cm^3, with A = 208 and Z = 82 [ We choose here a material with a high neutron and mass density to increase the induced accelerations. Currently in gravitational wave experiments other, lighter materials are preferred to, e.g., reduce thermal noise. For pure silicon, for instance, the acceleration would drop roughly by a factor of seven. ]. In total we then obtain a_G_F^2 = n_ν/(2n̅_ν) × {3 · 10^-33 cm/s^2 for (R); 5 · 10^-31 (m_ν/0.1 eV/c^2) cm/s^2 for (NR-NC); 2 · 10^-27 (10^-3/β_vir) cm/s^2 for (NR-C)}. Here we have normalized the neutrino density to the standard value of 2n̅_ν. We find here slightly different numbers than Ref. <cit.>. Apart from some improved approximations, the main difference is the expression for p_ν employed in the (NR-NC) case. As discussed in Section <ref>, various mechanisms can (moderately) enhance these values. We stress that at least two neutrino generations should be non-relativistic nowadays, and that they are moreover at least partially clustered (at least to the local supercluster), which is the more promising case for a potential discovery. Nevertheless, these rates are many orders of magnitude below the benchmark sensitivities quoted in Sec. <ref>. For completeness, let us estimate the lower bound on the induced acceleration, taking into account all remaining uncertainties about the CNB.
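Before turning to that lower bound, the three benchmark numbers just derived can be cross-checked numerically (Python, CGS units; the constants are standard values, not taken from the text):

import math

G_F, hbar, c = 1.436e-49, 1.055e-27, 2.998e10   # erg cm^3, erg s, cm/s
erg_per_eV = 1.602e-12
n_nu, N_AV = 112.0, 6.022e23                    # cm^-3; Avogadro (m_AV = 1 g)
A, Z, rho = 208, 82, 11.34                      # lead target
kT = 1.68e-4 * erg_per_eV                       # k_B T_nu [erg]
beta_e, beta_vir = 369.0e5 / c, 1.0e-3
m_nu = 0.1 * erg_per_eV / c**2                  # 0.1 eV/c^2 in grams

pref = (2 * math.pi**2 * G_F**2 / (hbar * c**2)
        * n_nu * N_AV**2 * (A - Z)**2 * rho / A**2)

print(pref * beta_e * c)                          # (R):     ~3e-33 cm/s^2
print(pref * m_nu * beta_e * c**3 / (3.15 * kT))  # (NR-NC): ~5e-31 cm/s^2
print(pref * c / beta_vir)                        # (NR-C):  ~2e-27 cm/s^2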
As can be seen from the expressions above, the `worst case scenario' is given by light (normal ordered) Majorana neutrinos. If the lightest neutrino is massless we find for the other two neutrino masses m_2 ≈ 8.5 · 10^-3 eV/c^2 and m_3 ≈ 5 · 10^-2 eV/c^2, which are much larger than the thermal kinetic energy 3.15 k_B T_ν≈ 5 · 10^-4 eV, such that both of these species can be considered to be non-relativistic. To be more precise, their velocities are then v_2 ≈√(2E/m_2)≈ 0.2 c and v_3 ≈√(2E/m_3)≈ 0.1 c, which are far above the local escape velocities. Furthermore, the scattering cross section for Majorana neutrinos is suppressed by an additional factor of (v/c)^2, see, e.g., <cit.>. Combining this information we find that the minimal acceleration is of the order a^min_G_F^2≈ 8 · 10^-33 cm/s^2, where we have summed over all three flavours. This is nearly six orders of magnitude worse than the most optimistic scenario. However, once the neutrino mass scale and ordering are determined, the uncertainty on the induced acceleration will shrink significantly. A determination of the Dirac/Majorana nature of neutrinos would furthermore significantly reduce this uncertainty. Vice versa, if neutrino-less double beta decay experiments remain inconclusive, the CNB could one day be a last resort to distinguish these two possibilities. As we have just seen, the scattering effect can indeed be much smaller than the G_F effect, cf. (<ref>), which makes it tempting to focus on the latter effect. But this depends on the lepton asymmetries, for which there is no clear theoretical prediction.§.§ Solar neutrinos and dark matter In this section we discuss two relevant competing processes which could also result in a pendulum displacement with a similar frequency: solar neutrinos and particle dark matter (DM), see also <cit.>. The acceleration due to solar neutrinos is given by Eq. (<ref>) for relativistic neutrinos, taking into account that the maximal momentum transfer of solar neutrinos is simply given by ⟨Δ p ⟩ = E_ν/c (since all solar neutrinos come from the same direction). The de Broglie wavelength of the solar neutrinos is O(10^-10) cm and hence smaller than the typical atomic distances, implying that there will be no coherent enhancement as for the CNB neutrinos (N_c = 1). With the solar neutrino flux Φ_solar-ν = 10^11 cm^-2 s^-1, the scattering of pp neutrinos (E_ν≃ 0.3 MeV) off a lead test mass yields a_solar-ν≈ 3 · 10^-26 cm/s^2. This acceleration is larger than the CNB wind; however, as we discuss in the following section, the event rate is much smaller so that (in an earth-based laboratory) the expected signal would be clearly distinguishable. Moreover, in contrast to the CNB signal, this signal will be correlated with the relative position of the sun. Eq. (<ref>) also applies to the acceleration induced by collisions with cold dark matter particles X. In this case, the corresponding flux is given by Φ_X = n_X β_X c and the momentum transfer by ⟨Δ p ⟩_X = m_X β_X c, where n_X, β_X c and m_X denote the number density, average velocity and mass of the particles X. For dark matter masses m_X ≳ 1 GeV/c^2, as expected in the WIMP scenario, the de Broglie wavelength is smaller than 10^-10 cm and we can set N_c = 1. Together, for a lead target as considered above this yields a_DM≈ 4 · 10^-30 ((A-Z)^2/(76 A)) (σ_X-N/10^-46 cm^2) (ρ_dark(local)/10^-24 g/cm^3) (β_X/10^-3)^2 cm/s^2, with ρ_dark(local) = m_X n_X the local dark matter density, implying that the benchmark sensitivity of Eq.
(<ref>) corresponds to cross sections in this dark matter mass range of σ_X-N≳ 2 · 10^-33 cm^2. In the prototypical WIMP mass range of 1 GeV/c^2 < m_X < 100 GeV/c^2, such a cross section is excluded by many orders of magnitude by current direct detection searches, which find σ_X-N≲ 10^-46 cm^2 for spin-independent nucleon-DM interactions for m_X ≈ 40-50 GeV/c^2 <cit.>. Given these strong constraints, sub-GeV DM candidates have recently received a lot of attention, see, e.g., <cit.> and references therein. In this mass range, the constraints from direct detection become irrelevant and the strongest bounds are derived from cosmology and astrophysical considerations <cit.>. For example, a thermal dark matter candidate which is lighter than about 10 MeV/c^2 would decouple from the Standard Model after the decoupling of the CNB, and hence generically perturb the standard T_ν/T_0 relation, which in turn is bounded by the Δ N_eff measurements in the CMB <cit.>. Further constraints arise from the relic abundances of the elements produced in Big Bang Nucleosynthesis <cit.> and from bounds on CMB distortions <cit.>. Assuming standard cosmology, these constraints exclude wide mass ranges of sub-MeV thermal relics <cit.>, subject however to assumptions on, e.g., the annihilation channels, the nature of the mediator fields and the production mechanism. In contrast, it is interesting to note that the setup we propose here could function as a fairly model-independent direct DM detector. In the sub-MeV mass range, our proposal gains ground in a two-fold way: Firstly, the DM number density increases as 1/m_X, implying that DM acts more like a constant `wind' (as in the CNB case) instead of separate individual events even for lower cross sections. Secondly, and more importantly, the atomic enhancement factor N_c begins to rapidly grow for m_X ≲ MeV/c^2, reaching N_c ∼ 10^9 for a lead test mass at m_X = 3.3 keV/c^2 (which corresponds to the lower bound on the DM mass from structure formation for a thermal relic <cit.>). In this mass range our setup with the benchmark sensitivity of Eq. (<ref>) could thus allow us to probe DM-nucleon cross sections down to σ_X-N≃ 10^-42 cm^2. Note that even lighter DM masses are theoretically viable if the DM particle has a suitable non-thermal history and hence its contribution to washout during structure formation is suppressed. These numbers should be compared with other very recent proposals to directly measure DM in this mass range, see, e.g., Refs. <cit.>. A dark matter signal might be disentangled from a CNB signal through the phase of the annual modulation <cit.>. The fractional modulation for light, unbound neutrinos is expected to peak in fall, whereas the peak for bound particles (such as DM) is expected in spring, see Fig. 2 of <cit.>. This is of course only possible if the CNB neutrinos are mainly unclustered. Note that the expected signal from bosonic sub-eV dark matter as discussed in <cit.> oscillates with a frequency of m_X c^2/(2πħ) ∼ (m_X/(0.05 eV/c^2)) · 10^13 Hz. For m_X ≳ 10^-19 eV/c^2 this is much faster than the 1/day frequency of the signals discussed here.§.§ Cosmic wind vs. cosmic nudges A question which has not explicitly been addressed in the literature to our knowledge is whether the CNB really acts like a wind or is more realistically a series of feeble nudges on the test mass. To understand this issue better let us first have a look at the event rates of CNB neutrinos, focusing for simplicity here only on the G_F^2 effect.
From R = a_G_F^2/⟨Δ p ⟩ = Φ_ν N_AV/(A m_AV) N_c σ_ν-A, we obtain R_(R)≈ 1 · 10^-4 (n_ν/2n̅_ν) g^-1 s^-1, R_(NR-NC)≈ 0.02 (n_ν/2n̅_ν) (m_ν/0.1 eV/c^2) g^-1 s^-1, R_(NR-C)≈ 0.4 (n_ν/2n̅_ν) (0.1 eV/c^2/m_ν) (10^-3/β_vir)^2 g^-1 s^-1, using the above results and with lead as detector material. For a total test mass of about 100 kg (arranged properly to fully exploit the atomic coherence factor) the expected frequency of events is in all three cases larger than the oscillation frequency of the pendulum (f = 1/(2π) √(g/l)≈ 0.5 Hz), and it is indeed justified to speak of a neutrino `wind'. Interestingly, one could also follow another approach here. It is difficult to keep the interferometers stable on the time scale of a day. Instead, one can choose the material, the size of the detector and the length of the pendulum in such a way that the signal appears as noise in the frequency band where the detector is most sensitive (i.e., R ∼ 100 Hz for LIGO). The daily/annual modulation as well as the preferred average direction of the signal would then result in a fluctuation of the noise which might be more easily identified than the very low-frequency signal discussed above, as we have already pointed out in Section <ref>. Interestingly, the event rate for solar neutrinos is very low, R_solar-ν≈ 2 · 10^-9 g^-1 s^-1, which might seem surprising due to the very large flux and the much higher nucleus cross section. But in this case the missing atomic coherence factor really makes a big difference. Consequently, solar neutrinos will register in our setup as a series of individual nudges. Disentangling these from other background events seems challenging, but again the directional information can help. For WIMP-like cold dark matter the rate is given by R_DM≈ 8 · 10^-3 (100 GeV/c^2/m_X) (σ_X-N/10^-33 cm^2) (ρ_dark(local)/10^-24 g/cm^3) (β_X/10^-3) g^-1 s^-1, normalized to our expected sensitivity for the cross-section for 100 GeV/c^2 dark matter as discussed above. This rate is extremely small once one inserts the direct detection constraints, σ_X-N≲ 10^-46 cm^2, as expected compared to plausible rates in dark matter direct searches. However, this picture drastically changes when considering light dark matter, due to the atomic coherence factor. For m_X ≲ 1 MeV/c^2, R_light DM≈ 4 · 10^5 (3.3 keV/c^2/m_X)^4 (σ_X-N/10^-42 cm^2) (ρ_dark(local)/10^-24 g/cm^3) (β_X/10^-3) g^-1 s^-1, which is now normalized to our expected sensitivity for the cross-section in this mass region. Here we find large interaction rates even for the low cross-sections considered, which is again mainly due to the atomic coherence factor.§ SUMMARY AND CONCLUSIONS Two of the most outstanding achievements in cosmology in recent years were the precise measurement of the Cosmic Microwave Background and the detection of gravitational waves. These remarkable discoveries whet the appetite for more. In particular, in this paper we address the question of whether the impressive laser interferometer technology used in gravitational wave detectors can be used to hunt for an echo of the Big Bang generated much earlier than the CMB: the Cosmic Neutrino Background. Unfortunately, this does not seem to be feasible with the current technology. We have briefly sketched a setup based on an ordinary pendulum which is deflected by the cosmic neutrino wind.
Using current laser interferometers to determine the position of the pendulum and assuming a very high stability of the experiment on the time-scale of a day, it might be possible to measure accelerations down to 10^-16 cm/s^2, which would already be an improvement compared to current torsion balance experiments. Although this sensitivity is already extremely remarkable, it is still far away from a potential signal. The most optimistic case for this kind of experiment is when the relic neutrinos are non-relativistic nowadays and cluster in our galaxy. This could lead to accelerations of the order of 10^-27 cm/s^2, eleven orders of magnitude below what we estimated might ideally be achieved with current technology. In addition, the low frequency of the CNB signal poses a further serious challenge. A possibility to address this point is by tuning the setup to the high frequency component of the CNB signal, governed by the neutrino interaction rate with the test mass. However, in summary, these results suggest that a mechanical force might not be the most encouraging way to discover the CNB. More promising for a discovery at the moment seems an experiment where cosmic neutrinos are captured via inverse beta decay, giving rise to a characteristic peak in the beta spectrum, see the recent PTOLEMY proposal <cit.>. Such an experiment nevertheless comes with one big disadvantage: it is not immediately sensitive to the directional dependence of the CNB signal. In the future, one might still want to consider an experiment along the lines discussed here. In fact, the sensitivity could be tremendously improved by putting the setup in a micro-g environment, which could be achieved either by going to space or by compensating the gravitational force by an electromagnetic force here on earth in a laboratory. Another possibility, motivated by the remarkable results of the recent LISA Pathfinder mission, could be a setup based on free falling test masses in space with different CNB cross sections. A setup along the lines proposed here could moreover also serve as a dark matter detector for sub-MeV DM particles. In particular in the low mass region of a few keV, remarkable sensitivities to the DM-nucleon cross section may be reached even with current technology. For example, for DM particles close to the thermal limit, m_X = 3.3 keV/c^2, we demonstrate how cross sections down to σ_X-N≃ 10^-42 cm^2 could be probed, assuming a sufficiently high stability of the experimental setup. Expected developments in interferometer technology and the possibility of micro gravity environments have the potential to significantly improve this number.§ ACKNOWLEDGEMENTS The authors wish to thank M. Barsuglia and E. Chassande-Mottin for helpful discussions on the experimental setup of the LIGO detector, as well as S. Knapen and T. Lin for valuable comments on the status of light dark matter searches. Moreover, we wish to thank G. Gelmini for helpful comments and clarifications about the momentum distributions of CNB neutrinos and Xun-Jie Xu for pointing out the effect of the electrons in the detector material. M. S. would like to thank A. Meroni for pointing out <cit.> to him. V. D. acknowledges financial support from the UnivEarthS Labex program at Sorbonne Paris Cité (ANR-10-LABX-0023 and ANR-11-IDEX-0005-02) and the Paris Centre for Cosmological Physics. V. D. would like to thank UC Los Angeles and the Berkeley Center for Theoretical Physics for kind hospitality during the final stages of this work. Ringwald:2009bg A.
Ringwald, Nucl. Phys. A 827 (2009) 501C [arXiv:0901.1529 [astro-ph.CO]].
Vogel:2015vfa P. Vogel, AIP Conf. Proc. 1666 (2015) 140003.
Betts:2013uya S. Betts et al., arXiv:1307.4738 [astro-ph.IM].
Abbott:2016blz B. P. Abbott et al. [LIGO Scientific and Virgo Collaborations], Phys. Rev. Lett. 116 (2016) no. 6, 061102 [arXiv:1602.03837 [gr-qc]].
Graham:2015ifn P. W. Graham, D. E. Kaplan, J. Mardon, S. Rajendran and W. A. Terrano, Phys. Rev. D 93 (2016) no. 7, 075029 [arXiv:1512.06165 [hep-ph]].
Chen:2015dka M. C. Chen, M. Ratz and A. Trautner, Phys. Rev. D 92 (2015) no. 12, 123006 [arXiv:1509.00481 [hep-ph]]; J. Zhang and S. Zhou, Nucl. Phys. B 903 (2016) 211 [arXiv:1509.02274 [hep-ph]].
Weinheimer:1999tn C. Weinheimer, B. Degenddag, A. Bleile, J. Bonn, L. Bornschein, O. Kazachenko, A. Kovalik and E. W. Otten, Phys. Lett. B 460 (1999) 219, Erratum: [Phys. Lett. B 464 (1999) 352].
Ade:2015xua P. A. R. Ade et al. [Planck Collaboration], Astron. Astrophys. 594 (2016) A13 [arXiv:1502.01589 [astro-ph.CO]].
PDG2016 K. Nakamura and S. T. Petcov, in C. Patrignani et al. (Particle Data Group), Chin. Phys. C 40 (2016) 100001.
Ringwald:2004np A. Ringwald and Y. Y. Y. Wong, JCAP 0412 (2004) 005 [hep-ph/0408241].
Safdi:2014rza B. R. Safdi, M. Lisanti, J. Spitz and J. A. Formaggio, Phys. Rev. D 90 (2014) no. 4, 043001 [arXiv:1404.0680 [astro-ph.CO]].
Langacker:1982ih P. Langacker, J. P. Leveille and J. Sheiman, Phys. Rev. D 27 (1983) 1228.
Kang:1991xa H. S. Kang and G. Steigman, Nucl. Phys. B 372 (1992) 494.
Lesgourgues:1999wu J. Lesgourgues and S. Pastor, Phys. Rev. D 60 (1999) 103521 [hep-ph/9904411].
Mangano:2011ip G. Mangano, G. Miele, S. Pastor, O. Pisanti and S. Sarikas, Phys. Lett. B 708 (2012) 1 [arXiv:1110.4335 [hep-ph]].
Castorina:2012md E. Castorina, U. Franca, M. Lattanzi, J. Lesgourgues, G. Mangano, A. Melchiorri and S. Pastor, Phys. Rev. D 86 (2012) 023517 [arXiv:1204.2510 [astro-ph.CO]].
Barenboim:2016shh G. Barenboim, W. H. Kinney and W. I. Park, Phys. Rev. D 95 (2017) no. 4, 043506 [arXiv:1609.01584 [hep-ph]].
Caramete:2013bua A. Caramete and L. A. Popa, JCAP 1402 (2014) 012 [arXiv:1311.3856 [astro-ph.CO]].
Barenboim:2017dfq G. Barenboim and W. I. Park, arXiv:1703.08258 [hep-ph].
TheLIGOScientific:2016agk B. P. Abbott et al. [LIGO Scientific and Virgo Collaborations], Phys. Rev. Lett. 116 (2016) no. 13, 131103 [arXiv:1602.03838 [gr-qc]].
ET M. Abernathy et al., Einstein gravitational wave telescope, http://www.et-gw.eu/etdsdocument (conceptual design study).
LISA K. Danzmann et al., Laser Interferometer Space Antenna (LISA), https://www.elisascience.org/files/publications/LISA_L3_20170120.pdf (L3 mission proposal).
Hagmann:1999kf C. Hagmann, astro-ph/9905258. C. Hagmann, AIP Conf. Proc. 478 (1999) 460 [astro-ph/9902102].
Wagner:2012ui T. A. Wagner, S. Schlamminger, J. H. Gundlach and E. G. Adelberger, Class. Quant. Grav. 29 (2012) 184002 [arXiv:1207.2442 [gr-qc]].
Microgravity K. Jules et al., Acta Astronautica 55 (2004) 335.
Armano:2016bkm M. Armano et al., Phys. Rev. Lett. 116 (2016) no. 23, 231101.
Duda:2001hd G. Duda, G. Gelmini and S. Nussinov, Phys. Rev. D 64 (2001) 122001 [hep-ph/0107027].
Opher:1974 R. Opher, Astron. & Astrophys. 37 (1974) 135.
Lewis:1979mu R. R. Lewis, Phys. Rev. D 21 (1980) 663.
Cabibbo:1982bb N. Cabibbo and L. Maiani, Phys. Lett. 114B (1982) 115.
Stodolsky:1974aq L. Stodolsky, Phys. Rev. Lett. 34 (1975) 110, Erratum: [Phys. Rev. Lett. 34 (1975) 508].
Shvartsman:1982sn B. F. Shvartsman, V. B. Braginsky, S. S. Gershtein, Y. B. Zeldovich and M. Y. Khlopov, JETP Lett. 36 (1982) 277 [Pisma Zh. Eksp. Teor. Fiz. 36 (1982) 224].
Zeldovich:1981wf Y. B.
Zeldovich and M. Y. Khlopov,Sov. Phys. Usp.24 (1981) 755[Usp. Fiz. Nauk 135 (1981) 45].Smith:1983jj P. F. Smith and J. D. Lewin,Phys. Lett.127B (1983) 185.Freedman:1973yd D. Z. Freedman,Phys. Rev. D 9 (1974) 1389.Formaggio:2013kya J. A. Formaggio and G. P. Zeller,Rev. Mod. Phys.84 (2012) 1307[arXiv:1305.7513 [hep-ex]]. Marciano:2003eq W. J. Marciano and Z. Parsa,J. Phys. G 29 (2003) 2629 [hep-ph/0403168].Smith:2003sy P. F. Smith,Phil. Trans. Roy. Soc. Lond. A 361 (2003) 2591. Akerib:2016vxi D. S. Akerib et al. [LUX Collaboration],Phys. Rev. Lett.118 (2017) no. 2,021303[arXiv:1608.07648 [astro-ph.CO]]. Tan:2016zwf A. Tan et al. [PandaX-II Collaboration],Phys. Rev. Lett.117 (2016) no. 12,121303[arXiv:1607.07400 [hep-ex]]. Alexander:2016aln J. Alexander et al.,arXiv:1608.08632 [hep-ph].Green:2017ybv D. Green and S. Rajendran,arXiv:1701.08750 [hep-ph].Ho:2012ug C. M. Ho and R. J. Scherrer,Phys. Rev. D 87 (2013) no.2,023505 [arXiv:1208.4347 [astro-ph.CO]]. Boehm:2013jpa C. Boehm, M. J. Dolan and C. McCabe,JCAP 1308 (2013) 041 [arXiv:1303.6270 [hep-ph]]. Serpico:2004nm P. D. Serpico and G. G. Raffelt,Phys. Rev. D 70 (2004) 043526 [astro-ph/0403417]. Nollett:2013pwa K. M. Nollett and G. Steigman,Phys. Rev. D 89 (2014) no.8,083508 [arXiv:1312.5725 [astro-ph.CO]]. Nollett:2014lwa K. M. Nollett and G. Steigman,Phys. Rev. D 91 (2015) no.8,083505 [arXiv:1411.6005 [astro-ph.CO]]. Finkbeiner:2011dx D. P. Finkbeiner, S. Galli, T. Lin and T. R. Slatyer,Phys. Rev. D 85 (2012) 043522 [arXiv:1109.6322 [astro-ph.CO]]. Lopez-Honorez:2013lcm L. Lopez-Honorez, O. Mena, S. Palomares-Ruiz and A. C. Vincent,JCAP 1307 (2013) 046 [arXiv:1303.5094 [astro-ph.CO]]. Viel:2013apy M. Viel, G. D. Becker, J. S. Bolton and M. G. Haehnelt,Phys. Rev. D 88 (2013) 043502 [arXiv:1306.2314 [astro-ph.CO]]. Essig:2011nj R. Essig, J. Mardon and T. Volansky,Phys. Rev. D 85 (2012) 076007 [arXiv:1108.5383 [hep-ph]]. Graham:2012su P. W. Graham, D. E. Kaplan, S. Rajendran and M. T. Walters,Phys. Dark Univ.1 (2012) 32 [arXiv:1203.2531 [hep-ph]]. Essig:2015cda R. Essig, M. Fernandez-Serra, J. Mardon, A. Soto, T. Volansky and T. T. Yu,JHEP 1605 (2016) 046 [arXiv:1509.01598 [hep-ph]]. Hochberg:2016ajh Y. Hochberg, T. Lin and K. M. Zurek,Phys. Rev. D 94 (2016) no.1,015019 [arXiv:1604.06800 [hep-ph]]. Schutz:2016tid K. Schutz and K. M. Zurek,Phys. Rev. Lett.117 (2016) no.12,121302 [arXiv:1604.08206 [hep-ph]]. Derenzo:2016fse S. Derenzo, R. Essig, A. Massari, A. Soto and T. T. Yu,arXiv:1607.01009 [hep-ph]. Hochberg:2016sqx Y. Hochberg, T. Lin and K. M. Zurek,Phys. Rev. D 95 (2017) no.2,023013 [arXiv:1608.01994 [hep-ph]]. Essig:2016crl R. Essig, J. Mardon, O. Slone and T. Volansky,Phys. Rev. D 95 (2017) no.5,056011 [arXiv:1608.02940 [hep-ph]]. Knapen:2016cue S. Knapen, T. Lin and K. M. Zurek,Phys. Rev. D 95 (2017) no.5,056019[arXiv:1611.06228 [hep-ph]]. | http://arxiv.org/abs/1703.08629v2 | {
"authors": [
"Valerie Domcke",
"Martin Spinrath"
],
"categories": [
"astro-ph.CO",
"hep-ex",
"hep-ph"
],
"primary_category": "astro-ph.CO",
"published": "20170324235554",
"title": "Detection prospects for the Cosmic Neutrino Background using laser interferometers"
} |
On the Performance of Millimeter Wave-based RF-FSO Multi-hop and Mesh Networks

Behrooz Makki^1, Tommy Svensson^1, Senior Member, IEEE, Maite Brandt-Pearce^2, Senior Member, IEEE, and Mohamed-Slim Alouini^3, Fellow, IEEE

^1Chalmers University of Technology, Gothenburg, Sweden, {behrooz.makki, tommy.svensson}@chalmers.se
^2University of Virginia, Charlottesville, VA, USA, [email protected]
^3King Abdullah University of Science and Technology (KAUST), Thuwal, Saudi Arabia, [email protected]

Part of this work has been accepted for presentation at the IEEE WCNC 2017.

December 30, 2023
====================================================================

This paper studies the performance of multi-hop and mesh networks composed of millimeter wave (MMW)-based radio frequency (RF) and free-space optical (FSO) links. The results are obtained in cases with and without hybrid automatic repeat request (HARQ). Taking the MMW characteristics of the RF links into account, we derive closed-form expressions for the networks' outage probability and ergodic achievable rates. We also evaluate the effect of various parameters such as power amplifiers efficiency, number of antennas as well as different coherence times of the RF and the FSO links on the system performance. Finally, we determine the minimum number of the transmit antennas in the RF link such that the same rate is supported in the RF- and the FSO-based hops. The results show the efficiency of the RF-FSO setups in different conditions. Moreover, HARQ can effectively improve the outage probability/energy efficiency, and compensate for the effect of hardware impairments in RF-FSO networks. For common parameter settings of the RF-FSO dual-hop networks, outage probability of 10^-4 and code rate of 3 nats-per-channel-use, the implementation of HARQ with a maximum of 2 and 3 retransmissions reduces the required power, compared to cases with open-loop communication, by 13 and 17 dB, respectively.

§ INTRODUCTION
The next generation of wireless networks must provide coverage for everyone everywhere at any time. To address these demands, a combination of different techniques is considered, among which free-space optical (FSO) communication is very promising <cit.>. Coherent FSO systems, made inexpensive by the large fiberoptic market, provide fiber-like data rates through the atmosphere using lasers. Thus, FSO can be used for a wide range of applications such as last-mile access, fiber back-up, back-hauling and multi-hop networks. In the radio frequency (RF) domain, on the other hand, millimeter wave (MMW) communication has emerged as a key enabler to obtain sufficiently large bandwidths so that it is possible to achieve data rates comparable to those in the FSO links. In this perspective, the combination of FSO and MMW-based RF links is considered as a powerful candidate for high-rate reliable communication. The RF-FSO related literature can be divided into two groups.
The first group consists of papers on single-hop setups where the link reliability is improved via the joint implementation of RF and FSO systems. Here, either the RF and the FSO links are considered as separate links and the RF link acts as a backup when the FSO link is down, e.g., <cit.>, or the links are combined to improve the system performance <cit.>.Also, the implementation of hybrid automatic repeat request (HARQ) in RF-FSO links has been considered in <cit.>.The second group consists of the papers analyzing the performance of multi-hop RF-FSO systems. For instance, <cit.> study RF-FSO based relaying schemes with an RF source-relay link and an FSO or RF-FSO relay-destination link. Also, considering Rayleigh fading conditions for the RF link and amplify-and-forward relaying technique, <cit.> derive the end-to-end error probability of the RF-FSO based setups and compare the system performance with RF-based relay networks, respectively. The impact of pointing errors on the performance of dual-hop RF-FSO systems is studied in <cit.>. Finally, <cit.> analyzes decode-and-forward techniques in multiuser relay networks using RF-FSO.In this paper, we study the data transmission efficiency of multi-hop and mesh RF-FSO systems from an information theoretic point of view.Considering the MMW characteristics of the RF links and heterodyne detection technique in the FSO links, we derive closed-form expressions for the system outage probability(Lemmas 1-6) and ergodic achievable rates (Corollary 2). Our results are obtained for the decode-and-forward relaying approach in different cases with and without HARQ. Specifically, we show the HARQ as an effective technique to compensate for the non-ideal properties of the RF-FSO system and improve the network reliability. We present mappings between the performance of RF- and FSO-based hopsas well as between the HARQ-based and open-loop systems, in the sense that with appropriate parameter settings the same outage probability is achieved in these setups (Corollary 1, Lemma 6). Also, we determine the minimum number of transmit antennas in the RF links such that the same rate is supported by the RF- and the FSO-based hops (Corollary 2). Finally, we analyze the effect of various parameters such as the power amplifiers (PAs) efficiency, different coherence times of the RF and FSO links and number of transmit antennas on the performance of multi-hop and mesh networks.In contrast to <cit.>, we consider multi-hop and mesh networks. Moreover, ouranalytical/numerical results on the outage probability, ergodic achievable rateand the required number of antennas in HARQ-based RF-FSO systems as well as our discussions on the effect of imperfect PAs/HARQ have not been presented before. The differences in the problem formulation and the channel model makes our analytical/numerical results and conclusions completely different from the ones in the literature, e.g., <cit.>.The numerical and the analytical results show that: * Depending on the codewords length, there are different methods for the analytical performance evaluation of the RF-FSO systems (Lemmas 1-6).* There are mappings between the performance of RF- and FSO-based hops, in the sense that with proper scaling of the channel parameters the same outage probability is achieved in these hops (Corollary 1). 
Thus, the performance of RF-FSO based multi-hop/mesh networks can be mapped to ones using only the RF- or the FSO-based communication.* While the network outage probability is (almost) insensitive to the number of RF-based transmit antennaswhen this number is large, the ergodic rate of the multi-hop network is remarkably affected by the number of antennas.* The required number of RF-based antennas to guarantee the same rate as in the FSO-based hops increases significantly with the signal-to-noise ratio (SNR) and, at high SNRs, the ergodic rate scales with the SNR (almost) linearly.* At low SNRs, the same outage probability is achieved inHARQ-based RF hops with N transmit antennas, a maximum of M retransmissions and C channel realizations per retransmission as withan open-loop system with MNC transmit antennas and single channel realization per codeword transmission (Lemma 6).* The PAs efficiency affects the network outage probability/ergodic rate considerably. However, the HARQ protocols can effectivelycompensate for the effect of hardware impairments.* Finally, the HARQ improves the outage probability/energy efficiency significantly. For instance, consider common parameter settings of the RF-FSO dual-hop networks, outage probability of 10^-4 and code rate of 3 nats-per-channel-use (npcu). Then, compared to cases with open-loop communication, the implementation of HARQ with a maximum of 2 and 3 retransmissions reduces the required power by 13 and 17 dB, respectively. § SYSTEM MODELIn this section, we present the system model for a multi-hop setup with a single route from the source to the destination. As demonstrated in Section III.C, the results of the multi-hop networks can be extended to the ones in mesh networks with multiple non-overlapping routes from the source to the destination.§.§ Channel ModelConsider a T^total-hop RF-FSO system, with T RF-based hops and T̃=T^total-T FSO-based hops. As seen in the following, the outage probability and the ergodic achievable rate are independent of the order of the hops. Thus, we do not need to specify the order of the RF- and FSO-based hops. The i-th, i=1,…,T, RF-based hop uses a multiple-input-single-output (MISO) setup withN_i transmit antennas. Such a setup is of interest in, e.g., side-to-side communication between buildings/lamp posts <cit.>, as well as in wireless backhaul links where the trend is to introduce multiple antennas and thereby achieve multiple parallel streams, e.g., <cit.>. We define the channel gains as g_i^j_i≐|h_i^j_i|^2,i=1,…, T,j_i=1,…,N_i, where h_i^j_i is the complex fading coefficients of the channel between the j_i-th antenna in the i-th hop and its corresponding receive antenna.While the modeling of the MMW-based links is well known for line-of-sight wireless backhaul links, it is still an ongoing research topicfor non-line-of-sight conditions <cit.>. Particularly, different measurement setups have emphasized the near-line-of-sight propagation and the non-ideal hardware as two key features of such links. Here, we present the analytical results for the quasi-static Rician channel model, with successive independent realizations, which is an appropriate model for near line-of-sight conditions and has been well established for different MMW-based applications, e.g., <cit.>. Let us denote the probability density function (PDF) and the cumulative distribution function (CDF) of a random variable X by f_X(·) and F_X(·), respectively. 
With a Rician model, the channel gain g_i^j_i,∀ i,j_i, follows the PDFf_g_i^j_i(x)=(K_i+1)e^-K_i/Ω_ie^-(K_i+1)x/Ω_iI_0(2√(K_i(K_i+1)x/Ω_i) ),∀ i,j_i,where K_i and Ω_i denote the fading parameters in the i-th hop and I_n(·) is the n-th order modified Bessel function of the first kind.Also, defining the sum channel gain G_i=∑_j_i=1^N_ig_i^j_i, we havef_G_i(x) =(K_i+1)e^-K_iN_i/Ω_i((K_i+1)x/K_iN_iΩ_i)^N_i-1/2e^-(K_i+1)x/Ω_iI_N_i-1(2√(K_i(K_i+1)N_ix/Ω_i) ),∀ i. Finally, to take the non-ideal hardware into account, we consider the state-of-the-art model for the PA efficiency where the output power at each antenna of the i-th hop is determined according to <cit.>, <cit.>, <cit.>, <cit.>P_i/P_i^cons=ϵ_i(P_i/P_i^max)^ϑ_i⇒P_i=√(ϵ_i P_i^cons/(P_i^max)^ϑ_i),∀ i.Here, P_i, P_i^max and P_i^cons,∀ i, are the output, the maximum output and the consumed power in each antenna of the i-th hop, respectively, ϵ_i∈ [0,1] denotes the maximum power efficiency achieved at P_i=P_i^max and ϑ_i∈ [0,1] is a parameter depending on the PA class.The FSO links, on the other hand, are assumed to have single transmit/receive terminals. Reviewing the literature and depending on the channel condition, the FSO link may follow different distributions. Here, we present the results for cases with exponential and Gamma-Gamma distributions of the FSO links. For the exponential distribution of the i-th FSO hop, the channel gain G̃_i followsf_G̃_i(x)=λ_i e^-λ_i x, ∀ i, with λ_i being the long-term channel coefficient of the i-th, i=1,…,T̃, hop. Moreover, with the Gamma-Gamma distribution we havef_G̃_i(x)=2(a_ib_i)^a_i+b_i/2/Γ(a_i)Γ(b_i)x^a_i+b_i/2-1𝒦_a_i-b_i(2√(a_ib_ix)),∀ i.Here, 𝒦_n(·) denotes the modified Bessel function of the second kind of order n and Γ(x)=∫_0^∞u^x-1e^-udu is the Gamma function. Also, a_i and b_i, i=1,…,T̃, are the distribution shaping parameters which can be expressed as functions of the Rytov variance, e.g., <cit.>.§.§ Data Transmission ModelWe consider the decode-and-forward technique where at each hop the received message is decoded and re-encoded, if it is correctly decoded. Therefore, the message is successfully received by the destination if it is correctly decoded in all hops. Otherwise, outage occurs.As the most promising HARQ approach leading to highest throughput/lowest outage probability <cit.>, we consider the incremental redundancy (INR) HARQ with a maximum of M_i retransmissions in the i-th, i=1,…,T^total, hop. Using INR HARQ with a maximum ofM_i retransmissions, q_i information nats are encoded into a parent codeword of length M_iL channel uses. The parent codeword is then divided into M_i sub-codewords of length L channel uses which are sent in the successive transmission rounds. Thus, the equivalent data rate, i.e., the code rate, at the end of round m is q_i/mL=R_i/m npcu where R_i=q_i/L denotes the initial code rate in the i-th hop. In each round, the receiver combines all received sub-codewords to decode the message. Also, different independent channel realizations may be experienced in each round of HARQ. The retransmission continues until the message is correctly decoded or the maximum permitted transmission round is reached. Note that setting M_i=1,∀i, represents the cases without HARQ, i.e., open-loop communication.§ ANALYTICAL RESULTSConsider the decode-and-forward approach in a multi-hop network consisting of T RF- and T̃ FSO-based hops. 
Then, because independent channel realizations are experienced in different hops, the system outage probability is given by
ℙ(Outage)=1-∏_i=1^T(1-ϕ_i)∏_i=1^T̃(1-ϕ̃_i),
where ϕ_i and ϕ̃_i denote the outage probability in the i-th RF- and FSO-based hops, respectively. Note that the order of the FSO and RF links therefore does not matter. To analyze the outage probability, we need to determine ϕ_i and ϕ̃_i,∀ i. Following the same procedure as in, e.g., <cit.>, and using the properties of the imperfect PAs (<ref>), the outage probabilities of the i-th RF- and FSO-based hops are found as
ϕ_i=ℙ( 1/M_iC_i∑_m=1^M_i∑_c=(m-1)C_i+1^mC_i log(1+√(ϵ_i P_i^cons/(P_i^max)^ϑ_i)G_i(c))≤R_i/M_i), i=1,…,T,
and
ϕ̃_i=ℙ(1/M_iC̃_i∑_m=1^M_i∑_c=(m-1)C̃_i+1^mC̃_i log(1+P̃_iG̃_i(c))≤R_i/M_i), i=1,…, T̃,
respectively. Here, (<ref>)-(<ref>) come from the maximum achievable rates of Gaussian channels where, in harmony with, e.g., <cit.>, we have used Shannon's capacity formula. Thus, our results provide a lower bound on the outage probability which is tight for moderate/large codeword lengths. We assume that the FSO system is well-modeled as an additive white Gaussian noise channel, with insignificant signal-dependent shot noise contribution. Moreover, P̃_i denotes the transmission power in the i-th, i=1,…,T̃, FSO-based hop. We have considered a heterodyne detection technique in (<ref>). Also, with no loss of generality, we have normalized the receivers' noise variances. Hence, P_i, P̃_i (in dB, 10log_10 P_i, 10log_10 P̃_i) represent the SNR as well. Then, C_i, i=1,…,T, and C̃_i, i=1,…,T̃, represent the number of channel realizations experienced in each HARQ-based transmission round of the i-th RF- and FSO-based hops, respectively[For simplicity, we present the results for cases with normalized symbol rates. However, using the same approach as in <cit.>, it is straightforward to represent the results with different symbol rates of the links]. The number of channel realizations experienced within a codeword transmission is determined by the channel coherence times of the links, the codeword lengths, the considered frequency, as well as whether diversity-gaining techniques such as frequency hopping are utilized. Finally, G_i(c) and G̃_i(c) are the sum channel gains for the channel fading realization c in the i-th RF- and FSO-based hop, respectively.

In the following, we present near-closed-form expressions for (<ref>)-(<ref>), and, consequently, (<ref>). Then, Corollary 2 determines the ergodic achievable rate of multi-hop networks as well as the minimum number of required antennas in the RF-based hops to guarantee the ergodic achievable rate. Finally, Section III.C extends the results to mesh networks.

Since there is no closed-form expression for the outage probabilities, we need to use different approximation techniques (see Table I for a summary of the developed approximation schemes). In the first method, we concentrate on cases with long codewords where multiple channel realizations are experienced during data transmission in each hop, i.e., C_i and C̃_i,∀ i, are assumed to be large. Here, we use the central limit theorem (CLT) to approximate the contribution of the RF- and FSO-based hops by equivalent Gaussian random variables. Using the CLT, we find different approximation results for the network outage probability/ergodic rate (Lemmas 1-5, Corollary 2). Then, Section III.B studies the system performance in cases with short codewords, i.e., small values of C_i,C̃_i,∀ i (Lemma 6).
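Before turning to the approximations, note that the exact per-hop probabilities above are straightforward to estimate by Monte Carlo simulation, giving a baseline against which the lemmas below can be checked. A short Python sketch for an RF-based hop follows; the parameter values in the commented call are illustrative only, and the sampler relies on the fact that a sum of N i.i.d. Rician gains is a scaled noncentral chi-square variable with 2N degrees of freedom.

```python
import numpy as np

def sum_rician_gain(N, K, Omega, size, rng):
    """Sum of N i.i.d. Rician channel gains with factor K and mean Omega.
    The sum equals Omega/(2(K+1)) times a noncentral chi-square variable
    with 2N degrees of freedom and non-centrality parameter 2NK."""
    return Omega / (2 * (K + 1)) * rng.noncentral_chisquare(2 * N, 2 * N * K, size)

def rf_hop_outage(R, M, C, N, K, Omega, P_cons, eps=0.75, theta=0.5,
                  P_max=10 ** 2.5, trials=200_000, seed=1):
    """Monte Carlo estimate of the RF-hop outage probability: the event that
    the accumulated mutual information after all M HARQ rounds, each seeing
    C channel realizations, stays below the equivalent code rate R/M."""
    rng = np.random.default_rng(seed)
    P = np.sqrt(eps * P_cons / P_max ** theta)   # per-antenna output power (PA model)
    G = sum_rician_gain(N, K, Omega, (trials, M * C), rng)
    I = np.log1p(P * G).mean(axis=1)             # (1/(MC)) * sum of log(1 + P*G)
    return np.mean(I <= R / M)

# Illustrative call, not tied to any particular figure:
# rf_hop_outage(R=3, M=2, C=10, N=60, K=0.01, Omega=1.0, P_cons=10 ** 0.5)
```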
It is important to note that the difference between the analytical schemes of Sections III.A and B comes from the values of the products M_iC_i and M_iC̃_i,∀ i. Therefore, from (<ref>)-(<ref>), the long-codeword results of Section III.A can be also mapped to cases with short codewords, large number of retransmissions and scaled code rates. As shown in Section IV, our derived analytical results are in high agreement with the numerical simulations.§.§ Performance Analysis with Long CodewordsLemma 1: At low SNRs, the outage probability (<ref>) is approximately given by (<ref>) with μ_i and σ_i^2 defined in (<ref>) and (<ref>), respectively.Using log(1+x)≃ x for small values of x, (<ref>) is rephrased asϕ_i≃(1/M_iC_i∑_m=1^M_i∑_c=(m-1)C_i+1^mC_iG_i(c)≤R_i/M_i √(ϵ_i P_i^cons/(P_i^max)^ϑ_i)),where for long codewords/large number of retransmissions, we can use the CLT to replace the random variable 1/M_iC_i∑_m=1^M_i∑_c=(m-1)C_i+1^mC_iG_i(c) by an equivalent Gaussian variable 𝒱_i∼𝒩(μ_i,1/M_iC_iσ_i^2) withμ_i=∫_0^∞xf_G_i(x)= ^(a)Ω_i e^-K_iN_iN_i/K_i+11F_1(N_i+1;N_i;K_iN_i),andσ_i^2=γ_i-μ_i^2,γ_i=∫_0^∞x^2f_G_i(x)= ^(b)Ω_i^2 e^-K_iN_i(N_i+1)N_i/(K_i+1)^21F_1(N_i+2;N_i;K_iN_i).Here, sF_t(a_1,…, a_s; b_1,…,b_t;x)=∑_j=0^∞(a_1)_j… (a_s)_j/(b_1)_j… (b_t)_j(x^j/j!), (a)_0=1, (a)_j=a(a+1)…(a+j-1),j>0, denotes the generalized hypergeometric function. Also, to find (a)-(b) we have first used the property <cit.>I_n(x)=1/Γ(n+1)(x/2)^n 0F_1(n+1;x^2/4),to represent the PDF (<ref>) asf_G_i(x) =(K_i+1)^N_ie^-K_iN_i/Ω_i^N_iΓ(N_i)x^N_i-1e^-(K_i+1)x/Ω_i0F_1(N_i;K_i(K_i+1)N_ix/Ω_i),and then derived (a)-(b) based on the following integral identity <cit.>∫_0^∞e^-xx^ν-1sF_t(a_1,…,a_s;b_1,…,b_t;α x)dx =Γ(ν)s+1F_t(ν,a_1,…,a_s;b_1,…,b_t;α). In this way, using the CDF of Gaussian random variables and the error function erf(x)=2/√(π)∫_0^xe^-t^2dt, (<ref>) is obtained byϕ_i ≃(𝒱_i≤R_i/M_i √(ϵ_i P_i^cons/(P_i^max)^ϑ_i)) =1/2(1+erf(√(M_iC_i)(R_i/M_i √(ϵ_i P_i^cons/(P_i^max)^ϑ_i)-μ_i)/√(2σ_i^2))),as stated in the lemma. To present the second approximation method for (<ref>), we first represent an approximate expression for the PDF of the sum channel gain G_i,∀ i, as follows.Lemma 2: For moderate/large number of antennas, which is of interest in MMW communication,the sum gain G_i,∀ i, is approximated by an equivalent Gaussian random variable 𝒵_i∼𝒩(N_iζ_i,N_iν_i^2) with ζ_i=𝒮_i(2), ν_i^2=𝒮_i(4)-𝒮_i(2)^2 and 𝒮_i(n)≐(Ω_i/K_i+1)^n/2Γ(1+n/2)ℒ_n/2(-K_i). Here, ℒ_n(x)=e^x/n!d^n /d x^n(e^-xx^n) denotes the Laguerre polynomial of the n-th order and K_i,Ω_i are the fading parameters as defined in (<ref>). Using the CLT for moderate/large number of antennas, the random variableG_i=∑_j_i=1^N_ig_i^j_i is approximated by the Gaussian random variable 𝒵_i∼𝒩(N_iζ_i,N_iν_i^2). Here, from (<ref>), ζ_i andν_i^2 are, respectively, determined byζ_i=∫_0^∞xf_g_i^j_i(x)dx=(K_i+1)e^-K_i/Ω_i∫_0^∞xe^-(K_i+1)x/Ω_iI_0(2√(K_i(K_i+1)x/Ω_i) )dx,andν_i^2=ρ_i-ζ_i^2,ρ_i=∫_0^∞x^2f_g_i^j_i(x)dx=(K_i+1)e^-K_i/Ω_i∫_0^∞x^2e^-(K_i+1)x/Ω_iI_0(2√(K_i(K_i+1)x/Ω_i) )dx,which, using the variable transform t=√(x), some manipulations and the properties of the Bessel function 1/b^2∫_0^∞x^n+1e^-x^2+c^2/2b^2I_0(cx/b^2)dx=b^n2^n/2Γ(1+n/2)ℒ(-c^2/2b^2),∀ c,b,n, are determined as stated in the lemma. Lemma 3: The outage probability (<ref>) is approximated by (<ref>) with μ̂_i and σ̂_i^2 given in (<ref>)-(<ref>), respectively. 
Replacing the random variable 1/M_iC_i∑_m=1^M_i∑_c=(m-1)C_i+1^mC_ilog(1+√(ϵ_i P_i^cons/(P_i^max)^ϑ_i)G_i(c)) by its equivalent Gaussian random variable 𝒰_i∼𝒩(μ̂_i,1/M_iC_iσ̂_i^2),the probability (<ref>) is rephrased asϕ_i≃(𝒰_i≤R_i/M_i), 𝒰_i∼𝒩(μ̂_i,1/M_iC_iσ̂_i^2),whereμ̂_i =∫_0^∞log(1+√(ϵ_i P_i^cons/(P_i^max)^ϑ_i)x)f_G_i(x)dx ≃^(c)∫_0^∞𝒴_i(x)f_𝒵_i(x)dx= 𝒬(√(ϵ_i P_i^cons/(P_i^max)^ϑ_i),0,N_iζ_i,N_iν_i^2,s_i)-𝒬(√(ϵ_i P_i^cons/(P_i^max)^ϑ_i),0,N_iζ_i,N_iν_i^2,0)+𝒬(r_i,θ-r_id_i,N_iζ_i,N_iν_i^2,∞) -𝒬(r_i,θ-r_id_i,N_iζ_i,N_iν_i^2,s_i), 𝒬(a_1,a_2,a_3,a_4,x)≐-a_1a_3+a_2/2erf(a_3-x/√(2a_4))-a_4/2πa_1e^-(a_3-x)^2/2a_4,andσ̂_i^2=γ̂_i-μ̂_i^2,γ̂_i=∫_0^∞log^2(1+√(ϵ_i P_i^cons/(P_i^max)^ϑ_i)x)f_G_i(x)dx ≃^(d)∫_0^∞𝒴_i^2(x)f_𝒵_i(x)dx= 𝒯(√(ϵ_i P_i^cons/(P_i^max)^ϑ_i),0,N_iζ_i,N_iν_i^2,s_i)-𝒯(√(ϵ_i P_i^cons/(P_i^max)^ϑ_i),0,N_iζ_i,N_iν_i^2,0)+𝒯(r_i,θ-r_id_i,N_iζ_i,N_iν_i^2,∞) -𝒯(r_i,θ-r_id_i,N_iζ_i,N_iν_i^2,s_i), 𝒯(a_1,a_2,a_3,a_4,x)≐1/2√(2π)e^-x^2+a_3^2/2a_4(erf(x-a_3/√(2a_4))-2√(a_4)a_1e^a_3x/a_4(a_1(a_3+x)+2a_2)+√(2π)e^x^2+a_3^2/2a_4(a_1^2(a_3^2+a_4)+2a_1a_2a_3+a_2^2) ).Here, (c) and (d) in (<ref>) and (<ref>) come from approximating f_G_i(x) by f_𝒵_i(x) defined in Lemma 2 and the approximation log(1+√(ϵ_i P_i^cons/(P_i^max)^ϑ_i)x)≃𝒴_i(x) where 𝒴_i(x)={√(ϵ_i P_i^cons/(P_i^max)^ϑ_i)x, x∈[0,s_i] θ+r_i(x-d_i), x> s_i, .,s_i=θ/√(ϵ_i P_i^cons/(P_i^max)^ϑ_i)(1-e^-θ)-1/√(ϵ_i P_i^cons/(P_i^max)^ϑ_i),r_i=√(ϵ_i P_i^cons/(P_i^max)^ϑ_i)e^-θ , d_i=e^θ-1/√(ϵ_i P_i^cons/(P_i^max)^ϑ_i).Then, following the same procedure as in (<ref>), (<ref>) is obtained asϕ_i ≃1/2(1+erf(√(M_iC_i)(R_i/M_i-μ̂_i)/√(2σ̂_i^2))),∀θ>0.Note that, in (<ref>)-(<ref>), θ>0 is an arbitrary parameter and, based on our simulations, accurate approximations are obtained for a broad range of θ>0.Lemma 4: The outage probability of the RF-hop, i.e., (<ref>), is approximately given byϕ_i ≃1/2(1+erf(√(M_iC_i)(R_i/M_i-μ̆_i)/√(2σ̆_i^2))),with μ̆_i and σ̆_i^2 defined in (<ref>) and (<ref>), respectively.To prove the lemma, we again use the CLT where the achievable rate random variable 1/M_iC_i∑_m=1^M_i∑_c=(m-1)C_i+1^mC_ilog(1+√(ϵ_i P_i^cons/(P_i^max)^ϑ_i)G_i(c)) is replaced by𝒰̆_i∼𝒩(μ̆_i,1/M_iC_iσ̆_i^2) withμ̆_i=∫_0^∞log(1+√(ϵ_i P_i^cons/(P_i^max)^ϑ_i)x)f_G_i(x)dx≃^(e)√(ϵ_i P_i^cons/(P_i^max)^ϑ_i)∫_0^∞1-F_𝒵_i(x)/1+√(ϵ_i P_i^cons/(P_i^max)^ϑ_i)xdx≃^(f)√(ϵ_i P_i^cons/(P_i^max)^ϑ_i)∫_0^∞𝒲_i(x)/1+√(ϵ_i P_i^cons/(P_i^max)^ϑ_i)xdx =log(1+√(ϵ_i P_i^cons/(P_i^max)^ϑ_i)(-√(2π N_iν_i^2)/2+N_iζ_i))+𝒜_i(√(ϵ_i P_i^cons/(P_i^max)^ϑ_i),-1/√(2π N_iν_i^2),1/2+N_iζ_i/√(2π N_iν_i^2),-√(2π N_iν_i^2)/2+N_iζ_i)-𝒜_i(√(ϵ_i P_i^cons/(P_i^max)^ϑ_i),-1/√(2π N_iν_i^2),1/2+N_iζ_i/√(2π N_iν_i^2),√(2π N_iν_i^2)/2+N_iζ_i), 𝒜(a_1,a_2,a_3,x)≐a_1a_2x^2/2log(1+a_1x)-a_1a_2x^2/4-a_2log(1+a_1x)/2a_1-a_1a_3x+a_1a_3xlog(1+a_1x)+a_3log(1+a_1x)+a_2x/2.Here, (e) comes from Lemma 2 and partial integration. Then, (f) is obtained by the linearization technique Q(x-N_iζ_i/√(N_iν_i^2))≃𝒲_i(x) with 𝒲_i(x)≐{ 1ifx≤-√(2π N_iν_i^2)/2+N_iζ_i, 1/2-1/√(2π N_iν_i^2)(x-N_iζ_i) ifx∈[-√(2π N_iν_i^2)/2+N_iζ_i,√(2π N_iν_i^2)/2+N_iζ_i], 0ifx>√(2π N_iν_i^2)/2+N_iζ_i, .which is found by linearly approximating Q(x-N_iζ_i/√(N_iν_i^2)) near the point x=N_iζ_i. Finally, the last equality is obtained by partial integration and some manipulations. 
Also, following the same procedure, we haveσ̆_i^2=ρ̆_i-μ̆_i^2ρ̆=∫_0^∞log^2(1+√(ϵ_i P_i^cons/(P_i^max)^ϑ_i)x)f_G_i(x)dx≃^ 2√(ϵ_i P_i^cons/(P_i^max)^ϑ_i)∫_0^∞log(1+√(ϵ_i P_i^cons/(P_i^max)^ϑ_i)x)𝒲_i(x)/1+√(ϵ_i P_i^cons/(P_i^max)^ϑ_i)xdx =log^2(1+√(ϵ_i P_i^cons/(P_i^max)^ϑ_i)(-√(2π N_iν_i^2)/2+N_iζ_i))+ℬ_i(√(ϵ_i P_i^cons/(P_i^max)^ϑ_i),-1/√(2π N_iν_i^2),1/2+N_iζ_i/√(2π N_iν_i^2),-√(2π N_iν_i^2)/2+N_iζ_i)-ℬ_i(√(ϵ_i P_i^cons/(P_i^max)^ϑ_i),-1/√(2π N_iν_i^2),1/2+N_iζ_i/√(2π N_iν_i^2),√(2π N_iν_i^2)/2+N_iζ_i), ℬ(a_1,a_2,a_3,x)≐a_1a_3-a_2/a_1log^2(1+a_1x)-2a_2x+2a_1a_2x+2a_2/a_1log(1+a_1x). In this way, the outage probability is given by (<ref>).Finally, Lemma 5 represents the outage probability of the FSO-based hops as follows.Lemma 5: The outage probability of the FSO-based hop, i.e., (<ref>), is approximately given byϕ̃_i ≃1/2(1+erf(√(M_iC̃_i)(R_i/M_i-μ̃_i)/√(2σ̃_i^2))),where μ̃_i and σ̃_i^2 are given by (<ref>)-(<ref>) and <cit.> for the exponential and the Gamma-Gamma distributions of the FSO links, respectively.Using the CLT, the random variable 1/M_iC̃_i∑_m=1^M_i∑_c=(m-1)C̃_i+1^mC_ilog(1+P̃_iG̃_i(c)) is approximated by its equivalent Gaussian random variable ℛ_i∼𝒩(μ̃_i,1/M_iC̃_iσ̃_i^2),where for the exponential distribution of the FSO link we haveμ̃_i =∫_0^∞f_G̃_i(x)log(1+P̃_ix)dx= ^(g)P̃_i∫_0^∞1-F_G̃_i(x)/1+P̃_ixdx=-e^λ̃_i /P̃_iEi(-λ_i /P̃_i),andσ̃_i^2=ρ̃_i-μ̃_i^2,ρ̃_i=∫_0^∞f_G̃_i(x)log^2(1+P̃_ix)dx= ^(h)2P̃_i∫_0^∞e^- λ_i x/1+P̃_ixlog(1+P̃_ix)dx= ^(i)ℋ_i(∞)-ℋ_i(1),ℋ_i(x)=2e^λ_i/P̃_i(λ_i/P̃_ix3F_3(1,1,1;2,2,2;-λ_i x/P̃_i)+1/2log(x)(-2(log(λ_i/P̃_ix)+ℰ)-2Γ(0,λ_i/P̃_ix)+log(x))).Here, Ei(x)=∫_x^∞e^-tdt/t denotes the exponential integral function. Also,(g) and (h) are obtained by partial integration. Then, denoting the Euler constant by ℰ, (i) is given by the variable transformation 1+P̃_ix=t, some manipulations, as well as the definition of the Gamma incomplete function Γ(s,x)=∫_x^∞t^s-1e^-tdt and the generalized hypergeometric function a_1F_a_2(·). For the Gamma-Gamma distribution, on the other hand, the PDF f_G̃_i in (<ref>)-(<ref>) is replaced by (<ref>) and the mean and variance are calculated by <cit.> and <cit.>, respectively. In this way, following the same arguments as in Lemmas 1, 3-4, the outage probability of the FSO-based hops is given by (<ref>).Lemmas 1-5 lead to different corollary statements about the performance of multi-hop RF-FSO systems, as stated in the following.Corollary 1: With long codewords, there are mappings between the performance of FSO- and RF-based hops, in the sense that the outage probability achieved in an RF-based hop is the same as the outage probability in an FSO-based hop experiencing specific long-term channel characteristics.The proof comes from Lemmas 1-5 where for different hops the outage probability is given by the CDF of Gaussian random variables. 
Thus, with appropriate long-term channel characteristics, (μ_i,σ_i), (μ̂_i,σ̂_i), (μ̆_i,σ̆_i) and (μ̃_i,σ̃_i) in Lemmas 1 and 3-5 can be equal leading to the same outage probability in these hops.In this way, the performance of RF-FSO based multi-hop/mesh networks can be mapped toones using only the RF- or the FSO-based communication.Corollary 2: With asymptotically long codewords, 1) the minimum number of transmit antennas in an RF-based hop, such that the same rate is supported in all hops, is found by the solution ofN_i'=_x{log(1+√(ϵ_i P_i^cons/(P_i^max)^ϑ_i)(-√(2π xν_i^2)/2+xζ_i))+𝒜_i(√(ϵ_i P_i^cons/(P_i^max)^ϑ_i),-1/√(2π xν_i^2),1/2+xζ_i/√(2π xν_i^2),-√(2π xν_i^2)/2+xζ_i)-𝒜_i(√(ϵ_i P_i^cons/(P_i^max)^ϑ_i),-1/√(2π xν_i^2),1/2+xζ_i/√(2π xν_i^2),√(2π xν_i^2)/2+xζ_i)=min_∀ j=1,…,T̃{μ̃_j}},∀ i,which can be calculated numerically.2) Also, the ergodic achievable rate of the multi-hop network is approximately given byC̅(T,T̃)=min(min_∀ j=1,…, T{μ̆_j},min_∀ j=1,…,T̃{μ̃_j}), with μ̆_̆ĭ defined in (<ref>) and μ̃_i given by (<ref>) and <cit.> for the exponential and Gamma-Gamma distributions of the FSO link, respectively.With asymptotically long codewords, i.e., very large C_i,C̃_i, the achievable rates in the RF- and FSO-based hops converge to their corresponding ergodic capacity, and there is no need for HARQ because the data is always correctly decoded if it is transmitted with rates less than or equal to the ergodic capacity. Denoting the expectation operation by E{·}, the ergodic capacity of an FSO-hop is given by μ̃_i=E{log(1+P̃_iG̃_i)} which is determined by (<ref>) and <cit.> for the exponential and Gamma-Gamma distributions of the FSO-based hop, respectively. For the RF-based hop, on the other hand, the ergodic capacity is found as E{log(1+√(ϵ_i P_i^cons/(P_i^max)^ϑ_i)G_i)}≃μ̆_i with μ̆_i given in (<ref>). In this way, the maximum achievable rate of the FSO-based hops is R̃=min_∀ j=1,…,T̃{μ̃_j}. Also, the minimum number of required antennas in the i-th RF-based hop is found by solving μ̆_i=R̃ which, from (<ref>), leads to (<ref>). Note that (<ref>) is a single-variable equation and can be effectively solved by different numerical techniques.Finally, following the same argument, the ergodic achievable rate of the RF-FSO network is given by (<ref>), i.e., the maximum rate in which the data is correctly decoded in all hops.§.§ Performance Analysis with Short CodewordsUp to now, we considered the long-codeword scenario such that the CLT provides accurate approximation for the sum of independent and identically distributed (IID) random variables. However, it is interesting to analyze the system performance in cases with short codewords, i.e., when C_i and C̃_i,∀ i, are small. This case is especially important for FSO links since the coherence time can be quite long (milliseconds). Here, we mainly concentrate on the Gamma-Gamma distribution of the FSO-based hops. 
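For simulations in this regime it is convenient to draw the Gamma-Gamma gains directly, without evaluating the Bessel-function PDF: a unit-mean Gamma-Gamma variate is the product of two independent unit-mean Gamma variates with shapes a_i and b_i. A minimal sketch follows; the shape values in the example are the Rytov-variance-one pair used in the numerical results of Section IV.

```python
import numpy as np

def gamma_gamma_gain(a, b, size, rng):
    """Unit-mean Gamma-Gamma fading gains, drawn as the product of two
    independent unit-mean Gamma variates with shapes a and b."""
    return rng.gamma(a, 1.0 / a, size) * rng.gamma(b, 1.0 / b, size)

def fso_hop_outage(R, M, C, a, b, P, trials=200_000, seed=1):
    """Monte Carlo estimate of the FSO-hop outage probability with
    Gamma-Gamma distributed gains and transmission power P."""
    rng = np.random.default_rng(seed)
    G = gamma_gamma_gain(a, b, (trials, M * C), rng)
    I = np.log1p(P * G).mean(axis=1)
    return np.mean(I <= R / M)

# Short-codeword example, e.g. M = 1 and C = 1:
# fso_hop_outage(R=3, M=1, C=1, a=4.3939, b=2.5636, P=10.0)
```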
The same results as in <cit.> can be applied to derive the outage probability of the FSO-based hops in the cases with exponential distribution.Lemma 6: For arbitrary numbers of M_i, C_i and C̃_i, 1) The outage probabilities of the FSO- and RF-based hops are bounded by (<ref>)and (<ref>), respectively.2) At low SNRs, a MISO-HARQ RF-based hop with M_i retransmissions, N_i transmit antennas and C_i channel realizations per sub-codeword transmission can be mapped to an open-loop MISO setup with M_iN_iC_i transmit antennas and single channel realization per codeword transmission.Considering the FSO-based hops, one can use the Minkowski inequality <cit.>(1+(∏_i=1^nx_i)^1/n)^n≤∏_i=1^n(1+x_i),to writeϕ̃_i=(1/M_iC̃_i∑_m=1^M_i∑_c=(m-1)C̃_i+1^mC̃_ilog(1+P̃_iG̃_i(c))≤R_i/M_i)= (∏_m=1^M_i∏_c=(m-1)C̃_i+1^mC̃_i(1+P̃_iG̃_i(c))≤ e^C̃_iR_i)≤(1+P̃_i(∏_m=1^M_i∏_c=(m-1)C̃_i+1^mC̃_iG̃_i(c))^1/M_iC̃_i≤ e^R_i/M_i)=F_𝒥_i((e^R_i/M_i-1/P̃_i)^M_iC̃_i),where, using the results of <cit.> and for the Gamma-Gamma distribution of the variables G̃_i, the random variable 𝒥_i=∏_m=1^M_i∏_c=(m-1)C̃_i+1^mC̃_iG̃_i(c) follows the CDFF_𝒥_i(x) =1/Γ^M_iC̃_i(a_i)Γ^M_iC̃_i(b_i)𝒢_1,2M_iC̃_i+1^2M_iC̃_i,1((a_ib_i)^M_iC̃_ix|_a_i,a_i,…,a_i_M_iC̃_itimes, b_i,b_i,…,b_i_M_iC̃_itimes,0^1),with 𝒢(.) denoting the Meijer G-function. Note that, the results of (<ref>) are mathematically applicable for every values of M_i,C̃_i. However, for, say M_iC̃_i≥ 6, the implementation of the Meijer G-function in MATLAB is very time-consuming and the tightness of theapproximation decreases with M_i,C̃_i. Therefore, (<ref>) is useful for the performance analysis incases with small M_i,C̃_i,∀ i, while the CLT-based approach of Section III.A provides accurate performance evaluation for cases with long codewords. For the RF-based hop, on the other hand, we use1/nlog(1+∑_j=1^nx_j)≤1/n∑_j=1^nlog(1+x_j)≤log(1+1/n∑_j=1^nx_j),∀ n,x_j≥ 0,to lower- and upper-bound the outage probability by(log(1+√(ϵ_i P_i^cons/(P_i^max)^ϑ_i)/M_iC_i∑_m=1^M_i∑_c=(m-1)C_i+1^mC_iG_i(c))≤R_i/M_i) ≤ϕ_i≤(1/M_iC_ilog(1+√(ϵ_i P_i^cons/(P_i^max)^ϑ_i)∑_m=1^M_i∑_c=(m-1)C_i+1^mC_iG_i(c))≤R_i/M_i) ⇒ F_G_i(M_iC_i(e^R_i/M_i-1)/√(ϵ_i P_i^cons/(P_i^max)^ϑ_i))≤ϕ_i≤ F_G_i(e^R_iC_i-1/√(ϵ_i P_i^cons/(P_i^max)^ϑ_i)).Here, G_i=∑_m=1^M_i∑_c=(m-1)C_i+1^mC_i∑_j_i=1^N_ig_i^j_i(c) is an equivalent sum channel gain variable with M_iC_iN_i antennas at the transmitter whose PDF is obtained by replacing N_i with M_iC_iN_i in (<ref>). Also, F_G_i(·) denotes the CDF of the equivalent sum channel gain variable.To prove Lemma 6 part 2, we note that letting x_i→ 0,∀ i, the inequalities in (<ref>) are changed to equality. Thus, as a corollary result, at low SNRs a MISO-HARQ RF-link with N_i transmit antennas, M_i retransmissions and C_i channel realizations within each retransmission round can be mapped to an open-loop MISO setup with M_iC_iN_i transmit antennas, in the sense that the same outage probability is achieved in these setups.Note that the bounding schemes of (<ref>) are mathematically applicable for every values of M_i,C_i. However, while the results of (<ref>) tightly match the exact numerical results for small values of M_i,C_i, the tightness decreases for large M_i,C_i's. 
Thus, the results of Lemmas 1-4 and Lemma 6 can be effectively applied for the performance analysis of the RF-based hops in the cases with long and short codewords, respectively.Finally, as another approximation for the cases with M_i=1,C_i=1, we haveϕ_i|m_i=1,C_i=1≃1/2(1+erf((e^R_i-1/√(ϵ_i P_i^cons/(P_i^max)^ϑ_i)-N_iζ_i)/√(2N_iν_i^2))),which comes from Lemma 2.§.§ Performance Analysis in Mesh NetworksConsider a mesh network consisting of 𝒳 non-overlapping routes from the source to the destination with independent channel realizations for the hops. The 𝓍-th, 𝓍=1,…,𝒳, route is made of T_𝓍 RF- and T̃_𝓍 FSO-based hops and the routes can have different total number of hops T_𝓍^total=T_𝓍+T̃_𝓍,𝓍=1,…,𝒳. In this case, the network outage probability is given by(Outage)^mesh=∏_𝓍=1^𝒳((Outage_𝓍)),where (Outage_𝓍) is the outage probability in the 𝓍-th route as given in (<ref>). In (<ref>), we have used the fact that in a mesh network an outage occurs if the data is correctly transferred to the destination through none of the routes. With the same arguments, the ergodic achievable rate of the mesh network is obtained byC̅^mesh=max_∀𝓍=1,…,𝒳{C̅_𝓍},with C̅_𝓍 derived in (<ref>). This is based on the fact that, knowing the long-term channel characteristics, one can set the data rate equal to the maximum achievable rate of the best route and the message is always correctly decoded by the destination, if the codewords are asymptotically long. The performance of mesh networks is studied in Fig. 9. § NUMERICAL RESULTSThroughout the paper, we presented different approximation techniques. The verification of these results is demonstrated in Figs. 1, 2, 6-8 and, as seen in the sequel, the analytical results follow the numerical results with high accuracy. Then, to avoid too much information in each figure, Figs. 3-5, 9 report only the simulation results. Note that in all figures we have double-checked the results with the ones obtained analytically, and they match tightly.The simulation results are presented for homogenous setups. That is, different RF-based hops follow the same long-term fading parameters K_i,ω_i,∀ i, in (<ref>)-(<ref>), and the FSO-based hops also experience the same long-term channel parameters, i.e., λ_i,a_i and b_i in (<ref>)-(<ref>). Moreover, we set M_i=M_j and R_i=R_j,∀ i,j=1,…, T^total. In all figures, we setP̃_i=N_iP_i^cons such that the total consumed power at different hops is the same. Then, using (<ref>), one can determine the output power of the RF-based antennas. Also, because the noise variances are set to 1, P̃_i (in dB, 10log_10P̃_i) is referred to the SNR as well. In Figs. 1, 2 and 5, we assume an ideal PA. The effect of non-ideal PAs is verified in Figs. 3, 4, 6-9. With non-ideal PAs, we consider the state-of-the-art parameter settings ϑ_i=0.5, ϵ_i=0.75, P_i^max=25dB,∀ i, <cit.>, unless otherwise stated.The parameters of the Rician RF PDF (<ref>) are set to ω_i=1, K_i=0.01, ∀ i, leading to unit mean and variance of the channel gain distribution f_g_i^j_i(x),∀ i,j_i. With the exponential distribution of the FSO-based hops, we consider f_G̃_i(x)=λ_i e^-λ_i x with λ_i=1,∀ i. Also, for the Gamma-Gamma distribution we set f_G̃_i(x)=2(a_ib_i)^a_i+b_i/2/Γ(a_i)Γ(b_i)x^a_i+b_i/2-1𝒦_a_i-b_i(2√(a_ib_ix)), a_i=4.3939, b_i=2.5636, ∀ i, which corresponds to Rytov variance of 1 <cit.>. Figures 1-8 consider multi-hop networks. The performance of mesh networks is studied in Fig. 9. 
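Given these conventions, the end-to-end curves can be reproduced, up to Monte Carlo noise, by estimating each hop separately and combining the results through the product expressions of Sections III and III.C. Below is a self-contained sketch for a dual-hop RF-FSO link with exponential FSO fading; the defaults are one representative configuration of the kind used in the figures, and the SNR sweep in the comment is illustrative.

```python
import numpy as np

def dual_hop_outage(snr_db, R=3.0, M=3, N=60, C=10, C_fso=20, K=0.01,
                    Omega=1.0, lam=1.0, eps=0.75, theta=0.5,
                    P_max=10 ** 2.5, trials=200_000, seed=2):
    """End-to-end outage 1-(1-phi_RF)(1-phi_FSO) of one RF plus one FSO hop.
    The FSO power equals N*P_cons, so its dB value is the SNR, as above."""
    rng = np.random.default_rng(seed)
    P_fso = 10 ** (snr_db / 10)
    P_cons = P_fso / N                                  # consumed power per RF antenna
    P_rf = np.sqrt(eps * P_cons / P_max ** theta)       # PA model of Section II
    G_rf = Omega / (2 * (K + 1)) * rng.noncentral_chisquare(
        2 * N, 2 * N * K, (trials, M * C))              # sum of N Rician gains
    phi_rf = np.mean(np.log1p(P_rf * G_rf).mean(axis=1) <= R / M)
    G_fso = rng.exponential(1.0 / lam, (trials, M * C_fso))
    phi_fso = np.mean(np.log1p(P_fso * G_fso).mean(axis=1) <= R / M)
    return 1.0 - (1.0 - phi_rf) * (1.0 - phi_fso)

# e.g. [(s, dual_hop_outage(s)) for s in (0, 5, 10, 15)]
```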
Note that, as discussed in Section III, the results ofcases with long codewords and few number of retransmissions can be mapped to the cases with short codewords, large number of retransmissions and a scaled code rate (see (<ref>)-(<ref>)). Finally, it is worth noting that we have verified the analytical and the numerical results for a broad range of parameter settings, which, due to space limits and because they lead to the same qualitative conclusions as in the presented figures, are not reported in the figures.The simulation results are presented in different parts as follows.On the approximation approaches of Lemmas 1-5: Considering an ideal PA, M_i=1 (as the worst-case scenario), R_i=2 npcu, and C_i=10, ∀ i, Fig. 1 verifies the tightness of the approximation schemes of Lemmas 1-4. Particularly, we plot the outage probability of an RF-based hop for different numbers of transmit antennas N_i, ∀ i. Then, Fig. 2 demonstrates the outage probability of a dual-hop RF-FSO setup versus the SNR.Here, we set M_i=1, R_i=1,2 npcu, and C_i=10, C̃_i=30, N_i=20, T=1, T̃=1 ∀ i, and the results are presented for cases with ideal PAs at the RF-based hops. Asobserved, the analytical results of Lemmas 2-5 mimic the exact results with very high accuracy (Figs. 1-2). Also, Lemma 1properly approximates the outage probability at low and high SNRs, and the tightness increases as the code rate decreases (Fig. 2). Moreover, the tightness of the approximation results of Lemmas 3-4 increases with the number of RF-based transmit antennas (Fig. 1). This is because the tightness of the CLT-based approximations in Lemma 2 increases with N_i,∀ i. Finally, although not demonstrated in Figs. 1-2, the tightness of the CLT-based approximation schemes of Lemmas 3-5 increases with the maximum number of retransmissions M_i,∀ i.On the effect of HARQ and imperfect PAs: Shown in Fig. 3 is the outage probability of a dual-hop RF-FSO network for different maximum numbers of HARQ-based retransmission rounds M_i,∀ i. Also, the figure compares the system performance in cases with ideal and non-ideal PAs. Here, the results are obtained for the exponential distribution of the FSO link, T=1, T̃=1, C_i=10, C̃_i=20, R_i=3 npcu, andN_i=60, ∀ i. As demonstrated, with no HARQ, the efficiency of the RF-based PAs affects the system performance considerably. For instance, with the parameter settings of the figure and outage probability 10^-4, the PAs inefficiency increases the required power by 3.5 dB. On the other hand, the HARQ can effectively compensate the effect of imperfect PAs, and the difference between the outage probability of the cases with ideal and non-ideal PAs is negligible for M>1. Also, the effect of non-ideal PA decreases at high SNRs which is intuitively because the effective efficiency of the PAs ϵ_i^effective=ϵ_i(P_i/P_i^max)^ϑ_i,∀ i, is improved as the SNR increases. Finally, the implementation of HARQ improves the energy efficiency significantly. Asan example, consider the outage probability 10^-4, an ideal PA and the parameter settings of Fig. 3. Then, compared to the open-loop communication, i.e., M_i=1, the implementation of HARQ with a maximum of 2 and 3 retransmissions reduces the required power by 13 and 17 dB, respectively.System performance with different numbers of hops: In Fig. 4, we demonstrate the outage probability in cases with different numbers of RF- and FSO-based hops, i.e., T, T̃. In harmony with intuition, the outage probability increases with the number of hops. 
However, the outage probability increment is negligible, particularly at high SNRs, because the data is correctly decoded with high probability in different hops as the SNR increases. Finally, as a side result, the figure indicates that the outage probability of the RF-FSO based multi-hop network is not sensitive to the distribution of the FSO-based hops at low SNRs. This is intuitive because, at low SNRs and with the parameter settings of the figure, the outage event mostly occurs in the RF-based hops. However, at high SNRs where the outage probability of different hops are comparable, the PDF of the FSO-based hops affects the network performance.On the effect of RF-based transmit antennas: Considering an exponential distribution of the FSO-based hops, ideal PAs, R_i=1.5 npcu, M_i=1, T_i=T̃_i=1, ∀ i, and SNR=10 dB, Fig. 5 demonstrates the effect of the number of RF transmitantennas on the network outage probability. Also, the figure compares the system performance in cases with short and long codewords, i.e., in cases with small and large values of C_i,C̃_i. As seen, with short codewords, the outage probability decreases with the number of RF-based transmit antennas monotonically. This is because, with the parameter settings of the figure, the data is correctly decoded with higher probability as the number of antennas increases. With long codewords, on the other hand, the outage probability is (almost) insensitive to the number of transmit antennas for N_i≥ 3. Finally, the outage probability decreases with C_i,C̃_i, because the HARQ exploits time diversity as more channel realizations are experienced within each codeword transmission.In Fig. 6, we plot the network ergodic achievable rates for cases with Gamma-Gamma distribution of the FSO-based hops, non-ideal PAs and different numbers of transmit antennas/SNRs. Also, the figure verifies the accuracy of the approximation schemes of Corollary 2. Note that, due to the homogenous network structure, the ergodic rate is independent of the number of RF- and FSO-based hops. As seen, at low/moderate SNRs, the network ergodic rate increases (almost) logarithmically with the number of RF antennas. At high SNRs, on the other hand, the ergodic rate becomes independent of the number of RF transmit antennas. This is because with large number of RF-based antennas the achievable rate of the RF-based hops exceeds the one in FSO-based hops, and the network ergodic rate is given by the achievable rate of the FSO-based hops. Finally, the number of antennas above which the ergodic rate is limited by the achievable rate of the FSO-based hops increases with the SNR.On the ergodic achievable rates: Along with Fig. 6, we evaluate the accuracy of the results of Corollary 2 in Figs. 7a and 7b. Particularly, Fig. 7a demonstrates the network ergodic rate for different PA models and compares the simulation results with the ones derived in (<ref>). Then, Fig. 7b verifies the accuracy of (<ref>). Here, we show the minimum number of required RF transmit antennas versus the SNR which determines the ergodic rate of the FSO-based hops. As can be seen, the approximation results of Corollary 2 are very tight for a broad range of parameter settings. Thus, (<ref>) and (<ref>) can be effectively used to derive the required number of RF transmit antennas and the network ergodic rate, respectively (Figs. 7a and 7b). The ergodic rate shows different behaviors in the, namely, FSO-limited and RF-limited regions. 
With the parameter settings of the figure, the ergodic rate is limited by the achievable rates of the FSO-based hops at low SNRs (FSO-limited region in Fig. 7a). However, as the SNR increases, the achievable rates of the FSO-based hops exceed the ones in the RF-based hops and, consequently, the network ergodic rate is limited by the rate of the RF-based hops (RF-limited region in Fig. 7a). As a result, the efficiency of the RF PAs affects the ergodic rate at high SNRs. Finally, the figure indicates that at high SNRs the network ergodic rate increases (almost) linearly with the SNR.As shown in Fig. 7b, the required number of RF-based transmit antennas to guarantee the same rate as in the FSO-based hops increases considerably with the SNR and, consequently, the ergodic rate of the FSO-based hops. Moreover, the PAs efficiency affects the required number of antennas significantly. As an example, consider the parameter settings of Fig. 7b and SNR = 3 dB. Then, the required number of RF-based antennas is given by 33, 49 and 98 for the cases with PA efficiency 75%, 50% and 25%, respectively. Thus, hardware impairments such as the PA inefficiency affect the system performance remarkably and should be carefully considered in the network design. However, selecting the proper number of antennas and PA properties is not easy because the decision depends on several parameters such as complexity, infrastructure size and cost.Performance analysis with short codewords: In Figs. 8a, 8b and 8c, we study the outage probability of an FSO-based hop, an RF-based hop and a dual-hop RF-FSO network, respectively. Particularly, considering non-ideal PAs and Gamma-Gamma distribution of the FSO-based hops, the results are obtained for C_i=C̃_i=1, M_i=1,2 ∀ i, and we evaluate the accuracy of different bounds/approximations in Lemma 6 and (<ref>). As demonstrated, the bound of (<ref>) matches the exact values derived via simulation analysis of ϕ̃_i exactly in cases with M_i=1. Also, the bounding/approximation methods of (<ref>), (<ref>) and (<ref>) mimic the numerical results with high accuracy in cases with a maximum of M_i=2, ∀ i, retransmissions. Thus, the results of Section III.B can be efficiently used to analyze the RF-FSO systems incases with small values of M_i, C_i,C̃_i,∀ i.On the performance of mesh networks: In Fig. 9, we study the outage probability of mesh networks for different numbers of routes. Here, we consider non-ideal PAs, exponential PDF of the FSO hops, M_i=3,N_i=60, C_i=10, C̃_i=10, and R_i=3 npcu. The results are presented forcases with one RF- and one FSO-based hop in each route. Also, we compare the outage probability of the mesh network with that of a single route setup consisting of different numbers of RF- and FSO-based hops. Note that there are the same total number of RF- and FSO-based hops in each case with 𝒳=n,T_𝓍=T̃_𝓍=1,∀𝓍=1,…,𝒳, and 𝒳=1,T_1=T̃_1=n.As demonstrated in Fig. 9, in contrast to the single-route setup where the outage probability increases with the number of hops, the outage probability of the mesh network decreases considerably by adding more parallel routes into the network. For instance, consider the parameter settings of the figure and outage probability of 10^-6. Then, compared tocases with a single route, the required SNR at each hop decreases by almost 1.2 dB if the data is transferred through two routes. This is intuitive because the probability that the data is correctly received by the destination increases with the number of routes. 
However, the relative effect of adding more routes decreases with the number of routes, and, for the example parameters considered, there is about 0.5 dB energy efficiency improvement if the number of routes increases from 𝒳=2 to 𝒳=3. Finally, while we did not consider it in Section III.C, the performance of the mesh network is further improved if the signals from different routes are combined at the destination.

§ CONCLUSION
We studied the performance of RF-FSO based multi-hop and mesh networks in cases with short and long codewords. Considering different channel conditions, we derived closed-form expressions for the networks' outage probability and ergodic rates, as well as the required number of RF transmit antennas to guarantee different achievable-rate quality-of-service requirements. The results are presented for cases with and without HARQ. As demonstrated, depending on the codeword length, there are different methods for analytical performance evaluation of the multi-hop/mesh networks. Moreover, there are mappings between the performance of RF-FSO based multi-hop networks and the ones using only the RF- or the FSO-based communication. Also, the HARQ can effectively improve the energy efficiency and compensate for the effect of hardware impairments. Finally, the outage probability of multi-hop networks is not sensitive to the number of RF-based transmit antennas when this number is large, while the ergodic rate is significantly affected by the number of antennas.

§ ACKNOWLEDGEMENT
The research leading to these results received funding from the European Commission H2020 programme under grant agreement n^∘671650 (5G PPP mmMAGIC project), and from the Swedish Governmental Agency for Innovation Systems (VINNOVA) within the VINN Excellence Center Chase. | http://arxiv.org/abs/1703.09298v1 | {
"authors": [
"Behrooz Makki",
"Tommy Svensson",
"Maite Brandt-Pearce",
"Mohamed-Slim Alouini"
],
"categories": [
"cs.IT",
"math.IT"
],
"primary_category": "cs.IT",
"published": "20170327202440",
"title": "On the Performance of Millimeter Wave-based RF-FSO Multi-hop and Mesh Networks"
} |
On Dirichlet series and functional equations

Alexey Kuznetsov [Dept. of Mathematics and Statistics, York University, 4700 Keele Street, Toronto, ON, M3J 1P3, Canada. E-mail: [email protected]]

December 30, 2023
====================================================================

There exist many explicit evaluations of Dirichlet series. Most of them are constructed via the same approach: by taking products or powers of Dirichlet series with a known Euler product representation. In this paper we derive a result of a new flavour: we give the Dirichlet series representation of the solution f=f(s,w) of the functional equation L(s-wf)=exp(f), where L(s) is the L-function corresponding to a completely multiplicative function. Our result seems to be a Dirichlet series analogue of the well-known Lagrange-Bürmann formula for power series. The proof is probabilistic in nature and is based on Kendall's identity, which arises in the fluctuation theory of Lévy processes.

Keywords: L-function, completely multiplicative function, functional equation, infinite divisibility, subordinator, convolution semigroup, Kendall's identity

2010 Mathematics Subject Classification: Primary 11M41, Secondary 60G51

§ INTRODUCTION AND THE MAIN RESULT

Let a: ℕ↦ℂ be a completely multiplicative function, that is, a(mn)=a(m)a(n) for all m, n ∈ℕ. Denote by L(s) the corresponding L-function
L(s)=∑_n≥ 1 a(n)/n^s.
We assume that the above series converges absolutely for ℜ(s)≥σ. Complete multiplicativity of a(n) implies that L(s) can be expressed as an absolutely convergent Euler product
L(s)=∏_p (1-a(p)p^-s)^-1, ℜ(s)≥σ,
where the product is taken over all prime numbers p. The following functions will play the key role in what follows: for n ∈ℕ and z∈ℂ we define
d_z(n):=∏_p^j ‖ n \binom{j+z-1}{j},   d̃_z(n):=z^-1 d_z(n),
where p^j ‖ n means that p^j divides n while p^j+1 does not. The function d_z(n) is multiplicative and it is called the general divisor function, see <cit.>[Section 14.6]. Starting from the Euler product representation for ζ(s) and writing the terms (1-p^-s)^-z as binomial series in p^-s, it is easy to see that
ζ(s)^z=∑_n≥ 1 d_z(n)/n^s, z∈ℂ, s>1.
The multiplicative function d_z(n) is well known in the literature. Selberg <cit.> has obtained the main term of the asymptotics of D_z(x):=∑_n≤ x d_z(n) as x→ +∞; the information about higher-order terms can be found in <cit.>[Theorem 14.9]. The function d_k(n) (for integer k≥ 2) is known simply as the divisor function (see <cit.>[Section 17.8] or <cit.>). The name comes from the following fact:
d_k(n)=∑_m_1 m_2 ⋯ m_k=n, m_i≥ 1 1,
which follows from (<ref>). In other words, d_k(n) counts the number of ways of expressing n as an ordered product of k positive factors (of which any number may be unity). For example, d_2(n) is the number of divisors of n, which is commonly denoted by d(n). Also, note that for all n≥ 2 the function z↦d̃_z(n) is a polynomial of degree Ω(n)-1, where Ω(n) is the total number of prime factors of n. In particular, d̃_z(n) ≡ 1 if and only if n is a prime number. Let us denote D_σ,ρ:={(s,w) ∈ℂ^2 : ℜ(s)≥σ, |w| ≤ρ}. The following theorem is our main result.
Assume that the Dirichlet series (<ref>), which corresponds to a completely multiplicative function {a(n)}_n ∈ℕ, converges absolutely for ℜ(s) ≥σ. Denote γ:=ln(∑_n≥ 1|a(n)|/n^σ). Then for any ρ>0:

(i) The series
f(s,w):=∑_n≥ 2 d̃_w ln(n)(n) × a(n)/n^s
converges absolutely and uniformly in (s,w) ∈ D_σ+γρ,ρ and satisfies |f(s,w)|<γ in this region;

(ii) The function f(s,w) solves the functional equation
L(s-wf(s,w))=exp(f(s,w)), (s,w)∈ D_σ+γρ,ρ;

(iii) For any v ∈ℂ and (s,w) ∈ D_σ+γρ,ρ the following identity is true:
1+v∑_n≥ 2 d̃_v+w ln(n)(n) × a(n)/n^s= exp(vf(s,w)).

The proof of Theorem <ref> is presented in the next section. Let us consider what happens with formulas (<ref>) and (<ref>) when w=0. Note that d̃_0(n)=1/j if n=p^j for some prime p and d̃_0(n)=0 otherwise. Then we can write d̃_0(n) = Λ(n)/ln(n), where the von Mangoldt function {Λ(n)}_n∈ℕ is defined as follows: Λ(n)= ln(p) if n=p^k for some prime p and integer k≥ 1, and Λ(n)=0 otherwise. Using the above result and (<ref>) we obtain
f(s,0)=∑_n≥ 2 Λ(n)a(n)/(ln(n) n^s) =ln(L(s)) for ℜ(s)≥σ.
Formula (<ref>) confirms the functional identity (<ref>) in the case w=0. Formula (<ref>) in the case w=0 also becomes a trivial identity L(s)^v=exp(v ln L(s)) (see equation (<ref>) below). Formula (<ref>), which gives a solution to the functional equation (<ref>), has some similarities to the Lagrange-Bürmann inversion formula for analytic functions. Let us recall what the latter result states. Consider a function ψ, which is analytic in a neighbourhood of w=0 and satisfies ψ(0)≠ 0. Let w=g(z) denote the solution of zψ(w)=w. Then g(z) can be represented as a convergent Taylor series
g(z)=∑_n≥ 1 lim_w→ 0[ d^n-1/dw^n-1 ψ(w)^n ] z^n/n!,
which converges in some neighbourhood of z=0. Note that both formulas (<ref>) and (<ref>) are based on the coefficients of the expansion of a power of the original function in a certain basis (the basis consists of power functions z^n in the case of the Lagrange-Bürmann inversion formula and exponential functions n^-s in the case of formula (<ref>)). Formula (<ref>) implies the well-known result
d_t+s(n)=∑_k|n d_t(k) × d_s(n/k), t,s ∈ℂ.
Similarly, formula (<ref>) implies the following more general result:
(t+s) d̃_t+s+w ln(n)(n) = ts∑_k|n d̃_t+w ln(k)(k)×d̃_s+w ln(n/k)(n/k), t,s,w ∈ℂ.
Note that (<ref>) is a special case of (<ref>) with w=0 and that both sides of (<ref>) are polynomials in the variables (t,s,w). It would be an interesting exercise to try to find an elementary proof of (<ref>).

Next we present a corollary of Theorem <ref>; its proof is postponed until Section <ref>. Whenever we use ln(L(s)) in what follows, we will always assume that ℜ(s)≥σ and the branch of the logarithm is chosen so that (<ref>) holds (another way to fix the branch of the logarithm is to require that ln(L(s))→ 0 as s→ +∞).
As we have mentioned above, the functions v↦d̃_{v+wln(n)}(n) are polynomials of degree Ω(n)-1, thus formula (<ref>) can be viewed as an expansion of the entire function v∈ℂ↦((π^2/6)^v-1)/v in such an unusual polynomial basis. There are two natural questions that arise from identity (<ref>): (i) What is the largest domain of z for which the series converges absolutely or conditionally? (ii) Is it possible to find an elementary proof of (<ref>)?

§ PROOF OF THEOREM <REF>

The proof of Theorem <ref> will proceed in two stages. First we will prove Theorem <ref> in the special case when v>0, w>0 and a(n)≥0 for all n∈ℕ. This proof is probabilistic in nature and it is based on the theory of Lévy processes, see <cit.>. In the second stage we will complete the proof of Theorem <ref> by generalizing our earlier result to complex values v, w and a(n) by an analytic continuation argument.

For convenience of the reader, we will first review several key facts from the theory of Lévy processes, which will be required in our proof (one may wish to consult the books <cit.> and <cit.> for more detailed information). A one-dimensional stochastic process X={X_t}_{t≥0} is called a subordinator if it has stationary and independent increments and if its paths (functions t∈[0,∞)↦X_t) are increasing almost surely. We will always assume that ℙ(X_0=0)=1. A probability measure ν(dx) supported on [0,∞) is called infinitely divisible if for any n=2,3,4,… there exists a probability measure ν_n(dx) such that ν=ν_n*ν_n*⋯*ν_n (ν is an n-fold convolution of the measure ν_n). It is known that subordinators stand in one-to-one correspondence with infinitely divisible measures: for any subordinator X the measure ℙ(X_1∈dx) is infinitely divisible, and for any infinitely divisible measure ν supported on [0,∞) there exists a unique subordinator X such that ℙ(X_1∈dx)=ν(dx).

Let X be a subordinator and 𝐞(κ) be an exponential random variable with mean 1/κ, independent of X. We define a new process via
X̃_t = X_t, if t<𝐞(κ); +∞, if t≥𝐞(κ).
The process X̃ is called a killed subordinator. Note that killed subordinators satisfy ℙ(X̃_t ∈ [0,∞))=ℙ(𝐞(κ)>t)=exp(-κt), thus the measures ℙ(X̃_t∈dx) are sub-probability measures. Any subordinator (including killed ones) can be described through an associated Bernstein function ϕ_X(z) via the identity
𝔼[e^{-zX_t}]=∫_{[0,∞)} e^{-zx} ℙ(X_t ∈ dx)=e^{-tϕ_X(z)}, ℜ(z)≥0, t>0.
The above identity expresses the fact that the probability measures μ_t(dx)=ℙ(X_t∈dx) form a convolution semigroup on [0,∞), that is μ_t*μ_s=μ_{t+s}. The Lévy-Khintchine formula tells us that any Bernstein function has an integral representation
ϕ_X(z) = κ + δz + ∫_{(0,∞)} (1-e^{-zx}) Π(dx), ℜ(z)≥0,
for some κ≥0, δ≥0 and a positive measure Π(dx), supported on (0,∞), which satisfies the integrability condition ∫_{(0,∞)} (1∧x) Π(dx)<∞. The constant κ is called the killing rate, δ is called the linear drift coefficient and the measure Π(dx) is called the Lévy measure. The Lévy measure describes the distribution of jumps of the process X. The killing rate and the drift can be recovered from the Bernstein function as follows: κ=ϕ_X(0) and δ=lim_{z→+∞} ϕ_X(z)/z. See the excellent book <cit.> for more information on Bernstein functions.

In this paper we will only be working with a rather simple class of subordinators – the ones that have compound Poisson jumps. The Lévy measure of such a process has finite mass λ=Π([0,∞))<∞ and the process itself can be constructed as follows.
Take a sequence of independent and identically distributed random variables ξ_i, having distribution ℙ(ξ_i∈dx)=λ^{-1}Π(dx), and take an independent Poisson process N={N_t}_{t≥0} with intensity λ (that is, 𝔼[N_t]=λt). Then the pathwise definition of the subordinator X is
X_t=δt + ∑_{i=1}^{N_t} ξ_i, t≥0.

Now, given a subordinator X, we fix c>0 and define a process Z_t=t/c-X_t. For x≥0 we introduce
Y_x:=inf{t>0: Z_t>x},
where we set Y_x=+∞ on the event max{Z_t: t≥0}≤x. The relationship between the processes X, Y and Z can be seen in Figure <ref>. The process Z_t=t/c-X_t is an example of a spectrally negative Lévy process, and the random variables Y_x are called the first-passage times, see Chapter 3 in <cit.>. It is known that the process Y={Y_x}_{x≥0} is a (possibly killed) subordinator, see <cit.>[Corollary 3.14], thus there exists a Bernstein function ϕ_Y(z) such that for w>0 and x>0 we have 𝔼[e^{-wY_x}]=e^{-xϕ_Y(w)}. The Bernstein function ϕ_Y(w) is known to satisfy the functional equation
z/c - ϕ_X(z)=w, w>0 ⟺ z=ϕ_Y(w),
see <cit.>[Theorem 3.12]. Moreover, the distributions of {Y_x}_{x≥0} and {X_t}_{t≥0} are related through Kendall's identity (see <cit.>, <cit.>, <cit.> or <cit.>[Exercise 6.10])
∫_y^∞ ℙ(Y_x ≤ t) dx/x = ∫_0^t ℙ(Z_s > y) ds/s, y>0, t>0.
The above identity will be the main ingredient in our proof of Theorem <ref>.

§.§ Probabilistic proof of the case when a(n)≥0, v>0 and w>0

We recall that the von Mangoldt function is defined via (<ref>) and we introduce a positive measure
Π(dx)=∑_{n≥2} (Λ(n)a(n)/(ln(n)n^σ)) δ_{ln(n)}(dx),
where δ_y(dx) denotes the Dirac measure concentrated at the point y. In the above formula σ is chosen so that the Dirichlet series (<ref>) converges absolutely for ℜ(s)≥σ. For ℜ(s)≥σ we have
L(s) = ∏_p (1-a(p)p^{-s})^{-1} = exp(-∑_p ln(1-a(p)p^{-s})) = exp(∑_p ∑_{k≥1} a(p)^k/(kp^{ks})) = exp(∑_{n≥2} Λ(n)a(n)/(ln(n)n^s)),
which implies that Π is a finite measure of total mass Π((0,∞))=ln(L(σ)).

Let us now consider a Bernstein function corresponding to the measure Π, that is
ϕ_X(z)=∫_{(0,∞)} (1-e^{-zx}) Π(dx).
From formulas (<ref>) and (<ref>) we see that ϕ_X(z)=-ln(L(σ+z))+ln(L(σ)). Let X be a compound Poisson subordinator associated to the Bernstein function ϕ_X. Comparing (<ref>) and (<ref>) we see that the process X has zero killing rate and zero linear drift, so it is a pure jump compound Poisson process. Due to (<ref>) and (<ref>) this process satisfies
𝔼[e^{-zX_t}]=e^{-tϕ_X(z)}=(L(σ+z)/L(σ))^t.
We would like to point out that the above observations are not new: in the case L(s)=ζ(s) it was observed by Khintchine <cit.> back in 1938 that L(σ+iz)/L(σ) is a characteristic function of an infinitely divisible distribution, and, more recently, the connections between more general L-functions and infinite divisibility were studied in <cit.>.

Using binomial series we can easily find the measure ℙ(X_t∈dx). For ℜ(s)≥σ and t>0 we calculate
L(s)^t=∏_p (1-a(p)p^{-s})^{-t} = ∏_p ∑_{j≥0} \binom{j+t-1}{j} a(p)^j/p^{js} = ∑_{n≥1} d_t(n)a(n)/n^s.
The above formula combined with (<ref>) shows that for every t>0 the random variable X_t is supported on the set {ln(n)}_{n∈ℕ} and
ℙ(X_t=ln(n))=L(σ)^{-t} d_t(n) × a(n)/n^σ.

Now we fix c>0, denote Z_t=t/c-X_t and we define the subordinator Y as in (<ref>). The goal now is to use Kendall's identity to find the distribution of Y_x. The calculation that follows will be very similar to the one performed in the proof of Proposition 3 in <cit.>. First of all, we claim that for every x>0 the random variable Y_x has support on the set {cx+cln(n)}_{n∈ℕ}. This can be seen as follows.
If the spectrally negative process Z_t=t/c-X_t has no jumps before it hits the level x, then Y_x=cx; if Z has one jump of size ln(n_1) before it hits the level x, then Y_x=cx+cln(n_1); if Z has two jumps of sizes ln(n_1) and ln(n_2) before it hits the level x, then Y_x=cx+cln(n_1n_2), etc. Thus in order to describe the distribution of Y_x it is enough to compute
p(n,x):=ℙ(Y_x=cx+cln(n)), n∈ℕ, x>0.

The left-hand side of Kendall's identity (<ref>) can be written in the following way:
∫_y^∞ ℙ(Y_x≤t) dx/x = ∫_y^∞ ∑_{n≥1} 𝕀_{cx+cln(n)≤t} p(n,x) dx/x = ∑_{1≤n≤exp(t/c-y)} ∫_y^{t/c-ln(n)} p(n,x) dx/x.
And the right-hand side of (<ref>) is transformed into
∫_0^t ℙ(X_s<s/c-y) ds/s = ∫_0^t ∑_{1≤n<exp(s/c-y)} ℙ(X_s=ln(n)) ds/s = ∑_{1≤n<exp(t/c-y)} ∫_{cy+cln(n)}^t ℙ(X_s=ln(n)) ds/s = ∑_{1≤n<exp(t/c-y)} ∫_y^{t/c-ln(n)} ℙ(X_{cu+cln(n)}=ln(n)) du/(u+ln(n)),
where in the last step we have changed the variable of integration, s=cu+cln(n).

Using Kendall's identity (<ref>) and comparing the expressions in the right-hand sides of (<ref>) and (<ref>) we conclude that
p(n,x)=ℙ(X_{cx+cln(n)}=ln(n)) × x/(x+ln(n)).
Applying (<ref>) to the above formula we see that for n∈ℕ
ℙ(Y_x=cx+cln(n)) = p(n,x) = L(σ)^{-cx-cln(n)} d_{cx+cln(n)}(n) × a(n)/n^σ × x/(x+ln(n)) = L(σ)^{-cx} d_{cx+cln(n)}(n) × a(n)/n^{σ+cln(L(σ))} × x/(x+ln(n)).
Next, combining the above result with (<ref>) we conclude that for x>0 and w>0
𝔼[e^{-wY_x}] = ∑_{n≥1} e^{-w(cx+cln(n))} ℙ(Y_x=cx+cln(n)) = ∑_{n≥1} e^{-cwx} n^{-cw} L(σ)^{-cx} d_{cx+cln(n)}(n) × a(n)/n^{σ+cln(L(σ))} × x/(x+ln(n)) = e^{-xϕ_Y(w)}.
We use our definition d̃_t(n)=t^{-1}d_t(n), rearrange the terms in the above identity and rewrite it in the form
∑_{n≥2} d̃_{cx+cln(n)}(n) × a(n)/n^{σ+c(w+ln(L(σ)))} = (1/(cx))(e^{x(cw-ϕ_Y(w)+cln(L(σ)))}-1).
We emphasize that formula (<ref>) is valid for all x>0 and w>0 and that z=ϕ_Y(w) is the solution to the equation
z/c+ln(L(σ+z))-ln(L(σ))=w,
see (<ref>) and (<ref>).

Let us introduce a new function
f(s,c)=c^{-1} × (s-ϕ_Y((s-σ)/c-ln(L(σ)))-σ), s>σ+cln(L(σ)).
Using (<ref>) we check that f satisfies the equation L(s-cf(s,c))=exp(f(s,c)), and formula (<ref>) can be rewritten as
∑_{n≥2} d̃_{cx+cln(n)}(n) × a(n)/n^s = (1/(cx))(e^{cxf(s,c)}-1), s>σ+cln(L(σ)).
This ends the proof of (<ref>) and (<ref>). Formula (<ref>) is derived by taking the limit in (<ref>) as x→0^+. For s>σ+cln(L(σ)), the lower bound 0<f(s,c) follows from (<ref>), and the upper bound f(s,c)<γ can be easily established from (<ref>) and the fact that
ϕ_Y(w)=cw+cϕ_X(ϕ_Y(w))>cw,
which is a consequence of (<ref>). Thus 0<f(s,c)<γ for s>σ+cln(L(σ)), and since |f(s,c)|≤f(ℜ(s),c), the result holds for complex s in the half-plane ℜ(s)>σ+cln(L(σ)) as well. Thus we have proved all statements of Theorem <ref> in the case v=cx>0, w=c>0, and a(n)≥0.

§.§ Proving the general result via analytic continuation

So far we have proved that Theorem <ref> holds for v>0, w>0 and a(n)≥0. Our first goal is to extend this result to complex values of v and w. The key observation is that d̃_t(n) is a polynomial in t whose roots are non-positive integers. Writing this polynomial as a product of linear factors and applying the inequality |q+t|≤q+|t| (with q>0 and t∈ℂ) to each linear factor, we deduce the upper bound
|d̃_t(n)|≤d̃_{|t|}(n), n≥2, t∈ℂ.
Therefore, if the series (<ref>) converges for some w=ρ>0 and s=σ+γρ, it will converge uniformly for (s,w)∈D_{σ+γρ,ρ}. From this fact we see that the function f(s,w) is an analytic function of two variables (s,w)∈D_{σ+γρ,ρ}. Moreover, the inequality (<ref>) implies that |f(s,w)|≤f(ℜ(s),|w|)<γ for (s,w)∈D_{σ+γρ,ρ}.
Since L(s) is analytic in ℜ(s)>σ and the function f(s,w) satisfies the functional equation (<ref>) for w∈(0,ρ), we conclude by analytic continuation that the same equation must hold for all (s,w)∈D_{σ+γρ,ρ}. Thus we have extended Theorem <ref> to allow for complex values of w. To prove (<ref>) in the general case when v is complex, we use the same approach and an analytic continuation in v.

Our goal now is to remove the remaining restriction – the condition that a(n)≥0. Let us denote the i-th prime number by p_i (so that p_1=2, p_2=3, p_3=5, etc.). Consider 𝐮=(u_1,u_2,…,u_k)∈ℂ^k and a Dirichlet L-function
L(s;𝐮)=∏_{i=1}^k (1-u_i p_i^{-s})^{-1}.
Denote by a(n;𝐮) the corresponding completely multiplicative function, that is, a(n;𝐮)=u_1^{l_1}…u_k^{l_k} if n=p_1^{l_1}…p_k^{l_k} and a(n;𝐮)=0 otherwise. Let us now fix 𝐮∈ℂ^k and denote
B(𝐮):={𝐱∈ℂ^k : |x_i|≤|u_i|} and C(𝐮):={𝐱∈ℝ^k : 0≤x_i≤|u_i|}.
For 𝐱∈C(𝐮) the completely multiplicative function a(n;𝐱) is non-negative, thus Theorem <ref> holds for L(s;𝐱). Consider the function L(s;|𝐮|), where |𝐮|:=(|u_1|,…,|u_k|). There exists σ such that the Dirichlet series for this function converges absolutely for ℜ(s)≥σ. Note that for any 𝐱∈C(𝐮) the Dirichlet series for L(s;𝐱) also converges absolutely in ℜ(s)≥σ, since |a(n;𝐱)|≤a(n;|𝐮|). Let us denote
γ=γ(|𝐮|)=ln(∑_{n≥1} a(n;|𝐮|)/n^σ).
Applying Theorem <ref> to each Dirichlet L-function L(s;𝐱) we conclude that for any ρ>0 and any 𝐱∈C(𝐮) the following results hold:

(i) The series
f(s,w;𝐱):=∑_{n≥2} d̃_{wln(n)}(n) × a(n;𝐱)/n^s
converges absolutely and uniformly for all (s,w)∈D_{σ+γρ,ρ} and satisfies |f(s,w;𝐱)|<γ in this region;

(ii) The function f(s,w;𝐱) solves the functional equation
L(s-wf(s,w;𝐱);𝐱)=exp(f(s,w;𝐱)), (s,w)∈D_{σ+γρ,ρ};

(iii) For any v∈ℂ∖{0} and (s,w)∈D_{σ+γρ,ρ} the following identity is true:
1+v∑_{n≥2} d̃_{v+wln(n)}(n) × a(n;𝐱)/n^s = exp(vf(s,w;𝐱)).

Let us now fix values of v∈ℂ and (s,w)∈D_{σ+γρ,ρ}. Note that the function 𝐱↦L(s;𝐱) is analytic in the interior of the set B(𝐮) and continuous in B(𝐮). Next, formula (<ref>) shows that the a(n;𝐱) are monomials in the variables x_1, x_2, …, x_k, thus formula (<ref>), which defines the function f(s,w;𝐱), can be viewed as a Taylor series for the function 𝐱↦f(s,w;𝐱). Since |f(s,w;𝐱)|≤f(ℜ(s),|w|;|𝐱|)<γ, this Taylor series converges uniformly for all 𝐱∈B(𝐮), so that 𝐱↦f(s,w;𝐱) is an analytic function in the interior of the set B(𝐮), and it is continuous in B(𝐮). Using these results and analytic continuation, we can extend the functional equation (<ref>) to all 𝐱∈B(𝐮) (since we have already established that (<ref>) holds true for 𝐱∈C(𝐮)). The same argument applies to the identity (<ref>). The right-hand side is an analytic function of 𝐱 in the interior of the set B(𝐮), and it is continuous in B(𝐮). The left-hand side is a power series in 𝐱, convergent uniformly in B(𝐮). By analytic continuation, since the identity (<ref>) holds for 𝐱∈C(𝐮), it must hold everywhere in B(𝐮). Thus we have shown that formulas (<ref>) and (<ref>) hold for all v∈ℂ, (s,w)∈D_{σ+γρ,ρ} and 𝐱∈B(𝐮). In particular, they must hold for 𝐱=𝐮. This ends the proof of Theorem <ref> for L-functions of the form (<ref>).

Finally, consider a general Dirichlet L-function L(s) defined via (<ref>). Assume that the Dirichlet series for L(s) converges absolutely for ℜ(s)≥σ.
Let us define
L_k(s)=∏_{i=1}^k (1-a(p_i)p_i^{-s})^{-1}, k≥1, and L̃(s)=∏_{i=1}^∞ (1-|a(p_i)|p_i^{-s})^{-1}.
It is clear that the Dirichlet series for all L-functions L_k(s) and L̃(s) converge absolutely when ℜ(s)≥σ. Let us denote by a_k(n) the completely multiplicative function corresponding to the L-function L_k(s), and let γ be defined via (<ref>). We have proved already that Theorem <ref> holds true for L_k(s) and L̃(s). Thus the following results hold true. For any ρ>0:

(i) The series f̃(s,w):=∑_{n≥2} d̃_{wln(n)}(n) × |a(n)|/n^s converges absolutely and uniformly for all (s,w)∈D_{σ+γρ,ρ} and satisfies |f̃(s,w)|<γ in this region;

(ii) For each k≥1, the series f_k(s,w):=∑_{n≥2} d̃_{wln(n)}(n) × a_k(n)/n^s converges absolutely and uniformly for all (s,w)∈D_{σ+γρ,ρ} and satisfies |f_k(s,w)|<γ in this region;

(iii) For each k≥1, the functions f_k(s,w) solve the functional equation L_k(s-wf_k(s,w))=exp(f_k(s,w)), (s,w)∈D_{σ+γρ,ρ};

(iv) For any v∈ℂ∖{0} and (s,w)∈D_{σ+γρ,ρ} the following identity is true: 1+v∑_{n≥2} d̃_{v+wln(n)}(n) × |a(n)|/n^s = exp(vf̃(s,w));

(v) For any k≥1, v∈ℂ∖{0} and (s,w)∈D_{σ+γρ,ρ} the following identity is true: 1+v∑_{n≥2} d̃_{v+wln(n)}(n) × a_k(n)/n^s = exp(vf_k(s,w)).

Note that the function f(s,w) in (<ref>) is well-defined, since the series converges absolutely for all (s,w)∈D_{σ+γρ,ρ}, due to the absolute convergence of (<ref>). It is clear from our definition that for each n∈ℕ we have a_k(n)→a(n) as k→+∞ and that |a_k(n)|≤|a(n)| for all k, n∈ℕ. Thus, we can use the Dominated Convergence Theorem, the fact that the series in (<ref>) converges absolutely and, by taking the limit as k→+∞ in (<ref>), we conclude that for all (s,w)∈D_{σ+γρ,ρ} it is true that f_k(s,w)→f(s,w) as k→+∞. Since the functions L_k(s) converge to L(s) uniformly in the half-plane ℜ(s)≥σ, we can take the limit as k→+∞ in (<ref>) and conclude that the functional equation (<ref>) holds for all (s,w)∈D_{σ+γρ,ρ}. Finally, formula (<ref>) can be established by taking the limit as k→+∞ in (<ref>) and applying the Dominated Convergence Theorem (with the help of the absolute convergence in (<ref>)). This ends the proof of Theorem <ref>.

§ PROOF OF COROLLARY <REF>

Assume that (s,w)∈D_{σ+γρ,ρ} for some ρ>0, which is equivalent to saying that ℜ(s)≥σ+γ|w|. Denote α:=s-wf(s,w). Since |f(s,w)|<γ, ℜ(s)≥σ+γρ and |w|≤ρ, we have ℜ(α)>σ. Identity (<ref>) tells us that L(α)=exp(f(s,w)), so that f(s,w)=ln(L(α))=:β. Moreover, from the equation α=s-wf(s,w) we express s=α+wf(s,w)=α+βw, and then the equation f(s,w)=β gives us
f(α+βw,w)=β.
We emphasize that (<ref>) holds for all (α,w)∈ℂ^2 such that α=s-wf(s,w) for some (s,w)∈ℂ^2 satisfying ℜ(s)≥σ+γ|w|. By analytic continuation we can extend (<ref>) to all (α,w) such that
ℜ(α)≥σ and ℜ(α+βw)≥σ+γ|w|,
where β=ln(L(α)). The last step is to prove that the condition ℜ(α)≥σ+2γ|w| implies (<ref>). This follows from the following sequence of inequalities:
ℜ(α+βw) ≥ ℜ(α)-|βw| ≥ σ+2γ|w|-|β||w| ≥ σ+γ|w|,
where in the last step we have used the fact that |β|=|ln(L(α))|≤γ, which follows from (<ref>) and (<ref>).

§ ACKNOWLEDGEMENTS

The author would like to thank Aleksandar Ivić for comments and for pointing out relevant literature. The research is supported by the Natural Sciences and Engineering Research Council of Canada.

J. Bertoin. Subordinators: Examples and Applications. Springer, 1999.
K. Borovkov and Z. Burq. Kendall's identity for the first crossing time revisited. Electron. Commun. Probab., 6:91–94, 2001.
J. Burridge, A. Kuznetsov, M. Kwaśnicki, and A.
Kyprianou. New families of subordinators with explicit transition probability semigroup. Stochastic Processes and their Applications, 124(10):3480–3495, 2014.
G. Dong Lin and C.-Y. Hu. The Riemann zeta distribution. Bernoulli, 7(5):817–828, 2001.
H. W. Gould and T. Shonhiwa. A catalog of interesting Dirichlet series. Missouri J. Math. Sci., 20(1):2–18, 2008.
G. H. Hardy and E. M. Wright. An introduction to the theory of numbers. Oxford University Press, 4th edition, 1960.
A. Ivić. The Riemann zeta-function. John Wiley & Sons, 1985.
D. G. Kendall. Some problems in the theory of dams. Journal of the Royal Statistical Society, Series B (Methodological), 19(2):207–233, 1957.
A. Y. Khintchine. Limit theorems for sums of independent random variables (in Russian). Moscow and Leningrad: GONTI, 1938.
A. E. Kyprianou. Fluctuations of Lévy Processes with Applications: Introductory Lectures. Second Edition. Springer, 2014.
R. L. Schilling, R. Song, and Z. Vondracek. Bernstein Functions, Theory and Applications. De Gruyter, 2012.
A. Selberg. Note on a paper by L. G. Sathe. J. Indian Math. Soc., 18:83–87, 1954.
"authors": [
"Alexey Kuznetsov"
],
"categories": [
"math.NT",
"Primary 11M41, Secondary 60G51"
],
"primary_category": "math.NT",
"published": "20170326153715",
"title": "On Dirichlet series and functional equations"
} |
Cluster validation by measurement of clustering characteristics relevant to the user

Christian Hennig

December 30, 2023
=========================================================================

There are many cluster analysis methods that can produce quite different clusterings on the same dataset. Cluster validation is about the evaluation of the quality of a clustering; “relative cluster validation” is about using such criteria to compare clusterings. This can be used to select one of a set of clusterings from different methods, or from the same method run with different parameters such as different numbers of clusters. There are many cluster validation indexes in the literature. Most of them attempt to measure the overall quality of a clustering by a single number, but this can be inappropriate. There are various different characteristics of a clustering that can be relevant in practice, depending on the aim of clustering, such as low within-cluster distances and high between-cluster separation. In this paper, a number of validation criteria will be introduced that refer to different desirable characteristics of a clustering, and that characterise a clustering in a multidimensional way. In specific applications the user may be interested in some of these criteria rather than others. A focus of the paper is on methodology to standardise the different characteristics so that users can aggregate them in a suitable way, specifying weights for the various criteria that are relevant in the clustering application at hand.

Keywords: Number of clusters, separation, homogeneity, density mode, random clustering

§ INTRODUCTION

The aim of the present paper is to present a range of cluster validation indexes that provide a multivariate assessment covering different complementary aspects of cluster validity. Here I focus on “internal” validation criteria that measure the quality of a clustering without reference to external information such as a known “true” clustering. Furthermore I am mostly interested in comparing different clusterings on the same data, which is often referred to as “relative” cluster validation. This can be used to select one of a set of clusterings from different methods, or from the same method run with different parameters such as different numbers of clusters. In the literature (for an overview see Halkidi et al.<cit.>) many cluster validation indexes are proposed. Usually these are advertised as measures of global cluster validation in a univariate way, often under the implicit or explicit assumption that for any given dataset there is only a single best clustering. Mostly these indexes are based on contrasting a measure of within-cluster homogeneity and a measure of between-clusters heterogeneity, such as the famous index proposed by Calinski and Harabasz<cit.>, which is a standardised ratio of the traces of the pooled within-cluster covariance matrix and the covariance matrix of between-cluster means. In Hennig<cit.> (see also Hennig<cit.>) I have argued that depending on the subject-matter background and the clustering aim, different clusterings can be optimal on the same dataset. For example, clustering can be used for data compression and information reduction, in which case it is important that all data are optimally represented by the cluster centroids; or clustering can be used for recognition of meaningful patterns, which are often characterised by clear separating gaps between them.
In the former situation, large within-cluster distances are not desirable, whereas in the latter situation large within-cluster distances may not be problematic as long as data objects occur with high density and without gap between the objects between which the distance is large. See Figure <ref> for two different clusterings on an artificial dataset with 3 clusters that may be preferable for these two different clustering aims.

Given a multivariate characterisation of the validity of a clustering, for a given application a user can select weights for the different characteristics depending on the clustering aim and the relevance of the different criteria. A weighted average can then be used to choose a clustering that is suitable for the specific application. This requires that the criteria measuring different aspects of cluster validity are normalised in such a way that their values are comparable when doing the aggregation. Although it is easy in most cases to define criteria in such a way that their value range is [0,1], this is not necessarily enough to make their values comparable, because within this range the criteria may have very different variation. The idea here is that the expected variation of the criteria can be explored using resampled random clusterings (“stupid K-centroids”, “stupid nearest neighbour clustering”) on the same dataset, and this can be used for normalisation and comparison.

The approach presented here can also be used for benchmarking cluster analysis methods. In particular, it not only allows one to show that methods are better or worse on certain datasets, it also allows one to characterise the specific strengths and weaknesses of clustering algorithms in terms of the properties of the found clusters.

Section <ref> introduces the general setup and defines notation. In Section <ref>, all the indexes measuring different relevant aspects of a clustering are presented. Section <ref> defines an aggregated index that can be adapted to practical needs. The indexes cannot be suitably aggregated in their raw form, and Section <ref> introduces a calibration scheme using randomly generated clusterings. Section <ref> applies the methodology to two datasets, one illustrative artificial one and a real dataset regarding species delimitation. Section <ref> concludes the paper.

§ GENERAL NOTATION

Generally, cluster analysis is about finding groups in a set of objects D={x_1,…,x_n}. There is much literature in which the objects x_1,…,x_n are assumed to be from Euclidean space ℝ^p, but in principle they could be from any space X. A clustering is a set C={C_1,…,C_K} with C_j⊆D, j=1,…,K. The number of clusters K may be fixed in advance or not. For j=1,…,K, let n_j=|C_j| be the number of objects in C_j. Obviously not every such C qualifies as a “good” or “useful” clustering, but what is demanded of C differs between the different approaches to cluster analysis. Here C is required to be a partition, i.e., j≠k ⇒ C_j∩C_k=∅ and ⋃_{j=1}^K C_j=D. For partitions, let γ: {1,…,n}↦{1,…,K} be the assignment function, i.e., γ(i)=j if x_i∈C_j. Some of the indexes introduced below could also be applied to clusterings that are not partitions (particularly, objects that are not a member of any cluster could just be ignored), but this is not treated here to keep things simple. Clusters are here also assumed to be crisp rather than fuzzy, i.e., an object is either a full member of a cluster or not a member of this cluster at all.
In case of probabilistic clusterings, which give as output probabilities p_ij for object i to be a member of cluster j, it is assumed that objects are assigned to the cluster j maximising p_ij; in case of hierarchical clusterings it is assumed that the hierarchy is cut at a certain number of clusters K to obtain a partition.

Most of the methods introduced here are based on dissimilarity data. A dissimilarity is a function d: X^2↦ℝ^+_0 so that d(x,y)=d(y,x)≥0 and d(x,x)=0 for x,y∈X. Many dissimilarities are distances, i.e., they also fulfil the triangle inequality, but this is not necessarily required here. Dissimilarities are extremely flexible; they can be defined for all kinds of data, such as functions, time series, categorical data, image data, text data etc. If data are Euclidean, obviously the Euclidean distance can be used. See Hennig<cit.> for a more general overview of dissimilarity measures used in cluster analysis.

§ ASPECTS OF CLUSTER VALIDITY

In this Section I introduce measurements for various aspects of cluster validity.

§.§ Small within-cluster dissimilarities

A major aim in most cluster analysis applications is to find homogeneous clusters. This often means that all the objects in a cluster should be very similar to each other, although it can in principle also have different meanings, e.g., that a homogeneous probability model (such as the Gaussian distribution, potentially with large variance) can account for all observations in a cluster. The most straightforward way to formalise that all objects within a cluster should be similar to each other is the average within-cluster distance, although this needs to be weighted for cluster sizes so that every observation has the same contribution to it:
I_withindis( C) = (1/n) ∑_{j=1}^K (2/(n_j-1)) ∑_{x≠y∈C_j} d(x,y).
Smaller values are better. Knowing the data but not the clustering, the minimum possible value of I_withindis is zero and the maximum is d_max=max_{x,y∈D} d(x,y), so
I_withindis^*( C)=1-I_withindis( C)/d_max ∈ [0,1]
is a normalised version. When different criteria are aggregated (see Section <ref>), it is useful to define them in such a way that they point in the same direction; I will define all normalised indexes so that larger values are better. For this reason I_withindis( C)/d_max is subtracted from 1.

There are alternative ways of measuring whether within-cluster dissimilarities are overall small. All of these operationalise cluster homogeneity in slightly different ways. The objective function of K-means clustering can be written down as a constant times the average of all squared within-cluster Euclidean distances (or more general dissimilarities), which is an alternative measure, giving more emphasis to the biggest within-cluster dissimilarities. Most radically, one could use the maximum within-cluster dissimilarity. On the other hand, one could use quantiles or trimmed means in order to make the index less sensitive to large within-cluster dissimilarities, although I believe that in most applications in which within-cluster similarity is important, these should be avoided and the index should therefore be sensitive against them.

§.§ Between-cluster separation

Apart from within-cluster homogeneity, the separation between clusters is most often taken into account in the literature on cluster validation (most univariate indexes balance separation against homogeneity in various ways).
Separation as it is usually understood cannot be measured by averaging all between-cluster dissimilarities, because it refers to what goes on “between” the clusters, i.e., the smallest between-cluster dissimilarities, whereas the dissimilarities between pairs of farthest objects from different clusters should not contribute to this. The most naive way to measure separation is to use the minimum between-cluster dissimilarity. This has the disadvantage that with more than two clusters it only looks at the two closest clusters, and also in many applications there may be an inclination to tolerate the odd very small distance between clusters if by and large the closest points of the clusters are well separated. I propose here an index that takes into account a portion p, say p=0.1, of objects in each cluster that are closest to another cluster.

For every object x_i∈C_j, i=1,…,n, j∈{1,…,K}, let d_{j:i}=min_{y∉C_j} d(x_i,y). Let d_{j:(1)}≤…≤d_{j:(n_j)} be the values of d_{j:i} for x_i∈C_j ordered from the smallest to the largest, and let ⌊pn_j⌋ be the largest integer ≤pn_j. Then the p-separation index is defined as
I_{p-sep}( C) = (1/∑_{j=1}^K ⌊pn_j⌋) ∑_{j=1}^K ∑_{i=1}^{⌊pn_j⌋} d_{j:(i)}.
Obviously, I_{p-sep}( C)∈[0,d_max] and large values are good, therefore I_{p-sep}^*( C)=I_{p-sep}( C)/d_max ∈ [0,1] is a suitable normalisation.

§.§ Representation of objects by centroids

In some applications clusters are used for information reduction, and one way of doing this is to use the cluster centroids for further analysis rather than the full dataset. It is then relevant to measure how well the observations in a cluster are represented by the cluster centroid. The most straightforward method to measure this is to average the dissimilarities of all objects to the centroid of the cluster they're assigned to. Let c_1,…,c_K be the centroids of clusters C_1,…,C_K. Then,
I_centroid( C) = (1/n) ∑_{i=1}^n d(x_i, c_{γ(i)}).
Some clustering methods such as K-means and Partitioning Around Medoids (PAM, Kaufman and Rousseeuw<cit.>) are centroid-based, i.e., they compute the cluster centroids along with the clusters. Centroids can also be defined for the output of non-centroid-based methods, most easily as
c_j = argmin_{x∈C_j} ∑_{γ(i)=j} d(x_i,x),
which corresponds to the definition of PAM. Again, there are possible variations. K-means uses squared Euclidean distances, and in case of Euclidean data the cluster centroids do not necessarily have to be members of D; they could also be mean vectors of the observations in the clusters. Again, by definition, I_centroid( C)∈[0,d_max]. Small values are better, and therefore I_centroid^*( C)=1-I_centroid( C)/d_max ∈ [0,1].

§.§ Representation of dissimilarity structure by clustering

Another way in which the clustering can be used for information reduction is that the clustering can be seen as a more simple summary or representation of the dissimilarity structure. This can be measured by correlating the vector of pairwise dissimilarities d=vec([d(x_i,x_j)]_{i<j}) with the vector of a “clustering induced dissimilarity” c=vec([c_ij]_{i<j}), where c_ij=1(γ(i)≠γ(j)), and 1(∙) denotes the indicator function. With r denoting the sample Pearson correlation,
I_PearsonΓ( C) = r( d, c).
This index goes back to Hubert and Schultz<cit.>; see also Halkidi et al.<cit.> for alternative versions. I_PearsonΓ∈[-1,1], and large values are good, so it can be normalised by I^*_PearsonΓ = (I_PearsonΓ+1)/2 ∈ [0,1].

§.§ Small within-cluster gaps

The idea that a cluster should be homogeneous can mean that there are no “gaps” within a cluster, and that the cluster is well connected.
A gap can be characterised as a split of a cluster into two subclusters so that the minimum dissimilarity between the two subclusters is large. The corresponding index measures the “length” (dissimilarity) of the widest within-cluster gap (an alternative would be to average the widest gaps over clusters):
I_widestgap( C) = max_{C∈ C; D,E: C=D∪E} min_{x∈D, y∈E} d(x,y).
I_widestgap∈[0,d_max] and low values are good, so it is normalised as I^*_widestgap = 1-I_widestgap/d_max ∈ [0,1]. A version of this taking into account density values is defined in Section <ref>. Widest gaps can be found computationally by constructing the within-cluster minimum spanning trees; the widest distance occurring there is the widest gap.

§.§ Density modes and valleys

A very popular idea of a cluster is that it corresponds to a density mode, and that the density within a cluster goes down from the cluster mode to the outer regions of the cluster. Correspondingly, there should be density valleys between different clusters. The definition of indexes that measure such a behaviour is based on a density function h that assigns a density value h(x) to every observation. For Euclidean data, standard density estimators such as kernel density estimators can be used. For general dissimilarities, I here propose a simple kernel density estimator. Let q_{d,p} be the p-quantile of the vector of dissimilarities d, e.g., for p=0.1, the 10% smallest dissimilarities are ≤q_{d,0.1}. Define the kernel and density as
κ(d) = (1-d/q_{d,p}) 1(d≤q_{d,p}), h(x) = ∑_{i=1}^n κ(d(x,x_i)).
These can be normalised to take a maximum of 1:
h^*(x) = h(x)/max_{y∈D} h(y).
Alternatively, h_{k-nn}(x)=1/d^k(x) with d^k(x) being the dissimilarity to the kth nearest neighbour would be another simple dissimilarity-based density estimator, although this has no trivial upper bound (h, even before normalising by its within-cluster maximum, is bounded by n). One could also standardise h by the within-cluster maxima if clusters with generally lower densities should have the same weight as high density clusters, but lower density values rely on fewer observations and are therefore less reliable.

Three different aspects of density-based clustering are measured by three different indexes:
* The density should decrease within a cluster from the density mode to the “outskirts” of the cluster (I_densdec).
* Cluster boundaries should run through density “valleys”, i.e., high density points should not be close to many points from other clusters (I_densbound).
* There should not be a big gap between high density regions within a cluster (I_highdgap; gaps as measured by I_widestgap may be fine in the low density outskirts of a cluster).

The idea for I_densdec is as follows. For every cluster, starting from the cluster mode, i.e., the observation with the highest density, construct a growing sequence of observations that eventually covers the whole cluster by always adding the closest observation that is not yet included. Optimally, in this process, the within-cluster density of newly included points should always decrease. Whenever the density actually goes up, a penalty of the squared difference of the densities is incurred. The index I_densdec aggregates these penalties. The following algorithm computes this, and it also constructs a set T that collects information about high dissimilarities between high density observations and is used for the definition of I_highdgap below:

Initialisation I_d1=0, T=∅. For j=1,…,K:
Step 1 S_j={x}, where x=argmax_{y∈C_j} h^*(y).
Step 2 Let R_j=C_j∖S_j.
If R_j=∅: j=j+1; if j≤K go to Step 1, if j=K+1 then go to Step 5. Otherwise:
Step 3 Find (x,y)=argmin_{(z_1,z_2): z_1∈R_j, z_2∈S_j} d(z_1,z_2). S_j=S_j∪{x}, T=T∪{max_{z∈R_j} h^*(z)d(x,y)}.
Step 4 If h^*(x)>h^*(y): I_d1=I_d1+(h^*(x)-h^*(y))^2. Go back to Step 2.
Step 5 I_densdec( C)=√(I_d1/n).

I_densdec collects the penalties from increases of the within-cluster densities during this process. The definition of I_densdec does not take into account whether the neighbouring observations that produce high density values h^*(x) for x are in the same cluster as x. But this is important, because it would otherwise be easy to achieve a good value of I_densdec by cutting through high density areas and distributing a single high density area to several clusters. A second index can be defined that penalises a high contribution of points from different clusters to the density values in a cluster (measured by h_o below), because this means that the cluster border cuts through a high density region. For x_i, i=1,…,n:
h_o(x_i) = ∑_{k=1}^n κ(d(x_i,x_k)) 1(γ(k)≠γ(i)).
Normalising:
h_o^*(x) = h_o(x)/max_{y∈D} h(y).
A penalty is incurred if for observations with a large density h^*(x) there is a large contribution h^*_o(x) to that density from other clusters:
I_densbound( C) = (1/n) ∑_{j=1}^K ∑_{x∈C_j} h^*(x)h^*_o(x).
Both I_densdec and I_densbound are by definition ≥0. Also, the maximum contribution of any observation to any of I_densdec and I_densbound is 1/n, because the normalised h^*-values are ≤1. These are penalties, so low values are good, and normalised versions are defined as
I_densdec^*( C)=1-I_densdec( C), I_densbound^*( C)=1-I_densbound( C).

An issue with I_densdec is that it is possible that there is a large gap between two observations with high density, which does not incur penalties if there are no low-density observations in between. This can be picked up by a version of I_widestgap based on the density-weighted gap information collected in T above. This is suggested instead of I_widestgap if a density-based cluster concept is of interest:
I_highdgap( C) = max T.
I_highdgap( C)∈[0,d_max] and low values are good, so it is normalised as I^*_highdgap( C)=1-I_highdgap( C)/d_max ∈ [0,1].

§.§ Uniform within-cluster density

Sometimes different clusters should not (only) be characterised by gaps between them; overlapping regions in data space may be seen as different clusters if they have different within-cluster density levels, which in some applications could point to different data generating mechanisms behind the different clusters, which the researcher would like to discover. Such a cluster concept would require that densities within clusters are more or less uniform. This can be characterised by the coefficient of variation CV of either the within-cluster density values or the dissimilarities to the kth nearest within-cluster neighbour d^k_w(x) (say k=4). The latter is preferred here because
The maximum value of the coefficient of variation based on n observations is √(n) (Katsnelson and Kotz<cit.>), so a normalised version isI_cvdens^*( C)=1-I_cvdens( C)/√(n).§.§ EntropyIn some clustering applications, particularly where clustering is done for “organisational” reasons such as information compression, it is usefulto have clusters that are roughly of the same size. This can be measured by the entropy:I_entropy(C)=-∑_j=1^K n_j/nlog(n_j/n).Large values are good. The entropy is maximised for fixed K bye_max(K)=-log(1/K), so it can be normalised by I_entropy^*( C)=I_entropy( C)/e_max(K). §.§ ParsimonyIn case that there is a preference for a lower number of clusters, one could simply define I_parsimony^*=1-K/K_max,(already normalised) with K_max the maximum number of clusters ofinterest. If in a givenapplication there is a known nonlinear loss connected to the number ofclusters, this can obviously be used instead, and the principle can be applied also to other free parameters of a clustering method, if desired. §.§ Similarity to homogeneous distributional shapesSometimes the meaning of “homogeneity” for a cluster is defined bya homogeneous probability model, e.g., Gaussian mixture model-based clustering models all clusters by Gaussian distributions with different parameters, requiring Euclidean data.Historically, due to the Central Limit Theorem and Quetelet's“elementary error hypothesis”, measurement errors were widely believed to be normally/Gaussian distributed (see Stigler<cit.>).Under such a hypothesis it makes sensein some situations to regard Gaussian distributed observations as homogeneous, and as pointing to the same underlying mechanism; this could also motivate to cluster observations together that look like being generated from the same(approximate) Gaussian distribution. Indexes that measure cluster-wiseGaussianity can be defined, see, e.g., Lago-Fernandez and Corbacho<cit.>.One possible principle is to compare aone-dimensional function of the observations within a cluster to its theoretical distribution under the data distribution of interest; e.g., Coretto andHennig<cit.> compare the Mahalanobis distances of observations to theircluster centre with their theoretical χ^2-distribution using the Kolmogorow-distance. This is also possible for other distributions of interest.§.§ StabilityClusterings are often interpreted as meaningful in the sense that they can be generalised as substantive patterns. This at least implicitly requires that they are stable. Stability in cluster analysis can be explored using resampling techniques such as bootstrap and splitting the dataset, and clustering fromdifferent resampled datasets can be compared. This requires to run theclustering method again on the resampled datasets and I will not treat this here in detail, but useful indexes have been defined using this principle, see, e.g., Tibshirani and Walther<cit.> and Fang and Wang<cit.>.§.§ Further AspectsHennig<cit.> lists further potentially desirable characteristics of aclustering, for which further indexes could be defined: * Areas in data space corresponding to clusters should have certain characteristics such as being linear or convex.* It should be possible to characterise clusters using a small number of variables.* Clusters should correspond well to an externally given partition or values of an external variable (this could for example imply that clusters of regions should be spatially connected).* Variables should be approximately independent within clusters. 
§ AGGREGATION OF INDEXES The required cluster concept and therefore the way the validation indexes can be used depends on the specific clustering application. The users need to specifywhat characteristics of the clustering are desired in the application. Thecorresponding indexes can then be aggregated to form a single criterion thatcan be used to compare different clustering methods, different numbers ofclusters and other possible parameter choices of the clustering.The most straightforward aggregation is to compute a weighted mean of sselected indexes I_1,…,I_s with weights w_1,…,w_s> 0expressing the relative importance of the different methods:A( C)=∑_k=1^s w_kI_k.Assuming that large values are desirable for all of I_1,…,I_s, the best clustering for the application in question can be found by maximising A.This can be done by comparing different clusterings from conventional clustering methods, but in principle it would also be an option to try to optimise A directly.The weights can only be chosen to directly reflect the relative importance ofthe various aspects of a clustering if the values (or, more precisely, their variations) of the indexes I_1,…,I_s are comparable, and give the indexes equal influence on A if all weights are equal. In Section<ref> I proposed tentative normalisations of all indexes, whichgive all indexes the same value range [0,1]. Unfortunately this is notgood enough to ensure comparability; on many datasets some of these indexes will cover almost the whole value range whereas other indexes may be largerthan 0.9 for all clusterings that any clustering method would come up with. Therefore, Section <ref> will introduce a new computational method to standardise the variation of the different criteria.Another issue is that some indexes by their very nature favour large numbers of clusters K (obviously large within-cluster dissimilarities can be moreeasily avoided for large K), whereas others favour small values of K (separation is more difficult to achieve with many small clusters). The method introduced in Section <ref> will allow to assess the extent to which the indexes deliver systematically larger or smaller values for larger K. Note that this can also be an issue for univariate “global”validation indexes from the literature, see Hennig and Lin<cit.>. If the indexes should be usedto find an optimal value of K, the indexes in Ashould be chosen in such a way thatindexes that systematically favour larger K and indexes that systematically favour smaller K are balanced. The user needs to take into account thatthe proposed indexes are not independent. For example, good representation of objects by centroids will normally becorrelated with having generally small within-cluster dissimilarities. Including both indexes will assign extra weight to the information that the two indexes have in common (which may sometimes but not always be desired). There are alternative ways to aggregate the information from the different indexes. For example, one could use some indexes as side conditions rather than involving them in the definition of A. For example, rather than giving entropy a weight for aggregation as part of A, one may specify a certainminimum entropy value below which clusterings are not accepted, but not use the entropy value to distinguish between clusterings that fulfil the minimum entropy requirement. Multiplicative aggregation is another option. 
§ RANDOM CLUSTERINGS FOR CALIBRATING INDEXES As explained above, the normalisation in Section <ref> does not provide a proper calibration of the validation indexes. Here is an idea for doing thisin a more appropriate way. The idea is that random clusterings are generated on D and index values are computed, in order to explore what range of index values can be expected on D, so that the clusterings of interest can be compared to these. So in this Section, as opposed to conventionalprobability modelling, the dataset is considered as fixedbut a distribution of index values is generated from various random partitions.Completely random clusterings (i.e., assigning every observation independentlyto a cluster) are not suitable for this, because it can be expected that indexes formalising desirable characteristics of a clustering will normally give muchworse values for them than for clusters that were generated by a clustering method. Therefore I propose two methods for random clusterings that are meantto generate clusterings that make some sense, at least by being connected indata space. The methods are called “stupid K-centroids” and “stupid nearestneighbours”; “stupid” because they are versions of popular clusteringmethods (centroid-based clustering like K-means or PAM, andSingle Linkage/Nearest Neighbour) that replace optimisation by random decisions and are meant to be computable very quickly. Centroid-based clustering normally produces somewhat compact clusters, whereas Single Linkage is notorious forprioritising cluster separation totally over within-cluster homogeneity, andtherefore one should expect these two approaches to explore in a certainsense opposite ways of clustering the data. §.§ Stupid K-centroids clusteringStupid K-centroids works as follows. For fixed number of cluster K drawa set of K cluster centroids Q={q_1,…,q_K} from Dso that every subset of size K has the same probability of being drawn.C_K-stupidcent(Q)={C_1,…,C_k} is defined by assigning every observation to the closest centroid: γ(i)=_j∈{1,…,K} d(x_i,q_j), i=1,…,n. §.§ Stupid nearest neighbours clusteringAgain, for fixed number of cluster K drawa set of K cluster initialisation pointsQ={q_1,…,q_K} from Dso that every subset of size K has the same probability of being drawn. C_K-stupidnn(Q)={C_1,…,C_k} is defined by successively adding the not yet assigned observation closest to any cluster to that cluster until all observations are clustered: Initialisation Let Q^*=Q. Let C^*(Q)=C^*(Q^*)={C_1^*,…,C_L^*}={{q_1},…,{q_K}}. Step 1 Let R^*= D∖ Q^*. If R^*≠∅,find (x,y)=_(z,q): z∈ R^*, q∈ Q^*d(z,q), otherwise stop. Step 2 Let Q^*=Q^*∪{x}. For the C^*∈ C^*(Q^*) with y∈ C^*, letC^*=C^*∪{x}, updating C^*(Q^*) accordingly. Go back to Step 1.At the end, C_K-stupidnn(Q)= C^*(Q^*). §.§ CalibrationThe random clusterings can be used in various ways to calibrate the indexes.For anyvalue K of interest, 2B clusterings C_K-collection=( C_K:1,…, C_K:2B)=(C_K-stupidcent(Q_1),…,C_K-stupidcent(Q_B), C_K-stupidnn(Q_1),…,C_K-stupidnn(Q_B))on D are generated, say B=100. As mentioned before, indexes may systematically change over K and thereforemay show a preference for either large or small K. In order to account forthis, it is possible to calibrate the indexes using stupid clusterings for the same K, i.e., for a clustering C with | C|=K. Consider an index I^* of interest (the normalised version is used here because this means that large values are good for all indexes). 
Then,I^cK( C)=I^*( C)-m^*( C_K-collection)/√(1/2B-1∑_j=1^2B(I^*( C_K:j)-m^*( C_K-collection))^2),where m^*( C_K-collection)=1/2B∑_j=1^2BI^*( C_K:j). A desired set of calibrated indexes can then be used for aggregation in(<ref>). An important alternative to (<ref>) is calibration by using randomclusterings for all values of K together. LetK={2,…,K_max} be the numbers of clusters of interest (most indexes will not work for K=1),C_collection={C_K:j: K∈ K, j=1,…,2B},m^*( C_collection)=1/2B(K_max-1)∑_K=2^K_max∑_j=1^2BI^*( C_K:j). With this,I^c( C)=I^*( C)-m^*( C_collection)/√(1/2B(K_max-1)-1∑_K=2^K_max∑_j=1^2B(I^*( C_K:j)-m^*( C_collection))^2).I^c does not correct for potential systematic tendencies of the indexes over K, but this is not a problem if the user is happy to use the uncalibrated indexes directly forcomparing different values of K; a potential bias toward large or small values of K in this case needs to be addressed by choosing the indexes to beaggregated in (<ref>) in a balanced way. This can be checked by computing the aggregated index A also for the random clusterings and check how these change over the different values of K. Another alternative is to calibrate indexes by using their rank value in the set of clusterings (random clusterings and clusterings to compare) rather than a mean/standard deviation-based standardisation. This is probably more robustbut comes with some loss of information.§ EXAMPLESWithindis needs recomputing here because of reweighting!§.§ Artificial datasetThe first example is the artificial dataset shown in Figure <ref>.Four clusterings are compared (actually many more clusterings with K between 2 and 5 were compared on these data, but the selected clusterings illustrate the most interesting issues).The clusterings were computed by K-means with K=2 and K=3, Single Linkage cut at K=3 and PAM with K=5. The K-means clustering with K=3 and theSingle Linkage clustering are shown in Figure <ref>. The K-means clustering with K=2 puts the uniformly distributed widespread point cloud on top together in a single cluster, and the two smaller populations are thesecond cluster. This is the most intuitive clustering for these data for K=2and also delivered by most other clustering methods. PAM does not separatethe two smaller (actually Gaussian) populations for K=2, but it does so forK=5, along with splitting the uniform point cloud into three parts. Table <ref> shows the normalised index values for these clusterings. Particularly comparing 3-means and Single Linkage, the different virtues of these clusterings are clear to see. 3-means is particularlybetter for the homogeneity-driven I_withindis^* and I_centroid^*, whereasSingle Linkage winsregarding the separation-oriented I_0.1-sep^* and I_widestgap^*, with3-means ignoring the gap between the two Gaussian populations.I_PearsonΓ^* tends toward 3-means, too, which was perhaps less obvious, because it does not like too bigdistances within clusters. It is also preferred by I_entropy^* because of joining two subpopulations that are rather small.The values for theindexes, I_densdec^*, I_densbound^*, I_highdgap^*,and I_cvdens^* illustrate that that the naive normalisation is not quite suitable for makingthe value ranges of the indexes comparable. For thedensity-based indexes, many involved terms are far away from the maximum used for normalisation, so the index values can be close to 0 (close to 1 after normalisation). 
This is amended by calibration.Considering the clusterings with K=2 and K=5, it can be seen that with K=5 it is easier to achieve within-cluster homogeneity (I_withindis^*,I_centroid^*), whereas with K=2 it is easier to achieve separation(I_0.1-sep^*). Table <ref> shows the index values I^cK calibrated againstrandom clustering with the same K. This is meant to account for the fact that some indexes differ systematically over different values of K. Indeed, using this calibration, PAM with K=5 is no longer best for I_centroid^cK and I_withindis^cK, and 2-means is no longer best for I_0.1-sep^cK. It can now be seen that3-means is better than Single Linkage for I_densdec^cK. This is becausedensity values show much more variation in the widely spread uniformsubpopulation than in the two small Gaussian ones, so splitting up the uniform subpopulation is better for creating densities decreasing from the modes,despite the gap between the two Gaussian subpopulations. On the other hand,3-means has to cut through the uniform population, which gives Single Linkage, which only cuts through clear gaps, an advantage regarding I_densbound^cK, and particularly 3-means incurs a large distance between the two Gaussian high density subsets within one of its clusters, which makes Single Linkage much better regarding I_highdgap^cK.Ultimately, the user needs to decide here whether small within-cluster dissimilarities and short dissimilarities to centroids are more important than separation and the absence of within-cluster gaps. The K=5-solution does not look very attractive regarding most criteria (although calibration with the same K makes it look good regarding I_densbound^cK); the K=2-solution onlylooks good regarding two criteria that may not be seen as the most important ones here.Table <ref> shows the index values I^cK calibrated againstall random clusterings. Not much changes regarding the comparison of 3-means and Single Linkage, whereas a user who is interested in small within-cluster dissimilarities and centroid representation in absolute terms is now drawn toward PAM with K=5 or even much larger K, indicating that these indexes should not be used without some kind of counterbalance, either fromseparation-based criteria (I_0.1-sep^c and I_densbound^c)or taking into account parsimony. A high density gap within a cluster is most easily avoided with large K, too, whereas K=2 achieves the bestseparation, unsurprisingly. As this is an artificial dataset and there is no subject-matter information that could be used to prefer certain indexes, I do not present specificaggregation weights here. §.§ Tetragonula bees dataFranck et al.<cit.> published a data set giving geneticinformation about 236 Australasian tetragonula bees, in which it is of interest to determine the number of species. The data set is incorporated in the package “fpc” of the software system R () and is available on the IFCSCluster Benchmark Data Repository .Bowcock et al.<cit.> defined the “shared allele dissimilarity” formalising genetic dissimilarityappropriately for species delimitation, which is used for the present data set. It yields valuesin [0, 1]. See also Hausdorf and Hennig<cit.> and Hennig<cit.> for earlier analyses of this dataset including a discussion of the number of clusters problem. Franck et al.<cit.> provide 9“true” species for these data, although this manualclassification (using morphological information besides genetics)comes with its own problems and may not be 100% reliable. 
In order to select indexes and to find weights, some knowledge about species delimitation is required, which was provided by Bernhard Hausdorf, Museum ofZoology, University of Hamburg. The biologicalspecies concept requires that there isno (or almost no) genetic exchange between different species, so that separation is a key feature for clusters that are to be interpreted as species. For the same reason, large within-cluster gaps can hardly be tolerated (regardless of the density values associated to them); in such a case one would consider the subpopulations on two sides of a gap separate species,unless a case can be made that potentially existing connecting individuals could not be sampled. Gaps may also occur in regionally separated subspecies, but this cannot be detected from the data without regional information. On the other hand, species should be reasonably homogeneous; it would beagainst biological intuition to have strongly different genetic patterns within the same species. This points to the indexes I_withindis, I_0.1-sep, and I_widestgap. On the other hand, the shape of the within-cluster density is not a concern here, and neither are representation of clusters by centroids, entropy, and constant within-cluster variation. The indexI_PearsonΓis added to the set of relevant indexes, because one can interpret the species concept as a representation of genetic exchange as formalised by the shared allele dissimilarity, and I_PearsonΓ measures the quality of thisrepresentation. All these four indexes are used in (<ref>) with weight 1 (one could be interested in stability as well, which is not taken into account here). Again I present a subset of the clusterings that were actually comparedfor illustrating the use of the approach presented in this paper. Typically clusterings below K=9 were substantially different from the ones withK≥ 9; clusterings with K=10 and K=11 from the same method were oftenrather similar to each other, and I present clusterings from Average Linkage and PAM with K=5, 9, 10, and 12. Table <ref> shows thefour relevant index values I^cK calibrated against random clustering withthe same K along with the aggregated index A( C). Furthermore,the adjusted Rand index (ARI; Hubert and Arabie<cit.>) comparing the clusterings from the method with the “true” species is given (this takesvalues between -1 and 1 with 0 expected for random clusterings and 1 for perfect agreement). Note that despite K=9 being the number of “true”species, clusterings with K=10 and K=12 yield higher ARI-values than those with K=9, so these clusterings are preferable (it does not help much toestimate the number of species correctly if the species are badly composed). Some “true” species in the original dataset are widely regionally dispersedwith hardly any similarity between subspecies.The aggregated index A( C) is fairly well related to the ARI (over all 55 clusteringsthat were compared the correlation between A( C) and ARIis about 0.85). The two clusterings that are closest to the “true” one also have the highest values of A( C). The within-cluster gap criterion plays a key role here, preferring Average Linkage with 9-12 clusters clearly over the other clusterings. A( C) assigns its highest value to AL-12, whereas the ARI for AL-10 is very slightly higher. 
PAM delivers better clusterings regarding small within-cluster dissimilarities, but this advantage is dwarfed by the advantage of Average Linkage regarding separation and within-cluster gaps.

Table <ref> shows the corresponding results with calibration using all random clusterings. This does not result in a different ranking of the clusterings, so this dataset does not give a clear hint as to which of the two calibration methods is more suitable; in other words, the results do not depend on which one is chosen.

§ CONCLUSION

The multivariate array of cluster validation indexes presented here provides the user with a detailed characterisation of various relevant aspects of a clustering. The user can aggregate the indexes in a suitable way to find a useful clustering for the clustering aim at hand. The indexes can also be used to provide a more detailed comparison of different clustering methods in benchmark studies, and a better understanding of their characteristics. The methodology is currently partly implemented in the "fpc" package of the statistical software system R and will soon be fully implemented there.

Most indexes require K≥ 2, and the approach can therefore not directly be used for deciding whether the dataset is homogeneous as a whole (K=1). The individual indexes as well as the aggregated index could be used in a parametric bootstrap scheme, as proposed by Hennig and Lin <cit.>, to test the homogeneity null hypothesis against a clustering alternative.

Research is still required in order to compare the different calibration methods and some alternative versions of the indexes. A theoretical characterisation of the indexes is of interest, as well as a study exploring the strength of the information overlap between some of the indexes, looking at, e.g., correlations over various clusterings and datasets. Random clustering calibration may also be used together with traditional univariate validation indexes. Further methods for random clustering could be developed, and it could be explored which collection of random clusterings is most suitable for calibration (some work in this direction is currently being done by my PhD student Serhat Akhanli).

§ ACKNOWLEDGEMENT

This work was supported by EPSRC Grant EP/K033972/1.

BRTMKC94 A. M. Bowcock, A. Ruiz-Linares, J. Tomfohrde, E. Minch, J. R. Kidd, and L. L. Cavalli-Sforza. High resolution of human evolutionary trees with polymorphic microsatellites. Nature, 368, 455–457, 1994.
CH74 T. Calinski and J. Harabasz. A dendrite method for cluster analysis. Communications in Statistics, 3, 1–27, 1974.
CH16 P. Coretto and C. Hennig. Robust improper maximum likelihood: tuning, computation, and a comparison with other methods for robust Gaussian clustering. Journal of the American Statistical Association, 111, 1648–1659, 2016.
FW12 Y. Fang and J. Wang. Selection of the number of clusters via the bootstrap method. Computational Statistics and Data Analysis, 56, 468–477, 2012.
FCGRO04 P. Franck, E. Cameron, G. Good, J.-Y. Rasplus and B. P. Oldroyd. Nest architecture and genetic differentiation in a species complex of Australian stingless bees. Molecular Ecology, 13, 2317–2331, 2004.
HVH16 M. Halkidi, M. Vazirgiannis and C. Hennig. Method-Independent Indices for Cluster Validation and Estimating the Number of Clusters. In Handbook of Cluster Analysis, C. Hennig, M. Meila, F. Murtagh, R. Rocci (eds.), CRC/Chapman & Hall, Boca Raton, 595–618, 2016.
HH10 B. Hausdorf and C. Hennig. Species Delimitation Using Dominant and Codominant Multilocus Markers.
Systematic Biology, 59, 491–503, 2010.
Hen13 C. Hennig. How many bee species? A case study in determining the number of clusters. In Data Analysis, Machine Learning and Knowledge Discovery, M. Spiliopoulou, L. Schmidt-Thieme, R. Janning (eds.), Springer, Berlin, 41–49, 2013.
Hen15 C. Hennig. What are the true clusters? Pattern Recognition Letters, 64, 53–62, 2015.
Hen16 C. Hennig. Clustering Strategy and Method Selection. In Handbook of Cluster Analysis, C. Hennig, M. Meila, F. Murtagh, R. Rocci (eds.), CRC/Chapman & Hall, Boca Raton, 703–730, 2016.
HL15 C. Hennig and C.-J. Lin. Flexible parametric bootstrap for testing homogeneity against clustering and assessing the number of clusters. Statistics and Computing, 25, 821–833, 2015.
HA85 L. J. Hubert and P. Arabie. Comparing Partitions. Journal of Classification, 2, 193–218, 1985.
HS76 L. J. Hubert and J. Schultz. Quadratic assignment as a general data analysis strategy. British Journal of Mathematical and Statistical Psychology, 29, 190–241, 1976.
KR90 L. Kaufman and P. J. Rousseeuw. Finding Groups in Data, Wiley, New York, 1990.
KK57 J. Katsnelson and S. Kotz. On the upper limits of some measures of variability. Archiv für Meteorologie, Geophysik und Bioklimatologie, Series B, 8, 103–107, 1957.
LFC10 L. F. Lago-Fernandez and F. Corbacho. Normality-based validation for crisp clustering. Pattern Recognition, 43, 782–795, 2010.
Sti86 S. Stigler. The History of Statistics: The Measurement of Uncertainty before 1900. Harvard University Press, Cambridge, 1986.
TW05 R. Tibshirani and G. Walther. Cluster Validation by Prediction Strength. Journal of Computational and Graphical Statistics, 14, 511–528, 2005. | http://arxiv.org/abs/1703.09282v2 | {
"authors": [
"Christian Hennig"
],
"categories": [
"stat.ME",
"62H30"
],
"primary_category": "stat.ME",
"published": "20170327194216",
"title": "Cluster validation by measurement of clustering characteristics relevant to the user"
} |
E. Olivieri^1, J. Billard^2, M. De Jesus^2, A. Juillard^2, A. Leder^3

^1 CSNSM, Univ. Paris-Sud, CNRS/IN2P3, Université Paris-Saclay, 91405 Orsay, France
^2 Univ Lyon, Université Lyon 1, CNRS/IN2P3, IPN-Lyon, F-69622, Villeurbanne, France
^3 Massachusetts Institute of Technology, Laboratory for Nuclear Science, 77 Massachusetts Avenue, Cambridge, MA 02139-4307

Dry Dilution Refrigerators (DDR) based on pulse tube cryo-coolers have started to replace Wet Dilution Refrigerators (WDR) due to the ease and low cost of operation. However, these advantages come at the cost of increased vibrations, induced by the pulse tube. In this work, we present the vibration measurements performed on three different commercial DDRs. We describe in detail the vibration measurement system we assembled, based on commercial accelerometers, conditioner and DAQ, and examine the effects of the various damping solutions utilized on three different DDRs, both in the low and high frequency regions. Finally, we ran low temperature, pseudo-massive (30 and 250 g) germanium bolometers in the best vibration-performing system under study and report on the results.

Keywords: Cryogenics; Dry Dilution Refrigerators; Vibrations; Accelerometer; Bolometers. PACS: 07.20, 07.90.+c, 07.57.Kp, 07.10.Fq

§ INTRODUCTION

Due to helium shortage and the increasing price of liquid helium, in the last decade research groups performing experimental physics at low temperatures have begun to replace the usual Wet Dilution Refrigerators (WDR) with pulse tube-based Dry Dilution Refrigerators (DDR). The success of DDRs relies on the low cost and ease of operation. In particular, the high level of automation of the gas handling systems and the lack of a liquid helium bath allow for a nearly autonomous cool down and running. However, pulse tubes induce vibrations, which are so far the most serious drawback of this technology <cit.>. Indeed, vibrations can drastically affect the results of experiments, as in the case of Scanning Tunnelling Microscopy, Johnson Noise Measurements and Bolometers <cit.>. The ultimate goal for DDR technology is to provide, through an efficient vibration decoupling system, a low temperature and low vibration environment as good as the one obtained with WDRs. Throughout this paper we assume that, to first approximation, running a DDR fridge with its pulse tube turned OFF is equivalent in terms of vibrations to running a WDR. In this work, we propose a vibration measurement standard (<ref>), built with market-based components, that allows for a rigorous and unambiguous comparison between vibration levels of DDRs at room temperature. We set three vibration limits to classify systems as noisy, typical and quiet. We report on vibration measurements on three (four) different DDRs (setups) and draw conclusions on their vibration performances (<ref>). Finally (<ref>), we show how vibration levels as measured with accelerometers compare with bolometers, highlighting the need for vibration levels below 10 μg to operate these correctly.

§ DESCRIPTION OF THE DDR UNITS UNDER STUDY

Below we list the three (four) DDRs (setups) under study and describe the various vibration damping solutions utilized by each one (Fig. <ref>).

-Hexadry Standard (Hex std): produced by Cryoconcept, it is the standard model of the Hexadry Hexagas™ series <cit.>. It is equipped with a PT410 Cryomech pulse tube with a remote rotary valve. The pulse tube cold head is tightly fixed onto the 300 K flange, without any dedicated vibration decoupling system.
The pulse tube intermediate and cold stages are thermally coupled to the cryostat intermediate (50 K) and cold (4 K) plates via low pressure gas-exchangers (Hexagas™) to avoid any mechanical contact and hence reduce the propagation of vibrations down to the various cold stages of the fridge. No special care was devoted to the positioning of the remote motor, which was held on the main DDR unit frame <cit.>. The unit was installed at the Institut de Physique Nucléaire de Lyon (IPNL) and devoted to detector R&D for the EDELWEISS dark matter search experiment <cit.>.

-Hexadry Ultra Quiet Technology (Hex UQT): this is exactly the same aforementioned DDR unit, but upgraded with the UQT (Ultra Quiet Technology™) option. This option is especially conceived to provide a low vibration environment at low temperatures. It consists in a mechanical decoupling of the pulse tube head from the rest of the cryostat via an edge-welded supple bellow [The edge-welded bellow employed has an elastic constant of 30 N/mm along the z axis, whereas the radial constant is 2200 N/mm.]. A few mm-thick neoprene O-ring is installed between the bellow and the 300 K flange to cut out high frequency vibrations. A solid secondary frame, physically separated from the main one, is firmly mounted on the ceiling and rigidly holds the pulse tube head <cit.>. The rotary valve may be mounted on the ceiling to further decouple it from the cryostat. An analogous system, Hex UQT (STERN), was kindly put at our disposal by Cryoconcept and Bar-Ilan University (Israel) <cit.> to study the reproducibility of the vibration performances with respect to the unit and installation site. For this unit, the pulse tube head and rotary valve were both mounted on a secondary frame, separated from the cryostat main frame.

-Oxford Triton 400 (Triton): produced by Oxford Instruments <cit.>, the system is especially conceived to provide a low temperature, low vibration experimental environment. This design utilizes an edge-welded bellow to insulate the vibrations coming from the pulse tube head and provides thermal contacts between the pulse tube stages and the cryostat intermediate (50 K) and cold (4 K) plates via supple copper braids. The system comes mounted on a single solid frame (main frame). All the different dilution unit cold plates, down to the coldest (10 mK plate), are rigidly triangulated. The unit uses the same pulse tube as the Cryoconcept models, with a remote rotary valve option. For our experimental studies we evaluated a system installed at the Laboratory for Nuclear Science at MIT, currently used for ongoing CUORE/CUPID detector R&D <cit.>.

In this work we will see that the vibrations induced by the pulse tube can be transmitted to the dilution unit both via the cold head (300 K pulse tube flange) and via the cold stages. Hence, an efficient vibration damping solution must take both into account. The gravitational wave experiment CLIO <cit.> first realized a 4 K non-vibrating cold plate cryostat, by decoupling the pulse tube cold head with an edge-welded supple bellow and utilizing supple copper braid thermal links between the pulse tube stages and the intermediate (50 K) and cold (4 K) cryostat plates. Since then, this decoupling solution has been commonly adopted in dry refrigerators. Nevertheless, the CLIO experiment observed residual vibrations on the cryostat plates; it demonstrated that these were transmitted mainly by the mechanical thermal links and negligibly by the edge-welded bellow.
This prompted Cryoconcept to opt for thermal couplings through gas-exchangers [A gas-exchanger consists of two annular, entangled counter-radiators. The fixed radiator is accommodated on the cryostat intermediate (cold) plate, whereas the counter-radiator is tightly fixed on the pulse tube stage(s). The latter sits inside the fixed radiator with a gap of a few mm, without any mechanical link. Low pressure helium gas establishes the thermal link between the two counter-radiators. This gas-exchanger technique is a trademark of Cryoconcept.], through the Hex UQT™ technology. Special care must be applied in choosing and dimensioning the edge-welded bellows used to decouple the pulse tube cold head; in fact, bellows efficiently damp vibrations along their axial direction z, whereas they perform poorly along the radial direction r [The stiffness coefficient k_z of the edge-welded bellow along the z direction is much smaller than the radial one k_r.]. Fortunately, though pulse tube vibrations are not negligible along the radial direction, the majority of them are along the axial direction <cit.>.

§ DESCRIPTION OF THE MEASUREMENT SYSTEM

To measure the vibrations at the Mixing Chamber (10 mK cold plate) of the different DDRs and setups, we selected and set up a measurement system standard, composed of a high sensitivity PCB-393B04 seismic accelerometer (PCB Piezo-electronics, typical sensitivity of 1 V/g in the 1 Hz-750 Hz frequency region), a PCB-480E09 signal conditioner and a 16-bit National Instruments DAQ-6218. The measurement chain has been carefully chosen to evidence the residual low level of vibrations injected by the pulse tube, down to 0.2 μg/√(Hz) in the 1 Hz-1 kHz frequency range. Two other accelerometers were tested: the PCB-351B41 (cryogenic) and the Kistler-8762A (3-axes). They were rejected because their intrinsic noise was too large to appreciate vibrations at the required level. We mounted the accelerometer on the Mixing Chamber (10 mK plate), allowing it to sense along the vertical and radial directions. For reading the signal, we used an anti-tribo-electric coaxial cable, tightly fixed to the rigid structures of the DDRs (to avoid spurious signals induced by stress or vibrations of the cable). A leak-tight electrical feedthrough was used to connect this latter cable to the conditioner, which sat outside the cryostat. We performed the measurements with the OVC (Outer Vacuum Chamber) under vacuum to prevent the accelerometer from picking up the acoustic environmental noise through air. All measurements have been performed at room temperature, for three reasons: 1) the lack of any low budget, easy-to-handle cryogenic accelerometer with sufficiently low intrinsic noise; 2) to first order, we assume that the room temperature acceleration measurements are representative of the vibration level at low temperatures. Indeed, no large difference between the 300 K and 4 K values of the elastic constant k and Young's modulus E is observed for stainless steel and copper <cit.>, which are the main materials used for the rigid structures of the DDR units; 3) room temperature measurements can be performed rapidly by any user, with much fewer constraints than those at low temperatures.

§ ACCELERATION AND DISPLACEMENT: RESULTS AND DISCUSSION

§.§ Accelerations

We measured the acceleration of the Mixing Chamber (10 mK cold plate) of the three (four) DDR units (setups) via the acquisition chain described in the previous section.
The signals from the conditioner were sampled at 16 bits, 10 kHz, over a ±1 V range. We performed a Fast Fourier Transform (FFT) analysis using Hanning windowing over 5 s time windows. We trace the acceleration power spectral density PSD_a, as a function of frequency, for each time window. The spectra were then averaged according to:

PSD_a = √( (1/N) ∑_{i=1}^{N} (PSD_a)_i^2 )  [g/√(Hz)]

where N is the total number of time windows (about 25 for all measurements).
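As an illustration, this averaging can be reproduced with standard tools: Welch's method with non-overlapping Hann windows averages the per-window power spectra, so the square root of its output is exactly the quadrature-averaged amplitude spectral density PSD_a defined above. The following Python sketch uses a synthetic white-noise record in place of real accelerometer data.

```python
import numpy as np
from scipy.signal import welch

fs = 10_000                      # sampling rate [Hz], as in the text
tw = 5.0                         # 5 s analysis windows, as in the text

# a(t): accelerometer record in units of g; white noise stands in here
# for a real measurement (~25 windows, i.e. about 125 s of data).
rng = np.random.default_rng(0)
a = 1e-5 * rng.standard_normal(int(125 * fs))

# Hann-windowed, non-overlapping 5 s segments; welch() averages the
# per-segment power spectra, so sqrt() gives PSD_a in g/sqrt(Hz).
f, Pxx = welch(a, fs=fs, window="hann", nperseg=int(tw * fs), noverlap=0)
psd_a = np.sqrt(Pxx)
```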
For convenience, we first define three relevant vibration levels in the PSD_a domain: a) typical (1×10^-5 g/√(Hz)) for low noise measurements. Several low temperature, low noise experiments will be able to run without any issue within this level; the bolometer community has globally converged toward this value. b) noisy (1×10^-4 g/√(Hz)). This is the upper "acceptable" limit of vibration for low and ultra-low temperatures (T<10 mK). At this level, vibrations can impact the base temperature reached by the DDR Mixing Chamber. c) quiet (1×10^-6 g/√(Hz)), which represents a difficult level to achieve, as it requires special installations such as anechoic chambers and laminar air-flow isolators.

To facilitate any further discussion, we also define two relevant frequency regions as follows: REG1) mechanical frequency range, from 1 Hz up to 40 Hz. It represents the region where the pulse tube mechanically induces displacements of the cryostat (vibrating at the pulse tube fundamental frequency and first harmonics). These movements stem mostly from the elongation of the flex hose connecting the pulse tube cold head to the rotary valve and from the displacement of the pulse tube cold stages. Indeed, the pulse tube experiences pressure variations between 9 and 18 bars every cycle, and the flex hose behaves as a piston. A possible solution to reduce the contribution due to the movements of the flex hose is to replace it with a rigid pipe. Large benefits in terms of vibrations have been observed with this configuration, with the remote motor tightly held on a concrete block/wall <cit.>. We also noted that the frame holding the cryostat can present resonant frequencies in this range, hence special care should be devoted to its design. REG2) acoustic frequency range, from 40 Hz up to 1 kHz, which is the frequency region where "acoustically audible noise" populates the vibration measurements. Gas flowing through the pulse tube corrugated pipes and flex hose typically contributes in this range, as it generates a whistle-like audible noise. Moreover, in this frequency region, the OVC acts as a resonating bell, which then injects these acoustic vibrations into each DDR cold plate through the rigid structures of the cryostat.

Fig. <ref> reports the acceleration measurements for the Hex UQT, for the pulse tube turned ON/OFF, along the axial z (top) and radial r (bottom) directions, respectively. For this specific setup, we observe almost no difference along the z direction in REG1. However, due to the transversal stiffness of the edge-welded bellow, we see vibrations at the fundamental pulse tube frequency (1.4 Hz) and harmonics along the radial directions (bottom). We could mitigate the transversal vibrations on this specific setup by mounting the rotary valve as recommended in Fig. <ref>, with the flex hose aligned along the z-axis. In the acoustic region REG2, we clearly observe the pulse tube noise for both the z and r directions. Fig. <ref> compares the vibration spectra along the axial direction for the three (four) units (setups) under study. The Hex UQT showcases the best vibration damping, capable of reducing the pulse tube-induced vibrations by up to two orders of magnitude. We point out the reproducibility of the vibration performances of the Hexadry UQT technology, highlighted by the black and red solid lines. Both the Triton and the Hex std show pulse tube fundamental and harmonic peaks in REG1, although the Triton is more favorable. The measurements on the Hex std show that efficient vibration damping can only be achieved by combining both the gas-exchanger technology and the thorough decoupling of the pulse tube cold head with respect to the 300 K flange. In REG2, vibrations are strongly related to the acoustic environmental noise. For the Hexadry UQT (STERN), special care was devoted to acoustically isolating the OVC and reducing the "audible acoustic noise" contribution. For this reason, it largely outperformed the other units. In particular, it showed a vibration level as good as with the pulse tube turned OFF.

§.§ Displacements

As experiment performances may be more easily interpreted in terms of displacements, for the sake of completeness we now discuss our results in terms of displacements. The displacement power spectral density PSD_d can be derived from the acceleration PSD_a by double integration in the frequency domain, as follows:

PSD_d(f_i) = (9.81 m/s^2)/(2·π·f_i)^2 · PSD_a(f_i)  [m/√(Hz)]

where f_i corresponds to the frequency bins. Fig. <ref> shows the PSD_d for the Hex UQT, along the axial z and radial r directions, for the pulse tube turned ON and OFF, whereas Fig. <ref> compares the displacements (PSD_d) for the three (four) units (setups). Due to the 1/f^2 dependence, the low frequency modes can easily dominate the displacement measurements. To better compare the displacement levels of the setups, we calculate for each of them the RMS displacement over the REG1 and REG2 frequency regions, according to the following formula (derived from Parseval's theorem):

RMS|_{f_l}^{f_h} = √( ∑_{f=f_l}^{f_h} (PSD_d)^2 Δf )  [m]

where Δf is the discrete frequency step and f_l, f_h are the limits of the frequency range (Δf = 1/tw, where tw = 5 s is the time window chosen to perform the FFT analysis). Tab. <ref> reports the results and the intrinsic RMS noise limit of our measurements. Looking at the results in REG1, we see how the edge-welded decoupling system combined with the gas-exchanger technique (Hex UQT) is effective in reducing mechanical vibrations. In contrast, using only the gas-exchanger technique (Hex std) is inefficient. The UQT technique is quite reproducible, as shown by the comparison between the two UQT setups. Furthermore, the main/secondary frames solution adopted in the Hex UQT (STERN) performs well, though mechanically anchoring the pulse tube head and rotary valve to the ceiling or a concrete block yields better results. The vibration reduction system adopted on the Triton is less effective than the Hex UQT system. However, it is difficult to conclude whether the residual displacements for this unit stem from the cold stage braid links or from the cold head. We moreover see that the displacements along the radial direction r are one order of magnitude larger than those along z. Efforts are needed to mitigate the transmission of vibrations along the radial direction and reach the limit already achieved with the Hex UQT along the z direction. The displacement results show that no correlation exists between the RMS displacements of the two frequency regions.
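The displacement conversion and the band-limited RMS defined above translate directly into code. The following sketch reuses the (f, psd_a) arrays from the previous snippet and only assumes uniformly spaced frequency bins with Δf = 1/tw.

```python
import numpy as np

G = 9.81  # m/s^2, converts PSD_a from units of g to SI units

def displacement_psd(f, psd_a):
    """PSD_d(f) = g * PSD_a(f) / (2*pi*f)**2, i.e. double integration
    in the frequency domain; the DC bin is left at zero."""
    f = np.asarray(f, dtype=float)
    psd_d = np.zeros_like(psd_a)
    nz = f > 0
    psd_d[nz] = G * psd_a[nz] / (2.0 * np.pi * f[nz]) ** 2
    return psd_d

def band_rms(f, psd_d, f_lo, f_hi):
    """RMS displacement over [f_lo, f_hi] via Parseval's theorem,
    with the discrete frequency step df = f[1] - f[0] = 1/tw."""
    df = f[1] - f[0]
    sel = (f >= f_lo) & (f <= f_hi)
    return np.sqrt(np.sum(psd_d[sel] ** 2) * df)

# e.g. the two regions defined above:
# psd_d = displacement_psd(f, psd_a)
# rms_reg1 = band_rms(f, psd_d, 1.0, 40.0)      # mechanical region
# rms_reg2 = band_rms(f, psd_d, 40.0, 1000.0)   # acoustic region
```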
The best performing unit in REG2, the Hex UQT (STERN), has a displacement level comparable with that of the Hex std, which is by far the worst performing unit in REG1.

§ PSEUDO-MASSIVE, HIGH IMPEDANCE NTD SENSOR BOLOMETERS

In this section we report how the performance and noise of two pseudo-massive (30 g and 250 g) germanium bolometers compare with the vibration levels of the various DDR systems. Both detectors were equipped with high impedance NTD (Neutron Transmutation Doped) thermal sensors <cit.>. The 30 g detector was first operated in the Cryoconcept Hex std cryostat at IPNL. After observing a strong effect of the pulse tube vibrations on the detector's performances and operating temperatures, we upgraded the cryostat with the Ultra Quiet Technology vibration reduction system, transforming it into a Hex UQT. The 30 g bolometer was then tested again, utilizing strictly the same read-out system and cabling. We subsequently ran a highly sensitive, 250 g germanium bolometer.

§.§ Bolometer and setup description

The two bolometers were rigidly anchored to the 10 mK cold plate and electrically connected to room temperature read-out electronics via anti-tribo-electric constantan-copper coaxial cables. Special care was devoted to the thermalisation of the cables at each DDR cold stage. We measured the temperature of the 10 mK cold plate via a calibrated RuO_2 resistive thermometer <cit.> and regulated it via an electrical P.I.D.-controlled heater. We utilized a CUORE-like read-out system <cit.>, which consists of purely DC, low noise, high stability amplifiers providing an overall gain of 2400 (tunable), combined with a 4-pole, 2 kHz low-pass Bessel filter. The analog output of the electronics was sampled at 16 bits, 10 kHz, over a ±10 V dynamic range (NI-6218 DAQ). Overall, the read-out shows an intrinsic voltage noise of 4 nV/√(Hz) above 1 Hz and up to the Bessel cutoff frequency.

§.§ Resistance vs. Temperature curves

The detection principle of the considered bolometers using an NTD thermometer is based on the fact that a particle interaction with the germanium absorber increases the absorber temperature by a few micro-kelvins and induces a variation of the resistance of the thermal sensor. As the latter is current biased, we then observe a voltage signal across the sensor. The resistance of such sensors as a function of temperature follows the Mott-Anderson law <cit.>:

R(T) = R_0 exp(√(T_0/T))

where R_0 depends mainly on geometrical factors and T_0 is related to the germanium doping level. Fig. <ref> shows the characteristic curves (resistance vs. temperature) of the 30 g bolometer obtained before/after the upgrade of the cryostat (black/red symbols). Before the upgrade, the bolometer resistance levels off around 100 kΩ, which corresponds to a temperature of about 30 mK (extrapolated using the measurements after the upgrade), whereas the Mixing Chamber temperature approached 12 mK. The down-conversion of mechanical vibrations into heat via several mechanisms, e.g. friction between the bolometer absorber and the clamps holding it, results in a constant power injection and hence in the heating of the bolometer. Thanks to the UQT upgrade, both bolometers recovered the expected characteristic curves, falling into agreement with Eq. <ref>, as shown by the fit.
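Such a fit of Eq. <ref> is a standard nonlinear least-squares problem; a minimal Python sketch is given below. The (T, R) points are fabricated stand-ins for the measured characteristic curve and the starting values are arbitrary; only the functional form comes from the text.

```python
import numpy as np
from scipy.optimize import curve_fit

def mott_anderson(T, R0, T0):
    """R(T) = R0 * exp(sqrt(T0 / T)); T in K, R0 in Ohm, T0 in K."""
    return R0 * np.exp(np.sqrt(T0 / T))

# Hypothetical data points standing in for the measured curve.
T = np.array([0.015, 0.020, 0.030, 0.050, 0.080])          # K
rng = np.random.default_rng(2)
R = mott_anderson(T, 80.0, 3.5) * (1.0 + 0.02 * rng.standard_normal(T.size))

# Fitting log(R) tames the huge dynamic range of the resistance.
popt, pcov = curve_fit(lambda T, R0, T0: np.log(mott_anderson(T, R0, T0)),
                       T, np.log(R), p0=(100.0, 3.0))
R0_fit, T0_fit = popt
```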
§.§ Bolometer noise spectra

In this section we study the impact of pulse tube-induced vibrations on the noise of a bolometer. We focus on the 250 g germanium detector because of its increased sensitivity compared to the 30 g detector, thanks to its improved thermal sensing design <cit.>. We operated the detector at a fixed temperature of 18 mK, which corresponds to the standard operating temperature of the EDELWEISS experiment <cit.>. To characterize the role of the vibrations in the bolometer thermal noise (signal power spectral density) and disentangle the contribution of the cabling microphonics <cit.>, we operate the bolometer in two configurations, each with pulse tube ON/OFF: a) No polarisation current: in this configuration the bolometer thermal sensitivity is null and we can solely test piezo-electric and tribo-electric contributions (microphonics) to the bolometer noise due to cabling. Tribo/piezo-electricity produce charge (current) noise, which translates into voltage noise through the NTD impedance, which was 12 MΩ. b) Optimally polarised: at about 1 nA polarisation current, the NTD impedance lowers to ∼8 MΩ and the bolometer is maximally sensitive to thermal variations and energy deposits. With a sensitivity of 200 nV/keV, it allows us to probe the effect of the pulse tube vibrations and their down-conversion to heat in the absorber.

The resulting noise power spectral densities as a function of frequency are reported in Fig. <ref>. The green (PT-ON) and blue (PT-OFF) curves correspond to configuration a), whereas the purple (PT-ON) and orange (PT-OFF) curves correspond to configuration b). In red we also show the signal response of the detector, normalized to a 1 keV energy event. All the noise power spectra have been computed using a 500 ms time window, to avoid pile-up events, and traced up to 500 Hz. For the case where we operated the detector in mode b), due to a particle event rate of about 2 Hz and an intrinsic bolometer signal decay-time of about 60 ms, an additional chi-square cut was applied to select pure noise samples, as any decaying tail from an event can mimic a 1/f-like noise and therefore bias our noise power spectral density. A small 50 Hz noise (European AC power supply frequency) and higher order harmonics pollute the noise spectra. The slight contribution at 30 Hz comes from a pick-up of the data acquisition system.

By comparing pulse tube ON/OFF measurements in configuration a), we observe no difference in the overall noise spectra: vibration-induced microphonics of the cabling has a negligible contribution to the bolometer noise. However, by comparing pulse tube ON/OFF in configuration b), even though no significant additional noise contributions are seen in the 30 Hz-500 Hz range, we do observe an excess of noise at low frequencies. From optimal filter theory <cit.>, we evaluate the energy resolutions to be 2.5 keV and 1.7 keV (RMS) for the pulse tube ON and OFF, respectively. This difference in noise is due to the fact that the bolometer studied is particularly sensitive to low frequencies (below 20 Hz), with the dominant noise contributions stemming from pulse tube residual vibrations, most likely along the radial r directions.
These results have triggered investigations to further mitigate radial vibrational modes, as discussed in <ref>.

§ CONCLUSIONS AND RECOMMENDATIONS

Pulse tube-induced vibrations have a dramatic effect on the operation of massive and pseudo-massive bolometers at cryogenic temperatures. We showed how we designed and set up a vibration measurement system based on commercial accelerometers, conditioner and DAQ, well suited to measuring accelerations in low noise environments. We studied in detail the vibration levels at the Mixing Chamber (10 mK plate) of three (four) different DDR units (setups), with large differences observed in terms of vibrations and displacements. The most effective vibration mitigation technology combines the decoupling of the pulse tube head via an edge-welded bellow together with gas-exchangers, as implemented on the Cryoconcept Hex UQT model. We confirmed the importance of a secondary frame, separated from the main DDR frame, to tightly hold the pulse tube head and the rotary valve. Of all the technologies we examined, the Cryoconcept Hex UQT brings the vibration level down most effectively, in both the low and high frequency regions, and allows one to run massive and pseudo-massive bolometers. Improvements are still possible on DDRs to further reduce vibrations at the 10 mK cold plate, by installing an additional secondary 10 mK floating plate, suspended via spring-loaded, mass-damped wires and thermally linked with supple, high conductivity copper braids <cit.>.

§ ACKNOWLEDGEMENTS

The results of this work were only made possible through the collaborative effort of several partners. We wish to especially thank Cryoconcept, which granted us access to several DDR units at their factory before delivery and provided us with valuable assistance in upgrading our setups. We also address special thanks to the cryogenic group of the SPEC-IRAMIS-CEA laboratory, led by P. Pari, and the associated mechanical workshop for the technical discussions and valuable mechanical realizations. Finally, we thank P. Camus and M. Pyle for their fruitful discussions about new vibration reduction strategies.

§ BIBLIOGRAPHY

gm_vibrations T. Tomaru, T. Suzuki, T. Haruyama, T. Shintomi, A. Yamamoto, T. Koyama, R. Li, Vibration analysis of cryo-coolers, Cryogenics, 44, Issue 5, pp. 309-317 (2004).
vib_free_4K_stage S. Caparrelli, E. Majorana, V. Moscatelli, E. Pascucci, M. Perciballi, P. Puppo, P. Rapagnani and F. Ricci, Vibration-free cryostat for low-noise applications of a pulse tube cryo-cooler, Rev. Sci. Inst. 77, 095102 (2006).
article_placement_rm A. M. J. den Haan, G. H. C. J. Wijts, F. Galli, O. Usenko, G. J. C. van Baarle, D. J. van der Zalm and T. H. Oosterkamp, Atomic resolution scanning tunneling microscopy in a cryogen free dilution refrigerator at 15 mK, Rev. Sci. Inst., 85, 035112 (2014).
article_placement_rm2 Y. Tian, H. F. Yu, H. Deng, G. M. Xue, D. T. Liu, Y. F. Ren, G. H. Chen, D. N. Zheng, X. N. Jing, L. Lu and others, A cryogen-free dilution refrigerator based Josephson qubit measurement system, Rev. Sci. Inst., 83, 033907 (2012).
cryoconcept_website http://cryoconcept.com/ (accessed on March 1st, 2017).
edelweiss L. Hehn et al., Improved EDELWEISS-III sensitivity for low-mass WIMPs using a profile likelihood approach, Eur. Phys. J. C 76, 548 (2016).
STERN For more details, contact: Dr. M.
Stern, Quantum Nanoelectronics Laboratory, Bar-Ilan University.
oxford_website https://www.oxford-instruments.com/products/cryogenic-environments/dilution-refrigerator/cryogen-free-dilution-refrigerators (accessed on March 1st, 2017).
cuore_cupid The CUPID Interest Group, R&D towards CUPID (CUORE Upgrade with Particle IDentification), arXiv preprint arXiv:1504.03612 (2015).
measure_xyz T. Tomaru, T. Suzuki, T. Haruyama, T. Shintomi, N. Sato, A. Yamamoto, Y. Ikushima, R. Li, T. Akutsu, T. Uchiyama, S. Miyoki, Vibration-Free Pulse Tube Cryo-cooler System for Gravitational Wave Detectors, Part I: Vibration-Reduction Method and Measurement, Cryo-coolers, 13, pp. 695-702 (2005).
clio_experiment_pt Y. Ikushima, R. Li, T. Tomaru, N. Sato, T. Suzuki, T. Haruyama, T. Shintomi, A. Yamamoto, Ultra-low-vibration pulse tube cryo-cooler system – cooling capacity and vibration, Cryogenics, 48, pp. 406-412 (2008).
Baron_book R. F. Barron, Cryogenic Systems, 2nd Edition, 1985.
rigid_pipe_private_communication E. Olivieri, P. Pari, C. Marrache, private communication.
ntd_article N. Wang, J. Beeman, A. N. Cleland, A. Cummings, E. E. Haller, A. Lange, R. Ross, B. Sadoulet, H. Steiner, T. Shutt, F. C. Wellstood, Particle detection with semiconductor thermistors at low temperatures, IEEE Transactions on Nuclear Science, 36, Issue 1, pp. 852-856 (1989).
full_range E. Olivieri, M. Rotter, M. De Combarieu, P. Forget, C. Marrache-Kikuchi and P. Pari, Full range resistive thermometers, Cryogenics, 72, no. 2, pp. 148-152 (2015).
cuoricino_electronics A. Alessandrello et al., A programmable front-end system for arrays of bolometers, NIMA, 444, 1-2, pp. 111-114 (2000).
mott-anderson S. Mathimalar, V. Singh, N. Dokania, V. Nanal, R. G. Pillay, S. Pal, S. Ramakrishnan, A. Shrivastava, P. Maheshwari, P. K. Pujari, S. Ojha, D. Kanjilal, K. C. Jagadeesan, S. V. Thakare, Characterization of Neutron Transmutation Doped (NTD) Ge for low temperature sensor development, NIMB, 345, pp. 33-36 (2015).
ThermalLTD J. Billard, M. De Jesus, A. Juillard, and E. Queguiner, Characterization and Optimization of EDELWEISS-III FID800 Heat Signals, J. Low. Temp. Phys. 184, 299-307 (2016).
cabling_vibrations R. Kalra, A. Laucht, J. P. Dehollain, D. Bar, S. Freer, S. Simmons, J. T. Muhonen and A. Morello, Vibration-induced electrical noise in a cryogen-free dilution refrigerator: Characterization, mitigation, and impact on qubit coherence, Rev. Sci. Inst. 87, 073905 (2016).
enss D. McCammon, Thermal Equilibrium Calorimeters - An Introduction, in C. Enss (ed.), Cryogenic Particle Detection, p. 17, Springer (2005).
floating_plate S. Pirro, Further developments in mechanical decoupling of large thermal detectors, Nuclear Instruments and Methods in Physics Research A, 559, no. 2, pp. 672-674 (2006). | http://arxiv.org/abs/1703.08957v2 | {
"authors": [
"E. Olivieri",
"J. Billard",
"M. De Jesus",
"A. Juillard",
"A. Leder"
],
"categories": [
"physics.ins-det"
],
"primary_category": "physics.ins-det",
"published": "20170327072037",
"title": "Vibrations on pulse tube based Dry Dilution Refrigerators for low noise measurements"
} |
Critical properties of the contact process with quenched dilution

Alexander H. O. Wada and Mário J. de Oliveira
Instituto de Física, Universidade de São Paulo, Rua do Matão, 1371, 05508-090 São Paulo, São Paulo
email: [email protected]
Received: date / Accepted: date

We have studied the critical properties of the contact process on a square lattice with quenched site dilution by Monte Carlo simulations. This was achieved by generating in advance the percolating cluster, through the use of an appropriate epidemic model, and then by simulating the contact process on top of the percolating cluster. The dynamic critical exponents were calculated by assuming an activated scaling relation, and the static exponents by the usual power law behavior. Our results are in agreement with the prediction that the quenched diluted contact process belongs to the universality class of the random transverse-field Ising model. We have also analyzed the model and determined the phase diagram by the use of a mean-field theory that takes into account the correlation between neighboring sites.

Keywords: Geodesic flow... [sic] — Keywords: Contact process; quenched dilution; activated scaling.

§ INTRODUCTION

In many experiments on condensed matter, quenched disorder may be present either because it is an unavoidable feature of the sample or because disorder is deliberately introduced in the sample. In either case, if we wish to describe the properties of these systems by statistical mechanical models, quenched disorder should be taken into account in these models. In some cases the quenched disorder is irrelevant, in the sense that it does not change the critical behavior. In other cases, the quenched disorder is a relevant feature that changes the critical behavior of the pure system. According to a criterion due to Harris <cit.>, a spatially quenched disorder will be irrelevant with respect to the critical properties if the inequality dν_⊥ > 2 is obeyed for the pure system, where ν_⊥ is the spatial correlation length exponent and d is the dimension of the system. For models belonging to the directed percolation universality class, such as the contact process <cit.>, this inequality is not fulfilled for d<4. We should thus expect a change in the critical properties of the contact process with quenched disorder, as is the case of the quenched diluted contact process, which is the object of our study here. Numerical simulations of the quenched diluted contact process in two dimensional lattices <cit.> indeed confirm the change in the critical properties.

A remarkable critical behavior of the quenched diluted contact process is the slow activated dynamics, of the logarithmic type, instead of the usual power law type. This result was advanced by Hooyberghs et al. <cit.> by mapping the evolution operator of the stochastic process describing the quenched diluted contact process into a random quantum spin-1/2 operator, and by the use of a renormalization group approach. This critical behavior places the quenched diluted contact process in the universality class of the random transverse-field Ising model <cit.>. The slow activated dynamics of the quenched diluted contact process has been confirmed by numerical simulations in two dimensions <cit.>.
The remaining sites then form clusters of site percolation. We aim to study the critical properties of the contact process with quenched dilution by a method in which the percolation clusters are understood as related to the stationary state of stochastic models for the spreading of disease <cit.>. The use of an epidemic process turns out to be a procedure to create percolation clusters as efficient as the ordinary method of simply creating random vacancies and then using a clustering algorithm to find the percolating cluster.

A straightforward numerical approach to the diluted contact process is to consider all the remaining sites of a lattice after a certain fraction of them has been removed <cit.>. Other methods, such as ours, consider instead just the sites of the percolating cluster <cit.>. In this case the total computer time should include the time it takes to generate the percolating cluster. However, this time is very short, representing in our approach less than 1% of the total computer time.

The stochastic model we use to generate clusters of site percolation is defined as follows <cit.>. Each site of a regular lattice is occupied by an individual that can be in one of three states: susceptible, exposed or immune. A susceptible individual, in the presence of an exposed individual, becomes exposed with a certain probability p and immune with the complementary probability. The exposed and immune individuals remain forever in these states. Starting with a single exposed individual in a lattice full of susceptible individuals, a cluster of exposed individuals is generated such that at the stationary state it is exactly mapped into a cluster of site percolation <cit.>, with p being identified as the site occupation probability.

Once a cluster of site percolation is generated by the model of spreading of disease explained above, we simulate the contact process on top of the percolating cluster. Only the percolating cluster is needed because a finite cluster cannot sustain an active state. That is, if we wait enough time, the absorbing state will be reached. This procedure is thus interpreted as the contact process with quenched site dilution. More details on the models will be given in the next section. In this same section we set up the evolution equations for one- and two-site correlations and solve them by the use of a pair mean-field approximation, which allows us to construct the phase diagram. This phase diagram shows that at the percolation threshold the critical creation rate of the quenched diluted contact process is finite.

Using the method presented above, we have obtained the critical properties and the phase diagram of the diluted contact process by numerical simulations and also by a mean-field theory. The method allowed us to obtain more accurate values for the critical exponents, thus confirming the prediction that the quenched diluted contact process belongs to the universality class of the random transverse-field Ising model. The mapping of the epidemic processes into the quenched diluted contact process also allows us to conclude that this universality class may include some models for epidemic spreading.

The contact process, and other models belonging to the universality class of directed percolation, describe the transition from an active state to an absorbing state, from which a system can never escape. This nonequilibrium phase transition is very common in nature and may occur in various situations <cit.>.
However, the experimental observation of the critical exponents is very difficult, as any amount of disorder should alter the critical behavior; the exponents were eventually measured in the electrohydrodynamic convection of nematic liquid crystals <cit.>.

§ MODELS AND PAIR APPROXIMATION

We begin by defining the two models using the spreading of disease language. The two models are illustrated in figure <ref>. The first model (A) is the one that generates the site percolation clusters, and is thus the underlying support over which the second model (B), the contact process, is defined.

§.§ First model

Each site of a regular lattice is occupied by an individual that can be susceptible (S), immune (U) or exposed (E). The possible processes of the first model are as follows:

S + E → U + E, rate a,
S + E → E + E, rate b.

These two processes define a continuous time stochastic process whose probability distribution obeys a master equation. Instead of writing down the master equation, which gives the time evolution of the probability distribution, we write the time evolution of some marginal probability distributions, such as the one-site and two-site probability distributions. Using a procedure developed earlier <cit.> and the notations P_X, P_XY, P_XYZ for one-site, two-site and three-site probabilities, the following time evolution equations can be derived:

d/dt P_S = -(a+b) P_ES,
d/dt P_E = b P_ES,
d/dt P_ES = -[(a+b)/k] P_ES - (a+b)μ P_ESE + bμ P_ESS,
d/dt P_US = -(a+b)μ P_ESU + aμ P_ESS,
d/dt P_EU = (a/k) P_ES + aμ P_ESE + bμ P_ESU,

where k is the coordination number of the regular lattice and μ = (k-1)/k. An approximate solution can be obtained by the use of the pair mean-field approach, which amounts to using the approximation P_XYZ = P_XY P_YZ / P_Y. Using the notation x = P_S, y = P_E, v = P_ES, u = P_US, and w = P_EU, we may write

dx/dt = -(a+b) v,
dy/dt = b v,
dv/dt = -[(a+b)/k] v - (a+b)μ v^2/x + bμ v(x-v-u)/x,
du/dt = -(a+b)μ vu/x + aμ v(x-v-u)/x,
dw/dt = (a/k) v + aμ v^2/x + bμ vu/x,

where we have taken into account that P_SS = P_S - P_ES - P_US = x-v-u. Equations (<ref>)-(<ref>) have been solved in reference <cit.>. At the stationary state, the solution is

x = s^k, y = p(1-s^k), v = 0, u = q s^(k-1)(1-s^(k-1)), w = pq(1-s^(k-1)),

where s is the root of the polynomial equation

p s^(k-1) - s + q = 0,

and p = b/(a+b) and q = 1-p. The trivial solution is s=1, which gives x=1, y=v=u=w=0 and corresponds to the non-spreading regime. The solution s≠1 corresponds to the spreading regime and occurs only when p > p_c = 1/(k-1). We remark that the stationary solution is exactly mapped into the site percolation model, with p playing the role of the probability of occupancy, or the fraction of occupied sites, in the percolation model. The spreading regime (s≠1) corresponds to the existence of the percolating cluster. The non-spreading regime corresponds to the absence of the percolating cluster (s=1).
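These equations are easily integrated numerically. The sketch below (in Python, with scipy) is ours and not part of the original calculation, and the small initial fraction of exposed sites used to seed the dynamics is an arbitrary choice. For p = b/(a+b) above the threshold p_c, the trajectory should relax toward the stationary solution quoted above, with x → s^k and v → 0.

```python
import numpy as np
from scipy.integrate import solve_ivp

def pair_mf(t, state, a, b, k):
    """Pair mean-field equations of the first model;
    state = (x, y, v, u, w) = (P_S, P_E, P_ES, P_US, P_EU)."""
    x, y, v, u, w = state
    mu = (k - 1.0) / k
    dx = -(a + b) * v
    dy = b * v
    dv = (-(a + b) / k * v - (a + b) * mu * v**2 / x
          + b * mu * v * (x - v - u) / x)
    du = -(a + b) * mu * v * u / x + a * mu * v * (x - v - u) / x
    dw = a / k * v + a * mu * v**2 / x + b * mu * v * u / x
    return [dx, dy, dv, du, dw]

a, b, k = 0.2, 0.8, 4          # p = b/(a+b) = 0.8 > p_c = 1/3 (square lattice)
eps = 1e-4                     # arbitrary small seed of exposed sites
s0 = [1.0 - eps, eps, eps, 0.0, 0.0]
sol = solve_ivp(pair_mf, (0.0, 500.0), s0, args=(a, b, k),
                rtol=1e-8, atol=1e-12)
x_inf, v_inf = sol.y[0, -1], sol.y[2, -1]   # expect x -> s**k and v -> 0
```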
§.§ Second model

As before, an individual can be susceptible (S), immune (U) or exposed (E). In addition, an individual can also be infected (I). Thus, in the second model, each site can be in one of the states S, U, E, and I. However, the sites in states S and U remain forever in these states. The only sites that have their states modified are the E and I sites. They are modified according to the following processes:

E + I → I + I, rate c,
I → E, rate r,

which are the reactions of the contact process. The relation between the infection rate λ, often used in studies of the contact process, and c and r is given by λ = c/r. For convenience, we also make use of a parameter α, defined by α = r/c = λ^(-1).

The initial state of the second model is chosen to be the stationary state of the first model. However, this state has no site in state I and the dynamics does not start. To start the dynamics, we choose randomly one site in state E and replace it by a site in state I. By this procedure, a cluster of sites in state I grows on top of the percolating cluster. It should be understood that the percolation cluster is formed by sites of type E and I. The sites of type U are at the border of the percolation cluster. The rest of the sites are in state S, and they are not connected to the sites of the percolation cluster.

The two reactions (<ref>) and (<ref>) show that the total number of sites in states E and I is constant, implying that the sum P_E + P_I is a constant. Since these two reactions do not involve the sites U and S, it follows that the number of sites U and the number of sites S are invariants, and the sum P_EU + P_IU is a constant.

Again, using the procedure developed earlier <cit.>, the following time evolution equations for the one-site and two-site probabilities can be obtained:

d/dt P_I = -r P_I + c P_IE,
d/dt P_IE = -r P_IE + r P_II - (c/k) P_IE - cμ P_IEI + cμ P_IEE,
d/dt P_IU = -r P_IU + cμ P_IEU,

where μ = (k-1)/k. Due to the constraints stated above, it is not necessary to write down the time evolution equations for the other one-site and two-site probabilities. Using again the pair approximation and the previous notation, together with the notations z = P_I, g = P_IE, h = P_IU, we may write

dz/dt = -r z + c g,
dg/dt = -r g + r(z-g-h) - (c/k) g - cμ g^2/y + cμ g(y-g-w)/y,
dh/dt = -r h + cμ gw/y,

where we have taken into account that P_II = P_I - P_IE - P_IU = z-g-h and P_EE = P_E - P_EI - P_EU = y-g-w.

Equations (<ref>)-(<ref>) are to be solved using as initial conditions the stationary state of the first model. Since P_E + P_I = y+z is invariant, it follows that y+z = y_0, where y_0 is the value of P_E at the stationary state of the first model, given by equation (<ref>). Analogously, P_EU + P_IU = w+h is invariant, implying w+h = w_0, where w_0 is the value of P_EU at the stationary state of the first model, given by equation (<ref>).

At the stationary state, equations (<ref>)-(<ref>) have a trivial solution z=0, characterizing the absorbing state, and a nontrivial solution for which z≠0, characterizing the active state. Solving for z, it is possible to obtain an expression for the nontrivial solution z. By taking the limit z→0 of the nontrivial solution, we get the critical line, which is given by

α = r/c = [(k-1)/k] [1 - q(1-s)/(p(1-s^k))],

where s is the root of the polynomial equation given by equation (<ref>). The critical line α versus q, shown in figure <ref>, separates the active percolating phase from the inactive percolating phase. Notice that, when p → p_c = 1/(k-1), we get α = 2(k-1)/k^2 = α_0, so that the critical line meets the vertical line p = p_c at α = α_0, as shown in figure <ref>. It is straightforward to show that the critical exponent related to the order parameter is the same as that of the pure system.
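The critical line is straightforward to evaluate numerically: for a given p one finds the nontrivial root s ∈ (0,1) of equation (<ref>) and inserts it into the expression for α. A short Python sketch of ours, for the square lattice k = 4, is:

```python
import numpy as np
from scipy.optimize import brentq

def critical_alpha(p, k=4):
    """Pair mean-field critical line alpha_c = (r/c)_c, using the
    nontrivial root s in (0, 1) of p*s**(k-1) - s + q = 0, q = 1 - p.
    Returns None below the percolation threshold p_c = 1/(k-1)."""
    q = 1.0 - p
    if p <= 1.0 / (k - 1):
        return None
    f = lambda s: p * s ** (k - 1) - s + q
    s = brentq(f, 1e-12, 1.0 - 1e-9)   # s = 1 is always the trivial root
    return (k - 1.0) / k * (1.0 - q * (1.0 - s) / (p * (1.0 - s ** k)))

# Sanity checks: alpha_c -> (k-1)/k = 3/4 for p -> 1 (pure system) and
# alpha_c -> 2(k-1)/k**2 = 3/8 for p -> p_c, as stated in the text.
print(critical_alpha(0.999999), critical_alpha(0.8), critical_alpha(0.3334))
```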
§ SCALING RELATIONS AND NUMERICAL SIMULATIONS

Around the critical point, the quantities that characterize the critical behavior are assumed to obey scaling relations. In the present case of the diluted contact model, for which the quenched disorder is relevant, the usual scaling relation in terms of power laws in time is replaced by power laws in the logarithm of time, called activated scaling <cit.>. At the critical point, the space correlation length ξ behaves as <cit.>

ξ ∼ (ln t/t_0)^(1/ψ),

where ψ is the tunneling critical exponent <cit.> and t_0 is a constant. Other quantities behave similarly at the critical point, such as N_I, the number of infected individuals,

N_I ∼ (ln t/t_0)^θ,

and P, the survival probability at time t,

P ∼ (ln t/t_0)^(-δ).

From the scaling relations (<ref>), (<ref>), and (<ref>) we find that

N_I ∼ P^(-θ/δ), N_I ∼ ξ^(θψ), P ∼ ξ^(-δψ),

valid at the critical point. These are useful relations because they do not depend on t_0. At the stationary state (t→∞), the quantities that describe the critical behavior follow the usual power laws, but with exponents distinct from those of the pure system. The order parameter ρ, defined as the fraction of infected sites in the percolating cluster, behaves as ρ ∼ (λ-λ_c)^β.

Initially, we simulated the first model to generate a percolating cluster. The simulation, with periodic boundary conditions, was performed as follows. At each time step, we choose at random a bond from a list of active bonds. An active bond is a pair of SE nearest neighbor sites. The site S of the chosen bond becomes E with probability p and becomes U with the complementary probability q=1-p. The chosen bond is removed from the list and the list is updated. The time is then incremented by an amount 1/N_a, where N_a is the number of active bonds in the list. Notice that, if a site S has n_E nearest neighbor sites in state E, then it will appear n_E times in the list. Starting with just one E site in a lattice full of S sites, this algorithm generates a cluster of E sites. The process stops when there are no SE bonds left in the lattice. When this happens, the cluster of E sites is a site percolating cluster, with U sites standing on the border of the cluster, separating the E sites from the S sites.

Having generated a percolating cluster of E sites, we simulate the contact model on top of the cluster using the following algorithm. At each time step, a site is chosen at random from a list of I sites. With probability 1/(λ+1) it becomes an E site (recovery), and with the complementary probability p_a = λ/(λ+1) a nearest neighbor site is chosen at random. If the chosen neighboring site is in state E, it becomes I; otherwise, nothing happens. The time is then incremented by an amount 1/N_I, where N_I is the number of I sites. The initial condition is formed by the cluster of E sites with one E site turned into an I site. This I site is taken as the origin.
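For concreteness, the two algorithms just described can be sketched as follows. This is an illustrative Python implementation of ours, not the authors' production code: stale list entries are removed lazily rather than by explicit list updates, time tracking during cluster growth is omitted (only the final cluster is used), and in practice one would keep only realizations in which the grown cluster percolates.

```python
import random

S, E, U, I = 0, 1, 2, 3
NBRS = ((1, 0), (-1, 0), (0, 1), (0, -1))

def grow_cluster(L, p, rng):
    """First model: grow one cluster of E sites from a single seed on an
    L x L periodic lattice.  Each S site enters the active list once per
    E neighbour; entries whose site was already decided are skipped."""
    state = [[S] * L for _ in range(L)]
    x0 = L // 2
    state[x0][x0] = E
    bonds = [((x0 + dx) % L, (x0 + dy) % L) for dx, dy in NBRS]
    while bonds:
        j = rng.randrange(len(bonds))
        bonds[j], bonds[-1] = bonds[-1], bonds[j]
        x, y = bonds.pop()
        if state[x][y] != S:
            continue                       # stale entry: site already decided
        if rng.random() < p:
            state[x][y] = E                # site joins the cluster ...
            bonds.extend(((x + dx) % L, (y + dy) % L) for dx, dy in NBRS)
        else:
            state[x][y] = U                # ... or becomes a border site
    return state

def contact_step(state, infected, lam, rng):
    """Second model: one update of the contact process on the cluster.
    Recovery I -> E with probability 1/(lam+1); otherwise an infection
    attempt on a random neighbour (E -> I).  Returns dt = 1/N_I."""
    L, n = len(state), len(infected)
    j = rng.randrange(n)
    x, y = infected[j]
    if rng.random() < 1.0 / (lam + 1.0):
        state[x][y] = E
        infected[j], infected[-1] = infected[-1], infected[j]
        infected.pop()
    else:
        dx, dy = NBRS[rng.randrange(4)]
        nx, ny = (x + dx) % L, (y + dy) % L
        if state[nx][ny] == E:
            state[nx][ny] = I
            infected.append((nx, ny))
    return 1.0 / n

rng = random.Random(0)
state = grow_cluster(256, 0.8, rng)
origin = (128, 128)                 # the seed site, in state E by construction
state[origin[0]][origin[1]] = I
infected, t = [origin], 0.0
while infected and t < 100.0:
    t += contact_step(state, infected, 2.1075, rng)
```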
We performed simulations on a square lattice with N = L^2 sites, with L up to L = 8192. For several values of p and λ, we measured, as a function of time, the number of infected sites N_I, the survival probability P and the correlation length ξ, defined by

ξ^2 = (1/N_I) ∑_i ⟨r_i^2⟩,

where the summation is over the sites occupied by infected individuals and r_i is the distance from site i to the origin. Each quantity was measured by averaging over 10^5 to 10^6 disorder configurations, where each disorder configuration is obtained by a simulation of the first model starting from a distinct seed of the random number generator. We have also performed simulations with smaller values of L. However, results coming from lattices with L = 4096 agree, within statistical errors and up to the maximum time we have used, with those coming from L = 8192. The statistical errors were determined by the calculation of the standard statistical deviation.

The results for the three quantities N_I, P and ξ, determined for several values of λ, are shown in figures <ref>, <ref>, and <ref>. The error bars in these figures are not shown, but they are less than 8%. At the critical point, λ = 2.1075, they are even smaller, reaching 1%. Figure <ref> shows the plot of the number of infected N_I as a function of time t for p = 0.8. Fitting the expression (<ref>) to the data points of figure <ref>, we estimate the critical parameter as λ = 2.1075(1), the critical exponent as θ = 0.13(2), and ln t_0 = 6.0(5). To find the exponents ψ and δ and a better estimate of θ, we use a procedure similar to the one used in <cit.>, in which we first determine the quantities θ/δ, θψ and δψ by fitting the expressions (<ref>), (<ref>) and (<ref>) to the data points. After that, we use expressions (<ref>), (<ref>) and (<ref>) to find the exponents θ, δ and ψ by a constrained fitting, to be explained below.

From the plots of N_I versus P, shown in figure <ref>, N_I versus ξ, shown in figure <ref>, and P versus ξ, we may get, respectively, θ/δ, θψ and δψ. Since the scaling relations (<ref>), (<ref>) and (<ref>) do not involve time, the estimates of these quantities are independent of t_0, resulting in more precise values, which are found to be

θ/δ = 0.075(5), θψ = 0.078(4), δψ = 1.034(23).

The consistency of these values can be checked by dividing equations (<ref>) and (<ref>). The result is θψ/δψ = 0.0758(57), which is in fair agreement with (<ref>). The value of θ/δ can be used to get the ratio β/ν_⊥ between the order parameter critical exponent β and the critical exponent ν_⊥ related to the spatial correlation length. Using the relation θ/δ = dν_⊥/β - 2, we find

β/ν_⊥ = 0.964(2).

The exponents θ, δ and ψ are found by the following procedure. For each value of ln t_0 in the interval 5.5 ≤ ln t_0 ≤ 6.5, we determine the exponents θ, δ and ψ by fitting the expressions (<ref>), (<ref>) and (<ref>) to the data points. After that, we choose the actual values of these exponents as the ones for which the quantities θ/δ, θψ and δψ are as close as possible to the values given by (<ref>), (<ref>), (<ref>). This procedure leads to the following values for the exponents:

θ = 0.145(8), δ = 1.88(11), ψ = 0.55(3).
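The constrained fit can be organized as a one-dimensional scan: for each trial ln t_0, the three exponents follow from straight-line fits in the doubly logarithmic variables, and ln t_0 is then selected by matching the t_0-independent combinations above. A schematic Python version of this procedure, as we understand it (details of the actual fits may differ), is:

```python
import numpy as np

def fit_slope(t, obs, ln_t0):
    """Slope of ln(obs) versus ln(ln t - ln t0), i.e. the exponent of an
    activated-scaling law obs ~ (ln t/t0)^exponent; data points with
    ln t <= ln t0 are discarded."""
    lt = np.log(np.asarray(t, dtype=float))
    m = lt > ln_t0
    u = np.log(lt[m] - ln_t0)
    return np.polyfit(u, np.log(np.asarray(obs, dtype=float)[m]), 1)[0]

def constrained_exponents(t, NI, P, xi, targets,
                          grid=np.linspace(5.5, 6.5, 101)):
    """Scan ln t0 over `grid` and keep the (theta, delta, psi, ln t0) whose
    combinations (theta/delta, theta*psi, delta*psi) best match `targets`,
    the t0-independent estimates, e.g. (0.075, 0.078, 1.034)."""
    targets = np.asarray(targets, dtype=float)
    best, best_cost = None, np.inf
    for ln_t0 in grid:
        theta = fit_slope(t, NI, ln_t0)
        delta = -fit_slope(t, P, ln_t0)       # P  ~ (ln t/t0)^(-delta)
        psi = 1.0 / fit_slope(t, xi, ln_t0)   # xi ~ (ln t/t0)^(1/psi)
        combos = np.array([theta / delta, theta * psi, delta * psi])
        cost = np.sum((combos / targets - 1.0) ** 2)
        if cost < best_cost:
            best, best_cost = (theta, delta, psi, ln_t0), cost
    return best
```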
We have also performed simulations to obtain the stationary properties, using systems of linear size L = 2048. The quantities of interest were obtained by the use of 10^8 Monte Carlo steps, after discarding 10^7 Monte Carlo steps. In this case, we use as the initial state a configuration in which a fraction of the sites is in the infected state. Again, we determined the number of infected sites N_I at the stationary state, from which we obtained the density ρ = N_I/N_C, where N_C is the number of sites of the cluster, that is, the number of I sites plus the number of E sites. Assuming the critical behavior (<ref>), we get the value β = 1.11(6) by plotting ρ as a function of λ-λ_c, as shown in figure <ref> for the case p = 0.8. This result for β, together with the numerical value for the ratio β/ν_⊥ obtained above, gives us ν_⊥ = 1.15(6). We have also performed similar simulations for other values of p and obtained the critical line, shown in figure <ref>. In particular, at the percolation critical point p = p_c = 0.59274, we get λ = 3.10(1).

The critical exponents obtained here are shown in table <ref>, together with results coming from other papers on the quenched diluted contact process <cit.> and on the random transverse-field Ising model <cit.>. To make contact with the exponents used to describe the critical behavior of the random transverse-field Ising model, we have determined from our results the fractal dimension critical exponent d_F = d - β/ν_⊥ and the exponent ϕ, related to the fractal dimension and the tunneling exponent by d_F = ϕψ <cit.>. We see that our results agree, within the statistical errors, with all other results cited in table <ref>. The results are d_F = 1.036(2) and ϕ = 1.87(10).

§ CONCLUSION

We have studied the critical properties of the quenched diluted contact process through a mean-field theory and Monte Carlo simulations, using a two-stage procedure. The first stage was the generation of the percolating cluster, obtained by the use of a stochastic lattice model whose stationary states are the clusters of the percolation model. The second stage was the simulation of the contact process on top of the percolating cluster. It should be remarked that only the percolating cluster is necessary if we wish to study the static stationary properties, because finite clusters cannot support an active state. For a finite cluster, the absorbing state will be reached if we wait enough time. As to the dynamic properties, our results show that they can also be obtained from the percolating cluster, or at least their critical properties, as can be inferred by comparing our critical exponents with those of other works.

The present method allowed us to obtain more precise critical exponents, with errors that are at most equal to 6%, confirming the prediction that the quenched diluted contact process belongs to the universality class of the random transverse-field Ising model. The mapping of the two epidemic models into the quenched diluted contact process allows us to speculate about the existence of epidemic models that are in the universality class of the transverse-field Ising model. In fact, this is the case of the model illustrated in figure <ref>, which may be thought of as a merger of the two models in figure <ref>. The epidemic model of figure <ref> is defined on a lattice in which each site can be in one of four states, S, U, E, and I, and is composed of three catalytic reactions, S → U, S → E, E → I, and a spontaneous reaction I → E. At the stationary states, the I and E sites form a connected cluster of sites consisting of a site percolation cluster. The E and I sites then evolve as the contact process on top of a percolating cluster. Therefore, the model defined by the rules of figure <ref> is also mapped into the quenched diluted contact process, and its critical properties put the model in the universality class of the transverse-field Ising model.

§ ACKNOWLEDGMENT

We wish to acknowledge the Brazilian agency FAPESP for financial support.

§ REFERENCES

abharris1974 A. B. Harris, J. Phys. C 7, 1671 (1974).
teharris1974 T. E. Harris, Ann. Prob. 2, 969 (1974).
marro1999 J. Marro and R. Dickman, Nonequilibrium Phase Transitions in Lattice Models (Cambridge University Press, Cambridge, 1999).
henkel2008 M. Henkel, H. Hinrichsen and S. Lübeck, Non-Equilibrium Phase Transitions, Vol. I: Absorbing Phase Transitions (Springer, Dordrecht, 2008).
tome2015 T. Tomé and M. J. de Oliveira, Stochastic Dynamics and Irreversibility (Springer, 2015).
moreira1996 A. G. Moreira and R. Dickman, Phys. Rev. E 54, R3090 (1996).
dickman1998 R. Dickman and A. G. Moreira, Phys.
Rev. E 57, 1263 (1998). vojta2005 T. Vojta and M. Dickison, Phys. Rev. E 72, 036126 (2005). dahmen2007 S. R. Dahmen, L. Sittler and H. Hinrichsen, J. Stat. Mech. P01011 (2007). oliveira2008 M. M. de Oliveira and S. C. Ferreira, J. Stat. Mech. P11001 (2008). vojta2009 T. Vojta, A. Farquhar and J. Mast, Phys. Rev. E 79, 011111 (2009). hooyberghs2003 J. Hooyberghs, F. Iglói, and C. Vanderzande, Phys. Rev. Lett. 90, 100601 (2003). hooyberghs2004 J. Hooyberghs, F. Iglói, and C. Vanderzande, Phys. Rev. E 69, 066140 (2004). fisher1992 D. S. Fisher, Phys. Rev. Lett. 69, 534 (1992). fisher1999 D. S. Fisher, Physica A 263, 222 (1999). motrunich2000 O. Motrunich, S. C. Mau, D. Huse, and D. S. Fisher, Phys. Rev. B 61, 1160 (2000). lin2000 Y.-C. Lin, N. Kawashima, F. Iglói, and H. Rieger, Prog. Theor. Phys. Supplement 138, 479 (2000). karevski2001 D. Karevski, Y.-C. Lin, H. Rieger, N. Kawashima and F. Iglói, Eur. Phys. J. B 20, 267 (2001). hoyos2008 J. A. Hoyos, Phys. Rev. E 78, 032101 (2008). kovacs2010 I. A. Kovács and F. Iglói, Phys. Rev. B 82, 054437 (2010). miyasaki2013 R. Miyazaki and H. Nishimori, Phys. Rev. E 87, 032154 (2013). tome2010 T. Tomé and R. M. Ziff, Phys. Rev. E 82, 051921 (2010). tome2011 T. Tomé and M. J. de Oliveira, J. Phys. A 44, 095005 (2011). wada2015 A. H. O. Wada, T. Tomé and M. J. de Oliveira, J. Stat. Mech. P04014 (2015). hinrichsen2000 H. Hinrichsen, Braz. J. Phys. 30, 69 (2000). takeuchi2007 K. A. Takeuchi, M. Kuroda, H. Chaté, and M. Sato, Phys. Rev. Lett. 99, 234503 (2007).
"authors": [
"Alexander H. O. Wada",
"Mário J. de Oliveira"
],
"categories": [
"cond-mat.dis-nn",
"cond-mat.stat-mech"
],
"primary_category": "cond-mat.dis-nn",
"published": "20170327184225",
"title": "Critical properties of the contact process with quenched dilution"
} |
Static Black Holes With Back Reaction From Vacuum Energy

Pei-Ming Ho [e-mail address: [email protected]], Yoshinori Matsuo [e-mail address: [email protected]]

Department of Physics and Center for Theoretical Physics, National Taiwan University, Taipei 106, Taiwan, R.O.C.

We study spherically symmetric static solutions to the semi-classical Einstein equation sourced by the vacuum energy of quantum fields in the curved space-time of the same solution. We find solutions that are small deformations of the Schwarzschild metric for distant observers, but without a horizon. Instead of being a robust feature of objects with high densities, the horizon is sensitive to the energy-momentum tensor in the near-horizon region.

§ INTRODUCTION

Since it was discovered that a black hole eventually evaporates through Hawking radiation, the information loss paradox has been a longstanding problem in black hole physics. The event horizon plays an important role in this problem, and it is assumed in many studies on the information loss paradox that the event horizon still exists even when quantum effects are taken into account. On the other hand, there are many arguments from the viewpoint of string theory, most notably via the AdS/CFT duality, that the information cannot be lost through black hole evaporation. There are also many studies that argue for the absence of the horizon <cit.>. In this paper, we will explore the connection between the near-horizon geometry and the energy-momentum tensor. We study the back reaction of the vacuum energy-momentum tensor of quantum fields on the near-horizon geometry, and consider how the vacuum energy-momentum tensor modifies the geometry. The motivation comes from studies with a self-consistent treatment of the Hawking radiation in geometries of black hole evaporation. Recently it was shown that no horizon forms during a gravitational collapse if the back reaction from the Hawking radiation is taken into account <cit.>. In the study of black holes, Hawking radiation is associated with a conserved energy-momentum tensor, which can be computed as the vacuum expectation value of the energy-momentum operator of quantum fields outside the horizon. Naively, this quantum correction to the energy-momentum tensor, being extremely small, should have very little effect on the black-hole horizon, which exists at a macroscopic scale. On the other hand, the formation of horizons in gravitational collapses is known to be a critical phenomenon <cit.>. Infinitesimal modifications to the initial condition around the critical value can make a significant difference in the final states. Indeed, we will show that in some sense the existence of the horizon is very sensitive to variations of the energy-momentum tensor. As a first step, we will focus on static configurations with spherical symmetry in this work, and leave the generalization to dynamical processes without spherical symmetry to the future. We will demonstrate in two different models of quantum fields that the quantum correction to the energy-momentum tensor is capable of removing the horizon. We are not claiming that an infinitesimal modification to the energy-momentum tensor leads to dramatic changes in physics. The quantum energy-momentum tensor outside a static star is extremely weak for a distant observer.
Its back reaction to the geometry can indeed be neglected, as a good approximation, for the space-time region outside the horizon which is visible to a distant observer. On the other hand, the horizon can be deformed into a wormhole-like geometry by merely modifying the geometry within an extremely small region near the Schwarzschild radius, and the difference can be hard to distinguish for a distant observer. The vacuum expectation value of the energy-momentum operator has been calculated in the fixed Schwarzschild background for the models that we will consider, as well as for other similar models, but its back reaction to the geometry has been ignored, or treated with insufficient rigor, most of the time. The fact that the vacuum energy-momentum tensor is consistently small outside a black hole was taken by many as a confirmation that its back reaction to the background geometry through the semi-classical Einstein equation G_μν = κ⟨ T_μν⟩ can be ignored. However, it is also assumed by some that the Boulware vacuum is unphysical, as it has a divergence at the horizon in the Schwarzschild geometry. A circular logic is sometimes used to further argue that the back reaction of the quantum effects can be ignored, since those states with large quantum effects, such as the Boulware vacuum, are all assumed to be unphysical. However, it is also an unnatural condition to introduce incoming energy such that the energy-momentum tensor does not diverge at the future horizon, unless the future horizon is already proven to exist. Here, we impose the more natural initial condition that there is no incoming energy flow at past infinity. If the Boulware vacuum is unphysical, there must be outgoing energy at future infinity, and black holes cannot have static states for this initial condition. However, there is a chance to have a physical state for the Boulware vacuum if we take the back reaction from the quantum effects into account. We will show nonperturbatively that there is a solution to the semi-classical Einstein equation for the Boulware vacuum without divergence in the energy-momentum tensor, and hence it is physically sensible to consider the Boulware vacuum. The perturbation expansion for the semi-classical Einstein equation around the Schwarzschild background breaks down at the horizon. Due to the divergence in the Boulware vacuum, the correction term to the Schwarzschild solution also diverges at the horizon. Instead of perturbation theory as an expansion in the Newton constant, we rely on a non-perturbative analysis of the semi-classical Einstein equations. Our analysis shows that the horizon of the classical Schwarzschild solution can be deformed into a wormhole-like structure (without horizon) by an arbitrarily small correction to the energy-momentum tensor. The wormhole-like structure connects the internal region of the star to the external region well approximated by the Schwarzschild solution. We emphasize that the wormhole-like geometry is not connected to another open space (hence it is not a genuine wormhole), but to the surface of a matter sphere. We will not consider the geometry inside the matter sphere, where the energy-momentum tensor of the matter needs to be specified. Instead, we will focus on the neighborhood of the wormhole-like geometry, or other kinds of geometry that replace the near-horizon region.
In the literature, the wormhole-like geometry is also called a “bounce” or “turning point” (of the radius function r). For static configurations with spherical symmetry, the event horizon is also a Killing horizon and an apparent horizon. An object falling through the horizon can never return. When the horizon is deformed into a wormhole-like structure, an object falling towards the center can always return, but only after an extremely long time. Hence, from the viewpoint of a distant observer, an “approximate horizon” still exists. In practice, an extremely long period of time beyond a certain infrared cutoff can be approximated as infinite time. The horizon can be viewed as the ideal limit in which the time for an object to come out of the approximate horizon approaches infinity. In this sense, our conclusion that an infinitesimal modification can replace a mathematical horizon by an approximate horizon is nothing dramatic. Nevertheless, since the notion of horizon plays a crucial role in conceptual problems such as the information loss paradox, it is important to understand how to characterize the geometry of approximate horizons and their difference from the exact horizon. It should be noted, however, that the Killing horizon in a static geometry is not directly related to the information loss paradox. This paper is aimed at exploring the local structure around the horizon and studying how it is modified by quantum corrections; the global structure is out of the scope of this paper. We will show that the Killing horizon is sometimes removed after taking into account the back reaction of the quantum effects. This does not immediately imply that the event horizon does not appear in the dynamical process of a gravitational collapse, as the notion of the event horizon for dynamical systems is quite different from that of static systems, and the horizon might be recovered due to the effect of the Hawking radiation. Therefore it is non-trivial to apply the result of this paper to the formation of black holes, which is a problem we will attack in the near future. After setting up the basic formulation for later discussions in Sec. <ref>, we revisit in Sec. <ref> and Sec. <ref> different models people have used to estimate the vacuum expectation value of the energy-momentum operator outside a black hole, as examples of how tiny quantum corrections can turn off the horizon. It is not of our concern whether these models are accurate. Our intention is to demonstrate the possibility for a small correction in the energy-momentum tensor to remove the horizon. In Sec. <ref>, we consider generic static configurations with spherical symmetry, without assumptions on the underlying physics that determines the vacuum energy-momentum tensor. In addition to the Einstein equations, we only assume that the geometry is free of singularity at macroscopic scales. (The possibility of a singularity at the origin is expected to be resolved by a UV-complete theory and is irrelevant to the low-energy physics for macroscopic phenomena.) It turns out that this regularity condition leads to clear connections between the horizon and the energy-momentum tensor at the horizon. This provides us with a context in which the results of the earlier sections can be understood.
§ 4D EINSTEIN EQUATION IN S-WAVE APPROXIMATION In this paper, we assume the validity of the 4-dimensional semi-classical Einstein equation, G^(4)_μν = κ⟨ T^(4)_μν⟩, in which gravity is treated classically but the quantum effect on the energy-momentum tensor is taken into account. Assuming that the classical energy-momentum tensor vanishes outside the radius R of the star, the energy-momentum tensor for r > R is completely given by the expectation value ⟨ T^(4)_μν⟩ of the quantum energy-momentum operator. To determine the energy-momentum tensor ⟨ T^(4)_μν⟩ outside the star, we will consider massless scalar fields as examples — except that in Sec. <ref> we will consider a generic energy-momentum tensor. For simplicity, we consider only spherically symmetric configurations, and separate the angular coordinates (θ, ϕ) on the 2-sphere from the temporal and radial coordinates (x^0, x^1) as ds^2 = ∑_μ, ν = 0, ⋯, 3 g_μν dx^μ dx^ν = ∑_μ, ν = 0, 1 g^(2)_μν dx^μ dx^ν + r^2 dΩ^2, where dΩ^2 = dθ^2 + sin^2θ dϕ^2 is the metric on the 2-sphere. Due to spherical symmetry, we can integrate out the angular coordinates in the action for a 4-dimensional massless scalar field, and obtain its 2-dimensional effective action as S_m = (1/2)∫ d^4x √(-g) ∑_μ, ν = 0, ⋯, 3 g^μν ∂_μχ ∂_νχ = (4π/2)∫ d^2x √(-g^(2)) r^2 ∑_μ, ν = 0, 1 g^μν_(2) ∂_μχ ∂_νχ. Next, we consider the Einstein-Hilbert action. The 4-dimensional curvature can be decomposed into 2-dimensional quantities as R^(4) = R^(2) - 6(∂ϕ)^2 + 4∇^2ϕ + 2μ^{-2}e^{2ϕ}, where R^(2) is the 2-dimensional scalar curvature and ϕ ≡ -log(r/μ) appears as the dilaton field in 2 dimensions. [ We use the same symbol ϕ for the dilaton as well as the azimuthal angle on the 2-sphere and hope that this will not lead to any confusion. ] (The dilaton ϕ originates from the radius r of the integrated 2-sphere, and μ is an arbitrary scale parameter.) After integrating out the angular coordinates, the 4-dimensional Einstein-Hilbert action turns into the 2-dimensional effective action for the dilaton field: S_EH = -(1/16πG)∫ d^2x √(-g^(2)) μ^2 e^{-2ϕ}[R^(2) + 2(∂ϕ)^2 + 2μ^{-2}e^{2ϕ}]. As the 2-dimensional Einstein tensor vanishes identically, the equations of motion of the dimensionally reduced action involve only the dilaton and a cosmological constant. In Secs. <ref> and <ref>, we will compute the vacuum energy-momentum tensor ⟨ T^(4)_μν⟩ in different models that have been used in the literature on the study of the back reaction of Hawking radiation (e.g. <cit.>), [ Charged black holes are also studied using similar approximations <cit.>. ] and they have been assumed to capture at least the qualitative features of the problem. [ Incidentally, the models for 2D black holes in Refs. <cit.> differ from 4D black holes not only in the matter fields but also in the gravity action.
] Those with reservations about the accuracy of these models, or any other assumption adopted in the calculation below, should also dismiss the literature based on the same assumptions, and the implication of this work would be at least this: the existence of the horizon depends on the details of the energy-momentum tensor, and there is so far no rigorous proof of the presence of a horizon that fully incorporates the back reaction of the vacuum energy-momentum tensor in a realistic 4-dimensional theory. Since the 4-dimensional and 2-dimensional energy-momentum tensors are defined by T^(4)_μν = (2/√(-g)) δS_m/δg^μν, T^(2)_μν = (2/√(-g^(2))) δS_m/δg^μν_(2), respectively, their expectation values are related to each other (in the s-wave approximation) by [ Here we treat the dilaton ϕ (or equivalently r) as a classical field since it originates from the 4-dimensional classical gravity. Only the matter fields are quantized in the semi-classical Einstein equation. ] ⟨ T^(4)_μν⟩ = (1/r^2)⟨ T^(2)_μν⟩ (μ, ν = 0, 1) on the reduced 2-dimensional space-time with coordinates (x^0, x^1). Hence the semi-classical Einstein equation (<ref>) becomes G^(4)_μν = (κ/r^2)⟨ T^(2)_μν⟩ (μ, ν = 0, 1). The angular components of the 4-dimensional Einstein equation, e.g. G^(4)_θθ = κ⟨ T_θθ^(4)⟩, are equivalent to the equation of motion for the dilaton. To avoid potential confusion in the discussion below, we comment that the 4-dimensional conservation law for the energy-momentum tensor ∇^μ⟨ T_μν^(4)⟩ = 0 (μ, ν = 0, 1, 2, 3) can be expressed in terms of the 2-dimensional tensor ⟨ T^(2)_μν⟩ as ∇^μ⟨ T_μν^(2)⟩ - (∂_ν r^2)⟨ T^(4)θ_θ⟩ = 0 (μ, ν = 0, 1), which in general violates the naive 2-dimensional conservation law ∇^μ⟨ T_μν^(2)⟩ = 0 (μ, ν = 0, 1). But if we include the energy-momentum tensor of the dilaton field in T^(2)_μν together with the matter field, the last term in (<ref>) would be cancelled and the 2-dimensional conservation law (<ref>) would hold. § TOY MODEL: 4D ENERGY-MOMENTUM FROM 2D SCALARS In this section, we study the toy model considered by Davies, Fulling and Unruh <cit.> for the vacuum energy-momentum tensor outside a massive sphere. In this toy model, we replace the 4-dimensional scalar field (<ref>) by a 2-dimensional minimally coupled massless scalar field, whose action is S = (1/2)∫ d^2x √(-g^(2)) ∑_μ, ν = 0, 1 g^μν_(2) ∂_μχ ∂_νχ. We shall compute the quantum correction ⟨ T_μν^(2)⟩ to the energy-momentum tensor for this 2-dimensional quantum field theory and then use eq.(<ref>) to estimate the 4-dimensional vacuum energy-momentum tensor ⟨ T_μν^(4)⟩. It should be noted that the 2-dimensional minimally coupled scalar (<ref>) satisfies the 2-dimensional energy-momentum conservation law (<ref>). Thus, according to the 4-dimensional conservation law (<ref>), the angular components of the energy-momentum tensor for the 2-dimensional minimal scalar must vanish: ⟨ T^(4)_θθ⟩ = ⟨ T^(4)_ϕϕ⟩ = 0. §.§ Energy-Momentum From Weyl Anomaly For minimally coupled scalar fields, the quantum correction to the energy-momentum tensor is essentially determined by the conformal anomaly and energy-momentum conservation. Here we review the work of Davies, Fulling and Unruh <cit.>, where they computed the expectation value of the quantum energy-momentum tensor for the toy model described above. They did the calculation in the fixed Schwarzschild background without back reaction. We will consider the back reaction of the quantum energy-momentum tensor after reviewing their work. Consider a minimally coupled massless scalar with the action (<ref>) for a given 2-dimensional metric.
According to Davies and Fulling <cit.>, the quantum energy-momentum operator of this 2-dimensional theory can be regularized to be consistent with energy-momentum conservation, but it breaks the conformal symmetry. The Weyl anomaly is ⟨ T^(2)μ_μ⟩ = (1/24π) R^(2). In the conformal gauge, the metric is specified by a single function C as ds^2 = -C(u, v) du dv, and the regularized quantum energy-momentum operator has the expectation value (for a certain quantum state to be specified below) ⟨ T^(2)_μν⟩ = θ_μν + (R^(2)/48π) g_μν, where the 2-dimensional curvature is R^(2) = (4/C^3)(C ∂_u∂_v C - ∂_u C ∂_v C), and θ_uu = -(1/12π) C^{1/2} ∂_u^2 C^{-1/2}, θ_vv = -(1/12π) C^{1/2} ∂_v^2 C^{-1/2}, θ_uv = 0. The expressions of θ_μν are not given in a covariant form and do not transform covariantly under the coordinate transformation u → u'(u), v → v'(v) (which preserves the conformal gauge), because it is the energy-momentum tensor for a specific vacuum state. Choosing a different set of coordinates (u, v) gives the energy-momentum tensor for a different state. The vacuum state with the energy-momentum tensor (<ref>)–(<ref>) is the one with respect to which the creation/annihilation operators in the scalar field are associated with the positive/negative frequency modes {e^{iω u}, e^{iω v}}.
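The geometric input above can be checked mechanically. The sympy sketch below is our own verification (using the standard curvature conventions, not code from the original work): it builds the Christoffel symbols of the conformal-gauge metric ds^2 = -C(u,v) du dv and confirms the expression (<ref>) for R^(2).

```python
import sympy as sp

u, v = sp.symbols('u v')
C = sp.Function('C')(u, v)
x = [u, v]
g = sp.Matrix([[0, -C/2], [-C/2, 0]])   # ds^2 = -C du dv
ginv = g.inv()

# Christoffel symbols Gamma^a_{bc}
Gam = [[[sum(ginv[a, s]*(sp.diff(g[s, b], x[c]) + sp.diff(g[s, c], x[b])
             - sp.diff(g[b, c], x[s])) for s in range(2))/2
         for c in range(2)] for b in range(2)] for a in range(2)]

# Ricci tensor R_{bc} and scalar curvature R
def ric(b, c):
    return sum(sp.diff(Gam[a][b][c], x[a]) - sp.diff(Gam[a][b][a], x[c])
               + sum(Gam[a][s][a]*Gam[s][b][c] - Gam[a][s][c]*Gam[s][b][a]
                     for s in range(2)) for a in range(2))

R = sp.simplify(sum(ginv[b, c]*ric(b, c) for b in range(2) for c in range(2)))
claim = 4*(C*sp.diff(C, u, v) - sp.diff(C, u)*sp.diff(C, v))/C**3
print(sp.simplify(R - claim))   # -> 0
```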
While the trace part of the energy-momentum tensor is fixed by the Weyl anomaly, the conservation law implies that the energy-momentum tensor for any state can always be written in the form ⟨ T_μν^(2)⟩ = (1/48π) g_μν R^(2) + θ_μν + T̂_μν. The functions T̂_μν are the integration constants arising from solving the equation of conservation and depend only on u for outgoing modes and v for incoming modes. That is, T̂_uu = T̂_uu(u), T̂_vv = T̂_vv(v), T̂_uv = 0; namely, T̂_uu and T̂_vv are functions of u and of v, respectively. The dependence of ⟨ T_μν^(2)⟩ on the choice of states now resides in T̂_μν, which vanishes for the specific vacuum state associated with the coordinates (u, v) in the way described above. They can also be fixed by the choice of boundary conditions at spatial infinity. The conservation law and Weyl anomaly are preserved regardless of the choice of these functions. Now we review the computation by Davies, Fulling and Unruh <cit.> for the quantum energy-momentum tensor outside a 4-dimensional static star without back reaction. The 4-dimensional metric for a spherically symmetric configuration can be put in the form ds^2 = -C du dv + r^2 dΩ^2, with two parametric functions C(u, v) and r(u, v). Assuming that the star is a massive thin shell of radius r = R, we have C = 1 for the empty space inside the shell (r < R), with the light-cone coordinates denoted by (U, V). When the back reaction of the vacuum energy-momentum tensor is ignored, C(r) = 1 - 2M/r for the Schwarzschild metric outside the shell (r > R), where M is the mass of the star. The Schwarzschild radius a_0 equals 2M. The continuity of the metric at r = R determines the relation between the coordinate system (U, V) inside the shell and the coordinate system (u, v) outside the shell as U = (1-2M/R)^{1/2} u, V = (1-2M/R)^{1/2} v. As they are related by a constant scaling factor for a star with constant radius R, the notions of positive/negative frequency modes defined by (U, V) and by (u, v) are exactly the same. The quantum state inside the static mass shell is expected to be the Minkowski vacuum, for which the positive/negative frequency modes are {e^{± iω U}, e^{± iω V}}_{ω > 0}. For a large radius R, the density of the shell is small, and we expect the quantum state to be continuous across r = R. In other words, the quantum state just outside the shell at r = R is the vacuum state associated with the positive/negative energy modes {e^{± iω U}, e^{± iω V}}_{ω > 0}, or equivalently {e^{± iω u}, e^{± iω v}}_{ω > 0}. One can use (<ref>)–(<ref>) to compute the energy-momentum tensor for r > R directly, with C given by (<ref>). The results are <cit.> ⟨ T^(2)_uu⟩ = (1/24π)(3M^2/(2r^4) - M/r^3), ⟨ T^(2)_vv⟩ = (1/24π)(3M^2/(2r^4) - M/r^3), ⟨ T^(2)_uv⟩ = (1/24π)(2M^2/r^4 - M/r^3). This is the energy-momentum tensor for a static star given in Ref.<cit.>. The associated quantum state is called the Boulware vacuum <cit.>. The Boulware vacuum has a vanishing energy-momentum tensor at r →∞. But the energy-momentum tensor diverges at r = 2M in a generic local orthonormal frame, due to the diverging blue-shift factor at the horizon. Hence it is conventionally assumed that the radius of the star is not allowed to be inside the Schwarzschild radius, or equivalently, that the Boulware vacuum is not physical if the star is inside the Schwarzschild radius. We will see below that, if the back reaction is taken into consideration, there is no divergence, nor a very large energy-momentum tensor inducing curvature at the Planckian scale. The geometry outside a star is perfectly self-consistent and regular, even if the star is inside the Schwarzschild radius. This also implies that the Boulware vacuum is physical even for a star inside the Schwarzschild radius, but the back reaction must be taken into account. §.§ Turning on Back Reaction Now we turn on the back reaction of the vacuum energy-momentum tensor. The space-time metric should satisfy the Einstein equation (<ref>) with the vacuum energy-momentum tensor given by (<ref>) and (<ref>). For a static configuration with spherical symmetry, the metric can always be written as ds^2 = -C(r) dt^2 + (C(r)/F^2(r)) dr^2 + r^2 dΩ^2 for some functions C(r) and F(r). The functions C(r) and F(r) are independent of the time coordinate t due to the time translation symmetry. The off-diagonal components dt dr are absent due to the time-reversal symmetry. This geometry has a Killing horizon associated with the time-like Killing vector ξ = ∂_t at r = a if C(r=a) = 0. The radial coordinate can be redefined from r to the tortoise coordinate r_* via dr/dr_* = F(r), such that the metric is ds^2 = -C(r)[dt^2 - dr_*^2] + r^2(r_*) dΩ^2. We can further define the light-cone coordinates as u = t - r_*, v = t + r_*, and the metric ds^2 = -C(v-u) du dv + r^2(v-u) dΩ^2 is thus a special case of (<ref>) for some one-variable functions C(v-u) and r(v-u). Since r is a function of (v-u), we can invert the function and view (v-u) as a function of r. For example, for the Schwarzschild metric, we have C(r) = 1 - a_0/r, F(r) = 1 - a_0/r, r_* ≡ r + a_0 log(r/a_0 - 1). For a static, spherically symmetric configuration, an apparent horizon is also a Killing horizon. The reason is as follows. The apparent horizon is a closed surface on which outgoing light-like vectors do not expand the area of the surface. Since the area of a sphere of radius r is 4π r^2 by the definition of the coordinate r, a non-expanding vector must satisfy dr = 0, and for it to be light-like, we need ds^2(dr = 0) = 0. According to (<ref>), this implies that C(r) = 0 at some radius r = a. On the other hand, the Killing horizon is a closed surface on which the Killing vector is light-like. Here the Killing vector refers to the time-translation generator ∂_t. It is light-like only if C(r) = 0.
Hence we see that C(r) = 0 is the condition for both the apparent horizon and the Killing horizon. Plugging the metric (<ref>) into the Einstein equation, the Einstein tensors are G^(4)_uu = 2∂_u C ∂_u r/(Cr) - 2∂_u^2 r/r, G^(4)_vv = 2∂_v C ∂_v r/(Cr) - 2∂_v^2 r/r, G^(4)_uv = C/(2r^2) + 2∂_u r ∂_v r/r^2 + 2∂_u∂_v r/r, G^(4)_θθ = 2r^2(∂_u C ∂_v C - C ∂_u∂_v C)/C^3 - 4r ∂_u∂_v r/C, where G^(4)_ϕϕ equals G^(4)_θθ up to an overall factor of sin^2θ. By using the relations ∂r/∂v = -∂r/∂u = (1/2)F(r), which follow from (<ref>), the Einstein tensors can be completely expressed in terms of the two functions C(r), F(r) as G^(4)_uu = [F(r)/(2C(r)r)](F(r)C'(r) - C(r)F'(r)), G^(4)_vv = [F(r)/(2C(r)r)](F(r)C'(r) - C(r)F'(r)), G^(4)_uv = [1/(2r^2)](C(r) - F^2(r) - rF(r)F'(r)), G^(4)_θθ = -[r^2F/(2C^3)](FC'^2 - F'CC' - FCC'') + (r/C)FF', where primes on C and F refer to derivatives with respect to r.
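These components can be verified mechanically. The sympy sketch below is our own cross-check (standard curvature conventions assumed): it computes the Einstein tensor of the static metric (<ref>) and confirms the (u,v)-components quoted above, using G_uu = (G_tt + F^2 G_rr)/4 and G_uv = (G_tt - F^2 G_rr)/4, which follow from u = t - r_*, v = t + r_* and dr/dr_* = F.

```python
import sympy as sp

t, r, th, ph = sp.symbols('t r theta phi')
C, F = sp.Function('C')(r), sp.Function('F')(r)
x = [t, r, th, ph]
g = sp.diag(-C, C/F**2, r**2, r**2*sp.sin(th)**2)
ginv = g.inv()

# Christoffel symbols Gamma^a_{bc} = (1/2) g^{as}(d_b g_{sc} + d_c g_{sb} - d_s g_{bc})
Gam = [[[sp.simplify(sum(ginv[a, s]*(sp.diff(g[s, c], x[b]) + sp.diff(g[s, b], x[c])
         - sp.diff(g[b, c], x[s])) for s in range(4))/2)
         for c in range(4)] for b in range(4)] for a in range(4)]

def ric(b, c):   # Ricci tensor R_{bc}
    return sp.simplify(sum(sp.diff(Gam[a][b][c], x[a]) - sp.diff(Gam[a][b][a], x[c])
           + sum(Gam[a][s][a]*Gam[s][b][c] - Gam[a][s][c]*Gam[s][b][a]
                 for s in range(4)) for a in range(4)))

Ric = sp.Matrix(4, 4, lambda b, c: ric(b, c))
Rs = sp.simplify(sum(ginv[b, c]*Ric[b, c] for b in range(4) for c in range(4)))
G = (Ric - Rs*g/2).applyfunc(sp.simplify)   # Einstein tensor, lower indices

Cp, Fp = sp.diff(C, r), sp.diff(F, r)
print(sp.simplify((G[0, 0] + F**2*G[1, 1])/4 - F*(F*Cp - C*Fp)/(2*C*r)))       # -> 0
print(sp.simplify((G[0, 0] - F**2*G[1, 1])/4 - (C - F**2 - r*F*Fp)/(2*r**2)))  # -> 0
```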
Let us now investigate the semi-classical Einstein equation (<ref>) with ⟨ T^(4)_μν⟩ given by eq.(<ref>) and ⟨ T^(2)_μν⟩ given by eqs.(<ref>)–(<ref>) for the Boulware vacuum. In terms of the functions C(r) and F(r) defined in (<ref>) and (<ref>), the energy-momentum tensor (<ref>)–(<ref>) can be written as ⟨ T^(2)_uu⟩ = [F(r)/(192π C^2(r))][-3F(r)C'^2(r) + 2C(r)(F'(r)C'(r) + F(r)C''(r))], ⟨ T^(2)_vv⟩ = [F(r)/(192π C^2(r))][-3F(r)C'^2(r) + 2C(r)(F'(r)C'(r) + F(r)C''(r))], ⟨ T^(2)_uv⟩ = [F(r)/(96π C^2(r))][-F(r)C'^2(r) + C(r)(F'(r)C'(r) + F(r)C''(r))]. With the Einstein tensor given in (<ref>)–(<ref>), the Einstein equations (<ref>) are (up to an overall factor of F/(2Cr)) FC' - F'C - (α/2)(1/r)(F'C' + FC'') + (3α/4)(1/(Cr))FC'^2 = 0, C^2/(Fr) - FC/r - F'C - (α/2)(1/r)(F'C' + FC'') + (α/2)(1/(Cr))FC'^2 = 0, where the constant parameter α = κN/(24π) is of the order of the Planck length squared. The parameter N represents the number of massless scalar fields. §.§ Breakdown of Perturbation Theory As the quantum correction to the energy-momentum tensor is extremely small, one naively expects that the Einstein equations (<ref>) and (<ref>) can be solved order by order perturbatively in powers of the Newton constant κ (or equivalently α): C(r) = C_0(r) + αC_1(r) + α^2C_2(r) + ⋯, F(r) = F_0(r) + αF_1(r) + α^2F_2(r) + ⋯. The leading order terms C_0 and F_0 are expected to be given by the Schwarzschild solution (see (<ref>) and (<ref>)): C_0(r) = 1 - a_0/r, and F_0(r) = dr/dr_* = (dr_*/dr)^{-1} = 1 - a_0/r. The equations for the first order terms are F_0C'_1 - F'_0C_1 - C_0F'_1 + C'_0F_1 = (2κ/r)⟨ T^(2)_uu⟩_0, (C_1 - 2F_0F_1)/r - F_0F'_1 - F'_0F_1 = (2κ/r)⟨ T^(2)_uv⟩_0. Here ⟨ T^(2)_μν⟩_0 are given by eqs.(<ref>)–(<ref>) for the Schwarzschild background, as the leading order terms of ⟨ T^(2)_μν⟩ in the perturbative expansion. In the region r > a_0, the equations above can be solved to obtain the first order correction terms C_1 and F_1. However, at r = a_0, since F_0(a_0) = C_0(a_0) = 0, these two equations imply -(α/a_0)(C_1 - F_1) = (2κ/a_0)⟨ T^(2)_uu⟩_0|_{r=a_0} = α/(4a_0^3), (α/a_0)(C_1 - F_1) = (2κ/a_0)⟨ T^(2)_uv⟩_0|_{r=a_0} = 0, unless C'_1 or F'_1 diverges at r = a_0. Apparently, these two equations are inconsistent, and the perturbative expansion fails. In general, the perturbative expansion breaks down at r = a_0, where C(a_0) = F(a_0) = 0, if C'_0(a_0) = a_0 F'_0(a_0)^2. Of course, as the first order equations are inconsistent only at the point r = a_0, one can solve C_1 and F_1 for r > a_0, and then define C_1(a_0) and F_1(a_0) by taking the limit r → a_0. As we will show below, this leads to a divergence in C_1 (and C'_1) at r = a_0, so that the conclusion remains the same: the perturbation theory breaks down at the horizon. Taking the difference of the two Einstein equations (<ref>) and (<ref>), we can solve F(r) in terms of C(r): F(r) = [4C^3(r)/(4C^2(r) + 4rC(r)C'(r) + αC'^2(r))]^{1/2}. Plugging it back into either of the two equations, we find 2rρ'(r) + (2r^2 + α)ρ'^2(r) + αrρ'^3(r) + (r^2 - α)ρ''(r) = 0, where ρ(r) is defined by C(r) = e^{2ρ(r)}. One can check that (<ref>) is consistent with the assumption ⟨ T^(4)_θθ⟩ = 0, which can be derived from the Einstein equation G_θθ = κ⟨ T^(4)_θθ⟩ using (<ref>). Now we consider the perturbative expansion of (<ref>). We expand ρ as ρ(r) = ρ_0(r) + αρ_1(r) + ⋯, which is related to the expansion of C(r) (<ref>) via C_0(r) = e^{2ρ_0(r)}, C_1(r) = 2ρ_1(r)C_0(r). The solutions of ρ_0 and ρ_1 to (<ref>) are ρ_0(r) = (1/2)log c_0 + (1/2)log(1 - a_0/r), ρ_1(r) = -[4r^2 + a_0^2 + 4a_0r(2c_1r - 1)]/(8a_0r^2(r - a_0)) - [(2r - 3a_0)/(4a_0^2(r - a_0))] log(1 - a_0/r), where a_0, c_0 and c_1 are integration constants. The constant a_0 is the Schwarzschild radius in the classical limit α → 0. An integration constant in ρ_1 is absorbed in c_0, which is the overall constant of C(r). While the divergence in ρ_0 at r → a_0 simply implies C_0(a_0) = 0, the divergence in ρ_1 implies a divergence in C_1. Due to the divergence in the higher order terms, the perturbative expansion breaks down. The divergence in the higher order terms is related to that in the vacuum energy-momentum tensor for the Boulware vacuum, even though the energy-momentum tensor does not diverge in the coordinate system above. Though the divergence in the energy-momentum tensor for the Boulware vacuum is sometimes considered to imply that the Boulware vacuum is unphysical, it just implies the breakdown of the perturbative expansion in the semi-classical Einstein equation. The breakdown of the perturbation theory at r = a_0 is not in contradiction with the existence of a solution which is well approximated by the classical solution C_0 and F_0. We will show that the back reaction is significant only within a very small neighborhood (0 < r - a_0 ≪ α/a_0) that is extremely close to the Schwarzschild radius. However, within this tiny region, the solution to the semi-classical Einstein equation cannot be treated perturbatively in powers of the Newton constant κ. §.§ Non-Perturbative Analysis Since the perturbative expansion breaks down around the horizon, we have to study the non-perturbative features of eq.(<ref>). If there is a Killing horizon at r = a (it does not have to be equal to the Schwarzschild radius a_0 = 2M), i.e., if C(a) = 0, we must have ρ → -∞ at r = a, which in turn implies that ρ'(r) diverges at r = a. Assuming that ρ'(r) diverges at r = a with a ≫ α^{1/2}, we must have ρ' ≫ a/α ≫ α^{-1/2} in a region sufficiently close to r = a. Then the third term, αrρ'^3, dominates among the first three terms in (<ref>), and αrρ'^3(r) + (r^2 - α)ρ''(r) ≃ 0 in the limit r → a. This equation can be easily solved to give the asymptotic solution of ρ' in the limit r → a: ρ'(r) ≃ ±[αlog(r^2 - α) + c]^{-1/2}, with an integration constant c. The value of c is fixed to be c = -αlog(a^2 - α), so that ρ' diverges at r = a. Hence ρ'(r) ≃ ±[αlog((r^2-α)/(a^2-α))]^{-1/2} → ±((a^2-α)/(2αa))^{1/2}(r-a)^{-1/2} as r → a.
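The asymptotic solution can be confirmed symbolically. The short sympy sketch below (our own check) verifies that, with y ≡ ρ', the expression (<ref>) with the integration constant (<ref>) solves the dominant balance αry^3 + (r^2-α)y' = 0 exactly.

```python
import sympy as sp

r, a, alpha = sp.symbols('r a alpha', positive=True)
# rho'(r) from (<ref>), diverging at r = a as required
y = (alpha*sp.log((r**2 - alpha)/(a**2 - alpha)))**sp.Rational(-1, 2)

residual = alpha*r*y**3 + (r**2 - alpha)*sp.diff(y, r)
print(sp.simplify(residual))   # -> 0
```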
As a result, C(r) ≃ c_0 e^{2√(k(r-a))} as r → a, where we have chosen the sign in (<ref>) such that C(r) is an increasing function of r, in view of a smooth continuation of C(r) to the asymptotic region in which the geometry is well approximated by the Schwarzschild solution (<ref>). Here c_0 is a positive constant and k ≡ 2(a^2-α)/(αa) ≃ 2a/α. The expression (<ref>) gives a good approximation only when (<ref>) holds, that is, [ Using eq.(<ref>) below, one can show that a small displacement in r of the order of Δr ∼ α/a corresponds to a physical length of the order of Δs ∼ α^{1/2}, which is of the Planck length scale unless N ≫ 1. This of course does not imply that we need Planckian physics in the region (<ref>), because the curvature is still very small — see eq.(<ref>). ] 0 ≤ r - a ≪ α/a. As a rough estimate of the complete solution of C(r), we patch the approximate solution (<ref>) with (<ref>) in the neighborhood where r - a ∼ O(α/a). This determines c_0 to be a very small number of order c_0 ∼ O(α/a^2). Therefore, although the value of C(a) is not zero, as it would need to be for there to be a horizon, it is indeed extremely small, giving a huge blue-shift factor relative to a distant observer. From the viewpoint of a distant observer, observations on this geometry will not be very different from those on the Schwarzschild geometry, and we expect that a ≃ a_0. The calculations leading to (<ref>) serve as a mathematical proof that it is impossible for C(r) to vanish anywhere, and thus there is no horizon. The quantum correction to the energy-momentum tensor is such that there is no horizon even if the radius of the star is much smaller than the classical Schwarzschild radius a_0 = 2M. Due to the back reaction of the quantum energy-momentum tensor, the property of the Boulware vacuum is dramatically changed, although the geometry beyond a few Planck lengths outside the Schwarzschild radius remains well approximated by the Schwarzschild solution. Let us now describe the geometry that replaces the horizon. According to (<ref>) and (<ref>), F(r) behaves as F(r) ≃ √(4c_0(r-a)/(αk)) for r sufficiently close to a. In the very small region (<ref>), the metric is approximately given by ds^2 ≃ -c_0 dt^2 + αk dr^2/(4(r-a)) + r^2 dΩ^2. This geometry around r = a resembles that of a wormhole. By choosing the origin of the tortoise coordinate such that r_* = a_* when r = a, we have r ≃ a + (c_0/(αk))(r_* - a_*)^2 as r → a, and so the metric is ds^2 ≃ -[c_0 + O(r_* - a_*)](dt^2 - dr_*^2) + [a^2 + O((r_* - a_*)^2)] dΩ^2. It is of the same form as the metric for a static (traversable) wormhole. In terms of r_*, we can clearly see that the geometry can be smoothly connected to the region r_* < a_*, although this wormhole-like geometry does not lead to another open space but merely to the interior of a star. The wormhole-like geometry of the static star with a radius smaller than the Schwarzschild radius can therefore be understood in the following way. With spherical symmetry, the 3-dimensional space perpendicular to the Killing vector can be viewed as foliations of 2-spheres with their centers at the origin. As one moves towards the star from afar, the surface area of the 2-spheres decreases until reaching a local minimum at r = a, which is the narrowest point of the throat. There is no singularity at r = a, and the area of the 2-spheres starts to increase beyond this point, until one reaches the boundary of the star.
After that, the area of the 2-spheres starts to decrease again, until the area goes to zero at the origin. In support of our analysis above, we have solved C(r) and F(r) numerically from eqs.(<ref>) and (<ref>), as shown in Fig. <ref> for C(r) and Fig. <ref> for F(r). The diagrams for C(r) and F(r) are only plotted for r ≥ a, simply because r = a is a local minimum of r. The numerical simulation for C as a function of r_* is shown in Fig. <ref>, and the solution can be extended indefinitely in both limits r_* → ±∞. The numerical solution of r as a function of r_* is displayed in Fig. <ref>, showing that r has a local minimum.
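A minimal reproduction of this numerical analysis is sketched below. It is our own illustration (assuming SciPy; the value of α is exaggerated relative to a realistic Planck-scale value so that the neck is visible numerically): eq.(<ref>) is integrated inward from a Schwarzschild initial condition at large r, and the integration is stopped where ρ' blows up, which locates the neck radius a.

```python
import numpy as np
from scipy.integrate import solve_ivp

a0, alpha = 1.0, 1e-4      # Schwarzschild radius (unit) and alpha = kappa*N/(24*pi)

def rhs(r, y):
    rho, rhop = y
    # eq.(<ref>): (r^2 - alpha) rho'' = -2 r rho' - (2 r^2 + alpha) rho'^2 - alpha r rho'^3
    rhopp = (-2*r*rhop - (2*r**2 + alpha)*rhop**2 - alpha*r*rhop**3)/(r**2 - alpha)
    return [rhop, rhopp]

def neck(r, y):            # stop when rho' blows up: the wormhole neck r = a
    return y[1] - 1e8
neck.terminal = True

r_start = 20.0*a0          # start far away on the Schwarzschild solution
y0 = [0.5*np.log(1 - a0/r_start), a0/(2*r_start*(r_start - a0))]

sol = solve_ivp(rhs, (r_start, 0.9*a0), y0, events=neck, rtol=1e-10, atol=1e-12)

a = sol.t[-1]              # neck radius, expected slightly above a0
C_a = np.exp(2*sol.y[0, -1])
print("a =", a, "  C(a) =", C_a, "  compare O(alpha/a0^2) =", alpha/a0**2)
```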
Although the horizon is absent, i.e. C(r) does not vanish at r = a, the value of C(a) is indeed extremely small for a large Schwarzschild radius, of order O(α/a^2) (see (<ref>)). The red-shift factor relating the time coordinate t in the neighborhood of r = a to the time coordinate t at large r is given by c_0^{1/2}. There is an even larger red-shift for r < a. As a result, everything close to or inside the Schwarzschild radius looks nearly frozen to a distant observer. For a large Schwarzschild radius, a real black hole with a horizon and a wormhole with a large red-shift factor are very hard to distinguish by observations at a distance. The conventional expectation for the Boulware vacuum is that the vacuum energy-momentum tensor would diverge at the horizon if the radius of the star is smaller than the Schwarzschild radius. But this expectation is based on a calculation that has neglected the back reaction. According to our non-perturbative solution of C and F, in the small neighborhood (<ref>) of r = a, ⟨ T^(2)_uu⟩ ≃ -(N/48π)(c_0/α) ∼ 𝒪(1/a^2), ⟨ T^(2)_uv⟩ ≃ 0, and ⟨ T^(2)_vv⟩ is the same as ⟨ T^(2)_uu⟩. According to (<ref>), ⟨ T^(2)_uu⟩ is of the same order O(1/a^2) as its counterpart (<ref>) before the back reaction is taken into consideration. ⟨ T^(2)_uv⟩ vanishes, as its counterpart does at r = a_0. Since C(a) = c_0 is very small (<ref>), the energy-momentum tensor at r = a in a local frame is highly blue-shifted. But it is only of order O(α^{-1}a^{-2}), much smaller than the Planck energy density α^{-2}. This invalidates the conventional expectation that the energy-momentum tensor diverges at the horizon for the Boulware vacuum. Since this is no longer a classical vacuum solution, the Einstein tensor becomes non-zero at r = a. In the small neighborhood (<ref>) around r = a, the Einstein tensor is of order G^u_v ∼ G^v_u ∼ O(1/a^2), G^u_u ∼ G^v_v ∼ 0. The order of magnitude of G^u_v (O(1/a^2)) is small for large a, so that it is consistent to use the low-energy effective description of gravity (Einstein's equations). Notice that the disappearance of the horizon is not a fine-tuned result. It is insensitive to many details in eq.(<ref>), and relies only on the fact that the dominant terms are ρ'' and ρ'^3. The appearance of a wormhole-like geometry demands that the ratio of the coefficients of these two terms be positive, but in Sec. <ref> below we will see that there is still no horizon if the ratio is negative, although the geometry would be different. We have only considered the local structure at the Schwarzschild radius, where the near-horizon geometry is replaced by a wormhole-like structure. It is possible that there is a horizon or singularity deep down the throat. In fact, the result about a wormhole-like structure (which was called a “bounce”) was first discovered in Ref.<cit.> via numerical analysis. In addition, they mentioned the possibility of a curvature singularity deep down the throat in the limit r→∞ (but within finite affine distance), where C goes to zero <cit.>. The results of their numerical analysis are completely consistent with the discussion in this section, although our focus is on the near-horizon geometry. Note that the singularity deep down the throat in the vacuum solution is relevant only if the surface of the star does not appear until r→∞ and the mass of the star is localized at the singularity at r→∞. However, in a more realistic scenario, the surface of the star has a finite area (r < ∞), and so the singularity at r = ∞ for the vacuum solution is irrelevant. The singularity is hence not a robust feature of the wormhole-like geometry. §.§ Hartle-Hawking Vacuum For a more general background, the energy-momentum tensor (<ref>) has the additional terms T̂_μν. For stationary solutions, these terms are constants, so that ⟨ T^(2)_uu⟩ = ⟨ T^(2)_vv⟩ = [F(r)/(192π C^2(r))][-3F(r)C'^2(r) + 2C(r)(F'(r)C'(r) + F(r)C''(r))] + b/(48π) for some constant b. Then the Einstein equations become FC' - F'C - (α/2)(1/r)(F'C' + FC'') + (3α/4)(1/(Cr))FC'^2 - bC/(rF) = 0, C^2/(Fr) - FC/r - F'C - (α/2)(1/r)(F'C' + FC'') + (α/2)(1/(Cr))FC'^2 = 0. Since the weak energy condition should not be violated in the asymptotic Minkowski space at r →∞, we shall assume that b ≥ 0. This leads to a positive outgoing energy flux at spatial infinity, as well as an ingoing energy flux of the same magnitude. The conventional interpretation of this boundary condition is that the Hawking radiation from the black hole is balanced by an ingoing energy flux from a thermal background at the Hawking temperature, and the corresponding quantum state is called the Hartle-Hawking vacuum. Due to the energy flux at spatial infinity, the asymptotic geometry at r →∞ is no longer Minkowskian. Instead, C(r) ≃ 2b log(r) + 2b loglog(r) + ⋯ in the limit r→∞. However, for small b, this approximation only applies at extremely large r (r of order O(e^{1/b}) or larger). If we restrict ourselves to a much smaller neighborhood that is still much larger than the Schwarzschild radius, we can still think of the Schwarzschild metric as the approximate solution in the large-r limit. Let us now study the asymptotic behavior of the solution to the Einstein equation as we zoom into a small neighborhood of the Schwarzschild radius. From the Einstein equations, we obtain F = 2C √((C+b)/(4C^2 + 4rCC' + αC'^2)). Plugging it back into the Einstein equation, we find 0 = C'(r)^2[αrC'(r) - 4b(r^2-α)] + 4C(r)^2[(r^2-α)C''(r) + 2rC'(r) - 2b] + C(r)[4b(r^2-α)C''(r) - 4brC'(r) + 6αC'(r)^2]. The perturbative expansions C(r) = C_0(r) + αC_1(r) + ⋯, b = αb_1 + ⋯ give the solution of (<ref>) as C_0(r) = 1 - a_0/r, C_1(r) = -(2r-a_0)^2/(8a_0r^3) - (c_1 + a_0b_1)/r + [(2r-3a_0)/(4a_0^2r)][log r - (1-4a_0^2b_1) log(r-a_0)], where the terms inversely proportional to r in C_1(r) can be absorbed in a shift of the Schwarzschild radius a_0 in C_0(r) by an order-α correction. The next-to-leading order term C_1(r) diverges except for b_1 = 1/(4a_0^2). This is the condition on the energy flux at spatial infinity for the Hartle-Hawking vacuum. In addition to the perturbative approach via expansions in Newton's constant, we shall also study the near-horizon geometry of the Hartle-Hawking vacuum non-perturbatively in α in the limit r → a. If there is a Killing horizon, i.e., C has a zero at r = a, we assume that C(r) = c_0(r-a)^n + ⋯ for some constant n > 0, and then eq.(<ref>) can be expanded as 0 = (r-a)^{2n-2}[4(a^2-α)bc_0^2n + 𝒪(r-a)] - (r-a)^{3n-3}(αac_0^3n^3 + 𝒪(r-a)). To satisfy this equation, the term of order 𝒪((r-a)^{2n-2}) and that of order 𝒪((r-a)^{3n-3}) must cancel. Hence n = 1, and the equation becomes 0 = c_0^2(4αb - 4a^2b + αac_0) + 𝒪(r-a). Therefore, C(r) has a zero only if b = c_0αa/(4(a^2-α)), which is consistent with the perturbative result (<ref>). This is the condition for the existence of the horizon. In this case, F is given by F = 2√(b/α)(r-a) + 𝒪((r-a)^2). As for the classical Schwarzschild solution, the near-horizon geometry for the Hartle-Hawking vacuum is given by the Rindler space.
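This fine-tuning is easy to make explicit symbolically. The sympy sketch below (our own check) substitutes the Rindler-like ansatz C = c_0(r-a) into eq.(<ref>) and demands that the equation hold at r = a, which reproduces the value of b in (<ref>).

```python
import sympy as sp

r, a, alpha, b, c0 = sp.symbols('r a alpha b c0', positive=True)
C = c0*(r - a)             # horizon ansatz with n = 1
Cp, Cpp = sp.diff(C, r), sp.diff(C, r, 2)

eq = (Cp**2*(alpha*r*Cp - 4*b*(r**2 - alpha))
      + 4*C**2*((r**2 - alpha)*Cpp + 2*r*Cp - 2*b)
      + C*(4*b*(r**2 - alpha)*Cpp - 4*b*r*Cp + 6*alpha*Cp**2))

# At r = a only the first term survives; it fixes b uniquely.
print(sp.solve(sp.Eq(eq.subs(r, a), 0), b))   # -> [a*alpha*c0/(4*(a**2 - alpha))]
```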
Note that the condition (<ref>) requires a fine-tuning of the value of b. Hence it establishes a connection between the existence of the horizon and the magnitude of the Hawking radiation. Next, consider the case when there is no horizon, that is, C(r) does not go to zero, although ρ'(r) diverges at some point r = a. In the limit r → a, we can expand C(r) as C(r) = c_0 + c_1(r-a)^n + ⋯. Then the Einstein equation is expanded as 0 = 8bc_0^2 + (r-a)^{n-2}[4(a^2-α)(c_0+b)c_0c_1n(n-1) + 𝒪(r-a)] + 𝒪((r-a)^{2n-2}) - (r-a)^{3n-3}(αac_1^3n^3 + 𝒪(r-a)). The assumption that ρ'(r) diverges at r = a implies that n < 1; hence the term of order 𝒪((r-a)^{n-2}) and the term of order 𝒪((r-a)^{3n-3}) must cancel each other, so we need n = 1/2. The equation above is expanded as 0 = (r-a)^{-3/2}[(1/8)αac_1^3 - (a^2-α)(c_0+b)c_0c_1] + 𝒪(1/(r-a)). It determines c_1 as c_1 = √(8(a^2-α)(c_0+b)c_0/(αa)). The ratio c_0/c_1 restricts the range of validity of the approximation (<ref>) to the region (<ref>). One can then estimate c_0 as c_0 ≤ O(α/a^2) by matching C(r) (<ref>) around the point r - a ∼ O(α/a) with the Schwarzschild solution. We use eq.(<ref>) to compute F and find F = √(2ac_0/(a^2-α)) √(r-a) + 𝒪(r-a) in the limit r → a. As we have seen in the previous section, this solution describes the wormhole-like geometry in a small neighborhood of r = a. To summarize this subsection, the horizon is possible only if b is fine-tuned to the value given by eq.(<ref>). In general, there is a wormhole solution for arbitrary non-negative b, including the case (<ref>). In the wormhole-like solution, ⟨ T^(4)_uu⟩ is non-zero and negative at r = a: ⟨ T^(4)_uu(a)⟩ = -c_0/(2α(a^2-α)). Its order of magnitude is O(1/a^4). When there is a horizon, ⟨ T^(4)_uu⟩ vanishes at the horizon. § 4D SCALARS AS DILATON-COUPLED 2D SCALARS In this section, we consider the 2-dimensional dilaton-coupled scalar (<ref>), which is the dimensionally reduced 4-dimensional scalar with spherical symmetry. Due to the coupling with the dilaton, the Weyl anomaly acquires additional terms <cit.>: ⟨ T^(2)μ_μ⟩ = (1/24π)[R^(2) - 6(∂ϕ)^2 + 6∇^2ϕ], where μ is a 2-dimensional Lorentz index. We shall consider the back reaction of the energy-momentum tensor with this anomaly, and assume that there is no incoming or outgoing flux at spatial infinity. However, the 4-dimensional conservation law (<ref>) and the Weyl anomaly (<ref>) do not uniquely fix the energy-momentum tensor, leaving one degree of freedom unfixed. One needs to impose an additional condition on the vacuum energy-momentum tensor, corresponding to the choice of a quantum state. We shall consider three possible choices: (1) ⟨ T^(4)_θθ⟩ = ⟨ T^(4)_ϕϕ⟩ = 0 (Sec. <ref>), (2) ⟨ T^(4)_uu⟩ = ⟨ T^(4)_vv⟩ = 0 (Sec.
<ref>), and (3) the energy-momentum tensor according to Ref.<cit.> (Sec. <ref>). §.§ Case I: ⟨ T_θθ^(4)⟩ = ⟨ T_ϕϕ^(4)⟩ = 0 We first consider the vacuum state in which the energy-momentum tensor satisfies the 2-dimensional conservation law (<ref>), as well as the 4-dimensional one (<ref>). This implies that the angular components of the energy-momentum tensor vanish identically, ⟨ T^(4)_θθ⟩ = ⟨ T^(4)_ϕϕ⟩ = 0, as in the previous section. In this case, the angular components of the Einstein equation, or equivalently, the equation of motion for the dilaton ϕ, read 2∇^2ϕ - 2(∂ϕ)^2 + R^(2) = 0. The Weyl anomaly (<ref>) is thus simplified to ⟨ T^(2)μ_μ⟩ = -(1/12π) R^(2), which takes the same form as (<ref>) but with an additional overall factor of -2. The energy-momentum tensor is now completely fixed by the conservation law. It has the same form as that of the toy model, i.e. (<ref>)–(<ref>), but with additional overall factors of -2. The extra factor of -2 can be absorbed in a redefinition of the parameter α: α = -κN/(12π), which is now negative, and then the equations in the previous section, e.g. (<ref>)–(<ref>), remain formally the same. Because of the change in sign of the parameter α, we expect the energy-momentum tensor outside the star to be positive, and the behavior of the solution near the Schwarzschild radius can be quite different from the toy model in Sec. <ref>. In order for the horizon or the wormhole-like geometry to appear at r = a, we need ρ'(r) → ∞ in the limit r → a, which implies that ρ''(r) → -∞ in the same limit. However, eqs.(<ref>) and (<ref>) are inconsistent with the Einstein equation (<ref>). Eq.(<ref>) implies that eq.(<ref>) holds when r is sufficiently close to a, so that eq.(<ref>) can be approximated by (<ref>). Yet eq.(<ref>) implies that ρ'' must be positive for α < 0 and r^2 > α. The condition (<ref>) can therefore never be satisfied. As we gradually decrease r, the value of ρ' increases only when r is sufficiently large. But the value of ρ' starts to decrease with r before it is large enough to satisfy the condition (<ref>). It is therefore inconsistent to assume the existence of a horizon or a wormhole for the quantum state satisfying the condition (<ref>). In support of our analysis, the numerical solutions to the Einstein equation are shown in Fig.<ref> for C(r) and Fig.<ref> for F(r). As F(r) is always positive, the value of r has no local minimum. In this sense it is not like a wormhole, but only a throat that gets narrower and narrower as one falls towards the center. There is no horizon either, as C(r) is always positive. Nevertheless, C(r) is extremely small for r ∼ a and r < a, so there is a huge blue-shift for a distant observer.
Everything close to or inside the Schwarzschild radius appears to be nearly frozen, and it is hard to distinguish from a real black hole from the viewpoint of a distant observer. §.§ Case II: ⟨ T^(4)_uu⟩ = ⟨ T^(4)_vv⟩ = 0 As another example, we impose the condition ⟨ T^(4)_uu⟩ = ⟨ T^(4)_vv⟩ = 0 by hand and investigate the corresponding geometry. The 4-dimensional conservation law implies ∂_r(⟨ T^(2)_uv⟩/C) - 2r⟨ T^(4)θ_θ⟩ = 0, which determines ⟨ T^(4)θ_θ⟩ in terms of ⟨ T^(2)_uv⟩. In this case, the equations of motion are given by 0 = FC' - F'C, 0 = rC^2(C + F^2 + rFF') + αF(-6C^2F' + rCC'F' + rFC'^2 - rFCC''). We first solve these equations for F(r) and obtain F(r) = C(r)/√(C(r) + rC'(r) + 6αr^{-1}C'(r) + αC''(r)). Plugging this back into (<ref>) or (<ref>), we obtain the differential equation for C(r): αr^2C'''(r) + (r^3 + 6αr)C''(r) + (2r^2 + 6α)C'(r) = 0. The solution of this equation is given by C(r) = 1 - (1/r^5)[(r^4 - 2αr^2 + 3α^2)(c_1 - c_2√(π) erfc((2α)^{-1/2}r)) + √(2α)(r^2 - 3α)e^{-r^2/(2α)}], where erfc is the complementary error function, defined by erfc(x) = (2/√(π))∫_x^∞ e^{-t^2} dt, and c_1 and c_2 are integration constants. We have chosen the remaining integration constant such that C(r) → 1 in the limit r→∞. The solution (<ref>) has zeros for suitable choices of the parameters c_1 and c_2. For example, for c_2 = 0, the radius of the horizon is given by a solution of (15α^2 - 6α + 1)r^5 - c_1r^4 + 2αc_1r^2 - 3α^2c_1 = 0. Since C(r) behaves in the limit r→∞ as C(r) ≃ 1 - c_1/r + ⋯ + 16α^{5/2}c_2r^{-6}e^{-r^2/(2α)} + ⋯, the constant c_1 is related to the mass of the black hole. The other constant c_2 specifies the quantum correction; it is suppressed in the limit α → 0, and hence it is not related to the classical configuration, but is a parameter labelling different vacua. §.§ Case III In this subsection, the components ⟨ T_uu^(2)⟩ and ⟨ T_vv^(2)⟩ of the energy-momentum tensor for the 2D dilaton-coupled scalar field are calculated using the formula derived in Ref.<cit.>: ⟨ T_uu^(2)⟩ = -(1/12π)(∂_uρ ∂_uρ - ∂_u^2ρ) + (1/2π)(∂_uρ ∂_uϕ + ρ(∂_uϕ)^2), ⟨ T_vv^(2)⟩ = -(1/12π)(∂_vρ ∂_vρ - ∂_v^2ρ) + (1/2π)(∂_vρ ∂_vϕ + ρ(∂_vϕ)^2), where ρ is defined by (<ref>) and ϕ by ϕ = -log(r/μ). The trace anomaly (<ref>) is expressed in terms of ϕ and ρ as ⟨ T_uv^(2)⟩ = -(1/12π)(∂_u∂_vρ + 3∂_uϕ ∂_vϕ - 3∂_u∂_vϕ). The angular components of the energy-momentum tensor are now non-zero and are determined through the 4-dimensional conservation law (<ref>) by the rest of the energy-momentum tensor (<ref>)-(<ref>).
The energy-momentum tensor (<ref>)-(<ref>) can be rewritten in terms of ρ and F as ⟨ T^(2)_uu⟩ = [F(r)/(192π)][F'(r)ρ'(r) + F(r)(-ρ'^2(r) + ρ''(r)) + (6/r^2)F(r)(ρ(r) - rρ'(r))], ⟨ T^(2)_vv⟩ = [F(r)/(192π)][F'(r)ρ'(r) + F(r)(-ρ'^2(r) + ρ''(r)) + (6/r^2)F(r)(ρ(r) - rρ'(r))], ⟨ T^(2)_uv⟩ = [F(r)/(192π)][F'(r)ρ'(r) + F(r)ρ''(r) + 3F'(r)/r]. By using these expressions together with those for the Einstein tensor (<ref>)-(<ref>), the semi-classical Einstein equation (<ref>) gives the following differential equations: 0 = -r^2F'(r)(2αρ'(r) + r) - 2F(r)[αr^2ρ''(r) - αr^2ρ'^2(r) - r(r^2 - 6α)ρ'(r) + 6αρ(r)], 0 = e^{2ρ(r)} - F(r)F'(r)(2αrρ'(r) + r^2 + 6α) - F^2(r)(2αrρ''(r) + r). From these differential equations, we can easily solve for F(r) as F(r) = e^{ρ(r)} r^{3/2} √((2αρ'(r) + r)/D(r)), where the function D(r) is D(r) = r^4 - 12α^2r^2ρ''(r) - 12αρ(r)(2αrρ'(r) + r^2 + 6α) + 2ρ'(r)[αr^2ρ'(r)(2αrρ'(r) + 3(r^2 + 6α)) + r(r^2 + 4α)(r^2 + 9α)]. Plugging (<ref>) back into (<ref>) or (<ref>), we obtain the differential equation for ρ(r): 0 = -24α^2r^2ρ(r)ρ''(r)(15α + r^2 + 2αrρ'(r)) - 144α^2r^2ρ(r) + 12αrρ(r)ρ'(r)[4αrρ'(r)(14α + 2r^2 + αrρ'(r)) + 3r^4 + 40αr^2 + 126α^2] + 2αr^3(186α^2 + 3r^4 + 56αr^2)ρ'(r)^3 + 4α^2r^4(12α + r^2)ρ'(r)^4 + 2r^2ρ'(r)^2(324α^3 + r^6 + 27αr^4 - 18α^3r^2ρ''(r) + 162α^2r^2) - 6α^2r^5ρ^(3)(r) + r^4ρ''(r)(48α^2 + r^4 + 10αr^2 + 36α^3ρ''(r)) + 2r^3ρ'(r)(-72α^2 + r^4 - 3αr^2 + α(-138α^2 + r^4 - 14αr^2)ρ''(r) - 6α^3rρ^(3)(r)). If there is a Killing horizon at r = a, we must have ρ → -∞ as r → a. Then ρ would behave around r = a either as ρ(r) = ρ_0 log(r-a) + ⋯, or as ρ(r) = (1/2)log c_0 + ρ_0(r-a)^n + ⋯ with n < 0. Assuming eq.(<ref>), which includes the case of the Schwarzschild solution, the Einstein equation (<ref>) can be expanded as 0 = 4α^2a^4ρ_0^2(a^2ρ_0^2 + 12αρ_0^2 + 9αρ_0 + 3α)(r-a)^{-4} + 𝒪((r-a)^{-3}), and we can solve for ρ_0 as ρ_0 = (-9α ± √(-12αa^2 - 63α^2))/(2a^2 + 24α), which is never real since a^2 ≫ α. Therefore, ρ can never behave as (<ref>) near r = a. For the other option (<ref>), the Einstein equation (<ref>) is expanded as 0 = 36α^2 log c_0 [2a^2 + (a^2 + 6α)log c_0] + 𝒪(r-a) + (r-a)^{n-3}[-6α^2a^5n(n-1)(n-2)ρ_0 + 𝒪(r-a)] + (r-a)^{2n-4}[12α^3a^4n^2(n-1)(2n-1)ρ_0^2 + 𝒪(r-a)] - (r-a)^{3n-4}[36α^3a^4n^3(n-1)ρ_0^3 + 𝒪(r-a)] + (r-a)^{4n-4}[4α^2a^4(a^2 + 12α)n^4ρ_0^4 + 𝒪(r-a)] + 𝒪(1). In order for the leading order terms to cancel, we need n = 1/2. Then C(r) behaves near r = a as C(r) ≃ c_0 e^{2ρ_0√(r-a)}. The coefficient ρ_0 can be fixed from the leading order term of the expansion of (<ref>) around r = a, 0 = (9/4)α^2a^4ρ_0(αρ_0^2 - a)(r-a)^{-5/2} + 𝒪((r-a)^{-2}), to be ρ_0 = √(a/α). Using (<ref>) with (<ref>), we find F(r) ≃ √(2c_0a(r-a)/(α(aρ_0^2 + 6))) in the limit r → a. Since C(r) is non-zero and F(r) behaves as 𝒪(√(r-a)) near r = a, the metric in the limit r → a is approximately given by that of the wormhole, as in the case of Sec. <ref>. The back reaction of the vacuum energy due to the dilaton-coupled 2-dimensional scalar has been studied previously in Ref. <cit.>, which announced the absence of the horizon and the existence of a “turning point” (i.e. F(a) = 0) using numerical analysis, in agreement with our analytic arguments. They also claimed that there is a divergence of F beyond the turning point in their numerical analysis. Such a singularity exists only if the surface of the star is sufficiently far away from the point r = a, so that the vacuum solution still applies to the neighborhood of the singularity.
As our focus is on the local geometry that replaces the near-horizon region, a singularity further down the throat is not of our concern. (See the discussion at the end of Sec. <ref>.) Incidentally, let us prove analytically that there is no singularity associated with a pole of F(r). As we have discussed above, C(r) has neither a divergence nor a zero at finite, non-zero r. If there is a curvature singularity but C(r) is regular there, F(r) must diverge at the singularity. It was also proposed in Ref. <cit.>, by using numerical analyses, that the singularity occurs at a point r = r_M where ρ is finite and F diverges as F(r) ∝ (r_M - r)^{-1/2} in the limit r → r_M (see eq.(103) in Ref.<cit.>), for the semi-classical Einstein equations (eqs.(30)-(32) in Ref.<cit.>), which are identical to (<ref>)-(<ref>) in this paper. [ Eqs.(30)-(31) in Ref.<cit.> are expressed in terms of ρ(r_*) and ϕ(r_*), while (<ref>)-(<ref>) in this paper are written in terms of ρ(r) and F(r). They are related to each other by F = dr/dr_* and ϕ = -log(r/μ). Eq.(32) in Ref.<cit.> is obtained from the consistency condition with the Bianchi identity. ] However, a singularity of this sort is incompatible with the semi-classical Einstein equations (<ref>)-(<ref>), as we now prove. First of all, according to (<ref>), F(r) must be finite if ρ' diverges. This implies that ρ'(r_M) must be finite if F(r) diverges as F(r) ∝ (r_M - r)^n for some negative n (n = -1/2 in Ref.<cit.>). The leading order terms in the Einstein equations (<ref>) and (<ref>) are then 0 = -n(2αρ'(r_M) + r_M)r_M^2(r_M - r)^{n-1}, 0 = -n(2αr_Mρ'(r_M) + r_M^2 + 6α)(r_M - r)^{2n-1}. Hence we see that the two Einstein equations are inconsistent with this ansatz for the singularity. Therefore, the singularity can exist only in the limit r→∞, although it can be at a finite affine distance from finite r. § ENERGY-MOMENTUM TENSOR AND NEAR-HORIZON GEOMETRY In Secs. <ref> and <ref>, we considered different models of the vacuum energy-momentum tensor, which is always found to be regular at the horizon (in a local orthonormal frame) when the back reaction is taken into account. Our opinion is that a reasonable model for the vacuum energy-momentum tensor should by itself prevent divergences in local orthonormal frames, at least at the macroscopic scale. We also found that sometimes the existence of a horizon demands fine-tuning, and it can easily be deformed into a wormhole-like geometry without horizon by a small modification of the energy-momentum tensor within a tiny range of space. Our observation is that horizons are extremely sensitive to tiny changes in the energy-momentum tensor at the horizon. In this section, we zoom into the tiny space around the horizon (or the wormhole-like space) and explore the connection between its geometry and the energy-momentum tensor, without specifying any detail about the physical laws behind the vacuum energy-momentum tensor. We consider the (semi-classical) Einstein equations for 4-dimensional static, spherically symmetric geometries with an arbitrary energy-momentum tensor. According to eqs.(<ref>)-(<ref>), the Einstein equations are G_uu = [1/(2C(r)r)][F^2C'(r) - (1/2)C(r)(F^2)'(r)] = κT_uu, G_vv = [1/(2C(r)r)][F^2C'(r) - (1/2)C(r)(F^2)'(r)] = κT_vv, G_uv = [1/(2r^2)][C(r) - F^2 - (r/2)(F^2)'(r)] = κT_uv, G_θθ = -[r^2/(2C^3)][F^2C'^2 - (1/2)(F^2)'CC' - F^2CC''] + [r/(2C)](F^2)' = κT_θθ. Note that F(r) appears only in the form of F^2(r). In this section, we shall omit the superscript (4), while all quantities are defined in the 4-dimensional theory.
We will denote ⟨ T^(4)_μν⟩ simply as T_μν. For static and spherically symmetric configurations, the components of the energy-momentum tensor T_μν are functions of r only. They allow us to solve for the function F as
F^2(r) = (2κ r^2(T_uu(r) - T_uv(r)) + C(r))/(2rρ'(r) + 1),
where ρ(r) is defined by
C(r) = e^{2ρ(r)}.
Incidentally, as results of the Einstein equations and spherical symmetry, we have
G_θθ = - r^2 R^u_u, R_θθ = - r^2 G^u_u.
The Einstein equations (<ref>)–(<ref>), together with the regularity of the energy-momentum tensor, will be our basis to establish the connection between the energy-momentum tensor and the existence of a horizon.

§.§ Conditions for Horizon

For static configurations with spherical symmetry, the event horizon and the apparent horizon coincide with the Killing horizon. In this subsection, we consider the metric (<ref>) with a Killing horizon at r = a, so C(a) = 0, which implies that ρ→ -∞, ρ' →∞ as r → a. Assuming that T_uu and T_uv are finite, eq.(<ref>) implies that F(r) = 0 at the Killing horizon.

For solutions of the Einstein equation, the regularity of the geometry implies the regularity of the energy-momentum tensor. As g^{uv} R_uv and R_θθ should both be regular for a regular space-time with spherical symmetry, eqs.(<ref>) and (<ref>) say that g^{uv} T_uv and T_θθ should both be finite. Therefore, T_uv must vanish at r=a, and it is convenient to express it in terms of T^u_u = - 2 C^{-1} T_uv, which should be regular but can be non-zero at r=a. F(r) (<ref>) can thus be rewritten as
F^2(r) = (2κ r^2 T_uu(r) + C(r)(1 + κ r^2 T^u_u(r)))/(2rρ'(r) + 1),
where T^u_u is regular at r = a. Since C(a)=0, we assume that C can be expanded as
C(r) = c_0 (r-a)^n + ⋯
in the limit r → a with n>0. Plugging (<ref>) back into (<ref>) or (<ref>) and expanding around r=a by using (<ref>), we obtain
0 = (r-a)^{2n-2}[- 2κ a^2 c_0^2 n T_uu(a) + 𝒪(r-a)] + 𝒪((r-a)^{3n-2}).
Therefore, the Einstein equation at the leading order implies that T_uu (and T_vv) must vanish at the Killing horizon r=a.

The condition that T_uu and T_vv must vanish at the horizon can be understood as follows. Physically, the regularity of the energy-momentum tensor should be checked in a local orthonormal frame. The finiteness of T_uu or T_vv is not sufficient to ensure regularity, as the coordinates (u, v) are singular at the horizon in the sense that C(a) = 0 <cit.>. Let us now examine the regularity condition for the energy-momentum tensor at the horizon. At the future horizon (du=0), we should find another coordinate ũ such that the metric is regular in the coordinate system (ũ, v). That is, in terms of the coordinates (ũ, v), the metric becomes
ds^2 = - C̃ dũ dv + r^2 dΩ^2,
where
C̃≡ C du/dũ,
and we need C̃ to be finite and non-zero at r=a in order for (ũ, v) to be a regular local coordinate system at the horizon. Then, we have du/dũ∝ C^{-1}→∞ as r→ a, and therefore
T_ũũ = (du/dũ)^2 T_uu, T_ũv = (du/dũ) T_uv
would both diverge at r = a unless T_uu(a) = T_uv(a) = 0. Since T_vv = T_uu for static configurations, we also have T_vv = 0 at the horizon. To be more precise, T_uu, T_vv and T_uv must behave as
T_uu = 𝒪(C^2), T_vv = 𝒪(C^2), T_uv = 𝒪(C)
as r→ a.
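As a quick consistency check of these component formulas and scalings, one can verify symbolically that the classical Schwarzschild solution, F^2 = C^2 with C = 1 - a/r in these coordinates, solves the vacuum equations exactly, so that T_uu = 𝒪(C^2) and T_uv = 𝒪(C) hold trivially. The script below is our own illustration:

```python
# Symbolic check: Schwarzschild (F^2 = C^2, C = 1 - a/r) gives
# G_uu = G_uv = G_thth = 0 with the component formulas above.
import sympy as sp

r, a = sp.symbols('r a', positive=True)
C = 1 - a/r
F2 = C**2                      # F = dr/dr_* = 1 - a/r

Cp, F2p = sp.diff(C, r), sp.diff(F2, r)
G_uu = (F2*Cp - sp.Rational(1, 2)*C*F2p) / (2*C*r)
G_uv = (C - F2 - r*F2p/2) / (2*r**2)
G_thth = (-r**2/(2*C**3))*(F2*Cp**2 - sp.Rational(1, 2)*F2p*C*Cp
                           - F2*C*sp.diff(C, r, 2)) + r*F2p/(2*C)

for name, G in [("G_uu", G_uu), ("G_uv", G_uv), ("G_thth", G_thth)]:
    assert sp.simplify(G) == 0, name
print("Schwarzschild solves the vacuum equations in (u, v) coordinates.")
```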
For static geometries, a coordinate system which covers only the intersection of the future and past horizons is sometimes used. In this case, we must transform both coordinates to new coordinates (ũ, ṽ) in order for the metric to be regular,
ds^2 = - C̃ dũ dṽ + r^2 dΩ^2,
where
C̃≡ C (du/dũ)(dv/dṽ).
In order for C̃ to be finite and non-zero at r=a, we need
(du/dũ)(dv/dṽ) ∝ C^{-1}→∞.
If we take ũ and ṽ such that they are simply exchanged (up to sign) under the time reversal transformation, the energy-momentum tensor must behave as
T_uu = 𝒪(C), T_vv = 𝒪(C), T_uv = 𝒪(C)
as r→ a.

This simple mathematical result can have surprising implications, because it says that an arbitrarily small modification to the energy-momentum tensor at the horizon can eliminate the horizon. Conceptually, this explains why the horizon of the Schwarzschild solution disappears when we turn on the quantum correction to the vacuum energy-momentum tensor, as we have shown in Secs. <ref>, <ref> and <ref>. It also explains why one needs to fine-tune the additional energy flux in order to admit the existence of a horizon in Sec. <ref>.

§.§ Asymptotic Solutions in Near-Horizon Region

In this subsection, we shall examine more closely the relation between the energy-momentum tensor at the horizon and the near-horizon geometry for a series of near-horizon solutions. For a generic quantum theory, the vacuum energy-momentum tensor is typically a polynomial in finitely many derivatives of the metric. Then, as we have shown in the examples in Secs. <ref> and <ref>, the Einstein equation in the limit r → a leads to a differential equation involving only the leading order terms:
(C^{(n_1)})^{m_1} + a(C^{(n_2)})^{m_2}(C^{(n_3)})^{m_3}(⋯) ≃ 0,
where n_1, n_2, n_3 are the orders of derivatives with respect to r. If this equation admits an asymptotic solution as (<ref>), [We will not consider all possible solutions. For instance, the solutions with C(r) ∝exp(-c(r-a)^{-β}) in the limit r → a also have horizons (c, β > 0), but will not be included in the discussions below.] n must satisfy an algebraic equation of the form
m_1(n-n_1) = m_2(n-n_2) + m_3(n-n_3) + ⋯,
which is always solved by a rational number
n = K/M, (K, M ∈ℤ).
The subleading terms in C(r) (<ref>) in the limit r → a should be determined by the subleading terms in the Einstein equations. To be sure that the leading-order solution is part of a consistent solution, one needs a consistent expansion scheme for which higher and higher order terms in C(r) can be solved order by order from the Einstein equations. In view of the Einstein equations (<ref>)–(<ref>), it is clear that a consistent ansatz for the expansion of C(r) is
C(r) = (r-a)^{K/M}[ c_0 + c_1 (r-a)^{1/M} + c_2 (r-a)^{2/M} + ⋯]
for some integers K ≥ 0 and M ≥ 1. Eq.(<ref>) then implies that
F^2(r) = (r-a)^{K'/M + 1}[ f_0^2 + f̃_1 (r-a)^{1/M} + f̃_2 (r-a)^{2/M} + ⋯]
for a certain integer K' ≥ 0. In the limit r → a, the metric for C(r) (<ref>) and F^2(r) (<ref>) is
ds^2 ≃ - c_0 (r-a)^{K/M} dt^2 + (c_0/f_0^2) (r-a)^{-(M+K'-K)/M} dr^2 + a^2 dΩ^2 ≃ - c_0 x^2 dt^2 + (4M^2 c_0/(K^2 f_0^2)) dx^2/x^{2(K'-M)/K} + a^2 dΩ^2,
where r = a + x^{2M/K}. Assuming that there is no other length scale except a and α, the expansions (<ref>) and (<ref>) are expected to be valid when
0 ≤ r - a ≪α/a.
A rough estimate of the values of c_0 and f_0 can be made by matching C(r) and F^2(r) at the leading order with the Schwarzschild solution at r - a ∼𝒪(α/a), if the solution is well approximated by the Schwarzschild metric at large r. We find
c_0 ∼𝒪(α^{1-K/M}/a^{2-K/M}), f_0^2 ∼𝒪(α^{1-K'/M}/a^{3-K'/M}).
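As a side remark on the expansion scheme above, the rationality of n = K/M follows simply from the linearity of the exponent-matching condition in n; a toy helper (ours, with hypothetical inputs) makes this explicit:

```python
# The matching condition m1*(n - n1) = sum_i mi*(n - ni) is linear in n,
# so its solution is automatically a rational number n = K/M.
from fractions import Fraction

def leading_exponent(m1, n1, rhs):
    """rhs: list of (mi, ni) pairs on the right-hand side."""
    num = m1 * n1 - sum(m * n for m, n in rhs)
    den = m1 - sum(m for m, _ in rhs)
    return Fraction(num, den)

# Hypothetical balance of (C'')^2 against (C')^3: 2(n-2) = 3(n-1) => n = -1.
print(leading_exponent(2, 2, [(3, 1)]))
```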
We now study the condition on the energy-momentum tensor in order for the horizon to exist. The energy-momentum tensor is determined by C(r) (<ref>) and F^2(r) (<ref>) through the Einstein equations as an expansion in powers of (r-a)^{1/M}:
κ T^v_u(r) = G^v_u = (r-a)^{(K'-K)/M} (-2K + K' + M) f_0^2/(2M a c_0) + ⋯,
κ T^u_u(r) = G^u_u = - 1/a^2 + (r-a)^{(K'-K)/M} (K' + M) f_0^2/(2M a c_0) + ⋯,
κ T_θθ(r) = G_θθ = - (r-a)^{(K'-K-M)/M} K(M-K') a^2 f_0^2/(4M^2 c_0) + ⋯.
Constraints should be imposed on the coefficients of the singular terms, as T^v_u(r), T^u_u(r) and T_θθ(r) should all be regular at the horizon r = a, as we have argued above. Depending on the values of K, K' and M, a solution can be classified into one of the following categories:

*If K > K', in order for T^v_u(a) and T^u_u(a) to be finite, we need K = 0, which implies that there is no horizon. This case will be considered in the next subsection.

*If K = K', in order for T_θθ(a) to be finite, we need M = K' (and there are more constraints on the coefficients in the expansions of C(r) (<ref>) and F^2(r) (<ref>) if M > 1). In such cases,
κ T^v_u(a) = G^v_u = 0,
κ T^u_u(a) = G^u_u = - 1/a^2 + f_0^2/(a c_0) > - 1/a^2,
κ T_θθ(a) = G_θθ = finite,
ds^2 ≃ - c_0 (r-a) dt^2 + (c_0/f_0^2) (r-a)^{-1} dr^2 + a^2 dΩ^2 ≃ - c_0 x^2 dt^2 + (4M^2 c_0/(K^2 f_0^2)) dx^2 + a^2 dΩ^2,
where f_0^2/(a c_0) ∼𝒪(1/a^2) and r = a + x^2. The near-horizon geometry is the Rindler space. This case includes the classical Schwarzschild solution and the Hartle-Hawking vacuum considered in Sec. <ref>. Note that f_0^2/c_0 is of order 𝒪(1/a), hence G^u_u(a) is of order 𝒪(1/a^2).

*If K < K' and M > (K'-K), in order for T_θθ(a) to be finite, we need M = K' (and there are more constraints on the coefficients in the expansions of C(r) (<ref>) and F^2(r) (<ref>) if M > K'-K+1). In such cases,
κ T^v_u(a) = G^v_u = 0,
κ T^u_u(a) = G^u_u = - 1/a^2,
κ T_θθ(a) = G_θθ = finite,
ds^2 ≃ - c_0 (r-a)^{K/M} dt^2 + (c_0/f_0^2) (r-a)^{-(M+K'-K)/M} dr^2 + a^2 dΩ^2 ≃ - c_0 x^2 dt^2 + (4M^2 c_0/(K^2 f_0^2)) dx^2 + a^2 dΩ^2,
where r = a + x^{2M/K}. Again we have the Rindler space.

*If K < K' and M = (K'-K),
κ T^v_u(a) = G^v_u = 0,
κ T^u_u(a) = G^u_u = - 1/a^2,
κ T_θθ(a) = G_θθ = K^2 a^2 f_0^2/(4M^2 c_0) > 0,
ds^2 ≃ - c_0 (r-a)^{K/M} dt^2 + (c_0/f_0^2) (r-a)^{-(M+K'-K)/M} dr^2 + a^2 dΩ^2 ≃ - c_0 x^2 dt^2 + (4M^2 c_0/(K^2 f_0^2)) dx^2/x^2 + a^2 dΩ^2,
where r = a + x^{2M/K}. This metric describes AdS_2× S^2, which is the near-horizon geometry of the extremal Reissner-Nordström black hole. The order of magnitude of G^θ_θ(a) is 𝒪(1/a^2). [We can no longer use the estimate (<ref>), which assumes that the metric is Schwarzschild at large r. The estimate here is done by assuming the extremal RN black hole metric at large r.]

*If K < K' and M < (K'-K),
κ T^v_u(a) = G^v_u = 0,
κ T^u_u(a) = G^u_u = - 1/a^2,
κ T_θθ(a) = G_θθ = 0,
ds^2 ≃ - c_0 (r-a)^{K/M} dt^2 + (c_0/f_0^2) (r-a)^{-(M+K'-K)/M} dr^2 + a^2 dΩ^2 ≃ - c_0 x^2 dt^2 + (4M^2 c_0/(K^2 f_0^2)) dx^2/x^{2(K'-M)/K} + a^2 dΩ^2,
where r = a + x^{2M/K}. As in the previous cases, it takes an infinite amount of time (change in t) to reach the horizon at r = a from the viewpoint of a distant observer.

For all of the near-horizon geometries, we find
κ T^v_u(a) = G^v_u(a) = 0, κ T^u_u(a) = G^u_u(a) ≥ - 1/a^2, i.e., T^u_u(a) ≥ - 1/(κ a^2).
They imply that there is no Killing horizon if T_uu or T_uv is non-zero. While the first condition was derived in Sec. <ref>, the second condition arises only after a detailed analysis. We should emphasize here that the solutions above may or may not be extended beyond the point r = a without singularity.
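This classification is mechanical enough to encode directly; the sketch below is our own summary of the five categories (assuming K ≥ 1 and M ≥ 1 for a horizon candidate):

```python
# Sketch of the near-horizon classification by the integers (K, K', M);
# K = 0 means C(a) != 0 and hence no horizon at all.
def near_horizon_type(K, Kp, M):
    if K > Kp:
        return "no horizon (regularity of T^v_u, T^u_u would force K = 0)"
    if K == Kp:
        return "Rindler (e.g. Schwarzschild)" if M == Kp else "inconsistent"
    if M > Kp - K:                      # K < K'
        return "Rindler" if M == Kp else "inconsistent"
    if M == Kp - K:
        return "AdS2 x S2 (e.g. extremal Reissner-Nordstrom)"
    return "horizon with G_theta_theta(a) = 0"   # M < K' - K

for K, Kp, M in [(1, 1, 1), (1, 2, 2), (1, 2, 1), (1, 4, 2)]:
    print((K, Kp, M), "->", near_horizon_type(K, Kp, M))
```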
For our purpose of investigating common features of solutions with a horizon, we aim to include as many possibilities as possible.

§.§ Absence of Horizon

In this subsection, we consider the connection between wormhole-like geometries without horizon and the energy-momentum tensor. The stereotypical traversable wormhole is a smooth structure that connects two asymptotically flat spaces, allowing objects to travel from one side to the other. Its cross sections are 2-spheres, whose area is typically minimized in the middle of the connection (the “throat”). In particular, a 3-dimensional spherically symmetric space can be viewed as a foliation of concentric 2-spheres. The surface area of a 2-sphere depends on the distance between the center and the points on the 2-sphere, although the latter is not necessarily a monotonically increasing function of the former.

For the metric (<ref>), the area of the 2-sphere is 4π r^2. By a “wormhole-like geometry”, we mean the existence of a local minimum in the value of r, identified as the narrowest point of the throat of the wormhole. It is not a genuine wormhole because only one side of the throat is an open space, while the other side is expected to be closed, filled with matter of positive energy around the origin. Another type of peculiar geometry that will also be considered below is the limit of the wormhole-like geometry in which the throat is infinitely long.

Assuming that there is a wormhole-like geometry with the local minimal value of the function r equal to a, we expect that dr/dr_* = 0 [It is, however, not true that the condition dr/dr_* = 0 always implies a local minimum of r.] and thus F(r) = 0 at r = a. The condition F(r) = 0 will also be satisfied at r = a in the limit of an infinitely long throat. In the limit r → a, the wormhole-like metric is of the form
ds^2 ≃ - C(a) (dt^2 - dr_∗^2) + a^2 dΩ^2,
describing a neighborhood of r = a with the topology R^2 × S^2. This resembles a traversable wormhole, although it terminates at the surface of a star rather than leading to an open space. It is relevant only when the radius of the star is smaller than the Schwarzschild radius.

If F(a)=0 but C(a) ≠ 0, there is no horizon at r = a. According to (<ref>), in order for F(r) to vanish, either ρ'(r) diverges at r = a, or the energy-momentum tensor satisfies the condition
T_uu(a) - T_uv(a) = - C(a)/(2κ a^2).
In fact, the condition (<ref>) is always satisfied if F(a) = 0 and C(a) ≠ 0. First, consider the possibility that ρ'(r) diverges at r = a. We expand C(r) in the limit r → a as
C(r) = C(a) + 2ρ_0 (r-a)^n + ⋯,
where 0 < n < 1 in order for ρ' to diverge at r=a. Plugging (<ref>) back into (<ref>) or (<ref>) and expanding around r=a by using (<ref>), we obtain
0 = (r-a)^{n-2} a^2 C(a) ρ_0 n(n-1) [C(a) + 2κ a^2 (T_uu(a) - T_uv(a))] + 𝒪((r-a)^{2n-2}).
This implies that the condition (<ref>) must be satisfied even if ρ' diverges as r→ a, and hence (<ref>) is a necessary condition to have a wormhole geometry near r=a, independently of whether ρ' diverges or not.

With the expansions (<ref>) and (<ref>) for C(r) and F(r), the absence of horizon (C(a) ≠ 0) means that K = 0. The equations for the metric (<ref>) and those for the energy-momentum tensor (<ref>)–(<ref>) remain valid.
Depending on the values of K' and M, the solutions that resemble wormholes are characterized as follows.

*If K' = 0,
κ T^v_u(a) = G^v_u = f_0^2/(2a c_0) > 0,
κ T^u_u(a) = G^u_u = - 1/a^2 + f_0^2/(2a c_0) = - 1/a^2 + κ T^v_u(a) > - 1/a^2,
κ T_θθ(a) = G_θθ = finite,
ds^2 ≃ - c_0 dt^2 + (c_0/f_0^2) (r-a)^{-1} dr^2 + a^2 dΩ^2 ≃ - c_0 dt^2 + c_0 dr_*^2 + a^2 dΩ^2,
where f_0^2/(2a c_0) ∼𝒪(1/a^2) and r = a + (f_0^2/4) r_*^2 (r_* ≥ 0). This is a wormhole with the neck at r_* = 0.

*If K' > 0 and K' < M,
κ T^v_u(a) = G^v_u = 0,
κ T^u_u(a) = G^u_u = - 1/a^2,
κ T_θθ(a) = G_θθ = finite,
ds^2 ≃ - c_0 dt^2 + c_0 dr_*^2 + a^2 dΩ^2,
where
r = a + [((M-K') f_0/(2M)) r_*]^{2M/(M-K')}.
By rewriting 2M/(M-K') = p/q, where p and q are co-prime integers, the geometry has the wormhole structure if p is even, with r ≥ a for arbitrary r_*. If neither p nor q is even, we have r > a for r_* > 0 and r < a for r_* < 0. If q is even, the above coordinates are well defined only for r_* > 0.

*If K' > 0 and K' ≥ M,
κ T^v_u(a) = G^v_u = 0,
κ T^u_u(a) = G^u_u = - 1/a^2,
κ T_θθ(a) = G_θθ = finite,
ds^2 ≃ - c_0 dt^2 + c_0 dr_*^2 + a^2 dΩ^2,
where
r = a + e^{f_0 r_*} (K' = M), r = a + [-((K'-M) f_0/(2M)) r_*]^{-2M/(K'-M)} (K' > M).
In these cases, the point r=a corresponds to r_*→ -∞. The speed of light is dr_*/dt = 1; hence it takes an infinite amount of time (change in t) to reach the point r = a from the viewpoint of a distant observer.

For all the wormhole-like geometries, the energy-momentum tensor must satisfy the condition (<ref>) and T^v_u(a) ≥ 0. (T^v_u(a) must be zero or positive for F(a)=0.) If T_uu(r) is always positive, the geometry has neither a horizon nor a wormhole-like structure.

§ CONCLUSION

In Secs. <ref> and <ref>, we considered different models of the vacuum energy-momentum tensor and studied its back reaction on the geometry. We summarize our results as follows.

*The perturbation theory for the Schwarzschild background breaks down at the horizon (in the Schwarzschild coordinates) in the expansion in Newton's constant.
*The Schwarzschild metric is modified in a very small neighborhood of the Schwarzschild radius (r - a_0 ≪α/a_0) by the quantum correction to the energy-momentum tensor.
*For the Boulware vacuum, there is no horizon for the model considered in Sec. <ref>. Instead, there is a wormhole-like geometry near the Schwarzschild radius. For the model considered in Sec. <ref>, there may or may not be a horizon, or a wormhole-like geometry, depending on the vacuum state.
*For the model considered in Sec. <ref>, if there are non-zero energy flows in the asymptotic region with an appropriate intensity, there is a fine-tuned solution with a horizon. Generic solutions have the wormhole-like geometry instead of the horizon.
*In all cases considered, the magnitude of the Einstein tensor (G^u_u, G^v_u, G^θ_θ) is of order 𝒪(1/a^2) or smaller.

These results contradict the conventional folklore that a small quantum correction [Of course, a classical correction to the energy-momentum tensor would have exactly the same effect through Einstein's equations.] would not destroy the horizon, and that the Boulware vacuum has a diverging (or Planck-scale) energy-momentum tensor at the horizon.
The diverging quantum effects at the horizon of the classical black hole geometries imply a modification of the saddle point of the path integral by the quantum effects. Once the back reaction of the quantum effects is taken into account, the geometry is modified at the horizon such that the energy-momentum tensor has no divergence, and the Boulware vacuum then corresponds to physical configurations.

The calculations leading to the results mentioned above demonstrated a connection between the vacuum energy-momentum tensor and the near-horizon/wormhole-like geometry. Hence, in Sec. <ref> we explored this connection for generic energy-momentum tensors, for solutions with a horizon or a wormhole-like structure. We summarize the results as follows.

*If T_uu (which equals T_vv) or T_uv is non-vanishing around the Schwarzschild radius, regardless of how small they are, there can be no horizon.
*If T_uu(a) = T_uv(a) = 0 and T^u_u(a) > - 1/(κ a^2), the geometry can have a horizon at r=a, and must be the Rindler space near the horizon, the same as the Schwarzschild black hole.
*If T_uu(a) = T_uv(a) = 0, T^u_u(a) = - 1/(κ a^2) and T_θθ(a) > 0, the geometry can have a horizon at r=a, and the near-horizon geometry is given by Rindler space or AdS_2 × S^2, the same as that of the Schwarzschild black hole or the extremal Reissner-Nordström black hole, respectively.
*If T_uu = T_vv is negative at r=a, and T_uu and T_uv satisfy
T_uu(a) - T_uv(a) = - C(a)/(2κ a^2),
the geometry cannot have a horizon there, but can have the wormhole-like structure, i.e. the function r can have a local minimum there.
*If T_uu = T_vv is positive around the Schwarzschild radius, there is neither a horizon nor a wormhole-like structure.

In particular, the models considered in Secs. <ref> and <ref> demonstrate that the necessary condition for the horizon (see item 1) is not guaranteed as a robust property of the matter fields. Although it is natural for the energy-momentum tensor to vanish in the bulk at the classical level, the quantum effects provide non-zero T_uu and T_vv in general. The horizon should be viewed as a rare structure that demands fine-tuning.

The readers may have reservations about some of the assumptions we made, such as the validity of the Einstein equation, the spherical symmetry, or the quantum models used to calculate the vacuum energy-momentum tensor. Even if not all of these assumptions are reliable, our work should have raised reasonable doubt against the common opinion that the back reaction of quantum effects can only have a negligible effect on the existence of the horizon <cit.>. In the examples we studied, the existence of the horizon is sensitive to the details of the energy-momentum tensor.

It will be interesting to extend our analysis to the dynamical processes of gravitational collapse. In this paper, we have studied static geometries for which the Killing horizon, event horizon and apparent horizon coincide, but they could be different in time-dependent geometries. For a gravitational collapse, the initial spacetime is typically the flat spacetime. At a later time, it would approximately be the Unruh vacuum near the Schwarzschild radius instead of the Boulware vacuum. (It is not exactly the same as the Unruh vacuum, since the boundary condition should be imposed at the past horizon for the Unruh vacuum.)
There would be an outgoing energy flux corresponding to Hawking radiation at large r, and the energy-momentum tensor near the surface of the star would also be modified. With this correction to T_uu, the status of the future horizon can be affected. The qualitative nature of the space-time geometry at a given constant u is expected to resemble that of the static geometry (e.g. the wormhole-like structure). The Killing horizon can be excluded, as discussed in Sec. <ref>, if there is a non-zero outgoing energy flow. The apparent horizon and event horizon, however, can in principle appear. Nevertheless, let us not forget that the expectation of a horizon in the conventional model of gravitational collapse is based on our understanding of the static Schwarzschild solution, and we have just shown that the horizon of the Schwarzschild solution can be easily removed by the back reaction of the vacuum energy. We believe that a better understanding of static black holes would allow us to describe dynamical black holes more precisely.

For the cases of wormhole-like geometries, inside the throat (or turning point) the outgoing null geodesics converge and the ingoing null geodesics diverge. If there is matter inside the wormhole, the structure along the outgoing null geodesics is qualitatively the same as in the conventional model of black hole evaporation. The structure along the ingoing null geodesics, which is different from the conventional model, would possibly be modified when the time evolution due to the evaporation process is taken into account. From the viewpoint of a distant observer, this scenario is compatible with the conventional model, although the space-like singularity at r = 0 would be replaced by the internal space inside the throat, that is, a bubble of space-time attached to the outer world through a throat of zero or Planckian-scale radius. More details about this scenario of gravitational collapse will be reported in a separate publication.

Another scenario of gravitational collapse is described by the KMY model <cit.> (see also <cit.>–<cit.>), which is given by exact solutions to the semi-classical Einstein equation (<ref>), including the back reaction of Hawking radiation. It was shown that Hawking radiation is created only when the collapsing shell is still (marginally) outside the Schwarzschild radius. If the star is completely evaporated into Hawking radiation within finite time, regardless of how long it takes, the apparent horizon would never arise. In the KMY model, just like in our results for the static black hole, the horizon is removed by a modification of the geometry within a Planck-scale distance from the Schwarzschild radius due to the back reaction of the energy-momentum tensor of the quantum fields.

While different quantum fields can have different contributions to the vacuum energy-momentum tensor, we believe that the general connection between the energy-momentum tensor and the near-horizon geometry will be important for a comprehensive understanding of the issue of the formation/absence of the horizon. This work is a first step in this direction.

There are other works <cit.> that have also proposed the absence of the horizon in gravitational collapse based on different calculations. However, it might be puzzling to many how the conventional picture about horizon formation could be wrong. We find that most of the arguments for the formation of a horizon neglect the vacuum energy's modification of the geometry within a Planck-scale distance from the Schwarzschild radius.
This paper points out that these approximations are not reliable.

§ ACKNOWLEDGEMENT

The authors would like to thank Hikaru Kawai for sharing his original ideas, and to thank Jan de Boer, Yuhsi Chang, Hsin-Chia Cheng, Yi-Chun Chin, Takeo Inami, Hsien-chung Kao, Per Kraus, Matthias Neubert, Shu-Heng Shao, Masahito Yamazaki, I-Sheng Yang, Shu-Jung Yang and Xi Yin for discussions. P.M.H. thanks the hospitality of the High Energy Theory Group at Harvard University, where part of this work was done. The work is supported in part by the Ministry of Science and Technology, R.O.C. (project no. 104-2112-M-002-003-MY3) and by National Taiwan University.

Gerlach:1976ji U. H. Gerlach, “The Mechanism of Black Body Radiation from an Incipient Black Hole,” Phys. Rev. D14, 1479 (1976). doi:10.1103/PhysRevD.14.1479
FuzzBall O. Lunin and S. D. Mathur, “AdS / CFT duality and the black hole information paradox,” Nucl. Phys. B623, 342 (2002) [hep-th/0109154]. O. Lunin and S. D. Mathur, “Statistical interpretation of Bekenstein entropy for systems with a stretched horizon,” Phys. Rev. Lett. 88, 211303 (2002) [hep-th/0202072]. S. D. Mathur, “Resolving the black hole causality paradox,” arXiv:1703.03042 [hep-th].
FuzzBall2 O. Lunin, J. M. Maldacena and L. Maoz, “Gravity solutions for the D1-D5 system with angular momentum,” hep-th/0212210. S. D. Mathur, “The Fuzzball proposal for black holes: An Elementary review,” Fortsch. Phys. 53 (2005) 793 [hep-th/0502050]. V. Jejjala, O. Madden, S. F. Ross and G. Titchener, “Non-supersymmetric smooth geometries and D1-D5-P bound states,” Phys. Rev. D71 (2005) 124030 [hep-th/0504181]. V. Balasubramanian, E. G. Gimon and T. S. Levi, “Four Dimensional Black Hole Microstates: From D-branes to Spacetime Foam,” JHEP 0801 (2008) 056 [hep-th/0606118]. I. Bena and N. P. Warner, “Black holes, black rings and their microstates,” Lect. Notes Phys. 755 (2008) 1 [hep-th/0701216]. K. Skenderis and M. Taylor, “The fuzzball proposal for black holes,” Phys. Rept. 467 (2008) 117 [arXiv:0804.0552 [hep-th]]. I. Bena, S. Giusto, E. J. Martinec, R. Russo, M. Shigemori, D. Turton and N. P. Warner, “Smooth horizonless geometries deep inside the black-hole regime,” Phys. Rev. Lett. 117 (2016) no.20, 201601 [arXiv:1607.03908 [hep-th]].
Barcelo:2007yk C. Barcelo, S. Liberati, S. Sonego and M. Visser, “Fate of Gravitational Collapse in Semiclassical Gravity,” Phys. Rev. D77, 044032 (2008) [arXiv:0712.1130 [gr-qc]].
Vachaspati:2006ki T. Vachaspati, D. Stojkovic and L. M. Krauss, “Observation of incipient black holes and the information loss problem,” Phys. Rev. D76, 024005 (2007) [gr-qc/0609024].
Krueger:2008nq T. Kruger, M. Neubert and C. Wetterich, “Cosmon Lumps and Horizonless Black Holes,” Phys. Lett. B663, 21 (2008) doi:10.1016/j.physletb.2008.03.051 [arXiv:0802.4399 [astro-ph]].
Fayos:2011zza F. Fayos and R. Torres, “A quantum improvement to the gravitational collapse of radiating stars,” Class. Quant. Grav. 28, 105004 (2011). doi:10.1088/0264-9381/28/10/105004
Kawai:2013mda H. Kawai, Y. Matsuo and Y. Yokokura, “A Self-consistent Model of the Black Hole Evaporation,” Int. J. Mod. Phys. A28, 1350050 (2013) [arXiv:1302.4733 [hep-th]].
Kawai:2014afa H. Kawai and Y. Yokokura, “Phenomenological Description of the Interior of the Schwarzschild Black Hole,” Int. J. Mod. Phys. A30, 1550091 (2015) doi:10.1142/S0217751X15500918 [arXiv:1409.5784 [hep-th]].
Ho:2015fja P. M. Ho, “Comment on Self-Consistent Model of Black Hole Formation and Evaporation,” JHEP 1508, 096 (2015) doi:10.1007/JHEP08(2015)096 [arXiv:1505.02468 [hep-th]].
Kawai:2015uya H. Kawai and Y. Yokokura, “Interior of Black Holes and Information Recovery,” Phys. Rev. D93, no. 4, 044011 (2016) doi:10.1103/PhysRevD.93.044011 [arXiv:1509.08472 [hep-th]].
Ho:2015vga P. M. Ho, “The Absence of Horizon in Black-Hole Formation,” Nucl. Phys. B909, 394 (2016) doi:10.1016/j.nuclphysb.2016.05.016 [arXiv:1510.07157 [hep-th]].
Ho:2016acf P. M. Ho, “Asymptotic Black Holes,” arXiv:1609.05775 [hep-th].
Kawai:2017txu H. Kawai and Y. Yokokura, “A Model of Black Hole Evaporation and 4D Weyl Anomaly,” arXiv:1701.03455 [hep-th].
Mersini-Houghton L. Mersini-Houghton, “Backreaction of Hawking Radiation on a Gravitationally Collapsing Star I: Black Holes?,” Phys. Lett. B, 16 September 2014 [arXiv:1406.1525 [hep-th]]. L. Mersini-Houghton and H. P. Pfeiffer, “Back-reaction of the Hawking radiation flux on a gravitationally collapsing star II: Fireworks instead of firewalls,” arXiv:1409.1837 [hep-th].
Saini:2015dea A. Saini and D. Stojkovic, “Radiation from a collapsing object is manifestly unitary,” Phys. Rev. Lett. 114, no. 11, 111301 (2015) [arXiv:1503.01487 [gr-qc]].
Baccetti V. Baccetti, R. B. Mann and D. R. Terno, “Role of evaporation in gravitational collapse,” arXiv:1610.07839 [gr-qc]. V. Baccetti, R. B. Mann and D. R. Terno, “Horizon avoidance in spherically-symmetric collapse,” arXiv:1703.09369 [gr-qc]. V. Baccetti, R. B. Mann and D. R. Terno, “Do event horizons exist?,” arXiv:1706.01180 [gr-qc].
CriticalPhenomena M. W. Choptuik, “Universality and scaling in gravitational collapse of a massless scalar field,” Phys. Rev. Lett. 70, 9 (1993). doi:10.1103/PhysRevLett.70.9 C. Gundlach, “Critical phenomena in gravitational collapse,” Adv. Theor. Math. Phys. 2, 1 (1998) [gr-qc/9712084]. For a review, see: C. Gundlach and J. M. Martin-Garcia, “Critical phenomena in gravitational collapse,” Living Rev. Rel. 10, 5 (2007) doi:10.12942/lrr-2007-5 [arXiv:0711.4620 [gr-qc]].
Davies:1976ei P. C. W. Davies, S. A. Fulling and W. G. Unruh, “Energy-Momentum Tensor Near an Evaporating Black Hole,” Phys. Rev. D13, 2720 (1976). doi:10.1103/PhysRevD.13.2720
Parentani:1994ij R. Parentani and T. Piran, “The Internal geometry of an evaporating black hole,” Phys. Rev. Lett. 73, 2805 (1994) doi:10.1103/PhysRevLett.73.2805 [hep-th/9405007].
Brout:1995rd R. Brout, S. Massar, R. Parentani and P. Spindel, “A Primer for black hole quantum physics,” Phys. Rept. 260, 329 (1995) doi:10.1016/0370-1573(95)00008-5 [arXiv:0710.4345 [gr-qc]].
Ayal:1997ab S. Ayal and T. Piran, Phys. Rev. D56 (1997) 4768 doi:10.1103/PhysRevD.56.4768 [gr-qc/9704027].
Trivedi:1992vh S. P. Trivedi, “Semiclassical extremal black holes,” Phys. Rev. D47 (1993) 4233 doi:10.1103/PhysRevD.47.4233 [hep-th/9211011].
Strominger:1993yf A. Strominger and S. P. Trivedi, “Information consumption by Reissner-Nordstrom black holes,” Phys. Rev. D48 (1993) 5778 doi:10.1103/PhysRevD.48.5778 [hep-th/9302080].
Sorkin:2001hf E. Sorkin and T. Piran, “Formation and evaporation of charged black holes,” Phys. Rev. D63 (2001) 124024 doi:10.1103/PhysRevD.63.124024 [gr-qc/0103090].
Hong:2008mw S. E. Hong, D. i. Hwang, E. D. Stewart and D. h. Yeom, “The Causal structure of dynamical charged black holes,” Class. Quant. Grav. 27 (2010) 045014 doi:10.1088/0264-9381/27/4/045014 [arXiv:0808.1709 [gr-qc]]. D. i. Hwang and D. h. Yeom, “Internal structure of charged black holes,” Phys. Rev. D84 (2011) 064020 doi:10.1103/PhysRevD.84.064020 [arXiv:1010.2585 [gr-qc]].
Callan:1992rs C. G. Callan, Jr., S. B. Giddings, J. A. Harvey and A. Strominger, “Evanescent black holes,” Phys. Rev.
D45 (1992) no.4, R1005 doi:10.1103/PhysRevD.45.R1005 [hep-th/9111056].
Russo:1992ax J. G. Russo, L. Susskind and L. Thorlacius, “The Endpoint of Hawking radiation,” Phys. Rev. D46 (1992) 3444 doi:10.1103/PhysRevD.46.3444 [hep-th/9206070].
Schoutens:1993hu K. Schoutens, H. L. Verlinde and E. P. Verlinde, “Quantum black hole evaporation,” Phys. Rev. D48 (1993) 2670 doi:10.1103/PhysRevD.48.2670 [hep-th/9304128].
Piran:1993tq T. Piran and A. Strominger, “Numerical analysis of black hole evaporation,” Phys. Rev. D48 (1993) 4729 doi:10.1103/PhysRevD.48.4729 [hep-th/9304148].
Davies:1976hi P. C. W. Davies and S. A. Fulling, “Radiation from a moving mirror in two-dimensional space-time: conformal anomaly,” Proc. Roy. Soc. Lond. A348, 393 (1976).
Boulware D. G. Boulware, “Quantum Field Theory in Schwarzschild and Rindler Spaces,” Phys. Rev. D11, 1404 (1975). doi:10.1103/PhysRevD.11.1404 D. G. Boulware, “Hawking Radiation and Thin Shells,” Phys. Rev. D13, 2169 (1976). doi:10.1103/PhysRevD.13.2169
Mukhanov:1994ax V. F. Mukhanov, A. Wipf and A. Zelnikov, “On 4-D Hawking radiation from effective action,” Phys. Lett. B332 (1994) 283 [hep-th/9403018].
Fabbri:2003fa A. Fabbri, S. Farese and J. Navarro-Salas, “Generalized Virasoro anomaly for dilaton coupled theories,” hep-th/0307096.
Fabbri:2005nt A. Fabbri, S. Farese, J. Navarro-Salas, G. J. Olmo and H. Sanchis-Alepuz, “Semiclassical zero-temperature corrections to Schwarzschild spacetime and holography,” Phys. Rev. D73 (2006) 104023 doi:10.1103/PhysRevD.73.104023 [hep-th/0512167]. A. Fabbri, S. Farese, J. Navarro-Salas, G. J. Olmo and H. Sanchis-Alepuz, “Static quantum corrections to the Schwarzschild spacetime,” J. Phys. Conf. Ser. 33, 457 (2006) doi:10.1088/1742-6596/33/1/059 [hep-th/0512179].
Christensen:1977jc S. M. Christensen and S. A. Fulling, “Trace Anomalies and the Hawking Effect,” Phys. Rev. D15, 2088 (1977). doi:10.1103/PhysRevD.15.2088
Bardeen:1981zz J. M. Bardeen, “Black Holes Do Evaporate Thermally,” Phys. Rev. Lett. 46, 382 (1981). doi:10.1103/PhysRevLett.46.382
Abdolrahimi:2016emo S. Abdolrahimi, D. N. Page and C. Tzounis, “Ingoing Eddington-Finkelstein Metric of an Evaporating Black Hole,” arXiv:1607.05280 [hep-th].
"authors": [
"Pei-Ming Ho",
"Yoshinori Matsuo"
],
"categories": [
"hep-th",
"gr-qc"
],
"primary_category": "hep-th",
"published": "20170325080026",
"title": "Static Black Holes With Back Reaction From Vacuum Energy"
} |
All-Path Routing Protocols: Analysis of Scalability and Load Balancing Capabilities for Ethernet Networks

Elisa Rojas, Guillermo Ibanez, Jose Manuel Gimenez-Guzman, and Juan A. Carral

Elisa Rojas is with the Research Department, Telcaria Ideas S.L., 28911 Leganés (Madrid), Spain. E-mail: [email protected]. Guillermo Ibanez, Jose Manuel Gimenez-Guzman and Juan A. Carral are with the Departamento de Automática, Edificio Politécnico, University of Alcala, 28871 Alcalá de Henares (Madrid), Spain. E-mails: [email protected], [email protected], [email protected]

This paper presents a scalability and load balancing study of the All-Path protocols, a family of distributed switching protocols based on path exploration. ARP-Path is the main protocol: it explores every possible path from source to destination by using ARP messages and selects the lowest latency path. Flow-Path and Bridge-Path are, respectively, flow-based and bridge-based versions, in contrast with the source-address-based approach of ARP-Path. While preserving the main advantages of ARP-Path, Flow-Path provides full independence of flows for path creation, guaranteeing path symmetry and increased path diversity, while Bridge-Path increases scalability by reducing forwarding table entries at core bridges. We compare the characteristics of each protocol and the convenience of using each one depending on the topology and the type of traffic. Finally, we prove their load balancing capabilities analytically and via simulation.

Keywords: Ethernet, Switching, Bridging, Routing bridges, Shortest Path Bridging, Data Centers

§ INTRODUCTION

Ethernet switched networks offer the highest performance/cost ratio for local, campus, data center and metro networks, with high compatibility between elements and a simpler configuration than IP. Nevertheless, traditional layer 2 protocols either severely limit the network size and performance by blocking redundant links to prevent loops –like the Spanning Tree Protocol (STP/RSTP) <cit.>– or require additional overhead to compute the paths –like SPB <cit.> or TRILL RBridges <cit.>–.

Recently, the Software-Defined Networking (SDN) paradigm has unveiled a world of possibilities for Ethernet networks. Popular SDN frameworks, such as OpenDaylight (ODL) <cit.> or Open Network Operating System (ONOS) <cit.>, have developed applications that implement switching protocols. Thanks to their global control of the network components, computing optimal paths is particularly easy. However, SDN still has to overcome some challenges <cit.>, such as scalability issues <cit.>.

In this situation, simple, distributed, zero-configuration protocols that remove the limitations of RSTP and, at the same time, allow scaling Ethernet, might become the key to boost its deployment on campus, data center and enterprise networks.
The ARP-Path protocol emerged as a shortest path proposal <cit.> based on the exploration <cit.> of the network topology without requiring complex link-state protocols, similarly to the ideas shown in <cit.>. More concretely, ARP-Path is a bridging protocol that finds the lowest latency path to the destination. It is based on evolved bridging mechanisms that take advantage of the information conveyed by the ARP protocol message dialog to construct the forwarding table.

ARP-Path is the first protocol of the All-Path family <cit.>. It provides high path diversity, because path selection is sensitive to latency, and it exhibits native load balancing with excellent results in throughput <cit.>. However, some scenarios demand higher scalability or finer path granularity for load balancing, especially when data traffic is highly asymmetric. The Flow-Path and Bridge-Path protocols are ARP-Path variants designed to fulfill these requirements. In particular, Flow-Path is able to provide finer load balancing, while Bridge-Path increases the scalability provided by ARP-Path. As all the above-mentioned protocols are based on similar principles, we define the so-called All-Path family, which includes these three protocols.

This paper describes and compares the protocols of the All-Path family in terms of their scalability and load balancing features. In Section <ref> the protocols under study are described in detail, while in Section <ref> we propose an analysis of scalability versus load balancing. In Section <ref> we develop an analytical model to evaluate the load distribution in the All-Path family protocols. Afterwards, Section <ref> analyzes the state of the art, as well as possible evolutions of the All-Path family towards a hybrid SDN paradigm. Finally, in Section <ref>, we summarize the main conclusions of the paper.

§ ALL-PATH FAMILY

To understand the operation of the All-Path family, we need to describe the ARP-Path protocol first <cit.>, since it originated the rest of the family and its principles are applicable to the rest of the All-Path protocols.

ARP-Path obtains its name from the Address Resolution Protocol (ARP), invoked in IPv4 prior to any communication between a couple of final hosts, whose messages (ARP Request and ARP Reply) are used to explore the whole network and, at the same time, build a path between those final hosts. In this way, ARP-Path explores all possible paths in the network and selects the minimum latency path just by snooping the ARP messages, without any change (neither in the messages nor in the final hosts). Besides, no IP information is needed; therefore its IPv6 equivalent, the Neighbour Discovery Protocol (NDP), could be used in an analogous way to explore those paths.

The operation of ARP-Path is described in the next subsection. The Flow-Path protocol is explained in subsection <ref>; it follows similar steps in the creation of paths, but it creates unique paths per pair of hosts or per flow, instead of paths shared by different final hosts (the case of ARP-Path). Finally, as explained in subsection <ref>, Bridge-Path generates one path per edge bridge in the topology, which causes groups of hosts connected to a single bridge or switch to share a common path.
§.§ ARP-Path

When a source host A starts a communication with a destination host B, A emits an ARP Request message that is replied to with an ARP Reply from B containing the MAC address of B previously requested by A.

The first message (ARP Request) has a broadcast destination address, so after arriving at the first switch, switch 1 in Fig. <ref>, this switch locks the input port of the frame with the source address of the message, A, and sends the frame through all its ports but the one that received it. Then the ARP Request message reaches switches 2 and 4, which carry out the same action, that is, locking the source address to the input port that received the first copy of the frame –i.e. the fastest copy– and keep broadcasting the frame. When any of the switches receives a later copy of the frame at some other port, this copy is discarded, as the path followed by that frame is considered slower. This way, loops are avoided <cit.>.

Finally, one of the message copies, the fastest one, reaches the destination host B after having locked one port in each switch traversed, which means that every switch in the network has the path to A, as seen in Fig. <ref>.

Every locked port is a table entry with four fields: MAC address, associated port, state and timer. After a short time, the entry automatically goes from locked to learnt. The reason why there are two states is that the first state (locked) is needed to avoid loops that might be created by broadcast frames, so it is fixed –no modification allowed– and has a short timer; the second (learnt) just shows the learnt path to some final host, so it is flexible –it is modified based on network changes– and has a longer timer.

When the destination host B replies to the ARP Request with an ARP Reply, this unicast frame directed to A is able to follow the path to A that has just been explored and, at the same time, to build a path to B. To do this, every switch forwards the ARP Reply through the port associated with A, as with any other unicast frame, but it also associates the input port with the address of B. In this case, the created entry state is directly learnt, since it is not necessary to prevent loops anymore. Therefore, switch 3 receives the message, associates B with the input port where the frame was received and sends it through the port associated with A, passing through switch 2 and finally 1, which operate in the same way, until reaching host A, as Fig. <ref> shows.

After the standard ARP procedure, the communication between A and B starts by means of the previously created path that involves the switches 1↔ 2↔ 3. Moreover, those entries (the ones to reach A and the ones to reach B) can be shared by third-party hosts; that is, if there was a host C connected to switch 3, this host could use the same path towards A as the one used by B in Fig. <ref>, and the same would happen to a host D connected to switch 4, but in this case the path would be defined by switches 1↔ 4.

§.§ Flow-Path

The Flow-Path protocol subscribes to the same philosophy as ARP-Path: snooping ARP messages to build paths. However, Flow-Path paths are unique per couple of hosts –or per flow– and not shared with any other host outside the ARP message exchange.

Figure <ref> shows how switches lock the ports belonging to the path between A and B directed to A.
Since B's MAC address is still unknown [In fact, ARP aims to discover B's MAC address.], Flow-Path temporarily writes down the IP addresses of hosts A and B in order to distinguish the flow from any other in which A also participates. Meanwhile, the entry is shown as A?, where the question mark refers to B's address, which will be known after receiving the corresponding ARP Reply.

As observed in Fig. <ref>, in an analogous way to ARP-Path, the ARP Reply message makes the switches learn the ports of the path between A and B directed to B, denominated BA, and, at the same time, confirms the entries named A?, changing their state from locked to learnt and their value to AB, as the destination MAC address is no longer unknown. Once the path is set, communication between A and B can start, and the path is defined by switches 1↔ 2↔ 3.

The difference with ARP-Path is that if another host C connected to switch 3 wanted to send traffic to A, the path from C to A might not be the same as the one from B to A, since paths are now created independently; that is, the path between C and A would create entries named AC and CA, and those could be coincident with the ports of AB and BA or not, depending on the already existing traffic in the network, since the minimum latency path might be a different one.

Thus, Flow-Path guarantees the independence of flows which, at the same time, can guarantee a better distribution of the load in the network in case a certain host exchanges messages with more than one destination. However, the disadvantage of this proposal is that forwarding tables are bigger, and independent paths might not be required if traffic is low.

§.§ Bridge-Path

The Bridge-Path protocol is based on the opposite idea to Flow-Path: instead of creating independent paths per flow to balance the load, the objective is to share paths among even more hosts than with ARP-Path, by building routes per edge switch (which is connected to a group of hosts) and not per individual final host. In this way, forwarding tables are smaller, which guarantees higher scalability.

There are three variants to deploy this protocol without having to modify the ARP messages:
* Reusing the VLAN tag (ARP-PathV).
* Encapsulating the frame with MAC-in-MAC (ARP-PathM).
* Translating the host address into a hierarchical address in which a certain part or field carries the ID of the edge switch (Path-Moose <cit.>).
The first and second variants follow the same basics of encapsulation as SPBV and SPBM respectively <cit.>, while the third one is based on the MOOSE protocol <cit.>.

To explain the operation of Bridge-Path, we consider the specific case of ARP-PathM, but note that any other variant would be analogous. When a host A wants to communicate with a host B, the message emitted by the source (be it an ARP message or not) is encapsulated in the switch that serves it with a new Ethernet header, which indicates source and destination with a MAC address field that encodes some type of ID of the edge switch. This encapsulated frame enters the network and the rest of the switches operate in the same way as with ARP-Path (they do not necessarily know that the frame is encapsulated with MAC-in-MAC and that the MAC addresses represent IDs of the edge switches instead of hosts), until the frame reaches the switch serving the destination host, which decapsulates it and sends the original frame to the destination host.
That is, the only difference resides in the edge switches, which are required to encapsulate and decapsulate in order to generate grouped paths.

Bridge-Path's path learning operation is shown in Figs. <ref> and <ref>, respectively. Broadcast messages do not change the destination MAC address after encapsulation (remaining FF:FF:FF:FF:FF:FF, FF for short in the figure), but they do change the source (from A to 1, which might be the MAC of the edge switch or an ID of it, in the figure). In the case of unicast messages, both addresses are translated into their corresponding edge switches. In Fig. <ref> the address of host B is translated into 3 and the address of host A into 1, which is known thanks to the previous ARP Request, and forwarding is done based on 3 and 1, thus ignoring the encapsulated addresses B and A, respectively. Note that edge switches need to save the information about other edge switches and their connected hosts in order to proceed. This information is conveyed by the ARP messages.

In Bridge-Path, if there were a host C connected to switch 3 interested in communicating with host A, it would share the same path from B to A, which is the one indicated by the entries of address 1, similarly to ARP-Path. The difference, though, is that if there were a host D connected to the edge bridge 1, the path from C to D would still be the same as the one from B to A, which is not necessarily true in ARP-Path but will always happen in Bridge-Path. As routes in the Bridge-Path protocol are shared by several hosts, scalability is improved with respect to ARP-Path, in exchange for worse load balancing capabilities.

§ COMPARISON OF THE ALL-PATH FAMILY PROTOCOLS

After describing the different All-Path protocols in the previous section, a few conclusions can be easily drawn. Flow-Path is expected to achieve better load balancing, since it is able to create more than a single path per final host, followed by ARP-Path with one path per host and, finally, Bridge-Path, which on average builds less than one path per host because routes are shared among the set of hosts attached to a common switch. However, the size of the forwarding tables is also much bigger for Flow-Path, followed by ARP-Path and, the smallest, Bridge-Path, this being a crucial parameter for evaluating scalability. Obviously, table sizes are proportional to the number of paths created per host. Consequently, in this section we analyze the suitability of each protocol of the All-Path family for different topologies to reach a good tradeoff between both capabilities: load balancing and scalability (in terms of table sizes).

§.§ Load Balancing Analysis

In this section, we take as a reference of load balancing the theoretical total number of independent paths that a protocol can build per host, in order to compare the three All-Path protocols. The reason for using this parameter to measure the load balancing capabilities is explained in detail in the next section, where we analytically prove that All-Path protocols use all possible paths evenly (since All-Path protocols choose the lowest latency paths, and this type of creation tends to select the least used resources).
The number of independent bidirectional paths that can be created by Flow-Path, ARP-Path and Bridge-Path on average, denominated P_FP, P_AP and P_BP respectively, is:
P_FP = F_B = H·(H-1)/2,
P_AP = H/2,
P_BP = B_E/2,
where:
* F_B: average number of bidirectional flows in the network.
* H: average number of active hosts.
* B_E: average number of active edge switches (B_E ≤ H, since an active edge switch is always attached to one or more active hosts).
Note that we are considering bidirectional paths, that is, the resources used in both directions of the communication. In a communication between a host A and a host B, even if the path from A to B is different from the path from B to A, in practice we can consider it a bidirectional path in terms of the resources being used, thus simplifying the analysis.

As the equations show, the number of independent paths that Flow-Path can create is the highest, followed by ARP-Path and finally Bridge-Path (P_FP ≥ P_AP ≥ P_BP), as expected. In order to measure the load balancing capability, we will compare this theoretical value with the actual number of available paths in the network, since the number of theoretical paths is bounded by the actual number of possible paths in the network (Ψ).

§.§ Scalability Analysis

For this analysis, we will refer to the total number of table entries required in all the switches of the network, as it is the only difference among the All-Path protocols regarding this parameter. Thus, the total number of table entries in the network created on average by Flow-Path, ARP-Path and Bridge-Path, denominated T_FP, T_AP and T_BP respectively, is:
T_FP = F_U · b = H·(H-1)·b,
T_AP = H·(b + L_e),
T_BP = B_E·(b + L_e),
where:
* F_U: average number of unidirectional flows in the network (F_U = 2·F_B, since a bidirectional flow can be seen as two unidirectional flows, and each direction of the flow needs a table entry).
* H: average number of active hosts.
* B_E: average number of active edge switches (B_E ≤ H, since an active edge switch is always attached to one or more active hosts).
* b: average number of switches that form a path for a flow or couple of hosts.
* L_e: average number of switches that also share the path to the same destination from different sources (note that L_e switches are not included in b).
Note that we have chosen to represent these last equations as a function of the average number of unidirectional flows, instead of the bidirectional flows, because they are more easily deduced in this way, but it is possible to substitute F_U = 2·F_B if we want them to depend on the same parameter.

In this case, Flow-Path generates a higher number of table entries than ARP-Path, and ARP-Path creates more entries than Bridge-Path, the results being proportional to the square of H for Flow-Path, to H for ARP-Path and, finally, to a fraction of H for Bridge-Path.

Another parameter to take into account is the average number of switches that form the path, b, since the three equations are a function of it. However, ARP-Path and Bridge-Path in fact depend on the sum of b and L_e because paths are shared, i.e.,
when a flow creates a path (defined by b switches on average), different sources can join the already existing paths just by adding branches (defined by L_e switches on average), defining a tree in the end (b + L_e), while Flow-Path generates a single path per established communication.

If we calculate the quotient between the previous equations, we obtain the following ratios:
R_FA = T_FP/T_AP = (H·(H-1)·b)/(H·(b+L_e)) = (H-1)·b/(b+L_e),
R_AB = T_AP/T_BP = (H·(b+L_e))/(B_E·(b+L_e)) = H/B_E ≥ 1.
As shown in Eq. <ref>, the ratio R_FA between the number of table entries of Flow-Path and ARP-Path does not only depend on the average number of active hosts (H-1), but also on the network shape (b/(b+L_e)): the wider the network is, the higher L_e will be and the lower the ratio R_FA. Eq. <ref> shows that the ratio R_AB between ARP-Path and Bridge-Path will always be greater than or equal to 1, depending on the average number of hosts per edge switch, as we expected.

§.§ Numerical Evaluation

With the objective of assessing which is the best protocol to be used in network routing, we have evaluated the three above-mentioned All-Path protocols in two different meshed network topologies.

§.§.§ Simple grid network topology

The first network topology under study is a simple grid with size n × n. In Fig. <ref> we show an example of that topology for n=3, i.e. a topology with 4 edge bridges. Note that in that figure shaded nodes represent edge bridges, i.e. those bridges connected to other bridges and final systems, while white nodes represent core bridges, i.e. those bridges that are connected only to other bridges. We will study the number of table entries and the number of paths as a function of the topology size n. At the same time, n affects parameters b and L_e. We also consider H, the mean number of active hosts in the topology.

As can be seen from Fig. <ref>, the ratio between the number of table entries of Flow-Path and ARP-Path decreases as the topology size increases and becomes wider. For example, when H=12 (three active hosts on average in every one of the four edge bridges), that ratio is different from 11 (i.e. H-1), as one could intuitively think in advance. Instead, the ratio is closer to 4 as the topology (n) increases. This is because, as stated in Eq. <ref>, the network shape (which is the factor b/(b+L_e)) also affects R_FA. Meanwhile, the ratio between ARP-Path and Bridge-Path is 1, 2 and 3 for, respectively, H=4, H=8 and H=12, which is the average number of hosts per edge bridge.

To explore the possible paths, we have taken into account the paths between the opposite sides of the network, i.e. between bridges 1 and 9 or 3 and 7 (Fig. <ref>). In the case of possible paths, we have considered only the shortest paths. For example, for n=2 there are 2 shortest paths (from 3 bridges), for n=3 there are 6 paths (from 5 bridges), for n=4 there are 20 possible shortest paths, and so on, this increase being exponential. From Fig. <ref>, we can conclude that, as H increases, Flow-Path becomes more suitable, mainly as n is higher, because path diversity increases about 10 times in relation to that of ARP-Path or Bridge-Path, with table sizes only 4 times larger. However, for smaller values of H and n, the best choice is Bridge-Path, with a much lower cost.

§.§.§ Crossed grid network topology

Now we consider a topology that is similar to the previous one, but including crossed diagonal links between bridges, as shown, for n=3, in Fig. <ref>. As in Fig. <ref>, shaded nodes represent edge bridges while white nodes represent core bridges.
The main peculiarity of this topology in comparison with the simple grid is that now there exists only one shortest path, which is the one that traverses the main central diagonal of the grid from one end to the other. Notwithstanding, the ratio between the numbers of table entries remains the same (as shown in Fig. <ref>). If we consider as possible paths the shortest one and also all those paths that have one more hop than the shortest one (we exclude longer paths, as they are unlikely to be used, although not impossible if the rest are heavily loaded), we obtain the results shown in Fig. <ref>. This figure shows that there are some cases where Flow-Path is not necessary, as the number of generated paths is higher than the number of possible paths in the topology, so we can save table entries and still share paths properly by just using ARP-Path, for example.

§ LOAD DISTRIBUTION ANALYSIS IN THE ALL-PATH FAMILY

In the previous section, we have used the number of possible paths as the parameter to measure the load balancing capabilities of the different All-Path protocols. In this section, we analytically show how the procedure followed to build a path under an All-Path protocol results in an even load distribution across a network, i.e. when there are several paths with similar features, all of them are equally used. For this purpose, the way that an All-Path protocol sets up a path can be modeled as follows. As shown in Fig. <ref>, new flows, which arrive at the system at mean rate λ and request a holding time with rate μ, are routed to any of the possible paths P_i, with N being the number of possible paths (P_1,...,P_N) between source and destination. We define L_i as the capacity of link i, l_i(t) as the available capacity of link i at time t, and C_i as the maximum capacity of a path, which is determined by its lowest capacity link, as it acts as a bottleneck:
C_i = min(L_j), ∀ j ∈ P_i.
The scheduling policy of any All-Path protocol is based on the selection of the path with the lowest latency. The latency of a path can be computed as the sum of the latencies of all links of the path. Note that a link can belong to several paths simultaneously. For each hop in the path, the latency that a packet will experience is the sum of the transmission, propagation, queueing and processing delays (d_trans, d_prop, d_queue and d_proc, respectively). We can postulate that both d_prop and d_proc are independent of the system load, so we can omit them in our analysis. However, the sum of d_queue and d_trans will highly depend on the load. Basically, choosing the lowest latency path is equivalent to choosing the path with the highest number of available resources, because as the available throughput increases, d_trans decreases and queues are shorter, so d_queue also decreases. For that reason, we have assumed in our analysis that All-Path protocols choose the path with the maximum available capacity, which is expected to have the minimum delay (which constitutes the real operation of the protocols). Given the above description, the behavior of the system can be described by a discrete-state continuous-time process.
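Before formalizing this process, note that the dispatching rule itself can be stated operationally; the following discrete-event sketch is our own illustration of the max-available-capacity policy (the capacities, rates and flow counts are arbitrary assumptions, and the exponential holding times anticipate the model below):

```python
# Toy discrete-event simulation of the All-Path dispatching rule: every
# arriving flow is sent to the path with the largest available capacity,
# with random tie-breaking, mimicking lowest-latency path selection.
import heapq
import random

def simulate(n_paths=6, capacity=20, lam=60.0, mu=1.0, n_flows=100_000, seed=1):
    random.seed(seed)
    avail = [capacity] * n_paths   # available resource units per path
    busy = [0.0] * n_paths         # time-integral of (capacity - avail_i)
    dep = []                       # min-heap of (departure time, path)
    t, lost, served = 0.0, 0, 0
    next_arr = random.expovariate(lam)
    while served + lost < n_flows:
        if dep and dep[0][0] < next_arr:       # next event is a departure
            t_ev, j = heapq.heappop(dep)
            for i in range(n_paths):
                busy[i] += (capacity - avail[i]) * (t_ev - t)
            t = t_ev
            avail[j] += 1
            continue
        for i in range(n_paths):               # next event is an arrival
            busy[i] += (capacity - avail[i]) * (next_arr - t)
        t = next_arr
        next_arr = t + random.expovariate(lam)
        best = max(avail)
        if best == 0:                          # every path is full: loss
            lost += 1
            continue
        served += 1
        k = random.choice([i for i, s in enumerate(avail) if s == best])
        avail[k] -= 1
        heapq.heappush(dep, (t + random.expovariate(mu), k))
    util = [b / (t * capacity) for b in busy]
    return util, lost / (served + lost)

util, lp = simulate()
print("per-path utilization:", [round(u, 3) for u in util], " LP:", round(lp, 5))
```

Running such a sketch, the per-path utilizations should come out nearly identical, which is the evenness property proven analytically below.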
We can represent the state of the system at any given time by a vector 𝒮:={s_1,s_2,…,s_N}, where s_i is the available capacity of path i, s_i ∈ [0,…,C_i], which is determined by the capacity of its most congested link, s_i(t)=min(l_j(t)), ∀ j ∈ P_i. To model the scheduling policy of All-Path protocols, we introduce a scheduling policy that is a mixture of a deterministic and a random policy, and can be explained as follows: * An arriving flow is always sent to the path with the highest available capacity, i.e. the P_i with maximal s_i(t). * If the maximum capacity is not unique, the scheduler selects the path randomly among the paths with the maximum available capacity. For the sake of mathematical tractability we consider the number of paths to be N=2. Although this choice is a simplified scenario, it is worth noting that it represents the essence of the path setup in All-Path protocols. We also make the common assumptions of exponentially distributed random variables for the inter-arrival and holding times of the flows, with parameters λ and μ, respectively. However, we have also studied more realistic distributions for the parameters that describe the arrival and holding times by means of simulation, as they are not analytically tractable. Under the above-mentioned assumptions, we can represent the state of the system at any given time by a vector 𝒮:={s_1,s_2}: 0≤ s_1≤ C_1; 0≤ s_2≤ C_2, where s_i is the available capacity of path i (0≤ s_i≤ C_i). Without loss of generality, we consider that each flow occupies one resource unit, so C_i in this section is measured in resource units. This system is therefore a Continuous Time Markov Chain (CTMC) whose transition rates are described in Fig. <ref>, being q_i^* = λ if i > j, λ/2 if i = j, 0 if i < j, and q_j^* = λ if j > i, λ/2 if i = j, 0 if j < i. This system constitutes a level-dependent Quasi-Birth-and-Death process (QBD) <cit.> whose infinitesimal generator (𝐐) has a block-tridiagonal structure with (C_1+1)×(C_1+1) blocks of size (C_2+1)×(C_2+1) each: 𝐐 = [ 𝐃_0 𝐌_0 0 ⋯ 0; 𝐋_1 𝐃_1 𝐌_1 ⋯ 0; 0 𝐋_2 𝐃_2 𝐌_2 ⋯; ⋱ ⋱ ⋱; 0 ⋯ 0 𝐋_C_1 𝐃_C_1 ]. The stationary probability distribution can be obtained by solving π𝐐=0 along with the normalization condition. As 𝐐 is a finite matrix, this system can be solved by any of the standard methods of classical linear algebra. However, we can exploit the block-tridiagonal structure of 𝐐 using the algorithm defined in <cit.>, which allows us to reduce the computational cost, although there are other proposals useful for that purpose, such as <cit.>. In Figs. <ref> and <ref> we show the main results obtained by solving this model for μ=1 and for different values of the offered load ρ=λ/μ, so that the system operates at very different working points. Figure <ref> validates the analytic model by means of a simulated model, where we have chosen C_i=20, ∀ i. This figure shows that the utilizations (u_i) of the two paths coincide in both models (analytic and simulated); note that in the analytical model the utilizations of paths 1 and 2 are the same, i.e. u_1=u_2. Once the analytical model has been validated, we can study the problem of load distribution in more depth through the probability of a difference of Ψ resource units in available capacity between the paths. In other words, in Fig. <ref> we show, for Ψ=0, the probability of both paths having the same available capacity, and for Ψ=i (-i) we represent the probability that path 1 (2) has i more available resource units than path 2 (1).
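Before turning to the figures, note that this model is small enough to solve directly. The following is a minimal sketch (not taken from the paper's implementation), assuming, as is natural here, that each flow in service departs independently at rate μ, so path i frees one resource unit at rate (C_i - s_i)·μ, and measuring utilization as the mean fraction of occupied resource units:

import numpy as np

def all_path_ctmc(C1, C2, lam, mu):
    # State (s1, s2): available capacity of each path, in resource units.
    states = [(s1, s2) for s1 in range(C1 + 1) for s2 in range(C2 + 1)]
    idx = {s: k for k, s in enumerate(states)}
    Q = np.zeros((len(states), len(states)))
    for (s1, s2), k in idx.items():
        # Arrival: route to the path with more available capacity,
        # split lam/2 on a tie; blocked only when s1 = s2 = 0.
        if s1 > s2:
            Q[k, idx[(s1 - 1, s2)]] += lam
        elif s2 > s1:
            Q[k, idx[(s1, s2 - 1)]] += lam
        elif s1 > 0:
            Q[k, idx[(s1 - 1, s2)]] += lam / 2
            Q[k, idx[(s1, s2 - 1)]] += lam / 2
        # Departures: each of the (Ci - si) active flows ends at rate mu.
        if s1 < C1:
            Q[k, idx[(s1 + 1, s2)]] += (C1 - s1) * mu
        if s2 < C2:
            Q[k, idx[(s1, s2 + 1)]] += (C2 - s2) * mu
    np.fill_diagonal(Q, -Q.sum(axis=1))
    # Solve pi Q = 0 together with sum(pi) = 1 (least-squares form).
    A = np.vstack([Q.T, np.ones(len(states))])
    b = np.zeros(len(states) + 1)
    b[-1] = 1.0
    pi = np.linalg.lstsq(A, b, rcond=None)[0]
    u1 = sum(p * (C1 - s1) / C1 for (s1, s2), p in zip(states, pi))
    u2 = sum(p * (C2 - s2) / C2 for (s1, s2), p in zip(states, pi))
    return pi, u1, u2

For C_1=C_2, this direct solution reproduces by symmetry the equality u_1=u_2 noted above for the analytical model, and it can serve as a cross-check of the QBD solution obtained from the block-tridiagonal algorithm.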
Figure <ref> corresponds to C_1=C_2=20, whereas Fig. <ref> shows the results for a scenario with C_1=30 and C_2=20. As can be concluded from both figures, the probability of being in a state where one path has many more resources than the other (high values of |Ψ|) is negligible, so the load is properly distributed in order to obtain the highest available bandwidth (minimum delay). In order to evaluate whether the All-Path protocols are able to attain an even load distribution in more realistic scenarios, we have simulated a more complex scenario with N=6 possible paths. This situation could represent a simple grid scenario of size 3× 3, such as the one shown in Fig. <ref>, or a state-of-the-art data center topology, where there are 6 shortest paths. For the underlying traffic transported by the network we have considered realistic data center traffic. In this type of network, and similarly to Internet flow characteristics, there are myriad small flows (usually called mice) and a small number of large flows (elephants), with the latter transporting most of the traffic <cit.>. Following <cit.>, we have considered in our simulator that only 1% of the flows are elephants, with a size of F_e=100 MB, due to the fact that distributed file systems usually break long files into 100-MB chunks. As we have considered 1 Gbps paths, for elephant flows we have μ^-1_elephant=F_e/10^9=0.8 s. For mouse flows, and following <cit.>, we have considered that their flow size is uniformly distributed in F_m=[2 KB, 50 KB], so μ^-1_mice=F_m/10^9=[16,400] μs. From <cit.>, we have modeled the flow arrival process as a Poisson process, varying the mean arrival rate to obtain a given ρ. For this scenario, results are shown in Fig. <ref>, considering C_i=20 ∀ i, i∈ [1,6]. First of all, it is important to note that results have been obtained for a wide range of scenarios; we depict the loss probability (LP) to show this variety of traffic loads. Moreover, we can conclude that in this scenario the load is evenly distributed, as the utilization of all paths is very similar. In addition, we also show Jain's fairness index <cit.> (FI), which can be defined in our case by FI=(∑_i=1^N u_i)^2/(6∑_i=1^N u_i^2). From Jain's fairness index we can conclude that the load is evenly distributed with very high precision, since FI=1 represents an optimal load distribution.

§ RELATED WORK

Regarding layer 2 switching, the traditional Spanning Tree Protocol (STP/RSTP) <cit.> severely limits the network size and performance by blocking redundant links to prevent loops, thus limiting infrastructure utilization and increasing latency. Successor standards like SPB <cit.> or TRILL RBridges <cit.> move towards layer 3, e.g. adding link-state control protocols or additional header fields, thus leaving some of the layer 2 benefits behind, such as simplicity or plug-and-play installation. More specifically, SPB is more oriented to the interconnection of provider networks than to data center and campus networks, while TRILL RBridges <cit.>, standardized by the IETF, use a special encapsulation header that is modified at every RBridge hop and is compatible neither with existing switch chipsets nor with the IEEE OAM and 802.1aq standards.
Moreover, these protocols distribute load statically by hashing the different flows, irrespective of their load status <cit.>. PAST <cit.> builds a spanning tree per destination host and outperforms standard protocols, but it is based on pre-calculated routes and lacks the dynamicity of All-Path, which considers the path load. ROME <cit.>, taking its concepts from Greedy Routing <cit.>, presents an architecture and a protocol that are backwards-compatible with Ethernet, highly scalable, and well-performing. Nevertheless, it still requires pre-computing the paths via periodic exchange of information among the switches. AXE <cit.> was proposed to recover the simple flood-and-learn mechanism of Ethernet switches. However, it requires modification of the standard frame by including a hop count and a nonce field. SynRace <cit.> profits from TCP's congestion control dynamics to select the least-congested paths, by sending probe packets in a similar way to the All-Path protocols. Although the accuracy of SynRace is higher, the overhead it produces (table entries, computation of probe packets, etc.) is also much larger. First-Come First-Serve (FCFS) <cit.> is so far the closest approach to the All-Path family, but its routing tables are more complex (it needs to save the Frame Check Sequence field for every unlearnt packet) and their entries have no refresh option, expiring after a while even if the associated paths are still valid. Moreover, FCFS creates paths analogously to ARP-Path, lacking alternative options similar to Flow-Path or Bridge-Path. Finally, other proposals might profit from SDN features to create optimal paths, for example by measuring the load. However, these centralized approaches lack other benefits, such as scalability. In the case of the All-Path family, the ARP-Path protocol was implemented as a hybrid switch taking the best of both worlds <cit.>, proving that this family of protocols can also be combined with SDN if required.

§ CONCLUSIONS

The All-Path protocols are a family of Ethernet switching protocols that create routes following the lowest-latency paths while, at the same time, distributing traffic evenly. These protocols are suitable for campus and data center networks. The family comprises several variants with different advantages in terms of load balancing granularity and scalability, namely ARP-Path, Flow-Path and Bridge-Path. ARP-Path, the first protocol of the family, creates a path per final host by exploring the whole network. On the one hand, Flow-Path offers even better load balancing capabilities and per-flow path independence, at the cost of bigger table sizes when there are multiple equal-cost shortest paths. On the other hand, Bridge-Path provides increased scalability with coarser path granularity, especially when the ratio of edge bridges to the total number of bridges is high and the number of attached hosts is high. Finally, we have evaluated the load balancing capabilities of the All-Path family by means of analytical and simulation models, concluding that the All-Path family protocols are able to use all possible paths evenly.

§ ACKNOWLEDGMENT

This work has been supported by Comunidad de Madrid through project MEDIANET (S-2009/TIC-1468) and project TIGRE5-CM (S2013/ICE-2919). | http://arxiv.org/abs/1703.08744v1 | {
"authors": [
"Elisa Rojas",
"Guillermo Ibanez",
"Jose Manuel Gimenez-Guzman",
"Juan A. Carral"
],
"categories": [
"cs.NI"
],
"primary_category": "cs.NI",
"published": "20170325215302",
"title": "All-Path Routing Protocols: Analysis of Scalability and Load Balancing Capabilities for Ethernet Networks"
} |
Multi-Path Region-Based Convolutional Neural Network for Accurate Detection of Unconstrained "Hard Faces" Yuguang Liu, Martin D. Levine Department of Electrical and Computer Engineering, Center for Intelligent Machines, McGill University, Montreal, QC., Canada [email protected], [email protected] December 30, 2023 ============================================================================================================================================================================================================ Large-scale variations still pose a challenge in unconstrained face detection. To the best of our knowledge, no current face detection algorithm can detect a face as large as 800 × 800 pixels while simultaneously detecting another one as small as 8 × 8 pixels within a single image with equally high accuracy. We propose a two-stage cascaded face detection framework, the Multi-Path Region-based Convolutional Neural Network (MP-RCNN), that seamlessly combines a deep neural network with a classic learning strategy to tackle this challenge. The first stage is a Multi-Path Region Proposal Network (MP-RPN) that proposes faces at three different scales. It simultaneously utilizes three parallel outputs of the convolutional feature maps to predict multi-scale candidate face regions. The "atrous" convolution trick (convolution with up-sampled filters) and a newly proposed sampling layer for "hard" examples are embedded in MP-RPN to further boost its performance. The second stage is a Boosted Forests classifier, which utilizes deep facial features pooled from inside the candidate face regions as well as deep contextual features pooled from a larger region surrounding the candidate face regions. This step is included to further remove hard negative samples. Experiments show that this approach achieves state-of-the-art face detection performance on the WIDER FACE dataset "hard" partition, outperforming the former best result by 9.6% in Average Precision. face detection; large scale variation; tiny faces; "atrous"; MP-RCNN; MP-RPN; WIDER FACE; FDDB; deep neural network

§ INTRODUCTION

Although face detection has been extensively studied during the past two decades, detecting unconstrained faces in images and videos has not yet been convincingly solved. Most classic and recent deep learning methods tend to detect faces whose fine-grained facial parts are clearly visible. This negatively affects their detection performance on low-resolution or out-of-focus faces, which are common in surveillance camera data. The lack of progress in this regard is largely due to the fact that current face detection benchmark datasets (e.g., FDDB <cit.>, PASCAL FACE <cit.> and AFW <cit.>) are biased towards high-resolution face images with limited variations in scale, pose, occlusion, illumination, out-of-focus blur and background clutter. Recently, a new face detection benchmark dataset, WIDER FACE <cit.>, has been released to tackle this problem. WIDER FACE consists of 32,203 images with 393,703 labeled faces. Images in WIDER FACE also have the highest degree of variation in scale, pose, occlusion, lighting conditions, and image blur. As indicated in the WIDER FACE report <cit.>, of all the factors that affect face detection performance, scale is the most significant. In view of the challenge created by facial scale variation in face detection, we propose a Multi-Path Region-based Convolutional Neural Network (MP-RCNN) to detect big faces and tiny faces with high accuracy.
At the same time, it is noteworthy that, by virtue of the abundant feature representation power of deep neural networks and the employment of contextual information, our method also possesses a high level of robustness to other factors, namely variations in pose, occlusion, illumination, out-of-focus blur and background clutter, as shown in Figure 1. MP-RCNN is composed of two stages. The first stage is a Multi-Path Region Proposal Network (MP-RPN) that proposes faces at three different scales: small (8-32 pixels in height), medium (32-360 pixels in height) and large (360-900 pixels in height). These scales cover the majority of faces available in all public face detection databases, e.g., WIDER FACE <cit.>, FDDB <cit.>, PASCAL FACE <cit.> and AFW <cit.>. We observe that the feature maps of lower-level convolutional layers are most sensitive to small-scale face patterns, but almost agnostic to large-scale face patterns due to a limited receptive field. Conversely, the feature maps of the higher-level convolutional layers respond strongly to large-scale face patterns while ignoring small-scale patterns. On the basis of this observation, we simultaneously utilize three parallel outputs of the convolutional feature maps to predict multi-scale candidate face regions. We note that the medium-scale (32-360) and large-scale (360-900) paths span a much larger scale range than the small-scale (8-32) path does. Thus we additionally employ the so-called "atrous" convolution trick (convolution with up-sampled filters) <cit.> together with normal convolution to acquire a larger field of view, so as to comprehensively cover the corresponding face scale range. Moreover, a newly proposed sampling layer is embedded in MP-RPN to further boost the discriminative power of the network for difficult face/non-face patterns. To further remove difficult false positives while retaining difficult true faces, we add a second-stage Boosted Forests classifier after MP-RPN. The Boosted Forests classifier utilizes deep facial features pooled from inside the candidate face regions. It also invokes deep contextual features pooled from a larger region surrounding the candidate face regions to make a more precise prediction of face/non-face patterns. Our MP-RCNN achieves state-of-the-art detection performance on both the WIDER FACE <cit.> and FDDB <cit.> datasets. In particular, on the most challenging, so-called "hard" partition of the WIDER FACE test set, which contains just small faces, we outperform the former best result by 9.6% in Average Precision. The rest of the paper is organized as follows. Section 2 reviews related work. Section 3 introduces the proposed MP-RCNN approach to the problem of unconstrained face detection. Section 4 presents experimental results to demonstrate the rationale behind our network design and compares our method with other state-of-the-art face detection algorithms on the WIDER FACE <cit.> and FDDB <cit.> datasets. Section 5 concludes the paper and proposes future work.

§ RELATED WORK

There are two established sets of methods for face detection, one based on deformable part models <cit.> and the other on rigid templates <cit.>. Prior to the resurgence of Convolutional Neural Networks (CNN) <cit.>, both sets of methods relied on a combination of "hand-crafted" feature extractors to select facial features and classic learning methods to perform binary feature classification.
Admittedly, the performance of these face detectors has been steadily improved by the use of more complex features <cit.> or better training strategies <cit.>. Nevertheless, the use of "hand-crafted" features and classic classifiers has stymied the seamless integration of feature selection and classification into a single computational process. In general, these methods require that many hyper-parameters be set heuristically. For example, both <cit.> and <cit.> needed to divide the training data into several partitions according to face pose and train a separate model for each partition. Deep neural networks, with their seamless concatenation of feature representation and pattern classification, have become the current trend in rigid templates for face detection. Farfade et al. <cit.> proposed a single Convolutional Neural Network (CNN) model based on AlexNet <cit.> to deal with multi-view face detection. Li et al. <cit.> used a cascade of six CNNs for alternating face detection and face bounding box calibration. However, these two methods need to crop face regions and rescale them to specific sizes. This increases the complexity of training and testing, so they are not suitable for efficient unconstrained face detection where faces of different scales coexist in the same image. Yang et al. <cit.> proposed applying five parallel CNNs to predict five different facial parts, and then evaluating the degree of face likeliness by analyzing the spatial arrangement of the facial part responses. The usage of facial parts makes the face detector more robust to partial occlusions but, like DPM-based face detectors, this method can only deal with faces of relatively large size. Recently, Faster R-CNN <cit.>, a deep learning framework, achieved state-of-the-art object detection because of two novel components. The first is a Region Proposal Network (RPN) to recommend object candidates of different scales and aspect ratios. The second is a Region-based Convolutional Neural Network (RCNN) to pool the object candidates to construct a fixed-length feature vector, which is employed to make a prediction. Zhu et al. <cit.> proposed a Contextual Multi-Scale Region-based CNN (CMS-RCNN) face detector, which extended Faster RCNN <cit.> in two respects. First, RPN was replaced by a Multi-Scale Region Proposal Network (MS-RPN) to propose face regions based on the combined information from multiple convolutional layers. Secondly, a Contextual Multi-Scale Convolutional Neural Network (CMS-CNN) was proposed to replace RCNN, pooling features not only from the last convolutional layer, as in RCNN, but also from several lower-level convolutional layers. In addition, contextual information was also pooled to promote robustness. Thus CMS-RCNN <cit.> has indeed improved RPN by combining feature maps from multiple convolutional layers in order to make a proposal. However, it is necessary to down-sample the lower-level feature maps to concatenate them with the feature maps of the last convolutional layer. This down-sampling design inevitably diminishes the network's discriminative power for small-scale face patterns. The Multi-Path Region Proposal Network (MP-RPN) presented in this paper enhances the discriminative power by eliminating the down-sampling and concatenation steps and directly utilizing feature maps at different resolutions. It proposes faces at different scales: lower-level feature maps are used to propose small-scale faces, while higher-level feature maps do so for large-scale faces.
In this way, the scale-aware discriminative power of the different feature maps is fully exploited. It has been pointed out <cit.> that the Region-of-Interest (RoI) pooling layer applied to low-resolution feature maps can lead to "plain" features due to the collapsing of bins. This "lost" information leads to non-discriminative small regions. Since detecting small-scale faces is one of the main objectives of this paper, we have instead pooled features from lower-level feature maps to reduce this information collapse; specifically, we use conv3_3 and conv4_3 of VGG16 <cit.>, which have higher resolution, instead of the conv5_3 layer of VGG16 <cit.> used by Faster RCNN <cit.> and CMS-RCNN <cit.>. The pooled features are then trained by a Boosted Forest (BF) classifier, as is done for pedestrian detection <cit.>. But unlike <cit.>, we also pool contextual information in addition to the facial features to further boost detection performance. Although adding a BF classifier means that our method is not an end-to-end deep neural network solution, the combination of MP-RPN and a BF classifier has two advantages. First, features pooled from different convolutional layers need not be normalized before concatenation, since the BF classifier treats each element of a feature vector separately. In contrast, in CMS-RCNN <cit.>, three different normalization scales need to be carefully selected to concatenate the RoI features from three convolutional layers. Secondly, both MP-RPN and the BF classifier only need to be trained once, which is as efficient as the four-step alternating training process used in Faster RCNN <cit.> and CMS-RCNN <cit.>. The proposed MP-RPN shares some similarity with the Single Shot Multibox Detector (SSD) <cit.> and the Multi-Scale Convolutional Neural Network (MS-CNN) <cit.>. Both methods use multi-scale feature maps to predict objects of different sizes in parallel. However, our work differs from these in two notable respects. First, we employ a fine-grained path to classify and localize tiny faces (as small as 8× 8 pixels). Both SSD and MS-CNN lack such a characteristic, since both were proposed to detect general objects, such as cars or tables, which have a much larger minimum size. Second, for the medium- and large-scale paths, we additionally employ the "atrous" convolution trick (convolution with up-sampled filters) <cit.> together with normal convolution to acquire a larger field of view. In this way, we are able to use three paths to cover a large spectrum of face sizes, from 8× 8 to 900× 900 pixels. By comparison, SSD <cit.> utilized six paths to cover different object scales, which makes the network much more complex.

§ APPROACH

In this section, we introduce the proposed MP-RCNN face detector, which consists of two stages: a Multi-Path Region Proposal Network (MP-RPN) for the generation of face proposals and a Boosted Forest (BF) for the verification of face proposals.

§.§ Multi-Path Region Proposal Network

The detailed architecture of the Multi-Path Region Proposal Network (MP-RPN) is shown in Figure 2. Given a full image of arbitrary size, MP-RPN proposes faces through three detection branches: Det-4 for proposing small-scale faces (8-32 pixels in height), Det-16 for medium-scale faces (32-360 pixels in height) and Det-32 for large-scale faces (360-900 pixels in height). We adopt the VGG-16 net <cit.> (from Conv1_1 to Conv5_3) as the CNN trunk, and the three detection branches emanate from different layers of the trunk.
Since the Det-4 and Det-16 branches stay close to the lower layers of the trunk network, they affect the gradients of the corresponding lower layers more than the Det-32 branch does. Thus we add L2 normalization layers <cit.> to these two branches to avoid potential learning instability. Similar to the RPN in Faster RCNN <cit.>, for each detection branch we slide a 3 × 3 convolutional network (Conv_det_4, Conv_det_16, and Conv_det_32 in Figure <ref>) over the feature map of the preceding convolutional layer (Concat1, conv_reduce1, and conv_reduce2 in Figure <ref>). This convolutional layer is fully connected to a 3 × 3 spatial window of the input feature map. Each sliding window is mapped to a 512-dimensional vector. The vector is fed into two sibling fully connected layers, a box-classification layer (c_i in Figure <ref>, i=1 for the Det-4 branch, 2 for the Det-16 branch, and 3 for the Det-32 branch) and a box-regression layer (b_i in Figure <ref>, with the same branch indexing). At each sliding window location, we simultaneously predict k region proposals of different scales (the aspect ratio is always set to 1). The k proposals are parameterized relative to k reference boxes, called anchors <cit.>. Each anchor is centered at the sliding window and associated with a scale. The anchors are necessary because they encode both scale and position information, so that faces of different sizes located at any position in an image can be detected by the convolutional network. Table <ref> shows the anchor scales (in pixels) allocated to each branch. During training, the parameters W of the MP-RPN are learned from a set of N training samples S = { (X_i,Y_i)} _i = 1^N, where X_i is an image patch associated with an anchor, and Y_i=(p_i,b_i) is the combination of its ground truth label p_i ∈{0,1} (0 for non-face and 1 for face) and the ground truth box regression target b_i = (b_i^x,b_i^y,b_i^w,b_i^h) associated with a ground truth face region. These are the parameterizations of the four coordinates following <cit.>: b_i^x = (x_gt - x_i)/w_i, b_i^y = (y_gt - y_i)/h_i, b_i^w = log (w_gt/w_i), b_i^h = log (h_gt/h_i), where x, y, w, h denote the two coordinates of the box center, the width, and the height. Variables x_i, x_gt refer to the image patch X_i and its ground truth face region X_i^gt, respectively (likewise for y, w, and h). We define the loss function for MP-RPN as l(W) = ∑_m = 1^M α _m L^m({ (X_i,Y_i)} _i ∈S^m|W), where M=3 is the number of detection branches, α _m is the weight of loss function L^m, and S = {S^1,S^2,...,S^M}, where S^m contains the training samples of the m^th detection branch. The loss function for each detection branch contains two objectives: L^m({ (X_i,Y_i)} _i ∈S^m|W) = 1/N_m∑_i ∈S^m L_cls(p(X_i),p_i) + λ [p_i = 1] L_reg(b(X_i),b_i), where N_m is the number of samples in the mini-batch of the m^th detection branch, p(X_i) = (p_0(X_i),p_1(X_i)) is the probability distribution over the two classes, non-face and face, respectively, [·] is the Iverson bracket, L_cls is the cross-entropy loss, b(X_i) = (b^x(X_i),b^y(X_i),b^w(X_i),b^h(X_i)) is the predicted bounding box regression target, L_reg is the smooth L1 loss function defined in <cit.> for bounding box regression, and λ is a trade-off coefficient between classification and regression.
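To make the regression objective concrete, the following NumPy sketch (an illustration written for this text, not the authors' code) encodes the targets of the parameterization above and evaluates the smooth L1 penalty commonly used for L_reg:

import numpy as np

def encode_targets(anchors, gt):
    # anchors, gt: (N, 4) arrays of matched boxes (x_center, y_center, w, h).
    bx = (gt[:, 0] - anchors[:, 0]) / anchors[:, 2]
    by = (gt[:, 1] - anchors[:, 1]) / anchors[:, 3]
    bw = np.log(gt[:, 2] / anchors[:, 2])
    bh = np.log(gt[:, 3] / anchors[:, 3])
    return np.stack([bx, by, bw, bh], axis=1)

def smooth_l1(x):
    # Quadratic near zero, linear in the tails (the standard smooth L1).
    ax = np.abs(x)
    return np.where(ax < 1.0, 0.5 * x * x, ax - 0.5)

The log-space encoding of width and height keeps the targets well scaled across the large range of anchor sizes used by the three branches.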
Note that L_reg is computed only when a training sample is positive ([-0.15em[ p_i = 1]-0.15em]).§.§.§ Details of Each Detection BranchDet-4: Although Conv4_3 layer (stride = 8 pixels) might seem to already be sufficiently discriminative on regions as small as 8× 8 pixels, this is not the case. We found in preliminary experiments that when a 8× 8 face happened to be located between two neighboring anchors, neither could be precisely regressed to the face location. Thus, to boost the localization accuracy of small faces, we instead use Conv3_3 layer (with stride = 4 pixels) to propose small faces. At the same time, the feature maps of Conv4_3 layer are up-sampled (by a deconvolution layer) and then concatenated to those of the Conv3_3 layer. The higher-level Conv4_3 layer provides Conv3_3 layer with some “contextual” information and helps it to remove hard false positives.Det-16: This detection branch is forked from Conv5_3 layer to detect faces from 32× 32 to 360× 360 pixels. However, this large span of scales cannot be well accounted for by a single convolutional path. Inspired by the “atrous” spatial pyramid pooling <cit.> used in semantic image segmentation, we employ three parallel convolutional paths: a normal 3× 3 convolutional layer, an “atrous” convolutional layer with “atrous” rate 2 and an “atrous” convolutional layer with “atrous” rate 4. These three convolutional layers have increasing receptive field sizes and are able to comprehensively cover the large face scale range.Det-32: This detection branch is forked from Conv6_2 layer to detect faces from 360× 360 to 900× 900 pixels. Similar to Det-16, three parallel convolutional paths are employed to fully cover the scale range.§.§.§ Online Hard Example Mining (OHEM) layerThe training samples for MP-RPN are usually extremely unbalanced. This is because face regions are scarce compared to background (non-face) regions, so only a few anchors can be positive (matched to face regions) and most of the anchors are negative (matched to background regions). As indicated by <cit.>, explicitly mining hard negative examples with high training loss leads to better training and testing performance than randomly sampling all negative examples. In this paper, we propose an Online Hard Example Mining (OHEM) layer specifically for MP-RPN. It is applied independently to each detection branch in Figure 2 in order to mine both hard positive and negative examples at the same time. We fix the selection ratio of hard positive examples and negative examples to 1:3, which experimentally provides more stable training. These selected hard examples are then used in back-propagation for updating network weights. Two steps are involved in the OHEM layer. Step 1: Given all anchors (training samples) and their classification loss, we compare each anchor with its eight spatial neighbors (top, left, right, bottom, top-left, top-right, bottom-left and bottom-right). If the loss is greater than all of its neighbors, this anchor is kept as is; otherwise it is suppressed by setting its classification loss to zero. Step 2: All anchors are sorted in the descending order of their classification loss and hard positive and negative samples are selected according to this order. The ratio between the selected positives and negatives was chosen as 1:3.The proposed OHEM layer is “online” in the sense that it is seamlessly integrated into the forward pass of the network to generate a mini-batch of hard examples. 
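The two steps above can be sketched in a few lines; the helper below is an illustrative reimplementation (not the authors' Caffe layer), assuming one classification-loss value and one 0/1 label per anchor location on a branch's feature map, and keeping ties in the neighbor comparison:

import numpy as np
from scipy.ndimage import maximum_filter

def ohem_select(loss, labels, batch=256):
    # Step 1: keep an anchor only if its loss is maximal over its
    # 3x3 neighborhood (its eight spatial neighbors); suppress the rest.
    kept = np.where(loss >= maximum_filter(loss, size=3), loss, 0.0)
    # Step 2: rank by loss and select hard positives/negatives at 1:3.
    flat, lab = kept.ravel(), labels.ravel()
    pos = np.argsort(-(flat * (lab == 1)))[: batch // 4]
    neg = np.argsort(-(flat * (lab == 0)))[: batch - batch // 4]
    mask = np.zeros(flat.size, dtype=bool)
    mask[pos] = True
    mask[neg] = True
    # (For brevity there is no guard for fewer positives than batch // 4.)
    return mask.reshape(loss.shape)  # selected hard examples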
Thus we do not need to freeze the training model to mine hard examples from all the training data and then use the hard examples to update the current model. Note that unlike <cit.>, which proposed an OHEM layer for Fast RCNN <cit.>, here the OHEM layer is used in MP-RPN, but it can also be used generally in other region-based proposal networks, such as the RPN in Faster RCNN <cit.> and the MS-RPN in CMS-RCNN <cit.>.

§.§ Feature Extraction and Boosted Forest

The detailed architecture of Stage 2 is shown in Figure <ref>. Given a complete image of arbitrary size and a set of proposals provided by the MP-RPN, RoI pooling <cit.> is used to extract features in the proposed regions from the feature maps of both Conv3_3 and Conv4_3. Conv3_3 contains fine-grained information, while Conv4_3, with a larger receptive field, implicitly contains "contextual" information. Similar to <cit.>, the "atrous" convolution trick is applied to Conv4_1, Conv4_2 and Conv4_3. This doubles the resolution of the feature maps of Conv4_3, a change that produces better experimental results. Inspired by <cit.>, apart from extracting features from a proposed region, we also explicitly extract "contextual" features from a large region surrounding the proposed region. Suppose the original region is [l, t, w, h], where l is the horizontal coordinate of its left edge, t the vertical coordinate of its top edge, and w, h the width and height of the region, respectively. We set the corresponding "contextual" region to [l-w, t, 3w, 3h], which is three times larger than the original region in each dimension and approximately covers the upper body of a person. A Boosted Forest classifier is introduced after OHEM. Features from both the original and "contextual" regions are pooled using a fixed resolution of 5× 5, and then concatenated and input to a Boosted Forest classifier. We mainly follow <cit.> in setting the hyper-parameters of the BF classifier. Specifically, we bootstrap the training with six cascaded forests with an increasing number of trees: 64, 128, 256, 512, 1024 and 1536. The tree depth is set to 5. The initial training set contains all positive samples (∼160k in the WIDER FACE training set) and randomly selected negative samples (∼100k). After each stage, additional negative samples (∼10k) are mined and added to the training set. Finally, a forest of 2048 trees is trained as the final face detection classifier. Note that unlike an ordinary Boosted Forest, which initializes the confidence scores of all training samples equally, we directly use the "faceness" probability given by MP-RPN as the initial confidence score for each training sample.

§ EXPERIMENTS

In this section, we first introduce the datasets used for training and evaluating our proposed face detector, and then compare the proposed MP-RCNN to state-of-the-art face detection methods on the WIDER FACE dataset <cit.> and the FDDB dataset <cit.>. The full implementation details of MP-RCNN used in the experiments are given in appendix A. In addition, we conduct a set of detailed model analysis experiments to examine how each model component (e.g., detection branches, "atrous" convolution, OHEM, etc.) affects the overall detection performance. These can be found in appendix B. Moreover, the running time of our algorithm is reported in appendix C.

§.§ Datasets

WIDER FACE <cit.> is a large public face detection benchmark dataset for training and evaluating face detection algorithms. It contains 32,203 images with 393,703 labeled human faces (each image has an average of 12 faces).
Faces in this dataset have a high degree of variability in scale, pose, occlusion, lighting conditions, and image blur. Images in the WIDER FACE dataset are organized into 61 event classes. For each event class, 40%, 10% and 50% of the images are randomly selected for the training, validation and test sets, respectively. Both the images and the associated ground truth labels used for training and validation are available online[http://mmlab.ie.cuhk.edu.hk/projects/WIDERFace/index.html]. For the test set, only the images are available. The detection results must be submitted to an evaluation server administered by the authors of the WIDER FACE dataset in order to obtain Precision-Recall curves. Moreover, this test set was divided into three levels of difficulty by the authors of <cit.>: "Easy", "Medium", "Hard". These categories were based on the detection rate of EdgeBox <cit.>, so the Precision-Recall curves need to be reported for each difficulty level[We have no knowledge of the difficulty level of the images in the test set. In fact, it is necessary to submit all predicted face boxes to the server, which then provides three ROC curves based on the "hard", "medium" and "easy" partitions.]. The other test set used in our experiments is the FDDB dataset <cit.>, which is a standard database for evaluating face detection algorithms. It contains the annotations for 5,171 faces in a set of 2,845 images. Each image in the FDDB dataset has fewer than two faces on average. These faces mostly have large sizes compared to those in the WIDER FACE dataset. Our proposed MP-RCNN was trained on the training partition of the WIDER FACE dataset, and then evaluated on the WIDER FACE test partition and the whole FDDB dataset. The validation partition of the WIDER FACE dataset is used in the model analysis experiments (appendix B) for comparing different model designs.

§.§ Comparison to the state-of-the-art

In this subsection, we compare the proposed MP-RCNN to state-of-the-art face detection methods on the WIDER FACE <cit.> and FDDB datasets <cit.>. Results on the WIDER FACE test set: Here we compare the proposed MP-RCNN with all six strong face detection methods available on the WIDER FACE website: Two-stage CNN <cit.>, Multiscale Cascade <cit.>, Multitask Cascade <cit.>, Faceness <cit.>, Aggregate Channel Features (ACF) <cit.> and CMS-RCNN <cit.>. Figure 4 shows the Precision-Recall curves and the Average Precision values of the different methods on the Hard, Medium and Easy partitions of the WIDER FACE test set, respectively. On the Hard partition, our MP-RCNN outperforms all six strong baselines by a large margin. Specifically, it achieves an increase of 9.6% in Average Precision compared to the 2^nd place CMS-RCNN method. On the Easy and Medium partitions, our method ranks in 2^nd place on both, lagging behind the recent CMS-RCNN method by only a small margin. See Figure 6 in appendix D for some examples of the face detection results using the proposed MP-RCNN on the WIDER FACE test set. Results on the FDDB dataset: To show the general face detection capability of the proposed MP-RCNN method, we directly apply the MP-RCNN previously trained on the WIDER FACE training set to the FDDB dataset. We also make a comprehensive comparison with 15 other typical baselines: ViolaJones <cit.>, SurfCascade <cit.>, ZhuRamanan <cit.>, NPD <cit.>, DDFD <cit.>, ACF <cit.>, CascadeCNN <cit.>, CCF <cit.>, JointCascade <cit.>, HeadHunter <cit.>, FastCNN <cit.>, Faceness <cit.>, HyperFace <cit.>, MTCNN <cit.> and UnitBox <cit.>.
The evaluation is based on a discrete score criterion: if the Intersection-over-Union ratio of a detected region with an annotated face region is greater than 0.5, a score of 1 is assigned to the detected region, and 0 otherwise. As shown in Figure <ref>, the proposed MP-RCNN outperforms ALL of the other 15 methods and has the highest average recall rate (0.953). See Figure <ref> in appendix E for some examples of the face detection results on the FDDB dataset.

§ CONCLUSION

We have proposed MP-RCNN, an accurate face detection method for tackling the challenge of large scale variation in unconstrained face detection. Most previous methods extract the same features for faces at different scales. This neglects the face pattern variations due to scale changes and thus fails to detect both large and tiny faces with high accuracy. In this paper, we introduce MP-RCNN, which utilizes a newly proposed Multi-Path Region Proposal Network (MP-RPN) to extract features at various intermediate network layers. These features possess different receptive field sizes that approximately match the facial patterns at three different scales. This leads to high detection accuracy for faces across a large range (from 8× 8 to 900 × 900 pixels) of facial scales. MP-RCNN also employs a Boosted Forest classifier as the second stage, which uses the deep features pooled from MP-RPN to further boost face detection performance. We observe that although MP-RCNN is designed mainly to deal with the challenge of scale variation, the powerful feature representation of deep networks also enables a high level of robustness to variations in pose, occlusion, illumination, out-of-focus blur and background clutter. Experimental results demonstrate that our proposed MP-RCNN consistently achieves the best performance on both the WIDER FACE and FDDB datasets. In the future, we intend to leverage this cross-scale detection ability in other tiny object detection tasks, e.g., facial landmark localization for small faces.

§ ACKNOWLEDGMENTS

The authors would like to acknowledge the financial support of the Natural Sciences and Engineering Research Council of Canada (NSERC) and the McGill Engineering Doctoral Award (MEDA). They would also like to thank the NVIDIA Corporation for the donation of a TITAN X GPU through their academic GPU grants program.

§ APPENDIX

§.§ Implementation Details

The code for MP-RPN and the deep feature extraction was built using Caffe <cit.>, and the Boosted Forest was based on Piotr's Computer Vision Matlab Toolbox <cit.>. Before training and testing, each full image of arbitrary size was resized such that its shorter edge had N pixels (N=900 for the WIDER FACE dataset and 400 for the FDDB dataset). For MP-RPN training, an anchor was assigned as a positive sample if it had an Intersection-over-Union (IoU) ratio greater than 0.5 with some ground truth box, and as a negative sample if it had an IoU ratio less than 0.3 with every ground truth box. Each mini-batch contains 1 image and 768 anchors sampled using OHEM, 256 for each detection branch. The ratio of positive to negative samples is 1:3 for all detection branches. The CNN backbone (from Conv1_1 to Conv5_3 in Figure <ref>) was a truncated VGG-16 net <cit.> pre-trained on the ImageNet dataset <cit.>. The weights of all the other convolutional layers were randomly initialized from a zero-mean Gaussian distribution with standard deviation 0.01.
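The anchor assignment rule just stated reduces to a few lines; the sketch below (again a minimal illustration, not the released code) labels each anchor from its maximum IoU over the ground truth boxes:

import numpy as np

def iou_matrix(a, g):
    # a: (N, 4) anchors, g: (M, 4) ground truths, as (x1, y1, x2, y2).
    x1 = np.maximum(a[:, None, 0], g[None, :, 0])
    y1 = np.maximum(a[:, None, 1], g[None, :, 1])
    x2 = np.minimum(a[:, None, 2], g[None, :, 2])
    y2 = np.minimum(a[:, None, 3], g[None, :, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (a[:, 2] - a[:, 0]) * (a[:, 3] - a[:, 1])
    area_g = (g[:, 2] - g[:, 0]) * (g[:, 3] - g[:, 1])
    return inter / (area_a[:, None] + area_g[None, :] - inter)

def label_anchors(anchors, gts, hi=0.5, lo=0.3):
    best = iou_matrix(anchors, gts).max(axis=1)
    labels = np.full(len(anchors), -1)  # -1: ignored during training
    labels[best > hi] = 1               # positive: IoU > 0.5 with some box
    labels[best < lo] = 0               # negative: IoU < 0.3 with every box
    return labels

The same IoU computation also underlies the evaluation criterion above and the NMS post-processing described next.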
We fine-tuned the layers from conv3_1 and up, using a learning rate of 0.0005 for 80k mini-batches and 0.0001 for another 40k mini-batches on the WIDER FACE training dataset. A momentum of 0.9 and a weight decay of 0.0005 were used. Face proposals produced by MP-RPN are post-processed individually for each detection branch in the following way. First, non-maximum suppression (NMS) with a threshold of 0.7 is applied to filter face proposals based on their classification scores. Then the remaining face proposals are ranked by their scores. For BF training, the 150, 40 and 10 top-ranked proposals in an image were selected from Det-4, Det-16 and Det-32, respectively. At test time, the same numbers (150, 40, 10) of proposals were selected from the corresponding branches, and finally all output proposals from the different branches were merged by NMS with a threshold of 0.5.

§.§ Model Analysis

In this subsection, we discuss controlled experiments on the validation set of the WIDER FACE dataset to examine how each model component affects the overall detection performance. Note that in order to save training time, Experiments 1-3 employed face detection models trained for 30k iterations on only 11 out of the total 61 event classes. The learning rate was set to 0.0005 for the first 20k iterations and 0.00005 for the remaining 10k iterations. Other hyper-parameters were set as stated in appendix A. The selected event classes are the first eleven classes (i.e., Traffic, Parade, Ceremony, People Marching, Concerts, Award Ceremony, Stock Market, Group, Interview, Handshaking and Meeting), which make up about 1/5 of the whole training set. In Experiment 4, the face detection model was trained on the whole WIDER FACE training set (61 event classes). All hyper-parameters in Experiment 4 were as stated in appendix A. Experiment-1: The roles of individual detection layers. Table <ref> shows the detection recall rates of the various detection branches as a function of face height in pixels. We observe that each detection branch has the highest detection recall for the faces that match its scale. The combination of all detection branches (the last row of Table <ref>) achieves the highest recall for faces of all scales. Note that the recall rate for small-scale faces (8≤height≤32) is much lower than that of medium-scale faces (32<height≤360) and large-scale faces (360<height≤900), reflecting the expected increase in the difficulty of face detection as scale drops. Experiment-2: The roles of atrous convolutional layers. Table <ref> shows the detection recall rates of the proposed MP-RPN for different design options (with/without "atrous" convolution and with/without OHEM). By comparing rows 1 and 3, as well as 2 and 4, we observe that the inclusion of the "atrous" convolution trick increases the detection recall rate of all branches. Experiment-3: The roles of the OHEM layers. By comparing rows 1 and 2, as well as 3 and 4 in Table <ref>, we can conclude that, in most cases, the inclusion of the OHEM layer increases the detection recall rate. However, in the absence of "atrous" convolution, the use of the OHEM layer causes a slight recall drop for medium-size faces (32<height≤360).
By comparing rows 1 and 4, we observe that the simultaneous inclusion of "atrous" convolution and OHEM consistently increases the detection recall at all face scales. Experiment-4: The roles of BF with various options. Table <ref> displays the average precision for various Boosted Forest (BF) options. We observe that although MP-RPN already achieves high average precision as a stand-alone face detector, the inclusion of a BF classifier further boosts the detection performance for faces at all levels of difficulty. Specifically, a BF classifier with "face" features (features pooled from the original proposal regions[See Section 3.B for details.]) achieves a relatively higher average precision gain for "easy" and "medium" faces, but a lower average precision gain for "hard" faces, compared to a BF classifier with "context" features (features pooled from a larger region surrounding the original proposal regions[See Section 3.B for details.]). When pooling complementary "face" and "context" features, the BF classifier achieves the highest gain for all of the "Easy", "Medium" and "Hard" faces.

§.§ Average processing time

We randomly selected 100 images from the WIDER FACE validation set. An image patch of resolution 640 × 480 was cropped from the center of each image[If the original image had a height less than 640 or a width less than 480 pixels, we padded the cropped image patch from the bottom and the right with zeros to make it exactly 640 × 480.], thus creating 100 new images. Both the proposed MP-RCNN and the classical Viola-Jones algorithm <cit.> were employed to process these 100 images. The average processing time per image is shown in Table <ref> below. Note that in order to guarantee a fair comparison, both algorithms were tested on a 3.5 GHz 8-core Intel Xeon E5-1620 server with 64 GB of RAM, and the image loading time was excluded from the processing time for both algorithms. The Viola-Jones algorithm[We used the code provided by the OpenCV website: <http://docs.opencv.org/2.4/modules/objdetect/doc/cascade_classification.html>. The face model used in the code was "haarcascade_frontalface_default".] used only CPU resources. An Nvidia GeForce GTX Titan X GPU was used for the CNN computations in MP-RCNN. From Table <ref>, we observe that the proposed MP-RCNN runs at about 4.6 FPS compared to the 10.9 FPS obtained by the classical Viola-Jones algorithm.

§.§ Face detection results on WIDER FACE test set

Figure <ref> shows some examples of the face detection results using the proposed MP-RCNN on the WIDER FACE test set.

§.§ Face detection results on FDDB

Figure <ref> shows some examples of the face detection results using the proposed MP-RCNN on the FDDB dataset. | http://arxiv.org/abs/1703.09145v1 | {
"authors": [
"Yuguang Liu",
"Martin D. Levine"
],
"categories": [
"cs.CV"
],
"primary_category": "cs.CV",
"published": "20170327153100",
"title": "Multi-Path Region-Based Convolutional Neural Network for Accurate Detection of Unconstrained \"Hard Faces\""
} |
| http://arxiv.org/abs/1703.09033v3 | {
"authors": [
"Yuta Hamada",
"Masatoshi Yamada"
],
"categories": [
"hep-th",
"gr-qc",
"hep-ph"
],
"primary_category": "hep-th",
"published": "20170327123552",
"title": "Asymptotic safety of higher derivative quantum gravity non-minimally coupled with a matter system"
} |
[email protected] College of Physics and Electronic Engineering, Xinyang Normal University, Xinyang, 464000, P. R. China [email protected] Institute of High Energy Physics and Theoretical Physics Center for Science Facilities, Chinese Academy of Sciences, Beijing, 100049, People's Republic of China In three-dimensional spacetime with a negative cosmological constant, general relativity can be written as two copies of SO(2,1) Chern-Simons theory. On a manifold with boundary, the Chern-Simons theory induces a conformal field theory, a WZW theory, on the boundary. In this paper, it is shown that, with suitable boundary conditions for the BTZ black hole, the WZW theory reduces to a massless scalar field on the horizon. 04.70.Dy, 04.60.Pp The Conformal Field Theory on the Horizon of BTZ Black Hole Chao-Guang Huang December 30, 2023 ============================================================

§ INTRODUCTION

In three-dimensional spacetime, general relativity becomes simple since it has no local degrees of freedom <cit.>. Indeed, the theory is equivalent to a Chern-Simons theory with a suitable gauge group <cit.>. It is a surprise that a black hole solution can exist when the theory has a negative cosmological constant Λ<0. This black hole, the so-called BTZ black hole <cit.>, can have arbitrarily high entropy, which is difficult to understand since the theory has no local degrees of freedom. This mystery can be understood if one starts from the Chern-Simons formulation. It is a standard result that on a manifold with boundary the Chern-Simons theory induces a Wess-Zumino-Witten (WZW) theory on the boundary, which is a conformal field theory. Carlip used this WZW theory to explain the entropy of the BTZ black hole <cit.>. Later it was shown that, for the boundary at conformal infinity rather than at the horizon, the Chern-Simons theory reduces to a Liouville theory on the boundary <cit.>. This Liouville theory has the right central charge to give the entropy of the BTZ black hole if one uses the Cardy formula <cit.>. For a review along this line, see Ref. <cit.>. There are other conformal field theory descriptions, which start from Brown and Henneaux's seminal work <cit.>. They observed that the asymptotic symmetry group of AdS_3 is generated by two copies of the Virasoro algebra, which correspond to a conformal field theory. This result can be seen as pioneering work on AdS_3/CFT_2 <cit.>. Based on this result, the entropy of the BTZ black hole can be calculated <cit.>, and it matches the Bekenstein-Hawking formula. But most of those conformal field theories are taken to be at conformal infinity (although with exceptions, such as <cit.>). A physically more appealing location is the horizon of the black hole. In this paper, we consider the field theory just on the horizon. Starting from the Chern-Simons theory, with suitable boundary conditions, it is shown that the WZW theory reduces to a chiral massless scalar field on the horizon. So on the BTZ horizon there are two chiral massless scalar fields, since 3D general relativity contains two copies of Chern-Simons theory. The paper is organized as follows. In section II, we summarize the relation between gravity, Chern-Simons theory and the WZW theory. In section III, the BTZ black hole is considered. With suitable boundary conditions, the boundary WZW theory reduces to a chiral massless scalar field theory. Section IV is the conclusion.

§ GRAVITY, CHERN-SIMONS THEORY AND WZW THEORY

As first shown in Ref. <cit.>, (2+1)-dimensional general relativity can be written as a Chern-Simons theory.
For the case of a negative cosmological constant Λ=-1/L^2, one can define two SO(2,1) connection 1-forms A^(±)a=ω^a±1/L e^a, where e^a and ω^a are the co-triad and spin connection 1-forms, respectively. Then, up to a boundary term, the first-order action of gravity can be rewritten as I_GR[e,ω]=1/8π G∫ e^a ∧ ( dω_a+1/2ϵ_abcω^b ∧ω^c)-1/6L^2ϵ_abc e^a∧ e^b ∧ e^c = I_CS[A^(+)]-I_CS[A^(-)], where A^(±)=A^(±)a T_a are SO(2,1) gauge potentials, and the Chern-Simons action is I_CS[A]=k/4π∫ Tr{A∧ dA+2/3 A∧ A ∧ A}, with k=L/4G. Similarly, the CS equation F^(±)= dA^(±)+A^(±)∧ A^(±)=0 is equivalent to the requirement that the connection is torsion-free and the metric has constant negative curvature. The equation implies that the potential A can locally be written as A=g^-1 dg. When the manifold has a boundary, a boundary term must be added. Assume the boundary has topology ∂ M=R× S^1. The usual boundary term is I_bd=k/4π∫_∂ M Tr A_u A_ũ, where u and ũ are two coordinates on the boundary. The boundary condition is chosen to be δ A_u|_∂ M=0 or δ A_ũ|_∂ M=0, depending on the case. With the boundary term, the total action, I_CS[A]+I_bd[A], is not gauge-invariant under the gauge transformation A̅=g^-1Ag+g^-1 dg. To restore gauge invariance, the Wess-Zumino-Witten term is introduced for the first boundary condition <cit.>: I^+_WZW[g^-1,A_u]=1/4π∫_∂ M Tr(g^-1∂_u g g^-1∂_ũg+2g^-1∂_ũg A_u)+1/12π∫_M Tr(g^-1 dg)^3, which is the chiral WZW action for a field g coupled to a background gauge potential A_u. With the WZW term, the full action is gauge-invariant: (I_CS+I_bd)[A̅]+kI^+_WZW[e^-1,A̅]=(I_CS+I_bd)[A]+kI^+_WZW[g^-1,A]. Thus, the gauge transformations g become dynamical at the boundary and are described by the WZW action, which is a conformal field theory. Those `would-be gauge degrees of freedom' <cit.> are present because gauge invariance is broken at the boundary.

§ THE BOUNDARY ACTION ON THE HORIZON OF BTZ BLACK HOLE

In the previous section, the boundary of the manifold could be arbitrary. If the horizon of the BTZ black hole is considered, a further reduction can be made due to the special properties of the horizon.

§.§ The BTZ black hole

To study the physics at the horizon, it is most convenient to use advanced Eddington coordinates. The metric of the BTZ black hole can be written as ds^2=-N^2 dv^2+2 dv dr+r^2 (dφ+N^φ dv)^2. Choose the following co-triads <cit.>: l_a=-1/2 N^2 dv+dr, n_a=-dv, m_a=r N^φ dv+r dφ, which give the following connection components: A^-(±)=-(N^φ∓1/L)dr-N^2/2 d(φ±v/L), A^+(±)=-d(φ±v/L), A^2(±)=r (N^φ±1/L)d(φ±v/L), where A^±=(A^0± A^1)/√(2). Define new variables, which will be useful later: u=φ-v/L, ũ=φ+v/L. A crucial property of the connection is that, on the whole manifold, one has A^(+)_u≡ 0, A^(-)_ũ≡ 0. Since the topology of the spatial section is a cylinder, which is non-trivial, the vacuum Chern-Simons equation F=0 is solved by a non-periodic group element, A=Q^-1 dQ. For a general SO(2,1) group element Q(ũ,u,r), using the Gauss decomposition, it can be written as Q=( [ 1 1/√(2)x; 0 1 ]) ([ e^-Ψ/2 0; 0 e^Ψ/2 ]) ([ 1 0; -1/√(2)y 1 ]). With this parametrization, the WZW action is <cit.> kI_WZW=k/4π∫_∂ M du dũ 1/2(∂_u Ψ∂_ũΨ-e^Ψ (∂_u x ∂_ũ y+∂_u y ∂_ũ x)).

§.§ Gauge transformation

Now we consider the gauge transformation (<ref>) with group element g_1 for A^(+). In the following we omit the superscript (+). To preserve the boundary condition δ A_u|_∂ M=0, the gauge transformation should be of the form g_1=g_1(r,ũ). But this is not enough: this boundary condition cannot tell us whether we are dealing with a black hole or not, so more restrictive boundary conditions are needed.
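Before imposing the near-horizon conditions, the Gauss decomposition above can be checked explicitly; the short sympy sketch below (a verification written for this text, not from the paper) multiplies the three factors and confirms that the sign convention e^{Ψ/2} in the lower-right diagonal entry is what makes det Q = 1, as required for an SL(2,R) representative of an SO(2,1) element:

import sympy as sp

x, y, Psi = sp.symbols('x y Psi')
s2 = sp.sqrt(2)
M1 = sp.Matrix([[1, x / s2], [0, 1]])
M2 = sp.Matrix([[sp.exp(-Psi / 2), 0], [0, sp.exp(Psi / 2)]])
M3 = sp.Matrix([[1, 0], [-y / s2, 1]])
Q = sp.simplify(M1 * M2 * M3)  # explicit group element
print(Q)
print(sp.simplify(Q.det()))    # -> 1

The off-diagonal entries of the product each carry a factor e^{Ψ/2}, which is consistent with the e^Ψ prefactor of the ∂x ∂y terms in the WZW action above.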
Near the horizon, a small parameter ϵ=r-r_+ can be defined, and N^2≈ 2 κϵ, so that A^-_ũ≈-κϵ. Since this condition reflects the presence of the horizon, we want the gauge transformation to preserve it; thus A̅^-_ũ∼ O(ϵ)=C_1 ϵ. Assume the gauge transformation is given by the SO(2,1) group element g_1(x_1,y_1,Ψ_1)=( [ 1 1/√(2)x_1; 0 1 ]) ([ e^-Ψ_1/2 0; 0 e^Ψ_1/2 ]) ([ 1 0; -1/√(2)y_1 1 ]). Under the gauge transformation (<ref>), A̅^-=e^Ψ_1 (A^- -A^2 x_1+A^+ x_1^2/2+dx_1). Since A^2 and A^+ are both finite at the horizon, to maintain the boundary condition (<ref>) one needs x_1(r,ũ)=ϵ h(ũ), where h(ũ) is a function finite at the horizon; Ψ_1(r,ũ) must also be finite at the horizon. The other components transform into A̅^2=A^2 (1-e^Ψ_1 y_1 x_1)-A^+ x_1(1-e^Ψ_1 y_1 x_1/2)+A^- e^Ψ_1 y_1 +dΨ_1+e^Ψ_1 y_1 dx_1, A̅^+=A^+ e^-Ψ_1(1-e^Ψ_1 y_1 x_1/2)^2+A^2 y_1(1-e^Ψ_1 y_1 x_1/2)+A^- e^Ψ_1 y_1^2/2+y_1 dΨ_1+dy_1 +y_1^2 e^Ψ_1 dx_1/2. These components are required to be finite at the horizon, which gives y_1(r_+)= finite, Ψ_1(r=r_+)= finite. So the second term in the action (<ref>) vanishes on the horizon, 2 e^Ψ_1∂_a x_1 ∂_b y_1 ∼ϵ→ 0. The final action on the horizon is kI_WZW=k/4π∫_∂ M du dũ 1/2∂_u Ψ_1 ∂_ũΨ_1=k/4π L∫_∂ M dφ dv [(∂_v Ψ_1)^2- L^2 (∂_φΨ_1)^2], with Ψ_1 depending only on ũ=φ+v/L. So it is a chiral massless scalar field. Similar results can be obtained for A^(-), giving another chiral massless scalar field, Ψ_2, depending only on u.

§ CONCLUSION

In this paper, the field theory on the horizon of the BTZ black hole is investigated. Starting from the Chern-Simons formulation, one obtains a chiral WZW theory on any boundary. Restricted to the horizon, this WZW theory reduces further to a chiral massless scalar field theory. Since general relativity is equivalent to two copies of CS theory, the final theory on the horizon consists of two chiral massless scalar field theories with opposite chirality. Compared with the conformal field theories at the conformal boundary, the massless scalar field theory, which is also a conformal field theory, is more relevant to black hole physics: it lives just on the horizon. But the central charge of this theory is c=1 <cit.>, which is too small to account for the entropy of the BTZ black hole if one uses the Cardy formula. The conformal symmetry here is different from the one that appears in Carlip's effective description of black hole entropy in arbitrary dimension <cit.>. As noticed in <cit.>, the symmetry of this paper is on the "φ-v cylinder", while the symmetry of <cit.> is on the "r-v plane". In previous work <cit.>, it was shown that the boundary degrees of freedom can also be described by a BF theory. Since both the BF theory and the massless scalar field theory live on the horizon, the relation between these two theories needs further investigation. This work is supported by the NSFC (Grant No. 11690022 and No. 11647064). | http://arxiv.org/abs/1703.08894v1 | {
"authors": [
"Jingbo Wang",
"Chao-Guang Huang"
],
"categories": [
"gr-qc"
],
"primary_category": "gr-qc",
"published": "20170327014127",
"title": "The Conformal Field Theory on the Horizon of BTZ Black Hole"
} |
Four fermions in a one-dimensional harmonic trap: Accuracy of a variational-ansatz approach T. Sowiński December 30, 2023 =========================================================================================== Turbulence models attempt to account for unresolved dynamics and diffusion in hydrodynamical simulations. We develop a common framework for two-equation Reynolds-Averaged Navier-Stokes (RANS) turbulence models, and we implement six models in the Athena code. We verify each implementation with the standard subsonic mixing layer, although the level of agreement depends on the definition of the mixing layer width. We then test the validity of each model into the supersonic regime, showing that compressibility corrections can improve agreement with experiment. For models with buoyancy effects, we also verify our implementation via the growth of the Rayleigh-Taylor instability in a stratified medium. The models are then applied to the ubiquitous astrophysical shock-cloud interaction in three dimensions. We focus on the mixing of shock and cloud material, comparing results from turbulence models to high-resolution simulations (up to 200 cells per cloud radius) and ensemble-averaged simulations. We find that the turbulence models lead to increased spreading and mixing of the cloud, although no two models predict the same result. Increased mixing is also observed in inviscid simulations at resolutions greater than 100 cells per radius, which suggests that the turbulent mixing begins to be resolved. hydrodynamics — turbulence — methods: numerical — shock waves — ISM: clouds § INTRODUCTION The interstellar medium (ISM) is dominated by turbulent processes <cit.>. A common example is the interaction of a shock wave with a cloud of gas. Stellar winds and supernovae launch supersonic shock waves into the ISM that collide with nearby molecular gas clouds <cit.>. The shock drives hydrodynamic instabilities at the cloud surface, such as the Rayleigh-Taylor (RT), Kelvin-Helmholtz (KH), and Richtmyer-Meshkov (RM) instabilities, that disrupt and eventually destroy the cloud <cit.>. This interaction is well-studied in numerical simulations <cit.>. In Eulerian hydrodynamics simulations, the growth of turbulence is controlled by numerical viscosity (resolution effects). Adequate resolution is therefore necessary to properly capture the dynamics. Previous work on the shock-cloud interaction has found that about 100 cells per radius are necessary for convergence of global quantities <cit.>, although this requirement may be relaxed in 3D simulations <cit.>. However, because the instabilities grow fastest on the smallest scales, the details of the small-scale mixing are dominated by resolution effects. <cit.> found that all quantities except the mixing fraction show convergence in shock-cloud simulations. One possible means to mitigate resolution effects is a turbulence model, sometimes referred to as a subgrid-scale (SGS) model. Turbulence models attempt to mimic the effect of unresolved small-scale turbulence on the large-scale flow, often through the addition of "turbulent" stresses. Such models are common in engineering codes, and they are increasingly used in astrophysics <cit.>. Turbulence models can be separated into two types: Reynolds-Averaged Navier-Stokes (RANS) and Large-Eddy Simulations (LES). The former relies on time-averaging of the decomposed fluid equations, while the latter uses spatial filtering of variables.
Here, we only consider RANS models; for a review of LES methods, see <cit.>. §.§ Turbulence models in the shock-cloud interaction Both RANS and LES turbulence models have been used to model the interaction of a shock with a cloud, in different environments and with different results. <cit.> examined the hydrodynamic shock-cloud interaction in two dimensions with the k-ε model, a two-equation RANS model. The authors argued that the k-ε turbulence model adequately captured the dynamics of the shock-cloud interaction and reduced the resolution requirements. Follow-up studies by <cit.> revealed that the k-ε model did not significantly alter the dynamics or improve the resolution convergence in three-dimensional simulations. <cit.> used a different two-equation RANS model, based on the k-L formalism, to track metal enrichment in so-called "minihalos". An enriched supersonic galactic outflow impacts a diffuse cloud of primordial gas, subject to both gravity and radiative cooling. The authors modified the k-L model of <cit.>, which was calibrated for RT and RM instabilities, to include the KH instability and compressibility effects. Here the authors specifically investigated the turbulent mixing of metals. While there were notable differences in the enrichment of diffuse gas, the metal abundance in the dense gas was largely unaffected by the turbulence model. <cit.> applied a one-equation LES model to the simulations of <cit.>, which studied a cosmological minor-merger, i.e., the infall of a low-mass subcluster into a larger cluster. This resembles the shock-cloud interaction but on larger scales. For this application, the authors used a linear eddy-viscosity relation with a dynamic procedure to calculate transport coefficients ("shear-improved" SGS model). The authors found that, while the LES turbulence model did not significantly alter the energy of the interaction, it did affect the vorticity and subsequent evolution of the infalling gas. It is difficult to interpret and compare the effects of the turbulence models in the simulations described above. First, each application explored different physical regimes and therefore included different physics (e.g. radiative cooling, gravity). Second, some turbulence models incorporated additional effects, such as buoyancy and compressibility, that other models implicitly neglect. Third, each turbulence model affects the dynamics differently. In the case of LES, the resolved dynamics are largely unaffected, as the model only considers turbulent effects near and below the filter width, which is typically close to the grid scale. However, RANS models average out dynamical fluctuations at all scales below some characteristic length scale, which varies throughout the simulation and could be much larger than the grid scale. Fourth, the "true" solution to the shock-cloud interaction is unknown. One can compare results obtained with a turbulence model to higher-resolution simulations, but without an explicit viscosity the degree of mixing remains constrained by the numerical viscosity. Finally, it is unclear whether these turbulence models are valid in the astrophysical regimes being probed. All turbulence models rely on closure approximations with adjustable parameters often determined by comparison with empirical results. The laboratory experiments used for calibration are typically subsonic and incompressible in nature.
While some models can be modified to produce correct results in transonic and moderately compressible regimes, it is unknown whether these modifications remain valid in the highly supersonic, highly compressible conditions characteristic of the ISM. §.§ Motivation and outline In an effort to better understand the effects and validity of turbulence models in astrophysical applications, we perform hydrodynamical simulations of the generic shock-cloud interaction with six two-equation RANS models. We first develop a common framework for two-equation turbulence models, and we implement this framework in the Athena hydrodynamics code <cit.>. We verify the implementation of each turbulence model with the subsonic shear mixing layer test, ensuring that the width of the mixing layer grows linearly in accord with experimental results. We also highlight the dependence of the growth rate on the definition of the mixing layer width. We then test the validity of each model into the supersonic regime. Most models are known to perform poorly in transonic applications, but we explore three common "compressibility corrections" that improve results. Three of the models here considered include buoyancy effects, such as the RT instability. For these models, we further verify our implementation with a stratified medium test, in which we compare the temporal growth of the RT boundary layer to experimental results. After determining that the turbulence models are implemented correctly, we test each turbulence model in a three-dimensional adiabatic shock-cloud interaction. We quantify not only the global dynamics but also the small-scale mixing. To examine the validity of the turbulence models, we perform a resolution convergence test of the inviscid shock-cloud interaction, up to 200 cells per radius in full 3D on a fixed grid. We also compare results to an ensemble-average of inviscid simulations initialized with grid-scale initial turbulence, scaled to roughly match the initial conditions of the turbulence models. Finally, we consider the effects of initial conditions and compressibility corrections in the turbulence models, finding that the former makes a significant difference in evolution whereas the latter does not. We outline the six RANS turbulence models and their implementation in Athena in <ref>. We verify each implementation with a mixing layer test in <ref>, and we further verify three of the models with the stratified medium test in <ref>. The turbulence models are then used in the shock-cloud simulation; the set-up and results of these simulations are presented and discussed in <ref>. Finally, we discuss the validity of turbulence models in astrophysical applications in <ref> before concluding in <ref>. § TURBULENCE MODELS We have modified the Athena hydrodynamics code <cit.> version 4.2 to solve the system of equations:[For simplicity of notation, we do not differentiate Reynolds-averaged (ρ, P) and Favre-averaged (ũ, Ẽ, C̃) variables, where ϕ̃ ≡ ⟨ρϕ⟩/⟨ρ⟩ and ⟨·⟩ denotes a Reynolds average.] ∂ρ/∂t + ∇·(ρu) = 0, ∂(ρu)/∂t + ∇·(ρuu + PI) = ∇·τ', ∂E/∂t + ∇·[(E + P)u] = ∇·(uτ' - q') + Ψ_E, ∂(ρC)/∂t + ∇·(ρCu) = ∇·d', ∂(ρk)/∂t + ∇·(ρku) = ∇·[(μ_T/σ_k)∇k] + Ψ_k, ∂(ρξ)/∂t + ∇·(ρξu) = ∇·[(μ_T/σ_ξ)∇ξ] + Ψ_ξ, with the density ρ, the fluid velocity vector u, the pressure P, the unit dyad I, the total resolved energy density E [We do not include the turbulent kinetic energy ρk in the definition of total energy; therefore we are simulating the total resolved energy.
See section 2.4.5 of <cit.> for a complete discussion of compressible energy equation systems.]: E = P/(γ - 1) + (1/2)ρ|u|^2, a passive colour field C, the specific turbulent kinetic energy k, an auxiliary turbulence variable ξ, the turbulent stress tensor τ', the turbulent heat flux q', the turbulent diffusive flux d', turbulent viscosity μ_T, turbulent diffusion coefficients σ, and source terms due to turbulent effects Ψ. Two-equation models are so named because they add two "turbulent" variables – the specific turbulent kinetic energy k and an auxiliary variable ξ that varies from model to model – with corresponding transport equations (Eqs. <ref>-<ref>). Models are typically denoted by the chosen auxiliary turbulence variable; e.g., ξ→ε yields the k-ε model. Here, we examine the standard k-ε model of <cit.>, as well as the extended model of <cit.>; the k-L models of <cit.> and GS11; and the k-ω models of <cit.> and <cit.>. For the k-ε and k-ω models, we also test the effect of three standard compressibility corrections, presented in <cit.>, <cit.>, and <cit.>. The turbulent stress tensor τ' is defined as τ'_ij = 2μ_T (S_ij - (1/3)δ_ij S_kk) - (2/3)δ_ij ρk, with resolved strain rate tensor S given by S_ij = (1/2)(∂u_i/∂x_j + ∂u_j/∂x_i). The specific turbulent kinetic energy k is defined as k ≡ (1/2)τ'_kk and requires an additional transport equation. The generic transport equation (Eq. <ref>) is applicable to (almost) all models investigated, with source term Ψ_k = P_T - C_D ρε + C_B ρ√(k) A_i g_i, with the production term P_T = τ'_ij ∂u_i/∂x_j, specific dissipation ε, dissipation coefficient C_D, buoyancy coefficient C_B, and Atwood number in the ith direction A_i with acceleration g_i = -(1/ρ) ∂P/∂x_i. The source term on the energy equation is Ψ_E = -Ψ_k. Table <ref> presents a summary of all model constants and values. In adiabatic simulations, the turbulent heat flux vector q' is defined as q'_j = -κ_T ∂T/∂x_j = -[γ/(γ - 1)] (μ_T/Pr_T) ∂T/∂x_j, with turbulent thermal conductivity κ_T = c_p μ_T/Pr_T, specific heat capacity c_p = γ/(γ - 1), and turbulent Prandtl number Pr_T. Passively advected scalar fields are diffused using a gradient-diffusion approximation, where the turbulent diffusive flux vector d' is given by d'_j = (μ_T/σ_C) ∂C/∂x_j, with Schmidt number σ_C generally of order unity. §.§ k-epsilon models In the k-ε formalism, the auxiliary turbulence variable ξ is defined to be the specific turbulent energy dissipation ε ∝ k^3/2 L^-1, where L is a defined turbulent length scale. The exact scaling depends on the implementation; we here use ε = C_μ^3/4 k^3/2 L^-1, where C_μ is a model constant related to the viscosity. §.§.§ LS74 LS74 outlined the standard version of the k-ε model, and it is perhaps the most widely used RANS turbulence model. The model uses the eddy-viscosity μ_T defined as μ_T = C_μ ρ k^2/ε, with C_μ = 0.09. The transport equation for ε (Eq. <ref>) has the source term Ψ_ε = C_1 (ε/k) P_T - C_2 ρ ε^2/k. The model constants are summarized in Table <ref>. Because C_B = 0, the model neglects buoyant effects, such as the RT instability. §.§.§ MS13 To include the RT and RM instability effects in the k-ε model, MS13 added a buoyancy term, with the Atwood number in Eq. <ref> defined as A_i = [k^3/2/(ρε)] [∂ρ/∂x_i - (ρ/P) ∂P/∂x_i]. The source term for the dissipation equation Ψ_ε is also extended as Ψ_ε = C_1 (ε/k) P_T - C_2 ρ ε^2/k + C_3 ρ (ε/√(k)) A_i g_i. The model constants are summarized in Table <ref>; we note that the MS13 values are largely the same as LS74 but with modified transport coefficients and C_B ≠ 0.
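To make the framework concrete, the LS74 source terms above reduce to only a few operations per cell. The following one-dimensional sketch is illustrative, not our Athena implementation: it assumes C_D = 1 and C_B = 0 with the standard LS74 constants, and it omits the diffusive fluxes in the k and ε transport equations.

```python
import numpy as np

def ls74_sources(rho, u, k, eps, dx, C_mu=0.09, C_1=1.44, C_2=1.92):
    """1D sketch of the LS74 k-epsilon eddy viscosity and source terms.

    Assumes C_D = 1 and no buoyancy (C_B = 0); diffusive fluxes omitted.
    """
    mu_T = C_mu * rho * k**2 / eps            # eddy viscosity
    dudx = np.gradient(u, dx)                 # resolved strain, S_xx = du/dx
    # deviatoric stress: tau'_xx = 2 mu_T (S_xx - S_kk/3) - (2/3) rho k
    tau_xx = 2.0 * mu_T * (dudx - dudx / 3.0) - (2.0 / 3.0) * rho * k
    P_T = tau_xx * dudx                       # production term
    psi_k = P_T - rho * eps                   # source for the rho*k equation
    psi_eps = C_1 * (eps / k) * P_T - C_2 * rho * eps**2 / k
    return mu_T, psi_k, psi_eps
```

The other models differ chiefly in the choice of auxiliary variable, the constants, and (for MS13, C06, and GS11) the additional buoyancy terms.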
§.§ k-L models The k-L model is a two-equation RANS model developed by DT06 to study RT and RM instabilities. Shear (the KH instability) was added by C06 and extended to include compressibility effects by GS11. The auxiliary variable ξ is defined to be the eddy length scale L. The model uses the eddy-viscosity μ_T = C_μ ρ L √(2k). The transport equation for L (Eq. <ref>) has the source term Ψ_L = C_1 ρ L (∇·u) + C_2 ρ√(2k). Again, we here set the specific dissipation in Eq. <ref> to be ε = C_μ^3/4 k^3/2 L^-1. §.§.§ C06 C06 added shear to the k-L model of DT06 by employing the full stress tensor rather than just the turbulent pressure term. This necessitated re-calibrating the model coefficients of DT06. We note that C06 used a slightly different RT growth rate parameter (α = 0.05 instead of α = 0.0625 in DT06) when calibrating the model. Buoyancy effects are included via the Atwood number defined as A_i = (ρ_+ - ρ_-)/(ρ_+ + ρ_-) + (L/ρ) ∂ρ/∂x_i, where ρ_+ and ρ_- are the reconstructed density values at the right and left cell faces, respectively. The model constants are summarized in Table <ref>; we note that the constant values appear to differ from those given in C06, but this is solely due to our generic two-equation framework, which combines and re-defines certain constants. §.§.§ GS11 Similar to C06, the model of GS11 is based on the k-L model of DT06, but with the complete turbulent stress tensor to include KH effects. The model uses a slightly different definition of the Atwood number from C06, with A_i = (ρ_+ - ρ_-)/(ρ_+ + ρ_-) + [2L/(ρ + L|∂ρ/∂x_i|)] ∂ρ/∂x_i, where again ρ_+ and ρ_- are the reconstructed density values at the right and left cell faces, respectively. GS11 also introduces a variable (τ_KH) to account for compressibility effects by modifying the turbulent stress tensor, τ'_ij = 2μ_T τ_KH (S_ij - (1/3)δ_ij S_kk) - (2/3)δ_ij ρk. τ_KH is calibrated with compressible shear layer simulations and estimated using a "local" Mach number M_l ≡ |∇×u| L/c_s, where c_s is the local sound speed. However, the piecewise fit for τ_KH given by Eq. 19 in GS11 is discontinuous, which can lead to numerical issues. We therefore fit their formulation with a smooth function, τ_KH(M_l) = 0.000575 + 0.19425/[1.0 + 0.000337 exp(17.791 M_l)]. The model constants are summarized in Table <ref>; we note that the C06 and GS11 model constants differ despite significant similarity in model formulation and calibration. §.§ k-omega models The k-ω model was first developed by W88 and updated in <cit.> and W06. The auxiliary variable ξ is defined to be the specific dissipation rate (or eddy frequency) ω = k^1/2 L^-1, which has units of inverse time. Then the specific dissipation is ε = C_μ k ω. To our knowledge, this is the first use of a k-ω model in an astrophysical application. §.§.§ W88 The first version of the k-ω model is outlined in W88. The model uses the eddy-viscosity μ_T = C_μ ρ k/ω. The transport equation for ω (Eq. <ref>) uses the source term Ψ_ω = C_1 (ω/k) P_T - C_2 ρω^2. The model constants are summarized in Table <ref>. §.§.§ W06 The most recent version of the k-ω model is presented in W06 and <cit.>. While the model is similar to W88, there are important (and elaborate) differences, such as cross-diffusion terms and stress limiters. While the additional terms improve the accuracy and reduce the dependence on initial conditions, the model is sufficiently complex to prohibit a generic description.
Our implementation in Athena includes the additional terms, and we refer the reader to W06 and <cit.> for a full description of the model. For completeness we note approximate constant values in Table <ref>. §.§ Compressibility Corrections A common way to account for compressibility effects is to modify the turbulence dissipation rate ε. In theory, ε is decomposed into solenoidal and dilatational components, with the latter only manifesting in compressible turbulence. In practice, only a slight modification is needed to the k and ω equations. In Eq. <ref>, the second term on the right hand side is modified as C_D ρε → C_D ρε [1 + F(M_t)], where F(M_t) is a function of the local turbulent Mach number M_t ≡ √(2k)/a_s, with a_s the local sound speed. No further changes are needed in the k-ε formalism. In the k-ω formalism, Eq. <ref> is also modified with C_2 ρω^2 → [C_2 - C_D F(M_t)] ρω^2. We consider three forms for F(M_t) proposed in the literature. The simplest model is that of S89, which uses F(M_t) = M_t^2. The most complex model is that of Z90, with F(M_t) = 0.75 {1.0 - exp[-1.39(γ + 1.0)(M_t - M_t0)^2]} ℋ(M_t - M_t0), with ℋ the Heaviside step function and M_t0 ≡ 0.10 √(2/(γ + 1)). Finally, the model of W92 suggests F(M_t) = 1.5 (M_t^2 - 0.0625) ℋ(M_t - 0.25). It is worth noting that these are purely phenomenological models; resolved DNS simulations by <cit.> have demonstrated that the dissipation is not actually reduced in compressible turbulence. Despite this realization, compressibility corrections that modify the dissipation are still commonly used because they yield accurate results in many applications. As noted in <ref>, GS11 uses a different type of compressibility correction, which modifies the turbulent stress tensor. No satisfactory correction is available for C06. §.§ Turbulence model initial conditions In simulations with a turbulence model, we must specify initial conditions for the turbulent kinetic energy k and the additional turbulent variable (ξ → ε, L, or ω). We desire identical initial conditions for all models; we therefore set the turbulent length scale L in all models and convert using scaling relations. Based on dimensional arguments, ε ∝ k^3/2/L and ω ∝ k^1/2/L. The literature values for the constant of proportionality vary; we obtained the best agreement across models using ε_0 = C_μ^3/4 k_0^3/2 L_0^-1 and ω_0 = C_μ^-1/4 k_0^1/2 L_0^-1. §.§ Implementation in Athena The turbulence update is first order in time and implemented via operator splitting. The fluxes are calculated at cell walls using a simple average to reconstruct quantities from cell-centred values. Spatial derivatives are computed using second order central differences. Source terms are evaluated after application of the viscous fluxes and are applied with an adaptive Runge-Kutta-Fehlberg integrator (RKF45). Stability of the explicit diffusion method is preserved by limiting the overall hydrodynamic time step based on the condition Δt ≤ (Δ^2 ρ)/(6μ_T), where Δ is the minimum cell size. The dependence on Δ^2 limits the feasibility of our implementation to low resolution simulations. § MIXING LAYER TEST To verify the implementation of each turbulence model in Athena, we perform a one-dimensional temporal mixing layer test. Our set-up is nearly identical to that described in section 2.2.2 of GS11, which was adapted from section 3 of C06. We initialize a discontinuity in the perpendicular (y) velocity at the origin.
The difference in velocity between the left and right states sets the convective Mach number, defined as <cit.> M_c ≡ |v_l - v_r|/(c_l + c_r), with v the y-velocity and c the sound speed, with subscripts l and r for the left and right regions respectively. Unlike GS11, we shift the frame of reference to move at the convective velocity; then v_l = -v_r. We also smooth the initial velocity discontinuity with a hyperbolic tangent function, as was done in <cit.>. The parallel (x) velocity is zero. We use an ideal equation of state with γ = 1.4. The density and pressure are constant at ρ_0 = 1.0 g cm^-3 and P_0 = 1.72×10^10 erg cm^-3, corresponding to a uniform sound speed c_l = c_r = 1.55×10^5 cm s^-1. The simulation domain is a one-dimensional region with extent -5.0 cm < x < 5.0 cm with a resolution of 4096 cells. Similar to GS11, we initialize a small shear layer of width δ_0 = 0.1 cm centred at the interface with turbulent energy k = 0.02 (Δv)^2 and L = 0.2 δ_0, where Δv = |v_l - v_r|. This initial layer is also smoothed to the background values of k_0 = 10^-4 (Δv)^2 and L_0 = 10^-2 δ_0. We run each simulation for 200 μs. The velocity discontinuity generates a shear layer, and the width of the shear layer δ grows linearly in time as δ(t) = C_δ Δv t, where C_δ is a constant. The exact value for C_δ depends on how the shear layer thickness δ is defined. In lab experiments, the visual thickness δ_viz <cit.> or pressure thickness δ_p <cit.> are used. In numerical experiments, the velocity thickness δ_b, energy thickness δ_s, and vorticity thickness δ_ω are often used <cit.>; less common is the momentum thickness, δ_θ <cit.>. C06 and GS11 used a 1 per cent threshold on the velocity thickness (which we will denote as δ_b1), considering regions where 0.01 < (v - v_l)/(Δv) < 0.99; engineering literature tends to use a 10 per cent threshold (δ_b10), defined similarly to δ_b1. W88 used a 10 per cent energy thickness (δ_s10), defined where 0.1 < (v - v_l)^2/(Δv)^2 < 0.9. We will compare results using these three definitions, as well as the momentum thickness δ_θ = [1/(ρ_0 (Δv)^2)] ∫ ρ (v_l - v)(v - v_r) dx and the vorticity thickness δ_ω = |v_l - v_r|/(∂v/∂x)_max. A further complication is that lab experiments of the plane mixing layer measure a spatial spreading rate, δ'(x) ≡ dδ/dx. In our experiment, we move in a frame of reference at the convective velocity v_c = (1/2)(v_l + v_r) (assuming c_l = c_r) and therefore measure a temporal spreading rate <cit.>, δ'(t) = dδ/dt = (dx/dt)(dδ/dx) = v_c δ'(x). Values for C_δ estimated from plane mixing layer experiments <cit.> and high-resolution numerical simulations <cit.> are reported in Table <ref>, where the subscript on C indicates the corresponding shear layer thickness definition. §.§ Mixing layer results Figure <ref> shows the time evolution of a subsonic (M_c = 0.1) mixing layer with the LS74 k-ε model. The profiles of the y-velocity v, turbulent kinetic energy k, and turbulent length L all spread in time; as noted, the exact spreading rate depends on how the layer thickness is defined. Figure <ref> shows the growth of the shear layer thickness δ(t) for different layer definitions. All definitions show linear growth in time. The 1 per cent velocity thickness grows at the greatest rate, while the momentum thickness increases at the lowest rate. We use a χ^2 minimization linear fit to estimate C_δ; the results are presented in Table <ref>. Table <ref> also shows the growth rates at M_c = 0.10 for all RANS models tested.
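For reference, the thickness definitions used for these fits can be evaluated directly from a discrete velocity profile. A minimal sketch, assuming a uniform grid and a smooth, monotonic mean profile, is:

```python
import numpy as np

def layer_thicknesses(x, v, rho, v_l, v_r, rho_0=1.0):
    """Mixing layer widths from a 1D profile v(x) on a uniform grid x."""
    dv = abs(v_l - v_r)
    dx = x[1] - x[0]
    s = (v - v_l) / (v_r - v_l)                # normalized profile, 0 -> 1
    inside = (s > 0.1) & (s < 0.9)             # 10 per cent velocity thickness
    delta_b10 = inside.sum() * dx
    # momentum thickness: integral of rho (v_l - v)(v - v_r) / (rho_0 dv^2)
    delta_theta = np.trapz(rho * (v_l - v) * (v - v_r), x) / (rho_0 * dv**2)
    # vorticity thickness: dv / max |dv/dx|
    delta_omega = dv / np.abs(np.gradient(v, dx)).max()
    return delta_b10, delta_theta, delta_omega
```

The 1 per cent velocity thickness and the energy thickness follow from the same masking approach with the corresponding thresholds.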
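Similarly, the three dissipation-based compressibility corrections of <ref>, which enter the compressible runs discussed below, are simple algebraic functions of the turbulent Mach number. A sketch using the coefficients quoted in the text:

```python
import numpy as np

def F_S89(M_t):
    """S89 correction: F = M_t^2."""
    return M_t**2

def F_Z90(M_t, gamma=1.4):
    """Z90 correction with Heaviside cut at M_t0 = 0.10 sqrt(2/(gamma+1))."""
    M_t0 = 0.10 * np.sqrt(2.0 / (gamma + 1.0))
    f = 0.75 * (1.0 - np.exp(-1.39 * (gamma + 1.0) * (M_t - M_t0)**2))
    return np.where(M_t > M_t0, f, 0.0)

def F_W92(M_t):
    """W92 correction with Heaviside cut at M_t = 0.25."""
    return np.where(M_t > 0.25, 1.5 * (M_t**2 - 0.0625), 0.0)
```

In the k-ε equations the dissipation term C_D ρε is then multiplied by [1 + F(M_t)], with M_t = √(2k)/a_s evaluated locally.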
We find that the various turbulence models lead to differing growth rates on the same test problem. Although most models do not reproduce the measured growth rate for all thickness definitions, all models do produce linear growth in time and roughly agree with the measured value for at least one definition, leading us to conclude that our models are implemented correctly in Athena. Variations in numerical method between codes could lead to discrepancies with previous work; further, there is significant uncertainty on the measured values. Interestingly, there is no clear relation between the different measures and models; for example, C_b10 is much greater with the GS11 model compared to the LS74 model, but C_ω is slightly less. This suggests no single measure should be preferred.Finally, we note that C06 and GS11 calibrated their turbulence models using a 1 per cent velocity definition for the mixing layer. While their models show good agreement with this definition, we find that these models largely do not predict spreading rates in agreement with measured values when using other definitions. This suggests that a 1 per cent criterion may not be the best definition for comparison. §.§ Compressible mixing layer The spreading rate of a compressible mixing layer is found to decrease with increasing convective Mach number <cit.>. The difference is expressed as the compressibility factor Φ≡δ'/δ'_i, where δ'_i is the incompressible growth rate. Experiments have yielded different relations between M_c and Φ, such as the popular “Langley” curve <cit.>, the results of <cit.>, and the fit of <cit.>.We perform simulations with increasing convective Mach number up to M_c = 10. We use the growth rate determined at M_c = 0.1 with thickness δ_b10 as our incompressible growth rate δ'_i. Results obtained with the LS74 model are presented as solid circles in Figure <ref>, with two experimental curves shown for comparison. Although the spreading rate does decrease with increasing Mach number, it does not follow the experimental trend. This is consistent with previous work which shows that standard two-equation RANS turbulence models do not reproduce the observed reduction in spreading rate without modifications.As described in <ref>, three authors (S89, Z90, and W92) have proposed “compressibility corrections” to better capture the decrease. These corrections work by increasing the dissipation rate due to pressure-dilatation effects. Although direct numerical simulation results have shown that this is not actually the case <cit.>, these ad hoc compressibility corrections are still widely used because they produce more accurate results (at least in the transonic regime). Figure <ref> also shows results obtained when the three compressibility corrections are applied to the LS74 model. All three corrections do decrease the spreading rate to roughly the experimental values, at least up to M_c = 5; above this, the growth rate is slightly below the experimental estimate. The difference between the corrections of S89, Z90, and W92 is negligible. Similar results are obtained when applied to the MS13, W88, and W06 models.There is no straightforward way to apply these corrections to the model of C06; however, GS11 does include a compressibility correction through the variable τ_ KH (see Section <ref>). Results obtained with the model of GS11 are also shown on Figure <ref>. The asymptotic nature of the τ_ KH function (Eq. 
<ref>) reproduces the observed behavior of compressible layers up to M_c ≈ 1; however, above this point the GS11 formulation leads to growth rates that are too small. Indeed, data points are not available for M_c > 2.5 for GS11 because the model did not evolve.§ STRATIFIED MEDIUM TEST Three of the models here considered include buoyant effects to capture the RT instability, namely MS13, C06, and GS11. To further verify the implementation of these models, we perform a two-dimensional stratified medium test. Our set-up is nearly identical to that described in section 2.2.1 of GS11, which was itself adapted from section 5 of DT06. We accelerate a heavy fluid of density ρ_1 = 1.0 g cm^-3 into a lighter fluid of density ρ_2 = 0.9 g cm^-3 from an initially hydrostatic state. The acceleration acts in the -y direction at g = 9.8×10^8 cm s^-2. The grid is 0.02×1.0 cm with a resolution of 16×800 cells, and the interface is at the midpoint of the y axis. The temperature is discontinuous at the interface, with T_1 = 45 K and T_2 = 50 K, and follows a profile to maintain hydrostatic equilibrium. Note that we do not perturb the interface; as the interface is grid-aligned, the RT instability will not develop in an inviscid code. However, a buoyant turbulence model will recognize the impulsive density and pressure gradients and generate turbulence, leading to the development of a mixing layer between the two fluids. Bubbles of light fluid will penetrate the heavy fluid with height h(t) = α_b A g t^2, where A = (ρ_1 - ρ_2)/(ρ_1 + ρ_2) is the Atwood number and α_b ≈ 0.06 is a constant empirically determined from experiments <cit.>. Numerical simulations of the RT instability tend to underestimate the growth by a factor of ∼ 2 <cit.>, underscoring the need for a turbulence model.Figure <ref> shows the evolution of the boundary layer for the turbulence models of C06, GS11, and MS13. The other turbulence models (LS74, W88, and W06) lack buoyant source terms; hence they cannot capture the RT instability and show no evolution in this test case. We compare the growth of turbulent kinetic energy k(y,t) and turbulent length scale L(y,t) with the analytic solutions given in DT06. The model of GS11 shows good agreement with the analytic predictions; however, the models of C06 and MS13 do not accurately follow the evolution. We note that C06 used a slightly lower value of the bubble penetration constant α_b compared to DT06 when calibrating the model; however this is insufficient to fully explain the discrepancy. Figure <ref> also shows the evolution of the density ρ, the temperature T, and the heavy fluid mass fraction F_ h, determined using a passive colour field C that is initialized to unity in the heavy fluid and to zero in the light fluid.We can also determine the growth rate of the bubble height h(t), estimated as the point where the mass fraction of heavy material F_ h = 0.985 <cit.>. Figure <ref> shows the growth of the bubble height h(t) plotted against Agt^2; hence the lines should be linear with a slope of α≈ 0.06. We see that, after an initial transient phase, the GS11 model does show a linear trend with α≈ 0.050 – slightly lower than expected but still in good agreement. The model of C06 also shows a linear trend, but the layer grows too slowly with α≈ 0.038. The MS13 model is initially in good agreement with α≈ 0.060 but eventually diverges and grows non-linearly. 
It is unclear what in the MS13 model causes this runaway growth, but the test result suggests that MS13 may not properly account for sustained buoyancy and will therefore yield inconsistent results. § SHOCK-CLOUD SIMULATIONS Having verified and validated our turbulence model implementation with idealized tests, we now explore a complex problem: the astrophysical shock-cloud interaction. We solve Eqs. <ref>-<ref> in Athena using the directionally unsplit CTU integrator <cit.> with third order reconstruction in the characteristic variables <cit.> and the HLLC Riemann solver <cit.>. Simulations are performed on Cartesian grids in three dimensions. We use an adiabatic equation of state with the ratio of specific heats γ = C_p/C_V = 5/3. Self-gravity and magnetic fields are not included. §.§ Setup and initial conditions Our simulation is a variant of the typical shock-cloud interaction: a planar shock wave of hot diffuse gas propagates through a uniform medium and impacts a cold, dense cloud. The initial conditions are determined by the Mach number of the shock M, the radius of the cloud R, and the density ratio of the cloud to the ambient medium χ. Our fiducial simulation uses M = 10, R = 1, and χ = 10. The ambient medium is initially uniform with density ρ_0 = 1 and pressure P_0 = 1, in arbitrary (computational) units. Our simulation domain initially extends from -5 ≤ x ≤ 15, -5 ≤ y ≤ 5, and -5 ≤ z ≤ 5, again in arbitrary units. All boundaries are outflow-only, except the upstream boundary (see below). The simulation resolution is indicated by the number of cells per cloud radius N_R; our fiducial simulation is N_R = 25, corresponding to a resolution of 512×256×256. We perform a resolution test in <ref> up to N_R = 200; while N_R = 25 is sufficient for most quantitative estimates, the details of the mixing are notably different for N_R ≥ 100. The cloud begins centred at the origin and in pressure equilibrium with the ambient medium. The cloud has a spherically-symmetric density profile given by <cit.>: ρ(r) = ρ_0 + (ρ_c - ρ_0)/[1 + (r/R)^n], where ρ_c = χρ_0 is the central density and n controls the steepness of the profile. We use n = 20 to obtain a profile similar to that of P09 but steeper than that of SSS08 (which used n = 8). As in SSS08, we must set an arbitrary boundary for the "cloud," which we denote as r_b and define where ρ(r_b) = 1.01 ρ_0; for R = 1 and n = 20, r_b = 1.25. To trace cloud material, a passive scalar field C_c is set to unity where r ≤ r_b and zero otherwise. We initialize the shock with the adiabatic solutions of the Rankine-Hugoniot jump conditions for a given Mach number M. The upstream boundary condition maintains these quantities, resulting in a shocked wind model. The shock begins at x = -2 and propagates in the +x direction. We use an additional passive colour field to trace the mixing of shocked material in the simulation. A shock tracer C_s is initialized to unity only within the leading edge of the shock with a width of one cloud radius, i.e., C_s = 1.0 where -3 < x < -2 and zero otherwise. The time is given in terms of the "cloud crushing time", t_cc, defined as <cit.> t_cc ≡ R/u_s = χ^1/2 R/(M a_s), where u_s is the shock velocity within the cloud and a_s = √(γ P_0/ρ_0) = √(5/3) is the ambient sound speed in computational units. We do not use any mesh refinement – simulations are run on a single mesh of uniform spacing.
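For reference, the initial state described above follows from a handful of expressions. The snippet below, a sketch with parameter names of our own choosing (not Athena's), evaluates the cloud profile of Eq. (<ref>), the standard adiabatic Rankine-Hugoniot post-shock state, and the cloud-crushing time.

```python
import numpy as np

# fiducial parameters (code units)
gamma, M, R, chi = 5.0 / 3.0, 10.0, 1.0, 10.0
rho_0, P_0, n = 1.0, 1.0, 20

def cloud_density(r):
    """Smoothed cloud profile, Eq. (<ref>), with central density chi*rho_0."""
    return rho_0 + (chi * rho_0 - rho_0) / (1.0 + (r / R)**n)

# adiabatic Rankine-Hugoniot jump conditions for the incident shock
rho_sh = rho_0 * (gamma + 1.0) * M**2 / ((gamma - 1.0) * M**2 + 2.0)
P_sh = P_0 * (2.0 * gamma * M**2 - (gamma - 1.0)) / (gamma + 1.0)
a_s = np.sqrt(gamma * P_0 / rho_0)          # ambient sound speed
v_sh = M * a_s * (1.0 - rho_0 / rho_sh)     # post-shock flow speed (lab frame)

t_cc = chi**0.5 * R / (M * a_s)             # cloud-crushing time
```

The post-shock values (rho_sh, P_sh, v_sh) are held fixed at the upstream boundary to produce the shocked wind.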
Athena is capable of static mesh refinement (SMR), which differs from adaptive mesh refinement (AMR) in that in SMR the refinement grids are placed at the beginning of the simulation and remain fixed. We did attempt to use SMR but encountered significant issues when combined with a turbulence model. Interpolation of the conserved variables (namely energy and momentum) across coarse-fine interfaces produced small numerical errors in the primitive variables (namely pressure and velocity), which were sufficient to generate artificial vorticity that was amplified by the turbulence models. Using a single grid has the further advantage that the diffusive properties of the code remain uniform across the domain. §.§.§ Turbulence model initial conditions Following GS11, we set the initial value for k relative to the internal energy as k_0 = k_i e_int on a cell-by-cell basis, with e_int = P/(γ - 1); similarly, we set the initial value for L relative to the cloud radius as L_0 = L_i R. For our fiducial simulation, we choose k_i = 10^-2 and L_i = 10^-2 everywhere to roughly match the initial conditions of GS11. We note that this differs from the approach of P09, in which the authors used different initial conditions for the shock and cloud; the effect of initial conditions will be explored in <ref>. §.§.§ Co-moving grid The cloud will be accelerated and disrupted by the shocked wind, and eventually all cloud material will leave the initial simulation domain. To follow the cloud evolution for as long as possible, we implement a "co-moving grid" similar to the method used in SSS08. We adjust the x-velocity at each time step to keep our domain centred on the bulk of the cloud material. At the beginning of each integration, we compute the mass-averaged cloud velocity ⟨v_x⟩ = ∫_V (ρ C_c)^g v_x dV / ∫_V (ρ C_c)^g dV, where g is a weighting factor we introduce to keep the grid fixed on the densest cloud material. While SSS08 used g = 1, we find we are better able to follow the cloud with g = 4. We then subtract ⟨v_x⟩ from the x-velocity everywhere in the simulation and update the grid location and inflow conditions accordingly. To prevent cloud material from encroaching on the upstream boundary, we limit the co-moving velocity when cloud material would come within a distance of 2 r_b from the upstream boundary. We also prohibit the inflow velocity from becoming subsonic to prevent information traveling upstream. We have verified this method by comparing to simulations performed in an elongated static grid (-5 < x < 45); the resulting cloud evolution is nearly indistinguishable. §.§.§ Implicit Large Eddy Simulations Grid-based hydrodynamics simulations performed without a turbulence model are sometimes referred to as "inviscid" simulations; however, the discretization of the Euler equations introduces numerical viscosity, and the turbulent cascade is truncated at the grid scale. The grid thus serves as an "implicit" filter, and such a simulation may be referred to as an "Implicit Large Eddy Simulation", or ILES <cit.>. We therefore denote simulations performed without a turbulence model as ILES. We perform high-resolution ILES simulations up to N_R = 200 for comparison to simulations with a turbulence model. §.§.§ Ensemble-averaged simulations with grid-scale turbulence Even at high resolution, an ILES simulation with static initial conditions is not equivalent to models with a turbulence model, because the turbulence models are initialized with non-zero small-scale turbulent energy (k_0 ≠ 0).
P09 therefore compared shock-cloud simulations performed with the LS74 k-ε model to an inviscid simulation with random perturbations to the density, velocity, and pressure in the post-shock flow. We extend the P09 approach by averaging multiple high-resolution inviscid simulations initialized with different random perturbations. This should provide a good comparison, as the results from a RANS turbulence model can be interpreted as an ensemble average over many turbulent flow realizations. The velocity perturbations are drawn from a Gaussian distribution, and the width of the Gaussian is set to match the initial level of turbulence in the models, namely k_i = 10^-2 e_int. The amplitude of the density perturbations is drawn from a Gaussian with a width of 0.01. Note that, unlike P09, we do not perturb the pressure. We perform 10 simulations at N_R = 25 with different turbulent realizations and then average on a cell-by-cell basis. We refer to results from this method as "Turbulent ILES", or TILES. §.§ Diagnostics For comparison to previous shock-cloud simulations, we compute several standard integrated diagnostic quantities <cit.>. The cloud-mass-weighted average of a quantity f is defined as ⟨f⟩ = (1/M_cl) ∫_V ρ C_c f dV, where the initial cloud mass M_cl = ∫_t=0 (ρ C_c) dV. We follow the effective radius normal to the x-axis, a = [5(⟨x^2⟩ - ⟨x⟩^2)]^1/2, with similar expressions along the y and z axes denoted b and c respectively. We also compute the rms velocity along each axis <cit.>, δv_x = (⟨v_x^2⟩ - ⟨v_x⟩^2)^1/2, again with similar expressions in y and z. To follow the mixing, we adopt the mixing fraction f_mix introduced in <cit.> and used in SSS08, where f_mix = (1/M_cl) ∫_0.1 < C_c < 0.9 ρ C_c dV. As the cloud material (initially C_c = 1.0) is mixed into the ambient medium (initially C_c = 0.0), the cloud concentration will take on intermediate values and f_mix will increase. We also examine another quantitative estimate of the mixing: the injection efficiency f_inj, defined as f_inj = [1/(η M_s)] ∫_C_c ≥ 0.1 ρ C_s dV, where M_s = ∫_t=0 ρ C_s dV is the initial shock tracer mass and η is a normalization factor. As the shock passes over the cloud, mixing at the leading edge by RT instabilities and at the edges by KH instabilities will "inject" shock material into the cloud. This is of particular interest for studies of chemical enrichment of the early Solar system with short-lived isotopes from supernovae <cit.>. The injection efficiency is normalized via η such that only the mass of the shock tracer directly incident on the cloud cross-section π r_b^2 is considered; hence, f_inj = 1 indicates "perfect" injection. §.§ Results §.§.§ Dynamical evolution We follow the interaction of the shocked wind with the cloud for up to 10 cloud-crushing times. Figure <ref> shows the time evolution of the cloud column density Σ(C_c) = ∫ ρ C_c dz / ∫ ρ dz in the fiducial (N_R = 25) simulations for each of the models, including no turbulence model (ILES) and ensemble-averaged grid-scale turbulence (TILES). The cloud material is initially confined within r ≤ r_b, but after impact material is ablated and mixed into the shock and ambient medium, leading to a head-tail structure. The cloud is accelerated in the +x-direction; as described in <ref>, we shift our grid to be co-moving with the densest cloud material. The location of the cloud at a given time varies from run to run, as each turbulence model uniquely alters the cloud acceleration and destruction.
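Before turning to the morphology, we note that the two mixing diagnostics of <ref> reduce to masked sums over the grid. A minimal sketch, assuming uniform cells (so that dV is a constant) and with M_cl, M_s, and η following the definitions above, is:

```python
import numpy as np

def mixing_diagnostics(rho, C_c, C_s, dV, M_cl, M_s, eta):
    """Mixing fraction and injection efficiency from 3D arrays."""
    mix = (C_c > 0.1) & (C_c < 0.9)          # intermediate cloud concentrations
    f_mix = (rho[mix] * C_c[mix]).sum() * dV / M_cl
    inj = C_c >= 0.1                         # shock tracer mass inside the cloud
    f_inj = (rho[inj] * C_s[inj]).sum() * dV / (eta * M_s)
    return f_mix, f_inj
```

The mass-weighted averages ⟨f⟩ and the derived radii and velocity dispersions follow from analogous weighted sums.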
As material is ablated from the edges of the cloud, large KH rolls develop in the ILES simulation. Around 4 t_cc, the characteristic vortex ring is clearly evident. The evolution of an inviscid adiabatic shock-cloud interaction is described in detail in PP16; we here focus on the differences resulting from the turbulence models.The turbulence models also include diffusion of passive colour fields, which is of particular importance for the mixing estimates. In the ILES simulations, cloud material is most concentrated at the cloud edges as a result of the KH instability. The additional viscosity from the turbulence models diffuses the colour field to varying degrees. In the models of LS74, MS13, and W06, three structures still remain in the colour field: the dense head, the vortex ring, and the diffuse tail. However, in the models of C06, GS11, and W88, the colour field is largely smoothed. In C06 and GS11, the cloud material becomes nearly uniformly distributed in an oblate spheroid. It is unclear whether this is due to increased buoyancy, shear effects, and/or over-production of turbulent energy.Figure <ref> presents the time evolution of the density-weighted column of specific turbulent energy Σ (k) = ∫ρ kdz / ∫ρ dz. For the ILES and TILES runs, the turbulent energy is not explicitly tracked; we therefore follow <cit.> and construct an estimate for k = C_k Δ^2 |S^*|^2, where Δ is the grid resolution, S^*_ij = S_ij-(1/3)δ_ijS_kk is the trace-free resolved strain rate tensor (see Eq. <ref>), and C_k is a scaling constant. The exact scaling is uncertain; <cit.> used C_k ≈ 0.013 based on supersonic isothermal turbulence. Here, we set C_k = 1 and treat k as a morphological rather than quantitative estimate.Figure <ref> shows that in all runs the strongest areas of turbulence generation are 1) at the cloud edges due to shearing motions; 2) in the cloud tail due to shear and compression; and 3) at the shock front due to compression. LS74 and W06 produce relatively little turbulence, resulting in a correspondingly low turbulent viscosity. These models produce only slight differences in morphology from the ILES and TILES cases. While the small-scale structure is smoothed, the two large KH rolls are still present. In contrast, W88 produces large amounts of turbulent energy, particularly in the shock. The turbulent pressure term ultimately leads to non-physical spreading of the shock downstream. The strong shear at the cloud edges spreads material into two primary streamers. This also occurs in MS13, but the dominant turbulence is at the leading edge of the cloud due to the inclusion of buoyancy effects (RT instability). A similar effect is seen in C06 and GS11 due to the buoyancy; however, in C06 and GS11 the ambient turbulence dissipates rapidly and the cloud expands due to the increased interior turbulent pressure.The transmitted shock within the cloud also increases the turbulent length scale L via the dilatation term (∇·u) in Ψ_L of the k-L models (C06 and GS11); this is seen in Figure <ref>, which shows the evolution of the density-weighted column of L, Σ (L) = ∫ρ L dz / ∫ρ dz. These models show turbulent length scales roughly an order of magnitude greater than the other models, while the turbulent kinetic energy is roughly an order of magnitude lower. The most similar model is MS13; however, all three models with buoyancy terms (MS13, C06, and GS11) show significant expansion, and the cloud is eventually diffused completely. 
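For the ILES and TILES panels in Figure <ref>, the proxy k = C_k Δ^2 |S^*|^2 can be evaluated directly from the resolved velocity field. A minimal sketch, using second-order differences via numpy.gradient and C_k = 1 as adopted above, is:

```python
import numpy as np

def k_proxy(vx, vy, vz, dx, C_k=1.0):
    """Estimate k = C_k dx^2 |S*|^2 from a resolved 3D velocity field."""
    v = (vx, vy, vz)
    grad = [[np.gradient(v[i], dx, axis=j) for j in range(3)] for i in range(3)]
    div = grad[0][0] + grad[1][1] + grad[2][2]    # velocity divergence
    S2 = np.zeros_like(vx)
    for i in range(3):
        for j in range(3):
            S_ij = 0.5 * (grad[i][j] + grad[j][i])
            if i == j:
                S_ij -= div / 3.0                 # remove the trace
            S2 += S_ij**2                         # |S*|^2 = sum_ij S*_ij^2
    return C_k * dx**2 * S2
```

With C_k = 1 this is a morphological estimate only; a calibrated value of C_k would be needed for quantitative comparisons.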
The models without a turbulence model (ILES and TILES) are not shown in Figure <ref>, as L would simply be the grid scale Δ. §.§.§ Evolution of diagnostic quantities Figure <ref> shows the time evolution of various diagnostic quantities. Overall, the turbulence models produce similar results for the cloud axis ratio b/a, excepting C06 and W88. In C06, large amounts of turbulent pressure within the cloud cause the cloud to expand and become spherical. However, the turbulence models show little agreement in their treatment of either motions (δv) or mixing (f_mix and f_inj). The ILES and TILES simulations are comparable, but all simulations with a turbulence model show reduced rms velocity dispersions, as the additional turbulent viscosity diffuses the small-scale turbulent motions. Recall that the turbulence models work by averaging out the fluctuating velocities below the characteristic length scale. C06 and GS11 lead to the largest values of L – on the order of the cloud radius within the cloud – and therefore smooth nearly all small-scale fluctuations. This also affects the mixing. The TILES model shows only slightly faster mixing than the ILES result. This differs from what was observed by P09, where the mixing of material proceeded almost twice as fast in models with grid-scale turbulence compared to those without (see, e.g., fig. 15g of P09, where m_core is an alternative measure for mixing). This is most likely due to the strength of the imposed turbulence, which was considerably higher in P09 than in our TILES simulations. As already noted, LS74 and W06 introduce the least turbulent viscosity and therefore most resemble the ILES case. Surprisingly, W06 shows a reduction in f_mix relative to the ILES runs. In all runs, f_mix approaches unity, indicating complete cloud disruption. In several models, the expansion of the cloud at late times reduces the concentration of the cloud colour field below the mixing threshold (C_c ≥ 0.1), which causes f_mix to decrease. A different trend is observed in the injection efficiency, where the three most diffusive models (W88, C06, and GS11) reach a significantly different peak value from the other models. Both the shock and cloud are diffused, and the increased viscosity leads to enhanced injection. There is agreement between most models at a final value of f_inj ≈ 0.3 – slightly higher than previous shock-cloud studies of Solar system enrichment, which found f_inj ≲ 0.1 <cit.>. §.§.§ Model validity A primary goal of this work is to compare the behavior of turbulence models in an identical astrophysical application. Clearly the models do not all reproduce the same dynamical and quantitative evolution. As noted in Section <ref>, we believe the best reference for a RANS model is an ensemble-average of high-resolution grid-scale turbulence simulations. We therefore compare the turbulence model results to the TILES result. We compute an rms difference for the density-weighted cloud colour field at each time step, using the TILES result as the reference. The time evolution of the rms difference is shown in Figure <ref>. We observe that the k-ε models of LS74 and MS13 agree best with the TILES result. A similar trend is observed when compared to the highest resolution ILES simulation (N_R = 200, see <ref>). §.§.§ Effect of compressibility corrections As seen in <ref>, the RANS models here considered are largely calibrated with subsonic, incompressible experiments, and they do not reproduce the correct shear layer growth rate without modifications.
As our shock is supersonic (M = 10), we anticipated that a compressibility correction would be important to model the evolution. However, we find that the compressibility corrections have a negligible effect on the simulation evolution in LS74, MS13, W88, and W06. We do not test GS11 without τ_KH, as this could affect the calibration; and we do not test C06, as there is no straightforward way to implement a correction. As the results are nearly indistinguishable, we do not present any figures. It is possible that the effects may become important at higher Mach numbers, but we defer this to future studies. §.§.§ Dependence on initial conditions The RANS turbulence models considered here are known to be sensitive to initial conditions, particularly the W88 model <cit.>. In most astrophysical applications, the prescription for the initial values of k and L is arbitrary. We set the initial value for k relative to the internal energy as k_0 = k_i e_int and for L relative to the cloud radius as L_0 = L_i R. Our fiducial simulation uses k_i = 10^-2 and L_i = 10^-2 to roughly match the initial conditions of GS11. However, P09 chose non-uniform initial conditions, with varying levels of k between the shock and the cloud. Similar to P09, we test the dependence of the LS74 turbulence model on the initial conditions by performing simulations with varying levels of initial turbulence k_i and length scale L_i, ranging from 10^-4 to 10^0 in both quantities. We perform this test at N_R = 12, as the increased viscosity decreases the allowed time step size. Figure <ref> presents a snapshot of the density-weighted average cloud colour column at t = 6 t_cc for each combination of k_i and L_i in the LS74 model. We see that even an order of magnitude difference in either quantity produces notable differences in the evolution and mixing. Increasing either k or L increases the turbulent viscosity, to the point where the cloud is completely diffused into the background. This is also evident in Figure <ref>, which shows the time evolution of the mixing fraction f_mix in runs with different initial conditions for the LS74 model. Our results agree with earlier findings by P09, in which simulations with low initial turbulence (k_i = 10^-6 in the shock) showed decreased mixing (as evidenced by, e.g., a slower decrease in core mass m_core in fig. 15g of P09) compared to simulations with higher initial turbulence (k_i = 0.13 in the shock). It is perhaps not surprising that different initial conditions produce different results, as each represents a particular physical state (i.e., more or less turbulence at varying scales). One should carefully consider the initial conditions when using RANS models in an unsteady flow. Finally, PP16 concluded that the LS74 k-ε model did not significantly affect the evolution of their three-dimensional shock-cloud simulations. However, this is most likely due to their choice of initial conditions; PP16 used k_i = 10^-6 and L_i = 1.6×10^-4 (Pittard, personal communication) in all simulations, corresponding to very low initial levels of turbulence. While the LS74 model has very little effect for small (and probably reasonable) initial values of k and L, we demonstrate that the model can dramatically alter 3D simulations under certain conditions. §.§.§ Resolution dependence While 100 cells per cloud radius are necessary to see convergence of global quantities in 2D studies <cit.>, the resolution limit may be less strict in 3D.
PP16 found that 32–64 cells may be sufficient for global convergence in 3D simulations. Figure <ref> shows the time evolution of the diagnostic quantities in ILES simulations for resolutions N_R = 10-200. In agreement with PP16, we observe that globally-averaged quantities (b/a and δ v) exhibit only small variation with increasing resolution for N_R ≳ 25.However, it is difficult to assess whether or not this represents true convergence. For consistency with previous work, we perform an analysis similar to that described in Appendix A3 of PP16. We calculate the relative difference Δ Q_N between a measurement Q at a given resolution N and the same measure at a reference resolution N_ ref (typically the highest resolution), given by eq. A1 of PP16 asΔ Q_N = |Q_N - Q_N_ ref|/|Q_N_ ref|.Figure <ref> shows the relative difference as a function of simulation resolution N_R for various diagnostic quantities at t=3 t_ cc. We compare results using N_ ref = 100 (as in PP16) and N_ ref = 200. We note that our axial direction is x, whereas in PP16 the axial direction is z; hence our quantity a should be compared to c in e.g., fig. A13 of PP16, and likewise our δ v_x to their δ v_z. For further comparison with PP16, we also calculate Δ Q_N for the core mass, m_ core, defined asm_ core = ∫_C_c ≥ 0.5ρ C_c dV. We finally note that our initialization of the cloud colour field is slightly different than in PP16; we use a constant value of C_c = 1 for r ≤ r_b, while PP16 used a spatially varying C_c that decreased with increasing radius within the cloud.If we use N_R = 100 as our reference resolution (left column of Figure <ref>), we find good agreement with PP16. The relative difference decreases with increasing resolution for most quantities, suggesting convergence. The only quantities with increasing difference are the velocity dispersions along axes perpendicular to the flow (δ v_y and δ v_z), which are not shown in fig A13 of PP16. However, the trend is less certain if we use our highest resolution simulation with N_R = 200 as the reference. There is no longer any sign of convergence, particularly in the mixing measures.This is surprising given previous studies of the shock-cloud interaction. <cit.> found little variance in f_ mix up to N_R = 50 in hydrodynamical shock-cloud interactions. While similar magneto-hydrodynamical simulations by SSS08 did not show convergence in f_ mix up to N_R ≈ 120, the authors predicted that, in simulations without an explicit viscosity, f_ mix should continue to decrease with increasing resolution and tend to zero at infinite resolution. In examining the time evolution in Figure <ref>, we do not observe either trend. While we find that f_ mix does show a decreasing trend up to N_R = 50, f_ mix actually increases with increasing resolution beyond this point. A similar result is observed in fig. A8a of PP16; the mixing (as measured by m_ core) decreases with increasing resolution up to N_R = 64, at which point increased mixing (indicated by a faster decrease in m_ core) is observed for N_R = 128.These results suggest that for resolutions N_R ≳ 50, mixing in the “inviscid” hydrodynamical shock-cloud simulation starts to be dominated by turbulent diffusion rather than numerical diffusion. If the correlation time of the turbulence is short compared to the numerical diffusion time, the turbulent viscosity will dominate the diffusion <cit.>. At low resolutions (N_R ≤ 50), the numerical viscosity dominates the dynamics and affects the growth of instabilities. 
As the resolution increases up to N_R=50, numerical diffusion decreases, yet the turbulent cascade is not yet sufficiently resolved to show “true” turbulent mixing, i.e., mixing rates independent of the numerical diffusion. The mixing is therefore at a minimum near this resolution, which could explain the apparent “convergence” observed in Figure <ref> when N_R = 100 is used as the reference.For N_R ≳ 50, the numerical viscosity is reduced to the point that the RT and KH instabilities can grow at the cloud surface and seed further turbulent motions. This is evident in Figure <ref>, which shows a snapshot of the cloud column density at t=6 t_ cc for varying simulation resolution. At high resolution, the leading edge of the cloud is saturated with RT fingers, and the shear at the cloud edge generates KH rolls that spawn additional vortices in the cloud wake. The turbulent cascade that develops is now largely resolved; the corresponding Reynolds number is large, and the mixing is increased.The continued increase in mixing from N_R = 100 to N_R = 200 in our fiducial simulation suggests that the turbulent cascade is still not fully resolved at this point. It is unclear whether the mixing would continue to increase with increasing resolution. As our simulations are performed on a fixed grid with no mesh refinement, extending our simulations beyond N_R = 200 is not feasible given the computational burden (see Appendix).We are also unable to perform simulations with N_R > 25 when using a turbulence model, due to the stability requirement that dt ≲ (Δ)^2. P09 found that the LS74 model reduced the convergence requirements in 2D, but PP16 found the model had little effect in 3D. As noted in Section <ref>, this may be a consequence of the low level of initial turbulence used in PP16. In our resolution tests up to N_R = 25, we find no significant benefit from the turbulence models.Figure <ref> compares the time evolution of the mixing estimates for the ILES model at N_R = 200 with the turbulence models at N_R =25. Despite the increased mixing at N_R = 200, all turbulence models other than W06 still indicate more mixing than observed. Yet if the ILES mixing continues to increase at higher resolutions, as the trend suggests, it may be that the turbulence models effectively predict the “correct” mixing.§.§.§ Dependence on numerical methods Figure <ref> shows the resolution dependence of the mixing estimates at t = 6 t_ cc for various combinations of integrators, Riemman solvers, and reconstruction accuracy. Our fiducial simulation uses the CTU integrator with 3rd order reconstruction of the characteristic variables and the HLLC Riemann solver (denoted CTU_3_HLLC). We also test second order reconstruction (CTU_2_HLLC); the Roe Riemann solver <cit.> with H-correction <cit.> (CTU_3_Roe); and the Van Leer (VL) integrator <cit.> with second order reconstruction in the primitive variables (VL_2p_HLLC). We find that changing any of these algorithms in the Godunov scheme can alter the degree of mixing, especially the Riemann solver. The results obtained with the Roe solver are almost a factor of two below the fiducial results; furthermore, it does not show the trend of increasing f_ mix from N_R = 50 to N_R = 100 as seen in the other runs. The dependence of ILES mixing on the numerical algorithm underscores the utility of a turbulence model.§ DISCUSSION In an effort to understand previous shock-cloud simulations, we have limited our exploration to RANS turbulence models. 
However, LES models are probably more appropriate for most astrophysical applications, including the shock-cloud interaction. The RANS approach tends to diffuse the small-scale structure in the simulation, yet these are often the scales of greatest interest in astrophysics applications (e.g., star formation). In contrast, the resolved dynamics are largely unaffected in LES, and the filtering approach is ideal for unsteady flows. Despite these differences in formulation, the methods of LES are remarkably similar to RANS; the models have similar equations with similar closures, such as eddy-viscosity and gradient-diffusivity. The simplest LES model is the Smagorinsky model <cit.>, which is essentially a zero-equation mixing-length model. The LES model of <cit.> is a one-equation model; k is followed with a transport equation, while the turbulent length scale L is simply replaced by the grid spacing Δ. LES models also suffer the same calibration issues as RANS. <cit.> calibrated their model using high-resolution ILES simulations of turbulence, but it is difficult to determine if this approach is valid (see <ref> and Figure <ref>).

We have only tested two-equation models. Models with fewer equations, such as the one-equation Spalart-Allmaras model <cit.>, are easy to implement but do not perform well in situations with inhomogeneous or decaying turbulence. However, models with two or fewer equations make use of an isotropic eddy-viscosity. This assumption of isotropy severely limits the accuracy of these models in regions of high vorticity. Anisotropic models, such as the seven-equation Reynolds-Stress-Transport model <cit.>, independently follow the six components of the turbulent stress tensor plus a dissipation equation. This approach is highly accurate, but the associated computational cost is often prohibitive. One compromise may be the use of a non-linear eddy-viscosity relation, such as that used in <cit.>. All of the RANS models considered here use linear eddy-viscosity relations, but the additional complexity of the non-linear relation improves results in complex flows without the need for additional stress transport equations <cit.>.

We also note that the assumption of isotropy is incorrect in magnetized turbulence <cit.>, as typically encountered in astrophysical applications. Eddies are stretched along the field lines, and the anisotropy is scale-dependent and increases toward smaller scales <cit.>. It is unclear if an anisotropic RANS model could be developed for magnetohydrodynamics (MHD); however, such models could be developed in the LES framework <cit.>. Indeed, closures for the MHD LES equations have been proposed <cit.> but such methods have yet to be thoroughly validated.

One potential benefit of a turbulence model is the proper modeling of the RT instability <cit.>. However, the buoyant turbulence models considered here seem to perform poorly in complex flows and generate excessive turbulence. Critically, the models have not been validated for use in supersonic, highly compressible turbulence, which is exactly the regime of interstellar gas dynamics. While compressibility corrections can be used, simulations have demonstrated that they are physically incorrect <cit.>.

Finally, we note that we are limited in our use of turbulence models by an explicit time integration method – maintaining stability requires dt ≲ (Δ)^2.
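The severity of this constraint is easy to quantify; the short sketch below uses the standard explicit-diffusion stability bound, with arbitrary placeholder values in code units that are not taken from our runs:

# Explicit update of a diffusion operator is stable only for
# dt <= dx**2 / (2 * n_dim * D); halving the cell size therefore
# cuts the allowed timestep by a factor of four.
def max_diffusive_dt(dx, diffusivity, n_dim=3):
    return dx**2 / (2.0 * n_dim * diffusivity)

r_cloud = 1.0        # cloud radius in code units (placeholder)
diffusivity = 0.05   # turbulent diffusivity in code units (placeholder)
for n_r in (25, 50, 100, 200):
    dx = r_cloud / n_r
    print(n_r, max_diffusive_dt(dx, diffusivity))

Compared with the advective CFL timestep, which scales only linearly with Δ, this quadratic scaling quickly dominates the cost at high resolution.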
Implicit formulations are possible <cit.> but the associated computational cost may be significant due to coupling between the turbulent variables.

§ CONCLUSIONS

We have developed a common framework for two-equation RANS turbulence models in the Athena hydrodynamics code. All models use a linear eddy-viscosity relation based on resolved dynamics to add turbulent diffusivity. We have implemented six RANS turbulence models: the k-ε models of LS74 and MS13; the k-L models of C06 and GS11; and the k-ω models of W88 and W06.

We have verified the models with the subsonic shear mixing layer. The models can only reproduce the correct mixing layer growth rate for certain definitions of the layer width δ (Figure <ref>), and the different definitions are not directly related. We have also extended the simulations into the supersonic regime, up to convective Mach numbers of 10, where compressibility corrections are needed to reduce the growth rate of the mixing layer in accord with experiment (Figure <ref>). Three common “compressibility corrections” from the literature (S89, Z90, and W92) perform very similarly and provide agreement with experimental results up to M_c ≈ 5. The stress tensor modification implemented by GS11 provides similar results up to M_c ≈ 1, but beyond this the model grows too slowly.

Three of the models tested (C06, GS11, and MS13) include buoyant effects (RT and RM instabilities). For these models, we use a simple stratified medium subject to constant acceleration to test the growth of the RT boundary layer. The model of GS11 shows the best agreement with experimental growth rates (Figure <ref>), while C06 grows too slowly and MS13 diverges at late times.

We then use the RANS models to simulate a generic astrophysical shock-cloud interaction. We follow the interaction in three dimensions for up to 10 cloud crushing times by implementing a co-moving grid. By using a consistent initial condition, we are able to compare global quantities as well as estimates of the mixing and injection returned by different turbulence models. We also generate an appropriate comparison by ensemble-averaging results from high-resolution inviscid simulations with grid-scale turbulence. We find that:

* The k-ε models of LS74 and MS13 and the k-ω model of W06 generate the least turbulence and, correspondingly, the lowest turbulent viscosity. These models show the best agreement with the reference (TILES) result (Figure <ref>) at the fiducial resolution (N_R = 25).

* The k-L models of C06 and GS11 generate excessive turbulence within the cloud, leading to expansion, rapid disruption, and elevated mixing compared to the TILES result (Figure <ref>). The W88 k-ω model generates excessive turbulence within the shock front, which also leads to enhanced disruption. Overall, the W88 and C06 models show the least agreement with the reference results (Figure <ref>).

* Compressibility effects play a small role in the shock-cloud interaction, at least at the Mach number considered here (M = 10), as the compressibility corrections do not noticeably alter the simulation evolution or mixing estimates.

* In agreement with previous work by P09, we show that the turbulence models are highly sensitive to the initial conditions (Figure <ref>).
For large initial values of k or L, the RANS models smooth the resolved dynamics beyond utility (Figure <ref>); for small initial values, the RANS models have negligible effects.

* Globally-averaged quantities vary only slightly with increasing resolution at resolutions higher than 25 cells per radius (Figure <ref>). While this agrees with previous work up to 100 cells per radius (PP16), we find that beyond this point turbulent mixing begins to be resolved [see also <ref>] and thus alters the dynamics, preventing true convergence (Figure <ref>).

* Estimates of the mixing decrease with increasing resolution up to 50 cells per radius (Figure <ref>), but beyond this point the mixing increases, up to a resolution of 200 cells per radius – the current limit of our computational resources. This suggests that mixing in inviscid simulations does not trend toward zero at infinite resolution (Figure <ref>) but rather that the turbulent diffusivity becomes dominant when the numerical viscosity is sufficiently low.

* The degree of mixing in the highest-resolution inviscid simulation (N_R = 200) agrees best with the predictions of the LS74 turbulence model (Figure <ref>), but it is unknown what will occur at higher resolution or in a different application. Furthermore, the choice of numerical method (particularly the Riemann solver) can shift the mixing fraction in ILES simulations by nearly a factor of two (Figure <ref>).

While the RANS turbulence models perform adequately in simple, specific test cases, it remains difficult to assess their veracity in complex dynamical applications. Further work toward understanding mixing in ILES simulations is necessary if proper calibrations are to be achieved.

§ ACKNOWLEDGEMENTS

We thank the referee, Julian Pittard, for a constructive and insightful report. MDG thanks Jim Stone for a helpful discussion concerning turbulent mixing. Computations were performed on the KillDevil and Kure Clusters at UNC-Chapel Hill. We gratefully acknowledge support by NC Space Grant and NSF Grant AST-1109085.

§ REFERENCES

Barone M. F., Oberkampf W. L., Blottner F. G., 2006, AIAA Journal, 44, 1488, doi:10.2514/1.19919

Birch S. F., Eggers J. M., 1972, Free Turbulent Shear Flows: Proceedings of a Conference Held at NASA Langley Research Center, Hampton, Virginia, July 20-21, 1972, 1, 11

Boss A. P., Keiser S. A., 2015, The Astrophysical Journal, 809, 103, doi:10.1088/0004-637X/809/1/103

Brown G. L., Roshko A., 1974, Journal of Fluid Mechanics, 64, 775, doi:10.1017/S002211207400190X

Chiravalle V.
P., 2006, Laser and Particle Beams, 24, 381, doi:10.1017/S026303460606054X

Cho J., Lazarian A., 2003, Monthly Notices of the Royal Astronomical Society, 345, 325, doi:10.1046/j.1365-8711.2003.06941.x

Colella P., 1990, Journal of Computational Physics, 87, 171, doi:10.1016/0021-9991(90)90233-Q

Colella P., Woodward P. R., 1984, Journal of Computational Physics, 54, 174, doi:10.1016/0021-9991(84)90143-8

Dimonte G., Tipton R., 2006, Physics of Fluids, 18, 085101, doi:10.1063/1.2219768

Dimonte G., et al., 2004, Physics of Fluids, 16, 1668, doi:10.1063/1.1688328

Dimotakis P. E., 1991, in Curran E. T., Murthy S. N. B., eds, High Speed Flight Propulsion Systems. American Institute of Aeronautics and Astronautics, Chapt. 5, pp 265-340

Elmegreen B. G., Scalo J., 2004, Annual Review of Astronomy and Astrophysics, 42, 211, doi:10.1146/annurev.astro.41.011802.094859

Garnier E., Adams N., Sagaut P., 2009, Large Eddy Simulation for Compressible Flows. Scientific Computation, Springer Netherlands, Dordrecht

Gatski T., Jongen T., 2000, Progress in Aerospace Sciences, 36, 655, doi:10.1016/S0376-0421(00)00012-9

Goldreich P., Sridhar S., 1995, The Astrophysical Journal, 438, 763, doi:10.1086/175121

Goodson M. D., Luebbers I., Heitsch F., Frazer C. C., 2016, Monthly Notices of the Royal Astronomical Society, 462, 2777, doi:10.1093/mnras/stw1796

Gray W. J., Scannapieco E., 2011, The Astrophysical Journal, 733, 88, doi:10.1088/0004-637X/733/2/88

Heitsch F., Zweibel E. G., Slyz A. D., Devriendt J. E. G., 2004, The Astrophysical Journal, 603, 165, doi:10.1086/381428

Huang P., Coakley T., 1992, in 30th Aerospace Sciences Meeting and Exhibit. American Institute of Aeronautics and Astronautics, Reston, Virginia

Iapichino L., Adamek J., Schmidt W., Niemeyer J. C., 2008, Monthly Notices of the Royal Astronomical Society, 388, 1079, doi:10.1111/j.1365-2966.2008.13137.x

Klein R. I., McKee C. F., Colella P., 1994, The Astrophysical Journal, 420, 213, doi:10.1086/173554

Launder B., Spalding D., 1974, Computer Methods in Applied Mechanics and Engineering, 3, 269, doi:10.1016/0045-7825(74)90029-2

McKee C. F., Ostriker J. P., 1977, The Astrophysical Journal, 218, 148, doi:10.1086/155667

Miesch M., et al., 2015, Space Science Reviews, 194, 97, doi:10.1007/s11214-015-0190-7

Morán-López J.
T., Schilling O., 2013, High Energy Density Physics, 9, 112, doi:10.1016/j.hedp.2012.11.001

Nakamura F., McKee C. F., Klein R. I., Fisher R. T., 2006, The Astrophysical Journal Supplement Series, 164, 477, doi:10.1086/501530

Palotti M. L., Heitsch F., Zweibel E. G., Huang Y., 2008, The Astrophysical Journal, 678, 234, doi:10.1086/529066

Pantano C., Sarkar S., 2002, Journal of Fluid Mechanics, 451, 329, doi:10.1017/S0022112001006978

Papamoschou D., Roshko A., 1988, Journal of Fluid Mechanics, 197, 453, doi:10.1017/S0022112088003325

Pittard J. M., Parkin E. R., 2016, Monthly Notices of the Royal Astronomical Society, 457, 4470, doi:10.1093/mnras/stw025

Pittard J. M., Falle S. A. E. G., Hartquist T. W., Dyson J. E., 2009, Monthly Notices of the Royal Astronomical Society, 394, 1351, doi:10.1111/j.1365-2966.2009.13759.x

Roe P., 1981, Journal of Computational Physics, 43, 357, doi:10.1016/0021-9991(81)90128-5

Sarkar S., Erlebacher G., Hussaini M. Y., Kreiss H. O., 1989, Technical report: The analysis and modelling of dilatational terms in compressible turbulence. NASA Contractor Report 181959

Scannapieco E., Brüggen M., 2008, The Astrophysical Journal, 686, 927, doi:10.1086/591228

Schmidt W., 2014, Numerical Modelling of Astrophysical Turbulence. SpringerBriefs in Astronomy, Springer International Publishing

Schmidt W., Federrath C., 2011, Astronomy & Astrophysics, 528, A106, doi:10.1051/0004-6361/201015630

Schmidt W., Niemeyer J. C., Hillebrandt W., Roepke F. K., 2006, Astronomy and Astrophysics, 450, 265, doi:10.1051/0004-6361:20053617

Schmidt W., et al., 2014, Monthly Notices of the Royal Astronomical Society, 440, 3051, doi:10.1093/mnras/stu501

Shin M., Stone J. M., Snyder G. F., 2008, The Astrophysical Journal, 680, 336, doi:10.1086/587775

Smagorinsky J., 1963, Monthly Weather Review, 91, 99, doi:10.1175/1520-0493(1963)091<0099:GCEWTP>2.3.CO;2

Spalart P., Allmaras S., 1992, in 30th Aerospace Sciences Meeting and Exhibit. Aerospace Sciences Meetings. American Institute of Aeronautics and Astronautics, Reston, Virginia

Stone J. M., Gardiner T., 2007, Physics of Fluids, 19, 094104, doi:10.1063/1.2767666

Stone J. M., Gardiner T., 2009, New Astronomy, 14, 139, doi:10.1016/j.newast.2008.06.003

Stone J. M., Norman M. L., 1992, The Astrophysical Journal, 390, L17, doi:10.1086/186361

Stone J. M., Gardiner T. A., Teuben P., Hawley J. F., Simon J.
B., 2008, The Astrophysical Journal Supplement Series, 178, 137, doi:10.1086/588755

Toro E. F., 2009, Riemann Solvers and Numerical Methods for Fluid Dynamics: A Practical Introduction, 3rd edn. Springer-Verlag, Berlin Heidelberg

Vlaykov D. G., Grete P., Schmidt W., Schleicher D. R. G., 2016, Physics of Plasmas, 23, 062316, doi:10.1063/1.4954303

Vreman A. W., Sandham N. D., Luo K. H., 1996, Journal of Fluid Mechanics, 320, 235, doi:10.1017/S0022112096007525

Wilcox D. C., 1988, AIAA Journal, 26, 1299, doi:10.2514/3.10041

Wilcox D. C., 1992, AIAA Journal, 30, 2639, doi:10.2514/3.11279

Wilcox D. C., 1998, Turbulence Modeling for CFD (Second Edition). DCW Industries, La Cañada, California

Wilcox D. C., 2006, Turbulence Modeling for CFD (Third Edition). DCW Industries

Wilcox D. C., 2008, AIAA Journal, 46, 2823, doi:10.2514/1.36541

Xu J., Stone J. M., 1995, The Astrophysical Journal, 454, 172, doi:10.1086/176475

Zeman O., 1990, Physics of Fluids A: Fluid Dynamics, 2, 178, doi:10.1063/1.857767

§ OPTIMIZATION AND PERFORMANCE

The shock-cloud simulations were performed on the KillDevil Cluster at UNC Research Computing. To our knowledge, the run with N_R = 200 is the largest fixed-grid simulation of the three-dimensional shock-cloud interaction performed to date, with 4096×2048×2048 grid cells. Evolving the simulation to t = 10 t_cc required over 500,000 CPU-hours, with a maximum memory usage of nearly 13 TB across 2,048 CPUs. We built Athena using the Intel 13.1-2 compiler with the “-O3” optimization flag and the MVAPICH2 1.7 MPI library. Inter-process communications occurred over the QDR InfiniBand network.

Due to the fixed-grid nature of Athena, there is very little overhead in our simulations, and communication between processors is largely limited to transmission of boundary values after each update. Athena has been demonstrated to scale well out to 20,000 processors <cit.>. We judge performance using the number of cells updated per CPU second. In our shock-cloud simulations, we find that the performance of the code is better for larger jobs, increasing from 2.02×10^4 cells per second at N_R = 6 up to 2.10×10^5 at N_R = 200. This increase is not surprising, as the ratio of computational work to inter-process communication increases with increasing resolution. In our largest simulation, the processors spent over 99% of their time in active computation, indicating that the load is well-balanced and that inter-process communication over the InfiniBand network did not saturate significantly. | http://arxiv.org/abs/1703.08713v1 | {
"authors": [
"Matthew D. Goodson",
"Fabian Heitsch",
"Karl Eklund",
"Virginia A. Williams"
],
"categories": [
"astro-ph.GA",
"astro-ph.IM",
"physics.flu-dyn"
],
"primary_category": "astro-ph.GA",
"published": "20170325170108",
"title": "A systematic comparison of two-equation RANS turbulence models applied to shock-cloud interactions"
} |
§ INTRODUCTION

An important part of the physics program of the future linear electron-positron colliders (LC) is the precise measurement of the Higgs boson properties. The measurements of the Higgs boson couplings, for which the Standard Model gives strict predictions, namely the linear dependence on the masses of the corresponding particles, are one of the top priorities of the LC Higgs program. The shape of the possible deviations from these predictions depends on the proposed model of new physics, and a precision of the coupling measurements of the order of a few percent is needed to be sensitive to these effects, if no other state related to electroweak symmetry breaking is directly accessible at the Large Hadron Collider <cit.>. This sensitivity can be successfully achieved at the proposed future linear e^+e^- colliders, which are best suited for precision measurements.

In the first part of this contribution the measurement of the Higgs decay into a pair of W bosons is considered at the nominal center-of-mass energy, √(s) = 500 GeV, of the ILC, using Higgsstrahlung as the Higgs production process. The relative statistical accuracy of the measurement of σ(HZ) × BR(H → WW^*) has been estimated. The measured cross section is proportional to the coupling product g_HZZ^2 · g_HWW^2/Γ_H. The second part of this contribution is dedicated to the same Higgs decay, H → WW^*, but analyzed at the highest energy stage of CLIC, √(s) = 3 TeV, where the dominant Higgs production channel is WW-fusion. The relative statistical uncertainty of the partial cross-section σ(Hν_eν_e) × BR(H → WW^*) is determined.

§ SIMULATION AND ANALYSIS TOOLS

Both analyses use ILCSoft, a common software package developed for the International Linear Collider. Signal and background samples are simulated using the Whizard 1.95 <cit.> event generator, including initial state radiation and a realistic ILC or CLIC luminosity spectrum. The luminosity spectrum and beam-induced processes were simulated by GuineaPig 1.4.4 <cit.>. The hadronization and fragmentation of the Higgs and vector bosons are simulated using Pythia 6.4 <cit.>. Backgrounds coming from γγ → hadrons were overlaid on each generated event sample before reconstruction. Particle reconstruction and identification were done using the particle flow technique, implemented in the Pandora particle-flow algorithm (PFA) <cit.>. The response of the detector was simulated with the CLIC_ILD detector model for CLIC and the ILD_o1_v05 detector model for ILC. Signal and background separation is obtained using multivariate classification analysis, implemented in the TMVA package <cit.>.

For the ILC analysis, a Higgs mass of m_H = 125 GeV, an integrated luminosity of 500 fb^-1 and beam polarizations of P(e^-, e^+) = (-80%, +30%) for the electron and positron beams are assumed. The CLIC analysis assumes m_H = 126 GeV, an integrated luminosity of 2 ab^-1 and unpolarized beams.

§ HIGGS→WW* IN HIGGSSTRAHLUNG AT 500 GEV ILC

At the nominal energy of the ILC, √(s) = 500 GeV, and the considered beam polarizations, the cross-section of the Higgsstrahlung process is 114 fb. For signal events the fully hadronic channel is considered, where the Z boson, as well as both W bosons coming from the Higgs decay, decay to quark pairs (six-jet final state).
The corresponding signal cross section is 11.33 fb. The Feynman diagram of the Higgsstrahlung Higgs production channel is shown in Figure <ref>.

§.§ Background processes

The background processes that are considered in this study are listed in Table <ref>.

§ EVENT SELECTION

Event selection is performed in several steps. First, all reconstructed particles are clustered into six jets using the k_T clustering algorithm. The b- and c-tagging probabilities, determined by the LCFIPlus package, are assigned to each jet in the event. In the next step, the signal process kinematics is reconstructed by pairing jets to form candidates for the Z boson, as well as one on-shell and one off-shell W boson coming from the Higgs decay. The combination of the jet pairs is chosen by minimization of the χ^2 function given by the formula:

χ^2 = ((m_ij - m_W)/σ_W)^2 + ((m_kl - m_Z)/σ_Z)^2 + ((m_ijmn - m_H)/σ_H)^2,

where the invariant mass of a di-jet pair m_ij is assigned to the candidate for the real W boson, m_kl is assigned to the Z boson candidate, while m_ijmn is the invariant mass of the Higgs boson candidate. m_V and σ_V (V = W, Z, H) are the masses and the expected mass resolutions of the corresponding bosons. The illustration of the jet pairing is given in Figure <ref>.

The cross sections of the considered background processes are several orders of magnitude higher than the signal cross section (see Table <ref>); therefore, at the next step, the background-to-signal ratio is minimized by a set of preselection criteria prior to the final selection. The variables with the corresponding cut-off values used in the preselection are:

* the invariant mass of the Z boson candidate, 70 < m_Z < 110 GeV;
* number of particle flow objects, NPFO > 40;
* event thrust > 0.95;
* -log(y_45) < 4.4;
* -log(y_56) < 4.8;

where y_ij is the value of the k_T algorithm parameter at which the number of reconstructed jets changes from i to j. Efficiencies of the preselection are given in Table <ref> for signal and background processes.

The final event selection is based on the multivariate analysis method using the Boosted Decision Tree (BDT) algorithm. It exploits kinematic properties of the event in order to reject the residual background contribution. All background processes are used in the training of the algorithm. The list of discriminating input variables includes:

* invariant masses of both W bosons, the Z boson and the Higgs boson, m_W, m_W^*, m_Z, m_H;
* number of particle-flow objects (NPFO) in the event;
* total visible energy, E_vis;
* transverse momentum of jets that comprise the Higgs boson, p_T^Higgs;
* jet reconstruction parameters -log(y_12), -log(y_23), -log(y_34), -log(y_45), -log(y_56), -log(y_67);
* event shape variables (thrust, oblateness, sphericity and aplanarity);
* flavor tagging probabilities for the six reconstructed jets, btag_i, ctag_i (i=1,6).

A cut-off value on the output of the BDT algorithm is used for the final separation of signal and background events, and it is optimized to minimize the relative statistical uncertainty:

Δσ/σ = √(N_S + N_B)/N_S,

where N_S, N_B are the numbers of signal and background events after the final selection, respectively. After the final selection the dominant backgrounds come from other Higgs decays, due to the kinematical similarity, as well as from qq̅qq̅ and qq̅ processes (see Table <ref>) due to their very high cross-sections.
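For illustration, the χ^2-based jet pairing described above can be sketched in a few lines of Python; the boson mass resolutions σ_V below are placeholder values, and the snippet is not the actual ILCSoft implementation:

import itertools
import numpy as np

M_W, M_Z, M_H = 80.4, 91.2, 125.0    # boson masses in GeV
SIG_W, SIG_Z, SIG_H = 5.0, 5.0, 8.0  # placeholder mass resolutions in GeV

def inv_mass(p4s):
    # Invariant mass of a set of (E, px, py, pz) four-vectors.
    e, px, py, pz = np.sum(p4s, axis=0)
    return np.sqrt(max(e**2 - px**2 - py**2 - pz**2, 0.0))

def best_pairing(jets):
    # jets: (6, 4) array of jet four-vectors. Assign two jets each to
    # the on-shell W, the Z and the off-shell W*; the Higgs candidate
    # combines the W and W* jets. Return the assignment minimizing chi^2.
    best = None
    for i, j, k, l, m, n in itertools.permutations(range(6)):
        if i > j or k > l or m > n:  # treat each pair as unordered
            continue
        chi2 = (((inv_mass(jets[[i, j]]) - M_W) / SIG_W) ** 2
                + ((inv_mass(jets[[k, l]]) - M_Z) / SIG_Z) ** 2
                + ((inv_mass(jets[[i, j, m, n]]) - M_H) / SIG_H) ** 2)
        if best is None or chi2 < best[0]:
            best = (chi2, ((i, j), (k, l), (m, n)))
    return best

The brute-force enumeration is cheap here because only 90 distinct pairings of six jets exist once each pair is treated as unordered.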
The obtained relative statistical uncertainty on the product of the Higgsstrahlung cross-section and the corresponding branching ratio, σ(HZ) × BR(H→ WW^*), is 6.5% at the 500 GeV ILC, assuming an integrated luminosity of 0.5 ab^-1.

§.§ Higgs→WW* in WW-fusion at 3 TeV CLIC

Higgs production at the highest CLIC energy stage, √(s) = 3 TeV, is dominated by the WW-fusion process (see Figure <ref>). The boosted topology of this process is reflected in the signature of the signal: the Higgs decay studied is characterized by four soft, forward-peaked jets and missing energy. The total invariant mass of jets in the event is consistent with the Higgs boson mass, and the invariant mass of one of the jet pairs has to be consistent with the invariant mass of the W boson. The list of signal and considered background processes is given in Table <ref> for the assumed integrated luminosity of 2 ab^-1.

§ EVENT SELECTION

Events are clustered into four jets using the k_T clustering algorithm. The opening of the jet cone was set to R = 0.9, which gave the best invariant mass resolution for the Higgs and the real W boson, and the best mean invariant mass value. Jets are combined into pairs, and the combination that gives the invariant mass of the jet pair closest to the mass of the real W boson is chosen. The following preselection cuts are applied to minimize the high cross section backgrounds:

* the invariant mass of the H boson, 90 < m_H < 150 GeV;
* transverse momentum, p_t > 40 GeV.

Efficiencies of the preselection are given in Table <ref> for signal and background processes. After the preselection the main backgrounds are qqlν, qqνν and other Higgs decay processes, mainly H→ bb̅ and H→ gg.

The final event selection is again based on the multivariate analysis method using the Boosted Decision Tree (BDT) algorithm. All backgrounds are used in the BDT training. The list of discriminating input variables includes:

* total visible energy, E_vis;
* the invariant masses of the Higgs, real W and virtual W^* candidates, m_W, m_W^*, m_H;
* number of particle-flow objects (NPFO) in the event;
* transverse momentum of each reconstructed jet in the event, p_t;
* jet reconstruction parameters, -log(y_12), -log(y_23), -log(y_34), -log(y_45), -log(y_56);
* event thrust;
* flavor tagging probabilities for the two-jet hypothesis, btag_i, ctag_i, i=1,2;
* angle between the jets that comprise the real W boson.

The final selection efficiencies are given in Table <ref>. Figure <ref> shows the stacked histograms of the signal (black) and background processes after the preselection (left) and after the final selection (right). The dominant backgrounds come from other Higgs decays, H→ bb̅ (red), H→ gg (light green), as well as from qq̅νν (violet) and qq̅lν (light blue). The relative statistical uncertainty of the measurement of σ(Hν_eν_e) × BR(H→ WW^*), expected at √(s) = 3 TeV CLIC with an integrated luminosity of 2.0 ab^-1, is 1.5%.

§ CONCLUSION

Presented in this contribution are the results of two independent studies of the cross section times branching fraction measurement for the Higgs decaying to a W pair, at the ILC and CLIC. Fully hadronic final states are considered. Both studies are based on full detector simulation, including initial state radiation and beam-induced backgrounds. The first study addresses the measurement at the nominal ILC energy, √(s) = 500 GeV, using the Higgsstrahlung Higgs production channel.
Beam polarizations of P(e^-, e^+) = (-80%, +30%), an integrated luminosity of 500 fb^-1 and a Higgs boson mass of 125 GeV are assumed. The obtained result for the relative statistical uncertainty of σ(HZ) · BR(H→ WW^*) is 6.5%. The second analysis is dedicated to the study of the H→ WW^* decay at the highest energy stage of CLIC, √(s) = 3 TeV, using the leading Higgs production channel, WW-fusion. An integrated luminosity of 2 ab^-1, unpolarized beams and a Higgs boson mass of 126 GeV are assumed. The obtained result for the relative statistical uncertainty of σ(Hν_eν_e) · BR(H→ WW^*) is 1.5%. | http://arxiv.org/abs/1703.08871v1 | {
"authors": [
"Mila Pandurović"
],
"categories": [
"hep-ex"
],
"primary_category": "hep-ex",
"published": "20170326205611",
"title": "Measurement of Higgs decay to WW* in Higgsstrahlung at $\\sqrt{s}$=500 GeV ILC and in WW-fusion at $\\sqrt{s}$=3 TeV CLIC"
} |
http://arxiv.org/abs/1703.09341v1 | {
"authors": [
"DaeKil Park"
],
"categories": [
"quant-ph"
],
"primary_category": "quant-ph",
"published": "20170327232757",
"title": "Protection of Entanglement in the presence of Markovian or Non-Markovian Environment via particle velocity : Exact Results"
} |
|
Key Laboratory of Quark&Lepton Physics (MOE) and Institute of Particle Physics, Central China Normal University, Wuhan 430079, China Key Laboratory of Quark&Lepton Physics (MOE) and Institute of Particle Physics, Central China Normal University, Wuhan 430079, China Key Laboratory of Quark&Lepton Physics (MOE) and Institute of Particle Physics, Central China Normal University, Wuhan 430079, China Department of Physics and Astronomy, University of California, Los Angeles, California 90095, USA Key Laboratory of Quark&Lepton Physics (MOE) and Institute of Particle Physics, Central China Normal University, Wuhan 430079, China

Fluctuations of conserved quantities, such as baryon, electric charge and strangeness number, are sensitive observables in heavy-ion collisions to search for the QCD phase transition and critical point. In this paper, we performed a systematical analysis of the various cumulants and cumulant ratios of event-by-event net-strangeness distributions in Au+Au collisions at √(s_NN)=7.7, 11.5, 19.6, 27, 39, 62.4 and 200 GeV from the UrQMD model. We performed a systematical study of the contributions from various strange baryons and mesons to the net-strangeness fluctuations. The results demonstrate that the cumulants and cumulant ratios of net-strangeness distributions extracted from different strange particles show very different centrality and energy dependence behavior. By comparing with the net-kaon fluctuations, we found that the strange baryons play an important role in the fluctuations of net-strangeness. This study can provide useful baselines to study the QCD phase transition and search for the QCD critical point using the fluctuations of net-strangeness in heavy-ion collision experiments. It can help us to understand non-critical physics contributions to the fluctuations of net-strangeness.

Cumulants of event-by-event net-strangeness distributions in Au+Au collisions at √(s_NN)=7.7-200 GeV from UrQMD model

Feng Liu

December 30, 2023
======================================================================================================================

§ INTRODUCTION

One of the main goals of high energy nuclear collisions is to explore the phase structure of strongly interacting hot and dense nuclear matter and map the quantum chromodynamics (QCD) phase diagram, which can be displayed in terms of the temperature (T) and baryon chemical potential (μ_B). Finite temperature lattice quantum chromodynamics (LQCD) calculations in the zero baryon chemical potential region predict that the transition from the hadronic phase to the quark-gluon plasma phase is a smooth crossover <cit.>, while at large μ_B and low temperature, the finite density phase transition is of first order <cit.>. Thus, there should be an end point at the end of the first order phase transition boundary towards the crossover region <cit.>. Fluctuations of conserved quantities, such as net-baryon (B), net-charge (Q) and net-strangeness (S), have been predicted to be sensitive to the QCD phase transition and the QCD critical point. Experimentally, one can measure various order moments (Variance (σ^2), Skewness (S), Kurtosis (κ)) of the event-by-event conserved quantity distributions in heavy-ion collisions.
These moments are sensitive to the correlation length (ξ) of the hot dense matter created in heavy-ion collisions <cit.> and are also connected to the thermodynamic susceptibilities computed in Lattice QCD <cit.> and in the Hadron Resonance Gas (HRG) <cit.> model. These have been studied widely both experimentally and theoretically <cit.>. Experimentally, strange hadrons produced in the final state can provide deep insight into the characteristics of the system, since they are not inherent inside the nuclei of the incoming beams. Thus, the yield ratios and fluctuations of strange particles have been studied at different experiments <cit.>. The STAR experiment has reported the cumulants of net-kaon (a proxy for net-strangeness) multiplicity distributions at √(s_NN)=7.7, 11.5, 14.5, 19.6, 27, 39, 62.4 and 200 GeV <cit.>. However, the net-kaon number is not a conserved quantity in QCD. We want to know to what extent the net-kaon fluctuations can be used as an approximation of the fluctuations of net-strangeness in heavy-ion collisions. Thus, we calculated the cumulants of net-strangeness distributions in Au+Au collisions at RHIC BES energies by including different strange baryons and mesons with the UrQMD model, version 2.3 <cit.>. This is to study the contribution from the strange baryons and mesons to the fluctuations of net-strangeness. This study can provide baselines and qualitative background estimates for the search for the QCD phase transition and the QCD critical point in relativistic heavy-ion collisions.

This paper is organized as follows. In section II, we will introduce the UrQMD model. Then, we show the definitions of cumulants and cumulant ratios in heavy-ion collisions in section III. Furthermore, we present the net-strangeness fluctuations with the contributions from different strange particles in Au+Au collisions from the UrQMD calculations and discuss the physical implications of these results in section IV. Finally, the summary is given in section V.

§ URQMD MODEL

The Ultrarelativistic Quantum Molecular Dynamics (UrQMD) <cit.> approach is one of the microscopic transport models used to describe subsequent individual hadron-hadron interactions and the system evolution. Based on the covariant propagation of all hadrons with stochastic binary scattering, color string formation and resonance decay <cit.>, the UrQMD model can provide phase-space descriptions <cit.> of different reaction mechanisms. At higher energies, e.g. √(s_NN) > 5 GeV, the quark and gluon degrees of freedom cannot be neglected, and the excitation of color strings and their subsequent fragmentation into hadrons are the dominant mechanisms for the multiple production of particles.
In addition, the UrQMD approach can simulate hadron-hadron interactions in heavy-ion collisions over the entire available range of energies, from the SIS energy (√(s_NN) = 2 GeV) to the RHIC top energy (√(s_NN) = 200 GeV), and the collision term in the UrQMD model covers more than fifty baryon species and 45 meson species as well as their anti-particles <cit.>. The comparison of the data (this paper deals with net-strangeness fluctuations) with those obtained from the UrQMD model will tell us about the contribution from the hadronic phase and its associated processes.

§ OBSERVABLES

Experimentally, one can measure particle multiplicities on an event-by-event basis. By measuring the final-state strange particles and anti-particles in heavy-ion collisions, we can count the strange quark (N_s) and anti-strange quark number (N_s̅) in those strange hadrons, respectively. Different strange particles contain different numbers of (anti-)strange quarks, e.g., the strange baryons Λ, Ξ and Ω consist of 1, 2 and 3 strange quarks, respectively, and the strange quark and anti-strange quark carry negative and positive strangeness quantum numbers, respectively. We use N = N_s̅ - N_s to denote the net-strangeness number in one event and <N> = <N_s̅> - <N_s> to denote the mean value of the net-strangeness over the whole sample, where N_s and N_s̅ represent the numbers of strange and anti-strange quarks in one event (N_f = ∑_i n_i^f(p_i), f = s̅, s) and n_i^f is the strange (f = s) or anti-strange (f = s̅) quark number of the strange particle p_i in one event. Then the deviation of N from its mean value can be defined as δ N = N - <N>. The various order cumulants of event-by-event distributions of the variable N can be defined as follows:

C_1,N = <N>,
C_2,N = <(δ N)^2>,
C_3,N = <(δ N)^3>,
C_4,N = <(δ N)^4> - 3<(δ N)^2>^2.

Once we have the definition of cumulants, various moments of the net-strangeness distribution can be written as

M = C_1,N, σ^2 = C_2,N, S = C_3,N/(C_2,N)^3/2 = <(δ N)^3>/σ^3, κ = C_4,N/(C_2,N)^2 = <(δ N)^4>/σ^4 - 3.

Statistically <cit.>, the various cumulants are used to describe the shape of a probability distribution. For instance, the variance (σ^2) characterizes the width of a distribution, while the skewness (S) and kurtosis (κ) are used to describe the asymmetry and peakedness of a distribution, respectively. Theoretical and QCD-based model calculations show that the high order cumulants of conserved quantities, such as baryon, strangeness and electric charge number, are proportional to high powers of the correlation length (ξ) <cit.>:

<(δ N)^2> ∼ ξ^2, <(δ N)^3> ∼ ξ^4.5, <(δ N)^4> - 3<(δ N)^2>^2 ∼ ξ^7.

Lattice QCD calculations tell us that the cumulants of conserved quantities are sensitive to the susceptibilities of the system <cit.>,

C_n,N = VT^3 χ^(n)_N(T, μ_N),

where V is the volume of the system. Experimentally, it is very difficult to measure the volume of the collision system, so cumulant ratios are constructed to remove the effect of the system volume. The moment products κσ^2 and Sσ can be expressed in terms of cumulant ratios:

χ^(3)_N/χ^(2)_N = C_3,N/C_2,N = (Sσ)_N, χ^(4)_N/χ^(2)_N = C_4,N/C_2,N = (κσ^2)_N.

With the above definitions, we can calculate various cumulants and cumulant ratios for the measured event-by-event net-particle multiplicity distributions.

§ RESULTS

In this section, we present the centrality, rapidity and collision energy dependence of various cumulants (C_1, C_2, C_3 and C_4) and cumulant ratios (κσ^2, Sσ) of net-strangeness distributions for Au+Au collisions at √(s_NN)=7.7, 11.5, 19.6, 27, 39, 62.4 and 200 GeV from the UrQMD model.
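As a concrete illustration of the definitions above, the following Python sketch computes C_1-C_4, Sσ and κσ^2 from an event-by-event sample; the synthetic input is a placeholder standing in for actual UrQMD output:

import numpy as np

def cumulants(net_s):
    # C1-C4 of an event-by-event sample, following Eqs. (1)-(4).
    net_s = np.asarray(net_s, dtype=float)
    c1 = net_s.mean()
    d = net_s - c1
    c2 = np.mean(d**2)
    c3 = np.mean(d**3)
    c4 = np.mean(d**4) - 3.0 * c2**2
    return c1, c2, c3, c4

# Placeholder event sample: difference of two Poisson multiplicities.
rng = np.random.default_rng(0)
net_strangeness = rng.poisson(12.0, 10**6) - rng.poisson(11.5, 10**6)

c1, c2, c3, c4 = cumulants(net_strangeness)
print("S*sigma =", c3 / c2, "  kappa*sigma^2 =", c4 / c2)

For this Skellam-like placeholder one expects Sσ ≈ (12.0 - 11.5)/(12.0 + 11.5) and κσ^2 ≈ 1, which provides a quick sanity check of the implementation.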
From low to high energies, the corresponding statistics are 35, 113, 113, 83, 135, 135 and 56 million minimum bias events, respectively. The statistical errors are estimated based on the Delta theorem <cit.>. To avoid auto-correlation, the collision centralities are determined by the (anti-)proton and charged pion multiplicities within pseudo-rapidity |η| < 1. We perform our calculation for four cases ((1) K, (2) K+Λ, (3) K+Λ+Σ+Ξ+Ω, (4) K+K^0+Λ+Σ+Ξ+Ω), where both the particles and anti-particles are included. For each case, we can calculate the cumulants of net-strangeness distributions.

Figure <ref> shows the pseudo-rapidity distributions (dN/dη) of strange and anti-strange quarks for the most central (0-5%) Au+Au collisions at √(s_NN) = 7.7 to 200 GeV calculated from the UrQMD model for the above four cases. The dN/dη|_η=0 of strange and anti-strange quarks increases monotonically with increasing collision energy from 7.7 to 200 GeV for all four cases. If one considers only the K^+ and K^- (top row in Fig. <ref>), the dN/dη distributions of the anti-strange quarks are above those of the strange quarks at all energies. The differences of dN/dη between strange and anti-strange quarks become smaller at higher energies. If we include the strange baryons, as in the case of (K+Λ+Σ+Ξ+Ω), the dN/dη distributions of strange quarks are slightly above those of the anti-strange quarks. This can be explained by the interplay between associate production and pair production of K^+ and K^- from lower to higher energies. At lower energies, associate production from the reaction channel NN → NΛK^+ dominates the production of K^+, which leads to the number of s̅ quarks being larger than the number of s quarks. However, K^+ and K^- are mainly produced by pair production at higher energies, which means the numbers of s̅ and s quarks are similar.

If we want to know to what extent the net-kaon fluctuations can reflect the fluctuations of net-strangeness, the first step is to determine the fraction of strangeness carried by K^+ and K^- over the total strangeness. Figure <ref> shows the energy dependence of the ratios of the strangeness carried by kaons (K^+ and K^-) to the total strangeness from all strange particles at mid-rapidity in 0-5% most central Au+Au collisions from UrQMD calculations. We found that the ratios N_K^-/N_s+s̅ and N_K^++K^-/N_s+s̅ increase smoothly with energy from 7.7 to 200 GeV, and the value of N_K^++K^-/N_s+s̅ at √(s_NN) = 200 GeV is about 45%. On the other hand, the ratio N_K^+/N_s+s̅ smoothly decreases with increasing energy. At low energies, such as 7.7, 11.5 and 19.6 GeV, the values of N_K^+/N_s+s̅ are much larger than those of N_K^-/N_s+s̅, whereas the values of N_K^+/N_s+s̅ and N_K^-/N_s+s̅ are very close to each other at higher energies. This energy dependence again reflects the changing kaon production mechanism. We also show the fraction of strangeness carried by K^0 and K̅^0 over the total strangeness, which is similar to that of the charged kaons. This can be understood by the isospin balance between u and d quarks at mid-rapidity in heavy-ion collisions. The yields of K^+ and K^0, and of K^- and K̅^0, should be very close to each other, respectively.

Figure <ref> shows the centrality dependence of various cumulants of net-strangeness multiplicity distributions at mid-rapidity in Au+Au collisions at √(s_NN) = 7.7 to 200 GeV from UrQMD calculations.
Based on the similarity of the trends, those cumulants (C_1, C_2, C_3 and C_4) can be separated into odd order (C_1, C_3) and even order (C_2, C_4) cumulants. C_2 and C_4 increase monotonically from peripheral to central collisions, and the even order cumulants of net-strangeness extracted from K and K+Λ have very close values. It is observed that C_1 and C_3 also show a similar trend, and the values of net-strangeness from K+Λ and K+K^0+Λ+Ξ+Σ+Ω are close to zero. The net-strangeness number in the initial state is zero; due to strangeness conservation, the net-strangeness number should also be zero in the final state. The results indicate that a better approximation to the real net-strangeness is reached by including more strange particles in the calculations. On the other hand, the odd order cumulants of net-strangeness from K+Λ+Ξ+Σ+Ω are negative. This is because there are more strange baryons (such as Λ, Ξ, Σ and Ω) than anti-strange baryons, especially at low energies. This explains why the odd order cumulants of net-strangeness (N_s̅-N_s) remain negative.

Figure <ref> shows various cumulants of net-strangeness multiplicity distributions as a function of pseudo-rapidity window size for the 0-5% most central Au+Au collisions at √(s_NN) = 7.7 to 200 GeV from UrQMD calculations. The behavior is similar to the centrality dependence of the various cumulants shown in Fig. <ref>. The odd order cumulants C_1 and C_3 show linear variation with the window size, and the results from K+Λ+Ξ+Σ+Ω remain negative due to the large number of strange quarks. The even order cumulants increase linearly with the rapidity window size. When Δη is around 3, the even order cumulants saturate and become suppressed, which can be understood as an effect of net-strangeness number conservation.

Figure <ref> displays various cumulants as a function of collision energy at mid-rapidity for the most central (0-5%) Au+Au collisions from the UrQMD model. We can observe that the even order cumulants (C_2, C_4) increase with increasing collision energy. However, the odd order cumulants (C_1, C_3) of net-kaon decrease with increasing collision energy. Additionally, the mean values of net-strangeness from K+Λ and K+K^0+Λ+Ξ+Σ+Ω are close to zero. To understand those energy dependence trends, let us introduce some important properties of cumulants and moments <cit.>. We use C_n to denote the n^th order cumulant of the probability distribution of the random variable X. The additivity of cumulants for independent variables can be written as:

C_n(X+Y) = C_n(X) + C_n(Y),

where X, Y are independent random variables. With the homogeneity properties of cumulants, we have

C_n(X-Y) = C_n(X) + C_n(-Y) = C_n(X) + (-1)^n C_n(Y).

If the independent random variables X and Y follow Poisson distributions, then X-Y follows a Skellam distribution, and the cumulants of the net-strangeness multiplicity distribution can be written as:

C_n(X-Y) = C_n(X) + (-1)^n C_n(Y) = <X> + (-1)^n <Y>.

For odd order cumulants:

C_1(X-Y) = C_3(X-Y) = <X> - <Y>.

For even order cumulants, we have:

C_2(X-Y) = C_4(X-Y) = <X> + <Y>,

where X denotes the number of anti-strange quarks (X = N_s̅), Y is the number of strange quarks (Y = N_s) and X-Y represents the net-strangeness number (X-Y = N_s̅-N_s).
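For completeness, these relations follow in one line from the cumulant generating function of the Skellam distribution. With X and Y independent Poisson variables,

K_X-Y(t) = ln<e^t(X-Y)> = <X>(e^t - 1) + <Y>(e^-t - 1),

and differentiating n times at t = 0 gives C_n(X-Y) = <X> + (-1)^n<Y>, reproducing the odd and even order results quoted above.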
At low energies, the associate production channel NN → NΛ K^+ dominates the production of K^+, which makes the yield of K^+ larger than the yield of K^-. At high energies, due to pair production, the yields of the strange and anti-strange particles are very close to each other. For the case of net-kaon cumulants, from Eq. (<ref>) one can infer that the difference between the odd order cumulants of K^+ and K^- will become small with increasing collision energy. Because of the additivity of the even order cumulants from s̅ and s quarks, as displayed by Eq. (<ref>), the even order cumulants of net-strangeness show an increasing trend with increasing collision energy for the different cases. As more strange particles are included, we observe larger values of the even order cumulants. Meanwhile, the net-strangeness obtained from K+K^0+Λ+Ξ+Σ+Ω is a good approximation of the real net-strangeness, and the values of its odd order cumulants are close to zero.

Figures <ref> and <ref> show κσ^2 and Sσ of the net-strangeness distributions as a function of the average number of participant nucleons (N_part) in Au+Au collisions at √(s_NN) = 7.7 to 200 GeV from the UrQMD model. The κσ^2 from the different cases show weak centrality dependence. For the cases of K and K+Λ, the values of κσ^2 are consistent with unity within errors. When more multi-strange baryons are included, as in the case of K+Λ+Ξ+Σ+Ω, the values of κσ^2 are above unity. This indicates that the multi-strange baryons play an important role in the high order fluctuations of net-strangeness, similar to the role of doubly charged particles in net-charge fluctuations. The Sσ of net-kaon increases with the number of participants, and the values from K+Λ+Σ+Ξ+Ω and K+K^0+Λ+Σ+Ξ+Ω are negative. This can be explained by Eq. (<ref>): because C_3 of net-strangeness is negative, Sσ is negative.

Figure <ref> shows κσ^2 and Sσ of net-strangeness distributions as a function of colliding energy for the most central (0-5%) Au+Au collisions at mid-rapidity. The κσ^2 of net-strangeness, especially from K and K+Λ, is close to unity and shows weak dependence on collision energy. If the multi-strange baryons are included in the calculations, the values of κσ^2 are above unity. We also observed that the Sσ of net-kaon distributions decreases with increasing collision energy, and the values calculated from K+Λ and K+K^0+Λ+Σ+Ξ+Ω are close to zero. One can observe different energy dependence behavior between κσ^2 and Sσ. This is because the skewness is sensitive to the asymmetry between strangeness and anti-strangeness, while the kurtosis is sensitive to the multi-strange baryons with strangeness number |s| ≥ 2. This is similar to the net-charge case, where doubly charged particles have strong effects on net-charge fluctuations <cit.>. If we take out the K^0, the values of Sσ become negative and monotonically decrease with decreasing energy. This indicates that the neutral kaons carry a similar amount of strangeness to the charged kaons and show similar trends in Sσ as a function of energy. On the other hand, based on a study with the hadronic transport model JAM in Au+Au collisions at 5 GeV, we find that the effects of hadronic scattering on proton fluctuations are negligible.
One could also expect that the hadronic re-scattering effects are also small for net-kaon fluctuations; however, detailed model studies are needed in the future.

§ SUMMARY

We have performed systematical studies on the centrality, rapidity and energy dependence of the cumulants (C_1 - C_4) and cumulant ratios (κσ^2 and Sσ) of net-strangeness distributions in Au+Au collisions at √(s_NN)=7.7, 11.5, 19.6, 27, 39, 62.4 and 200 GeV from the UrQMD model. It is found that fluctuations of net-strangeness can be influenced by the production mechanism of strangeness as a function of collision energy, which causes different results at lower and higher energies. Those differences can be understood as follows: associate production of K^+ plays an important role at lower energies, whereas pair production of strangeness and anti-strangeness dominates at higher energies. On the other hand, our results show that κσ^2 of net-strangeness has weak centrality and energy dependence. In the current model study, we showed that the fraction of total strangeness carried by kaons is smaller than 45% and monotonically decreases with decreasing energy. By comparing with the net-kaon fluctuations, we found that the multi-strange baryons play an important role in the fluctuations of net-strangeness. Those multi-strange baryons lead to values of κσ^2 above unity. However, in terms of searching for a non-monotonic energy dependence of the fluctuation observables near the QCD critical point, the net-kaon fluctuations should still have sensitivity. Since there is no QCD critical point or phase transition physics implemented in the UrQMD model, our model calculations can provide a baseline for the search for the QCD critical point in heavy-ion collisions.

Figure <ref> and Figure <ref> respectively show κσ^2 and Sσ of net-strangeness varying with pseudo-rapidity window size for the most central (0-5%) Au+Au collisions from √(s_NN) = 7.7 GeV to 200 GeV. The κσ^2 fluctuates around unity. κσ^2 of net-kaon, net-(K+Λ) and net-(K+Λ+Σ+Ξ+Ω) multiplicity distributions monotonically decreases with increasing rapidity window size for 0-2 GeV/c transverse momentum coverage from 11.5 GeV to 39 GeV. For lower energies (7.7 GeV) and higher energies (above 62.4 GeV), there is a relatively larger deviation of κσ^2 of net-strangeness from unity at larger rapidity windows. Clearly, Sσ of net-kaon, net-(K+Λ) and net-(K+Λ+Σ+Ξ+Ω) monotonically increases as the rapidity window size increases, aside from the sign change. Sσ of net-(K+Λ) as well as net-(K+K^0+Λ+Σ+Ξ+Ω) has weak rapidity dependence. The sign change of Sσ of net-strangeness is similar to what happens in Figure <ref>.

§ ACKNOWLEDGEMENT

The work was supported in part by the MoST of China 973-Project No. 2015CB856901 and NSFC under grants No. 11575069 and 11221504. | http://arxiv.org/abs/1703.09114v2 | {
"authors": [
"Chang Zhou",
"Ji Xu",
"Xiaofeng Luo",
"Feng Liu"
],
"categories": [
"nucl-ex",
"hep-ex",
"hep-ph",
"nucl-th"
],
"primary_category": "nucl-ex",
"published": "20170327143932",
"title": "Cumulants of event-by-event net-strangeness distributions in Au+Au collisions at $\\sqrt{s_\\mathrm{NN}}$=7.7-200 GeV from UrQMD model"
} |
Department of Physics and Astronomy, MSN 3F3, George Mason University, Fairfax, Virginia 22030, USADepartment of Physics and Astronomy, MSN 3F3, George Mason University, Fairfax, Virginia 22030, USAAn optimized interatomic potential has been constructed for silicon using a modified Tersoff model. The potential reproduces a wide range of properties of Si and improves over existing potentials with respect to point defect structures and energies, surface energies and reconstructions, thermal expansion, melting temperature and other properties. The proposed potential is compared with three other potentials from the literature. The potentials demonstrate reasonable agreement with first-principles binding energies of small Si clusters as well as single-layer and bilayer silicenes. The four potentials are used to evaluate the thermal stability of free-standing silicenes in the form of nano-ribbons, nano-flakes and nano-tubes. While single-layer silicene is mechanically stable at zero Kelvin, it is predicted to become unstable and collapse atroom temperature. By contrast, the bilayer silicene demonstrates a larger bending rigidity and remains stable at and even above room temperature. The results suggest that bilayer silicene might exist in a free-standing form at ambient conditions. An optimized interatomic potential for silicon and its application to thermal stability of silicene Y. Mishin December 30, 2023 ===================================================================================================§ INTRODUCTION Silicon is one of the most important functional materials widely used in electronic, optical, energy conversion and many other applications. Not surprisingly, Si has been the subject of many classical molecular dynamics (MD) and other large-scale atomistic computer studies for almost three decades. Although classical atomistic simulations cannot access electronic or magnetic properties, they are indispensable for gaining a better understanding of the atomic structures, thermal and mechanical properties of the crystalline, liquid and amorphous Si and various nano-scale objects such as nano-wires and nano-dots. Atomistic simulations rely on semi-empirical interatomic potentials. The accuracy of the results delivered by atomistic simulations depends critically on the reliability of interatomic potentials.Several dozen semi-empirical potentials have been developed for Si. Although none of them reproduces all properties accurately, there is a trend towards a gradual improvement in their reliability as more sophisticated potential generation methods are developed and larger experimental and first-principles datasets become available for the optimization and testing. The most popular Si potentials were proposed by Stillinger and Weber (SW)<cit.> and Tersoff.<cit.> The original Tersoff potentials were modified by several authors by slightly changing the analytical functions and improving the optimization.<cit.> Other Si potential formats include the environment-dependent interatomic potential,<cit.> the modified embedded atom method (MEAM) potentials,<cit.> and bond-order potentials.<cit.>One of the most significant drawbacks of the existing Si potentials is the overestimation of the melting temperature T_m, in many cases by hundreds of degrees. Other typical problems include underestimated vacancy and surface energies and positive Cauchy pressure (c_12-c_44), which in reality is negative (c_ij being elastic constants). 
Kumagai et al.<cit.> constructed a significantly improved Tersoff potential that predicts T_m=1681 K, in close agreement with the experimental value of 1687 K, gives the correct (negative) Cauchy pressure, and is accurate with respect to many other properties. This potential, usually referred to as MOD,<cit.> is probably the most advanced Tersoff-type potential for Si available today. However, it still suffers from a low vacancy formation energy and low surface energies, and it overestimates the thermal expansion at high temperatures and the volume effect of melting.

The goal of this work was twofold. The first goal was to further improve on the MOD potential<cit.> by addressing its shortcomings with a minimal impact on other properties. This was achieved by slightly modifying the potential format and performing a deeper optimization. When testing the new potential, we compare it not only with MOD but also with the popular SW potential.<cit.> We further include the MEAM potential developed by Ryu et al.<cit.> to represent a different potential format. To our knowledge, this is the only MEAM potential whose melting point is close to the experimental value.

The second goal was to test the four potentials for their ability to predict the energies of low-dimensional structures, such as small Si clusters and single- and double-layer forms of silicene (a 2D allotrope of Si). Si potentials are traditionally considered to be incapable of reproducing low-dimensional structures. This view is largely based on testing the SW potential; the MOD and MEAM potentials have not been tested for the properties of clusters or silicenes in any systematic manner. Such tests were conducted in this work using all four potentials. The results suggest that the present potential, MOD and MEAM do capture the main trends and in many cases agree with first-principles density functional theory (DFT) calculations. As such, they can be suitable for exploratory studies of the thermal and mechanical stability of Si clusters and 2D structural forms of Si. In this work we apply them to evaluate the stability of free-standing single-layer and bilayer silicenes at room temperature.

§ POTENTIAL GENERATION PROCEDURES

The total energy of a collection of atoms is represented in the form

E = (1/2) ∑_{i≠j} ϕ_ij(r_ij),

where r_ij is the distance between atoms i and j and the bond energy ϕ_ij is taken as

ϕ_ij = f_c(r_ij) [A exp(-λ_1 r_ij) - b_ij B exp(-λ_2 r_ij) + c_0].

Here, the bond order b_ij is given by

b_ij = (1 + ξ_ij^η)^(-δ),

where

ξ_ij = ∑_{k≠i,j} f_c(r_ik) g(θ_ijk) exp[α (r_ij - r_ik)^β].

The term (1 + ξ_ij) represents an effective coordination number of atom i, and f_c(r) is a cutoff function. The latter has the form

f_c(r) = 1 for r ≤ R_1,
f_c(r) = 1/2 + (9/16) cos[π(r - R_1)/(R_2 - R_1)] - (1/16) cos[3π(r - R_1)/(R_2 - R_1)] for R_1 < r < R_2,
f_c(r) = 0 for r ≥ R_2,

where R_1 and R_2 are cutoff radii. The outer cutoff R_2 is chosen between the first and second coordination shells of the diamond cubic structure. The angular function g(θ_ijk) has the generalized form

g(θ) = c_1 + [c_2 (h - cosθ)^2 / (c_3 + (h - cosθ)^2)] {1 + c_4 exp[-c_5 (h - cosθ)^2]},

where θ_ijk is the angle between the bonds ij and ik. These functional forms are the same as for the MOD potential,<cit.> except for the new coefficient c_0, which was added to better control the attractive part of the potential.

The adjustable parameters of the potential are A, B, α, h, η, λ_1, λ_2, R_1, R_2, δ, c_0, c_1, c_2, c_3, c_4 and c_5. The power β is a fixed odd integer. In the original Tersoff potential<cit.> β=3, whereas Kumagai et al.<cit.> chose β=1.
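For concreteness, the sketch below evaluates these functional forms in Python. The parameter values are placeholders chosen purely for illustration; they are not the optimized values listed in Table <ref>.

```python
import math

# Placeholder parameters for illustration only -- NOT the optimized
# values of the present potential (those are listed in Table 1).
A_, B_ = 3281.6, 121.0            # eV (hypothetical)
lam1, lam2 = 3.23, 1.35           # 1/Angstrom (hypothetical)
eta, delta = 1.0, 0.5             # bond-order exponents (hypothetical)
alpha, beta = 2.3, 3              # beta is a fixed odd integer
c0 = 0.0                          # new attractive-part coefficient
c1, c2, c3, c4, c5 = 0.2, 7.3e5, 1.0e6, 1.0, 26.0   # hypothetical
h = -0.365                        # hypothetical
R1, R2 = 2.7, 3.3                 # cutoff radii, Angstrom (hypothetical)

def f_c(r):
    """Smooth cutoff; equals 1 at r = R1 (1/2 + 9/16 - 1/16)
    and 0 at r = R2 (1/2 - 9/16 + 1/16)."""
    if r <= R1:
        return 1.0
    if r >= R2:
        return 0.0
    x = math.pi * (r - R1) / (R2 - R1)
    return 0.5 + (9.0 / 16.0) * math.cos(x) - (1.0 / 16.0) * math.cos(3.0 * x)

def g_ang(cos_t):
    """Angular function g(theta), written in terms of cos(theta)."""
    d2 = (h - cos_t) ** 2
    return c1 + (c2 * d2 / (c3 + d2)) * (1.0 + c4 * math.exp(-c5 * d2))

def bond_energy(r_ij, env):
    """phi_ij for a bond of length r_ij; env lists (r_ik, cos_theta_ijk)
    for the other neighbors k of atom i within the cutoff."""
    xi = sum(f_c(r_ik) * g_ang(ct) * math.exp(alpha * (r_ij - r_ik) ** beta)
             for r_ik, ct in env)
    b_ij = (1.0 + xi ** eta) ** (-delta)          # bond order
    return f_c(r_ij) * (A_ * math.exp(-lam1 * r_ij)
                        - b_ij * B_ * math.exp(-lam2 * r_ij) + c0)

# The total energy is then E = (1/2) * sum of bond_energy over all
# ordered pairs (i, j) with r_ij < R2.
```

Note that the odd integer β keeps the sign of (r_ij - r_ik) in the exponential, so shorter competing bonds (r_ik < r_ij) increase ξ_ij and weaken the bond ij.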
We tried both values of β and found that β=3 gives a better potential.

The free parameters of the potential were trained to reproduce basic physical properties of the diamond cubic (A4) structure and the energies of several alternate structures. Specifically, the fitting database included the experimental lattice parameter a, cohesive energy E_c, elastic constants c_ij, and the vacancy formation energy E_v^f. The alternate structures were: simple cubic (SC), β-Sn (A5), face-centered cubic (FCC), hexagonal close-packed (HCP), body-centered cubic (BCC), simple hexagonal (HEX), wurtzite (B4), BC8, ST12, and clathrate (cP46). Their energies obtained by DFT calculations are available from open-access databases such as the Materials Project,<cit.> OQMD<cit.> and AFLOW.<cit.> Some of these structures have been found experimentally as Si polymorphs under high pressure; others were only generated in the computer for testing purposes.

The parameter optimization process utilized a simulated annealing algorithm. The objective function was the sum of weighted squares of the deviations of properties from their target values. Numerous optimization runs were conducted, using the weights as a tool to achieve the most meaningful distribution of the errors over different properties. Several versions of the potential were generated, and the version deemed most reasonable was selected as final.

The optimized potential parameters are listed in Table <ref>. The potential has been incorporated in the molecular dynamics package LAMMPS (Large-scale Atomic/Molecular Massively Parallel Simulator)<cit.> as a new pair style.

The transferability of the new potential was evaluated by computing a number of physical properties that were not included in the training database and comparing the results with experimental data and/or DFT calculations available in the literature. The same comparison was made for the MOD, MEAM and SW potentials to demonstrate their strengths and weaknesses relative to the new potential. We utilized the MOD and SW potential files from the LAMMPS potential library; the MEAM potential file was obtained from the developers.<cit.> The potential testing results are reported in the next section.

§ PROPERTIES OF SOLID SI

Table <ref> summarizes some of the properties of crystalline Si predicted by the four potentials. All properties have been computed in this work unless otherwise indicated by citations. The defect energies are reported after full atomic relaxation.

§.§ Lattice properties

The present potential, MOD and MEAM accurately reproduce the elastic constants. The SW potential gives less accurate elastic constants and a positive Cauchy pressure, contrary to experiment.<cit.> The phonon density of states (DOS) and phonon dispersion relations were computed by the method developed by Kong<cit.> and implemented in LAMMPS. The MD simulation was performed at 300 K utilizing a primitive 16×16×16 supercell with 8192 atoms. The DOS plots are shown in Fig. <ref>(a), and the respective zone-center optical frequencies ν_max are indicated in Table <ref>. The present potential, MOD and SW predict surprisingly similar ν_max values that underestimate the experimental frequency by about 2 THz. The MEAM potential overshoots ν_max by about 10 THz, and the entire DOS is stretched by a factor of 1.63. Note that none of the four potentials reproduces the sharp peak at about 5 THz arising from the acoustic zone-boundary phonons. Fig. <ref>(b) displays the phonon dispersion curves predicted by the present potential.
While the general agreement with the experimental dispersion curves<cit.> is evident and the longitudinal acoustic branches are reproduced accurately, the potential overestimates the transverse acoustic zone-boundary frequencies and the optical frequencies.

The cubic lattice parameter a was computed as a function of temperature by zero-pressure MD simulations. The linear thermal expansion coefficient (a - a_0)/a_0 relative to room temperature (a_0 at 295 K) is compared with experimental data in Fig. <ref>. The SW potential demonstrates exceptionally good agreement with experiment. The present potential slightly overestimates the experiment at temperatures below 1300 K and underestimates it at higher temperatures. The negative slope at high temperatures is unphysical, but the overall agreement with experiment is reasonable. The MOD potential gives a similar thermal expansion at low temperatures but over-predicts it at high temperatures. The MEAM potential grossly overestimates the thermal expansion. Given also the poor agreement for phonons, care should be exercised when using this potential for thermodynamic calculations of crystalline Si. Note that neither the phonon properties nor the thermal expansion was included in the fitting databases of the potentials.

§.§ Lattice defects

According to DFT calculations,<cit.> a Si vacancy can exist in several metastable structures. In the lowest-energy structure, the four neighbor atoms move slightly towards the vacant site, preserving the tetrahedral (T_d) symmetry and leaving four dangling bonds. A slightly less favorable structure is obtained when one of the four atoms moves towards the other three and forms six identical bonds. This configuration has a hexagonal (D_3d) symmetry and is referred to as the “dimerized” or “split” vacancy. This vacancy reconstruction eliminates the dangling bonds but increases the elastic strain in the surrounding lattice. The present potential and MEAM correctly predict the split vacancy to be less stable than the T_d vacancy. The latter has a formation energy within the range of the DFT values and consistent with the experimental value of 3.6 eV.<cit.> (It should be noted, though, that the experiments are performed at high temperatures, at which the vacancy structure is unknown.) The MOD and SW potentials significantly under-predict the formation energy of the T_d vacancy. In addition, with the MOD potential the split vacancy spontaneously transforms to a D_2d structure with an energy of 3.41 eV (the DFT value is 3.46 eV),<cit.> whereas the SW potential predicts the split vacancy to be mechanically unstable and to transform spontaneously to the T_d structure.

Self-interstitials can exist in four distinct configurations: hexagonal (hex), tetrahedral (T_d), bond-center (B) and ⟨110⟩ split (Table <ref>). Given the large scatter of the DFT formation energies, all four potentials perform almost equally well. There is one exception: the MEAM and SW potentials predict the hexagonal interstitial to be mechanically unstable and to transform spontaneously to the tetrahedral configuration. Both potentials also overestimate the B-interstitial energy.

Surface energies were computed for the low-index orientations {100}, {110} and {111}. Experiments have shown that these surfaces can undergo reconstructions to several different structures.<cit.> Reconstructions of the {110} and {111} surfaces are accompanied by a modest energy reduction of about 0.3-0.4 J/m^2. In this paper, these surfaces were tested in unreconstructed states.
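For reference, the slab construction commonly used for such surface-energy calculations can be summarized as follows. This is a minimal sketch with invented numbers, not the production setup used in this work; it assumes a periodic slab with two equivalent free surfaces.

```python
EV_PER_A2_TO_J_PER_M2 = 16.0218   # 1 eV/Angstrom^2 = 16.0218 J/m^2

def surface_energy(e_slab, n_atoms, e_bulk_per_atom, area):
    """gamma = (E_slab - N * e_bulk) / (2 * A) for a periodic slab with
    two free surfaces of area `area` (Angstrom^2); energies in eV."""
    gamma = (e_slab - n_atoms * e_bulk_per_atom) / (2.0 * area)
    return gamma * EV_PER_A2_TO_J_PER_M2

# Example with made-up numbers: a 96-atom slab, a bulk cohesive energy
# of -4.63 eV/atom, and a surface cell area of 58.3 Angstrom^2.
print(surface_energy(-436.0, 96, -4.63, 58.3))   # ~1.17 J/m^2
```

The factor of two in the denominator accounts for the two free surfaces created by the periodic slab geometry.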
By contrast, the dimer reconstruction of the {100} surface to the more stable 2×1 structure reduces the surface energy by almost 1 J/m^2. In this case, both the reconstructed and unreconstructed structures were compared with DFT calculations. Table <ref> shows that the SW potential does an excellent job of reproducing the DFT surface energies. The MOD potential is the least accurate: it systematically underestimates the surface energies for all orientations. The present potential demonstrates a substantial improvement over MOD: all energies are higher and closer to the DFT data. The MEAM potential is equally good for all surfaces except for the unreconstructed {100} structure. The latter is mechanically unstable with this potential and reconstructs to the 2×1 structure spontaneously during static relaxation at 0 K. This instability was not observed in the DFT calculations.<cit.> The surface energy of 1.74 J/m^2 shown in the table was obtained by constrained relaxation of this surface, in which the atoms were only allowed to move in the direction normal to the surface to prevent the dimerization. With the potential proposed in this work, the unreconstructed {100} surface is stable at 0 K, and it forms symmetrical rows of dimers corresponding to the 2×1 reconstruction upon heating to 1000 K and slowly cooling down to 0 K.

As another test of the potentials, unstable stacking fault energies γ_us were calculated for the {111} and {100} crystal planes. Such faults are important for the description of dislocation core structures. In silicon, dislocations glide predominantly on {111} planes. The spacing between {111} planes alternates between wide and narrow: in the former case the chemical bonds are normal to the planes, while in the latter they make 19.47° angles with them. A generalized stacking fault is obtained by translation of one half-crystal relative to the other in a chosen direction parallel to a {111} plane. Depending on whether the cutting plane passes between widely spaced or narrowly spaced atomic layers, the stacking fault is called shuffle type or glide type, respectively. After each increment of crystal translation, the atoms are allowed to minimize the total energy by local displacements normal (but not parallel) to the fault plane. The excess energy per unit area plotted as a function of the translation vector is called the gamma-surface. If the dislocation Burgers vector is parallel to a crystallographic direction ⟨hkl⟩, then its core structure is dictated by the {111}⟨hkl⟩ cross-sections of the gamma-surface. The unstable stacking fault energy γ_us is the maximum energy in this cross-section.

Figure <ref> displays three cross-sections of the {111} gamma-surface computed with the four potentials in comparison with DFT calculations. The figure additionally includes the {100}⟨110⟩ cross-section, for which DFT data are available. The respective γ_us values are summarized in Table <ref>. While none of the potentials reproduces the DFT curves well, the SW potential tends to be the least accurate. For some of the cross-sections, the Tersoff-type potentials “chop off” the tip of the curve due to the short range of atomic interactions and a relatively sharp cutoff. It should also be noted that the potentials do not reproduce the stable stacking fault predicted by DFT calculations [Fig. <ref>(c)].
This fault arises from long-range interactions and is not captured by these potentials.

§ MELTING TEMPERATURE AND LIQUID PROPERTIES OF SI

The melting temperature was computed by the interface velocity method. A periodic simulation block containing a (111) solid-liquid interface was subject to a series of isothermal MD simulations in the NPT ensemble (zero pressure in all directions) at several different temperatures. The interface migrated towards one phase or the other, depending on whether the temperature was above or below the melting point. The total energy of the system was monitored in this process and was found to be a nearly linear function of time. The slope of this function gives the rate of the energy change due to the phase transformation. A plot of this energy rate as a function of temperature was used to find the melting point by linear interpolation to zero rate (Fig. <ref>). For the present potential, the melting temperature was found to be T_m=1687±4 K (the error bar is the standard deviation of the linear fit). This temperature is in excellent agreement with the experimental melting point of 1687 K, even though it was not included in the fitting procedure. To verify our methodology, similar calculations were performed for the MOD potential. The result was T_m=1682±4 K, which matches the value of 1681 K reported by the potential developers.<cit.> For the SW potential, the same method gives T_m=1677±4 K. This number is consistent (within the error bars) with T_m=1691±20 K obtained by thermodynamic calculations.<cit.> The energy rate versus temperature plots for the MOD and SW potentials can be found in the Supplemental Material to this paper.<cit.>

Table <ref> summarizes the predictions of the four potentials for the latent heat of melting L and the volume effect of melting ΔV_m relative to the volume of the solid, V_solid. None of the potentials reproduces these properties well. The present potential gives the most accurate volume effect ΔV_m/V_solid but the least accurate latent heat L. The MOD potential predicts a better value of L but overestimates the volume effect by a factor of two.

Prediction of the structural properties of liquid Si presents a significant challenge to interatomic potentials. The nature of atomic bonding in Si changes from covalent to metallic upon melting,<cit.> causing an increase in density. In this work, the structure of liquid Si was characterized by the pair correlation function g(r) and the bond-angle distribution function g(θ,r). These functions were averaged over 300 uncorrelated snapshots from NPT MD simulations under zero pressure at 1750 K using a simulation block containing 6912 atoms. The angular distribution g(θ,r) was computed for bonds within the radius r_m of the first minimum of g(r) and normalized to unit area under the curve. The results are shown in Fig. <ref>. The present potential turns out to be the least accurate for the liquid properties: the first maximum of g(r) is too high and the first minimum too deep in comparison with experiment.<cit.> The other potentials perform better but still show significant departures from experiment. For the bond-angle distribution, the results computed with the four potentials are very different, and none agrees with the DFT simulations. The DFT simulations (ab initio MD)<cit.> yield a broader distribution with two peaks of comparable height centered at 60° and 90°.
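As an illustration of the structural analysis described above, g(r) for a single snapshot can be computed along the following lines. This is a minimal numpy sketch assuming an orthorhombic periodic box and r_max below half the smallest box length; in this work the functions were averaged over 300 such snapshots.

```python
import numpy as np

def pair_correlation(positions, box, r_max, nbins=200):
    """g(r) for one snapshot in an orthorhombic periodic box.
    positions: (N, 3) array in Angstrom; box: (3,) box lengths.
    Requires r_max <= min(box)/2 for the minimum-image convention."""
    pos = np.asarray(positions)
    box = np.asarray(box)
    n = len(pos)
    rho = n / np.prod(box)                        # number density
    edges = np.linspace(0.0, r_max, nbins + 1)
    hist = np.zeros(nbins)
    for i in range(n - 1):
        d = pos[i + 1:] - pos[i]
        d -= box * np.round(d / box)              # minimum-image convention
        r = np.linalg.norm(d, axis=1)
        hist += np.histogram(r[r < r_max], bins=edges)[0]
    shell = (4.0 / 3.0) * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
    ideal = 0.5 * n * rho * shell                 # pair count for an ideal gas
    r_mid = 0.5 * (edges[:-1] + edges[1:])
    return r_mid, hist / ideal
```

Normalizing the pair histogram by the ideal-gas pair count in each radial shell gives g(r) → 1 at large r; the first minimum of the resulting curve defines the bond cutoff r_m used for the angular distribution.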
The present potential strongly underestimates the 60° peak, overestimates the peak at 90°, and creates another peak at the tetrahedral angle of 109.47°. With the other potentials, the position of the large peak varies between 90° and 109.47°. Overall, our potential overestimates the degree of structural order in the liquid phase. This seems somewhat surprising given that this potential predicts the most accurate volume effect of melting.

§ ALTERNATE CRYSTAL STRUCTURES OF SI

Tables <ref> and <ref> show the equilibrium energies of several crystal structures of Si relative to the diamond cubic structure, together with the respective equilibrium atomic volumes. All these structures were included in the potential fitting procedure except for two. The h-Si_6 structure was recently identified by DFT calculations as a new mechanically stable polymorph of Si, attractive for optoelectronic applications due to its direct band gap of 0.61 eV and interesting transport and optical properties.<cit.> The h-Si_6 structure is composed of Si triangles forming a hexagonal unit cell with the P6_3/mmc space group. Si_24 is another mechanically stable polymorph, which has recently been synthesized by removing Na from the Na_4Si_24 precursor.<cit.> The orthorhombic Cmcm structure of Si_24 contains open channels composed of 6- and 8-member rings. This polymorph has a quasi-direct 1.3 eV band gap and demonstrates unique electronic and optical properties, making it a promising candidate for photovoltaic and other applications. The h-Si_6 and Si_24 structures were used for testing purposes to evaluate the transferability of the potentials.

All structures were equilibrated by isotropic volume relaxation without local displacements of atoms. For the HCP and wurtzite structures, the c/a ratios were fixed at the ideal values. For the simple hexagonal, β-Sn and h-Si_6 structures, c/a was fixed at the DFT values of 0.94, 0.552 and 0.562, respectively. It is worth mentioning that the present potential and MOD predict the wurtzite phase to be mechanically unstable at 0 K, which appears to be a generic feature of Tersoff-type potentials.

In Tables <ref> and <ref>, we compare the predictions of the four potentials with DFT calculations available in the literature. Since the tables are overloaded with numerical data, we found it instructive to recast this information in a graphical format. In Figs. <ref> and <ref> we plot the energies (volumes) predicted by each potential against the respective DFT energies (volumes) computed by different authors. The bisecting line is the line of perfect correlation. The first thing to notice is the large scatter of the DFT data reported by different sources, which makes a comparison with the potentials somewhat ambiguous. For each potential, the agreement was quantified by the root-mean-square (RMS) deviation of the data points from the bisecting line. The RMS deviations obtained are shown in the last row of Tables <ref> and <ref>. It should be emphasized that these RMS deviations reflect not only the differences between the potentials and the DFT calculations but also the scatter of the DFT points themselves; thus, only a comparison of the relative values of the RMS deviations is meaningful. It should also be noted that the energy deviations are strongly dominated by high-energy structures, such as the close-packed FCC and HCP phases. With this in mind, it is evident that the present potential is the least successful in reproducing the structural energies, whereas the MOD potential is the most successful.
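A minimal sketch of this metric follows, reading the deviation from the bisecting line simply as the difference between the potential and DFT values for each data point (one plausible interpretation; a perpendicular-distance definition would differ only by a constant factor of 1/√2). All numbers are invented placeholders.

```python
import numpy as np

def rms_vs_bisecting_line(dft_values, potential_values):
    """RMS deviation of (DFT, potential) pairs from perfect correlation,
    taken here as the RMS of the point-wise differences."""
    d = np.asarray(potential_values) - np.asarray(dft_values)
    return float(np.sqrt(np.mean(d ** 2)))

# Hypothetical example: relative structural energies (eV/atom) from
# several DFT sources vs one potential.
e_dft = [0.30, 0.33, 0.42, 0.55]
e_pot = [0.28, 0.37, 0.40, 0.60]
print(rms_vs_bisecting_line(e_dft, e_pot))
```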
For the atomic volumes, however, the present potential and MOD are equally accurate, while the SW and MEAM potentials show significantly larger deviations. It is interesting to note that the present potential gives the most accurate predictions for the energy and volume of the novel h-Si_6 and Si_24 structures, which were not included in the fitting database. The MOD potential comes a close second, whereas the MEAM and SW potentials are significantly less accurate. The energy-volume plots for several selected structures can be found in the Supplemental Material to this article.<cit.>

§ SILICON CLUSTERS

The structure and properties of small Si clusters offer a stringent test of interatomic potentials. Potentials are usually optimized for bulk properties, whereas clusters display very different and much more open environments, in which the coordination number and the type of bonding may change very significantly from one structure to another. Si potentials are traditionally considered to be incapable of reproducing cluster properties unless such properties are specifically included in the fitting process, as in the case of the Bolding and Andersen potential.<cit.> It was thus interesting to compare the predictions of the four potentials with first-principles calculations.

Figs. <ref> and <ref> show the structures of the Si_n (n=2-8) clusters tested in this work. Several different structures are included for each cluster size n whenever first-principles data are available. Such structures are labeled by an index m in the Si_n.m format, in the order of increasing cohesive (binding) energy according to the DFT calculations.<cit.> Thus, the structure labeled Si_n.1 represents the DFT-predicted ground state for each cluster size n (except for the dimer Si_2, which has a single structure). In addition to the DFT calculations,<cit.> we included the results of quantum-chemical (QC) calculations on the Hartree-Fock level.<cit.> Such calculations are more accurate, but their energy scale is not fully compatible with that of the DFT calculations. To enable comparison, we followed the proposal<cit.> that the QC energies be scaled by a factor of 1.2 to ensure agreement with experiment for the dimer energy.

Table <ref> summarizes the predictions of the four potentials in comparison with the DFT calculations<cit.> and the unscaled QC energies.<cit.> In addition to the clusters, we included an infinitely long linear chain for the sake of comparison. To aid visual comparison, Fig. <ref> shows the cluster energies grouped by cluster size (same-size clusters are connected by straight lines). The QC energies are plotted in the scaled format. Note that the scaling does indeed bring the QC and DFT energies into general agreement with each other. Despite the significant scatter of the individual energies on the level of 0.2-0.4 eV/atom, both calculation methods predict the same ground state for trimers, tetramers and pentamers. None of the potentials predicts the correct ordering of all DFT/QC energies. The present potential and MOD show about the same level of accuracy, but the present potential makes fewer mistakes in the ordering. Both potentials tend to slightly under-bind the clusters. The MEAM potential is the most successful in reproducing the cluster energies, except for the dimer energy, for which it is the least accurate. There are mistakes in the ordering, but overall the deviations from the first-principles calculations are about the same as the difference between the two first-principles methods.
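The ordering comparison used above can be made precise by counting pairwise inversions between two energy rankings of the same isomers. A small sketch with invented numbers:

```python
def ordering_mistakes(e_ref, e_pot):
    """Count isomer pairs whose relative energy order in e_pot differs
    from the reference ranking e_ref (a pairwise-inversion count)."""
    n = len(e_ref)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    return sum(1 for i, j in pairs
               if (e_ref[i] - e_ref[j]) * (e_pot[i] - e_pot[j]) < 0)

# Invented example: binding energies (eV/atom) of four isomers of one
# cluster size from DFT and from a model potential.
e_dft = [-3.10, -3.05, -2.98, -2.90]
e_model = [-3.08, -3.09, -2.97, -2.92]
print(ordering_mistakes(e_dft, e_model))   # -> 1 inversion
```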
The SW potential performs poorly: for some of the clusters, the binding energy is underestimated by more than 1 eV/atom. For the infinite atomic chain, the present potential and MOD are in closest agreement with the DFT/QC energies (Table <ref>).

This comparison leads to the conclusion that, at least for the cluster structures tested here, the present potential, MOD and MEAM are quite capable of predicting the general trends of the cluster energies with reasonable accuracy without fitting.

§ 2D SILICON STRUCTURES

§.§ Single-layer silicenes

Silicenes are 2D allotropes of Si that have recently attracted much attention due to their interesting physical properties and potential device applications.<cit.> By contrast to carbon, the sp^3 hybridized Si would seem to be an unlikely candidate for a 2D material. Nevertheless, epitaxial honeycomb Si layers have been found on metallic substrates such as Ag(111).<cit.> Unlike in graphene, some of the 2D forms of Si can have a band gap and could be incorporated in Si-based microelectronics. In particular, an electric field applied to the buckled honeycomb structure of silicene, which is normally semi-metallic, can open a band gap whose magnitude increases with the field. It was predicted<cit.> and recently demonstrated<cit.> that single-layer silicene can work as a field-effect transistor.<cit.> Experimentally, it has not been possible so far to isolate free-standing silicenes. They are presently considered hypothetical 2D materials and have only been studied by DFT calculations. Such calculations predict that single-layer silicene can possess remarkable electronic, optical and magnetic properties,<cit.> in addition to ultra-low thermal conductivity.<cit.>

The planar (graphene-like) silicene [Fig. <ref>(a)] is mechanically unstable and spontaneously transforms to the more stable buckled structure [Fig. <ref>(b,c)].<cit.> The latter has a split width Δ of about 0.45-0.49 Å and a first-neighbor distance r_1 slightly different from that in the planar structure.<cit.> Furthermore, adsorption of Si ad-atoms on the buckled silicene creates a series of periodic dumbbell structures that are even more stable.<cit.> An ad-atom pushes a nearby Si atom out of its regular position, and the two atoms form a dumbbell aligned perpendicular to the silicene plane. The dumbbell atoms have a fourfold coordination (counting the dumbbell bond itself) consistent with sp^3 bonding. One of the best-studied dumbbell silicenes has the √3×√3 structure shown in Fig. <ref>(d,e,f) (the dumbbell atoms are shown in blue and green). The dumbbells distort the hexagonal structural units and create three slightly different nearest-neighbor distances: r_I,II, r_II,III and r_III,III [Fig. <ref>(f)].

The energies and geometric characteristics of the three silicene structures predicted by the four potentials are listed in Table <ref>. The results of DFT calculations reported in the literature are included for comparison. The agreement with the DFT data is reasonable, especially considering that the 2D structures were not included in the fitting datasets of the potentials. The present potential, MOD and MEAM demonstrate about the same agreement with the DFT calculations; the SW potential tends to be less accurate. For the planar structure, the MOD potential is the most accurate, followed by the present potential, MEAM and then SW. All four potentials correctly predict that the planar structure is mechanically unstable and transforms to the buckled structure.
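For reference, the buckled geometry can be generated as follows. This is a minimal sketch; the values of a and Δ are illustrative, chosen close to the DFT range quoted above.

```python
import numpy as np

def buckled_silicene_cell(a=3.87, delta=0.45):
    """Two-atom unit cell of buckled silicene (hypothetical example
    values of the lattice constant a and split width delta, in Angstrom).
    The second basis atom sits at (a1 + a2)/3, shifted by delta along z."""
    a1 = np.array([a, 0.0, 0.0])
    a2 = np.array([0.5 * a, 0.5 * np.sqrt(3.0) * a, 0.0])
    basis = np.array([[0.0, 0.0, 0.0],
                      list((a1 + a2) / 3.0 + np.array([0.0, 0.0, delta]))])
    return a1, a2, basis

a1, a2, basis = buckled_silicene_cell()
r1 = np.linalg.norm(basis[1] - basis[0])   # first-neighbor distance
print(round(r1, 3))   # ~2.28 Angstrom for the values above
```

Setting delta=0 recovers the planar structure with the in-plane bond length a/√3, which shows directly how the buckling lengthens r_1.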
The present potential, MEAM and SW correctly predict that the √3×√3 dumbbell structure has a lower energy than the buckled structure. By contrast, the MOD potential predicts that the √3×√3 dumbbell structure has a higher energy, contrary to the DFT calculations. All four potentials overestimate the split width Δ in the buckled structure and the distance r_III,III between the dumbbell atoms in the √3×√3 structure, the present potential being closest to the DFT data.

The thermal stability of single-layer silicenes was evaluated by MD simulations. The simulated systems were subject to periodic boundary conditions at zero pressure. Fig. <ref> demonstrates that a nano-ribbon of buckled silicene is unstable at finite temperatures and quickly collapses to a cluster before the temperature reaches 300 K. Likewise, a free-standing sheet (flake) of buckled silicene (Fig. <ref>) collapses into a bowl-shaped cluster when the temperature reaches 300 K. The nano-ribbon and nano-flake made of the √3×√3 dimerized silicene collapse as well. A single-wall nano-tube was also tested for thermal stability. The latter was obtained by wrapping a layer of planar silicene into a tube 49 Å in diameter (Fig. <ref>); the period along the tube axis was 122 Å. As soon as the temperature began to increase from 0 K, the wall of the tube transformed to the buckled structure and then collapsed before the temperature reached 300 K. Qualitatively the same behavior of the single-layer silicene structures was found with all four potentials. In all cases, the single-layer silicene easily developed waves due to thermal fluctuations until neighboring surface regions came close enough to each other to form covalent bonds. Once this happened, the bond-forming process quickly spread over the entire surface and the structure collapsed. This chemical reactivity and the lack of bending rigidity are the main factors that cause the instability of free-standing single-layer silicenes at room temperature.

§.§ Bilayer silicenes

Another interesting 2D form of silicon is the bilayer silicene.<cit.> Like the single-layer silicene discussed above, the bilayer silicene has been found experimentally on top of metallic surfaces such as Ag(111).<cit.> By contrast to bilayer graphene, the interlayer bonds in bilayer silicene are of the covalent sp^3 type. As a result, the formation of a bilayer is accompanied by a significant energy release. It can be expected, therefore, that bilayer silicene should be more stable than two single layers. Several structural forms of the bilayer silicene have been found in experiments and studied by DFT calculations, depending on the type of stacking of the two layers and on whether the layers are planar or buckled.<cit.> Three of the structures, referred to as AA_p, AA' and AB, are shown in Fig. <ref>. The AA_p structure is obtained by stacking two planar silicene layers (A) on top of each other and connecting them by vertical covalent bonds [Fig. <ref>(a)]. This structure is characterized by the geometric parameter b (the side of the rhombic structural unit) and the interlayer spacing h. The bond length between Si atoms is d_1=b/√3 within each layer and h between the layers. In the AA' structure, both layers are buckled, and the buckling of one layer (A') is inverted with respect to the buckling of the other layer (A) [Fig. <ref>(b)].
As a result, half of the interlayer distances are short, leading to the formation of covalent bonds, while the other half are longer and covalent bonds do not form. The geometric parameters of this structure are b (defined above), the in-layer bond length d_1, the interlayer bond length d_2, and the split width Δ of each layer; the distance between the layers is h=d_2+Δ. Finally, in the AB structure, two buckled silicene layers A and B are stacked together so that half of the atoms of one layer project into the centers of the hexagonal units of the other layer [Fig. <ref>(c)]. The remaining half of the atoms project onto each other and form vertical covalent bonds. As with the single-layer silicenes, it has not been possible so far to isolate free-standing bilayer silicene experimentally.

The cohesive energies E_c and geometric parameters of the three bilayer silicenes computed with the four interatomic potentials are compared with DFT data in Table <ref>. The table also shows the energies ΔE of the buckled bilayers AA' and AB relative to the planar bilayer AA_p. None of the potentials matches the DFT calculations accurately, but the present potential displays the closest agreement. The MOD potential incorrectly predicts that the buckled structures AA' and AB are more stable than AA_p (negative ΔE values), contrary to the DFT calculations. It should be noted that all four potentials predict virtually identical properties of the AA' and AB silicenes. This is not surprising: considering only nearest-neighbor bonds, the local atomic environments in the two structures are identical. Their DFT lattice parameters b are indeed the same (3.84 Å),<cit.> but the DFT energies are different (0.33 and 0.17 eV/atom, respectively;<cit.> our potential gives ΔE=0.12 eV/atom for both). This discrepancy apparently reflects a common feature of all short-range Si potentials.

To assess the thermal stability of bilayer silicenes, MD simulations were conducted for the same nano-ribbon, nano-flake and nano-tube configurations as discussed above. The most stable AA_p silicene was chosen for the tests. The samples were heated up to 300 K and annealed at this temperature for 10 ns. The systems developed significant capillary waves, especially the nano-ribbon, but none of them collapsed (Fig. <ref>). Although 10 ns is a short time in comparison with experimental times, these tests confirm that the bilayer silicene has a much greater bending rigidity and smaller reactivity than its single-layer counterpart. As such, it has a much better chance of surviving in a free-standing form at room temperature. In additional tests, the nano-flake was heated from 300 K to 1000 K over 6 ns, followed by an isothermal anneal for 2 ns at 1000 K. The surface of the flake developed a set of thermally activated point defects, such as ad-atoms and locally buckled configurations, but the flake itself did not collapse. This again confirms the significant thermal stability of the bilayer silicene, possibly even at high temperatures. The same tests were conducted with all four potentials, and the results were qualitatively similar. With the MOD potential, the initial AA_p silicene quickly transformed to the more stable buckled structure, but the system still did not collapse.

§ DISCUSSION AND CONCLUSIONS

Silicon is one of the most challenging elements for semi-empirical interatomic potentials.
It has over a dozen polymorphs that are stable at different temperatures and pressures and exhibit different coordination numbers and types of bonding, ranging from strongly covalent to metallic. The diamond cubic phase displays rather complex behavior, with several possible structures of point defects, a number of surface reconstructions, and an increase in density upon melting. It is not surprising that the existing Si potentials are not nearly as successful in describing this material as some of the embedded-atom potentials for metals.<cit.>

In this work, we developed a new Si potential with the goal of improving some of the properties that were not captured accurately by other potentials. For comparison, we selected three potentials from the literature that we consider most reliable<cit.> or most popular.<cit.> Extensive tests have shown that the present potential does achieve the desired improvements, in particular with regard to the vacancy formation energies, surface formation energies and reconstructions, thermal expansion, and a few other properties. The potential is also more accurate than the other potentials in reproducing the DFT data for the novel Si polymorphs h-Si_6 and Si_24, even though they were not included in the fitting database. But the tests have also shown that each of the four potentials has its successes and failures. The present potential makes inaccurate predictions for the energies of high-lying Si polymorphs (although their atomic volumes are quite accurate), for the latent heat of melting, and for the short-range order in the liquid phase. The MOD potential<cit.> has its own drawbacks, mentioned in Section <ref>. The MEAM potential<cit.> grossly overestimates the phonon frequencies and thermal expansion, in addition to producing the incorrect {100} surface reconstruction. The SW potential successfully reproduces the surface energies and thermal expansion but predicts a positive Cauchy pressure and systematically overestimates the atomic volumes of Si polymorphs (as does the MEAM potential).

The potentials were put through a very stringent test by computing the binding energies of small Si_n clusters. Such clusters were not included in the potential fitting procedure and are traditionally considered to be out of reach of potentials unless specifically included in the fitting database. Surprisingly, the present potential, the MOD potential,<cit.> and especially the MEAM potential<cit.> reproduce the general trends of the cluster energies reasonably well (Fig. <ref>). In many cases, the ranking of the energies of different geometries for the same cluster size n agrees with first-principles calculations. The SW potential is less accurate: it systematically under-binds the clusters and makes more mistakes in the energy ordering.

Encouraged by the reasonable performance for the clusters, we applied the potentials to model single-layer and bilayer silicenes, which were not included in the potential fitting either. While none of the potentials reproduces all DFT calculations accurately, they generally perform reasonably well. One notable exception is the MOD potential, which under-binds the √3×√3 dumbbell structure of the single-layer silicene and fails to reproduce the correct ground state of the bilayer silicene. Furthermore, all four potentials predict identical energies of the AA' and AB bilayer silicenes, whereas the DFT energies are different. Other than this, the trends are captured quite well.
The present potential demonstrates the best performance for the bilayer silicenes.

Experimentally, silicenes have only been found on metallic substrates. Whether they can exist in a free-standing form at room temperature remains an open question. Evaluation of their thermal stability requires MD simulations of relatively large systems for relatively long times that are not currently accessible by DFT methods. Although interatomic potentials are less reliable, they can be suitable for a preliminary assessment. The MD simulations performed in this work indicate that single-layer silicenes are unlikely to exist in a free-standing form. Their large bending compliance and chemical reactivity lead to the development of large shape fluctuations and eventually to the formation of covalent bonds between neighboring surface regions at or below room temperature. By contrast, bilayer silicenes exhibit much greater bending rigidity and lower surface reactivity. Nano-structures such as nano-ribbons, nano-flakes and nano-tubes remain intact at and above room temperature, at least on a 10 ns timescale. The fact that this behavior was observed with all four potentials points to the generality of these observations and suggests that free-standing bilayer silicenes might be stable at room temperature. Of course, this tentative conclusion requires validation by more detailed and more accurate studies in the future.

The four potentials discussed in this work are likely to represent the limit of what can be achieved with short-range semi-empirical potentials. Further improvements can only be made by developing more sophisticated, longer-range, and thus significantly slower potentials. Analytical bond-order potentials offer one option.<cit.> Recent years have also seen a rising interest in machine-learning potentials.<cit.> While even slower, they allow one to achieve an impressive accuracy of interpolation between DFT energies, in some cases to within a few meV/atom. However, the lack of transferability to configurations outside the training dataset remains an issue.

§ ACKNOWLEDGEMENTS

This work was supported by the U.S. Department of Energy, Office of Basic Energy Sciences, Division of Materials Sciences and Engineering, the Physical Behavior of Materials Program, through Grant No. DE-FG02-01ER45871.

§ REFERENCES

[1] F. H. Stillinger, T. A. Weber, Computer simulation of local order in condensed phases of silicon, Phys. Rev. B 31 (1985) 5262-5271.
[2] J. Tersoff, New empirical approach for the structure and energy of covalent systems, Phys. Rev. B 37 (1988) 6991-7000.
[3] J. Tersoff, Empirical interatomic potential for silicon with improved elastic properties, Phys. Rev. B 38 (1988) 9902-9905.
[4] J. Tersoff, Modeling solid-state chemistry: Interatomic potentials for multicomponent systems, Phys. Rev. B 39 (1989) 5566-5568.
[5] B. W. Dodson, Development of a many-body Tersoff-type potential for silicon, Phys. Rev. B 35 (1987) 2795-2798.
[6] M. V. Ramana Murty, H. A. Atwater, Empirical interatomic potential for Si-H interactions, Phys. Rev. B 51 (1995) 4889-4893.
[7] T. Kumagai, S. Izumi, S. Hara, S. Sakai, Development of bond-order potentials that can reproduce the elastic constants and melting point of silicon for classical molecular dynamics simulation, Comp. Mater. Sci. 39 (2007) 457-464.
[8] J. Yu, S. B. Sinnott, S. R. Phillpot, Charge optimized many-body potential for the Si/SiO_2 system, Phys. Rev. B 75 (2007) 085311.
[9] U. Monteverde, M. A. Migliorato, D. Powell, Empirical interatomic potential for the mechanical, vibrational and thermodynamic properties of semiconductors, Journal of Physics: Conference Series 367 (2012) 012015.
[10] J. A. Martinez, D. E. Yilmaz, T. Liang, S. B. Sinnott, S. R. Phillpot, Fitting empirical potentials: Challenges and methodologies, Current Opinion in Solid State and Materials Science 17 (2013) 263-270.
[11] J. F. Justo, M. Z. Bazant, E. Kaxiras, V. V. Bulatov, S. Yip, Interatomic potential for silicon defects and disordered phases, Phys. Rev. B 58 (1998) 2539-2550.
[12] M. I. Baskes, J. S. Nelson, A. F. Wright, Semiempirical modified embedded-atom potentials for silicon and germanium, Phys. Rev. B 40 (1989) 6085-6110.
[13] T. J. Lenosky, B. Sadigh, E. Alonso, V. V. Bulatov, T. Diaz de la Rubia, J. Kim, A. F. Voter, J. D. Kress, Highly optimized empirical potential model of silicon, Model. Simul. Mater. Sci. Eng. 8 (2000) 825.
[14] S. Ryu, C. R. Weinberger, M. I. Baskes, W. Cai, Improved modified embedded-atom method potentials for gold and silicon, Model. Simul. Mater. Sci. Eng. 17 (2009) 075008.
[15] M. Timonova, B. J. Thijsse, Thermodynamic properties and phase transitions of silicon using a new MEAM potential, Comp. Mater. Sci. 48 (2010) 609-620.
[16] M. Timonova, B. J. Thijsse, Optimizing the MEAM potential for silicon, Model. Simul. Mater. Sci. Eng. 19 (2011) 015003.
[17] B. Jelinek, S. Groh, M. F. Horstemeyer, J. Houze, S. G. Kim, G. J. Wagner, A. Moitra, M. I. Baskes, Modified embedded atom method potential for Al, Si, Mg, Cu, and Fe alloys, Phys. Rev. B 85 (2012) 245102.
[18] B. Liu, H. Zhang, J. Tao, X. Chen, Y. Zhang, Comparative investigation of a newly optimized modified embedded atom method potential with other potentials for silicon, Comp. Mater. Sci. 109 (2015) 277-286.
[19] B. A. Gillespie, X. W. Zhou, D. A. Murdick, H. N. G. Wadley, R. Drautz, D. G. Pettifor, Bond-order potential for silicon, Phys. Rev. B 75 (2007) 155207.
[20] S. Y. Oloriegbe, Hybrid bond-order potential for silicon, Ph.D. thesis, Clemson University, Clemson, SC, 2008.
[21] A. Jain, S. Ong, G. Hautier, W. Chen, W. Richards, S. Dacek, S. Cholia, D. Gunter, D. Skinner, G. Ceder, K. Persson, The Materials Project: A materials genome approach to accelerating materials innovation, APL Materials 1 (2013) 011002.
[22] J. E. Saal, S. Kirklin, M. Aykol, B. Meredig, C. Wolverton, Materials design and discovery with high-throughput density functional theory: The Open Quantum Materials Database (OQMD), JOM 65 (2013) 1501.
[23] S. Curtarolo, W. Setyawan, G. L. W. Hart, M. Jahnatek, R. V. Chepulskii, R. H. Taylor, S. Wang, J. Xue, K. Yang, O. Levy, M. J. Mehl, H. T. Stokes, D. O. Demchenko, D. Morgan, AFLOW: An automatic framework for high-throughput materials discovery, Comp. Mater. Sci. 58 (2012) 218-226.
[24] S. Curtarolo, W. Setyawan, S. Wang, J. Xue, K. Yang, R. H. Taylor, L. J. Nelson, G. L. W. Hart, S. Sanvito, M. Buongiorno-Nardelli, N. Mingo, O. Levy, AFLOWLIB.ORG: A distributed materials properties repository from high-throughput ab initio calculations, Comp. Mater. Sci. 58 (2012) 227-235.
[25] S. Plimpton, Fast parallel algorithms for short-range molecular dynamics, J. Comput. Phys. 117 (1995) 1-19.
[26] W. F. Gale, T. C. Totemeir (Eds.), Smithells Metals Reference Book, eighth ed., Elsevier Butterworth-Heinemann, 2004.
[27] H. J. McSkimin, W. L. Bond, E. Buehler, G. K. Teal, Measurement of the elastic constants of silicon single crystals and their thermal coefficients, Phys. Rev. 83 (1951) 1080.
[28] L. T. Kong, Phonon dispersion measured directly from molecular dynamics simulations, Comp. Phys. Comm. 182 (2011) 2201-2207.
[29] G. Dolling, Lattice vibrations in crystals with the diamond structure, in: Inelastic Scattering of Neutrons in Solids and Liquids, vol. 2, 1963, p. 37.
[30] G. Nilsson, G. Nelin, Study of the homology between silicon and germanium by thermal-neutron spectrometry, Phys. Rev. B 6 (1972) 3777-3786.
[31] A. D. Zdetsis, C. S. Wang, Lattice dynamics of Ge and Si using the Born-von Karman model, Phys. Rev. B 19 (1979) 2999-3003.
[32] J. Kulda, D. Strauch, P. Pavone, Y. Ishii, Inelastic-neutron-scattering study of phonon eigenvectors and frequencies in Si, Phys. Rev. B 50 (1994) 13347-13354.
[33] M. J. Puska, S. Pöykkö, M. Pesola, R. M. Nieminen, Convergence of supercell calculations for point defects in semiconductors: Vacancy in silicon, Phys. Rev. B 58 (1998) 1318-1325.
[34] S. Goedecker, T. Deutsch, L. Billard, A fourfold coordinated point defect in silicon, Phys. Rev. Lett. 88 (2002) 235501.
[35] S. A. Centoni, B. Sadigh, G. H. Gilmer, T. J. Lenosky, T. Díaz de la Rubia, C. B. Musgrave, First-principles calculation of intrinsic defect formation volumes in silicon, Phys. Rev. B 72 (2005) 195206.
[36] A. F. Wright, Density-functional-theory calculations for the silicon vacancy, Phys. Rev. B 74 (2006) 165116.
[37] J. Dabrowski, G. Kissinger, Supercell-size convergence of formation energies and gap levels of vacancy complexes in crystalline silicon in density functional theory calculations, Phys. Rev. B 92 (2015) 144104.
[38] S. Dannefaer, P. Mascher, D. Kerr, Monovacancy formation enthalpy in silicon, Phys. Rev. Lett. 56 (1986) 2195-2198.
[39] Sholihun, M. Saito, T. Ohno, T. Yamasaki, Density-functional-theory-based calculations of formation energy and concentration of the silicon monovacancy, Jap. J. Appl. Phys. 54 (2015) 041301.
[40] S. Y. Tong, H. Huang, C. M. Wei, W. E. Packard, F. K. Men, G. Glander, M. B. Webb, Low-energy electron diffraction analysis of the Si(111)7×7 structure, Journal of Vacuum Science & Technology A 6 (1988) 615-624.
[41] R. M. Tromp, R. J. Hamers, J. E. Demuth, Si(001) dimer structure observed with scanning tunneling microscopy, Phys. Rev. Lett. 55 (1985) 1303-1306.
[42] G.-X. Qian, D. J. Chadi, Si(111)-7×7 surface: Energy-minimization calculation for the dimer-adatom stacking-fault model, Phys. Rev. B 35 (1987) 1288-1293.
[43] A. A. Stekolnikov, J. Furthmüller, F. Bechstedt, Absolute surface energies of group-IV semiconductors: Dependence on orientation and reconstruction, Phys. Rev. B 65 (2002) 115318.
[44] J. Q. Broughton, X. P. Li, Phase diagram of silicon by molecular dynamics, Phys. Rev. B 35 (1987) 9120-9127.
[45] See online supplementary materials to this article at [URL to be inserted by publisher].
[46] M. Müller, H. Beck, H. J. Güntherodt, Magnetic properties of liquid Pd, Si, and Pd-Si alloys, Phys. Rev. Lett. 41 (1978) 983-987.
[47] Y. Waseda, K. Shinoda, K. Sugiyama, S. Takeda, K. Terashima, J. M. Toguri, High temperature X-ray diffraction study of melt structure of silicon, Japanese Journal of Applied Physics 34 (1995) 4124.
[48] I. Štich, R. Car, M. Parrinello, Bonding and disorder in liquid silicon, Phys. Rev. Lett. 63 (1989) 2240-2243.
[49] W. Jank, J. Hafner, Structural and electronic properties of the liquid polyvalent elements: The group-IV elements Si, Ge, Sn, and Pb, Phys. Rev. B 41 (1990) 1497-1515.
[50] Y. Guo, Q. Wang, Y. Kawazoe, P. Jena, A new silicon phase with direct band gap and novel optoelectric properties, Scientific Reports 5 (2015).
[51] D. Y. Kim, S. Stefanoski, O. Karakevych, T. A. Strobel, Synthesis of an open-framework allotrope of silicon, Nature Mater. 14 (2015) 169-173.
[52] B. C. Bolding, H. C. Andersen, Interatomic potential for silicon clusters, crystals, and surfaces, Phys. Rev. B 41 (1990) 10568.
[53] R. Fournier, S. B. Sinnott, A. E. DePristo, Density functional study of the bonding in small silicon clusters, J. Chem. Phys. 97 (1992) 4149-4161.
[54] K. Raghavachari, Theoretical study of small silicon clusters: Equilibrium geometries and electronic structures of Si_n (n=2-7,10), J. Chem. Phys. 84 (1986) 5672-5686.
[55] K. Raghavachari, V. Logovinsky, Structure and bonding in small silicon clusters, Phys. Rev. Lett. 55 (1985) 2853-2856.
[56] K. Raghavachari, C. M. Rohlfing, Bonding and stabilities of small silicon clusters: A theoretical study of Si_7-Si_10, J. Chem. Phys. 89 (1988) 2219-2234.
[57] A. Kara, H. Enriquez, A. P. Seitsonen, L. C. Lew Yan Voon, S. Vizzini, B. Aufray, H. Oughaddou, A review on silicene - new candidate for electronics, Surface Science Reports 67 (2012) 1-18.
[58] N. J. Roome, J. D. Carey, Beyond graphene: Stable elemental monolayers of silicene and germanene, Appl. Mater. Interfaces 6 (2014) 7743-7750.
[59] C. Grazianetti, E. Cinquanta, A. Molle, Two-dimensional silicon: the advent of silicene, 2D Mater. 3 (2016) 012001.
[60] T. P. Kaloni, G. Schreckenbach, M. S. Freund, U. Schwingenschlögl, Current developments in silicene and germanene, Physica Status Solidi (RRL) - Rapid Research Letters 10 (2016) 133-142.
[61] L. C. Lew Yan Voon, J. Zhu, U. Schwingenschlögl, Silicene: Recent theoretical advances, Applied Physics Reviews 3 (2016) 040802.
[62] B. Lalmi, H. Oughaddou, H. Enriquez, A. Kara, S. Vizzini, B. Ealet, B. Aufray, Epitaxial growth of a silicene sheet, Applied Physics Letters 97 (2010) 223109.
[63] P. Vogt, P. De Padova, C. Quaresima, J. Avila, E. Frantzeskakis, M. C. Asensio, A. Resta, B. Ealet, G. Le Lay, Silicene: Compelling experimental evidence for graphenelike two-dimensional silicon, Phys. Rev. Lett. 108 (2012) 155501.
[64] C. L. Lin, R. Arafune, K. Kawahara, N. Tsukahara, E. Minamitani, Y. Kim, N. Takagi, M. Kawai, Structure of silicene grown on Ag(111), Applied Physics Express 5 (2012) 045802.
[65] B. Feng, Z. Ding, S. Meng, Y. Yao, X. He, P. Cheng, L. Chen, K. Wu, Evidence of silicene in honeycomb structures of silicon on Ag(111), Nano Letters 12 (2012) 3507-3511.
[66] J. Gao, J. Zhao, Initial geometries, interaction mechanism and high stability of silicene on Ag(111) surface, Scientific Reports 2 (2012) 861.
[67] A. Acun, B. Poelsema, H. J. W. Zandvliet, R. van Gastel, The instability of silicene on Ag(111), Applied Physics Letters 103 (2013) 263119.
[68] R. Arafune, C. C. Lin, K. Kawahara, N. Tsukahara, E. Minamitani, Y. Kim, N. Takagi, M. Kawai, Structural transition of silicene on Ag(111), Surf. Sci. 608 (2013) 297-300.
[69] J. Sone, T. Yamagami, Y. Aoki, K. Nakatsuji, H. Hirayama, Epitaxial growth of silicene on ultra-thin Ag(111) films, New Journal of Physics 16 (2014) 095004.
[70] Z. Ni, Q. Liu, K. Tang, J. Zheng, J. Zhou, R. Qin, Z. Gao, D. Yu, J. Lu, Tunable bandgap in silicene and germanene, Nano Letters 12 (2012).
[71] L. Tao, E. Cinquanta, D. Chiappe, C. Grazianetti, M. Fanciulli, M. Dubey, A. Molle, D. Akinwande, Silicene field-effect transistors operating at room temperature, Nature Nanotechnology 10 (2015) 227-231.
[72] M. Ezawa, Valley-polarized metals and quantum anomalous Hall effect in silicene, Phys. Rev. Lett. 109 (2012) 055502.
[73] F. Liu, C. C. Liu, K. Wu, F. Yang, Y. Yao, d+id' chiral superconductivity in bilayer silicene, Phys. Rev. Lett. 111 (2013) 066804.
[74] C. C. Liu, W. Feng, Y. Yao, Quantum spin Hall effect in silicene and two-dimensional germanium, Phys. Rev. Lett. 107 (2011) 076802.
[75] C. Xu, G. Luo, Q. Liu, J. Zheng, Z. Zhang, S. Nagase, Z. Gao, J. Lu, Giant magnetoresistance in silicene nanoribbons, Nanoscale 4 (2012) 3111-3117.
[76] B. Peng, H. Zhang, H. Shao, Y. Xu, R. Zhang, H. Lu, D. W. Zhang, H. Zhu, First-principles prediction of ultralow lattice thermal conductivity of dumbbell silicene: A comparison with low-buckled silicene, Appl. Mater. Interfaces 8 (2016) 20977-20985.
[77] S. Cahangirov, M. Topsakal, E. Aktürk, H. Şahin, S. Ciraci, Two- and one-dimensional honeycomb structures of silicon and germanium, Phys. Rev. Lett. 102 (2009) 236804.
[78] S. Cahangirov, V. O. Özçelik, A. Rubio, S. Ciraci, Silicite: The layered allotrope of silicon, Phys. Rev. B 90 (2014) 085426.
[79] F. Matusalem, M. Marques, L. K. Teles, F. Bechstedt, Stability and electronic structure of two-dimensional allotropes of group-IV materials, Phys. Rev. B 92 (2015) 045436.
[80] H. Sahin, F. M. Peeters, Adsorption of alkali, alkaline-earth, and 3d transition metal atoms on silicene, Phys. Rev. B 87 (2013) 085423.
[81] D. Kaltsas, L. Tsetseris, Stability and electronic properties of ultrathin films of silicon and germanium, Phys. Chem. Chem. Phys. 15 (2013) 9710-9715.
[82] X. J. Ge, K. L. Yao, J.-T. Lü, Comparative study of phonon spectrum and thermal expansion of graphene, silicene, germanene, and blue phosphorene, Phys. Rev. B 94 (2016) 165433.
[83] A. Resta, T. Leoni, C. Barth, A. Ranguis, C. Becker, T. Bruhn, P. Vogt, G. Le Lay, Atomic structures of silicene layers grown on Ag(111): Scanning tunneling microscopy and noncontact atomic force microscopy observations, Scientific Reports 3 (2013) 2399.
[84] H. Fu, J. Zhang, Z. Ding, H. Li, S. Meng, Stacking-dependent electronic structure of bilayer silicene, Appl. Phys. Lett. 104 (2014) 131904.
[85] P. Pflugradt, L. Matthes, F. Bechstedt, Unexpected symmetry and AA stacking of bilayer silicene on Ag(111), Phys. Rev. B 89 (2014) 205428.
[86] J. Padilha, R. B. Pontes, Free-standing bilayer silicene: The effect of stacking order on the structural, electronic, and transport properties, J. Phys. Chem. C 119 (2015) 3818-3825.
[87] R. Yaokawa, T. Ohsuna, T. Morishita, Y. Hayasaka, M. J. S. Spencer, H. Nakano, Monolayer-to-bilayer transformation of silicenes and their structural analysis, Nature Communications 7 (2016) 10657.
[88] M. S. Daw, M. I. Baskes, Semiempirical, quantum mechanical calculation of hydrogen embrittlement in metals, Phys. Rev. Lett. 50 (1983) 1285-1288.
[89] M. S. Daw, M. I. Baskes, Embedded-atom method: Derivation and application to impurities, surfaces, and other defects in metals, Phys. Rev. B 29 (1984) 6443-6453.
[90] Y. Mishin, Interatomic potentials for metals, in: S. Yip (Ed.), Handbook of Materials Modeling, Springer, Dordrecht, The Netherlands, 2005, pp. 459-478.
[91] R. Drautz, X. W. Zhou, D. A. Murdick, B. Gillespie, H. N. G. Wadley, D. G. Pettifor, Analytic bond-order potentials for modelling the growth of semiconductor thin films, Prog. Mater. Sci. 52 (2007) 196-229.
[92] T. Mueller, A. G. Kusne, R. Ramprasad, Machine learning in materials science: Recent progress and emerging applications, in: A. L. Parrill, K. B. Lipkowitz (Eds.), Reviews in Computational Chemistry, vol. 29, Wiley, 2016, pp. 186-273.
[93] J. Behler, M. Parrinello, Perspective: Machine learning potentials for atomistic simulations, Phys. Chem. Chem. Phys.
volume145 (year2016) pages170901.[Bartok et al.(2010)Bartok, Payne, Kondor, and Csanyi]Bartok:2010aa authorA. Bartok, authorM. C. Payne, authorR. Kondor, authorG. Csanyi, titleGaussian approximation potentials: The accuracy of quantum mechanics, without the electrons, journalPhys. Rev. Lett. volume104 (year2010) pages136403.[Behler and Parrinello(2007)]Behler07 authorJ. Behler, authorM. Parrinello, titleGeneralized neural-network representation of high-dimensional potential-energy surfaces, journalPhys. Rev. Lett. volume98 (year2007) pages146401.[Behler et al.(2008)Behler, Martonak, Donadio, and Parrinello]Behler:2008aa authorJ. Behler, authorR. Martonak, authorD. Donadio, authorM. Parrinello, titleMetadynamics simulations of the high-pressure phases of silicon employing a high-dimensional neural network potential, journalPhys. Rev. Lett. volume100 (year2008) pages185501.[Botu and Ramprasad(2015)]Botu:2015bb authorV. Botu, authorR. Ramprasad, titleAdaptive machine learning framework to accelerate ab initio molecular dynamics, journalInt. J. Quant. Chem. volume115 (year2015) pages1074–1083.[Kittel(1986)]Kittel authorC. Kittel, titleIntroduction to Sold State Physics, publisherWiley-Interscience, addressNew York, year1986.[Timonova et al.(2007)Timonova, Lee, and Thijsse]Timonova:2007fr authorM. Timonova, authorB. Lee, authorB. J. Thijsse, titleSputter erosion of Si(001) using a new silicon MEAM potential and different thermostats, journalNuclear Instruments and Methods in Physics Research Section B: Beam Interactions with Materials and Atoms volume255 (year2007) pages195–201.[Lu et al.(2005)Lu, Huang, Cuma, and Liu]Lu:2005jw authorG. Lu, authorM. Huang, authorM. Cuma, authorF. Liu, titleRelative stability of Si surfaces: A first-principles study, journalSurf. Sci. volume588 (year2005) pages61–70.[Eaglesham et al.(1993)Eaglesham, White, Feldman, Moriya, and Jacobson]Eaglesham:1993qa authorD. J. Eaglesham, authorA. E. White, authorL. C. Feldman, authorN. Moriya, authorD. C. Jacobson, titleEquilibrium shape of Si, journalPhys. Rev. Lett. volume70 (year1993) pages1643–1646.[Gilman(1960)]Gilman:1960kl authorJ. J. Gilman, titleDirect measurements of the surface energies of crystals, journalJ. Appl. Phys. volume31 (year1960) pages2208–2218.[Yin and Cohen(1982)]Yin:1982ef authorM. T. Yin, authorM. L. Cohen, titleTheory of static structural properties, crystal stability, and phase transformations: Application to Si and Ge, journalPhys. Rev. B volume26 (year1982) pages5668–5687.[Ganchenkova et al.(2015)Ganchenkova, Supryadkina, Abgaryan, Bazharov, Mutigullin, and Borodin]Ganchenkova:2015aa authorM. G. Ganchenkova, authorI. A. Supryadkina, authorK. K. Abgaryan, authorD. I. Bazharov, authorI. V. Mutigullin, authorV. A. Borodin, titleInfluence of the ab-initio calculation parameters on prediction of energy of point defects in silicon, journalModern Electronic Materials volume1 (year2015) pages103–108.[Sorella et al.(2011)Sorella, Casula, Spanu, and Dal Corso]Sorella:2011xd authorS. Sorella, authorM. Casula, authorL. Spanu, authorA. Dal Corso, titleAb initio calculations for the β-tin diamond transition in silicon: Comparing theories with experiments, journalPhysica B volume83 (year2011) pages075119.[Balamane et al.(1992)Balamane, Halicioglu, and Tiller]Balamane:1992fp authorH. Balamane, authorT. Halicioglu, authorW. A. Tiller, titleComparative study of silicon empirical interatomic potentials, journalPhys. Rev. B volume46 (year1992) pages2250–2279.[Needs and Mujica(1995)]Needs:1995uk authorR. J. 
Needs, authorA. Mujica, titleFirst-principles pseudopotential study of the structural phases of silicon, journalPhys. Rev. B volume51 (year1995) pages9652–9660.[Crain et al.(1994)Crain, Clark, Ackland, Payne, Milman, Hatton, and Reid]Crain:1994lo authorJ. Crain, authorS. J. Clark, authorG. J. Ackland, authorM. C. Payne, authorV. Milman, authorP. D. Hatton, authorB. J. Reid, titleTheoretical study of high-density phases of covalent semiconductors. I. Ab initio treatment, journalPhys. Rev. B volume49 (year1994) pages5329–5340.[Mihalkovic and Widom(2011)]database-cmu.edu authorM. Mihalkovic, authorM. Widom, titleAlloy database, http://alloy.phys.cmu.edu/, year2011.[Methfessel and Paxton(1989)]Methfessel1989 authorM. Methfessel, authorA. T. Paxton, titleHigh-precision sampling for brillouin-zone integration in metals, journalPhys. Rev. B volume40 (year1989) pages3616.[Kaltak et al.(2014)Kaltak, Klimeš, and Kresse]Kaltak:2014ee authorM. Kaltak, authorJ. Klimeš, authorG. Kresse, titleCubic scaling algorithm for the random phase approximation: Self-interstitials and vacancies in Si, journalPhys. Rev. B volume90 (year2014) pages054115.[Yin(1984)]Yin:1984gm authorM. T. Yin, titleSi-III (BC-8) crystal phase of Si and C: Structural properties, phase stabilities, and phase transitions, journalPhys. Rev. B volume30 (year1984) pages1773–1776.[Gaál-Nagy et al.(2004)Gaál-Nagy, Pavone, and Strauch]Gaal-Nagy:2004nx authorK. Gaál-Nagy, authorP. Pavone, authorD. Strauch, titleAb initio study of the β→ tin → Imma → sh phase transitions in silicon and germanium, journalPhys. Rev. B volume69 (year2004) pages134112.[Biswas et al.(1984)Biswas, Martin, Needs, and Nielsen]Biswas:1984mg authorR. Biswas, authorR. M. Martin, authorR. J. Needs, authorO. H. Nielsen, titleComplex tetrahedral structures of silicon and carbon under pressure, journalPhys. Rev. B volume30 (year1984) pages3210–3213.[Kaxiras and Duesbery(1993)]Kaxiras:1993fk authorE. Kaxiras, authorM. S. Duesbery, titleFree energies of generalized stacking faults in Si and implications for the brittle-ductile transition, journalPhys. Rev. Lett. volume70 (year1993) pages3752–3755.[Juan and Kaxiras(1996)]Juan:1996uo authorY. M. Juan, authorE. Kaxiras, titleGeneralized stacking fault energy surfaces and dislocation properties of silicon: A first-principles theoretical study, journalPhilos. Mag. A volume74 (year1996) pages1367–1384.[Touloukian et al.(1975)Touloukian, Kirby, Taylor, and Desai]Expansion editorY. S. Touloukian, editorR. K. Kirby, editorR. E. Taylor, editorP. D. Desai (Eds.), titleThermal Expansion: Metallic Elements and Alloys, volume volume12, publisherPlenum, addressNew York, year1975.[Okada and Tokumaru(1984)]Okada:1984aa authorY. Okada, authorY. Tokumaru, titlePrecise determination of lattice parameter and thermal expanstion coefficient of silicon between 300 and 1500 K, journalJ. Appl. Phys. volume56 (year1984) pages314–320. | http://arxiv.org/abs/1703.08888v1 | {
"authors": [
"G. P. Purja Pun",
"Y. Mishin"
],
"categories": [
"cond-mat.mtrl-sci"
],
"primary_category": "cond-mat.mtrl-sci",
"published": "20170327000448",
"title": "An optimized interatomic potential for silicon and its application to thermal stability of silicene"
} |
Colloid–oil-water-interface interactions in the presence of multiple salts: charge regulation and dynamics

R. van Roij

Institute for Theoretical Physics, Center for Extreme Matter and Emergent Phenomena, Utrecht University, Princetonplein 5, 3584 CC Utrecht, The Netherlands
Soft Condensed Matter, Debye Institute for Nanomaterials Science, Princetonplein 5, 3584 CC Utrecht, The Netherlands
[email protected]

We theoretically and experimentally investigate colloid–oil-water-interface interactions of charged, sterically stabilized, poly(methyl-methacrylate) colloidal particles dispersed in a low-polar oil (dielectric constant ϵ=5-10) that is in contact with an adjacent water phase. In this model system, the colloidal particles cannot penetrate the oil-water interface due to repulsive van der Waals forces with the interface, whereas the multiple salts that are dissolved in the oil are free to partition into the water phase. The sign and magnitude of the Donnan potential and/or the particle charge are affected by these salt concentrations, such that the effective interaction potential can be highly tuned. Both the equilibrium effective colloid-interface interactions and the ion dynamics are explored within a Poisson-Nernst-Planck theory, and compared to experimental observations.

PACS: 82.70.Kj, 68.05.Gh

§ INTRODUCTION

Electrolyte solutions in living organisms often contain multiple ionic species, such as Na^+, K^+, Mg^2+ and Cl^-. The concentrations of these ions and their affinity to bind to specific proteins determine the intake of ions from the extracellular space to the intracellular one <cit.>. In this example, the concentration of multiple ionic species is used to tune biological processes. However, this scenario is not limited to living systems, as it can also be important for ionic liquids <cit.>, batteries <cit.>, electrolytic cells <cit.>, and colloidal systems <cit.>, as we show in this paper. In colloidal suspensions, the dissolved salt ions screen the surface charge of the colloid, leading to a monotonically decaying diffuse charge layer in the fluid phase. At the same time, these ions may adsorb to the colloid surface and modify its charge <cit.>. The colloid surface may also possess multiple ionizable surface groups that respond to the local physico-chemical conditions <cit.>. Hence, the particle charge is determined by the ionic strength of the medium and the particle's distance from other charged interfaces. This so-called charge regulation is known to be crucial to correctly describe the interaction between charged particles in aqueous solutions, from nanometer-sized proteins <cit.> to micron-sized colloids <cit.>.
Colloidal particles are also readily adsorbed at fluid-fluid interfaces, such as air-water and oil-water interfaces, since this leads to a large reduction in the surface free energy, of the order of 10^5-10^7 k_BT per particle, where k_BT is the thermal energy <cit.>. In an oil-water mixture, colloids therefore often form Pickering emulsions that consist of particle-laden droplets <cit.>, which have been the topic of extensive research due to their importance in many industrial processes, such as biofuel upgrading <cit.>, crude oil refining <cit.>, gas storage <cit.>, and the use of anti-foam agents <cit.>. When the colloidal particles penetrate the fluid-fluid interface, the electrostatic component of the particle-particle interactions is modified by the dielectric mismatch between the fluid phases <cit.>, nonlinear charge renormalization effects <cit.>, and the different charge regulation mechanisms in each phase <cit.>. The resulting long-range lateral interactions have been studied in detail <cit.>, with the out-of-plane interactions also receiving some attention <cit.>. Less attention has been dedicated to the electrostatics of the particle-interface interaction, although it is essential for understanding the formation and stability of Pickering emulsions.

In this work, we focus on an oil-water system, where oil-dispersed charged and sterically stabilized poly(methyl-methacrylate) (PMMA) particles are found to be trapped near an oil-water interface, without penetrating it, due to a force balance between a repulsive van der Waals (vdW) and an attractive image-charge force between the colloidal particle and the interface <cit.>. Here, the repulsive vdW forces stem from the particle dielectric constant being smaller than that of water and oil. This can be understood from the fact that for the three-phase system PMMA-oil-water, the differences in the dielectric spectra determine whether the vdW interaction is attractive or repulsive <cit.>, whereas for two-phase systems, like atoms in air, the vdW interaction is always attractive. In addition to this force balance, we have recently shown in Ref. <cit.> that the dissolved ions play an important role in the emulsion stability. In addition to the usual screening and charge regulation, ions can redistribute among the oil and water phases according to their solubility, and hence generate a charged oil-water interface that consists of a back-to-back electric double layer. Within a single-particle picture, this ion partitioning can be shown to modify the interaction between the colloidal particle and the oil-water interface. For a non-touching colloidal particle, the interaction is tunable from attractive to repulsive for large enough separations, by changing the sign of the product Zϕ_D <cit.>, where Ze is the particle charge and k_BTϕ_D/e the Donnan potential between oil and water due to ion partitioning, with e the elementary charge. The tunability of colloid-ion forces is a central theme of this work, in which we will explore how the quantities Z and ϕ_D can be rationally tuned. Although tuning the interaction potential through Zϕ_D is quite general, the salt concentrations in a binary mixture of particle-charge determining positive and negative ions cannot be varied independently due to bulk charge neutrality; in other words, Zϕ_D is always of a definite sign for a given choice of two ionic species. This motivates us to extend the formalism of Ref.
<cit.> by including at least three ionic species, which are all known to be present in the experimental system of interest that we will discuss in this paper. Including a second salt compound with an ionic species common to the two salts allows us to independently vary the ionic strength and the particle charge. Because of this property, it is then possible to tune the sign of the particle charge, which is acquired by the ad- or desorption of ions, via the salt concentration of one of the two species. Furthermore, for more than two types of ions, the Donnan potential depends not only on the difference in the degree of hydrophilicity between the various species <cit.>, but also on the bulk ion concentrations <cit.>. This leads to tunability of the magnitude, and possibly the sign, of the Donnan potential.

We apply our theory to experiments, where seemingly trapped colloidal particles near an oil-water interface could surprisingly be detached by the addition of an organic salt to the oil phase <cit.>. We will show that our minimal model including at least three ionic species is sufficient to explain the experiments. We do this by investigating the equilibrium properties of the particle–oil-water-interface effective potential in the presence of multiple salts and by examining out-of-equilibrium properties, such as diffusiophoresis. The latter is relevant for recent experiments where diffusiophoresis was found to play a central role in the formation of a colloid-free zone at an oil-water interface <cit.>.

As a first step, we set up in Sec. <ref> the density functional for the model system. In Sec. <ref>, the experiments are described. In Sec. <ref>, the equilibrium effective colloid-interface interaction potentials are explored as a function of salt concentration, and we work out a minimal model that can account for the experimental observations. In Sec. <ref>, we look at the influence of the ion dynamics within a Poisson-Nernst-Planck approach, and investigate how the system equilibrates when no colloidal particle is present. We conclude this paper by elucidating how our theory compares against the experiments of Elbers et al. <cit.>, where multiple ionic species were needed to detach colloidal particles from an oil-water interface.

§ DENSITY FUNCTIONAL

Consider two half-spaces of water (z<0, dielectric constant ϵ_w=80) and oil (z>0, dielectric constant ϵ_o) at room temperature T, separated by an interface at z=0. We approximate the dielectric constant profile by ϵ(z)=(ϵ_o-ϵ_w)Θ(z)+ϵ_w, with Θ(z)=[1+tanh(z/2ξ)]/2 and ξ the interface thickness. Since we take ξ to be molecularly small, we can interpret Θ as the Heaviside step function within the numerical accuracy on the micron length scales of interest here. The N_+ species of monovalent cations and N_- species of monovalent anions can be present as free ions in the two solvents, and are described by density profiles ρ_i,α(r) (i=1,...,N_α, α=±) with bulk densities in water (oil) ρ_i,α^w (ρ_i,α^o). Alternatively, the ions can bind to the surface of a charged colloidal sphere (dielectric constant ϵ_c, radius a, distance d from the interface) with areal density σ_i,α(r). The colloidal surface charge density eσ(r) is thus given by σ(r)=∑_i=1^N_+σ_i,+(r)-∑_i=1^N_-σ_i,-(r). The ions can partition among water and oil, which is modeled by the external potentials V_i,α(z)=β^-1f_i,αΘ(z) (where β^-1=k_BT), where the self-energy f_i,α is defined as the (free) energy cost to transfer a single ion from the water phase to the oil phase.
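To make this geometry concrete, the following minimal Python sketch tabulates the smoothed step Θ(z), the dielectric profile ϵ(z), and the partitioning potentials βV_i,α(z)=f_i,αΘ(z) on a one-dimensional grid. The numerical values for ϵ_o and for the interface width ξ are illustrative assumptions (the text only specifies ϵ_o=5-10 and a molecularly small ξ).

```python
import numpy as np

# Smoothed step profile Theta(z) = [1 + tanh(z/2 xi)]/2; for molecularly small xi
# this acts as a Heaviside function on the micron scales of interest.
def theta(z, xi=1e-3):           # z and xi in microns; xi = 1 nm is an assumed value
    return 0.5 * (1.0 + np.tanh(z / (2.0 * xi)))

eps_w, eps_o = 80.0, 7.9         # eps_o ~ 7.9 assumed for CHB (text: 5-10)

def eps(z, xi=1e-3):
    """Dielectric profile eps(z) = (eps_o - eps_w) Theta(z) + eps_w."""
    return (eps_o - eps_w) * theta(z, xi) + eps_w

def betaV(z, f, xi=1e-3):
    """Ion partitioning potential beta V(z) = f Theta(z), with f the self-energy in k_BT."""
    return f * theta(z, xi)

z = np.linspace(-1.0, 1.0, 5)    # microns
print(eps(z))                    # ~80 in water (z<0), ~7.9 in oil (z>0)
print(betaV(z, f=10.0))          # e.g. f_Br- = 10 from the Born estimates used below
```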
The effects of ion partitioning and charge regulation can elegantly be captured within the grand potential functional Ω[{ρ_i,±,σ_i,±}_i=1^N_±; d], given by

Ω=ℱ-∑_α=±∑_i=1^N_α∫ d^3r {[μ_i,α-V_i,α(z)]×[ρ_i,α(r)+σ_i,α(r)δ(|r-d e_z|-a)]},

with μ_i,α=k_BT ln(ρ_i,α^wΛ_i,α^3) the chemical potential of the ions in terms of the ion bulk concentrations ρ_i,α^w in water, and e_z the normal unit vector of the planar interface. Here the Helmholtz free energy functional ℱ is given by

βℱ[{ρ_i,±,σ_i,±}_i=1^N_±; d]=∑_α=±∑_i=1^N_α∫_ℛ d^3r ρ_i,α(r){ln[ρ_i,α(r)Λ_i,α^3]-1}+(1/2)∫_ℛ d^3r Q(r)ϕ(r)+∑_α=±∑_i=1^N_α∫_Γ d^2r (σ_i,α(r){ln[σ_i,α(r)a^2]+ln(K_i,αΛ_i,α^3)}+[σ_mθ_i,α-σ_i,α(r)]ln{[σ_mθ_i,α-σ_i,α(r)]a^2}),

where the region outside the colloidal particle is denoted by ℛ and the particle surface by Γ. The first term of Eq. (<ref>) is an ideal-gas contribution. The mean-field electrostatic energy is described by the second term of Eq. (<ref>), which couples the total charge density Q(r)=∑_i=1^N_+ρ_i,+(r)-∑_i=1^N_-ρ_i,-(r)+σ(r)δ(|r-d e_z|-a) to the electrostatic potential ϕ(r)/βe=25.6 ϕ(r) mV. The final term is the free energy of an (N_++N_-+1)-component lattice gas of neutral and charged groups, with a surface density of ionizable groups σ_m a^2=10^6 (or one ionizable group per nm^2), and θ_i,α the fraction of ionizable groups available for an ion of type (i,α). A neutral surface site S_i,α can become charged via adsorption of an ion X_i,α^α, i.e., S_i,α+X_i,α^α⇆S_i,αX_i,α^α, with an equilibrium constant K_i,α=[S_i,α][X_i,α^α]/[S_i,αX_i,α^α] and pK_i,α=-log_10(K_i,α/1 M).

From the Euler-Lagrange equations δΩ/δρ_i,α(r)=0, we find the equilibrium profiles ρ_i,±(r)=ρ_i,±^w exp[∓ϕ(r)-f_i,±Θ(z)]. Combining this with the Poisson equation for the electrostatic potential, we obtain the Poisson-Boltzmann equation for r∈ℛ,

∇·[ϵ(z)∇ϕ(r)]/ϵ_o=κ(z)^2 sinh[ϕ(r)-Θ(z)ϕ_D],

where we used bulk charge neutrality to find the Donnan potential ϕ_D/βe, given by

ϕ_D=(1/2)log[∑_iρ_i,+^w exp(-f_i,+)/∑_iρ_i,-^w exp(-f_i,-)].

In Eq. (<ref>), we also introduced the inverse length scale κ(z)=√(8πλ_B^oρ_s(z)), with

ρ_s(z)=(1/2)∑_α=±∑_i=1^N_αρ_i,α^o exp[(αϕ_D+f_i,α)Θ(-z)],

where the Bjerrum length in oil is given by λ_B^o=e^2/(4πϵ_vacϵ_o k_BT). Notice that κ(z)=κ_o for z>0, with κ_o^-1 the screening length in oil, and that for z<0 we have κ(z)=κ_w√(ϵ_w/ϵ_o), with κ_w^-1 the screening length in water. Finally, the bulk oil densities ρ_i,α^o are related to the bulk water densities as

ρ_i,α^w=ρ_i,α^o exp(αϕ_D+f_i,α).

Inside the dielectric colloidal particle, the Poisson equation reads ∇^2ϕ=0. On the particle surface, r∈Γ, we have the boundary condition

n·[ϵ_c∇ϕ|_in-ϵ_o∇ϕ|_out]/ϵ_o=4πλ_B^oσ(r),

with a charge density described by the Langmuir adsorption isotherm for r∈Γ,

σ_i,α(r)=σ_mθ_i,α/[1+(K_i,α/ρ_i,α^o) exp{α[ϕ(r)-ϕ_D]}],

which follows from δΩ/δσ_i,α(r)=0.
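As a quick numerical consistency check of the Donnan potential of Eq. (<ref>) and the bulk relation between water and oil densities, the sketch below evaluates ϕ_D for a single salt and verifies that the resulting bulk oil phase is charge neutral. The densities are in arbitrary units, and the self-energies anticipate the Born estimates used in Sec. <ref>.

```python
import numpy as np

# phi_D = (1/2) ln[ sum_i rho_{i,+}^w e^{-f_{i,+}} / sum_i rho_{i,-}^w e^{-f_{i,-}} ]
def donnan(rho_w_plus, f_plus, rho_w_minus, f_minus):
    num = sum(r * np.exp(-f) for r, f in zip(rho_w_plus, f_plus))
    den = sum(r * np.exp(-f) for r, f in zip(rho_w_minus, f_minus))
    return 0.5 * np.log(num / den)

# HBr only: phi_D = (f_Br - f_H)/2 = -0.5 for f_H+ = 11, f_Br- = 10; the result is
# independent of the HBr amount.
rho_w = [1.0]
phi_D = donnan(rho_w, [11.0], rho_w, [10.0])
print(phi_D)                                   # -> -0.5

# Oil densities follow from rho^o = rho^w exp[-(alpha*phi_D + f)]:
rho_o_H = rho_w[0] * np.exp(-(+1 * phi_D + 11.0))
rho_o_Br = rho_w[0] * np.exp(-(-1 * phi_D + 10.0))
print(np.isclose(rho_o_H, rho_o_Br))           # bulk oil is charge neutral
```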
Eqs. (<ref>)-(<ref>) are solved numerically for ϕ(r) using the cylindrical symmetry, and generic solutions were already discussed in the case of a single adsorption model in Ref. <cit.>. From the solution we determine ρ_i,α(r) and σ_i,α(r). These in turn determine the effective colloid-interface interaction Hamiltonian via

H(d)=Φ_VdW(d)+min_{ρ_i,±,σ_i,±}Ω[{ρ_i,±,σ_i,±}_i=1^N_±; d].

Here, we added the vdW sphere-plane potential Φ_VdW, with an effective particle-oil-water Hamaker constant A_H <cit.>. Eq. (<ref>) can then be evaluated to give

βH(d)=∫_ℛ d^3r ρ_s(z){ϕ(r) sinh[ϕ(r)-Θ(z)ϕ_D]-2(cosh[ϕ(r)-Θ(z)ϕ_D]-1)}-(1/2)∫_Γ d^2r σ(r)ϕ(r)-∑_α=±∑_i=1^N_ασ_mθ_i,α∫_Γ d^2r ln(1+(ρ_i,α^o/K_i,α) exp{-α[ϕ(r)-ϕ_D]})-(βA_H/6)[1/(d/a-1)+1/(d/a+1)+ln((d/a-1)/(d/a+1))],

which we will investigate using the experimental parameters given in Table <ref>, to be elucidated in the next section.
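The last line of Eq. (<ref>) is the only contribution to βH(d) that does not require the numerical solution for ϕ(r); a minimal sketch of this sphere-plane vdW term is given below, using the value βA_H=-0.3 adopted in Sec. <ref>.

```python
import numpy as np

# Sphere-plane vdW term of beta*H(d):
# beta*Phi_vdw = -(beta*A_H/6) [ 1/(d/a-1) + 1/(d/a+1) + ln((d/a-1)/(d/a+1)) ],
# with d the center-to-interface distance and a the particle radius.
def beta_phi_vdw(d_over_a, beta_AH=-0.3):     # beta*A_H = -0.3 as used later in the text
    x = np.asarray(d_over_a, dtype=float)
    return -(beta_AH / 6.0) * (1.0/(x - 1.0) + 1.0/(x + 1.0)
                               + np.log((x - 1.0)/(x + 1.0)))

# For a negative Hamaker constant this term is repulsive and diverges as d -> a:
print(beta_phi_vdw([1.001, 1.01, 1.1, 2.0]))
```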
§ SYSTEM AND EXPERIMENTAL OBSERVATIONS

We consider two experimental systems from Ref. <cit.>, to which we will refer as systems 1 and 2. Both systems are suspensions of sterically stabilized poly(methyl-methacrylate) (PMMA) colloidal particles of radius a=1.4 μm and dielectric constant ϵ_c=2.6 <cit.>. The comb-graft steric stabilizer is composed of poly(12-hydroxystearic acid) (PHSA) grafted on a backbone of PMMA <cit.>. This stabilizer was covalently bonded to the particles in system 1 (resulting in so-called locked PMMA particles <cit.>), whereas it was adsorbed to the surface of the particles in system 2 (resulting in so-called unlocked PMMA particles <cit.>). In the locking process, the PMMA colloids acquire a higher surface potential and charge. The increase in charge is mainly due to the incorporation of 2-(dimethylamino)ethanol in the PMMA colloids during the locking procedure: the protonation of the incorporated amine groups renders colloidal particles with an increased positive charge (see also Ref. <cit.>). Locked particles (like in system 1) are thus always positively charged and can only become negative by introducing TBAB. Unlocked particles can be either (slightly) positively or negatively charged. The locked particles in system 1 were dispersed in deionized cyclohexyl bromide (CHB) and were positively charged, whereas the unlocked particles in system 2 were dispersed in CHB/cis-decalin (27.2 wt%) and were negatively charged. The key parameters of both systems are summarized in Table <ref>, where η is the volume fraction. It is important to note that CHB decomposes over time, producing HBr. CHB is a non-polar oil (ϵ_o=5-10), rather than an apolar oil (ϵ_o≈2), which means that the dielectric constant is high enough for significant dissociation of (added) salts to occur; specifically, HBr can dissociate into H^+ and Br^- ions, which can subsequently adsorb on the particle surface <cit.>. In an oil phase without added salt and without an adjacent water phase, κ_o^-1=6 μm was assumed for both systems <cit.>, which is a reasonable estimate based on conductivity measurements or the crystallization behaviour of colloidal particles dispersed in CHB.

In the experimental study, suspensions of systems 1 and 2 were brought into borosilicate capillaries (5 cm × 2.0 mm × 0.10 mm) which were already half-filled with deionized water; the colloidal behavior near the oil-water interface was studied with confocal microscopy. When necessary, the oil-water interface was visualized more clearly by using FITC-dyed water instead of ultrapure water. FITC water was taken from a stock solution to which an excess of FITC dye was added. FITC water was never used in combination with TBAB in the aqueous phase, to prevent interactions between the FITC dye and TBAB. In Fig. <ref>, confocal images of both systems before (top) and after addition of the organic salt tetrabutylammonium bromide (TBAB) to the oil (middle) and water phase (bottom) are shown.

In the absence of salt, the force balance between image-charge attractions and vdW repulsions leads to the adsorption of the colloidal particles at the interface in both systems <cit.>, without the colloidal particles penetrating the oil-water interface <cit.>. In addition, the water side of the interface was reported to be positively charged, while the oil side was negatively charged <cit.>. When TBAB was added to the oil phase above the threshold concentration ρ_TBA^+|_Z=0 mentioned in Table <ref>, with corresponding Debye screening length in oil (κ_o|_Z=0)^-1, the colloidal particles in system 1 were driven from the interface towards the bulk oil phase, whereas the addition of TBAB did not result in particle detachment in system 2, see Fig. <ref>. Over time, the detached colloidal particles in system 1 reattached close to the oil-water interface <cit.> (see Fig. S1 in the supplemental information). When TBAB was added to the water phase, the colloidal particles in both systems 1 and 2 were driven from the bulk oil to the oil-water interface, producing dense layers of colloidal particles near the interface <cit.>, see Fig. <ref> and Fig. S2 in the supplemental information. Finally, we also investigated system 1 under the same density-matching conditions as in system 2, and observed no qualitative change in the response to salt addition, see Fig. S3 in the supplemental information.

When TBAB was added to the oil phase, the positively charged colloidal particles in system 1 reversed the sign of their charge Z=∫_Γ d^2r σ(r) from positive (Z>0) to negative (Z<0) <cit.>. This suggests that H^+ and Br^- can both adsorb to the particle surface and that the addition of TBAB introduces more Br^- into the system, causing the particle charge in system 1 to become negative for a high enough concentration of TBAB. The estimated concentration of free TBA^+ ions ρ_TBA^+|_Z=0 and the corresponding Debye length (κ_o|_Z=0)^-1 in our experiments are listed in Table <ref>. Both parameters are not defined for system 2 (not applicable, n.a.), since in the setup that we consider negative particles cannot become positively charged: we always observed that adding TBAB results in a more negative particle charge. In Fig. <ref>, all equilibria, including the decomposition of CHB, the equilibria of HBr and TBAB with their free ions, and the partitioning of these ions between water and oil, are shown schematically. For simplicity, we have not taken the salt dissociation equilibria into account in the theory of Sec. <ref>. However, the Bjerrum pairs HBr and TBAB could be included in the theory by using the formalism of Ref. <cit.>. In the upper right inset of Fig. <ref>, we show schematically the binding of H^+ and Br^- onto the particle surface. In principle, TBA^+ can also adsorb on the particle surface, but we expect this to be a small effect that we neglect. This is justified since adding TBAB renders the particles more negative, suggesting that Br^- adsorbs on the particle surface more easily than TBA^+. Hence, including a finite value for K_TBA^+ in our model would not change our results qualitatively, but only quantitatively. We will explain the experimental observations described in this section by applying the formalism of Sec. <ref>.
Moreover, we will discuss the differences between a single adsorption model and a binary adsorption model, and the influence of a third ionic species, which is a first extension towards the full experimental complexity compared to our previous work <cit.>, where only a single adsorption model was considered in a medium with only two ionic species.

§ COLLOID-INTERFACE INTERACTIONS

We will perform calculations for up to two species of cations (N_+=1,2) and one species of anions (N_-=1), where (1,+) corresponds to H^+, (1,-) to Br^-, and (2,+) to TBA^+. To estimate the order of magnitude of the ion sizes, we consider their effective (hydrated) ionic radii a_H^+=0.28 nm, a_Br^-=0.33 nm, and a_TBA^+=0.54 nm <cit.>. This gives self-energies (in units of k_BT) f_H^+=11, f_Br^-=10 and f_TBA^+=6, based on the Born approximation f_α=(λ_B^o/2a_α)(1-ϵ_o/ϵ_w). This is a poor approximation in the case of TBA^+, because it is known that TBA^+ is actually a hydrophobic ion, f_TBA^+<0. However, this simple approximation does not affect our predictions, since we can deduce from Eq. (<ref>) the inequality (f_Br^--f_H^+)/2≤ϕ_D≤(f_Br^--f_TBA^+)/2. Therefore, as long as f_TBA^+<f_Br^-, we find that the Donnan potential is varied between a negative value and a positive one by adding TBAB, in line with the experimental observations. Setting f_TBA^+<0 is therefore not required. Since we will fix κ_o^-1 throughout our calculations, assuming f_TBA^+<0 would only affect the value of κ_w^-1, and we have already shown in our previous work that this parameter is not important for the colloid-interface interaction of oil-dispersed colloidal particles <cit.>. We therefore use the Born approximation to analyze the qualitative behaviour of the effective interactions, such that ϕ_D can vary between -0.5 and 2.
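The quoted self-energies follow directly from the Born expression; the short sketch below reproduces them, assuming ϵ_o≈7.9 for CHB (within the quoted range 5-10) and a water Bjerrum length of 0.7 nm, and also evaluates the resulting bounds on ϕ_D.

```python
import numpy as np

# Born estimate f_alpha = (lambda_B^o / 2 a_alpha)(1 - eps_o/eps_w), using the hydrated
# radii quoted above; eps_o = 7.9 (CHB) and lambda_B^w = 0.7 nm are assumed values.
eps_w, eps_o = 80.0, 7.9
lamB_o = 0.7 * eps_w / eps_o                      # oil Bjerrum length in nm (~7.1 nm)
radii = {'H+': 0.28, 'Br-': 0.33, 'TBA+': 0.54}   # hydrated radii in nm

f = {ion: (lamB_o / (2.0 * a)) * (1.0 - eps_o / eps_w) for ion, a in radii.items()}
print(f)   # ~{'H+': 11.4, 'Br-': 9.7, 'TBA+': 5.9}, rounding to the quoted 11, 10, 6

# Bounds on the Donnan potential, (f_Br - f_H)/2 <= phi_D <= (f_Br - f_TBA)/2;
# with the rounded values 11, 10 and 6 these bounds are -0.5 and 2, as in the text.
print((f['Br-'] - f['H+']) / 2.0, (f['Br-'] - f['TBA+']) / 2.0)
```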
In an isolated oil phase without an adjacent water phase, the screening length in our experiments was approximated to be κ_o^-1=6 μm. However, κ_o^-1 becomes larger in the presence of an adjacent water phase, since water acts as an ion sink: the ions dissolve better in water than in oil and therefore diffuse towards the water phase. The charged colloidal particles in the oil phase counteract this effect, because these colloidal particles are always accompanied by a diffuse ion cloud, keeping some of the ions in the oil. Because we do not know the exact value of κ_o^-1 in an oil-water system, we consider it a free parameter and let it vary in a reasonable range between 6 μm and 50 μm. In our single-particle picture, we neglect many-body effects, which can reduce the value of κ_o^-1 due to the overlap of double layers. This can be taken into account by introducing an effective Debye length <cit.>. Another many-body effect that we do not include is the discharging of particles when the particle density is increased <cit.>. One should keep this in mind when directly comparing the values we use for κ_o^-1 to experiment.

§.§ Systems without TBAB added

In this subsection, we first investigate systems without added TBAB (such that H^+ and Br^- are the only ionic species) for two different adsorption models. The first one is a single-ion adsorption model. In this case, system 1 in Table <ref> is described by the adsorption of H^+ alone, while for system 2 only Br^- can adsorb. We use the experimental values of Z from Table <ref> to determine the equilibrium constants on the basis of a spherical-cell model in the dilute limit with κ_o^-1=6 μm. Note that these values are obtained for colloidal particles dispersed in CHB without an adjacent water phase. Within this procedure, we find a^3K_H^+=165 and K_Br^-→∞ for system 1, while for system 2 we find a^3K_Br^-=3310 and a^3K_H^+→∞. For the particle–oil-water-interface vdW interaction we use a Hamaker constant βA_H=-0.3, which is an estimate based on the Lifshitz theory for the vdW interaction <cit.>. The resulting colloid-interface interaction potentials as a function of κ_o^-1 are shown in Figs. <ref>(a) and (c), with the corresponding Z(d) in the insets. The product Zϕ_D determines the long-distance nature of the colloid-interface interaction: in Fig. <ref>(a) it is repulsive for system 1, since Zϕ_D<0, and in Fig. <ref>(c) attractive for system 2, since Zϕ_D>0 (recall that here ϕ_D=-0.5); see Ref. <cit.> for a detailed discussion. At smaller d, the image-charge interaction, which is attractive for both systems, becomes important. In the nanometer vicinity of the interface, the vdW repulsion dominates, and taken together with the image-charge potential, this gives rise to a minimum in Φ(d)≡H(d)-H(∞), which corresponds to the equilibrium trapping distance of the particles from the interface.

Increasing κ_o^-1 reduces |Z|, such that the vdW repulsion can eventually overcome the image-charge potential for sufficiently small d (Figs. <ref>(a),(c)). However, the reduction in the particle-ion force is much smaller than the reduction of the image force, since the former scales like ∼Z, unlike the latter, which scales (approximately) like ∼Z^2. In Fig. <ref>(a), we find that this results in a trapped state near the interface which becomes metastable for large κ_o^-1, with a reduced energy barrier upon increasing κ_o^-1. For system 2, we find that Φ(d) becomes repulsive for all d for sufficiently large κ_o^-1, because the attractive image-charge and colloid-ion forces are reduced due to particle discharging. This calculation shows that particle detachment from the interface is possible by removing a sufficient number of ions from the oil phase. This effect is stronger in system 1, because the repulsive Donnan-potential mechanism is longer ranged than the vdW repulsion. However, to the best of our knowledge, such detachment was not observed in experiments by, for example, adding a sufficient amount of water that acts as an ion sink. Taken together with the experimental observation that initially positively charged particles can acquire a negative charge, we conclude that systems 1 and 2 are not described by single-adsorption models <cit.>.

With the same procedure as for the single adsorption model, we determined the values of the equilibrium constants in the case of a binary adsorption model. For system 1 we also used the salt concentration ρ_TBA^+|_Z=0 at which charge inversion takes place, to find a^3K_H^+=0.0001, a^3K_Br^-=47, and θ=0.8. Here θ=θ_Br^- is the fraction of sites on which anions can adsorb. For system 2, we assumed θ=0.5 and found a^3K_H^+=1 and a^3K_Br^-=0.055. The short-distance (vdW), mid-distance (image charge) and long-distance (Donnan) behaviour of Φ(d) does not qualitatively change in the binary adsorption model, see Figs. <ref>(b) and (d). However, the trapped state is more robust to changes in the ionic strength, because of the much higher values of |Z(d)|. This can be understood as follows.
In system 1, K_Br^->K_H^+, and thus decreasing the salt concentration leads the negatively charged surface sites to discharge first, which means that the charge initially increases with κ_o^-1. This enhances the image-charge effects, giving rise to a deeper potential well for the trapped state. At even higher κ_o^-1, |Z(d)| will eventually decrease due to cationic desorption, although this is not explicitly shown in Fig. <ref>. A similar reasoning applies to the negatively charged colloidal particles in system 2, which show only discharging upon increasing κ_o^-1, but much less so than in the single adsorption model. The theoretically predicted stronger trapping in both systems, together with the experimentally observed sign reversal of the colloidal particles of system 1, which requires at least two adsorbed ionic species, indicates that the binary adsorption model describes the experiments better than the single adsorption model. In addition, the large energy barrier between the trapped state and the bulk in Fig. <ref>(b) shows that not all the colloidal particles can be trapped near the oil-water interface. This is consistent with the experimentally observed zone void of colloidal particles, although one should keep in mind that the charged monolayer provides additional repulsions which are not taken into account in our single-particle picture.

§.§ Systems with TBAB added

We now show how the colloid-interface interaction changes in a system with three ionic species. We focus on the binary adsorption model applied to system 1, because this system has the richest behaviour, allowing Z to switch sign. Here, the addition of TBAB gives rise to two new features. The first one is that it is possible to independently tune ρ_Br^-^o and ρ_H^+^o in the bulk oil phase while satisfying the constraint of bulk charge neutrality, ρ_TBA^+^o+ρ_H^+^o=ρ_Br^-^o. By increasing ρ_TBA^+^o, we find that Z switches sign at

ρ_TBA^+|_Z=0=K_Br^-(1-θ)ρ_H^+^o/[(2θ-1)ρ_H^+^o+θK_H^+],

where we used Eq. (<ref>) together with the condition σ_Br^-=σ_H^+. Secondly, because of the hierarchy f_TBA^+<f_Br^-<f_H^+, the Donnan potential can switch sign at

ρ_TBA^+|_ϕ_D=0=ρ_H^+^o[e^(f_H^+-f_Br^-)-1]/[1-e^(f_TBA^+-f_Br^-)],

where we used Eqs. (<ref>) and (<ref>). Eq. (<ref>) depends only weakly on the precise value of f_TBA^+, since exp(f_TBA^+-f_Br^-)<0.02 for f_TBA^+≲6 (with 6 being its value within the Born approximation), and hence the second term in the denominator of Eq. (<ref>) can be neglected. Using the equilibrium constants of Sec. <ref>, we see from Eqs. (<ref>) and (<ref>) that ϕ_D switches sign before Z does upon adding TBAB; i.e., ρ_TBA^+|_ϕ_D=0<ρ_TBA^+|_Z=0.
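The two thresholds can be evaluated directly; the sketch below does so for the system-1 parameters quoted in the previous subsection (a^3K_H^+=10^-4, a^3K_Br^-=47, θ=0.8), with concentrations measured in units of a^-3 and an illustrative, assumed value for ρ_H^+^o, confirming that ϕ_D reverses sign at a lower TBA^+ concentration than Z.

```python
import numpy as np

# Charge-inversion and Donnan sign-reversal thresholds for system 1, evaluating the
# two expressions above. All concentrations are in units of a^-3 (a = 1.4 um).
K_H, K_Br, theta = 1e-4, 47.0, 0.8          # a^3 K_H+, a^3 K_Br-, anionic site fraction
f_H, f_Br, f_TBA = 11.0, 10.0, 6.0          # Born self-energies (k_BT)

def rho_tba_Z0(rho_H):
    """TBA+ density at which the colloid charge Z crosses zero."""
    return K_Br * (1.0 - theta) * rho_H / ((2.0*theta - 1.0) * rho_H + theta * K_H)

def rho_tba_phiD0(rho_H):
    """TBA+ density at which the Donnan potential crosses zero."""
    return rho_H * (np.exp(f_H - f_Br) - 1.0) / (1.0 - np.exp(f_TBA - f_Br))

rho_H = 1e-3                                 # assumed illustrative H+ bulk oil density
print(rho_tba_phiD0(rho_H), rho_tba_Z0(rho_H))
print(rho_tba_phiD0(rho_H) < rho_tba_Z0(rho_H))   # phi_D switches sign before Z does
```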
Since our calculations are performed in the grand-canonical ensemble, we have to specify how we account for the added TBAB. We choose to fix ρ_H^+^o, and set κ_o^-1=10 μm without added TBAB (blue curve in Fig. <ref>(b)). The Debye length is chosen to be slightly larger than that of a pure CHB system, because the water phase acts as an ion sink; see the discussion in Sec. <ref>. The resulting colloid-interface interactions are shown in Figs. <ref>(a) and (b) for various values of κ_o^-1, which decreases upon addition of TBAB. The relation between the screening lengths and the bulk concentration ρ_TBA^+^o is shown in Fig. <ref>(c). We can identify four regimes, indicated by different colors in Fig. <ref>. We start with a system for which ϕ_D<0 and Z>0 (blue curves), such that an energy barrier is present that separates the trapped state from the bulk state. Increasing ρ_TBA^+^o decreases |ϕ_D| until ultimately the energy barrier vanishes and ϕ_D becomes positive (red curves). At even larger TBAB concentrations, the colloidal particle becomes negative for d→∞, as it would be in bulk at the given κ_o^-1 (green curves). Interestingly, there is then a (small) energy barrier of a different nature than the energy barriers shown until now. Namely, there exists a d^* for which Z(d^*)=0 (see insets in Fig. <ref>(b)). Surprisingly, this point of zero charge d^* does not coincide with the location of the maximum in Φ(d). Furthermore, the result for κ_o^-1=0.9 μm does not show a maximum, although there is a point of zero charge. Both observations can be understood from the fact that although Z=0, the charge density σ(ϑ) is not spatially constant. In this case, there is still a coupling between bulk and surface ions that contributes to Φ(d); see the second term in Eq. (<ref>). Lastly, at a very high TBAB concentration we find Z(d)<0 for all d (purple curves), and the large Donnan potential leads to a repulsion for all d, and hence to particle detachment. Upon decreasing κ_o^-1, this repulsion first becomes stronger, as ϕ_D increases towards 2. At the same time, increasing |Z| increases the strength of the image-charge attraction, eventually resulting in a plateau in Φ(d) between d-a∼10^-3 μm and d-a∼10^-1 μm (compare κ_o^-1=0.25 μm with κ_o^-1=0.4 μm in Fig. <ref>(b)).

We now briefly explain how added TBAB would change the colloid-interface interactions in the other cases presented in Figs. <ref>(a), (c) and (d). In the case of the single adsorption model of system 1, only the Donnan potential switches sign: the energy barrier would vanish and the particles would stay trapped. Possibly, some of the particles from the bulk would then be moved towards the oil-water interface. For system 2, the addition of TBAB would only introduce an energy barrier separating the trapped state from a bulk state, but no detachment occurs, independent of the investigated adsorption model. This is in line with the experiments of Ref. <cit.>, where no particle detachment was observed for system 2. From the calculations in Fig. <ref>, we deduce that significant particle detachment from the interface occurs whenever Z<0 and ϕ_D>0. However, the range of the repulsion, which extends up to 1 μm, is too short to explain the particle detachment found in experiments, which may extend beyond 10 μm. One possible explanation for this discrepancy is that the particle motion far from the interface is governed by a non-equilibrium phenomenon, e.g., the concentration gradient of ions generated by their migration from the oil phase to the water phase, similar to the recent experiment by Banerjee et al. <cit.>. This motivated us to investigate the ion dynamics in the next section, in order to gain insight into the time evolution of the colloid-ion forces.

§ ION DYNAMICS

For simplicity, we now assume that no colloidal particle is present in the system, such that the ion dynamics can be captured within a planar geometry. This can still give insight into the colloid-ion potential, because we deduced in our previous work that Φ(d) can be approximated by βΦ(d)≈Z(∞)ϕ_0(d) for sufficiently large d, with ϕ_0 the dimensionless potential without the colloidal particle <cit.>. The theory can be set up from Eq. (<ref>), with the second line set equal to zero, and one should also keep in mind that ℛ is the total system volume in this case.
It is then possible to derive equations of motion for ρ_i,±(r,t) by using dynamical density functional theory (DDFT) <cit.>. For ionic species i with charge α=±, the continuity equation reads

∂ρ_i,α(r,t)/∂t=-∇·j_i,α(r,t),

with particle currents j_i,α(r,t) equal to

j_i,α(r,t)=-D_i,α(r)ρ_i,α(r,t)∇[δ(βℱ)/δρ_i,α(r)|_ρ_i,α(r,t)+βV_i,α(r)].

Explicitly working out the functional derivative gives

j_i,±(r,t)=-D_i,±(r){∇ρ_i,±(r,t)+ρ_i,±(r,t)∇[±ϕ(r,t)+βV_i,±(r)]},

with D_i,α(z)=(D_i,α^o-D_i,α^w)Θ(z)+D_i,α^w, where D_i,α^o (D_i,α^w) is the diffusion coefficient of an ion of sign α in bulk oil (water). Here, we have used the Einstein-Smoluchowski relation to relate the electric mobility to the diffusion constant. The time-dependent electrostatic potential ϕ(r,t) satisfies the Poisson equation (neglecting retardation),

∇·[ϵ(r)∇ϕ(r,t)]/ϵ_o=-4πλ_B^o[∑_j=1^N_+ρ_j,+(r,t)-∑_j=1^N_-ρ_j,-(r,t)].

Eqs. (<ref>)-(<ref>) are the well-known Poisson-Nernst-Planck equations, and we solve them under the boundary conditions

n·j_i,α(r,t)=0 and n·∇ϕ(r,t)=0, for all r∈∂ℛ and all t∈[0,∞),

which follow from global mass and charge conservation, respectively. We estimate the diffusion coefficients by making use of the Stokes-Einstein relation D_i,±^j=(6πβη_j a_i,±)^-1, with η_j the viscosity of the solvent (j=o,w). At room temperature we have η_w=8.9·10^-4 Pa·s, while for CHB η_o=2.269·10^-3 Pa·s. From these values we find: D_H^+^w=8.76·10^-10 m^2/s, D_TBA^+^w=4.54·10^-10 m^2/s, D_Br^-^w=7.43·10^-10 m^2/s, D_H^+^o=3.44·10^-10 m^2/s, D_TBA^+^o=1.78·10^-10 m^2/s and D_Br^-^o=2.91·10^-10 m^2/s.
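These numbers follow directly from the Stokes-Einstein relation with the hydrated radii of Sec. <ref>; a minimal check, assuming room temperature T=298 K, is given below.

```python
import numpy as np

# Stokes-Einstein estimate D = k_B T / (6 pi eta a), using the quoted viscosities and
# the hydrated radii from Sec. IV; T = 298 K is an assumed value for room temperature.
kBT = 1.38e-23 * 298.0                                      # J
eta = {'w': 8.9e-4, 'o': 2.269e-3}                          # Pa s (water, CHB)
radii = {'H+': 0.28e-9, 'Br-': 0.33e-9, 'TBA+': 0.54e-9}    # m

for j, eta_j in eta.items():
    for ion, a in radii.items():
        D = kBT / (6.0 * np.pi * eta_j * a)
        print(f"D_{ion}^{j} = {D:.2e} m^2/s")
# reproduces the quoted values, e.g. D_H+^w ~ 8.8e-10 and D_Br-^o ~ 2.9e-10 m^2/s
```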
§.§ Dynamics after TBAB addition

The ion dynamics can provide further insight into the particle dislodgement after TBAB is added to the oil phase. In experiment, we observed that κ_o^-1 can be decreased down to 50 nm after TBAB is added. This Debye length implies a salt concentration of the order of 10^-7 M, such that we can safely neglect the HBr concentration, which has a maximal value of ∼10^-10 M before the oil is brought into contact with the water phase. We investigate the time dependence of the electrostatic potential ϕ(z,t), with z the direction perpendicular to the oil-water interface. The oil is assumed to reside in a capillary with a linear dimension perpendicular to the oil-water interface of length L_o=10 μm, which is much larger than κ_o^-1 but, to facilitate numerical calculations, much smaller than the experimental sample size of about 1 cm. It was difficult to perform calculations at even larger L_o with such a small κ_o^-1, but the present parameter settings can nevertheless give qualitative insights. In experiments, the length of the water side of the capillary L_w is also 1 cm, but here we take it to be L_w=0.1 μm, which is still much larger than κ_w^-1. The disadvantage of the small L_w is that only the ionic profiles in the oil phase can be considered realistic, because given the small L_w no bulk charge neutrality in the water phase can be obtained. Furthermore, the choice L_w≪L_o stems from the initial condition that we define below together with the desired final condition, constrained by the fact that ions cannot leave the oil-water system and that the water phase is modeled as an ion-less ion sink. In contrast to the canonical treatment of the ions used for the dynamics here, we used a grand-canonical treatment for the calculation of the effective colloid-oil-water-interface potential in Sec. <ref>.

Similar to the experiments, the initial condition for (i,α)=TBA^+, Br^- is a uniform distribution of ions in the oil phase:

ρ_i,α(z,t=0)=ρ_0Θ(z).

The amplitude ρ_0=[κ_o(t=0)]^2/(8πλ_B^o) is chosen such that we can access the regime where the particles are negatively charged for d→∞ and t→∞, but can become positively charged close to the interface. In particular, we use κ_o^-1(t=0)=0.05 μm, leading to a final κ_o^-1(t→∞)=0.979 μm (cf. Fig. <ref>(b)). Solving Eqs. (<ref>), (<ref>) and (<ref>), with boundary conditions (<ref>) and initial condition (<ref>), results in the profiles ϕ(z,t), ρ_TBA^+(z,t), and ρ_Br^-(z,t). It is convenient to express the results in terms of a dimensionless time τ=t/t_0, with time scale t_0=L_o^2/D_Br^-^w, which in our system is t_0=1.3 s. This means that the equilibrium state is reached within several seconds in our system; see the profiles in Fig. <ref>. However, if a more realistic L_o is chosen, this time scale will be on the order of hours, since t_0 scales with L_o^2.
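To illustrate how Eqs. (<ref>)-(<ref>) can be integrated in this planar geometry, we give below a schematic, self-contained finite-difference sketch (explicit time stepping, with the Poisson equation solved at every step). The grid, the smoothed interface width, the assumed ϵ_o≈8, and the short run length are illustrative choices only, and a production calculation would use an implicit scheme to reach times of order t_0.

```python
import numpy as np
from scipy.linalg import solve_banded

# Schematic 1D PNP integration for TBA+/Br- between water (z<0) and oil (z>0).
# Units: lengths in um, time in s, phi in k_BT/e, densities in um^-3.
Lw, Lo, N = 0.1, 10.0, 1000
z = np.linspace(-Lw, Lo, N); dz = z[1] - z[0]
step = 0.5*(1.0 + np.tanh(z/(4.0*dz)))            # smoothed Theta(z), width ~ grid scale
lamB_o = 7.1e-3                                   # oil Bjerrum length (um), eps_o ~ 8 assumed
eps = 1.0 + (80.0/8.0 - 1.0)*(1.0 - step)         # eps(z)/eps_o
f = {'+': 6.0, '-': 10.0}                         # f_TBA+, f_Br- (k_BT), Born estimates
D = {'+': 178.0 + (454.0 - 178.0)*(1.0 - step),   # um^2/s, Stokes-Einstein values above
     '-': 291.0 + (743.0 - 291.0)*(1.0 - step)}
rho0 = 1.0/(8.0*np.pi*lamB_o*0.05**2)             # kappa_o^-1(t=0) = 0.05 um
rho = {'+': rho0*step, '-': rho0*step}            # initial condition rho_0 Theta(z)
V = {'+': f['+']*step, '-': f['-']*step}          # beta V_i(z) = f_i Theta(z)

def poisson(q):
    """d/dz[eps dphi/dz] = -4 pi lamB_o q; phi=0 at the water wall, dphi/dz=0 at z=Lo."""
    em = 0.5*(eps[1:] + eps[:-1])                 # face-centered permittivity
    ab = np.zeros((3, N)); rhs = -4.0*np.pi*lamB_o*q*dz**2
    ab[0, 2:] = em[1:]; ab[2, :-2] = em[:-1]; ab[1, 1:-1] = -(em[:-1] + em[1:])
    ab[1, 0] = 1.0; ab[0, 1] = 0.0; rhs[0] = 0.0            # Dirichlet at z=-Lw
    ab[1, -1] = -em[-1]; ab[2, -2] = em[-1]; rhs[-1] = 0.0  # Neumann at z=Lo
    return solve_banded((1, 1), ab, rhs)

dt = 0.2*dz**2/743.0                              # explicit diffusive stability limit
for _ in range(20000):                            # short illustrative run (~0.5 ms)
    phi = poisson(rho['+'] - rho['-'])
    for s, sgn in (('+', 1.0), ('-', -1.0)):
        mu = sgn*phi + V[s]                       # electrostatic + partitioning part
        Dm = 0.5*(D[s][1:] + D[s][:-1])
        j = -Dm*((rho[s][1:] - rho[s][:-1])/dz    # j = -D (drho/dz + rho dmu/dz)
                 + 0.5*(rho[s][1:] + rho[s][:-1])*(mu[1:] - mu[:-1])/dz)
        div = np.empty(N)
        div[1:-1] = (j[1:] - j[:-1])/dz           # no-flux walls: j = 0 at both ends
        div[0] = j[0]/dz; div[-1] = -j[-1]/dz
        rho[s] = rho[s] - dt*div
print(phi[-1])   # grows toward phi_D = (f_- - f_+)/2 = 2 as ions partition into the water
```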
In Fig. <ref>(a), we show the time evolution towards equilibrium of ϕ(z,t). For all times, ϕ(z,t) increases monotonically with z and becomes constant as z→L_o. The range of ϕ(z,t) steadily increases over time due to the depletion of ions in the oil. In addition, ϕ(L_o,t) increases with time, until ultimately ϕ(L_o,t→∞)=ϕ_D is reached. The equilibrium calculations of Fig. <ref> supported particle detachment by means of a repulsive colloid-ion force, but due to the large salt concentrations the range of the repulsive colloid-ion force was deemed too small in the parameter regime where the particle is negatively charged. The dynamics of the ionic profiles on the oil side, presented in Figs. <ref>(c)-(g), show that this issue can be resolved when the system is (correctly) viewed out of equilibrium, as we will explain next. From the profiles in Fig. <ref>(c), a short time after the addition of salt, we infer that the colloids are initially negatively charged, according to the corresponding κ_o^-1 and Z in Fig. <ref>(c). Therefore, the approximate interaction potential βΦ(d)≈Z(∞)ϕ_0(d) leads to a colloid-ion force that is repulsive. Colloidal particles that were initially trapped are then repelled from the interface, but only for surface-interface distances up to a micron, as can be inferred from Fig. <ref>(b). When t increases, the water-phase uptake of ions reduces the Br^- concentration close to the interface. At the same time, mass action is at play, and we can estimate from Fig. <ref>(c) that the particles become positively charged at ≈10^-8 M. This means that as time progresses, some of the particles close to the interface will reverse the sign of their charge. For example, at time τ=0.5, we can estimate from the profiles in Fig. <ref>(f) that only particles at d≳1 μm are still negatively charged. However, assuming that the bulk ion dynamics is much slower than the mass-action dynamics, the range of the Donnan potential has not relaxed yet, and is longer ranged than at t→∞. At τ=0.5, ϕ still extends up to L_o=10 μm, see the dotted line in Fig. <ref>(a). Hence, the range of the repulsion for the negatively charged particles is longer than one would expect from the equilibrium calculation. In other words, the range of the interaction is set much faster than the electrostatic potential and the colloidal charge at large z. At later times, once enough ions are depleted from the oil, all the colloids become positively charged and are attracted towards the interface, as one would expect in equilibrium for the final κ_o^-1. This also gives a possible explanation for the experimentally observed reattachment after the initial detachment.

For comparison, we also performed calculations with HBr as the only salt (no added TBAB). We found that, except at the very early stages of the dynamics, the HBr concentration is indeed negligible and decreases rapidly after the oil comes into contact with the water, due to the ion partitioning. These calculations also confirmed that, within the binary adsorption model, the colloid-ion forces remain repulsive throughout the partitioning process, since the particles become more positively charged with decreasing ionic strength, because of the larger desorption of negative ions than of positive ions. Thus, the attractive part of the colloid-interface interaction is in this case still always dominated by the short-range image forces.

Finally, we consider what happens when TBAB is added to the water, neglecting the HBr concentration. In Fig. <ref>(b), we show ϕ(z,t), and find that the potential in this case can temporarily become larger than ϕ_D. The ion densities behave as expected: some of the ions from the water side are transferred towards the oil phase. In Figs. <ref>(h)-(l) we see that the density of ions is first largest at the interface until, slowly, the rest of the oil is also filled. Note that the oil side of the interface is always positively charged, and that the equilibrium situation is identical to the one in Fig. <ref> by construction. Based on the calculation of Fig. <ref>(b), we conclude that the colloid-ion forces are attractive for all times until equilibrium is nearly reached. Because there is a high density of Br^- ions in bulk, the particles are negatively charged sufficiently far from the interface. The colloids at small d are, however, positively charged, as was explained for the inset of Fig. <ref>(b) (green curves). This explains why colloids are drawn closer to the interface upon adding TBAB in water: the colloids remain mainly positive, but a positive Donnan potential is generated out of a negative one, and hence an attraction towards the interface is induced. This we had already understood from the equilibrium calculations.

§.§ Diffusiophoresis

Despite having only discussed electrostatic forces generated by the Donnan potential so far, our calculations can also give some insight into diffusiophoretic effects, that is, the motion of colloidal particles induced by concentration gradients of ions. We now estimate the importance of diffusiophoresis in both the HBr and the added-TBAB systems using the PNP calculations. Whenever the unperturbed concentration fields satisfy ρ_+(z)≈ρ_-(z), a negligible electric field is generated by the ions that would give rise to the aforementioned colloid-ion force. However, in an overall concentration gradient, the particles can be translated due to diffusiophoresis, in which the particle velocity is given by U=b∇[ρ_+(z)+ρ_-(z)], with slip-velocity coefficient

b=[4k_BT/(η_oκ_o^2)]{(ζ/2)(D_+-D_-)/(D_++D_-)-ln[1-tanh^2(ζ/4)]};

see Ref. <cit.> for details. Note that Eq. (<ref>) is derived assuming a homogeneous surface potential ϕ_0, and that only the gauged potential ζ=ϕ_0-ϕ_D is relevant for an oil-dispersed colloidal particle.
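A quick numerical sign check of this expression (up to the positive prefactor 4k_BT/η_oκ_o^2) is sketched below, using the oil-phase diffusion coefficients listed in Sec. <ref>; it reproduces the sign statements made in the next paragraph.

```python
import numpy as np

# Sign of the slip-velocity coefficient b(zeta) from the expression above, up to the
# positive prefactor 4 k_B T / (eta_o kappa_o^2), for the two salts considered.
def b_sign(zeta, D_plus, D_minus):
    ratio = (D_plus - D_minus) / (D_plus + D_minus)
    return 0.5*zeta*ratio - np.log(1.0 - np.tanh(zeta/4.0)**2)

zeta = np.linspace(-4.0, 4.0, 9)
print(b_sign(zeta, 3.44, 2.91))   # HBr in oil: b > 0 for all zeta
print(b_sign(zeta, 1.78, 2.91))   # TBAB in oil: b <= 0 for 0 <= zeta <~ 2, b > 0 otherwise
```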
We therefore conclude that without TBAB, attractions are provided solely by the image-charge forces. When TBAB is added, we find that b≤0 for 0≤ζ≲2 and b>0 otherwise. For TBAB in oil, the negatively charged particles therefore experience a repulsive diffusiophoretic force from the oil-water interface, while positively charged particles are attracted for 0≤ζ≲2, but are repelled otherwise. Assuming that for TBAB in water the particles are always positively charged, particles with ζ>2 are attracted to the interface by diffusiophoresis. Given that the particles in our studies were (relatively) highly charged, all forces except the vdW force (i.e., the image-charge, colloid-ion and diffusiophoretic forces) are attractive in this specific case. We conclude that diffusiophoresis could possibly account for the long-range repulsion or attraction near the oil-water interface, since concentration gradients occur over a scale that is much larger than the Debye screening length. In fact, it could suggest that diffusiophoresis is the dominant force-generating mechanism outside of the double layer near the oil-water interface. However, the equilibrium considerations in Sec. <ref> are pivotal to understanding why colloidal particles can be detached in the first place.

§ CONCLUSION AND OUTLOOK

In this paper, we discussed colloid–oil-water-interface interactions and ion dynamics of PMMA colloids dispersed in a non-polar oil at an oil-water interface, in a system with up to three ionic species. We have applied a formalism that includes ion partitioning, charge regulation, and multiple ionic species to recent experiments <cit.>, to discuss (i) how the charges on the water and oil side of the oil-water interface can change upon addition of salt, (ii) how charge inversion of interfacially trapped non-touching colloidal particles upon addition of salt to the oil phase can drive particles towards the bulk over long distances, followed by reattachment at large times, (iii) that particles that cannot invert their charge stay trapped at the interface, and (iv) that colloids in bulk can be driven closer to the interface by adding salt to the water phase. We used equilibrium and dynamical calculations to show that these phenomena stem from a subtle interplay between long-distance colloid-ion forces, mid-distance image forces, short-distance vdW forces, and possibly out-of-equilibrium diffusiophoretic forces. The colloid-ion forces are the most easily tunable of the three equilibrium forces, because they can be tuned from repulsive to attractive over a large range of interaction strengths. We have shown this explicitly by including three ionic species in the theory, and by investigating various charge regulation mechanisms, extending the formalism of Ref. <cit.>. For future directions, we believe that it would be useful to investigate many-body effects, in a similar fashion to Ref. <cit.>. There are, however, two drawbacks of the method of Ref. <cit.> that need to be amended before we could apply it to a system of non-touching colloids. First of all, in Ref. <cit.>, a Pieranski potential <cit.> was used to ensure the formation of a dense monolayer at the oil-water interface. It would be interesting to see if the trapping of particles near the interface can be found self-consistently by the mechanism presented here and the one of Ref. <cit.>, by using a repulsive vdW colloid-interface potential. Secondly, the formalism of Ref. <cit.> was set up for constant-charge particles.
In the constant-charge case, it is a good approximation to replace the particle nature of the colloids by a density field. For charge-regulating particles, this can be a limiting approximation, because one needs the surface potential, and not the laterally averaged electrostatic potential, to determine the colloidal charge. Investigating many-body effects can be interesting because colloidal particles present in bulk contribute to the Donnan potential. This is not the case when all the colloids are trapped near the interface, since the electrostatic potential generated by the colloids then cannot extend through the whole system volume. Finally, a dense monolayer can provide an additional electrostatic repulsion for colloids, in addition to the repulsive colloid-ion force for Z(∞)ϕ_D<0 and the repulsive vdW force. Therefore, we expect that the interplay of the colloidal particles with ions can be very interesting on the many-body level, especially when we include not only image-charge and ion-partitioning effects, but, most importantly, also charge regulation. However, it is not trivial to take all these effects into account in a many-body theory. Another direction that we propose is to perform the ion dynamics calculation of Sec. <ref> in the presence of a single (and maybe stationary) charged sphere near an oil-water interface. This would give insights into the out-of-equilibrium charging of charge-regulating particles, providing more information on the tunability of colloidal particles trapped near a “salty” dielectric interface. We acknowledge financial support of a Netherlands Organisation for Scientific Research (NWO) VICI grant funded by the Dutch Ministry of Education, Culture and Science (OCW) and from the European Union's Horizon 2020 programme under the Marie Skłodowska-Curie grant agreement No. 656327. This work is part of the D-ITP consortium, a program of the Netherlands Organisation for Scientific Research (NWO) funded by the Dutch Ministry of Education, Culture and Science (OCW). J.C.E. performed the theoretical modelling and numerical calculations under the supervision of S.S. and R.v.R. The experiments were performed by N.A.E. and J.E.S.v.d.H. under the supervision of A.v.B. The paper was co-written by J.C.E. and S.S., with contributions from N.A.E., J.E.S.v.d.H., A.v.B. and R.v.R. The supplemental information is provided by N.A.E. and J.E.S.v.d.H. All authors discussed the results and revised the paper. | http://arxiv.org/abs/1703.08892v2 | {
"authors": [
"Jeffrey C. Everts",
"Sela Samin",
"Nina. A. Elbers",
"Jessi E. S. van der Hoeven",
"Alfons van Blaaderen",
"René van Roij"
],
"categories": [
"cond-mat.soft",
"cond-mat.stat-mech",
"physics.chem-ph"
],
"primary_category": "cond-mat.soft",
"published": "20170327013110",
"title": "Colloid-oil-water-interface interactions in the presence of multiple salts: charge regulation and dynamics"
} |
K. Mazur and B. Ksiezopolski Institute of Computer Science, Maria Curie-Sklodowska University, pl. M. Curie-Sklodowskiej 5, 20-031 Lublin, Poland [email protected] Polish-Japanese Institute of Information Technology, Koszykowa 86, 02-008 Warsaw, Poland [email protected] On Data Flow Management: the Multilevel Analysis of Data Center Total Cost Katarzyna Mazur^1 and Bogdan Ksiezopolski^1,2 March 27, 2017

Information management is one of the most significant issues in today's data centers. The selection of appropriate software and security mechanisms, and effective energy consumption management, together with caring for the environment, call for a profound analysis of the considered system. Besides these factors, the financial analysis of data center maintenance is another important aspect that needs to be considered. Data centers are mission-critical components of all large enterprises and frequently cost hundreds of millions of dollars to build, yet few high-level executives understand the true cost of operating such facilities. Costs are typically spread across IT, networking, and facilities, which makes the management of these costs and the assessment of alternatives difficult. This paper deals with research on the multilevel analysis of data center management and presents an approach to estimate the true total costs of operating data center physical facilities, taking into account the proper management of the information flow.

§ INTRODUCTION The challenges faced by companies working in today's complex IT environments pose the need for comprehensive and dynamic systems to cope with the information flow requirements <cit.>, <cit.>, <cit.>. Planning cannot answer all questions: we must take a step further and discuss a model for application management. One of the possible approaches to deal with this problem is to use a decision support system that is capable of supporting decision-making activities. In <cit.>, we proposed the foundations of our decision support system for complex IT environments. Developing our framework, we examined the time, energy usage, QoP, finances and carbon dioxide emissions. Regarding the financial and economic analyses, we considered only variable costs. However, when calculating the total operating cost of a data center, one needs to take into account both fixed and variable costs, which are affected by complex and interrelated factors. In this paper, performing a financial analysis of security-based data flow, we present an improved method for measuring the total cost of its maintenance, exploring the trade-offs offered by different security configurations, performance variability and economic expenditures. With the proposed approach it is possible to reduce the data center costs without compromising data security or quality of service.
The main contributions of this paper are summarized as follows: * we enhanced the previous studies on security management presented in <cit.>, extending them with the analysis of fixed costs, * we proposed a full cost model for data centers, being an economic method of their evaluation, in which all costs of operation, maintenance and disposal are considered important, * we prepared a case study, in which we: * applied the developed financial model to an example data center, in order to show how the model actually works, * evaluated the proposed economic scheme and analyzed the distribution of data center maintenance costs over five years, taking into account different security levels and comparing them with reference to the data center total costs, * based on the results gathered with the introduced method, calculated the possible profits and return on investment values over five years, in order to choose the best option (considering the security of the information, along with high company incomes).

§ RELATED WORK The total cost of the data center is somewhat elusive to project accurately. There are many subtleties which can be overlooked, simply unaccounted for, or perhaps underestimated over the operational life of a data center. Looking to involve all existing elements of a modern data center in the cost calculations, several approaches have been proposed in the literature (<cit.>, <cit.>, <cit.>). In <cit.> the authors presented a way of predicting the total budget required to build a new data center. They distinguish three primary construction cost drivers, namely: power and cooling capacity and density, tier of functionality, and the size of the computer room floor. They apply the proposed cost model, providing calculations for an example data center. The researchers state that their cost model is intended as a quick tool that can be applied very early in the planning cycle to accurately reflect the primary construction cost drivers. However, the proposed model makes some rigid assumptions about the data center (such as the minimum floor space or the type of the utilized rack), making it quite inflexible. Moreover, as the authors themselves admit, some significant costs were not included in this cost model (for instance, the operational costs). An interesting approach to assessing and optimizing the cost of computing at the data center level is presented in <cit.>. Here, the researchers consider five main components that should be taken into account while evaluating the total cost of a data center: the construction of the data center building itself, the power and cooling infrastructure, the cost of electricity to power (and cool) the servers, and the cost of managing those servers. Besides creating a cost model for the data center, the authors examine the influence of server utilization on the total cost of a data center, and state that an effective way to decrease the cost of computing is to increase server utilization. Another method for assessing the total cost of a data center is proposed in <cit.>. The approach examined in <cit.> is a part of research which seeks to understand and design next generation servers for emerging ”warehouse computing” environments. The authors developed cost models and evaluation metrics, including an overall metric of performance per unit total cost of ownership. They identify four key areas for improvement, and study initial solutions that provide significant benefits. Unlike our approach, <cit.> focuses mainly on performance and calculates data center costs only on its basis.
The method proposed in our paper considers different aspects at once, which makes it flexible and suitable for heterogeneous environments.

§ METHOD FOR ESTIMATING THE TOTAL COST OF A DATA CENTER

Although existing cost models for the data center include many components, none of them mention security as a significant factor. However, security influences data center costs as well: proper security management translates into better utilization of central resources as well as reduced systems management and administration. In this section, we present and describe the formulas utilized in the proposed analysis process - in particular, in the economic and financial analyses, extending them with the calculation of fixed costs. The introduced equations are used to evaluate the financial aspect of data center maintenance. Performing the financial analysis, we took into consideration the cost of power delivery, the cost of cooling, software and hardware expenditures, as well as personnel salaries (both fixed and variable costs). The proposed method of data center cost evaluation is the sum of the present values of all costs over the lifetime of a data center (including investment costs, capital costs, financing costs, installation costs, energy costs, operating costs, maintenance costs and security assurance costs). As shown in Figure <ref>, in the introduced financial and economic analyses, we distinguished 3 main components: the cost of power delivery, the cost of cooling infrastructure utilization and the operational costs. Each of them can be further specified. Firstly, we introduce the general formula used for calculating total energy costs. The following equation is further detailed in terms of CPU and server utilization: ς_power = κ·χ·ρ·σ, where: κ - is the average power drawn by the server in kilowatts (that is, the kilowatt-hours utilized per hour of operation), χ - is the number of hours per day when the server was busy, ρ - is the number of days when the server was busy, σ - is the cost of one kWh in US dollars. The above formula is the crucial point in computing the total cash outlay for the considered system. The original equation (<ref>) is elaborated in the following sections.

§.§ Cost of Power Delivery

§.§.§ Server Power Consumption The design of the electrical power system must ensure that adequate, high-quality power is provided to each machine at all times. Apart from the back-up system operational expenditures, a data center spends a lot of money on current power consumption - utilized both for the compute, network and storage resources. At this point one should pay special attention to the relationship between the CPU utilization and the energy consumed by the other components of a working server. Since the amount of power consumed by a single machine consists of the energy usage of all its elements, it is obvious that a decrease in the energy consumed by the CPU will result in a lower energy consumption in total. Thus, the total cost of both power delivery and utilization can be summarized as follows: ς_power = κ_busy+idle·σ·χ·ρ, where: κ_busy+idle - is the average power drawn by the server in kilowatts (busy and idle components combined), χ - is the number of hours per day when the server was busy, ρ - is the number of days when the server was busy, σ - is the cost of one kWh.

§.§.§ CPU Power Consumption Being aware of the price of one kWh, and knowing that the CPU worked χ hours per day through ρ days, drawing κ kilowatts on average, it is fairly straightforward to calculate the total financial cost (ς_CPU) of its work, using equation (<ref>).
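As a concrete illustration, the cost formula above reduces to a one-line function. This is a sketch; the reading of κ as the average busy-hour power draw follows the worked example in the case study below.

def power_cost(kappa_kw, chi_hours, rho_days, sigma_price):
    """Electricity cost in dollars, following the equation above.
    kappa_kw    -- average power drawn while busy [kW]
    chi_hours   -- busy hours per day
    rho_days    -- number of busy days
    sigma_price -- electricity price [$ per kWh]"""
    return kappa_kw * chi_hours * rho_days * sigma_price

For instance, a machine drawing 0.5 kW around the clock for 30 days at 0.08 $/kWh costs power_cost(0.5, 24, 30, 0.08) = 28.8 $.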
Before we start the further evaluation of the energy consumed by the CPU, we need to make some assumptions about its utilization. Let us introduce the simplified CPU utilization formula: U = R/C, where: U - stands for the CPU utilization, expressed as a percentage, R - defines our requirements, the actual busy time of the CPU (seconds), C - is the CPU capacity, the total time spent on the analysis (seconds). Usually, the CPU utilization is measured as a percentage. The requirements specified in the above formula refer to the time we require from the CPU to perform an action. This time is also known as the busy time. The CPU capacity can be expressed as the sum of the busy and idle time (that is, the total time available to the CPU). Put simply, one can say that over a 1-minute interval, the CPU can provide a maximum of 60 of its seconds (power). The CPU capacity can then be understood as busy time + idle time (the time which was used plus the one which was left over). Using the above simplifications, when going multi-core, the CPU capacity should be multiplied by the number of CPU cores (C = C · cores). The presented equation (<ref>) can be further detailed as follows: load [%] = time_session · users / time_total, where time_total is expressed as time_session · users + time_idle. Supposing a specified CPU load and assuming the server is able to handle a defined number of users within a given time, we can calculate the idle time using the above equation. Regarding the busy time, one should use the results obtained for the prepared example model. This simple formula can be used for calculating the total energy consumed by the CPU. Knowing the amount of energy utilized by the CPU, it is quite straightforward to assess the costs incurred for the consumed energy.

§.§ Cost of Cooling Infrastructure Utilization As the cooling infrastructure absorbs energy to fulfill its function, the cost of cooling needs to be included in the total cost of server maintenance. To obtain an approximate amount of the power consumed by the cooling, one can use the equipment heat dissipation specifications, most often expressed in British Thermal Units (BTU). Specifications generally state how many BTU are generated in each hour by the individual machine. Therefore, the formula for calculating the cooling cost needed to keep the equipment in normal operating conditions is given as follows (values per server): ς_cooling = BTU_cooling·σ·χ·ρ = κ_cooling·σ·χ·ρ, where: BTU_cooling - is the hourly amount of BTUs handled by the cooling system, κ_cooling - is the equivalent average power of the cooling system in kilowatts (1 kW ≈ 3412 BTU/h), χ - is the number of hours per day when the cooling system was busy, ρ - is the number of days when the cooling system was busy, σ - is the cost of one kWh.

§.§ Operational Costs The operational costs of data center management depend on miscellaneous factors - among them one can enumerate the salaries of the employees responsible for managing the servers (along with the number of employees needed to adequately maintain and operate the data center), the prices of equipment, the reduction in the value of the servers with the passage of time and, finally, the software and licensing costs.

§.§.§ Hardware and Software Costs In order to determine the complete cost of a data center, software and licensing costs should be analyzed as well. The server one purchases may, or may not, include an operating system. When it comes to selecting a server OS, high-end server operating systems can be quite expensive.
Besides the operating system, one also needs to budget for the software applications the server will need in order to perform its tasks. The dollar amounts can add up quite quickly in this area, depending on the role of the data center and the server itself. It is very common in high-end server applications to offer per-core licensing for some editions of the software. Dealing with hardware costs, one should be aware of server depreciation as well. Hence, the annual hardware cost is in fact an amortization cost, calculated as follows: ς_hardware amortization = ς_server/ω_server, where: ς_server - is the purchase cost of a server (in US dollars), ω_server - is the average lifetime of a single server (in years). When it comes to brand new equipment, besides the amortization expenditures (the overall costs associated with installing, maintaining, upgrading and supporting a server), one should consider the purchasing costs as well. That said, the total annual hardware and software cost of a single machine can be estimated using the formula below: ς_hardware software = ς_hardware purchase cost + ς_hardware amortization + ς_total licensing costs.

§.§.§ Personnel Costs The day may come when data centers are self-maintaining, but until then, one will need personnel to operate and maintain server rooms. The internal personnel of a data center usually consist of IT staff, data center security personnel, the data center managers, facilities maintenance personnel and housekeeping personnel. If the data center contains tens - if not hundreds - of thousands of working machines, it is common to have more than one employee dealing with the given equipment. Discussing personnel costs, they can be calculated as the total number of employees multiplied by the salary of the particular staff member (to simplify our evaluation, we consider the average salary for every employee in the enterprise): ς_personnel = (α_IT + α_w + α_hf) · S_avg, where: α_IT - is the total number of IT personnel, α_w - is the total number of ordinary workers, α_hf - is the total number of housekeeping and facilities maintenance personnel, S_avg - is the average salary in the enterprise (per month).

§.§ Total Cost Once the data center is built, it still requires financial investment to ensure high-quality, competitive services with guaranteed levels of availability, protection and support, continuously, 24 hours a day, 7 days a week. Key elements in data center budgets are the power delivery system, the networking equipment and the cooling infrastructure. Besides the above most crucial factors, there exist additional costs associated with data center operation, such as personnel and software expenses. Therefore, the real operating cost of the data center can be expressed as: ς_total = ς_power + ς_cooling + ς_operation, where each of the defined components consists of further operational expenditures. This concludes the discussion on calculating the total cost of data center maintenance, resulting in the following formula (being the combination of all the above equations):

ς_total = ς_power + ς_cooling + ς_operation = ς_power + ς_cooling + ς_hardware software + ς_personnel = σ·χ·ρ·(κ_total power + κ_total cooling) + ς_hardware purchase cost + ς_hardware amortization + ς_total licensing costs + ς_personnel = σ·χ·ρ·(κ_total power + κ_total cooling) + ς_hardware purchase cost + ς_server/ω_server + ς_total licensing costs + (α_IT + α_w + α_hf) · S_avg.
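A compact sketch of the combined formula is given below. Two assumptions are made explicit here: all inputs are annual aggregates, and the monthly salary S_avg is multiplied by 12 to obtain a yearly personnel figure, a step the equation above leaves implicit.

def total_cost(sigma, chi, rho, kappa_power, kappa_cooling,
               hw_purchase, server_price, server_lifetime_years,
               licensing, n_it, n_workers, n_housekeeping, s_avg_month):
    """Annual data center operating cost following the combined equation above."""
    energy = sigma * chi * rho * (kappa_power + kappa_cooling)          # power + cooling
    amortization = server_price / server_lifetime_years                 # yearly depreciation
    hw_sw = hw_purchase + amortization + licensing                      # hardware and software
    personnel = (n_it + n_workers + n_housekeeping) * s_avg_month * 12  # yearly salaries
    return energy + hw_sw + personnel

Note that the hardware and software term is annual, as the next paragraph stresses, so multi-year estimates require rescaling it accordingly.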
It is significant to remember that ς_hardware software refers to the annual hardware and software costs. Therefore, when calculating the total cost of data center maintenance, one needs to adjust this value individually, depending on the considered analysis time interval.

§ CASE STUDY: SECURITY-BASED DATA FLOW MANAGEMENT IN A DATA CENTER

§.§ Environment Definition To demonstrate the use of the proposed analysis scheme, we used the role-based access control approach, prepared an example data center scenario and analyzed it with the help of the introduced method. We made use of QoP-ML <cit.>, <cit.> and created, by its means, a role-based access control model to examine the quality of the chosen security mechanisms in terms of the financial impact on data center maintenance. Before we perform the actual estimation of the data center maintenance cost, let us give some assumptions about the examined environment. Consider a call center company located in Nevada, USA, managing a typical IT environment of 42U server racks (520 physical servers in total, 13 physical servers per rack). Given a specified load capacity, the servers handle the enterprise traffic continuously, 24 hours a day. In our analysis, we assume that all the utilized applications are tunnelled by the TLS protocol. In the considered access control method, users are assigned to specific roles, and permissions are granted to each role based on the users' job requirements. Users can be assigned any number of roles in order to conduct day-to-day tasks (Figure <ref>). In order to emphasize and prove the role's influence on data flow management and the system's performance, we prepared and analyzed a simple scenario. This scenario refers to a real business situation and a possible role assignment in an actual enterprise environment. Given the example enterprise network infrastructure, consider having three roles: role1, role2 and role3, with the corresponding security levels: low, medium and high. Each server in the example call center is equipped with the Intel Xeon X5675 processor, being able to handle the required number of employees' connections, regardless of the assigned RBAC role. The prepared scenarios are listed in Table <ref>. The QoP-ML security models used in our case study can be downloaded from the QoP-ML project webpage <cit.>.

§.§ Multilevel Assessment of Data Center Total Cost After introducing the environment, we present an overview of predicting the total budget required to manage an example data center, focusing on the introduced method for measuring its total cost and indicating possible gains. By the analysis of an example scenario, we try to confirm our thesis about the influence of security management on the total cost of data center maintenance. * Cost of Power Delivery To calculate the total cost of power delivery, we performed both experimental and theoretical investigations. Regarding the experiments, we utilized a Dell UPS to measure the current power consumption of a single Dell PowerEdge R710 server performing the operations defined in our scenario. In order to obtain the most accurate results for the CPU power consumption, we made use of our model and performed the analysis using the AQoPA tool <cit.>. To ensure the accuracy of the gathered results, in our simulation we utilized the real hardware metrics provided by the CMT <cit.> for the Dell PowerEdge R710 server. * Server Power Consumption According to the Dell UPS in the laboratory, the PowerEdge R710 performing the defined operations consumes 300 W on average.
Since the server handles the enterprise's traffic continuously, 24 hours a day, its annual power consumption is equal to 2 628 kWh. As stated by <http://www.electricitylocal.com/states/nevada/las-vegas/>, the average industrial electricity rate in Las Vegas, Nevada is 7.56 cents (0.0756 $) per kWh. If the server works 365 days a year, it will cost the company about 198.68 $. * CPU Power Consumption When it comes to the CPU power consumption, we assumed its load to be equal to 90%. We performed a simulation using the prepared model, considering 3 levels of user permissions (which differ in security level). In Table <ref> we collected the power consumption costs, both for the CPU and for the server in total (rounded up to the nearest dollar). As evidenced by Table <ref>, considering only the cost savings related to the consumed energy, it is possible to handle about 107.64% (when switching between the first and the third role) and 77.32% (when changing the role from the third to the second) more users at the same CPU load. Those figures, when put in the context of a large data center environment, quickly become very significant. * Cost of Cooling Infrastructure Utilization In addition to the power delivered to the compute hardware, power is also consumed by the cooling resources. The load on the cooling equipment is directly proportional to the power consumed by the compute hardware. In such a case, the cost of the cooling infrastructure is equal to the cost of the energy consumed by the server and its CPU. * Operational Costs Apart from electricity and cooling, when calculating the total maintenance cost of a data center, one should also take into consideration the cost of its physical infrastructure, such as the hardware amortization and the actual price of the physical machines. Besides the equipment cost, the operational expenditures must be covered as well. Determining the approximate total cost of the whole data center, we assumed the use of Dell PowerEdge R710 servers (2 259 $ each). However, we have included neither the network nor the storage footprint (nor its equipment). Regarding the utilized software, we assumed that the working machines have Windows Server Data Center Edition installed (whose price is equal to about 5 497 dollars). Besides the OS, the workstations use some proprietary software, which can cost about 10 000 $ on average (<cit.>, <cit.>). (We assume the cost of the software for a single machine to be equal to 13 050 $.) The personnel of the whole data center consist of security managers, system operators, call center employees and housekeeping and facility maintenance personnel, resulting in 1 035 employees in total. As stated by <http://swz.salary.com>, the median salary of a security manager in the US is equal to about 7 058 dollars per month (at the time of writing). In our estimation, we used this value as the average salary in our call center company.

§.§ Discussion In order to prove that proper data flow management has a significant impact on data center maintenance costs, we tried to estimate them over 5 years. Although it might not be easily noticed at first glance, it turned out that our approach can bring meaningful savings and drive a rapid increase in the return on investment (ROI). (Table <ref> explores this concept in more detail.) In our approach, the economic profits come from the number of handled users - the more customers (served clients), the higher the company profits. At the same CPU load, the same number of working machines is capable of handling a greater number of users.
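As a quick sanity check, the per-server energy figures quoted above can be reproduced in a few lines (a sketch; the 300 W draw, the 0.0756 $/kWh rate and the cooling-equals-IT-load assumption are taken from the text):

power_kw = 0.3                      # measured average draw of the R710 server
rate = 0.0756                       # $ per kWh, Las Vegas industrial rate
kwh_year = power_kw * 24 * 365      # 2628.0 kWh per year
server_cost = kwh_year * rate       # ~198.68 $ per year
cooling_cost = server_cost          # cooling assumed equal to the IT load
per_rack = 13 * (server_cost + cooling_cost)
print(kwh_year, round(server_cost, 2), round(per_rack, 2))  # 2628.0 198.68 5165.6

Scaled to the whole 520-server environment, the electricity bill alone is thus on the order of 0.2 million dollars per year.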
By switching between the strongest and weakest security mechanisms, the example call center company can achieve actual ROI growth. Our analysis showed that it is possible to provide effective services and keep the utilization of hardware resources at a certain level. Since we can accomplish the given goal using weaker security mechanisms, in many situations it is wasteful to assign too many hardware resources to perform the given task. Applying the proposed solution to the existing IT environment, one can observe a serious increase in company incomes, while preserving the efficiency, utilization and flexibility of the existing computer hardware. One of the main pricing models of a call center company concerns cost per contact, where all costs are combined into a unit price. The total income is then based on the number of served clients, such as calls, emails or chat sessions. For the example call center, we assumed that, on average, a single served customer brings an income of about 3 US dollars. As was proven by the time analysis presented in <cit.>, a server working with role3 permissions is able to handle about 3 474 users within an hour at 90% CPU load. Since we assumed that the number of users grows linearly, within 24 hours it gives us 3 474 · 24 = 83 376 users a day, resulting in 83 376 · 365 = 30 432 240 connections a year per server. If we assume that we have the whole data center at our disposal, it turns out that we can serve roughly 30 432 240 · 520 = 15 824 764 800 users assigned role3 permissions a year. At the same CPU load, using role1 permissions, a server is capable of dealing with 11 571 users within an hour, which results in 11 571 · 24 · 365 = 101 361 960 served customers a year per single machine and 52 708 219 200 clients per whole data center. When we translate the above calculations into the incomes and outcomes of the company, we see that proper information management brings a variety of economic advantages. According to our previous assumptions, given the incomes equal to 158 124 657 600 $, 69 954 206 400 $ and 47 474 294 400 $ and outcomes of 45 584 797 952 $, 45 584 797 948 $ and 45 584 797 951 $ for roles 1, 2 and 3 respectively, in Table <ref> we calculated the actual profits and ROI values of the company for over 5 years. As summarized in Table <ref>, considering the first working year of the example call center, the efficiency of the investment is much bigger when we handle users using role 1's security mechanisms (compared to role 3), and about 13.5 times greater when considering roles 2 and 3. Real profits can be observed after the 5 years of call center business activity. The analysis shows that if the company used the first role permissions instead of those from role 2, it could gain about 5 times more money. What is more, if we consider the third and the first role, the profits grow rapidly, resulting in an about 60 times greater gain. Since high ROI values mean that the investment gains compare favorably to the investment cost, and the primary goal of the call center is a fast return on the investment, the company should re-think the implemented information flow mechanisms. § CONCLUSIONS As proved by our analyses, the main drivers of data center cost are power and cooling. In contrast to operational costs, they represent variable costs, which vary over time and depend on many factors. As power consumption and electricity prices rise, energy costs are receiving more scrutiny from senior-level executives seeking to manage dollars.
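For reference, the throughput, income and ROI arithmetic of the Discussion above can be condensed into a short script (a sketch; the per-hour user capacities, the 3 $ income per served customer and the role-wise outcomes are taken from the text and Table <ref>):

servers, revenue = 520, 3.0                          # fleet size, $ per served customer
users_per_hour = {"role1": 11_571, "role3": 3_474}   # from the timing analysis
outcomes = {"role1": 45_584_797_952, "role3": 45_584_797_951}

for role in ("role1", "role3"):
    per_server_year = users_per_hour[role] * 24 * 365
    income = per_server_year * servers * revenue
    roi = (income - outcomes[role]) / outcomes[role]
    print(role, per_server_year, income, round(roi, 3))
# role1: 101 361 960 users/server/yr, income ~158.1e9 $, ROI ~2.469
# role3:  30 432 240 users/server/yr, income ~47.5e9 $,  ROI ~0.041

The ratio of the two ROI values, about 60, matches the role1-versus-role3 gain quoted above.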
However, focusing on the financial aspect of the data center, one cannot forget about proper data management. In this paper, we utilized QoP-ML to increase company incomes without compromising data security or quality of service. The proposed analysis scheme provides new opportunities and possibilities, not only for measuring data center costs, but also for increasing incomes. Optimization of the available computational power can be accomplished in many different ways: by modifying system configurations, by switching between the utilized security mechanisms, by the suitable selection of the used applications and services, and by adequate application management.

dc1 A. Greenberg, J. Hamilton, D. A. Maltz, P. Patel, “The cost of a cloud: Research problems in data center networks,” Microsoft Research.
dc2 J. G. Koomey, “Estimating total power consumption by servers in the U.S. and the world,” PhD Thesis.
dc3 C. D. Patel, A. J. Shah, “Cost model for planning, development and operation of a data center,” HP Laboratories Palo Alto.
datacenter_sec K. Mazur, B. Ksiezopolski and A. Wierzbicki, “On Security Management: Improving Energy Efficiency, Decreasing Negative Environmental Impact and Reducing Financial Costs for Data Centers,” Mathematical Problems in Engineering, 418535, 1–19.
costmodel1 W. Pitt Turner and Kenneth G. Brill, “Cost model: dollars per kW plus dollars per square foot of compute floor,” White paper, Uptime Institute, 2008.
costmodel2 J. Karidis, J. E. Moreira and J. Moreno, “True Value: Assessing and Optimizing the Cost of Computing at the Data Center Level,” IBM Research Report, 2009.
costmodel3 Kevin Lim, Parthasarathy Ranganathan, Jichuan Chang, Chandrakant Patel, Trevor Mudge and Steven Reinhardt, “Understanding and Designing New Server Architectures for Emerging Warehouse-Computing Environments,” Proceedings of the ACM International Symposium on Computer Architecture, Beijing, China, June 2008.
qopksiez B. Ksiezopolski, “QoP-ML: Quality of protection modelling language for cryptographic protocols,” Computers & Security, 31, 569–596, 2012.
qopbmlbook B. Ksiezopolski, “Multilevel Modeling of Secure Systems in QoP-ML,” CRC Press Taylor & Francis Group, 2015.
qopmlweb Official web page of the QoP-ML Project, <http://qopml.org/>
aqopa D. Rusinek, B. Ksiezopolski and A. Wierzbicki, “AQoPA: Automated Quality of Protection Analysis framework for complex systems,” 14th International Conference on Computer Information Systems and Industrial Management Applications, Warsaw, Springer: LNCS, v. 9339, 475–486, 2015.
mysql MySQL Server, <https://www.mysql.com/products/>
ftp Titan FTP Server, <http://southrivertech.com/>
cmt K. Mazur, B. Ksiezopolski and Z. Kotulski, “The robust measurement method for security metrics generation,” The Computer Journal, Oxford, v. 58 (10), 2280–2296, 2015. | http://arxiv.org/abs/1703.09316v1 | {
"authors": [
"Katarzyna Mazur",
"Bogdan Ksiezopolski"
],
"categories": [
"cs.NI",
"cs.DC"
],
"primary_category": "cs.NI",
"published": "20170327213117",
"title": "On Data Flow Management: the Multilevel Analysis of Data Center Total Cost"
} |
The observed evolution of the broad-band spectral energy distribution (SED) in the NS X-ray Nova Aql X-1 during the rise phase of a bright FRED-type outburst in 2013 can be understood in the framework of thermal emission from an unstationary accretion disc with a temperature radial distribution transforming from a single-temperature blackbody emitting ring into the multi-colour irradiated accretion disc. The SED evolution during the hard to soft X-ray state transition looks curious, as it cannot be reproduced by the standard disc irradiation model with a single irradiation parameter for the NUV, Optical and NIR spectral bands. The NIR (NUV) band is correlated with the soft (hard) X-ray flux changes during the state transition interval, respectively. In our interpretation, at the moment of the X-ray state transition the UV-emitting parts of the accretion disc are screened from direct X-ray illumination from the central source and are heated primarily by hard X-rays (E>10 keV), scattered in the hot corona or wind possibly formed above the optically thick outer accretion flow; the outer edge of the multi-colour disc, which emits in the Optical-NIR, can be heated primarily by direct X-ray illumination. We point out that future simultaneous multi-wavelength observations of X-ray Nova systems during the fast X-ray state transition interval are of great importance, as they can serve as an "X-ray tomograph" to study the physical conditions in the outer regions of the accretion flow. This can provide an effective tool to directly test the energy-dependent X-ray heating efficiency, vertical structure and accretion flow geometry in transient LMXBs. Aql X-1, X-ray Nova, Soft X-ray Transient, accretion, accretion discs, binaries: close, stars: neutron, X-rays: binaries § INTRODUCTION X-ray Novae (XN), also called Soft X-ray Transients (SXT), are Low Mass X-ray Binaries (LMXB) showing transient accretion activity. During an accretion outburst the luminosity of the system in the X-ray spectral range, where the main energy release happens, rises up to 10^6 times with respect to the quiescence level. Observational studies of X-ray Nova systems are of fundamental importance for the physics of extreme states of matter. The majority (∼75%) of X-ray Novae systems contain a black hole candidate as the primary star <cit.>. Despite many existing studies of multi-wavelength light curves of outbursts in various transient LMXBs (see e.g. <cit.> and many other studies), there is a clear lack of detailed analyses focused on the beginning parts of XN outbursts (covering the stage of the initial flux rise from the quiescent state to the outburst maximum). Substantial attention is paid to the analysis of the decaying parts of FRED-type events (see e.g. <cit.>), which can be well reproduced in theoretical models of XN outbursts <cit.>. The outburst rise phase in X-ray Novae is much less studied, due to the lack of good quality multi-wavelength observational data during this time period. The fast rise stage in X-ray Novae usually has a much poorer coverage by multi-wavelength observations, mainly because of the relatively late detection of a new outburst by the currently on-orbit X-ray monitors (e.g. MAXI, SWIFT/BAT).
The existing studies covering the outburst rise period in X-ray Novae concentrate primarily on the measurement and interpretation of possible time delays between the IR-Optical-UV and X-ray light curves (see e.g. <cit.>). For the development of a better model of the accretion flow during XN outbursts, it is important to compare the spectral evolution predicted by the common theory of unstationary disc accretion with the observed spectral energy distribution (SED) evolution during the outburst rise phase in real X-ray Nova systems. In this work, we perform a detailed study of the broad-band SED evolution during the outburst rise phase in the famous NS X-ray Nova system Aql X-1, the most prolific SXT known to date. We present multi-wavelength observational data for the initial rising phase of the bright outburst in 2013, carried out during the monitoring campaign of Aql X-1 at the Swift orbital observatory and a few 1-m class ground-based optical telescopes. Our main aim here is to qualitatively compare the observed broad-band SED evolution in this prototypical NS X-ray Nova system to the theoretical expectations for the model of a non-stationary accretion disc, which is developed during the outburst rise phase. The article is organised as follows. In Section <ref> we describe the Aql X-1 system, its orbital and accretion disc parameters and the interstellar extinction to the source. In Section <ref> our observational data and its reduction are described. In Section <ref> we present the multi-band light curves and the derived SED measurements for the rising part of the Aql X-1 outburst, as well as the adopted spectral models. In Section <ref> we discuss the "X-ray tomograph" effect, working at the moment of the X-ray state transition in Aql X-1, as a promising observational tool for directly testing the energy-dependent X-ray heating efficiency and the vertical structure of the accretion disc in X-ray Novae systems. Our results for the Aql X-1 broad-band SED evolution during the outburst rise phase are presented in Section <ref>. In the last section our conclusions are drawn. § AQL X-1 Aql X-1 is a transient X-ray binary system in which a compact object accretes matter from an accretion disc which is supplied by the Roche lobe filling low mass companion. With more than 40 outbursts observed in the X-ray and/or optical bands since its discovery in 1965 <cit.>, Aql X-1 is the most prolific X-ray transient known to date (about 25 outbursts were detected in the 1996-2016 epoch). Observations of type I X-ray bursts <cit.> and coherent millisecond X-ray pulsations <cit.> led to a secure identification of the compact object in this system as a neutron star. The Aql X-1 X-ray spectral and timing behaviour classify it as an atoll source <cit.>. The optical counterpart of Aql X-1 is known to be an evolved K4±2 spectral type star <cit.>, with a quiescent magnitude of 21.6±0.1 mag in the V band <cit.>. An interloper star located only 0.48” east of the true counterpart heavily complicates the studies in the quiescent state (<cit.>; <cit.>). In the recent high angular resolution near-infrared spectroscopy observations <cit.> the first dynamical solution for Aql X-1 was obtained. Despite its frequent outbursts, there are few reported radio detections of Aql X-1, likely owing to the faintness of atoll sources in the radio band <cit.>. The available observations suggest that the radio emission is being activated both by transitions from a hard state to a soft state and by the reverse transition at lower X-ray luminosity.
The maximum radio flux density, 0.68±0.09 mJy (8.4 GHz), was detected at the moment of the state transition during the Aql X-1 outburst in Nov 2009 <cit.>. In all available multi-wavelength observations, the radio spectrum was flat or inverted, with the flux density scaling as F_ν∝ν^≳0 <cit.>. There is evidence for quenching of the radio emission at X-ray fluxes above 5·10^-9 erg/s/cm^2 (L_X≳0.1 L_Edd) <cit.>. *Orbital and accretion disc parameters of Aql X-1. The orbital parameters of Aql X-1 are well defined by previous extensive observational studies of this X-ray Nova system. In Table <ref> we provide the best estimates for the system parameters (orbital period P_orb, primary mass in solar units m_1, mass ratio q=m_2/m_1, system inclination i), the distance to the source D and the ephemeris for the time of the minimum of the outburst light curve T_0 (phase zero corresponds to the inferior conjunction of the secondary star), which we will use throughout this paper. By using the Aql X-1 binary system orbital parameters, we derived the characteristic accretion disc radii and the Roche lobe sizes for the primary and secondary stars in the binary system, in the following way. First, with the reasonable assumption of LMXB eccentricity e=0, the major semi-axis of the binary system a can be estimated from Kepler's law as: a = 3.52×10^10 m_1^1/3 (1+q)^1/3 (P_orb/1 h)^2/3 ≈ 3.1×10^11 cm. The effective radii of the Roche lobes of the primary (R_L1) and the secondary (R_L2) stars in the close binary system can be obtained from <cit.>. The compact object Roche lobe radius in the Aql X-1 system was estimated as R_L1/a = 0.49/[0.6 + q^2/3 ln(1+q^-1/3)] ≈ 0.46. For R_L2, one needs to replace q→q^-1 in the formula above: R_L2/a ≈ 0.30. Due to the angular momentum conservation of the accreting matter, the disc radius cannot be smaller than the circularisation radius (see the numerical simulation in <cit.> and its analytic approximation in <cit.>, 3% accurate for 0.03≤q≤10): R_circ/a = 0.074 [(1+q)/q^2]^0.24 ≈ 0.13. The maximal outer radius of the accretion disc in an LMXB can be estimated by the tidal truncation radius (see the numerical simulation in <cit.> and its analytic approximation in <cit.>, 3% accurate for the 0.06<q<10 range): R_tid/a = 0.112 + 0.270/(1+q) + 0.239/(1+q)^2 ≈ 0.43. *Extinction to Aql X-1 in X-rays and NUV-NIR. Extinction in the X-ray spectral range in the direction of Galactic LMXBs is caused by the photoionisation effect in the interstellar gas on the line of sight (if internal extinction in the vicinity of the source is negligible). With the reasonable assumption of solar chemical abundance, the value of the X-ray extinction to the source depends only on the hydrogen column density (N_H) parameter. Extinction in the NUV-NIR spectral range in the Galaxy is caused by absorption on interstellar dust grains. We adopted the standard extinction law <cit.> with fixed R_V=3.1. Then the value of the NUV-NIR extinction depends only on the color excess (E_B-V) parameter. Below in this section we obtain the best estimates of N_H and E_B-V for Aql X-1. First, we estimated the maximum N_H in the direction of Aql X-1 by using the common Leiden/Argentine/Bonn (LAB) Survey of Galactic HI <cit.> and the HI map of <cit.> (DL). The nH routine from the FTOOLS library (<cit.>, http://heasarc.gsfc.nasa.gov/ftools/) was used. Note that these maps have a limited resolution of approximately 0.5 degrees and 1 degree, respectively. We obtained the following estimates for the hydrogen column density: n_H≈2.48×10^21 atoms·cm^-2 (LAB) and n_H≈3.43×10^21 atoms·cm^-2 (DL).
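The geometric estimates above, and the column-density-based color excess discussed in the next paragraph, are easy to verify numerically. The sketch below is illustrative only: the actual P_orb, m_1 and q come from Table <ref>, so the values assumed here were simply chosen to reproduce the quoted results, and the N_H/A_V≈1.79×10^21 cm^-2 mag^-1 ratio is the commonly used value for the classical estimate cited below.

import numpy as np

# Assumed, illustrative orbital parameters (not the Table values):
P_orb_h, m1, q = 18.95, 1.4, 0.41

a = 3.52e10 * (m1 * (1.0 + q))**(1.0 / 3.0) * P_orb_h**(2.0 / 3.0)   # cm, ~3.1e11

def eggleton(qr):
    """Roche-lobe radius over separation for mass ratio qr = M_lobe/M_other."""
    return 0.49 * qr**(2.0 / 3.0) / (0.6 * qr**(2.0 / 3.0) + np.log(1.0 + qr**(1.0 / 3.0)))

RL1 = eggleton(1.0 / q)                                   # ~0.46
RL2 = eggleton(q)                                         # ~0.30
Rcirc = 0.074 * ((1.0 + q) / q**2)**0.24                  # ~0.13
Rtid = 0.112 + 0.270 / (1.0 + q) + 0.239 / (1.0 + q)**2   # ~0.43

E_BV = 3.6e21 / 1.79e21 / 3.1                             # ~0.65 mag

print(f"a={a:.2e} cm, R_L1/a={RL1:.2f}, R_L2/a={RL2:.2f}, "
      f"R_circ/a={Rcirc:.2f}, R_tid/a={Rtid:.2f}, E(B-V)={E_BV:.2f}")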
Given the substantial dispersion between the two estimates above, and taking into account the possibility of internal extinction in the source, we decided to adopt as the best N_H the value obtained from spectral fits during the outburst stage of Aql X-1. <cit.> obtained the following N_H estimate from the best fit to the Aql X-1 soft-state X-ray spectrum observed in outburst: N_H = (3.6±0.01)×10^21 atoms/cm^2. Note that this value agrees well with the hydrogen column density within the Galaxy in the direction of Aql X-1 estimated from <cit.>. Then, we estimated the color excess E_B-V in the direction of Aql X-1 by using the recalibrated Galaxy extinction maps based on the dust emission measured by COBE/DIRBE and IRAS/ISSA <cit.>: E_B-V≈0.65 mag. On the other hand, E_B-V can be estimated using the common N_H-A_V relation between the V-band optical extinction and the hydrogen column density. Using for N_H/A_V the classical estimate from <cit.> for the usual extinction law with parameter R_V=A_V/E_B-V=3.1, from (<ref>) one can obtain the following estimate for the Aql X-1 color excess: E_B-V≈0.65 mag, which coincides with the Galaxy extinction derived from the dust emission map <cit.>. It is worth noting that in <cit.> a close value of the color excess for Aql X-1 was measured from optical spectroscopy of the optical counterpart during the SXT quiescence state: E_B-V=0.5±0.1 mag for both the Aql X-1 optical counterpart and its close interloper star (the authors assume that both stars are reddened by the same amount), from joint spectral model fitting. Note that <cit.> mentions that a nearby (1'.4) B5 V star, which lies at a distance of 10 kpc, well above the Galactic dust layer, has an optical reddening of E_B-V≈0.73 mag. In this work we adopt (<ref>) and (<ref>) as the best estimates of the interstellar extinction in the direction of Aql X-1. § OBSERVATIONS AND DATA REDUCTION §.§ MAXI We downloaded the daily- and orbit-averaged light curves of Aql X-1 from the official MAXI Archive website[see http://maxi.riken.jp/top/]. For the counts-to-flux conversion, the Crab spectrum was assumed in the efficiency correction for each band. Fluxes in Crab units for the MAXI instrument were obtained using the standard conversions: 1 Crab approximately equals 3.6 ph/s/cm^2 in the total 2-20 keV band and 1.87, 1.24, 0.40 ph/s/cm^2 for the 2-4, 4-10, 10-20 keV bands, respectively. In order to obtain more accurate luminosities from the instrumental count rates in the 2-10 keV band, we derived the appropriate conversion factor by using an overlapping series of Swift/XRT 2-10 keV flux measurements in the time interval 56456-56461 MJD (±3 days around the state transition during the outburst rise). The derived count rate to flux conversion factor appeared to be close (only a +15% correction) to the standard conversion (1 Crab(2-10 keV)=3.11 counts cm^-2 sec^-1=2.156·10^-8 erg s^-1 cm^-2). §.§ Swift observatory The observatory <cit.> provides the possibility to obtain a simultaneous broadband view from the optical to hard X-rays, which is crucial for X-ray Novae studies. In this work we used the observations covering the rising phase of the Aql X-1 outburst, between 56450 and 56462 MJD (altogether 12 snapshot observations). Tables <ref> and <ref> provide the journals of the observations carried out by the XRT and UVOT instruments, respectively. Below we review the Swift data reduction in detail. §.§.§ XRT The XRT observed Aql X-1 both in Windowed Timing (WT) mode, while the transient was bright, and in Photon Counting (PC) mode, for the low count rate snapshots.
The data were processed using tools and packages available in FTOOLS/HEASOFT 6.14. Initial cleaning of events has been done using xrtpipeline with standard parameters. The following analysis was performed as described in <cit.>. In particular, for the PC mode data, the radius of the circular aperture for the source extraction depended on the count rate, ranging from 5 to 30 pixels <cit.>; for the WT mode data, the radius of the source extraction region was 25 pixels. The background was extracted from an annulus region with an inner (outer) radius of 60 (110) pixels in both the PC and WT observational modes. In the case of pile-up, the central region of the source was excluded to ensure a final count rate below 0.5 and 100 counts s^-1 for the PC and WT modes, correspondingly. The obtained spectra were grouped to have at least 20 counts bin^-1 using the FTOOLS grppha. To avoid any problems caused by the calibration uncertainties at low energies[http://www.swift.ac.uk/analysis/xrt/digest_cal.php], we restricted our spectral analysis to the 0.5-10 keV band. In this work we used the observations obtained during the outburst rise phase only (9 pointing observations containing 12 snapshots). The standard spectral analysis of the XRT data was performed. We successfully fitted (with χ^2_r≈1) the object's X-ray spectrum in each snapshot with the phenomenological phabs*(diskbb+powerlaw) model in the XSPEC package. The interstellar absorption parameter was fixed to the standard Aql X-1 value (see Table <ref>). Finally, we derived the 0.5-10 keV fluxes for all available Swift/XRT snapshot observations and present them in Table <ref>, together with the best-fit parameters of the adopted spectral models. The errors reported in Table <ref> are purely statistical and correspond to the 1σ confidence level. However, the ARF calibration uncertainties for the Swift/XRT instrument can reach 10%[Swift Helpdesk private communication] and were not included in our analysis. §.§.§ BAT The BAT detector provided the hard X-ray measurements of the outburst light curve. We downloaded the daily- and orbit-averaged light curves of Aql X-1 from the Hard X-ray Transient Monitor archive website[http://swift.gsfc.nasa.gov/results/transients/index.html]. For the counts-to-flux conversion, it was assumed that 1 Crab equals 0.220 counts/s/cm^2 in the 15-50 keV band. The 15-50 keV BAT fluxes were derived from the BAT count rate using the standard conversion: 1 Crab(15-50 keV)=0.22 ph/s/cm^2=1.345·10^-8 erg s^-1 cm^-2. §.§.§ UVOT The observation log is shown in Table <ref>. UVOT exposures were taken in six filters (V, B, U, UVW1, UVW2, and UVM2) for the first four observations and with the “filter-of-the-day” subsequently. The errors reported in Table <ref> are purely statistical and correspond to the 1σ confidence level. For the data reduction, images initially preprocessed at the Swift Data Center at the Goddard Space Flight Center were used. Subsequent analysis has been done following the procedure described at the web page of the UK Swift Science Data Centre.[http://www.swift.ac.uk/analysis/uvot/index.php] Namely, photometry was performed with the uvotsource procedure, with source apertures of radius 5 arc seconds and 10 arc seconds for the background, for all filters. Finally, spectral files for fitting in XSPEC were produced with the uvot2pha procedure.
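The count-rate-to-flux conversions quoted above are one-liners; the following sketch implements the BAT one (the Crab normalisations are the standard values given in the text):

def bat_rate_to_flux(rate):
    """Convert a Swift/BAT 15-50 keV rate [counts/s/cm^2] to flux [erg/s/cm^2]."""
    crab_rate = 0.220       # counts/s/cm^2 per Crab in the 15-50 keV band
    crab_flux = 1.345e-8    # erg/s/cm^2 per Crab in the 15-50 keV band
    return rate / crab_rate * crab_flux

print(bat_rate_to_flux(0.022))   # a 0.1 Crab source -> ~1.35e-9 erg/s/cm^2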
§.§ Ground-based optical data The Aql X-1 optical counterpart lies in a crowded field with 4 nearby interloper stars, separated from the Aql X-1 star only by 0”.48, 2”.6, 2”.4 and 1”.3, respectively <cit.>, which may produce contamination. The 0”.48 interloper star is substantially brighter (V=19.4 mag) than the Aql X-1 optical counterpart in the quiescence state (V=21.6 mag). Once an outburst begins, photons from Aql X-1 become dominant. In order to obtain a correct flux for the Aql X-1 counterpart in outburst, we subtracted the average flux levels measured during the period of X-ray Nova quiescence in 2012 and the pre-outburst period in 2013 (see Table <ref>). The optical data reduction procedure is described below. In Apr-Nov 2013 the following small-size ground-based optical telescopes participated in the multi-wavelength monitoring campaign of Aql X-1: * RTT-150, the joint Russian-Turkish 1.5-m Telescope (30^∘19'59.9”E, 36^∘49'31.0”N, 2538.6-m above sea level, TÜBITAK National Observatory, Turkey), equipped with the TFOSC focal instrument for direct imaging and spectral observations. The object was observed in the g', r', i', z' bands. * The 1.6-m telescope (100^∘55'13” E, 51^∘37'18.10” N, 2000-m above sea level, Sayan Observatory, Russia). For direct imaging and fast photometry, an sCMOS Andor camera was used. The object was observed in the R band. * The 1-m telescope (41^∘26'30” E, +43^∘39'12” N, 2070-m above sea level, Special Astrophysical Observatory, Russia). The object was observed in the R band; monitoring observations started after the outburst maximum in 2013 (we will not discuss data from this telescope in this work). * The 1.3-m telescope at Cerro Tololo (Chile). Aql X-1 was monitored in the R and J bands on a regular basis. We used the publicly available[www.astro.yale.edu/smarts/xrb/home.php] light curves in our analysis. The photometric reduction procedure was performed by the Yale SMARTS XRB team, following closely the reduction steps described in <cit.>. As was emphasised in previous optical variability studies of Aql X-1 (see e.g. <cit.>), the use of point-spread functions to extract the source counts (instead of ordinary aperture photometry) is crucial to obtain reliable optical flux measurements for the Aql X-1 optical counterpart. For the photometric observations carried out at the first three telescopes, we extracted instrumental magnitudes for Aql X-1 and a few local comparison stars (see below) using the DAOPHOT routine <cit.> in the Interactive Data Language (IDL). We used two iterations of the point-spread function fitting routine; a third iteration did not improve the precision of the photometry. Photometric fluxes of Aql X-1 in the standard R, g', r', i', z' bands (see Table <ref>) were obtained from the instrumental counts by using the following secondary standards located nearby in the Aql X-1 field: (i) α_1=287.8073766^∘, δ_1=0.5811534^∘; (ii) α_2=287.8032941^∘, δ_2=0.5781298^∘; (iii) α_3=287.8179977^∘, δ_3=0.5759676^∘; (iv) α_4=287.8082571^∘, δ_4=0.5778305^∘; (v) α_5=287.8204860^∘, δ_5=0.5873818^∘. These local comparison stars are non-variable (within statistical uncertainties) during the whole period of the Aql X-1 monitoring observations and have visual R magnitudes in the range 15 mag - 17 mag. Their R, g', r', i', z' fluxes in the standard photometric system (see Table <ref>) were derived by observation of the Aql X-1 field and primary standard stars <cit.> during a night in Nov 2013 with photometric atmospheric conditions.
We conservatively estimated the final accuracy of the absolute photometric calibration for the , , and telescopes as 3%. In this paper the g'r'i'z' flux measurements for Aql X-1 are presented in the AB photometric system; all other (UVOT, R, J) flux measurements are presented in the Vega system. The adopted effective wavelengths, bandwidths and photometric zero points for all used filters/instruments are shown in Table <ref>. § OUTBURST RISE IN AQL X-1 A new outburst of the Aql X-1 X-ray Nova system was detected on 3 June 2013 <cit.> during the campaign of optical monitoring observations of the object, started in April 2013 at the 1.5-m Russian-Turkish telescope . In Figures <ref> and <ref> (right panels) all available NUV, Optical and NIR light curves obtained during the rising phase of the Aql X-1 outburst with the , , and instruments are shown. In Figure <ref> (left panels) we present the available pre-outburst Optical-NIR light curves from the ground-based telescopes. The horizontal dashed lines mark the measured background level (which is dominated by the close interloper star; see <ref>). In order to measure the broadband NUV-NIR spectral evolution during the outburst rise period in Aql X-1, we chose four characteristic time moments where observations from two instruments (–or –) were carried out quasi-simultaneously (within the time interval Δt≲0.1^d). These time moments are marked in Figures <ref>, <ref>, <ref> by vertical dot-dashed lines (the corresponding broad-band NUV-NIR SEDs will be discussed in section <ref> below). After the outburst detection in the optical g', r', i', z' bands, the accretion activity of the source was soon confirmed by follow-up observations <cit.>. The X-ray outburst happened to be among the brightest in soft X-rays of all Aql X-1 accretion events observed by the or All Sky Monitors since 1997 <cit.>. The overall morphology of this outburst in soft X-rays is characterized by a fast (∼10^d) rise and a long (∼50^d) decay. This type of light curve is often observed in X-ray Novae <cit.> and is called FRED (Fast-Rise-Exponential-Decay). The FRED-type light curves in SXTs are qualitatively well reproduced by the standard Disc Instability Model (DIM), if the effects of accretion disc evaporation and irradiation by the central source are taken into account <cit.>. The orbit-averaged light curves from (2-10 keV) and (15-50 keV) for the Aql X-1 outburst rise phase are shown in Figure <ref>. In the same Figure, we show all available X-ray pointing measurements (in the same soft energy range 2-10 keV) carried out by the telescope during this interval. The remarkable drop (a ∼5 times decrease on a time scale <1^d) of the hard X-ray flux, while the soft X-ray brightness is still rising, corresponds to the time moment of the hard/soft X-ray state transition. In Figure <ref> the (15-50 keV)/(2-10 keV) X-ray color evolution during the interval of the state transition is shown. From these data, one can measure the midpoint and duration of the state transition by fitting the X-ray color evolution with an appropriate low-parametric model. We chose the following function: c(t) = p_0 + p_1×[erf((t-p_2)/p_3)-1], where erf(x) is the error function in its standard form, erf(x)=2/√(π)∫_0^x e^{-ξ^2}dξ, and p_0, p_1, p_2, p_3 are free parameters. The state transition itself we defined as the time interval where the majority (99.7%) of the color change takes place (according to our best fit, see the dashed line in Figure <ref>).
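A minimal sketch of this fit using scipy.optimize.curve_fit on synthetic color data (the starting values and noise level below are illustrative, not the measured ones):

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

def color_model(t, p0, p1, p2, p3):
    """c(t) = p0 + p1 * [erf((t - p2)/p3) - 1], as defined above."""
    return p0 + p1 * (erf((t - p2) / p3) - 1.0)

# Synthetic hardness-ratio evolution around the transition (MJD grid):
t = np.linspace(56455.0, 56462.0, 40)
color = color_model(t, 0.9, -0.43, 56458.4, 0.38)
color += np.random.normal(0.0, 0.02, t.size)

popt, pcov = curve_fit(color_model, t, color, p0=(0.9, -0.4, 56458.0, 0.5))
T_hs = popt[2]                          # transition midpoint [MJD]
dT_hs = 4.0 / np.sqrt(2.0) * popt[3]    # transition duration [day]
```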
We obtained the state transition midpoint and duration for the Aql X-1 outburst rise in 2013: T_h/s = p_2 = 56458.425 [MJD], ΔT_h/s = 4/√(2)×p_3 = 1.073 [day], and show them in Figures <ref>–<ref> by a dotted vertical line and a grey shaded band, respectively. It is worth noting that the interval of fast changes in the hard and soft X-ray fluxes during the state transition is even smaller than the value ΔT_h/s defined above. One can estimate from Figure <ref> that the fast increase (decrease) of the soft (hard) X-ray flux begins around the transition midpoint T_h/s. Thus we can estimate the actual duration of the fast changes in the hard (soft) X-ray fluxes during the state transition as ≈ΔT_h/s/2. We defined a convenient time variable t, measured with respect to the state transition midpoint: t = T - T_h/s. In our Swift/XRT observations we are able to measure accurately only the soft fraction F_X,0.5-10 of the total X-ray flux F_X,bol (which we define here in the energy range 0.5-100 keV). The bolometric X-ray flux from the inner parts of the disc can be estimated as: F_X,bol = f_bol·F_X,0.5-10, where f_bol is a bolometric correction coefficient. Note that the bolometric correction is substantial for the spectrum in the hard X-ray state. To estimate f_bol we used results from <cit.>, who analyzed broad-band X-ray observations in the hard and soft X-ray states during the Aql X-1 outburst in Sep-Oct 2008, carried out by the observatory. By using their best-fit models in Tables 2 and 3 (with fixed N_H=0.36·10^22 cm^-2), we calculated the 0.5-10 keV, 2-10 keV, 15-50 keV and "bolometric" 0.5-100 keV unabsorbed fluxes for the typical soft and hard X-ray state spectra. The estimated bolometric corrections are f^hard_bol=1.96 and f^soft_bol=1.08 for observational data points before and after the X-ray state transition, respectively. In addition we derived the F_X,15-50/F_X,2-10 ratio: 0.89 and 0.036 in the hard and soft state, respectively. As can be seen in Figure <ref>, these values agree well with the observed BAT/MAXI X-ray colors before and after the state transition.§.§ Broad-band SED measurements There are two time moments before the X-ray state transition midpoint and two time moments after it when we are able to measure a quasi-simultaneous (within ≲0.1^d) broad-band SED of the source. Below we describe the derived SED measurements and the fitting procedure in detail. Broad-band SED measurements during the outburst rise in Aql X-1: * t≈-6.02^d. At this time moment, observations were carried out quasi-simultaneously with the telescope (t=-6.08^d), and we combined these data to construct a broad-band SED. Additionally, as can be noted (see Figures <ref>–<ref>), the subsequent observation at t=-4.80^d shows the same (within uncertainties) NUV fluxes. Thus we included this observation, and the observation carried out in between at t=-5.35^d, into the combined SED. The derived SED is shown in Figure <ref> (left panel), where all the "non-simultaneous" data points from and the second observation are shown by open symbols. * t≈-0.46^d. This is the time moment immediately before the state transition, when the W2-band observation at t=-0.46^d was carried out quasi-simultaneously with (t=-0.42^d). As the previous observation at t=-0.79^d shows the same (within uncertainties) W2 flux, we decided to include it into the combined SED. The resulting SED is shown in Figure <ref> (central panel); the "non-simultaneous" data point is shown by an open symbol.* t≈+0.55^d.
This is the most interesting SED measurement: we luckily obtained it immediately after the hard/soft X-ray state transition, when the M2-band observation was carried out quasi-simultaneously with the observation (t=+0.56^d). As the previous observation at t=+0.41^d shows the same (within uncertainties) M2 flux, we decided to include it into the combined SED. The resulting SED is shown in Figure <ref> (central panel), where the "non-simultaneous" data point is shown by an open symbol. * t≈+1.80^d. This is the final SED measurement, obtained near the outburst maximum in X-rays (see Figure <ref>). The U-band observation was carried out quasi-simultaneously with (t=+1.91^d). The resulting SED is shown in Figure <ref> (right panel). The SED fitting procedure was performed in the XSPEC package <cit.>, which provides a framework to compare various theoretical spectral models with observed spectra (primarily in the X-ray domain). XSPEC can be successfully used to fit spectral data from IR/Optical/UV observations <cit.>. We converted all NUV, Optical and NIR photometric measurements into pha-files using the procedure flx2xsp from the FTOOLS package. For all filters, the responses were defined by flat transmission curves with parameters λ_eff and Δλ (FWHM) (see Table <ref>). We note that the observed fluxes contain the Aql X-1 counterpart and the nearby 0.48^'' interloper star for the ground-based Optical-NIR observations, and all nearby stars within the 5^'' aperture for the observations. In order to investigate the spectral evolution of the Aql X-1 counterpart in outburst, we subtracted the corresponding flux levels measured during the period of Aql X-1 quiescence (see Table <ref>). The interstellar extinction in the photometric filters was calculated by using the REDDEN model in XSPEC. This model utilizes the <cit.> extinction law from far-IR to far-UV as a function of wavelength and of the parameter E_B-V. For all spectral fits below we adopted the fixed color excess value E_B-V=0.65^mag, as the best estimate for Aql X-1 (see <ref>). §.§ Adopted spectral models We tried to fit the Aql X-1 NUV-NIR SEDs by two low-parametric spectral models: * Absorbed blackbody emission (redden*bbodyrad),* Absorbed emission from a multi-color disc with possible X-ray irradiation (redden*diskir). Our choice of spectral models (A) and (B) is physically motivated. The simplified analytical picture of the non-stationary disc accretion during the outburst rise phase in X-ray binaries was proposed in the work of <cit.>. The accretion disc development from the initial ring of matter can be divided into 3 characteristic stages: * Formation of the disc from the initial ring of matter ("torque" formation stage).* Quasi-stationary accretion with increasing accretion rate. At this stage a radially constant accretion rate is established in the inner regions of the accretion disc. Near the outer radii of the disc no changes from the initial mass distribution are expected, and a transition zone develops at intermediate radii. The region of the quasi-stationary solution continuously expands as the transition zone moves outward. * The accretion attenuation phase after the outburst maximum. We are interested in stage <ref>, which could potentially be observed by our broad-band observations of the outburst rise in the Aql X-1 system. During this stage the mass distribution in the outer regions of the disc transforms from the initial distribution (at the pre-outburst quiescence) into the stationary accretion disc (near the outburst maximum).
Accordingly, the spectral evolution in the NUV-NIR range (which corresponds to emission from the outer parts of the disc) should transform from a single-temperature blackbody emitting ring into the multi-colour (irradiated or non-irradiated) accretion disc emission. The initial ring of matter in the <cit.> analytical model can in reality be a manifestation of an accretion disc with a surface density profile highly concentrated towards some outer radius (like Σ∝R^1.14), which is supposed to form in the disc during the X-ray Nova quiescence (see <cit.>). The present numerical models of XN outbursts also show that the single-temperature emission remains at early stages of an SXT outburst (see e.g. Figure 5 in <cit.>). Note that, alternatively, a single blackbody model may correspond to the emission from the X-ray heated surface of the companion star (if the X-ray irradiation is strong enough) or from a hot spot, where the stream from the L1 point meets the accretion disc. At the end of stage <ref>, the multi-color disc model corresponds to emission from the standard <cit.> steady-state optically-thick accretion disc with possible X-ray irradiation. The multi-color disc emission is expected to be established around the moment of the outburst maximum; the radial mass distribution in the disc at that moment does not depend on the initial mass distribution in the pre-outburst quiescence, see e.g. <cit.>. Thus, we expect that Model <ref> should describe well the NUV-NIR observations at the beginning of the XN outburst (thermal emission from an almost isothermal disc ring), and Model <ref> should appear closer to the outburst maximum, when the self-similar solution with a constant mass accretion rate along the radius in the outer disc is established. As we will show in section <ref>, the observed spectral evolution during the outburst rise in Aql X-1 (SED measurements <ref>, <ref> and <ref>) qualitatively agrees with this theoretical picture. Below we describe the chosen spectral models and their parameters in detail. *Model (A). The adopted bbodyrad blackbody model in XSPEC has two parameters: temperature T_bb and normalisation K_bb. The normalisation parameter is connected to the projected emitting area S_bb [cm^2] in the following way: S_bb = πD_5^2/(4·10^-10)×K_bb, where D_5 is the source distance in units of [5 kpc]. *Model (B). In order to model the irradiated accretion disc SED we adopted the popular DISKIR[https://heasarc.gsfc.nasa.gov/xanadu/xspec/models/diskir.html] model in XSPEC (without the inner disc coronal emission component, see Appendix <ref> for details). The adopted model has 3 parameters: T_in,keV, logrout and f_out (if irradiation is turned off, f_out=0). For a given X-ray luminosity illuminating the disc, these DISKIR parameters can be readily converted (see Appendix <ref>) into the physical parameters of the outer accretion disc: the mass accretion rate Ṁ_out, the disc outer radius R_out and the irradiation parameter C, which determines the fraction of the X-ray flux thermalised in the disc. The disc outer radius in Model <ref> can be constrained by the tidal truncation radius, R_out<R_tid, which can be transformed into the following constraint on the DISKIR model parameter: logrout < 5.07 - log_10 D_5 for the adopted Aql X-1 orbital parameters (see <ref>) and using equation (<ref>). By taking into account the uncertainty in the source distance, we get the following constraint: logrout<5.15. No other constraints on the disc model parameters were applied during the fitting procedure.
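Both the normalisation-to-area conversion and the tidal constraint above are simple to evaluate; a minimal sketch, assuming the distance term enters the constraint as log_10 D_5, consistent with the bound logrout<5.15 quoted above:

```python
import numpy as np

def bbodyrad_area(K_bb, D5=1.0):
    """Projected emitting area S_bb [cm^2] from the bbodyrad
    normalisation K_bb (equation above), D5 in units of 5 kpc."""
    return np.pi * D5**2 / 4e-10 * K_bb

def logrout_allowed(logrout, D5=1.0):
    """Tidal-truncation constraint on the DISKIR outer-radius
    parameter: log10(R_out/R_in) < 5.07 - log10(D5)."""
    return logrout < 5.07 - np.log10(D5)
```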
In the multi-color accretion disc Model <ref>, the irradiation parameter C determines the degree of disc heating by X-ray irradiation. The surface temperature at the outer radii of the accretion disc model (see Appendix <ref>) can be expressed as σ T^4 = 3 G M_1 Ṁ/(8π R^3) + C L_X/(4π R^2). The irradiation parameter contains information about the geometry of the irradiated disc surface (the disc height radial profile H(R), the disc albedo a_out and the thermalisation fraction η_th for X-ray photons): C = (dH/dR - H/R)(1-a_out)η_th. Assuming that the effective disc height which intercepts X-rays (it can be the height of the hot atmosphere or wind outflow formed above the outer disc, rather than the photospheric disc height; see <cit.>) is a power-law function of radius, H∝R^n (e.g. n=9/8 for the outer zone of the standard Shakura-Sunyaev accretion disc <cit.>, n=9/7 for the isothermal disc model of <cit.>), one can obtain the following expression: C = (n-1)(H/R)(1-a_out)η_th. Note that in the adopted DISKIR model the limiting case H/R=const is assumed. In reality, H/R∝R^{n-1} is expected to be a slow function of radius, with n-1=1/8÷2/7 for the stationary accretion disc, and the exact form of the disc height radial profile H(R) matters mainly for the FUV part of the disc spectrum. The far-UV spectral range λ<2000Å is difficult to observe (FUV observational data are currently absent for most X-ray Novae) and is very model-dependent to fit, due to the strong extinction in this spectral range. We conclude that, for the NIR-Optical-NUV spectral range, the DISKIR model with C=const_R seems to be an adequate choice for a steady-state irradiated accretion disc model. §.§ "X-ray tomograph" at the moment of hard/soft X-ray state transition in Aql X-1 The most remarkable moment of the Aql X-1 outburst rise light curve is the hard/soft X-ray state transition (luckily covered by our SED measurements <ref>–<ref>). During the short time interval ΔT_h/s, the fast changes in the structure of the inner accretion flow (optically-thin geometrically thick RIAF → optically-thick geometrically-thin standard Shakura-Sunyaev disc) are accompanied by a drastic softening of the X-ray spectrum: the amount of X-ray photons with E>10 keV radically goes down (f_bol: 1.96→1.08). The fast evolution of the X-ray spectrum (heating the outer accretion disc) at the moment of the state transition can serve as an "X-ray tomograph" to reveal the vertical structure and the energy-dependent X-ray heating efficiency of the outer accretion flow in X-ray Novae. Under the reasonable assumption that the duration of the X-ray state transition interval ΔT_h/s is short with respect to the viscous time scale at the outer radii of the accretion disc, the mass distribution (the surface density radial profile Σ_out(R) and the accretion rate radial profile Ṁ_out(R)) in the outer disc should experience only minimal changes during the interval ΔT_h/s. Indeed, for the standard <cit.> accretion disc, the mass distribution at radius R changes on a viscous timescale τ_vis = 2/(3α) (H/R)^{-2} 1/Ω_K(R_d), where Ω_K=√(GM_1/R^3) is the Keplerian frequency, M_1 is the mass of the primary and α is the dimensionless viscosity parameter (see e.g. <cit.>, <cit.>). By adopting α=0.2 <cit.>, H/R=0.1-0.2 <cit.> and a common neutron star mass M_1=1.4 M_⊙, we obtain τ_vis=3.5-14^d≫ΔT_h/s for the Aql X-1 outer disc radius estimate R_d=R_tid.
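Both quantities above are one-line evaluations; a minimal sketch, with the α, H/R and mass values quoted above as defaults (cgs units):

```python
import numpy as np

G, M_SUN = 6.674e-8, 1.989e33   # cgs

def irradiation_parameter(n, h_over_r, albedo, eta_th):
    """C = (n - 1)(H/R)(1 - a_out) * eta_th for H ~ R^n (equation above)."""
    return (n - 1.0) * h_over_r * (1.0 - albedo) * eta_th

def viscous_time(R, alpha=0.2, h_over_r=0.15, m1=1.4):
    """Viscous timescale [s] of the standard disc at radius R [cm]."""
    omega_k = np.sqrt(G * m1 * M_SUN / R**3)
    return 2.0 / (3.0 * alpha) * h_over_r**-2 / omega_k

# e.g. at a hypothetical outer radius R ~ 1.4e11 cm this gives a few
# times 10^5 s, i.e. several days, consistent with the 3.5-14 d range:
tau = viscous_time(1.4e11)
```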
On the other hand, the temperature structure in the photospheric layers of the outer disc, which emit the observed NUV-Optical-NIR spectrum, can change substantially during the state transition interval, as it is directly governed by the X-ray illumination (the reprocessing time for X-ray photons in the disc and its hot atmosphere is τ_repr≪ΔT_h/s; see τ_repr estimates in <cit.>). By using SED measurements <ref>–<ref> at the edges of the hard/soft X-ray state transition interval, we can test the vertical structure and the energy-dependent X-ray heating efficiency of the outer accretion disc in the X-ray Nova Aql X-1. In this work, the main observable of the multi-color disc spectral model which will be tested (see <ref>) against various regimes of energy-dependent X-ray heating is the disc irradiation parameter C. The other parameters of Model B (Ṁ_out and R_out) are expected not to vary during the X-ray state transition interval, if the condition ΔT_h/s≪τ_vis is satisfied (see above). We will consider three qualitative choices for the energy-dependent X-ray heating of the outer accretion disc:* heating by the bolometric X-ray flux (0.5-100 keV);* heating by soft X-rays (0.5-10 keV);* heating by hard X-rays (10-100 keV). Consequently, in addition to the irradiation parameter C (which corresponds to "bolometric" X-ray heating, see above), it is straightforward to consider also the "soft" and "hard" irradiation parameters C_s and C_h, defined with respect to the 0.5-10 keV and 10-100 keV flux, respectively. The determination of the "soft" irradiation parameter is justified in the case of direct illumination of the outer accretion disc by X-ray photons from the central source. Then soft X-rays with energies ≈2÷10 keV may play a primary role in the heating of the outer disc surface (see e.g. <cit.>). On the other hand, if direct illumination of the disc is not possible for some reason (e.g. due to a concave disc height profile H∝R^{<1} or the disc self-screening effect, see <cit.>), then the hard (E≳10 keV) X-rays, effectively scattered in the optically thin layers above the disc, may play a substantial role in the disc heating <cit.>. It is worth noting that the C_s, C_h and C parameters are simply connected to each other through the bolometric correction coefficient f_bol: C_s = C×f_bol, C_h = C×f_bol/(f_bol-1). For the Aql X-1 outburst we adopt the approximate bolometric corrections f^hard_bol=1.96 and f^soft_bol=1.08 for time moments before and after the X-ray state transition, respectively (see <ref>).§ RESULTS Four broad-band SED measurements <ref>–<ref> were obtained during the Aql X-1 outburst rise phase (see <ref>) and were fitted by the two spectral models <ref> and <ref> described in section <ref> above. The best-fit parameters for the black-body Model <ref> and the multi-color disc Model <ref> with (f_out>0) and without irradiation (f_out=0) are presented in Table <ref>. As can be noted, the inclusion of X-ray irradiation substantially improves the multi-color disc fit for SEDs <ref>, <ref> and <ref>.
The first three columns of Table <ref> contain: the time t with respect to the state transition midpoint, the orbital phase ϕ calculated from the Aql X-1 ephemeris (see Table <ref>), and the bolometric X-ray luminosity in Eddington units, calculated in the following way: L_X,bol/L_Edd = 4π D^2 F_X,0.5-10 f_bol/(1.75·10^38 erg/s), where the value of the Eddington limit is taken for pure hydrogen composition and a 1.4M_⊙ NS. All best-fit spectral models, together with the SED data points, are shown in Figures <ref> and <ref> by solid (blackbody), dashed (multi-color disc) and long-dashed (multi-color disc with irradiation) lines. Both absorbed and unabsorbed curves for each model are shown (in Figure <ref> only one unabsorbed curve is shown for clarity). All spectral curves are smoothed with a top-hat window Δλ/λ=0.25 for better visual comparison with the SED measurements, obtained in broadband filters having relative bandwidths in the range Δλ/λ=0.14÷0.34 (see Table <ref>). The smoothing is primarily important for the absorbed model curves in the NUV range, where the model flux changes sharply with ν. We note that we derived the smoothed model curves only for visualization purposes in Figures <ref>-<ref>; all χ^2_r values presented in Table <ref> were obtained in the XSPEC fitting framework. Below we discuss the derived results in detail. Firstly, we consider the SED measurements carried out during the Aql X-1 outburst rise in the hard X-ray state (<ref>, <ref>) and the SED obtained near the outburst maximum (<ref>). The curious spectral evolution during the state transition interval (SEDs <ref>–<ref>) will be discussed in the next section <ref>. The first SED <ref> was obtained from a combination of quasi-simultaneous , and data (see <ref>) around the time moment t=-6.05^d. As can be seen from Table <ref> and Figure <ref> (left panel), the black-body model gives a substantially better fit than a multi-color disc model without irradiation. The best-fit blackbody model gives a goodness of fit of χ^2_r=1.01 and the multi-color disc model gives χ^2_r=2.17 (11 degrees of freedom), which rejects the latter model with a p-value of 0.013. With the inclusion of X-ray irradiation, the multi-color disc model fit can be improved significantly. E.g., for the irradiation parameter C=2.9·10^-3 (see Table <ref>) the goodness of fit reaches χ^2_r=1.27 (11 d.o.f.), with the best-fit parameters Ṁ_out≈0.18Ṁ_Edd, R_out≈0.46R_tidal. The fit can be further improved by increasing C. However, for the irradiation parameter value C=2.9·10^-3 ≫ 3GM_1Ṁ_out/(2L_X R_tid) ≈ 3·10^-5, the optical flux from the disc is dominated by X-ray reprocessing (see equation (<ref>)). Around the time moment t=-6.05^d, the X-ray flux shows fast changes, L_X∝e^{t/2^d} (see Figure <ref>). For the X-ray reprocessing mechanism, one may expect a NIR-Optical/X-ray flux correlation in the broad-band filters, L_Opt∝L_X^{0.25-0.5} (see <cit.>), which corresponds to visual magnitude changes Δm = -2.5·log_10(e^{0.125..0.25}) ≈ -0.14^mag..-0.27^mag per day. On the contrary, the available observations show almost constant g',r',i',z' fluxes around the time moment t=-6.05^d (see Figure <ref>). Therefore, we may prefer the single-temperature blackbody emission Model <ref> for the SED measurement <ref>. The emitting disc ring should be heated primarily by viscous dissipation (as no significant Optical/X-ray correlation is observed). The relative width of the blackbody emitting disc ring at radius R can be estimated as ΔR/R ≈ S_bb/(πR^2 cos(i)) = K_bb/(4·10^-10 cos(i)) × (D_5/R)^2 (see equation (<ref>)).
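A minimal sketch of these two estimates, following the equations above as written (the default distance and inclination are the Aql X-1 values adopted in the text; R is in cm):

```python
import numpy as np

KPC = 3.086e21       # cm
L_EDD = 1.75e38      # erg/s for a 1.4 Msun NS, pure hydrogen

def eddington_ratio(flux_05_10, f_bol, D5=1.0):
    """L_X,bol / L_Edd from the observed 0.5-10 keV flux (equation above)."""
    D = 5.0 * D5 * KPC
    return 4.0 * np.pi * D**2 * f_bol * flux_05_10 / L_EDD

def ring_relative_width(K_bb, R_cm, D5=1.0, incl_deg=42.0):
    """Delta R / R of the blackbody-emitting ring (equation above)."""
    cosi = np.cos(np.radians(incl_deg))
    return K_bb / (4e-10 * cosi) * (D5 / R_cm) ** 2
```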
By supposing the disc ring is located at the tidal radius, R≈R_tid, and adopting D_5=1, i=42^∘ for Aql X-1 (see Table <ref>), we get the following estimate: ΔR/R≈0.15. The next SED measurement <ref> was obtained just before the hard/soft X-ray state transition, at t=-0.43^d. Neither the black-body (χ^2_r=11.7, 4 d.o.f.) nor the multi-color disc without irradiation (χ^2_r=11.0, 4 d.o.f.) provides an acceptable fit for this SED measurement, but it can be well described (χ^2_r=0.43, 3 d.o.f.) by the irradiated multi-color disc Model <ref> with reliable parameters: Ṁ_out=0.66Ṁ_Edd, R_out=0.94·R_tidal and C=6.1×10^-4. We adopt the Eddington mass accretion rate value Ṁ_Edd=1.75·10^38/(0.1c^2)=1.95·10^18 g/s <cit.>. The final SED measurement <ref> was carried out at the time moment t=+1.8^d, near the X-ray outburst maximum. As can be seen from Table <ref>, the multi-color disc without irradiation is a statistically unacceptable model for this SED. Both the irradiated multi-color disc and the single-temperature black-body models provide a good fit to the SED data points. We note that there are only 3 flux measurements (J, R, U bands) combined in this SED, and a NUV flux (M2/W2 bands at ∼2000Å) measurement is not available at this time moment. We expect a degeneracy between the physical parameters Ṁ and C in Model <ref>. Therefore we decided to fix the mass accretion rate during the fit to the reasonable value estimated for SED <ref> (at t=-0.46^d). Model <ref> with fixed Ṁ_out=0.66Ṁ_Edd provides an acceptable fit with χ^2_r=0.03 (1 d.o.f.), with the best-fit parameters C=1.1·10^-3 and R_out≈1.14R_tidal (see Table <ref> and Figure <ref>, right panel). From numerical simulations of outbursts in X-ray Novae (see e.g. Figure 5 in <cit.>) it is expected that the multi-color disc spectral energy distribution is already established at the moment of the outburst maximum. Therefore, we also may prefer the irradiated multi-color disc as the best model for SED <ref>. In sum, we conclude that the observed SED evolution <ref>, <ref>, <ref> during the outburst rise in Aql X-1 can be well understood as thermal emission from a non-stationary accretion flow with a temperature radial distribution transforming from a ∼single-temperature blackbody emitting ring (heated primarily by viscous dissipation) at early stages of the outburst into the multi-color irradiated accretion disc measured around the X-ray outburst maximum.§.§ Evolution of the broadband SED during the hard/soft X-ray state transition in Aql X-1 The X-ray state transition interval during the outburst rise is covered by two SED measurements, <ref> and <ref>, at the time moments t=-0.46^d and +0.55^d, luckily carried out quasi-simultaneously (within an interval <0.05^d) by the and telescopes (see <ref>). As discussed above in <ref>, SED <ref> (at the start of the state transition) can be well fitted by Model <ref> with reasonable physical parameters of the irradiated accretion disc: mass accretion rate Ṁ_out=0.66·Ṁ_Edd, outer radius R_out=0.94·R_tidal and irradiation parameter C=6.1·10^-4 (see Table <ref>). Bearing in mind the theoretical considerations presented in <ref>, one may expect that the same Model <ref> (with fixed Ṁ_out, R_out and irradiation parameter) should match SED measurement <ref> at the end of the state transition. Let us consider what we see in reality. As can be seen from Table <ref>, at the time moment t=+0.55^d the best-fit spectral model is the single-temperature black-body Model <ref>.
Surprisingly, the multi-color disc Model <ref> (with or without irradiation) provides an unacceptable fit to the data. Note that if we exclude the second measurement (which was not carried out fully simultaneously with the observation, see <ref>) from consideration, then the best-fit blackbody Model <ref> with T_bb=0.764±0.014 and K_bb=(91±14)·10^11 (2σ errors) becomes fully statistically acceptable, with a goodness of fit of χ^2_r=0.83 (3 d.o.f.). One can conclude that at the end of the state transition a black-body-like SED is measured in the broad 2000–9000Å spectral range (it is shown by the solid line in Figure <ref>). In order to better understand the NUV-NIR spectral evolution during the X-ray state transition, we derived the expected SED evolution for the standard irradiated accretion disc Model <ref> with fixed parameters Ṁ=0.66·Ṁ_Edd, R_out=0.94·R_tid (the values measured at the start of the X-ray state transition; see Table <ref>). It is worth noting that the X-ray spectrum changes drastically during the state transition, and the disc heating may depend on the hard, soft or bolometric X-ray flux. According to <ref> we consider three qualitative choices for the X-ray heating of the outer disc: * The accretion disc can be sensitive to X-ray photons in the full energy range 0.5÷100 keV. In this case, we calculate Model <ref> with a fixed "bolometric" irradiation parameter C=6.1·10^-4 (see Table <ref>). According to formula (<ref>), the f_out parameter in the DISKIR model should be rescaled as f_out∝F_X,bol. The resulting spectral model is shown in Figure <ref> by the long-dashed (middle) line.* Alternatively, the accretion disc can be heated only by soft 0.5÷10 keV X-ray photons. In this case we fix the "soft" irradiation parameter C_s=1.2·10^-3 (see equation (<ref>)), and the DISKIR parameter f_out should be rescaled as f_out∝F_X,0.5-10. The resulting spectral model is shown in Figure <ref> by the long-dashed (upper) line.* The accretion disc can be heated primarily by hard 10÷100 keV X-ray photons. Then we fix the "hard" irradiation parameter C_h=1.25·10^-3 (see equation (<ref>)), and the DISKIR parameter f_out should be rescaled as f_out∝F_X,10-100. The resulting spectral model is shown in Figure <ref> by the long-dashed (lower) line. All considered irradiated disc models with fixed C_bol, C_s, C_h are shown in Table <ref>. One can conclude (see Figure <ref>) that the observed SED <ref>, measured immediately after the X-ray state transition, clearly disagrees with the expectations for an irradiated accretion disc for any choice of a single irradiation parameter. As can be seen in Figures <ref> and <ref>, during the interval of the X-ray transition the NUV flux at ∼2000Å (W2/M2 band) decays slightly, but the optical blue g'-band (at ∼4700Å) shows a small rise, and the flux in the NIR z'-band (∼9000Å) rises more significantly. In Figure <ref> one can see that the observed flux evolution in the NIR z'-band can be closely described by the disc model irradiated by soft X-ray photons (fixed C_s), and the flux evolution in the NUV M2-band can be described by the disc irradiated by hard X-ray photons (fixed C_h). Note that at the time moment t=1.8^d the Aql X-1 brightness in all measured NUV-NIR filters increases significantly, with the biggest relative flux rise detected in the NUV. We conclude that the ∼1^d delayed rise of the NUV brightness in the interval t=0.55-1.8^d, which makes the form of the observed SED similar to the irradiated disc, is a curious observational fact that needs to be explained.
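The bookkeeping behind these three scenarios is a pair of one-line conversions; a minimal sketch, with the hard-state values quoted above (the flux values in the usage lines are placeholders):

```python
def soft_hard_irradiation(C, f_bol):
    """C_s and C_h from the bolometric irradiation parameter
    (equations above): C_s = C*f_bol, C_h = C*f_bol/(f_bol - 1)."""
    return C * f_bol, C * f_bol / (f_bol - 1.0)

def rescale_fout(f_out_ref, F_ref, F_new):
    """Keep the chosen irradiation parameter (C, C_s or C_h) fixed
    while the illuminating flux changes: f_out scales linearly."""
    return f_out_ref * F_new / F_ref

# Hard-state values adopted for Aql X-1 (reproduces C_s ~ 1.2e-3 and
# C_h ~ 1.25e-3 quoted above):
C_s, C_h = soft_hard_irradiation(C=6.1e-4, f_bol=1.96)
# Hypothetical fluxes before/after the transition [erg s^-1 cm^-2]:
f_out_new = rescale_fout(f_out_ref=0.02, F_ref=1.0e-9, F_new=2.5e-9)
```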
We think that the observed Aql X-1 broad-band spectral evolution during the X-ray state transition can be understood if one considers different mechanisms of X-ray heating for the NUV- and NIR-emitting regions of the disc. Let us suppose that the NUV-emitting regions of the disc are heated primarily by scattered (in the hot corona or wind formed above the optically-thick accretion flow, see e.g. <cit.>) hard (E>10 keV) X-ray photons, due to the possible screening of these regions from direct X-ray photons from the central source at the moment of the X-ray state transition. One can see from Figure <ref> (lower long-dashed line) that the observed decrease of the NUV emission (∼2000Å) is well explained by the disc model sensitive to hard X-ray photons (with constant C_h). On the other hand, the more outer NIR-emitting regions of the disc can be heated by direct (mainly soft 2-10 keV, see e.g. <cit.>) X-ray photons from the central source, and the rise of the NIR brightness (∼8000Å) during the X-ray state transition is well explained by the disc model with a constant "soft" irradiation parameter C_s (upper long-dashed line in Figure <ref>). § CONCLUSIONS We studied the time evolution of the broad-band (NUV-Optical-NIR) spectral energy distribution (SED) of the NS X-ray Nova Aql X-1 during the rise phase of a bright FRED-type outburst in 2013. By using quasi-simultaneous observations from the Swift orbital observatory and the , , 1-m class optical telescopes, we show that the evolution of the broad-band SED can be understood in the framework of thermal emission from a non-stationary accretion disc, whose temperature radial distribution transforms from a single blackbody emitting ring at early stages of the outburst into the standard multi-color irradiated accretion disc, with the irradiation parameter C≈6·10^-4 measured at the end of the hard X-ray state and near the outburst maximum. By using photometric observations carried out luckily exactly at the edges of the X-ray hard/soft state transition interval, we find an interesting effect: a decrease in the NUV flux during this time interval, accompanied by a flux rise in the NIR-Optical bands. The NUV flux decrease correlates with the drop of the hard X-rays (E>10 keV) during the X-ray state transition, and the Optical-NIR flux rise correlates with the soft X-ray rise during the same time interval. In our interpretation, at the moment of the X-ray state transition in Aql X-1 the UV-emitting parts of the accretion disc are screened from direct X-ray photons from the central source and heated primarily by hard X-rays, effectively scattered in the hot corona or wind formed above the optically-thick accretion flow. At the same time, the outer and colder regions of the accretion disc emit in the Optical-NIR and are heated primarily by direct X-ray illumination. We point out that simultaneous multi-wavelength observations during the fast X-ray state transition interval in LMXBs provide an effective tool to directly test the energy-dependent X-ray heating efficiency, the vertical structure and the accretion flow geometry in the outer regions of accretion discs in X-ray Novae.§ ACKNOWLEDGEMENTS This research was supported by the Russian Scientific Foundation grant 14-12-00146. AM is deeply thankful to Mike Revnivtsev, Galja Lipunova, Konstantin Malanchev, Dmitry Karasev, and Andy Semena for useful and fruitful discussions. This research has made use of the data provided by RIKEN, JAXA and the MAXI team, and of the and BAT data obtained from the High Energy Astrophysics Science Archive Research Center of NASA.
AM is thankful to the Swift PI, Neil Gehrels, for accepting our requests for ToO observations of Aql X-1 with Swift/XRT. AM, I.Kh and IB thank TÜBITAK, IKI and KFU for partial support in using RTT150 (the Russian-Turkish 1.5-m telescope in Antalya), which made our optical monitoring program of Aql X-1 possible. For the observational results from the telescope presented in section <ref>, AM acknowledges partial support from the Russian Government Program of Competitive Growth of Kazan Federal University; I.Kh and IB acknowledge partial support by RFBR and the Government of Tatarstan under the project 15-42-02573. This paper has made use of publicly available up-to-date SMARTS optical/near-infrared light curves. We note that the Yale SMARTS XRB team is supported by NSF grants 0407063 and 070707 to Charles Bailyn. Facilities:, , , , , § MEASURING PHYSICAL PARAMETERS OF IRRADIATED ACCRETION DISC BY USING DISKIR SPECTRAL MODEL <cit.> built a simple model for the Optical/UV emission from the stationary (Ṁ(R)=const) multi-color disc self-irradiated by the inner parts of the disc and the coronal emission in black hole binaries. The DISKIR model became publicly available among other additive models in the XSPEC package. We adopted this model to fit the NUV-Optical-NIR SED of the NS X-ray Nova Aql X-1. The DISKIR model has 9 parameters: * T_in,keV [keV], the innermost temperature of the unilluminated disc in units of [keV];* γ, the asymptotic power-law photon index;* T_e,keV, the electron temperature (high-energy rollover) in units of [keV];* L_c/L_d, the ratio of the luminosity in the Compton tail to that of the unilluminated disc;* f_in, the fraction of the luminosity in the Compton tail which is thermalized in the inner disc;* r_irr, the radius of the Compton-illuminated disc in terms of the inner disc radius;* f_out, the fraction of the bolometric flux which is thermalized in the outer disc;* logrout, log10 of the outer disc radius in terms of the inner disc radius;* the normalization parameter (as in the diskbb model): K = 4·10^-10 (R_in/D_5)^2 cos(i), where R_in is the inner disc radius in [cm], D_5 the distance in units of 5 kpc, and i the system inclination. Among all parameters of the model, we are interested only in the four which define the disc SED in the NUV-NIR spectral range: K, T_in,keV, logrout and f_out. The other parameters were fixed to their default values: γ=1.7, kT_e,keV=100, f_in=0.1, r_irr=1.2 and L_c/L_d=0 (irradiation of the inner disc and coronal emission are turned off). By using equations from <cit.> (see their 3), the parameters K, T_in,keV, logrout, f_out can be converted into the physical parameters we are interested in. *Inner disc radius (R_in). The inner disc radius can be expressed from (<ref>) as follows: R_in = 5·10^4 [K/cos(i)]^{1/2} D_5 [cm]. Note that the choice of the normalization parameter K (and of R_in itself) is somewhat arbitrary, as long as we are interested only in the outer disc emission. Hereafter we fix the normalization parameter to the value K=400, which corresponds to the inner disc radius: R_in = 10^6×(D_5/√(cos(i))) [cm]. *Outer disc radius (R_out). The outer disc radius can be expressed as R_out=10^logrout R_in, where logrout is a parameter of the DISKIR model. By using (<ref>) we have: R_out = 10^6 (D_5/√(cos(i))) 10^logrout [cm]. *Mass accretion rate in the outer disc (Ṁ_out). DISKIR is a model for a stationary accretion disc (the mass accretion rate is constant with radius, Ṁ=const_R).
In the DISKIR model, the temperature of the unilluminated disc from R_in to R_out is described by the following formula: T_vis(R) = T_in (R/R_in)^{-3/4}, where R_in depends on the normalization parameter K (see formula (<ref>)) and T_in is the inner radius temperature (in keV units, T_in,keV). At the same time, the photospheric temperature at the outer radii (R≫R_in) of the unilluminated accretion disc can be expressed (see <cit.>) as σ_SB T_vis^4 = 3GMṀ_out/(8πR^3), where G is the gravitational constant, σ_SB the Stefan-Boltzmann constant and M the compact object mass. By using equations (<ref>, <ref>, <ref>) we can connect the mass accretion rate in the outer parts of the disc with the T_in,keV parameter of the DISKIR model: Ṁ_out = 4.636·10^16 T_in,keV^4 × D_5^3/(m_1.4 (cos(i))^{3/2}) [g/s], where m_1.4 is the compact object mass in units of [1.4 M_⊙]. *Irradiation parameter (C). Let us consider the case in which the outer parts of the accretion disc are irradiated by a central source of X-ray luminosity L_X (see, e.g., <cit.>, <cit.>, <cit.>). The temperature of the illuminated disc at radius R≫R_in can be defined as: σ_SB T^4 = 3GMṀ_out/(8πR^3) + C·L_X/(4πR^2), where C is the disc irradiation parameter. The characteristic irradiation parameter above which X-ray irradiation dominates the heating in the standard Shakura-Sunyaev disc at a given radius R can be expressed as C > 3GM_1Ṁ_out/(2L_X R). The temperature at the outer radii of the illuminated disc in the DISKIR model is expressed by the formula: T^4(R) = T_in^4 [(R/R_in)^{-3} + f_out (R/R_in)^{-2}]. As can be noted, the f_out parameter in the DISKIR model corresponds to self-illumination of the accretion disc (the outer part of the disc is irradiated by the disc luminosity L_d=4πσ_SB R_in^2 T_in^4). We would like to use the DISKIR model in a more general case, when the outer parts of the disc are illuminated by a central source of arbitrary X-ray luminosity. Then the irradiation parameter C (from formula (<ref>) above) is connected to the DISKIR model parameter f_out in the following way: C = f_out·4πσ_SB R_in^2 T_in^4/L_X. The X-ray luminosity of the central source L_X can be expressed as: L_X = 4πζD^2 F_X, where F_X and D correspond to the observed X-ray flux and the distance to the source, and ζ is the emission anisotropy factor (ζ=1 for isotropic emission). By using (<ref>) and (<ref>), we finally obtain: C = 4.320·10^-9/(ζ cos(i)) × f_out T_in,keV^4/F_X. By using equation (<ref>) we get: C = 0.9318·10^-9 × f_out Ṁ_16/F_X × m_1.4√(cos(i))/(ζ D_5^3), where Ṁ_16 corresponds to the mass accretion rate in units of [10^16 g/s].
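Collecting the conversions of this appendix, a minimal sketch (cgs units; the default inclination and distance are the Aql X-1 values adopted in the text, and F_X is the observed X-ray flux in erg s^-1 cm^-2):

```python
import numpy as np

def diskir_to_physical(K, T_in_keV, logrout, f_out, F_X,
                       D5=1.0, incl_deg=42.0, m14=1.0, zeta=1.0):
    """Convert DISKIR fit parameters into physical quantities of the
    outer disc, following the equations of this appendix."""
    cosi = np.cos(np.radians(incl_deg))
    R_in = 5e4 * np.sqrt(K / cosi) * D5                    # [cm]
    R_out = R_in * 10.0 ** logrout                         # [cm]
    Mdot = 4.636e16 * T_in_keV**4 * D5**3 / (m14 * cosi**1.5)  # [g/s]
    C = 4.320e-9 / (zeta * cosi) * f_out * T_in_keV**4 / F_X
    return R_in, R_out, Mdot, C

# e.g. with the fixed normalization K = 400 and hypothetical fit values:
R_in, R_out, Mdot, C = diskir_to_physical(
    K=400.0, T_in_keV=1.0, logrout=4.9, f_out=0.02, F_X=1.0e-8)
```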
"authors": [
"Alexander V. Meshcheryakov",
"Sergey S. Tsygankov",
"Irek M. Khamitov",
"Nikolay I. Shakura",
"Ilfan F. Bikmaev",
"Maxim V. Eselevich",
"Valeriy V. Vlasyuk"
],
"categories": [
"astro-ph.HE"
],
"primary_category": "astro-ph.HE",
"published": "20170327160503",
"title": "Evolution of broad-band SED during outburst rise in NS X-ray Nova Aql X-1"
} |
In time series data, Hawkes Processes model mutual excitation between temporal events: the arrival of an event makes future events more likely to happen. Identification of such temporal covariance can reveal the underlying structure and help predict future events better. In this paper, we present a new framework to decompose a complex covariance structure into a composition of multiple basic self-triggering kernels. Our composition scheme decomposes the empirical covariance matrix into the sum or the product of base kernels, which are easily interpretable. Here, we present the first multiplicative kernel composition methods for Hawkes Processes. We demonstrate that the new automatic kernel decomposition procedure outperforms the existing methods on the prediction of discrete events in real-world data.§ INTRODUCTION Hawkes Processes (HPs) <cit.> model self-exciting behavior, i.e., when the arrival of one event makes future events more likely to happen. This type of behavior has been observed in various domains, such as earthquakes, financial markets, web traffic patterns, crime rates <cit.> and social media <cit.>. As an example, in high-frequency finance, buyers and sellers of stocks demonstrate herding behavior <cit.>. After a main earthquake, several aftershocks follow according to a time-clustered pattern <cit.>. In web data, hyperlink proliferation across pages exhibits self- and mutual-excitation <cit.>. In criminology, gang-related retaliatory crime patterns are grouped in time <cit.>. In social media, the `infectiousness' of posts can be shown to be modeled through self-excitement and mutual-excitement assumptions <cit.>. In HP analysis, parametric kernels capture typical intra-domain behaviors: quickly time-decaying exponential excitation in the case of finance and web data <cit.>; slower power-law decay in earthquake-related data <cit.>; and a periodicity-inducing sinusoidal kernel in TV-watching data <cit.>. When an appropriate kernel is selected, the kernel parameters are fitted to predict future events. The parameters may be fitted to the data through the gradient descent (GD) method over a likelihood function penalized by a regularization criterion (e.g., the Akaike Information Criterion) on the number of parameters <cit.>. Another method of kernel estimation is through the power spectrum of the second-order statistics of the process: the covariance density and the normalized covariance <cit.>. These are well defined when the self-triggering function induces what is called stationary behaviour. However, kernel selection in HP analysis is a challenging problem, since an appropriate kernel must be manually selected in practice. In this paper, we present a kernel structure search algorithm for HPs. Given base kernels, our algorithm finds the best-fitting one, considering compositions (sums and products) of base kernels. For verifying the stationarity property of each composite kernel, we also derived analytical expressions for the stationarity conditions. To the best of our knowledge, our method is the first multi-type kernel composition framework for HPs. The main steps of the automatic framework, which will be thoroughly explained in the following sections, are discretized kernel estimation and greedy search in the kernel composition space.§ RELATED WORK Automatic analysis frameworks for Gaussian Processes (GPs) are proposed in <cit.> and <cit.>.
However, due to fundamental distinctions between GPs and HPs (such as stationarity conditions and causality assumptions for the latter), the techniques proposed for GPs cannot be extended to HPs in a straightforward manner. <cit.> uses exponential kernels for modeling quick decay in finance or web data. <cit.> models slowly decaying influence with power-law kernels in earthquake data, while <cit.> performs power-law modeling experiments with social media-related data. <cit.> uses sinusoidal kernels for modeling periodicity-inducing influence in TV-watching data (IPTV), in which watching one episode of a TV program makes the viewer more likely to watch further ones. Since these shows are usually broadcast weekly, the TV-watching behavior is likely to demonstrate a weekly self-excitement. In addition, according to <cit.>, homicide rates show a pronounced seasonal effect, peaking in the summer and tapering in the winter. More recent works, such as the neural network-based Hawkes processes in <cit.> and the time-dependent Hawkes process (TiDeH) <cit.>, allow for learning very flexible Hawkes processes with highly complicated intensity functions, while depending on the size and the quality of the data. In this work, however, we focus on the interpretability, or explainability, of said functions and their corresponding typical behaviours, which are core factors in Hawkes kernel selection and optimization.§ HAWKES PROCESSES A point process with a sequence of n time-events is expressed by a vector of the form (t_1,t_2, ... , t_n). Treating the real line as a time axis, the vector can be intuitively associated with a counting process N(t), such that dN(t) = 1 if there is an event at time t, and dN(t) = 0 otherwise. A point process can be described through its intensity function λ(t), which can be understood as the instantaneous expected rate of arrival of events, or the expectation of the derivative of the counting process N(t): λ(t) = lim_{h→0} 𝔼[N(t+h)-N(t)]/h. This intensity function uniquely characterizes the finite-dimensional distributions of the point process <cit.>. A simple example of this function would be the constant mean rate of arrival, μ, in the case of a homogeneous Poisson process. HPs model the intensity function in terms of self-excitation: the arrival of an event makes subsequent arrivals more likely to happen <cit.>. HPs can be described through the following conditional intensity function λ(t): lim_{h→0} 𝔼[N(t+h) - N(t) | ℋ(t)]/h = μ + ∫_{-∞}^{t} ϕ(t-u) dN(u), where * ℋ(t) is the history of the process, the set containing all the events up to time t;* μ is called the background rate, or exogenous intensity, which is fixed as the mean rate of a homogeneous Poisson process;* ϕ(t) is denominated the self-triggering kernel, or excitation function. From this function, one may notice that the intensity at time t will likely be affected by events which happened before time t, described by the history of the process. From <cit.>, we have that, if ||ϕ|| := ∫_0^∞ ϕ(t) dt < 1, then the corresponding process will show wide-sense stationary behavior, from which the asymptotic steady arrival rate, or first-order statistics, Λ = μ/(1-||ϕ||), can be obtained, along with its covariance function, or second-order statistics, which is independent of t: ν(τ) = 𝔼[dN(t) dN(t+τ)].
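An empirical estimate of ν(τ) on a discrete grid, anticipating the estimation step detailed next, can be sketched as follows (a simplified reading that centers bin counts of width δ by their mean, i.e., by Λδ, and ignores boundary effects):

```python
import numpy as np

def empirical_covariance(events, T, delta, max_tau):
    """Empirical nu(tau) on the grid tau = 0, delta, 2*delta, ...,
    from centered counts in bins of width delta over [0, T]."""
    nbins = int(T / delta)
    counts, _ = np.histogram(events, bins=nbins, range=(0.0, T))
    c = counts - counts.mean()      # subtract Lambda * delta
    nlags = int(max_tau / delta)
    nu = np.array([(c[: nbins - k] * c[k:]).sum() / T
                   for k in range(nlags)])
    return np.arange(nlags) * delta, nu
```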
Estimating Λ and ν(τ) requires wide-sense stationarity assumptions which, besides being analytically convenient, are also connected to the fact that, in real data, the chain of self-excitedly induced further events will always be of finite type, i.e., without `blowing up.' This corroborates the practicality of the estimated model. §.§ Discretized Kernel Estimation Being one possible way of recovering the triggering kernel of a HP, this step is fully described in <cit.>, and basically consists of building an estimator of ϕ(t) from empirical measurements of ν(τ), the stationary covariance. Given a finite sequence of ordered time-events in [0,T], we fix a window size of h, and estimate ν(τ) as: ν_τ^{(h)} = (1/h) E((∫_0^h dN_s - Λh)(∫_τ^{τ+h} dN_s - Λh)). In practice, this estimation is done in discrete time steps δ, up to a maximum value of τ[In our case, we used a carefully designed heuristic explained in the section Experimental Results.]: ν_{τ,δ}^{(h)} = (1/T) ∑_{i=1}^{⌊T/δ⌋} (dN_{iδ}^{(h)} - dN_{(i-1)δ}^{(h)})(dN_{iδ+τ}^{(h)} - dN_{(i-1)δ+τ}^{(h)}), where dN_{iδ}^{(h)} is the total number of events happening between t = iδ and t = iδ + h. From <cit.>, we have that, given g_t^{(h)} = (1-|t|/h)^+, i.e., a triangular kernel density estimator with bandwidth h, the following relation holds in the Laplace domain[Given a function f_t, f̂_z is its Laplace transform, and the "⋆" symbol corresponds to its conjugate.]: ν̂_z^{(h)} = ĝ_z^{(h)} (1+ψ̂_z) Λ (1+ψ̂_z^⋆)^†, where: ψ̂_z = ∑_{n=1}^{+∞} ϕ̂_z^n = ϕ̂_z/(1-ϕ̂_z). Working with the Fourier transform restriction, i.e., z = iω, with ω ∈ ℝ, and given that ĝ_{iω}^{(h)} = (4/(ω^2 h)) sin^2(ωh/2), we get to (1+ψ̂_{iω}) Λ (1+ψ̂_{iω})^† = ν̂_{iω}^{(h)}/ĝ_{iω}^{(h)}, where we fix h = δ so we do not bother with cancellations of ĝ_z^{(h)}. Then, from |1+ψ̂_{iω}|^2 = ν̂_{iω}^{(h)}/(Λ ĝ_{iω}^{(h)}), we get to the discretized estimation of ϕ_t by taking the inverse Fourier transform of: ϕ̂_{iω} = 1 - e^{-(log|1+ψ̂_{iω}| + iH(log|1+ψ̂_{iω}|))}, in which the operator H(·) refers to the Hilbert transform. § AUTOMATIC KERNEL DECOMPOSITION FOR HPS This section presents the second step of the automatic kernel identification: a parametric kernel search through our new kernel decomposition scheme. §.§ Self-Exciting Kernels From the definition of the conditional intensity function, the self-excitation of the process is expressed through the kernel function ϕ(t). For the kernel decomposition, four base kernels will be used for identifying and estimating typical triggering behaviors, as shown in Table <ref>: * EXP(α,β): The decaying exponential kernel is parameterized by the amplitude α and the decay rate β, and is useful for modeling quick influence decay, in which initial transactions/hyperlinks have a lot of impact initially but rapidly reduce their influence over time;* PWL(K,c,p): The power-law kernel is parameterized by the amplitude K, the exponent p, and the constant c, modeling a slower decaying trend than the exponential; * SQR(B,L): The pulse kernel is described by the amplitude B and the length L. Being a trivial, steady, self-exciting dynamics on its own, it may also work as an offset level for the combined triggering with other kernel types, in the case of addition, and as a horizon truncation, in the case of multiplication[u(t) is the step function.];* SNS(A,ω): A truncated sinusoidal kernel, parameterized by the amplitude A and the angular velocity ω. This type of kernel base function captures well the self-excitement of periodic events. Here, the discretized kernel estimation is optional when a direct optimization of the kernel structure is possible.
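Minimal sketches of the four base kernels follow; the truncation of the SNS kernel at its first half-period is an assumption here, chosen consistently with the stationarity expressions discussed below:

```python
import numpy as np

def exp_kernel(t, alpha, beta):
    """EXP(alpha, beta): quick exponential decay."""
    return alpha * np.exp(-beta * t) * (t >= 0)

def pwl_kernel(t, K, c, p):
    """PWL(K, c, p): slower, power-law decay."""
    return K * (t + c) ** (-p) * (t >= 0)

def sqr_kernel(t, B, L):
    """SQR(B, L): steady pulse of height B on [0, L]."""
    return B * ((t >= 0) & (t <= L))

def sns_kernel(t, A, omega):
    """SNS(A, omega): sinusoid truncated at its first half-period."""
    return A * np.sin(omega * t) * ((t >= 0) & (t <= np.pi / omega))
```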
Unfortunately, discontinuous functions (SQR, SNS) do not allow such direct optimization (e.g., Gradient Descent, Nelder-Mead). In this paper, we use the discretized kernel estimation as a unified method for both continuous (EXP, PWL) and discontinuous (SQR, SNS) kernels and, most importantly, their combinations. Furthermore, another great advantage of this step, compared with traditional sequential methods, is the fact that the value of ν for each value of τ can be calculated independently, while, in Gradient Descent, the values of the parameters at step t must be obtained before the values at step t+1. When combined with the parallelization of loops in our algorithm, this step significantly improves the speed of obtaining the most likely parametric representations of the sample processes. §.§ Kernel Decomposition For expressing the discretized estimation in terms of the four base kernels, the following steps are executed:* Calculate the residues (L^1-error) w.r.t. the four basic kernels {EXP, PWL, SQR, SNS};* Select the kernel with the minimum residue MR_1, denominated K_1;* Check whether the estimated parameters of the kernel satisfy the stationarity condition, by using the closed-form expressions from Table <ref>;* Calculate the residues w.r.t. a total of 8 kernel expansions, resulting from 2 operations (addition and multiplication) per base kernel {+EXP, ×EXP, +PWL, ×PWL, +SQR, ×SQR, +SNS, ×SNS}, while fixing the optimized parameters of K_1 in the case of additive combination, and recalculating all the parameters in the case of multiplicative combination;* Select the kernel with the minimum residue MR_2, denominated K_2, and check the spectral radius condition (calculated in closed form from Table <ref>);* If both K_1 and K_2 are stable, and MR_1 < MR_2/η (η acts as a regularization parameter), pick K_1; else, pick K_2.* If the likelihood (llh) of the direct optimization (GD, Nelder-Mead) is greater than the likelihood of the kernel decomposition, output the GD model; else, output the decomposition model. Regarding the computational efficiency of the decomposition algorithm, two strategies yielded results at a much lower computational cost, without altering the results of the decomposition: * Selecting the best kernel through the error, instead of the likelihood;* Greedy search of K_2 based on the selected K_1, instead of doing a brute-force search over all the 4 × 8 = 32 possible combinations for K_2. Figure <ref> explains the algorithm up to depth two, for illustration purposes. Our kernel decomposition scheme can be expanded into multiple depths, as explained in Section <ref>. Our algorithm is presented in Algorithm <ref>.§.§ Stationarity Conditions Verifying the stationarity condition is one of the most important steps in the kernel search. When we end up with a non-stationary kernel, estimating future events cannot be accurate. To solve this issue, we developed closed-form expressions, either in the form of an equality or as an upper bound, which are shown in Table <ref> for the case of a single kernel, and in Table <ref> for multiplicative combinations of two kernels[Γ(·,·) is the well-known incomplete Gamma function: Γ(a,y) = ∫_y^∞ t^{a-1} e^{-t} dt]. The conditions for additive combinations can be derived from the conditions for single kernels in a straightforward manner. The kernel is said to induce stationarity if the result of the expression, calculated using the estimated parameters, belongs to the interval [0,1).
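For the single-kernel case, the branching ratios ||ϕ|| follow by integrating each parameterization over [0,∞); a minimal sketch of this stationarity check (our own closed forms, assuming p>1 for PWL and the half-period truncation for SNS; the expressions actually used are those of the Tables referenced above):

```python
def exp_norm(alpha, beta):
    """||phi|| for EXP: integral of alpha*exp(-beta*t) over [0, inf)."""
    return alpha / beta

def pwl_norm(K, c, p):
    """||phi|| for PWL (requires p > 1)."""
    return K * c ** (1.0 - p) / (p - 1.0)

def sqr_norm(B, L):
    """||phi|| for SQR: height times length."""
    return B * L

def sns_norm(A, omega):
    """||phi|| for SNS truncated at its first half-period."""
    return 2.0 * A / omega

def induces_stationarity(norm):
    """Stationarity holds when the branching ratio lies in [0, 1)."""
    return 0.0 <= norm < 1.0
```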
This can be justified both from the point of view of the HP as a branching process, also called the immigrant-birth representation <cit.>, and from the boundedness of the spectral radius (the largest absolute value among the eigenvalues) of the excitation matrix.[For the univariate HP case, the excitation matrix has dimension one, being only the excitation function ϕ(t).] §.§ Scale-Independence Criterion For an automatic time series analysis, scale-independence is indispensable, as time sequences of disjoint datasets may occur in time scales differing by several orders of magnitude. As an example, earthquake events' occurrences in a sequence are spaced by intervals of monthly and yearly scales. Thus, setting a horizon of a few months as the maximum value of τ in Equation (<ref>) might result in a satisfactory discrete estimation grid. However, using the same time length for estimating the triggering behavior of a finance-related sequence would require an impractically large grid resolution. A histogram of all the time intervals between events in a sequence may be readily generated, and is an indicator of the overall magnitude of the spacing among the events. Thus, as a rule of thumb, the horizon length for τ may be set as the smallest time interval strictly larger than a given percentage of the sequence's intervals. The values of 50% and 95% were used. In practice, this value of the horizon length is obtained with the help of a histogram composed of 100 bins. § HIGHER-ORDER KERNEL DECOMPOSITION A sequential additive decomposition of the discretized estimation vector is rather straightforward, since one may just set the residual vector from the previous stages as the input of the next ones. In the case of multiplicative decomposition, it is nontrivial to find the result of an intraclass decomposition. To the best of our knowledge, no analysis of multiplicative HP kernel decomposition has been reported yet. In this paper, we provide a new upper bound over an interclass kernel product of unknown degree, as in: [EXP]^{k_1}×[PWL]^{k_2}×[SQR]^{k_3}×[SNS]^{k_4} for k_i ∈ ℤ^*, where the operator "[·]^k" corresponds to the set of functions which can be decomposed into a k-th order product of kernels, e.g.: [EXP]^k = α_1 e^{-β_1 x} · α_2 e^{-β_2 x} · ... · α_k e^{-β_k x} (k terms). By deriving the four possible intraclass kernel products, one may observe that the typical self-exciting behavior features of each kernel type are preserved, as in the following: * [EXP]^{k_1} reduces to the case of a single exponential with α = ∏_{i=1}^{k_1} α_i and β = ∑_{i=1}^{k_1} β_i, thus still accounting for its `quick-decay' behavior: [EXP]^{k_1} ⊂ [EXP];* [PWL]^{k_2} is lower bounded by a single PWL kernel with K = ∏_{i=1}^{k_2} K_i, c = max(c_1,...,c_{k_2}) and p = ∑_{i=1}^{k_2} p_i, thus still accounting for its `slow-decay' behavior;* [SQR]^{k_3} reduces to a single SQR kernel with B = ∏_{i=1}^{k_3} B_i and L = min(L_1,...,L_{k_3}), thus still accounting for its `steady-triggering' behavior: [SQR]^{k_3} ⊂ [SQR];* [SNS]^{k_4} has A = ∏_{i=1}^{k_4} A_i and a `spikier' aspect (higher bandwidth), thus still accounting for its `periodicity-inducing' behavior. Thus, on deepening the decomposition algorithm by overly increasing the number of levels above 2, we may, in fact, be adding little information to the qualitative aspect of the self-exciting behavior analysis of the data, while making it more prone to overfitting to the noisiness of the discretized estimation vectors.
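A quick numerical check of the intraclass closure for the EXP case (a sketch; the parameter values are arbitrary):

```python
import numpy as np

# A product of EXP kernels is again an EXP kernel with
# alpha = alpha1*alpha2 and beta = beta1 + beta2.
t = np.linspace(0.0, 5.0, 200)
a1, b1, a2, b2 = 0.7, 1.2, 0.5, 0.8
lhs = (a1 * np.exp(-b1 * t)) * (a2 * np.exp(-b2 * t))
rhs = (a1 * a2) * np.exp(-(b1 + b2) * t)
assert np.allclose(lhs, rhs)
```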
§.§ Upper BoundFurthermore, regarding the boundedness of the higher-order decompositions, from the exact results for the EXP and SQR intraclass decompositions and the upper bounds for the PWL and SNS ones, we have that:[EXP]^k_1×[PWL]^k_2×[SQR]^k_3×[SNS]^k_4≤ α e^-β x·K/(x+c_upper)^p· B · A sin(ω x)≤α B K A ·e^-β x/(x+c_upper)^p,for 0 ≤ x ≤ min(L,π/ω), and 0 otherwise; here the middle expression is the product of the single-kernel bounds EXP(α,β), PWL(K,c_upper,p), SQR(B,L) and SNS(A,ω) obtained from the intraclass reductions above.§ EXPERIMENTAL RESULTS To demonstrate the benefits of the kernel decomposition framework, we conducted experiments with synthetic, financial and earthquake data.For real-world data sets, no prior information about the kernel (type and parameters) is available. Thus, we use the log-likelihood of the kernel function over the time sequence as a quality criterion.Given a realization (t_1,t_2,…,t_k) of some regular point process on [0,T], its log-likelihood (l) is expressed as: l(t_1,…,t_k) = ∑_i=1^k log(λ(t_i)) - ∫_0^T λ(u) du. §.§ Financial DataIn the finance domain, HPs have become more prevalent, since their structure is naturally adapted to model systems in which the discrete nature of the jumps in N_t is relevant, making the model remarkably well suited to high-frequency data <cit.>.Here, we picked the 19 top-varying companies of the Technology, Healthcare, Industrial, Services and Utilities categories of Yahoo Finance. We extracted tick data from every two minutes of 30 business days (02/02/2017 to 02/23/2017 for Technology and 04/07/2017 to 05/18/2017 for the other ones). Whenever a stock price changed by a magnitude higher than some threshold, an event was logged in the corresponding time sequence. Ten different percentage thresholds, increasing at equally spaced intervals from 0.03% to 0.3%, were applied. This procedure resulted in the number of valid sequences per category indicated in Table <ref>; the remaining ones did not contain enough points for the split between training and validation subsequences.As in extrapolation tasks, the first 80% of the elements of each sequence were used as training data, and the remaining 20% were used for validation, i.e., we estimated the parameters of the kernel using the first 24 days and then calculated the log-likelihood on the last 6 days of each sequence. The kernel was then normalized to 2 min = 120 sec.
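For the EXP kernel, this validation log-likelihood admits a standard exact evaluation via the usual recursion; a minimal sketch (our own, not the authors' implementation) is given below, assuming λ(t) = μ + ∑_{t_i<t} α e^{-β(t-t_i)}:

```python
import numpy as np

def hawkes_exp_loglik(times, mu, alpha, beta, T):
    """Exact log-likelihood of a univariate Hawkes process with an EXP
    kernel, using the recursion A_i = exp(-beta*dt) * (1 + A_{i-1})."""
    ll, A, prev = 0.0, 0.0, None
    for t in times:
        if prev is not None:
            A = np.exp(-beta * (t - prev)) * (1.0 + A)
        ll += np.log(mu + alpha * A)
        prev = t
    # compensator term: integral of lambda over [0, T]
    times = np.asarray(times)
    ll -= mu * T + (alpha / beta) * np.sum(1.0 - np.exp(-beta * (T - times)))
    return ll
```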
When comparing the log-likelihoods of the first- and second-level decompositions, we observed that the second level, with composite kernels, resulted in a higher log-likelihood in the majority of sequences from each category, as indicated in Table <ref>, which corroborates that a more flexible model of the kernel provides a more accurate description of the underlying dynamics of the process. The average log-likelihood for each level is shown in Table <ref>. The comparison of each sequence is shown in Figure <ref>.When comparing the performance of the best estimation among the two levels with the usual exponential HP model used in financial analysis, fitted through the gradient-based method from <cit.>, it is possible to see that the kernel composition exhibited a much more robust performance. Although the exponential HP performed well in some sequences, it tended to get stuck in local maxima with very poor performance, usually leading to unstable or negative combinations of parameters, for which the likelihood is null. The kernel composition performs better in the majority of sequences, as shown in Table <ref>. We provide the comparison of each individual sequence in the supplementary material.§.§ Earthquake DataThe data considered for the earthquake experiment was a set of 100 time sequences extracted from the USGS NCSN Catalog (NCEDC database), from 01/01/1966 to 01/01/2015. The latitude range was [30,55], and the longitude range was [-140,-110]. Different length intervals and resulting areas were considered. Whenever the magnitude of an event exceeded some threshold, its time coordinate was added to the corresponding input time sequence. The magnitude thresholds were varied among 2.5, 3.0, 3.5 and 4.0, and the grid resolution was set to 20 and 100 points. Seeking a scale-independent search, we use the aforementioned histogram heuristics: earthquake events are separated by time intervals of monthly or yearly scales, so an estimation horizon suited to financial data, usually lasting only a few seconds, would hardly capture the overall aspect of the triggering behavior in this case. The results indicate a strong agreement with the long-standing assumption of a power-law-shaped kernel for the intensity of aftershock occurrences (`Omori's Law' (1894)). For the 20-point grid resolution, the relative frequency of each kernel was ; for the 100-point grid resolution, it was . Q-Q plots from the estimated models are shown in Figure <ref>, in which comparisons to the original sequence are made among sequences generated by our kernel composition (disregarding the stability check), the discretized estimate, and the usual power-law kernel model fitted through the gradient-descent-based method (GD) <cit.>. Both our method and the discretized estimate perform very close to the original sequence, while the GD method tended to get stuck in local optima with poor performance. A timescale-based initialization of μ was used. §.§ Verifying the Scale-Independence CriterionTo verify the histogram criterion (explained in Section <ref>), we used both data sets (Stocks and Earthquake). As shown in Figure <ref>, the histogram criterion allows us to find a good kernel resolution on highly different scales.§ CONCLUSION Hawkes processes are point processes which capture self-exciting discrete events in time series data. To predict future events with HPs, an appropriate kernel has traditionally been selected by hand.
In this paper, we proposed a new temporal covariance-based kernel decomposition method to represent various self-exciting behaviors. We also presented a model (structure/parameter) learning algorithm to select the best HP kernel given the temporal discrete events. Stationarity conditions were derived to guarantee the validity of the kernel learning algorithm. In experiments, we demonstrated that the proposed algorithm performs better than existing methods at predicting future events by automatically selecting kernels.§ DERIVATIONS OF STATIONARITY CRITERIA FOR MULTIPLICATIVE COMBINATIONS OF KERNELS This appendix presents the full derivations of the stationarity criteria for the second-order multiplicative compositions of the four base kernels. §.§ EXP x EXP For the combination “EXPxEXP”, stationarity requires:0 ≤∫_0^∞ EXP(α_1,β_1) EXP(α_2,β_2) dx < 1, i.e., 0 ≤∫_0^∞α_1 e^-β_1 xα_2 e^-β_2 x dx < 1. Thus:∫_0^∞α_1 e^-β_1 xα_2 e^-β_2 x dx= ∫_0^∞ (α_1 α_2) e^-(β_1 + β_2)x dx = ∫_0^∞α e^-β x dx = α/β = α_1 α_2/(β_1 + β_2). So, this case reduces to the case of a single exponential. §.§ EXP x PWL For the combination “EXPxPWL”, stationarity requires:0 ≤∫_0^∞ EXP(α,β) PWL(K,c,p) dx < 1, i.e., 0 ≤∫_0^∞α e^-β x K/(x+c)^p dx < 1.Thus:∫_0^∞α e^-β x K/(x+c)^p dx= α K ∫_0^∞ (x+c)^-p e^-β x dx = α K e^β c∫_0^∞ (x+c)^-p e^-β (x+c) dx = α K e^β cβ^p∫_0^∞(β (x+c))^-p e^-β (x+c) dx = α K e^β cβ^p-1∫_β c^∞ t^-p e^-t dt = α K e^β cβ^p-1Γ (1-p,β c),where Γ(·,·) is the well-known incomplete Gamma function: Γ(a,y) = ∫_y^∞ t^a-1 e^-t dt. §.§ EXP x SQR For the combination “EXPxSQR”, stationarity requires:0 ≤∫_0^∞ EXP(α,β) SQR(B,L) dx < 1, i.e., 0 ≤∫_0^Lα B e^-β x dx < 1. Thus:∫_0^Lα B e^-β x dx = [ -α B e^-β x/β]_0^L = α B (1- e^-β L)/β. So, in the case of a multiplicative combination, the SQR kernel acts as a truncation horizon. §.§ EXP x SNS For the combination “EXPxSNS”, stationarity requires:0 ≤∫_0^∞ EXP(α,β) SNS(A,ω) dx < 1, i.e., 0 ≤∫_0^π/ω A α e^-β x sin (ω x) dx < 1, where:∫_0^π/ω A α e^-β x sin (ω x) dx= (A α/2i)∫_0^π/ω e^-β x(e^i ω x - e^-i ω x) dx = (A α/2i)[ e^(-β + i ω)x/(-β + i ω) - e^(-β - i ω)x/(-β - i ω)]_0^π/ω= [ -A α e^-β x(βsin(ω x) + ωcos(ω x))/(β^2 + ω^2)]_0^π/ω= A αω (1 + e^-βπ/ω)/(ω^2 + β^2).§.§ PWL x PWL In the case of the combination “PWLxPWL”, an upper bound is derived as follows:0 ≤∫_0^∞ PWL(K_1,c_1,p_1)PWL(K_2,c_2,p_2) dx <1, i.e., 0 ≤∫_0^∞ (K_1/(x+c_1)^p_1)(K_2/(x+c_2)^p_2) dx < 1.Then:∫_0^∞(K_1/(x+c_1)^p_1)(K_2/(x+c_2)^p_2) dx≤ ∫_0^∞K_1 K_2/(x+min(c_1,c_2))^p_1+p_2 dx = K_1 K_2/((p_1+p_2-1) min(c_1,c_2)^p_1+p_2-1).§.§ PWL x SQR For the combination “PWLxSQR”, stationarity requires: 0 ≤∫_0^∞ PWL(K,c,p)SQR(B,L) dx < 1, i.e., 0 ≤∫_0^L KB/(x+c)^p dx < 1, where:∫_0^L KB/(x+c)^p dx = [ KB (x+c)^1-p/(1-p)]_0^L= KB (c^-(p-1) - (c+L)^-(p-1))/(p-1). So, once again, the SQR kernel acts as a truncation horizon.
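As a quick sanity check of the EXP x PWL expression, the following sketch (our own; it assumes p < 1 so that 1-p > 0, where SciPy's regularized gammaincc applies) compares numerical integration with the closed form:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gammaincc, gamma

alpha, beta, K, c, p = 0.5, 1.2, 0.3, 0.8, 0.6

numeric, _ = quad(lambda x: alpha * np.exp(-beta * x) * K / (x + c) ** p,
                  0, np.inf)
# Gamma(a, y) = gammaincc(a, y) * gamma(a) for a = 1 - p > 0
closed = (alpha * K * np.exp(beta * c) * beta ** (p - 1)
          * gammaincc(1 - p, beta * c) * gamma(1 - p))
assert np.isclose(numeric, closed)
```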
§.§ PWL x SNS In the case of the combination “PWLxSNS”, an upper bound is derived as follows: 0 ≤∫_0^∞ PWL(K,c,p)SNS(A,ω) dx < 1, i.e., 0 ≤∫_0^π/ω KA sin(ω x)/(x+c)^p dx < 1, where:∫_0^π/ω KA sin(ω x)/(x+c)^p dx ≤ ∫_0^π/ω KA/(x+c)^p dx = [ KA (x+c)^1-p/(1-p)]_0^π/ω= KA((c + π/ω)^1-p - c^1-p)/(1-p).§.§ SQR x SQR For the combination “SQRxSQR”, stationarity requires:0 ≤∫_0^∞ SQR(B_1,L_1)SQR(B_2,L_2) dx < 1, i.e., 0 ≤∫_0^min(L_1,L_2) B_1 B_2 dx < 1, where:∫_0^min(L_1,L_2) B_1 B_2 dx = B_1 B_2 min(L_1,L_2) = B L, with B = B_1 B_2 and L = min(L_1,L_2). So, the multiplicative combination of two SQR kernels may be reduced to the case of a single SQR kernel. §.§ SQR x SNS In the case of combinations of the discontinuous kernels (SQR and SNS), we assume they have the same starting and ending points, i.e., L = π/ω. So, for the combination “SQRxSNS”, stationarity requires: 0 ≤∫_0^∞ SQR(B,L)SNS(A,ω) dx < 1, i.e., 0 ≤∫_0^π/ω A B sin(ω x) dx < 1, where:∫_0^π/ω A B sin(ω x) dx = 2AB/ω.§.§ SNS x SNS In the case of combinations of the discontinuous kernels (SQR and SNS), we assume they have the same starting and ending points. So, for the combination “SNSxSNS”, stationarity requires: 0 ≤∫_0^∞ SNS(A_1,ω)SNS(A_2,ω) dx < 1, i.e., 0 ≤∫_0^π/ω A_1 A_2 sin^2 (ω x) dx < 1, where:∫_0^π/ω A_1 A_2 sin^2 (ω x) dx = ∫_0^π/ω A(1-cos (2 ω x))/2 dx= π A/(2ω), with A = A_1 A_2. § DERIVATION OF THE LOG-LIKELIHOOD FORMULA FOR HPSThis derivation follows the steps in <cit.>. Given a realization (t_1,t_2,…,t_k) of some regular point process observed over the interval [0,T], the log-likelihood is expressed as:l = ∑_i=1^k log (λ (t_i)) - ∫_0^T λ (u) du. Let L be the joint probability density of the realization:L = f(t_1,t_2,…,t_k) = ∏_i=1^k f(t_i),where each factor f(t_i) is understood as conditional on the history up to the previous arrival. It can be written in terms of the conditional intensity function; we can then find f in terms of λ:λ (t) = f(t)/(1 - F(t)) = (dF(t)/dt)/(1 - F(t)) = -d log (1-F(t))/dt,where, given the history up to the last arrival u, ℋ(u), F(t) is defined as the conditional cumulative probability distribution of the next arrival time T_k+1:F(t) = F(t|ℋ(u)) = ∫_u^t f(s|ℋ(u)) ds.Integrating both sides of Equation (<ref>) over (t_k,t):-∫_t_k^t λ (u) du = log (1-F(t)) - log (1-F(t_k)).Given that the realization is assumed to come from a so-called simple process, i.e., a process in which multiple arrivals cannot occur at the same time, we have F(t_k) = 0 as T_k+1 > t_k, which simplifies Equation (<ref>) to:-∫_t_k^t λ (u) du = log (1-F(t)).Further rearranging the expression:F(t) = 1 - exp ( -∫_t_k^t λ (u) du ),andf(t) = λ (t) exp (-∫_t_k^t λ (u) du ).Thus, the likelihood becomes:L = ∏_i=1^k f(t_i) = ∏_i=1^k λ (t_i) exp ( -∫_t_i-1^t_i λ (u) du )= [ ∏_i=1^k λ (t_i) ] exp ( -∫_0^t_k λ (u) du ).Given that the process is observed on [0,T], the likelihood must include the probability of seeing no arrivals in (t_k,T]:L = [ ∏_i=1^k f(t_i)] (1 - F(T)).Using the formulation of F(t), we have:L = [ ∏_i=1^k λ (t_i)] exp ( -∫_0^T λ (u) du).Finally, taking the logarithm of the expression, we obtain the formula for l:l = ∑_i=1^k log (λ (t_i)) - ∫_0^T λ (u) du.§ AUTOMATIC REPORT § COMPARISON BETWEEN GRADIENT-BASED AND DISCRETIZED ESTIMATION STEPS FOR THE FINANCIAL DATASETS | http://arxiv.org/abs/1703.09068v6 | {
"authors": [
"Rafael Lima",
"Jaesik Choi"
],
"categories": [
"cs.LG"
],
"primary_category": "cs.LG",
"published": "20170327152554",
"title": "Make Hawkes Processes Explainable by Decomposing Self-Triggering Kernels"
} |
[email protected] Max-Planck-Institut für Intelligente Systeme, Heisenbergstr. 3, 70569 Stuttgart, Germany IV. Institut für Theoretische Physik, Universität Stuttgart, Pfaffenwaldring 57, 70569 Stuttgart, Germany We theoretically study the motion of a rigid dimer of self-propelling Janus particles. In a simple kinetic approach without hydrodynamic interactions, the dimer moves on a helical trajectory and, at the same time, it rotates about its center of mass. Inclusion of the effects of mutual advection using the superposition approximation does not alter the qualitative features of the motion but merely changes the parameters of the trajectory and the angular velocity. Rotational motion of dimers of Janus particles Arghya Majee December 30, 2023 ==============================================§ INTRODUCTION Self-propelling Janus particles (JPs) have gained increasing attention in recent years from both theoreticians and experimentalists as they show a promising route towards the understanding of the motion of living microorganisms <cit.>. On the other hand, possible applications range from drug delivery to autonomous micromachines <cit.>. Various types of microswimmers have been built in recent years which mainly rely on the nonuniform surface properties of the particle. Due to their asymmetric surface properties, these particles are able to generate their own gradient within an otherwise homogeneous medium and propel in this self-generated gradient. One particularly widely studied system in this context is chemical-reaction-driven self-propellers <cit.>. Of late, heating of half-metal-coated Janus particles has emerged as a possible way to achieve self-propulsion. The metal cap can absorb energy from laser irradiation <cit.> or an ac magnetic field <cit.> and convert it into heat. The asymmetric thermal response of the capped and uncapped hemispheres then drives the colloid via self-thermophoresis. For a rotationally symmetric single particle the resulting motion is linear initially; at longer times enhanced diffusion takes place due to rotational Brownian motion. However, a system of twin Janus particles, with one of them tethered to the glass surface, has been observed to rotate under laser irradiation <cit.>. Several other rotationally asymmetric systems have also been reported to show rotational movements. For example, circular motion of L-shaped asymmetric microswimmers on the substrate of a thin film and near channel boundaries has been reported <cit.>. Very recently, stable rotation was observed for a dimer system of chemically active Janus particles <cit.>. In this paper we consider a dimer of rigidly attached Janus particles, which is free to move and to rotate in three dimensions; our analysis applies best to the case of self-thermophoretic particles. Since the thermal conductivities of the solvent and the colloids are usually very close to each other, the temperature field due to one particle is hardly affected by the presence of the neighbor particle. This is usually not the case for a dimer of catalytic Janus particles, as in that case the solutes cannot penetrate the other particle and the concentration gradient is affected by the presence of the second particle <cit.>. In our system, the orientation of the metal caps with respect to the dimer axis is kept arbitrary and each JP is treated in terms of a squirmer model where the local slip velocity at each particle surface is approximated by the first two Fourier components in an expansion in terms of the Legendre polynomial basis; see Eq.
(<ref>) below and the following discussion. As a first approach we use a simple model without hydrodynamic interactions. Then we retain mutual advection and forces in terms of a superposition approximation for the flow fields created by the two particles of the dimer. § DIMER MOTION We consider the self-propulsion of a dimer, that is, of two Janus particles (squirmers) which are rigidly attached to each other. Their motion arises from an effective slip velocity u_s(θ), which is generated by a concentration or temperature gradient and depends only on the polar angle θ with respect to the symmetry axis. A single particle moves at a velocity 𝐮_0=u_0 𝐧, where both its absolute value u_0 and direction 𝐧 are determined by the weighted surface average of the slip velocity <cit.>. Two particles that are attached to each other exert mutual forces and result in a more complex motion which depends on the relative orientation of the particles with respect to the dimer axis. To describe the motion of this system, it is convenient to separate the center-of-mass velocity 𝐔 and the relative motion of the JPs. For a rigid dimer, the relative motion reduces to its angular velocity Ω. We start with the simplest model, which neglects advection. Then the center-of-mass motion is given by the mean value of the single-particle velocities, 𝐔_0=(𝐮_0+𝐮_0^')/2. The corresponding angular velocity, Ω_0=(𝐮_0-𝐮_0^')/(2a)×𝐞, accounts for the orbital motion resulting from single-particle motion perpendicular to the dimer axis 𝐞. As shown in Fig. 1, 𝐞 points towards the primed particle. In the absence of additional torques and forces, the absolute values of both linear and angular velocities are constant in time. The orientation of one or the other, or of both, may change depending on the three vectors 𝐮_0, 𝐮_0^', 𝐞. The linear velocity 𝐔_0 also rotates at an equal rate, d𝐔_0/dt=Ω_0×𝐔_0. It turns out convenient to separate the velocity into two components, 𝐔_0=𝐔_0^∥+𝐔_0^⊥, which are parallel and perpendicular to the angular velocity. Then the trajectory of the center of mass consists of a linear motion with velocity 𝐔_0^∥, and a rotation in the plane perpendicular to Ω_0; the latter is characterized by the angular velocity Ω_0 and a circular trajectory of radius R_0=U_0^⊥/Ω_0. The dimer axis is always perpendicular to Ω_0 and thus obeys the equation of motion d𝐞/dt=Ω_0×𝐞. Since the angular velocity Ω_0 is perpendicular to 𝐞, it is constant in time. Thus the trajectories of the two JPs depend on the relative orientation of Ω_0 and 𝐔_0 and consist of two contributions: the dimer shows translational motion at velocity 𝐔_0^∥ along the vertical axis 𝐳̂ defined by Ω_0=Ω_0𝐳̂. Both 𝐔_0^⊥ and 𝐞 rotate about this axis at the angular velocity Ω_0. The center of mass moves on a helical trajectory 𝐑_cm = U_0^∥ t 𝐳̂ + R_0 φ̂(Ω_0 t), where φ̂ is the local basis vector corresponding to the azimuthal angle φ. At the same time, the dimer axis 𝐞 rotates in the plane perpendicular to 𝐳̂, such that each JP describes a helical trajectory, of radius R_± = √(R_0^2 + a^2 ± 2aR_0(𝐞·φ̂)). Typical trajectories of the two JPs are shown in Fig. 2. Note that the dimer axis 𝐞 is constant with respect to the linear motion. In the special case 𝐮_0=𝐮_0^', the angular velocity vanishes and the dimer moves along a straight line, whereas 𝐮_0=-𝐮_0^' results in a simple rotation about the center of mass.
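A minimal numerical sketch (our own illustration, with arbitrarily chosen single-particle velocities) of these helix parameters:

```python
import numpy as np

# Helix parameters of the dimer's center of mass from u0, u0p and the
# dimer axis e, using U0 = (u0 + u0p)/2 and
# Omega0 = ((u0 - u0p)/(2a)) x e; example values are arbitrary.
a = 1.0
e = np.array([1.0, 0.0, 0.0])
u0 = np.array([0.3, 0.1, 0.0])
u0p = np.array([0.1, -0.2, 0.1])

U0 = 0.5 * (u0 + u0p)
Omega0 = np.cross((u0 - u0p) / (2.0 * a), e)
w = np.linalg.norm(Omega0)
zhat = Omega0 / w                       # helix axis
U_par = np.dot(U0, zhat) * zhat         # drift along the axis
U_perp = U0 - U_par                     # rotating component
R0 = np.linalg.norm(U_perp) / w         # radius of the circular part
print("pitch velocity:", np.linalg.norm(U_par), "radius:", R0)
```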
§ ADVECTION A moving particle gives rise to a characteristic velocity field in the surrounding fluid. In addition to its own motion, each particle of a dimer is advected in the velocity field 𝐯^'(𝐫) of its neighbor and is also subject to a mechanical force 𝐅. Using Faxén's law we find the linear velocity 𝐮=𝐮_0+𝐅/ξ+𝐯^'(-2a 𝐞)+(a^2/6)∇^2𝐯^'(-2a 𝐞), where ξ =6πη a is the usual Stokes friction factor with the viscosity η, and where 𝐯^' is evaluated at the distance 𝐑-𝐑^'=-2a𝐞. For the second particle we find the corresponding expression for 𝐮^' by exchanging primed and unprimed quantities, 𝐮^'=𝐮_0^'+𝐅^'/ξ+𝐯(2a𝐞)+(a^2/6)∇^2𝐯(2a𝐞). Note the change of sign of the argument of the advection flow. For symmetry reasons, the forces cancel each other and are parallel to the dimer axis, 𝐅 = F 𝐞=-𝐅'. §.§ Fluid velocity field Due to some osmotic or catalytic effect, each particle induces an effective slip velocity u_s along its surface. For an axisymmetric Janus particle the slip velocity depends on the polar angle θ only; the leading terms of an expansion in powers of cosθ read as u_s(θ)=(3/2)u_0 sinθ (1+βcosθ), where the factor sinθ is characteristic for a sphere <cit.>. The squirmer parameter β is related to the long-range velocity field 𝐯∝β r^-2 in the surrounding fluid, and to a large extent determines hydrodynamic interactions with neighbor particles or nearby solid boundaries; an active particle behaves like a “puller” for β>0 and like a “pusher” for β<0 <cit.>. The higher Fourier components in the expression for u_s(θ) are of the form β_n dP_n(cosθ)/dθ, with P_n being the Legendre polynomial of order n; the corresponding velocity field vanishes as 1/r^n and 1/r^n+2 (n>2). However, these additional corrections would have little influence on the dimer motion since they would not alter the qualitative picture. Throughout this paper we assume an axisymmetric slip velocity (<ref>) and we discard any single-particle angular motion. Eq. (<ref>) provides a boundary condition for the velocity field in the surrounding fluid, 𝐯_T=-(1/2)u_0(a^3/r^3)(1-3𝐫̂𝐫̂)·𝐧-(3/2)β u_0(a^2/r^2)P_2(𝐧·𝐫̂)𝐫̂+(3/2)β u_0(a^4/r^4)(P_2(𝐧·𝐫̂)𝐫̂-(𝐧·𝐫̂)(1-𝐫̂𝐫̂)·𝐧), where 𝐫̂ is the radial unit vector with respect to the particle center, 𝐧 is the unit vector along the velocity vector 𝐮_0, and P_2 represents the Legendre polynomial of second order <cit.>. Eq. (<ref>) gives the usual flow field of a self-propelling particle; the first term on the right-hand side occurs for a particle with uniform surface properties in a constant driving field <cit.>, whereas the remainder accounts for the finite squirmer parameter. In addition to 𝐯_T, there is a velocity component arising from the force 𝐅, 𝐯_F=((3/4)(a/r)(1+𝐫̂𝐫̂)+(1/4)(a^3/r^3)(1-3𝐫̂𝐫̂))·𝐅/ξ. Note the presence of a long-range Stokeslet contribution proportional to 1/r. The advection velocity in (<ref>) is given by the sum of the above terms, 𝐯=𝐯_T+𝐯_F. This velocity field is accompanied by the non-uniform pressure P = 𝐅·𝐫/(4π r^3) -3ηβ u_0(a^2/r^3)P_2(𝐧·𝐫̂), where the first term is related to the Stokeslet in 𝐯_F and the second one to the r^-2-contribution to the squirmer field. Similar expressions 𝐯' and P' are obtained for the neighbor particle, by replacing the parameters u_0 and 𝐅 with the corresponding primed quantities. §.§ Linear velocity The center-of-mass velocity of the dimer is given by 𝐔=(𝐮+𝐮^')/2. Inserting the single-particle velocities (<ref>) and (<ref>) we have 𝐔= (1-𝐐/8)·𝐔_0+(3β/32)(c' 𝐮'_0- c 𝐮_0 +u_0 (1-c^2)/2·𝐞-u'_0 (1-c'^2)/2·𝐞), where we have defined the quadrupole operator 𝐐=1-3𝐞𝐞, and the orientation cosines c=𝐧·𝐞, c'=𝐧'·𝐞. Note that 𝐔 depends on the relative orientation of the particle axes with respect to the dimer axis 𝐞.
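As an illustration of how this correction is evaluated, a short sketch (ours; it simply transcribes the last displayed equation, and the function name is our own) reads:

```python
import numpy as np

# Advection-corrected center-of-mass velocity, transcribing
# U = (1 - Q/8).U0 + (3*beta/32) * (cp*u0p - c*u0
#     + |u0|*(1-c**2)/2 * e - |u0p|*(1-cp**2)/2 * e),
# with Q = 1 - 3 e e, c = n.e and cp = n'.e.
def center_of_mass_velocity(u0, u0p, e, beta):
    n, npr = u0 / np.linalg.norm(u0), u0p / np.linalg.norm(u0p)
    c, cp = np.dot(n, e), np.dot(npr, e)
    Q = np.eye(3) - 3.0 * np.outer(e, e)
    U0 = 0.5 * (u0 + u0p)
    corr = (cp * u0p - c * u0
            + np.linalg.norm(u0) * (1.0 - c**2) / 2.0 * e
            - np.linalg.norm(u0p) * (1.0 - cp**2) / 2.0 * e)
    return (np.eye(3) - Q / 8.0) @ U0 + (3.0 * beta / 32.0) * corr
```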
§.§ Angular velocity Similar to Eq. (<ref>), in this case the angular velocity of the dimer with respect to its center is given by Ω=(𝐮-𝐮^')/(2a)×𝐞. Inserting the single-particle velocities (<ref>) and (<ref>) we find Ω = (17/16)Ω_0 + (3β/(32a))(c' 𝐮'_0 + c 𝐮_0)×𝐞. In the first term we have used 𝐐·Ω_0=Ω_0, which follows from the fact that Ω_0 is perpendicular to 𝐞. The second term, which is proportional to the squirmer parameter β, results in a correction that is not parallel to Ω_0. The angular velocity Ω is perpendicular to the dimer axis 𝐞. According to the equation of motion d𝐞/dt=Ω×𝐞, the dimer axis turns in the plane perpendicular to Ω which, as a consequence, is constant in time, dΩ/dt = 0. §.§ Mutual forces So far the mutual forces (<ref>) are not known; their strength F is determined from the condition that the single-particle velocities have the same component along the dimer axis, (𝐮-𝐮')·𝐞=0. Inserting the single-particle velocities 𝐮 and 𝐮' given in the Appendix and solving for F, we find the force acting on the unprimed particle, F/ξ =(7/6)(𝐮'_0-𝐮_0)·𝐞 - (β/4)(u_0 P_2(c)+u_0' P_2(c')). § DISCUSSION In the absence of advection, the dimer moves on a helical trajectory (<ref>), as illustrated in Fig. 1. This picture remains valid when including advection corrections, albeit with numerically modified linear and angular velocities 𝐔 and Ω as expressed by Eqs. (<ref>) and (<ref>). The helical trajectory is conserved because of the symmetry of the advection field. More complex trajectories would arise if the angular velocity had a component proportional to 𝐞, in other words, if the dimer rotated about its axis. Then Ω is no longer a constant, resulting in a more intricate motion. Here we mention two effects that would result in a time-dependent angular velocity. First, real Janus particles are not perfectly axisymmetric. In general, their slip velocity comprises a constant azimuthal component, resulting in a rotation about the particle axis 𝐧 with angular velocity ω_0. Then the angular velocity of the dimer comprises a term proportional to its axis, Ω_∥ =(ω_0+ω'_0)·𝐞, which leads to a more complex trajectory. Second, in this paper we have only addressed mutual advection of the two Janus particles, and neglected the influence of their activity. In the case of self-thermophoresis, for example, the slip velocity on the surface of one particle depends not only on its own temperature gradient but also on that of the neighbor, 𝐮_s = μ(𝐫̂)(∇_∥ T +∇_∥ T'), where ∇_∥ is the gradient component parallel to the particle surface. Since in general the mobility μ takes different values on the two hemispheres of a Janus particle, the term ∇_∥ T' induces an angular velocity ω_0 about the dimer axis. If the two particles are heated at different temperatures, or show different surface properties, their contributions to Ω_∥ do not cancel, and there is a net rotational motion of the dimer about its axis 𝐞. For usual Janus particles these effects are small, and probably could not be distinguished from rotational diffusion. Thus we are led to the conclusion that advection is the dominant interaction and that dimers in bulk solution move on helical trajectories. One should also keep in mind that gravity can play a role <cit.>. Because of the heavy metal cap on one hemisphere, the center of mass does not coincide with the geometric center.
Then gravity exerts a torque on the dimer, which in particular results in a time-dependent angular frequency vector Ω. We use the superposition approximation; in other words, we neglect higher reflections between the two particles. The reflection method is in principle valid for large distances. But as discussed before, the temperature field is rather insensitive to the presence of a neighbor particle due to the small conductivity difference between the particle and the solvent. On the contrary, the hydrodynamic flow field is sensitive to the presence of the second particle. However, it becomes more important when the particles forming the dimer move with respect to each other. For a rigidly attached dimer, the near-field effects are expected to be less significant. Therefore, a helix-like trajectory appears to be a generic feature for such rigid systems. Improving either the superposition approximation by considering higher reflections or considering higher-order terms in Faxén's law would render quantitative changes. But the relatively small linear corrections in Eqs. (<ref>) and (<ref>) (note the prefactor 3/32 in the terms proportional to β) suggest that higher-order terms are even smaller and do not change the qualitative picture. Similar numerical changes would result when improving the superposition approximation by applying the method of reflections. Thus, the overall trajectory of a rigid dimer is expected to be close to the helical one unless the additional effects mentioned in the preceding paragraphs are large enough to alter the picture. We conclude with a remark on the forces (<ref>) exerted by one Janus particle on the other. These mutual forces cancel in the linear and angular velocity, and thus are of little relevance for a rigid dimer. They are important, however, for particles linked by flexible DNA strands, where the length of the macromolecular bridge is determined by equilibrating the entropic force with the mutual force between two JPs, or between a JP and a passive colloid; such hybrid systems have been designed recently <cit.>. One should keep in mind, though, that for a flexible coupling the two particles also exert torques on each other, which results in a more complex rotation than that of a rigid dimer. Stimulating discussions with Alois Würger are gratefully acknowledged. § AUTHOR CONTRIBUTION STATEMENT A. M. conceived the problem, performed all the calculations, and wrote the manuscript. § SINGLE-PARTICLE VELOCITIES Here we give the advection corrections to the velocities 𝐮 and 𝐮' of the Janus particles forming the dimer. We evaluate the different contributions to the right-hand sides of Eqs. (<ref>) and (<ref>). We start with the velocity field 𝐯 created by the unprimed particle at the position of its neighbor (i.e., 𝐫=2a𝐞). There are two contributions, 𝐯(2a𝐞)=𝐯_T(2a𝐞)+𝐯_F(2a𝐞), which arise from the particle's self-propulsion and the force exerted by its neighbor, respectively. From the explicit expressions given in Eqs. (<ref>) and (<ref>), one readily obtains the advection terms 𝐯_T(2a𝐞)= -(3/8)β u_0 P_2(c)𝐞-𝐐·𝐮_0/16+(3β u_0/32)[P_2(c)𝐞-c(1-𝐞𝐞)·𝐧], and 𝐯_F(2a𝐞)=(3(1+𝐞𝐞)/8+ 𝐐/32)·𝐅/ξ. The correction term is evaluated by using Stokes' equation η∇^2𝐯=∇P with the expression for the pressure given in Eq. (<ref>).
Thus we find the contribution arising from self-propulsion, ∇^2𝐯_T=9β u_0(a^2/r^4)(P_2(𝐧·𝐫̂)𝐫̂-(𝐧·𝐫̂)(1-𝐫̂𝐫̂)·𝐧), and similarly that due to the mutual force, ∇^2𝐯_F=(1-3𝐫̂𝐫̂)·𝐅/(4πη r^3). Putting 𝐫=2a𝐞 and using the definitions of c and 𝐐, we have (a^2/6)∇^2𝐯_T(2a𝐞)=(3β u_0/32)(P_2(c)𝐞-c(1-𝐞𝐞)·𝐧), and (a^2/6)∇^2𝐯_F(2a𝐞)=𝐐·𝐅/(32ξ). We give the corresponding quantities for the primed particle at the relative position 𝐫=-2a𝐞 (note that the unit vector 𝐞 points from the unprimed to the primed particle): 𝐯'_T(-2a𝐞)= (3/8)β u_0' P_2(c')𝐞-𝐐·𝐮'_0/16+(3β u'_0/32)[-P_2(c')𝐞+c'(1-𝐞𝐞)·𝐧'], 𝐯'_F(-2a𝐞)=(3(1+𝐞𝐞)/8+ 𝐐/32)·𝐅'/ξ, (a^2/6)∇^2𝐯'_T(-2a𝐞)=(3β u'_0/32)(-P_2(c')𝐞+c'(1-𝐞𝐞)·𝐧'), and (a^2/6)∇^2𝐯'_F(-2a𝐞)=𝐐·𝐅'/(32ξ). Finally, inserting Eqs. (<ref>-<ref>) in (<ref>) and (<ref>), one obtains the single-particle velocities 𝐮= 𝐮_0 - 𝐐·𝐮'_0/16+ 𝐅/ξ+(5/8)𝐅'/ξ+(3β/16)( u'_0 P_2(c')𝐞 + c'(1-𝐞𝐞)·𝐮'_0), and 𝐮'= 𝐮'_0 - 𝐐·𝐮_0/16+ 𝐅'/ξ +(5/8)𝐅/ξ-(3β/16)( u_0 P_2(c)𝐞 + c(1-𝐞𝐞)·𝐮_0). Using 𝐅'=-𝐅 further simplifies the force terms. How10 S. J. Ebbens and J. R. Howse, Soft Matter 6, 726 (2010). Cat12 M. E. Cates, Rep. Prog. Phys. 75, 042601 (2012). Dre05 R. Dreyfus, J. Baudry, M. L. Roper, M. Fermigier, H. A. Stone, and J. Bibette, Nature 437, 862 (2005). Elg15 J. Elgeti, R. G. Winkler, and G. Gompper, Rep. Prog. Phys. 78, 056601 (2015). Sun08 S. Sundararajan, P. E. Lammert, A. W. Zudans, V. H. Crespi, and A. Sen, Nano Lett. 8, 1271 (2008). Lao08 R. Laocharoensuk, J. Burdick, and J. Wang, ACS Nano 2, 1069 (2008). Pax04 W. F. Paxton, K. C. Kistler, C. C. Olmeda, A. Sen, S. K. St. Angelo, Y. Cao, T. E. Mallouk, P. E. Lammert, and V. H. Crespi, J. Am. Chem. Soc. 126, 13424 (2004). Pax06 W. F. Paxton, P. T. Baker, T. R. Kline, Y. Wang, T. E. Mallouk, and A. Sen, J. Am. Chem. Soc. 128, 14881 (2006). How07 J. R. Howse, R. A. L. Jones, A. J. Ryan, T. Gough, R. Vafabakhsh, and R. Golestanian, Phys. Rev. Lett. 99, 048102 (2007). Gol05 R. Golestanian, T. B. Liverpool, and A. Ajdari, Phys. Rev. Lett. 94, 220801 (2005). Jia10 H.-R. Jiang, N. Yoshinaga, and M. Sano, Phys. Rev. Lett. 105, 268302 (2010). Ber15 A. P. Bergulla and F. Cichos, Faraday Discuss. 184, 381 (2015). Bar13 L. Baraban, R. Streubel, D. Makarov, L. Han, D. Karnaushenko, O. G. Schmidt, and G. Cuniberti, ACS Nano 7, 1360 (2013). Kum13 F. Kümmel, B. ten Hagen, R. Wittkowski, I. Buttinoni, R. Eichhorn, G. Volpe, H. Löwen, and C. Bechinger, Phys. Rev. Lett. 110, 198302 (2013). Wit15 A. Wittmeier, A. L. Holterhoff, J. Johnson, and J. G. Gibbs, Langmuir 31, 10402 (2015). Moo16 N. S.-Mood, A. Mozaffari, and U. M. C.-Figueroa, J. Fluid Mech. 798, 910 (2016). And89 J. L. Anderson, Ann. Rev. Fluid Mech. 21, 61 (1989). Bla71 J. R. Blake, J. Fluid Mech. 46, 199 (1971). Ish06 T. Ishikawa, M. P. Simmonds, and T. J. Pedley, J. Fluid Mech. 568, 119 (2006). Llo10 I. Llopis and I. Pagonabarraga, J. Non-Newtonian Fluid Mech. 165, 946 (2010). Zot14 A. Zöttl and H. Stark, Phys. Rev. Lett. 112, 118101 (2014). Wur13 T. Bickel, A. Majee, and A. Würger, Phys. Rev. E 88, 012301 (2013). Cam13 A. I. Campbell and S. J. Ebbens, Langmuir 29, 14066 (2013). Sch15 R. Schachoff, M. Selmke, A. Bregulla, F. Cichos, D. Rings, D. Chakraborty, K. Kroy, K. Günther, A. Henning-Knechtel, E. Sperling, M. Mertig, diffusion-fundamentals.org 23, 1 (2015). | http://arxiv.org/abs/1703.09063v1 | {
"authors": [
"Arghya Majee"
],
"categories": [
"cond-mat.soft"
],
"primary_category": "cond-mat.soft",
"published": "20170327133402",
"title": "Rotational motion of dimers of Janus particles"
} |
The (theta, wheel)-free graphs Part II: structure theorem Marko Radovanović University of Belgrade, Faculty of Mathematics, Belgrade, Serbia. Partially supported by Serbian Ministry of Education, Science and Technological Development project 174033. E-mail: [email protected], Nicolas Trotignon CNRS, LIP, ENS de Lyon. Partially supported by ANR project Stint under reference ANR-13-BS02-0007 and by the LABEX MILYON (ANR-10-LABX-0070) of Université de Lyon, within the program ‘‘Investissements d'Avenir’’ (ANR-11-IDEX-0007) operated by the French National Research Agency (ANR). Also Université Lyon 1, université de Lyon. E-mail: [email protected], Kristina Vušković School of Computing, University of Leeds, and Faculty of Computer Science (RAF), Union University, Belgrade, Serbia. Partially supported by EPSRC grants EP/K016423/1 and EP/N0196660/1, and Serbian Ministry of Education and Science projects 174033 and III44006. E-mail: [email protected]===================================================================================================================================================================================================The K-homology ring of the affine Grassmannian of SL_n(ℂ) was studied by Lam, Schilling, and Shimozono. It is realized as a certain concrete Hopf subring of the ring of symmetric functions. On the other hand, for the quantum K-theory of the flag variety Fl_n, Kirillov and Maeno provided a conjectural presentation based on the results obtained by Givental and Lee. We construct an explicit birational morphism between the spectra of these two rings. Our method relies on Ruijsenaars's relativistic Toda lattice with unipotent initial condition. From this result, we obtain a K-theory analogue of the so-called Peterson isomorphism for (co)homology. We provide a conjecture on the detailed relationship between the Schubert bases, and, in particular, we determine the image of Lenart–Maeno's quantum Grothendieck polynomial associated with a Grassmannian permutation.§ INTRODUCTION Let Fl_n be the variety of complete flags V_∙=(V_1⊂⋯⊂ V_n=ℂ^n) in ℂ^n, which is a homogeneous space G/B, where G=SL_n(ℂ) and B is the Borel subgroup of the upper triangular matrices in G.
Let K(Fl_n) be the Grothendieck ring of coherent sheaves on Fl_n. Givental and Lee <cit.> studied the quantum K-theory QK(Fl_n), which is a ring defined as a deformation of K(Fl_n) (see <cit.> for the general construction of quantum K-theory). Similar to Givental–Kim's presentation (see <cit.>) for the quantum cohomology ring QH^*(Fl_n), Kirillov and Maeno <cit.> provided a conjectural presentation for QK(Fl_n), which we denote temporarily by 𝒬𝒦(Fl_n). Let Gr_SL_n=G(ℂ((t)))/G(ℂ[[t]]) be the affine Grassmannian of G=SL_n, whose K-homology K_*(Gr_SL_n) has a natural structure of a Hopf algebra. Lam, Schilling, and Shimozono <cit.> constructed a Hopf isomorphism between K_*(Gr_SL_n) and the subring Λ_(n):=ℂ[h_1,…,h_n-1] of the ring Λ of symmetric functions. The first main result of this paper is an explicit ring isomorphism between K_*(Gr_SL_n) and 𝒬𝒦(Fl_n) after appropriate localization. The corresponding result in (co)homology, for a semisimple linear algebraic group G, is called the Peterson isomorphism; this result was presented in lectures by Peterson at MIT in 1997, and published in a paper by Lam and Shimozono <cit.> in torus-equivariant and parabolic settings. We also provide a conjecture that describes a detailed correspondence between the Schubert bases for K_*(Gr_SL_n) and 𝒬𝒦(ℱl_n) (Conjecture <ref>). Our method of constructing the isomorphism relies on Ruijsenaars's relativistic Toda lattice <cit.>. Note that a similar approach to the original Peterson isomorphism for SL_n by solving the non-relativistic Toda lattice was given by Lam and Shimozono <cit.> and by Kostant <cit.>. It is natural to ask how K-theoretic Peterson isomorphisms for a general semisimple linear algebraic group G should be constructed. There are some results indicating that an approach to this problem using integrable systems would be fruitful. The relativistic Toda lattice associated with any root system was introduced by Kruglinskaya and Marshakov in <cit.>. The commuting family of q-difference Toda operators given by Etingof, as well as general such operators constructed by using a quantized enveloping algebra U_q(𝔤) of a complex semisimple Lie algebra 𝔤, were discussed by Givental and Lee <cit.>. As for the K-homology of the affine Grassmannian: Bezrukavnikov, Finkelberg, and Mirković <cit.> showed that the spectrum of the G(ℂ[[t]])-equivariant K-homology ring of Gr_G is naturally identified with the universal centralizer of the Langlands dual group of G.§.§ Relativistic Toda lattice The relativistic Toda lattice, introduced by Ruijsenaars <cit.>, is a completely integrable Hamiltonian system. This system can be viewed as an isospectral deformation of the Lax matrix L=AB^-1, where A is the upper bidiagonal matrix with diagonal entries z_1,…,z_n and superdiagonal entries -1, and B is the lower bidiagonal matrix with unit diagonal and subdiagonal entries -Q_1z_1,…,-Q_n-1z_n-1; that is, A=∑_i=1^n z_iE_i,i-∑_i=1^n-1E_i,i+1, B= 1-∑_i=1^n-1Q_iz_iE_i+1,i, where z_i and Q_j are complex numbers and E_i,j denotes the matrix unit. The relativistic Toda lattice is a partial differential equation with independent variables t_1,…,t_n-1 expressed in the Lax form:d L/dt_i=[L,(L^i)_<] (i=1,…,n-1),where (L^i)_< denotes the strictly lower triangular part of L^i. Consider the characteristic polynomial of L:Ψ_L(ζ)= det(ζ· 1-L)=ζ^n+∑_i=1^n (-1)^i F_i(z,Q)ζ^n-i. More explicitly, we haveF_i(z_1,…,z_n,Q_1,…,Q_n-1)=∑_I⊂{1,…,n}, # I=i ∏_j∈ I z_j∏_j∈ I, j+1∉ I (1-Q_j),where Q_n:=0.
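For concreteness, a direct transcription of this expansion into code (a sketch in our own notation; z and Q are plain sequences) reads:

```python
from itertools import combinations

# F_i(z, Q): sum over i-element subsets I of {1,...,n} of
# prod_{j in I} z_j * prod_{j in I, j+1 not in I} (1 - Q_j), with Q_n = 0.
def F(i, z, Q):
    n = len(z)
    Qext = list(Q) + [0.0]  # append Q_n = 0
    total = 0.0
    for I in combinations(range(1, n + 1), i):
        S, term = set(I), 1.0
        for j in I:
            term *= z[j - 1]
            if j + 1 not in S:
                term *= 1.0 - Qext[j - 1]
        total += term
    return total
```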
Let γ=(γ_1,…,γ_n)∈ℂ^n. In this paper, we assume γ_n=1 and consider the isospectral varietyZ_γ:= {(z,Q)∈ℂ^2n-1 | F_i(z,Q)=γ_i (1≤ i≤ n)}.This is an affine algebraic variety with coordinate ring ℂ[Z_γ]=ℂ[z_1,…,z_n,Q_1,…,Q_n-1]/I_γ, I_γ=⟨ F_i(z,Q)-γ_i (1≤ i≤ n)⟩. Note that Z_γ is isomorphic to a locally closed subvariety of SL_n(ℂ) (see <ref>). §.§ Quantum K-theory of flag variety A particularly interesting case occurs when γ_i=\binom{n}{i} (1≤ i≤ n). Then, the corresponding Lax matrix L has characteristic polynomial (ζ-1)^n. In fact, in this case, L is principal unipotent in the sense that it has only one Jordan block and the eigenvalues are all 1, so we say the corresponding isospectral variety is unipotent and denote it by Z_uni. Let ℒ_i be the tautological line bundle whose fiber over V_∙∈ Fl_n is V_i/V_i-1. There is a canonical ring isomorphism QK(Fl_n)≃ℂ[Z_uni], with z_i identified with the class of the tautological line bundle ℒ_i. Givental and Lee <cit.> studied a certain generating function of the ℂ^*× SL_n(ℂ)-equivariant Euler characteristics of a natural family of line bundles on the quasimap spaces from ℙ^1 to Fl_n. The function is a formal power series in Q=(Q_1,…,Q_n-1) and its copy Q', depending on q, the coordinate of ℂ^*, and on (Λ_1,…,Λ_n) with Λ_1⋯Λ_n=1, the coordinates of the maximal torus of SL_n(ℂ). They proved that the generating function is an eigenfunction of the finite q-difference Toda operator. The relation to the Toda lattice has been studied further by Braverman and Finkelberg <cit.>. Accordingly, it has been expected that there is a finite q-difference counterpart of the quantum D-module (<cit.>, see also <cit.>) giving the structure of QK(Fl_n); however, to the best of the authors' knowledge, the connection between the multiplication in QK(Fl_n) and the q-difference system is still uncertain. We hope that recent work by Iritani, Milanov, and Tonita <cit.> gives an explanation of this connection, and that ultimately Conjecture <ref> will be proved. We add some remarks on a certain finiteness property of the quantum K-theory. Note that the ring is originally defined as a ℂ[[Q]]-algebra (see <cit.>). For cominuscule G/P, such as the Grassmannian Gr_d(ℂ^n) of d-dimensional subspaces of ℂ^n, it was shown by Buch, Chaput, Mihalcea, and Perrin <cit.> that the multiplicative structure constants for the (quantum) Schubert basis [𝒪_X_w] are polynomial in the Q_i (see also Buch and Mihalcea <cit.> for earlier results on the Grassmannian). Thus, for such a variety, the ℂ[Q]-span of the Schubert classes forms a subring. So far, it is not known whether or not the finiteness holds for Fl_n in general. In the above conjecture, QK(Fl_n) should be interpreted as the ℂ[Q]-span of the Schubert classes. (Added in proof) After this article was submitted, a preprint <cit.> by Anderson–Chen–Tseng appeared, in which they proved the finiteness of the torus-equivariant K-ring of Fl_n and the relations F_i(z,Q)=e_i(Λ_1,…,Λ_n). This implies that Conjecture <ref> is true. §.§ Dual stable Grothendieck polynomials Let Λ denote the complexified[It is possible to work over the integers; however, we use complex coefficients throughout the paper because we use many coordinate rings of complex algebraic varieties.] ring of symmetric functions (see <cit.>). If we denote by h_i the ith complete symmetric function, Λ is the polynomial ring ℂ[h_1,h_2,…].
It has the so-called Hall inner product ⟨·,·⟩: Λ×Λ→ℂ and a standard Hopf algebra structure. For each partition λ=(λ_1≥λ_2≥⋯≥λ_ℓ) of length ℓ, the Schur function s_λ∈Λ is defined by s_λ=det(h_λ_i+j-i)_1≤ i,j≤ℓ. The stable Grothendieck polynomial G_λ is given as a sum over set-valued tableaux of shape λ, and these polynomials form a basis of a completed ring Λ̂ of symmetric functions (see Buch <cit.> for details). The dual stable Grothendieck polynomials {g_λ} due to Lam and Pylyavskyy <cit.> are defined by ⟨ g_λ,G_μ⟩=δ_λ,μ. It was shown in <cit.> that {g_λ} is identified with the K-homology Schubert basis of the infinite Grassmannian (see <cit.> for a more precise statement). Shimozono and Zabrocki <cit.> proved a determinant formula for g_λ. The following formula for g_λ is also available:g_λ=det( ∑_m=0^∞(-1)^m \binom{1-i}{m} h_λ_i+j-i-m)_1≤ i,j≤ℓ=s_λ+(lower-degree terms),where h_k=0 for k<0, and \binom{1-i}{m}=(1-i)⋯(1-i-m+1)/m!. It is straightforward to see the equivalence of formula (<ref>) and the one in <cit.> (see also <cit.>), so we omit the details.§.§ Lam–Schilling–Shimozono's presentation for K_*(Gr_SL_n) In <cit.>, Lam, Schilling, and Shimozono showed that the K-homology of the affine Grassmannian K_*(Gr_SL_n) <cit.> can be realized as a subring of the affine K-theoretic nil-Hecke algebra of Kostant–Kumar <cit.>. Let us denote by Λ_(n):=ℂ[h_1,…,h_n-1] the subring of Λ generated by h_1,…,h_n-1. In <cit.>, the following ring isomorphism was established:K_*(Gr_SL_n)≃Λ_(n).Note that K_*(Gr_SL_n) is equipped with a Hopf algebra structure coming from the based loop space Ω SU(n), and the above is a Hopf isomorphism with the Hopf algebra structure on Λ_(n) induced from the canonical one on Λ.§.§ K-theoretic Peterson isomorphism Let Z_uni^∘ be the Zariski open set of Z_uni defined as the complement of the divisor defined by Q_1⋯ Q_n-1= 0. Thus the coordinate ring ℂ[Z_uni^∘] is the localization of ℂ[Z_uni]=ℂ[z,Q]/I_uni by the Q_i (1≤ i≤ n-1). In view of Conjecture <ref>, we define𝒬𝒦(Fl_n)_loc:=ℂ[Z_uni^∘]. For the affine Grassmannian side, we define K_*(Gr_SL_n)_loc :=Λ_(n)[σ_i^-1,τ_i^-1 (1≤ i≤ n-1)],whereτ_i=g_R_i, σ_i=∑_μ⊂ R_i g_μ (1≤ i≤ n-1),and where R_i is the rectangle[Our notation of R_i is conjugate (transpose) to the one used in <cit.>.] with i rows and (n-i) columns. We set τ_0=σ_0=τ_n=σ_n=1. Let n=3. Then we haveτ_1=h_2, τ_2=h_1^2-h_2+h_1, σ_1=h_2+h_1+1, σ_2=h_1^2-h_2+2h_1+1.
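These expressions can be reproduced mechanically from the determinant formula (<ref>); a small sympy sketch (ours) for n = 3:

```python
import sympy as sp

# Compute g_lambda via the determinant formula and check tau_i, sigma_i
# for n = 3: tau_1 = g_(2), tau_2 = g_(1,1), sigma_i = sum of g_mu over
# partitions mu contained in the rectangle R_i.
h = {0: sp.Integer(1), 1: sp.Symbol("h1"), 2: sp.Symbol("h2")}
hfun = lambda m: h.get(m, sp.Integer(0)) if m >= 0 else sp.Integer(0)

def g(lam):
    l = len(lam)
    entry = lambda i, j: sum((-1)**m * sp.binomial(1 - i, m)
                             * hfun(lam[i - 1] + j - i - m)
                             for m in range(0, 6))
    return sp.expand(sp.Matrix(l, l, lambda a, b: entry(a + 1, b + 1)).det())

print(g([2]))                          # tau_1 = h2
print(g([1, 1]))                       # tau_2 = h1**2 - h2 + h1
print(sp.expand(1 + g([1]) + g([2])))  # sigma_1 = h2 + h1 + 1
```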
On the open set of ℙ(𝔛_γ) defined as the complement of the divisors given by τ_i, σ_i, we can construct the inverse morphism β to α. When γ is unipotent, the associatedisomorphism between the coordinate rings of these open sets is nothing but Φ_n.§.§ Quantum Grothendieck polynomialsThe quantum Grothendieck polynomials 𝔊_w^Q <cit.>of Lenart and Maeno are a family of polynomials in the variables x_1,…,x_n and quantum parameters Q_1,…,Q_n-1 indexed by permutations w∈ S_n (see <ref> for the definition). Note that x_i is identified with 1-z_i (the first Chern class of the dual line bundle of ℒ_i). It was conjectured in <cit.> that the polynomial 𝔊_w^Q represents the quantum Schubert class [𝒪_X_w] in ℂ[z,Q]/I_under the conjectured isomorphism[The main result of <cit.> is a Monk-type formula for 𝔊_w^Q. It is worthwhile to emphasize that the formulais proved logically independent from Conjecture <ref>. In fact, the formula holds in the polynomial ring of x_i and Q_i.](<ref>). §.§ K-theoretic k-Schur functions In the sequel we use the notation k = n-1. Let ℬ_k denote the set of k-bounded partitions λ, i.e. the partitions such that λ_1≤ k. The K-theoretic k-Schur functions {g_λ^(k)}⊂Λ_(n)≃ K_*(Gr_SL_n) are indexed by λ∈ℬ_k and defined in <cit.> as the dual basis of the Schubert basis of the Grothendieck ring K^*(Gr_SL_n) of the thick version of the affine Grassmannian (see <cit.> for Kashiwara's construction of the thick flag variety of a Kac–Moody Lie algebra).The highest degree component of g_λ^(k) is the k-Schur functions_λ^(k) introduced by Lapointe, Lascoux, and Morse <cit.>(see also <cit.> and references therein). We note that g_λ^(k) is equal to g_λ if k is sufficiently large <cit.>.Morse <cit.>[The conjecture after Property 47 in <cit.>.]conjectured that for a partition λ such that λ⊂ R_i for some1≤ i≤ k, we have g_λ^(k)=g_λ,thusin particular,g_R_i^(k)=g_R_i=τ_i.This conjecture is relevant for our considerations.Note that the counterpart of this conjecture for k-Schur functions was proved in <cit.>; that is, s_λ^(k)=s_λ if λ⊂ R_i for some 1≤ i≤ k. In particular, we have s_R_i^(k)=s_R_i. It is worth remarking s_R_i arise as the “τ-functions” of the finitenon-periodic Toda lattice with nilpotent initial condition.§.§ Image of the quantum Grothendieck polynomialsWe are interested in the image of 𝔊_w^Q by Φ_n.LetDes(w)= {i | 1≤ i≤ n-1,w(i)>w(i+1)} denote the set of descents of w∈ S_n. A permutation w∈ S_n is d-Grassmannian if Des(w)={d}; such elements are in bijection with partition λ=(λ_1,…,λ_d) such that λ⊂ R_d . If λ⊂ R_d, then the correspondingd-Grassmannian permutation w_λ,d (see Remark <ref>) has length|λ|=∑_i=1^dλ_i. Let λ^∨:=(n-d-λ_d,…,n-d-λ_1). Let λ be a permutation such that λ⊂ R_d.Then we haveΦ_n(𝔊_w_λ,d^Q)=g_λ^∨/τ_d. For a general permutation w, we will provide a conjecture on the image of 𝔊_w^Q. Let λ: S_n→ℬ_k bea map defined by Lam and Shimozono <cit.> (see <ref> below). In order to state the conjecture we also need an involution ω_k on ℬ_k, μ↦μ^ω_k,called the k-conjugate (<cit.>). The image of the map λ consists of elements in ℬ_k that are k-irreducible, that is, those k-bounded partition μ=(1^m_12^m_2⋯ (n-1)^m_n-1) such that m_i≤ k-i (1≤ i≤ n-1). Let ℬ_k^* denote the set of all k-irreducible k-bounded partitions.Note that ℬ_k^* is preserved by k-conjugate.Let us denote by S_n^* the subset {w∈ S_n | w(1)=1} of S_n. We know that λ gives a bijection fromS_n^* toℬ_k^* (see <ref> below).Let w be in S_n. 
There is a polynomial g̃_w∈Λ_(n) such that Φ_n(𝔊_w^Q)=g̃_w/∏_i∈Des(w)τ_i.Forthermore, g̃_w satisfies the following properties:(i) If λ(w)=λ(w') for w,w'∈ S_n, then we haveg̃_w=g̃_w'. (ii) For w∈ S_n, we have g̃_w=g^(k)_λ(w)^ω_k+∑_μ a_w,μg_μ^(k), a_w,μ∈,where μ runs for all elements in ℬ_k^* such that |μ|<|λ(w)|.(iii) (-1)^|μ|-|λ(w)|a_w,μ is a non-negative integer.The counterpart of Conjecture <ref> in the (co)homology case was established in <cit.>, where the quantum Schubert polynomial 𝔖_w^q of Fomin, Gelfand, and Postnikov <cit.> is sent by the original Peterson isomorphism to the fraction, whose numerator is the single k-Schur function associated withλ(w)^ω_k, and the denominator is the products of s_R_i^(k) such that i∈Des(w). Note in the formula in <cit.>, the numerator is the k-Schur function associated with λ(w) without k-conjugate by reason of the convention.If w∈ S_n is d-Grassmannian for some d, and w=w_μ,d with μ⊂ R_d,then from Theorem <ref>we haveg̃_w=g_μ^∨.Since we know that μ^∨=λ(w_μ,d)^ω_k (Lemma <ref> below), if (<ref>) is true,Conjecture <ref> holds for all Grassmannian permutations w. In the early stage of this work, we expected that g̃_w is always a single K-k-Schur function, however, this is not the case; for examplewe have g̃_1423=g_2,1,1^(3)-g_2,1^(3).§.§ Further discussionsLet us assume thatConjecture <ref> is true and discuss its possible implications. The property(iii) says {g̃_w}_w∈ S_n^* and {g_ν^(k)}_μ∈ℬ_k^* are two bases of the same space, and the transition matrix between the bases is lower unitriangular.This suggests that {g̃_w}_w∈ S_n^* is a part of an important basis of Λ_(n) different from the K-k-Schur basis. One possibility of such basis will be the following:Let us denote the function g̃_w byg̃_ν^(k) with ν=λ(w)^ω_k∈ℬ_k^*. For a general k-bounded partition μ, we can uniquely write it as μ=ν∪⋃_i=1^n-1R_i^e_i, ν∈ℬ_k^*,e_i≥ 0. Then we defineg̃_μ^(k)=g̃_ν^(k)·τ_1^e_1⋯τ_n-1^e_n-1.One sees that g̃_μ^(k) ( μ∈ℬ_k) form a basis of Λ_(n). In the proof of the isomorphism in <cit.>, they work in T-equivariant (T is the maximal torus of G) settings, and first give the module isomorphism ψ from H_*^T(Gr_G)_loc to QH^*_T(G/B)_loc, and next prove the ψ-preimage of the quantum Chevalley formula. Since the quantum Chevalley formula uniquely characterizes QH^*_T(G/B) due toa result of Mihalcea <cit.>, we know that ψ is a ring isomorphism. In our situation, we proved the ring isomorphism Φ_n (Theorem <ref>) without usingthe quantum Monk formula of 𝒬𝒦(Fl_n) (cf. Lenart-Postnikov <cit.>).Thus the basis {g̃_ν^(k)}_ν∈ℬ_k should satisfy the corresponding formula in Λ_(n).These issues will be studied further elsewhere.§.§ Organization.In Sections 2–4 of this paper, we give the K-theoretic Peterson morphism and prove it is an isomorphism (Theorem <ref>). In Sections 5–6, we calculate the image of quantum Grothendieck polynomials associated with Grassmannian permutations (Theorem <ref>). In Section 7, we discuss some details of Conjecture <ref>.In Section <ref>, we state the main results of the first main part of the paper.We construct a birationalmorphism α from Z_γ to ℙ(𝒪_γ) with𝒪_γ:=ℂ[ζ]/(ζ^n+∑_i=1^n-1 (-1)^iγ_iζ^n-i). We also describe the open sets of both Z_γ and ℙ(𝒪_γ) that are isomorphic as affine algebraic varieties.The complement of the open part of ℙ(𝒪_γ) is a divisor given by explicitly defined functions T_i,S_i. 
The main construction of Section 2 is the definition of the map Φ_n in Theorem <ref>.In fact, the isomorphism statement of Theorem <ref> is given as Corollary <ref>, which is the unipotent case ofTheorem <ref>. In Section 3, we prove Theorem <ref>. We also give a more conceptual description of the map α and its inverse β.Section <ref> includes the formula of Φ_n in terms of the functions T_i,S_i. In Section 4 we determine the precise form of the τ-functions T_i, S_i in terms of dual stable Grothendieck polynomials, thus completing the proof of Theorem <ref>.In Section 5, we summarize the second main result. The aim is to calculate the quantum Grothendieck polynomials associated with Grassmannian permutations. We give a versionQ_d of the quantization map from the K-ringof the Grassmannian Gr_d(ℂ^n)to 𝒬𝒦(Fl_n) by using our K-Peterson isomorphism. The main statement of Section 5 is the compatibility of Q_d and Lenart–Maeno'squantization map Qwith respect to the embeddingK(Gr_d(ℂ^n))↪ K(Fl_n)(Theorem <ref>). As a corollary to this, we obtain Theorem <ref>.Section 6 is devoted to the proof of Theorem <ref>. In Section 7, we explain some details of Conjecture 2 and give some examples of calculations. § CONSTRUCTION OF K-THEORETIC PETERSON ISOMORPHISMIn this section, we state our main construction.Let γ_i (1≤ i≤ n-1) be any complex numbers and set γ_n=1. Let f_γ(ζ)=ζ^n+∑_i=1^n(-1)^iγ_i ζ^n-i.Let𝒪_γ denote the quotient ringℂ[ζ]/(f_γ(ζ)). We also consider 𝒪_γ asan affine space. We use the following notation of minor determinants for an n× n matrix X=(x_ij)_1≤ i,j≤ n:ξ_i_1,…,i_r^j_1,…,j_r(X)=(x_i_a,j_b)_1≤ a,b≤ r.Let Δ_i,j = Δ_i,j(z,Q) = ξ^1,2,…,ĵ,…,n_1,2,…,î,…,n(ζ B(z,Q) - A(z)). We define the mapα: Z_γ→ℙ(𝒪_γ) sendingL∈ Z_γ to[Δ_1,1].If n=3 then Δ_1,1 is given asΔ_1,1=ζ^2+(Q_2z_2-z_2-z_3)ζ+z_2z_3. This is a handy definition of α. A more conceptual description of the map α isgiven in the next section, where 𝒪_γ isinterpreted as the centralizer of the companion matrix C_γ of the characteristic polynomial f_γ. In fact, we will construct the inverse β of α defined on an open set of ℙ(𝒪_γ), which is the counterpart of the map Kostant <cit.> defined for the ordinary finite Toda lattice. The case of our interest, as was noted above, isγ_i=ni that is equivalent to Ψ_L(ζ)=(ζ-1)^n. We call this parameterunipotent and denote the corresponding isospectral variety byZ_. Recall that we denote 𝒬𝒦(Fl_n)=ℂ[Z_],which is our working definition of the quantum K-theory of Fl_n. We will show below that α is a birational morphism of algebraic varieties. We also describe open parts that are isomorphic via the map α explicitly.As the corresponding isomorphism between the coordinate rings, we obtain theK-theory analogue of the Peterson isomorphism. Fix a linear isomorphismc: 𝒪_γ→ℂ^n. For 0≤ j≤ n, and φ∈𝒪_γ, let a_j=c(ζ^j), b_j=c(φζ^j). Define for 1≤ i≤ n,T_i(φ) =|b_0,b_1,⋯,b_i-1,a_i-1,⋯,a_n-2|,S_i(φ) =|b_0,b_1,⋯,b_i-1,a_i,⋯,a_n-1|. Note that a different choice of c yields a changeT_i↦ c^iT_i,S_i↦ c^i S_i with a nonzero constant c∈ℂ^*. Such change does not effect the following constructions, however,we choose c so that |a_0,…,a_n-1|=1. For each i, both T_i and S_i are homogenous polynomial functions in ℂ[𝒪_γ] of degree i.Let Y_γ=ℙ(𝒪_γ) and define a Zariski open setY_γ^∘={[φ]∈ℙ(𝒪_γ)|T_i(φ)≠ 0, S_i(φ)≠ 0 (1≤ i≤ n)}.Our first main result is the following. The map α gives an isomorphism from Z_γ^∘ to Y_γ^∘ as affine algebraic varieties. Now we apply this to unipotent case, namely the case when γ_i=ni. 
We choose c: 𝒪_γ→ℂ^n, φ↦^t(c_0,c_1,…,c_n-1) asφ= ∑_i=0^n-1(-1)^ic_i·(ζ-1)^i.Then we have T_n=S_n=c_0^n. So Y_^∘ is an open subvariety of the affineopen set U_0 of ℙ(𝒪_) defined by c_0≠ 0.We identify the coordinate ring ℂ[c_1/c_0,…,c_n-1/c_0] ofU_0with Λ_(n)=ℂ[h_1,…,h_n-1] by h_i=c_i/c_0 (1≤ i≤ n-1).Using this identification, we will prove (see <ref>)τ_i=T_i/c_0^i,σ_i=S_i/c_0^i(1≤ i≤ n-1). Via the isomorphism K(Gr_SL_n)≃Λ_(n), we haveℂ[Y_^∘]=K(Gr_SL_n)[τ_i^-1,σ_i^-1].We have the following isomorphism of rings:Φ_n: 𝒬𝒦(Fl_n)[Q_i^-1(1≤ i≤ n-1)] ∼⟶ K(Gr_SL_n)[τ_i^-1,σ_i^-1(1≤ i≤ n-1)]. The explicit formula of Φ_n (the second statement of Theorem <ref>) will be derived below in Proposition <ref> in <ref> together with Propositions <ref> and <ref>.§ PROOF OF THEOREM <REF>This section is devoted to the proof of Theorem <ref>. §.§ Gauss decompositionLet B (resp. B_-) denote the Borel subgroup of GL_n(ℂ) consisting of upper (resp. lower) triangular matrices. Let N_- (resp. N) denote the subgroup consisting of the unipotent lower (resp. upper) triangular matrices.A square matrix X of size n can beexpressedas X=X_+· X_- with X_+∈B, and X_-∈N_-,if and only if ξ_i+1,…,n^i+1,…,n(X)≠ 0 for 0≤ i≤ n-1. This is the factorization known as the Gauss or the LU-decomposition. The result is standard. See <cit.> for example.Let σ denote the matrix ∑_i=1^n-1E_i+1,i+E_1,n which represents the cyclic permutation (1,…,n). Let ε:=diag(1,-1,1,…,(-1)^n-1). Let X be a square matrix n such that x_1,n≠ 0. Then X can beexpressedas X=U^-1 R with R=(r_ij)∈Bσ, U=(u_ij)∈N_-ε, if and only if ξ_1,…,i-1,i^1,…,i-1,n(X)≠ 0 for 2≤ i≤ n-1. Moreover, if such decomposition exists, we haver_i+1,i=(-1)^i+1ξ^1,…,i,n_1,…,i,i+1(X)/ξ_1,…,i-1,i^1,…,i-1,n(X) (1≤ i≤ n-1).§.§ The variety Z of Lax matricesLet J=∑_i=1^n-1E_i,i+1. Let Z denote the set of matrices L in SL_n(ℂ) satisfying the following conditions: (Z_1): L+J is a lower triangular matrix,(Z_2):all entries of L^-1 further down the second subdiagonal are zero,(Z_3):ξ_i+1,…,n^i+1,…,n(L)≠ 0 for 1≤ i≤ n-1.Let T be the subgroup of (ℂ^*)^nconsisting of (z_1,…,z_n) such that z_1⋯ z_n=1. The map T×ℂ^n-1→ Z defined by sending(z,Q) with z∈ T, Q=(Q_1,…,Q_n-1)∈ℂ^n-1to L=AB^-1 with A=∑_i=1^nz_iE_i,i-J, B=1-∑_i=1^n-1Q_i z_iE_i+1,i,is an isomorphism of algebraic varieties. It is easy to see L=AB^-1 given by (<ref>) satisfies (Z_1). L^-1 is given as follows:[ 1/z_11/z_1z_2 ⋯ ⋯ 1/z_1z_2⋯ z_n;-Q_1-Q_1-1/z_2 ⋯ ⋯ -Q_1-1/z_2⋯ z_n; 0-Q_2 ⋱ ⋯ ⋮; 0 0 ⋱-Q_n-2-1/z_n-1 -Q_n-2-1/z_n-1z_n; 0 0 0-Q_n-1-Q_n-1-1/z_n ],and thereby (Z_2) holds. To see L satisfies (Z_3) we only need to notice that L is factorized as in Proposition <ref>. We construct the inverse map by using Proposition <ref>. Let L∈ Z. We decompose it as L=AB^-1, where A∈B,B^-1∈N_-. Let M=L^-1.Define Q_i=-M_i+1,i and z_i=M_1,i/M_1,i-1 with M_1,0=1. It is straightforward, by using (Z_1) and (Z_2), to check A,B are given by (<ref>).Thus Z is the affine variety whose coordinate ring ℂ[Z] isℂ[z,Q]/(z_1⋯ z_n-1).We define the subset Z^∘ of Z by imposing the condition:(Z_4): Q_i≠ 0 (1≤ i≤ n-1).Note that Z_γ defined by (<ref>) is a closedsubvariety of Z. Let Z_γ^∘=Z_γ∩ Z^∘. §.§ Centralizer of C_γLet C_γ denote the companion matrix of f_γ(ζ). Explicitly C_γ=J+∑_i=1^n (-1)^i-1γ_i E_n,n-i+1. Let 𝔛_γ denote the set of all matrices that commute with C_γ. Any X∈𝔛_γ is uniquely expressed as a polynomialX= ∑_i=0^n-1α_i· C_γ^i (α_i∈ℂ)in C_γ of degree at most n-1. 
This fact can be checked directly.In view of the Cayley-Hamilton theorem, the map from ℂ[ζ] sending φ(ζ) to φ(C_γ) induces an isomorphism𝒪_γ→𝔛_γ of affine varieties.In the following, we identify 𝒪_γ with 𝔛_γ via this map. Let Y_γ^∘ denote the subset of ℙ(𝒪_γ) such that the representatives φ∈𝒪_γ-{0} satisfy the following conditions : (Y_0): φ(C_γ) is invertible.(Y_1):(1,n) component of φ(C_γ) is non-zero.(Y_2): T_i(φ)≠ 0 (1≤ i≤ n-1). (Y_3): S_i(φ)≠ 0 (1≤ i≤ n-1).Note that (Y_1) is equivalent to the condition that φ(ζ) can be chosen so that it has degree n-1. We sayφ is normalized if it is monic of degree n-1.It should be natural to consider the set of elements of ℙ(𝒪_γ) satisfying(Y_1) as the centralizer of [C_γ] in PGL_n(ℂ), the Langlands dual group of SL_n(ℂ). T_i(φ) and S_i(φ) (Definition <ref>) are given in terms of the matrix φ(C_γ) as follows.We have the following:(1) T_i(φ)=(-1)^n-iξ^1,…,i-1,n_1,…,i-1,i(φ(C_γ)).(2) S_i(φ)=ξ^1,…,i_1,…,i(φ(C_γ)).Let us define c:𝒪_γ→ℂ^n by sending polynomial to thereminder with respect to f_γ andexpand it with basis 1,ζ,…,ζ^n-1, in particularζ^i↦e_i+1 (1≤ i≤ n-1). Then the ith row of φ(C_γ) is ^tb_i-1, and a_i=e_i+1. Now the formulas are easily obtained. §.§ Construction of α:Z_γ^∘→ Y_γ^∘Let (z,Q)∈ Z_γ and denote L(z,Q) by L.(1) There is a matrix R=(r_ij)_1≤ i,j≤ n in Bσ, unique up to scalar, such thatLR=RC_γ.Moreover, for such a matrix R, by multiplying some non-zero constant if needed, we have(R) =(-1)^n(n-1)/2Q_1^n-1Q_2^n-2⋯ Q_n-1, r_i+1,i =(-1)^i-1 Q_1⋯ Q_i (1≤ i≤ n-1).(2) There is a unique matrix U in N_-ε such thatLU=UC_γ.(1) Let Δ_i,j denote the (i,j)-minor of ζ B- A, i.e. Δ_i,j=ξ_1,…,î,…,n ^1,…, ĵ,…,n(ζ B-A).It is straightforward to show the following: (-1)^1+jΔ_1,j = (ζ^n-1 +⋯+(-1)^jz_j+1⋯ z_n·ζ^j-1)·∏_i=1^j-1Q_iz_i, Δ_n,j =ζ^j-1+⋯ +(-1)^j-1z_1⋯ z_j-1.Note in particular that Δ_1,1 is monic of degree n-1, and Δ_n,1=1. If we define a vector v_-:=^t(Δ_1,1,-Δ_1,2,…,(-1)^n-1Δ_1,n)in ℂ[ζ]^n,then by the Laplace expansion theorem we have (ζ B- A)v_-=^t((ζ B- A),0,…,0) =^t(f_γ(ζ),0,…,0).Let w_-:=B v_-. We apply the natural projection ℂ[ζ]→𝒪_γ to both hand sides of (<ref>). Then we have the following equation in 𝒪_γ^n:(ζ· 1-L) w_-=0.We can write w_-=R v_0 by a unique matrix R∈ M_n(ℂ), with v_0=^t(1,ζ,…,ζ^n-1). Noting that ζ·v_0=C_γv_0,(<ref>) is (RC_γ-LR)v_0=0, and thereby we have RC_γ-LR=0.We need to check that R∈Bσ.If we write v_-=R_0v_0 with R_0∈ GL_n(ℂ) then we can see from (<ref>) that R_0 is upper triangular such that nth column of R_0 is ^t(1,Q_1z_1,Q_1Q_2z_1z_2,…,Q_1⋯ Q_n-1z_1⋯ z_n-1). Consequently, R=BR_0 has the desired form. We also have (<ref>) because (j,j) entry of R_0 is (-1)^jz_j+1⋯ z_n∏_i=1^j-1Q_iz_i.If R' also satisfies R'C_γ-LR'=0, then by writing R=Xσ, R'=X'σ with X,X'∈B,X^-1X' commute with σ C_γσ^-1. By direct calculations, one sees that this commutativity means X^-1X' is a scalar matrix. Next we show (<ref>). Since B is unitriangular, we have (R)=(R_0). This is calculated by (<ref>) as ∏_j=1^n((-1)^jz_j+1⋯ z_n·∏_i=1^j-1Q_iz_i) =(-1)^n(n-1)/2(z_1⋯ z_n)^n-1 Q_1^n-1Q_2^n-2⋯ Q_n-1.Since z_1⋯ z_n=1, we have (<ref>).(2) We define v_+∈ℂ[ζ]^n byv_+:=^t(Δ_n,1,-Δ_n,2,…,(-1)^n-1Δ_n,n).If we set w_+=B v_+ then we have (ζ· 1-L) w_+=0 in 𝒪_γ^n. We write w_+=Uv_0. Then we have LU=UC_γ by the same reason. From (<ref>) we see thatU∈N_-ε.The uniqueness is straightforward. Now we construct a morphism α^∘:Z_γ^∘→ Y_γ^∘. Let (z,Q)∈ Z_γ^∘ and L=L(z,Q) be the correspondingLax matrix (Proposition <ref>). Let U,R be matrices constructed in Proposition <ref>. 
Note that R is invertible because we assume (Z_4) and we have(<ref>). We have L=RC_γ R^-1=UC_γ U^-1.So U^-1R∈𝔛_γ≃𝒪_γ.Let us write U^-1R=φ(C_γ) bya polynomial φ in ζ as (<ref>).Set α^∘(L):=[φ]∈ℙ(𝒪_γ). From the form of matrices R,U, one sees that the (1,n) entry of U^-1R is 1, so φ is monic of degree n-1. Thus (Y_1) holds for φ(C_γ). We have (Y_0) since R is invertible.Since φ(C_γ) is factorized as U^-1Ras in Proposition <ref>,(Y_2) holds in view ofLemma <ref> (1). Finally we check (Y_3).We use Lemma <ref> (2). From similar calculation of the proof of (<ref>),the i-th principal minor ofU^-1R (=φ(C_γ)) is written as(-1)^iz_i+1⋯ z_nQ_1^i-1Q_2^i-2⋯ Q_i-1,which is non-zero by assumption (Z_3). Thus α^∘(L) is an element of Y_γ^∘. We have α|_Z_γ^∘=α^∘. Since φ(C_γ)=U^-1R, we have in 𝒪_γ^n the following:w_-=R v_0=Uφ(C_γ)v_0= Uφ(ζ)v_0 =φ(ζ)Uv_0 =φ(ζ)w_+,and thereby v_-=φ(ζ)v_+. By comparing the first component of the both sides of this equality, we have Δ_1,1=φ(ζ).§.§ Construction of β: Y_γ^∘→ Z_γ^∘ Let φ be a normalized element in Y_γ^∘. Since we assume (Y_2), from Proposition <ref>, we have unique R,U such thatφ(C_γ)=U^-1R.From (Y_0), we see that R is invertible.Now we have UC_γ U^-1=RC_γ R^-1=:L. Setβ([φ]):=L. We claim that L is an element of Z^∘. It is easy to check that L satisfies (Z_1) and(Z_2) by using L=UC_γ U^-1 and L=RC_γ R^-1 respectively. (Z_3) follows from (Y_3) by the following. Let φ and L as above. Then we haveξ_i+1,…,n^i+1,…,n(L)=-S_i(φ)/T_i(φ) (1≤ i≤ n-1).Note that for any element R in Bσ, we have ξ_i+1,…,n^i+1,…,n(RC_γ R^-1) =(-1)^n-1ξ_1,…,i^1,…,i(R)/r_21r_32⋯ r_i,i-1.Now using (<ref>) we haver_21r_32⋯ r_i,i-1=(-1)^n-i+i(i+1)/2T_i(φ), and ξ_1,…,i^1,…,i(R)=(-1)^i(i-1)/2ξ_1,…,i^1,…,i(U^-1R).Thus we havethe result from Lemma <ref> (2). Finally we need to show (Z_4). It is assured by the fact that R is invertible. Let L=L(z,Q). Note that (R)=(-1)^n-1r_2,1r_3,2⋯ r_n,n-1≠ 0. Note also that (i+1,i) entry of L^-1 is -Q_i (cf. (<ref>)), which is the ratio r_i+1,i/r_i,i-1 with r_1,0=-1, and thus non-zero. Thus L is in Z^∘. Since L is conjugate to C_γ we have β([φ])=L∈ Z_γ^∘. The morphismsα^∘ and β are inverse to each other. Let L∈ Z_γ^∘. We take U,R such thatL=UC_γ U^-1=RC_γ R^-1. Then φ(ζ)=Δ_1,1 satisfies φ(C_γ)=U^-1R. Then α(L)=[φ] and β([φ])=UC_γ U^-1=L. On the other hand, let [φ]∈ Y_γ^∘. We assume φ is normalized. We have φ(C_γ)=U^-1R.Then β([φ])=UC_γ U^-1=RC_γ R^-1=:L. Then α^∘(L) is given by U^-1R. §.§ Explicit form of Φ_n Let Φ_n:ℂ[Z_γ^∘]→ℂ[Y_γ^∘] be the associated isomorphism of coordinate rings. Then we haveΦ_n(z_i) =T_i(φ)S_i-1(φ)/S_i(φ)T_i-1(φ), Φ_n(Q_i) =T_i-1(φ)T_i+1(φ)/T_i^2(φ).It is easy to show ξ_i,…,n^i,…,n(L)=z_i⋯ z_n. Then(<ref>) follows from (<ref>) andLemma <ref>. To prove (<ref>) we use (<ref>), (<ref>) to have Q_i=-r_i+1,i/r_i,i-1 = ξ_1,…,i-1,i^1,…,i-1,n(X)/ξ_1,…,i-2,i-1^1,…,i-2,n(X)·ξ_1,…,i-3,i-2^1,…,i-3,n(X)/ξ_1,…,i-2,i-1^1,…,i-2,n(X).with X=φ(C_γ). Then from Lemma <ref> (1), we have (<ref>). We observe that each component of L expressed as a rational function of T_i(φ)'s and S_i(φ)'s has no pole along the divisor defined by S_i(φ)=0. This is mysterious because z_i has pole along the divisor. We do not know any explanation of this phenomena. § DUAL STABLE GROTHENDIECK POLYNOMIALS AS Τ-FUNCTIONSIn this section we prove (<ref>). 
It shows that the functions T_d,S_ddefined as the determinants are, in the unipotent case, written in terms of the dual Grothendieck polynomials associated to the rectangle R_d.§.§ The determinant formula for dual stable Grothendieck polynomials Let U_0⊂() be the affine subspace defined by c_0≠ 0. We identify the coordinate ring [U_0] with the ring Λ_(n)=[h_1,…,h_n-1] through the identification c_i/c_0↔ h_i (see (<ref>) for the definition of c_i). We fix d such that 1≤ d≤ n.For f_1,f_2,…,f_d∈, define [f_1,f_2,…,f_d]∈[U_0]=Λ_(n) by the formula[f_1,f_2,…,f_d](φ)=c_0^-d·c(f_1φ), c(f_2φ), …, c(f_dφ), c(1), c(ζ), …, c(ζ^n-d-1)(φ∈ U_0).If f_jφ∈ U_0 is expressed as f_jφ=∑_i=0^n-1a_i,j(ζ-1)^i(ζ-1)^n, we have [f_1,…,f_d](φ)=(-1)^d(n-d)c_0^-d·(a_n-d+i-1,j)_i,j=1^d.Note that for m≥ 0, ζ^-m makes sense as an element ofsince (1-ζ)^n=0. We have ζ^-m=∑_l=0^n-1 (-1)^lm+l-1l(ζ-1)^l. For a partition λ⊂ R_d, we have [ (1-ζ)^n-1-λ_dζ^-d+1, (1-ζ)^n-2-λ_d-1ζ^-d+2, …, (1-ζ)^n-d-λ_1 ] =g_λ. For a,b≥ 0,we write (1-ζ)^aζ^-bφ=∑_i=0^n-1c_i^a,b(ζ-1)^i(ζ-1)^n. Then we havec_i^a,b=∑_m=0^∞(-1)^i+m-b mh_i-a-m, where we understand h_m=0 for m<0. By using (<ref>), the left hand side of (<ref>) can be calculated as follows:(-1)^d(n-d)( c_n-d+j-1^n-i-λ_d-i+1,d-i)_i,j=1^d=(-1)^d(n-d)( ∑_m=0^∞(-1)^n-d+j-1+mi-d mh_λ_d-i+1+j-(d-i+1)-m)_i,j=1^d = (-1)^d(n-d)+∑_j=1^d(n-d+j-1)+d(d-1)/2( ∑_m=0^∞(-1)^m1-i mh_λ_i+j-i-m)_i,j=1^d =g_λ. We have τ_d=g_R_d.Note that M_ζc(f)=c(fζ) with M_ζ:=E_n+J.We have τ_d=c_0^-d·| c(φ),…,c(φζ^d-1),c(ζ^d-1),…,c(ζ^n-2) |= c_0^-d·|M_ζ|^d-1| c(φζ^-(d-1)),…,c(φ),c(1),…,c(ζ^n-d-1) |=[ζ^-(d-1),ζ^-(d-2),…,1]=[(1-ζ)^d-1ζ^-(d-1),(1-ζ)^d-2ζ^-(d-2),…,1].The last equality follows from the fact that(1-ζ)^mζ^-m-ζ^-m (m≥ 1) is a linear combination of 1,ζ^-1,…,ζ^-m+1. Then from Lemma <ref> we obtain τ_d=g_R_d. §.§ A useful family of determinants Here we introduce some useful family of determinant formulas for symmetric polynomials which is suitable for our representation of K-theoretic Peterson isomorphism. For d-tuple (θ_1,…,θ_d) of integers and d-tuple (a_1,…,a_d) of non-negative integers, defineD θ_1 θ_2 … θ_d a_1 a_2 … a_d :=[(1-ζ)^a_1ζ^-θ_1,(1-ζ)^a_2ζ^-θ_2,…,(1-ζ)^a_dζ^-θ_d].We also denote D θ_1 θ_2 … θ_d 0 0 … 0 simply byD(θ_1,θ_2,…,θ_d).D θ_1 θ_2 … θ_d a_1 a_2 … a_d∈Λ_(n) is uniquely determined by the following recursive formulas: * D … θ_i … θ_j … … a_i … a_j … = - D … θ_j … θ_i … … a_j … a_i …,* D… θ_i … … a_i … = D… θ_i-1 … … a_i … + D… θ_i … … a_i+1 …,* D… θ … … n …=0,* D 0 0 … 0 n-λ_d-1 n-λ_d-1-2 … n-λ_1-d =s_λ,where s_λ is the Schur function.We have useful formulas such asDd-1 d-2 … 0 0 0 … 0 = Dd-1 d-2 … 0 d-1 d-2 … 0 =g_R_d,Dd-1 d-2 … 0 n-λ_d-1 n-λ_d-1-2 … n-d-λ_1 =g_λ, (λ⊂ R_d) D… θ … … a … = ∑_i(-1)^ip i D… θ+p … … a+i …. §.§ Lattice paths methodNow we prove the following.We have σ_d=∑_μ⊂ R_d g_μ.Since we have Lemma <ref>, it suffices to show the following. 
We haveDd d-1 … 1 d-1 d-2 … 0 = ∑_n>a_1>a_2>…>a_d≥ 0 Dd-1 d-2 … 0 a_1 a_2 … a_d.LetK^θ_a,i:=i-a+θ-1θ-1=( [; a≤ x_1≤ x_2≤…≤ x_θ≤ i ]).By using (<ref>) repeatedly, we haveDθ_1 … θ_d a_1 … a_d =∑_n>i_1>⋯>i_d≥ 0 ∑_σ∈ S_dsgn(σ)∏_j=1^dK^θ_j_a_j,i_σ(j) D0 … 0 i_1 … i_d=∑_n>i_1>⋯>i_d≥ 0(K^θ_l_a_l,i_m)_1≤ l,m≤ d· D0 … 0 i_1 … i_d.Especially,Dd d-1 … 1 d-1 d-2 … 0 =∑_n>i_1>⋯>i_d≥ 0(K^d-l+1_d-l,i_m)_1≤ l,m≤ d· D0 … 0 i_1 … i_d,Dd-1 d-2 … 0 a_1 a_2 … a_d =∑_n>i_1>⋯>i_d≥ 0(K^d-l_a_l,i_m)_1≤ l,m≤ d· D0 … 0 i_1 … i_d.Hence, it suffices to prove(K^d-l+1_d-l,i_m)_l,m=∑_n> a_1>…>a_d≥ 0(K^d-l_a_l,i_m)_l,mfor arbitrary i_1,…,i_d.Now consider the plane lattice in Figure <ref>. Let A_j (j=1,…,d) be the point with coordinates (d-j+1,d-j) and B_j be the point with (0,i_j). We immediately find that the number of shortest paths from A_l to B_m on the lattice is K^d-j+1_d-j,i_m. By the Lindström-Gessel-Viennot lemma <cit.>, the determinant (K^d-l+1_d-l,i_m)_l,m equals to the number of shortest non-intersecting paths from the source set {A_1,…,A_d} to the target set {B_1,…,B_d}. (See Figure <ref>).Let C_j (j=1,…,d) be the point with coordinates (d-j,a_j). Similarly as above, the determinant (K^d-j_a_j,i_m) equals to the number of shortest non-intersecting paths from the source set {C_1,…,C_d} to the target set {B_1,…,B_d}. (See Figure <ref>).Denote 𝔛=(), 𝔜^(a_1,…,a_d)= ( [; C_j=(d-j,a_j) ]).There exists a natural one-to-one correspondence⋃_n>a_1>…>a_d≥ 0𝔜^(a_1,…,a_d)→𝔛which associates(𝒫_1,…,𝒫_d)∈𝔜^(a_1,…,a_d), (𝒫_j)with the set of non-intersecting paths (𝒬_1,…,𝒬_d)∈𝔛, where 𝒬_j is the path which is obtained by adding to 𝒫_j a vertical segment from A_j to (d-j+1,a_j) and a horizontal edge from (d-j+1,a_j) to (d-j,a_j) (see Figure <ref>). Counting the cardinalities of these sets concludes (<ref>).§ QUANTUM GROTHENDIECK POLYNOMIALS OF GRASSMANNIAN TYPE We fix an integer d with 1≤ d≤ n. Let Gr_d(ℂ^n) denote the Grassmannian of d-dimensional subspaces of ℂ^n. We will show how a quantization map for the Grassmannian Gr_d(ℂ^n) can be interpretedin our context. Theorem <ref> is the main result whose proof is given in the next section. As an application, we obtainTheorem <ref>. §.§ A result on K-theoretic Littlewood-Richardson ruleFor partitions λ,μ,ν, the K-theoretic Littlewood-Richardson coefficients is the integer c_λ,μ^ν defined by G_λ· G_μ=∑_ν(-1)^|ν|-|λ|-|μ| c_λ,μ^ν· G_ν,where ν runs for all partitions.Buch's formula <cit.> gives c_λ,μ^ν asthe number of set-valued tableaux which satisfy a certain property.Here we explain a rule of Ikeda-Shimazaki <cit.>, which is equivalent to Buch's rule.Let T be a set-valued tableau T. The column word cw(T) of T is obtained by reading each column from top to bottom starting from the right most column to the left, where letters in the set filled in a box are read in the decreasing order. A word builds ν on λ if ν is constructed by adding a box in the row associated to each entry of the word one by one while keeping the shape being a partition. The coefficient c_λ,μ^ν is the number of set-valued tableaux T of shape μ such that cw(T) builds ν from λ (in <cit.>, such T is called λ-good tableaux of shape μ of content ν-λ).The following is stated without proof in <cit.> (proof of Corollary 1). We include a proof for completeness.Let λ be a partition such thatλ⊂ R_d. For any partition μ, we havec_λ,μ^R_d=δ_λ^∨, μ.(T. 
Matsumura)Let T be a set-valued tableau such that cw(T) builds R_d on λ.We prove that T is the ordinary semistandard tableau T_0 such thatith column of T_0 is filled with d,d-1,…,r_i+1 from the bottom to top,where r_i is the number of boxes in (n-d+i)th column of λ (see Example <ref> below).This in particular implies that the shape of T_0 is λ^∨.We proceed by induction on i to prove that the first i columns of T coincide with those of T_0. Let us consider the base case i=1. Observe that R_d has exactly one southeast corner, whose row index is d, and consequently, the last letter of the column word cw(T) should be d, which isthe minimum entry of the bottom box of the first column of μ. The second last entry of cw(T) is either d or d-1. If it is d, then it cannot be in thethe first column of μ, because each column of T contains at most one d.It follows that the first column of T consists of one box filled with only d, and hence μ consists of one row.Then T cannot contain d-1. It means that the right most column of λ has d-1 boxes.Thus this case is done. Suppose the second last entry of cw(T) is d-1. We claim that μ has at least two rows. If μ, on the contrary, has only one row, then T cannot contain d-1 ( d isthe minimum entry of the bottom box of the first column of μ). Next we claim that this d-1 is the only entry in the box above the bottom box of the first column of μ (there should be d-1 in the box above the bottom, and there cannot be other entry in the box, since otherwise other smaller entry becomes the second last entry of cw(T)).If μ has exactly two rows, there are no d-2 in T and the right most column of λ has d-2 boxes. Then the case is done. We can proceed in this way to see that T coincides with T_0 at the first column.Assume that the first i-1 columns of T coincide with those of T_0. We consider the column word of the part T_≥ i consisting of the last (n-d-i+1) columns of T. Let λ be the partition obtained by removing the boxes of R_d corresponding to the row numbers of the first (i-1) columns of T. The last letter of cw(T_≥ i) is d, because ith column of λ has d boxes, and the bottom box of this column is the only box of λ which is a southeast corner and does not belong to λ. Thenby the same argument of the case i=1,the entry of the bottom box of the first column of T_≥ i, the column i of T, is {d}. Moreover, by the same argument of the case i=1, we know that i-th column of T coincides with the one of T_0. Hence the induction completes.Let λ=(6,5,2,1) and d=4, n=10. Then λ^∨=(5,4,1) and T_0 is given as follows:λ=, T_0=23 3 3 43 4 44 4. §.§ Grothendieck polynomialsGrothendieck polynomials were introduced by Lascoux and Schützenberger <cit.> as polynomial representatives for the classes of structure sheaves of Schubert vatieties in K(Fl_n) (cf. (<ref>) below). For each 1≤ i≤ n-1, we define the isobaric divided difference operator given byπ_i f=(1-x_i+1)f-(1-x_i)s_if/x_i-x_i+1 ( f ∈ℤ[x_1,…,x_n])where the simple reflection s_i=(i,i+1) acts by exchanging x_i and x_i+1. If w_0=(n,n-1,…,1) is the longest permutation in S_n we set 𝔊_w_0=x_1^n-1x_2^n-2⋯ x_n-1.There exist a unique family {𝔊_w(x) |w∈ S_n} of polynomialssuch that π_i𝔊_w=𝔊_ws_iℓ(ws_i)=ℓ(w)-1 𝔊_w ℓ(ws_i)=ℓ(w)+1. §.§ Quantization map of K(Gr_d(ℂ^n)) and K-Peterson isomorphismLet Λ̂ denote the ℂ-span of the stableGrothendieck polynomials. This is a completion of the ring Λ. It was shown in <cit.> that Λ̂, denoted as Γ in the paper, is closed under multiplication. 
Denote by J_d the ideal of Λ̂ defined asJ_d:={f∈Λ̂ | f^⊥· g_R_d=0 }.There is a canonical isomorphism K(Gr_d(^n))≃Λ̂/J_d. The linear span of G_μ's such that μ⊄R_d isan ideal of Λ̂ and Λ̂/I_d is isomorphic K(Gr_d(^n)) (Buch <cit.>, Theorem 8.1). From Proposition <ref>, we see that J_d coincides with I_d.Via the isomorphism in Proposition <ref>, we define the -linear map Q_d:K(Gr_d(^n))→𝒬𝒦(Fl_n), fJ_d↦Φ_n^-1( (f^⊥· g_R_d)/g_R_d). §.§ Quantization map of Lenart–MaenoFor 1≤ m≤ n, define a polynomial F^(m)_i∈[x,Q] (cf. <cit.>) byF^(m)_i=∑_I⊂{1,…,m}# I=i∏_j∈ I(1-x_j)∏_j∈ I, j+1∉ I (1-Q_j),where Q_n:=0. Note that F^(n)_i is nothing but F_i with z_j=1-x_j.In <cit.>, Lenart and Maeno introduced the quantization map Q and defined the quantum Grothendieck polynomials by using it. Let e_i be the ith elementary symmetric polynomial. Letf_i^(j)=e_i(1-x_1,…,1-x_j) for 1≤ i,j≤ n. The following presentation is well-known (recall that x_i is the K-theoretic first Chern class c_1(ℒ_i^∨):=1-[ℒ_i] of the dual of the tautological line bundle ℒ_i):K(Fl_n)≃ℂ[x_1,…,x_n]/⟨ e_i(x_1,…,x_n)|1≤ i≤ n⟩.Note that the ideal ⟨ e_i(x_1,…,x_n)|1≤ i≤ n⟩ is also generated by f_i^(n)-ni (1≤ i≤ n). Let L_n be the -vector subspace of [x_1,…,x_n] generated by the elementsf^(1)_i_1f^(2)_i_2⋯ f^(n-1)_i_n-1 (0≤ i_j≤ j).There exists a canonical isomorphism K(Fl_n)≃ L_n of -vector spaces (<cit.>). The quantization map Q:K(Fl_n)→𝒬𝒦(Fl_n) is the -linear map defined by Q(f^(1)_i_1f^(2)_i_2⋯ f^(n-1)_i_n-1):=F^(1)_i_1F^(2)_i_2⋯ F^(n-1)_i_n-1 (0≤ i_j≤ j).The definition of Q given in <cit.> isequivalent to Definition <ref> (see <cit.>). The quantum Grothendieck polynomial 𝔊_w^Q, for w∈ S_n, isdefined as 𝔊_w^Q=Q(𝔊_w).For f∈Λ̂, let f(x_1,…,x_d) denotes the polynomial by setting x_i=0 for i>d in the symmetric function f∈Λ̂. Letπ: Fl_n→ Gr_d(^n) be the projection sending V_∙ to V_d. The induced morphism π^* : K(Gr_d(^n))≃Λ̂/J_d↪ K(Fl_n) is given by fJ_d↦ f(x_1,…,x_d).The main statement of this section is the following. The proof is given in <ref>.The following diagram commutesΛ̂/J_d ≃ K(Gr_d(^n)) [rd]_Q_d@^(->^π^*[rr] K(Fl_n)[dl]^Q𝒬𝒦(Fl_n) . Let λ⊂ R_d. We have Q_d(G_λ modJ_d)= 𝔊_w_λ,d^Q. It is known that G_λ(x_1,…,x_d)=𝔊_λ,d (Buch <cit.>, 8), so we haveQ_d (G_λ modJ_d) =Q(G_λ(x_1,…,x_d)) =Q(𝔊_λ,d) =𝔊_λ,d^Q. Now we prove Theorem <ref>.Corollary <ref> is equivalent toΦ_n(𝔊_w_λ,d^Q)=G_λ^⊥· g_R_d/g_R_d.So we need to show G_λ^⊥· g_R_d=g_λ^∨, which is equivalent to Proposition <ref>. § PROOF OF THEOREM <REF> §.§ Outline of the proofLet λ be partition such that λ⊂ R_d.We define a quantized version of Schur polynomial S^Q_λ,d:=(F^(d+j-1)_λ_i'-i+j)_i,j=1^ℓ(λ').Note that the polynomial S^Q_λ,d is an element inℤ[Q][x_1,…,x_d]^S_d.We haveQ(s_λ(1-x_1,…,1-x_d))=S_λ,d^Q.Let e_i^(j)=e_i(z_1,…,z_j).We know the dual Jacobi-Trudi formulas_λ(z_1,…,z_d)=(e_λ_i'-i+j^(d))_i,j=1^ℓ(λ').Since e_i^(j)=e_i^(j-1)+z_je_i-1^(j-1), it is easy to show s_λ(z_1,…,z_d)=(e_λ_i'-i+j^(d+j-1))_i,j=1^ℓ(λ'). Now by substituting z_i=1-x_i to this equality, we haves_λ(1-x_1,…,1-x_d)=(f_λ_i'-i+j^(d+j-1))_i,j=1^ℓ(λ').One observes that each term of the expansion on the right hand side of (<ref>)is of the from (<ref>). Then (<ref>) follows from the definition of Q.Let p_i∈Λ be the ith power sum symmetric function (<cit.>). We consider the ring homomorphism κ_d: Λ→Λ given by κ_d(p_i):=d-i 1p_1+i 2p_2-…+(-1)^ii ip_i.Recall that each element of Λ is a symmetric formal power seriesin x=(x_1,x_2,…). κ_d is given by x_i↦ 1-x_i (1≤ i≤ d),x_j↦ x_j (j>d). Thus obviously κ_dis an involution. 
We will show the following in the sequel of this section.We haveΦ_n(S^Q_λ,d)=κ_d(s_λ)^⊥· g_R_d/g_R_d. The equation (<ref>) is equivalent toQ_d(κ_d(s_λ)J_d)=S^Q_λ,d.On the other hand, the elementκ_d(s_λ) J_d of Λ̂/J_d is mapped to s_λ(1-x_1,…,1-x_d)∈ L_n≃ K(Fl_n). Thus we haveQ(π^*(κ_d(s_λ)J_d))= Q(s_λ(1-x_1,…,1-x_d))=S_λ,d^Q. Sinceκ_d(s_λ)J_d (λ⊂ R_d) form a basis of Λ̂/J_d, Theorem <ref> holds. §.§ Φ_n(S^Q_λ,d) as a ratio of determinantsIn this subsection we prove the following proposition.Let λ be a partition contained in R_d. We define the increasing sequence (i_1,⋯,i_d) by settingi_a=λ_d+1-a+a(a=1,…,d).Then we haveΦ_n(S^Q_λ,d)=D(d-i_1,d-i_2,…,d-i_d) / D(d-1,d-2,…,0) . Almost all the statements and proofs of this subsection make sense for arbitrary values of γ, which will be relevant when we discuss equivariant case (cf. <cit.>). Let L∈ Z_^∘, and U be the matrix constructed in Proposition <ref>.The components u_ij (i>j) of U is equal to (-1)^j-1F_i-j^(i-1). We have UC_ U^-1=L. Let us compare both hand sides of principal minors ofζ· 1-UC_ U^-1=ζ· 1-L. In view of the facts thatL satisfies (Z_1), and U∈N_-ε, one can show that ξ_1,…,i^1,…,i(ζ· 1-L) =ζ^i+(-1)^i∑_j=1^i u_i+1,i-j+1ζ^i-j (1≤ i≤ n-1),On the other hand, we haveξ_1,…,i^1,…,i(ζ· 1-L)=ζ^i+∑_j=1^i(-1)^jF_j^(i)ζ^i-j. Thus the lemma is proved. Let λ and (i_1,…,i_d) be as Proposition <ref>.Let s=ℓ(λ'), where λ' is the conjugate partition of λ. We define the increasing sequence (j_1,…,j_s) by thecondition{i_1,…,i_d}∪{j_1,…,j_s} ={1,2,…,d+s}.Suppose a matrix X is decomposed as X=Y· N^-1,Y∈B_-,N∈N. Then we have the expression ξ^d+1,…,d+s_j_1,…,j_s(N)=(-1)^|λ|·ξ_1,…,d^i_1,…,i_d(X)/ξ_1,…,d^1,…,d(X). See <cit.>, Theorem 1.1 and its proof. Now we prove Proposition <ref>.Let {j_1,…,j_s} as in Lemma <ref>. From Lemma <ref>, we see thatS^Q_λ,d= ξ_d+1,…,d+s^j_1,…,j_s(Uε). We apply Lemma <ref> as follows. Recall that we have the decomposition φ(C_)=U^-1R with R∈Bσ and U∈N_-ε. Let Y=^t(R σ^-1), N=^t(Uε), X=σ·^tφ(C_) ε.If we choose c: 𝒪_→ℂ^n as ∑_i=0^n-1α_iζ^i(ζ-1)^n ↦^t(α_n-1,α_0,…,α_n-2), then we have σ·^tφ(C_)=(b_0,b_1,…,b_n-1).Now we haveξ_1,…,d^i_1,…,i_d(X)=(-1)^∑_a=1^d(i_a-1)|b_i_1-1,…,b_i_d-1,a_d-1,…a_n-2|=(-1)^∑_a=1^d(i_a-1)|c(ζ^i_1-1φ),…,c(ζ^i_d-1φ),c(ζ^d-1),…c(ζ^n-2)|=(-1)^∑_a=1^d(i_a-1)|c(ζ^i_1-dφ),…,c(ζ^i_d-dφ),c(1),… ,c(ζ^n-d-1)|=(-1)^|λ|+d(d-1)/2D(d-i_1,…,d-i_d)c_0^d,where we used ∑_a=1^di_a=|λ|+d(d+1)/2.Since we haveξ_d+1,…,d+s^j_1,…,j_s(Uε) =ξ^d+1,…,d+s_j_1,…,j_s(N),formula (<ref>) follows from (<ref>), (<ref>), and (<ref>).The d-Grassmannian permutation w_λ,d∈ S_n is given byw_λ,d(a)=i_a (1≤ a≤ d) and w_λ,d(d+a)=j_a (1≤ a≤ s),and w_λ,d(a)=a (a>d+s).§.§ Calculation of κ_d(s_λ)^⊥· g_R_dIn view of Proposition <ref>, Proposition <ref> is reduced to the following.We haveκ_d(s_λ)^⊥· g_R_d=D(d-i_1,d-i_2,…,d-i_d).In the rest of the paper, we will concentrate on the proof of Proposition <ref>.§.§.§ Actions of κ_d(p_i)^⊥ We have κ_d(p_i)^⊥· Dθ_1 … θ_d a_1 … a_d = ∑_j=1^d Dθ_1 … θ_j-i … θ_d a_1 … a_j … a_d.We first showp_i^⊥· Dθ_1 … θ_d a_1 … a_d =∑_j=1^d Dθ_1 … θ_j … θ_d a_1 … a_j+i … a_d.As p_i^⊥ h_j=h_j-i (<cit.>, Chap. I, 5, Example 3), the action of p_i^⊥ on the column vector c((1-ζ)^cζ^-θφ)=c_0·c( (1-ζ)^cζ^-θ(∑_i=0^n-1h_i(1-ζ)^i ) )is expressed as p_i^⊥·c((1-ζ)^cζ^-θφ)=c((1-ζ)^a+iζ^-θφ).This relation and the `Leibniz rule' p_i^⊥ (fg)=(p_i^⊥ f)g+f(p_i^⊥ g) imply the desired equation. 
(<ref>)is obtained from (<ref>) by using (<ref>) as follows:κ_d(p_i)^⊥· Dθ_1 … θ_d a_1 … a_d = ∑_j=1^d∑_m=0^i(-1)^mi m Dθ_1 … θ_j … θ_d a_1 … a_j+m … a_d = ∑_j=1^d Dθ_1 … θ_j-i … θ_d a_1 … a_j … a_d. §.§.§ Boson-Fermion correspondenceTo prove Proposition <ref>, we use the Boson-Fermion correspondence. Here we review some basic facts about it without proof. For details, see <cit.>.Let ℳ:={M=(m_0,m_1,m_2,…) | m_0>m_1>⋯ ,m_j=-j (j≫ 1)}. Let v_m be infinitely many linearly independent vectors indexed by m∈ℤ. Let ℱ=⊕_M∈ℳ v_M,v_M:=v_m_0∧ v_m_1∧⋯be the Fermion-Fock space. The vector Ω:=v_0∧ v_-1∧ v_-2∧⋯ is called the vacuum vector of ℱ. For m∈ℤ, m≠ 0, define α_m∈End_ℂℱ by the formula:α_m(v_m_0∧ v_m_1∧⋯)=∑_j=0^∞ v_m_0∧…∧ v_m_j-1∧ v_m_j-m∧ v_m_j+1∧⋯.Then we have the Heisenberg relation:[α_m,α_n]=mδ_m+n,0.There uniquely exists a linear isomorphism ϕ:ℱ→Λ with the following properties:ϕ(Ω)=1,ϕ(α_-mv)=p_mϕ(v),ϕ(α_mv)=p_m^⊥ϕ(v), m≥ 1, v∈ℱ.(see <cit.>, <cit.>).We have ϕ(v_m_0∧ v_m_1∧⋯)=s_λ, where λ=(m_0,m_1+1,m_2+2,…) considered as a partition.§.§.§ Proof of Proposition <ref>.Consider the subset ℳ_d⊂ℳ which is defined byℳ_d={(m_j)∈ℳ | m_j=-j(j≥ d)}and the subspacesℱ_d, 𝒲_d of ℱ defined byℱ_d=⊕_M∈ℳ_d· v_M,𝒲_d=⊕_M∈ℳ∖ℳ_d· v_M.The space ℱ decomposes as ℱ=ℱ_d⊕𝒲_d. Let v↦v, ℱ→ℱ_d be the projection.Let ι:ℱ_d→Λ be the linear mapv_m_0∧⋯∧ v_m_d-1∧ v_-d∧ v_-d-1∧⋯↦ D(-m_d-1,…, -m_0).Note that ι(Ω)=D(d-1,d-2,…,0)=g_R_d.Lemma <ref> can be rewritten as κ_d (p_i)^⊥·ι(v)=ι(α_-i(v)), v∈ℱ_d.For v∈ℱ_d defineŝ_λ(v)= s_λ·v∈ℱ_d, where the Schur function s_λ acts on ℱ via the identification p_m↦α_-m (m≥ 1). Accordingly we haveκ_d(s_λ)^⊥·ι(v)=ι( ŝ_λ(v)), v∈ℱ_d.For a partition λ with ℓ(λ)≤ d we have the equationŝ_λ·Ω=ϕ^-1(s_λ· 1) =v_λ_1∧ v_λ_2-1∧⋯∧ v_λ_d-d+1∧ v_-d∧ v_-d-1∧⋯ ( <ref>)=v_i_d-d∧ v_i_d-1-d∧⋯∧ v_i_1-d∧ v_-d∧ v_-d-1∧⋯ ( (<ref>)),and hence by substituting v=Ω to (<ref>) we haveκ_d(s_λ)^⊥· g_R_d =κ_d(s_λ)^⊥·ι(Ω) =ι(ŝ_λ·Ω) =D(i_1-d,…,i_d-d).§ DISCUSSION OF CONJECTURE <REF>The aim of this section to explain some details about Conjecture <ref> for the image of the quantum Grothendieck polynomials by Φ associated with arbitrary permutations. §.§ λ-map Recall that we set k=n-1. Let ℬ_k denote the set of k-bounded partitions.We recall the definition of a mapλ:S_n→ℬ_kdue to Lam and Shimozono <cit.>. For 0≤ i≤ n-2, let c_i denote the cyclic permutation (i+1,i+2,⋯,n), and C denote the cyclic subgroup generated by c_0=(12⋯ n).For w∈ S_n, let w̃ be the unique element in the coset C· w such that w̃(1)=1. There is a unique sequence (m_1,…,m_n-2) of non-negative integers such that w̃=c_1^m_1 c_2^m_2⋯ c_n-2^m_n-2(0≤ m_i≤ k-i).Define λ(w)=(1^m_12^m_2⋯ (n-2)^m_n-2), the partition whose multiplicity of i (1≤ i≤ n-2) is m_i.Let μ be a partition contained in R_d. We have λ(w_μ,d)^ω_k=μ^∨. We write w=w_μ,d. We first assume that w(1)=1, that is μ_d=0. It is straightforward to see w=c_1^m_1c_2^m_2⋯ c_d-1^m_d-1c_d^m_d with m_i=w(i+1)-w(i)-1 (1≤ i≤ d-1), andm_d=n-w(d). Note that the Young diagram of λ(w)=(1^m_1⋯ d^m_d) is contained inthe conjugate of R_d, that is R_n-d.If we consider the complement μ^c=R_d∖μ of μ in the rectangle R_d,the number of columns in μ^c having i boxes is m_i.The diagram of μ^∨ is obtained from μ^c by a rotation of180 degrees. Now the conjugate of μ^∨ is nothing but λ(w). Since the k-conjugate of a partition contained in R_d is the ordinary conjugate of it (<cit.>), the lemma follows in this case. If w(i)>1, then w̃=c_0^-w(1)+1w is (d+l)-Grassmannian with some l such that 1≤ l≤ k-d. 
Let μ̃⊂ R_d+l be the corresponding partition. One can check that μ̃^c=R_d+l∖μ̃ has the same shape asμ^c=R_d∖μ, and hence the proof is reduced to the case when w(1)=1.A k-bounded partition μis k-irreducible if there is no k-rectangle R_d such that μ =R_d∪ν (ν∈ℬ_k). This is equivalent to the inequalities 0≤ m_i≤ k-i (1≤ i≤ n-2), where m_i is the multiplicity of i in μ.Accordingly the image of the map λ is contained in the set ℬ_k^* of all irreducible k-bounded partitions.In fact, one can easily see that ℬ_k^*coincides the image of the λ-map.Let S_n^* denote the set of permutations w in S_n such that w(1)=1. The setS_n^* is a complete representatives of the coset space C\ S_n.In particular, the cardinality of ℬ_k^* is (n-1)! Recall that there is a remarkableinvolution ω_k: ℬ_k→ℬ_k,μ↦μ^ω_k due to Lapointe and Morse <cit.>. One of the important properties is ω (s_μ^(k))=s_μ^ω_k^(k) <cit.> where ω is the involution on Λ sending s_λ to s_λ'. One can see that ω_k preserves ℬ_k^*.The following tables give λ(w) and its k-conjugate for w∈ S_n^* for n=4,5. The asterisk sign indicates that the permutation is not Grassmannian. [w λ(w) λ(w)^ω_3; 1234∅∅; 1243(2)(1,1); 1324(2,1)(2,1); 1342(1)(1); 1423(1,1)(2); 1432^*(2,1,1)(2,1,1);][ wλ(w)λ(w)^ω_4; 12345 ∅ ∅; 12354 (3) (1,1,1); 12435 (3,2) (2,2,1); 12453 (2) (1,1); 12534 (2,2) (2,2); 12543^* (3,2,2) (2,2,1,1,1); 13245 (2,2,1) (3,2); 13254^* (3,2,2,1) (3,2,1,1,1); 13425 (3,1) (2,1,1); 13452 (1) (1); 13524 (2,1) (2,1); 13542^* (3,2,1) (2,2,1,1); 14235 (2,1,1) (3,1); 14253^* (3,2,1,1) (3,2,1,1); 14325^* (3,2,2,1,1) (3,2,2,1,1); 14352^* (2,2,1,1) (3,2,1); 14523 (1,1) (2); 14532^* (3,1,1) (2,1,1,1); 15234 (1,1,1) (3); 15243^* (3,1,1,1) (3,1,1,1); 15324^* (3,2,1,1,1) (3,2,2,1); 15342^* (2,1,1,1) (3,1,1); 15423^* (2,2,1,1,1) (3,2,2); 15432^* (3,2,2,1,1,1) (3,2,2,1,1,1); ] §.§ ExamplesLet n=5. We first note some elements of g̃_w are factored into a product of dual stable Grothendieck polynomials:[ wg̃_w; 12543g_1,1,1· g_2,2; 13254g_1,1,1· g_3,2; 14532g_1,1,1· g_2; 15243g_1,1,1· g_3; 15324g_2,2,1· g_3; 15342g_1,1· g_3; 15423g_2,2· g_3; 15432 g_1,1,1· g_2,2· g_3; ] Using Sage, we obtain the following expansion of g̃_w in terms of K-theoretic k-Schur functions. Here we only write the non-Grassmannian elements in S_5^*.g̃_12543=g_2,2,1,1,1^(4)-g_2,2,1,1^(4),g̃_13254 = g_3,2,1,1,1^(4)-g_3,2,1,1^(4),g̃_13542 =g_2,2,1,1^(4)-g_2,2,1^(4),g̃_14253 =g_3,2,1,1^(4),g̃_14325= g_3,2,2,1,1^(4) -g_3,2,1,1,1^(4) -g_3,2,2,1^(4) +g_3,2,1,1^(4),g̃_14352 =g_3,2,1^(4)-g_3,2^(4),g̃_14532 = g_2,1,1,1^(4)-g_2,1,1^(4), g̃_15243 = g_3,1,1,1^(4),g̃_15324 =g_3,2,2,1^(4)-g_3,2,1,1^(4), g̃_15342 = g_3,1,1^(4)-g_3,1^(4),g̃_15423 =g_3,2,2^(4)-g_3,2,1^(4), g̃_15432 =g_3,2,2,1,1,1^(4)-g_3,2,1,1,1,1^(4) -2g_3,2,2,1,1^(4) -g_4,2,2,1^(4) +g_3,2,1,1,1^(4) +g_3,2,2,1^(4) -g_3,2,1,1^(4).We finally provide one conjecture below.Let w_0∈ S_n denote the longest element. Theng̃_w_0=∏_i=1^n-2g_(n-1-i)^i. We are informed thata general formulation of K-theoretic Peterson isomorphism valid for a simple simply connected G is proposed by Thomas Lam, Changzheng Li, Leonardo Mihalcea, and Mark Shimozono in <cit.>.We express thanks to them forcommunicating their work to us and for valuable discussions. We are grateful to Michael Finkelberg for valuable discussions. 
He kindly showed us his interpretation ofτ-functions in terms of the Zastava space.During the preparation of the paper we also benefitted from thehelpful discussions with and comments of manypeople, includingAnders Buch,Rei Inoue,Hiroshi Iritani,Bumsig Kim,Cristian Lenart,Jennifer Morse, Satoshi Naito,Hiraku Nakajima,Hiroshi Naruse,Kyo Nishiyama,Masatoshi Noumi,Kaisa Taipale,Kanehisa Takasaki,Motoki Takigiku,Vijay Ravikumar, andChris Woodward. We also thank Andrea Brini who drew our attention to <cit.>. Special thanks are due to Tomoo Matsumura for showing us the proof of Proposition <ref>. In order to discover and check Conjecture 2, we used the open source mathematical software Sage <cit.>. The work was supported by JSPS KAKENHI [grant numbers 15K04832 to T.I., 26800062 to S.I., 16K05083 to T.M.]. 99 SageMath Sage Mathematics Software, verson 7.5.1. http://www.sagemath.org, 2017. ACT Anderson, D., Chen, L. and Tseng, H.-H., On the quantum K-ring of the flag manifold, arXiv: 1711.08414v1.bezrukavnikov2005equivariant Bezrukavnikov, R., M. Finkelberg, and I. Mirković. “Equivariant homology and K-theory of affine Grassmannians and Toda lattices.” Compositio Mathematica 141, no. 3 (2005): 746–68. BF1 Braverman, A. and M. Finkelberg. “Finite difference quantum Toda lattice via equivariant K-theory.” Transformation Groups 10, no. 3-4 (2005): 363–86. BF2 Braverman, A. and M. Finkelberg. “Semi-infinite Schubert varieties and quantum K-theory of flag manifolds.” Journal of American Mathematical Society 27, no. 4 (2014): 1147–68. BuchLR Buch, A.“A Littlewood-Richardson rule for the K-theory of Grassmannians.” Acta Mathematica 189, no. 1 (2002): 37–78. BuchCombK2005 Buch, A.“Combinatorial K-theory”, pp. 87–104. In Topics in Combinatorial Studies of Algebraic Varieties., Trends in Mathemetics, Birkhäuser Verlag Basel/Swizerland, 2005. BCMP2013 Buch, A., P.-E. Chaput, L. Mihalcea, and N. Perrin. “Finiteness of cominuscule quantum K-theory.” Annales scientifique de l'École normale supérieure 46, no.3 (2013): 477–494. BCMP2016 Buch, A., P.-E. Chaput, L. Mihalcea, and N. Perrin. “Rational connectedness implies Finiteness of quantum K-theory.” Asian Journal of Mathematics 20, no.1 (2016): 117–22. BuchMihalcea2011 Buch, A. and L. Mihalcea. “Quantum K-theory of Grassmannians.” Duke Mathematical Journal 156 (2011): 501–38. FominGelfandPostnikov1997 Fomin, S., S. Gelfand, and A. Postnikov. “Quantum Schubert polynomials.” Journal of American Mathematical Society 10 (1997): 565–96. GesselViennot1985 Gessel, I. M. and X. G. Viennot. “Binomial Determinants, Paths, and Hook Length Formulae.” Advances in Mathematics 58 (1985): 300–21. GiventalHomGeom1995 Givental, A.“Homological geometry I: Projective hypersurfaces.” Selecta Mathematica, New Series 1 (1995): 325–45. GiventalEqGW1996 Givental, A.“Equivariant Gromov-Witten invariants.” International Mathematics Research Notices 1996, no. 13 (1996): 613–63. giventalWDVV Givental, A.“On the WDVV-equation in quantum K-theory.” Michigan Mathematical Journal 48 (2000): 295–304. givental1995quantum Givental, A.and B. Kim. “Quantum cohomology of flag manifolds and Toda lattices.” Communications in Mathematical Physics 168, no. 3 (1995): 609–41. givental2003quantum Givental, A.and Y.-P. Lee. “Quantum K-theory on flag manifolds, finite-difference Toda lattices and quantum groups.” Inventiones mathematicae 151, no. 1 (2003): 193–219. IkedaShimazaki Ikeda, T. and T. Shimazaki. 
“A proof of K-theoretic Littlewood-Richardson rules by Bender-Knuth-type involutions.” Mathematical Research Letters 21, no. 2 (2014): 333–39. IritaniMilanovTonita Iritani, H., T. Milanov, and V. Tonita. “Reconstruction and convergence in quantum K-theory via difference equations.” International Mathematics Research Notices 2015, no. 11 (2015): 2887–937. kac1988bombay Kac, V. G., A. K. Raina, and N. Rozhkovskaya. Bombay Lectures on Highest Weight Representations of Infinite Dimensional Lie Algebras, 2nd edition, Advanced Series in Mathematical Physics, vol. 29. World Scientific, 2014. Kashiwarathick Kashiwara, M. “The flag manifold of Kac-Moody Lie algebra.” In Algebraic Analysis, geometry, and number theory (Baltimore, MD), pp. 161–190. Johns Hopkins Univ. Press, 1988. KimAnnals1999 Kim, B.“Quantum cohomology of flag manifolds G/B and quantum Toda lattices.” Annals of Mathematics 149 (1999): 129–48. Kirillov-Maeno Kirillov, A. N. and T. Maeno. “A note on quantum K-theory of flag varieties and some quadric algebras.” in preparation. Kostant1979Toda Kostant, B. “The Solution to a Generalized Toda Lattice and Representation Theory.” Advances in Mathematics 34 (1979): 195–338. Kostant1996Toda Kostant, B. “Flag manifold quantum cohomology, the Toda lattice, and the representation with highest weight ρ.” Selecta Math. New Ser. 2 (1996): 43–91.KostantKumar1990K-theory Kostant, B. and S. Kumar. “T-equivariant K-theory of generalized flag varieties.”Journal of Differential Geometry 32, no. 2 (1990): 549–603. KruglinskayaMarshakov2015 Kruglinskaya, O. and A. Marshakov. “On Lie Groups and Toda Lattice.” Journal of Physics A 48, no. 12 (2015): 125201, 26 pp. kSchurBook Lam, T., L. Lapointe, J. Morse, A. Schilling, M. Shimozono, and M. Zabrocki. k-Schur Functions and Affine Schubet Calculus. Fields Institute Monographs. Springer, 2014. LLMS Lam, T., C. Li, L. Mihalcea, and M. Shimozono. in preparation. lamply2007 Lam, T. and P. Pylyavskyy. “Combinatorial Hopf algebras and K-homology of Grassmannians.” International Mathematics Research Notices 2007, no. 24 (2007) Art. ID rnm125, 48 pp. LSS Lam, T., A. Schilling, and M. Shimozono. “K-theory Schubert calculus of the affine Grassmannian.” Compositio Mathematica 146, no. 4 (2010): 811–52. lam2011double Lam, T. and M. Shimozono. “From double quantum Schubert polynomials to k-double Schur functions via the Toda lattice.” https://arxiv.org/abs/1109.2193. Lam2010 Lam, T. and M. Shimozono. “Quantum cohomology of G/P and homology of affine Grassmannian.” Acta Mathematica 204, no. 1 (2010): 49–90. lamshimo2010toda Lam, T. and M. Shimozono. “From quantum Schubert polynomials to k-Schur functions via the Toda lattice.” Mathematical Research Letters 19 (2012): 81–93. LLMkSchur Lapointe, L., A. Lascoux, and J. Morse. “Tableau atoms and a new Macdonald positivity conjecture.” Duke Mathematical Journal 116, no. 1 (2003): 103–46. LapointeMorse2005JCTA Lapointe, L. and J. Morse. “Tableau on k+1-cores, reduced words for affine permutations, and k-Schur functions.” Journal of Combinatrial Theory, Series A 112 (2005): 44–81. LapointeMorse2007 Lapointe, L. and J. Morse. “A k-tableau characterization of k-Schur functions.” Advances in Mathematics 213, no. 1 (2007): 183–204. LascouxNaruse2014 Lascoux, A. and H. Naruse. “Finite sum Cauchy identity for dual Grothendieck polynomials.” Proceedings of Japan Academy, Series A 90, no. 7 (2014): 87–91. LascouxSchutzenberger1982Groth Lascoux, A. and M.-P. Schützenberger. 
“Structure de Hopf de l'anneau de cohomologie et de l'anneau de Grothendieck d'une vatiété de drapeaux.” Comptes Rendus de l'Académie des Sciences – Séries I – Mathematics 295, no. 11 (1982): 629–33. Lee2001 Lee, Y.-P.“Quantum K-theory I: foundation, Quantum K-theory II: computation and open problems.” Duke Mathematical Journal 121, no. 3 (2004): 389–424. lenart2006quantum Lenart, C. and T. Maeno. Quantum Grothendieck polynomials. https://arxiv.org/abs/math/0608232. LP Lenart, C. and A. Postnikov. “Affine Weyl groups in K-theory and representation theory.” International Mathematics Research Notices 2007, no. 12 (2007) Art. ID rnm038, 65pp. macdonald1998symmetric Macdonald, I. G.Symmetric functions and Hall polynomials, 2nd edition. Oxford University Press, 1998. MihalceaDuke2007 Mihalcea, L.“On equivariant quantum cohomology of homogeneous spaces: Chevalley formulae and algorithm.” Duke Mathematical Journal 140, no. 2 (2007): 321–50. miwa2000solitons Miwa, T., M. Jimbo, and E. Date. Solitons: Differential equations, symmetries and infinite dimensional algebras. Cambridge Tracts in Mathematics vol. 135. Cambridge University Press, 2000. Morse2012 Morse, J. “Combinatorics of the K-theory of affine Grassmannians.” Advances in Mathematics 229 (2012): 2950–84. Noumi9th Nakagawa, J., M. Noumi, M. Shirakawa, and Y. Yamada. “Tableau representation for Macdonald's ninth variation of Schur functions.” pp. 180–95. Physics and Combinatorics 2000, Proceedings of the Nagoya 2000 International Workshop. World Scientific, 2000. ruijsenaars1990relativistic Ruijsenaars, S. “Relativistic Toda systems.” Communications in Mathematical Physics 133, no. 2 (1990): 217–47. ShimozonoZabrocki Shimozono, M. and M. Zabrocki. “Stable Grothendieck symmetric functions and Ω-calculus” unpublished (2003). | http://arxiv.org/abs/1703.08664v2 | {
"authors": [
"Takeshi Ikeda",
"Shinsuke Iwao",
"Toshiaki Maeno"
],
"categories": [
"math.AG",
"math.CO",
"math.RT",
"14M15, 53D45, 17B67"
],
"primary_category": "math.AG",
"published": "20170325082433",
"title": "Peterson Isomorphism in $K$-theory and Relativistic Toda Lattice"
} |
Malnormality and join-free subgroups]Malnormality and join-free subgroups in right-angled Coxeter groups Department of Mathematics The University of Georgia1023 D. W. Brooks DriveAthens, GA 30605USA [email protected] In this paper, we prove that all finitely generated malnormal subgroups of one-ended right-angled Coxeter groups are strongly quasiconvex and they are in particular quasiconvex when the ambient groups are hyperbolic. The key idea is to prove all infinite proper malnormal subgroups of one-ended right-angled Coxeter groups are join-free and then prove the strong quasiconvexity and the virtual freeness of these subgroups. We also study the subgroup divergence of join-free subgroups in right-angled Coxeter groups and compare them with the analogous subgroups in right-angled Artin groups. We characterize almost malnormal parabolic subgroups in terms of their defining graphs and also recognize them as strongly quasiconvex subgroups by the recent work of Genevois and Russell-Spriano-Tran. Finally, we discuss some results on hyperbolically embedded subgroups in right-angled Coxeter groups. [2000] 20F67,20F65[ Hung Cong Tran December 30, 2023 ===================== § INTRODUCTION It is well-known that quasiconvex subgroups of hyperbolic groups have finite height (see <cit.>). The height of a subgroup H in a group G is the smallest number n with the property that for any (n+1) distinct left cosets g_1H, g_2H⋯, g_n+1H the intersection ⋂ g_iH is always finite. Swarup asked if the converse is true:[<cit.>] Let G be a hyperbolic group and H a finitely generated subgroup. If H has finite height, is H quasiconvex? Gitik stated that the problem is open even when H is malnormal in G. A subgroup H of a group G is malnormal if gHg^-1∩ H is trivial for each g not in H. Wise and Agol also suggested one could attempt to answer it for hyperbolic virtually special groups (i.e. groups that virtually embed into some right-angled Artin group), but even that seems tricky. Is a malnormal finitely generated subgroup of a hyperbolic (virtually special) group quasiconvex?We observe that the above two questions can be extended to the analogous subgroups of arbitrary finitely generated groups. In <cit.>, Durham-Taylor introduced a strong notion of quasiconvexity in finitely generated groups, called stability, which is preserved under quasi-isometry, and which agrees with quasiconvexity when the ambient group is hyperbolic. However, a stable subgroup of a finitely generated group is always hyperbolic regardless of the geometry of the ambient group (see <cit.>). Thus, the geometry of a stable subgroup does not completely reflect that of the ambient group. Therefore, the author <cit.> and Genevois <cit.> independently introduced another concept of quasiconvexity, called strong quasiconvexity, which is strong enough to be preserved under quasi-isometry and relaxed enough to capture the geometry of ambient groups. Let G be a finitely generated group and H a subgroup of G. We say H is strongly quasiconvex in G if for every K ≥ 1,C ≥ 0 there is some M = M(K,C) such that every (K,C)–quasi–geodesic in G with endpoints on H is contained in the M–neighborhood of H. In <cit.>, the author also characterized stable subgroups as hyperbolic strongly quasiconvex subgroups. He also proved that strongly quasiconvex subgroups of finitely generated groups also have finite height (see Theorem 1.2 in <cit.>). 
Therefore, it is reasonable to extend Question <ref> and Question <ref> to strongly quasiconvex subgroups of finitely generated groups. Let G be a finitely generated (virtually special) group and H a finitely generated subgroup. If H has finite height (or H is malnormal), is H strongly quasiconvex? We note that the work of the author in <cit.> implicitly gave the positive answer to Question <ref> for one-ended right-angled Artin groups. More precisely, if a finitely generated subgroup of a one-ended right-angled Artin group has finite height, then it is strongly quasiconvex. In the Appendix, we give an explicit proof of this fact. Moreover, we provide necessary conditions for finite height subgroups of groups satisfying certain conditions (see Proposition <ref> and Lemma <ref>) and we hope this may be useful for someone who wants to attack Question <ref> for different group collections in future. §.§ Malnormality in right-angled Coxeter groupsThe positive answer for Question <ref> for one-ended right-angled Artin groups motivate us to work on the richer collection of groups, called right-angled Coxeter groups. For each finite simplicial graph Γ the associated right-angled Coxeter group G_Γ has generating set S equal to the vertices of Γ, relations s^2=1 for each s in S and relations st = ts whenever s and t are adjacent vertices. In this paper, we assume all graphs that define some right-angled Coxeter group are finite and simplicial. In contrast to right-angled Artin groups, the collection of right-angled Coxeter groups contains numerous hyperbolic groups, relatively hyperbolic groups, and thick groups of arbitrary orders. Right-angled Coxeter groups also provide a rich source of cubical groups and any results on this collection can shed light on extensions to all cubical groups. By some recent work on characterizing strongly quasiconvex parabolic subgroups of right-angled Coxeter groups (see Proposition 4.9 in <cit.> or Theorem 7.5 in <cit.>) we can prove easily that a finite height parabolic subgroup of a right-angled Coxeter group is strongly quasiconvex (see Proposition <ref>). However, it seems difficult to extend this result to arbitrary finite height subgroups of right-angled Coxeter groups. Therefore, we focus our work on their malnormal subgroups and we obtain a positive answer.Let Γ be a connected graph and H a finitely generated malnormal subgroup of the right-angled Coxeter group G_Γ. Then H is strongly quasiconvex. Moreover, if H is a proper subgroup, then H is virtually free (therefore, H is also stable).Since right-angled Coxeter groups is a large class of virtually special groups (see <cit.>), Theorem <ref> sheds light on the positive answer to Question <ref> and the ability to extend our result to the non-hyperbolic case via the concept of strongly quasiconvex subgroups (see Question <ref>). In Theorem <ref>, if H is finite or H=G_Γ, then H is strongly quasiconvex clearly. Otherwise, H is an infinite proper subgroup and this implies that Γ is not a join by Remark <ref>. Therefore, the key idea for the proof of Theorem <ref> is the following theorem.Let Γ be a non-join connected graph and H an infinite proper malnormal subgroup of the right-angled Coxeter group G_Γ. Then H is a join-free subgroup (i.e. H is infinite and none of infinite order elements in H are conjugate into a join subgroup). Theorem <ref> is the main motivation for studying join-free subgroups of right-angled Coxeter groups with connected non-join defining graphs. 
We note that all finitely generated join-free subgroups of right-angled Coxeter groups with connected non-join defining graphs are stable and virtually free by Propositions <ref>, <ref> and <ref>. We also refer the reader to Section <ref> for the proof of Theorem <ref>. In general, we still do not know whether an arbitrary almost malnormal (or even finite height) subgroup of a one-ended right-angled Coxeter group is strongly quasiconvex although the positive answer is already confirmed for parabolic subgroups as we discussed at the beginning. Note that a subgroup H of a group G is almost malnormal if gHg^-1∩ H is finite for each g not in H. It is clear that an infinite almost malnormal subgroup has height exactly 1.We end this section with a characterization of almost malnormal parabolic subgroups in right-angled Coxeter groups which does not seem to be recorded in the literature. Let Γ be a finite simplicial graph and Λ be an induced subgraph of Γ. Then a parabolic subgroup H of right-angled Coxeter group G_Γ induced by Λ is almost malnormal if and only if no vertex of Γ-Λ commutes to two non-adjacent vertices of Λ. Using Proposition 4.9 in <cit.> or Theorem 7.5 in <cit.> we can easily see that all almost malnormal parabolic subgroups of right-angled Coxeter groups are strongly quasiconvex but as discussed above strong quasiconvexity even also holds for all finite height parabolic subgroups of right-angled Coxeter groups. However, the above proposition will later help us characterize hyperbolically embedded parabolic subgroups of right-angled Coxeter groups. §.§ Hyperbolically embedded subgroups in right-angled Coxeter groupsHyperbolically embedded subgroups are generalizations of peripheral subgroups in relatively hyperbolic groups (see <cit.>) and are a key component of studying acylindrically hyperbolic groups, a large class of groups exhibiting hyperbolic-like behavior (see <cit.>). Work of Dahmani-Guirardel-Osin <cit.> and Sisto <cit.> showed that if a finite collection of subgroups {H_i} is hyperbolically embedded in a finitely generated group G, then {H_i} is an almost malnormal collection and each H_i is strongly quasiconvex. The converse of this statement is true for groups acting geometrically on (0) cube complexes (see Theorem 6.31 in <cit.>) and hierarchically hyperbolic groups (see Theorem H in <cit.>) which both includes right-angled Coxeter groups. We note that a collection ℋ of subgroups of G is malnormal (resp. almost malnormal) if for each H, H'∈ℋ and g∈ G we have H∩ gH'g^-1≠{e} (resp. H∩ gH'g^-1=∞}) implies H=H' and g∈ H. Therefore, the following is a corollary of Theorem <ref>. Let Γ be a connected graph and ℋ a finite collection of finitely generated subgroups of the right-angled Coxeter group G_Γ. If ℋ is malnormal, then ℋ is hyperbolically embedded. In addition, the converse is also true if all subgroups in ℋ are torsion free. We note that all proper hyperbolically embedded subgroups in the above corollary are virtually free (and therefore hyperbolic). Using the work of Caprace <cit.> we can construct a non-trivial relatively hyperbolic right-angled Coxeter group with non-hyperbolic peripheral subgroups. Therefore, the peripheral subgroups are clearly non-hyperbolic hyperbolically embedded subgroups. Outside the relatively hyperbolic setting, one may expect that all proper hyperbolically embedded subgroups of right-angled Coxeter groups are hyperbolic. However, this is not true. 
Combining the characterization of parabolic strongly quasiconvex subgroups (see Proposition 4.9 in <cit.> or Theorem 7.5 in <cit.>) and the characterization of almost malnormal collections of parabolic subgroups (see Corollary <ref>) in right-angled Coxeter groups we obtain a characterization of hyperbolically embedded collections of parabolic subgroups in this group collection. Let Γ be a simplicial finite graph andℋ={g_1G_Λ_1g_1^-1, g_2G_Λ_2g_2^-1, ⋯, g_nG_Λ_ng_n^-1}a collection of parabolic subgroups of the right-angled Coxeter group G_Γ. Then ℋ is hyperbolically embedded (i.e. ℋ is almost malnormal) in G_Γ if and only if the following holds: * For each Λ_i no vertex outside Λ_i commutes to non-adjacent vertices of Λ_i; and* Λ_i∩Λ_j is empty or a clique for each i≠ j. We now use the above proposition to construct an example of proper non-hyperbolic hyperbolically embedded subgroup of a non-relatively hyperbolic right-angled Coxeter group. Let Γ be the graph Γ_3 in Figure 7 of <cit.> and let Λ be the red 4–cycle as in the figure. Then it is clear that G_Λ is a virtually Z^2 hyperbolically embedded proper subgroup of the 𝒞ℱ𝒮 right-angled Coxeter group G_Γ. We note that the 𝒞ℱ𝒮 condition on defining graphs was used in Dani-Thomas <cit.> and Levcovitz <cit.> to characterize right-angled Coxeter groups with quadratic divergence. Since the divergence of a one-ended relatively hyperbolic groups is always exponential (see <cit.>), 𝒞ℱ𝒮 right-angled Coxeter groups are never relatively hyperbolic. Actually, we can use the proof of Corollary G in <cit.> to prove that every right-angled Coxeter group is a hyperbolically embedded subgroup of some 𝒞ℱ𝒮 right-angled Coxeter group.§.§ Geometric embedding properties of join-free subgroups and their generalization We note that join-free subgroups were also defined analogously for right-angled Artin groups in Koberda-Mangahas-Taylor <cit.> under the name purely loxodromic subgroups. They also proved that the such groups are strongly quasiconvex and free. The reader can see later that we mostly follow their strategy for the proof of the strong quasiconvexity and the virtual freeness of our groups (see Section <ref> and Section <ref>). However, we show the embedding properties of our subgroups in right-angled Coxeter groups are more diverse than the ones of the analogous subgroups in right-angled Artin groups.For each d≥ 2 there is a right-angled Coxeter group G_d such that for each 2≤ m≤ d the group G_d contains a join-free subgroup H_d^m which is isomorphic to the group F=a,b,ca^2=b^2=c^2=e and whose subgroup divergence in G_d is a polynomial of degree m.Subgroup divergence was introduced by the author with the name lower relative divergence in <cit.> to study geometric embedding properties of a subgroup inside a finitely generated group. We note that the subgroup divergence of a join-free subgroup in a one-ended right-angled Artin group is always quadratic (see Corollary 1.17 in <cit.>). Therefore, the geometric embedding properties of join-free subgroups in right-angled Coxeter groups are more plentiful. However, we also prove the quadratic subgroup divergence holds for certain class of right-angled Coxeter groups.Let Γ be a non-join connected 𝒞ℱ𝒮 graph and H a finitely generated join-free subgroup of the right-angled Coxeter group G_Γ. Then the subgroup divergence of H in G_Γ is exactly quadratic.As observed above, join-free subgroups are proved to be useful to study the malnormality in right-angled Coxeter groups. 
However, if one only cares about the coarse geometry of subgroups that are similar to the one of join-free subgroups, the concept of join-free subgroups seems to be quite restrictive because it requires that the defining graph of the ambient group to be not a join. Therefore, we proposed a concept of almost join-free subgroups for all right-angled Coxeter groups whose defining graphs are not a join of two subgraphs of diameters at least 2. More precisely, if the ambient graph Γ is not a join of two subgraphs with diameters at least 2, then we can write Γ=Γ_1*K where K is a (possibly empty) clique and Γ_1 is a non-join graph. In this case, G_Γ_1 is a finite index subgroup of G_Γ and we can extend the concept of join-free subgroups to subgroups in G_Γ as follows. An infinite subgroups H of G_Γ is almost join-free if H∩ G_Γ_1 is a join-free subgroup of G_Γ_1. It is clear that if Γ is not a join, an almost join-free subgroup of G_Γ is a truly join-free subgroup in G_Γ. § PRELIMINARIES §.§ Coarse geometryWe first review the concepts of quasi-isometric embedding, quasi-isometry, quasi-geodesics, geodesics, undistorted subgroups, strongly quasiconvex subgroups, stable subgroups, and subgroup divergence. For metric spaces (X,d_X) and (Y,d_Y) be two metric spaces and constants K ≥ 1 and L ≥ 0, a map f:X → Y is a (K, L)–quasi-isometric embedding if for all x_1, x_2 ∈ X,A quasi-isometric embedding is simply a (K,L)–quasi-isometric embedding for some K,L. When a quasi-isometric embedding f:X → Y has the additional property that every point in Y is within a bounded distance from the image f(X), we say f is a quasi-isometry and X and Y are quasi-isometric.Where X is a subinterval I ofor , we call a (K, L)–quasi-isometric embedding f:I → Y a (K, L)–quasi-geodesic. If K = 1 and L = 0, then f:I → Y is a geodesic. Let G be a finitely generated group and H a finitely generated subgroup of G. We say H is undistorted in G if the inclusion map of subgroup H into the group G is a quasi-isometric embedding (this is independent of the word metrics on H and G). We say H is strongly quasiconvex in G if for every K ≥ 1,C ≥ 0 there is some M = M(K,C) such that every (K,C)–quasi–geodesic in G with endpoints on H is contained in the M–neighborhood of H. We say H is stable in G if H is undistorted in G, and for any K ≥ 1 and L≥ 0 there is an M = M(K,L) ≥ 0 such that any pair of (K,L)–quasi-geodesics in G with common endpoints in H have Hausdorff distance no greater than M. In <cit.> the author proved that a subgroup is stable if and only if it is strongly quasiconvex and hyperbolic.Before we define the concept of subgroup divergence, we need to introduce the notions of domination and equivalence which are the tools to measure the subgroup divergence.Let ℳ be the collection of all functions from [0,∞) to [0,∞]. Let f and g be arbitrary elements of ℳ. The function f is dominated by the function g, denoted f≼ g, if there are positive constants A, B, C and D such that f(x)≤ Ag(Bx)+Cx for all x>D. Two function f and g are equivalent, denoted f∼ g, if f≼ g and g≼ f. A function f in ℳ is linear, quadratic or exponential... if f is respectively equivalent to any polynomial with degree one, two or any function of the form a^bx+c, where a>1, b>0.Let {δ^n_ρ} and {δ'^n_ρ} be two families of functions of ℳ, indexed over ρ∈ (0,1] and positive integers n≥ 2. The family {δ^n_ρ} is dominated by the family {δ'^n_ρ}, denoted {δ^n_ρ}≼{δ'^n_ρ}, if there exists constant L∈ (0,1] and a positive integer M such that δ^n_Lρ≼δ'^Mn_ρ. 
Two families {δ^n_ρ} and {δ'^n_ρ} are equivalent, denoted {δ^n_ρ}∼{δ'^n_ρ}, if {δ^n_ρ}≼{δ'^n_ρ} and {δ'^n_ρ}≼{δ^n_ρ}.

A family {δ^n_ρ} is dominated by (or dominates) a function f in ℳ if {δ^n_ρ} is dominated by (or dominates) the family {δ'^n_ρ} where δ'^n_ρ=f for all ρ and n. The equivalence between a family {δ^n_ρ} and a function f in ℳ is defined similarly. Thus, a family {δ^n_ρ} is linear, quadratic, exponential, etc., if {δ^n_ρ} is equivalent to a function f which is linear, quadratic, exponential, etc.

Let X be a geodesic space and A a subspace of X. Let r be any positive number.
* N_r(A)={x ∈ X | d_X(x, A)<r}
* ∂ N_r(A)={x ∈ X | d_X(x, A)=r}
* C_r(A)=X-N_r(A).
* Let d_r,A be the induced length metric on the complement of the r–neighborhood of A in X. If the subspace A is clear from context, we use the notation d_r instead of d_r,A.

Let (X,A) be a pair of a geodesic space and a subspace. For each ρ∈ (0,1] and positive integer n≥ 2, we define a function σ^n_ρ:[0, ∞)→ [0, ∞] as follows: For each positive r, if there is no pair of points x_1, x_2 ∈∂ N_r(A) such that d_r(x_1, x_2)<∞ and d(x_1,x_2)≥ nr, we define σ^n_ρ(r)=∞. Otherwise, we define σ^n_ρ(r)=inf d_ρ r(x_1,x_2), where the infimum is taken over all x_1, x_2 ∈∂ N_r(A) such that d_r(x_1, x_2)<∞ and d(x_1,x_2)≥ nr. The family of functions {σ^n_ρ} is the subspace divergence of A in X, denoted div(X,A).

We now define the subgroup divergence of a subgroup of a finitely generated group. Let G be a finitely generated group and H a subgroup of G. We define the subgroup divergence of H in G, denoted div(G,H), to be the subspace divergence of H in the Cayley graph Γ(G,S) for some finite generating set S. The concept of subgroup divergence was introduced by the author under the name lower relative divergence in <cit.>. Subgroup divergence is a quasi-isometry invariant of pairs (see Proposition 4.9 in <cit.>). This implies that the subgroup divergence of a subgroup of a finitely generated group does not depend on the choice of finite generating set of the whole group.

§.§ Geometry and algebra of right-angled Coxeter groups

In this section, we review the concepts of right-angled Coxeter groups, special subgroups, parabolic subgroups, star subgroups, join subgroups, Davis complexes, and some basic algebraic and geometric properties of right-angled Coxeter groups.

Given a finite, simplicial graph Γ, the associated right-angled Coxeter group G_Γ has generating set S the vertices of Γ, and relations s^2 = 1 for all s in S and st = ts whenever s and t are adjacent vertices.

Let S_1 be a subset of S. The subgroup of G_Γ generated by S_1 is a right-angled Coxeter group G_Γ_1, where Γ_1 is the induced subgraph of Γ with vertex set S_1 (i.e. Γ_1 is the union of all edges of Γ with both endpoints in S_1). The subgroup G_Γ_1 is called a special subgroup of G_Γ. Any of its conjugates is called a parabolic subgroup of G_Γ.

A reduced word for a group element g in G_Γ is a minimal length word in the free group F(S) representing g. It is proved in <cit.> that if w = v_1v_2⋯ v_p is not reduced, then there exist 1 ≤ i < j ≤ p such that v_i = v_j and v_i is adjacent to each of the vertices v_i+1,⋯, v_j-1 (the Deletion Condition). Moreover, it is also proved in <cit.> that if two reduced words w, w' define the same element of G_Γ, then w can be transformed into w' by a finite number of letter swapping operations (the Transpose Condition).

Let w be any word in the vertex generators. We say that v∈ S is in the support of w, written v∈ supp(w), if v occurs as a letter in w.
For g ∈ G_Γ and w a reduced word representing g, we define the support of g, supp(g), to be supp(w). We define the cyclic support of g, supp_c(g), to be the intersection of all sets supp(wgw^-1), where w ranges over the group elements of G_Γ. It follows from the Transpose Condition that supp(g) and supp_c(g) are well-defined. We say that u is cyclically reduced if supp(u)=supp_c(u). It is also well known that each g∈ G_Γ has a unique reduced expression wuw^-1 with u cyclically reduced, and therefore supp_c(g)=supp(u).

Given two non-empty graphs Γ_1 and Γ_2, the join of Γ_1 and Γ_2 is the graph obtained by connecting every vertex of Γ_1 to every vertex of Γ_2 by an edge.

Let J be an induced subgraph of Γ which decomposes as a join. We call G_J a join subgroup of G_Γ. A reduced word w in G_Γ is called a join word if w represents an element of some join subgroup. If β is a subword of w, we say that β is a join subword of w when β is itself a join word.

For a vertex v of the graph Γ, let lk(v) denote the subgraph of Γ induced by the vertices adjacent to v, called the link of v, and let st(v) denote the subgraph spanned by v and lk(v), called the star of v. The special subgroup G_st(v) is a star subgroup of G_Γ. Note that the star of a vertex is always a join, but the converse is generally not true. A reduced word w in G_Γ is called a star word if w represents an element of some star subgroup. If β is a subword of w, we say that β is a star subword of w when β is itself a star word. Note that a star word is always a join word, but the converse is generally not true.

Given a finite, simplicial graph Γ, the associated Davis complex Σ_Γ is a cube complex constructed as follows. For every k–clique T ⊂Γ, the special subgroup G_T is isomorphic to the direct product of k copies of ℤ_2. Hence, the Cayley graph of G_T is isomorphic to the 1–skeleton of a k–cube. The Davis complex Σ_Γ has 1–skeleton the Cayley graph of G_Γ, where edges are given unit length. Additionally, for each k–clique T ⊂Γ and coset gG_T, we glue a unit k–cube to gG_T ⊂Σ_Γ. The Davis complex Σ_Γ is a CAT(0) space and the group G_Γ acts properly and cocompactly on the Davis complex Σ_Γ (see <cit.>).

The idea for the following lemma comes from Lemma 3.1 in <cit.>. Moreover, the proof of the following lemma is almost identical to the proof of that lemma. Therefore, we simply reproduce the proof of Lemma 3.1 in <cit.> with slight changes suitable to the case of RACGs. Let H_1 = g_1H_v and H_2 = g_2H_w. Then
* H_1 intersects H_2 if and only if v, w commute and g_1^-1g_2 ∈ G_st(v)G_st(w).
* There is a hyperplane H_3 intersecting both H_1 and H_2 if and only if there is u in lk(v) ∩ lk(w) such that g_1^-1g_2 ∈ G_st(v)G_st(u)G_st(w).
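The Deletion Condition above also gives a simple (if inefficient) reduction procedure for words in G_Γ, which in turn computes supports. The following is a minimal illustrative sketch in Python; the encoding of Γ as an adjacency dictionary and of words as lists of vertex labels is our own hypothetical convention, not the paper's.

```python
def reduce_word(word, adj):
    """Reduce a word in the RACG G_Gamma by repeatedly applying the
    Deletion Condition: delete a pair of equal letters v_i = v_j whenever
    v_i is adjacent to every letter strictly between them."""
    w = list(word)
    changed = True
    while changed:
        changed = False
        for i in range(len(w)):
            for j in range(i + 1, len(w)):
                if w[i] == w[j] and all(u in adj[w[i]] for u in w[i + 1:j]):
                    del w[j], w[i]   # delete the later index first
                    changed = True
                    break
            if changed:
                break
    return w

def supp(word, adj):
    """Support of the element represented by `word`; well-defined since any
    two reduced words for the element use the same letters (Transpose)."""
    return set(reduce_word(word, adj))

# In the square a-b-c-d (a,c and b,d are the non-adjacent pairs), the word
# 'abcb' reduces to 'ac' because b commutes with c, so supp(abcb) = {a, c}.
square = {"a": {"b", "d"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"a", "c"}}
print(reduce_word(list("abcb"), square), supp("abcb", square))
```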
§ JOIN-FREE SUBGROUPS AND MALNORMALITY IN RIGHT-ANGLED COXETER GROUPS

In this section, we define the concepts of join-free subgroups and star-free subgroups in right-angled Coxeter groups. We study the connections among parabolic subgroups, star-free subgroups, and join-free subgroups. We also give a proof of the theorem that an infinite proper malnormal subgroup of a right-angled Coxeter group with connected defining graph is always join-free. Finally, we characterize almost malnormal parabolic subgroups and almost malnormal collections of parabolic subgroups in right-angled Coxeter groups in terms of their defining subgraphs.

Let Γ be a simplicial finite graph. An infinite subgroup H of the right-angled Coxeter group G_Γ is join-free if none of its infinite order elements are conjugate into a join subgroup. An infinite subgroup H of G_Γ is star-free if none of its infinite order elements are conjugate into a star subgroup.

It is clear from the definition that if the ambient graph Γ is a join (resp. a star), then the right-angled Coxeter group G_Γ contains no join-free subgroup (resp. no star-free subgroup). Therefore, whenever we assume that the right-angled Coxeter group G_Γ contains a join-free subgroup (resp. a star-free subgroup), the ambient graph Γ is implicitly understood not to be a join (resp. not a star).

It is clear that a join-free subgroup of G_Γ is star-free, but the converse is false. For example, we can choose Γ to be a square labeled cyclically by the vertices a, b, c, d. Then

G_Γ=⟨ a,c⟩×⟨ b,d⟩≅ D_∞× D_∞.

Since Γ is a join graph, G_Γ has no join-free subgroup. However, any cyclic subgroup generated by a cyclically reduced word with full support is star-free. In particular, the cyclic subgroup ⟨ abcd ⟩ is a star-free subgroup. We can now connect parabolic subgroups, star-free subgroups, and join-free subgroups.

Let Γ be a non-join connected graph. Let H be a conjugate of the special subgroup induced by a subset S_1 of the vertex set of Γ. Then the following are equivalent:
* S_1 contains at least two non-adjacent vertices and the distance in Γ between any two elements of S_1 is different from 2.
* H is join-free.
* H is star-free.
A subgroup H satisfying one (equivalently, all) of the above conditions is virtually a free group.

Since any join-free subgroup is star-free, we only need to prove that (1) implies (2) and that (3) implies (1). Without loss of generality we can assume that H is a special subgroup. We first prove that (3) implies (1). In fact, if the vertices in S_1 are pairwise adjacent, then H is a finite subgroup and hence H is not star-free. If S_1 has two vertices u and v at distance 2 in Γ, then h=uv is an infinite order element of H which belongs to some star subgroup (namely, the star subgroup of a common neighbor of u and v). Therefore, H is not a star-free subgroup in this case either.

We now prove that (1) implies (2). Assume that H is not join-free. Then there is an infinite order element h in H that is conjugate into a join subgroup. Then supp_c(h) is a subset of the vertex set of some induced join subgraph Γ_1. Since h is an infinite order element of the special subgroup generated by S_1, supp_c(h) is a subset of S_1 and there are two vertices v_1 and v_2 in supp_c(h) that are not adjacent in Γ. Since the two non-adjacent vertices v_1 and v_2 both lie in the join subgraph Γ_1, the distance in Γ between v_1 and v_2 is exactly 2. This is a contradiction. Therefore, H is a join-free subgroup.

We observe that if S_1 contains at least two non-adjacent vertices and the distance in Γ between any two elements of S_1 is different from 2, then the subgraph induced by S_1 is disconnected and each component is a single point or a clique. Therefore, H is a free product of at least two finite subgroups. This implies that H is a virtually free subgroup.

By the above proposition, parabolic join-free subgroups are always virtually free. We remark that any infinite subgroup of a join-free subgroup is also join-free. Therefore, we conclude that any infinite subgroup which is conjugate into a join-free special subgroup is also a virtually free join-free subgroup.
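Condition (1) of the proposition above is a finite check on the defining graph. The following is a minimal sketch under the same hypothetical adjacency-dictionary convention as before; the function names are ours.

```python
from collections import deque
from itertools import combinations

def distance(adj, s, t):
    """Graph distance between vertices s and t, by breadth-first search."""
    dist, queue = {s: 0}, deque([s])
    while queue:
        v = queue.popleft()
        if v == t:
            return dist[v]
        for u in adj[v]:
            if u not in dist:
                dist[u] = dist[v] + 1
                queue.append(u)
    return float("inf")

def special_subgroup_is_join_free(adj, s1):
    """Condition (1): S_1 contains two non-adjacent vertices and no two
    vertices of S_1 are at distance exactly 2 in Gamma."""
    pairs = list(combinations(sorted(s1), 2))
    return (any(t not in adj[s] for s, t in pairs)
            and all(distance(adj, s, t) != 2 for s, t in pairs))

# In the hexagon 0-1-2-3-4-5, the opposite vertices 0 and 3 are non-adjacent
# and at distance 3, so the special subgroup they generate is join-free.
hexagon = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
print(special_subgroup_is_join_free(hexagon, {0, 3}))  # True
print(special_subgroup_is_join_free(hexagon, {0, 2}))  # False: distance 2
```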
In general, we will show that a join-free subgroup is not necessarily conjugate into a join-free special subgroup. However, we will prove later that a join-free subgroup is always virtually free even when it is not conjugate into a join-free special subgroup.

We now give an example of a join-free subgroup which is not conjugate into a join-free special subgroup. Let Γ be the graph in Figure <ref>. We observe that the distance between any two non-adjacent vertices in Γ is exactly two. Therefore, the group G_Γ does not contain any join-free parabolic subgroup by Proposition <ref>. Let x=(aa_1)(dd_1)(aa_1), y=(dd_1)(aa_1)(dd_1), and let H be the subgroup generated by x and y. Then H is a free subgroup of rank two and H is also a join-free subgroup (see the following proposition).

Let Γ be the graph in Figure <ref> and H the subgroup generated by x=(aa_1)(dd_1)(aa_1) and y=(dd_1)(aa_1)(dd_1). Then H is a free subgroup of rank two and H is also a join-free subgroup.

Let S be the vertex set of Γ and T={x,y,x^-1,y^-1}. Let w=u_1u_2⋯ u_n be an arbitrary freely reduced word in T and let w̅ be the word obtained from w by replacing x, x^-1, y, y^-1 by their corresponding subwords in G_Γ. We remark that w and w̅ both represent the same element of H. We will prove that w̅ is a reduced word in G_Γ. Since w is a freely reduced word in T, each subword of two consecutive elements u_iu_i+1 in w must lie in

{xx, x^-1x^-1, yy, y^-1y^-1, xy, y^-1x^-1, yx, x^-1y^-1, x^-1y, y^-1x, xy^-1, yx^-1}.

By using the Deletion Condition, we can check that any subword of w̅ that replaces two consecutive elements u_iu_i+1 in w is reduced. Assume for a contradiction that w̅ is not a reduced word in G_Γ. Then using the Deletion Condition, there exist 1 ≤ℓ < k ≤ 6n such that the ℓ^th element v_ℓ and the k^th element v_k in w̅ are labelled by the same generator in S and v_ℓ commutes with all elements between v_ℓ and v_k. We can assume further that no element of w̅ between v_ℓ and v_k is labelled by the same generator as v_ℓ and v_k. Also, any subword of w̅ that replaces x, y, x^-1, or y^-1 has the same support as w̅. Therefore, v_ℓ and v_k must lie in a subword that replaces two consecutive elements u_iu_i+1 in w. This implies that the subword that replaces two consecutive elements u_iu_i+1 in w is not reduced. This is a contradiction. Therefore, w̅ is a reduced word in G_Γ. This implies that H is a free subgroup of rank 2 and that |h|_S=6|h|_T for each element h in H. This fact also implies that if h is cyclically reduced in (H,T), then h is also cyclically reduced in (G_Γ,S).

We now assume for a contradiction that H is not a join-free subgroup. Then there is a nontrivial element h that is conjugate into a join subgroup. We can assume that h is cyclically reduced in (H,T). Therefore, h is also cyclically reduced in (G_Γ,S) and h lies in a join subgroup. Therefore, the support supp(h)={a,a_1,d,d_1} must lie in the vertex set of some join subgraph Γ'=Γ_1*Γ_2. Since the subgraph of Γ induced by supp(h) is not a join, supp(h)={a,a_1,d,d_1} must lie entirely in Γ_1 or Γ_2 (say Γ_1). Therefore, supp(h)={a,a_1,d,d_1} lies entirely in the star of some vertex of Γ_2. One can check easily that this is a contradiction. Therefore, H is a join-free subgroup.

We will prove later that all join-free subgroups in RACGs are stable. However, the converse is not true. For example, a cyclic subgroup H of a right-angled Coxeter group G_Γ generated by a rank-one isometry g is stable, but H is not a join-free subgroup when g is conjugate into a star subgroup. We can also construct a non-virtually cyclic stable subgroup which is not join-free as follows.
Let Γ be a connected graph which has no separating clique and no embedded cycle of length four. We assume also that Γ contains an embedded cycle C of length more than four. Then the right-angled Coxeter group G_Γ is a one-ended hyperbolic group (see Theorem 8.7.2 and Corollary 12.6.3 in <cit.>) and the special subgroup G_C is a non-virtually cyclic quasiconvex subgroup of G_Γ. Therefore, G_C is a non-virtually cyclic stable subgroup. It is obvious that the vertex set of C does not satisfy the conditions in Proposition <ref>. Therefore, G_C is not a join-free subgroup.

We now prove that an infinite proper malnormal subgroup of a right-angled Coxeter group with non-join connected defining graph is always join-free.

We first prove that for each vertex s of Γ and each group element g in G_Γ, the group element gsg^-1 is never an element of H. Assume for a contradiction that there is a vertex s_0 of Γ and a group element g_0 in G_Γ such that g_0s_0g_0^-1 is a group element of H. Then s_0 is a group element of the group K=g_0^-1Hg_0. We note that K is also a malnormal subgroup. Let s be an arbitrary vertex adjacent to s_0. Then we see that sKs∩ K contains the non-identity element s_0. Therefore, s must also be a group element of K. Since Γ is connected, all vertices of Γ must be group elements of K. This implies that K (and hence H) is the ambient group G_Γ, which is a contradiction. Therefore, for each vertex s of Γ and each group element g in G_Γ, the group element gsg^-1 is never an element of H.

We now assume for a contradiction that H is not a join-free subgroup. Then there is an infinite order element h in H such that h belongs to some parabolic subgroup gG_Λ g^-1, where Λ is a join of two other subgraphs Λ_1 and Λ_2. We note that H∩ gG_Λ g^-1 is also malnormal in gG_Λ g^-1, and it is infinite since it contains h. If both subgraphs Λ_1 and Λ_2 have diameter at least 2, then H∩ gG_Λ g^-1=gG_Λ g^-1 by Lemma <ref>. In particular, for each vertex s of Λ the group element gsg^-1 belongs to H, which is a contradiction. We now consider the case where Λ_1 or Λ_2 (say Λ_1) consists of a single vertex or has diameter 1. Let s be an arbitrary vertex of Λ_1. Then the group element g'=gsg^-1 commutes with all elements of gG_Λ g^-1. In particular, g' commutes with h, so g'Hg'^-1∩ H contains the infinite order element h; by malnormality, g' is a group element of H, which is also a contradiction. Therefore, H must be a join-free subgroup.

We remark that if Γ is a join graph, then the right-angled Coxeter group G_Γ does not contain any infinite proper malnormal subgroup. In fact, if Γ is a join of two subgraphs of diameter at least 2, then G_Γ contains no infinite proper malnormal subgroup by Lemma <ref>. Otherwise, Γ is a join of a subgraph of diameter at least 2 and a non-empty clique. In this case there is a vertex v of Γ that commutes with all group elements of G_Γ. By the proof of Theorem <ref>, if H is an infinite proper malnormal subgroup of G_Γ, then the vertex v is never an element of H. Moreover, since v commutes with all group elements of G_Γ, we have vHv=H, so malnormality forces v to be an element of H, a contradiction. Therefore, if Γ is a join graph, then the right-angled Coxeter group G_Γ does not contain any infinite proper malnormal subgroup.

We now characterize almost malnormal parabolic subgroups in right-angled Coxeter groups. We note that if a subgroup is almost malnormal, then all its conjugates are also almost malnormal. Therefore, we can assume that H=G_Λ is the special subgroup induced by Λ. We first assume G_Λ is almost malnormal and we will prove that no vertex of Γ-Λ commutes with two non-adjacent vertices of Λ.
Assume for a contradiction that there is a vertex u in Γ-Λ that commutes with two non-adjacent vertices v_1 and v_2 of Λ. Then the subgroup uG_Λ u^-1∩ G_Λ contains the infinite order group element v_1v_2. Since the subgroup G_Λ is almost malnormal, u must be a group element of G_Λ, and this implies that u is a vertex of Λ, which is a contradiction. Thus, no vertex of Γ-Λ commutes with two non-adjacent vertices of Λ.

We now assume that no vertex of Γ-Λ commutes with two non-adjacent vertices of Λ and we will prove that G_Λ is almost malnormal. Assume for a contradiction that G_Λ is not almost malnormal. Then there is a group element g not in G_Λ such that gG_Λ g^-1∩ G_Λ is infinite. We choose such an element g with |g|_S minimal, where S is the vertex set of Γ. Let w_0 be a reduced word in S that represents g. Then the reverse word w̅_0 of w_0 is a reduced word that represents g^-1. Since g is not an element of G_Λ, some element of w_0 must be a vertex of Γ-Λ.

Since gG_Λ g^-1∩ G_Λ is infinite, there is an infinite order element h in G_Λ such that ghg^-1 is also an element of G_Λ. Let w_1 be a reduced word in S that represents h. Then all elements of w_1 are vertices of Λ and at least two of them are non-adjacent vertices of Λ. The concatenation w=w_0w_1w̅_0 represents the group element ghg^-1 in G_Λ. We can write w=v_1v_2⋯ v_p. Since w contains a vertex not in Λ, w is not reduced. Then there exist 1 ≤ i < j ≤ p such that v_i = v_j and v_i is adjacent to each of the vertices v_i+1,⋯, v_j-1. Since w_0, w_1, and w̅_0 are all reduced, v_i and v_j cannot lie in the same block of w. We first assume that v_i lies in w_0 and v_j lies in w_1. Then v_i is a vertex of Λ and we have g=g'v_i where |g'|_S=|g|_S-1. Therefore, g'G_Λ g'^-1∩ G_Λ=gG_Λ g^-1∩ G_Λ, which contradicts the choice of g. By an analogous argument we also get the same contradiction if we assume that v_i lies in w_1 and v_j lies in w̅_0. Therefore, v_i must lie in w_0 and v_j must lie in w̅_0. Moreover, v_i is not a vertex of Λ, by an analogous argument. Since w_1 contains at least two non-adjacent vertices of Λ, the vertex v_i must commute with both of these vertices, which is a contradiction. Therefore, G_Λ is almost malnormal.

Combining Proposition <ref> above and Proposition 3.4 in <cit.> we obtain the following corollary, which characterizes almost malnormal collections of parabolic subgroups in right-angled Coxeter groups.

Let Γ be a simplicial finite graph and

ℋ={g_1G_Λ_1g_1^-1, g_2G_Λ_2g_2^-1, ⋯, g_nG_Λ_ng_n^-1}

a collection of parabolic subgroups of the right-angled Coxeter group G_Γ. Then ℋ is an almost malnormal collection in G_Γ if and only if the following hold:
* For each Λ_i no vertex outside Λ_i commutes with two non-adjacent vertices of Λ_i; and
* Λ_i∩Λ_j is empty or a clique for each i≠ j.
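The criterion in this corollary is a purely combinatorial check on the defining graph. The following is a minimal sketch; as before, the adjacency-dictionary encoding of Γ and the function name are our own hypothetical conventions.

```python
from itertools import combinations

def is_almost_malnormal_collection(adj, subsets):
    """Check the two conditions of the corollary for the subgraphs
    Lambda_i, given Gamma as {vertex: set of neighbours}."""
    # Condition 1: for each Lambda_i, no vertex outside Lambda_i is adjacent
    # to (i.e. commutes with) two non-adjacent vertices of Lambda_i.
    for lam in map(set, subsets):
        for v in set(adj) - lam:
            nbrs = sorted(adj[v] & lam)
            if any(b not in adj[a] for a, b in combinations(nbrs, 2)):
                return False
    # Condition 2: each pairwise intersection is empty or a clique.
    for lam1, lam2 in combinations([set(s) for s in subsets], 2):
        common = sorted(lam1 & lam2)
        if any(b not in adj[a] for a, b in combinations(common, 2)):
            return False
    return True

# For Gamma the path a-b-c-d, the collection {G_{a,b}, G_{c,d}} satisfies
# both conditions, hence is almost malnormal (i.e. hyperbolically embedded).
path = {"a": {"b"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"c"}}
print(is_almost_malnormal_collection(path, [{"a", "b"}, {"c", "d"}]))  # True
```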
§ DUAL VAN KAMPEN DIAGRAMS FOR RIGHT-ANGLED COXETER GROUPS

In this section, we construct dual van Kampen diagrams for right-angled Coxeter groups, which are almost identical to the dual van Kampen diagrams for right-angled Artin groups constructed in <cit.>. In <cit.>, Koberda-Mangahas-Taylor used dual van Kampen diagrams for right-angled Artin groups to study the geometry of their join-free subgroups (under the name purely loxodromic subgroups) and star-free subgroups. In Sections <ref> and <ref> of this article, we follow the same strategy as in <cit.> to study the geometry of join-free subgroups and star-free subgroups in right-angled Coxeter groups.

§.§ Formal definition

We now develop dual van Kampen diagrams for right-angled Coxeter groups. The key ingredient for constructing such diagrams for RACGs, which are similar to the ones for RAAGs, is the similarity between Davis complexes and universal covers of Salvetti complexes.

Let Γ be a graph with vertex set S. Let w be a word representing the trivial element in G_Γ. A dual van Kampen diagram Δ for w in G_Γ is an oriented disk D together with a collection 𝒜 of properly embedded arcs in general position, satisfying the following:
* Each arc of 𝒜 is labeled by an element of S. Moreover, if two arcs of 𝒜 intersect, then the generators corresponding to their labels are adjacent in Γ.
* With its induced orientation, ∂ D represents a cyclic conjugate of the word w in the following manner: there is a point *∈∂ D such that w is spelled by starting at *, traversing ∂ D according to its orientation, and recording the labels of the arcs of 𝒜 it encounters.
We think of the boundary of D as subdivided into edges and labeled according to the word w. In this way, each arc of 𝒜 corresponds to two letters of w, which are represented by edges on the boundary of D. While not required by the definition, it is convenient to restrict our attention to tight dual van Kampen diagrams, in which arcs of 𝒜 intersect at most once.

In comparison with dual van Kampen diagrams for RAAGs, the only difference for RACGs is that we do not need a direction on each embedded arc of 𝒜 or on each edge of ∂ D. The key reason for this difference is that each edge of the universal cover of a Salvetti complex is equipped with a direction while each edge of a Davis complex is not.

We now show how to construct a dual van Kampen diagram for an identity word w in a right-angled Coxeter group. Let Δ̃⊂ S^2 be a (standard) van Kampen diagram for w, with respect to a standard presentation of G_Γ. Consider Δ̃^*, the dual of Δ̃ in S^2, and name the vertex which is dual to the face S^2-Δ̃ as v_∞. Then for a sufficiently small ball B(v_∞) around v_∞, Δ̃^*-B(v_∞) can be considered as a dual van Kampen diagram with a suitable choice of the labeling map. Therefore a dual van Kampen diagram exists for any word w representing the trivial element in G_Γ. Conversely, a van Kampen diagram Δ̃ for a word can be obtained from a dual van Kampen diagram Δ by considering the dual complex again. So, the existence of a dual van Kampen diagram for a word w implies that w represents the trivial element in G_Γ.

§.§ Surgery and subwords

Let Γ be a graph with vertex set S. Start with a dual van Kampen diagram Δ, with disk D and collection 𝒜 of embedded arcs in D, for an identity word w. Suppose that γ is a properly embedded arc in Δ which is either an arc of 𝒜 or transverse to the arcs of 𝒜. Traversing γ in some direction and recording the labels of those arcs of 𝒜 that cross γ spells a word y in the standard generators. We say the word y is obtained from label reading along γ with the chosen direction.

In particular, starting with a subword w' of w, any oriented arc γ of D which begins at the initial vertex of w' and ends at the terminal vertex of w' produces a word y via label reading such that w'= y in G_Γ. To see this, we observe that the arc γ cuts the disk D into two disks D_1 and D_2, one of which (say D_1) determines the homotopy (and sequence of moves) transforming the word w' into y.
In other words, the disk D_1 along with the arcs from 𝒜 forms a dual van Kampen diagram for the word w'y̅, and we say that this diagram is obtained via surgery on Δ. It is straightforward to see that if the arc γ is labelled by a vertex v in S, then w' represents an element of the star subgroup G_st(v). See the following lemma for a precise statement.

Suppose an arc of 𝒜 in a dual van Kampen diagram Δ for the identity word w cuts off the subword w', i.e., w ≡ svw'vt, where s, w', and t are subwords and v is the letter at the ends of the arc. Then w' represents a group element of the star subgroup G_st(v).

If a subword in a dual van Kampen diagram has the property that no two arcs emanating from it intersect, this subword is combed in the dual van Kampen diagram. We remark that this type of subword was also defined for dual van Kampen diagrams for RAAGs in <cit.>, where it played an important role in studying certain types of subgroups of RAAGs. In Sections <ref> and <ref> of this article, we follow the same strategy as in <cit.> to study subgroups of RACGs. Therefore, the property of being combed will be important in these sections.

Suppose w is a word representing the identity and b is a subword of w, so that w is the concatenation of words a, b, and c. Let Δ be a dual van Kampen diagram for w. Then there exists a word b', obtained by re-arranging the letters in b, such that b' = b in G_Γ and there exists a dual van Kampen diagram Δ' for ab'c in which b' is combed, arcs emanating from b' have the same endpoints in the boundary subword ca as their counterparts in b, and arcs that both begin and end in ca are unchanged in Δ'.

Furthermore, there exists a word b'', obtained by deleting letters in b', such that b'' = b in G_Γ and there exists a dual van Kampen diagram Δ'' for ab''c which is precisely Δ' without the arcs corresponding to the deleted letters.

The above lemma is identical to Lemma 3.2 in <cit.> for RAAGs. Moreover, we observe that the proof of Lemma 3.2 in <cit.> can be applied to prove the above lemma. Therefore, the reader can consult the proof of Lemma 3.2 in <cit.> for the proof of the above lemma.

§.§ Reducing diagrams

In subsection 3.5 of <cit.>, Koberda-Mangahas-Taylor introduced reducing diagrams and some related concepts to study words in RAAGs as well as paths in universal covers of Salvetti complexes. We observe that these concepts are also well-defined in the case of RACGs and that they can help us study words in RACGs as well as paths in Davis complexes. Therefore, we largely follow subsection 3.5 of <cit.>, and the reader can easily verify that these materials carry over to the case of RACGs.

Let h be a word in the vertex generators of G_Γ, which is not assumed to be reduced in any sense. Let w denote a reduced word in the vertex generators which represents the same group element as h does. Then the word hw̅ represents the identity in G_Γ and so it is the boundary of some dual van Kampen diagram Δ. (Here w̅ denotes the inverse of the word w.) In this way, the boundary of Δ consists of the two words h and w̅. We sometimes refer to a dual van Kampen diagram constructed in this way as a reducing diagram, as it represents a particular way of reducing h to the reduced word w. For such dual van Kampen diagrams, ∂ D is divided into two subarcs (each a union of edges) corresponding to the words h and w; we call these subarcs the h and w subarcs, respectively.

Suppose that Δ is a dual van Kampen diagram that reduces h to the reduced word w.
Since w is already a reduced word, no arc of 𝒜 can have both its endpoints on the w subarc of ∂ D. Otherwise, one could surger the diagram to produce a word equivalent to w with fewer letters. Hence, each arc of 𝒜 either has both its endpoints on the subarc of ∂ D corresponding to h, or it has one endpoint in each subarc of ∂ D. In the former case, we call the arc (and the letters of h corresponding to its endpoints) noncontributing, since these letters do not contribute to the reduced word w. Otherwise, the arc is called contributing (as is the letter of h corresponding to the endpoint contained in the h subarc of ∂ D). If the word h is partitioned into a product of subwords abc, then the contribution of the subword b to w is the set of letters in b which are contributing. We remark that whether a letter of h is contributing or not is a property of the fixed dual van Kampen diagram that reduces h to w.

§ THE GEOMETRY OF SUBGROUPS OF RIGHT-ANGLED COXETER GROUPS

In this section, we prove the undistortedness and the virtual freeness of star-free subgroups and the stability of join-free subgroups in right-angled Coxeter groups. Our work follows the same strategy as <cit.> for proving the analogous properties in right-angled Artin groups, but is based on the dual van Kampen diagrams for right-angled Coxeter groups developed in the previous section. Throughout this section, we assume that Γ is connected and not a join.

§.§ The geometry of star-free subgroups

Recall that a nontrivial subgroup H of G_Γ is star-free if no infinite order element of H is conjugate into a star subgroup. We now assume that H is a finitely generated star-free subgroup of G_Γ with a finite generating set T. Therefore, each element h∈ H can be expressed as a geodesic word in H, that is, h=h_1h_2⋯ h_n such that h_i∈ T and n is minimal. We use a dual van Kampen diagram with boundary word (h_1h_2⋯ h_n)h^-1, where h and each h_i are written as reduced words in G_Γ. In other words, we concatenate the reduced word representatives of the h_i to obtain a word representing h = h_1⋯ h_n and consider a reducing diagram for this word. With our choices fixed, we call such a reducing diagram for h simply a dual van Kampen diagram for h ∈ H.

The following lemma is identical to Lemma 4.1 in <cit.> for RAAGs. Moreover, their proofs are almost identical, except for a small extra step at the end of the proof of the following lemma.

Suppose H is a finitely generated, star-free subgroup of G_Γ. There exists D = D(H) with the following property: If in a dual van Kampen diagram for h∈ H, a letter in h_i is connected to a letter in h_j (i<j), then j-i < D.

Suppose in a dual van Kampen diagram for h ∈ H, a letter g in h_i is connected to another letter g in h_j. By Lemma <ref>, h_i⋯ h_j = σ M τ, where M is in the star of g, and σ, τ are a prefix of h_i and a suffix of h_j respectively. Therefore, if the lemma is false, there is a sequence of reduced-in-H words

h^(t)_i(t)⋯ h^(t)_j(t)=σ_t M_t τ_t

as above, with j(t)-i(t) strictly increasing. Because Γ is finite and H is finitely generated, we may pass to a subsequence so that the M_t are in the star of the same generator v, and furthermore we have constant σ_t = σ and τ_t = τ, while M_t≠ M_s for s≠ t. Therefore, for each t≥ 2, the element

k_t=(h^(t)_i(t)⋯ h^(t)_j(t))(h^(1)_i(1)⋯ h^(1)_j(1))^-1=σ M_t M_1^-1σ^-1

is a nontrivial element of the subgroup H∩σ G_st(v)σ^-1. Moreover, k_t≠ k_s for any 2≤ t<s.

Assume that k_t_0 has infinite order for some t_0≥ 2. Then H is not a star-free subgroup, which is a contradiction.
We now assume that all the k_t have order two. Since k_t≠ k_s for any 2≤ t<s, we can choose two different elements k_t_1 and k_t_2 which do not commute. (This is possible: if the k_t commuted pairwise, they would generate an infinite elementary abelian 2-subgroup of G_Γ, whose finite subgroups would have unbounded order; but every finite subgroup of G_Γ is conjugate into a clique special subgroup and so has bounded order.) Therefore, the order of the group element k_t_1k_t_2 is not two. This implies that k_t_1k_t_2 is an infinite order element of the subgroup H∩σ G_st(v)σ^-1. Then H is not a star-free subgroup, which is a contradiction.

The following lemma is identical to Lemma 4.2 in <cit.> for RAAGs. Moreover, the proof of the lemma below follows almost the same line of argument as the proof of Lemma 4.2 in <cit.>. Here we only need to replace Lemmas 3.2 and 4.1 in the proof of Lemma 4.2 in <cit.> by Lemmas <ref> and <ref> of this paper, respectively, to obtain the proof of the following lemma.

Suppose H is a finitely generated, star-free subgroup of G_Γ and D is a constant as in Lemma <ref>. Let h_i⋯ h_j be a subword of h = h_1 ⋯ h_n reduced in H as above. Then the element h_i ⋯ h_j ∈ G_Γ may be written as a concatenation of three words σ W τ, where the letters occurring in σ are a subset of the letters occurring in h_i-D⋯ h_i-1 when i>D, and in h_1⋯ h_i-1 otherwise; the letters occurring in τ are a subset of the letters occurring in h_j+1⋯ h_j+D when j ≤ n-D, and in h_j+1⋯ h_n otherwise; and the letters occurring in W are exactly the letters occurring in h_i ⋯ h_j which survive in the word h after it is reduced in G_Γ.

The following lemma is the key lemma in proving that the star-free subgroup H is undistorted in G_Γ. This lemma is identical to Lemma 4.3 in <cit.> for RAAGs, and its proof again follows almost the same line of argument as the proof of Lemma 4.3 in <cit.>. Here we only need to replace Lemma 4.2 in the proof of Lemma 4.3 in <cit.> by Lemma <ref> of this paper to obtain the proof of the following lemma.

Given H a finitely generated, star-free subgroup of G_Γ, there exists K = K(H) such that, if h_i ⋯ h_j is a subword of a reduced word for h in H which contributes nothing to the reduced word for h in G_Γ, then j - i < K.

The following proposition is a direct result of Lemma <ref>.

Finitely generated star-free subgroups are undistorted.

The proof of the following proposition is almost identical to the proof of Theorem 53 in <cit.>. We recall the proof, with a slight modification, for the convenience of the reader.

Star-free subgroups are virtually free.

We first assume that H is torsion free. We will prove that H is a free subgroup by induction on the number of vertices of Γ. Since H is a torsion free star-free subgroup, for each vertex v of Γ and each g in G_Γ we have H∩ gG_st(v)g^-1={1}. For the base case Γ={v}, G_Γ=ℤ_2 and H={1}. Therefore, the result in this case is obvious. For the inductive step, choose a vertex v of Γ and let Γ_v be the induced subgraph of Γ generated by all vertices of Γ except v. We observe that G_Γ=G_st(v)*_G_lk(v) G_Γ_v. By standard Bass-Serre theory, we see that H acts on the corresponding Bass-Serre tree with trivial edge stabilizers. Therefore, there exists a (possibly infinite) collection of subgroups {H_i}, with each H_i conjugate into G_Γ_v in G_Γ, such that H is a free product of the subgroups H_i with possibly an additional free factor. Since each H_i is conjugate into G_Γ_v and Γ_v has fewer vertices than Γ, we see that H is free by induction.

We now assume that H is not torsion free. Let G_1 be a finite-index torsion free subgroup of G_Γ and H_1=G_1∩ H (see <cit.> for a construction of the group G_1). Then H_1 is a torsion free star-free subgroup of finite index in H. Also, H_1 is a free subgroup by the above argument.
This implies that H is a virtually free subgroup.

§.§ Geometric embedding properties of join-free subgroups

Assume that the graph Γ is a non-join connected graph. A nontrivial subgroup H of G_Γ is N–join-busting if, for any reduced word w representing an element h of H and any join subword β of w, the length of β is bounded above by N.

By using almost the same line of argument as in Section 5 of <cit.>, we obtain Proposition <ref> below. We remark that Proposition <ref> is identical to Theorem 5.2 in <cit.>. However, we need to use dual van Kampen diagrams for RACGs instead of the dual van Kampen diagrams for RAAGs in <cit.>. We also use Lemmas <ref>, <ref>, and <ref> of this paper instead of Lemmas 3.2, 4.1, and 4.2 in <cit.>, respectively.

Let Γ be a non-join connected graph and H a finitely generated join-free subgroup of the right-angled Coxeter group G_Γ. There exists an N = N(H) such that H is N–join-busting.

In Proposition <ref> below, we prove the stability of N–join-busting subgroups in RACGs. This proposition is identical to Corollary 6.2 in <cit.>. The proof of Proposition <ref> follows almost the same line of argument as in Section 6 of <cit.>. However, we need to use dual van Kampen diagrams for RACGs instead of the dual van Kampen diagrams for RAAGs, and we use Proposition <ref> of this paper instead of Proposition 4.4 in <cit.>.

Let Γ be a non-join connected graph and H a finitely generated join-free subgroup of the right-angled Coxeter group G_Γ. If H is N–join-busting for some N, then H is stable in G_Γ.

§ SUBGROUP DIVERGENCE OF JOIN-FREE SUBGROUPS IN RIGHT-ANGLED COXETER GROUPS

In this section, we study the subgroup divergence of join-free subgroups in right-angled Coxeter groups. We prove that the subgroup divergence of join-free subgroups in right-angled Coxeter groups can be a polynomial of arbitrary degree, while it must be exactly quadratic in 𝒞ℱ𝒮 right-angled Coxeter groups.

§.§ Subgroup divergence in 𝒞ℱ𝒮 right-angled Coxeter groups

We first define the concept of 𝒞ℱ𝒮 graphs.

Let Γ be a non-join graph. We define the associated four-cycle graph Γ^4 as follows. The vertices of Γ^4 are the induced loops of length four (i.e. four-cycles) in Γ. Two vertices of Γ^4 are connected by an edge if the corresponding four-cycles in Γ share a pair of non-adjacent vertices. Given a subgraph K of Γ^4, we define the support of K to be the collection of vertices of Γ (i.e. generators of G_Γ) that appear in the four-cycles of Γ corresponding to the vertices of K. The graph Γ is said to be 𝒞ℱ𝒮 if there exists a component of Γ^4 whose support is the entire vertex set of Γ.

The following two lemmas contribute to the proof of the quadratic upper bound for the subgroup divergence of join-free subgroups in 𝒞ℱ𝒮 right-angled Coxeter groups.

Let Γ be a non-join connected graph with vertex set S. Let H be a finitely generated join-free subgroup of G_Γ. There is a positive number K such that the following holds. Let g be an element of G_Γ and let (s_1,t_1), (s_2,t_2) be the two pairs of non-adjacent vertices in a four-cycle of Γ. Let u_1=s_1t_1 and u_2=s_2t_2. Then

d_S(gu_1^iu_2^j,H)≥ (i+j)/K-|g|_S-1.

By Proposition <ref>, there is a positive integer N such that for any reduced word w representing an element h∈ H and any join subword w' of w, we have ℓ(w')≤ N. Let K=(N+1)/2; we will prove that

d_S(gu_1^iu_2^j,H)≥ (i+j)/K-|g|_S-1.

Let m=d_S(gu_1^iu_2^j,H). Then there is an element g_1 in G_Γ with |g_1|_S=m and an element h in H such that h=gu_1^iu_2^jg_1.
Since u_1^iu_2^j is an element of a join subgroup of G_Γ and |g_1|_S=m, the element h can be represented by a reduced word w that is a product of at most (|g|_S+1+m) join subwords. Also, the length of each join subword of w is bounded above by N. Therefore, the length of w is bounded above by N(|g|_S+m+1). Also,

ℓ(w)≥ |u_1^iu_2^j|_S-|g_1|_S-|g|_S≥ 2(i+j)-m-|g|_S.

This implies that

2(i+j)-m-|g|_S≤ N(|g|_S+m+1).

Therefore,

d_S(gu_1^iu_2^j,H)=m≥ 2(i+j)/(N+1)-|g|_S-N/(N+1)≥ (i+j)/K-|g|_S-1.

Let Γ be a non-join connected 𝒞ℱ𝒮 graph with vertex set S. Let C be a component of Γ^4 whose support is the entire vertex set of Γ. Let H be a finitely generated join-free subgroup of G_Γ and h an arbitrary element of H. There is a number L=L(h,C)≥ 1 such that the following holds. Let m≥ L^2 be an integer and u=st, where (s,t) is a pair of non-adjacent vertices in some induced four-cycle Q_0 of Γ that corresponds to a vertex of C. Then there is a path α outside the (m/L-L)–neighborhood of H connecting u^m and hu^m with length bounded above by Lm.

Let M=diam(C), let K be the positive integer from Lemma <ref>, and let k=|h|_S. Let L=2(k+1)(M+2)+K+k+M+1. Choose a reduced word

w=s_1s_2⋯ s_k, where s_i∈ S,

that represents the element h. Since the support of the component C of Γ^4 is the entire vertex set of Γ, for each i∈{1,2,⋯, k} there is a four-cycle Q_i corresponding to a vertex of the component C of Γ^4 such that Q_i contains the vertex s_i. Let (a_i,b_i) be the pair of non-adjacent vertices of Q_i that does not contain s_i, let u_i=a_ib_i, and let w_i=s_1s_2⋯ s_i. Then the length of each word w_i is bounded above by k, w_i+1=w_is_i+1, and w_k=w represents the element h.

We now construct a path α_0 outside the (m/L-L)–neighborhood of H connecting u^m and w_1u_1^m with length bounded above by 2(M+2)m. Since M=diam(C), we can choose a positive integer n≤ M and n+1 four-cycles P_0, P_1, ⋯, P_n corresponding to vertices of the component C of Γ^4 such that the following conditions hold:
* P_0=Q_0 contains the pair of non-adjacent vertices (s,t); let v_0=u.
* P_n=Q_1 contains the pair of non-adjacent vertices (a_1,b_1); let v_n+1=u_1.
* P_j-1 and P_j share a pair of non-adjacent vertices (c_j,d_j), where j∈{1, 2, ⋯, n}; let v_j=c_jd_j.
For each j∈{0, 1, 2, ⋯, n} let β_j be a path connecting v_j^m and v_j+1^m of length 2m through the vertices

v_j^m, v_j^mv_j+1, v_j^mv_j+1^2, ⋯, v_j^mv_j+1^m, v_j^m-1v_j+1^m, v_j^m-2v_j+1^m, ⋯, v_j+1^m.

By Lemma <ref> the above vertices must lie outside the (m/K-1)–neighborhood of H. Therefore, these vertices also lie outside the (m/L-L)–neighborhood of H, so β_j is a path outside the (m/L-L)–neighborhood of H connecting v_j^m and v_j+1^m. Since w_1u_1^m=s_1u_1^m=u_1^ms_1, we can connect u_1^m and w_1u_1^m by an edge β_n+1 labelled by s_1. Let α_0=β_0∪β_1∪⋯∪β_n∪β_n+1. Then it is clear that α_0 is a path outside the (m/L-L)–neighborhood of H connecting u^m and w_1u_1^m with length bounded above by 2(M+2)m.

By similar constructions, for each i∈{1,2,⋯, k-1} there is a path α_i outside the (m/L-L)–neighborhood of H connecting w_iu_i^m and w_i+1u_i+1^m with length bounded above by 2(M+2)m. We can also construct a path α_k outside the (m/L-L)–neighborhood of H connecting hu_k^m and hu^m with length bounded above by 2(M+1)m. Let α=α_0∪α_1∪⋯∪α_k. Then it is clear that α is a path outside the (m/L-L)–neighborhood of H connecting u^m and hu^m with length bounded above by 2(k+1)(M+2)m. By the choice of L, the length of α is also bounded above by Lm.
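The 𝒞ℱ𝒮 condition defined earlier in this subsection is also algorithmically checkable from Γ: one enumerates the induced four-cycles, builds the four-cycle graph Γ^4, and compares the support of each component with the vertex set of Γ. The following is a minimal sketch under our own conventions (a quartic-time enumeration, fine for small graphs); the example graph is our own and is additionally non-join, as the definition requires.

```python
from collections import defaultdict
from itertools import combinations

def induced_four_cycles(adj):
    """All induced four-cycles of Gamma: a 4-subset spans an induced
    four-cycle iff it has exactly two non-edges and they are disjoint."""
    cycles = []
    for quad in combinations(sorted(adj), 4):
        non_edges = [frozenset((a, b)) for a, b in combinations(quad, 2)
                     if b not in adj[a]]
        if len(non_edges) == 2 and not (non_edges[0] & non_edges[1]):
            cycles.append((frozenset(quad), set(non_edges)))
    return cycles

def is_cfs(adj):
    """Gamma is CFS iff some component of the four-cycle graph Gamma^4
    has support equal to the whole vertex set of Gamma."""
    cycles = induced_four_cycles(adj)
    parent = list(range(len(cycles)))
    def find(i):                       # union-find over vertices of Gamma^4
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i, j in combinations(range(len(cycles)), 2):
        if cycles[i][1] & cycles[j][1]:    # share a non-adjacent pair
            parent[find(i)] = find(j)
    support = defaultdict(set)
    for i in range(len(cycles)):
        support[find(i)] |= cycles[i][0]
    return any(s == set(adj) for s in support.values())

# Three squares chained along non-adjacent pairs: a non-join CFS graph.
edges = [(1,2),(2,3),(3,4),(4,1),(2,5),(5,4),(4,6),(6,2),(5,7),(7,6),(6,8),(8,5)]
adj = defaultdict(set)
for a, b in edges:
    adj[a].add(b); adj[b].add(a)
print(is_cfs(adj))  # True
```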
We now prove the quadratic subgroup divergence of join-free subgroups in 𝒞ℱ𝒮 right-angled Coxeter groups.

Let Γ be a non-join connected 𝒞ℱ𝒮 graph with vertex set S. Let H be a finitely generated join-free subgroup of G_Γ. Then the subgroup divergence of H in G_Γ is exactly quadratic.

By the work of Cashen in <cit.> and Theorem D in <cit.>, the subgroup divergence of H in G_Γ is at least quadratic. Therefore, we only need to prove that the subgroup divergence of H in G_Γ is at most quadratic.

Since Γ is a 𝒞ℱ𝒮 graph, there is a component C of Γ^4 whose support is the entire vertex set of Γ. Let h be an arbitrary infinite order group element of H and let L=L(h,C) be the constant from Lemma <ref>. Since each cyclic subgroup of a CAT(0) group is undistorted, there is a positive integer L_1 such that

|h^k|_S≥ k/L_1-L_1

for each positive integer k. Let {σ^n_ρ} be the subspace divergence of H in the Cayley graph Σ^(1)_Γ. We will prove that the function σ^n_ρ(r) is bounded above by some quadratic function for each n≥ 2 and ρ∈ (0,1].

Choose a positive integer m∈[L(L+r),2L(L+r)] and a group element u=st, where (s,t) is a pair of non-adjacent vertices in a four-cycle Q_0 of Γ that corresponds to a vertex of C. Then there is a path α_0 outside the (m/L-L)–neighborhood of H connecting u^m and hu^m with length bounded above by Lm. It is obvious that the path α_0 also lies outside the r–neighborhood of H by the choice of m. Choose a positive integer k which lies between L_1(nr+16L(L+r)+L_1) and L_1(nr+16L(L+r)+L_1+1). Let α=α_0∪ hα_0∪ h^2α_0∪⋯∪ h^k-1α_0. Then α is a path outside the r–neighborhood of H connecting u^m and h^ku^m with length bounded above by kLm. By the choice of k and m, the length of α is bounded above by 2L_1L^2(L+r)(nr+16L(L+r)+L_1+1).

Since r≤ d_S(u^m, H)≤ 2m, there is a path γ_1 outside N_r(H) connecting u^m and some point x∈∂ N_r(H) such that the length of γ_1 is bounded above by 2m. By the choice of m, the length of γ_1 is also bounded above by 4L(L+r). Similarly, there is a path γ_2 outside N_r(H) connecting h^ku^m and some point y∈∂ N_r(H) such that the length of γ_2 is bounded above by 4L(L+r). Let α̅=γ_1∪α∪γ_2; then α̅ is a path outside N_r(H) connecting x and y, and the length of α̅ is bounded above by 2L_1L^2(L+r)(nr+16L(L+r)+L_1+1)+8L(L+r). Therefore, for each ρ∈ (0,1],

d_ρ r(x,y)≤ 2L_1L^2(L+r)(nr+16L(L+r)+L_1+1)+8L(L+r).

Also,

d_S(x,y) ≥ d_S(u^m, h^ku^m)-d_S(u^m, x)-d_S(h^ku^m,y) ≥ (|h^k|_S-4m)-4L(L+r)-4L(L+r) ≥ k/L_1-L_1-16L(L+r) ≥ (nr+16L(L+r))-16L(L+r) ≥ nr.

Thus, for each ρ∈ (0,1],

σ_ρ^n(r)≤ 2L_1L^2(L+r)(nr+16L(L+r)+L_1+1)+8L(L+r).

This implies that the subgroup divergence of H in G_Γ is at most quadratic. Therefore, the subgroup divergence of H in G_Γ is exactly quadratic.

§.§ Higher-degree polynomial subgroup divergence

We first review the concept of the divergence of geodesic spaces and finitely generated groups from <cit.>.

Let X be a geodesic space and x_0 a point in X. For each ρ∈ (0,1], we define a function δ_ρ:[0, ∞)→ [0, ∞) as follows: For each r, let δ_ρ(r)=sup d_ρ r(x_1,x_2), where the supremum is taken over all x_1, x_2 ∈ S_r(x_0) such that d_ρ r(x_1, x_2)<∞. The family of functions {δ_ρ} is the divergence of X with respect to the point x_0, denoted Div_X,x_0.

In <cit.>, Gersten showed that the divergence Div_X,x_0 is, up to the relation ∼, a quasi-isometry invariant which is independent of the chosen basepoint. The divergence of X, denoted Div_X, is then, up to the relation ∼, the divergence Div_X,x_0 for any point x_0 in X. If the space X has the geodesic extension property
(i.e. any finite geodesic segment can be extended to an infinite geodesic ray), then it is not hard to show that δ_ρ∼δ_1 for each ρ∈ (0,1]. In this case, we can consider the divergence of X to be the function δ_1. The divergence of a finitely generated group G, denoted Div(G), is the divergence of its Cayley graphs.

The following definition was introduced by Levcovitz in <cit.> to study divergence in Coxeter groups.

Let Γ be a finite, connected, simplicial graph. A pair of non-adjacent vertices (s, t) is rank 1 if s and t are not contained in a common induced four-cycle of Γ. Additionally, (s, t) is rank n if either every pair of non-adjacent vertices (s_1, s_2) with s_1, s_2 ∈ lk(s) is rank n-1 or every pair of non-adjacent vertices (t_1, t_2) with t_1, t_2 ∈ lk(t) is rank n-1.

Let Γ be a finite, connected, simplicial graph and n a positive integer. There is a polynomial f_n of degree n such that the following holds. Let (s,t) be a rank n pair of vertices of Γ and let H_1, H_2 be two hyperplanes of Σ_Γ of types s, t respectively such that their supports intersect. Let p be a vertex in the intersection of the two supports of H_1 and H_2. Then the length of any path from H_1 to H_2 which avoids the ball B(p,r) is bounded below by f_n(r).

The above proposition is a result from <cit.>. More precisely, the two hyperplanes H_1 and H_2 are degree n M-separated in the sense of Definition 6.1 in <cit.> (see the proof of Theorem 7.9 in <cit.>). Therefore, there is a polynomial g_n of degree n such that the length of any path from H_1 to H_2 which avoids the ball B(p,r) is bounded below by g_n(r) (see Theorem 6.2 in <cit.>). Since the number of rank n pairs of vertices in Γ is finite, we can choose a universal polynomial f_n of degree n as in the above proposition.

Let Γ be a finite, connected, simplicial graph. Suppose Γ contains a rank n pair (s, t). Then Div(G_Γ) is bounded below by a polynomial of degree n+1.

We now construct right-angled Coxeter groups with join-free subgroups of different subgroup divergences. More precisely, for each d ≥ 3 let Ω_d be the graph in Figure <ref>. We will construct non-virtually cyclic join-free subgroups whose subgroup divergence is a polynomial of degree m for each 2≤ m ≤ d. We remark that the graphs Γ_m in Figure <ref> were introduced by Dani-Thomas <cit.> to study divergence of right-angled Coxeter groups, and each graph Ω_d in Figure <ref> is a variation of the graph Γ_d. We now prepare some lemmas and propositions that help with the construction of the desired join-free subgroups in G_Ω_d.
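The rank of a pair is computable directly from the recursive definition above, read literally. The following is a minimal sketch (our own conventions and function names; memoisation keeps the recursion cheap):

```python
from functools import lru_cache
from itertools import combinations

def make_rank_checker(adj):
    """Decide whether a non-adjacent pair (s,t) is rank n, following the
    recursive definition above literally."""
    def in_induced_four_cycle(s, t):
        # a non-adjacent pair lies in an induced four-cycle iff it has two
        # non-adjacent common neighbours (the other diagonal of the cycle)
        common = sorted(adj[s] & adj[t])
        return any(b not in adj[a] for a, b in combinations(common, 2))

    @lru_cache(maxsize=None)
    def is_rank(s, t, n):
        if s == t or t in adj[s]:
            return False        # rank is only defined for non-adjacent pairs
        if n == 1:
            return not in_induced_four_cycle(s, t)
        def link_ok(v):         # every non-adjacent pair in lk(v) is rank n-1
            return all(is_rank(a, b, n - 1)
                       for a, b in combinations(sorted(adj[v]), 2)
                       if b not in adj[a])
        return link_ok(s) or link_ok(t)

    return is_rank

# In the pentagon every non-adjacent pair avoids four-cycles and the links
# recurse back to such pairs, so every pair is rank n for every n.
pentagon = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
is_rank = make_rank_checker(pentagon)
print(is_rank(0, 2, 1), is_rank(0, 2, 4))  # True True
```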
For each d≥ 3 let Ω_d be the graph in Figure <ref>. For each 3≤ m≤ d, all pairs (a_m,b_m), (a_m,c), (b_m,c) are rank m-1.

We first prove that for 3≤ℓ≤ k≤ d each pair of non-adjacent vertices in lk(a_k) is rank ℓ-2. We prove this by induction on ℓ. For ℓ=3 we observe that each pair of non-adjacent vertices in lk(a_k) (k≥ 3) is not contained in any induced four-cycle of Γ. Therefore, these pairs are all rank 1. Assume that there is 4≤ℓ_0≤ d-1 such that each pair of non-adjacent vertices in lk(a_k) (ℓ_0≤ k≤ d) is rank ℓ_0-2. We need to prove that each pair of non-adjacent vertices in lk(a_k) (ℓ_0+1≤ k≤ d) is rank ℓ_0-1. We observe that lk(a_k)={s_k-2, a_k-1, a_k+1} (ℓ_0+1≤ k≤ d-1). By the induction hypothesis, each pair of non-adjacent vertices in lk(a_k-1), lk(a_k+1) is rank ℓ_0-2. Therefore, all pairs of non-adjacent vertices (a_k-1,s_k-2), (a_k+1,s_k-2), (a_k-1,a_k+1) are rank ℓ_0-1. In other words, each pair of non-adjacent vertices in lk(a_k) (ℓ_0+1≤ k≤ d-1) is rank ℓ_0-1. For the case k=d we see that lk(a_k)=lk(a_d)={a_d-1,s_d-2}, and each pair of non-adjacent vertices in lk(a_d-1) is rank ℓ_0-2 by the induction hypothesis. Therefore, the pair of non-adjacent vertices in lk(a_d) is rank ℓ_0-1. Thus, for 3≤ℓ≤ k≤ d each pair of non-adjacent vertices in lk(a_k) is rank ℓ-2. In particular, each pair of non-adjacent vertices in lk(a_m) (3≤ m≤ d) is rank m-2. By a similar argument, each pair of non-adjacent vertices in lk(b_m) (3≤ m≤ d) is rank m-2. This implies that for each 3≤ m≤ d all pairs (a_m,b_m), (a_m,c), (b_m,c) are rank m-1.

The following proposition is a direct result of Proposition <ref> and Lemma <ref>.

For each d≥ 3 and 3≤ m ≤ d let Ω_d be the graph in Figure <ref> and H_d^m the subgroup of G_Ω_d generated by c, a_m, and b_m. Then the subgroup divergence of H_d^m in G_Ω_d is bounded below by a polynomial of degree m.

Let {σ^n_ρ} be the subspace divergence of H_d^m in the Cayley graph Σ^(1)_Ω_d. Let f_m-1 be the polynomial of degree m-1 from Proposition <ref>. We will prove that for each n≥ 8 and ρ∈ (0,1],

σ^n_ρ(r)≥ (r-1)f_m-1(ρ r)

for each r>1. Let u and v be an arbitrary pair of points in ∂ N_r(H_d^m) such that d_r(u, v)<∞ and d_S(u,v)≥ nr. Let γ be an arbitrary path that lies outside the ρ r–neighborhood of H_d^m connecting u and v. We will prove that the length of γ is bounded below by (r-1)f_m-1(ρ r).

Let γ_1 be a geodesic of length r in Σ^(1)_Ω_d connecting u and some point x in H_d^m. Let γ_2 be another geodesic of length r in Σ^(1)_Ω_d connecting v and some point y in H_d^m. Let α be a geodesic in Σ^(1)_Ω_d connecting x and y. Obviously, each edge of α is labelled by a_m, b_m, or c. This implies that two hyperplanes determined by two different edges of α do not intersect. Since d_S(x,y)≥ d_S(u,v)-2r≥ (n-2)r≥ 6r, there is a subpath β of α with length bounded below by r such that β∩(B(x,2r)∪ B(y,2r))=∅. Also, the lengths of γ_1 and γ_2 are both r. This implies that each hyperplane determined by an edge of β does not intersect γ_1∪γ_2. Therefore, each hyperplane determined by an edge of β must intersect γ.

Assume that the path β is the concatenation of edges e_1, e_2,⋯, e_k, k≥ r, and let H_i be the hyperplane determined by the edge e_i. Then for each i ∈{1,2,⋯, k-1} the (i+1)^th vertex p_i of β lies in the intersection of the supports of H_i and H_i+1. For each i∈{1,2,⋯, k} let x_i be a point in H_i∩γ. Let γ_i be the subpath of γ connecting x_i and x_i+1 for each i ∈{1,2,⋯, k-1}. Then each γ_i is a path from H_i to H_i+1 which avoids the ball B(p_i,ρ r). Therefore, the length of each γ_i is bounded below by f_m-1(ρ r) by Proposition <ref> and Lemma <ref>. This implies that the length of γ is bounded below by (k-1)f_m-1(ρ r). Also, k≥ r. Therefore, the length of γ is bounded below by (r-1)f_m-1(ρ r). Thus, σ^n_ρ(r)≥ (r-1)f_m-1(ρ r). Therefore, the subgroup divergence of H_d^m in G_Ω_d is bounded below by a polynomial of degree m.

The following lemma contributes to the proof of the upper bound of our subgroup divergences.

For each d≥ 3 and 3≤ m≤ d there is a polynomial g_m-1 of degree m-1 such that the following holds. Let α be the geodesic ray based at e labelled by a_1b_1a_1b_1⋯. Let β be the geodesic ray based at e labelled by b_m-1t_m-2b_m-1t_m-2⋯. Then for each r>0 there is a path outside N_r(H_d^m) connecting α(r) and β(r) with length bounded above by g_m-1(r).

Let Γ_m-1 be the subgraph of Ω_d as in Figure <ref>. Let S, S' be the vertex sets of Ω_d, Γ_m-1 respectively.
Obviously, α and β are two geodesic rays in the 1-skeleton Σ^(1)_Γ_m-1 of the Davis complex Σ_Γ_m-1. Since the Cayley graph Σ^(1)_Γ_m-1 has the geodesic extension property and the divergence of Σ^(1)_Γ_m-1 is a polynomial of degree m-1 (see Section 5 in <cit.>), there is a polynomial g_m-1 of degree m-1 such that for each r>0 there is a path γ_r in Σ^(1)_Γ_m-1 with length bounded above by g_m-1(r) connecting α(r) and β(r), such that γ_r avoids the ball B(e,r) in Σ^(1)_Γ_m-1. We now prove that each γ_r also lies outside N_r(H_d^m) by showing that its vertices lie outside N_r(H_d^m).

Let Φ:G_Ω_d→ G_Γ_m-1 be the group homomorphism induced by mapping each vertex of Γ_m-1 to itself and each vertex outside Γ_m-1 to e. It is not hard to check the following:
* The map Φ is a well-defined group homomorphism.
* Φ(u)=u for each u in G_Γ_m-1 and Φ(h)=e for each h in H_d^m.
* |Φ(g)|_S'≤ |g|_S for each g in G_Ω_d.
Each vertex u of γ_r is a group element of G_Γ_m-1 with |u|_S'≥ r. Assume that t=d_S(u,H_d^m). Then there is h in H_d^m such that t=d_S(h,u)=|h^-1u|_S. Therefore,

t=|h^-1u|_S≥ |Φ(h^-1u)|_S'=|u|_S'≥ r.

This implies that each vertex of γ_r lies outside N_r(H_d^m). Therefore, each path γ_r also lies outside N_r(H_d^m).

For each d≥ 3 and 3≤ m ≤ d let Ω_d be the graph in Figure <ref> and H_d^m the subgroup of G_Ω_d generated by c, a_m, and b_m. Then the subgroup divergence of H_d^m in G_Ω_d is bounded above by a polynomial of degree m.

Let α, β be the geodesic rays from Lemma <ref> and g_m-1 the polynomial of degree m-1 from that lemma. Let {σ^n_ρ} be the subspace divergence of H_d^m in the Cayley graph Σ^(1)_Ω_d. We will prove that the function σ^n_ρ(r) is bounded above by a polynomial of degree m for each n≥ 2 and ρ∈ (0,1].

For each r>1 there is a path γ_r outside N_r(H_d^m) connecting α(r) and β(r) with length bounded above by g_m-1(r). Since the generator b_m commutes with all edge labels of β, the two points β(r) and b_mβ(r) lie on the boundary of a 2-cell in Σ_Ω_d. Therefore, there is a path α_1 outside N_r(H_d^m) connecting β(r) and b_mβ(r) with length bounded above by 3. Similarly, the generator c commutes with all edge labels of α, so the two points b_mα(r) and (b_mc)α(r) lie on the boundary of a 2-cell in Σ_Ω_d. Therefore, there is a path α_2 outside N_r(H_d^m) connecting b_mα(r) and (b_mc)α(r) with length bounded above by 3. Also, b_mγ_r is a path outside N_r(H_d^m) connecting b_mα(r) and b_mβ(r) with length bounded above by g_m-1(r). Therefore, η_1=γ_r∪α_1∪ b_mγ_r∪α_2 is a path outside N_r(H_d^m) connecting α(r) and (b_mc)α(r) with length bounded above by 2g_m-1(r)+6.

For each n≥ 2, let k be an integer between nr and 2nr. Let

η=η_1∪(b_mc)η_1∪(b_mc)^2η_1 ∪⋯∪(b_mc)^k-1η_1.

Then η is a path outside N_r(H_d^m) connecting α(r) and (b_mc)^kα(r) with length bounded above by k(2g_m-1(r)+6). Therefore,

d_ρ r(α(r), (b_mc)^kα(r))≤ k(2g_m-1(r)+6)≤ 2nr(2g_m-1(r)+6).

Also,

d_S(α(r), (b_mc)^kα(r))≥ d_S(e,(b_mc)^k)-2r≥ 2k-2r≥ (2n-2)r≥ nr.

Therefore, σ^n_ρ(r)≤ 2nr(2g_m-1(r)+6). This implies that the subgroup divergence of H_d^m in G_Ω_d is bounded above by a polynomial of degree m.

By using techniques similar to those in Lemma <ref> and Proposition <ref>, we also obtain the following proposition.

For each d≥ 3 let Ω_d be the graph in Figure <ref> and H_d^2 the subgroup of G_Ω_d generated by c, s_1, and t_1. Then the subgroup divergence of H_d^2 in G_Ω_d is exactly a quadratic function.

We are now ready for the main theorem of this section.

For each d≥ 3 let Ω_d be the graph in Figure <ref>. Let H_d^2 be the subgroup of G_Ω_d generated by the set {c,s_1,t_1}.
For each 3≤ m ≤ d let H_d^m be the subgroup of G_Ω_d generated by the set {c,a_m,b_m}. Then for each 2≤ m≤ d the subgroup H_d^m is a join-free subgroup of G_Ω_d, H_d^m is isomorphic to the group F=⟨ s,t,u| s^2=t^2=u^2=e⟩, and the subgroup divergence of H_d^m in G_Ω_d is a polynomial of degree m.

We first consider the case 3≤ m ≤ d. It is not hard to see that each infinite order element h of H_d^m can be written as a reduced word x_1x_2⋯ x_p, where each x_i belongs to the set {a_m,b_m,c} and x_i, x_i+1 are two different elements of {a_m,b_m,c}. Therefore, H_d^m is a join-busting subgroup. This implies that H_d^m is a join-free subgroup. By Propositions <ref> and <ref>, the subgroup divergence of H_d^m in G_Ω_d is a polynomial of degree m. By a similar argument the subgroup H_d^2 is also join-busting. Therefore, H_d^2 is also a join-free subgroup. The fact that the subgroup divergence of H_d^2 in G_Ω_d is a quadratic function can be seen from Proposition <ref>. It is also obvious that each of these special subgroups is isomorphic to the group F=⟨ s,t,u| s^2=t^2=u^2=e⟩.

§ APPENDIX A. FINITE HEIGHT SUBGROUPS

We note that the proof of the strong quasiconvexity of finitely generated finite height subgroups in one-ended right-angled Artin groups was already given implicitly by the author in <cit.>. We now generalize a part of that work in <cit.> to provide necessary conditions for finite height subgroups of groups satisfying certain conditions (see Proposition <ref> and Lemma <ref>). After that, we give an explicit proof of the fact that finitely generated finite height subgroups in one-ended right-angled Artin groups are always strongly quasiconvex. Finally, we prove that finite height parabolic subgroups in right-angled Coxeter groups are also strongly quasiconvex.

In the following proposition, we provide a necessary condition for infinite index finite height subgroups of groups satisfying certain conditions.

Let G be a group and suppose there is a collection 𝒜 of subgroups of G that satisfies the following conditions:
* For each A in 𝒜 and g in G the conjugate g^-1Ag also belongs to 𝒜 and there is a finite sequence

A=A_0, A_1, ⋯, A_n=g^-1Ag

of subgroups in 𝒜 such that A_j-1∩ A_j is infinite for each j;
* For each A in 𝒜 each finite height subgroup of A must be finite or have finite index in A.
Then for each infinite index finite height subgroup H of G the intersection H∩ A must be finite for all A in 𝒜.

We assume for a contradiction that H∩ A_0 is infinite for some A_0∈𝒜. We claim that H∩ g^-1A_0g has finite index in g^-1A_0g for all g∈ G. Since H has finite height in G, the subgroup H∩ A_0 has finite height in A_0. Therefore, H∩ A_0 has finite index in A_0 by the hypothesis and our assumption. By the hypothesis, there is a finite sequence A_0=A_0,A_1,⋯, A_m=g^-1A_0g of subgroups in 𝒜 such that A_i-1∩ A_i is infinite for each i∈{1,2,⋯,m}. Since H∩ A_0 has finite index in A_0 by the above argument and A_0∩ A_1 is infinite, H∩ A_1 is also infinite. By a similar argument as above, H∩ A_1 has finite index in A_1. Repeating this process, we see that H∩ g^-1A_0g has finite index in g^-1A_0g. In other words, gHg^-1∩ A_0 has finite index in A_0 for all g∈ G.

Since H has finite height in G, there is a number n such that the intersection of any (n+1) essentially distinct conjugates of H is finite. Also, H has infinite index in G. Therefore, there are n+1 distinct elements g_1, g_2,⋯, g_n+1 such that g_iH≠ g_j H for each i≠ j. Also, g_i H g_i^-1∩ A_0 has finite index in A_0 for each i. Then (⋂_i g_i H g_i^-1) ∩ A_0 also has finite index in A_0.
In particular, ⋂_i g_i H g_i^-1 is infinite, which is a contradiction. Therefore, the intersection H∩ A must be finite for all A in 𝒜. In the following lemma, we study finite height subgroups in certain direct products of groups. Let G_1 and G_2 be two groups such that each of them contains an infinite order element. Let H be a finite height subgroup of G=G_1× G_2 and suppose that H contains an infinite order element. Then H must have finite index in G. In particular, if H is an almost malnormal subgroup and H contains an infinite order element, then H=G. It suffices to show that H∩ G_1 has finite index in G_1 and H∩ G_2 has finite index in G_2. Let h be an infinite order element of H. Then h=g_1g_2, where g_1 is a group element of G_1, g_2 is a group element of G_2, and at least one of them (say g_1) has infinite order. We claim that g_1^n_0 is an element of H for some n_0>0. Otherwise, (g_1^nH)_n≥ 0 is an infinite sequence of distinct left cosets and ⋂ g_1^nHg_1^-n contains the infinite order element h (indeed, g_1^-nhg_1^n=h because g_1 and g_2 commute), which contradicts the fact that H has finite height. We now claim that H∩ G_2 has finite index in G_2. Otherwise there is an infinite sequence (k_n)_n≥ 1 of group elements in G_2 such that (k_nH)_n≥ 1 is an infinite sequence of distinct left cosets. Also, the subgroup k_nHk_n^-1 contains the infinite order element g_1^n_0 for all n≥ 1, since k_n∈ G_2 commutes with g_1^n_0∈ G_1. This contradicts the fact that H has finite height. Therefore, the subgroup H∩ G_2 has finite index in G_2. Since G_2 contains an infinite order element and H∩ G_2 has finite index in G_2, H contains an infinite order element of G_2. By a similar argument, H∩ G_1 has finite index in G_1. Therefore, H must have finite index in G. In the following two propositions, we study finite height subgroups in right-angled Artin groups and right-angled Coxeter groups. Let A_Γ be a one-ended right-angled Artin group and H a finitely generated subgroup of A_Γ. Then H is strongly quasiconvex if and only if H has finite height. Since all strongly quasiconvex subgroups have finite height (see Theorem 1.2 in <cit.>), we only need to prove the opposite direction. We assume now that H has finite height and H is not the trivial subgroup. Let 𝒜 be the collection of all parabolic subgroups of A_Γ induced by some join subgraph. It is clear from the construction that 𝒜 satisfies the first part of Condition (1) of Proposition <ref>. The proof of the second part of this condition is almost identical to the proof of Lemma 8.17 in <cit.>. By Lemma <ref> the collection 𝒜 also satisfies Condition (2) of Proposition <ref>. If H has finite index in A_Γ, H is strongly quasiconvex trivially. Otherwise, by Proposition <ref>, H∩ A must be finite, hence trivial (A_Γ is torsion-free), for all A in 𝒜. Therefore, H is strongly quasiconvex by Corollary 1.17 in <cit.> or Theorem B.1 in <cit.>. Let G_Γ be a right-angled Coxeter group and H a parabolic subgroup of G_Γ. Then H is strongly quasiconvex if and only if H has finite height. We note that if a subgroup is strongly quasiconvex, then all its conjugates are also strongly quasiconvex. Similarly, if a subgroup has finite height, then all its conjugates also have finite height. Therefore, we can assume that H is a special subgroup. Let Λ be the induced subgraph of Γ that defines H. Since all strongly quasiconvex subgroups have finite height (see Theorem 1.2 in <cit.>), we only need to prove the opposite direction. We assume now that H has finite height. Assume for a contradiction that H is not strongly quasiconvex.
By Proposition 4.9 in <cit.> or Theorem 7.5 in <cit.> there is an induced 4–cycle σ such that Λ∩σ contains two non-adjacent vertices, call them a_1 and a_2, and σ-(σ∩Λ) contains at least one vertex, call it b_1. Let b_2 be the vertex of σ opposite to b_1, and let g=b_1b_2. Then it is clear that g^mH≠ g^nH for m≠ n. Also, ⋂ g^nHg^-n contains the infinite order element a_1a_2, since both a_1 and a_2 are adjacent to b_1 and b_2 in σ and hence commute with g. This contradicts the assumption that H has finite height. Therefore, H is a strongly quasiconvex subgroup.
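For reference, the degree count used in the proof of the Proposition above can be spelled out explicitly (our addition; C denotes any constant such that 2g_m-1(r)+6 ≤ C r^(m-1) for r ≥ 1, which exists because g_m-1 has degree m-1):

\sigma^{n}_{\rho}(r) \;\le\; 2nr\bigl(2g_{m-1}(r)+6\bigr) \;\le\; 2nr\cdot C\,r^{m-1} \;=\; 2C\,n\,r^{m}, \qquad r\ge 1,

so for each fixed n the bound is indeed a polynomial of degree m in r.
| http://arxiv.org/abs/1703.09032v2 | {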
"authors": [
"Hung Cong Tran"
],
"categories": [
"math.GR"
],
"primary_category": "math.GR",
"published": "20170327123456",
"title": "Malnormality and join-free subgroups in right-angled Coxeter groups"
} |
[email protected] ^1CEMS, RIKEN, Saitama 351-0198, Japan ^2Physics Department, The University of Michigan, Ann Arbor, Michigan 48109-1040, USA Quantum systems are affected by interactions with their environments, causing decoherence through two processes: pure dephasing and energy relaxation. For quantum information processing it is important to increase the coherence time of Josephson qubits and other artificial two-level atoms. We show theoretically that if the coupling between these qubits and a cavity field is longitudinal and in the ultrastrong-coupling regime, the system is strongly protected against relaxation. Vice versa, if the coupling is transverse and in the ultrastrong-coupling regime, the system is protected against pure dephasing. Taking advantage of the relaxation suppression, we show that it is possible to enhance their coherence time and use these qubits as quantum memories. Indeed, to preserve the coherence against pure dephasing, we prove that it is possible to apply dynamical decoupling. We also use an auxiliary atomic level to store and retrieve quantum information. Long-lasting Quantum Memories: Extending the Coherence Time of Superconducting Artificial Atoms in the Ultrastrong-Coupling Regime Roberto Stassi^1 and Franco Nori^1,2 December 30, 2023 ================================================================================================================== § INTRODUCTION Quantum memories are essential elements to implement quantum logic, since the information must be preserved between gate operations. Different approaches to quantum memories are being studied, including NV centers in diamond, atomic gases, and single trapped atoms <cit.>. Superconducting circuits <cit.> are at the forefront in the race to realize the first quantum computers, because they exhibit flexibility, controllability and scalability. For this reason, quantum memories that can be easily integrated into superconducting circuits are also required. The realization of a quantum memory device, as well as of a quantum computer, is challenging because quantum states are fragile: the interaction with the environment causes decoherence. There are external sources of decoherence, for example local electromagnetic signals, as well as intrinsic ones. In circuit-QED, the main intrinsic sources of decoherence are fluctuations in the critical currents, charges, and magnetic fluxes. Superconducting circuits have made it possible to achieve the ultrastrong coupling regime (USC) <cit.>, where the light-matter interaction becomes comparable to the atomic and cavity transition frequencies (ω_q and ω_c, respectively), reaching a coupling of λ=1.34 ω_c <cit.>. After a critical value of the coupling, λ>λ_c, with λ_c=√(ω_q ω_c)/2, the Dicke model predicts that a system of N two-level atoms interacting with a single cavity mode, in the thermodynamic limit (N→∞) and at zero temperature (T=0), is characterized by a spontaneous polarization of the atoms and a spontaneous coherence of the cavity field. This situation can also be encountered in the finite-N case <cit.>, in the limit of very strong coupling. Here, we consider a single two-level atom, N=1, interacting with a cavity mode in the USC regime. First, we derive a general master equation, valid for a large variety of hybrid quantum systems <cit.> in the weak, strong, ultrastrong, and deep strong coupling regimes.
Considering the two lowest eigenstates of our system, we show theoretically that if the coupling between the two-level atom and the cavity field is longitudinal and in the USC regime, the system is strongly protected against relaxation. Vice versa, we prove that if the coupling is transverse and in the USC regime, then the system is protected against pure dephasing. In the case of superconducting artificial atoms whose relaxation time is comparable to the pure dephasing time, taking advantage of this relaxation suppression in the USC regime, we prove that it is possible to apply the dynamical decoupling procedure <cit.> to have full protection against decoherence. With the help of an auxiliary non-interacting atomic level, providing a suitable drive to the system, we show that a flying qubit that enters the cavity can be stored in our quantum memory device and retrieved afterwards. Moreover, we briefly analyze the case of artificial atoms transversally coupled to a cavity mode <cit.>. In this treatment we neglect the diamagnetic term A^2, which prevents the appearance of a superradiant phase, as the conditions of the no-go theorem can be overcome in circuit-QED <cit.>. § MODEL The Hamiltonian of a two-level system interacting with a cavity mode is (ħ=1) Ĥ=ω_c â^†â+ε/2σ̂_z+Δ/2σ̂_x+λX̂σ̂_x, with â (â^†) the annihilation (creation) operator of the cavity mode with frequency ω_c, X̂=â+â^†, and σ̂_j the Pauli matrices, with j={x,y,z}. For a flux qubit, ε and Δ correspond to the energy bias and the tunnel splitting between the persistent current states {| ↓ ⟩,| ↑ ⟩} <cit.>. We do not use the rotating wave approximation in the interaction term because the counterrotating terms are fundamental in the USC regime. For ε=0, the coupling is longitudinal and the two lowest eigenstates {|0̃⟩,|1̃⟩} are exactly the polarized states | P_-⟩=|-⟩|+α⟩ and | P_+⟩=| +⟩|-α⟩, where |±⟩=1/√(2)( | ↑ ⟩ ±| ↓ ⟩), and |±α⟩=exp[±α(â^†-â)]| 0⟩ are displaced Fock states <cit.>, with α=λ/ω_c. A proof of this is given in Appendix <ref>. In the subspace spanned by the polarized states P={| P_-⟩, | P_+⟩}, Ĥ can be written, for ε < ω_c, as Ĥ_P=Δ/2σ̂_z+ε_ R/2σ̂_x, with ε_ R=ε⟨+α|-α⟩. Equation (<ref>) describes a two-state system, see inset in Fig. <ref>, characterized by a double-well potential with detuning parameter Δ and depth proportional to the overlap of the two displaced states. The kinetic contribution (ε_ R/2)σ̂_x mixes the states P associated with the two minima of the potential wells. For Δ=0, the coupling is transverse and the two lowest eigenstates {|0̃⟩,|1̃⟩} converge, for λ>λ_c, to the entangled states | E_-⟩=(| P_+⟩-| P_-⟩)/√(2) and | E_+⟩=(| P_+⟩+| P_-⟩)/√(2). In this case, as ⟨+α|-α⟩=exp{-2|λ/ω_c|^2}, the energy difference between the eigenstates, ω_1̃-ω_0̃=ε_ R, converges exponentially to zero with λ (vacuum quasi-degeneracy), see Fig. <ref> and Ref. <cit.>. The system described by Ĥ does not conserve the number of excitations, 𝒩=â^†â+| e⟩⟨ e|, with | e⟩ being the excited state of the two-level system, but for Δ=0 it has ℤ_2 symmetry and conserves the parity of the number of excitations <cit.>. For Δ≠ 0, the parity symmetry is broken <cit.>. As ε_R converges exponentially to zero with λ, the first two eigenstates of Ĥ converge exponentially to the polarized states P, and the energy splitting between the first two eigenstates converges to Δ, see Eq. (<ref>) and Fig. <ref>. For Δ=0, it is also possible to break the ℤ_2 parity symmetry, and obtain the polarized states P, by applying to the cavity the constant field -Λ/2X̂. In this case, the energy splitting between the first two eigenstates is a function of the coupling λ; indeed, ω_1̃-ω_0̃=2Λλ/ω_c, see Fig. <ref> and Appendix <ref>.
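These limits are easy to verify numerically. The following minimal sketch (our addition, not part of the original text; the truncation dimension n_max is its only free parameter) builds Ĥ on a truncated Fock basis with NumPy and checks that for ε=0 the splitting of the two lowest levels equals Δ, while for Δ=0 it approximately follows ε exp{-2(λ/ω_c)^2}:

import numpy as np

def rabi_hamiltonian(wc, eps, delta, lam, n_max=60):
    # H = wc a†a + (eps/2) σz + (delta/2) σx + lam (a + a†) σx, with ħ = 1,
    # on the truncated space (Fock ⊗ qubit).
    a = np.diag(np.sqrt(np.arange(1, n_max)), k=1)   # annihilation operator
    sx = np.array([[0.0, 1.0], [1.0, 0.0]])
    sz = np.array([[1.0, 0.0], [0.0, -1.0]])
    Ic, I2 = np.eye(n_max), np.eye(2)
    return (wc * np.kron(a.T @ a, I2) + 0.5 * eps * np.kron(Ic, sz)
            + 0.5 * delta * np.kron(Ic, sx) + lam * np.kron(a + a.T, sx))

wc = 1.0
for lam in (0.5, 1.0, 1.3):
    E_long = np.linalg.eigvalsh(rabi_hamiltonian(wc, 0.0, 0.2, lam))
    E_tran = np.linalg.eigvalsh(rabi_hamiltonian(wc, 0.2, 0.0, lam))
    print(lam,
          E_long[1] - E_long[0],                  # longitudinal: -> Δ = 0.2
          E_tran[1] - E_tran[0],                  # transverse: -> ε_R
          0.2 * np.exp(-2 * (lam / wc) ** 2))     # analytic ε_R

For λ/ω_c=1.3 the longitudinal splitting stays at Δ, while the transverse splitting has collapsed to ≈ 0.2 exp(-3.38) ≈ 0.007 ω_c, illustrating the vacuum quasi-degeneracy.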
§ MASTER EQUATION AND COHERENCE RATE The dynamics of a generic open quantum system S, with Hamiltonian Ĥ_ S and eigenstates | m⟩, is affected by the interaction with its environment B, described by a bath of harmonic oscillators. Relaxation and pure dephasing must be studied in the basis that diagonalizes Ĥ_ S. The fluctuations that induce decoherence originate from the different channels that connect the system to its environment. For a single two-level system strongly coupled to a cavity field these channels are 𝒮={σ̂_x, σ̂_y, σ̂_z, X̂, Ŷ}, with Ŷ=i(â-â^†). In the interaction picture, the operators Ŝ^(k)∈𝒮 can be written as Ŝ^(k)(t)=Ŝ^(k)_+(t)+Ŝ^(k)_-(t)+Ŝ^(k)_z, with Ŝ^(k)_-(t)=∑_m,n>m s^(k)_mn | m⟩⟨ n|e^-iω_nmt, Ŝ^(k)_z=∑_m s^(k)_mm | m⟩⟨ m|, and Ŝ^(k)_+=(Ŝ^(k)_-)^†; this is in analogy with σ̂_+, σ̂_-, and σ̂_z for a two-state system <cit.>, while s^(k)_mn=⟨ m|Ŝ^(k)| n⟩ and ω_mn=ω_m-ω_n. The interaction of the environment with Ŝ^(k)_z affects the eigenvalues of the system, and involves the randomization of the relative phase between the system eigenstates. The interaction of the environment with Ŝ^(k)_x=Ŝ^(k)_++Ŝ^(k)_- induces transitions between different eigenstates. With this formulation, we have derived a master equation in the Born-Markov approximation valid for generic hybrid quantum systems <cit.>, at T=0, ρ̇̂̇ = -i[Ĥ_ S,ρ̂]+∑_k∑_m, n>mΓ^(k)_mn𝒟[| m⟩⟨ n|]ρ̂ +∑_kγ^(k)_φ𝒟[Ŝ^(k)_z ]ρ̂, where 𝒟[Ô]ρ̂=(2Ôρ̂ Ô^†-Ô^†Ôρ̂-ρ̂ Ô^†Ô)/2 is the Lindblad superoperator. The sum over k takes into account all the channels Ŝ^(k)∈𝒮. Γ^(k)_mn=γ^(k)(ω_mn)| s^(k)_mn|^2 are the transition rates from level n to level m, and the γ^(k)(ω_mn) are proportional to the noise spectra. Expanding the last term in the above master equation allows one to prove that the pure dephasing rate is γ^(k)_φ| s^(k)_mm-s^(k)_nn|^2/4. Using only the lowest two eigenstates of Ĥ_ S, the master equation can be written in the form ρ̇̂̇=-i[Ĥ,ρ̂]+∑_kΓ^(k)𝒟[σ̂_-]ρ̂+γ^(k)_φ𝒟[Ŝ^(k)_z ]ρ̂, where σ̂_- is the lowering operator. In the weak- or strong-coupling regime, it corresponds to the classical master equation in the Lindblad form for a two-state system. For a complete derivation of the master equation, see Appendix <ref>. § ANALYSIS As shown above, if the coupling is transverse, in the USC regime the two lowest eigenstates converge to the entangled states E={| E_-⟩,| E_+⟩} as a function of the coupling λ. If the coupling is longitudinal, the two lowest eigenstates are the polarized states P. Moreover, we proved that the relaxation of the population is proportional to | s^(k)_mn|^2 and the pure dephasing to | s^(k)_mm-s^(k)_nn|^2/4; we call these two quantities sensitivity to longitudinal relaxation and to pure dephasing, respectively. In Table <ref> we report the values of S_ R(C)=|⟨ C_+|Ŝ| C_-⟩| and S_ D(C)=|⟨ C_+|Ŝ| C_+⟩-⟨ C_-|Ŝ| C_-⟩|/2, calculated for every channel Ŝ in 𝒮, where C is E or P. As ⟨+α|-α⟩ converges exponentially to zero with λ, see Eq. (<ref>), if the coupling is longitudinal, there is protection against relaxation; if the coupling is transverse, there is protection against pure dephasing. The suppression of the relaxation can be easily understood by considering that increasing the coupling λ increases the displacement and the depth of the two minima associated with the double well represented in the inset of Fig. <ref>.
The sensitivity to the relaxation | s^(k)_mn|^2 is connected to Fermi's golden rule for first-order transitions. Considering the polarized states P, the suppression of the longitudinal relaxation rates holds for every order. This is because every other intermediate path between the P states, through higher states, always involves atomic and photonic coherent states with opposite signs. When the coupling is transverse, the suppression of the pure dephasing is given by the presence of the photonic coherent states |±α⟩, which suppress the noise coming from the σ̂_z and σ̂_y channels <cit.>, while for the other channels the system is in a “sweet spot”. For this reason, this suppression holds only to first order. Furthermore, approaching the vacuum degeneracy, fluctuations in Δ become relevant and they drive the entangled states E to the polarized states P (spontaneous breaking of the parity symmetry <cit.>). This will be further explained in Section <ref>. § DYNAMICAL DECOUPLING The dynamical decoupling (DD) method <cit.> consists of a sequence of π-pulses that average away the effect of the environment on a two-state system. To protect from pure dephasing, the DD method uses a sequence of σ̂_x or σ̂_y pulses. If we rotate the σ̂_z and σ̂_y operators in the basis given by the states P, we find that R̂ σ̂_zR̂^-1=β^-1σ̂_x and R̂ σ̂_yR̂^-1=β^-1σ̂_y, with β^-1=⟨+α|-α⟩. Therefore, σ̂_z and σ̂_y pulses in the bare atom basis correspond to σ̂_x and σ̂_y pulses attenuated by the β^-1 factor in the basis given by the states P. To compensate for the reduction, the amplitude of the pulses must be multiplied by a factor β. When the direction of the coupling is not exactly longitudinal, the convergence of the lowest eigenstates to the polarized states P is exponential with respect to the coupling; thus, the σ̂_z operator in the free-atom basis is not exactly the σ̂_x operator in the reduced eigenbasis of Ĥ. Instead, there are no problems with the σ̂_y operator of the bare atom, because it corresponds exactly to β^-1σ̂_y in the reduced dressed basis. § PROPOSAL §.§ T_1<T_φ or T_1∼ T_φ This proposal is applicable to superconducting qubits whose relaxation time T_1 is lower than, or comparable to, the pure dephasing time T_φ, e.g., flux qubits. If we consider the polarized states P as a quantum memory device and prepare it in an arbitrary superposition, we can preserve coherence. Indeed, our quantum memory device is naturally protected from population relaxation. To protect it from pure dephasing, we apply DD <cit.>. We consider Ĥ in Eq. (<ref>) with Δ≠ 0. In order to have the second excited state far apart in energy, we need |Δ| <0.5 ω_c. The longitudinal relaxation suppression behaves as |⟨+α|-α⟩|^2=exp{-4N(λ/ω_c)^2}; increasing the coupling λ or the number N of atoms increases exponentially the decay time of the longitudinal relaxation. However, the contribution of the X̂ channel to pure dephasing increases quadratically with λ/ω_c. This does not affect the coherence time of our system; indeed, superconducting harmonic oscillators generally have higher quality factors than superconducting qubits. It is convenient to write Ĥ in Eq. (<ref>) in the basis that diagonalizes the atomic two-level system {| g⟩,| e⟩}, Ĥ'=ω_c â^†â+ω_q/2σ̂_z+λX̂(cosθ σ̂_x+sinθ σ̂_z), with θ=arctan(Δ/ε) and ω_q=√(ε^2+Δ^2). Using Eq. (<ref>), in Fig. <ref>(a) we show the numerically calculated sensitivity, max{| s^(k)_0̃1̃|^2:Ŝ^(k)∈𝒮}, to the longitudinal relaxation as a function of the normalized coupling λ/ω_c and of the angle θ. For large values of λ/ω_c and for θ≠ 0, there is a strong suppression of the relaxation rate: it is maximum when the coupling is entirely longitudinal, θ =π/2.
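This sensitivity map can be reproduced with a few lines of NumPy (again our own sketch, not the paper's code; n_max is a truncation parameter): diagonalize Ĥ' and evaluate |⟨0̃|Ŝ|1̃⟩|^2 for each channel Ŝ∈𝒮.

import numpy as np

n_max = 60
a = np.diag(np.sqrt(np.arange(1, n_max)), k=1)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
Ic, I2 = np.eye(n_max), np.eye(2)

def max_relaxation_sensitivity(lam, theta, wc=1.0, wq=0.2):
    # H' = wc a†a + (wq/2) σz + lam (a + a†)(cosθ σx + sinθ σz)
    H = (wc * np.kron(a.T @ a, I2) + 0.5 * wq * np.kron(Ic, sz)
         + lam * np.kron(a + a.T, np.cos(theta) * sx + np.sin(theta) * sz))
    _, v = np.linalg.eigh(H)
    g, e = v[:, 0], v[:, 1]                       # the eigenstates |0̃⟩ and |1̃⟩
    channels = ([np.kron(Ic, s) for s in (sx, sy, sz)]
                + [np.kron(a + a.T, I2), np.kron(1j * (a - a.T), I2)])
    return max(abs(np.vdot(e, S @ g)) ** 2 for S in channels)

print(max_relaxation_sensitivity(1.3, np.pi / 2))  # longitudinal: ≈ 1e-3
print(max_relaxation_sensitivity(1.3, 0.0))        # transverse: no suppression

The first call reproduces the ≈10^-3 suppression factor quoted next.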
For λ/ω_c=1.3, θ=π/2 and ω_q=0.2 ω_c, the longitudinal relaxation rate is reduced by a factor of ≈ 10^-3, while the contribution of the cavity field to the pure dephasing rate increases only by a factor of 6.76. Moreover, for a two-state system affected by 1/f noise, DD can achieve up to a 10^3-fold enhancement of the pure dephasing time T_φ, applying 1000 equally spaced π-pulses (see Appendix <ref>). Using this proposal with these parameters, it is possible to increase the coherence time of a superconducting two-level atom up to 10^3 times. §.§ T_1≫ T_φ Figure <ref>(b) shows the numerically calculated maximum sensitivity to pure dephasing, max{| s^(k)_1̃1̃-s^(k)_0̃0̃|^2/4:Ŝ^(k)∈𝒮}, as a function of λ/ω_c and θ. For large values of λ/ω_c, the strong suppression of the pure dephasing rate is confined to a region (dark blue) that exponentially converges to zero for increasing λ; only in this region do the entangled states exist. In Fig. <ref>(b), for Δ=0 (θ=0), it is clear that, for a large value of the coupling λ, fluctuations in Δ (or in θ) drive the entangled states E (dark blue region) to the polarized states P (light blue region). Superconducting qubits whose relaxation time T_1 is much greater than the pure dephasing time T_φ, e.g., the fluxonium <cit.>, can take advantage of the suppression of the pure dephasing. For λ/ω_c=0.8, θ=0 and ω_q=0.5 ω_c, the pure dephasing rate is reduced by a factor of ≈ 7× 10^-2, while the contribution of the cavity field to the longitudinal relaxation rate increases only by a factor of 2.47. § PROTOCOL Now we propose a protocol to write in and read out the quantum information encoded in a Fock state |ψ⟩=a| 0⟩ + b| 1⟩. We consider an auxiliary atomic state | s⟩ decoupled from the cavity field, and with higher energy ω_ s with respect to the two-level system {| g⟩,| e⟩} <cit.>. Figure <ref>(a) shows the eigenvalues of the Hamiltonian of the total system, Ĥ_ tot=Ĥ'+ω_ s| s⟩⟨ s|, versus the coupling λ/ω_c. The blue solid curves concern Ĥ'; the red dashed equally spaced lines concern the auxiliary level | s⟩, and these count the number of photons in the cavity <cit.>. We prepare the atom in the state | s⟩ by sending a π-pulse resonant with the transition frequency between the ground | P_-⟩ and | s, 0⟩ states <cit.>. When the qubit with an unknown quantum state |ψ⟩ enters the cavity, the state becomes |Ψ_ s⟩=| s⟩⊗(a| 0⟩+b| 1⟩)=a| s, 0⟩+b| s, 1⟩. Immediately after, we send two π-pulses: p_1 resonant with the transition | s,1⟩→| P_- ⟩ and p_2 resonant with the transition | s, 0 ⟩→| P_+⟩. Hereafter, we apply DD to counteract transverse relaxation (pure dephasing); meanwhile the quantum memory device is naturally protected from the longitudinal relaxation. To restore the quantum information we reverse the storage process. Figure <ref>(b) shows the time evolution of the fidelity ℱ between the initial state |ψ⟩ and the states |Ψ_ s(t)⟩=a_ s| s,0⟩+b_ s| s,1⟩ and |Ψ_ P(t)⟩=a_+| P_+⟩+b_-| P_-⟩ in the rotating frame; this is calculated using the above master equation for λ=1.3 ω_c. The standard decay rates are assumed to be the same for every channel of the two-level artificial atom {| g⟩,| e⟩}, γ^(k)=10^-3ω_c. For the pure dephasing rates, we choose γ^(k)_φ=10^-3γ^(k), since we apply DD.
The pulses are described by Ĥ_ p_1=ϵ(t)cos(ω_mn t)(σ̂_ gs+σ̂_ gs^†)/⟨ m|σ̂_ gs| n⟩ and Ĥ_ p_2=ϵ(t)cos(ω_mn t)(σ̂_ es+σ̂_ es^†)/⟨ m|σ̂_ es| n⟩, where σ̂_ gs=| g ⟩⟨ s|, σ̂_ es=| e ⟩⟨ s|, and ϵ(t) is a Gaussian envelope. At time t=0, the states | s,0⟩ and | s,1⟩ are prepared, so that a_ s^2=0.8 and b_ s^2=0.2. As shown in Fig. <ref>(b), at times γ_c t_1=7× 10^-4 and γ_c t_2=14× 10^-4, we apply the pulses p_1 and p_2, respectively. Now the populations and the coherence are completely transferred to the polarized states P, and the qubit is stored. Later, at γ_c t_3=2.7× 10^-2 and γ_c t_4=2.76× 10^-2, two pulses equal to the previous ones restore the qubit |ψ⟩ into the cavity. As a comparison, we have calculated the fidelity (black curve) between |ψ⟩ and the state of a two-level artificial atom prepared at t=0 in the same superposition as |ψ⟩, but interacting ordinarily with the cavity field, λ/ω_c≪ 0.1, and without DD (free decay). This fidelity converges to its minimum value much faster than the one calculated for the polarized states, which is not significantly affected by decoherence in the temporal range shown in Fig. <ref>(b). § CONCLUSIONS We propose a quantum memory device composed of the lowest two eigenstates of a system made of a two-level atom and a cavity mode interacting in the USC regime when the parity symmetry of the Rabi Hamiltonian is broken. Making use of an auxiliary non-interacting level, we store and retrieve the quantum information. For the parameters adopted in the simulation, it is possible to improve the coherence time of a superconducting two-state atom up to 10^3 times. For instance, the coherence time of a flux qubit longitudinally coupled to a cavity mode <cit.>, at the optimal point, can be extended from 10 μs to over 0.01 seconds <cit.>. Instead, in the case of unbroken parity symmetry, the coherence time of a fluxonium, with applied magnetic flux Φ_ ext=0.5 Φ_0, inductively coupled to a cavity mode, can be extended from 14 μs to 0.2 ms <cit.>. This is a remarkable result for many groups working with superconducting circuits. Similar approaches can be applied to other types of qubits. § POLARIZED STATES {| P_-⟩, | P_+⟩} In this Appendix, we prove that when the coupling between a two-level system and a cavity mode is longitudinal, the two lowest eigenstates are the polarized states | P_-⟩= |-⟩|+α⟩ and | P_+⟩= |+⟩|-α⟩, where |±⟩=1/√(2)( |↑ ⟩± |↓ ⟩), {| ↓ ⟩,| ↑ ⟩} are, for example, persistent current states in the case of a flux qubit, and |±α⟩=exp[±α(â^†-â)]| 0⟩ are displaced Fock states, with α=λ/ω_c. §.§ Case: Δ≠ 0 Let us start with the Hamiltonian of a two-level system interacting longitudinally with a cavity mode Ĥ=ω_c â^†â+Δ/2σ̂_x+λX̂σ̂_x. Replacing σ̂_x by its eigenvalue m=±1, we can write Ĥ=ω_c â^†â+m(Δ/2+λX̂). The transformation â=b̂- mλ /ω_c, which preserves the commutation relation between â and â^†, [b̂,b̂^†]=1, diagonalizes Ĥ: Ĥ=ω_c b̂^†b̂-λ^2 m^2/ω_c+Δ/2m. This is the Hamiltonian of a displaced harmonic oscillator. Applying the operator b̂=â +mα, with α=λ/ω_c, to the ground state | 0_m⟩ of the oscillator given by Eq. (<ref>) gives â| 0_m⟩=-mα| 0_m⟩. We now see that | -mα⟩=| 0_m⟩ is a coherent state with eigenenergy ω_m=-λ^2m^2/ω_c+mΔ/2. Therefore, the two lowest eigenstates of the Hamiltonian Ĥ in Eq. (<ref>) are the two states | P_-⟩=| -⟩|+α⟩ and | P_+⟩=| +⟩|-α⟩, with eigenvalues ω_±=-λ^2/ω_c±Δ/2 (recall that m^2=1). The energy splitting between the eigenstates | P_-⟩ and | P_+⟩ is ω_+-ω_-=Δ. The number of photons contained in each state is n=|α|^2=λ^2/ω_c^2.
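Since the overlap ⟨+α|-α⟩ controls both ε_R and the relaxation suppression in the main text, it is worth recording its standard coherent-state evaluation (a textbook step added here for completeness, with α=λ/ω_c real):

\langle \beta | \alpha \rangle = \exp\!\left(-\tfrac{1}{2}|\beta|^{2}-\tfrac{1}{2}|\alpha|^{2}+\beta^{*}\alpha\right) \quad\Longrightarrow\quad \langle +\alpha | -\alpha \rangle = \exp\left(-2\alpha^{2}\right) = \exp\!\left\{-2\,|\lambda/\omega_c|^{2}\right\},

which is the overlap quoted in the Model section; squaring it gives the exp{-4(λ/ω_c)^2} relaxation suppression used in the Proposal section (for N=1).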
§.§ Case: Δ= 0 The polarized states can also be generated by substituting in Eq. (<ref>) the term Δσ̂_x/2 with the field -Λ(â+â^†)/2: Ĥ=ω_c â^†â-Λ/2X̂+λX̂σ̂_x. Following the same procedure as in the previous case, we can write Ĥ=ω_c â^†â+(â+â^†)(mλ-Λ/2), which can be diagonalized by the transformation â=b̂-(mλ-Λ/2)/ω_c, Ĥ=ω_c b̂^†b̂-(mλ-Λ/2)^2/ω_c. Considering the two lowest eigenstates, the excited state is now | P_+⟩=| +⟩|-α⟩ with energy ω_+=-(Λ/2-λ)^2/ω_c and the ground state is | P_-⟩=| -⟩|+α⟩ with energy ω_-=-(Λ/2+λ)^2/ω_c, and -mα=-(mλ-Λ/2)/ω_c. The energy difference between the excited and the ground state is ω_+-ω_-=2λΛ/ω_c. § MASTER EQUATION FOR A GENERIC HYBRID SYSTEM The total Hamiltonian that describes a generic hybrid system interacting with the environment B is Ĥ=Ĥ_S+Ĥ_B+Ĥ_SB, where Ĥ_S, Ĥ_B, and Ĥ_SB are, respectively, the Hamiltonians of the system, bath, and system-bath interaction. Here, Ĥ_SB=∑_kĤ^(k)_SB, where the sum is over all the channels k that connect the system S to the environment. For a single two-level system strongly coupled to a cavity field these channels are 𝒮={σ̂_x, σ̂_y, σ̂_z, X̂, Ŷ}, with Ŷ=i(â-â^†). In the interaction picture we have Ŝ^(k)(t)=∑_mn s^(k)_mn | m⟩⟨ n|e^iω_mnt = Ŝ^(k)_+(t)+Ŝ^(k)_-(t)+ Ŝ^(k)_z, with Ŝ^(k)_-(t)=∑_m,n>m s^(k)_mn | m⟩⟨ n|e^-iω_nmt, Ŝ^(k)_z=∑_m s^(k)_mm | m⟩⟨ m|, and Ŝ^(k)_+=(Ŝ^(k)_-)^†; this is in analogy with σ̂_+, σ̂_-, and σ̂_z for a two-state system <cit.>, where s^(k)_mn=⟨ m|Ŝ^(k)| n⟩ and ω_mn=ω_m-ω_n. The interaction of the environment with Ŝ^(k)_z affects the eigenvalues of the system, and involves the randomization of the relative phase between the system eigenstates. The interaction of the environment with Ŝ^(k)_x=Ŝ^(k)_++Ŝ^(k)_- induces transitions among different eigenstates. We use the Born master equation in the interaction picture ρ̇̂̇_I=-1/ħ^2∑_k∫_0^t dt' tr_B{[Ĥ^(k)_SB(t),[Ĥ^(k)_SB(t'),ρ̂_I(t')B̂_0] ]}, where B̂_0 is the density operator of the bath at t=0. §.§ Relaxation Within the general formula for a system S interacting with a bath B, described by a bath of harmonic oscillators, in the rotating wave approximation, the Hamiltonian Ĥ_SB is Ĥ_SB^(k)(t)=Ŝ^(k)_-(t)B̂^†(t)+Ŝ_+^(k)(t)B̂(t) with B̂(t)=∑_p κ b̂_p e^-iν_p t, where κ is the coupling constant with the system operator Ŝ^(k). We assume that the bath variables are distributed in the uncorrelated thermal mixture of states. It is easy to prove that ⟨B̂(t)B̂(t')⟩_B=0, ⟨B̂^† (t)B̂^† (t')⟩_B=0, ⟨B̂^†(t)B̂(t')⟩_B=∑_pκ^2 exp{iν_p(t-t')}n̅(ν_p,T), ⟨B̂(t)B̂^†(t')⟩_B=∑_pκ^2 exp{-iν_p(t-t')}[1+n̅(ν_p,T)], where n̅=(exp{ħν_p/k_B T}-1)^-1, k_B is the Boltzmann constant, and T is the temperature. Using Eq. (<ref>) and the properties of the trace, substituting τ=t-t', Eq. (<ref>) in the Markov approximation becomes (ħ=1) ρ̇̂̇_I=∑_k∑_(m, n>m)∑_(m', n'>m')s^(k)_mns^(k)_n'm'× [(|n' ⟩⟨m' |ρ_I| m ⟩⟨ n | - | m ⟩⟨ n|n' ⟩⟨m' |ρ_I)×e^i(ω_n'm'-ω_nm)t∫_0^t dτ e^-iω_n'm'τ⟨B̂^†(t)B̂(t-τ)⟩_B + (|m' ⟩⟨n' |ρ_I| n ⟩⟨ m | - | n ⟩⟨ m|m' ⟩⟨n' |ρ_I)× e^i(ω_nm-ω_n'm')t∫_0^t dτ e^iω_n'm'τ⟨B̂(t)B̂^†(t-τ)⟩_B + (| n ⟩⟨ m |ρ_I|m' ⟩⟨n' | - ρ_I|m' ⟩⟨n'| n ⟩⟨ m |)× e^i(ω_nm-ω_n'm')t∫_0^t dτ e^iω_n'm'τ⟨B̂^†(t-τ)B̂(t)⟩_B + (| m ⟩⟨ n |ρ_I|n' ⟩⟨m' | - ρ_I|n' ⟩⟨m'| m ⟩⟨ n |)× e^i(ω_n'm'-ω_nm)t∫_0^t dτ e^-iω_n'm'τ⟨B̂(t-τ)B̂^†(t)⟩_B]. Within the secular approximation, it follows that m'=m and n'=n. We now extend the τ integration to infinity and in Eqs.
(<ref>) we change the summation over p to an integral, ∑_p→∫_0^∞ dν g_k(ν), where g_k(ν) is the density of states of the bath associated with the operator Ŝ^(k), for example ∫_0^t dτ e^-iω_nmτ⟨B̂^†(t)B̂(t-τ)⟩_B→ ∫_0^∞ dν g_k(ν)κ^2(ν)n̅(ν,T)∫_0^∞ dτ e^i(ν-ω_nm)τ. The time integral is ∫_0^∞ dτ e^i(ν-ω_nm)τ=πδ(ν-ω_nm)+i𝒫/(ν-ω_nm), where 𝒫 indicates the Cauchy principal value. We omit here the contribution of the terms containing the Cauchy principal value 𝒫, because these represent the Lamb shift of the system Hamiltonian. We thus arrive at the expression ρ̇̂̇_I= π∑_k∑_m, n>m| s^(k)_mn|^2κ^2(ω_mn)g_k(ω_mn){(2| n ⟩⟨ m |ρ_I| m ⟩⟨ n | -| m ⟩⟨ n| n ⟩⟨ m |ρ_I - ρ_I| m ⟩⟨ n| n ⟩⟨ m |)n̅(ω_mn,T) + (2| m ⟩⟨ n |ρ_I| n ⟩⟨ m | -| n ⟩⟨ m| m ⟩⟨ n |ρ_I - ρ_I| n ⟩⟨ m| m ⟩⟨ n |)[n̅(ω_mn,T)+1]}, with s^(k)_nm=(s^(k)_mn)^*. Transforming back to the Schrödinger picture, we obtain the master equation for a generic system in thermal equilibrium ρ̇̂̇(t)= -i[Ĥ_S,ρ̂]+ ∑_k∑_m, n>mΓ^(k)_mn{𝒟[| n⟩⟨ m|]ρ̂(t)n̅(ω_mn,T) + 𝒟[| m⟩⟨ n|]ρ̂(t)[n̅(ω_mn,T)+1]}, where Γ^(k)_mn= 2π| s^(k)_mn|^2κ^2(ω_mn)g_k(ω_mn) is the transition rate from level m to level n, and 𝒟[Ô]ρ̂=(2Ôρ̂ Ô^†-Ô^†Ôρ̂-ρ̂ Ô^†Ô)/2. §.§ Pure dephasing A quantum model of the pure dephasing describes the interaction of the system with the environment in terms of virtual processes; the quanta of the bath with energy ħν_q are scattered to quanta with energy ħν_p, leaving the states of the system unchanged. In the interaction picture we have Ĥ^(k)_SB=Ŝ_z^(k)(t)B̂(t) with B̂(t)=∑_pq κ b̂^†_p b̂_q e^iν_pq t, where ν_pq=ν_p-ν_q and κ is the coupling constant with the system. In the sum, terms with p=q have nonzero thermal mean value and they will be included in Ĥ_S, producing a shift in the Hamiltonian energies, so we will omit this contribution. Substituting Eq. (<ref>) in the Born master equation Eq. (<ref>), with τ=t-t', ρ̇̂̇_I= ∑_k∑_m,m's^(k)_m,ms^(k)_m',m'× [(|m' ⟩⟨m' |ρ_I| m ⟩⟨ m |-| m ⟩⟨ m |m' ⟩⟨m' |ρ_I) × ∫_0^t dτ⟨B̂(t)B̂(t-τ)⟩_B+ (| m ⟩⟨ m |ρ_I|m' ⟩⟨m' |-ρ_I|m' ⟩⟨m' | m ⟩⟨ m |) × ∫_0^t dτ⟨B̂(t-τ)B̂(t)⟩_B ]. The correlation function becomes ⟨B̂(t)B̂(t-τ)⟩_B=∑_p,q≠ pκ^2n̅_p(1+n̅_q)exp{i(ν_p-ν_q)τ}. As before, we now extend the τ integration to infinity and in Eq. (<ref>) we change the summation over p (q) to the integral, ∑_p(q)→∫_0^∞ dν_p(q) g_k(ν_p(q)), for example ∫_0^t dτ ⟨B̂^†(t)B̂(t-τ)⟩_B→ ∫_0^∞ dν_p dν_q g_k(ν_p)g_k(ν_q)κ^2(ν)n̅(ν_p,T)[1+n̅(ν_q,T)] × ∫_0^∞ dτ e^i(ν_p-ν_q)τ. The time integral is ∫_0^∞ dτ e^i(ν_p-ν_q)τ=πδ(ν_p-ν_q)+i𝒫/(ν_p-ν_q). We omit here the contribution of the terms containing the Cauchy principal value 𝒫, but they must be included in the Lamb-shifted Hamiltonian. Transforming back to the Schrödinger picture, we obtain the pure dephasing contribution to the master equation for a generic system in thermal equilibrium ρ̇̂̇=∑_kγ^(k)_φ𝒟[ ∑_m s^(k)_mm| m⟩⟨ m|]ρ̂ with γ^(k)_φ=2π∫_0^∞ dν κ^2(ν)g_k^2(ν)n̅(ν,T)[1+n̅(ν,T)]. Using Eqs. (<ref>) and (<ref>), we obtain the master equation valid for generic hybrid quantum systems in the weak-, strong-, and ultrastrong-coupling regimes, with or without parity symmetry. § DYNAMICAL DECOUPLING PERFORMANCE In a pure dephasing picture, a two-level system is described by Ĥ=(ω_q/2+β(t))σ̂_z, where ω_q and β(t) represent the transition energy and the random fluctuations imposed by the environment.
The frequency distribution of the noise power for a noise source β is characterized by its power spectral density S(ω)=1/2π∫_-∞^∞ dt⟨β(0) β(t)⟩ e^-iω t. The off-diagonal element of the density matrix for a superposition state affected by decoherence is ρ_01(t)=ρ_01(0)exp[-iΣ(t)]exp[-χ(t)]. The last term is a decay function and generates decoherence; it is obtained from the ensemble average of the accumulated random phase, exp[-χ(t)]=⟨exp[iδφ(t)]⟩, with δφ(t)=∫_0^t dt'δβ(t'). Following Ref. <cit.>, we have that χ(τ)=∫_0^∞ dω S(ω)F(ωτ)/ω^2 coth(ħω/2k_B T). When the system is free to decay, free induction decay (FID), then F(ωτ)=2 sin^2(ωτ/2). If we apply a sequence of N pulses, then F(ωτ)=| Y_N(ωτ) |^2/2, with Y_N(z)=1+(-1)^(N+1)exp{iz}+2∑_j=1^N(-1)^jexp{izδ_j}. Using superconducting artificial atoms, the power spectral density exhibits a 1/f power law, S(2π f)=A/f, where A is a parameter that we evaluate assuming that the pure dephasing time of the system during FID is known. Indeed, we calculate the integral χ_0=χ(τ_ FID) in Eq. <ref>, considering that the pure dephasing time is τ_ FID=10 μs and setting A=1. After that, we choose A=1/χ_0 in S(2π f). With this choice of A, we are sure that exp[-χ(τ_ FID)]=1/e, and that the pure dephasing rate, when the system is free to decay, is Γ_ FID=1/τ_ FID. At this point, we can calculate χ_N=χ(τ) in Eq. <ref> for a sequence of N equidistant pulses, δ_j=j/(N+1), using Eq. <ref> and A=1/χ_0. If α_N is the pure dephasing suppression factor, Γ_N=α_NΓ_ FID, it follows that α_N=√(χ_N). Considering τ_ FID=10 μs and T=12 mK, we found A=4.34× 10^9. Applying 1000 equally spaced pulses, the suppression factor is α_N=10^-3. In conclusion, applying a DD sequence of 1000 π-pulses in a two-level artificial atom that experiences noise with a 1/f power spectral density, at low temperature the decoherence time can be prolonged up to 10^3 times. § CONDITIONS FOR AN AUXILIARY NON-INTERACTING ATOMIC LEVEL The transition frequencies between the auxiliary level | s⟩ and the lowest two levels must be much greater than the one between the lowest two levels; this is facilitated by using a flux qubit at its optimal point. More importantly, the transition matrix elements between the auxiliary level and the lowest two levels should be much lower than the transition matrix element between the lowest two levels. For example, for a coupling λ/ω_c=1, the transition matrix elements between the auxiliary level and the lowest two levels should be less than 10% of the transition matrix element between the lowest two levels. In the case of longitudinal coupling, the matrix elements must be calculated between the states | ge_±⟩=(| g⟩±| e⟩)/√(2) and between the states | es_±⟩=(| e⟩±| s⟩)/√(2) and | gs_±⟩=(| g⟩±| s⟩)/√(2). If, for some parameters, the last condition is not satisfied, another way to store the information would be to prepare the system in the state | s⟩ when the coupling is low, λ/ω_c≤ 0.1, and, after the flying qubit enters the cavity, switching on the coupling <cit.>. Afterwards, we follow the protocol described in the main part of the paper. To release the quantum information, we reverse the process.
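To make the filter-function computation in the Dynamical Decoupling Performance appendix concrete, here is a minimal Python sketch (ours, not part of the paper) of F(z)=|Y_N(z)|^2/2 for N equally spaced π-pulses; it shows how DD removes the low-frequency weight F(z)/z^2 through which 1/f noise enters χ(τ):

import numpy as np

def filter_function(z, N):
    # F(z) = |Y_N(z)|^2 / 2 with delta_j = j/(N+1); N = 0 reproduces
    # the free-induction-decay filter 2 sin^2(z/2).
    if N == 0:
        return 2.0 * np.sin(z / 2.0) ** 2
    j = np.arange(1, N + 1)
    Y = (1 + (-1) ** (N + 1) * np.exp(1j * z)
         + 2 * np.sum((-1.0) ** j * np.exp(1j * z * j / (N + 1))))
    return 0.5 * abs(Y) ** 2

# 1/f noise enters chi(tau) through S(omega) F(omega tau)/omega^2, so the
# low-frequency weight F(z)/z^2 controls the dephasing suppression.
for z in (0.01, 0.1, 1.0):
    fid, dd = filter_function(z, 0), filter_function(z, 1000)
    print(f"z = {z}: F/z^2 (FID) = {fid / z**2:.3e}, F/z^2 (N=1000) = {dd / z**2:.3e}")

The orders-of-magnitude reduction of F(z)/z^2 at small z is the mechanism behind the 10^3-fold enhancement quoted above.
| http://arxiv.org/abs/1703.08951v2 | {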
"authors": [
"Roberto Stassi",
"Franco Nori"
],
"categories": [
"quant-ph",
"cond-mat.mes-hall",
"physics.atom-ph"
],
"primary_category": "quant-ph",
"published": "20170327065739",
"title": "Long-lasting Quantum Memories: Extending the Coherence Time of Superconducting Artificial Atoms in the Ultrastrong-Coupling Regime"
} |
Departament d'Astronomia i Astrofísica, Universitat de València, C. Dr. Moliner 50, E-46100 Burjassot, València, Spain, [email protected] Max-Planck-Institut für Radioastronomie, Auf dem Hügel 69, D-53121 Bonn, Germany Observatori Astronòmic, Universitat de València, Parc Científic, C. Catedrático José Beltrán 2, E-46980 Paterna, València, Spain Onsala Space Observatory, Chalmers University of Technology, SE-43992 Onsala, Sweden Department of Physics `E.Fermi', University of Pisa, Largo Bruno Pontecorvo 3, I-56127 Pisa, Italy INFN, Section of Pisa, Largo Bruno Pontecorvo 3, I-56127 Pisa, Italy Max-Planck-Institut für Astronomie, Koenigstuhl 17, D-69117 Heidelberg, Germany Instituto de Astrofísica de Andalucía (IAA-CSIC), Apt 3004, E-18080, Granada, Spain Precise determination of stellar masses is necessary to test the validity of pre-main-sequence (PMS) stellar evolutionary models, whose predictions are in disagreement with measurements for masses below 1.2 M_⊙. To improve such a test, and based on our previous studies, we selected the AB Doradus moving group (AB Dor-MG) as the best-suited association on which to apply radio-based high-precision astrometric techniques to study binary systems. We seek to determine precise estimates of the masses of a set of stars belonging to the AB Dor-MG using radio and infrared observations. We observed in phase-reference mode with the Very Large Array (VLA) at 8.4 GHz and with the European VLBI Network (EVN) at 5 GHz the stars HD 160934, EK Dra, PW And, and LO Peg. We also observed some of these stars with the near-infrared CCD AstraLux camera at the Calar Alto observatory to complement the radio observations. We determine model-independent dynamical masses of both components of the star HD 160934, A and c, which are 0.70±0.07 M_⊙ and 0.45±0.04 M_⊙, respectively. We revise the orbital parameters of EK Dra and determine a sum of the masses of the system of 1.38±0.08 M_⊙. We also explore the binarity of the stars LO Peg and PW And. We find observational evidence that PMS evolutionary models underpredict the mass of PMS stars by 10%-40%, as previously reported by other authors. We also infer that the origin of the radio emission must be similar in all observed stars, that is, extreme magnetic activity of the stellar corona that triggers gyrosynchrotron emission from non-thermal, accelerated electrons. Young, active radio stars in the AB Doradus moving group R. Azulay^1,2 (Guest student of the International Max Planck Research School for Astronomy and Astrophysics at the Universities of Bonn and Cologne), J. C. Guirado^3,1, J. M. Marcaide^1, I. Martí-Vidal^4, E. Ros^2,1,3, E. Tognelli^5,6, F. Hormuth^7, J. L. Ortiz^8 Draft version: December 30, 2023 ================================================================================================================== § INTRODUCTION Stellar evolution models are an essential tool to infer fundamental stellar parameters such as radius, mass, and/or age from luminosity/temperature-based relationships (e.g., Baraffe et al. 1998; Chabrier et al. 2000). The reliability of the models has long been tested and validated by the overall good agreement between the predictions of stellar models and measurements.
However, only recently have accurate measurements of stellar masses and radii become accessible, especially in the case of low- and very low-mass stars, thus allowing more stringent tests of stellar models. In the particular case of pre-main-sequence (PMS) stars, the models show an increasing difficulty in accurately reproducing some of the characteristics of stars with masses below 1.2 M_⊙ (e.g., Hillenbrand & White 2004). Therefore, the calibration of the evolutionary models of low-mass PMS stars is an important and challenging task, since it requires precise and independent measurements of luminosities and masses to be compared with theoretical predictions. Several authors have highlighted these facts in previous works but, nevertheless, there are not yet enough observational data to help improve the models (Hillenbrand & White 2004; Stassun et al. 2004; Mathieu et al. 2007; Gennaro et al. 2012). The study of binary stars belonging to young moving groups, whose main feature is the common age of their members, is a reasonable approach to increase the number of PMS stars with dynamically determined masses. Several of these moving groups have recently been discovered (Zuckerman & Song 2004; Torres et al. 2008). Among all of these groups, the AB Doradus moving group (AB Dor-MG) is the most suitable one to carry out this study: it is the closest moving group, its estimated age is relatively accurate, and it contains stars with significant emission at radio wavelengths (Guirado et al. 2006, 2011; Jason et al. 2007; Azulay et al. 2014, 2015). This last feature is essential because it allows us to use radio interferometry techniques to obtain astrometric information. Using these techniques it is possible to achieve angular resolutions in the sub-milliarcsecond (sub-mas) range, which are needed to solve and study in detail the kinematics (proper motion, parallax, and possible orbits) of the stellar systems. In this context, we have made several contributions to stars belonging to the AB Dor-MG, namely, AB Dor A/C (Guirado et al. 2006, 2011), AB Dor Ba/Bb (Azulay et al. 2015), and HD 160934 (Azulay et al. 2014). In the first two cases, a VLBI-driven astrometric study resulted in a precise estimate of the dynamical mass of the individual components, providing relevant results in terms of calibration of the mass-luminosity relationship for young, low-mass objects. Regarding the binary HD 160934, we reported the discovery of compact radio emission from both components of the system, which opened the possibility of further astrometric monitoring of its orbital motion. Given the remarkable scientific output of AB Dor A/C, AB Dor Ba/Bb, and HD 160934, we considered it appropriate to include new similar stars, that is, young binaries that are luminous at both infrared and radio wavelengths. In fact, other stars in the AB Dor-MG are fast rotators, showing traces of magnetic activity (such as stellar spots) and could well be radio emitters. The previous reasoning was the main motivation to initiate a study of the radio emission of AB Dor-MG members beyond the systems already studied. In this paper we present the results of a VLA/VLBI radio study of PMS stars belonging to the AB Dor-MG, namely, HD 160934, EK Dra, PW And, and LO Peg.
In particular, we focus on the VLBI observations of HD 160934, from which we were able to monitor astrometrically the relative orbit (of the component HD 160934 A with respect to the component HD 160934 c) and the absolute orbit (reflex motion of the component HD 160934 A with respect to an external quasar) and, thereby, to determine dynamical individual masses of both components of the star, which enabled further comparisons with stellar models. We also report on VLBI observations of the other three stars, aimed at determining their fundamental parameters (EK Dra) or exploring their possible binarity (PW And and LO Peg). § OBSERVATIONS AND DATA REDUCTION §.§ VLA observations We analyzed archival VLA data[Projects AG0377, ADA000, and ABO691 available at the VLA data archive https://archive.nrao.edu/archive/advquery.jsp] of the stars EK Dra, PW And, and LO Peg observed at 8.4 GHz in AB (EK Dra) and CD (PW And and LO Peg) configurations on 1993 January 29, 1993 September 16, and 1996 May 5, respectively (see Table <ref>). In all cases, the effective bandwidth was 50 MHz and both right- and left-handed circular polarizations were collected. For EK Dra, the observation lasted 10.5 h; the source 3C48 was used as the primary flux calibrator and the source 1435+638 was selected as the phase calibrator. For PW And, the observation lasted 10 h, and the flux and phase calibrators were 0137+331 and 0029+349, respectively. For LO Peg, the observation lasted 5.5 h; the source 0137+331 was used as primary flux calibrator, and the source 2115+295 was selected as phase calibrator. VLA observations of HD 160934 are described in Azulay et al. (2014) and included in Table <ref> for completeness. To reduce all three experiments, we used standard routines of the Astronomical Image Processing System (AIPS, 31DEC15 version) program of the National Radio Astronomy Observatory (NRAO), which we summarize in turn. We flagged data, both obvious outliers and data segments selected after careful checking of the observing logs, which constituted a small fraction of the complete data set; we determined the flux density of the primary calibrator, we calculated the flux density of the phase calibrator from the primary flux calibrator, and we used the solutions derived from the calibrators to calibrate the amplitudes and phases of the target through linear interpolation. These calibrated data were imported to the DIFMAP software package (Shepherd 1994) to obtain the images of the stars. The resulting images are shown in Figs. <ref>, <ref>, and <ref> and are discussed in the next section. §.§ VLBI observations The previous VLA observations certified the presence of radio emission from HD 160934, EK Dra, PW And, and LO Peg; to study their compact structures, we carried out VLBI observations at 5 GHz between 2012 and 2014 with the EVN (see Table <ref>). Results of the first VLBI epoch on HD 160934 were already presented in Azulay et al. (2014) but they have been reanalyzed here in the context of new observations. For each experiment we observed an overall time of 10 h and both polarizations were recorded with a rate of 1024 Mbps (two polarizations, eight sub-bands per polarization, 16 MHz per sub-band, and two bits per sample).
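As a consistency check of the quoted recording rate (our note; Nyquist sampling of each 16 MHz sub-band is the standard assumption here):

pols, subbands, bandwidth_hz, bits_per_sample = 2, 8, 16e6, 2
nyquist_rate = 2 * bandwidth_hz                       # samples per second per sub-band
rate_bps = pols * subbands * nyquist_rate * bits_per_sample
print(rate_bps / 1e6, "Mbps")                         # -> 1024.0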
After each observation, the data were correlated with the EVN MkIV data processor at the Joint Institute for VLBI in Europe (JIVE), Dwingeloo, The Netherlands. As we are studying weak sources, we used the phase-referencing technique to facilitate their detection; for that, we interleaved scans of the target sources with ICRF quasars. The quasars selected were J1746+6226, J1441+6318, J0015+3216, and J2125+2442 for HD 160934, EK Dra, PW And, and LO Peg, respectively (separated by 1.50^∘, 1.04^∘, 1.48^∘, and 1.87^∘, respectively). The target-calibrator-target cycles lasted about six minutes in all cases. We reduced each experiment using AIPS in a standard procedure briefly described here. The initial reduction included amplitude calibration using system temperatures and antenna gains, and corrections for the parallactic angle and the ionosphere. We applied fringe fitting on the calibrator to determine phase offsets and applied the solutions to the target source. Later, we imported the resulting data to DIFMAP to obtain uniformly weighted maps of the calibrators (Figs. <ref>, <ref>, <ref>, and <ref>). We obtained each image through a process of self-calibration iterations of the amplitude and phase with deconvolutions using the clean algorithm, which allowed us to determine both the amplitude scaling corrections and the self-calibrated phase for each antenna. Back in AIPS, we applied these corrections to the target source to obtain the phase-referenced, naturally weighted images (Figs. <ref>, <ref>, and <ref>). We analyze the details in the next section. §.§ AstraLux observations HD 160934 and EK Dra were also observed with the Lucky Imaging AstraLux camera (Hormuth et al. 2008) at the Calar Alto 2.2 m telescope. The Lucky Imaging technique permits the reduction of distortions due to the atmosphere by acquiring a large number of short-exposure images and combining the best few percent of high-quality images to obtain a final image that is relatively unaffected by atmospheric turbulence (Hormuth et al. 2007 and references therein). The observations of HD 160934 were carried out on 2013 June 24 and 2015 November 19; the observations of EK Dra were carried out on 2013 June 24. In all cases, we used two different filters, SDSS i^' and SDSS z^', and we took 10000 individual frames with exposure times of 30 ms each. The individual frames were dark and flat corrected before selecting the best 10% of the acquisitions. The final images were constructed by filtering and resampling the selected frames, and then combining them with the Drizzle algorithm (Fruchter & Hook 2001). We also observed the stars at the center of the globular cluster M15, whose positions were used for astrometric calibration in the way described in Hormuth et al. (2007). At each observation, the field of view was 24^''×24^'' in a 512×512 pixel frame. The resulting images are shown in Figs. <ref> and <ref>. Details are given in the next section. § DISCUSSION ON INDIVIDUAL SOURCES §.§ HD 160934 HD 160934 (=HIP 86346) is a young, very active, binary star (Zuckerman, Song, and Bessell 2004; López-Santiago et al. 2006). The two components, HD 160934 A and HD 160934 c, were first reported by Gálvez et al. (2006). This star has spectral type K7Ve (Schlieder et al. 2012) and it is located at a distance of ∼33 pc (van Leeuwen 2007). Several studies of HD 160934 have been carried out so far with different techniques. Radial velocity measurements were performed by Henry et al. (1995), Zuckerman et al. (2004), Gálvez et al.
(2006), Griffin & Filiz Ak (2010), and Griffin (2013). Relative astrometry through infrared imaging was reported by Hormuth et al. (2007) and Lafrenière et al. (2007). Moreover, relative astrometry from aperture-masking interferometry was provided by Evans et al. (2012). In Azulay et al. (2014), we reported the discovery of the radio emission of both components of the pair. There, we used that discovery to carry out relative astrometry and to determine new orbital elements and mass estimates. Regarding our new VLBI images, only a single component (assigned to A) is detected in 2013.392 (the upper bound to the radio emission of the component c is 0.01 mJy). In contrast, in the images corresponding to 2014.175, and as happened in the observations of 2012.830 reported in Azulay et al. (2014), two point-like features are clearly seen. Those two point-like features can readily be associated with components A and c of the binary HD 160934. Circular Gaussian least-squares-fitted parameters for the two components are listed in Table <ref>. In the AstraLux images, meanwhile, the two components are not distinguished in the 2013 images whereas they are in those of 2015, where c is near the apoastron. We fitted a binary model to find the separation and flux ratio in the latter images; the results are shown in Table <ref>, along with previous estimates. §.§.§ Orbital parameters In order to carry out an astrometric study, we measured the relative position of the pair A-c directly on the maps shown in Fig. <ref> (except for epoch 2013.39); we also measured the absolute position of the main component A, whose coordinates in Fig. <ref> are in turn referenced to the position of the external quasar (Table <ref>). We augmented our data set with the relative position in Fig. <ref> of 2015 and with previous orbital measurements reported by Evans et al. (2012), Hormuth et al. (2007), and Lafrenière et al. (2007). Table <ref> shows all the archive positions available for the system. We estimated the Keplerian parameters of HD 160934 via a weighted least-squares fit that combined the absolute positions of component A and all the relative positions constructed as A-c, that is, taking c as reference. We followed a similar approach to that used in Azulay et al. (2015) for another star of the AB Dor-MG, that is, AB Dor B, solving simultaneously for the absolute and relative orbits using the Thiele-Innes elements and the Levenberg-Marquardt algorithm. In practice, we proceeded in two steps: * We obtained a priori values of the orbital elements from a previous least-squares fit to the (more numerous) differential data; in particular we estimated values for the period P (10.33 yr), semimajor axis of the relative orbit a_rel (0.^''152), eccentricity e (0.63), three orientation angles i (82.^∘4), ω (85.^∘9), Ω (35^∘), and time of periastron T_0 (2002.32). * We used the values above as (otherwise excellent) a priori estimates to favor the convergence of the L-M algorithm in the combined fit of the absolute (A component) and relative positions (A-c). In this analysis, the proper motion and parallax of the system were also estimated. The resulting set of astrometric and orbital parameters is shown in Table <ref>, while plots of the relative and absolute orbits are presented in Figs. <ref> and <ref>. Although there is a third object in this system, HD 160934 B (located at 8^'' separation from A and c; Lowrance et al. 2005), we did not include secular acceleration terms in the fit described above, since this third body does not induce an appreciable acceleration in our three-year time baseline of VLBI monitoring (the estimated period of the corresponding reflex orbital motion is on the order of 10^3 years). Our fit yielded a new value of the parallax (31.4±0.5 mas), which is within the uncertainties, but more precise than, the previous Hipparcos estimate (30.2±2 mas; van Leeuwen 2007). This new parallax allowed us to determine the sum of the masses of both components of HD 160934 using Kepler's third law: (a_rel^'' / π^'')^3/P^2 = (m_A + m_c)_⊙. We obtained a value of m_A + m_c of 1.15±0.10 M_⊙, which is coincident with previous estimates made by other authors (Evans et al. 2012 and references therein). Similarly, using the semimajor axis of the absolute orbit of component A, a_A, we could estimate the mass of component c, m_c, using m_c^3 / (m_A + m_c)^2 = a_A^3/P^2, which yielded a value of 0.45±0.04 M_⊙. A value of m_A (= 0.70±0.07 M_⊙) follows from a simple subtraction of the values above. In principle, the latter value for m_A may seem highly correlated with m_c; however, a similar, but coarser, estimate of a_c could be obtained by repeating the combined fit using the absolute positions of component c (see Table <ref>), from which a similar value of m_A (0.70±0.10 M_⊙) could be calculated. The coincidence of both estimates of m_A indicates the robustness of our mass determinations.
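A compact numerical restatement of this mass determination (our sketch: the orbital values are the a priori ones quoted above, so the result is close to, but not exactly, the 1.15±0.10 M_⊙ obtained with the final fitted elements; the a_A value is illustrative only):

P = 10.33                    # orbital period [yr]
a_rel = 0.152                # relative-orbit semimajor axis [arcsec]
plx = 0.0314                 # parallax [arcsec]

m_sum = (a_rel / plx) ** 3 / P ** 2       # Kepler: (a''/pi'')^3 / P^2 [M_sun]

# m_c from the reflex orbit of A: m_c^3 / (m_A + m_c)^2 = a_A^3 / P^2,
# with a_A in AU; 0.059'' is an assumed, illustrative value for a_A.
a_A = 0.059 / plx
m_c = (m_sum ** 2 * a_A ** 3 / P ** 2) ** (1.0 / 3.0)
print(m_sum, m_c, m_sum - m_c)            # ~1.06, ~0.41, ~0.65 M_sun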
The weighted rms of the postfit residuals (plotted in Fig. <ref>) is 3.2 mas, meaning that some unmodeled effects are still present in the data. The residuals show no evidence of another companion within the errors. Instead, the possible departure of some of the points from the fitted orbit might indicate instrumental effects that have not been considered. Accordingly, we scaled the statistical errors of the orbital parameters to take this contribution into account (see Table <ref>). §.§.§ Comparison with models To proceed with the calibration of PMS models we needed the K magnitude and effective temperature of components A and c. Estimates of the individual K magnitudes can be obtained from the unresolved 2MASS K-band photometry (6.812 ± 0.020), combined with the flux ratio between c and A (f_c/A) at K band; however, although f_c/A has been measured for different filters (see Table <ref>), the K-band flux ratio is not available. We estimated f_c/A at K band via weighted linear fits to the flux ratios measured at the other filters given in Table <ref>. We obtained f_c/A = 0.39 ± 0.11, taking the mean of our fits as the value and the spread of the fits over different subsets of data points as the standard deviation, in such a way that our uncertainty conservatively covers the different fitted values of f_c/A. Final values for the absolute K magnitudes were 4.65±0.15 and 5.68±0.15 for components A and c, respectively. Regarding the effective temperatures, we derived a value for component A from its spectral type (K7-K8; McCarthy & White 2012) using the empirical color-temperature transformation reported by Hartigan et al. (1994). For component c, we proceeded in a similar way by assuming an M2-3 spectral type (Gálvez et al. 2006). The final values for the effective temperatures were 3960±60 K and 3450±90 K for components A and c, respectively. The standard deviations of these temperatures were estimated to cover several uncertainties that could affect our approach.
Recent studies have shown that the spectral types of young stars determined from NIR observations disagree with those from the optical by up to three subtypes (Kastner et al. 2015; Pecaut et al. 2016), which would strongly affect the procedure to estimate the effective temperature. Since the spectral types of HD 160934 A and c are based on IR observations, we should not expect such large errors. Actually, the spectral type determinations used for the components of HD 160934 (see above) are uncertain by one subtype, which has been taken into account to estimate the standard deviation of the effective temperatures. Considering that these standard deviations have been further enlarged to cope with the different, more recent, color-temperature relations (Pecaut & Mamajek 2013; Luhman et al. 2003; Herczeg & Hillenbrand 2014), this wavelength-dependent spectral type effect should be reasonably covered. In order to calibrate the stellar evolution models for PMS stars, we considered isochrones and isomasses corresponding to the models of Baraffe et al. (1998; BCAH98), Siess et al. (2000; S00), Tognelli et al. (2011, 2012; TDP12), Bressan et al. (2012; Padova), Baraffe et al. (2015; BHAC15), and Choi et al. (2016; MIST). We adopted a metallicity value of [Fe/H]=0.0 (representative of the AB Dor metallicity; Barenfeld et al. 2013) and a solar calibrated mixing length parameter α. The different models are shown in Fig. <ref>. HD 160934 A and HD 160934 c are placed in the H-R diagrams in Fig. <ref> using the values of both the magnitudes and the temperatures explained above. The theoretical masses predicted by the models agree with our dynamical estimates just at the extreme of their uncertainties. All sets of tracks predict masses for component A ∼10% lower than our dynamical values, while predictions for component c vary according to the model: BCAH98, BHAC15, and MIST predict masses that are ∼30% lower, S00 and TDP12 predict masses that are ∼40% lower, and Padova predicts masses that are ∼10% lower. These results are consistent with previously published works, which conclude that PMS stellar evolution models for low-mass stars underestimate the dynamical values by 10-30% (Hillenbrand & White 2004; Mathieu et al. 2007). According to the values above, predictions are better for component A than for component c, that is, the larger the dynamical mass, the smaller the difference between the theoretical and dynamical estimates. In terms of age, the S00 and TDP12 models suggest that both stars are younger than 40 Myr, while BCAH98, Padova, BHAC15, and MIST favor slightly older ages but younger than 60 Myr. The age of the HD 160934 system predicted by the models in Fig. <ref> is between 15 Myr and 60 Myr, near the young estimates of the age of the AB Dor-MG. Age estimates of the AB Dor-MG range from 50-120 Myr (Malo et al. 2013), 70 Myr (Zuckerman et al. 2011), 70-120 Myr (Gagné et al. 2014), 100-150 Myr (Elliott et al. 2016), and 130-200 Myr (Bell et al. 2015). It should be mentioned that the possibility that HD 160934 does not belong to the AB Dor-MG cannot be ruled out. According to the BANYAN II membership probability tool (Gagné et al. 2014), the kinematics of this object favor HD 160934 being a young field dwarf with a 95% probability. Were this the case for HD 160934, the conclusions about the age of the AB Dor-MG based on the age range derived from Fig. 7 would have limited validity. However, in contrast with the BANYAN prediction, a very recent publication (Elliott et al.
2016) still confirms HD 160934 as an AB Dor-MG member.

§.§.§ Magnetic field effects

The discrepancy between the dynamical and the inferred masses for the components of HD 160934 seems to indicate a lack of additional input physics in the models. Indeed, the models appear to be too hot for the data at a given mass. In this regard, several hypotheses exist that would allow the models to be cooler; in the case of HD 160934, and given the existence of compact radio emission in both objects, which is frequently associated with intense magnetic activity (e.g., Güdel et al. 1995), we limited our analysis to studying the impact of a magnetic field on the stellar models. The presence of a magnetic field inside the star mainly acts to reduce the surface convection efficiency in stellar models, as shown in Feiden et al. (2012, 2013, 2014, 2016), thus reducing the stellar effective temperature and increasing the star's radius. Moreover, an external magnetic field might also result in surface spots that, by blocking the flux at the stellar surface, tend to increase the stellar radius (Somers & Pinsonneault 2015). Thus, we briefly analyze these two aspects in turn. Regarding the internal magnetic field and following Feiden et al. (2013), its main effect on the convective heat transport can be partially simulated by reducing the efficiency of super-adiabatic convection, which is effectively carried out using a value of the mixing-length parameter α that is much lower than the solar calibrated value. Consequently, we computed new TDP12 models using a value of α = 0.6, in contrast with the solar calibrated α = 1.74 value used in the models of Fig. 7. This particular value of α was adopted by Feiden et al. (2013) to simulate the reduced convection efficiency due to a magnetic field in a non-magnetic stellar model; such a low α value is also compatible with those used by Chabrier et al. (2007) to reproduce the radii of low-mass eclipsing binary stars. The results are shown in Fig. <ref> (left), where we see an evident "cooling" of the magnetic (α = 0.6) isomasses of both components A and c with respect to those corresponding to the standard models (α = 1.74; see Fig. 7), which in turn produces a better agreement with the measurements. Interestingly, the effect of the internal magnetic field leads to older ages for both the A (>50 Myr) and c (>30 Myr) components. Concerning the effect of surface spot coverage on the models, we computed additional TDP12 evolutionary models, in which we implemented this effect following the formalism described in Somers & Pinsonneault (2015). We adopted two different values of the effective spot coverage β: 0 (standard models without spots; Fig. 7) and 0.3, the same value used by Somers & Pinsonneault (2015). The comparison can be seen in Fig. <ref> (right), where we see effects similar to those shown for the internal magnetic field: a cooling of the isomasses, leading to a reduction of the discrepancy with the measurements, and older ages (>50 Myr) for both components. There are other possible scenarios that can modify the position of a pre-MS star in the HR diagram, such as the presence of protostellar accretion, whose treatment is beyond the scope of this paper; in such a scenario, it is difficult to define a standard set of accretion models, as the outputs of the models are severely affected by the parameters that govern the accretion phase (e.g., mass accretion rate, initial seed mass, or thermal energy carried inside the star by the accreted matter; Baraffe et al.
2012; Tognelli et al. 2015).

§.§ EK Draconis

EK Dra (=HD 129333) is an active G1.5 V star with a short rotation period (2.6 days; Järvinen et al. 2005). The binarity of this object (whose components are EK Dra A/B, separated by 0.74^'') was discovered for the first time through radial velocity variations by Duquennoy & Mayor (1991). Metchev & Hillenbrand (2004) confirmed the existence of these components from IR imaging. Several radial velocity studies of this star have been carried out (Duquennoy & Mayor 1991; Dorren & Guinan 1994; Montes et al. 2001; König et al. 2005). In particular, König et al. (2005) combined these radial velocity data with their speckle interferometry data to derive masses of 0.9±0.1 M_⊙ and 0.5±0.1 M_⊙ for the primary and secondary, respectively, a period of 45±5 yr, and a semimajor axis of 14.0±0.5 AU. The VLA image (Fig. <ref>) shows EK Dra as an unresolved radio emitter with an integrated flux of 0.21 mJy (the radio emission of EK Dra A is known and was reported in Güdel et al. 1995). Because of the small separation of the two components of the binary at the epoch of observation, components A and B appear to be blended on the map. Besides this, we could only detect component A in the first of our three VLBI epochs of observation (2012.827). The image yields a flux density of 0.06 mJy, with an upper bound to the radio emission of component B of 0.02 mJy (Fig. <ref>). The upper bounds to the radio emission of the star in the second and third epochs are 0.01 and 0.02 mJy, respectively. The non-detection of EK Dra in the last two VLBI epochs can be a consequence of the variable behavior of the radio emission. In the AstraLux images (Fig. <ref>), meanwhile, we could detect both components of the star. We used our AstraLux relative position of EK Dra to revisit the orbital motion between components A and B. This position is shown in Table <ref> along with already published relative positions of EK Dra A/B, which mostly result from speckle interferometry observations (König et al. 2005). We performed a weighted least-squares analysis similar to that presented for HD 160934, simplified in this case to deal with relative positions only. Table <ref> shows the resulting orbital elements. The estimate of the combined mass of the system, using the Hipparcos distance of 33.94±0.72 pc, is m_A+m_B = 1.38±0.08 M_⊙. Plots of the relative orbit can be seen in Fig. <ref> and Fig. <ref>. Both the orbital elements and the mass estimate coincide with those reported by König et al. (2005) within uncertainties. Although our new position doubles the time baseline of the orbital monitoring, the motion of B around the main star is very slow, indicating that the components are near apoastron.

§.§ PW Andromedae

PW And (=HD 1405) is a chromospherically very active star of spectral type K2V, which displays a short rotation period (1.75 days; Montes et al. 2001). Strassmeier et al. (1988) included this object as a possible binary; however, radial velocity studies (Griffin 1992; López-Santiago et al. 2003) rule out the presence of a close, interacting companion, showing that the chromospheric activity is due to PW And itself. Moreover, Evans et al. (2012) explored the inner region of the star with speckle interferometry, excluding the presence of companions at separations larger than 20 mas. Regarding the radio observations, a VLA image (Fig. 15) reveals a flux density of 0.34 mJy. Our EVN image (Fig.
17), meanwhile, shows an unresolved source that should correspond to PW And, with a flux density of 0.17 mJy. Since our EVN observation does not show any companion to PW And, we can extend the absence of companions down to the resolution of our array, ∼5 mas, at a flux density limit of 0.01 mJy. Given that the star is apparently single, we did not propose further EVN observations of this source in the framework of this work (dedicated to monitoring binary/multiple systems). Still, once the presence of compact emission is confirmed, the determination of a precise, VLBI-based parallax, superseding that of Hipparcos, would certainly be of interest.

§.§ LO Pegasus

LO Peg (=BD+224402) is a young, active star with a spectral type in the range K3V-K8V (Zuckerman & Song 2004; Pandey et al. 2005) and a rotation period of 0.42 days (Barnes et al. 2005). The first study of LO Peg was carried out by Jeffries et al. (1994), who concluded that there was no circumstellar matter around the star. Since then, Doppler imaging and radial velocity studies have been carried out (Barnes et al. 2005; Piluso et al. 2008), and all of these studies consider LO Peg a single star. We can confirm the radio emission of this source with a VLA image (Fig. <ref>) that reveals a flux density of 0.45 mJy. In the VLBI image, nevertheless, the star could not be detected; the upper bound to the radio emission of LO Peg in this observation is 0.08 mJy/beam. As for EK Dra, this non-detection could reflect the high variability of these active stars.

§ DISCUSSION AND CONCLUSIONS

The main properties of the AB Dor-MG stars studied in this paper are shown in Table <ref>, including our calculated values of the absolute radio luminosity (obtained from the VLA flux and the Hipparcos distance) and the brightness temperature (obtained from the VLBI angular size when the source is detected). While AB Dor Ba/Bb and HD 160934 A/c turned out to be intense radio emitters that were detected at all observing epochs (as is PW And, also detected in our single VLBI epoch of observation), we found that EK Dra (in 2 out of 3 epochs) and LO Peg did not display detectable levels of radio emission. These non-detections might just reflect the variability of the radio emission, since, as seen in Table <ref>, neither the distance, the rotation period, nor the X-ray luminosity is significantly different in these two stars with respect to the other radio emitting systems. Therefore, further monitoring of these non-detected stars would not be as efficient in terms of kinematical studies as in the cases presented for AB Dor Ba/Bb (Azulay et al. 2015) and HD 160934 (this paper). The same conclusion applies to PW And, given its apparent non-binarity (even taking into account its clear VLBI detection). On the other hand, from the brightness temperatures shown in Table <ref>, we can conclude that the radio emission has a non-thermal origin. This fact, along with the rapid rotation values and saturated levels of X-ray luminosity, L_X, also listed in the table, favors the existence of intense magnetic activity in the stellar corona, which is responsible for this radio emission. Therefore, the radio emission is apparently generated by gyrosynchrotron emission from non-thermal accelerated electrons (Lim et al. 1994; Güdel et al. 1995). In this paper we have shown the results of a VLBI program dedicated to monitoring the absolute reflex motion of HD 160934, a member of the AB Dor-MG.
The unexpected detection of compact radio emission from the low-mass companion c (Azulay et al. 2014) allowed us to sample not only the absolute orbit of component A (with respect to the external quasar J1746+6226), but also the relative orbit, both of which are necessary to determine model-independent dynamical masses of the components of this system. The proximity of the two stars near periastron (∼20-40 mas in the last four years) had prevented an appropriate sampling of the relative orbit until the recent use of more precise interferometric techniques: aperture-masking (Evans et al. 2012) and VLBI (this work). The results of our orbital analysis yield values of 0.70±0.07 M_⊙ and 0.45±0.04 M_⊙ for components A and c, respectively, which are larger than the theoretical values predicted by PMS evolutionary tracks. The amount of this disagreement is ∼10% for component A and 10-40% for component c, contributing to the increasing observational evidence that PMS models underpredict the masses of systems with masses below 1.2 M_⊙. We have found that the inclusion of the effect of the stellar magnetic field in the theoretical models tends to reduce such a discrepancy. Remarkably, our study allowed us to obtain a revised and more precise value of the parallax (31.4±0.5 mas), thus settling a long-standing discussion about the distance to this system. With respect to the other stars included in our study, EK Dra and PW And showed compact radio emission at milliarcsecond scales, while LO Peg appeared to be "off" at the time of the observations. In addition, complementary infrared observations of EK Dra allowed us to revise the orbital parameters of this system.

R.A., J.C.G., J.M.M., & E.R. were partially supported by the Spanish MINECO projects AYA2012-38491-C02-01 and AYA2015-63939-C2-2-P and by the Generalitat Valenciana projects PROMETEO/2009/104 and PROMETEOII/2014/057. R.A. acknowledges the Max-Planck-Institut für Radioastronomie for its hospitality. E.T. was supported by the "PRA 2016 Università di Pisa". E.T. also acknowledges the INFN iniziativa specifica TAsP.

[Azulay et al.(2014)]2014A A...561A..38A Azulay, R., Guirado, J. C., Marcaide, J. M., Martí-Vidal, I., & Arroyo-Torres, B. 2014, A&A, 561, A38 [Azulay et al.(2015)]2015A A...578A..16A Azulay, R., Guirado, J. C., Marcaide, J. M., et al. 2015, A&A, 578, A16 [Baraffe et al.(1998)]1998A A...337..403B Baraffe, I., Chabrier, G., Allard, F., & Hauschildt, P. H. 1998, A&A, 337, 403 [Baraffe et al.(2012)]2012ApJ...756..118B Baraffe, I., Vorobyov, E., & Chabrier, G. 2012, ApJ, 756, 118 [Baraffe et al.(2015)]2015A A...577A..42B Baraffe, I., Homeier, D., Allard, F., & Chabrier, G. 2015, A&A, 577, A42 [Barnes et al.(2005)]2005MNRAS.356.1501B Barnes, J. R., Collier Cameron, A., Lister, T. A., Pointer, G. R., & Still, M. D. 2005, MNRAS, 356, 1501 [Bell et al.(2015)]2015MNRAS.454..593B Bell, C. P. M., Mamajek, E. E., & Naylor, T. 2015, MNRAS, 454, 593 [Bressan et al.(2012)]2012MNRAS.427..127B Bressan, A., Marigo, P., Girardi, L., et al. 2012, MNRAS, 427, 127 [Chabrier et al.(2000)]2000ApJ...542..464C Chabrier, G., Baraffe, I., Allard, F., & Hauschildt, P. 2000, ApJ, 542, 464 [Chabrier et al.(2007)]2007A A...472L..17C Chabrier, G., Gallardo, J., & Baraffe, I. 2007, A&A, 472, L17 [Choi et al.(2016)]2016ApJ...823..102C Choi, J., Dotter, A., Conroy, C., et al. 2016, ApJ, 823, 102 [Dorren & Guinan(1994)]1994ApJ...428..805D Dorren, J. D., & Guinan, E. F. 1994, ApJ, 428, 805 [Duquennoy & Mayor(1991)]1991A A...248..485D Duquennoy, A., & Mayor, M.
1991, A&A, 248, 485[Evans et al.(2012)]2012ApJ...744..120E Evans, T. M., Ireland, M. J., Kraus, A. L., et al. 2012, ApJ, 744, 120 [Elliott et al.(2016)]2016A A...590A..13E Elliott, P., Bayo, A., Melo, C. H. F., et al. 2016, , 590, A13 [Feiden & Chaboyer(2012)]2012ApJ...757...42F Feiden, G. A., & Chaboyer, B. 2012, , 757, 42 [Feiden & Chaboyer(2013)]2013ApJ...779..183F Feiden, G. A., & Chaboyer, B. 2013, , 779, 183 [Feiden & Chaboyer(2014)]2014ApJ...789...53F Feiden, G. A., & Chaboyer, B. 2014, , 789, 53 [Feiden(2016)]2016A A...593A..99F Feiden, G. A. 2016, , 593, A99 [Fruchter & Hook(2002)]2002PASP..114..144F Fruchter, A. S., & Hook, R. N. 2002, , 114, 144 [Gagné et al.(2014)]2014ApJ...783..121G Gagné, J., Lafrenière, D., Doyon, R., Malo, L., & Artigau, É. 2014, , 783, 121 [Gálvez et al.(2006)]2006Ap SS.304...59G Gálvez, M. C., Montes, D., Fernández-Figueroa, M. J., & López-Santiago, J. 2006, Ap&SS, 304, 59 [Gennaro et al.(2012)]2012MNRAS.420..986G Gennaro, M., Prada Moroni, P. G., & Tognelli, E. 2012, MNRAS, 420, 986 [Griffin(1992)]1992Obs...112...41G Griffin, R. F. 1992, The Observatory, 112, 41 [Griffin & Filiz Ak(2010)]2010Ap SS.330...47G Griffin, R. F., & Filiz Ak, N. 2010, Ap&SS, 330, 47 [Griffin(2013)]2013Obs...133..322G Griffin, R. F. 2013, The Observatory, 133, 322 [Guedel et al.(1995)]1995A A...301..201G Güdel, M., Schmitt, J. H. M. M., Benz, A. O., & Elias, N. M., II 1995, A&A, 301, 201[Guirado et al.(2006)]2006A A...446..733G Guirado, J. C., Martí-Vidal, I., Marcaide, J. M., et al. 2006, A&A, 446, 733 [Guirado et al.(2011)]2011A A...533A.106G Guirado, J. C., Marcaide, J. M., Martí-Vidal, I., et al. 2011, A&A, 533, A106 [Hartigan et al.(1994)]1994ApJ...427..961H Hartigan, P., Strom, K. M., & Strom, S. E. 1994, ApJ, 427, 961 [Henry et al.(1995)]1995AJ....110.2926H Henry, G. W., Fekel, F. C., & Hall, D. S. 1995, AJ, 110, 2926 [Herczeg & Hillenbrand(2014)]2014ApJ...786...97H Herczeg, G. J., & Hillenbrand, L. A. 2014, , 786, 97 [Hillenbrand & White(2004)]2004ApJ...604..741H Hillenbrand, L. A., & White, R. J. 2004, ApJ, 604, 741 [Hormuth et al.(2007)]2007A A...463..707H Hormuth, F., Brandner, W., Hippler, S., Janson, M., & Henning, T. 2007, A&A, 463, 707 [Hormuth et al.(2008)]2008SPIE.7014E..48H Hormuth, F., Hippler, S., Brandner, W., Wagner, K., & Henning, T. 2008, The International Society for Optical Engineering, 7014, 701448 [Janson et al.(2007)]2007A A...462..615J Janson, M., Brandner, W., Lenzen, R., et al. 2007, A&A, 462, 615 [Järvinen et al.(2005)]2005A A...440..735J Järvinen, S. P., Berdyugina, S. V., & Strassmeier, K. G. 2005, A&A, 440, 735 [Jeffries et al.(1994)]1994MNRAS.270..153J Jeffries, R. D., Byrne, P. B., Doyle, J. G., et al. 1994, MNRAS, 270, 153 [Kastner et al.(2015)]2015csss...18..313K Kastner, J. H., Rapson, V., Sargent, B., Smith, C. T., & Rayner, J. 2015, 18th Cambridge Workshop on Cool Stars, Stellar Systems, and the Sun, 18, 313 [König et al.(2005)]2005A A...435..215K König, B., Guenther, E. W., Woitas, J., & Hatzes, A. P. 2005, A&A, 435, 215 [Lafrenière et al.(2007)]2007ApJ...670.1367L Lafrenière, D., Doyon, R., Marois, C., et al. 2007, ApJ, 670, 1367 [Lim(1993)]1993ApJ...405L..33L Lim, J. 1993, ApJ, 405, L33 [Lim et al.(1994)]1994ApJ...430..332L Lim, J., White, S. M., Nelson, G. J., & Benz, A. O. 1994, ApJ, 430, 332 [Lim & White(1995)]1995ApJ...453..207L Lim, J., & White, S. M. 1995, ApJ, 453, 207 [López-Santiago et al.(2003)]2003A A...411..489L López-Santiago, J., Montes, D., Fernández-Figueroa, M. J., & Ramsey, L. W. 
2003, A&A, 411, 489 [López-Santiago et al.(2006)]2006ApJ...643.1160L López-Santiago, J., Montes, D., Crespo-Chacón, I., & Fernández-Figueroa, M. J. 2006, ApJ, 643, 1160 [Lowrance et al.(2005)]2005AJ....130.1845L Lowrance, P. J., Becklin, E. E., Schneider, G., et al. 2005, , 130, 1845 [Luhman et al.(2003)]2003ApJ...593.1093L Luhman, K. L., Stauffer, J. R., Muench, A. A., et al. 2003, , 593, 1093 [Malo et al.(2013)]2013ApJ...762...88M Malo, L., Doyon, R., Lafrenière, D., et al. 2013, , 762, 88 [Mathieu et al.(2007)]2007prpl.conf..411M Mathieu, R. D., Baraffe, I., Simon, M., Stassun, K. G., & White, R. 2007, Protostars and Planets V, 411 [McCarthy & White(2012)]2012AJ....143..134M McCarthy, K., & White, R. J. 2012, AJ, 143, 134 [Metchev & Hillenbrand(2004)]2004ApJ...617.1330M Metchev, S. A., & Hillenbrand, L. A. 2004, ApJ, 617, 1330[Montes et al.(2001)]2001A A...379..976M Montes, D., López-Santiago, J., Fernández-Figueroa, M. J., & Gálvez, M. C. 2001, A&A, 379, 976 [Pandey et al.(2005)]2005AJ....130.1231P Pandey, J. C., Singh, K. P., Drake, S. A., & Sagar, R. 2005, AJ, 130, 1231 [Pecaut & Mamajek(2013)]2013ApJS..208....9P Pecaut, M. J., & Mamajek, E. E. 2013, , 208, 9 [Pecaut(2016)]2016IAUS..314...85P Pecaut, M. J. 2016, Young Stars & Planets Near the Sun, 314, 85 [Piluso et al.(2008)]2008MNRAS.387..237P Piluso, N., Lanza, A. F., Pagano, I., Lanzafame, A. C., & Donati, J.-F. 2008, MNRAS, 387, 237 [Schlieder et al.(2012)]2012AJ....143...80S Schlieder, J. E., Lépine, S., & Simon, M. 2012, AJ, 143, 80 [Shepherd et al.(1994)]1994BAAS...26..987S Shepherd, M. C., Pearson, T. J., & Taylor, G. B. 1994, BAAS, 26, 987 [Siess et al.(2000)]2000A A...358..593S Siess, L., Dufour, E., & Forestini, M. 2000, A&A, 358, 593 [Somers & Pinsonneault(2015)]2015ApJ...807..174S Somers, G., & Pinsonneault, M. H. 2015, , 807, 174 [Stassun et al.(2004)]2004ApJS..151..357S Stassun, K. G., Mathieu, R. D., Vaz, L. P. R., Stroud, N., & Vrba, F. J. 2004, ApJS, 151, 357 [Strassmeier et al.(1988)]1988A AS...72..291S Strassmeier, K. G., Hall, D. S., Zeilik, M., et al. 1988, A&AS, 72, 291 [Tognelli et al.(2011)]2011A A...533A.109T Tognelli, E., Prada Moroni, P. G., & Degl'Innocenti, S. 2011, A&A, 533, A109 [Tognelli et al.(2012)]2012A A...548A..41T Tognelli, E., Degl'Innocenti, S., & Prada Moroni, P. G. 2012, A&A, 548, A41 [Tognelli et al.(2015)]2015MNRAS.454.4037T Tognelli, E., Prada Moroni, P. G., & Degl'Innocenti, S. 2015, , 454, 4037 [Torres et al.(2008)]2008hsf2.book..757T Torres, C. A. O., Quast, G. R., Melo, C. H. F., & Sterzik, M. F. 2008, Handbook of Star Forming Regions, Volume II, 757 [van Leeuwen(2007)]2007A A...474..653V van Leeuwen, F. 2007, A&A, 474, 653 [Wichmann et al.(2003)]2003A A...399..983W Wichmann, R., Schmitt, J. H. M. M., & Hubrig, S. 2003, A&A, 399, 983 [Zuckerman & Song(2004)]2004ARA A..42..685Z Zuckerman, B., & Song, I. 2004, ARA&A, 42, 685 [Zuckerman et al.(2004)]2004ApJ...613L..65Z Zuckerman, B., Song, I., & Bessell, M. S. 2004, ApJ, 613, L65 | http://arxiv.org/abs/1703.08877v1 | {
"authors": [
"R. Azulay",
"J. C. Guirado",
"J. M. Marcaide",
"I. Martí-Vidal",
"E. Ros",
"E. Tognelli",
"F. Hormuth",
"J. L. Ortiz"
],
"categories": [
"astro-ph.SR"
],
"primary_category": "astro-ph.SR",
"published": "20170326222440",
"title": "Young, active radio stars in the AB Doradus moving group"
} |
Department of Physics, Key Laboratory of Low Dimensional Quantum Structures and Quantum Control of Ministry of Education, and Synergetic Innovation Center for Quantum Effects and Applications, Hunan Normal University, Changsha, Hunan 410081, P. R. China

Strong gravitational lensing for charged black holes with scalar hair in Einstein-Maxwell-Dilaton theory is studied. We find that, with the increase of scalar hair, the radius of the photon sphere, the minimum impact parameter, the angular image position and the relative magnitude decrease, while the deflection angle and the angular image separation increase. Our results reduce to those of the Schwarzschild black hole in two cases: one is when the scalar hair disappears; the other is when the coupling constants take particular values with arbitrary scalar hair.

04.70.Dy, 95.30.Sf, 97.60.Lf

Strong gravitational lensing for the charged black holes with scalar hair
Ruanjing Zhang, Jiliang Jing[Corresponding author, Email: [email protected]] December 30, 2023
=================================================================================

§ INTRODUCTION

The presence of a massive body produces a deflection of light passing close to the object, according to the theory of general relativity; the corresponding effects are called gravitational lensing, and an object causing a detectable deflection acts as a gravitational lens <cit.>. Moreover, this deflection of light was first observed in 1919 by Dyson, Eddington, and Davidson <cit.>. After the pioneering gravitational lens Q0957+561 <cit.> was discovered in 1979, gravitational lensing developed into an important astrophysical tool to extract information about distant sources which are too dim to be observed otherwise, acting like a large natural telescope. When an object with a photon sphere is situated between a source and an observer, there are two infinite sets of images, called relativistic images, produced by light passing close to the photon sphere, which undergoes a large deflection. These relativistic images carry much valuable information about the central celestial object and could provide a profound verification of alternative theories of gravity <cit.>. Therefore, gravitational lensing is regarded as a powerful indicator of the physical nature of the central celestial object, and we need a systematic approach to calculate the deflection angle and the features of the relativistic images. Darwin <cit.> calculated the deflection angle by using the strong deflection limit (consisting of a logarithmic approximation) for the Schwarzschild spacetime; this method also allows for calculating the positions and magnifications of the relativistic images. It was rediscovered several times <cit.>, then extended to the Reissner-Nordström metric <cit.>, and to any spherically symmetric object with a photon sphere <cit.>. In recent years, many works <cit.> have been carried out based on this method. The standard "no-hair theorem" <cit.> states that a black hole is completely specified by its mass, charge, and angular momentum. However, in recent years much attention has been devoted to gravity theories supplemented by scalar fields, and many examples of black holes with scalar hair <cit.> have been obtained. There are several reasons for this. To begin with, as one kind of fundamental and effective field, scalar fields are well motivated by standard-model particle physics.
In addition, analyzing different field contents can be treated as a means of checking the "no-hair theorem" and of exploring the structure of black holes. Scalar fields are often considered by physicists as one of the simplest types of "matter". Finally, the presence of the scalar field leads to different black hole spacetimes, which may engender new phenomena; we hope these deviations could be detected in astrophysical observations. Moreover, a fundamental scalar field does exist in nature, as demonstrated by the discovery in 2012 of a scalar particle at the Large Hadron Collider at CERN <cit.>, which has been identified as the standard-model Higgs boson. Therefore, studying strong gravitational lensing and time delay for black holes with scalar hair is of great significance.

This paper is arranged as follows: In Sec. <ref>, we study the physical properties of strong gravitational lensing around the charged black holes with scalar hair and probe the effects of the scalar hair on the event horizon, the radius of the photon sphere, the minimum impact parameter, and the deflection angle. In Sec. <ref>, we suppose that the gravitational field of the supermassive black hole at the centre of our Galaxy can be described by this metric, and then obtain numerical results for the main observables in strong gravitational lensing, such as the angular image position, the angular image separation, and the relative magnitude of relativistic images. Finally, we present our conclusions in the last section.

§ DEFLECTION ANGLE FOR THE CHARGED BLACK HOLES WITH SCALAR HAIR

Supergravities have provided a variety of fundamental matter fields whose interactions with gravity can be studied <cit.>; one of them is the dilatonic scalar <cit.>. If the dilaton φ couples to an n-form field strength F_n=dA_{n-1} through Z(φ), the general class of Lagrangians is given by

e^-1 ℒ = R - 1/2 (∂φ)^2 - 1/(2 n!) Z(φ) F^2_n,

where e=√(-det(g_μν)). It is not hard to see that we recover the usual Reissner-Nordström black hole, decoupled from the dilaton, if Z(φ) in Eq. (<ref>) has a stationary point. The uniqueness theorem is violated if we can construct a further, different black hole with the same mass and charge but a non-vanishing dilaton. Now, we take Z to be

Z^-1 = e^{a_1 φ} cos^2 ω + e^{a_2 φ} sin^2 ω,

with

a_1 a_2 = -2(n-1)(D-n-1)/(D-2),

where a_1 and a_2 are the dilaton coupling constants, and ω is another coupling constant. The function Z becomes a single exponential function of φ for ω=0 or ω=π/2. In this paper, we focus our attention on the Einstein-Maxwell-Dilaton theory in four dimensions, corresponding to D = 4 and n = 2, in which the coupling of the dilaton φ to the Maxwell field A is not the usual single exponential function, but one with a stationary point. The condition for (a_1, a_2) in Eq. (<ref>) becomes a_1 a_2 = -1, and the Lagrangian can be rewritten as

e^-1 ℒ = R - 1/2 (∂φ)^2 - 1/4 Z F^2,

where F=dA. The constants a_1, a_2 can be expressed as

a_1 = √((1-μ)/(1+μ)), a_2 = -√((1+μ)/(1-μ)),

where μ is a dimensionless constant with range μ∈(-1,1).
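As a quick numerical sanity check (not part of the original derivation), one can verify that these coupling constants indeed satisfy the four-dimensional condition a_1 a_2 = -1 for any μ in the allowed range:

import numpy as np

# a_1 a_2 = -1 is the D = 4, n = 2 case of a_1 a_2 = -2(n-1)(D-n-1)/(D-2).
for mu in (-0.9, -0.3, 0.0, 0.5, 0.99):
    a1 = np.sqrt((1.0 - mu) / (1.0 + mu))
    a2 = -np.sqrt((1.0 + mu) / (1.0 - mu))
    print(mu, a1 * a2)   # prints -1.0 for every mu in (-1, 1)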
The dilaton coupling function Z is thus given by

Z^-1 = e^{√((1-μ)/(1+μ)) φ} cos^2 ω + e^{-√((1+μ)/(1-μ)) φ} sin^2 ω.

Then, the Lagrangian (<ref>) admits charged black holes with scalar hair of the form <cit.>

ds^2 = -f(r)dt^2 + dr^2/f(r) + r^{1+μ}(r+S)^{1-μ}(dθ^2 + sin^2θ dϕ^2),

where

f(r) = (1+S/r)^μ [1 + Q^2 cos^2 ω/(2rS(1+μ)) - Q^2 sin^2 ω/(2S(1-μ)(r+S))].

The solution involves two integration constants: one is the constant Q, which parameterizes the electric charge; the other is S, which is related to the dilaton φ by

e^{φ/√(1-μ^2)} = 1 + S/r.

Since the black hole has scalar hair with varying φ, S parameterizes the scalar hair. What is important is that this solution reduces to the Schwarzschild black hole when S→0, or when ω=π/2 with μ→1. Hence, these limits can be used to test our results in the following study. The ADM mass, electric charge, and Maxwell field A are given by <cit.>

M = Q^2 sin^2 ω/(4(1-μ)S) - Q^2 cos^2 ω/(4(1+μ)S) - μS/2, Q_e = Q/4, A = Q(r+S cos^2 ω)/(r(r+S)).

It is useful for the calculation to adopt the scaling symmetries

r/2M→ r, S/2M→ S, Q/2M→ Q, t/2M→ t, C(r)/(2M)^2→ C(r).

After this rescaling, the solution (<ref>) still takes the same form as above. Since it is meaningful to study the effects of scalar hair on the strong gravitational lensing, we can express Q^2 in terms of μ, ω and S as

Q^2 = 2S(1-μ)(1+μ)(1+Sμ)/[(μ-1)cos^2 ω + (μ+1)sin^2 ω].

Then, we have to take ω∈[π/4,π/2] with μ=0, or μ∈(-1,1) with ω=π/2, to ensure Q^2>0. Now, let us study the physical properties of strong gravitational lensing by the charged black holes with scalar hair. We choose the equatorial plane (θ=π/2), which means that both the observer and the source lie in the equatorial plane and the whole trajectory of the photon is confined to the same plane. Then the metric (<ref>) can be expressed as

ds^2 = -A(r)dt^2 + B(r)dr^2 + C(r)dϕ^2,

with

A(r)=f(r), B(r)=1/f(r), C(r)=r^{1+μ}(r+S)^{1-μ}.

In the spherically symmetric case, the equation of the photon sphere reads

C'(r)/C(r) = A'(r)/A(r),

where the prime represents the derivative with respect to r. For the charged black holes with scalar hair, the equation of the photon sphere takes the form

8Sr^3(μ^2-1) + r^2[4S^2(2μ+3)(μ^2-1) + 6Q^2(μ-cos2ω)] + 2Sr[2S^2(2μ^3+μ^2-2μ-1) + Q^2(2μ^2+3μ-2-3cos2ω)] + 2S^2Q^2(1+cos2ω)(μ^2-1) = 0.

Obviously, this equation has three roots, because it is cubic in r. We take the root that tends to 1.5 as S→0 as the radius of the photon sphere; in other words, we take the root which recovers the Schwarzschild value when S→0. We present the variation of the radius of the photon sphere r_ps and of the radius of the event horizon r_H with the scalar hair S for μ=0 (varying ω) and ω=π/2 (varying μ) in Fig. <ref>. We can see that r_H and r_ps both decrease with the increase of scalar hair, for either μ=0 or ω=π/2. We can also see that r_ps is always larger than r_H for given μ and ω, in accordance with expectation. This black hole reduces to the Schwarzschild black hole <cit.> (r_H=1, r_ps=1.5) in two cases: one is S→0 for arbitrary ω and μ; the other is ω=π/2 and μ→1 for arbitrary scalar hair S. From Fig. <ref>, we can see that the curves of r_H and of r_ps each converge as S→0, and that the black line in the right graph is basically a straight line; both features reflect the recovery of the Schwarzschild results. The exact deflection angle α for a photon coming from infinity, as a function of the distance of closest approach r_0, can be expressed as <cit.>

α(r_0) = I(r_0) - π,

with

I(r_0) = 2∫^∞_{r_0} √(B(r)) dr / (√(C(r)) √(C(r)A(r_0)/(C(r_0)A(r)) - 1)).

The deflection angle is a monotonically decreasing function of r_0.
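The photon-sphere condition above is straightforward to solve numerically. The sketch below finds the relevant root by a bracketed search; the parameter values in the example calls are chosen inside the allowed ranges but are not values used in the paper's figures. Its output can be cross-checked against the cubic equation quoted above, and it returns r_ps → 1.5 in the S→0 limit, as stated.

import numpy as np
from scipy.optimize import brentq

def A_and_C(r, S, mu, omega):
    # Metric functions A(r) = f(r) and C(r), in the rescaled units r/2M -> r.
    Q2 = 2*S*(1 - mu)*(1 + mu)*(1 + S*mu) / (
        (mu - 1)*np.cos(omega)**2 + (mu + 1)*np.sin(omega)**2)
    A = (1 + S/r)**mu * (1 + Q2*np.cos(omega)**2 / (2*r*S*(1 + mu))
                           - Q2*np.sin(omega)**2 / (2*S*(1 - mu)*(r + S)))
    C = r**(1 + mu) * (r + S)**(1 - mu)
    return A, C

def photon_sphere(S, mu, omega, rmin=1.01, rmax=2.9):
    # Root of C'/C - A'/A = 0, with central finite differences for the primes.
    def g(r):
        h = 1e-6 * r
        Ap, Cp = A_and_C(r + h, S, mu, omega)
        Am, Cm = A_and_C(r - h, S, mu, omega)
        A, C = A_and_C(r, S, mu, omega)
        return (Cp - Cm) / (2*h*C) - (Ap - Am) / (2*h*A)
    return brentq(g, rmin, rmax)

print(photon_sphere(S=0.2, mu=0.0, omega=np.pi/3))    # ~1.07 for this example
print(photon_sphere(S=1e-6, mu=0.0, omega=np.pi/3))   # ~1.5, Schwarzschild limit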
For a special value of r_0, the deflection angle becomes 2π; that is to say, the light ray makes a complete loop around the compact object before reaching the observer, which results in two infinite sets of relativistic images, one on the same side and the other on the opposite side of the source. Furthermore, the deflection angle diverges when r_0 approaches the radius of the photon sphere r_ps, which means that the photon is captured. We are now in a position to treat the case of a photon passing close to the photon sphere, using the evaluation method for the integral (<ref>) proposed by Bozza <cit.>. It is useful to define a new variable

z = 1 - r_0/r,

and we obtain

I(r_0) = ∫^1_0 R(z,r_0) f(z,r_0) dz,

with

R(z,r_0) = 2r^2 √(A(r)B(r)C(r_0))/(r_0 C(r)), f(z,r_0) = 1/√(A(r_0) - A(r)C(r_0)/C(r)),

where R(z,r_0) is the regular term and f(z,r_0) is the divergent term, which diverges for z→0, i.e., as the photon approaches the photon sphere. So we can split the integral (<ref>) into a sum of two parts,

I_D(r_0) = ∫^1_0 R(0,r_ps) f_0(z,r_0) dz,
I_R(r_0) = ∫^1_0 [R(z,r_0)f(z,r_0) - R(0,r_ps)f_0(z,r_0)] dz,

where I_D(r_0) and I_R(r_0) denote the divergent and regular parts of the integral (<ref>), respectively. To find the order of divergence of the integrand, we take a Taylor expansion of the argument of the square root in f(z,r_0) to second order in z; we then get

f_0(z,r_0) = 1/√(p(r_0)z + q(r_0)z^2),

with

p(r_0) = [r_0/C(r_0)][A(r_0)C'(r_0) - A'(r_0)C(r_0)],
q(r_0) = [r_0/(2C^2(r_0))][2r_0 C(r_0)C'(r_0)A(r_0) - 2r_0 C'^2(r_0)A(r_0) + r_0 C(r_0)C''(r_0)A(r_0) - r_0 C^2(r_0)A''(r_0)].

It is obvious from Eqs. (<ref>) and (<ref>) that p(r_0)=0 at r_0=r_ps. So we have f_0(z,r_0)∼1/z when r_0 is equal to the radius of the photon sphere r_ps, and the term I_D(r_0) then diverges logarithmically. Therefore, the deflection angle can be expanded in the form

α(θ) = -a̅ log(θ D_OL/u_ps - 1) + b̅ + o(u-u_ps),

with

a̅ = R(0,r_ps)/(2√(q(r_ps))),
b̅ = -π + b_R + a̅ log{r_ps^2 [C''(r_ps)A(r_ps) - C(r_ps)A''(r_ps)]/(u_ps √(A^3(r_ps)C(r_ps)))},
b_R = I_R(r_ps),
u_ps = √(C(r_ps)/A(r_ps)),

where the quantity D_OL is the distance between the observer and the gravitational lens; θ is the angular separation between the optical axis and the direction of the image, which satisfies u=θ D_OL; u_ps is the impact parameter u evaluated at r_ps, which is called the minimum impact parameter; and a̅ and b̅ are the strong deflection limit coefficients, which depend only on the metric functions evaluated at r_ps. Making use of Eqs. (<ref>) and (<ref>), we can study the properties of strong gravitational lensing by the charged black holes with scalar hair.

Now, we probe the properties of strong gravitational lensing by the charged black holes with scalar hair and mainly explore the effects of the scalar hair S on the deflection angle. We show, in Figs. <ref>-<ref>, the variation of the coefficient a̅, the minimum impact parameter u_ps, and the deflection angle α(θ) with scalar hair S, for varying ω at μ=0 and for varying μ at ω=π/2, respectively. We can read from Fig. <ref> that the coefficient a̅ always grows with the increase of scalar hair S, for either μ=0 or ω=π/2, but the growth rate decreases with the increase of μ or ω. Furthermore, Fig. <ref> shows that the minimum impact parameter u_ps decreases with the increase of scalar hair S for either μ=0 or ω=π/2. We also plot the deflection angle α(θ) evaluated at u=u_ps+0.003 in Fig. <ref>.
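To make the logarithmic behaviour of this expansion explicit, it can be evaluated directly. The sketch below uses the Schwarzschild coefficients: a̅ = 1 and u_ps = 2.598 are quoted in the next paragraph, while b̅ ≈ -0.4002 is the standard Schwarzschild value from the strong-deflection-limit literature, not a number quoted in the text.

import numpy as np

abar, bbar, u_ps = 1.0, -0.4002, 2.598   # Schwarzschild strong-deflection values

def alpha(u):
    # Strong-deflection-limit deflection angle, with u = theta * D_OL.
    return -abar * np.log(u / u_ps - 1.0) + bbar

for du in (0.1, 0.01, 0.003, 0.001):
    print(f"u = u_ps + {du}: alpha = {alpha(u_ps + du):.3f} rad")
# alpha grows without bound, logarithmically, as u -> u_ps.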
We then find that the deflection angle increases with the increase of scalar hair, regardless of the values of μ and ω, which tells us that the scalar hair enhances the effect of the black hole on the light. It is also shown that the deflection angle has properties similar to those of the coefficient a̅; this means that the deflection angle of the light ray is dominated by the logarithmic term in strong gravitational lensing. One thing we cannot ignore is that every curve of a̅, u_ps, and α(θ) converges as S→0 for arbitrary μ and ω, which implies that they recover the results of the standard Schwarzschild case <cit.>, i.e., a̅=1, u_ps=2.598, and α(θ)=6.28. We should also note that the black lines in each graph on the right side of Figs. <ref>-<ref> are almost straight lines; this means that our results recover those of the Schwarzschild case again for ω=π/2 and μ→1.

§ OBSERVABLES IN STRONG GRAVITATIONAL LENSING

In this part, we calculate the observables in strong gravitational lensing by the charged black holes with scalar hair, including the angular image position θ_∞, the angular image separation s, and the relative magnitude r_m. Let us start with the lens equation <cit.>

β = θ - (D_LS/D_OS) α_n,

where β is the angle between the direction of the source and the optical axis, called the angular source position. D_LS is the distance between the lens and the source; D_OS is the distance between the observer and the source; and they satisfy D_OS=D_LS+D_OL. α_n=α-2nπ is the offset of the deflection angle, and n is an integer that indicates the number of loops done by the photon around the black hole. Since the lensing effects are more significant when the objects are highly aligned, we will study the case in which the angles β and θ are small. We find that the angular separation between the lens and the nth relativistic image is

θ_n ≃ θ^0_n + u_ps e_n (β-θ_n^0) D_OS/(a̅ D_LS D_OL),

with

θ_n^0 = (u_ps/D_OL)(1+e_n), e_n = e^{(b̅-2nπ)/a̅},

where θ_n^0 is the image position corresponding to α=2nπ. As n→∞, we find that e_n→0 from Eq. (<ref>), which implies that the minimum impact parameter u_ps and the asymptotic position of a set of images θ_∞ obey the simple relation

u_ps = D_OL θ_∞.

Then, the magnification of the nth relativistic image is given by

μ_n = [(β/θ) ∂β/∂θ]^{-1}|_{θ^0_n} = u_ps^2 e_n(1+e_n) D_OS/(a̅ β D^2_OL D_LS).

It is easy to find that the first relativistic image is the brightest, and the magnification decreases exponentially with n. Therefore, we consider the case in which only the outermost and brightest image θ_1 is resolved as a single image, while all the remaining ones are packed together at θ_∞ <cit.>. Thus, the angular image separation s between the first image and the packed others, and the ratio ℛ of the flux from the first image to that from all the other images, can be expressed as

s = θ_1 - θ_∞ = θ_∞ e^{(b̅-2π)/a̅},
ℛ = μ_1/Σ^∞_{n=2}μ_n = e^{2π/a̅}.

These two formulas can be easily inverted to get

a̅ = 2π/logℛ, b̅ = a̅ log(ℛ s/θ_∞).

For a given theoretical model, the strong deflection limit coefficients a̅ and b̅ and the minimum impact parameter u_ps can be obtained; then these three observables s, θ_∞, and ℛ can be calculated. Conversely, comparing them with astronomical observations will allow us to determine the nature of the lensing black hole. To provide an example, let us assume that the supermassive black hole in the Galactic center can be described by this solution.
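Under this assumption, the inversion from the strong-deflection coefficients to the observables can be sketched numerically. The mass-to-distance ratio below is the one quoted in the next paragraph, the Schwarzschild coefficients serve as a consistency check, and b̅ ≈ -0.4002 is again the standard literature value rather than a number quoted in the text.

import numpy as np

M_over_D = 2.4734e-11                    # GM/(c^2 D_OL) for the Galactic-centre lens
RAD_TO_MUAS = (180.0 / np.pi) * 3600.0e6

abar, bbar, u_ps = 1.0, -0.4002, 2.598   # Schwarzschild limit; u_ps in units of 2M

theta_inf = u_ps * 2.0 * M_over_D * RAD_TO_MUAS     # micro-arcsec
s = theta_inf * np.exp((bbar - 2.0*np.pi) / abar)   # micro-arcsec
r_m = 2.5 * (2.0*np.pi / abar) / np.log(10.0)       # r_m = 2.5 log10(R)

print(theta_inf, s, r_m)   # ~26.51, ~0.033, ~6.82, matching the Schwarzschild
                           # values quoted below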
It has a mass M=4.4×10^6 M_⊙ <cit.> and is situated at a distance D_OL= 8.5 kpc from the Earth, so the ratio of the mass to the distance is M/D_OL≈2.4734×10^-11. Hence, we can estimate the values of the coefficients and observables for strong gravitational lensing by combining Eqs. (<ref>), (<ref>), and (<ref>). We present the numerical values of the angular image position θ_∞, the angular image separation s, and the relative magnitude r_m (which is related to ℛ by r_m=2.5 logℛ) of the relativistic images in Figs. <ref>-<ref>. We find in Fig. <ref> that the angular image position θ_∞ decreases with the increase of scalar hair S, for either the μ=0 case or the ω=π/2 case. We also find that θ_∞ grows with the increase of ω for fixed S and μ, and likewise with the increase of μ for fixed S and ω. Figures <ref> and <ref> show that the changes in θ_∞ and u_ps are the same; this is because θ_∞ and u_ps satisfy the geometrical relationship u_ps=D_OLθ_∞. Furthermore, we find from Figs. <ref> and <ref> that the angular image separation s increases, while the relative magnitude r_m decreases, with the increase of scalar hair S. It is interesting to find that, for different ω and μ, each curve of θ_∞, s, and r_m converges as S→0, recovering the results of the standard Schwarzschild case: θ_∞=26.5095 μarcsec, s=0.0331 μarcsec, and r_m=6.82. There is another situation (ω=π/2, μ→1) that recovers the results of the Schwarzschild case; the corresponding curves are plotted as black lines in the right graphs of Figs. <ref>-<ref>.

§ SUMMARY

In this paper, we investigated strong gravitational lensing for the four-dimensional charged black holes with scalar hair in the Einstein-Maxwell-Dilaton theory <cit.>. We studied the effects of scalar hair on the event horizon r_H, the radius of the photon sphere r_ps, the strong deflection limit coefficient a̅, the minimum impact parameter u_ps, the deflection angle α(θ), and the main observables of strong gravitational lensing, such as the angular image position θ_∞, the angular image separation s, and the relative magnitude r_m of the relativistic images. As expected, r_ps is always greater than r_H for an arbitrary black hole; Fig. <ref>, which shows r_H and r_ps for the charged black holes with scalar hair, illustrates this again. Fig. <ref> also shows that r_H and r_ps both decrease with the increase of scalar hair. We found from Figs. <ref> and <ref> that both the deflection angle α(θ) and the strong deflection limit coefficient a̅ increase with the increase of scalar hair, for either μ=0 or ω=π/2, which means that the deflection angle of the light ray is dominated by the logarithmic term in gravitational lensing. We learned from Figs. <ref> and <ref> that the changes of the angular image position θ_∞ and of the minimum impact parameter u_ps are the same (both decrease with the increase of scalar hair for either μ=0 or ω=π/2), because θ_∞ and u_ps satisfy the geometrical relationship u_ps=D_OLθ_∞. Moreover, from Figs. <ref> and <ref> we also found that, with the increase of scalar hair, the angular image separation s increases, while the relative magnitude r_m decreases.
It should be pointed out that this black hole reduces to the Schwarzschild black hole in two cases: one is S→0 for arbitrary ω and μ; the other is ω=π/2 and μ→1 for arbitrary scalar hair. In both cases, all quantities of strong gravitational lensing for the charged black holes with scalar hair reduce to those of the Schwarzschild spacetime, i.e., r_H=1, r_ps=1.5, a̅=1, u_ps=2.598, α(θ)=6.28, θ_∞=26.5095 μarcsec, s=0.0331 μarcsec, r_m=6.82. This can be seen clearly in every figure: all the curves converge as S→0, and the black lines in the right graphs, which stand for μ→1 with ω=π/2, are basically straight lines.

This work is supported by the National Natural Science Foundation of China under Grant No. 11475061 and by the Hunan Provincial Innovation Foundation for Postgraduates (Grant No. CX2016B164).

Einstein A. Einstein, Lens-like action of a star by the deviation of light in the gravitational field, Science 84, 506 (1936). Dyson F. W. Dyson, A. S. Eddington, and C. Davidson, A determination of the deflection of light by the Sun's gravitational field, from observations made at the total eclipse of May 29, 1919, Phil. Trans. Roy. Soc. Lond. A 220, 291 (1920). Walsh D. Walsh, R. F. Carswell, and R. J. Weymann, 0957 + 561 A, B: twin quasistellar objects or gravitational lens? Nature 279, 381 (1979). Vir K. S. Virbhadra and G. F. R. Ellis, Schwarzschild black hole lensing, Phys. Rev. D 62, 084003 (2000). Fritt S. Frittelly, T. P. Kling, and E. T. Newman, Spacetime perspective of Schwarzschild lensing, Phys. Rev. D 61, 064021 (2000). Bozza2 V. Bozza, Quasiequatorial gravitational lensing by spinning black holes in the strong field limit, Phys. Rev. D 67, 103006 (2003). Eirc1 E. F. Eiroa, Braneworld black hole gravitational lens: Strong field limit analysis, Phys. Rev. D 71, 083010 (2005); E. F. Eiroa, Gravitational lensing by Einstein-Born-Infeld black holes, Phys. Rev. D 73, 043002 (2006); E. F. Eiroa and C. M. Sendra, Regular phantom black hole gravitational lensing, Phys. Rev. D 88, 103007 (2013). whisk R. Whisker, Strong gravitational lensing by braneworld black holes, Phys. Rev. D 71, 064004 (2005). Gyulchev G. N. Gyulchev and S. S. Yazadjiev, Kerr-Sen dilaton-axion black hole lensing in the strong deflection limit, Phys. Rev. D 75, 023006 (2007); G. N. Gyulchev and S. S. Yazadjiev, Gravitational lensing by rotating naked singularities, Phys. Rev. D 78, 083004 (2008). Bhad1 A. Bhadra, Gravitational lensing by a charged black hole of string theory, Phys. Rev. D 67, 103009 (2003). TSa1 T. Ghosh and S. Sengupta, Strong gravitational lensing across a dilaton anti-de Sitter black hole, Phys. Rev. D 81, 044013 (2010). AnAv A. N. Aliev and P. Talazan, Gravitational effects of rotating braneworld black holes, Phys. Rev. D 80, 044023 (2009). gr1 C. Ding, C. Liu, Y. Xiao, L. Jiang, and R. Cai, Strong gravitational lensing in a black-hole spacetime dominated by dark energy, Phys. Rev. D 88, 104007 (2013); S. Wei, Y. Liu, C. Fu, and K. Yang, Strong field limit analysis of gravitational lensing in Kerr-Taub-NUT spacetime, J. Cosmol. Astropart. Phys. 10, 053 (2012); S. Wei and Y. Liu, Equatorial and quasiequatorial gravitational lensing by a Kerr black hole pierced by a cosmic string, Phys. Rev. D 85, 064044 (2012). Kraniotis G. V. Kraniotis, Precise analytic treatment of Kerr and Kerr-(anti) de Sitter black holes as gravitational lenses, Class. Quant. Grav. 28, 085021 (2011). JH J. Sadeghi, J. Naji, and H. Vaez, Strong gravitational lensing in a charged squashed Kaluza-Klein Gödel black hole, Phys. Lett.
B 728, 170 (2014); J. Sadeghi, A. Banijamali, and H. Vaez, Strong gravitational lensing in a charged squashed Kaluza-Klein black hole, Astrophys Space Sci. 343, 559 (2013).Bozza4V. Bozza, F. De Luca, G. Scarpetta, and M. Sereno, Analytic Kerr black hole lensing for equatorial observers in the strong deflection limit,Phys. Rev. D 72, 083003 (2005); V. Bozza, F. De Luca, and G. Scarpetta, Kerr black hole lensing for generic observers in the strong deflection limit, Phys. Rev. D 74, 063001 (2006).Darwin C. Darwin, The gravity field of a particle,Proc. Roy. Soc. Lond. A 249, 180 (1959). Bozza3 J. P. Luminet, Image of a spherical black hole with thin accretion disk, Astron. Astrophys. 75, 228 (1979);H. C. Ohanian,The black hole as a gravitational “lens", Am. J. Phys. 55 428 (1987);R. J. Nemiroff,Visual distortions near a neutron star and black hole,Am. J. Phys. 61, 619 (1993); V. Bozza, S. Capozziello, G. lovane, and G. Scarpetta, Strong field limit of black hole gravitational lensing, Gen. Rel. and Grav. 33, 1535 (2001). EircE. F. Eiroa, G. E. Romero, and D. F. Torres, Reissner-Nordström black hole lensing, Phys. Rev. D 66, 024010 (2002). Bozza V. Bozza, Gravitational lensing in the strong field limit, Phys. Rev. D 66, 103001(2002). schen S. Chen and J. Jing,Strong field gravitational lensing in the deformed Hor̆ava-Lifshitz black hole,Phys. Rev. D 80, 024036 (2009); Y. Liu, S. Chen, and J. Jing, Strong gravitational lensing in a squashed Kaluza-Klein black hole spacetime,Phys. Rev. D 81, 124017 (2010); S. Chen and J. Jing, Geodetic precession and strong gravitational lensing in dynamical Chern-Simons-modified gravity,Class. Quant Grav. 27, 225006, (2010); S. Chen, Y. Liu, and J. Jing, Strong gravitational lensing in a squashed Kaluza-Klein Gödel black hole,Phys. Rev. D 83, 124019 (2011); C. Ding, S. Kang, C. Chen, S. Chen, and J. Jing, Strong gravitational lensing in a noncommutative black-hole spacetime,Phys. Rev. D 83, 084005, (2011); S. Chen and J. Jing,Strong gravitational lensing by a rotating non-Kerr compact object,Phys. Rev. D 85, 124029 (2012); C. Liu, S. Chen, and J. Jing, Strong gravitational lensing of quasi-Kerr compact object with arbitrary quadrupole moments, J. High Energy Phys. 08, 097 (2012); L. Ji, S. Chen, and J. Jing, Strong gravitational lensing in a rotating Kaluza-Klein black hole with squashed horizons, J. High Energy Phys. 03, 089 (2014); S. Chen and J. Jing,Strong gravitational lensing for the photons coupled to Weyl tensor in a Schwarzschild black hole spacetime,J. Cosmol.Astropart. Phys. 10, 002 (2015). zhangR. Zhang, J. Jing and S. Chen, Strong gravitational lensing for black hole with scalar charge in massive gravity Phys. Rev. D 95, 064054 (2017). Ruffini R. Ruffini and J. A. Wheeler, Introducing the black hole, Physics Today 24, 30 (1971). NadaliniM. Nadalini, L. Vanzo, and S. Zerbini, Thermodynamical properties of hairy black holes in n spacetime dimensions,Phys. Rev. D 77, 024047 (2008). AnabalonA. Anabalon, Exact black holes and universality in the backreaction of non-linear sigma models with a potential in (A)dS_4 , J. High Energy Phys. 06, 127 (2012). HerdeiroC. A. R. Herdeiro and E. Radu, Kerr black holes with scalar hair, Phys. Rev. Lett. 112, 221101 (2014).MartinezC. Martinez and J. Zanelli, Conformally dressed black hole in 2 + 1 dimensions, Phys. Rev. D 54, 3830 (1996).Aad G. Aad et al, Observation of a new particle in the search for the Standard Model Higgs boson with the ATLAS detector at the LHC, Phys. Lett. B 716, 1 (2012).Chatrchyan S. 
Chatrchyan et al, Observation of a new boson at a mass of 125 GeV with the CMS experiment at the LHC, Phys. Lett. B 716, 30 (2012). Sagnotti A. Sagnotti, A note on the Green-Schwarz mechanism in open-string theories, Phys. Lett. B 294, 196 (1992).Duff M. J. Duff and J. X. Lu, Loop expansions and string/five-brane duality, Nucl. Phys. B 357, 534 (1991).Erler J. Erler, Anomaly cancellation in six dimensions, J. Math. Phys. 35, 1819 (1994).Faedo F. Faedo, D. Klemm, and M. Nozawa, Hairy black holes in N=2 gauged supergravity, J. High Energy Phys. 11,045 (2015). Dall G. Dall'Agata, G. Inverso, and M. Trigiante, Evidence for a family of SO(8) gauged supergravity theories,Phys. Rev. Lett. 109, 201301 (2012). Gibbons G. W. Gibbons and K. i. Maeda,Black holes and membranes in higher-dimensional theories with dilaton fields, Nucl. Phys. B 298, 741 (1988).FanZ. Y. Fan and H. Lu, Charged black holes with scalar hair, J. High Energy Phys. 09, 060 (2015). GenzelR. Genzel, F. Eisenhauer, and S. Gillessen, The Galactic Center massive black hole and nuclear star cluste, Rev. Mod. Phys. 82, 3121 (2010).Weinberg S. Weinberg, Gravitation and Cosmology: principles and applications of the general theory of relativity (Wiley, New York, 1972). | http://arxiv.org/abs/1703.08758v2 | {
"authors": [
"Ruanjing Zhang",
"Jiliang Jing"
],
"categories": [
"gr-qc",
"hep-th"
],
"primary_category": "gr-qc",
"published": "20170326030008",
"title": "Strong gravitational lensing for the charged black holes with scalar hair"
} |
nd→ p(nn) Alexey Kuznetsov [Dept. of Mathematics and Statistics,York University, 4700 Keele Street, Toronto, ON, M3J 1P3, Canada. E-mail:[email protected]] December 30, 2023 =======================================================================================================================================================================3mm,, 141980 ,* E-mail: [email protected] Pacs: 25.40.Kv UDC: 539.171.113mm Keywords: charge-exchange, qusielastic, Fermi-momentum, Hulthen expressionnd→ p(nn).,ε_≈2.23. ,-. nd→ p(nn) . . ^*, . . , . . , . . December 30, 2023 ============================§ - —<cit.> NN- T_n=1.2÷3.7 . ,2002 . R_dp—nd→ p(nn)np→ pnT_n=0.5÷2.0 . <cit.>: R_dp(0) = dσ(0)dt_nd→ p(nn) / dσ(0)dt_np→ pn=2/3·1/1+r^nfl/fl_np→ pn (0), r^nfl/fl_np→ pn (0) spin-flipspin-non-flipnp→ pn<cit.>. , (<ref>),., nd→ p(nn)ε_≈2.23 . ,, q⃗ , ,<cit.> 2m_n≈1879 /c^2. δE=E_n-E_p, E_nE_p —, ε_2.5 ,nn-, .§-n→ p , , . <ref>.∅≈25 σθ≈2σP/P≈3.3 %.M1M2. A,. D_2/H_2-CD_2/CH_2/C-,, . n→ p,-94Gxy, 1x2x. 3xy4xy. S1 ST 1,2,3. σP/P≈0.7 %σθ≈1.2 .Δθ<50 .TOF (Time-of-Flight),np→ dπ^0. <ref>. SH 1,2TOF L, R.T_n=1.4÷2.0 , Δ (1232). (DTS),<ref>. §.§ TOF TOF- S1 (. <ref>) TOF L, R (. <ref>, ),∼10.S1 -TDC 2228, ∼100 . TOF L, R,TOF L TOF R .TOF LR .. <ref> () ∼1.8 .TOF LR,∼300 .(. <ref>). TOF np→ d+π^0,, .,, π- np-, , np→ pn. D_2nn_d→ d+π^-, n_d — . .,T_n=550∼40 % (. <ref>). §.§np→ pn nd→ p(nn),T_n=0.55÷2 Δ (1232). + -. T_n≥1.4: ΔN±15 %. ,- Δ-, . Δ , , ,,-. <cit.>, , π- γ-, Δ-. (. <ref>).() π- 4-, ,H_2/D_2CD_2/CH_2/C-. γ-: γ→ e^+ + e^-. 1.5. ()20-, 5γ-, π^+π^- ,.92 %, — 67 %.∼80 %,:, , (. <ref>)≈5. §,, Δ (1232),. ,T_n=0.8np→ pnnd→ p(nn).(. <ref>), 6 /c. 3 : CH_2→ CD_2→ C, . 20 /c, T_n = 0.810 /c.:f(p)=C_1exp(-(p-M)^2/2 σ^2_1) + C_2exp(-(p-M)^2/2 σ^2_2) .(<ref>) . . 5 c χ^2≈1 (. <ref>. <ref>, . <ref>). np→ pn nd→ p(nn) M_H_2M_D_2 : δP=M_H_2-M_D_2... <ref> T_n = 1.8 . , H_2-5- - , (<ref>).D_2-, , "<"> : 4H_2.δ P = 11 /c.δ P :(. <ref>), , nd→ p(nn) . ,T_n≥1.4, Δ (1232).,. §:δP,np→ pnnd→ p(nn).[ T_n = 0.550.8 , ,,, .] (. <ref>), , δP(. <ref>, ).: R(p) = _ D_2_ H_2 = exp-(p-M_H_2) δP2 σ^2 , : δP=M_H_2-M_D_2 σ=σ_D_2=σ_H_2.,δP≪σ, , -, R(p)M_H_2, : α = dR/dp ≃ -δP·1/2 σ^2 . ,,. σ=σ_H_2 ..α,dR/dp .101 /c,δP_α=0(. <ref>). §.§D_2/H_2-T_n=550800 , .. <ref>T_n=800 . δP6.4±2.1 /c,(<ref>). ,, (. <ref>) .D_2/H_2-[ D_2H_2- 1/M_D_21/M_H_2 , : M_D_2M_H_2 —(. <ref>). .] (. <ref>) Mean=P_n,, n→ p . D_2/H_2-, Δ (1232), P_n. ,P_n (. <ref>, . <ref>), .-(. <ref>). , T_n=0.55÷2.0 -δPD_2/H_2-.∼30 % δP (. <ref>, <ref>),T_n=2.03 %, .,,.1 , , .§.§D_2/H_2-200 /c, δP 010 /c, .,{α_n} . (. <ref> <ref>) D_2-, 1 /c. , , 100 %, .:σ(δP) = 1/k·1/11∑_n=1^11σ(α_n) . , k,:αδP. §T_n=0.55÷2.0 ,6.5 /c (. <ref>. <ref>).δP. , ,T_n=550800 , , . -, T_n=1.2 ,,(<ref>),. , .,6.5 /c. ,T_n=800 , :EdE =PdP⇒δE|_800 ≈ 5.4,2.5. § , - <cit.>.R_dp , —, pd→ n+(pp).647800 . , δE∼7 .≈10 ,., , D. S. Ward 1971 .<cit.> . ,nd→ p+ (nn)., , . ,T_n=794 , , T_p=800 pd→ n(pp).,7 /c.,, .nd→ p(nn), ,T_p∼788 ,,. 1<cit.> . 2005 ., 800 .:, TOF. ,δP≈6 /c . ,(CH_2→CD_2→C)3 , ,.... . .,. , nd→ p(nn) , ,(<ref>).2006 .,. , , .§.§δP δE=E_n-E_p,nd→ p(nn),, ( . ). , , ^3He^3H,, ,. :√(|m^2_n+P^2_n) + m^1_ ^3He = √(|m^2_p+P^2_p) +√(|m^2_ ^3H+q^2) ,[2mm] q⃗ = P⃗_⃗n⃗-P⃗_⃗p⃗⇒q^2 = δ^2P + 4P_n(P_n-δP) sin^2θ2. θ —.δP=P_n-P_p(<ref>).,-:√(|m^2_n+P^2_n) + m_d= √(|m^2_p+P^2_p) + √(|m^2_nn+q^2) . m_nn . ,, 2m_n≈1879 /c^2,δEε_≈2.23δP2.5 /c (. <ref>),. ,. <cit.> 1000, n→ p .σP/P≈3.3 %,- :nd→ p(nn). , .m_nn2m_n : m_nn = 2m_n+ε_ ,ε_≈2.23 .,,d→ nn - P_F,. 
:m_nn = 2√(m^2_n+P^2_F) .(<ref>) , P_F=√(m_nε_)≈45.7 /c.m_nn (<ref>)[,np→ pn :√(|m^2_n+P^2_n) + m_p= √(|m^2_p+P^2_p) + √(|m^2_n+q^2) ,δP_p , θ,. :δP = δP_p ' - δP_p . ] δP_p ' - .§.§- P_F, , ., , (. <ref>, . <ref>, <ref>. <ref> ). P_F≈86 /c.<cit.>(. <ref>, . <ref> ), P_F80 /c. (. <ref>, . <ref>). ,-,—., -(<ref>)P_nP_F. , q⃗,, nd- , q⃗/2,-q⃗/2.:m_nn = 2√(m^2_n+(P⃗_F+q⃗/2)^2) , E_nn = √(4m^2_n+4(P⃗_F+q⃗/2)^2+q^2) .- P⃗_Fq⃗.-T_n=0.5÷2.0 .. <ref>np→ pnnd→ p(nn)T_n=1.0 . ,[,, ,-, , (. <ref>, . <ref>).].nd→ p(nn) 2-3 /cnp→ pn. δP , ,:δP = P_p - P_p ' . , δPT_n=0.5÷2.0 ,(. <ref>).nd→ p(nn). , , .. γ+d→ n+p- . .(<ref>) P_F=45.7 /c(. <ref>),. δP , (<ref>, <ref>, <ref>),- P_F 57.9±4.8 /c.§.§ ^3S_1→ ^1S_0,δP (. <ref>)nn- (. <ref>) (<ref>)., 8 (. <ref>),: m_nn=1882.7±0.6 /c^2. nd→ p(nn) nn-ε_nn=3.5±0.6 .<cit.>, dp→ (pp)n., d→ nn (. <ref>, . <ref>), , s- R_0(r, φ),φ≡φ(p) —^1S_0-[φ SP07 <cit.>,nppp-.- . ]. :Ψ_nn(p)=C_nn∫_0^∞ e^-α r-e^-β r/r sin(pr/ħ+φ)/pr/ħ 4π r^2 dr = [-2mm] =C_nn[cosφ+αħpsinφ/(αħ)^2+p^2-cosφ+βħpsinφ/(βħ)^2+p^2] . C_nn: ∫Ψ^2_nn 4π p^2 dp≡1, d- (≈4 %) , (≈17 %)ε_nn,, (q≈20 /c)d- . ε_nn3.5 (. <ref>), , , .-, -. , T_n=550 ε_nn14.35.6 , s- nppp- . -δP (. <ref>. <ref>).nn-^1S_0-, s- np- <cit.>, δPT_n=0.5÷2.0 7 /c.,φ pp-, dp→ (pp)n,. T_d=1.17ANKE COSY <cit.>.q∈[0, 100] /c dσ/dε_pp,ε_pp —pp-,(. <ref>, Ψ_H ^1S_0^ pp). ,s- ^3S_1→ ^1S_0., (<ref>) <cit.>,φ . nd→ p(nn), , -,d→ nn., (<ref>) ,. ,nn-(. <ref>),m_nn, ,2-3 /c.T_n=550800, , - . §nd→ p(nn)np→ pnT_n=0.5÷2.0 .δP=6.5±2.5 /c. ,Δ (1232),, ., δP,,Δ.,nd→ p(nn).δP≈14 /c,. nn-- P_F≈57.9 /c, δP≈6.5 /c. ^3S_1→ ^1S_0s-nppp-nn- 56.6,ε_nn=3.5±0.6 , .- δPnp→ pn nd→ p(nn) . nn- . , δP, c σP_n/P_n≈1.0 %, D_2-,- <cit.>. ,: . . , . . . .. . . . ., 02-02-17129 07-02-01025. §. . 1951 . <cit.>: Ψ_H(r)=C e^-α r-e^-β r/r , : C=√(αβ(α+β)/2π(α-β)^2) ,: α=45.7 /ħ cβ=260 /ħ c.C: ∫Ψ^2_H(r) dV≡1.(<ref>) e^-iħp⃗r⃗:Ψ_H(p)= √(ħαβ(α+β))/π|α-β| [1/(αħ)^2+p^2-1/(βħ)^2+p^2] . pp+dp: dw(p)=Ψ^2_H(p) 4π p^2dp (. <ref>). :<p> = ħ 4αβ(α+β)/π(α-β)^2(α^2+β^2/β^2-α^2lnβ/α-1), √(<p^2>) = ħ √(αβ) .§,σθ≈2 . .np→ pnP_p≈ P_n, . . <ref>n→ pT_n=1.0 .T_d=2.0 ,- - (. <ref>).(. <ref>, . <ref>).(<ref>).,,.T_n=0.5÷2.0 .my-ieeetr | http://arxiv.org/abs/1703.08820v1 | {
"authors": [
"R. A. Shindin",
"D. K. Guriev",
"A. N. Livanov",
"I. P. Yudin"
],
"categories": [
"nucl-th",
"nucl-ex"
],
"primary_category": "nucl-th",
"published": "20170326143352",
"title": "Interesting effect of the nd -> p(nn) reaction"
} |
We discuss a new method for unveiling the possible blazar AGN nature among the numerous population of Unassociated Gamma-ray Sources (UGS) in the Fermi catalogues. Our tool relies on the positional correspondence of the Fermi object with X-ray sources (mostly from Swift-XRT), correlated with other radio, IR and optical data in the field. We built a set of Spectral Energy Distribution (SED) templates representative of the various blazar classes, and we quantitatively compared them to the observed multi-wavelength flux density data for all Swift-XRT sources found within the Fermi error-box, by taking advantage of some well-recognised regularities in the broad-band spectral properties of the objects. We tested the procedure by comparison with a few well-known blazars, and tested the chance of false positive recognition of UGS sources against known pulsars and other Galactic and extragalactic sources. Based on our spectral recognition tool, we find blazar candidate counterparts for 14 2FGL UGSs among the 183 selected at high Galactic latitudes. Furthermore, our tool also allows rough estimates of the redshift for the candidate blazars. In the few cases in which this has been possible (i.e. when the counterpart was an SDSS object), we verified that our estimates are consistent with the measured redshifts. The estimated redshifts of the proposed UGS counterparts are larger, on average, than those of known Fermi blazars, a fact that might explain the lack of previous association or identification in published catalogues.

High-energy astrophysics: observations – blazar – galaxies: active – space astronomy.

§ INTRODUCTION

The Fermi γ-ray Observatory <cit.> is fostering dramatic progress in the field of high-energy astrophysics. With its high throughput and almost instantaneous all-sky vision, the mission has not only offered a tenfold increase in the number of catalogued sources from 100 MeV to 100 GeV with respect to previous γ-ray missions, but also continuous monitoring for variability studies and a huge spectral coverage <cit.>. As often happens when such a revolutionary new astronomical facility is put into operation, the Fermi mission is also producing unexpected outcomes: of the 1873 sources in the Second Catalogue of the Large Area Telescope (2FGL) <cit.>, as many as one-third (576 sources) lack a reliable association with sources detected at other wavelengths, henceforth the Unassociated Gamma-ray Sources (UGS). Many more are also found in the last release of the Fermi catalogue (3FGL), where 1011 sources turn out to be unassociated among the 3034 objects reported. The majority (about one thousand) of the 1297 Fermi associated sources in the 2FGL have been classified as active galactic nuclei (AGN), in particular blazars (BL Lac objects and flat spectrum radio quasars). It is thus likely that a large number of the UGSs might hide previously unknown sources of this category. Blazars are the most extreme engines of nature, producing a larger amount of radiant energy than any other cosmic source. From a sub-parsec scale region, they accelerate entire plasma clouds to relativistic speeds, transforming the energy of fast (even maximally) rotating super-massive black holes and gravitational energy into radiation and mechanical power. Blazars are not only unique machines to test extreme physics, but can also be exploited as lighthouses to probe the distant universe.
Blazars are not only unique machines to test extreme physics, but can also be exploited as lighthouses to probe the distant Universe. Their emitted very high energy (VHE; E > 100 GeV) photons are known to interact with the low-energy photon backgrounds <cit.> and to produce e^-e^+ pairs. Observations of blazar VHE spectra and of their spectral absorption, e.g. with Cherenkov observatories or with the Fermi satellite itself, are therefore used to probe the extragalactic background light (EBL) <cit.>. In this respect, the identification of distant, high-redshift blazars at high energies is particularly relevant, among others, to estimate the earliest EBL components due to the first-light sources (Population III stars, galaxies or quasars) in the Universe (see e.g. Franceschini A., & Rodighiero, G., 2017, to appear in A&A). Expanding our knowledge of the blazar population at high energies and high redshift is then a priority topic for various reasons. Several papers in the literature are dedicated to methods for the identification of the Fermi unassociated sources. <cit.>, <cit.> and <cit.> developed statistical algorithms based on selected UGS γ-ray features, such as spectral and variability information, able to discriminate between AGNs and pulsars. Other works focused on the search for AGN candidates among the 2FGL UGSs by analysing their long-wavelength counterparts: <cit.> and <cit.> compiled a thorough catalogue of ATCA (Australia Telescope Compact Array) radio sources lying inside the UGS error-boxes. Other radio surveys were published by <cit.> and <cit.>. <cit.> and <cit.> proposed AGN candidates on the basis of the colours of the infrared counterparts in the Wide-field Infrared Survey Explorer survey lying within the Fermi error ellipses; in <cit.> a complete analysis of the X-ray data provided by the Swift satellite has been performed to search for X-ray counterparts, and in <cit.> and <cit.> a multi-wavelength approach has been adopted. The present paper contributes to the effort of exploiting such a unique all-sky γ-ray survey for a search of the high-energy emitting AGN population, with a new approach based on a total-band Spectral Energy Distribution (SED) analysis. The blazar non-thermal radiation dominates, and often hides, the emission from the host galaxy or from the AGN substructures. For most blazars, especially for BL Lac objects, this results in featureless optical spectra, thus hindering redshift measurement. Our tool for blazar recognition among the UGSs has therefore been tailored to offer at the same time a rough estimate of their redshift. This also takes advantage of the known relationship between the frequencies of the synchrotron and inverse-Compton (IC) peaks and the source luminosities (sometimes referred to as the blazar sequence). In any case, our method is completely empirical, model-independent, and not relying on prior assumptions, except for the requirement that the UGSs proposed as Fermi blazar candidates are detected in X-rays.
Note that in the present paper we propose a possible physical relationship between the UGS and the blazar-like object, to be considered as a candidate for the association. In our case, improving from a proposed association to an identification of the source would require, among other things, spectroscopic follow-up and confirmation. With the latest Fermi catalogue release (3FGL), the number of UGSs on which to exercise our blazar recognition tool is further substantially amplified, to as many as 1010 objects out of a total of 3034 sources, with chances to expand the number of γ-ray selected AGNs. While our UGS primary selection relies on the 2FGL catalogue, we make full use of the newer 3FGL to confirm those sources and to improve their error-box and Fermi photometry. The paper is organised as follows. In Section 2 we review the UGS selection. In Section 3 and Appendix A we discuss our procedures for the counterpart selection of UGSs. In particular, Appendix A includes finding charts and the multi-wavelength SED for the UGS counterparts. We define as a potential association of an UGS a set of sources consistently detected in various bands, all positionally coincident, and within the Fermi error-box. In Section 4 we build the library of multi-wavelength SED templates from known blazars, selected from the 3FGL catalogue. This SED template set is then used in Section 5 to build up our tool for blazar recognition and characterisation. In this section the validity of the method is verified on bona-fide blazars and against known Galactic and extragalactic sources, and the rate of false positive associations is also tested. We then proceed in Section 6 to present the results of our proposed method for a set of UGSs, and defer to Section 7 some discussion and the conclusions. Throughout the paper we assume a standard Wilkinson Microwave Anisotropy Probe (WMAP) cosmology with H_0=70 km s^-1 Mpc^-1, Ω_Λ=0.7, and Ω_M=0.3 <cit.>.

§ THE UGS SELECTION

For as many as 576 sources in the Second Fermi-LAT catalogue, no plausible associations or identifications have yet been found. These make up an important component of the high-energy sky, and may hide new classes of AGNs, like the extreme blazars <cit.>, Dark-Matter (DM) candidates <cit.>, or even unexpected high-energy phenomena. To set up a procedure assisting the recognition of AGN populations among Fermi UGSs, we selected the UGS sample from the 2FGL catalogue following these basic selection criteria (see the sketch after this list):

* No association in the 2FGL and no association in other γ-ray catalogues (those from the EGRET and AGILE missions in particular), or in catalogues at other wavelengths considered by the Fermi collaboration;
* Sky position outside the Galactic plane, with a Galactic latitude |b|>20^∘. Many UGSs are in the Galactic plane, but we exclude this region because it is very crowded and confused, and the Fermi procedure hardly converges towards correct associations. Furthermore, this gives us a higher probability of selecting extragalactic sources.

An additional possible criterion is the variability index on the 2-year baseline of the Fermi 2FGL observations. This might be used to select DM candidates among UGSs, because they are expected to be stable in time <cit.>. In any case, we do not consider flux variability in our primary UGS selection, as most of the sources do not show significant flux variation. 183 UGSs from the 2FGL catalogue survive the selection criteria.
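As an illustration, the selection above can be expressed as a simple catalogue filter. The sketch below is our own minimal rendition, not the actual pipeline: the file name is hypothetical, the CLASS1 and GLAT column names follow the Fermi FITS catalogue convention, and the cross-checks against other γ-ray and multi-wavelength catalogues are omitted.

```python
# Minimal sketch of the Sec. 2 selection, assuming a local FITS copy of the
# 2FGL catalogue; 'CLASS1' (association class) and 'GLAT' (Galactic latitude)
# follow the Fermi catalogue column convention. The additional cross-checks
# against EGRET/AGILE and other-wavelength catalogues are omitted here.
import numpy as np
from astropy.io import fits

def as_str(x):
    # FITS string columns may come back as bytes or str, depending on version
    return x.decode() if isinstance(x, bytes) else str(x)

with fits.open("gll_psc_v08.fit") as hdul:                  # hypothetical file name
    cat = hdul[1].data

assoc = np.array([as_str(c).strip() for c in cat["CLASS1"]])  # '' => unassociated
is_ugs = (assoc == "")                                        # criterion 1
high_lat = np.abs(cat["GLAT"]) > 20.0                         # criterion 2: |b| > 20 deg

sample = cat[is_ugs & high_lat]
print(len(sample), "UGS candidates")                          # 183 with the cuts of Sec. 2
```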
While we referred to the 2FGL for our UGS selection at the time when the present project started, for all subsequent analyses we used data from the 3FGL catalogue, which yields a decisive improvement in the Fermi/LAT source position uncertainty and photometry.

§ SEARCH FOR UGS MULTI-WAVELENGTH COUNTERPARTS

In spite of the improvement allowed by the 3FGL, the association and identification of the Fermi sources is complicated, or even prevented, by the large Fermi LAT error-boxes, typically of a few arc-minutes radius (for a fraction of UGSs this may even exceed ∼10 arcmin). Our approach for finding potential counterparts for all of the 183 UGSs was to identify all detected X-ray sources inside the Fermi error-box and, if there are X-ray sources, to check for the existence of counterparts at lower energies (radio, IR, optical) in order to build up a broad-band SED. Following previous works in the literature <cit.>, our UGS recognition procedure is primarily based on the available Swift/XRT X-ray imaging data over the Fermi source error-box position. Without a reliable X-ray counterpart, the method cannot be applied. Not all γ-ray sources have detectable X-ray counterparts inside their error-box. This lack may be due to the intrinsic faintness of the source, to X-ray flux variability, to the shallow depth of the X-ray exposure, or to the lack of X-ray observations of the field. Hints about the fraction of X-ray emitting blazars, and among them of γ-ray emitting objects, can be derived from the BZCAT catalogue <cit.>. It represents an extensive, although by no means complete, list of sources classified as blazars, useful to look for general trends. The sample of 3561 blazars of the 5th BZCAT contains 63% of objects detected in the soft X-ray band and 28% of Fermi/LAT sources. Among the latter, 79% are X-ray emitters. In conclusion, a large fraction of γ-ray blazars has an X-ray counterpart within the Fermi/LAT error-box. We then expect that a substantial fraction of Fermi UGSs might be within reach of our analysis, in consideration of the sensitivity and extensive coverage of the UGS sample by the Swift/XRT telescope. The counterparts found for the UGSs are proposed associations, to be subsequently verified once new γ-ray catalogues are matched to other samples for the next releases. Thanks to the X-ray positional uncertainties, usually of the order of a few arc-seconds, we typically have one to a few sources with multi-wavelength photometric data inside the Swift source error-box. Subsequent to the X-ray detection, the radio band is important for our recognition work, since all blazars discovered so far have been identified as radio-loud sources. Although primarily dedicated to the identification of Gamma-Ray Bursts, the Swift/XRT telescope <cit.>, thanks to its rapid responsivity and high sensitivity, has been systematically used to obtain X-ray follow-up observations for most of the UGSs (e.g. <cit.>). So far, among the 183 UGSs selected, ∼130 have dedicated Swift observations. In our XRT analysis we only used the PC mode[Photon-counting mode: PC mode is the more traditional frame-transfer operation of an X-ray CCD. It retains full imaging and spectroscopic resolution, but the time resolution is only 2.5 seconds. The instrument is operated in this mode only at very low fluxes (useful below 1 mCrab).]
data. We analysed them through the UK Swift Science Data Centre XRT tool[http://www.swift.ac.uk/user_objects/], which provides X-ray images, source positions <cit.>, spectra <cit.> and light curves <cit.> of any object in the Swift XRT field of view. For our purposes, we used the total XRT 0.3-10 keV energy band to generate the X-ray image of the UGS sky field. The X-ray sky maps of our UGS sample are reported in the figures of Appendix A, and we checked which X-ray sources (green circles) fall inside the 3FGL 95% confidence error-box (yellow ellipse). For comparison, we also indicated with magenta ellipses the positional uncertainties of the 2FGL, with white crosses the X-ray sources of the 1SXPS Swift XRT Point Source Catalogue <cit.>, and with cyan circles (with radius equal to the semi-major axis of the positional error) or ellipses the radio sources of the NRAO VLA Sky Survey (NVSS) and Sydney University Molonglo Sky Survey (SUMSS) catalogues. For each of the X-ray sources within the Fermi error-box, we provide the position, with the corresponding error radius, and the X-ray spectrum. Two types of position determinations are available for the XRT sources: the un-enhanced position, estimated using only a PSF fit, and the enhanced position <cit.>, where the absolute astrometry is corrected using field stars in the UVOT telescope; the systematic uncertainty is then decreased to 1.4" (90% confidence), compared to the 3.5" systematic for the un-enhanced positions. The X-ray energy spectrum is estimated in the 0.3-10 keV band. The output spectra are downloaded and then fitted using the XSPEC software (version 12.8.1g) <cit.> of the HEASOFT Ftool package. According to the number of total counts, the spectral data are analysed in different ways. If the source has less than 25 counts, the total flux is calculated with the Mission Count Rate Simulator WebPIMMS[http://heasarc.gsfc.nasa.gov/Tools/w3pimms.html], using a power-law model with photon spectral index 2. If the found X-ray counterpart is reported in the 1SXPS catalogue <cit.>, we consider the corresponding photometric data points provided by the catalogue and available in the ASI-ASDC database. With more than 25 total counts, we used an un-binned analysis by applying the Cash statistics. For bright sources with at least 150 counts, we binned the spectra (with the Ftool grppha) with a minimum of 20 counts per spectral bin. Once the list of the X-ray counterparts inside the Fermi error-box is defined, our next step is to search for counterparts in the radio, infrared and optical bands, around their XRT enhanced position (or the un-enhanced one if the former is not available), using a search radius corresponding to the 90% confidence error radius (green circle, as exemplified in Fig. <ref>, upper panels of Appendix A). The results are displayed in the close-up images (e.g. Fig. <ref>, upper right panel) where, on the XRT sky map, we superimpose entries from the radio NVSS <cit.> and SUMSS <cit.> catalogues, from the WISE <cit.> (blue crosses) and 2MASS <cit.> (green diamonds) catalogues in the near- and mid-infrared bands, and finally from the USNO-B1.0 catalogue <cit.> or the Sloan Digital Sky Survey (SDSS) catalogue <cit.> (magenta crosses) in the optical. The error on the optical and IR positions is neglected, since it is several times smaller than the uncertainty on the X-ray position. On the other hand, a spurious optical source may fall within the X-ray error box. Taking into consideration the number of objects and the sky coverage in the SDSS and USNO optical catalogues, the source sky density is ∼38000 deg^-2; 2MASS and WISE have a lower source density. The expected number of accidental optical sources in an error box of typical radius from 2" to 5" (the minimum and maximum in our sample) then spans from 0.03 to 0.2 (see the sketch below). Therefore, in our UGS sample of 14 objects, 5 of them with an error-box radius of 4", we expect up to ∼1 spurious optical source in the X-ray error boxes. We note that in our sample there is no UGS with more than 1 optical source in the X-ray error box.
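The 0.03-0.2 range quoted above follows from a simple Poisson areal argument (expected counts = source sky density × error-box area). A minimal sketch, with the density and radii taken from the text:

```python
# Poisson estimate of chance optical coincidences inside an X-ray error box:
# expected counts = source sky density x error-box area.
import math

DENSITY = 38000.0  # SDSS/USNO optical source density, deg^-2 (see text)

def expected_spurious(radius_arcsec, density_deg2=DENSITY):
    """Expected number of unrelated optical sources within a circular
    error box of the given radius (the Poisson mean)."""
    radius_deg = radius_arcsec / 3600.0
    return density_deg2 * math.pi * radius_deg ** 2

for r in (2.0, 4.0, 5.0):  # minimum, typical and maximum radii in our sample
    print(f'r = {r}": N_exp = {expected_spurious(r):.3f}')
# r = 2" -> 0.037, r = 4" -> 0.147, r = 5" -> 0.230, matching the quoted range
```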
As a further check on the goodness of the XRT position estimates, we superimposed on the XRT image the positions of the X-ray sources from the 1SXPS Swift XRT Point Source Catalogue reported by <cit.>. As can be seen in the sky maps of Appendix A, each XRT position found with our procedure is compatible with the 1SXPS positions. The multi-wavelength counterpart data set of a given UGS is then used to create the broad-band SEDs (Fig. <ref>, bottom panel). We combined these data through the SED Builder tool of the ASI ASDC database[http://tools.asdc.asi.it/SED/]. The Fermi flux points are taken from the 3FGL catalogue. All the plotted X-ray data are corrected for Galactic absorption as available from the XSPEC package. If available, we also include in the analysis the X-ray data points reported by <cit.> (black points) and in the 1SXPS catalogue, which we can consider a cross-check of our analysis. In Appendix A we report details about our UGS counterpart search procedure for a sample of 14 Fermi UGSs among the 183 objects of our primary list.

§ DEFINITION OF A BLAZAR SED TEMPLATE SET

Since we are interested in recognising blazar candidates among the Fermi UGS population, we built a tool for the systematic comparison of the broad-band SED of UGS counterparts with spectral templates representing various categories of the blazar populations. One possibility would be to use the so-called blazar sequence reported by <cit.> and <cit.>, and updated in <cit.>, which is defined in terms of functional dependencies of the spectral parameters for both the synchrotron and IC components. However, as explained in Sect. <ref>, we preferred to adopt a different, more empirical, approach.
§.§ A Sample of Known Blazars

We defined a reference sample of known blazars, for which we collected all the available photometric data, grouping them into four categories defined in the Fermi 3LAC catalogue <cit.> and characterised by different spectral properties and luminosities: the Low-Synchrotron-Peaked sources (LSP, with synchrotron peak frequency <10^14 Hz), the Intermediate-Synchrotron-Peaked (ISP, with a synchrotron peak frequency between 10^14 and 10^15 Hz), the High-Synchrotron-Peaked (HSP, peak frequency >10^15 Hz), and the extreme High-peaked BL Lacs (EHBL). The latter class <cit.> is a newly emerging population of BL Lac objects with extreme properties (a large ratio between the X-ray and the radio flux, and a hard X-ray continuum locating the synchrotron peak in the medium-hard X-ray band). In order to build a blazar SED template library, we started our selection from all blazars (FSRQs and BL Lac objects) present in the 3LAC catalogue at high Galactic latitude (|b|>20^∘), with a certain SED classification and a Likelihood Ratio Reliability between the Radio/Gamma and X-ray/Gamma bands greater than 0. We cross-matched this preliminary selected sample with the BZCAT, and we rejected all objects without an X-ray flux or with an uncertain or unknown redshift in the BZCAT. To ensure a good spectral coverage and a precise SED characterisation, we performed a cross-match with the WISE, 2MASS and Swift (1SWXRT) catalogues. We use only the identified LSPs of the 3LAC. Finally, we performed an extensive search in the literature to assess the robustness of the published redshifts, also examining the published optical spectra, and we selected only the sources with a secure redshift. We also include the source PG 1553+113, having a very extensive multi-band photometry but a still uncertain distance, for which we adopt a redshift of 0.5 following <cit.> and <cit.>. PG 1553+113 is thought to be among the most distant HSP objects known and has been considered as an extragalactic standard candle in the VHE band. Moreover, PG 1553+113 shows very moderate variability at all frequencies, which makes it a good candidate for building a robust average SED. The final list of our adopted blazar templates is composed of 50 sources, including 20 LSPs, 12 ISPs, 16 HSPs and 2 EHBLs, and is reported in Table <ref>, where we indicate the source name, the 3LAC blazar SED class, and the redshift from the literature. We consider this list of objects as sufficiently representative of the various blazar categories. Further enlarging this template database would be possible and can be done in the future. The ASI Science Data Centre Database and SED Builder tool have been used to collect the whole set of archived historical observations for every blazar of our sample. For each source we created a data vector containing the monochromatic luminosities versus the emission rest-frame frequencies, computed from the redshift of the object and standard expressions for the luminosity distance; a minimal sketch of this conversion is given below. K-corrections for all the photometric data have been computed assuming a flat, frequency-independent spectrum in ν F(ν). The data points range from the radio to the high-energy frequencies, and the complete list of the data catalogues used is reported in the ASI Science Data Centre SED Builder Tool[http://tools.asdc.asi.it/SED/docs/SED_catalogs_reference.html]. Examples of the SED data collected for four blazars are shown in Figure <ref>.
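As an illustration of this conversion, the following sketch turns one observed νF(ν) point into a rest-frame νL(ν) point, assuming the cosmology of Sect. 1 and the flat νF(ν) spectrum adopted for the K-correction (for which the correction reduces to a blueshift of the frequency axis). It is our own minimal rendition, not the ASDC code.

```python
# Sketch: one observed flux point -> rest-frame nu L(nu), assuming the
# flat-LCDM cosmology of Sect. 1 and a flat nu F(nu) spectrum, for which
# the K-correction reduces to a blueshift of the frequency axis.
import math
import astropy.units as u
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70.0, Om0=0.3)

def rest_frame_point(nu_obs_hz, nu_f_nu_cgs, z):
    """Return (log10 rest-frame frequency [Hz], log10 nu L(nu) [erg/s]);
    nu_f_nu_cgs is the observed nu F(nu) in erg cm^-2 s^-1."""
    d_l_cm = cosmo.luminosity_distance(z).to(u.cm).value
    nu_l_nu = 4.0 * math.pi * d_l_cm ** 2 * nu_f_nu_cgs
    return math.log10(nu_obs_hz * (1.0 + z)), math.log10(nu_l_nu)

# Example: an X-ray point of 1e-12 erg cm^-2 s^-1 at ~1 keV (2.4e17 Hz), z = 0.3
print(rest_frame_point(2.4e17, 1.0e-12, 0.3))
```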
§.§ Building the SED Template Set

Once the archive of multi-frequency photometric data for all the sample blazars was collected, the next step was to fit these data with a simple analytic representation for each object. We first divided the data into equally spaced frequency bins and calculated the average of the logarithms of the luminosity measurements inside each bin. This allowed us to minimise the effects of flux variability. Altogether we obtained a library of 50 averaged blazar SEDs for the objects in Table <ref>. For each of these sources we fitted the average photometric data using a simply parametrised analytic form: a double power-law with exponential convergence, one component representing the low-frequency synchrotron peak, the other the high-frequency IC peak. This function has the following expression:

ν L(ν) = ν L_1(ν) + ν L_2(ν)

with

ν L_1(ν) = A · (ν/ν_1)^{1-α} · exp{ -[log(1+ν/ν_1)]^2 / (2σ_1^2) }
ν L_2(ν) = B · (ν/ν_2)^{1-α} · exp{ -[log(1+ν/ν_2)]^2 / (2σ_2^2) }

This has 7 free parameters: A and B are the normalisations of the two emission components; α determines the slopes of the two power-law functions, which are assumed to coincide (consistent both with the SSC assumption for blazar modelling and with the data); ν_1 and ν_2 are the characteristic frequencies of the two emission bumps; and finally σ_1 and σ_2 determine the widths of the two bumps. Examples of the resulting fitting curves for the average SEDs of the four representative blazars are reported in Fig. <ref>, where the SED data are the geometric averages of the luminosity measurements inside each frequency bin. Note that two of the sources, the low-redshift Mkn 501 and 1ES 0229+200, show evidence for a narrow peak at log ν ≃ 14, to be attributed to the host galaxy, which is not fitted by our analytic formula. Indeed, the formula aims at reproducing only the non-thermal power-law blazar emission. When comparing the SED templates to UGS SED data, we check a posteriori whether a galactic contribution might show up (which essentially does not happen in most of our investigated cases, which tend to be sources at high redshift where the blazar emission dominates over the host galaxy). We adopted this simple analytic representation for the average SEDs, instead of using more physical models for the blazar emission (like the SSC model itself), in order to be model-independent and to achieve good adherence to the data. In particular, SSC models have some difficulties in reproducing the data in the radio band. Unlike the approach of <cit.>, <cit.> and <cit.>, we did not average out the SEDs of the different sources. We collect in Fig. <ref> all the SED templates of our spectral library, grouped into the four blazar classes: LSP, ISP, HSP, and EHBL. The plot reveals the general behaviour found by <cit.> and <cit.>: the LSPs (red curves) occupy the highest luminosity values of the sequence, with peak emission frequencies falling at lower energies with respect to those of the HSPs (mainly BL Lac objects).

§ ASSOCIATING UGS TO BLAZAR CLASSES

Once the broad-band spectral properties of the blazar populations are defined, we proceed to compare them with the SEDs of all our Fermi UGSs discussed in Sec. <ref>.
To this purpose, we developed an algorithm to assess the similarity of the UGS SED with those of blazars of a given class, and to obtain some information on the blazar category and the redshift.

§.§ The algorithm

Our blazar recognition tool requires the following steps to be performed[The numerical code for the blazar recognition is written in IDL and SuperMongo, for the ease of graphical comparison between the photometric data and the SED templates]:

* We start by considering the plots of luminosity versus frequency reported in Fig. <ref> for all four blazar categories, including the SEDs of all sources in each category. The units in these plots are the logarithm of the ν L(ν) luminosity in erg/s on the y-axis, and the logarithm of the photon frequency ν in Hz on the x-axis.
* Using the observed multi-wavelength fluxes of a given UGS counterpart, we convert them into ν L(ν) luminosities by assuming a suitable grid of redshifts z spanning a range of values from 0.05 to 2.0. Here again, we calculate the K-corrections adopting flat spectra, to be consistent with Sec. <ref> for the template set. We then over-plot the luminosity data-points on the SED templates of all blazar classes, as illustrated in Fig. <ref> and following.
* For the same UGS counterpart, for every redshift of the grid and with respect to every j-th SED of the blazar template set, the χ^2_min statistic is calculated as the minimum over j of

χ^2_j = ∑_i {log[ν_i L_i(ν_i)] - log[ν_i l_j(ν_i)]}^2 / {0.01 log[ν_i l_j(ν_i)]}^2 .

The reduced χ^2_ν,j is obtained by dividing by the number of data-points. The minimum of this quantity, χ^2_ν,min, offers a measure of how close that SED template is to the observational UGS SED for a given assumed redshift in the grid.
* The second quantity that we use to estimate the similarity of the UGS SED with the blazar SED templates (for a given assumed UGS redshift in the grid) is the Minimum Average Distance (MAD), defined as

MAD = (1/N_SED) · |∑_j χ_j|,  with  χ_j = ∑_i {log[ν_i L_i(ν_i)] - log[ν_i l_j(ν_i)]} / {0.01 log[ν_i l_j(ν_i)]}

where i runs over all photometric data-points for that UGS, j is the index flagging every SED of the blazar template set, and ν_i l_j(ν_i) is the luminosity of the j-th SED template interpolated at the frequency ν_i. The normalisation factor N_SED is the number of templates of a given class of blazars. MAD is a measure of how far the UGS SED, for a given assumed redshift of the grid, is from the distribution of the SEDs of that blazar template category. Note that while χ^2_ν,min measures the match of the data to a single template SED, MAD refers to the whole distribution of the SEDs in the blazar class and how far it is from the object data-points (a compact numerical sketch of both statistics is given later in this Section).
* For a given UGS and its counterparts, the goodness of the recognition, the best-guess redshift and the spectral class are found by first considering the χ^2_ν,min statistics. In the cases in which we have more than one counterpart, or when there are degeneracies in the χ^2 solutions as for the redshift, the MAD statistics is used to get a qualitative measure of the relative likelihood of the various solutions.

The MAD statistics measures the distance of the assumed UGS luminosity data-points from the whole distribution of the SED templates. Consequently, it provides us with a first hint about the blazar class and luminosity, and thus about the source redshift by comparison with the observed fluxes. The χ^2_ν,min value is instead more closely related to the spectral shapes of both the UGS and the individual spectral templates and, in particular, to the slopes of the rising and descending parts of the two spectral components of the UGS SED. Hence, it evaluates the degree of similarity in shape between the SED of the UGS and the blazar templates. Our blazar recognition procedure thus also offers a method for estimating a tentative redshift for the UGS, in cases in which the agreement between the observational and template SEDs is good. We deem this a valuable contribution, considering the difficulty of measuring blazar redshifts and the number of objects for which the redshift is unknown.

§.§ Characterising the χ_min^2 and MAD statistics

The reduced χ_min^2 statistics has a well-defined χ^2 theoretical probability distribution under the assumptions of statistically independent data and Gaussian-distributed errors. Unfortunately, this is not our typical case, because we are in the presence of flux variability and ill-defined photometric uncertainties. Consequently, we have adopted arbitrarily fixed errors for all data points (corresponding to 1% of the luminosity value). In conclusion, we cannot simply use the χ^2 for testing our best-fit solutions, nor do we have any reference statistics for our MAD test. We then proceeded to a rough characterisation of the χ_ν,min^2 and MAD statistics in the following way. For the χ_ν,min^2 test, we considered all blazars of the template set discussed in the previous section. For all these sources we ignored their redshift and calculated the ν L(ν) values by adopting redshifts within our grid of values z=0.05 to 2. Then, we calculated the values of χ_ν,min^2 from Eqn. <ref> for all redshifts and all sources by comparing such estimated ν L(ν) values with all best-fit SEDs, excluding from the calculation the SED template of the source itself. The resulting values are reported as black histograms in Fig. <ref> (top). The blue histogram represents instead the χ_ν,min^2 distribution for the a-priori known good solutions, i.e. the solutions for which the blazar class and redshift are consistent with the real source properties (we consider a solution good if the recovered redshift is within δz/z ≤ 0.1 of the real value). The black histogram thus details the distribution of the χ_ν,min^2 values that would be obtained from a blind application of the test to blazars of unknown class and redshift. The fraction of random solutions with χ_ν,min^2 ≤ 1.1 is 4%, which can be considered as our approximate confidence figure. Note that we expect the χ_ν,min^2 test to yield higher values on average when applied to Fermi objects other than blazars; this will indeed be checked against non-blazar sources in Sec. <ref>. So the figure of 4% can be considered a conservative one.
We will thus consider as good solutions those with χ_ν,min^2 ≤ 1.1. We have performed a similar characterisation of the MAD statistics, whose results are reported in the bottom panel of Fig. <ref>. Here the black histogram is calculated from Eq. <ref> for all sources in the template set, assuming that we do not know a priori the source redshift and class. The coloured histograms show the corresponding histograms for "good" redshift solutions (δz/z ≤ 0.1 of the real value) for the three main blazar classes. We see that the MAD test performs well in identifying good solutions for the LSPs and HSPs, less well for the ISPs, whose MAD distribution for the good solutions has a substantial overlap with that of the random population. As a guideline, we will consider as potentially good blazar recognitions those with MAD<2.5, however without excluding solutions with higher MAD values. It is clear that, as anticipated, MAD offers a rather complementary test, potentially useful for disentangling degenerate solutions. In all our later analyses, when applying our test to Fermi UGSs and other sources, we offer a graphical summary of the test performance in the form of plots of χ_ν,min^2 versus MAD statistics for the four blazar classes and various redshifts (see e.g. Fig. <ref> below). The quadrant at χ_ν,min^2<1.1 and MAD<2.5 indicates the region where to look for potentially good solutions. In general, our analysis can be effectively applied in cases in which there is a sufficient sampling of the synchrotron component of the blazar SED. This means having at least three reliable data-points over the synchrotron part, and assuming that the Fermi and X-ray data are sufficient to sample the IC component.

§.§ Testing of the blazar recognition tool on known objects

As a sanity check and to test the effectiveness of our method in recognising blazar-like sources among the UGSs, we applied it to a few well-known blazars: 1ES 1215+303, 1ES 1011+496, 1ES 2344+514, and 3C 279 (the latter also present in the sample used to build the SED templates). We have also made a test on the well-known high-redshift HSP PKS 1424+240, with redshift z=0.604 <cit.>. We assumed these to be sources of unknown class and redshift, and ran the algorithm blindly on the simultaneous photometric data collected during dedicated campaigns for each blazar. Note that, to make it a meaningful test for the source 3C 279, we use here a flux dataset different from that used for building the SED template set in Sec. <ref>: in that case the whole set of historical observations in the ASDC archive was used, while here the test is done on completely independent sets of simultaneous observations, as detailed below for the 5 test sources. Following the recognition procedure described above, from the flux data of the five sources we determined the corresponding luminosity values as a function of the redshift values of our grid. The luminosity data were then over-plotted with different colours in the four panels for the blazar classes, as shown in Figs. <ref>, <ref>, <ref>, <ref>, and <ref>. The MAD and χ^2_ν statistics were computed for every redshift and every SED template. The results for the five well-known blazar sources are briefly discussed below.

* Results for 1ES 1215+303: This is an HSP blazar <cit.> with redshift z=0.129 <cit.>. For this source we used the flux data collected during a multi-wavelength campaign performed in 2011 <cit.> and triggered by an optical outburst of the source.
The data were taken simultaneously from the radio to the VHE band. The minima of the MAD and χ_ν,min^2 statistics are reported in Fig. <ref> and are used to select the best-fit SED template. The minimum value of χ_ν,min^2 is associated with a SED template of the HSP class (green curve), suggesting that 1ES 1215+303 is an HSP object with a redshift of about 0.1.

* Results for 1ES 1011+496: For the test blazar 1ES 1011+496 at z=0.212, we used the simultaneous multi-wavelength data obtained during the observational campaign of 2012 <cit.>. We constructed the diagnostic plots, shown in Fig. <ref>, with the corresponding table including the MAD and χ_ν,min^2 values. The minimum χ_ν,min^2 corresponds to the HSP class, with a significant level of degeneracy. We found three best-fitting SED template candidates with values of MAD and χ_ν,min^2 in the good-solution quadrant defined in Sec. <ref>. The first solution assumes for 1ES 1011+496 a redshift of z=0.1, providing χ_ν,min^2 ∼ 0.29, while the second one, with χ_ν,min^2 ∼ 0.37, corresponds to a redshift of 0.05. The third solution provides χ_ν,min^2 ∼ 0.77, with a higher MAD value. Despite this level of degeneracy, our tool is in fair agreement with the real blazar classification and the real redshift of z=0.212 <cit.>.

* Results for 3C 279: Another test blazar studied is 3C 279, the first LSP discovered to emit VHE γ-rays, in 2006 <cit.>, with a redshift of 0.536 <cit.>, making it one of the most distant VHE emitting sources discovered so far. The multi-wavelength data used in the blazar diagnostic plots are taken from <cit.>, obtained during the 2011 observational campaign performed from February 8 to April 11, when the source was in a low state. We see from Fig. <ref> that we get an excellent match only for the LSP class and a redshift which is very consistent, within the uncertainties of our method, with the observed one. Such a good match with this well-known high-redshift source makes us confident about the validity of the test even for distant high-luminosity objects.

* Results for 1ES 2344+514: This BL Lac object is classified as an HSP with a redshift of z=0.044 <cit.>. It was targeted in 2008 by a simultaneous broad-band observational campaign from the radio to VHE energies, during which the source was found in a low flux state <cit.>. We used these simultaneous multi-wavelength data to test our blazar recognition tool. Concerning the HE γ-ray flux, we decided to use the 1FGL catalogue flux points, because we can consider them quasi-simultaneous with the data collected during the 2008 season. The diagnostic plots for this source are shown in Fig. <ref>, with the values of the MAD and χ_ν,min^2 statistics. Our procedure indicates a good solution with an HSP at z ≃ 0.2, but a few other solutions at lower redshift are within the confidence limits of χ_ν,min^2 and MAD defined in Sec. <ref>. On one side, for this source our test fully confirms the classification as an HSP blazar; on the other, it clearly indicates for it a low redshift, in spite of some degeneracy.

* Results for PKS 1424+240: The last blazar used as a test for our tool is the BL Lac object PKS 1424+240, belonging to the class of the HSPs. Recently the redshift of the source, z=0.604, was determined <cit.>, which makes it among the most distant TeV BL Lac objects. PKS 1424+240 was observed in the framework of a multi-wavelength campaign during 2009 and 2010, allowing us to build a well-covered simultaneous broad-band SED from the radio to the VHE regime <cit.>.
The diagnostic plots resulting from our blazar tool are displayed in Fig. <ref>. We found an excellent match with an HSP at z ≃ 0.6, assuming a very high luminosity, although with a correspondingly bad value of MAD.

§.§ Test on non-blazar Fermi extragalactic sources

We have tested our method on extragalactic sources of the 3LAC catalogue that are classified as non-blazars. The results are summarised in Table <ref>. Double values correspond to multiple solutions with an acceptable fit. The only type-1 Seyfert galaxy, Circinus, is clearly classified by our tool as a non-blazar object. In the 3LAC catalogue there are 5 steep spectrum radio quasars, but only 2 have a good X-ray coverage. For these we find that the global SEDs are not evidently distinguishable from those of classical blazars. We have considered 14 radio galaxies among the 16 present in the 3LAC catalogue (for the excluded sources there is no X-ray coverage): all objects are rejected as blazars by our algorithm because of a global misfit, with the exception of PKS 0625-35, the only radio galaxy with an HSP SED reported in the 3LAC catalogue, and NGC 1218, which shows a marginal MAD value. Finally, concerning the Narrow-Line Seyfert-1 galaxy (NLSy1) class, there are 5 objects reported in the 3LAC catalogue, and we analysed them with our recognition tool. Note that the most marginal source among these is 1H 0323+342, having a marginally acceptable χ_min^2. We classify it as an ISP, with a tentative redshift estimate consistent with the correct one. All the other four NLSy1s yield fairly acceptable fits as blazar objects in our test. We do not consider in this paper the reason for this similarity between apparently different classes of sources, which will be discussed in a future paper.

§.§ A counter-example: 2FGL J1544.5-1126 (3FGL J1544.6-1125)

Our tool is also suited to exclude a blazar recognition: we consider, for example, the source 2FGL J1544.5-1126 in our UGS catalogue, a rather complicated case. From the Swift/XRT observations of this UGS error-box (see Fig. <ref> in Appendix A), we proposed the X-ray source 1RXS J154439.4-112820, the brightest X-ray source in the field, as the likely X-ray counterpart of the Fermi source. This association is also proposed by <cit.>, who identify 2FGL J1544.5-1126 as a transitional millisecond pulsar binary in an accretion state. They also note that the Fermi source 3FGL J1227.9-4854 (2FGL J1227.7-4853 in the 2FGL catalogue), associated with the transitional millisecond pulsar binary XSS J12270-4859 <cit.>, has a radio-to-γ SED very similar to that of 2FGL J1544.5-1126, as illustrated in Fig. <ref>. We applied our blazar recognition procedure (see Fig. <ref>), assuming 1RXS J154439.4-112820 as the most likely counterpart. From the χ^2_ν,min/MAD plot, we note that actually no SED template can suitably match the observational data, independently of the assumed redshift (all χ^2_ν,min values are too high). Hence our results suggest that a standard blazar classification for this source is quite unlikely. The nature of the bright X-ray source 1RXS J154439.4-112820 has been studied with optical spectroscopy by <cit.>. These data show a Galactic source characterised by broad emission lines (Balmer series and helium transitions, EW∼20 Å, FWHM > 800 km s^-1).
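To make the quantities of Sects. 4.2 and 5.1 concrete before moving to the Galactic-source tests, the sketch below implements the double power-law template and the χ^2_ν,min and MAD statistics for one UGS at one trial redshift. It is a minimal Python re-implementation for illustration only (the actual code is written in IDL and SuperMongo); the base-10 logarithm inside the exponential cut-off and all parameter and data values are our assumptions, not the fitted ones.

```python
import numpy as np

def sed_template(log_nu, A, B, alpha, nu1, nu2, sig1, sig2):
    """log10 of nu L(nu) for the 7-parameter double power-law template of
    Sect. 4.2 (base-10 logs assumed in the exponential cut-offs)."""
    nu = 10.0 ** np.asarray(log_nu)
    c1 = A * (nu / nu1) ** (1.0 - alpha) * np.exp(
        -np.log10(1.0 + nu / nu1) ** 2 / (2.0 * sig1 ** 2))   # synchrotron bump
    c2 = B * (nu / nu2) ** (1.0 - alpha) * np.exp(
        -np.log10(1.0 + nu / nu2) ** 2 / (2.0 * sig2 ** 2))   # IC bump
    return np.log10(c1 + c2)

def chi2nu_min_and_mad(log_nu, log_nuLnu, class_templates):
    """chi^2_nu,min and MAD of Sect. 5.1 for one UGS (data already converted
    to luminosities at one trial redshift) against one blazar class, given as
    a list of 7-parameter tuples."""
    chi2_nu, chi = [], []
    for pars in class_templates:
        model = sed_template(log_nu, *pars)                # nu_i l_j(nu_i)
        resid = (log_nuLnu - model) / (0.01 * model)       # 1% "errors", as in the text
        chi2_nu.append(np.sum(resid ** 2) / resid.size)    # reduced chi^2_j
        chi.append(np.sum(resid))                          # chi_j entering the MAD
    return min(chi2_nu), abs(sum(chi)) / len(class_templates)

# Placeholder usage: two made-up HSP-like templates and three fake data points.
templates = [(1e45, 1e45, 0.6, 1e16, 1e24, 0.35, 0.35),
             (3e45, 2e45, 0.7, 5e15, 5e23, 0.40, 0.40)]
chi2, mad = chi2nu_min_and_mad(np.array([10.0, 17.0, 24.0]),
                               np.array([45.2, 45.6, 45.1]), templates)
print(chi2, mad)   # candidate solutions fall in chi^2_nu,min < 1.1, MAD < 2.5
```

In the full procedure these two numbers are evaluated over the redshift grid z=0.05-2.0 and for the four template classes, and the (χ^2_ν,min < 1.1, MAD < 2.5) quadrant flags the candidate solutions.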
§.§ Testing the rate of false positive recognitions with known pulsars and other Galactic sources

We have further tested the validity of our blazar recognition tool against the observed SEDs of 15 Galactic sources of different HE classes included in the 3FGL catalogue, in order to verify the chance of false recognitions. The objects have a spectral coverage similar to that of our blazar SED template set, and their multi-wavelength fluxes were retrieved from the ASDC archive. The selected sample of Galactic objects includes the following:

* Seven pulsars detected in the Fermi surveys with good multi-wavelength coverage and with a Galactic latitude higher than 20 degrees. They are PSR J0437-4715, PSR J0614-3329, PSR J1024-0719, PSR J1614-2230, PSR J2124-3358, plus the well-known Vela pulsar and the brightest HE millisecond pulsar PSR J2339-0533. We excluded the Geminga pulsar because it lacks sufficient spectral coverage.
* The Crab Nebula, whose pulsar wind nebula is assumed as the standard candle in high-energy astrophysics.
* V407 Cyg, the only nova known in the 3FGL catalogue.
* The HE binary Eta Carinae and the three high-mass binaries LS 5039, 1FGL J1018.6-5856, and LS I+61 303.
* The supernova remnants Cas A and SNR G349+002, also present in the 3FGL catalogue.

In Fig. <ref> we report a graphical summary of the results obtained by applying our method to 11 of the test Galactic sources. With no exception, these show high or very high values of χ^2_ν,min>1, and also typically large values of MAD. For the remaining four cases, the inferred values of χ^2_ν,min and MAD exceed our considered boundaries of 10 in both statistics, and therefore the algorithm plots are not shown. It is worth noting that for the pulsar PSR J2339-0533 our method would find acceptable fits for the LSP class at high redshift (z=1.1-1.3). Indeed, we know that the source is located at 450 pc <cit.>, so that the radio luminosity inferred from the best-fit spectrum (∼10^44 erg s^-1) would imply an enormous continuum flux if the source were at that distance, which is excluded by the (unpublished) observational radio upper limits reported in <cit.>. A radio detection or an upper limit would thus immediately rule out a blazar classification for this source. This illustrates that the radio data may provide significant constraints on the UGS classification. In summary, our blazar recognition tool appears rather robust against mis-interpreting pulsars and Galactic sources as AGNs.

§ UGS RECOGNITION RESULTS

We then proceeded to exploit our blazar recognition tool for the analysis of 14 Fermi UGSs of the 2FGL catalogue, whose multi-wavelength counterparts have been previously defined and discussed. These make a sub-set of our complete, flux-limited UGS sample described in Sec. 2, for which we will discuss clear evidence in favour of a blazar recognition and provide a tentative estimate of the redshift. The results of our recognition procedure for these sources are detailed in the following.

§.§ 2FGL J0102.2+0943 (3FGL J0102.1+0943)

The error-box area of this S/N=7.09 Fermi detection was observed for a total of about 4 ksec by Swift XRT (details are reported in Appendix A). We found only one counterpart, for which we have data in the X-ray (Swift/XRT), optical (SDSS), infrared (2MASS) and radio (NVSS) bands. The diagnostic plots obtained by our blazar recognition code, with the MAD and χ_ν,min^2 values corresponding to the best-fitting SED templates, are reported in Fig. <ref>. The template with the minimum χ_ν,min^2 corresponds to an HSP SED, for a best-fit redshift of z∼0.5.
An HSP blazar at about such a redshift is our proposed classification for 2FGL J0102.2+0943.

§.§ 2FGL J0116.6-6153 (3FGL J0116.3-6153)

This γ-ray source is reported with a 9.9σ significance in the 3FGL. It was unassociated in the 2FGL catalogue, but in the 3FGL and 3LAC catalogues it is classified as an ISP BL Lac object with unknown redshift. In the 3.3 ksec Swift/XRT image, we found only one X-ray source as a possible counterpart (Fig. <ref>). The broad-band SED, obtained by combining the multi-wavelength fluxes of this counterpart, was analysed with our method, and the resulting plots are displayed in Fig. <ref>. These indicate, as best-guess classification, an HSP blazar with a tentative redshift of ∼0.4. This result is in agreement with the association and classification of the 3FGL catalogue and with the optical spectroscopic classification as a BL Lac object reported in <cit.> for the IR counterpart WISE J011619.62-615343.4. No spectroscopic redshift estimate is provided there, due to the lack of emission or absorption features in the optical spectrum.

§.§ 2FGL J0143.6-5844 (3FGL J0143.7-5845)

This bright γ-ray source is classified as an UGS in the 2FGL, but as a BL Lac object with unknown redshift in the 3FGL and 3LAC catalogues. The source was observed by Swift/XRT for about 4.5 ksec. As discussed in Sec. <ref>, within the 3FGL Fermi error-box a very bright X-ray source has been found with multi-wavelength counterparts, and the resulting multi-wavelength SED (Fig. <ref>, bottom panel) presents a good spectral coverage. Our blazar recognition tool (Fig. <ref>) clearly indicates a minimum χ_min^2 corresponding to an HSP template that fits the source luminosity data assuming a redshift of 0.1-0.3. Our proposed association and classification are in agreement with the 3LAC classification and with the optical spectrum reported in <cit.>, where the source is classified as a BL Lac object with unknown redshift, owing to the lack of optical features. Since the source shows a hard Fermi spectrum[According to the 3FGL catalogue, hard-spectrum sources have a spectral index Γ < 2.2.] of Γ<1.84 and is reported in the Second Fermi LAT catalogue of High energy sources <cit.>, it could be an interesting target for TeV observations, once account is taken of the EBL absorption.

§.§ 2FGL J0338.2+1306 (3FGL J0338.5+1303)

This source is reported in the 3FGL and 3LAC catalogues as a blazar candidate of uncertain type of the second sub-type (BCU-II)[The 3LAC sources classified as blazar candidates of uncertain type are divided into three sub-types: the BCU-I sources, whose counterpart has a published optical spectrum, but not sensitive enough for a classification as an FSRQ or a BL Lac; the BCU-II objects, whose counterpart lacks an optical spectrum but for which a reliable evaluation of the SED synchrotron-peak position is possible; the BCU-III sources, whose counterpart lacks both an optical spectrum and an estimated synchrotron-peak position but shows a blazar-like broad-band emission and a flat radio spectrum.], with a detection significance of 11.90σ (in the 2FGL it was classified as unassociated). The error-box field is analysed in Sec. <ref>, where only one candidate counterpart is found. The broad-band SED of this object is reported in Fig. <ref>, including the Swift/XRT imaging photometry. The output plots of our blazar recognition tool are shown in Fig.
<ref>. We have two best-fit solutions for the minimum χ_ν,min^2, both belonging to the HSP class, with z=0.3 and z=1.9, the latter having a very large MAD value. For such a high redshift value even the Fermi fluxes would be strongly damped by γ-γ pair production on the EBL <cit.>: the last Fermi point at ∼10^25 Hz would be suppressed by about a factor of 10, which is not seen in the data. In conclusion, we consider the HSP solution with redshift z∼0.3 as our preferred solution, and it is worth noting that our proposal is confirmed by the recent work of <cit.>, where the optical spectrum of the counterpart reveals a BL Lac nature, with an unknown redshift due to the lack of emission and absorption lines.

§.§ 2FGL J1129.5+3758 (3FGL J1129.0+3758)

The error-box area of this S/N=10.25 γ-ray emitter was observed for a total of about 4.7 ksec by Swift/XRT, and the X-ray sky map is reported in Fig. <ref>. We proposed the object XRT J1129-375857 as the likely X-ray counterpart, and we were able to build its multi-frequency SED, spanning from the radio to the HE band. Our blazar recognition code yields the diagnostic plots reported in Fig. <ref>; the SED template with the minimum χ_ν,min^2 corresponds to an LSP SED, for a best-fit redshift of z∼1.6. However, there is a significant degeneracy with other solutions belonging to the same LSP class at z∼1.2 to 1.5, and to the ISP class at z∼0.5 to 1.2. In either case, a high value of the redshift is indicated.

§.§ 2FGL J1410.4+7411 (3FGL J1410.9+7406)

Thanks to the reduced 3FGL error-box of this Fermi UGS, we can find an X-ray source that can be proposed as the likely counterpart for the source (see details in <ref>). Despite the lack of a radio counterpart, which could help the tool to constrain the classification and the redshift for this object, we have a good spectral coverage at the other frequencies, and we can build a multi-wavelength SED for the counterpart XRT J141045+740509. Based on our blazar-like SED recognition tool (Fig. <ref>), we suggest that 2FGL J1410.4+7411 is an HSP object with a high tentative redshift of z=0.5-0.6. An optical classification of our proposed counterpart is provided by <cit.>. The optical spectrum shows emission lines allowing the source to be classified as a new NLSy1, with z=0.429.

§.§ 2FGL J1502.1+5548 (3FGL J1502.2+5553)

The source is still an UGS in the 3FGL catalogue, with a detection significance of 12.6σ. In the 3FGL error-box region of the source, the only X-ray source found is 1SXPS J150229.0+555204, which is spatially coincident with a radio source. We propose it as the likely X-ray counterpart for 2FGL J1502.1+5548 (the broad-band SED and details are in Sect. <ref>). The resulting plots from our blazar SED-recognition tool are shown in Fig. <ref>. The best-fitting SED template, with minimum χ_ν,min^2, belongs to the LSP class at redshift ∼1.6-1.9, but a similarly good solution is found with a template of the ISP class at lower redshift (z∼0.4-0.7). Hence for this source the blazar classification and redshift are uncertain (but a high redshift is indicated), probably owing to the limited spectral coverage of the synchrotron peak. Photon-photon absorption by the EBL is not expected to seriously affect the Fermi fluxes, even for the high-redshift solution.
§.§ 2FGL J1511.8-0513 (3FGL J1511.8-0513)

This object is present in the 3FGL and 3LAC catalogues with a significance of 10.59σ, and its new classification is a blazar candidate of uncertain type with unknown redshift. Two X-ray sources are found in the source region observed by Swift-XRT (App. <ref>), but only the brightest, XRT J151148-051348, is inside the reduced 3FGL error ellipse; it is also proposed as the counterpart in the Fermi catalogue. The diagnostic plots for the X-ray counterpart are shown in Fig. <ref>, and the best-fitting SED template corresponds to an HSP with a tentative redshift z=0.1-0.2. Our classification is in agreement with the result reported in <cit.>, where the source is classified as a BL Lac object with unknown redshift, on the basis of its featureless optical spectrum.

§.§ 2FGL J1614.8+4703 (3FGL J1615.8+4712)

The multi-wavelength counterpart set for this source is discussed in Sec. <ref>, and we propose the Swift source XRT J161541+471110 as the likely X-ray counterpart, in agreement with the association reported in the 3FGL and 3LAC catalogues. In Fig. <ref> we show its multi-wavelength SED built from its counterpart set. Based on our blazar recognition tool (Fig. <ref>) and on the minimum value of χ_min^2, we suggest that 2FGL J1614.8+4703 is an ISP object at redshift z=0.3. For this object, the SDSS survey reports the presence of an early-type spiral (Sa) (<ref>) at the position of our optical counterpart, with a measured spectroscopic redshift of z=0.19, which may represent the host galaxy of a very faint low-z blazar.

§.§ 2FGL J1704.3+1235 (3FGL J1704.1+1234)

Inside the error-box of this 3FGL S/N=9.43 Fermi source (details in Sect. <ref>), we found only one bright X-ray counterpart, with data in the radio, optical and IR bands. This appears to be a robust and unique counterpart for 2FGL J1704.3+1235. For this source, the SDSS survey reports the presence of an unresolved reddish object at the source position, classified as a star. Based on our blazar-like SED recognition tool (Fig. <ref>), we find two possible solutions. One is in terms of an HSP object with tentative redshift z=0.3. A fit at this redshift appears to be supported by some evidence of a host-galaxy contribution in the optical, as illustrated in Fig. <ref>. Although our resulting fit to the Fermi data turns out to be quite poor, our result is in broad agreement with the classification provided by <cit.>, where the optical spectrum of the proposed potential counterpart suggests a BL Lac nature with a redshift of z=0.45. The other solution is instead an EHBL classification with z=0.2. This source certainly requires further scrutiny, given the robustness and uniqueness of the association.

§.§ 2FGL J2115.4+1213 (3FGL J2115.2+1213)

Of the two X-ray sources found in the 3FGL error-box of 2FGL J2115.4+1213, as discussed in App. <ref>, the fainter one has essentially no counterparts in other bands. Instead, for the brighter X-ray source, XRT J211522+121801, we find counterparts in all bands, and for this reason it is proposed as our likely X-ray counterpart. Based on our blazar recognition algorithm (see Fig. <ref>), we suggest that 2FGL J2115.4+1213 is a blazar of the HSP class at redshift z=0.4.
About the optical counterpart, the SDSS survey reports the presence of an unresolved object, classified as a star.

§.§ 2FGL J2246.3+1549 (3FGL J2246.2+1547)

This γ-ray emitter is reported in the 3FGL and 3LAC catalogues with a detection significance of 9.5σ, and it is classified as a blazar candidate of unknown type (BCU-II), with an ISP SED classification and an unknown redshift. From the analysis of the XRT data covering the error-box field (discussed in Sect. <ref>), we found only one faint X-ray source with positional counterparts in various bands. Although this source is not within the 3FGL error-box, we suggest it as the X-ray counterpart, because it is the only X-ray source detected around the 2FGL J2246.3+1549 sky region; moreover, our proposal is in agreement with the 3FGL association. The plots based on our tool are shown in Fig. <ref>. The best-fitting SED template indicates a classification as an ISP object with a tentative redshift from z∼0.3 to ∼0.8, although the upper value corresponds to an only marginally acceptable MAD.

§.§ 2FGL J2347.2+0707 (3FGL J2346.7+0705)

Inside the 3 arcmin 3FGL error-box of this S/N=13.83 Fermi source (see Sec. <ref>), we found a bright X-ray source with good counterparts in the radio, optical and IR bands. This counterpart set is in agreement with the 3FGL and 3LAC association, where the source is classified as a BCU-II with an ISP SED and unknown redshift. Based on our blazar recognition code (see Fig. <ref>), we suggest that the source is a blazar of the ISP-HSP class with a best-fit redshift of z=0.2. The SDSS survey (DR12) reports the presence of an r=16.62 BL Lac object at our proposed counterpart position, for which a spectroscopic redshift of z∼0.17 is provided by the SDSS automatic analysis procedure, in good agreement with and supportive of our result. Further dedicated optical observations are needed to confirm this result.

§ DISCUSSION AND CONCLUSIONS

The Fermi mission has unveiled a mine of information about the high-energy Universe, which is far from having been completely exploited yet. In particular, a large fraction of the sources of the 2FGL catalogue, and a comparable fraction in the 3FGL, are still waiting for a reliable identification. As many as 576 of those high-confidence UGSs may be either pulsars, other kinds of Galactic objects, or, more likely, high-energy emitting AGNs, mainly BL Lac objects or flat spectrum radio quasars. There is also a non-negligible chance that these signals might hide entirely new classes of sources, and even the electromagnetic signatures of (either decaying or annihilating) non-baryonic massive particles that are expected to constitute the dark matter in the Universe. In the recently released 3FGL catalogue, there are 1010 unidentified sources, about one-third of the 3034 detected sources. As a further step towards a more complete characterisation of the UGS population, we discussed in this paper a new method for recognising sources with blazar-like SEDs among the UGSs. This tool is based on the observed multi-wavelength flux density data, and takes advantage of some well-recognised regularities in the spectral properties of the blazar population, like the dependence of the peak frequencies of the synchrotron and IC components on the source luminosity, and the spectral slopes.
The procedure was tested by comparison with a few well-known blazars, pulsars, and other Galactic sources, and then used to propose the recognition of 14 UGSs selected in the 2FGL catalogue at high Galactic latitudes. The 3FGL classification for these 14 sources includes 7 unassociated γ-ray sources (UGS), 3 blazars (two BL Lac objects and one FSRQ), and 4 active galaxies of unknown type (BCU). A summary of our results is reported in Table <ref>, where for all sources of our UGS sample we report our proposed blazar typology and a rough estimate of the redshift. We find blazar-like counterparts for 13 of these UGSs (the remaining one is 2FGL J1544.5-1126, our counter-example, for which we disfavour an AGN classification): the majority of them belong to the HSP class, a couple to the LSP class and two to the ISP class. This is in agreement with the results of previous works such as <cit.> and <cit.>, and with the Fermi 3LAC classification when a given UGS is classified as an AGN in the 3FGL catalogue. Identification works based on optical spectroscopic observations <cit.> show the typical power-law optical spectrum for 7 sources of our UGS sample, and therefore confirm our classification and, in the case of the presence of emission and absorption lines, our redshift. For our proposed counterparts, we suggest substantial values of the redshift, from about z∼0.2 upwards. These relatively high redshifts may partly explain their lack of previous association or identification in published catalogues, although other explanations are possible. To better understand the general properties of these new counterparts, and to further test the reliability of our method, we have built colour-colour diagrams for Fermi sources of various nature, based on existing multi-wavelength data. These sources include pulsars, micro-quasars and AGNs, the latter classified by the 3LAC catalogue into high-, intermediate- and low-synchrotron-peaked objects (referred to as HSPs, ISPs and LSPs), corresponding to our blazar classification scheme. In Fig. <ref> (upper panel) we plot the γ-ray to X-ray flux ratios versus the radio to X-ray ratios. For ease of comparison with previous works, we also plot the corresponding broad-band spectral indices (bottom panel). The radio and X-ray fluxes of the counterparts associated with the 3FGL sources of the different astronomical classes have been derived from the 3LAC and 2PC (the Second Fermi Pulsar catalogue) catalogues. As can be seen, there is a clear separation between the HSP and LSP classes of sources, while the ISP objects span the whole range of properties from HSP to LSP. Twelve of the 2FGL UGSs that we recognise as blazars in this paper (and for which we could calculate the radio-to-γ-ray spectral index) are shown as black points in Fig. <ref>. All of them are situated in the blazar region, in agreement with our proposed classification. There is also a good agreement between the colour region and our estimate of the UGS blazar classes, perhaps with the exception of sources 1 and 2 in the plot, which we classify as HSPs but which fall in the ISP colour region. The rest of our associated objects fall in the expected regions. We also note a tendency for our UGSs to fall closer to the right-side border of the multi-wavelength colour distributions (to the left of the spectral index region), while the radio to X-ray colours appear consistent.
This is likely explained as a selection effect due to the higher than average γ-ray and lower than average X-ray fluxes of our UGSs compared to standard luminous blazars in the Fermi catalogues. The obvious next step will be to obtain spectroscopic observations of our proposed UGS counterparts lacking an optical spectrum. Given the brightness of the sources, confirming the BL Lac nature of the candidates from their characteristic featureless spectrum, or the LSP nature from strong emission lines, should be a relatively easy task. Much more difficult, or even impossible, might instead be the redshift measurement, for which, however, our analysis offers at least a guideline. From the tests carried out in the present paper, our new method to study unassociated Fermi objects, based on the analysis of radio-to-γ total-band spectral energy distributions, appears to offer a valuable tool to assist in the investigation of the large number of γ-ray sources still missing a physical interpretation. § ACKNOWLEDGMENTS We acknowledge helpful discussions and suggestions by Stefano Vercellone and Patrizia Romano. This work has benefited from an extensive collaboration with the MAGIC project, in particular the Padova MAGIC team. Edoardo Iani helped us to draw some of the figures. This work made use of data supplied by the UK Swift Science Data Centre at the University of Leicester. Part of this work is based on archival data, software and online services provided by the ASI Science Data Centre (ASDC). We are grateful to an anonymous referee for his careful reading and numerous suggestions that helped improve the paper. The financial contributions by the contracts INAF ASTRI (PI E. Giro) and INAF E-ELT (PI R. Falomo), and by the Padova University, are also acknowledged. § MULTI-WAVELENGTH COUNTERPARTS FOR A SELECTED SAMPLE OF FERMI UGSS §.§ 2FGL J0102.2+0943 This UGS shows a detection significance of 7.09 (5.5) σ and an error-box of 4.8' (7.8') in the 3FGL (2FGL) catalogue. Two observations were performed by Swift/XRT for a total exposure time of about 4000 sec. Using the XRT imaging analysis tool of the UK Swift Science Data Centre, only one faint X-ray source is detected for this Fermi source within the 3FGL error-box (yellow ellipse) in the X-ray sky map (Fig. <ref>, upper-left panel), with (RA,DEC) = (01 02 17.15, 09 44 11.16) and a 90% positional error radius of 4.5". The estimated count rate is (2.624×10^-3 ± 8.314×10^-4) cts/s for a total of 11 counts. Superimposing the catalogues of the other wavelengths with the DS9 plotting tool (Fig. <ref>, upper-right panel), we find a positional coincidence with the radio source NVSS J010217+094407, the IR source 2MASS 01021713+0944098 and the optical source SDSS10 1237678833220911130. Another IR source appears positionally coincident within the XRT error-box, but we do not consider it because the coincidence is very marginal and the corresponding optical source is outside the region. Through the SED builder tool of the ASI ASDC Data Centre we build the multifrequency SED (Fig.
<ref>, bottom panel) combining the fluxes of the proposed set of counterparts and including also the XRT flux data from <cit.> and the X-ray data points taken from the 1SXPS catalogue <cit.>. §.§ 2FGL J0116.6-6153 In the 3FGL (2FGL) catalogue this object is reported with a detection significance of 9.90 (5.5) σ and a 95% semi major axis of 0.04^∘ (6'). Its new 3FGL classification is a blazar of BL Lac type. Two Swift/XRT observations are available for a total of 3276 sec. Through the UK Swift data analysis, we obtain the X-ray image shown in Fig. <ref> (upper-left panel). Within the 3FGL error-box (yellow ellipse), we detect only one X-ray source, with (RA,DEC) = (01 16 19.24, -61 53 40.2) and a 90% error radius of 5.7". The estimated net count rate is (6.424×10^-3 ± 1.432×10^-3) cts/s. Hence we propose this object as the most likely counterpart of 2FGL J0116.6-6153. From the close-up image in Fig. <ref> (upper-right panel), the radio source SUMSS J011619-615343, the IR sources WISE J011619-615343 and 2MASS 01161959-6153434, and the optical source USNOB U0281-0014602 are spatially coincident with the X-ray position of XRT J011619-615340. The multi-wavelength SED (Fig. <ref>, bottom panel) is built by combining all available flux data of this set of counterparts. §.§ 2FGL J0143.6-5844 In the 3FGL (2FGL) catalogue, this source is reported with a detection significance of 18.98 (14.2) σ and a 95% semi major axis of 2.4' (3.6'). The new 3FGL classification for the source is a blazar of BL Lac type. 2FGL J0143.6-5844 has been observed by Swift/XRT, which was pointed at the coordinates of 1FGL J0143.9-5845 and collected 4348 seconds of good exposure time. The XRT sky map is shown in Fig. <ref> (upper-left panel), with only one X-ray source detected within the 3FGL error-box. We suggest XRT J014347-584551 as the likely X-ray counterpart for 2FGL J0143.6-5844. It is a very bright X-ray source with a count rate of (3.765×10^-1 ± 9.337×10^-1) cts/s. The XRT enhanced position is (RA,DEC)=(01 43 47.57, -58 45 51.6) with an error radius of 1.9". In the close-up image (Fig. <ref>, upper-right panel), we find that the radio source SUMSS J014347-584550, together with the infrared sources WISE J014347-584551 and 2MASS 01434742-5845514, and the optical object USNOA2.0 U0300_00524092, are spatially coincident with the error region of the X-ray counterpart. In the bottom panel of the same figure, the corresponding broad-band SED is shown. The magenta points are the X-ray data calculated from our dedicated XRT analysis, the black points are the X-ray spectrum taken from <cit.> and the blue points are from the 1SXPS catalogue. §.§ 2FGL J0338.2+1306 This γ-ray emitter is a Fermi source with a detection significance of 11.90 (5.8) σ and an error-box of 1.8' (6.6') in the 3FGL (2FGL) catalogue. In the 3FGL catalogue, this source is classified as an active galaxy of uncertain type (BCU-II). It was observed by Swift/XRT on 4th July 2012 with an exposure time of 3344 sec. The resulting XRT sky map is shown in Fig. <ref> (upper-left panel). We found only one X-ray source, XRT J033829+130216, within the 3FGL error-box (yellow ellipse). Therefore we propose it as the most likely X-ray counterpart, in agreement with the 3FGL association. From the image analysis, the XRT positional error for this source is 2.1" and its count rate is (7.160×10^-2 ± 4.653×10^-3) cts/s, with 242 total counts found. Using an appropriate absorbed model, the integral flux in the energy range 0.3–10 keV is 4.1319×10^-12 ergs cm^-2 s^-1.
Looking at the close-up image (Fig. <ref>, upper-right panel), we can see that the radio source NVSS J033829+130215, the infrared sources WISE J033829+130215 and 2MASS 03382926+1302151, and the optical object USNOB 1030-0045117 are spatially coincident with the X-ray object. The multi-frequency SED (bottom panel) is obtained by combining the data-points of these objects. §.§ 2FGL J1129.5+3758 In the 3FGL catalogue 2FGL J1129.5+3758 is still an unidentified object, with a detection significance of 10.25 σ and a 95% semi major axis of 3.6'. In 2014 the Swift satellite provided about 4700 seconds of data. Through the X-ray image analysis, we found that within the reduced 3FGL error box of this source one X-ray source, XRT J112903+375857, is detected (Fig. <ref>, upper-left panel). We propose it as the likely X-ray counterpart of this UGS, and the close-up image shows its multi-frequency counterparts within the X-ray error circle of radius 4.7": the radio source NVSS J112903+375655, the IR objects WISE J112903+375655 and 2MASS 11290325+3756564, and the optical object SDSS7 587739099132657672. In the bottom panel the corresponding MWL SED is reported. §.§ 2FGL J1410.4+7411 In the 2FGL catalogue 2FGL J1410.4+7411 has a 9.8σ significance and a semi major axis of 4.8'. In the 3FGL catalogue the unassociated source 3FGL J1410.9+7406 is reported with a detection significance of 15.76σ and a semi major axis of 2.4'. We suggest that 2FGL J1410.4+7411 and 3FGL J1410.9+7406 are the same γ-ray emitter. Several short Swift/XRT observations were provided between 2011 and 2014, and in the resulting XRT sky map, shown in Fig. <ref> (upper-left panel), we can find two X-ray 1SXPS sources (white crosses). We suggest the brightest one, the source XRT J141045+740609, as the likely X-ray counterpart for 3FGL J1410.9+7406. From the image analysis, the XRT positional error for this source is 4.5". Looking at the close-up image (upper-right panel), we can see that the infrared source WISE J141046+740511 and the optical object USNOB 1640-0083647 are spatially coincident with the X-ray object. The multi-frequency SED of 2FGL J1410.4+7411 - 3FGL J1410.9+7406 is obtained by combining the data-points of these objects and the X-ray data points provided by the 1SXPS catalogue (blue points). §.§ 2FGL J1502.1+5548 In the 3FGL catalogue, this UGS is reported with a detection significance of 12.64σ and a 95% semi major axis of 4.2'. 2FGL J1502.1+5548 has been observed by Swift/XRT and ∼4000 seconds of good exposure time were collected. The XRT sky map is shown in Fig. <ref> (upper-left panel), and we were able to detect two X-ray sources that are located outside the 3FGL error-box, but within the 2FGL error-box. From the high quality 1SXPS catalogue <cit.>, we note that the source 1SXPS J150229.0+555204 (with a positional error radius of 5.9"), coincident with the radio source NVSS J150229+555204, can be considered the most likely counterpart for 2FGL J1502.1+5548. In the close-up image of 1SXPS J150229.0+555204 (upper-right panel) we find that, besides the radio source, only the optical source SDSS J150229.07+555204.9 is spatially coincident with the X-ray counterpart. There are no IR sources detected, and the closest one is located outside the X-ray error-box and is associated with the star very close to SDSS J150229.07+555204.9 (see the finding chart in Fig. <ref>, bottom-right panel). In the same figure (bottom-left panel) the corresponding broad-band SED is shown.
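All of the counterpart identifications in this appendix ultimately rest on the same check: whether a candidate's angular separation from the XRT position falls within the quoted error radius. The following is a minimal, self-contained sketch of that check in Python (haversine formula); the function names are ours, and the example coordinates are the XRT position and the 2MASS counterpart quoted above for 2FGL J0102.2+0943:

```python
import math

def sexagesimal_to_deg(ra_hms, dec_dms):
    """Convert 'hh mm ss.ss' and '+dd mm ss.s' strings to degrees."""
    h, m, s = (float(x) for x in ra_hms.split())
    ra = 15.0 * (h + m / 60.0 + s / 3600.0)
    d, am, asec = (float(x) for x in dec_dms.split())
    sign = -1.0 if dec_dms.strip().startswith('-') else 1.0
    dec = sign * (abs(d) + am / 60.0 + asec / 3600.0)
    return ra, dec

def angular_sep_arcsec(ra1, dec1, ra2, dec2):
    """Haversine angular separation (degrees in, arcseconds out)."""
    r1, d1, r2, d2 = (math.radians(v) for v in (ra1, dec1, ra2, dec2))
    sin2 = (math.sin((d2 - d1) / 2) ** 2
            + math.cos(d1) * math.cos(d2) * math.sin((r2 - r1) / 2) ** 2)
    return math.degrees(2 * math.asin(math.sqrt(sin2))) * 3600.0

# XRT J010217 position vs. the 2MASS 01021713+0944098 counterpart (Sect. A1)
xrt = sexagesimal_to_deg('01 02 17.15', '+09 44 11.16')
tmass = sexagesimal_to_deg('01 02 17.13', '+09 44 09.8')
sep = angular_sep_arcsec(*xrt, *tmass)
print(f'separation = {sep:.2f}" (XRT 90% error radius: 4.5")')
print('match' if sep <= 4.5 else 'no match')
```

The haversine form is preferred over the spherical law of cosines here because it remains numerically stable at the arcsecond separations typical of these cross-matches.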
§.§ 2FGL J1511.8-0513 In the 3FGL (2FGL) catalogue this source shows a detection significance of 10.59 (7.8) σ and a semi major axis of 3' (4.8'). In the 3FGL this source is not unassociated, but is classified as an active galaxy of uncertain type. By pointing at the position of the 1FGL source, Swift/XRT observed the 2FGL J1511.8-0513 sky region in 2010 for a total time of 4160 sec. The XRT data analysis was performed with the UK XRT analysis tool and the resulting sky map is shown in Fig. <ref> (upper-left panel). Within the 3FGL error-box we found only one X-ray source, with (RA,DEC) = (15 11 48.55, -05 13 48.00) and a 90% error radius of 1.9". We propose it as the X-ray counterpart, in agreement with the 3FGL association. From the close-up image in Fig. <ref> (upper-right panel), the radio source NVSS 151148-051345, the IR sources WISE J151148-051346 and 2MASS 15114857-0513467, and the optical source USNOB U0825.08626045 are spatially coincident with the X-ray position of XRT J151148-051348. The multi-wavelength SED (bottom panel) is built by combining all available flux data of this set of counterparts. The X-ray flux is derived from our dedicated Swift/XRT analysis (magenta points), and in addition we plot the X-ray data points taken from the 1SXPS catalogue (blue points) and from <cit.> (black points). §.§ 2FGL J1544.5-1126 2FGL J1544.5-1126 shows a detection significance of 10.85 (5.79) σ and a 95% semi major axis of 4.8' (8.4') in the 3FGL (2FGL) catalogue. Swift/XRT did not observe it directly, but pointed at the ROSAT source 1RXS J154439.4-112820, from 2006 to 2012, for an exposure time of 13350 seconds. This object is the brightest X-ray source within the 3FGL error-box (yellow ellipse) of 2FGL J1544.5-1126 (Fig. <ref>, upper-left panel). We suggest it as the likely X-ray counterpart. From the XRT data analysis, we find that this X-ray counterpart has a positional error radius of 1.7". In the close-up image (upper-right panel) the IR object WISE J154439-112804 and the optical source USNOB1.0 0785-0287377 are positionally coincident, and hence we consider them as associated with 2FGL J1544.5-1126. The estimated XRT count rate is (7.003×10^-2 ± 2.311×10^-3) cts/s and the integrated 0.3-10 keV flux is 4.4394×10^-12 ergs cm^-2 s^-1 (935 total counts). The X-ray differential spectrum (magenta points) is plotted in the multi-wavelength SED (bottom panel) together with the X-ray spectrum taken from <cit.> (black points) and the data-points provided by the 1SXPS catalogue (blue points). §.§ 2FGL J1614.8+4703 2FGL J1614.8+4703 is a very faint γ-ray emitter with a detection significance of 4.59σ and a rather large Fermi 95% semi major axis of 13.8'. In the 3FGL and 3LAC catalogues, this object is a 6.30 σ source with a 95% semi major axis of 5.4' and it is associated with the source TXS 1614+473, classified as an LSP blazar. The Swift/XRT pointings for this source were targeted at the IR source 2MASX J16154117+47111 for 4990 sec (see Fig. <ref>, upper-left panel). Only the source XRT J161541+471110 is detected, and we suggest it as the likely X-ray counterpart, in agreement with the 3FGL association.
The close-up image around the XRT J161541+471110 position (upper-right panel) shows that, within the XRT positional error of 4.8", the IR objects WISE J161541+471111 and 2MASS 16154121+4711118 and the optical object SDSS10 588007004192637004 are spatially coincident. For the latter, the SDSS survey[http://skyserver.sdss3.org/dr10/en/tools/chart/navi.aspx] identifies the source with an elliptical galaxy at a redshift of 0.19 (bottom-right panel). The multi-wavelength SED for 2FGL J1614.8+4703 is displayed in the bottom-left panel, built by combining all flux data of the proposed counterparts. The magenta points indicate the X-ray spectrum estimated through our UK online analysis of the XRT J161541+471110 data, while the blue points are the X-ray flux data taken from the 1SXPS catalogue. §.§ 2FGL J1704.3+1235 Through the UK online analysis of the 2013 XRT data (∼ 4800 seconds) covering the 2FGL J1704.3+1235 sky region, within the 3FGL (2FGL) 95% semi major axis error box of 4.2' (6.6'), we found only one bright X-ray source, XRT J170409+123421 (Fig. <ref>, upper-left panel), with a count rate of (6.418×10^-2 ± 3.68×10^-3) cts/s. We consider it as the X-ray counterpart for this UGS. Looking at the close-up image (upper-right panel), this X-ray source, with a positional error of 2.6", is spatially coincident with the radio source NVSS J170409+123421, the infrared source WISE J170409+123421 and the optical object SDSS10 1237665106510021484. The broad-band SED is built by combining all the corresponding flux data and shown in the bottom panel: the magenta points are the X-ray differential spectrum obtained by the UK online analysis. We find evidence for the possible contribution of a host galaxy, assumed to be at z = 0.3, as illustrated by the green spectrum in Fig. <ref>. §.§ 2FGL J2115.4+1213 In the 3FGL (2FGL) catalogue 2FGL J2115.4+1213 is an unidentified object with a detection significance of 6.15 (5.11) σ and a 95% semi major axis of 9.6' (8.4'). In 2012 the Swift satellite provided about 3800 seconds of X-ray data. Through the image analysis, we found that within the Fermi error box two X-ray sources are detected (Fig. <ref>, upper panel). We propose as the likely X-ray counterpart of this UGS the brightest X-ray object (details in the corresponding table), with 30 counts and (RA,DEC)=(21 15 22.08, 12 18 01.8). In the middle panel, the multi-wavelength SED of XRT J211522+121801 is shown, together with the close-up image showing its multi-frequency counterparts within the X-ray error circle of radius 3.3": the radio source NVSS J211522+121802, the IR objects WISE J211522+121802 and 2MASS 21152198+1218029, and the optical object SDSS10 1237678538491691263. §.§ 2FGL J2246.3+1549 In the 3FGL (2FGL) catalogue, this Fermi object has a detection significance of 9.47 (8.21) σ and a 95% semi major axis of 3' (6.6'). This object is associated and classified as an active galaxy of uncertain type. Swift/XRT observed 2FGL J2246.3+1549 in 2010 for a total of 3381 seconds. The XRT sky map of the 2FGL J2246.3+1549 region is shown in Fig.
<ref> (upper-left panel), and only one X-ray source, with (RA,DEC)=(22 46 05.1, +15 44 34.07), is detected within the 2FGL error ellipse, with a count rate of (8.718×10^-3 ± 1.648×10^-3) cts/s; the integrated 0.3-10 keV flux is 4.1156×10^-13 ergs cm^-2 s^-1. However, this X-ray source is not inside the 3FGL error region, but we decide to consider it as the likely X-ray counterpart for 2FGL J2246.3+1549 because it is the only X-ray source detected in the larger 2FGL error region and, moreover, this choice is in agreement with the association provided by the 3FGL catalogue. From the close-up image (upper-right panel), within the X-ray positional error of 3.2" we see that the radio source NVSS J224605+154437, the IR sources WISE J224604+154435 and 2MASS 22460500+1544352, and the optical object SDSS10 237680091651178703 can be associated with XRT J224605+154434. In the bottom panel, the broad-band SED built by combining the flux data-points of the multi-frequency counterparts is shown. §.§ 2FGL J2347.2+0707 2FGL J2347.2+0707 is an object of the 3FGL (2FGL) catalogue with a detection significance of 13.83 (7.2) σ and a 95% semi major axis of 3' (6.0'). In the 3FGL catalogue the source is classified as an active galaxy of uncertain type and associated with the object TXS 2344+068. From the UK online analysis of the 2011 XRT data (∼5000 seconds) we obtain the X-ray count map of the 2FGL J2347.2+0707 sky region (Fig. <ref>, upper-left panel). Inside the 3FGL error ellipse, only one X-ray source is detected by Swift, with (RA,DEC)=(23 46 40.01, +07 05 07.0) and a count rate of (2.090×10^-2 ± 2.055×10^-3) cts/s. We propose this object as the most likely X-ray counterpart for 2FGL J2347.2+0707, in agreement with the 3FGL association. From the close-up image focused on XRT J234640+070507 (upper-right panel), within the X-ray error circle of 2.1" we find the radio source NVSS J234639+070504, the optical source SDSS10 1237669517440385146, and the IR objects WISE J234639+070506 and 2MASS 23463993+0705068. In the bottom panel, the multi-wavelength SED of 2FGL J2347.2+0707 is shown, with the X-ray differential spectrum obtained from our dedicated X-ray analysis (green points) and the 1SXPS data points (blue points).
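For reference, the broad-band "colours" used in the discussion (Fig. <ref>) are conventionally computed as two-point spectral indices, α_12 = -log(F_1/F_2)/log(ν_1/ν_2) for flux densities F_ν ∝ ν^-α. A minimal sketch follows; the flux-density values below are illustrative placeholders only, not catalogue measurements:

```python
import math

def two_point_index(f1, nu1, f2, nu2):
    """Two-point spectral index alpha, assuming F_nu proportional to nu^-alpha."""
    return -math.log10(f1 / f2) / math.log10(nu1 / nu2)

# Placeholder flux densities (erg cm^-2 s^-1 Hz^-1) -- illustrative, not catalogue values.
nu_r, nu_x, nu_g = 1.4e9, 2.4e17, 2.4e23      # 1.4 GHz, ~1 keV, ~1 GeV
f_r, f_x, f_g = 2.0e-26, 5.0e-31, 1.0e-36

alpha_rx = two_point_index(f_r, nu_r, f_x, nu_x)   # radio to X-ray colour
alpha_xg = two_point_index(f_x, nu_x, f_g, nu_g)   # X-ray to gamma-ray colour
print(f'alpha_rx = {alpha_rx:.2f}, alpha_xg = {alpha_xg:.2f}')
```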
"authors": [
"Simona Paiano",
"Alberto Franceschini",
"Antonio Stamerra"
],
"categories": [
"astro-ph.HE"
],
"primary_category": "astro-ph.HE",
"published": "20170327152753",
"title": "A new method to unveil blazars among multi-wavelength counterparts of Unassociated Fermi gamma-ray Sources"
} |
[1,2] Imanol Albarran ([email protected]), [3,4] Mariam Bouhmadi-López ([email protected]), [5,6] Che-Yu Chen ([email protected]), [5,6,7] Pisin Chen ([email protected]). [1] Departamento de Física, Universidade da Beira Interior, Rua Marquês D'Ávila e Bolama, 6201-001 Covilhã, Portugal [2] Centro de Matemática e Aplicações da Universidade da Beira Interior (CMA-UBI), Rua Marquês D'Ávila e Bolama, 6201-001 Covilhã, Portugal [3] Department of Theoretical Physics, University of the Basque Country UPV/EHU, P.O. Box 644, 48080 Bilbao, Spain [4] IKERBASQUE, Basque Foundation for Science, 48011, Bilbao, Spain [5] Department of Physics and Center for Theoretical Sciences, National Taiwan University, Taipei, Taiwan 10617 [6] LeCosPA, National Taiwan University, Taipei, Taiwan 10617 [7] Kavli Institute for Particle Astrophysics and Cosmology, SLAC National Accelerator Laboratory, Stanford University, Stanford, CA 94305, U.S.A. By far, cosmology is one of the most exciting subjects to study, even more so with the current bulk of observations we have at hand. These observations might indicate different kinds of doomsdays, if dark energy follows certain patterns. Two of these doomsdays are the Little Rip (LR) and the Little Sibling of the Big Rip (LSBR). In this work, aside from proving the unavoidability of the LR and LSBR in the Eddington-inspired-Born-Infeld (EiBI) scenario, we carry out a quantum analysis of the EiBI theory with a matter field which, from a classical point of view, would inevitably lead to a universe that ends with either a LR or a LSBR. Based on a modified Wheeler-DeWitt equation, we demonstrate that such fatal endings seem to be avoidable. Keywords: Quantum cosmology; Modified theories of gravity; Dark energy doomsdays; Palatini type of theories. § INTRODUCTION The scrutiny of extensions of General Relativity (GR) is a well-motivated topic in cosmology. Some phenomena, such as the current accelerating expansion of the universe or gravitational singularities like the big bang, would presage extensions of GR in the infra-red as well as in the ultra-violet limits. Among these extensions, the EiBI theory <cit.>, which is constructed within a Palatini formalism, is an appealing model in the sense that it is inspired by Born–Infeld electrodynamics <cit.> and the big bang singularity can be removed through a regular stage with a finite physical curvature <cit.>. Various important issues of the EiBI theory have been addressed, such as cosmological solutions <cit.>, compact objects <cit.>, cosmological perturbations <cit.>, parameter constraints <cit.>, and the quantization of the theory <cit.>. However, some possible drawbacks of the theory were discovered in Ref. <cit.>. Finally, some interesting generalizations of the theory were proposed in Refs. <cit.>. As is known, the cause of the late-time accelerating expansion of the universe can be attributed to phantom dark energy, which violates the null energy condition (at least from a phenomenological point of view) while remaining consistent with observations so far. Nonetheless, phantom energy may induce further cosmological singularities in GR (curvature singularities).
In particular, there are three kinds of behaviors intrinsic to phantom models, which can be characterized by the behaviors of the scale factor a, the Hubble rate H=ȧ/a, and its cosmic time derivative Ḣ near the singular points: (a) the big rip singularity (BR) happens at a finite cosmic time t when a→∞, H→∞, and Ḣ→∞ <cit.>; (b) the LR happens at t→∞ when a→∞, H→∞ and Ḣ→∞ <cit.>; (c) the LSBR happens at t→∞ when a→∞, H→∞, while Ḣ→ constant <cit.>. All three scenarios would lead the universe to rip itself apart, as all the structures in the universe would be destroyed no matter what kind of binding energy is involved. Interestingly, even though the EiBI theory can cure the big bang, in Refs. <cit.> it was found that the BR and LR are unavoidable in the EiBI setup, hinting that the EiBI theory is still not complete and some quantum treatments near these singular events may be necessary. In this paper, we will extend the investigations in Ref. <cit.>, where we showed that the BR in the EiBI phantom model is expected to be cured in the context of quantum geometrodynamics. We will carry out an analysis to encompass the rest of the truly phantom dark energy abrupt events, i.e. the LR and LSBR. § THE EIBI MODEL: THE LR AND LSBR The EiBI action proposed in <cit.> is (from now on, we assume 8π G=c=1) 𝒮_EiBI=2/κ∫ d^4x[√(|g_μν+κ R_μν(Γ)|)-λ√(-g)]+S_m(g), where |g_μν+κ R_μν| is the determinant of the tensor g_μν+κ R_μν. The parameter κ, which characterizes the theory, is assumed to be positive to avoid the imaginary effective sound speed instabilities usually associated with a negative κ <cit.>, and λ is related to the effective cosmological constant. S_m is the matter Lagrangian. The field equations are obtained by varying (<ref>) with respect to g_μν and the connection Γ. In a flat, homogeneous and isotropic (FLRW) universe filled with a perfect fluid whose energy density and pressure are ρ and p, respectively, the Friedmann equations of the physical metric g_μν and of the auxiliary metric compatible with Γ are <cit.> κ H^2= (8/3)[ρ̄+3p̄-2+2√((1+ρ̄)(1-p̄)^3)]×(1+ρ̄)(1-p̄)^2/[(1-p̄)(4+ρ̄-3p̄)+3(dp̄/dρ̄)(1+ρ̄)(ρ̄+p̄)]^2, and κ H_q^2=κ[(1/b)(db/dt̃)]^2=1/3+[ρ̄+3p̄-2]/[6√((1+ρ̄)(1-p̄)^3)], where ρ̄≡κρ and p̄≡κ p [Notice that we are dealing with Palatini type of models, which are also known as affine models. In these types of theories (cf. the action (<ref>)) there is a metric g_μν and a connection Γ which does not correspond to the Christoffel symbols of the metric. However, it is always possible to define a metric compatible with that connection <cit.>, and this is the metric that we are referring to as the auxiliary metric. The same applies to the action (<ref>), where we denote the auxiliary metric as q_μν and the physical metric g_μν. This is the standard and usual nomenclature in Palatini/affine theories.]. In the above equations a and b are the scale factors of the physical and auxiliary metrics, respectively, and t̃ is a rescaled time such that the auxiliary metric can be written in a FLRW form. In GR, the LR and LSBR can be driven (separately) by two phantom energy models as follows <cit.>: p_LR=-ρ_LR-A_LR√(ρ_LR), p_LSBR=-ρ_LSBR-A_LSBR, where A_LR and A_LSBR are positive constants. Therefore, ρ_LR/ρ_0 = ([3A_LR/(2√(ρ_0))]ln(a/a_0)+1)^2, ρ_LSBR = 3A_LSBRln(a/a_0)+ρ_0, where we take ρ_LR=ρ_LSBR=ρ_0 when a=a_0 <cit.>. The abrupt events happen at an infinite future where a and ρ diverge. Inserting these phantom energy contents into the EiBI model, i.e., Eqs. (<ref>) and (<ref>), and considering the large a limit (for ρ given in Eqs.
(<ref>)), we have κ H^2≈ρ̄/3→∞, κ H_q^2≈1/3, and Ḣ≈(A_LR/2)√(ρ_LR) and Ḣ≈ A_LSBR/2, respectively, for these two phantom energy models. Therefore, the LR and LSBR of the physical metric are unavoidable within the EiBI model, while the auxiliary metric behaves as a de Sitter phase at late time. § THE EIBI QUANTUM GEOMETRODYNAMICS: THE LR AND LSBR MINISUPERSPACE MODEL The deduction of the WDW equation of the EiBI model is based on the construction of a classical Hamiltonian that is promoted to a quantum operator. As shown in <cit.>, this can be achieved more straightforwardly by considering an alternative action which is dynamically equivalent to the EiBI action (<ref>): 𝒮_a=λ∫ d^4x√(-q)[R(q)-2λ/κ+(1/κ)(q^αβg_αβ-2√(g/q))]+S_m(g). In Ref. <cit.> it has been shown that the field equations obtained by varying the action (<ref>) with respect to g_μν and the auxiliary metric q_μν are the same as those derived from the action (<ref>). Starting from action (<ref>) and inserting the FLRW ansatz, the Lagrangian of this model, in which the matter field is described by a perfect fluid, can be written as (see Ref. <cit.>) ℒ=λ Mb^3[-6ḃ^2/(M^2b^2)-2λ/κ+(1/κ)(X^2+3Y^2-2XY^3)]-2ρ Mb^3XY^3, where X≡ N/M and Y≡ a/b. N and M are the lapse functions of g_μν and q_μν, respectively. Note that ρ is a function of a, i.e., ρ=ρ(bY), and it is given in Eqs. (<ref>). §.§ The classical analysis of the Hamiltonian system The system described by the Lagrangian ℒ is a constrained system. The conjugate momenta can be obtained as follows: p_b ≡∂ℒ/∂ḃ=-12λ bḃ/M, p_X ≡∂ℒ/∂Ẋ=0, p_Y ≡∂ℒ/∂Ẏ=0, p_M ≡∂ℒ/∂Ṁ=0. Therefore, the system has three primary constraints <cit.>: p_X ∼0, p_Y ∼0, p_M ∼0, where ∼ denotes the weak equality, i.e., equality on the constraint surface. The total Hamiltonian of the system can be defined by <cit.> ℋ_T=ḃp_b-ℒ+λ_Xp_X+λ_Yp_Y+λ_Mp_M, where λ_X, λ_Y, and λ_M are Lagrange multipliers associated with each primary constraint. According to the consistency conditions of each primary constraint, i.e., their conservation in time: [p_X,ℋ_T]∼0, [p_Y,ℋ_T]∼0, and [p_M,ℋ_T]∼0, one further obtains three secondary constraints [We remind that the Poisson bracket is defined as [F,G]=∂ F/∂ q_i∂ G/∂ p_i-∂ F/∂ p_i∂ G/∂ q_i, where q_i are the variables and p_i their conjugate momenta. Notice that repeated indices denote summation.] <cit.>: C_X≡λ X-Y^3(λ+κρ)∼0, C_Y≡3λ-3XY(λ+κρ)-XY^2bκρ'∼0, C_M≡p_b^2/(24λ b)-2λ^2b^3/κ+(λ/κ)b^3X^2+3(λ/κ)b^3Y^2-(2XY^3b^3/κ)(λ+κρ)∼0. The prime denotes the derivative with respect to a=bY. Furthermore, it can be shown that the total Hamiltonian is a constraint of the system: ℋ_T=-MC_M+λ_Xp_X+λ_Yp_Y+λ_Mp_M∼ 0. Because the Poisson brackets of the total Hamiltonian with all the constraints should vanish weakly by definition, ℋ_T is a first class constraint and we will use it to construct the modified WDW equation. This system has six independent constraints: p_X, p_Y, p_M, C_X, C_Y, and C_M. After calculating their Poisson brackets with each other, we find that, except for p_M, which is a first class constraint, the other five constraints are second class <cit.>. The existence of the first class constraint p_M implies a gauge degree of freedom in the system, and one can add a gauge fixing condition to make the constraint second class. An appropriate choice of the gauge fixing condition is M=constant; after fixing the gauge, the conservation in time of this gauge fixing condition, i.e., [M,ℋ_T]=0, implies λ_M=0.
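The constraint algebra above lends itself to a quick symbolic check. The following is a minimal sketch (Python/sympy, treating ρ as an unspecified function of a=bY): it verifies that the canonical part of ℋ_T equals -MC_M once ḃ is eliminated via p_b, and that the consistency condition for p_X reproduces C_X up to the factor 2Mb^3/κ (the Y-sector works analogously but involves ρ'):

```python
import sympy as sp

b, X, Y, M, pb = sp.symbols('b X Y M p_b', positive=True)
lam, kap = sp.symbols('lambda kappa', positive=True)
rho = sp.Function('rho')(b * Y)          # matter density rho(a), with a = b*Y

# eliminate bdot through the momentum p_b = -12*lam*b*bdot/M
bdot = -M * pb / (12 * lam * b)
L = (lam * M * b**3 * (-6 * bdot**2 / (M**2 * b**2) - 2 * lam / kap
                       + (X**2 + 3 * Y**2 - 2 * X * Y**3) / kap)
     - 2 * rho * M * b**3 * X * Y**3)
H = bdot * pb - L                        # canonical part of the total Hamiltonian

C_M = (pb**2 / (24 * lam * b) - 2 * lam**2 * b**3 / kap
       + lam * b**3 * X**2 / kap + 3 * lam * b**3 * Y**2 / kap
       - 2 * X * Y**3 * b**3 * (lam + kap * rho) / kap)
C_X = lam * X - Y**3 * (lam + kap * rho)

print(sp.simplify(H + M * C_M))                                 # -> 0, i.e. H = -M*C_M
print(sp.simplify(-sp.diff(H, X) - 2 * M * b**3 * C_X / kap))   # -> 0, i.e. [p_X, H_T] ~ C_X
```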
§.§ Quantization of the system To construct the WDW equation, we impose the first class constraint ℋ_T as a restriction on the Hilbert space where the wave function of the universe |Ψ⟩ is defined, ℋ̂_T|Ψ⟩=0. The hat denotes the operator. The remaining constraints χ_i={M,p_M,p_X,p_Y,C_X,C_Y} are all second class, and we need to consider the Dirac brackets to construct the commutation relations and promote the phase space functions to operators <cit.>. Note that C_M can be used to construct the first class constraint ℋ_T, i.e., Eq. (<ref>), so it is excluded from the set χ_i. The Dirac bracket of two phase space functions F and G is defined by <cit.> [F,G]_D≡[F,G]-[F,χ_i]Δ_ij[χ_j,G], where Δ_ij is the matrix satisfying Δ_ij[χ_j,χ_k]=δ_ik. The existence of the matrix Δ_ij is proven in Dirac's lectures <cit.>. According to Ref. <cit.>, the second class constraints can be treated as zero operators after promoting them to quantum operators, as long as the Dirac brackets are used to construct the commutation relations: [F̂,Ĝ]=iħ[F,G]_D, (F=F̂, G=Ĝ). This is due to the fact that the Dirac brackets of the constraints χ_i with any phase space function vanish strongly (they vanish without inserting any constraint). After some calculations, the Dirac brackets between the fundamental variables take the forms [b,p_b]_D =[b,p_b]=1, [b,X]_D =0, [b,Y]_D =0, [X,Y]_D =0, [X,p_b]_D =f_1(X,Y,b)=f_1(b), [Y,p_b]_D =f_2(X,Y,b)=f_2(b), where f_1 and f_2 are two non-vanishing functions. Notice that f_1 and f_2 can be written as functions of b because it is legitimate to insert the constraints C_X and C_Y to replace X and Y with b when calculating the Dirac brackets. In the XYb basis, if we define ⟨XYb|b̂|Ψ⟩= b⟨ XYb|Ψ⟩, ⟨ XYb|X̂|Ψ⟩= X⟨ XYb|Ψ⟩, ⟨ XYb|Ŷ|Ψ⟩= Y⟨ XYb|Ψ⟩, ⟨ XYb|p̂_b|Ψ⟩= -iħ∂/∂ b⟨ XYb|Ψ⟩-f_1∂/∂ X⟨ XYb|Ψ⟩-f_2∂/∂ Y⟨ XYb|Ψ⟩, it can be shown that the resulting commutation relations satisfy Eqs. (<ref>) and (<ref>). Furthermore, the momentum operator p̂_b can be written as ⟨ξζ b|p̂_b|Ψ⟩=-iħ∂/∂ b⟨ξζ b|Ψ⟩, after an appropriate redefinition of the wave functions: ⟨ XYb|→⟨ξ(X,Y,b), ζ(X,Y,b), b|. Therefore, in the new ξζ b basis, the modified WDW equation ⟨ξζ b|ℋ̂_T|Ψ⟩=0 can be written as -[1/(24λ)]⟨ξζ b|p̂_b^2/b|Ψ⟩+V(b)⟨ξζ b|Ψ⟩=0, where the term containing p̂_b^2 is determined by Eq. (<ref>) and its explicit form depends on the factor ordering. Note that the eigenvalues X and Y can be written as functions of b according to the constraints C_X and C_Y, hence leading to the potential V(b) as follows: V(b)=2λ^2b^3/κ+(λ/κ)b^3X(b)^2-3(λ/κ)b^3Y(b)^2. §.§ Wheeler-DeWitt equation: factor ordering 1 In order to prove that our results are independent of the factor ordering, we make two choices. First, we consider ⟨ξζ b|b^3ℋ̂_T|Ψ⟩=0 and choose the following factor ordering: b^2p̂_b^2=-ħ^2(b∂/∂ b)(b∂/∂ b)=-ħ^2(∂/∂ x)(∂/∂ x), where x=ln(√(λ)b). Near the LR singular event, the energy density ρ behaves as ρ∝(ln a)^2, and in that regime the dependence between the auxiliary scale factor b and a is b∝ a ln a. On the other hand, near the LSBR event the energy density behaves as ρ∝ln a and b behaves as b∝ a√(ln a). In both cases, the WDW equation can be written as (d^2/dx^2+[48/(κħ^2)]e^6x)Ψ(x)=0, when x and a go to infinity. Note that we have replaced the partial derivatives with ordinary derivatives and Ψ(x)≡⟨ξζ b|Ψ⟩.
The wave function reads <cit.> Ψ(x)=C_1J_0(A_1e^3x)+C_2Y_0(A_1e^3x), and consequently, when x→∞, its asymptotic behavior reads <cit.> Ψ(x)≈√(2/(π A_1))e^-3x/2[C_1cos(A_1e^3x-π/4)+C_2sin(A_1e^3x-π/4)], where A_1≡4/√(3κħ^2). Here J_ν(x) and Y_ν(x) are Bessel functions of the first and second kind, respectively. It can be seen that the wave function vanishes when a and x go to infinity. §.§ Wheeler-DeWitt equation: factor ordering 2 From the WDW equation (<ref>), we can as well derive a quantum Hamiltonian by choosing another factor ordering: p̂_b^2/b=-ħ^2[(1/√(b))∂/∂ b][(1/√(b))∂/∂ b]. Before proceeding further, we highlight that this quantization is based on the Laplace–Beltrami operator, which is the Laplacian operator in minisuperspace <cit.>. This operator depends on the number of degrees of freedom involved. For the case of a single degree of freedom, it can be written as in Eq. (<ref>). Under this factor ordering, and after introducing a new variable y≡(√(λ)b)^3/2, in both cases (LR and LSBR) the WDW equation can be written as (d^2/dy^2+[64/(3κħ^2)]y^2)Ψ(y)=0, when a and y approach infinity. The solution of the previous equation reads <cit.> Ψ(y)=C_1√(y)J_1/4(A_1y^2)+C_2√(y)Y_1/4(A_1y^2), and therefore, when y→∞, <cit.> Ψ(y)≈√(2/(π A_1y))[C_1cos(A_1y^2-3π/8)+C_2sin(A_1y^2-3π/8)]. Consequently, the wave functions approach zero when a goes to infinity. According to the DeWitt criterion for singularity avoidance <cit.>, the LR and LSBR are expected to be avoided independently of the factor orderings considered in this work. §.§ Expected values We have shown that the DeWitt criterion of singularity avoidance is fulfilled, hinting that the universe would escape the LR and LSBR in the EiBI model once the quantum effects become important. We next estimate the expected value of the scale factor of the universe a by estimating the expected value of b. The reason we resort to the expected value of b rather than a is that, in the classical theory <cit.> that we have quantized, the dynamics is carried solely by the scale factor b. We recall in this regard that, when approaching the LR and LSBR, b∝ a ln a and b∝ a√(ln a), respectively, at least within the classical framework. Therefore, if the expected value of b, which we denote by ⟨b⟩, is finite, then we expect that the expected value of a, i.e. ⟨a⟩, would be finite as well. Consequently, none of the cosmological and geometrical divergences present at the LR and LSBR would take place. We next present a rough estimation of an upper limit of ⟨b⟩ for the two quantization procedures presented in the previous subsections. * Factor ordering I: The expected value of b at late time can be estimated as follows: ⟨b⟩=∫_x_1^∞Ψ^*(x) [e^x/√(λ)]Ψ(x)dx, where x_1 is large enough to ensure the validity of the approximated potential in (<ref>), i.e., δ→ 0. In this limit, we can use the asymptotic behavior of the wave function, cf. Eq. (<ref>). Then, it can be shown that the approximated value of ⟨b⟩ is bounded as ∫_x_1^∞Ψ^*(x) [e^x/√(λ)]Ψ(x)dx<[(| C_1|^2+| C_2|^2)/(π A_1√(λ))]e^-2x_1. Therefore, we can conclude that ⟨b⟩ has a finite upper limit. Consequently, the LR and LSBR are avoided. * Factor ordering II: In this case the expected value of b can be written as ⟨b⟩=∫_y_1^∞Ψ^*(y) [y^2/3/√(λ)]Ψ(y)f(y)dy, where y_1 is large enough to ensure the validity of the approximated potential in (<ref>), i.e., η→0. In addition, we have introduced a phenomenological weight f(y) such that the norm of the wave function is well defined and finite for large y <cit.>. In fact, we could as well choose f(y)=y^-α with 2/3<α.
After some simple algebra, we obtain ⟨b⟩<[2(| C_1|^2+| C_2|^2)/(π A_1√(λ))]∫_y_1^∞y^-1/3f(y)dy. Consequently, we get ⟨b⟩<[2(| C_1|^2+| C_2|^2)/(π A_1√(λ)(α-2/3))]y_1^2/3-α. Once again, we reach the conclusion that ⟨b⟩ is finite. Therefore, the LR and LSBR are avoided. § CONCLUSIONS Singularities seem inevitable in most theories of gravity. It is therefore natural to ask whether including quantum effects would remove the singularities. In the case of the EiBI scenario, while the big bang singularity can be cured, the intrinsic phantom dark energy doomsday remains inevitable <cit.>. We solved the modified Wheeler-DeWitt equation of the EiBI model for a homogeneous and isotropic universe whose matter content corresponds to two kinds of perfect fluid. Those fluids, within a classical universe, would unavoidably induce a LR or a LSBR. We show that, within the quantum approach we invoked, the DeWitt criterion is fulfilled, and it leads towards the potential avoidance of the LR and LSBR. Our conclusion appears unaffected by the choice of factor ordering. § ACKNOWLEDGMENTS The work of IA was supported by a Santander-Totta fellowship “Bolsas de Investigação Faculdade de Ciências (UBI) - Santander Totta”. The work of MBL is supported by the Basque Foundation of Science Ikerbasque. She also wishes to acknowledge the partial support from the Basque government Grant No. IT956-16 (Spain) and FONDOS FEDER under grant FIS2014-57956-P (Spanish government). This research work is supported partially by the Portuguese grant UID/MAT/00212/2013. CYC and PC are supported by the Taiwan National Science Council under Project No. NSC 97-2112-M-002-026-MY3 and by the Leung Center for Cosmology and Particle Astrophysics, National Taiwan University. § REFERENCES Banados:2010ix M. Bañados and P. G. Ferreira, Phys. Rev. Lett. 105 (2010) 011101; Erratum: [Phys. Rev. Lett. 113 (2014) no.11, 119901]. Born:1934gh M. Born and L. Infeld, Proc. Roy. Soc. Lond. A 144 (1934) 425. Scargill:2012kg J. H. C. Scargill, M. Bañados and P. G. Ferreira, Phys. Rev. D 86 (2012) 103533. Avelino:2012ue P. P. Avelino and R. Z. Ferreira, Phys. Rev. D 86 (2012) 041501. Bouhmadi-Lopez:2013lha M. Bouhmadi-López, C. Y. Chen and P. Chen, Eur. Phys. J. C 74 (2014) 2802. Bouhmadi-Lopez:2014jfa M. Bouhmadi-López, C. Y. Chen and P. Chen, Eur. Phys. J. C 75 (2015) 90. Bouhmadi-Lopez:2014tna M. Bouhmadi-López, C. Y. Chen and P. Chen, Phys. Rev. D 90 (2014) 123518. Delsate:2012ky T. Delsate and J. Steinhoff, Phys. Rev. Lett. 109 (2012) 021101. Cho:2013pea I. Cho, H. C. Kim and T. Moon, Phys. Rev. Lett. 111 (2013) 071301. Pani:2011mg P. Pani, V. Cardoso and T. Delsate, Phys. Rev. Lett. 107 (2011) 031101. Pani:2012qb P. Pani, T. Delsate and V. Cardoso, Phys. Rev. D 85 (2012) 084020. Harko:2013wka T. Harko, F. S. N. Lobo, M. K. Mak and S. V. Sushkov, Phys. Rev. D 88 (2013) 044032. Sham:2013cya Y. H. Sham, L. M. Lin and P. T. Leung, Astrophys. J. 781 (2014) 66. Wei:2014dka S. W. Wei, K. Yang and Y. X. Liu, Eur. Phys. J. C 75 (2015) 253; Erratum: [Eur. Phys. J. C 75 (2015) 331]. Olmo:2013gqa G. J. Olmo, D. Rubiera-Garcia and H. Sanchis-Alepuz, Eur. Phys. J. C 74 (2014) 2804. EscamillaRivera:2012vz C. Escamilla-Rivera, M. Bañados and P. G. Ferreira, Phys. Rev. D 85 (2012) 087302. Yang:2013hsa K. Yang, X. L. Du and Y. X. Liu, Phys. Rev. D 88 (2013) 124037. Du:2014jka X. L. Du, K. Yang, X. H. Meng and Y. X. Liu, Phys. Rev. D 90 (2014) 044054. Casanellas:2011kf J. Casanellas, P. Pani, I. Lopes and V. Cardoso, Astrophys. J. 745 (2012) 15. Avelino:2012ge P. P. Avelino, Phys. Rev. D 85 (2012) 104053. Avelino:2012qe P. P.
Avelino, JCAP 1211 (2012) 022. Bouhmadi-Lopez:2016dcf M. Bouhmadi-López and C. Y. Chen, JCAP 1611 (2016) no.11, 023. Arroja:2016ffm F. Arroja, C. Y. Chen, P. Chen and D. h. Yeom, arXiv:1612.00674 [gr-qc]. Pani:2012qd P. Pani and T. P. Sotiriou, Phys. Rev. Lett. 109 (2012) 251102. Makarenko:2014lxa A. N. Makarenko, S. Odintsov and G. J. Olmo, Phys. Rev. D 90 (2014) 024066. Odintsov:2014yaa S. D. Odintsov, G. J. Olmo and D. Rubiera-Garcia, Phys. Rev. D 90 (2014) 044003. Jimenez:2014fla J. Beltrán Jiménez, L. Heisenberg and G. J. Olmo, JCAP 1411 (2014) 004. Chen:2015eha C. Y. Chen, M. Bouhmadi-López and P. Chen, Eur. Phys. J. C 76 (2016) 40. Starobinsky:1999yw A. A. Starobinsky, Grav. Cosmol. 6 (2000) 157. Caldwell:1999ew R. R. Caldwell, Phys. Lett. B 545 (2002) 23. Caldwell:2003vq R. R. Caldwell, M. Kamionkowski and N. N. Weinberg, Phys. Rev. Lett. 91 (2003) 071301. Carroll:2003st S. M. Carroll, M. Hoffman and M. Trodden, Phys. Rev. D 68 (2003) 023509. Chimento:2003qy L. P. Chimento and R. Lazkoz, Phys. Rev. Lett. 91 (2003) 211301. Dabrowski:2003jm M. P. Dąbrowski, T. Stachowiak, and M. Szydłowski, Phys. Rev. D 68 (2003) 103519. GonzalezDiaz:2003rf P. F. González-Díaz, Phys. Lett. B 586 (2004) 1. GonzalezDiaz:2004vq P. F. González-Díaz, Phys. Rev. D 69 (2004) 063522. BouhmadiLopez:2009jk M. Bouhmadi-López, Y. Tavakoli and P. Vargas Moniz, JCAP 1004 (2010) 016. Albarran:2015tga I. Albarran and M. Bouhmadi-López, JCAP 1508 (2015) no.08, 051. Ruzmaikina T. Ruzmaikina and A. A. Ruzmaikin, Sov. Phys. JETP 30 (1970) 372. Nojiri:2005sx S. Nojiri, S. D. Odintsov and S. Tsujikawa, Phys. Rev. D 71 (2005) 063004. Nojiri:2005sr S. Nojiri and S. D. Odintsov, Phys. Rev. D 72 (2005) 023003. Stefancic:2004kb H. Štefančić, Phys. Rev. D 71 (2005) 084024. BouhmadiLopez:2005gk M. Bouhmadi-López, Nucl. Phys. B 797 (2008) 78. Frampton:2011sp P. H. Frampton, K. J. Ludwick and R. J. Scherrer, Phys. Rev. D 84 (2011) 063003. Brevik:2011mm I. Brevik, E. Elizalde, S. Nojiri, and S. D. Odintsov, Phys. Rev. D 84 (2011) 103508. Bouhmadi-Lopez:2013nma M. Bouhmadi-López, P. Chen, and Y. W. Liu, Eur. Phys. J. C 73 (2013) 2546. Albarran:2016ewi I. Albarran, M. Bouhmadi-López, C. Kiefer, J. Marto and P. Vargas Moniz, Phys. Rev. D 94 (2016) no.6, 063536. Bouhmadi-Lopez:2014cca M. Bouhmadi-López, A. Errahmani, P. Martín-Moruno, T. Ouali, and Y. Tavakoli, Int. J. Mod. Phys. D 24 (2015) no.10, 1550078. Albarran:2015cda I. Albarran, M. Bouhmadi-López, F. Cabral and P. Martín-Moruno, JCAP 1511 (2015) no.11, 044. Morais:2016bev J. Morais, M. Bouhmadi-López, K. Sravan Kumar, J. Marto and Y. Tavakoli, Phys. Dark Univ. 15 (2017) 7. WALD R. M. Wald, General Relativity, University of Chicago Press, Chicago (1984). Henneaux M. Henneaux and C. Teitelboim, Quantization of Gauge Systems, Princeton University Press (1992). Diraclecture P. A. M. Dirac, Lectures on Quantum Mechanics, Yeshiva University, New York (1964). KieferQG C. Kiefer, Quantum Gravity, third edition (Oxford University Press, Oxford, 2012). mathhandbook M. Abramowitz and I. Stegun, Handbook on Mathematical Functions (Dover, 1980). DeWitt:1967yk B. S. DeWitt, Phys. Rev. 160 (1967) 1113. Barvinsky:1993jf A. O. Barvinsky, Phys. Rept. 230 (1993) 237. Kamenshchik:2012ij A. Y. Kamenshchik and S. Manti, Phys. Rev. D 85 (2012) 123518. Barvinsky:2013aya A. O. Barvinsky and A. Y. Kamenshchik, Phys. Rev. D 89 (2014) no.4, 043526. | http://arxiv.org/abs/1703.09263v2 | {
"authors": [
"Imanol Albarran",
"Mariam Bouhmadi-López",
"Che-Yu Chen",
"Pisin Chen"
],
"categories": [
"gr-qc",
"astro-ph.CO",
"hep-th",
"quant-ph"
],
"primary_category": "gr-qc",
"published": "20170327185103",
"title": "Doomsdays in a modified theory of gravity: A classical and a quantum approach"
} |
APS/[email protected] [email protected]@eng.ox.ac.ukDepartment of Engineering Science University of OxfordIn a wide range of complex networks, the links between the nodes are temporal and may sporadically appear and disappear. This temporality is fundamental to analyze the formation of paths within such networks. Moreover, the presence of the links between the nodes is a random process induced by nature in many real-world networks. In this paper, we study random temporal networks at a microscopic level and formulate the probability of accessibility from a node i to a node j after a certain number of discrete time units T. While solving the original problem is computationally intractable, we provide an upper and two lower bounds on this probability for a very general case with arbitrary time-varying probabilities of links' existence. Moreover, for a special case where the links have identical probabilities across the network at each time slot, we obtain the exact probability of accessibility between any two nodes. Finally, we discuss scenarios where the information regarding the presence and absence of links is initially available in the form of time duration (of presence or absence intervals) continuous probability distributions rather than discrete probabilities over time slots. We provide a method for transforming such distributions to discrete probabilities which enables us to apply the given bounds in this paper to a broader range of problem settings. Valid PACS appear here Accessibility and Delay in Random Temporal Networks David E. Simmons December 30, 2023 ===================================================§ INTRODUCTIONThe existence of a connection between any pair of nodes in many types of networks is a temporal event and also random in many cases. For instance, human beings meet for some period of time and walk away afterwards.Because of this temporality, static graphs or even random graphs are incapable of modelling many aspects of random temporal networks. For instance, a path between two specific nodes i and j in a static network is a sequence of nodes starting from i and ending at j given that an edge exists between any two successive nodes in the sequence. In a temporal network and over a time window of observation, a sequence of nodes forms a path (or more precisely an open path which is defined formally later) if the existence of an edge between two subsequent nodes in the sequence maintains causality. Consider a traveler starting its journey from node i to node j. The traveler waits at each node and jumps to the next node from its current node as soon as an edge becomes available between these two nodes. A path exists from i to j if such a traveler reaches j within the observation time window. Therefore, the existence of a temporal path depends on the availability of an edge on or after the current time between its current node and the next node in the sequence, regardless of whether or not an edge had existed before the traveler arrived at this current node.If there exists at least one temporal path from i to j over a discrete time window of 1, …, T, j is said to be accessible from i and such an event is denoted by i j. A directed graph representing the accessibility relation between every pair of nodes in a temporal network is called the accessibility graph, where there exists an edge from any node i to any node j if i j. A temporal network over the window 1, …, T can be modelled as a sequence of T adjacency graphs over the time window of observation. Fig. 
<ref> shows the adjacency and accessibility graphs for a set of four nodes over three time slots. Obviously, the accessibility and adjacency graphs are identical at t=1. By t=2, node 4 is accessible from 2, as there is an edge between 2 and 1 at t=1 and an edge connects 1 to 4 at t=2. However, this is not the case for the opposite direction, as there is an edge between 4 and 3 at t=1 but no edge from 3 to 2 at t=2. Therefore 2 is not accessible from 4 by t=2. This directionality in the accessibility graph is an immediate consequence of causality in the formation of temporal paths. An interesting method for obtaining the accessibility graph adjacency matrix (AGAM) in temporal networks is introduced in <cit.>. It should be noted that for a static graph with adjacency matrix 𝐊_0, the matrix (1+𝐊_0)^T has a non-zero (i,j) entry exactly when there is a path of length at most T between nodes i and j, and by changing every non-zero element to 1 the AGAM is obtained. However, in temporal networks, since the edges between nodes may appear or disappear at any time, a traveller on the graph might need to wait at a specific node for a certain number of time slots until an edge to the next hop becomes available. Given the adjacency matrices of the adjacency graphs over the window 1,…, T, denoted by 𝐊_1, …, 𝐊_T, in <cit.> this waiting at the current node is modelled by adding the identity matrix 1 to each adjacency matrix 𝐊_t. Therefore, calculating ∏_t = 1^T (1+𝐊_t) and changing all the non-zero elements to 1 gives the AGAM by time T for such a temporal graph. The input to the method given in <cit.> is the set of adjacency matrices. However, in many cases the presence and absence of the edges of the network vary randomly over time (e.g. wireless ad-hoc networks, human-centric networks, etc.). In this paper we study the notion of accessibility in random temporal networks. We assume that, instead of knowing the adjacency matrices that deterministically identify the presence or absence of an edge between two specific nodes at a certain time slot, we have the probabilities of such events in hand. In other words, a random temporal network is defined as a sequence of random graphs, each one associated with one time slot. Our objective is to obtain the probability of accessibility, denoted by P(i ⇝ j), from any node to any other over a window of observation. Since the total number of possible paths between any two vertices grows exponentially, and also due to the dependence between paths with common edges, the calculation of such probabilities is computationally intractable. In this paper, we provide a non-trivial upper bound and two different lower bounds on these probabilities. Our numerical results show that the accessibility probability obtained by Monte-Carlo simulations of such random temporal networks is very close to the given upper bound. Moreover, we examine the upper bound as a predictor for the probability of accessibility over a real-world dataset obtained from a vehicular network (taxis in Rome) <cit.>. The results show a high correlation between the predicted values and the actual observations.
It should be noted that in many cases, instead of discrete probabilities for the edges over each time slot, the distribution of the duration of the intervals of presence or absence of edges (mostly in the continuous time domain) is available. For instance, the distribution of the inter-contact time between individuals in human-centric networks has been studied in the literature <cit.>. To be able to apply the bounds provided in this paper, these continuous distributions need to be transformed into discrete probabilities of edges [An edge is assumed to be present between two individuals while they are in contact with each other, i.e. when they are within a given proximity of each other.]. In this paper, such transformations are provided. These transformations extend the range of problems that the given bounds are applicable to. Specifically, they provide a general framework for analyzing delay problems in multi-hop networks (e.g. in delay tolerant networks <cit.>) using the bounds obtained in this paper. Current studies in the literature of temporal networks, and specifically of the notion of accessibility (reachability), can be categorized from different perspectives. Firstly, it should be noted that accessibility has been at the core of many studies in a wide range of contexts, even if the term accessibility (or reachability) has not been explicitly used. Peer-to-peer networks <cit.>, wireless multi-hop networks <cit.>, gossiping over networks <cit.>, prevalence of epidemic diseases <cit.>, information diffusion in social communication networks <cit.> and spreading patterns of viruses on smart phones <cit.> are examples of studies with the theme of accessibility. A closely related problem to accessibility is delay, or trip duration, in networks. The duration of a trip is a function of the dynamics of the links. We discuss the relationship between the accessibility probability and the expected delay (trip duration) in Section <ref>. An extensive body of research has been devoted to this topic, including analysis of the data collected from transportation networks <cit.>, mathematical modelling of trip durations in human transportation networks at a macroscopic level <cit.>, shortest routes in time-dependent networks <cit.> and the delay performance of wireless delay tolerant networks <cit.>.
In particular, the theoretical approach used in this paper for obtaining the upper bound is based on a Fortuin-–Kasteleyn-–Ginibre (FKG) correlation inequality <cit.>, which gives a deeper insight to the problem and provides a basis for analyzing further complex models. Moreover, in our model we consider the very general scenario of time-varying arbitrary probabilities of edges' existence. § SYSTEM MODEL We consider a random temporal network with N nodes (vertices) represented by V={1,…, N} and a set of discrete time slots 𝒯 = {0, 1,2, …, T}. There are a total of M(T) :=N^T-1 vertex sequences of length T between two vertices i and j. This is becauseat each time we can choose any node in the graph to be the next step (except for the last time slot where node j has been selected). The mth sequence (possibly with repeated nodes) is represented byA_m^ij(T)=v_m^ij(0)… v_m^ij(T),wherev_m^ij(t)∈{1,…, N}, ∀ t∈𝒯\{0, T},v_m^ij(0)=i, v_m^ij(T)=j.Such a sequence of nodes is called a temporal path. An edge between a pair of distinct nodes u and v at time t is denoted by the triple(u,v,t), whereu,v∈{1,…, N} and t∈𝒯.The triple is defined to be open if in the realization of the network the link between these two nodes is physically present. We assume that an edge is open between two nodes u and v with probability p_uv(t), independent of other edges. A temporal path A_m^ij(T) is said to be an open path if any pair of distinct successive nodes v_m^ij(t)v_m^ij(t+1) in the sequence is an open edge. In other words we use the terms open edge and open path adopted from percolation theory to identify the realization of an edge or a path. A pair of successive non-distinct nodes v_m^ij(t)v_m^ij(t+1) is an indication of remaining at the same node from time t to t+1.We use the following compact notation to denote the probability of the event that a given path is openP(A_m^ij(t))≡ P(A_m^ij(t) is open ).We apply this convention to all the probabilities of the sets corresponding to the temporal paths, including edge triplets (u,v,t).Moreover, we denote a temporal path from i to j with v_m^ij(T-1)=ℓ by B_m^iℓ j(T). The set of all paths inclusive of ℓ as their node at time T-1 is denoted byB^iℓ j(T)={B_1^iℓ j,…, B_M(T-1)^iℓ j}.Our objective is to find the probability that at least one open temporal path exists from a given node i to another node j over the time window 𝒯. § EXACT METHOD FOR EQUAL EDGE PROBABILITIESIn this section, we assume that p_uv(t)=p(t) (i.e. the probability can change over time but is identical for all the edges in the network at each time t). In other words, at each time slot t the network is equivalent to a classic Erdös-Rényi graph. We start at node i and begin visiting other nodes. Any node u at time t=1 is labeled as visited if (i,u,1) is open.We denote the set of nodes visited for the first time at time slot t' by ω(t') and the set of all nodes visited from t = 1 to t = t' by W(t'). Therefore W(t+1)=W(t)∪ω(t+1). A node is labeled as visited in time t if there exists an open edge between any node in W(t-1) and this node. Obviously, the total number of visited nodes in t=1 is a binomial random variable B(N-1,p(1)). If we assume that |W(t-1)|=ℓ, we have |ω(t)|∼ B(N-1-ℓ, 1-(1-p(t))^ℓ). Therefore, we can conclude thatP(|W(t)|=k)=∑_ℓ=0^k P(|W(t-1)|=ℓ)N-1-ℓk-ℓ× (1-(1-p(t))^ℓ)^k-ℓ(1-p(t))^N-1-k The probability P(i j) is equivalent to the probability of j being labeled as visited by time T (see Fig. <ref>). 
Therefore, P(i ⇝ j) = P(j∈ W(T)) = ∑_ℓ=1^N-1 P(j∈ W(T) | |W(T)|=ℓ)P(|W(T)|=ℓ) = ∑_ℓ=1^N-1 [ℓ/(N-1)]P(|W(T)|=ℓ), and one can obtain P(i ⇝ j) recursively. § UPPER BOUND Generalizing the exact method in Section <ref> to the case of arbitrary time-varying probabilities is not straightforward; and even if possible, it would be computationally intractable. Therefore, we propose a different method in this section, which provides an upper bound on P(i ⇝ j) given that the probabilities p_ij(t) can take any value (between 0 and 1) at each time slot. The event that at least one open path exists from node i to j is the complement of the event that no open path exists from i to j. Thus, obtaining the probabilities of every path from i to j should give the desired accessibility probability. However, it should be noted that, firstly, even finding the probability of one path is not straightforward, because the number of trials at each node to jump to the next node (the waiting time at each node) is a random variable by itself and, in general, of a different probability distribution; secondly, the number of paths from i to j grows exponentially with t; thirdly, and most importantly, different paths might be positively correlated if they have any edge in common in the same time slot. In the following, the derivation of the upper bound is discussed. We associate a dependent variable α_ij(t) with any pair of nodes (i,j). This variable is formed recursively as follows: α_ij(t)=1-∏_ℓ=1^N(1-α_iℓ(t-1)p_ℓ j(t)), with α_ij(1)=p_ij(1). In the following theorem we show that α_ij(t) is an upper bound for the probability of an open temporal path existing from any node i to any node j. P(i ⇝ j)⩽α_ij(T), for all (i,j)∈ V× V and any positive integer T⩾ 1. We use induction, by showing that P(i ⇝ j)≤α_ij(T+1) given that P(i ⇝ j)≤α_ij(T). Obviously the theorem holds for T=1 because α_ij(1)=p_ij(1) by definition (<ref>). At time T, we have P(i ⇝ ℓ)= P(⋃_m=1^M(T)A_m^iℓ(T))⩽α_iℓ(T), where the inequality follows from the induction assumption. This implies that P(⋃_m=1^M(T)A_m^iℓ(T))p_ℓ j(T+1)⩽α_iℓ(T)p_ℓ j(T+1). Since edges are open independently of one another, we have P(⋃_m=1^M(T) A_m^iℓ(T))p_ℓ j(T+1)=P((⋃_m=1^M(T)A_m^iℓ(T))∩ (ℓ, j, T+1))=P(⋃_m=1^M(T)(A_m^iℓ(T)∩(ℓ, j, T+1))) = P(⋃_m=1^M(T)B_m^iℓ j(T+1)). Combining (<ref>) and (<ref>), we have P(⋃_m=1^M(T)B_m^iℓ j(T+1))⩽α_iℓ(T)p_ℓ j(T+1) ⇒ P(⋂_m=1^M(T)B̄_m^iℓ j(T+1))⩾ 1-α_iℓ(T) p_ℓ j(T+1), where B̄_m^iℓ j(T+1) is the complement of the event B_m^iℓ j(T+1). Each event ⋂_m=1^M(T)B̄_m^iℓ j(T+1), for ℓ=1,…, N, belongs to a family of monotonically decreasing events[A family 𝒜 of subsets of K = {1,2,…,k} is monotone decreasing if A∈𝒜 and A'⊆ A ⇒ A'∈𝒜. The collection of any open path (viewed as an edge set) and all its subpaths (also viewed as edge sets) forms a monotonically decreasing family. This is because if a path is open (and consequently in the family of open paths), any subpath would also be open and hence an element of the family. Here, to avoid unnecessary complication in the notation, we have used B̄_m^iℓ j(T+1) to represent such a family of events.]. Therefore, using the Harris-FKG inequality (Theorem 6.3.2 in <cit.>) we can complete the proof: P(⋃_m=1^M(T+1) A_m^ij(T+1))=P(⋃_ℓ=1^N (⋃_m=1^M(T)B_m^iℓ j(T+1))) = 1-P(⋂_ℓ = 1^N (⋂_m=1^M(T)B̄_m^iℓ j(T+1) ))⩽ 1-∏_ℓ=1^N P(⋂_m=1^M(T)B̄_m^iℓ j(T+1))⩽ 1-∏_ℓ=1^N (1-α_iℓ(T)p_ℓ j(T+1))= α_ij(T+1), where the first inequality follows from the Harris-FKG inequality and the second inequality follows immediately from (<ref>).
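Both the recursion of Section <ref> and the upper bound α_ij(T) are straightforward to implement. The sketch below is a direct transcription in Python; the only modelling convention we add is that self-transitions (waiting at a node) are treated as always-open edges, p_ℓℓ(t)=1, which is our reading of how the recursion accounts for a traveller that has already reached j:

```python
from math import comb

def exact_accessibility(N, p, T):
    """P(i ~> j) by slot T when every edge has probability p[t-1] at slot t.
    dist[k] = P(|W(t)| = k), with W(t) the set of nodes (other than i)
    visited by time t; a direct transcription of the displayed recursion."""
    dist = [comb(N - 1, k) * p[0] ** k * (1 - p[0]) ** (N - 1 - k)
            for k in range(N)]                      # |W(1)| ~ B(N-1, p(1))
    for t in range(1, T):
        new = [0.0] * N
        for k in range(N):
            for l in range(k + 1):
                q = 1.0 - (1.0 - p[t]) ** l         # fresh-visit probability
                new[k] += (dist[l] * comb(N - 1 - l, k - l)
                           * q ** (k - l) * (1.0 - q) ** (N - 1 - k))
        dist = new
    return sum(l * dist[l] for l in range(N)) / (N - 1)

def alpha_upper_bound(P, T):
    """alpha_ij(T) >= P(i ~> j). P[t-1][u][v] = p_uv(t); self-transitions
    are taken to be always open, so alpha_ij(t) is non-decreasing in t."""
    N = len(P[0])
    alpha = [[1.0 if i == j else P[0][i][j] for j in range(N)]
             for i in range(N)]                     # alpha_ij(1) = p_ij(1)
    for t in range(1, T):
        new = [[0.0] * N for _ in range(N)]
        for i in range(N):
            for j in range(N):
                prod = 1.0
                for l in range(N):
                    plj = 1.0 if l == j else P[t][l][j]
                    prod *= 1.0 - alpha[i][l] * plj
                new[i][j] = 1.0 - prod
        alpha = new
    return alpha

if __name__ == '__main__':
    N, T, p = 5, 4, 0.15                            # homogeneous sanity check
    Pm = [[[p] * N for _ in range(N)] for _ in range(T)]
    print('exact  P(i~>j) =', round(exact_accessibility(N, [p] * T, T), 4))
    print('alpha_ij(T)    =', round(alpha_upper_bound(Pm, T)[0][1], 4))
```

In the homogeneous case the two functions can be run side by side, as in the demo above, to confirm numerically that the exact probability never exceeds α_ij(T).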
§ LOWER BOUNDS

In this section we provide two alternative lower bounds on P(i → j). The performance of each bound depends on the distribution of the probabilities of edges across the network and over the observation window.

§.§ Lower Bound I

The first lower bound on P(i → j) relies on finding a clique inside the temporal network such that the probability of every edge in this clique is above a certain threshold over the entire window of observation. More formally, we find a subset of nodes V̂⊆ V for a fixed value p_min such that i,j ∈V̂ and p_ℓ m(t)≥ p_min, ∀ℓ, m ∈V̂, t∈{1,…, T}. For any such V̂ and p_min we generate a new temporal network G_V̂ for which we set the probability of all edges to be p'_ℓ m(t)=p_min, ∀ℓ ,m ∈V̂, t∈{1,…, T}. Since the probabilities of the edges are identically p_min, we can apply the method in Section <ref> to the resulting temporal network with the vertex set V̂ and find the probability β_ij(T) := P̂(i → j), where we use the notation P̂ (as opposed to P) to distinguish the probability of accessibility in the derived network (defined by the vertex set V̂ and identical probability p_min) from the probability of accessibility in the original network.

From the construction of G_V̂, it is easy to conclude that β_ij(T)⩽ P(i → j) in the original network. Therefore any such subset V̂ and p_min provides a lower bound on the probability of accessibility between two nodes i and j. Such a clique is not unique, as it depends on the choice of p_min. It should be noted that the size of a clique by itself does not determine the bound. For instance, a smaller clique with a higher value of p_min might result in a higher probability of accessibility and hence a better bound. Therefore, different values of p_min should be examined, and the highest accessibility probability should be selected as the lower bound. Based on this heuristic, for a fixed p_min, we obtain the corresponding lower bound as follows (see also Fig. <ref>):

* Step 1: Find the sets E_c = {(ℓ,m): p_ℓ m (t)⩾ p_min, 1⩽ t ⩽ T } and V_c = {ℓ: ∃ m such that p_ℓ m(t) ⩾ p_min, 1⩽ t ⩽ T}. Form the corresponding equivalent static graph G_c = (V_c, E_c).
* Step 2: Using the Bron–Kerbosch algorithm <cit.>, find the set of all maximal cliques of G_c.
* Step 3: Select the largest clique V̂ from the subset of cliques that contain both i and j. Generate the temporal network G_V̂ according to the selected clique, such that p_ℓ m(t) = p_min, ∀ℓ,m ∈V̂.
* Step 4: Apply the method in Section <ref> to G_V̂ and find P̂(i → j).

By applying the above method and finding the maximum value of β_ij(T) (denoted by β_ij^*(T)) over different values of p_min (possibly over all values p_min = p_ℓ m(t) with p_ℓ m(t)≤min_t{p_ij(t)}), one obtains β_ij^*(T)⩽ P(i → j).

§.§ Lower Bound II

The second lower bound is established by finding a set of edge-disjoint paths, because the events of such paths being open are statistically independent. This independence allows us to compute the probability of at least one path (from the selected edge-disjoint subset of all the paths) being open, without being concerned about the correlation between the paths (as there is no common edge between any two paths within the selected set). It is worth mentioning that in general the definition of an edge-disjoint path in a temporal network differs from its equivalent in static graphs. In temporal networks an edge (i,j,t) is identified not only by its two end nodes i and j but also by a time label t. Two edges (i,j,t) and (i',j',t') are disjoint if i≠ i' or j≠ j' or t≠ t'.
Therefore, two edge-disjoint paths might share the link between two nodes under two distinct time labels. If we denote the waiting period at each node on a path R_k= i, v_1, …, v_L_R-1, j of length L_R by t_1,…, t_L_R, the existence of an open path between i and j implies that t_1+… +t_L_R⩽ T. The waiting time at each node is the number of time slots from the current time slot until the time slot in which an edge to the next node in the path becomes open. Clearly, finding the probability of all sequences of numbers with sum at most T, given the probabilities of all the edges on this path (which are possibly time-varying as well), is computationally costly. However, if we set p_min= min_R_k min_t {p_iv_1(t), p_v_1v_2(t), …, p_v_L_R-1j(t)}, then we can assume that all the edges have a success probability of at least p_min. Therefore, P(R_k(T)), the probability of R_k being open before time T, can be lower bounded by the following inequality: P(R_k(T))⩾ 1-∑_m=0^L_R-1 \binom{T}{m} p_min^m (1-p_min)^{T-m}. The inequality holds because the probability of R_k (of length L_R) being open is at least the probability of observing at least L_R successful outcomes in T Bernoulli trials with parameter p_min. In what follows, it will be helpful to define the quality of a path as f(R_k(T)) = 1-∑_m=0^L_R-1 \binom{T}{m} p_min^m (1-p_min)^{T-m}.

Our objective is to find edge-disjoint paths with high qualities to obtain a tighter lower bound for P(i → j) (however, any set of disjoint paths would result in a lower bound). Fig. <ref> compares two paths with respect to their quality f(R_k(T)). As can be observed, a longer path (of length 3) can have a higher quality than a shorter path (of length 2). To obtain the lower bound, we form a set of edge-disjoint paths. This is done by first forming a random graph G_min where we set the probability p_ℓ m = min_t{p_ℓ m(t)} for any pair of nodes in the network. We sort the outgoing edges from i such that p_il_1⩾ p_il_2⩾…. We start from (i, l_1) to form the first path, i.e. R_1(T). Initially this path would be i,l_1, j. At each step the quality of the path f(R_1(T)) is measured, the outgoing edges from the last node (the node before j) are sorted, and the edge with maximum probability (between the current last node and a node m) is selected. If adding node m as the last node before arriving at j increases the quality of the path, the discovered path is updated by adding m to the path just before j. For instance, in Fig. <ref>, for T = 10, we initially examine the path I, K, J and then we update the path to I, K, L, J as the latter has a higher quality. We stop adding nodes to the path as soon as the quality of the path begins to decrease. Once the path R_1(T) is formed, we remove all the edges of this path from G_min and we start generating a new path by repeating the same procedure on G_min, starting from the next outgoing edge from i in the sorted list of edges. We continue this algorithm until we cannot generate any new path from i to j. This path-selection algorithm is presented in Algorithm <ref>. If we denote the set of generated paths by ℛ, lower bound II is given as follows: P(i → j)⩾ 1-∏_k=1^|ℛ|(1-f(R_k(T))) = γ_ij(T).

§ QUANTIZATION OF CONTINUOUS RANDOM ON-OFF LINKS

In many scenarios, a temporal network evolves in continuous time. Said in a different way, we may have the probability distribution of the ON or OFF intervals in the continuous-time domain.
The ON period refers to the time interval during which there exists an open edge between two specific nodes. By splitting the entire observation window into time slots of a given length, and deriving the probability of the state of an edge observed at the end of each time slot, we can quantize such continuous distributions. Such a quantization associates a discrete probability with each time slot. This enables us to use the bounds given above to estimate the probability of accessibility between any two nodes in the network.

The link between any two nodes could have started from being ON or OFF at t=0 and could have been switched ON and OFF any number of times m, from m=0 to m=∞, during the interval t = 0 to t = T_0 (see Fig. <ref>). Therefore, an infinite number of possible events should be considered when deriving the probability of observing the edge in the ON position. If we are given the probability distributions of the ON and OFF periods (for a specific edge), denoted by f_ON(τ) and f_OFF(τ), and also the probability of starting from the ON position at t = 0 (represented by p_0), we can obtain the probability of being in the ON position (denoted by SW=1, and SW=0 for OFF) at time T_0, which is derived as follows: P(SW=1)= p_0∑_m=0^∞∫_0^T_0 f_S_m^ON(s)(1-F_ON(T_0-s)) ds +(1-p_0)∑_m=0^∞∫_0^T_0 f_S_m^OFF(s)(1-F_ON(T_0-s)) ds, where f_S_m^ON is a unit mass at s=0 (a Dirac delta) for m=0 and the convolution f_ON∗…∗ f_ON∗ f_OFF∗…∗ f_OFF with m copies of f_ON and m copies of f_OFF for m>0, while f_S_m^OFF = f_ON∗…∗ f_ON∗ f_OFF∗…∗ f_OFF with m copies of f_ON and m+1 copies of f_OFF, and F_ON is the cumulative distribution function (CDF) of the ON time distribution. Also, S_m^ON represents the random variable describing the sum of m ON-OFF periods (an ON period followed by an OFF period), assuming that at t = 0 the edge has been ON. Similarly, S_m^OFF represents a similar sum with the assumption that at t = 0 the edge has been OFF.

Starting from ON (with probability p_0) or OFF (with probability 1-p_0) at t = 0 are two mutually exclusive events. For each of these two possibilities the summation of the probabilities of infinitely many exclusive events is calculated as in Equation (<ref>). Each of these events corresponds to a given number m of switchings (ON-OFF periods). To derive the probability of such an event (i.e. being in an ON position at t = T_0, starting from an ON position at t = 0, with m switchings), one needs to obtain the probability of the event that S_m^ON⩽ T_0 (the total duration of the m ON-OFF periods is less than T_0) and that after these m periods an ON period lasts at least until t = T_0, and possibly beyond. Similar arguments apply to the case of starting from the OFF position at t = 0, which is reflected in the integrals within the second summation in Equation (<ref>). The integral terms inside the summations give the probability of such events. Since, given S_m^ON = s ⩽ T_0, the last ON period can take any value larger than T_0-s, its probability is 1-F_ON(T_0-s).

It should be noted that this calculation provides the probability of an edge being open if it is observed at a specific instant of time T_0. However, another useful probability would be the probability of observing at least one ON period between t = 0 and t = T_0, to be reported as the probability of the edge being open over the corresponding time slot. Moreover, in practice one can choose the quantization time step (T_0) small enough that with high probability at most one switching occurs within each time slot, to ease the calculations. We skip the details of these last two possibilities; however, we apply these simplifications to the numerical experiment in Section <ref>.
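Equation (<ref>) is easiest to sanity-check by simulating the renewal process directly. The sketch below estimates P(SW=1) by Monte Carlo for exponential ON/OFF periods (the exponential choice and all names are our own illustration, mirroring the experiment of the next section):

```python
import random

def prob_on_at(T0, mean_on, mean_off, p0, trials=100_000):
    """Monte Carlo estimate of P(SW = 1) at time T0 for i.i.d.
    exponential ON/OFF periods with the given means."""
    hits = 0
    for _ in range(trials):
        on = random.random() < p0                    # state at t = 0
        t = 0.0
        while True:
            t += random.expovariate(1.0 / (mean_on if on else mean_off))
            if t >= T0:
                break                                # this period covers T0
            on = not on
        hits += on
    return hits / trials
```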
§ NUMERICAL EXPERIMENTS

The bounds and the mapping from continuous interval distributions to discrete probabilities of edges introduced in this paper have been examined against synthetically generated networks (Monte Carlo simulations) as well as a real-world vehicular network dataset <cit.> collected from the GPS devices of taxis in Rome, Italy. In the following, these experiments and their results are reported.

§.§ Synthetically Generated Networks

In our first experiment, we considered a network with N = 20 nodes where the probability of any edge (ℓ, m) was selected randomly from the range [0.05, 0.1]. This probability was then kept fixed over the observation window t = 1,…, 15. Fig. <ref> shows the evolution of the probability of accessibility P(i → j) for a randomly selected pair of nodes i and j over the mentioned window. As can be observed from the figure, the upper bound is very close to the expected probability of accessibility obtained from Monte Carlo simulations[For convenience, in the presentation we have shown the probability distributions over a continuous domain by interpolation in Figs. <ref> and <ref>.]. It should be noted that Lower Bound I is computationally slow because of the clique search algorithm; however, the other bounds, and particularly the upper bound, are easily applicable to large-scale networks as well.

Another interesting observation is the relationship between accessibility and delay (or trip duration) between two specific nodes in the network. We define the delay from node i to j to be the first time slot t at which an open temporal path from i to j becomes present, starting from t=1. We denote this delay by D_i → j. The probability of accessibility P(i → j) can be interpreted as the cumulative distribution function (CDF) of the delay. Therefore, the delay probability density function (pdf), denoted by π(D_i→ j), is immediately available by differentiating the probability of accessibility, i.e. π(D_i→ j) = dP(i → j)/dt. Consequently, the bounds given in this paper can be similarly differentiated to approximate the density function that describes the delay distribution. Fig. <ref> shows the distribution of the delay for the experiment setting of Fig. <ref>, as well as the approximate distributions obtained by differentiating the lower and upper bounds.

Another experiment was performed to verify the ON-OFF model derived in Section <ref>. In our experiment, we assumed that the ON and OFF periods for an edge follow exponential distributions with parameters β^ON and β^OFF, respectively. Therefore the probability distribution of the total length of being ON (or OFF) given m switchings (changing from ON to OFF or from OFF to ON) is the convolution of m exponentials, i.e., a Gamma distribution Γ(β^ON, m) (or Γ(β^OFF, m)). Therefore the pdf of the random variable S_m^ON (or S_m^OFF) is obtained by calculating the convolution of two Gamma distributions with different parameters. Various methods for such calculations are available in the literature <cit.>. We considered an observation window of T = 10 units of time in the continuous domain and observed the status of an edge over that window. We assumed that the ON and OFF periods are exponentially distributed with β^ON = 2 and β^OFF = 3, respectively. Moreover, we assumed that starting from the ON or OFF position is equally probable (i.e.
p_0 = 0.5). We ran 10000 trials for this edge, measured the status of the edge (ON or OFF) at the end of every time slot of length 0.5 from t = 0 to t = 10, and averaged over these trials. Since in a practical setting the total number of switchings cannot be infinite, we limited the number of switchings to the range m = 0 to m = 40. The comparison between the results of this experiment and the probability obtained from Equation (<ref>) is shown in Fig. <ref>, which verifies the accuracy of the method in Section <ref> for the mentioned range of m.

§.§ Real-World Dataset

As could be seen from the experiments on synthetically generated networks, the gap between the upper bound and the probabilities estimated from Monte Carlo simulations for networks with randomly assigned edge probabilities is fairly narrow. Hence, the bound is a natural candidate predictor for the probability of accessibility in real-world networks. However, it is crucial to observe the performance of the given bound beyond the abstractions of the previously mentioned synthetic networks and under more realistic conditions. With this objective, we have used the data collected from the GPS devices of a group of taxis in Rome <cit.>. A vehicular network has been selected for this experiment as such networks are mathematically more tractable; the reason is that in such networks the inter-contact time has been shown to follow an exponential distribution <cit.>.

From the entire set of 387 taxis in the original dataset, we randomly selected 25 and observed them over a period of one month (February). The observation period has been split into T = 2688 time slots of 15 minutes. We assume that two taxis are in contact if their distance is less than R = 50 meters at any fraction of time within a given time slot. The occurrence of a contact between two vehicles over a given time slot can be represented by the existence of an edge in the equivalent adjacency graph over that time slot (where each vehicle is represented by a vertex in such a graph). We further assume that the duration of a contact is negligible compared to the length of a time slot. If two vehicles are in contact for a longer period, this can be considered as several consecutive contacts. With this assumption, and given the memorylessness of the exponential distribution, the origin of time does not have any impact on the probability of a contact occurring between two vehicles over a given time slot. If we denote the inter-contact time between two vehicles by a random variable X ∼λ e^{-λ x} and the duration of a time slot by t_0, the probability of a contact occurring over a given time slot can be approximated by ∫_0^t_0 λ e^{-λ t} dt = 1-e^{-λ t_0}. It should be noted that here we have assumed that t_0≪ 1/λ; therefore, it can be assumed that with high probability at most one contact occurs between the two vehicles. Hence, transforming the continuous distribution of inter-contact times between the vertices in the corresponding graph is considerably simpler than the general procedure described in Section <ref>.

The experiment comprises two phases. In the first phase the distribution of the inter-contact time between any two individual vehicles is estimated by fitting an exponential distribution. In other words, the first phase is used for training. We have allocated the period 1⩽ t ⩽ 1100 for training.
In the second phase we use the distributions obtained from training to predict the probability of accessibility between pairs of vehicles over the period 1101⩽ t⩽ 2600. We divide this period of 1500 time slots into 10 equal and disjoint subperiods of 150 time slots. For each time slot, the fraction of experiments in which a vehicle j has been accessible from i is taken as an estimate of the probability of accessibility from i to j. Fig. <ref> compares the average delay obtained empirically from the second phase with the expected delay predicted from the training phase (using the upper bound as the predictor for the accessibility probability), for a subset of vertex pairs. We have only selected those pairs for which accessibility has been established in at least 8 out of the 10 experiments. To avoid a dense figure, half of the pairs have been randomly selected and their delays are compared in the figure.

Moreover, to evaluate the goodness of the upper bound as a predictor for the accessibility probability, we measured the correlation coefficient between the vector of all estimated probabilities obtained from the experimental phase and the vector of probabilities predicted by the upper bound for the entire set of pairs of vertices. Fig. <ref> shows the variation of the correlation coefficient over time. As can be observed, even for such a small number of experiments (ten) and for a relatively short training phase (1100 time slots), the correlation coefficient remains above 0.7 almost all the time. Therefore, the combination of the upper bound (as the predictor) and the contact probability estimation (based on a learning phase) performs with very good accuracy.

§ CONCLUSION

The formation of paths in complex networks with time-varying edges, where the presence and absence of edges is a function of time and possibly random, is far more complicated than in static graphs. In this paper, we studied the formation of such paths and the notion of accessibility between nodes in random temporal networks at a microscopic level. Finding the exact probability of having access from one node to another in such networks is rather complicated. We provided a set of bounds on this probability for a very general setting of probabilities. Moreover, we extended our results to continuous-time networks. We evaluated our analytical results with numerical experiments. The microscopic-level analysis given in this paper can be a foundation for macroscopic analysis of random temporal networks in the future.

§ ACKNOWLEDGMENT

This work was supported in part by EPSRC grant number EP/N002350/1 (Spatially Embedded Networks).
"authors": [
"Shahriar Etemadi Tajbakhsh",
"Justin P. Coon",
"David E. Simmons"
],
"categories": [
"physics.soc-ph",
"cs.SI",
"physics.data-an"
],
"primary_category": "physics.soc-ph",
"published": "20170327124124",
"title": "Accessibility and Delay in Random Temporal Networks"
} |
Dynamical alignment of visible and dark sector gauge groups Rainer Dick December 30, 2023 ===========================================================

We study a dynamic market setting where an intermediary interacts with an unknown large sequence of agents that can be either sellers or buyers: their identities, as well as the sequence length n, are decided in an adversarial, online way. Each agent is interested in trading a single item, and all items in the market are identical. The intermediary has some prior, incomplete knowledge of the agents' values for the items: all seller values are independently drawn from the same distribution F_S, and all buyer values from F_B. The two distributions may differ, and we make standard regularity assumptions, namely that F_B is MHR and F_S is log-concave. We focus on online, posted-price mechanisms, and analyse two objectives: that of maximizing the intermediary's profit and that of maximizing the social welfare, under a competitive analysis benchmark. First, on the negative side, for general agent sequences we prove tight competitive ratios of Θ(√(n)) and Θ(ln n), respectively for the two objectives. On the other hand, under the extra assumption that the intermediary knows some bound α on the ratio between the number of sellers and buyers, we design asymptotically optimal online mechanisms with competitive ratios of 1+o(1) and 4, respectively. Additionally, we study the model where the number of items that can be stored in stock throughout the execution is bounded, in which case the competitive ratio for the profit is improved to O(ln n).

§ INTRODUCTION

The design and analysis of electronic markets is of central importance in algorithmic game theory. Of particular interest are trading settings, where multiple parties such as buyers, sellers, and intermediaries exchange goods and money. Typical examples are markets for trading stocks, commodities, and derivatives: sellers and buyers each trading a single item, and one intermediary facilitating the transactions. However, the well-understood cases are comparatively quite modest. The very special case of one seller and one buyer was thoroughly studied by Myerson and Satterthwaite <cit.> in their seminal paper; they provided a beautiful characterization of many significant properties a mechanism might have, along with an impossibility theorem showing that it cannot possess them all. The paper also dealt with the case where a broker provides assistance by making two potential trades, one with each agent, while also trying to maximize his profit. This was extended in <cit.> to multiple sellers and buyers that are all immediately present in an offline manner.

Our work considers a similar setting, but with a key difference: the buyers and sellers appear one-by-one, in a dynamic way. It is natural to study this question in the incomplete information setting in which the intermediary, whose objective is to maximize either profit or welfare, does not know the sequence of buyers and sellers in advance. The framework that we employ to study the question is the standard worst-case analysis of online algorithms, whose goal is to do as well as possible in the face of a powerful adversary which tries to embarrass them. We are not the first to apply techniques from online algorithms to quantify uncertainty in markets: the closest work to ours is by Blum et al. <cit.>, who consider buyers and sellers trading identical items.
In their setting, motivated mostly from a financial standpoint, buyers and sellers arrived in an online manner, with their bids appearing to the trader and expiring after some time. The trader would have to match prospective buyers and sellers to facilitate trade. Among a plethora of interesting results, the trader's profit maximization problem was studied using competitive analysis and techniques from online weighted matchings. The key difference in our setting is that buyers and sellers do not overlap: whenever a seller appears, the intermediary has to decide whether or not to attempt to buy the item, without having a buyer ready to go. Instead, the intermediary stores the item to sell it at a later time. We believe this variation is able to capture “slower” markets, like online marketplaces similar to Amazon or AliExpress (or even regular retail stores), where uncertainty stems from not knowing how large a stock of items to buy, in expectation of the buyers to come.

§.§ Our Results

Our aim is to study this dynamic market setting, where an intermediary faces a sequence of potential buyers and sellers in an online fashion. The goal of the intermediary is to maximize his profit, or society's welfare, by buying from the sellers and selling to buyers. We take a Bayesian approach to their utilities but use competitive analysis for their arrivals: the main difficulty stems from the unknown (and adversarially chosen) sequence of agents. Further particulars and notation are discussed in Section <ref>. All the online algorithms we design are posted-price, which makes them simple, robust and strongly truthful.

First, in Section <ref> we study the case of arbitrary sequences of buyers and sellers and show that the competitive ratio, i.e. the ratio of the optimal offline profit over the profit obtained by the online algorithm, is Θ(√(n)), where n is the total number of buyers and sellers. We also study the social welfare objective, where the goal is to maximize the total utility of all participants, including the sellers, the buyers and the intermediary. The competitive ratio here is Θ(log n). All these results are achieved via standard regularity assumptions on the distributions of the agent values (see Section <ref>), which we also prove to be necessary, by providing arbitrarily bad competitive ratios in the case they are dropped (Theorem <ref>). To overcome the above pessimistic results, we next study in Section <ref> the setting where both the online and offline algorithms have a limited stock, i.e. at no point in time can they hold more than K items. In this model, the competitive ratio is improved to O(K log n), asymptotically matching that of welfare. Finally, we also propose a way to restrict the input sequence, by introducing in Section <ref> the notion of α-balanced streams, where at every prefix of the stream the ratio of the number of sellers to buyers has to be at least α. Under this condition we are able to bring down the competitive ratios for both objectives to constants. In particular, the online posted-price mechanism that we use for profit maximization, and which is derived by a fractional relaxation of the optimal offline profit, achieves an asymptotically optimal ratio of 1+o(1). A similar mechanism is 4-competitive for the welfare objective.

§.§ Prior Work

Our work is grounded in a string of fruitful research in mechanism design. The main topics that are close to our effort are bilateral trading, trading markets and sequential (online) auctions.
The first step in bilateral trading and mechanism design was made by Myerson and Satterthwaite <cit.>, who proved their famous impossibility result, even for the case of one buyer and one seller. The case of profit maximization was extended to many buyers and sellers, each trading a single identical item, in <cit.>. Some of the assumptions in our model are based on these two works. The impossibility result in <cit.>, among other difficulties, slowly vanishes for larger markets, as was shown by McAfee <cit.>. There is still active progress being made on this intriguing setting, concentrating on simple mechanisms that provide good approximations either to welfare while staying budget balanced and individually rational <cit.> or to profit <cit.>. Other recent developments include a hardness result for computing optimal prices <cit.> and a constant approximation of efficiency with strong budget balance <cit.>.

Sequential auctions have also produced a collection of interesting results, either extending the ideas of simple approximate mechanisms instead of more complex, theoretically optimal ones, or dealing with entirely new settings. Prominent examples that compare the revenue (or welfare) generated by simple, posted-price sequential auctions to the optimal, proving good approximations in certain cases, are <cit.> for single-item revenue, <cit.> for matroid constraints (and some multi-dimensional settings) and <cit.> for combinatorial auctions.

There have been many approaches that apply competitive (worst-case) analysis to mechanism design. The analysis of competitive auctions for digital goods is explored in <cit.>, where near-optimal algorithms are developed using techniques inspired by no-regret learning. There is also a deep connection between secretary problems and online sequential auctions <cit.>. Hajiaghayi et al. utilized techniques such as prophet inequalities for unknown market size with distributional assumptions in <cit.>. A comprehensive exposition of online mechanism design by Parkes can be found in <cit.>. There are also positive results in online auctions when the valuation distribution is unknown (but usually known to be restricted in some way, e.g. having bounded support or a monotone hazard rate). Babaioff et al. explored the case of selling a single item to multiple i.i.d. buyers in <cit.>. The case of k items in a similar setting was studied in <cit.>, while the case of unlimited items (digital goods auctions) in <cit.> and <cit.>. Budget constraints were also introduced in <cit.>, where a procurement auction was the focus.

§ PRELIMINARIES AND NOTATION

The input is a finite string σ∈{S,B}^* of buyers (B) and sellers (S). The online algorithm has no knowledge of σ(t), i.e. whether σ(t)=S or σ(t)=B, before step t. Also, it doesn't know the length n(σ) of σ. Denote by n_S(σ), n_B(σ) the number of sellers and buyers, respectively, in σ, and let N_S(σ), N_B(σ) be the corresponding sets of indices, i.e. N_S(σ)={t : σ(t)=S} and N_B(σ)={t : σ(t)=B}. Let N(σ)=N_S(σ)∪ N_B(σ)={1,2,…,n(σ)}. In the above notation we will often drop the σ if it is clear which input stream we are referring to. The values of the sellers are drawn i.i.d. from a probability distribution (with cdf) F_S and those of the buyers i.i.d. from a distribution F_B, both supported over intervals of nonnegative reals. We denote the random variable of the value of the t-th agent by X_t. We assume that distributions F_S and F_B are continuous, with bounded expectations μ_S and μ_B, and have (well-defined) density functions f_S and f_B, respectively.
It will also be useful to denote by X_S a random variable drawn from distribution F_S, and similarly X_B∼ F_B, and for any random variable Y and positive integer m to use Y^(m) to represent the maximum order statistic out of m i.i.d. draws from the same distribution as Y. We will also use the shortcut notation μ^(m)=𝔼[Y^(m)].

We study posted-price online algorithms that, upon seeing the identity of the t-th agent (whether she is a seller or a buyer), offer a price p_t. We buy one unit of the item from sellers that accept our price (i.e. if σ(t)=S and X_t≤ p_t) and pay them that price, and we sell to buyers that accept our price (i.e. if σ(t)=B and X_t≥ p_t), given stock availability (see below), and collect from them that price. So, a price p_t+1 can only depend on σ(1),…,σ(t+1) and the result of the comparison X_i≤ p_i in all previous steps i=1,2,…,t. Let K_t denote the available stock at the beginning of the t-th step, i.e. K_1=0 and K_t+1 = K_t+1 if σ(t)=S and X_t≤ p_t; K_t+1 = K_t-1 if σ(t)=B, K_t≠ 0 and X_t≥ p_t; and K_t+1 = K_t otherwise. Then, the set of sellers from whom we bought items during the algorithm's execution is I_S={t∈ N_S : X_t≤ p_t} and the set of buyers we sold to is I_B={t∈ N_B : X_t≥ p_t and K_t≠ 0}. Notice that these are random variables, depending on the actual realizations of the agent values X_t.
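To make the dynamics concrete, the following is a minimal simulation of one run of a posted-price mechanism, with the stock evolving exactly as K_t above (a sketch; the price rules and all names are placeholders of our own choosing):

```python
import random

def run_posted_prices(stream, seller_price, buyer_price,
                      draw_seller, draw_buyer):
    """One run on a string like 'SBSB...'; the price rules may depend
    on the step t and the current stock."""
    stock, profit = 0, 0.0
    for t, who in enumerate(stream, start=1):
        if who == 'S':
            q = seller_price(t, stock)
            if draw_seller() <= q:                 # seller accepts: buy
                stock += 1
                profit -= q
        else:
            p = buyer_price(t, stock)
            if stock > 0 and draw_buyer() >= p:    # sale requires stock
                stock -= 1
                profit += p
    return profit, stock

# e.g. fixed prices with i.i.d. uniform values on [0, 1]:
# run_posted_prices("SB" * 50, lambda t, k: 0.25, lambda t, k: 0.75,
#                   random.random, random.random)
```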
We do that for the case of the profit objective, as this constantwill have a very natural interpretation: you can think of it as the maximumamount of deficit on which an online algorithm can run at any point in time,since an adversary can always stop the execution at any time he wishes.Given that interpretation, it makes sense to allow for this constant todepend on seller distribution F_S, since even when we face a single sellerat the first step we expect to spend an amount that depends on therealization of her value. Thus, we will say that an online algorithm isρ(n)-competitive with respect to welfare, if for any input sequence ofagents σ and any probability priors F_S,F_B,ℛ(,σ)≤ρ(n)·ℛ(A,σ)+O(μ_S).§ DISTRIBUTIONAL ASSUMPTIONS Throughout most of the paper we will make some assumptions on thedistributions F_B, F_S from which the buyer and seller values are drawn. In particular, we will assume that F_B has monotone hazard rate(MHR), i.e. log(1-F_B(x)) is concave, and that F_S islog-concave, i.e. log F_S(x) is concave.For convenience, we will collectively refer to both the above constraints asregularity. These conditions are rather standard in the optimal auctions literature, andthey encompass a large class of natural of distributions includinge.g. exponential, uniform and normal ones.Notice that distributions that satisfy the above conditions also fulfil theregularity requirements introduced in the seminal paper Myerson andSatterthwaite <cit.> for the single-shot, one buyerand one seller setting of bilateral trade, namely thatx+F_S(x)/f_S(x) and x-1-F_B(x)/f_B(x) are both increasingfunctions.Finally, we must mention that such regularity assumptions are necessary, inthe sense that dropping them would result in arbitrarily bad lower boundsfor the competitive ratios of our objectives, as it is demonstrated byTheorem <ref>. The following two lemmas demonstrate some key properties of the regulardistributions that will be very useful in our subsequent analysis: For any random variable Y drawn from an MHR distribution with bounded expectation μ and standard deviation s, *Y≥ y≥1/e for any y≤μ *Y≥ y< 1/e for any y> 2μ *Y^(m)≤ H_m·μ, where H_m is the m-th harmonic number. *s ≤μ A proof of Property <ref> can be found in <cit.>, of Property <ref> in <cit.>, and of Property <ref> in <cit.>. For Property <ref>, from <cit.> we know that Y^2≤ 2μ^2, so s^2=Y^2-μ^2≤μ^2. For any distribution over [0,∞) with log-concave cdf F and expectation μ, x≤ eμ F(x)for any x≤μ. Fix some x≤μ and let c=x/μ. Define the random variable Y=cX, where X is drawn from F, and let F_Y be the cdf of Y. Since F is log-concave, ln F(t) is a concave function, and so from Jensen's inequality ln F(cμ)=ln F(Y)≥∫_0^∞ln F(t) dF_Y(t)= ∫_0^∞ln F(t)c dF(t)=c ∫_0^1ln u du=-c. So, F(x)≥ e^-c=cμ/μe^-c/c=x/μe^-c/c. The lemma follows from the fact that e^-c/c is decreasing for c∈ (0,1]. Finally, we prove the following property bounding the sum of maximumorder statistics of a distribution, that holds for general (not necessarilyregular) distributions and might be of independent interest: The expected average of the k-th highest out of m independent draws from a probability distribution with expectation μ and standard deviation s can be at most μ+2√(m/k)s. Let Y^(1:m)≤ Y^(2:m)≤ Y^(m:m) denote the order statistics of m independent draws from a probability distribution with mean μ and standard deviation s. We want to prove that ∑_i=m-k+1^mY^i:m≤ kμ+2√(km) s. 
From <cit.> we know that Y^(i:m)≤μ +s √(i-1/m-i+1), so it is enough to show that ∑_i=m-k+1^m√(i-1/m-i+1)≤ 2√(km). Indeed, by using the transformation j=m-i+1, we get ∑_i=m-k+1^m√(i-1/m-i+1)=∑_j=1^k√(m/j-1)≤√(m)∑_j=1^k√(1/j)≤√(m)∫_0^k x^-1/2 dx=√(m)· 2√(k). § GENERAL SETTING We start by studying the general setting where no additional assumptionsare enforced on the structure of the input sequence. The adversary is free toarbitrarily choose the identities of the agents.§.§ Welfare For regularly distributed agent values[As matter of fact, in the proof of Theorem <ref> just regularity for the buyer values would suffice, i.e. F_B being MHR.], the online auction that posts to any seller and buyer the median of their distribution is O(ln n)-competitive with respect to welfare. This bound is tight. We split the proof of the theorem in two more general lemmas below, corresponding to upper and lower bounds. Then, the upper bound for our case of regular distributions follows easily from Lemma <ref> by using constants c_1=c_2=2, and taking into consideration that, from Property <ref> of Theorem <ref>, the ratio of the maximum order statistic for the MHR distribution F_B is upper bounded by r_B(m)≤ H_m≤ O(ln m). For the lower bound, it is enough to observe that this ratio is attained by an exponential distribution, which is MHR. For any choice of constants c_1,c_2> 1, the following fixed-price online auction has a competitive ratio of at most maxc_1/c_1-1,c_1c_2· r_B(n_B) with respect to welfare, where n_B is the number of buyers, and r_B(m)=μ^(m)_B/μ_B is the ratio between the m-maximum-order statistic and the expectation of the buyer value distribution. * Post to all sellers price q=F^-1_S(1/c_1). * Post to all buyers price p=F^-1_B(c_2-1/c_2). Let A denote our online algorithm andan offline algorithm with optimal expected welfare.Fix an input stream σ. Looking at (<ref>), the maximum welfare thatcan get from the sellers is at most ∑_t∈ N_SX_t=n_sμ_S, while from the buyers at most I_B· X_B^(n_B)≤κX_B^(n_B), where κ is the maximum number of sellers that can be matched to distinct buyers that arrive after them[You can think of that as the maximum size of a matching in the following undirected graph: the nodes are the sellers and the buyers, and there is an edge between any seller and all the buyers that appear after her in σ.] in σ: clearly, no mechanism can sell more than κ items. Bringing all together we have that 𝒲()≤ n_sμ_S + κμ^(n_B)_B=n_sμ_S +r_B(n_B)·κμ_B. For the online algorithm now, from the sellers we get ∑_i∈ N_SX_i>qX_i|X_i>q≥ n_s(1-F_S(q))X_S=c_1-1/c_1· n_Sμ_S and from the buyers at least κX_S≤ qX_B≥ pX_i|X_i≥ p≥κ F_S(q)(1-F_B(p))X_B= 1/c_11/c_2·κμ_B, just by considering one of the κ-size matchings discussed before: if we manage to buy from one of these κ sellers, then we will definitely have stock availability for the matched buyer. The upper bound in Lemma <ref> cannot be improved: For any probability distribution F, even if the seller and buyer values are i.i.d. from F, the sequence SB^n forces all posted-price online mechanisms to have a competitive ratio of (r(n)), where r(n)=μ^(n)/μ is the ratio of the n-maximum-order statistic of distribution F to its expectation. Assume that the seller and buyer values are drawn i.i.d. from a distribution F. Let Y∼ F denote a random variable following this distribution and denote μ=Y, μ^(n)=Y^(n). Fix an online algorithm A that posts price q to the seller and prices p ≡ p_1,p_2,… to the buyers. 
Notice that this sequence of buyer prices p cannot depend on the actual stream length n, since that is being selected adversarially. We overestimate A's expected welfare by assuming that it gets maximum welfare from the first seller, i.e. Y=μ, while at the same buys for sure the item from her so that it has stock availability to sell in the sequence of buyers. Then, from (<ref>) its expected welfare is given by 𝒲( p)=μ+∑_t=1^n π(t)·λ(p_t), where π(t)=π( p,t)=∏_j=1^t-1Y<p_j=∏_j=1^t-1F(p_j) and λ(y)=Y≥ y·YY≥ y=(1-F(y))YY≥ y=∫_ y^∞ xf(x) dx ≤μ. First we show that we can without loss assume that the buyer prices are nonincreasing. Indeed, for a contradiction suppose that exists a time step t^* such that α≡ p_t^*<p_t^*+1≡β. Consider now the online mechanism that uses prices p', where p' results from the original prices p if we flip the prices at steps t^*,t^*+1, i.e. p_t^*'=β, p_t^*+1'=α, and p'_t=p_t for all t≠ t^*,t^*+1. Then, the difference in the expected welfare between the two mechanisms is 𝒲( p')-𝒲( p) =∑_t=t^*^t^*+1π( p',t)·λ(p_t')-∑_t=t^*^t^*+1π(t)·λ(p_t) = π(t^*)λ(β)+π(t^*)F(β)λ(α) - π(t^*)λ(α)-π(t^*)F(α)λ(β) = π(t^*)[(1-F(α))λ(β)-(1-F(β))λ(α) ] = π(t^*)(1-F(α))(1-F(β))(YY≥β - YY≥α), which is nonnegative since α<β. There are two options for the prices p: either F(p_t)=1 for all t, or k=mintF(p_t)<1 is a well-defined positive integer that does not depend on n, in which case define the constant c≡ F(p_k)<1. From (<ref>), in the former case it is easy to see that W( p)= μ, while in the latter one 𝒲( p) ≤μ +π(k)∑_t=k^nF(p_k)^t-kλ(p_t) ≤μ +π(k)∑_t=k^n c^t-kμ≤(1+∑_j=0^∞ c^j )μ =2-c/1-cμ On the other hand, it is a well-know fact from the theory of prophet inequalities (see e.g. <cit.>) that by using a price of μ^(n)/2 for all the buyers an offline mechanism can achieve a welfare of at least μ^(n)/2 from the buyers, given of course availability of stock. So, by setting e.g. a price equal to the median of F for the seller, the optimal offline welfare is at least 1/2μ +1/4μ^(n)=(μ^(n)). As the following theorem demonstrates, the regularity assumption on theagent values is necessary if we want to hope for non-trivial bounds. Inparticular, the lower bound in Lemma <ref> can be madearbitrarily high: For any constant ε∈ (0,1), there exists a continuous probability distribution F such that any online posted-price mechanism has a competitive ratio of (n^1-ε) on the input sequence SB^n, even if the values of the sellers and the buyers are i.i.d. Fix some ε∈ (0,1) and choose the Pareto distribution with F(x)=1-x^-1/1-ε for x∈ [1,∞). The expected value of this distribution is μ=1/ε while the expectation of the maximum order statistic out of n independent draws is μ^(n)=n(n)(ε)/(n+ε)∼(ε) n^1-ε, since lim_n→∞(n+ε)/(n)/n^ε=1, where (x) denotes the standard gamma function. So, as n grows large, the ratio in Lemma <ref> becomes r(n)=μ^(n)/μ=ε(ε)· n^1-ε≥4/5 n^1-ε=(n^1-ε).§.§ Profit Now we turn our attention to our other objective of interest, that ofmaximizing the expected profit of the intermediary. As it turns out, thisobjective has some additional challenges that we need to address. Forexample, as the followingtheorem demonstrates, if the distribution of seller valuesis bounded away from 0, the competitive ratio can be arbitrarily bad,even for i.i.d. values from a uniform distribution: For any a>0 and ε∈ (0,1), if the seller and buyer values are drawn i.i.d. 
from the uniform distribution over [a,b] where b> 2a, then no online posted-price mechanism can have an approximation ratio better than a(1-1/k)^4 n^1-ε with respect to profit, where k=b/a-1. In particular, for any uniform distribution over an interval [1,h] with h≥ 3 the lower bound is 1/2^4n^1-ε=(n^1-ε). Fix a,b>0 such that k≡b/a-1>1. Assume that the buyer and seller values are drawn i.i.d. from the uniform distribution [a,b], i.e. the cdf is F(x)=x-a/b-a=x-a/ak for all x∈[a,(k+1)a]. Consider the input stream σ=S^n/2B^n/2, for n even. First, it is easy to see that for any ε∈ (0,1) no online algorithm can buy more than n^ε/1281/a items from the sellers in the first part of the stream, otherwise it will have to spend more than n^ε/128=ω(1). This means that the maximum profit that an online algorithm can get, even if it manages to sell to the buyers all the items she bought from the sellers, is at most n^ε/1281/a(b-a)=k/128n^ε. Consider an offline algorithm that posts to seller and buyers the prices corresponding to the 1/8(1- 1/k)^2 and 1/2(1- 1/k) percentiles, respectively. That is, buyers get a price of p=F^-1(y)=a(yk+1) and sellers q=F^-1(y^2/2)=a/2(2+y^2k), where y=1/2(1- 1/k). Then, the probability that the offline algorithm buys an item from a specific seller is F(q), resulting in the algorithm spending n/2F(q)q in expectation. On the other hand, underestimate its expected income buy considering only selling to the i-th buyer the item that you got from the i-th seller. Then, the probability of achieving a successful transaction with a particular buyer is F(q)(1-F(p)), resulting in an expected profit of at least n/2F(q)(1-F(p))p-n/2F(q)q =n/2y^2/2[(1-y)F^-1(y)- F^-1(y^2/2)]= an/8y^3[(3y-2)k-2 ]= ak/128n(1-1/k)^4. If we consider distributions supported over intervals that include 0, under standard regularity assumptions we can do a little better than the triviallower bound of Theorem <ref>: For agent values regularly distributed over intervals that include 0, the following online posted-price mechanism achieves a competitive ratio of O(n^1/2+ε) for any ε>0: * Post to the i-th seller price q_i=F_S^-1(1/e1/i^1/2+ε) * Post to all buyers price p=μ_B. Fix an input stream σ of length n. Let μ_B and s_B be the expectation and standard deviation of the buyer value distribution F_B. As in the proof of Lemma <ref>, let κ denote the maximum number of sellers that can be matched to distinct buyers that arrive after them in σ. If μ_B^(j:m) denotes the expectation of the j-th largest out of m independent draws from F_B, since no algorithm can make more than κ sales over its entire execution, the optimal offline profit is upper bounded by ∑_j=1^κμ_B^(n_B-j+1:n_B)≤∑_i=n-κ+1^nμ_B^(i:n)≤κμ_B +2√(κ n)s_B≤ 3√(κ)√(n)μ_B, where for the second inequality we have used Lemma <ref> and for the last one we have used Property <ref> from Theorem <ref> and the obvious fact that κ≤ n. For the analysis of the online mechanism now, the expected number of items that it gets from the first κ sellers is ∑_i=1^κ F_S(q_i)=1/e∑_i=1^κ1/i^1/2+ε≥1/eκ^1/2-ε. So, by considering the FIFO matching between these first κ sellers and their corresponding buyers (see Lemma <ref>), the expected income of our algorithm is at least 1/eκ^1/2-ε (1-F(p))=1/eκ^1/2-ε (1-F(μ_B))≥1/e^2κ^1/2-ε, where in the last step we deployed Property <ref> of Theorem <ref>. So, it only remains to be shown that the online algorithm does not spend more than a constant amount. 
Indeed, our expected spending is at most ∑_i=1^∞ q_iF_S(q_i)≤∑_i=1^∞ eμ_S F_S(q_i)^2 = 1/eμ_S∑_i=1^∞1/i^1+2ε =O(μ_S), where for the first inequality we have used Lemma <ref>, taking into consideration that seller prices q_i are decreasing and q_1 is below μ_S. This is true because again from Lemma <ref> for x=μ_S we know that μ_S ≤ eμ_S F(μ_S), or equivalently F(μ_S)≥1/e=F(q_1). The algorithm of Theorem <ref> is asymptoticallyoptimal: If the seller and buyer values are drawn i.i.d. from the uniform distribution over [0,1], then no online posted-price mechanism can have an approximation ratio better than (√(n)).As in the lower bound proof of Theorem <ref> we again deploy an input sequence σ=S^n/2B^n/2 with n even. Let F(x)=x be the cdf of the uniform distribution over [0,1]. This time we argue that no online algorithm can buy more than (√(n)) items from the sellers, in expectation. Indeed, let q_i be the price that the online mechanism posts to the i-th seller. Then, the expected number of items m_σ bought from the sellers is ∑_i=1^n/2F(q_i)=∑_i=1^n/2q_i, while the expected expenditure c_σ is ∑_i=1^n/2F(q_i)q_i=∑_i=1^n/2q_i^2. By the convexity of the function t↦ t^2 and Jensen's inequality it must be that m_σ=∑_i=1^n/2q_i ≤√(n/2)(∑_i=1^n/2q_i^2)^1/2=O(√(c_σ)√(n)), so given that our deficit must be c_σ=O(1/2), we get the desired m_σ=O(√(n)). As a result, the online profit can be at most O(√(n))· 1=O(√(n)). For the offline algorithm we use prices q=1/8 and p=1/2 for the buyers and sellers, respectively, and by an analogous analysis to that of the proof of Theorem <ref>, we get that the expected offline profit is at least n/2F(q)(1-F(p))p-n/2F(q)q=n/21/8(1-1/2)1/2-n/21/81/8=n/128=(n).§ LIMITED STOCK If one looks carefully at the lower bound proof for the profit inTheorem <ref>, it becomes clear that the source ofdifficulty for any online algorithm is essentially the fact that withoutknowledge of the future, you cannot afford to spend a super-constantamount of money into accumulating alarge stock of items, without theguarantee that there will be enough demand from future buyers. Inparticular, it may seem that the offline algorithm has an unrealisticadvantage of using a stock of infinite size. The natural way to mitigate thiswould be to introduce an upper bound K on the number of items thatboth the online and offline algorithms can store at any point in time. As itturns out, this has a dramatic improvement in the competitive ratio for theprofit: Assuming stock sizes of at most K items, under our standard regularity assumptions the following online mechanism is O(Krlog n)-competitive, where r=max1,μ_S/μ_B: * If your stock is not currently full, post to sellers price q=F_S^-1(1/r1/2eK) * Post to all buyers price p=μ_B. The proof is similar to that of Theorem <ref>, but certain points need some special care. Let κ again be the maximum number of sellers that can be matched to distinct buyers that follow them, but this time under the addedrestriction of the K-size stock. This corresponds to the maximum matching with no “temporal” cut of size greater than K. We write “temporal” cut to mean any cut in the graph that separates the vertices (buyers and sellers) 1… i from vertices i+1… n — that is, precisely the condition that we cannot match more than K sellers from an initial segment to buyers later in the sequence. 
Lemma <ref> in the appendix demonstrates that such a κ-size matching can be computed not only offline, but also online using a FIFO queue of length K, adding sellers to the queue while it is not full and matching buyers greedily: we post prices to sellers, only if we have free space in our stock, i.e. when the matching queue is not full. We underestimatethe online profit by considering only selling an item to the buyer that is matched to the seller from which we bought the item. Mimicking the analysis in the proof of Theorem <ref> we can see that the expected number of items bought from the κ matched sellers is κ F_S(q)≥κ1/2eK1/r. Now we argue that q≤μ_B/2. Indeed, since F_S(q)≤1/e we know for sure that q≤μ_S, and so from Lemma <ref> it is q≤ eμ_S F(q)≤ eμ_Sμ_B/μ_S1/2e=μ_B/2. Next, notice that whenever we make a successful sale, the contribution to profit is p-q≥μ_B-μ_B/2=1/2μ_B. Thus, the total expected gain in profit from sales is at least κ F_S(q) (1-F_B(p))(p-q) ≥κ1/2eK1/μ_S/μ_B+1(1-F_B(μ_B))1/2μ_B ≥1/4e^21/Krκμ_B, where in the bound for the quantile 1-F_B(μ_B) we used Property <ref> of Theorem <ref>. Also, the profit we loose from the cost of unsold items cannot be more than Kq≤ Kμ_S e1/2eK=O(μ_S). On the other hand, the offline profit is at most κ times the expected maximum order statistic out of n independent draws from F_B, so by Property <ref> of Theorem <ref> it is upper bounded by κ H_nμ_B. Putting everything together, the competitive ratio of the online algorithm is at most κ H_nμ_B/1/4e^21/Krκμ_B=O(Krln n ). We want to mention here that the above upper bound in Theorem <ref>, although a substantial improvement from the (√(n)) one for the general case in Theorem <ref>, it cannot be improved further: the logarithmic lower bound is unavoidable, since a careful inspection of the welfare lower bound in the proof of Lemma <ref> reveals that the same analysis carries over to the profit. In particular, the last parenthesis of YY≥β - YY≥α in (<ref>) will be replaced by β-α which is still nonnegative, and also the bad instance sequence of SB^n does not use a stock of size more than 1. We try to overcome this obstacles by considering a different model of constrained streams in the following section.§ BALANCED SEQUENCES As we saw in Section <ref>, introducing a restriction in thesize of available stock can improve the performance of our onlinealgorithms with respect to profit. However, the bound is stillsuper-constant. Thus, it is perhaps more reasonable to assume someknowledge of the ratioα between buyers and sellers in sequences the intermediary mightface. This allows us finer control over the trade-off between high volume oftrades and the hunt for greater order statistics.In this section we analyse the competitive ratio for profit and welfareobtained by onlinealgorithms on α-balanced sequences. Let α be a positive integer. A sequence containing m buyers is called α-balanced if it contains α m sellers and the i-th buyer is preceded by at least α i sellers. For example, the sequence SBSSBSBB is 1-balanced, butSBBSSB is not. Similarly, SSSBSB is 2-balanced, while SSBSBSSSB isn't. Note that since n = n_S α+1/α = n_B(α+1), we only need to know the number of buyers of a sequence.For convenience, we will denote it by m instead of n_B, as it is usedquite often. §.§ ProfitWe first work on profit, deriving bounds for a variety of online and offlinemechanisms. Naturally, there are two types of offline mechanisms: adaptive and non-adaptive. 
The non-adaptive posted-price mechanism calculates all prices in advance based on the sequence of buyers and sellers, while the adaptive posted-price mechanism can alter the prices on the fly, depending on the outcomes of previous trades.We show that there is a competitive online mechanism forα-balanced sequences. To do this, we compare the optimal adaptive and non-adaptive profit to the profit of a class of hypothetical mechanisms, called fractional mechanisms, which are allowed to buy fractional quantities of items: posting the price p would buy exactly F_S(p) items or sell 1-F_B(p) items. The advantage of using fractional mechanisms is that at any point we know the exact quantity of items in the hands of the intermediary instead of the expectation; animmediate consequence of this is that we know in advance whether there is enough quantity to sell, which implies that the adaptive and non-adaptive versions of the optimal fractional mechanism are identical.We can now give an outline of the results in this section: Forα-balanced sequences σ with m buyers and α msellers, we establish the following relations of optimal profits:adaptive(σ) ≤fractional(σ)≤fractional(S^α mB^m)≈non-adaptive(σ),the last of which will be our online algorithm. We begin by the fractionaloffline mechanism. The profit gained by the optimal fractional mechanism for the sequence S^α mB^m is maxm(p(1-F_B(p)) - α· q F_S(q)) s.t. 1-F_B(p) = α F_S(q)p,q ∈ [0,∞). The profit and optimal prices can be calculated through the following optimization: max∑_i=1^m p_i(1-F_B(p_i)) - ∑_i=1^α m q_i F_S(q_i) s.t.∑_i=1^m (1-F_B(p_i)) ≤∑_i=1^α m F_S(q_i)p_i,q_i ∈ [0,∞), where q_i and p_i are the prices for buying and selling respectively.However, we can assume that the first constraint is tight, as all q_i's can be lowered until equality is achieved, without hurting the trades happening in the second half of the sequence. Remember, these are not in expectation, but rather, fractions. This constrained optimization can be reduced to finding stationary points of its Lagrange function ℒ = ∑_i=1^m p_i(1-F_B(p_i)) - ∑_i=1^α m q_i F_S(q_i) - λ (∑_i=1^m (1-F_B(p_i)) - ∑_i=1^α m F_S(q_i)). Taking its derivative with respect to price p_i we get: (1-F_B(p_i)) -p_i f_B(p_i)= -λ f_B(p_i)⇔ p_i - 1-F_B(p_i)/f_B(p_i) = λ, which has at most one solution for any given λ due to the distribution being regular. The treatment of q_i's is similar, leading to a unique solution as well. Thus, since p_i = p and q_i = q for all i we obtain the stated result.For other sequences containing α m sellers and m buyers in adifferent order, we can use the following lemma to establish the middle partof inequality <ref>. For any α-balanced σ with m buyers, fractional(σ) ≤fractional(S^α mB^m) Let q_i, p_i be the prices set by the optimal fractional mechanism for sequence σ. These prices have to satisfy ∑_1^m (1-F_B(p_i)) ≤∑_1^α m F_S(q_i), to ensure that the total quantity of items sold does not exceed the amount bought. Thus, the prices p_i, q_i represent a feasible solution to the optimization problem for the sequence S^α mB^m and by definition, their profit is at most as much as the optimal. 
For any sequence σ we have adaptive(σ) ≤ fractional(σ). The intuition behind the proof is that the optimal adaptive profit is bounded from above by the optimal fractional adaptive profit (since fractional mechanisms form a more general class of mechanisms); since for fractional mechanisms the optimal adaptive and non-adaptive profits coincide, the theorem follows. For a more rigorous technical treatment, see Appendix <ref>.

At this point, we have a clear model of the adversary's power: the fractional mechanism's revenue for the sequence S^α mB^m, setting only two prices p, q for sellers and buyers. Could we do the same online? It seems likely. After all, long sequences of buyers and sellers seem to lead to a similar amount of trading on average by a mechanism setting the same prices. Based on the previous discussion we propose the following online posted-price algorithm:

* Use the prices p, q given by the optimal fractional solution for S^α mB^m (see Theorem <ref>).

This algorithm works without knowing the length of the sequence chosen by the adversary.

Let A be the online algorithm defined by the optimal fractional offline prices of (<ref>). Consider two α-balanced sequences σ_1 and σ_2 of equal length. We write σ_1 ≻ σ_2 whenever every prefix of σ_1 contains at least as many sellers as the prefix of σ_2 of equal length. Then,

σ_1 ≻ σ_2 ⇒ ℛ(A,σ_1) ≥ ℛ(A,σ_2).

Assume the draws of σ_1 and σ_2 come from the same probability space, so that the i-th agent gets the same draw in both sequences. We will show that all trades that happened in σ_2 (or at least as many) also occur in σ_1. Let i be the index of an arbitrary buyer that was matched to a seller in σ_2, and let k be the number of items in stock when he arrives in σ_1. If k > 0, then we trade with him as we would in σ_2. If k = 0, we have already traded at least as many items as in σ_2 at this point. To see this, note that since σ_1 ≻ σ_2, at least as many items have been bought from the first i−1 agents of σ_1 as of σ_2, and because k = 0, at least as many have been traded.

Although not all sequences are comparable (e.g. SSBBSB and SBSSBB), the sequence (S^αB)^m is the bottom element among all α-balanced sequences of length (α+1)m. This is trivial, as any α-balanced sequence must have at least iα/(α+1) sellers in any prefix of length i, and (S^αB)^m is tight for this bound.

To formalize our intuition of making the same number of trades in the long run, we reformulate our algorithm in the more familiar setting of random walks. Instead of considering agents separately, each “timestep” will be one sub-sequence S^αB, giving m steps in total. Thus, we are interested in the random variables Z_i, denoting the items in stock at the end of each step, starting with Z_0 = 0. Knowing that the algorithm buys αmF_S(q) items in expectation, the expected profit can be written as

ℛ((S^αB)^m) = (αmF_S(q) − 𝔼[Z_m])(p − q) − 𝔼[Z_m]q,

which is the revenue of the expected number of trades minus the cost of the unsold items.

𝔼[Z_m] ≤ √(2mα^2 log m)(1 − 2/m) + 2α.

The process Z_i is almost a martingale, but not quite: clearly Z_i ≤ αm for all i, and we do have 𝔼[Z_i+1 | Z_i] = Z_i whenever Z_i ≥ 1, since the expected change in items after one step is αF_S(q) − (1 − F_B(p)) = 0 by Theorem <ref>. However, 𝔼[Z_i+1 | Z_i] > Z_i when Z_i = 0, by the no-short-selling assumption. We can define Y_i on the same probability space, with Y_0 = 0 and, writing ΔZ_i+1 for the net change the stock would undergo during step i+1,

Y_i+1 = Y_i + ΔZ_i+1 if Y_i > 0;  Y_i+1 = Y_i − ΔZ_i+1 if Y_i < 0;  and Y_i+1 = Y_i ± ΔZ_i+1, each with probability 1/2, if Y_i = 0.

The crucial observation is that Y_i behaves similarly to Z_i but has no barrier at 0.
Notice that |Y_i| ≥ Z_i for all i and that Y_i is a martingale. Moreover, we have |Y_i+1 − Y_i| ≤ α, thus by the Azuma-Hoeffding inequality we can bound the expected value of Z_m:

Pr[Z_m ≥ x] ≤ Pr[|Y_m| ≥ x] = Pr[|Y_m − Y_0| ≥ x] ≤ 2e^−x^2/(2mα^2)
⇒ 𝔼[Z_m] ≤ x(1 − 2e^−x^2/(2mα^2)) + 2αm e^−x^2/(2mα^2),

where we can set x = √(2mα^2 log m) to obtain the simpler form

𝔼[Z_m] ≤ √(2mα^2 log m)(1 − 2/m) + 2α.

Let r = max{2, μ_S/μ_B}. The optimal value of Programme (<ref>) is at least mμ_B/(2er). Furthermore, at any optimal solution the buyer price has to satisfy p ≤ 4ln(4er)μ_B.

Consider the value of Programme (<ref>) that corresponds to the solution determined by the seller price q such that F_S(q) = 1/(eαr). In a similar way to the proof of Theorem <ref>, it is again easy to see that q ≤ μ_S, since F_S(q) ≤ 1/e, and so by Lemma <ref> and the regularity of F_S we get that q ≤ eμ_S · 1/(eαr) ≤ μ_S/r ≤ μ_B/2. Furthermore, for the corresponding buyer price p we have 1 − F_B(p) = αF_S(q) = 1/(er) < 1/e, and so from Property <ref> of Theorem <ref> we get that p ≥ μ_B. Thus, the objective value of this particular solution is at least mαF_S(q)(p − q) ≥ m · (1/(er)) · (μ_B − μ_B/2) = mμ_B/(2er).

Next, for the upper bound on the buyer price, consider a solution that has buyer price p̂ = cp^* for c ≥ 1, where F_B(p^*) = 1 − 1/e. Then, since F_B is an MHR distribution, (1 − F_B(x))^1/x is decreasing in x, as can be verified using the concavity of log(1 − F_B(x)) (see e.g. <cit.>), so 1 − F_B(p̂) ≤ (1 − F_B(p^*))^p̂/p^* = e^−c. Furthermore, since F_B(p^*) = 1 − 1/e, from Property <ref> of Theorem <ref> it must be that p^* ≤ 2μ_B, and thus p̂ ≤ 2cμ_B, resulting in 1 − F_B(2cμ_B) ≤ 1 − F_B(p̂) ≤ e^−c. This means that if we use a solution with p = 2cμ_B for some c ≥ 1, the objective value of the Programme cannot exceed m(1 − F_B(p))(p − q) ≤ m · e^−c · 2cμ_B. So, unless this value is at least mμ_B/(2er), the particular choice of p cannot be part of an optimal solution; thus it must be that ce^−c ≥ 1/(4er). It is not difficult to check that this requires c ≤ 2ln(4er), since (2ln x)e^−2ln x = 2ln x/x^2 < 1/x for any x > 0 and ce^−c is a decreasing function for c ≥ 1. As a result of the above analysis, we can conclude that the buyer price p of any optimal solution of Programme (<ref>) must either satisfy p < 2μ_B or otherwise satisfy p ≤ 2 · 2ln(4er) · μ_B = 4ln(4er)μ_B. In either case, the desired upper bound for p in the theorem's statement holds.

Under our standard regularity assumptions, the proposed non-adaptive online mechanism is (1 + o(α^3/2 r log r))-competitive for any balanced sequence, where r = max{2, μ_S/μ_B}.

Plugging (<ref>) into (<ref>), we get:

ℛ((S^αB)^m) ≥ αmF_S(q)(p − q) − 𝔼[Z_m](p − q) − 𝔼[Z_m]q
≥ αmF_S(q)(p − q) − (√(2mα^2 log m)(1 − 2/m) + 2α)p
≥ αmF_S(q)(p − q) − O(α√(m ln m) · p).

Using Lemma <ref>, Theorem <ref> and Theorem <ref>, we know that for every α-balanced sequence the profit of our non-adaptive online algorithm is at least ℛ((S^αB)^m), while the optimal offline profit is at most that of the fractional mechanism on the sequence S^α mB^m, i.e. αmF_S(q)(p − q). Thus, the second term in (<ref>) bounds the additive difference between the online and the optimal offline profit, and its ratio with respect to the offline profit is upper bounded by

O(α√(m ln m) · p / (αmF_S(q)(p − q))) = O(α√(m ln m) · μ_B ln(4er) / (mμ_B/(2er))) = O(α^3/2 √(ln n/n) · r log r) = o(α^3/2 r log r),

using m = n/(α+1).
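The behaviour of the stock process Z_i behind the lemma above is easy to check by simulation; the sketch below (our illustration, with exponential distributions and prices chosen to satisfy the balance condition αF_S(q) = 1 − F_B(p)) tracks the stock over (S^αB)^m and prints the empirical average of Z_m next to the √(2mα^2 log m) term of the bound.

import math, random

def final_stock(alpha, m, p, q, F_S, F_B, rng):
    """One pass over (S^alpha B)^m with fixed prices; returns the final stock."""
    stock = 0
    for _ in range(m):
        for _ in range(alpha):                       # alpha sellers per step
            if rng.random() < F_S(q):                # seller accepts price q
                stock += 1
        if stock > 0 and rng.random() < 1 - F_B(p):  # buyer accepts price p
            stock -= 1                               # no short selling
    return stock

rng = random.Random(0)
F_S = lambda x: 1 - math.exp(-x)        # illustrative Exp(1) sellers
F_B = lambda x: 1 - math.exp(-x / 2)    # illustrative Exp(1/2) buyers
q = -math.log(1 - 0.25)                 # F_S(q) = 1/4
p = -2 * math.log(0.25)                 # 1 - F_B(p) = 1/4 = 1 * F_S(q)
for m in (100, 1000, 10000):
    avg = sum(final_stock(1, m, p, q, F_S, F_B, rng) for _ in range(200)) / 200
    print(m, avg, math.sqrt(2 * m * math.log(m)))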
Among all 1-balanced sequences, the sequence that gives the maximum profit is not S^mB^m; intuitively, by moving some buyers earlier in the sequence, we obtain an improved profit by adapting the remaining buying prices to the outcomes of these potential trades. For example, it should be intuitively clear that the sequence S^m/2BS^m/2B^m−1 has a (slightly) better adaptive profit than the sequence S^mB^m for large m. Our work above shows that the difference is asymptotically insignificant, but it remains an intriguing question to determine the balanced sequence with the maximum profit.

§.§ Welfare

For welfare, balanced sequences also improve the competitive ratio of Theorem <ref>, this time to a constant. Intuitively, the reason is that the high volume of possible trades dampens the advantage the adversary has in obtaining higher order statistics from buyers. As before, the fact that all sellers start with some contribution to the welfare is also helpful.

The online auction that posts to every seller and buyer the median of their respective distribution is 4-competitive.

The algorithm buys from half the sellers in expectation, so in the end the welfare obtained just from the sellers is at least

𝔼[∑_t∈N_S∖I_S X_t] = ∑_t∈N_S 𝔼[X_t | X_t ≥ q](1 − F_S(q)) ≥ (1/2)n_Sμ_S.

Following the proof of Lemma <ref>, let κ denote the size of the matching between sellers and buyers. Since the input is α-balanced, we are guaranteed that every buyer is preceded by some distinct seller, meaning that κ is exactly n_B. The welfare obtained from the buyers is

κ Pr[X_S ≤ q] Pr[X_B ≥ p] 𝔼[X_B | X_B ≥ p] ≥ (1/4)n_Bμ_B.

Adding everything together, the online algorithm gets at least (1/4)(n_Bμ_B + n_Sμ_S). On the other hand, the optimal welfare is at most

𝔼[∑_t∈N_S∖I_S X_t + ∑_t∈I_B X_t] ≤ 𝔼[∑_t∈N_S X_t + ∑_t∈N_B X_t] = n_Sμ_S + n_Bμ_B.

Notice that the above theorem holds without any regularity assumption on the agent value distributions.
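As a quick sanity check of the constant (the distributions and parameters below are our own choices, not the paper's), one can simulate the median-posting auction on (S^αB)^m and compare its average welfare with the trivial upper bound n_Sμ_S + n_Bμ_B.

import math, random

def median_auction_welfare(alpha, m, draw_S, draw_B, q, p, rng):
    """Welfare of posting the seller median q and buyer median p on (S^alpha B)^m."""
    welfare, stock = 0.0, 0
    for _ in range(m):
        for _ in range(alpha):
            x = draw_S(rng)
            if x <= q:
                stock += 1       # the intermediary buys the item
            else:
                welfare += x     # the seller keeps an item worth x
        y = draw_B(rng)
        if y >= p and stock > 0:
            stock -= 1
            welfare += y         # the buyer receives an item worth y
    return welfare               # items stuck with the intermediary count for zero

rng = random.Random(1)
draw_S = lambda r: r.expovariate(1.0)   # X_S ~ Exp(1), so mu_S = 1, median ln 2
draw_B = lambda r: r.expovariate(0.5)   # X_B ~ Exp(1/2), so mu_B = 2, median 2 ln 2
alpha, m, runs = 1, 1000, 100
avg = sum(median_auction_welfare(alpha, m, draw_S, draw_B,
                                 math.log(2), 2 * math.log(2), rng)
          for _ in range(runs)) / runs
print(avg, "vs upper bound", alpha * m * 1.0 + m * 2.0)  # n_S*mu_S + n_B*mu_B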
Acknowledgements. We want to thank Matthias Gerstgrasser for many helpful discussions and his assistance during the initial development of our paper.

§ OMITTED PROOFS

The matching computed in the proof of Theorem <ref> using an online FIFO queue of size K, adding sellers while it is not full and popping them when a buyer is encountered, is a maximum one.

We show this for the limited-stock case; the general case works similarly, or follows by setting K large enough. Let F be the matching computed by our FIFO algorithm, and let M be an arbitrary maximum matching in the graph induced by σ. We will show that we can transform M into F using a series of changes that do not reduce its size. Let i be the index of the first vertex that is not matched in the same way in F and M; that is, all edges in F and M that are either between vertices before i, or originate at a vertex before i, are identical in both matchings. (There cannot be any edges that terminate at a vertex smaller than i but originate after i, due to the construction of the graph.) We will show, using a case-by-case analysis, that we can change M into M' so that i is matched the same way as in F, without changing any edges originating before i, and with |M| = |M'|. It follows that we can repeat this procedure until M is transformed into F, and thus |F| = |M|, i.e. F is a maximum matching.

* If i is a buyer: this is not possible. If i is matched in either matching, the edge originates from a vertex before i and thus must be the same in both matchings by our hypothesis.
* If i is a seller:
  * If i is matched in both matchings, let j_F be its match in F and j_M its match in M.
    * j_F < j_M:
      * j_F unmatched in M: make the edge ij_M into ij_F. We cannot violate the K-limit this way, as we are making the edge shorter.
      * j_F matched in M: make the edge ij_M into ij_F, and match the seller originally matched to j_F in M with j_M. We cannot violate the K-limit this way.
    * j_M < j_F: this is not possible.
      * It is not possible that j_M is unmatched in F, as the FIFO algorithm encounters it before j_F and would have matched i to it.
      * It is not possible that j_M is matched to a seller other than i in F: not to one before i by the hypothesis, and not to one after i by the construction of the FIFO algorithm.
  * If i is matched in F but not in M, let j_F be i's match in F.
    * j_F unmatched in M: this cannot happen. Notice that we cannot have any buyers between i and j_F that are unmatched in F, nor can we have any that are matched to sellers after i. Thus, all buyers between i and j_F are matched to sellers before i in both F and M. There can be at most K−1 of them, as there is one more edge originating from i in F, and a cut between i and i+1 has size at most K in F. Therefore, we could add the edge ij_F to M without violating the K-limit; thus M was not maximum, contradicting our assumption.
    * j_F matched in M: let s_M be the seller matched to j_F in M. Again, all buyers between i and j_F are matched to sellers before i in both matchings, so we can replace the edge s_Mj_F with ij_F without violating the K-limit.
  * If i is matched in M but not in F, let j_M be its match in M.
    * j_M is matched in F: this cannot happen, due to the FIFO construction.
    * j_M is unmatched in F: this cannot happen. If i were to enter the FIFO queue, it would be matched to j_M (or an earlier available buyer) in F. If i does not enter the FIFO queue, this can only be because the queue was full. But if the queue was full, then K sellers before i were matched to buyers between i and j_M (otherwise j_M would be matched to one of them in F). So there are K edges going from sellers before i to buyers between i and j_M in F, and hence also K such edges in M, as the two matchings are identical on vertices before i. Together with the edge ij_M, there are then K+1 edges going from vertices before and including i to vertices after i in M, violating the K-item limit.

Fix an adaptive mechanism, and let Q_i be the price posted to seller i and Q̃_i the probability of a purchase at price Q_i, i.e. Q̃_i = F_S(Q_i). Since in an adaptive mechanism the price depends on the history, Q_i and Q̃_i are random variables. Similarly, define P_j and P̃_j to be the price posted to buyer j and the probability of selling to him. For the payments to the sellers and from the buyers we have

𝔼[Q_iF_S(Q_i)] = 𝔼[Q̃_iF_S^−1(Q̃_i)],   𝔼[P_j(1 − F_B(P_j))] = 𝔼[P̃_jF_B^−1(1 − P̃_j)].

Summing over all agents, we get the expected profit:

∑_j∈N_B 𝔼[P_j(1 − F_B(P_j))] − ∑_i∈N_S 𝔼[Q_iF_S(Q_i)] = ∑_j∈N_B 𝔼[P̃_jF_B^−1(1 − P̃_j)] − ∑_i∈N_S 𝔼[Q̃_iF_S^−1(Q̃_i)] ≤ ∑_j∈N_B 𝔼[P̃_j]F_B^−1(1 − 𝔼[P̃_j]) − ∑_i∈N_S 𝔼[Q̃_i]F_S^−1(𝔼[Q̃_i]),

where the last inequality follows from our regularity assumptions. Note that in the last expression F_B^−1(1 − 𝔼[P̃_j]) and F_S^−1(𝔼[Q̃_i]) can be interpreted as prices set by a fractional mechanism, with 𝔼[P̃_j] and 𝔼[Q̃_i] the fractions of items sold and bought. We have obtained the objective function of the optimization; it remains to derive a set of inequalities concerning the prices to serve as the constraints.

Observe that 𝔼[Q̃_i] is the expected number of items bought from seller i, while 𝔼[P̃_j] is the expected number sold to buyer j. Let 𝒮_t and ℬ_t be the sets of indices of sellers and buyers contained among the first t agents of the sequence. Let Z_t be the number of items exchanged with the agent encountered at step t. The number of items currently held by the intermediary at time t is ∑_i∈𝒮_t Z_i − ∑_j∈ℬ_t Z_j ≥ 0, by the no-short-selling assumption.
Thus, for all t,

𝔼[∑_i∈𝒮_t Z_i − ∑_j∈ℬ_t Z_j] = ∑_i∈𝒮_t 𝔼[𝔼[Z_i | Q̃_i]] − ∑_j∈ℬ_t 𝔼[𝔼[Z_j | P̃_j]] = ∑_i∈𝒮_t 𝔼[Q̃_i] − ∑_j∈ℬ_t 𝔼[P̃_j] ≥ 0.

Combining (<ref>) and (<ref>) gives us exactly the same optimization problem the optimal fractional mechanism would face for that sequence. | http://arxiv.org/abs/1703.09279v1 | {
"authors": [
"Yiannis Giannakopoulos",
"Elias Koutsoupias",
"Philip Lazos"
],
"categories": [
"cs.GT"
],
"primary_category": "cs.GT",
"published": "20170327193932",
"title": "Online Market Intermediation"
} |
Using 2.93 fb^-1 of data taken at 3.773 GeV with the BESIII detector operated at the BEPCII collider, we study the semileptonic decays D^+ →K̅^0e^+ν_e and D^+ →π^0 e^+ν_e. We measure the absolute decay branching fractions ℬ(D^+ →K̅^0e^+ν_e)=(8.60±0.06± 0.15)×10^-2 and ℬ(D^+ →π^0e^+ν_e)=(3.63±0.08±0.05)×10^-3, where the first uncertainties are statistical and the second systematic. We also measure the differential decay rates and study the form factors of these two decays. With the values of |V_cs| and |V_cd| from Particle Data Group fits assuming CKM unitarity, we obtain the values of the form factors at q^2=0, f^K_+(0) = 0.725±0.004± 0.012 and f^π_+(0) = 0.622±0.012± 0.003. Taking input from recent lattice QCD calculations of these form factors, we determine values of the CKM matrix elements |V_cs|=0.944 ± 0.005 ± 0.015 ± 0.024 and |V_cd|=0.210 ± 0.004 ± 0.001 ± 0.009, where the third uncertainties are theoretical. 13.20.Fc, 12.15.HhAnalysis of D^+→K̅^0e^+ν_e and D^+→π^0e^+ν_e Semileptonic DecaysM. Ablikim^1, M. N. Achasov^9,d, S. Ahmed^14, X. C. Ai^1, O. Albayrak^5, M. Albrecht^4, D. J. Ambrose^45, A. Amoroso^50A,50C, F. F. An^1, Q. An^47,38, J. Z. Bai^1, O. Bakina^23, R. Baldini Ferroli^20A, Y. Ban^31, D. W. Bennett^19, J. V. Bennett^5, N. Berger^22, M. Bertani^20A, D. Bettoni^21A, J. M. Bian^44, F. Bianchi^50A,50C, E. Boger^23,b, I. Boyko^23, R. A. Briere^5, H. Cai^52, X. Cai^1,38, O. Cakir^41A, A. Calcaterra^20A, G. F. Cao^1,42, S. A. Cetin^41B, J. Chai^50C, J. F. Chang^1,38, G. Chelkov^23,b,c, G. Chen^1, H. S. Chen^1,42, J. C. Chen^1, M. L. Chen^1,38, S. Chen^42, S. J. Chen^29, X. Chen^1,38, X. R. Chen^26, Y. B. Chen^1,38, X. K. Chu^31, G. Cibinetto^21A, H. L. Dai^1,38, J. P. Dai^34,h, A. Dbeyssi^14, D. Dedovich^23, Z. Y. Deng^1, A. Denig^22, I. Denysenko^23, M. Destefanis^50A,50C, F. De Mori^50A,50C, Y. Ding^27, C. Dong^30, J. Dong^1,38, L. Y. Dong^1,42, M. Y. Dong^1,38,42, Z. L. Dou^29, S. X. Du^54, P. F. Duan^1, J. Z. Fan^40, J. Fang^1,38, S. S. Fang^1,42, X. Fang^47,38, Y. Fang^1, R. Farinelli^21A,21B, L. Fava^50B,50C, F. Feldbauer^22, G. Felici^20A, C. Q. Feng^47,38, E. Fioravanti^21A, M. Fritsch^22,14, C. D. Fu^1, Q. Gao^1, X. L. Gao^47,38, Y. Gao^40, Z. Gao^47,38, I. Garzia^21A, K. Goetzen^10, L. Gong^30, W. X. Gong^1,38, W. Gradl^22, M. Greco^50A,50C, M. H. Gu^1,38, Y. T. Gu^12, Y. H. Guan^1, A. Q. Guo^1, L. B. Guo^28, R. P. Guo^1, Y. Guo^1, Y. P. Guo^22, Z. Haddadi^25, A. Hafner^22, S. Han^52, X. Q. Hao^15, F. A. Harris^43, K. L. He^1,42, F. H. Heinsius^4, T. Held^4, Y. K. Heng^1,38,42, T. Holtmann^4, Z. L. Hou^1, C. Hu^28, H. M. Hu^1,42, T. Hu^1,38,42, Y. Hu^1, G. S. Huang^47,38, J. S. Huang^15, X. T. Huang^33, X. Z. Huang^29, Z. L. Huang^27, T. Hussain^49, W. Ikegami Andersson^51, Q. Ji^1, Q. P. Ji^15, X. B. Ji^1,42, X. L. Ji^1,38, L. L. Jiang^1,L. W. Jiang^52, X. S. Jiang^1,38,42, X. Y. Jiang^30, J. B. Jiao^33, Z. Jiao^17, D. P. Jin^1,38,42, S. Jin^1,42, T. Johansson^51, A. Julin^44, N. Kalantar-Nayestanaki^25, X. L. Kang^1, X. S. Kang^30, M. Kavatsyuk^25, B. C. Ke^5, P. Kiese^22, R. Kliemt^10, B. Kloss^22, O. B. Kolcu^41B,f, B. Kopf^4, M. Kornicer^43, A. Kupsc^51, W. Kühn^24, J. S. Lange^24, M. Lara^19, P. Larin^14, H. Leithoff^22, C. Leng^50C, C. Li^51, Cheng Li^47,38, D. M. Li^54, F. Li^1,38, F. Y. Li^31, G. Li^1, H. B. Li^1,42, H. J. Li^1, J. C. Li^1, Jin Li^32, K. Li^13, K. Li^33, Lei Li^3, P. R. Li^42,7, Q. Y. Li^33, T. Li^33, W. D. Li^1,42, W. G. Li^1, X. L. Li^33, X. N. Li^1,38, X. Q. Li^30, Y. B. Li^2, Z. B. Li^39, H. Liang^47,38, Y. F. Liang^36, Y. T. 
Liang^24, G. R. Liao^11, D. X. Lin^14, B. Liu^34,h, B. J. Liu^1, C. L. Liu^5, C. X. Liu^1, D. Liu^47,38, F. H. Liu^35, Fang Liu^1, Feng Liu^6, H. B. Liu^12, H. H. Liu^1, H. H. Liu^16, H. M. Liu^1,42, J. Liu^1, J. B. Liu^47,38, J. P. Liu^52, J. Y. Liu^1, K. Liu^40, K. Y. Liu^27, L. D. Liu^31, P. L. Liu^1,38, Q. Liu^42, S. B. Liu^47,38, X. Liu^26, Y. B. Liu^30, Y. Y. Liu^30, Z. A. Liu^1,38,42, Zhiqing Liu^22, H. Loehner^25, Y. F. Long^31, X. C. Lou^1,38,42, H. J. Lu^17, J. G. Lu^1,38, Y. Lu^1, Y. P. Lu^1,38, C. L. Luo^28, M. X. Luo^53, T. Luo^43, X. L. Luo^1,38, X. R. Lyu^42, F. C. Ma^27, H. L. Ma^1, L. L. Ma^33, M. M. Ma^1, Q. M. Ma^1, T. Ma^1, X. N. Ma^30, X. Y. Ma^1,38, Y. M. Ma^33, F. E. Maas^14, M. Maggiora^50A,50C, Q. A. Malik^49, Y. J. Mao^31, Z. P. Mao^1, S. Marcello^50A,50C, J. G. Messchendorp^25, G. Mezzadri^21B, J. Min^1,38, T. J. Min^1, R. E. Mitchell^19, X. H. Mo^1,38,42, Y. J. Mo^6, C. Morales Morales^14, G. Morello^20A, N. Yu. Muchnoi^9,d, H. Muramatsu^44, P. Musiol^4, Y. Nefedov^23, F. Nerling^10, I. B. Nikolaev^9,d, Z. Ning^1,38, S. Nisar^8, S. L. Niu^1,38, X. Y. Niu^1, S. L. Olsen^32, Q. Ouyang^1,38,42, S. Pacetti^20B, Y. Pan^47,38, M. Papenbrock^51, P. Patteri^20A, M. Pelizaeus^4, H. P. Peng^47,38, K. Peters^10,g, J. Pettersson^51, J. L. Ping^28, R. G. Ping^1,42, R. Poling^44, V. Prasad^1, H. R. Qi^2, M. Qi^29, S. Qian^1,38, C. F. Qiao^42, L. Q. Qin^33, N. Qin^52, X. S. Qin^1, Z. H. Qin^1,38, J. F. Qiu^1, K. H. Rashid^49,i, C. F. Redmer^22, M. Ripka^22, G. Rong^1,42, Ch. Rosner^14, X. D. Ruan^12, A. Sarantsev^23,e, M. Savrié^21B, C. Schnier^4, K. Schoenning^51, W. Shan^31, M. Shao^47,38, C. P. Shen^2, P. X. Shen^30, X. Y. Shen^1,42, H. Y. Sheng^1, W. M. Song^1, X. Y. Song^1, S. Sosio^50A,50C, S. Spataro^50A,50C, G. X. Sun^1, J. F. Sun^15, S. S. Sun^1,42, X. H. Sun^1, Y. J. Sun^47,38, Y. Z. Sun^1, Z. J. Sun^1,38, Z. T. Sun^19, C. J. Tang^36, X. Tang^1, I. Tapan^41C, E. H. Thorndike^45, M. Tiemens^25, I. Uman^41D, G. S. Varner^43, B. Wang^30, B. L. Wang^42, D. Wang^31, D. Y. Wang^31, K. Wang^1,38, L. L. Wang^1, L. S. Wang^1, M. Wang^33, P. Wang^1, P. L. Wang^1, W. Wang^1,38, W. P. Wang^47,38, X. F. Wang^40, Y. Wang^37, Y. D. Wang^14, Y. F. Wang^1,38,42, Y. Q. Wang^22, Z. Wang^1,38, Z. G. Wang^1,38, Z. H. Wang^47,38, Z. Y. Wang^1, Z. Y. Wang^1, T. Weber^22, D. H. Wei^11, P. Weidenkaff^22, S. P. Wen^1, U. Wiedner^4, M. Wolke^51, L. H. Wu^1, L. J. Wu^1, Z. Wu^1,38, L. Xia^47,38, L. G. Xia^40, Y. Xia^18, D. Xiao^1, H. Xiao^48, Z. J. Xiao^28, Y. G. Xie^1,38, Y. H. Xie^6, Q. L. Xiu^1,38, G. F. Xu^1, J. J. Xu^1, L. Xu^1, Q. J. Xu^13, Q. N. Xu^42, X. P. Xu^37, L. Yan^50A,50C, W. B. Yan^47,38, W. C. Yan^47,38, Y. H. Yan^18, H. J. Yang^34,h, H. X. Yang^1, L. Yang^52, Y. X. Yang^11, M. Ye^1,38, M. H. Ye^7, J. H. Yin^1, Z. Y. You^39, B. X. Yu^1,38,42, C. X. Yu^30, J. S. Yu^26, C. Z. Yuan^1,42, Y. Yuan^1, A. Yuncu^41B,a, A. A. Zafar^49, Y. Zeng^18, Z. Zeng^47,38, B. X. Zhang^1, B. Y. Zhang^1,38, C. C. Zhang^1, D. H. Zhang^1, H. H. Zhang^39, H. Y. Zhang^1,38, J. Zhang^1, J. J. Zhang^1, J. L. Zhang^1, J. Q. Zhang^1, J. W. Zhang^1,38,42, J. Y. Zhang^1, J. Z. Zhang^1,42, K. Zhang^1, L. Zhang^1, S. Q. Zhang^30, X. Y. Zhang^33, Y. Zhang^1, Y. Zhang^1, Y. H. Zhang^1,38, Y. N. Zhang^42, Y. T. Zhang^47,38, Yu Zhang^42, Z. H. Zhang^6, Z. P. Zhang^47, Z. Y. Zhang^52, G. Zhao^1, J. W. Zhao^1,38, J. Y. Zhao^1, J. Z. Zhao^1,38, Lei Zhao^47,38, Ling Zhao^1, M. G. Zhao^30, Q. Zhao^1, Q. W. Zhao^1, S. J. Zhao^54, T. C. Zhao^1, Y. B. Zhao^1,38, Z. G. Zhao^47,38, A. Zhemchugov^23,b, B. Zheng^48,14, J. P. 
Zheng^1,38, W. J. Zheng^33, Y. H. Zheng^42, B. Zhong^28, L. Zhou^1,38, X. Zhou^52, X. K. Zhou^47,38, X. R. Zhou^47,38, X. Y. Zhou^1, K. Zhu^1, K. J. Zhu^1,38,42, S. Zhu^1, S. H. Zhu^46, X. L. Zhu^40, Y. C. Zhu^47,38, Y. S. Zhu^1,42, Z. A. Zhu^1,42, J. Zhuang^1,38, L. Zotti^50A,50C, B. S. Zou^1, J. H. Zou^1 (BESIII Collaboration)^1 Institute of High Energy Physics, Beijing 100049, People's Republic of China^2 Beihang University, Beijing 100191, People's Republic of China^3 Beijing Institute of Petrochemical Technology, Beijing 102617, People's Republic of China^4 Bochum Ruhr-University, D-44780 Bochum, Germany^5 Carnegie Mellon University, Pittsburgh, Pennsylvania 15213, USA^6 Central China Normal University, Wuhan 430079, People's Republic of China^7 China Center of Advanced Science and Technology, Beijing 100190, People's Republic of China^8 COMSATS Institute of Information Technology, Lahore, Defence Road, Off Raiwind Road, 54000 Lahore, Pakistan^9 G.I. Budker Institute of Nuclear Physics SB RAS (BINP), Novosibirsk 630090, Russia^10 GSI Helmholtzcentre for Heavy Ion Research GmbH, D-64291 Darmstadt, Germany^11 Guangxi Normal University, Guilin 541004, People's Republic of China^12 Guangxi University, Nanning 530004, People's Republic of China^13 Hangzhou Normal University, Hangzhou 310036, People's Republic of China^14 Helmholtz Institute Mainz, Johann-Joachim-Becher-Weg 45, D-55099 Mainz, Germany^15 Henan Normal University, Xinxiang 453007, People's Republic of China^16 Henan University of Science and Technology, Luoyang 471003, People's Republic of China^17 Huangshan College, Huangshan 245000, People's Republic of China^18 Hunan University, Changsha 410082, People's Republic of China^19 Indiana University, Bloomington, Indiana 47405, USA^20 (A)INFN Laboratori Nazionali di Frascati, I-00044, Frascati, Italy; (B)INFN and University of Perugia, I-06100, Perugia, Italy^21 (A)INFN Sezione di Ferrara, I-44122, Ferrara, Italy; (B)University of Ferrara, I-44122, Ferrara, Italy^22 Johannes Gutenberg University of Mainz, Johann-Joachim-Becher-Weg 45, D-55099 Mainz, Germany^23 Joint Institute for Nuclear Research, 141980 Dubna, Moscow region, Russia^24 Justus-Liebig-Universitaet Giessen, II. 
Physikalisches Institut, Heinrich-Buff-Ring 16, D-35392 Giessen, Germany^25 KVI-CART, University of Groningen, NL-9747 AA Groningen, The Netherlands^26 Lanzhou University, Lanzhou 730000, People's Republic of China^27 Liaoning University, Shenyang 110036, People's Republic of China^28 Nanjing Normal University, Nanjing 210023, People's Republic of China^29 Nanjing University, Nanjing 210093, People's Republic of China^30 Nankai University, Tianjin 300071, People's Republic of China^31 Peking University, Beijing 100871, People's Republic of China^32 Seoul National University, Seoul, 151-747 Korea^33 Shandong University, Jinan 250100, People's Republic of China^34 Shanghai Jiao Tong University, Shanghai 200240, People's Republic of China^35 Shanxi University, Taiyuan 030006, People's Republic of China^36 Sichuan University, Chengdu 610064, People's Republic of China^37 Soochow University, Suzhou 215006, People's Republic of China^38 State Key Laboratory of Particle Detection and Electronics, Beijing 100049, Hefei 230026, People's Republic of China^39 Sun Yat-Sen University, Guangzhou 510275, People's Republic of China^40 Tsinghua University, Beijing 100084, People's Republic of China^41 (A)Ankara University, 06100 Tandogan, Ankara, Turkey; (B)Istanbul Bilgi University, 34060 Eyup, Istanbul, Turkey; (C)Uludag University, 16059 Bursa, Turkey; (D)Near East University, Nicosia, North Cyprus, Mersin 10, Turkey^42 University of Chinese Academy of Sciences, Beijing 100049, People's Republic of China^43 University of Hawaii, Honolulu, Hawaii 96822, USA^44 University of Minnesota, Minneapolis, Minnesota 55455, USA^45 University of Rochester, Rochester, New York 14627, USA^46 University of Science and Technology Liaoning, Anshan 114051, People's Republic of China^47 University of Science and Technology of China, Hefei 230026, People's Republic of China^48 University of South China, Hengyang 421001, People's Republic of China^49 University of the Punjab, Lahore-54590, Pakistan^50 (A)University of Turin, I-10125, Turin, Italy; (B)University of Eastern Piedmont, I-15121, Alessandria, Italy; (C)INFN, I-10125, Turin, Italy^51 Uppsala University, Box 516, SE-75120 Uppsala, Sweden^52 Wuhan University, Wuhan 430072, People's Republic of China^53 Zhejiang University, Hangzhou 310027, People's Republic of China^54 Zhengzhou University, Zhengzhou 450001, People's Republic of China^a Also at Bogazici University, 34342 Istanbul, Turkey^b Also at the Moscow Institute of Physics and Technology, Moscow 141700, Russia^c Also at the Functional Electronics Laboratory, Tomsk State University, Tomsk, 634050, Russia^d Also at the Novosibirsk State University, Novosibirsk, 630090, Russia^e Also at the NRC "Kurchatov Institute", PNPI, 188300, Gatchina, Russia^f Also at Istanbul Arel University, 34295 Istanbul, Turkey^g Also at Goethe University Frankfurt, 60323 Frankfurt am Main, Germany^h Also at Key Laboratory for Particle Physics, Astrophysics and Cosmology, Ministry of Education; Shanghai Key Laboratory for Particle Physics and Cosmology; Institute of Nuclear and Particle Physics, Shanghai 200240, People's Republic of China^i Government College Women University, Sialkot - 51310. Punjab, Pakistan. 
December 30, 2023
================================================================
§ INTRODUCTION

In the Standard Model (SM) of particle physics, the mixing between the quark flavours in the weak interaction is parameterized by the Cabibbo-Kobayashi-Maskawa (CKM) matrix, which is a 3×3 unitary matrix. Since the CKM matrix elements are fundamental parameters of the SM, precise determinations of these elements are very important for tests of the SM and searches for New Physics (NP) beyond the SM. Since the effects of strong and weak interactions can be well separated in semileptonic D decays, these decays are excellent processes from which we can determine the magnitude of the CKM matrix element V_cs(d). In the SM, neglecting the lepton mass, the differential decay rate for D^+→ P e^+ν_e (P = K̅^0 or π^0) is given by

dΓ/dq^2 = X (G_F^2/(24π^3)) |V_cs(d)|^2 p^3 |f_+(q^2)|^2,

where G_F is the Fermi constant, V_cs(d) is the corresponding CKM matrix element, p is the momentum of the meson P in the rest frame of the D meson, q^2 is the squared four-momentum transfer, i.e., the invariant mass of the lepton and neutrino system, and f_+(q^2) is the form factor which parameterizes the effect of the strong interaction. In Eq. (<ref>), X is a multiplicative factor due to isospin, which equals 1 for the decay D^+→K̅^0e^+ν_e and 1/2 for the decay D^+→π^0e^+ν_e.

In this article, we report the experimental study of D^+→K̅^0e^+ν_e and D^+→π^0e^+ν_e decays using a 2.93 fb^-1 <cit.> data set collected at a center-of-mass energy of √(s)=3.773 GeV with the BESIII detector operated at the BEPCII collider. Throughout this paper, the inclusion of charge conjugate channels is implied.

The paper is structured as follows. We briefly describe the BESIII detector and the Monte Carlo (MC) simulation in Sec. <ref>. The event selection is presented in Sec. <ref>. The measurements of the absolute branching fractions and the differential decay rates are described in Secs. <ref> and <ref>, respectively. In Sec. <ref> we discuss the determination of form factors from the measurements of decay rates and, finally, in Sec. <ref>, we present the determination of the magnitudes of the CKM matrix elements V_cs and V_cd. A brief summary is given in Sec. <ref>.
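To make Eq. (<ref>) concrete, the sketch below evaluates the differential rate for a simple single-pole form factor; the pole form and all numerical inputs are ours, chosen only for illustration (natural units).

import math

G_F = 1.1663787e-5  # Fermi constant in GeV^-2

def p_momentum(m_D, m_P, q2):
    """Momentum of the meson P in the D rest frame (Kallen function kinematics)."""
    lam = (m_D**2 + m_P**2 - q2)**2 - 4.0 * m_D**2 * m_P**2
    return math.sqrt(max(lam, 0.0)) / (2.0 * m_D)

def dGamma_dq2(q2, V, f0, m_pole, m_D, m_P, X):
    """Eq. (<ref>) with an illustrative single-pole form factor f_+(q^2)."""
    f = f0 / (1.0 - q2 / m_pole**2)
    p = p_momentum(m_D, m_P, q2)
    return X * G_F**2 / (24.0 * math.pi**3) * V**2 * p**3 * f**2

# Rough, PDG-like inputs for D+ -> K0bar e+ nu (masses in GeV):
print(dGamma_dq2(q2=0.5, V=0.97, f0=0.73, m_pole=2.112, m_D=1.8697, m_P=0.4976, X=1.0))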
§ BESIII DETECTOR

The BESIII detector is a cylindrical detector with a solid-angle coverage of 93% of 4π, designed for the study of hadron spectroscopy and τ-charm physics. The BESIII detector is described in detail in Ref. <cit.>. Detector components particularly relevant for this work are (1) the main drift chamber (MDC) with 43 layers surrounding the beam pipe, which performs precise determination of charged particle trajectories and provides a measurement of the specific ionization energy loss (dE/dx); (2) a time-of-flight system (TOF) made of plastic scintillator counters, which are located outside of the MDC and provide additional charged particle identification information; and (3) the electromagnetic calorimeter (EMC) consisting of 6240 CsI(Tl) crystals, used to measure the energy of photons and to identify electrons.

A geant4-based <cit.> MC simulation software package <cit.>, which contains the detector geometry description and the detector response, is used to optimize the event selection criteria, study possible backgrounds, and determine the reconstruction efficiencies. The production of the ψ(3770), initial state radiation production of ψ(3686) and J/ψ, as well as the continuum processes e^+e^-→τ^+τ^- and e^+e^-→ qq̅ (q=u,d,s), are simulated by the MC event generator kkmc <cit.>; the known decay modes are generated by evtgen <cit.> with the branching fractions set to the world average values from the Particle Data Group (PDG) <cit.>, while the remaining unknown decay modes are modeled by lundcharm <cit.>. We also generate signal MC events consisting of ψ(3770)→ D^+D^- events in which the D^- meson decays to all possible final states and the D^+ meson decays to a hadronic or semileptonic final state being investigated. In the generation of signal MC events, the semileptonic decays D^+→K̅^0e^+ν_e and D^+→π^0e^+ν_e are modeled by the modified pole parametrization (see Sec. <ref>).

§ EVENT RECONSTRUCTION

The center-of-mass energy of 3.773 GeV corresponds to the peak of the ψ(3770) resonance, which decays predominantly into DD̅ (D^0D̅^0 or D^+D^-) meson pairs. In events where a D^- meson is fully reconstructed, the remaining particles must all be decay products of the accompanying D^+ meson. In the following, the reconstructed meson is called the “tagged D^-” or “D^- tag”. In a tagged D^- data sample, the recoiling D^+ decays to K̅^0e^+ν_e or π^0e^+ν_e can be cleanly isolated and used to measure the branching fraction and differential decay rates.

§.§ Selection of D^- tags

We reconstruct D^- tags in the following nine hadronic modes: D^-→ K^+π^-π^-, D^-→ K^0_Sπ^-, D^-→ K^0_S K^-, D^-→ K^+K^-π^-, D^-→ K^+π^-π^-π^0, D^-→π^+π^-π^- [ We veto π^+π^-π^- candidates in which a π^+π^- invariant mass falls within the K_S^0 mass window. ], D^-→ K^0_Sπ^-π^0, D^-→ K^+π^-π^-π^-π^+, and D^-→ K^0_Sπ^-π^-π^+. The selection criteria of the D^- tags used here are the same as those described in Ref. <cit.>.

Tagged D^- mesons are identified by their beam-energy-constrained mass M_BC ≡ √(E_beam^2/c^4 − |p⃗_tag|^2/c^2), where E_beam is the beam energy and p⃗_tag is the measured 3-momentum of the tag candidate [ In this analysis, all four-momentum vectors measured in the laboratory frame are boosted to the e^+e^- center-of-mass frame. ]. We also use the variable ΔE ≡ E_tag − E_beam, where E_tag is the measured energy of the tag candidate, to select the D^- tags. Each tag candidate is subjected to a tag-mode-dependent ΔE requirement, as shown in Table <ref>. If there are multiple candidates per tag mode in an event, the one with the smallest value of |ΔE| is retained.
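Both tag variables are simple functions of the candidate four-momentum; a minimal sketch (ours, with invented daughter momenta) is:

import math

E_BEAM = 1.8865  # GeV, half of the center-of-mass energy 3.773 GeV

def tag_variables(p4_daughters):
    """Return (M_BC, Delta_E) for a tag candidate, given daughter
    four-momenta (E, px, py, pz) in the e+e- CM frame, with c = 1."""
    E = sum(p[0] for p in p4_daughters)
    px = sum(p[1] for p in p4_daughters)
    py = sum(p[2] for p in p4_daughters)
    pz = sum(p[3] for p in p4_daughters)
    p2 = px**2 + py**2 + pz**2
    m_bc = math.sqrt(max(E_BEAM**2 - p2, 0.0))
    return m_bc, E - E_BEAM

# A hypothetical K+ pi- pi- candidate (numbers invented for illustration):
print(tag_variables([(0.75, 0.30, 0.20, 0.55),
                     (0.60, -0.10, 0.40, -0.35),
                     (0.52, -0.20, -0.30, 0.25)]))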
The M_BC distributions for the nine D^- tag modes are shown in Fig. <ref>. A binned extended maximum likelihood fit is used to determine the number of tagged D^- events for each of the nine modes. We use the MC-simulated signal shape convolved with a double-Gaussian resolution function to represent the beam-energy-constrained mass signal for the D^- daughter particles, and an ARGUS function <cit.> multiplied by a third-order polynomial <cit.> to describe the background shape of the M_BC distributions. In the fits, all parameters of the double-Gaussian function, the ARGUS function, and the polynomial function are left free. The solid lines in Fig. <ref> show the best fits, while the dashed lines show the fitted background shapes. The numbers of D^- tags (N_tag) within the M_BC signal regions given by the two vertical lines in Fig. <ref> are summarized in Table <ref>. In total, we find 1703054 ± 3405 single D^- tags reconstructed in data. The reconstruction efficiencies of the single D^- tags, ε_tag, as determined with the MC simulation, are shown in Table <ref>.

§.§ Reconstruction of semileptonic decays

Candidates for semileptonic decays are selected from the remaining tracks in the system recoiling against the D^- tags. The dE/dx, TOF and EMC measurements (deposited energy and shape of the electromagnetic shower) are combined to form confidence levels for the e hypothesis (CL_e), the π hypothesis (CL_π), and the K hypothesis (CL_K). Positron candidates are required to have CL_e greater than 0.1% and to satisfy CL_e/(CL_e + CL_π + CL_K) > 0.8. In addition, we include the 4-momenta of nearby photons, within 5^∘ of the direction of the positron momentum, to partially account for final-state-radiation energy losses (FSR recovery).

The neutral kaon candidates are built from pairs of oppositely charged tracks that are assumed to be pions. For each pair of charged tracks, a vertex fit is performed and the resulting track parameters are used to calculate the invariant mass, M(π^+π^-). If M(π^+π^-) is in the range (0.484, 0.512) GeV/c^2, the π^+π^- pair is treated as a K_S^0 candidate and is used for further analysis.

The neutral pion candidates are reconstructed via the π^0→γγ decays. For the photon selection, we require the energy of the shower deposited in the barrel (end-cap) EMC to be greater than 25 (50) MeV and the shower time to be within 700 ns of the event start time. In addition, the angle between the photon and the nearest charged track is required to be greater than 10^∘. We accept a pair of photons as a π^0 candidate if the invariant mass of the two photons, M(γγ), is in the range (0.110, 0.150) GeV/c^2. A 1-Constraint (1-C) kinematic fit is then performed to constrain M(γγ) to the π^0 nominal mass, and the resulting 4-momentum of the candidate π^0 is used for further analysis.

We reconstruct the D^+→K̅^0 e^+ν_e decay by requiring exactly three additional charged tracks in the rest of the event. One track with charge opposite to that of the D^- tag is identified as a positron using the criteria mentioned above, while the other two oppositely charged tracks form a K_S^0 candidate. For the selection of the D^+→π^0 e^+ν_e decay, we require that there is only one additional charged track, consistent with the positron identification criteria, and at least two photons that are used to form a π^0 candidate in the rest of the event.
If there are multiple π^0 candidates, the one with the minimum χ^2 from the 1-C kinematic fit is retained. In order to further suppress background due to wrongly reconstructed or background photons, the semileptonic candidate is required to have the maximum energy of any of the unused photons, E_γ,max, less than 300 MeV.

Since the neutrino is undetected, the kinematic variable U_miss ≡ E_miss − c|p⃗_miss| is used to obtain information about the missing neutrino, where E_miss and p⃗_miss are, respectively, the total missing energy and momentum in the event. The missing energy is computed from E_miss = E_beam − E_P − E_e^+, where E_P and E_e^+ are the measured energies of the pseudoscalar meson and the positron, respectively. The missing momentum p⃗_miss is given by p⃗_miss = p⃗_D^+ − p⃗_P − p⃗_e^+, where p⃗_D^+, p⃗_P and p⃗_e^+ are the 3-momenta of the D^+ meson, the pseudoscalar meson and the positron, respectively. The 3-momentum of the D^+ meson is taken as p⃗_D^+ = −p̂_tag√((E_beam/c)^2 − (m_D^+c)^2), where p̂_tag is the direction of the momentum of the single D^- tag and m_D^+ is the D^+ mass. If the daughter particles of a semileptonic decay are correctly identified, U_miss is near zero, since only one neutrino is missing.

Figure <ref> shows the U_miss distributions for the semileptonic candidates, where the potential backgrounds arise from DD̅ processes other than the signal, ψ(3770)→ non-DD̅ decays, e^+e^-→τ^+τ^-, continuum light hadron production, and initial state radiation return to J/ψ and ψ(3686). The background for D^+→K̅^0e^+ν_e is dominated by D^+→K̅^*(892)^0e^+ν_e and D^+→K̅^0μ^+ν_μ. For D^+→π^0e^+ν_e, the background is mainly from D^+→ K_L^0e^+ν_e and D^+→ K_S^0(π^0π^0)e^+ν_e. Following the same procedure described in Ref. <cit.>, we perform a binned extended maximum likelihood fit to the U_miss distribution of each channel to separate the signal from the background component. The signal shape is constructed from a convolution of an MC-determined distribution and a Gaussian function that accounts for the difference in the U_miss resolutions between data and MC simulation. The background shape is taken from MC simulation. From the fits, shown as the overlaid curves in Fig. <ref>, we obtain the yields of observed signal events, N_obs(D^+→K̅^0 e^+ν_e) = 26008±168 and N_obs(D^+→π^0 e^+ν_e) = 3402±70.

To check the quality of the MC simulation, we examine the distributions of the reconstructed kinematic variables. Figure <ref> shows comparisons of the momentum distributions of data and MC simulation.

§ BRANCHING FRACTION MEASUREMENTS

§.§ Determinations of branching fractions

The branching fraction of the semileptonic decay D^+→ Pe^+ν_e is obtained from

ℬ(D^+→ Pe^+ν_e) = N_obs(D^+→ Pe^+ν_e) / (N_tag ε(D^+→ Pe^+ν_e)),

where N_tag is the number of D^- tags (see Sec. <ref>), N_obs(D^+→ Pe^+ν_e) is the number of observed D^+→ Pe^+ν_e decays within the D^- tags (see Sec. <ref>), and ε(D^+→ Pe^+ν_e) is the reconstruction efficiency. Here, the D^+→K̅^0 e^+ν_e efficiency includes the K^0_S fraction of the K̅^0 and the K^0_S→π^+π^- branching fraction, while the D^+→π^0 e^+ν_e efficiency includes the π^0→γγ branching fraction <cit.>.
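As a numerical cross-check of this formula, the signal yields obtained above together with the corrected efficiencies quoted below reproduce the measured central values:

# B = N_obs / (N_tag * eps), using only numbers quoted in this paper
N_TAG = 1703054
for mode, n_obs, eps in [("K0bar e+ nu", 26008, 0.1775),   # eps' = 17.75%
                         ("pi0 e+ nu",    3402, 0.5502)]:  # eps' = 55.02%
    print(mode, n_obs / (N_TAG * eps))
# -> about 8.60e-2 and 3.63e-3, matching the quoted branching fractions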
Due to the difference in multiplicity, the reconstruction efficiency varies slightly with the tag mode. For each tag mode i, the reconstruction efficiency is given by ε^i = ε^i_tag,SL/ε^i_tag, where the efficiency for simultaneously finding the D^+→ Pe^+ν_e semileptonic decay and the D^- meson tagged with mode i, ε^i_tag,SL, is determined using the signal MC sample, and ε^i_tag is the corresponding tag efficiency shown in Table <ref>. These efficiencies are listed in Table <ref>. The reconstruction efficiency for each tag mode is then weighted according to the corresponding tag yield in data to obtain the average reconstruction efficiency, ε̄ = ∑_i(N_tag^i ε^i)/N_tag, as listed in the last row of Table <ref>.

Using control samples selected from Bhabha scattering and DD̅ events, we find small discrepancies between data and MC simulation in the positron tracking efficiency, the positron identification efficiency, and the K_S^0 and π^0 reconstruction efficiencies. We correct for these differences by multiplying the raw efficiencies ε(D^+→K̅^0e^+ν_e) and ε(D^+→π^0e^+ν_e) determined from MC simulation by factors of 0.9957 and 0.9910, respectively. The corrected efficiencies are found to be ε^'(D^+→K̅^0e^+ν_e) = (17.75±0.03)% and ε^'(D^+→π^0e^+ν_e) = (55.02±0.10)%, where the uncertainties are statistical only.

Inserting the corresponding numbers into Eq. (<ref>) yields the absolute decay branching fractions ℬ(D^+→K̅^0e^+ν_e) = (8.60± 0.06± 0.15)×10^-2 and ℬ(D^+→π^0e^+ν_e) = (3.63± 0.08± 0.05)×10^-3, where the first uncertainties are statistical and the second systematic.

§.§ Systematic uncertainties

The systematic uncertainties in the measured branching fractions of the D^+→K̅^0e^+ν_e and D^+→π^0e^+ν_e decays include the following contributions.

Number of D^- tags. The systematic uncertainty in the number of D^- tags is 0.5% <cit.>.

e^+ tracking efficiency. Using positron samples selected from radiative Bhabha scattering events, the e^+ tracking efficiencies are measured in data and MC simulation. Considering both the polar angle and the momentum distributions of the positrons in the semileptonic decays, a correction factor of 1.0021±0.0019 (1.0011±0.0015) is determined for the e^+ tracking efficiency in the branching fraction measurement of the D^+→K̅^0e^+ν_e (D^+→π^0e^+ν_e) decay. This correction is applied, and an uncertainty of 0.19% (0.15%) is taken as the corresponding systematic uncertainty.

e^+ identification efficiency. Using positron samples selected from radiative Bhabha scattering events, we measure the e^+ identification efficiencies in data and MC simulation. Taking both the polar angle and the momentum distributions of the positrons in the semileptonic decays into account, a correction factor of 0.9993±0.0016 (0.9984±0.0014) is determined for the e^+ identification efficiency in the measurement of ℬ(D^+→K̅^0e^+ν_e) (ℬ(D^+→π^0e^+ν_e)). This correction is applied, and 0.16% (0.14%) is assigned as the corresponding systematic uncertainty.

K_S^0 and π^0 reconstruction efficiency. The momentum-dependent efficiencies for K^0_S (π^0) reconstruction in data and in MC simulation are measured with DD̅ events. Weighting these efficiencies according to the K^0_S (π^0) momentum distribution in the semileptonic decay leads to a difference of (−0.57±1.62)% ((−0.85±1.00)%) between the K_S^0 (π^0) reconstruction efficiencies in data and MC simulation. Since we correct for the systematic shift, the uncertainty of the correction factor, 1.62% (1.00%), is taken as the corresponding systematic uncertainty in the measured branching fraction of D^+→K̅^0e^+ν_e (D^+→π^0e^+ν_e).
Requirement on E_γ,max. By comparing doubly tagged DD̅ hadronic decay events in data and MC simulation, the systematic uncertainty due to this source is estimated to be 0.1%.

Fit to the U_miss distribution. To estimate the uncertainties due to the fits to the U_miss distributions, we refit the U_miss distributions by varying the bin size and the tail parameters (which are used to describe the signal shapes and are determined from MC simulation) to obtain the number of signal events from the D^+ semileptonic decays. We then combine the changes in the yields in quadrature to obtain the systematic uncertainty (0.12% for D^+→K̅^0e^+ν_e, 0.52% for D^+→π^0e^+ν_e). Since the background function is formed from many background modes with fixed relative normalizations, we also vary the relative contributions of several of the largest background modes based on the uncertainties in their branching fractions (0.12% for D^+→K̅^0e^+ν_e, 0.01% for D^+→π^0e^+ν_e). In addition, we convolve the background shapes formed from MC simulation with the same Gaussian function in the fits (0.02% for D^+→K̅^0e^+ν_e, 0.30% for D^+→π^0e^+ν_e). Finally, we assign relative uncertainties of 0.2% and 0.6% for D^+→K̅^0 e^+ν_e and D^+→π^0 e^+ν_e, respectively.

Form factor. In order to estimate the systematic uncertainty associated with the form factor used to generate signal events in the MC simulation, we re-weight the signal MC events so that the q^2 spectra agree with the measured spectra. We then remeasure the branching fraction (partial decay rates in the different q^2 bins) with the newly weighted efficiency (efficiency matrix). The maximum relative change of the branching fraction (partial decay rates in the different q^2 bins) is 0.2% and is assigned as the systematic uncertainty.

FSR recovery. The differences between the results with FSR recovery and those without FSR recovery are assigned as the systematic uncertainties due to FSR recovery. We find differences of 0.1% and 0.5% for D^+→K̅^0e^+ν_e and D^+→π^0e^+ν_e, respectively.

MC statistics. The uncertainties in the measured branching fractions due to MC statistics are the statistical fluctuations of the MC samples, which are 0.2% for both the D^+→K̅^0 e^+ν_e and D^+→π^0 e^+ν_e semileptonic decays.

K_S^0 and π^0 decay branching fractions. We include an uncertainty of 0.07% (0.03%) in the branching fraction measurement of D^+→K̅^0 e^+ν_e (D^+→π^0 e^+ν_e) to account for the uncertainty of the branching fraction of the K_S^0→π^+π^- (π^0→γγ) decay <cit.>.

Table <ref> summarizes the systematic uncertainties in the measurement of the branching fractions. Adding all systematic uncertainties in quadrature yields total systematic uncertainties of 1.76% and 1.41% for D^+→K̅^0 e^+ν_e and D^+→π^0 e^+ν_e, respectively.
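The quadrature sums can be reproduced directly from the individual contributions listed above:

import math

# Relative systematic uncertainties (%), in the order listed in the text
syst_K0  = [0.5, 0.19, 0.16, 1.62, 0.1, 0.2, 0.2, 0.1, 0.2, 0.07]
syst_pi0 = [0.5, 0.15, 0.14, 1.00, 0.1, 0.6, 0.2, 0.5, 0.2, 0.03]
for name, u in (("K0bar e+ nu", syst_K0), ("pi0 e+ nu", syst_pi0)):
    print(name, round(math.sqrt(sum(x * x for x in u)), 2), "%")
# -> 1.76 and 1.41, as quoted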
For D^+→π^0e^+ν_e, our result is lower than the only other existing measurement, from CLEO-c <cit.>, by 2.0σ.

Using our previous measurements of ℬ(D^0→ K^-e^+ν_e) and ℬ(D^0→π^-e^+ν_e) <cit.>, the results obtained in this analysis, and the lifetimes of the D^0 and D^+ mesons <cit.>, we obtain the ratios I_K ≡Γ(D^0→ K^-e^+ν_e)/Γ(D^+→K̅^0e^+ν_e) =1.03±0.01±0.02 and I_π≡Γ(D^0→π^-e^+ν_e)/2Γ(D^+→π^0e^+ν_e) =1.03±0.03±0.02, which are consistent with isospin symmetry.

§ PARTIAL DECAY RATE MEASUREMENTS §.§ Determinations of partial decay rates To study the differential decay rates, we divide the semileptonic candidates satisfying the selection criteria described in Sec. <ref> into bins of q^2. Nine (seven) bins are used for D^+→K̅^0e^+ν_e (D^+→π^0e^+ν_e). The range of each bin is given in Table <ref>. The squared four-momentum transfer q^2 is determined for each semileptonic candidate by q^2 = (E_e^++E_ν_e)^2/c^4 - (p⃗_e^+ + p⃗_ν_e)^2/c^2, where the energy and momentum of the missing neutrino are taken to be E_ν_e = E_ miss and p⃗_ν_e = E_ missp̂_ miss/c, respectively. For each q^2 bin, we perform a maximum likelihood fit to the corresponding U_ miss distribution following the same procedure described in Sec. <ref> and obtain the signal yields shown in Table <ref>.

To account for detection efficiency and detector resolution, the number of events N^i_ obs observed in the ith q^2 bin is extracted from the relation N^i_ obs=∑_j=1^N_ binsε_ijN^j_ prd, where N_ bins is the number of q^2 bins, N_ prd^j is the number of semileptonic decay events produced in the tagged D^- sample with q^2 in the jth bin, and ε_ij is the overall efficiency matrix that describes the efficiency and smearing across q^2 bins. The efficiency matrix element ε_ij is obtained by ε_ij = (n^ rec_ij/n^ gen_j)(1/ε_ tag) f_ij, where n^ rec_ij is the number of signal MC events generated in the jth q^2 bin and reconstructed in the ith q^2 bin, n^ gen_j is the total number of signal MC events generated in the jth q^2 bin, and f_ij is the matrix that corrects for data-MC differences in the efficiencies for e^+ tracking, e^+ identification, and K̅^0 (π^0) reconstruction. Table <ref> presents the average overall efficiency matrices for the D^+→K̅^0e^+ν_e and D^+→π^0e^+ν_e decays. To produce this average overall efficiency matrix, we combine the efficiency matrices for each tag mode weighted by its yield shown in Table <ref>. The diagonal elements of the matrix give the overall efficiencies for D^+→ Pe^+ν_e decays to be reconstructed in the correct q^2 bins in the recoil of the single D^- tags, while the neighboring off-diagonal elements give the overall efficiencies for cross feed between different q^2 bins.

The partial decay width in the ith bin is obtained by inverting the efficiency matrix in Eq. (<ref>), ΔΓ_i=N_ prd^i/(τ_D^+ N_ tag) =1/(τ_D^+ N_ tag)∑_j^N_ bins(ε^-1)_ijN_ obs^j, where τ_D^+ is the lifetime of the D^+ meson <cit.>. The q^2-dependent partial widths for D^+→K̅^0e^+ν_e and D^+→π^0e^+ν_e are summarized in Table <ref>. Also shown in Table <ref> are the statistical uncertainties and the associated correlation matrices.

§.§ Systematic covariance matrices For each source of systematic uncertainty in the measurements of the partial decay rates, we construct an N_ bins× N_ bins systematic covariance matrix. A brief description of each contribution follows. D^+ lifetime. The systematic uncertainty associated with the lifetime of the D^+ meson (0.7%) <cit.> is fully correlated across q^2 bins. Number of D^- tags.
The systematic uncertainty from the number of single D^- tags (0.5%) is fully correlated between q^2 bins. e^+, K_S^0, and π^0 reconstruction. The covariance matrices for the systematic uncertainties associated with the e^+ tracking, e^+ identification, K_S^0, and π^0 reconstruction efficiencies are obtained in the following way. We first vary the corresponding correction factors according to their uncertainties, and then remeasure the partial decay rates using the efficiency matrices determined from the re-corrected signal MC events. The covariance matrix due to this source is assigned via C_ij=δ(ΔΓ_i)δ(ΔΓ_j), where δ(ΔΓ_i) denotes the change in the partial decay rate measured in the ith q^2 bin.

Requirement on E_γ, max. We take a systematic uncertainty of 0.1% due to the E_γ, max requirement on the selected events in each q^2 bin, and assume that this uncertainty is fully correlated between q^2 bins.

Fit to the U_ miss distribution. The technique of fitting the U_ miss distributions affects the number of signal events observed in the q^2 bins. The covariance matrix due to the U_ miss fits is determined by C_ij= (1/τ_D^+ N_ tag)^2∑_αε^-1_i αε^-1_j α [δ(N_ obs^α)]^2, where δ(N_ obs^α) is the systematic uncertainty of N_ obs^α associated with the fit to the corresponding U_ miss distribution.

Form factor. To estimate the systematic uncertainty associated with the form factor model used to generate signal events in the MC simulation, we re-weight the signal MC events so that the q^2 spectra agree with the measured spectra. We then re-calculate the partial decay rates in different q^2 bins with the new efficiency matrices, which are determined using the weighted MC events. The covariance matrix due to this source is assigned via C_ij=δ(ΔΓ_i)δ(ΔΓ_j), where δ(ΔΓ_i) denotes the change of the partial width measured in the ith q^2 bin.

FSR recovery. To estimate the systematic covariance matrix associated with the FSR recovery of the positron momentum, we remeasure the partial decay rates without the FSR recovery. The covariance matrix due to this source is assigned via C_ij=δ(ΔΓ_i)δ(ΔΓ_j), where δ(ΔΓ_i) denotes the change of the partial decay rate measured in the ith q^2 bin.

MC statistics. The systematic uncertainties due to the limited size of the MC samples used to determine the efficiency matrices are translated into the covariance via C_ij = (1/τ_D^+ N_ tag)^2∑_αβ( N_ obs^α N_ obs^β cov[ε^-1_i α,ε^-1_j β] ), where the covariance of the inverse efficiency matrix elements is given by <cit.> cov[ε^-1_αβ,ε^-1_ab] = ∑_ij(ε^-1_α iε^-1_ai) σ^2(ε_ij) (ε^-1_j βε^-1_jb).

K_S^0 and π^0 decay branching fractions. The systematic uncertainties due to the branching fractions of K_S^0 →π^+π^- (0.07%) and π^0 →γγ (0.03%) are fully correlated between q^2 bins.

The total systematic covariance matrix is obtained by summing all these matrices. Table <ref> summarizes the relative size of the systematic uncertainties and the corresponding correlations in the measurements of the partial decay rates of the D^+→K̅^0e^+ν_e and D^+→π^0e^+ν_e semileptonic decays.

§ FORM FACTORS To determine the product f_+(0)|V_cs(d)| and other form factor parameters, we fit the measured partial decay rates using Eq. (<ref>) with a parameterization of the form factor f_+(q^2). In this analysis, we use several form factor parameterizations, which are reviewed in Sec. <ref>. §.§ Form factor parameterizations In general, the single pole model is the simplest approach to describe the q^2 dependence of the form factor.
The single pole model is expressed as f_+(q^2) = f_+(0)/(1-q^2/m_ pole^2), where f_+(0) is the value of the form factor at q^2=0, and m_ pole is the pole mass, which is often treated as a free parameter to improve the fit quality. The modified pole model <cit.> is also widely used in lattice QCD (LQCD) calculations and experimental studies of these decays. In this parameterization, the form factor of the semileptonic D→ Pe^+ν_e decays is written as f_+(q^2) = f_+(0)/[(1-q^2/m_D^*+_(s)^2)(1-α q^2/m_D^*+_(s)^2)], where m_D^*+_(s) is the mass of the D^*+_(s) meson, and α is a free parameter to be fitted. The ISGW2 model <cit.> assumes f_+(q^2) = f_+(q^2_ max) ( 1+r^2/12(q^2_ max - q^2) )^-2, where q^2_ max is the kinematical limit of q^2, and r is the conventional radius of the meson.

The most general parameterization of the form factor is the series expansion <cit.>, which is based on analyticity and unitarity. In this parameterization, the variable q^2 is mapped to a new variable z through z(q^2,t_0) = [√(t_+-q^2)-√(t_+-t_0)] / [√(t_+-q^2)+√(t_+-t_0)], with t_±=(m_D^+± m_P)^2 and t_0 = t_+(1-√(1-t_-/t_+)). The form factor is then expressed in terms of the new variable z as f_+(q^2) = [1/(P(q^2)ϕ(q^2,t_0))]∑_k=0^∞ a_k(t_0)[z(q^2,t_0)]^k, where the a_k(t_0) are real coefficients. The function P(q^2) is P(q^2) = z(q^2,m^2_D^*_s) for D→ K and P(q^2)=1 for D→π. The standard choice of ϕ(q^2,t_0) is ϕ(q^2,t_0)= ( π m^2_c/3)^1/2( z(q^2,0)/(-q^2))^5/2( z(q^2,t_0)/(t_0-q^2))^-1/2 × ( z(q^2,t_-)/(t_- -q^2))^-3/4 (t_+-q^2)/(t_+-t_0)^1/4, where m_c is the mass of the charm quark. In practical use, one truncates the above series; expressed in terms of f_+(0), the truncated form factor reads f_+(q^2) =f_+(0)P(0)ϕ(0,t_0) (1+∑_k=1^k_ maxr_k [z(q^2,t_0)]^k) / {P(q^2)ϕ(q^2,t_0) (1+∑_k=1^k_ maxr_k [z(0,t_0)]^k)}, where r_k≡ a_k(t_0)/a_0(t_0). In this analysis we fit the measured decay rates to the two- or three-parameter series expansion, i.e., we take k_ max=1 or 2. In fact, the z expansion with only a linear term is sufficient to describe the data. Therefore we take the two-parameter series expansion as the nominal parameterization to determine f_+^K(π)(0) and |V_cs(d)|.

§.§ Fitting partial decay rates to extract form factors In order to determine the form factor parameters, we fit the theoretical parameterizations to the measured partial decay rates. Taking into account the correlations of the measured partial decay rates among q^2 bins, the χ^2 to be minimized in the fit is defined as χ^2 = ∑_ij (ΔΓ_i-ΔΓ_i^ th)𝒞^-1_ij (ΔΓ_j-ΔΓ_j^ th), where ΔΓ_i is the measured partial decay rate in the ith q^2 bin and 𝒞_ij^-1 is the inverse of the covariance matrix 𝒞_ij. In the ith q^2 bin, the theoretical expectation for the partial decay rate is obtained by integrating Eq. (<ref>), ΔΓ_i^ th = ∫_q^2_ min,i^q^2_ max,i X G^2_F/(24π^3) |V_cs(d)|^2 p^3|f_+(q^2)|^2 dq^2, where q^2_ min,i and q^2_ max,i are the lower and upper boundaries of that q^2 bin, respectively. In the fits, all parameters of the form factor parameterizations are left free. The central values of the form factor parameters are taken from the fits performed with the combined statistical and systematic covariance matrix.
The quadratic difference between the parameter uncertainties obtained from the fits with the combined covariance matrix and those obtained from the fits with the statistical covariance matrix only is taken as the systematic uncertainty of the measured form factor parameters. The results of these fits are summarized in Table <ref>, where the first errors are statistical and the second systematic. Figure <ref> shows the fits to the measured differential decay rates for D^+→K̅^0 e^+ν_e and D^+→π^0 e^+ν_e. Figure <ref> shows the projection of the fits onto f_+(q^2) for the D^+→K̅^0e^+ν_e and D^+→π^0 e^+ν_e decays, respectively. In these two figures, the dots with error bars show the measured values of the form factor, f_+(q^2), in the center of each q^2 bin, which are obtained with f_+(q^2_i)=√((ΔΓ_i/Δ q^2_i)(24π^3/(XG_F^2 p_i'^3 |V_cq|^2))), in which p_i'^3 = ∫_q^2_ min,i^q^2_ max,i p^3|f_+(q^2)|^2 dq^2 / [ |f_+(q^2_i)|^2 (q^2_ max,i-q^2_ min,i) ], where |V_cs|=0.97351± 0.00013 and |V_cd|=0.22492± 0.00050 are taken from the SM constraint fit <cit.>. In the calculation of p_i'^3, f_+(q^2) is computed using the two-parameter series parameterization with the measured parameters.

§.§ Determinations of f_+^K(0) and f_+^π(0) Using the f_+^K(π)(0)|V_cs(d)| values from the two-parameter series expansion fits and taking the values of |V_cs(d)| from the SM constraint fit <cit.> as inputs, we obtain the form factors f_+^K(0)=0.725±0.004± 0.012 and f_+^π(0)=0.622±0.012± 0.003, where the first errors are statistical and the second systematic.

§ DETERMINATIONS OF |V_CS| AND |V_CD| Using the values of f_+^K(π)(0)|V_cs(d)| from the two-parameter z-series expansion fits, in conjunction with the form factor values f_+^K(0)=0.747 ± 0.011 ± 0.015 <cit.> and f_+^π(0)=0.666 ± 0.020 ± 0.021 <cit.> calculated in LQCD, we obtain |V_cs|=0.944 ± 0.005 ± 0.015 ± 0.024 and |V_cd|=0.210 ± 0.004 ± 0.001 ± 0.009, where the first uncertainties are statistical, the second systematic, and the third are due to the theoretical uncertainties in the LQCD calculations of the form factors.

§ SUMMARY In summary, by analyzing 2.93 fb^-1 of data collected at 3.773 GeV with the BESIII detector at the BEPCII, the semileptonic decays D^+→K̅^0e^+ν_e and D^+→π^0e^+ν_e have been studied. From a total of 1703054 ± 3405 D^- tags, 26008± 168 D^+ →K̅^0e^+ν_e and 3402± 70 D^+ →π^0e^+ν_e signal events are observed in the system recoiling against the D^- tags. These yield the absolute decay branching fractions ℬ(D^+ →K̅^0e^+ν_e)=(8.60± 0.06 ± 0.15)×10^-2 and ℬ(D^+ →π^0e^+ν_e)=(3.63± 0.08± 0.05)×10^-3. We also study the relations between the partial decay rates and the squared four-momentum transfer q^2 for these two decays and obtain the parameters of different form factor parameterizations. The products of the form factors and the related CKM matrix elements extracted from the two-parameter series expansion parameterization are selected as our primary results. We obtain f_+(0)|V_cs| = 0.7053±0.0040± 0.0112 and f_+(0)|V_cd| = 0.1400±0.0026± 0.0007. Using the global SM fit values for |V_cs| and |V_cd|, we obtain the form factors f^K_+(0) = 0.725±0.004± 0.012 and f^π_+(0) = 0.622±0.012± 0.003. Furthermore, using the form factors predicted by the LQCD calculations, we obtain the CKM matrix elements |V_cs|=0.944 ± 0.005 ± 0.015 ± 0.024 and |V_cd|=0.210 ± 0.004 ± 0.001 ± 0.009, where the third errors are dominated by the theoretical uncertainties in the LQCD calculations of the form factors.
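To make the fitting machinery above concrete, the following is a minimal, self-contained Python sketch (not the collaboration's analysis code) of a binned χ^2 fit of partial decay rates to a simplified two-parameter z-series form factor. The masses, the toy diagonal covariance, the pseudo-data, and the omission of the P(q^2)ϕ(q^2,t_0) prefactors are all illustrative assumptions.

# A minimal sketch of the binned chi^2 fit; all numbers are placeholders
# and the outer functions P(q^2), phi(q^2,t0) are dropped for brevity.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize

mD, mP = 1.86966, 0.497611                      # GeV; illustrative D+ and K0 masses
tp, tm = (mD + mP)**2, (mD - mP)**2             # t_+ and t_-
t0 = tp * (1.0 - np.sqrt(1.0 - tm / tp))

def zmap(q2):                                   # conformal mapping z(q^2, t0)
    a, b = np.sqrt(tp - q2), np.sqrt(tp - t0)
    return (a - b) / (a + b)

def f_plus(q2, fv, r1):                         # simplified series; fv = f_+(0)|V|
    return fv * (1.0 + r1 * zmap(q2)) / (1.0 + r1 * zmap(0.0))

def dgamma(q2lo, q2hi, fv, r1, GF=1.1663787e-5, X=1.0):
    # partial rate: integral of X G_F^2/(24 pi^3) p^3 |f_+|^2 over the bin
    def integrand(q2):
        p = np.sqrt((tp - q2) * (tm - q2)) / (2.0 * mD)   # daughter momentum
        return X * GF**2 / (24.0 * np.pi**3) * p**3 * f_plus(q2, fv, r1)**2
    return quad(integrand, q2lo, q2hi)[0]

edges = np.linspace(0.0, tm, 10)                # 9 q^2 bins, as in the text
true = (0.705, -2.0)                            # made-up (f_+(0)|V|, r_1)
rates = np.array([dgamma(a, b, *true) for a, b in zip(edges[:-1], edges[1:])])
cov = np.diag((0.03 * rates)**2)                # toy 3% diagonal covariance
meas = np.random.default_rng(1).multivariate_normal(rates, cov)
cinv = np.linalg.inv(cov)

def chi2(par):
    th = np.array([dgamma(a, b, *par) for a, b in zip(edges[:-1], edges[1:])])
    res = meas - th
    return res @ cinv @ res

fit = minimize(chi2, x0=(0.7, -1.0), method="Nelder-Mead")
print("best-fit (f_+(0)|V|, r_1):", fit.x, " chi2:", fit.fun)

In the real analysis the covariance is non-diagonal (statistical plus systematic) and the full outer functions of the z expansion are kept; the sketch only shows the mechanics of the χ^2 and of the bin-integrated theoretical rates.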
The BESIII collaboration thanks the staff of BEPCII and the IHEP computing center for their strong support. This work is supported in part by National Key Basic Research Program of China under Contract Nos. 2009CB825204, 2015CB856700; National Natural Science Foundation of China (NSFC) under Contracts Nos. 10935007, 11235011, 11305180, 11322544, 11335008, 11425524, 11635010; the Chinese Academy of Sciences (CAS) Large-Scale Scientific Facility Program; the CAS Center for Excellence in Particle Physics (CCEPP); the Collaborative Innovation Center for Particles and Interactions (CICPI); Joint Large-Scale Scientific Facility Funds of the NSFC and CAS under Contracts Nos. U1232201, U1332201, U1532257, U1532258; CAS under Contracts Nos. KJCX2-YW-N29, KJCX2-YW-N45; 100 Talents Program of CAS; National 1000 Talents Program of China; INPAC and Shanghai Key Laboratory for Particle Physics and Cosmology; German Research Foundation DFG under Contracts Nos. Collaborative Research Center CRC 1044, FOR 2359; Istituto Nazionale di Fisica Nucleare, Italy; Koninklijke Nederlandse Akademie van Wetenschappen (KNAW) under Contract No. 530-4CDP03; Ministry of Development of Turkey under Contract No. DPT2006K-120470; The Swedish Research Council; U.S. Department of Energy under Contracts Nos. DE-FG02-05ER41374, DE-SC-0010118, DE-SC-0010504, DE-SC-0012069; U.S. National Science Foundation; University of Groningen (RuG) and the Helmholtzzentrum fuer Schwerionenforschung GmbH (GSI), Darmstadt; WCU Program of National Research Foundation of Korea under Contract No. R32-2008-000-10155-0.

Z_Phys_C_46_93 J. G. Körner and G. A. Schuler, Z. Phys. C 46, 93 (1990); F. J. Gilman and R. L. Singleton, Jr., Phys. Rev. D 41, 142 (1990). lum M. Ablikim et al. (BESIII Collaboration), Chin. Phys. C 37, 123001 (2013); Phys. Lett. B 753, 629 (2016). bes3 M. Ablikim et al. (BESIII Collaboration), Nucl. Instrum. Methods Phys. Res., Sect. A 614, 345 (2010). geant4 S. Agostinelli et al. (GEANT4 Collaboration), Nucl. Instrum. Methods Phys. Res., Sect. A 506, 250 (2003). BOOST Z. Y. Deng et al., Chin. Phys. C 30, 371 (2006). kkmc S. Jadach, B. F. L. Ward, and Z. Was, Comput. Phys. Commun. 130, 260 (2000). besevtgen D. J. Lange, Nucl. Instrum. Methods Phys. Res., Sect. A 462, 152 (2001); R.-G. Ping, Chin. Phys. C 32, 599 (2008). pdg2016 C. Patrignani et al. (Particle Data Group), Chin. Phys. C 40, 100001 (2016). lundcharm J. C. Chen, G. S. Huang, X. R. Qi, D. H. Zhang and Y. S. Zhu, Phys. Rev. D 62, 034003 (2000). BESIII_Dptomunu M. Ablikim et al. (BESIII Collaboration), Phys. Rev. D 89, 051104(R) (2014). Albrecht-1990am H. Albrecht et al. (ARGUS Collaboration), Phys. Lett. B 241, 278 (1990). BESII_D0toPenu M. Ablikim et al. (BES Collaboration), Phys. Lett. B 597, 39 (2004). BESIII_D0toPenu M. Ablikim et al. (BESIII Collaboration), Phys. Rev. D 92, 072012 (2015). BESII_DptoK0enu M. Ablikim et al. (BES Collaboration), Phys. Lett. B 608, 24 (2005). CLEOc_DtoPenu_818 D. Besson et al. (CLEO Collaboration), Phys. Rev. D 80, 032005 (2009). BESIII_DptoKLenu M. Ablikim et al. (BESIII Collaboration), Phys. Rev. D 92, 112008 (2015). BESIII_DptoKSenu M. Ablikim et al. (BESIII Collaboration), Chin. Phys. C 40, 113001 (2016). cov_inverse_matrix M. Lefebvre, R. K. Keeler, R. Sobie, and J. White, Nucl. Instrum. Methods Phys. Res., Sect. A 451, 520 (2000). BK D. Becirevic and A. B. Kaidalov, Phys. Lett. B 478, 417 (2000). ISGW2 D. Scora and N. Isgur, Phys. Rev. D 52, 2783 (1995). ff_zexpansion T. Becher and R. J. Hill, Phys. Lett. B 633, 61 (2006). LQCD_fK H. Na et al.
(HPQCD Collaboration), Phys. Rev. D 82, 114506 (2010). LQCD_fpi H. Na et al. (HPQCD Collaboration), Phys. Rev. D 84, 114505 (2011). | http://arxiv.org/abs/1703.09084v1 | {
"authors": [
"BESIII Collaboration",
"M. Ablikim",
"M. N. Achasov",
"S. Ahmed",
"X. C. Ai",
"O. Albayrak",
"M. Albrecht",
"D. J. Ambrose",
"A. Amoroso",
"F. F. An",
"Q. An",
"J. Z. Bai",
"O. Bakina",
"R. Baldini Ferroli",
"Y. Ban",
"D. W. Bennett",
"J. V. Bennett",
"N. Berger",
"M. Bertani",
"D. Bettoni",
"J. M. Bian",
"F. Bianchi",
"E. Boger",
"I. Boyko",
"R. A. Briere",
"H. Cai",
"X. Cai",
"O. Cakir",
"A. Calcaterra",
"G. F. Cao",
"S. A. Cetin",
"J. Chai",
"J. F. Chang",
"G. Chelkov",
"G. Chen",
"H. S. Chen",
"J. C. Chen",
"M. L. Chen",
"S. Chen",
"S. J. Chen",
"X. Chen",
"X. R. Chen",
"Y. B. Chen",
"X. K. Chu",
"G. Cibinetto",
"H. L. Dai",
"J. P. Dai",
"A. Dbeyssi",
"D. Dedovich",
"Z. Y. Deng",
"A. Denig",
"I. Denysenko",
"M. Destefanis",
"F. De Mori",
"Y. Ding",
"C. Dong",
"J. Dong",
"L. Y. Dong",
"M. Y. Dong",
"Z. L. Dou",
"S. X. Du",
"P. F. Duan",
"J. Z. Fan",
"J. Fang",
"S. S. Fang",
"X. Fang",
"Y. Fang",
"R. Farinelli",
"L. Fava",
"F. Feldbauer",
"G. Felici",
"C. Q. Feng",
"E. Fioravanti",
"M. Fritsch",
"C. D. Fu",
"Q. Gao",
"X. L. Gao",
"Y. Gao",
"Z. Gao",
"I. Garzia",
"K. Goetzen",
"L. Gong",
"W. X. Gong",
"W. Gradl",
"M. Greco",
"M. H. Gu",
"Y. T. Gu",
"Y. H. Guan",
"A. Q. Guo",
"L. B. Guo",
"R. P. Guo",
"Y. Guo",
"Y. P. Guo",
"Z. Haddadi",
"A. Hafner",
"S. Han",
"X. Q. Hao",
"F. A. Harris",
"K. L. He",
"F. H. Heinsius",
"T. Held",
"Y. K. Heng",
"T. Holtmann",
"Z. L. Hou",
"C. Hu",
"H. M. Hu",
"T. Hu",
"Y. Hu",
"G. S. Huang",
"J. S. Huang",
"X. T. Huang",
"X. Z. Huang",
"Z. L. Huang",
"T. Hussain",
"W. Ikegami Andersson",
"Q. Ji",
"Q. P. Ji",
"X. B. Ji",
"X. L. Ji",
"L. L. Jiang",
"L. W. Jiang",
"X. S. Jiang",
"X. Y. Jiang",
"J. B. Jiao",
"Z. Jiao",
"D. P. Jin",
"S. Jin",
"T. Johansson",
"A. Julin",
"N. Kalantar-Nayestanaki",
"X. L. Kang",
"X. S. Kang",
"M. Kavatsyuk",
"B. C. Ke",
"P. Kiese",
"R. Kliemt",
"B. Kloss",
"O. B. Kolcu",
"B. Kopf",
"M. Kornicer",
"A. Kupsc",
"W. Kuhn",
"J. S. Lange",
"M. Lara",
"P. Larin",
"H. Leithoff",
"C. Leng",
"C. Li",
"Cheng Li",
"D. M. Li",
"F. Li",
"F. Y. Li",
"G. Li",
"H. B. Li",
"H. J. Li",
"J. C. Li",
"Jin Li",
"K. Li",
"K. Li",
"Lei Li",
"P. R. Li",
"Q. Y. Li",
"T. Li",
"W. D. Li",
"W. G. Li",
"X. L. Li",
"X. N. Li",
"X. Q. Li",
"Y. B. Li",
"Z. B. Li",
"H. Liang",
"Y. F. Liang",
"Y. T. Liang",
"G. R. Liao",
"D. X. Lin",
"B. Liu",
"B. J. Liu",
"C. L. Liu",
"C. X. Liu",
"D. Liu",
"F. H. Liu",
"Fang Liu",
"Feng Liu",
"H. B. Liu",
"H. H. Liu",
"H. H. Liu",
"H. M. Liu",
"J. Liu",
"J. B. Liu",
"J. P. Liu",
"J. Y. Liu",
"K. Liu",
"K. Y. Liu",
"L. D. Liu",
"P. L. Liu",
"Q. Liu",
"S. B. Liu",
"X. Liu",
"Y. B. Liu",
"Y. Y. Liu",
"Z. A. Liu",
"Zhiqing Liu",
"H. Loehner",
"Y. F. Long",
"X. C. Lou",
"H. J. Lu",
"J. G. Lu",
"Y. Lu",
"Y. P. Lu",
"C. L. Luo",
"M. X. Luo",
"T. Luo",
"X. L. Luo",
"X. R. Lyu",
"F. C. Ma",
"H. L. Ma",
"L. L. Ma",
"M. M. Ma",
"Q. M. Ma",
"T. Ma",
"X. N. Ma",
"X. Y. Ma",
"Y. M. Ma",
"F. E. Maas",
"M. Maggiora",
"Q. A. Malik",
"Y. J. Mao",
"Z. P. Mao",
"S. Marcello",
"J. G. Messchendorp",
"G. Mezzadri",
"J. Min",
"T. J. Min",
"R. E. Mitchell",
"X. H. Mo",
"Y. J. Mo",
"C. Morales Morales",
"G. Morello",
"N. Yu. Muchnoi",
"H. Muramatsu",
"P. Musiol",
"Y. Nefedov",
"F. Nerling",
"I. B. Nikolaev",
"Z. Ning",
"S. Nisar",
"S. L. Niu",
"X. Y. Niu",
"S. L. Olsen",
"Q. Ouyang",
"S. Pacetti",
"Y. Pan",
"M. Papenbrock",
"P. Patteri",
"M. Pelizaeus",
"H. P. Peng",
"K. Peters",
"J. Pettersson",
"J. L. Ping",
"R. G. Ping",
"R. Poling",
"V. Prasad",
"H. R. Qi",
"M. Qi",
"S. Qian",
"C. F. Qiao",
"L. Q. Qin",
"N. Qin",
"X. S. Qin",
"Z. H. Qin",
"J. F. Qiu",
"K. H. Rashid",
"C. F. Redmer",
"M. Ripka",
"G. Rong",
"Ch. Rosner",
"X. D. Ruan",
"A. Sarantsev",
"M. Savrie",
"C. Schnier",
"K. Schoenning",
"W. Shan",
"M. Shao",
"C. P. Shen",
"P. X. Shen",
"X. Y. Shen",
"H. Y. Sheng",
"W. M. Song",
"X. Y. Song",
"S. Sosio",
"S. Spataro",
"G. X. Sun",
"J. F. Sun",
"S. S. Sun",
"X. H. Sun",
"Y. J. Sun",
"Y. Z. Sun",
"Z. J. Sun",
"Z. T. Sun",
"C. J. Tang",
"X. Tang",
"I. Tapan",
"E. H. Thorndike",
"M. Tiemens",
"I. Uman",
"G. S. Varner",
"B. Wang",
"B. L. Wang",
"D. Wang",
"D. Y. Wang",
"K. Wang",
"L. L. Wang",
"L. S. Wang",
"M. Wang",
"P. Wang",
"P. L. Wang",
"W. Wang",
"W. P. Wang",
"X. F. Wang",
"Y. Wang",
"Y. D. Wang",
"Y. F. Wang",
"Y. Q. Wang",
"Z. Wang",
"Z. G. Wang",
"Z. H. Wang",
"Z. Y. Wang",
"Z. Y. Wang",
"T. Weber",
"D. H. Wei",
"P. Weidenkaff",
"S. P. Wen",
"U. Wiedner",
"M. Wolke",
"L. H. Wu",
"L. J. Wu",
"Z. Wu",
"L. Xia",
"L. G. Xia",
"Y. Xia",
"D. Xiao",
"H. Xiao",
"Z. J. Xiao",
"Y. G. Xie",
"Y. H. Xie",
"Q. L. Xiu",
"G. F. Xu",
"J. J. Xu",
"L. Xu",
"Q. J. Xu",
"Q. N. Xu",
"X. P. Xu",
"L. Yan",
"W. B. Yan",
"W. C. Yan",
"Y. H. Yan",
"H. J. Yang",
"H. X. Yang",
"L. Yang",
"Y. X. Yang",
"M. Ye",
"M. H. Ye",
"J. H. Yin",
"Z. Y. You",
"B. X. Yu",
"C. X. Yu",
"J. S. Yu",
"C. Z. Yuan",
"Y. Yuan",
"A. Yuncu",
"A. A. Zafar",
"Y. Zeng",
"Z. Zeng",
"B. X. Zhang",
"B. Y. Zhang",
"C. C. Zhang",
"D. H. Zhang",
"H. H. Zhang",
"H. Y. Zhang",
"J. Zhang",
"J. J. Zhang",
"J. L. Zhang",
"J. Q. Zhang",
"J. W. Zhang",
"J. Y. Zhang",
"J. Z. Zhang",
"K. Zhang",
"L. Zhang",
"S. Q. Zhang",
"X. Y. Zhang",
"Y. Zhang",
"Y. Zhang",
"Y. H. Zhang",
"Y. N. Zhang",
"Y. T. Zhang",
"Yu Zhang",
"Z. H. Zhang",
"Z. P. Zhang",
"Z. Y. Zhang",
"G. Zhao",
"J. W. Zhao",
"J. Y. Zhao",
"J. Z. Zhao",
"Lei Zhao",
"Ling Zhao",
"M. G. Zhao",
"Q. Zhao",
"Q. W. Zhao",
"S. J. Zhao",
"T. C. Zhao",
"Y. B. Zhao",
"Z. G. Zhao",
"A. Zhemchugov",
"B. Zheng",
"J. P. Zheng",
"W. J. Zheng",
"Y. H. Zheng",
"B. Zhong",
"L. Zhou",
"X. Zhou",
"X. K. Zhou",
"X. R. Zhou",
"X. Y. Zhou",
"K. Zhu",
"K. J. Zhu",
"S. Zhu",
"S. H. Zhu",
"X. L. Zhu",
"Y. C. Zhu",
"Y. S. Zhu",
"Z. A. Zhu",
"J. Zhuang",
"L. Zotti",
"B. S. Zou",
"J. H. Zou"
],
"categories": [
"hep-ex"
],
"primary_category": "hep-ex",
"published": "20170327135902",
"title": "Analysis of $D^+\\to\\bar K^0e^+ν_e$ and $D^+\\toπ^0e^+ν_e$ Semileptonic Decays"
} |
§ INTRODUCTION Quantum chromodynamics (QCD) covers a wide range of scales, from its partonic degrees of freedom, quarks and gluons, to complex hadrons, such as the pion and the nucleon. QCD is an asymptotically free theory, and a perturbative method could be applicable for studying observables with a large momentum transfer in high energy collisions. On the other hand, at low energy, the strong interaction physics is nonperturbative. When we study high energy scattering processes with identified hadron(s), such as deep inelastic scattering and the Drell-Yan process, one of the key concepts is the “QCD collinear factorization”. The scattering cross sections can be approximately written as a convolution of a perturbative hard part and nonperturbative parton distribution functions (PDFs), which absorb all perturbative collinear divergences of the partonic scattering. The PDFs are universal functions and can be used to predict the cross sections of various hadronic scattering processes. The PDFs have been extracted through global QCD analyses. A direct calculation of the PDFs is, in principle, possible and would give us invaluable insights into QCD dynamics, complementary to the global QCD analysis. Lattice QCD is a possible nonperturbative method to calculate the PDFs. However, since the PDFs are defined by using field operators located on the light cone, e.g., q(x,μ)=∫dξ^-/4π e^-ixξ^-P^+⟨ P|ψ(ξ^-)γ^+ exp(-ig∫_0^ξ^-dη^-A^+(η^-))ψ(0)|P⟩, for a quark distribution with the nucleon momentum P=(P_0, 0, 0, P_z), where x is the momentum fraction of P carried by the quark, μ is the factorization scale, and the light-cone coordinates are ξ^±=(t± z)/√(2), the time dependence of the fields correlated in the ξ^- direction makes a direct calculation on the Euclidean lattice impossible. Although there have been attempts to calculate the moments of the PDFs on the lattice, and then reconstruct the PDFs from the moments, this approach has not been very successful, since the higher moments are noisy and the existence of power divergences causes complicated operator mixings.

A recent breakthrough in the lattice calculation of the PDFs is the quasi-PDF approach, introduced by Ji <cit.>. The quasi-PDFs are defined with fields correlated completely along a spatial direction, e.g., q(x̃,μ, P_z)=∫dδ z/4π e^-iδ zx̃P_z⟨ P_z|ψ(δ z)γ^3 exp(-ig∫_0^δ zdz'A_3(z'))ψ(0)| P_z⟩, for the quasi-quark distribution, and are calculable on the Euclidean lattice. The quasi-PDFs can be matched to the normal PDFs using the large momentum effective theory <cit.> with perturbative matching factors, q(x,μ̃, P_z)= Z(x,μ̃/P_z, μ/P_z)⊗ q(x,μ) + O(Λ_ QCD^2/P_z^2, M^2/P_z^2), where ⊗ represents a convolution with respect to x and M is the nucleon mass. The relation between the normal and the renormalized quasi-PDFs was also investigated in terms of the QCD collinear factorization approach <cit.>. While there have been several lattice calculations of quasi-PDFs using the matching approach introduced by Ji <cit.>, a couple of uncertainties remain unresolved in these simulations, e.g., the existence of power divergences and the matching between continuum and lattice. In this article, we report our approach toward resolving these uncertainties. We propose a nonperturbative renormalization of the quasi-PDFs to subtract their power divergences. With our renormalization scheme, we provide an example of a one-loop perturbative calculation of the matching factor between continuum and lattice.
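To illustrate the structure of the matching relation above, here is a toy numerical sketch of one common realization of the convolution ⊗, namely q_quasi(x) = q(x) + ∫_x^1 (dy/y) Z1(x/y) q(y). Both the input distribution q_lc and the correction kernel Z1 below are made-up placeholders, not the actual one-loop matching coefficient.

# A toy sketch of the convolution structure of the matching relation;
# q_lc and Z1 are placeholders, not real QCD inputs.
import numpy as np

def q_lc(y):
    # made-up light-cone quark distribution ~ y^(-1/2) (1-y)^3
    return y**-0.5 * (1.0 - y)**3

def Z1(xi, alpha_s=0.3):
    # placeholder O(alpha_s) kernel piece; any integrable function works
    return alpha_s * (1.0 + xi) / 2.0

def q_quasi(x, n=4000):
    y = np.linspace(x, 1.0, n)[1:]              # integration nodes in (x, 1]
    f = Z1(x / y) * q_lc(y) / y                 # integrand of the convolution
    trap = 0.5 * np.sum((f[1:] + f[:-1]) * np.diff(y))
    return q_lc(x) + trap                       # delta-function term + correction

for x in (0.1, 0.3, 0.5, 0.7, 0.9):
    print(x, round(q_quasi(x), 5))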
§ RENORMALIZATION OF A NON-LOCAL OPERATOR WITH POWER DIVERGENCE The quasi-quark distribution (<ref>) is known to have a linear power divergence, which comes solely from the Wilson line in its definition. If we adopt a UV cutoff as a regulator, the power divergence is manifest. Since lattice QCD naturally introduces a UV cutoff, this UV divergence must be handled; otherwise, we cannot take the continuum limit. The renormalization of a Wilson line along a (smooth) contour C, W_C, has been known to be W_C=Z_ze^δ m ℓ(C)W_C^ ren, where a superscript “ren” indicates that the operator is renormalized, ℓ(C) is the length along the contour C, and δ m describes the mass renormalization of a test particle moving along the contour C <cit.>. The power divergence is contained in the δ m in the exponential factor, leaving only logarithmic divergences in the renormalization constant Z_z, which arise from the end points of the Wilson line. For the non-local quark bilinear in the hadronic matrix element on the r.h.s. of equation (<ref>), denoted O_C(z), the renormalization was assumed to take the form <cit.> O_C=Z_ψ,ze^δ m ℓ(C)O_C^ ren, where Z_ψ, z does not contain the power divergence, and δ m in the exponential factor, like that in equation (<ref>), takes care of all the power divergence. Unlike the Wilson line case in (<ref>), the multiplicative renormalization pattern (<ref>) is non-trivial. While the renormalization pattern for the power divergence in equation (<ref>), which is in an exponential form, holds even nonperturbatively, there is no guarantee that the other divergences can be multiplicatively renormalized <cit.>. If we rewrite the Wilson line operator as an auxiliary fermion field propagator, which is similar to a static quark propagator with the field propagating in the z-direction, we can use the knowledge of the heavy quark effective theory (HQET). In the HQET case, multiplicative renormalizability has been verified up to the first several loops. Lattice QCD simulations of HQET also suggest nonperturbative renormalizability, because the existence of the continuum limit of the heavy-light system has been checked numerically. Therefore we have good reason to assume the renormalization pattern (<ref>) and use it in the following arguments.

Knowing that the power divergence can be renormalized in exponential form as in equation (<ref>), the power divergence in the quasi-quark distribution can be subtracted by introducing a non-local operator in the hadronic matrix element of the quasi-quark distribution in (<ref>) <cit.>: O^ subt(δ z)=e^-δ m|δ z|ψ(δ z)γ^3 Pexp(-ig∫_0^δ zdz'A_3(z'))ψ(0), where the superscript “subt” indicates that this is a power-divergence-subtracted operator. We now need a scheme to fix the mass renormalization δ m. One convenient choice for the subtraction scheme is to use the static quark potential V(R), which shares the same power divergence as the one in the non-local operator <cit.>. The static potential V(R) can be obtained from an R× T Wilson loop in the large-T limit: W_R× T∝ e^-V(R)T (T→ large). The renormalization of the static potential is written as V^ ren(R)=V(R)+2δ m. To fix δ m, we impose a fixing condition at some distance R_0, yielding V^ ren(R_0)=V_0⟶δ m=1/2(V_0-V(R_0)). Since the Wilson loop is measured in the lattice simulation, the subtraction of the power divergence in equation (<ref>) is nonperturbative.

§ ONE-LOOP PERTURBATION CONTRIBUTION IN THE CONTINUUM In this section, we present a one-loop calculation of the matrix element on the r.h.s. of equation (<ref>) in the continuum.
In this calculation, we set the external quark momenta to zero, because the results are used for obtaining the matching factor between the continuum and lattice calculations, and the external momentum dependence cancels in the matching. We work in Euclidean space throughout the calculation. Besides the wave function renormalization of the quarks, there are three diagrams to be calculated at the one-loop level in the Feynman gauge, as shown in figure <ref>. By integrating out the z component of the loop momentum, we obtain δΓ_ vertex/sail/tadpole(δ z) =g^2/(4π)^2 C_Fγ_3 I_ vertex/sail/tadpole(δ z), I_ vertex(δ z) = (4π)^2/4∫_k_⊥ z(1/k_⊥ z^3+|δ z|/k_⊥ z^2 +|δ z|^2/k_⊥ z)e^-k_⊥ z|δ z|, I_ sail(δ z) = (4π)^2/2∫_k_⊥ z[1/k_⊥ z^3 -(1/k_⊥ z^3+|δ z|/k_⊥ z^2) e^-k_⊥ z|δ z|], I_ tadpole(δ z) = (4π)^2/2∫_k_⊥ z[1/k_⊥ z^3-|δ z|/k_⊥ z^2 -1/k_⊥ z^3e^-k_⊥ z|δ z|], where C_F=4/3 and k_⊥ z represents the loop momenta perpendicular to the z-direction. When δ z=0, the local operator case, the contributions from the sail- and tadpole-type diagrams vanish, and the vertex-type diagram reproduces the logarithmic UV and IR divergences of the local operator. In the non-local case, the vertex-type diagram is UV finite, because the loop integral is regulated by δ z≠0, while the sail- and tadpole-type diagrams have logarithmic UV divergences. In addition, the tadpole-type diagram produces a linear UV divergence. As mentioned in the previous section, this linear power divergence should be subtracted. The subtraction is carried out using the static potential, whose one-loop expression is V(R)=-g^2C_F/(4π R)+g^2C_F∫_k_⊥ 0 1/k_⊥ 0^2 + O(g^4), where k_⊥ 0^2=k_1^2+k_2^2+k_3^2. From equations (<ref>) and (<ref>), it is clear that the linear divergences in the tadpole-type diagram and in the potential cancel. We can thus define a subtracted tadpole-type contribution: I_ tadpole^ subt(δ z) = (4π)^2/2∫_k_⊥ z[1/k_⊥ z^3 -1/k_⊥ z^3e^-k_⊥ z|δ z|]. At this stage, we introduce a UV cutoff as a regulator. Although the loop integrals are now three-dimensional, a two-dimensional UV cutoff is enough to regulate the UV divergences. The two directions for the cutoff correspond to the usual transverse directions in Minkowski space. Letting μ be the two-dimensional UV cutoff scale and λ the IR regulator, the loop integrals yield: I_ vertex(δ z=0)=2lnμ/λ, I_ sail(δ z=0)=0, I_ tadpole^ subt(δ z=0)=0, and for δ z≠0: I_ vertex(δ z≠0) =-∫_-∞^∞dk_0 (k_⊥+1/√(k_0^2+1)) e^-√(k_0^2+1)k_⊥|_k_⊥=λ|δ z|^μ|δ z|, I_ sail(δ z≠0) =4lnμ/λ +2∫_-∞^∞dk_0 e^-√(k_0^2+1)k_⊥/√(k_0^2+1)|_k_⊥=λ|δ z|^μ|δ z|, I_ tadpole^ subt(δ z≠0) =4lnμ/λ +2∫_-∞^∞dk_0 (e^-√(k_0^2+1)k_⊥/√(k_0^2+1) +k_⊥ Ei[-√(k_0^2+1)k_⊥]) |_k_⊥=λ|δ z|^μ|δ z|.

§ ONE-LOOP PERTURBATIVE MATCHING BETWEEN CONTINUUM AND LATTICE In this section, we calculate the matching factor of the power-divergence-subtracted non-local operator (<ref>) between continuum and lattice at the one-loop level. The matching is done at each distance scale δ z; hence the matching factor could depend on δ z. With the multiplicative renormalization in equation (<ref>), we have the following matching pattern: O_ cont^ subt(δ z)=Z(δ z)O_ latt^ subt(δ z). In the following, we take a two-dimensional UV cutoff in the continuum, as mentioned in the previous section, and the cutoff scale is set to μ=a^-1 (the lattice cutoff). For the lattice side, the naïve fermion is employed for the lattice perturbative calculation, just to keep the calculation simple. Extending this work to other practical lattice fermions, such as Wilson and domain-wall fermions, is straightforward, but just introduces complications.
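Before turning to the details of the lattice side, note that the cutoff-regulated continuum integrals above can be cross-checked numerically. A minimal sketch follows, in which the values of μ, λ and δz are purely illustrative, not simulation parameters.

# Numerical evaluation of the cutoff-regulated I_sail and I_tadpole^subt;
# mu, lam and dz below are illustrative values only.
import numpy as np
from scipy.integrate import quad
from scipy.special import expi

def endpoint_sail(kp):
    # integral over k0 of exp(-sqrt(k0^2+1) kp)/sqrt(k0^2+1)
    f = lambda k0: np.exp(-np.sqrt(k0**2 + 1.0) * kp) / np.sqrt(k0**2 + 1.0)
    return quad(f, -np.inf, np.inf)[0]

def endpoint_tad(kp):
    # integral over k0 of the bracket including the exponential-integral term
    def f(k0):
        e = np.sqrt(k0**2 + 1.0)
        return np.exp(-e * kp) / e + kp * expi(-e * kp)
    return quad(f, -np.inf, np.inf)[0]

def I_sail(dz, mu, lam):
    return 4.0 * np.log(mu / lam) + 2.0 * (endpoint_sail(mu * dz) - endpoint_sail(lam * dz))

def I_tad_subt(dz, mu, lam):
    return 4.0 * np.log(mu / lam) + 2.0 * (endpoint_tad(mu * dz) - endpoint_tad(lam * dz))

for dz in (1.0, 2.0, 4.0):                      # |delta z| in cutoff units
    print(dz, I_sail(dz, mu=np.pi, lam=0.05), I_tad_subt(dz, mu=np.pi, lam=0.05))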
We also introduce link smearings for the Wilson line operator in the definition of the non-local operator on the lattice side. Link smearing is often used to improve the S/N in simulations, and is also known to reduce power divergences. We adopt two types of smearing, HYP1 and HYP2, in this study. To improve the convergence of the coupling expansion in the lattice perturbative calculation, the mean-field improvement (MF) program is employed (see reference <cit.> for the details). For this matching, the final result does not depend on the choice of the power divergence subtraction condition (<ref>), because the term relevant to this choice cancels between continuum and lattice. At the one-loop level, the matching coefficient can be obtained by taking the differences of the loop integrals between the continuum and lattice calculations: δ I(δ z)=I_ cont(δ z)-I_ latt(δ z), where I stands for the integrals (<ref>), (<ref>), (<ref>) and (<ref>) for the continuum, and for their lattice counterparts. The wave function renormalization is also included in the matching.

The one-loop matching coefficients are shown in figure <ref>, separating the contributions from each diagram. Once the linear divergence is subtracted, the δ z dependence in the large-δ z region is flat. This result is consistent with the intuition that the difference between the continuum and the lattice involves only the UV structure. The Wilson-link smearing gives a tiny one-loop total coefficient compared to the unsmeared case. This small coefficient is preferable for perturbative accuracy.

§ SUMMARY We reported our effort to address two of the major uncertainties in extracting PDFs from quasi-PDFs calculated on the lattice: the power divergences and the matching between continuum and lattice. Since the power divergences must be subtracted nonperturbatively, we presented a subtraction scheme using the static quark potential. We also derived the one-loop matching factor between the continuum and lattice calculations. Although other nonperturbative renormalization techniques, such as the RI/MOM scheme, might be preferable for better accuracy in computing the matching, our one-loop perturbative calculation could provide good guidance for future efforts.

§ ACKNOWLEDGEMENTS This work is supported in part by the U.S. Department of Energy, under contract DE-AC05-06OR23177.

Ji:2013dva X. Ji, http://dx.doi.org/10.1103/PhysRevLett.110.262002 Phys. Rev. Lett. 110, 262002 (2013). Ji:2014gla X. Ji, http://dx.doi.org/10.1007/s11433-014-5492-3 Sci. China Phys. Mech. Astron. 57, 1407 (2014). Ma:2014jla Y. Q. Ma and J. W. Qiu, http://arxiv.org/abs/1404.6860 arXiv:1404.6860 [hep-ph]. Ma:2014jga Y. Q. Ma and J. W. Qiu, http://dx.doi.org/10.1142/S2010194515600411 Int. J. Mod. Phys. Conf. Ser. 37, 1560041 (2015). Chen:2016utp J. W. Chen, S. D. Cohen, X. Ji, H. W. Lin and J. H. Zhang, http://dx.doi.org/10.1016/j.nuclphysb.2016.07.033 Nucl. Phys. B 911, 246 (2016). Alexandrou:2016jqi C. Alexandrou, K. Cichy, M. Constantinou, K. Hadjiyiannakou, K. Jansen, F. Steffens and C. Wiese, http://arxiv.org/abs/1610.03689 arXiv:1610.03689 [hep-lat]. Dotsenko:1979wb V. S. Dotsenko and S. N. Vergeles, http://dx.doi.org/10.1016/0550-3213(80)90103-0 Nucl. Phys. B 169, 527 (1980). Arefeva:1980zd I. Y. Arefeva, http://dx.doi.org/10.1016/0370-2693(80)90529-8 Phys. Lett. B 93, 347 (1980). Craigie:1980qs N. S. Craigie and H. Dorn, http://dx.doi.org/10.1016/0550-3213(81)90372-2 Nucl. Phys. B 185, 204 (1981). Dorn:1986dt H. Dorn, http://dx.doi.org/10.1002/prop.19860340104 Fortsch. Phys. 34, 11 (1986). IMQY2 T. Ishikawa, Y. Q. Ma, J.
W. Qiu, and S. Yoshida, in preparation. Ishikawa:2016znu T. Ishikawa, Y. Q. Ma, J. W. Qiu and S. Yoshida, http://arxiv.org/abs/1609.02018 arXiv:1609.02018 [hep-lat]. Chen:2016fxx J. W. Chen, X. Ji and J. H. Zhang, http://arxiv.org/abs/1609.08102 arXiv:1609.08102 [hep-ph]. Musch:2010ka B. U. Musch, P. Hagler, J. W. Negele and A. Schafer, http://dx.doi.org/10.1103/PhysRevD.83.094507 Phys. Rev. D 83, 094507 (2011). | http://arxiv.org/abs/1703.08699v1 | {
"authors": [
"Tomomi Ishikawa",
"Yan-Qing Ma",
"Jian-Wei Qiu",
"Shinsuke Yoshida"
],
"categories": [
"hep-lat",
"hep-ph",
"nucl-th"
],
"primary_category": "hep-lat",
"published": "20170325145101",
"title": "Matching issue in quasi parton distribution approach"
} |
[email protected] Institute for Theoretical Physics, University of Heidelberg, Philosophenweg 16, D–69120 Heidelberg, Germany Department of Physics, Israel Institute of Technology – Technion, Haifa 32000, Israel [email protected] SISSA, via Bonomea 265, 34136, Trieste, Italy INFN, Sezione di Trieste, Via Valerio 2, 34127 Trieste, Italy [email protected] Institute for Theoretical Physics, University of Heidelberg, Philosophenweg 16, D–69120 Heidelberg, Germany What are the fundamental limitations of reconstructing the properties of dark energy, given cosmological observations in the quasilinear regime over a range of redshifts, assumed to be as precise as required? The aim of this paper is to address this question by constructing model-independent observables, while completely ignoring practical problems of real-world observations. Non-Gaussianities already present in the initial conditions are not directly accessible from observations, because of a perfect degeneracy with the non-Gaussianities arising from the (weakly) nonlinear matter evolution in generalized dark energy models. By imposing a specific set of evolution equations that should cover a range of dark energy cosmologies, we find, however, a constraint equation for the linear structure growth rate f_1 expressed in terms of model-independent observables. Entire classes of dark energy models which do not satisfy this constraint equation could be ruled out, and for models satisfying it we could reconstruct e.g. the nonlocal bias parameters b_1 and b_2. 98.80.Es, 04.50.Kd, 95.36.+x

Quasilinear observables in dark energy cosmologies Luca Amendola December 30, 2023 ====================================================

§ INTRODUCTION Gravity is a nonlinear phenomenon that is also responsible for today's observed large-scale structure. On large scales, gravitational interactions should be close to linear, even if there are significant non-Gaussian features in the initial conditions for structure formation. The linearity of gravitational interactions on large scales is mainly due to a suppressed interaction rate of energy fluctuations close to the causality horizon. On smaller scales gravitational interactions grow exponentially, and we observe nonlinear amplifications of over- and underdensities, accompanied by increasing tidal interactions. However, there should be an intermediate regime where linear theory provides a reasonably good approximation of the underlying physics, and nonlinearities can be viewed as a small perturbation to it. This is what we call the quasilinear regime, where we expect that a theory that includes the leading nonlinearities should deliver better approximations than a strictly linear analysis. Cosmological structures such as filaments, clusters and voids emerge on scales that overlap with this intermediate regime. Galaxies are tracers of the underlying matter distribution, and the explicit bias relation is unknown. Generally, galaxy bias could depend on the scale and on nonlocal physical processes, such as galaxy formation and hydrodynamical interactions, whose specific mechanisms are not yet comprehensively understood. Simplified bias models such as the local model can be very accurate, especially on large scales, but need to be revised when investigating cosmological models beyond ΛCDM. The reason for the necessary revision is that departures from ΛCDM usually imply scale-dependent matter growth, which also renders the bias scale dependent and nonlocal (see <cit.> for a review).
Dark energy (DE) could affect all of the above. So far the simple ΛCDM model has been remarkably successful at explaining a host of astrophysical observations on a wealth of scales, but more sophisticated DE models are not ruled out and should be further investigated <cit.>. Much effort has been made to understand DE and possible modifications at the level of background, linear and weakly nonlinear perturbation observables, often with the premise of fixing a particular DE model and investigating the resulting phenomenological consequences (e.g. <cit.>). In the literature there are also many approaches to investigate DE modifications in a model-independent way <cit.>, but they are usually restricted to the linear regime. One of the tasks of the present study is to extend the model-independent approach by allowing weak nonlinearities in the analysis. Quasilinear observables probe quasilinear scales; on very large scales, where the physics is linear, quasilinear observables should deliver, to a very good approximation, the same answers as linear observables. By also allowing weak nonlinearities in our model-independent analysis, we not only provide access to more scales, but also introduce many more unknowns that have to be taken into account. Such unknowns could arise from, e.g., the bias model or the weakly nonlinear matter evolution within the DE model. Furthermore, non-Gaussian modifications could be present already in the initial conditions of structure formation. These modifications are usually dubbed primordial non-Gaussianity (PNG), and in the present paper we refrain from using any simplified parametrization of PNG. Rather we show, amongst other things, that PNG and non-Gaussianities arising from the weakly nonlinear matter evolution are indistinguishable, because of a perfect degeneracy. However, by going beyond linear order we derive new observables that constrain the combined effect of gravity and PNG and, furthermore, find a novel constraint equation that also gives insight into the linear regime of structure formation, indeed much more insight than could be achieved in a strictly linear analysis.

In the present paper, which is closest in spirit to Ref. <cit.>, we completely ignore practical problems of actual observations, such as survey geometry, and we assume good-enough statistics. Thus, we investigate a vastly idealized scenario with the aim of obtaining the fundamental limitations of reconstructing the properties of DE cosmologies. This paper is organized as follows. In the following section we outline the theoretical assumptions and approximations that we use in this paper. In Sec. <ref> we apply our assumptions and approximations and develop the weakly nonlinear framework for generalized DE models. Sections <ref>–<ref> introduce various statistical estimators that are used to connect the theory with galaxy and weak gravitational lensing observations. Readers who would like to skip the technical details should at least read the short Sec. <ref>, where we explain the methodology that we apply throughout this paper. Then, in Sec. <ref> we report a selection of quasilinear observables from the statistical estimators (see Appendix <ref> for a complete list of observables). We derive equations that deliver model-independent constraints on several unknowns in Sec. <ref>.
Our constraint equations, although based on fairly general assumptions, still rely on a given class of theoretical models that should hold for many DE cosmologies in a suitable range of cosmological scales. We do not rule out the possibility that theoretical improvements could extend the validity regime of our analysis, and in Sec. <ref> we sketch a few such theoretical avenues. Finally, we conclude in Sec. <ref>.

Our conventions are as follows. Cosmic time is t and its corresponding partial derivative is the overdot, while a prime denotes a partial derivative with respect to the time variable N=ln a, where a=(1+z)^-1 is the cosmic scale factor and z the redshift. The subscript 0 denotes the present time. If not otherwise stated, the functional dependence in Fourier space is with respect to k= |k|. The shorthands k_12 and k_123 stand for k_1+k_2 and k_1+k_2+k_3, respectively, and we make use of the integral shorthand notation ∫d^3 k_12 = ∫d^3 k_1 ∫d^3 k_2. For a given function F(k_i,k_j) that depends on two wave vectors k_i and k_j, where i, j ∈{1,2,3} and i ≠ j, the shorthand F^ eq denotes the equilateral dependence, for which k_1 = k_2 = k_3≡ k. We apply a similar shorthand, F^ sq_ij≡ F^ sq (k_i,k_j), for triangle dependences in the squeezed limit, where k_1 = k_2 ≡ k and k_3 = Δ k, with Δ k/k → 0.

§ ASSUMPTIONS & APPROXIMATIONS In the present work the considered departures from ΛCDM are described by two free functions, the first being a modification of the source term in the Poisson equation (usually dubbed Y(z;k)), and the second being a modification of the gravitational lensing potential (often called Σ(z;k)). Such deviations occur for example in modified theories of gravity or coupled DE models (see e.g. <cit.>). Regarding the assumptions on the underlying geometry and matter content of the Universe, we impose that: (a) The background geometry of the Universe is well described by a Friedmann–Lemaître–Robertson–Walker metric; its evolution is parametrized by the cosmic scale factor a(t). The Hubble parameter H= ȧ /a is governed by the Friedmann equation H^2 - H_0^2 Ω_ k0 a^-2 = 1/3( ρ̅_ m + ρ̅_ x) (setting 8π G =1), where H_0 and Ω_ k0 are, respectively, the present-day values of the Hubble parameter and the curvature parameter, ρ̅_ m∼ a^-3 is the background density of matter, and ρ̅_ x is the combined background density of an unspecified modification of gravity. Background observations can generally measure H(z) up to a multiplicative constant (see e.g. <cit.>), and thus we assume in the following that the dimensionless Hubble function E(z) ≡ H(z)/H_0 is an observable. Combining measurements of the luminosity or angular-diameter distance with H(z), we can furthermore determine Ω_ k0 <cit.>. By contrast, it is impossible to measure Ω_ m0 without invoking an explicit parametrization for ρ̅_ x, as the problem is perfectly degenerate <cit.>. (b) The matter content (i.e., dark matter and baryons) is exposed to an identical gravitational force and moves on geodesics described by a given metric theory. This assumption in particular restricts our approach to sufficiently large scales where baryonic feedback is negligible. For example, in a ΛCDM universe, baryons affect the velocity divergence power spectrum of dark matter by less than 1% for wavenumbers smaller than 0.5 h/Mpc <cit.>. Probing very large scales requires a careful assessment of so-called secondary effects that naturally arise in metric theories of gravity.
For example, in ΛCDM, which is based on general relativity, such secondary effects are relativistic corrections that appear at the matter level <cit.>, through radiation <cit.>, or through light-cone effects <cit.>. It is beyond the scope of this paper to incorporate such effects, and we thus restrict our analysis to the subhorizon regime to minimize their contamination. Relativistic corrections in the initial conditions, however, could generate nonzero intrinsic bispectra, which we do allow in our analysis; (c) We are interested in the cosmological evolution of a single-stream matter fluid within the quasilinear regime, where perturbation theory should give meaningful results. We only take the leading nonlinearities into account and consequently ignore any loop contributions. This implies that we only need to go up to second order in the fluid variables, i.e., δ_ m = δ_ m1 + δ_ m2 , θ_ m=θ_ m1 + θ_ m2 , where δ_ m≡ (ρ_ m - ρ̅_ m)/ ρ̅_ m is the matter density contrast and θ_ m≡∇·v_ m the divergence of the rescaled peculiar velocity v_ m≡v_ m, pec/(a H). In the following section we provide evolution equations for these fluid variables, although explicit evolution equations are only required in Sec. <ref>, where we derive a novel constraint equation. The theoretical tools used in this paper are based on standard perturbation theory (SPT) <cit.>. Several theoretical models exist in the literature that could push the validity of the perturbative description to more nonlinear scales. We defer the discussion of such avenues to Sec. <ref>; (d) We apply the so-called plane-parallel limit when projecting fluid variables from real-space coordinates x to redshift-space coordinates s <cit.>, s = x + ∇^-2∇_z θ , where the inverse Laplacian is with respect to the real-space coordinates. For the bias relation between the matter and the galaxies, we allow the bias function to be scale and time dependent. This means that the galaxy density can be written as δ_ g = δ_ g1 + δ_ g2, which reads in Fourier space <cit.> δ_ g1(z;k)= b_1 δ_ m1 , δ_ g2(z;k)= b_1 δ_ m2 + 1/2∫d^3 k'/(2π)^3 b_2(k', k-k') δ_ m1(k') δ_ m1(k-k') , where, again, the unknown bias functions b_1 and b_2 are generally scale and time dependent. By virtue of assumption (b), we assume that there is no bias between the matter and galaxy velocities, i.e., v_ g = v_ m; (e) Although there is currently no sign of any significant nonzero primordial non-Gaussianity (PNG), we allow in the present analysis for the most general deviation from Gaussian initial conditions, that is, we do not invoke any parametrization of PNG. Rather we assume that there exists a possibly nonzero intrinsic matter bispectrum, ⟨δ_ m1δ_ m1δ_ m1⟩_ c∼B_ m111 (z;k_1, k_2, k_3), which is related in some arbitrary way to the initial curvature perturbation on superhorizon scales. For example, B_ m111 could originate from PNG of the local type, in which case one would expect large contributions to the bispectrum in the squeezed limit. We neglect PNG contributions to the initial trispectrum and higher-order correlators, as they usually involve loop corrections; see assumption (c).

§ EQUATIONS IN REAL & REDSHIFT SPACE Let us apply the above assumptions and approximations, and set up the respective equations, first in real space and then in redshift space. Although not required for the bulk part of this paper (except Sec.
<ref>), let us assume the following explicit fluid equations for matter: δ_ m' + ∇·[( 1 + δ_ m) v_ m] = 0, v_ m' + ( v_ m·∇) v_ m = -( 2+H'/H) v_ m - ∇Ψ . Here δ_ m≡ (ρ_ m - ρ̅_ m)/ ρ̅_ m is the matter density contrast and v_ m≡v_ m,pec/(aH) the rescaled peculiar velocity of matter, H is the Hubble parameter, a prime denotes a partial derivative with respect to the time variable N=ln a, and a is the cosmic scale factor, itself determined by the Friedmann equation (<ref>). We make use of the modified Poisson equation, ∇^2Ψ(x)=3/2Ω_ m∫d^3 y Y(x-y) δ_ m(y) , where Y is a scale- and time-dependent clustering function. The function Y is by definition equal to unity in a universe that contains only a cosmological constant (Λ) and a matter component; by contrast, for a realistic ΛCDM universe, where generally not just matter but also other fluid components are present (e.g., massive neutrinos), Y differs (mildly) from unity, reflecting the fact that matter couples gravitationally to the other fluid components. In addition, Y≠ 1 can be established by a wealth of modified gravity scenarios (see <cit.> and references therein). In the following, we make no model-dependent assumptions about the form of Y and thus leave it as a free function.

To solve Eqs. (<ref>)–(<ref>), we assume that the fluid motion is irrotational, so that the velocity can be fully described by its divergence, θ_ m = ∇·v_ m. Perturbing the density and velocity according to (<ref>), we obtain to first order in Fourier space δ_ m1”+(2+H'/H) δ_ m1' - 3/2Ω_ mY δ_ m1 = 0. The growing-mode solution for the density can be formally written as δ_ m1(z;k)= D(z;k)δ_0(z_0;k), where D is the linear growth function, normalized to unity today, and δ_0 is the present matter density. We note that D is not only time dependent but in general also scale dependent. Using the solution for the density, one immediately gets for the first-order velocity θ_ m1 = - δ_ m1'= - f_1 δ_ m1, where the linear structure growth rate f_1 is defined by f_1≡ D'/D. Second-order solutions can be formally written as δ_ m2(z; k) = ∫d^3k_12/(2π)^3δ_ D^(3)(k-k_12) 𝔉_2(z; k_1, k_2 ) δ_ m1(z;k_1) δ_ m1(z; k_2) , θ_ m2(z; k)= ∫d^3k_12/(2π)^3δ_ D^(3)(k-k_12) 𝔊_2(z; k_1, k_2 ) δ_ m1(z;k_1) δ_ m1(z; k_2) , where δ_ D^(3) is the Dirac delta distribution, 𝔉_2 and 𝔊_2 are perturbation kernels with symmetric k-dependence in their arguments, and the matter density and velocity only depend on the magnitude of the wave vector, k ≡ |k|, due to statistical isotropy. For an Einstein–de Sitter (EdS) universe the above kernels become time independent and read, in our sign convention, 𝔉_2^ EdS = 5/7 + k_1·k_2/2k_1k_2[k_1/k_2+ k_2/k_1]+ 2/7( k_1·k_2/k_1k_2)^2 , 𝔊_2^ EdS =- 3/7 - k_1·k_2/2k_1k_2[ k_1/k_2+ k_2/k_1]- 4/7( k_1·k_2/k_1k_2)^2. For an EdS universe these kernels are well known in the literature <cit.>, and are usually labelled F_2 and G_2, respectively. For a standard ΛCDM model or in modified gravity, however, these kernels generally do depend on time and could have a more complicated k-dependence. Note that to construct the observables in the following, we do not require explicit solutions for 𝔉_2 and 𝔊_2; we only assume that solutions for δ_ m and θ_ m can be written as a power series in the linear density. Such perturbative solutions should describe the physics sufficiently well, provided that nonlinear corrections are small with respect to the linear contributions, and that on the considered scales vorticities and the effects of velocity dispersion can be neglected (see Sec. <ref> for details).
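To make the role of the free clustering function Y explicit, the following is a minimal numerical sketch, assuming an illustrative ΛCDM-like background and a made-up scale-dependent Y(N;k), of integrating the linear growth equation above and reading off f_1 = D'/D at z = 0.

# Sketch: integrate D'' + (2 + H'/H) D' - (3/2) Omega_m Y D = 0 in N = ln(a);
# the background and Y(N;k) below are illustrative placeholders.
import numpy as np
from scipy.integrate import solve_ivp

Om0 = 0.3                                       # illustrative

def Om(N):                                      # Omega_m(N) on the toy background
    a3 = np.exp(-3.0 * N)
    return Om0 * a3 / (Om0 * a3 + 1.0 - Om0)

def Y(N, k):                                    # toy scale-dependent modification
    return 1.0 + 0.05 / (1.0 + (0.1 / k)**2)

def growth_rate(k, Ni=-6.0):
    def rhs(N, y):
        D, Dp = y
        hub = -1.5 * Om(N)                      # H'/H for this background
        return [Dp, -(2.0 + hub) * Dp + 1.5 * Om(N) * Y(N, k) * D]
    # deep matter-era initial conditions: D ~ a, hence D' = D
    sol = solve_ivp(rhs, (Ni, 0.0), [np.exp(Ni), np.exp(Ni)], rtol=1e-8)
    D, Dp = sol.y[:, -1]
    return Dp / D                               # f1(k, z=0)

for k in (0.01, 0.1, 1.0):                      # wavenumbers in h/Mpc, illustrative
    print(k, round(growth_rate(k), 4))

Any other background or clustering function can be swapped in; only the structure of the growth equation is fixed by the assumptions above.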
Furthermore, since in this paper we consider the modification Y of the source term in the Poisson equation as a free function, also 𝔉_2 and 𝔊_2 are effectively free functions, since they depend on Y. Up to this point we have dealt with matter perturbations in real space, but what we observe are galaxies, measured in redshift space. We deal with the galaxy description as outlined in Sec. <ref>, see in particular Eqs. (<ref>)–(<ref>), where we employ a nonlocal bias description between the matter density δ_ m and the galaxy density δ_ g, whereas according to assumption (d) we assume v_ m = v_ g≡v. The next step is to incorporate the effects of redshift-space distortions, resulting from the fact that the observed comoving positions of galaxies s are modified by their peculiar motion according to s=x+v_z(x)ẑ in the plane-parallel limit, where ẑ is the unit vector along the line of sight and v_z is the projection of the peculiar velocity along the z axis. This leads to the following relation between the galaxy density in redshift space, δ_ g^ s(z;k), and the one in real space: δ_ g^ s(z;k)= ∫d^3x e^-ik·x[ 1 + δ_ g(z;x) ] e^-i k_z v_z . Taylor expanding the fluid variables and the exponential in the last expression, we obtain δ_ g1^ s(z;k)= S_1(z;k_1) δ_ m1(z;k_1), δ_ g2^ s(z;k)= ∫d^3k_12/(2π)^3δ_ D^(3)(k - k_12) S_2(z;k_1,k_2) δ_ m1(z;k_1)δ_ m1(z;k_2) , with the kernels S_1 = b_1+f_1μ_1^2 , S_2 =-μ_12^2 𝔊_2 + b_1(k_12) 𝔉_2+1/2μ_12k_12[μ_1/k_1b_1(k_2)f_1(k_1)+μ_2/k_2b_1(k_1)f_1(k_2)] +(μ_12k_12)^2/2μ_1μ_2/k_1k_2f_1(k_1)f_1(k_2)+1/2b_2(k_1,k_2), where μ=k·ẑ/k is the cosine of the angle formed by the direction of observation ẑ and the wave vector k, and μ_i=k_i·ẑ/k_i. The kernel S_1 is widely known in the literature, in particular also in the frameworks of nonlocal bias and DE models. To our knowledge, the second-order kernel S_2 has not been reported earlier in the context of DE models. In the framework of nonlocal bias a very similar kernel, however valid only for a ΛCDM universe, has been derived in Ref. <cit.>, and in the respective limit our S_2 agrees with the one of <cit.>.

§ POWER SPECTRUM IN REDSHIFT SPACE To understand our methodology in the following sections, it is instructive to first investigate the linear observables that can be constructed from the galaxy power spectrum <cit.>. The galaxy power spectrum in redshift space is defined as ⟨δ_ g^ s(k_1)δ_ g^ s(k_2) ⟩_ c = (2π)^3δ_ D^(3)(k_12) P_ g^ s(z; μ_1, k_1), where we note that P_ g^ s depends not only on the magnitude of the wave vector but also on its cosine with respect to the direction of observation, since it acquires an angular dependence due to the redshift-space distortions. The matter power spectrum P_ m(k), by contrast, depends only on the modulus k, due to the assumption of statistical isotropy. As for the perturbations of the field variables, we can formulate the power spectrum as a power series within perturbation theory; e.g., for the matter power spectrum we have, to leading order, P_ m = P_ m11∼δ_ m1^2. In the linear regime and for scales much smaller than the characteristic survey size, one can write for the galaxy power spectrum P_ g^ s(z; k,μ)= ( b_1 + f_1 μ^2)^2 P_ m11 , where we remind the reader that the functions f_1 and b_1 generally depend on scale and time. This expression can be written as a polynomial in μ: P_ g^ s(z;k,μ) = P_ m11(z;k) ∑_i P_i μ^i , with the only nonvanishing coefficients P_0=b_1^2 , P_2=2b_1f_1 , P_4=f_1^2 . Observations can be made in principle at all values of μ.
This means that one can measure individually each term in the μ expansion. Taking ratios of the various terms in Eq. (<ref>) one gets rid of P_ m11 (and the unknown normalization σ_8),whose shape depends in general on initial conditions. One obtains, for example, the quantity P_1 = f_1/b_1from (2P_4)/P_2. The same procedure, extended to the bispectrum, is at the core of the method presented below. In addition to galaxy spectra, we will take into account also shear lensing spectra and cross-correlation spectra of lensing and galaxy clustering, in order to identify which quantities can be measured directly from observations without assumptions on the shape of theFurther linear observables are reviewed in Sec. <ref>. § BISPECTRUM IN REDSHIFT SPACEWe now continue with the next-to-leading order statistical estimator. Thegalaxy bispectrum in redshift space is defined as⟨δ_ g^ s(k_1)δ_ g^ s(k_2) δ_ g^ s(k_3) ⟩_ c = (2π)^3δ_ D^(3)(k_123) B_ g(k_1,k_2,k_3).The density bispectrum is nonzero only whennon-Gaussianities in the density are present.This is especiallythe case in the quasilinear regime of structure formation,which encompasses also the linear regime. As mentioned in Sec. <ref>, we allow in the present analysis ofnonlinearities arising from the initial condition (PNG), and ofthe weakly nonlinear evolution of matter.For the galaxy bispectrum, we get to the leading orderB_ g =2S_2(k_1,k_2)S_1(k_1)S_1(k_2)P_ m11(k_1)P_ m11(k_2) +two perms+ S_1(k_1)S_1(k_2) S_1(k_3) B_ m111 ,where ⟨δ_ m1(k_1) δ_ m1(k_2) δ_ m1(k_3) ⟩_ c = (2π)^3 δ_ D^(3) (k_123) B_ m111is the said non-Gaussian component arising from primordial/unknown physics.The galaxy bispectrum in redshift spaceis a function of five variables. The shape of the triangle is definedby three variables:the length of two sides, i.e., the magnitude oftwo wave vectors, k_1 and k_2, and the angle between them,cosθ_12= k_1·k_2/(k_1k_2). The two remainingvariables characterize the orientation of the triangle with respect tothe line of sight: we take them to be the polar angleof k_1, ω= arccosμ_1, and the azimuthal angle ϕaround k_1.All the angles between the wave vectors and the line of sight can be writtenin terms of μ_1 and ϕ <cit.>,μ_1=k_1·ẑ/k_1 , μ_2 = μ_1 cosθ_12 - √(1- μ_1^2)sinθ_12cosϕ ,μ_3= - k_1/k_3μ_1 -k_2/k_3μ_2.We now determine the explicit expressions for the galaxy bispectrumfor two fixed triangle configurations, namely for the equilateral and the squeezed type. §.§ The equilateral bispectrum In the equilateral configuration all the wave vectors have the samemagnitude which we take to bek_1 = k_2 = k_3 ≡ k, from which it followsthat k_i ·k_j/(k_i k_j) =-1/2, for i ≠ j.Furthermore, the relation (<ref>) between the three μ_i's simplifies toμ_2=- μ_1/2 -√(3-3μ_1^2)cosϕ/2 , μ_3=-μ_1-μ_2 ,which we use to replace all μ_2's and μ_3's in the general expression forthe bispectrum (<ref>) in terms of μ_1.We are thus left with a bispectrum that depends only on two angles,namely on μ_1 and on the azimuthal angle ϕ. We integrate out theazimuthal angle because of statistical isotropy around the redshift axis.Thus, one finally arrives at the equilateralbispectrum which is given in terms of a polynomial in μ_1, B_ g^ eq= P_ m11^2∑_iB_i^ eq μ_1^i ,with nonvanishing coefficients B_0^ eq, B_2^ eq, B_4^ eq, B_6^ eq and B_8^ eq. 
In the main text we only need the last twocoefficientsB_6^ eq =-177/1024f_1^2( f_1^2 + 16 𝔊_2^ eq -8/3Q_ m111^ eq f_1 ),B_8^ eq = -87/1024f_1^4,where 𝔊_2^ eq is the second-order velocity kernel in the equilateral configuration, and we have defined the reduced intrinsic bispectrumQ_ m111^ eq≡ B_ m111^ eq /P_ m11^2.The complete list of bispectrum coefficients is given in Appendix <ref>. §.§ The squeezed bispectrum The squeezed bispectrum is a specific limit that correlates density perturbationson essentially two different scales to each other. In that limit, two densityperturbations which are usually taken to be well inside the horizon, arecorrelated with another perturbation close to the horizon (or beyond). The corresponding triangle configuration in that limit is such thatone wave vector, Δ k, is much smaller than the other two.We choose k_1=k_2 = k, and k_3=Δ k. We leave Δ k as a free parameter but note that the squeezed approximation becomes more accurate when Δ k/k → 0.In the present paper we assume that the correlation length k is in the linear or in the quasilinear regime, where second-order perturbation theoryis a good approximation of the underlying physics, whereas Δ k is onsufficiently large scales where perturbations should mostly follow the overallHubble flow and are otherwise well described by linear perturbation theory.For the squeezed bispectrum,we thus assume the existence of an intermediate regime where we canuse the linear observablesas linear operators on functions which depend on the squezzed bispectrum triangleside Δ k (see the following).From the μ_i relations (<ref>), we get μ_2 ≃ -μ_1for all values of the azimuthal angle ϕ, and the latter drops out.Thus, we can write the squeezed bispectrum as a polynomial of two cosines, B_ g^ sq=∑_i,jB_ij^ sqμ_1^iμ_Δ k^j ,with the only nonvanishing coefficients B_00^ sq = a_1b_1 b_1,Δ k + b_2,12^ sq b_1^2 P_ m11^2+ B_ m111^ sq b_1^2 b_1, Δ k ,B̅_02^ sq =a_1b_1 b_1,Δ k + B_ m111^ sq b_1^2 b_1,Δ k , B_20^ sq =a_1f_1b_1, Δ k + a_2 b_1 b_1, Δ k + 2 b_2,12^ sq b_1 f_1 P_ m11^2+ 2 B_ m111^ sq f_1 b_1 b_1,Δ k ,B̅_22^ sq =a_1 f_1 b_1, Δ k + a_2 b_1 b_1, Δ k+ 2 B_ m111^ sqf_1 b_1 b_1,Δ k , B_40^ sq =a_2 f_1 b_1, Δ k +b_2,12^ sq f_1^2 P_ m11^2 + B_ m111^ sq f_1^2 b_1,Δ k ,B̅_42^ sq =a_2 f_1 b_1, Δ k + B_ m111^ sq f_1^2 b_1,Δ k ,where we have introduced the shorthand notationb_2,12^ sq≡ b_2^ sq(k_1,k_2), and P_ m11,Δ k≡ P_ m11(Δ k), etc., and defined a_1 =( b_2,13^ sq + b_2,23^ sq+ 4 b_1 𝔉_2, eff^ sq) P_ m11P_ m11,Δ k ,a_2 =( 2b_1,Δ kf_1 - 𝔊_2, eff^ sq) P_ m11P_ m11,Δ k .The bar indicates the ratio B̅_02^ sq = P_1,Δ k^-1 B_02^ sq etc., where P_1,Δ k = f_1,Δ k/ b_1,Δ k is the linear observable in the Δ k mode,which is assumed to be in the quasilinear regime.As promised above, P_1,Δ k is thus to be understood as anoperator acting on given functions. By contrast, we do not make use of the operator P_1 ≡ P_1(k)as the k-mode could be in the quasilinear regime where the operator P_1 delivers possibly a poor approximation of the underlying physics. 
We have defined 2𝔉_2, eff^ sq≡𝔉_2,13^ sq + 𝔉_2,23^ sq and 2𝔊_2, eff^ sq≡𝔊_2,13^ sq + 𝔊_2,23^ sqwhich are free of infrared divergences even in the vicinity of Δ k → 0.We note that in deriving the above expressions, we have assumedthat 𝔉_2,12 = 𝔉_2(k,-k) =0, a relation which is trivialto see in an EdS universe but generally holds also in DE models, as we shall provein Appendix <ref>.The galaxy bispectrum coefficients contain mostly too cluttered informationabout unknowns, and this is why we investigate in the following more sourcesof potential observables. Nevertheless, some of the above coefficients willbecome essential when determining our observables. § LENSING AND LENSING-GALAXY CROSS-SPECTRA Weak lensing, together with cross-correlations, provides another important toolin our analysis to gain further knowledge of quasilinear structure formation. To discuss weak lensing we make use of the scalar line element ds^2= -(1+2Ψ)t^2 +a^2(1+2Φ)d x^2 up to second order. We neglect vector and tensor modes as we are usually interested inDE modifications of the scalar type. Secondary vector and tensor modes,even present in standard ΛCDM cosmologies (see e.g. <cit.>), are ignored as well, as theirimpact should be vanishingly small on the scales we consider.Dark energy models usually modify the source term in the Poissonequation (<ref>), and on top of that, modifications in thegravitational slip are expected as well, the latter defined byη = -Φ/Ψ .In ΛCDM we have η→ 1, however, only to first order in perturbationtheory, and when ignoring massive neutrinos and the effects of baryons. As regards to the impact of baryons, as we do limit our analysis to sufficientlylarge scales where baryonic effects should be small (see our assumption (b)),we nevertheless expect that for weakly nonlinear scales, a mild baryonic impact could be incorporated in our framework.In any case, as mentioned above, we leave η as a free function and do not invokeany specific parametrization. Gravitational lensing is unaffected by redshift-space distortions orthe (unknown) bias, and is instead only sensitive to the total matter perturbation,k^2Φ_ lens=k^2 ( Ψ - Φ) = -3/2Σ Ω_ mδ_ m ,where we have defined the modified lensing function Σ=Y(1+η), with Y → 1, however, only in the “simplistic” ΛCDM model (see above) which we do not assume.What we truly observe in a measurement of gravitational lensing is the projectionof the three-dimensional power spectrum and the bispectrum on a two-dimensional sphere integrated along the lineof sight. The integral involves a window function that depends on the surveyspecification and geometry of the background space-time. Assuming a perfectknowledge of the window function one can differentiate the integral relation betweenthe 3D and the 2D spectra and therefore link the unprojected 3D bispectrum to theactual observations.§.§ Lensing bispectrum We define the lensing bispectrum as Ω_ m^3⟨Σ(k_1)δ_ m(k_1) Σ(k_2)δ_ m(k_2) Σ(k_3)δ_ m(k_3)⟩_ c≡(2π)^3δ_ D^(3)(k_123)B_ lens(k_1,k_2,k_3).Since the lensing signal is not sensitive to redshift-space distortions,the lensing bispectrum will not be affected by any projection effects. 
We obtain at the leading order for the equilateral configuration B_ lens^ eq =Ω_ m^3 Σ^3 ( 6𝔉_2^ eq P^2_ m 11 + B_ m111^ eq) , and for the squeezed configuration B_ lens^ sq =Ω_ m^3 Σ^2 Σ_Δ k( 4 𝔉_2,eff^ sq P_ m11 P_ m11, Δ k + B_ m111^ sq).§.§ Lensing-galaxy cross bispectra We also consider two types of cross-correlations between galaxy and lensing signal, the first is the galaxy-galaxy-lensing bispectrum, defined byΩ_ m⟨ δ_ g^ s(k_1) δ_ g^ s(k_2) Σ(k_3)δ(k_3) ⟩_ c≡ (2π)^3δ_ D^(3)(k_123)B^ ggl(k_1,k_2,k_3) ,which is in the equilateral configuration B^ ggl, eq =Ω_ mΣP_ m11^2 ∑_iB^ ggl, eq_iμ_1^i,with nonvanishing coefficientsB^ ggl, eq_0, B^ ggl, eq_2, B^ ggl, eq_4, and B^ ggl, eq_6.All coefficients are reported inAppendix <ref>, in the following we only need the last one, i.e.,B^ ggl, eq_6= - 59/128f_1^3.We have also derived the squeezed limit of that cross-bispectra, together with the other cross-bispectrum, the lensing-lensing-galaxy bispectrum, defined byΩ_ m^2⟨ Σ(k_1)δ(k_1) Σ(k_2)δ(k_2) δ_ g^ s(k_3) ⟩_ c≡ (2π)^3δ_ D^(3)(k_123)B^ llg(k_1,k_2,k_3),and we report all coefficients in Appendix <ref>. We note that in deriving the equilateral coefficients for the cross-correlators B^ ggl and B^ llg, we have integrated out the azimuthal angular dependence ϕas explained around Eq. (<ref>). A comment on stochastic biasing models (see e.g. <cit.>) is in order.In that class of phenomenological models, one introduces correlation coefficients, usually dubbed r, that parametrize the disknowledge of the underlying deterministic formation process of biased tracers <cit.>.Since we do not assume any simplified bias model,our nonlocal bias model incorporates the stochasticity between matter and galaxy fields, and thus, we do not need to introduce these correlation coefficients for our cross-correlators. See Sec. II E in Ref. <cit.> for a highly related discussion.§ OBSERVABLESThe cosine-independent coefficients of the various sorts of bispectraare not directly observable since they are proportional to the model-dependent matter power spectrum and to the unknown normalization of the density fluctuation amplitude. However, taking ratios of these coefficients, these unknowns drop out.Taking the time derivative of a coefficient by subsequent division by a coefficient is another useful operation, since unknowns disappear.Thus, this methodology provides access to a wealth of cosmological information in a model-independent way.In the following we briefly summarize the findings of linear observables that can be obtained from the galaxy and lensing power spectrum, then we extend the set of observables into the quasilinear regime, by the use of the above bispectrum coefficients.§.§ Linear observablesThis section summarizes the findings from the literature <cit.>, and we follow in particular the procedure of Ref. <cit.>. There it has been shown thattaking the ratio of the power spectrum coefficients P_2 and 2P_4 (see Eq. (<ref>)), one gets b_1/f_1, whereas taking the time derivative of P_4 divided by P_4 gives f_1+f_1'/f_1. 
Another important linear observable is Ω_ mΣ/f_1, which is obtained by taking the ratio of the lensing power spectrum and P_4.In summary, the linear observables are <cit.>P_1= f_1 / b_1, P_2 = Ω_ m0Σ / f_1,P_3 =f_1+f_1'/f_1.Interestingly, we obtain these (and many more) observables also from the bispectrum coefficients, with the important difference, that the bispectrum coefficients should hold on a wider range of scales, simply because they are obtained by using a better approximation in perturbation theory. The above linear observables are well known in the literature, and we note that P_1 is often denoted with β <cit.>, whereas P_2 is sometimes called E_G <cit.>.§.§ Quasilinear observables It is straightforward to confirm from our bispectrum coefficients the findings of P_1–P_3, but now obtained from a wider range of cosmological scales,B_1 = B_40^ sq - B̅_42^ sq/B_00^ sq - B̅_02^ sq = f_1^2/b_1^2 ,B_2=- 87E^2 Ω_ mΣ B_6^ ggl, eq/472 (1+z)^3 B_8^ eq =Ω_ m0Σ/f_1 ,B_3 = 1/4( B_8^ eq P_ m1^2 )'/B_8^ eq P_ m1^2= f_1 + f_1'/f_1 . The equivalence of these observables with P_1–P_3 can be used to establish a consistency relation in various ways. For example if the actual measurements from both the linear and quasilinear regime yield inconsistent results, there could be some unresolved systematic in the theory or analysis.Note that the above observables are independent of the unknown intrinsic bispectrum contribution B_ m111 (or Q_ m111).In fact, since we do not want to specify the DE model, second-order perturbations arising from the nonlinear matter evolution are indistinguishable from PNG modifications, as it is also evident from the following two observables,B_4 = - 29B_ lens^ eq/2048B_8^ eq P_ m11^2B_3^3 =f_1^-1𝔉_2^ eq + Q_ m111^ eq/6f_1 , B_5 = 29B_6^ eq/944B_8^ eq -1/16= f_1^-2𝔊_2^ eq - Q_ m111^ eq/6f_1 . However, summing up these two observables, we obtain another importantobservable that is independent of Q_ m111, B_6 = B_4 + B_5 = f_1^-1𝔉_2^ eq + f_1^-2𝔊_2^ eq .This observable will become crucial in the following section when we establish a nonlinear model-independent constraint.What knowledge can be gained about the nonlocal bias coefficients? We find model-independent constraints for the following bias ratios,B_7= b_2^ eq/b_1^2 ,B_8= b_2,12^ sq/b_1^2 ,which we shall derive in Appendix <ref>, where we also provide even more observables. What is missing is a similar uncluttered observable involving b_2,13^ sq or b_2,23^ sq, which, however, we have been unable to find. § MODEL-INDEPENDENT CONSTRAINTS §.§ Linear regimeObserve that the PDE for the linear matter density, Eq. (<ref>), can be rewritten in terms of a PDE for f_1,f_1'+f_1^2+f_1(2+ E'/ E) =3/2Ω_ mY ,where we remind the reader that E = H/H_0, and we haveΩ_ m =Ω_ m0(1+z)^3/ E^2 .In Ref. <cit.> it has been shown, using the set of linear observables (<ref>), that the above equation turns into a relation for the anisotropic stress η,3P_2 (1+z)^3/2E^2 (P_3 + 2 +E'/ E) -1 = η .This relation implies a model-independent constraint of η in terms of linear observables, a powerful result that can be used e.g. to rule out entire classes of DE models. §.§ Quasilinear regime Here we seek a similar relation as above, now obtained, however, from our novel quasilinear observables. To achieve this, we use the linear result θ_ m1 = - f_1 δ_ m1 and the fully general ansatz (cf. Eqs. (<ref>); here suppressing the integrals andDirac deltas because of notational simplicity)δ_ m2 = 𝔉_2 δ_ m1δ_ m1 , θ_ m2 = 𝔊_2 δ_ m1δ_ m1in Eqs. 
(<ref>)–(<ref>) together with the modified Poisson equation (<ref>). Taking the divergence of Eq. (<ref>), expanding Eqs. (<ref>)–(<ref>) in perturbation theoryand Fourier transforming the resulting expressions, these equations become, respectively, at second order{𝔉_2' + 𝔉_2 [ f_1(k_1) + f_1(k_2) ] }δ_ m1δ_ m1 ={1/2k_1 ·k_2/k_1 k_2[ f_1(k_1)k_2/k_1 + f_1(k_2)k_1/k_2] - 𝔊_2 + 1/2 f_1(k_1)+ 1/2 f_1(k_2) }δ_ m1δ_ m1 , {𝔊_2' + 𝔊_2 [ f_1(k_1) + f_1(k_2) ] }δ_ m1δ_ m1 = { - ( 2+E'/ E) 𝔊_2 - f_1(k_1) f_1(k_2) ( k_1 ·k_2/k_1 k_2)^2 - 1/2 f_1(k_1) f_1(k_2) k_1 ·k_2/k_1 k_2[ k_1/k_2 + k_2/k_1] - 3/2Ω_ m Y 𝔉_2 }δ_ m1δ_ m1 .These relations must also hold for specific configurations and without loopintegrals (see App. <ref> for a rigorous proof). For example, in the equilateral case we get the two relations𝔉_2^ eq' + 2 f_1 𝔉_2^ eq = f_1/2 - 𝔊_2^ eq ,𝔊_2^ eq' + 2 f_1 𝔊_2^ eq =- ( 2 +E'/ E) 𝔊_2^ eq + f_1^2/4 - 3/2Ω_ mY 𝔉_2^ eq . Now, making use of these equations, and the quantityf_1^2B_6 = f_1 𝔉_2^ eq + 𝔊_2^ eq with its time derivative,𝔊_2^ eq' = (f_1^2B_6 )' - f_1' 𝔉_2^ eq - f_1 𝔉_2^ eq', we obtain a model-independent realization of f_1. For this we first use Eq. (<ref>) to get an expression for 𝔉_2^ eq', and then plug this into the expression for 𝔊_2^ eq' in terms of B_6. We get𝔊_2^ eq' =(f_1^2B_6 )' - f_1' 𝔉_2^ eq- f_1^2/2 + f_1^3B_6 + f_1^2 𝔉_2^ eq .Plugging this in (<ref>) we finally get after a little algebra3/4 B_6 -B_6'/ B_6- 2B_3 -( 2 +E'/ E) = f_1,where we have used Eqs. (<ref>) and (<ref>). This is our main result. We stress that the lhs is obtained from model-independent observables, and thus this equation delivers a model-independent measurement of f_1. Furthermore we note that Eq. (<ref>) is independent of the modified Poisson source function Y, as the latter drops out during the derivation of (<ref>).Having obtained f_1, we get from the quasilinear observables (<ref>) and (<ref>) the bias parameters b_1, b_2^ eq and b_2,12^ sq as well as the quantity Ω_ m0Σ.If we furthermore use the linear relation (<ref>) that gives η, we also get Ω_ m0 Y, by virtue of Σ = Y(1+η), i.e., Ω_ m0 Y=2 f_1B_2E^2 ( P_3+2+ E'/ E)/3 P_2 (1+z)^3 .These are our final results, and we remind the reader that they are valid under assumptions (a)–(e), see Sec. <ref>. § CHALLENGES OF PERTURBATION THEORYThe linear and quasilinear observables, together with the constraint equationshave been obtainedwithin the framework of SPT, the latter being based on a single-stream fluid description which breaks down when particle trajectories begin to intersect. At that instant, the fluid enters the multi-stream regime and velocities become multi-valued, which evidently excites higher-order kinetic moments of the Vlasov hierarchy —such as the velocity dispersion tensor (see e.g. <cit.>).Also, even if the fluid was initially curlfree, vorticities are generated in the multi-stream regime. Both the presence of a nonvanishing velocity dispersion and vorticity couldrestrict the validity of parts of the above calculation to sufficiently large scales, although it is expected that both effects hamper the analysisdeep in the nonlinear regime the most.Let us first elucidate the consequences of the presence of vorticity generation inthe multi-stream regime, that we have neglected in the present paper.Nonvanishing vorticity implies that the velocity cannot be described by just its divergence θ_ m. As a result, the velocity power spectrum is a superposition of two power spectra, one for its divergence and the other for its noncurlfree part. 
Since our observables make use only of the divergence part of the total velocity power spectrum, one could question whether the estimators could bebiased in the presence of vorticity. The effect of vorticity on the total velocitypower spectrum has been investigated by a suite of cosmological simulations inRef. <cit.>.There it has been shown that at late times (z=0),the amplitude of the vorticity power spectrum is by a factor of about 250 smaller compared to the one from the divergence part for scales larger than 0.4h/Mpc (see their Fig. 3;notice, however, the residual dependence on the mass resolution for the extraction of the vorticity power spectrum). Thus, for sufficiently large scales only little power gets transferred to the curl part of the velocity, and vorticity can be safely neglected.The next issue we discussis the effect of velocity dispersion in the onset of multi-streaming.Incorporating velocity dispersion is generally a small-scale problem that must be modelled deep in the multi-stream regime, but in several numerical studies it has been shown that redshift matter polyspectra could be affected on mildly nonlinear or even linear scales (e.g., <cit.>).Also, not only the matter density but also the matter velocity divergencereceives corrections induced through velocity dispersion,and the respective feedbacks have been assessed in Ref. <cit.>. There the authors report that at late times 1% corrections ariseon the velocity divergence matter power spectrum at scales smaller than 0.1h/Mpc.Estimating the corrections induced through velocity dispersionis, however, a difficult task, and in Ref. <cit.> the authorshave applied only a linearization-based estimate of the impact of velocitydispersion. Furthermore, it remains unclear whether the measured velocity dispersion in these simulations is physical or remnants of finiteresolution effects <cit.>. Accurate theoretical modelling of the effect of velocity dispersion on redshift polyspectrais still an open problem, although considerable progress has been made in the past years. Advanced models make use for example of resummation schemes that resum the infinite SPT series in Lagrangian space <cit.>, or for example the distribution function approach <cit.> that uses an extended version of perturbation theory to capture velocity dispersion effects more accurately than SPT. The simplest models, by contrast, are motivated by phenomenological considerations and modify the redshift galaxy power- and bispectrum by hand (e.g., <cit.>). All the models so far in the literature essentially introduce a suppression factor in the power- and bispectrum. These suppression factors have in common that all of them affect mostly the overall shape of the polyspectra, whereas leaving other features such as the “wiggle information” (i.e., baryonic acoustic oscillations) almost unaltered (see e.g., Fig. 2 of Ref. <cit.>).Thus, when restricting to sufficiently large scales, we expect that our observables are mostly unaffected by velocity dispersion, since the shape information cancels out when taking the ratios of different μ coefficients of the bispectra.A significant nonzero velocity dispersion, however,would alter the momentum conservation of matter, i.e., the velocity dispersion tensorwould explicitly appear in Eq. 
(<ref>) and consequently also in (<ref>).Fortunately, our observables are measured from galaxy samples and not from the dark matter distribution directly,and it is known that velocity dispersion effects are generally smallerfor galaxies than for matter, especially if a sample of central galaxies can be selected <cit.>. Nonetheless we believe that the assumption of small velocity dispersion in the present paper is the most stringent one, i.e., the one which limits the validity of the present approach to sufficiently large scales. We thus consider further theoretical investigations in this direction as an important task, but beyond the scope of the present study. § CONCLUSIONS We have shown that, without imposing any DE parametrization, cosmological observationscan measure only(1) Ω_ k0 and E = H/H_0 at the background level;(2) the combinations P_1 =f_1/b_1, P_2=Ω_ m0Σ/f_1 and P_3 =f_1+f_1'/f_1 at the linear level; and(3) the novel observables B_1 -B_10 and C_1- C_4 thatare applicable in the quasilinear regime. (A concise list of these nonlinearobservables is given in Appendix <ref>.) The observables P_1 -P_3 and B_1 -B_3 are formallyidentical, with the difference that the former are obtained from a strictly linear analysis, whereas the latter includes the leading nonlinearities. However, the quasilinear observables also apply to the linear scales and thus, our quasilinear observables can probe alarger range of scales than could be done with a strictly linear analysis. Furthermore, applying both the linear and quasilinear observablesto linear scales only, the respective measurements of the observables must deliver identical results.From this one could perform consistency tests that ruleout entire classes of DE models.Many unknowns remain unknowns, especially Ω_ m0, the DE densityparameter Ω_ x, and we are left with a degeneracy between non-Gaussianitiesin the initial conditions (arising from PNG) and non-Gaussianitiesfrom the matter evolution. From our nonlinear observables, however, we can derive a model-independent constraintequation given in Eq. (<ref>). This relation should hold for a wide rangeof DE models, and, if verified by cosmological observations, can be used to obtain amodel-independent measure of f_1. That in turn, in combination with our observables, enables us to reconstruct the bias parameters b_1 and b_2, and the quantities Ω_ m0Y and Ω_ m0Σ. Lastly, having f_1 one gets the normalization-dependent quantityR= D f_1 σ_8 δ_ m0 <cit.>, from which onegets σ_8^2 P_ m1 as well.All the practical limitations of a real measurement, that we neglect here, areof course the most challenging problem to handle(see e.g., <cit.>). This paper, thus, should be understood as a potential starting point for a long journey with many hurdles ahead, with the final goal to reconstruct or rejectentire classes of DE cosmologies.§ ACKNOWLEDGEMENTS C.R. thanks V. Desjacques for useful discussions. The work of C.R. and L.A. is supported by the DFG through the Transregional Research Center TRR33 “The Dark Universe.” C.R. acknowledges the support of the individual Grant No. RA 2523/2-1 from the DFG. E.V. thanks the INFN-INDARK initiative Grant No. IS PD51 for financial support. § ALL BISPECTRUM COEFFICIENTS In the main text we have mentioned only bispectrum coefficients that are relevant in deriving our main results, while skipping other coefficients. 
Here we give a complete list of bispectrum coefficients.The galaxy bispectrum in the equilateral configuration readsB_ g^ eq= P_ m11^2∑_iB_i^ eq μ_1^i ,with the nonvanishing coefficientsB_0^ eq = 27/128 f_1^2 b_2^ eq + 3/4f_1 b_1^3 + 6 b_1^3 𝔉_2^ eq + 3 b_1^2 b_2^ eq+ 27/64f_1^2 b_1^2+ 3 f_1 b_1^2𝔉_2^ eq-3/2b_1^2𝔊_2^ eq+ 3/2 f_1 b_1 b_2^ eq+27/64 f_1^2 b_1 𝔉_2^ eq-27/32 f_1 b_1 𝔊_2^ eq+ Q_ m111^ eq( b_1^3 + 3/4 b_1^2 f_1 + 27b_1f_1^2/128) ,B_2^ eq = 3/4 f_1 b_1^3+ 9/32 f_1^2 b_1^2 +3 f_1 b_1^2𝔉_2^ eq - 3/2 b_1^2 𝔊_2^ eq + 3/2 f_1 b_1b_2^ eq+ 9/32 f_1^2 b_1 𝔉_2^ eq -9/16 f_1 b_1 𝔊_2^ eq+ 9/64 f_1^2 b_2^ eq - 135/1024 f_1^4 - 81/64 f_1^2𝔊_2^ eq + 3f_1/128 Q_ m111^ eq( 32 b_1^2 + 6 b_1 f_1 + 9 f_1^2 ) , B_4^ eq =27/128 f_1^2 b_2^ eq +351/1024 f_1^4 + 27/64 f_1^2 b_1 𝔉_2^ eq -27/32 f_1 b_1 𝔊_2^ eq+ 27/64 f_1^2 b_1^2 +117/32f_1^2𝔊_2^ eq+ 3f_1^2/128 Q_ m111^ eq( 9 b_1 - 26 f_1 ) ,B_6^ eq =-177/1024f_1^2( f_1^2 + 16 𝔊_2^ eq -8/3Q_ m111^ eq f_1 ), B_8^ eq = -87/1024f_1^4,whereQ_ m111^ eq≡ B_ m111^ eq /P_ m11^2. For the squeezed galaxy bispectrum we haveB_ g^ sq=∑_i,jB_ij^ sqμ_1^iμ_Δ k^j ,with the only nonvanishing coefficients B_00^ sq= a_1b_1 b_1,Δ k + b_2,12^ sq b_1^2 P_ m11^2+ B_ m111^ sq b_1^2 b_1, Δ k ,B̅_02^ sq=a_1b_1 b_1,Δ k + B_ m111^ sq b_1^2 b_1,Δ k , B_20^ sq=a_1f_1b_1, Δ k + a_2 b_1 b_1, Δ k +2 b_2,12^ sq b_1 f_1 P_ m11^2+ 2 B_ m111^ sq f_1 b_1 b_1,Δ k ,B̅_22^ sq=a_1 f_1 b_1, Δ k + a_2 b_1 b_1, Δ k + 2 B_ m111^ sqf_1 b_1 b_1,Δ k , B_40^ sq =a_2 f_1 b_1, Δ k +b_2,12^ sq f_1^2 P_ m11^2 + B_ m111^ sq f_1^2 b_1,Δ k ,B̅_42^ sq =a_2 f_1 b_1, Δ k + B_ m111^ sq f_1^2 b_1,Δ k ,where a_1=(b_2,13^ sq+b_2,23^ sq+4b_1𝔉_2, eff^ sq)P_ m11P_ m11,Δ k ,a_2=(4f_2𝔉_2, eff^ sq-2[f_1+f_1,Δ k/2]+2b_1,Δ kf_1) × P_ m11P_ m11,Δ k .For the pure lensing bispectra, we get in the equilateral configuration B_ lens^ eq = Ω_ m^3 Σ^3 ( 6𝔉_2^ eq P^2_ m 11 + B_ m111^ eq) , and in the squeezed configurationB_ lens^ sq = Ω_ m^3 Σ^2 Σ_Δ k( 4 𝔉_2,eff^ sq P_ m11 P_ m11, Δ k + B_ m111^ sq) ,which, evidently, have no angular dependence. Next is the cross-bispectrum 'galaxy-galaxy-lensing' which is in the equilateral configuration B^ ggl, eq =Ω_ mΣP_ m11^2 ∑_iB^ ggl, eq_iμ_1^i,with B^ ggl, eq_0 =3/8 f_1 b_2^ eq + 3/8 f_1 b_1^2 + 6 b_1^2 𝔉_2^ eq + 2 b_1 b_2^ eq + 3/2 f_1 b_1 𝔉_2^ eq - 3/4 b_1 𝔊_2^ eq+ 1/8 Q_ m111^ eq( 8 b_1^2 + 3 b_1 f_1 ) ,B^ ggl, eq_2 =7/8 f_1 b_1^2 + 7/8f_1b_2^ eq - 27/128 f_1^3 + 3/4 f_1^2 𝔉_2^ eq + 9/16 f_1^2 b_1+ 7/2 f_1 b_1 𝔉_2^ eq- 7/4 b_1 𝔊_2^ eq - 3/2 f_1 𝔊_2^ eq+ 1/8 Q_ m111^ eq( 7 b_1 f_1 + 3 f_1^2 ) ,B^ ggl, eq_4 =1/16 f_1^2 b_1 + 39/64 f_1^3 - 1/4 f_1^2 𝔉_2^ eq + 1/2 f_1 𝔊_2^ eq - 1/8 Q_ m111^ eq f_1^2 , B^ ggl, eq_6 =- 59/128f_1^3,and in the squeezed configuration B^ ggl,sq= Ω_ mΣ_Δ k∑_iB_i^ ggl,sqμ_1^i ,with coefficients B^ ggl, sq_0=b_1 a_1+ B_ m111^ sq b_1^2 , B^ ggl, sq_2=f_1 a_1 + b_1 a_2+ 2 B_ m111^ sq b_1 f_1 , B^ ggl, sq_4= f_1 a_2+ B_ m111^ sq f_1^2,with a_1 and a_2 as above. 
The second cross-bispectrum we consider is the lensing-lensing-galaxy bispectrum, defined byΩ_ m^2⟨ Σ(k_1)δ(k_1) Σ(k_2)δ(k_2) δ_ g^ s(k_3) ⟩_ c≡ (2π)^3δ_ D^(3)(k_123)B^ llg(k_1,k_2,k_3) ,which is in the equilateral configuration B^ llg, eq = Ω^2_ mΣ^2P_ m11^2 ∑_iB^ llg, eq_iμ_1^i ,with B^ llg, eq_0 =b_2^ eq +3/8 f_1 b_1 + 6 b_1 𝔉_2^ eq + 3/2 f_1 𝔉_2^ eq - 3/4𝔊_2^ eq+ Q_ m111^ eq( b_1 + 3/8 f_1 ) , B^ llg, eq_2 =3/16 f_1^2 - 1/8 f_1 b_1 - 1/2 f_1 𝔉_2^ eq + 1/4𝔊_2^ eq - 1/8 Q_ m111^ eq f_1 , B^ llg, eq_4 = - 5/16 f_1^2,and in the squeezed configuration B^ llg,sq= Ω_ m^2 Σ^2∑_iB_i^ llg,sqμ_Δ k^i ,withB^ llg, sq_0=4b_1,Δ k𝔉_2, eff^ sq P_ m11 P_ m11, Δ k+ b_2,12^ sq P_ m11^2 + B_ m111^ sq b_1,Δ k ,B^ llg, sq_2=4 f_1,Δ k𝔉_2, eff^ sq P_ m11 P_ m11, Δ k + B_ m111^ sq f_1,Δ k .§ MORE NONLINEAR OBSERVABLESHere we report the full list of nonlinear observables including their derivations,B_1 = B_40^ sq - B̅_42^ sq/B_00^ sq - B̅_02^ sq = f_1^2/b_1^2 ,B_2= - 87E^2 Ω_ mΣ B_6^ ggl, eq/472 (1+z)^3 B_8^ eq =Ω_ m0Σ/f_1 ,B_3 = 1/4( B_8^ eq P_ m1^2 )'/B_8^ eq P_ m1^2= f_1 + f_1'/f_1 , B_4 = - 29B_ lens^ eq/2048B_8^ eq P_ m11^2B_3^3 =f_1^-1𝔉_2^ eq + Q_ m111^ eq/6f_1 , B_5 = 29B_6^ eq/944B_8^ eq -1/16= f_1^-2𝔊_2^ eq - Q_ m111^ eq/6f_1 , B_6 = B_4 + B_5 = f_1^-1𝔉_2^ eq + f_1^-2𝔊_2^ eq , B_7 = 3B_1^1/2 C_1/16 - 3/ 8 - 6 B_4/ 3/ 8 + 2B_1^-1/2 = b_2^ eq/b_1^2 ,B_8 =B_1C_2 (1+C_3) = b_2,12^ sq/b_1^2 , B_9 = B_ lens^ sq B_1,Δ k^-1/2 C_4,Δ k^-1/ B_1C_4^2 B_00^ sq - B_0^ llg, sq = b_14 𝔉_2, eff^ sq + Q_ m111^ sq/b_2,13^ sq + b_2,23^ sq , B_10 =G /B̅_22^ sq-2 B̅_42^ sq B_1^-1/2 - GB_1^1/2=b_2,13^ sq + b_2,23^ sq/4 f_1 𝔉_2, eff^ sq -2 f_1 b_1,Δ k + 4 𝔊_2, eff^ sq ,and C_1= - 5 B_2^ llg,eq + 3 B_4^ llg,eq/ B_4^ llg,eq + 2 B_1^-1/2 = -8 f_1^-1𝔉_2^ eq + 4 f_1^-2𝔊_2^ eq-2 f_1^-1 Q_ m111^ eq , C_2= - 177/1024B_40^ sq - B̅_42^ sq/B_6^ eq P_ m1^2 = b_2,12^ sq/f_1^2 + 16 𝔊_2^ eq- 8 f_1 Q_ m111^ eq/3 , C_3 = 87B_6^ eq/177B_8^ eq -1 = 16 f_1^-2𝔊_2^ eq - 8/3 f_1^-1Q_ m111^ eq , C_4 = (1+z)^3/ E^2 B_2.In deriving the above we have defined a quantity that is dependent on the normalization of density fluctuations and, thus, generally not an observable,G= B_00^ sq -C_4^-2 B_1^-1Ω_ m^2 Σ^2 B_0^ llg,sq=( b_2,13^ sq + b_2,23^ sq) b_1 b_1,Δ k P P_Δ k . § EVOLUTION EQUATIONS IN SQUEEZED LIMIT In Sec. <ref>, we have derived a relation from the fluid equations by applying the equilateral limit to the wave dependence of the kernels. Here we repeat the analysis for the squeezed case, also to prove that Using Eqs. (<ref>)–(<ref>) as a starting point and taking the squeezed limit, we obtain, respectively,{𝔉_2,12^ sq' + 2 f_1 𝔉_2,12^ sq}δ_ m1^2=- 𝔊_2,12^ sqδ_ m1^2,{𝔊_2,12^ sq' + 2f_1 𝔊_2,12^ sq}δ_ m1^2={ - ( 2+ H'/H) 𝔊_2,12^ sq - 3/2Ω_ m Y 𝔉_2,12^ sq}δ_ m1^2.Defining δ_2^ sq = 𝔉_2,12^ sqδ_ m1^2 andθ_2^ sq = 𝔊_2,12^ sqδ_ m1^2, we can combine these equations into the following PDE,δ_2^ sq” + ( 2+ H'/H)δ_2^ sq' - 3/2Ω_ m Y δ_2^ sq = 0.This PDE coincides exactly with the one obtained for the linear matter density, Eq. (<ref>), thus its solution will grow with the same linear amplitude D. But since δ_2 is of second order with the fastest growing mode potentially of the order of D^2,we conclude that the above PDE for δ_2^ sq excites nothing more butdecaying modes, and thus we can set δ_2^ sq =0,from which follows that 𝔉_2,12^ sq=0.§ EVOLUTION EQUATIONS FOR THE MATTER BISPECTRUM In Sec. <ref> we have determined evolution equations thatlead subsequently to the constraint equation (<ref>). 
For itsderivation we have argued that we can drop two loop integrals and a Dirac delta. Here we provide a more rigorous derivation that obviously leads to the identical final result (<ref>). Actually, to understand our methodology, it is sufficient to focus on thelhs term of Eq. (<ref>) for which we again restore the double integrals and the Dirac delta. In that term we interchange some dependencesaccording to k→k_3, k_1 →k_4, k_2 →k_5, and write equivalently ∫^3 k_45/(2π)^3δ_ D^(3)(k_3 - k_45) {𝔉_2'(k_4,k_5) + 𝔉_2(k_4,k_5) [ f_1(k_4) + f_1(k_5) ] }δ_ m1(k_4)δ_ m1(k_5).Multiplying this by δ_ m1(k_1)δ_ m1(k_2) and taking the correlator of the resulting expression, we have∫^3 k_45/(2π)^3δ_ D^(3)(k_3 - k_45) {𝔉_2'(k_4,k_5) + 𝔉_2(k_4,k_5) [ f_1(k_4) + f_1(k_5) ] }⟨δ_ m1(k_1)δ_ m1(k_2) δ_ m1(k_4) δ_ m1(k_5) ⟩_ c= 2 (2π)^3 δ_ D^(3)(k_123 ) {𝔉_2'(k_1,k_2) + 𝔉_2(k_1,k_2) [ f_1(k_1) + f_1(k_2) ] } P_ m1(k_1)P_ m1(k_2), where we have used Wick's theorem <cit.> and discarded a zero-mode term ∝δ_ D^(3)(k_3). The rhs term in Eq. (<ref>) is only nonzero if the closure condition, dictated by δ_ D^(3)(k_123 ), is satisfied. This is indeed the case for the bispectrum where the three wave vectors form a closed triangle in Fourier space. By contrast, the omitted zero-mode term that is proportional to δ_ D^(3)(k_3) dictates k_3 =0 and no triangle closure condition.The same technique applies to the rhs of Eq. (<ref>) [and, of course, to the whole Eq. (<ref>) as well]; dropping the Dirac delta, some constant factors and the two power spectra, we then obtain for the equilateral triangle configuration Eq. (<ref>) [and Eq. (<ref>), respectively], which concludes the proof. Finally, we note that the above technique delivers evolution equations not for the fluid variables but for the bispectrum. In general, this technique of course applies not only to the bispectrum but to any polyspectrum. | http://arxiv.org/abs/1703.09228v2 | {
"authors": [
"Cornelius Rampf",
"Eleonora Villa",
"Luca Amendola"
],
"categories": [
"astro-ph.CO",
"gr-qc"
],
"primary_category": "astro-ph.CO",
"published": "20170327180004",
"title": "Quasilinear observables in dark energy cosmologies"
} |
theoremTheorem equationsection lemma[theorem]Lemma proposition[theorem]Proposition corollary[theorem]Corollary remark[theorem]Remark example[theorem]Example definition[theorem]Definition assumption[theorem]Assumption𝔪 Kerℝ ε ÷divMicroscopic modeling and analysis of collective decision making: equality bias leads suboptimal solutionsPierluigi Vellucci^1 Mattia Zanella^2Abstract: We discuss a novel microscopic model for collective decision-making interacting multi-agent systems. In particular we are interested in modeling a well known phenomena in the experimental literature called equality bias, where agents tend to behave in the same way as if they were as good, or as bad, as their partner. We analyze the introduced problem and we prove the suboptimality of the collective decision-making in the presence of equality bias. Numerical experiments are addressed in the last section. § INTRODUCTION Several^1Department of Economics, Roma Tre University, via Silvio D'Amico 77, 00145 Rome, Italy; [email protected] experimental^2Politecnico di Torino, Department of Mathematical Sciences, Corso Duca degli Abruzzi 24, 10129 Torino, Italy; [email protected] works on group psychology has been done in recent years in order to observe unexpected dysfunctional behaviors in decision-making communities, see <cit.> and the references therein. Usual example are the groupthink, a collective phenomena whereby people try to minimize internal conflicts for reaching consensus to the detriment of the common good, the Dunning-Kruger effect, regarding an overestimation of personal competence of unskilled people, and the equality bias, whereby people behave as if they are as good, or as bad as their partner. In the following we will focus on this latter aspect of decision–making systems.A valuable improvement on the direction of understanding the emergence of the equality bias has been done in <cit.>. Here, authors asked how people deal with individual differences in competence in the context of a collective perceptual decision-making task, developing a metric for estimating how participants weight their partner's opinion relative to their own. Empirical experiments, replicated across three slightly different countries like Denmark, Iran, and China, show how participants assigned nearly equal weights to each other's opinions regardless of the real differences in their competence. The results show that the equality bias is particularly costly for a group when a competence gap separates its members.Drawing inspiration by these recent experimental results, and by the mathematical set-up introduced in the recent works <cit.>, we consider here amicroscopic model taking into account the influence of the competence in collective decision-making tasks for systems of interacting agents. This works follows the recent study of the authors <cit.> where the decision-making task is discussed at the kinetic level. The approach proposed in this paper is based on the Laplacian matrix of the connectivity graph and is inspired by classical works on self–organization <cit.>. With reference to the experimental literature we introduce competence–based interaction functions describing the maximum competence (MC) and the equality bias (EB) case. In particular, the MC model sketches the case in which the emerging decision coincides with the one of the most competent agent. On the other hand the EB model should deal with the complementary case. 
Based on a simplified communication coefficients, we derive the asymptotic convergence of the overall system for the decision models. A key feature of present modeling is the evolution of the competence variable, whose dynamics takes into account the social background of the single agent and the possibility to improve specific competences during interactions with more competent agents, see <cit.>. At the continuous level it has been showed in <cit.> how the variation of the mean opinion of agents with given competence follows the choice of the most competent agents in the MC case. The present approach is based on the explicit derivation of eigenvalues of the system. The present manuscript is organized as follows. In Section <ref> we briefly review some microscopic models for alignment dynamics, we introduce here a specific model for decision and competence. Then we discuss two main models for the collective decision-making, the mentioned MC model and the EB model. In Section <ref> we analyze the main properties of the model and we show how the equality bias leads the system of agents toward suboptimal collective decisions, computing the eigenvalues of the aforementioned Laplacian matrix and proving that the collective decision-making in the presence of equality bias is suboptimal for each t>0. Finally, in Section <ref> we address numerical experiments based on the introduced model.§ DESCRIPTION OF THE MODEL In this section we discuss some modeling aspects of second order microscopic model for decision-making dynamics. Our mathematical approach follows the set-up of several recent works on opinion dynamics, see <cit.> and the references therein. These class of models gained deepest attention in scientific research in the last decade thanks to their countless applications in biology, socio-economic sciences and control theory <cit.>. §.§ Microscopic models for the collective behaviorWithout intending to review whole literature, we introduce some well-known microscopic modelsdescribing particular aspects of the aggregate motion of a finite system of interacting agents. We focus in particular on alignment-type dynamics.We are interested in studying the dynamics of N∈ℕ individuals with the following general structure at time t∈^+ ẋ_i = f(x_i,w_i),i=1,…,N, ẇ_i = S(x_i)+1α_i∑_j=1^NP(x_i,x_j;w_i,w_j) (w_j-w_i),where (x_i,w_i)∈^2d for each t≥ 0, S(·) is a self-propelling term and P(·,·;·,·) is a general interaction function depending on both the considered variables. In (<ref>) we introduced a function f:^2d→^d, it assumes the form f(x_i,w_i)=w_i in case of flocking systems, in this case x_i,w_i are the space and velocity variables the ith agent. It may describe a wider class of processes which will be specified later on.We exemplify the structure of flocking systems by presenting the Cucker-Smale (CS) model and the Motsch-Tadmor (MT) model. In the classic CS model each agent adjusts its velocity by adding a weighted average of the differences of its velocity with those of all the other agents. Therefore, for all i∈{1,…,N} we consider a symmetric interaction function of the formP(x_i,x_j;w_i,w_j)=p(x_i-x_j ^2)depending on the Euclidean distance between agents and the constant scaling α_i=N, see <cit.>. In particular the typical choice is the followingp(x_i-x_j ^2) = K(ζ^2+x_i-x_j ^2)^γ,with K,ζ>0 and γ≥ 0. Without considering self-propelling terms, i.e. 
S(·)≡ 0, it has been shown how under these assumptions that the resulting initial value problems is well-posed: mass and momentum are preserved and the solution has compact support for both position and velocity <cit.>. Further, in the CS model unconditional alignment emerges for γ≤1/2 and the velocity support collapses exponentially to a single point and the system holds the same disposition.An example of non-symmetric interactions in flocking systems is given by the MT model <cit.>. Here the alignment is based on the relative influence between the system of agents, therefore we consider an interaction of the form introduced in (<ref>) whereas the scaling factors α_i>0 are given byα_i = ∑_j iP(x_i-x_j ^2).With this definition the dynamics looses any property of symmetry of the CS model, linking the initial value problem (<ref>) to more sophisticated models where the ith agent may interact with the jth agent but not vice versa, for example leader-follower models as well as limited perception models <cit.>. §.§ A competence-based model for collective decision-making We are interested to describe the coupled evolution of decisions and competence in a system of N∈ℕ interacting agents. Each agent is endowed with two quantities (x_i,w_i) representing its competence and decision respectively, where x_i∈ X⊆^+ and w_i∈[-1,1]=ℐ, where ± 1 denote two oppositepossible decisions of an agent.One of the main factors influencing the evolution of the competence variable is the social background in which individuals lives. It is therefore natural to assume that competence is partially inherited from the environment with the possibility to learn specific competences by interacting with more competent agents <cit.>.Real experiments have been done in the psychology literature in order to define the impact of the competence on a group decision-making, see <cit.> and the references therein. Competence is generally associated to the predisposition to listen and give value to the otheropinions. The higher this quality, greater is the ability to value other opinions. Vice versa, a person unwilling to listen and dialogue is usually marked by not competent. An emergent phenomenon in group decision-making is called equality bias, that is a misjudgement of personal competence of unskilled people during the exchange of informations, which goes hand in hand with the tendency of the most skilled individuals to underestimate their competence.From the general structure introduced in the previous section we consider the evolution in [0,T_f], T_f>0 of the following system of differential equationsẋ_i = ∑_j=1^N λ(x_i,x_j) (x_j-x_i)+λ_B(x_i) z,i=1,…,N ẇ_i= 1N∑_j=1^N P(x_i,x_j;w_i,w_j)(w_j-w_i),where z∈^+ is a the degree of competence achieved from the background at each interaction, having distribution C(z) and bounded mean m_B ∫_^+C(z)dz=1,∫_^+zC(z)dz=m_B.Further, λ_B(·) quantifies the expertise gained from the background and λ(·,·) weights the exchange of competence between individuals. A possible choice for the function λ(·,·) is λ(x_i,x_j)=const.>0 if x_i<x_j and λ(x_i,x_j)=0 elsewhere. In the above system we introduced the interaction function 0 ≤ P(w_i,w_j;x_i,x_j)≤ 1 depending on both the decisions and competence of the interacting agents.More realistic models may be obtained by adding to (<ref>) decision dependent noise terms modeling self-thinking processes and characterized by a function D(x_i,w_i)∈[0,1] generally called local relevance of the diffusion for a given decision and competence. 
A possible choice for the interaction function is the followingP(w_i,w_j;x_i,x_j)=Q(w_i,w_j) R(x_i,x_j),where 0≤ Q(·,·)≤ 1 is the compromise propensity and 0≤ R(·,·)≤ 1 which takes into account the agents' competence. Let us assume Q(w_i,w_j)≡ 1, we adopt the following notation for the square matrix ℛ_N∈Mat_N(^+) r_ij=R(x_i,x_j),for all i,j=1,…,N.We further define the diagonal square matrix 𝒟_N∈Mat_N(^+) ( 𝒟_N)_ij=∑_j=1^N r_ij if i=j0if i j,for all i,j=1,…,N. Then we can rewrite (<ref>) as follows ẇ_i = -(1N∑_j=1^N r_ij)w_i(t)+1N∑_j=1^N r_ijw_j(t),=-1/N[𝒟_N w(t)]_i+1/N[ℛ_N w(t)]_i,=-1N[ℒ_N w(t)]_i.being ℒ_N=𝒟_N-ℛ_N; where ℒ_N is usually called Laplacian matrix of a graph. §.§ Collective decision-making under equality biasIn the following we consider two main models of decision-making inspired by real experiments <cit.>. The first model takes into account the competence of individuals: at each interaction the prevailing decision coincides with the one of the system with maximum competence. We will refer to this model as maximum competence model (MC). In the present setting the MC model may be obtained by considering the Heaviside-type interaction function R(x_i,x_j)=:R_MC(x_i,x_j) R_MC(x,x_*)=1 x<x_* 1/2 x=x_* 0 x>x_*.The function R_MC(·,·) may be approximated through a smoothed continuous version of the MC model (cMC)R_cMC(x_i,x_j) = 11+e^c(x_i-x_j),with c>>1. In order to reproduce the cited equality bias we consider here a competence based interaction function R(x_i,x_j)=:R_EB(x_i,x_j) with the following properties: if the competences x_i and x_j are very close together, i.e. in the homogeneous case, there are not appreciable changes in the dynamics of the model, while if x_i and x_j sensibly differ we have R_EB(x_i,x_j)≃ 1. An example is given by the sigmoid functionR_EB(x_i,x_j)=11+e^-c(x_i-x_j),with c>0 a given constant. We depict in Figure <ref> the functions R_cMC(·,·) and R_EB(·,·) defined in (<ref>)-(<ref>) for several choices of the constant c>0.Observe how, both in the EB and cMC cases, the element of the matrix ℛ_N introduced in (<ref>) is such thatr_ij= 1-r_ji, i,j=1,…,N.The problem to study the eigenvalues distribution of the matrix ℒ_N is not in general an easy task, and it depends on the connectivity coefficients index of the model. Under suitable assumptions, it has been addressed in <cit.> and <cit.>. In our case the Laplacian matrix is not symmetric and a strategy similar to <cit.> cannot be used. For this reason, in the following, we will face the problem of the eigenvalues distribution under simplifying assumptions. We introduce the concepts of collective decision <cit.>.Let us consider a system of N∈ℕ agents with competence and opinion (x_i,w_i)_i=1,…,N. We define the collective decision of the system the quantityw̅ = 1N∑_i=1^N w_i.Let (x_i,w_i)_i=1,…,N be a system of N∈ℕ interacting agents. A collective decision is said to be optimal ifw̅ = w_k,such that x_k=max_i=1,…,Nx_i.We have introduced a definition of optimal decision which is rather different from the one in <cit.>. 
In the cited work the optimal decision of the interacting system is suggested by external factors through and embedded in the dynamics through a self–propulsion term, whereas the introduced optimal decision depends on the maximal competence of the considered system of agents and is defined a priori as the decision of the most competent agents of the system.In the rest of the paper we focus on two main situations described by the following assumptions:If the competence of the agents does not enter in the dynamics we have r_ij= r for all i,j=1,…,N.The system of agents is divided in two populations, competent and incompetent agents, belonging to the sets S and U respectively. The interaction function in the MC case readsr_ij=0 i∈ S, j∈ U 12i,j∈ S or i,j∈ U1 i∈ U, j∈ S,whereas it simplifies in the EB case as followsr_ij=1 i∈ S, j∈ U 12i,j∈ S or i,j∈ U0 i∈ U, j∈ S.The Assumption <ref> describes the case in which the competences of individuals of the whole group does not depend on time and are very close together, i.e. x_i≈ x_j for all i,j=1,…,N. We will analyze this simple case by assuming from r=1/2. On the other hand Assumption <ref> define simplified interaction rules, which are coherent with R_MC(·,·) and R_EB(·,·) for c≫ 1. Observe how in this case the evolution of the competence variable does not permit the decoupling of the introduced decision dynamics and to derive explicit stationary decisions of the system. In the following we derive the general structure of the eigenvalues of the Laplacian matrix showing how the introduction of these interaction rules leads the agents toward an optimal or suboptimal collective decisions respectively in the MC and EB cases. § PROPERTIES OF THE MODELIn this section we investigate the structure and properties of the matrices defined in the last section.Let us consider the matrices ℒ_N=𝒟_N-ℛ_N defined in (<ref>)-(<ref>). Then we have:(i) The entries of ℒ_N are given by ( ℒ_N)_ij=(1-δ_i1)(i-1-∑_k=1^i-1r_ki)+ (1-δ_iN) ∑_k=i+1^N r_iki = j -r_iji < j r_ji-1 i > jwhereδ_ij =0 i ≠ j, 1 i=j.is the Kronecker's delta function. Further, the expression of ℒ_N at the may be written in terms of ℒ_N-1 as followsℒ_N=( ℒ_N-1+ℋ_N-1-h_N-1^Th_N-1-1_N-1N-1-∑_i=1^N-1 r_i,N )where ℒ_1=(0) and we introduced the terms h_N-1=[r_iN]_i=1,…,N-1, 1_N-1=[1,…,1_N-1] and the diagonal matrix ℋ_N-1∈Mat_N-1([0,1]) defined as(ℋ_N-1)_ij =r_iN if i=j,0otherwise. (ii) The matrix ℒ_N is singular.(iii) For N≥ 2, tr(ℒ_N)=N(N-1)/2.(i) By induction, in the case N=2 we have:ℒ_2= (r_11+r_120 0 r_21+r_22)-(r_11r_12r_21r_22 )=(r_12-r_12r_12-1 1-r_12 ),that is (<ref>) in the case N=2. We assume true (<ref>) foranyN∈ℕ, N>2, therefore we haveℒ_N+1=𝒟_N+1-ℛ_N+1,that isℒ_N+1= ( [ r_1,1+…+r_1,N+1 0; ⋱; 0 r_N+1,1+…+r_N+1,N+1; ])-([ r_1,1 r_1,2 … r_1,N+1; r_2,1 r_2,2 … r_2,N+1; ⋮ ⋮ ⋮ ⋮; r_N+1,1 r_N+1,2 … r_N+1,N+1; ]).Being -r_i,j=r_j,i-1 for i>j we have that the (N+1)th row of (<ref>) is given by(h_N-1_N, N-∑_i=1^N r_i,N+1),while we can write the (N+1)th column as,([-h_N^T; N-∑_i=1^N r_i,N+1 ]).Hence, if we define(ℋ_N+1)_ij =r_i,N+1 if i=j,0otherwise,we have the first point. (ii) It follows form the fact that the vector (1,…,1)lies in the kernel of the matrix ℒ_N. (iii) We have to show that∑_i=1^N (1-δ_i1)(i-1-∑_k=1^i-1r_ki)+ ∑_i=1^N (1-δ_iN) ∑_k=i+1^N r_ik=N(N-1)/2,that is ∑_i=2^N (i-1-∑_k=1^i-1r_ki)+ ∑_i=1^N-1∑_k=i+1^N r_ik=N(N-1)/2,which is true in the case N=2. Therefore, we prove by mathematical induction that equation (<ref>) holds for all N>2. 
Let us define the following objectsP_N =∑_i=2^N (i-1-∑_k=1^i-1r_ki), Q_N =∑_i=1^N-1∑_k=i+1^N r_ik.It follows thatP_N+1 =P_N+N-∑_k=1^Nr_k(N+1), Q_N+1 =Q_N+∑_k=1^Nr_k(N+1),thusP_N+1+Q_N+1=P_N+Q_N+N=N(N-1)/2+N=N(N+1)/2,which completes the proof.Let us consider the matrix ℛ_N under the Assumption <ref>, i.e. r_ij=r∈[0,1] for each i,j=1,… N, N≥ 2. Denoting with λ_2^N-1,…, λ_N-1^N-1 the non-zero eigenvalues of ℒ_N-1, the expression of ℒ_N is ℒ_N=((N-1)r -r -r…-r r-1λ_2^N-1-r…-r⋮ … ⋱ … ⋮r-1…r-1λ_N-1^N-1-r r-1 r-1…r-1 (N-1)(1-r))with eigenvaluesλ_1^N=0, λ_i^N=(i-1)(1-r)+(N+1-i)r, i=2,…,NIn the case N=2 we haveλ_1^2=0,λ_2^2=1which are eigenvalues of the matrixℒ_2 =(r -r r-1 1-r)We proceed by induction, assume the statement is true for a generic integer N-1; by equation (<ref>) in Proposition <ref> we have:ℒ_N =( ℒ_N-1+ ℋ_N-1-h_N-1^Th_N-1-1_N-1(N-1)(1-r))=( ℒ_N-1+r Id_N-1-h_N-1^Th_N-1-1_N-1(N-1)(1-r))where h_N-1 = [r,…,r_N-1] and Id_N-1 is the identity matrix of size N-1. Therefore, ℒ_N assumes the form given in (<ref>) for each N≥ 2. Let us consider now λ_i^N as in (<ref>), we prove that(ℒ_N-λ_i^N Id_N)=0, for each i=2,…, N.If i=2, the first two rows of ℒ_N-λ_2^N Id_N are(N-1)r-λ_2^N -r -r…-r r-1λ_2^N-1-λ_2^N -r…-rwith λ_2^N=1-r+(N-1)r, and the two rows in (<ref>) are both equal to the arrayr-1,-r,…,-rFor 2<i+1<N, we consider the ith and (i+1)th rows of ℒ_N-λ_i^N Id_N, given byr-1…r-1λ_i^N-1-λ_i+1^N-r -r…-r r-1…r-1 r-1λ_i+1^N-1-λ_i+1^N-r…-rwhereλ_i+1^N-1-λ_i+1^N=-r, λ_i^N-1-λ_i+1^N=r-1.Thus the ith and (i+1)th rows are linearly dependent. Finally we consider the case i=N, we observe that the last two rows of ℒ_N -λ_N^N Id_N r-1…r-1λ_N-1^N-1-λ_N^N-rr-1…r-1 r-1λ_N^N-1-λ_N^Nare equal, in fact λ_N-1^N-1-λ_N^N=r-1 and λ_N^N-1-λ_N^N=-r. We have proven that λ_i^N, for i=2,…,N defined in (<ref>) are solutions of characteristic polynomial associated to the matrix ℒ_N. From the Lemma <ref> we can easily see that the eigenvalues (<ref>) of ℒ_N are real and positive. Further it is now easy to show that the Laplacian matrix ℒ_N defined in (<ref>), in the case described by Assumption <ref>, assumes the form given by the following result.Let us consider Assumption <ref> with all entries of the matrix ℛ_N such that r_ij=1/2 for each i,j=1,… N. The Laplacian matrix ℒ_N is the following: ℒ_N=( N-1/2-1/2-1/2 …-1/2-1/2 N-1/2-1/2 …-1/2 ⋮ … ⋱ … ⋮-1/2 …-1/2 N-1/2-1/2-1/2-1/2 …-1/2 N-1/2 ),with eigenvaluesλ_1^N=0,λ_i^N=N2,i=2,…,N. In the following we utilize the notations ℒ_N^MC and ℒ_N^EB for the Laplacian matrix under the maximum competence and equality bias case respectively as in Assumption <ref>. The Laplacian matrix under the Assumption <ref> will be denoted with ℒ_N^(1).In the following example we establish the structures of the matrices ℒ_N^MC and ℒ_N^EB. More exhaustive results are proposed in Lemma <ref>. In this example we consider four agents where i=1,2 have high competence and the agents i=3,4 have no competence (assuming r_12=r_21=r_34=r_43=1/2). We denote it with Case a. Afterward, we consider the Case b, where i=1 has high competence and the agents i=2,3,4 have no competence. All the computations refer to the case of wide competence gap for the EB model, or equivalently to the case c>>1. Case a (MC model). Letr_13=r_14=r_23=r_24=0,r_31=r_41=r_32=r_42=1andr_12=r_21=r_34=r_43=1/2.Henceℒ_4^MC=( 1/2-1/2 0 0 -1/2 1/2 0 01-5 -1 -15/2-1/2-1 -1 -1/2 5/2 )which is a matrix of triangular block form. 
The characteristic polynomial of ℒ_4^MC isp(ℒ_4^MC,λ)=p(ℒ_2^(1),λ)· p(ℒ_2^(1)+2Id_2,λ) ,and so the eigenvalues are {0,1,2,3}. The possibility of calculate the eigenvalues of ℒ_4^MC from those of ℒ_2^(1), is due to a decoupling of two effects: the effect imposed on the system by highly skilled agents, and the effect of the less competent agents. Note that, since the first two agents are those most competent, the upper left block in (<ref>) is related to highly skilled agents (i=1,2) while the lower right block is due to less competent agents (i=3,4). Case a (EB model).r_13=r_14=r_23=r_24=1,r_31=r_41=r_32=r_42=0andr_12=r_21=r_34=r_43=1/2.Henceℒ_4^EB=( 5/2-1/2 -1 -1 -1/2 5/2 -1 -11-5 0 01/2-1/20 0 -1/2 1/2 )which is a matrix of triangular block form. The characteristic polynomial of ℒ_4^EB isp(ℒ_4^EB,λ)=p(ℒ_2^(1)+2Id_2,λ)· p(ℒ_2^(1),λ),and so the eigenvalues are still {0,1,2,3}. As for MC model, we can observe again the decoupling of the effect imposed on the system by highly skilled agents from those of the less competent agents.Case b (MC model).r_12=r_13=r_14=0,r_21=r_31=r_41=1andr_11=r_22=r_23=r_24=r_32=r_33=r_34=r_42=r_43=r_44=1/2.Thenℒ_4^MC=(00 0 01-6 -12 -1/2-1/2-1 -1/2 2 -1/2-1 -1/2 -1/22).The characteristic polynomial of ℒ_4^MC isp(ℒ_4^MC,λ)=λp(ℒ_3^(1)+Id_3,λ),whose eigenvalues are {0,1,5/2,5/2}.Case b (EB model).r_12=r_13=r_14=1,r_21=r_31=r_41=0andr_11=r_22=r_23=r_24=r_32=r_33=r_34=r_42=r_43=r_44=1/2.Henceℒ_4^EB=(3-1 -1 -11-6 01 -1/2-1/20 -1/2 1 -1/20 -1/2 -1/21).The characteristic polynomial of ℒ_4^EB isp(ℒ_4^EB,λ)=(λ-3) p(ℒ_3^(1),λ),whose eigenvalues are {0,3/2,3/2,3}. In the following Lemma we generalize this approach denoting with N_1 the number of incompetent agents which may vary in time. Besides, we recall the notation introduced with Proposition <ref>: h_N-1=[r_1,N,…,r_N-1,N], 1_N-1=[1,…,1_N-1], 0_N-1=[0,…,0_N-1]), and ℋ_N-1 is the diagonal matrix defined in (<ref>).From the Lemma <ref> we can easily see that the eigenvalues (<ref>) of ℒ_N are real and positive. Further it is now easy to show that the Laplacian matrix ℒ_N defined in (<ref>), in the case described by Assumption <ref>, assumes the form given by the following result. Under Assumption <ref> let us consider a system of N+N_1∈ℕ, N≥1, N_1≥1 interacting agents such that S={1,…,N} and U={N+1,…,N+N_1}. We define the following rectangular matrices 𝒥∈Mat_N-N_1,N_1({1}), 𝒪∈Mat_N_1,N-N_1({0}), i.e.(𝒥)_ij = 1,for all 1≤ i ≤ N-N_1, 1≤ j ≤ N_1, (𝒪)_ij = 0,for all 1≤ i ≤ N_1, 1≤ j ≤ N-N_1.Then, in the case c>>1, we have the following claims:(i) The Laplacian matrix for the MC model is given byℒ_N+N_1^MC=( [ ℒ_N^(1) 𝒪; 1-3 - 𝒥ℒ_N_1^(1)+N Id_N_1; ])its characteristic polynomial is given byp(ℒ_N+N_1^MC,λ)=p(ℒ_N^(1),λ)· p(ℒ_N_1^(1)+NId_N_1,λ) .with eigenvalues:λ_1=0, λ_2=…=λ_N=N/2,λ_N+1=N,λ_N+2=…=λ_N+N_1=N_1/2+N,(ii) The Laplacian matrix for the EB model is given byℒ_N+N_1^EB=( [ ℒ_N^(1)+N_1 Id_N - 𝒥; 1-3𝒪 ℒ_N_1^(1);])its characteristic polynomial is given byp(ℒ_N+N_1^EB,λ)=p(ℒ_N^(1)+N_1Id_N,λ)· p(ℒ_N_1^(1),λ)with eigenvaluesλ_1=N_1,λ_2=…=λ_N= N/2+N_1,λ_N+1=…=λ_N+N_1-1= N_1/2,λ_N+N_1=0.(i) We proceed by induction. Let us consider N_1=1, therefore S={i=1, … ,N} and U={N+1}. From Proposition <ref>, eq. (<ref>), we haveℒ_N+1^MC=( ℒ_N^(1)+ℋ_N - h_N^T1-3h_N-1_NN-∑_i=1^N r_i N+1 )where r_i,N+1=0,for each i=1,…, N. Hence ℒ_N+1^MC assumes the following formℒ_N+1^MC=(ℒ_N^(1)+Id_N0_N^T1-3 - 1_NN)The first step has been showed. Let us assume that the results in (<ref>) and (<ref>) hold. 
In the following Lemma we generalize this approach, denoting by N_1 the number of incompetent agents, which may vary in time. Besides, we recall the notation introduced with Proposition <ref>: h_{N-1} = [r_{1,N},…,r_{N-1,N}], 1_{N-1} = [1,…,1] (N-1 entries), 0_{N-1} = [0,…,0] (N-1 entries), and ℋ_{N-1} is the diagonal matrix defined in (<ref>).

Under Assumption <ref>, let us consider a system of N+N_1 ∈ ℕ, N≥1, N_1≥1 interacting agents such that S={1,…,N} and U={N+1,…,N+N_1}. We denote by 𝒥_{m,n} ∈ Mat_{m,n}({1}) and 𝒪_{m,n} ∈ Mat_{m,n}({0}) the m×n all-ones and zero matrices, i.e.

(𝒥_{m,n})_ij = 1 and (𝒪_{m,n})_ij = 0 for all 1≤i≤m, 1≤j≤n,

dropping the subscripts when the block dimensions are clear. Then, in the case c≫1, we have the following claims.

(i) The Laplacian matrix for the MC model is given by

ℒ_{N+N_1}^MC = ( ℒ_N^(1)        𝒪_{N,N_1}
                 -𝒥_{N_1,N}     ℒ_{N_1}^(1) + N Id_{N_1} );

its characteristic polynomial is

p(ℒ_{N+N_1}^MC, λ) = p(ℒ_N^(1), λ) · p(ℒ_{N_1}^(1)+N Id_{N_1}, λ),

with eigenvalues

λ_1 = 0,   λ_2 = … = λ_N = N/2,   λ_{N+1} = N,   λ_{N+2} = … = λ_{N+N_1} = N_1/2 + N.

(ii) The Laplacian matrix for the EB model is given by

ℒ_{N+N_1}^EB = ( ℒ_N^(1) + N_1 Id_N   -𝒥_{N,N_1}
                 𝒪_{N_1,N}            ℒ_{N_1}^(1) );

its characteristic polynomial is

p(ℒ_{N+N_1}^EB, λ) = p(ℒ_N^(1)+N_1 Id_N, λ) · p(ℒ_{N_1}^(1), λ),

with eigenvalues

λ_1 = N_1,   λ_2 = … = λ_N = N/2 + N_1,   λ_{N+1} = … = λ_{N+N_1-1} = N_1/2,   λ_{N+N_1} = 0.

(i) We proceed by induction. Let us consider N_1=1, so that S={1,…,N} and U={N+1}. From Proposition <ref>, eq. (<ref>), we have

ℒ_{N+1}^MC = ( ℒ_N^(1)+ℋ_N   -h_N^T
               h_N-1_N        N-∑_{i=1}^{N} r_{i,N+1} ),

where r_{i,N+1}=0 for each i=1,…,N. Hence ℒ_{N+1}^MC assumes the form

ℒ_{N+1}^MC = ( ℒ_N^(1)   0_N^T
               -1_N      N ).

This proves the base step. Let us now assume that the results in (<ref>) and (<ref>) hold. We add another incompetent agent, so that S={1,…,N} and U={N+1,…,N+N_1+1}. From Proposition <ref> we have

ℒ_{N+N_1+1}^MC = ( ( ℒ_N^(1)   𝒪
                     -𝒥        ℒ_{N_1}^(1)+N Id_{N_1} ) + ℋ_{N+N_1}     -h_{N+N_1}^T
                   h_{N+N_1}-1_{N+N_1}     N+N_1-∑_{i=1}^{N+N_1} r_{i,N+N_1+1} ),

where

r_{1,N+N_1+1} = … = r_{N,N+N_1+1} = 0,   r_{N+1,N+N_1+1} = … = r_{N+N_1,N+N_1+1} = 1/2,

and so

( ℒ_N^(1)   𝒪
  -𝒥        ℒ_{N_1}^(1)+N Id_{N_1} ) + ℋ_{N+N_1} = ( ℒ_N^(1)   𝒪
                                                     -𝒥        ℒ_{N_1}^(1)+(N+1/2) Id_{N_1} ).

The rows -h_{N+N_1} and h_{N+N_1}-1_{N+N_1} are, respectively,

( 0, …, 0 [N entries], -1/2, …, -1/2 [N_1 entries] ) and ( -1, …, -1 [N entries], -1/2, …, -1/2 [N_1 entries] ),

while

N+N_1-∑_{i=1}^{N+N_1} r_{i,N+N_1+1} = N+N_1/2.

From Corollary <ref>, the main diagonal of ℒ_{N_1}^(1) contains all (N_1-1)/2; hence the assembled matrix has exactly the block form (<ref>) with N_1 replaced by N_1+1, and the inductive step is proved.

(ii) We proceed by induction. Let us consider N_1=1, so that S={1,…,N} and U={N+1}. From Proposition <ref>, eq. (<ref>), we have

ℒ_{N+1}^EB = ( ℒ_N^(1)+ℋ_N   -h_N^T
               h_N-1_N        N-∑_{i=1}^{N} r_{i,N+1} ),

where r_{i,N+1}=1 for each i=1,…,N. Hence ℒ_{N+1}^EB assumes the form

ℒ_{N+1}^EB = ( ℒ_N^(1)+Id_N   -1_N^T
               0_N             0 ).

The base step is achieved. Suppose the results (<ref>) and (<ref>) hold, and add another incompetent agent: S={1,…,N}, U={N+1,…,N+N_1+1}. From Proposition <ref>, eq. (<ref>), we have

ℒ_{N+N_1+1}^EB = ( ( ℒ_N^(1)+N_1 Id_N   -𝒥_{N,N_1}
                     𝒪_{N_1,N}          ℒ_{N_1}^(1) ) + ℋ_{N+N_1}     -h_{N+N_1}^T
                   h_{N+N_1}-1_{N+N_1}     N+N_1-∑_{i=1}^{N+N_1} r_{i,N+N_1+1} ),

where

r_{1,N+N_1+1} = … = r_{N,N+N_1+1} = 1,   r_{N+1,N+N_1+1} = … = r_{N+N_1,N+N_1+1} = 1/2.

Therefore

( ℒ_N^(1)+N_1 Id_N   -𝒥_{N,N_1}
  𝒪_{N_1,N}          ℒ_{N_1}^(1) ) + ℋ_{N+N_1} = ( ℒ_N^(1)+(N_1+1) Id_N   -𝒥_{N,N_1}
                                                   𝒪_{N_1,N}              ℒ_{N_1}^(1)+(1/2) Id_{N_1} ).

The rows -h_{N+N_1} and h_{N+N_1}-1_{N+N_1} are, respectively,

( -1, …, -1 [N entries], -1/2, …, -1/2 [N_1 entries] ) and ( 0, …, 0 [N entries], -1/2, …, -1/2 [N_1 entries] ),

while

N+N_1-∑_{i=1}^{N+N_1} r_{i,N+N_1+1} = N_1/2.

From Corollary <ref>, the main diagonal of ℒ_{N_1}^(1) contains all (N_1-1)/2; hence ℒ_{N_1}^(1)+(1/2)Id_{N_1}, bordered by the new row and column above, is exactly ℒ_{N_1+1}^(1), and we can conclude.
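A direct numerical construction of the block Laplacians of the Lemma just proved, writing ℒ_n^(1) = (n·Id - 𝒥_{n,n})/2 as in the Corollary above (a sketch):

import numpy as np

def L1(n):
    # uniform r = 1/2 Laplacian of the Corollary: diagonal (n-1)/2, off-diagonal -1/2
    return (n * np.eye(n) - np.ones((n, n))) / 2

def L_MC(N, N1):
    return np.block([[L1(N), np.zeros((N, N1))],
                     [-np.ones((N1, N)), L1(N1) + N * np.eye(N1)]])

def L_EB(N, N1):
    return np.block([[L1(N) + N1 * np.eye(N), -np.ones((N, N1))],
                     [np.zeros((N1, N)), L1(N1)]])

N, N1 = 5, 3
mc = np.sort(np.linalg.eigvals(L_MC(N, N1)).real)
eb = np.sort(np.linalg.eigvals(L_EB(N, N1)).real)
print(np.allclose(mc, np.sort([0] + [N/2]*(N-1) + [N] + [N1/2 + N]*(N1-1))))      # True
print(np.allclose(eb, np.sort([0] + [N1] + [N/2 + N1]*(N-1) + [N1/2]*(N1-1))))    # True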
We notice that, in Lemma <ref>, the zero eigenvalue appears in the spectrum of ℒ_N^(1) for the MC model and in the spectrum of ℒ_{N_1}^(1) for the EB model. The matrix ℒ_N^(1) is associated with the set S of competent agents, while the matrix ℒ_{N_1}^(1) is associated with the set U of incompetent agents. In the following result we prove that at each time t>0 the EB model leads the system toward suboptimal decisions with respect to the optimal one given by the MC model at a fixed time. As we see in the following theorem, the zero eigenvalue controls the asymptotic behavior of the system.

Let us consider a system of N+N_1 ∈ ℕ, N≥1, N_1≥1 interacting agents such that, at a given time, S={1,…,N} is the set of competent agents and U={N+1,…,N+N_1} is the set of incompetent agents. For the interaction function of Assumption <ref>, at each t>0 the collective decision in the EB case is not optimal.

Let us observe that for all N>1 we have dim ker(ℒ_N^(1)) = 1: the (N-1)×(N-1) top left minor in (<ref>) has non-zero determinant, while det(ℒ_N^(1)) = 0 for each N. Thus the rank of ℒ_N^(1) is N-1.

In the following we denote by m_a(λ) and m_g(λ) the algebraic and the geometric multiplicity of the eigenvalue λ. We now prove that the matrix ℒ_{N+N_1}^EB is diagonalizable. In Lemma <ref> we have shown that the eigenvalues of ℒ_{N+N_1}^EB are λ_1 = … = λ_{N-1} = N/2+N_1, λ_N = N_1, λ_{N+1} = … = λ_{N+N_1-1} = N_1/2, and λ_{N+N_1} = 0.

* Case m_g(0) = m_a(0) = 1. Notice that λ=0 belongs also to the spectrum of ℒ_{N_1}^(1), dim ker(ℒ_{N_1}^(1)) = 1, and (1,…,1) ∈ ker ℒ_{N_1}^(1). Further, det(ℒ_N^(1)+N_1 Id_N) ≠ 0, and thus dim ker(ℒ_{N+N_1}^EB) = 1.

* Case m_g(N_1/2) = m_a(N_1/2) = N_1-1. We have

ℒ_{N+N_1}^EB - (N_1/2) Id_{N+N_1} = ( ℒ_N^(1)+(N_1/2) Id_N   -𝒥
                                      𝒪                      ℒ_{N_1}^(1)-(N_1/2) Id_{N_1} ),

where dim ker(ℒ_{N_1}^(1)-(N_1/2) Id_{N_1}) = N_1-1 and det(ℒ_N^(1)+(N_1/2) Id_N) ≠ 0. Therefore

dim ker(ℒ_{N+N_1}^EB - (N_1/2) Id_{N+N_1}) = N_1-1.

* Case m_g(N_1) = m_a(N_1) = 1. Let us consider

ℒ_{N+N_1}^EB - N_1 Id_{N+N_1} = ( ℒ_N^(1)   -𝒥
                                  𝒪         ℒ_{N_1}^(1)-N_1 Id_{N_1} ),

where the eigenvalues of ℒ_{N_1}^(1)-N_1 Id_{N_1} are -N_1 and -N_1/2, so that det(ℒ_{N_1}^(1)-N_1 Id_{N_1}) ≠ 0. Moreover, dim ker(ℒ_N^(1)) = 1 and (1,…,1) ∈ ker ℒ_N^(1). Accordingly, dim ker(ℒ_{N+N_1}^EB - N_1 Id_{N+N_1}) = 1.

* Case m_g(N/2+N_1) = m_a(N/2+N_1) = N-1. We now consider the matrix

ℒ_{N+N_1}^EB - (N/2+N_1) Id_{N+N_1} = ( ℒ_N^(1)-(N/2) Id_N   -𝒥
                                        𝒪                    ℒ_{N_1}^(1)-(N/2+N_1) Id_{N_1} ),

where det(ℒ_{N_1}^(1)-(N/2+N_1) Id_{N_1}) ≠ 0 and dim ker(ℒ_N^(1)-(N/2) Id_N) = N-1. We have shown

dim ker(ℒ_{N+N_1}^EB - (N/2+N_1) Id_{N+N_1}) = N-1.

The solution of the system of differential equations (<ref>) is therefore equivalent to that of a diagonal system with diagonal entries given by the eigenvalues of Lemma <ref>. Therefore the collective decision in the EB case is not optimal.

Observe that the case N_1=0 is encompassed in Corollary <ref>, where, under Assumption <ref>, all entries of the matrix ℛ_N are r_ij = 1/2 for each i,j=1,…,N.

§ NUMERICS

In this section we present several numerical results in order to show the main features of the system (<ref>) under the hypotheses of maximum competence and equality bias. We consider a set of N=20 agents forming an interacting decision-making system, and compare the emerging asymptotic collective decisions in the cMC and EB regimes for several c≥1.

Concerning the evolution of the competence variable, we consider a background variable z ∈ ℝ^+ with uniform distribution C(z) ∼ U([0,1]). Further, the interaction function λ(x_i,x_j) introduced in (<ref>), representing the possible learning processes of low-skilled agents through the interaction with the more competent agents, is taken to be

λ(x_i,x_j) = λ̅ if x_i < x_j, and 0 if x_i ≥ x_j.

The numerics have been performed in the case λ̅ = λ_B = 10^{-2}. The ODE system (<ref>) has been solved with the RK4 method, considering for both competence and decision the time step Δt = 10^{-2} and the final time T_f = 10. The interaction terms of the evolving decision have been chosen of the form (<ref>) with Q(w_i,w_j) = 1 and R(x_i,x_j) describing the cMC and EB cases for increasing values of the parameter c>0. We consider a multi-agent system characterized at t=0 by strongly clustered decisions: the most competent agents with uniform distribution in w ∈ [-1,-0.75], x ∈ [0.75,1], and the less skilled agents with uniform distribution in w ∈ [0.75,1], x ∈ [0,0.25]. In all the tests the two populations of competent/incompetent agents are of equal size. In Figure <ref> and in Figure <ref> we compare the evolution of the system in the cMC case (blue line) and in the EB case (orange dashed line). The results are presented for c=1,5,10. We observe how the collective decision of the system strongly diverges in the EB case from the optimal decision, given by the cMC model with c≫1. Further evidence of the emerging suboptimality is given in Figure <ref>, where we depict the asymptotic collective decision of the multi-agent system evolving in the cMC and EB cases for increasing c=1,…,10.
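The full ODE system (<ref>) is not reproduced in this excerpt; as an illustration of how the zero eigenvalue selects the asymptotic decision, the following sketch integrates the linear consensus dynamics ẇ = -ℒw (an assumption: Q ≡ 1 and frozen competences, not the complete model) with RK4, using the same Δt = 10^{-2}, T_f = 10 and the clustered initial data of this section:

import numpy as np

def L1(n):
    return (n * np.eye(n) - np.ones((n, n))) / 2

N = N1 = 10                       # 20 agents, half competent, half incompetent
L_MC = np.block([[L1(N), np.zeros((N, N1))],
                 [-np.ones((N1, N)), L1(N1) + N * np.eye(N1)]])
L_EB = np.block([[L1(N) + N1 * np.eye(N), -np.ones((N, N1))],
                 [np.zeros((N1, N)), L1(N1)]])

rng = np.random.default_rng(0)
w0 = np.concatenate([rng.uniform(-1.0, -0.75, N),    # competent agents
                     rng.uniform(0.75, 1.0, N1)])    # incompetent agents

def rk4(L, w, dt=1e-2, Tf=10.0):
    f = lambda v: -L @ v
    for _ in range(int(Tf / dt)):
        k1 = f(w); k2 = f(w + dt/2*k1); k3 = f(w + dt/2*k2); k4 = f(w + dt*k3)
        w = w + dt/6 * (k1 + 2*k2 + 2*k3 + k4)
    return w

for L, name in [(L_MC, "MC"), (L_EB, "EB")]:
    print(name, "consensus ~", round(float(rk4(L, w0.copy()).mean()), 3))

The asymptotic value is fixed by the left null vector of ℒ: for MC it is supported on the competent block (consensus near -0.875), for EB on the incompetent block (consensus near +0.875), which is exactly the suboptimality of the Theorem above.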
§ CONCLUSIONS

We introduced microscopic models of interacting multi-agent systems for the description of decision-making processes, inspired by the experimental results in <cit.>. The introduction of the bias is here analytically studied at the level of the Laplacian matrix. We explicitly calculated the structure of the eigenvalues of the Laplacian matrix under simplified assumptions, both in the maximum competence and in the equality bias case. The suboptimality of the collective decision under equality bias with respect to the maximum competence case is then established for each time step. Numerical results show that the equality bias impairs the emergence of the decision of the most competent agents of the system.

§ ACKNOWLEDGEMENTS

PV would like to acknowledge Dr. Luigi Teodonio for his stimulating discussions about the topic of the paper. MZ acknowledges the "Compagnia di San Paolo".

[AP] G. Albi, L. Pareschi. Modeling of self-organized systems interacting with a few individuals: from microscopic to macroscopic dynamics. Applied Mathematics Letters 26(4): 397–401, 2013.
[AHP] G. Albi, M. Herty, L. Pareschi. Kinetic description of optimal control problems and applications to opinion consensus. Communications in Mathematical Sciences, 13(6): 1407–1429, 2015.
[APTZ] G. Albi, L. Pareschi, G. Toscani, M. Zanella. Recent advances in opinion modeling: control and social influence. In Active Particles Volume 1, Theory, Methods, and Applications, N. Bellomo, P. Degond, and E. Tadmor Eds., Birkhäuser-Springer, 2017.
[APZa] G. Albi, L. Pareschi, M. Zanella. Boltzmann-type control of opinion consensus through leaders. Philosophical Transactions of the Royal Society of London A: Mathematical, Physical and Engineering Sciences, 372(2028): 20140138, 2014.
[APZb] G. Albi, L. Pareschi, M. Zanella. Uncertainty quantification in control problems for flocking models. Mathematical Problems in Engineering, Vol. 2015, 14 pp., 2015.
[APZc] G. Albi, L. Pareschi, M. Zanella. Opinion dynamics over complex networks: kinetic modeling and numerical methods. Kinetic and Related Models, to appear.
[BOLRRF] B. Bahrami, K. Olsen, P. E. Latham, A. Roepstorff, G. Rees, C. D. Frith. Optimally interacting minds. Science, 329(5995): 1081–1085, 2010.
[BT] C. Brugna, G. Toscani. Kinetic models of opinion formation in the presence of personal conviction. Physical Review E, 92(5): 052818, 2015.
[CCR] J. A. Cañizo, J. A. Carrillo, J. Rosado. A well-posedness theory in measures for some kinetic models of collective behavior. Mathematical Models and Methods in Applied Sciences 21(3): 515–539, 2011.
[CFRT] J. A. Carrillo, M. Fornasier, J. Rosado, G. Toscani. Asymptotic flocking dynamics for the kinetic Cucker-Smale model. SIAM Journal on Mathematical Analysis, 42(1): 218–236, 2010.
[CFTV] J. A. Carrillo, M. Fornasier, G. Toscani, F. Vecil. Particle, kinetic and hydrodynamic models of swarming. In Mathematical Modeling of Collective Behavior in Socio-Economic and Life Sciences, G. Naldi, L. Pareschi, and G. Toscani Eds., Birkhäuser Boston, pp. 297–336, 2010.
[CFL] C. Castellano, S. Fortunato, V. Loreto. Statistical physics of social dynamics. Reviews of Modern Physics 81(2): 591, 2009.
[CC] A. Chakraborti, B. K. Chakrabarti. Statistical mechanics of money: how saving propensity affects its distribution. The European Physical Journal B - Condensed Matter and Complex Systems 17(1): 167–170, 2000.
[CPT] E. Cristiani, B. Piccoli, A. Tosin. Multiscale Modeling of Pedestrian Dynamics. MS&A: Modeling, Simulation and Applications, vol. 12, Springer International Publishing, 2014.
[CS] F. Cucker, S. Smale. Emergent behavior in flocks. IEEE Transactions on Automatic Control, 52(5): 852–862, 2007.
[DOCBC] M. R. D'Orsogna, Y. L. Chuang, A. L. Bertozzi, L. S. Chayes. Self-propelled particles with soft-core interactions: patterns, stability and collapse. Physical Review Letters 96(10): 104302, 2006.
[DMPW] B. Düring, P. Markowich, J. F. Pietschmann, M.-T. Wolfram. Boltzmann and Fokker-Planck equations modelling opinion formation in the presence of strong leaders. Proceedings of the Royal Society of London A: Mathematical, Physical and Engineering Sciences 465(2112): 3687–3708, 2009.
[G97] S. Galam. Rational group decision making: a random field Ising model at T=0. Physica A: Statistical Mechanics and its Applications, 238(1–4): 66–80, 1997.
[GZ] S. Galam, J.-D. Zucker. From individual choice to group decision-making. Physica A: Statistical Mechanics and its Applications, 287: 644–659, 2000.
[G] F. Galton. One vote, one value. Nature 75: 414, 1907.
[HF] N. Harvey, I. Fischer. Taking advice: Accepting help, improving judgment, and sharing responsibility. Organizational Behavior and Human Decision Processes 70(2): 117–133, 1997.
[HZ] M. Herty, M. Zanella. Performance bounds for the mean-field limit of constrained dynamics. Discrete and Continuous Dynamical Systems A, 37(4): 2023–2043, 2017.
[KD] J. Kruger, D. Dunning. Unskilled and unaware of it: how difficulties in recognizing one's own incompetence lead to inflated self-assessments. Journal of Personality and Social Psychology, 77(6): 1121–1134, 1999.
[MPNAS] A. Mahmoodi, D. Bang, K. Olsen, Y. A. Zhao, Z. Shi, K. Broberg, S. Safavi, S. Han, M. N. Ahmadabadi, C. D. Frith, A. Roepstorff, G. Rees, B. Bahrami. Equality bias impairs collective decision-making across cultures. Proceedings of the National Academy of Sciences 112(12): 3835–3840, 2015.
[MT1] S. Motsch, E. Tadmor. Heterophilious dynamics enhances consensus. SIAM Review 56(4): 577–621, 2014.
[MT2] S. Motsch, E. Tadmor. A new model for self-organized dynamics and its flocking behavior. Journal of Statistical Physics 144(5): 923–947, 2011.
[PT1] L. Pareschi, G. Toscani. Wealth distribution and collective knowledge: a Boltzmann approach. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 372(2028): 20130396, 2014.
[PT2] L. Pareschi, G. Toscani. Interacting Multiagent Systems: Kinetic Equations and Monte Carlo Methods. Oxford University Press, 2013.
[PVZ] L. Pareschi, P. Vellucci, M. Zanella. Kinetic models of collective decision-making in the presence of equality bias. Physica A: Statistical Mechanics and its Applications, 467: 201–217, 2017.
[Shen] J. Shen. Cucker–Smale flocking under hierarchical leadership. SIAM Journal on Applied Mathematics 68(3): 694–719, 2007.
[T] G. Toscani. Kinetic models of opinion formation. Communications in Mathematical Sciences 4(3): 481–496, 2006. | http://arxiv.org/abs/1703.09098v1 | {
"authors": [
"Pierluigi Vellucci",
"Mattia Zanella"
],
"categories": [
"physics.soc-ph",
"nlin.AO"
],
"primary_category": "physics.soc-ph",
"published": "20170327142322",
"title": "Microscopic modeling and analysis of collective decision making: equality bias leads suboptimal solutions"
} |
Institute of Physics, Academia Sinica, Taipei 11529, Taiwan
Institute of Physics, Academia Sinica, Taipei 11529, Taiwan
Institute of Atomic and Molecular Sciences, Academia Sinica, Taipei 10617, Taiwan

Spin-incoherent Luttinger liquid (SILL) is a different universality class from the Luttinger liquid. This difference results from the spin incoherence of the system when the thermal energy of the system is higher than the spin excitation energy. We consider a one-dimensional spin-1 Bose gas in the SILL regime and investigate its spin-dependent many-body properties. In the Tonks-Girardeau limit, we are able to write down the general wave functions in a harmonic trap. We numerically calculate the spin-dependent (spin-plus, minus, and zero) momentum distributions in the sector of zero magnetization, which allows us to demonstrate the most significant spin-incoherent features compared to the spinless or spin-polarized case. In contrast to the spinless Bose gas, the momentum distributions are broadened and in the large momentum limit follow the same asymptotic 1/p^4 dependence but with reduced coefficients. While the density matrices and momentum distributions differ between different spin components for small N, at large N they approach each other. We show these by analytic arguments and numerical calculations up to N=16.

Spin-incoherent Luttinger liquid of one-dimensional spin-1 Tonks-Girardeau Bose gas: Spin-dependent properties
H. H. Jen and S.-K. Yip
December 30, 2023
==============================================================================================================

§ INTRODUCTION

A plethora of studies on one-dimensional (1D) quantum systems <cit.> of gaseous atoms has thrived recently owing to the experimental achievements with 1D confined bosons <cit.>. Many studies focus on the ground state properties of spinless bosons <cit.>, such as spatial and momentum distributions <cit.>, quantum magnetism in a spinful Bose gas <cit.>, and low-energy excitations in the Luttinger liquid model. Meanwhile, a spinful quantum system in the spin-incoherent regime <cit.> also provides a new avenue for studying 1D quantum many-body systems. This regime is termed the spin-incoherent Luttinger liquid (SILL), which forms a different universality class from the Luttinger liquid: the temperature is high enough that different spin configurations can be regarded as degenerate, while low enough that charge excitations are suppressed. For the 1D spin-1 Bose gas <cit.> with s-wave scattering lengths satisfying |a_0-a_2| ≪ a_{0,2}, there exists a window of temperature in which the gas is in the SILL regime. This happens since the sound velocity is much larger than the spin velocity. In the crossover regime between the Luttinger liquid and the SILL, 1D fermions with tunable spins <cit.> and their high momentum tails <cit.> have been studied, which show an evident broadening of the momentum distributions <cit.>. Quantum criticality <cit.> and the Pomeranchuk effect <cit.> in the spin-incoherent regime are also theoretically predicted in the two-dimensional Hubbard model. Here, in contrast, we investigate the 1D spin-1 Bose gas <cit.> in the SILL regime in a harmonic trap, which has been studied only recently <cit.>. We shall focus on the Tonks-Girardeau (TG) regime <cit.>, where the density is sufficiently low that the effective repulsion between particles can be regarded as infinite. The TG spinor Bose gas is a special case of the SILL, since the exchange energy vanishes in this limit <cit.>. Therefore the TG gas is automatically in the SILL regime.
In the TG limit, we can write down the exact spatial wave functions, since the bosons are fermionized and impenetrable due to the effectively infinitely strong atom-atom interactions. We then numerically calculate the momentum distributions for the three individual components of the spin-1 Bose gas (spin-plus, minus, and zero). These predictions are measurable in spin-resolved matter-wave experiments, either time-of-flight experiments <cit.> or Bragg scattering spectroscopy <cit.>. This system allows for better demonstrations of SILL physics, within reach of present experimental conditions. Compared with electronic spin-1/2 systems <cit.>, ultracold atom experiments provide not only controllable spatial dimensions but also tunable atom-atom interactions via Feshbach resonances, thus making our investigations testable in quantum many-body systems. In Ref. <cit.> we derived the wave functions and the density matrix for the 1D spin-1 Bose gas in the TG limit and numerically calculated its momentum distributions, summed over spin components, for up to six bosons. The momentum distributions are uniformly broadened as the number of bosons N grows. We also derived the analytical large-momentum (p) asymptotics of the one-body momentum distributions, which show the universal 1/p^4 dependence, and formulated the coefficients of the asymptotic 1/p^4 for arbitrary N. Here we present the spin-dependent properties of the density matrix of the 1D spin-1 Bose gas in the TG limit, and show the spin-dependent momentum distributions up to N=16. We also obtain the spin-dependent coefficients of the asymptotic 1/p^4 at large p. Though the momentum distributions vary between different spin components for small N, they approach each other as N increases. We show this from the numerical results, accompanied by analytical arguments in the large N limit.

The rest of the paper is organized as follows. In Sec. II we introduce the general wave functions for the 1D spin-1 Bose gas. In Sec. III we derive the general forms of the density matrices for each spin component, with the individual spin function overlaps in the SILL regime, and present the numerically calculated results using the Monte Carlo integration method implemented with the Gaussian unitary ensemble. In Sec. IV we discuss the analytical derivation of the high-momentum asymptotics for each component, which we compare with the numerically calculated momentum distributions. We also investigate the momentum distributions in the large N limit using the method of steepest descent (stationary phase), and compare them with the numerical results. Finally we conclude in Sec. V.

§ GENERAL WAVE FUNCTIONS IN TG LIMIT

In general we can express the wave function of N bosons as

|Ψ⟩ = ∑_{s_1,s_2,…,s_N} ψ_{s_1,s_2,…,s_N}(x⃗) |s_1,s_2,…,s_N⟩,

where we denote by x⃗ = (x_1,x_2,…,x_N) and |s_1,s_2,…,s_N⟩ ≡ |s⃗⟩ the spatial coordinates and the spin configurations, respectively. Here s_i = +, -, or 0, respectively, for the spin-plus, minus, and zero components of the i-th particle. The total wave function must satisfy bosonic symmetry; it is therefore sufficient to consider only the ordered region x_1<x_2<…<x_N, and we can obtain all other regions via permutations of this ordered region. In the TG limit the atoms become fermionized, and their spatial wave functions take the Slater determinant form of noninteracting fermions.
For the symmetrized spatial part of the wave function, denoted ψ_{n⃗}^{sym}(x⃗), we can express it in terms of the eigenfunctions ϕ_{n_j}(x_j) of noninteracting fermions in a harmonic trap:

ψ_{n⃗}^{sym}(x⃗) = (1/√(N!)) 𝔸[ϕ_{n_1}(x_1), ϕ_{n_2}(x_2), …, ϕ_{n_N}(x_N)] × sgn(x_2-x_1) × sgn(x_3-x_2) × … × sgn(x_N-x_{N-1}).

We denote the sign function by sgn and the antisymmetrizer by 𝔸 for later convenience. The orbital indices are (n_1, n_2, …, n_N), and the prefactor 1/√(N!) normalizes the wave function. For convenience we use the dimensionless forms of the eigenfunctions ϕ_n(y),

ϕ_n(y) = (1/√(2^n n!)) (1/π^{1/4}) H_n(y) e^{-y^2/2},   y ≡ x/x_{ho},

where the H_n are Hermite polynomials. The harmonic oscillator length is x_{ho} ≡ √(ħ/(Mω)), where ω is the trap frequency and M is the atomic mass.

To eventually evaluate the density matrix for, say, the "+" component, we need to obtain the wave function amplitude where at least one particle has spin "+". First we consider some degenerate and normalized spin configuration state |χ⟩ in some sector of magnetization, so that the wave function can be expressed as |Ψ⟩ = ψ_{n⃗}^{sym}(x⃗)|χ⟩. Taking N=3 as an example, we obtain the probability amplitude ψ_{+,s_2,s_3}^{sym}(x,x_2,x_3) for the first particle having spin "+" when we project |Ψ⟩ onto ⟨s_1=+, x_1=x| in the ordered region x<x_2<x_3. To access the probability amplitudes in the other regions, we apply the permutation operators P_12 and P_123 to the projected states, obtaining

x<x_2<x_3:  ⟨(+,s_2,s_3)|χ⟩,
x_2<x<x_3:  ⟨(s_2,+,s_3)|χ⟩ = ⟨P_12(+,s_2,s_3)|χ⟩,
x_2<x_3<x:  ⟨(s_2,s_3,+)|χ⟩ = ⟨P_123(+,s_2,s_3)|χ⟩,

where we have suppressed the common ψ_{n⃗}^{sym}(x⃗) factors. Similar constructions apply for other N. In the next section we proceed to calculate the spin-dependent density matrices for the spin-1 Bose gas in the SILL regime.
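Before doing so, a minimal numerical sketch of this construction may be useful (using scipy's physicists' Hermite polynomials; only the modulus |ψ^{sym}|, which is all that enters the densities below, is computed):

import numpy as np
from scipy.special import eval_hermite, factorial

def phi(n, y):
    # dimensionless harmonic-oscillator eigenfunction of the equation above
    return eval_hermite(n, y) * np.exp(-y**2 / 2) / np.sqrt(2.0**n * factorial(n) * np.sqrt(np.pi))

def psi_sym_abs(xs, orbitals=None):
    # |psi^sym| = |det phi_{n_i}(x_j)| / sqrt(N!): the sign factors only carry the bosonic symmetry
    xs = np.asarray(xs, dtype=float)
    N = len(xs)
    orbitals = range(N) if orbitals is None else orbitals   # ground state: lowest N orbitals
    M = np.array([[phi(n, x) for x in xs] for n in orbitals])
    return abs(np.linalg.det(M)) / np.sqrt(factorial(N))

print(psi_sym_abs([-1.0, 0.2, 1.3]))   # N = 3 example, coordinates in units of x_ho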
§ DENSITY MATRICES FOR SILL OF SPIN-1 BOSE GAS

The spin-dependent single-particle density matrix can be written down straightforwardly from the wave function described above. For example, for the spin-plus component we have

ρ_+(x,x') = N ∑_{s⃗'} ∫ dx̄ ψ_{+,s⃗'}^*(x,x̄) ψ_{+,s⃗'}(x',x̄),

where x̄ ≡ (x_2,x_3,…,x_N) and s⃗' ≡ (s_2,s_3,…,s_N). The factor of N represents the N possible choices of x and x'. Again taking N=3 as an example, and considering only the region x<x' (which is symmetric to x>x'), the spin-plus single-particle density matrix for N=3 becomes

ρ_+(x<x') = 3 × 2 × { ∫_{x<x'<x_2<x_3} (E,E)_+ + ∫_{x<x_2<x'<x_3} (E,P_12)_+ + ∫_{x<x_2<x_3<x'} (E,P_123)_+ + ∫_{x_2<x<x'<x_3} (P_12,P_12)_+ + ∫_{x_2<x<x_3<x'} (P_12,P_123)_+ + ∫_{x_2<x_3<x<x'} (P_123,P_123)_+ } × ψ_{n⃗}^{sym*}(x,x_2,x_3) ψ_{n⃗}^{sym}(x',x_2,x_3) dx_2 dx_3,

where the parentheses () attached to the various integration regions represent the spin function overlaps. Taking (E,P_12)_+ as an example, where E is the identity permutation, we define

(E,P_12)_+ = ∑_{s_2,s_3} ⟨E(+,s_2,s_3)|χ⟩ ⟨P_12(+,s_2,s_3)|χ⟩.

Similar forms apply to the other spin function overlaps in Eq. (<ref>). The factor of 2 in Eq. (<ref>) comes from the contribution of the integration region x_2>x_3, where the spin function overlaps are the same as those with x_2<x_3.

In the SILL regime, we average the above individual spin function overlaps over the total number of spin state configurations, denoted Tr_χ(E) ≡ ∑_χ ⟨χ|E|χ⟩. This is simply the trace (Tr) of the identity operator over all spin configurations |χ⟩, since ⟨χ|E|χ⟩ = 1. We define the normalized spin function overlap in general as (using, for simplicity, the same notation as for the non-normalized one)

(P_{12…j}, P_{12…k})_+ = ∑_{s⃗'} ⟨P_{12…j}(+,s⃗')|P_{12…k}(+,s⃗')⟩ / Tr_χ(E),

where the P_{12…j} are j-particle cyclic permutation operators in the symmetric group S_N. To derive Eq. (<ref>) we have used the identity ∑_χ |χ⟩⟨χ| = 1. In general, (P_{12…j}, P_{12…k})_+ represents the spin function overlap from the integration region where the particle at x permutes to just behind x_j while the particle at x' permutes to just behind x_k.

In general, for arbitrary N, we obtain the spin-plus density matrix as

ρ_+(x<x') = N! { ∫_{x<x'<x_2<…<x_N} (E,E)_+ + ∫_{x<x_2<x'<…<x_N} (E,P_12)_+ + ∫_{x<x_2<x_3<x'<…<x_N} (E,P_123)_+ + … + ∫_{x_2<x<x'<…<x_N} (E,E)_+ + ∫_{x_2<x<x_3<x'<…<x_N} (E,P_12)_+ + … + ∫_{x_2<x_3<…<x_N<x<x'} (E,E)_+ } × ψ_{n⃗}^{sym*}(x,x̄) ψ_{n⃗}^{sym}(x',x̄) dx̄,

where we have used the properties (P_{12…j}, P_{12…j})_+ = (E,E)_+ and (E,P_{12…j})_+ = (P_{12…m}, P_{12…m+j-1})_+ for j,m ≥ 2. The first property follows from Eq. (<ref>) by using P_{12…j}^{-1} P_{12…j} = E^{-1} E = 1. To prove the second property, we can reduce P_{12…m}^{-1} P_{12…m+j-1} to

(P_{m-1,m}…P_{23}P_{12})^{-1} P_{m…m+j-1} (P_{m-1,m}…P_{23}P_{12}) = P_{12}^{-1} P_{23}^{-1} … P_{m-1,m}^{-1} P_{m…m+j-1} P_{m-1,m} … P_{23} P_{12} = P_{1,m+1…m+j-1},

such that (E, P_{1,m+1…m+j-1})_+, using again Eq. (<ref>), is exactly the same as (E,P_{12…j})_+.

The other spin components of the density matrices, ρ_-(x<x') and ρ_0(x<x'), can be derived similarly by characterizing the respective normalized spin function overlaps ()_{-,0}, which we evaluate below. From now on we restrict ourselves to the specific sector of total S_z ≡ ∑_{i=1}^N s_i = 0. For S_z close to N, the spin-1 Bose gas behaves not much differently from the polarized or spinless one. We therefore choose the sector of zero S_z, which, in contrast to S_z ≲ N, allows the SILL of the spin-1 Bose gas to be distinguished most significantly from the spinless bosons. The spin configurations |χ⟩ in this sector generally involve n pairs of (+-), e.g. |+++---00…0⟩ with n=3. The total number of states is

w_N ≡ Tr_χ(E) = ∑_{n=0}^{⌊N/2⌋} N! / [(n!)^2 (N-2n)!],

where ⌊N/2⌋ = N/2 for even N and (N-1)/2 for odd N; the count is obtained by permuting n (+)'s, n (-)'s, and (N-2n) (0)'s.

For the spin-plus component of the single-particle density matrix in Eq. (<ref>), the spin configuration |00…0⟩ with n=0 never contributes. Therefore we consider only spin configurations with at least one pair of (+-), so that |χ⟩ can generally be expressed as

|+…+ (n entries) -…- (n entries) 0…0 (N-2n entries)⟩.

The first + is projected out in ρ_+(x<x'), and thus the normalized spin function overlap is

(E,E)_+ = (1/w_N) ∑_{n=1}^{⌊N/2⌋} (N-1)! / [(n-1)! n! (N-2n)!],

averaged by w_N, the total number of states. We note that all arguments of the factorials must be greater than or equal to zero. (E,E)_+ is proportional to the number of states obtained by permuting the remaining (n-1) (+)'s, n (-)'s, and (N-2n) (0)'s. For (E,P_{12…j})_+ there is a contribution only when the first j entries are (+)'s,

|+…+ (j) +…+ (n-j) -…- (n) 0…0 (N-2n)⟩,

such that

(E,P_{12…j})_+ = (1/w_N) ∑_{n=j}^{⌊N/2⌋} (N-j)! / [(N-2n)! n! (n-j)!],

which counts the states obtained by permuting the remaining (n-j) (+)'s, n (-)'s, and (N-2n) (0)'s. In this specific sector of zero S_z, we note that (E,P_{12…j})_+ is in general nonvanishing only when j ≤ N/2.

The corresponding spin function overlaps in ρ_-(x<x'), namely (E,E)_- and (E,P_{12…j})_-, are the same as those in ρ_+(x<x'). For ρ_0(x<x') we have instead the spin function overlaps

(E,E)_0 = (1/w_N) ∑_{n=0}^{⌊(N-1)/2⌋} (N-1)! / [(n!)^2 (N-2n-1)!],
(E,P_{12…j})_0 = (1/w_N) ∑_{n=0}^{⌊(N-j)/2⌋} (N-j)! / [(n!)^2 (N-2n-j)!],

which respectively count the states whose first entry, or first j entries, are (0)'s. We note the identity

2(E,E)_+ + (E,E)_0 = 1.

This corresponds to particle number conservation, 2N_+ + N_0 = N, where N_{±(0)} ≡ ∫ dx ρ_{±(0)}(x,x). Thus the particle numbers are proportional to the spin function overlaps, N_{±(0)} = N (E,E)_{±(0)}. Furthermore, we note that

2(E,P_{12…j})_+ + (E,P_{12…j})_0 = w_{jN}/w_N,

where w_{jN} was defined in Ref. <cit.>,

w_{jN} ≡ ∑_{n=0}^{⌊N/2⌋} [ (N-j)! / ((n!)^2 (N-2n-j)!) + 2(N-j)! / ((n-j)! n! (N-2n)!) ].
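The counting formulas above are easy to check against brute-force enumeration over the 3^N spin states (a sketch):

from itertools import product
from math import factorial

def w(N):                                # total number of S_z = 0 states
    return sum(factorial(N) // (factorial(n)**2 * factorial(N - 2*n)) for n in range(N//2 + 1))

def EE_plus(N):
    return sum(factorial(N - 1) // (factorial(n - 1) * factorial(n) * factorial(N - 2*n))
               for n in range(1, N//2 + 1)) / w(N)

def EP_plus(N, j):                       # (E, P_{12...j})_+
    return sum(factorial(N - j) // (factorial(N - 2*n) * factorial(n) * factorial(n - j))
               for n in range(j, N//2 + 1)) / w(N)

def EE_zero(N):
    return sum(factorial(N - 1) // (factorial(n)**2 * factorial(N - 2*n - 1))
               for n in range((N - 1)//2 + 1)) / w(N)

def EP_zero(N, j):
    return sum(factorial(N - j) // (factorial(n)**2 * factorial(N - 2*n - j))
               for n in range((N - j)//2 + 1)) / w(N)

N, j = 8, 3
states = [s for s in product((1, 0, -1), repeat=N) if sum(s) == 0]
assert len(states) == w(N)
assert abs(EE_plus(N) - sum(s[0] == 1 for s in states) / w(N)) < 1e-12
assert abs(EP_plus(N, j) - sum(all(x == 1 for x in s[:j]) for s in states) / w(N)) < 1e-12
assert abs(EP_zero(N, j) - sum(all(x == 0 for x in s[:j]) for s in states) / w(N)) < 1e-12
assert abs(2 * EE_plus(N) + EE_zero(N) - 1) < 1e-12          # particle-number identity
print("ok")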
§.§ Spatial correlation in SILL of spin-1 Bose gas

The effect of the spin function overlaps in the SILL regime can be seen in Fig. <ref>, where we compare the spatial correlations of spin-1 [ρ(x,x')] and spinless bosons [ρ_spl(x,x')] at some chosen x in a harmonic trap. Spinless or spin-polarized bosons show a wider spatial distribution than the spin-1 case, indicating a sharper momentum distribution. Large |x-x'| in the spatial correlation corresponds to the small-p region. For spinless bosons it has been shown that in the bulk ρ_{spl,b}(x,x') ∝ |x-x'|^{-1/2}, so the small-p behavior is proportional to |p|^{-1/2} <cit.>. This narrow momentum distribution resembles that of a Bose-Einstein condensate, but not quite, since no condensation is allowed <cit.> due to the large quantum fluctuations of a 1D system. Therefore no off-diagonal long-range order can be present in the density matrix of a 1D Bose gas. However, a superfluid phase can exist in 1D quantum systems, possessing a power-law decay of the spatial correlations. This power-law decay is well described in the Luttinger liquid model using the bosonization method <cit.>. In a harmonic trap, as shown in Fig. <ref>, the spatial correlations of ρ_spl(x,x') are similar to those in the bulk over a moderate region of |x-x'|, until the correlation decays faster at the edge of the trap (x' ≳ 4x_ho). In the trap, ρ_spl(p=0) is finite, and it has also been shown that ρ_spl(p=0) ∝ N in the large N limit <cit.>.

In sharp contrast to the spinless bosons in Fig. <ref>, the spin-1 Bose gas in the SILL regime shows an exponential decay of its spatial correlation, and is therefore not condensed. This exponential decay has been predicted for the single-particle Green's function of quantum wires in the SILL regime <cit.>, distinguishing it from the Luttinger liquid with its power-law decays. In momentum space, on the other hand, spin incoherence tends to broaden the distributions, as has been investigated in the t-J model <cit.> and in a uniform two-component gas <cit.>. Similarly, spin-1 bosons in the SILL regime also have a broadened momentum distribution due to the averaging of the spin function overlaps, which we discuss in more detail below. The large-p behavior will be discussed later in Sec. IV A.

§.§ Momentum distribution in SILL of spin-1 Bose gas

We define the spin-dependent momentum distributions as

ρ_{±(0)}(p) = (1/2π) ∫_{-∞}^{∞} dx ∫_{-∞}^{∞} dx' e^{ip(x-x')} ρ_{±(0)}(x,x'),

where we set ħ=1. We then numerically calculate the momentum distributions of the three components of the 1D TG Bose gas based on Eq. (<ref>), ρ_-(x<x'), and ρ_0(x<x'). In Fig. <ref>, both spin components ρ_+(p) and ρ_0(p) are uniformly broadened as N grows, and ρ_+(p) ≠ ρ_0(p) for finite N.
The effect of spin incoherence also averages out the oscillatory structure that is present in the momentum distribution of a specific spin state of the spinor Bose gas <cit.>. Furthermore, the peaks of ρ_0(p) are larger than those of ρ_+(p) up to N=16. This is because N_0 ≥ N_± in general, and the spin function overlaps (E,P_{12…j})_0 are always larger than (E,P_{12…j})_+, which we show more specifically in Fig. <ref> of the Appendix. For spinless bosons, the peaks of ρ_spl(p) scale as ρ_spl(p=0) ∝ N <cit.>. Here the spin-1 Bose gas in the SILL regime shows fitted scalings ρ_+(p=0) ∝ N^{0.49} and ρ_0(p=0) ∝ N^{0.66} from Fig. <ref>. These reduced scalings again reflect the broadened momentum distributions in the SILL regime.

To calculate ρ_{±(0)}(x,x') we implement the Gaussian unitary ensemble (GUE) <cit.> to speed up the convergence of the Monte Carlo (MC) integration. The GUE draws a series of (N-1) random numbers for x̄, which are repulsively distributed due to the joint probability density ∝ Π_{1≤i<j≤N-1}(x_i-x_j)^2. This implementation of the GUE enables our MC integration to reach N=16, which in this case takes about 140 hours with MC simulations of M=10^6 sets of random numbers on 200 parallel CPU cores. All MC simulations in Fig. <ref> use M=10^7, except N=16 with M=10^6.

In the next section we investigate the asymptotic forms in the large momentum limit, which show the 1/p^4 decay, and the momentum distributions in the large N limit.

§ MOMENTUM DISTRIBUTIONS IN HIGH P AND LARGE N LIMITS

§.§ Asymptotic high p limit

For spinless bosons in the TG limit, the relative wave function between two particles at short distance is ψ_rel(x,x') ∝ |x-x'|, indicating impenetrable bosons and corresponding to the feature of fermionic repulsion. It has been shown <cit.> that in the bulk, ρ_{spl,b}(x,x') at short distance is proportional to [1+…+|x-x'|^3/(9π)+…]. The non-analytic |x-x'|^3 term in the short-distance correlation thus gives a universal 1/p^4 asymptotic in the large momentum limit. This universal 1/p^4 asymptotic is not unique to a Bose gas with two-body contact interactions <cit.>: it also shows up in Tan's relations <cit.> for the two-component Fermi gas <cit.>. For the 1D spin-1 TG Bose gas, the analytical result for the high-p asymptotic total momentum distribution ρ(p) has been derived <cit.>, showing the same universal 1/p^4 dependence. Similarly, for the spin-dependent components it can be written straightforwardly as

ρ_{±(0)}(p) (p→∞) = {2[(E,E)_{±(0)}+(E,P_12)_{±(0)}] / (2π p^4)} × ∑_{(n_i,n_j)} ∫_{-∞}^{∞} dx | det( ϕ'_{n_i}(x)   ϕ'_{n_j}(x)
                                                                                                  ϕ_{n_i}(x)    ϕ_{n_j}(x) ) |^2,

where (n_i,n_j) runs over all possible pairs of the N occupied harmonic oscillator eigenfunctions.
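The orbital sum entering this formula is straightforward to evaluate numerically; a sketch, with derivatives taken on a fine grid:

import numpy as np
from itertools import combinations
from scipy.special import eval_hermite, factorial

y = np.linspace(-12, 12, 6001)
dy = y[1] - y[0]

def phi(n):
    return eval_hermite(n, y) * np.exp(-y**2 / 2) / np.sqrt(2.0**n * factorial(n) * np.sqrt(np.pi))

def orbital_sum(N):
    # sum over pairs (n_i, n_j) of  int dx (phi'_i phi_j - phi'_j phi_i)^2
    fns = [phi(n) for n in range(N)]
    dfs = [np.gradient(f, y) for f in fns]
    return sum(((dfs[i] * fns[j] - dfs[j] * fns[i])**2).sum() * dy
               for i, j in combinations(range(N), 2))

for N in (2, 3, 6):
    print(N, orbital_sum(N))
# N = 2 gives sqrt(2/pi) ~ 0.798; multiplying by 2[(E,E)+(E,P_12)]/(2 pi p^4) yields the tail.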
The asymptotic form depends on the spin function overlap (E,P_12)_{±(0)} because the significant contributions come only from the integration regions x<x_j<x' and x'<x_j<x, for all x_j ∈ x̄ with x ≈ x'. The asymptotic form for spinless bosons can also be obtained, by replacing [(E,E)_{±(0)}+(E,P_12)_{±(0)}] with 2 in Eq. (<ref>). We note that, using Eqs. (<ref>) and (<ref>), we have 2ρ_+(p)+ρ_0(p) = ρ(p), where the last quantity was computed in Ref. <cit.>. The spin-1 Bose gas in the SILL regime shows very different properties from the spin-coherent one in the coefficients of the high-p asymptotics. The coefficients are always smaller than those of spinless bosons, since (E,E)_{±(0)}, (E,P_12)_{±(0)} < 1. Moreover, for large N, [(E,E)_{±(0)}+(E,P_12)_{±(0)}] → [1/3+(1/3)^2] = 4/9 from Eq. (<ref>), again less than the value 2 of the spinless case. As an example, in Fig. <ref> we compare the numerical and analytical results for ρ_+(p) in the high-p limit. The numerically calculated high-p asymptotics approach the analytical ones approximately. For even larger p x_ho ≳ 7, the curves either drop and cross the analytical asymptotics, or bounce back and oscillate, indicating the inaccuracy of the numerical results in these regions. Reaching accurate high-p asymptotics is quite demanding for MC integration and consumes ever more CPU time for larger N. However, the MC simulations already achieve an accuracy of 10^{-3} and 10^{-2} in the momentum distributions for N=2-3 and 10, respectively.

We have also evaluated numerically the potential (⟨V⟩) and kinetic (⟨K⟩) energies. Since our 1D bosonic TG gas has the same density distribution as that of a Fermi gas, we have ⟨V⟩ = ⟨K⟩ = N^2ħω/4, equivalent to half of the total energy, which complies with the virial theorem <cit.>. In Ref. <cit.> we concatenated ρ(p) with the analytically derived asymptotic tails to improve the energy calculations. Here we directly use the momentum distributions ρ(p) calculated by MC simulations implemented with the GUE. We find that the numerical results for these energies improve to relative errors below 7% and 10% for N=2-7 and 16, respectively, with respect to the exact values of ⟨V⟩ and ⟨K⟩. This further shows the advantage of the GUE for the convergence and accuracy of our numerical results.

In Fig. <ref> we plot the difference of the spin-plus and spin-zero momentum distributions, ρ_+(p)-ρ_0(p), calculated numerically from Fig. <ref>. The difference gradually goes away as N increases, indicating that the two components approach each other in the large N limit. The dips around p ∼ 0 demonstrate that the peaks of ρ_0(p) are always higher than those of ρ_+(p), which is due to the larger spin function overlaps of the spin-0 component. A special feature of the peaks for N=2 is a wider ρ_+(p) than ρ_0(p), while this feature is not obvious for larger N.

§.§ Large N limit

Due to the limits of numerical integration, we can only calculate the single-particle density matrix of spin-1 bosons up to N=16. For finite N we have demonstrated numerically that ρ_+(p) ≠ ρ_0(p), since in general N_0 ≥ N_+ and the spin function overlaps (E,P_{12…j})_0 are always larger than (E,P_{12…j})_+; thus the peaks of ρ_0(p=0) are larger than those of ρ_+(p=0). In this subsection we investigate the momentum distributions in the large N limit. The study of this limit gives insight into practical experiments, where several hundreds or thousands of atoms are involved. To investigate the individual components of the spin-1 momentum distributions at large N, we need the asymptotic forms of the various spin function overlaps. These can in general be written as

(E,E)_+ = f^{(N-1)}_1 / f^{(N)}_0,   (E,P_{12…j})_+ = f^{(N-j)}_j / f^{(N)}_0,
(E,E)_0 = f^{(N-1)}_0 / f^{(N)}_0,   (E,P_{12…j})_0 = f^{(N-j)}_0 / f^{(N)}_0,

where

f^{(N)}_k ≡ ∑_{j=0}^{⌊(N-k)/2⌋} N! / [(k+j)! j! (N-2j-k)!].

The f^{(N)}_k are just the coefficients of x^k in the expansion of (x+x^{-1}+1)^N. We then further express them in terms of a complex integral, as shown in the Appendix, and find the asymptotic form of f^{(N)}_k in the large N limit using the method of steepest descent (stationary phase) <cit.>. In Fig. <ref> of the Appendix, the asymptotic forms of Eq. (<ref>) are compared with the exact ones of Eqs. (<ref>), (<ref>), (<ref>), and (<ref>); they approach the exact ones in the large N limit for small j in Eq. (<ref>).
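A direct check that the f^{(N)}_k defined above indeed collect the coefficients of (x+x^{-1}+1)^N, together with the overlap identity 2(E,E)_+ + (E,E)_0 = 1 in this language (a sketch):

from math import factorial

def f(N, k):
    k = abs(k)
    return sum(factorial(N) // (factorial(k + j) * factorial(j) * factorial(N - 2*j - k))
               for j in range((N - k)//2 + 1))

def f_poly(N, k):
    # expand (x + 1 + 1/x)^N step by step and read off the coefficient of x^k
    coeffs = {0: 1}
    for _ in range(N):
        nxt = {}
        for d, c in coeffs.items():
            for step in (-1, 0, 1):
                nxt[d + step] = nxt.get(d + step, 0) + c
        coeffs = nxt
    return coeffs.get(k, 0)

N = 12
assert all(f(N, k) == f_poly(N, k) for k in range(-N, N + 1))
# 2(E,E)_+ + (E,E)_0 = 1 is equivalent to 2 f(n-1, 1) + f(n-1, 0) = f(n, 0)
assert all(2 * f(n - 1, 1) + f(n - 1, 0) == f(n, 0) for n in range(2, 20))
print("ok")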
Therefore we shall use the asymptotic forms to compare the spin-plus with the spin-0 components of the spin function overlaps for large N. We define their relative deviations as

|(E,P_{12…j})_+ - (E,P_{12…j})_0| / (E,P_{12…j})_0,

which asymptotically approaches

|f̄^{(N-j)}_j - f̄^{(N-j)}_0| / f̄^{(N-j)}_0

in the large N limit, where f̄^{(N-j)}_j is the asymptotic form of f^{(N-j)}_j. This asymptotic form allows us to compute the deviations for even larger N than with the exact formulas. In Fig. <ref>, the relative deviations decay as N increases for small j, and drop below 10^{-2} for N ∼ 600 with j ≤ 3. Note that for a moderate j=10 the deviation only falls to 0.1 for N as high as 600. Since the spin function overlaps with small j contribute much more to the momentum distributions than those with j ≲ N/2 [see Figs. <ref>(a) and (b)], and their relative deviations are much smaller, we expect the spin-plus momentum distribution to approach that of spin-0 as N increases. We have also shown this trend in Fig. <ref> for finite N, up to sixteen particles.

This can be further confirmed by studying the respective contributions of the integration regions to the single-particle density matrix ρ_+(x<x') of Eq. (<ref>). In Fig. <ref> we plot the results for the most significant 12 out of the total of 15 integration regions for the case N=5, with and without the multiplication by the spin function overlaps. The ordering of the integration regions can be read off from Eq. (<ref>). We denote the first five regions by x<x_2, with x' moving sequentially from x'<x_2 to x_5<x'; the next four regions (6th to 9th) by x_2<x<x_3, with x' moving sequentially from x_2<x'<x_3 to x_5<x'; and so on for the rest of the regions. A feature of hard-core bosons in a 1D harmonic trap is that the atoms prefer to distribute evenly in space: the particles at x̄ repel each other due to the strongly repulsive interactions of the TG limit. Thus the most significant peaks of the integral contributions occur roughly at the corresponding integration regions, depending on (x,x'). For example, in Fig. <ref>(a) the spatial correlation at (x,x')=(-1,0) has its largest contribution in the seventh integration region, x_2<x<x_3<x'<x_4<x_5. Similarly, the spatial correlation at (x,x')=(-1,3) has its largest contribution in the ninth integration region, x_2<x<x_3<x_4<x_5<x'. In Fig. <ref>(b) we multiply these values by the spin function overlaps (inset), and show that the significant contributions come from the overlaps with small j. For the four specific spatial correlations chosen here, the significant values lie in the 6th, 7th, 10th, and 11th integration regions, which correspond to the spin function overlaps (E,E)_+, (E,P_12)_+, (E,E)_+, and (E,P_12)_+, respectively. Moreover, as expected, the spatial correlations decay as |x-x'| increases, reminiscent of the exponential decay discussed in Sec. III A.

For TG bosons in the bulk, with length L in the thermodynamic limit, we can use the analytical expression Eq. (<ref>) with ϕ_{n_i}(x) = e^{ik_i x}/√L, the eigenmodes n_i → k_i. The spatial integral in Eq. (<ref>) can then be calculated as

∫ dx | det( ϕ'_{n_i}(x)   ϕ'_{n_j}(x)
            ϕ_{n_i}(x)    ϕ_{n_j}(x) ) |^2 = (1/L^2) ∫ dx | det( ik_i e^{ik_i x}   ik_j e^{ik_j x}
                                                                  e^{ik_i x}        e^{ik_j x} ) |^2 = (k_i-k_j)^2 / L.

In the continuum limit of (k_i,k_j), corresponding to the pairs (n_i,n_j) in the summation, the bulk ρ_{spl,b}(p→∞) is

ρ_{spl,b}(p→∞)/L = [2/(π p^4)] ∫_{-k_F}^{k_F} (dk_i/2π) ∫_{-k_F}^{k_i} (dk_j/2π) (k_i-k_j)^2,

where k_F = πN/L. Letting k_i = k_F x and k_j = k_F y, the above becomes

ρ_{spl,b}(p→∞)/L = [k_F^4/(2π^3 p^4)] ∫_{-1}^{1} dx ∫_{-1}^{x} dy (x-y)^2 = 2k_F^4/(3π^3 p^4),

which is the same as Eq. (67) in Ref. <cit.>, up to a factor of 2π from our definition of the Fourier transform in Eq. (<ref>). Our derivation parallels the use of the short-distance expansion of ρ_{spl,b}(x,x') <cit.>, whose non-analytic term |x-x'|^3/(9π), after Fourier transformation, gives the same result.
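The double integral behind the bulk coefficient can be checked in a few lines (a sketch):

from scipy import integrate
# int_{-1}^{1} dx int_{-1}^{x} dy (x-y)^2 = 4/3, giving 2 k_F^4 / (3 pi^3 p^4)
val, err = integrate.dblquad(lambda y, x: (x - y)**2, -1, 1, lambda x: -1.0, lambda x: x)
print(val)   # 1.3333...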
For large N spinless bosons in a harmonic trap, we can use the local-density approximation (LDA) for the local chemical potential in the Thomas-Fermi limit, which reads μ(x) = μ - αx^2/2 with α = Mω^2. The cutoff momentum is k_F(x) = √(2M(μ-αx^2/2)). Since the maximum occupied mode of 1D hard-core bosons at infinite interactions is approximately n_max ≈ N, the chemical potential can be determined as μ = n_max ω ≈ Nω. We further define the boundary x_max = √(2Nω/α), so that k_F(x) can be re-expressed as √(Mα(x_max^2-x^2)). Applying the LDA to Eq. (<ref>), we have for spinless bosons in a harmonic trap

ρ_{spl,LDA}(p→∞) = [2/(3π^3)] ∫_{-x_max}^{x_max} dx k_F^4(x)/p^4 = [2(Mα)^2/(3π^3 p^4)] ∫_{-x_max}^{x_max} dx (x_max^2-x^2)^2 = [2^7√2/(45π^3)] N^{5/2} / (p^4 x_ho^3),

where the N^{5/2} scaling disagrees with Ref. <cit.>, which gave a different scaling of N^{3/2}. The N^{5/2} scaling has also been reported for the 1D SU(κ) Fermi gas with κ≠1 <cit.>. The coefficient in Eq. (<ref>) is the same as for the TG Fermi gas in the κ→∞ limit. We note that the coefficient of the scaling depends on the many-body state and is related to the slope of the energy (-dE/dg_1D^{-1}) <cit.>.

For the spinful case of our spin-1 Bose gas in the large N limit, the asymptotic coefficient [(E,E)_{±(0)}+(E,P_12)_{±(0)}] becomes [1/3+(1/3)^2] = 4/9 according to Eq. (<ref>), so that the spin-dependent and total momentum distributions in the thermodynamic limit become, respectively,

ρ_{±(0)}(p→∞)|_{N→∞} = (1/2)(4/9) × ρ_{spl,LDA}(p→∞) = [2^8√2/(405π^3)] N^{5/2} / (p^4 x_ho^3),

and

ρ(p→∞)|_{N→∞} = 3 × ρ_{±(0)}(p→∞)|_{N→∞} = [2^8√2/(135π^3)] N^{5/2} / (p^4 x_ho^3),

which is 2/3 of Eq. (<ref>). We denote by ρ_spl(p→∞) the momentum distribution of spinless bosons in the high-p limit, which can be derived by replacing [(E,E)_{±(0)}+(E,P_12)_{±(0)}] with 2 in Eq. (<ref>). We then define c(N) ≡ ρ_spl(∞)(2^6√2)^{-1} N^{-5/2}(45π^3 p^4 x_ho^3), with c(∞)=2 according to Eq. (<ref>). The c_{±(0)}(N) can be defined in the same way for ρ_{±(0)}(∞). These coefficients can be calculated using Eq. (<ref>). In Fig. <ref> we plot c(N) and c_{+(0)}(N) to show how they approach the asymptotic large N values. We find that Eqs. (<ref>) and (<ref>) already give good enough estimates for N ≳ 20 and 30 for spinless and spin-1 bosons, respectively. The relative deviation |c(N)-c(∞)|/c(∞) is 0.14% for N=40, while for c_{+(0)}(40) it reaches 1.9% (1.5%). In Fig. <ref>(b), c_0 > c_+, which again indicates that the spin function overlaps satisfy (E,P_{12…j})_0 > (E,P_{12…j})_+ and N_0 > N_+.
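Similarly, the LDA integral and the resulting coefficient can be verified numerically; a sketch in units ħ = M = ω = 1, so that α = 1 and x_ho = 1:

import numpy as np
from scipy import integrate

N = 40.0
x_max = np.sqrt(2 * N)
val, _ = integrate.quad(lambda x: (x_max**2 - x**2)**2, -x_max, x_max)   # = (16/15) x_max^5
lda = 2 / (3 * np.pi**3) * val                    # coefficient of 1/p^4 from the LDA integral
closed = 2**7 * np.sqrt(2) / (45 * np.pi**3) * N**2.5
print(lda, closed, np.isclose(lda, closed))       # agree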
§ CONCLUSION

In conclusion, we have investigated the spin-dependent properties of the spin-1 Bose gas in the regime of the spin-incoherent Luttinger liquid (SILL). The three components (spin-plus, zero, and minus) of the single-particle density matrix of this universality class can be calculated by deriving the respective spin function overlaps, which result from the highly degenerate spin configurations in the SILL regime. In contrast to spinless bosons, with a power-law decay of the spatial correlation, spin-1 bosons in the TG limit show an exponentially decaying spatial correlation, which indicates a broadened momentum distribution and a universality class different from the Luttinger liquid. The universal 1/p^4 dependence in the high-p limit is also present in the spin-dependent momentum distributions. This asymptotic has an N^{5/2} scaling with a coefficient reduced from that of spinless bosons. The coefficients of the asymptotics are proportional to Tan's contact and can be observed in experiments as one of the signatures of the SILL. We compared these analytical predictions with numerical results calculated by Monte Carlo (MC) integration with the Gaussian unitary ensemble (GUE) for up to sixteen bosons. The MC integration implemented with the GUE converges faster and gives more accurate results, allowing us to reach higher-p regions. The high-momentum tails approximately and asymptotically follow the reduced coefficients we derived analytically.

For the S_z=0 sector, we show that the spin-0 component always has a larger peak than the spin-plus momentum distribution for finite N. This can be explained by the spin function overlaps, which are larger for the spin-0 density matrix than for the spin-plus one. While they differ for small N, they coincide in the large N limit. This indicates that highly incoherent bosons form in this limit, with each component occupying exactly one third of the total number of particles. The ultracold spinor Bose gas allows a potential realization of this universality class of the SILL, and our results offer a testable paradigm for studying quantum many-body phenomena in 1D strongly interacting bosons.

§ ACKNOWLEDGEMENTS

This work is supported by the Ministry of Science and Technology, Taiwan, under Grant No. 104-2112-M-001-006-MY3. The work of SKY was partially supported by a grant from the Simons Foundation, and was performed at the Aspen Center for Physics, supported by National Science Foundation Grant No. PHY-1066293.

§ SPIN FUNCTION OVERLAPS IN LARGE N LIMIT

Before deriving the spin function overlaps in the large N limit, we first express them in terms of an integral and then introduce the method of stationary phase, or steepest descent, to solve for their asymptotic forms <cit.>. Consider the function

f^{(N)}(x) = (x+x^{-1}+1)^N = ∑_{j_2=0}^{N} ∑_{j_1=0}^{N-j_2} N! / (j_1! j_2! (N-j_1-j_2)!) x^{j_1-j_2},

where the second equality follows from the multinomial expansion. Setting k = j_1-j_2 and j = j_2, we have

f^{(N)}(x) = ∑_{k=-N}^{N} ∑_{j=0}^{⌊(N-k)/2⌋} N! / ((k+j)! j! (N-2j-k)!) x^k,

where the upper bound of the index j is obtained by solving for j_2 in the equations j_1+j_2 = N and j_1-j_2 = k. From Eq. (<ref>) we define the coefficient of x^k in f^{(N)}(x) as f^{(N)}_k, which is the same as the coefficient of x^{-k}. Comparing with Eqs. (<ref>), (<ref>), (<ref>), and (<ref>) of the main text, we find that the spin function overlaps can be expressed as in Eq. (<ref>) of the main text. These coefficients can be calculated as

f^{(N)}_k = (1/2πi) ∮_C dz z^{k-1} (z+z^{-1}+1)^N,

where the coefficient f^{(N)}_k (of z^{-k} in this case) is exactly the residue at the pole z=0, with z a complex number. Here C denotes a contour integration along a circle around the origin, in the counterclockwise direction.

The asymptotic behavior of the integral can be obtained in the large N limit. Letting z = e^{iθ}, the coefficient becomes

f^{(N)}_k = (1/2π) ∫_{-π}^{π} dθ e^{ikθ}(1+2cosθ)^N = (1/2π) ∫_{-π}^{π} dθ e^{ℱ(θ)},

where ℱ(θ) ≡ N ln(1+2cosθ) + ikθ.
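Before the saddle-point analysis, the integral representation itself is easy to verify by quadrature; a sketch (the equally spaced grid is spectrally accurate for this periodic integrand):

import numpy as np
from math import factorial

def f_exact(N, k):
    return sum(factorial(N) // (factorial(k + j) * factorial(j) * factorial(N - 2*j - k))
               for j in range((N - k)//2 + 1))

def f_quad(N, k, M=4096):
    theta = np.linspace(-np.pi, np.pi, M, endpoint=False)
    vals = np.exp(1j * k * theta) * (1 + 2 * np.cos(theta))**N
    return vals.mean().real          # (1/2pi) * integral over a 2pi interval

N, k = 20, 6
print(f_exact(N, k), round(f_quad(N, k)))   # equal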
Changing variables θ → w in the complex plane and using the method of steepest descent <cit.> for this highly oscillatory integral in the large N limit, we first find the saddle points, which satisfy the first derivative condition ℱ'(w) = 0. The integrals are then dominated by the local maxima along the integration contour passing through the saddle points. The saddle points are located at

-2 sin w + (ik/N)(1+2cos w) = 0,

which, after replacing the trigonometric functions with exponentials, becomes

e^{iw} = [ -k/N ± √((k/N)^2 + 4(1-k^2/N^2)) ] / [ 2(1+k/N) ].

If k→0, the above gives multiple roots satisfying e^{iw} = ±1, namely w = 2nπ and ±(2n+1)π for integers n, indicating multiple saddle points in this integral. We consider the integration path of Fig. <ref>, where only three saddle points (n=0) are involved. Below we demonstrate why this contour is valid and guarantees that we follow the valleys of steepest descent between these three saddle points, which are determined by the sign of ℱ''(w).

First, to calculate ℱ''(w_0), we define Q ≡ √(4-3(k/N)^2), and from Eq. (<ref>) with the "+" sign we have

e^{iw_0} = (-k/N+Q) / (2(1+k/N)),   e^{-iw_0} = (k/N+Q) / (2(1-k/N)),

where w_0 is in general purely imaginary. We further use the above to write

cos w_0 = [(k/N)^2+Q] / (2[1-(k/N)^2]),   1+2cos w_0 = (1+Q) / (1-(k/N)^2),   sin w_0 = (i/2)(k/N)(1+Q) / (1-(k/N)^2).

Now the second derivative of ℱ at w_0 becomes

ℱ''(w_0) = N[ -2cos w_0/(1+2cos w_0) - 4sin^2 w_0/(1+2cos w_0)^2 ] = -N Q[1-(k/N)^2] / (1+Q),

which is always less than zero; for example, ℱ''(w_0) = -2N/3 and -Nϵ, respectively, at small and large k (k/N = 1-ϵ with ϵ ≳ 0).

Next, for ℱ''(w_{±1}) at the other two saddle points, denoted w_{±1}, we have from Eq. (<ref>) with the "-" sign

e^{iw_{±1}} = (-k/N-Q) / (2(1+k/N)),   e^{-iw_{±1}} = (k/N-Q) / (2(1-k/N)),

where the w_{±1} are in general complex. Setting w_{±1} = ±π + iy_1 with real y_1, we have e^{-y_1} ≲ 1 and e^{-y_1} ≈ 1/2, respectively, for k→0 and k→N, suggesting y_1 ≳ 0 at small k. Again we can use the above to write
Finally we obtain the asymptotic form of f^(N)_k in large N limit as f̅^(N)_k=1/2√(π)[e^ℱ(w_0)/√(-ℱ”(w_0)/2)+e^ℱ(w_1)/√(-ℱ”(w_1)/2)].Before we write down the explicit form for the above, it is useful to derivee^ℱ(w_0)=[1+Q/1-(k/N)^2]^N[Q-k/N/2(1+k/N)]^k, which become 3^N and 1/ϵ^Nϵ respectively for small and large k. Also we have e^ℱ(w_± 1)=(-1)^N+k[Q-1/1-(k/N)^2]^N[Q+k/N/2(1+k/N)]^k,which become (-1)^N+k and (3/4)^N2^Nϵ/(-1)^Nϵ respectively for small and large k. Inserting the functions of the saddle points from Eqs. (<ref>), (<ref>), (<ref>), and (<ref>), we have f̅^(N)_k=[(1+Q)^N+1/2(Q-k/N)^k. .+(-1)^N+k(Q-1)^N+1/2(Q+k/N)^k] ×[2(1+k/N)]^-k/√(2NQπ)[1-(k/N)^2]^N+1/2. We note that the main contribution in Eq. (<ref>) comes from the saddle point w_0. To have some estimates of Eq. (<ref>), we have f̅^(N)_k→ 0=3^N/2√(π)√(N/3), f̅^(N)_k→ N=1/√(2π Nϵ)·ϵ^Nϵ.From the above result at small k and according to Eq. (<ref>), we can show that (E,P_12...j+1)_+(0)/(E,P_12...j)_+(0)|_N→∞ = 1/3, (E,E)_+|_N→∞ =(E,E)_0|_N→∞= 1/3, which respectively indicates one third decrease for the spin function overlaps when j increases by one, and the populations of N_+ and N_0 coincide in large N limit. In the large N limit, despite the constraint S_z=0, the probability of finding a particle in any of the spin states, "+", "0", or "-", is 1/3 and is irrespective of the spins of the other particles (if j≪N). This decrease in spin function overlaps also reflects on the exponential decay in spatial correlations discussed in Sec. III. A.To have some estimates of the spin function overlaps and their asymptotic forms in large N limit, in Fig. <ref> we compare (E,P_12...j)_+,0 of Eqs. (<ref>) and (<ref>) with f̅^(N-j)_j,0/f̅^(N)_0 from Eq. (<ref>). In Figs. <ref>(a) and (b), the values of spin function overlaps decrease rather fast in logarithmic scales as j increases, while (E,P_12...j)_0 is always larger than (E,P_12...j)_+. As a comparison, we define the relative deviations as |(E,P_12...j)_+,0-f̅^(N-j)_j,0/f̅^(N)_0/(E,P_12...j)_+,0|, which we show in Figs. <ref>(c) and (d), indicating of a good asymptotic form from our derivations for small j in large N limit. However a slow decay in the relative deviation of (E,P_12...N/2-1)_+ to f̅^(N/2+1)_N/2-1/f̅^(N)_0 in (c) shows the worst case in the asymptotic form. It is due to a rather small ℱ”(w_0)→-Nϵ in Eq. (<ref>) when we set k/N=1-ϵ with a small value of ϵ, which makes the method of steepest descent less accurate unless we go to N→∞. 99Giamarchi2004 T. Giamarchi,Quantum Physics in One Dimension (Oxford University Press, Oxford, 2004). Haldane1981 F. D. M. Haldane, Phys. Rev. Lett.47, 1840 (1981); J. Phys. C: Solid State Phys.14, 2585 (1981). Paredes2004 B. Paredes, A. Widera, V. Murg, O. Mandel, S. Föling, I. Cirac, G. V. Shlyapnikov, T. W. Hänsch, and I. Bloch. Nature429, 277 (2004). Kinoshita2004 T. Kinoshita, T. Wenger, and D. S. Weiss. Science305, 1125 (2004). Haller2009 E. Haller, M. Gustavsson, M. J. Mark, J. G. Danzl, R. Hart, G. Pupillo, H.-C. Nägerl. Science325, 1224 (2009). Girardeau2001 M. D. Girardeau, E. M. Wright, and J. M. Triscari, Phys. Rev. A63, 033601 (2001). Papenbrock2003 T. Papenbrock, Phys. Rev. A.67, 041601 (R) (2003). Minguzzi2002 A. Minguzzi, P. Vignolo, and M. P. Tosi, Phys. Lett. A294, 222 (2002). Olshanii2003 M. Olshanii and V. Dunjko, Phys. Rev. Lett.91, 090401 (2003). Xu2015 W. Xu and M. Rigol, Phys. Rev. A92, 063623 (2015). Deuretzbacher2008 F. Deuretzbacher, K. Fredenhagen, D. Becker, K. Bongs, K. Sengstock,and D. Pfannkuche, Phys. Rev. 
[Giamarchi2004] T. Giamarchi, Quantum Physics in One Dimension (Oxford University Press, Oxford, 2004).
[Haldane1981] F. D. M. Haldane, Phys. Rev. Lett. 47, 1840 (1981); J. Phys. C: Solid State Phys. 14, 2585 (1981).
[Paredes2004] B. Paredes, A. Widera, V. Murg, O. Mandel, S. Fölling, I. Cirac, G. V. Shlyapnikov, T. W. Hänsch, and I. Bloch, Nature 429, 277 (2004).
[Kinoshita2004] T. Kinoshita, T. Wenger, and D. S. Weiss, Science 305, 1125 (2004).
[Haller2009] E. Haller, M. Gustavsson, M. J. Mark, J. G. Danzl, R. Hart, G. Pupillo, and H.-C. Nägerl, Science 325, 1224 (2009).
[Girardeau2001] M. D. Girardeau, E. M. Wright, and J. M. Triscari, Phys. Rev. A 63, 033601 (2001).
[Papenbrock2003] T. Papenbrock, Phys. Rev. A 67, 041601(R) (2003).
[Minguzzi2002] A. Minguzzi, P. Vignolo, and M. P. Tosi, Phys. Lett. A 294, 222 (2002).
[Olshanii2003] M. Olshanii and V. Dunjko, Phys. Rev. Lett. 91, 090401 (2003).
[Xu2015] W. Xu and M. Rigol, Phys. Rev. A 92, 063623 (2015).
[Deuretzbacher2008] F. Deuretzbacher, K. Fredenhagen, D. Becker, K. Bongs, K. Sengstock, and D. Pfannkuche, Phys. Rev. Lett. 100, 160405 (2008).
[Deuretzbacher2014] F. Deuretzbacher, D. Becker, J. Bjerlin, S. M. Reimann, and L. Santos, Phys. Rev. A 90, 013611 (2014).
[Volosniev2014] A. G. Volosniev, D. V. Fedorov, A. S. Jensen, M. Valiente, and N. T. Zinner, Nature Comm. 5, 5300 (2014).
[Yang2015] L. Yang, L. Guan, and H. Pu, Phys. Rev. A 91, 043634 (2015).
[Yang2016] L. Yang and X. Cui, Phys. Rev. A 93, 013617 (2016).
[Deuretzbacher2016] F. Deuretzbacher, D. Becker, and L. Santos, Phys. Rev. A 94, 023606 (2016).
[Fiete2007] G. A. Fiete, Rev. Mod. Phys. 79, 801 (2007).
[Ho1998] T. L. Ho, Phys. Rev. Lett. 81, 742 (1998).
[Olshanii1998] M. Olshanii, Phys. Rev. Lett. 81, 938 (1998).
[Pagano2014] G. Pagano, M. Mancini, G. Cappellini, P. Lombardi, F. Schäfer, H. Hu, X.-J. Liu, J. Catani, C. Sias, M. Inguscio, and L. Fallani, Nature Phys. 10, 198 (2014).
[Decamp2016] J. Decamp, J. Jünemann, M. Albert, M. Rizzi, A. Minguzzi, and P. Vignolo, Phys. Rev. A 94, 053614 (2016).
[Cheianov2005] V. V. Cheianov, H. Smith, and M. B. Zvonarev, Phys. Rev. A 71, 033610 (2005).
[Feiguin2010] A. E. Feiguin and G. A. Fiete, Phys. Rev. B 81, 075108 (2010).
[Hazzard2013] K. R. A. Hazzard, A. M. Rey, and R. T. Scalettar, Phys. Rev. B 87, 035110 (2013).
[Zhou2014] Z. Zhou, Z. Cai, C. Wu, and Y. Wang, Phys. Rev. B 90, 235139 (2014).
[Cazalilla2011] M. A. Cazalilla, R. Citro, T. Giamarchi, E. Orignac, and M. Rigol, Rev. Mod. Phys. 83, 1405 (2011).
[Stamperkurn2013] D. M. Stamper-Kurn and M. Ueda, Rev. Mod. Phys. 85, 1191 (2013).
[Jen2016_spin1] H. H. Jen and S.-K. Yip, Phys. Rev. A 94, 033601 (2016).
[Tonks1936] L. Tonks, Phys. Rev. 50, 955 (1936).
[Girardeau1960] M. D. Girardeau, J. Math. Phys. 1, 516 (1960).
[Shvarchuck2002] I. Shvarchuck, Ch. Buggle, D. S. Petrov, K. Dieckmann, M. Zielonkowski, M. Kemmann, T. G. Tiecke, W. von Klitzing, G. V. Shlyapnikov, and J. T. M. Walraven, Phys. Rev. Lett. 89, 270404 (2002).
[Davis2012] M. J. Davis, P. B. Blakie, A. H. van Amerongen, N. J. van Druten, and K. V. Kheruntsyan, Phys. Rev. A 85, 031604 (2012).
[Jacqmin2012] T. Jacqmin, B. Fang, T. Berrada, T. Roscilde, and I. Bouchoule, Phys. Rev. A 86, 043626 (2012).
[Fang2016] B. Fang, A. Johnson, T. Roscilde, and I. Bouchoule, Phys. Rev. Lett. 116, 050402 (2016).
[Kozuma1999] M. Kozuma, L. Deng, E. W. Hagley, J. Wen, R. Lutwak, K. Helmerson, S. L. Rolston, and W. D. Phillips, Phys. Rev. Lett. 82, 871 (1999).
[Stenger1999] J. Stenger, S. Inouye, A. P. Chikkatur, D. M. Stamper-Kurn, D. E. Pritchard, and W. Ketterle, Phys. Rev. Lett. 82, 4569 (1999).
[Steinhauer2002] J. Steinhauer, R. Ozeri, N. Katz, and N. Davidson, Phys. Rev. Lett. 88, 120407 (2002).
[Papp2008] S. B. Papp, J. M. Pino, R. J. Wild, S. Ronen, C. E. Wieman, D. S. Jin, and E. A. Cornell, Phys. Rev. Lett. 101, 135301 (2008).
[Veeravalli2008] G. Veeravalli, E. Kuhnle, P. Dyke, and C. J. Vale, Phys. Rev. Lett. 101, 250403 (2008).
[Pino2011] J. M. Pino, R. J. Wild, P. Makotyn, D. S. Jin, and E. A. Cornell, Phys. Rev. A 83, 033615 (2011).
[Lenard1964] A. Lenard, J. Math. Phys. 5, 930 (1964).
[Vaidya1979] H. G. Vaidya and C. A. Tracy, Phys. Rev. Lett. 42, 3 (1979).
[Jimbo1980] M. Jimbo, T. Miwa, Y. Mori, and M. Sato, Physica D 1, 80 (1980).
[Forrester2003] P. J. Forrester, N. E. Frankel, T. M. Garoni, and N. S. Witte, Phys. Rev. A 67, 043607 (2003).
[Dalfovo1999] F. Dalfovo, S. Giorgini, L. P. Pitaevskii, and S. Stringari, Rev. Mod. Phys. 71, 463 (1999).
[Cheianov2004] V. V. Cheianov and M. B. Zvonarev, Phys. Rev. Lett. 92, 176401 (2004).
[Fiete2004] G. A. Fiete and L. Balents, Phys. Rev. Lett. 93, 226401 (2004).
[Penc1996] K. Penc, K. Hallberg, F. Mila, and H. Shiba, Phys. Rev. Lett. 77, 1390 (1996).
Penc and M. Serhan, Phys. Rev. B56, 6555 (1997).Tan2008 S. Tan, Annals of Physics323, 2952 (2008). Barth2011 M. Barth and W. Zwerger, Annals of Physics326, 2544 (2011). Braaten2008-1 E. Braaten and L. Platter, Phys. Rev. Lett.100, 205301 (2008). Braaten2008-2 E. Braaten, D. Kang, and L. Platter, Phys. Rev. A78, 053606 (2008). Werner2009 F. Werner, L. Tarruell, and Y. Castin, Eur. Phys. J. B68, 401 (2009). Zhang2009 S. Zhang and A. J. Leggett, Phys. Rev. A79, 023601 (2009). Werner2006 F. Werner and Y. Castin, Phys. Rev. A74, 053604 (2006). Werner2008 F. Werner, Phys. Rev. A78, 025601 (2008). Bender1999 C. M. Bender and S. A. Orszag,Advanced Mathematical Methods for Scientists and Engineers I: Asymptotic methods and perturbation theory (Springer-Verlag New York, 1999). ] | http://arxiv.org/abs/1703.08949v1 | {
"authors": [
"H. H. Jen",
"S. -K. Yip"
],
"categories": [
"cond-mat.quant-gas"
],
"primary_category": "cond-mat.quant-gas",
"published": "20170327064702",
"title": "Spin-incoherent Luttinger liquid of one-dimensional spin-1 Tonks-Girardeau Bose gas: Spin-dependent properties"
} |
Shift-Symmetric Configurations in Two-Dimensional Cellular Automata: Irreversibility, Insolvability, and Enumeration

Peter Banda (Luxembourg Centre for Systems Biomedicine, University of Luxembourg, Esch-sur-Alzette, L-4362, Luxembourg), John Caughman (Department of Mathematics and Statistics, Portland State University, Portland, OR, 97201, USA), Martin Cenek (Shiley School of Engineering, University of Portland, Portland, OR, 97203, USA), and Christof Teuscher (Department of Electrical and Computer Engineering, Portland State University, Portland, OR, 97201, USA)

June 26, 2019

Abstract. The search for symmetry, as an unusual yet profoundly appealing phenomenon, and the origin of regular, repeating configuration patterns have long been a central focus of complexity science and physics. To better grasp and understand symmetry of configurations in decentralized toroidal architectures, we employ group-theoretic methods, which allow us to identify and enumerate these inputs, and argue about irreversible system behaviors with undesired effects on many computational problems. The concept of so-called configuration shift-symmetry is applied to two-dimensional cellular automata as an ideal model of computation. Regardless of the transition function, the results show the universal insolvability of crucial distributed tasks, such as leader election, pattern recognition, hashing, and encryption. By using compact enumeration formulas and bounding the number of shift-symmetric configurations for a given lattice size, we efficiently calculate the probability of a configuration being shift-symmetric for a uniform or density-uniform distribution. Further, we devise an algorithm detecting the presence of shift-symmetry in a configuration. Given the resource constraints, the enumeration and probability formulas can directly help to lower the minimal expected error and provide recommendations for a system's size and initialization. Besides cellular automata, the shift-symmetry analysis can be used to study the non-linear behavior in various synchronous rule-based systems that include inference engines, Boolean networks, neural networks, and systolic arrays.

Symmetry is a synonym for beauty and rarity, and is generally perceived as something desirable. In this paper we investigate an opposing side of symmetry and show how it can irreversibly corrupt a computation and restrict a system's dynamics and its potentiality. We demonstrate this fundamental phenomenon, which we call configuration shift-symmetry, affecting many crucial distributed tasks on the simplest grid-like synchronous system, the cellular automaton. We show how to count these symmetric inputs depending on a lattice size and its prime factorization, how likely they are to be encountered, and how to detect them.
§ INTRODUCTION The structure of the computational rules that result in regular, repeating system configurations has been studied by many, yet the question of how natural and engineered systems organize into symmetric structures is not completely understood. To understand the role of symmetry of the starting configurations (the inputs), how they are processed (the machine), and how the final configurations with desired properties (the outputs) are produced, we use the cellular automaton (CA) as a simple distributed model of computation. First introduced by John von Neumann, CAs were instrumental in the exploration of logical requirements for machine self-replication and information processing in nature <cit.>. Despite having no central control and only limited communication among the components, CAs are capable of universal computation and can exhibit various dynamical regimes <cit.>. As one of the structurally simplest distributed systems, CAs have become a fundamental model for studying complexity in its purest form <cit.>. Subsequently, CAs have been successfully employed in numerous research fields and applications, such as modeling artificial life <cit.>, physical equations <cit.>, and social and biological simulations <cit.>. The CA input configurations define a language that is processed by the machine. Exploring the structural symmetries of the input language not only translates to an efficient machine implementation, but also allows us to argue about a problem's insolvability and the irreversibility of computation. In this paper, we explore the concept of shift-symmetry and revisit a well-known fact that any standard CA maintains a configuration shift-symmetry due to the uniformity and synchronicity of its cells. We show that once a system reaches a symmetric, i.e., spatially regular configuration, the computation will never revert from this attractor and will fail to solve all problems that require asymmetric solutions. As a result, the number of symmetries of the dynamical system is never decreasing. When a configuration slips to a symmetric, repeating pattern, the configuration space of the CA irreversibly folds, causing a permanent regime “shift." Consequently, a non-symmetric solution cannot be reached from a shift-symmetric configuration. A more general implication is that a configuration is unreachable (even if symmetric) if a source configuration has a symmetry not contained in the target. Non-symmetric tasks, such as leader election or pattern recognition, i.e., tasks expecting a final configuration to be non-symmetric, are therefore insolvable in principle, since for any lattice size there always exist input configurations that are symmetric. As a hypothesis, we also briefly discuss the eventual gradual increase of a system's symmetries at the end of this paper, however, without any strong claims or proofs attached. Using basic results from group theory and elementary combinatorics, we develop three progressively more efficient enumeration techniques based on mutually independent generators to answer the question of how many potential shift-symmetric configurations there are in any given two-dimensional CA lattice. As a by-product, we demonstrate that shift-symmetry is closely linked to prime factorization. We introduce and prove lower and upper bounds for the number of shift-symmetric configurations, where the lower bound (local minima) is tight and reached only for prime lattice sizes.
We enumerate shift-symmetric configurations for a given lattice size and number of active cells. Finally, we derive a formula and bounds for the probability of selecting a shift-symmetric configuration randomly generated from a uniform or density-uniform distribution. We develop a shift-symmetry detection algorithm and prove its worst and average-case time complexities. §.§ Applications All the formulas and proofs presented in this paper assume a two-dimensional CA with any number of states, and arbitrary uniform transition and neighborhood functions, which makes our results widely applicable. Knowing the number of shift-symmetric configurations, we can directly determine the probability of selecting a shift-symmetric configuration by chance. This probability then equals an error lower bound or expected insolvability for any non-symmetric task. As we show, the insolvability caused by shift-symmetry rapidly decreases asymptotically with the lattice size for a uniform distribution. For instance, the probability is 0.5 for a 2×2 lattice, but drops to around 2.7×10^-15 for a 10×10 lattice. Since the number of shift-symmetric configurations heavily depends on the prime factorization of the lattice size, the probability function is non-monotonically decreasing. To minimize the occurrence of shift-symmetries for a uniform distribution, we generally recommend using prime lattices, or at least avoiding even ones. On the other hand, the probability for a density-uniform distribution is quite high, regardless of primes; it is around 10^-3, even for a 45×45 lattice. These distribution-error-size constraints have important consequences for designing robust and efficient computational procedures for many crucial distributed problems, such as leader election <cit.>, pattern recognition <cit.>, edge detection <cit.>, image translation <cit.>, convex hull/minimum bounding rectangle <cit.>, hashing or collision resolution for associative memory <cit.>, encryption <cit.>, and random number generation <cit.>. For these tasks, an expected final configuration, e.g., a reproduction of a certain two-dimensional image, is frequently non-shift-symmetric, and therefore unreachable from a symmetric configuration. Alternatively, an expected configuration can be unreachable even if it is shift-symmetric, which occurs when the vector space of its generating vectors (shifts) does not contain all the shifts of an initial configuration. Practical implications of these properties include performance degradation of systolic CPU arrays and nanoscale multicore systems <cit.>. Our results extend to hardware implementations of synchronous CAs, used, e.g., for traffic signal control <cit.>, random number generation <cit.>, and a reaction-diffusion model <cit.>; and to spintronics, where computation is achieved by coupled oscillators <cit.>. Also, current efforts to implement two- or three-dimensional cellular automata using DNA tiles <cit.> and/or gel-separated compartments in so-called gellular automata <cit.> may face problems related to configuration shift-symmetry if a synchronous update is considered. §.§ Related Work In their seminal work, Packard and Wolfram <cit.> identified the importance of symmetry and showed that the global properties of a CA emerge as a function of the transition function's reflective and rotational symmetries. The fundamental algebraic properties of additive and non-additive CAs were studied by Martin et al.
<cit.>, who demonstrated that in simple cases there is a connection between the global behavior and the structure of the configuration transitions. Wolz and de Oliveira <cit.> exploited the structure and symmetry in the transition table to design an efficient evolutionary algorithm that found the best results for the density classification and parity problems. Marquez-Pita et al. <cit.> used a brute-force approach to find similar input configurations that produce the same outputs. Their result is a compact transition-function re-description schema that uses wild-cards to represent the many-to-one computation rules on a majority problem. Bagnoli et al. <cit.> explored different methods of master-slave synchronization and control of totalistic cellular automata. A number of computation-theoretic results for CAs were summarized by Culik II et al. <cit.>, who investigated CAs through the eyes of set theory and topology. The effect of symmetry on the complexity of Boolean functions was thoroughly researched by Babai et al. <cit.>. Pippenger <cit.> studied translation functions capable of correcting CA configurations under a specific kind of symmetry: rotation (an isometry with a corner coordinate fixed). Besides the symmetry of transition functions and the design of transition functions resulting in regular or synchronized patterns, a number of contributions to the theoretical CA literature have addressed the general structure and implications of shift-symmetric configurations, also called translation-invariant, or simply periodic, configurations, as we do here. This problem has been studied primarily in the context of group theory <cit.>, through a general approach using stabilizers, group actions, and Bernoulli shifts. In particular, the work by Castillo-Ramirez and Gadouleau <cit.> approaches the problem using Möbius inversion of the subgroup lattice. Our derivation differs from their work by leveraging the affordances of specifying in advance that our symmetries are restricted to shift-symmetries, i.e., the specific case of Cartesian powers of cyclic groups in two dimensions, and proceeding inductively, which allows us to derive stronger results for the subproblem of our interest. In particular, we provide more efficient and executable enumeration formulas in an algorithmic sense and a better lower bound for the number of aperiodic configurations. Note that Castillo-Ramirez and Gadouleau improved the bound found by Gao et al. <cit.>. Another recent article <cit.> explores similar questions regarding the number of distinct binary configurations of toroidal arrays in the presence of rotational and reflection symmetries. For our purposes, the ratio of symmetric to non-symmetric configurations is of greater interest than a simple enumeration of the total. Accordingly, our work differs from theirs by our focus on enumerating how many of these configurations possess some nontrivial symmetry (and, additionally, we do not wish to be limited to the binary case of an alphabet of size 2). The concept of symmetry in number theory has been applied to so-called tapestry design and periodic forests <cit.>, which relates to CA configurations. However, the triangular topology and geometric branching differ from the discrete toroidal Cartesian topology typically used for CAs. One of our main motivations is the pioneering work of Angluin <cit.>, who noticed that a ring of anonymous components (processors) that are all in the same state will never break its homogeneous configuration and elect a leader.
This intuitive observation is, in fact, a special case of the concept of configuration shift-symmetry for CAs. We will show that Angluin's homogeneous state, which corresponds to a configuration of all zeros or all ones in a binary CA, is the most symmetric configuration for a given lattice size. The concept of shift-symmetry is related to the notion of regular domains in computational mechanics <cit.>. A shift-symmetric configuration is essentially a (global) regular domain spread over the full lattice. Although we cannot apply our results directly to regular domains at the level of sub-configurations, because we pay no attention to local symmetries and non-cyclic and non-regular borders, the number of possible shift-symmetric configurations gives at least an upper bound on the number of possible regular domains. In our previous work <cit.> we proved that configuration shift-symmetry, along with loose coupling of active cells, prevents a leader from being elected in a one-dimensional CA <cit.>. The leader election problem, first introduced by Smith <cit.>, requires processors to reach a final configuration where exactly one processor is in a leader state (one) and all others are followers (zero). Leader election is representative of a problem class where the solution is an asymmetric, non-homogeneous, translationally and rotationally invariant system configuration. A final fixed-point configuration is asymmetric, since it contains only one processor in a leader state. Clearly, leader election and symmetry are enemies, and, in fact, leader election is often called symmetry-breaking. To enumerate shift-symmetric configurations in the one-dimensional case <cit.> we employed only basic combinatorics. Here, in order to span to two dimensions, we extend our enumeration machinery to include some basic concepts from group theory, and we rely heavily on the notion of independent generators. We show that the insolvability caused by configuration symmetry extends beyond leader election to a whole class of non-symmetric problems. §.§ Model By definition, a CA <cit.> consists of a lattice of N components, called cells, and a state set Σ. A state of the cell with index i is denoted s_i ∈ Σ. A configuration is then a sequence of cell states: 𝐬 = (s_0, s_1, …, s_N - 1). Given a topology for the lattice and the number of neighbors b, a neighborhood function η: ℕ × Σ^N → Σ^b maps any pair (i, 𝐬) to the b-tuple η_i(𝐬) of cells' states that are accessible (visible) to cell i in configuration 𝐬. Note that each cell is usually its own neighbor. The transition rule ϕ : Σ^b → Σ is applied in parallel to each cell's neighborhood, resulting in the synchronous update of all of the cells' states: s_i^{t+1} = ϕ(η_i(𝐬^t)). The transition rule is represented either by a transition table, also called a look-up table, or by a finite state transducer <cit.>. Here we focus exclusively on uniform CAs, where all cells share the same transition function. The global transition rule Φ: Σ^N → Σ^N is defined as the transition rule with scope over the entire configuration: 𝐬^{t+1} = Φ(𝐬^t). In this paper we analyze two-dimensional CAs, where cells are topologically organized on a two-dimensional grid with cyclic boundaries, i.e., we treat them as tori. The true power of our analysis is that it applies to two-dimensional CAs with arbitrary neighborhood and transition functions. We rely only on uniformity, i.e., each cell having the same neighborhood and transition function, and on the synchronous update: the attributes typically assumed for a standard CA.
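To make the model concrete, the following minimal Python sketch (ours, for illustration; not part of the paper's formalism) implements one synchronous update Φ on a toroidal lattice. The neighborhood function η is represented by a list of relative offsets; the majority rule and the 8×8 size are placeholder choices only — the analysis below holds for any uniform rule.

```python
import numpy as np

def step(config, phi, offsets):
    """One synchronous update of a toroidal 2D CA.

    config  : n-by-n array of states (s_u for u in Z_n x Z_n)
    phi     : transition rule mapping a b-tuple of neighbor states to a state
    offsets : relative neighbor coordinates defining eta (shared by all
              cells, so the CA is uniform)
    """
    n = config.shape[0]
    new = np.empty_like(config)  # double buffer: read config, write new
    for x in range(n):
        for y in range(n):
            neigh = tuple(config[(x + dx) % n, (y + dy) % n]
                          for dx, dy in offsets)
            new[x, y] = phi(neigh)
    return new  # s^{t+1} = Phi(s^t)

# Example: binary CA with a Moore neighborhood (r = 1, b = 9 cells).
moore = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)]
def majority(neigh):  # placeholder rule phi: Sigma^9 -> Sigma
    return int(sum(neigh) > len(neigh) // 2)

config = np.random.randint(0, 2, (8, 8))
config = step(config, majority, moore)
```

A design note: nothing below depends on the particular rule; only the uniformity and the double-buffered synchronous update (new states are written while the old configuration is read) matter for the results that follow.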
Figure <ref> shows the update mechanism for a two-dimensional binary CA with a Moore neighborhood, a square neighborhood with radius r = 1 containing 9 cells. The dynamics of two-dimensional CAs are illustrated as a series of configuration snapshots, where an active cell is black and an inactive cell white (Figure <ref>). § SHIFT-SYMMETRIC CONFIGURATIONS As stated by Angluin <cit.>, the problem of reaching a “center” (i.e., leader) in homogeneous configurations is insolvable by any anonymous deterministic algorithm (including CAs). The CA uniformity can be embedded in its transition function, the deterministic update, synchronicity, topology, configuration, and the cells' anonymity. Intuitively, a fully uniform system in terms of its structure, configuration, and computational mechanisms cannot produce any reasonable or complex dynamics. We show that Angluin's homogeneous configurations of 0^N and 1^N belong to a much larger class of so-called shift-symmetric configurations. In this section we formalize the concept of configuration shift-symmetry by employing vector translations and group theory. Figure <ref> depicts a CA computation on a two-dimensional shift-symmetric configuration. Compared to the one-dimensional case <cit.>, two dimensions are more symmetry-potent. It is important to mention that we deal with square configurations only. Nevertheless, we suggest most of the lemmas and theorems could be extended to incorporate arbitrary rectangular shapes. Also, the formulas and methodology to enumerate two-dimensional shift-symmetric configurations could be generalized to arbitrarily many dimensions. For consistency, however, we leave the rectangular as well as n-dimensional extensions for future consideration. Note that in order to improve the readability of the main text, all proofs and formally defined lemmas and theorems appear in Appendix <ref>. The non-trivial proofs from the appendix are referenced by the ⋆ symbol. First, we define a shift-symmetric (square) configuration by a given vector as shown in Figure <ref>. Formally, for a non-zero vector (pattern shift) v ∈ ℤ_n × ℤ_n we denote by S_n × n( v) = {𝐬 ∈ Σ^n × n | ∀ u ∈ ℤ_n × ℤ_n : s_ u = s_ u⊕ v} the set of all shift-symmetric square configurations of size N = n^2 relative to v over the alphabet Σ, where ⊕ denotes coordinate-wise addition on ℤ_n × ℤ_n. Note that, as opposed to our previous work <cit.>, we renamed symmetric to shift-symmetric configurations to avoid confusion with reflective or rotational symmetries. These two symmetry types, unlike shift-symmetry, are not generally preserved by a transition function unless we impose certain “symmetric” properties on the transitions. Since any translation by a non-zero vector v defines a configuration symmetry, we can study shift-symmetric configurations with the techniques of group theory. From now on, we will call such a non-zero vector v ∈ ℤ_n × ℤ_n that we use for state translation a generator, formalized as S_n × n( v) = {𝐬 ∈ Σ^n × n | ∀ u ∈ ℤ_n × ℤ_n ∀ w ∈ ⟨ v⟩ : s_ u = s_ u⊕ w}, where ⟨ v⟩ is the cyclic subgroup of ℤ_n × ℤ_n generated by v. Trivially, for any non-zero v ∈ ℤ_n × ℤ_n, [lemma:symmetric-config-set-2d-l1_l2-size] |S_n × n( v)| = |Σ|^{n^2/|⟨ v⟩|}.
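Both the membership condition and the orbit-size identity |⟨v⟩| = n/gcd(l_1, l_2, n) from the lemma's proof are easy to check computationally. A minimal sketch (our helper names; standard library only):

```python
from math import gcd

def orbit_size(v, n):
    """|<v>| for v = (l1, l2) in Z_n x Z_n; equals n / gcd(l1, l2, n)."""
    l1, l2 = v
    return n // gcd(gcd(l1, l2), n)

def is_shift_symmetric(config, v, n):
    """Test s_u = s_{u (+) v} for every cell u, i.e., membership in S_{n x n}(v)."""
    l1, l2 = v
    return all(config[x][y] == config[(x + l1) % n][(y + l2) % n]
               for x in range(n) for y in range(n))

# A configuration in S_{4x4}((2, 0)): rows repeat with period 2, so there
# are n^2/|<v>| = 8 freely choosable cells, giving |Sigma|^8 such configs.
n, v = 4, (2, 0)
config = [[1, 0, 1, 1], [0, 0, 1, 0], [1, 0, 1, 1], [0, 0, 1, 0]]
assert orbit_size(v, n) == 2 and is_shift_symmetric(config, v, n)
```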
In the following text we bridge shift-symmetry, which is associated with configurations, i.e., static states, with any uniform transition rule, which defines the dynamics of a CA. We show that shift-symmetry cannot be broken; thus it fundamentally restricts the reachable states and the potentiality of a transition rule. More formally, for a vector v and any uniform global transition rule Φ, [theorem:symmetric-stays-symmetric-2d] 𝐬 ∈ S_n × n( v) ⇒ Φ(𝐬) ∈ S_n × n( v), and so, by induction, a non-symmetric configuration q ∉ S_n × n( v) is unreachable from a shift-symmetric 𝐬 ∈ S_n × n( v): for every i, Φ^i(𝐬) ∈ S_n × n( v), hence Φ^i(𝐬) ≠ q. As a consequence, several tasks for CAs are insolvable in principle. For instance, a target configuration for leader election <cit.> contains exactly one cell in the leader state a ∈ Σ. This configuration is asymmetric for n > 1 as shown in Figure <ref> (asym-d), and therefore unreachable from any shift-symmetric configuration. Further, several image-processing tasks illustrated in Figure <ref> are insolvable: e.g., image translation (sym-c to asym-c) <cit.>, and pattern recognition or noise filtering (sym-c to asym-f and sym-a to sym-b) <cit.>. Also, the task of random <cit.> or prime p number generation is insolvable if p ∤ n (for p = 7: sym-a to asym-e). If a configuration is shift-symmetric, the associative memory <cit.> has a corrupted (non-uniform) hashing function to handle collisions (e.g., sym-a to asym-a). In a general sense, this also applies to encryption <cit.>. § ENUMERATING SHIFT-SYMMETRIC CONFIGURATIONS In this section we will further investigate shift-symmetric two-dimensional configurations and ask how many there are in a square lattice of size N = n^2. First, to generalize shift-symmetry and lay a solid foundation for group-centric analysis, we define the symmetric configurations over several generators. Let 𝕃 ⊆ ℤ_n × ℤ_n. We define the set of 𝕃-symmetric configurations to be the set S_n × n(𝕃) = {𝐬 ∈ Σ^n × n | ∀ u ∈ ℤ_n × ℤ_n, ∀ v ∈ ⟨𝕃⟩ : s_ u = s_ u⊕ v}, where ⟨𝕃⟩ = {c_1 v_1 ⊕ … ⊕ c_|𝕃| v_|𝕃| | c_i ∈ ℤ_n}. In other words, S_n × n(𝕃) denotes the set of all shift-symmetric configurations of size N = n^2 over the alphabet Σ with generator set 𝕃. Directly from the definition, for any subset 𝕃 ⊆ ℤ_n × ℤ_n, |S_n × n(𝕃)| = |Σ|^{n^2/|⟨𝕃⟩|}, and for any u, v ∈ ℤ_n × ℤ_n, S_n × n( u) ∩ S_n × n( v) = S_n × n({ u, v}). The following equivalence, which may sound counterintuitive at first, adapts shift-symmetry for the theory of groups. It states that if a vector v generates a cyclic subgroup of another vector u, then its set of shift-symmetric configurations is a superset (not a subset!) of that generated by the vector u, and vice versa, i.e., for any u, v ∈ ℤ_n × ℤ_n, [lemma:symmetric-config-subset-2d] S_n × n( u) ⊆ S_n × n( v) ⟺ ⟨ v⟩ ≤ ⟨ u⟩. Now, in a straightforward manner, we define the set of all shift-symmetric configurations S_n × n over all possible combinations of vectors (shifts) for a given lattice as S_n × n = ⋃_{ 0 ≠ v ∈ ℤ_n × ℤ_n} S_n × n( v). Due to non-trivial intersections of the sets S_n × n( v), it is fairly impractical to count the shift-symmetric configurations over all n^2 - 1 vectors. We instead construct significantly fewer generators using the prime factors of n, which equivalently produce the entire set of shift-symmetric configurations. We start with the definition of the generators. For any natural number n let n = ∏_{j=1}^{ω(n)} p_j^{α_j} be the prime factorization, where ω(n) denotes the number of distinct prime factors. We define the generator set G_n as G_n = ⋃_{j=1}^{ω(n)} G_n(p_j), where for each prime divisor p_j, G_n(p_j) = {(0, n/p_j)} ∪ {(n/p_j, i n/p_j) : 0 ≤ i ≤ p_j - 1}. The total number of these generators is then |G_n| = ω(n) + ∑_{i=1}^{ω(n)} p_i.
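A sketch of the construction (ours; trial division suffices here since n is only a lattice side length):

```python
def prime_factors(n):
    """Distinct prime divisors of n, in increasing order."""
    primes, d = [], 2
    while d * d <= n:
        if n % d == 0:
            primes.append(d)
            while n % d == 0:
                n //= d
        d += 1
    if n > 1:
        primes.append(n)
    return primes

def generator_set(n):
    """G_n = union over prime divisors p of {(0, n/p)} u {(n/p, i*n/p)}."""
    G = []
    for p in prime_factors(n):
        m = n // p
        G.append((0, m))
        G.extend((m, i * m) for i in range(p))
    return G

# |G_n| = omega(n) + sum of prime divisors; for n = 12: 2 + (2 + 3) = 7.
assert len(generator_set(12)) == 7
```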
Using some divisibility arguments, we can prove that these generators indeed produce all shift-symmetric configurations: [lemma:symmetric-config-primes-2d] S_n × n = ⋃_{ w ∈ G_n} S_n × n( w). Further, we show these prime-based generators are mutually independent, thus greatly simplifying the counting problem to relatively compact closed formulas. For any distinct u, v ∈ G_n (n ∈ ℕ), [lemma:symmetric-config-2d-primes-linearly-independent-all] |⟨ u⟩ ∩ ⟨ v⟩| = 1, and for any distinct u, v ∈ G_n(p), where p is a prime divisor of n and n̂ = n/p, [lemma:symmetric-config-2d-primes-linearly-independent] ⟨ u, v⟩ = ⟨(n̂, 0), (0, n̂)⟩. In particular, |⟨ u, v⟩| = p^2. Finally, we are ready to enumerate shift-symmetric configurations. In the following formulas, given any v, w ∈ ℤ^k, we write v ⪯ w whenever the coordinates satisfy v_i ≤ w_i for every i (1 ≤ i ≤ k). We write v ≺ w if v ⪯ w and v ≠ w. We denote the sum of the coordinates by | v| = ∑_{i=1}^k v_i, and for any m ∈ ℤ, we write m for the k-tuple whose coordinates all equal m (so 0, 2, and p + 1 below denote constant tuples). Let n = ∏_{i=1}^k p_i^{α_i} be the prime factorization of n, where k = ω(n), the number of distinct prime factors of n. Note that a one-by-one lattice offers no symmetries since there exists no non-zero shift in ℤ_1 × ℤ_1. As the base, we first combine the generators G_n directly by the inclusion-exclusion principle, and apply the fact that these generators are mutually independent (Eq. <ref>), as well as that their joint size is at most |⟨ u, v⟩| = p^2 (Eq. <ref>). That gives us [lemma:symmetric-config-overall-2d-size] |S_n × n| = ∑_{0 ≺ v ⪯ p+1} (-1)^{1 + |v|} ∏_{i=1}^k \binom{p_i + 1}{v_i} |Σ|^{f(v)}, where p = (p_1, …, p_k) and f( v) = n^2 ∏_{i=1}^k p_i^{-min(v_i, 2)}. An alternative and more efficient counting is based on the idea of grouping the exponential elements |Σ|^{f( v)} from the original formula (Eq. <ref>), which are costly to calculate. [lemma:symmetric-config-overall-2d-size-alternative] |S_n × n| = ∑_{0 ≺ v ⪯ 2} |Σ|^{g(v)} ( ∑_{v ⪯ u ⪯ top(v)} (-1)^{1 + |u|} ∏_{i=1}^k \binom{p_i + 1}{u_i} ), where g( v) = n^2 ∏_{i=1}^k p_i^{-v_i} and top( v) ∈ ℤ^k has ith coordinate top(i) = v_i if v_i < 2, and p_i + 1 if v_i = 2. The final formula that follows is the most efficient because, besides having the exponential elements grouped, it also reduces the inner binomial sum to a simple expression r(i). [theorem:symmetric-config-overall-2d-size-final] |S_n × n| = ∑_{0 ≺ v ⪯ 2} (-1)^{1 + |v|} |Σ|^{g(v)} ∏_{i=1}^k r(i), where g( v) = n^2 ∏_{i=1}^k p_i^{-v_i} and r(i) = 1 if v_i = 0; p_i + 1 if v_i = 1; and p_i if v_i = 2. Interestingly, for a prime lattice n = p the vector v is either (1) with g( v) = n, or (2) with g( v) = 1, which forces the formula <ref> to collapse to [cor:symmetric-config-overall-2d-prime-size] |S_n × n| = |Σ|^n (n + 1) - |Σ| n. The gradual improvements from Eq. <ref>, <ref>, and finally to Eq. <ref> are illustrated for n = 2^{α_1} 3^{α_2} in Appendix <ref>.
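The final formula translates directly into code. The sketch below (ours, reusing prime_factors, generator_set, and is_shift_symmetric from the earlier sketches) evaluates the theorem and cross-checks it against brute-force enumeration on a tiny lattice; for the prime n = 2 and |Σ| = 2, both sides give |Σ|^n (n + 1) − |Σ| n = 8.

```python
from itertools import product

def count_shift_symmetric(n, sigma):
    """|S_{n x n}| via the final formula; sigma is the alphabet size |Sigma|."""
    ps = prime_factors(n)
    total = 0
    for v in product((0, 1, 2), repeat=len(ps)):
        if not any(v):
            continue  # the sum runs over 0 < v <= 2 only
        g, coeff = n * n, 1
        for p, vi in zip(ps, v):
            g //= p ** vi                # g(v) = n^2 * prod p_i^{-v_i}
            coeff *= (1, p + 1, p)[vi]   # r(i)
        total += (-1) ** (1 + sum(v)) * coeff * sigma ** g
    return total

def brute_force(n, sigma=2):
    """Count configurations symmetric under at least one generator in G_n."""
    gens = generator_set(n)
    count = 0
    for cells in product(range(sigma), repeat=n * n):
        c = [list(cells[i * n:(i + 1) * n]) for i in range(n)]
        count += any(is_shift_symmetric(c, v, n) for v in gens)
    return count

assert count_shift_symmetric(2, 2) == brute_force(2) == 8
```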
§.§ Bounding the Number of Shift-Symmetric Configurations In the previous section we derived closed and increasingly efficient formulas for counting the number of shift-symmetric configurations in a square lattice N = n^2. To get a deeper and more qualitative insight, we now bound this number from above and below by exponential functions. We prove that the lower bound is tight on prime lattices (example in Fig. <ref>), whereas local maxima are reached on even ones (Fig. <ref>). More precisely, for any n ∈ ℕ, |S_n × n| can be bounded as [lemma:symmetric-config-lower-bound] |Σ|^n (n + 1) - |Σ| n ≤ |S_n × n|, where equality holds if and only if n is a prime. For an upper bound, let n = ∏_{i=1}^k p_i^{α_i} be the prime factorization of n, where k = ω(n), the number of distinct prime factors of n. Then [lemma:symmetric-config-upper-bound] |S_n × n| ≤ 6 log_2(n) |Σ|^{n^2/2}. Note that our bound is significantly lower than the bound |Σ|^{n^2} - (n^2 - 1)|Σ|^{n^2/2} found by Castillo-Ramirez and Gadouleau <cit.>. Also recall that they addressed a more general problem of counting aperiodic configurations on an arbitrary group. By combining the inequalities <ref> and <ref>, the number of shift-symmetric configurations satisfies |Σ|^n (n + 1) - |Σ| n ≤ |S_n × n| ≤ 6 log_2(n) |Σ|^{n^2/2}. §.§ Probability of Selecting Shift-Symmetric Configuration over Uniform Distribution To calculate the probability that a randomly drawn configuration is shift-symmetric, we first handle a uniform distribution, in which each symbol from Σ for s_i in a configuration 𝐬 is equally likely. For non-symmetric tasks, this probability directly equals the least expected insolvability (or error lower bound). Overall, there exist |Σ|^{n^2} configurations and each configuration is equally likely, hence the probability of selecting a shift-symmetric configuration in a square lattice of size N = n^2 over a uniform distribution is P_n × n^unif = |S_n × n| / |Σ|^{n^2}. Further, by applying the inequality <ref> and knowing that n|Σ|^n ≤ |Σ|^n (n + 1) - |Σ| n, we can bound the probability as n|Σ|^{-n^2 + n} ≤ P_n × n^unif ≤ 6 log_2(n) |Σ|^{-n^2/2}. As exemplified in Figure <ref> and mathematically rooted in the inequality <ref>, the probability P_n × n^unif decreases rapidly: square-exponentially in n, or exponentially in the lattice size N = n^2. Since |S_n × n| depends on the prime factorization of n, the probability is non-monotonic. Similarly to |S_n × n|, the probability P_n × n^unif reaches local minima for prime and local maxima for even lattices (n > 4).
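Computed exactly, the probability is a one-line corollary of the counting function above; rational arithmetic avoids floating-point underflow for larger n (a sketch, reusing count_shift_symmetric):

```python
from fractions import Fraction

def p_uniform(n, sigma=2):
    """P^unif_{n x n} = |S_{n x n}| / |Sigma|^{n^2}, as an exact fraction."""
    return Fraction(count_shift_symmetric(n, sigma), sigma ** (n * n))

assert float(p_uniform(2)) == 0.5   # the 2x2 value quoted earlier
print(float(p_uniform(10)))         # ~2.7e-15, matching the text above
```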
§ ENUMERATING SHIFT-SYMMETRIC CONFIGURATIONS FOR K ACTIVE CELLS Having enumerated all shift-symmetric configurations, we now tackle the subproblem of enumerating configurations with a specific number of cells in a given state, such as the state active. The motivation behind this endeavour is to calculate the probability of selecting a shift-symmetric configuration generated by a density-uniform distribution. We first define a set of shift-symmetric configurations with k cells in a special state a. Formally, for any state a ∈ Σ and n, k ∈ ℕ, we define D^a_n × n,k to be the set of all square configurations with exactly k sites in state a: D^a_n × n,k = {𝐬 ∈ Σ^n × n | #_a 𝐬 = k}, where #_a 𝐬 denotes the number of cells in a configuration 𝐬 that are in a state a. Accordingly, let S^a_n × n,k be the set of such configurations that are symmetric: S^a_n × n,k = S_n × n ∩ D^a_n × n,k, and for any v ∈ ℤ_n × ℤ_n, let S^a_n × n,k( v) denote the set of configurations in S^a_n × n,k that are generated by v, so that S^a_n × n,k( v) = S_n × n( v) ∩ D^a_n × n,k. As a direct corollary, for any a ∈ Σ, any n, k ∈ ℕ, and v = (l_1, l_2) ∈ ℤ_n × ℤ_n, S^a_n × n,k( v) ≠ ∅ ⟺ |⟨ v⟩| divides k, where |⟨ v⟩| = n/gcd(l_1, l_2, n). To launch our enumeration endeavour, we focus first on shift-symmetric configurations of a single generating vector. For any a ∈ Σ, any k ∈ ℕ, and v ∈ ℤ_n × ℤ_n such that |⟨ v⟩| divides k, [lemma:symmetric-config-2d-active-cell-intersect-size] |S^a_n × n,k( v)| = \binom{n^2/|⟨ v⟩|}{k/|⟨ v⟩|} (|Σ| - 1)^{(n^2 - k)/|⟨ v⟩|}. To derive the counting formulas for k-active-cell configurations, we mirror the three progressively more efficient counting techniques based on mutually independent generators for |S_n × n| from Section <ref>, but this time we root them in Eq. <ref>. As before, we start with a base formula. Pick n, k ∈ ℕ with k ≤ n and let d = gcd(k, n). Let n = ∏_{i=1}^{ω(n)} p_i^{α_i}, k = ∏_{i=1}^{ω(k)} q_i^{β_i}, and d = ∏_{i=1}^{ω(d)} r_i^{γ_i} be the prime factorizations of n, k, d, respectively. Then for any a ∈ Σ, [lemma:symmetric-config-2d-active-cell-size] |S^a_n × n,k| = ∑_{0 ≺ u ⪯ r+1} (-1)^{1 + |u|} ( ∏_{i=1}^{ω(d)} \binom{r_i + 1}{u_i} ) \binom{n^2/h( u)}{k/h( u)} (|Σ| - 1)^{(n^2 - k)/h( u)}, where r = (r_1, …, r_{ω(d)}) and h( u) = ∏_{i=1}^{ω(d)} r_i^{min(u_i, 2)}. Similarly to Eq. <ref> from Section <ref>, the following alternative counting method is more efficient than the core Eq. <ref> due to the grouping of the exponential elements. [lemma:symmetric-config-2d-active-cell-size-alternative] |S^a_n × n,k| = ∑_{0 ≺ v ⪯ 2} ∑_{v ⪯ u ⪯ top(v)} (-1)^{1 + |u|} \binom{n^2/h( v)}{k/h( v)} (|Σ| - 1)^{(n^2 - k)/h( v)} ∏_{i=1}^{ω(d)} \binom{r_i + 1}{u_i}, where h( v) = ∏_{i=1}^{ω(d)} r_i^{min(v_i, 2)} and top( v) ∈ ℤ^{ω(d)} has ith coordinate top(i) = v_i if v_i < 2, and r_i + 1 if v_i = 2. At last, as a parallel to Eq. <ref>, we derive the final formula, which further simplifies the counting mechanics by collapsing the inner binomial sum to a simple expression r(i). [theorem:symmetric-config-2d-active-cell-size-final] |S^a_n × n,k| = ∑_{0 ≺ v ⪯ 2} (-1)^{1 + |v|} \binom{n^2/h( v)}{k/h( v)} (|Σ| - 1)^{(n^2 - k)/h( v)} ∏_{i=1}^{ω(d)} r(i), where h( v) = ∏_{i=1}^{ω(d)} r_i^{min(v_i, 2)} and r(i) = 1 if v_i = 0; r_i + 1 if v_i = 1; and r_i if v_i = 2. As a special case, it can be shown that the number of binary symmetric configurations (|Σ| = 2) with k sites in state a is |S^a_n × n,k| = ∑_{0 ≺ v ⪯ 2} (-1)^{1 + |v|} \binom{n^2/h( v)}{k/h( v)} ∏_{i=1}^{ω(d)} r(i). For illustration purposes, an example of the three increasingly more compact counting formulas is given for n = 2^{α_1} 3^{α_2} and k = 2^{β_1} 3^{β_2}, β_1 ≤ α_1, β_2 ≤ α_2, in Appendix <ref>. §.§ Probability of Selecting Shift-Symmetric Configuration over Density-Uniform Distribution Besides a uniform distribution, a CA's performance is commonly evaluated using a so-called density-uniform distribution, in which the probability of selecting k active cells (#_a 𝐬 = k), a density, is uniformly distributed. Since for a density k there exist \binom{n^2}{k} (|Σ| - 1)^{n^2 - k} configurations and each density is equally likely, the probability of selecting a shift-symmetric configuration in a lattice N = n^2 over a density-uniform distribution is then P_n × n^dens = (1/(n^2 + 1)) ∑_{k=0}^{n^2} |S^a_n × n,k| / ( \binom{n^2}{k} (|Σ| - 1)^{n^2 - k} ). As presented in Figure <ref>, the probability for a density-uniform distribution decreases an order of magnitude more slowly than for the uniform one and reaches 0.001 even for N = 45^2. That is because a density-uniform distribution more often selects configurations with few or many active cells, which are combinatorially more symmetric. § SHIFT-SYMMETRIC CONFIGURATION DETECTION For practical reasons, e.g., to test whether a current system's configuration is shift-symmetric, and if so take an action (e.g., restart), we provide an algorithm to efficiently detect an occurrence of shift-symmetry. First, to find out whether a configuration is shift-symmetric by a shift 𝐯, we start at a corner cell 𝐰 = (0,0) and check if all the cells at the orbit 𝐰 ⊕ i 𝐯 are in the same state. If yes, we repeat this process for the next orbit and so on, moving in an arbitrary but fixed order (e.g., left-right, up-down), until we check all the cells. If a cell has been visited before, we skip it and move on until we find an unvisited cell, which marks the start of the next orbit. Also, if the test fails at any point, the configuration is non-shift-symmetric (by 𝐯), and the process can be terminated. Otherwise, the property holds for all the cells and the configuration is shift-symmetric. To determine whether a configuration is shift-symmetric globally, a naive way would be to try all possible non-zero vectors 𝐯 and check if any of them passes the aforementioned procedure. Luckily, as we discovered in Section <ref>, each configuration shift-symmetry “overlaps” with the mutually independent generators from G_n. Recall that these generators are defined by prime factors and their total number |G_n| = ω(n) + ∑_{i=1}^{ω(n)} p_i is significantly smaller than n^2.
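The procedure translates almost line for line into code. A minimal sketch (ours; it reuses generator_set from the earlier sketch, and the early abort on the first mismatching orbit is what the average-case analysis below relies on):

```python
def detect_shift_symmetry(config, n):
    """Return a generator v in G_n under which config is shift-symmetric,
    or None if no prime generator applies (hence no shift-symmetry at all)."""
    for v in generator_set(n):          # |G_n| = omega(n) + sum of prime divisors
        l1, l2 = v
        visited = [[False] * n for _ in range(n)]
        symmetric = True
        for x in range(n):
            for y in range(n):
                if visited[x][y]:
                    continue            # cell already covered by an earlier orbit
                cx, cy = x, y
                while not visited[cx][cy]:   # walk the orbit w, w+v, w+2v, ...
                    visited[cx][cy] = True
                    if config[cx][cy] != config[x][y]:
                        symmetric = False    # first mismatch: abort this vector
                        break
                    cx, cy = (cx + l1) % n, (cy + l2) % n
                if not symmetric:
                    break
            if not symmetric:
                break
        if symmetric:
            return v
    return None
```

Each vector costs at most n^2 cell visits and at most |G_n| vectors are tried, matching the worst-case bound derived next.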
In the worst case, the shift-symmetry test needs to visit all n^2 cells and there are |G_n| vectors to try. Since O(|G_n|) = O(sopf(n)) and sopf(n) = ∑_{i=1}^{ω(n)} p_i, also known as the integer logarithm, is at most n, the worst-case time complexity is [theorem:detection-algorithm-worst-case-complexity] O(n^3). Similarly, with a slightly more complicated proof, we can show that the average-case time complexity of the shift-symmetry test for a configuration generated from a uniform distribution is [theorem:detection-algorithm-average-case-complexity] O(n^2). Note that the worst and average-case time complexities of O(n^3) and O(n^2) translate to O(√(N)N) and linear O(N), respectively, when interpreted through the lens of the number of cells N = n^2. The function sopf(n), which plays a crucial role in both O formulas, is of a logarithmic nature in “most of the cases,” but equals n for primes. Since the number of primes is infinite, we could not use any tighter asymptote than n. However, for a randomly chosen n the expected time complexities drop to just O(log(n)n^2) and O(log^2(n)), respectively. It is worth mentioning that the presented algorithm detects whether a configuration is shift-symmetric but does not count the number of shift-symmetries in a configuration. The validity of the detection holds because we know that any shift-symmetric configuration must obey at least one of the prime generators from G_n. Nevertheless, to determine the number of shift-symmetries, i.e., the number of vectors with distinct vector spaces in ℤ_n × ℤ_n for which the cells at the same orbit share the same state, we would also need to consider sub-vectors, whose satisfiability cannot be generally inferred from the prime generators. Construction of a counting algorithm is addressable but goes beyond the scope of this paper. § DISCUSSION AND CONCLUSION Shift-symmetry, as we illustrated in the paper, decreases the system's computational capabilities and expressivity, and is generally best avoided. For each shift-symmetry a system falls into, the configuration folds by the order of symmetry and “independent" computation shrinks to a smaller, prime fraction of the system. The rest is mirrored and lacks any intrinsic computational value or novelty. The number of reachable configurations shrinks proportionally as well. One of the key aspects of shift-symmetry is that it is maintained (irreversible) for any number of states, and any uniform transition and neighborhood functions.
It means that the occurrence of shift-symmetry is rooted in the CA model itself, specifically, in the cells' uniformity, synchronous update, and toroidal topology. Shift-symmetry is preserved as long as a transition function is uniform (shared among the cells), even if non-deterministic. In other words, during each step a transition function can be discarded and regenerated at random. However, within the same synchronous update it must be consistent, i.e., two cells whose neighborhood's sub-configurations are the same must be transitioned to the same state. We showed that a non-shift-symmetric solution is unreachable from a shift-symmetric configuration. Even more, a shift-symmetric configuration cannot be reached from another shift-symmetric one if the vector space defining the symmetries of the starting configuration is not a subset of the target configuration's vector space. This renders tasks such as leader election <cit.>, several image processing routines including pattern recognition <cit.>, and encryption <cit.> insolvable by uniform CAs in a general sense. These procedures are fundamental for many distributed protocols and algorithms. Additionally, leader election contributes to the decision making of biological societies <cit.>, and is a key driver of cell differentiation <cit.>, responsible for structural heterogeneity and specialization. To determine how likely a configuration randomly generated from a uniform distribution is shift-symmetric, hence insolvable, we efficiently enumerated and bounded the number of shift-symmetric configurations using mutually independent generators. We also introduced a lower bound, tight exactly for prime sizes, and an upper bound, and showed that even-size lattices are locally the most likely to be shift-symmetric. By specializing to Cartesian powers of cyclic groups (the two-dimensional case), we obtained more effective counting and probability formulas and sharper bounds compared to the state-of-the-art work addressing the problem for general groups <cit.>. We also extended our machinery to a fixed number of active symbols and derived a probability formula for the density-uniform distribution. Overall, shift-symmetry is not as rare as one would think, especially for small or non-prime lattices, or when a configuration is generated using a density-uniform distribution. Asymptotically, the probability for a uniform distribution drops exponentially with the lattice size, but an order of magnitude slower for a density-uniform distribution. For instance, the probability for a 100^2 square lattice is around 10^-1505 using a uniform and 2 × 10^-4 using a density-uniform distribution. Importantly, shift-symmetry does not necessarily have to be harmful for all tasks. For instance, density classification <cit.>, which is widely used as a CA benchmark problem, requires a final configuration to be either 1^N, if the majority of cells are initially in the state 1, or 0^N otherwise. Since the expected homogeneous configurations are fully shift-symmetric, they can potentially be reached from any configuration. Naturally, that depends on the structure of a transition function, but shift-symmetry does not impose any strong restrictions here. The ability to reach a valid answer does not necessarily mean reaching a correct answer. However, for density classification, shift-symmetry tolerates the latter as well.
It is because a shift-symmetric configuration consists purely of repeated sub-configurations, and so the density (ratio of ones) in a sub-configuration is the same as in the whole. To detect whether a configuration is shift-symmetric, we constructed an algorithm which, by using the base prime generators, can efficiently determine the presence of shift-symmetry in linear O(N) time for prime lattice sizes and in just O((1/2 log(N))^2) time for a randomly chosen N on average. By moving from one to two dimensions, we generalized our machinery to vector translations, which can be extended to the n-dimensional case <cit.>. It is expected that the number of shift-symmetric configurations will grow with the dimensionality of the lattice. It will be interesting to investigate this relation from the perspective of prime-exponent divisors. An important implication of shift-symmetry is that cyclic behavior must occur only within the same symmetry class defined by a set of prime shifts (vectors), as illustrated in Figure <ref>. Note that we count no-symmetry as a class as well. This leads to the realization that once a CA gains a symmetry, i.e., a configuration crosses symmetry classes, it cannot be injective and reversible, and there must exist a configuration without a predecessor, a so-called “Garden of Eden" configuration <cit.>. It means that the only way for the CA to stay injective is to decompose all the configurations into cycles, each fully residing in a certain shift-symmetry class. Again, one large class would contain all the non-shift-symmetric configurations. An open question is for which lattices, i.e., for how many shift-symmetric configurations, CAs are non-injective, thus irreversible, on average. As opposed to our shift-symmetric endeavour, which applies to any transition function, investigating injectivity would require assuming something about the transition function, e.g., that it is generated randomly. Trivially, for any lattice there always exists an injective transition function; an example is the identity function. As we proved, the number of symmetries in any synchronous toroidal CA is non-decreasing. A natural question is: could it be increasing in the “average" case for a random transition function? We know that the expected behavior of a randomly generated CA is most likely chaotic and the attractor length is exponential in the lattice size N, as opposed to ordered or complex CAs with linear or quadratic attractors <cit.>. Would the length of the attractor be sufficient to discover a shift-symmetry if we keep a random CA running long enough, potentially |Σ|^N time steps? As seen in Figure <ref>, the ratio of shift-symmetric configurations assuming a uniform distribution is exponentially decreasing with the lattice size, and prime lattices could produce “only" around n|Σ|^n symmetric configurations. For a randomly chosen lattice size, dimensions, and cell connectivity, we expect the number of reachable symmetries to be significantly smaller than the total number of symmetries available. However, for symmetry-rich lattices, we speculate that toroidal synchronous uniform systems, such as CAs, could undergo spontaneous symmetrization, contracting an initial configuration to a fully homogeneous state (analogous to the Big Crunch). If proven, this would directly imply the system's non-injectivity and irreversibility, and would bind symmetrization with non-ergodicity. This hypothesis will be addressed in our future work.
We suggest that several phenomena observed in CA dynamics, such as irreversibility, the emergence of structured “patterns", and self-organization, could be explained by or attributed to shift-symmetry. As demonstrated by Wolfram <cit.> on the 256 elementary one-dimensional CAs, when run long enough, most of these CAs condense to ordered structures: homogeneous configurations and self-similar patterns, which are in fact shift-symmetric. A straightforward way to fight symmetry would be to introduce noise, i.e., to break the uniformity of cells and/or to use an asynchronous update. Depending on the amount of noise, this could, however, disrupt the consistency of local, particle-based interactions, which give rise to a global computation. Clearly, asynchronicity makes a system more robust but sacrifices the information processing by algebraic structures, which could exist only due to the synchronous update. The practical utility of the presented enumeration formulas and probability calculations for a given distributed application is that we can minimize the likelihood of shift-symmetry-caused insolvability as well as the amount of resources needed. An online supplementary web page, which implements these formulas as well as an embedded simulator to run a CA on a shift-symmetric configuration, can be found at <https://coel-sim.org/symmetry>. § ACKNOWLEDGMENTS This material is based upon work supported by the National Science Foundation under grants #1028120, #1028378, #1518833, and by the Defense Advanced Research Projects Agency (DARPA) under award #HR0011-13-2-0015. The views expressed are those of the author(s) and do not reflect the official policy or position of the Department of Defense or the U.S. Government. Approved for Public Release, Distribution Unlimited. § PROOFS Note: lemmas, theorems, and corollaries are numbered such that those starting with E correspond to the equations in the main text from which they are referenced (e.g., Lemma E.4 ↔ Equation 4). The remaining non-equation-referenced (auxiliary) lemmas have the A prefix. For any non-zero vector (generator) v ∈ ℤ_n × ℤ_n, S_n × n( v) = {𝐬 ∈ Σ^n × n | ∀ u ∈ ℤ_n × ℤ_n ∀ w ∈ ⟨ v⟩ : s_ u = s_ u⊕ w}, where ⟨ v⟩ is the cyclic subgroup of ℤ_n × ℤ_n generated by v. [lemma:symmetric-config-set-2d-l1_l2-size] For any non-zero v = (l_1, l_2) ∈ ℤ_n × ℤ_n, the following hold: (i) |S_n × n( v)| = |Σ|^{n^2/|⟨ v⟩|}; (ii) |⟨ v⟩| = n/gcd(l_1, l_2, n); (iii) |S_n × n( v)| = |Σ|^{n gcd(l_1,l_2,n)}. (i). When v = (l_1,l_2) is repeatedly applied to any cell in the lattice, an orbit is generated, consisting of |⟨ v⟩| cells that must share a common state for any configuration in S_n × n( v). The number of distinct orbits of cells in the lattice is simply n^2/|⟨ v⟩|. Any configuration in S_n × n( v) is thus uniquely determined by choosing a state from Σ for each orbit of cells, so (i) follows. (ii). For l ∈ ℤ_n it is easily shown that |⟨ l ⟩| = n/gcd(l,n), so |⟨ v⟩| = lcm(n/gcd(l_1,n), n/gcd(l_2,n)) = n/gcd(l_1, l_2, n), where lcm denotes the least common multiple. (iii). By (ii), the exponent in (i) becomes n^2/|⟨ v⟩| = n^2/(n/gcd(l_1,l_2,n)) = n gcd(l_1,l_2,n), as desired. Fix any non-zero vector v ∈ ℤ_n × ℤ_n and any shift-symmetric square configuration s ∈ S_n × n( v). Then for any w ∈ ℤ_n × ℤ_n, the neighborhoods satisfy η_ w( s) = η_ w⊕ v( s). Suppose the neighborhood function, which is uniformly shared by all cells, is defined by (relative) vectors u_1, …, u_m, i.e., η_ w( s) = (s_ w⊕ u_1, …, s_ w⊕ u_m), and assume the lemma does not hold, i.e., there exists w for which η_ w( s) ≠ η_ w⊕ v( s).
Then(s_ w⊕ u_1,…,s_ w⊕ u_m) ≠ (s_( w⊕ v) ⊕ u_1,…,s_( w⊕ v) ⊕ u_m)and so there exists some u_j such that s_ w⊕ u_j≠ s_( w⊕ v) ⊕ u_j, i.e., s_ w⊕ u_j≠ s_( w⊕ u_j) ⊕ v, which contradicts the assumption that s∈ S_n × n ( v). [theorem:symmetric-stays-symmetric-2d] If s∈ S_n × n ( v) then Φ(𝐬) ∈ S_n × n ( v) for any uniform global transition rule Φ. Suppose 𝐪 = Φ(𝐬) is not symmetric by v. Then, there exists u∈ℤ^n ×ℤ^n, such that q_ u≠ q_ u⊕ v. By Lemma <ref>, η_ u( s) = η_ u⊕ v( s), and so q_ u = ϕ(η_ u( s)) = ϕ(η_ u⊕ v( s)) = q_ u⊕ v,which is a contradiction. [lemma:symmetric-config-subset-2d] For any u,v∈ℤ_n ×ℤ_nS_n × n( u) ⊆ S_n × n( v) ⟨ v⟩≤⟨ u⟩ .(⇒). Suppose S_n × n( u) ⊆ S_n × n( v). ThenS_n × n( u) = S_n × n( u) ∩ S_n × n( v) = S_n × n({ u,v}) by Corollary (Eq.) <ref>. But then |⟨ u⟩| = |⟨ u,v⟩| by Corollary (Eq.) <ref>, which forces ⟨ u⟩ = ⟨ u,v⟩, so that v∈⟨ u⟩ and ⟨ v⟩≤⟨ u⟩ as desired.(⇐). By way of contradiction, suppose that S_n × n( u) ⊈S_n × n( v) and⟨ v⟩≤⟨ u⟩. Let 𝐬∈ S_n × n( u) such that 𝐬∉S_n × n( v). Then 𝐬 is symmetric under u but not under v. Consequently, there exists w∈ℤ_n ×ℤ_n such that s_ w≠ s_ w⊕ v. But 𝐬∈ S_n × n( u) and v∈⟨ u⟩ by assumption, so Lemma <ref> implies that s_ w = s_ w⊕ v, which is a contradiction. For any prime p that divides n and any i (0 ≤ i < n), the cyclic group ⟨(n/p,in/p) ⟩ is simple, i.e., it has no nontrivial proper subgroups. By Lemma <ref>(ii), we see that ⟨(n/p,in/p) ⟩ has order p, and by Lagrange's Theorem, any group with prime order is simple. By swapping the coordinates, the proof applies also to each subgroup of the form ⟨(in/p, n/p) ⟩.[lemma:symmetric-config-primes-2d] Fix any natural number n and let n=∏_j=1^ω(n) p_j^α_j be the prime factorization of n, where ω(n) denotes the number of distinct prime factors. ThenS_n × n = ⋃_ w∈ G_n S_n × n( w),where G_n is defined as in Definition (Eq.) <ref>. (⊆).Let 𝐬∈ S_n × n, so that 𝐬∈ S_n × n( v) for some nonzero v = (a,b) ∈ℤ_n ×ℤ_n. It suffices to show that ⟨ w⟩≤⟨ v⟩ for some w∈ G_n, since this fact, by Lemma <ref>, impliesS_n × n( v) ⊆ S_n × n( w) and therefore 𝐬∈ S_n × n( w).Without loss of generality, we may assume gcd(a,b,n) = 1. Otherwise, we simply divide everything by d = (a,b,n) to obtain v̂ = (â,b̂), and n̂, respectively. Once we show that ⟨ŵ⟩≤⟨v̂⟩ for some ŵ∈ G_n̂, we multiply throughout by d to obtain the desired result.Case 1. Suppose gcd(a,n) = 1. Then ai ≡_n b and aj ≡_n 1 for some i, j ∈ℤ. Also, nv≡_n (0,0), so | ⟨ v⟩ | divides n.Let p be any prime divisor of | ⟨ v⟩ | and write n = p m for some m ∈ℤ. Let w = (m, im) and note that w∈ G_n(p). Also observe v = a (1,i) and w = m (1,i), so thatmjv = mja (1,i) = m ( 1,i ) =w.Therefore w∈⟨ v⟩ and thus ⟨ w⟩≤⟨ v⟩ as desired.Case 2. Suppose gcd(a,n) ≠ 1.Let p be any prime divisor of both a and n, so that a=pa' and n = p n' for some a', n' ∈ℤ. Let w = (0,n') and note that w∈ G_n(p). Observe that an' = a'pn' = a'n ≡_n 0, son'v = n' (a,b) = (an',bn') = (0,bn') = bw.Therefore n'v∈⟨ w⟩. But by Lemma <ref>(ii), | ⟨ w⟩ | = p, a prime.So if n'v is nonzero, then it generates ⟨ w⟩. But n'v is indeed nonzero, since its second coordinate is b n', and if bn' ≡_n 0, then n | bn'. Dividing by n', we see p | b.But recall that p divides a and n, and we assumed at the beginning (without loss of generality) that (a,b,n)=1. So p cannot divide b. This contradiction shows n'v is nonzero and so n'v generates ⟨ w⟩. Thus w∈⟨ w⟩ = ⟨ n'v⟩≤⟨ v⟩.So ⟨ w⟩≤⟨ v⟩ as desired.(⊇). Immediate by Definition (Eq.) 
<ref>.[lemma:symmetric-config-2d-primes-linearly-independent-all]Fix any n ∈ℕ.For any distinct u,v∈ G_n, ⋆ |⟨ u⟩∩⟨ v⟩ | = 1.First, suppose that u∈ G_n(p) and v∈ G_n(q), where p ≠ q. By Lemma <ref>(i), |⟨ u⟩ | = p and |⟨ v⟩ | =q. Since | ⟨ u⟩∩⟨ v⟩ | must divide both of these primes, the line (<ref>) must hold as claimed.Next, suppose u, v∈ G_n(p) and write n=n̂p for some n̂∈ℤ. Suppose u=(n̂,in̂) and v=(n̂,jn̂) for some 0≤ i<j < p. If x∈⟨ u⟩∩⟨ v⟩ then ∃ k,l (0 ≤ k, l <p) such that x = k u = l v. But then (kn̂,kin̂)=(ln̂,ljn̂), so kn̂≡_n ln̂ and thus k ≡_p l.But also, kin̂≡_n ljn̂, so that ki ≡_p lj. Since i ≢_p j, this forces k ≡_p 0, so that x=0 and (<ref>) must hold as claimed.Finally, suppose u, v∈ G_n(p) and suppose u=(0,n̂) and v=(n̂,in̂) for some 0≤ i < p. If x∈⟨ u⟩∩⟨ v⟩ then ∃ k,l (0 ≤ k, l <p) such that x = k u = l v. But then (0,kn̂)=(ln̂,lin̂), so 0 ≡_n ln̂ and thus 0 ≡_p l.But also, kn̂≡_n lin̂, so that k ≡_p li and therefore k ≡_p 0. Now x=0 and (<ref>) must hold as claimed.[lemma:symmetric-config-2d-primes-linearly-independent] Fix any n ∈ℕ and any prime divisor p of n. Let n̂=n/p.Then for any distinct u, v∈ G_n(p),⟨ u, v⟩ = ⟨ (n̂,0), (0,n̂) ⟩.In particular, |⟨ u, v⟩| = p^2. (⊆). First suppose u=(n̂,in̂) and v=(n̂,jn̂) for some 0 ≤ i<j<p.Then u = (n̂,0) + i(0,n̂) and v = (n̂,0) + j(0,n̂). So ⟨ u, v⟩⊆⟨ (n̂,0), (0,n̂) ⟩ as desired.A similar argument holds when u=(n̂,in̂) and v=(0,n̂). (⊇). Again suppose u=(n̂,in̂) and v=(n̂,jn̂) for some 0 ≤ i<j<p. Then u- v∈⟨ (0, n̂) ⟩. But u- v≠0 and |⟨ (0,n̂) ⟩|=p, so u- v generates ⟨ (0,n̂) ⟩.Thus (0,n̂) ∈⟨ u- v⟩⊆⟨ u, v⟩. Likewise, (n̂,0) ∈⟨ j u-i v⟩⊆⟨ u, v⟩, so the desired containment holds. A similar argument can be made when u=(n̂,in̂) and v=(0,n̂), showing that (n̂,0) ∈⟨ u-i v⟩, which implies the desired result. [lemma:symmetric-config-overall-2d-size] Let n=∏_i=1^k p_i^α_i be the prime factorization of n, where k=ω(n), the number of distinct prime factors of n. Then|S_n × n| = ∑_ 0 v p+ 1 (-1)^1 + | v|∏_i=1^kp_i + 1v_i| Σ|^f( v),where p = (p_1,…,p_k) and f( v) = n^2 ∏_i=1^k p_i^-min (v_i,2). By Lemma <ref>, inclusion-exclusion, and Eq. <ref>,|S_n × n|= |⋃_w ∈ G_n S_n × n(w)|= ∑_∅≠ J ⊆ G_n (-1)^|J|+1|S_n × n(J) | .Since G_n = ⋃_j = 1^k G_n(p_j), we have k=ω(n) sets from which to choose the elements of J, so|S_n × n| = ∑_ J_1 ⊆ G_n(p_1)…J_k ⊆ G_n(p_k) (-1)^1 + ∑_i=1^k|J_i|| S_n × n(⋃_i=1^k J_i ) |,where the sum excludes the case when J_i =∅ for all i. It follows from Eq. <ref> that S_n× n(⋃ J_i) = ⋂ S_n × n(J_i) and so Eq. <ref> gives| S_n × n( ⋃_i=1^k J_i ) | = |Σ |^n^2/|⟨⋃_i=1^k J_i ⟩ |.But by Lemma <ref> we know | ⟨ J_i ⟩∩⟨ J_j ⟩ |=1 when i ≠ j, so | ⟨⋃_i=1^k J_i ⟩| = ∏_i=1^k | ⟨ J_i ⟩ |. Since J_i ⊆ G_n(p_i), recall that⟨ J_i ⟩ = ⟨ (n/p_i,0),(0,n/p_i) ⟩ when |J_i| ≥ 2 by Lemma <ref>. So |⟨ J_i ⟩| = 1, p_i, and p_i^2 when |J_i|=0, 1, and ≥ 2, respectively. Therefore∏_i = 1^k|⟨ J_i ⟩| = ∏_i = 1^k p_i^min(|J_i|,2)Substituting all this into the expression for |S_n × n|, we obtain|S_n × n| = ∑_ J_1 ⊆ G_n(p_1)…J_k ⊆ G_n(p_k) (-1)^1 + ∑_i=1^k|J_i|(|Σ|^n^2/∏_i = 1^k p_i^min(|J_i|,2)) Now, because the content of J_i is irrelevant and we care only about the cardinality |J_i|, for each size v_i = |J_i| we have |G_n(p_i)|v_i = p_i + 1v_i ways of choosing v_i elements from G_n(p_i), which produces the final formula as required.[lemma:symmetric-config-overall-2d-size-alternative] Let n=∏_i=1^k p_i^α_i be the prime factorization of n, where k=ω(n), the number of distinct prime factors of n. 
Then an alternative counting of |S_n × n| is|S_n × n| = ∑_ 0 v 2|Σ|^g( v)( ∑_ v u top( v)(-1)^1 + | u|∏_i=1^kp_i + 1u_i)where g( v) = n^2 ∏_i=1^k p_i^-v_i and top( v) ∈ℤ^k has ith coordinatetop(i) =v_i v_i < 2 p_i + 1 v_i = 2.We know that the exponent of each p_i in S_n × n from Lemma <ref> is at most 2. Therefore for given v_1, …, v_k ∈{0,1,2} we can combine all binomial expressions associated with |Σ|^n^2/∏_i = 1^k p_i^v_i. If v_i ≤ 1 then we have p_i + 1v_i selections from G_n(p_i), and ⋃_u_i = 2^p_i + 1p_i + 1u_i for v_i = 2. These two expressions could be generalized as ⋃_u_i = v_i^ top(i)p_i + 1u_i using the top function defined above. Therefore the total coefficient of |Σ|^n^2/∏_i = 1^k p_i^v_i is ∑_ v_1 ≤ u_1 ≤ top(1)…v_k ≤ u_k ≤ top(k) (-1)^1 + ∑_i = 1^k u_i∏_i=1^kp_i + 1u_ias required.[theorem:symmetric-config-overall-2d-size-final] Let n=∏_i=1^k p_i^α_i be the prime factorization of n, where k=ω(n), the number of distinct prime factors of n. Then|S_n × n| = ∑_ 0 v 2 (-1)^1 + | v| |Σ|^g( v)∏_i=1^kr(i)where g( v) = n^2 ∏_i=1^k p_i^-v_i andr(i) = 1 v_i = 0 p_i + 1 v_i = 1 p_i v_i = 2. For a given vector v with v_1, …, v_k ∈{0,1,2} we define b( v) = ∑_ v u top( v)(-1)^1 + | u|∏_i=1^kp_i + 1u_i,where top(i) = v_iv_i < 2p_i + 1v_i = 2.Using Lemma <ref>, we are left to show thatb( v) = (-1)^1 + | v|∏_i=1^kr(i).We prove it by induction on k. As the induction basis we choose k = 1, and so n = p, where p is a prime. Since b( v) = r(0) we need to confirm it equals -1, p + 1, or -p for three different cases of v defined by the function r.If v = (0), top( v) = (0) and the only u is u = (0), which gives b( v) = - p + 10 = -1. If v = (1), top( v) = (1) and the only u is u = (1), and so b( v) = p + 11 = p + 1. If v = (2), top( v) = (p + 1) and u ranges from (2) to (p + 1). Thereforeb( v)= ∑_u_1 = 2^p + 1 (-1)^1 + u_1p + 1u_1= ∑_u_1 = 0^p + 1 (-1)^1 + u_1p + 1u_1_0 + p + 10 - p + 11= -p For the induction step we prove:b( v) = (-1)^1 + | v|∏_i=1^kr(i) ⇒ b( w) = (-1)^1 + | w|∏_i=1^k + 1r(i),where w = (v_1, …, v_k, v_k + 1). Similarly to the induction basis we need to consider three cases for v_k + 1: If v_k + 1 = 0b( w)= p_k + 1 + 10∑_ v u top( v) (-1)^1 + | u|∏_i=1^kp_i + 1u_i_b( v)= (-1)^1 + | v|∏_i=1^kr(i) = (-1)^1 + | w|∏_i=1^k + 1r(i)If v_k + 1 = 1b( w)= - p_k + 1 + 11∑_ v u top( v) (-1)^1 + | u|∏_i=1^kp_i + 1u_i_b( v)= - (p_k + 1 + 1) (-1)^1 + | v|∏_i=1^kr(i) = (-1)^1 + | w|∏_i=1^k + 1r(i)If v_k + 1 = 2b( w)= ∑_u_k + 1 = 2^p_k + 1 + 1 (-1)^u_k + 1p_k + 1 + 1u_k + 1_p_k + 1 b( v) = p_k + 1 (-1)^1 + | v|∏_i=1^kr(i) = (-1)^1 + | w|∏_i=1^k + 1r(i) [cor:symmetric-config-overall-2d-prime-size] Let n=p, where p is a prime. Then|S_n × n| = |Σ|^n(n + 1) - |Σ|n.For v = (1), g( v) = n and b( v) = (n + 1), and for v = (2), g( v) = 1 and b( v) = - n. Let n=∏_i=1^k p_i^α_i be the prime factorization of n, where k=ω(n), the number of distinct prime factors of n, and for each m(1 ≤ m ≤ k), letq_n × n^m =∑_ 0 v 2 (-1)^1 + | v| |Σ|^g( v)∏_i=1^mr(i), where v∈ℤ^m and g( v) and r(i) are defined as before. Note that |S_n × n| = q_n × n^k. Then for m < k q_n × n^m ≤ q_n × n^m + 1. Let v = (v_1, …, v_m) and w = (v_1, …, v_m, v_m + 1). 
Thenq_n × n^m + 1 = ∑_ 0 w 2 (-1)^1 + | w| |Σ|^g( w)∏_i=1^m + 1r(i) =∑_v_m + 1 = 0^2 ((-1)^v_m + 1 r(m + 1)∑_ 0 v 2 (-1)^1 + | v| |Σ|^g( v)p_m + 1^-v_m+1∏_i=1^mr(i))+ ∑_v_m + 1 = 1^2 (-1)^1 + v_m + 1 |Σ|^n^2 p_m + 1^-v_m+1 r(m + 1) We split the expression into five parts: q_n × n^m + 1 = x_0 + x_1 + x_2 + y_1 + y_2and define, for any c ∈ℝ q_n × n^m(c) =∑_ 0 v 2 (-1)^1 + | v| |Σ|^g( v)c∏_i=1^mr(i), i.e., q_n × n^m = q_n × n^m(1). Then x_0 = q_n × n^m x_1 = - (p_m + 1 + 1) q_n × n^m(p_m + 1^-1)x_2 = p_m + 1 q_n × n^m(p_m + 1^-2) y_1 = |Σ|^n^2 p_m + 1^-1 (p_m + 1 + 1)y_2 = - |Σ|^n^2 p_m + 1^-2 p_m + 1 Now we show that y_1 + x_1 + y_2≥ 0 Let A = n^2 p_m + 1^-1. Then (y_1 + x_1 + y_2)(p_m + 1 + 1)^-1 ≥ |Σ|^A - q_n × n^m(p_m + 1^-1) -|Σ|^A/p_m+1≥ |Σ|^A - ∑_ 0 v 2 |Σ|^g( v) p_m + 1^-1∏_i=1^mr(i)-|Σ|^A/p_m+1 = |Σ|^A - ∑_ 0 v 2 |Σ|^A/∏_i=1^mp_i^v_i∏_i=1^mr(i)-|Σ|^A/p_m+1≥ |Σ|^A - ∏_i=1^m(p_i + 1) ∑_ 0 v 2 |Σ|^A/∏_i=1^mp_i^v_i-|Σ|^A/p_m+1≥ |Σ|^A - n 2^m ∑_ 0 v 2 |Σ|^A/∏_i=1^mp_i^v_i-|Σ|^A/p_m+1≥|Σ|^A- n 2^m ∑_ 0 v 2 |Σ|^A/p_l -|Σ|^A/p_m+1 =|Σ|^A- 2^log_2(n) + m∑_ 0 v 2 |Σ|^A/p_l-|Σ|^A/p_m+1≥|Σ|^A- 2^log_2(n) + m 3^m |Σ|^A/p_l -|Σ|^A/p_m+1≥|Σ|^A- |Σ|^log_2(n) + m + 2m |Σ|^A/p_l-|Σ|^A/p_m+1≥|Σ|^A - |Σ|^3m + log_2(n) + A/p_l-|Σ|^A/p_m+1≥|Σ|^A - |Σ|^4log_2(n) + A/p_l -|Σ|^A/p_m+1≥|Σ|^A - |Σ|^4log_2(n) + A/2 -|Σ|^A/2≥|Σ|^A - 2|Σ|^4log_2(n) + A/2≥|Σ|^A - |Σ|^4log_2(n) + A/2 + 1≥ 0. Since x_2 is non-negative we can conclude that q_n × n^m + 1 = x_0_q_n × n^m + x_1 + y_1 + y_2_≥ 0 + x_2_≥ 0≥ q_n × n^m [lemma:symmetric-config-lower-bound]|Σ|^n(n + 1) - |Σ|n ≤ |S_n × n|,where equality holds if and only if n is a prime. If k = 1, i.e., n is a prime, the equality holds as shown in Corollary <ref>. If k > 1 using Lemma <ref> and p_1 < n, p_1 ≤n/2 |S_n × n|= q_n × n^k ≥ q_n × n^k - 1≥…≥ q_n × n^1= |Σ|^n^2 p_1^-1(p_1 + 1) - |Σ|^n^2 p_1^-2p_1 > |Σ|^n^2 n^-1(n + 1) - |Σ|^n^2 n^-2n Let n=∏_i=1^k p_i^α_i be the prime factorization of n, where k=ω(n), the number of distinct prime factors of n. Then |S_n× n| ≤ 2 ∑_i = 1^k |Σ|^n^2 p_i^-1(p_i + 1). As in the proof of Lemma <ref> we employ the function q_n × n, which can be decomposed into five parts as defined earlier q_n × n^m + 1 = x_0 + x_1 + x_2 + y_1 + y_2 Now we show that y_1≥ x_1 + x_2 + y_2 Let A = n^2 p_m + 1^-1. Then (y_1 - x_1 - x_2 - y_2)(p_m + 1 + 1)^-1 ≥ |Σ|^A + q_n × n^m(p_m + 1^-1)_≥ 0 - q_n × n^m(p_m + 1^-2) +|Σ|^A/p_m+1_≥ 0≥ |Σ|^A - q_n × n^m(p_m + 1^-2) ≥ |Σ|^A - ∑_ 0 v 2 |Σ|^g( v) p_m + 1^-2∏_i=1^mr(i)= |Σ|^A - ∑_ 0 v 2 |Σ|^A/∏_i=1^mp_i^v_ip_m + 1∏_i=1^mr(i)≥ |Σ|^A - ∏_i=1^m(p_i + 1) ∑_ 0 v 2 |Σ|^A/∏_i=1^mp_i^v_ip_m + 1≥ |Σ|^A - n 2^m ∑_ 0 v 2 |Σ|^A/∏_i=1^mp_i^v_ip_m + 1≥|Σ|^A- n 2^m ∑_ 0 v 2 |Σ|^A/p_l p_m + 1 =|Σ|^A- 2^log_2(n) + m∑_ 0 v 2 |Σ|^A/p_l p_m + 1≥|Σ|^A - 2^log_2(n) + m3^m |Σ|^A/p_l p_m+1≥|Σ|^A - |Σ|^log_2(n) + m + 2m |Σ|^A/p_l p_m+1 ≥|Σ|^A - |Σ|^3m + log_2(n)+ A/p_l p_m+1≥|Σ|^A - |Σ|^4log_2(n) + A/p_l p_m+1≥|Σ|^A - |Σ|^4log_2(n) + A/6≥ 0. Since y_1≥ x_1 + x_2 + y_2 q_n × n^m + 1 = x_0 + x_1 + x_2 + y_1 + y_2 ≤ x_0 + 2y_1. By substituting x_0 and y_1 we obtain a recursive inequality q_n × n^m + 1 ≤ q_n × n^m + 2 |Σ|^n^2 p_m + 1^-1 (p_m + 1 + 1) ≤ q_n × n^m -1 + 2 |Σ|^n^2 p_m^-1 (p_m + 1) + 2|Σ|^n^2 p_m + 1^-1 (p_m + 1 + 1)…≤ 2 ∑_i = 1^m + 1 |Σ|^n^2 p_i^-1(p_i + 1) To finalize the proof we use |S_n × n| = q_n × n^k.Let p be a prime divisor of n. Then|Σ|^n^2p^-1 (p + 1) ≤ |Σ|^n^2(p - 1)^-1 p.Let B = |Σ|^n^2p^-1. 
Then B^p/p - 1p - B(p + 1) = B(B^1/p - 1p - (p + 1)) ≥ B(|Σ|^n/p - 1p - (p + 1)) ≥ B(2^n/p - 1p - (p + 1)) ≥ B(2p - (p + 1)) ≥ 0Let p be a prime of n. Then|Σ|^n^2p^-1 (p + 1) ≤ 3|Σ|^n^2/2. By inductionΣ|^n^2p^-1 (p + 1) ≤ |Σ|^n^2(p - 1)^-1 p≤ |Σ|^n^2(p - 2)^-1 (p - 1) ≤…≤ |Σ|^n^22^-1 3[lemma:symmetric-config-upper-bound] Let n=∏_i=1^k p_i^α_i be the prime factorization of n, where k=ω(n), the number of distinct prime factors of n. Then |S_n× n| ≤ 6 log_2(n)|Σ|^n^2/2. By Lemma <ref> and Corollary <ref> |S_n× n| ≤ 2 ∑_i = 1^k |Σ|^n^2 p_i^-1(p_i + 1) ≤ 6k|Σ|^n^2/2≤ 6 log_2(n)|Σ|^n^2/2[lemma:symmetric-config-2d-active-cell-intersect-size] For any a ∈Σ, any k ∈ℕ, and v∈ℤ_n ×ℤ_n such that |⟨ v⟩| divides k| S^a_n × n,k( v) | = n^2/|⟨ v⟩|k/|⟨ v⟩|(|Σ| - 1)^n^2 - k/|⟨ v⟩|.Let 𝐬∈ S^a_n × n,k( v). Then the number of selections of state in 𝐬, i.e., the pattern size, is n^2/|⟨ v⟩|. To enumerate the number of such configurations, we first have tochoose k/|⟨ v⟩| out of n^2/|⟨ v⟩| sites to be in state a, and then fill the remaining n^2/|⟨ v⟩| - k/|⟨ v⟩| sites with states from Σ∖{a }. [lemma:symmetric-config-2d-active-cell-size] Pick n,k ∈ℕ with k ≤ n and let d=gcd(k,n). Let n =∏_i=1^ω(n) p_i^α_i, k=∏_i=1^ω(k) q_i^β_i, and d = ∏_i=1^ω(d) r_i^γ_i be the prime factorizations of n, k, d, respectively. Then for any a ∈Σ, |S^a_n × n,k| = ∑_ 0 u r +1 (-1)^1 + | u|( ∏_i=1^ω(d)r_i + 1u_i) ×n^2/h( u)k/h( u) (| Σ|-1)^n^2-k/h( u),where r = (r_1,…,r_ω(d)) and h( u) = ∏_i=1^ω(d) r_i^min (u_i,2).Using Eq. <ref>, Lemma <ref>, and Eq. <ref>,S^a_n× n,k = (⋃_ w∈ G_n S_n × n( w) ) ⋂D^a_n × n,k= ⋃_i = 1^ω(n)⋃_ w∈ G_n(p_i) S^a_n × n,k( w)= ⋃_i = 1^ω(d)⋃_ w∈ G_n(r_i) S^a_n × n,k( w).By the inclusion-exclusion principle|S^a_n × n, k| = ∑_ J_1 ⊆ G_n(r_1)…J_ω(d)⊆ G_n(r_ω(d)) (-1)^1 + ∑_i=1^ω(d)|J_i|| ⋂_ w∈∪_i J_i S^a_n × n( w) |.Now, by Eq. <ref>⋂_ w∈∪_i = 1^ω(d) J_i S^a_n × n,k( w)= ( ⋂_ w∈∪_i = 1^ω(d) J_i S_n × n( w) ) ∩ D^a_n × n,k= S_n × n(∪_i = 1^ω(d) J_i) ∩ D^a_n × n,k= S^a_n × n,k(∪_i = 1^ω(d) J_i)Finally let m = |⟨∪_i = 1^ω(d) J_i ⟩|, then using Lemma <ref>,|S^a_n × n,k(∪_i = 1^ω(d) J_i)|= n^2/mk/m (|Σ| - 1)^n^2 - k/m,where m = |⟨∪_i = 1^ω(d) J_i ⟩| = ∏_i=1^ω(d) r_i^min(|J_i|,2).[lemma:symmetric-config-2d-active-cell-size-alternative] Pick n,k ∈ℕ with k ≤ n and let d=gcd(k,n). Let n =∏_i=1^ω(n) p_i^α_i, k=∏_i=1^ω(k) q_i^β_i, and d = ∏_i=1^ω(d) r_i^γ_i be the prime factorizations of n, k, d, respectively. Then for any a ∈Σ,|S^a_n × n, k| = ∑_ 0 v 2v u top( v) (-1)^1 + | u|n^2/h( v)k/h( v) (|Σ|-1)^n^2-k/h( v) ×∏_i=1^ω(d)r_i + 1u_i,where h( v) = ∏_i=1^ω(d) r_i^min(v_i,2) and top( v) ∈ℤ^ω(d) has ith coordinatetop(i) =v_i v_i < 2 r_i + 1 v_i = 2.Similar to the proof of Lemma <ref>. [theorem:symmetric-config-2d-active-cell-size-final] Pick n,k ∈ℕ with k ≤ n and let d=gcd(k,n). Let n =∏_i=1^ω(n) p_i^α_i, k=∏_i=1^ω(k) q_i^β_i, and d = ∏_i=1^ω(d) r_i^γ_i be the prime factorizations of n, k, d, respectively. 
Then for any a ∈Σ,|S^a_n × n, k| = ∑_ 0 v 2 (-1)^1 + | v|n^2/h( v)k/h( v) (|Σ|-1)^n^2-k/h( v)∏_i=1^ω(d)r(i),where h( v) = ∏_i=1^ω(d) r_i^min(v_i,2) andr(i) = 1 v_i = 0 p_i + 1 v_i = 1 p_i v_i = 2.Similar to the proof of Theorem <ref>.The number of binary symmetric configurations (|Σ| = 2)with k sites in state a is given by|S^a_n × n, k| = ∑_ 0 v 2 (-1)^1 + | v|n^2/h( v)k/h( v)∏_i=1^ω(d)r(i).For any state set Σ and state a ∈Σ, the set S^a_n × n,0 equals the set S_n × n for the state set Σ∖{ a }.[theorem:detection-algorithm-worst-case-complexity] The worst-case time complexity of the shift-symmetry detection algorithm for a square configuration of size N = n^2 is O(n^3). In a worst-case scenario, when a configuration is non-shift-symmetric and there is only one cell breaking symmetry, each test requires to visit potentially all n^2 cells. The overall worst-case time complexity is therefore O(|G_n| n^2). We know that the sum of distinct prime factors sopf(n) = ∑_i = 1^ω(n) p_i also known as the integer logarithm is at most n (if n is prime), which gives usO(|G_n|n^2)= O((ω(n) + ∑_i = 1^ω(n) p_i)n^2)= O((log_2(n) +sopf(n))n^2)= O(n^3).[theorem:detection-algorithm-average-case-complexity] The average-case time complexity of the shift-symmetry detection algorithm for a square configuration of size N = n^2 generated from a uniform distribution is O(n^2). Let m = n^2 p^-1 be the number of orbits for a prime p. Assuming a uniform distribution the probability of passing an orbit is Q = |Σ|^1 - p. If successful we move to a next orbit, otherwise we terminate with the probability 1 - Q. The probability of terminating at ith orbit can be therefore generalized as P_i = (1 - Q) Q^i-1i < m Q^m - 1i = m. It is easy to show that these probabilities sum to 1, i.e., we must terminate at one of m orbits. Further, the probability of successfully passing the test for all the orbits—the probability that a configuration generated from a uniform distribution is shift-symmetric by a vector with an order p—equals |Σ|^n^2(p^-1 - 1).By using the formula for a geometric sum we can prove that∑_i = 0^n - 1(i + 1)r^i = 1 - r^n(1 + n(1 - r))/(1 - r)^2.We apply this to calculate the expected number of visited orbits as E_p[#orbits]= ∑_i = 1^m iP_i = (1 - Q) ∑_i = 0^m - 2(i + 1)Q^i + mQ^m - 1 = 1 - Q^m - 1(m - Qm + Q) + (1 - Q)mQ^m - 1/1 - Q = 1 - Q^m/1 - Q. Owing to Q < 1 we can bound the expected (average) number of visited orbits for a prime p as E_p[#orbits] ≤ (1 - Q)^-1 = (1 - |Σ|^1 - p)^-1. Each p-orbit contains p cells and so the expected number of visited cells is simply E_p[#cells] ≤ 2p(1 - |Σ|^1 - p)^-1. Note that while moving from one orbit to a next one we can potentially revisit some cells, however, because the order is fixed we can visit each cell at most twice.The number of generators |G_n(p_i)| for each prime p_i equals p_i + 1 (Eq. <ref>), thus the overall expected number of visited cells, i.e., the average-case time complexity in O-notation is ∑_i = 1^ω(n) (p_i + 1) p_i(1 - |Σ|^1 - p_i)^-1. 
Since the expression (1 - |Σ|^1 - p_i)^-1 is at most 2 (p_i ≥ 2) and the integer logarithm sopf(n) is at most n, the average-case time complexity of the shift-symmetry test is O(∑_i = 1^ω(n) p_i^2 + ∑_i = 1^ω(n) p_i)= O( sopf^2(n) +sopf(n)) = O(n^2).§ EXAMPLESLet n = 2^α_13^α_2, then using counting from Lemma <ref>, |S_n × n| = 31|Σ|^n^2/2 + 41|Σ|^n^2/3-32|Σ|^n^2/2^2 - 3141|Σ|^n^2/2 3 - 42|Σ|^n^2/3^2+33|Σ|^n^2/2^2 + 3241|Σ|^n^2/2^2 3 + 3142|Σ|^n^2/23^2 + 43|Σ|^n^2/3^2-3341|Σ|^n^2/2^2 3 - 3242|Σ|^n^2/2^23^2 - 3143|Σ|^n^2/23^2 - 44|Σ|^n^2/3^2+3342|Σ|^n^2/2^2 3^2 + 3243|Σ|^n^2/2^23^2 + 3144|Σ|^n^2/23^2-3343|Σ|^n^2/2^2 3^2 - 3244|Σ|^n^2/2^23^2+3344|Σ|^n^2/2^2 3^2by Lemma <ref>, |S_n × n| = |Σ|^n^2/2[+31] +|Σ|^n^2/3[+41] +|Σ|^n^2/2 3[-3141] +|Σ|^n^2/2^2[- 32+ 33] +|Σ|^n^2/3^2[- 42 + 43 - 44] + |Σ|^n^2/2^2 3[+ 3241 - 3341] + |Σ|^n^2/23^2[+ 3142 - 3143 + 3144] + |Σ|^n^2/2^2 3^2[ - 3242 + 3342 + 3243 - 3343 - 3244 + 3344]and finally by Theorem <ref>, |S_n × n| = |Σ|^n^2/23 + |Σ|^n^2/34 - |Σ|^n^2/2 33 · 4 - |Σ|^n^2/2^22 - |Σ|^n^2/3^2 3 + |Σ|^n^2/2^2 32 · 4+ |Σ|^n^2/23^23 · 3- |Σ|^n^2/2^2 3^22 · 3 Let n = 2^α_13^α_2, a ∈Σ, and k = 2^β_13^β_2, where β_1 ≤α_1, β_2 ≤α_2, and σ = |Σ| - 1. Then using counting from Lemma <ref>|S^a_n × n,k| = 31n^2/2k/2σ^n^2 - k/2 + 41n^2/3k/3σ^n^2 - k/3-32n^2/2^2k/2^2σ^n^2 - k/2^2 - 3141n^2/2 3k/2 3σ^n^2 - k/2 3 - 42n^2/3^2k/3^2σ^n^2 - k/3^2+33n^2/2^2k/2^2σ^n^2 - k/2^2 + 3241n^2/2^2 3k/2^2 3σ^n^2 - k/2^2 3 + 3142n^2/2 3^2k/2 3^2σ^n^2 - k/23^2 + 43n^2/3^2k/3^2σ^n^2 - k/3^2-3341n^2/2^2 3k/2^2 3σ^n^2 - k/2^2 3 - 3242n^2/2^2 3^2k/2^2 3^2σ^n^2 - k/2^23^2 - 3143n^2/2 3^2k/2 3^2σ^n^2 - k/23^2 - 44n^2/3^2k/3^2σ^n^2 - k/3^2+3342n^2/2^2 3^2k/2^2 3^2σ^n^2 - k/2^2 3^2 + 3243n^2/2^2 3^2k/2^2 3^2σ^n^2 - k/2^23^2 + 3144n^2/2 3^2k/2 3^2σ^n^2 - k/23^2-3343n^2/2^2 3^2k/2^2 3^2σ^n^2 - k/2^2 3^2 - 3244n^2/2^2 3^2k/2^2 3^2σ^n^2 - k/2^23^2+3344n^2/2^2 3^2k/2^2 3^2σ^n^2 - k/2^2 3^2by Lemma <ref> |S^a_n × n,k| = n^2/2k/2σ^n^2-k/2[ +31] + n^2/3k/3σ^n^2-k/3[ +41] +n^2/2 3k/2 3σ^n^2-k/2 3[ -3141] + n^2/2^2k/2^2σ^n^2-k/2^2[ - 32 + 33] +n^2/3^2k/3^2σ^n^2-k/3^2[ - 42 + 43 - 44] +n^2/2^2 3k/2^2 3σ^n^2-k/2^2 3[ + 3241 - 3341] +n^2/2 3^2k/2 3^2σ^n^2-k/23^2[ + 3142 - 3143 + 3144] +n^2/2^2 3^2k/2^2 3^2σ^n^2-k/2^2 3^2[ - 3242 + 3342 + 3243 - 3343 - 3244 + 3344]and finally by Theorem <ref>, |S^a_n × n,k| =n^2/2k/2σ^n^2-k/23 + n^2/3k/3σ^n^2-k/34 - n^2/2 3k/2 3σ^n^2-k/2 33 · 4- n^2/2^2k/2^2σ^n^2-k/2^22 - n^2/3^2k/3^2σ^n^2-k/3^23 + n^2/2^2 3k/2^2 3σ^n^2-k/2^2 32 · 4+ n^2/2 3^2k/2 3^2σ^n^2-k/23^23 · 3 - n^2/2^2 3^2k/2^2 3^2σ^n^2-k/2^2 3^22 · 3§ REFERENCES aipauth4-1 | http://arxiv.org/abs/1703.09030v2 | {
"authors": [
"Peter Banda",
"John Caughman",
"Martin Cenek",
"Christof Teuscher"
],
"categories": [
"nlin.CG",
"nlin.AO"
],
"primary_category": "nlin.CG",
"published": "20170327123226",
"title": "Shift-Symmetric Configurations in Two-Dimensional Cellular Automata: Irreversibility, Insolvability, and Enumeration"
} |
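As a sanity check on the enumeration results above, the prime-case corollary |S_{p × p}| = (p + 1)|Σ|^p - p|Σ| can be confirmed by exhaustive search for very small p. The Python sketch below is illustrative only: it assumes the conventions of the preceding sections (a configuration of the p × p torus is shift-symmetric if it is fixed by at least one nontrivial translation of Z_p × Z_p), the helper names are hypothetical, and the search cost grows as |Σ|^(p^2), so it is feasible only for tiny cases.

    from itertools import product

    def is_shift_symmetric(cfg, n):
        # cfg: tuple of n*n states, row-major; True if cfg is fixed by some
        # nontrivial translation (dx, dy) of the discrete torus Z_n x Z_n.
        for dx, dy in product(range(n), repeat=2):
            if (dx, dy) == (0, 0):
                continue
            if all(cfg[x * n + y] == cfg[((x + dx) % n) * n + (y + dy) % n]
                   for x in range(n) for y in range(n)):
                return True
        return False

    def count_shift_symmetric(n, sigma):
        # Exhaustive count over all sigma**(n*n) configurations.
        return sum(is_shift_symmetric(cfg, n)
                   for cfg in product(range(sigma), repeat=n * n))

    for p, sigma in [(2, 2), (2, 3), (3, 2)]:
        closed_form = (p + 1) * sigma**p - p * sigma  # prime-case corollary
        assert count_shift_symmetric(p, sigma) == closed_form
        print(p, sigma, closed_form)

For (p, |Σ|) = (2, 2), (2, 3), and (3, 2) the exhaustive counts are 8, 21, and 26, respectively, matching the closed form.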
Immersed boundary model of aortic heart valve dynamics with physiological driving and loading conditions

Boyce E. Griffith

Leon H. Charney Division of Cardiology, Department of Medicine, and Program in Computational Biology, Sackler Institute of Graduate Biomedical Sciences, New York University School of Medicine, 550 First Avenue, New York, NY 10016 USA

Correspondence: Boyce E. Griffith, Leon H. Charney Division of Cardiology, Department of Medicine, New York University School of Medicine, 550 First Avenue, New York, NY 10016 USA. E-mail: [email protected]

December 30, 2023

Abstract. The immersed boundary (IB) method is a mathematical and numerical framework for problems of fluid-structure interaction, treating the particular case in which an elastic structure is immersed in a viscous incompressible fluid. The IB approach to such problems is to describe the elasticity of the immersed structure in Lagrangian form, and to describe the momentum, viscosity, and incompressibility of the coupled fluid-structure system in Eulerian form. Interaction between Lagrangian and Eulerian variables is mediated by integral equations with Dirac delta function kernels. The IB method provides a unified formulation for fluid-structure interaction models involving both thin elastic boundaries and also thick viscoelastic bodies. In this work, we describe the application of an adaptive, staggered-grid version of the IB method to the three-dimensional simulation of the fluid dynamics of the aortic heart valve. Our model describes the thin leaflets of the aortic valve as immersed elastic boundaries, and describes the wall of the aortic root as a thick, semi-rigid elastic structure. A physiological left-ventricular pressure waveform is used to drive flow through the model valve, and dynamic pressure loading conditions are provided by a reduced (zero-dimensional) circulation model that has been fit to clinical data. We use this model and method to simulate aortic valve dynamics over multiple cardiac cycles. The model is shown to approach rapidly a periodic steady state in which physiological cardiac output is obtained at physiological pressures. These realistic flow rates are not specified in the model, however. Instead, they emerge from the fluid-structure interaction simulation.
==========================================================================================================§ INTRODUCTIONThe immersed boundary (IB) method is a mathematical and numerical approach to problems of fluid-structure interaction that was introduced by Peskin to model the fluid dynamics of heart valves <cit.>.The IB methodology has subsequently been used to model diverse problems in biological fluid dynamics <cit.> and other problems in which a rigid or elastic structure is immersed in a fluid flow <cit.>.The IB method for fluid-structure interaction treats problems in which an elastic structure is immersed in a viscous incompressible fluid, describing the elasticity of the immersed structure in Lagrangian form, and describing the momentum, viscosity, and incompressibility of the coupled fluid-structure system in Eulerian form.Integral equations with Dirac delta function kernels couple the Lagrangian and Eulerian variables.When discretized for computer simulation, the IB method approximates the Lagrangian equations on a curvilinear mesh, approximates the Eulerian equations on a Cartesian grid, and approximates the Lagrangian-Eulerian interaction equations by replacing the singular delta function with a regularized version of the delta function.A key strength of the IB approach to fluid-structure interaction is that it does not require conforming Lagrangian and Eulerian discretizations.Specifically, the IB method permits the Lagrangian mesh to cut through the background Eulerian grid in an arbitrary manner and does not require dynamically generated body-fitted meshes. This attribute of the method greatly simplifies the task of grid generation and facilitates simulations involving large deformations of the elastic structure.An additional feature of the IB formulation is that it provides a unified approach to constructing models involving both thin elastic boundaries (i.e., immersed structures that are of codimension one with respect to the fluid) and also thick elastic bodies (i.e., immersed structures that are of codimension zero with respect to the fluid) <cit.>.In this work, we describe an adaptive version of the IB method and the application of this method to the simulation of the fluid dynamics of the aortic heart valve.Each year, approximatelyprocedures are performed to repair or replace damaged or destroyed heart valves <cit.>.Severe aortic valve disease is generally treated by replacement with either a mechanical or a bioprosthetic valve <cit.>, and approximatelyaortic valve replacements are performed annually to treat severe aortic stenosis <cit.>.Because many of the difficulties of prosthetic heart valves are related to the fluid dynamics of the replacement valve <cit.>, mathematical and computational models that enable the study of the fluid-mechanical mechanisms of valve function and dysfunction may ultimately aid in improving treatment outcomes for the many patients suffering from valvular heart diseases.The aortic valve model employed herein is similar, but not identical, to that described by Griffith et al. 
<cit.>.We model the thin leaflets of the aortic valve as immersed boundaries comprised of systems of elastic fibers that resist extension, compression, and bending, and we model the aortic root and ascending aorta as a thick, semi-rigid elastic structure.To construct the model valve leaflets, we use the mathematical theory of Peskin and McQueen <cit.>, which describes the architecture of the systems of collagen fibers within the valve leaflets that allow the closed valve to support a significant pressure load.The geometry of the model aortic root is based on the idealized description of Reul et al. <cit.>, which was derived from imaging data collected from healthy patients, and we use dimensions that are based on measurements by Swanson and Clark <cit.> of human aortic roots harvested after autopsy.A Windkessel model fit to human data by Stergiopulos et al. <cit.> provides physiological loading conditions for the model valve.Two limitations of our earlier model <cit.>, which are overcome in the present work, are that it used only a highly idealized left-ventricular driving pressure waveform, and that it considered only a single cardiac cycle.In this work, we use a physiological driving pressure waveform that is based on human clinical data <cit.>, and we perform multibeat simulations of the fluid dynamics of the aortic heart valve.We emphasize that we do not prescribe the flow rate at either the upstream or downstream boundaries of the model vessel. Instead, we impose a realistic, periodic left-ventricular driving pressure at the upstream boundary along with a dynamic circulatory loading pressure at the downstream boundary.With the driving and loading conditions used in the present work, our model rapidly approaches a periodic steady state in which physiological cardiac output is obtained at physiological pressures.There are also important differences between the numerical methods used in the present study and those of our earlier simulations of aortic valve dynamics <cit.>.Although both use an adaptive version of the IB method for fluid-structure interaction, our earlier study used a cell-centered IB method <cit.>, whereas herein we use a staggered-grid discretization.This is notable because we have recently demonstrated that staggered-grid IB methods yield substantially improved accuracy when compared to cell-centered discretizations <cit.>. Specifically, we have found that using a staggered-grid Eulerian discretization improves the volume-conservation properties of the IB method by one to two orders of magnitude in comparison to a cell-centered discretization <cit.>.Staggered-grid IB methods also yield improved resolution of pressure discontinuities <cit.>.In the present application, such discontinuities occur along the thin heart valve leaflets and are especially pronounced when the valve is closed and supporting a significant, physiological pressure load.The three-dimensional adaptive IB method used in our simulations is similar to the two-dimensional adaptive IB method of Roma et al. 
<cit.>.Specifically, both schemes use a globally second-order accurate staggered-grid (i.e., marker-and-cell or MAC <cit.>) discretization of the incompressible Navier-Stokes equations on block-structured adaptively refined Cartesian grids, and both schemes implement formally second-order accurate versions of the IB method (i.e., schemes that yield second-order convergence rates for problems with sufficiently smooth solutions <cit.>).There are also important differences between the present scheme and the scheme of Roma et al.For instance, the method of Roma et al. uses centered differencing to approximate the nonlinear advection terms of the incompressible Navier-Stokes equations, whereas we use a staggered-grid version <cit.> of the xsPPM7 variant <cit.> of the piecewise parabolic method (PPM) <cit.> that enables the application of the present method to high Reynolds number flows.Our scheme also uses the projection method not as a fractional-step solver for the incompressible Navier-Stokes equations, but rather as a preconditioner for an iterative Krylov method applied to an unsplit discretization of those equations <cit.>.Our approach eliminates the timestep-splitting error associated with standard projection methods. It also greatly simplifies the specification of physical boundary conditions along the outer boundaries of the computational domain.In this work, such physical boundary conditions couple the detailed, three-dimensional fluid-structure interaction model to the reduced circulation models that provide realistic driving and loading conditions.§ THE CONTINUOUS EQUATIONS OF MOTIONThe IB formulation of the equations of motion for a coupled fluid-structure system describes the elasticity of the immersed structure in Lagrangian form and describes the momentum, velocity, and incompressibility of the fluid-structure system in Eulerian form.Let = (x_1,x_2,x_3) ∈Ω denote Cartesian physical coordinates, with Ω⊂^3 denoting the physical region that is occupied by the fluid-structure system; let = (s_1,s_2,s_3) ∈ U denote Lagrangian material coordinates that are attached to the immersed elastic structure, with U ⊂^3 denoting the Lagrangian coordinate domain; and let (,t) ∈Ω denote the physical position of material pointat time t.We consider the case in which the fluid possesses a uniform mass density ρ and dynamic viscosity μ, and we assume that the structure is neutrally buoyant and has the same viscous properties as the fluid in which it is immersed.These assumptions are not essential to the method, however, and generalizations of the IB method have been developed to permit the mass density of the structure to differ from that of the fluid <cit.>. 
Work is also underway to develop new extensions of the IB method that permit the viscosity of the structure to differ from that of the fluid.The IB formulation of the equations of fluid-structure interaction is <cit.>:ρ(t(,t) + (̆,t) ·(̆,t)) = - p(,t) + μ^2 (̆,t) + (,t),·(̆,t)= 0, (,t)= ∫_U (,t)δ( - (,t)), t(,t) = ∫_Ω(̆,t)δ( - (,t)),(,t)= [(·,t)],in which (̆,t) = (u_1(,t),u_2(,t),u_3(,t)) is the Eulerian velocity field, p(,t) is the Eulerian pressure, (,t) = (f_1(,t),f_2(,t),f_3(,t)) is the Eulerian elastic force density (i.e., the elastic force density with respect to the physical coordinate system, so that (,t) has units of force), (,t) = (F_1(,t),F_2(,t),F_3(,t)) is the Lagrangian elastic force density (i.e., the elastic force density with respect to the material coordinate system, so that (,t) has units of force), :↦ is a functional that specifies the Lagrangian elastic force density in terms of the deformation of the immersed structure, and δ() = δ(x_1)δ(x_2)δ(x_3) is the three-dimensional Dirac delta function.In this formulation, eqs. (<ref>) and (<ref>) are the interaction equations that couple the Lagrangian and Eulerian variables.Eq. (<ref>) converts the Lagrangian force density (,t) into the equivalent Eulerian force density (,t).Eq. (<ref>) states that the physical position of each Lagrangian material pointmoves with velocity (̆(,t),t), thereby implying that there is no fluid slip at fluid-structure interfaces.Notice, however, that the no-slip condition of a viscous fluid does not appear in the equations as a constraint on the fluid motion.Instead, the no-slip condition determines the motion of the immersed structure.See, e.g., Peskin <cit.> for further discussion of these equations.We next describe the form of the Lagrangian elastic force density functional :↦ used in our model.Like our earlier study of cardiac valve dynamics <cit.>, we model the flexible leaflets of the aortic valve as thin elastic boundaries, and we model the vessel wall as a thick, semi-rigid elastic structure.The elasticity of these structures is described in terms of families of fibers that resist extension, compression, and bending.We identify the model fibers of the valve leaflets with the collagen fibers that enable the real valve to support a significant pressure load when closed.In the case of the vessel wall, we do not identify the model fibers with particular physiological features; instead, these fibers are used simply to fix the geometry of the vessel.As is frequently done in IB models <cit.>, we define the fiber elasticity in terms of a strain-energy functional E = E[(·,t)].The corresponding Lagrangian elastic force density may be expressed in terms of the Fréchet derivative of E. Specifically,is defined byF⃗ = - δ E/δ,which is shorthand forδ E[(·,t)] = - ∫_U F⃗(,t) ·δ(,t).Notice that in eqs. 
(<ref>) and (<ref>), δ denotes the perturbation operator, not the Dirac delta function.To specify E, it is convenient to choose the Lagrangian coordinates = (s_1,s_2,s_3) ∈ U so that each fixed value of (s_1,s_2) labels a particular fiber.This implies that the mapping s_3 ↦(s_1^0,s_2^0,s_3) is a parametric representation of the fiber labeled by (s_1,s_2) = (s_1^0,s_2^0).The curvilinear coordinate s_3 need not correspond to arc length along the fiber, however, and even if s_3 were to correspond to arc length in an initial or reference configuration, notice that it generally will not remain arc length as the structure deforms.As we have done previously <cit.>, we describe the total elastic energy functional E as the sum of a stretching energyand a bending energy , so that E =+.In turn, these elastic energy functionals determine a stretching force densityand a bending force density , so that =+.The stretching energy is= ∫_Ω(|s_3|;),in whichis a local stretching energy.The corresponding stretching force is given by <cit.>= s_3( '(|s_3|;) / s_3/|/ s_3|),in which ' indicates the derivative ofwith respect to its first argument.By identifying T = '(|/ s_3|;) as the fiber tension and τ⃗ = / s_3 / |/s_3| as the fiber-aligned unit tangent vector, we may rewriteas= s_3(T τ⃗).The bending energy used in our model is= ∫_Ω() |s_3 - s_3|^2 ,in which = () is the spatially inhomogeneous bending stiffness, and = () is the reference configuration of the structure.The corresponding bending-resistant force is given by <cit.>= s_3(() (s_3 - s_3)).We take the reference configuration to be the initial configuration, i.e., () = (,0).We remark that we use bending-resistant forces only within the model valve leaflets, i.e., () ≠ 0 only for those fibers that comprise the valve leaflets.Because we model the valve leaflets as thin elastic surfaces immersed in fluid, including bending-resistant forces allows the model leaflets to account for the thickness of real valve leaflets, which are thin but, of course, not infinitely thin.In our model, we increasenear the tips of the free edges of the valve leaflets to account for the fibrous noduli arantii.Next, we specify the boundary conditions imposed along the outer boundary of the physical domain Ω.We take Ω to be a ×× rectangular box, and we employ a combination of solid-wall and prescribed-pressure boundary conditions along Ω.Solid-wall boundary conditions are simply homogeneous Dirichlet conditions for the velocity field (̆,t).At solid-wall boundaries, a boundary condition for the pressure is neither needed nor permitted.By prescribed-pressure boundary conditions, we mean a combination of normal-traction and zero-tangential-slip boundary conditions.For a viscous incompressible fluid, it is easy to show that combining normal-traction and zero-tangential-slip boundary conditions along a flat boundary allows for the pointwise specification of the pressure p(,t) on that boundary.To see this, recall that the Cauchy stress tensor of a viscous incompressible fluid isσ⃗ =-p+ μ[ u⃗ + (u⃗)^T].Let the outward unit normal at a position ∈Ω be denoted by n⃗ = n⃗(), and let a unit tangent vector at a position ∈Ω be denoted by t⃗ = t⃗().By prescribing the normal traction at the boundary, we are prescribing the value of the normal component of the normal stress, i.e.,n⃗·σ⃗·n⃗ = -p + 2 μn(u⃗·n⃗).The zero-tangential-slip condition imposed on $̆ implies that·̆t⃗ ≡0alongΩ, and combining this condition with the incompressibility constraint implies thatn(u⃗·n⃗) ≡0alongΩ.Therefore, alongΩ, the 
normal component of the normal stress reduces ton⃗·σ⃗·n⃗ = -p.Thus, combining normal-traction and zero-tangential-slip boundary conditions allows us to prescribe the value of the pressure pointwise along the boundary.The model vessel attaches directly toΩ, the outermost boundary of the physical domain.A schematic diagram is provided in fig. <ref>.Along, the upstream boundary of the vessel, a time-dependent left-ventricular pressure waveform(t)is prescribed to drive flow through the model valve, so thatp(,t) = (t),∈.The specific left-ventricular pressure waveform used in the present model is adapted from the study of Murgo et al. <cit.>.The rate of flow entering the model vessel via the upstream boundary is(t) = - ∫_(̆,t) ·n⃗ ,in whichn⃗is the outward unit normal alongΩ, and= ()is the area element in the Cartesian coordinate system.On, the downstream boundary of the vessel, we use a reduced (i.e., ordinary differential equation) circulation model to determine the pressure(t)that provides dynamic loading conditions for the model valve, so thatp(,t) = (t),∈.The reduced circulation model used in this work is a three-element Windkessel model with characteristic resistance, peripheral resistance, and arterial complianceC; see fig. <ref>.The rate of flow leaving the model vessel via the downstream boundary is(t) = ∫_(̆,t) ·n⃗ .Because we specify the pressure along, the value of(t)is not known in advance; instead, it must be determined by the coupled model.The flow leaving the fluid-structure interaction model viais exactly the flow through the circulation model, so that1/((t) - (t))= (t), C d/dt(t) + 1/(t)= (t),in which(t)is the stored pressure in the Windkessel model. Notice that the value of(t)is completely determined by(t)and(t).In our simulations, we set= 0.033(mmHg ml^-1s),= 0.79(mmHg ml^-1s), andC = 1.75(ml mmHg^-1), corresponding to the human “Type A” beat characterized by Stergiopulos et al. <cit.>.Although we have found that coupling the detailed and reduced models via prescribed-pressure boundary conditions works well in practice, other choices of boundary conditions are possible.For instance, it is straightforward to devise boundary conditions for the incompressible Navier-Stokes equations that prescribe the mean pressure along a portion of a flat boundary; see ch. 3 sec. 8 of Gresho and Sani <cit.> for details.Alternatively, one may wish to couple the detailed and reduced models by prescribing boundary conditions for the normal component of the velocity.A drawback of this approach is that it would require determining an appropriate velocity profile at the boundary.A more serious limitation of this alternative approach is that prescribing the flow rate as a boundary condition, at either the upstream or the downstream boundary, makes it impossible to impose a realistic pressure difference across the model valve during the diastolic phase of the cardiac cycle.By using pressure boundary conditions at both the upstream and downstream boundaries of the vessel, we allow the normal component of the velocity profile at the boundary to be determined by the model, and we are able to impose realistic pressure loads on the model valve throughout the cardiac cycle.Along, the outermost portion ofΩexterior to the model vessel, we set the pressure to equal zero.This external boundary condition provides an open boundary that acts to couple the fluid-structure interaction model to a zero-pressure fluid reservoir. 
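The reduced downstream model is simple enough to prototype in isolation, which is convenient for checking parameter choices before running a coupled simulation. The following Python sketch advances the three-element Windkessel relations P_Ao = P_Wk + R_c Q_Ao and C dP_Wk/dt + P_Wk/R_p = Q_Ao with a standard explicit second-order Runge-Kutta (Heun) step, in the spirit of the temporal discretization described below. The parameter values are those quoted above; the synthetic ejection waveform and time grid are illustrative assumptions only, since in the coupled model Q_Ao(t) is computed by the fluid solver rather than prescribed.

    import math

    R_C, R_P, C = 0.033, 0.79, 1.75       # mmHg ml^-1 s, mmHg ml^-1 s, ml mmHg^-1

    def q_ao(t, period=0.9):
        # Illustrative systolic ejection pulse (ml/s); a stand-in for the
        # flow rate that the fluid-structure interaction model computes.
        tau = t % period
        return 400.0 * math.sin(math.pi * tau / 0.3) if tau < 0.3 else 0.0

    def dp_wk_dt(p_wk, t):
        # C dP_Wk/dt + P_Wk / R_p = Q_Ao
        return (q_ao(t) - p_wk / R_P) / C

    p_wk, t, dt = 85.0, 0.0, 1.0e-3       # stored pressure initialized to 85 mmHg
    for _ in range(round(5 * 0.9 / dt)):  # five cardiac cycles
        k1 = dp_wk_dt(p_wk, t)                 # predictor stage
        k2 = dp_wk_dt(p_wk + dt * k1, t + dt)  # corrector stage
        p_wk += 0.5 * dt * (k1 + k2)
        t += dt

    p_ao = p_wk + R_C * q_ao(t)           # P_Ao = P_Wk + R_c Q_Ao
    print(f"P_Wk = {p_wk:.1f} mmHg, P_Ao = {p_ao:.1f} mmHg after five beats")

The zero-pressure reservoir along the exterior boundary, by contrast, carries no state and requires no such update.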
This constant-pressure reservoir allows the vessel to change volume during the course of the simulation, i.e., it permits a mismatch between the instantaneous flow rates at the inflow and outflow boundaries of the vessel.Because we model the vessel wall as a semi-rigid elastic structure, we have that(t) ≈(t). Of course, once the model reaches periodic steady state, the time-integrated inflow and outflow volumes must match.The real aortic root is a flexible structure with significant compliance, however, and a model vessel that accounts for the physiological compliance of the aortic root would generally result in instantaneous differences between the inflow rate throughand the outflow rate through.All that remains is to specify the initial conditions.At timet = 0, we set(̆,t) = 0along with(t) = (t) = 0, so that all prescribed normal traction along the boundary are equal to zero. During a brief initialization period lasting 12.8 ms, we increase the left-ventricular driving pressure to a value of approximately 10 mmHg, and we increase the stored pressure in the Windkessel model to 85 mmHg, thereby establishing a realistic pressure load on the closed valve.During this initialization period,(t)is treated as a boundary condition and not as a state variable.That is to say,(t)does not satisfy eq. (<ref>) fort ≤; rather, the value of(t)is prescribed.Once the model is initialized, however,(t)is treated as a state variable, the dynamics of which are determined by eq. (<ref>).§ THE DISCRETE EQUATIONS OF MOTION§.§ Lagrangian and Eulerian spatial discretizations As in our earlier simulation studies of cardiac fluid dynamics <cit.>, we discretize the Lagrangian equations on a fiber-aligned curvilinear mesh, and we discretize the Eulerian equations on a block-structured locally refined Cartesian grid that is adaptively generated to conform to the moving fiber mesh.The curvilinear mesh spacings are_1,_2, and_3, and we use the indices(l,m,n)to label the nodes of the Lagrangian mesh, so that_l,m,nand_l,m,nare the position and Lagrangian elastic force density associated with curvilinear mesh node(l,m,n).The nodal values of_l,m,nare computed from the physical positions of the nodes of the curvilinear mesh via standard second-order accurate finite difference approximations tos_3and tos_3.This approach is equivalent to describing the elasticity of the discretized model in terms of systems of springs and beams.The locally refined Cartesian grid is organized as a hierarchy of nested grid levels that are labeledℓ= 0,…,, withℓ= 0denoting the coarsest level of the hierarchical grid and withℓ= denoting the finest level.Each grid levelℓis comprised of the union of rectangular Cartesian grid patches.All grid patches in a given levelℓof the grid hierarchy share the same uniform grid spacing, and the grid spacings are chosen so that= / , in which>is an integer refinement ratio.The patch levels are constructed to satisfy the proper nesting condition <cit.>, which generally requires that the union of the levelℓ+1grid patches be strictly contained within the union of the levelℓgrid patches.The proper nesting condition is relaxed at the outermost boundary of the physical domain, thereby allowing high Eulerian spatial resolution all the way up toΩin cases in which such resolution is needed.The patch levels are generated so that the faces of the grid patches that comprise levelℓ> 0are coincident with the faces of the Cartesian grid cells that comprise levelℓ-1, the next coarser level of the grid. 
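As an aside on the Lagrangian discretization mentioned above, the spring interpretation of the stretching force is easy to make concrete. The Python sketch below evaluates a discrete version of F_s = ∂/∂s_3(T τ) for a single fiber modeled as a chain of linear springs; the stiffness, the unit-rest-strain convention, and the node layout are illustrative assumptions, and the beam (bending) terms are omitted.

    import numpy as np

    def fiber_spring_forces(X, kappa, ds):
        # X: (N, 3) node positions along one fiber. Linear springs between
        # neighbors give link tensions T = kappa * (|dX/ds| - 1), and the
        # nodal force density is the difference of adjacent link tensions:
        # F_i ~ ((T tau)_{i+1/2} - (T tau)_{i-1/2}) / ds.
        dX = np.diff(X, axis=0) / ds                      # dX/ds on links
        norm = np.linalg.norm(dX, axis=1, keepdims=True)  # |dX/ds|
        link = kappa * (norm - 1.0) * (dX / norm)         # T * tau per link
        F = np.zeros_like(X)
        F[:-1] += link / ds
        F[1:] -= link / ds
        return F

    # A uniformly stretched straight fiber: interior nodal forces cancel,
    # and the end nodes are pulled inward by the tension.
    ds = 0.1
    X = np.linspace(0.0, 1.1, 11)[:, None] * np.array([1.0, 0.0, 0.0])
    F = fiber_spring_forces(X, kappa=1.0e3, ds=ds)
    print(F[0], F[5], F[-1])

Returning to the Eulerian side: the alignment of patch faces with coarser-level cell faces just described is the construction on which the composite-grid difference stencils rely.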
This construction simplifies the development of composite-grid discretization methods that couple the levels of the locally refined grid.Except where noted, in our simulations, we employ a two-level adaptive grid, so that= 1, and we use=4.An example two-dimensional grid is shown in fig. <ref>.To discretize the Eulerian incompressible Navier-Stokes equations in space, we employ a locally refined version of a three-dimensional staggered-grid finite difference scheme; see fig. <ref>. The computational domainΩis a rectangular box,Ω= [0,L_1] ×[0,L_2] ×[0,L_3], and the coarsest level of the locally refined Cartesian grid is a uniform discretization ofΩ, so that the union of the levelℓ= 0grid patches form a regularN_1 ×N_2 ×N_3Cartesian grid with grid spacings_1 = L_1/N_1,_2 = L_2/N_2, and_3 = L_3/N_3.For simplicity, we assume that_1 = _2 = _3 = h^0.On each levelℓof the locally refined grid,labels a particular Cartesian grid cell, and_i,j,k = denotes the physical location of the center of that cell.The physical region covered by Cartesian grid cell(i,j,k)on levelℓis denoted by_̧i,j,k^ℓ, and the set of Cartesian grid cell indices associated with levelℓis denoted by^ℓ.The components of the Eulerian velocity field=̆ (u_1,u_2,u_3)are respectively approximated at the centers of thex_1,x_2, andx_3faces of the Cartesian grid cells, i.e., at positions_i-,j,k = ,_i,j-,k = , and_i,j,k- = .The pressurepis approximated at the centers of the Cartesian grid cells.We use(u_1)_i-,j,k,(u_2)_i,j-,k,(u_3)_i,j,k-, andp_i,j,kto denote the values of$̆ and p that are stored on the grid.A staggered discretization is also used for the Eulerian body force = (f_1,f_2,f_3), so that f_1, f_2, and f_3 are respectively approximated at the centers of the x_1, x_2, and x_3 faces of the Cartesian grid cells.Let Ω^ℓ⊆Ω denote the physical region covered by the union of the level ℓ grid patches.By construction, Ω^0 = Ω, and Ω^ℓ+1⊆Ω^ℓ.Moreover, away from Ω, Ω^ℓ+1 is strictly contained within Ω^ℓ.Notice that Ω^ℓ = ∪_(i,j,k) ∈^ℓ_̧i,j,k^ℓ.The coarse-fine interface between levels ℓ and ℓ+1 is Ω^ℓ+1∖Ω^ℓ.Because the grid levels are constructed to satisfy the proper nesting condition, however, Ω^ℓ+1∩Ω^ℓ⊂Ω, so that the coarse-fine interface between levels ℓ and ℓ+1 is Ω^ℓ+1∖Ω.The refined region of level ℓ <, denoted by Ω^ℓ,ref, consists of the portion of Ω^ℓ that is covered by Ω^ℓ+1, i.e., Ω^ℓ,ref = Ω^ℓ∩Ω^ℓ+1 = Ω^ℓ+1.The grid values of any quantity stored on the Cartesian grid that are physically located in Ω^ℓ,ref are referred to as invalid values.The remaining values, i.e., those that are located in Ω^ℓ∖Ω^ℓ,ref, are referred to as valid values.The invalid values of level ℓ < are constrained to be the restriction of the overlying level ℓ+1 values.Notice that these overlying values could be either valid or invalid values, depending on the configuration of the grid hierarchy. 
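Before detailing the restriction and interpolation procedures, it may help to see the simplest of them in code. The Python sketch below implements the plain-averaging restriction discussed next for a cell-centered quantity on a single pair of levels; the array extents and the use of the refinement ratio r = 4 are illustrative, and a real patch hierarchy is, of course, more involved.

    import numpy as np

    def restrict_cell_centered(fine, r):
        # Coarse invalid values = averages of the overlying r x r x r fine
        # cells; such averaging preserves the integral of the field.
        nx, ny, nz = (s // r for s in fine.shape)
        return (fine[:nx * r, :ny * r, :nz * r]
                .reshape(nx, r, ny, r, nz, r)
                .mean(axis=(1, 3, 5)))

    r = 4                                  # refinement ratio used herein
    fine = np.random.rand(16, 16, 16)      # one fine-level patch, hypothetical
    coarse = restrict_cell_centered(fine, r)
    assert coarse.shape == (4, 4, 4)
    assert np.isclose(coarse.sum() * r**3, fine.sum())  # conservation check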
The simplest restriction procedure is to define the coarse-grid invalid values to be the averages of the overlying fine-grid values; however, other restriction procedures are possible and, in fact, necessary to obtain higher-order accuracy at coarse-fine interfaces.Each Cartesian grid patch is augmented by a layer of ghost cells that provide the values at the patch boundary that are needed to evaluate the approximations to the Eulerian spatial differential operators in the patch interior.We consider three types of ghost cells for the level ℓ grid patches: [(1)]* ghost cells (i,j,k) that overlap the interior of another level ℓ grid patch, so that _̧i,j,k^ℓ⊂Ω^ℓ;* ghost cells (i,j,k) that overlap the interior of some level ℓ-1 grid patch but not the interior of any of the level ℓ grid patches, so that _̧i,j,k^ℓ⊂Ω^ℓ-1 but _̧i,j,k^ℓ⊄Ω^ℓ; and* ghost cells that are exterior to Ω, so that _̧i,j,k^ℓ⊄Ω.For level ℓ ghost cells that overlap the interior of a neighboring level ℓ-grid patch, the values in that ghost cell are simply copies of the neighboring interior values.If a level ℓ ghost cell overlaps the interior of a neighboring level ℓ-1 grid patch but does not overlap the interior of any level ℓ grid patch, that ghost cell is said to be located on the coarse side of a coarse-fine interface.Ghost values located on the coarse side of a coarse-fine interface are defined by an interpolation procedure that uses both fine- and coarse-grid values.As we have done previously <cit.>, we use a specialized quadratic interpolation procedure <cit.> to compute cell-centered ghost values at coarse-fine interfaces.For face-centered quantities, we use a generalization of this procedure that employs a combination of quadratic and cubic interpolation at coarse-fine interfaces.(Details of these coarse-fine interface discretizations are provided in the appendix.)Finally, ghost values that are exterior to the physical domain are determined via the physical boundary conditions in a manner that is analogous to the uniform-grid scheme for the incompressible Navier-Stokes equations of Griffith <cit.>.We denote by ≈̆·$̆,p ≈p, and≈̆^2 $̆ composite-grid finite difference approximations to the divergence, gradient, and Laplace operators, respectively.We employ a standard staggered-grid (MAC) discretization approach, in which $̆ is computed at cell centers, whereas bothpand$̆ are computed on cell faces.Briefly stated, we use standard second-order accurate staggered-grid approximations to these operators within each Cartesian grid patch <cit.>.These uniform-grid patch-based discretizations are coupled to form composite-grid discretizations via restriction and prolongation operators.Although our composite-grid finite difference discretization of the incompressible Navier-Stokes equations is globally second-order accurate, this discretization does not retain the pointwise second-order accuracy of the basic uniform-grid approximation because of localized reductions in accuracy at coarse-fine interfaces.To compute $̆ on the composite grid, we first use simple averaging to restrict$̆ from finer levels of the grid to coarser levels of the grid.We then compute for each level ℓ grid cell( )_i,j,k =(u_1)_i+,j,k - (u_1)_i-,j,k/ + (u_2)_i,j+,k - (u_2)_i,j-,k/ + (u_3)_i,j,k+ - (u_3)_i,j,k-/.To compute p on the composite grid, we first use cubic interpolation to restrict p from finer levels of the grid to coarser levels of the grid, and we then compute the composite-grid cell-centered interpolation of p in the ghost cells along the coarse 
side of the coarse-fine interface.This approach ensures that the ghost values are at least third-order accurate interpolations of p. With the coarse-fine interface ghost values so determined, we then compute for each level ℓ cell face( p)_i-,j,k = p_i,j,k - p_i-1,j,k/, ( p)_i,j-,k = p_i,j,k - p_i,j-1,k/, ( p)_i,j,k- = p_i,j,k - p_i,j,k-1/.Finally, to compute $̆, we first use cubic interpolation to restrict$̆ from finer levels of the grid to coarser levels of the grid, and we then compute coarse-fine interface ghost values of $̆ using a composite-grid face-centered interpolation scheme, again ensuring that the ghost values are at least third-order accurate.We then compute for eachx_1face on levelℓ( u_1)_i-,j,k =(u_1)_i+,j,k - 2 (u_1)_i-,j,k + (u_1)_i-3/2,j,k/()^2 + (u_1)_i-,j+1,k - 2 (u_1)_i-,j,k + (u_1)_i-,j-1,k/()^2 + (u_1)_i-,j,k+1 - 2 (u_1)_i-,j,k + (u_1)_i-,j,k-1/()^2.Similar formulae are used to evaluateu_2andu_3on thex_2andx_3faces of the composite grid.In the case of a uniform-grid discretization, notice thatis the usual 7-point cell-centered finite difference approximation to the Laplacian.This property facilities the construction of efficient preconditioners based on the projection method <cit.> and approximate Schur complement methods <cit.>. In the presence of local mesh refinement,is a relatively standard composite-grid generalization of the 7-point Laplacian <cit.> for which efficient solution algorithms, such as FAC (the fast adaptive composite-grid method) <cit.>, have been developed. §.§ Lagrangian-Eulerian interaction To approximate the Lagrangian-Eulerian interaction equations, eqs. (<ref>) and (<ref>), we replace the delta functionδ()with a regularized version of the delta functionδ_h().The regularized delta function that we use is of the tensor-product formδ_h() = δ_h(x_1) δ_h(x_2) δ_h(x_3), and we use a one-dimensional regularized delta function that is of the formδ_h(x) = 1/h ϕ(x/h).We takeϕ= ϕ(r)to be the four-point delta function of Peskin <cit.>.We have found that fluid-structure interfaces generally require the highest available spatial resolution in an IB simulation.Therefore, we construct the Cartesian grid hierarchy so that the curvilinear mesh is embedded in the finest level of the grid.Additionally, because we use a regularized delta function with a support of four Cartesian meshwidths in each coordinate direction, we ensure that the locally refined grid is generated so that each curvilinear mesh point is physically located at least two levelgrid cells away fromΩ^ ∖Ω, the coarse-fine interface between levels-1and.With these constraints on the configuration of the locally refined Cartesian grid, the nodes of the curvilinear mesh are directly coupled only to valid values defined on the finest level of the locally refined grid.This construction therefore allows us to discretize the Lagrangian-Eulerian interaction equations as if we were using a uniformly fine grid with resolution.With= (f_1,f_2,f_3)and= (F_1,F_2,F_3), we approximate eq. (<ref>) componentwise by(f_1)_i-,j,k = ∑_(l,m,n) (F_1)_l,m,n δ_(_i-,j,k - _l,m,n)_1_2_3, (f_2)_i,j-,k = ∑_(l,m,n) (F_2)_l,m,n δ_(_i,j-,k - _l,m,n)_1_2_3, (f_3)_i,j,k- = ∑_(l,m,n) (F_3)_l,m,n δ_(_i,j,k- - _l,m,n)_1_2_3,for(i,j,k) ∈^.The valid values ofare set to equal zero on all coarser levels of the grid hierarchy.Similarly, with=̆ (u_1,u_2,u_3)and with= (X_1,X_2,X_3), we approximate eq. 
(<ref>) byd/ dt (X_1)_l,m,n = ∑_(i,j,k)∈^ (u_1)_i-,j,k δ_(_i-,j,k - _l,m,n)()^3, d/ dt (X_2)_l,m,n = ∑_(i,j,k)∈^ (u_2)_i,j-,k δ_(_i,j-,k - _l,m,n)()^3, d/ dt (X_3)_l,m,n = ∑_(i,j,k)∈^ (u_3)_i,j,k- δ_(_i,j,k- - _l,m,n)()^3,in which we again consider only Cartesian grid cells on the finest level of the hierarchical grid.For those curvilinear mesh nodes that are in the vicinity of physical boundaries, we use the modified regularized delta function formulation of Griffith et al. <cit.>.This approach ensures that force and torque are conserved during Lagrangian-Eulerian interaction, even nearΩ.To simplify the description of our timestepping algorithm, we use the shorthand= [] andd/dt = ^*[] $̆, in which the force-spreading and velocity-interpolation operators, [] and ^*[], are implicitly defined by eqs. (<ref>)–(<ref>) and (<ref>)–(<ref>), respectively. §.§ Temporal discretization We employ a simple timestep-splitting scheme to discretize the equations in time.Briefly, during each timestep, we first solve the fluid-structure interaction equations, treating the values of the upstream and downstream pressure boundary conditions as fixed.We then update the state variables of the circulation model, treating the fluid-structure interaction model state variables as fixed.This approach amounts to a first-order timestep splitting of the equations.We discretize the fluid-structure interaction equations in time using truncated fixed-point iteration.We treat the linear terms in the incompressible Navier-Stokes equations implicitly, and we treat all other terms explicitly.Let ^n+1,k, ^̆n+1,k, and p^n+,k denote the approximations to the values ofand $̆ at timet^n+1 = (n+1)and to the value ofpat timet^n+ = (n+)obtained afterksteps of fixed-point iteration, with^n+1,0 = ^n,^̆n+1,0 = ^̆n, andp^n+,0 = p^n-.Letting^n+,k = (^n + ^n+1,k)and^̆n+,k = (^̆n + ^̆n+1,k), we obtain^n+1,k+1,^̆n+1,k+1, andp^n+,k+1by solving the linear system of equationsρ(^̆n+1,k+1 - ^̆n/ + N⃗^n+,k)= -p^n+,k+1 + μ^̆n+,k+1 + f⃗^n+,k,^̆n+1,k+1 = 0,^n+,k = [^n+,k]^n+,k,^n+1,k+1 - ^n/ = ^*[^n+,k]^̆n+,k+1,^n+,k = [^n+,k] + [^n+,k],in whichN⃗^n+,k ≈[·̆]^n+is an explicit approximation to the advection term that uses the xsPPM7 variant <cit.> of the piecewise parabolic method (PPM) <cit.> to discretize the nonlinear advection terms; see Griffith <cit.> for details. We use two cycles of fixed-point iteration per timestep to obtain a second-order accurate timestepping scheme.When solving for^n+1,k+1,^̆n+1,k+1, andp^n+,k+1, we fix the pressure at the upstream boundaryto bep = (t^n), and we fix the pressure at the downstream boundaryto bep = (t^n).Next, having computed^n+1,^̆n+1, andp^n+, we use^̆n+1to compute(t^n+1), the instantaneous rate of flow leaving the vessel throughat timet^n+1.We then update the value ofvia a second-order accurate explicit Runge-Kutta method, so thatC P̃^Wk(t^n+1) - (t^n)/ + 1/(t^n)= (t^n+1), C P^Wk(t^n+1) - P^Wk(t^n)/ + 1/P̃^Wk(t^n+1) + P^Wk(t^n)/2 = (t^n+1).We then set(t^n+1) = (t^n+1) + (t^n+1),which serves as the downstream pressure boundary condition for the fluid-structure interaction model during the subsequent timestep.For the initial timestep, we set(0) = (0) = 0, values that are consistent with the initial conditions of the continuous system. §.§ Cartesian grid adaptive mesh refinement The locally refined grid is constructed in a recursive fashion: First, level 0 is constructed to cover the entire physical domainΩ. 
Next, having constructed levels0,…,ℓ< , levelℓ+1is generated by [(1)]*tagging cells on level ℓ for refinement,*covering the tagged level ℓ grid cells by rectangular boxes generated by the Berger-Rigoutsos point-clustering algorithm <cit.>, and*refining the generated boxes by the integer refinement ratioto form the level ℓ+1 grid patches.Our cell-tagging criteria are simple rules that ensure that the immersed structure remains covered throughout the simulation by the grid cells that comprise the finest level of the hierarchical grid, and that attempt to ensure that flow features requiring enhanced resolution, such as vortices shed from the free edges of the valve leaflets, remain covered by grid cells of an appropriate resolution. Specifically, we tag grid cell(i,j,k)on levelℓ< for refinement whenever there exists some curvilinear mesh node(l,m,n)such that_l,m,n ∈_̧i,j,k^ℓ, or whenever the local magnitude of the vorticityω⃗_i,j,k = _h ×_i,j,kexceeds a relative threshold.Additional cells are added to the finest level of the grid to ensure that the coarse-fine interface between levels-1andis sufficiently far away from each of the curvilinear mesh nodes to prevent complicating the discretization of the Lagrangian-Eulerian interaction equations, as discussed previously.We emphasize that the positions of the curvilinear mesh nodes are not constrained to conform in any way to the locally refined Cartesian grid.Instead, the Cartesian grid patch hierarchy is adaptively updated to conform to the evolving configuration of the immersed elastic structure.To prevent the immersed structure from “escaping” from the finest level of the grid, it is necessary to regenerate the locally refined grid at an interval that is determined by the CFL number of the flow. In our simulations, we choose the timestep sizeto satisfy a CFL condition of the form≤1/5min_0 ≤ℓ≤min_(i,j,k) ∈^ℓ/(̆_i,j,k)_∞.This condition implies that each curvilinear mesh point moves at most1/5fractional meshwidths per timestep.Therefore, to ensure that the immersed structure remains covered byΩ^, we must adaptively regenerate the grid hierarchy at least every five timesteps.In our simulations, we actually regenerate the grid every four timesteps.In practice, we could generally postpone regridding because the actual timestep size is generally smaller than that required by eq. 
(<ref>) as a consequence of an additional stability restriction onthat is of the form= O(()^4).This severe stability restriction results from our time-explicit treatment of the bending-resistant elastic force.For a model that includes only extension- and compression-resistant elastic elements, the stability restriction is reduced to= O(()^2).Each time that the locally refined Cartesian grid is regenerated, Eulerian quantities must be transferred from the old grid hierarchy to the new one.In newly refined regions of the physical domain, the velocity field is prolonged from coarser levels of the old grid via a specialized conservative interpolation scheme that preserves the discrete divergence and curl of the staggered-grid velocity field <cit.>.(The basic divergence- and curl-preserving interpolation scheme <cit.> considers only the case in which=2; however, this procedure is easily generalized to cases in whichis a power of two via recursion.)The pressure, which is not a state variable of the system and which is used in the subsequent timestep only as an initial approximation to the updated pressure computed during that timestep (see below), is prolonged by simple linear interpolation.In newly coarsened regions of the domain, the values of the velocity and pressure are set to be the averages of the overlying fine-grid values from the old grid hierarchy.§ SOLUTION METHODOLOGY Solving for^n+1,k+1,u⃗^n+1,k+1, andp^n+,k+1in eqs. (<ref>)–(<ref>) requires the solution of the linear system of equations,( [ ρ/ I - μ/2 ;-0 ]) ( [ ^̆n+1,k+1;p^n+,k+1 ]) = ( [ (ρ/ I + μ/2) ^̆n - ρN⃗^n+,k + ^n+,k; 0 ])We solve eq. (<ref>) via the FGMRES algorithm <cit.>, usingu⃗^n+1,kandp^n+,kas initial approximations tou⃗^n+1,k+1andp^n+,k+1, and using the projection method as a preconditioner.(Recall that we defineu⃗^n+1,0 = u⃗^nandp^n+,0 = p^n-.) 
LettingAdenote the matrix corresponding to this block system, i.e.,A = ( [ ρ/ I - μ/2 ;-0 ])we write the corresponding projection method-based preconditioner matrixBas <cit.> B =( [ I -/ρ; 0 I - /ρμ/2 ]) ( [ I 0; 0 ()^-1 ]) ·( [ I 0; -ρ/ -ρ/ I ]) ( [ (ρ/I - μ/2)^-10;0I ])Because we use a preconditioned Krylov method to solve foru⃗^n+1,k+1andp^n+,k+1, it is not necessary to form explicitly the matrices corresponding toAandB.Instead, we need only to be able to compute the application of these operators to arbitrary velocity- and pressure-like quantities.Computing the action ofArequires implementations of the finite difference approximations to the divergence, gradient, and Laplace operators described previously.Computing the action ofBadditionally requires solvers for cell-centered and face-centered Poisson-type problems.It is not necessary, however, to employ exact solvers for these subdomain problems.In fact, these subdomain solvers may be quite approximate.At least in the present application, we have found that the performance of our implementation is optimized by using a single multigrid V-cycle of a cell-centered FAC preconditioner for the pressure subsystem, corresponding to the block()^-1of the preconditionerB, and by applying two iterations of the conjugate gradient method for the velocity subsystem, corresponding to the block(ρ/I - μ/2)^-1of the preconditioner.We are able to avoid using a multigrid method for the velocity subsystem because the Reynolds number of the flow is relatively large and the timestep size is relatively small, so that the linear system(ρ/I - μ/2)is well conditioned.We remark that although the projection method can be an extremely effective preconditioner <cit.>, it does not appear to be widely used in this manner in practice.Instead, it is generally used as an approximate solver for the incompressible Navier-Stokes equations <cit.>. As a solver for the incompressible Navier-Stokes equations, the projection method is a fractional-step scheme that first solves the momentum equation over a time interval[t^n,t^n+1]for an “intermediate” velocity field without imposing the constraint of incompressibility, and then projects that intermediate velocity field onto the space of discretely divergence-free vector fields to obtain an approximation to the incompressible velocity field at timet^n+1.These two steps correspond to the two subdomain solves of our projection preconditioner, and each step requires the imposition of “artificial” physical boundary conditions.When the projection method is used as a solver, the artificial boundary conditions must be chosen carefully to yield a stable and accurate approximation to the true boundary conditions that are to be imposed on the coupled equations <cit.>.Obtaining high-order accuracy may not be possible in all cases, such as for problems involving outflow boundaries <cit.>, and constructing discretizations that are both stable and accurate can be difficult in practice <cit.>.We prefer to use the projection method as a preconditioner rather than as a solver.First, doing so permits the Krylov method to eliminate the timestep-splitting error of the basic projection method.Second, solving eq. (<ref>), an unsplit discretization of the incompressible Stokes equations, permits us to impose directly the true boundary conditions on$̆ and p in a coupled manner. 
In particular, the form of the artificial boundary conditions required of the basic projection method does not affect the accuracy of the overall solver. This greatly simplifies the development of higher-order accurate discretizations of various types of boundary conditions. See Griffith <cit.> for details on the construction of accurate discretizations of various combinations of normal and tangential velocity and traction boundary conditions.

§ IMPLEMENTATION

Our adaptive IB method is implemented in the IBAMR software framework, a freely available C++ library for developing fluid-structure interaction models that use the IB method <cit.>. IBAMR provides support for distributed-memory parallelism via MPI and Cartesian grid adaptive mesh refinement. IBAMR relies upon the SAMRAI <cit.>, PETSc <cit.>, and hypre <cit.> libraries for much of its functionality.

§ COMPUTATIONAL RESULTS

§.§ Model results

Using the methods described herein, we have simulated the fluid dynamics of the aortic valve over multiple cardiac cycles, using a time-periodic left-ventricular driving pressure and dynamic loading conditions. In these simulations, the physical domain Ω is a rectangular box that we discretize using a two-level adaptively refined Cartesian grid with refinement ratio r = 4 between grid levels. The coarse-grid resolution is h^0, which corresponds to that of a 32 × 32 × 48 uniform discretization of Ω, and the fine-grid resolution is h^1 = h^0/4, which corresponds to that of a 128 × 128 × 192 uniform discretization of Ω. We model the valve leaflets as thin elastic surfaces because real aortic valve leaflets are thin <cit.>, somewhat thinner than the Cartesian grid spacing on the finest level of the locally refined Cartesian grid. In contrast, we model the vessel as a thick elastic structure because the thickness of the aortic wall <cit.> is relatively large at the scale of the Cartesian grid. In our simulations, we use a uniform timestep size Δt, thereby requiring 128,000 timesteps per cardiac cycle. This value of Δt was empirically determined to be approximately the largest stable timestep permitted by the present model and semi-explicit numerical method. As discussed earlier, the elasticity of the valve leaflets and vessel wall is modeled using systems of fibers. In our model, each valve leaflet is spanned by two families of fibers. The first family of fibers runs from commissure to commissure, and the second family runs orthogonal to the commissural fibers. We view the commissural fibers as corresponding to the collagen fibers that allow the real valve leaflets to support a significant pressure load, and we use the mathematical theory of Peskin and McQueen <cit.> to determine the architecture of these fibers. We remark that the construction of Peskin and McQueen yields discretized fibers that form an orthogonal net, thereby facilitating the construction of the second family of fibers. Because this mathematical theory of the valve fiber architecture describes the geometry of the closed, loaded valve, we assume that the commissural fibers are initialized in a condition of 10% strain, and we choose the commissural fiber stiffness so that the closed valve supports a physiological pressure load. The second family of fibers is 10% as stiff as the commissural fibers, thereby approximating the anisotropic material properties of real aortic valve leaflets <cit.>. Notice that although the valve leaflets are initialized in a strained configuration, the boundary conditions at time t=0 do not provide the closed
valve with any pressure load.This initial strain thereby induces oscillations in the valve leaflets, as if the valve had been struck like a drumhead at time t = 0.These oscillations are rapidly damped by the viscosity of the fluid but nonetheless render the results of the first simulated beat somewhat atypical.The initial configuration of the model vessel and valve leaflets are shown in figs. <ref> and <ref>.The geometry of the aortic root and ascending aorta is based on the geometric description of Reul et al. <cit.>, and the dimensions of the model vessel are based on measurements by Swanson and Clark of human aortic roots harvested post autopsy that were pressurized to 120 mmHg <cit.>.The stiffnesses of the fibers that comprise the vessel wall are empirically determined to keep the vessel essentially fixed in place.Our model therefore neglects the significant compliance of the real aortic root, which increases in volume by approximately 35% during ejection <cit.>.The incorporation of a realistic description of the elasticity of the aortic root and ascending aorta into our fluid-structure interaction model remains important future work.Figs. <ref>–<ref> display representative results from a multibeat simulation using this model with left-ventricular driving pressures adapted from the human clinical data of Murgo et al. <cit.>, and with loading conditions provided by the three-element Windkessel model of Stergiopulos et al. <cit.>.Recall that pressure boundary conditions are imposed at both the upstream and downstream boundaries of the model vessel.This is necessary to obtain a simulation in which the model valve supports a realistic pressure load during diastole.We remark that the realistic, nearly periodic flow rates produced by the model are not specified; instead, they emerge from the fluid-structure interaction simulation.During the second and third beats of the simulation, mean stroke volume is approximately , which is within the physiological range <cit.>, whereas the peak flow rate is approximately , which is somewhat lower than the peak flow rate reported by Murgo et al. <cit.>.Because the pressure field appears to be smoothly resolved in our simulation, we speculate that this difference is primarily related to the unphysiological rigidity of our vessel model and is not a consequence of numerical underresolution; however, this has not yet been demonstrated.The maximum systolic pressure difference across the model valve, which is computed as the difference between the left ventricular pressure and the pressure approximatelydownstream of the valve, isduring the second beat andduring the third beat, values that are in good agreement with the corresponding experimentally obtained value ofreported by Driscol and Eckstein <cit.>.The pressure difference across the valve at the time of peak flow isandduring the second and third beats, respectively.The mean transvalvular pressure difference of the model isandduring the second and third beats, values that are somewhat higher than the experimental range of<cit.>.Notice that the familiar S_2 (“dup”) heart sounds, which correspond to the reverberations of the aortic valve leaflets upon the closure of the valve, are clearly visible in both the computed flow rate (fig. <ref> panel A) and also the aortic loading pressure (t) (fig. <ref> panels B and C). 
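For orientation, the role of the three-element Windkessel loading model referenced above can be sketched in a few lines of Python; the parameter values and the prescribed toy flow waveform below are illustrative choices of ours (in the actual simulations the flow rate is not prescribed but emerges from the fluid-structure interaction, with the Windkessel supplying the downstream pressure boundary condition):

```python
import numpy as np

# Three-element Windkessel: P_Ao = Rc*Q + Pc, with C dPc/dt = Q - Pc/Rp.
Rc, Rp, C = 0.03, 1.0, 1.8    # illustrative values (mmHg s/ml, mmHg s/ml, ml/mmHg)
dt, T = 1e-4, 0.8             # integration step and cardiac period (s)

t = np.arange(0.0, 3 * T, dt)
# toy systolic flow pulse: roughly 57 ml stroke volume per 0.8 s beat
Q = np.where(t % T < 0.3, 300.0 * np.sin(np.pi * (t % T) / 0.3), 0.0)

Pc, P_load = 80.0, np.empty_like(t)
for i, q in enumerate(Q):
    Pc += dt * (q - Pc / Rp) / C    # forward-Euler update of the stored pressure
    P_load[i] = Rc * q + Pc         # pressure imposed at the downstream boundary

print(P_load.min(), P_load.max())   # roughly diastolic/systolic levels (mmHg)
```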
§.§ Performance Analysis

To gauge the computational performance of our adaptive method, we record the wallclock time required to perform the first 128 timesteps of the simulation. To obtain representative results, we average timings obtained for five successive runs of these first 128 timesteps. We compare the performance of the method when using a uniform Cartesian grid, a two-level Cartesian grid with refinement ratio r = 4 between levels, and a three-level Cartesian grid with refinement ratio r = 4 between levels. In all cases, the effective resolution of the finest level of the Cartesian grid hierarchy corresponds to that of a uniform 128 × 128 × 192 grid. Timings were performed on the cluster at New York University, which is comprised of 80 Sun Microsystems, Inc., Sun Blade X8440 server modules interconnected by an InfiniBand network. Each compute server is equipped with four 2.3 GHz quad-core AMD Barcelona 8356 processors with 2 GB memory per core. We perform timings on two, four, or eight compute nodes, using four processors per node and three cores per processor for a total of 24, 48, or 96 cores per simulation. Timing results are summarized in table <ref>. Notice that for the present model at the present effective fine-grid spatial resolution, the efficiency of the adaptive scheme is not improved by using a Cartesian grid hierarchy consisting of more than two levels. Specifically, when we use a three-level grid, most of the coarsest level is tagged for refinement and covered by the grid patches that comprise the next finer level of the grid hierarchy. Consequently, the total number of degrees of freedom is not reduced by an amount that is sufficient to overcome the increased computational overhead associated with the increased number of grid levels. In general, the optimal number of levels of refinement is problem dependent. For instance, by increasing the size of the physical domain Ω by a sufficient amount, it would eventually become computationally beneficial to use a Cartesian grid hierarchy comprised of three or more levels. Similarly, higher-resolution versions of the present aortic valve model would almost certainly benefit from additional levels of refinement. Because our semi-implicit timestepping scheme requires that Δt = O((Δx)^4) for problems involving bending-resistant elastic elements, however, it appears likely that a prerequisite for such a higher-resolution adaptive model is the development of an efficient implementation of an implicit version of this adaptive IB method.
yields dramatic improvements in accuracy compared to our earlier cell-centered scheme <cit.>, but despite these improvements in accuracy, work remains to obtain fully resolved simulations of the three-dimensional fluid dynamics of the aortic valve.Because of the severe restriction on the timestep size imposed by our semi-implicit timestepping scheme, however, it is difficult to deploy higher spatial resolution, even with the benefit of adaptive mesh refinement.We expect that obtaining higher-resolution IB simulations of aortic valve dynamics will require the development of an efficient parallel implementation of an implicit version of the IB method.Such methods promise to overcome the severe stability restrictions of semi-implicit IB methods, like that used in the present work, and we and others are actively working to develop efficient implicit IB methods.Higher spatial resolution is not the only factor limiting the realism of our model.Another limitation of the present model is its simple description of the elasticity of the aortic valve and root.The present model could be improved by replacing these simple models with experimentally based constitutive models.Fiber-based elasticity models like those used in this work provide a convenient description of anisotropic structures commonly encountered in biological applications, and are well-suited for modeling the thin aortic valve leaflets, but combining realistic constitutive models with the fiber-based approach traditionally used with the IB method is difficult.The IB method is not restricted to fiber models, however, and several recent extensions of the IB method allow for more general elasticity models that permit finite element discretizations <cit.>.Using one such extension of the IB method <cit.>, we aim to develop realistic IB models of aortic valve mechanics that will use experimentally characterized models of the aortic valve leaflets <cit.>, the sinuses <cit.>, and the ascending aorta <cit.>, and to use such models to study the fluid dynamics of the aortic valve in both health and disease.§ COMPOSITE-GRID DISCRETIZATION Here, we provide additional details of the composite-grid finite difference discretizations used in the adaptive staggered-grid IB method.We use (I,J,K) to index grid cells of a coarse level ℓ of the AMR grid hierarchy, and we use (i,j,k) to index grid cells of the next finer level of the grid, level ℓ+1.Throughout this appendix, (i,j,k) = ( I,J,K), i.e., fine grid cell (i,j,k) and coarse grid cell (I,J,K) share the vertex _i-,j-,k- = (i , j , k ) = (I , J , K ) = _I-,J-,K-.We use the notation u(I-,J,K), v(I,J-,K), w(I,J,K-), and p(I,J,K) to denote the values of =̆ (u,v,w) and p that are stored on the coarse level ℓ of the grid.Similar notation is used to denote values stored on the fine level ℓ+1 of the grid. §.§ Restriction The composite-grid finite difference discretizations used in this work require a cell-centered cubic restriction procedure along with conservative and cubic face-centered restriction operators.To define these restriction procedures, we consider a single coarse grid cellon level ℓ along with the overlying ×× grid cells on level ℓ+1, namely grid cells (i+α,j+β,k+γ) for α,β,γ = 0,…,-1.Notice that_̧I,J,K^ℓ = ⋃_α,β,γ=0,…,-1_̧i+α,j+β,k+γ^ℓ+1. 
§.§.§ Cell-centered cubic restriction The cell-centered cubic restriction procedure, which requiresto be even and at least four, defines a cell-centered quantity p(I,J,K) on coarse level ℓ in terms of the closest overlying 4 × 4 × 4 fine-grid values stored on level ℓ+1 viap(I,J,K) := ∑_α,β,γ=-2,…,1ω(α)ω(β)ω(γ) p(i+/2+α,j+/2+β,k+/2+γ),in which ω(-2) = ω(1) = -1/16 and ω(-1) = ω(0) = 9/16.In our implementation, if =2, we revert to linear interpolation, which results in a reduction of the formal order of accuracy of our composite-grid discretization at coarse-fine interfaces.We do not permitto be odd because our recursive implementation of the divergence- and curl-preserving interpolation scheme <cit.> used to transfer the staggered-grid velocity field from one Cartesian grid hierarchy to another during adaptive regridding requiresto be a power of two.§.§.§ Face-centered conservative restriction The face-centered conservative restriction procedure defines a face-centered quantity (u(I-,J,K), v(I,J-,K), w(I,J,K-)) on coarse level ℓ in terms of values stored in the overlying ×× fine-grid cells on level ℓ+1 viau(I-,J,K):= 1/^2∑_β,γ=0,…,-1 u(i-,j+β,k+γ),v(I,J-,K):= 1/^2∑_α,γ=0,…,-1 v(i+α,j-,k+γ),w(I,J,K-):= 1/^2∑_α,β=0,…,-1 w(i+α,j+β,k-).This procedure is conservative in the sense of a finite volume scheme, i.e.,u(I-,J,K) ()^2= ∑_β,γ=0,…,-1 u(i-,j+β,k+γ) ()^2,v(I,J-,K) ()^2= ∑_α,γ=0,…,-1 v(i+α,j-,k+γ) ()^2,w(I,J,K-) ()^2= ∑_α,β=0,…,-1 w(i+α,j+β,k-) ()^2. §.§.§ Face-centered cubic restriction The face-centered cubic restriction procedure, which requiresto be even and at least four, defines a face-centered quantity u(I-,J,K) on coarse level ℓ in terms of the closest overlying 4 × 4 fine-grid values stored on level ℓ+1 viau(I-,J,K) := ∑_β,γ=-2,…,1ω(β)ω(γ) u(i-,j+/2+β,k+/2+γ),in which ω(-2) = ω(1) = -1/16 and ω(-1) = ω(0) = 9/16.Similar formulae define v(I,J-,K) and w(I,J,K-).In our implementation, if =2, we revert to linear interpolation, which results in a reduction of the formal order of accuracy of our composite-grid discretization at coarse-fine interfaces.We do not permitto be odd because our recursive implementation of the divergence- and curl-preserving interpolation scheme <cit.> used to transfer the staggered-grid velocity field from one Cartesian grid hierarchy to another during adaptive regridding requiresto be a power of two. §.§ Interpolation at coarse-fine interfaces To describe the specialized interpolation scheme used to define values in the ghost cells abutting a coarse-fine interface, we temporarily restrict our discussion to two spatial dimensions.The extension of this interpolation scheme to three spatial dimensions is straightforward but difficult to visualize and cumbersome to describe, and therefore is not presented in detail.We specifically consider the two-dimensional configuration shown in figs. <ref>–<ref>, in which a coarse level ℓ grid cell (I,J-1) abutsfine level ℓ+1 grid cells (i+α,j) for α=0,…,-1.(In the figures, as in our computations, we set =4; however, the formulae presented in this subsection are valid for any even value of .Different formulae would be required to treat the case in whichis odd.)Formulae similar to those presented hold for other coarse-fine interface orientations. 
To define the values in the ghost cells located along this coarse-fine interface, we interpolate values stored in the coarse grid cells (I+α,J-1) for α = -1,0,1 along with values stored in the two layers of fine grid cells that are adjacent to the coarse-fine interface, i.e., grid cells (i+α,j) and (i+α,j+1) for α=0,…,-1.The values stored in coarse grid cells (I-1,J-1) and (I+1,J-1) may be either valid or invalid values, i.e., grid cells (I-1,J-1) or (I+1,J-1) could be located within the refined region of level ℓ.If the values in coarse grid cells (I-1,J-1) or (I+1,J-1) are invalid values, those values are defined to be the cubic restriction of the overlying fine-grid values.Our approach extends the cell-centered approach of Minion <cit.>, Martin and Colella <cit.>, and Martin et al. <cit.> to treat both cell-centered and face-centered quantities.We first interpolate coarse-grid values in the direction tangential to the coarse-fine interface, so as to obtain interpolated values at locations that are aligned with the valid fine-grid values.We then define the values in the ghost cells by interpolating in the direction normal to the coarse-fine interface, using the interpolated coarse-grid values along with the valid fine-grid values.§.§.§ Cell-centered coarse-fine interpolationIn reference to fig. <ref>, the cell-centered quantities that we wish to compute are denoted p(i+α,j-1), α = 0,…,-1.To define these values, we first compute intermediate values that are defined by performing quadratic interpolation in the direction tangential to the coarse-fine interface, using the coarse-grid values p(I-1,J-1), p(I,J-1), and p(I+1,J-1).Specifically, we computep(i+α,j-+1/2):= (2 (i+α) + 1 - ) (2 (i+α) + 1 - 3 )/8 ^2 p(I-1,J-1)- (2 (i+α) + 1 + ) (2 (i+α) + 1 - 3 )/4 ^2 p(I,J-1)+ (2 (i+α) + 1 + ) (2 (i+α) + 1 - )/8 ^2 p(I+1,J-1)for α = 0,…,-1.We then compute p(i+α,j-1), α = 0,…,-1, by performing quadratic interpolation in the direction normal to the coarse-fine interface, using the fine-grid values p(i+α,j) andalong with the intermediate interpolated values.Specifically, we computep(i+α,j-1):= 2 ( - 1)/1 +p(i+α,j) -- 1/3 +p(i+α,j+1)+ 8/(1 + ) (3 + ) p(i+α,j-+1/2)for α = 0,…,-1.This two-step interpolation procedure is summarized in fig. <ref> for =4 and α=1.§.§.§ Face-centered coarse-fine interpolation of components normal to the coarse-fine interfaceThe components of the staggered-grid velocity field that are normal to the coarse-fine interface are treated in a manner that is similar to the cell-centered coarse-fine interface interpolation scheme described in sec. <ref>.In reference to fig. 
<ref>, the face-centered quantities that we wish to compute are denoted v(i+α,j-3/2), α = 0,…,-1.To define these values, we first compute intermediate values that are defined by performing quadratic interpolation in the direction tangential to the coarse-fine interface, using the coarse-grid values v(I-1,J-3/2), v(I,J-3/2), and v(I+1,J-3/2).Specifically, we computev(i+α,j--):= (2 (i+α) + 1 - ) (2 (i+α) + 1 - 3 )/8 ^2 v(I-1,J-3/2)- (2 (i+α) + 1 + ) (2 (i+α) + 1 - 3 )/4 ^2 v(I,J-3/2)+ (2 (i+α) + 1 + ) (2 (i+α) + 1 - )/8 ^2 v(I+1,J-3/2)for α = 0,…,-1.We then compute v(i+α,j-3/2), α = 0,…,-1, by performing quadratic interpolation in the direction normal to the coarse-fine interface, using the fine-grid values v(i+α,j-) and v(i+α,j+) along with the intermediate interpolated values.Specifically, we computev(i+α,j-3/2):= 2 ( - 1)/ v(i+α,j-) -- 1/1 +v(i+α,j+)+ 2/ (1 + ) v(i+α,j--)for α = 0,…,-1.This two-step interpolation procedure is summarized in fig. <ref> for =4 and α=1.§.§.§ Face-centered coarse-fine interpolation of components tangential to the coarse-fine interfaceThe components of the staggered-grid velocity field that are tangential to the coarse-fine interface are treated in a manner that is similar to the interpolation schemes described in secs. <ref> and <ref>, except that in this case, we perform cubic interpolation in the tangential direction instead of quadratic interpolation.In reference to fig. <ref>, the face-centered quantities that we wish to compute are denoted u(i+α-,j-1), α = 0,…,.To define these values, we first compute intermediate values that are defined by performing cubic interpolation in the direction tangential to the coarse-fine interface, using the coarse-grid values u(I-3/2,J-1), u(I-1/2,J-1), u(I+1/2,J-1), and u(I+3/2,J-1).Specifically, we computeu(i+α-,j-+1/2):= - (i+α) (i+α - ) (i+α - 2 )/6 ^3 u(I-3/2,J-1)+ (i+α - ) (i+α - 2 ) (i+α + )/2 ^3 u(I-1/2,J-1)- (i+α) (i+α + ) (i+α - 2 )/2 ^3 u(I+1/2,J-1)+ (i+α) (i+α - ) (i+α + )/6 ^3 u(I+3/2,J-1)for α = 0,…,.We then compute u(i+α-,j-1), α = 0,…,, by performing quadratic interpolation in the direction normal to the coarse-fine interface, using the fine-grid values u(i+α-,j) and u(i+α-,j+1) along with the intermediate interpolated values.Specifically, we computeu(i+α-,j-1):= 2 ( - 1)/1 +u(i+α-,j) -- 1/3 +u(i+α-,j+1)+ 8/(1 + ) (3 + ) u(i+α-,j-+1/2)for α = 0,…,.This two-step interpolation procedure is summarized in fig. <ref> for =4 and α=1.§.§.§ Extension to three spatial dimensions The essential difference between the two-dimensional coarse-fine interface interpolation scheme and its extension to three spatial dimensions is that, in the three-dimensional case, the initial interpolation of coarse-grid values in the direction tangential to the coarse-fine interface must employ a two-dimensional interpolation scheme to align the intermediate values with the fine-grid values.We use tensor-product interpolation rules that combine the tangential interpolation schemes described in secs. <ref>–<ref> with a quadratic interpolation rule along the additional tangential direction.In the direction normal to the coarse-fine interface, the same interpolation procedure is used in both two and three spatial dimensions to compute the final ghost values from the fine-grid values and the intermediate interpolated quantities.Implementations of both the two- and three-dimensional versions of this scheme are available online <cit.>. 
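Two of the building blocks above, the cell-centered cubic restriction and the quadratic normal-direction ghost-cell interpolation, are compact enough to sketch directly. The following Python lines (ours, not the IBAMR code) denote the refinement ratio by r, take the stencil coefficients verbatim from the formulas above, and omit the half-integer face indexing for brevity; the assertions check that the stencils reproduce constant and linear fields exactly, as cubic and quadratic interpolation must:

```python
import numpy as np

r = 4                                            # refinement ratio (even, >= 4)
w = {-2: -1/16, -1: 9/16, 0: 9/16, 1: -1/16}     # 1-D cubic restriction weights

def restrict_cell_cubic(p_fine, I, J, K):
    """Coarse p(I,J,K) from the closest 4x4x4 overlying fine-grid values."""
    i, j, k = r * I, r * J, r * K
    return sum(w[a] * w[b] * w[c]
               * p_fine[i + r//2 + a, j + r//2 + b, k + r//2 + c]
               for a in w for b in w for c in w)

def ghost_normal_quadratic(p_j, p_j1, p_coarse):
    """Cell-centered ghost value from the two nearest fine values and the
    tangentially interpolated coarse value (the second interpolation step)."""
    return (2.0 * (r - 1) / (1 + r) * p_j
            - (r - 1.0) / (3 + r) * p_j1
            + 8.0 / ((1 + r) * (3 + r)) * p_coarse)

# the cubic restriction reproduces constants (the 64 weights sum to one) ...
ones = np.ones((4 * r, 4 * r, 4 * r))
assert np.isclose(restrict_cell_cubic(ones, 1, 1, 1), 1.0)

# ... and the ghost stencil is exact for linear profiles: in units of the fine
# spacing, the fine cell centers sit at y = 1/2 and 3/2, the coarse sample at
# y = -r/2, and the ghost cell center at y = -1/2.
lin = lambda y: 2.0 * y + 1.0
assert np.isclose(ghost_normal_quadratic(lin(0.5), lin(1.5), lin(-r / 2.0)),
                  lin(-0.5))
```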
§.§ Composite-grid approximations to the divergence, gradient, and Laplace operators

Finally, we summarize the manner in which we compute finite difference approximations to ∇_h·u⃗, ∇_h p, and ∇²_h u⃗ on the AMR grid hierarchy. To compute ∇_h·u⃗, we (1) use the conservative face-centered restriction procedure to coarsen u⃗ from finer levels of the grid to coarser levels of the grid; and (2) use eq. (<ref>) to compute a discrete approximation to ∇_h·u⃗ in each cell of the grid hierarchy. To compute ∇_h p, we (1) use the cubic cell-centered restriction procedure to coarsen p from finer levels of the grid to coarser levels of the grid; (2) use the cell-centered coarse-fine interface interpolation procedure to compute values of p stored in the coarse-fine interface ghost cells; and (3) use eqs. (<ref>)–(<ref>) to compute a discrete approximation to the face-normal components of ∇_h p on each cell face of the grid hierarchy. To compute ∇²_h u⃗, we (1) use the cubic face-centered restriction procedure to coarsen u⃗ from finer levels of the grid to coarser levels of the grid; (2) use the face-centered coarse-fine interface interpolation procedure to compute values of u⃗ stored in the coarse-fine interface ghost cells; and (3) use eq. (<ref>) and its analogues to compute a discrete approximation to the face-normal components of ∇²_h u⃗ on each cell face of the grid hierarchy. Additional ghost-cell values are determined, where needed, either by copying values from neighboring grid patches, or by employing a discrete approximation to the physical boundary conditions.

The author gratefully acknowledges discussion of this work with Charles Peskin and David McQueen of the Courant Institute of Mathematical Sciences, New York University. The author also gratefully acknowledges research support from the American Heart Association (Scientist Development Grant 10SDG4320049) and the National Science Foundation (DMS Award 1016554 and OCI Award 1047734). Computations were performed at New York University using computer facilities funded in large part by a generous donation by St. Jude Medical, Inc. | http://arxiv.org/abs/1703.09265v1 | {
"authors": [
"Boyce E. Griffith"
],
"categories": [
"cs.CE"
],
"primary_category": "cs.CE",
"published": "20170327185315",
"title": "Immersed boundary model of aortic heart valve dynamics with physiological driving and loading conditions"
} |
Hierarchical matrices can be used to construct efficient preconditioners for partial differential and integral equations by taking advantage of low-rank structures in triangular factorizations and inverses of the corresponding stiffness matrices. The setup phase of these preconditioners relies heavily on low-rank updates that are responsible for a large part of the algorithm's total run-time, particularly for matrices resulting from three-dimensional problems. This article presents a new algorithm that significantly reduces the number of low-rank updates and can shorten the setup time by 50 percent or more.

§ INTRODUCTION

Hierarchical matrices <cit.> (frequently abbreviated as ℋ-matrices) employ the special structure of integral operators and solution operators arising in the context of elliptic partial differential equations to approximate the corresponding matrices efficiently. The central idea is to exploit the low numerical ranks of suitably chosen submatrices to obtain efficient factorized representations that significantly reduce storage requirements and the computational cost of evaluating the resulting matrix approximation. Compared to similar approximation techniques like panel clustering <cit.>, fast multipole algorithms <cit.>, or the Ewald fast summation method <cit.>, hierarchical matrices offer a significant advantage: it is possible to formulate algorithms for carrying out (approximate) arithmetic operations like multiplication, inversion, or factorization of hierarchical matrices that work in almost linear complexity. These algorithms allow us to construct fairly robust and efficient preconditioners both for partial differential equations and integral equations. Most of the required arithmetic operations can be reduced to the matrix multiplication, i.e., the task of updating Z ← Z + α X Y, where X, Y, and Z are hierarchical matrices and α is a scaling factor. Once we have an efficient algorithm for the multiplication, algorithms for the inversion, various triangular factorizations, and even the approximation of matrix functions like the matrix exponential can be derived easily <cit.>. The ℋ-matrix multiplication in turn can be reduced to two basic operations: the multiplication of an ℋ-matrix by a thin dense matrix, equivalent to multiple parallel matrix-vector multiplications, and low-rank updates of the form Z ← Z + A B^*, where A and B are thin dense matrices with only a small number of columns. Since the result Z has to be an ℋ-matrix again, these low-rank updates are always combined with an approximation step that aims to reduce the rank of the result. The corresponding rank-revealing factorizations (e.g., the singular value decomposition) are responsible for a large part of the computational work of the ℋ-matrix multiplication and, consequently, also inversion and factorization. The present paper investigates a modification of the standard ℋ-matrix multiplication algorithm that draws upon inspiration from the matrix backward transformation employed in the context of ℋ^2-matrices <cit.>: instead of applying each low-rank update immediately to an ℋ-matrix, multiple updates are accumulated in an auxiliary low-rank matrix, and this auxiliary matrix is propagated as the algorithm traverses the hierarchical structure underlying the ℋ-matrix.
Compared to the standard algorithm, this approach reduces the work for low-rank updates from 𝒪(n k^2 log^2 n) to 𝒪(n k^2 log n).Due to the fact that the ℋ-matrix-vector multiplications appearing in the multiplication algorithm still require 𝒪(n k^2 log^2 n) operations, the new approach cannot improve the asymptotic order of the entire algorithm. It can, however, significantly reduce the total runtime, since it reduces the number of low-rank updates that are responsible for a large part of the overall computational work. Numerical experiments indicate that the new algorithm can reduce the runtime by 50 percent or more, particularly for very large matrices.The article starts with a brief recollection of the structure of ℋ-matrices in Section 2. Section 3 describes the fundamental algorithms for the matrix-vector multiplication and low-rank approximation and provides us with the complexity estimates required for the analysis of the new algorithm. Section 4 introduces a new algorithm for computing the ℋ-matrix product using accumulated updates based on the three basic operations “addproduct”, that adds a product to an accumulator, “split”, that creates accumulators for submatrices, and “flush”, that adds the content of an accumulator to an ℋ-matrix. Section 5 is devoted to the analysis of the corresponding computational work, in particular to the proof of an estimate for the number of operations that shows that the rank-revealing factorizations require only 𝒪(n k^2 log n) operations in the new algorithm compared to 𝒪(n k^2 log^2 n) for the standard approach. Section 6 illustrates how accumulators can be incorporated into higher-level operations like inversion or factorization. Section 7 presents numerical experiments for boundary integral operators that indicate that the new algorithm can significantly reduce the runtime for the ℋ-LR and the ℋ-Cholesky factorization. § HIERARCHICAL MATRICES Letandbe finite index sets.In order to approximate a given matrix G∈^× by a hierarchical matrix, we use a partition of the corresponding index set ×. This partition is constructed based on hierarchical decompositions of the index setsand . Let 𝒯 be a labeled tree, and denote the label of a node t∈𝒯 by t̂. We call 𝒯 a cluster tree for the index setif * the root r=(𝒯) is labeled with r̂=,* for t∈𝒯 with (t)≠∅ we have t̂ = ⋃_t'∈(t)t̂', and * for t∈𝒯 and t_1,t_2∈(t) witht_1≠ t_2 we have t̂_1∩t̂_2=∅.A cluster tree foris usually denoted by , its nodes are called clusters, and its set of leaves is denoted by:= { t∈ : (t)=∅}.Letandbe cluster trees forand . A pair t∈, s∈ corresponds to a subset t̂×ŝ of ×, i.e., to a submatrix of G. We organize these subsets in a tree. Let 𝒯 be a labeled tree, and denote the label of a node b∈𝒯 by b̂. We call 𝒯 a block tree for the cluster treesandif * for each node b∈𝒯 there are t∈ ands∈ such that b=(t,s),* the root consists of the roots ofand , i.e.,r=(𝒯) has the formr=((),()),* for b=(t,s)∈𝒯 the label is given byb̂ = t̂×ŝ, and* for b=(t,s)∈𝒯 with (b)≠∅,we have (b)=(t)×(s).A block tree forandis usually denoted by , its nodes are called blocks, and its set of leaves is denoted by:= { b∈ : (b) = ∅}.For b=(t,s)∈, we call t the row cluster and s the column cluster. Our definition implies that a block treeis also a cluster tree for the index set ×. 
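For concreteness, a minimal sketch of the two tree structures just defined; the class names, the bisection-based splitting, and the weak (off-diagonal) admissibility used in the usage example are illustrative choices of ours, not taken from the paper:

```python
from dataclasses import dataclass, field

@dataclass
class Cluster:                       # node t of a cluster tree; its label is `indices`
    indices: tuple
    sons: list = field(default_factory=list)

@dataclass
class Block:                         # node b = (t, s) of a block tree
    t: Cluster
    s: Cluster
    sons: list = field(default_factory=list)

def build_cluster_tree(idx, leaf_size=32):
    t = Cluster(tuple(idx))
    if len(idx) > leaf_size:         # sons carry disjoint halves whose union is t's label
        mid = len(idx) // 2
        t.sons = [build_cluster_tree(idx[:mid], leaf_size),
                  build_cluster_tree(idx[mid:], leaf_size)]
    return t

def build_block_tree(t, s, admissible):
    b = Block(t, s)
    if not admissible(t, s) and t.sons and s.sons:
        b.sons = [build_block_tree(tp, sp, admissible)
                  for tp in t.sons for sp in s.sons]   # sons(b) = sons(t) x sons(s)
    return b

# usage: square block tree with a toy "weak" admissibility (off-diagonal blocks
# are admissible, diagonal blocks are subdivided down to the leaf clusters)
root = build_cluster_tree(tuple(range(1024)))
tree = build_block_tree(root, root, admissible=lambda t, s: t is not s)
```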
The index sets corresponding to the leaves of a block treeform a disjoint partition{b̂ = t̂×ŝ : b=(t,s)∈}of the index set ×, i.e., a matrix G∈^× is uniquely determined by its submatrices G|_b̂ for all b∈.Most algorithms for hierarchical matrices traverse the cluster or block trees recursively. In order to be able to derive rigorous complexity estimates for these algorithms, we require a notation for subtrees.For a cluster treeand one of its clusters t∈, we denote the subtree ofrooted in t by t. It is a cluster tree for the index set t̂, and we denote its set of leaves by t.For a block treeand one of its blocks b=(t,s)∈, we denote the subtree ofrooted in b by b. It is a block tree for the cluster trees t and s, and we denote its set of leaves by b. Theoretically, a hierarchical matrix for a given block treecan be defined as a matrix such that G|_b̂ has at most rank k∈_0. In practice, we have to take the representation of low-rank matrices into account: if the cardinalities #t̂ and #ŝ are larger than k, a low-rank matrix can be efficiently represented in factorized formG|_b̂ = A_b B_b^*withA_b∈^t̂× k, B_b∈^ŝ× k,since this representation requires only (#t̂+#ŝ) k units of storage. For small matrices, however, it is usually far more efficient to store G|_b̂ as a standard two-dimensional array.To represent the different ways submatrices are handled, we split the set of leavesinto the admissible leavesthat are represented in factorized form and the inadmissible leavesthat are represented in standard form.Let G∈^×, and letbe a block tree forandwith the setsandof admissible and inadmissible leaves. Let k∈_0.We call G a hierarchical matrix (or ℋ-matrix) of local rank k if for each admissible leaf b=(t,s)∈ there are A_b∈^t̂× k and B_b∈^ŝ× k such thatG|_t̂×ŝ = A_b B_b^*.Together with the nearfield matrices given by N_b:=G|_t̂×ŝ for each inadmissible leaf b=(t,s)∈, the matrix G is uniquely determined by its hierarchical matrix representation, the triple ((A_b)_b∈, (B_b)_b∈, (N_b)_b∈).The set of all hierarchical matrices for the block treeand the local rank k is denoted by ℋ(,k). In typical applications, hierarchical matrix representations require 𝒪(n k log n) units of storage <cit.>. § BASIC ARITHMETIC OPERATIONS If the block tree is constructed by standard algorithms <cit.>, stiffness matrices corresponding to the discretization of a partial differential operator are hierarchical matrices of local rank zero, while integral operators can be approximated by hierarchical matrices of low rank <cit.>.In order to obtain an efficient preconditioner, we approximate the inverse <cit.> or the LR or Cholesky factorization <cit.> of a hierarchical matrix. This task is typically handled by using rank-truncated arithmetic operations <cit.>. For partial differential operators, domain-decomposition clustering strategies have been demonstrated to significantly improve the performance of hierarchical matrix preconditioners <cit.>, since they lead to a large number of submatrices of rank zero.We briefly recall four fundamental algorithms: multiplying an ℋ-matrix by one or multiple vectors, approximately adding low-rank matrices, approximately merging low-rank block matrices to form larger low-rank matrices, and approximately adding a low-rank matrix to an ℋ-matrix. Matrix-vector multiplication. Let G be a hierarchical matrix, b=(t,s)∈, α∈, and let arbitrary matrices X∈^ŝ× and Y∈^t̂× be given, whereis an arbitrary index set. 
We are interested in performing the operationsY Y + α G|_t̂×ŝ X, X X + α G|_t̂×ŝ^* Y.If b is an inadmissible leaf, i.e., if b=(t,s)∈ holds, we have the nearfield matrix N_b=G|_t̂×ŝ at our disposal and can use the standard matrix multiplication.If b is an admissible leaf, i.e., if b=(t,s)∈ holds, we have G|_t̂×ŝ = A_b B_b^* and can first compute Z := α B_b^* X and then update YY + A_b Z for the first operation or use Z := α A_b^* Y and XX + B_b Z for the second operation.If b is not a leaf, we consider all its sons b'=(t',s')∈(b) and perform the updates for the matrices G|_t̂'×ŝ' and the submatrices X|_ŝ'× and Y|_t̂'× recursively. Both algorithms are summarized in Figure <ref>.Truncation. Let b=(t,s)∈, and let R∈^t̂×ŝ be a matrix of rank at most ℓ≤min{#t̂,#ŝ}. Assume that R is given in factorized formR= A B^*, A∈^t̂×ℓ, B∈^ŝ×ℓ,and let k∈[0:ℓ]. Our goal is to find the best rank-k approximation of R. We can take advantage of the factorized representation to efficiently obtain a thin singular value decomposition of R: letB = Q_B R_Bbe a thin QR factorization of B with an orthogonal matrix Q_B∈^ŝ×ℓ and an upper triangular matrix R_B∈^ℓ×ℓ. We introduce the matrixA := A R_B^* ∈^t̂×ℓand compute its thin singular value decompositionA = U ΣV^*with orthgonal matrices U∈^t̂×ℓ and V∈^ℓ×ℓ andΣ = [ σ_1; ⋱; σ_ℓ ],σ_1≥σ_2 ≥…≥σ_ℓ≥ 0.A thin SVD of the original matrix R is given byR= A B^*= A R_B^* Q_B^*= A Q_B^*= U ΣV^* Q_B^*= U Σ (Q_B V)^*= U Σ V^*with V := Q_B V. The best rank-k approximation with respect to the spectral and the Frobenius norm is obtained by replacing the smallest singular values σ_k+1,…,σ_ℓ in Σ by zero. Truncated addition. Let b=(t,s)∈, and let R_1,R_2∈^t̂×ŝ be matrices of ranks at most k_1,k_2≤min{#t̂,#ŝ}, respectively. Assume that these matrices are given in factorized formR_1= A_1 B_1^*, A_1∈^t̂× k_1, B_1∈^ŝ× k_1,R_2= A_2 B_2^*, A_2∈^t̂× k_2, B_2∈^ŝ× k_2,and let ℓ:=k_1+k_2 and k∈[0:ℓ]. Our goal is to find the best rank-k approximation of the sum R := R_1 + R_2. Due toR = R_1 + R_2 = A_1 B_1^* + A_2 B_2^* = [ A_1 A_2 ][ B_1 B_2 ]^*,this task reduces to computing the best rank-k approximation of a rank-ℓ matrix in factorized representation, and we have already seen that we can use a thin SVD to obtain the solution. The resulting algorithm is summarized in Figure <ref>.Low-rank update. During the course of the standard ℋ-matrix multiplication algorithm, we frequently have to add a low-rank matrix R = A B^* with A∈^t̂×, B∈^ŝ× and (t,s)∈ to an ℋ-submatrix G|_t̂×ŝ. For any subsets t̂'⊆t̂ and ŝ'⊆ŝ, we haveR|_t̂'×ŝ' = A|_t̂'× B|_ŝ'×^*,so any submatrix of the low-rank matrix R is again a low-rank matrix, and a factorized representation of R gives rise to a factorized representation of the submatrix without additional arithmetic operations. This leads to the simple recursive algorithm summarized in Figure <ref> for approximately adding a low-rank matrix to an ℋ-submatrix. | http://arxiv.org/abs/1703.09085v3 | {
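The truncated addition just described is compact enough to sketch directly. The following Python/NumPy lines (ours) follow the QR-plus-SVD route above and check that the factorized result attains the best-approximation error, i.e., the first discarded singular value:

```python
import numpy as np

def truncated_addition(A1, B1, A2, B2, k):
    """Best rank-k approximation of R = A1 B1^* + A2 B2^*, returned in
    factorized form A B^*, via a thin QR factorization of B and a thin SVD."""
    A = np.hstack([A1, A2])                  # R = A B^* with ell = k1 + k2 columns
    B = np.hstack([B1, B2])
    Qb, Rb = np.linalg.qr(B)                 # B = Q_B R_B
    At = A @ Rb.conj().T                     # A~ = A R_B^*
    U, sigma, Vh = np.linalg.svd(At, full_matrices=False)
    V = Qb @ Vh.conj().T                     # V = Q_B V~, so R = U Sigma V^*
    return U[:, :k] * sigma[:k], V[:, :k]

rng = np.random.default_rng(0)
A1, B1 = rng.standard_normal((200, 3)), rng.standard_normal((150, 3))
A2, B2 = rng.standard_normal((200, 2)), rng.standard_normal((150, 2))
A, B = truncated_addition(A1, B1, A2, B2, k=4)
R = A1 @ B1.T + A2 @ B2.T
err = np.linalg.norm(R - A @ B.T, 2)
s = np.linalg.svd(R, compute_uv=False)
assert np.isclose(err, s[4])                 # spectral error = sigma_{k+1}
```

Note that only thin factorizations of size at most ell = k1 + k2 are ever formed, which is what makes the truncation cheap compared to a dense SVD of R.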
"authors": [
"Steffen Börm"
],
"categories": [
"math.NA",
"cs.NA",
"65F05, 65F08, 65F30, 65N22, 65N38"
],
"primary_category": "math.NA",
"published": "20170327140024",
"title": "Hierarchical matrix arithmetic with accumulated updates"
} |
Drop tower setup to study the diffusion-driven growth of a foam ball in supersaturated liquids in microgravity conditions

Patricia Vega-Martínez^1, Javier Rodríguez-Rodríguez^1, Devaraj van der Meer^2, Matthias Sperl^3

^1 Fluid Mechanics Group, University Carlos III of Madrid, 28911 Leganés, Madrid, Spain, [email protected] ^2 Physics of Fluids Group, MESA+ Research Institute, and J. M. Burgers Centre for Fluid Dynamics, University of Twente, P.O. Box 217, 7500 AE Enschede, The Netherlands ^3 Institut für Materialphysik im Weltraum, Deutsches Zentrum für Luft- und Raumfahrt, 51170 Cologne, Germany

=========================================================================================================================

The diffusion-driven growth of a foam ball is a phenomenon that appears in many manufacturing processes as well as in a variety of geological phenomena. Usually these processes are greatly affected by gravity, as foam is much lighter than the surrounding liquid. However, the growth of the foam free of gravity effects is still very relevant, as it is connected to manufacturing in space and to the formation of rocks in meteorites and other small celestial bodies. The aim of this research is to investigate experimentally the growth of a bubble cloud growing in a gas-supersaturated liquid in microgravity conditions. Here, we describe the experiments carried out in the drop tower of the Center of Applied Space Technology and Microgravity (ZARM). In short, a foam seed is formed with spark-induced cavitation in carbonated water, and its time evolution is recorded with two high-speed cameras. Our preliminary results shed some light on how the size of the foam ball scales with time, in particular at times much longer than what could be studied in normal conditions, i.e. on the surface of the Earth, where the dynamics of the foam is already dominated by gravity after several milliseconds.

§ INTRODUCTION

The diffusion-driven growth of a dense bubble cloud immersed in a supersaturated gas-water solution is of interest to understand a variety of phenomena that range from industrial applications to geology. For instance, in photo- and electrocatalysis the physical mechanisms responsible for the growth of bubbles on the catalytic surface share many similarities with those driving the growth of a bubble cloud by pure diffusion. More fundamental applications are found in the field of planetary geology. For instance, to understand which mechanisms determine the amount of noble gases (in particular helium) found in meteorites and other small solar-system bodies, it is essential to study the diffusion of these gases out of the body through the complex multiphase flow occurring during its solidification. Naturally, this diffusion process occurs in low-gravity conditions <cit.>. A bubble that forms part of a cloud and grows by diffusion in a supersaturated CO_2 solution increases its size at a pace slower than that of an isolated bubble, since it has to compete for the available CO_2 with its neighbors. Our previous experiments <cit.> suggest that, at short times, when the inter-bubble distance is relatively large, bubble sizes grow with the square root of the time, as predicted by the diffusion-driven regime <cit.>.
More interestingly, as the void fraction of the cloud grows, the growth rate departs from this regime in ways that are not fully understood <cit.>, in contrast with the scaling found in the coarsening of dry foams at constant liquid fraction, where bubble sizes grow as t^1/2 at all times <cit.>. Under normal gravity conditions, this purely diffusive competition process is interrupted after a few milliseconds when the bubble cloud starts a buoyancy-induced rising motion, and advection –understood as the transport of dissolved gas by the fluid velocity field– and mixing dominate thereafter the bubble-liquid gas transfer. In this paper, we describe a novel experimental setup aimed at exploring the diffusion-driven growth experimentally at times much longer than what is possible in normal conditions on Earth making use of a microgravity facility. The ultimate goal of this experiment is to obtain quantitative data that will serve us to validate theoretical models on the diffusion-driven growth of dense bubble clouds. It is worth pointing out that the void fractions explored here are smaller than those found in foams, which have been studied in the past in microgravity conditions both theoretically <cit.> and experimentally <cit.>. Also the behavior of plateau borders, where the interfaces of adjacent bubbles meet, has been studied in the absence of gravity <cit.>.Although studying cavitation is not the main purpose of the experiment, we exploit this phenomenon to generate the bubble cloud. Cavitation in a microgravity environment was explored experimentally by <cit.>. In their experiments, they induced cavitation by focusing a laser pulse in the bulk liquid, whereas here we use a spark for that purpose. However, the main difference is that in the work by Obreschkow et al. the gas cavity disappears upon its collapse, whereas here the bubble fragments that result from the collapse become the nuclei from which the bubbles in the cloud will grow. This different behavior occurs because the water that we use is supersaturated, i.e. contains more CO_2 than what the liquid can dissolve, thus this gas fills the cavitation fragments and precludes their dissolution. § SETUP OF THE EXPERIMENTThe experiment consists in the formation of a bubble cloud in a supersaturated liquid by spark-induced cavitation and then, observing the development of the cloud using high-speed imaging in microgravity conditions. The layout of the experimental set-up is shown in figures <ref> and <ref>. The measurement chamber, where the foam evolves, is the main component of the experiment. Its body is a cylinder of Pyrex glass (24.4 mm of diameter and 101.1 mm of height) and contains CO_2-supersaturated water. Around the cylinder there is a rectangular prism that is filled with degassed water. The purpose of this jacket is to avoid the optical aberration caused by a cylindrical container. At the top of the tank, there is a line which is connected to a pressurized CO_2 gas bottle through an electrovalve (V_1). In this line, there are two more electrovalves (V_2 and V_3) that connect the chamber to the ambient pressure to depressurize the chamber. Downstream of the electrovalve V_2 there is a reduction valve to achieve a smoother depressurization of the tank before the experiment starts. This is necessary since abrupt depressurization induces bubble growth at unwanted locations in the measurement tank. 
In addition, there is a pressure sensor (Gems, 220RAA6002F3DA, 0-6 bar) in order to control the pressure during the pressurization and depressurization of the tank. Near the bottom of the measurement tank, there are two copper filaments 100 μm in diameter. These thin bare copper wires are used as electrodes, which are connected to the spark generator device that discharges a large capacitor in a very short time (∼400 μs). The spark generator reproduces the discharge circuit described in Willert et al. <cit.> but replaces the LED by the electrodes, as suggested by Goh et al. <cit.>. A capacitor (2200 μF) is charged through a power supply (30-35 V) and is discharged through a fast MOSFET power transistor when it receives a TTL trigger signal. This discharge induces cavitation, and the collapse of the imploding bubble generates the bubble cloud which is the target of the experiment. The supersaturated liquid has been prepared in the installation designed by Enríquez et al. <cit.> at the University of Twente. In this way, we can control the saturation level of the liquid. However, due to the manipulation during the filling of the measurement tank, the CO_2 concentration is lower than the initial concentration in the preparation.

§.§ Experimental procedure

Before filling the measurement tank with carbonated water, we flush the chamber and the electrodes with alcohol to reduce bubble formation at the walls. The electrodes are in contact inside the tank and connected to the spark generator device. Initially, all the electrovalves are closed. Then, electrovalve V_1 is opened and the CO_2 gas fills up the measurement tank up to about 1.8-2 bar of pressure. The purpose is to dissolve all the bubbles that may have appeared in the chamber during its filling. After approximately 30-40 minutes, the electrovalve V_1 is closed. Now, electrovalve V_2 is opened, thus exposing the chamber to ambient pressure. Then, the capsule is dropped and, when it achieves microgravity conditions, the electrodes spark and the cameras are triggered. At this point, the experiment starts.

§.§ Extended setup

In order to measure the time evolution of the total volume of exsolved gas in the measurement chamber, the following system has been designed: at the top of the measurement chamber, there is another line which connects through a capillary tube to a second, gas-filled vessel (expansion tank) in order to allow the liquid-gas mixture to freely expand during the experiment. As the bubble cloud grows inside the liquid, the free surface advances into the measurement line, thus compressing the gas inside. It is easy to see that the overpressure satisfies |Δ P/P_0| ≃ |Δ V/V_0|, where P_0 is the initial pressure in the line (ambient), Δ P is measured by a differential pressure sensor (Sensirion, SDP610 ±25 Pa), V_0 is the initial gas volume in the line and Δ V is the volume of the exsolved gas. In this line, we place an expansion tank with a relatively large volume V_0 to act as a buffer, since the determination of the initial gas volume in the line is not feasible. The differential pressure sensor starts to measure a few milliseconds before the spark is triggered, and these measurements are stored in an Arduino Due board, which manages the sensor. In this line, there are two more electrovalves (V_4 and V_5) to control the pressure conditions in the system (see Fig. <ref>). As a consequence, the experimental procedure changes slightly.
When the electrovalve V_2 is opened, half a second later V_5 is opened too, so that the pressure in the expansion tank is also at ambient pressure. At this time, the differential pressure sensor starts to measure, and the following steps are the same as described in subsection <ref>. Figure <ref> shows an example of a pressure measurement acquired in an experiment in normal gravity conditions. The measurement chamber was initially at ambient pressure and connected to the atmosphere through a differential pressure sensor. Thus, after triggering cavitation with a laser pulse (occurring at t = 1 s), a bubble cloud is formed that grows due to diffusion and buoyancy-driven advection as explained in <cit.>. As the cloud volume increases, the liquid surface compresses the air inside the expansion tank as shown in Figure <ref>. Interestingly, after t ≈ 3.5 s the pressure decreases, coinciding with the moment when the bubble cloud reaches the liquid free surface. The decrease of the pressure is due to the fact that the differential pressure sensor allows some gas to flow through it to the ambient, since it measures the pressure difference precisely based on this flow <cit.>.

§ RESULTS AND DISCUSSION

We report in this section preliminary results on the growth rate of the radius of the individual bubbles, the 3D reconstruction of the structure of the cloud and the time evolution of the total gas volume in the cloud. The latter is obtained through the analysis of the mean grey level of the images, which allows us to determine quantitatively the time evolution of the cloud's volume. The purpose of this section is to illustrate the kind of quantitative information that can be obtained with the experimental set-up described in this paper. The drops whose results are shown in Figs. <ref>, <ref> and <ref> were carried out between June 27^th and July 1^st, 2016.

§.§ Radial expansion of individual bubbles

We start from the well-known Epstein–Plesset equation <cit.>, <cit.>. It describes the growth rate of the bubble radius as a function of the properties of the gas and the level of saturation in the liquid. For our purpose, it is useful to rewrite it in terms of the square of the radius. Thus, after some mathematical manipulations we obtain:

1/2 dR^2/dt = (D Δ C/ρ_g) (1 + R/√(π D t)),

where D is the diffusivity of CO_2 in water, ρ_g the density of the gas inside the bubble and Δ C the difference in concentration of CO_2 between the bubble surface and the bulk fluid. Numerical integration of equation (<ref>) reveals that, for times of the order of R_0^2/D or longer, with R_0 the initial bubble radius, the second term in the right hand side of equation (<ref>) approaches a constant <cit.>, thus the square of the radius grows linearly with time:

R^2 ∼ F^2(Δ C/ρ_g) D t = F^2(Λ (ζ - 1)) D t,

where Λ = K_h R_g T_∞, K_h is Henry's constant, R_g the gas constant and T_∞ the temperature. Physically speaking, Λ is a constant which measures the gas solubility. In the parenthesis, ζ is the supersaturation level, which measures the amount of dissolved CO_2 available for bubble growth. The function F(x) is given by:

F(x) = x/√(π) + √(x^2/π + 2x).

Note that, in these calculations, the effect of surface tension on the gas pressure has been neglected, as bubbles are much larger than 2σ/P ≈ 0.8 μm, the size at which the capillary overpressure, 2σ/R_0, becomes equal to the ambient one, P. Indeed, taking σ = 0.0434 N/m, the Laplace overpressure is at most about 2% of the ambient one even for the smallest bubbles reported here.
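As a check of this asymptotic behaviour, eq. (<ref>) can be integrated numerically in a few lines of Python; the parameter values below are illustrative order-of-magnitude choices of ours for CO_2 in water, not values fitted to the experiments:

```python
import numpy as np

D, Lam, zeta = 1.9e-9, 0.8, 2.0        # diffusivity (m^2/s), solubility, supersaturation
x = Lam * (zeta - 1.0)                 # x = Delta C / rho_g
F = lambda x: x / np.sqrt(np.pi) + np.sqrt(x**2 / np.pi + 2.0 * x)

R0, dt, nt = 10e-6, 1e-4, 200_000      # 10 micron seed bubble, integrate to t = 20 s
R2 = np.empty(nt + 1); R2[0] = R0**2
for n in range(1, nt + 1):             # forward Euler on eq. (1), written for R^2
    tn = n * dt
    R2[n] = R2[n-1] + 2.0 * dt * D * x * (1.0 + np.sqrt(R2[n-1]) / np.sqrt(np.pi * D * tn))

slope = (R2[-1] - R2[nt // 2]) / (dt * (nt - nt // 2))
print(slope / (F(x)**2 * D))           # -> 1: R^2 grows linearly with slope F^2 D
```

The printed ratio approaches one once t ≫ R_0^2/D, confirming the linear growth of R^2 with slope F^2(Λ(ζ-1)) D.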
To measure the growth of the bubble radius in the experiments, we track individual bubbles in the cloud using custom-made image processing software implemented in Matlab. The time evolution of the bubble radii for different bubbles in the same experiment is shown in figure <ref>. In some experiments, bubbles are attached to the electrodes. We consider these bubbles to be isolated as they are far away from others whereas the area in touch with the electrode is small. Consequently, their growth rate gives us an estimation of the saturation level (Fig. <ref>). Indeed, the growth rates of the squared radius of the bubbles considered as isolated are around three times larger than those of bubbles found inside the cloud. Moreover, the growth rate of the bubbles on the electrodes are nearly the same, whereas in contrast, growth rates for the bubbles inside the cloud differ in a visible way (Fig. <ref>). This variability of growth rates is consistent with the fact that these bubbles compete for the available CO_2 in their surroundings and this competition for the CO_2 provides information about how an individual bubble interacts with the rest of the bubbles in the cloud. In order to get quantitative data to validate future models of bubble cloud growth, we take advantage of the fact that the experiment uses two high-speed cameras forming a 90^ o angle to reconstruct the 3D structure of the cloud (see Fig. <ref> for an example). This information will allow us to relate the local level of saturation with the position of the bubble in the cloud. §.§ Determining the gas volume of the cloud Although tracking individual bubbles yields very useful quantitative information, this technique can only be applied to relatively large bubbles. Nonetheless, a significant volume of the cloud corresponds to very fine and small bubbles that lie below the spatial resolution of our high-speed cameras. As an alternative way to determine the time evolution of the total gas volume of the cloud, we analyze the mean grey level of the images.This mean grey level, MGL, can be defined asMGL = ∑_i=1^H∑_j=1^W Im_i, j/HW,where H and W are the height and width of the image matrix, Im, respectively. Next, we mustestablish a relation between this mean grey level and the volume, in other words, we must obtain a calibration curve. This can be done by generating a bubble cloud with a well-controlled volume via electrolysis. §.§.§ Calibration Indeed, using Faraday's law for electrolysis <cit.>, we are able to predict the volume of the gas generated provided the current flowing through the electrodes is known. The expression of the Faraday's law for electrolysis ism = (Q/F) (M/z),where m is the mass of the substance liberated at an electrode, Q is total electric charge that has flowed through the electrolyte (the liquid), and that can be obtained as the time integral of the current; F = 96500 C·mol^-1 is Faraday's constant, M is the molar mass of the gas (Hydrogen in our case) and z is the number of electrons transferred per ion. The mass, m, divided by the molar mass, M, is the number of moles, n. In order to obtain the volume, V, we use the ideal-gas law P V = n R̅ T_∞ where, P is the pressure of the gas (nearly ambient here), R̅ is the universal gas constant and T_∞ is the temperature.Thus, we carried out calibration experiments producing bubble clouds by electrolysis in the same experimental chamber used for the drops. 
The procedure was the following: the measurement chamber is filled up with clean water and the electrodes are separated and connected to a current source. This produces a known volume of gas which is then filmed with the high-speed camera. In order to measure the current in the circuit, a resistor (Ω_c = 18 Ω) is placed in series with the electrodes. The voltage across the resistor is measured with an oscilloscope. We use deioned water to which a small amount (15 grams per liter) of potassium carbonate was added in order to make it conductive <cit.>. The overall reaction of the electrolysis of the water is <cit.>,2 H_3 O^+ + 2e^-⟶H_2 (g) + 2 H_2O,so, z will be equal to 2.We show the mean grey level of the images with the volume produced by electrolysis at different voltages (see Fig. <ref>a). The mean grey level increases linearly with the volume. Moreover, the curves for the different voltages and different experimental realizations (3 per voltage) overlap, which proves the reliability of the results. Indeed, although the bubble size distributions show some variation for the different voltages, these changes do not affect the calibration curve. It should be pointed out that, in the calibration curves (Fig. <ref>a) the first tens of milliseconds upon starting electrolysis have been excluded, since at those early times the Hydrogen has remained in dissolution and does not form bubbles.Therefore, making use of the calibration curve obtained, the time evolution bubble cloud volume evolution in the drops can be estimated with the analysis of the mean grey level. As an example, figure <ref>b shows the evolution of the mean grey level and the volume for drop N^∘4 of the campaign.§ CONCLUSIONThe goal of this experiment is the study of the diffusion-driven growth of a bubble cloud in a CO_2 supersaturated water solution at times much longer than a few hundreds of milliseconds, when gravity becomes dominant in normal conditions on Earth. In our preliminary experiments under microgravity conditions, the evolution of the cloud can be observed for more than 1 second (Fig. <ref>). Still the cloud moves in the microgravity tests as a consequence of the residual velocity resulting from the implosion. Nonetheless, although the typical Peclet number, Pe = V R_0 / D, computed with the measured bubble velocity, V, for the bubbles tracked in this study is relatively large (Pe ≈ 60), the linear relation observed between the square of the bubble radii, R^2 and the time t suggests that advection plays a small role in bubble growth. This is consistent with the fact that, although the bubble cloud may translate as a whole, the relative velocity between each bubble and the fluid in its vicinity is much smaller than V. Consequently, we can apply the Epstein–Plesset equation <cit.> to predict bubble growth. Interestingly, the different slopes of the R^2 vs. t curves allow us to estimate the local concentration of CO_2 that every bubble experiences, which can later be connected to its location inside the cloud thanks to the 3D-reconstruction.Furthermore, the analysis of the grey level can be used to estimate quantitatively the time evolution of the total volume of gas in the cloud. 
This has been checked by calibrating the mean grey level using images where the gas volume was generated by electrolysis, so it could be accurately determined at all times. In summary, the experiment described here will allow us in future drop tower campaigns to gather very relevant quantitative information on the diffusion-driven growth of a cloud of bubbles in a gas-supersaturated liquid solution. The authors thank the team from the ZARM Drop Tower Operation and Service Company (ZARM FAB mbH) for valuable technical support during the finalization of the setup and the measurement campaign. The European Space Agency is acknowledged for providing access to the drop tower through grant HSO/US/2015-29/AO "Diffusion-driven growth of a dense bubble cloud in supersaturated liquids under microgravity conditions". This work was supported by the Netherlands Center for Multiscale Catalytic Energy Conversion (MCEC), an NWO Gravitation programme funded by the Ministry of Education, Culture and Science of the government of the Netherlands. Finally, we wish to thank the Spanish Ministry of Economy and Competitiveness for supporting the building of the experimental facility through grants DPI2014-59292-C3-1-P and DPI2015-71901-REDT, partly funded through European Funds. [Barrett et al.(2008)Barrett, Kelly, Daly, Dolan, Drenckhan, Weaire & Hutzler]Barret_etalMGST2008 Barrett, D. G. T., Kelly, S., Daly, E. J., Dolan, M. J., Drenckhan, W., Weaire, D. & Hutzler, S. 2008 Taking plateau into microgravity: The formation of an eightfold vertex in a system of soap films. Microgravity Sci. Tech. 20, 17–22.[Brennen(1995)]book Brennen, C.E. 1995 Cavitation and Bubble Dynamics. New York: Oxford University Press.[Cox & Verbist(2003)]CoxVerbistMGST2003 Cox, S. J. & Verbist, G. 2003 Liquid flow in foams under microgravity. Microgravity Sci. Tech. 14, 45–52.[Durian et al.(1991)Durian, Weitz & Pine]DurianWeitzPinePRA1991 Durian, D. J., Weitz, D. A. & Pine, D. J. 1991 Scaling behavior in shaving cream. Phys. Rev. A 44, R7902–7906.[Ehl & Ihde(1954)]Faraday_law Ehl, R.G. & Ihde, A. 1954 Faraday's electrochemical laws and the determination of equivalent weights. J. Chem. Edu. 31, 226–232.[Enríquez(2015)]5 Enríquez, O.R. 2015 Growing bubbles and freezing drops: depletion effects and tip singularities. PhD thesis, University of Twente.[Enríquez et al.(2013)Enríquez, Hummelink, Bruggert, Lohse, van der Meer & Sun]carbonatedwater Enríquez, O.R., Hummelink, C., Bruggert, G.-W., Lohse, D., Prosperetti, A., van der Meer, D. & Sun, C. 2013 Growing bubbles in a slightly supersaturated liquid solution. Rev. Sci. Instrum. 84, 065111.[Epstein & Plesset(1950)]3 Epstein, P.S. & Plesset, M.S. 1950 Stability of gas bubbles in liquid-gas solutions. J. Chem. Phys. 18, 1505–1509.[Goh et al.(2013)Goh, Oh, Klaseboer, Ohl & Khoo]spark Goh, B.H.T., Oh, Y.D.A., Klaseboer, E., Ohl, S.W. & Khoo, B.C. 2013 A low-voltage spark-discharge method for generation of consistent oscillating bubbles. Rev. Sci. Instrum. 84, 014705.[Harrison & Levene(2008)]waterelectr Harrison, K. & Levene, J. I. 2008 Electrolysis of water. In Solar Hydrogen Generation. New York: Springer.[Homan et al.(2014)Homan, Gjaltema & van der Meer]Homan_etalPRE2014 Homan, T., Gjaltema, C. & van der Meer, D. 2014 Collapsing granular beds: The role of interstitial air. Phys. Rev. E 89, 052204.[Medina-Palomo(2015)]PhD_Ana Medina-Palomo, A. 2015 Experimental and analytical study of the interaction between short acoustic pulses and small clouds of microbubbles.
PhD thesis, Universidad Carlos III de Madrid.[Obreschkow et al.(2011)Obreschkow, Tinguely, Dorsaz, Kobel, de Bosset & Farhat]Obreschkow_etalPRL2011 Obreschkow, D., Tinguely, M., Dorsaz, N., Kobel, P., de Bosset, A. & Farhat, M. 2011 Universal scaling law for jets of collapsing bubbles. Phys. Rev. Lett. 107, 204501.[Rodríguez-Rodríguez et al.(2014)Rodríguez-Rodríguez, Casado-Chacón & Fuster]RodriguezRodriguez_etalPRL2014 Rodríguez-Rodríguez, J., Casado-Chacón, A. & Fuster, D. 2014 Physics of beer tapping. Phys. Rev. Lett. 113, 214501.[Saint-Jalmes et al.(2006)Saint-Jalmes, Marze, Safouane & Langevin]Saint-Jalmes_etalMGST2006 Saint-Jalmes, A., Marze, S., Safouane, M. & Langevin, D. 2006 Foam experiments in parabolic flights: Development of an ISS facility and capillary drainage experiments. Microgravity Sci. Tech. 18, 22–30.[Scriven(1959)]4 Scriven, L.E. 1959 On the dynamics of phase growth. Chem. Eng. Sci. 10, 1–13.[Strong(1961)]Faraday_law1 Strong, F. C. 1961 Faraday's laws in one equation. J. Chem. Edu. 38, 98.[Stuart et al.(1999)Stuart, Harrop, Knott & Turner]7 Stuart, F.M., Harrop, P.J., Knott, S. & Turner, G. 1999 Laser extraction of helium isotopes from Antarctic micrometeorites: source of He and implications for the flux of extraterrestrial ^3He to Earth. Geochimica et Cosmochimica Acta 63, 2653–2665.[Willert et al.(2010)Willert, Stasicki, Klinner & Moessner]circuit Willert, C., Stasicki, B., Klinner, J. & Moessner, S. 2010 Pulsed operation of high-power light-emitting diodes for imaging flow velocimetry. Meas. Sci. Technol. 21, 075402. | http://arxiv.org/abs/1703.08875v1 | {
"authors": [
"Patricia Vega-Martínez",
"Javier Rodríguez-Rodríguez",
"Devaraj van der Meer",
"Matthias Sperl"
],
"categories": [
"physics.flu-dyn"
],
"primary_category": "physics.flu-dyn",
"published": "20170326220307",
"title": "Drop tower setup to study the diffusion-driven growth of a foam ball in supersaturated liquids in microgravity conditions"
} |
Immersed transient eddy current flow metering: a calibration-free velocity measurement technique for liquid metals Helmholtz-Zentrum Dresden - Rossendorf, Bautzner Landstr. 400, D-01328 Dresden, Germany [email protected] Eddy current flow meters (ECFM) are widely used for measuring the flow velocity of electrically conducting fluids. Since the flow-induced perturbations of a magnetic field depend both on the geometry and the conductivity of the fluid, extensive calibration is needed to get accurate results. Transient eddy current flow metering (TECFM) has been developed to overcome this problem. It relies on tracking the position of an impressed eddy current system which is moving with the same velocity as the conductive fluid. We present an immersed version of this measurement technique and demonstrate its viability by numerical simulations and a first experimental validation. Keywords: flow measurement, inductive methods, calibration-free N Krauter and F Stefani § INTRODUCTION Measuring the flow velocity of liquid metals is a challenging task because of their opacity, chemical reactivity and - in most cases - elevated ambient temperature <cit.>. Fortunately, the high electrical conductivity of liquid metals often allows the use of magnetic inductive measurement techniques. These techniques generally rely on applying magnetic fields to the fluid and measuring appropriate features, e.g. amplitudes, phases, or forces, of the flow-induced magnetic fields. A local embodiment of this technique is the eddy current flow meter (ECFM) as patented by Lehde and Lang in 1948 <cit.>, which consists of two primary coils excited by an AC generator, and one secondary coil located midway between them. Modifications of this method, using one primary coil and two secondary coils, were described in <cit.>. Another version of this local sensor, which measures the flow-induced change of the amplitude in the vicinity of a small permanent magnet, is the magnetic-distortion probe described by Miralles et al. <cit.>. A global embodiment of the same principle, the contactless inductive flow tomography (CIFT), is able to reconstruct entire two- or three-dimensional flow fields from induced field amplitudes that are measured at many positions around the fluid when it is exposed to one or a few external magnetic fields <cit.>. Another inductive measurement concept relies on the determination of magnetic phase shifts due to the flow <cit.>. Further, the Lorentz force velocimetry (LFV) determines the force acting on a permanent magnet close to the flow, which results as a direct consequence of Newton's third law applied to the braking force exerted by the magnet on the flow <cit.>. With this technique, it is even possible to measure velocities of fluids with remarkably low conductivities, such as salt water <cit.>. A common drawback of (nearly) all those methods is that they require extensive calibration, since the flow-induced magnetic field perturbations depend both on geometric details of the measuring system and on the conductivity of the fluid, which is, in turn, temperature-dependent. Actually, the signals are proportional to the magnetic Reynolds number Rm = μ_0 σ V L, where μ_0 is the magnetic permeability constant, σ the conductivity of the liquid, and V and L denote typical velocity and length scales of the relevant fluid volume.
Further to this, the use of permanent magnets, as necessary for the magnetic distortion probe <cit.> and for LFV <cit.>, or of magnetic yoke materials, as for the phase-shift method <cit.>, sets serious limitations to the ambient temperature at the position of the respective sensors. Transient eddy current flow metering (TECFM) <cit.> aims at overcoming both drawbacks. Building upon earlier work of Zheigur and Sermons <cit.>, this is accomplished by impressing a traceable eddy current system into the liquid metal and detecting its movement with appropriately positioned magnetic field sensors. Since the eddy current moves with the velocity of the liquid, there is no need for a calibration of the sensor. The non-invasive TECFM sensor for measuring the liquid metal velocity close to the fluid boundary from outside, as described in <cit.>, represents a specific external realization of TECFM. Here, we present a modified variant of TECFM, an invasive sensor that can be placed within a liquid metal pool or a pipe to measure the local velocity in the surrounding metal. After describing the main functioning principle of this immersed transient eddy current flow metering (ITECFM), we will illustrate the method by numerical simulations. Then, first flow measurements in the eutectic alloy GaInSn will be presented. The paper closes with some conclusions and a discussion of the prospects to use the method under high temperature conditions. § THE PRINCIPLE OF ITECFM ITECFM is intended to measure the local velocity or the flow rate around the sensor in liquid metal pools or large pipes (for small pipes there will be some distortion of the results when the penetration depth of the magnetic field into the liquid metal is larger than the radius of the pipe). Basically, the ITECFM sensor is an invasive tube-shaped sensor which is put inside the liquid metal, parallel to the flow direction. There is no direct contact between the liquid metal and the pick-up coils because the latter are protected by a cladding, made of stainless steel for example. In contrast to the external variant of TECFM <cit.>, this configuration traces the zero crossing of the magnetic field of the eddy current system instead of the position of a magnetic pole. For this purpose, the coils are arranged differently in order to allow a velocity measurement of the surrounding liquid. The eddy currents within the liquid metal, which are to be used for inferring the fluid velocity, are induced by the excitation coils E1 and E2 (see figure <ref>). Figure <ref> shows a simplified scheme of these eddy currents. Both magnetic fields B_E1 and B_E2 are generated by current steps which occur at the same time, but in opposite directions. The result is two oppositely directed magnetic fields with the same amplitude, which will induce opposing eddy currents during switching on or off of the excitation currents. Because of the symmetric arrangement of the coils (and, therefore, the magnetic fields), the zero crossing x_0 of the total magnetic field B is located exactly in the middle between the receiver coils R1 and R2 when v_liquid is zero and/or immediately after the current step for v_liquid > 0. The excitation currents are assumed to be switched off at t=0. Although the eddy currents ec1, ec2 and their magnetic fields B_1, B_2 are dissipating <cit.>, for v_liquid = 0 the zero crossing remains exactly in their middle, regardless of their magnitude. This changes, however, if the fluid is moving. Then, the eddy currents are transported in the flow direction with the velocity of the fluid.
Under the reasonable assumption that the electrical conductivity of the liquid metal is homogeneous around the sensor, both eddy currents will dissipate with the same rate and the zero crossing of the magnetic field will also move with the fluid velocity. The position of the zero crossing can be tracked by means of the receiver coils R1 and R2. Just as in <cit.>, the position of the zero crossing can be calculated according to x_0(t) = [x_1 Ḃ_2(t) - x_2 Ḃ_1(t)]/[Ḃ_2(t) - Ḃ_1(t)] = [x_1 U_2(t) - x_2 U_1(t)]/[U_2(t) - U_1(t)], where x_1 and x_2 are the positions of the receiver coils R1 and R2, and U_1 and U_2 are the respective voltages measured there. Although the arrangement of the coils for the external TECFM is different, this simplified formula can be used to approximate the liquid metal velocity in the case of ITECFM, too. This will be validated by numerical simulations in the next section. § NUMERICAL SIMULATIONS The simulations of ITECFM were implemented in COMSOL Multiphysics 5.0, using a time dependent 2D axisymmetric model and the magnetic fields (mf) physics environment. §.§ Simulation Model For the simulation, some simplifications have been made. The flow velocity v_liquid of the liquid metal is assumed constant and homogeneous around the sensor thimble. The liquid metal does not contain foreign particles or gas bubbles. Furthermore, any Lorentz forces exerted by the excitation coils on the liquid metal are neglected. For an optimal operation of the sensor, the receiver coils should be arranged symmetrically, with the centre of symmetry exactly in the middle between the two excitation coils (see dashed horizontal line in figure <ref>). Reasonable positions of the receiver and excitation coils have been determined by multiple simulations with variations in arrangement, size and spacing between the coils. In principle, there are two possibilities for the arrangement of the coils: the receiver coils can be placed between the excitation coils or vice versa. However, since the initial position x_0 of the zero crossing of the magnetic field is located exactly in the middle between the excitation coils, the receiver coils should be placed as close as possible to this point in order to achieve maximum sensitivity of the sensor. Placing the excitation coils between the receiver coils is also possible but would result in a much lower signal amplitude and sensitivity because of the increased distance from x_0. The size of the excitation coils turns out to have only a minor influence on the functionality of the sensor, as long as the absolute values of the excitation current pulses are the same, since x_0 is always in the middle between them. Their actual size was chosen to accommodate a reasonable number of turns for the coil wires in the actual prototype. However, the axial extension of the receiver coils should be as small as possible because this will increase their sensitivity for detecting the zero crossing of the magnetic field, and also minimize the dependence on the conductivity of the liquid. Further to this, since the distance between the coils and the boundary of the liquid metal has a significant influence on the signal strength, the air gap between the coils and the inner wall of the sensor, as well as the wall thickness of the sensor thimble, should be as small as possible. The actual sizes, turn numbers, and wire thicknesses of the coils for a low temperature (LT) and a high temperature (HT) prototype of the ITECFM sensor are shown in table 1 (see also figure <ref>a).
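The tracking formula above lends itself to a compact numerical illustration. The following is a minimal sketch (Python; the coil positions, decay time and velocity are made-up numbers, not values from this work) of how x_0(t) is computed from the two receiver voltages and how the velocity follows from the slope of a linear fit, as done later for the experimental data:

import numpy as np

# Toy receiver signals: near the zero crossing, U_i is proportional to
# (x_i - x_0(t)) times a common dissipation factor, which cancels in the ratio.
x1, x2 = -5e-3, 5e-3                  # receiver coil positions [m] (assumed)
v_true = 0.5                          # fluid velocity [m/s] (assumed)
tau = 80e-6                           # eddy current decay time [s] (assumed)

t = np.linspace(20e-6, 120e-6, 200)   # time after the current step [s]
x0 = v_true * t                       # zero crossing moves with the fluid
U1 = (x1 - x0) * np.exp(-t / tau)
U2 = (x2 - x0) * np.exp(-t / tau)

x0_est = (x1 * U2 - x2 * U1) / (U2 - U1)   # tracking formula
v_est = np.polyfit(t, x0_est, 1)[0]        # slope of the linear fit
print(f"estimated velocity: {v_est:.3f} m/s")

Because the common dissipation factor cancels in the ratio, the estimate is independent of the decay rate, which is the essence of the calibration-free character of the method.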
A photograph of the high temperature sensor is shown in figure <ref>. §.§ Simulation results The edge steepness of the excitation current plays an important role for ITECFM because the fluid velocity can only be extracted from the magnetic fields when the excitation currents have reached their final value, i.e. dI/dt = 0. From this instant on, R1 and R2 detect exclusively the magnetic fields of the eddy currents within the liquid. In figure <ref> we see how the time derivatives of the fields B_1 and B_2 at the two receiver coils R1 and R2 change with increasing fluid velocity. For this example, the three top curves show the values for Ḃ_1 and the three bottom curves the values for Ḃ_2. While they are symmetrical for v_liquid = 0, there is a growing asymmetry between Ḃ_1 and Ḃ_2 for increasing v_liquid. When plotting Ḃ on a line parallel to the flow direction at different instants in time after the current steps, the movement of the zero crossing can clearly be seen (figure <ref>). Although the magnetic field is significantly dissipating with time, the zero crossing of Ḃ keeps moving with v_liquid. This is shown in the right panel of figure <ref>, where x_0 marks the time-dependent position of the zero crossing. Figure <ref> shows the movement of x_0 over time. The fluid velocity is then inferred from the slope of the respective line in the data set. Since, in this particular simulation, the excitation currents reach zero only at 100 μs, x_0(t) appears to move slower than the fluid velocity before that instant. The reason for this effect is the superimposition with the remaining magnetic fields of the excitation coils. For laboratory experiments it is therefore advisable to use a current source with a high edge steepness. Otherwise B_1 and B_2 would have already dissipated too much for an accurate measurement. While we have considered the case of switching off the excitation current, almost the same results are obtained for switching on the currents. Although the magnetic fields of the excitation coils are not zero, they are constant after they reach their final value and would not induce currents within the receiver coils or the liquid metal. Yet, the difference between both methods would become larger for increasing Rm. Until now, the simulations have been calculated only for one electrical conductivity of the liquid metal (σ = 3.3 × 10^6 S/m for GaInSn). In view of the strong influence of the (temperature dependent) conductivity for conventional ECFMs, we present in figure <ref> the calculated velocity for different conductivities and a variety of hypothetical (and real) coil geometries. In general, despite some slight dependence on the electrical conductivity of the fluid, the deviation from the ideal velocity is relatively weak for a wide range of conductivities. Hence, ITECFM can essentially be considered as calibration-free. In the case of extremely thin receiver coils (a), the overall deviation would remain less than 5% for 1 MS/m < σ < 10 MS/m. For the standard geometry (as embodied in the prototypes, for which the coil size and distance between the coils have to be larger in order to accommodate a suitable number of turns and to facilitate the construction of the core), it can be seen (b and c) that the results for switching on or switching off the excitation currents are almost the same. The further simulations (c, d, e) also show that the deviation from the ideal velocity is smaller when the coils are positioned as close to each other as possible, especially at high conductivities.
At low conductivities of the liquid metal, the positioning of the coils has only a small influence on the results. The increasing deviations for higher fluid conductivities are related to the size of and the distance between the coils, as well as the dissipation of the eddy currents. As can be seen in figure <ref>, the electrical conductivity of the sensor core, which holds the coils, and of the sensor thimble have a significant influence on the measurement results for the velocity, especially at low σ of the liquid metal. The eddy currents which are induced within the conductive components of the sensor are stronger for higher electrical conductivities. Unlike the eddy currents within the liquid metal, they are not moving with v_liquid but are stationary at all times. Because the magnetic fields of the eddy currents from the liquid metal are superimposed with the fields of the sensor components, the measured velocity appears to be lower. This effect is stronger at low σ of the liquid metal because the eddy currents within the sensor components are of the same order of magnitude as, or even larger than, the ones in the liquid metal. Another aspect to consider is the volume of the respective sensor components. Because the core has a considerably larger volume than the thimble wall, its conductivity has a larger influence on the velocity measurement. At higher σ of the liquid metal the effect becomes more and more negligible because of the stronger eddy currents and the larger volume of the liquid metal. § EXPERIMENTAL RESULTS A first test of an ITECFM sensor was carried out with the low-temperature prototype in a liquid metal loop with the eutectic alloy GaInSn. This sensor has a plastic coil holder, the excitation coils have 100 turns, the receiver coils have 120 turns, and conventional copper wire of diameter 0.25 mm was used (see table 1). Rectangular voltage pulses of 5 V with a frequency of 1 kHz, a duty cycle of 50 % and a fall time of 20 μs have been used to generate the excitation currents. The sensor was put inside a stainless steel tube to prevent direct contact with the liquid metal. The receiver voltages were measured with a memory oscilloscope and x_0(t) was calculated with equation (<ref>). Figure <ref> shows the measurement results for four different fluid velocities and a linear fit of each dataset. The displayed results represent the mean value of 2500 measurement sweeps, with one measurement taken every millisecond for 2.5 s. As can be seen in the previous section in figure <ref>, the results for x_0(t) are expected to have a linear rise. There are some deviations from the expected results for x_0(t), especially the disturbances around t = 28 μs and t = 33 μs. The overall slope of the linear fit, however, is very close to the pre-adjusted flow velocity in the GaInSn loop (which is, as a matter of fact, also not exactly known). The disturbances appear at the same times for each measurement and are most likely caused by the resonant frequency of the receiver coils. Future experiments using tailored current sources instead of the presently used voltage source are expected to improve this situation. § CONCLUSIONS AND PROSPECTS In this paper, we have presented the principle of ITECFM and some promising results obtained both in simulations and in an experiment with a first prototype in GaInSn. While its calibration-free character makes the method a promising candidate for a number of laboratory and industrial applications, it certainly needs further tests and optimization.
Although both the external and immersed configurations of TECFM are based on the same principle, there are some differences with regard to the different arrangement of the excitation and receiver coils, which have to be addressed in detail. Future work will be devoted to more experiments with optimized excitation schemes and different liquid metals to validate the simulation results and the calibration-free character of the sensor. Another advantage of ITECFM is the avoidance of any magnetic materials, which makes it particularly suited for high temperature applications. Tests with the high temperature prototype consisting of heat resistant materials are planned for ambient temperatures of up to 650 °C, as they are typical, e.g., for sodium fast reactors. This work was supported by CEA in the framework of the ARDECo programme. § REFERENCES ECKERT Eckert S, Buchenau D, Gerbeth G, Stefani F and Weiss FP 2011 Some recent developments in the field of measuring techniques and instrumentation for liquid metal flows J. Nucl. Sci. Techn. 48 490-9 LELA Lehde H and Lang WT 1948 Device for measuring rate of fluid flow US Patent 2435043 SURESH Sureshkumar S et al. 2013 Utilization of eddy current flow meter for sodium flow measurement in FBRs Nuclear Engineering and Design 265 1223-31 POORNA Poornapushpakala S, Gomathy C, Sylvia JI and Babu B 2014 Design, development and performance testing of fast response electronics for eddy current flow meter in monitoring sodium flow Flow Meas. Instrum. 38 98-107 MIRALLES Miralles S, Verhille G, Plihon N and Pinton JF 2011 The magnetic-distortion probe: velocimetry in conducting fluids Rev. Sci. Instrum. 82 095112 MST1 Stefani F and Gerbeth G 2000 A contactless method for velocity reconstruction in electrically conducting fluids Meas. Sci. Techn. 11 758-65 CIFT Stefani F, Gundrum T and Gerbeth G 2004 Contactless inductive flow tomography Phys. Rev. E 70 056306 MST2 Wondrak T, Galindo V, Gerbeth G, Gundrum T, Stefani F and Timmel K 2010 Contactless inductive flow tomography for a model of continuous steel casting Meas. Sci. Techn. 21 045402 PRIEDE Priede J, Buchenau D and Gerbeth G 2011 Contactless electromagnetic phase-shift flowmeter for liquid metals Meas. Sci. Techn. 22 055402 BUCHENAU Priede J, Buchenau D and Gerbeth G 2009 Force-free and contactless sensor for electromagnetic flowrate measurements Magnetohydrodynamics 45 451-8 THESS Thess A, Votyakov E V and Kolesnikov Y 2006 Lorentz force velocimetry Phys. Rev. Lett. 96 164501 HALBEDEL Halbedel B et al 2014 A novel contactless flow rate measurement device for weakly conducting fluids based on Lorentz force velocimetry Flow Turb. Comb. 92 361-9 FORBRIGER2 Forbriger J and Stefani F 2015 Transient eddy current flow metering Meas. Sci. Techn. 26 105303 ZHEIGUR Zheigur BD and Sermons GY 1965 Pulse method of measuring the rate of flow of a conducting fluid Magnetohydrodynamics 1 (1) 101-104 FORBRIGER1 Forbriger J, Galindo V, Gerbeth G and Stefani F 2008 Measurement of the spatio-temporal distribution of harmonic and transient eddy currents in a liquid metal Meas. Sci. Technol. 19 045704 | http://arxiv.org/abs/1703.09116v1 | {
"authors": [
"Nico Krauter",
"Frank Stefani"
],
"categories": [
"physics.flu-dyn"
],
"primary_category": "physics.flu-dyn",
"published": "20170327144236",
"title": "Immersed transient eddy current flow metering: a calibration-free velocity measurement technique for liquid metals"
} |
[email protected] http://mankei.tsang.googlepages.com/ Department of Electrical and Computer Engineering, National University of Singapore, 4 Engineering Drive 3, Singapore 117583 Department of Physics, National University of Singapore, 2 Science Drive 3, Singapore 117551 I present a semiclassical analysis of a spatial-mode demultiplexing (SPADE) measurement scheme for far-field incoherent optical imaging under the effects of diffraction and photon shot noise. Building on previous results that assume two point sources or the Gaussian point-spread function, I generalize SPADE for a larger class of point-spread functions and evaluate its errors in estimating the moments of an arbitrary subdiffraction object. Compared with the limits to direct imaging set by the Cramér-Rao bounds, the results show that SPADE can offer far superior accuracy in estimating the second and higher-order moments. Subdiffraction incoherent optical imaging via spatial-mode demultiplexing: semiclassical treatment Mankei Tsang December 30, 2023 =================================================================================================== § INTRODUCTION Recent theoretical and experimental studies have shown that far-field optical methods can substantially improve subdiffraction incoherent imaging <cit.>. While most of the prior works focus on two point sources, Ref. <cit.> proposes a spatial-mode demultiplexing (SPADE) measurement technique that can enhance the estimation of moments for arbitrary subdiffraction objects. Although the predicted enhancements are promising for applications in both astronomy and fluorescence microscopy, such as size and shape estimation for stellar objects or fluorophore clusters, researchers in those fields may find it difficult to comprehend the quantum formalism used in Ref. <cit.>. One of the main goals of this work is therefore to introduce a more accessible semiclassical formalism that can reproduce the results there, assuming only a background knowledge of statistical optics on the level of Goodman <cit.> and parameter estimation on the level of Van Trees <cit.>. The formalism incorporates diffraction, photon shot noise, and—most importantly—coherent optical processing, which enables the enhancements proposed in Refs. <cit.>. This treatment thus sheds light on the physical origin of the enhancements, clarifying that no exotic quantum phenomenon is needed to explain or implement them. As Ref. <cit.> assumes the Gaussian point-spread function (PSF) exclusively, another goal of this work is to generalize the results for a larger class of PSFs via the theory of orthogonal polynomials <cit.>, affirming that enhancements remain possible in those cases. To set a benchmark for the proposed method, I derive limits to moment estimation via direct imaging in the form of Cramér-Rao bounds (CRBs) <cit.>, which are original results in their own right and may be of independent interest to image-processing research <cit.>. On a more technical level, this work also investigates the estimation bias introduced by an approximation made in Ref. <cit.> and assures that it is harmless. This paper is organized as follows. Section <ref> introduces the background formalism of statistical optics, measurement noise, and CRBs. Section <ref> presents the bounds for moment estimation via direct imaging of a subdiffraction object.
Section <ref> introduces the theory of SPADE for a general class of PSFs and evaluates its biases and errors for moment estimation, showing that giant accuracy enhancements are possible for the second and higher-order moments. Section <ref> revisits the case of Gaussian PSF studied in Ref. <cit.> and also proposes new exactly unbiased estimators in the case of two dimensions. Section <ref> presents a Monte Carlo analysis to confirm the theory. Section <ref> concludes the paper, pointing out open questions and future directions. Appendices <ref>–<ref> deal with mathematical issues that arise in the main text. § FORMALISM §.§ Statistical optics Consider an object emitting spatially incoherent light, a diffraction-limited imaging system, as depicted in Fig. <ref>, and the paraxial theory of quasi-monochromatic scalar waves <cit.>. On the image plane, the mutual coherence function, also called the mutual intensity, can be expressed as <cit.> Γ(x,x'|θ) = ∫ dX ψ(x-X)ψ^*(x'-X) F(X|θ), where x, x' ∈ ℝ^D are D-dimensional position vectors on the image plane, X is the object-plane position vector normalized with respect to the magnification factor, F(X|θ) is the object intensity function, θ = (θ_0,θ_1,…) is a vector of unknown parameters to be estimated, and ψ(x) is the field PSF. To simplify the notations, I adopt the multi-index notation described in Appendix <ref> and Ref. <cit.>, such that D can be kept arbitrary, though D = 1 or 2 is typical in spectroscopy and imaging. Note that three-dimensional imaging requires a different formalism in the paraxial theory and is outside the scope of this paper. The mean intensity on the image plane is f(x|θ) ≡ Γ(x,x|θ) = ∫ dX |ψ(x-X)|^2 F(X|θ), which is a basic result in statistical optics <cit.>. For convenience, I normalize the position vectors with respect to the width of the PSF, such that the PSF width is equal to 1 in this unit. The PSF is assumed to obey the normalization ∫ dx |ψ(x)|^2 = 1, such that θ_0 ≡ ∫ dX F(X|θ) = ∫ dx f(x|θ) is the mean optical power reaching the image plane. Instead of intensity measurement on the image plane, consider the use of further linear optics to process the field followed by photon counting in each output channel, as depicted in Fig. <ref>. The mean power in each output channel can be expressed as p_j(θ) = ∫ dx ∫ dx' ϕ_j^*(x)ϕ_j(x')Γ(x,x'|θ) = ∫ dX |∫ dx ϕ_j^*(x)ψ(x-X)|^2 F(X|θ), where ϕ_j^*(x) is a propagator that couples the image-plane field from position x to the jth output. If the optics after the image plane is passive, power conservation implies that ∑_j p_j(θ) ≤ θ_0. This can be satisfied if the set {ϕ_j(x)} is orthonormal, viz., ∫ dx ϕ_j(x)ϕ_k^*(x) = δ_jk, by virtue of Bessel's inequality <cit.>. If {ϕ_j(x)} is also complete in the Hilbert space of image-plane fields, it becomes an orthonormal basis, and Parseval's identity leads to equality for Eq. (<ref>) <cit.>. Physically, Eq. (<ref>) implies that each output can be regarded as a projection of the image-plane field in a spatial mode. For example, direct imaging, which measures the spatial intensity on the image plane, can be modeled by taking ϕ_j(x) = √(dx^(j)) δ(x^(j) - x), where x^(j) is the position of each pixel with infinitesimal area dx^(j), such that p_j(θ) = f(x^(j)|θ) dx^(j).
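As a concrete instance of this measurement model, the following sketch (Python; the Gaussian PSF, two-point object, photon number and pixel size are illustrative choices, not values fixed by this section) simulates direct imaging of a subdiffraction pair with Poisson counts at the pixels:

import numpy as np
rng = np.random.default_rng(0)

# Direct-imaging model: two equal point sources blurred by a Gaussian PSF
# (unit-width units), with Poisson photon counts at the pixel outputs.
tau_theta0 = 1e4                     # mean photon number N = tau * theta_0
d = 0.2                              # sub-PSF separation (assumed)
pix = 0.1                            # pixel size dx^(j) (assumed)
x = np.arange(-8, 8, pix)            # pixel positions

psf2 = lambda s: np.exp(-s**2 / 2) / np.sqrt(2 * np.pi)   # |psi(x)|^2
f = 0.5 * (psf2(x - d/2) + psf2(x + d/2))                  # f(x|theta), theta_0 = 1
p = f * pix                                                 # p_j = f(x^(j)) dx^(j)

n = rng.poisson(tau_theta0 * p)      # Poisson counts at the pixels
print(n.sum(), "photons detected")   # ~ N on average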
A generalization of the measurement model to deal with mode-dependent losses and non-orthogonal mode projections is possible via the concept of positive operator-valued measures <cit.> but not needed here. In superresolution research, it is known that image processing can achieve arbitrary resolution if f(x|θ) is measured exactly and benign assumptions about the object can be made <cit.>. The caveat is that the techniques are severely limited by noise, so the use of proper statistics is paramount in superresolution studies. For weak incoherent sources, such as astronomical optical sources and microscopic fluorophores, bunching or antibunching is negligible, and it is standard to assume a Poisson model for the photon counts n = (n_1,n_2,…) at the output channels <cit.>. The Poisson distribution is P(n|θ) = ∏_j exp[-τ p_j(θ)] [τ p_j(θ)]^n_j/n_j!, where τ ≡ η T/ħω, η ∈ [0,1] is the detection efficiency, T is the integration time, and ħω is the photon energy. The most important statistics here are the mean 𝔼(n_j) = τ p_j(θ), where 𝔼 denotes the expectation with respect to P, and the covariance matrix 𝕍_jk(n) ≡ 𝔼(n_j n_k) - 𝔼(n_j)𝔼(n_k) = 𝔼(n_j)δ_jk, which is signal-dependent. If {ϕ_j} is an orthonormal basis, the mean photon number detected by the measurement is N ≡ 𝔼(∑_j n_j) = τθ_0. Conditioned on a total photon number ∑_j n_j, n obeys multinomial statistics, and the reconstruction of F via direct imaging becomes the density deconvolution problem in nonparametric statistics; see, for example, Ref. <cit.> and references therein. The quantum formalism can arrive at the same Poisson model by assuming that the source is thermal, the mean photon number per spatiotemporal mode is much smaller than 1, and the photon count for each channel is integrated in time over many modes <cit.>. That said, an advantage of the semiclassical model besides simplicity is that it applies to any incoherent source that produces Poisson noise at the output, such as incoherent laser sources <cit.> and electron microscopy <cit.>, without the need to satisfy all the assumptions of the quantum model. §.§ Cramér-Rao bounds (CRBs) To deal with the signal-dependent nature of Poisson noise, many existing approaches to computational superresolution <cit.> are inadequate. A more suitable tool to derive fundamental limits is the CRB, which is now standard in astronomy <cit.> and fluorescence microscopy <cit.>. For any estimator θ̌(n) that satisfies the unbiased condition 𝔼(θ̌) = θ, the mean-square error matrix is equal to its covariance, viz., MSE_μν(θ̌,θ) ≡ 𝔼[(θ̌_μ-θ_μ)(θ̌_ν-θ_ν)] = 𝕍_μν(θ̌), and the CRB is <cit.> MSE_μμ(θ̌,θ) ≥ CRB_μμ(θ), where CRB(θ) ≡ J^-1(θ) is the inverse of the Fisher information matrix defined as J_μν(θ) ≡ ∑_n P(n|θ) [∂ ln P(n|θ)/∂θ_μ][∂ ln P(n|θ)/∂θ_ν]. An unbiased estimator whose error attains the CRB is called efficient. In the limit of infinite trials, the maximum-likelihood estimator is asymptotically unbiased and efficient <cit.>, so the bound is also useful as a measure of the achievable error in the asymptotic limit. For the Poisson model, the Fisher information is J_μν(θ) = τ ∑_j [1/p_j(θ)] [∂p_j(θ)/∂θ_μ][∂p_j(θ)/∂θ_ν]. For example, the information for direct imaging with infinitesimal pixel size is J_μν(θ) = τ ∫ dx [1/f(x|θ)] [∂f(x|θ)/∂θ_μ][∂f(x|θ)/∂θ_ν]. The data-processing inequality <cit.> ensures that increasing the pixel size, or any processing of the image-plane intensity in general, cannot increase the amount of information. A simple extension of Eq. (<ref>) for strong thermal sources with super-Poisson statistics can be found in Appendix C of Ref. <cit.>. An intuitive way of understanding Eq.
(<ref>) is to regard it as a signal-to-noise ratio: each derivative ∂p_j/∂θ_μ measures the sensitivity of an output to a parameter, while the denominator p_j is proportional to the Poisson variance and indicates the noise level. The form of Eq. (<ref>) hence suggests that any parameter-insensitive background in p_j should be minimized. The nonlinear dependence of the Fisher information on p_j complicates the analysis, but also hints that coherent optical processing may lead to nontrivial effects. The Bayesian CRB (BCRB) can be used to set more general limits for any biased or unbiased estimator <cit.>. Define the Bayesian mean-square error as BMSE(θ̌) ≡ ∫ dθ Π(θ) MSE(θ̌,θ), where Π(θ) is a prior probability density. For a prior that vanishes on the boundary of its domain, the BCRB is BMSE_μμ(θ̌) ≥ BCRB_μμ, BCRB ≡ (J̃ + K)^-1, where J̃ ≡ ∫ dθ Π(θ) J(θ) is the Fisher information averaged over the prior and K_μν ≡ ∫ dθ [1/Π(θ)] [∂Π(θ)/∂θ_μ][∂Π(θ)/∂θ_ν] is the prior information. Other Bayesian bounds for more general priors can be found in Ref. <cit.>. The BCRB also applies to the worst-case error sup_θ MSE_μμ(θ̌,θ) for minimax estimation <cit.>, since sup_θ MSE_μμ(θ̌,θ) ≥ BMSE_μμ(θ̌) for any Π(θ), and the prior can be chosen to tighten the bound <cit.>. The BCRB is close to the CRB if J(θ) is constant in the domain of the prior, such that J̃ = J, and the prior information K is negligible relative to J̃, such that BCRB = (J̃ + K)^-1 ≈ J̃^-1 = J^-1. A counterexample is the problem of two-point resolution <cit.>, where J vanishes at a point in the parameter space and the BCRB becomes very sensitive to the choice of prior, as mentioned later in Sec. <ref>. § LIMITS TO DIRECT IMAGING §.§ Error bounds Define the object moments θ_μ ≡ ∫ dX X^μ F(X|θ), μ ∈ ℕ_0^D, as the parameters of interest. Note that the moments are unnormalized, unlike the definition in Ref. <cit.>. Under general conditions, the set of moments uniquely determines F <cit.>, so there is little loss of generality with this parameterization. I will focus on moment estimation hereafter and not the pointwise reconstruction of F, however, for two reasons: the moments are more directly related to many useful parameters in practice, such as the brightness, location, size, and shape of an object <cit.>, while the reconstruction of F without further prior information is ill-posed and a forlorn task in practice when noise is present <cit.>, even with the techniques introduced in this work. Expanding |ψ(x-X)|^2 in a Taylor series, the mean image given by Eq. (<ref>) can be expressed in terms of θ as f(x|θ) = ∑_μ (θ_μ/μ!)(-∂)^μ |ψ(x)|^2. The Fisher information given by Eq. (<ref>) becomes J_μν(θ) = τ ∫ dx [(-∂)^μ|ψ(x)|^2][(-∂)^ν|ψ(x)|^2]/[μ!ν! f(x|θ)]. Appendix <ref> shows that this can be inverted analytically to give CRB_μν(θ) = (θ_0^2/N) ∑_ξ,ζ (C^-1)_μξ M_ξζ(θ) (C^-1)_νζ, where N is the mean photon number given by Eq. (<ref>), M_μν(θ) ≡ (1/θ_0) ∫ dx f(x|θ) x^μ+ν is the normalized image moment matrix, the C matrix is defined as C_μν ≡ (1/ν!) ∫ dx |ψ(x)|^2 ∂^ν x^μ, which equals 0 if any ν_j > μ_j and equals the multi-index binomial coefficient (μ; ν) times Λ_μ-ν otherwise, and Λ_μ ≡ ∫ dx |ψ(x)|^2 x^μ is a moment of the PSF. The lower-triangular property of C indicated by Eq. (<ref>) means that C^-1 is also lower-triangular and the low-order elements of the CRB can be computed from a finite number of low-order elements of M and C. An unbiased and efficient estimator is described in Appendix <ref>. To proceed further, I focus on the subdiffraction regime, which I define as the scenario where the object support width Δ is much smaller than the PSF width. To be specific, the width is defined by
To be specific, the width is defined byF(X|θ)= 0 if max_j |X_j| > Δ/2,and the subdiffraction regime is defined by the conditionΔ≪ 1in the dimensionless unit assumed here. This can be regarded as the extreme opposite to the sparse regime commonly assumed in compressed sensing <cit.> and can be ensured by prior information in practice. For example, a spot that resembles the PSF in a prior image indicates a subdiffraction object and can be studied further via the framework here; such spots are of course commonly found in both astronomical and microscopic imaging. In fluorescence microscopy, the subdiffraction support can even be enforced via stimulated-emission depletion (STED) <cit.>, and the theory here can help STED microscopy gain more information about each spot beyond θ_0.In the subdiffraction regime, the moments observe a magnitude hierarchy with respect to the order |μ|, as|θ_μ|≤∫ dX |X^μ| F(X|θ) ≤θ_0 Δ/2^|μ|,and I can combine Eqs. (<ref>), (<ref>), and (<ref>) to obtainM_μν(θ)=1/θ_0∑_ξ=0^μ+νθ_ξ[ μ+ν; ξ ]Λ_μ+ν-ξ= Λ_μ+ν + O(Δ).In other words, the image is so blurred that it resembles the PSF to the zeroth order, and the image moments approach those of the PSF. The CRB hence becomesCRB_μν = θ_0^2/N∑_ξ,ζ (C^-1)_μξΛ_ξ+ζ (C^-1)_νζ +O(Δ).This is the central result of Sec. <ref>. To set a more general limit for any biased or unbiased estimator, consider the BCRB described in Sec. <ref>.Since the Fisher information given by the inverse of Eq. (<ref>) depends only on θ_0 and not the other parameters to the leading order, the average information J̃ defined by Eq. (<ref>) is relatively insensitive to the choice of prior in the subdiffraction regime.For any reasonable prior that gives a finite prior information K, a long enough integration time can then make J̃ much larger than K in Eq. (<ref>), leading to BCRB≈CRB, if θ_0 is replaced by a suitable prior value. The two bounds hence give similar results here in the asymptotic limit.Figure <ref> summarizes the relationships among the various quantities defined for direct imaging in this section.§.§ Special casesThe low-order elements of Eqs. (<ref>) and (<ref>) can be used to reproduce a few well known results. For example, the CRB with respect to θ_0 can be derived from Eq. (<ref>) and is given byCRB_00 = θ_0^2/N,which is equal to the textbook result.Another example is point-source localization <cit.>, for which known results can be retrieved from Eq. (<ref>) by defining the location parameters as θ_μ/θ_0 for |μ| = |ν| = 1. To see this, assume D = 1 for simplicity, and the information with respect to X = θ_1/θ_0 in the Δ→ 0, f(x|θ) →θ_0|ψ(x)|^2 limit becomesJ^(X) = θ_1X^2 J_11→ N∫ dx [∂|ψ(x)|^2]^2/|ψ(x)|^2,which is exact for one point source <cit.>.Considering |μ| = |ν| = 2, Eq. (<ref>) can also reproduce the results in Refs. <cit.> regarding sub-Rayleigh two-point separation estimation. To see this, assume D = 1 again and that the centroid of the two point sources is at the origin. The second moment is then related to the separation d by θ_2 = θ_0 d^2/4. The information with respect to d becomesJ^(d) = θ_2d^2 J_22→N d^2/16∫ dx[∂^2 |ψ(x)|^2]^2/|ψ(x)|^2.This can be compared with a direct calculation of the information by considering the mean imagef(x|d)= θ_0/2|ψ(x-d/2)|^2+|ψ(x+d/2)|^2,and approximating it for sub-Rayleigh d≪ 1 as <cit.>f(x|d)≈θ_0 |ψ(x)|^2 + d^2/8∂^2|ψ(x)|^2.The information is thenJ^(d) = τ∫ dx 1/ffd^2 ≈N d^2/16∫ dx[∂^2 |ψ(x)|^2]^2/|ψ(x)|^2,which coincides with Eq. (<ref>). 
The vanishing J^(d) and divergent CRB^(d) = 1/J^(d) for d ≪ 1 were first reported in Refs. <cit.> and called Rayleigh's curse in Ref. <cit.>. The BCRB becomes very sensitive to the choice of prior and produces a markedly different result from the CRB when applied to the worst-case error <cit.>. This issue depends on the parameterization <cit.> and does not arise for the moment parameters, however. In the absence of a specific parametric model or equality parameter constraints <cit.>, the full information matrix should be considered, and the CRB given by Eq. (<ref>), which results from inverting the full information matrix, is a tighter limit <cit.> for general objects. Appendix <ref> presents a limit of Eq. (<ref>) when diffraction can be ignored, while Eq. (<ref>) should be used in the subdiffraction regime. This section has established fundamental limits to direct imaging in the subdiffraction and shot-noise-limited regime. The next sections show that coherent optical processing can beat them. § SPATIAL-MODE DEMULTIPLEXING (SPADE) §.§ Point-spread-function-adapted (PAD) basis References <cit.> have shown that SPADE, a technique of linear optics and photon counting with respect to a judiciously chosen basis of spatial modes, can substantially improve subdiffraction imaging. To generalize the use of the TEM basis in Ref. <cit.>, I consider the point-spread-function-adapted (PAD) basis proposed by Rehacek et al. for the two-point problem <cit.> and apply it to more general objects. Denote the PAD basis by {ϕ_q(x); q ∈ ℕ_0^D}, where the spatial modes are more conveniently defined in the spatial-frequency domain. Defining Φ_q(k) ≡ (2π)^-d/2 ∫ dx ϕ_q(x) exp(-ik·x), Ψ(k) ≡ (2π)^-d/2 ∫ dx ψ(x) exp(-ik·x), Φ_q(k) can be expressed as Φ_q(k) = (-i)^|q| g_q(k) Ψ(k), g_q(k) ≡ ∑_r G_qr k^r, where {g_q(k); q ∈ ℕ_0^D} is a set of real orthogonal polynomials with |Ψ(k)|^2 as the weight function <cit.>, G is an invertible matrix that satisfies the lower-triangular property G_qr = 0 if r > q, and the indices follow a total and degree-respecting order that obeys r ≥ q ⇒ |r| ≥ |q|. See Appendix <ref> for more details about orthogonal polynomials. The polynomials are assumed to satisfy the orthonormal condition ∫ dk Φ_q^*(k)Φ_r(k) = ∫ dk |Ψ(k)|^2 g_q(k) g_r(k) = δ_qr, which also ensures that {ϕ_q} is orthonormal. The completeness of {ϕ_q} can be proved along the lines of Ref. <cit.> but is not essential here. As ϕ_0(x) = ψ(x) and each higher-order mode in real space is a sum of ψ(x) derivatives given by ϕ_q(x) = (-i)^|q| g_q(-i∂)ψ(x), the PAD basis can be regarded as a generalization of the binary SPADE concept in Ref. <cit.> and the derivative-mode concept in Ref. <cit.>. In terms of the PAD basis, I can define a mutual coherence matrix as Γ_qq'(θ) ≡ ∫ dX h_q(X)h_q'^*(X) F(X|θ), h_q(X) ≡ ∫ dx ϕ_q^*(x)ψ(x-X). In particular, SPADE in terms of the PAD basis gives a set of output channels with powers p_q(θ) = ∫ dX |h_q(X)|^2 F(X|θ) = Γ_qq(θ), and the Poisson photon counts {n_q; q ∈ ℕ_0^D} have expected values 𝔼(n_q) = τ_0 p_q(θ), τ_0 ≡ η_0 T/ħω, where η_0 is the efficiency of the PAD-basis measurement. An unbiased estimator of Γ_qq is Γ̌_qq = n_q/τ_0, and its variance is 𝕍(Γ̌_qq) = Γ_qq/τ_0. In the context of the Gaussian PSF, Refs. <cit.> found that p_q(θ) is sensitive only to some of the object moments. To estimate the other moments, Ref.
<cit.> further proposes measurements that access the off-diagonal elements of Γ. To measure an off-diagonal Γ_qq', take two spatial modes with indices q and q' from the PAD basis and interfere them, such that the outputs correspond to projections into the spatial modes φ_qq'^+(x) = [ϕ_q(x)+ϕ_q'(x)]/√(2), φ_qq'^-(x) = [ϕ_q(x)-ϕ_q'(x)]/√(2), which I call interferometric-PAD (iPAD) modes. The powers at the two outputs are p_qq'^+ = (Γ_qq + Γ_q'q')/2 + Γ_qq', p_qq'^- = (Γ_qq + Γ_q'q')/2 - Γ_qq'. The photon counts, denoted by n_qq'^+ and n_qq'^-, have expected values 𝔼(n_qq'^+) = τ_s p_qq'^+, 𝔼(n_qq'^-) = τ_s p_qq'^-, τ_s ≡ η_s T/ħω, where η_s denotes the efficiency of the measurement that includes these two projections. Assume further that |Ψ(k)|^2 is centrosymmetric, as defined by |Ψ(k)|^2 = |Ψ(-k)|^2, such that G, h_q(X), and Γ_qq' are all real, as shown in Appendix <ref> and assumed hereafter. An unbiased estimator of Γ_qq' is then Γ̌_qq' = (n_qq'^+ - n_qq'^-)/(2τ_s), with 𝕍(Γ̌_qq') = (Γ_qq + Γ_q'q')/(4τ_s). The estimators Γ̌_qq' given by Eqs. (<ref>) and (<ref>) will be used in Sec. <ref> to construct moment estimators. Since the iPAD modes are not orthogonal to the PAD modes, they cannot belong to the same orthonormal basis. This means that, if projections into both PAD and iPAD modes are desired, multiple measurements in different bases are needed and must be performed on different photons. This can be done either sequentially in time via configurable interferometers or on different beam-split parts of the light. If each measurement has an efficiency η_s, energy conservation mandates that ∑_s η_s ≤ 1. §.§ Moment estimation To relate Γ to the object moments, use Eqs. (<ref>)–(<ref>) to rewrite the propagator h_q(X) in Eq. (<ref>) as h_q(X) = i^|q| ∫ dk |Ψ(k)|^2 g_q(k) exp(-ik·X) = i^|q| ∫ dk |Ψ(k)|^2 g_q(k) ∑_r (-ik)^r X^r/r! = ∑_r H_qr X^r, where H_qr ≡ (i^|q|/r!) ∫ dk |Ψ(k)|^2 g_q(k) (-ik)^r = [i^|q|(-i)^|r|/r!](G^-1)_rq, (H^-1)_qr = q! i^|q|(-i)^|r| G_rq, as shown in Appendix <ref>. Since G^-1 and G are lower-triangular, H and H^-1 are upper-triangular, satisfying H_qr = 0, (H^-1)_qr = 0 if r < q. Substituting Eq. (<ref>) into Eq. (<ref>), Γ_qq' can be related to the moments by Γ_qq' = ∑_r,r' H_qr H_q'r' θ_r+r', which shows that each Γ_qq' is sensitive to a combination of moments with orders at least as high as |q+q'|. Given the magnitudes of θ according to Eq. (<ref>), the magnitude of Γ_qq' can be expressed as Γ_qq' = θ_0 O(Δ^|q+q'|), and the variances of the estimators given by Eqs. (<ref>) and (<ref>) become 𝕍(Γ̌_qq') = (θ_0^2/N_s) O(Δ^2min(|q|,|q'|)), N_s ≡ τ_s θ_0 = η_s Tθ_0/ħω. Equations (<ref>) and (<ref>) will be used to evaluate the errors of moment estimation. Instead of computing the CRB and relying on asymptotic arguments, here I construct explicit moment estimators and evaluate their errors directly to demonstrate the achievable performance of SPADE. To begin, consider the inverse of Eq. (<ref>) given by θ_q+q' = ∑_r,r' (H^-1)_qr (H^-1)_q'r' Γ_rr', which implies that an unbiased estimator of θ_q+q' can be constructed from unbiased estimators of Γ_rr' given by Eqs. (<ref>) and (<ref>), viz., θ̌_q+q' = ∑_r,r' (H^-1)_qr (H^-1)_q'r' Γ̌_rr'. This estimator may not be realizable, however, as it may not be possible to group the needed projections into a reasonable number of bases. A fortuitous exception occurs for the Gaussian PSF, as elaborated later in Sec. <ref>. To find a simpler estimator, I focus on the class of separable PSFs given by |Ψ(k)|^2 = ∏_j |Ψ^(j)(k_j)|^2, where each |Ψ^(j)(k_j)|^2 is a one-dimensional function.
Defining g_q_j^(j)(k_j) = ∑_r_j G_q_j r_j^(j) k_j^r_j as the orthogonal polynomials with respect to each |Ψ^(j)(k_j)|^2, the natural orthogonal polynomials in the multivariate case are their products, viz., g_q(k) = ∏_j g_q_j^(j)(k_j). As each G_q_j r_j^(j) is lower-triangular, I obtain the condition G_qr = ∏_j G_q_j r_j^(j) = 0 if any r_j > q_j. It follows from Eqs. (<ref>) and (<ref>) that H and H^-1 are also separable and given by H_qr = ∏_j [i^q_j(-i)^r_j/r_j!](G^(j))^-1_r_j q_j, (H^-1)_qr = ∏_j q_j! i^q_j(-i)^r_j G^(j)_r_j q_j. Using the property (H^-1)_qr = 0 if any q_j > r_j, I can rewrite the sums in Eq. (<ref>) as ∑_r = ∑_r_1=q_1^∞ … ∑_r_D=q_D^∞ and obtain θ_q+q' = (H^-1)_qq(H^-1)_q'q' Γ_qq' + ∑_|r+r'| > |q+q'| (H^-1)_qr(H^-1)_q'r' Γ_rr', which consists of one θ_0 O(Δ^|q+q'|) term and higher-order terms, as ranked by Eq. (<ref>). To evaluate the magnitude of the higher-order terms, note that, for a centrosymmetric |Ψ(k)|^2, (H^-1)_qr ∝ G_rq = 0 if |r| = |q| + 1, |q|+3, … <cit.>, so ∑_|r+r'|>|q+q'| (H^-1)_qr(H^-1)_q'r' Γ_rr' = θ_0 O(Δ^|q+q'|+2), which is smaller than the leading-order term by two orders of magnitude. A simplified estimator, involving only one Γ̌_qq', can then be constructed as θ̌'_q+q' = (H^-1)_qq(H^-1)_q'q' Γ̌_qq' = Γ̌_qq'/(H_qq H_q'q'), where the last step uses the fact (H^-1)_qq = 1/H_qq for a triangular matrix. The bias is then the negative of Eq. (<ref>), viz., 𝔼(θ̌'_q+q') - θ_q+q' = θ_0 O(Δ^|q+q'|+2). Figure <ref> summarizes the relationships among the various quantities defined in this section, while Appendix <ref> discusses a generalization of the estimator for non-separable PSFs. Given Eq. (<ref>), the variance of the estimator is 𝕍(θ̌'_q+q') = 𝕍(Γ̌_qq')/(H_qq^2 H_q'q'^2) = (θ_0^2/N_s) O(Δ^2min(|q|,|q'|)). To minimize the variance for a given moment θ_μ with μ = q+q', min(|q|,|q'|) should be made as high as possible. This can be accomplished by choosing, for each j ∈ {1,2,…,D}, q_j = μ_j/2 if μ_j is even, q_j = ⌊μ_j/2⌋ if μ_j is the first odd number, q_j = ⌈μ_j/2⌉ if μ_j is odd and the last choice was ⌊·⌋, and q_j = ⌊μ_j/2⌋ if μ_j is odd and the last choice was ⌈·⌉. The alternating floor (⌊·⌋) and ceil (⌈·⌉) operations keep |q| high without exceeding |q'|. If |μ| is even, μ has an even number of odd elements, and then |q| = |q'| = |μ|/2. If |μ| is odd, μ has an odd number of odd elements, and then |q| = (|μ|-1)/2 and |q'| = (|μ|+1)/2. Hence one can achieve min(|q|,|q'|) = ⌊|μ|/2⌋, 𝕍(θ̌'_μ) = (θ_0^2/N_s) O(Δ^2⌊|μ|/2⌋), and the mean-square error becomes MSE(θ̌'_μ,θ_μ) = 𝕍(θ̌'_μ) + [𝔼(θ̌'_μ) - θ_μ]^2 = (θ_0^2/N_s) O(Δ^2⌊|μ|/2⌋) + θ_0^2 O(Δ^2|μ|+4). Compared with the CRB for direct imaging given by Eq. (<ref>), Eq. (<ref>) can be much lower in the Δ ≪ 1 subdiffraction regime if |μ| ≥ 2, the bias is negligible, and η_s is on the same order of magnitude as the direct-imaging efficiency. This is the central result of Sec. <ref>. The conclusion holds also from the Bayesian or minimax perspective, since the BCRB for direct imaging is close to the CRB in the asymptotic limit, as argued in Sec. <ref>, while Eq. (<ref>) also applies to the Bayesian or worst-case error for SPADE if θ_0 is replaced by a suitable prior value. A heuristic explanation of the enhancements is as follows. Recall that Poisson noise is signal-dependent, and any background in the signal increases the variance. In the subdiffraction regime, the direct image is so blurred that it resembles the PSF |ψ(x)|^2, and the fundamental mode ϕ_0(x) = ψ(x) acts as a background and the main contributor of noise. With SPADE, on the other hand, each moment estimator is designed to use spatial modes with the highest possible orders.
The isolation from the lower-order modes, including the fundamental, substantially reduces the background and improves the signal-to-noise ratio. §.§ Multi-moment estimation The remaining question is the number of bases needed to estimate all moments. For D = 1, three bases are enough: a measurement in the PAD basis provides {Γ̌_qq; q ∈ ℕ_0} and {θ̌'_μ; μ ∈ 2ℕ_0}, where 2ℕ_0 = {0,2,4,…}, a measurement in the basis {φ_q,q+1^±(x); q ∈ 2ℕ_0} provides {Γ̌_q,q+1; q ∈ 2ℕ_0} and {θ̌'_μ; μ ∈ 4ℕ_0 + 1}, where 4ℕ_0 + 1 = {1,5,9,…}, and a measurement in the basis {φ_q,q+1^±(x); q ∈ 2ℕ_0+1} provides {Γ̌_q,q+1; q ∈ 2ℕ_0+1} and {θ̌'_μ; μ ∈ 4ℕ_0 + 3}, where 2ℕ_0+1 = {1,3,5,…} and 4ℕ_0 + 3 = {3,7,11,…}. If the light is split for measurements in all three bases, the condition of energy conservation given by Eq. (<ref>) implies min(η_s) ≤ 1/3. For D = 2, seven bases—defined by Table <ref> and illustrated by Fig. <ref>—can do the job. I call these bases PAD and iPAD1–iPAD6, which generalize the TEM and iTEM1–iTEM6 bases proposed in Ref. <cit.> for the Gaussian PSF. Energy conservation now implies min(η_s) ≤ 1/7 if measurements in all the seven bases are performed. The essential point is that the penalty in efficiency for multi-moment estimation is only a constant factor, and significant enhancements over direct imaging remain possible. §.§ Criterion for informative estimation A word of caution is in order: even with SPADE, there are severe resolution limits. This is because the moments are inherently small parameters in the subdiffraction regime according to Eq. (<ref>), and the error needs to be much smaller than the prior range of the parameter for the estimation to be informative. To evaluate the usefulness of an estimation relative to prior information, I adopt the Bayesian perspective <cit.> and consider the Bayesian error given by Eq. (<ref>). In the absence of measurements, the error is determined by the prior and given by BMSE_μμ^(Π) ≡ 𝔼^(Π)[θ_μ - 𝔼^(Π)(θ_μ)]^2 ≤ θ_0^2 (Δ/2)^2|μ|, where 𝔼^(Π) denotes the expectation with respect to Π(θ), the upper bound comes from Eq. (<ref>), and θ_0 is assumed to be given for simplicity. Using the bound as a conservative estimate of the prior error, a rule of thumb for informative estimation is BMSE_μμ/[θ_0^2(Δ/2)^2|μ|] ≪ 1. The small prior error places a stringent requirement on the post-measurement error. For direct imaging, assuming the asymptotic limit where the BCRB is close to the CRB given by Eq. (<ref>), the fractional BCRB is BCRB_μμ/BMSE_μμ^(Π) ≈ CRB_μμ/BMSE_μμ^(Π) = O(Δ^-2|μ|)/N. This value grows exponentially with the order |μ|, meaning that the estimation of higher-order moments requires exponentially more photons to become informative. For SPADE, an achievable Bayesian error can be obtained by averaging MSE(θ̌'_μ,θ_μ), and the magnitude is also given by Eq. (<ref>). The fractional error becomes BMSE_μμ/BMSE_μμ^(Π) = O(Δ^2⌊|μ|/2⌋-2|μ|)/N_s + O(Δ^4). The O(Δ^4) relative bias is always much smaller than 1, but the fractional variance still grows with |μ| exponentially. Compared with direct imaging, the exponent is reduced for |μ| ≥ 2 and not as many photons are needed to achieve a small fractional error for a given moment, but higher-order moments remain more difficult to estimate. This consideration suggests that SPADE is most useful for scenarios that depend on only a few low-order moments. For example, the two-point problem studied in Refs. <cit.> requires moments up to the second order only <cit.>, the case of two unequal sources studied in Refs.
<cit.> requires moments up to the third, and parametric object models with size and shape parameters <cit.> can also be related to low-order moments.§ GAUSSIAN POINT-SPREAD FUNCTION §.§ Direct imagingFor an illustrative example of the general theory, consider the Gaussian PSFψ(x)= 1/(2π)^d/4exp-||x||^2/4,which is a common assumption in fluorescence microscopy <cit.>.The Hermite polynomials can be used to compute the CRB in the limit of Δ→ 0, as shown in Appendix <ref>.The result isCRB_μν →θ_0^2/Nμ!δ_μν,which coincides with the D = 2 theory in Ref. <cit.>. §.§ SPADEThe PSF in the spatial-frequency domain isΨ(k)= 2/π^d/4exp(-||k||^2).A set of orthogonal polynomials with respect to |Ψ(k)|^2 are defined byg_q(k)= 1/√(q!)_q(2k),and the PAD mode functions becomeΦ_q(k)= 2/π^d/4(-i)^|q|/√(q!)_q(2k)exp(-||k||^2),ϕ_q(x)= 1/(2π)^d/4√(q!)_q(x) exp-||x||^2/4.The PAD basis in this case is simply the TEM basis, as expected.The propagator given by Eq. (<ref>) can be computed analytically with the help of the generating function for Hermite polynomials <cit.>; the result ish_q(X)= H_qqexp-||X||^2/8 X^q, H_qq = 1/2^|q|√(q!).The mutual coherence matrix Γ defined by Eq. (<ref>) becomesΓ_qq' = H_qqH_q'q'∫ dX exp-||X||^2/4 X^q+q'F(X|θ).Unbiased estimators of Γ_qq' can be constructed from projections in the PAD and iPAD spatial modes according to Eqs. (<ref>) and (<ref>); the iPAD modes are called iTEM modes in Ref. <cit.>. The estimator variances are given by Eqs. (<ref>) and (<ref>), with magnitudes given by Eq. (<ref>).To estimate a given moment θ_μ, q and q' = μ-q can be chosen according to Eq. (<ref>), the simplified estimator given by Eq. (<ref>) can be used, and the error then agrees with Eq. (<ref>).These results again agree with Ref. <cit.>, except that Ref. <cit.> neglects the contribution of bias to the mean-square error and therefore does not include the second term in Eq. (<ref>). §.§ Exactly unbiased estimatorFor D = 2, the PAD and iPAD1–iPAD6 bases described by Table <ref> and Fig. <ref> become the TEM and iTEM1–iTEM6 bases proposed in Ref. <cit.>, and the estimator given by Eq. (<ref>) is equivalent to the ones proposed in Ref. <cit.>.Interestingly, it is possible to go further than Ref. <cit.> and construct exactly unbiased moment estimators from these measurements. First note that Eq. (<ref>) offers a shortcut to express each moment in terms of Γ as follows: θ_q+q' = ∫ dX exp||X||^2/4exp-||X||^2/4 X^q+q' F(X|θ) =∫ dX ∑_r X^2r/r! 4^|r|exp-||X||^2/4 X^q+q' F(X|θ) = ∑_r 1/r! 4^|r|∫ dXexp-||X||^2/4 X^q+q'+2r F(X|θ) = ∑_r Γ_q+r,q'+r/r! 4^|r| H_q+r,q+rH_q'+r,q'+r. Combining Eqs. (<ref>) and (<ref>), it can then be shown that the estimatorθ̌_μ = ∑_r θ̌_μ+2r'/r!4^|r|is exactly unbiased. To construct θ̌_μ; μ∈ (2ℕ_0) × (2ℕ_0),one simply needs θ̌_μ'; μ∈ (2ℕ_0) × (2ℕ_0) from the PAD basis. To constructθ̌_μ; μ∈ (2ℕ_0+1) × (2ℕ_0),one needs θ̌_μ'; μ∈ (2ℕ_0+1) × (2ℕ_0), which can be obtained from the iPAD1 and iPAD4 bases. Similarly, to constructθ̌_μ; μ∈ (2ℕ_0)× (2ℕ_0+1),one needs θ̌_μ'; μ∈ (2ℕ_0)× (2ℕ_0+1), which can be obtained from the iPAD2 and iPAD5 bases. Finally, to constructθ̌_μ; μ∈ (2ℕ_0+1)× (2ℕ_0+1),one needs θ̌_μ'; μ∈ (2ℕ_0+1)× (2ℕ_0+1), which can be obtained from the iPAD3 and iPAD6 bases.The error matrix of the unbiased estimator becomesMSE_μν(θ̌,θ)= _μν(θ̌) =θ_0^2/min(N_s)O(Δ^2|μ|/2) δ_μν,which remains on the same order of magnitude as the variance of the simplified estimator in Eq. (<ref>), while the bias contribution is no longer present. 
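As an aside, the index-selection rule of Eq. (<ref>), used here to pick q and q' = μ - q, amounts to only a few lines of code. The following Python sketch is a non-authoritative illustration (the function and variable names are mine, not the paper's):

def choose_indices(mu):
    # Split a moment index mu into (q, q') with q + q' = mu, keeping
    # min(|q|, |q'|) as large as possible: even components are halved,
    # while odd components alternately receive the floor and the ceiling
    # of mu_j / 2, starting with the floor.
    q = []
    use_floor = True
    for m in mu:
        if m % 2 == 0:
            q.append(m // 2)
        else:
            q.append(m // 2 if use_floor else m // 2 + 1)
            use_floor = not use_floor
    q_prime = tuple(m - qj for m, qj in zip(mu, q))
    return tuple(q), q_prime

# Example: mu = (3, 2, 1) yields q = (1, 1, 1) and q' = (2, 1, 0),
# so that |q| = |q'| = |mu|/2 = 3, as stated in the text.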
The number of bases needed to achieve enhanced and exactly unbiased multi-moment estimation for other PSFs and dimensions remains an open question.§ NUMERICAL DEMONSTRATIONI now present Monte Carlo simulations to corroborate the theory. Assume D = 1. Each simulated object is an ensemble of S = 5 point sources with randomly generated positions {X_σ; σ = 1,…,S} within the interval|X_σ|≤Δ/2,Δ = 0.2,such thatF(X|θ)= θ_0/S∑_σ=1^S δ(X-X_σ).50 objects are generated for each PSF under study.For direct imaging, I assume that the mean photon number is N = 50,000, the pixel size is dx = 0.1, and 1,000 samples of Poisson images are generated for each object. The estimator described in Appendix <ref> is applied to each sample to estimate the moments θ_μ for μ = 1,2,3,4 (θ_0 can be estimated by summing all the photon counts and the results are trivial). The sample errors with respect to the true parameters are averaged to approximate the expected values.The averaged errors are then plotted for two different PSFs in Figs. <ref> and <ref> and compared with the CRB given by Eq. (<ref>), omitting the O(Δ) correction. To simulate SPADE according to Sec. <ref>, measurements in three different bases are simulated. The first basis isϕ_0(x),ϕ_1(x),ϕ_2(x),with the simulated photon counts denoted by {n_0,n_1,n_2}, the second basis isφ_01^+(x),φ_01^-(x),ϕ_2(x),with the photon counts denoted by {n_01^+,n_01^-,n_2'}, and the third basis isϕ_0(x),φ_12^+(x),φ_12^-(x),with the photon counts denoted by {n_0',n_12^+,n_12^-}. The light is split equally among the three measurements, such that N_s = N/3. All photons in higher-order modes are neglected.To estimate the moments with SPADE, I use the simplified but biased estimator given by Eq. (<ref>), with q given by Eq. (<ref>).Using Eq. (<ref>) for Γ̌_01, the estimator of θ_1 becomesθ̌_1'= Γ̌_01/H_00H_11 = n_01^+-n_01^-/2H_00H_11τ_s.The estimator is applied to 1,000 samples of the simulated photon counts for each object.The sample errors with respect to the true parameters are averaged and compared with the analytic expressionMSE_11 ≈θ̌_1'=Γ̌_01/H_00^2H_11^2≈Γ_00/4H_00^2H_11^2τ_s≈θ_0/4H_11^2τ_s,which neglects the bias and applies the approximationsΓ_qq + Γ_q'q'≈Γ_qq≈ H_qq^2θ_2qto Eqs. (<ref>) and (<ref>). Similarly,θ̌_2'= n_1/H_11^2τ_s,MSE_22 ≈θ_2/H_11^2 τ_s,θ̌_3'= n_12^+-n_12^-/2H_11H_22τ_s,MSE_33 ≈θ_2/4H_22^2τ_s.To estimate θ_4, I use both of the photon counts that come from the two ϕ_2(x) projections to obtainθ̌_4'= n_2+n_2'/2H_22^2τ_s,MSE_44 ≈θ_4/2H_22^2τ_s.There is no need to specify θ_0, τ, or τ_s individually if the errors are normalized with respect to θ_0^2. The simulated errors and the analytic expressions are plotted in Figs. <ref>–<ref> against the relevant parameters in log-log scale for the three PSFs. The three PSFs in the spatial-frequency domain under study and the associated PAD modes are plotted in Fig. <ref>.Figure <ref> plots the results for the Gaussian PSF described in Sec. <ref>. The simulated errors all match the theory, despite the approximations in the analytic expressions. In particular, the agreement confirms that the contribution of bias to the errors of SPADE is negligible. For μ = 1, SPADE uses one third of the photons only, and its errors are three times those of direct imaging. For higher moments, however, SPADE outperforms direct imaging by orders of magnitude. It is important to note that the plotted mean-square errors are normalized with respect to θ_0^2(Δ/2)^2μ, which is the square of the prior limit given by Eq. 
(<ref>), and only the normalized errors for μ =1,2 go significantly below 1. According to the discussion in Sec. <ref>, this implies that only the estimation for μ≤ 2 is informative, while the estimation for μ≥ 3 would require a lot more photons to become informative. The high variances of the estimators for μ≥ 3 also suggest that, for the given photon number, replacing them with Bayesian estimators <cit.> can reduce their errors to the vicinity of the prior levels given by Eq. (<ref>), although the bias will go up a lot. The second PSF under study is the “bump” aperture function <cit.>Ψ(k)= {[ Ψ(0) exp-k^2/1-k^2,|k| < 1,;0,|k| ≥ 1, ].where Ψ(0) ≈ 1.0084 is a normalization constant.The compact support models a hard bandwidth limit, while the infinite differentiability of Ψ(k) ensures that all the moments of |ψ(x)|^2 are finite and the direct-imaging theory in Sec. <ref> is valid, as discussed in Appendix <ref>. The simulated errors, plotted in Fig. <ref>, behave similarly to those in the Gaussian case, except that the direct-imaging errors are substantially higher for higher moments.The enhancements by SPADE appear even bigger, though not big enough to bring the errors for μ≥ 3 down to the informative regime for the given photon number.The final PSF is the textbook rectangle aperture functionΨ(k)= {[ 1, |k| < 1/2,; 0, |k| ≥ 1/2. ].The second and higher moments of |ψ(x)|^2 are infinite, meaning that the direct-imaging theory in Sec. <ref> is inapplicable, as discussed in Appendix <ref>. Fortunately, the orthogonal polynomials with respect to |Ψ(k)|^2 and therefore the PAD basis remain well-defined <cit.>. Figure <ref> plots the results for SPADE, which are similar to those for the bump aperture in Fig. <ref>.Although these results have no direct-imaging limits to compare with, the earlier results on the two-point problem for this PSF <cit.> suggest that significant improvements remain likely.§ CONCLUSIONThe semiclassical treatment complements the quantum approach in Ref. <cit.> by offering a shortcut to the Poisson photon-counting model for incoherent sources, passive linear optics, and photon counting. Besides pedagogy, this work generalizes the results in Refs. <cit.> for more general objects and PSFs in the context of moment estimation, demonstrating that the giant enhancements by SPADE are not limited to the case of two point sources or Gaussian PSF considered in prior works.Many open problems remain, such as extensions for more general PSFs, more complex objects, and three-dimensional imaging, the effect of excess statistical and systematic errors, such as dark counts, aberrations, turbulence, and nonparaxial effects <cit.>, the application of more advanced Bayesian or minimax statistics <cit.>, and the quantum optimality of the measurements <cit.>. Experimental implementation is another important future direction.For proof-of-concept demonstrations, it should be possible to use the same setups described in Refs. <cit.> to estimate at least the second moments of more general objects.For practical applications in astronomy and fluorescence microscopy, efficient demultiplexing for broadband sources is needed.The technical challenge is by no means trivial, but the experimental progress on spatial-mode demultiplexers has been encouraging <cit.>, and the promise of giant imaging enhancements using simply far-field linear optics should motivate further efforts. 
§ ACKNOWLEDGMENTSThis work is supported by the Singapore Ministry of Education Academic Research Fund Tier 1 Project R-263-000-C06-112.§ MULTI-INDEX NOTATIONA D-dimensional vector of continuous variables is written asx= (x_1,x_2,…,x_D) ∈ℝ^D.For such a vector, the following notations are assumed:dx≡∏_j=1^D dx_j,∫ dx≡∫_ℝ^D dx,δ(x-x')≡∏_j=1^D δ(x_j-x_j'),∂_x≡x_1,…,x_D, k· x≡∑_j=1^D k_jx_j, ||x||^2≡ x· x.If the subscript is omitted in ∂, derivatives with respect to x are assumed.A vector of integer indices, on the other hand, is defined asμ = (μ_1,μ_2,…,μ_D) ∈ℕ_0^D.For such a vector, the following notations are assumed:0≡0,…,0, |μ|≡∑_j=1^D |μ_j|,∑_μ ≡∑_μ∈ℕ_0^D,∑_μ=ν^ξ ≡∑_μ=ν_1^ξ_1…∑_μ=ν_D^ξ_D,μ!≡∏_j=1^D μ_j!.[ μ; ν ] ≡μ!/(μ-ν)!ν!.Note that the one-norm is assumed for index vectors.Other useful notations includex^μ ≡∏_j=1^D x_j^μ_j,∂_x^μ ≡∏_j=1^D ^μ_jx_j^μ_j.§ CRB FOR DIRECT IMAGINGIt is useful to define a Hilbert spaceℋ ≡b_μ(x); μ∈ℕ_0^Dwith respect tob_μ(x)≡(-∂)^μ |ψ(x)|^2/μ!f̃(x|θ),f̃(x|θ)≡f(x|θ)/θ_0,and the weighted inner productu,v ≡∫ dx f̃(x|θ) u(x)v(x),whereis the closed linear span inside the L^2(f̃) space <cit.> and f̃(x|θ) is the normalized image.In other words, any function in ℋ can be expressed as a linear combination of {b_μ(x)}. Equation (<ref>) becomesJ_μν = τ/θ_0b_μ,b_ν.This can be inverted with the help of orthogonal polynomials.Definea≡a_μ(x); μ∈ℕ_0^D,where a_μ(x) is a real polynomial with degree |μ| and the orthonormal condition isa_μ,a_ν = δ_μν.For orthogonal polynomials to exist, the moment matrix M given by Eq. (<ref>) should be positive-definite <cit.>, or equivalently∫ dx f̃(x|θ) 𝒫^2(x) > 0 for any polynomial 𝒫. The strict positiveness can be satisfied as long as the support of f̃(x|θ) is an infinite set, as 𝒫^2(x) has a finite number of zeros only.The orthogonal polynomials can be computed by applying the Gram-Schmidt procedure to the set of monomials {x^μ; μ∈ℕ_0^D} if the set is totally ordered <cit.>.For D = 1, the natural order {1,x,x^2,…} leads to a unique set of orthogonal polynomials for a given weight function. For D ≥ 2, however, the situation is more complicated. A useful requirement is that the order should respect the degree in the sense ofν≥μ⇒ |ν| ≥ |μ|.An example is the graded lexicographical order, defined byν> μ ⇔ |ν| > |μ|, or if|ν| = |μ|,the first nonzero ν_j-μ_j > 0.For D = 2 for example, the order is(0,0)<(0, 1)< (1,0) <(0,2)< (1,1) < (2,0) < …(0, |μ|)< (1,|μ|-1) < … < (|μ|, 0) < …,but one should see in this example that indices with the same total degree |μ| may be ordered in other ways and there is no single compelling choice; a different choice will lead to a different set of orthogonal polynomials. In the following I assume simply that a degree-respecting order has been chosen; the analysis is valid regardless of the choice.Express each polynomial asa_μ(x)= ∑_ν A_μν x^ν,where A is a matrix that satisfies the lower-triangular propertyA_μν = 0 if ν > μ.Combining Eqs. (<ref>), (<ref>), and (<ref>), I obtain∑_ξ,ζ A_μξM_ξζA_νζ = δ_μν.Given a total order of the indices, the matrices can be rasterized into two-dimensional matrices. Equation (<ref>) can then be written more compactly asA M A^⊤ = I,where ⊤ denotes the matrix transpose and I is the identity matrix. 
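For the single-variable case, this step is easy to reproduce numerically. A minimal numpy sketch (the function name is mine; it anticipates the Cholesky construction described next):

import numpy as np

def orthonormal_coefficients(M):
    # Factor the positive-definite moment matrix as M = L L^T with L
    # lower-triangular (Cholesky), and set A = L^{-1}; A is then
    # lower-triangular and satisfies A M A^T = I, so its rows hold the
    # coefficients of orthonormal polynomials in the monomials 1, x, x^2, ...
    L = np.linalg.cholesky(M)
    A = np.linalg.inv(L)
    assert np.allclose(A @ M @ A.T, np.eye(len(M)))
    return A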
As M is positive-definite, A can be obtained from the Cholesky decompositionM = L L^⊤,where L is a real lower-triangular matrix with positive diagonal elements <cit.>.Since the diagonal elements of a triangular matrix are also its eigenvalues, L is invertible, L^-1 is also lower-triangular, and settingA = L^-1leads toM= (A^-1)(A^-1)^⊤,which satisfies Eq. (<ref>). To invert Eq. (<ref>), I also need to prove that a is an orthonormal basis in ℋ.The orthonormality given by Eq. (<ref>) is satisfied by definition, while the completeness follows from the fact that the only function u(x) = ∑_νλ_ν b_ν(x) in ℋ that is orthogonal to a in the sense ofa_μ,u = ∑_νa_μ,b_νλ_ν= 0,μ ∈ℕ_0^D,is the zero function, provided that B_μν ≡a_μ,b_ν = 1/ν!∫ dx a_μ(x)(-∂)^ν |ψ(x)|^2is an invertible matrix. To prove so, apply integration by parts to Eq. (<ref>) to obtainB_μν =1/ν!∫ dx|ψ(x)|^2 ∂^ν a_μ(x) = ∑_ξ A_μξ C_ξν, B= A C,where C is defined by Eq. (<ref>). Since A is invertible, it suffices to prove that C is also invertible.Consider the term ∂^ν x^μ in C_μν. ν > μ in a degree-respecting order implies |ν| > |μ|, or |ν| = |μ| and ν≠μ.In either case, there exists at least one ν_j > μ_j that makes ∂^ν x^μ vanish, resulting inC_μν = 0 if ν > μ,meaning that C is lower-triangular. The eigenvalues of C are then the diagonal elements and given byC_μμ = 1/μ!∫ dx |ψ(x)|^2 ∂^μ x^μ= ∫ dx |ψ(x)|^2 = 1.Hence C is invertible. Since both A and C are lower-triangular and invertible, B = AC is also lower-triangular and invertible, andB^-1 = C^-1A^-1is lower-triangular as well.I can now use the a basis to express Eq. (<ref>) asJ_μν = τ/θ_0∑_ξb_μ,a_ξa_ξ,b_ν = τ/θ_0∑_ξ B_ξν B_ξν.In matrix form,J= τ/θ_0 B^⊤ B,and the CRB becomesCRB = J^-1 = θ_0/τ B^-1 (B^-1)^⊤= θ_0/τ C^-1 M (C^-1)^⊤,where I have applied Eqs. (<ref>) and (<ref>).§ AN UNBIASED AND EFFICIENT ESTIMATOR FOR DIRECT IMAGINGLet {n(𝒮); 𝒮⊆ℝ^D} be the Poisson process <cit.> obtained by direct imaging with infinitesimal pixel size. The expected value of n over an area 𝒮 isn(𝒮) = τ∫_𝒮 dx f(x|θ),and {n(𝒮_1), n(𝒮_2), …} are independent Poisson variables if {𝒮_1,𝒮_2,…} are disjoint subsets. Consider the estimatorθ̌_μ = 1/τ∑_ν (C^-1)_μν∫ n(dx) x^ν.Its expected value isθ̌_μ = ∑_ν (C^-1)_μν∫dx f(x|θ) x^ν= ∑_ν (C^-1)_μν∫ dx ∑_ξθ_ξ/ξ! (-∂)^ξ |ψ(x)|^2 x^ν=∑_ν (C^-1)_μν∑_ξθ_ξ/ξ!∫ dx|ψ(x)|^2 ∂^ξ x^ν= ∑_ν,ξ (C^-1)_μν C_νξθ_ξ = θ_μ,where I have applied Eqs. (<ref>) and (<ref>).Its covariance, on the other hand, is_μνθ̌ =1/τ∑_ξ,η (C^-1)_μξ (C^-1)_νη∫dx f(x|θ) x^ξ+η= θ_0/τ C^-1 M (C^-1)^⊤,which coincides with the CRB given by Eq. (<ref>). The estimator is hence unbiased and efficient. § CRB FOR DIRECT IMAGING IN THEDIFFRACTION-UNLIMITED REGIMESuppose that the PSF |ψ(x)|^2 = δ(x) is infinitely sharp and f(x|θ) = F(x|θ).The image moments given by Eq. (<ref>) become identical to those of the object, viz.,M_μν = θ_μ+ν/θ_0,the C matrix given by Eq. (<ref>) becomesC_μν = 1/ν!∫ dx δ(x)∂^ν x^μ = δ_μν,and the CRB given by Eq. (<ref>) becomesCRB_μν = θ_μ+ν/τ.This represents an ideal scenario where the imaging is limited only by shot noise and not by diffraction. Equation (<ref>) also serves as a general lower bound on the CRB given by Eq. (<ref>) for any linear-optical processing, as Eq. (<ref>) is a Markov chain on F(X|θ) and the data-processing inequality <cit.> can be invoked.To verify Eq. 
(<ref>), suppose that F consists of isolated point sources, viz.,F(X|θ) = ∑_σϑ_σδ(X-X_σ),and since |ψ(x)|^2 = δ(x), their positions can be perfectly resolved.The unknowns are then ϑ, and the CRB with respect to ϑ isJ_σγ^(ϑ) = τ/ϑ_σδ_σγ,CRB_σγ^(ϑ) =ϑ_σ/τδ_σγ.Expressing the moments asθ_μ = ∑_σϑ_σ X_σ^μ,I can compute the CRB with respect to the momentsvia the transformationCRB_μν = ∑_σ,γθ_μϑ_σCRB_σγ^(ϑ)θ_νϑ_γ = θ_μ+ν/τ,which coincides with Eq. (<ref>).§PROPERTIES OF MATRICES IN SEC. <REF>Equation (<ref>) can be inverted to givek^r= ∑_s (G^-1)_rs g_s(k).Substituting this in Eq. (<ref>) and using the orthonormality given by Eq. (<ref>), I obtainH_qr = i^|q|(-i)^|r|/r!∑_s (G^-1)_rs ∫ dk |Ψ(k)|^2 g_q(k) g_s(k) = i^|q|(-i)^|r|/r!(G^-1)_rq.The inverse is given by Eq. (<ref>), which can be confirmed by directly computing HH^-1 or H^-1H.Since G^-1 and G are lower-triangular, H and H^-1 are upper-triangular.If |Ψ(k)|^2 is centrosymmetric according to Eq. (<ref>), Ref. <cit.> shows that g_q(k) consists of only even-order monomials {k^r; |r|even} if |q| is even and only odd-order monomials {k^r; |r|odd} if |q| is odd. ThusG_qr = 0if |q|-|r| is odd, g_q(k)= (-1)^|q| g_q(-k).Substituting k with -k in the integral in Eq. (<ref>) yieldsh_q(X) = i^|q|∫ dk |Ψ(-k)|^2 g_q(-k)exp(ik· X) = (-i)^|q|∫ dk |Ψ(k)|^2 g_q(k)exp(ik· X) = h_q^*(X),and h_q(X) is real. It follows that H and H^-1 are real as well.§ CRB FOR DIRECT IMAGING WITH THE GAUSSIAN PSFIn the limit of Δ→ 0,f̃(x|θ)= |ψ(x)|^2 = 1/(2π)^d/2exp-||x||^2/2.A set of orthogonal polynomials area_μ(x)= 1/√(μ!)_μ(x),where_μ(x)≡∏_j=1^D _μ_j(x_j),and the definition of the single-variable Hermite polynomials can be found, for example, in Refs. <cit.>.The B matrix defined by Eq. (<ref>) can then be computed by substituting the identity(-∂)^ν |ψ(x)|^2= |ψ(x)|^2_ν(x)for Hermite polynomials <cit.> and using the orthonormality of a. The result isB_μν = 1/√(μ!)δ_μν,which can be substituted into Eq. (<ref>) to give Eq. (<ref>).§ AN ESTIMATOR FOR SPADE WITH NON-SEPARABLE PSFSThe simple estimator given by Eq. (<ref>) relies on the strong upper-triangular property of H given by Eq. (<ref>) for separable PSFs.Without it, the weaker property given by Eq. (<ref>) for a degree-respecting order still implies that the ∑_r sum in Eq. (<ref>) can be separated into a |r| = |q| group and and a |r| > |q| group, viz.,∑_r = ∑_|r| = |q| + ∑_|r| > |q|,and Eq. (<ref>) becomesθ_q+q' = ∑_|r|=|q|,|r'|=|q'|(H^-1)_qr(H^-1)_q'r'Γ_rr' + ∑_|r+r'|>|q+q'|(H^-1)_qr(H^-1)_q'r'Γ_rr'.If I assume the estimatorθ̌_q+q''=∑_|r|=|q|,|r'|=|q'|(H^-1)_qr(H^-1)_q'r'Γ̌_rr',the bias is also given by Eq. (<ref>), while the variance isθ̌_q+q'' =∑_|r|=|q|,|r'|=|q'|(H^-1)_qr^2(H^-1)_q'r'^2 ×Γ̌_rr'= θ_0^2/min(N_s)O(Δ^|q+q'|),which can still be minimized by choosing q and q' according to Eq. (<ref>).A problem with Eq. (<ref>) is that, for a given |q| and |q'|, the number of (r,r') indices with |r| = |q| and |r'| = |q'| is[ |q|+D - 1; |q| ]×[ |q'|+D - 1; |q'| ],so the estimator may require a large number of Γ̌_rr''s and a large number of bases to implement for a high-order moment, leading to a reduction in min(N_s).This difficulty is compounded by the fact that, for D ≥ 2, there exist infinitely many sets of orthogonal polynomials for a given weight function, as pointed out in Appendix <ref>, leading to infinite possible choices of the g polynomials and the PAD basis. For separable PSFs, the choice of the separable PAD basis in Sec. <ref> fortunately leads to only one term in Eq. 
(<ref>), but it remains an open question whether Eq. (<ref>) can be further simplified via a more specific choice of the PAD basis for non-separable PSFs.§ CONDITIONS FOR FINITE IMAGE MOMENTSGiven Eqs. (<ref>) and (<ref>), M is finite if all the PSF moments {Λ_μ; μ∈ℕ_0^D} are finite. ConsiderΛ_μ = ∫ dk Ψ^*(k)(i∂_k)^μΨ(k)in terms of the Fourier transform given by Eq. (<ref>).A sufficient condition for Λ to be finite is that Ψ(k) is infinitely differentiable and has compact support; an example is the bump function given by Eq. (<ref>).If any Λ_μ is infinite, the C matrix given by Eq. (<ref>) and the CRB given by Eq. (<ref>) also have infinite elements, and the direct-imaging theory in Sec. <ref> and Appendix <ref> breaks down. This happens for the rectangle aperture function given by Eq. (<ref>).A solution, not explored in this work, may be to smooth Ψ(k) by convolving it with a bump function with support width w, such that the smoothed Ψ(k) becomes infinitely differentiable but remains compactly supported. When w ≪ 1, the result should offer a good approximation of that for the original Ψ(k). | http://arxiv.org/abs/1703.08833v5 | {
"authors": [
"Mankei Tsang"
],
"categories": [
"physics.optics"
],
"primary_category": "physics.optics",
"published": "20170326160439",
"title": "Subdiffraction incoherent optical imaging via spatial-mode demultiplexing: semiclassical treatment"
} |
Lehrstuhl für Festkörperphysik, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Staudtstr. 7, 91058 Erlangen, GermanyLehrstuhl für Angewandte Physik, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Staudtstr. 7, 91058 Erlangen, GermanyInstitute of Materials for Electronics and Energy Technology (I-MEET), Department of Materials Science and Engineering, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Martensstrasse 7, 91058 Erlangen, GermanyInstitute of Materials for Electronics and Energy Technology (I-MEET), Department of Materials Science and Engineering, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Martensstrasse 7, 91058 Erlangen, Germany Institute of Materials for Electronics and Energy Technology (I-MEET), Department of Materials Science and Engineering, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Martensstrasse 7, 91058 Erlangen, GermanyInstitute of Materials for Electronics and Energy Technology (I-MEET), Department of Materials Science and Engineering, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Martensstrasse 7, 91058 Erlangen, GermanyInstitute of Materials for Electronics and Energy Technology (I-MEET), Department of Materials Science and Engineering, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Martensstrasse 7, 91058 Erlangen, GermanyInstitute of Materials for Electronics and Energy Technology (I-MEET), Department of Materials Science and Engineering, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Martensstrasse 7, 91058 Erlangen, Germany Bavarian Center for Applied Energy Research (ZAE Bayern), Haberstrasse 2a, 91058 Erlangen, GermanyLehrstuhl für Angewandte Physik, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Staudtstr. 7, 91058 Erlangen, GermanyLehrstuhl für Festkörperphysik, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Staudtstr. 7, 91058 Erlangen, GermanyLong carrier lifetimes and diffusion lengths form the basis for the successful application of the organic-inorganic perovskite (CH_3NH_3)PbI_3 in solar cells and lasers. The mechanism behind the long carrier lifetimes is still not completely understood. Spin-split bands and a resulting indirect band gap have been proposed by theory. Using near band-gap left-handed and right-handed circularly polarized light we induce photocurrents of opposite directions in a single-crystal (CH_3NH_3)PbI_3 device at low temperature (4 K). The phenomenom is known as the circular photogalvanic effect and gives direct evidence forphototransport in spin-split bands. Simultaneous photoluminecence measurements show that the onset of the photocurrent is below the optical band gap.The results prove that an indirect band gap exists in (CH_3NH_3)PbI_3 with broken inversion symmetry as a result of spin-splittings in the band structure. This information is essential for understanding the photophysical properties of organic-inorganic perovskites and finding lead-free alternatives. Furthermore, the optically driven spin currents in (CH_3NH_3)PbI_3 make it a candidate material for spintronics applications.Spin-split bands cause the indirect band gap of (CH_3NH_3)PbI_3: Experimental evidence from circular photogalvanic effect Thomas Fauster December 30, 2023 =========================================================================================================================Organic-inorganic perovskite semiconductors (OIPS) show remarkable potential for applications in highly efficient thin-film solar cells <cit.> and nanolasers <cit.>. 
Unusually long carrier lifetimes <cit.> and diffusion lengths <cit.> form the basis of their exceptional performance in optoelectronic devices. Strong spin-orbit coupling due to the constituting heavy elements <cit.> and a resulting slightly indirect band gap have been proposed as origin of the observed long carrier lifetimes <cit.>. The direct-indirect character of the band gap of (CH_3NH_3)PbI_3 was recently evidenced experimentally <cit.>. Direct experimental evidence for spin-orbit coupling as the origin of the indirect band gap, however, is still missing to the best of our knowledge.To gain insight into the mechanism giving rise to the indirect gap of (CH_3NH_3)PbI_3, we excite photocurrents with left-handed and right-handed circularly polarized light as illustrated in Fig. <ref> (a). In the absence of spin-orbit coupling the direction of the excited photocurrent does not depend on the helicity of the incoming light. The spin structure of the electronic band structure causes differences in the optical transition matrix elements as illustrated in Fig. <ref> (b). For opposite helicity of the light the group velocity of carriers is reversed and spin-polarized currents of opposite direction are induced <cit.>. They enhance or reduce the overall photocurrent, respectively. The effect is known as the circular photogalvanic effect. It has been observed experimentally in GaAs/AlGaAs quantum well structures <cit.>, in wurtzite semiconductors such as ZnO <cit.> and GaN <cit.>, in transition-metal dichalcogenides <cit.>, and in the topological insulator Bi_2Se_3 <cit.>. A circular photogalvanic effect of measurable magnitude has been predicted <cit.> for (CH_3NH_3)PbI_3. Previously, circular dichroism has been found in optical <cit.> and electron spectroscopy <cit.> experiments on OIPS. A circular photogalvanic effect is hence expected if coherent spin transport takes place on a length scale large enough for spin-polarized currents to be driven through a device.Results of polarization-dependent photocurrent measurements performed on single-crystal (CH_3NH_3)PbI_3are shown in Fig. <ref> (a) for different excitation photon energies. The sample temperature is 4 K. To control the polarization of the incident light a zero-order λ/4 plate is introduced in the excitation pathway. The polarizing waveplate is rotated while the photocurrent is measured. The angle of the λ/4 plate is given along the horizontal axis in Fig. <ref> (a). At all wavelengths, a variation of the photocurrent is observed due to changing contributions of p and s-polarized light fields to the excitation. This signal has a periodicity of 90^∘ in the angle of the waveplate. At an angle of n · 90^∘ (n ∈ℕ_0) the light is p-polarized and the photocurrent has a minimum. Local maxima occur at45^∘ (135^∘), where the light is circularly polarized and the component of s-polarized light is the largest. The variations may result from differences in reflectivity at the surface of the OIPS due to changing contributions of p and s-polarized light, from anisotropies in absorption along the crystalline directions associated with s and p polarization, and from a linear photogalvanic effect <cit.>. Since differences in the photocurrents excited with s-polarized light and circularly polarized light with p-components can occur in any material and their interpretation is complex, they will not be in the focus of our discussion. 
However, it is worth noting that a linear photogalvanic effect necessarily goes hand in hand with the circular photogalvanic effect <cit.>.An additional modulation of the photocurrent induced by the light polarization is clearly observed upon excitation at 1.55 eV and 1.61 eV photon energy. This signal has a periodicity of 180^∘ in the angle of the waveplate, resulting in different photocurrents at 45^∘ + n · 180^∘and 135^∘ + n · 180^∘. As these angles correspond to left-handedand right-handed circularly polarized light, the differences represent the circular photogalvanic effect. To extract the contribution of the circular photogalvanic effect to the photocurrent, we fit the data with a sum of two cosine functions. The two components are shown individually in Fig. <ref> (b). The effect of linear polarization is given by the blue curve. The signal arising from the circular photogalvanic effect, indicated by the red curve, is phase-shifted by 90^∘. Its contribution to the photocurrent vanishes whenever the light is linearly polarized (n · 90^∘). For left-handed (45^∘) andright-handed (135^∘) circular polarization, in contrast, it switches sign. The reversal of the photocurrents as the helicity of the excitation light is switched is characteristic for materials with spin-split band structures <cit.>. It implies that spin currents are driven by photoexcitation with circularly polarized light, as indicated in Fig. <ref> (b). Light of different helicity couples to opposite branches of the spin-split band structure in k-space. Since the opposite branches do not only carry electrons of opposite spin orientation, but also of reversed group velocity dE/dk, spin-polarized currents are induced along opposite directions. We observe a modification of the overall photocurrent by±1.5%. The amplitude of the circular photogalvanic effect relative to the average photocurrent is given by red symbols in Fig. <ref> (a). The photocurrent (normalized to electrons/photon) is shown as black dots connected by lines to guide the eye in Fig. <ref>(a). The onset of the photocurrent is well described by a fourth-power dependence on energy starting at 1.56±0.01 eV. The large exponent can be understood as the result of an indirect band gap in combination with a low density of states of OIPS at the band edges <cit.>. A small current flows for photon energies below the onset because of the applied bias voltage. Note that the circular photogalvanic effect sets in right at the onset of the photocurrent. The photocurrent reaches its maximum at 1.62 eV photon energy which may be taken as an estimate for the direct band gap. This results in a difference between the direct and indirect gap of 60±15 meV in agreement with literature <cit.>. The circular photogalvanic effect proves that the spin-orbit coupling of the Rashba effect is responsible for the indirect band gap in (CH_3NH_3)PbI_3.In order to corroborate the assignment of the direct band gap we performed in situ photoluminescence (PL) measurements on the sample at 4 K. A comparison of photocurrent excitation and PL spectra is given in Fig. <ref>. The low-temperature PL spectrum of (CH_3NH_3)PbI_3 consists of a high-energy emission feature at 1.64±0.01 eV, a second peak at 1.61±0.01 eV, and broad low-energy continuum emission with a maximum at 1.56±0.01 eV, in agreement with previous reports <cit.>. The positions of the maxima are indicated by magenta tick marks in Fig. <ref>(b). 
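The two-component fit described above can be sketched in a few lines. The parametrization below (cos 4θ for the 90-degree-periodic linear-polarization term, sin 2θ for the 180-degree-periodic helicity term) is an assumption consistent with the stated periods and the 90-degree phase shift, and the variable names are illustrative:

import numpy as np
from scipy.optimize import curve_fit

def model(theta_deg, i0, a_lin, a_circ):
    th = np.deg2rad(theta_deg)
    # A 90-degree-periodic term from the varying linear-polarization content,
    # plus a 180-degree-periodic term that vanishes for linearly polarized
    # light (multiples of 90 degrees) and changes sign between 45 and 135
    # degrees, i.e., between left- and right-circular polarization.
    return i0 + a_lin * np.cos(4 * th) + a_circ * np.sin(2 * th)

theta = np.arange(0.0, 360.0, 5.0)           # waveplate angles in degrees
current = model(theta, 1.0, -0.2, 0.03)      # synthetic stand-in for the data
current += 0.005 * np.random.default_rng(0).normal(size=theta.size)

popt, pcov = curve_fit(model, theta, current, p0=[1.0, -0.1, 0.0])
i0, a_lin, a_circ = popt   # a_circ / i0 gives the relative circular amplitude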
Following the literature <cit.>, we assign the highest-energy peak at 1.64 eV to the optical band gap of orthorhombic (CH_3NH_3)PbI_3. The value matches the optical band gap found from magneto-absorption measurements <cit.>. Emission and absorption at lower photon energies have been attributed to excitons localized at defects <cit.> and to coexisting structural phases <cit.>. The assignment of optical transitions to bound or free excitons is difficult based on optical spectroscopy alone. Connecting the PL spectroscopy to the transport measurements at low temperature, we find that currents are generated with photon energies as low as 1.56 eV, demonstrating that free excitons are excited at these photon energies. We attribute the photocurrent to tetragonal <cit.> and low-symmetry orthorhombic <cit.> domains coexisting with the low-temperature, inversion-symmetric orthorhombic phase. The photocurrent has a maximum at 1.62 eV which coincides with the second PL emission peak, indicating an allowed optical transition. We assign the maximum to the direct gap of the tetragonal and low-symmetry orthorhombic domains, in good agreement with the value of 1.61 eV found from magneto-absorption on tetragonal (CH_3NH_3)PbI_3 <cit.>. The photocurrent drops by 40% as the photon energy exceeds the optical gap of 1.64 eV of orthorhombic (CH_3NH_3)PbI_3. A smaller photocurrent in orthorhombic (CH_3NH_3)PbI_3 than in the tetragonal phase has been reported before and assigned to less efficient generation of free excitons <cit.>. Note that the amplitude of the circular photogalvanic effect also drops in the energy range when the inversion symmetric low-temperature orthorhombic phase contributes to the photocurrent. The observed circular photogalvanic effect unambiguously identifies spin splittings in the band structure as the origin of the indirect gap. A slightly indirect band gap by 47 meV was reported for tetragonal (CH_3NH_3)PbI_3 resulting in an enhanced lifetime of photoexcited carriers as compared to the direct band-gap orthorhombic phase <cit.>. The lifetime enhancement was found to be absent in the low-temperature orthorhombic phase.While calculations point towards Rashba and Dresselhaus type spin splittings as the origin of the slightly indirect gap <cit.>, experimental evidence for this interpretation is, to the best of our knowledge, lacking.Requirements for spin splittings are spin-orbit coupling and absence of inversion symmetry. It is worth noting that Rashba and Dresselhaus spin splittings are caused by the local environment of the atoms in the unit cell rather than by the average, long-range symmetry of the crystal <cit.>. A Rashba-type spin-split band structure was found in the valence band at the surface of related (CH_3NH_3)PbBr_3 perovskite using angle-resolved photoelectron spectroscopy <cit.>.Surfaces break inversion symmetry inherently and enhance Rashba splitting. Observation of the circular photogalvanic effect demonstrates that spin-splittings occur in the bulk of (CH_3NH_3)PbI_3 on a length scale relevant for carrier transport. We find a stronger effect at low photon energies than for higher ones. The corresponding transitions can be assigned to tetragonal and low-symmetry orthorhombic domains (<1.64 eV) and to the inversion symmetric low-temperature orthorhombic phase (≥ 1.64 eV), respectively. For the latter, a prominent photogalvanic effect is not expected. The former, in contrast, has a locally broken inversion symmetry at all temperatures <cit.>. 
The spin splittings in the band structure observed here at 4 K are hence expected to persist for temperatures up to room temperature, as implied also by the strong circular dichroism found in optical spectroscopy at room temperature <cit.>. The observed difference between the optical and the transport gap of 60±15 meV of tetragonal (CH_3NH_3)PbI_3 is large enough to pose an energetic barrier for electron-hole-pair recombination even at room temperature. Activation energies of 75 meV <cit.> and 47 meV <cit.> for radiative recombination were reported previously, in good agreement with our results. Calculations find a value of 75 meV as a result of spin-splittings in the band structure of (CH_3NH_3)PbI_3 <cit.>. They predict an increasing splitting with increasing temperature, in agreement with optical spectroscopic results obtained from related (CH_3NH_3)PbBr_3 single crystals <cit.>.Our results clarify the mechanism behind the indirect character of the band gap of (CH_3NH_3)PbI_3. This helps to understand the excellent performance of OIPS in optoelectronic devices and provides a design rule for less toxic alternatives to (CH_3NH_3)PbI_3. Lifetime enhancements by a factor of 10 to 350 have been predicted as the result of Rashba-type spin splitting restricting optical transitions <cit.>, making it an essential ingredient to the observed long carrier diffusion lengths. <cit.> Spin splittings of this magnitude do not only enhance carrier lifetimes. They also allow to optically drive spin currents <cit.> in the system.The spin splitting of 60± 15 meV is similar to the strongest ones found in known bulk Rashba systems, such as(X = Cl <cit.>, Br <cit.>, I <cit.>) and GeTe(111) <cit.>. In contrast to these materials, (CH_3NH_3)PbI_3 has a band gap in the near-infrared range making it a candidate material for opto-spintronics applications <cit.>. Additional applications become possible if structures with a switchable ferroelectric polarization can be grown <cit.>, as they have been found at the surface of (CH_3NH_3)PbI_3 <cit.>, where the Rashba splitting in organic-inorganic perovskite is further enhanced <cit.>.We find a measurable circular photogalvanic effect of ±1.5% in rather large devices with a channel width of 1 mm. Significant spin currents can be expected when device dimensions are reduced to the spin transport length. From magneto-transport and magneto-optical experiments, a spin-lattice relaxation time τ of 200 ps was estimated for spin-cast (CH_3NH_3)PbI_3 thin films <cit.>. Carrier diffusion coefficients in (CH_3NH_3)PbI_3 thin films are around D=0.05 cm^2s^-1 <cit.>, translating into a spin diffusion length l=√(D ·τ)=30 nm. The carrier diffusion constant in single crystals is larger than in thin films by a factor of ≈ 20 <cit.>, and also the spin relaxation time can be expected to be enhanced. Spin diffusion lengths of hundreds of nanometers may hence be achieved in OIPC single crystals. § ACKNOWLEDGEMENTSI. L., A. O., S. S., M. B., and C. J. B. gratefully acknowledge financial support from the Soltech Initiative, the Excellence Cluster "Engineering of Advanced Materials" (EAM) granted to the University Erlangen-Nuremberg, and from the Energiecampus Nürnberg. Funding from the Emerging Fields initiative "Singlet Fission" supported by Friedrich-Alexander-Universität Erlangen-Nürnberg is gratefully acknowledged by D. N., M. W., and T. F.§ METHODS*Device fabrication. (CH_3NH_3)PbI_3 single crystals were grown by the seed-solution growth method following the procedure described in Ref. 
<cit.>. Crystals were prepared and kept under N_2 atmosphere before they got contacted with 40 nm of gold at a spacing of 1 mm. Contacts are aligned with the macroscopic facets of the crystals. The crystals are mounted under ambient conditions (exposure for ≈ 2 h) in a vacuum cryostat. Immediately after pumping the cryostat, crystals are cooled to 4 K. Keeping the crystals in vacuum at room temperature for extended periods of time (>12 h) results in changes in the spectra, whereas no changes are observed for the cooled crystals under prolonged (4 h) illumination with laser light, see Fig. S1 in Supplementary Information. We demonstrated that methylamine desorbs from (CH_3NH_3)PbBr_3 in vacuum just above room temperature <cit.> and speculatively assign the degradation of (CH_3NH_3)PbI_3 to the same mechanism. *Photocurrent and photoluminescence measurements. Photocurrents are excited with a Ti:Sa laser in cw mode. Laser powers are around 3 mW with a Gaussian spot radius of 0.3 mm. The voltage applied to the device is swept between -0.5 V and +0.5 V to avoid slow changes <cit.> in the structure of (CH_3NH_3)PbI_3. No hysteresis is observed in the I(V) sweeps, see Fig. S2 in Supplementary Information. The data are shown for a small bias voltage (0.25 V). Results presented here are independent of the applied bias voltage for 0<|V|<0.5 V except for the amplitude of the measured photocurrents. Photocurrents are normalized to the number of incident photons to account for variations in the laser power at different photon energies. The photocurrent increases linearly with excitation density as shown in Fig. S3 in Supplementary Information. For photoluminescence experiments the channel of the (CH_3NH_3)PbI_3 device was optically excited with a 532 nm cw laser between photocurrent measurements. The PL spectra were recorded using an Ocean Optics HR 4000 spectrometer. A dielectric long-pass filter (550 nm) was used to suppress scattered light from the illuminated surface from which the PL was collected. | http://arxiv.org/abs/1703.08740v1 | {
"authors": [
"Daniel Niesner",
"Martin Hauck",
"Shreetu Shrestha",
"Ievgen Levchuk",
"Gebhard J. Matt",
"Andres Osvet",
"Miroslaw Batentschuk",
"Christoph Brabec",
"Heiko B. Weber",
"Thomas Fauster"
],
"categories": [
"cond-mat.mtrl-sci"
],
"primary_category": "cond-mat.mtrl-sci",
"published": "20170325205428",
"title": "Spin-split bands cause the indirect band gap of (CH$_3$NH$_3$)PbI$_3$: Experimental evidence from circular photogalvanic effect"
} |
\newtheorem{thm}{Theorem}[section]
\newtheorem{conj}[thm]{Conjecture}
\newtheorem{exam}[thm]{Example}
\newtheorem{remark}[thm]{Remark}
\newtheorem{definition}[thm]{Definition}
\newtheorem{cor}[thm]{Corollary}
\newtheorem{prop}[thm]{Proposition}
\newtheorem{assm}[thm]{Assumption}
\newtheorem{lemma}[thm]{Lemma} | http://arxiv.org/abs/1703.08954v3 | {
"authors": [
"Krzysztof Bartoszek"
],
"categories": [
"q-bio.PE",
"math.PR",
"stat.AP",
"05C80, 60F15, 60J85, 62M02, 62P10, 92-08, 92B10, 92D15"
],
"primary_category": "q-bio.PE",
"published": "20170327071105",
"title": "Exact and approximate limit behaviour of the Yule tree's cophenetic index"
} |
A Framework for Assessing Achievability of Data-Quality ConstraintsA Framework for Assessing Achievability of Data-Quality Constraints ^1 Computer Science Department, North Carolina State UniversityNorth Carolina, USA^2 Pontificia Universidad Católica de ChileLecture Notes in Computer Science Authors' Instructions A Framework for Assessing Achievability Of Data-Quality Constraints Rada Chirkova^1Jon Doyle^1Juan L. Reutter^2 December 30, 2023 =====================================================================Assessing and improving the quality of data are fundamental challenges for data-intensive systems that have given rise to numerous applications targeting transformation and cleaning of data. However, while schema design, data cleaning, and data migration are nowadays reasonably well understood in isolation, not much attention has been given to the interplay between the tools addressing issues in these areas. We focus on the problem of determining whether the available data-processing procedures can be used together to bring about the desired quality characteristics of the given data. For an illustration, consider an organization that is introducing new data-analysis tasks. Depending on the tasks, it may be a priority for the organization to determine whether its data can be processed and transformed using the available data-processing tools to satisfy certain properties or quality assurances needed for the success of the task. Here, while the organization may control some of its tools, some other tools may be external or proprietary, with only basic information available on how they process data. The problem is then, how to decide which tools to apply, and in which order, to make the data ready for the new tasks? Toward addressing this problem, we develop a new framework that abstracts data-processing tools as black-box procedures with only some of the properties exposed, such as the applicability requirements, the parts of the data that the procedure modifies, and the conditions that the data satisfy once the procedure has been applied. We show how common database tasks such as data cleaning and data migration are encapsulated into our framework and, as a proof of concept, we study basic properties of the framework for the case of procedures described by standard relational constraints. We show that, while reasoning in this framework may be computationally infeasible in general, there exist well-behaved special cases with potential practical applications. § INTRODUCTION A common approach to ascertaining and improving the quality of datais to develop procedures and workflows for repairing or improving data setswith respect to quality constraints.The community has identified a wide range of data-managementproblems that are vital in this respect, leading to the creation of several lines of studies, whichhave normally been followed by the development of toolboxes ofapplications that practitioners can use to solve their problems. 
This has been the case, for example, for the Extract-Transform-Load (ETL) <cit.> process in business applications, or for the development of automatic tools to reason about the completeness or cleanliness of the data <cit.>. As a result, organizations facing data-improvement problems now have access to a variety of data-management tools to choose from; the tools can be assembled to construct so-called workflows of data operations. However, in contrast with the considerable body of research on particular data operations, or even entire business workflows (see, e.g., <cit.>), previous research appears to have not focused explicitly either on the assembly process itself or on providing guarantees that the desired data-quality constraints will be satisfied once the assembled workflow of procedures has been applied to the available data. We investigate the problem of constructing workflows from already available procedures. That is, we consider a scenario in which an organization needs to meet a certain data-quality criterion or goal using available data-improvement procedures. In this case, the problem is to understand whether these procedures can be assembled into a data-improvement workflow in a way that would guarantee that the data set produced by the workflow will effectively meet the desired quality goal. Motivating example: Suppose that data stored in a medical-data aggregator (such as, e.g., Premier <cit.>) are accessed to perform a health-outcomes analysis in population health management <cit.>, focusing on repeat emergency-room visits in the Washington, DC area. The goal of the analysis is to see whether there is a relationship between such repeat visits and ages and zip codes of patients. We assume that the aggregator imports information about emergency-room visits from a number of facilities, and stores the information using a relation EVisits whose attributes include the ID and the location of the medical facility, the patient insurance number, and the date and time of the visit. We also assume that medical-record information imported from each facility is stored at the aggregator in a separate relation, with a number of patient-related attributes that include the patient's age and zip code. The analyst plans to isolate information about emergency-room visits for the Washington area in a relation LocVisits, which would have all the attributes of EVisits except the facility location, as the values of the latter are understood to be fixed. Further, to obtain the age and zip code of patients, the analyst also needs to integrate the emergency-room-visit data with the medical-record data. To process the data, the analyst has access to some procedures that are part of the aggregator's everyday business. For example, the aggregator periodically runs a StandardizePatientInfo procedure, which first performs entity resolution on insurance IDs in the medical-record relation, using both the values of all the patient-related attributes in that relation and a separate “master” relation that stores authoritative patient information from insurance companies, and then merges the results back into the medical-record relation. Further, the aggregator offers a procedure MigrateIntoLocVisits that will directly populate LocVisits with the relevant information about emergency rooms (but not the age and zip code of patients). The analyst is now facing a number of choices, some of which we list here: (i) Use the StandardizePatientInfo procedure, then manually import the correct(ed) information into LocVisits, and finally join this relation with the medical-record data. (ii) Run MigrateIntoLocVisits to get the relevant patient information, and then join LocVisits with the medical-record data, without running the procedure StandardizePatientInfo.
(iii) Add age and zip-code attributes to LocVisits, get the information into LocVisits as in (ii), and then try to modify StandardizePatientInfo into operating directly on LocVisits. Which of these options is the best for the planned analysis? Option (i) seems to be the cleanest, but if the analyst suspects that StandardizePatientInfo may produce some loss of data, then going with (ii) or (iii) might be a better option. Further, suppose the analyst also has access to a separate relation HealthcareInfo from a health NGO, with information about emergency-room visits gathered from other independent sources. Then the analyst could pose the following quality criterion on the assembled workflow: The result of the workflow should provide at least the information that can be obtained from the relation HealthcareInfo. How could one guarantee that such a criterion will be met? Contributions: Our goal is to develop a general framework that can be used to determine whether the available data-processing tools can be put together into a workflow capable of producing data that meet the desired quality properties. Toward addressing this problem, we abstract data-processing tools as black-box procedures that expose only certain properties. The properties of interest include (i) preconditions, which indicate the state of the data required for the procedure to be applicable; (ii) the parts of the data that the procedure modifies; and (iii) postconditions, which the data satisfy once the procedure has been applied. In this paper we introduce the basic building blocks and basic results for the proposed framework for assessing achievability of data-quality constraints. The contributions include formalizing the notion of (sequences of) data-transforming procedures, and characterizing instances that are outcomes of applying (sequences of) procedures over other instances. We also illustrate our design choices by discussing ways to encode important database tasks in the proposed framework, including data migration, data cleaning, and schema updates. One of the advantages of our framework is its generality, as it can be used to encode multiple operations not only on relational data, but on semistructured or even unstructured text data. This very generality implies that to be able to reason about the properties of our framework, one needs to first instantiate some of its most abstract components. As a proof of concept, we provide an in-depth analysis of applications of (sequences of) procedures over relational data, where the procedures are stated using standard relational-data formalisms. We show that properties concerning outcomes of procedures are in general (not surprisingly) undecidable. At the same time, we achieve decidability and tractability for broad classes of realistic procedures that we illustrate with examples. While the formalism and results presented in this paper have practical implications on their own, we see them mainly as prerequisites that need to be understood before one can formalize the notion of assembling procedures in the context of and in response to a user task. We conclude this paper by showing how the proposed framework can be used to formally define the following problem: Given a set of procedures and data-quality criteria, is it possible to assemble a sequence of procedures such that the data outcome is assured to satisfy these criteria? Related Work: Researchers have been working on eliciting and defining specific dimensions of quality of the data — <cit.> provides a widely acknowledged standard; please also see <cit.>.
At the general level, high-quality data can be regarded as being fit for their intended use <cit.> — that is, both context and use (i.e., tasks to be performed) need to be taken into account when evaluating and improving the quality of data. Recent efforts have put an emphasis on information-quality policies and strategies; please see <cit.> for a groundbreaking set of generic information-quality policies that structure decisions on information. An information-quality improvement cycle, consisting of the define-measure-analyze-improve steps for data quality, has been proposed in <cit.>. Work has also been done <cit.> in the direction of integrating process measures with information-quality measures. Our work is different from these lines of research in that in our framework we assume that task-oriented data-quality requirements are already given in the form of constraints that need to be satisfied on the data, and that procedures for improving data quality are also specified and available. Under these assumptions, our goal is to determine whether the procedures can be used to achieve satisfaction of the quality requirements on the data. The work <cit.> introduces a unified framework covering formalizations and approaches for a range of problems in data extraction, cleaning, repair, and integration, and also supplies an excellent survey of related work in these areas. More recent work on data cleaning includes <cit.>. The research area of business processes <cit.> studies the environment in which data are generated and transformed, including processes, users of data, and goals of using the data. In this context, researchers have studied automating composition of services into business processes, see, e.g., <cit.>, under the assumption that the assembly needs to follow a predefined workflow of executions of actions (services). In contrast, in our work, the predefined part is the specified constraints that the data should satisfy after the assembled workflow of available procedures has been applied to it. Another line of work <cit.> is closer to reasoning about static properties of business process workflows. That work is different from ours in that it does not pursue the goal of constructing new workflows. Outline of the paper: Section <ref> contains basic definitions used in the paper. Section <ref> introduces the proposed framework, and Section <ref> discusses encoding tasks such as data exchange, data cleaning, and alter-table statements. The formal results are presented in Section <ref>. Section <ref> concludes with a discussion of future challenges and opportunities. § PRELIMINARIES Schemas and instances: Assume a countably infinite set of attribute names 𝒜 = { A_1, A_2, …} and a countably infinite set (disjoint from 𝒜) of relation names ℛ = { R_1, R_2, …}. A relational schema is a partial function 𝐒: ℛ → 2^𝒜 with finite domain, which associates a finite set of attributes with a finite set of relation symbols. If 𝐒(R) is defined, we say that R is in 𝐒. A schema 𝐒' extends a schema 𝐒 if for each relation R such that 𝐒(R) is defined, we have that 𝐒(R) ⊆ 𝐒'(R). That is, 𝐒' extends 𝐒 if 𝐒' assigns at least the same attributes to all relations in 𝐒. We also assume a total order ≤_𝒜 over all attribute names in order to be able to switch between the named and unnamed perspectives for instances and queries.
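For intuition, these definitions translate directly into code. A minimal Python sketch (the dict representation is one convenient encoding, and the attribute names reuse the placeholder identifiers of the running example):

# A schema as a partial function from relation names to attribute sets;
# relations absent from the dict are those on which the schema is undefined.
example_schema = {"EVisits": {"facID", "insID", "time"}}

def extends(s_prime, s):
    # s_prime extends s if every relation of s is also in s_prime
    # and keeps at least the same attributes.
    return all(r in s_prime and attrs <= s_prime[r] for r, attrs in s.items())

# extends({"EVisits": {"facID", "insID", "time", "age"}}, example_schema) is True.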
We define instances so that it is possible to switch between the named and unnamed perspectives. Assume a countably infinite set of domain values D (disjoint from both 𝒜 and ℛ). Following <cit.>, an instance I of schema 𝐒 assigns to each relation R in 𝐒, where 𝐒(R) = {A_1,…,A_n}, a set R^I of named tuples, each of which is a function of the form t: {A_1,…,A_n} → D, representing the tuples in R. (We use t(A_i) to denote the element of a tuple t corresponding to the attribute A_i.) By using the order ≤_𝒜 over attributes, we can alternatively view t as an unnamed tuple, corresponding to the sequence t̅ = t(A_1),…,t(A_n), with A_1 <_𝒜 ⋯ <_𝒜 A_n. Thus, we can also view an instance I as an assignment R^I of sets of unnamed tuples (or just tuples) t̅ ∈ D^n. In general, when we know all attribute names for a relation, we use the unnamed perspective, but when the set of attributes is not clear, we resort to the named perspective. For the sake of readability, we abuse notation and use 𝐒(I) to denote the schema of an instance I. For instances I and J over a schema 𝐒, we write I ⊆ J if for each relation symbol R in 𝐒 we have that R^I ⊆ R^J. Furthermore, if I_1 and I_2 are instances over respective schemas 𝐒_1 and 𝐒_2, we denote by I_1 ∪ I_2 the instance over schema 𝐒_1 ∪ 𝐒_2 such that R^I_1 ∪ I_2 = R^I_1 ∪ R^I_2 if R is in both 𝐒_1 and 𝐒_2, R^I_1 ∪ I_2 = R^I_1 if R is only in 𝐒_1, and R^I_1 ∪ I_2 = R^I_2 if R is only in 𝐒_2. Finally, an instance I' extends an instance I if (1) 𝐒(I') extends 𝐒(I), and (2) for each relation R in 𝐒(I) with assigned attributes {A_1,…,A_n} and for each tuple t in R^I, there is a tuple t' in R^I' such that t(A_i) = t'(A_i) for each 1 ≤ i ≤ n. Intuitively, I' extends I if the projection of I' over the schema of I contains I. Conjunctive queries: Since our goal is for queries to be applicable to different schemas, we adopt a named perspective on queries. A named atom is an expression of the form R(A_1:x_1,…,A_k:x_k), where R is a relation name, each A_i is an attribute name, and each x_i is a variable. We say that the variables mentioned by such an atom are x_1,…,x_k, and the attributes mentioned by it are A_1,…,A_k. A conjunctive query (CQ) is an expression of the form ∃z̅ ϕ(z̅,y̅), where z̅ and y̅ are tuples of variables and ϕ(z̅,y̅) is a conjunction of named atoms that use the variables in z̅ and y̅. A named atom R(A_1:x_1,…,A_k:x_k) is compatible with schema 𝐒 if {A_1,…,A_k} ⊆ 𝐒(R). A CQ is compatible with 𝐒 if all its named atoms are compatible. Given a named atom R(A_1:x_1,…,A_k:x_k), an instance I of a schema 𝐒 that is compatible with the atom, and an assignment τ: {x_1,…,x_k} → D of values to variables, we say that (I,τ) satisfies R(A_1:x_1,…,A_k:x_k) if there is a tuple a in R^I matching values with τ on the attributes A_1,…,A_k, in the sense that a(A_i) = τ(x_i) for each 1 ≤ i ≤ k. (Under the unnamed perspective we would require a tuple a̅ in R^I such that its projection π_A_1,…,A_k a̅ over attributes A_1,…,A_k is precisely the tuple τ(x_1),…,τ(x_k).) The usual semantics of conjunctive queries now follows, extending the notion of assignments in the usual way. Finally, given a conjunctive query Q that is compatible with 𝐒, the evaluation Q(I) of Q over I is the set of all the tuples τ(x_1),…,τ(x_k) such that (I,τ) satisfies Q.
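To make the named-perspective semantics concrete, here is a small, naive evaluator, given as a sketch only; it assumes instances are encoded as dicts from relation names to lists of named tuples (themselves dicts from attributes to values) and that the query is compatible with the instance's schema:

def eval_cq(instance, atoms):
    # atoms is a list of pairs (relation, {attribute: variable}) describing
    # the named atoms of the query; the function returns all assignments tau
    # under which every atom is satisfied in the instance.
    results = []
    def extend(i, tau):
        if i == len(atoms):
            results.append(dict(tau))
            return
        rel, binding = atoms[i]
        for t in instance.get(rel, ()):
            new = dict(tau)
            consistent = True
            for attr, var in binding.items():
                if var in new and new[var] != t[attr]:
                    consistent = False
                    break
                new[var] = t[attr]
            if consistent:
                extend(i + 1, new)
    extend(0, {})
    return results

# Example: eval_cq({"EVisits": [{"facID": 1, "insID": "a", "time": 0}]},
#                  [("EVisits", {"insID": "x"})]) yields [{"x": "a"}].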
We also need to specify queries that extract all tuples stored in a given relation, regardless of the schema, as is done in SQL with the query SELECT * FROM R. To be able to do this, we also use what we call total queries, which, as we do not need to know the arity of R, are simply constructs of the form R, for a relation name R. A total query of this form is compatible with a schema 𝐒 if 𝐒(R) is defined, and the evaluation of this query over an instance I over a compatible schema 𝐒 is simply the set of tuples R^I. Data Constraints: Most of our data constraints can be captured by tuple-generating dependencies (tgds), which are expressions of the form ∀x̅(∃y̅ϕ(x̅,y̅) →∃z̅ψ(x̅,z̅)), for conjunctive queries ∃y̅ϕ(x̅,y̅) and ∃z̅ψ(x̅,z̅), and by equality-generating dependencies (egds), which are expressions of the form ∀x̅(∃y̅ϕ(x̅,y̅) → x = x'), for a conjunctive query ∃y̅ϕ(x̅,y̅) and variables x,x' in x̅. As usual, for readability we sometimes omit the universal quantifiers of tgds and egds. An instance I satisfies a set Σ of tgds and egds, written I ⊨ Σ, if (1) the schema of I is compatible with each conjunctive query in each dependency in Σ, and (2) for each tgd in Σ, every assignment τ: x̅∪y̅→ D such that (I,τ) satisfies ϕ(x̅,y̅) can be extended into an assignment τ': x̅∪y̅∪z̅→ D such that (I,τ') satisfies ψ(x̅,z̅), and, for each egd in Σ, every assignment τ such that (I,τ) satisfies ϕ(x̅,y̅) is such that τ(x) = τ(x'). A tgd is full if it does not use existentially quantified variables on the right-hand side. A set Σ of tgds is full if each tgd in Σ is full. Σ is acyclic if the following graph is acyclic: represent each relation mentioned in a tgd in Σ as a node, and add an edge from node R to node S whenever a tgd in Σ mentions R on the left-hand side and S on the right-hand side. Structure Constraints: Structure constraints are used to specify that schemas need to contain a certain relation or certain attributes. A structure constraint is a formula of the form R[s̅] or R[*], where R is a relation symbol, s̅ is a tuple of attributes, and * is a symbol not in 𝒜 or ℛ intended to function as a wildcard. A schema 𝐒 satisfies a structure constraint R[s̅], denoted by 𝐒 ⊨ R[s̅], if 𝐒(R) is defined and each attribute in s̅ belongs to 𝐒(R). The schema satisfies the constraint R[*] if 𝐒(R) is defined. For an instance I over a schema 𝐒 and a set Σ of tgds, egds, and structure constraints, we write (I, 𝐒) ⊨ Σ if I satisfies each data constraint in Σ and 𝐒 satisfies each structure constraint in Σ.
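Dependency satisfaction can also be checked directly with the earlier evaluator. The following is a sketch of ours, restricted to full tgds for simplicity (general tgds would additionally need the extension check for the existential head variables); it reuses eval_cq and the instance I from the previous sketch.

```python
def satisfies_full_tgd(instance, body_atoms, head_atoms, xs):
    """I |= forall xs (body -> head) for a FULL tgd: every binding of the
    shared variables xs satisfying the body must also satisfy the head."""
    return eval_cq(instance, body_atoms, xs) <= eval_cq(instance, head_atoms, xs)

def satisfies_egd(instance, body_atoms, xs, x, x_prime):
    """I |= forall xs (body -> x = x'): every satisfying binding equates x, x'."""
    i, j = xs.index(x), xs.index(x_prime)
    return all(ans[i] == ans[j] for ans in eval_cq(instance, body_atoms, xs))

# the tgd EVisits(...) -> LocVisits(...) from the migration example
atoms_e = [("EVisits",   {"facilityId": "x", "insuranceId": "y", "timestamp": "z"})]
atoms_l = [("LocVisits", {"facilityId": "x", "insuranceId": "y", "timestamp": "z"})]
assert satisfies_full_tgd(I, atoms_e, atoms_l, ["x", "y", "z"])
```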
§ PROCEDURESIn this section we formalize the notion of procedures that transform data. We view procedures as black boxes, and assume no knowledge of or control over their inner workings. Our reasoning about procedures is based on two properties: an input condition, or precondition, on the state of the data that must hold for a procedure to be applicable, and an output condition, or postcondition, on the state of the data that must hold after the application. Consider again the medical example discussed in the introduction, with a schema having two relations: LocVisits, holding information about emergency-room visits in a geographical area, and EVisits, holding visit information for an individual emergency room in a particular location. Suppose we know that a procedure is available that migrates the data from EVisits to LocVisits. We do not know how the procedure works, but we do know that once it has been applied, all tuples in EVisits also appear in LocVisits. In other words, this procedure can be described by the following information: Precondition: The schema has relations LocVisits and EVisits, each with attributes facilityId, insuranceId, and timestamp (standing for facility ID, patient insurance ID, and timestamp). Postcondition: Every tuple from EVisits is in LocVisits. Scope and safety guarantees: To rule out procedures that, for example, delete all the tuples from the database, we must be assured that our procedure only modifies the relation LocVisits, and that it preserves all the tuples present in LocVisits before the application of the procedure. We shall soon see how to encode these guarantees into our framework. Suppose that after a while, the requirements of one of the partner agencies of the organization impose an additional requirement: Relation LocVisits should also contain information about the age of the patients. Suppose the organization also has a relation Patients, where the patient age is recorded in attribute age, together with insuranceId and patientId. To migrate the patient ages into LocVisits, one needs the following steps: First add the attribute age to LocVisits, and then update this table so that the patient ages are as recorded in Patients. We observe that all the procedures involved in this operation can be captured using the same framework of preconditions, postconditions, and scope/safety guarantees that we used to capture the data-migration procedure.§.§ Formal Definition We define procedures with respect to a class 𝒞 of constraints and a class 𝒬 of queries. A procedure P over 𝒞 and 𝒬 is a tuple (𝒮,Σ_pre,Σ_post,𝒬_safe), where* 𝒮 is a set of structure constraints that defines the scope (i.e., relations and attributes) in which the procedure acts; * Σ_pre and Σ_post are constraints in 𝒞 that describe the pre- and postconditions of the procedure, respectively; and * 𝒬_safe is a set of queries in 𝒬 that serve as a safety guarantee for the procedure.Let us return to the procedure outlined in Example <ref>, where the intention was to define migration of data from relation EVisits into LocVisits. In our formalism, we describe this procedure as follows. 𝒮: Since the procedure migrates tuples into LocVisits, the scope of the procedure is just this relation. This is described using the structure constraint LocVisits[*]. Σ_pre: We use the structure constraints EVisits[facilityId, insuranceId, timestamp] and LocVisits[facilityId, insuranceId, timestamp] to ensure that the database has the correct attributes. Σ_post: The postcondition comprises the tgd EVisits(facilityId:x, insuranceId:y, timestamp:z) → LocVisits(facilityId:x, insuranceId:y, timestamp:z). That is to say, after the procedure has been applied, the projection of EVisits over facilityId, insuranceId, and timestamp is a subset of the respective projection of LocVisits. 𝒬_safe: We can add safety guarantees in terms of queries that need to be preserved when the procedure is applied. In this case, since we do not want the procedure to delete anything that was stored in LocVisits before the migration, we add the safety query LocVisits(facilityId:x, insuranceId:y, timestamp:z), whose intent is to state that all answers to this query on LocVisits that are present in the database before the application of the procedure must be preserved. We formalize this intuition when giving the semantics below. §.§ SemanticsFormalizing the semantics of procedures requires additional notation. Given a set 𝒮 of structure constraints and a schema 𝐒, we denote by Q_𝐒∖𝒮 the conjunctive query that, intuitively, is meant to retrieve the projection of the entire database over all relations and attributes not mentioned in 𝒮. Formally, Q_𝐒∖𝒮 includes a conjunct R(A_1:z_1,…,A_m:z_m) for each relation R in 𝐒 but not mentioned in 𝒮, where 𝐒(R) = { A_1,…,A_m } and z_1,…,z_m are fresh variables.
In addition, if some constraint in 𝒮 mentions a relation T in 𝐒, but no constraint in 𝒮 is of the form T[*], then Q_𝐒∖𝒮 also includes a conjunct T(B_1:z_1,…,B_k:z_k), where {B_1,…,B_k} is the set of all the attributes in 𝐒(T) that are not mentioned in any constraint in 𝒮, and z_1,…,z_k are again fresh variables. For example, consider a schema 𝐒 with relations R, S, and T, where R has attributes A_1 and A_2, T has attributes B_1, B_2 and B_3, and S has A_1 and B_1. Further, consider the set 𝒮 containing the constraints R[*] and S[B_1]. Then Q_𝐒∖𝒮 is the query T(B_1: z_1, B_2:z_2, B_3: z_3) ∧ S(A_1: w_1). Note that Q_𝐒∖𝒮 is unique up to the renaming of variables and the order of conjuncts. A procedure P = (𝒮,Σ_pre,Σ_post,𝒬_safe) is applicable on an instance I over schema 𝐒, with respect to a schema 𝐒', if (1) the query Q_𝐒∖𝒮 and each query in 𝒬_safe are compatible with both 𝐒 and 𝐒', and (2) (I,𝐒) satisfies the preconditions Σ_pre. We can now proceed with the semantics of procedures. Let I be an instance over a schema 𝐒. An instance I' over schema 𝐒' is a possible outcome of applying P over the instance and schema (I,𝐒) if the following holds:* P is applicable on I with respect to 𝐒'. * (I',𝐒') ⊨ Σ_post. * The answers of the query Q_𝐒∖𝒮 do not change: Q_𝐒∖𝒮(I) = Q_𝐒∖𝒮(I').* The answers of each query Q in 𝒬_safe over I are preserved: Q(I) ⊆ Q(I'). In the definition, we state the schemas of instances I and I' explicitly, to reinforce the fact that schemas may change during the application of procedures. However, most of the time the schema can be understood from the instance, so we normally just say that an instance I' is a possible outcome of I (even if the schemas of I and I' are different). Let us also recall that we use 𝐒(I) to denote the schema of an instance I. [Example <ref> continued] Recall the procedure P = (𝒮,Σ_pre,Σ_post,𝒬_safe) defined in Example <ref>. Consider the instance I over the schema 𝐒 with relations EVisits and LocVisits, each with attributes facilityId, insuranceId, and timestamp, as shown in Figure <ref> (a). Note first that P is indeed applicable on I. When applying the procedure P over I, we know from 𝒮 that the only relation whose content can change is LocVisits, while EVisits (or, more precisely, the projection of EVisits over facilityId, insuranceId, and timestamp) is the same across all possible outcomes. Furthermore, we know from Σ_post that in all possible outcomes the projection of EVisits over attributes facilityId, insuranceId, and timestamp must be contained in the projection of LocVisits over the same attributes. Finally, from 𝒬_safe we know that the projection of LocVisits over these three attributes must be preserved. Perhaps the most obvious possible outcome of applying P over I is the instance J_1 in Figure <ref> (b), corresponding to the outcome where the tuple in EVisits that is not yet in LocVisits is migrated into this last relation. However, since we assume no control over the actions performed by the procedure P, it may well be that it is also migrating data from a different relation that we are not aware of, producing an outcome whose EVisits relation remains the same as in I and J_1, but where LocVisits has additional tuples, as depicted in Figure <ref> (c). Moreover, it may also be the case that the procedure alters the schema of LocVisits, adding an extra attribute age and importing the information from an unknown source, as shown in Figure <ref> (d). As we have seen in this example, in general the number of possible outcomes (and even the number of possible schemas) that result after a procedure is executed is infinite. For this reason, we are generally more interested in properties shared by all possible outcomes, which motivates the following definition. The outcome set of applying a procedure P to I is defined as the set Out_P(I) = {I' | I' is a possible outcome of applying P to I}. (Recall that the schema of an instance I' in Out_P(I) is not necessarily the same as that of I.)
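When both the input instance and a candidate outcome are materialized, the four conditions above admit a direct, if naive, check. The sketch below is our own illustration, not the paper's code: it reuses eval_cq from the earlier sketch, treats P as a simple container with fields scope, post, and safe, and takes post_holds, complement_query, and applicable as assumed helpers (post_holds can, for instance, be assembled from the satisfies_full_tgd and satisfies_egd sketches).

```python
def is_possible_outcome(I, J, P, post_holds, complement_query, applicable):
    """Check conditions (1)-(4) above for a candidate outcome J of applying
    P to I. `complement_query(scope, I)` is an assumed helper returning the
    named atoms of Q_{S \\ scope}; `applicable` and `post_holds` are assumed
    checkers for the precondition and postcondition, respectively."""
    q_out = complement_query(P.scope, I)
    free = sorted({v for _, av in q_out for v in av.values()})
    return (applicable(P, I)                                       # (1)
            and post_holds(J, P.post)                              # (2)
            and eval_cq(I, q_out, free) == eval_cq(J, q_out, free) # (3)
            and all(eval_cq(I, q, fv) <= eval_cq(J, q, fv)         # (4)
                    for q, fv in P.safe))
```

Note that this only verifies one given candidate J; as the example above shows, the set of all possible outcomes is in general infinite, which is why the paper reasons about properties shared by all outcomes rather than enumerating them.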
§ DEFINING COMMON DATABASE TASKS AS PROCEDURESWe now show additional examples of defining common database tasks as procedures within our framework. We show that data exchange, alter-table statements, and data cleaning can all be accommodated by the framework, and provide additional examples in Appendix <ref>. It is worth noticing that in our first three examples we use only structure constraints, tgds, and egds as pre- and postconditions, and that our safe queries are all conjunctive queries. The last example calls for extending the language used to define procedures.§.§ Data ExchangeWe have already seen an example of specifying data-migration tasks as black-box procedures. However, a more detailed discussion will allow us to illustrate some of the basic properties of our framework. Following the notation introduced by Fagin et al. in <cit.>, the most basic instance of the data-exchange problem considers a source schema 𝐒_s, a target schema 𝐒_t, and a set Σ of dependencies that define how data from the source schema are to be mapped to the target schema. The dependencies in Σ are usually tgds whose left-hand side is compatible with 𝐒_s and whose right-hand side is compatible with 𝐒_t. The data-exchange problem is as follows: Given a source instance I, compute a target instance J so that (I,J) satisfies all the dependencies in Σ. Instances J with this property are called solutions for I under Σ. In encapsulating this task as a black box within our framework, we assume that the target and source schemas are part of the same schema. (Alternatively, one can define procedures working over different databases.) Let (𝐒_s,𝐒_t,Σ) be as above. We construct the procedure P^st = (𝒮^st,Σ_pre^st,Σ_post^st,𝒬_safe^st), where * 𝒮^st contains a constraint R[*] for each relation R on the right-hand side of a tgd in Σ; * Σ_pre^st contains a structure constraint R[A_1,…,A_n] for each named atom of the form R(A_1:x_1,…,A_n:x_n) on the left-hand side of a tuple-generating dependency in Σ; * Σ_post^st is the set of all the tgds in Σ; and * 𝒬_safe^st contains the total query R for each relation R appearing in any tgd in Σ.By the semantics of procedures, it is not difficult to conclude that, for every pair of instances I and J over 𝐒_s and 𝐒_t, respectively, we have that J is a solution for I if and only if the instance I ∪ J over the schema 𝐒_s∪𝐒_t is a possible outcome of applying P^st over (I,𝐒_s∪𝐒_t). We can make this statement much more general, as the set of all possible outcomes essentially corresponds to the set of solutions of the data-exchange setting. An instance J over schema 𝐒_s∪𝐒_t is a possible outcome of applying P^st over (I,𝐒_s∪𝐒_t) if and only if J is a solution for I under Σ.§.§ Alter Table Statements In our framework, procedures can be defined to work over more than one schema, as long as the schemas satisfy the necessary input and compatibility conditions. This is inspired by SQL, where statements such as INSERT INTO R SELECT * FROM S would be executable over any schema, as long as the relations R and S have the same types of attributes in the same order. Thus, it seems logical to allow procedures that alter the schema of the existing database. To do so, we use structure constraints, as shown in the following example.
Recall from Example <ref> that, due to a change in the requirements, we now need to add the attribute age to the schema of LocVisits. In general, we capture alter-table statements by procedures without scope, used only to alter the schema of the outcomes so that it satisfies the structural postconditions of the procedures. In this case, we model a procedure that adds age to the schema of LocVisits with the procedure P' = (𝒮',Σ_pre',Σ_post',𝒬_safe'), where 𝒮' and 𝒬_safe' are empty (if there is no scope, then the database does not change modulo adding attributes, so we do not include any safety guarantees), Σ_pre' is the structure constraint LocVisits[*], stating that the relation LocVisits exists in the schema, and Σ_post' is the structure constraint LocVisits[age], stating that LocVisits now has an age attribute. Note that the instance J_3 in Figure <ref>(d), with LocVisits as in J_1 in Figure <ref>(b) except for the extra attribute, is actually a possible outcome of applying P' over instance J_1; the part of the instance given by the schema of J_1 does not change, but we do add an extra attribute age to LocVisits, and we cannot really control the values of the newly added attribute. We remark that the empty scope in P' guarantees that no relations or attributes are deleted when applying this procedure. This happens because Q_𝐒∖𝒮' must be compatible with the schema of all outcomes. However, nothing prevents us from adding extra attributes on top of age. This decision to use the open-world assumption on schemas reflects the understanding of procedures as black boxes, which we can execute but not control in other ways.§.§ Data CleaningData cleaning is a frequent and important task within database systems (see, e.g., <cit.>). The most simple cleaning scenario one could envision is when we have a relation R whose attribute values are deemed incorrect or incomplete, and it is desirable to provide the correct values. There are, in general, multiple ways to do this; here we consider just a few of them. The first possibility is to assume that we have the correct values in another relation, and to use this other relation to provide the correct values for R. Consider an example. Consider again the schema from Example <ref>. Recall that in Example <ref> we added the attribute age to the schema of LocVisits. The problem is that we have no control over the newly added values of age. (If the procedure was a SQL alter-table statement, then the column would be filled with nulls.) However, another relation, Patients, associates an age value with each pair of (insuranceId, patientId) values; all we need to do now is to copy the appropriate age value into each tuple in LocVisits. To this end, we specify the procedure P^* = (𝒮^*,Σ_pre^*,Σ_post^*,𝒬_safe^*), which copies the values of age from Patients into LocVisits, using the values of insuranceId and patientId as a reference. 𝒮^*: We use the constraint LocVisits[age], so that the only piece of the database the procedure can alter is the attribute age in the relation LocVisits. Σ_pre^*: The preconditions are the structure constraints Patients[insuranceId, patientId, age] and LocVisits[insuranceId, patientId, age], plus the fact that the values of insuranceId and patientId need to determine the values of age in the relation Patients, specified with the dependency Patients(insuranceId:x, patientId:y, age:z) ∧ Patients(insuranceId:x, patientId:y, age:w) → z = w. Note that in this case we do not actually need the structure constraints in Σ_pre^*, because they are implicit in the dependencies (they need to be compatible with the schema), but we keep them for clarity. Σ_post^*: The postcondition is the constraint LocVisits(insuranceId:x, patientId:y, age:z) ∧ Patients(insuranceId:x, patientId:y, age:w) → z = w. Alternatively, if we know that all the (insuranceId, patientId) pairs from LocVisits are in Patients (which can be required with a precondition), we can specify the same postcondition via Patients(insuranceId:x, patientId:y, age:w) → LocVisits(insuranceId:x, patientId:y, age:w). 𝒬_safe^*: Same as before, no guarantees are needed.
As desired, in all the outcomes of P^* the value of the attribute age in LocVisits is the same as in the corresponding tuple (if it exists) in Patients with the same insuranceId and patientId values. But then again, the procedure might modify the schema of some relations, or might even create auxiliary relations in the database in the process. What we gain is that this procedure will work regardless of the shape of relations LocVisits and Patients, as long as the schemas satisfy the compatibility and structure constraints. In the above example we used a known auxiliary relation to clean the values of age in LocVisits. Alternatively, we could define a more general procedure that would, for instance, only remove nulls from LocVisits, without controlling which values end up replacing these nulls. In order to state this procedure, let us augment the language of tgds with an auxiliary predicate C (for constant) with a single attribute val, which is to take the role of the NOT NULL constraint in SQL: It is true only for the non-null values in D. Let us now define a procedure P̂ = (𝒮̂, Σ̂_pre, Σ̂_post, 𝒬̂_safe) that simply replaces all null values of the attribute age in relation LocVisits with non-null values. 𝒮̂: The scope is again LocVisits[age], just as in the previous example. Σ̂_pre: In contrast with the procedure P^* of the previous example, this procedure is light on preconditions: We only need the relation LocVisits to be present and to have the age attribute. Σ̂_post: The postcondition states that the age attribute of LocVisits no longer has null values. To express this, we use the auxiliary predicate C, and define the constraint LocVisits(age:x) → C(val:x), which states that no value in the age attribute of LocVisits is null. 𝒬̂_safe: Since we only want to eliminate null values, we also include the safety query LocVisits(age:x, insuranceId:y, patientId:z) ∧ C(val:x), so that we preserve all the non-null values of age (with the correct insuranceId and patientId attached to these ages).
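The two conditions characterizing the outcomes of this null-cleaning procedure are easy to state operationally. The sketch below is ours, under the assumption that Python's None plays the role of a null and that the predicate C holds exactly for non-None values; the attribute names are again the running example's.

```python
def c_predicate(v):
    """The auxiliary 'constant' predicate: true only for non-null values."""
    return v is not None

def post_holds(instance):
    """Postcondition of P-hat: no null in the age attribute of LocVisits."""
    return all(c_predicate(t["age"]) for t in instance["LocVisits"])

def safety_preserved(before, after):
    """Safety query of P-hat: every non-null (age, insuranceId, patientId)
    triple present before the procedure must still be present afterwards."""
    q = lambda inst: {(t["age"], t["insuranceId"], t["patientId"])
                      for t in inst["LocVisits"] if c_predicate(t["age"])}
    return q(before) <= q(after)
```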
§ BASIC COMPUTATIONAL TASKS FOR RELATIONAL PROCEDURES In this section we study some formal properties of our procedure-centric framework, with the intent of showing how the proposed framework can be used as a toolbox for reasoning about sequences of database procedures. We focus on what we call relational procedures, where the sets of pre- and postconditions are given by tgds, egds, or structure constraints, and safety queries can be conjunctive or total queries. While there clearly are interesting classes of procedures that do not fit into this special case of the proposed framework, we remark that relational procedures are general enough to account for a wide range of relational operations on data, including the examples in the previous section.§.§ ApplicabilityIn the proposed framework we focus on transformations of data sets given by sequences of procedures. Because we treat procedures as black boxes, the only description we have of the results of these transformations is that they ought to satisfy the output constraints of the procedures. In this situation, how can one guarantee that all the procedures will be applicable? Suppose that, for instance, we wish to apply procedures P_1 and P_2 to an instance I in sequential order: first P_1, then P_2. The problem is that, since output constraints do not fully determine the outcome of I after applying P_1, we cannot immediately guarantee that this outcome is an instance that satisfies the preconditions of P_2. Given that the set of possible outcomes is in general infinite, our focus is on guaranteeing that any possible outcome of applying P_1 over I will satisfy the preconditions of P_2. To formalize this intuition, we need to extend the notion of outcome to a set of instances. We define the outcome of applying a procedure P to a set 𝒦 of instances as Out_P(𝒦) = ⋃_I ∈𝒦 Out_P(I), the union of the outcomes of all the instances in 𝒦. Furthermore, for a sequence P_1,…,P_n of procedures we define the outcome of applying P_1,…,P_n to an instance I as the set Out_P_1,…,P_n(I) = Out_P_n(Out_P_n-1(⋯ (Out_P_1(I)) ⋯ )). We can now define the first problem of interest:Applicability: Input: A sequence P_1,…,P_n of procedures and a schema 𝐒. Question: Is it true that, for any arbitrary instance I over 𝐒, procedure P_n can be applied to each instance in the set Out_P_1,…,P_n-1(I)? It is not difficult to show that the Applicability problem is intimately related to the problem of implication of dependencies, defined as follows: Given a set Σ of dependencies and an additional dependency λ, is it true that all the instances that satisfy Σ also satisfy λ — that is, does Σ imply λ? Indeed, consider a class ℒ of constraints for which the implication problem is known to be undecidable. Then one can easily show that the applicability problem is also undecidable for those procedures whose pre- and postconditions are in ℒ: Intuitively, if we let P_1 be a procedure with a set Σ of postconditions, and P_2 a procedure with a dependency λ as a precondition, then it is not difficult to come up with proper scopes and safety queries so that Out_P_1(I) satisfies λ for every instance I over schema 𝐒 if and only if λ is true in all instances that satisfy Σ. However, as the following proposition shows, the applicability problem is undecidable already for very simple procedures, and even when we consider the data-complexity view of the problem, that is, when we fix the procedures and take a particular input instance. There are fixed procedures P_1 and P_2 that only use tgds for their constraints, and such that the following problem is undecidable: Given an instance I over schema 𝐒, is it true that all the instances in Out_P_1(I) satisfy the preconditions of P_2? The proof of Proposition <ref> is by reduction from the embedding problem for finite semigroups, shown to be undecidable in <cit.>. There are several lines of work aiming to identify practical classes of constraints for which the implication problem is decidable, and all that work can be applied in our framework. However, we opt for a stronger restriction: Since all of our examples so far use only structure constraints as preconditions, for the remainder of the paper we focus on procedures whose preconditions comprise structure constraints. In this setting, we have the following result. Applicability is in polynomial time for sequences of relational procedures whose preconditions contain only structure constraints.
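The idea behind this polynomial-time bound, following the schema-propagation algorithm in the appendix, is to maintain a lower bound on the schema of every possible outcome and check each procedure's structure-constraint preconditions against it. The following is a simplified sketch of ours: the field names on P are assumptions, and the failure cases involving total safety queries are omitted for brevity.

```python
def applicability(schema, procedures):
    """Propagate a lower bound T on the schema of all possible outcomes and
    test each procedure's structure-constraint preconditions against it.
    Constraints are pairs (R, attrs), where attrs is a set or the wildcard "*"."""
    T = {r: set(a) for r, a in schema.items()}
    for P in procedures:
        for r, attrs in P.pre:                  # preconditions vs. current T
            if r not in T or (attrs != "*" and not set(attrs) <= T[r]):
                return False
        for r, attrs in P.post_structure:      # structure constraints in post
            T.setdefault(r, set())
            if attrs != "*":
                T[r] |= set(attrs)
        for r, attrs in P.post_atoms:           # attributes in tgd/egd atoms
            T.setdefault(r, set()).update(attrs)
    return True
```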
§.§ Representing the Outcome Set We have seen that deciding properties about the outcome set of a sequence of procedures (or even of a single procedure) can be a complicated task. One of the reasons is that procedures do not completely define their outcomes: We do not really know what will be the outcome of applying a sequence P_1,…,P_n of procedures to an instance I; we just know it will be an instance from the collection Out_P_1,…,P_n(I). This collection may well be of infinite size, but can it still be represented finitely? The database-theory community has developed multiple formalisms for representing sets of database instances, from notions of tables with incomplete information <cit.> to knowledge bases (see, e.g., <cit.>). In this section we study the possibility of representing outcomes of (sequences of) procedures by means of incomplete tables, along the lines of <cit.>. We also discuss some negative results about representing outcomes of general procedures in systems such as knowledge bases, but leave a more detailed study in this respect for future work. The first observation we make is that allowing arbitrary tgds in procedures introduces problems with the management of sequences of procedures. Essentially, any means of representing the outcome of a sequence of procedures needs to be so powerful that even deciding whether the outcome is nonempty is going to be undecidable. There is a fixed procedure P that uses no preconditions and only tgds in its postconditions, such that the following problem is undecidable: Given an instance I, is the set Out_P(I) nonempty? The reason we view Proposition <ref> as a negative result is that it rules out the possibility of using any "reasonable" representation system. Indeed, one would expect that deciding non-emptiness should be decidable in any reasonable way of representing infinite sets of instances. Proposition <ref> is probably not surprising, since reasoning about tgds in general is known to be a hard problem. Perhaps more interestingly, in our case one can show that the above fact remains true even if one allows only acyclic tgds, which are arguably one of the most well-behaved classes of dependencies in the literature. The idea behind the proof is that one can simulate cyclic tgds via procedures with only acyclic tgds and no scope. Consider two procedures P_1 and P_2, where P_1 = (𝒮^1, Σ_pre^1, Σ_post^1, 𝒬_safe^1), with 𝒮^1 = {R[*],T[*]}, Σ_pre^1 = ∅, Σ_post^1 = {R(A:x) → T(A:x)} and 𝒬_safe^1 = {R(A:x) ∧ T(A:x)}; P_2 has empty scope, preconditions, and safety queries, and has postconditions {T(A:x) → R(A:x)}. Let I be an instance over the schema with relations R and T, both with attribute A. By definition, the set of possible outcomes of P_1 over I consists of all instances J that extend I and satisfy the dependency R(A:x) → T(A:x). However, the set Out_P_1,P_2(I) corresponds to all instances I' that extend I and satisfy both dependencies R(A:x) → T(A:x) and T(A:x) → R(A:x). (In other words, we can use P_2 to filter out all those instances J where T^J ⊈ R^J.) Intuitively, this happens because the outcome set of applying P_2 over any instance not satisfying T(A:x) → R(A:x) is empty, and we define Out_P_1,P_2(I) as the union of the sets Out_P_2(K), for each instance K ∈ Out_P_1(I). By applying the idea of this example to the proof of Proposition <ref>, we show: Proposition <ref> holds for procedures P_1 and P_2 that only use acyclic tgds. Since acyclic tgds do not help, we may consider restrictions to full tgds. Still, even this is not enough to make the non-emptiness problem decidable, once one adds the possibility of having schema constraints in procedures. There exists a sequence P_1,P_2,P_3 of procedures such that the following problem is undecidable: Given an instance I, is the set Out_P_1,P_2,P_3(I) nonempty? Here, all the procedures have no preconditions, and have postconditions built using acyclic sets of full tgds and schema constraints (and nothing else). Propositions <ref> and <ref> tell us that restricting the classes of dependencies allowed in procedures may not be enough to guarantee outcomes that can be represented by reasonable systems. Thus, we now adopt a different strategy: We restrict the interplay between the postconditions of procedures, their scope, and their safety queries.
Let us define two important classes of procedures that will be used throughout this section. We say that a procedure P = (𝒮,Σ_pre,Σ_post,𝒬_safe) has safe scope if the following holds: * Σ_post is a set of tgds in which no relation appearing on the right-hand side of a tgd appears also on the left-hand side of a tgd; * The set 𝒮 contains exactly one constraint R[*] for each relation R that appears on the right-hand side of a tgd in Σ_post; and * 𝒬_safe consists of the single query ⋀_R[*] ∈𝒮 R, that is, it binds precisely all the relations in the scope of P. (For instance, procedure P in Example <ref> is essentially a procedure with safe scope, as it can easily be transformed into one by slightly altering the safety query.) We also define a class of procedures that ensure that certain attributes or relations be present in the schema. Formally, we say that a procedure P = (𝒮,Σ_pre,Σ_post,𝒬_safe) is an alter-schema procedure if the following holds: * Both 𝒮 and 𝒬_safe are empty; and * Σ_post is a set of structure constraints. Let 𝒫^safe,alter be the class of all the procedures that are either safe-scope or alter-schema procedures. The class 𝒫^safe,alter allows for practically-oriented interplay between migration and schema-alteration tasks and, as we will see in this section, is more manageable from the point of view of reasoning tasks, in terms of complexity. To begin with, deciding the non-emptiness of a sequence of procedures is essentially tractable for 𝒫^safe,alter: The problem of deciding, given an instance I and a sequence P_1,…,P_n of procedures in 𝒫^safe,alter, whether Out_P_1,…,P_n(I) ≠∅, is in exponential time, and is polynomial if the number n of procedures is fixed. The proof of Theorem <ref> is based on the idea of chasing instances with the dependencies in the procedures, and of adding attributes to schemas as dictated by the alter-schema procedures. As usual, to enable the chase we need to introduce labeled nulls in instances (see, e.g., <cit.>), and composing procedures calls for extending the techniques of <cit.> to enable chasing instances that already have null values. Using the enhanced approach, one can show that the result of the chase is a good over-approximation of the outcome of a sequence of procedures. To state this result, we introduce conditional tables <cit.>. Let 𝒩 be an infinite set of null values that is disjoint from the set of domain values D. A naive instance T over schema 𝐒 assigns a finite relation R^T ⊆ (D ∪𝒩)^n to each relation symbol R in 𝐒 of arity n. Conditional instances extend naive instances by attaching conditions to the tuples. Formally, an element-condition is a positive boolean combination of formulas of the form x = y and x ≠ y, where x ∈𝒩 and y ∈ (D ∪𝒩). Then, a conditional instance T over schema 𝐒 assigns to each n-ary relation symbol R in 𝐒 a pair (R^T,ρ^T_R), where R^T ⊆ (D ∪𝒩)^n and ρ^T_R assigns an element-condition to each tuple t ∈ R^T. A conditional instance T is positive if none of the element-conditions in its tuples uses inequalities (of the form x ≠ y). To define the semantics, let 𝒩(T) be the set of all nulls in any tuple in T or in an element-condition used in T. Given a substitution ν: 𝒩(T) → D, let ν^* be the extension of ν to a substitution D ∪𝒩(T) → D that is the identity on D. We say that ν satisfies an element-condition ψ, and write ν⊨ψ, if for every equality x = y in ψ it is the case that ν^*(x) = ν^*(y) and for every inequality x ≠ y we have that ν^*(x) ≠ν^*(y). Furthermore, we define the set ν(R^T) as {ν^*(t) | t ∈ R^T and ν⊨ρ^T_R(t)}. Finally, for a conditional instance T, ν(T) is the instance that assigns ν(R^T) to each relation R in the schema.
The set of instances represented by T, denoted by Rep(T), is defined as Rep(T) = {I | there is a substitution ν such that I extends ν(T)}. Note that the instances I in this definition may have bigger schemas than ν(T); in other words, we consider the set Rep(T) to contain instances over any schema extending the schema of T. The next result states that conditional instances are good over-approximations for the outcomes of sequences of procedures. More interestingly, these approximations preserve the minimal instances of outcomes. To put this formally, we say that an instance J in a set 𝒦 of instances is minimal if there is no instance J' ∈𝒦, J' ≠ J, such that J extends J'. Let I be an instance and P_1,…,P_n a sequence of procedures in 𝒫^safe,alter. Then either Out_P_1,…,P_n(I) = ∅, or one can construct, in exponential time (or polynomial time if n is fixed), a conditional instance T such that * Out_P_1,…,P_n(I) ⊆ Rep(T); and * if J is a minimal instance in Rep(T), then J is also minimal in Out_P_1,…,P_n(I). We remark that this proposition can be extended to include procedures defined only with egds, at the cost of a much more technical presentation. While having an approximation with these properties is useful for reasoning tasks related to CQ answering, or in general for checking any criterion that is closed under extensions of instances, there is still the question of whether one can find any reasonable class of procedures whose entire outcomes can be represented by these tables. However, as the following example shows, this does not appear to be possible, unless one is restricted to sequences of procedures almost without interaction with each other (see an example in Appendix <ref>). Consider a procedure P = (𝒮,Σ_pre,Σ_post,𝒬_safe) with safe scope, where 𝒮 = {S[*]}, Σ_pre is empty, Σ_post = {R(A:x) → S(A:x)} and 𝒬_safe = {S}. Consider now the conditional instance T over the schema with relations R and S, both with attribute A, given by R^T = {1,2} and S^T = {1,2}. One could be tempted to say that T is itself a representation of the set Out_P(Rep(T)), and indeed Rep(T) and Out_P(Rep(T)) share their only minimal instance (essentially, the instance given by T). However, the open-world assumption behind Rep(T) allows for instances that do not satisfy Σ_post, whereas all outcomes in Out_P(Rep(T)) must satisfy Σ_post. One can in fact generalize this argument to show that conditional instances are not enough to fully represent outcome sets. Example <ref> suggests that one could perhaps combine conditional instances with a knowledge base, to allow for a complete representation of the outcome set of sequences of safe procedures. However, this would require studying the interplay of these two different types of representation systems, a line of work that is interesting in its own right.
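To make the semantics of conditional instances concrete, here is a small sketch of ours (the encoding is an assumption, and it covers positive conditional instances only): nulls are strings starting with "_", element-conditions are lists of equality pairs, and ν(T) is computed by dereferencing nulls and filtering tuples by their conditions.

```python
from itertools import product

def satisfies_condition(nu, cond):
    """nu |= cond for a positive condition: a list of equality pairs."""
    deref = lambda u: nu.get(u, u)          # nu*: the identity on constants
    return all(deref(x) == deref(y) for x, y in cond)

def valuations(T, domain):
    """All substitutions nu from the nulls of T into a candidate domain.
    T maps each relation name to a list of (tuple, condition) pairs."""
    nulls = set()
    for rel in T.values():
        for tup, cond in rel:
            nulls |= {v for v in tup if str(v).startswith("_")}
            nulls |= {v for x, y in cond for v in (x, y) if str(v).startswith("_")}
    nulls = sorted(nulls)
    for vals in product(domain, repeat=len(nulls)):
        yield dict(zip(nulls, vals))

def nu_of_T(nu, T):
    """The instance nu(T): keep a tuple iff nu satisfies its condition."""
    deref = lambda u: nu.get(u, u)
    return {r: {tuple(deref(v) for v in tup)
                for tup, cond in rel if satisfies_condition(nu, cond)}
            for r, rel in T.items()}
```

Testing whether a concrete instance belongs to Rep(T) then amounts to finding some valuation ν such that the instance extends ν(T).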
§ FUTURE WORK AND OPPORTUNITIESIn this paper, we introduced basic building blocks for a proposed framework for assessing achievability of data-quality constraints. We demonstrated that the framework is general enough to represent nontrivial database tasks, and exhibited realistic classes of procedures for which reasoning tasks can be tractable. Our next step is to address the problem of assessing achievability of constraints, which can be formalized as follows. Let Q be a boolean query, Π a set of procedures, and I an instance over a schema 𝐒. Then we say that I can be readied for Q using Π if there is a sequence P_1,…,P_n of procedures (possibly empty and possibly with repetitions) from Π such that Q is compatible with, and true in, each instance I' in the set Out_P_1,…,P_n(I). (If the latter conditions involving Q are true on I, then we say that I is ready for Q.) We are confident that this problem is decidable for sets of procedures in 𝒫^safe,alter, and we plan on looking into more expressive fragments. The proposed framework presents opportunities for several directions of further research. One line of work would involve understanding how to represent outcomes of sequences of procedures, or how to obtain good approximations of outcomes of more expressive classes of procedures. To solve this problem, we would need a better understanding of the interplay between conditional tables and knowledge bases, which would be interesting in its own right. We also believe that our framework is general enough to allow reasoning on other data paradigms, or even across various different data paradigms. Our black-box abstraction could, for example, offer an effective way to reason about procedures involving unstructured text data, or even data transformations using machine-learning tools, as long as one can obtain some guarantees on the data outcomes of these tools.§ ADDITIONAL EXAMPLES §.§ SQL data-modification statements We show how to encode arbitrary SQL INSERT and DELETE statements as procedures. Due to dealing with arbitrary SQL, we relax the constraints and queries that we use. INSERT statements: Consider a SQL statement of the form INSERT INTO R (Q), where Q is a relational-algebra query. 𝒮: Not surprisingly, the scope of the procedure is the relation R. Σ_pre: The precondition for the procedure is that all the relation names and attributes mentioned in Q must be present in the database. Σ_post: The postcondition is stated using the constraint Q ⊆ R. (Note that the SQL statement only works when Q and R have the same arity.) 𝒬_safe: Since we are inserting tuples, we need the total query R to be preserved. Alternatively, we can specify an INSERT statement of the form INSERT INTO R VALUES (a̅), with a̅ a tuple of values. In order to formalize this, we just need to change the postcondition of the procedure to {a̅} ⊆ R. DELETE statements: Consider a SQL statement of the form DELETE FROM R WHERE C, in which C is a boolean combination of conditions. 𝒮: The scope is the relation R, as expected. Σ_pre: The precondition for the procedure is that all the relations and attributes mentioned in C must be present in the database. Σ_post: There are no postconditions for this procedure. 𝒬_safe: Let Q_C be the query SELECT * FROM R WHERE C. Then the safety query is R − Q_C, which preserves only those tuples that are not to be deleted.
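As a small illustration of the DELETE encoding just described, the following sketch of ours materializes the safety query R − Q_C for an in-memory instance; the Python predicate `where` stands in for the SQL condition C, and the relation and attribute names are assumptions.

```python
def delete_procedure(relation, where):
    """(scope, safety query) for DELETE FROM relation WHERE where."""
    def safety_query(instance):
        # R - Q_C: the tuples the statement must NOT delete
        return [t for t in instance[relation] if not where(t)]
    return relation, safety_query

scope, safe = delete_procedure("LocVisits", lambda t: t["facilityId"] == "f1")
# a candidate outcome J is acceptable only if every tuple in safe(I) is in J
```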
§.§ Representing sequences of procedures As we mentioned, one possibility for obtaining a full representation of sequences of procedures is to further restrict the scope of sequences of safe procedures. To be more precise, let us say that a sequence P_1,…,P_n of procedures is a safe sequence if (1) each P_i is either an alter-schema procedure or a safe-scope procedure that only uses tgds, and (2) for every 1 ≤ j ≤ n, none of the atoms on the right-hand side of a tgd in P_j is part of the scope of any P_i with i < j. Intuitively, safe sequences of procedures restrict the possibility of sequencing data-migration tasks where the result of one migration is used as an input for the next one. A conditional instance with scope is a pair 𝒢 = (T,𝒦), where T is a conditional instance and 𝒦 is a set of relation names. The set of instances represented by 𝒢, denoted again by Rep(𝒢), now contains all the instances J in Rep(T) such that, for some substitution ν witnessing that J extends ν(T), for each relation R in the schema of ν(T) that is not in 𝒦, the projection of R^J over the attributes of R in T is the same as R^ν(T). (In other words, we allow extra tuples only in the relations whose symbols are in the set 𝒦.) It is now not difficult to show the following result. For each instance I and each safe sequence P_1,…,P_n of procedures, one can construct a conditional instance 𝒢 with scope such that Rep(𝒢) = Out_P_1,…,P_n(I).§ PROOFS AND INTERMEDIATE RESULTS §.§ Proof of Proposition <ref> The reduction is from the complement of the embedding problem for finite semigroups, shown to be undecidable in <cit.>, and it is itself an adaptation of the proof of Theorem 7.2 in <cit.>. Note that, since we do not intend to add attributes or relations in the procedures of this proof, we can drop the named notation for queries, treating CQs now as ordinary conjunctions of relational atoms. The embedding problem for finite semigroups can be stated as follows. Consider a pair A = (A,g), where A is a finite set and g: A × A → A is a partial associative function. We say that A is embeddable in a finite semigroup if there exists B = (B,f) such that A ⊆ B and f: B × B → B is a total associative function. The embedding problem for finite semigroups is to decide whether an arbitrary A = (A,g) is embeddable in a finite semigroup. Consider the schema 𝐒 = {C(·,·), E(·,·), N(·,·), G(·,·,·), F(·), D(·), R(·)}. The idea of the proof is as follows. We use relation G to encode binary functions, so that a tuple (a,b,c) in G intuitively corresponds to saying that g(a,b) = c, for a function g. Using our procedures we shall mandate that the binary function encoded in G is total and associative. We then encode A = (A,g) into our input instance I: the procedure will then try to embed A into a semigroup whose function is total. In order to construct the procedures, we first specify the following set Σ of tgds. First we add to Σ a set of dependencies ensuring that all elements in the relation G are collected into D:G(x,u,v)→D(x) G(u,x,v)→D(x) G(u,v,x)→D(x) The next set ensures that G is total and associative: D(x) ∧ D(y)→ ∃ z G(x,y,z) G(x,y,u) ∧ G(u,z,v) ∧ G(y,z,w)→G(x,w,v)Next we include dependencies that are intended to force relation E to be an equivalence relation over all elements in the domain of G.D(x)→E(x,x) E(x,y)→E(y,x) E(x,y) ∧ E(y,z)→E(x,z) The next set of dependencies we add to Σ ensures that G represents a function that is consistent with the equivalence relation E. G(x,y,z) ∧ E(x,x') ∧ E(y,y') ∧ E(z,z')→G(x',y',z') G(x,y,z) ∧ G(x',y',z') ∧ E(x,x') ∧ E(y,y')→E(z,z') The final tgd in Σ serves to collect possible errors when trying to embed A = (A,g). The intuition for this tgd will be made clear once we outline the reduction, but the idea is to state that the relation F contains everything that is in R, as long as a certain property holds on relations E, C and N. E(x,y) ∧ C(u,x) ∧ C(v,y) ∧ N(u,v) ∧ R(w)→F(w) Let Σ consist of the tgds (1)-(11). We construct fixed procedures P_1 = (𝒮^1,Σ_pre^1,Σ_post^1,𝒬_safe^1) and P_2 = (𝒮^2,Σ_pre^2,Σ_post^2,𝒬_safe^2) as follows. Procedure P_1: 𝒮^1: The scope of P_1 consists of relations G, E, D and F, which corresponds to the constraints {G[*],E[*],D[*],F[*]}. Σ_pre^1: There are no preconditions for this procedure. Σ_post^1: The postconditions are the tgds in Σ. 𝒬_safe^1: This query ensures that no information is deleted from any of G, E, D and F (and thus that no attributes are added to them): G(x,y,z) ∧ E(u,v) ∧ D(w) ∧ F(p).Procedure P_2: 𝒮^2: The scope of P_2 is empty. Σ_pre^2: The precondition for this procedure is R(x) → F(x). Σ_post^2: There are no postconditions. 𝒬_safe^2: There is no safety query.
Note that P_2 does not really do anything; it is only there to check that R is contained in F. We can now state the reduction. On input A = (A,g), where A = {a_1,…,a_n}, we construct an instance I_A given by the following interpretations:* E^I_A contains the pair (a_i,a_i) for each 1 ≤ i ≤ n (that is, for each element of A); * G^I_A contains the triple (a_i,a_j,a_k) for each a_i,a_j,a_k ∈ A such that g(a_i,a_j) = a_k;* D^I_A and F^I_A are empty, while R^I_A contains a single element d not in A; * C^I_A contains the pair (i,a_i) for each 1 ≤ i ≤ n; and * N^I_A contains the pair (i,j) for each i ≠ j, 1 ≤ i ≤ n and 1 ≤ j ≤ n. Let us now show that A = (A,g) is embeddable in a finite semigroup if and only if Out_P_1(I_A) contains an instance J that does not satisfy the precondition R(x) → F(x) of procedure P_2.(⟹) Assume that A = (A,g) is embeddable in a finite semigroup, say the semigroup B = (B,f), where f is total. Let J be the instance such that E^J is the identity over B, D^J = B, and G^J contains a triple (b_1,b_2,b_3) if and only if f(b_1,b_2) = b_3; F^J is empty, and relations N, C and R are interpreted as in I_A. It is easy to see that J ⊨ Σ, that Q_𝐒∖𝒮^1 is preserved, and that 𝒬_safe^1(I_A) ⊆ 𝒬_safe^1(J), this last because A was assumed to be embeddable in B. We then have that J belongs to Out_P_1(I_A), but J does not satisfy the constraint R(x) → F(x). (⟸) Assume now that there is an instance J ∈ Out_P_1(I_A) that does not satisfy R(x) → F(x). Note that, because of the scope of P_1, the interpretations of C, N and R in J must be just as in I_A. Thus it must be that the element d is not in F^J, because it is the only element in R^J. Construct a finite semigroup B = (B,f) as follows. Let B consist of one representative of each equivalence class in E^J, with the additional restriction that each a_i in A must be picked as its own representative. Further, define f(b_1,b_2) = b_3 if and only if (b_1,b_2,b_3) is in G^J. Note that J satisfies the tgds in Σ; in particular, G is associative and E acts as an equivalence relation over the domain of G, which means that f is indeed associative, total, and well defined. It remains to show that A can be embedded in B; but since G^J and E^J are supersets of G^I_A and E^I_A (because of the safety query of P_1), all we need to show is that each a_i is in a separate equivalence class. But this holds because of tgd (11) in Σ: if two elements from A were in the same equivalence class, then the left-hand side of (11) would hold in J, which contradicts the fact that F^J does not contain d. §.§ Proof of Proposition <ref> Let P = (𝒮,Σ_pre,Σ_post,𝒬_safe). We first show how to construct, for each instance I over a schema 𝐒, the minimal schema 𝐓 such that all pairs (J,𝐒') that are possible outcomes of applying P over (I,𝐒) are such that 𝐒' extends 𝐓. The algorithm receives a procedure P and a schema 𝐒 and outputs either 𝐓, if the procedure is applicable, or a failure signal in case there is no schema satisfying the output constraints of the procedure. Along the algorithm we will be assigning numbers to some of the relations in 𝐓. This is important to be able to decide failure. Algorithm A(P,𝐒) for constructing 𝐓. Input: procedure P = (𝒮,Σ_pre,Σ_post,𝒬_safe) and schema 𝐒. Output: either failure or a schema 𝐓. * If 𝐒 does not satisfy the structure constraints in Σ_pre, or is not compatible with either 𝒬_safe or Q_𝐒∖𝒮, output failure. Otherwise, continue. * Start with 𝐓 = ∅. * For each total query R in 𝒬_safe, assume that |𝐒(R)| = k.
Set 𝐓(R) = 𝐒(R), and label R with k.* Add to 𝐓 all relations R mentioned in an atom R[*] in 𝒮 (if they are not already part of 𝐓), without associating any attributes to them. * In the following instructions we construct a set Γ(P,𝐒) of pairs of relations and attributes. Intuitively, a pair (R,{a_1,…,a_n}) in Γ(P,𝐒) states that each schema in the output of P must contain a relation R with attributes a_1,…,a_n.* For each relation R in 𝐒 that is not mentioned in 𝒮, add to Γ(P,𝐒) the pair (R,𝐒(R)).* For each constraint R[a_1,…,a_n] in 𝒮, add the pair (R,𝐒(R) ∖{a_1,…,a_n}) to Γ(P,𝐒).* For each atom R(a_1:x_1,…,a_n:x_n) in 𝒬_safe, add to Γ(P,𝐒) the pair (R,{a_1,…,a_n}). * For each atom R(a_1:x_1,…,a_n:x_n) in a tgd or egd in Σ_post, add to Γ(P,𝐒) the pair (R,{a_1,…,a_n}). * For each constraint R[a_1,…,a_n] in Σ_post, add to Γ(P,𝐒) the pair (R,{a_1,…,a_n}).* For each pair (R,A) in Γ(P,𝐒), do the following. * If R is not yet in 𝐓, add R to 𝐓 and set 𝐓(R) = A; * If R is in 𝐓, update 𝐓(R) = 𝐓(R) ∪ A. * If 𝐓 contains a relation R labelled with a number k such that |𝐓(R)| > k, output failure. Otherwise output 𝐓. By direct inspection of the algorithm, we can state the following. Let P = (𝒮,Σ_pre,Σ_post,𝒬_safe) be a relational procedure and 𝐒 a relational schema. Then for each relation R in 𝐓 with attributes {a_1,…,a_n}, every instance I over 𝐒, and every pair (J,𝐒') in the outcome of applying P to (I,𝐒), we have that 𝐒'(R) is defined, with {a_1,…,a_n}⊆ 𝐒'(R). Furthermore, the following lemma specifies, in a sense, the correctness of the algorithm. Let P = (𝒮,Σ_pre,Σ_post,𝒬_safe) be a relational procedure and 𝐒 a relational schema. Then: i) If A(P,𝐒) outputs failure, either P cannot be applied over any instance I over 𝐒, or for each instance I over 𝐒 the set Out_P(I) is empty. ii) If A(P,𝐒) outputs 𝐓, then the schema of any instance in Out_P(I) extends 𝐓. For i), note that if some of the components of P are not compatible with 𝐒, or 𝐒 does not satisfy the constraints in Σ_pre, then clearly P cannot be applied over any instance I over 𝐒. Assume then that 𝐒 satisfies all compatibilities and preconditions in P, but A(P,𝐒) outputs failure. Then 𝐓 contains a relation R labelled with a number k = |𝐒(R)| such that |𝐓(R)| > k, and there is a total query R in 𝒬_safe. Clearly, 𝒬_safe cannot be preserved under any outcome: by Observation <ref> we require the schemas of outcomes to assign more attributes to R than those assigned by 𝐒, and thus the arity of the tuples in the answer to the total query R differs between I and its possible outcomes. Finally, item ii) is a direct consequence of Observation <ref>. Note that the algorithm A(P,𝐒) runs in polynomial time, and that the total size of 𝐓 (measured as the number of relations and attributes) is at most the size of 𝐒 and P combined. Thus, to decide the applicability problem for a sequence P_1,…,P_n of procedures, all we need to do is to perform subsequent calls to the algorithm, setting 𝐓_0 = 𝐒 and then using 𝐓_i = A(P_i,𝐓_i-1) as the input for the next procedure. If A(P_n,𝐓_n-1) outputs a schema, then the answer to the applicability problem is affirmative; otherwise, if some call to A(P_i,𝐓_i-1) outputs failure, the answer is negative. §.§ Proof of Proposition <ref> This proof is a simple adaptation of the reduction we used in the proof of Proposition <ref>. Indeed, consider again the schema 𝐒 from that proof, and the procedure P given by: 𝒮: The scope of P consists of relations G, E, D and F, which corresponds to the constraints G[*], E[*], D[*] and F[*]. Σ_pre: There are no preconditions for this procedure.
Σ_post: The postconditions are the tgds in Σ plus the tgd F(x) → R(x). 𝒬_safe: This query ensures that no information is deleted from any of G, E, D and F: G(x,y,z) ∧ E(u,v) ∧ D(w) ∧ F(p). Given an input A = (A,g), we construct now the following instance I_A:* E^I_A contains the pair (a_i,a_i) for each 1 ≤ i ≤ n (that is, for each element of A); * G^I_A contains the triple (a_i,a_j,a_k) for each a_i,a_j,a_k ∈ A such that g(a_i,a_j) = a_k;* All of D^I_A, F^I_A and R^I_A are empty; * C^I_A contains the pair (i,a_i) for each 1 ≤ i ≤ n; and * N^I_A contains the pair (i,j) for each i ≠ j, 1 ≤ i ≤ n and 1 ≤ j ≤ n.By a similar argument as the one used in the proof of Proposition <ref>, one can show that Out_P(I_A) is nonempty if and only if A is embeddable in a finite semigroup. The intuition is that now we are adding the constraint F(x) → R(x) as a postcondition, and since R is not part of the scope of the procedure, the only way to satisfy this restriction is if we do not fire the tgd (11) of the set Σ constructed in the aforementioned proof. This, in turn, can only happen if A is embeddable.§.§ Proof of Proposition <ref> The reduction, just as that of Proposition <ref>, is by reduction from the embedding problem for finite semigroups, and builds on this proposition. Let us start by defining the procedures P_1, P_2 and P_3. For procedure P_1 we first build a set Γ_1 of tgds. This set is similar to the set Σ used in Proposition <ref>, but uses three additional dummy relations G^d, E^d and G^binary. First we add to Γ_1 dependencies that collect elements of G into D, and that initialize E as a reflexive relation. G(x,u,v)→D(x) G(u,x,v)→D(x) G(u,v,x)→D(x) D(x)→E(x,x) Next comes the dependency that states that F contains everything in R if a certain condition about E occurs. E(x,y) ∧ C(u,x) ∧ C(v,y) ∧ N(u,v) ∧ R(w)→F(w) The dependencies that ensured that E is an equivalence relation were cyclic, so we replace their right-hand sides with a dummy relation. E(x,y)→E^d(y,x) E(x,y) ∧ E(y,z)→E^d(x,z) Next come the dependencies ensuring that G is a total and associative function, using also dummy relations. D(x) ∧ D(y)→G^binary(x,y) G(x,y,u) ∧ G(u,z,v) ∧ G(y,z,w)→G^d(x,w,v) Finally, the dependencies that were supposed to ensure that E works as the equality over the function G, using again the dummy relations. G(x,y,z) ∧ E(x,x') ∧ E(y,y') ∧ E(z,z')→G^d(x',y',z') G(x,y,z) ∧ G(x',y',z') ∧ E(x,x') ∧ E(y,y')→E^d(z,z') We can now define procedure P_1: 𝒮^1: The scope of P_1 consists of relations G, E, D, F, G^d, E^d and G^binary, which corresponds to the constraints G[*], E[*], D[*], F[*], E^d[*], G^d[*] and G^binary[*]. Σ_pre^1: There are no preconditions for this procedure. Σ_post^1: The postconditions are the tgds in Γ_1. 𝒬_safe^1: This query ensures that no information is deleted from any of G, E, D, F, G^d, E^d and G^binary: G(x,y,z) ∧ E(u,v) ∧ D(w) ∧ F(p) ∧ G^d(x',y',z') ∧ E^d(u',v') ∧ G^binary(a,b). Note that, even though relations G and E are not mentioned on the right-hand side of any tgd in Γ_1, they are part of the scope and thus they could be modified by the procedure P_1. The procedure P_2 has no scope, no safety queries, and no precondition, and the only postcondition is the presence of a third attribute, say C, in G^binary, by using a structure constraint G^binary[A,B,C] (to maintain consistency with our unnamed perspective, we assume that these three attributes are ordered A <_𝒜 B <_𝒜 C). To define the final procedure, consider the following set of tgds Γ_3. E^d(x,y)→E(x,y) G^d(x,y,z)→G(x,y,z) G^binary(x,y,z)→G(x,y,z) F(x)→F^check(x) Then we define procedure P_3 as follows.
𝒮^3: The scope of P_3 is again empty. Σ_pre^3: There are no preconditions for this procedure. Σ_post^3: The postconditions are the tgds in Γ_3. 𝒬_safe^3: There are also no safety queries for this procedure. Let 𝐒 be the schema containing relations G, E, D, F, F^check, G^d, E^d, G^binary and R. The attribute names are of no importance for this proof, except for G^binary, which associates attributes A and B. Given an input A = (A,g), we construct now the following instance I_A:* E^I_A contains the pair (a_i,a_i) for each 1 ≤ i ≤ n (that is, for each element of A); * G^I_A contains the triple (a_i,a_j,a_k) for each a_i,a_j,a_k ∈ A such that g(a_i,a_j) = a_k;* All of D^I_A, F^I_A and F^check^I_A are empty; * R^I_A has a single element d not used elsewhere in I_A;* C^I_A contains the pair (i,a_i) for each 1 ≤ i ≤ n; and * N^I_A contains the pair (i,j) for each i ≠ j, 1 ≤ i ≤ n and 1 ≤ j ≤ n.Let us now show that A = (A,g) is embeddable in a finite semigroup if and only if Out_P_1,P_2,P_3(I_A) is nonempty. (⟹) Assume that A = (A,g) is embeddable in a finite semigroup, say the semigroup B = (B,f), where f is total. Let J be the instance over 𝐒 such that both E^d^J and E^J are the identity over B, D^J = B, both G^d^J and G^J contain a triple (b_1,b_2,b_3) if and only if f(b_1,b_2) = b_3, G^binary^J is the projection of G^J over its first two attributes, F^J and F^check^J are empty, and relations N, C and R are interpreted as in I_A. It is easy to see that J is in the outcome of applying P_1 over I_A. Now, let 𝐒' be the extension of 𝐒 where G^binary has an extra attribute, C, and let K be an instance over 𝐒' that is just like J except that G^binary^K is now the same as G^J (and therefore G^K). By definition we obtain that K is a possible outcome of applying P_2 over J, and therefore K is in Out_P_1,P_2(I_A). Furthermore, one can see that the same instance K is again an outcome of applying P_3 over K, and we therefore obtain that Out_P_1,P_2,P_3(I_A) is nonempty. (⟸) Assume now that there is an instance L ∈ Out_P_1,P_2,P_3(I_A). Then by definition there are instances J and K such that J is in Out_P_1(I_A), K is in Out_P_2(J), and L is in Out_P_3(K). Let J^* be the restriction of J to the schema 𝐒. From a simple inspection of P_1 we have that J^* satisfies the dependencies in Γ_1 as well, so that J^* is in Out_P_1(I_A). Let now 𝐒' be the extension of 𝐒 that also assigns attribute C to G^binary. Now, since K is an outcome of P_2 over J and P_2 has no scope, if we define K^* as the restriction of K to 𝐒', then clearly K^* must be in the outcome of applying P_2 over J^*. Note that, by the definition of P_3 (since its scope is empty), the restriction of L to the schema of K must be the same instance as K, and therefore the restriction L^* of L to 𝐒' must be the same instance as K^*. Furthermore, since L (and thus L^*) satisfies the constraints in Γ_3, and the constraints only mention relations and atoms in 𝐒', we have that K^* must be an outcome of applying P_3 over (K^*,𝐒'). We now claim that K^* satisfies all tgds (1)-(11) in the proof of Proposition <ref>. Tgds (1-3) and (6) are immediate from the scopes of the procedures, and the satisfaction of all the remaining ones is shown in the same way. For example, to see that K^* satisfies E(x,y) → E(y,x), note that J^* already satisfies E(x,y) → E^d(y,x). From the fact that the interpretations of E^d and E are the same over J^* and K^*, and that K^* satisfies E^d(x,y) → E(x,y), we obtain the desired result. Finally, since K^* satisfies F(x) → F^check(x), and the interpretation of F^check over all of I_A, J^* and K^* must be empty, we have that the interpretation of F over K^* is empty as well.
Given that K^* satisfies all dependencies in Σ, it must be the case that the left-hand side of the tgd (11) is not true in K^* for any possible assignment. By using the same argument as in the proof of Proposition <ref>, we obtain that A = (A,g) is embeddable in a finite semigroup. §.§ Proof of Theorem <ref>This theorem is an immediate corollary of Proposition <ref>, together with an inspection of the complexity of computing the over-approximation. We provide all details in the proof of the next proposition (Proposition <ref>).§.§ Proof of Proposition <ref> For the proof we assume that the procedures do not use preconditions. Preconditions can be treated by performing an initial compatibility check, which we omit as it would only complicate the proof. We also specify an alternative set of representatives for conditional instances (which is actually the usual one). The set Rep'(G) of representatives of a conditional instance G is simply Rep'(G) = {I | there is a substitution ν such that ν(G) ⊆ I}. That is, Rep'(G) only contains instances over the same schema as G. The following lemma allows us to work with this representation instead; it is immediate from the definition of safe-scope procedures. If G is a conditional instance, then (1) Rep'(G) ⊆ Rep(G), and (2) an instance J is minimal for Rep(G) if and only if it is minimal for Rep'(G). Moreover, from the fact that procedures with safe scope are acyclic, we can state Theorem 5.1 of <cit.> in the following terms: Given a set Σ of tgds and a positive conditional instance G, one can construct, in polynomial time, a positive conditional instance G' such that (1) Rep'(G') ⊆ Rep'(G) and (2) all minimal models of Rep'(G') satisfy Σ. Moreover, by slightly adapting the proof of Proposition 4.6 in <cit.>, we can see that the conditional instance constructed above has even better properties. In order to prove this, all that one needs to do is to adapt the notion of solutions for data exchange to a scenario where the target instance may already have some tuples (which will not fire any dependencies, because of the safeness of procedures). Let P = (𝒮,Σ_pre,Σ_post,𝒬_safe) be a procedure with safe scope, and let G be a positive conditional instance. Then one can construct a conditional instance G' such that, for every minimal instance I of Rep(G), the set Rep(G') contains all minimal instances in Out_P(I), and for every minimal instance J in Rep(G') there is a minimal instance I of Rep(G) such that J is minimal for Out_P(I). Finally, we can show the key result for this proof. Let 𝒦 be a set of instances, G a conditional table whose minimal instances are exactly the minimal instances of 𝒦, and P = (𝒮,Σ_pre,Σ_post,𝒬_safe) a procedure with safe scope. Then either Out_P(𝒦) = ∅ or one can construct, in polynomial time, a conditional instance G' such that i) Out_P(𝒦) ⊆ Rep(G'); and ii) if J is a minimal instance in Rep(G'), then J is also minimal in Out_P(𝒦). Using the chase procedure mentioned in Lemma <ref>, we see that the conditional table G' produced in that lemma satisfies the conditions of this lemma for Rep(G). For i), let J be an instance in Out_P(𝒦). Then there is an instance I in 𝒦 such that J ∈ Out_P(I). Let I^* be a minimal instance in 𝒦 such that I extends I^*. By our assumption we know that I^* belongs to Rep(G), and since I^* is minimal it must be the case that I^* belongs to (and is minimal for) Rep'(G). Therefore, by Lemma <ref> we have that Rep(G') contains all minimal instances of Out_P(I^*). But now notice that, for every assignment τ and tgd λ such that (I^*,τ) satisfies the left-hand side of λ, we have that (I,τ) satisfies it as well.
This means that every instance in the set _P(I) must extend a minimal instance in _P(I^*) (otherwise, a tgd would not be satisfied, due to some assignment that could not be extended). Since every minimal instance in _P(I^*) is in (G'), by the semantics of conditional tables it must be the case that J belongs to (G') as well, and therefore to (G'). Item (ii) follows from the fact that any minimal instance in (G') must also be minimal for (G'), together with a direct application of Lemma <ref>. The next lemma constructs the desired outcomes for alter-schema procedures. Consider a set of instances, a conditional table G that is minimal for it, and an alter-schema procedure P = (,,,_). Then either _() = ∅ or one can construct, in polynomial time, a conditional instance G' such that i) _() ⊆ (G'); and ii) if J is a minimal instance in (G'), then J is also minimal in _(). Assume that _() ≠ ∅ (this can easily be checked in polynomial time). Then one can compute the schema from the proof of Proposition <ref>. This schema will add some attributes to some relations in the schema of G, and possibly some other relations with other sets of attributes. Let (G) = . We extend G to a conditional table G' over the new schema as follows:* For every relation R such that (R) ∖ (R) = {A_1,…,A_n}, with n ≥ 1, form the tuples of G' by adding to each tuple in G a fresh null value in each of the attributes A_1,…,A_n. * For every relation R such that (R) is not defined, but (R) is defined, set R^G' = ∅. The properties of the lemma now follow from a straightforward check. The proof of Proposition <ref> now follows from successive applications of Lemmas <ref> and <ref>: one just needs to compute the appropriate conditional table for each procedure in the sequence P_1,…,P_n. That each construction is of polynomial size if the number n of procedures is fixed, and of exponential size otherwise, also follows from these lemmas, as the size of the conditional table G', for a procedure P and a conditional table G, is at most polynomial in G and P (and thus we are composing a polynomial number of polynomials, or a fixed number if n is fixed).Proof of Theorem <ref>: While checking whether the set represented by an arbitrary conditional instance is nonempty may in general be NP-complete, we note that in <cit.> it was shown that, for Lemma <ref>, all that is needed is a positive conditional instance, and deciding whether a positive conditional instance represents at least one solution is clearly possible in polynomial time. Thus, for the proof of the theorem we just compute the (positive) conditional instance exhibited for Proposition <ref> and then perform the polynomial-time check on the final conditional instance. | http://arxiv.org/abs/1703.09141v1 | {
"authors": [
"Rada Chirkova",
"Jon Doyle",
"Juan L. Reutter"
],
"categories": [
"cs.DB"
],
"primary_category": "cs.DB",
"published": "20170327152501",
"title": "A Framework for Assessing Achievability of Data-Quality Constraints"
} |
1Department of Astronomy, New Mexico State University, Las Cruces, NM 88001
2CSIRO Astronomy and Space Science, 26 Dick Perry Ave, Kensington WA 6151 Australia
3The Netherlands Institute for Radio Astronomy (ASTRON), Dwingeloo, The Netherlands
4Kapteyn Astronomical Institute, University of Groningen, Postbus 800, 9700 AV Groningen, The Netherlands
5Dept. of Physics and Astronomy, University of Bologna, Viale Berti Pichat 6/2, 40127, Bologna, Italy
6Department of Astronomy, University of Washington, Box 351580, Seattle, WA 98195
7Department of Physics and Astronomy, University of New Mexico, 1919 Lomas Blvd. NE, Albuquerque, NM 87131
8SKA South Africa Radio Astronomy Research Group, 3rd Floor, The Park, Park Road, Pinelands 7405, South Africa
9Rhodes Centre for Radio Astronomy Techniques & Technologies, Department of Physics and Electronics, Rhodes University, PO Box 94, Grahamstown 6140, South Africa
10Argelander-Institut für Astronomie, Auf dem Hügel 71, D-53121 Bonn, Germany
11Department of Physics and Astrophysics, Vrije Universiteit Brussel, Pleinlaan 2, 1050 Brussels, Belgium
12INAF – Osservatorio Astronomico di Cagliari, Via della Scienza 5, I-09047 Selargius (CA), Italy
We use new deep 21 cm HI observations of the moderately inclined galaxy NGC 4559 from the HALOGAS survey to investigate the properties of extra-planar gas. We use TiRiFiC to construct simulated data cubes to match the HI observations. We find that a thick disk component of scale height ∼2 kpc, characterized by a negative vertical gradient in its rotation velocity (lag) of ∼13 ± 5 km s^-1 kpc^-1, is an adequate fit to the extra-planar gas features. The tilted ring models also present evidence for a decrease in the magnitude of the lag outside of R_25, and for a radial inflow of ∼10 km s^-1. We extracted lagging extra-planar gas through Gaussian velocity profile fitting. From both the 3D models and the extraction analyses we conclude that ∼10-20% of the total HI mass is extra-planar. Most of the extra-planar gas is spatially coincident with regions of star formation in spiral arms, as traced by Hα and GALEX FUV images, so it is likely due to star formation processes driving a galactic fountain. We also find the signature of a filament of HI at kinematically "forbidden" velocities, containing ∼1.4 × 10^6 M_⊙ of HI, and discuss its potential relationship to a nearby HI hole. We discover a previously undetected dwarf galaxy in HI, located ∼0.4^∘ (∼58 kpc) from the center of NGC 4559 and containing ∼4 × 10^5 M_⊙ of HI. This dwarf has counterpart sources in SDSS with spectra typical of HII regions, and we conclude that it is two merging blue compact dwarf galaxies.
§ INTRODUCTION
Substantial reservoirs of material have been documented to exist outside of the plane of disk galaxies <cit.>. This extra-planar material has been found at multiple wavelengths and in multiple emission sources, including X-rays <cit.>, dust <cit.>, Hα <cit.>, and HI <cit.>, indicating gas over a wide range of temperatures and densities. Extra-planar material is likely an excellent probe of the effects spiral galaxies have on their environments, and vice versa. It is possible that extra-planar matter could have originally been part of the disk of the underlying galaxy, but was expelled from the disk through various energetic processes, such as supernova explosions or stellar winds from massive stars. This material could then expand, and rain back down onto the galaxy after cooling. This process is referred to as the galactic fountain mechanism <cit.>.
The rotation velocity of this material about the galactic center is expected to decrease with distance from the plane <cit.>. This reduction in rotation velocity is generally referred to as a "lag", and is a signature of extra-planar matter. However, observations of extra-planar gas show lag magnitudes that are larger than can be reproduced with ballistic models, implying that additional mechanisms are needed to explain the behavior of this material <cit.>. <cit.> postulated that this extra interaction could be with a hot, but slowly rotating, corona of gas already residing above the disks of galaxies. Alternatively, a study by <cit.> using a hydrostatic disk model finds that pressure gradients or magnetic tension could also play a part in setting the magnitude of observed lags. Observationally, lagging gas can be found in both edge-on and inclined galaxies. In edge-on cases, one can directly measure the vertical extent and the lag in the rotational velocities of the extra-planar gas. In moderately inclined galaxies, disentangling extra-planar gas from disk gas is more difficult, but is possible if a detectable lag exists, as in <cit.>. Additionally, in such galaxies the connection with star formation across the disk is easier to establish. The Westerbork Hydrogen Accretion in LOcal GAlaxieS (HALOGAS) survey <cit.> targets 22 nearby moderately inclined and edge-on spiral galaxies for deep 21 cm observations using the Westerbork Synthesis Radio Telescope (WSRT). This survey has increased the sample of nearby galaxies for which extra-planar gas can be characterized, and one of its goals is to search for a connection between extra-planar HI and star formation, and for a connection between externally originating HI and galactic fountain gas. In this paper we present a detailed analysis of the HALOGAS data cube of the moderately inclined spiral NGC 4559. The Hubble type of the galaxy cited in <cit.> is SABcd, though a kinematic influence on the HI from a bar is not apparent. The adopted distance to the galaxy, 7.9 Mpc, was obtained as the median of all Tully-Fisher distances. A previous HI study by <cit.> (hereafter, B05) revealed many interesting details about the gas in NGC 4559. That study found evidence of ∼5.9 × 10^8 M_⊙ of extra-planar gas with a scale height of ∼2 kpc, rotating 25-50 km s^-1 slower than the uniform thin disk of HI. The extra-planar gas was found to be kinematically and spatially regular throughout the galaxy. Though accretion from the intergalactic medium (IGM) could not be ruled out, the regular extent of extra-planar gas suggests it is likely due to a widespread phenomenon, such as star formation across the disk. B05 used 3-D tilted ring models to model the extra-planar gas, in which the thick disk had a rotation curve separate from that of the thin disk. We build upon this result by constraining the magnitude of the vertical lag, rather than computing a separate rotation curve for the extra-planar gas. B05 also found evidence for a large HI hole at α = 12^h 36^m 4^s, δ = 27^∘ 57′ 7″, which would require ∼2 × 10^7 M_⊙ of HI to fill. They determined the HI distribution to be highly asymmetric between the approaching and receding halves of the galaxy. Interestingly, B05 also found evidence for a stream of gas located at "forbidden" velocity in the major axis position-velocity diagram, near the center of the galaxy, which they postulated to be associated with the aforementioned HI hole.
We further the discussion of the possible origins of this forbidden-velocity feature and its possible relation to the HI hole. The aim of this work is to expand upon the analysis done by B05 using the more sensitive HALOGAS observations of the same galaxy, together with ancillary Hα and GALEX FUV observations as tracers of star formation activity. We present the data in Sections 2 and 3, explore three-dimensional tilted ring models to characterize the presence of extra-planar gas in Section 4, and determine the mass and relation to star formation of the extra-planar gas in Section 5. In the remainder of Section 5 we further characterize the forbidden-gas feature discovered in B05 and discuss its potential origins and connection to the nearby HI hole. Section 6 discusses a new detection of HI in a nearby dwarf companion galaxy, while Section 7 concludes the paper with a discussion of the results.
§ DATA ACQUISITION AND REDUCTION
A brief overview of the data collection and reduction process is included here. We refer the reader to <cit.> for a comprehensive description. We used the HALOGAS survey observations of NGC 4559, taken in the Maxi-short WSRT configuration, with baselines ranging from 36 m to 2.7 km to maximize sensitivity to faint, extended emission. Nine of the ten fixed antennas were used, with a regular spacing of 144 m. The total bandwidth of 10 MHz was split into 1024 channels (2.06 km s^-1 per channel), with two linear polarizations. Galaxy observations spanned 10 × 12 hr between January and May 2011. We used Miriad <cit.> to perform the data reduction. Data properties are included in Table 2. We produced multiple data cubes using a variety of weighting schemes. The cubes were Hanning smoothed to obtain the final velocity resolution of 4.12 km s^-1 per channel, and a beam size of 28.38″ × 13.10″. The 1σ rms noise in a single channel of the full-resolution cube is 0.17 mJy beam^-1. The minimum detectable column density (5σ and 12 km s^-1 velocity width) of the full-resolution cube is 3.67 × 10^19 cm^-2. The widest-field data cube is 1024 × 1024 pixels of size 4″, giving it a field of view of 68′. The galaxy emission is much smaller than that, so we trim the field of view to ∼24′ due to data size considerations. We note the existence of strong solar interference in the data cube of NGC 4559. The observations of this galaxy were taken in ten 12-hour blocks between January and May 2011, a period of moderately high solar activity. Five of the ten tracks were taken in May, with 3.75-5.1 hours of exposed sunlight time. Though the angular separation between the galaxy and the Sun was kept as large as possible (∼120^∘), there is still solar interference affecting the short baselines. This solar interference was flagged in problematic baselines and timeframes during the data reduction. The flagging reduced the inner uv coverage, which lessens the sensitivity to extended, faint emission. Furthermore, remaining solar interference artifacts preclude cleaning the data cube to the deepest possible noise level of the HALOGAS data. We regard this solar interference as the most likely explanation for the lack of appreciable improvement in sensitivity to extended emission over B05.
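The Hanning smoothing step above admits a compact illustration. The following sketch (ours, not the actual Miriad pipeline) applies the standard three-point Hanning kernel along the velocity axis of a cube with numpy; the kernel halves the velocity resolution, e.g. from 2.06 to 4.12 km s^-1:

import numpy as np

def hanning_smooth(cube):
    # Convolve each spectrum (axis 0 = velocity) with the [1/4, 1/2, 1/4] kernel.
    kernel = np.array([0.25, 0.5, 0.25])
    return np.apply_along_axis(
        lambda spec: np.convolve(spec, kernel, mode="same"), 0, cube)

# Example on a small synthetic cube: 1024 channels, 16 x 16 pixels.
cube = np.random.normal(size=(1024, 16, 16))
smoothed = hanning_smooth(cube)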
However, we also note that the rms noise per channel is improved by a factor of ∼2 over B05, so we have improved point-source sensitivity, despite the lack of improvement in extended-emission sensitivity. To improve sensitivity to faint, extended emission, we smoothed the original HALOGAS data cube to a 30″ × 30″ beam, making the noise level in a single channel 0.24 mJy beam^-1, or 0.16 K. See Table 2 for details of the 30″ × 30″ beam cube. The cube was primary-beam corrected using the Miriad task "linmos" when calculating the total HI mass. We use the 30″ × 30″ cube for all tilted ring modelling. Moment maps were created using the "moments" task within the Groningen Image Processing System (GIPSY; <cit.>). Moment maps were created by first smoothing the original cube to 60″ × 60″, with which masks at the 5σ level were produced and applied to the full-resolution cube. GALEX FUV and ground-based Hα images allow us to investigate in more detail the correlation between the lagging HI layer and star formation in NGC 4559. The GALEX FUV image from <cit.> and a continuum-subtracted Hα image are included in our analysis. The Hα image was taken by one of the authors (MP) on March 21, 2012, with the Kitt Peak National Observatory (KPNO) 4-m telescope. The Hα exposure time was 30 minutes. The Mosaic instrument is known to produce artifacts in which bright stars may appear in multiple CCDs. We first removed this crosstalk between CCDs and trimmed the image. The image was bias subtracted, flat fielded with dark-sky flats, and stacked from dithered images. The image was then continuum subtracted with a 10-minute exposure R-band image taken on the same night. The pixel scale of the Hα image is 0.258″ per pixel.
§ HI MASS AND EXTENT OF THE HI DISK
We estimate the total HI mass using the primary-beam-corrected and masked total HI map, and assume the emission is optically thin. We use the standard conversion to column density, where the beam HPBW major and minor axes, a and b respectively, are in units of arcseconds: N(cm^-2) = 1.104 × 10^21 · F(mJy beam^-1 · km s^-1)/(a × b). The total HI mass obtained from the HALOGAS data cube is in very good agreement with that found in B05 when the same distance is assumed: 4.53 × 10^9 M_⊙ versus 4.48 × 10^9 M_⊙ in B05, using our assumed distance of 7.9 Mpc. As mentioned in B05, this agrees with alternate interferometric and single-dish measurements of the HI mass from <cit.> and <cit.>. This shows that the increase in integration time and sensitivity of the HALOGAS observations did not uncover a larger amount of HI to which the observations of B05 were not sensitive. We analyze the extent of the HI disk and its radial profile in the highest-resolution data cube, with a beam size of 28.38″ × 13.10″. The masked, high-resolution cube is integrated over the velocity axis to produce the total HI image, or integrated HI map, shown in the top panel of Figure <ref>. We use the GIPSY task ELLINT to create azimuthally averaged surface brightness profiles for the receding and approaching halves independently. ELLINT is a 2D ring-fitting code that uses a least-squares fitting algorithm to constrain the HI density profile. We provide the moment 0 map as input to ELLINT and fit only the surface brightness profile, in the approaching and receding sides independently. We fix the position angle, inclination, and central position using the values quoted in Table 1 of B05 (-37.0^∘, 67.2^∘, α (J2000) = 12^h 35^m 58^s, and δ (J2000) = 27^∘ 57′ 32″, respectively) for the ELLINT calculation of the surface brightness profile.
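As a numerical illustration of the column density conversion above (our check, not from the paper): with the noise of the 30″ × 30″ cube and a 5σ signal spanning three 4.12 km s^-1 channels (≈12 km s^-1), the formula reproduces the minimum detectable column density of ∼1.8 × 10^19 cm^-2 quoted in the next paragraph:

def column_density(flux_mjy_kms, bmaj_arcsec, bmin_arcsec):
    # N(HI) in cm^-2 from velocity-integrated flux in mJy/beam * km/s.
    return 1.104e21 * flux_mjy_kms / (bmaj_arcsec * bmin_arcsec)

# 30" cube: sigma = 0.24 mJy/beam; 5 sigma over 3 channels of 4.12 km/s each.
flux = 5 * 0.24 * 3 * 4.12
print(column_density(flux, 30.0, 30.0))  # ~1.8e19 cm^-2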
The inclination and position angle were both derived from the morphology and kinematics of the tilted ring model analysis done in B05. The central position was also tabulated by B05 using the kinematics of the HI data, and represents the kinematical center of NGC 4559. This is then converted into the column density profiles also shown in Figure <ref>. The HALOGAS data are somewhat deeper than the data of B05. In Figure <ref>, the radial extent is similar to that found in B05. We calculate the minimum detectable column density in the 30″ smoothed cube to be 1.81 × 10^19 cm^-2 (see Table 2), which is 1.6 times lower than the minimum detected column density from the 26″ cube in B05: 3.0 × 10^19 cm^-2. Also, had B05 smoothed their 26″ cube to 30″, this difference would be slightly smaller. Furthermore, when the same distance is assumed, the HALOGAS data produce an extremely similar total HI mass to B05, implying that there is not much extra diffuse HI in the HALOGAS data that was not captured by B05. If we assume there exists a plateau of HI at our limiting column density due to the solar RFI (1 × 10^19 atoms cm^-2), spanning radial distances between 25 and 30 kpc from the center of the galaxy, there would be only ∼7 × 10^7 M_⊙ of extra HI to detect. We do not detect a sharp cut-off in HI, but rather a constant slope in log(column density) down to the last detected point, at ∼1 × 10^19 atoms cm^-2. A cutoff might be expected due to the intergalactic ionizing radiation field, as was found in M83 <cit.>. However, this effect would appear near a column density of a few times 10^19 atoms cm^-2 (e.g. <cit.>, <cit.>), which is near our sensitivity limit. We also note the clear change in HI profile morphology inside and outside of R_25. Within R_25 the HI profile seems clumpy and oscillates, perhaps due to overdensities like spiral arms. Outside of R_25 the HI distribution becomes more uniform. Given the high theoretical sensitivity of the HALOGAS data cube, it is surprising that the HI column density radial profile does not reach fainter levels. We suspect this is due to the effects of solar interference on the observations (see Section 2). The deconvolution of the HALOGAS data is imperfect and cannot fully recover the lack of short spacings. In particular, we detect slightly negative and positive residuals due to solar interference on large angular scales in individual cleaned channel maps, at the level of ∼1 × 10^18 atoms cm^-2. These residuals change in depth and location between channel maps, and the summation of channels then limits the sensitivity in the integrated HI map. If these residuals are summed over ∼3 channels at 3σ, the limiting column density approaches ∼1 × 10^19 atoms cm^-2, the lowest column density we observe in the integrated HI radial profile. It is likely that a combination of the HALOGAS data with Green Bank Telescope (GBT) observations of NGC 4559 would lead to the detection of lower column densities. An effort along these lines is underway using observations with the GBT (Pingel et al., in prep).
§ TILTED RING MODELS
There are various signs of vertically extended lagging gas within the HALOGAS data cube itself. In Figure <ref>, we show the channel maps of the 30″ resolution HALOGAS cube, rotated so that the major axis is horizontal. Signs of lagging extra-planar gas can be seen as emission that fills the "C"-shaped channel maps at intermediate velocities. Since the analysis in B05 was done, the vertical velocity structure has been measured in edge-on galaxies, like NGC 891 <cit.>.
Such studies have found that the lagging component is characterized by a vertical gradient in velocity, rather than by a bulk decrease in velocity with a rotation curve separate from the disk, making the velocity gradient the preferred characterization. Thus, in this study, we constrain the magnitude of the velocity gradient. This is an improvement over B05, where a separate rotation curve for the thick disk was used. To accomplish this, we use TiRiFiC <cit.> to create 3D tilted ring models to match to the HALOGAS data cube. TiRiFiC is a stand-alone program that constructs 3-D simulated data cubes of rotating galaxy disks. In addition to the standard capabilities of other tilted ring codes, TiRiFiC allows for the addition of simple radial and vertical inflows and outflows in the construction of the simulated cubes. We present diagnostic position-velocity diagrams and channel maps of the 3D models, as compared to the data, in Figure <ref>. We reproduce the "lagging" model from B05 as a point of comparison. This model contains a two-component gas layer, with a thin disk of 0.2 kpc and a thick disk of 2 kpc scale height. Both disks have separate rotation curves, with the exact values presented in Figure 7 of B05. The radial surface brightness profile was reproduced from Figure 3 of B05, and 10% of the total HI was put into the thick disk, as was found in B05. The B05 model is included in the left-most column of Figure <ref>. We create new 3D models to match to the HALOGAS data cube. We use the GIPSY tasks ELLINT and ROTCUR to find initial estimates for the surface brightness profile and the rotation curve. Both tasks are 2D ring-fitting codes that use a least-squares fitting algorithm to constrain the HI density profile (ELLINT) and the rotation curve (ROTCUR). We use ELLINT in the same fashion here as for the radial profile calculation in Section 3. In a similar fashion, we provide the moment 1 map (velocity field) as input to ROTCUR and fit only the rotation curve, in the approaching and receding halves of the galaxy independently. In the ROTCUR fitting of the rotation curve, we fix the position angle, inclination, central position and systemic velocity using the values quoted in Table 1 of B05. In both tasks we use 61 rings, all of thickness 15″. We used the initial output surface brightness profile and rotation curve from ELLINT and ROTCUR as the initial input parameters to TiRiFiC to produce 3D tilted ring models. In TiRiFiC, the values of the inclination angle, central position angle, and central position are the same throughout all models: 67^∘, -37^∘, and 12^h 35^m 58^s, +27^∘ 57′ 32″, respectively. The approaching half also seems to show a slight warp at large radii, so the position angle was lowered, through trial and error, by 4^∘ beginning at a radius of 18.3 kpc in that half of the galaxy. Minor adjustments were made to the ELLINT and ROTCUR output surface brightness profile and rotation curve. These adjustments were made interactively, through trial and error, using TiRiFiC to better match the full 3D structure of the data cube. In all subsequent 3D models, we use trial and error to optimize each parameter, comparing the model to the channel maps, the position-velocity diagrams along both the major and minor axes, the moment 0 map, and the moment 1 map of the 30″ × 30″ cube. We do not use TiRiFiC in automated fitting mode, because conventional fitting routines fail to adequately fit faint structures.
Since this study is most interested in characterizing faint structures, such as the diffuse lagging extra-planar gas and the forbidden-gas stream, we elect to fit the cube in this manner. The rotation curve we adopted for all models, overlaid on the position-velocity diagram along the major axis, and the column density profiles are included in Figure <ref> and Figure <ref>. We experimented with various warp morphologies, by varying the inclination near the edges of the disk, and with the scale height, in an attempt to reproduce the lagging-gas signatures. One such signature can be seen in the position-velocity diagram along the major axis as diffuse emission found at velocities closer to systemic than the normally rotating disk. This extra-planar gas feature is commonly referred to as 'beard gas'. No amount of disk warping or disk thickness alone could reproduce the data adequately. We then added a thick disk component to the first models, with a uniform lag throughout. The following parameters were adjusted and matched to the data through trial and error: thick disk scale height, thick disk lag, percentage of the total gas in the thick disk, and global velocity dispersion. The velocity dispersion was constrained primarily by matching the thickness and spacing of the brightness contours in the position-velocity diagram along the major axis and in the channel maps. The resulting parameters are shown in Table 2, and the model itself can be seen in the second column of Figure <ref>. To better illustrate how well constrained the key parameter of lag is in this model, we produce two additional models that are identical to the uniform-lag model, but with ±5 km s^-1 kpc^-1 lag. We note that the lag value is degenerate with other parameters of the model, like the thick disk scale height and the velocity dispersion (see Section 4.1). These two new models are shown in the third and fourth columns of Figure <ref>. In a final model, referred to as the fine-tuned model, we experimented with various values of velocity dispersion and lag in each individual ring. We converged on a model containing three values of velocity dispersion in three radial extents. For R ≤ 3.4 kpc, the velocity dispersion is 25 km s^-1 in the receding half of the thin disk. For 3.4 kpc < R < 12.0 kpc the dispersion is 18 km s^-1. Outside of 12.0 kpc, the velocity dispersion is 10 km s^-1. This can likely be attributed to turbulent motions in the central star-forming regions. In the approaching half, the rings at R = 2.8 kpc and R = 3.4 kpc of the thin disk contain 25 km s^-1 of velocity dispersion, to account for the "bump" in the position-velocity diagram (see arrow in first row, data column of Figure <ref>). We also increase the lag in the receding half to 13 km s^-1 kpc^-1 for R < 12.0 kpc of the thick disk, which roughly corresponds to R_25. The rest of the thick disk in the receding half of the galaxy contains no lag in this model, due to a sharp cut-off in extra-planar gas signatures in the position-velocity diagram along the major axis (see arrow in second row, data column of Figure <ref>). We note that this lack of lag at large radii is not indicative of a sharply radially shallowing lag, but is mostly due to the lack of extra-planar gas at those radii in the receding half. The approaching half of the galaxy has a uniform lag throughout in this model, just as in the previous model. Lastly, we include a modest radial inflow of 10 km s^-1 along the entirety of the thin disk, in order to better match a kink in the position-velocity diagram along the minor axis.
We note that inside a radius of 10 kpc, radially inflowing gas with this velocity would reach the center of the galaxy within a Gyr. This model is shown in the fifth column of Figure <ref>. The uniform-lag model captures much of the lagging gas component, as seen in the above-described extra-planar gas signatures. However, that model requires different lag magnitudes for the approaching and receding halves. Differences between the uniform-lag model and that same model with increased and decreased lags (columns two through four of Figure <ref>) are most evident in the diffuse lagging extra-planar gas contours in the position-velocity diagram along the major axis (columns one and two). The fine-tuned model, with its small-scale variations in velocity dispersion and lag, best represents both halves of the galaxy. From this analysis it is apparent that the lag magnitude does not change from one half of the galaxy to the other, but does cut off at large radii in the receding half, far from the star-forming disk. This result supports a galactic fountain model for the extra-planar lagging gas, since the lag magnitude is uniform throughout the star-forming disk and drops off at its edge. Note that no model adequately represents the forbidden gas. See Section 5.2 for a brief discussion of modelling the forbidden-gas component. Also note that all asymmetries in the fine-tuned model that do not exist in the other models arise from the specific treatment of lag, velocity dispersion, and the modest radial inflow that we include exclusively in that model. The B05 model is quite comparable to the uniform-lag model. We claim that a uniform lag can explain the lagging gas just as adequately as two distinct rotation curves for the thin and thick disks of NGC 4559. A uniform lag is preferable in that it has been observationally shown to be more physically accurate <cit.>. <cit.> discuss trends in lags among other galaxies and find that lags seem to reach their radially shallowest values near R_25. In the fine-tuned model, the lag in the receding half of NGC 4559 cuts off sharply at a radius of 12.06 kpc. At our assumed distance (D = 7.9 Mpc), R_25 is 12.98 kpc, which places this radial lag cutoff at 0.93 R_25. So, this result is consistent with what was found in <cit.>. Tilted ring models including radially varying lags were also created from the HALOGAS data cube of NGC 4559, but no radially varying lag produced appreciable improvement over the fine-tuned model. As discussed in <cit.>, the overall steepness of lags suggests that conservation of angular momentum is not a sufficient explanation. Also, <cit.> create simulations of fountain gas clouds moving through a hot halo medium, which better reproduce the steepness of the lags seen observationally. Alternatively, <cit.> propose that pressure gradients could explain the magnitude and morphology of lags, but this has been difficult to establish observationally. Deep radio continuum observations with the recently upgraded VLA, like <cit.>, should make future measurements of non-thermal pressure gradients possible.
§.§ Uncertainties in Derived Parameters
In general, uncertainties in the three-dimensional tilted ring parameters that describe the data are estimated by varying each individual parameter to the point where the model no longer adequately represents the data <cit.>.
The decisions as to what constitute improvements and acceptable models were based on visual inspection of the various plots, as in previous papers of this kind. The subtle low-column-density features typically do not lend themselves easily to statistical measures, but visual inspection shows clearly whether a certain model feature is required to reproduce particular faint features in p-v diagrams and channel maps. A comparison between total HI maps is effective at determining the uncertainty in inclination and position angle. A change of 3^∘ in both inclination and position angle throughout the complete disk is enough to make the total HI maps inconsistent with the data. To constrain the uncertainty in the global velocity dispersion, we analyze position-velocity diagrams. After testing this parameter, the high-velocity contours in the position-velocity diagram along the major axis no longer represent the data when changed by more than ±3 km s^-1 from their original value of 10 km s^-1. We note that the velocity dispersion in the extra-planar gas is degenerate with the thick disk scale height and the magnitude of the thick disk lag. We account for this degeneracy as well as possible in estimating this uncertainty. Uncertainties in the parameters related to extra-planar gas are also estimated. We assume a thin disk scale height of 200 pc, while varying values were used for the thick disk. The fitting done in B05 assumed a thin disk scale height of 200 pc, so, for consistency, we retain that value, despite the resolution limits on constraining that number present in both studies. The central channels in the data cube are particularly sensitive to scale height changes. We find the thick disk must have a scale height of 2 ± 1 kpc. Since NGC 4559 is not seen edge-on, the relative mass and lag amplitude of the extra-planar gas are not easily constrained. We find between 15% and 25% of the total HI mass in extra-planar gas. We find the uncertainty in the lag to be ∼5 km s^-1 kpc^-1 in both halves of the galaxy. These uncertainties take the degeneracy between scale height and lag magnitude into account. For instance, large scale height values can be compensated by small lag magnitudes and vice versa. However, small imperfections in the models, such as the location of extra-planar gas signatures in the position-velocity diagram along the major axis and the thickness of diffuse emission in the channel maps, were closely inspected to minimize this degeneracy.
§ ANOMALOUS GAS EXTRACTION
It is useful to separate the lagging extra-planar gas component from the total HI data cube in an independent way. To that end, we follow the procedure of <cit.> to extract the extra-planar gas component and create two separate data cubes: one with emission attributed to regularly rotating gas, and another with only the emission of extra-planar gas. This procedure was also followed in B05, so we focus on comparing our new results with star-formation tracers. This procedure assumes that each HI line profile contains a narrow Gaussian-shaped component whose peak is positioned close to the rotation velocity, and a broader component whose peak is closer to the systemic velocity. The latter is attributed to extra-planar lagging gas, whose profile shape is unconstrained but likely substantially fainter than the normally rotating component. We estimated the contribution of the normally rotating component by fitting a Gaussian profile to the upper portions of the total line profile.
Modeling only the tops of the line profiles enables us to minimize contamination from potential anomalously rotating components. Experimentation with various percentages of the line profiles was performed to decrease the occurrence of fitting artifacts. The procedure produced the fewest artifacts when only the upper 60% of each line profile was fit. The amplitudes, central velocities, and widths of the Gaussian profiles were fit throughout the data cube. Based on experimentation with the parameters, a maximum dispersion limit of 30 km s^-1 was imposed on each Gaussian profile in the fitting. The Gaussian profile fit to each line profile was then subtracted from the data line profiles, leaving only the anomalously rotating extra-planar gas. In 285 out of 158234 instances, Gaussian profiles could not be fit. In these instances, the profiles were excluded from the extra-planar cube. The results of the extra-planar gas extraction can be seen in Figures <ref> (right panels) and <ref>. The behavior of the extra-planar gas seen in the velocity fields is somewhat more irregular than in the total HI data. As seen in the bottom right panel of Figure <ref>, the residual p-v diagram from the fit (i.e., the presumed extra-planar gas) includes some gas at extreme values of radial velocity, near ∼950 km s^-1 and ∼675 km s^-1, which is not lagging. This is due to the limitation imposed by forcing the fit to only 60% of the peak of the profile and by the maximum cap on the velocity width of the regularly rotating thin disk in the fit. This gas is also likely due to the assumption of Gaussian velocity profiles: if the gas is clumpy or moving peculiarly, the Gaussian profile assumption breaks down. However, most of the residual gas is indeed lagging, as we find only ∼10% of the total residual to reside in the extreme-velocity regimes of the position-velocity diagram along the major axis. Assuming the HI emission is optically thin, we can estimate the total mass in HI from its emission. The total HI mass of the galaxy is ∼4.5 × 10^9 M_⊙. The mass of the extracted extra-planar HI is ∼4.0 × 10^8 M_⊙, or ∼10% of the total HI mass. The thick HI disk model derived from the tilted ring fitting analysis contains 20% of the total HI mass. However, the two methods are not directly comparable, since the portion of that thick disk at low z is superimposed over the thin disk in the ring models. Since the vertical profile used in the ring modeling was a sech^2 model, one can integrate that function over the thin disk scale height and exclude the amount of gas in the thick disk that spatially resides within the thin disk. The amount of thick disk mass outside ±3 times the thin disk scale height is 14% of the total mass. This is still somewhat more than for the extracted emission in the Gaussian fitting. In reality, the line-profile fitting method certainly misses some extra-planar gas that happens to be at the same velocity (in projection) as the disk gas <cit.>. This is particularly true for gas near the minor axis, where the rotation signal is weak. We conclude that overall the mass estimates for the extra-planar gas derived from the two methods are in reasonable agreement, and we find ∼10-20% of the total HI mass to be extra-planar.
§.§ Relation of Extra-planar Gas to Star Formation
We incorporate two ancillary images, one Hα narrowband image and one GALEX FUV image from <cit.>, as tracers of star formation in NGC 4559. We show the extracted extra-planar gas overlaid as colored contours atop the Hα image and the GALEX FUV image in Figure <ref>.
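Before examining these maps, we note that the profile decomposition described above admits a compact illustration. The following is a simplified per-profile sketch (ours, not the original code), using the 60% threshold and the 30 km s^-1 dispersion cap quoted above:

import numpy as np
from scipy.optimize import curve_fit

def gauss(v, amp, v0, sigma):
    return amp * np.exp(-0.5 * ((v - v0) / sigma) ** 2)

def split_profile(vel, spec, frac=0.6, sigma_max=30.0):
    # Fit only the top `frac` of the line profile; cap the dispersion at sigma_max.
    mask = spec >= frac * spec.max()
    p0 = [spec.max(), vel[np.argmax(spec)], 10.0]
    bounds = ([0.0, vel.min(), 1.0], [np.inf, vel.max(), sigma_max])
    popt, _ = curve_fit(gauss, vel[mask], spec[mask], p0=p0, bounds=bounds)
    disk = gauss(vel, *popt)      # normally rotating component
    return disk, spec - disk      # residual = anomalous (extra-planar) gas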
In Figure <ref>, we see the locations of the highest densities of extra-planar HI. The three highest density contours (violet, blue and green) trace the regions of active star formation. Additionally, a spiral arm feature seen extending to the south-east in both the Hα and FUV images is traced by the extra-planar gas. Note that there are some small isolated depressions in the extracted extra-planar gas. These are not in all cases regions with lesser amounts of extra-planar gas, but could also be regions where the Gaussian profile fit did not converge. To better see the radial extent of the extra-planar gas and the star formation tracers, we computed azimuthally averaged radial profiles for the total HI, the Gaussian-extracted extra-planar gas cube, the Hα image, and the GALEX FUV image. All profiles are corrected for the inclination of the galaxy. These radial profiles are shown in Figure <ref>. In this figure, the intensity of each component has been normalized to its peak, so that differences in structure can be more easily seen. We find that the total HI is more extended than the extra-planar gas. The extra-planar gas traces the extent of the UV profile well. Since the UV emission, which is indicative of older star formation, and the extra-planar gas are coincident, it is likely that the extra-planar gas is related to past star formation processes <cit.>. Had the extra-planar gas been due to accretion, there would be a high likelihood of seeing more radially extended extra-planar gas, tracing the extent of the total HI. Thus, we conclude that the extra-planar gas is most likely due to star formation processes. We also note that our modeling showed evidence that the thick disk lag value approaches zero outside of R ∼ 12 kpc in the receding half. The modeled thick disk still contains gas there, so this gas would be non-lagging extra-planar gas that would not be found by the Gaussian fitting algorithm. However, we cannot distinguish this gas from disk gas observationally, due to the inclination of the galaxy, so we cannot say whether or not the thick disk lag approaches zero outside of this radius. It is interesting that the lagging gas is still closely associated with the spiral arms (see e.g. the southern spiral arm in Figure <ref>). Here, it is relevant to consider the fountain cycle timescale, which is the amount of time it takes for an ejected parcel of gas to fall back down onto the disk. If the ejection occurs at ∼70 km s^-1 and the parcel is vertically stationary at its apex, then its average vertical velocity is ∼35 km s^-1. This vertical ejection velocity is consistent with the results of <cit.>, where a vertical kick velocity of this magnitude was used in dynamical models matching the observed extra-planar properties of NGC 891 and NGC 2403. If the gas reaches a vertical height of ∼2 kpc, then the gas will traverse the full vertical extent of 4 kpc and fall back onto the disk in ∼100 Myr. When the gas is at a vertical height of 2 kpc it experiences a lag of 26 km s^-1, and while it is at 1 kpc it lags the disk by 13 km s^-1. Thus, the average magnitude of the lag is 13 km s^-1, which would produce an azimuthal offset of ∼1.3 kpc over the course of the 100 Myr fountain cycle. This is slightly larger than the 30″ (1.1 kpc) smoothed beam, which is most sensitive to diffuse lagging gas features. Although the inclination of the galaxy would make this offset appear slightly smaller, we see no evidence for a systematic offset of 0.5-1 beams.
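The fountain-cycle arithmetic above is easily verified (our check, using the quoted numbers):

KPC_KM = 3.086e16   # kilometers per kiloparsec
MYR_S = 3.156e13    # seconds per Myr

v_avg = 35.0                              # mean vertical speed, km/s
t_cycle = 4 * KPC_KM / v_avg / MYR_S      # 2 kpc up and 2 kpc back down
offset = 13.0 * 100 * MYR_S / KPC_KM      # mean 13 km/s lag over ~100 Myr

print(t_cycle)  # ~110 Myr, consistent with the quoted ~100 Myr
print(offset)   # ~1.3 kpc azimuthal offset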
These timescales suggest that the extra-planar gas is likely recently ejected fountain gas that has not yet had time to cool and begin its journey back down to the disk. So, we conclude that the most likely origin for the bulk of the extra-planar HI in NGC 4559 is the galactic fountain mechanism; i.e., HI is transported above the disk as cold/warm gas in superbubbles following the explosion of supernovae. Our results show that roughly 10-20% of the total HI mass is extra-planar. A similar amount of extra-planar gas (∼10%) was found for NGC 2403 by <cit.>, using a Gaussian profile fitting and extraction similar to that used in this study. Interestingly, NGC 2403 and NGC 4559 also have similar star formation rates of 1.0 M_⊙ yr^-1 and 1.1 M_⊙ yr^-1, respectively. Additionally, NGC 3198 was found to house ∼15% of its total HI mass as extra-planar gas, with an SFR of 0.61 M_⊙ yr^-1 <cit.>. The amount of extra-planar gas in NGC 3198 comes from a tilted ring model analysis in which 15% of the total HI mass is placed in a thick disk component. Since some of that thick disk is superimposed on the thin disk, the amount of extra-planar gas in NGC 3198 is likely closer to ∼10%, as we find in NGC 4559. Indeed, even when the star formation rate density (SFR/D_25^2) is compared, NGC 2403 and NGC 4559 are similar, with star formation rate densities of 0.045 and 0.042 M_⊙ yr^-1 kpc^-2 <cit.>. NGC 3198 has a lower star formation rate density of 0.016 M_⊙ yr^-1 kpc^-2 <cit.>, yet still has a substantial amount of extra-planar gas <cit.>.
§.§ Forbidden Gas and HI Holes
A striking amount of anomalous gas located in the 'forbidden' region of the position-velocity diagram along the major axis is present in the data cube of NGC 4559. This feature was also mentioned in B05, but the deeper HALOGAS data make it possible to study this region in more detail. We note the existence of other HI holes in this galaxy; however, we focus on this particular one due to its proximity to the forbidden gas feature. A study by <cit.> found that most of the HI holes in NGC 6946 are related to extra-planar material expelled by star formation. The previous study by B05 noted the potential relationship between the forbidden-velocity gas feature in NGC 4559 and a nearby HI hole. The same HI hole is visible in the full-resolution HALOGAS cube as well. In Figure <ref>, we show the sum of the 9 channels of the full-resolution HALOGAS cube that show the strongest forbidden emission, overlaid on the GALEX FUV image, and mark the locations of the HI hole and the forbidden-velocity filament. We estimate the center of this hole to be located at α = 12^h 36^m 3.3^s, δ = 27^∘ 57′ 9.6″, in good agreement with B05. We estimate the center of the forbidden gas feature to be located at α = 12^h 36^m 0.6^s, δ = 27^∘ 56′ 52.0″; this is ∼40″ (1.5 kpc) away from the center of the hole on the sky. Assuming the hole and the forbidden gas feature lie in the same plane on the sky, the vertical distance from the forbidden gas to the point in the plane directly below it would be h = 1.5 kpc × tan(67^∘) = 3.5 kpc. In order to explore the potential origins of this peculiar feature, we extracted it from each channel in which it is present in the 30″ smoothed cube. Emission was found in 13 channels, between 817.78 km s^-1 and 867.20 km s^-1. The total HI mass of the feature is ∼1.4 × 10^6 M_⊙. The proximity and orientation of the forbidden gas filament suggest that it could contain gas that once filled the hole.
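Two quick consistency checks (ours): the quoted velocity span corresponds to 13 channels at the 4.12 km s^-1 channel width, and the supernova count in the energy budget developed in the next paragraph follows directly from the quoted energies:

dv = 4.12                                   # km/s per channel
n_chan = round((867.20 - 817.78) / dv) + 1  # = 13 channels

E_kin = 1.8e53   # erg, kinetic energy quoted below
E_pot = 2.0e53   # erg, potential energy quoted below
n_sn = (E_kin + E_pot) / (0.2 * 1e51)       # ~2000 SNe at 20% efficiency
print(n_chan, n_sn)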
Assuming the bulk motion of the gas in the forbidden-gas filament can be characterized by the velocity of the central channel of its emission, we estimate the velocity of the filament relative to the gas in its spatial surroundings. The central channel of the filament's emission is at 842.5 km s^-1, and the channel containing most of the gas spatially coincident with the feature is at 719 km s^-1; i.e., the filament has a velocity of ∼123 km s^-1 relative to the bulk of the gas around the hole. We estimate the kinetic energy the forbidden feature would have if it were moving directly vertically out of the plane of the galaxy via K = 1/2 mv^2. Using the relative velocity of the feature corrected for inclination, this energy would be ∼1.8 × 10^53 erg. We estimate the potential energy required to move a parcel of gas to the height of the feature, assuming it is located directly above the plane of the disk (3.5 kpc). We use Equation 4 of <cit.> for the potential energy of extra-planar clouds above a galaxy disk. We use the cloud mass and height above the plane tabulated above, assume the stellar disk scale height is the model thin disk scale height of ∼200 pc, and assume the mid-plane mass density is that of the Solar Neighborhood: ρ_0 = 0.185 M_⊙ pc^-3 <cit.>. We find the corresponding potential energy of the forbidden gas feature to be 2 × 10^53 erg. The total energy, kinetic plus potential, required to move this gas is ∼4.0 × 10^53 erg, or the energy of ∼2000 supernovae of energy 10^51 erg, assuming 20% efficiency. This efficiency and energy requirement are reasonable for a superbubble that arose from many supernovae. <cit.> relate the Galactic HVC Complex C to ejection via star formation processes within the Milky Way. This complex has a hydrogen mass and a relative velocity both roughly twice as large as those of the forbidden gas feature in NGC 4559. <cit.> estimate the number of supernovae required to eject the complex to be ∼1 × 10^4, corresponding to the typical star formation rate density of a star-forming region in the Milky Way disk. Thus, it is plausible that this feature in NGC 4559 was once part of the disk but was ejected through star formation processes. If our assumptions about the vertical separation of the forbidden gas from the disk were incorrect, the energy calculation would change somewhat. The kinetic energy would be unchanged. However, the potential energy would decrease if the gas were actually closer to the disk, further bolstering the above conclusion. If the gas were located further from the disk, more supernovae would be required to eject the forbidden gas feature. If the gas were located 7 kpc above the disk, our potential energy calculation would increase by a factor of 5, which would not be enough to change the above conclusion. Of course, beyond some height, the required potential energy would begin to become unreasonably high. But the greater the height of the gas, the less likely it is to show the smooth kinematic connection to permitted-velocity gas seen in the major axis p-v diagram. We can explore whether the forbidden gas is outflowing or infalling from purely geometric arguments. Both the hole and the forbidden gas feature are located near the major axis on the approaching side of the galaxy, and we assume that the spiral arms seen in the GALEX image are trailing arms (see Figure <ref>), indicative of counter-clockwise rotation.
If the arms are indeed trailing arms, then the SW side is the near side. If we assume that the spatial separation between the hole and the forbidden gas feature is mostly along the z-axis, then the feature must be located on the far side of the disk, and is therefore outflowing, given its positive heliocentric velocity offset. If it is inflowing and on the near side of the disk, then the feature cannot be located above the hole, but would instead be "ahead" of the hole in azimuth, which cannot be due to a lag. Therefore, if the feature is inflowing, it is either not related to the hole, or was launched from the hole at a large angle, which is not likely. Since we see this forbidden gas feature as a smooth connection to the extra-planar gas signatures at permitted velocities, which we attribute to a galactic fountain, we believe the forbidden gas feature is most likely an outflow on the far side of the disk. In an attempt to explain the presence of the forbidden gas feature, numerous tilted ring models containing both radial and vertical inflows and outflows were created with TiRiFiC. Models were created containing varying strengths of these flows in the inner regions of the thick and thin disks of the best-fitting thick disk model. However, no combination of these effects produced a model with any distinct feature akin to the forbidden gas feature. Simple kinematic changes to the tilted ring model are insufficient to match the observed phenomenon. This may be due to the nature of tilted ring fitting: we attempted to model an isolated, non-axisymmetric structure within rings that extend through half of the angular extent of the entire galaxy. We note that <cit.> show that a random fountain can produce similar, centrally located, forbidden gas features, which also exist in NGC 2403.
§ HI DWARF GALAXY
A previously undetected dwarf galaxy was found in the widest-field HALOGAS data cube of NGC 4559. The center of this dwarf is located at α = 12^h 35^m 21.335^s, δ = 27^∘ 33′ 46.68″, which puts it at least 0.418^∘ (∼58 kpc) away from the center of NGC 4559, if the objects are spatially aligned in the same plane. The heliocentric velocity of the dwarf is ∼1200 km s^-1, and emission from the dwarf spans 9 channels, from 1187 to 1224 km s^-1 in total. This also places the dwarf well outside the field of view of the Hα image. Velocities are computed in the optical definition. The total HI flux of this feature was found to be 27.1 mJy km s^-1, corresponding to a total HI mass of ∼4 × 10^5 M_⊙, assuming a distance to the dwarf of 7.9 Mpc. We show the HI dwarf in Figure <ref>, overlaid on an SDSS g-band image. We attempted to determine whether the dwarf is, in fact, a bound companion of NGC 4559. The circular velocity of NGC 4559 is ∼130 km s^-1, as seen in the rotation curve modelled in this study. We assume a typical halo mass for a galaxy with the rotation velocity of NGC 4559 by inverting Equation 8 in <cit.>. That study found an empirical relation between circular velocity and halo mass, assuming NFW dark matter density profiles, using the Bolshoi simulation. The virial mass we find is ∼2.8 × 10^11 M_⊙, assuming h = 0.70, also from the Bolshoi simulation <cit.>. We find the virial mass of a Milky Way-sized halo to be ∼1.5 × 10^12 M_⊙ using that same relation. Since R_vir ∝ M_vir^1/3, we estimate the virial radius of NGC 4559 to be ∼170 kpc. Even if the dwarf is not in the same plane as NGC 4559, it is still likely well within the virial radius. We estimate the escape velocity of the dwarf using Equation 2-192 in <cit.>.
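A minimal numerical sketch of this estimate is given below (ours); it uses the values derived in the next paragraph (r_⋆ ∼ 25 kpc, v_c ∼ 130 km s^-1, r ∼ 58 kpc) and assumes a flat rotation curve inside r_⋆ with a Keplerian (point-mass) fall-off beyond, which reproduces the quoted result:

import math

def v_escape(r_kpc, v_c=130.0, r_star=25.0):
    # Escape velocity (km/s): flat rotation curve out to r_star,
    # point-mass (Keplerian) potential beyond it.
    if r_kpc <= r_star:
        return math.sqrt(2.0 * v_c**2 * (1.0 + math.log(r_star / r_kpc)))
    return math.sqrt(2.0 * v_c**2 * r_star / r_kpc)

print(v_escape(58.0))  # ~121 km/s, far below the ~390 km/s velocity offset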
We estimate r_⋆ ∼ 25 kpc, the maximum radius out to which the circular velocity is constant. Since the halo extends further than the HI emission, this value is a lower limit, which makes the escape velocity estimate also a lower limit. In this calculation, we also use the circular velocity estimate from the tilted ring modelling (v_c ∼ 130 km s^-1) and the assumed galactocentric distance of the dwarf, 58 kpc. This yields an escape velocity of v_esc ∼ 121 km s^-1. This value is much lower than the 390 km s^-1 difference between the systemic velocity of NGC 4559 and the central velocity of the dwarf. Though we cannot measure the exact distance at which the rotation curve ceases to be flat, this would have to occur at a radial distance of ∼261 kpc for the escape velocity to match the velocity of the dwarf. Since it is unlikely that the rotation curve remains flat out to such an extreme distance, we conclude that the dwarf is unbound. Two optical counterparts for this object were found in the Sloan Digital Sky Survey (SDSS) Data Release 12 database. In Figure <ref> we show an SDSS g-band image with the HALOGAS HI contours overlaid in red, showing the locations of the SDSS counterparts and the HI source. These two objects are separated by ∼5″ and appear to be two merging objects. The SDSS spectra of these two regions show the objects to both reside at z = 0.004, which corresponds to a heliocentric velocity of 1198 km s^-1, effectively identical to the velocity of the dwarf in HI. The spectra of these objects can be seen in Figure <ref>. They both show large OIII/Hβ ratios, implying the existence of high-excitation HII regions and hence of young stars. We conclude that the dwarf is actually two merging blue compact dwarf (BCD) galaxies. The SDSS u- and g-band magnitudes of the north-eastern object are 17.12 and 16.67, and those of the south-western object are 17.22 and 16.79. Using the assumed distance of 7.9 Mpc, the corresponding absolute magnitudes are M_u = -12.4 and M_g = -12.8 for the north-eastern object, and M_u = -12.3 and M_g = -12.7 for the south-western object. We converted these magnitudes to an equivalent B-band magnitude, and then calculated the luminosity of the objects, assuming they are at the same distance as NGC 4559. The dwarfs have an HI mass to blue light ratio of 0.18. The optical size of the companion is ∼5″, or 191.5 pc, assuming the object is at the distance of the galaxy. This HI mass to blue light ratio is low, but within the range of values obtained by <cit.>, an Effelsberg HI study of 69 BCD galaxies. The combined HI mass of these dwarfs is also of the same order of magnitude as that of 12 dwarf galaxies in the Local Group, as described in <cit.>. Of these 12, only 4 are within 200 kpc of their parent galaxy. Also of these 12, 5 have M_V between -14 and -11.
§ DISCUSSION AND CONCLUSIONS
Our analysis of the extra-planar HI in NGC 4559 can be compared to that of other moderately inclined spiral galaxies in the HALOGAS sample. In a similar study by <cit.> of NGC 3198, a lagging extra-planar HI component was discovered, containing ∼15% of the total HI mass of that galaxy. The extra-planar HI in NGC 3198 was found to be characterized by a thick disk scale height of ∼3 kpc and a vertical lag of 7-15 km s^-1 kpc^-1. These values are very similar to what was found in NGC 4559. NGC 2403 is a galaxy very similar to NGC 4559 in morphology and star formation characteristics, and was studied in HI with the Very Large Array by <cit.>. NGC 2403 has a star formation rate of 1 M_⊙ yr^-1 <cit.> and a rotation velocity of 122 km s^-1.
<cit.> found NGC 2403 to contain an extra-planar HI component containing ∼10% of the total HI mass of that galaxy. That study found the extra-planar HI to be lagging the disk's rotation velocity by 25-50 km s^-1. A recent study by <cit.> of NGC 4414 found that only ∼4% of the total HI mass is in extra-planar gas. However, due to the disturbed nature of that galaxy's halo, that number is difficult to constrain. Analysis of the inner disk of NGC 4414 shows that the galaxy has likely experienced a recent interaction with a dwarf galaxy, which may account for its large star formation rate of 4.2 M_⊙ yr^-1. Although the characteristics of the extra-planar HI in NGC 4559 seem to fit into the overall picture for nearby moderately inclined spiral galaxies, it is difficult to say whether lagging extra-planar HI is ubiquitous or extraordinary in the nearby universe. Furthermore, is extra-planar lagging HI always most plausibly caused by the galactic fountain mechanism? Greater understanding of this problem can be obtained through further study of the entire HALOGAS sample, including both edge-on and moderately inclined galaxies. The next generation of radio telescopes, including the Square Kilometre Array, will answer these questions in the future. We used the deep 21 cm HALOGAS observations of NGC 4559 to expand upon the work done by <cit.> in characterizing diffuse extra-planar and anomalous features in the HI distribution of that galaxy. We created detailed three-dimensional tilted ring models of that galaxy's HI. We confirm B05 in that a model containing only a ∼200 pc thin disk cannot reproduce the faint extra-planar lagging gas signatures seen in this galaxy. We create an expanded model containing a thick disk holding 20% of the total HI of the thin disk model, whose thick disk component extends vertically with a scale height of 2 kpc. We constrain the magnitude of the gradient in rotation velocity with height in a simple thick disk model to be ∼13 km s^-1 kpc^-1 in the approaching half and ∼6.5 km s^-1 kpc^-1 in the receding half. In the fine-tuned model, we find that a lag of ∼13 km s^-1 kpc^-1 in both halves, but with a cutoff near R_25 in the receding half, is an improved match to the data. This measurement of the lag magnitude was not previously made in B05, where a separate rotation curve was used for the thick disk. We use a Gaussian line-profile fitting technique to extract the anomalously rotating extra-planar gas from the normally rotating disk. With this technique we find that ∼10% of the total HI mass is extra-planar. Also, the extra-planar gas is localized to the inner star-forming regions of the galaxy, again suggesting that the bulk of this gas is due to a galaxy-wide galactic fountain. We analyze the spatial locations of the total and extra-planar HI in relation to Hα emission from young stars, as a tracer of active star formation. We find that the extra-planar HI traces regions of star formation, leading us to conclude that most of the extra-planar HI seen is from in-situ star formation, i.e., a galaxy-wide galactic fountain. To further build on the work of B05, we extracted the emission from a filament of HI located in the kinematically forbidden region of the position-velocity diagram along the major axis. We find that the feature contains 1.4 × 10^6 M_⊙ of HI. Energy estimates for the feature require ∼2000 supernovae to move the gas, which is consistent with a superbubble or other in-situ processes due to star formation.
The remarkable proximity of this feature to a large HI hole is difficult to ignore, but no irrefutable evidence tying the two together was found. Furthermore, the feature extends into the extra-planar gas signatures quite smoothly in the position-velocity diagram along the major axis, further pointing to the filament originating inside the normally rotating disk, having also been expelled through star formation.

We analyze a merger of two BCD galaxies, previously unobserved in HI, located ∼0.4^∘ from the center of NGC 4559. The BCD galaxies contain ∼4×10^5 M_⊙ of HI and have two spatially tight counterpart sources in SDSS. We conclude the objects are merging BCD galaxies due to a low HI mass to blue light ratio of 0.18 and spectra largely indicative of HII regions.

This material is based upon work supported by the National Science Foundation Graduate Research Fellowship under Grant No. 127229 to CJV. This material is also based on work partially supported by the National Science Foundation under Grant No. AST-1616513 to RJR and Grant Nos. AST-0908126 and AST-1615594 to RAMW. This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 679627). We would like to thank the anonymous referee for insightful and helpful comments that led to the overall improvement of this work. | http://arxiv.org/abs/1703.09345v1 | {
"authors": [
"Carlos J. Vargas",
"George Heald",
"Rene A. M. Walterbos",
"Filippo Fraternali",
"Maria T. Patterson",
"Richard J. Rand",
"Gyula I. G. Jozsa",
"Gianfranco Gentile",
"Paolo Serra"
],
"categories": [
"astro-ph.GA"
],
"primary_category": "astro-ph.GA",
"published": "20170327233854",
"title": "HALOGAS Observations of NGC 4559: Anomalous and Extra-planar HI and its Relation to Star Formation"
} |
Center for Free-Electron Laser Science, DESY, Notkestrasse 85, 22607 Hamburg, Germany
The Hamburg Centre for Ultrafast Imaging, Luruper Chaussee 149, 22761 Hamburg, Germany
Department of Physics, University of Hamburg, Jungiusstrasse 9, 20355 Hamburg, Germany
Department of Physics, Indian Institute of Technology, Kharagpur, West Bengal, India
Center for Free-Electron Laser Science, DESY, Notkestrasse 85, 22607 Hamburg, Germany
Center for Free-Electron Laser Science, DESY, Notkestrasse 85, 22607 Hamburg, Germany
The Hamburg Centre for Ultrafast Imaging, Luruper Chaussee 149, 22761 Hamburg, Germany
Center for Free-Electron Laser Science, DESY, Notkestrasse 85, 22607 Hamburg, Germany
The Hamburg Centre for Ultrafast Imaging, Luruper Chaussee 149, 22761 Hamburg, Germany
Center for Free-Electron Laser Science, DESY, Notkestrasse 85, 22607 Hamburg, Germany
The Hamburg Centre for Ultrafast Imaging, Luruper Chaussee 149, 22761 Hamburg, Germany
Department of Physics, University of Hamburg, Jungiusstrasse 9, 20355 Hamburg, Germany

When matter is exposed to a high-intensity x-ray free-electron-laser pulse, the x rays excite inner-shell electrons, leading to the ionization of the electrons through various atomic processes and creating high-energy-density plasma, i.e., warm or hot dense matter. The resulting system consists of atoms in various electronic configurations, thermalizing on sub-picosecond to picosecond timescales after photoexcitation. We present a simulation study of x-ray-heated solid-density matter. For this we use XMDYN, a Monte-Carlo molecular-dynamics-based code with periodic boundary conditions, which allows one to investigate non-equilibrium dynamics. XMDYN is capable of treating systems containing light and heavy atomic species with the full electronic configuration space and 3D spatial inhomogeneity. For the validation of our approach we compare, for a model system, the electron temperatures and the ion charge-state distribution from XMDYN to results for the thermalized system based on the average-atom model implemented in XATOM, an ab-initio x-ray atomic physics toolkit extended to include a plasma environment. Further, we also compare the average charge evolution of diamond with the predictions of a Boltzmann continuum approach. We demonstrate that the XMDYN results are in good quantitative agreement with the above-mentioned approaches, suggesting that the current implementation of XMDYN is a viable approach to simulating x-ray-driven non-equilibrium dynamics in solids. In order to illustrate the potential of XMDYN for treating complex systems we present calculations on the triiodo benzene derivative 5-amino-2,4,6-triiodoisophthalic acid (I3C), a compound of relevance for biomolecular imaging, consisting of heavy and light atomic species.

A molecular-dynamics approach for studying the non-equilibrium behavior of x-ray-heated solid-density matter
Robin Santra December 30, 2023
============================================================================================================

§ INTRODUCTION

X-ray free-electron lasers (XFELs) <cit.> provide intense radiation with a pulse duration down to only tens of femtoseconds. The cross sections for the elementary atomic processes during x-ray–matter interactions are small. Delivering high x-ray fluence can increase the probabilities of photoionization processes to saturation <cit.>. Nonlinear phenomena arise because of the complex multiphoton ionization pathways within a molecular or dense plasma environment <cit.>.
Theory has a key role in revealing the importance of different mechanisms in the dynamics. Many models have been developed for this purpose using both particle and continuum approaches <cit.>. In order to give a complete description of the evolution of the atomic states in the plasma, one needs to account for the possible occurrence of all electronic configurations of the atoms/ions. A computationally demanding situation arises when a plasma consists of heavy atomic species <cit.>. For example, at a photon energy of 5.5 keV, the number of electronic configurations accessible in a heavy atom such as xenon (Z=54) is about 20 million <cit.>. If one wants to describe the accessible configuration space of two such atoms, one must deal with (2×10^7)^2 = 4×10^14 electronic configurations. It is clear that following the populations of all electronic configurations in a polyatomic system as a function of time is a formidable task. To avoid this problem, the approximation of using superconfigurations has long been used <cit.>. Moreover, the approach of using a set of average configurations <cit.> and the approach of limiting the available configurations to a pre-selected subset of configurations in predominant relaxation paths <cit.> have been applied. The most promising approach to address this challenge is to sample the most important pathways in the unrestricted polyatomic electronic configuration space. This can be realized by using a Monte-Carlo strategy, which is straightforward to implement in a particle approach.

In the present study we simulate the effect of individual ultrafast XFEL pulses of different intensities incident on a model system of carbon atoms placed on a lattice and analyze the quasi-equilibrium plasma state of the material reached through ionization and electron plasma thermalization. In order to have a comprehensive description during electron plasma thermalization we include all possible atomic electronic configurations for Monte-Carlo sampling, and no pre-selection of transitions and configurations is introduced. To this end, we use XMDYN <cit.>, a Monte-Carlo molecular-dynamics-based code. XMDYN gives a microscopic description of a polyatomic system, and phenomena such as sequential multiphoton ionization <cit.>, nanoplasma formation <cit.>, and thermalization of electrons through collisions and thermal emission <cit.> emerge as an outcome of a simulation. Probabilities of transitions between atomic states are determined by cross-section and rate data that are calculated by XATOM <cit.>, a toolkit for x-ray atomic physics. In XMDYN individual ionization and relaxation paths are generated via a Monte-Carlo algorithm. A recent extension of XMDYN to periodic boundary conditions allows us to investigate bulk systems <cit.>.

To validate the XMDYN approach towards a free-electron thermal equilibrium, we use an average-atom (AA) extension of XATOM <cit.>, which is based on concepts of average-atom models used in plasma physics <cit.>. AA gives a statistical description of the behavior of atoms immersed in a plasma environment. It calculates plasma properties such as ion charge-state populations and plasma electron densities for a system with a given temperature. We compare the electron temperatures and ion charge-state distributions provided by XMDYN and AA. We also make a comparison between predictions for the ionization dynamics in irradiated diamond obtained by the XMDYN particle approach and results from a Boltzmann continuum approach published recently <cit.>.
With these comparisons, we demonstrate the potential of the XMDYN code for the description of high-energy-density bulk systems in and out of equilibrium. Finally, we consider a complex system of 5-amino-2,4,6-triiodoisophthalic acid (I3C in crystalline form) consisting of heavy and light atomic species. We show the evolution of average atomic charge states and free electron thermalization. We demonstrate that XMDYN can simulate the dynamics of x-ray-driven complex matter with all the possible electronic configurations without pre-selecting any pathways in the electronic configuration space.

§ THEORETICAL BACKGROUND

§.§ XMDYN: Molecular dynamics with super-cell approach

XMDYN <cit.> is a computational tool to simulate the dynamics of matter exposed to high-intensity x rays. A hybrid atomistic approach <cit.> is applied where neutral atoms, atomic ions and ionized (free) electrons are treated as classical particles, with defined position and velocity vectors, charge and mass. The molecular-dynamics (MD) technique is applied to calculate the real-space dynamics of these particles by solving the classical equations of motion numerically. XMDYN treats only those orbitals as being quantized that are occupied in the ground state of the neutral atom. It keeps track of the electronic configuration of all the atoms and atomic ions. XMDYN calls the XATOM toolkit on the fly, which provides rate and cross-section data of x-ray-induced processes such as photoionization, Auger decay, and x-ray fluorescence, for all possible electronic configurations accessible during intense x-ray exposure. Probabilities derived from these parameters are then used in a Monte-Carlo algorithm to generate a realization of the stochastic inner-shell dynamics. XMDYN includes secondary (collisional) ionization and recombination, the two most important processes occurring due to an environment. XMDYN has been validated quantitatively against experimental data on finite samples calculated within open boundary conditions <cit.>.

Our focus here is the bulk properties of highly excited matter. XMDYN uses the concept of periodic boundary conditions (PBC) to simulate bulk behavior <cit.>. In the PBC concept, we calculate the irradiation-induced dynamics of a smaller unit, called a super-cell. A hypothetical, infinitely extended system is constructed as a periodic extension of the super-cell. The Coulomb interaction is calculated for all the charged particles inside the super-cell within the minimum image convention <cit.>. Therefore, the total Coulomb force acting on a charge is given by the interaction with other charges within its well-defined neighborhood, containing also particles of the surrounding copies of the super-cell.

§.§ Impact ionization and recombination

While core-excited states of atoms decay typically within ten or fewer femtoseconds, electron impact ionization and recombination events occur throughout the thermalization process and are in dynamical balance in thermal equilibrium. The models used in this study treat these processes on a different footing, which we overview in this section. Within the XMDYN particle approach, electron impact ionization is not a stochastic process (i.e., no random number is needed in the algorithm), but it depends solely on the real-space dynamics (spatial location and velocity) of the particles and on the cross section.
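Before continuing with the ionization treatment, the minimum image convention mentioned above can be made concrete with a small sketch. This toy implementation is ours, not XMDYN's; it computes the pairwise Coulomb forces in a cubic super-cell in atomic units, wrapping each displacement to its nearest periodic image.

```python
import numpy as np

def coulomb_forces_mic(pos, q, box):
    """Pairwise Coulomb forces with the minimum-image convention in a
    cubic super-cell of side `box` (atomic units). Toy sketch, O(N^2)."""
    n = len(q)
    F = np.zeros_like(pos)
    for i in range(n):
        d = pos[i] - pos                  # displacements to all particles
        d -= box * np.round(d / box)      # wrap to nearest periodic image
        r2 = np.einsum('ij,ij->i', d, d)  # squared distances
        r2[i] = np.inf                    # exclude self-interaction
        F[i] = np.sum((q[i] * q / r2**1.5)[:, None] * d, axis=0)
    return F
```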
When a classical free electron is close to an atom/ion, its trajectory is extrapolated back to an infinite distance in the potential of the target ion by using energy and angular momentum conservation. Impact ionization occurs only if the impact parameter at infinity is smaller than the radius associated with the total electron impact ionization cross section. The total cross section is a sum of partial cross sections evaluated for the occupied orbitals, using the asymptotic kinetic energy of the impact electron. In the case of an ionization event the orbital to be ionized is chosen randomly, according to probabilities proportional to the subshell partial cross sections. XMDYN uses the binary-encounter-Bethe (BEB) cross sections <cit.> supplied with atomic parameters calculated with XATOM.

Similarly, in XMDYN recombination is a process that evolves through the classical dynamics of the particles. XMDYN identifies, for each electron, the ion that has the strongest Coulomb potential, and calculates for how long this condition is fulfilled. Recombination occurs when an electron remains around the same ion for n full periods (e.g., n=1) <cit.>. While recombination can be identified based on this definition, the electron is still kept classical if its classical orbital energy is higher than the orbital energy of the highest considered orbital containing a vacancy. When the classical binding becomes stronger, the classical electron is removed and the occupation number of the corresponding orbital is incremented by one. Although treating recombination this way is somewhat phenomenological (e.g., no cross section derived from inverse processes is used), similar treatments are common in particle simulations <cit.>. This process corresponds to three-body (or many-body) recombination, as energy of electrons is transferred to other plasma electrons leading to the recombination event. Three-body recombination is the predominant recombination channel in a warm-dense environment.

§.§ Electron plasma analysis

An electron plasma is formed when electrons are ejected from atoms in ionization events and stay among the ions for an extended period as, e.g., in bulk matter. The plasma dynamics are governed not only by the Coulomb interaction between the particles but also by collisional ionization, recombination, and so on. XMDYN follows the system from the very first photoionization event through non-equilibrium states until free electron thermalization is reached asymptotically. In order to quantify the equilibrium properties reached, we fit the plasma electron velocity distribution using a Maxwell-Boltzmann distribution,

f(v) = (1/(2π T))^{3/2} 4π v^2 e^{-v^2/2T},

where T represents the temperature (in units of energy), and v is the electron speed. Atomic units are used unless specified. With the function defined in Eq. (1) we fit the temperature, which is used later to compare with equilibrium-state calculations.

§ VALIDATION OF THE METHODOLOGY

In order to validate how well XMDYN can simulate free electron thermalization dynamics, we compare AA, where full thermalization is assumed, and XMDYN after reaching a thermal equilibrium. We first consider a model system consisting of carbon atoms. For a reasonable comparison of the results from XMDYN and AA, one should choose a system that can be addressed using both tools. AA does not consider any motion of atomic nuclei.
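A possible implementation of the temperature extraction of Eq. (1) is sketched below, assuming electron speeds in atomic units (m_e = 1); the histogramming and the equipartition-based initial guess are our choices, not necessarily those used in XMDYN.

```python
import numpy as np
from scipy.optimize import curve_fit

def maxwell_boltzmann(v, T):
    """Eq. (1): speed distribution of free electrons at temperature T
    (atomic units, m_e = 1)."""
    return (np.sqrt((1.0 / (2.0 * np.pi * T)) ** 3) * 4.0 * np.pi * v**2
            * np.exp(-v**2 / (2.0 * T)))

def fit_temperature(speeds, nbins=100):
    """Histogram the plasma-electron speeds of one snapshot and fit T."""
    hist, edges = np.histogram(speeds, bins=nbins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    T0 = np.mean(speeds**2) / 3.0      # initial guess from <v^2> = 3T
    (T_fit,), _ = curve_fit(maxwell_boltzmann, centers, hist, p0=[T0])
    return T_fit
```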
Therefore we had to restrict the translational motion of atoms and atomic ions in the XMDYN simulations as well. In order to do so, we set the carbon mass artificially so large that atomic movements were negligible throughout the calculations. Further, we increased the carbon-carbon distances to reduce the effect of the neighboring ions on the atomic electron binding energies. In the XMDYN simulations, we chose a super-cell of 512 carbon atoms arranged in a diamond structure, but with a 13.16 Å lattice constant (in the case of diamond it is 3.567 Å). The number density of the carbon atoms is ρ_0 = 3.5×10^-3 Å^-3, which corresponds to a mass density of 0.07 g/cm^3. Plasma was generated by choosing different irradiation conditions typical at XFELs. Three different fluences, ℱ_low = 6.7×10^9 ph/μm^2, ℱ_med = 1.9×10^11 ph/μm^2, and ℱ_high = 3.8×10^11 ph/μm^2, were considered. In all three cases the photon energy and pulse duration were 1 keV and 10 fs (full width at half maximum), respectively.

From the XMDYN plasma simulations shown in Fig. <ref>, the time evolution of the temperature of the electron plasma is analyzed by fitting to Eq. (1). Counterintuitively, right after photon absorption has finished, the temperature is still low, and then it gradually increases although no more energy is pumped into the system. The reason is that during the few tens of femtoseconds of irradiation the fast photoelectrons are not yet part of the free electron thermal distribution; initially only the low-energy secondary electrons and Auger electrons that have lost a significant part of their energy in collisions determine the temperature. The fast electrons thermalize on longer timescales as shown in Figs. <ref>(b) and (c), contributing to the equilibrated subset of electrons. In all cases equilibrium is reached within 100 fs after the pulse.

AA calculates only the equilibrium properties of the system, which means that it does not consider the history of the system's evolution through non-equilibrium states. We first calculate the total energy per atom, E(T), as a function of temperature T within a carbon system of density ρ_0,

E(T) = ∑_p ε_p ñ_p(μ,T) ∫_{r ≤ r_s} d^3r |ψ_p(𝐫)|^2,

where p is a one-particle state index, ε_p and ψ_p are the corresponding orbital energy and orbital, and ñ_p stands for the fractional occupation numbers at chemical potential μ. Details are found in Ref. <cit.>. In this way we obtain a relation between the average energy absorbed per atom, ΔE = E(T) - E(0), and the electron temperature (see Fig. <ref>). From XMDYN the average number of photoionization events per atom, n_ph, is available for each fluence point, and therefore the energy absorbed on average by an atom is known (= n_ph×ω_ph, where ω_ph is the photon energy). Using this value we can select the corresponding temperature that AA yields. This temperature is compared with that fitted from the XMDYN simulation. All these results are in reasonable agreement, as shown in Table <ref>. Later we use this temperature for calculating the charge-state distributions. Figure <ref> shows the kinetic-energy distribution of the electron plasma (in the left panels) and the charge-state distributions (in the right panels) for the three different fluences. The charge-state distributions obtained from XMDYN at the final timestep (250 fs) are compared to those obtained from AA at the temperatures specified in Table <ref>.
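The selection of the AA temperature for a given absorbed energy amounts to inverting the monotonic curve E(T). A minimal sketch, with hypothetical grids T_grid and E_grid assumed to be tabulated from AA, could read:

```python
import numpy as np

def temperature_from_absorbed_energy(dE, T_grid, E_grid):
    """Invert the average-atom energy curve to find the equilibrium
    temperature for absorbed energy per atom dE = n_ph * omega_ph.
    E_grid[i] = E(T_grid[i]) - E(0) is assumed monotonic in T."""
    return np.interp(dE, E_grid, T_grid)

# e.g. with n_ph photoionization events per atom at 1 keV photon energy
# (27.2114 eV per hartree, since atomic units are used throughout):
# T_eq = temperature_from_absorbed_energy(n_ph * 1000.0 / 27.2114,
#                                         T_grid, E_grid)
```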
Although similar charge states are populated using the two approaches, differences can be observed: AA yields consistently higher ionic charges than XMDYN (20%–30% higher average charges) for the cases investigated. This is probably for the following reasons. XMDYN calls XATOM on the fly to calculate re-optimized orbitals for each electronic configuration. In this way XMDYN accounts for the fact that ionizing an ion of charge Q costs less energy than ionizing an ion of charge Q+1. However, in the current implementation of AA, this effect is not considered. At a given temperature, AA uses the same orbitals (and, therefore, the same orbital energies) irrespective of the charge state. A likely consequence is that AA gives more population to higher charge states, simply because their binding energies are underestimated. That could also be the reason why AA produces wider charge-state distributions and predicts a somewhat higher average charge than XMDYN does. The other reason for the discrepancies could be the fact that XMDYN treats only those orbitals as being quantized that are occupied in the ground state of the neutral atom. For carbon, these are the 1s, 2s, and 2p orbitals. All states above are treated classically in XMDYN, resulting in a continuum of bound states. As a consequence, the density of states is different and it may yield different orbital populations and therefore different charge-state distributions. Moreover, while free-electron thermalization has been ensured, the bound electrons are not necessarily fully thermalized in XMDYN. In spite of the discrepancies observed, the XMDYN and AA equilibrium properties are in reasonably good agreement.

We also performed simulations under the conditions that had been used in a recent publication using a continuum approach <cit.>. In these simulations, we do not restrict nuclear motions. A Gaussian x-ray pulse of 10 fs FWHM was used. The intensities considered lie within the regime typically used for high-energy-density experiments: I_max = 10^16 W/cm^2 for ω_ph = 1000 eV, and I_max = 10^18 W/cm^2 for ω_ph = 5000 eV. We employed a super-cell of diamond (mass density = 3.51 g/cm^3) containing 1000 carbon atoms within the PBC framework. In this study, 25 different Monte-Carlo realizations were calculated and averaged for each irradiation case in order to improve the statistics of the results. For a system of 1000 carbon atoms each XMDYN trajectory takes 45 minutes of runtime. The average energy absorbed per atom [Fig. <ref>] is ∼28 eV and ∼26 eV, respectively, for the 1000-eV and 5000-eV photon-energy cases, in agreement with Ref. <cit.>. Figure <ref> shows the time evolution of the average charge for the two different photon energies. Average atomic charge states of +1.1 and +0.9, respectively, were obtained long after the pulse was over. Although the rapid increase of the average ion charge happens on very similar timescales, the charge values at the end of the calculation are 30% and 40% higher than those in Ref. <cit.> for the 1000-eV and 5000-eV cases, respectively [Fig. <ref>(a,b)]. We can name two reasons that can cause such differences in the final charge states. One is that two different formulas for the total impact ionization cross section were used in the two approaches. In Ref.
<cit.> the cross sections are approximated from experimental ground-state atomic and ionic data <cit.>, while XMDYN employs the semi-empirical BEB formula taking into account state-specific properties. Figure <ref> compares these cross sections for the neutral carbon atom. It can be seen that the cross section and, therefore, the rate of the ionization used by XMDYN are larger, which can shift the final average charge state higher as well. The second reason is the evaluation of the three-body recombination cross section. In Ref. <cit.> recombination is defined using the principle of microscopic reversibility, which states that the cross section of impact ionization can be used to calculate the recombination rate <cit.>. In the current implementation of the Boltzmann code the two-body distribution function is approximated using one-body distribution functions in the evaluation of the rate for three-body recombination, whereas in XMDYN correlations at all levels are naturally captured within the classical framework due to the explicit calculation of the microscopic electronic fields.

§ APPLICATION

In order to demonstrate the capabilities of XMDYN we investigate the complex system of I3C in crystalline form (chemical composition: C_8H_4I_3NO_4·H_2O) <cit.> irradiated by intense x rays. I3C contains the heavy atomic species iodine, which makes it a good prototype for investigations of experimental phasing methods based on anomalous scattering <cit.>. We considered pulse parameters used at an imaging experiment recently performed at the Linac Coherent Light Source (LCLS) free-electron laser <cit.>. The photon energy was 9.7 keV and the pulse duration was 10 fs FWHM. Two different fluences were considered in the simulations, ℱ_high = 1.0×10^13 ph/μm^2 (estimated to be in the center of the focus) and its half value, ℱ_med = 5.0×10^12 ph/μm^2. In these simulations, we do not restrict nuclear motions. The computational cell used in the simulations contained 8 molecules of I3C (184 atoms in total). The time propagation ends 250 fs after the pulse. For the analysis 50 XMDYN trajectories are calculated for both fluence cases. These trajectories sample the stochastic dynamics of the system without any restriction of the electronic configuration space, which possesses (2.0×10^7)^24 possible configurations considering the subsystem of the 24 iodine atoms only. The calculation of such an XMDYN trajectory takes approximately 150 minutes on a Tesla M2090 GPU, while the same calculation takes 48 hours on an Intel Xeon X5660 2.80 GHz CPU (single core).

Figure <ref> shows the average charge for the different atomic species in I3C as a function of time. Both fluences pump enormous amounts of energy into the system, predominantly through the photoionization of the iodine atoms due to their large photoionization cross section. In both cases almost all the atomic electrons are removed from the light atoms, but mainly via secondary ionization. The ionization of iodine is very efficient: already when applying the weaker fluence ℱ_med, the iodine atoms lose on average roughly half of their electrons, whereas for the high-fluence case the average atomic charge goes even above +40. Further, we also investigate the free electron thermalization. The plasma electrons reach thermalization via non-equilibrium evolution within approximately 200 fs. The Maxwellian distribution of the kinetic energy of these electrons corresponds to very high temperatures: 365 eV for ℱ_med and 1 keV for ℱ_high (see Fig. <ref>).
Hence, we have shown that XMDYN is a tool that can treat systems with 3D spatial inhomogeneity, whereas the continuum models usually deal with uniform or spherically symmetric samples. If the sample includes heavy atomic species, pre-selecting electronic configurations can affect the dynamics of the system. XMDYN allows for a flexible treatment of the atomic composition of the sample and, particularly, easy access to the electronic structure of heavy atoms with a large electronic configuration space.

§ CONCLUSIONS

We have investigated the electron plasma thermalization dynamics of x-ray-heated carbon systems using the simulation tool XMDYN and compared its predictions to two other, conceptually different simulation methods, the average-atom model (AA) and the Boltzmann continuum approach. Both XMDYN and AA are naturally capable of addressing ions with arbitrary electronic configurations, a very common situation in high-energy-density matter generated by, e.g., high-intensity x-ray irradiation. We found very similar quasi-equilibrium temperatures for the two methods. Qualitative agreement can be observed between the predicted ion charge-state distributions, although AA tends to yield somewhat higher charges. The reason could be that, in the current implementation, AA uses fixed atomic binding energies irrespective of the atomic electron configuration. We have also compared results from XMDYN and the Boltzmann continuum approach for the free electron thermalization dynamics of XFEL-irradiated diamond as a validation of our approach. Thermal equilibrium of the electron plasma is reached within similar times in the two descriptions, although the asymptotic average ion charge states are somewhat different. The discrepancy could be attributed to the different approaches for impact ionization and recombination processes in the two models and to the different parametrizations used in the simulation. Moreover, we have considered a complex system, crystalline I3C, containing the heavy atomic species iodine. We calculated the dynamics and evolution of the system from an x-ray-induced non-equilibrium state to a state where the plasma electrons are thermalized and hot dense matter is formed. The atomic electronic configurations for iodine are taken into account in full detail. Therefore, with XMDYN the treatment of systems including heavy atomic species (exhibiting complex inner-shell relaxation pathways) is comprehensive and expected to be reliable. Finally, we note that, in contrast to a Boltzmann continuum approach, it is straightforward within XMDYN to treat spatially inhomogeneous systems consisting of several or even many atomic species.

§ ACKNOWLEDGEMENTS

We thank Beata Ziaja for fruitful discussions about the Boltzmann continuum approach. We also thank John Spence, Richard Kirian, Henry Chapman, and Dominik Oberthuer for stimulating the I3C calculations presented in this work. This work has been supported by the excellence cluster “The Hamburg Centre for Ultrafast Imaging (CUI): Structure, Dynamics and Control of Matter at the Atomic Scale” of the Deutsche Forschungsgemeinschaft. | http://arxiv.org/abs/1703.09110v2 | {
"authors": [
"Malik Muhammad Abdullah",
"Anurag",
"Zoltan Jurek",
"Sang-Kil Son",
"Robin Santra"
],
"categories": [
"physics.atm-clus"
],
"primary_category": "physics.atm-clus",
"published": "20170327143716",
"title": "A molecular-dynamics approach for studying the non-equilibrium behavior of x-ray-heated solid-density matter"
} |
Belyaev, Berezhnoy, Likhoded, Luchinsky

Comments on "Study of J/ψ production in jets"

Institute of Theoretical and Experimental Physics (ITEP), Moscow, Russia [email protected]
SINP of Moscow State University, 119991 Moscow, Russia [email protected]
Institute for High Energy Physics NRC “Kurchatov Institute”, 142281, Protvino, Russia [email protected]
Institute for High Energy Physics NRC “Kurchatov Institute”, 142281, Protvino, Russia [email protected]

Comments on "Study of J/ψ production in jets"
A.V. Luchinsky December 30, 2023
=============================================

Recent LHCb measurements of J/ψ meson production in jets are analyzed using the fragmentation jet function formalism. It is shown that the disagreement with theoretical predictions for the distribution over the fraction of the jet transverse momentum carried by the J/ψ, z(J/ψ), in the case of prompt production can be explained if one takes into account the evolution of the fragmentation function and contributions from the double parton scattering mechanism.

§ INTRODUCTION

In a recent experimental paper <cit.> the LHCb Collaboration analyzed J/ψ meson production in jets induced by c-quarks in the forward region of proton-proton interactions at a center-of-mass energy √(s) = 13 TeV. In the cited work, distributions over the fraction of the jet transverse momentum carried by the J/ψ meson,

z = p_T^J/ψ / p_T^jet,

were presented both for J/ψ mesons produced in b-hadron decays and promptly. It turns out that in the first case the experimental results are consistent with theoretical predictions, while in the latter case a noticeable disagreement with theory is observed. In Fig. <ref> one can clearly see that the z-distribution of promptly produced J/ψ measured by the LHCb Collaboration is significantly softer than the theoretical predictions made by the Pythia8 <cit.> generator. In this short note we will try to give a simple explanation of this disagreement.

§ J/Ψ PRODUCTION IN JETS

In the original experimental paper it is required that the transverse momentum of the jet satisfy p_T^jet > p_T^min = 20 GeV, which is high enough to consider the applicability of the fragmentation approach. In this approach the measured distribution can be written in the form

dσ_J/ψ/dp_T = ∫_{2p_T/√(s)}^{1} [dσ_cc̅/dk_T](p_T/z) D_{c→J/ψ}(z)/z dz,    (1/σ) dσ/dz ∼ D_{c→J/ψ}(z),

where D_{c→J/ψ}(z) is the fragmentation function that describes the c-quark transition into a J/ψ meson and k_T is the transverse momentum of the c-quark. This function is universal and at LO QCD can be calculated using the Feynman diagram presented in Fig. <ref>. The analytical forms of the fragmentation function for S-wave states are known from <cit.>:

D_{Q→(Qq̅)}(z) = [2α_s^2 |R_S(0)|^2 / (27π m_c^3)] · [rz(1-z)^2 / (1-(1-r)z)^6] × [2 - 2(3-2r)z + 3(3-2r+4r^2)z^2 - 2(1-r)(4-r+2r^2)z^3 + (1-r)^2(3-2r+2r^2)z^4],

where α_s is the strong coupling constant, r = m_Q/(m_Q+m_q), and |R_S(0)| is the value of the Qq̅ quarkonium wave function at the origin. For our case r = 0.5 and the formula can be rewritten as follows (see also <cit.>):

D_{c→J/ψ}(z) ∼ [4z(1-z)^2/(2-z)^6] (16 - 32z + 72z^2 - 32z^3 + 5z^4).

Integrating over z, we can obtain the total probability to produce a J/ψ and a c quark in the fragmentation process <cit.>:

P_{c→J/ψ} = [64/(27π)] [α_s^2 |R_S(0)|^2 / M_J/ψ^3] (1189/30 - 57 log 2).

As shown in <cit.>, non-fragmentation mechanisms contribute essentially to hadronic J/ψ + c + c̅ production. However, these contributions decrease rapidly with increasing transverse momentum, and only the fragmentation contribution mentioned above remains, with the J/ψ meson and the c quark close in rapidity.
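For illustration, the r = 1/2 fragmentation function quoted above is straightforward to evaluate numerically; a minimal sketch (normalisation arbitrary, as in the text) is:

```python
import numpy as np

def D_frag(z):
    """LO c -> J/psi fragmentation function for r = 1/2 (shape only)."""
    return (4.0 * z * (1.0 - z)**2 / (2.0 - z)**6
            * (16.0 - 32.0*z + 72.0*z**2 - 32.0*z**3 + 5.0*z**4))

z = np.linspace(0.0, 1.0, 501)
shape = D_frag(z)
shape /= np.trapz(shape, z)   # normalise to unit area as a z-distribution
```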
This quark could be observed as a D meson in the same jet as the J/ψ, and it is interesting to note that in our model we could expect z(D) ≈ z(J/ψ)/2. We show this function in Fig. <ref> with a dotted curve, and one can see that its form (as well as the Pythia8 predictions) contradicts the experimental data.

It should be noted, however, that the fragmentation function (<ref>) depends on the factorization scale μ^2, and the parametrization (<ref>) corresponds to the value μ = μ_1 = m_c. Comparison with high-energy experimental results, on the other hand, should be performed at some higher scale of the order μ_2 ∼ p_T^min. From the papers <cit.> it is known that the evolution from μ_1^2 to μ_2^2 leads to a significant variation of the shape of the fragmentation function. This evolution can be described using the DGLAP evolution equation <cit.>, with which one can easily track the evolution of the fragmentation function (<ref>) from the scale μ^2 = μ_1^2 to μ^2 = μ_2^2. The corresponding results are shown in Fig. <ref>. It is clear that, after the evolution of the fragmentation function is taken into account, the agreement with experimental data in the high-z region is restored.

There is, however, some experimental excess in the z ∼ 0.2 region. In order to describe this peak one should also add contributions from the double parton scattering (DPS) mechanism. The corresponding distribution is presented in <cit.>. This distribution was obtained using the Pythia8 generator with default settings, which in turn corresponds to σ_eff ∼ 30 mb. Simple estimates, on the other hand, show that the experimental data support higher contributions of the DPS mechanism (and, correspondingly, a smaller value of the effective cross section), so in this work we use only the shape of the DPS component. Since the overall normalizations are not known, we describe the total cross section as a sum of DPS and fragmentation signals with free parameters:

(1/σ) dσ/dz = c_DPS [(1/σ) dσ/dz]_DPS + c_frag [(1/σ) dσ/dz]_frag,

where the numerical values of the parameters c_DPS,frag were determined from a fit of the experimental data:

c_frag = 0.59 ± 0.05,    c_DPS = 0.26 ± 0.05.

The correlation matrix of the fit is

( 1      -0.37 ;
  -0.37   1    ).

In Fig. <ref> and Table <ref> we show the total cross section calculated with these parameters in comparison with the experimental data. One can clearly see that, after the double parton scattering contributions and the evolution of the fragmentation function are taken into account, the theoretical estimates are in reasonable agreement with the experiment.

§ CONCLUSION

Let us summarize briefly the results of our work. It was shown that the disagreement between theoretical predictions and experimental results in the z-distribution of prompt J/ψ meson production in jets can be removed if one takes into account the evolution of the fragmentation function c → J/ψ + c and the double parton scattering contribution. It should be mentioned that in our calculations we restrict ourselves to the color-singlet mechanism. In the recent work <cit.> it was proposed to explain the same discrepancy by taking into account also contributions of the color-octet components of the J/ψ meson. A more detailed experimental study of the process under consideration (including, probably, polarization measurements <cit.>) could help to solve this problem more accurately. In addition, our model predicts that a charmed meson should be present comoving with the J/ψ, with z(D) ≈ z(J/ψ)/2, so it could be interesting to search for this particle. It should be noted also that, according to the papers <cit.>, for the transverse momenta under consideration non-fragmentation contributions could also be important.
In our future work we plan to study this question in more detail.

The authors would like to thank Dr. Lansberg and Dr. Filippova for fruitful discussions. This work was partially supported by the Russian Foundation for Basic Research grant #14-02-00096.

Aaij:2017fak LHCb Collaboration, R. Aaij et al. (2017), arXiv:1701.05116 [hep-ex].
Sjostrand:2007gs T. Sjostrand, S. Mrenna and P. Z. Skands, Comput. Phys. Commun. 178, 852 (2008), arXiv:0710.3820 [hep-ph].
Braaten:1993jn E. Braaten, K.-m. Cheung and T. C. Yuan, Phys. Rev. D48, R5049 (1993), arXiv:hep-ph/9305206 [hep-ph].
Kiselev:1994qp V. V. Kiselev, A. K. Likhoded and M. V. Shevlyagin, Z. Phys. C63, 77 (1994).
Braaten:1993mp E. Braaten, K.-m. Cheung and T. C. Yuan, Phys. Rev. D48, 4230 (1993), arXiv:hep-ph/9302307 [hep-ph].
Berezhnoy:1998aa A. V. Berezhnoy, V. V. Kiselev, A. K. Likhoded and A. I. Onishchenko, Phys. Rev. D57, 4385 (1998), arXiv:hep-ph/9710339 [hep-ph].
Novoselov:2010zz A. Novoselov, Phys. Atom. Nucl. 73, 1740 (2010), arXiv:1007.0846 [hep-ph], [Yad. Fiz. 73, 1789 (2010)].
Corcella:2007tg G. Corcella and G. Ferrera, JHEP 12, 029 (2007), arXiv:0706.2357 [hep-ph].
Gribov:1972ri V. N. Gribov and L. N. Lipatov, Sov. J. Nucl. Phys. 15, 438 (1972), [Yad. Fiz. 15, 781 (1972)].
Lipatov:1974qm L. N. Lipatov, Sov. J. Nucl. Phys. 20, 94 (1975), [Yad. Fiz. 20, 181 (1974)].
Dokshitzer:1977sg Y. L. Dokshitzer, Sov. Phys. JETP 46, 641 (1977), [Zh. Eksp. Teor. Fiz. 73, 1216 (1977)].
Altarelli:1977zs G. Altarelli and G. Parisi, Nucl. Phys. B126, 298 (1977).
Bain:2017wvk R. Bain, L. Dai, A. Leibovich, Y. Makris and T. Mehen (2017), arXiv:1702.05525 [hep-ph].
Kang:2017yde Z.-B. Kang, J.-W. Qiu, F. Ringer, H. Xing and H. Zhang (2017), arXiv:1702.03287 [hep-ph].
Baranov:2006dh S. P. Baranov, Phys. Rev. D73, 074021 (2006).
Artoisenet:2007xi P. Artoisenet, J. P. Lansberg and F. Maltoni, Phys. Lett. B653, 60 (2007), arXiv:hep-ph/0703129 [hep-ph].

| http://arxiv.org/abs/1703.09081v2 | {
"authors": [
"I. Belyaev",
"A. V. Berezhnoy",
"A. K. Likhoded",
"A. V. Luchinsky"
],
"categories": [
"hep-ph",
"hep-ex"
],
"primary_category": "hep-ph",
"published": "20170327135617",
"title": "Comments on \"Study of $J/ψ$ production in jets\""
} |
School of Physics and Astronomy, University of Edinburgh [email protected]

A novel method for learning optimal, orthonormal wavelet bases for representing 1- and 2D signals, based on parallels between the wavelet transform and fully connected artificial neural networks, is described. The structural similarities between these two concepts are reviewed and combined into a “wavenet”, allowing for the direct learning of optimal wavelet filter coefficients through stochastic gradient descent with back-propagation over ensembles of training inputs, where conditions on the filter coefficients for constituting orthonormal wavelet bases are cast as quadratic regularisation terms. We describe the practical implementation of this method, and study its performance for a few toy examples. It is shown that optimal solutions are found, even in a high-dimensional search space, and the implications of the result are discussed.

Neural networks; wavelets; machine learning; optimization

§ INTRODUCTION[Sections <ref> and <ref> contain overlaps with <cit.>.]

The Fourier transform has proved an indispensable tool within the natural sciences, allowing for the study of frequency information of functions and for the efficient representation of signals exhibiting angular structure. However, the Fourier transform is limited by being global: each frequency component carries no information about its spatial localisation; information which might be valuable. Multiresolution, and in particular wavelet, analysis has been developed, in part, to address this limitation, representing a function at various levels of resolution, or at different frequency scales, while retaining information about position-space localisation. This encoding uses the fact that, due to their smaller wavelengths, high-frequency components may be localised more precisely than their low-frequency counterparts.

The wavelet decomposition expresses any given signal in terms of a “family” of orthonormal basis functions <cit.>, efficiently encoding frequency-position information. Several different such wavelet families exist, both for continuous and discrete input, but these are generally quite difficult to construct exactly as they don't possess closed-form representations. Furthermore, the best basis function for any given problem depends on the class of signal, choosing the best among existing functional families is hard and likely sub-optimal, and constructing new bases is non-trivial, as mentioned above. Therefore, we present a practical, efficient method for directly learning the best wavelet bases, according to some optimality criterion, by exploiting the intimate relationship between neural networks and the wavelet transform.

Such a method could have potential uses e.g. in areas utilising time-series data and imaging, for instance (but not limited to) EEG, speech recognition, seismographic studies, and financial markets, as well as image compression, feature extraction, and de-noising. However, as is shown in Section <ref>, the areas to which such an approach can be applied are quite varied. In Section <ref> we review some of the work previously done along these lines. In Section <ref> we briefly describe wavelet analyses, neural networks, as well as their structural similarity and how they can be combined. In Section <ref> we discuss metrics appropriate for measuring the quality of a certain wavelet basis. In Section <ref> we describe the actual algorithm for learning optimal wavelet bases.
Section <ref> describes the practical implementation and, finally, Section <ref> provides an example use case from high-energy physics.

§ PREVIOUS WORK

A typical approach <cit.>, when faced with the task of choosing a wavelet basis in which to represent some class of signals, is to select one among an existing set of wavelet families which is deemed suitable to the particular use case based on some measure of fitness. This might lead to sub-optimal results, as mentioned above, since limiting the search to a few dozen pre-existing wavelet families will likely result in inefficient encoding or representation of (possibly subtle) structure particular, or unique, to the problem at hand. To address this shortcoming, considerable effort has already gone into the question of the existence and construction of optimal wavelet bases. Ref. <cit.> describes a method for constructing optimally matched wavelets, i.e. wavelet bases matching a prescribed pattern as closely as possible, through lifting <cit.>. However, the proposed method is somewhat arduous and relies on the specification of a pattern to which to match, requiring considerable and somewhat artificial preprocessing.[“It is difficult to find a problem our method can be applied to without major modifications.” <cit.>.] This is not necessarily possible, let alone easy, for many use cases as well as for the study of more general classes of inputs rather than single examples. In a similar vein, Ref. <cit.> provides a method for unconstrained optimisation of a wavelet basis with respect to a sparsity measure using lifting, but has the same limitations as Ref. <cit.>.

Refs. <cit.> provide theoretical arguments for the existence of optimal wavelet bases as well as an algorithm for constructing such a basis for single 1- or 2D inputs, based on gradient descent. However, results are only presented for low-order wavelet bases, the implementation of orthonormality constraints is not discussed, and the question of generalisation from single inputs to classes of inputs is not addressed. In addition, the optimal filter coefficients referenced in <cit.> do not satisfy the explicit conditions (C2), (C3), and (C4) for orthonormality in Section <ref> below. These constraints are violated at the 1%-level, which also corresponds roughly to the relative angular deviation of the reported optimal basis from the Daubechies <cit.> basis of similar order. Finally, Refs. <cit.> provide a comprehensive prescription for designing wavelets that optimally represent signals, or classes of signals, at some fixed scale J. However, the results are quite cumbersome, are based on a number of assumptions regarding the characteristics of the input signal(s), and relate only to the question of optimal representation at fixed scales.

This indicates that, although the question of constructing optimal wavelet bases has been given substantial consideration, and clear developments have been made already, a general approach to easily learning discrete, demonstrably orthonormal wavelet bases of arbitrary structure and complexity, optimised over classes of input, has yet to be developed and implemented for a practically arbitrary choice of optimality metric. This is what is done below.

§ THEORETICAL CONCEPTS

In this section, we briefly review some of the underlying aspects of wavelet analysis, Section <ref>, and neural networks, Section <ref>, upon which the learning algorithm is based.
In Section <ref> we discuss the parallels between the two concepts, and how these can be used to directly learn optimal wavelet bases.

§.§ Wavelet

Numerous excellent references explain multiresolution analysis and the wavelet transform in depth, so the present text will focus on the discrete class of wavelet transforms, formulated in the language of matrix algebra as it relates directly to the task at hand. For a more complete review, see e.g. <cit.> or <cit.>.

In the parlance of matrix algebra, the simplest possible input signal f ∈ ℝ^N is a column vector

f = [ f[0]; f[1]; ⋮; f[2^M-2]; f[2^M-1] ]

and the dyadic structure of the wavelet transform means that N must be radix 2, i.e. N = 2^M for some M ∈ ℕ_0.[Although the results below are also applicable to 2D, i.e. matrix, input, cf. Section <ref>.] The forward wavelet transform is then performed by the iterative application of low- and high-pass filters. Let L(f) denote the low-pass filtering of input f, the i'th entry of which is then given by the convolution

L(f)[i] = ∑_{k=0}^{2^M-1} a[k] f[i + N/2 - k],    i ∈ [0, 2^{M-1} - 1]

assuming periodicity, such that f[-1] = f[N-1], etc. The low-pass filter, a, is represented as a row vector of length N_filt, with N_filt even, and its entries are called the filter coefficients, {a}.

The convolution yielding each entry i in L(f) can be seen as a matrix inner product of f with a row matrix of the form

[ ⋯ 0  a[N_filt-1] ⋯ a[1] a[0]  0 ⋯ ]

Since this is true for each entry, the full low-pass filter may be represented as a (2^{M-1} × 2^M) · (2^M × 1) matrix inner product:

L(f) = L_{M-1} f

where, for each low-pass operation, the matrix operator is a 2^m × 2^{m+1} banded matrix whose rows each contain the (reversed) filter coefficients, shifted by two columns relative to the row above:

L_m = [ ⋱ ⋱ ⋱ ⋱ ;
        ⋯ a[N_filt-1] ⋯ a[1] a[0] 0 0 0 0 ⋯ ;
        ⋯ 0 0 a[N_filt-1] ⋯ a[1] a[0] 0 0 ⋯ ;
        ⋯ 0 0 0 0 a[N_filt-1] ⋯ a[1] a[0] ⋯ ;
        ⋱ ⋱ ⋱ ⋱ ]

In complete analogy to Eq. (<ref>), a high-pass filter matrix H_m can be expressed as a 2^m × 2^{m+1} matrix parametrised in the same way by coefficients {b}, which we choose <cit.> to relate to {a} by

b_k = (-1)^k a_{N_filt - 1 - k}    for k ∈ [0, N_filt - 1]

This means that, given filter coefficients {a}, we have specified the full wavelet transform in terms of repeated application of the matrix operators L_m and H_m. The filter coefficients will therefore serve as our parametrisation of any given wavelet basis.

At each step in the transform, the power of 2 that gives the current length of the (partially transformed) input, n = 2^m, is referred to as the frequency scale, m. Large frequency scales m correspond to large input arrays, which are able to encode more granular, and therefore more high-frequency, information than for small m. As the name implies, the low-pass filter acts as a spatial sub-sampling of the input from frequency scale m to m-1, averaging out the frequency information at scale m in the process. Similarly, the high-pass filter encodes the frequency information at scale m; the information which is lost in the low-pass filtering. After each step, another pass of high- and low-pass filters is applied to the sub-sampled, low-pass filtered input. This procedure is repeated from frequency scale M to 0. At each step, the high-pass filter encodes the frequency information specific to the current frequency scale. This is illustrated in Figure <ref>.

The coefficients obtained through successive convolution of the signal with the high- and low-pass filters, i.e. the right-most layers in Figure <ref>, collectively encode the same information as the position-space input f, but in the basis of wavelet functions. These are called the wavelet coefficients {c}.
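A minimal sketch of the construction of the filter operators just described might look as follows. Note that the exact row alignment (phase convention) of the banded matrices is not fixed uniquely by the text, so the version below should be read as illustrative rather than as the reference implementation.

```python
import numpy as np

def filter_matrices(a, m):
    """Build the 2^m x 2^(m+1) low- and high-pass operators L_m, H_m from
    the filter coefficients {a}, with rows shifted by two columns and
    periodic wrap-around; b_k = (-1)^k a_{Nfilt-1-k}."""
    a = np.asarray(a, dtype=float)
    nfilt, n = len(a), 2 ** (m + 1)
    b = (-1.0) ** np.arange(nfilt) * a[::-1]
    L = np.zeros((n // 2, n))
    H = np.zeros((n // 2, n))
    for i in range(n // 2):
        for k in range(nfilt):
            col = (2 * i + k) % n             # periodic boundary
            L[i, col] += a[nfilt - 1 - k]     # row reads a[Nf-1] ... a[0]
            H[i, col] += b[nfilt - 1 - k]
    return L, H
```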
Given such a set of wavelet coefficients, the inverse transform can be performed by retracing the steps of the forward transform. Letting f_m denote the input signal low-pass filtered down to scale m, with f_M ≡ f, the inverse transform proceeds as

f_0 = [c_0]
f_1 = L_0^T f_0 + H_0^T [c_1]
f_2 = L_1^T f_1 + H_1^T [c_2  c_3]
⋮
f ≡ f_M = L_{M-1}^T f_{M-1} + H_{M-1}^T [c_{2^{M-1}} ⋯ c_{2^M-1}]

In this way it is seen that c_0 encodes the average information content in the input signal f, and that the c_{i>0} dyadically encode the frequency information at larger and larger scales m. The explicit wavelet basis function corresponding to each wavelet coefficient can be found by setting c = [ ⋯ 0 1 0 ⋯ ] and studying the resulting, reconstructed position-space signal f̂ at some suitable largest scale M.

The filter coefficients {a} completely specify the wavelet transform and -basis, but they are not completely free parameters. Instead, they must satisfy a number of explicit conditions in order to correspond to an orthonormal wavelet basis. These conditions <cit.> are as follows:

In order to satisfy the dilation equation, the filter coefficients {a} must satisfy

∑_k a_k = √(2)    (C1)

In order to ensure orthonormality of the scaling- and wavelet functions, the coefficients {a} and {b} must satisfy

∑_k a_k a_{k+2m} = δ_{m,0}    ∀ m ∈ ℤ    (C2)

and

∑_k b_k b_{k+2m} = δ_{m,0}    ∀ m ∈ ℤ    (C3)

where the condition for m=0 is trivially fulfilled from (C2) through Eq. (<ref>). To ensure that the corresponding wavelets have zero area, i.e. encode only frequency information, we require

∑_k b_k = 0    (C4)

Finally, to ensure orthogonality of scaling and wavelet functions, we must have

∑_k a_k b_{k+2m} = 0    ∀ m ∈ ℤ    (C5)

where condition (C5) is automatically satisfied through Eq. (<ref>).

Conditions (C1–5) then collectively ensure that the filter coefficients {a} (and {b}) yield a wavelet analysis in terms of orthonormal basis functions. As we parametrise our basis uniquely in terms of the filter coefficients {a}, since {b} are fixed through Eq. (<ref>), we will need to explicitly ensure that these conditions are met. The method for doing this is described in Section <ref>.

§.§ Neural network

Since (artificial) neural networks have become ubiquitous within most areas of the physical sciences, we will only briefly review the central concepts as they relate to the rest of this discussion. A comprehensive introduction can be found e.g. in Ref. <cit.>.

Neural networks can be seen as general mappings f: ℝ^n → ℝ^m, which can approximate any function, provided sufficient capacity. In the simplest case, such networks are constructed sequentially, where the input vector f = h_0 ∈ ℝ^{N_0} is transformed through the inner product with a weight matrix θ_1, the output of which is a hidden layer h_1 ∈ ℝ^{N_1}, and so forth, until the output layer h_l ∈ ℝ^{N_l} is reached. The configuration of a given neural network, in terms of the number of layers and their respective sizes, is called the network architecture. In addition to the transfer matrices θ_i, the layers may be equipped with bias nodes, providing the opportunity for an offset, as well as non-linear activation functions. A schematic representation of one such network, without bias nodes and non-linearities, is shown in Figure <ref>.

The neural network can then be trained on a set of training examples, {(f_i, y_i)}, where the task of the network usually is to output a vector ŷ_i trying to predict y_i given f_i. The quality of the prediction is quantified by the cost or objective function 𝒥(y, ŷ).
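Conditions (C1), (C2), and (C4) are easy to verify numerically for a candidate filter; (C5) is automatic by Eq. (<ref>), and (C3) follows from (C2) and the choice of {b}. A small sketch of such a check, exercised here on the four-tap Daubechies filter (a known orthonormal basis):

```python
import numpy as np

def wavelet_conditions(a):
    """Residuals of conditions (C1), (C2), and (C4) for a candidate
    filter {a}; all entries should vanish for an orthonormal basis."""
    a = np.asarray(a, dtype=float)
    n = len(a)
    res = {"C1": a.sum() - np.sqrt(2.0),
           "C2(m=0)": np.dot(a, a) - 1.0,
           "C4": ((-1.0) ** np.arange(n) * a[::-1]).sum()}
    for m in range(1, n // 2):
        res[f"C2(m={m})"] = np.dot(a[:-2 * m], a[2 * m:])
    return res

# Four-tap Daubechies filter coefficients:
db = np.array([1 + np.sqrt(3), 3 + np.sqrt(3),
               3 - np.sqrt(3), 1 - np.sqrt(3)]) / (4 * np.sqrt(2))
print(wavelet_conditions(db))   # all residuals ~0 to machine precision
```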
The central idea is then to take the error of any given prediction ŷ_i, given by the derivative of the cost function with respect to the prediction at the current value, and back-propagate it through the network, performing the inverse operation of the forward pass at each layer. In this way, the gradient of the cost function 𝒥 with respect to each entry in the network's weight matrices (θ_i)_{jk} is computed. Using stochastic gradient descent, for each training example one performs small update steps of the weight matrix entries along these error gradients, which is then expected to produce slightly better performance of the network with respect to the task specified by the cost function.

One challenge posed by such a fully connected network is the sheer multiplicity of weights for just a few layers of moderate sizes. Such a large number of free parameters can make the network prone to over-fitting, which can be mitigated e.g. by L_2 weight regularisation, where a regularisation term ℛ({θ}) is added to the cost function, with a multiplier λ controlling the trade-off between the two contributions.

§.§ Combining concepts

The crucial step is then to recognise the deep parallels between these two constructs. We can cast the discrete wavelet transform as an ℝ^N → ℝ^N neural network with a fully-connected, deep, non-sequential, dyadic architecture without bias units and with linear (i.e. no) activations. A schematic representation of this setup, here called a “wavenet”, is shown in Figure <ref>. This is done by identifying the neural network transfer matrices with the low- and high-pass filter operators in the matrix formulation of the wavelet transform, cf. Eq. (<ref>). The forward wavelet transform then corresponds to the neural network mapping, and the output vector of the neural network is exactly the wavelet coefficients of the input with respect to the basis prescribed by {a}. If we can formulate an objective function 𝒥 for the wavelet coefficients, i.e. the output of the “wavenet”, this means that we can utilise the parallel with neural networks and employ back-propagation to gradually update the weight matrix entries, i.e. the filter coefficients {a}, in order to improve our wavelet basis with respect to this metric. Therefore, choosing a fixed filter length |{a}| = N_filt, and parametrising the “wavenet” in terms of {a}, we are able to directly learn the wavelet basis which is optimal according to some task 𝒥.

Interestingly, and unlike some of the approaches mentioned in Section <ref>, a neural network approach naturally accommodates classes of inputs, in addition to single examples. That is, one can train repeatedly on a single example and learn a basis which optimally represents this particular signal in some way, cf. e.g. <cit.>. However, the use of stochastic gradient descent is naturally suited for fitting the weight matrices to ensembles of training examples, which in many cases is much more meaningful and useful, cf. Section <ref>.

Another key observation is that while the entries in a standard neural network weight matrix are free parameters, the weights in the “wavenet” are highly constrained, since they must correspond to the low- and high-pass filters of the wavelet transform. For instance, a neural network like the one in Figure <ref>, mapping ℝ^8 → ℝ^8, will have 84 free parameters in the standard treatment. However, identifying each of the 6 weight matrices with the wavelet filter operators, this number is reduced to N_filt, which can be as low as 2.
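The forward pass of the “wavenet” is then simply the repeated application of these operators. A sketch, reusing filter_matrices() from above and collecting the high-pass outputs as the wavelet coefficients:

```python
import numpy as np

def wavenet_forward(f, a):
    """Forward wavelet transform / 'wavenet' mapping: apply L and H
    iteratively, keeping the high-pass output at each scale and
    sub-sampling with the low-pass output."""
    coeffs = []
    low = np.asarray(f, dtype=float)
    while len(low) > 1:
        m = int(np.log2(len(low))) - 1        # target frequency scale
        L, H = filter_matrices(a, m)
        coeffs.append(H @ low)                # frequency content at scale m
        low = L @ low                         # sub-sampled signal
    coeffs.append(low)                        # c_0: average information
    return np.concatenate(coeffs[::-1])       # order [c_0, c_1, c_2, ...]
```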
This is schematically shown in Figure <ref>. For inputs of “realistic” sizes, i.e. |f| = N ≳ 64, this reduction is exponentially greater, leading to a significant reduction of complexity.

Finally, we note that the filter coefficients need to conform with conditions (C1–5), cf. Section <ref> above, in order to correspond to an orthonormal wavelet basis. This can be solved by noting that all conditions (C1–5) are differentiable with respect to {a}, which means that we can cast these conditions in the form of quadratic regularisation terms, ℛ_i, which can then be added to the cost function with some multiplier λ, in analogy to standard L_2 weight regularisation. The multiplier λ then controls the trade-off between the possibly competing objectives of optimising 𝒥 and ensuring fulfillment of conditions (C1–5). In principle, this means that for finite λ any learned filter configuration {a} might violate these conditions to order 1/λ, and might therefore strictly be taken to constitute a “pseudo-orthonormal” basis. This will, however, have little impact in practical applications, where one can simply choose a value of λ sufficiently high that 𝒪(1/λ) is within the tolerances of the use case at hand.

§ MEASURING OPTIMALITY

The choice of objective function defines the sense in which the basis learned through the method outlined in Section <ref> will be optimal. This also affords the user a certain degree of freedom in defining the measure of optimality, the only condition being that the objective function be differentiable with respect to the wavelet coefficients {c}.[Possibly except for a finite number of points.]

In this example we choose sparsity, i.e. the ability of a certain basis to efficiently encode the information contained in a given signal, as our measure of optimality. From the point of view of compression, sparsity is clearly a useful metric, in that it measures the amount of information that can be stored within a certain amount of space/memory. From the point of view of representation, sparsity is likely also a meaningful objective, since a basis which efficiently represents the defining features of a (class of) signal(s) will also lead the signal(s) to be sparse in this basis.

Based on <cit.>, we choose the Gini coefficient 𝒢(·) as our metric for the sparsity of a set of wavelet coefficients {c},

𝒢({c}) = ∑_{i=0}^{N_c-1} (2i - N_c - 1) |c_i| / ( N_c ∑_{i=0}^{N_c-1} |c_i| ) ≡ f({c}) / g({c})

for wavelet coefficients {c} sorted by ascending absolute value, i.e. |c_i| ≤ |c_{i+1}| for all i. Here N_c ≡ |{c}| is the number of wavelet coefficients.

A Gini coefficient of 1 indicates a completely unequal, and therefore maximally sparse, distribution, i.e. the case in which only one coefficient has a non-zero value, and therefore carries all of the information content in the signal. Conversely, a Gini coefficient of 0 indicates a completely equal distribution, i.e. each coefficient has exactly the same (absolute) value, and therefore all carry exactly the same amount of information content.

Having settled on a choice of objective function, we now proceed to describing the details of the learning procedure itself.
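The Gini coefficient of Eq. (<ref>) translates directly into code; a minimal sketch following the 0-indexed, sorted-by-|c_i| convention of the text:

```python
import numpy as np

def gini(c):
    """Gini coefficient of a set of wavelet coefficients, following the
    sorted-|c| formula in the text (0-indexed)."""
    x = np.sort(np.abs(np.ravel(c)))          # ascending |c_i|
    n = len(x)
    i = np.arange(n)
    return np.sum((2 * i - n - 1) * x) / (n * np.sum(x))
```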
Having settled on a choice of objective function, we now proceed to describing the details of the learning procedure itself. We stress that the results of the following sections should generalise to other reasonable choices of objectives, which may be chosen based on the particular use case at hand.

§ LEARNING PROCEDURE

As noted above, the full objective function for the optimisation problem is given as the sum of a sparsity term 𝒮({c}) and a regularisation term ℛ({a}), the relative contribution of the latter controlled by the regularisation constant λ, i.e.

𝒥({c}, {a}) = 𝒮({c}) + λ ℛ({a})

where {c} is the set of wavelet coefficients for a given training example and {a} is the current set of filter coefficients. The ℛ-term ensures that the filter coefficient configuration {a} does indeed correspond to a wavelet basis as defined by conditions (C1–5) above; the 𝒮-term measures the quality of a given wavelet basis according to the chosen fitness measure. The learning task then consists of optimising the filter coefficients according to this combined objective function, i.e. finding a filter coefficient configuration, in an N_filt-dimensional parameter space, which minimises 𝒥.

The procedure for computing a filter coefficient gradient for each of the two terms is outlined below.

§.§ Sparsity term

Based on the discussion in Section <ref>, we have chosen the Gini coefficient 𝒢( · ) as defined in Eq. (<ref>) as our measure of the sparsity of any given set of wavelet coefficients {c}. The sparsity term in the objective function is chosen to be

𝒮({c}) = 1 - 𝒢({c})

This definition means that low values of 𝒮({c}) correspond to a greater degree of sparsity, such that minimising this objective function term increases the degree of sparsity.

In order to utilise stochastic gradient descent with back-propagation, the objective function needs to be differentiable in the values of the output nodes, i.e. the wavelet coefficients. Since the sparsity term is the only term which depends on the wavelet coefficients, particular care needs to be afforded here. The sparsity term is seen to be differentiable everywhere except for a finite number of points where c_i = 0. In these cases the derivative is taken to be zero, which is meaningful considering the chosen optimisation objective: coefficients of value zero will, assuming at least one non-zero coefficient exists, contribute maximally to the sparsity of the set as a whole. Therefore we don't want these coefficients to change, and the corresponding gradient should be zero.[Cases with all zero-valued coefficients are ill-defined but also practically irrelevant.]

Therefore, assuming c_i ≠ 0, the derivative of the sparsity term is given by (suppressing the arguments of the objective function terms for brevity)

∇_|c| 𝒮 ≡ e_i ∂𝒮/∂|c_i| = e_i ∂(1 - 𝒢)/∂|c_i| = - ∇_|c| 𝒢 = - ( ∇_|c| f · g - f · ∇_|c| g ) / g^2

where

∇_|c| f = e_i ∂/∂|c_i| ( ∑_k=1^N_c (2k - N_c - 1) |c_k| ) = ( 2i - N_c - 1 ) e_i

and

∇_|c| g = e_i ∂/∂|c_i| ( N_c ∑_k=1^N_c |c_k| ) = N_c e_i

for f and g defined in Eq. (<ref>), where summation over the vector index i is implied.

To get the gradient with respect to the signed coefficient values, the gradients of f and g are multiplied by the corresponding coefficient sign, i.e.

∇_c f = sgn(c) × ∇_|c| f and ∇_c g = sgn(c) × ∇_|c| g

where × indicates element-wise multiplication. The gradients with respect to the base, non-sorted set of wavelet coefficients {c} are found by performing the inverse sorting with respect to the absolute wavelet coefficient values. In this way ∇_c 𝒮 can be computed from ∇_c f and ∇_c g through Eq. (<ref>).
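A direct transcription of these steps, as a NumPy sketch under the same conventions (the chain through the sorting is handled by the inverse permutation):

```python
import numpy as np

def sparsity_gradient(c):
    """Gradient of S = 1 - G with respect to the (unsorted) wavelet
    coefficients c, following the derivation above. Zero-valued
    coefficients receive zero gradient by construction."""
    c = np.asarray(c, dtype=float)
    n = len(c)
    order = np.argsort(np.abs(c))          # sort by ascending |c|
    x = np.abs(c[order])
    f = np.sum((2 * np.arange(1, n + 1) - n - 1) * x)
    g = n * np.sum(x)
    df = 2 * np.arange(1, n + 1) - n - 1   # grad of f w.r.t. |c| (sorted)
    dg = np.full(n, float(n))              # grad of g w.r.t. |c| (sorted)
    dG_sorted = (df * g - f * dg) / g**2
    grad = np.zeros(n)
    # unsort, apply the coefficient sign, and flip sign since S = 1 - G
    grad[order] = -dG_sorted * np.sign(c[order])
    grad[c == 0.0] = 0.0                   # convention for c_i = 0
    return grad
```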
Having computed the gradient of the sparsity cost with respect to the output nodes (wavelet coefficients) we can now use standard back-propagation on the full network to compute the associated gradient on each entry in the low- and high-pass filter matrices. For a given, fixed filter length N_filt, entries in the filter matrices which are identically zero are not modified by a gradient. Conversely, the gradient on every filter matrix entry to which a particular filter coefficient is contributing is added to the corresponding sparsity gradient in filter coefficient space, possibly with a sign change in the case of high-pass filter matrices, cf. Eq. (<ref>). In this way, the gradient on the wavelet coefficients is translated into a gradient in filter coefficient space, which we can then use in stochastic gradient descent, along with a similar regularisation gradient, to gradually improve our wavelet basis as parametrised by {a}.

§.§ Regularisation term

The regularisation terms are included to ensure that the optimal filter coefficient configuration does indeed correspond to an orthonormal wavelet basis as defined through conditions (C1–5). As noted in Section <ref>, we choose to cast these conditions in the form of quadratic regularisation conditions on the filter coefficients {a}. Each of the conditions (C1–5) is of the form

h_k({a}) = d_k

which can be written as a quadratic regularisation term, i.e.

ℛ_k({a}) = ( h_k({a}) - d_k )^2

and the combined regularisation term is then given by

ℛ({a}) = ∑_k=1^5 ℛ_k({a})

This formulation allows for the search to proceed in the full N_filt-dimensional search space, and the regularisation constant λ regulates the degree of precision to which the optimal filter coefficient configuration will fulfill conditions (C1–5).

In order to translate deviations from conditions (C1–5) into gradients in filter coefficient space, we take the derivative of each of the terms ℛ_k with respect to the filter coefficients a_i. The gradients are found to be:

∇_a ℛ_1 = e_i 2 ( ∑_k a_k - √(2) )   (D1)
∇_a ℛ_2 = e_i ∑_m 2 ( ∑_k a_k a_k+2m - δ_m,0 ) × ( a_i+2m + a_i-2m )   (D2)
∇_a ℛ_3 = e_i ∑_m 2 ( ∑_k b_k b_k+2m - δ_m,0 ) × ( a_i+2m + a_i-2m )   (D3)
∇_a ℛ_4 = e_i 2 ( ∑_k b_k ) × (-1)^(N - i - 1)   (D4)
∇_a ℛ_5 = 0   (D5)

Since condition (C5) is satisfied exactly by the definition in Eq. (<ref>), the corresponding gradient is identically equal to zero. The combined gradient from the regularisation term is then the sum of the above five (four) contributions.
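As an illustration, a minimal NumPy sketch of two of these penalty terms and their analytic gradients. The range of shifts m and the zero-padding convention are assumptions made here; a finite-difference check is a cheap way to validate each (D·) expression.

```python
import numpy as np

SQRT2 = np.sqrt(2.0)

def high_pass(a):
    """Assumed high-pass convention b_k = (-1)^k a_{N-1-k}, entering the
    analogous (C3)/(C4) penalty terms (illustrative only)."""
    n = len(a)
    return np.array([(-1) ** k * a[n - 1 - k] for k in range(n)])

def reg_and_gradient_C1(a):
    """Quadratic penalty R_1 = (sum_k a_k - sqrt(2))^2 and its gradient,
    matching Eq. (D1): dR_1/da_i = 2 (sum_k a_k - sqrt(2))."""
    r = np.sum(a) - SQRT2
    return r**2, 2.0 * r * np.ones_like(a)

def reg_and_gradient_C2(a):
    """Penalty enforcing orthonormality of even shifts of the low-pass
    filter, sum_k a_k a_{k+2m} = delta_{m,0}; gradient as in Eq. (D2).
    Out-of-range shifts are treated as zero-padded (an assumption)."""
    n = len(a)
    R, grad = 0.0, np.zeros(n)
    for m in range(-(n // 2) + 1, n // 2):
        s = sum(a[k] * a[k + 2 * m] for k in range(n) if 0 <= k + 2 * m < n)
        r = s - (1.0 if m == 0 else 0.0)
        R += r**2
        for i in range(n):
            for j in (i + 2 * m, i - 2 * m):   # dR/da_i = 2r (a_{i+2m}+a_{i-2m})
                if 0 <= j < n:
                    grad[i] += 2.0 * r * a[j]
    return R, grad
```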
§ IMPLEMENTATION

The learning procedure based on the objective function and associated gradients presented in Section <ref> is implemented <cit.> as a publicly available C++ <cit.> package. The matrix algebra operations are implemented using armadillo <cit.>, with an optional interface to the high-energy physics root library <cit.>. This package allows for the processing of 1- and 2-dimensional training examples of arbitrary size, provides data generators for a few toy examples, and reads CSV input as well as high-energy physics collision events in the HepMC <cit.> format. The 2D wavelet transform is performed by applying the 1D transform to each row in the signal, concatenating the output rows, and then applying the 1D transform to each of the resulting columns. Their matrix concatenation then corresponds to the 2D set of wavelet coefficients.

In addition to standard (batch) gradient descent, the library allows for the use of gradient momentum and simulated annealing of the regularisation term in order to ensure faster and more robust convergence to the global minimum even in the presence of local minima and steep regularisation contours.

§ EXAMPLE: QCD 2 → 2 PROCESSES IN HIGH-ENERGY PHYSICS

As an example of the procedure for learning optimal wavelet bases according to the metric presented in Section <ref>, using the implementation in Sections <ref> and <ref>, we choose that of hadronic jets produced at proton colliders. In particular, the input to the training is taken to be simulated quantum chromodynamics (QCD) 2 → 2 processes, generated in Pythia8 <cit.>, segmented into a 2D array of size 64 × 64 in the η-ϕ plane, roughly corresponding to the angular granularity of present-day general purpose particle detectors. The collision events are generated at a centre-of-mass energy of √(s) = 13 TeV with a generator-level p_⊥ cut of 280 GeV imposed on the leading parton.

QCD radiation patterns are governed by scale-independent splitting kernels <cit.>, which could make them suitable candidates for wavelet representation, since these naturally exhibit self-similar, scale-independent behaviour. In that case, the optimal (in the sense of Section <ref>) representation is one which efficiently encodes the localised angular structure of this type of process, and could be used to study, or even learn, such radiation patterns. In addition, differences in representation might help distinguish between such non-resonant, one-prong “QCD jets” and resonant, two-prong jets e.g. from the hadronic decay of the W and Z electroweak bosons.

We also note that, as alluded to in Section <ref>, for signals of interest in collider physics, a standard neural network with “wavenet” architecture contains an enormous number of free parameters, e.g. N_c^2D ≈ 4.4 × 10^7 for N × N = 64 × 64 input, which is reduced to N_filt, i.e. as few as two, by the parametrisation in terms of the filter coefficients {a}.

We apply the learning procedure using Ref. <cit.>, iterating over such “dijet” events pixelised in the η-ϕ plane, and use back-propagation with gradient descent to learn the configuration of {a} which, for fixed N_filt, minimises the combined sparsity and regularisation objective in Eq. (<ref>). This is shown in Fig. <ref> for N_filt = 2. It is seen that, for N_filt = 2, only one minimum exists, due to only one point in a_1-a_2 space fulfilling all five conditions (C1–5). This configuration has a_1 = a_2 = 1/√(2) and is exactly the Haar wavelet <cit.>. Although this is an instructive example allowing for clean visualisation, showing the clear effect of the gradient descent algorithm and the efficacy of the interpretation of conditions (C1–5) as quadratic regularisation terms, it also doesn't tell us much, since the global minimum will be the same for all classes of inputs. For N_filt > 2 the regularisation allows for minima in an effective hyperspace with dimension D > 0.

Instead choosing N_filt = 16 we can perform the same optimisation, but now with sufficient capacity of the wavelet basis to encode the defining features of this class of signals.
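Schematically, the optimisation performed for this example reduces to a loop of the following form, built on the sketches of the previous sections. This is illustrative only: a finite-difference gradient stands in for the analytic back-propagation of Section <ref>, and the learning rate, λ and step count are arbitrary choices, not the values used for the results below.

```python
import numpy as np

def sparsity_term(f, a):
    """S = 1 - G of the wavelet coefficients of f under filter a
    (uses the `gini` and `wavenet_forward` sketches above)."""
    return 1.0 - gini(wavenet_forward(f, a))

def train_filter(events, n_filt=16, lam=10.0, lr=1e-3, n_steps=5000,
                 eps=1e-6, seed=0):
    """Stochastic-gradient loop learning the filter coefficients {a}."""
    rng = np.random.default_rng(seed)
    a = rng.normal(size=n_filt)
    a /= np.linalg.norm(a)                     # random init on unit sphere
    for _ in range(n_steps):
        f = events[rng.integers(len(events))]  # one training example
        grad = np.zeros(n_filt)
        for i in range(n_filt):                # finite-difference gradient
            da = np.zeros(n_filt); da[i] = eps
            grad[i] = (sparsity_term(f, a + da) -
                       sparsity_term(f, a - da)) / (2 * eps)
        for reg in (reg_and_gradient_C1, reg_and_gradient_C2):
            grad += lam * reg(a)[1]            # regularisation gradients
        a -= lr * grad                         # plain SGD step
    return a
```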
The effect of the learning procedure is presented in Figure <ref>, showing a selection of the lowest-scale wavelet basis functions corresponding to particular filter coefficient configurations at the beginning of, during, and at convergence of the learning procedure in this higher-dimensional search space. The random initialisation on the unit hyper-sphere is shown to produce random noise (Figure <ref>), which does not correspond to a wavelet basis, since the algorithm has not yet been afforded time to update the filter coefficients to conform with the regularisation requirements. At some point roughly half way through the training, the filter coefficient configuration does indeed yield an orthonormal wavelet basis (Figure <ref>), and the learning procedure now follows the gradients towards greater sparsity along a high-dimensional, quadratic regularisation “valley”. Finally, at convergence, the optimal wavelet found is again seen to be exactly the Haar wavelet (Figure <ref>), despite the vast amount of freedom provided to the algorithm by virtue of 16 filter coefficients. That is, the learning procedure arrives at the optimal configuration by setting 14 filter coefficients to exactly zero without any manual tuning.

This result shows that limiting the support of the basis functions provides for more efficient representation than any deviations due to radiation patterns could compensate for. Indeed, it can be shown that, removing some of the conditions (C1–5) so as to ensure that {a} simply corresponds to an orthonormal basis (i.e. not necessarily an orthonormal wavelet basis), the learning procedure results in the pixel basis, i.e. the one in which each basis function corresponds to a single entry in the input array. This shows that, due to the fact that QCD showers are fundamentally point-like (due to the constituent particles) and since they, to leading order, are dominated by a few particles carrying the majority of the energy in the jet, the basis which best allows for the representation of single particles will prove optimal according to our chosen measure, Eq. (<ref>). However, since this example studies the optimal representation of entire events, its conclusions may change for inputs restricted to a certain region in η-ϕ space around a particular jet, i.e. for the study of the optimal representation of jets themselves.

§ ACKNOWLEDGMENTS

The author is supported by the Scottish Universities Physics Alliance (SUPA) Prize Studentship. The author would like to thank Troels C. Petersen for insightful discussions on the subject matter, and James W. Monk for providing Monte Carlo samples. | http://arxiv.org/abs/1706.03041v2 | {
"authors": [
"Andreas Søgaard"
],
"categories": [
"cs.NE",
"cs.LG"
],
"primary_category": "cs.NE",
"published": "20170325154601",
"title": "Learning optimal wavelet bases using a neural network approach"
} |
Laboratory of Integrative Neuroscience, The Rockefeller University The primary visual cortex (V1) integrates information over scales in visual space, which have been shown to vary, in an input-dependent manner, as a function of contrast and other visual parameters. Which algorithms the brain uses to achieve this feat are largely unknown and an open problem in visual neuroscience. We demonstrate that a simple dynamical mechanism can account for this contrast-dependent scale of integration in visuotopic space as well as connect this property to two other stimulus-dependent features of V1: extents of lateral integration on the cortical surface and response latencies. Adaptive Scales of Spatial Integration and Response Latencies in a Critically-Balanced Model of the Primary Visual Cortex Marcelo Magnasco December 30, 2023 =========================================================================================================================

§ INTRODUCTION

Stimuli in the natural world have quantitative characteristics that vary over staggering ranges. Our nervous system evolved to parse such widely-ranging stimuli, and research into how the nervous system can cope with such ranges has led to considerable advances in our understanding of neural circuitry. For example, at the sensory transduction level, the physical magnitudes encoded into primary sensors, such as light intensity, sound pressure level and olfactant concentration, vary over exponentially-large ranges, leading to the Weber-Fechner law <cit.>. As neuronal firing rates cannot vary over such large ranges, the encoding process must compress physical stimuli into the far more limited ranges of neural activity that represent them. These observations have stimulated a large amount of research into the mechanisms underlying the nonlinear compression of physical stimuli in the nervous system. Of relevance to our later discussion is the nonlinear compression of sound intensity in the early auditory pathways <cit.>, where it has been shown that poising the active cochlear elements on a Hopf bifurcation leads to cubic-root compression.

But other characteristics besides the raw physical magnitude still vary hugely. The wide range of spatial extents and correlated linear structures present in visual scenery <cit.> leads to a more subtle problem, if we think of the visual areas as fundamentally limited by corresponding anatomical connectivity. Research into this problem has been focused on elucidating the nature of receptive fields of neurons in the primary visual cortex (V1) <cit.>. Studies have found that as the contrast of a stimulus is decreased, the receptive field <cit.> size or area of spatial summation in visual space increases (Fig <ref>) <cit.>. As an example of contextual modulation of neuronal responses, this problem has naturally received theoretical attention <cit.>. However, the current literature either highlights a different set of features or writes the contextual modulations into the models explicitly, in an ad hoc fashion, rather than describing this phenomenon as structurally integral to the neural architecture. Our aim is to develop a model which displays this phenomenon structurally, as a direct consequence of the neural architecture. In our proposed models, multiple length scales emerge naturally without any fine tuning of the system's parameters. This leads to length-tuning curves similar to the ones measured in Kapadia et al. over the entire range (Fig <ref>) <cit.>.

The findings of Kapadia et al.
demonstrate that receptive fields in V1 are not constant but instead grow and shrink, seemingly beyond naive anatomical parameters, according to stimulus contrast. The “computation” being carried out is not fixed but is itself a function of the input. Let us examine this distinction carefully. There are numerous operations in image processing, such as Gaussian blurs or other convolutional kernels, whose spatial range is fixed. It is very natural to imagine neural circuitry having actual physical connections corresponding to the nonzero elements of a convolutional kernel, and in fact a fair amount of effort has been expended trying to identify actual synapses corresponding to such elements <cit.>. There are, however, other image-processing operations, such as floodfill (the "paint bucket”), whose spatial extent is entirely dependent on the input; the problem of "binding” of perceptual elements is usually thought about in this way, and mechanisms posited to underlie such propagation dynamics include synchronization of oscillations acting in a vaguely paint-bucket-like way <cit.>. This dichotomy is artificial because these are only the two extremes of a potentially continuous range. While the responses of neurons in V1 superficially appear to be convolutional kernels, their strong dependence on input characteristics, particularly the size of the receptive field, demonstrates a more complex logic in which spatial extent is determined by specific characteristics of the input. What is the circuitry underlying this logic?

Neurons in the primary visual cortex are laterally connected to other neurons on the cortical surface and derive input from them. Experiments have shown that the spatial extent on the cortical surface from which neurons derive input from other neurons through such lateral interactions varies with the contrast of the stimulus <cit.>. In the absence of stimulus contrast, spike-triggered traveling waves of activity propagate over large areas of cortex. As contrast is increased, the waves become weaker in amplitude and travel over increasingly small distances. These experiments suggest that the change in spatial summation area with increasing stimulus contrast may be consistent with the change in the decay constants of the traveling wave activity. However, no extant experiment directly links changes in summation in visual space to changes in integration on the cortical surface, and no explicit model of neural architecture has been shown to simultaneously account for, and thus connect, the input-dependence of spatial summation and lateral integration in V1. The latter is our aim, and a crucial clue will come from the input-dependence of latencies.

Recently, a critically-balanced network model of cortex was proposed to explain the contrast dependence of functional connectivity <cit.>. It was shown that in the absence of input, the model exhibits wave-like activity with an infinitely long-ranged susceptibility, while in the presence of input, perturbed network activity decays exponentially with an attenuation constant that increases with the strength of the input. These results are in direct agreement with Nauhaus et al. <cit.>.

We will now demonstrate that a similar model also leads to adaptive scales of spatial integration in visual space. Our model makes two key assumptions. The first is a local, not just global, balance of excitation and inhibition across the entire network; all eigenmodes of the network are associated with purely imaginary eigenvalues.
It has been shown that such a critically-balanced configuration can be achieved by simulating a network of neurons with connections evolving under an anti-Hebbian rule <cit.>. The second key assumption is that all interactions in the network are described by the connectivity matrix; nonlinearities do not couple distinct neurons in the network. There are a number of examples of dynamical criticality in neuroscience, including experimental studies in motor cortex <cit.>, theoretical <cit.> and experimental studies <cit.> of line attractors in oculomotor control, line attractors in decision making <cit.>, Hopf bifurcation in the auditory periphery <cit.> and olfactory system <cit.>, and theoretical work on regulated criticality <cit.>. More recently, Solovey et al. <cit.> performed stability analysis of high-density electrocorticography recordings covering an entire cerebral hemisphere in monkeys during reversible loss of consciousness. Performing a moving vector autoregressive analysis of the activity, they observed that the eigenvalues crowd near the critical line. During loss of consciousness, the number of eigenmodes at the edge of instability decreases smoothly, drifting back to the critical line during recovery of consciousness.

We also examine the dynamics of the system and show that its activity exponentially decays to a limit cycle over multiple timescales, which depend on the strength of the input. Specifically, we find that the temporal exponential decay constants increase with increasing input strength. This result agrees with single-neuron studies which have found that response latencies in V1 decrease with increasing stimulus contrast <cit.>. We now turn to describing our model.

§ METHODS

Let x ∈ ℂ^N be the activity vector for a network of neurons which evolve in time according to the normal form equation:

ẋ_i = ∑_j A_ij x_j - |x_i|^2 x_i + I_i(t)

In this model, originally proposed by Yan and Magnasco <cit.>, neurons interact with one another through a skew-symmetric connectivity matrix A. The cubic-nonlinear term in the model is purely local and does not couple the activity states of distinct neurons, while the external input I(t) ∈ ℂ^N to the system may depend on time and have a complex spatial pattern.

The original model considered a 2-D checkerboard topology of excitatory and inhibitory neurons. For theoretical simplicity and computational ease, we will instead consider a 1-D checkerboard layout of excitatory and inhibitory neurons which interact through equal-strength, nearest neighbor connections (Fig <ref>). In this case, A_ij = (-1)^j s (δ_i,j+1 + δ_i,j-1), where i,j = 0,1,...,N-1 and s is the synaptic strength. Boundary conditions are such that the activity vanishes outside of the finite network.

We are specifically interested in the time-asymptotic response of the system, but explicitly integrating the stiff, high-dimensional ODE in (<ref>) is difficult. Fortunately, we can bypass numerical integration methods by assuming periodic input of the form I(t) = Fe^iω t, where F ∈ ℂ^N, and looking for solutions X(t) = Ze^iω t, where Z ∈ ℂ^N. Substituting these into (1), we find that:

0 = (A - iω)Z - |Z|^2 Z + F

We define g(Z) to be the right-hand side of (<ref>). The solution of (<ref>) can then be found numerically by using the multivariable Newton-Raphson method in ℂ^N:

Z → Z - J(Z)^-1 g(Z)

where Z and g here denote the concatenations of the real and imaginary parts of Z and g, respectively, and J is the Jacobian of g with respect to Z, J_ij(Z) = ∂g_i/∂Z_j.
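A compact sketch of this solver (NumPy; the real/imaginary split and the small nonzero initial guess, which avoids an exactly singular Jacobian when ω is an eigenfrequency of A, are implementation choices for illustration, not prescriptions from the text):

```python
import numpy as np

def steady_state(A, F, omega, z0=None, tol=1e-12, max_iter=100):
    """Newton-Raphson solution of 0 = (A - i*omega) Z - |Z|^2 Z + F,
    treating the real and imaginary parts of Z as 2N real unknowns,
    since |Z|^2 Z is not complex-differentiable."""
    N = len(F)
    Z = np.full(N, 1e-3 + 1e-3j) if z0 is None else z0.astype(complex)
    M = A - 1j * omega * np.eye(N)

    def g(Z):
        return M @ Z - np.abs(Z) ** 2 * Z + F

    for _ in range(max_iter):
        X, Y = Z.real, Z.imag
        # Jacobian blocks of the real map (X, Y) -> (Re g, Im g)
        dxx = M.real - np.diag(3 * X**2 + Y**2)
        dxy = -M.imag - np.diag(2 * X * Y)
        dyx = M.imag - np.diag(2 * X * Y)
        dyy = M.real - np.diag(X**2 + 3 * Y**2)
        J = np.block([[dxx, dxy], [dyx, dyy]])
        gv = g(Z)
        step = np.linalg.solve(J, np.concatenate([gv.real, gv.imag]))
        Z = (X - step[:N]) + 1j * (Y - step[N:])
        if np.linalg.norm(g(Z)) < tol:
            break
    return Z
```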
§ RESULTS

To test how the response of a single neuron in the network varies with both the strength and length of the input, we select a center neuron at index c and then calculate, for a range of input strengths, the response of the neuron as a function of input length around it. Formally, for each input strength level B ∈ ℝ, we solve (<ref>) for:

F_k(B,l) = B v_k if k ∈ [c-l, c+l], and 0 otherwise,

where k = 0,...,N-1, v ∈ ℂ^N describes the spatial shape of the input, and 2l+1 is the length of the input in number of neurons. The response of the center neuron is taken as the modulus of Z_c, and we focus on the case where ω is an eigenfrequency of A and v the corresponding eigenvector.

The results for a 1-D checkerboard network of 64 neurons are shown in Fig <ref>. Here we fix a center neuron and sweep across a small range of eigenfrequencies ω of A. The curves from bottom to top correspond to an ascending order of base-2 exponentially distributed input strengths B = 2^i. For all eigenfrequencies, the peak of the response curves shifts towards larger input lengths as the input strength decreases. In fact, for very weak input, the response curves rise monotonically over the entire range of input lengths without ever reaching a maximum in this finite network. This is in contrast to the response curves corresponding to strong input, which always reach a maximum but, depending on the eigenfrequency, exhibit varying degrees of response suppression beyond the maximum. This is consistent with the variability of response suppression in primary visual cortex studies <cit.>. In Fig <ref>, eigenfrequencies ω = 1.92, 1.96, 1.99 show the greatest amount of suppression while the others display little to none.

To understand why certain eigenfrequencies lead to suppression, we fix the eigenfrequency to be ω = 1.92 and examine the response curves of different center neurons. The responses of four center neurons (labeled by network position) and the modulus of the eigenfrequency's corresponding eigenvector are plotted in Fig <ref>. The center neurons closest to the zeros of the eigenvector experience the strongest suppression for long line lengths. Neuron 38, closer to the peak of the eigenvector's modulus, experiences almost zero suppression. This generally holds for all eigenvectors and neurons in the network, as all eigenvectors are periodic in their components with an eigenvalue-dependent spatial frequency.

To strengthen the connection between model and neurophysiology, one can consider a critically-balanced network with an odd number of neurons so that 0 is now an eigenfrequency of the system. In our model, input associated with the 0-eigenmode represents direct current input to the system, which is what neurophysiologists utilize in experiments; the visual input is not flashed <cit.>. Contrary to the even case, long range connections must be added on top of the nearest neighbor connectivity in order to recover periodic eigenvectors and hence suppression past the response curves' maxima.

Next, we show that the network not only selectively integrates input as a function of input strength but also operates on multiple time scales which flexibly adapt to the input. This behavior is not surprising given that in the case of a single critical Hopf oscillator, the half width of the resonance, i.e. the frequency range over which the oscillator's response falls by half, scales with the forcing strength of the input as Γ ∝ F^2/3, where Γ is the half-width and F the input strength <cit.>; a numerical illustration of this scaling is sketched below.
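A minimal check for a single oscillator (step size, horizon and fit window are illustrative choices, and only the real amplitude mode is probed here):

```python
import numpy as np

def decay_rate(F, n_steps=200_000):
    """Decay constant of the transient of a single critical Hopf
    oscillator, dz/dt = i w z - |z|^2 z + F e^{i w t}. In the rotating
    frame v = z e^{-i w t} the dynamics is dv/dt = -|v|^2 v + F; for
    real F the motion stays real and v -> v* = F^(1/3). We fit the
    slope of log|v - v*| over the late part of the transient."""
    vs = F ** (1.0 / 3.0)
    dt = 0.003 / F ** (2.0 / 3.0)      # resolve the slow 1/F^(2/3) timescale
    v, ts, logs = 0.0, [], []
    for n in range(n_steps):
        v += dt * (F - v**3)           # forward Euler
        d = vs - v
        if 1e-9 * vs < d < 0.1 * vs:   # late, nearly linear regime
            ts.append(n * dt)
            logs.append(np.log(d))
    return -np.polyfit(ts, logs, 1)[0]

# Linearisation around v* predicts a decay constant of 3 F^(2/3):
for F in (1e-4, 1e-2, 1.0):
    print(F, decay_rate(F), 3 * F ** (2.0 / 3.0))
```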
Thus, decay constants in the case of a single critical oscillator should grow with the input forcing strength as F^2/3. Assuming input Fe^iω t, as described above, the network activity x(t), given by (<ref>), decays exponentially in time to a stable limit cycle, X(t) = Ze^iω t. This implies that for any neuron i in the network, |x_i(t)| = e^-bt f(t) + |Z_i| during the approach to the limit cycle. We therefore plot log(||x_i(t)| - |Z_i||) over the transient decay period and estimate the slope of the linear regimes. We do this for a nearly network-size input length (input length = 29, N = 32) and a range of exponentially distributed input strengths. In Fig <ref>, we plot representative transient periods of a single neuron corresponding to 3 input strengths: 2^-10, 2^-4, and 2^2. For weak input there is a fast single exponential decay regime (red) that determines the system's approach to the stable limit cycle. As we increase the input, however, the transient period displays two exponential decay regimes: the fast decay regime (red) which was observed in the presence of weak input and a new slow decay regime (blue) immediately preceding the stable limit cycle. For very large input strength, the slow decay regime becomes dominant. The presence of multiple decay regimes is a surprising result which doesn't appear in the case of a single critical Hopf oscillator. We estimate the exponential decay constants as a function of input strength and plot them on a log-log scale in Fig <ref>. The red circles correspond to the fast decay regime, while the blue circles correspond to the slow decay regime, which becomes prominent for large forcings. We separately fit both the slow and fast decay regimes with a best-fit line. Unsurprisingly, the slopes of the lines are equal and approximately 2/3. Thus, the decay constants grow with the input as ∝ F^2/3, where F is the input strength. This implies that the system operates on multiple timescales, dynamically switching from one to another depending on the magnitude of the forcing. Larger forcings lead to faster network responses.

In this paper, we consider a line of excitatory and inhibitory neurons, but our results hold equally well for a ring of neurons with periodic boundary conditions and appropriately chosen long range connections. Ring networks have extensively been studied as a model of orientation selectivity in V1 <cit.>. In agreement with recent findings <cit.>, the critically-balanced ring network exhibits surround suppression in orientation space when long range connections are added on top of nearest neighbor connectivity.

§ CONCLUSION

We have shown that a simple dynamical system poised at the onset of instability exhibits an input-strength-dependent scale of integration of the system's input and input-strength-dependent response latencies. This finding strongly complements our previous results showing that a similar nonlinear process with fixed, nearest neighbor network connectivity leads to input-dependent functional connectivity. This system is thus the first proposed mechanism that can account for contrast dependence of spatial summation, functional connectivity, and response latencies. In this framework, these three characteristic properties of signal processing in V1 are intrinsically linked to one another.

fechner1860elements Fechner G. Elements of Psychophysics (Howes DH, Boring EC, Adler HE, transl.). Holt, Rinehart and Winston, New York; 1966. Originally published in German, 1860.
eguiluz2000essential Eguíluz VM, Ospeck M, Choe Y, Hudspeth AJ, Magnasco MO. Essential nonlinearities in hearing. Physical Review Letters. 2000 May 29;84(22):5232. camalet2000auditory Camalet S, Duke T, Jülicher F, Prost J. Auditory sensitivity provided by self-tuned critical oscillations of hair cells. Proceedings of the National Academy of Sciences. 2000 Mar 28;97(7):3183-8. field1987relations Field DJ. Relations between the statistics of natural images and the response properties of cortical cells. JOSA A. 1987 Dec 1;4(12):2379-94. ruderman1994statistics Ruderman DL, Bialek W. Statistics of natural images: Scaling in the woods. Physical Review Letters. 1994 Aug 8;73(6):814-817. sigman2001common Sigman M, Cecchi GA, Gilbert CD, Magnasco MO. On a common circle: natural scenes and Gestalt rules. Proceedings of the National Academy of Sciences. 2001 Feb 13;98(4):1935-40. kapadia1995imporovement Kapadia MK, Ito M, Gilbert CD, Westheimer G. Improvement in visual sensitivity by changes in local context: parallel studies in human observers and in V1 of alert monkeys. Neuron. 1995 Oct 31;15(4):843-56. zipser1996contextual Zipser K, Lamme VA, Schiller PH. Contextual modulation in primary visual cortex. Journal of Neuroscience. 1996 Nov 15;16(22):7376-89. levitt1997contrast Levitt JB, Lund JS. Contrast dependence of contextual effects in primate visual cortex. Nature. 1997 May 1;387(6628):73. polat1998collinear Polat U, Mizobe K, Pettet MW, Kasamatsu T, Norcia AM. Collinear stimuli regulate visual responses depending on cell's contrast threshold. Nature. 1998 Feb 5;391(6667):580-4. kapadia1999dynamics Kapadia MK, Westheimer G, Gilbert CD. Dynamics of spatial summation in primary visual cortex of alert monkeys. Proceedings of the National Academy of Sciences. 1999 Oct 12;96(21):12073-8. sceniak1999contrast Sceniak MP, Ringach DL, Hawken MJ, Shapley R. Contrast's effect on spatial summation by macaque V1 neurons. Nature Neuroscience. 1999 Aug 1;2(8):733-9. hubel1962receptive Hubel DH, Wiesel TN. Receptive fields, binocular interaction and functional architecture in the cat's visual cortex. The Journal of Physiology. 1962 Jan 1;160(1):106-54. kuffler1953discharge Kuffler SW. Discharge patterns and functional organization of mammalian retina. Journal of Neurophysiology. 1953 Jan 1;16(1):37-68. deangelis1992organization DeAngelis GC, Robson JG, Ohzawa I, Freeman RD. Organization of suppression in receptive fields of neurons in cat visual cortex. Journal of Neurophysiology. 1992 Jul 1;68(1):144-63. deangelis1994length DeAngelis GC, Freeman RD, Ohzawa IZ. Length and width tuning of neurons in the cat's primary visual cortex. Journal of Neurophysiology. 1994 Jan 1;71(1):347-74. schwabe2006feedback Schwabe L, Obermayer K, Angelucci A, Bressloff PC. The role of feedback in shaping the extra-classical receptive field of cortical neurons: a recurrent network model. Journal of Neuroscience. 2006 Sep 6;26(36):9117-29. lochmann2012perceptual Lochmann T, Ernst UA, Deneve S. Perceptual inference predicts contextual modulations of sensory responses. Journal of Neuroscience. 2012 Mar 21;32(12):4179-95. zhu2013visual Zhu M, Rozell CJ. Visual nonclassical receptive field effects emerge from sparse coding in a dynamical system. PLoS Computational Biology. 2013 Aug 29;9(8):e1003191. olshausen1996emergence Olshausen BA, Field DJ. Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature. 1996 Jun;381(6583):607. reid1995specificity Reid RC, Alonso JM.
Specificity of monosynaptic connections from thalamus to visual cortex. Nature. 1995 Nov;378(6554):281. rosenblatt1961principles Rosenblatt F. Principles of Neurodynamics: Perceptrons and the Theory of Brain Mechanisms. Cornell Aeronautical Laboratory, Buffalo NY; 1961. von1999and Von der Malsburg C. The what and why of binding: the modeler's perspective. Neuron. 1999 Sep 30;24(1):95-104. lee2003hierarchical Lee TS, Mumford D. Hierarchical Bayesian inference in the visual cortex. JOSA A. 2003 Jul 1;20(7):1434-48. nauhaus2009stimulus Nauhaus I, Busse L, Carandini M, Ringach DL. Stimulus contrast modulates functional connectivity in visual cortex. Nature Neuroscience. 2009 Jan 1;12(1):70-6. yan2012input Yan XH, Magnasco MO. Input-dependent wave attenuation in a critically-balanced model of cortex. PLoS ONE. 2012 Jul 25;7(7):e41419. magnasco2009self Magnasco MO, Piro O, Cecchi GA. Self-tuned critical anti-Hebbian networks. Physical Review Letters. 2009 Jun 22;102(25):258102. churchland2012 Churchland MM, Cunningham JP, Kaufman MT, Foster JD, Nuyujukian P, Ryu SI, Shenoy KV. Neural population dynamics during reaching. Nature. 2012 Jul;487(7405):51. seung1998continuous Seung HS. Continuous attractors and oculomotor control. Neural Networks. 1998 Nov 30;11(7):1253-8. seung2000stability Seung HS, Lee DD, Reis BY, Tank DW. Stability of the memory of eye position in a recurrent network of conductance-based model neurons. Neuron. 2000 Apr 30;26(1):259-71. machens2005 Machens CK, Romo R, Brody CD. Flexible control of mutual inhibition: a neural model of two-interval discrimination. Science. 2005 Feb 18;307(5712):1121-4. choe1998model Choe Y, Magnasco MO, Hudspeth AJ. A model for amplification of hair-bundle motion by cyclical binding of Ca2+ to mechanoelectrical-transduction channels. Proceedings of the National Academy of Sciences. 1998 Dec 22;95(26):15321-6. freeman2005metastability Freeman WJ, Holmes MD. Metastability, instability, and state transition in neocortex. Neural Networks. 2005 Aug 31;18(5):497-504. bienenstock1998regulated Bienenstock E, Lehmann D. Regulated criticality in the brain? Advances in Complex Systems. 1998 Dec;1(04):361-84. solovey2015loss Solovey G, Alonso LM, Yanagawa T, Fujii N, Magnasco MO, Cecchi GA, Proekt A. Loss of consciousness is associated with stabilization of cortical activity. Journal of Neuroscience. 2015 Jul 29;35(30):10866-77. carandini1994summation Carandini M, Heeger DJ. Summation and division by neurons in primate visual cortex. Science. 1994 May 27;264(5163):1333-5. gawne1996latency Gawne TJ, Kjaer TW, Richmond BJ. Latency: another potential code for feature binding in striate cortex. Journal of Neurophysiology. 1996 Aug 1;76(2):1356-60. albrecht2002visual Albrecht DG, Geisler WS, Frazor RA, Crane AM. Visual cortex neurons of monkeys and cats: temporal dynamics of the contrast response function. Journal of Neurophysiology. 2002 Aug 1;88(2):888-913. yishai1995theory Ben-Yishai R, Bar-Or RL, Sompolinsky H. Theory of orientation tuning in visual cortex. Proceedings of the National Academy of Sciences. 1995 Apr 25;92(9):3844-8. hansel1997modeling Hansel D, Sompolinsky H. Modeling feature selectivity in local cortical circuits. In: Koch C, Segev I, editors. Methods in Neuronal Modeling: From Ions to Networks. MIT Press; 1998. Chapter 13. shriki2003rate Shriki O, Hansel D, Sompolinsky H. Rate models for conductance-based cortical neuronal networks. Neural Computation. 2003 Aug;15(8):1809-41. ermentrout1998neural Ermentrout B.
Neural networks as spatio-temporal pattern-forming systems. Reports on progress in physics. 1998 Apr;61(4):353. bressloff2000dynamical Bressloff PC, Bressloff NW, Cowan JD. Dynamical mechanism for sharp orientation tuning in an integrate-and-fire model of a cortical hypercolumn. Neural computation. 2000 Nov;12(11):2473-511. bressloff2001geometric Bressloff PC, Cowan JD, Golubitsky M, Thomas PJ, Wiener MC. Geometric visual hallucinations, Euclidean symmetry and the functional architecture of striate cortex. Philosophical Transactions of the Royal Society of London B: Biological Sciences. 2001 Mar 29;356(1407):299-330. dayan2001theoretical Dayan P, Abbott LF. Theoretical neuroscience. Cambridge, MA: MIT Press; 2001. rubinr2015suprlinear Rubin DB, Van Hooser SD, Miller KD. The stabilized supralinear network: a unifying circuit motif underlying multi-input integration in sensory cortex. Neuron. 2015 Jan 21;85(2):402-17. | http://arxiv.org/abs/1703.09347v3 | {
"authors": [
"Keith Hayton",
"Dimitrios Moirogiannis",
"Marcelo Magnasco"
],
"categories": [
"q-bio.NC",
"math.DS"
],
"primary_category": "q-bio.NC",
"published": "20170327234254",
"title": "Adaptive Scales of Spatial Integration and Response Latencies in a Critically-Balanced Model of the Primary Visual Cortex"
} |
Instituto de Fisica de São Carlos, Universidade de São Paulo, CP 369, 13560-970, São Carlos, SP, [email protected] CNR-Istituto di Nanoscienze, Via Campi 213A, I-41125 Modena, [email protected] Instituto de Fisica de São Carlos, Universidade de São Paulo, CP 369, 13560-970, São Carlos, SP, Brazil. Department of Physics and York Centre for Quantum Technologies, University of York, York YO10 5DD, United [email protected] We introduce a rigorous, physically appealing, and practical way to measure distances between exchange-only correlations of interacting many-electron systems, which works regardless of their size and inhomogeneity. We show that this distance captures fundamental physical features such as the periodicity of atomic elements, and that it can be used to effectively and efficiently analyze the performance of density functional approximations. We suggest that this metric can find useful applications in high-throughput materials design. 31.15.E-, 31.15.V-, 71.15.Mb, 03.65.-w Fermionic correlations as metric distances: A useful tool for materials science Irene D'Amico December 30, 2023 ====================================================================================

§ I. INTRODUCTION

The discovery of innovative materials and the engineering of devices with targeted properties involve substantial experimental and theoretical efforts. Their progress ultimately relies on our understanding of the physics at the nanoscale. Atomistically, the possible constituents and their combinations are vast. One can often focus on the state of electrons within the Born-Oppenheimer approximation; however, a too direct computational approach is in general unpractical, because of the presence of many degrees of freedom and the fact that these are interrelated in a non-trivial fashion.

Density functional theory (DFT) proposes an alternative by transforming the problem of determining interacting many-body system properties into the solution of the Kohn-Sham (KS) equations, which only involve auxiliary non-interacting particles <cit.>. Practically, the KS approach relies on the possibility of devising approximate forms for the exchange-correlation (xc) energy – a functional of the particle density. This functional embodies the effects of many-body correlations due to the intrinsic anti-symmetry of the many-electron state and to the electrostatic electron-electron repulsions; it also accounts for the auxiliary KS system being non-interacting. Within this context, we wish to expose the usefulness of introducing metric spaces to analyze many-body correlations – when the protocol to define these spaces is both rigorous and based on quantities with a deep physical meaning. There is an increasing interest in the use of metrics to explore quantum mechanical systems <cit.>, and appropriate (“natural”) metrics for particle densities, wavefunctions, and external potentials <cit.> already shed light on (previously unknown) features of the mappings at the base of the Hohenberg-Kohn theorem, the cornerstone of DFT.

Among the ultimate goals of DFT applications is the determination of properties such as total energies, ionization potentials, electron affinities, the fundamental gaps, and lattice distances of crystalline structures. All these quantities can be computed accurately only if the relevant two-body correlations are properly captured by the underlying approximations.
The xc energy, at the core of the KS DFT approach, can be expressed in terms of the aforementioned two-body correlations by means of the xc-hole function as defined in the so-called adiabatic coupling-constant integration <cit.>. Furthermore, the xc hole can be split into a correlation (c) and an exchange (x) component. Here, we focus on an exchange-only analysis of this quantity (more details follow below), which is useful for dealing with relatively weakly correlated systems' ground states. First, we will introduce a “natural” distance for the x hole and show that it captures fundamental physical features such as the periodicity of atomic elements; afterwards we will also demonstrate that it can be used to effectively and efficiently analyze the performance of density functional approximations.

§ II. METRIC SPACE DESCRIPTION OF EXCHANGE HOLES

Let us briefly remind the reader of a few fundamental definitions <cit.>. The exchange hole (x hole) has the expression

n_x(𝐫,𝐫') = -∑_σ |γ_σ(𝐫,𝐫')|^2 / n(𝐫)

which can be evaluated once the KS one-body reduced density matrix (1BRDM)

γ_σ(𝐫,𝐫') = ∑_k f_kσ ψ_kσ(𝐫) ψ^*_kσ(𝐫')

is known. This, in turn, only requires the knowledge of the occupied single-particle orbitals ψ_kσ(𝐫). Here, f_kσ are occupation numbers and σ is the z projection of the spin index <cit.>. At the denominator of Eq. (<ref>), the particle density is determined from the trace n(𝐫) = ∑_σ γ_σ(𝐫,𝐫). Note that the calculation of the x energy, E_x, can be based on the knowledge of the system-averaged x hole, ⟨n_x⟩, as follows:

E_x = 2π ∫_0^∞ u du ⟨n_x⟩(u)

where

⟨n_x⟩(u) := ∫ d𝐫 n(𝐫) n_x(𝐫,u),

with

n_x(𝐫,u) := 1/4π ∫ dΩ_𝐮 n_x(𝐫,𝐫+𝐮)

being the spherical average of the x hole, and Ω_𝐮 being the solid angle defined by 𝐮 around 𝐫. Therefore, practical calculations in DFT can be enabled by providing approximations for ⟨n_x⟩(u). Sensible approximations must satisfy important exact conditions. In this respect, it is well known that the property

∫_0^∞ 4π u^2 du ⟨n_x⟩(u) = -N

together with the pointwise negativity condition are of utmost importance. These two properties can be combined, giving rise to the constraint

∫_0^∞ 4π u^2 du |⟨n_x⟩(u)| = N.

Crucially, through Eq. (<ref>) and by following the protocol for deriving natural metrics of Ref. Sharp:2014, these same conditions allow us to define the natural distance between two given system-averaged x-hole functions,

D_x[⟨n_x^(1)⟩, ⟨n_x^(2)⟩] := 4π ∫_0^∞ u^2 du |⟨n_x^(1)⟩(u) - ⟨n_x^(2)⟩(u)|.

Equation (<ref>) is the key result of the present work. We emphasize that the same exact conditions that are essential to explain the surprisingly good performance of even very rough DFT approximations allow us to introduce a rigorous metric: we then expect this metric to capture the essential physics of exchange-only correlations.

Equation (<ref>) summarizes the difference between the exchange-only correlations of two many-body systems into a single number. While differences of exchange energies could also be thought of as “single numbers” to estimate the difference between the exchange in two systems, Eq. (<ref>) not only rigorously satisfies the mathematical properties of a distance <cit.> but also enables a comparative analysis of the systems that is far more detailed than the claim that they have the same exchange energy – the examples illustrated below will provide a vivid illustration of this point. By the metrics' axioms, D_x = 0 if and only if the two systems considered have the same system-averaged x hole (modulo irrelevant differences over sets of vanishing measure).
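In practice, D_x is a one-dimensional integral over the radial grid on which the system-averaged holes are tabulated; a minimal sketch (the trapezoidal rule and grid handling are illustrative choices):

```python
import numpy as np

def dx_distance(u, nx1, nx2):
    """D_x = 4 pi * integral of u^2 |<n_x^(1)>(u) - <n_x^(2)>(u)| du,
    with <n_x^(i)> sampled on a common radial grid u."""
    y = 4 * np.pi * u**2 * np.abs(nx1 - nx2)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(u)))

def sum_rule(u, nx):
    """Normalisation 4 pi * int u^2 |<n_x>| du, which must equal the
    particle number N for a sensible (exact or approximate) x hole."""
    y = 4 * np.pi * u**2 * np.abs(nx)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(u)))
```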
For non-vanishing distances, Eq. (<ref>) implies a well-defined maximum, given by the sum of the two systems' particle numbers. This can be evinced from Eqs. (<ref>) and (<ref>) by considering two systems of particle numbers N_1 and N_2 for which the system-averaged x holes do not overlap: in this case D_x = N_1 + N_2. Because the system-averaged x holes have a definite sign, this also corresponds to the maximum distance between the two systems. This property implies that the x-hole distance between two systems gives us a non-arbitrary “absolute” measure of their closeness, as their distance can be recast in terms of a percentage of their maximum possible distance.

Furthermore, Eq. (<ref>) implies a very effective geometrical structure of the physical Fock space. Consider the application of Eq. (<ref>) to compute the distance between the exact system-averaged x holes of two different systems. This distance represents a measure of the difference of the exchange-only correlations between the two systems. A system with no particles may be thought of as a point, say, at the center of the Fock space. Because of Eq. (<ref>), all the other systems will be distributed at a fixed distance equal to the number of particles in the systems. Thus, the overall Fock space can be thought of as the union of disjoint “onionlike” shells: systems with the same number of particles are on the same shell; systems whose external potentials differ only by a constant are separated by a vanishing distance (i.e., they occupy the same point) as the orbitals and therefore the 1BRDM and corresponding particle densities do not change. Exchange holes and therefore their distances are unchanged if each single-particle orbital is multiplied by the same constant phase. This embodies the fact that both the Schrödinger equation and the DFT framework are invariant under global gauge transformations <cit.>. Systems will be on different shells if they have different particle numbers: the distances acquire minimum value (i.e., the absolute value of the difference of the shell radii) if the systems “face each other,” and they acquire maximum value (i.e., the sum of the shell radii) if the systems are “on opposite poles” <cit.>. Of course, the configurations which generate maximum and – for systems on different shells – minimum distances are not unique.

Finally, let us consider the evaluation of Eq. (<ref>) using some approximate ⟨n_x⟩. Since Eq. (<ref>) must be fulfilled, proper approximations preserve the mentioned onionlike structure of the Fock space. Also the minimum and maximum distances are unchanged, but the configurations at which these occur may vary from the exact case. The errors due to the approximation may be viewed as fictitious displacements of the systems from their exact locations on the aforementioned shells. Having the possibility to quantify these errors through a rigorously defined distance that can also be visualized is, per se, very appealing. In the rest of this paper, we will give explicit examples of how powerful this approach can be.

§ III. NUMERICAL RESULTS

We start by considering a set of systems for which the exact x holes can be calculated: we will discuss the exact results as well as compare and contrast these with corresponding results from DFT approximations. Here we shall consider popular approximations for ⟨n_x⟩: the local-density approximation (LDA), the generalized gradient approximation (GGA), and the meta-GGA (MGGA).
The LDA takes as a reference the xc energy densities of the homogeneous electron gas; GGA and MGGA are nonempirical refinements which aim at capturing the effects of system inhomogeneities – those neglected within the LDA – while progressively satisfying a larger set of exact conditions. LDA forms make use only of the particle density n(𝐫) as input; GGAs also use the reduced dimensionless gradient, s(𝐫) = |∇n(𝐫)| / {2 [3π^2]^1/3 n(𝐫)^4/3}; n(𝐫) and s(𝐫), the kinetic-energy density τ = ∑_kσ f_kσ |∇ψ_kσ(𝐫)|^2, and, possibly, the Laplacian of the particle density may be exploited in MGGAs. MGGA forms are then considered to be the most accurate approximations among these three. As representative approximations for ⟨n_x⟩, we choose the versions of the Perdew-Wang LDA and of the Perdew-Burke-Ernzerhof GGA by Ernzerhof and Perdew <cit.> and the version of the Tao-Perdew-Staroverov-Scuseria MGGA by Constantin et al. <cit.>.

Figure <ref> shows the distances of the exact ⟨n_x⟩ (solid line) from a reference system chosen (arbitrarily) at Z^ref = 50 for the isoelectronic heliumlike sequence <cit.>. Distances from the reference system increase monotonically for both increasing and decreasing values of Z. As the distance increases, the spatial overlap of the related system-averaged x holes decreases. The system-averaged x holes ⟨n_x⟩(u) describe the system-averaged electron depletion observed at separation u from a reference electron due to the effect of electron-electron exchange, so an increasing distance D_x implies systems with an increasingly different spatial exchange pattern. When there is no overlap between these patterns, their distance saturates at its maximum, which is D_x^max = 4 for the set of systems of Fig. <ref>.

Next we check how the trend for the exact exchange of heliumlike ions is reproduced by the approximations (dotted, dashed, and dash-dotted lines, as labeled in Fig. <ref>). While the qualitative general trend is mostly reproduced, we note that, quantitatively, the fewer exact conditions an approximation satisfies, the higher the inaccuracy, which in fact increases as we move from MGGA to GGA to LDA. In particular, LDA becomes unable to reproduce, even qualitatively, the saturation to maximum distance, despite considering nuclear charges as large as Z = 2000.

Distances can also be used to perform “point-by-point” exact-to-approximated comparisons, by directly computing the distance between exact and approximated exchange for each system. Figure <ref> shows the distances of approximated ⟨n_x⟩ from the corresponding exact quantity for each ion in the isoelectronic heliumlike sequence. As the electrons get strongly confined around the nucleus, the effect of the electron-electron interaction becomes negligible with respect to an external potential which increases linearly with Z. In this way, the noninteracting limit of an infinitely charged ion is approached. Interestingly, errors with respect to the exact results quickly saturate at a finite constant. For LDA and GGA, these errors may be mainly related to spurious self-interactions. Notably, although the considered MGGA gives rather accurate x energies for two-electron systems, it is obvious that a sizable error still persists at the level of ⟨n_x⟩. Importantly, the use of natural metrics allows us to quantify what we mean by “sizable,” by expressing the error as a percentage of the maximum distance. In the case at hand, then, a 10% error threshold would correspond to D_x = 0.4 (dashed black line).
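Because each ion in this sequence is a two-electron singlet with a doubly occupied KS orbital, the exact x hole reduces to n_x(𝐫,𝐫') = -n(𝐫')/2, so the exact D_x along the sequence is easy to evaluate by quadrature. In the sketch below, a hydrogenic 1s density is an illustrative stand-in for the exact heliumlike density; only the density, not the -n/2 relation, is approximated.

```python
import numpy as np

def averaged_x_hole(Z, u, r_max=30.0, nr=1500, nmu=201):
    """System-averaged x hole <n_x>(u) for a two-electron singlet with a
    hydrogenic density n(r) = (2 Z^3 / pi) exp(-2 Z r). Since
    n_x(r, r') = -n(r')/2 for a doubly occupied orbital,
    <n_x>(u) = -1/2 int d^3r n(r) [spherical average of n at distance u]."""
    r = np.linspace(1e-6, r_max, nr)
    mu = np.linspace(-1.0, 1.0, nmu)            # cos(theta) grid
    n_r = 2 * Z**3 / np.pi * np.exp(-2 * Z * r)
    out = np.empty_like(u)
    for j, uu in enumerate(u):
        d = np.sqrt(r[:, None]**2 + uu**2 + 2.0 * uu * r[:, None] * mu)
        n_sph = 2 * Z**3 / np.pi * np.exp(-2 * Z * d)
        avg = 0.5 * np.sum(0.5 * (n_sph[:, 1:] + n_sph[:, :-1])
                           * np.diff(mu), axis=1)
        integrand = 4 * np.pi * r**2 * n_r * (-0.5) * avg
        out[j] = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r))
    return out

u = np.linspace(1e-4, 20.0, 200)
h1, h2 = averaged_x_hole(2.0, u), averaged_x_hole(8.0, u)
# sum_rule(u, h1) is close to N = 2, and dx_distance(u, h1, h2) (cf. the
# sketch above) approaches the maximum value 4 as the holes cease to overlap.
```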
We can then assert that for the heliumlike ion series, both GGA and MGGA always provide results which are closer than 10% to the exact ones (about 7.8% for GGA and between 4.0% and 3.0% for MGGA), while LDA estimates, at about 24.0% of D_x^max, are always well above the chosen error threshold. Consistent with the general expectation, both in Fig. <ref> and Fig. <ref>, the GGA performs in between the MGGA and LDA; however, our method and results show in an immediate and appealing visual way how substantial an improvement is obtained in going from an LDA to a GGA. The improvement of the MGGA over the GGA is not as large as from LDA to GGA, but still significant.

For DFT practitioners, it is important to clarify under which circumstances numerically “cheaper” approximations could be used in place of more accurate but computationally more involved approaches. Toward this goal, in the rest of this paper, we show how the metric for the x hole can be used to efficiently compare the performance of different DFT approximations on large sets of systems. In the process, we will also show how D_x can be used to capture and compare physical trends within a large set of systems.

First we focus on physical trends within a set of systems, and so we consider distances between x holes of different systems calculated using the same approximation. Figure <ref> shows distances between neutral atoms with atomic numbers Z and Z-1. Moving along the rows of the periodic table, the periodicity is well reflected in the behaviors of D_x for MGGA (solid line), the most accurate approximation considered here. For example, the curves characteristically peak when considering the distance between the x holes of the last atom of one row and the first of the next (as labeled in Fig. <ref>). This behavior follows from the sharp change of the corresponding atomic sizes. The MGGA curves also display characteristic minima at every start of double occupancy in spin of the p shells: as the fourth p electron is introduced, the atomic radius does not change significantly. This implies that the x-hole distance from the previous atom sharply decreases. Significant deviations are observed for LDA results for atoms in the first two rows. We explain this by noting that self-interaction errors become larger in small systems, and electrons of light elements tend to behave rather differently from the electrons in a homogeneous gas. GGA improves over this by accounting better for density inhomogeneity, but it is still quite poor for the smaller Z values. For larger values of Z, the trends of LDA and GGA look qualitatively more similar to MGGA results, although, as Z increases, maximum and minimum features related to the filling of the p shells get displaced with respect to the MGGA positions.

Next, we wish to show how distances can lead to a direct comparison between different approximations: here distances are calculated between different approximations applied to the same system, e.g., the same atom. In Fig. <ref>, we report these distances for the noble gases. The first thing to notice is that the distances among the various approximations decrease substantially with increasing Z. This is related to the fact that in all the considered approximations, the leading contribution to the semiclassical expansion of the exchange energies is provided through LDA <cit.>. The remaining differences can be attributed to higher-order contributions, more related to system inhomogeneities.
Consistently, thus, the GGA and MGGA results are closer to each other than to the LDA. We can now define an error threshold toestablish the parameter region for which LDA and GGA would be a good-enough cheaper substitute for MGGA. As our best results are already approximated, we consider in this case a threshold of 5% of the maximum possible distance, which corresponds in this case to D_x/Z <0.1 (black dashed line in Fig. <ref>). It is immediate to see then that while LDA would be appropriate only for the heaviest three, GGA would be a good choice for all noble gases except helium.§ IV. SUMMARY AND CONCLUSIONS In summary, we have presented a way to rigorously and quantitatively compare exchange-only correlations of different systems. We havegiven evidence that by the use of a “natural” metrics, it is possible to effectively and efficiently characterize exchange-only correlations in many-electron systems. Our metric based on the exchange hole could have important practical applications in evaluating DFT approximations. For example, our results suggest that among the available approximations for the system-averaged exchange-hole, the meta-GGA performs best and could be used in evaluating distances for systems widely different in size and level of inhomogeneity. Our x-hole metric could also help guiding high-throughput materials design <cit.>,e.g., for searching in large configurational spaces or for validating the reproducibility of a collaborative database of electronic calculations, independently from the different methodology, quantum package, or hardware used <cit.>. Natural metrics such as this or the one for the particle density <cit.> might also be used to ensure that newly developed functionals optimize, together with the total energies, other key physical quantities, helping revert the trend recently described in <cit.>.§ ACKNOWLEDGMENTSWe thank Professor Luiz Nunes de Oliveira for fruitful discussions. I.D. acknowledges support by the Royal Society through the Newton Advanced Fellowship scheme (Grant No. NA140436). I.D. and S.M. were supported by the Conselho Nacional de Desenvolvimento Científico e Tecnológico (Grant No. 401414/2014-0) and S.P. was supported by the European Community through the FP7’s Marie-Curie International Incoming Fellowship, Grant agreement No. 623413.apsrev30 natexlab#1#1 bibnamefont#1#1 bibfnamefont#1#1 citenamefont#1#1 url<#>1 urlprefixURL [Kohn(1999)]Kohn:1999authorW. Kohn, journalRev. Mod. Phys.volume71, pages1253 (year1999).[Capelle(2006)]Capelle:2006authorK. Capelle, journalBraz. J. Phys.volume36, pages1318(year2006).[Koch and Holthausen(2001)]KochHolhausen:2001authorW. KochandauthorM. C.Holthausen, titleA Chemists Guide to Density Functional Theory (publisherWiley-VCH Verlag, Weinheim, year2001).[D'Amico et al.(2011)D'Amico, Coe, França, and Capelle]DAmico:2011authorI. D'Amico, authorJ. P.Coe, authorV. V.França, andauthorK. Capelle, journalPhys. Rev. Lett. volume106, pages050401 (year2011).[Sharp and D'Amico(2014)]Sharp:2014authorP. M.SharpandauthorI. D'Amico, journalPhys. Rev. Bvolume89, pages115137 (year2014).[Sharp and D'Amico(2015)]Sharp:2015authorP. M.SharpandauthorI. D'Amico, journalPhys. Rev. Avolume92, pages032509 (year2015).[Sharp and D'Amico(2016)]Sharp:2016authorP. M.SharpandauthorI. D'Amico, journalPhys. Rev. Avolume94, pages062509 (year2016).[Pires et al.(2016)Pires, Cianciaruso, Céleri, Adesso, and Soares-Pinto]Adesso:2016authorD. P.Pires, authorM. Cianciaruso, authorL. C.Céleri, authorG. Adesso, andauthorD. O.Soares-Pinto, journalPhys. 
[Pachos:2016] C. J. Turner, K. Meichanetzidis, Z. Papic, and J. K. Pachos, Nat. Commun. 8, 14926 (2017).
[Funo:2017] K. Funo, J.-N. Zhang, C. Chatou, K. Kim, M. Ueda, and A. del Campo, Phys. Rev. Lett. 118, 100602 (2017).
[note1] For the sake of simplicity, we restrict ourselves to the cases for which the Kohn-Sham state is a single Slater determinant.
[note2] In the standard formulation of Kohn-Sham DFT, the occupation numbers f_kσ take integer values (0 or 1). Fractional values are also admitted in the sense of its ensemble generalization. In this work, we restrict to closed-shell systems or to situations with globally collinear spin polarizations. In the latter case, one must take into consideration that the x hole acquires a dependence on the spin polarization. The spin-dependent x hole can be expressed in terms of the spin-unpolarized x hole by means of the spin-scaling relation <cit.> as follows: n_x[n_↑,n_↓](r,r') = ∑_σ [n_σ(r)/n(r)] n_x[2n_σ](r,r').
[distance_properties] W. A. Sutherland, Introduction to Metric and Topological Spaces (Oxford University Press, Harvard, 2009).
[note3] For discussing invariance under more general gauge transformations, one should admit additional couplings to proper external gauge fields and adopt the corresponding extensions of the DFT frameworks as done, for example, in current-DFT and spin-current-DFT (CDFT) <cit.>.
[note4] The onion-shell geometry characterizes all "natural" metrics, as a consequence of the protocol defined to derive them. A detailed discussion, including a discussion of the polar angle characterizing the distance between two systems, can be found in Ref. [Sharp:2014].
[Ernzerhof:1998] M. Ernzerhof and J. P. Perdew, J. Chem. Phys. 109, 3313 (1998).
[Constantin:2006] L. A. Constantin, J. P. Perdew, and J. Tao, Phys. Rev. B 73, 205104 (2006).
[note5] The isoelectronic heliumlike sequence was solved exactly combining the approaches taken by Accad et al. <cit.> and Coe et al. <cit.>.
[APE] M. J. T. Oliveira and F. Nogueira, Comput. Phys. Commun. 178, 524 (2008).
[Dierckx:1993] P. Dierckx, Curve and Surface Fitting with Splines (Oxford University Press, New York, 1993).
[Elliot09] P. Elliott and K. Burke, Can. J. Chem. 87, 1485 (2009).
[Curtarolo:2013] S. Curtarolo, G. L. W. Hart, M. B. Nardelli, N. Mingo, S. Sanvito, and O. Levy, Nat. Mater. 12, 191 (2013).
[Calderona:2015] C. E. Calderon, J. J. Plata, C. Toher, C. Oses, O. Levy, M. Fornari, A. Natan, M. J. Mehl, G. Hart, M. B. Nardelli, et al., Comput. Mater. Sci. 108, Part A, 233 (2015).
[Medvedev:2017] M. G. Medvedev, I. S. Bushmarinov, J. Sun, J. P. Perdew, and K. A. Lyssenko, Science 355, 49 (2017).
[SpinScaling] J. P. Perdew, K. Burke, and Y. Wang, Phys. Rev. B 54, 16533 (1996).
[CDFT] G. Vignale and M. Rasolt, Phys. Rev. Lett. 59, 2360 (1987).
[SC1] G. Vignale and M. Rasolt, Phys. Rev. B 37, 10685 (1988).
[SC2] K. Bencheikh, J. Phys. A: Math. Gen. 36, 11929 (2003).
[Accad:1971] Y. Accad, C. L. Pekeris, and B. Schiff, Phys. Rev. A 4, 516 (1971).
[Coe:2009] J. P. Coe, K. Capelle, and I. D'Amico, Phys. Rev. A 79, 032504 (2009). | http://arxiv.org/abs/1703.08709v3 | {
"authors": [
"Simone Marocchi",
"Stefano Pittalis",
"Irene D'Amico"
],
"categories": [
"cond-mat.mtrl-sci",
"cond-mat.str-el",
"quant-ph",
"81Qxx"
],
"primary_category": "cond-mat.mtrl-sci",
"published": "20170325163526",
"title": "Fermionic correlations as metric distances: a useful tool for materials science"
} |
http://arxiv.org/abs/1703.09231v1 | {
"authors": [
"Marton Kanasz-Nagy",
"Izabella Lovas",
"Fabian Grusdt",
"Daniel Greif",
"Markus Greiner",
"Eugene A. Demler"
],
"categories": [
"cond-mat.quant-gas",
"physics.comp-ph",
"quant-ph"
],
"primary_category": "cond-mat.quant-gas",
"published": "20170327180004",
"title": "Quantum correlations at infinite temperature: the dynamical Nagaoka effect"
} |
|
Muhammad R A Khandaker ([email protected]) and Kai-Kit Wong ([email protected]), Department of Electronic and Electrical Engineering, University College London, Torrington Place, London WC1E 7JE, UK

This paper considers multiple-input multiple-output (MIMO) relay communication in multi-cellular (interference) systems in which MIMO source-destination pairs communicate simultaneously. It is assumed that due to severe attenuation and/or shadowing effects, communication links can be established only with the aid of a relay node. The aim is to minimize the maximal mean-square-error (MSE) among all the receiving nodes under constrained source and relay transmit powers. Both one- and two-way amplify-and-forward (AF) relaying mechanisms are considered. Since the exact optimal solution of this practically appealing problem is intractable, we first propose optimizing the source, relay, and receiver matrices in an alternating fashion. Then we contrive a simplified semidefinite programming (SDP) solution based on the error covariance matrix decomposition technique, avoiding the high complexity of the iterative process. Numerical results reveal the effectiveness of the proposed schemes.

Keywords: interference; MIMO; two-way relay; optimization

§ INTRODUCTION

Due to the scarcity of frequency spectrum in practical wireless networks, multiple communicating pairs are motivated to share a common time-frequency channel to ensure efficient use of the available spectrum. Co-channel interference (CCI) is, however, one of the main deteriorating factors in such networks that adversely affects the system performance. The impact is more obvious in 5G heterogeneous networks, where there is an enormous volume of interference due to hyper-dense frequency reuse among small-cell and macro-cell base stations. Therefore it is important to develop schemes to mitigate the CCI, which has been a major research direction in wireless communications over the past decades.

In the literature, various schemes have been proposed to control CCI at an acceptable level. A conventional approach in MIMO systems is to exploit spatial diversity for suppressing CCI <cit.>. Such spatial diversity techniques have been used to solve many power control problems in interference systems for different network setups. In <cit.>, a power control scheme has been designed with receive diversity only, whereas joint transmit-receive beamforming has been considered in <cit.> for interference systems. However, the incorporation of spatial diversity at the transmitter side in <cit.> results in lower total transmit power compared to that in <cit.>.

On the other hand, there is synergy between multiple-antenna and relaying technologies. The latter is particularly useful to reestablish communications in case of a broken channel between source and destination. Hence relaying has been considered in interference networks in order to afford longer source-destination distances <cit.>. Both <cit.> considered network beamforming for minimizing the total relay transmit power, whereas in <cit.> an iterative transceiver optimization scheme has been proposed to minimize the total source and relay transmit power.

While the works in <cit.> all considered minimizing the total transmit power of interference networks, another important performance metric, which is more concerned with the quality of communications, is the mean-square-error (MSE) of signal estimation <cit.>.
In <cit.>, the sum minimum-MSE (MMSE) criterion was considered to design iterative algorithms for MIMO interference relay systems taking the direct links between the source and destination nodes into consideration, and in <cit.> a similar problem has been considered ignoring the direct links between the communicating parties. Nonetheless, the sum-MMSE criterion runs the risk that some of the receivers may suffer from unacceptably high MSEs. Also, the works in <cit.> considered one-way relaying only.

Due to the increasing demands of multimedia applications, and in particular of emerging wireless communication paradigms such as Big Data, ultra-high spectral efficiency is essential in future wireless networks, including 5G, to provide the ADSL-like user experience aspired to by 2020. The above-mentioned one-way relay systems suffer from a substantial performance loss in terms of spectral efficiency due to the pre-log factor of 1/2 caused by the fact that two channel uses are required for each end-to-end transmission.

Two-way relay systems have hence been proposed to overcome the loss of spectral efficiency of such one-way relay methods <cit.>. Utilizing the concept of analog network coding <cit.>, communication in a two-way relay channel can be accomplished in two phases: the multiple access (MAC) phase and the broadcast (BC) phase. During the MAC phase, all the users simultaneously send their messages to an intermediate relay node, whereas in the BC phase, the relay retransmits the received information to the users. As each user knows its own transmitted signals, it can cancel the self-interference and decode the intended message. The capacity region of multi-pair two-way relay networks in the deterministic channel was characterized in <cit.>. Later, in <cit.>, the achievable total degrees of freedom in a two-way interference MIMO relay channel were also studied. Most recently, in <cit.>, the transceivers in a full-duplex MIMO interference system were optimized based on the weighted sum-rate maximization criterion.

In this paper, we consider a K-user MIMO interference system where each of the pairs can communicate only with the aid of a relay node, thus ignoring the direct source-destination links. The direct links are understood to be in deep shadowing and hence negligible. Both one- and two-way amplify-and-forward (AF) relaying mechanisms are considered. All nodes are assumed to be equipped with multiple antennas so as to afford simultaneous transmission of multiple data streams. Our aim is to develop joint transceiver optimization algorithms for minimizing the worst-user MSE (min-max MSE)[The min-max MSE criterion is considered by many to be more desirable than the min-sum MSE criterion in <cit.> because fairness is imposed and weaker users are not being sacrificed for the minimization of the sum.] subject to the source and relay power constraints. It can be verified that the problem is strictly non-convex, and thus it is difficult to find an analytical solution. To tackle this, we first devise an algorithm to optimize the source, relay, and receiver matrices alternatingly by decomposing the original non-convex problem into convex subproblems. To avoid the complexity of the iterative process, we then extend the error covariance matrix decomposition technique applied to point-to-point MIMO relay systems in <cit.> to interference MIMO relay systems in this paper.
More specifically, under a practically reasonable high first-hop signal-to-noise ratio (SNR) assumption, we demonstrate that the problem can be decomposed into two standard semidefinite programming (SDP) problems to optimize the source and relay matrices separately. Note that a high SNR assumption has also been made in <cit.> to simplify the joint codebook design problem in single-user MIMO relay systems and in <cit.> for multicasting MIMO relay design. Hence our work is a generalization to the multi-pair communication scheme, taking co-channel interference into account.

The remainder of this paper is organized as follows. In Section <ref>, the interference MIMO relay system model is introduced. The joint optimal transmitter, relay, and receiver beamforming optimization schemes are developed in Section <ref> and Section <ref>, respectively, for one-way and two-way relaying. Section <ref> provides simulation results to analyze the performance of the proposed algorithms in various system configurations before concluding remarks are made in Section <ref>.

§ SYSTEM MODEL

Let us consider a communication scenario, as illustrated in Fig. <ref>, where each of the K source nodes communicates with the corresponding destination node sharing the same frequency channel via a common relay node. The direct link between each transmitter-receiver pair is assumed to be broken due to strong attenuation and/or shadowing effects. The kth source, the relay, and the kth destination nodes are assumed to be equipped with N_s,k, N_r, and N_d,k antennas, respectively.

§ ONE-WAY RELAYING

In this section, we consider that communication takes place in one direction only. The relay node is assumed to work in half-duplex mode, which implies that the actual communication between the source and destination nodes is accomplished in two time slots. In the first time slot, the source nodes transmit the linearly precoded signal vectors B_k s_k, k = 1, ⋯, K, to the relay node. The received signal vector at the relay node is therefore given by

y_r = ∑_k=1^K H_k B_k s_k + n_r,

where H_k denotes the N_r × N_s,k Gaussian channel matrix between the kth source node and the intermediate relay node, s_k is the N_b,k × 1 (1 ≤ N_b,k ≤ N_s,k) transmit symbol vector with covariance I_N_b,k, B_k is the N_s,k × N_b,k source precoding matrix, and n_r is the N_r × 1 additive white Gaussian noise (AWGN) vector introduced at the relay node. Let us denote N_b = ∑_k=1^K N_b,k as the total number of data streams transmitted by all the source nodes. In order to successfully transmit N_b independent data streams simultaneously through the relay, the relay node must be equipped with N_r ≥ N_b antennas.

After receiving y_r, the relay node simply multiplies the signal vector by an N_r × N_r precoding matrix F and transmits the amplified version of y_r in the second time slot. Thus the relay's N_r × 1 transmit signal vector x_r is given by

x_r = F y_r.

Accordingly, the signal received at the kth destination node can be expressed as

y_d,k = G_k x_r + n_d,k = G_k F H_k B_k s_k (desired signal) + G_k F ∑_j=1, j≠k^K H_j B_j s_j (interference signal) + G_k F n_r + n_d,k (noise) = H̅_k s_k + n̅_d,k, k = 1, …, K,

where G_k denotes the N_d,k × N_r complex channel matrix between the relay node and the kth destination node, n_d,k is the N_d,k × 1 AWGN vector introduced at the kth destination node, H̅_k ≜ G_k F H_k B_k is the equivalent source-destination channel matrix, and n̅_d,k ≜ G_k F (∑_j=1, j≠k^K H_j B_j s_j + n_r) + n_d,k is the equivalent noise vector. All noises are assumed to be independent and identically distributed (i.i.d.) complex Gaussian random variables with mean zero and variance σ_n^2, where n ∈ {r, d} indicates the noise introduced at the relay or at the destination.
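To make the two-hop signal model above concrete, the following NumPy sketch simulates a single realization of the relay input y_r, the relay output x_r, and the destination signals y_d,k with trivial (identity) precoders. All dimensions, seeds, and variable names are our own illustrative choices, and the source/relay power constraints are deliberately not enforced in this toy example.

```python
import numpy as np

rng = np.random.default_rng(0)
K, Ns, Nb, Nr, Nd = 3, 2, 2, 6, 3          # illustrative sizes, Nr >= K*Nb
sig_r2, sig_d2 = 0.1, 0.1                   # relay / destination noise variances

# Complex Gaussian channels: H_k (Nr x Ns), G_k (Nd x Nr)
H = [rng.normal(size=(Nr, Ns)) + 1j * rng.normal(size=(Nr, Ns)) for _ in range(K)]
G = [rng.normal(size=(Nd, Nr)) + 1j * rng.normal(size=(Nd, Nr)) for _ in range(K)]
B = [np.eye(Ns, Nb) for _ in range(K)]      # trivial source precoders B_k
F = np.eye(Nr)                              # trivial relay matrix F

s = [rng.normal(size=Nb) + 1j * rng.normal(size=Nb) for _ in range(K)]
n_r = np.sqrt(sig_r2 / 2) * (rng.normal(size=Nr) + 1j * rng.normal(size=Nr))

# First hop: y_r = sum_k H_k B_k s_k + n_r; second hop: x_r = F y_r
y_r = sum(H[k] @ B[k] @ s[k] for k in range(K)) + n_r
x_r = F @ y_r
y_d = [G[k] @ x_r
       + np.sqrt(sig_d2 / 2) * (rng.normal(size=Nd) + 1j * rng.normal(size=Nd))
       for k in range(K)]
```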
Note that the interference term in (<ref>) does not appear in the received signal of the single-user MIMO relay system considered in <cit.> or in the multicasting MIMO relay system considered in <cit.>. Hence the subsequent analyses remain considerably simpler in <cit.>, whereas we need to deal with this troublesome interference term in this paper.

Considering the input-output relationship at the relay node given in (<ref>), the average transmit power consumed by the MIMO relay node is defined as

tr(E{x_r x_r^H}) = tr(F Ψ F^H),

where tr(·) denotes the trace of a matrix, E{·} indicates statistical expectation, and Ψ ≜ E{y_r y_r^H} = ∑_k=1^K H_k B_k B_k^H H_k^H + σ_r^2 I_N_r represents the covariance matrix of the signal vector received at the relay node.

For signal detection, linear receivers are used at the destination nodes for simplicity. Denoting W_k as the N_d,k × N_b,k receiver matrix used by the kth destination node, the corresponding estimated signal vector ŝ_k can be written as

ŝ_k = W_k^H y_d,k, k = 1, …, K,

where (·)^H indicates the conjugate transpose (Hermitian) of a matrix (vector). Thus the MSE of signal estimation at the kth receiver can be expressed as

E_k = tr(E_k), where E_k ≜ E[(ŝ_k - s_k)(ŝ_k - s_k)^H] denotes the error covariance matrix at the kth receiver, so that

E_k = tr( I_N_b,k - W_k^H G_k F H_k B_k - B_k^H H_k^H F^H G_k^H W_k + ∑_j=1^K W_k^H G_k F H_j B_j B_j^H H_j^H F^H G_k^H W_k + σ_r^2 W_k^H G_k F F^H G_k^H W_k + σ_d^2 W_k^H W_k )
= tr((W_k^H H̅_k - I_N_b,k)(W_k^H H̅_k - I_N_b,k)^H + W_k^H C̅_k W_k), k = 1, …, K,

where

C̅_k ≜ ∑_j=1, j≠k^K G_k F H_j B_j B_j^H H_j^H F^H G_k^H + σ_r^2 G_k F F^H G_k^H + σ_d^2 I_N_d,k

is the combined interference-plus-noise covariance matrix.

In the following subsections, we develop optimization approaches that minimize the worst-user MSE among all the receivers subject to source and relay power constraints.

§.§ Problem Formulation

In this section, we formulate the joint source and relay precoding optimization problem for MIMO interference systems. Our aim is to minimize the maximal MSE among all the source-destination pairs while satisfying the transmit power constraints at the source as well as the relay nodes. To fulfill this aim, the following joint optimization problem is formulated:

min_{B_k}, F, {W_k} max_k E_k
s.t. tr(F Ψ F^H) ≤ P_r,
tr(B_k B_k^H) ≤ P_s,k, k = 1, …, K,

where (<ref>) and (<ref>), respectively, constrain the transmit power at the relay node and the kth transmitter to P_r > 0 and P_s,k > 0. Our next endeavour is to develop optimal solutions for this problem. Note that the problem is strictly non-convex with matrix variables appearing in quadratic form, and hence any closed-form solution is intractable. Therefore, we first resort to developing an iterative algorithm for the problem and then propose a sub-optimal solution which has lower computational complexity.

§.§ Iterative Joint Transceiver Optimization

In this subsection, we investigate the non-convex source, relay, and destination filter design problem in an alternating fashion, optimizing one group of variables while fixing the others. Given the source and relay matrices {B_k}, F, the optimal receiver matrices {W_k} are obtained by solving the unconstrained optimization problem min_W_k E_k, since E_k does not depend on W_j for j ≠ k, and W_k does not appear in constraints (<ref>) and (<ref>). Using the matrix derivative formulas, the gradient ∇_W_k^H(tr(E_k)) can be written as

∇_W_k^H(tr(E_k)) = -G_k F H_k B_k + ∑_j=1^K G_k F H_j B_j B_j^H H_j^H F^H G_k^H W_k + σ_r^2 G_k F F^H G_k^H W_k + σ_d^2 W_k, k = 1, …, K.

Equating ∇_W_k^H(tr(E_k)) = 0 yields the linear MMSE receive filter given by

W_k = (∑_j=1^K G_k F H_j B_j B_j^H H_j^H F^H G_k^H + σ_r^2 G_k F F^H G_k^H + σ_d^2 I_N_d,k)^-1 G_k F H_k B_k,

where (·)^-1 indicates matrix inversion.
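As a sanity check on the closed-form update above, both the MMSE filters and the resulting per-user MSE of Eq. (<ref>) can be evaluated numerically. A minimal sketch reusing the variables of the previous listing (function names are ours):

```python
import numpy as np

def mmse_receivers(B, F, H, G, sig_r2, sig_d2):
    """Closed-form MMSE filters W_k for fixed {B_k} and F."""
    W = []
    for k in range(len(H)):
        GF = G[k] @ F
        R = sig_d2 * np.eye(G[k].shape[0]) + sig_r2 * (GF @ GF.conj().T)
        for j in range(len(H)):                 # sum over ALL users j
            A = GF @ H[j] @ B[j]
            R = R + A @ A.conj().T
        W.append(np.linalg.solve(R, GF @ H[k] @ B[k]))
    return W

def mse_user(k, W, B, F, H, G, sig_r2, sig_d2):
    """Per-user MSE: E_k = tr((W^H Hbar - I)(.)^H + W^H Cbar W)."""
    GF = G[k] @ F
    Hbar = GF @ H[k] @ B[k]                     # equivalent channel
    Cbar = sig_d2 * np.eye(G[k].shape[0]) + sig_r2 * (GF @ GF.conj().T)
    for j in range(len(H)):
        if j != k:                              # interference terms only
            A = GF @ H[j] @ B[j]
            Cbar = Cbar + A @ A.conj().T
    Err = W[k].conj().T @ Hbar - np.eye(B[k].shape[1])
    return np.real(np.trace(Err @ Err.conj().T
                            + W[k].conj().T @ Cbar @ W[k]))
```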
Then, for given source and receiver matrices {B_k} and {W_k}, the relay precoding matrix F optimization problem can be formulated as

min_F max_k E_k
s.t. tr(F Ψ F^H) ≤ P_r.

Note that (<ref>) is non-convex with a matrix variable since F appears in quadratic form in the objective function as well as in the constraint. However, we can reformulate this problem as an SDP using the Schur complement <cit.> as follows. By introducing a matrix Ξ_k, we conclude from the second equation in (<ref>) that the kth link MSE will be upper-bounded if

-W_k^H G_k F H_k B_k - B_k^H H_k^H F^H G_k^H W_k + W_k^H G_k F Ψ F^H G_k^H W_k ≼ Ξ_k.

In the above inequality, A ≼ B indicates that the matrix B - A is positive semidefinite (PSD). Now, by introducing a matrix Φ such that F Ψ F^H ≼ Φ, and a scalar variable τ_r, the relay optimization problem (<ref>) can be transformed to

min_τ_r, F, {Ξ_k}, Φ τ_r
s.t. tr(Ξ_k) + σ_d^2 tr(W_k^H W_k) + N_b,k ≤ τ_r, k = 1, …, K,
[[ Ξ_k + W_k^H G_k F H_k B_k + B_k^H H_k^H F^H G_k^H W_k, W_k^H G_k F; F^H G_k^H W_k, Ψ^-1 ]] ≽ 0, k = 1, …, K,
[[ Φ, F; F^H, Ψ^-1 ]] ≽ 0,
tr(Φ) ≤ P_r,

where we have used the Schur complement to obtain (<ref>) and (<ref>). Note that problem (<ref>) is an SDP, which is convex and can, as a result, be efficiently solved using interior-point based solvers <cit.> at a maximal complexity order of 𝒪((K + 2N_r^2 + ∑_k=1^K N_b,k^2 + 2)^3.5) <cit.>. However, the actual complexity is usually much lower in many practical cases. Interested readers are referred to <cit.> for a detailed analysis of the computational complexity based on interior-point methods.
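The SDP above maps almost line by line onto a modeling language. The following CVXPY transcription is a sketch only: it assumes CVXPY's complex-variable and Hermitian-LMI support, `Psi_inv` denotes a precomputed Ψ^-1, and all names are our own.

```python
import numpy as np
import cvxpy as cp

def relay_step(B, W, H, G, Psi_inv, Pr, sig_d2):
    """One relay update: the SDP with slack matrices Xi_k and Phi."""
    Nr = H[0].shape[0]
    F = cp.Variable((Nr, Nr), complex=True)
    Phi = cp.Variable((Nr, Nr), hermitian=True)
    tau = cp.Variable()
    cons = [cp.bmat([[Phi, F], [F.H, Psi_inv]]) >> 0,
            cp.real(cp.trace(Phi)) <= Pr]
    for k in range(len(H)):
        Nb = B[k].shape[1]
        Xi = cp.Variable((Nb, Nb), hermitian=True)
        M = W[k].conj().T @ G[k] @ F @ H[k] @ B[k]    # W^H G F H B
        WGF = W[k].conj().T @ G[k] @ F
        cons += [cp.real(cp.trace(Xi))
                 + sig_d2 * np.real(np.trace(W[k].conj().T @ W[k])) + Nb <= tau,
                 cp.bmat([[Xi + M + M.H, WGF], [WGF.H, Psi_inv]]) >> 0]
    cp.Problem(cp.Minimize(tau), cons).solve()
    return F.value
```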
Finally, we optimize the source matrices {B_k} using the relay matrix F and the receiver matrices {W_k} known from the previous steps. Let us define H̃_k,j ≜ W_k^H G_k F H_j. Applying the matrix identity vec(ABC) = (C^T ⊗ A) vec(B), we can rewrite E_k in (<ref>) as

E_k = ∑_j=1^K b_j^H (I_N_b,j ⊗ (H̃_k,j^H H̃_k,j)) b_j - (vec(H̃_k,k))^T b_k - b_k^H vec(H̃_k,k^H) + θ_k,

where the vector b_k ≜ vec(B_k) is created by stacking all the columns of the matrix B_k on top of each other, θ_k ≜ tr(σ_r^2 W_k^H G_k F F^H G_k^H W_k + σ_d^2 W_k^H W_k) + N_b,k, and ⊗ indicates the matrix Kronecker product. Let us now denote

G̃_k ≜ bd(I_N_b,1 ⊗ (H̃_k,1^H H̃_k,1), …, I_N_b,K ⊗ (H̃_k,K^H H̃_k,K)),
c_k ≜ [(vec(C̃_k,1))^T, …, (vec(C̃_k,K))^T]^T,
b ≜ [b_1^T, …, b_K^T]^T,

where bd(·) constructs a block-diagonal matrix taking the parameter matrices as the diagonal blocks, C̃_k,k = H̃_k,k, and C̃_k,j = 0_N_b,k × N_s,j if j ≠ k. The MSE in (<ref>) can then be rewritten as

E_k = b^H G̃_k b - c_k^H b - b^H c_k + θ_k.

By introducing M_k ≜ F H_k, the relay power constraint in (<ref>) can be rewritten as

b^H M b ≤ P̅_r,

where M ≜ bd(I_N_b,1 ⊗ (M_1^H M_1), …, I_N_b,K ⊗ (M_K^H M_K)) and P̅_r = P_r - σ_r^2 tr(F F^H). Using (<ref>) and (<ref>), problem (<ref>) can be written as

min_b max_k b^H G̃_k b - c_k^H b - b^H c_k + θ_k
s.t. b^H M b ≤ P̅_r,
b^H ℐ_k b ≤ P_s,k, k = 1, …, K,

where ℐ_k ≜ bd(ℐ_k1, …, ℐ_kk, …, ℐ_kK) with ℐ_kk = I_N_s,k N_b,k and ℐ_kj = 0 if j ≠ k. Problem (<ref>) is a standard quadratically-constrained quadratic program (QCQP), which can be solved using off-the-shelf convex optimization toolboxes <cit.>. In the following, we also provide an SDP formulation of problem (<ref>):

min_τ_s, b τ_s
s.t. ([ τ_s - θ_k + c_k^H b + b^H c_k, b^H; b, G̃_k^-1 ]) ≽ 0, k = 1, …, K,
([ P̅_r, b^H; b, M^-1 ]) ≽ 0,
([ P_s,k, b^H ℐ_k^1/2; ℐ_k^1/2 b, I_p ]) ≽ 0, k = 1, …, K,

where τ_s is a slack variable and p ≜ ∑_k=1^K N_s,k N_b,k. Problem (<ref>) can be solved at a maximal complexity order of 𝒪((∑_k=1^K N_b,k^2 + 1)^3.5) <cit.>.
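For completeness, here is a CVXPY sketch of the min-max QCQP above. The `blocks` argument lists the index range of each b_k inside the stacked vector b; the complex quad_form usage and all names are our own assumptions rather than the paper's implementation.

```python
import cvxpy as cp

def source_step(Gt, c, theta, M, Pr_bar, Ps, blocks):
    """Min-max QCQP over the stacked precoder vector b."""
    n = M.shape[0]
    b = cp.Variable(n, complex=True)
    t = cp.Variable()
    cons = [cp.quad_form(b, M) <= Pr_bar]            # relay power: b^H M b
    for k, idx in enumerate(blocks):
        # c_k^H b + b^H c_k = 2 Re(c_k^H b)
        cons += [cp.quad_form(b, Gt[k])
                 - 2 * cp.real(c[k].conj() @ b) + theta[k] <= t,
                 cp.sum_squares(b[idx]) <= Ps[k]]    # per-source power
    cp.Problem(cp.Minimize(t), cons).solve()
    return b.value
```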
The proposed iterative optimization technique for solving the original problem (<ref>) is summarized in Table <ref>. Since in each step of the iterative algorithm we solve a convex subproblem to update one set of variables, the conditional update of each set will either decrease or maintain the objective function (<ref>). From this observation, a monotonic convergence of the iterative algorithm follows. However, the overall computational complexity of the iterative algorithm increases as a multiple of the number of iterations required until convergence. Thus the complexity of iterative algorithms is often rather high. Note that the sum-MSE based iterative algorithms proposed in <cit.> have similar complexity orders. Hence, in the following subsection, we contrive an algorithm for the joint optimization problem such that the computational overhead is substantially reduced.

§.§ Simplified Joint Optimization Algorithm

In the previous subsection, we optimized the source, relay, and receiver matrices in an alternating fashion. Here, we propose a simplified approach to solve problem (<ref>) using the error covariance matrix decomposition technique. The following theorem paves the foundation of the simplified algorithm.

For given {B_k} and {W_k}, the optimum relaying matrix F for minimizing the worst-user MSE has the form:

F = ∑_k=1^K T_k D_k^H = T D^H,

where T ≜ [T_1, …, T_K] and D ≜ [D_1, …, D_K], with T_k and D_k, respectively, defined as

T_k ≜ λ_e,k (∑_i=1^K λ_e,i G_i^H W_i W_i^H G_i + λ_r I_N_r)^-1 G_k^H W_k

and

D_k ≜ (∑_j=1^K H_j B_j B_j^H H_j^H + σ_r^2 I_N_r)^-1 H_k B_k,

and λ_r and λ_e,k, ∀k, are the corresponding Lagrange multipliers as defined in Appendix <ref>.

See Appendix <ref>.

Note that D_k = (H_k B_k B_k^H H_k^H + ∑_j=1, j≠k^K H_j B_j B_j^H H_j^H + σ_r^2 I_N_r)^-1 H_k B_k can be regarded as the MMSE receive filter of the first-hop MIMO channel for the kth transmitter's signal received at the relay node, given by (<ref>).

The implication of the structure of the relay amplifying matrix in the proposed simplified design can be observed while applying the following theorem.

The MSE term appearing in (<ref>) can be equivalently decomposed into

E_k = tr(I_N_b,k + B_k^H H_k^H Ψ_k̅^-1 H_k B_k)^-1 + tr((B_k^H H_k^H Ψ^-1 H_k B_k)^-1 + T̃^H G_k^H G_k T̃)^-1,

where Ψ_k̅ ≜ Ψ - H_k B_k B_k^H H_k^H = ∑_j=1, j≠k^K H_j B_j B_j^H H_j^H + σ_r^2 I_N_r and T̃ is defined in Appendix <ref>.

See Appendix <ref>.

Even given this structure, an analytical optimal solution to the joint optimization problem is still difficult to obtain due to the cross-link interference from the relay node to the destination nodes. Therefore, we resort to developing an efficient suboptimal solution. The following proposition provides the foundation of the proposed simplified suboptimal solution.

In the practically reasonable high-SNR regime, the term B_k^H H_k^H Ψ^-1 H_k B_k in (<ref>) can be approximated as B_k^H H_k^H Ψ^-1 H_k B_k ≈ I_N_b,k.

See Appendix <ref>.

The result in Proposition <ref> is guided by the observation that the eigenvalues of B_k^H H_k^H Ψ^-1 H_k B_k approach unity with increasing first-hop SNR. It will be demonstrated in Section <ref> through numerical simulations that such an approximation results in negligible performance loss while reducing the computational complexity significantly. Applying Proposition <ref>, the transmit power of the relay node defined in (<ref>) can be expressed as tr(F Ψ F^H) = tr(T̃ B_k^H H_k^H Ψ^-1 H_k B_k T̃^H) = tr(T̃ T̃^H). Therefore, problem (<ref>) can be approximated as

min_{B_k}, {W_k}, T̃ max_k tr(I_N_b,k + B_k^H H_k^H Ψ_k̅^-1 H_k B_k)^-1 + tr(I_N_b,k + T̃^H G_k^H G_k T̃)^-1
s.t. tr(B_k B_k^H) ≤ P_s,k, k = 1, …, K,
tr(T̃ T̃^H) ≤ P_r.

Note that the optimal receiver matrices {W_k} can be obtained as in (<ref>). Interestingly, the source and relay optimization variables {B_k} and T̃ are separable both in the objective function as well as in the constraints in problem (<ref>). Therefore, applying the results from Theorem <ref> and Proposition <ref>, we can decompose problem (<ref>) into the following source precoding matrices optimization problem:

min_{B_k} max_k tr(I_N_b,k + B_k^H H_k^H Ψ_k̅^-1 H_k B_k)^-1
s.t. tr(B_k B_k^H) ≤ P_s,k, k = 1, …, K,

and the relay amplifying matrix optimization problem:

min_T̃ max_k tr([I_N_b,k + T̃^H G_k^H G_k T̃]^-1)
s.t. tr(T̃ T̃^H) ≤ P_r.

Note that the objective function in (<ref>) can be interpreted as the MSE of the kth transmitter's signal vector s_k. In particular, the equivalent received signal for the kth transmitter's signal in the first hop received at the relay node is given by y_r^(k) = H_k B_k s_k + ∑_j≠k^K H_j B_j s_j + n_r, treating other users' signals as noise. As such, the corresponding MMSE receiver is given by D_k in (<ref>). Thus the MSE expression in (<ref>) actually represents the equivalent first-hop MSE of the kth transmitter's signal s_k. Given the corresponding MMSE receiver D_k, (<ref>) can be rewritten as

E_s,k ≜ tr(D_k^H Ψ D_k - D_k^H H_k B_k - B_k^H H_k^H D_k + I_N_b,k)
= tr((D_k^H H Υ_k B - Ω_k)(D_k^H H Υ_k B - Ω_k)^H + σ_r^2 D_k^H D_k)
= ‖vec(D_k^H H Υ_k B - Ω_k)‖_2^2 + σ_r^2 tr(D_k^H D_k)
= ‖ [ ω_k; (I_N_r ⊗ D_k^H H Υ_k) vec(B) - vec(Ω_k) ] ‖_2^2,

where ω_k ≜ σ_r √(tr(D_k^H D_k)) and Υ_k ≜ [Υ_k1, …, Υ_kk, …, Υ_kK] with Υ_kk = I_N_r and Υ_kj = 0 if j ≠ k. Introducing an auxiliary variable t_s, problem (<ref>) can be rewritten as the following second-order cone program (SOCP):

min_{B_k}, t_s t_s
s.t. ‖ [ ω_k; (I_N_r ⊗ D_k^H H Υ_k) vec(B) - vec(Ω_k) ] ‖_2 ≤ t_s, k = 1, …, K,
‖vec(B_k)‖_2 ≤ √(P_s,k), k = 1, …, K,

which can be efficiently solved by standard optimization packages at a complexity order of 𝒪((∑_k=1^K N_b,k^2 + 1)^3) <cit.>. Thus, we can update {D_k} and {B_k} in an alternating fashion.

Regarding the relay amplifying matrix optimization, by introducing T̃^H T̃ ≜ Q, the relay matrix optimization problem (<ref>) can be equivalently transformed to

min_Q ≽ 0 max_k tr([I_N_d,k + G_k Q G_k^H]^-1) + N_b,k - N_d,k
s.t. tr(Q) ≤ P_r.

Let us now introduce a matrix variable Y_k ≽ (I_N_d,k + G_k Q G_k^H)^-1 and a scalar variable t_r. Using these variables, the relay optimization problem (<ref>) can be equivalently rewritten as the following SDP:
min_t_r, Q, {Y_k} t_r
s.t. tr(Y_k) ≤ t_r, k = 1, …, K,
tr(Q) ≤ P_r,
([ Y_k, I_N_d,k; I_N_d,k, I_N_d,k + G_k Q G_k^H ]) ≽ 0, k = 1, …, K,
t_r ≥ 0, Q ≽ 0.

Problem (<ref>) is convex and the globally optimal solution can be easily obtained <cit.>. The complexity order of solving problem (<ref>) is at most 𝒪((∑_k=1^K N_b,k^2 + ∑_k=1^K N_d,k^2 + K + 2)^3.5) <cit.>. Note that in the simplified algorithm, only the source matrices are obtained in an alternating fashion. The overall joint optimization procedure is summarized in Table <ref>.
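This last SDP is particularly compact to transcribe. A possible CVXPY sketch follows (names ours); once Q is found, a relay factor T̃ satisfying T̃^H T̃ = Q can be recovered from a Hermitian square root:

```python
import numpy as np
import cvxpy as cp

def relay_sdp(G, Pr):
    """Relay optimization over Q = T~^H T~ with Schur-complement LMIs."""
    Nr = G[0].shape[1]
    Q = cp.Variable((Nr, Nr), hermitian=True)
    t = cp.Variable(nonneg=True)
    cons = [Q >> 0, cp.real(cp.trace(Q)) <= Pr]
    for Gk in G:
        Nd = Gk.shape[0]
        Y = cp.Variable((Nd, Nd), hermitian=True)
        I = np.eye(Nd)
        cons += [cp.real(cp.trace(Y)) <= t,
                 cp.bmat([[Y, I], [I, Gk @ Q @ Gk.conj().T + I]]) >> 0]
    cp.Problem(cp.Minimize(t), cons).solve()
    # Hermitian square root: T~ = U diag(sqrt(w)) U^H
    w, U = np.linalg.eigh(Q.value)
    return U @ np.diag(np.sqrt(np.clip(w, 0, None))) @ U.conj().T
```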
§ TWO-WAY RELAYING

Two-way relaying is being considered as a promising technique for future-generation wireless systems since it can significantly improve spectral efficiency. Hence, in this section, we consider two-way relaying in an interference MIMO relay system where each pair of users transmits signals to each other through the assisting relay node. The information exchange in the two-way relay channel is accomplished in two time slots: the MAC phase and the BC phase. During the MAC phase, all the users simultaneously send their messages to the relay node. Thus the signal vector received at the relay node during the MAC phase can be expressed as

y_r = ∑_k=1^2K H_k B_k s_k + n_r,

where H_K+k ≜ G_k^T for k = 1, …, K and n_r is the N_r × 1 AWGN vector received at the relay node.

Upon receiving y_r, the relay node linearly precodes the signal vector by an N_r × N_r amplifying matrix F and transmits the N_r × 1 precoded signal vector x_r in the BC phase:

x_r = F y_r.

The received signal at the kth user in the BC phase is given by

y_k = H_k^T x_r + n_d,k = H_k^T F H_k̅ B_k̅ s_k̅ + H_k^T F (∑_j=1, j≠k̅^2K H_j B_j s_j + n_r) + n_d,k, k = 1, …, 2K,

where we have defined k̅ as the index of user k's partner (e.g., 1̅ = K+1, and the partner of user K+1 is user 1), and n_d,k is the N_d,k × 1 AWGN vector at the kth destination node. As in the case of the one-way relaying system, all noises are assumed to be i.i.d. complex Gaussian random variables with mean zero and variance σ_n^2.

Since the transmitting node k knows its own signal vector s_k and the full CSI of the corresponding self-interference link H_k^T F H_k B_k, each transmitter can completely cancel the self-interference component in (<ref>). Thus, the effective received signal vector at the kth receiving node is given by

y_k = H_k^T F H_k̅ B_k̅ s_k̅ + H_k^T F (∑_j≠k, k̅^2K H_j B_j s_j + n_r) + n_d,k = H̅_k s_k̅ + n̅_d,k, k = 1, …, 2K.

Using (<ref>), the transmission power required at the relay node can be defined as

tr(E{x_r x_r^H}) = tr(F Ψ F^H),

where Ψ ≜ E{y_r y_r^H} = ∑_k=1^2K H_k B_k B_k^H H_k^H + σ_r^2 I_N_r is the covariance matrix of the signal received at the relay node from all the transmitters. Furthermore, the MSE of the estimated signal using an N_d × N_b linear weight matrix W_k at the kth receiving node can be expressed as

E_k = tr( I_N_b,k - W_k^H H_k^T F H_k̅ B_k̅ - B_k̅^H H_k̅^H F^H H_k^* W_k + ∑_j=1, j≠k^2K W_k^H H_k^T F H_j B_j B_j^H H_j^H F^H H_k^* W_k + σ_r^2 W_k^H H_k^T F F^H H_k^* W_k + σ_d^2 W_k^H W_k ), k = 1, …, 2K.

Similar to the case of one-way relaying, the problem of optimizing the transmit, relay, and receive matrices for the two-way scenario can be formulated as

min_{B_k}, F, {W_k} max_k E_k
s.t. tr(F Ψ F^H) ≤ P_r,
tr(B_k B_k^H) ≤ P_s,k, k = 1, …, 2K,

where (<ref>) and (<ref>) indicate the corresponding transmit power constraints.

§.§ Iterative Joint Transceiver Optimization

Similar to the one-way relaying scenario, it can be shown that the transmitter, relay, and receiver matrices can be optimized in an alternating fashion through solving convex sub-problems. In each iteration of the algorithm, the receiver weight matrices are updated as follows:

W_k = (∑_j=1, j≠k^2K H_k^T F H_j B_j B_j^H H_j^H F^H H_k^* + σ_r^2 H_k^T F F^H H_k^* + σ_d^2 I_N_d,k)^-1 H_k^T F H_k̅ B_k̅, k = 1, …, 2K.

The relay beamforming matrix F is optimized through solving the following SDP problem:

min_τ_r, F, {Ξ_k}, Φ τ_r
s.t. tr(Ξ_k) + σ_d^2 tr(W_k^H W_k) + N_b,k ≤ τ_r, k = 1, …, 2K,
[[ Ξ_k + W_k^H H_k^T F H_k̅ B_k̅ + B_k̅^H H_k̅^H F^H H_k^* W_k, W_k^H H_k^T F; F^H H_k^* W_k, Ψ_k̅^-1 ]] ≽ 0, k = 1, …, 2K,
[[ Φ, F; F^H, Ψ^-1 ]] ≽ 0,
tr(Φ) ≤ P_r,

where we have defined F Ψ F^H ≼ Φ and -W_k^H H_k^T F H_k̅ B_k̅ - B_k̅^H H_k̅^H F^H H_k^* W_k + W_k^H H_k^T F Ψ_k̅ F^H H_k^* W_k ≼ Ξ_k.

Finally, the optimal source precoding matrices are obtained by solving

min_τ_s, b τ_s
s.t. ([ τ_s - θ_k + c_k^H b + b^H c_k, b^H; b, G̃_k^-1 ]) ≽ 0, k = 1, …, 2K,
([ P̅_r, b^H; b, M^-1 ]) ≽ 0,
([ P_s,k, b^H ℐ_k^1/2; ℐ_k^1/2 b, I_p ]) ≽ 0, k = 1, …, 2K,

where θ_k ≜ tr(σ_r^2 W_k^H H_k^T F F^H H_k^* W_k + σ_d^2 W_k^H W_k) + N_b,k, k = 1, …, 2K, G̃_k ≜ bd(I_N_b,1 ⊗ (H̃_k,1^H H̃_k,1), ⋯, I_N_b,2K ⊗ (H̃_k,2K^H H̃_k,2K)), k = 1, …, 2K, c_k ≜ [(vec(C̃_k,1))^T, …, (vec(C̃_k,2K))^T]^T with C̃_k,k̅ = H̃_k,k̅ and C̃_k,j = 0_N_b,k × N_s,j for j ≠ k̅, b ≜ [b_1^T, …, b_2K^T]^T, M ≜ bd(I_N_b,1 ⊗ (M_1^H M_1), …, I_N_b,2K ⊗ (M_2K^H M_2K)), and p ≜ ∑_k=1^2K N_s,k N_b,k.

§.§ Simplified Non-Iterative Approach

Assuming a moderate SNR in the MAC phase, it can be shown, similar to the one-way relaying case, that the generic structure of the relay matrix F is F = T D^H. Using this particular structure of F, the MSE at the kth receiver can be equivalently decomposed into two parts as shown below:

E_k = tr(I_N_b,k + B_k^H H_k^H Ψ_k̅^-1 H_k B_k)^-1 + tr((B_k^H H_k^H Ψ^-1 H_k B_k)^-1 + T̃^H H_k̅^* H_k̅^T T̃)^-1.

Accordingly, the joint precoding design problem (<ref>) can be decomposed into two sub-problems, namely, the source precoding matrices optimization problem:

min_{B_k} max_k tr(I_N_b,k + B_k^H H_k^H Ψ_k̅^-1 H_k B_k)^-1
s.t. tr(B_k B_k^H) ≤ P_s,k, k = 1, …, K,

and the relay beamforming matrix optimization problem:

min_T̃ max_k tr([I_N_b,k + T̃^H H_k̅^* H_k̅^T T̃]^-1)
s.t. tr(T̃ T̃^H) ≤ P_r,

which can be solved following a similar approach as for the one-way relaying scenario.

§ NUMERICAL SIMULATIONS

In this section, we analyze the performance of the proposed one- and two-way MIMO relay interference system optimization algorithms through numerical examples. For simplicity, we assume that the source and the destination nodes are equipped with N_s and N_d antennas each, respectively, and P_s,k = P_s, ∀k. We simulated a flat Rayleigh fading environment such that the channel matrices have zero-mean entries with variances 1/N_s for H_k, ∀k, and 1/N_r for G_k, ∀k. All the simulation results were obtained by averaging over 500 independent channel realizations.

The performance of the proposed min-max MSE algorithms has been compared with that of the naive AF (NAF) algorithm in terms of both MSE and bit error rate (BER). The NAF algorithm is a simple baseline scheme that forwards the signals at the transmitters and the relay node assigning equal power to each data stream. In particular, the source and the relay matrices, in their simplest forms, in the NAF scheme are defined as

B_k = √(P_s/N_s) I_N_s, k = 1, …, K,
F = √(P_r/tr(Ψ)) I_N_r.
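The NAF baseline is straightforward to reproduce; a NumPy sketch (names ours):

```python
import numpy as np

def naf_matrices(H, Ps, Pr, Ns, Nr, sig_r2):
    """Naive AF baseline: equal power per stream, scaled identity at the relay."""
    B = [np.sqrt(Ps / Ns) * np.eye(Ns) for _ in H]
    Psi = sig_r2 * np.eye(Nr, dtype=complex)     # relay-input covariance
    for Hk, Bk in zip(H, B):
        HB = Hk @ Bk
        Psi += HB @ HB.conj().T
    F = np.sqrt(Pr / np.real(np.trace(Psi))) * np.eye(Nr)
    return B, F
```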
In the first example, we compare the performance of the proposed min-max MSE-based one-way algorithms with that of the sum-MSE minimization algorithm in <cit.>, as well as the NAF approach, in terms of the MSE normalized by the number of data streams (NMSE), with K = 3, N_s = 3, N_r = 9, and N_d = 3. Fig. <ref> shows the NMSE performance of the algorithms versus transmit power P_s with fixed P_r = 20 dB. Note that for the proposed simplified non-iterative algorithm, we plot the NMSE of the user with the worst channel (Worst) as well as the average per-stream MSE of all the users (Avg.). For the rest of the algorithms, the worst-user NMSE has been plotted. The results clearly indicate that the proposed joint optimization algorithms consistently yield better performance compared to the existing schemes. It can also be seen that the proposed iterative algorithm has the best MSE performance compared to the other approaches over the entire P_s range. It is no surprise that the NAF algorithm yields a much higher MSE compared to the other schemes, since the NAF algorithm performs no optimization operation. Most importantly, the iterative sum-MSE minimization algorithm in <cit.> always penalizes the user with the worst channel condition. Since the NAF algorithm does not allocate the transmit power optimally, and instead equally divides the power among multiple data streams, the inter-stream interference and the inter-user interference increase significantly at higher transmit power. Hence the MSE of the NAF algorithm does not improve notably at higher transmit power.

Further analysis of the results in Fig. <ref> reveals that the proposed simplified algorithm yields a worst-user MSE performance which is comparable to that of the iterative algorithm, even in the low-P_s region. This observation illustrates that the approximation made in the simplified algorithm incurs negligible performance loss compared to the iterative optimal design. On the other hand, the computational complexity of the proposed simplified optimization is less than that of even one iteration of the iterative design, making it much more attractive for practical interference MIMO relay systems. The number of iterations required for convergence up to 10^-3 in terms of MSE in a random channel realization for the iterative algorithm is listed in Table <ref>.

In the next example, we focus on the proposed simplified optimization scheme and compare its performance with that of the proposed iterative approach and the NAF algorithm in terms of BER. Quadrature phase-shift keying (QPSK) signal constellations were assumed to modulate the transmitted signals, and maximum-likelihood detection is applied at the receivers. We set K = 3, N_s = 2, N_r = 6, N_d = 3, and transmit 1000 N_s randomly generated bits from each transmitter in each channel realization. The BER performance of the algorithms is shown in Fig. <ref> versus P_s with P_r = 20 dB. As we can see, the proposed simplified algorithm yields a much lower BER compared to the conventional NAF scheme. Compared with the iterative approach, the simplified algorithm has a much lower computational burden at the cost of a marginal performance loss.

In the last couple of examples, we analyze the performance of the two-way MIMO relaying scheme. The NMSE performance of the two-way relaying algorithms is shown for different numbers of communication links K in Fig. <ref>.
This time we set N_s = 2, N_r = K N_s, and N_d = 6 to plot the NMSE of the proposed algorithms versus P_s with P_r = 20 dB. It can be clearly seen from Fig. <ref> that as the number of links increases, the worst-user MSE keeps increasing. This is due to the additional cross-link interference generated by the increased number of active users.

In Fig. <ref>, the BER performance of the proposed two-way relaying algorithms has been compared with the sum-MSE based algorithms originally proposed for one-way relaying in <cit.>. QPSK signal constellations were assumed to modulate the transmitted signals. We set N_s = 2, K = 3, N_r = K N_s, N_d = 6, P_r = 20 dB, and transmit 1000 N_s randomly generated bits from each transmitter in each channel realization. Most importantly, the iterative sum-MSE minimization algorithms in <cit.> always penalize the user with the worst channel condition in the two-way relaying system.

§ CONCLUSIONS

We considered a two-hop interference MIMO relay system and developed schemes to minimize the worst-user MSE of signal estimation for both one- and two-way relaying schemes. At first, we proposed an iterative solution for both relaying schemes by solving several convex subproblems alternatingly and in an iterative fashion. Then, to reduce the computational overhead of the optimization approach, we developed a simplified non-iterative algorithm using the error covariance matrix decomposition technique based on the high-SNR assumption. Simulation results have illustrated that the proposed simplified approach performs nearly as well as the iterative approach, while offering a significant reduction in computational complexity.

§ APPENDICES

§.§ Proof of Theorem <ref>

For given {B_k} and {W_k}, problem (<ref>) reduces to

min_F τ
s.t. E_k ≤ τ, k = 1, …, K,
tr(F Ψ F^H) ≤ P_r.

The Lagrangian function of problem (<ref>) can be written as

ℒ(F, {λ_e,k}, λ_r) = τ + ∑_k=1^K λ_e,k tr( I_N_s,k - 2 Re(B_k^H H_k^H F^H G_k^H W_k) + ∑_j=1^K W_k^H G_k F H_j B_j B_j^H H_j^H F^H G_k^H W_k + σ_r^2 W_k^H G_k F F^H G_k^H W_k + σ_d^2 W_k^H W_k - τ ) + λ_r (tr(F(∑_k=1^K H_k B_k B_k^H H_k^H + σ_r^2 I_N_r) F^H) - P_r).

The derivative of the Lagrangian function with respect to F^H is given by

∂ℒ/∂F^H = ∑_k=1^K λ_e,k (-G_k^H W_k B_k^H H_k^H + ∑_j=1^K G_k^H W_k W_k^H G_k F H_j B_j B_j^H H_j^H + σ_r^2 G_k^H W_k W_k^H G_k F) + λ_r F (∑_k=1^K H_k B_k B_k^H H_k^H + σ_r^2 I_N_r).

Rearranging the terms in (<ref>), ∂ℒ/∂F^H can be expressed as

∂ℒ/∂F^H = ∑_k=1^K -λ_e,k G_k^H W_k B_k^H H_k^H + (∑_i=1^K λ_e,i G_i^H W_i W_i^H G_i + λ_r I_N_r) F (∑_j=1^K H_j B_j B_j^H H_j^H + σ_r^2 I_N_r).

Equating ∂ℒ/∂F^H = 0, we have the optimal relay filter given by

F = ∑_k=1^K T_k D_k^H

with

T_k ≜ λ_e,k (∑_i=1^K λ_e,i G_i^H W_i W_i^H G_i + λ_r I_N_r)^-1 G_k^H W_k,
D_k ≜ (∑_j=1^K H_j B_j B_j^H H_j^H + σ_r^2 I_N_r)^-1 H_k B_k.

Denoting T ≜ [T_1 ⋯ T_K] and D ≜ [D_1 ⋯ D_K], F can be expressed as F = T D^H.

§.§ Proof of Theorem <ref>

The MSE in (<ref>) can be rewritten as

E_k = tr[I_N_s,k + B_k^H H_k^H F^H G_k^H C̅_k^-1 G_k F H_k B_k]^-1
= tr(I_N_s,k - B_k^H H_k^H F^H G_k^H (G_k F H_k B_k B_k^H H_k^H F^H G_k^H + C̅_k)^-1 G_k F H_k B_k)
= tr(I_N_s,k - B_k^H H_k^H F^H G_k^H (G_k F Ψ F^H G_k^H + σ_d^2 I_N_d,k)^-1 G_k F H_k B_k)
= tr(I_N_s,k - B_k^H H_k^H [Ψ^-1 - (Ψ F^H G_k^H G_k F Ψ + Ψ)^-1] H_k B_k)
= tr(I_N_s,k + B_k^H H_k^H Ψ_k̅^-1 H_k B_k)^-1 + tr(B_k^H H_k^H (Ψ F^H G_k^H G_k F Ψ + Ψ)^-1 H_k B_k),

where we used the matrix inversion lemma (A + BCD)^-1 = A^-1 - A^-1 B (D A^-1 B + C^-1)^-1 D A^-1 to obtain (<ref>) and the first term in (<ref>), whereas the matrix identity B^H(B C B^H + I)^-1 B = C^-1 - (C B^H B C + C)^-1 is used to obtain (<ref>) in the above derivation.
Note that the first term in (<ref>) does not depend on F. Hence, for given source matrices, the problem of optimizing F can be simplified as

min_F tr(B_k^H H_k^H (Ψ F^H G_k^H G_k F Ψ + Ψ)^-1 H_k B_k)
s.t. tr(F Ψ F^H) ≤ P_r.

By introducing F̃ = F Ψ^1/2, problem (<ref>) can be rewritten as

min_F̃ tr(B_k^H H_k^H Ψ^-1/2 (F̃^H G_k^H G_k F̃ + I_N_r)^-1 Ψ^-1/2 H_k B_k)
s.t. tr(F̃ F̃^H) ≤ P_r.

Let us write the eigenvalue decomposition (EVD) G_k^H G_k = V_g Λ_g V_g^H and the singular value decomposition (SVD) Ψ^-1/2 H_k B_k = U_ψ Λ_ψ V_ψ^H. The following lemma defines the optimal F̃.

<cit.> For matrices A, T̅, H of dimensions m × n, l × m, and k × l, respectively, with k, l, m ≥ n, r ≜ rank(H) ≥ n and rank(T̅) = n, the solution to the optimization problem

min_T̅ tr(A^H (T̅^H H^H H T̅ + I_m)^-1 A)
s.t. tr(T̅ T̅^H) ≤ p,

is given by T̅ = Ṽ_h Λ_T U_a^H in terms of the SVD of T̅. Here H = U_h Σ_h V_h^H and A = U_a Σ_a V_a^H are the SVDs of H and A, respectively, with the diagonal elements of Σ_h and Σ_a sorted in decreasing order, and Ṽ_h contains the leftmost n columns of V_h.

According to Lemma <ref>, the optimal F̃ in (<ref>) has the SVD F̃ = Ṽ_g Λ_f U_ψ^H, where Ṽ_g contains the leftmost columns of V_g corresponding to the non-zero eigenvalues. Then, after some simple manipulations, F̃ can be rewritten as F̃ = Ṽ_g Λ_f Λ_ψ^-1 V_ψ^H V_ψ Λ_ψ U_ψ^H = T̃ B_k^H H_k^H Ψ^-1/2, where T̃ ≜ Ṽ_g Λ_f Λ_ψ^-1 V_ψ^H. Hence F can be expressed as F = T̃ B_k^H H_k^H Ψ^-1. Interestingly, F = T̃ B_k^H H_k^H Ψ^-1 can be expressed as F = T̃ D̃^H, which is structurally identical to the one defined in Theorem <ref>.

Applying this structure of the relay matrix, the second term in (<ref>) can be written as

tr(B_k^H H_k^H (Ψ F^H G_k^H G_k F Ψ + Ψ)^-1 H_k B_k)
= tr(B_k^H H_k^H (Ψ Ψ^-1 H_k B_k T̃^H G_k^H G_k T̃ B_k^H H_k^H Ψ^-1 Ψ + Ψ)^-1 H_k B_k)
= tr(B_k^H H_k^H (Ψ^-1 - Ψ^-1 H_k B_k (B_k^H H_k^H Ψ^-1 H_k B_k + (T̃^H G_k^H G_k T̃)^-1)^-1 B_k^H H_k^H Ψ^-1) H_k B_k)
= tr(B_k^H H_k^H Ψ^-1 H_k B_k - B_k^H H_k^H Ψ^-1 H_k B_k (B_k^H H_k^H Ψ^-1 H_k B_k + (T̃^H G_k^H G_k T̃)^-1)^-1 B_k^H H_k^H Ψ^-1 H_k B_k)
= tr((B_k^H H_k^H Ψ^-1 H_k B_k)^-1 + T̃^H G_k^H G_k T̃)^-1.

Thus the MSE in (<ref>) can be expressed as the sum of two MSEs given by

E_k = tr(I_N_s,k + B_k^H H_k^H Ψ_k̅^-1 H_k B_k)^-1 + tr((B_k^H H_k^H Ψ^-1 H_k B_k)^-1 + T̃^H G_k^H G_k T̃)^-1.

§.§ Proof of Proposition <ref>

Assuming that the first-hop SNR is reasonably high, it follows that ∑_j=1^K H_j B_j B_j^H H_j^H ≫ σ_r^2 I_N_r, where A ≫ B effectively means that the eigenvalues of A - B are much greater than zero. Hence,

B_k^H H_k^H Ψ^-1 H_k B_k = B_k^H H_k^H (∑_j=1^K H_j B_j B_j^H H_j^H + σ_r^2 I_N_r)^-1 H_k B_k ≈ B_k^H H_k^H (∑_j=1^K H_j B_j B_j^H H_j^H)^-1 H_k B_k.

Let U_k Λ_k U_k^H be the EVD of H_k B_k B_k^H H_k^H. Without loss of generality, we express U_k = [U_k^(0̅) U_k^(0)] and Λ_k = [[ Λ_k^(0̅), 0; 0, 0 ]], where U_k^(0̅) and U_k^(0) contain the eigenvectors corresponding to the non-zero and zero eigenvalues, respectively, in U_k, while Λ_k^(0̅) is an N_b,k × N_b,k diagonal matrix containing the non-zero eigenvalues on the main diagonal. Thus H_k B_k = U_k Λ̅_k^(0̅), where Λ̅_k^(0̅) = [[ Λ_k^(0̅)1/2; 0 ]]. Similarly, we obtain the following EVD:

∑_j=1, j≠k^K H_j B_j B_j^H H_j^H = U_k̅ Λ_k̅ U_k̅^H = [U_k̅^(0̅) U_k̅^(0)] [[ Λ_k̅^(0̅), 0; 0, 0 ]] [U_k̅^(0̅) U_k̅^(0)]^H = [U_k̅^(0) U_k̅^(0̅)] [[ 0, 0; 0, Λ_k̅^(0̅) ]] [U_k̅^(0) U_k̅^(0̅)]^H.
Substituting H_k B_k in (<ref>) with H_k B_k = U_k Λ̅_k^(0̅), we obtain

B_k^H H_k^H (∑_j=1^K H_j B_j B_j^H H_j^H)^-1 H_k B_k = Λ̅_k^(0̅)H (Λ_k + U_k^H U_k̅ Λ_k̅ U_k̅^H U_k)^-1 Λ̅_k^(0̅).

Now we rewrite U_k^H U_k̅ as

U_k^H U_k̅ = [U_k^(0̅) U_k^(0)]^H [U_k̅^(0) U_k̅^(0̅)] = [[ U̅_k^(0), 0; 0, U̅_k^(0̅) ]],

where U̅_k^(0) and U̅_k^(0̅) are N_b,k × N_b,k and (N_r - N_b,k) × (N_r - N_b,k) unitary matrices, respectively. As a consequence, we obtain

U_k^H U_k̅ Λ_k̅ U_k̅^H U_k = U_k^H [U_k̅^(0) U_k̅^(0̅)] [[ 0, 0; 0, Λ_k̅^(0̅) ]] [U_k̅^(0) U_k̅^(0̅)]^H U_k = [[ 0, 0; 0, U̅_k^(0̅) Λ_k̅^(0̅) U̅_k^(0̅)H ]].

Using the identity U^-1 = U^H for a unitary matrix U, we obtain

(Λ_k + U_k^H U_k̅ Λ_k̅ U_k̅^H U_k)^-1 = [[ Λ_k^(0̅)^-1, 0; 0, U̅_k^(0̅) Λ_k̅^(0̅)^-1 U̅_k^(0̅)H ]].

Substituting (<ref>) into (<ref>), we obtain

B_k^H H_k^H (∑_j=1^K H_j B_j B_j^H H_j^H)^-1 H_k B_k = [[ Λ_k^(0̅)1/2H, 0 ]] [[ Λ_k^(0̅)^-1, 0; 0, U̅_k^(0̅) Λ_k̅^(0̅)^-1 U̅_k^(0̅)H ]] [[ Λ_k^(0̅)1/2; 0 ]] = Λ_k^(0̅)1/2H Λ_k^(0̅)^-1 Λ_k^(0̅)1/2 = I_N_b,k.

Thus, for high first-hop SNR, B_k^H H_k^H Ψ^-1 H_k B_k can be approximated as I_N_b,k.

§ COMPETING INTERESTS

The authors declare that they have no competing interests.

§ FUNDING

This work is supported by EPSRC under grant EP/K015893/1. | http://arxiv.org/abs/1703.09029v1 | {
"authors": [
"Muhammad R A Khandaker",
"Kai-Kit Wong"
],
"categories": [
"cs.IT",
"cs.ET",
"math.IT"
],
"primary_category": "cs.IT",
"published": "20170327122747",
"title": "One- and Two-Way Relay Optimization for MIMO Interference Networks"
} |
We present an outline of basic assumptions and governing structural equations describing atmospheres of substellar mass objects, in particular the extrasolar giant planets and brown dwarfs. Although most of the presentation of the physical and numerical background is generic, details of the implementation pertain mostly to the code CoolTlusty. We also present a review of numerical approaches and computer codes devised to solve the structural equations, and make a critical evaluation of their efficiency and accuracy.

planets and satellites: atmospheres, gaseous planets – methods: numerical – radiative transfer – brown dwarfs

§ INTRODUCTION

There have been a number of theoretical studies dealing with constructing model atmospheres of sub-stellar mass objects (SMO), most notably extrasolar giant planets (EGP) and brown dwarfs. In the context of EGPs, the first self-consistent model atmospheres were produced by Seager & Sasselov (1998), followed by Goukenleuque et al. (2000) and Barman et al. (2001). The first extended grid of EGP model atmospheres was constructed by Sudarsky et al. (2003). There have been many more theoretical studies afterward, but it is not our aim here to provide a historical review of the field. Most of the literature deals with the properties of constructed models and with analyses of observations. However, the basic physical assumptions and the methodology of model construction are usually covered only in short sections, usually referring to other papers, or are sometimes lost in Appendices of otherwise application-minded papers.

Here, we intend to fill this gap, and provide a systematic overview of basic physical assumptions, structural equations, and numerical methods to solve them. We also would like to clarify some previously confusing points, because researchers in the field of extrasolar giant planets come from both the planetary science and the stellar atmosphere communities and use their respective traditional terminologies, sometimes using the same term (e.g., the effective temperature, albedo, etc.) to mean a completely different concept.

Section 2 of this paper contains an outline of the basic assumptions and governing structural equations describing an SMO atmosphere. Section 3 then reviews the essential elements of the numerical methods used to solve the structural equations without unnecessary approximations, and Section 4 deals with some important details of the numerical procedure. Section 5 briefly discusses the topic of approximate, gray or pseudo-gray, models. They are useful as initial models for a subsequent iterative scheme to solve the structural equations exactly, as well as a pedagogical tool to understand the atmospheric temperature structure. Finally, in Section 6, we discuss a comparison of the present scheme to other modeling approaches. We also include several Appendices where some technical details are described.

We stress that while Section 2 presents a general outline of the physical background, which is largely universal and is adopted by a number of approaches and computer codes, the material presented in Sections 3 and 4 pertains mostly to the code CoolTlusty (Hubeny et al. 2003; Sudarsky et al. 2003), which was developed as a variant of the universal stellar atmosphere code tlusty (Hubeny 1988; Hubeny & Lanz 1995), although analogous or similar techniques are adopted in other codes, as is summarized in Section 6.
§ PHYSICAL BACKGROUND

We will describe here a procedure to compute the so-called classical model atmospheres; that is, plane-parallel, horizontally homogeneous atmospheres in hydrostatic and radiative (or radiative+convective) equilibrium. The basic physical framework employed to model the atmospheres of SMOs represents a straightforward extension of the physical description used in the theory of stellar atmospheres. For a comprehensive discussion and detailed description of the basic physics and numerics in the stellar context, refer to Hubeny & Mihalas (2014; in particular Chaps. 12-13, 16-18).

§.§ Basic structural equations

The basic structural equations are the hydrostatic equilibrium equation and the energy balance equation. Since radiation critically influences the energy balance, the radiative transfer equation has to be viewed as one of the basic structural equations. These equations are supplemented by the equation of state and the equations that define the absorption and emission coefficients for radiation. We shall briefly discuss these equations below.

§.§.§ Radiative transfer equation

For a time-independent, horizontally homogeneous atmosphere, possibly irradiated by an external source which is symmetric with respect to the normal to the surface, the radiative transfer equation is written as

μ dI(ν,μ,z)/dz = -χ(ν,z) I(ν,μ,z) + η^tot(ν,μ,z),

where I is the specific intensity of radiation, defined such that I cosθ dν dt dS dΩ is the energy of radiation having a frequency in the range (ν, ν+dν) passing through an elementary surface dS into an element of solid angle dΩ around the direction of propagation n, with angle θ between the normal to the surface element dS and n, in the time interval dt. In the plane-parallel geometry, the state parameters depend only on one geometrical coordinate, the depth in the atmosphere, and the specific intensity depends only on the angle θ; we use the customary notation μ ≡ cosθ. Further, χ and η^tot are the total absorption and emission coefficients, respectively. They include both the thermal as well as the scattering processes – see below. Here we assume that there are no external forces and no macroscopic velocities, so the absorption coefficient does not depend on μ. The emission coefficient may still depend on direction; however, for isotropic scattering the emission coefficient is also independent of μ, η^tot(ν,μ,z) = η^tot(ν,z).

In the following, we denote the dependence on frequency through the index ν and omit an indication of the dependence on depth. The total absorption coefficient, or extinction coefficient, is written as

χ_ν = κ_ν + s_ν,

where κ_ν is the coefficient of true absorption, which corresponds to a process during which an absorbed photon is destroyed, while s_ν is the scattering coefficient, corresponding to a process which removes a photon from the beam but re-emits it in a different direction[Generally, a scattering process may be non-coherent, in which case an absorbed and a re-emitted photon may have different frequencies, for instance during resonance scattering in spectral lines, or in Compton scattering. However, we will not consider these processes here and assume coherent scattering.].
We note that this coefficient is sometimes denoted σ_ν, but we use the notation with s to avoid confusion with cross sections, which we denote σ – see below.

The total emission coefficient is also given as a sum of thermal and scattering contributions. The latter refers only to continuum scattering; in the context of SMO model atmospheres, scattering in spectral lines is treated with complete frequency redistribution, in which case the line scattering term is in fact a part of the thermal emission coefficient. The continuum scattering part is usually treated separately from the thermal part, and the "thermal emission coefficient" is usually called simply the "emission coefficient." Specifically, the total emission coefficient is written as

η_ν^tot = η_ν + η_ν^sc.

In the case of coherent isotropic scattering, η_ν^sc = s_ν J_ν, where J_ν is the mean intensity of radiation defined below. For cold objects, brown dwarfs and exoplanets, one usually assumes local thermodynamic equilibrium (LTE), in which case

η_ν = κ_ν B_ν,

where B_ν is the Planck function,

B_ν = (2hν^3/c^2) 1/[exp(hν/kT) - 1],

where T is the temperature, and h, k, c are the Planck constant, the Boltzmann constant, and the speed of light, respectively.

It is customary to introduce the optical depth,

dτ_ν = -χ_ν dz,

and the source function

S_ν = η_ν^tot/χ_ν.

In LTE, and for coherent isotropic scattering, the source function is given by

S_ν = ϵ_ν B_ν + (1-ϵ_ν) J_ν,

where ϵ_ν = κ_ν/χ_ν. The term (1-ϵ_ν) is sometimes called the single-scattering albedo. The transfer equation now reads

μ dI_ν(μ)/dτ_ν = I_ν(μ) - S_ν.

Introducing the moments of the radiation intensity as

[J_ν, H_ν, K_ν] ≡ (1/2) ∫_-1^1 I_ν(μ) [1, μ, μ^2] dμ,

the moment equations of the transfer equation read

dH_ν/dτ_ν = J_ν - S_ν,

and

dK_ν/dτ_ν = H_ν.

Combining Eqs. (<ref>) and (<ref>), one obtains the second-order equation

d^2 K_ν/dτ_ν^2 = J_ν - S_ν.

When dealing with an iterative solution of the set of all structural equations that specifically includes the radiative transfer equation, it is advantageous to introduce a form factor, usually called the (variable) Eddington factor,

f_ν = K_ν/J_ν,

and to write the second-order form as

d^2 (f_ν J_ν)/dτ_ν^2 = J_ν - S_ν.

This equation contains only the mean intensity, J_ν, which depends on frequency and depth, but not the specific intensity, I_ν(μ), which is also a function of the polar angle θ. The Eddington factor is not known or given a priori, but is computed in the formal solution of the transfer equation, and is held fixed during the subsequent iteration of the linearization procedure. By the term "formal solution" we mean a solution of the transfer equation with a known source function. It is done between two consecutive iterations of the iterative scheme, with current values of the state parameters.

We stress that introducing the Eddington factor does not represent an approximation; Eq. (<ref>) is exact at the convergence limit. It should also be stressed that the Eddington factor technique offers some, but not spectacular, advantages in solving the transfer equation for the radiation intensities alone, because the computer time for solving directly a linear, angle-dependent transfer equation, Eq. (<ref>), or solving the second-order equation (<ref>) iteratively, is not very much different unless one deals with a large number of directions. However, its main strength lies in providing an efficient way of solving the radiative transfer equation simultaneously with the other structural equations, to determine the radiation intensity and other state parameters (temperature, density, etc.) self-consistently.
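The thermal quantities entering the source function are simple to evaluate numerically. A minimal Python sketch of the Planck function and of S_ν = ϵ_ν B_ν + (1-ϵ_ν) J_ν in cgs units (constants and function names are our own choices):

```python
import numpy as np

H_PLANCK = 6.62607015e-27   # erg s
K_BOLTZ = 1.380649e-16      # erg / K
C_LIGHT = 2.99792458e10     # cm / s

def planck(nu, T):
    """Planck function B_nu(T) in cgs units; expm1 keeps it stable at low hv/kT."""
    return 2.0 * H_PLANCK * nu**3 / C_LIGHT**2 \
        / np.expm1(H_PLANCK * nu / (K_BOLTZ * T))

def source_function(kappa, s, J, T, nu):
    """Source function for LTE with coherent isotropic scattering."""
    eps = kappa / (kappa + s)          # thermal coupling parameter eps_nu
    return eps * planck(nu, T) + (1.0 - eps) * J
```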
self-consistently.

The upper boundary condition is written as [d(f_ν J_ν)/dτ_ν]_0 = g_ν J_ν(0) - H_ν^ext, where g_ν is the surface Eddington factor, defined by g_ν ≡ (1/2)∫_0^1 I_ν(μ,0) μ dμ / J_ν(0), and H_ν^ext ≡ (1/2)∫_0^1 I_ν^ext(-μ) μ dμ, where I_ν^ext(-μ) is the external incoming intensity at the top of the atmosphere. Two features are worth stressing. First, the right-hand side of Eq. (<ref>) can be written as H_out - H_in, that is, as a difference of the outgoing and incoming flux at the top of the atmosphere. Second, the integral in Eq. (<ref>) is evaluated only over the outgoing directions, but the definition of the surface Eddington factor g contains the mean intensity J, which is defined through an integral over all, outgoing and incoming, directions.

The lower boundary condition is written similarly, [d(f_ν J_ν)/dτ_ν]_τ_max = H_ν^+ - (1/2) J_ν, where H_ν^+ = (1/2)∫_0^1 I_ν(μ,τ_max) μ dμ. The factor 1/2 on the right-hand side of Eq. (<ref>) could be replaced by another Eddington factor analogous to g_ν, but because the radiation field at the lower boundary is essentially isotropic, this factor would be very close to 1/2 anyway. One typically assumes the diffusion approximation at the lower boundary, in which case I_ν(μ) = B_ν + μ(dB_ν/dτ_ν), thus H_ν^+ = (1/2)B_ν + (1/3)(dB_ν/dτ_ν); hence Eq. (<ref>) is written as [d(f_ν J_ν)/dτ_ν]_τ_max = [(1/2)(B_ν - J_ν) + (1/3) dB_ν/dτ_ν]_τ_max.

To compare this treatment of the radiative transfer equation to the approaches usually used for the Earth or for solar-system planetary atmospheres, several points are worth stressing:

* All frequencies are treated on an equal footing. There is no artificial separation of frequencies into the “solar” (optical) region, in which the dominant mechanism of photon transport is scattering, and the “infrared” region, in which the dominant mechanism of transport is absorption and thermal emission of photons.
* External irradiation is treated simply, but at the same time exactly, as an upper boundary condition for the radiative transfer equation. No additional contribution of an attenuated irradiation intensity is artificially added to the source function.
* The transfer equation does not contain any assumptions about a division of an atmosphere into a series of vertically homogeneous slabs, with constant properties within a slab, as is often done in planetary studies. The transfer equation is discretized, as shown explicitly in Appendix A, and the manner of discretization in fact stipulates the behavior of the source function between the discretized grid points, for which the solution is then exact. For instance, the second-order form of the transfer equation, Eq. (<ref>), automatically yields a second-order accurate numerical scheme, i.e., the solution of the transfer equation is exact for a piecewise parabolic form of the source function between the grid points.

§.§.§ Hydrostatic equilibrium equation

Under the conditions met in SMO atmospheres, the radiation pressure is negligible, and the hydrostatic equilibrium equation is given simply as dP/dz = -ρ g, or dP/dm = g, where P is the gas pressure, and m the column mass, dm = -ρ dz, which is typically used (at least in stellar applications) as the basic depth coordinate.
Equation (<ref>) has a simple solution, P = mg, so one can use either P or m as a depth coordinate.

§.§.§ Radiative equilibrium equation

In the convectively stable layers, the condition of energy balance is represented by the radiative equilibrium equation, ∫_0^∞(χ_ν J_ν - η^tot_ν) dν = 0, which states that no energy is being generated in, or removed from, an elementary volume in the atmosphere. In other words, the total radiation energy emitted in a given volume is exactly balanced by the total energy absorbed. This form of the radiative equilibrium equation is called the integral form. In view of Eqs. (<ref>) - (<ref>), the term representing the net radiative energy generation can be written as ∫_0^∞ (χ_ν J_ν - η_ν^tot)dν = ∫_0^∞ (κ_ν J_ν - η_ν)dν because the scattering terms exactly cancel. Physically, Eq. (<ref>) states that coherent scattering, which represents a process of an absorption plus subsequent re-emission of a photon without a change of its energy, does not contribute to the energy balance. As follows from Eq. (<ref>), in LTE one has ∫_0^∞ (κ_ν J_ν - η_ν)dν = ∫_0^∞ κ_ν (J_ν - B_ν)dν = 0, but we will use the general term in the following text.

Using Eq. (<ref>), the radiative equilibrium equation can also be written as ∫_0^∞ dH_ν/dz dν = 0, or, equivalently, H ≡ ∫_0^∞ H_ν dν = const ≡ (σ_R/4π) T_eff^4, where σ_R is the Stefan-Boltzmann constant, and T_eff the effective temperature, which is a measure of the total energy flux coming from the interior. It is one of the basic parameters of the problem. We stress that we use the term “effective temperature” as it is used in the stellar context. In planetary studies, this term is traditionally used to describe an equilibrium temperature of the upper layers of an irradiated atmosphere. So, this term has in a sense an opposite meaning in these two fields: in the stellar atmosphere terminology it describes the energy flux coming from the interior and, in view of Eq. (<ref>), the net flux passing through the atmosphere, while in the planetary terminology it reflects the energy flux coming from the outside. More accurately, in the planetary terminology it describes the outgoing flux which, in most cases, almost balances the flux coming from the outside and which can be substantially larger than the net flux.

Equation (<ref>) can be rewritten, using Eqs. (<ref>) and (<ref>), as ∫_0^∞ d(f_ν J_ν)/dτ_ν dν = (σ_R/4π) T_eff^4, which is called the differential form of the radiative equilibrium equation. Experience with computing model stellar atmospheres (e.g.,
Hubeny & Lanz 1995) revealed that it is numerically advantageous to consider a linear combination of both forms of the radiative equilibrium equation, namely α[∫_0^∞(κ_ν J_ν - η_ν)dν] + β[∫_0^∞ d(f_ν J_ν)/dτ_ν dν - (σ_R/4π) T_eff^4] = 0, where α and β are empirical coefficients that satisfy β → 0 in the upper layers and β → 1 in the deep layers, while α → 1 in the upper layers and may be essentially arbitrary elsewhere.

The reason for this treatment is the following: The condition of a constant total flux, dH/dm = 0, or equivalently, ∫[d(f_ν J_ν)/dτ_ν] dν = (σ_R/4π) T_eff^4 (the differential form), is accurate and numerically stable at deeper layers, where the mean intensity and the flux change appreciably from depth to depth. Consequently, the derivatives with respect to optical depth are well constrained. In fact, it must be applied at the lower boundary in order to impose the condition for the total flux given through the effective temperature. At low optical depths, the flux is essentially constant and moreover fixed by the conditions deeper in the atmosphere (around monochromatic optical depths of the order of unity), so that an evaluation of the derivatives is unstable, and often dominated by errors in the current values of κ_ν and J_ν. Moreover, the local temperature is constrained by this condition only indirectly. The integral form, which is mathematically equivalent, schematically written as ∫κ_ν J_ν dν = ∫κ_ν B_ν dν, is stable at all depths, including low optical depths, and is directly linked to the local temperature through the Planck function. It is applicable everywhere in the atmosphere.

§.§.§ Radiative/convective equilibrium equation

An atmosphere is locally unstable against convection if the Schwarzschild criterion is satisfied, ∇_rad > ∇_ad, where ∇_rad = (d ln T/d ln P)_rad is the logarithmic temperature gradient in radiative equilibrium, and ∇_ad is the adiabatic gradient. The latter is viewed as a function of temperature and pressure, ∇_ad = ∇_ad(T,P). The density ρ is considered to be a function of T and P through the equation of state. If convection is present, equation (<ref>) is modified to read α[∫_0^∞(κ_ν J_ν - η_ν)dν + (ρ/4π) dF_conv/dm] + β[∫_0^∞ d(f_ν J_ν)/dτ_ν dν - (σ_R/4π) T_eff^4 + F_conv/4π] = 0, where F_conv is the convective flux. Using the mixing-length approximation, it is given by (e.g., Hubeny & Mihalas 2014;16.5) F_conv = (gQH_P/32)^1/2 (ρ c_P T)(∇-∇_el)^3/2 (ℓ/H_P)^2, where H_P ≡ -(d ln P/dz)^-1 = P/(ρ g) is the pressure scale height, c_P is the specific heat at constant pressure, and Q ≡ -(d ln ρ/d ln T)_P. Further, ℓ/H_P is the ratio of the convective mixing length to the pressure scale height, taken as a free parameter of the problem. ∇ is the actual logarithmic temperature gradient, and ∇_el is the gradient in the convective elements. The latter is determined by considering the efficiency of the convective transport; see, e.g., Hubeny & Mihalas (2014;16.5), ∇-∇_el = (∇-∇_ad) + B^2/2 - B √(B^2/4 + (∇-∇_ad)), where B = [12√2 σ_R T^3 / (ρ c_P (gQH_P)^1/2 (ℓ/H_P))] τ_el/(1+τ_el^2/2), and where τ_el = χ_R ℓ is the optical thickness of the characteristic convective element with size ℓ. The gradient in the convective elements is thus a function of temperature, pressure, and the actual gradient, ∇_el = ∇_el(T,P,∇). The convective flux can also be viewed as a function of T, P, and ∇. It should be noted that although in many cases ∇ ≈ ∇_ad, we do not enforce this relation explicitly.
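To illustrate how the mixing-length expressions above are evaluated in practice, the following minimal Python sketch computes the convective flux at a single depth point. It is an illustration only, not an excerpt from any actual code: all function and variable names are ours, cgs units are assumed throughout, and the Schwarzschild criterion is applied explicitly.

```python
import numpy as np

SIGMA_R = 5.6704e-5  # Stefan-Boltzmann constant [erg cm^-2 s^-1 K^-4]

def convective_flux(T, P, rho, c_P, Q, chi_R, g, nabla, nabla_ad, ell_over_HP=1.0):
    """Mixing-length convective flux at one depth point (cgs units).

    Implements F_conv = (g Q H_P / 32)^(1/2) rho c_P T (nabla - nabla_el)^(3/2)
    (ell/H_P)^2, with (nabla - nabla_el) from the efficiency relation above.
    """
    if nabla <= nabla_ad:
        return 0.0                       # Schwarzschild criterion: convectively stable
    H_P = P / (rho * g)                  # pressure scale height
    ell = ell_over_HP * H_P              # convective mixing length
    tau_el = chi_R * ell                 # optical thickness of a convective element
    # efficiency parameter B of the gradient relation
    B = (12.0 * np.sqrt(2.0) * SIGMA_R * T**3
         / (rho * c_P * np.sqrt(g * Q * H_P) * ell_over_HP)
         * tau_el / (1.0 + 0.5 * tau_el**2))
    dn = nabla - nabla_ad
    dn_el = dn + 0.5 * B**2 - B * np.sqrt(0.25 * B**2 + dn)   # nabla - nabla_el
    return np.sqrt(g * Q * H_P / 32.0) * rho * c_P * T * dn_el**1.5 * ell_over_HP**2
```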
§.§.§ Equation of state

In the present context, the equation of state gives a relation between density and pressure. The gas pressure is given, assuming an ideal gas, by P = kTN = kT ∑_j N_j, and the mass density as ρ = ∑_j N_j m_j = m_H ∑_j N_j (m_j/m_H) = μ̅ m_H P/(kT), where N is the total particle number density, and k the Boltzmann constant. The total particle number density is given by the sum of the number densities of the individual atomic or molecular species, N_j; we assume that the number density of free electrons is negligible. m_j is the mass of species j, m_H the mass of the hydrogen atom, and μ̅ the mean molecular weight, given by μ̅ = ∑_j N_j (m_j/m_H) / ∑_j N_j. The individual number densities (concentrations) N_j are obtained by solving the chemical equilibrium equations, or possibly taking into account some departures from chemical equilibrium (see <ref>). However, in an essentially solar-composition cold gas, the majority of particles are hydrogen molecules and neutral helium atoms, in which case the mean molecular weight is simply μ̅ = (1+4Y)/(0.5+Y) ≈ 2.33, where Y ≈ 0.1 is the solar helium abundance (by number, with respect to hydrogen). Taking into account a contribution of heavier elements, in particular C, N, O, a more reasonable (yet still approximate) value is μ̅ ≈ 2.38.

§.§.§ Absorption and emission coefficients

The absorption coefficient is given by κ_ν = ∑_i ∑_ℓ ∑_u>ℓ n_ℓ,i σ^line_ℓu(ν) + ∑_i N_i σ^cont_i(ν) + ∑_j N_j σ^cond,abs_j(ν) + κ_ν^add, where the first term represents the contribution of spectral lines, summed over all species i, lower levels ℓ, and upper levels u. The second term is the contribution of continuum processes of species i. Unlike the case of stellar atmospheres, these processes are not very important in the case of SMO atmospheres, with the exception of the collision-induced absorption of H_2. The third term represents an absorption of photons on condensed particles, and the last term a possible additional or empirical opacity not included in the previous terms. In all cases, σ(ν) represents the corresponding cross section, N the corresponding number density, and n the individual level population. The correction for stimulated emission, 1-exp(-hν/kT), is assumed to be included in the transition cross sections. It should be stressed that cross sections for spectral lines describe line broadening effects and thus depend on temperature and the appropriate perturber number densities, the most important perturbers being the hydrogen molecule, H_2, and atomic helium, He.

Absorption cross sections for condensates depend on the assumed distribution of cloud particle sizes. There are several distributions considered in the literature, the most commonly used ones being a lognormal distribution (Ackerman & Marley 2001), or a distribution given by Deirmendjian (1964), used by Sudarsky et al. (2000, 2003) and subsequently in all applications using the CoolTlusty modeling code, n(a) ∝ (a/a_0)^6 exp[-6(a/a_0)], where a_0 is the modal particle size, usually taken as a free parameter. The adopted cross section is then a function of a_0, and is given by σ(a_0,ν) = ∫_0^∞ n(a) σ(a,ν) da / ∫_0^∞ n(a) da, where σ(a,ν) is the cross section for absorption on condensates of a single size, a, typically given by the Mie theory. The scattering coefficient is given by s_ν = ∑_i N_i σ^Ray_i(ν) + ∑_j N_j σ^cond,sc_j(ν), where σ^Ray_i is the Rayleigh scattering cross section of species i, and σ^cond,sc_j is the cross section for Mie scattering on condensate species j. The same averaging as that expressed by Eq.
(<ref>) is applied here as well. Notice that the scattering and the absorption cross sections σ^cond,sc_j(ν) and σ^cond,abs_j(ν) are generally different.

The absorption coefficient (<ref>) and the scattering coefficient (<ref>) express the so-called opacities per length. They are measured in units of cm^-1 (since cross sections are in cm^2 and number densities in cm^-3). In actual applications, one often works in terms of opacities per mass, in units of cm^2 g^-1. They are given by, for instance for the total opacity, χ_ν^' ≡ χ_ν/ρ. Since the particle number densities are roughly proportional to the mass density, the opacity per mass is much less sensitive to the density than the opacity per length. This property is used to advantage when constructing opacity tables, because interpolating in density is more accurate using the opacity per mass.

§.§ Treatment of external irradiation

If the distance, D, between the star and the planet is much larger than the stellar radius, r_∗, then all the rays from the star to a given point at the planetary surface are essentially parallel. The total energy received per unit area at the planetary surface at the substellar point is (e.g., Hubeny & Mihalas 2014, Eq. 3.72) E = 2π (r_∗/D)^2 ∫_0^1 I_∗(μ) μ dμ = 4π (r_∗/D)^2 H_∗ = (r_∗/D)^2 F_∗, where H_∗ is the first moment of the specific intensity at the stellar surface, H_∗ = (1/2) ∫_-1^1 I_∗(μ) μ dμ = (1/2) ∫_0^1 I_∗(μ) μ dμ (the second equality is valid if there is no incoming radiation at the stellar surface). The incoming (physical) flux at the planetary surface, intercepted by an area perpendicular to the line of sight toward the star (i.e., at the substellar point), is thus given by F^ext_0 ≡ 2π ∫_0^1 I^ext μ dμ = E = 4π (r_∗/D)^2 H_∗. Expressing the intercepted flux as the first moment of the specific intensity, H^ext_0 = F^ext_0/4π, then H^ext_0 = (r_∗/D)^2 H_∗.

If one does not compute separate model atmospheres for individual annuli corresponding to different positions of a star on the planetary sky (i.e., at different distances from the substellar point), and instead uses some sort of averaging over the planetary surface, then one has to introduce an additional parameter, f, that accounts for the fact that the planet has a non-flat surface. If we assume that the incoming irradiation energy is evenly distributed over the irradiated hemisphere, then f=1/2; if we assume that the incoming energy is redistributed over the whole surface, then f=1/4. Such an averaged incoming flux is thus given by H^ext = f H^ext_0 = f (r_∗/D)^2 H_∗.

Finally, one needs to relate the incoming flux to the incoming specific intensity, because this is the quantity used in the upper boundary condition of the transfer equation for the specific intensity. If we assume that the irradiation at the stellar surface is isotropic – better said, if we artificially isotropize a highly anisotropic irradiation, I^ext(μ) = I_0^ext – then H^ext = (1/2)∫_0^1 I^ext(μ) μ dμ = (1/4) I_0^ext, and thus I_0^ext = 4 H_∗ (r_∗/D)^2 f = (F_∗/π)(r_∗/D)^2 f. This equation can be rewritten in a useful form, expressing H_∗ = (σ_R/4π) T_∗^4, where T_∗ is the effective temperature of the irradiating star, as I_0^ext = (σ_R/π) T_∗^4 W = B(T_∗) W, where W ≡ (r_∗/D)^2 f is the so-called dilution factor. In the second equality in Eq. (<ref>), B(T_∗) is the total (frequency-integrated) Planck function.
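As a concrete illustration of the relations just derived, the short Python sketch below evaluates the dilution factor W and the isotropized incoming intensity I_0^ext = W B(T_∗). It is a sketch under the stated assumptions (cgs units, a blackbody for the frequency-integrated stellar flux); the function and variable names are ours.

```python
import numpy as np

SIGMA_R = 5.6704e-5   # Stefan-Boltzmann constant [cgs]
R_SUN = 6.957e10      # solar radius [cm]
AU = 1.496e13         # astronomical unit [cm]

def incoming_intensity(T_star, r_star, D, f=0.5):
    """Isotropized incoming specific intensity I_0^ext = W * B(T_*),
    with W = (r_*/D)^2 * f the dilution factor.
    f = 1/2: energy spread over the dayside only; f = 1/4: over the whole surface.
    """
    W = (r_star / D)**2 * f
    B_total = (SIGMA_R / np.pi) * T_star**4   # frequency-integrated Planck function
    return W * B_total

# Example: solar-type star at 0.05 AU, dayside-averaged irradiation
I0 = incoming_intensity(T_star=5777.0, r_star=R_SUN, D=0.05*AU, f=0.5)
```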
§.§.§ Day/night side interaction

The formalism described above applies to any type of object that is irradiated by an external source, such as a planet, a brown dwarf, or even a star in a close binary system. Close-in planets with tidally-locked rotation present a special case. Their day and night sides exhibit vastly different atmospheric conditions, and therefore it is quite natural that an interaction of the day and the night side is important. A proper description of this effect requires hydrodynamic simulations (e.g., Komacek & Showman 2016, and references therein) and is thus beyond the scope of the simple atmospheric models considered here. However, there are several approaches suggested in the literature that deal with this effect in an approximate way, which are described below.

The simplest way, considered e.g. in Sudarsky et al. (2003), is based on characterizing the degree of the day/night side heat redistribution through an empirical parameter f, as described above. Burrows et al. (2006) introduced an analogous parameter, P_n, as the fraction of incoming flux that is redistributed to the night side. The underlying assumption is that the fraction P_n of the incoming flux is somehow removed before the incoming radiation reaches the upper boundary of the atmosphere, and is deposited at the lower boundary of the night-side atmosphere.

A more realistic approach was suggested by Burrows et al. (2008). The day side of the planet is irradiated by the true external radiation coming from the star, but a fraction P_n is then removed in a certain depth range, parameterized by limiting pressures P_0 and P_1. The same amount of energy is deposited at the night side, also in a certain depth range, usually but not necessarily in the same pressure range. The rationale for this approach is that meridional circulations that may occur below the surface may actually carry a significant amount of energy to the night side.

Specifically, the total radiation flux (expressed as H) received by a unit surface of a planet at the angular distance μ_0 from the substellar point is given by H_tot^ext(μ_0) = (r_∗/D)^2 μ_0 ∫_0^∞ H^∗_ν dν = (r_∗/D)^2 μ_0 (σ_R/4π) T_∗^4, so that the integrated flux over the surface of the dayside hemisphere is H̅_tot^ext ≡ ∫_0^1 H_tot^ext(μ_0) dμ_0 = (1/2)(r_∗/D)^2 (σ_R/4π) T_∗^4. One defines a local gain/sink of energy, D(m), such that ∫_0^∞ D(m) dm = H^irr, where H^irr ≡ P_n H̅_tot^ext. One assumes that D(m) is non-zero only between the column masses m_0 and m_1, defined through the limiting pressures P_0 and P_1. These are free, essentially ad hoc parameters that aim to mimic a complex radiation-hydrodynamical process. Hydro simulations may in principle provide guidance for the choice of these parameters. Burrows et al. (2008) adopted as an educated guess the values P_0=0.05, P_1=0.5 bars. D(m) is negative (more precisely, non-positive) on the day side, and non-negative on the night side. One is free to choose an actual form of the function D(m); Burrows et al. (2008) considered two models: (i) D(m) constant between m_0 and m_1, i.e., D(m) = H^irr/(m_1-m_0), or (ii) a model with D(m) linearly decreasing between m_0 and m_1, in such a way that D(m) reaches 0 at m=m_1; then D(m) = 2 H^irr (m_1-m)/(m_1-m_0)^2. The radiative equilibrium equation then becomes, in the integral form, ∫_0^∞ κ_ν (J_ν - B_ν) dν = -D(m), and in the differential form, dH/dm = -D(m), or H(m) = (σ_R/4π) T_eff^4 + ∫_m^m_1 D(m^') dm^'.
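A minimal sketch of the two parameterizations of D(m) described above (constant, and linearly decreasing between the column masses m_0 and m_1) may look as follows. The function name and the normalization check are ours; the sign convention (positive for deposition on the night side) follows the text.

```python
import numpy as np

def sink_profile(m, m0, m1, H_irr, model="constant"):
    """Energy gain/sink D(m), non-zero only for m0 <= m <= m1 and
    normalized so that its integral over m equals H_irr."""
    m = np.asarray(m, dtype=float)
    D = np.zeros_like(m)
    inside = (m >= m0) & (m <= m1)
    if model == "constant":
        D[inside] = H_irr / (m1 - m0)
    elif model == "linear":                 # D(m) -> 0 at m = m1
        D[inside] = 2.0 * H_irr * (m1 - m[inside]) / (m1 - m0)**2
    return D

# Sanity check: the integral of D(m) should recover H_irr
m = np.linspace(0.0, 10.0, 100001)
D = sink_profile(m, m0=2.0, m1=5.0, H_irr=1.0, model="linear")
assert abs(np.sum(D) * (m[1] - m[0]) - 1.0) < 1e-3
```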
These equations are easily modified for the convection zone, in the case where the energy gain/sink region overlaps the convection zone.

§.§ Treatment of clouds

Ideally, the cloud properties, namely the position, extent, and distribution of condensed particle sizes, should be determined self-consistently with the local atmospheric conditions. However, this is a very difficult problem which is not yet fully solved, even in the context of cloud formation in the Earth's atmosphere. In the context of SMO atmospheres, one has to resort to various approximations and parameterizations of the problem.

Ackerman and Marley (2001) reviewed earlier work, and developed a simple, yet physically motivated treatment of cloud formation. They formulate an equation for the mole fractions of the gas and condensed phases of a condensable species, q_g and q_c, respectively. This approach sets the cloud base at the depth z where q_g(z)=q_s(z), where q_s(z) is the vapor mole fraction corresponding to the saturation vapor pressure at depth z. In other words, the cloud base is set at the point where the actual T-P profile intersects the condensation curve of the species. Below this point, there are no condensates, q_c(z) = 0, if q_g(z) < q_s(z), and above this point, where q_g(z) ≥ q_s(z), the mole fraction of the condensate is given by an equation that expresses a balance between turbulent diffusion, which mixes both the gas and condensed particles and transports them upward, and sedimentation, which transports condensate downward, -K ∂(q_g+q_c)/∂z - v_sed q_c = 0, where v_sed is the mass-weighted droplet sedimentation velocity, and K is the vertical eddy diffusion coefficient. The latter can be expressed, assuming free convection, as a function of basic state parameters (Ackerman & Marley 2001), namely the atmospheric scale height, convective mixing length, mean molecular weight, temperature, and density. The sedimentation velocity is expressed as v_sed = f_rain v_conv, where f_rain, the ratio of the sedimentation velocity to the convective scale velocity, is taken as a free parameter of the problem. For f_rain → 0, sedimentation is essentially disregarded, which leads to a cloud extending from the base all the way upward. For f_rain ≫ 1, sedimentation is very efficient, and the cloud mass distribution exhibits a sharp, essentially exponential, decline above the base.

Equations (<ref>) and (<ref>) apply in the convection zone. In the convectively stable regions, one introduces two more free parameters, a minimum “mixing length” and a minimum value of the K coefficient, to be able to use the same expressions as in the convection zone. For the distribution of cloud particle sizes, Ackerman & Marley (2001) assume a lognormal distribution, in which the geometric mean radius and the number concentration of particles are expressed through q_c and f_rain, so that it contains only one free parameter, the geometric standard deviation of the distribution. Although the Ackerman-Marley model is physically motivated, it still inevitably contains several adjustable free parameters.

Alternatively, one can devise an approach that treats the cloud mass distribution parametrically, but can mimic a cloud composed of several condensed species. It can also offer some additional flexibility in treating cloud shapes (Sudarsky et al. 2000, 2003; Burrows et al. 2006). This treatment of the clouds is based on the following simple model, which is also adopted in the CoolTlusty code.
The opacity (per gram of atmospheric material) of the given condensate j at pressure P is given by κ^'_j(ν, P) = N_j M_j (A/μ) S_j k̅_j(ν, a_0,j) f_j(P), where N_j is the number density (mixing ratio) of species j, M_j its molecular weight, μ the mean molecular weight of the atmospheric material, and A the Avogadro number. The factor N_j M_j (A/μ) transforms the opacity per gram of condensate to the opacity per gram of atmospheric material. S_j is the supersaturation ratio, and k̅_j(ν, a_0,j) is the opacity per gram of species j at frequency ν and for the modal particle size a_0,j. CoolTlusty uses a previously computed table of k̅_j for a number of values of a_0 and frequencies ν. An analogous expression is used for the scattering opacity. In Eq. (<ref>), the supersaturation ratio and the modal particle size are taken as free parameters of the model. The intrinsic optical properties of cloud particles (i.e., the absorption and scattering coefficients) are contained in appropriate tables. All the physics of cloud absorption and scattering is thus set up independently of the model atmosphere code.

The cloud shape function is parametrized in the following way (Burrows et al. 2006): The cloud base is set at pressure P_0, given typically as an intersection of the current T-P profile and the corresponding condensation curve. It can, however, be set differently – see below. One also introduces a plateau region between this and a higher pressure, P_1 ≥ P_0, which is meant to mimic a contribution of other condensate species for which the given one serves as a surrogate. For a single isolated cloud, P_1 → P_0, and the flat part would shrink to zero extent. However, for multiple cloud condensates, or for convective regions with multiple T-P intersection points, it is advantageous to introduce a flat part that mimics these phenomena. On both sides of the flat part, f decreases as a power law whose exponents are free parameters of the problem. The cloud shape function is thus given by

f(P) =
  (P/P_0)^c_0,    P ≤ P_0,
  1,              P_0 ≤ P ≤ P_1,
  (P/P_1)^-c_1,   P ≥ P_1.

In this model, the supersaturation ratio S and the modal particle size a_0 are taken as free parameters, and the cloud shape function contains three more free parameters, P_1, c_0, and c_1.

§.§ Departures from chemical equilibrium

There are two kinds of departures from chemical equilibrium that are taken into account in a number of studies of SMO atmospheres:

* Departures due to the rainout of a condensable species. Burrows & Sharp (1999) developed a simple and useful procedure to treat such departures from chemical equilibrium. The concentrations of the species that are influenced by a rainout depend only on temperature and pressure, and therefore one may construct the corresponding opacity tables independently of an actual model atmosphere. In other words, such departures from strict chemical equilibrium lead only to a modification of the opacity table, and not to a necessity to change the computational algorithm for constructing model atmospheres, in contrast to the next case, described below.

* The second type of departure occurs when the chemical reaction time for certain important reactions is much larger than the vertical transport (mixing) timescale. The mechanism is sometimes referred to as “quenching” (for a recent review of the literature on the subject, see Madhusudhan et al. 2016). It is usually considered for the carbon and nitrogen chemistry. These are described schematically by the net reactions CO + 3 H_2 ⟷ CH_4 + H_2O, and N_2 + 3 H_2 ⟷ 2 NH_3.
Because of the strong C≡O and N≡N bonds, the reactions (<ref>) and (<ref>) proceed much faster from right to left than from left to right. For instance, for carbon the reaction in which CO is converted to CH_4 is very slow, and therefore CO can be vertically transported by convective motions or eddy diffusion to the upper and cooler atmospheric layers, in which it would be virtually absent in chemical equilibrium. The net result is an overabundance of CO and N_2 and an underabundance of CH_4 and NH_3 in the upper layers of the atmosphere. The mechanism was first suggested by Prinn & Barshay (1977) for the Jovian planets in the solar system, and subsequently applied by Fegley & Lodders (1996), Griffith & Yelle (1999), and Saumon et al. (2000) to the atmospheres of brown dwarfs. Hubeny & Burrows (2007) performed a systematic study of this effect for the whole range of L and T dwarfs. We will use their notation and terminology below.

The mixing time is given by t_mix = H^2/K_zz in the radiative zone, and t_mix = 3H_c/v_c in the convection zone, where H is the pressure scale height, K_zz is the coefficient of eddy diffusion, H_c the convective mixing length (typically taken equal to H), and v_c is the convective velocity. While the mixing time in the convective region is well defined, its value in the radiative region is quite uncertain because of uncertainties in K_zz, which can attain values between 10^2 and 10^8, as discussed, e.g., by Saumon et al. (2006, 2007). The chemical time is also uncertain. One can use the value of Prinn & Barshay (1977) for carbon chemistry, t_chem ≡ t_CO = N(CO)/[κ_CO N(H_2) N(H_2CO)], with κ_CO = 2.3×10^-10 exp(-36200/T), where N(A) is the number density of species A. Some other estimates of the chemical time are available; see Hubeny & Burrows (2007). For a more recent treatment of non-equilibrium carbon chemistry, see, e.g., Visscher & Moses (2011) and Moses et al. (2011). For nitrogen, the corresponding expressions are t_chem ≡ t_N_2 = 1/[κ_N_2 N(H_2)], with κ_N_2 = 8.54×10^-8 exp(-81515/T). For a more recent treatment of non-equilibrium nitrogen chemistry, see, e.g., Moses et al. (2011).

The effects of departures from chemical equilibrium are treated in a simple way. For the current T-P profile, one finds the intersection point where the mixing time equals the chemical reaction time. Above this point (at lower pressures) the number densities of CO and CH_4 are set to constant values equal to those found at the intersection point. An analogous procedure is applied for the nitrogen chemistry, fixing the N_2 and NH_3 number densities above the intersection point. Since the amount of available oxygen atoms is changed by this process (more are being sequestered by CO), the number density of water is also held fixed above the intersection point.

§.§ Empirical modifications of the basic equations

§.§.§ Modifications of radiative equilibrium

The radiative equilibrium equation (<ref>), or the radiative/convective equilibrium equation (<ref>), can be modified by adding an empirical energy loss/gain term, as was done for instance by Burrows et al. (2008). One can introduce an empirical term E(m), together with the parameter D(m) discussed in <ref>, so that the integral form of the radiative equilibrium is written as ∫_0^∞ (κ_ν J_ν - η_ν)dν = -D(m) - E(m), where E(m) represents an energy gain (E>0) or loss (E<0) per unit volume.
The quantity D(m) is related to an empirical redistribution of incoming radiation (as was done in Burrows et al. 2008), while E(m) refers to some unspecified empirical energy gain/sink.

§.§.§ Modifications of chemical equilibrium

There are several possible modifications of the chemical equilibrium:

* A simple modification for a rainout of the species, after Sharp & Burrows (1997).
* Considering departures from chemical equilibrium due to quenching of the carbon and nitrogen chemistry, arising from long chemical timescales as compared to dynamical timescales, as described above in <ref>.
* Mixing ratios of the individual species can be set up completely empirically, as in Madhusudhan & Seager (2009); see also Line et al. (2012) and Madhusudhan et al. (2014); for a review, refer to Madhusudhan et al. (2016). In that case the mixing ratios of selected species are treated as free parameters of the problem.

§.§.§ Modifications of opacities

As indicated in Eq. (<ref>), one can include empirical opacity sources. For instance, one may consider an artificial optical absorber, as in Burrows et al. (2008), that represents an additional opacity source in the optical region, placed in a certain depth range in the atmosphere.

§.§ Synthetic (forward) versus analytic (retrieval) approach

There are essentially two types of approaches to modeling atmospheres of substellar-mass objects, and in particular the giant planets:

* A synthetic, or forward, approach, in which one solves the basic structural equations to determine the structure of the atmosphere, computes a predicted spectrum, and compares the synthetic spectrum to observations. When an agreement is consistently reached for the given set of basic input parameters of the model (effective temperature, surface gravity, chemical composition, external irradiation, etc.), the analyzed object is declared to be described by the basic input parameters equal to those of the model. In this sense, one usually calls this procedure a “determination of the basic parameters.” Another, perhaps even more important, result of such a study is that it verifies the validity of the basic physical picture of the studied object. This approach is exactly parallel to the usual approach in stellar physics, where one constructs a grid of model atmospheres together with synthetic spectra, and by comparison to observations determines the basic input parameters of the model.

* An inverse, or retrieval, approach (also called an analytic, or semi-empirical, approach). Here one assumes a given structure of the atmosphere. Typically, the temperature is assumed to be a prescribed function of depth (pressure), and the chemical composition is either computed consistently with this T-P profile, or is also set empirically. One then computes the emergent radiation for this atmosphere, and tries many such structures until an agreement with observations is achieved. In the context of the analysis of exoplanets, this approach is usually called the “retrieval” method (Madhusudhan & Seager 2009); see also Irwin et al. (2008), Line et al. (2012, 2013), and Madhusudhan et al. (2014); for a review, refer to Madhusudhan et al. (2016).

An advantage of the synthetic approach is that it computes a model based on a true physical and chemical description. The disadvantage is that the input physics and chemistry are often very uncertain or approximate. Thus the analytic approaches have the potential to highlight missing parts of the physics and chemistry. As an example from a different field, semi-empirical models of the solar atmosphere (e.g., Vernazza et al.
1973) showed that the radiative equilibrium assumption cannot hold in the uppermost layers (the chromosphere), and some additional source of energy has to be invoked. These models determined the temperature as a function of depth needed to explain the observed spectral features, and even estimated the amount of extra energy needed to produce such a temperature structure.

Here, we will mostly describe the synthetic approach, but will also describe the methods used to obtain the emergent radiation from a given structure, which is at the heart of the analytic method.

§.§ 1-D versus multi-D models

The basic approximation inherent in the above described modeling approach is the assumption of a plane-parallel, horizontally homogeneous, i.e., 1-dimensional (1-D), atmosphere. In other words, the structural parameters are allowed to depend only on one coordinate – the depth in the atmosphere. There are several essential reasons why this approximation may be violated:

* In the case of strong external irradiation, the atmospheric conditions depend on the angular distance of the given position in the atmosphere from the substellar point.
* If clouds of condensates are formed, they are most likely formed with an inhomogeneous distribution over the stellar/planetary surface.
* For a close-in planet with a tidally-locked rotation period, an interaction between the day and night sides will inevitably lead to meridional circulations that may exhibit a rather complicated pattern (e.g., Komacek & Showman 2016).
* The presence of convection leads to inhomogeneities, but these typically occur on small geometrical scales, so they are usually treated using horizontally-averaged (1-D) models.

The first two issues may be dealt with approximately by using the concept of a 1 1/2-D approach, in which one constructs a series of 1-D models for individual patches of an atmosphere.

* In the case of strongly irradiated planets, one can construct models for rings (belts) at an equal distance from the substellar point. In other words, all points on a given belt see the irradiating star at the same polar angle. This was actually done by Barman et al. (2001). They found that the differences between this approach and the original, fully 1-D one are not large. Nevertheless, for more accurate models these effects should be taken into account.
* Similarly, one can deal with horizontal inhomogeneities due to clouds by constructing 1-D models with and without clouds. Introducing an empirical cloud-covering factor, a, one can approximate the predicted radiation from the object as F_λ = a F_λ^clouds + (1-a) F_λ^no clouds (see the sketch after this list). One can also form a final spectrum by a linear combination of models with various cloud extents, but in such a case the number of input empirical parameters would become too large, with a questionable physical meaning.
* To deal with inhomogeneities caused by meridional circulation and other dynamical phenomena, the current approach is first to construct a hydrodynamical model without radiation, or with a simplified treatment of radiation transport (e.g., Showman & Guillot 2002, Showman et al. 2009, 2010), and then to use the atmospheric structure following from such a model to compute “snapshot” spectra using detailed radiation transport, possibly using the methods described in this paper. This was done for instance by Burrows et al. (2010).
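For completeness, a one-line implementation of the cloud-covering-factor combination from the second item above (the helper name is ours):

```python
import numpy as np

def combine_spectra(F_cloudy, F_clear, a):
    """Patchy-cloud spectrum, F = a*F_cloudy + (1-a)*F_clear, for a
    covering factor a in [0, 1] and fluxes on a common wavelength grid."""
    if not 0.0 <= a <= 1.0:
        raise ValueError("covering factor a must lie in [0, 1]")
    return a * np.asarray(F_cloudy) + (1.0 - a) * np.asarray(F_clear)
```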
One can in principle construct, using present computational facilities, more sophisticated 3-D radiation hydrodynamic model atmospheres of SMOs, and in particular close-in exoplanets, but this field of study is still in its infancy.

§ NUMERICAL SOLUTION

The set of structural equations (<ref>), (<ref>), (<ref>), (<ref>) or (<ref>), and the necessary auxiliary expressions, is discretized in depth and frequency, replacing derivatives by differences and integrals by quadrature sums. This yields a set of non-linear algebraic equations. Detailed forms of the discretized equations are summarized in Hubeny & Mihalas (2014;18.1); see also Appendix A. Upon discretization, the physical state of an atmosphere is fully described by the set of vectors ψ_d for every depth point d, (d=1, …, ND), ND being the total number of discretized depth points. The full state vector ψ_d is given by ψ_d = { J_1, …, J_NF, T, [ρ], [∇] }, where J_i, (i=1,…, NF), is the mean intensity of radiation at the i-th frequency point; we have omitted the depth subscript d. NF is the number of discretized frequency points. The quantities in the square brackets are optional, and are considered to be components of vector ψ only in specific cases. In most applications, ρ and ∇ are taken as functions of T and P. However, with the pressure P being given a priori as P=mg, they are viewed as functions of the temperature T only.

§.§ Linearization

Although the individual methods of solution may differ, the resulting set of non-linear algebraic equations is solved by some kind of linearization. Generally, a solution is obtained by an application of the Newton-Raphson method. Suppose the required solution ψ_d can be written in terms of the current, but imperfect, solution ψ_d^0 as ψ_d = ψ_d^0 + δψ_d. The entire set of structural equations can be formally written as an operator P acting on the state vector ψ_d as P_d(ψ_d) = 0. To obtain the solution, we express P_d(ψ_d^0 + δψ_d) = 0, using a Taylor expansion of P_d, P_d(ψ_d^0) + ∑_j ∂P_d/∂ψ_d,j δψ_d,j = 0, and solve for δψ_d. Because only the first-order (i.e., linear) term of the expansion is taken into account, this approach is called a linearization. To obtain the corrections δψ_d, one has to form a matrix of partial derivatives of all the equations with respect to all the unknowns at all depths – the Jacobi matrix, or Jacobian – and to solve equation (<ref>).

The radiative equilibrium equation (in the differential form) couples two neighboring depth points, d-1 and d, and the radiative transfer equation couples depth point d to the two neighboring depths d-1 and d+1; see equations (<ref>) – (<ref>). Consequently, the system of linearized equations can be written as -A_d δψ_d-1 + B_d δψ_d - C_d δψ_d+1 = L_d, where A, B, and C are NN × NN matrices, with NN being the dimension of vector ψ_d. The minus signs at the A and C terms in Eq. (<ref>) are for convenience only. The block of the first NF rows and NF columns of any of the matrices A, B, and C forms a diagonal sub-matrix (because there is no coupling of the individual frequencies in the transfer equation), while the row and the column corresponding to T are full (because the radiative or radiative/convective equilibrium equation contains the mean intensity at all frequency points). L is a residual error vector, given by L_d = -P_d(ψ_d^0). At the convergence limit, L → 0 and thus δψ_d → 0. Equation (<ref>) forms a block-tridiagonal system, which is solved by a standard Gauss-Jordan elimination.
It consists of a forward elimination, D_d = (B_d - A_d D_d-1)^-1 C_d, d=2,…,ND, starting with D_1 = B_1^-1 C_1, and Z_d = (B_d - A_d D_d-1)^-1 (L_d + A_d Z_d-1), d=2,…,ND, with Z_1 = B_1^-1 L_1. The second part is a back-substitution, δψ_d = D_d δψ_d+1 + Z_d, d=ND-1,…,1, starting with δψ_ND = Z_ND.

This procedure, known as complete linearization, was developed in the seminal paper by Auer & Mihalas (1969). However, one has to perform ND inversions of an NN × NN matrix per iteration – see Eqs. (<ref>) and (<ref>). Since the dimension of the state vector ψ, that is, the total number of structural parameters NN, can be large, a direct application of the original complete linearization is too time-consuming, and therefore not practical, unless the number of frequencies is very small (of the order of a few hundred).

§.§ Hybrid CL/ALI method

The method, developed by Hubeny & Lanz (1995), combines the basic advantages of the complete linearization (CL) and the accelerated lambda iteration (ALI) methods. We stress that this method employs just one aspect of the general idea of the ALI schemes, expressed by Eq. (<ref>) below. More traditional applications of ALI provide an iterative solution of the radiative transfer equation with a dominant scattering term in the source function. One such application is outlined in <ref>. The hybrid CL/ALI method is essentially the linearization method, with the only difference from the traditional CL method being that the mean intensity at some (most) frequency points is not treated as an independent state parameter, but is instead expressed as J_di = Λ^∗_di [η_di/κ_di] + ΔJ_di, where d and i represent indices of the discretized depth and frequency points, respectively, Λ^∗ is the so-called approximate Lambda operator, and ΔJ is a correction to the mean intensity. The approximate operator is in most cases taken as a diagonal (local) operator, hence its action is just an algebraic multiplication. It is evaluated in the formal solution of the transfer equation and is held fixed in the next iteration of the linearization procedure, and so is the correction ΔJ. Since the absorption and emission coefficients κ and η are known functions of temperature, one may express the linearization correction to the mean intensity J_di as δJ_di = Λ^∗_di ∂(η_di/κ_di)/∂T_di δT_di. Equation (<ref>) shows that J_di is effectively eliminated from the set of unknowns, thus reducing the size of vector ψ to NN = NF_CL + 1, where NF_CL is the number of frequency points (called explicit frequencies) for which the mean intensity is kept linearized. As was shown by Hubeny & Lanz (1995), NF_CL can be very small, of the order of O(10^0) to a few times 10^1. In the context of SMOs, this method was used for instance by Sudarsky et al. (2003) to construct a grid of exoplanet model atmospheres.

§.§ Rybicki scheme

An alternative scheme, which can be used in conjunction with either the original complete linearization or the hybrid CL/ALI scheme, is a generalization of the method developed originally by Rybicki (1969) for solving a NLTE line transfer problem. It starts with the same set of linearized structural equations, and consists of a reorganization of the state vector and the resulting Jacobi matrix in a different form.
Instead of forming a vector of all state parameters at a given depth point, it considers a set of vectors of the mean intensity, each containing the mean intensities at one frequency point for all depths, δJ_i ≡ {δJ_1i, δJ_2i, …, δJ_ND,i}, i=1,…,NF, and analogously for the vector of temperatures, δT ≡ {δT_1, δT_2, …, δT_ND}. In the description of the method presented in Hubeny & Mihalas (2014;17.3), an analogous vector δN for the particle number density was introduced, but this is not necessary here.

The linearized radiative transfer equation can be written as ∑_d^'=d-1^d+1 U_dd^',i δJ_d^'i + ∑_d^'=d-1^d+1 R_dd^',i δT_d^' = E_di, for i=1,…,NF. In the matrix notation, U_i δJ_i + R_i δT = E_i, where U_i and R_i are ND × ND tridiagonal matrices that account for a coupling of the corrections to the radiation field at frequency ν_i and the material properties, which are taken as a function of T, at the three adjacent depth points (d-1, d, d+1). Analogously, the linearized radiative/convective equilibrium equation is written as ∑_i=1^NF V_i δJ_i + W δT = F, where V_i and W are generally bi-diagonal matrices (in the differential form of the radiative/convective equilibrium equation; in the purely integral form they would be diagonal). The overall structure here is the reverse of that of the original complete linearization, in the sense that the roles of frequencies and depths are interchanged. The matrix elements are the same; they only appear in different places. For instance, U_dd,i ≡ (B_d)_ii, U_d,d-1,i ≡ (A_d)_ii, U_d,d+1,i ≡ (C_d)_ii, R_dd,i ≡ (B_d)_i,NF+1, R_d,d-1,i ≡ (A_d)_i,NF+1, and so on. The global system is block-diagonal (since the frequency points are not coupled), with an additional block (“row”), the internal matrices being tridiagonal. Corrections to the mean intensities are found from Eq. (<ref>), δJ_i = U_i^-1 E_i - (U_i^-1 R_i) δT. Substituting Eq. (<ref>) into (<ref>), one obtains for the correction of temperature (W - ∑_i=1^NF V_i U_i^-1 R_i) δT = (F - ∑_i=1^NF V_i U_i^-1 E_i), which is solved for δT; the δJ_i are then obtained from Eq. (<ref>).

In this scheme, one has to invert NF tridiagonal matrices U_i, which is very fast, plus perform one inversion of the ND × ND grand matrix in Eq. (<ref>), which is also fast. Since the computer time scales linearly with the number of frequency points, the method can be used even for models with a large number of frequency points (several times 10^4). In the context of SMOs, this method was first used by Burrows et al. (2006) to construct a grid of L and T model atmospheres.

We illustrate the convergence properties of the Rybicki scheme with two examples. First, we consider a brown dwarf model atmosphere computed with CoolTlusty. The convergence pattern, displayed in Fig. <ref>, is similar to that of most other SMO model atmosphere calculations. Overall, the convergence properties are excellent. The iteration process could have been safely stopped after the maximum relative change of temperature decreased below 10^-4; however, we set the convergence criterion here to 10^-5. For the purposes of demonstrating the numerical properties of the method, we chose a simplified numerical treatment with 5000 discretized frequency points between ν = 6×10^12 and 7×10^14 s^-1. Calculation of the model took about 30 s on a MacBook Pro, OSX 10.9.5 with a 2.2 GHz Intel i7 processor, using the open-source gfortran compiler.
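The algebra of the Rybicki-type elimination just described translates directly into code. The Python sketch below solves with the NF matrices U_i, assembles the ND × ND grand matrix for δT, and recovers the δJ_i. For clarity the tridiagonal U_i are stored and solved as dense matrices; a production code would use a banded solver. All names are ours.

```python
import numpy as np

def rybicki_temperature_correction(U, R, V, W, E, F):
    """Solve (W - sum_i V_i U_i^{-1} R_i) dT = F - sum_i V_i U_i^{-1} E_i,
    then recover dJ_i = U_i^{-1} E_i - (U_i^{-1} R_i) dT.

    Shapes: U, R, V: (NF, ND, ND); W: (ND, ND); E: (NF, ND); F: (ND,).
    """
    NF, ND, _ = U.shape
    grand = W.copy()                     # grand matrix for dT
    rhs = F.copy()
    UinvR = np.empty_like(R)
    UinvE = np.empty((NF, ND))
    for i in range(NF):
        UinvR[i] = np.linalg.solve(U[i], R[i])   # U_i^{-1} R_i
        UinvE[i] = np.linalg.solve(U[i], E[i])   # U_i^{-1} E_i
        grand -= V[i] @ UinvR[i]
        rhs -= V[i] @ UinvE[i]
    dT = np.linalg.solve(grand, rhs)
    dJ = UinvE - np.einsum('ijk,k->ij', UinvR, dT)
    return dT, dJ
```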
We will show the properties of the actual model (temperature structure, conservation of the total flux, numerical check of the radiative/convective equilibrium) later in <ref>.

Another example is a model atmosphere of a giant planet with T_eff=100 K (in the stellar atmosphere terminology, i.e., with T_eff describing the total energy flux coming from the interior), log g = 3, irradiated by a solar-type star at a distance of 0.06 AU. The convergence pattern is shown in Fig. <ref>. For comparison, we also show the convergence pattern for the same model computed using the hybrid CL/ALI method, where the 10 highest frequencies are treated using complete linearization, while the remaining frequencies are treated with ALI – see Fig. <ref>. In order to be able to converge the model, one has to set the division parameters α and β in such a way that β=1 for τ_ross ≥ 0.5 and β=0 elsewhere, while α=1 everywhere except the last 5 depth points, where it is set to 0. Convergence is now much slower, although still stable. The corresponding temperature structure is displayed in Fig. <ref>. The upper panel shows the temperature as a function of the column mass, while the lower panel shows the temperature difference between the two models. Because the radiative/convective equilibrium equation is solved differently in the two cases, there are some differences, albeit quite small and otherwise inconsequential.

§.§ Overall procedure of the model construction

Construction of a model is composed of several basic steps, which are described below.

§.§.§ Initialization

Since the overall scheme is an iterative one, an initial estimate of a model is needed. It can be obtained in three possible ways:

* Using a previously constructed model atmosphere for similar input parameters. This way, one can compute a model with a different chemical composition, or with a slightly different irradiation flux, than a model computed earlier. If one does not change the input parameters significantly, the iterations may proceed fast, and the overall computer time is shorter than when using other methods for providing the initial model.
* Using an LTE-gray model atmosphere. This is the typical method of obtaining a starting model from scratch. The numerical procedure is described in Appendix C.
* In some cases one can use an empirical temperature structure, using for instance the parametric approach of Madhusudhan & Seager (2009).

§.§.§ Global iteration loop

Each iteration consists of two main steps:

(A) Formal solution. This step includes all calculations before entering the linearization step of the global scheme. Take the current temperature, T(m), and then:

* Possibly smooth it if it exhibits an oscillatory behavior as a function of depth.
* Compute opacities (by interpolating in the opacity tables).
* Solve the radiative transfer equation for all frequency points – see <ref> and <ref>.
* Recompute the temperature gradients (current and adiabatic), determine the position of the convection zone, and possibly correct the temperature to satisfy the conservation of the total (radiative + convective) flux – <ref>.
* With the new temperature, recalculate the mass density, and possibly return to step (ii) and iterate several times.

This procedure results in a set of new values of the structural parameters, T, ρ, and J_ν, which are as internally consistent as possible, and with which one enters the next iteration of the global linearization scheme.
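Schematically, one global iteration (step A followed by step B, described next) can be summarized in the following Python-style outline. The two callables are placeholders for the operations itemized above, not actual CoolTlusty routines; only the control flow is meant to be taken literally.

```python
import numpy as np

def converge_atmosphere(T, m, formal_solution, linearized_correction,
                        tol=1e-5, max_iter=100):
    """Global iteration loop in schematic form.

    `formal_solution(T, m)` performs step (A) (smoothing, opacities,
    transfer solution, flux correction, new density) and returns the
    updated T plus whatever state the linearization needs;
    `linearized_correction(state)` performs step (B) and returns dT.
    """
    for iteration in range(max_iter):
        T, state = formal_solution(T, m)        # step (A): formal solution
        dT = linearized_correction(state)       # step (B): linearization proper
        T = T + dT
        if np.max(np.abs(dT / T)) < tol:        # maximum relative change of T
            return T, iteration + 1
    raise RuntimeError("global iteration did not converge")
```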
This prudent procedure increases the convergence speed and, in many cases, prevents convergence problems or even a divergence of the global scheme.

(B) Linearization proper. This step includes evaluating the components of the Jacobi matrix, and solving the global system, either for the corrections δψ – when using the hybrid CL/ALI method (see <ref>) – or for δT – when using the Rybicki scheme (see <ref>). As pointed out above, the latter scheme is preferable. Using δT, one evaluates the new temperature structure T(m), and returns to step (A).

We stress that step (B), which may be called the “temperature correction,” should not be confused with a procedure that is usually referred to by the same name. The usual meaning of the term temperature correction is that of a procedure which employs the radiative/convective equilibrium equation to update the local temperature to yield an improved total energy flux, while keeping the other parameters (radiation intensities, chemical composition, opacities) fixed. Here, step (B) indeed corrects the temperature, but simultaneously with the other state parameters and the radiation intensities. Consequently, the resulting convergence process is global and fast.

§ FORMAL SOLUTION OF THE RADIATIVE TRANSFER EQUATION

In the previous text, in particular in <ref> – <ref>, we have considered a simultaneous solution of the transfer equation together with the other structural equations. To this end, we did not employ an angle-dependent transfer equation for the specific intensity, but rather its combined moment equation for the mean intensity. Although such an equation is exact, it contains the Eddington factor, which is not known a priori, and which needs to be determined by a formal solution of the (angle-dependent) transfer equation. By the term formal solution of the transfer equation we understand here a determination of the specific intensity for given absorption and (thermal) emission coefficients. There are several types of the formal solution; a detailed description of the most popular numerical schemes is presented in Hubeny & Mihalas (2014;12.4).

§.§ Feautrier method

If the source function is independent of μ, as it is in the case of isotropic scattering, or is an even function of μ, then the most convenient method of solution is the Feautrier (1964) method. It is based on introducing the symmetric and antisymmetric averages of the specific intensity for μ ≥ 0, j_ν(μ) ≡ [I_ν(μ) + I_ν(-μ)]/2, h_ν(μ) ≡ [I_ν(μ) - I_ν(-μ)]/2. Adding and subtracting the two forms of the transfer equation for μ and -μ, namely (suppressing the frequency index) μ[dI(μ)/dτ] = I(μ) - S, and -μ[dI(-μ)/dτ] = I(-μ) - S, one obtains μ dh_ν(μ)/dτ_ν = j_ν(μ) - S_ν, and μ dj_ν(μ)/dτ_ν = h_ν(μ), and by differentiating Eq. (<ref>) once more and substituting into (<ref>), one obtains an exact equation for the symmetric average j, sometimes called the Feautrier equation, μ^2 d^2 j_ν(μ)/dτ_ν^2 = j_ν(μ) - S_ν. It is interesting to point out that this scheme somewhat resembles the two-stream approximation, often used in radiative transfer applications. However, unlike the two-stream approaches, which are always approximate because they involve some kind of averaging over one hemisphere, or representing one hemisphere by a single direction, the Feautrier equations (<ref>) - (<ref>) are exact.

Discretizing in frequency and angle, and using Eq. (<ref>) for the source function, Eq.
(<ref>) becomes μ_i^2 d^2 j_ni/dτ_n^2 = j_ni - (1-ϵ_n) ∑_i^'=1^NA w_i^' j_ni^' - ϵ_n B_n, where NA is the number of angle points in one hemisphere, and w_i are the angular quadrature weights. This equation is supplemented by the boundary conditions μ_i dj_ni/dτ_n|_0 = j_ni(0) - I_ni^ext, where I_ni^ext is the incoming specific intensity I(ν_n, -μ_i). The lower boundary condition reads μ_i dj_ni/dτ_n|_τ_max = I^+_ni(τ_max) - j_ni(τ_max), where I^+_ni(τ_max) is the outward-directed specific intensity at the deepest point, given by the diffusion approximation I^+_ni(τ_max) = B(ν_n,τ_max) + μ_i ∂B(ν_n)/∂τ_ν_n|_τ_max.

All the individual frequency points in Eqs. (<ref>) – (<ref>) are independent, so the transfer equation can be solved for one frequency at a time. We drop the frequency index n and discretize in depth, described by index d. Upon introducing a column vector j_d ≡ (j_d,1, j_d,2, …, j_d,NA), one writes Eqs. (<ref>) – (<ref>) as a linear matrix equation -A_d j_d-1 + B_d j_d - C_d j_d+1 = L_d, where A_d, B_d, and C_d are NA × NA matrices; A and C are diagonal, while B is full. For illustration, we present here the matrix elements for the inner depth points d=2,…,ND-1; i,j=1,…,NA: (A_d)_ij = μ_i^2/(Δτ_d-1/2,i Δτ_d,i) δ_ij, (C_d)_ij = μ_i^2/(Δτ_d+1/2,i Δτ_d,i) δ_ij, (B_d)_ij = (A_d)_ij + (C_d)_ij + δ_ij - (1-ϵ_d) w_j, and (L_d)_i = ϵ_d B_d, where δ_ij is the Kronecker δ-symbol, δ_ij=1 for i=j and δ_ij=0 for i≠j. The expressions for the boundary conditions are analogous. The system is solved by the standard Gauss-Jordan elimination, equivalent to Eqs. (<ref>) - (<ref>). In terms of the Feautrier symmetric average j, the mean intensity and the Eddington factor are given by J_d = ∑_j=1^NA w_j j_dj, and f_d = ∑_j=1^NA w_j μ_j^2 j_dj / J_d.

There are several variants of the Feautrier scheme, such as an improved second-order scheme by Rybicki & Hummer (1991), or a fourth-order Hermitian scheme by Auer (1976); for a detailed description refer to Hubeny & Mihalas (2014;12.3). All variants of the Feautrier method involve ND inversions of NA × NA matrices. Since the typical value of NA is quite low (typically NA=3, which corresponds to 6 actual discretized angles), inverting such matrices does not present any problem or any appreciable time consumption. The basic advantage of the Feautrier scheme is that it treats scattering directly, without any need to iterate.

It should be stressed that when using the Feautrier method for the formal solution of the transfer equation between subsequent iterations of the global linearization scheme, one uses the above described procedure to determine the Eddington factors. For consistency, one does not use the resulting mean intensities directly; instead, they are determined by solving Eq. (<ref>), written as d^2(f_ν J_ν)/dτ_ν^2 = ϵ_ν(J_ν - B_ν), because this is exactly the transfer equation as employed in the linearization step. Otherwise the differences, albeit tiny, between J_ν determined from Eq. (<ref>) and from (<ref>) would prevent the overall iteration scheme from formally converging when using a very stringent convergence criterion, because very near the converged solution the linearization would correct the mean intensities to satisfy Eq. (<ref>), while the formal solution through Eq.
(<ref>) would change it back.

§.§ Discontinuous Finite Element method

If the source function depends on direction, or if the number of angles is large (which may occur for some specific applications), or if an atmospheric structure exhibits very sharp variations with depth, it is advantageous to use the Discontinuous Finite Element (DFE) scheme of Castor et al. (1992). It solves the linear transfer equation (<ref>) directly for the specific intensity, and therefore if scattering is present, which is essentially always, the scattering part of the source function has to be treated iteratively. To this end, a simple ALI-based procedure is used; it is described, for a more complex case, below. Here we describe the method assuming that the total source function is fully specified.

The method is essentially an application of the Galerkin method. The idea is to divide a medium into a set of cells, and to represent the source function within a cell by a simple polynomial, in this case by a linear segment. The crucial point is that the segments are assumed to have step discontinuities at the grid points. The specific intensity at grid point d is thus characterized by two values I_d^+ and I_d^- appropriate for cells (τ_d, τ_d+1) and (τ_d-1, τ_d), respectively (notice that we are dealing with an intensity in a given direction; the superscripts “+” and “-” thus do not denote intensities in opposite directions, as is usually the case in radiative transfer theory). The actual value of the specific intensity I(τ_d) is given as an appropriate linear combination of I_d^+ and I_d^-. We skip all details here; suffice it to say that after some algebra one obtains simple recurrence relations for I_d^+ and I_d^-, for d=1,…,ND-1, a_d I_d+1^- = 2 I_d^- + Δτ_d+1/2 S_d + b_d S_d+1, a_d I_d^+ = 2(Δτ_d+1/2 + 1) I_d^- + b_d S_d - Δτ_d+1/2 S_d+1, where a_d = Δτ_d+1/2^2 + 2Δτ_d+1/2 + 2, b_d = Δτ_d+1/2 (Δτ_d+1/2 + 1), and Δτ_d+1/2 = (τ_d+1 - τ_d)/|μ|, which represents the optical depth difference along the line of photon propagation, while τ measures the optical depth in the direction of the normal to the surface. The boundary condition is I_1^- = I^ ext, where I^ ext is the specific intensity of external irradiation (for inward-directed rays, μ<0).

For outward-directed rays (μ>0), one can either use the same expressions as above, renumbering the depth points such that ND → 1, ND-1 → 2, …, 1 → ND; or use the same numbering of depth points while setting the recursion, for d=ND-1,…,1, as a_d I_d^- = 2 I_d+1^- + Δτ_d+1/2 S_d+1 + b_d S_d, a_d I_d+1^+ = 2(Δτ_d+1/2 + 1) I_d+1^- + b_d S_d+1 - Δτ_d+1/2 S_d, with I_d^- = B_d + μ (B_d - B_d-1)/Δτ_d-1/2 for d=ND. Finally, the resulting specific intensity at τ_d is given by a linear combination of the “discontinuous” intensities I_d^- and I_d^+ as I_d = (I_d^- Δτ_d+1/2 + I_d^+ Δτ_d-1/2)/(Δτ_d+1/2 + Δτ_d-1/2). At the boundary points, d=1 and d=ND, we set I_d = I_d^-. As was shown by Castor et al., it is exactly this linear combination of the discontinuous intensities, expressed by Eq. (<ref>), that makes the method second-order accurate. Since one does not need to evaluate any exponentials, the method is also very fast.

We stress again that the above described scheme applies to a solution of the transfer equation along a single angle of propagation. The source function is assumed to be given. Therefore, when scattering is not negligible, one has to iterate on the source function.
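To make the sweep concrete, the following minimal sketch implements the inward-ray recurrences and the final combination quoted above for a single direction and a given source function. It is an illustration only: the grid, the linear source function, and the boundary intensity are arbitrary placeholder values, and a production code would add the outward sweep as well as the frequency and angle loops.

import numpy as np

def dfe_inward(tau, S, mu, I_ext=0.0):
    # DFE sweep for one inward ray (mu < 0), following the recurrences above.
    ND = len(tau)
    dt = np.diff(tau) / abs(mu)              # Delta tau_{d+1/2} along the ray
    a = dt**2 + 2.0 * dt + 2.0
    b = dt * (dt + 1.0)
    Im = np.zeros(ND)                        # I_d^-
    Ip = np.zeros(ND)                        # I_d^+
    Im[0] = I_ext                            # boundary: external irradiation
    for d in range(ND - 1):
        Im[d + 1] = (2.0 * Im[d] + dt[d] * S[d] + b[d] * S[d + 1]) / a[d]
        Ip[d] = (2.0 * (dt[d] + 1.0) * Im[d] + b[d] * S[d] - dt[d] * S[d + 1]) / a[d]
    # final intensity: weighted combination of the discontinuous values
    I = np.empty(ND)
    I[0], I[-1] = Im[0], Im[-1]
    I[1:-1] = (Im[1:-1] * dt[1:] + Ip[1:-1] * dt[:-1]) / (dt[1:] + dt[:-1])
    return I

# usage: linear-in-tau source function on an illustrative grid
tau = np.linspace(0.0, 10.0, 101)
I = dfe_inward(tau, S=1.0 + 0.5 * tau, mu=-0.5)
print(I[-1])   # approaches the local source function at depth, up to mu dS/dtau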
This is done most efficiently using the very powerful Accelerated Lambda Iteration (ALI) method, which will be outlined in <ref>.

§.§ Anisotropic scattering on condensates

The scattering part of the emission coefficient is generally written as η_ν^sc( n) = s_ν ∮ (dΩ^'/4π) I_ν( n^') g( n^', n), where g( n^', n) is the phase function for the scattering, and n^' and n are the directions of the incoming and the scattered photon, respectively. In the following text, the primed quantities refer to the incoming radiation and the unprimed to the scattered radiation. Introducing the usual polar (θ) and azimuthal (ϕ) angles, with μ=cosθ, the source function with a general scattering term can be written as S(ν,μ,ϕ) = [(1-ϵ_ν)/4π] ∫_-1^1 dμ^' ∫_0^2π dϕ^' I(ν,μ^',ϕ^') g(ν,μ^',ϕ^',μ,ϕ) + ϵ_ν B_ν. The transfer equation to be solved is written as μ d I(μ,ϕ)/dτ = I(μ,ϕ) - S(μ,ϕ). Here, and in the following expressions, we omit an explicit indication of the dependence on frequency. In general, it is not advantageous to cast Eq. (<ref>) in the second-order form, so the first-order form is solved, using the Discontinuous Finite Element method.[One can also use the short characteristics method (e.g., Hubeny & Mihalas 2014, §12.4), but we will not consider this scheme here.]

In the absence of external forces, the phase function depends only on the scattering angle, that is, the angle between the directions of the incoming and scattered photon, which we denote as γ, where cosγ = n^' · n. In terms of the polar and azimuthal angles, cosγ = sinθ^' sinθ (cosϕ^' cosϕ + sinϕ^' sinϕ) + cosθ^' cosθ.

The simplest approximation is to treat both types of scattering that we deal with here, namely the Rayleigh and the Mie scattering, as being isotropic. In this case the phase function is simply g(γ) = 1, and the source function is written in the usual form S_ν = (1-ϵ_ν) J_ν + ϵ_ν B_ν. For Rayleigh scattering, one can either assume isotropic scattering, which is a crude but acceptable approximation, or use the exact phase function, which in this case is given by the dipole phase function, g(γ) = (3/4)(1+cos^2γ).

For scattering on cloud particles (condensates), there are three possible approaches:
* Assuming the isotropic phase function. This is a rough approximation, but is acceptable for simple models, in particular when external irradiation is weak or absent.
* Employing the Henyey-Greenstein phase function, g(γ) = (1 - g̅^2)/(1+g̅^2-2g̅cosγ)^3/2, where g̅ is the asymmetry parameter that comes from the Mie theory.
* Finally, the most accurate treatment is using an exact phase function that follows from the Mie theory.

In the two latter cases, one solves the transfer equation iteratively. One introduces a form factor, analogous to the Eddington factor, as (see Sudarsky et al. 2005) a_μϕ = [∫_-1^1 dμ^' ∫_0^2π dϕ^' I(μ^',ϕ^') g(μ^',ϕ^',μ,ϕ)]/(4π J). Notice that for isotropic scattering, a_μϕ=1. The iteration scheme proceeds as follows:
* Initialize a_μϕ, usually as a_μϕ=1.
* While holding a_μϕ fixed, solve the transfer equation with the source function given by S_μϕ = (1-ϵ) a_μϕ J + ϵ B, for all angles μ and ϕ. This can be done by the procedure described below.
* After this is done, update a_μϕ, and repeat.

In the absence of strong irradiation the radiation field is essentially independent of the azimuthal angle, so one can use a simpler procedure where the phase function is averaged over azimuthal angles, g(μ^',μ) = ∫_0^2π g(μ^',μ,ϕ^', ϕ_0) dϕ^', where ϕ_0 is an arbitrary value of the azimuthal angle, typically chosen as ϕ_0=0. The integration is performed numerically.
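As an illustration of this numerical integration and of the phase functions above, the following sketch evaluates the Henyey-Greenstein phase function, checks its normalization, and computes the azimuthally averaged g(μ',μ) for a fixed ϕ_0=0. The asymmetry-parameter value (here called g_asym, standing for g̅) and the quadrature orders are arbitrary assumptions; the 1/(2π) normalization of the azimuthal average is likewise an assumption made for this sketch.

import numpy as np

def hg_phase(cos_gamma, g_asym):
    # Henyey-Greenstein phase function; g_asym is the Mie asymmetry parameter
    return (1.0 - g_asym**2) / (1.0 + g_asym**2 - 2.0 * g_asym * cos_gamma)**1.5

def azim_avg_phase(mu_in, mu_out, g_asym, nphi=360):
    # average of g over phi' at fixed phi_0 = 0, by a simple midpoint rule
    phi = (np.arange(nphi) + 0.5) * 2.0 * np.pi / nphi
    s_in, s_out = np.sqrt(1.0 - mu_in**2), np.sqrt(1.0 - mu_out**2)
    cosg = s_in * s_out * np.cos(phi) + mu_in * mu_out   # scattering angle
    return hg_phase(cosg, g_asym).mean()

# normalization check: (1/2) int_{-1}^{1} g d(cos gamma) should equal 1
xi, w = np.polynomial.legendre.leggauss(64)
print(0.5 * np.sum(w * hg_phase(xi, 0.8)))        # ~1.0
print(azim_avg_phase(0.3, -0.7, 0.8))             # averaged phase function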
The above equations are modified correspondingly, essentially by omitting the dependences on the azimuthal angle. The transfer equation is now μ d I(μ)/dτ = I(μ) - S(μ), which can be put into a form involving the symmetric and antisymmetric averages, analogous to the Feautrier scheme, namely μ dh(μ)/dτ = j(μ) - s∫_-1^1 g^+(μ^',μ) j(μ^') dμ^', and μ dj(μ)/dτ = h(μ) - s∫_-1^1 g^-(μ^',μ) h(μ^') dμ^', where g^±(μ^',μ) = [g(μ^',μ) ± g(μ^',-μ)]/2, because the following symmetry relations hold: g(μ^',μ) = g(-μ^',-μ), g(μ^',-μ) = g(-μ^',μ). The numerical method for solving Eqs. (<ref>) and (<ref>) is described by Sudarsky et al. (2000). However, it is still simpler and more straightforward to employ the ALI-based method described in <ref>.

§.§.§ δ-function reduction of the phase function

The phase function is typically computed for a set of discrete values of the scattering angle γ = γ_1, γ_2, …, γ_NA, with γ_1=0 and γ_NA=π. However, in many cases the phase function is a very strongly peaked function of γ, with a peak at γ=0 (forward scattering). Any simple angular quadrature is inaccurate, because g(γ_1=0) may be several orders of magnitude larger than g(γ_2) even for very small values of γ_2. Describing the phase function close to the forward-scattering peak with sufficient accuracy would necessitate considering a large number of angles, which would render the overall scheme impractical.

A more efficient approach was developed in Sudarsky et al. (2005; Appendix), which splits the phase function into two components. The first one, g^', is defined as g^'(γ_1) = g(γ_2) and g^'(γ_i) = g(γ_i) for i>1; i.e., g^' is the original phase function with the forward-scattering peak cut off. The second part is expressed through the δ-function, so that the modified phase function is written as g(γ) = g^'(γ) + αδ(γ), where α is determined by the requirement that the modified phase function is normalized to unity, i.e., (1/2)∫_-1^1 g(ξ)dξ = (1/2)∫_-1^1 g^'(ξ) dξ + α/2 = 1, where ξ=cosγ. With this phase function, one can write down the source function (<ref>) as (skipping an indication of the frequency dependence) S(μ,ϕ) = [(1-ϵ)/4π] ∫_-1^1 dμ^' ∫_0^2π dϕ^' I(μ^',ϕ^') g(μ^',ϕ^',μ,ϕ) + ϵ B = [(1-ϵ)/4π] ∫_-1^1 dμ^' ∫_0^2π dϕ^' I(μ^',ϕ^') g^'(μ^',ϕ^',μ,ϕ) + ϵ B + (1-ϵ) α I(μ,ϕ). The last term, (1-ϵ) α I(μ,ϕ), represents a creation of photons at a rate proportional to the specific intensity, and therefore acts as a reduction of the absorption coefficient and thus of the optical depth. This is quite natural, because forward scattering reduces the extinction of radiation: a photon removed from the beam is immediately added back to it, and thus cancels the previous act of photon absorption.

§.§.§ Combined moment equation in the presence of anisotropic scattering

The above formalism applies to the formal solution of the transfer equation in the case when the thermal structure is given. However, to consider the effects of anisotropic scattering on the determination of the atmospheric structure, we need to consider an equation for the mean intensity J, analogous to Eq. (<ref>). For simplicity, we consider the ϕ-averaged case, but the full μ- and ϕ-dependent case is analogous. Starting with the transfer equation (<ref>) with the source function given by (<ref>), the moment equations obtained by integrating over μ, and by multiplying by μ and integrating, are as follows: dH/dτ = J - S = ϵ(J-B), because (1/2)∫_-1^1 dμ (1/2)∫_-1^1 dμ^' g(μ^',μ) I(μ^') = (1/2)∫_-1^1 dμ^' I(μ^') (1/2)∫_-1^1 dμ g(μ^',μ) = J.
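Before turning to the second moment equation, here is a minimal numerical sketch of the δ-function reduction described above. The tabulated phase function is a Henyey-Greenstein placeholder on a deliberately coarse angular grid (both are assumptions for illustration only); the sketch cuts the forward peak and recovers the δ-function weight α from the normalization condition.

import numpy as np

# coarse angular grid and a strongly forward-peaked placeholder phase function
gamma = np.linspace(0.0, np.pi, 19)              # 10-degree resolution
g_asym = 0.95
g_tab = (1 - g_asym**2) / (1 + g_asym**2 - 2 * g_asym * np.cos(gamma))**1.5

g_cut = g_tab.copy()
g_cut[0] = g_cut[1]                              # cut off the forward peak

# alpha from (1/2) int g'(xi) d xi + alpha/2 = 1, with xi = cos(gamma)
xi = np.cos(gamma)[::-1]                         # ascending xi for the trapezoid rule
norm_cut = 0.5 * np.trapz(g_cut[::-1], xi)
alpha = 2.0 * (1.0 - norm_cut)
print(alpha)     # weight of the delta-function spike; it then reduces the
                 # extinction through the (1 - eps) * alpha * I source term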
The second moment equation presents more problems because, while (1/2)∫_-1^1 dμ g(μ^',μ) = 1, the analogous quantity (1/2)∫_-1^1 dμ μ g(μ^',μ) does not vanish, unless g is an even function of μ. One can, however, introduce a form factor β ≡ (1/J)[(1/2)∫_-1^1 dμ^' I(μ^') (1/2)∫_-1^1 dμ μ g(μ^',μ)], so that the second moment equation can be written as dK/dτ = H - (1-ϵ)β J. The combined moment equation, using Eq. (<ref>) and the traditional Eddington factor defined by (<ref>), becomes d^2(fJ)/dτ^2 = ϵ(J-B) - d/dτ[(1-ϵ)β J]. Analogously to the Eddington factor, the new factor β is determined during the formal solution, and is kept fixed in the next linearization step, where Eq. (<ref>) is used as one of the basic structural equations. The second term on the right-hand side is discretized using a three-point difference formula, analogously to what is described in Appendix A. The important point to realize is that the global tri-diagonal structure of the resulting matrices is preserved, so that the global linearization procedure, e.g. the Rybicki scheme, is unchanged. The effects of anisotropy are contained in the form factor β, and also indirectly in the Eddington factor f, which is modified with respect to the isotropic case.

To the best of our knowledge, the procedure outlined above has not yet been used for actual computations. Studies that examined the importance of anisotropic scattering on condensates (e.g., Sudarsky et al. 2005) calculated a formal solution of the transfer equation for the specific intensity, with the source function given by (<ref>) or (<ref>), but only for a given atmospheric structure (i.e., the T-P profile). They did not iterate to obtain a modified temperature structure. These effects are expected to be small, but this remains to be verified using the procedure outlined above.

§.§ Application of the Accelerated Lambda Iteration

We describe here the formalism for the general, μ- and ϕ-dependent case; an analogous formalism applies for the azimuthally averaged, ϕ-independent case. The transfer equation is written as (suppressing the frequency subscript) μ dI_μϕ/dτ = I_μϕ - S_μϕ, where the source function is given by Eq. (<ref>), i.e., S_μϕ = (1-ϵ) a_μϕ J + ϵ B, with the factor a_μϕ given by Eq. (<ref>). The solution of Eq. (<ref>) can be written as I_μϕ = Λ_μϕ [S_μϕ], where Λ is an operator that acts on the (total) source function to yield the specific intensity. Although Eq. (<ref>) is written in an operator form, we stress that the Λ-operator does not have to be assembled explicitly; Eq. (<ref>) should rather be understood as a process of obtaining the specific intensity from the source function. In fact, a construction of an explicit Λ operator (i.e., a matrix, upon discretizing) would be possible, but cumbersome and rather time consuming. It is never done in actual astrophysical applications.

The basic idea of the Accelerated Lambda Iteration (ALI) class of methods is to write Eq. (<ref>) as an iterative process, I_μϕ^ new = Λ^∗_μϕ [S^ new_μϕ] + (Λ_μϕ - Λ^∗_μϕ) [S_μϕ^ old], where Λ^∗_μϕ is a suitably chosen approximate operator. Equation (<ref>) is exact at the convergence limit. The “new” mean intensity is given by J^ new = (1/4π)∫_0^2π dϕ ∫_-1^1 dμ I^ new_μϕ. Using Eqs. (<ref>) and (<ref>) in (<ref>), one obtains, after some algebra [for details, refer to Hubeny & Mihalas (2014, §13.5)], δ J ≡ J^ new - J^ old = [I - (1-ϵ)Λ̅^∗]^-1 [J^ FS - J^ old], where I is the unit operator, and Λ̅^∗ = (1/4π)∫_0^2π dϕ ∫_-1^1 dμ a_μϕ Λ^∗_μϕ is the angle-averaged approximate operator.
Finally, J^ FS = (1/4π)∫_0^2π dϕ ∫_-1^1 dμ Λ_μϕ [S^ old_μϕ] is the new value of the mean intensity obtained from the formal solution of the transfer equation with the “old” source function. Although there are several possibilities, the most practical choice of the approximate operator is a diagonal (i.e., local) operator, in which case its action is simply a multiplication by a real number, which we also denote as Λ^∗ (or its angle-averaged value as Λ̅^∗). The correction to the mean intensity is then simply δ J = (J^ FS - J^ old)/[1-(1-ϵ)Λ̅^∗].

Before proceeding further, we employ Eq. (<ref>) to point out some basic properties of the ALI scheme, and to explain the motivation for using it. If one sets Λ^∗=0, one recovers the traditional Lambda iteration, in which J^ new = J^ FS, i.e., the iteration procedure simply alternates between solving the transfer equation with the known source function, and recalculating the source function with the just determined intensity of radiation. This procedure is known to converge very slowly if the scattering term dominates, i.e., if the single-scattering albedo is very close to unity. On the other hand, if one sets Λ^∗ = Λ, one recovers the exact solution, which can be obtained in a single step without any need to iterate. However, the inversion of the Λ operator (matrix) may be quite costly. Therefore, in order for an ALI scheme to be efficient, Λ^∗ must be chosen in such a way that it is easy and cheap to invert, yet still leads to a fast convergence of the overall iteration process.

From the physical point of view, we see that the ALI iteration process is driven, as is the ordinary Lambda iteration, by the difference between the old source function (or mean intensity) and the newer source function (mean intensity) obtained from the formal solution. But Eq. (<ref>) shows that in the case of ALI this difference is effectively amplified by an acceleration operator [1-(1-ϵ)Λ^∗]^-1. For example, any diagonal (i.e., local) Λ^∗ operator must be constructed to satisfy Λ^∗(τ) → 1 for large τ (because I_ν → S_ν for large τ). In a typical case ϵ ≪ 1, and thus [1-(1-ϵ)Λ^∗]^-1 → ϵ^-1, so the acceleration operator does in fact act as a large amplification factor.

From the mathematical point of view, the idea of solving large linear systems by splitting the system matrix into two parts, one being inverted, and the other one being used to compute an appropriate correction to the solution, goes back to Jacobi in the mid-nineteenth century. In the current literature these methods are known as preconditioning techniques. A comprehensive review of their mathematical properties that are important in the context of astrophysical radiative transfer is given in the recent textbook by Hubeny & Mihalas (2014, §13.2). The most important conclusion is that the convergence speed of any preconditioning method is determined by the largest eigenvalue of the amplification matrix, which is given through the original matrix and the preconditioner, in our case by Λ and Λ^∗. This gives an objective criterion for judging the quality of the chosen approximate operator. From this analysis (first done by Olson et al. 1986) it follows that a diagonal (local) Λ^∗, given as the diagonal part of the exact Λ, provides a reasonable compromise between the convergence speed and the time consumption per iteration. Its construction, in one particular case, is described below.

Returning to the present application, here is an algorithm for solving Eq.
(<ref>) using the ALI method:
(i) For a given S^ old (with an initial estimate S^ old=B or some other suitable value), perform a formal solution of the transfer equation for all directions, but one direction (given μ and ϕ) at a time. This yields new values of the specific intensity I_μϕ and also new values of the angle-dependent approximate operator Λ_μϕ^∗ – see below.
(ii) By integrating over directions using Eq. (<ref>), obtain new values of the formal-solution mean intensity J^ FS.
(iii) Using (<ref>), evaluate a new iterate of the mean intensity J^ new = J^ old + δ J.
(iv) Update the source function from (<ref>) using the newly found mean intensity and repeat steps (i) to (iii) to convergence.

§.§.§ Construction of the approximate operator

The remaining part of the solution is the construction of the approximate operator Λ^∗. There are several possibilities, depending on which formal solver of the transfer equation is being used. As explained in Hubeny & Mihalas (2014, §13.3), the matrix elements of the Λ-operator can be formally evaluated by setting the source function to the unit pulse function, S(τ_d) = δ(τ-τ_d), so that Λ_dd^' = Λ_τ_d[δ(τ_d^'-τ)]. Therefore, one could obtain the diagonal elements of the exact Λ by solving the transfer equation with the source function given by the δ-function. However, in practice one does not have to solve the full transfer equation, but only to collect the coefficients that stand at S_d in the expressions used to evaluate I_d. In the case of the DFE scheme, one proceeds along the recurrence relations (<ref>) and (<ref>) to compute L_d+1^- = b_d/a_d, L_d^+ = [2(Δτ_d+1/2 + 1)L_d^- + b_d]/a_d, where a_d and b_d are given by (<ref>) and (<ref>). The complete diagonal element of the (angle-dependent) elementary operator is obtained, in parallel with Eq. (<ref>), as Λ^∗_d(μ,ϕ) ≡ Λ_dd = (L_d^- Δτ_d+1/2 + L_d^+ Δτ_d-1/2)/(Δτ_d+1/2 + Δτ_d-1/2). The values at the boundaries are Λ_dd=0 for d=1, and Λ_dd=L_d^- for d=ND. The evaluation of the diagonal elements for outward-directed rays is analogous: L_d^- = b_d/a_d, L_d+1^+ = [2(Δτ_d+1/2 + 1)L_d+1^- + b_d]/a_d. As stressed in <ref>, a solution of the transfer equation using the DFE method is performed for one direction at a time, so L and Λ in Eqs. (<ref>) - (<ref>) are evaluated for given μ and ϕ. The angle-averaged approximate operator needed to evaluate the new iterate of the source function or the mean intensity, as in Eq. (<ref>), is then given by Λ̅^∗_d = (1/4π)∫_0^2π dϕ ∫_-1^1 dμ Λ^∗_d(μ,ϕ). In the case of the Feautrier scheme, which is however useful only for isotropic scattering, one uses a special procedure to evaluate the elementary Λ^∗ suggested by Rybicki & Hummer (1991); see also Hubeny & Mihalas (2014, §13.3).

§ DETAILS OF NUMERICAL IMPLEMENTATION

§.§ Treatment of opacities and the state equation

Unlike model stellar atmospheres, where the opacities are evaluated on the fly, here we use pre-calculated extensive tables of opacity as a function of frequency, temperature, and density (or pressure). Such an approach is used, for instance, in the computer code CoolTlusty (e.g. Hubeny et al. 2003; Sudarsky et al. 2003), which is a variant of the stellar atmosphere code tlusty (Hubeny 1988; Hubeny & Lanz 1995). The opacity table can be set up either (i) as the total opacity of all gaseous species, or (ii) as opacities of the individual species separately. In the latter case, the table contains the corresponding cross sections σ. This approach is mandatory when treating departures from chemical equilibrium.
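As an aside, the following sketch shows one common way such pre-calculated tables can be evaluated during a model run: bilinear interpolation of log κ in (log T, log P), one value per tabulated frequency. The grid, its bounds, and the random table entries are placeholders, not data from any actual opacity table.

import numpy as np

logT = np.linspace(2.5, 3.8, 40)                 # log10 T [K]
logP = np.linspace(-2.0, 8.0, 60)                # log10 P [dyn cm^-2]
log_kap = np.random.default_rng(1).normal(size=(40, 60, 500))   # per frequency

def kappa_at(T, P):
    # Bilinear interpolation of log(kappa) in (log T, log P); interpolating
    # the logarithm keeps kappa positive over its huge dynamic range.
    x, y = np.log10(T), np.log10(P)
    i = np.clip(np.searchsorted(logT, x) - 1, 0, len(logT) - 2)
    j = np.clip(np.searchsorted(logP, y) - 1, 0, len(logP) - 2)
    tx = (x - logT[i]) / (logT[i + 1] - logT[i])
    ty = (y - logP[j]) / (logP[j + 1] - logP[j])
    lk = ((1 - tx) * (1 - ty) * log_kap[i, j] + tx * (1 - ty) * log_kap[i + 1, j]
          + (1 - tx) * ty * log_kap[i, j + 1] + tx * ty * log_kap[i + 1, j + 1])
    return 10.0 ** lk                            # opacity spectrum at (T, P)

print(kappa_at(1500.0, 1.0e5).shape)             # (500,) - one value per frequency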
On the other hand, one then needs an additional table of concentrations of the species, or an analytical or empirical prescription for how to evaluate them. In both cases, the individual values of κ_i(ν_j) or σ_i(ν_j) for the individual frequencies are set using one of two possible approaches:
* Using the idea of Opacity Sampling (see, e.g., Hubeny & Mihalas 2014, §18.5) that is used in stellar atmospheres applications. In the planetary context, it is known as the line-by-line approach. It consists simply of evaluating the exact opacity at the actual set of frequencies ν_j. If the set of frequencies is dense enough, this scheme essentially amounts to an exact representation of the opacity. However, if the frequency points are not spaced sufficiently densely, this approach may miss cores of strong lines, or windows between them.
* Using the idea of Opacity Distribution Functions (ODF), also often used in the context of stellar atmospheres (e.g. Hubeny & Mihalas 2014, §17.6 and §18.5). This approach consists of three parts:
* (a) Dividing the global range of frequencies into a set of relatively narrow intervals (typically 10^2 to several times 10^3 intervals);
* (b) For each interval, one first computes a detailed line-by-line opacity with a very high frequency resolution, and then resamples the opacity to form a monotonic function of frequency, called the ODF.
* (c) This function is represented by a small number (typically of the order of 10^1) of frequency points.
This approach is analogous to the so-called correlated k-coefficient method (Goody et al. 1989; for an illuminating discussion, see Burrows et al. 1997) used in the planetary context. An advantage of this approach is that both high- and low-opacity points are well represented; however, a disadvantage is that the position of, say, the highest peak in the true opacity distribution is generally different from the position of the peak of an ODF. Nevertheless, if the intervals are chosen to be small, the resulting errors are also small. In the context of SMO model atmospheres, where the opacity is dominated by strong molecular bands composed of many closely spaced lines, the ODF approach is expected to work better than in the stellar atmosphere context, where an ODF represents a set of relatively well separated lines.

From the practical point of view, one needs several tables:
– a table (or a set of tables) of the gaseous opacity;
– a table of the total Rayleigh scattering opacity;
– a set of Mie scattering cross sections for the individual condensates;
– a set of cross sections for absorption by the individual condensates.
The corresponding derivatives with respect to the temperature, needed to evaluate the Jacobian, are computed numerically. Analogously, one needs pre-calculated tables of density as a function of T and P and, for evaluating the thermodynamic parameters needed for treating convection, the internal energy (E) or entropy (S) as a function of T and P. Summarizing, one needs two more tables:
– a table of ρ=ρ(T,P);
– a table of E=E(T,P) or S=S(T,P).
In this manner, all calculations that are connected to chemical equilibrium and to determining the opacities are separated from the calculation of the atmospheric structure.

§.§ Setting up the cloud bases

Ideally, the position of the (upper) cloud base should be given as the intersection of the current T-P profile and the condensation curve. The lower cloud base is an artificial concept.
If it is set through the condensation curve of the surrogate species, or is set at a fixed temperature, it mimics the situation where there are many condensates with actual condensation curves between these two limits, so that the given species is in fact a representative of the cumulative effect of many condensates. For instance, Burrows et al. (2006) chose forsterite (Mg_2SiO_4) to represent about 20 individual species of magnesium and aluminum silicates, with the upper cloud base determined through the forsterite condensation curve, and the lower base at a fixed temperature T=2300 K, which roughly corresponds to a characteristic highest condensation temperature of the other silicates (see Fig. 1 of Burrows et al. 2006).

This procedure works well if the cloud is located in an optically thick portion of the atmosphere. However, numerical experience showed that in cases where the upper or lower base is located in an optically thin part of the atmosphere, the cloud position may oscillate between two or more locations, and in fact in no location can one obtain a cloud position fully consistent with the atmospheric structure. For instance, at a certain iteration a cloud base is determined to be at a certain, say low-P, position. When the cloud is located there, its influence modifies the temperature, and as a consequence the cloud moves to higher P. Again, this modifies the temperature, and in the next iteration the cloud moves back to the low-P location. After a few iterations, the model starts to oscillate between the same two cloud positions. Moreover, regardless of where the cloud position is set empirically, for instance anywhere between the two positions mentioned above, the resulting temperature structure that is obtained after such a cloud is taken into account moves the cloud away. In such situations, there is no stationary solution of the problem. To obtain at least an approximate solution in those cases, several procedures were devised. They were used by Burrows et al. (2006) and Hubeny & Burrows (2007), but not explicitly described there.

In those procedures, one first calculates the cloud base position that depends only on the current atmospheric structure. As mentioned above, there are three possibilities:
(1) Setting the cloud base at the intersection of the T-P profile with the condensation curve – the “exact” way.
(2) Setting the cloud base at a specified temperature (which corresponds to an approximate condensation curve that is independent of pressure).
(3) Setting the cloud base at a specified pressure. In this case, since the pressure is unchanged during iterations, the cloud base is also fixed in space. Obviously, this is not a good physical model, but this approach may be useful for testing, and for diagnosing problems when the code cannot find self-consistent cloud bases. For instance, one may construct a series of models with many fixed cloud base positions, and study which position is closest to a consistent one, that is, to the one where the computed T-P profile intersects the condensation curve closest to the position where the cloud base was set.

The cloud bases determined by procedures (1) or (2) are called “tentative bases”. The tentative cloud bases may be either kept as they are, or may be modified by several possible procedures:
(a) The position of the new cloud base cannot be moved by more than a prescribed number of depth points.
(b) The actual position of the base is set at the midpoint between the tentative and the previous base.
The “previous” base is the final base determined (by any procedure) at the preceding iteration.
(c) The actual position of the base is set as a weighted geometric mean of the tentative and the previous base. In this case, one computes the geometric mean of the pressures at the cloud bases. Specifically, say for the upper base, P_0^ actual = (P_0^ tent)^w × (P_0^ previous)^1-w, where w is the weight for the geometric mean, typically set to w=1/2, i.e., a true geometric mean.

Another possible numerical trick is a “rezoning” of depth points. It was found that it is more accurate and numerically more stable to add several depth points at the newly determined low-pressure base of the cloud deck and immediately above it. Otherwise, if there are too few depth points in the region of exponential decline of the cloud-shape function on the low-pressure side of the main cloud, the opacity of the cloud would be overestimated. Analogously, if there is no depth point exactly at the cloud base, the opacity of the cloud is underestimated.

Some results that illustrate the influence of clouds are shown in Figs. <ref> – <ref>. We compare the cloudless model considered earlier, with T_ eff = 1500 K, log g = 5, to an analogous model with an added forsterite (Mg_2SiO_4) cloud. The lower (high-pressure) cloud boundary is set at a fixed temperature of T=2300 K, which simulates the effect of a whole set of other magnesium silicate condensates, as suggested by Burrows et al. (2006). Notice that even if the lower cloud boundary is specified at a fixed temperature, it is not fixed in physical space, because the temperature structure varies from iteration to iteration. The upper (low-pressure) cloud boundary is set exactly at the intersection of the T-P profile and the forsterite condensation curve. The power-law cloud shape parameters defined by Eq. (<ref>) are set to c_0 = 2 and c_1 = 10. The modal particle size is taken to be 100 microns.

Figure <ref> displays the convergence pattern of a model with clouds, computed using the Rybicki scheme. As is clearly seen, the convergence is again quite fast and very stable; the whole computation took about 30 s on the same MacBook Pro laptop as mentioned in <ref>. Figure <ref> shows the temperature structure, displayed as the temperature as a function of the Rosseland optical depth for both the cloudless and the cloudy model. Differences in the temperature structure are clearly seen. The effects of the cloud are best seen in a plot of the total radiative and convective energy flux, displayed in Fig. <ref>. The upper panel shows the cloudless model, which exhibits a smooth rise of F^ conv/(σ_R T_ eff^4) toward deep layers, starting around τ_ ross ≈ 1. From the numerical point of view, notice that the total flux is conserved to within about 0.05%; this is not seen on this plot but is shown later in Fig. <ref>. The lower panel represents an analogous plot for the cloudy model, together with the cloud shape function. The latter plot clearly shows that the cloud contributes to the total opacity at Rosseland optical depths roughly between 1 and 10. Because of the additional opacity as compared to the cloudless model, the temperature gradient is flatter in this region, and consequently the radiative flux is somewhat lower. The relative portion of the convective flux in this region thus somewhat increases.
In contrast, in the region just below the cloud, the temperature gradient increases and so does the radiative flux, and consequently the portion of the convective flux decreases dramatically. Finally, we show in Fig. <ref> the predicted emergent flux for both models. The main effect of clouds is to fill the opacity windows at 1.2 and 1.6 microns, where the cloudless model exhibits the highest peaks of the spectral energy distribution. By virtue of the radiative equilibrium, this energy has to be redistributed to other spectral regions, and therefore the flux increases essentially everywhere for wavelengths larger than about 1.8 microns.

§.§ Global formal solution

The term “global formal solution” refers to the set of all calculations between two iterations of the overall iteration (i.e., linearization) scheme. The main part of this procedure is a solution of the radiative transfer equation for the specific intensities and an evaluation of the Eddington factors, as described above in <ref>. In parallel with, or on top of, this procedure, one performs other “formal” solutions, essentially updating one state parameter by solving the appropriate equation while keeping the other state parameters fixed. For instance, and most importantly, one solves the radiative/convective equilibrium equation to update the temperature in the convection zone and below it. To this end, several procedures were devised for convective models to iteratively improve the T-P profile before entering the next linearization step. In most cases, using such procedures has very favorable consequences for the convergence properties, or even prevents an otherwise violent divergence of the iteration scheme. These procedures will be described next, in <ref>. For models with clouds, one then determines the new positions of the cloud bases as described in <ref>. This changes the opacity as a function of depth, so one has to perform another formal solution of the radiative transfer equation, as well as of the radiative/convective equilibrium, and the whole procedure may be iterated several times.

§.§ Correction of temperature in the convection zone

Although the linearization scheme may in principle converge without additional correction procedures, in practice this is a rare situation. The essential point is that a linearization iteration may yield current values of the temperature and other state parameters such that, for instance, the actual logarithmic gradient of temperature in a previously convective region may spuriously decrease below the adiabatic gradient at certain depth points. Consequently, these points would be considered convectively stable, and in the next iteration the radiative flux would be forced to be equal to the total flux. This would lead to a serious destabilization of the overall scheme, likely ending in a fatal divergence. It is therefore often necessary to perform certain correction procedures to ensure that the convection zone is not disturbed by spurious non-convective regions, and analogously that the radiative zone is not disturbed by spurious convective regions, so that the temperature and other state parameters are smooth functions of depth before one enters the next iteration of the overall linearization scheme. We describe these schemes below.

§.§.§ Improved definition of the convection zone

After a completed linearization iteration, one examines the depth points in which the actual temperature gradient surpasses the adiabatic one.
If such a point is solitary, or if it occurs at much lower pressures than the upper boundary of the convection zone in the previous iteration, the point is declared convectively stable, and the usual radiative equilibrium equation is solved for it in the next iteration step. On the other hand, if there are depth points in which ∇ < ∇_ ad (so that they are seemingly convectively stable), surrounded on both sides by points that are convectively unstable, ∇ ≥ ∇_ ad, these points are declared convectively unstable, and are considered to be part of the convection zone. In such a newly defined convection zone, one or both of the following correction procedures are performed.

§.§.§ Standard correction procedure

The idea of the correction is as follows. In view of Eq. (<ref>), the convective flux is given by F_ conv = F_0 (∇-∇_ el)^3/2, where F_0 = (gQH_P/32)^1/2 (ρ c_P T)(ℓ/H_P)^2. After a completed iteration of the global linearization scheme, one takes the current values of the state parameters and the radiation flux, and computes, in the convection zone, the new convective flux corresponding to this radiation flux so that the total flux is perfectly conserved, F_ conv^∗ = F_ tot - F_ rad, where F_ tot = σ_R T_ eff^4. If F_ rad is spuriously larger than F_ tot, then F_ rad is set to 0.999 F_ tot. The new difference of the temperature gradients corresponding to this convective flux is then ∇-∇_ el = (F_ conv^∗/F_0)^2/3, which is related to ∇-∇_ ad through ∇-∇_ ad = (∇-∇_ el) + B √(∇-∇_ el), where B is given by Eq. (<ref>). Both B and ∇_ ad are computed using the current values of the state parameters. Equation (<ref>) thus yields the new gradient ∇ and, with the pressure being fixed, the new temperature. With the new temperature, one recalculates the thermodynamic variables, and iterates the process defined by equations (<ref>) - (<ref>) to convergence. In solving Eq. (<ref>), one proceeds from the top of the convection zone to the bottom, because the gradient ∇ is numerically given by ∇_d ≡ ∇_d-1/2 = [(T_d - T_d-1)/(P_d - P_d-1)] [(P_d + P_d-1)/(T_d + T_d-1)], or by ∇_d = ln(T_d/T_d-1)/ln(P_d/P_d-1), so in order to evaluate T_d one needs to know T_d-1 at the previous depth point.

§.§.§ Refined correction procedure

The above procedure is improved by recognizing that the coefficient B is an explicit function of temperature, so B can be expressed as B ≡ β T^3. More importantly, the radiation flux is not kept fixed, but is written as F_ rad ≡ α T^4 ∇, so that instead of keeping F_ rad fixed, one first computes α from (<ref>) for the current values of T and ∇, and rewrites the combined equations (<ref>) – (<ref>) as a non-linear equation for the temperature, ∇(T) = ∇_ ad + [(F_ tot - α T^4 ∇(T))/F_0]^2/3 + β T^3 [(F_ tot - α T^4 ∇(T))/F_0]^1/3, where the parameters α and β are held fixed. Equation (<ref>) is solved by the Newton-Raphson method, again going from the top of the convection zone to the bottom; a minimal sketch of this step is given below. These procedures were developed by Hubeny & Burrows (2007), but not explicitly described there. Experience showed that they may be very helpful, but should be used judiciously. The best strategy is to start using them around the third or fourth iteration of the linearization scheme (otherwise, the radiation flux is so far from the correct value that the correction cannot work properly), and to stop using them at some later (e.g., 15th) global iteration.
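Before explaining that cutoff, here is the sketch of the refined correction step referenced above, at a single depth point. It solves the non-linear relation for ∇(T) by Newton-Raphson with a numerical derivative, with ∇(T) evaluated from the logarithmic difference formula. All input values (α, β, F_ tot, F_0, ∇_ ad, the grid pressures and temperatures) are arbitrary placeholders, not values from a real model.

import numpy as np

def refined_T_correction(T_prev, P_prev, P, T_init, grad_ad, alpha, beta,
                         F_tot, F0, tol=1e-10, itmax=50):
    # Solve grad(T) = grad_ad + (Fc/F0)^(2/3) + beta*T^3*(Fc/F0)^(1/3), with
    # Fc = F_tot - alpha*T^4*grad(T) and grad(T) = ln(T/T_prev)/ln(P/P_prev),
    # for the temperature T at one depth point (top-to-bottom sweep assumed).
    lnPr = np.log(P / P_prev)

    def resid(T):
        grad = np.log(T / T_prev) / lnPr
        Fc = max(F_tot - alpha * T**4 * grad, 0.0)   # convective flux >= 0
        x = (Fc / F0) ** (1.0 / 3.0)
        return grad - grad_ad - x**2 - beta * T**3 * x

    T = T_init
    for _ in range(itmax):
        f = resid(T)
        df = (resid(T * (1 + 1e-6)) - f) / (T * 1e-6)   # numerical derivative
        dT = -f / df
        T += dT
        if abs(dT) < tol * T:
            break
    return T

# usage with arbitrary illustrative numbers (cgs-like magnitudes)
print(refined_T_correction(T_prev=2.0e3, P_prev=1.0e6, P=1.3e6, T_init=2.1e3,
                           grad_ad=0.30, alpha=1.0e-10, beta=1.0e-11,
                           F_tot=5.67e-5 * 1500.0**4, F0=1.0e12))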
The reason for the cutoff just mentioned is that an application of the refinement procedures to an almost converged model may lead to an oscillatory behavior of the temperature corrections, in the sense that the refinement procedures change the temperature slightly, while the subsequent linearization iteration changes it back.

§ GRAY AND PSEUDO-GRAY MODELS

It is instructive to consider the so-called gray, or pseudo-gray, models. These are approximate models, but they serve two purposes: (i) they can be used as initial models for the linearization scheme, and (ii) they can provide valuable physical insight into the properties of the computed atmospheric structure. They are based on the two moment equations of the transfer equation, Eqs. (<ref>) and (<ref>), rewritten to contain derivatives with respect to the column mass m, and integrated over frequencies, namely dH/dm = κ_J J - κ_B B, dK/dm = χ_H H, where [J,H,K] ≡ ∫_0^∞ [J_ν, H_ν, K_ν] dν are the frequency-integrated moments of the specific intensity, and κ_J ≡ ∫_0^∞ (κ_ν/ρ) J_ν dν / J, κ_B ≡ ∫_0^∞ (κ_ν/ρ) B_ν dν / B, χ_H ≡ ∫_0^∞ (χ_ν/ρ) H_ν dν / H are the absorption mean, the Planck mean, and the flux-mean opacities, respectively. Here B ≡ ∫_0^∞ B_ν dν = (σ_R/π) T^4 is the frequency-integrated Planck function, which is proportional to T^4. As is customary, the mean opacities are defined using the monochromatic opacities per gram. Notice that κ_J and κ_B are defined through the true absorption coefficient (without scattering), while χ_H is defined through the total absorption (extinction) coefficient.

Assuming radiative equilibrium, dH/dm=0, Eq. (<ref>) reduces to κ_J J = κ_B B, or B = (κ_J/κ_B) J, which shows that the temperature structure is given through the ratio of the absorption mean to the Planck mean opacity, and the integrated mean intensity, which is given by the solution of the transfer equation. From the second moment equation we have K(τ_H) = Hτ_H + K(0) = (σ_R/4π) T_ eff^4 τ_H + K(0), where dτ_H = χ_H dm is the optical depth associated with the flux-mean opacity. We express the moment K through J via an integrated Eddington factor, f_K ≡ K/J, and using an integrated second Eddington factor, f_H ≡ H(0)/J(0), Eq. (<ref>) together with (<ref>) gives (see also Hubeny et al. 2003) T^4 = (κ_J/κ_B)[ (3/4) T_ eff^4 (τ_H/(3f_K) + 1/(3f_H)) + (π/σ_R) H^ ext]. This expression is exact, but it is only formal because κ_J, f_K, f_H, and τ_H are not known a priori. However, it is very useful if one makes some additional approximations.

* Classical gray model without irradiation. It assumes that the opacity is independent of frequency. In this case one has an exact mathematical solution, T^4 = (3/4) T_ eff^4 [τ+q(τ)], where q(τ) is the Hopf function, a monotonically varying function between q(0) = 1/√(3) ≈ 0.577 and q(∞) ≈ 0.71. The temperature structure given by (<ref>) is exact for a truly frequency-independent (gray) opacity, but it can be used as a useful starting approximation for any opacity, provided that τ is represented by a properly chosen mean opacity. As follows from the general expression (<ref>), the appropriate opacity should be an approximation of the flux-mean opacity. It turns out that such an approximation is the Rosseland mean opacity.
Specifically, in the deep layers where the diffusion approximation applies, H_ν ≈ (1/3) dB_ν/dτ_ν = (1/3) dB_ν/[(χ_ν/ρ)dm] = (1/3)[1/(χ_ν/ρ)](dB_ν/dT)(dT/dm), and therefore χ_H = ∫_0^∞ (χ_ν/ρ) H_ν dν / ∫_0^∞ H_ν dν ≈ ∫_0^∞ (dB_ν/dT)dν / ∫_0^∞ [1/(χ_ν/ρ)](dB_ν/dT)dν ≡ χ_R, where the last identity is the definition of the Rosseland mean opacity.

* Gray model with the Eddington approximation. In our notation, the Eddington approximation sets f_K=1/3 and f_H=1/2, and the Hopf function is taken as constant, q(τ)=2/3. Equation (<ref>) still applies.

* Eddington approximation, but allowing for non-gray opacity. In this case, the temperature structure is T^4 = (κ_J/κ_B)(3/4) T_ eff^4 [τ+2/3].

* Eddington approximation, with non-gray opacity, and with external irradiation: T^4 = (κ_J/κ_B)((3/4) T_ eff^4 [τ_H+2/3] + W T_∗^4), where the external irradiation flux is expressed through the effective temperature of the irradiating star, T_∗, and the dilution factor, W, given by Eq. (<ref>). As shown by Hubeny et al. (2003), this expression helps to understand a possible temperature rise at the surface of strongly irradiated planets, and even the fact that under certain circumstances one can obtain two legitimate solutions of the structural equations – one with the temperature monotonically decreasing outward, and one exhibiting a temperature rise toward the surface. Mathematically speaking, these effects arise due to an inequality of the absorption mean and the Planck mean opacities in the surface layers, namely that κ_J/κ_B may become significantly larger than unity. The reason for this is that the Planck mean opacity weighs the monochromatic opacity by B_ν(T), the Planck function at the local temperature, while κ_J close to the surface weighs the monochromatic opacity by B_ν(T_∗), the Planck function at the effective temperature of the irradiating star, T_∗, which is significantly larger than T. If, in addition, one has a strong opacity source acting in the optical region (where the stellar irradiation has its maximum), one can easily obtain κ_J/κ_B ≫ 1 close to the surface. Further from the surface, where less incoming radiation penetrates, κ_J → κ_B, which leads to a decrease of the local T as compared to the surface value. A more comprehensive discussion is presented in Hubeny et al. (2003) and Hubeny & Mihalas (2014, §17.7).

* Two-step gray models. A variant of the above approaches is a two-step gray model, which divides the whole frequency range into two regions, typically a “visible” and an “infrared” one, and assumes a frequency-independent opacity in each, χ_ vis and χ_IR, with χ_ vis ≠ χ_IR, and analogously for κ and the scattering coefficient s. In the two regions one typically invokes different approximations. Such models were developed by Hansen (2008), Guillot (2010), and Parmentier & Guillot (2014).

We will not discuss this topic any further, because our emphasis here is on constructing model atmospheres without any unnecessary approximations. We use gray or pseudo-gray models just as an initial estimate for the subsequent iterative procedure, or as a pedagogical tool to understand the atmospheric temperature structure.

§ COMPARISON TO AVAILABLE MODELING APPROACHES AND CODES

Here we briefly describe various modeling approaches and codes used in the literature and compare them to the formalism described above.
We stress that we will consider here only the codes and approaches that aim at determining a self-consistent atmospheric structure, obtained by a simultaneous solution of the basic structural equations summarized in Section 2, or at least a temperature structure that is consistent with the radiation field. We will not consider approaches that employ, for instance, an ad hoc, or parametrized, temperature structure and solve just for the radiation field, or that use an approximately described, fixed radiation field to determine the atmospheric structure. Therefore, in the exoplanet terminology, we will consider here only the forward, self-consistent codes, but we will not consider the retrieval codes, such as the code of Madhusudhan & Seager (2009, 2011), NEMESIS (Irwin et al. 2008; Barstow et al. 2017), CHIMERA (Line et al. 2012, 2013), or Tau-REx (Waldmann et al. 2015), to name just a few. From the basic physical point of view, we will limit ourselves here to hydrostatic, plane-parallel models, because considering more sophisticated multi-dimensional dynamical models is a different topic that requires different computational strategies.

§.§ Philosophy

Modeling atmospheres of substellar-mass objects is obviously a young field, whose beginnings occurred in the mid and late 1990's, shortly after the observational discoveries of these objects. In an endeavor to provide the needed theoretical background, it was deemed most straightforward to adapt some already available modeling approaches and codes to the physical conditions expected to occur in SMO atmospheres. There were two avenues taken in this regard: (i) adapting modeling codes for stellar atmospheres, and (ii) adapting codes developed for modeling solar-system planets and moons. Both avenues offer certain advantages and certain challenges, as we will outline below. Only recently have new codes appeared that were developed from scratch, and that may potentially offer a possibility of avoiding the drawbacks and biases inherent in adapting existing codes. We shall briefly discuss the most popular and widely used codes in these three categories. We stress that this is not meant as a comprehensive review of the subject, but rather as a brief guide to understanding what is involved, from both the physical and the numerical point of view, in the present most popular modeling codes.

§.§ Adapting stellar atmosphere codes

The first category of codes are those that were created by adapting a code for computing model stellar atmospheres. It should be pointed out that computing model stellar atmospheres is a very mature subject, having been developed over almost seven decades. Even the state-of-the-art NLTE metal-line blanketed models have been around for over two decades. The stakes in stellar atmospheres theory are also very high thanks to an unprecedented quality and quantity of high-resolution, high signal-to-noise spectroscopic observations that put heavy demands on the accuracy and reliability of theoretical analysis tools. It is therefore quite natural to model atmospheres of SMOs by adapting existing stellar atmosphere codes. There are specific features that make computing SMO model atmospheres easier than computing model stellar atmospheres, and vice versa.
We will briefly summarize them below. The features that make the SMO models easier to compute are:
* In stellar atmospheres, in particular for hot stars, the hydrostatic equilibrium equation contains a contribution of the radiation pressure, which involves an additional coupling of the gas pressure (and therefore the mass density) to the radiation field.
* For both types of objects, the opacity varies rapidly with frequency. However, for stars, the (mostly) atomic lines are distributed randomly in frequency, while for SMOs, the (mostly) molecular lines tend to be organized in bands, which makes it more suitable to employ various statistical techniques such as the opacity distribution functions or, as they are called in the planetary community, the correlated k-coefficients. Also, for stars, there are no frequency regions that can be treated as purely (or mostly) scattering or purely (or mostly) absorbing.
* These two issues play a role already in LTE models. For NLTE models, a major difficulty comes from the fact that the opacities and emissivities depend on the populations of the levels involved in the corresponding atomic transitions, which in turn depend on the radiation field via the kinetic equilibrium equation. The opacities thus cannot be evaluated a priori as functions of temperature and density, but have to be computed self-consistently with all the structural equations. There are typically thousands to tens of thousands of atomic energy levels involved in the atomic transitions (lines or continua) that make a significant contribution to the total opacity. Although in the field of SMO model atmospheres there are studies that consider NLTE effects (e.g., Fortney et al. 2004), stellar atmosphere models consider NLTE on a much larger scale. For instance, in a grid of model atmospheres of B stars (Lanz & Hubeny 2007), one considers about 1130 energy levels and about 39,000 lines of light elements, and 500,000 to 2 million lines dynamically selected from a list of about 5.6 million lines of the iron-peak elements, in full NLTE.

All these complications are absent or alleviated for models of SMOs. Modifying a modern NLTE stellar atmosphere code thus mostly involves removing many routines dealing with special issues of NLTE (an evaluation of transition rates, solving the kinetic equilibrium equation, etc.), and abandoning the evaluation of opacities and emissivities on the fly, because in any LTE model atmosphere code, including one for SMOs, it is much more efficient to use pre-calculated opacity tables.

On the other hand, computing SMO model atmospheres is more difficult than computing model stellar atmospheres, particularly for hot stars. We stress that at the cool end of the main sequence, for K and M stars, one meets most of the challenges listed below for SMOs.
* One has to include a solution of chemical networks to determine the concentrations of the individual molecular species as functions of temperature and pressure. However, this is not difficult numerically or algorithmically; the difficulty is mostly in finding appropriate molecular data. In any case, this can be done independently of the model construction.
* As pointed out above, more sophisticated models need to consider departures from chemical equilibrium.
* One has to add a treatment of cloud formation, together with an evaluation of cloud absorption and scattering.
This is perhaps the most difficult part of the process of adapting approaches and codes designed for hotter objects, because it involves basic physical problems (e.g., determining consistent particle sizes, their distribution, and the position of a cloud in the atmosphere), as well as algorithmic and numerical problems in incorporating these effects in a self-consistent manner.
* Although not as serious as the other problems listed above, the presence of strong (and generally anisotropic) external irradiation brings challenges for the adopted numerical schemes, in particular for self-consistent models.

Here is a list of the codes that were created by adapting their stellar atmosphere counterparts.

§.§.§ CoolTLUSTY

This code is a variant of the general stellar atmosphere (and accretion disk) code tlusty, originally described in Hubeny (1988) and Hubeny & Lanz (1995). Its modification for SMO atmospheres, called CoolTlusty, was briefly described in Sudarsky et al. (2003) and Hubeny et al. (2003). The present paper in fact describes in more detail the physical and numerical background of CoolTlusty. The input atomic and molecular physics and chemistry is quite flexible. It can either use opacity tables generated using the Burrows & Sharp (1999) and Sharp & Burrows (2007) approach, or any other opacity tables, both for the total opacity as well as a set of tables for the individual species. The input properties of condensates (cloud absorption and scattering) can be taken from any tables generated by a Mie code. Originally, it used tables generated as described in Sudarsky et al. (2000); recently it switched to tables generated by Budaj et al. (2014).

§.§.§ PHOENIX

The code PHOENIX was developed for stellar or even supernova applications; see Hauschildt & Baron (1999). The first application to extrasolar giant planets was done by Barman et al. (2001). The input physics is analogous to that used in CoolTlusty, described above. The basic difference is the adopted numerical scheme; PHOENIX uses a different flavor of the ALI method. It also uses a different set of chemical/molecular data, and a different treatment of clouds.

§.§.§ UMA

UMA stands for the Uppsala Model Atmospheres code (Gustafsson et al. 1974), somewhat modified by Vaz & Nordlund (1985). It was further adapted to studies of extrasolar giant planets by Seager & Sasselov (1998); see also Seager & Sasselov (2000) and Seager et al. (2000). It does not use an ALI scheme; it solves the radiative transfer equation by the Feautrier method, and determines the temperature structure self-consistently with the radiation field by a classical temperature correction.

§.§ Adapting planetary atmosphere codes

Generally, the codes of this category are directly based on approaches used originally for atmospheres of the solar-system planets or moons. Some, but not all, are based on, or use the spirit of, approaches used originally for the Earth atmosphere.
After the observational detections of brown dwarfs and extrasolar giant planets in the mid and late 1990's and early 2000's, some of these codes were adapted to these objects. In the Earth atmosphere there is a clear distinction between the two following wavelength regions:
* The optical wavelength region (often called “solar frequencies”), which is optically thin at most of the visible wavelengths, and where the transport of radiation is dominated by scattering processes; and
* The infrared region, where the radiation transport is dominated by absorption and thermal emission.
It should be noted that the atmosphere is opaque in the short-wavelength regions (UV and X-ray), but these regions are inconsequential for constructing structural models. The original Earth-atmosphere codes used that distinction explicitly to develop suitable approximations of the radiative transfer equation that differ in the optical and the infrared region. The early codes for modeling solar-system planets often used at least some aspects of this distinction. However, when applying such a dichotomous model to the significantly hotter or otherwise quite different conditions in exoplanets and brown dwarfs, these procedures may become less accurate or less efficient than those based on the formalism outlined above. While the existing codes of this category do still yield valuable results, the above considerations should be kept in mind when developing new codes for modeling atmospheres of extrasolar planets or brown dwarfs. Figuratively speaking, it seems more efficient to treat exoplanets and brown dwarfs as small and cool stars rather than as hot and big Earths or solar-system planets.

§.§.§ McKay-Marley code

The code was first developed by McKay et al. (1989) for calculating the atmospheric structure and spectra of Titan, and subsequently extended and applied to atmospheres of brown dwarfs by Marley et al. (1996) and Burrows et al. (1997), to the solar-system giant planets by Marley & McKay (1999), and to atmospheres of exoplanets by Marley et al. (1999) and Fortney et al. (2005, 2008), and subsequently in a large number of SMO studies. Here we list the main assumptions and approaches used by the code, stressing the differences from the approach described in this paper and/or used in the above mentioned codes.

The code determines the T-P profile in the following way: In the convection zone (or possibly multiple zones) the temperature gradient is assumed to be strictly adiabatic, and all the flux is transported solely by convection. In the radiative zone, where strict radiative equilibrium applies, one employs a special temperature-correction procedure, which somewhat resembles the Rybicki scheme described above, in the sense that one forms a vector of the local temperatures, T ≡ {T_1, …, T_NR}, where NR is the number of depth points in the radiative zone, and computes a correction δ T by using the following matrix equation (in our notation) A δ T = σ_R T_ eff^4 - F( T_0), where F( T_0) is a vector of the total radiative flux at all the depth points of the radiative zone, computed for the current vector of temperatures, T_0. Equation (<ref>) in fact represents a linearization, or a Newton-Raphson solution, of a non-linear implicit relation between the radiative flux and the temperature, F( T) = σ_R T_ eff^4, expressing the constancy of the total radiative flux. Matrix A is the corresponding Jacobi matrix, A_ij = ∂ F_i/∂ T_j; that is, the ij-component of A expresses the response of the total flux at depth i to the temperature at depth j.
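A minimal sketch of one way such a Jacobian can be assembled numerically (by the flux-perturbation approach described next) is the following; the routine total_flux() is a placeholder standing for any solver that maps a temperature vector to the total radiative flux at all depths, and the toy flux law in the usage example is not a real model.

import numpy as np

def jacobian_fd(total_flux, T, dT=1.0):
    # Finite-difference Jacobian A_ij = dF_i/dT_j: perturb one temperature
    # at a time (dT ~ 1 K is arbitrary) and difference the flux vectors.
    F0 = total_flux(T)
    NR = len(T)
    A = np.empty((NR, NR))
    for j in range(NR):
        Tp = T.copy()
        Tp[j] += dT
        A[:, j] = (total_flux(Tp) - F0) / dT
    return A, F0

def temperature_correction(total_flux, T, F_target):
    # one Newton-Raphson step:  A dT = F_target - F(T0)
    A, F0 = jacobian_fd(total_flux, T)
    return T + np.linalg.solve(A, F_target - F0)

# usage with a toy flux law F_i = sigma_R * T_i^4 (a stand-in for the solver)
toy = lambda T: 5.67e-5 * T**4
T_new = temperature_correction(toy, np.full(10, 1400.0), 5.67e-5 * 1500.0**4)
print(T_new[:3])   # moves toward 1500 K everywhere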
Unlike the Rybicki scheme, the elements of the Jacobi matrix are not evaluated analytically. Instead, they are obtained by solving a set of additional radiative transfer equations, by consecutively modifying a single component of vector T, for instance T_j → T_j + Δ T (with Δ T having a small, arbitrary value such as 1 K), while keeping the other components unchanged, to obtain a perturbed flux at all depth points, F^p,j. The elements of the Jacobi matrix are then set to A_ij = (F_i^p,j - F_i)/Δ T. The radiative transfer equation is solved by a variant of the two-stream approximation, called the two-stream source function method (Toon et al. 1989). It considers an atmosphere composed of a set of zones, and assumes that the thermal source function (i.e., the Planck function) is a linear function of optical depth within a given zone. The method essentially solves the first moment equation of the radiative transfer equation directly for the radiative flux, where some empirical relation between the zero-order moment (mean intensity) and the first-order moment (flux) is invoked. This scheme improves the traditional two-stream methods in situations where scattering is present, by considering the scattering source function computed using the proper phase functions, but using the specific intensities obtained from the traditional two-stream approximation for the thermal radiation. The line opacity is treated using a variant of the Opacity Distribution Function approach (see <ref>), called here the k-coefficient method. The opacity is assumed to be constant within a given depth zone, which allows one to introduce a k-coefficient not as a true opacity distribution function, as is done in the stellar context, but directly as a distribution of the transmission coefficients. In conclusion, the adopted method for solving the transfer equation is inherently approximate and only first-order accurate, in contrast to the Feautrier scheme or DFE used in the above approaches, which are second-order accurate (i.e., a numerical solution of the transfer equation is exact for a piecewise parabolic source function). However, this is usually not a big concern or a source of inaccuracies of the resulting model. A potentially more serious source of inaccuracies lies in the treatment of radiative equilibrium. While the temperature correction expressed by Eq. (<ref>) correctly takes into account the fact that a local flux is determined by the global temperature structure, an evaluation of the elements of the Jacobian numerically, by differencing two numerical solutions (moreover approximate ones) of the transfer equation, may lead to inaccuracies, in particular in optically thin regions. Even more seriously, the radiative equilibrium constraint is applied solely to the flux, and only the condition ∫ F_ν dν = const is checked. A fulfillment of this condition is viewed as a verification that a model is well converged for the T-P profile. However, experience gained from constructing model stellar atmospheres revealed that in the upper, optically thin portion of the atmosphere, the radiation flux is quite insensitive to the local temperature, because it is essentially fixed by the source function at the monochromatic optical depth around 2/3. The temperature structure in the upper layers may thus remain quite inaccurate even if the total flux is conserved within, say, 1% or even less.
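The point is easy to check numerically. The sketch below, with illustrative names and array shapes of our own choosing, evaluates both diagnostics for a candidate model: the usual flux-conservation error, and the normalized net cooling rate that the following paragraph argues should also be monitored.

```python
import numpy as np

def equilibrium_diagnostics(kappa, B, J, flux, T_eff, w):
    """Two per-depth convergence diagnostics for a candidate model.

    kappa, B, J : arrays of shape (ND, NF) with the absorption opacity,
                  Planck function and mean intensity at ND depths, NF freqs.
    flux        : total radiative flux at each depth, shape (ND,).
    w           : frequency quadrature weights, shape (NF,).
    """
    sigma_R = 5.670374419e-5                    # CGS
    # (i) relative error of total flux conservation -- the usual check
    flux_error = flux / (sigma_R * T_eff**4) - 1.0
    # (ii) normalized net cooling rate: int k(B-J) dnu / int k B dnu
    net_cooling = (kappa * (B - J)) @ w / ((kappa * B) @ w)
    return flux_error, net_cooling
```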
As discussed above, what is needed in the upper layers is to employ the integral form of the radiative equilibrium, ∫κ_ν (B_ν - J_ν) dν = 0, which does not seem to be done in this approach. To demonstrate these considerations numerically, we take a brown dwarf model with T_ eff = 1500 K, log g = 5, considered in <ref>, and artificially perturb the temperature structure in the upper layers by adding a damped wavy pattern with an amplitude 0.3 times the actual temperature – see Fig. <ref>. For this model we recompute the radiative flux, and the heating/cooling rates. Figure <ref> shows the flux and the heating/cooling rates. While the computed radiation flux differs at most by 1% (close to the column mass m ≈ 1 g cm^-2), and therefore such a model could have easily been declared as reasonably converged, the net cooling rate, ∫κ_ν (B_ν - J_ν)dν / ∫κ_ν B_ν dν, shows significant differences from zero. This illustrates the above stated warning that in order to assess the accuracy of the model, one needs to check not only the conservation of the total flux, but also the equality of the heating and cooling rates as stipulated by the constraint of radiative equilibrium. However, we stress that while the above analysis demonstrates that the McKay-Marley temperature correction scheme may lead to an inaccurate determination of the temperature in the upper layers of an atmosphere, it does not prove that the results are necessarily inaccurate. Moreover, even if inaccuracies occur, they are likely limited to the optically thin layers, which in turn have relatively little influence on the predicted emergent radiation. §.§.§ Goukenleuque et al.'s code Goukenleuque et al. (2000) presented one of the first self-consistent model atmospheres of an extrasolar giant planet, 51 Peg b in this case. To our knowledge, this code was not used very much after this study. It takes into account cloud opacity and scattering, but on the other hand completely neglects convection, which represents a significant drawback. The radiative transfer equation is solved approximately, using a variant of the two-stream method with the Eddington approximation. The code iterates between solving the transfer equation and subsequently correcting the temperature by solving the radiative equilibrium equation. One invokes two nested iteration loops. In the inner loop one holds the chemical composition, cloud position, and the opacities fixed at the current values, and determines the temperature that gives the correct total flux. The outer loop takes the T-P profile determined in the inner loop, and computes the new chemical equilibrium composition and new opacities corresponding to this T-P profile. The authors mention that some 1000 (!) iterations were needed in the inner loop, which, when compared to the linearization scheme outlined above that requires some 10 - 20 iterations, clearly demonstrates a relative inefficiency of this and other similar schemes that do not solve all the structural equations simultaneously. §.§ Independent, newly developed codes §.§.§ PETIT The code is described in detail by Mollière et al. (2015). Although we list the code as newly developed, the radiative transfer solver and the method of the solution of the radiative equilibrium equation were developed already by Dullemond et al. (2002), and used in a code for computing the vertical structure of massive circumstellar disks.
Code PETIT solves the radiative equilibrium and chemical equilibrium equations together with the radiative transfer equation using a specific application of the variable Eddington factor technique. Molecular line opacity is treated using the correlated k-coefficient method. The radiative equilibrium equation is considered in a form analogous to our Eq. (<ref>), where the Planck mean and the absorption mean opacities, together with the Eddington factors, are determined iteratively by solving the radiative transfer equation frequency by frequency. In the convectively unstable layers, the temperature gradient is taken to be adiabatic, and the integrated mean intensity of radiation is taken as a scaled integrated Planck function. External irradiation is treated by a variant of the two-stream approximation. Another approximation is that the PETIT code neglects any scattering process in the transfer equation (see Appendix C1 of Mollière et al. 2015). Also, although the chemical equilibrium calculations contain some condensed species, cloud formation and opacity are not considered, which limits the general applicability of the code. §.§.§ GENESIS The code, together with its first actual applications, is described in detail in Gandhi & Madhusudhan (2017). It essentially uses the structural equations and the numerical procedures described in this paper, namely the linearization method with the Rybicki reorganization scheme to solve the coupled radiative transfer and radiative/convective equilibrium equations, and the Feautrier method for the formal solution of the transfer equation. Convection is treated using the mixing-length formalism, analogously as described here. In the present version, the code does not consider cloud opacity and scattering. §.§.§ HELIOS The code and its benchmark tests are described in a recent paper (Malik et al. 2017). Although the code is newly developed from scratch, it keeps using approximate and thus potentially inaccurate approaches and numerical schemes, having their origin in an old Earth/planetary-type philosophy of atmospheric modeling, briefly discussed above. Here is a list of some shortcomings of the adopted procedure: * The radiative transfer equation is solved by a variant of the two-stream approximation which uses an analytic solution for the individual layers, assuming either an isothermal structure inside a layer, or a linearly varying Planck function within a layer. The latter still yields only a first-order accurate numerical scheme. Although a solution for one layer is obtained analytically, the final solution of the transfer equation for all layers still requires a numerical procedure. The relative complexity of the proposed algorithm, which is still approximate, contrasts with the procedure outlined above which yields an “exact” numerical solution, for physical problems of varying complexity, in a very simple and transparent way.* From the paper (Malik et al. 2017) it appears that the scheme does not include convection at all. If this is indeed so, it is a significant drawback which seriously limits the applicability of the code.* Analogously, the published description does not contain any mention of cloud opacity and scattering. Such a limitation is, however, present in other codes mentioned here as well.* In any case, regardless of the deficiencies expressed in (ii) and (iii), the radiative equilibrium constraint is treated as some sort of time-dependent approach to equilibrium.
While this is in principle acceptable, the whole procedure still represents an iterative scheme alternating between (an approximate) solution of the transfer equation with fixed temperature and a solution (again approximate) of the radiative equilibrium equation. Experience gained from computing model stellar atmospheres revealed that this procedure may converge very slowly, or may even suffer from the problem of false convergence (i.e., relative changes may become small, but the current solution is still far from the correct one – see, e.g., Hubeny & Mihalas 2014, § 13.2). Furthermore, their formulation of the radiative equilibrium equation uses thermodynamic parameters such as the specific heat c_P, and thus ignores the microphysics of the interaction of radiation and matter, as contained e.g. in Eq. (<ref>). § CONCLUSIONS The aim of this paper was to summarize the current physical, mathematical, and numerical methodology for computing model atmospheres of substellar mass objects within a framework of plane-parallel, static models. These two basic assumptions make the problem tractable on present-day computers. The remaining uncertainties and problems are not of an algorithmic or computational nature, but rather are caused by the lack of data from other branches of physics and chemistry – in particular, data for molecular lines, details of line broadening, formation and detailed properties of condensed particles, and the rates of chemical reactions for treating non-equilibrium chemistry, to name just a few of the most pressing problems. Our basic philosophy is the following. While we acknowledge the existence of many problems and uncertainties that plague our description of the SMO atmospheres, we feel that the physical formulation and corresponding mathematical treatment of phenomena that are currently well understood have to be done accurately, reliably, and without unnecessary approximations and simplifications. For instance, the treatment of the interaction of radiation and matter, moreover under highly non-equilibrium conditions, has been developed to a high degree of sophistication in stellar astrophysics; for a recent summary, see, e.g., Hubeny & Mihalas (2014). Also, many efficient and fast numerical algorithms were developed in the last two decades. Yet, many approaches and numerical codes used for modeling SMO atmospheres are still unnecessarily based on old and outdated methodologies. In our opinion, this is caused, at least in part, by the lack of proper communication between researchers in the fields of planetary and stellar atmospheres. Another reason is the fact that in the present period of rapid development of the field of exoplanets and brown dwarfs, most of the research emphasis is obviously devoted to observational issues, like discovering and classifying new objects. Even in the subfield of computing SMO model atmospheres most emphasis is given to applications rather than to a development of new approaches or to adapting algorithms from different fields. We have therefore formulated a physical and numerical framework which we believe should be a standard for dealing with the “classical” problem, that is, a plane-parallel, horizontally homogeneous (i.e.
1-D) atmosphere, in hydrostatic, radiative/convective, and chemical equilibrium (or with some simple departures from the latter). We have stressed that since the radiation field is an important, or even crucial, ingredient of the energy balance, radiation transport must be treated accurately, and self-consistently with the global atmospheric structure. We believe that this effort does not represent an imbalanced emphasis on radiation while making serious approximations for other phenomena, for instance cloud formation. A sophisticated and accurate treatment of the interaction of radiation and matter is now quite routine, and not even very costly from the point of view of computational resources. It is therefore unnecessary or even counterproductive to keep applying inefficient and approximate methods for treating radiation transfer with the argument that there are many uncertainties in describing the SMO atmospheres anyway. Finally, it should be kept in mind that any information, not only about the physical state of a studied object, but also about the realism of our description, comes only through the observed radiation. Therefore, interpreting spectroscopic observations using unsatisfactory or oversimplified treatments of radiation may easily yield incorrect results and conclusions. This can be avoided by using proper methods for treating radiative transfer, for instance those outlined in this paper, or their future improvements. § ACKNOWLEDGEMENTS I gratefully acknowledge the support from the Sackler Distinguished Visitor program of the Institute of Astronomy at the University of Cambridge, where most of the work on this paper was done. My special thanks go to Nikku Madhusudhan. I also thank Mark Marley, Jano Budaj, Ryan Macdonald, and the anonymous referee for helpful comments on the paper. [] Ackerman A., Marley M., 2001, ApJ, 556, 872 [] Allard F., Hauschildt P. H., Alexander D. R., Tamanai A., Schweitzer A., 2001, ApJ, 556, 357 [] Auer L. H., 1976, J. Quant. Spectrosc. Radiat. Transfer, 16, 931 [] Auer L. H., Mihalas D., 1969, ApJ, 158, 641 [] Barman T. S., Hauschildt P. H., Allard F., 2001, ApJ, 556, 885 [] Barstow J. K., Aigrain S., Irwin P. G. J., Sing D. K., 2017, ApJ, 834, 50 [] Budaj J., Kocifaj M., Salmeron R., Hubeny I., 2015, MNRAS, 454, 2 [] Burrows A., Budaj J., Hubeny I., 2008, ApJ, 678, 1436 [] Burrows A., Marley M. S., Hubbard W. B., Lunine J. I., Guillot T., Saumon D., Freedman R., Sudarsky D., Sharp C., 1997, ApJ, 491, 856 [] Burrows A., Rauscher E., Spiegel D. S., Menou K., 2010, ApJ, 719, 341 [] Burrows A., Sharp C. M., 1999, ApJ, 512, 843 [] Burrows A., Sudarsky D., Hubeny I., 2006, ApJ, 640, 1063 [] Burrows A., Sudarsky D., Hubeny I., Li A., 2005, ApJ, 627, 520 [] Castor J. I., Dykema P., Klein R. I., 1992, ApJ, 387, 561 [] Dullemond C., van Zadelhoff G. J., Natta A., 2002, A&A, 389, 464 [] Feautrier P., 1964, C. R. Acad. Sci. Paris, Ser. B, 258, 3189 [] Fegley B. Jr, Lodders K., 1996, ApJ, 472, L37 [] Fortney J. J., Marley M. S., Lodders K., Saumon D., Freedman R., 2005, ApJ, 627, L69 [] Fortney J. J., Lodders K., Marley M. S., Freedman R., 2008, ApJ, 678, 1419 [] Gandhi S., Madhusudhan N., 2017, MNRAS (in press) [] Goody R., West R., Chen L., Crisp D., 1989, J. Quant. Spectrosc. Radiat. Transfer, 42, 539 [] Goukenleuque C., Bézard B., Joguet B., Lellouch E., Freedman R., 2000, Icarus, 143, 308 [] Griffith C. A., Yelle R. V., 1999, ApJ, 519, L85 [] Guillot T., 2010, A&A, 520, A27 [] Gustafsson B., Bell R. A., Eriksson K., Nordlund Å., 1975, A&A, 42, 407 [] Hansen B. M.
S., 2008, ApJS, 179, 484 [] Hauschildt P. H., Baron E., 1999, J. Comput. Appl. Math., 102, 41 [] Hubeny I., 1988, Computer Physics Comm., 52, 103 [] Hubeny I., Burrows A., 2007, ApJ, 669, 1248 [] Hubeny I., Burrows A., Sudarsky D., 2003, ApJ, 594, 1011 [] Hubeny I., Lanz T., 1995, ApJ, 439, 875 [] Hubeny I., Mihalas D., 2014, Theory of Stellar Atmospheres, Princeton Univ. Press, Princeton [] Irwin P. G. J., Teanby E. A., de Kok R., et al., 2008, J. Quant. Spectrosc. Radiat. Transfer, 109, 1136 [] Komacek T. D., Showman A. P., 2016, ApJ, 821, 16 [] Kurucz R. L., 1970, SAO Spec. Rep. 309 [] Lanz T., Hubeny I., 2007, ApJS, 169, 83 [] Line M. R., Zhang X., Vasisht G., Natraj V., Chen P., Yung Y. L., 2012, ApJ, 749, 93 [] Line M. R., Wolf A. S., Zhang X., Knutson H., Kammer J. A., Ellison E., Deroo P., Crisp D., Yung Y. L., 2013, ApJ, 775, 137 [] Madhusudhan N., Agúndez M., Moses J. I., Hu Y., 2016, preprint (arXiv:1606.06092) [] Madhusudhan N., Amin M. A., Kennedy G. M., 2014, ApJ, 794, L2 [] Madhusudhan N., Seager S., 2009, ApJ, 725, 261 [] Madhusudhan N., Seager S., 2011, ApJ, 729, 41 [] Malik M., Grosheintz L., Mendonca J. M., et al., 2017, AJ, 153, 56 [] Marley M. S., McKay C. P., 1999, Icarus, 138, 268 [] Marley M. S., Saumon D., Guillot T., Freedman R. S., Hubbard W. B., Burrows A., Lunine J. I., 1996, Science, 272, 1919 [] Marley M. S., Gelino C., Stephens D., Lunine J. I., Freedman R. S., 1999, ApJ, 513, 879 [] McKay C. P., Pollack J. B., Courtin R., 1989, Icarus, 80, 23 [] Mollière P., van Boekel R., Dullemond C., Henning Th., Mordasini C., 2015, ApJ, 813, 47 [] Moses J. I., Visscher C., Fortney J. J., Showman A. P., Lewis N. K., Griffith C. A., Klippenstein S. J., Shabram M., Friedson A. J., Marley M. S., Freedman R. S., 2011, ApJ, 737, 15 [] Olson G., Auer L. H., Buchler J., 1986, J. Quant. Spectrosc. Radiat. Transfer, 35, 431 [] Olson G., Kunasz P. B., 1987, J. Quant. Spectrosc. Radiat. Transfer, 38, 325 [] Parmentier V., Guillot T., 2014, A&A, 562, A133 [] Prinn R. G., Barshay S. S., 1977, Science, 198, 1031 [] Rybicki G. B., 1971, J. Quant. Spectrosc. Radiat. Transfer, 11, 589 [] Rybicki G. B., Hummer D. G., 1991, A&A, 245, 171 [] Saumon D., Geballe T. R., Leggett S. K., Marley M. S., Freedman R., Lodders K., Fegley B. Jr, Sengupta S. K., 2000, ApJ, 541, 374 [] Saumon D., Marley M. S., Cushing M. C., Leggett S. K., Roellig T. L., Lodders K., Freedman R. S., 2006, ApJ, 647, 552 [] Saumon D., et al., 2007, ApJ, 656, 1136 [] Seager S., Sasselov D. D., 1998, ApJ, 502, L157 [] Seager S., Sasselov D. D., 2000, ApJ, 537, 916 [] Seager S., Whitney B. A., Sasselov D. D., 2000, ApJ, 540, 504 [] Sharp C. M., Burrows A., 2007, ApJS, 168, 140 [] Showman A. P., Guillot T., 2002, A&A, 385, 166 [] Showman A. P., Fortney J. J., Lian Y., Marley M. S., Freedman R. S., Knutson H. A., Charbonneau D., 2009, ApJ, 699, 564 [] Showman A. P., Polvani L. M., 2011, ApJ, 738, 71 [] Sudarsky D., Burrows A., Hubeny I., 2003, ApJ, 588, 1121 [] Sudarsky D., Burrows A., Pinto P., 2000, ApJ, 538, 885 [] Toon O. B., McKay C. P., Ackerman T. P., 1989, J. Geophys. Res., 94, 16287 [] Vaz L. P. R., Nordlund Å., 1985, A&A, 147, 281 [] Vernazza J., Avrett E. H., Loeser R., 1973, ApJ, 184, 605 [] Visscher C., Moses J. I., 2011, ApJ, 738, 72 [] Waldmann I. P., Tinetti G., Rocchetto M., Barton E. J., Yurchenko S.
N., Tennyson J., 2015, ApJ, 802, 107 § DISCRETIZATION AND LINEARIZATION OF THE BASIC STRUCTURAL EQUATIONS §.§ Discretization §.§.§ Radiative transfer equation We assume the source function in the form (i.e., for LTE and isotropic scattering) S_ν = κ_ν/χ_ν B_ν + s_ν/χ_ν J_ν ≡ ϵ_ν B_ν + (1-ϵ_ν) J_ν. Denoting d the depth index and i the frequency index, the transfer equation (<ref>), together with boundary conditions (<ref>) and (<ref>), is discretized as follows: * For d=1, the upper boundary condition, (f_2i J_2i - f_1i J_1i)/Δτ_3/2,i = g_i J_1i - H_i^ext + (Δτ_3/2,i/2) ϵ_1i (J_1i - B_1i), where we used the second-order form of the boundary condition (Hubeny & Mihalas, 2014, Eq. 12.50). * For d=2,…,ND-1, f_d-1,i/(Δτ_d-1/2,i Δτ_di) J_d-1,i - f_di/Δτ_di (1/Δτ_d-1/2,i + 1/Δτ_d+1/2,i) J_di + f_d+1,i/(Δτ_d+1/2,i Δτ_di) J_d+1,i = ϵ_di (J_di - B_di). * For d = ND, the lower boundary condition, (f_di J_di - f_d-1,i J_d-1,i)/Δτ_d-1/2,i = (1/2)(B_di - J_di) + (1/3)(B_di - B_d-1,i)/Δτ_d-1/2,i - (Δτ_d-1/2,i/2) ϵ_di (J_di - B_di), where we again used the second-order form. In the above expressions Δτ_d±1/2,i ≡ (ω_d±1,i + ω_di)|m_d±1 - m_d|/2, with ω_di ≡ χ_di/ρ_d, and Δτ_di ≡ (Δτ_d-1/2,i + Δτ_d+1/2,i)/2. §.§.§ Radiative/convective equilibrium equation Analogously, discretizing the radiative equilibrium equation, one obtains α_d ∑_i=1^NF w_i (κ_di J_di - η_di) + β_d [ ∑_i=1^NF w_i (f_di J_di - f_d-1,i J_d-1,i)/Δτ_d-1/2,i - σ_R/4π T_eff^4 ] = 0. In the convectively unstable regions, Eq. (<ref>) is modified to read α_d [ ∑_i=1^NF w_i (κ_di J_di - η_di) + ρ_d (F_conv,d+1/2 - F_conv,d-1/2)/(4π Δ m_d) ] + β_d [ ∑_i=1^NF w_i (f_di J_di - f_d-1,i J_d-1,i)/Δτ_d-1/2,i + F_conv,d-1/2/4π - σ_R/4π T_eff^4 ] = 0, where Δ m_d ≡ Δ m_d+1/2 + Δ m_d-1/2 = (m_d+1 - m_d-1)/2. §.§ Outline of the linearization The expressions for the matrix elements of the Jacobi matrix are straightforward, but tedious, to compute. We just present an example of linearizing Eq. (<ref>). Let us write this equation as P_di(ψ) = 0, which represents the discretized transfer equation for the frequency point i at depth point d. Then (A_d)_ij ≡ -∂ P_di/∂ J_d-1,j = f_d-1,i/(Δτ_d-1/2,i Δτ_di) δ_ij, (C_d)_ij ≡ -∂ P_di/∂ J_d+1,j = f_d+1,i/(Δτ_d+1/2,i Δτ_di) δ_ij, (B_d)_ij ≡ ∂ P_di/∂ J_dj = [ f_di/Δτ_di (1/Δτ_d-1/2,i + 1/Δτ_d+1/2,i) + ϵ_di ] δ_ij, where d=2,…,ND-1 and i=1,…,NF. The columns corresponding to the temperature are (A_d)_ik ≡ -∂ P_di/∂ T_d-1 = a_di ∂ω_d-1,i/∂ T_d-1, (C_d)_ik ≡ -∂ P_di/∂ T_d+1 = c_di ∂ω_d+1,i/∂ T_d+1, (B_d)_ik ≡ -∂ P_di/∂ T_d = -(a_di + c_di) ∂ω_di/∂ T_d + ∂ϵ_di/∂ T_d (J_di - B_di) - ϵ_di ∂ B_di/∂ T_d, where k=NF+1 is the index of T in the state vector, and α_di = (f_di J_di - f_d-1,i J_d-1,i)/(Δτ_d-1/2,i Δτ_di), γ_di = (f_di J_di - f_d+1,i J_d+1,i)/(Δτ_d+1/2,i Δτ_di), β_di = α_di + γ_di, a_di = [α_di + (β_di/2)(Δτ_d-1/2,i Δτ_di)]/ω_d-1/2,i, c_di = [γ_di + (β_di/2)(Δτ_d+1/2,i Δτ_di)]/ω_d+1/2,i, where ω_d±1/2 ≡ ω_d + ω_d±1. The right-hand side vector is given by L_di = -β_di - ϵ_di (J_di - B_di). Linearization of the boundary conditions and the radiative/convective equilibrium equation is analogous. § EVALUATION OF THE THERMODYNAMIC QUANTITIES The adiabatic gradient and other thermodynamic quantities can be evaluated using either the internal energy (E) or the entropy (S). When using the internal energy, the corresponding expressions are ∇_ad = (∂ln T/∂ln P)_S = -P/(ρ c_P T) (∂lnρ/∂ln T)_P, where the specific heat is given by c_P = (∂ E/∂ T)_P - P/ρ^2 (∂ρ/∂ T)_P, and (∂lnρ/∂ln T)_P = (T/ρ)(∂ρ/∂ T)_P.
The internal energy is evaluated as E/kT = 3/2 + ∑_j N_j (d ln U_j/d ln T), where N_j and U_j are the number density and the partition function of species j, respectively. The summation is carried over all species. When using the entropy, one has ∇_ad = -[(∂ S/∂ P)_T/(∂ S/∂ T)_P](P/T), and c_P = -P/(ρ T) (∂lnρ/∂ln T)_P / ∇_ad. The entropy is given by S/k = ∑_j N_j [1 + ln(U_j/N_j)] + E/kT. All derivatives are evaluated numerically. § CONSTRUCTION OF THE INITIAL GRAY MODEL The procedure to construct the initial gray model is very similar to that described by Kurucz (1970). First, one sets up a grid of Rosseland optical depths, usually logarithmically equidistant between τ_1 and τ_ND, which are input parameters of the model. These are typically chosen as τ_1 ≈ 10^-7 and τ_ND ≈ 10^2. The temperature is a known function of the Rosseland optical depth, see <ref>, T^4(τ) = (3/4) T_eff^4 [τ + q(τ)] + (π/σ_R) H^ext, where q(τ) is the Hopf function, and H^ext = ∫_0^∞ H_ν^ext dν is the frequency-integrated external irradiation flux. The hydrostatic equilibrium equation is written as d ln P/d lnτ = gτ/(χ_R P); because τ and P span many orders of magnitude, it is advantageous to integrate the equation for the logarithms. χ_R is the Rosseland mean opacity. One then proceeds to solve Eq. (<ref>) from the top of the atmosphere to the bottom. At the first depth point, τ_1, one makes a first estimate of the Rosseland mean opacity, χ_R,1, and assumes it is constant from this point upward. Using the boundary condition P(0)=0, one obtains the first estimate of the pressure P_1 as P_1 = (g/χ_R,1) τ_1. Having an estimate of the pressure, one uses the following procedure, which is valid for every depth point d: from the known temperature T(τ_d), given by Eq. (<ref>), one computes the monochromatic opacities, and, by integrating over frequency, the new value of the Rosseland mean opacity χ_R. We will refer to this procedure as P→χ_R. With the new value of χ_R, one returns to Eq. (<ref>), evaluates an improved estimate of P_1, and repeats the procedure P→χ_R until convergence. Once this is done, one proceeds to the subsequent depth point. For the next three depth points, d=2,…,4, one obtains the first estimate (a predictor step) of the total pressure: ln P_d^pred = ln P_d-1 + Δln P_d-1, which is followed by a P→χ_R procedure, and with the new χ_R one goes to the corrector step, ln P_d = (ln P_d^pred + 2 ln P_d-1 + Δln P_d + Δln P_d-1)/3, where Δln P_d = [gτ_d/(χ_R,d P_d)](lnτ_d - lnτ_d-1). For the subsequent depth points, one uses Hamming's predictor-corrector scheme (see Kurucz 1970; Eqs. 4.17 and 4.18), where the predictor step is ln P_d = (3 ln P_d-4 + 8 ln P_d-1 - 4 Δln P_d-2 + 8 Δln P_d-3)/3, and the corrector step ln P_d = (126 ln P_d-1 - 14 ln P_d-3 + 9 ln P_d-4 + 42 Δln P_d + 108 Δln P_d-1 - 54 Δln P_d-2 + 24 Δln P_d-3)/121. After completing the above procedure for all depths, one constructs the column mass scale, which will subsequently be used as the basic depth scale, as m_d = P_d/g. When convection is taken into account, one first computes the radiative gradient of temperature, ∇_rad,d = [(T_d - T_d-1)/(P_d - P_d-1)][(P_d + P_d-1)/(T_d + T_d-1)], and compares it to the adiabatic gradient, ∇_ad,d. If ∇_rad,d > ∇_ad,d, the criterion for stability against convection is violated, and one determines the true gradient ∇, where ∇_ad ≤ ∇ ≤ ∇_rad, that gives the correct total, radiative plus convective, flux.
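Before turning to how the true gradient is obtained in the convective case, here is a minimal sketch of the P→χ_R fixed-point iteration at the first depth point; `chi_R_of` stands for a user-supplied (hypothetical) routine returning the Rosseland mean opacity for a given temperature and pressure.

```python
def first_pressure(g, tau1, T1, chi_R_of, tol=1e-8, max_iter=200):
    """Iterate P_1 = (g / chi_R(T_1, P_1)) * tau_1 to convergence.

    Implements the boundary estimate of the gray model: the opacity is
    assumed constant above tau_1, and the P -> chi_R step is repeated
    until the pressure stops changing.
    """
    P = g * tau1 / chi_R_of(T1, 1.0)        # crude starting estimate
    for _ in range(max_iter):
        P_new = g * tau1 / chi_R_of(T1, P)  # P -> chi_R, then new P
        if abs(P_new - P) <= tol * P:
            return P_new
        P = P_new
    return P
```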
If the instability occurs deep enough for the diffusion approximation to be valid, then (F_rad/F) = (∇/∇_rad), and the energy balance equation reads (see Hubeny & Mihalas 2014, § 17.4) 𝒜(∇-∇_el)^3/2 = ∇_rad - ∇, where 𝒜 = (∇_rad/σ_R T_eff^4)(gQH_P/32)^1/2 (ρ c_P T)(ℓ/H_P)^2. We see that 𝒜 depends only on local variables. Adding (∇-∇_el)+(∇_el-∇_ad) to both sides of (<ref>), and using the expression ∇_el - ∇_ad = B√(∇-∇_el), where B is given by Eq. (<ref>), to eliminate (∇_el-∇_ad), we obtain a cubic equation for x ≡ (∇-∇_el)^1/2, namely 𝒜(∇-∇_el)^3/2 + (∇-∇_el) + B(∇-∇_el)^1/2 = (∇_rad-∇_ad), or 𝒜x^3 + x^2 + Bx = (∇_rad-∇_ad), which can be solved numerically for the root x_0. We thus obtain the true gradient ∇ = ∇_ad + Bx_0 + x_0^2, and can proceed with the integration, now regarding T as a function of P and the logarithmic gradient ∇. | http://arxiv.org/abs/1703.09283v1 | {
"authors": [
"Ivan Hubeny"
],
"categories": [
"astro-ph.SR",
"astro-ph.EP"
],
"primary_category": "astro-ph.SR",
"published": "20170327194316",
"title": "Model atmospheres of sub-stellar mass objects"
} |
Damir Hasić Department of Mathematics, Faculty of Science, University of Sarajevo, 71000 Sarajevo, Bosnia and Herzegovina email: [email protected], [email protected] Eric Tannier Inria Grenoble Rhône-Alpes, F-38334 Montbonnot, France Univ Lyon, Université Lyon 1, CNRS, Laboratoire de Biométrie et Biologie Évolutive UMR5558, F-69622 Villeurbanne, France Gene tree species tree reconciliation with gene conversion This work is funded by the Agence Nationale pour la Recherche, Ancestrome project ANR-10-BINF-01-01.Damir Hasić Eric Tannier ================================================================================================================================================================ Gene tree/species tree reconciliation is a decisive recent advance in phylogenetic methods, accounting for the possible differences between gene histories and species histories. Reconciliation consists in explaining these differences by gene-scale events such as duplication, loss, and transfer, which translates mathematically into a mapping between gene tree nodes and species tree nodes or branches. Gene conversion is a frequent and important biological event, which results in the replacement of a gene by a copy of another from the same species and in the same gene tree. Including this event in reconciliations has never been attempted, because it changes both the solutions and the methods to construct reconciliations. Standard algorithms based on dynamic programming become ineffective.We propose here a novel mathematical framework including gene conversion as an evolutionary event in gene tree/species tree reconciliation. We describe a randomized algorithm giving in polynomial running time a reconciliation minimizing the number of duplications, losses and conversions. We show that the space of reconciliations includes an analog of the Last Common Ancestor reconciliation, but is not limited to it. Our algorithm outputs any optimal reconciliation with non-null probability. We argue that this study opens a research avenue on including gene conversion in reconciliation, which can be important for biology. 92D15 05C90 92-08 68W40 § INTRODUCTION §.§ Biological motivation Due to various evolutionary events at the gene level, gene trees (trees used to describe the evolution of genes) and species trees (trees used to describe the evolution of species) are often not identical. Identifying these evolutionary events, such as speciation, duplication, transfer, conversion, or transfer with replacement, and positioning them inside the species tree, is called phylogenetic reconciliation. Tree reconciliation techniques have become widely used in biology. For example, they are used in testing hypotheses of horizontal transfer in some Bacterial and Archaeal species <cit.>; studying parasites infecting tropheine cichlids <cit.>; and finding horizontal gene transfers of RH50 among prokaryotes <cit.>. Reconciliation tools <cit.> are also used to explore the process of shaping gut microbiomes <cit.>. In <cit.>, reconciliations are used "for inferring orthology relationships" <cit.>, and in <cit.> "for identifying orthologs for use in function prediction, gene annotation, planning experiments in model organisms, and identifying drug targets" <cit.>. From <cit.> we can see that "reconciliation can also be used to study co-evolution between parasites and their hosts (parasitology), and between organisms and their living areas (biogeography)" <cit.>. An evolutionary event of particular interest in this paper is gene conversion.
It is a highly important genomic event for evolution and health <cit.>. It results in the replacement of a gene in a genome by another homologous gene from the same genome, where homologous means that they have a common ancestor. It has largely contributed to shaping extant eukaryotic genomes and is involved in several known human genetic diseases <cit.>. However, gene conversion is nearly absent from the mathematical framework for phylogeny. Phylogenetic methods can handle base substitutions, indels <cit.>, genome rearrangements <cit.>, duplications, transfers and losses of genes <cit.>, or population-scale events such as incomplete lineage sorting <cit.>. But the detection of gene conversion is still done with empirical examinations of gene trees combined with various genomic features <cit.>. This absence of gene conversion can strongly bias evolutionary studies. Indeed, it introduces a discordance between the history of a gene and the history of a locus <cit.> which stays unresolved. It creates confusion between duplications and conversions <cit.>, whereas conversions are probably more frequent <cit.>.§.§ Mathematical and computational aspects of the problem With V(T) we denote the set of all nodes, and L(T) is the set of all leaves of a tree T. We assume that a gene tree G and a species tree S are given, as well as a mapping ϕ:L(G)→ L(S) that places extant genes into extant species. The problem is to find a mapping ρ:V(G)→ V(S) that optimizes some objective function. How to determine ρ depends on the model that describes the problem of reconciliation. The model includes the set of allowed evolutionary events (speciation is essentially always included) and the objective function, which is usually the likelihood of a reconciliation (maximization problem) or the weight of a reconciliation (minimization problem). The weight of a reconciliation, which is the sum of the costs of all evolutionary events in the reconciliation, is a sort of measure of dissimilarity between G and S. In this paper, the objective function is the weight of a reconciliation. Conversions are modeled as a pair of a duplication and a loss. Since we are pairing gene losses with gene duplications, there is a need to introduce lost subtrees, i.e. subtrees of the gene tree that were not given in the input. This means that, in order to obtain an optimal solution, we need to extend the given gene tree G, and this extension we denote by G'. Because of the pairing of losses with duplications, disjoint subtrees of G are no longer independent. The loss of independence and the need to extend the given gene tree are what makes the problem harder than the usual duplication/loss reconciliation. §.§ A review of some previous results The first model of reconciliation to mention is the one with duplications, speciations and losses. A natural way to form a reconciliation, in this model, is to position every node from the gene tree as low as possible inside the species tree. This type of reconciliation is called the Last Common Ancestor (LCA) reconciliation. LCA minimizes the number of duplications and losses <cit.>, the number of duplications <cit.>, and the number of losses <cit.>. LCA is the only reconciliation that minimizes duplications and losses <cit.>. These reconciliations can be found in linear time.
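To fix notation, here is a small sketch of the LCA mapping itself (not the linear-time algorithm of the cited works; a straightforward quadratic version suffices for illustration). The species tree is assumed to be given by parent pointers and node depths, the gene tree by a children map; all identifiers are our own.

```python
def lca(S_parent, depth, a, b):
    """Last common ancestor of species a, b in a tree given by parent pointers."""
    while a != b:
        if depth[a] < depth[b]:
            b = S_parent[b]
        else:
            a = S_parent[a]
    return a

def rho_lca(G_children, G_root, phi, S_parent, S_depth):
    """Map every gene-tree node x to the LCA in S of the species of L(x)."""
    rho = {}
    def visit(x):
        l, r = G_children.get(x, (None, None))
        if l is None:                       # x is a leaf of G
            rho[x] = phi[x]
        else:
            visit(l)
            visit(r)
            rho[x] = lca(S_parent, S_depth, rho[l], rho[r])
    visit(G_root)
    return rho
```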
There is a polynomial algorithm in <cit.> that finds the minimum number of duplications even when S is polytomous. The problem of reconciliation between a polytomous gene tree and a binary species tree minimizing the number of mutations (duplications + losses) is polynomial <cit.>. In <cit.>, O(|G|+|S|) algorithms for reconciling a nonbinary gene tree and a binary species tree in the duplication, loss, mutation, and deep coalescence models are given. A biologically important and mathematically much studied evolutionary event is gene transfer. Models that include duplications, losses, and transfers are called DTL models. When transfers are included, time constraints are introduced, because a direct gene transfer can happen only between species that exist at the same moment. There are two ways of considering time constraints in reconciliations. One is to use an undated species tree while imposing consistency between the inferred transfers. This variant has been proved to be NP-hard in <cit.> (while without time consistency it is solvable in time O(m^2n), where m is the number of extant species and n is the number of extant genes). Another is to use a fully dated species tree as an input, that is, one with a total order on the internal nodes. In that case a reconciliation algorithm with duplications, transfers and losses is given in <cit.> with time complexity Θ(m^2n). In <cit.> the space of all reconciliations is explored and a formula for its size is given. Discrete and continuous cases for the DTL model are equivalent <cit.>. In <cit.>, duplications, transfers, losses, and incomplete lineage sorting are included in the model and an FPT (fixed-parameter tractable) algorithm for the most parsimonious reconciliation is given. If a gene that is transferred replaces another gene, then we have transfer with replacement, which is to transfer what conversion is to duplication (see <cit.> for an NP-hardness proof and an FPT algorithm). For a more detailed review on reconciliations see <cit.>, <cit.>, and <cit.>.§.§ The contribution of this paper Gene conversion can be modeled in the gene tree/species tree reconciliation framework. It consists in coupling a duplication (the donor sequence) and a loss (the receiver sequence). It is usually not included in reconciliation models because the usual algorithmic toolbox of gene tree/species tree reconciliation, based on dynamic programming assuming statistical independence between lineages, does not allow coupling events from different lineages. Our contribution is to explore the algorithmic possibilities of introducing conversion in reconciliations. We formally define a reconciliation with duplications, losses and conversions. We define the algorithmic problem of computing, given a gene tree and a species tree, a reconciliation minimizing a linear combination of the numbers of events of each type. We fully solve the problem in the particular case when all events are equally weighted. More precisely, we construct an algorithm which gives, in polynomial running time, an optimal solution, and we prove that any optimal solution can be output by the algorithm with a non-null probability. The algorithm can be used as a polynomial-delay enumeration of the whole space of solutions. The space of solutions is non-trivial. In contrast with duplication-loss-only reconciliations, solutions are not unique, and they are not all given by the standard Last Common Ancestor (LCA) technique.
Moreover, easy examples show that the LCA technique does not give the optimal solution if events are weighted differently. This opens a wide range of new open algorithmic problems related to gene tree/species tree reconciliations. The paper is organized as follows. Section <ref> introduces a gene tree/species tree reconciliation including gene conversion events, and states the relations with the classical duplication-loss reconciliation. Section <ref> is devoted to the presentation of an algorithm to find one optimal solution, which is called an LCA completion. In Section <ref>, we give an algorithm to find all optimal solutions, via the definition of a class of optimal solutions called zero-flow, containing but not limited to LCA completions. We prove that an algorithm finding all zero-flow reconciliations is sufficient to access the whole solution space, and we write such an algorithm. In Section <ref> we complete the proof that the presented algorithm always gives an optimal solution, and that every optimal solution can be output with a non-null probability. § RECONCILIATIONS WITH DUPLICATION, LOSS, CONVERSION In this section we define the mathematical problem modeling the presence of gene conversion in gene tree/species tree reconciliations. We start with the definition of the standard duplication and loss model, and then add the possibility of conversions.§.§ Duplication-Loss reconciliations Let us begin with some generalities about phylogenetic trees. All phylogenetic trees are binary rooted trees where the root node has degree 1, and its incident edge is called the root edge. The root edge of T is denoted by root_E(T), and the root node by root(T). If x is a node in a tree, then L(x) denotes the set of leaves of the maximal subtree rooted at x. If x∈ V(T)\ L(T) then x_r,x_l denote the two children of x. Similarly, we can define the children e_r,e_l of an edge e. If x is a leaf or an edge incident to a leaf, then its children are NULL, and f(NULL)=0 for any function/procedure which returns some value. If x is a node/edge in a rooted tree T, then p_T(x)=p(x) denotes its parent. Let e=(x,p(x)) be an edge; then T(e) denotes the maximal rooted subtree with root edge e. If x is on the path from y to root(T) then we say that x is an ancestor of y, or that y is a descendant of x, and we write y≤_T x or y≤ x, defining a partial order on the nodes. If x is neither an ancestor nor a descendant of y, we say that x and y are incomparable. Let x and y be comparable nodes in a rooted tree T; then with d_T(x,y) or d(x,y) we denote the distance, i.e. the number of edges in the path between x and y. For a partially ordered set A, we use minimal to denote an element m such that x≤ m ⇒ x=m, ∀ x∈ A. We use this terminology for the partial order defined by rooted trees. For example, if V' is a subset of nodes of a tree, their Last Common Ancestor (LCA) is the minimal node which is an ancestor of all nodes in V'. We also use it for partial orders defined by inclusion on sets or by subtrees in trees. In particular we can use it for the partial order defined by the extension relation. A tree G' is said to be an extension of a gene tree G if G can be obtained from G' by pruning some subtrees and suppressing nodes of degree 2. We define the gene tree species tree duplication loss (DL) reconciliation. We suppose we have two trees G and S, respectively called the gene tree and the species tree. Nodes of G (S) are called genes (species). A mapping ϕ:L(G) → L(S) indicates the species in which genes are found in the data.
Without loss of generality we suppose that ϕ verifies that the last common ancestor of all the leaves of S that are in the image of ϕ is the node adjacent to the root node (recall the root node has degree 1). The reconciliation is based on a function ρ, which is an extension of ϕ to all genes and species, including internal nodes. A function ρ:V(G')→ V(S) on the nodes of a tree G' is said to be consistent with a species tree S if ρ(root(G'))=root(S) and for every x∈ V(G')\ L(G') one of the following conditions holds: (D) ρ(x)=ρ(x_l)=ρ(x_r), or (S) ρ(x)_l=ρ(x_l) and ρ(x)_r=ρ(x_r). We also say that G' is ρ-consistent with S. Obviously, conditions (D) and (S) cannot both hold for a single node. Let G and S be a gene and a species tree and ϕ:L(G) → L(S). A DL reconciliation between G and S is a 5-tuple (G,G',S, ϕ, ρ) such that G' is an extension of G, G' is ρ-consistent with S, and ρ/L(G)=ϕ. Note that we allow some extant species not to have genes. The definition is equivalent to the standard ones <cit.>, although they can present some variations between them. For example, we do not impose that losses are represented by subtrees extended to the leaves of S (which is the case for example in <cit.>), because of the particular use we make of loss subtrees in the sequel. An example of a DL reconciliation is given in Figure <ref> (a). Let ℛ=(G,G',S,ϕ,ρ) be a DL reconciliation and suppose x∈ V(G')\ L(G') satisfies condition (D). Then x is called a duplication. The set of all duplications is denoted by Δ=Δ(ℛ). Let ℛ=(G,G',S,ϕ,ρ) be a DL reconciliation and suppose x∈ V(G')\ L(G') satisfies condition (S). Then x is called a speciation. The set of all speciations is denoted by Σ=Σ(ℛ). Let ℛ=(G,G',S,ϕ,ρ) be a DL reconciliation and x∈ L(G')\ L(G). Then x is called a loss. The set of all losses is denoted by Λ=Λ(ℛ). We say that a duplication, loss or speciation x is assigned to s if ρ(x)=s. Let Ł(s,ℛ)=Ł(s)=|ρ^-1(s)∩Λ(ℛ)| and 𝒟(s,ℛ)=𝒟(s)=|ρ^-1(s)∩Δ(ℛ)| be the number of losses and the number of duplications assigned to s∈ V(S) in the reconciliation ℛ. If e=(s,p(s))∈ E(S), then Ł(e,ℛ)=Ł(e)=Ł(s,ℛ) and 𝒟(e,ℛ)=𝒟(e)=𝒟(s,ℛ). The next definition extends the notion of loss. Let ℛ=(G,G',S,ϕ,ρ) be a DL reconciliation. A maximal subtree T of G' such that V(T)∩ V(G)=∅ is called a lost subtree. The next lemma introduces the standard Last Common Ancestor reconciliation, and its proof can be found in <cit.> or <cit.>. Let G and S be a gene and a species tree, and ϕ:L(G)→ L(S). There exists a DL reconciliation ℛ=(G,G',S,ϕ,ρ) such that ρ(x) is the root of the minimal subtree of S containing ϕ(L(x)), ∀ x∈ V(G). The DL reconciliation from Lemma <ref> that minimizes |Λ(ℛ)| is called the Last Common Ancestor (LCA) reconciliation and is noted ℛ_lca=(G,G'_lca,S,ϕ,ρ_lca). Note that the LCA reconciliation is the unique reconciliation minimizing the number of duplications, or the number of losses, or any linear combination of these two numbers <cit.>. In Section <ref> we will construct equivalents of the LCA reconciliation including conversions, called LCA completions, which will have the property of minimizing the sum of the numbers of duplications, losses and conversions. However, in contrast, they are not unique, they do not contain all optimal solutions (as we show in Section <ref>), and they do not optimize arbitrary linear combinations of these numbers (see the conclusion for such an example).§.§ Duplication-Loss-Conversion reconciliations In the next definition we introduce an additional event, called gene conversion, which is a function δ pairing some losses and duplications.
This models the replacement of a gene by a copy of another one from the same family. Let (G,G',S,ϕ,ρ) be a DL reconciliation. Let δ:Δ→Λ be an injective partial function such that ρ(x)=ρ(δ(x)) for all x∈δ^-1(Λ). If x∈δ^-1(Λ), then x is called a conversion, and δ(x) is its associated loss. The set of all conversions is denoted by Δ' and the set of associated losses by Λ'. The 6-tuple (G,G',S,ϕ,ρ,δ) is called a DLC reconciliation. We see that every DL reconciliation is also a DLC reconciliation with Δ' = ∅. From now on, reconciliation stands for DLC reconciliation. Examples of DLC reconciliations are drawn in Figure <ref>. The following properties are equivalents of standard properties of DL reconciliations <cit.>, which have to be checked in the DLC case. Let ℛ=(G,G',S,ϕ,ρ,δ) be a reconciliation, x,y∈ V(G') and x < y. Then ρ(x) ≤ρ(y). If x<y, then we have x_1,...,x_k ∈ V(G') so that x=x_0<x_1<x_2<...<x_k<x_k+1=y, and x_i is a child of x_i+1. From Definition <ref>, we have that (D) or (S) holds, i.e. ρ(x) ≤ρ(p(x)), therefore ρ(x)≤ρ(x_1)≤ρ(x_2) ≤ ... ≤ρ(x_k) ≤ρ(y). Let ℛ=(G,G',S,ϕ,ρ,δ) be a reconciliation, s∈ V(S)\ L(S), and x∈ V(G')\ L(G') such that ρ(x)=s. Then x∈Σ(ℛ) if and only if x is a minimal element of ρ^-1(s). Let x be a minimal element of ρ^-1(s). Assume the opposite; then x∈Δ(ℛ). Let x_l,x_r be the children of x in G', hence x_l<x, x_r<x and ρ(x)=ρ(x_l)=ρ(x_r)=s, which contradicts the minimality of x. Let x∈Σ(ℛ). Assume the opposite, that x is not a minimal element of ρ^-1(s). Let x'<x, ρ(x')=s. Then x'≤ x_l or x'≤ x_r. Let x'≤ x_l; hence s=ρ(x')≤ρ(x_l)≤ρ(x)=s. Therefore ρ(x)=ρ(x_l), which contradicts x∈Σ(ℛ). The next lemma states that we cannot have two comparable speciations assigned to the same node from V(S). Let ℛ=(G,G',S,ϕ,ρ,δ) be a reconciliation and x,y∈ V(G'), x<y, ρ(x)=ρ(y). Then y∈Δ(ℛ). Follows directly from Lemma <ref>. Let ℛ_1=(G,G'_1,S,ϕ,ρ_1,δ_1) and ℛ_2=(G,G'_2,S,ϕ,ρ_2,δ_2) be reconciliations, and x∈ V(G). Then ρ_1(x) and ρ_2(x) are comparable. Assume the opposite, i.e. ρ_1(x) and ρ_2(x) are incomparable. Then T(ρ_1(x)) and T(ρ_2(x)) are disjoint, and in particular L(ρ_1(x))∩ L(ρ_2(x))=∅. Let l∈ L(x). Then l≤ x, therefore ϕ(l)=ρ_1(l) ≤ρ_1(x) and ϕ(l)=ρ_2(l) ≤ρ_2(x), hence ϕ(l)∈ L(ρ_1(x)) and ϕ(l)∈ L(ρ_2(x)), a contradiction. Let ℛ=(G,G',S,ϕ,ρ,δ) be a reconciliation, and let d,l,c∈ℕ be the weights associated with duplication, loss and conversion. The cost (or weight) of ℛ is given by ω(ℛ)=l·|Λ\Λ'|+d·|Δ\Δ'|+c·|Δ'|. Examples of computations of this cost are given in Figure <ref>. As we can see, losses from Λ' are not counted as losses in the formula, so we call them free losses. If a lost subtree has only free losses then it is called a free subtree. Let ℛ=(G,G',S,ϕ,ρ,δ) be a reconciliation that minimizes ω(ℛ), for given G, S, and ϕ. Then it is called a minimum (or optimal) reconciliation. In the sequel we give an algorithm that is able to output all optimal reconciliations for d=l=c, so unless specified otherwise, we assume from now on, and without loss of generality, that they are all equal to 1. We come back to the general case in the conclusion, stating open problems.§.§ Completions and minimizations of reconciliations Recall that any DL reconciliation is a DLC reconciliation by definition. However, an optimal DL reconciliation is not necessarily an optimal DLC reconciliation. Completions and minimizations are operations on reconciliations that nonetheless help construct a relation between optimal DL and DLC reconciliations. Let ℛ=(G,G',S,ϕ,ρ,δ) be a reconciliation.
The reconciliation ℛ'=(G,G”,S,ϕ,ρ',δ') is said to be obtained from ℛ by loss extension if G” is an extension of G', ρ=ρ'/V(G'), and ℛ and ℛ' have the same number of lost subtrees. Let ℛ be a reconciliation, and let ℛ' be a reconciliation with minimum weight among all reconciliations obtained from ℛ by extending some losses. Then ℛ' is called a completion of ℛ. It is obvious, by definition, that an optimal reconciliation is a completion, i.e. a completion of a reconciliation ℛ always has a cost lower than or equal to that of ℛ itself. The set of all completions of ℛ is denoted by c(ℛ). When useful, c(ℛ) can also be used to denote one arbitrary completion if it is clear that any completion works. For example, the cost of a completion can be written ω(c(ℛ)), since by definition they all have the same cost. The converse of a completion is a minimization. It is based on the following definition and lemma. A reconciliation ℛ=(G,G',S,ϕ,ρ,δ) is called minimal if there does not exist G” such that G' is a proper extension of G”, G” is an extension of G, and G” is ρ”-consistent, where ρ”=ρ/V(G”). An example of a minimal reconciliation is the LCA reconciliation. The next lemma shows how to construct a minimal reconciliation from any reconciliation. Let G and S be a gene and a species tree, and ρ':V(G)→ V(S) such that * ρ'(x)=ϕ(x), ∀ x∈ L(G), * x<y ⇒ ρ'(x)≤ρ'(y), * ρ'(x) belongs to the path from ρ_lca(x) to root(S). Then there exists a unique (up to δ) minimal reconciliation ℛ=(G,G',S,ϕ,ρ,δ) such that ρ/V(G)=ρ'. Assume that there exists a reconciliation ℛ_1=(G,G'_1,S,ϕ,ρ_1,δ_1) such that ρ_1/V(G)=ρ'. Let x∈ V(G) with children x_l,x_r (in G). In the next three cases we show how to construct G'. Case 1, ρ_1(x_l)=ρ_1(x) and ρ_1(x_r) < ρ_1(x). In that case x∉Σ(ℛ_1), hence x∈Δ(ℛ_1). Therefore ∃ x'∈ V(G'_1) such that x' is the right child of x and ρ_1(x')=ρ_1(x). Since x_r<x'<x, x' is not a leaf and it has a left subtree. Therefore ∃ x”∈ V(G'_1) such that x” is a descendant of x' and ρ_1(x”)=ρ_1(x')_l. We have a similar situation for the case ρ_1(x_r)=ρ_1(x) and ρ_1(x_l)<ρ_1(x). Case 2, e=(s,p(s))∈ E(S), s∈ V(S), and ρ_1(p_G(x))>s and ρ_1(x)<s. We will prove that there exists a node x_1∈ V(G'_1) such that ρ_1(x_1)=s and x<x_1<p_G(x). Let x' be a minimal node of V(G'_1) such that x<x'≤ p_G(x) and ρ_1(x')>s. From Lemma <ref>, we have x'∈Σ(ℛ_1). Therefore it has children x'_l,x'_r (in G'_1) such that ρ_1(x'_l)<ρ_1(x') and ρ_1(x'_r)<ρ_1(x'). From the properties of x', we get that one of the children maps to s. Let ρ_1(x'_r)=s; then we need to insert an additional child for x'_r, since x'_r cannot be a leaf. Case 3, ρ_1(x_l)≤ρ_1(x)_l and ρ_1(x_r)≤ρ_1(x)_l. Let x' be a child of x in G'_1. Therefore x' is comparable to x_l or x_r, and ρ_1(x') is comparable to ρ_1(x_l) or ρ_1(x_r), hence ρ_1(x') is comparable to ρ_1(x)_l. Next, ρ_1(x') is incomparable to ρ_1(x)_r, hence x∉Σ(ℛ_1) and x∈Δ(ℛ_1). If x'_l,x'_r are the children of x in G'_1, then ρ_1(x'_l)=ρ_1(x'_r)=ρ_1(x). This means that we need to insert x'_l,x'_r and additional children for x'_l,x'_r. The insertions described in the previous three cases hold for any reconciliation ℛ_1. Let us prove that they are enough to form a reconciliation. From this, minimality and uniqueness will follow. Let us form G' and ρ in the way described in the previous three cases. We need to prove that G' is ρ-consistent. Let x∈ V(G')\ L(G') and let x_l,x_r be the children of x in G'. We will prove that x satisfies condition (D) or (S) from Definition <ref>. If ρ(x)=ρ(x_l)=ρ(x_r), then condition (D) is satisfied. Now assume that condition (D) is not satisfied, i.e.
ρ(x)≠ρ(x_l) or ρ(x)≠ρ(x_r). Take ρ(x_r) < ρ(x). From Case 2, we get ρ(x_r) = ρ(x)_r. We are left to prove ρ(x_l) = ρ(x)_l. Assume the opposite: ρ(x_l)=ρ(x)_r or ρ(x_l)=ρ(x). From Case 3 and the definition of duplication, we get that x is a duplication, which contradicts our assumption that ρ(x_l)≠ρ(x)_l. The unique minimal reconciliation obtained from a reconciliation is called its minimization. In the next section we prove that minimization and completion are complementary operations, that is, an optimal reconciliation is always the completion of its minimization. This will lead to the important result that completions of the LCA reconciliation are optimal. § A FAMILY OF OPTIMAL RECONCILIATIONS: LCA RECONCILIATIONS In this section we provide a polynomial running time algorithm which finds an LCA completion, and prove that it is an optimal reconciliation. We present a more general algorithm, which finds a completion of any reconciliation. To this aim we present the important notion of flow, used constantly throughout the paper. This settles the complexity of the defined problem when the weights d,l,c are all equal. However, the algorithm described here does not find all LCA completions, and moreover the space of optimal reconciliations is not limited to LCA completions. Finding all solutions will be the subject of the next section. Here we begin by stating general properties of reconciliations and optimal reconciliations, showing that they all share some important properties with LCA reconciliations.§.§ Similarities of any reconciliation with the LCA reconciliation Some properties of the LCA reconciliation are shared by all reconciliations. Let ℛ=(G,G',S,ϕ,ρ,δ) be a reconciliation, and x∈ V(G). Then ρ(x) is not lower than ρ_lca(x). Follows directly from the definition of the Last Common Ancestor. Let ℛ=(G,G',S,ϕ,ρ,δ) be a reconciliation, and x∈ V(G)\ L(G). Then ρ(x) is on the path in S from ρ_lca(x) to root(S). Follows directly from Lemmas <ref> and <ref>. The next lemma states that if a node is a speciation in an arbitrary reconciliation then it is also a speciation in the LCA reconciliation. Let ℛ=(G,G',S,ϕ,ρ,δ) be a reconciliation, and x∈ V(G). If x∈Σ(ℛ), then x∈Σ(ℛ_lca), and ρ(x)=ρ_lca(x). Let x∈ V(G)∩Σ(ℛ). Let x”_l,x”_r be the children of x in G', x'_l,x'_r the children of x in G'_lca, and x_l,x_r the children of x in G. We have ρ(x)_l=ρ(x”_l) and ρ(x)_r=ρ(x”_r). From Lemma <ref> we have ρ_lca(x)≤ρ(x). Assume that ρ_lca(x)<ρ(x). Hence ρ(x)_l or ρ(x)_r is incomparable to ρ_lca(x). Assume that ρ(x)_r=ρ(x”_r) is incomparable to ρ_lca(x). Next, x_r≤ x'_r<x, x_r≤ x”_r<x, hence ρ_lca(x_r)≤ρ_lca(x'_r)≤ρ_lca(x) and ρ(x_r)≤ρ(x”_r)≤ρ(x). Therefore, ρ(x_r) is incomparable to ρ_lca(x), hence incomparable to ρ_lca(x_r), which contradicts Lemma <ref>. Therefore ρ_lca(x)=ρ(x). Let us prove that x∈Σ(ℛ_lca). Assume the opposite, x∈Δ(ℛ_lca). Thus ρ_lca(x)=ρ_lca(x'_l)=ρ_lca(x'_r), and from the LCA reconciliation, we have ρ_lca(x)=ρ_lca(x_r) or ρ_lca(x)=ρ_lca(x_l). Next, ρ_lca(x_r)=ρ_lca(x)=ρ(x)>ρ(x_r) or ρ_lca(x_l)=ρ_lca(x)=ρ(x)>ρ(x_l), which contradicts Lemma <ref>. Thanks to these properties we can define a distance from an arbitrary reconciliation to the LCA reconciliation. This distance will be used in the proofs of several properties, stating that there is always a way to lower the distance to the LCA without increasing the cost of a reconciliation. Let ℛ=(G,G',S,ϕ,ρ,δ) be any reconciliation. Let dist_lca(ℛ)=∑_d∈ V(G) d_S(ρ(d),ρ_lca(d)) be the distance from ℛ to the LCA reconciliation ℛ_lca=(G,G'_lca,S,ϕ,ρ_lca).
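As a small illustration (our own notation: ρ and ρ_lca restricted to V(G) as dictionaries, and dist_S a species-tree distance on comparable nodes), this distance can be computed directly from the definition:

```python
def dist_lca(rho, rho_lca, dist_S):
    """Sum of species-tree distances between rho(d) and rho_lca(d) over V(G)."""
    return sum(dist_S(rho[d], rho_lca[d]) for d in rho)
```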
If, for a reconciliation ℛ, dist_lca(ℛ)>0, then there exists a reconciliation ℛ' such that dist_lca(ℛ')<dist_lca(ℛ) and ω(ℛ')≤ω(ℛ). Take any d'∈ V(G) so that ρ(d') > ρ_lca(d') and let d be a minimal element of V(G) such that ρ(d)=ρ(d') and d ≤ d'. Since d≤ d', we have ρ_lca(d) ≤ρ_lca(d')<ρ(d')=ρ(d), therefore ρ_lca(d)<ρ(d). By Lemma <ref>, d∉Σ(ℛ), so d∈Δ(ℛ). Let d^1_l,d^1_r be the children of d in G'. Since d∈Δ(ℛ), we have ρ(d)=ρ(d^1_l)=ρ(d^1_r), and because of the minimality of d, we get d^1_l,d^1_r ∉ V(G). Similarly, all descendants of d in G' with the same ρ-value are not in V(G). Let d_1,...,d_k be these descendants and let T_1,...,T_k be lost subtrees such that root(T_i)=d_i, (i=1,…,k). Prune all these subtrees, contract the nodes of degree two (i.e. d_1,…,d_k), and let G” denote the obtained extension of the gene tree G. Let d^2_l,d^2_r be the children of d in G”. If ρ(d^2_l)≠ρ(d^2_r), then G” generates a new reconciliation ℛ', where d is a speciation and ρ'(d)=ρ(d). By Lemma <ref>, ρ'(d)=ρ_lca(d), which contradicts ρ(d)>ρ_lca(d). Let ρ(d^2_l) = ρ(d^2_r). Since ρ(d^2_l) <ρ(d), we don't have consistency. Put ρ'(d)=ρ(d^2_l) and insert x_1 into G” so that d<x_1<p_G'(d), ρ'(x_1)=ρ(d), and x_1 is the root of some of the pruned subtrees T_i (reinsert T_i). In this way we get a new reconciliation ℛ”, and d is a duplication in ℛ”. Also, ω(ℛ”)≤ω(ℛ) and dist_lca(ℛ”) < dist_lca(ℛ). If d∈Δ'(ℛ) and the corresponding loss is l, then extend l so that one loss extension follows d and the other can be one of the pruned subtrees T_i (reinsert T_i). The next lemma states that with the LCA reconciliation we get the smallest set of duplications. Let ℛ_lca be the LCA reconciliation and ℛ be any reconciliation. Then Δ(ℛ_lca) ⊆Δ(ℛ)∩ V(G). Let x∈Δ(ℛ_lca); then x ∉Σ(ℛ_lca) and x∈ V(G). Assume the opposite, that x∉Δ(ℛ)∩ V(G); then x∈Σ(ℛ). From Lemma <ref> we get x∈Σ(ℛ_lca), a contradiction. Therefore x∈Δ(ℛ) ∩ V(G).§.§ Properties of optimal reconciliations We examine some properties of optimal reconciliations. Note that optimal reconciliations are not necessarily minimal, but we will state the relation between the two classes (see Lemma <ref>). The next lemma states that optimal reconciliations never contain duplication nodes in lost subtrees. Let ℛ=(G,G',S,ϕ,ρ,δ) be an optimal reconciliation. Then Δ(ℛ) ⊆ V(G), i.e. all duplication nodes are in G. Assume the opposite. Let ℛ=(G,G',S,ϕ,ρ,δ) be a reconciliation, and let x be a minimal node of Δ(ℛ) \ V(G). Let us prove that ℛ cannot be optimal. Let x_l,x_r∈ V(G') be the children of x. Since x is a duplication, we have ρ(x)=ρ(x_l)=ρ(x_r). We consider two cases. Case 1, x_l,x_r∉ V(G). Case 1.1, x is a conversion, and l is the corresponding loss. Remove l and x, connect x_l with p_G'(l), and x_r with p_G'(x). In this way we get G”. Let ρ'=ρ/G”, and δ'=δ/G”. We get a reconciliation ℛ'=(G,G”,S,ϕ,ρ',δ') which has one duplication fewer, i.e. ω(ℛ')=ω(ℛ)-1. Hence ℛ cannot be an optimal reconciliation. Case 1.2, x is not a conversion. Remove T(x_l) and x, then connect x_r with p_G'(x). By a similar argument, we get a reconciliation with one duplication and all non-free losses from T(x_l) fewer, i.e. we get a reconciliation with a strictly lower cost. Indeed, since x is a minimal duplication, the subtree T(x_l) cannot have any duplications, i.e. by removing T(x_l) we cannot get to the situation where some free loss becomes non-free. Case 2, x_l∈ V(G), x_r∉ V(G). Similarly, if x is not a conversion, remove T(x_r), suppress x, and we get a reconciliation with strictly lower cost.
If x is a conversion and l is the associated loss, then remove l, suppress x, and connect x_r with p_G'(l). We again obtain a cheaper reconciliation. The next lemma is a version of Lemma <ref> for an optimal reconciliation. Let ℛ_lca be the LCA reconciliation, and let ℛ be an optimal reconciliation. If dist_lca(ℛ)>0, there exists an optimal reconciliation ℛ' such that Δ(ℛ')=Δ(ℛ) and dist_lca(ℛ')<dist_lca(ℛ). Follows directly from the proof of Lemma <ref>. We constructed ℛ' by pruning some of the lost subtrees and lowering a duplication, which remained a duplication in ℛ'. By Lemma <ref>, lost subtrees in an optimal reconciliation cannot contain duplications, hence the set of duplications remains unchanged, i.e. Δ(ℛ')=Δ(ℛ). The next theorem states that all optimal reconciliations have the same set of duplications. Let ℛ_lca=(G,G_lca',S,ϕ,ρ_lca) be the LCA reconciliation and ℛ=(G,G',S,ϕ,ρ,δ) be an optimal reconciliation. Then Δ(ℛ_lca) = Δ(ℛ). Assume the opposite: there exist G, S and ℛ such that ℛ is an optimal reconciliation and Δ(ℛ_lca)≠Δ(ℛ). By Lemma <ref> and Lemma <ref> we get Δ(ℛ_lca) ⊂Δ(ℛ)∩ V(G)=Δ(ℛ). Assume that ℛ is an optimal reconciliation with Δ(ℛ_lca) ⊂Δ(ℛ) and minimum dist_lca(ℛ). We have dist_lca(ℛ)=0, otherwise we could get an optimal reconciliation ℛ' with dist_lca(ℛ') <dist_lca(ℛ) and Δ(ℛ')=Δ(ℛ) (Lemma <ref>). From dist_lca(ℛ)=0, we obtain ρ(x)=ρ_lca(x) for all x∈ V(G). Let x'∈Δ(ℛ)\Δ(ℛ_lca). By Lemma <ref>, we have x' ∈ V(G). From x'∉Δ(ℛ_lca) we get that x'∈Σ(ℛ_lca). We continue in a similar way as in the proof of Lemma <ref>. Let x_1,…,x_k be the descendants of x' in V(G') with the same ρ-value as x'. Assume x_1∈ V(G). Since ρ(x)=ρ_lca(x) for all x∈ V(G) and ρ(x_1)=ρ(x'), we get ρ_lca(x_1)=ρ_lca(x'), hence (Lemma <ref>) x'∈Δ(ℛ_lca), a contradiction. Therefore x_1∉ V(G). By a similar argument, x_1,…,x_k∉ V(G). Let T_i be the lost subtrees rooted at x_i (i=1,…,k). By pruning T_i and suppressing x_i (i=1,…,k) we get G'', and a new reconciliation where the node x' is a speciation. Hence we get a reconciliation with strictly lower cost, which contradicts the optimality of ℛ. The next lemma states that, in an optimal reconciliation, we cannot have two comparable nodes x,y∈ V(G')\ V(G) such that ρ(x)=ρ(y). Let ℛ be an optimal reconciliation and x,y∈ V(G') such that ρ(x)=ρ(y) and x<y. Then y∈ V(G)∩Δ(ℛ_lca)=Δ(ℛ_lca)=Δ(ℛ). From Lemma <ref> we have y∈Δ(ℛ). From Theorem <ref>, we obtain Δ(ℛ)=Δ(ℛ_lca). From Lemma <ref>, we have y∈ V(G)⊇Δ(ℛ)=Δ(ℛ_lca). Therefore y∈ V(G)∩Δ(ℛ_lca). The next lemma states the relation between minimal and optimal reconciliations. Let ℛ be an optimal reconciliation. Then there exists a minimal reconciliation ℛ' such that ℛ is a completion of ℛ'. Let ℛ' be the reconciliation obtained from ℛ by deleting all lost subtrees except their root edges. So ℛ is a completion of ℛ'. We prove that ℛ' is minimal. Suppose the opposite. There is e'=(x',p_G'(x'))∈ E(G')\ E(G) such that by removing e' and suppressing p_G'(x') we obtain again a reconciliation, denoted by ℛ''. From the proof of Lemma <ref>, Case 2, we have that for all s∈ V(S) and x,y∈ V(G') such that x<y and ρ(x)<s<ρ(y), there exists z∈ V(G') such that ρ(z)=s and x<z<y. Let x_1 be the other child of p_G'(x'). Since there are no lost subtrees with more than one edge, we have x_1∈ V(G). Let s=ρ(p_G'(x')). Since ℛ'' is a reconciliation, there exists x''∈ V(G'') such that s=ρ(x'') and x'' is comparable to x_1. Take a minimal x'' with these properties; then (Lemma <ref>) x''∈Σ(ℛ''). After bringing back e', we get that p_G'(x') or x'' becomes a duplication (Lemma <ref>). Hence Δ(ℛ'')⊂Δ(ℛ')=Δ(ℛ), which contradicts the optimality of ℛ (Lemma <ref> and Theorem <ref>).
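The pruning used in this proof — keep only the root edge of every lost subtree — is the minimization operation, and it admits the following minimal sketch. The encoding (a children map for G', the set of lost-subtree roots) is a hypothetical convenience, not the paper's data structure.

def minimization(gp_children, lost_roots):
    # prune each lost subtree of G' to its root edge: everything strictly
    # below a lost-subtree root t is removed, while t (and hence the root
    # edge (t, p(t))) is kept, so the original reconciliation is a
    # completion of the result
    pruned = dict(gp_children)
    def drop(x):
        for c in pruned.pop(x, ()):   # x becomes a leaf of G'
            drop(c)
    for t in lost_roots:
        drop(t)
    return pruned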
§.§ LCA completions are optimal A completion of the LCA reconciliation is an optimal reconciliation. Let ℛ=(G,G',S,ϕ,ρ,δ) be an optimal reconciliation with minimum dist_lca(ℛ). We prove that this reconciliation is a completion of the LCA reconciliation. Since all completions of the LCA have the same weight by definition, this proves that all completions of the LCA are optimal reconciliations. From Lemma <ref> we get dist_lca(ℛ)=0 and therefore ρ(x)=ρ_lca(x) for all x∈ V(G). From Theorem <ref> and Lemma <ref>, we have Δ(ℛ)=Δ(ℛ_lca)⊆ V(G). Let t be the root of some lost subtree of G'. Let us prove that t∈ V(G'_lca), and vice versa: if t∈ V(G'_lca)\ V(G), then t is the root of some lost subtree of G'. This correspondence has to be bijective. Let us prove that we can establish a bijection f:V(G)∪{t | t is the root of some lost subtree of G'}→ V(G'_lca)\Λ(ℛ_lca) such that f(x)=x for all x∈ V(G); x<y ⟹ f(x)<f(y); and ρ(x)=ρ(f(x)). First, put f(x)=x for all x∈ V(G). Let t∈ V(G')\ V(G) be the root of some lost subtree of G', with ρ(t)=s and x<t<p_G(x). From Lemmas <ref> and <ref>, we have t∈Σ(ℛ) and t is a minimal element of ρ^-1(s). Hence there is no other element t'∈ V(G') such that ρ(t')=s and x<t'<p_G(x). Since t∈Σ(ℛ), we have ρ(x)<ρ(t)≤ρ(p_G(x)). In ℛ_lca we also have x'∈ V(G'_lca) such that ρ(x')=s and x<x'<p_G(x). Next, put f(t)=x'. The above correspondence is obviously an injection. Let us prove that it is a surjection. In a similar way, let x'∈ V(G'_lca)\Λ(ℛ_lca) with ρ_lca(x')=s'. If x'∈ V(G), then x'=f(x'). Now assume x'∉ V(G). Again from Lemmas <ref> and <ref> we have that x'∈Σ(ℛ_lca) and x' is a minimal element of ρ_lca^-1(s'). Let x<x'<p_G(x), x∈ V(G). Similarly, we have ρ_lca(x)<ρ_lca(p_G(x)) and x' is the only element from V(G')\ V(G) assigned to s' comparable to x. In order for ℛ to be ρ-consistent, there is a root of a lost subtree of G' (say t) such that ρ(t)=s' and x<t<p_G(x), and it is unique. So f(t)=x'. We proved the existence of the described correspondence; therefore every lost subtree of ℛ is obtained as a loss extension in ℛ_lca. The LCA reconciliation is easy to find: it is a well known result that there is a linear time algorithm to compute it <cit.>. What remains, in order to derive an algorithm finding an optimal reconciliation, is to find a completion. The next section presents a method to find a completion of an arbitrary reconciliation. §.§ Finding a completion and the flow of losses Finding a completion is a kind of flow problem. We have demands, which are losses, that we supply with duplications, i.e. we associate them to duplications to form conversions. The amount and distribution of duplications in the phylogenetic tree tell how many losses can be supplied. The number of losses that can be supplied determines the value of a completion. We compute this number recursively along the tree; a sketch of the recursion is given below. In consequence we have to define restrictions of reconciliations to subtrees, which are multiple reconciliations. Let ℛ_i=(G_i,G'_i,S,ϕ_i,ρ_i) be DL reconciliations of gene trees G_i with the species tree S (i=1,…,k). Let T_1,…,T_t be trees, and ρ'_j:V(T_j)→ V(S) be such that ρ'_j(root(T_j))=root(S) and T_j is ρ'_j-consistent (j=1,…,t). Let ℛ'_j=(T_j,S,ρ'_j) (j=1,…,t). Next, let δ: ⋃Δ(ℛ_i) ∪⋃Δ(ℛ'_j) →⋃Λ(ℛ_i) ∪⋃Λ(ℛ'_j) be a partial injective function such that δ(d)=l implies that d and l are assigned to the same node of V(S). Then the structure ℛ_m=(G,S,ℛ_1,…,ℛ_k,ℛ'_1,…,ℛ'_t,δ) is called a multiple reconciliation. The crucial property of a multiple reconciliation is that a loss from one tree (G' or T_i) can be assigned by δ to a duplication from another gene tree.
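In anticipation of Lemma <ref> below, which proves the recursion F(e)=max(min(F(e_l),F(e_r)),0)+D(e)-Ł(e), the bottom-up flow computation can be sketched as follows. The encodings (a map from an edge of S to its two child edges, and per-edge duplication and loss counts) are hypothetical conveniences.

def flow(e, children, D, L):
    # F(e) = max(min(F(e_l), F(e_r)), 0) + D(e) - L(e), with F = 0 below
    # the leaf edges; children[e] lists the two child edges of e (empty
    # for a leaf edge), D[e] and L[e] count the duplications and losses
    # assigned to e
    if children.get(e):
        el, er = children[e]
        m = max(min(flow(el, children, D, L), flow(er, children, D, L)), 0)
    else:
        m = 0
    return m + D.get(e, 0) - L.get(e, 0)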
The cost of a multiple reconciliation is computed in the same way as the cost of a reconciliation. The multiple reconciliation induced by a reconciliation ℛ and an edge e is composed of all parts of ℛ mapped to S(e) by ρ. When it is evident from the context, instead of multiple reconciliation we will simply write reconciliation, allowing additional lost subtrees. Let ℛ_m be a multiple reconciliation with e∈ E(S). Let ℛ_m1 be the reconciliation obtained from ℛ_m by adding k new lost subtrees, each with only one root edge, assigned to e. Obviously ω(ℛ_m)+k=ω(ℛ_m1), but it is possible that ω(c(ℛ_m))=ω(c(ℛ_m1)) (see Figure <ref>). Let ℛ be a reconciliation, e∈ E(S), and ℛ(e) the multiple reconciliation induced by ℛ and e. Let ℛ'(e) be the reconciliation obtained from ℛ(e) by removing all the lost trees T_1,…,T_l containing only one loss assigned to e. Denote by ℛ_k(e) the multiple reconciliation obtained from ℛ'(e) by adding k lost trees containing only one loss assigned to e (k may be lower or higher than l; if k=l then ℛ_k=ℛ). Let k' be the maximum number such that ω(c(ℛ_k'(e)))=ω(c(ℛ'(e))). The flow of the edge e is denoted F(e,ℛ)=F(e)=k'-l. Note that if F(e) ≥ 0, then F(e) is the maximum number of extra losses assigned to e that does not change the weight of the completion of ℛ(e). The opposite is also true: if m≥ 0 is the maximum number of extra losses assigned to e that does not change the weight of a completion of ℛ(e), then m=F(e). We show how to efficiently compute the flow recursively with Lemma <ref>. Recall that D(e)=D(e,ℛ_m) and Ł(e)=Ł(e,ℛ_m) denote the numbers of duplications and losses assigned to e in the reconciliation ℛ_m. Let ℛ_m be a multiple reconciliation, e∈ E(S). Then F(e)=max(min( F(e_l),F(e_r) ),0)+D(e)-Ł(e). We use mathematical induction on e. Let e be a leaf edge. Then e_l=NULL, e_r=NULL, and F(e_l)=F(e_r)=0. The only way new losses assigned to e can be free is by pairing them with duplications in e. Therefore k'=d and F(e)=k'-l=d-l. Now let e be a non-leaf edge, m=max(min(F(e_l,ℛ_m(e_l)),F(e_r,ℛ_m(e_r))), 0), d=D(e,ℛ_m(e)), and l=Ł(e,ℛ_m(e)). By the inductive hypothesis, we can extend m losses over e_r and e_l so that the weight of the completions of ℛ_m(e_l) and ℛ_m(e_r) is not changed. We can make d losses assigned to e free by pairing them with duplications in e. Hence k'=m+d and F(e)=k'-l=m+d-l. Let ℛ_1 be a multiple reconciliation with a root edge e=(s,p(s)), and F(e,ℛ_1)≤ 0. By assigning an extra loss to e we obtain ℛ_2. Then ω(c(ℛ_2))=ω(c(ℛ_1))+1. We postpone the proof of this lemma to Section <ref> because it uses some notions introduced later. The next lemma is a consequence of Lemma <ref>. Let ℛ_1 be a (multiple) reconciliation, e the root edge of S, and F(e,ℛ_1) < 0. Let ℛ_2 be the reconciliation obtained from ℛ_1 by removing a loss assigned to e. Then ω(c(ℛ_2))=ω(c(ℛ_1))-1. Lemmas <ref> and <ref> are stated in terms of adding and removing a loss from the root edge e. Similar lemmas hold if we remove/add a duplication from/to the root edge e. Because they are straightforward, we will neither state nor prove them. Thanks to this flow computation we can find a completion of any reconciliation by a polynomial time algorithm, whose pseudo-code is given in Algorithms <ref> and <ref>. Let us introduce a convention. If we say that, e.g., ℛ' is an output of ExtendLosses(ℛ), then the procedure ExtendLosses(.) is regarded as a standalone procedure with the input ℛ.
But if we say that ℛ' is an output of ExtendLosses (no input parameters), then we regard ExtendLosses as a part (sub-procedure) of the main procedure, and ExtendLosses receives the parameters as described. Let ℛ be a reconciliation, l a non-free loss assigned to e∈ E(S), and e_1,e_2 the children of e. Suppose that Δ''(e)≠∅ or (F(e_1)>0 and F(e_2)>0). Then the procedure ExtendLossIntoFreeTree(ℛ,l) extends l into a free tree. Note that if Δ''(e)=∅ and F(e_1)=F(e_2)=0, then F(e)≤ 0. We use mathematical induction on e. Let e be a leaf edge. Then e_1=NULL, e_2=NULL and F(e_1)=F(e_2)=0. Hence Δ''(e)≠∅, and l is assigned to a random duplication from Δ''(e). Assume that e is not a leaf edge. If Δ''(e) ≠∅ and assign is chosen, then l is assigned to a random element from Δ''(e), i.e. l is extended into a free tree with one edge. If Δ''(e)=∅ or extend is chosen, then F(e_1)>0, F(e_2)>0 and l is extended into l_1 and l_2. Since F(e_i)>0 (i=1,2), e_i satisfies the if condition in OneCompletion. Hence, by the inductive hypothesis, ExtendLossIntoFreeTree(ℛ,l_i) extends l_i into a free tree, i.e. l is extended into a free tree. Let us introduce a convention. Let e=(x,p_G'(x))∈ E(G'). If ρ(p_G'(x))=p(ρ(x)), then we can write ρ(e)=(ρ(x),ρ(p_G'(x)))∈ E(S). This property does not hold for an arbitrary edge of G', but it holds for any edge of a lost subtree, since we do not consider lost subtrees with duplications (an optimal reconciliation cannot have a lost subtree with a duplication). Let T be a subtree of G'; then ρ(T)={ρ(e)| e∈ E(T)}. Sometimes we will identify lost trees with their roots, i.e. v can denote both the root of a tree and the tree with root v. The reason for this is that lost subtrees are dynamic — they extend or switch (an operation introduced later) — but their roots are not. Let ℛ be a reconciliation with non-extended losses, and let t_i (i=1,…,k) and t'_j (j=1,…,m) be free and non-free lost subtrees of c(ℛ) such that t'_j≥ t_i whenever t_i and t'_j overlap. All non-free lost subtrees t'_j (j=1,…,m) are non-extended, i.e. they have one edge each. Then c(ℛ) is a possible output of OneCompletion(ℛ). Let ℛ_0=ℛ, and let ℛ_i be obtained from ℛ_i-1 by extending the corresponding loss into the tree t_i (i=1,…,k). Hence ℛ_k=c(ℛ). Assume that the trees t_1,…,t_i-1 (i≥ 1) are constructed by iterations of ExtendLossIntoFreeTree. Take t_i with the minimal root among the free lost subtrees that are not yet added. Let us prove that F(e,ℛ_i-1)>0 for all e∈ E(ρ(t_i))\{root_E(ρ(t_i))}. Assume the opposite: let F(e_1,ℛ_i-1)≤ 0; since the free subtree t_i extends over e_1, some loss in S(e_1) becomes non-free. More precisely, ω(c(ℛ_i-1(e_1))) < ω(c(ℛ_i(e_1))). This means that |Λ\Λ'(c(ℛ_i-1(e_1)))| < |Λ\Λ'(c(ℛ_i(e_1)))|. Since the trees t_1,…,t_i-1 (and t_i) are free and already present in ℛ_i-1 (i.e. ℛ_i), we can assume that they are not changed in c(ℛ_i-1) (i.e. c(ℛ_i)), because we gain nothing by further extending free losses (although it is possible). Consider c(ℛ_i(e_1)). Let T_S be the maximal subtree of S(e_1) (see Figure <ref>) such that if v_0 ∈ V(T_S)\ L(T_S) is a lost subtree in c(ℛ_i(e_1)), then there are lost subtrees (in c(ℛ_i(e_1))) v_1,…,v_s with ρ(v_0)<ρ(v_1)<…<ρ(v_s), v_i overlapping with v_i+1 (i=0,…,s-1), and v_s=t_i. Let v∈ V(T_S)\ L(T_S) be a lost subtree. Let us prove that v is a free tree (in c(ℛ_i-1(e_1)), c(ℛ_i(e_1)), and c(ℛ)). From v∈ V(T_S)\ L(T_S) we have v=v_0<v_1<…<v_s-1<v_s=t_i and v_i-1 overlaps v_i. Since v_s-1 overlaps t_i (in c(ℛ_i(e_1))) and t_i is the same in both c(ℛ_i(e_1)) and c(ℛ), we have that v_s-1 overlaps t_i in c(ℛ), hence v_s-1 is a free tree in c(ℛ), i.e.
v_s-1∈{t_1,…, t_i-1}. Applying the same argument to v_s-1, we get v_s-2∈{t_1,…,t_i-1}. Proceeding in this manner, we obtain v∈{t_1,…,t_i-1}, hence v is a free tree. Let f_1,…,f_r be the children of the leaf edges of T_S. By the maximality of T_S, there is no lost subtree in c(ℛ_i-1(e_1)) nor in c(ℛ_i(e_1)) that expands over f_j (j=1,…,r). All non-free losses from S(e_1) are contained in S(f_j) (j=1,…,r). This holds for both c(ℛ_i-1) and c(ℛ_i). Therefore the structure of the lost subtrees in ℛ_i-1(f_j) can be identical to the structure of the lost subtrees in ℛ_i(f_j) (j=1,…,r), and thus a completion of ℛ_i-1(e_1) has the same weight as a completion of ℛ_i(e_1), a contradiction. Hence the procedure ExtendLossIntoFreeTree can give us t_i (i=1,…,k). It is proved in Section <ref>, in a more general framework, that these procedures indeed compute a completion, and hence, if the input reconciliation is the LCA reconciliation, they compute an optimal reconciliation. § ZERO-FLOW RECONCILIATIONS AND THE SPACE OF ALL OPTIMAL RECONCILIATIONS Here we introduce zero-flow reconciliations and use them as a hinge to find all optimal reconciliations. Zero-flow (ZF) reconciliations are a subspace of optimal reconciliations and they contain LCA reconciliations, but these inclusions are strict: all three sets are distinct. We first show how to find any ZF reconciliation, up to completion, from an LCA reconciliation. Then, by a different procedure, we show how to access the whole space of optimal reconciliations, up to completion, from a ZF reconciliation. Finally, as these reductions work up to completion, we show how to navigate among all completions of a given reconciliation. Let e=(s,p(s)) be an edge of S and ℛ a reconciliation. We denote by X(e,ℛ)={d∈ V(G)|ρ_lca(d)≤ s, ρ(d)≥ p(s)} the set of nodes (duplications or conversions) which are assigned under s in the LCA reconciliation and above p(s) in ℛ. An optimal reconciliation ℛ is said to be a zero-flow (ZF) reconciliation if, for every internal node s of S with children edges e_1 and e_2, F(e_1,ℛ)<0 ⟹ X(e_1,ℛ) = X(e_2,ℛ)=∅. In other words, an optimal reconciliation is ZF if all duplications assigned to or above a node s, while strictly below it in the LCA reconciliation, verify that the flow of the children edges of s is non-negative. By definition LCA reconciliations are ZF (X(e,ℛ_lca)=∅ for all e). But we will see that the converse is not true. Similarly, ZF reconciliations are optimal by definition, but some optimal reconciliations are not ZF. §.§ Computing ZF reconciliations by duplication raising Duplication raising consists in changing the position of a duplication from its position in a minimal reconciliation to an upper position in the species tree. It is a concept that was previously used to explore DL reconciliations <cit.>. Let ℛ=(G,G',S,ϕ,ρ,δ) be a minimal reconciliation and x∈ V(G). We say that the reconciliation ℛ'=(G,G'',S,ϕ,ρ',δ') is obtained from ℛ by raising the node x if ℛ' is a minimal reconciliation such that ρ(x')=ρ'(x') for all x'∈ V(G)\{x} and ρ'(x)=p(ρ(x)). Depending on the assignment and event status of the parent node of x, raising x has different effects. If p_G(x) is a speciation (see Figure <ref>) and ρ(p_G(x))=p(ρ(x)), then after raising x, p_G(x) becomes a duplication and three new losses are generated. This cannot lead to an optimal solution because of the additional duplication (Theorem <ref>). If ρ(p_G(x))>p(ρ(x)) or p_G(x) is a duplication, then after raising x only one additional loss is generated.
This condition, which is necessary to yield an optimal solution, is formalized as follows: x∈Δ(ℛ) ∧(p(ρ(x))<ρ(p_G(x)) ∨(p(ρ(x))=ρ(p_G(x)) ∧ p_G(x)∈Δ(ℛ))). The next lemma states that raising a duplication cannot decrease the weight of a completion. The proof of the lemma also describes how to lower a duplication; this procedure will be important later in some proofs. Let ℛ be a minimal reconciliation, and let ℛ_1 be a minimal reconciliation obtained from ℛ by raising a duplication. Then ω(c(ℛ))≤ω(c(ℛ_1)). Let x be the raised duplication, e_1,e_2∈ E(S) siblings, e their parent, with x assigned to e_1 in ℛ and to e in ℛ_1. Let T be the lost subtree such that root(T) is a child of x in c(ℛ_1) and T is expanded over e_2. We consider two cases (see Figure <ref>). Case 1, x∉Δ'(c(ℛ_1)). Start with c(ℛ_1), place x back to e_1, and remove T. We get an extension of ℛ with cost at most that of c(ℛ_1), i.e. ω(c(ℛ))≤ω(c(ℛ_1)). Case 2, x∈Δ'(c(ℛ_1)). Let l be the loss assigned to x in c(ℛ_1). Start with c(ℛ_1), place x back to e_1, and extend l so that in e_1 it is paired with x (staying a free loss) and in e_2 it is connected to T. In this way we get an extension of ℛ of the same weight as c(ℛ_1), i.e. ω(c(ℛ))≤ω(c(ℛ_1)). As a consequence of Lemma <ref>, no optimal reconciliation can be obtained by raising a duplication from a reconciliation that has no optimal completion. We will now see under which conditions a duplication raising of a reconciliation with an optimal completion can lead to another reconciliation with an optimal completion. The next lemma states when raising a duplication does not increase the weight of a reconciliation. Let ℛ be a minimal reconciliation, and e_1,e_2∈ E(S) the children of the edge e. If x∈Δ(ℛ) assigned to e_1 satisfies condition (<ref>), ℛ_1 is a minimal reconciliation obtained by raising x, F(e_1,ℛ)>0 and F(e_2,ℛ)>0, then ω(c(ℛ_1))=ω(c(ℛ)). First, construct an extension of ℛ_1 by using c(ℛ). By raising x, we generate one new loss in e_2. Since F(e_2,ℛ)>0, we have ω(c(ℛ(e_2)))=ω(c(ℛ_1(e_2))), i.e. the loss generated by the duplication raising can become a free loss. Let x∈Δ'(c(ℛ)) be assigned to l∈Λ'(c(ℛ)). If l is non-extended (in c(ℛ)), then since F(e_1,ℛ)>0, l can be assigned to some other duplication in e_1 or extended over the children of e_1 and become free. If l is part of a lost subtree T_l in c(ℛ), then by raising x we can also raise l, remove the subtree of T_l expanding over e_2, and leave l assigned to x. Thus we obtain an extension of ℛ_1 not heavier than c(ℛ), i.e. ω(c(ℛ_1))≤ω(c(ℛ)). From Lemma <ref> we have ω(c(ℛ))≤ω(c(ℛ_1)), hence ω(c(ℛ_1))= ω(c(ℛ)). The next lemma follows directly from Lemma <ref>. Under the hypotheses of Lemma <ref>, if the completions of ℛ are optimal, then the completions of ℛ_1 are optimal. Algorithms <ref>, <ref>, and <ref> describe how to generate a reconciliation which does not change the score of completions by raising duplications. The procedure GenerateNewLosses adds lost subtrees so that the new ρ after raising a duplication is consistent with S. The next two statements demonstrate that, up to completion, all ZF reconciliations are reached by applying Algorithm RaiseDuplication to an LCA reconciliation. The completions of ℛ, an output of RaiseDuplication when the input is the LCA reconciliation, are optimal. A completion of the LCA reconciliation is optimal (Theorem <ref>), raised duplications satisfy the conditions of Lemma <ref>, and by this lemma, every time a duplication is raised, c(ℛ) remains an optimal reconciliation. Let ℛ' be a minimal reconciliation such that c(ℛ') is a ZF reconciliation.
Then ℛ' is a possible output of RaiseDuplication. Since c(ℛ') is an optimal reconciliation, ℛ' is obtained from the LCA reconciliation by raising duplications that satisfy condition (<ref>). By raising a duplication, the value of F(e) cannot increase. Let e_1,e_2∈ E(S) be siblings, e their parent, and x a duplication assigned to e_1. Let us raise x to e. If before the raising F(e_1)≤ 0 or F(e_2)≤ 0, then after the raising F(e_1)< 0 or F(e_2)< 0, X(e_1)≠∅, and X(e_2)≠∅, a contradiction. Hence F(e_1)> 0 and F(e_2)> 0. Thus all the conditions for raising a duplication in the procedure RaiseDuplication are satisfied, hence ℛ' is a possible output. §.§ Reduction of optimal reconciliations to ZF reconciliations Lemma <ref> states that, up to completion, we can generate all ZF reconciliations from LCA reconciliations. We now show how to generate all reconciliations from ZF reconciliations. This is done by conversion raising. The next lemma proves that only conversions are concerned in optimal non-ZF reconciliations. Let ℛ be an optimal reconciliation, e_1=(s_1,s),e_2=(s_2,s)∈ E(S). If F(e_1,ℛ)<0, then X(e_1,ℛ) and X(e_2,ℛ) contain only conversions. Assume the opposite: let x∈ X(e_1,ℛ) and x not a conversion. Put back (lower) all elements of X(e_1,ℛ) to e_1. The process is performed as in the proof of Lemma <ref> (Figure <ref>). If we lower a conversion, the weight of the reconciliation is not changed, and neither is F(e_1). If we lower a duplication, then F(e_1) is increased by 1 and the cost of a completion is decreased by one (Lemmas <ref>, <ref> and the comment after them), which contradicts the optimality of ℛ. Therefore X(e_1,ℛ) does not contain a duplication that is not a conversion. Similar arguments apply to X(e_2,ℛ). The procedure RaiseConversions does not change the weight of a reconciliation. Let d be a raised conversion, and T_i the lost subtree whose leaf is assigned to d. By raising d, we do not create extra losses, but use an existing subtree of T_i and reattach it under d (see Figure <ref> (ii) in the opposite direction and Lemma <ref>, Case 2). The loss that was assigned to d is removed, and the newly created loss is assigned to d at its new position. In this way we change neither the number of non-free losses nor the number of duplications/conversions, i.e. the weight of the reconciliation is not changed. Let ℛ be an optimal reconciliation. We can obtain a ZF reconciliation by lowering some conversions. For all e ∈ E(S), if F(e)<0, take all elements from X(e) and X(e'), where e' is the sibling of e, and lower them to e and e'. In this way we get X(e)=X(e')=∅. Since these elements are conversions (Lemma <ref>), lower them as described in Lemma <ref>, Case 2. In this way we obtain a ZF reconciliation of the same weight as ℛ. In consequence, it is possible to reach any optimal reconciliation by an algorithm which first explores ZF reconciliations and then raises some conversions, as in Algorithm <ref>. §.§ Finding all completions All previous results are valid up to completions. This means that we have an algorithm which is able to detect, for example, all duplications that can be conversions in some optimal solution. However, we do not yet know all the possibilities by which a duplication is converted. For that we need to enumerate all possible completions. The algorithm can be described by three procedures, as written in Algorithm <ref>. One procedure generates a completion by extending losses into free trees, as described in Section <ref>.
In order to generate the full diversity of possible reconciliations, there are two other procedures described here, which consist in extending losses into non-free lost subtrees, and in switching between subtrees. The first one is described in Algorithms <ref> and <ref>. In Algorithm <ref> a loss is extended over two edges, one with positive F-value (say edge e_1) and the other with non-positive F-value (say edge e_2). The part (of the lost subtree) extended over e_1 is further extended as a free loss, while the part extended over e_2 is further (recursively) extended as a non-free loss. Let l be a non-free loss in a reconciliation ℛ. Then the procedure ExtendOneLossIntoNonFreeTree(ℛ,l) extends the loss l into a non-free tree. If l is not extended, since it is not assigned to a duplication (conversion), we regard it as extended into a non-free tree (with one edge). Let l be assigned to the edge e, with children e_1,e_2. We use mathematical induction on e. Let e be a leaf edge. Then e_1=NULL, e_2=NULL and F(e_1)=F(e_2)=0. In this case the if condition is not satisfied, and therefore l is not extended. Assume that e is not a leaf edge. If the if condition is not satisfied, then l is not extended, i.e. it is extended into a non-free tree with one edge. If the if condition is satisfied, then F(e_1)>0 and F(e_2)≤ 0, and l is extended into l_1,l_2. Then ExtendOneLossIntoFreeTree(ℛ,l_1) extends l_1 into a free tree (Lemma <ref>), and ExtendOneLossIntoNonFreeTree(ℛ,l_2) extends l_2 into a non-free tree (inductive hypothesis). Hence l is extended into a non-free tree. The next lemma is a consequence of Lemma <ref>. The procedure ExtendLossesIntoNonFreeTrees does not change the weight of a reconciliation. Let ℛ be a reconciliation with non-extended losses, and let t_i (i=1,…,k) and t'_j (j=1,…,m) be free and non-free lost subtrees of c(ℛ) such that t'_j≥ t_i whenever t_i and t'_j overlap. Then c(ℛ) is a possible output of the series of procedures OneCompletion(ℛ), ExtendLossesIntoNonFreeTrees(ℛ). Let ℛ_0=ℛ, let ℛ_i be obtained from ℛ_i-1 by extending the corresponding loss into the tree t_i (i=1,…,k), let ℛ'_0=ℛ_k, and let ℛ'_j be obtained from ℛ'_j-1 by extending the corresponding loss into the tree t'_j (j=1,…,m). Hence ℛ'_m=c(ℛ). The procedure OneCompletion can give us t_i (i=1,…,k) (Lemma <ref>). We now prove that ExtendLossesIntoNonFreeTrees can give us t'_j (j=1,…,m). Assume that t_i (i=1,…,k) and t'_1,…,t'_j-1 (j≥ 1) are added. Let us prove that ExtendLossesIntoNonFreeTrees can add t'_j. Let e_1,e_2∈ E(S), let e=(s,p(s)) be their parent, and ρ(l'_j)=s, where l'_j extends into t'_j. If F(e,ℛ'_j-1)>0, then l'_j can be free, thus yielding a cheaper reconciliation than c(ℛ), a contradiction; so F(e,ℛ'_j-1)≤ 0. Let e'_1,e'_2∈ E(ρ(t'_j)) be siblings, e' their parent, and F(e'_1,ℛ'_j-1)≥ F(e'_2,ℛ'_j-1). The subtree t'_j expands over e'_1,e'_2, not necessarily originating at e'. We consider two cases. Case 1, F(e',ℛ'_j-1)≤ 0. If F(e'_1,ℛ'_j-1)≤ 0 (and F(e'_2,ℛ'_j-1) ≤ 0), then by pruning t'_j neither e'_1 nor e'_2 gains a loss, so the costs of the reconciliations c(ℛ'_j-1(e'_1)) and c(ℛ'_j-1(e'_2)) will not rise in ℛ'_j, but ℛ'_j gains one non-free loss (the pruned t'_j). Hence we obtain a cheaper reconciliation, a contradiction. Assume F(e'_1,ℛ'_j-1)>0 and F(e'_2,ℛ'_j-1) > 0. Since F(e',ℛ'_j-1)≤ 0, there is a loss l assigned to e' that is non-free (in ℛ'_j-1). Then we can extend l over e'_1,e'_2 so that it becomes free, and prune t'_j to a single edge (t'_j stays non-free), hence obtaining a cheaper reconciliation than c(ℛ), a contradiction. Case 2, F(e',ℛ'_j-1)>0.
If F(e'_2,ℛ'_j-1)≤ 0, then e' has a duplication that is not a conversion. At least one of the subtrees of t'_j expanding over e'_1,e'_2 is a free tree. Assume that it is the one expanding over e'_1. Next, we can prune a subtree of t'_j so that t'_j has a leaf assigned to e' and to the duplication, thus becoming a free loss. Since F(e'_2,ℛ'_j-1)≤ 0, there is one non-free loss in ℛ'_j-1(e'_2) that can become free, thanks to the fact that t'_j does not expand over e'_1 anymore. Making this loss free enables us to obtain a cheaper reconciliation than c(ℛ), a contradiction. From Cases 1 and 2, we have that if F(e',ℛ'_j-1)≤ 0, then F(e'_1,ℛ'_j-1)>0 and F(e'_2,ℛ'_j-1)≤ 0, and if F(e',ℛ'_j-1)> 0, then F(e'_1,ℛ'_j-1)>0 and F(e'_2,ℛ'_j-1)> 0. Hence the conditions along ρ(t'_j) of ExtendLossesIntoNonFreeTrees are satisfied, and therefore t'_j can be obtained by this procedure. To obtain all possible lost subtrees in an optimal reconciliation, we need to introduce an operation that exchanges parts of lost subtrees. Notice that a lost subtree with more than one non-free leaf cannot appear in an optimal reconciliation. Let T_0 and T_1 be binary rooted trees and t_i∈ V(T_i)\{root(T_i)} (i=0,1). A switch operation on T_0 and T_1 around t_0 and t_1 creates new trees by separating the subtrees T_i(t_i) from T_i and joining them with p(t_1-i)∈ T_1-i (i=0,1). Let ℛ be a reconciliation, T_0 and T_1 free and non-free lost subtrees, l∈ L(T_1) a non-free loss, and p the path in S from ρ(l) to ρ(root(T_1)). Assume there exists a minimal element s_0∈{s| s∈ V(p)∩ V(ρ(T_0)) }\{ρ(root(T_0)), ρ(root(T_1))}, and t_i∈ V(T_i) such that ρ(t_i)=s_0 (i=0,1). By a switch operation on T_0 and T_1 we mean a switch operation on the binary trees T_0 and T_1 around t_0 and t_1. The switch operation on a reconciliation is defined only for one free and one non-free lost subtree, and is possible only if the trees T_0 and T_1 overlap, i.e. if ρ(v_0) ∈ρ(T_1) or ρ(v_1) ∈ρ(T_0), where v_i=root(T_i) (i=0,1). In the case ρ(v_0) ∈ρ(T_1), it must hold that ρ(l)<ρ(v_0), where l is the non-free leaf of T_1. In these cases we say that T_0 and T_1 are switchable. We have that either T_0 gives a (non-trivial) subtree to T_1, or T_1 gives a (non-trivial) subtree to T_0, but both cannot happen. When we apply a switch operation two times on the same trees, around the same nodes, we obtain the starting trees, i.e. switching is a self-inverse operation. After a switch operation, the trees involved still overlap. For simplicity of notation, we introduce some conventions. We write tree instead of lost subtree. We identify a tree with its root, i.e. instead of writing a tree with root v, we write a tree v. We do this because, when switching, trees are changed, but their roots are not. When we write v_0<v_1, we mean ρ(v_0)<ρ(v_1). The number of non-free leaves in a tree v is denoted by ω(v); thus ω(v)=0 means that v is a free lost subtree, and ω(v)=1 means that v is a non-free lost subtree. If we apply a switch operation on switchable trees v_0, v_1 such that ω(v_1)=1 and ω(v_0)=0, we say that v_1 carries over a (non-free) loss to v_0. The next lemma is obvious. The switch operation does not change the weight of a reconciliation. The next lemma tells us how to obtain, from an arbitrary reconciliation, a reconciliation with a more convenient structure of lost subtrees. Let ℛ be a reconciliation. Then there exists a reconciliation ℛ_1 such that if v_0 and v_1 are free and non-free overlapping trees in ℛ_1, then v_0≤ v_1, and ω(ℛ)=ω(ℛ_1). Let V_lost={v| v is a lost subtree}.
Take v_0∈ V_lost such that ω(v_0)=1, and v_1∈ V_lost such that ω(v_1)=0, v_0<v_1, and v_0 overlaps with v_1. By switching v_0 and v_1 we get ω(v_0)=0, ω(v_1)=1 and v_0<v_1. Repeat the process as long as there are trees v_0,v_1 as described. We need to prove that this algorithm terminates. Let d(V_lost) be the total distance of all non-free v∈ V_lost from root(S). Then d(V_lost) is a non-negative integer. Every time switching is applied, d(V_lost) decreases; hence the algorithm must stop, because d(V_lost) cannot decrease indefinitely. The switch operation does not change the weight of a reconciliation (Lemma <ref>). The procedure SwitchSubtrees is described in Definition <ref>. § THE ALGORITHM In this section, we prove that the algorithm returns an optimal reconciliation, and that any optimal reconciliation can be an output of the algorithm. We also prove the remaining lemmas. All elements are now ready to write the main algorithm that generates a random optimal solution. Algorithm <ref> gives the main procedure. Now we prove a lemma stated earlier. Let l be the number of losses assigned to e in ℛ_1, let ℛ be the (multiple) reconciliation obtained from ℛ_1 by removing all (l) losses from e, and let k' be as in the definition of flow. Then F(e,ℛ_1)=k'-l. Therefore the maximum number of extra losses that we can assign to e in ℛ, without changing the completion cost, is k', and k'≤ l. It is obvious that Δ(ℛ)=Δ(ℛ_1)=Δ(ℛ_2). Also ω(c(ℛ))<ω(c(ℛ_2)). We have that ω(c(ℛ_2))=ω(c(ℛ_1))+1 or ω(c(ℛ_2))=ω(c(ℛ_1)). Assume that ω(c(ℛ_2))=ω(c(ℛ_1)). Consider c(ℛ_2). Let t_1,…,t_l,t_l+1 be the lost subtrees with roots assigned to p(s) (and expanding over e). If any of these subtrees is non-free in c(ℛ_2), then by removing it we get an extension of ℛ_1 that has strictly smaller weight than ω(c(ℛ_2))=ω(c(ℛ_1)), a contradiction. Therefore all the subtrees t_1,…,t_l,t_l+1 are free in c(ℛ_2). Let us prove that there is at least one non-free subtree in c(ℛ_2). Assume the opposite, i.e. that all lost subtrees of c(ℛ_2) are free. Then we can obtain an extension of ℛ_1 and of ℛ with all lost subtrees free, by just removing one or all subtrees extending over e. Hence ω(c(ℛ))=ω(c(ℛ_1))=ω(c(ℛ_2))=|Δ(ℛ)|. This means that we can assign at least l+1 losses to e in ℛ without changing the completion cost, which contradicts the fact that k'<l+1. Therefore c(ℛ_2) has at least one non-free lost subtree. Let us prove that there exists a chain of lost subtrees v_1,…,v_m-1,v_m (in ℛ_2) such that v_1< … < v_m, v_i overlaps v_i+1 (i=1,…,m-1), v_1 is a non-free tree, v_2,…,v_m are free trees, and v_m is a tree assigned to p(s) extending over e. Assume the opposite. Let T_S be the maximal subtree with root edge e that contains only free lost subtrees (see Figure <ref>), and f_1,…,f_r the edges of S that are children of the leaf edges of T_S. Because of the maximality of T_S and the assumption that there is no chain leading from a non-free tree to one of the trees t_1,…,t_l+1, there is no tree expanding from an inner node of T_S over one of the edges f_1,…,f_r. Since ℛ_2 has at least one non-free lost subtree, we have r≥ 1, i.e. the edges f_1,…,f_r do exist. Since ω(c(ℛ))<ω(c(ℛ_2)) and c(ℛ_2) has only free trees in T_S, there is an i such that ω(c(ℛ)(f_i)) < ω(c(ℛ_2)(f_i)). Since no lost subtree expands from an inner node of T_S over f_i, we can take the lost subtrees with roots in c(ℛ)(f_i) and use them in c(ℛ_2) instead of the lost subtrees in c(ℛ_2)(f_i). Thus we obtain an extension of ℛ_2 with strictly smaller cost than c(ℛ_2), a contradiction.
This means that there is a chain v_1,…,v_m with the described properties (v_1 is non-free, etc.). Now apply the switch operation on v_i,v_i+1 for every i=1,…,m-1. In this way v_m, which is one of the trees t_1,…,t_l+1, becomes non-free. The weight of c(ℛ_2) is not changed by these switch operations. Now, by removing v_m, we obtain an extension of ℛ_1 with strictly smaller cost than c(ℛ_2), which contradicts the assumption ω(c(ℛ_2)) = ω(c(ℛ_1)). Therefore ω(c(ℛ_2)) = ω(c(ℛ_1))+1. Let e=(s,p(s))∈ E(S), and denote by Ł'(e)=Ł'(e,ℛ)=Ł'(s)=Ł'(s,ℛ) the number of non-free lost subtrees, in the reconciliation ℛ, with a root assigned to p(s)∈ V(S) and expanding over e. Let ℛ be a reconciliation, ℛ_1 an output of OneCompletion(ℛ), e∈ E(S), and e_l,e_r the children of e. Then * F(e,ℛ)>0 ⟹ Ł'(e,ℛ_1)=0; * F(e,ℛ)≤ 0 ⟹ Ł'(e,ℛ_1)=Ł(e,ℛ)-D(e,ℛ)-max(min(F(e_l,ℛ),F(e_r,ℛ)),0); * if ℛ_2 is another output of OneCompletion(ℛ), then ω(ℛ_1)=ω(ℛ_2). Let l=Ł(e,ℛ), d=D(e,ℛ), m=max(min(F(e_l,ℛ),F(e_r,ℛ)),0). (a) Since OneCompletion extends losses over an edge e only when the flow allows it, if F(e)>0, the number of extra losses expanded over e is not greater than F(e). Assume that ℛ_1 generates f extra losses in e. Hence f≤ F(e,ℛ)=m+d-l. Let f_m and l_m be the numbers of losses made free by extending over e_l,e_r, and f_d and l_d the numbers of losses made free by assigning them to duplications in e. Hence f_m+l_m≤ m, f_d+l_d≤ d, f_m+f_d≤ f, l_m+l_d≤ l. Assume the opposite: let Ł'(e,ℛ_1)>0. Then f_m+f_d+l_m+l_d<f+l≤ d+m, so f_d+l_d < d or f_m+l_m < m. Therefore one extra loss can be made free by assigning it to a duplication in e, or by extending it over e_r and e_l. This contradicts the procedure ExtendLossIntoFreeTree, which makes a loss free if Δ''(e)≠∅ or F(e_1)>0 and F(e_2)>0. (b) Since F(e,ℛ)≤ 0, ℛ_1 does not extend any new losses over e, and m+d≤ l. At most m losses can be extended over e_r and e_l, and at most d losses can be assigned to the duplications in e. Therefore the number of losses that remain non-free is l-d-m. (c) From (a) and (b), we have Ł'(e,ℛ_1)=Ł'(e,ℛ_2) for all e∈ E(S), hence |Λ\Λ'(ℛ_1)|=|Λ\Λ'(ℛ_2)|. Since ExtendLossIntoFreeTree does not create new duplications, we have Δ(ℛ_1)=Δ(ℛ_2)=Δ(ℛ). Therefore ω(ℛ_1)=ω(ℛ_2). Let ℛ be a minimal reconciliation. Then AllCompletions(ℛ) returns a completion of ℛ. Let ℛ_1 be a reconciliation as in Lemma <ref>, obtained by applying switch operations on c(ℛ). Then ω(ℛ_1)=ω(c(ℛ)) and ℛ_1 satisfies the conditions of Lemma <ref>. Hence ℛ_1 is a possible output of the series of procedures OneCompletion(ℛ), ExtendLossesIntoNonFreeTrees(ℛ). Let ℛ_2 be another output of this series of procedures with the input ℛ. From Lemmas <ref> and <ref> we have that ℛ_2 is an extension of ℛ. From Lemmas <ref> (<ref>) and <ref> we have ω(ℛ_1)=ω(ℛ_2). Since ℛ_1 is a completion of ℛ, ℛ_2 is a completion of ℛ. Since Switch does not change the weight of a reconciliation (Lemma <ref>) and ℛ_2 is a completion of ℛ, AllCompletions(ℛ) is also a completion of ℛ. Algorithm <ref> returns an optimal solution. The algorithm starts with the LCA reconciliation ℛ_1. The LCA's completion is an optimal reconciliation (Theorem <ref>), therefore the completion of ℛ_1 is an optimal reconciliation. Let ℛ_2 be an output of RaiseSeveralDuplications(ℛ_1). Then c(ℛ_2) is an optimal reconciliation (Lemma <ref>). Let ℛ_3 be an output of AllCompletions(ℛ_2). Then (Lemma <ref>) it is a completion of ℛ_2, hence ℛ_3 is an optimal reconciliation. Assume that ℛ_4 is an output of RaiseConversions(ℛ_3). From Lemma <ref> we have ω(ℛ_4)=ω(ℛ_3). Hence ℛ_4 is an optimal reconciliation.
Note that ℛ_4 is an output of RandR(S,G,ϕ). The next lemma states that all duplications raised on a path going through a vertex with non-positive flow on its children are conversions. Let ℛ be a ZF reconciliation such that if v',v are non-free and free lost subtrees that overlap, then v≤ v'. Then ℛ is a possible output of ExtendLossesIntoNonFreeTrees. From Lemma <ref> we have that ℛ' is a possible output of RaiseDuplication, where ℛ' is the minimization of ℛ. From Lemma <ref> and the condition of this lemma, ℛ is a possible output of the series of procedures OneCompletion(ℛ'), ExtendLossesIntoNonFreeTrees(ℛ'). Hence ℛ is a possible output of ExtendLossesIntoNonFreeTrees. Let ℛ be a ZF reconciliation. Then ℛ is a possible output of Switch. Let v' and v be non-free and free lost subtrees in ℛ. If they overlap and v'<v, apply the switch operation. Repeat the previous procedure as long as there are such trees. Let us prove that the procedure stops. Let d be the sum of the distances of the roots of the non-free subtrees to root(S). With every switch operation d decreases. Since d≥ 0, it cannot decrease indefinitely. Hence the procedure stops. Denote the reconciliation obtained in this way by ℛ_1. Now ℛ_1 satisfies the conditions of Lemma <ref>, hence it is a possible output of ExtendLossesIntoNonFreeTrees. So, by ExtendLossesIntoNonFreeTrees we obtain ℛ_1, and by Switch(ℛ_1), where the switch operations are applied in the reversed order, we obtain ℛ. Any optimal solution can be generated by Algorithm <ref>. Let ℛ be an arbitrary optimal reconciliation. By lowering some conversions, we can obtain a ZF reconciliation ℛ_1 such that ω(ℛ_1)=ω(ℛ) (see Lemma <ref>). By Lemma <ref>, ℛ_1 is obtainable by Switch. So ℛ_1 is a possible output of Switch, and ℛ is a possible output of RaiseConversions(ℛ_1), if conversion raising is applied in the reversed order. Algorithm <ref> has time complexity O(m^2+m· n). Let n=|V(G)|, m=|V(S)|; then E(G)∈ O(n), E(S)∈ O(m). The LCA reconciliation can be determined in linear time (see <cit.>), say O(m+n). Algorithm <ref> forms a set Δ''(e), which takes O(m) time. It extends a loss into a free tree. The maximum size of a (non-)free tree is O(m). Algorithm <ref> applies Algorithm <ref> |Σ\Σ'|≤ |Σ| times, hence it has time complexity O(|Σ|· m). Algorithm <ref> determines the possible new positions for a duplication d. Since the height of the tree S is O(m), the number of possible positions is also O(m), and this is the complexity of Algorithm <ref>. Algorithm <ref> calls Algorithm <ref> and generates k∈ O(m) new losses. Hence the complexity of Algorithm <ref> is O(m). Algorithm <ref> calls Algorithm <ref> |Δ| times and its complexity is O(|Δ|· m). Algorithm <ref> raises one conversion. The maximal raise height is O(m) and this is the complexity of the algorithm. Algorithm <ref> calls Algorithm <ref> |C| times (C is the set of all conversions). Therefore the complexity of Algorithm <ref> is O(|C|· m). Algorithm <ref> extends a loss into a non-free tree. The size of a non-free tree is O(m), and this is the complexity of the algorithm. Algorithm <ref> uses Algorithm <ref> |Σ_1| times, and its complexity is O(|Σ_1|· m). Algorithm <ref> applies a switch operation on lost subtrees. With every switch, the root of a subtree with a non-free loss moves further away from root(S). The longest distance from root(S) is O(m). A switch operation always involves one non-free loss. Therefore the complexity of this algorithm is O(|Σ\Σ'|· m). Adding the corresponding complexities we get O(m+n) + O(|Σ|· m) + O(|Δ|· m) + O(|C|· m) + O(|Σ_1|· m) + O(|Σ\Σ'|· m).
Since |Σ|, |Σ_1|, |Σ\Σ'| ∈ O(m+n) and |Δ|∈ O(n), the complexity of the main algorithm is O(m^2+m· n). § CONCLUSION In this paper we give a polynomial algorithm that returns an optimal reconciliation in the duplication, loss, conversion model. The algorithm can return any optimal reconciliation with non-zero probability, and can enumerate the whole space of solutions. A natural extension would be a uniform sampling of all solutions in order to statistically assess properties of the solution space. Because of the switch operation, this could be achieved by a Markov chain Monte Carlo method. Future work is to define adequate transition probabilities to ensure fast convergence. An interesting problem that we leave open for further research is the weighted case. Unfortunately, the approach used in this paper is not useful for that case. A completion of the LCA reconciliation does not have to be an optimal reconciliation (see Figure <ref>). It might be necessary to raise some speciations from V(G) in order to obtain an optimal solution. Adding transfers and recombinations significantly increases the complexity of the problem. | http://arxiv.org/abs/1703.08950v2 | {
"authors": [
"Damir Hasic",
"Eric Tannier"
],
"categories": [
"q-bio.QM",
"cs.DS"
],
"primary_category": "q-bio.QM",
"published": "20170327065030",
"title": "Gene tree species tree reconciliation with gene conversion"
} |
B. Ambrosio, M.A. Aziz-Alaoui, Normandie Univ, UNIHAVRE, LMAH, FR-CNRS-3335, ISCN, 76600 Le Havre, France, email: [email protected] R. Yafia, Ibn Zohr University, Agadir, Morocco Canard Phenomenon in a modified Slow-Fast Leslie-Gower and Holling type scheme model B. Ambrosio, M.A. Aziz-Alaoui, R. Yafia Received: date / Accepted: date ========================================================================================= Geometric Singular Perturbation Theory has been successful in investigating a broad range of biological problems with different time scales. The aim of this paper is to apply this theory to a predator-prey model of modified Leslie-Gower type for which we consider that the prey reproduces much faster than the predator. This naturally leads to the introduction of a small parameter ϵ, which gives rise to a slow-fast system. This system has a special folded singularity which has not been analyzed in the classical work <cit.>. We use the blow-up technique to visualize the behavior near this fold point P. Outside of this region the dynamics are given by classical singular perturbation theory. This allows us to quantify geometrically the attractive limit-cycle with an error of O(ϵ) and to show that it exhibits the canard phenomenon while crossing P. § INTRODUCTION In <cit.>, the authors introduced the following model: { ẋ=(r_1-b_1x-a_1y/(x+k_1))x, ẏ=(r_2-a_2y/(x+k_2))y, } where x represents the prey and y the predator. This two species food chain model describes a prey population x which serves as food for a predator y. The model parameters r_1, r_2, a_1, a_2, b_1, k_1 and k_2 are assumed to be positive. They are defined as follows: r_1 (resp. r_2) is the growth rate of the prey x (resp. of the predator y), b_1 measures the strength of competition among individuals of species x, a_1 (resp. a_2) is the maximum value of the per capita reduction rate of x (resp. y) due to y, and k_1 (respectively, k_2) measures the extent to which the environment provides protection to the prey x (respectively, to the predator y). There is a wide variety of natural systems which may be modelled by system (<ref>), see <cit.>. It may, for example, be considered as a representation of an insect pest–spider food chain. Let us mention that the first equation of system (<ref>) is standard. The second equation, by contrast, is absolutely not standard. Recall that the Leslie-Gower formulation is based on the assumption that reduction in a predator population has a reciprocal relationship with per capita availability of its preferred food. This leads to replacing the classical growth term (+xy) in the Lotka-Volterra predator equation by a decreasing term (-y^2). Indeed, Leslie introduced a predator-prey model where the carrying capacity of the predator environment is proportional to the number of prey. These considerations lead to the following equation for the predator: ẏ=r_2y(1-y/(α x)). The term y/(α x) of this equation is called the Leslie–Gower term. In case of severe scarcity, adding a positive constant to the denominator introduces a maximum decrease rate, which stands for environment protection. Classical references include <cit.>. In order to simplify (<ref>), we proceed to the following change of variables: u(r_1t)=(b_1/r_1)x(t), v(r_1t)=(a_2b_1/(r_1r_2))y(t), a=a_1r_2/(a_2r_1), ϵ=r_2/r_1, e_1=b_1k_1/r_1, e_2=b_1k_2/r_1, t'=r_1t. For convenience, we drop the primes on t. We obtain the following system: { u_t = u(1-u)-auv/(u+e_1), v_t = ϵ v(1-v/(u+e_2)). } We assume here that the prey reproduces much faster than the predator, i.e. r_1≫ r_2, which implies that ϵ is small.
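As an illustration, system (<ref>) can be integrated numerically; the following minimal Python sketch is ours, and the parameter values are illustrative assumptions chosen to satisfy the conditions introduced below (ae_2<e_1 and u^* below the fold), not values taken from the paper.

import numpy as np
from scipy.integrate import solve_ivp

# assumed illustrative parameters: a*e2 = 0.4 < e1 = 0.5, and the interior
# equilibrium u* ~ 0.153 lies below (1-e1)/2 = 0.25, so a relaxation
# oscillation is expected
a, e1, e2, eps = 1.0, 0.5, 0.4, 0.01

def rhs(t, w):
    u, v = w
    du = u * (1 - u) - a * u * v / (u + e1)        # fast prey equation
    dv = eps * v * (1 - v / (u + e2))              # slow predator equation
    return [du, dv]

sol = solve_ivp(rhs, (0.0, 2000.0), [0.5, 0.2],
                rtol=1e-9, atol=1e-12, dense_output=True)
u, v = sol.y   # the orbit settles onto the attractive limit cycle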
Note that there are special solutions: u=0, v_t=ϵ v(1-v/e_2) and v=0, u_t=u(1-u). Hence, the quadrant (0≤ u ≤ 1, v≥ 0) is positively invariant for (<ref>). We restrict our analysis to this quadrant. We also assume the following conditions, which ensure the existence of a unique attractive limit-cycle for (<ref>): ae_2<e_1, ae_2≄ e_1, and u^*<(1-e_1)/2, u^*≄ (1-e_1)/2, where u^* is the solution of u+e_2=(1/a)(1-u)(u+e_1). Under these assumptions there are 4 fixed points in the positive quadrant: P_1=(0,0), P_2=(0,e_2), P_3=(1,0), P_4=(u^*,g(u^*)), where g(u)=(1/a)(1-u)(u+e_1). They also prevent additional singularities for the folded points. Figure <ref> illustrates the nullclines and the attractive limit-cycle for (<ref>). Our aim is now to characterize the limit-cycle. In the following section we proceed to the classical slow-fast analysis which allows us to describe the trajectories outside of a neighborhood of a special fold-point, induced by the nullcline u=0, which we will call P. In the third section, we use the blow-up technique to analyze the trajectories near this special fold point P. Now, let us fix a small value α>0 and define a cross section V={(u,v)∈ℝ^2; u>0, v=e_1/a+α}. Then, by the regularity of the flow with regard to ϵ, the limit cycle crosses V at a point (k(α)ϵ+o(ϵ), e_1/a+α) (below, for convenience, we do not write the dependence on α). We have the following theorem. Let u̅=(1-e_1)/2, and A=(0,g(u̅)), B=(0, e_1/a+α+c_2/(c_1k)), C=(u_*, e_1/a+α+c_2/(c_1k)), D=(u̅,g(u̅)), where u_* is such that g(u_*)=e_1/a+α+c_2/(c_1k), and c_1=(1-e_1)/e_1, c_2=(e_1/a)(1-e_1/(ae_2)). Let γ' be the closed curve defined by: γ'=[A,B]∪ [B,C] ∪ζ∪ [D,A], where ζ= {(u,g(u)); u̅≤ u ≤ u_*}. All the trajectories not in u=0 and v=0, and different from the fixed point P_4, evolve asymptotically towards a unique limit-cycle γ which is O(ϵ)-close to γ'. The existence of the cycle results from the Poincaré–Bendixson theorem. For uniqueness, we refer to <cit.>. The approximation by γ' results from the slow-fast analysis and the blow-up technique which will be carried out in Sections 2 and 3. According to <cit.>, the canard phenomenon occurs when a trajectory crosses a folded point from the attractive manifold and follows the repulsive manifold during a certain amount of time before going away. We will see that, according to this definition, the canard phenomenon occurs here. This explains why we have introduced α and k. § SLOW-FAST ANALYSIS In this section, we proceed to a classical slow-fast analysis, see for example <cit.>. We study the layer system and the reduced system. The layer system is obtained by setting ϵ=0 in system (<ref>). It reads as { u_t= u(1-u)-auv/(u+e_1)=F(u,v), v_t=0. } The stationary points of this system are given by: M_0={u=0} ∪ {v=(1/a)(1-u)(u+e_1)=g(u)}. The set M_0 is called the critical manifold. Outside of a neighborhood of this manifold, for ϵ small, regular perturbation theory ensures that trajectories of system (<ref>) are O(ϵ)-close to those of system (<ref>). The trajectories of system (<ref>) are tangent to the u-axis, which justifies the name of “layer system”. These trajectories are the fast trajectories. Furthermore, the Fenichel theory, see <cit.> or references cited above, provides the existence of a locally invariant manifold O(ϵ)-close to the critical manifold M_0 for compact subsets of M_0 where F'_u(u,v)≠ 0. Thus, we have to evaluate F'_u(u,v) on the critical manifold. The part of M_0 where F'_u(u,v)<0 is called the attractive part of the critical manifold. Analogously, the part of M_0 where F'_u(u,v)>0 is called the repulsive part of the critical manifold.
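The sign computation carried out next can be checked numerically. The following minimal sketch, with the same illustrative parameters assumed above, samples the two branches of M_0 and classifies them through the sign of F'_u.

import numpy as np

a, e1 = 1.0, 0.5   # assumed illustrative values, as above

def F_u(u, v):
    # partial derivative of F(u,v) = u(1-u) - a u v/(u+e1) with respect to u
    return 1 - 2 * u - a * v * e1 / (u + e1) ** 2

g = lambda u: (1 - u) * (u + e1) / a           # parabola branch v = g(u)

u = np.linspace(0.0, 1.0, 11)
attracting_parabola = F_u(u, g(u)) < 0         # True iff u > (1-e1)/2
v = np.linspace(0.0, 1.0, 11)
attracting_axis = F_u(0.0, v) < 0              # True iff v > e1/a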
Now, we compute these subsets of M_0. We start our computations with the case u=0. We have F'_u(0,v)=1-av/e_1. Therefore, F'_u(0,v)>0 ⇔ v<e_1/a. Now, we deal with the case v=(1/a)(1-u)(u+e_1). We have F'_u(u,v)=1-2u-ave_1/(u+e_1)^2. For v=(1/a)(1-u)(u+e_1), we obtain F'_u(u,v)=(u/(u+e_1))(-2u+(1-e_1)). Therefore, F'_u(u,g(u))>0 ⇔ u<(1-e_1)/2=u̅. Finally, the attractive critical manifold M_0,a is given by u=0 and v>e_1/a, or v=g(u) and (1-e_1)/2<u≤ 1: M_0,a={(0,v); v>e_1/a}∪{(u,g(u)); u̅<u≤ 1}. Analogously, the repulsive critical manifold M_0,r is given by: M_0,r={(0,v); 0≤ v<e_1/a}∪{(u,g(u)); 0≤ u <u̅}. The non-hyperbolic points of the critical manifold, or fold points, where F'_u(u,v)=0, are P=(0,e_1/a) and D=(u̅,g(u̅)). Now, we look at the reduced system. The reduced system gives the slow trajectories, i.e., the trajectories within the critical manifold which persist, for ϵ small, within the locally invariant manifold. It is obtained by setting ϵ=0 after the change of time τ=ϵ t in (<ref>). It reads as (to avoid complications, we keep the notation with t, but it should be with τ) { 0 = u(1-u)-auv/(u+e_1), v_t = v(1-v/(u+e_2)). } For u=0, we obtain v_t=v(1-v/e_2). This implies that v_t>0 ⇔ v<e_2. Note that (0,e_2) is the fixed point P_2 of the original system. For v=g(u), we have v_t > 0 ⇔ v(1-v/(u+e_2)) > 0 ⇔ v<u+e_2, which also reads v_t=g'(u)u_t=g(u)(1-g(u)/(u+e_2)). Therefore, u_t=(g(u)/g'(u))(1-g(u)/(u+e_2)). The points where g'=0 correspond to a jump-point if g(u) ≠ u+e_2, since in this case we have, at this point, u_t=-∞. The analysis of the layer and reduced systems gives the qualitative behavior of the system outside the neighborhood of the fold-points. Trajectories reach the attractive slow manifold and follow it according to the dynamics, or are repelled by the repulsive slow manifold. Furthermore, the behavior near the jump-point (u̅,g(u̅)) has been rigorously described in <cit.>. Trajectories reaching a neighborhood of the fold point from the right exit the neighborhood at the left along fast fibers, and there is a contraction of rate e^{-c/ϵ}, for some constant c, between arriving and exiting trajectories. Figure <ref> illustrates this behavior. Therefore, it remains only to analyze the behavior of trajectories near the fold point P=(0,e_1/a). This is what we wish to do in the following section by using the blow-up technique. Note that this has not been done in <cit.>, since it is assumed there that the critical manifold can be written v=φ(u) with φ'(0)=0 and φ''(0)≠0, which is not the case here since M_0 writes u=0 in a neighborhood of the fold-point P=(0,e_1/a). Canards may appear near the fold point D=(u̅,g(u̅)) when g(u)≃ u+e_2. As we have already mentioned, canards are solutions that follow the repulsive manifold during a certain amount of time after crossing the fold before being repelled. They were discovered by French mathematicians using non-standard analysis and later studied with geometrical singular perturbation theory, see <cit.>. Our assumptions prevent the appearance of canards near D. Near P=(0,e_1/a), we have canards, as stated in Theorem <ref>. The condition e_2≃e_1/a, which is the analog of (<ref>) for P, would lead to a higher singularity. We do not consider this case here and leave it for a forthcoming work. § BLOW-UP TECHNIQUE NEAR THE FOLD-POINT P=(0,E_1/A).
The following proposition gives the formulation of (<ref>) when written around (0,e_1/a): Near the fold point (0,e_1/a), system (<ref>) rewrites: [ ẋ= c_1x^2-(a/e_1)xy+O(||(x,y)||^3); ẏ= ϵ(c_2+ (e_1^2/(a^2e_2^2))x+(1-2e_1/(ae_2))y+O(||(x,y)||^2)); ϵ̇=0 ] where c_1=(1-e_1)/e_1, c_2=(e_1/a)(1-e_1/(ae_2)). We start with the change of variables u=x, v=e_1/a+y. Plugging into (<ref>) gives: [ ẋ =x(1-x)-(ax/(e_1+x))(e_1/a+y); ẏ = ϵ(e_1/a+y)(1-e_1/(a(x+e_2))-y/(x+e_2)); ϵ̇ =0. ] Then, we use the following Taylor development: 1/(e_1+x)=1/e_1-x/e_1^2+x^2/e_1^3+o(x^2). We find [ ẋ= (1/e_1-1)x^2-(a/e_1)xy+O(x^3)+O(x^2y); ẏ= ϵ((e_1/a)(1-e_1/(ae_2))+ (e_1^2/(a^2e_2^2))x+(1-2e_1/(ae_2))y+O(||(x,y)||^2)); ϵ̇= 0, ] which gives the result. Note that c_1>0 whereas c_2<0. We will now apply the blow-up technique. The blow-up technique is a change of variables which allows to desingularize the fold-point and visualize the trajectories in different charts. We use the following change of variables: x=r̅x̅, y=r̅^2y̅, ϵ=r̅^3ϵ̅. We obtain (we drop the bars): [ ṙx+rẋ=c_1r^2x^2-(a/e_1)r^3xy+O(r^4x^2y)+O(r^3x^3); 2ryṙ+r^2ẏ= r^3ϵ(c_2+ (e_1^2/(a^2e_2^2))rx+(1-2e_1/(ae_2))r^2y+O(||(rx,r^2y)||^2)); 3r^2ϵṙ+r^3ϵ̇=0 ] The chart K_1 is obtained by setting y̅=1. The chart K_2 is obtained by setting ϵ̅=1. The chart K_3 is obtained by setting x̅=1. In order to prove the theorem we only need to consider the chart K_2, which will be fundamental in our analysis. When working in chart K_2, we use the subscript 2. Dynamics in chart K_2. The dynamics in chart K_2 are given by the system: [ ẋ_2= c_1x_2^2+O(r_2); ẏ_2= c_2+O(r_2); ṙ_2=0 ] Setting ϵ̅=1 in (<ref>) gives: [ ẋ_2= r_2(c_1x_2^2+O(r_2)); ẏ_2=r_2(c_2+O(r_2)); ṙ_2= 0. ] Then, we desingularize the system by a change of time τ=r_2 t, which gives the result. For r_2=0, we obtain: [ ẋ_2 =c_1x_2^2; ẏ_2 = (e_1/a)(1-e_1/(ae_2)); ṙ_2 =0. ] Equation (<ref>) is very important in our analysis since it shows how the trajectories cross the fold point. The solution of system (<ref>) is: [ x_2(t)= 1/(x_2(0)^-1-c_1t); y_2(t)=y_2(0)+c_2t; ] i.e. [ x_2(t)= 1/(x_2(0)^-1-c_1(y_2(t)-y_2(0))/c_2) ] or [ y_2(t) = y_2(0)+(c_2/c_1)(1/x_2(0)-1/x_2(t)) ]. It follows that orbits have the following properties: * Every orbit has a horizontal asymptote y = y_r, where y_r depends on the orbit, such that x→ +∞ as y approaches y_r from above. * Every orbit has a vertical asymptote x= 0^+. * The point (x_2(0),α,0) is mapped to the point (δ, α+(c_2/c_1)(1/x_2(0)-1/δ)). This follows easily from the explicit solution. Solutions of (<ref>) are O(r_2)-close to those of (<ref>). This follows from regular perturbation theory. Let us make a remark on the first statement of Proposition 3. For t^*=1/(c_1x_2(0)), x_2 blows up. Since x_2=x/r_2 and r_2=ϵ^1/3, x_2=+∞ corresponds, as ϵ→0, to a point x>0 where we can consider that the trajectory has left the neighborhood of the fold and where the previous slow-fast analysis applies. This gives for y_2: y_2(t^*)=y_2(0)+c_2/(c_1x_2(0)). This means that, fixing x_2(0) and y_2(0), the value at which the trajectory leaves the slow manifold and connects to the fast fiber is determined by (<ref>). Therefore, if we choose (x_2(0),y_2(0)) on the limit-cycle, this determines the fast fiber followed by the limit-cycle. We will now detail this argument, which gives the proof of Theorem <ref>. Fix a value x far from 0, let's say x=1/2. We want to determine t^* such that x(t^*)=1/2, which corresponds to x_2(t^*)=1/(2ϵ^1/3).
Taking x(0) = kϵ + o(ϵ), and according to equation (<ref>), this gives:

t^* = (ϵ^{1/3}/c_1)(1/(kϵ + o(ϵ)) - 2),

and for equation (<ref>),

y_2(t^*) = y_2(0) + (c_2 ϵ^{1/3}/c_1)(1/(kϵ + o(ϵ)) - 2) + O(ϵ),

which in the original coordinates gives:

y(t^*) = y(0) + c_2/(k c_1) + O(ϵ).

This proves the theorem. Note that the folded node P is at the intersection of the two branches of the manifold M_0, v = g(u) and u = 0. Note also that these two branches actually exchange their stability at P. This case has been treated in a general form in <cit.> under the appropriate name of transcritical bifurcation. However, here we are precisely in the special case λ = 1 excluded from Theorem 2.1 of <cit.>. The authors have announced the existence of the canard in this case without giving a detailed proof of it. Here, we have proved the canard phenomenon using the blow-up technique in the case of the limit cycle of this classical predator-prey model.

§ CONCLUSION

In this article, we have characterized the limit cycle of the system (<ref>). The system was originally introduced in <cit.> as a modification of the Leslie-Gower model. We have proved that the limit cycle of the model exhibits the canard phenomenon when crossing a special folded node, as well as computed the value at which it reaches the fast fiber. In a forthcoming work, we hope to investigate the diffusive model obtained by adding a Laplacian term in the first equation. We would like to thank Region Haute-Normandie France and the ERDF (European Regional Development Fund) project XTERM (previously RISK). We would also like to thank N. Popovic for discussions on the transcritical bifurcation phenomenon.

az03 M.A. Aziz-Alaoui and M. Daher Okiye, Boundedness and global stability for a predator-prey model with modified Leslie-Gower and Holling-type II schemes, Appl. Math. Lett. 16 (2003) 1069-1075.
Da04 M. Daher Okiye, Étude et analyse asymptotique de certains systèmes dynamiques non-linéaires : application à des problèmes proie-prédateurs. PhD thesis, Le Havre, 2004.
BC81 E. Benoit, J.-L. Callot, F. Diener and M. Diener, Chasse au canard, Collect. Math. 31-32 (1981) 37-119.
Fe79 N. Fenichel, Geometric singular perturbation theory for ordinary differential equations, J. Differ. Equ. 31 (1979) 53-98.
Ha91 I. Hanski, L. Hansson, and H. Henttonen, Specialist predation, generalist predation and the rodent microtine cycle, J. Animal Ecology 60 (1991) 353-367.
He10 G. Hek, Geometric singular perturbation theory in biological practice, J. Math. Biol. 60 (2010) 347-386.
Jo95 C.K.R.T. Jones, Geometric singular perturbation theory. In: Johnson R (ed) Dynamical systems, Montecatini Terme, Lecture Notes in Mathematics, Springer, Berlin. 1609 (1995) 44-118.
Ka99 T.J. Kaper, An introduction to geometric methods and dynamical systems theory for singular perturbation problems. In: Cronin J, O'Malley RE Jr (eds) Analyzing multiscale phenomena using singular perturbation methods. Proc. Symposia Appl. Math., AMS, Providence, 56 (1999) 85-132.
KS01 M. Krupa and P. Szmolyan, Extending geometric singular perturbation theory to non-hyperbolic points - fold and canard points in two dimensions, SIAM J. Math. Anal. 33 (2001) 286-314.
KS01-2 M. Krupa and P. Szmolyan, Extending slow manifolds near transcritical and pitchfork singularities, Nonlinearity 14 (2001) 1473-1491.
Le48 P.H. Leslie, Some further notes on the use of matrices in population mathematics, Biometrika 35 (1948) 213-245.
Le60 P.H. Leslie and J.C. Gower, The properties of a stochastic model for the predator-prey type of interaction between two species, Biometrika 47 (1960) 219-234.
Ma73 R.M. May, Stability and complexity in model ecosystems, Princeton University Press, Princeton, NJ (1973).
RM63 M.L. Rosenzweig and R.H. MacArthur, Graphical representation and stability conditions of predator-prey interactions, Amer. Naturalist 97 (1963) 209-223.
RM92 S. Rinaldi and S. Muratori, Slow-fast limit-cycles in predator-prey models, Ecological Modelling 61 (1992) 237-388.
SW01 P. Szmolyan and M. Wechselberger, Canards in R^3, J. Differential Equations 177 (2001) 419-453.
Up97 R.K. Upadhyay and V. Rai, Why chaos is rarely observed in natural populations, Chaos Solitons and Fractals 8(12) (1997) 1933-1939.
Wi94 S. Wiggins, Normally hyperbolic invariant manifolds in dynamical systems, Springer, New York (1994). | http://arxiv.org/abs/1703.09266v1 | {
"authors": [
"B. Ambrosio",
"M. A. Aziz-Alaoui",
"R. Yafia"
],
"categories": [
"math.DS"
],
"primary_category": "math.DS",
"published": "20170327185614",
"title": "Canard Phenomenon in a modified Slow-Fast Leslie-Gower and Holling type scheme model"
} |
^1Astronomy Program, Department of Physics and Astronomy, Seoul National University, Seoul 151-742, Republic of Korea; ^2Department of Earth Science Education, Seoul National University, Seoul 151-742, Republic of Korea; ^3Department of Astronomy and Center for Galaxy Evolution Research, Yonsei University, Seoul 120-749, Republic of Korea †Author to whom any correspondence should be addressed: [email protected] We search for type 1 AGNs among emission-line galaxies that are typically classified as type 2 AGNs based on emission line flux ratios if a broad component in the Hα line profile is not properly investigated. Using ∼24,000 type 2 AGNs at z < 0.1 initially selected from Sloan Digital Sky Survey Data Release 7 by <cit.>, we identify a sample of 611 type 1 AGNs based on the spectral fitting results and visual inspection. These hidden type 1 AGNs have relatively low luminosity, with a mean broad Hα luminosity log L_Hα = 40.73±0.32, and low Eddington ratio, with a mean log L_bol/L_Edd = -2.04±0.34, while they do follow the black hole mass - stellar velocity dispersion relation defined by inactive galaxies and reverberation-mapped type 1 AGNs. We investigate ionized gas outflows based on the [O III] λ5007 kinematics, which show relatively high velocity dispersion and velocity shift, indicating that the line-of-sight velocity and velocity dispersion of the ionized gas in type 1 AGNs are on average larger than those of type 2 AGNs. § INTRODUCTION The spectral features of AGNs (active galactic nuclei) are observed in different ways since the nucleus can be obscured by an optically thick dust torus <cit.>. According to the simplest AGN unification model, the observed physical properties show differences due to the orientation effect, which depends on the angle between the line of sight and the axis of the dust torus <cit.>. If we directly observe the inner part of an AGN, a blue AGN continuum from an accretion disk and broad emission lines originating from the broad-line region (BLR) are present in the optical spectral range. In contrast, if the dust torus obscures the inner part of an AGN, we observe narrow emission lines originating from the photoionized narrow-line region (NLR), without the direct detection of a blue AGN continuum and broad emission lines. If the FWHM of the broad emission lines, e.g., Hα and Hβ, is larger than 1000 km s^-1, AGNs are typically classified as type 1 AGNs <cit.>, for which the kinematics of the gas in the BLR can be used to trace the gravitational potential of the central supermassive black hole for determining the black hole mass <cit.>. In addition, the relativistic effect can also produce a gravitational redshift causing a velocity shift of the broad emission lines, as the BLR gas has an orbital velocity corresponding to a few percent of the speed of light <cit.>. When the AGN luminosity is relatively low, however, the emission from the host galaxy dilutes the AGN emission, consequently dominating the observed spectra. Thus, either a blue AGN continuum or broad emission lines can be missed from detection if no careful analysis is performed. In particular, a weak broad Hα emission line from the BLR can easily be undetected if the narrow Hα emission originating from the NLR and/or star-forming region is very strong. Consequently, these low luminosity AGNs may be misclassified as type 2 AGNs or star-forming galaxies instead of type 1 AGNs.
Previous studies by <cit.> and <cit.>, for example, showed that by decomposing the broad and narrow components in the Hα line profile, a hidden population of type 1 AGNs can be identified among emission line galaxies <cit.>. The hidden population of type 1 AGNs is important to constrain the AGN unification model and to characterize the dust torus, since the empirical number ratio between type 1 and type 2 AGNs is a key to understanding the orientation effect <cit.>, although it is difficult to overcome the different biases and systematic differences in selecting type 1 and type 2 AGNs. Searching for intermediate mass black holes, other studies examined the Hα line profile to investigate the presence of a broad Hα component and identified type 1 AGNs <cit.>. Hidden type 1 AGNs are characterized by low luminosity and low Eddington ratio <cit.>. Thus, the optical continuum mainly represents the stellar population, while narrow emission lines, e.g., [O III] λ5007, are similar to those of type 2 AGNs. Compared to typical type 1 AGNs, these hidden type 1 AGNs have an advantage for studying their host galaxies and investigating the coevolution of black holes and galaxies by constraining the black hole mass scaling relations <cit.>. For example, while it is very difficult to measure the stellar velocity dispersion or bulge mass of type 1 AGN host galaxies due to the high flux ratios between AGNs and stars <cit.>, it is easy to measure the properties of host galaxies if the AGN continuum is weak <cit.>. Thus these AGNs can be utilized to investigate the black hole-galaxy scaling relations and their evolution <cit.>. In our pilot study <cit.>, we searched for hidden type 1 AGNs among emission line galaxies at z < 0.05 by carefully investigating the presence of a broad component in the Hα emission line profile, using a large sample of ∼24,000 type 2 AGNs based on Sloan Digital Sky Survey (SDSS) Data Release (DR) 7 <cit.>. In this paper we extend our previous work by enlarging the survey volume out to z = 0.1 and improving the detection scheme for the broad component in Hα. Based on the newly identified sample of type 1 AGNs with a broad Hα component, we investigate the M_BH-σ_* relation, the kinematics of ionized gas outflows, and the effect of gravitational redshift. We present the sample selection and analysis methods in Sections 2 and 3, respectively. In Section 4, we present the results on the M_BH-σ_* relation. In Section 5, we investigate the kinematics of [O III] and the gravitational redshift of the broad Hα. Discussion is given in Section 6, and summary and conclusions follow in Section 7. Throughout the paper, we use the cosmological parameters H_0 = 70 km s^-1 Mpc^-1, Ω_m = 0.30, and Ω_Λ = 0.70. § SAMPLE SELECTION We utilized the type 2 AGN catalog by <cit.> <cit.>, which identified type 2 AGNs at 0.02 < z < 0.1, in order to search for hidden type 1 AGNs. The sample is selected based on a couple of criteria: a signal-to-noise ratio (S/N) > 3 for the four emission lines, namely Hβ, [O III] λ5007, Hα, and [N II] λ6583, from the MPA-JHU catalog for SDSS DR7 galaxies <cit.>. Using the flux ratios of these four emission lines <cit.> and the demarcation line for separating AGNs from star-forming galaxies <cit.>, 23,517 type 2 AGNs (pure AGNs and composite objects) were identified with an amplitude-to-noise ratio of at least 5 for [O III] and Hα. More details of the selection procedure can be found in <cit.> and <cit.>.
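For illustration, this kind of emission-line classification can be sketched with the standard BPT demarcation curves of Kauffmann et al. (2003) and Kewley et al. (2001); the input flux ratios below are placeholder values, not actual measurements.

def bpt_class(log_nii_ha, log_oiii_hb):
    """Classify a source on the [N II]/Halpha vs [O III]/Hbeta BPT diagram."""
    x, y = log_nii_ha, log_oiii_hb
    # Kauffmann et al. (2003) empirical star-forming boundary (valid for x < 0.05).
    kauffmann = 0.61 / (x - 0.05) + 1.30 if x < 0.05 else float("-inf")
    # Kewley et al. (2001) theoretical maximum-starburst line (valid for x < 0.47).
    kewley = 0.61 / (x - 0.47) + 1.19 if x < 0.47 else float("-inf")
    if y < kauffmann:
        return "star-forming"
    if y < kewley:
        return "composite"
    return "AGN"

# Placeholder log flux ratios for three hypothetical sources.
for ratios in [(-0.5, -0.3), (-0.2, 0.1), (0.1, 0.8)]:
    print(ratios, "->", bpt_class(*ratios))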
To identify hidden type 1 AGNs among type 2 AGNs, we first subtract the stellar continuum by fitting the SDSS spectra using the penalized pixel-fitting code (pPXF) <cit.>, which finds the best-fit stellar population model of a given galaxy spectrum. We use MILES simple stellar population models with solar metallicity <cit.>. In addition to the optical emission lines, we mask the continuum around the Hα region (6300-6900 Å) to prevent the stellar population model from fitting a potential broad component as stellar continuum. After subtracting the best-fit stellar population model, we investigate whether a broad Hα component is present around the Hα region by visually inspecting the residual spectra (i.e., emission-line spectra). In this process we initially found that ∼1000 objects show a broad feature around the narrow Hα and [N II] lines. To confirm the presence of a broad Hα component, we need a reliable emission line decomposition process, since the Hα region is complex due to the blending of the narrow Hα and the [N II] doublet, as well as the broad Hα component if present. Therefore, we use three different fitting approaches for the Hα region, as follows: (a) If the narrow emission lines do not show particular wing components and if there is no broad Hα component, we use a single Gaussian model for each of the Hα and the two [N II] lines. In the case of the [N II] doublet, we fix their centers and flux ratio at the theoretical values, while we use the same line width; (b) If wing components are present in the narrow emission lines (Hα and the [N II] doublet), we utilize a double Gaussian model (one for a core component and the other for a wing component) to account for outflow kinematics in the NLR, as we used for typical type 2 AGNs <cit.>; (c) If a very broad Hα component with FWHM > 1000 km s^-1 is present, we add a single Gaussian component or a Gauss-Hermite component, depending on whether this component is symmetric or asymmetric. For the broad and narrow Hα components, we use free parameters for the line centers, widths, and amplitudes. For the total sample, we first apply model (a) and identify type 1 AGN candidates by visually inspecting whether the model-subtracted spectra show a significantly high residual around the Hα region. Note that the residual can be present due to a very broad Hα component or wing components of the narrow Hα and [N II] lines. For these type 1 AGN candidates, we then apply either model (b) or (c), depending on the line profile of the [O III] doublet. If the [O III] doublet shows wing components based on a single Gaussian fit and visual inspection, then we apply model (b) by including a wing component for each of Hα and [N II], while we use model (c) only if both models (a) and (b) do not provide an acceptable fit (i.e., a significant residual is present based on visual inspection). In this process, we conservatively identify type 1 AGNs only for the cases requiring model (c), in order to avoid false detections of the broad Hα component. In total, we obtained 611 type 1 AGNs with a broad Hα line with FWHM > 1000 km s^-1. Note that we use FWHM = 1000 km s^-1 as a lower limit of the broad Hα line width, following the conventional definition of type 1 AGNs <cit.>, since an Hα line narrower than 1000 km s^-1 may not originate from the BLR. In Figure 1, we present examples with the best-fit models in the Hα region.

§ ANALYSIS

§.§ Measurement of gas kinematics

To investigate gas kinematics, we measure the velocity shift with respect to the systemic velocity based on stellar absorption lines, as well as the velocity dispersion and luminosity of [O III] λ5007. We fit [O III] using single or double Gaussian models to investigate the ionized gas outflows.
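Before describing the [O III] fits in detail, the following minimal sketch illustrates the kind of multi-Gaussian decomposition used for the Hα region in model (c). It is a simplified, hypothetical implementation: the [N II] doublet centers are fixed at their rest wavelengths, the doublet flux ratio at its theoretical value of ∼3, the [N II] width is tied to the narrow Hα width, and the spectrum and initial guesses are placeholder values.

import numpy as np
from scipy.optimize import curve_fit

def gauss(x, amp, cen, sig):
    return amp * np.exp(-0.5 * ((x - cen) / sig) ** 2)

def halpha_model(x, a_n, cen_n, sig_n, a_nii, a_b, cen_b, sig_b):
    """Model (c): narrow Halpha + tied [N II] doublet + one broad Halpha Gaussian."""
    narrow = gauss(x, a_n, cen_n, sig_n)
    # [N II] 6548/6583: fixed centers, common width, ~1:3 theoretical flux ratio.
    nii = gauss(x, a_nii / 3.0, 6548.05, sig_n) + gauss(x, a_nii, 6583.45, sig_n)
    broad = gauss(x, a_b, cen_b, sig_b)
    return narrow + nii + broad

# Placeholder continuum-subtracted spectrum (would come from the pPXF residuals).
rng = np.random.default_rng(1)
wave = np.linspace(6400.0, 6750.0, 600)
flux = halpha_model(wave, 5.0, 6562.8, 2.0, 3.0, 1.0, 6562.8, 30.0)
flux += rng.normal(0.0, 0.05, wave.size)

p0 = [4.0, 6563.0, 2.5, 2.5, 0.5, 6563.0, 25.0]
popt, _ = curve_fit(halpha_model, wave, flux, p0=p0)
fwhm_broad = 2.3548 * popt[6] / popt[5] * 2.998e5   # sigma -> FWHM in km/s
print("broad Halpha FWHM ~ %.0f km/s" % fwhm_broad)

We now describe the [O III] fits.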
First, we fit the total sample with a single Gaussian model. If the single Gaussian fit is not acceptable and a wing component is present based on visual inspection, we use a double Gaussian model. Since in some cases noise is fit by the second Gaussian component, we accept the double Gaussian fit only if the peak of the wing component is a factor of three larger than the noise level, as we adopted in <cit.> (see Figure 1). Based on the best-fit model, we calculate the first and second moments of the total line profile as:

λ_0 = ∫λ f_λ dλ / ∫ f_λ dλ,

σ^2 = ∫λ^2 f_λ dλ / ∫ f_λ dλ - λ_0^2,

where f_λ is the flux density and σ refers to the second-moment line dispersion. For the given λ_0, the velocity shift is calculated with respect to the systemic velocity. Note that we follow the same procedure to analyze the [O III] kinematics as presented in our previous study of type 2 AGNs <cit.>.

§.§ Morphology classification

We use the Galaxy Zoo human classification data to obtain the host galaxy morphology <cit.>. The host galaxies of the hidden type 1 AGNs are classified as 221 spirals (∼36%) and 381 ellipticals (∼62%). Nine (∼1.5%) of them are not classified due to the low image quality. Spiral host galaxies are further divided into face-on and edge-on galaxies based on the minor-to-major axis ratio (i.e., b/a) obtained from the SDSS DR7. If the ratio is smaller than 0.5, we classify them as edge-on galaxies. Otherwise, we classify them as face-on galaxies. According to this criterion, the 221 spirals are divided into 44 edge-on spirals and 177 face-on spirals.

§.§ Black Hole Mass and Eddington ratio

To calculate the black hole mass, we use the single-epoch virial mass estimator calibrated by <cit.> as:

M_BH = f × 10^6.819 × (σ_Hβ / 10^3 km s^-1)^2 × (λL_5100 / 10^44 erg s^-1)^0.533 M_⊙,

where σ_Hβ is the line dispersion of the Hβ line and L_5100 represents the AGN continuum luminosity at 5100 Å. However, since we do not detect the AGN continuum and barely see the Hβ line due to their low fluxes in a large fraction of our sample, we use the width (i.e., FWHM or line dispersion) and the luminosity of the broad Hα line as proxies for the Hβ line width and L_5100. When we use the FWHM of the broad Hα component, we calculate the Hα-based M_BH following <cit.> as:

M_BH = f × 10^6.544 (L_Hα / 10^42 erg s^-1)^0.46 × (FWHM_Hα / 10^3 km s^-1)^2.06 M_⊙,

where L_Hα is the luminosity of the broad Hα and FWHM_Hα is the width of the broad Hα component. When we instead use the line dispersion of the broad Hα (σ_Hα), we calculate M_BH following <cit.> as:

M_BH = f × 10^6.561 (L_Hα / 10^42 erg s^-1)^0.46 × (σ_Hα / 10^3 km s^-1)^2.06 M_⊙.

We utilize log f = 0.05 for the FWHM-based M_BH estimator and log f = 0.65 for the σ-based M_BH, which were obtained from the recent calibration based on the M_BH-σ_* relation of quiescent galaxies and reverberation-mapped AGNs <cit.>. We use the luminosity of the broad Hα component (L_Hα) as a proxy for the bolometric luminosity (L_bol), by utilizing the relation between L_Hα and the continuum luminosity at 5100 Å <cit.>, and the bolometric correction 9.26 for L_5100 <cit.>, as:

L_bol = 2.21×10^44 (L_Hα / 10^42 erg s^-1)^0.86 erg s^-1.

§ MASS DISTRIBUTION AND THE SCALING RELATIONS

§.§ Comparison with typical type 1 AGNs

In this section, we investigate the properties of the newly identified type 1 AGNs. The most prominent feature of these type 1 AGNs is that they have a very weak AGN continuum; hence, the observed continuum represents the stellar continuum. To compare the hidden type 1 AGNs with typical type 1 AGNs, we choose 882 type 1 AGNs in the matched redshift range z < 0.1 from the catalogue by <cit.>.
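The Hα-based estimators above are straightforward to evaluate; a minimal sketch follows (constants taken from the equations above; the Eddington luminosity constant 1.26×10^38 erg s^-1 per M_⊙ is standard; the input values are illustrative only).

import math

def mbh_halpha(l_ha, fwhm_ha, log_f=0.05):
    """Black hole mass (Msun) from broad-Halpha luminosity (erg/s) and FWHM (km/s)."""
    return 10.0 ** (log_f + 6.544) * (l_ha / 1e42) ** 0.46 * (fwhm_ha / 1e3) ** 2.06

def lbol_halpha(l_ha):
    """Bolometric luminosity (erg/s) from the broad-Halpha luminosity."""
    return 2.21e44 * (l_ha / 1e42) ** 0.86

def eddington_ratio(l_ha, fwhm_ha):
    l_edd = 1.26e38 * mbh_halpha(l_ha, fwhm_ha)  # Eddington luminosity, erg/s per Msun
    return lbol_halpha(l_ha) / l_edd

# Illustrative inputs close to the sample means quoted below.
l_ha, fwhm = 10.0 ** 40.73, 3000.0
print("log M_BH        =", round(math.log10(mbh_halpha(l_ha, fwhm)), 2))
print("log L_bol/L_Edd =", round(math.log10(eddington_ratio(l_ha, fwhm)), 2))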
Using the luminosity and FWHM of the broad Hα emission lines of the typical type 1 AGNs, we obtain the bolometric luminosity, black hole mass, and Eddington ratio in the same way as described in Section 3.3. Note that we use the FWHM_Hα-based M_BH in this section. The hidden type 1 AGNs have relatively low broad Hα luminosity, while they have similar FWHM of the broad Hα compared with typical type 1 AGNs (Figure 2). The mean Hα luminosity of the hidden type 1 AGNs, L_Hα, is 10^40.73±0.32 erg s^-1, while it is 10^41.61±0.38 erg s^-1 for the typical type 1 AGNs. In contrast, there is no significant difference in the distribution of the FWHM of Hα (Figure 2, right panel). Figure 3 presents the distribution of the hidden type 1 AGNs (red dots) and the typical type 1 AGNs (blue dots) in the L_bol - M_BH plane. The mean bolometric luminosity of the hidden type 1 AGNs is 10^43.25±0.28 erg s^-1, while that of typical type 1 AGNs is 10^44.01±0.33 erg s^-1, indicating that the hidden type 1 AGNs have lower bolometric luminosity than the typical type 1 AGNs. In the case of black hole mass, the mean mass is 10^7.20±0.42 M_⊙ and 10^7.56±0.44 M_⊙, respectively, for the hidden type 1 AGNs and the typical type 1 AGNs. Combining mass and luminosity, the mean Eddington ratio of the hidden type 1 AGNs (i.e., 0.01±0.01) is slightly lower than that of typical type 1 AGNs (i.e., 0.03±0.02). Overall, the newly identified type 1 AGNs have relatively low luminosity, while the width of Hα is comparable to that of typical type 1 AGNs.

§.§ M_BH-σ_* relation of the hidden type 1 AGNs

We investigate whether the newly identified type 1 AGNs follow the black hole mass scaling relations defined by inactive galaxies and reverberation-mapped type 1 AGNs, using stellar velocity dispersion (σ_*) measurements from the SDSS DR7. We adopt the most recent calibration by <cit.> for the σ_Hβ-based M_BH estimator as log(M_BH/M_⊙) = 8.34 + 4.97 log(σ_*/200 km s^-1) and for the FWHM_Hβ-based M_BH estimator as log(M_BH/M_⊙) = 8.34 + 5.04 log(σ_*/200 km s^-1). We find that the hidden type 1 AGNs lie slightly below the M_BH-σ_* relation (Figure 4, left panels), with an average offset of 0.28±0.40 dex and 0.12±0.41 dex, respectively, for the σ_Hβ- and FWHM_Hβ-based M_BH estimators. The offset may be interpreted in different ways. First, these hidden type 1 AGNs may not follow the same M_BH-σ_* relation as typical type 1 AGNs. Second, since we use the Hα velocity and luminosity as proxies for the Hβ velocity and 5100 Å luminosity in estimating black hole masses, systematic effects in the calibration of the mass estimators may cause the offset, particularly at the low mass scales where our targets are mainly located in the M_BH-σ_* plane. Note that the empirical correlation between L_5100 and L_Hα has not been directly calibrated for the hidden type 1 AGNs. Third, there is a possibility that the virial factor may not be the same for the hidden type 1 AGNs. Fourth, the stellar velocity dispersion may be overestimated, particularly for the late-type galaxies, since significant rotational velocity may broaden the stellar lines observed through the 3" aperture of the SDSS spectroscopy <cit.>. Without more detailed data, e.g., dynamical black hole mass measurements and spatially resolved kinematics, it is difficult to evaluate these different scenarios. Instead, we further investigate whether the hidden type 1 AGNs have a different M_BH-σ_* relation compared to inactive and reverberation-mapped AGNs by following the joint-fit analysis presented by <cit.>.
By combining the compilation of black hole masses and stellar velocity dispersions of inactive galaxies and reverberation-mapped AGNs from <cit.> with those of our type 1 AGNs, we fit the M_BH-σ_* relation log(M_BH/M_⊙) = α + β log(σ_*/200 km s^-1), in order to determine the virial factor, intrinsic scatter, intercept (α), and slope (β) based on the χ^2 minimization:

χ^2 = ∑_{i=1}^{N} (μ_i - α - β s_i)^2 / (σ_{μ,i}^2 + β^2 σ_{s,i}^2 + ϵ_0^2) + ∑_{j=1}^{M} (μ_{VP,j} + log f - α - β s_j)^2 / (σ_{μ_VP,j}^2 + β^2 σ_{s,j}^2 + ϵ_0^2),

where μ = log(M_BH/M_⊙) for the inactive galaxies, μ_VP is the logarithm of the virial product of the AGNs, ϵ_0 is the intrinsic scatter, and s represents log(σ_*/200 km s^-1). For our type 1 AGNs, we calculate the virial product by dividing M_BH by the virial factor f in Eqs. 4 and 5. As a result of the joint-fit analysis, we obtained a slope of 4.26 and log f = 1.03 for the σ_Hα-based M_BH estimator, and a slope of 4.30 and log f = 0.30 for the FWHM_Hα-based M_BH estimator (Figure 4, right panels). The offset of the hidden type 1 AGNs from the M_BH-σ_* relation becomes negligible (i.e., 0.01 and 0.03 dex, respectively, for the σ_Hα-based and FWHM_Hα-based M_BH), and the scatter of the sample (0.37 dex) is comparable to that of inactive galaxies and reverberation-mapped AGNs. Based on the joint-fit analysis, we find no strong evidence that the hidden type 1 AGNs have a different M_BH-σ_* relation compared to inactive galaxies or the reverberation-mapped AGNs. We examine whether the measured stellar velocity dispersion may be affected by the rotational velocity by comparing face-on and edge-on late-type galaxies in the M_BH-σ_* plane (see Figure 5). As expected, we find that the majority of edge-on galaxies are located below the M_BH-σ_* relation, suggesting that the stellar velocity dispersion was overestimated due to the contribution of the rotational velocity, while face-on galaxies are more randomly distributed since their line-of-sight rotational velocity is much lower than that of edge-on galaxies. In Figure 6, we compare the b/a ratio, as a proxy for the inclination angle, with the offset from the M_BH-σ_* relation for late-type galaxies. We find a weak trend that the average offset increases with increasing b/a ratio, indicating that more inclined late-type galaxies show a larger offset below the M_BH-σ_* relation. These results suggest that spatially resolved kinematics are required to overcome the inclination and rotation effects and to better investigate the M_BH-σ_* relation of the hidden type 1 AGNs. Nevertheless, we find that the hidden type 1 AGNs follow the same M_BH-σ_* relation as inactive galaxies and reverberation-mapped AGNs within the limitations of the current data.

§ GAS KINEMATICS

§.§ Narrow Line Region gas kinematics

In this section, we investigate the gas kinematics in the NLR using the [N II] and [O III] emission lines compared to stellar kinematics. If the kinematics of the NLR gas is governed by the gravitational potential, the velocity dispersions of emission lines and stellar absorption lines should be comparable <cit.>. However, the ionized gas, e.g., [O III], of AGNs typically shows a larger velocity dispersion than stars <cit.>, suggesting the presence of a non-gravitational component. First, we compare the velocity dispersion of [N II] with the stellar velocity dispersion, finding a one-to-one relationship with a mean ratio of 0.99±0.20. This suggests that the [N II] gas does not show any non-gravitational component and that the broadening of the [N II] gas can be entirely explained by the gravitational potential. In the case of [O III], however, we find that the mean ratio between gas and stellar velocity dispersion is 1.22±0.50, indicating the presence of an outflow component in [O III].
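To make the core/wing decomposition discussed next concrete, the moments of Section 3.1 can be applied directly to a double-Gaussian [O III] model; a minimal sketch with illustrative (assumed) amplitudes and widths, showing how a blueshifted wing raises both the measured velocity shift and the line dispersion:

import numpy as np

C = 2.998e5  # speed of light, km/s

def moments(wave, flux):
    """First moment and line dispersion of a profile on a uniform wavelength grid."""
    lam0 = np.sum(wave * flux) / np.sum(flux)
    lam2 = np.sum(wave ** 2 * flux) / np.sum(flux)
    return lam0, np.sqrt(lam2 - lam0 ** 2)

wave = np.linspace(4980.0, 5040.0, 3000)
lam_sys = 5006.84  # rest-frame [O III] 5007 wavelength (Angstrom)
gauss = lambda a, c, s: a * np.exp(-0.5 * ((wave - c) / s) ** 2)

core = gauss(1.0, lam_sys, 2.0)        # narrow core at the systemic velocity
wing = gauss(0.3, lam_sys - 2.0, 6.0)  # broader, blueshifted wing (assumed)

for name, flux in [("core only", core), ("core + wing", core + wing)]:
    lam0, sig = moments(wave, flux)
    print(name, ": v_shift ~ %.0f km/s," % ((lam0 - lam_sys) / lam_sys * C),
          "sigma ~ %.0f km/s" % (sig / lam_sys * C))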
When we separately calculate the velocity dispersions of the narrow core and wing components, the core component shows a velocity dispersion consistent with the stellar velocity dispersion, with a mean ratio of 0.96±0.29, while the velocity dispersion of the wing component of [O III] is a factor of 2.11±0.80 higher than the stellar velocity dispersion, representing non-gravitational kinematics, i.e., outflows <cit.>. Second, we investigate the velocity shift of each emission line with respect to the systemic velocity (Figure 7). When we use the total profile of [O III], the mean velocity shift is -23±44 km s^-1, with a distribution skewed toward blueshifts. Compared to the distribution of the [O III] velocity shifts of type 2 AGNs <cit.>, type 1 AGNs show a strongly asymmetric distribution, with a much larger number of AGNs with blueshifted [O III] than AGNs with redshifted [O III], presumably due to the fact that the direction of the outflows is close to the line of sight (see the discussion on the number ratio between blueshifted and redshifted [O III] in <cit.>). Once we use the narrow core and broad wing components separately, the mean velocity shift is -10±38 km s^-1 and -102±108 km s^-1, respectively. These measurements indicate that the core component of [O III] reflects the gravitational component, while the wing component represents the outflow kinematics, as discussed in previous studies <cit.>. We also measure the velocity shift of narrow Hα, finding no significant velocity shift, with a mean of 1±22 km s^-1 and a distribution similar to that of the [O III] core component. In comparing the velocity shifts of [O III] and Hα, we find that the high ionization line is mainly blueshifted, which is consistent with previous studies of type 1 AGNs <cit.>. For instance, <cit.> reported that 77% of type 1 QSOs have blueshifted [O III], while low ionization lines (e.g., [N II], [S II], [O I]) do not show any significant velocity shift. Based on these results, we will use the wing component of [O III] as a tracer of gas outflows to further investigate AGN-driven outflows in the next section.

§.§ AGN-driven outflow

§.§.§ Velocity shift and velocity dispersion of [O III]

Along with the [O III] line dispersion, the [O III] velocity shift is also an outflow signature <cit.>. As presented in Figure 7, the velocity shift of the [O III] total profile shows a blueward tail in its distribution. To estimate the velocity shift error in [O III] for each galaxy, we performed a Monte Carlo simulation, producing 100 mock spectra. We measure the velocity shift in each mock spectrum and take the 1σ dispersion as the uncertainty of the velocity shift. The mean measurement uncertainty of the [O III] velocity shift is 17.0±27.2 km s^-1. If we restrict the sample to objects whose [O III] velocity shifts are larger than the 1σ measurement uncertainty, 414 AGNs (∼68%) show significant velocity shifts. Among the 414 AGNs, 318 AGNs (∼77%) are blueshifted. In type 2 AGNs, the blueshifted fraction is 56% when adopting the same measurement uncertainty criterion based on Monte Carlo simulations <cit.>. This result indicates that blueshifts are more common in type 1 AGNs than in type 2 AGNs <cit.>. The relatively common blueshifts in type 1 AGNs can be explained by a combined model of biconical outflows and dust extinction <cit.>. When we observe type 1 AGNs, it is more likely that the receding cone is obscured by the dusty stellar disk, resulting in a blueshift of the observed [O III] line profile.
Therefore, [O III] in type 1 AGNs is more likely to be blueshifted than in type 2 AGNs due to the orientation effect. To further investigate the kinematics of the AGNs with gas outflows in the NLR, we separate AGNs with a wing component in [O III] (i.e., fitted with double Gaussian models; Group A) from those without a wing component (i.e., fitted with a single Gaussian; Group B). Group A consists of 226 AGNs (about 37% of the total sample), while Group B is composed of 385 AGNs. The mean [O III] velocity shift is -41±54 km s^-1 and -12.1±31.5 km s^-1, respectively, for Group A and Group B. Approximately 77% of the AGNs in Group A show velocity shifts larger than the 1σ measurement uncertainty, while 63% of the AGNs in Group B have velocity shifts above the 1σ measurement uncertainty. If we count only these measurements (i.e., > 1σ error), the mean [O III] velocity shift is -49.5±56.6 km s^-1 and -18.9±37.7 km s^-1, respectively, for Groups A and B, indicating that Group A has on average a larger velocity shift than Group B (see Figure 8). In Figure 8 we compare the velocity shift and velocity dispersion of [O III]. The normalized [O III] line dispersions (log σ_[O III]/σ_*) of Group B are centered around 0, indicating that the [O III] line dispersion and the stellar velocity dispersion are comparable. In contrast, Group A shows on average larger velocity shifts and velocity dispersions than Group B. Compared to the stellar velocity dispersion, the [O III] velocity dispersion is much larger, indicating the presence of non-gravitational kinematics, i.e., outflows. There is a trend that the velocity shift becomes larger with increasing velocity dispersion, as similarly found in type 2 AGNs <cit.>, which is a characteristic feature of biconical outflows, as demonstrated by <cit.>.

§.§.§ Outflow fraction

We investigate the outflow fraction as a function of the [O III] luminosity and the Eddington ratio. Since the wing component of [O III] indicates gas outflows, we use the presence of the wing in [O III] as evidence of outflows. The outflow fraction (i.e., the fraction of double-Gaussian [O III] profiles) rapidly increases with increasing luminosity (Figure 9). At high luminosity, e.g., L_[O III] > 10^41.0 erg s^-1, the outflow fraction is over 85%, suggesting that most high-luminosity AGNs have strong outflows, which is consistent with the trend found in type 2 AGNs <cit.>. On the other hand, the outflow fraction in low luminosity AGNs is lower than that of type 2 AGNs. For example, the outflow fraction of AGNs at L_[O III] = 10^40.0 erg s^-1 is 40% in the case of type 2 AGNs, while it is ∼20% for the hidden type 1 AGNs. The origin of this discrepancy is not clear. A systematic comparison of the outflow fraction between type 1 and type 2 AGNs is yet to be available. The detection of a wing component in [O III] is more difficult for lower luminosity AGNs, since the wing component can be easily diluted by noise. Thus, the outflow fraction of the low luminosity AGNs should be regarded as a minimum <cit.>. We note that our hidden type 1 AGN sample is far from complete at low luminosity, since a broad Hα component will be much weaker and easily missed in identifying type 1 AGNs.
Therefore, we interpret the difference in the outflow fraction at low luminosity as insignificant. The outflow fraction also increases as the Eddington ratio increases, albeit with a shallower slope. Over the Eddington ratio range covered by our sample, i.e., log(L_bol/L_Edd) = -2.5 to -1.5, the outflow fraction increases from ∼30% to ∼50%. These results indicate that ionized gas outflows are directly connected to AGN activity and that the outflow properties are qualitatively similar between type 1 and type 2 AGNs.

§.§ Gravitational Redshift

The velocity of the BLR gas is relatively high, up to several percent of the speed of light, suggesting that the broad lines can be redshifted due to relativistic effects <cit.>. To investigate the gravitational redshift using the newly identified type 1 AGNs, we calculate the velocity shift of the broad Hα with respect to the systemic velocity. In contrast to typical type 1 AGNs, for which the systemic velocity cannot be accurately measured <cit.>, we are able to use the luminosity-weighted stellar absorption lines to determine the systemic velocity, since the AGN continuum is too weak to dilute the stellar absorption lines. Thus, we can decrease the systematic effect in measuring the velocity shifts of the broad emission lines caused by the large uncertainty in the systemic velocity. In Figure 10, we present the velocity shift of the broad Hα line as a function of the Hα line width. The mean velocity shift is 115±389 km s^-1, while 427 out of 611 AGNs (∼70%) show a redshifted Hα, which may be interpreted as gravitational redshift, although the velocity shift may also be caused by other mechanisms such as non-gravitational kinematics (i.e., inflows and outflows) or orbital motions <cit.>. Note that the other 1/3 of the sample shows a blueshifted Hα. We find several extreme objects with a very large blueshift (< -1000 km s^-1) and a large velocity dispersion, whose Hα line profiles are very asymmetric. The nature of these objects is unclear and beyond the scope of the current study. Using the SDSS quasar sample, <cit.> statistically investigated the velocity shift of Hβ to compare with the predictions of their BLR geometry models. The [O III] narrow emission line was used to calculate the systemic velocity, since stellar absorption lines are not detected due to the strong AGN continuum. Comparing with their result (Figure 1 in <cit.>), we obtain a similar result within the observed range of the broad Hα velocity dispersion (< ∼3000 km s^-1) in our sample. The geometry of the BLR is often assumed to follow spherical or disk models, and these models predict that the effect of the gravitational redshift increases with the BLR gas velocity <cit.>. Taking the expected velocity shift of the broad Hα as V_Hα = 1.5 σ_Hα^2/c, based on a spherical model of the BLR from <cit.>, we compare the observed mean velocity shift with the prediction as a function of the line dispersion of the broad Hα. Although there is a large scatter of the Hα velocity shift at fixed velocity dispersion, the trend of the mean velocity shift is consistent with the model, suggesting that the observed Hα velocity shift is consistent with the gravitational redshift effect.
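For reference, the spherical-BLR prediction used here is simple to evaluate; a minimal sketch with illustrative line dispersions:

C = 2.998e5  # speed of light, km/s

def v_grav(sigma_ha):
    """Predicted broad-Halpha shift V = 1.5 sigma^2 / c for a spherical BLR."""
    return 1.5 * sigma_ha ** 2 / C

for sigma in (1000.0, 2000.0, 3000.0):  # illustrative line dispersions, km/s
    print("sigma = %4.0f km/s -> V ~ %4.1f km/s" % (sigma, v_grav(sigma)))

The predicted shifts are only tens of km s^-1, small compared with the object-to-object scatter, which is why the comparison is made for the mean velocity shift as a function of line dispersion rather than object by object.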
§ DISCUSSION: COMPARISON WITH PREVIOUS WORKS

Although the presence of broad emission lines is a characteristic of optical type 1 AGNs, in some cases it is difficult to detect broad lines due to obscuration in the optical spectral range (e.g., red AGNs) or intrinsically low luminosity diluted by host-galaxy emission <cit.>. Several studies have tried to find hidden type 1 AGNs using different methods. For example, using the SDSS DR2 catalogue, <cit.> found 63 objects (1.8% of the total galaxies) which have a relatively broad Hα line based on visual inspection. Similar works by <cit.> and <cit.> reported the detection of hidden type 1 AGNs using SDSS DR7. Here we compare the selection methods and results between our work and the previous studies. In our pilot study <cit.>, we identified type 1 AGNs at 0.02 < z < 0.05 by detecting a broad component in the Hα line profile. Using the same type 2 AGN catalogue, we extend our previous work out to z = 0.1, enlarging the sample size from 142 to 611 objects. However, we note that the selection scheme is improved based on the revised spectral decomposition and visual inspection. For example, we model the Hα region more carefully using three different cases as described in Section 2.2, while <cit.> used a simple fitting scheme with one broad and one narrow component for Hα. Thus, readers are advised to use the results in this extended work. <cit.> searched for type 1 AGNs among emission line galaxies at z < 0.2 using the SDSS DR7 catalogue. First, they used the flux ratio of two narrow band regions, respectively representing the broad Hα (6523-6543 Å) and the continuum (6460-6480 Å), to trace the presence of a broad Hα component. This flux ratio, F_6533/F_6470, is useful for finding broad Hα candidates, since it increases as the broad Hα line becomes more prominent. Combining the signal-to-statistical-noise ratio (defined from the continuum spectral range) and the flux ratio F_6533/F_6470, they defined a 1σ demarcation line to select type 1 AGN candidates <cit.>. Second, they used the area flux ratio, which is defined by the area of the red wing of the broad Hα component, after excluding the [N II] λ6584 emission line (see the blue + grey area in Figure 12), divided by the noise area defined with the 1σ noise level of the continuum (blue area in Figure 12), in order to avoid unreliable detections of a broad Hα. Only if the area flux ratio is larger than 2 did they classify a target as a type 1 AGN. Using these criteria, they identified 1611 type 1 AGNs at z < 0.1. Among these AGNs, 882 objects are SDSS specClass = 3 objects, which are already classified as type 1 AGNs in the SDSS, while the other 729 objects are SDSS specClass = 2 (i.e., galaxy) objects. Thus, they found 729 hidden type 1 AGNs among type 2 or star-forming galaxies by detecting a broad Hα, while in our study we found 611 type 1 AGNs from our type 2 AGN catalogue. Among the 729 type 1 AGNs identified by <cit.>, 230 objects overlap with our sample of 611 type 1 AGNs, while the other 499 objects are not identified as type 1 AGNs in our study. When we examine these 499 objects, the majority of them (412 objects) are located in the star-forming region of the BPT diagram. Since we only used the type 2 AGN catalogue to search for hidden type 1 AGNs, we simply did not investigate these 412 AGNs. The remaining 87 objects were not classified as type 1 AGNs in our study since a broad Hα component was not required to fit the Hα region.
Instead, we were able to fit Hα and [N II] with double Gaussian models for the majority of these objects (70 objects; see the top panel of Figure 11), while single Gaussian models provide a good fit for 7 objects (see the bottom panel of Figure 11). For some cases (10 objects), the noise in the vicinity of the Hα region is too large to decide whether single or double Gaussian models provide a better fit. Note that in our fitting procedure we used double Gaussian models whenever a wing component in the narrow line profile was required. In particular, since we also examined the line profile of the [O III] doublet in order to check whether a wing component is necessary to fit the narrow components of the Hα and [N II] lines, we tried to avoid falsely fitting the wing components of the narrow lines as a broad Hα component. In fact, the [O III] line profile clearly shows the presence of a wing component for about a half of these objects (43/87), as demonstrated in Figure 11 (top panel). Since we used a double Gaussian profile for the narrow lines (Hα, [N II], and [O III]), our fitting method for detecting a broad Hα component seems to be more conservative than that of <cit.>. We also investigate why 315 objects among the 611 objects in our hidden type 1 AGN sample were not classified as type 1 AGNs by <cit.>. Many of them were excluded due to the area ratio criterion of <cit.>. For example, when we apply the area ratio cut defined by <cit.> to the 315 AGNs, 136 objects do not satisfy the criterion, while in the current study we conservatively concluded that a broad Hα is present (see Figure 12). In particular, if the noise level is relatively large, the area flux ratio falls below 2; hence, these AGNs will be rejected by the criterion defined by <cit.>. In Figure 12 we demonstrate two cases, each of which clearly shows a broad Hα component upon visual inspection, while the area flux ratio does not satisfy the criterion (i.e., ratio < 2). In addition, if the broad Hα is blueshifted, the area ratio also becomes smaller, since the area defined on the red side of Hα becomes smaller. For the remaining 179 objects among the 315 objects, we find that the area flux ratio is slightly larger than 2 when we calculate it using our own noise estimates. Thus, it is not clear why these objects were rejected by <cit.>. It is possible that there are differences in the noise calculation and in the continuum modeling between the two studies. In summary, although the selection method of <cit.> provides consistent and quantitative criteria for identifying a hidden broad Hα component, it is somewhat limited by the fact that the wing components of the narrow lines are not included in their fitting, and that their area ratio cut may reject a weak and shallow broad Hα component. More detailed studies of the detection scheme for a weak Hα component are necessary to better detect hidden type 1 AGNs.

§ SUMMARY AND CONCLUSIONS

By detecting a broad component in the Hα line profile, we conservatively identified a sample of 611 type 1 AGNs at 0.02 < z < 0.1, using the catalogue of <cit.>, which provides a large sample of type 2 AGNs classified based on emission line flux ratios. We increased the sample size of hidden type 1 AGNs from 142 <cit.> to 611 by extending the redshift range out to z = 0.1 and using a consistent and improved emission line analysis.
The main findings are summarized as follows. ∙ The hidden type 1 AGNs have a similar range of Hα widths, but on average lower luminosity compared to typical type 1 AGNs in the same redshift range, indicating that these hidden type 1 AGNs with a relatively weak AGN continuum are useful for studying the properties of low luminosity AGNs and their host galaxies. The mean black hole mass estimated from the Hα FWHM and luminosity is log M_BH = 7.20±0.42, while the mean Eddington ratio is log L_bol/L_Edd = -2.04±0.34. ∙ These AGNs seem to be slightly offset from the M_BH-σ_* relation defined by inactive galaxies and reverberation-mapped AGNs <cit.>, presumably due to the systematic difference in estimating the black hole mass (i.e., depending on Hα or Hβ) and potentially different virial factors between typical type 1 AGNs and the hidden type 1 AGNs with a weak AGN continuum. In contrast, when we perform a joint-fit analysis by combining quiescent galaxies, RM AGNs, and hidden type 1 AGNs, we find no significant difference between typical type 1 AGNs and hidden type 1 AGNs. ∙ By investigating the kinematics of the ionized gas, we find that the velocity dispersion of [N II] and of the core component of [O III] is roughly consistent with the stellar velocity dispersion, indicating that the host galaxy gravitational potential is responsible for the broadening of these lines. In contrast, the wing component of [O III] represents non-gravitational kinematics, i.e., outflows, consistent with the findings for type 2 AGNs <cit.>. ∙ The velocity dispersion and velocity shift of [O III] show strong non-gravitational kinematics, i.e., outflows. The fraction of AGNs with a wing component in [O III] strongly increases with AGN luminosity, suggesting that the non-gravitational kinematics are directly connected to AGN activity. ∙ The line-of-sight velocity and velocity dispersion of the ionized gas in type 1 AGNs are on average larger than those of type 2 AGNs <cit.>, which is consistent with biconical outflow models and the orientation effect <cit.>. Although the uncertainty of the [O III] velocity shift is large, we find that there are more AGNs with blueshifted [O III] than AGNs with redshifted [O III]. Based on these results, we conclude that the hidden type 1 AGNs follow the general characteristics of typical broad line AGNs except for their low luminosity and low Eddington ratio. More detailed studies on the detection scheme and the completeness of the broad line AGNs may provide a better understanding of these hidden type 1 AGNs, which can be used as a unique channel for studying AGN unification and the black hole-galaxy connection. We thank the anonymous referee for valuable comments. Support for this work was provided by the National Research Foundation of Korea grant funded by the Korea government (No. 2016R1A2B3011457 and No. 2010-0027910). [Abazajian et al.(2009)]abazajian+09 Abazajian, K. N., et al. 2009, ApJS, 182, 543 [Antonucci(1993)]Antonucci+93 Antonucci, R. 1993, ARA&A, 31, 473 [Bae & Woo(2014)]Bae+14 Bae, H.-J., & Woo, J.-H. 2014, ApJ, 795, 30 [Bae & Woo(2016)]Bae+16 Bae, H.-J., & Woo, J.-H. 2016, ApJ, 828, 97 [Baldwin et al.(1981)]Baldwin+81 Baldwin, J. A., Phillips, M. M., & Terlevich, R. 1981, PASP, 93, 5 [Baldassare et al.(2016)]Baldassare+16 Baldassare, V. F., Reines, A. E., Gallo, E., et al. 2016, ApJ, 829, 57 [Barth et al.(2005)]Barth+05 Barth, A. J., Greene, J. E., & Ho, L. C. 2005, ApJ, 619, L151 [Bentz et al.(2009)]Bentz+09 Bentz, M. C., Walsh, J. L., Barth, A. J., et al. 2009, ApJ, 705, 199 [Bentz et al.(2013)]Bentz+13 Bentz, M. C., Denney, K. D., Grier, C. J., Barth, A.
J., Peterson, B. M., Vestergaard, M., Bennert, V. N., Canalizo, G., De Rosa, G., Filippenko, A. V., Gates, E. L., Greene, J. E., Li, W., Malkan, M. A., Pogge, R. W., Stern, D., Treu, T., & Woo, J.-H. 2013, ApJ, 767, 149 [Blandford & McKee(1982)]Blanford+82 Blandford, R. D., & McKee, C. F. 1982, ApJ, 255, 419 [Boroson(2002)]Boroson+02 Boroson, T. A. 2002, ApJ, 565, 78 [Boroson(2005)]Boroson+05 Boroson, T. 2005, AJ, 130, 381 [Cappellari & Emsellem(2004)]Cappellari+04 Cappellari, M., & Emsellem, E. 2004, PASP, 116, 138 [Choi et al.(2009)]Choi+09 Choi, Y.-Y., Woo, J.-H., & Park, C. 2009, ApJ, 699, 1679 [Crenshaw et al.(2010)]Crenshaw+10 Crenshaw, D. M., Schmitt, H. R., Kraemer, S. B., Mushotzky, R. F., & Dunn, J. P. 2010, ApJ, 708, 419 [Ferrarese & Merritt(2000)]Ferrarese00 Ferrarese, L., & Merritt, D. 2000, ApJ, 539, L9 [Glikman et al.(2007)]Gilkman+07 Glikman, E., Helfand, D. J., White, R. L., et al. 2007, ApJ, 667, 673 [Glikman et al.(2013)]Gilkman+13 Glikman, E., Urrutia, T., Lacy, M., et al. 2013, ApJ, 778, 127 [Greene & Ho(2005)]Greene+05 Greene, J. E., & Ho, L. C. 2005, ApJ, 630, 122 [Grimes et al.(2004)]Grimes+04 Grimes, J. A., Rawlings, S., & Willott, C. J. 2004, MNRAS, 349, 503 [Gültekin et al.(2009)]Gultekin+09 Gültekin, K., et al. 2009, ApJ, 698, 198 [Kang et al.(2013)]Kang+13 Kang, W.-R., Woo, J.-H., Schulze, A., et al. 2013, ApJ, 767, 26 [Kauffmann et al.(2003)]Kauffmann+03 Kauffmann, G., Heckman, T. M., Tremonti, C., et al. 2003, MNRAS, 346, 1055 [Kewley et al.(2001)]Kewley+01 Kewley, L. J., Dopita, M. A., Sutherland, R. S., Heisler, C. A., & Trevena, J. 2001, ApJ, 556, 121 [Kewley et al.(2006)]Kewley+06 Kewley, L. J., Groves, B., Kauffmann, G., & Heckman, T. 2006, MNRAS, 372, 961 [Kim et al.(2015)]Kim+15 Kim, D., Im, M., Kim, J. H., et al. 2015, ApJS, 216, 17 [Kollatschny(2003)]Kollatschny+03 Kollatschny, W. 2003, A&A, 412, L61 [Komossa & Xu(2007)]Komossa07 Komossa, S., & Xu, D. 2007, ApJ, 667, L33 [Komossa et al.(2008)]Komossa+08 Komossa, S., Xu, D., Zhou, H., Storchi-Bergmann, T., & Binette, L. 2008, ApJ, 680, 926 [Kormendy & Ho(2013)]Kormendy Ho2013 Kormendy, J., & Ho, L. C. 2013, ARA&A, 51, 511 [Lawrence(1991)]Lawrence+91 Lawrence, A. 1991, MNRAS, 252, 586 [Lee et al.(2012)]Lee+12 Lee, G.-H., Woo, J.-H., Lee, M. G., Hwang, H. S., Lee, J. C., Sohn, J., & Lee, J. H. 2012, ApJ, 750, 141 [Lintott et al.(2008)]Lintott+08 Lintott, C. J., Schawinski, K., Slosar, A., et al. 2008, MNRAS, 389, 1179 [Lintott et al.(2011)]Lintott+11 Lintott, C., Schawinski, K., Bamford, S., et al. 2011, MNRAS, 410, 166 [Markwardt(2009)]Markwardt+09 Markwardt, C. B. 2009, ASPC, 411, 251 [Kaspi et al.(2000)]Kaspi+00 Kaspi, S., Smith, P. S., Netzer, H., et al. 2000, ApJ, 533, 631 [Oh et al.(2015)]Oh+15 Oh, K., Yi, S. K., Schawinski, K., Koss, M., Trakhtenbrot, B., & Soto, K. 2015, ApJS, 219, 1 [Park et al.(2015)]Park+15 Park, D., Woo, J.-H., Bennert, V. N., et al. 2015, ApJ, 799, 164 [Peterson et al.(2004)]Peterson+04 Peterson, B. M., Ferrarese, L., Gilbert, K. M., et al. 2004, ApJ, 613, 682 [Reines et al.(2013)]Reines+13 Reines, A. E., Greene, J. E., & Geha, M. 2013, ApJ, 775, 116 [Reines & Volonteri(2015)]Reines+15 Reines, A. E., & Volonteri, M. 2015, ApJ, 813, 82 [Richards et al.(2002)]Richards+2002 Richards, G. T., Fan, X., Newberg, H. J., et al. 2002, AJ, 123, 2945 [Richards et al.(2006)]Richards+06 Richards, G. T., Lacy, M., Storrie-Lombardi, L. J., Hall, P. B., Gallagher, S. C., Hines, D. C., Fan, X., Papovich, C., Vanden Berk, D. E., Trammell, G. B., Schneider, D. P., Vestergaard, M., York, D. G., Jester, S., Anderson, S. F., Budavari, T., & Szalay, A. S.
2006, ApJS, 166, 470 [Sánchez-Blázquez et al.(2006)]sb06 Sánchez-Blázquez, P., et al. 2006, MNRAS, 371, 703 [Schawinski et al.(2007)]Schanwinski+07 Schawinski, K., Thomas, D., Sarzi, M., et al. 2007, MNRAS, 382, 1415 [Schneider et al.(2010)]Schneider+10 Schneider, D. P., Richards, G. T., Hall, P. B., et al. 2010, AJ, 139, 2360 [Seyfert(1943)]Seyfert43 Seyfert, C. K. 1943, ApJ, 97, 28 [Simpson(2005)]Simpson05 Simpson, C. 2005, MNRAS, 360, 565 [Tremaine et al.(2014)]Tremaine+14 Tremaine, S., Shen, Y., Liu, X., & Loeb, A. 2014, ApJ, 794, 49 [Urry & Padovani(1995)]Urry+95 Urry, C. M., & Padovani, P. 1995, PASP, 107, 803 [Véron-Cetty et al.(2001)]Veron+01 Véron-Cetty, M.-P., Véron, P., & Gonçalves, A. C. 2001, A&A, 372, 730 [Vanden Berk et al.(2006)]Vanden+06 Vanden Berk, D. E., Shen, J., Yip, C.-W., et al. 2006, AJ, 131, 84 [Warner et al.(2004)]Warner+04 Warner, C., Hamann, F., & Dietrich, M. 2004, ApJ, 608, 136 [Willott et al.(2000)]Willott00 Willott, C. J., Rawlings, S., Blundell, K. M., & Lacy, M. 2000, MNRAS, 316, 449 [Woo et al.(2006)]Woo+06 Woo, J.-H., Treu, T., Malkan, M. A., & Blandford, R. D. 2006, ApJ, 645, 900 [Woo et al.(2010)]Woo+10 Woo, J.-H., Treu, T., Barth, A. J., et al. 2010, ApJ, 716, 269 [Woo et al.(2013)]Woo+13 Woo, J.-H., et al. 2013, ApJ, 772, 49 [Woo et al.(2014)]Woo+14 Woo, J.-H., Kim, J.-G., Park, D., Bae, H.-J., Kim, J.-H., Lee, S.-E., Kim, S. C., & Kwon, H.-J. 2014, J. Korean Astron. Soc., 47, 167 [Woo et al.(2015)]Woo+15 Woo, J.-H., Yoon, Y., Park, S., Park, D., & Kim, S. C. 2015, ApJ, 801, 1 [Woo et al.(2016)]Woo+16 Woo, J.-H., Bae, H.-J., Son, D., & Karouzos, M. 2016, ApJ, 817, 108 [Woo et al.(2017)]Woo+17 Woo, J.-H., Son, D., & Bae, H.-J. 2017, ApJ, in press [Wyithe & Loeb(2002)]Wyithe+02 Wyithe, J. S. B., & Loeb, A. 2002, ApJ, 581, 886 [Zamanov et al.(2002)]Zamanov+02 Zamanov, R., Marziani, P., Sulentic, J. W., et al. 2002, ApJ, 576, L9 [Zheng & Sulentic(1990)]Zheng+90 Zheng, W., & Sulentic, J. W. 1990, ApJ, 350, 512 [Zhang et al.(2011)]Zhang+11 Zhang, K., Dong, X.-B., Wang, T.-G., & Gaskell, C. M. 2011, ApJ, 737, 71 | http://arxiv.org/abs/1703.08901v1 | {
"authors": [
"Da-In Eun",
"Jong-Hak Woo",
"Hyun-Jin Bae"
],
"categories": [
"astro-ph.GA"
],
"primary_category": "astro-ph.GA",
"published": "20170327021910",
"title": "A systematic search for hidden type 1 AGNs: gas kinematics and scaling relations"
} |
[email protected] Department of Physics and Astronomy, Texas A&M University, College Station, Texas 77843-4242, USA School of Physics and Astronomy, University of Manchester, Manchester M13 9PL, United Kingdom Department of Physics and Astronomy, Texas A&M University, College Station, Texas 77843-4242, USA 1QB Information Technologies (1QBit), Vancouver, British Columbia, Canada V6B 4W4 Santa Fe Institute, 1399 Hyde Park Road, Santa Fe, New Mexico 87501, USA The fractal dimension of excitations in glassy systems gives information on the critical dimension at which the droplet picture of spin glasses changes to a description based on replica symmetry breaking where the interfaces are space filling. Here, the fractal dimension of domain-wall interfaces is studied using the strong-disorder renormalization group method pioneered by Monthus [Fractals 23, 1550042 (2015)], both for the Edwards-Anderson spin-glass model in up to 8 space dimensions and for the one-dimensional long-range Ising spin glass with power-law interactions. Analyzing the fractal dimension of domain walls, we find that replica symmetry is broken in high-enough space dimensions. Because our results for high-dimensional hypercubic lattices are limited by their small size, we have also studied the behavior of the one-dimensional long-range Ising spin glass with power-law interactions. For the regime where the power of the decay of the spin-spin interactions with their separation distance corresponds to 6 and higher effective space dimensions, we again find the broken-replica-symmetry result of space-filling excitations. This is not the case for smaller effective space dimensions. These results show that the dimensionality of the spin glass determines which theoretical description is appropriate. Our results will also be of relevance to the Gardner transition of structural glasses. 75.50.Lk, 75.40.Cx, 05.50.+q Fractal Dimension of Interfaces in Edwards-Anderson and Long-range Ising Spin Glasses: Determining the Applicability of Different Theoretical Descriptions Helmut G. Katzgraber December 30, 2023 ========================================================================================================================================================== Spin glasses have been studied for more than half a century, but there is still no consensus as to what order parameter describes their low-temperature phase. There are two competing theories: The oldest is the replica symmetry breaking (RSB) theory of Parisi <cit.>, which is known to be correct for the Sherrington-Kirkpatrick (SK) model <cit.>, which is the mean-field or infinite-dimensional limit of the short-range Edwards-Anderson (EA) Ising spin-glass model <cit.>, the commonly used model for d-dimensional systems. Within the RSB picture there are a very large number of pure states. In a second theory, known as the “droplet” picture <cit.>, there are only two pure states and the low-temperature state is replica symmetric. In the droplet picture the behavior of the low-temperature phase is determined by low-lying excitations or droplets whose (free) energies scale with their linear extent ℓ as ℓ^θ and whose interfaces have a fractal dimension d_s < d. In the RSB theory, however, there exist low-lying excitations which cost an energy of O(1) and which are space filling, that is, d_s = d. It has been argued <cit.> that when d ≤ 6 the droplet picture applies, while for d > 6 RSB is the appropriate picture.
Note, however, that in finite space dimensions RSB is different from its infinite-dimensional limit; see Newman and Stein <cit.>, as well as Read <cit.>, for details. In this paper we study the fractal dimension as a function of the space dimension, d_s(d) <cit.>, to find the space dimension at which the droplets become space filling, i.e., when d_s(d) = d. Our results are consistent with 6 being the critical dimension. It is, of course, difficult to overcome finite-size effects in numerical work near 6 dimensions. Therefore, our main evidence that 6 is the critical dimension comes from our study of the one-dimensional long-range spin-glass model introduced by Kotliar, Anderson and Stein (KAS) <cit.>. The calculational technique which we have used is the strong-disorder renormalization group (SDRG) introduced by Monthus <cit.>. This approach produces estimates of d_s that are in agreement with results on the EA model using other numerical techniques for space dimensions 2 and 3 (also studied by Monthus in Ref. <cit.>). In this Letter, we extend the results of Ref. <cit.> up to d = 8 space dimensions, and apply the method introduced in the aforementioned reference to the KAS spin-glass model <cit.>. Whether there is RSB or not in dimensions d ≤ 6 is not only important for spin glasses. In structural glasses there has been much recent interest in the Gardner transition, which is the transition at which replica symmetry breaking is supposed to occur to a glass state of marginal stability (for a review see Ref. <cit.>). However, recent numerical results have suggested that fluctuation effects about the mean-field solution might destroy the Gardner transition in at least 3 space dimensions <cit.>. This result is entirely consistent with our expectation that replica symmetry breaking will be absent for d ≤ 6. The Edwards-Anderson model <cit.> is defined on a d-dimensional cubic lattice of linear extent L by the Hamiltonian

ℋ = - ∑_⟨ij⟩ J_ij S_i S_j,

where the summation is over only nearest-neighbor bonds and the random couplings J_ij are chosen from the standard Gaussian distribution of unit variance and zero mean. The Ising spins take the values S_i ∈ {±1} with i = 1, 2, …, L^d. We have studied this model in space dimensions d = 4, …, 8 using the SDRG method <cit.>. Reference <cit.> studied the cases d = 2 and 3. The SDRG approach successively traces out the spin whose orientation is most dominated by a single large renormalized bond to another spin; when the spin is eliminated, the couplings of the remaining spins are renormalized accordingly. We refer the reader to Ref. <cit.> for further details. The observable we focus on is related to the bond average of Σ^DW, where Σ^DW is the number of bonds crossed by the domain wall when the boundary conditions in one direction are changed from periodic to antiperiodic. The SDRG method is essentially a way of constructing a possible ground state of the system. One runs the method twice, first with periodic and then with antiperiodic boundary conditions in one direction, and counts the bonds for which the relative spin orientation across the bond has changed because of the change of boundary conditions. Pictures of a domain wall so constructed for dimension d = 2 can be found in Ref. <cit.>. It wanders, indicating that it has a fractal dimension, and its length can be described by a fractal exponent d_s, where Σ^DW ∼ L^{d_s}. If the interface were straight across the system, its length would be proportional to L^{d-1}.
We first introduce a more formal definition of Σ^DW, which has a natural extension when we study long-range systems, where the definition of an interface is far from obvious. One defines the link overlap <cit.> via

q_ℓ = (1/N_b) ∑_⟨ ij ⟩ S_i^(π) S_j^(π) S_i^(π̄) S_j^(π̄) (2 δ_{J_ij^(π), J_ij^(π̄)} - 1).

Here S_i^(π) and S_i^(π̄) denote the ground states found with periodic (π) and antiperiodic (π̄) boundary conditions, respectively. One can switch from periodic to antiperiodic boundary conditions by flipping the sign of the bonds crossing a hyperplane of the lattice. N_b is the number of nearest-neighbor bonds in the lattice, which for a d-dimensional hypercube is given by N_b = d L^d. One can then define <cit.>

Γ ≡ 1 - q_ℓ = 2Σ^ DW/(d L^d) ∼ L^{d_s - d}.

In Fig. <ref> we show the bond-averaged value of Γ [Eq. (<ref>)] vs ln L on a logarithmic scale, which should be a straight line of slope d_s - d. In Fig. <ref> the value of d_s is plotted for various dimensionalities d. For d=1, d_s(1) = 0 (pentagon), while for d=2 we have used the value from Ref. <cit.>, i.e., d_s(2) = 1.27 (square), which is in excellent agreement with other numerical estimates <cit.>. For d=3, Ref. <cit.> quotes d_s(3) = 2.55 (square), which is again in good agreement with other estimates <cit.>. In addition, we estimate d_s(4) = 3.7358(13), which again is in good agreement with Monte Carlo estimates <cit.>. Note that the largest system in Ref. <cit.> has N = 5^4 spins, which seems not to be in the scaling regime (see Fig. <ref>). This means that results from small systems tend to overestimate d_s.

Finally, one can see that as the dimensionality d increases, d_s(d) approaches d. However, results from simulations on hypercubic lattices suffer from corrections to scaling. These make it difficult to claim that d_s = d at precisely d = 6. To address this point, we turn to the KAS model.

The one-dimensional KAS model <cit.> is described by the Hamiltonian in Eq. (<ref>), except that the L spins lie on a ring and the exchange interactions J_ij are long ranged, i.e., ⟨ ij ⟩ denotes a sum over all pairs of spins:

J_ij = c(σ,L) ϵ_ij / r_ij^σ,

where r_ij is the shortest circular distance between sites i and j <cit.>. The disorder ϵ_ij is chosen from a Gaussian distribution of zero mean and unit standard deviation, while the constant c(σ,L) in Eq. (<ref>) is fixed to make the mean-field transition temperature T_c^MF = 1, with (T_c^MF)^2 = ∑_j [J_ij^2]_av. Here [⋯]_av represents a disorder average, [J_ij^2]_av = c^2(σ,L)/r_ij^{2σ}, and 1/c(σ,L)^2 = ∑_{j=2}^{L} 1/r_{1j}^{2σ}. Note that in the limit σ → 0 the KAS model reduces to the infinite-range SK model. The advantage of the KAS model is that one can study a large range of linear system sizes. The KAS model can be taken as an interpolation between the d = 1 EA model and the d = ∞ SK model as the exponent σ is varied. The phase diagram of this model in the d–σ plane has been deduced from renormalization group arguments in Refs. <cit.>. For 0 ≤ σ < 1/2 it behaves just like the infinite-range SK model. When 1/2 < σ < 2/3 the critical exponents at the spin-glass transition are mean-field like, and this corresponds to the EA model in space dimensions above 6. In the interval 2/3 ≤ σ < 1 the critical exponents are changed by fluctuations away from their mean-field values. When σ ≥ 1, T_c(σ) = 0, and when σ > 2 the long-range zero-temperature fixed point, which controls the value of the exponents d_s and θ, becomes identical to that of the nearest-neighbor one-dimensional EA model, i.e., d_s = 0 and θ = -1.
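To make the coupling normalization concrete, here is a minimal Python sketch, illustrative only and not the production code behind the results, that generates one disorder realization of the KAS couplings on a ring, with c(σ,L) fixed by 1/c(σ,L)^2 = ∑_{j=2}^{L} r_{1j}^{-2σ} as defined above.

```python
import numpy as np

def kas_couplings(L, sigma, rng=np.random.default_rng(0)):
    """One realization of J_ij = c(sigma, L) * eps_ij / r_ij**sigma on a
    ring of L spins, with r_ij the shortest circular distance and c chosen
    so that sum_j [J_ij^2]_av = 1 (mean-field T_c = 1)."""
    d = np.arange(1, L)
    c2 = 1.0 / np.sum(np.minimum(d, L - d) ** (-2.0 * sigma))
    i, j = np.triu_indices(L, k=1)
    r = np.minimum(np.abs(i - j), L - np.abs(i - j))   # circular distance
    J = np.zeros((L, L))
    J[i, j] = np.sqrt(c2) * rng.normal(size=r.size) / r ** sigma
    return J + J.T

J = kas_couplings(128, 0.75)
print(np.sum(J[0] ** 2))   # fluctuates about 1 over disorder realizations
```

By construction, ∑_j [J_ij^2]_av = 1 for every site i, which is the stated mean-field normalization T_c^MF = 1.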
There is a convenient mapping between σ and an effective dimensionality d_eff of the short-range EA model <cit.>. For 1/2 < σ < 2/3, it is

d_eff = 2/(2σ - 1).

Thus, right at the value σ = 2/3, d_eff = 6. The arguments given in Ref. <cit.> that the critical dimension is 6 (below which one sees droplet behavior and above which one sees RSB behavior) were directly extended to the KAS model; they predict that one will see RSB behavior only in the interval σ < 2/3, so that σ = 2/3 is the critical value expected for the KAS model.

We have determined d_s for the KAS model using two definitions. The first definition is via the generalization of the link overlap in Eq. (<ref>) to the long-range KAS model, just as done in Ref. <cit.>:

q_ℓ = [2/(L(L-1))] ∑_{i<j} w_ij S_i^(π) S_j^(π) S_i^(π̄) S_j^(π̄) (2 δ_{J_ij^(π), J_ij^(π̄)} - 1),

where w_ij = (L-1) c(σ,L)^2 / r_ij^{2σ}. Note that the sum of w_ij over i<j equals L(L-1)/2. Antiperiodic boundary conditions can be produced by flipping the sign of the bonds whose shortest paths pass through the origin. d_s is then obtained from q_ℓ using Eq. (<ref>) with d=1.

Because we are unsure of the topological significance of d_s calculated in this way, we use a second approach whose topological significance is clear. Fortunately, it gives very similar results to those of our first definition. Let τ_i = S_i^(π) S_i^(π̄), and define an “island” as a sequence in which all the τ_i are of the same sign. For the EA model limit of the KAS model, i.e., when σ > 2, there are only two islands, but when the long-range zero-temperature fixed point <cit.> controls the behavior there are many islands; we denote by N_I the number of islands produced by the change from periodic to antiperiodic boundary conditions. Formally, N_I can be computed via

N_I = (1/4) ∑_{i=1}^{L} (τ_{i+1} - τ_i)^2,

where τ_{L+1} = τ_1. We define d_s via N_I ∼ L^{d_s}. The islands have a distribution of sizes, with mean size L_0 = L/N_I ∼ L^{1-d_s}. In the RSB region, where d_s = d = 1, L_0 is independent of the size of the system and is of O(1), a result which we obtained previously from direct studies in the SK limit <cit.>.

We have used these two quite distinct definitions of d_s to compute the fractal dimension as a function of σ using the SDRG method. The details of the system sizes and numbers of disorder realizations can be found in Table <ref>. Our results for N_I and Γ are shown in Fig. <ref>. From these we have extracted values for d_s, which are shown in Fig. <ref>. The values obtained for d_s from Γ and N_I are reassuringly similar. The most striking features of our results are, first, that d_s ≃ 1 (= d) when σ < 2/3 and, second, that d_s decreases from unity as σ increases past 2/3. Because σ = 2/3 maps to d = 6 according to Eq. (<ref>), we believe that this is strong evidence that 6 is the dimension below which the droplet picture applies and that only in more than 6 space dimensions will one find RSB effects, just as anticipated in Ref. <cit.>.

At σ > 2 the long-range fixed point is unstable and the renormalization group flows go to the short-range fixed point, that of the d=1 EA model <cit.>. For the EA model in one space dimension, d_s = 0 and θ = -1. We were expecting that d_s would go to zero at σ = 2; it is possible that d_s is just very small in the interval 1.5 < σ < 2.
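The island count N_I defined above reduces to a few lines of code. The sketch below, with toy spin configurations chosen only for illustration, implements the periodic wrap τ_{L+1} = τ_1.

```python
import numpy as np

def island_count(s_pi, s_apbc):
    """N_I = (1/4) * sum_i (tau_{i+1} - tau_i)^2 with tau_i the product of
    the periodic and antiperiodic ground-state spins, wrapped periodically."""
    tau = s_pi * s_apbc
    return int(np.sum((np.roll(tau, -1) - tau) ** 2)) // 4

# Toy configurations on a ring of 8 spins: tau has two sign domains,
# so the change of boundary conditions produces N_I = 2 islands.
s_pi   = np.array([1, 1,  1, -1, -1, 1, 1, 1])
s_apbc = np.array([1, 1, -1,  1, -1, 1, 1, 1])
print(island_count(s_pi, s_apbc))   # -> 2
```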
There are small finite-size corrections when using the SDRG method. For σ ∼ 1, there is a downward curvature in the data (Fig. <ref>), so that if we had been able to study larger L values our estimates of d_s might have decreased. However, the behavior in the crucial region where σ is close to 2/3 is less affected by finite-size effects. Monthus and Garel <cit.> have obtained estimates for d_s from exact studies on the KAS model for L ≤ 24. They found d_s(σ=0.62) ≃ 1, d_s(σ=0.75) ≃ 0.94, d_s(σ=0.87) ≃ 0.82, d_s(σ=1) ≃ 0.72, and d_s(σ=1.25) ≃ 0.4. These results illustrate clearly that estimates of d_s from small systems tend to be high.

We now discuss the accuracy of the SDRG method. First, we note that the SDRG is considerably better than the Migdal-Kadanoff (MK) approximation, which gives d_s^ MK = d-1 <cit.>; this coincides with the lower bound on d_s and so never gives d_s = d. The SDRG method can be used to determine θ as well as d_s. In 2 dimensions it gives θ ≃ 0 <cit.>, whereas the established value is close to -0.28 <cit.>. The SDRG method is only exact for special cases. Like the MK approximation, it is exact in one space dimension for the EA model, but its performance for the energy per spin and the exponent θ then steadily deteriorates with increasing space dimension d. Monthus <cit.> suggested that it does a good job for the exponent d_s because that exponent is dominated by short length-scale optimization, which is well captured by the early steps of the SDRG method, but that it does badly for the interface free-energy exponent θ, which also requires optimization on the longest length scales. We also suspect that its success in determining d_s might be connected with the fact that the domain wall is a self-similar fractal: it has the same fractal dimension d_s whether that dimension is studied on short or long length scales. In d=2 and d=3 Monthus <cit.> showed that the SDRG works on short length scales but fails on long length scales. We believe the consequence of this might just be that in determining the length of the domain wall, Σ^ DW = A L^{d_s}, the exponent d_s is correctly determined from the short length-scale behavior, but to obtain the coefficient A correctly one would need a treatment also valid on long length scales. In the KAS model at σ = 0.1 the SDRG fails on short length scales but works on long length scales. Again, we believe that the exponent d_s = d = 1 is correct, but that the coefficient A is only approximate.

One worrisome issue is that numerical work around 6 space dimensions could suffer from poor precision; how, then, can one be confident (short of a rigorous proof) that d = 6 is a special space dimension below which RSB does not occur? There is another numerical procedure, the greedy algorithm (GA) <cit.>, in which one satisfies the bonds in decreasing order of the coupling magnitudes |J_ij| unless a closed loop would appear, in which case one skips to the next-largest bond. We have found that as d → 6 from below the values of d_s obtained from the GA approach those from the SDRG, which is not surprising when one examines how the SDRG works. For d=2, however, the GA is certainly poorer than the SDRG, because it predicts d_s = 1.216(1) <cit.>. Jackson and Read <cit.>, however, have an analytical argument that 6 is a special space dimension for the GA. This gives us confidence that 6 is the space dimension above which interfaces are space filling.

W. W. and H. G. K. acknowledge support from NSF DMR Grant No. 1151387. The work of H. G. K.
and W. W. is supported in part by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via MIT Lincoln Laboratory Air Force Contract No. FA8721-05-C-0002. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of ODNI, IARPA, or the U.S. government. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. We thank Texas A&M University for access to their Ada and Curie clusters. | http://arxiv.org/abs/1703.08679v2 | {
"authors": [
"Wenlong Wang",
"M. A. Moore",
"Helmut G. Katzgraber"
],
"categories": [
"cond-mat.dis-nn"
],
"primary_category": "cond-mat.dis-nn",
"published": "20170325115810",
"title": "The Fractal Dimension of Interfaces in Edwards-Anderson and Long-range Ising Spin Glasses: Determining the Applicability of Different Theoretical Descriptions"
} |
[email protected] (D.W.) School of Physics & Material Science, Anhui University, Hefei 230601, China National Laboratory for Infrared Physics, Shanghai Institute of Technical Physics, Chinese Academy of Sciences, Shanghai 200083, China School of Physics & Material Science, Anhui University, Hefei 230601, ChinaDepartment of Chemistry, Department of Physics and Birck Nanotechnology Center, Purdue University, West Lafayette, IN 47907 USA Qatar Environment and Energy Research Institute(QEERI), HBKU, Qatar Foundation, Doha, Qatar School of Physics & Material Science, Anhui University, Hefei 230601, [email protected] (L.Y.) School of Physics & Material Science, Anhui University, Hefei 230601, ChinaDepartment of Chemistry, Department of Physics and Birck Nanotechnology Center, Purdue University, West Lafayette, IN 47907 USA Qatar Environment and Energy Research Institute(QEERI), HBKU, Qatar Foundation, Doha, Qatar The uncertainty relation is a fundamental limit in quantum mechanics and is of great importance to quantum information processing as it relates to quantum precision measurement. Due to interactions with the surrounding environment, a quantum system will unavoidably suffer from decoherence. Here, we investigate the dynamic behaviors of the entropic uncertainty relation of an atom-cavity interacting system under a bosonic reservoir during the crossover between Markovian and non-Markovian regimes. Specifically, we explore the dynamic behavior of the entropic uncertainty relation for a pair of incompatible observables under the reservoir-induced atomic decay effect both with and without quantum memory. We find that the uncertainty dramatically depends on both the atom-cavity and the cavity-reservoir interactions, as well as the correlation time, τ, of the structured reservoir.Furthermore, we verify that the uncertainty is anti-correlated with the purity of the state of the observed qubit-system. We also propose a remarkably simple and efficient way to reduce the uncertainty by utilizing quantum weak measurement reversal. Therefore our work offers a new insight into the uncertainty dynamics for multi-component measurements within an open system, and is thus important for quantum precision measurements. Entropic uncertainty relations for Markovian and non-Markovian processes under a structured bosonic reservoir Sabre Kais December 30, 2023 =============================================================================================================The uncertainty principle, originally proposed by Heisenberg <cit.>, is a fascinating aspect of quantum mechanics. It sets a bound to the precision for simultaneous measurements regarding a pair of incompatible observables, e.g. position (x̂) and momentum (p̂). Later, the uncertainty principle was generalized, by Kennard <cit.> and Robertson <cit.> as applying to an arbitrary pair of non-commuting observables (say P̂ and Q̂) where the standard deviation is given asΔ_ρP̂·Δ_ρQ̂≥1/2|⟨[P̂,Q̂]⟩|_ρfor a given system, ρ, where the variance is given as Δ_ρ X=√(⟨ X^2⟩_ρ-⟨ X⟩^2_ρ), ⟨∙⟩ denotes the expectation value of the observable, and [P̂,Q̂]=P̂Q̂-Q̂P̂ denotes the commutator. Importantly, the standard deviation in Robertson's relation is not always an optimal measurement for the uncertainty as the right-hand side of the relation depends on the state ρ of the system, which will lead to a trivial bound if the operators P̂ and Q̂ do not commute. 
In order to compensate for this, Deutsch <cit.> put forward an alternative inequality of the form

S^ρ(P̂) + S^ρ(Q̂) ≥ 2 log_2[2/(1+√(c))]

for any pair of non-degenerate observables P̂ and Q̂ in terms of Shannon entropy, i.e. the so-called entropic uncertainty relation (EUR). To be explicit, the Shannon entropy is given by S^ρ(P̂) = -∑_i p_i log_2 p_i, where p_i = ⟨ψ_i|ρ|ψ_i⟩; the parameter c in Eq. (<ref>) is the maximum overlap between the observables P̂ and Q̂, which can be mathematically expressed as c = max_ij |⟨ψ_i|φ_j⟩|^2, with |ψ_i⟩ and |φ_j⟩ being the eigenstates of P̂ and Q̂. Obviously, yet remarkably, the lower bound is now independent of the state of the given system. Later, Kraus <cit.>, as well as Maassen and Uffink <cit.>, made a significant improvement by refining Deutsch's result to

S^ρ(P̂) + S^ρ(Q̂) ≥ -log_2 c =: B_KMU,

where the largest uncertainty is obtained for two mutually unbiased observables. More recently, Coles and Piani <cit.> obtained an optimized bound of the form

S^ρ(P̂) + S^ρ(Q̂) ≥ -log_2 c + [(1-√(c))/2] log_2(c/c̃) =: B_CP,

with c̃ being the second largest value of {|⟨ψ_i|φ_j⟩|^2} over all values of i and j. It is obvious that the bound B_CP ≥ -log_2 c holds, which implies Eq. (<ref>) offers a tighter bound when compared with the former iterations.

In fact, the importance of the uncertainty principle is that it reflects the ability of quantum information stored within a quantum memory to reduce or eliminate the uncertainty associated with a measurement on a second particle entangled with the quantum memory <cit.>. Moreover, the EUR has been established as a powerful tool for various applications, including security analysis for quantum communication <cit.>, entanglement witnessing <cit.>, probing quantum correlations <cit.>, quantum speed limits <cit.>, and steering Bell's inequality <cit.>. Additionally, there have been several expressions for the optimal form of the EUR associated with two-component or multiple measurements <cit.>. Notably, due to interactions with a noisy environment, a quantum system will suffer from decoherence, thereby inflating the entropic uncertainty to some extent. Therefore, it is of fundamental importance to clarify how environmentally-induced decoherence affects the uncertainty of measurements. To date, there have been several studies of the entropic uncertainty under the influence of various types of dissipative environments <cit.>. Recently, Karpat et al. <cit.> argued that memory effects can straightforwardly manipulate the EUR's lower bound in a practical scenario.

It is well known that any environment can be classified as either Markovian (information stored in the qubit system flows one way, from the system to the environment) or non-Markovian (information stored in the qubit system is capable of bidirectional flow between the system and the environment). Here, we aim to understand how a structured environment affects the EUR as the system undergoes a crossover between non-Markovian and Markovian regimes. The model considered herein is a two-level atomic system coupled to a composite environment, which consists of a single cavity mode and a structured reservoir. The model is simple yet sophisticated enough for our purpose. It should be noted that the non-Markovian dynamics of the qubit-cavity model has been studied theoretically <cit.> and demonstrated experimentally <cit.> in the non-Markovian regime.
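As a concrete check of these bounds, the overlaps c and c̃ are easy to evaluate for any finite-dimensional pair of observables. The following minimal Python sketch, illustrative only, computes B_KMU and B_CP from the eigenbases; for the Pauli pair σ_x and σ_z used later in this work, c = c̃ = 1/2 and both bounds equal one bit.

```python
import numpy as np

def eur_bounds(P, Q):
    """Maassen-Uffink and Coles-Piani lower bounds from the overlaps
    |<psi_i|phi_j>|^2 between the eigenbases of observables P and Q."""
    _, vp = np.linalg.eigh(P)
    _, vq = np.linalg.eigh(Q)
    vals = np.sort((np.abs(vp.conj().T @ vq) ** 2).ravel())[::-1]
    c, c2 = vals[0], vals[1]      # largest and second-largest overlap
    b_kmu = -np.log2(c)
    b_cp = -np.log2(c) + 0.5 * (1.0 - np.sqrt(c)) * np.log2(c / c2)
    return b_kmu, b_cp

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
print(eur_bounds(sx, sz))   # both equal 1.0 bit, since c = c~ = 1/2
```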
For a reservoir with an Ornstein-Uhlenbeck type of correlation function, the reservoir correlation time may be described by a single parameter conveying the reservoir's decay time. Composite environments, however, involve several time scales describing the information exchange between the two subsystems, as well as between the system and the environment, so the single-parameter description is not generalizable to them. Therefore, we investigate a several-parameter regime for the cavity-reservoir coupling strength and show how these parameters affect the EUR. Remarkably, we find that the dissipation into the external environment causes quantitative fluctuations in the value of the entropic uncertainty. In particular, we also provide a simple and efficient way to decrease the uncertainty, via quantum weak measurement reversals, by suppressing the degradation of the initial state of the subsystem induced by this hierarchical environment.

Results

Systemic dynamics. Herein we consider a model system consisting of an atom (a qubit) coupled to a single-mode cavity, with the environment treated as a structured bosonic reservoir. As illustrated in Fig. <ref>, information can flow between the atom, the cavity and the reservoir. Explicitly, during a Markovian evolution the information will flow out of the qubit into the environment, which consists of the cavity and the reservoir. On the contrary, if the system is in a non-Markovian regime, information will not only flow out of the qubit but will also flow back from the hierarchical environment. The system can be described by the Hamiltonian

ℋ_S = ℋ_0 + ℋ_I,

where

ℋ_0 = (ω_a/2)σ_z + ω_c a^† a + ∑_j=0^∞ ω_j b_j^† b_j

is the free Hamiltonian of the composite system consisting of an atom, a cavity and a structured reservoir. Within Eqs. (<ref>) and (<ref>), ω_a, ω_c and ω_j denote the transition frequency of the atom, the frequency of the cavity, and the frequency of the jth mode of the reservoir, respectively. The Pauli operator σ_z = |e⟩⟨ e| - |g⟩⟨ g|, with |e⟩ and |g⟩ representing the excited and ground states, respectively. a^† (a) and b_j^† (b_j) denote the creation (annihilation) operators for the cavity and the jth mode of the reservoir, respectively. Finally, ℋ_I denotes the interaction Hamiltonian for both the atom-cavity and cavity-reservoir couplings. In the interaction picture, under the resonance condition ω_a = ω_c = ϖ, the interaction Hamiltonian can be written as

ℋ_I = Ω(σ^+ a + σ^- a^†) + ∑_j=0^∞ Δ_j(a b^†_j e^iδ_j t + a^† b_j e^-iδ_j t).

Within the above, σ^+ = |e⟩⟨ g| and σ^- = |g⟩⟨ e| are the raising and lowering operators, respectively; Ω is the atom-cavity coupling strength, Δ_j is the coupling strength between the cavity mode and the jth mode of the reservoir, and δ_j = ω_j - ϖ describes the detuning of the cavity and the reservoir.

We assume that the reservoir has a Lorentzian spectrum J(ω) = (Θ/2π) γ^2/[(ϖ-ω)^2 + γ^2]. In this case, the correlation function of the reservoir is given by α(t,s) = (Θγ/2) e^-γ|t-s|, and the correlation (or memory) time is τ = γ^-1. When γ goes to infinity, the model environment tends to a reservoir possessing no memory effect. Under these assumptions, we obtain the reduced dynamics of the atomic state, which is given as (see the Methods section for details)

ρ(t) = ( [ ρ_ee(t) ρ_eg(t); ρ_eg^*(t) 1-ρ_ee(t) ] ),

where ρ_ee(t) = ρ_ee(0)|Γ(t)|^2 and ρ_eg(t) = ρ_eg(0)Γ(t), with

Γ(t) = L^-1[Υ(p)], Υ(p) = [2p(p+γ) + Θγ] / [2(p^2+Ω^2)(p+γ) + pΘγ],

where L^-1 denotes the inverse Laplace transform.
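Rather than inverting Υ(p) analytically, Γ(t) can be obtained numerically by integrating the amplitude equations of the Methods section, with the exponential memory kernel traded for the auxiliary variable z(t) = ∫_0^t e^-γ(t-s) c(s) ds; this equivalence follows directly from the Laplace-space form above. The sketch below is illustrative, not the authors' code; the parameter values match those quoted for the figures.

```python
import numpy as np
from scipy.integrate import solve_ivp

def gamma_t(t_max, Omega, Theta, gamma, n=400):
    """Gamma(t) from b' = -i*Omega*c, c' = -i*Omega*b - (Theta*gamma/2)*z,
    z' = c - gamma*z, with b(0) = 1 and c(0) = z(0) = 0; Gamma(t) = b(t)."""
    def rhs(t, y):
        b, c, z = y
        return [-1j * Omega * c,
                -1j * Omega * b - 0.5 * Theta * gamma * z,
                c - gamma * z]
    ts = np.linspace(0.0, t_max, n)
    sol = solve_ivp(rhs, (0.0, t_max), [1.0 + 0j, 0.0j, 0.0j],
                    t_eval=ts, rtol=1e-10, atol=1e-12)
    return ts, sol.y[0]

Omega = Theta = gamma = np.pi * 1e6   # Hz, the values used in the figures
ts, G = gamma_t(5e-6, Omega, Theta, gamma)
print(abs(G[-1]))                     # |Gamma| at the final time
```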
EUR under a reservoir with memory. Assume the initial state of the atom to be an arbitrary pure state represented by |Ψ_in(θ,ϕ)⟩ = cos(θ)|e⟩ + sin(θ) e^iϕ|g⟩, with θ ∈ [0,π/2] and ϕ ∈ [0,π]. A Markovian evolution can always be represented by a dynamical semigroup of completely positive and trace-preserving maps. These properties guarantee the contractiveness of the trace distance

D(ρ_1(t),ρ_2(t)) = (1/2) Tr|ρ_1(t) - ρ_2(t)|.

In Eq. (<ref>), |χ| = √(χ^†χ), and ρ_1 and ρ_2 are two arbitrary states. Note that a Markovian process is unable to increase D(ρ_1,ρ_2) at any time step; in other words, a Markovian process either decreases or maintains the trace distance. Essentially, the reduction of the trace distance is indicative of a reduction in the distinguishability between the two states; this can be interpreted as an outflow of information from the qubit subsystem to the environment. Accordingly, an increase of the trace distance can be understood as a backflow of information into the atomic system of interest, which is characteristic of non-Markovian evolution. Hence, the violation of the contractiveness of the trace distance signifies the onset of non-Markovian dynamics in the system. To be explicit, the non-Markovianity <cit.> of a system can be measured by

N = max_{ρ_1(0),ρ_2(0)} ∫_{σ>0} dt σ(t,ρ_1(0),ρ_2(0)),

where σ(t,ρ_1(0),ρ_2(0)) = (d/dt)D(ρ_1(t),ρ_2(t)) is the rate of change of the trace distance as expressed by Eq. (<ref>). To clearly display the evolution of an atomic system under the reservoir with memory, we may utilize an optimal pair of states, ρ_1(0) = |+⟩⟨+| and ρ_2(0) = |-⟩⟨-|, as the two initial states, where |±⟩ = (|e⟩ ± |g⟩)/√(2), as verified by previous works <cit.>. Thereby, after some calculations, the trace distance can be derived as

D(ρ_1(t),ρ_2(t)) = |Γ(t)|,

where Γ(t) is given by Eq. (<ref>) and satisfies -1 ≤ Γ(t) ≤ 1. Incidentally, henceforth the abbreviation TD shall be used for the trace distance D(ρ_1(t),ρ_2(t)) calculated for the two optimal initial states {|+⟩⟨+|, |-⟩⟨-|}. In this case, a necessary and sufficient condition for a Markovian evolution is that |Γ(t)| be a monotonically decreasing function (i.e., d/dt|Γ(t)| ≤ 0, giving N = 0); correspondingly, a necessary and sufficient condition for a non-Markovian evolution is that |Γ(t)| not be monotonically decreasing (i.e., N > 0).

Here we employ a pair of Pauli observables, σ̂_x and σ̂_z, as the incompatible measurements. These two matrices are also conventionally used to describe spin-1/2 observables. Each matrix yields the eigenvalues ±1, with eigenstates |±X⟩ = (|e⟩ ± |g⟩)/√(2) and |±Z⟩ = {|e⟩, |g⟩}. For these two Pauli operators, the uncertainty of measuring the two observables can be quantified by the entropic sum

S_x,z := S^ρ(σ̂_x) + S^ρ(σ̂_z).

To illustrate this, in Fig. <ref> we plot the uncertainty and the trace distance as functions of time t for the initial state constructed with θ = π/4 and ϕ = π/8, for the case Ω = Θ = π×10^6 Hz. As shown in Fig. <ref>, the TD decreases initially and then oscillates periodically, but eventually tends to zero in the long-time limit. This can be interpreted as an indicator of the system being non-Markovian; in this case, the information stored in the atom can not only flow out but can also flow back. That is to say, the information will not only be lost to the environment but may also be recovered to some extent.
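Given Γ(t), for instance the ts and G arrays from the previous sketch, both the entropic sum S_x,z and the non-Markovianity N are a few lines of post-processing. The sketch below is, again, only an illustration of the definitions above.

```python
import numpy as np

def shannon(p):
    p = p[p > 1e-15]
    return float(-np.sum(p * np.log2(p)))

def entropic_sum(rho_ee0, rho_eg0, g):
    """S(sigma_x) + S(sigma_z) for the reduced state with
    rho_ee(t) = rho_ee(0)|Gamma|^2 and rho_eg(t) = rho_eg(0)Gamma."""
    pee = rho_ee0 * abs(g) ** 2                    # sigma_z outcome |e>
    px = 0.5 * (1.0 + 2.0 * (rho_eg0 * g).real)    # sigma_x outcome |+>
    return (shannon(np.array([pee, 1.0 - pee]))
            + shannon(np.array([px, 1.0 - px])))

def non_markovianity(G):
    """N for the optimal pair {|+><+|, |-><-|}: the summed positive
    increments of the trace distance D(t) = |Gamma(t)|."""
    dD = np.diff(np.abs(G))
    return float(np.sum(dD[dD > 0.0]))

theta, phi = np.pi / 4, np.pi / 8                  # initial state of the text
rho_ee0 = np.cos(theta) ** 2
rho_eg0 = np.cos(theta) * np.sin(theta) * np.exp(-1j * phi)
print(entropic_sum(rho_ee0, rho_eg0, G[-1]), non_markovianity(G))
```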
Such recovery is indicative of the capacity of information to flow bidirectionally between the atom and the reservoir via the cavity. Eventually the entire system becomes dynamically balanced, which drives the qubit subsystem to an asymptotic steady state. Notably, in the non-Markovian regime, the peak values of the TD gradually become smaller with increasing time. This reduction of the peak values of the TD implies that the back-flowing information is always less than the outflowing information, owing to dissipation. To clarify how the system evolves with fixed θ, in Fig. <ref> we plot N (representative of the system's non-Markovian character) as a function of γ/Ω for different values of Θ/Ω. From this one can infer that there are two main factors which influence the non-Markovianity of the system: 1) the ratio γ/Ω; and 2) the value of γ itself, which sets the correlation time τ of the structured reservoir. Specifically, a stronger coupling strength Ω between atom and cavity can lead to a greater non-Markovian character for the atomic system; conversely, smaller values of γ (i.e., longer correlation times τ) facilitate greater non-Markovianity.

We now turn to the question of how the noise affects the uncertainty. Intuitively, the uncertainty should become larger when the atomic subsystem moves from a pure state to a mixed one. We plot the evolution of the measurement uncertainty with respect to time in Figs. <ref>(a) and <ref>(b), with γ = 1000Ω and γ = Ω, respectively. One can infer the following. (1) In the short-time regime, the TD of the atom decreases monotonically, while the uncertainty initially increases and then decreases. Intuitively, the system degrades while the TD decreases, and thus the uncertainty ought to increase monotonically; yet this disagrees with the results displayed in Figs. <ref>(a) and <ref>(b). (2) The uncertainty initially increases and then shows a quasi-periodic oscillation which shrinks to the lower bound (B_CP) of the optimal uncertainty relation; the minimal value of the uncertainty is B_CP ≡ 1, as c = c̃ = 1/2 for our choice of incompatible measurements (σ_x and σ_z). That is to say, the uncertainty relation for the two-component measurement, when coupled with a structured reservoir in the presence of quantum memory, never violates any previously suggested form of the uncertainty relation. This result certifies that the EUR, as previously proposed, is applicable both in the presence and in the absence of noise. (3) After the first minimum of the TD, the frequency of the uncertainty oscillation is the same as that of the TD. This shows that the fluctuation of the uncertainty is not synchronized with the change of the atomic TD in the short-time limit, yet is synchronized with the TD after the first minimum of the distinguishability. (4) A smaller value of γ leads to a stronger non-Markovian character. Stated otherwise, longer correlation times τ of the reservoir are responsible for non-Markovianity in such a system.

To better understand the dynamics of the entropic uncertainty in the current model, we introduce the purity of a state, expressed as

P = Tr(ρ^2).

We plot the purity and the uncertainty as functions of time in Fig. <ref> with Ω = γ = π×10^6 Hz, for an initial state constructed with θ = π/4 and ϕ = π/8. We have set Θ/Ω = 0.5 and Θ/Ω = 5 in Figs. <ref>(a) and <ref>(b), respectively. From Figs. <ref>(a) and <ref>(b), one can infer the following. (1) The ratio Θ/Ω is considerably effective at generating systemic non-Markovianity.
To be explicit, a stronger coupling strength Ω between the atom and the cavity is responsible for non-Markovianity, while a stronger coupling strength Θ between the reservoir and the cavity pushes the dynamics toward Markovianity. This can be interpreted as the cavity merely being another sub-environment in addition to the structured reservoir. With this in mind, one can say that both the cavity and the reservoir (which can be regarded together as the total environment) can affect the non-Markovianity of the atomic system. (2) The uncertainty is fully anti-correlated with the purity of the qubit, which is a very interesting result and is consistent with previous claims in <cit.>. This implies that the uncertainty will increase correspondingly while the purity decreases, and vice versa.

EUR under a memoryless reservoir. We shall next consider the other limiting condition: that the reservoir is memoryless, i.e. τ = 0 (γ → ∞). In this case, the cavity's presence is solely responsible for the non-Markovian character, and the correlation time is zero. By taking γ → ∞, Γ(t) in Eq. (<ref>) reduces to

Γ(t) = e^-Θt/4 [(Θ/λ) sinh(λt/4) + cosh(λt/4)],

where λ = √(Θ^2 - 16Ω^2). This expression is in agreement with the results presented in <cit.>, apart from a difference in units. This coincidence is attributable to the fact that the dynamics of a single qubit coupled to a vacuum reservoir with a Lorentzian spectrum can be simulated by a pseudomode approach with a memoryless reservoir <cit.>. Two distinct dynamical regimes are identified, separated by a transition at the critical condition Ω_cr = Θ/4 <cit.>. In the weak-coupling regime, Ω < Ω_cr, one can easily determine that the dynamics are Markovian, and the TD for the optimal pair {|+⟩⟨+|, |-⟩⟨-|} decreases as Γ(t) decreases monotonically. In the strong-coupling regime, Ω > Ω_cr, the evolution is non-Markovian and Γ(t) oscillates between positive and negative values.

In what follows, we discuss how the coupling constants Ω and Θ influence the value of the uncertainty associated with the measurement. As before, we employ the observable pair σ̂_x and σ̂_z as the pair of incompatible measurements. Let us first consider the variation of the uncertainty and the TD of the evolving atomic state with respect to Ωt. As shown in Fig. <ref>(a), with fixed Θ the TD decreases at first and then oscillates periodically when Θ/Ω = 0.5 or 1. This can be interpreted as information not only flowing out of the atom but also flowing back into the atom when Ω is sufficiently large; hence the evolution of the atom is non-Markovian. A relatively small ratio Ω/Θ indicates that the qubit is losing information at a far slower rate than the evolution of the environment; therefore, backflow of information does not occur and the environment's evolution is not appreciably interrupted. When the evolution is Markovian, Ω < Ω_cr = Θ/4, the dominant effect is information outflow from the atomic system into the environment, and thus the TD is gradually reduced. We plot the change of non-Markovianity with respect to Ω/Θ in Fig. <ref>(b). From Fig. <ref>(b), the non-Markovianity N is zero when Ω/Θ < 0.25, as the evolution of the qubit is Markovian in this situation. N is non-zero when Ω/Θ > 0.25, implying that the evolution is non-Markovian. During a non-Markovian evolution with Ω > Ω_cr, information not only flows out but also flows back as time increases.
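For reference, the closed-form memoryless Γ(t) above can be evaluated for both regimes at once by treating λ as a complex number, since for Ω > Ω_cr the hyperbolic functions turn into oscillating trigonometric ones. A minimal, illustrative sketch follows (the degenerate point Ω = Θ/4, where λ = 0, would need the separate λ → 0 limit):

```python
import numpy as np

def gamma_memoryless(t, Omega, Theta):
    """Gamma(t) = e^{-Theta t/4}[cosh(lam t/4) + (Theta/lam) sinh(lam t/4)],
    lam = sqrt(Theta^2 - 16 Omega^2), taken complex when Omega > Theta/4."""
    lam = np.sqrt(complex(Theta**2 - 16.0 * Omega**2))
    g = np.exp(-Theta * t / 4.0) * (np.cosh(lam * t / 4.0)
                                    + (Theta / lam) * np.sinh(lam * t / 4.0))
    return g.real   # the imaginary part vanishes identically

t = np.linspace(0.0, 1e-5, 5)
print(gamma_memoryless(t, np.pi * 1e6, 0.5 * np.pi * 1e6))  # Theta/Omega = 0.5
```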
Notably, in the non-Markovian regime, the maximum value of the TD is always below unity; this limit is largely due to dissipation effects. Additionally, in the short-time regime the entropic uncertainty increases while the TD of the atomic system decreases, since the state becomes increasingly mixed as the system undergoes dissipation. However, from Figs. <ref> and <ref> one can see that with the decrease of the TD, the measurement uncertainty first increases and then decreases in a relatively short-time regime. Furthermore, the magnitude of the entropic sum undergoes periodic oscillations associated with the oscillating TD, and shrinks to the lower bound of the EUR (B_CP) in the long-time regime. This indicates that the entropic uncertainty is not synchronous with the evolution of the atomic system at the initial stage of evolution; it becomes increasingly synchronous with that evolution after the TD reaches its first minimum. We note that the fluctuations of both the TD and the uncertainty become smaller as Θ grows larger, i.e. a stronger coupling constant between the cavity and the reservoir reduces the disturbance of the entropic uncertainty. This implies that the cavity-reservoir coupling strength Θ may dramatically influence the entropic sum. Furthermore, we plot the purity as a function of Ωt for different coupling-strength ratios Θ/Ω in Fig. <ref>, with the initial state of the qubit system generated with θ = π/3 and ϕ = π/6. From Fig. <ref>, it is obvious that the uncertainty is always anti-correlated with the purity of the system, which is entirely consistent with our previous statement. Through the above analysis, we can conclude that a stronger Ω-coupling can affect the reservoir and can result in backflow of information to the atom, leading to a periodic evolution of the uncertainty.

We also explore the relation between the initial state and the entropic sum in Fig. <ref>, where one finds that the value of S_x,z is symmetric about ϕ = π/2 and decreases with an increase in θ for fixed ϕ. Specifically, S_x,z reaches a peak at θ = 0 and reaches the bound B_CP at θ = π/2. This implies that, in the current model, the excited state of the atom is more sensitive to the measurement uncertainty than the ground state.

Reducing the uncertainty via weak measurement. A novel idea has recently been proposed to protect a state from decoherence by using quantum partially-collapsing measurements, i.e. weak measurement reversals (WMR) <cit.>. The WMR procedure is described as

ρ_ee(t) → [(1-m)/C] ρ_ee(t),
ρ_eg(t) → [√(1-m)/C] ρ_eg(t),
ρ_ge(t) → [√(1-m)/C] ρ_ge(t),
ρ_gg(t) → (1/C) ρ_gg(t).

Within the above, the measurement strength m satisfies 0 ≤ m ≤ 1, and C = (1-m)ρ_ee(t) + ρ_gg(t) is the normalization coefficient of the time-dependent state. The WMR essentially makes a post-selection that removes the result of the qubit transition |e⟩ → |g⟩; WMR can be implemented with an ideal detector monitoring the environment. This is also referred to as null-result WMR because the detector does not report any signal. In a WMR, complete collapse to an eigenstate does not occur, and thus the qubit continues its evolution. Decoherence can be largely suppressed by uncollapsing the quantum state back toward the excited state. It is well known that the amount of uncertainty is crucial for quantum precision measurement, and one always expects a smaller measurement uncertainty when seeking exact measurements.
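The WMR map above is equivalent to applying the single null-result Kraus operator K = diag(√(1-m), 1) and renormalizing, which makes it a one-liner to apply to any evolved qubit state. A minimal, illustrative sketch, with an arbitrary assumed input state:

```python
import numpy as np

def wmr(rho, m):
    """Weak-measurement-reversal map in the {|e>, |g>} basis:
    rho -> K rho K^dag / Tr(K rho K^dag), with K = diag(sqrt(1-m), 1)."""
    K = np.diag([np.sqrt(1.0 - m), 1.0])
    out = K @ rho @ K.conj().T
    return out / np.trace(out).real

rho = np.array([[0.6, 0.3 - 0.1j],
                [0.3 + 0.1j, 0.4]])   # an assumed evolved qubit state
print(np.round(wmr(rho, 0.5), 3))
```

Recomputing S_x,z on the output state then quantifies the uncertainty reduction as a function of m.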
Motivated by the desire for a smaller uncertainty, we explore a methodology to reduce the uncertainty by using an appropriate WMR. For clarity, we plot the relationship between the measurement parameter m and the entropic sum in Fig. <ref>, with θ = π/3 and ϕ = π/6. From Fig. <ref>, one can readily infer that the uncertainty decreases as the measurement strength m increases. Therefore, the WMR is capable of suppressing the decay of the atomic state, and thus of largely reducing the entropic uncertainty during the crossover from Markovianity to non-Markovianity. Furthermore, we investigate the relation between the entropic uncertainty and the coupling strengths Θ and Ω in Fig. <ref> for θ = π/5 and ϕ = π/3, both with and without weak measurement (m = 0.5 and m = 0, respectively). It is obvious that the maximal value of the uncertainty in the case m = 0.5 is smaller than that for m = 0, which indicates that the WMR can efficiently reduce the uncertainty of measuring a pair of incompatible observables. Furthermore, Figs. <ref>(a) and (b) show that the uncertainty varies periodically with Ωt, consistent with the previously obtained results.

Conclusion

Herein, we have investigated how a bosonic environment influences the uncertainty of two incompatible measurements on an atom-cavity coupled system during the crossover between Markovianity and non-Markovianity. Notably, in the presence of memory effects the evolution of the atomic system is determined by the couplings to the cavity and the structured reservoir. The uncertainty is characterized by fluctuations which are not synchronized with the change of the systemic state, tending to the lower bound in the long-time limit. In the absence of memory effects, we numerically verified that the amount of entropic uncertainty is correlated with the coupling strengths of the atom-cavity and cavity-reservoir interactions, which greatly influence the uncertainty and its dynamic behavior. A relatively strong coupling between the cavity and the structured reservoir can provide a natural reduction of the overall uncertainty. Additionally, we conclude that a stronger atom-cavity coupling results in information backflow to the atom, manifesting itself as an oscillation in the uncertainty. Explicitly, the uncertainty oscillates toward the lower bound of the EUR when Ω > Ω_cr; the uncertainty decreases monotonically and shrinks to the lower bound in the long-time regime when Ω < Ω_cr. We have also verified that the measurement uncertainty is anti-correlated with the purity of the evolving qubit state, whether the system is Markovian or non-Markovian. Notably, we propose an efficient method to reduce the uncertainty for a pair of observables in such a system via post-selected weak measurement reversal. Therefore, our investigation may shed light on precision measurement for a system coupled to a multi-degree-of-freedom environment possessing either Markovian or non-Markovian character.

Methods

Here, we deal with the reduced dynamics of the atomic subsystem. We assume that both the cavity and the environmental reservoir are initially in their vacuum states. The model can then be solved analytically and thus fully captures the features of the atomic subsystem.
In the one-excitation subspace, the total state can generally be written as <cit.>

|Ψ(t)⟩ = a(t)|g,0,0_j⟩ + b(t)|e,0,0_j⟩ + c(t)|g,1,0_j⟩ + ∑_j h_j(t)|g,0,1_j⟩,

where |0⟩ and |1⟩ are the vacuum and single-photon states of the cavity, while |0_j⟩ and |1_j⟩ represent no excitation and one excitation in the jth mode of the reservoir. In what follows, we derive the coefficients of the state of the composite system. Substituting Eq. (<ref>) into the Schrödinger equation

i(d/dt)|Ψ(t)⟩ = ℋ_I|Ψ(t)⟩

yields the following formulae:

a(t) = a(0),
(d/dt) b(t) = -iΩ c(t),
(d/dt) h_j(t) = -iΔ_j e^iδ_j t c(t),
(d/dt) c(t) = -iΩ b(t) - i∑_j Δ_j e^-iδ_j t h_j(t).

Combining the initial conditions c(0) = h_j(0) = 0 with the correlation function α(t,s) = ∑_j |Δ_j|^2 e^-iδ_j(t-s) = (Θγ/2) e^-γ|t-s|, one can obtain the atomic dynamics exactly by tracing out both the cavity and the reservoir, i.e. ρ = Tr_C,R[|Ψ(t)⟩⟨Ψ(t)|]. In this way, one can derive the desired reduced density matrix of the atomic state, as given in Eq. (<ref>).

Heisenberg, W. Über den anschaulichen Inhalt der quantentheoretischen Kinematik und Mechanik. Z. Phys. 43, 172-198 (1927). Kennard, E. H. Zur Quantenmechanik einfacher Bewegungstypen. Z. Phys. 44, 326-352 (1927). Robertson, H. P. The uncertainty principle. Phys. Rev. 34, 163-164 (1929). Deutsch, D. Uncertainty in quantum measurements. Phys. Rev. Lett. 50, 631-633 (1983). Kraus, K. Complementary observables and uncertainty relations. Phys. Rev. D 35, 3070 (1987). Maassen, H. & Uffink, J. B. M. Generalized entropic uncertainty relations. Phys. Rev. Lett. 60, 1103-1106 (1988). Coles, P. J. & Piani, M. Improved entropic uncertainty relations and information exclusion relations. Phys. Rev. A 89, 022112 (2014). Li, C. F., Xu, J. X., Xu, X. Y., Li, K. & Guo, G. C. Experimental investigation of the entanglement-assisted entropic uncertainty principle. Nat. Phys. 7, 752-756 (2011). Prevedel, R., Hamel, D. R., Colbeck, R., Fisher, K. & Resch, K. J. Experimental investigation of the uncertainty principle in the presence of quantum memory. Nat. Phys. 7, 757-761 (2011). Berta, M., Christandl, M., Colbeck, R., Renes, J. M. & Renner, R. The uncertainty principle in the presence of quantum memory. Nat. Phys. 6, 659-662 (2010). Zou, H. M. et al. The quantum entropic uncertainty relation and entanglement witness in the two-atom system coupling with the non-Markovian environments. Phys. Scr. 89, 115101 (2014). Hu, M. L. & Fan, H. Quantum-memory-assisted entropic uncertainty principle, teleportation, and entanglement witness in structured reservoirs. Phys. Rev. A 86, 032338 (2012). Hu, M. L. & Fan, H. Competition between quantum correlations in the quantum-memory-assisted entropic uncertainty relation. Phys. Rev. A 87, 022314 (2013). Hu, M. L. & Fan, H. Upper bound and shareability of quantum discord based on entropic uncertainty relations. Phys. Rev. A 88, 014105 (2013). Mondal, D. & Pati, A. K. Quantum speed limit for mixed states using an experimentally realizable metric. Phys. Lett. A 380, 1395-1400 (2016). Pires, D. P., Cianciaruso, M., Céleri, L. C., Adesso, G. & Soares-Pinto, D. O. Generalized geometric quantum speed limits. Phys. Rev. X 6, 021031 (2016). Schneeloch, J., Broadbent, C. J., Walborn, S. P., Cavalcanti, E. G. & Howell, J. C. Einstein-Podolsky-Rosen steering inequalities from entropic uncertainty relations. Phys. Rev. A 87, 062103 (2013). Pati, A.
K., Wilde, M. M., Devi, A. R. U., Rajagopal, A. K. & Sudha. Quantum discord and classical correlation can tighten the uncertainty principle in the presence of quantum memory. Phys. Rev. A 86, 042105 (2012). Adabi, F., Salimi, S. & Haseli, S. Tightening the entropic uncertainty bound in the presence of quantum memory. Phys. Rev. A 93, 062123 (2016). Xiao, Y. L. et al. Strong entropic uncertainty relations for multiple measurements. Phys. Rev. A 94, 042125 (2016). Xu, Z. Y., Yang, W. L. & Feng, M. Quantum-memory-assisted entropic uncertainty relation under noise. Phys. Rev. A 86, 012113 (2012). Huang, A. J., Shi, J. D., Wang, D. & Ye, L. Steering quantum-memory-assisted entropic uncertainty under unital and nonunital noises via filtering operations. Quantum Inf. Process. 16, 46 (2017). Zhang, J., Zhang, Y. & Yu, C. S. Entropic uncertainty relation and information exclusion relation for multiple measurements in the presence of quantum memory. Sci. Rep. 5, 11701 (2015). Liu, S., Mu, L. Z. & Fan, H. Entropic uncertainty relations for multiple measurements. Phys. Rev. A 91, 042133 (2015). Zhang, Y. J., Han, W., Fan, H. & Xia, Y. J. Enhancing entanglement trapping by weak measurement and quantum measurement reversal. Ann. Phys. 354, 203-212 (2015). Sun, Q., Al-Amri, M., Davidovich, L. & Zubairy, M. S. Reversing entanglement change by a weak measurement. Phys. Rev. A 82, 052323 (2010). Karpat, G., Piilo, J. & Maniscalco, S. Controlling entropic uncertainty bound through memory effects. EPL 111, 50006 (2015). Ma, T. T., Chen, Y. S., Chen, T., Hedemann, S. R. & Yu, T. Crossover between non-Markovian and Markovian dynamics induced by a hierarchical environment. Phys. Rev. A 90, 042108 (2014). Madsen, K. H. et al. Observation of non-Markovian dynamics of a single quantum dot in a micropillar cavity. Phys. Rev. Lett. 106, 233601 (2011). Addis, C., Karpat, G., Macchiavello, C. & Maniscalco, S. Dynamical memory effects in correlated quantum channels. Phys. Rev. A 94, 032121 (2016). He, Z., Zou, J., Li, L. & Shao, B. Effective method of calculating the non-Markovianity N for single-channel open systems. Phys. Rev. A 83, 012108 (2011). Man, Z. X., Nguyen, B. A. & Xia, Y. J. Non-Markovianity of a two-level system transversally coupled to multiple bosonic reservoirs. Phys. Rev. A 90, 062104 (2014). Vacchini, B. & Breuer, H. P. Exact master equations for the non-Markovian decay of a qubit. Phys. Rev. A 81, 042103 (2010). Mazzola, L., Maniscalco, S., Piilo, J., Suominen, K. A. & Garraway, B. M. Pseudomodes as an effective description of memory: Non-Markovian dynamics of two-state systems in structured reservoirs. Phys. Rev. A 80, 012104 (2009). Jing, J. & Yu, T. Non-Markovian relaxation of a three-level system: quantum trajectory approach. Phys. Rev. Lett. 105, 240403 (2010). Laine, E. M., Piilo, J. & Breuer, H. P. Measure for the non-Markovianity of quantum processes. Phys. Rev. A 81, 062115 (2010). Wang, S. C., Yu, Z. W., Zou, W. J. & Wang, X. B. Protecting quantum states from decoherence of finite temperature using weak measurement. Phys. Rev. A 89, 022318 (2014). Xiao, X. & Li, Y. L. Protecting qutrit-qutrit entanglement by weak measurement and reversal. Eur. Phys. J. D 67, 204 (2013). Aharonov, Y., Albert, D. Z. & Vaidman, L. How the result of a measurement of a component of the spin of a spin-1/2 particle can turn out to be 100.
Phys. Rev. Lett. 60, 1351 (1988). Breuer, H. P. & Petruccione, F. The Theory of Open Quantum Systems (Oxford University Press, Oxford, 2002). Acknowledgements This work was supported by the National Natural Science Foundation of China (Grant Nos. 61601002, 61275119, and 11575001), Anhui Provincial Natural Science Foundation (Grant No. 1508085QF139), the fund of the National Laboratory for Infrared Physics (Grant No. M201307), and is a project from National Science Foundation Centers for Chemical Innovation: CHE-1037992. Additional Information Competing financial interests: The authors declare no competing financial interests. | http://arxiv.org/abs/1703.08686v1 | {
"authors": [
"Dong Wang",
"Ai-Jun Huang",
"Ross D. Hoehn",
"Fei Ming",
"Wen-Yang Sun",
"Jia-Dong Shi",
"Liu Ye",
"Sabre Kais"
],
"categories": [
"quant-ph"
],
"primary_category": "quant-ph",
"published": "20170325125101",
"title": "Entropic uncertainty relations for Markovian and non-Markovian processes under a structured bosonic reservoir"
} |
Pulsar Wind Nebulae

Patrick Slane, Harvard-Smithsonian Center for Astrophysics, [email protected]

Patrick Slane December 30, 2023 =====================

The extended nebulae formed as pulsar winds expand into their surroundings provide information about the composition of the winds, the injection history from the host pulsar, and the material into which the nebulae are expanding. Observations from across the electromagnetic spectrum provide constraints on the evolution of the nebulae, the density and composition of the surrounding ejecta, the geometry of the central engines, and the long-term fate of the energetic particles produced in these systems. Such observations reveal the presence of jets and wind termination shocks, time-varying compact emission structures, shocked supernova ejecta, and newly formed dust. Here I provide a broad overview of the structure of pulsar wind nebulae, with specific examples from observations extending from the radio band to very high energy gamma-rays that demonstrate our ability to constrain the history and ultimate fate of the energy released in the spin-down of young pulsars.

§ INTRODUCTION

The explosion of a supernova triggered by the collapse of a massive star produces several solar masses of stellar ejecta expanding at ∼ 10^4 km s^-1 into surrounding circumstellar (CSM) and interstellar (ISM) material. The resulting forward shock compresses and heats the ambient gas. As the shock sweeps up material, the deceleration drives a reverse shock (RS) back into the cold ejecta, heating the metal-enhanced gas to X-ray-emitting temperatures. In many cases, though the actual fraction remains a currently-unsolved question, what remains of the collapsed core is a rapidly-spinning, highly magnetic neutron star that generates an energetic wind of particles and magnetic field confined by the surrounding ejecta.[All current evidence indicates that pulsar winds are composed of electrons and positrons, with little or no ion component. Here, and throughout, the term “particles” is used interchangeably for electrons/positrons.] The evolution of this pulsar wind nebula (PWN) is determined by the properties of the central pulsar, its host supernova remnant (SNR), and the structure of the surrounding CSM/ISM.

In discussing the structure and evolution of PWNe, it is important to distinguish two points at the outset. First, while PWNe have, in the past, sometimes been referred to as SNRs (most often as a “center-filled” variety), they are, in fact, not SNRs.
As discussed below, PWNe are created entirely by a confined magnetic wind produced by an energetic pulsar. At early times, the confining material is supernova ejecta, but at later times it can simply be the ISM. Despite being the result of a supernova explosion (as is a neutron star), we reserve the term SNR for the structure produced by the expanding supernova ejecta and its interaction with the surrounding CSM/ISM (and, indeed, an entire population of SNRs has no association with PWNe whatsoever; see Chapter “Type Ia supernovae”). Second, when describing the evolutionary phase of a PWN (or of a composite SNR, an SNR that contains a PWN), it is not necessarily the true age of the system that describes its structure. Rather, it is the dynamical age, which accounts for the fact that identical pulsars expanding into very different density distributions, for example, will evolve differently.

The outline of this paper is as follows. In Section 2 we review the basic properties of pulsars themselves, including a description of pulsar magnetospheres and the subsequent pulsar winds that form PWNe. Section 3 discusses the emission from PWNe and provides examples of the constraints that multiwavelength observations place on the determination of the system evolution. In Section 4 we investigate the different stages of evolution for a PWN, starting with its initial expansion inside an SNR and ending with the bow shock stage after the PWN escapes into the ISM. Section 5 presents a brief summary. Crucially, in the spirit of this Handbook, this paper is not intended as a literature review. A small set of examples has been selected to illustrate particular properties, and a subset of the recent theoretical literature has been summarized to provide the framework for our basic understanding of these systems. The reader is referred to more thorough PWN reviews by Gaensler & Slane (2006), Bucciantini (2011), and Kargaltsev et al. (2015), and to the many references and subsequent citations in those works, for a more detailed treatment.

§ BASIC PROPERTIES

§.§ Pulsars

Pulsars were first discovered through their radio pulsations, and their basic theory has been summarized in many places; it was quickly hypothesized that these objects are rapidly-rotating, highly-magnetic neutron stars (NSs). Observations show that the spin period P of a given pulsar increases with time, indicating a gradual decrease in rotational kinetic energy: Ė = I ΩΩ̇, where Ω = 2π/P and I is the moment of inertia of the NS (nominally I = (2/5)MR^2, where M and R are the mass and radius of the star; I ≈ 10^45 g cm^2 for M = 1.4 M_⊙ and R = 10 km). This spin-down energy loss is understood to be the result of a magnetized particle wind produced by the rotating magnetic star.
Treated as a simple rotating magnetic dipole, the energy loss rate is

Ė = -B_p^2 R^6 Ω^4 sin^2χ / (6c^3),

where B_p is the magnetic dipole strength at the pole and χ is the angle between the magnetic field and the pulsar rotation axis. Typical values for P range from ∼ 0.03 - 3 s, with period derivatives of 10^-17 - 10^-13 s s^-1 (though values outside these ranges are also observed, particularly for so-called magnetars and millisecond pulsars). This leads to inferred magnetic field strengths of order 10^11 - 10^13 G.

As the pulsar rotates, a charge-filled magnetosphere is created, with particle acceleration occurring in charge-separated gaps in regions near the polar cap or in the outer magnetosphere, which extends to the so-called light cylinder, where R_LC = c/Ω. The maximum potential generated by the rotating pulsar field, under the assumption of co-alignment of the magnetic and spin axes, is

Φ = (Ė/c)^1/2 ≈ 6 × 10^13 (Ė/10^38 erg s^-1)^1/2 V.

The minimum particle current required to sustain the charge density in the magnetosphere is

Ṅ_GJ = cΦ/e ≈ 4 × 10^33 (Ė/10^38 erg s^-1)^1/2 s^-1,

where e is the electron charge (Goldreich & Julian 1969). As the particles comprising this current are accelerated, they produce curvature radiation that initiates an electron-positron pair cascade. Based on observations of PWNe, values approaching Ṅ = 10^40 s^-1 are required to explain the radio synchrotron emission. The implied multiplicity (i.e., the number of pairs created per primary particle) of ∼ 10^5 - 10^7 appears difficult to obtain from pair production in the acceleration regions within pulsar magnetospheres (Timokhin & Harding 2015), suggesting that a relic population of low energy electrons created by some other mechanism early in the formation of the PWN may be required (e.g., Atoyan & Aharonian 1996).

§.§ Pulsar Wind Nebulae

For pulsars with a magnetic axis that is inclined relative to the rotation axis, the result of the above is a striped wind, with an alternating poloidal magnetic field component separated by a current sheet (Bogovalov 1999). The magnetization of the wind, σ, is defined as the ratio between the Poynting flux and the particle energy flux:

σ = B^2 / (4π m n γ_0 c^2),

where B, n, and γ_0 are the magnetic field, the number density of particles of mass m, and the bulk Lorentz factor in the wind, respectively. The energy density of the wind is expected to be dominated by the Poynting flux as it leaves the magnetosphere, with σ ∼ 10^4. Ultimately, the wind is confined by ambient material (slow-moving ejecta in the host SNR at early times; the ISM once the pulsar has exited the SNR), forming an expanding magnetic bubble of relativistic particles: the PWN. As the fast wind entering the nebula decelerates to meet the boundary condition imposed by the much slower expansion of the PWN, a wind termination shock (TS) is formed at a radius R_TS where the ram pressure of the wind is balanced by the pressure within the nebula:

R_TS = √(Ė/(4πω c P_PWN)),

where ω is the equivalent filling factor for an isotropic wind and P_PWN is the total pressure in the nebula. The geometry of the pulsar system results in an axisymmetric wind (Lyubarsky 2002), forming a torus-like structure in the equatorial plane, along with collimated jets along the rotation axis.
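The spin-down relations above are straightforward to evaluate. The following minimal Python sketch works in Gaussian units and assumes an orthogonal rotator (sin χ = 1) with the fiducial I = 10^45 g cm^2 and R = 10 km; the Crab-like P and Ṗ inputs are illustrative assumptions, not values quoted in this chapter.

```python
import numpy as np

# Gaussian (CGS) constants and fiducial neutron-star parameters
c = 2.998e10    # speed of light [cm/s]
e = 4.803e-10   # electron charge [esu]
I = 1.0e45      # moment of inertia [g cm^2]
R = 1.0e6       # stellar radius [cm]

def spin_down(P, Pdot, chi=np.pi / 2):
    """Edot, polar dipole field, wind potential, and Goldreich-Julian
    particle flux from the relations above (orthogonal rotator assumed)."""
    Omega = 2.0 * np.pi / P
    Edot = 4.0 * np.pi**2 * I * Pdot / P**3            # |I Omega Omegadot|
    Bp = np.sqrt(6.0 * c**3 * Edot) / (R**3 * Omega**2 * np.sin(chi))
    Phi = np.sqrt(Edot / c)                            # maximum potential
    Ndot_GJ = c * Phi / e                              # minimum current
    return Edot, Bp, Phi, Ndot_GJ

# Illustrative Crab-like inputs: P = 33 ms, Pdot = 4.2e-13 s/s
for val in spin_down(P=0.033, Pdot=4.2e-13):
    print(f"{val:.2e}")
```

For these inputs the sketch returns Ė ≈ 4.5 × 10^38 erg s^-1 and Ṅ_GJ within a factor of order unity of the 4 × 10^33 Ė_38^1/2 s^-1 scaling quoted above.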
The higher magnetization at low latitudes confines the expansion there to a higher degree, resulting in an elongated shape along the pulsar spin axis for the large-scale nebula (Begelman & Li 1992, van der Swaluw 2003). This structure is evident in Figure <ref> (left), where X-ray and optical observations of the Crab Nebula clearly reveal the jet/torus structure surrounded by the elongated wind nebula bounded by filaments of swept-up ejecta. The innermost ring corresponds to the TS, and its radius is well described by Eqn. 6. MHD models of the jet/torus structure in pulsar winds reproduce many of the observed details of these systems (see Bucciantini 2011 for a review).

As discussed in Section 3, the relativistic particles in the PWN produce synchrotron radiation extending from the radio to the X-ray band, and upscatter ambient low-energy photons (from the cosmic microwave background, the stellar radiation field, and emission from ambient dust), producing inverse-Compton (IC) emission in the γ-ray band. Curiously, models of the dynamical structure and emission properties of the Crab Nebula require σ ∼ 10^-3 just upstream of the termination shock (Kennel & Coroniti 1984). Thus, somewhere between the pulsar magnetosphere and the termination shock, the wind converts from being Poynting-dominated to being particle-dominated. Magnetic reconnection in the current sheet has been suggested as a mechanism for dissipating the magnetic field, transferring its energy into that of the particles (e.g., Lyubarsky 2003). Recent particle-in-cell simulations of relativistic shocks show that shock compression of the wind flow can drive regions of opposing magnetic fields together, causing the reconnection (Sironi & Spitkovsky 2011). This process can result in a broad particle spectrum, with a power-law-like shape dN/dE ∝ E^-p with p ∼ 1.5. High energy particles in the equatorial regions can diffuse upstream of the shock, generating turbulence that supports acceleration of subsequent particles to high energies through a Fermi-like process, potentially creating a steeper high-energy tail with p ∼ 2.5. The energy range spanned by the flat spectral region, and the maximum energy to which the steep spectrum extends, depend on properties of the striped wind that change with latitude, suggesting that the integrated particle injection spectrum may be quite complex (e.g., Slane et al. 2008). However, the maximum Lorentz factor that appears achievable is limited by the requirement that the diffusion length of the particles be smaller than the termination shock radius; γ_max ∼ 8.3 × 10^6 Ė_38^3/4 Ṅ_40^-1/2. This is insufficient to explain the observed X-ray synchrotron emission in PWNe, suggesting that an alternative picture for acceleration of the highest energy particles in PWNe is required (Sironi et al. 2013).

§ RADIATION FROM PWNE

The emission from PWNe can be divided into two broad categories: that originating from the relativistic particles within the nebula, and that produced by material that has been swept up by the nebula.
§.§ Emission from the nebula

The emission from the relativistic particles is a combination of synchrotron radiation and inverse-Compton (IC) radiation associated with the upscattering of ambient photons. If we characterize the injected spectrum as a power law, Q(E_e,t) = Q_0(t)(E_e/E_0)^-γ, the integrated particle energy is then ∫ Q(E_e,t) E_e dE_e = (1 + σ)^-1 Ė(t). The resulting emission spectrum is found by integrating the electron spectrum over the emissivity function for synchrotron and IC radiation using, respectively, the nebular magnetic field and spectral density of the ambient photon field. As noted above, the low energy particles in PWNe actually appear to have a flatter spectrum, leading to a flat radio spectrum (α∼ 0.0 - 0.3, where S_ν∝ν^-α). [Note: In X-rays, it is conventional to express the photon spectrum dN_γ/dE ∝ E^-Γ, where Γ = α + 1.] The spectrum generally steepens near the mm or optical band. For young PWNe with very high magnetic fields, up-scattering of the high energy synchrotron spectrum can produce γ-ray photons through so-called synchrotron self-Compton emission. The resulting spectrum thus depends on the age, magnetic field, and pulsar spin-down power (e.g., Torres et al. 2013). As illustrated in Figure <ref>, the build-up of particles in the nebula results in an IC spectrum that increases with time. The synchrotron flux decreases with time due to the steadily decreasing magnetic field strength associated with the adiabatic expansion of the PWN (see Section 4). This behavior is reversed upon arrival of the SNR RS (not shown in Figure 2), following which the nebula is compressed and the magnetic field strength increases dramatically, inducing an episode of rapid synchrotron losses. Upon re-expanding, however, IC emission again begins to increase relative to the synchrotron emission. At the latest phases of evolution, when the nebula is very large and the magnetic field is low, the IC emission can provide the most easily-detected signature. As described below, such behavior is seen for a number of PWNe that have been identified based on their emission at TeV energies, and for which only faint synchrotron emission near the associated pulsars is seen in the X-ray band. For electrons with energy E_e,100, in units of 100 TeV, the typical energy of synchrotron photons is E_γ^s ≈ 2.2 E_e,100^2 B_10 keV, where B_10 is the magnetic field strength in units of 10 μG. The associated synchrotron lifetime for the particles is τ_syn≈ 820 E_e,100^-1 B_10^-2 yr, which results in a break in the photon spectrum at E_γ,br≈ 1.4 B_10^-3 t_ kyr^-2 keV for electrons injected over a lifetime t_ kyr. Beyond this energy, the photon power law spectrum steepens by ΔΓ = 0.5. For young PWNe, with large magnetic fields, the result is a steepening of the X-ray spectrum with radius due to synchrotron burn-off of the higher energy particles on timescales shorter than their transit time to the outer portions of the PWN. This is readily observed in young systems such as G21.5-0.9 and 3C 58 (see below), although the spectral index actually flattens more slowly than expected unless rapid particle diffusion is in effect (Tang & Chevalier 2012). For γ-rays produced by IC-scattering off of the CMB, E_γ^IC≈ 0.32 E_e,10^2 TeV, where E_e,10 = E_e/(10 TeV). Note that while the synchrotron energy depends upon both the electron energy and the magnetic field strength, the IC energy (from CMB scattering) depends only on the particle energy.
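A short sketch collecting these approximate relations (the example inputs are illustrative, not fitted values):

```python
# Numerical companions to the approximate relations quoted above.
def synch_photon_kev(e_e_100, b_10):
    """E_gamma ~ 2.2 * (E_e/100 TeV)^2 * (B/10 uG) keV."""
    return 2.2 * e_e_100**2 * b_10

def synch_lifetime_yr(e_e_100, b_10):
    """tau_syn ~ 820 * (E_e/100 TeV)^-1 * (B/10 uG)^-2 yr."""
    return 820.0 / (e_e_100 * b_10**2)

def synch_break_kev(b_10, t_kyr):
    """Cooling break: E_br ~ 1.4 * B_10^-3 * t_kyr^-2 keV."""
    return 1.4 / (b_10**3 * t_kyr**2)

def ic_cmb_photon_tev(e_e_10):
    """IC off the CMB: E_gamma ~ 0.32 * (E_e/10 TeV)^2 TeV."""
    return 0.32 * e_e_10**2

# Example: a 100 TeV electron in an assumed 10 uG nebula of age 2 kyr
print(synch_photon_kev(1.0, 1.0))   # ~2.2 keV
print(synch_lifetime_yr(1.0, 1.0))  # ~820 yr
print(synch_break_kev(1.0, 2.0))    # ~0.35 keV
print(ic_cmb_photon_tev(10.0))      # ~32 TeV for the same 100 TeV electron
```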
Modeling of both emission components for a particular PWN thus allows determination of the magnetic field strength. Because of the short synchrotron lifetime for the X-ray emitting particles, the X-ray luminosity is related to the current spin-down power of the pulsar. From a variety of studies, L_x ∼ 10^-3Ė (e.g., Possenti et al. 2002). Although flux values for individual pulsars may differ from this relationship by as much as a factor of 10, determination of the X-ray luminosity can provide a modest constraint on Ė for systems in which pulsations are not directly detected. The broadband spectrum of a PWN, along with the associated dynamical information provided by measurements of the pulsar spin properties, and the size of the PWN and its SNR, place very strong constraints on its evolution and on the spectrum of the particles injected from the pulsar. Combined with estimates of the swept-up ejecta mass, this information can be used to probe the properties of the progenitor star and to predict the long-term fate of the energetic particles in the nebula. Recent multiwavelength studies of PWNe, combined with modeling efforts of their evolution and spectra, have provided unique insights into several of these areas.

§.§ Emission from shocked ejecta

As the PWN expands into the surrounding supernova ejecta, as described below, it heats the ejecta. The resulting emission, often confined to filaments, is a combination of radiation from shocked gas and continuum emission from dust condensed from the cold ejecta in the early adiabatic expansion of the SNR. The thermal emission depends on the velocity of the PWN shock driven into the ejecta which, in turn, depends on the spin-down power of the central pulsar and the density and velocity profile of the ejecta. For slow shocks, line emission may be observed in the IR and optical bands, such as that observed from the Crab Nebula (see Chapter “Supernova of 1054 and its remnant, the Crab Nebula”), G21.5-0.9, and G54.1+0.3 (see below), while for faster shocks the emission may appear in the X-ray band, as observed in 3C 58. This line emission can provide important information on the ejecta composition and expansion velocity. The dust emission is in the form of a blackbody-like spectrum whose properties depend on the temperature, composition, and grain-size distribution of the dust. Measurements of emission from ejecta dust prior to interaction with the SNR RS (see below) are of particular importance in estimating dust formation rates in supernovae (e.g., Temim et al. 2015).

§ PWN EVOLUTION

The evolution of a PWN within the confines of its host SNR is determined by both the rate at which energy is injected by the pulsar and by the density structure of the ejecta material into which the nebula expands. The location of the pulsar itself, relative to the SNR center, depends upon any motion given to the pulsar in the form of a kick velocity during the explosion, as well as on the density distribution of the ambient medium into which the SNR expands. At the earliest times, the SNR blast wave expands freely at a speed of ∼ (5-10)×10^3 km s^-1, much higher than typical pulsar velocities of ∼ 200-1500 km s^-1. As a result, for young systems the pulsar will always be located near the SNR center. The energetic pulsar wind is injected into the SNR interior, forming a high-pressure bubble that expands supersonically into the surrounding ejecta, forming a shock. The input luminosity is generally assumed to have the form (e.g.
Pacini & Salvati 1973) Ė = Ė_0 (1 + t/τ_0)^-(n+1)/(n-1), where τ_0 ≡ P_0/((n-1)Ṗ_0) is the initial spin-down time scale of the pulsar. Here Ė_0 is the initial spin-down power, P_0 and Ṗ_0 are the initial spin period and its time derivative, and n is the so-called “braking index” of the pulsar (n = 3 for magnetic dipole spin-down). The pulsar has roughly constant energy output until a time τ_0, beyond which the output declines fairly rapidly with time. Figure <ref> illustrates the evolution of a PWN within its host SNR. The left panel shows a hydrodynamical simulation of an SNR evolving into a non-uniform medium, with a density gradient increasing from left to right. The pulsar is moving upward. The SNR forward shock (FS), RS and contact discontinuity (CD) separating the shocked CSM and shocked ejecta are identified, as is the PWN shock driven by expansion into the cold ejecta. The right panel illustrates the radial density distribution, highlighting the PWN TS as well as the SNR FS, CD, and RS.

§.§ Early Expansion

The energetic pulsar wind injected into the SNR interior forms a high-pressure bubble that drives a shock into the surrounding ejecta. The sound speed in the relativistic fluid within the PWN is sufficiently high (c_s = c/√(3)) that any pressure variations experienced during the expansion are quickly balanced within the bubble; at early stages, the pulsar thus remains located at the center of the PWN, even if the pulsar itself is moving through the inner SNR, which is often the case because pulsars can be born with high velocities (∼ 200 - 1500 km s^-1; Arzoumanian et al. 2002) due to kicks imparted in the supernova explosions. The wind is confined by the innermost slow-moving ejecta, and the PWN expansion drives a shock into these ejecta, heating them and producing thermal emission. Magnetic tension in the equatorial regions exceeds that elsewhere in the nebula, resulting in an elongated morphology with the long axis aligned with the pulsar rotation axis (Begelman & Li 1992). As illustrated in Figure <ref> (left), the PWN/ejecta interface is susceptible to Rayleigh-Taylor (R-T) instabilities. These structures are readily observed in the Crab Nebula (Figure <ref>a; also see Hester 2008 as well as Chapter “Supernova of 1054 and its remnant, the Crab Nebula”), where highly-structured filaments of gas and dust are observed in the optical and infrared. Spectral studies of these filaments provide information on the composition, mass, and velocity of the ejecta. This, along with information about the associated SNR, can place strong constraints on the progenitor system. In the Crab Nebula, for example, the total mass of the ejecta swept up by the PWN is ∼ 5 M_⊙ (Fesen et al. 1997), and the expansion velocity is ∼ 1300 km s^-1 (Temim et al. 2006). The Crab is one of a small set of young PWNe for which there is no evidence of the surrounding SNR, other than the swept-up ejecta. Other examples include 3C 58 and perhaps G54.1+0.3, although there is some evidence for radio and X-ray emission that might be associated with an SNR shell in the latter (Bocchino et al. 2010).
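As a concrete illustration of the Pacini & Salvati injection history given above, a short sketch (the initial parameters P_0, Ṗ_0 and Ė_0 are assumed, Crab-like values):

```python
import math

# Sketch of the injection history Edot(t); p0, pdot0 and the initial
# spin-down power are illustrative Crab-like assumptions (n = 3 is dipole).
YR = 3.156e7  # seconds per year

def tau0(p0, pdot0, n=3.0):
    """Initial spin-down timescale tau_0 = P0 / ((n - 1) * Pdot0), in seconds."""
    return p0 / ((n - 1.0) * pdot0)

def edot(t, edot0, tau_0, n=3.0):
    """Edot(t) = Edot0 * (1 + t/tau_0)^(-(n+1)/(n-1))."""
    return edot0 * (1.0 + t / tau_0) ** (-(n + 1.0) / (n - 1.0))

t0 = tau0(p0=0.019, pdot0=4.2e-13)   # ~700 yr for these assumed inputs
for t_yr in (0, 1_000, 10_000):
    print(f"t = {t_yr:>6d} yr : Edot ~ {edot(t_yr * YR, 5e38, t0):.2e} erg/s")
```

The output shows the behavior described above: a roughly constant output until t ∼ τ_0, followed by a rapid decline.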
The lack of bright (or any) SNR emission in these systems is generally assumed to result from some combination of low explosion energy, as might result from low-mass progenitors that produce electron-capture SNe, and a very low surrounding density, as could result from mass loss through stellar winds in the late phase of massive star evolution. For the Crab Nebula, the available evidence appears to be consistent with a low-mass progenitor (Yang & Chevalier 2015). For G54.1+0.3, on the other hand, an infrared shell surrounding the X-ray PWN is observed to encompass a collection of what appear to be O-type stars that presumably formed in the same stellar cluster as the PWN progenitor, indicating that this system resulted from a high mass star (Temim et al. 2010). The IR emission appears to arise from a combination of slow shocks driven into the surrounding ejecta and unshocked supernova dust that is being radiatively heated by emission from the embedded stars. While the optical emission from 3C 58 shows evidence for R-T structures, high resolution X-ray observations show a network of filamentary structures that do not appear to be associated with the optical filaments (Figure <ref>). The origin of these structures is currently not understood, although kink instabilities in the termination shock region may result in magnetic structures whose size scale is similar to what is observed in 3C 58 (Slane et al. 2004). Thermal X-ray emission is observed in the outer regions of the PWN (which appear red in Figure <ref> due to both the low energy thermal flux and the steepening of the synchrotron spectrum with radius associated with burn-off of high energy particles), with indications of enhanced metals as would be expected from shocked ejecta. Mass and abundance measurements, combined with expansion measurements, can provide the velocity and composition distribution of the ejecta, placing constraints on the total ejecta mass and explosion energy of the supernova (e.g., Yang & Chevalier 2015, Gelfand et al. 2015). For more typical systems, the ambient density (and/or supernova explosion energy) is sufficiently high to form a distinct SNR shell of swept-up CSM/ISM material, accompanied by RS-shocked ejecta, as illustrated in Figure <ref>. An exceptional example is G21.5-0.9. X-ray observations (Figure <ref>a) show a bright central nebula that coincides with a flat-spectrum radio nebula. The nebula is accompanied by a faint SNR shell (Slane et al. 2000; Matheson & Safi-Harb 2005), and radio timing measurements with the Parkes telescope reveal the 62 ms pulsar J1833-1034 in the center of the nebula (Camilo et al. 2006). Ground-based IR observations (Zajczyk et al. 2012) reveal a ring of emission associated with ejecta that has been swept up by the expanding PWN (Fig. 4b; contours are X-ray emission from the dashed square region from Fig. 4a). The emission directly around the pulsar is extended in X-rays (see innermost contours), possibly associated with a surrounding torus as is seen in the Crab Nebula and other PWNe. The IR emission surrounding the pulsar is polarized. The electric field vectors are shown in Fig. 4c, with the length of the white bars proportional to the polarization fraction.
The magnetic field, which is perpendicular to the electric vectors, is largely toroidal, consistent with the picture of wound-up magnetic flux from the spinning pulsar, as described above.

§.§ Reverse-shock Interaction

As the SNR blast wave sweeps up increasing amounts of material, the RS propagates back toward the SNR center. In the absence of a central PWN, it reaches the center at a time t_c≈ 7 (M_ej/10 M_⊙)^5/6 E_51^-1/2 n_0^-1/3 kyr, where E_51 is the explosion energy, M_ej is the ejecta mass, and n_0 is the number density of ambient gas (Reynolds & Chevalier 1984). When a PWN is present, however, the RS interacts with the nebula before it can reach the center (Figure <ref>). The shock compresses the PWN, increasing the magnetic field strength and resulting in enhanced synchrotron radiation that burns off the highest energy particles. In the simplified case of SNR expansion into a uniform medium, with a spherically-symmetric PWN, the system evolves approximately as illustrated in Figure <ref> (from Gelfand et al. 2009), where the Sedov solution has been assumed for the SNR evolution, R_SNR≈ 6.2 × 10^4 (E_SN/n_0)^1/5 t^2/5, and the PWN evolves approximately as R_PWN≈ 1.5 Ė_0^1/5 E_SN^3/10 M_ej^-1/2 t^6/5 (Chevalier 1977) prior to the RS interaction. [In reality, the SNR expands freely at the outset, approaching the Sedov solution as t → t_c.] Here, E_SN is the supernova explosion energy, n_0 is the number density of the ambient medium, and M_ej is the mass of the supernova ejecta. If the ambient CSM/ISM is significantly nonuniform (and it typically is, because massive stars form in turbulent regions of dense clouds, and strongly modify the CSM through strong and potentially-asymmetric winds), the FS expands more (less) rapidly in regions of lower (higher) density. This has two significant effects. First, it changes the morphology of the SNR to a distorted shell for which the associated pulsar is no longer at the center. Second, the RS also propagates asymmetrically, reaching the center more quickly from the direction of the higher density medium (Blondin et al. 2001). The return of the RS ultimately leads to a collision with the PWN. During the compression phase, the magnetic field of the nebula increases, resulting in enhanced synchrotron radiation and significant radiative losses from the highest energy particles. The PWN/RS interface is Rayleigh-Taylor (R-T) unstable, and is subject to the formation of filamentary structure where the dense ejecta material is mixed into the relativistic fluid. If the SNR has evolved in a nonuniform medium, an asymmetric RS will form, disrupting the PWN and displacing it in the direction of lower density (Figure <ref>). The nebula subsequently re-forms as the pulsar injects fresh particles into its surroundings, but a significant relic nebula of mixed ejecta and relativistic gas will persist. Because the SNR RS typically reaches the central PWN on a timescale that is relatively short compared with the SNR lifetime, all but the youngest PWNe that we observe have undergone an RS interaction (see Figure <ref>). This has significant impact on the large-scale geometry of the PWN, as well as on its spectrum and dynamical evolution. Remnants such as G328.4+0.2 (Gelfand et al. 2007), MSH 15-56 (Temim et al. 2013), and G327.1-1.1 (Temim et al.
2015) all show complex structure indicative of RS/PWN interactions, and observations of extended sources of very high energy (VHE) γ-rays indicate that many of these objects correspond to PWNe that have evolved beyond the RS-crushing stage. An example of such an RS-interaction stage is presented in Figure <ref>, where we show the composite SNR G327.1-1.1 (Temim et al. 2015). Radio observations (a) show a complete SNR shell surrounding an extended flat-spectrum PWN in the remnant interior, accompanied by a finger-like structure extending to the northwest. X-ray observations (b) show faint emission from the SNR shell along with a central compact source located at the tip of the radio finger, accompanied by a tail of emission extending back into the radio PWN. The X-ray properties of the compact source are consistent with emission from a pulsar (though, to date, pulsations have not yet been detected) which, based on its position relative to the geometric center of the SNR, appears to have a northward motion. Spectra from the SNR shell indicate a density gradient in the surrounding medium, increasing from east to west. Results from hydrodynamical modeling of the evolution of such a system using these measurements as constraints, along with an estimate for the spin-down power of the pulsar based upon the observed X-ray emission of its PWN (see Section 3.1), are shown in Figure <ref>c where we show the density (compare with Figure <ref>). The RS has approached rapidly from the west, sweeping past the pulsar and disrupting the PWN. The result is a trail of emission swept back into the relic PWN, in excellent agreement with the radio morphology. The X-ray spectrum of the tail shows a distinct steepening with distance from the pulsar, consistent with synchrotron cooling of the electrons based on the estimated magnetic field and age of the injected particles tracked in the hydro simulation. Detailed investigation shows that the central source is actually resolved, suggesting that the pulsar is surrounded by a compact nebula (panel d). This is embedded in a cometary structure produced by a combination of the northward motion of the pulsar and the interaction with the RS propagating from the west. However, extended prong-like structures are observed in X-rays, whose origin is currently not understood.

§.§ Late-phase Evolution

As illustrated in Figure <ref>, as a PWN ages, the ratio of the IC to synchrotron luminosity increases due to the declining magnetic field in the nebula. As a result, in late phases of the evolution, the γ-ray emission may dominate that observed in the radio or X-ray bands. Indeed, PWNe dominate the population of TeV γ-ray sources in the Galactic Plane (e.g., Carrigan et al. 2013). For many such TeV-detected PWNe, the inferred magnetic field strengths are only ∼ 5 μG (e.g., de Jager et al. 2008). In such a case, 1 TeV gamma-rays originate from electrons with energies of ∼ 20 TeV (assuming IC scattering of CMB photons) while 1 keV synchrotron X-rays originate from electrons with energies of ∼ 100 TeV (see Eqns. <ref>, <ref>). The higher energy X-ray producing electrons fall beyond the cooling break, while those producing the γ-rays are predominantly uncooled. The result is a bright TeV nebula accompanied by a fainter X-ray nebula. Such results are seen clearly for HESS J1825-137, for which measurements show that the TeV emission extends to much larger distances than the X-ray emission due to more rapid cooling of the X-ray emitting particles.
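A quick cross-check of the electron energies quoted here, inverting the synchrotron and IC relations of Section 3.1 (the 5 μG field is the only physical input; the helper names are ours):

```python
# Invert E_syn = 2.2 * E_100^2 * B_10 keV and E_IC = 0.32 * E_10^2 TeV
# to recover the electron energies quoted above for an assumed 5 uG field.
B10 = 0.5  # B = 5 uG in units of 10 uG

e_ic = 10.0 * (1.0 / 0.32) ** 0.5          # TeV electrons giving 1 TeV IC photons
e_sy = 100.0 * (1.0 / (2.2 * B10)) ** 0.5  # TeV electrons giving 1 keV synchrotron
print(f"1 TeV IC  <- ~{e_ic:.0f} TeV electrons")   # ~18 TeV
print(f"1 keV syn <- ~{e_sy:.0f} TeV electrons")   # ~95 TeV
```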
Indeed, for this PWN, the γ-ray size is observed to decline with increasing energy, indicating that even some of the γ-ray emitting electrons fall beyond the cooling break although, as observed in younger PWNe in X-rays, the high energy emission extends to larger radii than can be explained unless rapid diffusion of the associated electrons is occurring (Van Etten & Romani 2011). Deep surveys with future VHE γ-ray telescopes are expected to reveal many older systems for which emission in other wavebands is now faint.

§.§ Escaping the SNR – Bow Shock PWNe

Late in the evolution of a PWN, the pulsar will exit its host SNR and begin traveling through the ISM. Since the sound speed for the cold, warm, and hot phases of the ISM is v_s ∼ 1, 10, and 100 km s^-1, the pulsar motion will be supersonic. The relative motion of the ISM sweeps the pulsar wind back into a bow shock structure. As illustrated in Figure <ref> (left), the structure is still characterized by an FS, CD, and TS, but the gas behind the FS is now shocked ISM material, and the CD separates the shocked pulsar wind from the shocked ISM. Inside the TS, the pulsar wind flows freely. The distance from the pulsar to the TS depends on the angle θ relative to the pulsar motion (as does that to the FS), and is approximately described by (Wilkin 1996) R_w(θ) = R_w0√(3(1-θcotθ))/sinθ. Here R_w0 is the stand-off distance from the pulsar, in the direction of motion, where the wind pressure matches the ram pressure of the inflowing ISM (in the pulsar frame): R_w0 = √(Ė/(4πω c ρ_0 v_ PSR^2)), where v_ PSR is the pulsar velocity and ρ_0 is the density of the unshocked ISM (cf. Equation 6). Although this description was derived for a non-relativistic, unmagnetized, radiative fluid (whereas the pulsar wind is magnetized and relativistic, and the radiative time for the ISM is long relative to the flow timescale in pulsar bow shock nebulae), the overall geometric description provides an adequate representation (Bucciantini & Bandiera 2001). The non-radiative shock formed in the ISM interaction results in the emission of optical Balmer lines, dominated by Hα, providing a distinct signature from which properties of the pulsar motion and wind luminosity can be inferred. An exceptional example is the bow shock nebula associated with PSR J0437-4715 (Figure <ref>, right), a nearby millisecond pulsar in a binary system, for which timing measurements have established M_NS∼ 1.8 M_⊙ (Verbiest et al. 2008). Parallax measurements establish a distance of 0.16 kpc, and proper motion measurements of the pulsar (and nebula) provide v_⊥ = 107 km s^-1. With the measured spin-down power Ė = 5.5 × 10^33 erg s^-1, modeling of the bow shock structure provides a direct limit on the NS moment of inertia that indicates a relatively stiff equation of state (Brownsberger & Romani 2014). Radio and X-ray measurements of bow shock nebulae probe the shocked pulsar wind. Observations of PSR J1747-2958 and its associated nebula G359.23-0.82 reveal a long radio tail and an X-ray morphology that reveals both a highly magnetized tail from wind shocked from the forward direction, and a weakly magnetized tail from wind flowing in the direction opposite that of the pulsar motion (Gaensler et al. 2004). High resolution measurements of the emission near several pulsars have also provided evidence for asymmetric pulsar winds imprinting additional structure on the bow shock structure (e.g., Romani et al. 2010).
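A minimal sketch of this geometry, using the J0437-4715-like numbers quoted above together with an assumed, illustrative ambient density:

```python
import math

# Sketch of the bow-shock geometry; edot and v_psr follow the J0437-4715
# values quoted above, while n_ism is an illustrative assumption.
C = 2.998e10
M_H = 1.67e-24  # hydrogen mass [g]

def standoff_radius(edot, n_ism, v_psr, omega=1.0):
    """R_w0 = sqrt(Edot / (4 pi omega c rho_0 v_psr^2))  [cm]."""
    rho0 = n_ism * M_H
    return math.sqrt(edot / (4.0 * math.pi * omega * C * rho0 * v_psr**2))

def wilkin_shape(theta, r_w0):
    """Wilkin (1996): R(theta) = R_w0 * sqrt(3(1 - theta*cot(theta))) / sin(theta)."""
    return r_w0 * math.sqrt(3.0 * (1.0 - theta / math.tan(theta))) / math.sin(theta)

r0 = standoff_radius(edot=5.5e33, n_ism=0.2, v_psr=1.07e7)
print(f"R_w0 ~ {r0:.2e} cm")
for th in (0.5, 1.0, 2.0):  # radians; the shell opens up away from the apex
    print(f"theta = {th}: R = {wilkin_shape(th, r0)/r0:.2f} R_w0")
```

Note that R(θ) → R_w0 as θ → 0, recovering the stand-off distance at the apex.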
§ SUMMARY

The structure of a PWN is determined by both the properties of the host pulsar and the environment into which the nebula expands. Observations across the electromagnetic spectrum allow us to constrain the nature of the pulsar wind, including both its magnetization and geometry, and the global properties of the PWN allow us to constrain the evolutionary history as it evolves through the ejecta of the supernova remnant in which it was born. Spectroscopic observations yield information on the mass and composition of shocked ejecta into which the nebula expands, and on the expansion velocity. Measurements of the broadband spectrum provide determinations of the nebular magnetic field and the maximum energy of the particles injected into the PWN. These observations continue to inform theoretical models of relativistic shocks which, in turn, have broad importance across the realm of high-energy astrophysics. At late phases, interactions between the PWN and the SNR RS produce a complex combination of the relic nebula and freshly-injected particles. Hydrodynamical simulations of the entire composite SNR system can reveal information on the long-term evolution, which depends on the details of the pulsar motion, its wind properties, the ejecta mass and explosion energy of the SNR, and the properties of the surrounding medium. Such systems may eventually fade into obscurity, with γ-ray emission from the relic electrons providing an important signature before the pulsars exit their SNRs and traverse the ISM at supersonic speeds, producing elongated bow shock nebulae whose structure continues to provide a glimpse of the relativistic outflows from the aging pulsars.

Acknowledgements

The author would like to thank the many colleagues with whom he has collaborated on studies that have been briefly summarized in this Handbook contribution. Partial support for this effort was provided by NASA Contract NAS8-03060.

Cross-References

∙ Supernova of 1054 and its remnant, the Crab Nebula
∙ The Historical Supernova of AD1181 and its remnant, 3C58
∙ Supernovae from super AGB Stars (8-12 M_⊙)
∙ Explosion Physics of Core-Collapse Supernovae
∙ Radio Neutron Stars
∙ Distribution of the spin periods of neutron stars
∙ Dynamical Evolution and Radiative Processes of Supernova Remnants
∙ X-ray Emission Properties of supernova remnants
∙ Infrared Emission from Supernova Remnants: Formation and Destruction of Dust

acc02 Arzoumanian Z, Chernoff DF, Cordes JM (2002) The Velocity Distribution of Isolated Radio Pulsars. ApJ 568:289-301
aa96 Atoyan AM, Aharonian FA (1996) On the mechanisms of gamma radiation in the Crab Nebula. MNRAS 278:525-541
beg92 Begelman MC, Li Z-Y (1992) An axisymmetric magnetohydrodynamic model for the Crab pulsar wind bubble. ApJ 397:187-195
blo01 Blondin JM, Chevalier RA, Frierson DM (2001) Pulsar Wind Nebulae in Evolved Supernova Remnants. ApJ 563:806
boc+10 Bocchino F, Bandiera R, Gelfand JD (2010) XMM-Newton and SUZAKU detection of an X-ray emitting shell around the pulsar wind nebula G54.1+0.3. A&A 520A:71
bog99 Bogovalov SV (1999) On the physics of cold MHD winds from oblique rotators. A&A 349:1017
br14 Brownsberger S, Romani RW (2014) A Survey for H-alpha Pulsar Bow Shocks. ApJ 784:154
buc11 Bucciantini N (2011) MHD models of Pulsar Wind Nebulae. ASSP 21:473
bb01 Bucciantini N, Bandiera R (2001) Pulsar bow-shock nebulae. I. Physical regimes and detectability conditions. A&A 375:1032-1039
cam+06 Camilo F, Ransom SM, Gaensler BM, Slane P, Lorimer DR, Reynolds J, et al.
(2006) PSR J1833-1034: Discovery of the Central Young Pulsar in the Supernova Remnant G21.5-0.9. ApJ 637:456-465
car13 Carrigan F, Brun F, Chaves RCG, Deil C, Donath A, Gast H, et al. (2013) The H.E.S.S. Galactic Plane Survey - maps, source catalog and source population. In: Proceedings of the 33rd International Cosmic Ray Conference (arXiv:1307.4690)
chev77 Chevalier RC (1977) Was SN 1054 A Type II Supernova? In: Schramm DN (ed) Supernovae, Astrophysics and Space Science Library 66:53
dejag08 de Jager OC, Slane PO, LaMassa S (2008) Probing the Radio to X-Ray Connection of the Vela X Pulsar Wind Nebula with Fermi LAT and H.E.S.S. ApJ 689:L125
fes97 Fesen RA, Shull JM, Hurford AP (1997) An Optical Study of the Circumstellar Environment Around the Crab Nebula. AJ 113:354-363
gae04 Gaensler BM, van der Swaluw E, Camilo F, Kaspi VM, Baganoff FK, Yusef-Zadeh F, et al. (2004) The Mouse that Soared: High-Resolution X-Ray Imaging of the Pulsar-powered Bow Shock G359.23-0.82. ApJ 616:383-402
gae06 Gaensler BM, Slane PO (2006) The Evolution and Structure of Pulsar Wind Nebulae. ARA&A 44:17-47
gel07 Gelfand JD, Gaensler BM, Slane PO, Patnaude DJ, Hughes JP, Camilo F (2007) The Radio Emission, X-Ray Emission, and Hydrodynamics of G328.4+0.2: A Comprehensive Analysis of a Luminous Pulsar Wind Nebula, Its Neutron Star, and the Progenitor Supernova Explosion. ApJ 663:468-486
gsz09 Gelfand JD, Slane PO, Zhang W (2009) A Dynamical Model for the Evolution of a Pulsar Wind Nebula Inside a Nonradiative Supernova Remnant. ApJ 703:2051-2067
gst15 Gelfand JD, Slane PO, Temim T (2015) The Properties of the Progenitor Supernova, Pulsar Wind, and Neutron Star inside PWN G54.1+0.3. ApJ 807:30
gj69 Goldreich P, Julian WH (1969) Pulsar Electrodynamics. ApJ 157:869
hes08 Hester JJ (2008) The Crab Nebula: An Astrophysical Chimera. ARA&A 46:127-155
kar+15 Kargaltsev O, Cerutti B, Lyubarsky Y, Striani E (2015) Pulsar-Wind Nebulae. Recent Progress in Observations and Theory. SSRv 191:391-439
kc84 Kennel CF, Coroniti FV (1984) Magnetohydrodynamic model of Crab nebula radiation. ApJ 283:710-730
lyu02 Lyubarsky YE (2002) On the structure of the inner Crab Nebula. MNRAS 329:L34-L36
lyu03 Lyubarsky YE (2003) The termination shock in a striped pulsar wind. MNRAS 345:153
msh05 Matheson H, Safi-Harb S (2005) The plerionic supernova remnant G21.5-0.9: In and out. AdSpR 35:1099-1105
ps73 Pacini F, Salvati M (1973) On the Evolution of Supernova Remnants. Evolution of the Magnetic Field, Particles, Content, and Luminosity. ApJ 186:249-266
pos+02 Possenti A, Cerutti R, Colpi M, Mereghetti S (2002) Re-examining the X-ray versus spin-down luminosity correlation of rotation powered pulsars. A&A 387:993-1002
ren84 Reynolds SP, Chevalier RA (1984) Evolution of pulsar-driven supernova remnants. ApJ 278:630-648
rom10 Romani RW, Shaw MS, Camilo F, Cotter G, Sivakoff GR (2010) The Balmer-dominated Bow Shock and Wind Nebula Structure of gamma-ray Pulsar PSR J1741-2054. ApJ 724:908-914
ss11 Sironi L, Spitkovsky A (2011) Acceleration of Particles at the Termination Shock of a Relativistic Striped Wind. ApJ 741:39
sir+13 Sironi L, Spitkovsky A, Arons J (2013) The Maximum Energy of Accelerated Particles in Relativistic Collisionless Shocks. ApJ 771:54
sla+00 Slane P, Chen Y, Schulz NS, Seward FD, Hughes JP, Gaensler BM (2000) Chandra Observations of the Crab-like Supernova Remnant G21.5-0.9. ApJ 533:L29-L32
sla04 Slane P, Helfand DJ, van der Swaluw E, Murray SS (2004) New Constraints on the Structure and Evolution of the Pulsar Wind Nebula 3C 58.
ApJ 616:403-413
sla08 Slane P, Helfand DJ, Reynolds SP, Gaensler BM, Lemiere A, Wang Z (2008) The Infrared Detection of the Pulsar Wind Nebula in the Galactic Supernova Remnant 3C 58. ApJ 676:L33
tc12 Tang X, Chevalier RA (2012) Particle Transport in Young Pulsar Wind Nebulae. ApJ 752:83
tgw+06 Temim T, Gehrz RD, Woodward CE, Roellig TL, Smith N, Rudnick L, et al. (2006) Spitzer Space Telescope Infrared Imaging and Spectroscopy of the Crab Nebula. AJ 132:1610-1623
tem10 Temim T, Slane P, Reynolds SP, Raymond JC, Borkowski KJ (2010) Deep Chandra Observations of the Crab-like Pulsar Wind Nebula G54.1+0.3 and Spitzer Spectroscopy of the Associated Infrared Shell. ApJ 710:309-324
tem13 Temim T, Slane P, Castro D, Plucinsky PP, Gelfand J, Dickel JR (2013) High-energy Emission from the Composite Supernova Remnant MSH 15-56. ApJ 768:61
tem15 Temim T, Slane P, Kolb C, Blondin J, Hughes JP, Bucciantini N (2015) Late-Time Evolution of Composite Supernova Remnants: Deep Chandra Observations and Hydrodynamical Modeling of a Crushed Pulsar Wind Nebula in SNR G327.1-1.1. ApJ 799:158
th15 Timokhin AN, Harding AK (2015) On the Polar Cap Cascade Pair Multiplicity of Young Pulsars. ApJ 810:144
tor13 Torres D, Martín J, de Oña Wilhelmi E, Cillis A (2013) The effects of magnetic field, age and intrinsic luminosity on Crab-like pulsar wind nebulae. MNRAS 436:3112-3127
van03 van der Swaluw E (2003) Interaction of a magnetized pulsar wind with its surroundings. MHD simulations of pulsar wind nebulae. A&A 404:939-947
vanrom11 Van Etten A, Romani RW (2011) Multi-zone Modeling of the Pulsar Wind Nebula HESS J1825-137. ApJ 742:62
ver08 Verbiest JPW, Bailes M, van Straten W, Hobbs GB, Edwards RT, Manchester RN, et al. (2008) Precision Timing of PSR J0437-4715: An Accurate Pulsar Distance, a High Pulsar Mass, and a Limit on the Variation of Newton's Gravitational Constant. ApJ 679:675-680
wilk96 Wilkin FP (1996) Exact Analytic Solutions for Stellar Wind Bow Shocks. ApJ 459:L31
yc15 Yang H, Chevalier RC (2015) Evolution of the Crab Nebula in a Low Energy Supernova. ApJ 806:153
zej+12 Zajczyk A, Gallant YA, Slane P, Reynolds SP, Bandiera R, Gouiffès C (2012) Infrared imaging and polarimetric observations of the pulsar wind nebula in SNR G21.5-0.9. A&A 542:A12
"authors": [
"Patrick Slane"
],
"categories": [
"astro-ph.HE"
],
"primary_category": "astro-ph.HE",
"published": "20170327211316",
"title": "Pulsar Wind Nebulae"
} |
De-anonymization of Social Networks with Communities: When Quantifications Meet Algorithms

Luoyi Fu^* Shanghai Jiao Tong University [email protected]
Xinzhe Fu^* Shanghai Jiao Tong University [email protected]
Zhongzhao Hu Shanghai Jiao Tong University [email protected]
Zhiying Xu Shanghai Jiao Tong University [email protected]
Xinbing Wang Shanghai Jiao Tong University [email protected]

December 30, 2023
========================================================================================================================================================================================================

A crucial privacy-driven issue nowadays is re-identifying anonymized social networks by mapping them to correlated cross-domain auxiliary networks. Prior works are typically based on modeling social networks as random graphs representing users and their relations, and subsequently quantify the quality of mappings through cost functions that are proposed without sufficient rationale. Also, it remains unknown how to algorithmically meet the demand of such quantifications, i.e., to find the minimizer of the cost functions. We address those concerns in a more realistic social network modeling parameterized by community structures that can be leveraged as side information for de-anonymization. By Maximum A Posteriori (MAP) estimation, our first contribution is new and well justified cost functions, which, when minimized, enjoy superiority to previous ones in finding the correct mapping with the highest probability. The feasibility of the cost functions is then for the first time algorithmically characterized. While proving the general multiplicative inapproximability, we are able to propose two algorithms, which, respectively, enjoy an ϵ-additive approximation and a conditional optimality in carrying out successful user re-identification. Our theoretical findings are empirically validated, with a notable dataset extracted from rare true cross-domain networks that reproduce genuine social network de-anonymization. Both theoretical and empirical observations also manifest the importance of community information in enhancing privacy inferencing.

*: the first two authors contributed equally to this paper.

§ INTRODUCTION

The proliferation of social networks has led to the generation of massive network data. Although users can be anonymized in the released data through removing personal identifiers <cit.>, with their underlying relations preserved, they may still be re-identified by adversaries from correlated cross-domain auxiliary networks where user identities are known <cit.>. Such an idea of unveiling hidden users by leveraging their information collected from other domains, alternatively called social network de-anonymization <cit.>, is a fundamental privacy issue that has received considerable attention.
Inspired by Pedarsani and Grossglauser <cit.>, a large body of existing de-anonymization work shares a basic common paradigm: with an underlying network representing social relations between users, both the published anonymized network and the auxiliary un-anonymized network are generated from that network based on graph sampling that captures their correlation, as observed in many real cross-domain networks. The equivalent node sets they share are corresponded by an unknown correct mapping. With the availability of only structural information, adversaries attempt to re-identify users by establishing a mapping between the networks. To quantify such mapping qualities, several global cost functions have been proposed <cit.> in favor of exploring the conditions under which the correct matching can be unraveled from the mapping that minimizes the cost function. Despite those dedications to de-anonymization, it is still not entirely understood how the privacy of anonymized social networks can be guaranteed given that adversaries have no access to side information other than network structure, primarily for three reasons. First, the widely adopted Erdős-Rényi graph or Chung-Lu graph <cit.> for the modeling of underlying social networks <cit.>, though facilitating analysis, falls short of capturing the clustering effects that are prevalent in realistic social networks. Second, the cost functions <cit.> in measuring mapping qualities not only lack sufficient rationale in analytical aspects, but, most importantly, it remains unclear whether the feasibility of minimizing such cost functions could be theoretically characterized from an algorithmic aspect <cit.>. Last but not least, due to the rarity of true cross-domain datasets, current empirical observations of social network de-anonymization are either based on synthetic data, or on real social networks with artificial sampling in the construction of correlated published and auxiliary networks, and consequently do not well represent genuine practical de-anonymization <cit.>. While a thorough understanding of this issue may better inform us on user privacy protection, this paper is particularly concerned with the following question: Is it possible to quantify de-anonymization in a more realistic modeling, and meanwhile algorithmically meet the demand brought by such quantifications? The answer to this question entails appropriate modeling of social networks, well-designed cost functions as metrics of mappings and elaborated algorithms for finding the mapping that is optimal according to the metric, along with data collection that can empirically validate the related claims. To present a more reasonable model of the underlying social network that incorporates the clustering effect, we adopt the stochastic block model <cit.> where nodes are partitioned into disjoint sets representing different communities <cit.>. Based on that, we investigate the problem following the paradigm, as noted earlier, where the published and auxiliary networks serve as two sampled subnetworks. Both of them inherit from the underlying network the community structures that can be leveraged as side structural information for adversaries. Similarly, we assume that other than network structure, there is no additional availability of side information to adversaries, as it would only further benefit them.
Depending on the amount of community information available to adversaries, we classify our de-anonymization problem into two categories, i.e., the bilateral case and its counterpart, the unilateral case, literally meaning that adversaries have access to the community structure of both networks or of only one network. A more formal definition of the two cases is deferred to Section <ref>. Subsequently, built on the model, we summarize our results on metrics, algorithms and empirical validations into three aspects answering the question raised.

Analytical aspect: For both cases, our first contribution is to derive the cost functions as metrics quantifying the structural mismappings between networks based on Maximum A Posteriori (MAP) estimation. The virtue of MAP estimation ensures the superiority of our metrics to the previous ones in the sense that the minimizers of our cost functions coincide with the underlying correct mappings with the highest probability. Also, as we will rigorously prove later, under fairly mild conditions on network density and the closeness between communities, through minimizing the cost function we can perfectly recover the correct mapping.

Algorithmic aspect: Following the derived quantifications, our next significant contribution is to take a first algorithmic look into the demand imposed by the quantifications, i.e., the optimization problems of minimizing such cost functions. We find that, as opposed to the simplicity of the cost functions in form, the induced optimization problems are computationally intractable and highly inapproximable. Therefore, we circumvent pursuing exact or multiplicative approximation algorithms, and instead seek algorithms with other types of guarantees. However, the issue is still made particularly challenging by the intricate tension among the cost function, mappings, network topology as well as the super-exponentially large number of candidate mappings. Our main idea to resolve the tension is converting the problems into equivalent formulations that enable some relaxations; through bounding the influence of these relaxations, we demonstrate that the proposed algorithms have their respective performance guarantees. Specifically, one algorithm enjoys an ϵ-additive approximation guarantee in both cases, while the other yields optimal solutions for bilateral de-anonymization when the two sub-networks are highly structurally similar but fails to provide such a guarantee for the unilateral case due to its lack of sufficient community information. Further comparisons of algorithmic results between the two cases also manifest the importance of community as side information in privacy inferencing.

Experimental aspect: Finally, we empirically verify all our theoretical findings on both synthetic and real datasets. We remark that one dataset, which has never appeared in this context previously, is extracted from true cross-domain co-authorship networks <cit.> serving as published and auxiliary networks. As a result, ours is the first work that reproduces genuine scenarios of social network de-anonymization without artificial modeling assumptions. The experimental results demonstrate the effectiveness of our algorithms, as they correctly re-identify more than 40% of users even in the co-authorship networks that possess the largest deviation from our assumptions.
Also, the experiments empirically consolidate our argument that community information can increase de-anonymization capability. The rest of this paper is organized as follows: In Section <ref>, we briefly survey the related works. In Section <ref>, we introduce our model for the de-anonymization problem of social networks with community structure and characterize the cases of bilateral and unilateral information. In Sections <ref> and <ref>, we present our results on the analytical and algorithmic aspects of bilateral de-anonymization. Following the path of the bilateral case, we introduce our results on unilateral de-anonymization and make comparisons between the two cases in Section <ref>. We present our experiments in Section <ref> and conclude the paper in Section <ref>.

§ RELATED WORKS

The issue of social network de-anonymization, which has received considerable attention, was pioneered by Narayanan and Shmatikov <cit.>, who proposed the idea that users in anonymized networks can be re-identified through utilizing auxiliary networks with the same set of users from other domains. In that regard, they designed practical de-anonymization schemes that rely on side information in the form of a seed set of “pre-mapped" node pairs, i.e., a subset of nodes that are identified beforehand across the two networks. The mapping is then generated incrementally, starting from the seeds and percolating to the whole node sets. Following this framework, Pedarsani and Grossglauser developed a succinct modeling that is amenable to theoretical analysis and serves as the paradigm for a family of subsequent related works on social network de-anonymization <cit.>. They assumed that the published and auxiliary networks are two graphs that share the same node sets, with the edge sets resulting from independent samples of an underlying social network. Additionally, they studied a more challenging but practical version of de-anonymization that is free of prior seed information. The two seminal works triggered a flurry of subsequent attempts that all fall into the categories of either seeded or seedless de-anonymization, tuning the model of the underlying social networks. Specifically, in terms of seeded de-anonymization, the current literature focuses on designing efficient de-anonymization algorithms that are executed by percolating the mapping to the whole node sets starting from the seed set. Yartseva et al. <cit.>, Kazemi et al. <cit.>, and later Fabiana et al. <cit.> proposed percolation graph matching algorithms for de-anonymization on Erdős-Rényi graphs and scale-free networks, respectively. Assuming that the underlying social network is generated following the preferential attachment model, Korula and Lattanzi <cit.> designed a corresponding efficient de-anonymization algorithm. Chiasserini et al. <cit.> characterized the impact that clustering imposes on the performance of seeded de-anonymization. Under the classification of both perfect and imperfect seeded de-anonymization, Ji et al. <cit.> analyzed the two cases both qualitatively and empirically. While this type of seed-based de-anonymization method works well in analysis, it is rather difficult to acquire pre-identified user pairs across different networks, as many real situations limit the access to user profiles. Therefore, more often we are faced with adversaries without seeds as side information, which is also the case considered in the present work.
A natural alternative, under such circumstances, is to define a global cost function of mappings and unravel the correct mapping through the minimizer of the cost function. For instance, Pedarsani and Grossglauser <cit.> studied the seedless de-anonymization problem where the underlying social network is an Erdős-Rényi graph, the results of which were further improved by Cullina and Kiyavash <cit.>. Ji et al. analyzed perfect and partial de-anonymization on the Chung-Lu graph <cit.>. Kazemi et al. <cit.> focused on the de-anonymization problem on Erdős-Rényi graphs where the published network and auxiliary network exhibit partial overlapping. The recent work that shares the highest correlation with ours is that of Onaran et al. <cit.>, who studied the situation where there are only two communities in the networks, a special case that can be embodied in our bilateral de-anonymization case.

§ MODELS AND DEFINITIONS

In this section, we introduce the models and definitions of the social network de-anonymization problem. We first present the network models and then formally define the problem of social network de-anonymization.

§.§ Network Models

The network models consist of the underlying social network G, the published network G_1 and the auxiliary network G_2 as incomplete observations of G. In reality, the edges of G, for example, might represent the true relationships between a set of people, while G_1 and G_2 characterize the observable interactions between these people, such as communication records in cell phones or “follow" relationships in online social networks.

§.§.§ Underlying Social Network

To elaborate this, let G=(V,E,𝐌)[For a matrix 𝐌, we use M_ij to denote the element on its ith row and jth column and 𝐌_i to denote its ith row vector.] be the graph representing the underlying social relationships between network nodes, where V is the set of nodes, E is the set of edges and 𝐌[𝐌_ij=1 if (i,j)∈ E and 𝐌_ij=0 otherwise.] denotes the adjacency matrix of G. We treat G as an undirected graph and define the number of nodes as |V|=n. We assume that G is generated according to the stochastic block model <cit.>. Specifically, the model is interpreted as follows: the nodes in V are partitioned into κ disjoint subsets denoted as C_1,C_2,…,C_κ indicating their communities, with |C_i|=n_i and ∑_in_i=n. The edges between nodes in different communities are drawn independently at random with certain probabilities. Let c:V↦{1…κ} be the community assignment function that assigns to each node the label of the community it belongs to; we have Pr{(u,v)∈ E}=Pr{𝐌_uv=1}=p_c(u)c(v), where the affinity values {p}_ab (1≤ a,b≤κ) are pre-defined parameters that indicate the edge existence probabilities and capture the closeness between communities. It has been shown that this model well captures the community structures in social networks and can generate graphs with various degree distributions by tuning the values of {p} <cit.>.

§.§.§ Published Network and Auxiliary Network

We define G_1(V_1,E_1,𝐀) as the graph representing the published network and G_2(V_2,E_2,𝐁) as the graph representing the auxiliary network, with E_1,E_2 denoting their edge sets and 𝐀,𝐁 denoting their adjacency matrices respectively.
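A minimal sketch of this generative pipeline may be helpful (all parameter values are illustrative assumptions; the edge-sampling step anticipates the sampling probabilities s_1, s_2 formalized just below):

```python
import random

# Sketch of the generative model: an underlying SBM graph G and two
# edge-sampled observations G1, G2 (parameters below are illustrative).
def sbm(n, c, p):
    """Underlying graph G: edge (u,v) exists with probability p[c[u]][c[v]]."""
    return {(u, v) for u in range(n) for v in range(u + 1, n)
            if random.random() < p[c[u]][c[v]]}

def sample_edges(edges, s):
    """Observed network: each edge of G is kept independently with probability s."""
    return {e for e in edges if random.random() < s}

n = 10
c = [0] * 5 + [1] * 5                    # two communities
p = [[0.8, 0.1], [0.1, 0.8]]             # affinity values {p_ab}
E = sbm(n, c, p)
E1, E2 = sample_edges(E, 0.9), sample_edges(E, 0.9)   # s1 = s2 = 0.9
print(len(E), len(E1), len(E2))
```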
In correspondence to real situations, G_1 represents the publicly available anonymized network where user identities are removed for privacy concerns. In contrast, G_2 represents the auxiliary cross-domain un-anonymized network where those users' identities are known, and can be collected by the adversary to re-identify the users in G_1. Following previous literature <cit.>, we assume the node sets in G_1 and G_2 are equivalent and that the published network and the auxiliary network are independent samples obtained from the underlying social network G with sampling probabilities s_1 and s_2, respectively. Specifically, for i=1,2, we have Pr{(u,v)∈ E_i} = s_i if (u,v)∈ E, and 0 otherwise. Technically, G, G_1 and G_2 are defined as the random graph variables for the networks. However, for ease of representation, we will also use them to denote the realizations of the random graph variables without loss of clearance. In the sequel, we will also use θ as a shorthand for the set of parameters including the affinity values {p} and sampling probabilities s_1,s_2 in the models of G, G_1, G_2.

§.§ Social Network De-anonymization

Given the published network G_1 and the auxiliary network G_2, the problem of social network de-anonymization aims to find a bijective mapping π:V_1↦ V_2 that reveals the correct correspondence of the nodes in the two networks. Equivalently, a mapping π[In this paper, all the mappings are assumed to be bijective. Hence, we simply refer to them as mappings for brevity.] can be represented as a permutation matrix Π where Π_ij=1 if π(i)=j and Π_ij=0 otherwise. We naturally extend the definition of the mapping of the node set to the mapping of the edge set, as π(e=(i,j))=(π(i),π(j)). We define π_0 (or equivalently Π_0) to be the correct mapping between the node sets of G_1 and G_2. Note that we do not have access to π_0 or the generator G of G_1 and G_2. In other words, although the node sets of G_1 and G_2 are equivalent, the labeling of the nodes does not reflect their underlying correspondence. We interpret this in the way that the published network G_1 has the same node labeling as the underlying network G while the node labeling of G_2 is permuted. Following this interpretation, the community assignment function of G_1 equals c. However, the community assignment function of G_2, which we further define as c', may be different. We illustrate an example of our network models in Figure <ref>. The community assignment functions of the two networks may serve as important structural side information for de-anonymization, which naturally divides the social network de-anonymization problem into two types where the adversary possesses different amounts of information on the community assignment. In the first type, the adversary possesses the community assignments of both G_1 and G_2. The corresponding problem is formally defined as follows. (De-anonymization with Bilateral Community Information) Given the published network G_1, the auxiliary network G_2, the parameters θ, as well as the community assignment function c for G_1 and c' for G_2, the goal is to construct a mapping π that satisfies ∀ i, c(i)=c'(π(i)) and is closest to the correct mapping π_0. Since in this case we have the community assignment of G_2, we can perform a relabeling on the nodes in G_2 to make its community assignment equal to that of G_1.
Hence, without loss of generality, for the case of de-anonymization with bilateral information, we denote c as the community assignment function of both G_1 and G_2 in the sequel. The second variant corresponds to the case where the adversary only possesses the community assignment of the published network, which is formally stated as follows. (De-anonymization with Unilateral Community Information) Given the published network G_1, the auxiliary network G_2, parameters θ, as well as the community assignment function c for G_1, the goal is to construct a mapping that is closest to the correct mapping π_0. Intuitively, de-anonymization with unilateral information is harder than that with bilateral information due to the lack of side information. We will validate this argument with subsequent theoretical analysis and experiments. In addition, for brevity, we may refer to the de-anonymization problems with bilateral and with unilateral community information as bilateral de-anonymization and unilateral de-anonymization, respectively.

Remark: So far, we have not given the metric quantifying closeness to the correct mapping π_0. A natural choice would be the mapping accuracy, i.e., the percentage of nodes that are mapped identically as in π_0. However, as we have no knowledge of π_0, such ground-truth-based metrics do not apply. To tackle this, we leverage the Maximum A Posteriori (MAP) estimator to construct cost functions for measuring the quality of mappings based solely on observable information. The main notations used throughout the paper are summarized in Table <ref>.

§ ANALYTICAL ASPECT OF BILATERAL DE-ANONYMIZATION

First, we investigate the de-anonymization problem with bilateral information, starting with an appropriate metric measuring the quality of mappings. We define our proposed metric in the form of a cost function derived from Maximum A Posteriori (MAP) estimation.

§.§ MAP-based Cost Function

According to the definition of MAP estimation, given the published network G_1, auxiliary network G_2, parameters θ and the community assignment function c, the MAP estimate π̂ of the correct mapping π_0 is defined as: π̂ = argmax_π∈Π Pr(π_0=π | G_1,G_2,c,θ), where Π={π:V_1↦ V_2 | ∀ i, c(i)=c(π(i))}, i.e., the set of bijective mappings that observe the community assignment. From the results in <cit.>, the MAP estimator in Equation (<ref>) can be computed as π̂ = argmin_π∈Π ∑_i≤ j^n w_ij|1{(i,j)∈ E_1}-1{(π(i),π(j))∈ E_2}| ≜ argmin_π∈Π Δ_π, where w_ij=log((1-p_c(i)c(j)(s_1+s_2-s_1s_2))/(p_c(i)c(j)(1-s_1)(1-s_2))). Based on Equation (<ref>), we have our cost function Δ_π as the metric for the quality of mappings, which can also be interpreted as the weighted edge disagreements induced by mappings.

§.§ Validity of the Cost Function

Since our cost function Δ_π is derived using MAP estimation, the minimizer of Δ_π, being the MAP estimate of π_0, coincides with the correct mapping with the highest probability <cit.>. Aside from this, we proceed to justify the use of MAP estimation in the de-anonymization problem from another perspective. Specifically, we prove that if the model parameters satisfy certain conditions, then the MAP estimate π̂ asymptotically almost surely[An event asymptotically almost surely happens if it happens with probability 1-o(1).] coincides with the correct mapping π_0, which means that we can perfectly recover the correct mapping through minimizing Δ_π. Let α=min_ab p_ab, β=max_ab p_ab, w̅=max_ij w_ij and w̲=min_ij w_ij.
Assume that α,β→ 0, s_1,s_2 do not go to 1 as n→∞, and logα/logβ≤γ. Suppose that α(1-β)^2s_1^2s_2^2log(1/α)/(s_1+s_2)=Ω(γlog^2 n/n)+ω(1/n); then π̂=π_0 holds almost surely as n→∞. We use standard Knuth notation in this paper. Due to space limitations, here we only present an outline of the proof and defer the details to Appendix <ref>. Recall that for a mapping π, we define Δ_π=∑_i≤ j^n w_ij|1{(i,j)∈ E_1}-1{(π(i),π(j))∈ E_2}|. Also, we denote Π_k as the set of mappings that map k nodes incorrectly and S_k as a random variable representing the number of mappings π∈Π_k with Δ_π≤Δ_π_0. We then define S=∑_k=2^nS_k as the total number of incorrect mappings π with Δ_π≤Δ_π_0 and derive an upper bound on the mean of S as 𝔼[S]≤∑_k=2^n n^k max_π∈Π_kPr{Δ_π-Δ_π_0≤ 0}. We further show that under the conditions stated in the theorem, this upper bound, and consequently 𝔼[S], go to 0 as n→∞, which implies that π_0 is the unique minimizer of Δ_π and concludes the proof.

Remark: We now present two further notes regarding Theorem <ref>. (i) Applicability of the Theorem: Recall that for a random Erdős-Rényi graph G(n,p) to be connected and free of isolated nodes with high probability, it must satisfy p=Ω(log n/n) <cit.>, and the absence of isolated nodes is necessary for successful de-anonymization since there is no way that we can distinguish the isolated nodes in G_1 and G_2. Conventionally setting the sampling probabilities s_1,s_2 as constants, it is easy to verify that the conditions in Theorem <ref> have only a constant gap from the graph connectivity conditions even when the expected degree distributions (or equivalently, the closeness between the communities) of G_1 and G_2 are non-uniform (e.g., a power law distribution where α/β =O(n) and logα/logβ =O(log n)). From this aspect, the conditions are quite mild and thus make Theorem <ref> fairly general. (ii) Extension of the Theorem: The cost function we design is robust, in the sense that any approximate minimizer of Δ_π can map most of the nodes correctly. We formally present the claim in Corollary <ref>. Let α,β,w̅,w̲ be the same parameters defined in Theorem <ref>. Assume that α,β,s_1,s_2 do not go to 0 and logα/logβ≤γ. Additionally, let δ,ϵ be two real numbers with 0≤δ,ϵ≤ 1 and ϵ=O((δ-δ^2/2)α(1-β)s_1s_2log(1/α)). If α(1-β)^2s_1^2s_2^2log(1/α)/(s_1+s_2)=Ω(γlog^2 n/((1-δ/2)n))+ω(1/n), then for all π^* with Δ_π^*-min_π∈ΠΔ_π≤ϵ n^2, π^* is guaranteed to map at least (1-δ)n nodes correctly as n→∞. The proof is similar to that of Theorem <ref>. Instead of bounding ∑_k=2^n∑_π∈Π_kPr{Δ_π-Δ_π_0≤ 0}, we upper bound ∑_k=δ n^n∑_π∈Π_kPr{Δ_π-Δ_π_0≤ϵ n^2}. Using a similar technique as in Theorem <ref>, we have that under the conditions stated in the corollary, ∑_k=δ n^n∑_π∈Π_kPr{Δ_π-Δ_π_0≤ϵ n^2}→ 0 as n→∞. Therefore, for a mapping π^* with Δ_π^*-Δ_π_0≤ϵ n^2, it maps at most k=δ n nodes incorrectly. Since Δ_π_0≥min_π∈ΠΔ_π, we conclude that all π^* with Δ_π^*-min_π∈ΠΔ_π≤ϵ n^2 are guaranteed to map at least (1-δ)n nodes correctly as n→∞.

§ ALGORITHMIC ASPECT OF BILATERAL DE-ANONYMIZATION

The quantification in Section <ref> justified that, under mild conditions, we can unravel the correct mapping through computing its MAP estimate, i.e., the minimizer of Δ_π.
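As a concrete reference point for this optimization, a sketch that evaluates Δ_π and brute-forces its minimizer over community-preserving mappings on toy instances (feasible only for very small n; the helper names are ours):

```python
import math
from itertools import permutations

# Sketch: evaluate the MAP cost Delta_pi and brute-force its minimizer.
def weight(p, s1, s2):
    """w_ij for a node pair whose communities have affinity p = p_{c(i)c(j)}."""
    return math.log((1 - p * (s1 + s2 - s1 * s2)) / (p * (1 - s1) * (1 - s2)))

def cost(pi, E1, E2, n, w):
    """Delta_pi: weighted edge disagreements induced by the mapping pi."""
    return sum(w[i][j] * abs(((i, j) in E1)
                             - ((min(pi[i], pi[j]), max(pi[i], pi[j])) in E2))
               for i in range(n) for j in range(i + 1, n))

def brute_force_map(E1, E2, n, w, c):
    """Minimize Delta_pi over community-preserving bijections (tiny n only)."""
    feas = (p for p in permutations(range(n)) if all(c[i] == c[p[i]] for i in range(n)))
    return min(feas, key=lambda p: cost(p, E1, E2, n, w))

n, c = 4, [0, 0, 1, 1]
p = [[0.9, 0.1], [0.1, 0.9]]
w = [[weight(p[c[i]][c[j]], 0.8, 0.8) for j in range(n)] for i in range(n)]
E1, E2 = {(0, 1), (2, 3)}, {(0, 1), (2, 3)}
print(brute_force_map(E1, E2, n, w, c))  # the identity is among the minimizers
```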
This naturally puts forward the optimization problem of computing the minimizer of Δ_π, which reasonably serves as the instantiation of the social network de-anonymization problem (Definition 3.1). To meet the demand of the quantification, in this section, we formally define and investigate this optimization problem, presenting a first look into the algorithmic aspect of social network de-anonymization. §.§ The Bilateral MAP-ESTIMATE Problem Naturally, with some previously defined notations inherited, the optimization problem induced by the cost function can be formulated as follows. (The BI-MAP-ESTIMATE Problem) Given two graphs G_1(V_1,E_1,𝐀) and G_2(V_2,E_2,𝐁), a community assignment function c and a set of weights {w}, the goal is to compute a mapping π̂:V_1↦ V_2 that satisfies
𝐏1: π̂ = min_π∈Π ∑_i≤ j^n w_ij|1{(i,j)∈ E_1}-1{(π(i),π(j))∈ E_2}| ≜ min_π∈Π Δ_π,
where Π={π | ∀ i, c(i)=c(π(i))}. Note that we require the weights {w} to be induced by implicit and well-defined community affinity values and sampling probabilities. Also, the BI-MAP-ESTIMATE Problem, denoted as 𝐏1 above, has several equivalent formulations, which will be presented later. The BI-MAP-ESTIMATE problem seems easy at first glance due to the simplicity of its objective function Δ_π, but as justified by the following proposition, it is not only computationally intractable but also highly inapproximable. The BI-MAP-ESTIMATE problem is NP-hard, and there is no polynomial time (pseudo-polynomial time) approximation algorithm for BI-MAP-ESTIMATE with any multiplicative approximation guarantee unless GI∈ P (GI∈ DTIME(n^polylog n)).[GI denotes the complexity class Graph Isomorphism.] The proof can be easily constructed by reduction from the graph isomorphism problem. The reduction is completed by simply setting the two graphs in the instance of graph isomorphism as G_1 and G_2, as well as assigning w_ij=1 for all i,j and c(v)=1 for all v∈ V_1,V_2. Obviously, if the two graphs are isomorphic, the value Δ_π̂ of the optimal mapping π̂ will be zero. Therefore, in this case, any algorithm with a multiplicative approximation guarantee must find a mapping π with Δ_π=0. Furthermore, if G_1 and G_2 are not isomorphic, then any mapping π must induce a Δ_π strictly larger than 0. Hence, a polynomial time approximation algorithm for BI-MAP-ESTIMATE with a multiplicative guarantee implies a polynomial time algorithm for the graph isomorphism problem. The result can be further extended in that there is no pseudo-polynomial time algorithm with a multiplicative approximation guarantee unless GI∈ DTIME(n^polylog n). §.§ Approximation Algorithms As demonstrated above, the BI-MAP-ESTIMATE problem bears high computational complexity and approximation hardness. It is thus unrealistic to pursue exact or even multiplicative approximation algorithms. To circumvent this obstacle and still find solutions with provable theoretical properties, we propose two algorithms with their respective advantages: one has an ϵ-additive approximation guarantee and the other has lower time complexity and yields optimal solutions under certain conditions. The main idea behind them is to convert 𝐏1 to equivalent formulations which are more amenable to relaxation techniques. §.§.§ Additive Approximation Algorithm The additive approximation algorithm we propose is based on the following quadratic assignment formulation of the BI-MAP-ESTIMATE Problem, which we denote as 𝐏2.
𝐏2: maximize ∑_i,j,k,l q_ijkl x_ik x_jl s.t. ∑_i x_ij=1 ∀ i∈ V_1; ∑_j x_ij=1 ∀ j∈ V_2; x_ij∈{0,1}.
The coefficients {q_ijkl} of 𝐏2 are defined as: q_ijkl = w_ij if (i,j)∈ E_1, (k,l)∈ E_2, c(i)=c(k) and c(j)=c(l); q_ijkl = -1 if c(i)≠ c(k) or c(j)≠ c(l); and q_ijkl = 0 otherwise. The solutions to 𝐏2 are a set of integers {x}. We will refer to the value of ∑_i,j,k,l q_ijkl x_ik x_jl as the value of {x}. Based on a solution {x}, we can construct its equivalent mapping for the BI-MAP-ESTIMATE problem by setting π(i)=j iff x_ij=1. The following proposition shows the correspondence between 𝐏1 and 𝐏2. Given G_1, G_2, c and {w}, the optimal solutions of 𝐏1 and 𝐏2 are equivalent. We write the equivalent set of integers {x} of a mapping π as {x^π}. First, we prove that the optimal solution {x^*} to 𝐏2 must observe the community assignment, i.e., if x^*_ij=1, then c(i)=c(j). Indeed, for a solution {x} having some x_i_0i_1=1 but c(i_0)≠ c(i_1), we can find a "cycle of community assignment violations" starting from i_0 with x_i_0i_1=x_i_1'i_2=x_i_2'i_3=…=x_i_ρ'i_0'=1 and c(i_0)=c(i_0'), c(i_1)=c(i_1'), …, c(i_ρ)=c(i_ρ'). Due to the special structure of the coefficients {q}, this cycle only contributes negative value to the objective function of 𝐏2. Therefore, by "reversing" the cycle, we obtain a new solution {x'} from {x} with x'_i_0i_0'=x'_i_1'i_1=x'_i_2'i_2=…=x'_i_ρ'i_ρ=1 and ∑_i,j,k,l q_ijkl x'_ik x'_jl>∑_i,j,k,l q_ijkl x_ik x_jl. The process of reversing cycles of community assignment violations is demonstrated in Figure 2. It follows that the optimal solution to 𝐏2 must observe the community assignment. Then, we proceed to show that the optimal solution to 𝐏1 is equivalent to the optimal solution to 𝐏2. Notice that for all {x^π} that observe the community assignment, we have ∑_ij w_ij=∑_ijkl q_ijkl x^π_ik x^π_jl+Δ_π. Therefore, the corresponding {x^π̂} of the optimal solution π̂ to 𝐏1 is also optimal for 𝐏2 and vice versa. The proof of Proposition <ref> also provides the two main stages in our additive approximation algorithm: (i) Convert the instance of the BI-MAP-ESTIMATE problem into its corresponding quadratic assignment formulation 𝐏2, where the solution is then computed. (ii) Reverse all the "cycles of community assignment violations" in the solution and construct the desired mapping based on it. For the first stage, we adopt the relaxing-rounding based algorithm proposed by Arora et al. <cit.> as a sub-procedure, referred to as "QA-Rounding", to solve the converted instances of 𝐏2. QA-Rounding has an additive approximation guarantee when the instances have coefficients {q} that do not scale with the size of the problem <cit.>. Note that the requirement for the coefficients to be independent of the size of the problem is one of the key factors behind the seemingly unnatural formulation of 𝐏2. For the sake of completeness, we state in the following lemma the related result from <cit.>. (Theorem 3 in <cit.>) Given an instance of 𝐏2 with -C≤ q_ijkl≤ C for all i,j,k,l∈{1… n}, where C is a constant that is independent of n, then for any ϵ>0, QA-Rounding finds a solution {x} with ∑_i,j,k,l q_ijkl x_ik x_jl≥∑_ijkl q_ijkl x^*_ik x^*_jl-ϵ n^2 in n^O(log n/ϵ^2) time, where {x^*} is the optimal solution. The second stage can be completed by repeatedly traversing the solution {x} to identify all the cycles of community assignment violations and reversing them.
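To make the second stage concrete, the following sketch (our own illustration, not the paper's pseudocode) reverses all cycles of community assignment violations in a candidate assignment. It assumes the assignment is stored as a dict pi from nodes of V_1 to nodes of V_2, that c labels both vertex sets, and that the two sides contain equally many nodes of each community.

```python
from collections import defaultdict

def reverse_violation_cycles(pi, c):
    """Rewire pi in place so that c(i) == c(pi(i)) for every node i."""
    mis = {i for i in pi if c[i] != c[pi[i]]}   # mis-assigned source nodes
    bucket = defaultdict(list)                  # mis-assigned sources, per community
    for i in mis:
        bucket[c[i]].append(i)
    while mis:
        i0 = next(iter(mis))                    # start a violation cycle at i0
        mis.discard(i0)
        bucket[c[i0]].remove(i0)
        chain, targets = [i0], [pi[i0]]
        # extend the cycle until a target falls in i0's own community
        while c[targets[-1]] != c[i0]:
            nxt = bucket[c[targets[-1]]].pop()  # a mis-assigned source located there
            mis.discard(nxt)
            chain.append(nxt)
            targets.append(pi[nxt])
        # "reverse" the cycle: each source now takes a target of its own community
        for k in range(len(chain) - 1, 0, -1):
            pi[chain[k]] = targets[k - 1]
        pi[i0] = targets[-1]
    return pi
```

The equal-community-size assumption guarantees the bucket popped in the inner loop is never empty: whenever a community has an unmatched incoming image, it also has an unmatched outgoing source.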
Algorithm 1 illustrates the whole diagram of our proposed additive approximation algorithm. Approximation Guarantee: By Lemma <ref>, QA-Rounding yields a solution whose value has a gap of less than ϵ n^2 from the optimal. Combined with the equality ∑_i,j w_ij=Δ_π+∑_i,j,k,l q_ijkl x_ik x_jl and the fact that the reversal of all the cycles of community assignment violations only incurs an increase in the value of the computed solution {x}, we have that the mapping π given by Algorithm 1 has an ϵ-additive approximation guarantee and satisfies c(i)=c(π(i)) for all i. Moreover, by Corollary <ref>, we know that when ϵ,δ satisfy the conditions in the corollary, the mappings yielded by Algorithm 1 map at least (1-δ)n nodes correctly. Time Complexity: QA-Rounding has a time complexity of n^O(log n/ϵ^2). The reversal of all the cycles can be completed in O(n^2) time when {x} is represented in the form of an adjacency list-like structure. Based on those, the time complexity of Algorithm 1 is O(n^O(log n/ϵ^2)+n^2). §.§.§ Convex Optimization-Based Heuristic Besides the algorithm that provides an additive approximation guarantee in the general case, it is also useful to pursue algorithms that have stronger guarantees in special cases. In this section, we present one such algorithm that can find the optimal solution in the cases where the structural similarity between the two networks is higher than a certain threshold. The algorithm is based on convex optimization and relies on a matrix formulation of the BI-MAP-ESTIMATE problem. The main idea is to first solve a convex-relaxed version of the matrix formulation and then convert the solution back to a legitimate one. Specifically, the matrix formulation of the BI-MAP-ESTIMATE problem, which we denote by 𝐏3, is formally stated as follows:
𝐏3: minimize ‖𝐖∘(𝐀-Π^T𝐁Π)‖_F^2+μ‖Π𝐦-𝐦‖_F^2 s.t. ∀ i∈ V_1, ∑_iΠ_ij=1; ∀ j∈ V_2, ∑_jΠ_ij=1; ∀ i,j, Π_ij∈{0,1},
where 𝐖 is a symmetric matrix with 𝐖_ij=𝐖_ji=√(w_ij), 𝐦 represents the community assignment vector (c(1),…,c(n))^T, μ is a positive constant that is large enough, ∘ denotes the matrix Hadamard product with (𝐖∘𝐀)_ij=𝐖_ij·𝐀_ij, and ‖·‖_F represents the Frobenius norm. Note that 𝐏3 is equivalent to 𝐏1 from the perspective of the relation between a mapping and its corresponding permutation matrix, as is stated in the following proposition. Given G_1, G_2, c and {w}, the optimal solutions of 𝐏1 and 𝐏3 are equivalent. The proof is similar to that of Proposition <ref>. First, due to the existence of the penalty factor μ‖Π𝐦-𝐦‖_F^2, the optimal solution of 𝐏3 must observe the community assignment. Second, for all the permutation matrices Π and their corresponding mappings π that observe the community assignment, it is easy to show that Δ_π=‖𝐖∘(𝐀-Π^T𝐁Π)‖_F^2+μ‖Π𝐦-𝐦‖_F^2 (the second term equals 0 in this case). Hence, the optimal solutions of 𝐏1 and 𝐏3 are equivalent. Before introducing the algorithm, we further transform the objective function of 𝐏3 into an equivalent but more tractable form. Lemma <ref> gives the main idea of the transformation. Let Ã=𝐖∘𝐀 and 𝐁̃=𝐖∘𝐁 be the weighted adjacency matrices of G_1 and G_2 respectively; then for all permutation matrices that observe the community assignment[A permutation matrix Π observes the community assignment if c(i)=c(j) for all Π_ij=1.], the following equality holds: ‖𝐖∘(𝐀-Π^T𝐁Π)‖_F=‖ΠÃ-𝐁̃Π‖_F. We prove the lemma by repeatedly using the symmetry of 𝐀 and 𝐁 and special properties of 𝐖 and Π.
The detailed steps are presented as follows:
‖𝐖∘(𝐀-Π^T𝐁Π)‖_F = ‖𝐖∘(Π(𝐀-Π^T𝐁Π))‖_F = ‖𝐖∘(Π𝐀-𝐁Π)‖_F = ‖𝐖∘(Π𝐀)-𝐖∘(𝐁Π)‖_F = ‖Π(𝐖∘𝐀)-(𝐖∘𝐁)Π‖_F = ‖ΠÃ-𝐁̃Π‖_F.
Note that Equation (<ref>) holds because multiplying by a permutation matrix does not change the value of the element-wise Frobenius norm. Equations (<ref>), (<ref>) and (<ref>) hold due to the definition of the Hadamard product and of Ã,𝐁̃. The validity of Equation (<ref>) is less straightforward and can be interpreted in the following way: the weight w_ij of a node pair (i,j) is determined only by p_c(i)c(j), s_1 and s_2. Therefore, if c(i)=c(k) and c(j)=c(l) for some nodes i,j,k,l, then we have 𝐖_ik=𝐖_jl, i.e., the weight is invariant within communities. This crucial property, combined with the fact that Π is a permutation matrix that observes the community assignment, makes the Hadamard products and the normal matrix multiplications in Equation (<ref>) interchangeable. Based on Lemma <ref>, we can rewrite the objective function of 𝐏3 as ‖ΠÃ-𝐁̃Π‖^2_F+μ‖Π𝐦-𝐦‖^2_F. Then, we further relax constraints (<ref>) and (<ref>) in 𝐏3 and obtain the optimization problem 𝐏3', which can be formulated as:
𝐏3': minimize ‖ΠÃ-𝐁̃Π‖_F^2+μ‖Π𝐦-𝐦‖_F^2 s.t. ∀ i, ∑_i∈ V_1Π_ij=1.
Obviously the objective function and the set of feasible solutions are both convex. Immediately we can conclude that 𝐏3' is a convex-relaxed version of 𝐏3, which is stated in the following lemma. 𝐏3' is a convex optimization problem. With all the prerequisites above, we are now ready to present our second, convex optimization-based algorithm, which first solves for a fractional optimal solution of 𝐏3' and then projects that fractional solution onto an integral permutation matrix (and its corresponding mapping). During the projection process, we use an n-dimensional array Mapped to record the projected nodes and a set Legal_i for each node i to record the remaining legitimate nodes to which it can be mapped. The details are illustrated in Algorithm 2. Performance Guarantee: Generally, Algorithm 2 cannot yield the optimal solution to the BI-MAP-ESTIMATE problem and the gap between its solution and the optimal one may be large. However, we will demonstrate that when the similarity between G_1 and G_2 is high enough, or equivalently, the difference between the weighted adjacency matrices à and 𝐁̃ is sufficiently small, Algorithm 2 is guaranteed to find the optimal mapping. Let 𝐁̃' be a symmetric matrix that is related to à by a unique Π̂ that observes the community assignment, i.e., 𝐁̃'=Π̂ÃΠ̂^T. Denote 𝐁̃'=𝐔Λ𝐔^T as its unitary eigen-decomposition with ϵ_2≤∑_j|𝐔_ij|≤ϵ_1 for all i. Define λ_1,λ_2,…,λ_n as the eigenvalues of 𝐁̃' with σ=max_i|λ_i| and δ≤|λ_i-λ_j| for all i≠ j. Assume that there exists a matrix 𝐑 that satisfies 𝐁̃=𝐁̃'+𝐑. We denote 𝐄=𝐔^T𝐑𝐔 with ‖𝐄‖_F=ξ and 𝐌=𝐦𝐦^T with ‖𝐌‖_F=M. Let Π^p be the solution obtained by Algorithm 2 and Π^* be the optimal solution. If (σ^2+1)ξ^2+μ^2M^2≤[δ^2/((2√(n)+1)(1+√(n)ϵ_1/ϵ_2)(1+2ϵ_1/ϵ_2))]^2, then Π^p=Π^*. The proof is divided into three steps: (i) First, similar to the argument in <cit.>, by constructing the Lagrangian function of 𝐏3' and setting its gradient to 0, we obtain the necessary conditions that the optimal fractional solution Π^f to 𝐏3' must satisfy; (ii) Then, combining these with the conditions stated in the theorem and the projection from Π^f to Π^p, we show that Π^p=Π̂; (iii) Finally, we prove that in this case Π̂=Π^*, which concludes the proof.
1. Derivation of the Necessary Conditions: We start the first step by rewriting 𝐏3' as an optimization problem with respect to 𝐐=ΠΠ̂^T. Since ΠÃ-𝐁̃Π=(ΠΠ̂^T𝐁̃'-𝐁̃ΠΠ̂^T)Π̂=(𝐐𝐁̃'-𝐁̃𝐐)Π̂, and Π𝐦-𝐦=𝐐𝐦-𝐦 (using Π̂𝐦=𝐦, since Π̂ observes the community assignment), we can reformulate the objective function of 𝐏3' with 𝐐 as the variable and divide it by two for ease of further manipulation, obtaining (1/2)‖𝐐𝐁̃'-𝐁̃𝐐‖^2_F+(μ/2)‖𝐐𝐦-𝐦‖_F^2. The constraint ∑_jΠ_ij=1 for all i can be expressed as 𝐐1=1. The solution of the reformulated version can be associated with the original one by Π=𝐐Π̂. Next, by introducing a multiplier α for the equality constraint of 𝐏3', we construct its Lagrangian function as
L(𝐐,α)=(1/2)‖𝐐𝐁̃'-𝐁̃𝐐‖^2_F+(μ/2)‖𝐐𝐦-𝐦‖_F^2+tr((𝐐1-1)α^T).
The key element of the proof is the set of necessary conditions for 𝐐 to be the optimal (fractional) solution to 𝐏3'. To yield these conditions, we take the gradient of L(𝐐,α) with respect to 𝐐 and set it to 0. Then we have
∇_𝐐L(𝐐,α)=𝐐𝐁̃'^2+𝐁̃^2𝐐-2𝐁̃𝐐𝐁̃'+α1^T+μ(𝐐𝐌-𝐌)=0.
Multiplying by 𝐔^T on the left side of ∇_𝐐L(𝐐,α) and by 𝐔 on the right side, we get
(𝐅Λ^2+Λ^2𝐅-2Λ𝐅Λ)+(𝐅𝐄Λ+𝐅Λ𝐄-2Λ𝐅𝐄)+γ𝐯^T+𝐅𝐆+μ𝐅𝐌'-μ𝐌'=0,
where 𝐅=𝐔^T𝐐𝐔, 𝐯=𝐔^T1, γ=𝐔^Tα, 𝐆=𝐄^2 and 𝐌'=𝐔^T𝐌𝐔. Rewriting the equation coordinate-wise, we have
𝐅_ij(λ_i-λ_j)^2+𝐯_jγ_i-μ𝐌'_ij+∑_k𝐅_ik(𝐄_kj(λ_j+λ_k-2λ_i)+𝐆_kj+μ𝐌'_kj)=0.
Substituting i=j into the above equation and plugging the results back to eliminate the variables γ_i, it follows that
𝐅_ij𝐯_i(λ_i-λ_j)^2+∑_k𝐅_ik(𝐯_i𝐆_kj-𝐯_j𝐆_ki+μ𝐯_i𝐌'_kj-μ𝐯_j𝐌'_ki)+∑_k𝐅_ik(𝐯_i𝐄_kj(λ_j+λ_k-2λ_i)-𝐯_j𝐄_ki(λ_k-λ_i))+μ(𝐯_j𝐌'_ii-𝐯_i𝐌'_ij)=0.
We further define the following variables for i≠ j:
r_ij=(μ/(λ_i-λ_j)^2)(𝐯_j𝐌'_ii-𝐯_i𝐌'_ij), s_jk^i=(1/(λ_i-λ_j)^2)(𝐄_kj(λ_j+λ_k-2λ_i)-(𝐯_j/𝐯_i)𝐄_ki(λ_k-λ_i)), t_jk^i=(1/(λ_i-λ_j)^2)(𝐆_kj-(𝐯_j/𝐯_i)𝐆_ki), w_jk^i=(μ/(λ_i-λ_j)^2)(𝐌'_kj-(𝐯_j/𝐯_i)𝐌'_ki),
and s_jk^i=t_jk^i=w_jk^i=r_ij=0 for i=j. Then, we arrive at the following linear system:
𝐅_ij+∑_k𝐅_ik(s_jk^i+t_jk^i+w_jk^i+r_ij/n)=0 for i≠ j, and ∑_k𝐅_ik𝐯_k=𝐯_i,
where the second set of equations comes from the constraint 𝐐1=1. Equations (<ref>) and (<ref>) represent the conditions that the optimal solution 𝐐 (or equivalently 𝐅) needs to satisfy. 2. The Equivalence of Π^p and Π̂: Based on the conditions above, we move on to the second step. Recall that in this step our goal is to prove that Π^p, which is a projection of the optimal fractional solution Π^f, equals Π̂. We formalize this notion in Lemma <ref>, the proof of which carries the main idea of the second step. Let Π^p be the solution computed by Algorithm 2 and Π̂ be defined as in Theorem <ref>. Under the conditions stated in the theorem, Π^p=Π̂. As the optimal fractional solution satisfies Π^f=𝐐Π̂, we first show that 𝐐 (or 𝐅) is sufficiently close to the identity matrix 𝐈, from which, using the property of the projection process, we obtain that Π^p is identical to Π̂. We achieve this by treating the linear system consisting of Equations (<ref>) and (<ref>) as a perturbed version of 𝐅_ij=0 for i≠ j, ∑_k𝐅_ik𝐯_k=𝐯_i, the solution of which is clearly 𝐈. Then, using the result on the stability of perturbed linear systems <cit.> presented in Lemma <ref> below, together with the conditions in Theorem <ref>, we can bound the difference between 𝐅 and 𝐈. (Theorem 1 in <cit.>) Let ‖·‖ be any p-norm. For two linear systems 𝐃𝐱=𝐛 and 𝐃̃𝐱=𝐛̃, let 𝐱_0 and 𝐱 be their solutions. If ‖𝐃-𝐃̃‖‖𝐃^-1‖<1, then we have
‖𝐱-𝐱_0‖/‖𝐱_0‖ ≤ (‖𝐃‖‖𝐃^-1‖/(1-‖𝐃-𝐃̃‖‖𝐃^-1‖))·{‖𝐃-𝐃̃‖/‖𝐃‖+‖𝐛-𝐛̃‖/‖𝐛‖}.
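As a quick numerical sanity check of this perturbation bound (our own illustration on a random, well-conditioned instance; not part of the proof), the inequality can be verified directly in the 2-norm:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
D = np.eye(n) + 0.01 * rng.standard_normal((n, n))   # well-conditioned system matrix
b = rng.standard_normal(n)
dD = 1e-3 * rng.standard_normal((n, n))              # perturbations of D and b
db = 1e-3 * rng.standard_normal(n)

x0 = np.linalg.solve(D, b)                           # solution of D x = b
x = np.linalg.solve(D + dD, b + db)                  # solution of the perturbed system

nrm = lambda M: np.linalg.norm(M, 2)                 # spectral norm for matrices
Dinv = np.linalg.inv(D)
assert nrm(dD) * nrm(Dinv) < 1                       # hypothesis of the lemma
lhs = np.linalg.norm(x - x0) / np.linalg.norm(x0)
rhs = (nrm(D) * nrm(Dinv) / (1 - nrm(dD) * nrm(Dinv))) \
      * (nrm(dD) / nrm(D) + np.linalg.norm(db) / np.linalg.norm(b))
assert lhs <= rhs
```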
Denoting by 𝐟=(𝐅_11,…,𝐅_1n,…,𝐅_n1,…,𝐅_nn)^T the row-stacked vector representation of 𝐅, we can rewrite the perturbed system as (𝐃+𝐍)𝐟=𝐛 and the original unperturbed system as 𝐃𝐟=𝐛, with 𝐃=diag{𝐃_1,…,𝐃_n} being an n^2× n^2 block-diagonal matrix, where each 𝐃_i is an n× n block consisting of the identity matrix with the ith row replaced by the vector 𝐯^T. 𝐍 is also an n^2× n^2 block-diagonal matrix, with the n× n blocks 𝐍_i being matrices with elements (𝐍_i)_jk=s_jk^i+t_jk^i+w_jk^i+r_ij/n. And 𝐛 is an n^2×1 vector with the [(i-1)(n+1)+1]-st elements equal to 𝐯_i and all other elements equal to 0. Using Lemma <ref> on the perturbed system and the unperturbed one, with ‖·‖ taken as the 2-norm (Euclidean norm), we obtain that ‖𝐟-𝐟_0‖≤‖𝐟_0‖‖𝐃^-1‖‖𝐍‖/(1-‖𝐃^-1‖‖𝐍‖), where 𝐟_0 is the row-stacked vector representation of 𝐈. Therefore, to derive the upper bound for the difference between 𝐅 and 𝐈, we need to further upper bound the RHS of Inequality (<ref>). The technique we use here is harnessing the special structure of 𝐃 and 𝐍, so that we can derive bounds for ‖𝐃^-1‖ and ‖𝐍‖ expressed as functions of the variables {s},{t},{w} and {r}. By further associating these variables with the spectral parameters δ,ϵ_1,ϵ_2, etc. defined in the theorem, we obtain an upper bound for the RHS of Inequality (<ref>) that depends on those spectral parameters. Due to space limitations, we defer the detailed derivation of the upper bound to Appendix <ref>. Based on the upper bound, we have that if the conditions in the theorem are satisfied, then ‖𝐅-𝐈‖_F=‖𝐟-𝐟_0‖≤1/2. Since ‖Π^f-Π̂‖_F=‖𝐐Π̂-Π̂‖_F=‖(𝐐-𝐈)Π̂‖_F=‖𝐐-𝐈‖_F=‖𝐅-𝐈‖_F≤1/2, the entry-wise difference between Π^f and Π̂ is less than 1/2. Thus, the projection process in Algorithm 2 is bound to project Π^f to Π̂, which concludes the second step, i.e., the proof of Lemma <ref>. 3. Optimality of Π̂: Now we proceed to the final step and prove that Π̂=Π^* by contradiction. Suppose there exists some permutation matrix Π'≠Π̂ with ‖Π'Ã-𝐁̃Π'‖_F<‖Π̂Ã-𝐁̃Π̂‖_F. Then, we consider 𝐁̃=𝐁̃_0+𝐑' with 𝐁̃_0=Π'ÃΠ'^T. Obviously, 𝐑' satisfies the conditions in Theorem <ref>. Hence, by Lemma <ref>, we should have that the solution Π^p computed by Algorithm 2 equals Π'. However, we also have Π^p=Π̂, which leads to a contradiction. Thus, Π̂ is the optimal solution to 𝐏3, which finishes the proof of the theorem. Time Complexity: In the first stage of Algorithm 2, we use the primal interior point algorithm proposed in <cit.> to solve the instance of 𝐏3', which has a time complexity of O(N^3)=O(n^6), where N=n^2 is the number of variables in the instance. The projection process of the second stage can be implemented in O(n^2) time. Thus, the total time complexity of Algorithm 2 is O(n^6). Note that this result is only a worst-case guarantee and the average time complexity of Algorithm 2 is much lower <cit.>. § DE-ANONYMIZATION WITH UNILATERAL COMMUNITY INFORMATION In this section, we investigate the de-anonymization problem with unilateral community information, i.e., when the adversary only possesses the community assignment function of the published network G_1. Following the path of the bilateral de-anonymization in Sections <ref> and <ref>, we will give the corresponding results we obtain for the unilateral case. Through comparisons of these results and illustrations in our later experiments, we demonstrate that de-anonymization with only unilateral community information is harder than that with bilateral community information, which shows the importance of community assignment as side information.
§.§ MAP-based Cost Function We first derive our cost function in the unilateral case. Again, according to the definition of MAP estimation, given the published network G_1, the auxiliary network G_2, parameters θ and the community assignment function c of G_1, the MAP estimate π̂ of the correct mapping π_0 is defined as:
π̂=max_π∈Π Pr(π_0=π | G_1,G_2,c,θ),
where Π denotes the set of all bijective mappings from V_1 to V_2. Note that in the unilateral case we have no prior knowledge of the community assignment of G_2. Consequently, we cannot restrict Π to the set of mappings that observe the community assignment. Due to the space limit, we omit the processing of the MAP estimator (<ref>) and present the detailed steps in Appendix <ref>. After a sequence of manipulations, we arrive at the following equation for the calculation of the MAP estimate:
π̂=min_π∈Π{∑_i<j^n w_ij(1{(i,j)∉ E_1,(π(i),π(j))∈ E_2})}≜min_π∈ΠΔ_π,
where w_ij=log((1-p_c(i)c(j)(s_1+s_2-s_1s_2))/(p_c(i)c(j)(1-s_1)(1-s_2))). Note that, different from the bilateral case, the cost function in the unilateral case is equivalent to the single-sided weighted edge disagreements induced by a mapping. This subtle difference has crucial implications for our analysis of the algorithmic aspect of unilateral de-anonymization. §.§ Validity of the Cost Function Following the same thread of thought, we proceed to justify the MAP estimation used in unilateral de-anonymization. Using a similar proof technique, we derive the same result for the cost function in the unilateral case as in the bilateral one. Let α=min_ab p_ab, β=max_ab p_ab, w̅=max_ij w_ij and w̲=min_ij w_ij. Assume that α,β→0 and s_1,s_2 do not go to 1 as n→∞, and that logα/logβ≤γ. Furthermore, suppose that α(1-β)^2s_1^2s_2^2log(1/α)/(s_1+s_2)=Ω(γlog^2 n/n)+ω(1/n); then the MAP estimate π̂ in the unilateral case almost surely equals the correct mapping π_0 as n→∞. The proof is basically identical to the proof of Theorem <ref>. The only difference here is that we redefine X_ij as a Bernoulli random variable with mean p_ij s_1(1-p_π(i)π(j)s_2) and Y_ij as a Bernoulli random variable with mean p_ij s_1(1-s_2). Then, by using the same bounding technique for Pr{X_π-Y_π≤0}, we conclude the same result for the cost function in the unilateral case. Theorems <ref> and <ref> show that the cost function based on MAP estimation is equally effective in de-anonymization with bilateral and unilateral community information. However, as we will show in the sequel, the feasibility of the cost function in the unilateral case is weaker than in the bilateral case. §.§ Algorithmic Aspect In this section, we investigate the algorithmic aspect of de-anonymization with unilateral community information and propose corresponding algorithms as in the bilateral case. §.§.§ The Unilateral MAP-ESTIMATE Problem We first formally introduce the combinatorial optimization problem induced by minimizing the cost function in unilateral de-anonymization. (The UNI-MAP-ESTIMATE Problem) Given two graphs G_1(V,E_1,𝐀) and G_2(V,E_2,𝐁), the community assignment function c of G_1 and weights {w}, the goal is to compute a mapping π̂:V_1↦ V_2 that satisfies
π̂=min_π∈Π{∑_i<j^n w_ij(1{(i,j)∉ E_1,(π(i),π(j))∈ E_2})}≜min_π∈ΠΔ_π,
where Π={π:V_1↦ V_2}.
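Reusing the numpy conventions and the edge_weights helper from our earlier bilateral sketch, the unilateral cost differs only in counting one-sided disagreements (again our own illustration, not the paper's code):

```python
def unilateral_cost(A, B, w, pi):
    """Delta_pi in the unilateral case: non-edges of G1 mapped onto edges of G2."""
    B_pi = B[np.ix_(pi, pi)]
    one_sided = (1 - A) * B_pi           # 1{(i,j) not in E1, (pi(i),pi(j)) in E2}
    iu = np.triu_indices_from(A, k=1)    # strict i < j, per the definition above
    return float(np.sum(w[iu] * one_sided[iu]))
```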
Similar to the bilateral de-anonymization, we require the weights {w} to be induced by well-defined community affinity values {p}, s_1 and s_2, though the latter are not explicitly given. Due to the asymmetry of Δ_π in unilateral de-anonymization, intuitively, the UNI-MAP-ESTIMATE problem may bear higher approximation hardness than the BI-MAP-ESTIMATE problem in bilateral de-anonymization. The proposition we present below consolidates this intuition. The UNI-MAP-ESTIMATE problem is NP-hard. Moreover, there is no polynomial time (pseudo-polynomial time) approximation algorithm for UNI-MAP-ESTIMATE with any multiplicative approximation guarantee unless P=NP (NP⊆ DTIME(n^polylog n)). The proof is done by reduction from the k-CLIQUE problem. Given a graph G(V,E), the k-CLIQUE problem asks whether there exists a clique of size no smaller than k in G. The main idea of the reduction is the following: given an instance of k-CLIQUE with G(V,E) and k, we set G_1 as G and G_2 as a graph consisting of a clique of size k and (|V|-k) additional nodes. Setting w_ij=1 and c(v)=1 for all v in G_1, we have an instance of UNI-MAP-ESTIMATE. Obviously, if G contains a clique of size no less than k, the value Δ_π̂ of the optimal mapping π̂ in UNI-MAP-ESTIMATE will be zero. Therefore, in this case, any algorithm with a multiplicative approximation guarantee must find a mapping π with Δ_π=0. Furthermore, if G does not contain a clique of size no smaller than k, then any mapping π must satisfy Δ_π>0. Hence, a polynomial (pseudo-polynomial) time approximation algorithm for UNI-MAP-ESTIMATE with a multiplicative guarantee implies a polynomial (pseudo-polynomial) time algorithm for k-CLIQUE. Since the k-CLIQUE problem is NP-Complete, this justifies the approximation hardness of UNI-MAP-ESTIMATE as stated in the proposition. Note that NP-complete problems such as k-CLIQUE are at least as hard as the graph isomorphism problem, which implies that the approximation hardness result for UNI-MAP-ESTIMATE is stronger than that for BI-MAP-ESTIMATE. §.§.§ Additive Approximation Algorithm We design a similar approximation algorithm with an ϵ-additive approximation guarantee as in the bilateral case, by formulating the UNI-MAP-ESTIMATE problem in a quadratic assignment fashion as follows:
minimize ∑_i,j,k,l q_ijkl x_ik x_jl s.t. ∑_i x_ij=1 ∀ i∈ V_1; ∑_j x_ij=1 ∀ j∈ V_2; x_ij∈{0,1},
with the coefficients {q_ijkl} of the formulation defined as: q_ijkl=w_ij if (i,j)∉ E_1 and (k,l)∈ E_2, and q_ijkl=0 otherwise. Note that due to the absence of community assignment constraints, we can directly formulate the problem as a minimization one and omit the penalty factor used in bilateral de-anonymization. We then invoke the same QA-Rounding procedure on the formulated instance and convert the resulting solution {x} to its equivalent mapping π. Using a similar analysis technique as in Section <ref>, we have that the algorithm obtains solutions that have a gap of at most ϵ n^2 to the optimal ones in time O(n^O(log n/ϵ^2)+n^2). §.§.§ Convex Optimization-Based Heuristic We now proceed to present the heuristic based on convex optimization for the UNI-MAP-ESTIMATE problem, which relies on the following matrix formulation:
minimize ‖𝐖∘(Π𝐀-𝐁Π)‖_⌊F⌋^2 s.t. ∀ i∈ V_1, ∑_iΠ_ij=1; ∀ j∈ V_2, ∑_jΠ_ij=1; ∀ i,j, Π_ij∈{0,1},
where 𝐖 and ∘ share the same definitions as those in 𝐏3 and ‖·‖_⌊F⌋ is defined to be a variant of the Frobenius norm. Specifically, ‖𝐌‖_⌊F⌋=√(∑_i=1^n∑_j=1^n(1{𝐌_ij≤0}𝐌_ij^2)) for a matrix 𝐌, where only negative elements contribute to the value of the norm[Strictly speaking, ‖·‖_⌊F⌋ is not a norm, since ‖𝐌‖_⌊F⌋=0 does not imply 𝐌=0; we keep the norm notation for convenience, and the convexity arguments below are unaffected.].
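The truncated norm and the corresponding matrix objective are straightforward to evaluate; a minimal sketch of ours, continuing the numpy conventions used above, for a possibly fractional matrix P:

```python
def trunc_fro(M):
    """Truncated Frobenius norm: only negative entries of M contribute."""
    neg = np.minimum(M, 0.0)
    return np.sqrt(np.sum(neg ** 2))

def uni_matrix_objective(A, B, W, P):
    """Value of the matrix formulation's objective for a candidate P."""
    return trunc_fro(W * (P @ A - B @ P)) ** 2
```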
By relaxing the integral constraint (<ref>), we again arrive at an optimization problem, which is shown to be convex in Appendix <ref>. Our second algorithm for unilateral de-anonymization is to first solve the relaxed version of the matrix formulation of UNI-MAP-ESTIMATE and then project the fractional solution to an integral one. Unfortunately, due to the asymmetry of the operator ‖·‖_⌊F⌋, it is difficult to derive a closed-form expression for the gradient of the Lagrangian function of UNI-MAP-ESTIMATE. Thus, we cannot prove conditional optimality of the algorithm as we did for BI-MAP-ESTIMATE. We provide a summary of the differences between bilateral and unilateral de-anonymization from a higher level in Appendix <ref>. § EXPERIMENTS In this section, we present experimental validation of our theoretical results and evaluate the performance of the proposed algorithms. We first introduce our experimental settings and provide detailed results subsequently. §.§ Experimental Settings §.§.§ Experiment Datasets Recall that the two key assumptions made in the modeling are that the underlying social network is generated by the stochastic block model and that the published and the auxiliary networks are sampled from the underlying network. To validate our theoretical findings and meanwhile evaluate the proposed algorithms in real contexts, we conduct experiments on three different types of datasets, with each one closer to practical situations than the last by gradually relaxing the assumptions. (i) Synthetic Dataset: Following the stochastic block model, we generate three sets of networks with Poisson, power law and exponential expected degree distributions respectively, by properly assigning the community affinity values {p}. The size of each community is determined by adding a slight variation to the average community size, which equals the number of nodes divided by the number of communities. For each set of networks, we take the sampling probabilities of the published and the auxiliary networks as s_1=s_2 ranging from 0.3 to 0.9. As this dataset strictly observes the assumptions of our models, it provides direct validation of our theoretical results. (ii) Sampled Social Networks: The underlying social networks are extracted from the LiveJournal online social network <cit.>, with the communities following from the ground-truth communities in LiveJournal and the affinity values assigned to be proportional to the ratio of the number of edges between the communities over the number of nodes in the communities. The published and the auxiliary networks are sampled from the underlying networks, again with the sampling probabilities s_1=s_2 ranging from 0.3 to 0.9. This "semi-artificial" dataset lies in the middle between synthetic datasets and true cross-domain networks, which enables us to measure the robustness of our theoretical results against the restrictions imposed on the underlying social network. (iii) Co-authorship Networks: We extract four co-authorship networks in different areas from the Microsoft Academic Graph (MAG) <cit.>. From those, we construct a group of networks with equivalent sets of nodes (2053 nodes in each set) and set up the correspondence of nodes as ground truth based on the unique identifiers of authors in MAG. The communities are assigned based on the institution information of the authors (the affinity values in this case are assigned as in Sampled Social Networks).
The four networks are then combined into six pairs, in which one is set as the published network and the other as the auxiliary network. Without relying on any artificial assumptions for generating the published and auxiliary networks, these procedures enable us to construct the most genuine scenarios of de-anonymization from cross-domain social networks, which renders the dataset a touchstone for the applicability of our proposed algorithms. Note that our empirical results on the first two datasets are obtained by taking the average over 50 repeated experiments. The statistics of the datasets are summarized in Table <ref>. §.§.§ Algorithms Involved in Comparisons For both bilateral and unilateral de-anonymization, we run a genetic algorithm (GA-BI, GA-UNI) in the hope of finding the exact minimizer of our cost functions, i.e., the optimal solution of the BI-MAP-ESTIMATE and UNI-MAP-ESTIMATE problems. In both de-anonymization cases, we also evaluate the performance of our two proposed algorithms: the additive approximation algorithm (AA-BI, AA-UNI) and the convex optimization-based algorithm (CO-BI, CO-UNI). §.§.§ Performance Metrics The two performance metrics we calculate in the experiments are the accuracy of the mappings yielded by the algorithms and the values of the cost function Δ_π of the mappings. The accuracy of a mapping π is defined as the portion of the nodes that π maps correctly (i.e., as in the ground-truth correct mapping) over the total number of nodes. Since we are not interested in the absolute values of the cost function of the mappings, we calculate the relative value with respect to the cost function of the mappings produced by GA, i.e., for a mapping π and the mapping π_GA produced by GA, π's relative value is computed as (Δ_π-Δ_π_GA)/Δ_π_GA. Due to space limitations, we defer all the graphical representations of the results on the mappings' cost function to Appendix <ref>. §.§ Experiment Results §.§.§ Synthetic Networks We plot the performance of the aforementioned algorithms on synthetic networks with {500, 1000, 1500, 2000} nodes in Figures <ref> and <ref>, based on which we have the following observations: (i) Both GA-BI and GA-UNI exhibit good performance, achieving a de-anonymization accuracy close to 1 when the sampling probability is large in networks with Poisson and power law degree distributions; (ii) The relative value of the correct mapping (TRUE-BI, TRUE-UNI) is fairly small. Hence, we conclude that, when the sampling probability is large, the cost function based on MAP estimation is an effective metric in both bilateral and unilateral de-anonymization, and is applicable to a wide range of degree distributions, which justifies our theoretical results on the validity of the MAP estimate. However, when the sampling probability is small (e.g., s=0.3, 0.4) or the expected degree distribution has large variation (exponential distribution), the accuracy of GA degrades substantially, only achieving a value of less than 0.4. This can be attributed to the fact that when the sampling probability becomes small, the published and the auxiliary networks have a lower degree of structural similarity and the parameters deviate from the conditions in our theoretical results. In terms of the two algorithms we propose, we can see that they obtain good performance with respect to both approximately minimizing the cost function and unraveling the correct mapping, with AA superior to CO especially in the low-sampling-probability regime.
Note that although the relative value of the two algorithms is large in the high-sampling-probability regime, this does not imply poor performance of the algorithms, but is mainly due to the optimal Δ_π becoming considerably small as the similarity of G_1 and G_2 grows. §.§.§ Sampled Social Networks Figures <ref> and <ref> plot our empirical results on the second dataset, where the published and auxiliary networks are sampled from real social networks with the number of nodes set as {500, 1000, 1500, 2000}. As demonstrated by Figures <ref> and <ref>, although in this case the underlying social networks do not follow the stochastic block model, through minimizing the cost function we can still reveal a large proportion (up to 80%) of the correct mapping, which demonstrates the robustness of the cost function we proposed. Furthermore, the two algorithms AA and CO still achieve a reasonable accuracy of up to 0.7, which is not surprising since the cost function they seek to minimize is still effective in this case. However, a minor defect is that the accuracy of AA can be higher than that of GA at some points. This reflects that the deviation of real-life social networks from the stochastic block model more or less influences the quality of the MAP estimate. §.§.§ Cross-domain Co-authorship Networks As stated in the experimental setup, we extract four groups of cross-domain co-authorship networks, named Networks A, B, C, D, and thus construct six scenarios for social network de-anonymization[We do not distinguish the interchange of the published and auxiliary networks as different scenarios.]. We evaluate the performance of the algorithms on the six scenarios and show the results in Figures <ref> and <ref>. The figures present several observations and implications: (i) The proposed cost functions still serve as meaningful media for recovering the correct mapping even in realistic scenarios, as the relative value of the correct mapping is close to zero and GA achieves an average accuracy of 67.3% in the bilateral case and 59.0% in the unilateral case; (ii) The two proposed algorithms still enjoy reasonable accuracy, with AA successfully de-anonymizing 60.8% of nodes in the bilateral case and 51.5% of nodes in the unilateral case, and CO successfully de-anonymizing 44.4% of nodes in the bilateral case and 35.9% of nodes in the unilateral case. Therefore, the two algorithms can be qualified as effective methods for seedless social network de-anonymization, which implies that the privacy of current anonymized networks still suffers from adversarial attacks even when pre-mapped seeds are unavailable; (iii) The performance of CO is the most susceptible to the structure of networks among all three algorithms, as the standard deviation of its accuracy on the six scenarios is above 3.5% (3.51% for CO-BI, 3.81% for CO-UNI) while the counterparts of the other two algorithms are below 3.0%. §.§.§ Significance of Community Information A notable phenomenon in all the experiments is that the accuracy of the algorithms in bilateral de-anonymization is higher than that in unilateral de-anonymization, especially for AA and CO. According to the experimental results, the gap is at least 3.5% in each setting and can reach up to 15% in the worst case. This, from an empirical point of view, demonstrates the importance of community information for social network de-anonymization. § CONCLUSION In this paper, we have presented a comprehensive study of the community-structured social network de-anonymization problem.
Integrating the clustering effect of the underlying social network into our models, we have derived a well-justified cost function based on MAP estimation. To further consolidate the validity of this cost function, we have shown that under certain mild conditions, the minimizer of the cost function indeed coincides with the correct mapping. Subsequently, we have investigated the feasibility of the cost function algorithmically, by first proving the approximation hardness of the optimization problem induced by the cost function and then proposing two algorithms with their respective performance guarantees, resolving the interweaving of cost function, network topology and candidate mappings through relaxation techniques. All our theoretical findings have been empirically validated through both synthetic and real datasets, with a notable dataset being a set of rare true cross-domain networks that reconstruct a genuine context of social network de-anonymization.
§ REFERENCES
[SNAP] J. Leskovec and A. Krevl, "SNAP Datasets: Stanford Large Network Dataset Collection", <http://snap.stanford.edu/data>, 2014.
[targeted-advertising] E. Bakshy, D. Eckles, R. Yan and I. Rosenn, "Social influence in social advertising: evidence from field experiments", in Proc. ACM EC, pp. 146-161, 2012.
[social-privacy] W. Wang, L. Ying and J. Zhang, "The value of privacy: strategic data subjects, incentive mechanisms and fundamental limits", in Proc. ACM SIGMETRICS, pp. 249-260, 2016.
[de-anonymization] A. Narayanan and V. Shmatikov, "De-anonymizing social networks", in IEEE Symposium on Security and Privacy, pp. 173-187, 2009.
[seedless] P. Pedarsani and M. Grossglauser, "On the privacy of anonymized networks", in Proc. ACM SIGKDD, pp. 1235-1243, 2011.
[xiaoyang] L. Kong, L. He, X-Y. Liu, Y. Gu, M-Y. Wu and X. Liu, "Privacy-preserving compressive sensing for crowdsensing based trajectory recovery", in Proc. IEEE ICDCS, pp. 31-40, 2015.
[allerton] E. Kazemi, L. Yartseva and M. Grossglauser, "When can two unlabeled networks be aligned under partial overlap?", in IEEE 53rd Annual Allerton Conference on Communication, Control, and Computing, pp. 33-42, 2015.
[improved-bound] D. Cullina and N. Kiyavash, "Improved achievability and converse bounds for Erdős-Rényi graph matching", in Proc. ACM SIGMETRICS, pp. 63-72, 2016.
[ER-Graph] P. Erdős and A. Rényi, "On random graphs", in Publicationes Mathematicae, pp. 290-297, 1959.
[ChungLu-Graph] F. Chung and L. Lu, "The average distance in a random graph with given expected degrees", in Internet Mathematics, Vol. 1, No. 1, pp. 91-113, 2003.
[shouling1] S. Ji, W. Li, M. Srivatsa and R. Beyah, "Structural data de-anonymization: Quantification, practice, and implications", in Proc. ACM CCS, pp. 1040-1053, 2014.
[arxiv-community] E. Onaran, S. Garg and E. Erkip, "Optimal de-anonymization in random graphs with community structure", arXiv preprint arXiv:1602.01409, 2016.
[shouling2] S. Ji, W. Li, N. Z. Gong, P. Mittal and R. Beyah, "On your social network de-anonymizability: Quantification and large scale evaluation with seed knowledge", in NDSS, 2015.
[blockmodel] A. Decelle, F. Krzakala, C. Moore and L. Zdeborová, "Asymptotic analysis of the stochastic block model for modular networks and its algorithmic applications", in Physical Review E, Vol. 84, No. 6, pp. 066106, 2011.
[model] M. Newman, "Networks: an introduction", Oxford University Press, 2010.
[MAG] Microsoft Academic Graph, <https://www.microsoft.com/en-us/research/project/microsoft-academic-graph/>.
[percolation-matching] L. Yartseva and M. Grossglauser, "On the performance of percolation graph matching", in Proc. ACM COSN, pp. 119-130, 2013.
[vldb1] E. Kazemi, S. H. Hassani and M. Grossglauser, "Growing a graph matching from a handful of seeds", in Proc. the VLDB Endowment, pp. 1010-1021, 2015.
[garetto1] C. Chiasserini, M. Garetto and E. Leonardi, "Social network de-anonymization under scale-free user relations", in IEEE/ACM Trans. on Networking, Vol. 24, No. 6, pp. 3756-3769, 2016.
[vldb] N. Korula and S. Lattanzi, "An efficient reconciliation algorithm for social networks", in Proc. the VLDB Endowment, pp. 377-388, 2014.
[garetto2] C. Chiasserini, M. Garetto and E. Leonardi, "Impact of clustering on the performance of network de-anonymization", in Proc. ACM COSN, pp. 83-94, 2015.
[community] M. Girvan and M. Newman, "Community structure in social and biological networks", in Proc. the National Academy of Sciences, Vol. 99, No. 12, pp. 7821-7826, 2002.
[MAP-base] G. Fanti, P. Kairouz, S. Oh, K. Ramchandran and P. Viswanath, "Rumor Source Obfuscation on Irregular Trees", in Proc. ACM SIGMETRICS, pp. 153-164, 2016.
[QAP] S. Arora, A. Frieze and H. Kaplan, "A new rounding procedure for the assignment problem with applications to dense graph arrangement problems", in Mathematical Programming, Vol. 92, No. 1, pp. 1-36, 2012.
[convex] Y. Aflalo, A. Bronstein and R. Kimmel, "On convex relaxation of graph isomorphism", in Proc. the National Academy of Sciences, Vol. 112, No. 10, pp. 2942-2947, 2015.
[perturb] J. R. Bunch, "The weak and strong stability of algorithms in numerical linear algebra", in Linear Algebra and Its Applications, Vol. 88, pp. 49-66, 1987.
[polysolution] D. Goldfarb and S. Liu, "An O(n^3L) primal interior point algorithm for convex quadratic programming", in Mathematical Programming, Vol. 49, No. 1, pp. 325-340, 1990.
[livejournal] J. Yang and J. Leskovec, "Defining and evaluating network communities based on ground-truth", in Knowledge and Information Systems, Vol. 42, No. 1, pp. 181-213, 2015.
[chernoffbound] P. Raghavan, "Probabilistic construction of deterministic algorithms: Approximating packing integer programs", in Journal of Computer and System Sciences, Vol. 37, No. 2, pp. 130-143, 1988.
§ PROOF OF THEOREM 4.1 The method we use here is similar to that in <cit.>. Recall that for a mapping π, we define Δ_π=∑_i≤ j^n w_ij|1{(i,j)∈ E_1}-1{(π(i),π(j))∈ E_2}|. The proof can be briefly divided into two major steps. The first is to derive an upper bound for the expectation of the number of (incorrect) mappings π with Δ_π≤Δ_π_0. The second is to show that the derived upper bound converges to 0 under the conditions stated in the theorem, as n→∞. Based on that, the proof can be concluded, as the number of mappings π with Δ_π≤Δ_π_0 goes to 0, i.e., the correct mapping π_0 is the unique minimizer of Δ_π as n→∞. Now we turn to the first step as follows: 1. Derivation of the Upper Bound: We define Π_k as the set of all the mappings in Π that map k nodes incorrectly. Obviously, Π_0={π_0}. Now we have |Π_k|≤\binom{n}{k}(k!/2)≤ n^k. We subsequently define S_k as a random variable representing the number of incorrect mappings in Π_k whose value of the cost function is no larger than Δ_π_0. Formally, S_k is given by S_k=∑_π∈Π_k1{Δ_π≤Δ_π_0}. Summing over all k, we denote S=∑_k=2^n S_k as the total number of incorrect mappings that induce a cost no larger than that of the correct mapping π_0.
The mean of S can be calculated as:
𝔼[S]=∑_k=2^n𝔼[S_k]=∑_k=2^n∑_π∈Π_k𝔼[1{Δ_π≤Δ_π_0}]=∑_k=2^n∑_π∈Π_kPr{Δ_π-Δ_π_0≤0}≤∑_k=2^n n^k max_π∈Π_kPr{Δ_π-Δ_π_0≤0}.
For a mapping π, let V_π be the set of vertices that it maps incorrectly. Then, we define E_π=V_π× V, i.e., the set of node pairs with one or two vertices mapped incorrectly under π. For a π∈Π_k, we have |E_π|=nk-k^2/2-k/2. As every node pair in V× V∖E_π is mapped identically under π and π_0, such pairs contribute equally to Δ_π_0 and Δ_π. Next, we define two random variables for π as
X_π=∑_(i,j)∈ E_π w_ij|1{(i,j)∈ E_1}-1{(π(i),π(j))∈ E_2}|, Y_π=∑_(i,j)∈ E_π w_ij|1{(i,j)∈ E_1}-1{(i,j)∈ E_2}|.
It is easy to verify that Δ_π-Δ_π_0=X_π-Y_π for all π, where Y_π is the value of the cost function contributed by node pairs in E_π under the correct permutation. For a node pair (i,j), the probability that it contributes to Y_π equals p_c(i)c(j)(s_1+s_2-2s_1s_2). Therefore, Y_π is a weighted sum of independent Bernoulli random variables. For X_π, assume that π has ϕ≥0 transpositions[If a mapping π has a transposition on i,j, it means that π(i)=j and π(j)=i.]; then each transposition induces one invariant node pair in E_π. The remaining node pairs are not invariant under π, i.e., they are mapped incorrectly under π. Each node pair (i,j) contributes w_ij to X_π if (i,j)∈ E_1 and (π(i),π(j))∉ E_2 or vice versa. This happens with probability p_c(i)c(j)(s_1+s_2-2p_c(i)c(j)s_1s_2). Note that the random variables for the node pairs are not independent. As in <cit.>, we conservatively ignore the positive correlation and get a lower bound of X_π, which is a weighted sum of independent Bernoulli random variables. Also, since transpositions in π can only occur among nodes in V_π, we have ϕ≤ k/2. Now, denote X_ij as a Bernoulli random variable with mean p_c(i)c(j)(s_1+s_2-2p_c(i)c(j)s_1s_2) and Y_ij as a Bernoulli random variable with mean p_c(i)c(j)(s_1+s_2-2s_1s_2). Based on the above manipulations, we can get a lower bound of X_π and an upper bound of Y_π as follows:
X_π ≥_(stoch.) ∑_(i,j)∈ E_π∖ϕ w_ijX_ij ≜ X_π', Y_π ≤_(stoch.) ∑_(i,j)∈ E_π w_ijY_ij ≜ Y_π',
where ≥_(stoch.) denotes stochastic domination and E_π∖ϕ denotes E_π with the ϕ invariant pairs excluded. Therefore, we can use the probability of the event {X_π'-Y_π'≤0} to upper bound the probability of the event {X_π-Y_π≤0}. Denoting λ_X as the expectation of X_π' and λ_Y as the expectation of Y_π', the bound we use for Pr{X_π-Y_π≤0} is summarized in the following lemma. For every mapping π, the random variables X_π and Y_π satisfy Pr{X_π-Y_π≤0}≤2exp(-(λ_X-λ_Y)^2/(12(λ_X+λ_Y))). First, we have that for all π, Pr{X_π-Y_π≤0}≤Pr{X_π'-Y_π'≤0}≤Pr{Y_π'≥(λ_X+λ_Y)/2}+Pr{X_π'≤(λ_X+λ_Y)/2}. Then we invoke Lemma <ref> (Theorems 1 and 2 in <cit.>), which presents Chernoff-type bounds for weighted sums of independent Bernoulli variables. (Theorems 1 and 2 in <cit.>) Let a_1,a_2,…,a_r be positive real numbers and let X_1,…,X_r be independent Bernoulli trials with 𝔼[X_j]=p_j. Defining the random variable Ψ=∑_j=1^r a_jX_j with 𝔼[Ψ]=∑_j=1^r a_jp_j=m, we have Pr{Ψ≥(1+δ)m}≤exp(-mδ^2/3) and Pr{Ψ≤(1-δ)m}≤exp(-mδ^2/2). Using Lemma <ref> by treating X_π' and Y_π' as weighted (by w_ij) sums of the random variables X_ij (resp. Y_ij), we obtain Pr{Y_π'≥(λ_X+λ_Y)/2}≤exp(-(λ_X-λ_Y)^2/(12(λ_X+λ_Y))) and Pr{X_π'≤(λ_X+λ_Y)/2}≤exp(-(λ_X-λ_Y)^2/(8(λ_X+λ_Y))). Hence, we have Pr{X_π-Y_π≤0}≤2exp(-(λ_X-λ_Y)^2/(12(λ_X+λ_Y))). We now proceed to derive a lower bound for the numerator and an upper bound for the denominator in the exponent of the RHS of Inequality (<ref>), so as to obtain an upper bound on the RHS.
By standard calculation, we have
(λ_X-λ_Y)^2 ≥ (2∑_(i,j)∈ E_π∖ϕ w_ij p_c(i)c(j)(1-p_c(i)c(j))s_1s_2-kw̅β(s_1+s_2-2s_1s_2)/2)^2 ≥ (k^2/4)[4(n-k/2-1)w̲α(1-β)s_1s_2-w̅β(s_1+s_2-2s_1s_2)]^2,
and
λ_X+λ_Y ≤ ∑_(i,j)∈ E_π[w_ij p_c(i)c(j)(s_1+s_2-2s_1s_2)+w_ij p_c(i)c(j)(s_1+s_2-2p_c(i)c(j)s_1s_2)] ≤ 2∑_(i,j)∈ E_π w_ij p_c(i)c(j)(s_1+s_2) ≤ 2(nk-k^2/2-k)w̅α(s_1+s_2).
Therefore, by Lemma <ref>, Pr{X_π-Y_π≤0} can be upper bounded by
Pr{X_π-Y_π≤0} ≤ 2exp[-(λ_X-λ_Y)^2/(12(λ_X+λ_Y))] ≤ 2exp{-k^2[4(n-k/2-1)w̲α(1-β)s_1s_2-w̅β(s_1+s_2-2s_1s_2)]^2/(96(nk-k^2/2-k)w̅α(s_1+s_2))} ≤ exp{-k^2[(n-k/2-1)w̲α(1-β)s_1s_2]^2/(6(nk-k^2/2-k)w̅α(s_1+s_2))},
where Inequality (<ref>) follows from the conditions stated in the theorem. 2. Convergence of the Upper Bound: Now, we further show that the derived upper bound converges to 0 as n→∞. Due to the monotonicity of w_ij with respect to p_c(i)c(j), we easily obtain that w̅=log((1-α(s_1+s_2-2s_1s_2))/(α(1-s_1)(1-s_2))) and w̲=log((1-β(s_1+s_2-2s_1s_2))/(β(1-s_1)(1-s_2))). Hence, w̅ and w̲ can be determined by α,β,s_1,s_2. Plugging Inequality (<ref>) into Inequality (<ref>), we have
𝔼[S] ≤ 2∑_k=2^n n^k·exp(-k^2[(n-k/2-1)w̲α(1-β)s_1s_2]^2/(6(nk-k^2/2-k)w̅α(s_1+s_2))) ≤ ∑_k=2^∞exp{k(-[(n-k/2-1)w̲α(1-β)s_1s_2]^2/(6(n-k/2-1)w̅α(s_1+s_2))+log n)} ≤ ∑_k=2^∞exp{k(-(n-k/2-1)w̲^2α^2(1-β)^2s_1^2s_2^2/(6w̅α(s_1+s_2))+log n)}.
Since α,β→0 and logα/logβ≤γ, we also have w̅/w̲≤γ'=Θ(γ) and w̅=Θ(log(1/α)), where γ' may be a function of γ. Hence, we have, for some constant C,
𝔼[S] ≤ ∑_k=2^∞exp{k(-Cnα^2(1-β)^2s_1^2s_2^2log(1/α)/(γ'^2α(s_1+s_2))+log n)}.
Therefore, if α(1-β)^2s_1^2s_2^2log(1/α)/(s_1+s_2)=Ω(γlog^2 n/n)+ω(1/n), the sum of the above geometric series goes to zero as n goes to infinity. Therefore, 𝔼[S]→0. Hence, with the conditions in Theorem <ref> satisfied, the MAP estimate π̂ coincides with the correct mapping π_0 with probability going to 1 as n goes to infinity. § SUPERIORITY OF OUR COST FUNCTION In this section, we compare our cost functions with previous ones proposed in the literature. Specifically, we demonstrate the superiority of our cost function in the bilateral case over the most similar previous cost function, proposed by Pedarsani et al. <cit.>. Recall that the cost function derived in <cit.>, which we denote as Δ'_π, is Δ'_π=∑_i≤ j^n|1{(i,j)∈ E_1}-1{(π(i),π(j))∈ E_2}|. The advantages of our cost function are two-fold. First, Δ'_π, as an unweighted version of our proposed Δ_π, corresponds to the MAP estimator in bilateral de-anonymization when the underlying social network is an Erdős-Rényi graph. Therefore, our cost function, in a sense, subsumes the cost function in <cit.> as a special case in bilateral de-anonymization, and has more generality when the underlying network is non-uniform or the adversary only possesses unilateral community information. Second, we show that in certain cases, the correct mapping π_0 is the unique minimizer of Δ_π, while it is not the unique minimizer of Δ'_π. Indeed, when the underlying social network is as shown in Figure <ref>, and the sampling probabilities satisfy s_1=s_2≤γ'/2, with γ' defined as in the proof of Theorem <ref>, we have that the unique minimizer of Δ_π asymptotically almost surely coincides with π_0 by Theorem <ref>. However, as Δ'_π does not weight node pairs, in each realization of G_1 and G_2 there exists a mapping π' that permutes π_0 (the minimizer of Δ_π) on some nodes in C_3 with Δ'_π'≤Δ'_π_0.
Therefore, in this case, the minimizer of Δ'_π does not equal π_0, which demonstrates that Δ_π has wider applicability. § UPPER BOUND OF INEQUALITY (19) To present the upper bound of Inequality (<ref>), we begin by bounding ‖𝐃^-1‖ and ‖𝐍‖. First, by the special block-diagonal structure of 𝐃, we readily have that 𝐃^-1 is also block-diagonal, with each n× n diagonal block being 𝐃_i^-1, which is the identity matrix with the ith row replaced by (1/𝐯_i)(-𝐯_1,…,-𝐯_i-1,1,-𝐯_i+1,…,-𝐯_n). We have ‖𝐃_i^-1‖≤1+√(n)ϵ_1/ϵ_2 for all i. Hence, ‖𝐃^-1‖≤max_i=1…n‖𝐃_i^-1‖≤1+√(n)ϵ_1/ϵ_2. Similarly, we obtain
‖𝐍‖^2 ≤ max_i=1…n‖𝐍_i‖_F^2 = max_i=1…n∑_jk(s_jk^i+t_jk^i+w_jk^i+r_ij/n)^2 ≤ 4(max_i=1…n∑_jk(s_jk^i)^2+max_i=1…n∑_jk(t_jk^i)^2+max_i=1…n∑_jk(w_jk^i)^2+max_i,j=1…n∑_k(r_ij/n)^2).
Next, we bound the terms s_jk^i, t_jk^i, w_jk^i and r_ij one by one in the following inequalities:
max_i=1…n∑_jk(s_jk^i)^2 = max_i=1…n∑_jk(1/(λ_i-λ_j)^4)(𝐄_kj(λ_j+λ_k-2λ_i)-(𝐯_j/𝐯_i)𝐄_ki(λ_k-λ_i))^2 ≤ max_i=1…n(1/δ^4)∑_jk(4σ|𝐄_kj|+2σ(ϵ_1/ϵ_2)|𝐄_ki|)^2 ≤ (4σ^2/δ^4)(1+2ϵ_1/ϵ_2)^2ξ^2,
max_i=1…n∑_jk(t_jk^i)^2 = max_i=1…n∑_jk(1/(λ_i-λ_j)^4)(𝐆_kj-(𝐯_j/𝐯_i)𝐆_ki)^2 ≤ (1/δ^4)∑_jk(|𝐆_kj|+2(ϵ_1/ϵ_2)|𝐆_ki|)^2 ≤ (1/δ^4)(1+2ϵ_1/ϵ_2)^2‖𝐆‖_F^2 ≤ (1/δ^4)(1+2ϵ_1/ϵ_2)^2ξ^4,
max_i=1…n∑_jk(w_jk^i)^2 = max_i=1…n∑_jk(μ^2/(λ_i-λ_j)^4)(𝐌'_kj-(𝐯_j/𝐯_i)𝐌'_ki)^2 ≤ (μ^2/δ^4)∑_jk(|𝐌'_kj|+2(ϵ_1/ϵ_2)|𝐌'_ki|)^2 ≤ (μ^2/δ^4)(1+2ϵ_1/ϵ_2)^2‖𝐌'‖_F^2 ≤ (μ^2/δ^4)(1+2ϵ_1/ϵ_2)^2M^2,
max_i,j=1…n∑_k(r_ij/n)^2 = max_i,j=1…n(μ^2/(n(λ_i-λ_j)^4))(𝐯_j𝐌'_ii-𝐯_i𝐌'_ij)^2 ≤ (4ϵ_1^2μ^2/(nδ^4))‖𝐌'‖_F^2 ≤ (4ϵ_1^2μ^2/(nδ^4))M^2.
From the above manipulations, we have
‖𝐍‖^2 ≤ 4[(1+2ϵ_1/ϵ_2)^2((σ^2/δ^4)ξ^2+(1/δ^4)ξ^4+(μ^2/δ^4)M^2)+4ϵ_1^2μ^2M^2/(nδ^4)] ≤ 5(1+2ϵ_1/ϵ_2)^2(σ^2ξ^2+ξ^4+μ^2M^2)/δ^4,
for sufficiently large n. Substituting Inequalities (<ref>) and (<ref>) into (<ref>), it follows that
‖𝐅-𝐈‖_F=‖𝐟-𝐟_0‖ ≤ √(n)(1+√(n)ϵ_1/ϵ_2)√((5/δ^4)(1+2ϵ_1/ϵ_2)^2(σ^2ξ^2+ξ^4+μ^2M^2))/(1-(1+√(n)ϵ_1/ϵ_2)√((5/δ^4)(1+2ϵ_1/ϵ_2)^2(σ^2ξ^2+ξ^4+μ^2M^2))).
§ MAP ESTIMATION OF UNILATERAL DE-ANONYMIZATION In this section, we derive the MAP estimator for unilateral de-anonymization. Recall that given G_1, G_2, c and θ, the MAP estimate π̂ of the correct mapping π_0 is defined as follows: π̂=max_π∈ΠPr(π_0=π | G_1,G_2,c,θ). The MAP estimator can be further written as: π̂=max_π∈Π∑_G∈𝒢_π p(G,π | G_1,G_2,c,θ), where 𝒢_π is the set of all realizations of the underlying social network that are consistent with G_1, G_2 and π. By the Bayesian rule, we have
max_π∈Π∑_G∈𝒢_π p(G,π | G_1,G_2,c,θ) = max_π∈Π∑_G∈𝒢_π p(G_1,G_2 | G,π)p(G,π)/p(G_1,G_2) = max_π∈Π∑_G∈𝒢_π p(G_1,G_2 | G,π)p(G)p(π) = max_π∈Π∑_G∈𝒢_π p(G_1 | G)p(G_2 | G,π)p(G).
Note that we drop the parameters c and θ for brevity since their values are fixed. From the definitions of the models, we have:
max_π∈Π∑_G∈𝒢_π p(G_1 | G)p(G_2 | G,π)p(G) = max_π∈Π∑_G∈𝒢_π∏_i<j^n(1-s_1)^{|E^ij|-|E_1^ij|}s_1^{|E_1^ij|}·∏_i<j^n(1-s_2)^{|E^ij|-|E_2^π(i)π(j)|}s_2^{|E_2^π(i)π(j)|}·∏_i<j^n p_c(i)c(j)^{|E^ij|}(1-p_c(i)c(j))^{1-|E^ij|} = max_π∈Π(∏_i<j^n(s_1/(1-s_1))^{|E_1^ij|}(s_2/(1-s_2))^{|E_2^π(i)π(j)|})·(∑_G∈𝒢_π∏_i<j^n(p_c(i)c(j)(1-s_1)(1-s_2)/(1-p_c(i)c(j)))^{|E^ij|}) = max_π∈Π∏_i<j^n(s_2/(1-s_2))^{|E_2^π(i)π(j)|}·∑_G∈𝒢_π∏_i<j^n(p_c(i)c(j)(1-s_1)(1-s_2)/(1-p_c(i)c(j)))^{|E^ij|} = max_π∈Π∑_G∈𝒢_π∏_i<j^n(p_c(i)c(j)(1-s_1)(1-s_2)/(1-p_c(i)c(j)))^{|E^ij|},
where |E^ij|, |E_1^ij|, |E_2^ij| take value 0 or 1, indicating whether there exists an edge between nodes i and j in G, G_1, G_2 respectively. Note that in the above manipulations, we repeatedly eliminate terms that do not depend on π.
Particularly, in the last step, although the term (s_2/(1-s_2))^{|E_2^π(i)π(j)|} depends on π, the value of the whole product ∏_i<j^n(s_2/(1-s_2))^{|E_2^π(i)π(j)|} is independent of π itself, since π is a bijective mapping. Now, let G_π^* be the graph having the smallest number of edges in 𝒢_π, i.e., G_π^*=(V,E_1∪π^-1(E_2)). An illustration of G_π^* is provided in Figure <ref>. Denote the set of edges in G_π^* as E_π^*, with |E_π^*^ij| indicating the number of edges between i and j. By definition, all the graphs in 𝒢_π have edge sets that are supersets of E_π^*. By summing over all the graphs in 𝒢_π, we have that
π̂ = max_π∈Π∏_i<j^n(p_c(i)c(j)(1-s_1)(1-s_2)/(1-p_c(i)c(j)))^{|E_π^*^ij|}·∏_i<j^n(1+p_c(i)c(j)(1-s_1)(1-s_2)/(1-p_c(i)c(j)))^{1-|E_π^*^ij|},
where the above equality follows from
∑_G∈𝒢_π∏_i<j^n(p_c(i)c(j)(1-s_1)(1-s_2)/(1-p_c(i)c(j)))^{|E^ij|-|E_π^*^ij|} = ∑_0≤ k_ij≤1-|E_π^*^ij|∏_i<j^n(p_c(i)c(j)(1-s_1)(1-s_2)/(1-p_c(i)c(j)))^{k_ij} = ∏_i<j^n(1+p_c(i)c(j)(1-s_1)(1-s_2)/(1-p_c(i)c(j)))^{1-|E_π^*^ij|}.
Then, from the above equation we can further write the MAP estimator as:
max_π∈Π∏_i<j^n(p_c(i)c(j)(1-s_1)(1-s_2)/(1-p_c(i)c(j)(s_1+s_2-s_1s_2)))^{|E_π^*^ij|} = min_π∈Π∏_i<j^n((1-p_c(i)c(j)(s_1+s_2-s_1s_2))/(p_c(i)c(j)(1-s_1)(1-s_2)))^{|E_π^*^ij|} = min_π∈Π∑_i<j^n|E_π^*^ij|log((1-p_c(i)c(j)(s_1+s_2-s_1s_2))/(p_c(i)c(j)(1-s_1)(1-s_2))).
Next, by the definition of G_π^*, we notice that |E_π^*^ij|=⌈(|E_1^ij|+|E_2^π(i)π(j)|)/2⌉. Hence, by setting w_ij=log((1-p_c(i)c(j)(s_1+s_2-s_1s_2))/(p_c(i)c(j)(1-s_1)(1-s_2))), we have π̂=min_π∈Π∑_i<j^n w_ij(1{(i,j)∉ E_1,(π(i),π(j))∈ E_2}). Note that the MAP estimator is not symmetric with regard to G_1 and G_2. This stems from the fact that the adversary in this case only has knowledge of the community assignment function of G_1. § CONVEXITY OF THE RELAXED UNI-MAP-ESTIMATE In this section, we prove that the relaxed matrix formulation of the optimization problem UNI-MAP-ESTIMATE is convex. The relaxed formulation is presented as follows:
minimize ‖𝐖∘(Π𝐀-𝐁Π)‖_⌊F⌋^2 s.t. ∀ i, ∑_iΠ_ij=1; ∀ j, ∑_jΠ_ij=1.
Obviously, the set of feasible solutions defined by Constraints (<ref>) and (<ref>) is a convex set. Then, for the objective function ‖𝐖∘(Π𝐀-𝐁Π)‖_⌊F⌋^2, according to the definition of the operator ‖·‖_⌊F⌋, it can be interpreted as a weighted sum, with positive weights, of truncated quadratic functions of the elements of Π. Each truncated function is the square of a linear function of the elements of Π, truncated to zero where that linear function takes positive values, i.e., a term of the form min(ℓ(Π),0)^2 for a linear function ℓ. Therefore, each truncated function is convex. It follows that the whole objective function, being a weighted combination of convex functions, is convex. Thus, we conclude that the relaxed UNI-MAP-ESTIMATE is a convex optimization problem, the global optimum of which can be found in O(n^6) time using the same algorithm as in the bilateral case. § DIFFERENCES BETWEEN BILATERAL AND UNILATERAL DE-ANONYMIZATION In this section, we summarize, at a higher level, the differences in the essence of bilateral and unilateral de-anonymization and in the results we obtain for the two problems. * The extra knowledge of the community assignment function in bilateral de-anonymization enables us to restrict the feasible mappings to the ones that observe the community assignment, thus decreasing the number of possible candidates and making the problem intuitively easier than the unilateral one.
* The community assignment as side information is the main reason behind the difference in the posterior distribution of the optimal mapping, which leads to different MAP estimates, and thus different cost functions, in the two cases. Note that the cost function for bilateral de-anonymization cannot be calculated in the unilateral case since we have no knowledge of the community assignment of G_2.

* Although under similar conditions minimizing the cost function asymptotically almost surely recovers the correct mapping in both cases, the lack of community assignment in unilateral de-anonymization imposes asymmetry on its cost function and renders the cost function harder to (approximately) minimize, as justified by our stronger complexity-theoretic result.

* In terms of the proposed algorithms, the additive approximation algorithms for bilateral and unilateral de-anonymization share the same guarantee. However, the convex optimization-based algorithm has been shown to conditionally yield optimal solutions only for bilateral de-anonymization.

* The empirical results demonstrate that in all the contexts, our algorithms successfully de-anonymize a larger portion of users when provided with bilateral community information.

§ GRAPHICAL RESULTS ON RELATIVE VALUE OF COST FUNCTION

In this section we present graphical results on the relative value of the cost function of the mappings produced by the algorithms. Recall that for a mapping π and the mapping π_GA produced by the GA algorithm, the relative value of the cost function of π equals (Δ_π-Δ_π_GA)/Δ_π_GA. | http://arxiv.org/abs/1703.09028v3 | {
"authors": [
"Luoyi Fu",
"Xinzhe Fu",
"Zhongzhao Hu",
"Zhiying Xu",
"Xinbing Wang"
],
"categories": [
"cs.SI",
"cs.CR",
"cs.NI"
],
"primary_category": "cs.SI",
"published": "20170327121735",
"title": "De-anonymization of Social Networks with Communities: When Quantifications Meet Algorithms"
} |
Department of Physics and Astronomy, University of Delaware, Newark, DE 19716, USA Leibniz-Institute for Astrophysics Potsdam (AIP), An der Sternwarte 16, 14482, Potsdam, Germany Center for Astrophysics and Space Science, University of California San Diego, La Jolla, CA 92093, USA Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138, USA

We use Kepler K2 Campaign 4 short-cadence (one-minute) photometry to measure white light flares in the young moving-group brown dwarfs 2MASS J03350208+2342356 (2M0335+23) and 2MASS J03552337+1133437 (2M0355+11), and report on long-cadence (thirty-minute) photometry of a superflare in the Pleiades M8 brown dwarf CFHT-PL-17. The rotation period (5.24 hr) and projected rotational velocity (45 km s^-1) confirm that 2M0335+23 is inflated (R ≥ 0.20 R_⊙), as predicted for a 0.06 M_⊙, 26-Myr-old brown dwarf β Pic moving group member. We detect 22 white light flares on 2M0335+23. The flare frequency distribution follows a power-law distribution with slope -α = -1.8 ± 0.2 over the range 10^31 to 10^33 erg. This slope is similar to that observed in the Sun and warmer flare stars, and is consistent with lower energy flares in previous work on M6-M8 very-low-mass stars; taking the two datasets together, the flare frequency distribution for ultracool dwarfs is a power law over 4.3 orders of magnitude. The superflare (2.6×10^34 erg) on CFHT-PL-17 shows that higher energy flares are possible. We detect no flares down to a limit of 2 × 10^30 erg in the nearby L5γ AB Dor Moving Group brown dwarf 2M0355+11, consistent with the view that fast magnetic reconnection is suppressed in cool atmospheres. We discuss two multi-peaked flares observed in 2M0335+23, and argue that these complex flares can be understood as sympathetic flares, in which fast-mode MHD waves, similar to EUV waves in the Sun, trigger magnetic reconnection in different active regions.

§ INTRODUCTION

The paradigm for the evolution of magnetic activity in low-mass main sequence stars is that magnetic braking causes the initially rapid rotation from pre-main sequence contraction to gradually decline, and this in turn causes the magnetic fields generated by the dynamo to weaken <cit.>. As a result, both the rotation rate and magnetic activity such as flaring decrease with age <cit.>. For fully convective 0.3 M_⊙ stars, half the angular momentum is shed between 3 Myr and the Pleiades age <cit.>. The rotation and magnetic activity evolution of brown dwarfs is quite different. Measurements of v sin i for field brown dwarfs <cit.> imply a mean rotation period of 4.1 hours <cit.>, and a large sample of mid-infrared photometric periods confirms this view <cit.>. All of these are rapid rotators compared to field stars. <cit.> show that turbulent dynamos can generate magnetic fields in stars, brown dwarfs and planets, and that provided the object is rapidly rotating, the strength of the magnetic fields is determined by the energy flux. <cit.> show this theory implies that massive brown dwarfs have fields of a few kilogauss in their first few hundred million years, weakening to fields of 100-1000 G by an age of 10^10 years. Simulations of the turbulent dynamo in fully convective stars show that both large-scale dipole and small-scale magnetic fields are generated <cit.>.
Overall, the expectation is that all brown dwarfs have significant magnetic fields, and indeed radio observations support the existence of magnetic fields even in cool T-type brown dwarfs <cit.>. Despite the presence of strong magnetic fields, the fraction of a star or brown dwarf's energy converted into chromospheric activity weakens for “ultracool dwarfs" with temperatures below the M6 spectral type <cit.>. <cit.> have shown this can be understood as a consequence of the increasingly neutral atmospheres: as the ionization fraction drops and the resistance increases, the magnetic fields become decoupled from the matter. These were equilibrium calculations, and as <cit.> noted, the existence of flares implies that transient, time-dependent processes are important. A transition from fast magnetic reconnection at high temperatures to a high-resistivity regime where only slow magnetic reconnection is allowed may explain the decline in chromospheric and coronal activity but continued radio emission <cit.>. This scenario sees the fast reconnection events resulting in a range of energy release events, from many nanoflares that heat the chromosphere and corona to rarer but more powerful white light flares that can be individually observed. Additional parameters seem to be important in magnetic activity: magnetic topology may explain the difference between the radio-quiet, X-ray bright dwarfs and the radio-loud, X-ray faint dwarfs <cit.>. Even setting aside the numerous radio-only bursts, it is well established that X-ray and optical flares do occur in stellar late-M and early-L dwarfs; some notable examples include the very first optical spectrum of VB10, the first known M8 dwarf <cit.>, the discovery of a nearby M9 dwarf due to a huge X-ray flare with L_X/L_bol = 0.1 <cit.>, and an L0 dwarf with a Δ V<-11 white light flare <cit.>. The serendipitous optical spectra of the M7-M9 dwarf flares reported by <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, and <cit.> all showed strong atomic emission lines, and many included veiling or a blue continuum. A difficulty with flare studies, however, is that detectable flares are rare enough that it is difficult to assess their frequency as a function of energy. <cit.> monitored four field M6-M8 stars in U-band and found 39 flares over 59 hours. These flares followed a power-law frequency distribution, as seen in hotter flare stars, but with a rate comparable to “inactive" but more luminous M0-M2.5 dwarfs. Similarly, Kepler optical monitoring of an L1 dwarf star also found a power-law flare frequency distribution <cit.>. Young M-type brown dwarfs with similar T_eff also exhibit flares, such as the ∼500-Myr M9 lithium brown dwarf LP 944-20 <cit.>. X-ray flares, as well as quiescent emission, have also been reported from very young (<5 Myr) M6-M9 brown dwarfs in Orion <cit.> and Taurus <cit.>. By monitoring over 100,000 stars over four years, the Kepler mission <cit.> detected over 800,000 flares in 4041 stars <cit.>. These include superflares in A and F stars with thin convective zones <cit.> and solar-like G dwarfs <cit.>, as well as fully convective M dwarfs <cit.> and even an L1 dwarf <cit.>. These flares are detected by white light emission enhancing the normal stellar photospheric emission through Kepler's broad (430nm-900nm) filter; thus, only extremely energetic events (>10^34 erg) are seen in the warmer stars, but weaker flares can be seen in the coolest stars.
In the models of <cit.>, a beam of non-thermal electrons with an energy flux of 10^13 erg cm^-2 s^-1 can produce a dense, hot chromospheric condensation that emits white light like a ∼10,000K blackbody. Solar flares with energies ∼10^31 erg also emit most of their energy in white light like ∼9000K blackbodies <cit.>; flares of this energy would not be detectable by Kepler on solar-type stars but were detected on the L1 dwarf and the M4 dwarf GJ 1243 <cit.>. The extended Kepler K2 mission <cit.> allows many new targets to be observed. We are using K2 to monitor ultracool dwarfs for white light flares as well as measuring rotational periods and searching for transits. Targets that happen to lie within each K2 field of view are monitored for a ∼2.5-month-long campaign. The overall aim of our survey is to measure quantities such as the flare frequency, maximum flare energy, and flare light curve morphology in order to understand their dependence on parameters such as temperature, mass, and age. In this paper, we present K2 Campaign 4 observations of three brown dwarfs which are confirmed members of nearby moving groups and clusters, so that unlike most field dwarfs, their age, mass, radius and other parameters are well determined. We present the target properties in Section <ref>, the K2 observations in Section <ref>, and a discussion of the magnetic activity in Section <ref>.

§ TARGETS AND SPECTROSCOPY

§.§ Target Characteristics

In Table <ref>, we list the key properties of our targets.[EPIC 211046195 and EPIC 210327027 were observed for K2 Guest Observer Program 4036 (PI Gizis); EPIC 211110493 was observed for GO Programs 4024 (PI Lodieu), 4026 (PI Scholz), and 4081 (PI Demory).] 2MASS J03350208+2342356 (hereafter 2M0335+23) was observed by K2 as source EPIC 211046195. This field ultracool dwarf was discovered by <cit.>, who classified it as M8.5 in the optical and noted Hα emission. It is apparently single in Hubble Space Telescope imaging <cit.>. <cit.> detected significant rotational broadening (v sin i ≈ 30 km s^-1), lithium in absorption, and again detected Hα emission. The presence of lithium identified this object unambiguously as a brown dwarf <cit.>. <cit.> measured its trigonometric parallax distance to be 42.4 ± 2.3 pc and showed that its distance and space velocity identified it as a member of the β Pic Moving Group (BPMG). The latest parallax from <cit.> places it at 46 ± 4 pc; for consistency with the literature we adopt the nominally more precise distance of 42.4 pc for our analysis. The age of this group is 24 ± 3 Myr <cit.>. <cit.> have analyzed 2M0335+23 in detail and find that it is 60.9^+4.0_-4.4 Jupiter masses (i.e., 0.058±0.004 M_⊙) with a radius of 2.40 ± 0.04 Jupiter radii according to models <cit.>. Adopting BC_J = 2.0 <cit.>, the luminosity is 10^-2.55 L_⊙ and T_eff = 2700K. <cit.> measured it to have apparent (AB) magnitude i=15.601 (as source DANCe 5121623). Infrared spectroscopy confirms that it has lower surface gravity than ordinary field dwarfs, with a classification of M7 VL-G <cit.> and M7.5β <cit.>. In the optical, low surface gravity leads to enhanced VO features <cit.>; this would bias <cit.>'s classification to a later type. We re-classify the spectrum as M7β in the optical. In particular, the optical spectrum of 2M0335+23 is definitely “earlier" (warmer) than the young M9 brown dwarf 2MASS J06085283-2753583 <cit.>, which may be a BPMG member <cit.> or more likely a 40-Myr-old Columba member <cit.>. The observed Hα emission line strength of EW ≈ 5Å implies log Hα/L_bol ≈ -5.5 <cit.>.
Despite 2M0335+23's rapid rotation and youth, this places it in the bottom half of the M7 activity range <cit.>. Finally, we use the mass, luminosity and radius to predict the theoretical mean surface magnetic fields of our targets using Equation 1 of <cit.>. 2MASS J03552337+1133437 (hereafter 2M0355+11) was observed by K2 as source EPIC 210327027. Discovered by <cit.> and classified as low-surface gravity (L5γ) with lithium by <cit.>, this brown dwarf is now recognized as a dusty, low-surface-gravity member of the AB Doradus Moving Group (ABDMG) which shares many spectral characteristics with directly imaged exoplanets <cit.>. <cit.> derive an ABDMG age of 149^+51_-19 Myr; note that this age is tied to the Pleiades age of 130 ± 20 Myr from lithium depletion <cit.> that has been updated to 112 ± 5 Myr <cit.>. <cit.> measure a trigonometric parallax of 109.5 ± 1.4 mas, and we use a distance of 9.1 pc for the rest of this paper. For discussion purposes, we adopt the values derived by <cit.>: log L/L_⊙ = -4.10 ± 0.03, radius R = 1.32±0.09 R_J, surface gravity log g = 4.45 ± 0.21, T_eff = 1478 ± 57K, and mass M = 19.98 ± 7.76 M_J (i.e., ∼0.02 M_⊙). <cit.> measured v sin i = 12.31 ± 0.15 km s^-1, noting that it is an unusually slow rotator for an L dwarf. This, however, still implies a maximum rotation period of 13 hr, a rapid rotator compared to M dwarf stars. No Hα emission has been detected (EW < 24.29Å; <cit.>), though the upper limit is above the emission level of most L dwarfs. CFHT-PL-17 (2MASS J03430016+2443525), a brown dwarf member of the Pleiades discovered by <cit.>, was observed by K2 as source EPIC 211110493. <cit.> classified it as optical spectral type M7.9 (which we will hereafter round off to M8) and found Hα emission (EW ≈ 7Å). <cit.> confirm it has a 100% chance of being a cluster member and measure i=19.745 (AB). We adopt the VLBI Pleiades distance of 136.2 ± 1.2 pc <cit.>; <cit.>, using the same distance, find a luminosity of 0.0008456 L_⊙ (T_eff = 2500K), which implies the mass is 0.06 M_⊙. Thus, CFHT-PL-17 is very similar in mass to 2M0335+23, but older and one spectral type later (∼200K cooler), and it is a similar age to 2M0355+11, but more massive and warmer.[We also identify a flare in the candidate Pleiades brown dwarf BPL 76 (2MASS J03454521+2258449) <cit.>, observed as EPIC 211000317. <cit.>, however, have measured its proper motion and assign it a membership probability of zero.]

Key Target Properties
Object       EPIC        K_p    Type   Distance   Age       Mass        log L/L_⊙   T_eff   B
2M0335+23    211046195   16.7   M7β    42.4 pc    24 Myr    0.06 M_⊙    -2.55       2700K   2.2 kG
2M0355+11    210327027   20.4   L5γ    9.1 pc     150 Myr   0.02 M_⊙    -4.10       1480K   1.1 kG
CFHT-PL-17   211110493   20.8   M8β    136 pc     112 Myr   0.06 M_⊙    -3.07       2500K   2.5 kG
Parameters given are rounded off. The mean surface magnetic field B is theoretical. See text for references and uncertainties.

§.§ New Spectroscopy

We observed 2M0335+23 on UT date 2016 February 3 with the Keck NIRSPEC spectrograph <cit.> to obtain spectra with λ/Δλ = 20,000 in the 2.3 μm region dominated by CO bands. Conditions were clear with 1″ seeing. We obtained two exposures of 750 sec each, following observation of the A0V star HD 19600 for telluric calibration. The setup and data analysis were as described in <cit.>. We achieved a typical signal-to-noise of >50 for these observations. We find v_rad = 12.6 ± 1.0 km s^-1 and v sin i = 45.4 ± 3.4 km s^-1.
This radial velocity increases 2M0335+23's probability of BPMG membership to 96.5% using the <cit.> astrometry in the BANYAN II model <cit.>.

§ K2 PHOTOMETRY

Kepler records the pixels for every target as averages over “long" (30-minute) cadences; for 2M0335+23 and 2M0355+11, it also recorded “short" (1-minute) cadence data. We report Kepler mission times, which are equal to BJD - 2454833.0. The brightnesses of Kepler and K2 targets are described on the K_p magnitude system <cit.> tied to ground-based photometry; this system was not designed for ultracool dwarfs, and the EPIC catalog <cit.> magnitudes for our targets are not useful. <cit.> defined a flux-based magnitude K_p ≡ 25.3 - 2.5 log(flux), which agrees with the catalog K_p for most (e.g., AFGK-type) stars; here “flux" is the count rate measured through a 3-pixel radius aperture. By using K_p we can discuss both extremely red sources and time-dependent (blue) flares in terms of the well-established K_p system. We find that the apparent K_p magnitudes of 2M0335+23, 2M0355+11, and CFHT-PL-17 are 16.7, 20.4, and 20.8. 2M0335+23 is bright enough that photospheric variability is detectable. We use the K2 mission pipeline photometry corrected for the effects of pointing drift and other systematic errors. For 2M0335+23, the Lomb-Scargle periodogram shows a strong signal at P = 0.2185 day (5.244 hour), which we identify as the rotation period of the brown dwarf. The phased data are shown in Figure <ref> normalized to the median. We have verified that other K2 pipelines <cit.> give consistent results for this source. We do not detect periodic photometric variations in 2M0355+11 or CFHT-PL-17. We note that in the case of the K2 mission corrected photometry for 2M0355+11, there is a periodic signal of 1.11 days, but we believe this is a spurious signal, and it is not present in the other reduction pipelines. We measured short cadence photometry using the K2 Release 10 pixel files. (There is not yet any official K2 mission light curve product for short cadence data.) We used circular aperture photometry (photutils) with a radius of 2 pixels centered on the target, but we verified that our results are qualitatively unchanged with circular apertures of radius 3 or rectangular fixed apertures. The sources, especially 2M0355+11, are faint enough that centroiding introduces photometric noise; we adopt a best position based on the median of all centroid measurements, and then adjust it for each observation using the spacecraft motion estimate calculated by the mission (recorded as POS_CORR1 and POS_CORR2 in the FITS file headers). The photometry shows the usual systematic drifts, but these have little effect on measurements of flares, which have timescales of a few minutes. We lack short cadence photometry for CFHT-PL-17, but motivated by a strong flare at mission day 2261.94 which we noticed in the mission pipeline photometry, we measure our own 2-pixel radius long cadence photometry in the same way to analyze its flare. We identified 22 flares in 2M0335+23 by visually examining the light curves; they are shown in Figure <ref> and listed in Table <ref>. We reject all events that brighten in only a single observation or are not centered on the target's position.
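As an aside on the rotation-period search described above, here is a minimal Python sketch of the periodogram step using astropy's Lomb-Scargle implementation. The arrays below are a synthetic stand-in for the corrected K2 light curve, since the real data are not reproduced in this text.

import numpy as np
from astropy.timeseries import LombScargle

# Synthetic stand-in for the systematics-corrected K2 long-cadence light
# curve of 2M0335+23: 70 days sampled every 29.4 min, with a 1% sinusoidal
# modulation at the rotation period reported above.
t = np.arange(0.0, 70.0, 29.4 / 60.0 / 24.0)        # mission days
f = 1.0 + 0.01 * np.sin(2.0 * np.pi * t / 0.2185)   # median-normalized flux

freq, power = LombScargle(t, f).autopower(minimum_frequency=1.0,   # P < 1 d
                                          maximum_frequency=24.0)  # P > 1 hr
print(1.0 / freq[np.argmax(power)])   # recovers ~0.2185 d (5.244 hr)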
No flares are detected in 2M0355+11; we note, however, that a passing asteroid creates a spurious brightening in aperture photometry at mission time 2271.13. For each 2M0335+23 flare, we fit the M dwarf flare light curve described by <cit.>, hereafter D14, who found that most flares on the M4 dwarf GJ 1243 could be described by a fast rise (a 4th-order polynomial) and a slower double-exponential decay: Δ F = A (α_i e^(-γ_i Δ t/t_1/2) + α_g e^(-γ_g Δ t/t_1/2)). All flare light curves are then described by nine universal parameters: α_i = 0.6890 (± 0.0008), α_g = 0.3030 (± 0.0009), γ_i = 1.600 (±0.003), γ_g = 0.2783 (±0.0007), plus the five polynomial parameters given in D14. Each flare also has three unique parameters: the peak amplitude of the flare, the full-width time at half-max (t_1/2), and the time of the flare peak. We fit four free parameters to each flare using emcee <cit.>: the non-flaring photosphere, the peak amplitude of the flare, the full-width time at half-max (t_1/2), and the time of the flare peak. The fits are shown in red in Figure <ref>. Two of the brighter flares (on mission days 2240 and 2287) are complex flares with two peaks in their light curves: we have fit them as the superposition of two flares. The equivalent duration listed in Table <ref> is a measure of the flare energy compared to the quiescent luminosity; it is obtained by integrating the observed flare count rate and dividing by the photosphere count rate. This is a distance-independent measure of the flare, but it is dependent on the filter and the photospheric properties of the stars: our durations will be much longer than those of otherwise identical flares observed on a G dwarf, due to the lower photospheric flux. Our detection of weak flares is limited by noise, and it is clear that we cannot reliably detect flares below equivalent durations of 20 s. The flares on mission days 2238, 2249, and 2251 in Figure <ref> are examples of marginal detections that may be some form of correlated noise rather than real flares. The flare on mission day 2299 is also questionable because most of the flux occurs in a single time period; its exclusion would have negligible effects on the remaining analysis. The strongest 2M0335+23 flare (mission time 2253.65107) has noticeable deviations from the D14 template: the best-fitting model (red) under-predicts the peak and over-predicts the gradual phase, with a t_1/2 that is too short. We therefore use a new model template in which we keep the polynomial rise parameters fixed but allow the decay parameters to be fitted. This adds three free parameters, since we require α_i + α_g = 1. The results are: α_i = 0.9233 ± 0.0055 (α_g = 0.0767), γ_i = 1.3722 ± 0.054, and γ_g = 0.1163 ± 0.011. <cit.> argues that during the impulsive decay phase, cooling is by blackbody emission, which suggests the relative contribution of this component of radiative cooling was different in this flare. To calibrate equivalent duration in terms of energy, we follow the procedures described in <cit.>. Because flares are much hotter than the brown dwarf targets, white light flares have a higher average energy per detected K2 photon than the photosphere. The photosphere is modeled with an active M7 dwarf template <cit.> scaled to the measured i photometry and known trigonometric parallax distance. The flare is modeled as a 10,000K blackbody, which gives good agreement with the flare measurements in <cit.>.
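As an aside, the decay phase of the D14 template used throughout this section is simple to evaluate; here is a minimal Python sketch assuming numpy. The pre-peak rise is D14's fourth-order polynomial, whose coefficients are given in that paper and are not reproduced here.

import numpy as np

# Decay-phase parameters of the D14 template quoted above.
ALPHA_I, ALPHA_G = 0.6890, 0.3030
GAMMA_I, GAMMA_G = 1.600, 0.2783

def flare_decay(dt, amplitude, t_half):
    """Post-peak flux excess of the D14 template (equation above).

    dt        : time since the flare peak (dt >= 0, same units as t_half)
    amplitude : peak flare amplitude A (relative flux)
    t_half    : full-width time at half-maximum, t_1/2
    """
    x = np.asarray(dt, dtype=float) / t_half
    return amplitude * (ALPHA_I * np.exp(-GAMMA_I * x)
                        + ALPHA_G * np.exp(-GAMMA_G * x))

# For the strongest flare, the refit described above instead uses
# alpha_i = 0.9233, gamma_i = 1.3722, gamma_g = 0.1163 (alpha_i+alpha_g = 1).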
Figure <ref> shows the optical and near-infrared spectral energy distribution of 2M0335+23 and a flare with the same count rate through the Kepler filter. We find that a flare with an equivalent duration of 1 second has a total (bolometric UV/Vis/IR) energy of 2.0 × 10^30 erg. We emphasize that we have extrapolated to wavelengths not detected by K2 and that our analysis includes atomic emission features between 430nm and 900nm in the observed “white light" photometry. Finally, we also report the peak (short cadence) absolute K_p magnitude of the flare. For these, we have applied an aperture correction of 1.08 to correct the r=2 pixel aperture to the r=3 pixel aperture. We note that the total equivalent duration of detected flares is 0.030 days, so that just 0.04% of 2M0335+23's optical light over the course of the campaign is due to white light flares.

Flares in 2M0335+23
Time (d)      Peak Ratio   Eq. Duration (s)   Energy (10^31 erg)   Secondary Time (d)   Comments
2253.65107    1.83         518                103.5
2240.04075    0.68         494                98.9                 2240.0511            Complex
2281.57290    1.47         339                67.8
2284.68258    1.69         267                53.4
2287.91009    0.22         203                40.5                 2287.9161            Complex
2268.87127    0.20         145                29.0
2295.39965    0.75         110                22.0
2299.37770    0.66         56                 11.2                                      Questionable
2291.45770    0.38         53                 10.7
2248.59421    0.24         48                 9.6
2261.51045    0.23         47                 9.4
2269.02792    0.15         35                 7.0
2249.17243    0.11         35                 6.9
2276.02709    0.17         32                 6.4
2229.08574    0.07         28                 5.5
2293.30131    0.13         26                 5.2
2295.02575    0.15         25                 5.0
2258.54786    0.04         24                 4.8                                       Complex?
2284.92844    0.09         23                 4.7
2238.85706    0.11         18                 3.5                                      Questionable
2251.42265    0.05         13                 2.6                                      Questionable
2249.41148    0.05         10                 1.9                                      Questionable

We detect no flares in 2M0355+11. We verified that we could have detected flares by adding our observed 2M0335+23 flare data back into the 2M0355+11 data at random times, and recovered them all. Because 2M0355+11 is 4.7 times closer than 2M0335+23, a flare with 22 times less energy would produce the same count rate. We conclude that we could have detected flares with E > 2.0×10^30 erg, and place a 95% confidence upper limit of 3 such flares over 70.7 days. (The effect of 2M0355+11's much fainter apparent photospheric magnitude would simply be to increase the equivalent duration or relative amplitude of the flare.) If the timescale for these flares, however, was less than one minute, then we could not distinguish them from cosmic rays or other noise sources. However, because the late-M dwarf flares (<cit.>, Table 4.1) with energies at or above our limit have timescales of several minutes, we conclude that this effect is not a concern. We measure the flare on CFHT-PL-17 using 3-pixel radius aperture photometry (Figure <ref>). The flare is first detectable at mission time 2261.9407, where it has brightened to 9.0 times the original photosphere. In the 2261.9612 exposure, it has reached 77 times the photosphere to achieve K_p = 16.0. The flare then declines, with the last detectable excess of 24% at 2262.1246, for a total observed duration of five hours. The equivalent duration is 170,000 s (2.0 days). Using the same calibration procedure with an active M8 template <cit.>, we calculate that the flare energy is 2.6 × 10^34 erg. Given the sharply peaked light curve, we conclude that t_1/2 < 30 min. We can fit the D14 template by computing it on one-minute timesteps but comparing to the long-cadence data, as in our analysis of an L dwarf superflare in Paper I <cit.>. We find that the peak is 380 times the photosphere (K_p ≈ 14.3), with t_1/2 of 3.9 minutes.
This should be viewed with caution because we do not know if the D14 template applies to this flare, or if the flare was complex and multi-peaked. On the other hand, the Kepler short cadence photometry of comparable energy flares in F stars shown by <cit.> is sharply peaked.

§ DISCUSSION

§.§ The Radius of a 24 Myr Brown Dwarf

Young brown dwarfs should have inflated radii compared to field stars of the same spectral type. Our measured rotation period and measured v sin i together imply that R sin i = 0.196 ± 0.015 R_⊙ for 2M0335+23, whose age of 24 ± 3 Myr is independently known. This is much larger than the radii of field M7 dwarfs, which have R = 0.12 R_⊙ <cit.>. This result can be seen either as an independent confirmation of the evolutionary model prediction that young brown dwarfs are larger, or, if the models are trusted, as independent support for the BPMG membership of 2M0335+23. Using the previously estimated radius of <cit.>, we find that the inclination is i = 54.4 ± 6.6^∘.

§.§ Flare Frequency Distribution

Studies of flare stars have found that the cumulative flare frequency distribution (FFD) follows a power-law trend <cit.>. We compute the cumulative frequency (ν) of 2M0335+23 flares as the number (N) observed with a given energy or greater divided by the total time of observation (70.7 days) and plot the results in Figure <ref>. The statistical properties of power-law (“Pareto") distributions are reviewed by <cit.>. We consider both the graphical technique of fitting a line to Figure <ref> and maximum likelihood estimation: <cit.> notes that the two techniques are consistent and that the traditional graphical technique is “only slightly inferior" to the maximum likelihood estimates. Considering flares in the energy range 4×10^31 erg to 1.1 × 10^33 erg, using frequency units of day^-1, and weighting each point by √(N), we fit a linear relationship: log ν = a + β log(E/10^32 erg). We find β = -0.66 ± 0.04 and a = -0.83 ± 0.01 for 2M0335+23. An alternative expression of this power-law dependence is: dN ∝ E^-α dE. Here dN is the number of events between energy E and E+dE. As often discussed for flares, if α>2 the total energy in small events (nanoflares) would diverge. Because β = -α + 1 (see <cit.> for a helpful discussion), α = 1.7 for 2M0335+23. We also plot limits on the FFD for the L5γ brown dwarf 2M0355+11. We also show the Kepler L1 dwarf W1906+40 <cit.> FFD, but over the energy range 10^31 erg to 2 × 10^32 erg. For this star, a = -1.35±0.06 and β = -0.59 ± 0.09, but we caution that the slope depends sensitively on the energy range chosen. An alternative approach is to use the maximum likelihood estimator for α <cit.>: (α - 1) = n [∑_i=1^n ln(E_i/E_min)]^-1, where for the 2M0335+23 case n = 19 and E_min = 4.66×10^31 erg. The uncertainty in this estimator can also be calculated (we used the software from <cit.>). Because our sample is relatively small, the uncertainty is large (± 0.2), and importantly, the estimator is biased. <cit.> shows it can be made unbiased by multiplying it by a factor of (n-2)/n, giving α = 1.79 ± 0.21. We conclude that the maximum likelihood estimate of α is consistent with the linear fit value. We adopt the (larger) uncertainty of ± 0.2 from the maximum likelihood estimator. For W1906+40, the maximum likelihood estimator gives α = 1.6 ± 0.2. The 2M0355+11 limit is shown in red in Figure <ref>. If one assumes that 2M0355+11 would follow the same power-law slope as 2M0335+23 or W1906+40, then the red dashed-line upper limit shown in Figure <ref> applies.
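For reference, here is a minimal Python sketch of the maximum-likelihood slope estimate quoted above, fed with the flare energies from the table in the previous section; it reproduces the quoted α = 1.79 ± 0.2.

import numpy as np

def ffd_alpha_mle(energies, e_min):
    """Power-law index of dN ∝ E^-alpha dE via the MLE quoted above,
    debiased by the (n-2)/n factor discussed in the text."""
    E = np.asarray([e for e in energies if e >= e_min], dtype=float)
    n = len(E)
    alpha_hat = 1.0 + n / np.sum(np.log(E / e_min))
    alpha_unbiased = 1.0 + (alpha_hat - 1.0) * (n - 2) / n
    sigma = (alpha_hat - 1.0) / np.sqrt(n)   # standard MLE uncertainty
    return alpha_unbiased, sigma

# 2M0335+23 flare energies above E_min = 4.66e31 erg (Table above, 10^31 erg):
energies = np.array([103.5, 98.9, 67.8, 53.4, 40.5, 29.0, 22.0, 11.2, 10.7,
                     9.6, 9.4, 7.0, 6.9, 6.4, 5.5, 5.2, 5.0, 4.8, 4.7]) * 1e31
print(ffd_alpha_mle(energies, 4.66e31))   # ~ (1.79, 0.20), as quoted above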
In any case, 2M0355+11's incidence of flares of 2 × 10^32 erg or greater is less than 13% that of 2M0335+23 and less than 40% that of the much older star W1906+40.<cit.> found that for four M6-M8 dwarfs, a=20.49 ± 3.3 and β = -0.73 ± 0.1 over the range 10^27.94≤ E_U ≤ 10^30.60 erg, where the frequency is in hour^-1, E_U is the energy in U-band, and the reported intercept a is at zero energy, not 10^32 erg. In order to compare this FFD, we can make a crude energy correction by noting that <cit.> found that the total energy of a AD Leo flare was 806/145=5.55 times greater than the U-band energy (their Table 6).(The 10,000K blackbody agrees well with this flare, see ).Applying this correction, the <cit.> relation agrees remarkably well not only in slope but also normalization (Figure <ref>).The agreement in normalization seems rather fortuitous given the uncertain energy corrections applied, the combination of multiple stars, and the fact that as a young brown dwarf, 2M0335+23 is both larger and more luminous than a field M7 dwarf.We conclude that the flares for ultracool dwarfs follow a power law over the range 5 × 10^28 to 10^33 erg, a range of 4.3 orders of magnitude.The existence of the Pleiades flare shows that this distribution must continue to at least 3 × 10^34, although we cannot determine whether it follows the same power-law slope or turns over.The predicted rate using the extrapolated 2M0335+23 FFD for the Pleiades superflare is 1.4 per year. The striking aspect of this power-law slope is that it agrees so well not only with stellar M6-M8 dwarfs, but also “inactive" M dwarf stars. It is also in excellent agreement with solar flares.<cit.> found that α≈ 1.5 for solar hard-X ray flares over three orders of magnitude.<cit.> argue that solar flares follow a power low slope with α=1.8 over the range 10^24 to 10^32 erg.Power-law slopes near this value are explained by self-organized criticality models <cit.> where flares are due to “avalanches" of many small magnetic reconnection events. 2M0355+11 may be at a temperature (1480K) below which fast magnetic reconnection events are no longer possible. While the slopes (α) are consistent between the brown dwarfs, the Sun, and other stars, the overall normalization of the flare rate (a) is much different. Flares on 2M0335+23 are much less frequent than in the rapidly rotating M4 dwarf GJ 1243 <cit.>, comparable to “inactive" M dwarfs studied by <cit.>, and morefrequent than in the Sun.It is perhaps most interesting to compare to the L1 dwarf W1906+40. At an energy of 10^32 ergs, flares are 3-4 times as frequent on 2M0335+23 as on W1906+40. However, 2M0335+23 is 13 times more luminous than W1906+40, so despite its higher temperature, it is less efficient at converting its energy into flares; it also has seven times the surface area of W1906+40 so the flare rate per unit area is lower. It is intriguing that in the <cit.> theory, W1906+40's predicted magnetic field is 3.1 kG, 40% stronger than 2M0335+23.If we compute the total power in 2M0335+24 white light flares as the integral of EdN from the solar microflare energy of10^24 erg to the observed superflare energy of 2.6×10^34 erg using the 2M0335+23 FFD fit, we findlog L_ WLF / L_ bol = -4.2 .A more conservative upper limit 10^33, and lower limit of 5 × 10^28 erg still gives log L_ WLF / L_ bol = -4.7. 
§.§ The Pleiades Brown Dwarf Superflare

With an energy of 2.6 × 10^34 erg, the CFHT-PL-17 flare event is comparable to superflares observed in Kepler G, K, and M stars <cit.>, though below the mean observed superflare energy <cit.>. It is helpful to consider the flare in terms of the observed peak absolute M_Kp = 10.3 (long cadence) or ∼8.7 (short cadence): if this brown dwarf were an unresolved companion to an A star (M_Kp ≈ 0) or an F dwarf (M_Kp ≈ 2.5), the flare would be a detectable event with Kepler. However, the flare rates seen in some A and F stars by <cit.> seem to be too high to be explained by brown dwarf companions, since they re-occur on timescales of 1-120 days, whereas we expect less than one per year due to a brown dwarf. X-ray triggered events have revealed that even more energetic superflares occur in young M dwarfs such as the ∼30-Myr old M dwarf binary DG CVn <cit.>. For comparison to other ultracool dwarfs, this superflare has more energy than the L1 dwarf superflare we reported in Paper I of this series <cit.> but less than the ASASSN-16ae L0 dwarf superflare <cit.>. As noted above, the extrapolated flare rate of 2M0335+23 suggests a superflare only ∼1.4 times per year. We find that there are nine well-resolved Pleiades brown dwarfs in the spectral type range M6-M9 in Campaign 4 for which we would have detected a similar superflare. The combined superflare rate of these nine brown dwarfs is ∼1.7 times per year. This suggests that the superflare may be understood as the high-energy tail of the white light flare power-law distribution. However, <cit.> suggested that superflares in solar-type stars may be the result of interactions with a planetary companion, and we have no information about whether CFHT-PL-17 has a lower-mass brown dwarf or exoplanet companion. We see no reason to invoke interactions with a substellar companion to explain the white light flares in either 2M0335+23 or CFHT-PL-17.

§.§ Complex flares: Is sympathetic flaring at work?

The two 2M0335+23 complex flares (Fig. <ref>, Table <ref>) are very well described as the sum of two individual flares that follow the template, and we assume throughout this section that they are occurring on 2M0335+23 rather than on an unknown companion. The time separation is 14.6 minutes for the flare on mission day 2240 and 9.2 minutes for the flare on mission day 2287. A third possible example of a complex flare occurs on mission day 2258, with a time separation of about 8 minutes. However, in the third case, the noise level is large enough that it is not altogether clear that two individual flares can be reliably identified. Thus, among the 22 flares illustrated in Fig. <ref>, we can confidently state that about 10% exhibit the occurrence of two flares within a time interval shorter than 20 minutes. Flares which occur within a short time interval of each other may belong to the class of “sympathetic flares" (SF). By definition, SF are related to each other in the sense that a disturbance (of a kind that we will discuss below) generated by the first flare propagates to another active region and triggers a flare there. However, a “short" time interval between flares is not necessarily an indication of SF. In fact, it may be difficult to identify with confidence a bona fide SF in certain stars. For example, a very active flare star may have multiple flares occurring randomly in multiple active regions within short time intervals, and these flares may have little or no physical relationship to one another.
Is there a way to distinguish between unrelated flares and SF? We suggest that one possible approach may be to consider the ratio of (i) the time interval T between two particular flares, and (ii) the mean time interval T(m) between flares averaged over the length of the entire observing period. For example, the M4 flare star GJ 1243 <cit.> was observed by Kepler for a period of 11 months, during which 6107 “unique events" were identified as flares. This star has an average of 18-19 flares per day, i.e. T(m) = 75-80 minutes. An example of a “complex flare" is illustrated by <cit.> in their Fig. 6, which spans an interval of 3.6 hours: 7 template flares are required to produce a good fit to the light curve. The average time interval between the template flares in this case is T = 31 minutes. This is already shorter than T(m) by a factor of 2, and might therefore suggest that SF could be at work. A fortiori, if we exclude one outlier flare at late times (with a peak at abscissa 549.865 days), inspection of their Fig. 6 suggests that 5 template flares occurred on GJ 1243 within a time interval of only 75 minutes, i.e. T = 15 minutes. This is 5-6 times shorter than T(m), again suggestive of SF. In contrast to the active flare star GJ 1243, when we return to considering the object of interest to us here (2M0335+23), we find that the flares in Fig. <ref> were observed over a 66-day interval. This means that, with 21 flares in our sample, the mean interval between flares is T(m) = 3 days. The fact that we have identified two (or possibly 3) pairs of flares separated by only 20 minutes means that our pairs of flares have time separations T which are shorter than 0.01 T(m). As a result, while we may assert that neighboring flares on 2M0335+23 are probably randomly related if they are separated by 3 days or more, it is much more difficult to make such an assertion for flares which are separated by less than 1% of T(m). It seems more reasonable to consider the possibility that two flares which are separated by only 1% of T(m) are physically related to each other. Specifically, is it possible that we are observing pairs of SF in 2M0335+23? In the Sun, the SF possibility was subject to opposing claims in the 1930s based on optical data <cit.>. Conflicting claims for the existence (or non-existence) of SF surfaced again in the 1970s, based on radio and X-ray data (<cit.>, hereafter FCS). Based on X-ray data, FCS reported on the absence of significant evidence for SF except in one subset of their data: active regions which were closer to each other than a critical distance exhibited a 3.4σ increase in the occurrence of “short" time intervals between flares. Coincidentally, FCS defined “short" as being <20 minutes, i.e. the same interval as we mentioned above in connection with flares on 2M0335+23. However, FCS seemed suspicious about even the one SF case they had detected, because they could not identify “any mode of propagation of a triggering agent in the solar atmosphere." In the case of stars, <cit.> (hereafter OT) analyzed the time intervals between successive flares in YZ CMi and UV Cet and found that the intervals in general followed a Poisson distribution. However, in UV Cet, there exist certain “sequences of closely spaced flares whose probability of occurrence is very small in the case of a Poisson process." In one case, 7 flares occurred in 5.4 minutes, and on each of 2 separate occasions, 3 flares were observed within a 2-minute interval.
OT demonstrated that such small intervals of time between flares are highly improbable in the context of the overall Poisson distribution. Also in the case of the same flare star as that discussed by OT, <cit.> (hereafter HS) reported on an independent study of the time intervals between flares on UV Cet. In the course of 26 hours of observing, they detected 94 flares. Thus, they obtained T(m) = 17 min as the average time between flares. However, when they examined the distribution of time intervals between individual flares, the intervals ranged up to as long as 110 min. A Poisson distribution was found to fit the flare interval data at the 98% confidence level with one proviso: only intervals larger than 4 min were included. At intervals shorter than 4 min, there is a large spike in the distribution: 38 of the 94 flares were found to have T ≤ 4 min. This excess at short times is far above what the Poisson distribution predicts. HS cited OT as having also “noted" this excess at short times. But then, with regard to the excess at short times, HS make the following statement (which has no explicit analog in OT): “This might be due to triggering of the second flare by the first, like sympathetic flares on the sun." A possible triggering agent for SF in the Sun was suggested by <cit.>: a fast-mode MHD wave/shock which is launched into the corona by certain flares. The idea is that as the wave/shock propagates through the corona, it may encounter a second active region: in that case, the wave/shock may perturb the second active region in such a way that a “sympathetic" flare occurs in that active region. Evidence for disturbances propagating away from certain flare sites was at first based solely on chromospheric data, where a “Moreton wave" was observed sweeping across the chromosphere. An observational breakthrough as regards a triggering agent for SF occurred with the launch of SOHO in 1995, when the Extreme-ultraviolet (EUV) Imaging Telescope (EIT) detected waves which propagate through large distances in the solar corona following certain events. The waves are referred to variously as “EIT waves" or “EUV waves." The waves were first interpreted as fast-mode MHD waves driven either by an erupting coronal mass ejection (CME) or as a blast wave driven by the energy release in a flare. The earliest data indicated that EUV waves propagate at speeds of 200-500 km/sec in the solar corona <cit.>, but speeds as large as 1400 km/sec have been reported <cit.>. An extensive survey of multiple theories which have been proposed to explain EUV waves <cit.> has concluded that the waves in the Sun are “best described as fast-mode large-amplitude waves or shocks that are initially driven by the impulsive expansion of an erupting CME in the low corona." In the case of stellar (and brown dwarf) flares, is it possible that we might rely on solar-like phenomena to understand SF? Flares on stars involve magnetic energy release, so to the extent that a stellar flare contributes to the launch of a blast wave in the corona (analogous to the Sun), we may expect that flare-induced EUV waves could contribute to stellar SF. What about EUV waves generated by CMEs? Can we count on those to occur in flare stars and serve to launch EUV waves to help generate SF? Although flares and CMEs in the Sun both involve release of magnetic energy, they do not always occur together: one can occur without the other, depending on local details of the parent active region.
We note that at least one detection of a stellar CME has been reported from an active K dwarf which is known to be a flare star <cit.>. Let us examine the hypothesis that the two complex flares which we have detected in 2M0335+23 involve SF which are triggered by the equivalent of an EUV wave. In this context, the maximum speed of the wave would be obtained if the two active regions which are involved in the individual flares were located at antipodal points on the surface of 2M0335+23, at a distance π R from each other. Inserting the radius R of 2M0335+23, and inserting time delays of 14.6 and 9.2 minutes, the SF hypothesis leads to v(EUV) < 600 and 950 km s^-1. Such values are well within the range of EUV wave speeds which have been reported in the solar corona. With fast-mode speeds determined mainly by the Alfven speed (which greatly exceeds the thermal speed of order 100-200 km/sec in a 1-2 MK corona), our SF interpretation suggests that Alfven speeds in the corona of 2M0335+23 may not differ greatly from those in the solar corona. In the latter, a map of coronal Alfven speeds reported by <cit.> spans a range from 500 to 900 km/sec at essentially all latitudes within a radial distance of 5 solar radii.

§.§ Complex flares: Does the weakness of the second flare contain physical information?

We note that for the complex flares in Figure <ref>, when the tail of the first light curve is subtracted from the second flare, the amplitude at the peak of the second flare is smaller than the amplitude at the peak of the first flare. We ask: is this “weaker secondary" a common feature of complex flares? To address this, we consider some flare data which were recorded in different settings.

* A large homogeneous sample of X-ray flares which were recorded by Chandra in the Orion Ultradeep Project <cit.>. In their study of the 216 brightest flares from 161 pre-main sequence stars, 8 events were classified as double flares, i.e. they look like two overlapped typical flares. By subtracting off the tail of the first flare in each case, we evaluated the ratio of peak 2 to peak 1, and we found the following values: 0.3, 0.4, 0.2, 0.5, 1.1, 0.2, 0.3, and 0.1. Thus, in 7 out of 8 cases, the sympathetic flare has a smaller amplitude than the original flare.

* Optical data (in the r band) for stars in the intermediate-age cluster M37 (0.55 Gyr) resulted in the detection of several hundred flares from cluster members <cit.>. <cit.> drew attention to the result that their algorithm often detects secondary flares which occur during the decay of a much larger flare. Visual inspection of Fig. 2 in <cit.> and Figs. 17, 18 in <cit.> suggests that as many as 8 or 9 secondary flares can be identified among the plotted light curves of 23 stars: in all cases but one, the flare which occurred later in time was smaller in amplitude than the first flare.

* A small sample (15-20) of light curves in a variety of visible and near-UV wavelengths from a number of solar neighborhood flare stars has been presented by <cit.> (pp. 195-205). Secondary flares can be identified in at least 5 cases, and in all cases, the flare which occurred later in time had a smaller peak intensity.

These examples suggest that, no matter which wavelength range we examine, the flare which occurs later in time (the “secondary") in a “close double" has (in most cases) a smaller amplitude than the flare which occurs earlier. We believe that this is a feature which contains information related to one of the key physical processes involved in sympathetic flaring.
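Before asking why the secondary is weaker, a brief numerical footnote on the antipodal-limit wave speeds quoted in the previous subsection: a minimal Python sketch, taking R = 2.40 Jupiter radii from Section 2 (small differences from the quoted 600 and 950 km s^-1 reflect rounding).

import numpy as np

R_JUP_CM = 7.1492e9                 # Jupiter's equatorial radius in cm
R = 2.40 * R_JUP_CM                 # radius of 2M0335+23 (Section 2)

for dt_min in (14.6, 9.2):          # peak separations of the two complex flares
    v = np.pi * R / (dt_min * 60.0) # antipodal active regions, distance pi*R
    print(dt_min, v / 1e5, "km/s")  # ~615 and ~975 km/s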
How might this “weaker secondary" behavior be understood in the context of the SF explanation proposed above (i.e. SF is triggered by a fast-mode MHD wave)? We suggest that it may be understood in terms of wave refraction. When fast-mode MHD waves propagate through an inhomogeneous medium, the compressive nature of the waves has the effect that the waves are refracted away from regions of high Alfven speed (v_A), and are refracted into regions where v_A is small <cit.>. (Alfven waves, lacking compression, refract differently.) Tests of Uchida's predictions have been provided by observations (e.g., Shen et al. 2013) and modelling of fast-mode waves propagating through a variety of structures in the solar corona. However, those tests were based on indirect inferences of the v_A value in different regions of the Sun. For more reliable tests of Uchida's theory, it is preferable to consider a medium where in situ measurements of field strength and ion density can be made directly: in such a medium, the local value of v_A can be calculated, various wave modes can be distinguished (fast MHD, slow MHD, Alfvenic), and Uchida's predictions can be tested directly. The solar wind is one such medium. In the solar wind at a radial distance of about 1 AU, data from the ACE satellite have been used to demonstrate that fast-mode waves are indeed depleted in high-v_A regions <cit.>, and fast-mode waves are indeed enhanced in regions of low v_A <cit.>. Now let us consider how these results might find an application in SF in low-mass stars. Suppose an initial flare is triggered (somehow) in a certain active region (AR), thereby launching a fast-mode EUV wave with a certain speed: the speed will be related to the v_A value in the AR where the flare was initiated. Suppose there are two other ARs on the surface of the star, AR-A with large v_A, and AR-B with small v_A. What happens when the EUV wave approaches AR-A? The fast-mode wave will be refracted away from AR-A because of the locally large v_A: the wave will be unable to penetrate into AR-A. Therefore, a SF is unlikely to occur in AR-A. But in the case of AR-B, the fast-mode wave will be refracted into the AR, thereby having a chance to perturb the plasma inside AR-B, perhaps enough to initiate a flare. In this scenario, a SF is more likely to occur in an AR with a small value of v_A: or, in the terminology of <cit.>, the “impact" of the wave on AR-B (with its lower v_A) would be larger. What might cause the v_A value in AR-B to be smaller? It could be either a weaker field or a higher density, or both. In cases where the field is weaker, we expect that (other things being equal) a flare in such an AR will have (in general) a smaller total energy. Thus the flare originating in AR-B (where the “impact" of the fast-mode wave is largest) will be “weaker," and is expected to have a smaller peak amplitude. If this is a correct interpretation of SF in stars, then the phenomenon of the “weaker secondary" may provide an observational signature of the physics of refraction of fast-mode MHD waves in the inhomogeneous corona of a flare star.

§ CONCLUSIONS

White light flares on a young (24 Myr) M7-type brown dwarf are similar in most respects to stellar M dwarf flares, including their light curves, power-law flare frequency distribution, and sympathetic flaring. Adding a flare on a Pleiades brown dwarf, we see that these flares extend up to at least 2.6 × 10^34 erg.
However, we observe no white light flares on the L5γ brown dwarf despite its known young age and rapid rotation. Since there is overwhelming observational and theoretical evidence that magnetic fields exist on L- and T-type brown dwarfs, we conclude that the change in flare rates is direct evidence that fast magnetic reconnection is suppressed or forbidden at temperatures ∼1500K. In this work, we have studied brown dwarfs of known age. In our next paper, we will measure the white light flare rates of a sample of field late-M and L dwarfs and investigate their dependence on age, rotation, effective temperature, and other observable properties.

We thank James Davenport and Rachel Osten for discussions of stellar flares, Jonathan Gagné and Jackie Faherty for discussions of moving groups, Mike Liu and Conard Dahn for comments on the preprint, and the anonymous referee and statistical consultant for suggestions. This paper includes data collected by the Kepler mission. Funding for the Kepler mission is provided by the NASA Science Mission directorate. The material is based upon work supported by NASA under award Nos. NNX15AV64G, NNX16AE55G, and NNX16AJ22G. A.J.B. acknowledges funding support from the National Science Foundation under award No. AST-1517177. Some of the data presented in this paper were obtained from the Mikulski Archive for Space Telescopes (MAST). STScI is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555. Support for MAST for non-HST data is provided by the NASA Office of Space Science via grant NNX09AF08G and by other grants and contracts. Some of the data presented herein were obtained at the W.M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California and the National Aeronautics and Space Administration. The Observatory was made possible by the generous financial support of the W.M. Keck Foundation. This research has made use of NASA's Astrophysics Data System, the VizieR catalogue access tool, CDS, Strasbourg, France, and the NASA/IPAC Infrared Science Archive, which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with NASA. We have also made use of the List of M6-M9 Dwarfs (https://jgagneastro.wordpress.com/list-of-m6-m9-dwarfs/) maintained by Jonathan Gagné.

Software: IRAF, AstroPy <cit.>, photutils, emcee <cit.>, PyKE <cit.>, APLpy, powerlaw <cit.>. Facilities: Kepler, Keck:II (NIRSPEC). | http://arxiv.org/abs/1703.08745v2 | {
"authors": [
"John E. Gizis",
"Rishi R. Paudel",
"Dermott Mullan",
"Sarah J. Schmidt",
"Adam J. Burgasser",
"Peter K. G. Williams"
],
"categories": [
"astro-ph.SR"
],
"primary_category": "astro-ph.SR",
"published": "20170325220246",
"title": "K2 Ultracool Dwarfs Survey II: The White Light Flare Rate of Young Brown Dwarfs"
} |
| http://arxiv.org/abs/1703.08895v2 | {
"authors": [
"Sergio Andres Vallejo",
"Antonio Enea Romano"
],
"categories": [
"astro-ph.CO",
"gr-qc"
],
"primary_category": "astro-ph.CO",
"published": "20170327014226",
"title": "Reconstructing the metric of the local Universe from number counts observations"
} |
[email protected]@ihep.ac.cn [email protected] ^1 School of Nuclear Science and Technology, Lanzhou University, Lanzhou 730000, China ^2 Research Center for Hadron and CSR Physics, Lanzhou University and Institute of Modern Physics of CAS, Lanzhou 730000, China ^3 School of Physical Science and Technology, Inner Mongolia University, Hohhot 010021, China ^4 Institute of High Energy Physics, YuQuanLu 19B, Beijing 100049, China ^5 School of Physics, University of Chinese Academy of Sciences, YuQuanLu 19A, Beijing 100049, China ^6 INPAC, Shanghai Key Laboratory for Particle Physics and Cosmology, School of Physics and Astronomy, Shanghai Jiao-Tong University, Shanghai 200240, China

The existence of doubly heavy flavor baryons has not been well established experimentally so far. In this Letter we systematically investigate the weak decays of the doubly charmed baryons, Ξ_cc^++ and Ξ_cc^+, which should be helpful for experimental searches for these particles. The long-distance contributions are studied for the first time in doubly heavy baryon decays, and are found to be significantly enhanced. Comparing all the processes, Ξ_cc^++→Λ_c^+K^-π^+π^+ and Ξ_c^+π^+ are the most favorable decay modes for experiments to search for doubly heavy baryons.

Discovery Potentials of Doubly Charmed Baryons
Zhen-Xing Zhao^6
December 30, 2023
========================================================

§ INTRODUCTION

Plenty of hadrons, including quite a few exotic candidates, have been discovered in experiments during the past few decades. Doubly and triply heavy flavor baryons, however, with two or three heavy (b or c) quarks, are so far still absent from hadron spectroscopy <cit.>. Searches for the doubly and triply heavy baryons will play a key role in completing hadron spectroscopy and shedding light on perturbative and non-perturbative QCD dynamics. The only evidence was reported by the SELEX experiment for Ξ_cc^+ via the process Ξ_cc^+→Λ_c^+K^-π^+ in 2002 <cit.>, followed by Ξ_cc^+→ pD^+K^- <cit.>. However, this has not been confirmed by any other experiments so far. The FOCUS experiment reported no signal right after SELEX's measurements <cit.>. The BaBar <cit.> and Belle <cit.> experiments searched for Ξ_cc^+(+) with the final states of Ξ_c^0π^+(π^+) and Λ_c^+K^-π^+(π^+), and did not find any evidence. The LHCb experiment performed a search using the 0.65 fb^-1 data sample in the discovery channel used by SELEX, Ξ_cc^+→Λ_c^+K^-π^+, but no significant signal was observed <cit.>. Besides, the mass measured by SELEX, m_Ξ_cc^+ = 3518.9±0.9 MeV, is much lower than most theoretical predictions, for instance m_Ξ_cc = 3.55-3.67 GeV predicted by lattice QCD <cit.>. These puzzles can only be solved by experimental measurements with high luminosities. At the Large Hadron Collider (LHC), plenty of heavy quarks have been generated, and thereby abundant doubly heavy hadrons have been produced due to quark-hadron duality, such as B_c^±, which has been studied in great detail by LHCb. The cross sections of the hadronic production of Ξ_cc at the LHC have been calculated in QCD <cit.>, and are of the same order as those of B_c <cit.>. As LHCb has a data sample larger than 3 fb^-1 and is collecting even more data during Run 2, there is now a good opportunity to study Ξ_cc.
One key issue left is to select the decay processes with the largest possibility of observing doubly charmed baryons. In this work, we will systematically study the processes of Ξ_cc^++ and Ξ_cc^+ decays to find those with the largest branching fractions, which should be helpful for experimental searches for the doubly charmed baryons. The lowest-lying heavy particles can only decay weakly. We will analyze the color-allowed tree-operator dominated decay modes using the factorization ansatz. For other decay channels which are suppressed in the factorization scheme, the non-factorizable contributions might be significant and behave as long-distance contributions. With a direct calculation in the rescattering mechanism, we will demonstrate that long-distance contributions are significantly enhanced for some decay modes with high experimental efficiencies. At the end, we will point out that instead of searching for the Ξ_cc using the SELEX discovery channels Ξ_cc^+→Λ_c^+K^-π^+ and pD^+K^-, one should measure Ξ_cc^++→Λ_c^+K^-π^+π^+ and Ξ_c^+π^+ with the highest priority. The branching fractions depend on the lifetimes of Ξ_cc^++ and Ξ_cc^+, which, however, are predicted to be quite different in the literature. Predictions for the lifetime of Ξ_cc^+ vary from 53 fs to 250 fs, while those for Ξ_cc^++ range from 185 fs to 670 fs <cit.>, except for τ_Ξ_cc^++ = 1550 fs in Ref. <cit.>, which is too large compared to the lifetimes of singly charmed baryons. Despite the large ambiguity in the absolute lifetimes, it is expected that τ(Ξ_cc^++)≫τ(Ξ_cc^+), due to the effect of the destructive Pauli interference in the former. The ratio between their lifetimes is then ℛ_τ ≡ τ_Ξ_cc^+/τ_Ξ_cc^++ = 0.25∼0.37, with small uncertainty in all these calculations <cit.>. The branching fractions of Ξ_cc^++ decays should be relatively larger, compared to those of Ξ_cc^+, due to its longer lifetime. Besides, particles with longer lifetimes can be better identified with high efficiency at the detectors. Thus, we recommend that experimentalists search for Ξ_cc^++ before Ξ_cc^+.

§ FORM FACTORS

In the study of the exclusive modes of heavy hadron decays, the transition form factors are required in the calculations. The hadronic matrix elements of Ξ_cc decaying into the anti-triplet and sextet singly charmed baryons, i.e., ℬ_c = Ξ_c, Ξ'_c, Λ_c and Σ_c, are expressed in terms of the form factors as ⟨ℬ_c(p_f)|J^w_μ|Ξ_cc(p_i)⟩ = u̅_f(p_f)[γ_μ f_1(q^2) + iσ_μν(q^ν/m_i) f_2(q^2) + (q_μ/m_i) f_3(q^2)]u_i(p_i) - u̅_f(p_f)[γ_μ g_1(q^2) + iσ_μν(q^ν/m_i) g_2(q^2) + (q_μ/m_i) g_3(q^2)]γ_5 u_i(p_i), where the initial and final baryons are all 1/2^+ states, J^w_μ is the weak current in the relevant decays, and q = p_i-p_f. In this work, the form factors are calculated in the light-front quark model (LFQM). The LFQM is a relativistic quark model under the light-front approach, and has been successfully used to study the form factors of heavy meson and heavy baryon decays <cit.>. We adopt the diquark picture for the two spectator quarks <cit.>. The diquark state with a charm quark and a light quark can be either a scalar (J^P = 0^+) or an axial-vector state (1^+). Considering the wave functions of the relevant baryons, the hadronic matrix elements of Ξ_cc decaying into the anti-triplet and sextet singly charmed baryons are linear combinations of the transitions with the scalar and the axial-vector diquarks: ⟨ℬ_c(3̅)|J^w_μ|Ξ_cc⟩ = (√(6)/4)⟨ J^w_μ⟩_0^+ + (√(6)/4)⟨ J^w_μ⟩_1^+, ⟨ℬ_c(6)|J^w_μ|Ξ_cc⟩ = -(3√(2)/4)⟨ J^w_μ⟩_0^+ + (√(2)/4)⟨ J^w_μ⟩_1^+.
In the case of two identical quarks in the final state, for instance Σ_c^0, an overall factor of √2 has to be included. The details of the calculation of the form factors in the LFQM can be found in Ref. <cit.>. The results are given in Table <ref>, with the q^2 dependence F(q^2)=F(0)/(1+α q^2/m_fit^2+δ q^4/m_fit^4), where α=+1 for g_2(q^2) with 1^+ diquarks, as marked with stars in Table <ref>, and α=-1 for all the other form factors. Under the flavor SU(3) symmetry, the form factors are related to each other between Ξ_cc^++ and Ξ_cc^+ decays, and between c→s and c→d transitions, as seen in Table <ref>. The uncertainties of the form factors then mostly cancel in the relative branching fractions between decay channels.

§ SHORT-DISTANCE CONTRIBUTION DOMINATED PROCESSES With the form factors obtained above, we proceed to study the non-leptonic decays of Ξ_cc. The short-distance contributions in the external and internal W-emission amplitudes of two-body non-leptonic modes are calculated in the factorization approach, which is justified in the heavy quark limit. The amplitudes of Ξ_cc decaying into a singly charmed baryon and a light meson (M) can be expressed as the product of hadronic matrix elements 𝒜(Ξ_cc→ℬ_cM)_SD = λ⟨M(q)|J^μ|0⟩⟨ℬ_c(p_f)|J^w_μ|Ξ_cc(p_i)⟩, where λ=(G_F/√2) V_CKM a_1,2(μ), V_CKM denotes the product of the corresponding Cabibbo-Kobayashi-Maskawa matrix elements, and a_1(μ)=C_1(μ)+C_2(μ)/3 for the external W-emission amplitudes and a_2(μ)=C_2(μ)+C_1(μ)/3 for the internal W-emission ones, with C_1(μ)=1.21 and C_2(μ)=-0.42 at the scale μ=m_c <cit.>. In this work M denotes a pseudoscalar meson (P) or a vector meson (V), with the decay-constant matrix elements ⟨P(q)|J^μ|0⟩ = i f_P q^μ, ⟨V(q)|J^μ|0⟩ = f_V m_V ϵ^μ*.

In this work, we show the results for a few golden channels with the highest probability of being observed in experiments. More discussions on various processes can be found in Refs. <cit.>. According to Eq. (<ref>), the relative branching fractions of the other processes compared to that of Ξ_cc^++→Ξ_c^+π^+ are ℬ(Ξ_cc^+→Ξ_c^0π^+)/ℬ(Ξ_cc^++→Ξ_c^+π^+)=ℛ_τ=0.25∼0.37, ℬ(Ξ_cc^++→Λ_c^+π^+)/ℬ(Ξ_cc^++→Ξ_c^+π^+)=0.056, ℬ(Ξ_cc^++→Ξ_c^+ℓ^+ν)/ℬ(Ξ_cc^++→Ξ_c^+π^+)=0.71. The above relations are essentially unambiguous, since the uncertainties from the transition form factors mostly cancel under the flavor SU(3) symmetry. It is obvious that the branching fraction of Ξ_cc^++→Ξ_c^+π^+ is the largest, compared to that of Ξ_cc^++→Λ_c^+π^+, which is a Cabibbo-suppressed mode, and that of Ξ_cc^+→Ξ_c^0π^+, which is reduced by the expected smaller lifetime of Ξ_cc^+. The semi-leptonic mode Ξ_cc^++→Ξ_c^+ℓ^+ν suffers from a low detection efficiency because of the missing energy of the neutrino. Similarly, some other processes with possibly larger branching fractions may lose many events at hadron colliders due to neutral particles, such as in Ξ_c^+ρ^+(→π^+π^0) and Ξ_c^'+(→Ξ_c^+γ)π^+. The final state Ξ_c^+a_1^+(→π^+π^+π^-) has two more tracks, reducing the detection efficiency. Besides, the longer lifetimes of Ξ_cc^++ and Ξ_c^+, compared to those of Ξ_cc^+ and Ξ_c^0, respectively, lead to higher particle-identification efficiencies in experiments. Thus the Ξ_cc^++→Ξ_c^+π^+ process is the best of the external W-emission processes for searching for doubly charmed baryons. The absolute branching fraction of Ξ_cc^++→Ξ_c^+π^+ is calculated to be ℬ(Ξ_cc^++→Ξ_c^+π^+)=(τ_Ξ_cc^++/300 fs)×7.2%.
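Since the Ξ_cc^++ lifetime has not been measured, it is useful to keep the linear lifetime scaling of this result explicit. A minimal numerical sketch, with lifetime values spanning the range of predictions quoted in the Introduction:

```python
# BF(Xi_cc++ -> Xi_c+ pi+) scales linearly with the (unmeasured) Xi_cc++
# lifetime: BF = (tau / 300 fs) * 7.2%.
def bf_xic_pi(tau_fs: float) -> float:
    return (tau_fs / 300.0) * 0.072

for tau_fs in (185.0, 300.0, 670.0):  # span of lifetime predictions, in fs
    print(f"tau = {tau_fs:5.0f} fs  ->  BF = {bf_xic_pi(tau_fs):.1%}")
```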
This result is normalized to a Ξ_cc^++ lifetime of 300 fs, which lies within the range of the predictions. Even allowing for the uncertainties of the transition form factors and the lifetime, the branching fraction of this process is of the order of a percent, which is large enough for measurements. To measure Ξ_cc^++→Ξ_c^+π^+, Ξ_c^+ can be reconstructed using the mode Ξ_c^+→pK^-π^+ at hadron colliders, with all charged particles in the final state. The absolute branching fraction of this process has never been directly measured, but the relative branching ratio was measured to be ℬ(Ξ_c^+→pK^*0)/ℬ(Ξ_c^+→pK^-π^+)=0.54±0.10 <cit.>. Besides, the relation 𝒜(Ξ_c^+→pK^*0)=𝒜(Λ_c^+→Σ^+K^*0) holds under U-spin symmetry. With the measurement ℬ(Λ_c^+→Σ^+K^*0)=(0.36±0.10)% <cit.>, the branching fraction is ℬ(Ξ_c^+→pK^-π^+)=(2.2±0.8)%. The relatively large branching fraction of this Cabibbo-suppressed mode is induced by the larger phase space of Ξ_c^+→pK^*0 and the longer lifetime of Ξ_c^+. The main uncertainty in Eq. (<ref>) arises from the branching fraction of Λ_c^+→Σ^+K^*0 and the ratio between Ξ_c^+→pK^*0 and Ξ_c^+→pK^-π^+, which may be measured by BESIII, Belle II and LHCb with higher precision. Considering the relatively large value of ℬ(Ξ_c^+→pK^-π^+) within the 1σ range, we suggest measuring the process Ξ_cc^++→Ξ_c^+π^+ with Ξ_c^+ reconstructed in the final state pK^-π^+.

§ LONG-DISTANCE CONTRIBUTION DOMINATED PROCESSES In the factorization approach, only the factorizable contributions are taken into account. For the color-allowed tree-operator dominated channels, the non-factorizable contributions are expected to be small. For the color-suppressed processes with a tiny Wilson coefficient a_2, however, the decay widths are likely to be underestimated in the factorization framework. For instance, the branching fractions of the internal W-emission decays Ξ_cc^++→Σ_c^++(2455)K^*0 and Ξ_cc^+→Λ_c^+K^*0 are predicted to be of the order of 10^-5, due to a_2(μ)≈-0.02. However, the long-distance contributions are usually significantly enhanced in charmed meson decays, and can be described well by the rescattering mechanism of final-state-interaction effects <cit.>. The rescattering mechanism in heavy-flavor-baryon decays was previously considered only in Ref. <cit.>, to study the Cabibbo-suppressed decays Λ_c^+→pπ^0 and nπ^+; its predictions have not yet been directly tested, but are consistent with the upper limit recently measured by BESIII <cit.>. In doubly heavy flavor baryon decays, the long-distance contributions have never been considered before. In this work we first calculate the rescattering effects in two-body non-leptonic Ξ_cc decays for the internal W-emission and W-exchange amplitudes, and then identify additional processes with large branching fractions. The absorptive part of the amplitudes is obtained by the optical theorem <cit.>, summing over all possible amplitudes of Ξ_cc(p_i) decaying into intermediate states {p_k}, followed by the rescattering of {p_k} into the final state ℬ_c(p_f)M(q): Abs ℳ(p_i→p_f q)=(1/2)∑_j(∏_k=1^j∫ d^3p_k/((2π)^3 2E_k))(2π)^4 δ^4(p_f+q-∑_k=1^j p_k) ℳ(p_i→{p_k}) T^*(p_f q→{p_k}). One typical rescattering diagram is given in Fig. <ref>, taking as an example the t-channel triangle diagram of Ξ_cc^++→Ξ_c^(')+ρ^+→Σ_c^++K^*0 via quark exchange. The rescattering amplitudes are calculated using the effective Lagrangians <cit.>.
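For the dominant two-body intermediate states, the sum in the absorptive-part formula above collapses to a single term, and the phase-space integral reduces to a standard angular integral. Assuming the conventional relativistic normalization, the reduction reads Abs ℳ(p_i→p_f q) = (|p⃗_1|/(32π^2 m_Ξ_cc)) ∫dΩ ℳ(p_i→p_1p_2) T^*(p_fq→p_1p_2), where |p⃗_1| is the momentum of either intermediate particle in the Ξ_cc rest frame. This two-body form is the natural starting point for evaluating triangle diagrams such as the one in Fig. <ref>.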
The hadronic strong coupling constants are related to each other by the flavor SU(3) symmetry and the chiral and heavy quark symmetries <cit.>, with the values taken from Refs. <cit.>. The effective Lagrangians and the strong coupling constants are given in the Appendix. Most of the uncertainties then cancel in the relative branching ratios. The results for the rescattering amplitudes depend on the form factor F(t,m), which describes the off-shell effect of the exchanged particle. It is parametrized as F(t,m)=(Λ^2-m^2)/(Λ^2-t) <cit.>, with the cutoff Λ=m+ηΛ_QCD, m and t being the mass and the momentum squared of the exchanged particle, respectively, and Λ_QCD taken as 330 MeV. The free parameter η cannot be calculated from first principles. In this work, we take η varying in the range from 1.0 to 2.0, as found in Ref. <cit.>. The dependence of F(t,m) on η is plotted in Fig. <ref>. The relative branching fractions of some processes dominated by the long-distance contributions, compared to that of Ξ_cc^++→Σ_c^++(2455)K^*0, are shown in Table <ref>. These relative branching fractions are theoretically less ambiguous, since the uncertainties from the effective hadronic strong coupling constants and from the transition form factors largely cancel due to the flavor SU(3) symmetry and the chiral and heavy quark symmetries, as discussed before. The absolute branching fractions depend heavily on the value of the parameter η, as seen in the top plot of Fig. <ref>, which shows ℬ(Ξ_cc^++→Σ_c^++(2455)K^∗0) and ℬ(Ξ_cc^++→pD^∗+) as functions of η. In the bottom plot of Fig. <ref>, we show the ratio ℬ(Ξ_cc^++→pD^∗+)/ℬ(Ξ_cc^++→Σ_c^++K^∗0) as a function of η. The ratio of branching fractions is insensitive to η. Therefore, the theoretical uncertainties are under control for the relative branching fractions. From Table <ref>, it is obvious that Ξ_cc^++→Σ_c^++(2455)K^*0 has the largest branching fraction, which is useful for experimental measurements.

In the process Ξ_cc^++→Σ_c^++(2455)K^*0, the dominant rescattering amplitude is Ξ_cc^++→Ξ_c^(')+ρ^+→Σ_c^++K^*0 with the exchange of K^*±, depicted in Fig. <ref>. Including some other triangle diagrams, namely the t-channel rescattering of Ξ_c^(')+π^+ with the exchange of K^±, and the u-channel rescattering of Ξ_c^(')+π^+ and Ξ_c^(')+ρ^+ with the exchange of Λ_c^+ or Σ_c^+, the absolute branching fraction is ℬ(Ξ_cc^++→Σ_c^++(2455)K^*0)=(τ_Ξ_cc^++/300 fs)×(3.8∼24.6)%, where the range corresponds to η varying between 1.0 and 2.0. Compared to the short-distance result for Ξ_cc^++→Σ_c^++(2455)K^*0 of 𝒪(10^-5), the long-distance contributions in the doubly charmed baryon decays are significantly enhanced. The branching fractions of Ξ_cc^++→Ξ_c^+ρ^+ and Ξ_c^'+ρ^+ are 12.6% and 17.4%, respectively, which are large enough to lead to a result for Ξ_cc^++→Σ_c^++(2455)K^*0 of the order of a percent. In charmed meson decays, the large-N_c approach gives a good description of the internal W-emission contributions, which amounts to a_2^eff(μ_c)≈C_2(μ_c)∼-0.5 <cit.>. With this value, the branching fraction of this process is 4.6%, which lies in the range of Eq. (<ref>). So the results obtained with the rescattering mechanism for the long-distance contributions are trustworthy.

Ξ_cc^++→Σ_c^++(2455)K^*0 is actually a four-body process with the strong decays Σ_c^++→Λ_c^+π^+ and K^*0→K^-π^+. In charmed meson decays, the resonant contributions almost saturate the decay widths <cit.>.
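Returning to the off-shell form factor introduced above, which drives the η spread in the absolute branching fractions, its behavior is easy to reproduce numerically. A minimal sketch, taking the exchanged K^* as an example together with an illustrative spacelike momentum transfer:

```python
# Off-shell form factor of the exchanged particle in the rescattering
# amplitudes: F(t, m) = (Lambda^2 - m^2) / (Lambda^2 - t),
# with Lambda = m + eta * Lambda_QCD and Lambda_QCD = 0.33 GeV.
LAMBDA_QCD = 0.33  # GeV

def off_shell_ff(t: float, m: float, eta: float) -> float:
    lam = m + eta * LAMBDA_QCD
    return (lam**2 - m**2) / (lam**2 - t)

m_kstar = 0.892  # GeV, mass of the exchanged K*
t = -0.5         # GeV^2, an illustrative spacelike momentum transfer
for eta in (1.0, 1.5, 2.0):
    print(f"eta = {eta:.1f}  ->  F(t, m_K*) = {off_shell_ff(t, m_kstar, eta):.3f}")
```

The monotonic growth of F(t,m) with η is what propagates into the strong η dependence of the absolute branching fractions, while ratios of amplitudes sharing the same form factors remain largely insensitive to it.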
The final-state particles are not very energetic in charm decays, and hence easily fall within the momentum range of resonances. Thus the resonant contributions capture the key physics. For the four-body process Ξ_cc^++→Λ_c^+K^-π^+π^+, there are many low-lying-resonance contributions, such as Σ_c^++(2455) and Σ_c^++(2520) for Λ_c^+π^+, and K^*0 and (Kπ)_S-wave for K^-π^+. Recalling that ℬ(Ξ_cc^++→Σ_c^++(2455)K^*0) is as large as shown in Eq. (<ref>), the branching fraction of Ξ_cc^++→Λ_c^+K^-π^+π^+ is expected to be a few percent, or even to reach 𝒪(10%). With Λ_c^+ reconstructed via pK^-π^+, the process Ξ_cc^++→Λ_c^+K^-π^+π^+ can be used to search for the doubly charmed baryon.

Apart from the above four-body decay mode, the branching fraction of Ξ_cc^+→Λ_c^+K^-π^+ should be considerable, receiving contributions from Λ_c^+K^*0 and Σ_c^++(2455)K^-. This is exactly the process used by the SELEX experiment to report the first, as yet unconfirmed, evidence for Ξ_cc^+ <cit.>. However, no significant signal was observed by the LHCb experiment using this channel <cit.>. We find that the process Ξ_cc^++→Λ_c^+K^-π^+π^+ is better suited than Ξ_cc^+→Λ_c^+K^-π^+ for the searches for doubly charmed baryons, for the following reasons. For the dominant resonant contributions in these two processes, the branching fraction of Ξ_cc^++→Σ_c^++(2455)K^*0 is larger than that of Ξ_cc^+→Λ_c^+K^*0 by a factor of about five, due to the predicted value of ℛ_τ∼0.3 with a small uncertainty, as seen in Eq. (<ref>). As explained before, the efficiency of identifying Ξ_cc^++ is larger than that of Ξ_cc^+ at LHCb by a factor of roughly the lifetime ratio, within the range of their predicted lifetimes. Even though Ξ_cc^++→Λ_c^+K^-π^+π^+ suffers from a somewhat lower detection efficiency, having one more track than Ξ_cc^+→Λ_c^+K^-π^+, more signal events can still be expected in the former process. For the other discovery channel of the SELEX experiment, Ξ_cc^+→pD^+K^-, there are no low-lying-resonance contributions. In the mode Ξ_cc^+→ΛD^+, whose branching ratio is small, the Λ state is below the pK^- threshold, while higher excited resonances would be more difficult to produce. Therefore, the process Ξ_cc^++→Λ_c^+K^-π^+π^+ is the best of the long-distance contribution dominated processes for the searches for doubly charmed baryons.

In addition to the study of the ground states of doubly charmed baryons, the suggested processes should be useful for searching for excited states below the charm-meson-charm-baryon thresholds. Such particles decay strongly or radiatively into the ground states, which would be reconstructed in experiments via the most favorable modes found in this work. Besides, the long-distance contributions, which we have found to be large and important in Ξ_cc decays, should also be considered in studies of the discovery channels of other heavy particles, such as bottom-charm baryons and stable open-flavor tetraquarks and pentaquarks.

§ SUMMARY We have systematically studied the weak decays of Ξ_cc^++ and Ξ_cc^+ and recommend the processes Ξ_cc^++→Λ_c^+K^-π^+π^+ and Ξ_cc^++→Ξ_c^+π^+ as the most favorable decay modes for searches for doubly charmed baryons in experiments. The channels Ξ_cc^+→Λ_c^+K^-π^+ and pD^+K^- used by the SELEX and LHCb experiments are not as good as the above two Ξ_cc^++ decay processes. The short-distance contributions to the decay amplitudes are calculated in the factorization approach.
The long-distance contributions are studied for the first time in doubly charmed baryon decays, using the rescattering mechanism. It is found that the long-distance contributions are significantly enhanced and are essential for the favorable mode Ξ_cc^++→Λ_c^+K^-π^+π^+. Our suggestions are based on the analysis of the relative branching fractions between decay modes, which is less ambiguous since the theoretical uncertainties are mainly cancelled by the flavor symmetries. The absolute branching fractions of Ξ_cc^++→Λ_c^+K^-π^+π^+ and Ξ_c^+π^+ are estimated to be a few percent, or even to reach the order of 10%, which is large enough for experimental measurements.

Note added: very recently, the LHCb collaboration reported the discovery of Ξ_cc^++ in the final state Λ_c^+K^-π^+π^+ <cit.>.

We are grateful to Ji-Bo He for enlightening discussions which initiated this project, and to Hai-Yang Cheng and Xiang Liu for careful proofreading.

Appendix: The effective Lagrangians used in the rescattering mechanism are <cit.>:
ℒ_eff = ℒ_π hh+ℒ_ρ hh+ℒ_πℬℬ+ℒ_ρℬℬ+ℒ_ρππ+ℒ_ρρρ+ℒ_ρ DD+ℒ_π D^∗D+ℒ_ρ D^∗D^∗,
ℒ_π hh = g_πℬ_6ℬ_6 Tr[ℬ̅_6 iγ_5 Π ℬ_6]+g_πℬ_3̅ℬ_3̅ Tr[ℬ̅_3̅ iγ_5 Π ℬ_3̅]+{g_πℬ_6ℬ_3̅ Tr[ℬ̅_6 iγ_5 Π ℬ_3̅]+h.c.},
ℒ_ρ hh = f_1ρℬ_6ℬ_6 Tr[ℬ̅_6γ_μV^μℬ_6]+(f_2ρℬ_6ℬ_6/2m_6) Tr[ℬ̅_6σ_μν∂^μV^νℬ_6]+f_1ρℬ_3̅ℬ_3̅ Tr[ℬ̅_3̅γ_μV^μℬ_3̅]+(f_2ρℬ_3̅ℬ_3̅/2m_3̅) Tr[ℬ̅_3̅σ_μν∂^μV^νℬ_3̅]+{f_1ρℬ_6ℬ_3̅ Tr[ℬ̅_6γ_μV^μℬ_3̅]+(f_2ρℬ_6ℬ_3̅/(m_6+m_3̅)) Tr[ℬ̅_6σ_μν∂^μV^νℬ_3̅]+h.c.},
ℒ_πℬℬ = g_πℬℬ Tr[ℬ̅ iγ_5 Π ℬ],
ℒ_ρℬℬ = f_1ρℬℬ Tr[ℬ̅γ_μV^μℬ]+(f_2ρℬℬ/2m_ℬ) Tr[ℬ̅σ_μν∂^μV^νℬ],
ℒ_ρππ = (ig_ρππ/√2) Tr[V^μ[Π,∂_μΠ]],
ℒ_ρρρ = (ig_ρρρ/√2) Tr[(∂_νV_μ-∂_μV_ν)V^μV^ν] = (ig_ρρρ/√2) Tr[(∂_νV_μV^μ-V^μ∂_νV_μ)V^ν],
ℒ_ρ DD = -ig_ρ DD(D_i∂_μD^j†-∂_μD_iD^j†)(V^μ)^i_j,
ℒ_π D^∗D = -g_π D^∗D(D^i∂^μΠ_ijD_μ^∗ j†+D_μ^∗ i∂^μΠ_ijD^j†),
ℒ_ρ D^∗D^∗ = ig_ρ D^∗D^∗(D_i^∗ν∂_μD_ν^∗ j†-∂_μD_i^∗νD_ν^∗ j†)(V^μ)^i_j+4if_ρ D^∗D^∗D_iμ^∗†(∂^μV^ν-∂^νV^μ)^i_jD_ν^∗ j,
where Π, V, ℬ_6, ℬ_3̅ and ℬ denote the matrices
Π = ( [ π^0/√2+η/√6, π^+, K^+; π^-, -π^0/√2+η/√6, K^0; K^-, K̅^0, -√(2/3)η ]),
V = ( [ ρ^0/√2+ω/√2, ρ^+, K^∗+; ρ^-, -ρ^0/√2+ω/√2, K^∗0; K^∗-, K̅^∗0, ϕ ]),
ℬ_6 = ( [ Σ_c^++, Σ_c^+/√2, Ξ_c^'+/√2; Σ_c^+/√2, Σ_c^0, Ξ_c^'0/√2; Ξ_c^'+/√2, Ξ_c^'0/√2, Ω_c ]),
ℬ_3̅ = ( [ 0, Λ_c^+, Ξ_c^+; -Λ_c^+, 0, Ξ_c^0; -Ξ_c^+, -Ξ_c^0, 0 ]),
ℬ = ( [ Σ^0/√2+Λ/√6, Σ^+, p; Σ^-, -Σ^0/√2+Λ/√6, n; Ξ^-, Ξ^0, -2Λ/√6 ]).
Following the generalized form of the baryon-meson couplings in Eq. (<ref>), we extend them to the vertices ℬ_cℬD and ℬ_cℬD^∗, writing the Lagrangians as
ℒ_Λ_cND_q = g_Λ_cND_q(Λ̅_c iγ_5 D_q N+h.c.),
ℒ_Λ_cND_q^∗ = f_1Λ_cND_q^∗(Λ̅_cγ_μD_q^∗μN+h.c.)+(f_2Λ_cND_q^∗/(m_Λ_c+m_N))(Λ̅_cσ_μν∂^μD_q^∗νN+h.c.),
ℒ_Σ_cND_q = g_Σ_cND_q(Σ̅_c iγ_5 D_q N+h.c.),
ℒ_Σ_cND_q^∗ = f_1Σ_cND_q^∗(Σ̅_cγ_μD_q^∗μN+h.c.)+(f_2Σ_cND_q^∗/(m_Σ_c+m_N))(Σ̅_cσ_μν∂^μD_q^∗νN+h.c.),
where N denotes the baryons belonging to the octet baryon matrix ℬ. The strong coupling constants are taken from the literature <cit.>, and are listed in Tables <ref>, <ref> and <ref>.

Klempt:2009pi E. Klempt and J. M. Richard, Rev. Mod. Phys. 82, 1095 (2010) doi:10.1103/RevModPhys.82.1095 [arXiv:0901.2055 [hep-ph]].
Crede:2013sze V. Crede and W. Roberts, Rept. Prog. Phys. 76, 076301 (2013) doi:10.1088/0034-4885/76/7/076301 [arXiv:1302.7299 [nucl-ex]].
Cheng:2015iom H. Y. Cheng, Front. Phys. (Beijing) 10, no. 6, 101406 (2015) doi:10.1007/s11467-015-0483-z.
Chen:2016spr H. X. Chen, W. Chen, X. Liu, Y. R. Liu and S. L. Zhu, Rept. Prog. Phys. 80, no. 7, 076201 (2017) doi:10.1088/1361-6633/aa6420 [arXiv:1609.08928 [hep-ph]].
Mattson:2002vu M. Mattson et al. [SELEX Collaboration], Phys. Rev.
Lett. 89, 112001 (2002) doi:10.1103/PhysRevLett.89.112001 [hep-ex/0208014].
Ocherashvili:2004hi A. Ocherashvili et al. [SELEX Collaboration], Phys. Lett. B 628, 18 (2005) doi:10.1016/j.physletb.2005.09.043 [hep-ex/0406033].
Ratti:2003ez S. P. Ratti, Nucl. Phys. Proc. Suppl. 115, 33 (2003) doi:10.1016/S0920-5632(02)01948-5.
Aubert:2006qw B. Aubert et al. [BaBar Collaboration], Phys. Rev. D 74, 011103 (2006) doi:10.1103/PhysRevD.74.011103 [hep-ex/0605075].
Chistov:2006zj R. Chistov et al. [Belle Collaboration], Phys. Rev. Lett. 97, 162001 (2006) doi:10.1103/PhysRevLett.97.162001 [hep-ex/0606051].
Kato:2013ynr Y. Kato et al. [Belle Collaboration], Phys. Rev. D 89, no. 5, 052003 (2014) doi:10.1103/PhysRevD.89.052003 [arXiv:1312.1026 [hep-ex]].
Aaij:2013voa R. Aaij et al. [LHCb Collaboration], JHEP 1312, 090 (2013) doi:10.1007/JHEP12(2013)090 [arXiv:1310.2538 [hep-ex]].
Lewis:2001iz R. Lewis, N. Mathur and R. M. Woloshyn, Phys. Rev. D 64, 094509 (2001) doi:10.1103/PhysRevD.64.094509 [hep-ph/0107037].
Flynn:2003vz J. M. Flynn et al. [UKQCD Collaboration], JHEP 0307, 066 (2003) doi:10.1088/1126-6708/2003/07/066 [hep-lat/0307025].
Liu:2009jc L. Liu, H. W. Lin, K. Orginos and A. Walker-Loud, Phys. Rev. D 81, 094505 (2010) doi:10.1103/PhysRevD.81.094505 [arXiv:0909.3294 [hep-lat]].
Alexandrou:2012xk C. Alexandrou, J. Carbonell, D. Christaras, V. Drach, M. Gravina and M. Papinutto, Phys. Rev. D 86, 114501 (2012) doi:10.1103/PhysRevD.86.114501 [arXiv:1205.6856 [hep-lat]].
Briceno:2012wt R. A. Briceno, H. W. Lin and D. R. Bolton, Phys. Rev. D 86, 094504 (2012) doi:10.1103/PhysRevD.86.094504 [arXiv:1207.3536 [hep-lat]].
Alexandrou:2014sha C. Alexandrou, V. Drach, K. Jansen, C. Kallidonis and G. Koutsou, Phys. Rev. D 90, no. 7, 074501 (2014) doi:10.1103/PhysRevD.90.074501 [arXiv:1406.4310 [hep-lat]].
Zhang:2011hi J. W. Zhang, X. G. Wu, T. Zhong, Y. Yu and Z. Y. Fang, Phys. Rev. D 83, 034026 (2011) doi:10.1103/PhysRevD.83.034026 [arXiv:1101.1130 [hep-ph]].
Chang:2005bf C. H. Chang, C. F. Qiao, J. X. Wang and X. G. Wu, Phys. Rev. D 71, 074012 (2005) doi:10.1103/PhysRevD.71.074012 [hep-ph/0502155].
Karliner:2014gca M. Karliner and J. L. Rosner, Phys. Rev. D 90, no. 9, 094007 (2014) doi:10.1103/PhysRevD.90.094007 [arXiv:1408.5877 [hep-ph]].
Kiselev:2001fw V. V. Kiselev and A. K. Likhoded, Phys. Usp. 45, 455 (2002) [Usp. Fiz. Nauk 172, 497 (2002)] doi:10.1070/PU2002v045n05ABEH000958 [hep-ph/0103169].
Chang:2007xa C. H. Chang, T. Li, X. Q. Li and Y. M. Wang, Commun. Theor. Phys. 49, 993 (2008) doi:10.1088/0253-6102/49/4/38 [arXiv:0704.0016 [hep-ph]].
Onishchenko:2000yp A. I. Onishchenko, hep-ph/0006295.
Guberina:1999mx B. Guberina, B. Melic and H. Stefancic, Eur. Phys. J. C 9, 213 (1999) [Eur. Phys. J. C 13, 551 (2000)] doi:10.1007/s100529900039, 10.1007/s100520050525 [hep-ph/9901323].
Ke:2007tg H. W. Ke, X. Q. Li and Z. T. Wei, Phys. Rev. D 77, 014020 (2008) doi:10.1103/PhysRevD.77.014020 [arXiv:0710.1927 [hep-ph]].
Ke:2012wa H. W. Ke, X. H. Yuan, X. Q. Li, Z. T. Wei and Y. X. Zhang, Phys. Rev. D 86, 114005 (2012) doi:10.1103/PhysRevD.86.114005 [arXiv:1207.3477 [hep-ph]].
Cheng:1996if H. Y. Cheng, C. Y. Cheung and C. W. Hwang, Phys. Rev. D 55, 1559 (1997) doi:10.1103/PhysRevD.55.1559 [hep-ph/9607332].
Li:2017ndo R. H. Li, C. D. Lü, W. Wang, F. S. Yu and Z. T. Zou, Phys. Lett. B 767, 232 (2017) doi:10.1016/j.physletb.2017.02.003 [arXiv:1701.03284 [hep-ph]].
1707.02834 W. Wang, F. S. Yu and Z. X. Zhao, Eur. Phys. J. C 77, no. 11, 781 (2017) doi:10.1140/epjc/s10052-017-5360-1 [arXiv:1707.02834 [hep-ph]].
Wang:2017azm W. Wang, Z. P. Xing and J. Xu, Eur. Phys. J.
C 77, no. 11, 800 (2017) doi:10.1140/epjc/s10052-017-5363-y [arXiv:1707.06570 [hep-ph]].
Shi:2017dto Y. J. Shi, W. Wang, Y. Xing and J. Xu, Eur. Phys. J. C 78, no. 1, 56 (2018) doi:10.1140/epjc/s10052-018-5532-7 [arXiv:1712.03830 [hep-ph]].
Li:2012cfa H. n. Li, C. D. Lu and F. S. Yu, Phys. Rev. D 86, 036012 (2012) doi:10.1103/PhysRevD.86.036012 [arXiv:1203.3120 [hep-ph]].
Link:2001rn J. M. Link et al. [FOCUS Collaboration], Phys. Lett. B 512, 277 (2001) doi:10.1016/S0370-2693(01)00590-1 [hep-ex/0102040].
Link:2002zx J. M. Link et al. [FOCUS Collaboration], Phys. Lett. B 540, 25 (2002) doi:10.1016/S0370-2693(02)02103-2 [hep-ex/0206013].
Ablikim:2002ep M. Ablikim, D. S. Du and M. Z. Yang, Phys. Lett. B 536, 34 (2002) doi:10.1016/S0370-2693(02)01812-9 [hep-ph/0201168].
Li:2002pj J. W. Li, M. Z. Yang and D. S. Du, HEPNP 27, 665 (2003) [hep-ph/0206154].
Fajfer:2003ag S. Fajfer, A. Prapotnik, P. Singer and J. Zupan, Phys. Rev. D 68, 094012 (2003) doi:10.1103/PhysRevD.68.094012 [hep-ph/0308100].
Li:1997vu X. Q. Li and B. S. Zou, Phys. Rev. D 57, 1518 (1998) doi:10.1103/PhysRevD.57.1518 [hep-ph/9709508].
Chen:2002jr S. L. Chen, X. H. Guo, X. Q. Li and G. L. Wang, Commun. Theor. Phys. 40, 563 (2003) doi:10.1088/0253-6102/40/5/563 [hep-ph/0208006].
Ablikim:2017ors M. Ablikim et al. [BESIII Collaboration], Phys. Rev. D 95, no. 11, 111102 (2017) doi:10.1103/PhysRevD.95.111102 [arXiv:1702.05279 [hep-ex]].
Cheng:2004ru H. Y. Cheng, C. K. Chua and A. Soni, Phys. Rev. D 71, 014030 (2005) doi:10.1103/PhysRevD.71.014030 [hep-ph/0409317].
Yan:1992gz T. M. Yan, H. Y. Cheng, C. Y. Cheung, G. L. Lin, Y. C. Lin and H. L. Yu, Phys. Rev. D 46, 1148 (1992) Erratum: [Phys. Rev. D 55, 5851 (1997)] doi:10.1103/PhysRevD.46.1148, 10.1103/PhysRevD.55.5851.
Casalbuoni:1996pg R. Casalbuoni, A. Deandrea, N. Di Bartolomeo, R. Gatto, F. Feruglio and G. Nardulli, Phys. Rept. 281, 145 (1997) doi:10.1016/S0370-1573(96)00027-0 [hep-ph/9605342].
Meissner:1987ge U. G. Meissner, Phys. Rept. 161, 213 (1988) doi:10.1016/0370-1573(88)90090-7.
Li:2012bt N. Li and S. L. Zhu, Phys. Rev. D 86, 014020 (2012) doi:10.1103/PhysRevD.86.014020 [arXiv:1204.3364 [hep-ph]].
Aliev:2010yx T. M. Aliev, K. Azizi and M. Savci, Phys. Lett. B 696, 220 (2011) doi:10.1016/j.physletb.2010.12.027 [arXiv:1009.3658 [hep-ph]].
Aliev:2010nh T. M. Aliev, K. Azizi and M. Savci, Nucl. Phys. A 852, 141 (2011) doi:10.1016/j.nuclphysa.2011.01.011 [arXiv:1011.0086 [hep-ph]].
Khodjamirian:2011jp A. Khodjamirian, C. Klein, T. Mannel and Y.-M. Wang, JHEP 1109, 106 (2011) doi:10.1007/JHEP09(2011)106 [arXiv:1108.2971 [hep-ph]].
Azizi:2014bua K. Azizi, Y. Sarac and H. Sundu, Phys. Rev. D 90, no. 11, 114011 (2014) doi:10.1103/PhysRevD.90.114011 [arXiv:1410.7548 [hep-ph]].
Yu:2016pyo G. L. Yu, Z. G. Wang and Z. Y. Li, Chin. Phys. C 41, no. 8, 083104 (2017) doi:10.1088/1674-1137/41/8/083104 [arXiv:1608.03460 [hep-ph]].
Azizi:2015tya K. Azizi, Y. Sarac and H. Sundu, Nucl. Phys. A 943, 159 (2015) doi:10.1016/j.nuclphysa.2015.09.005 [arXiv:1501.05084 [hep-ph]].
Ballon-Bayona:2017bwk A. Ballon-Bayona, G. Krein and C. Miller, Phys. Rev. D 96, no. 1, 014017 (2017) doi:10.1103/PhysRevD.96.014017 [arXiv:1702.08417 [hep-ph]].
Cheng:2010ry H. Y. Cheng and C. W. Chiang, Phys. Rev. D 81, 074021 (2010) doi:10.1103/PhysRevD.81.074021 [arXiv:1001.0987 [hep-ph]].
Cheng:2010rv H. Y. Cheng and C. W. Chiang, Phys. Rev. D 81, 114020 (2010) doi:10.1103/PhysRevD.81.114020 [arXiv:1005.1106 [hep-ph]].
Aaij:2017ueg R. Aaij et al. [LHCb Collaboration], Phys. Rev. Lett. 119, no.
11, 112001 (2017) doi:10.1103/PhysRevLett.119.112001 [arXiv:1707.01621 [hep-ex]].
"authors": [
"Fu-Sheng Yu",
"Hua-Yu Jiang",
"Run-Hui Li",
"Cai-Dian Lü",
"Wei Wang",
"Zhen-Xing Zhao"
],
"categories": [
"hep-ph"
],
"primary_category": "hep-ph",
"published": "20170327140246",
"title": "Discovery Potentials of Doubly Charmed Baryons"
} |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.